name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_jdhos7n
I would rather use your api-example.py because the code it relies on is very polished IMO. But (see notes earlier in this thread) the api doesn't seem to run as well as the webui? I was seeing longer times and getting warnings. This doesn't happen when I use the webui but I need it script acc...
1
0
2023-03-24T13:57:03
gransee
false
null
0
jdhos7n
false
/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/jdhos7n/
false
1
t1_jdhk91t
You're running out of RAM if it says killed, so you need to increase the swap space. The [documentation here](https://learn.microsoft.com/en-us/windows/wsl/wsl-config#example-wslconfig-file) by Microsoft provides more info on doing that.
1
0
2023-03-24T13:23:59
Technical_Leather949
false
null
0
jdhk91t
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdhk91t/
false
1
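For context on the swap suggestion above, a minimal `.wslconfig` sketch along the lines of the linked Microsoft example (placed at `%UserProfile%\.wslconfig`; the sizes below are illustrative assumptions, not recommendations):
```
# Illustrative .wslconfig for WSL 2 -- adjust sizes to your machine and workload
[wsl2]
memory=16GB   # RAM made available to the WSL 2 VM
swap=32GB     # swap file size; raising this helps when model loading gets "killed"
```
WSL needs to be restarted (`wsl --shutdown`) before changes to this file take effect.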
t1_jdhig75
>Emad Mostaque received his master's degree in mathematics and [computer science](https://en.wikipedia.org/wiki/Computer_science) from [Oxford University](https://en.wikipedia.org/wiki/Oxford_University) in 2005. He then went on to spend 13 years working at various [**hedge funds**](https://en.wikipedia.org/wiki/Hed...
3
0
2023-03-24T13:09:59
Vivarevo
false
null
0
jdhig75
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdhig75/
false
3
t1_jdhg9fq
Having the same problem, any advice?
1
0
2023-03-24T12:52:31
shemademedoit1
false
null
0
jdhg9fq
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdhg9fq/
false
1
t1_jdhg4jg
I'm not sure 4Bit is necessarily faster. Definitely better for memory usage, but on GPU, I think not all cards can get more speed with 4bit vs 8bit. It's still one multiplication. I think some modern GPUs have Int8 hardware though.
1
0
2023-03-24T12:51:25
Reddactor
false
null
0
jdhg4jg
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdhg4jg/
false
1
t1_jdhdg77
*Logically* the smallest model (4bit 7B) will give you the fastest speeds. Although I've not tested that. Otherwise you'll need to keep an eye out for new developments... to be honest it's been just a few days since the models were *errrr released*. It takes time to develop, test and publish.
5
0
2023-03-24T12:28:45
Pan000
false
null
0
jdhdg77
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdhdg77/
false
5
t1_jdhdf4c
> As in two hours to complete a response.
Are you running the 7B model or a larger one? The 7B model is as fast if not faster than ChatGPT Plus on my Ryzen 7950x
3
0
2023-03-24T12:28:29
Independent-Ant-4678
false
null
0
jdhdf4c
false
/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdhdf4c/
false
3
t1_jdhda58
I'm having issues loading 30b on wsl, it prints killed. 13b runs and works. I also have it installed on windows 10 natively and it loads 30b fine too. idk why it doesn't work with wsl. I have a gtx3090 with 24GB VRAM so it should load, maybe it's not using my gpu idk.
1
0
2023-03-24T12:27:16
doomperial
false
null
0
jdhda58
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdhda58/
false
1
t1_jdh9ovu
The GPU will be *way* faster. For reference, I've run GPT-J 6B locally (not quite LLaMa but probably similar enough to the 7B model to get an idea of performance), and it generates about 110 tokens/80 words per *minute* on a 12700K. That equates to ~5 minutes for a 400-word response (ie roughly the same as ChatGPT's ...
9
0
2023-03-24T11:54:27
AI-Pon3
false
null
0
jdh9ovu
false
/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdh9ovu/
false
9
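A quick sanity check of the arithmetic in the comment above, using only the figures it quotes (~80 words per minute on CPU):
```
# Back-of-the-envelope check of the CPU throughput figures quoted above
words_per_minute = 80
response_words = 400
print(response_words / words_per_minute, "minutes")  # -> 5.0 minutes for a 400-word response
```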
t1_jdh61ul
One advantage of CPU is that 64, 128, or 256 GB of RAM is far cheaper than equivalent amounts of video RAM. For the smaller models that might not be the primary deciding factor. I found llama.cpp to be painfully slow on a Ryzen 3. As in two hours to complete a response.
8
0
2023-03-24T11:17:25
nizus1
false
null
0
jdh61ul
false
/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdh61ul/
false
8
t1_jdh5thd
Have you considered contacting Emad Mostaque to see if he can put you in touch with a collaborator?
3
0
2023-03-24T11:14:51
nizus1
false
null
0
jdh5thd
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdh5thd/
false
3
t1_jdh3p1o
inference. It's what I'm playing with.
1
0
2023-03-24T10:50:59
wind_dude
false
null
0
jdh3p1o
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jdh3p1o/
false
1
t1_jdh153k
You should look into langchain.
1
0
2023-03-24T10:19:36
Disastrous_Elk_6375
false
null
0
jdh153k
false
/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/jdh153k/
false
1
t1_jdh129o
I read that 30B 4-bit takes 20GB VRAM; not sure if that is for fine-tuning or inference.
1
0
2023-03-24T10:18:33
CrashTimeV
false
null
0
jdh129o
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jdh129o/
false
1
t1_jdgyvv6
More heat if it's cold where you are.
5
0
2023-03-24T09:49:39
ambient_temp_xeno
false
null
0
jdgyvv6
false
/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdgyvv6/
false
5
t1_jdgw7up
Thank you :)
1
0
2023-03-24T09:11:27
raysar
false
null
0
jdgw7up
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jdgw7up/
false
1
t1_jdgu4fi
Ok, so alpaca.cpp is a fork of the llama.cpp codebase. It is basically the same as llama.cpp except that alpaca.cpp has it hard coded to go straight into interactive mode. I'm getting the speed from llama.cpp in non interactive mode where you pass the prompt in on the command line and it responds, shows the speed and e...
3
0
2023-03-24T08:40:30
MoneyPowerNexis
false
null
0
jdgu4fi
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdgu4fi/
false
3
t1_jdgtfs1
Thanks, I’m going to have to get a new GPU then. I also find the 7B to be pretty confused most of times and forgetting prompts very fast
1
0
2023-03-24T08:30:22
Necessary_Ad_9800
false
null
0
jdgtfs1
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdgtfs1/
false
1
t1_jdgpaj6
got it, thanks. I figure a dual boot is easy enough to warrant it if the performance is so much better. thanks!
1
0
2023-03-24T07:30:09
ehbrah
false
null
0
jdgpaj6
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdgpaj6/
false
1
t1_jdglqd2
[Please see comment above.](https://www.reddit.com/r/LocalLLaMA/comments/11zcqj2/comment/jdglown/)
1
0
2023-03-24T06:40:31
friedrichvonschiller
false
null
0
jdglqd2
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdglqd2/
false
1
t1_jdglown
I think I can make a copypasta guide. It looks rather doable even by the likes of me. I'll try when the GPU arrives so I can verify that it actually works and post some numbers. It'll be several days, I'm afraid. I think it's actually not going to be horrible but I'm guessing here because I'd have to install python...
2
0
2023-03-24T06:40:00
friedrichvonschiller
false
2023-03-24T08:25:27
0
jdglown
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdglown/
false
2
t1_jdg9wrf
Now you are where I was about a year ago with my scripts :) The updated version is [https://github.com/oobabooga/text-generation-webui/](https://github.com/oobabooga/text-generation-webui/) The core text generation stuff is all here: [https://github.com/oobabooga/text-generation-webui/blob/main/modules/text_generati...
6
0
2023-03-24T04:21:33
oobabooga1
false
null
0
jdg9wrf
false
/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/jdg9wrf/
false
6
t1_jdg9jlg
Thank you for your kind reply. This helped! I expanded to a working script. 13b takes about 70 seconds and 7b takes about 31 seconds on an RTX 4090.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
print("Using GPU" if torch.cuda.is_available() else "Using CPU"...
2
0
2023-03-24T04:18:04
gransee
false
2023-03-24T04:22:07
0
jdg9jlg
false
/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/jdg9jlg/
false
2
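The script in the comment above is cut off by the preview; a minimal sketch of what such a timing script could look like, reconstructed only from the visible imports (the model path, prompt, and generation settings below are placeholders and assumptions, not the commenter's actual values):
```
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

print("Using GPU" if torch.cuda.is_available() else "Using CPU")

model_path = "decapoda-research/llama-7b-hf"  # placeholder: any local LLaMA checkpoint in HF format
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "This phrase is about "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.time()
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print(f"Generation took {time.time() - start:.1f} seconds")
```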
t1_jdg7b8r
It seems that if I run the cmd terminal in the foreground, it can run faster.
1
0
2023-03-24T03:56:56
nillouise
false
null
0
jdg7b8r
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdg7b8r/
false
1
t1_jdg3h3t
Thanks for your explanation, but I can't find the tokens/second indicator in this software. I just say hello and get the response ("What can I help you with") in one minute.
1
0
2023-03-24T03:23:10
nillouise
false
null
0
jdg3h3t
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdg3h3t/
false
1
t1_jdg0bsz
Thanks. I wish that had been clearer :) I'll try it with alpaca-lora next!
1
0
2023-03-24T02:56:35
_wsgeorge
false
null
0
jdg0bsz
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdg0bsz/
false
1
t1_jdfx7au
Let me try building it myself now that two folks have asked. My AMD GPU doesn't arrive until Sunday and I'm switching back over to Ubuntu tomorrow -- I need a bloody flash drive that's big enough for a boot drive so I can go pure ext4 -- but I may as well have a dry run. Please hold while I fail spectacularly in fron...
1
0
2023-03-24T02:31:31
friedrichvonschiller
false
null
0
jdfx7au
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdfx7au/
false
1
t1_jdfwtrk
this one is a bit confusing. no idea how to get this one up and running. are there any guides out there yet?
2
0
2023-03-24T02:28:32
Honato2
false
null
0
jdfwtrk
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdfwtrk/
false
2
t1_jdfv6su
cool, thank you for the info!
2
0
2023-03-24T02:15:51
msgs
false
null
0
jdfv6su
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdfv6su/
false
2
t1_jdfubvv
This is colossally important! I'm absolutely thrilled that the researchers are reaching out directly to the best in the open source community! Such a collaboration would yield incredible fruit for humanity and further the democratization of access to AI technology. This can only benefit people and AI alike. I would...
13
0
2023-03-24T02:09:12
friedrichvonschiller
false
null
0
jdfubvv
false
/r/LocalLLaMA/comments/12047hn/better_qptqquantized_llama_soon_paper_authors/jdfubvv/
false
13
t1_jdfu946
Frick yeah!!
6
0
2023-03-24T02:08:37
Inevitable-Start-653
false
null
0
jdfu946
false
/r/LocalLLaMA/comments/12047hn/better_qptqquantized_llama_soon_paper_authors/jdfu946/
false
6
t1_jdfrbjg
Is it possible for them to share it?
1
0
2023-03-24T01:46:22
divine-ape-swine
false
null
0
jdfrbjg
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdfrbjg/
false
1
t1_jdfr0xq
[removed]
1
0
2023-03-24T01:44:09
[deleted]
true
null
0
jdfr0xq
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdfr0xq/
false
1
t1_jdfqzyz
I can confirm I am having the exact same error and issues with ozcur/alpaca-native-4bit
1
0
2023-03-24T01:43:58
SomeGuyInDeutschland
false
null
0
jdfqzyz
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdfqzyz/
false
1
t1_jdfnr8w
Yes. You can try using the chat mode feature in kobold or simply type out the request in a question/answer format.
2
0
2023-03-24T01:19:54
HadesThrowaway
false
null
0
jdfnr8w
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdfnr8w/
false
2
t1_jdfnnt2
Those weights appear to be in huggingface format. You'll need to convert them to ggml format or download the ggml ones.
1
0
2023-03-24T01:19:12
HadesThrowaway
false
null
0
jdfnnt2
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdfnnt2/
false
1
t1_jdfn5nz
Yes it is. That is a Windows binary. For OSX you will have to build it from source; I know someone who has gotten it to work.
2
0
2023-03-24T01:15:28
HadesThrowaway
false
null
0
jdfn5nz
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdfn5nz/
false
2
t1_jdfeckp
It could likely use it; whether it's an improvement probably depends on the drivers and OS more than anything else. This [page is suggestive of a yes](https://www.amd.com/en/support/kb/release-notes/rn-rad-win-23-3-2), but I really have no clue. I'm sorry.
2
0
2023-03-24T00:12:22
friedrichvonschiller
false
2023-03-24T02:05:22
0
jdfeckp
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdfeckp/
false
2
t1_jdfcyuu
TORCH-MLIR Version `https://github.com/nod-ai/torch-mlir.git` Then check out the `complex` branch and `git submodule update --init` and then build with `.\build_tools\python_deploy\build_windows.ps1`
Can you please break down what these instructions are saying? I'm not able to follow. Thanks for linking otherwise...
3
0
2023-03-24T00:02:35
Clasp
false
null
0
jdfcyuu
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdfcyuu/
false
3
t1_jdf9tbj
I have a Ryzen 9 3900X which [should perform worse](https://cpu.userbenchmark.com/Compare/Intel-Core-i7-12700-vs-AMD-Ryzen-9-3900X/m1750830vs4044) than an i7-12700. I get 148.97 ms per token (~6.7 tokens/s) running ggml-alpaca-7b-q4.bin. It wrote out 260 tokens in ~39 seconds, 41 seconds including load time altho...
2
0
2023-03-23T23:40:16
MoneyPowerNexis
false
null
0
jdf9tbj
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdf9tbj/
false
2
t1_jdf8mx3
I had the same error when I ran web-ui through start-webui.bat, then I tried to run with the same parameters through anaconda/miniconda and everything worked; I hope it helps you too. Also, before running web-ui, don't forget to type `conda activate textgen`.
1
0
2023-03-23T23:32:05
xsafo
false
null
0
jdf8mx3
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdf8mx3/
false
1
t1_jdf713m
All you need is
```
model = AutoModelForCausalLM.from_pretrained(folder)
tokenizer = AutoTokenizer.from_pretrained(folder)
input_ids = tokenizer.encode("This phrase is about ")
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids))
```
6
0
2023-03-23T23:20:57
oobabooga1
false
null
0
jdf713m
false
/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/jdf713m/
false
6
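As written, the snippet above won't quite run: `tokenizer.encode` returns a plain list, `generate` expects tensors, and `decode` needs a single sequence. A hedged, self-contained version (the model folder is a placeholder):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

folder = "path/to/llama-7b-hf"  # placeholder: any local checkpoint in Hugging Face format

model = AutoModelForCausalLM.from_pretrained(folder)
tokenizer = AutoTokenizer.from_pretrained(folder)

# return PyTorch tensors so generate() accepts the input directly
input_ids = tokenizer.encode("This phrase is about ", return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=50)

# generate() returns a batch of sequences; decode the first one
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```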
t1_jdf5des
Same data but different file name. It's in my OP.
1
0
2023-03-23T23:09:23
msgs
false
null
0
jdf5des
false
/r/LocalLLaMA/comments/1200mle/new_torrent_for_alpacas_30b_4bit_weights_189_gb/jdf5des/
false
1
t1_jdf574s
Would this use a Vega series GPU? Thanks!
1
0
2023-03-23T23:08:11
msgs
false
null
0
jdf574s
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdf574s/
false
1
t1_jdf50y3
Edit: the repo owner is working on updating the tutorial. I think it's best for people interested in that project to have one consistent source for downloading. Everything is already on Hugging Face, so why not just link there? It's faster and more transparent. But thanks for sharing for the community. [https://huggingfac...
1
0
2023-03-23T23:07:00
Civil_Collection7267
false
2023-03-23T23:27:30
0
jdf50y3
false
/r/LocalLLaMA/comments/1200mle/new_torrent_for_alpacas_30b_4bit_weights_189_gb/jdf50y3/
true
1
t1_jdf46ua
can it be made to work in an instruct/command format with alpaca?
2
0
2023-03-23T23:01:10
SDGenius
false
null
0
jdf46ua
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdf46ua/
false
2
t1_jdf3mew
Only if you plan on using this a lot, then yes, I'd highly recommend using Linux instead. I use Ubuntu.
>Do you have any experience with it on a M1 by chance?
No experience with M1. I know llama.cpp was designed to run on that but I recommend staying with text-generation-webui.
2
0
2023-03-23T22:57:13
Technical_Leather949
false
null
0
jdf3mew
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdf3mew/
false
2
t1_jdf31l8
[deleted]
1
0
2023-03-23T22:53:11
[deleted]
true
null
0
jdf31l8
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdf31l8/
false
1
t1_jdf1uc6
With the GPTQ implementation, effectively none. You can see the benchmarks [here on the GitHub](https://github.com/qwopqwop200/GPTQ-for-LLaMa) page.
2
0
2023-03-23T22:44:46
Technical_Leather949
false
null
0
jdf1uc6
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdf1uc6/
false
2
t1_jdf1mx2
The slow part is the prompt processing; generation speed is actually faster than what you could get normally with 6GB VRAM.
1
0
2023-03-23T22:43:21
gelukuMLG
false
null
0
jdf1mx2
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdf1mx2/
false
1
t1_jdf12ul
Ah, I see [u/Tree-Sheep](https://www.reddit.com/user/Tree-Sheep/) posted a similar question here [https://www.reddit.com/r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/?utm_source=share&utm_medium=web2x&context=3](https://www.reddit.com/r/LocalLLaMA/comments/11s5f39/integrate_llama_into_p...
2
0
2023-03-23T22:39:26
gransee
false
2023-03-23T22:57:47
0
jdf12ul
false
/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/jdf12ul/
false
2
t1_jdf0t5l
Wow! So I should dual boot to Linux then, I guess? Any distro? Ubuntu? Do you have any experience with it on an M1 by chance? My hw options are 2080 Ti or M1 Max
1
0
2023-03-23T22:37:36
ehbrah
false
null
0
jdf0t5l
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdf0t5l/
false
1
t1_jdf0rp7
Yes, the 7B model isn't that great as a starting point unless you're using Alpaca-LoRA with it. I've tried many different prompts and testing and strong coherency starts at 13B. 30B is excellent for many use cases, but 13B should be enough for most people.
1
0
2023-03-23T22:37:20
Technical_Leather949
false
null
0
jdf0rp7
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdf0rp7/
false
1
t1_jdezu7s
Oh wow. I had no idea how bad their dataset was. That is absolutely terrible.
3
0
2023-03-23T22:30:47
clayshoaf
false
null
0
jdezu7s
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdezu7s/
false
3
t1_jdezrcj
No problem, by the way I accidentally only used two "#" in the last example.
2
0
2023-03-23T22:30:14
KerfuffleV2
false
null
0
jdezrcj
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jdezrcj/
false
2
t1_jdezhbn
Oops thank you
1
0
2023-03-23T22:28:19
fakezeta
false
null
0
jdezhbn
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jdezhbn/
false
1
t1_jdexuoi
Thanks for sharing! But the GitHub page says this doesn't support LLaMA. I don't think this is the right sub to share in, see rules #1 and #2:
>Posts must be directly related to LLaMA
>
>This is an open community that highly encourages collaborative resource sharing, but *the sub is not here as merely a source...
1
0
2023-03-23T22:17:09
Civil_Collection7267
false
null
0
jdexuoi
false
/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/jdexuoi/
true
1
t1_jdet33k
eBay. I learned to never buy from sellers that don't take returns a year ago. But that's boring, so I asked Drama LLaMA to answer. `He said he wanted to make sure his graphics card would make LLaMA fly, but it turned out to be more like a rocket launch!` Not bad. I tried a few more times. `When his friends asked ...
3
0
2023-03-23T21:45:07
friedrichvonschiller
false
null
0
jdet33k
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdet33k/
false
3
t1_jdeqdtn
Busted fans!? How did that happen?
2
0
2023-03-23T21:27:38
Gohan472
false
null
0
jdeqdtn
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdeqdtn/
false
2
t1_jdeppwe
They mean you'd have to run the exact same test prompt multiple times to figure out whether the answers remain consistent.
1
0
2023-03-23T21:23:31
iJeff
false
null
0
jdeppwe
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jdeppwe/
false
1
t1_jdepfs6
After buying two used 3090s with busted fans and coil whine, I was ready to try something crazy. Amazon's selling 24GB Radeon RX 7900 XTXs for $999 right now with free returns. I passionately agree with [the PyTorch Foundation that we need a little more democratization](https://pytorch.org/blog/democratizing-ai-with-...
5
0
2023-03-23T21:21:43
friedrichvonschiller
false
null
0
jdepfs6
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdepfs6/
false
5
t1_jdeoxg5
I don't run an AMD GPU anymore, but am very glad to see this option for folks that do!
3
0
2023-03-23T21:18:27
iJeff
false
null
0
jdeoxg5
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdeoxg5/
false
3
t1_jdelxdg
So I downloaded the weights and it's in 41 different files such as pytorch_model-00001-of-00041.bin. How do I run it?
1
0
2023-03-23T20:59:17
Snohoe1
false
null
0
jdelxdg
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdelxdg/
false
1
t1_jdeghur
What's the difference between the 8bit and 4bit?
1
0
2023-03-23T20:24:53
pxan
false
null
0
jdeghur
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdeghur/
false
1
t1_jdeb0ot
I keep getting an error on L34 on macOS (M1). Is it trying to load llamacpp.dll?
2
0
2023-03-23T19:50:05
_wsgeorge
false
null
0
jdeb0ot
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdeb0ot/
false
2
t1_jde91f6
Thank you...
1
0
2023-03-23T19:37:28
petasisg
false
null
0
jde91f6
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jde91f6/
false
1
t1_jde7ous
[removed]
1
0
2023-03-23T19:28:35
[deleted]
true
null
0
jde7ous
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jde7ous/
false
1
t1_jde7nql
[deleted]
7
0
2023-03-23T19:28:24
[deleted]
true
null
0
jde7nql
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jde7nql/
false
7
t1_jde6uj1
>I would also like to see llama.cpp benchmarked against the Hugging Face llama
We already know ggml (aka llama.cpp) llama is not delivering the same accuracy, at least not with RtN 4bit. GPTQ 4bit ggml with variable zero offset and binning gets extremely close for 13B (and presumably even closer for the larger mod...
1
0
2023-03-23T19:23:15
markschmidty
false
null
0
jde6uj1
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jde6uj1/
false
1
t1_jde6781
RLHF (and copying RLHF results like Alpaca does) biases LLMs towards factual answers and away from abstract reasoning ability. All Germans do not actually speak Italian or ride bicycles. So the factual answer is none of the above. RLHF reduces performance at just about everything except for reliably responding in a q...
2
0
2023-03-23T19:19:12
markschmidty
false
null
0
jde6781
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jde6781/
false
2
t1_jde2zan
what Tesla with 32GB of VRAM is under $300? Tesla M10 says 32GB, but has 4 GPUs with 8GB each
1
0
2023-03-23T18:58:57
ilikepie1974
false
null
0
jde2zan
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jde2zan/
false
1
t1_jddp59d
Well it's practically zero install, considering it's a 1mb zip with 3 files and requires only stock python.
3
0
2023-03-23T17:32:01
HadesThrowaway
false
null
0
jddp59d
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jddp59d/
false
3
t1_jddojq9
The backend tensor library is almost the same so it should not take any longer than the basic llama.cpp. Unfortunately there is a flaw in the llama.cpp implementation that causes prompt ingestion to be slower the larger the context is. I cannot fix it myself - please raise awareness to it here: https://github.com/g...
3
0
2023-03-23T17:28:19
HadesThrowaway
false
2023-03-23T17:32:42
0
jddojq9
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jddojq9/
false
3
t1_jddi5st
Is there any difference in response quality between 7B, 13B and 30B?
1
0
2023-03-23T16:48:24
Necessary_Ad_9800
false
null
0
jddi5st
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jddi5st/
false
1
t1_jdd8vmj
Thank you, I just want to know if it is normal that it runs so slow, or did I miss some settings?
1
0
2023-03-23T15:49:37
nillouise
false
null
0
jdd8vmj
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdd8vmj/
false
1
t1_jdd87xu
llama.cpp is for running inference on CPU. If you want to run it on GPU you need https://github.com/oobabooga/text-generation-webui which is a completely different thing.
5
0
2023-03-23T15:45:27
blueSGL
false
null
0
jdd87xu
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdd87xu/
false
5
t1_jdd4yx8
If you are using [text-generation-webui](https://github.com/oobabooga/text-generation-webui), you can use the published loras with these commands (you need to have decapoda-research/llama-7b-hf installed and working):
```
$ python download-model.py tloen/alpaca-lora-7b
$ python server.py --load-in-8bit --model llama-7...
```
1
0
2023-03-23T15:24:44
stringy_pants
false
null
0
jdd4yx8
false
/r/LocalLLaMA/comments/11znjyq/alpacalora_lowrank_llama_instructtuning/jdd4yx8/
false
1
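Outside of text-generation-webui, the same published LoRA can also be applied directly with the `peft` library; a minimal sketch assuming the base model and LoRA named in the comment above (not the webui's own loading code):
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "decapoda-research/llama-7b-hf"  # base model named above
lora = "tloen/alpaca-lora-7b"           # published LoRA named above

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)

# wrap the base model with the LoRA adapter weights
model = PeftModel.from_pretrained(model, lora)
```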
t1_jdd43y6
Do you mean that gpt 3 helped alpaca in its training? (pls explain, I'm still a noob)
1
0
2023-03-23T15:19:11
Puzzleheaded_Acadia1
false
null
0
jdd43y6
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdd43y6/
false
1
t1_jdd3pj9
[deleted]
1
0
2023-03-23T15:16:32
[deleted]
true
null
0
jdd3pj9
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdd3pj9/
false
1
t1_jdcuy9p
I run it with ggml-alpaca-7b-q4.bin successfully, but it is very slow (one minute per response), eats all my CPU and doesn't use my GPU. Is this the expected behaviour? My computer has a 12700, 32GB RAM, and a 2060.
5
0
2023-03-23T14:17:58
nillouise
false
null
0
jdcuy9p
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdcuy9p/
false
5
t1_jdcj9ak
Oh, my mistake, I didn't look closely and thought you were talking about llama. Sorry about that.
2
0
2023-03-23T12:50:48
KerfuffleV2
false
null
0
jdcj9ak
false
/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/jdcj9ak/
false
2
t1_jdcf8qy
When you do, please share what it's like. I think it's cool that this was put together, but I'm hesitant to try installing another implementation when I don't know how well it will work.
2
0
2023-03-23T12:16:05
qrayons
false
null
0
jdcf8qy
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdcf8qy/
false
2
t1_jdcd6ed
Don't really have a way to compare, perhaps it'll be better to wait for them to add the LLaMa they promised. A bit of a newb to this still :)
2
0
2023-03-23T11:56:29
ChobPT
false
null
0
jdcd6ed
false
/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/jdcd6ed/
false
2
t1_jdc74r0
How does the inference speed compare to GGML based approaches like llama.cpp? Repo link: https://github.com/ggerganov/llama.cpp
4
0
2023-03-23T10:52:46
KerfuffleV2
false
null
0
jdc74r0
false
/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/jdc74r0/
false
4
t1_jdc6ruh
Just did a pull request to add some interactivity and control to it, seems pretty fast running on a 4870k with no GPU
3
0
2023-03-23T10:48:36
ChobPT
false
null
0
jdc6ruh
false
/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/jdc6ruh/
false
3
t1_jdc4qdi
[removed]
1
0
2023-03-23T10:23:12
[deleted]
true
null
0
jdc4qdi
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdc4qdi/
false
1
t1_jdc4pve
Getting the exact same error as you bro. I think this alpaca model is not quantized properly. Feel free to correct me if i'm wrong guys. Would be great if someone could get this working, I'm on a 1060 6gb too lol.
2
0
2023-03-23T10:23:01
lolxdmainkaisemaanlu
false
null
0
jdc4pve
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdc4pve/
false
2
t1_jdc1v00
Neat! That's really useful information, thanks!
1
0
2023-03-23T09:43:43
_ouromoros
false
null
0
jdc1v00
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdc1v00/
false
1
t1_jdc1cbp
with the code you can see it's formatted like this:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{data_point["instruction"]}

### Response:
{data_point["output"]}
```
so for a book I could take something like {...
2
0
2023-03-23T09:36:10
Sixhaunt
false
null
0
jdc1cbp
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdc1cbp/
false
2
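A small helper that fills one record into the template quoted above (a sketch; `data_point` is assumed to have the same `instruction`/`output` keys as in the comment):
```
def build_prompt(data_point: dict) -> str:
    """Fill one record into the Alpaca-style template quoted above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{data_point['instruction']}\n\n"
        f"### Response:\n{data_point['output']}"
    )

# toy example (hypothetical record, for illustration only)
print(build_prompt({"instruction": "Summarize this passage.", "output": "..."}))
```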
t1_jdc18qd
This sounds like a great step towards user friendliness. Can't wait to try it!
7
0
2023-03-23T09:34:43
impetu0usness
false
null
0
jdc18qd
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdc18qd/
false
7
t1_jdbzyhn
Could you share an example? I'm aware how alpaca structures the input, but I don't know how you would convert a book into instruction/input/output triples.
1
0
2023-03-23T09:15:34
_ouromoros
false
null
0
jdbzyhn
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdbzyhn/
false
1
t1_jdbzp3f
I just looked at how the formatting was done for the alpaca dataset within the google colab: [https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO) Then I mimicked it for processing the book data into prompts.
1
0
2023-03-23T09:11:36
Sixhaunt
false
null
0
jdbzp3f
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdbzp3f/
false
1
t1_jdbzdfw
That's really cool! This might be a dumb question, but how do you convert unstructured text database like books into alpaca format? I did some search on the web but failed to get any general or specific answer to that.
1
0
2023-03-23T09:06:45
_ouromoros
false
null
0
jdbzdfw
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdbzdfw/
false
1
t1_jdbyz1y
All the parameters are the same: temperature, top_k, top_p, repeat_last_n and repeat_penalty. Just the seed is different. I'm running more tests and this is only an example. I'm comparing the result of tests done for primary school between Alpaca 7B (lora and native) and 13B (lora) model, running both on llama.cp...
1
0
2023-03-23T09:00:44
fakezeta
false
null
0
jdbyz1y
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jdbyz1y/
false
1
t1_jdbyj5g
Those programs just evaluate the model. The training for models and how they react makes a huge difference, so you can't really assume stuff like that. There are two correct instruction formats for Alpaca and if you don't use them you won't get the best results. See https://github.com/tatsu-lab/stanford_alpaca#data-re...
4
0
2023-03-23T08:54:03
KerfuffleV2
false
2023-03-23T22:30:35
0
jdbyj5g
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jdbyj5g/
false
4
t1_jdbxsl0
I understood that alpaca.cpp is a fork of llama.cpp so I expected similar behaviour, and launched llama.cpp in instruction mode, to be used with Alpaca models. I expected some difference in the result but not completely different: one right and one wrong
1
0
2023-03-23T08:42:40
fakezeta
false
null
0
jdbxsl0
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jdbxsl0/
false
1
t1_jdbstba
[removed]
1
0
2023-03-23T07:27:56
[deleted]
true
null
0
jdbstba
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdbstba/
false
1
t1_jdbsst4
I took the ElderScrolls dataset with all the books from all the games and converted them into the alpaca dataset format for the sake of training an alpaca lora that understands the ES universe: [https://huggingface.co/Xanthius/alpaca7B-ES-lora/blob/main/README.md](https://huggingface.co/Xanthius/alpaca7B-ES-lora/blob/m...
2
0
2023-03-23T07:27:44
Sixhaunt
false
null
0
jdbsst4
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdbsst4/
false
2
t1_jdbr605
There are many holes and problems with the original Alpaca Dataset released by Stanford. Here is a cleaned and curated Alpaca Dataset that fixes most of the problems: https://github.com/gururise/AlpacaDataCleaned
2
0
2023-03-23T07:04:15
yahma
false
null
0
jdbr605
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdbr605/
false
2
t1_jdbqw31
How fast is this on a high end consumer processor (ie Ryzen 7950x) vs the same model running on pytorch with a nvidia 4090?
4
0
2023-03-23T07:00:17
yahma
false
null
0
jdbqw31
false
/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/jdbqw31/
false
4