name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_jdmntkd
Well that did it, running on latest commit a1f12d60 now and no more error!
2
0
2023-03-25T15:14:40
stonegdi
false
null
0
jdmntkd
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmntkd/
false
2
t1_jdmndj4
Awesome! Can you tell me about your setup? I have tried alpaca.cpp with 7B, 13B, and 30B models so far in the terminal, which was possible because I ran it on CPU. I couldn't get text-generation-webui working with the CPU version of PyTorch, and the AMD version just ran out of VRAM (maybe because I tried the 30B one). And wh...
5
0
2023-03-25T15:11:28
PixelForgDev
false
2023-03-25T15:35:07
0
jdmndj4
false
/r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdmndj4/
false
5
t1_jdmnalk
No CUDA error anymore, but when I try your cmd line it outputs FileNotFoundError: [Errno 2] No such file or directory: 'models\\llama-7b-hf\\pytorch_model-00001-of-00033.bin'. I also tried `python server.py --model llama-7b-hf --gptq-bits 4 --no-stream --cai-chat --lora llama-hh-lora-7B` but it...
1
0
2023-03-25T15:10:53
Famberlight
false
null
0
jdmnalk
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmnalk/
false
1
t1_jdmmknm
Sure, I'm running an R9 5950X + RTX 3090. I tried the 7B and 13B in 8-bit mode, then tried 7B, 13B, and 30B in 4-bit mode (no LoRA) and they all error out the same (tried --gpu-memory and same thing). They all work with all the other character cards. I'm running from commit 29bd41d, so maybe I'll try to pull in the latest ...
2
0
2023-03-25T15:05:43
stonegdi
false
null
0
jdmmknm
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmmknm/
false
2
t1_jdml8oo
Same CUDA error. I'll try to repeat the CUDA steps from that guide.
1
0
2023-03-25T14:55:58
Famberlight
false
null
0
jdml8oo
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdml8oo/
false
1
t1_jdmki1g
First time I see this one, works perfectly for me. Here's the cmdline I used: `python server.py --cai-chat --model llama-7b-hf --load-in-8bit --lora llama-hh-lora-7B`
1
0
2023-03-25T14:50:34
stonegdi
false
null
0
jdmki1g
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmki1g/
false
1
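For context, here is a minimal sketch of roughly what that command asks text-generation-webui to do under the hood, assuming the usual transformers + peft stack; the local paths are placeholders, not the webui's actual code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base LLaMA weights in 8-bit (same idea as --load-in-8bit).
base = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf",   # placeholder local path to HF-format weights
    load_in_8bit=True,
    device_map="auto",
)

# Attach the LoRA adapter on top (same idea as --lora llama-hh-lora-7B).
model = PeftModel.from_pretrained(base, "loras/llama-hh-lora-7B")
tok = AutoTokenizer.from_pretrained("models/llama-7b-hf")
```

Loading the base model in 8-bit is what makes attaching a LoRA practical on a single consumer GPU.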
t1_jdmkdr1
Hmm interesting, would you mind sharing your system settings and hardware? Graphics card? Normal, 8-bit, or 4-bit mode? With 4GB of VRAM left, I would assume you could load a lot more of the character card. Maybe if you pass the --gpu-memory flag on the command line you could get it to use more VRAM?
1
0
2023-03-25T14:49:41
Inevitable-Start-653
false
null
0
jdmkdr1
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmkdr1/
false
1
t1_jdmjl8a
Yep, that worked, thanks... but I got the same error, so I tried shrinking the context from the char preset; it kept crashing until it finally worked, but I had to truncate all the text after "a phenomenon called Rayleigh scattering."... anything longer than that and it errors despite still having 4GB of VRAM left. The er...
2
0
2023-03-25T14:43:40
stonegdi
false
null
0
jdmjl8a
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmjl8a/
false
2
t1_jdmhxal
https://huggingface.co/serpdotai/llama-hh-lora-7B Try this one
1
0
2023-03-25T14:31:08
Famberlight
false
null
0
jdmhxal
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmhxal/
false
1
t1_jdmhx86
I have had the same thing with 7B 8-bit; usually I delete the logs/[character]-persistence.json file and start again. I can see why the big players are so unsure of these models now, as taming them seems tough.
2
0
2023-03-25T14:31:07
megadonkeyx
false
null
0
jdmhx86
false
/r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdmhx86/
false
2
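That workaround as a one-liner (a hedged sketch: the path is assumed relative to the text-generation-webui folder, and the character name is a placeholder):

```python
from pathlib import Path

# Delete the persisted chat log so the character starts fresh.
# "MyCharacter" is a placeholder for your card's name.
Path("logs/MyCharacter-persistence.json").unlink(missing_ok=True)
```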
t1_jdmhm45
Not sure since your link doesn't work for me, but if you look at the guide link I shared earlier and scroll all the way down, there is a list of links where you can download the LoRAs; the 7B 8-bit one is: [https://huggingface.co/tloen/alpaca-lora-7b](https://huggingface.co/tloen/alpaca-lora-7b)
1
0
2023-03-25T14:28:43
stonegdi
false
null
0
jdmhm45
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmhm45/
false
1
t1_jdmhdsr
Np, dang you are right, the file is gone... yeesh. I need to find a better place to start sharing things. Here is a Google Drive folder with everything: https://drive.google.com/drive/folders/1KunfMezZeIyJsbh8uJa76BKauQvzTDPw?usp=share_link
1
0
2023-03-25T14:26:56
Inevitable-Start-653
false
null
0
jdmhdsr
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmhdsr/
false
1
t1_jdmh23w
I tried using this LoRA (7b) https://app.gumroad.com/d/93d42e24012926106dc09b5209314eed. It mentions 7b-hf, is it 4bit?
1
0
2023-03-25T14:24:24
Famberlight
false
null
0
jdmh23w
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmh23w/
false
1
t1_jdmff8q
Also I see you mentioned 4bit.. I could be wrong but I don't think LoRA works on 4bit just yet.
1
0
2023-03-25T14:11:34
stonegdi
false
null
0
jdmff8q
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmff8q/
false
1
t1_jdmf1dy
Thanks, I looked at the text and it looks fine to me... maybe I'll try to shorten it. Also, your file.io link doesn't work; it says the file was deleted.
1
0
2023-03-25T14:08:33
stonegdi
false
null
0
jdmf1dy
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmf1dy/
false
1
t1_jdmewy8
Huge thanks! I'll try this out
1
0
2023-03-25T14:07:33
Famberlight
false
null
0
jdmewy8
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmewy8/
false
1
t1_jdmeoyh
Check out this guide (link below), especially the troubleshooting section; I think this is a known issue: "In order to avoid a CUDA error when starting the web UI, you will need to apply the following fix": [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit...
2
0
2023-03-25T14:05:50
stonegdi
false
null
0
jdmeoyh
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmeoyh/
false
2
t1_jdmc4rx
Hmm, it might be because the character card length is pretty long and if you are running the 30B model with barely enough VRAM it might not like that. Additionally, it might be a copy paste thing screwing up the formatting of the character card. Try downloading the card here: https://file.io/dnBRMOvAxQPL
1
0
2023-03-25T13:44:47
Inevitable-Start-653
false
null
0
jdmc4rx
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmc4rx/
false
1
t1_jdmba2s
It's the 13B model in 4-bit mode.
1
0
2023-03-25T13:37:32
Inevitable-Start-653
false
null
0
jdmba2s
false
/r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdmba2s/
false
1
t1_jdmb8jy
I eventually manually opened the hatch with a large axe, and HAL got upset at me XD. I think it was just the AI doing a good job pretending to be HAL.
2
0
2023-03-25T13:37:10
Inevitable-Start-653
false
null
0
jdmb8jy
false
/r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdmb8jy/
false
2
t1_jdmao5v
It's using the blip-image-captioning-base model; I think the send_picture extension automatically downloads it. I downloaded the model repo from Hugging Face and have it locally on my machine. I edited the script.py file to point to where I downloaded the model to my machine instead of it using ...
2
0
2023-03-25T13:32:19
Inevitable-Start-653
false
null
0
jdmao5v
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmao5v/
false
2
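A hedged sketch of what that local-model edit amounts to, using the transformers BLIP classes; `LOCAL_DIR` and the image path are placeholders, not the extension's actual code:

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

LOCAL_DIR = "path/to/blip-image-captioning-base"  # local copy of the model repo

# Point from_pretrained at the local directory instead of the Hub name.
processor = BlipProcessor.from_pretrained(LOCAL_DIR)
model = BlipForConditionalGeneration.from_pretrained(LOCAL_DIR)

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(image, return_tensors="pt")
caption = processor.decode(model.generate(**inputs)[0], skip_special_tokens=True)
print(caption)
```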
t1_jdmai8q
Thanks for sharing, this is great. I am using the llama-30b with 4-bit GPTQ, and these settings are causing an error: "IndexError: list index out of range" and no inference is possible... not sure what's going on. When I change the character to another preset it works fine... maybe this preset is reaching some kind of...
2
0
2023-03-25T13:30:53
stonegdi
false
null
0
jdmai8q
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmai8q/
false
2
t1_jdm9e3e
Ok, I misunderstood it then, thanks! I just saw "open source" and thought it'd be useful, but if you're correct then it sounds like a bad deal.
0
0
2023-03-25T13:21:10
PM_ME_ENFP_MEMES
false
null
0
jdm9e3e
false
/r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/jdm9e3e/
false
0
t1_jdm85rw
It's not open source; you need to train it on their platform, which costs money. They haven't released the weights. Seriously, fuck all of them who want to keep the model to themselves.
2
0
2023-03-25T13:10:09
ckkkckckck
false
null
0
jdm85rw
false
/r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/jdm85rw/
false
2
t1_jdm3j0i
https://github.com/databrickslabs/dolly
1
0
2023-03-25T12:25:54
PM_ME_ENFP_MEMES
false
null
0
jdm3j0i
false
/r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/jdm3j0i/
false
1
t1_jdm3a63
[removed]
0
0
2023-03-25T12:23:24
[deleted]
true
null
0
jdm3a63
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdm3a63/
false
0
t1_jdm2e5t
[deleted]
-1
0
2023-03-25T12:14:11
[deleted]
true
2023-03-25T12:25:43
0
jdm2e5t
false
/r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/jdm2e5t/
false
-1
t1_jdm1rci
[removed]
1
0
2023-03-25T12:07:22
[deleted]
true
null
0
jdm1rci
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdm1rci/
false
1
t1_jdm19lp
Link?
2
0
2023-03-25T12:02:06
qrayons
false
null
0
jdm19lp
false
/r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/jdm19lp/
false
2
t1_jdlynp9
If I understand it, reflexion is basically a process of "rather than have the language model blindly attempt to solve problems and answer questions, let's let it generate a solution, rate that solution somehow, reflect on the quality of its solution, make *another* solution with that in mind, and repeat that several ti...
3
0
2023-03-25T11:32:34
AI-Pon3
false
null
0
jdlynp9
false
/r/LocalLLaMA/comments/121b1l5/implementing_reflexion_into_llamaalpaca_would_be/jdlynp9/
false
3
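A minimal sketch of the loop described above, assuming a hypothetical `llm` callable that sends a prompt to the model and returns its text:

```python
def reflexion(llm, problem, rounds=3):
    """Generate, self-critique, and revise a solution several times."""
    solution = llm(f"Solve the following problem:\n{problem}")
    for _ in range(rounds):
        # Ask the model to rate and critique its own attempt.
        critique = llm(
            f"Problem:\n{problem}\n\nProposed solution:\n{solution}\n\n"
            "Rate this solution and point out any flaws."
        )
        # Regenerate with the critique in mind.
        solution = llm(
            f"Problem:\n{problem}\n\nPrevious solution:\n{solution}\n\n"
            f"Critique:\n{critique}\n\nWrite an improved solution."
        )
    return solution
```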
t1_jdlxp8m
What do they mean by reflexion? And what does reflexion mean?
1
0
2023-03-25T11:21:05
Puzzleheaded_Acadia1
false
null
0
jdlxp8m
false
/r/LocalLLaMA/comments/121b1l5/implementing_reflexion_into_llamaalpaca_would_be/jdlxp8m/
false
1
t1_jdludit
Thanks a lot, then I need to look into what I'm doing wrong, because following these instructions I'm not able to run the 4-bit version, either default or with the 30B model.
1
0
2023-03-25T10:37:53
MageLD
false
null
0
jdludit
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdludit/
false
1
t1_jdlq3dt
Help installing alpaca.cpp on Android
1
0
2023-03-25T09:36:27
-2b2t-
false
null
0
jdlq3dt
false
/r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdlq3dt/
false
1
t1_jdlq2xj
[deleted]
1
0
2023-03-25T09:36:16
[deleted]
true
null
0
jdlq2xj
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdlq2xj/
false
1
t1_jdlpp2p
Thank you, it resolved the loading issue.
1
0
2023-03-25T09:30:40
doomperial
false
null
0
jdlpp2p
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdlpp2p/
false
1
t1_jdlnrdd
Dude, it's LAION; they quite literally work on open-source datasets. Ever heard of Stable Diffusion? Trained on filtered LAION-5B. We don't really give a shit about whether their model will be runnable on a consumer GPU; we're waiting for a dataset to finetune LLaMA with LoRA (LoRA is a very consumer-GPU-friendly finetuner).
5
0
2023-03-25T09:02:00
LienniTa
false
null
0
jdlnrdd
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdlnrdd/
false
5
t1_jdln5fm
Is that the 7B?
3
0
2023-03-25T08:52:41
Necessary_Ad_9800
false
null
0
jdln5fm
false
/r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdln5fm/
false
3
t1_jdlmxbc
This happens to me as well, seems like a bug?
2
0
2023-03-25T08:49:16
Necessary_Ad_9800
false
null
0
jdlmxbc
false
/r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdlmxbc/
false
2
t1_jdlllya
I'm on Windows 10. LLaMA 7B 4-bit without LoRA runs fine. Here is the error: C:\Users\olegt\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('C')} warn(msg) C:\Users\olegt\m...
1
0
2023-03-25T08:29:17
Famberlight
false
null
0
jdlllya
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdlllya/
false
1
t1_jdlk0ja
Emad is very well connected in the community of AI researchers, particularly those who don't work for the big secretive groups like OpenAI, DeepMind, etc. He has a track record of providing compute/funding to people who want to get things out into the world
3
0
2023-03-25T08:05:18
nizus1
false
null
0
jdlk0ja
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdlk0ja/
false
3
t1_jdlk06c
Runway raised $50 million and switched to doing video stuff. Gen-1 and Gen-2
1
0
2023-03-25T08:05:09
nizus1
false
null
0
jdlk06c
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdlk06c/
false
1
t1_jdljhhk
Post the CUDA error (if it's Linux I can help). Can you run a LLaMA without a LoRA? Follow the instructions in the wiki on GitHub for the webui, whatever it's called.
1
0
2023-03-25T07:57:14
wind_dude
false
null
0
jdljhhk
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdljhhk/
false
1
t1_jdlj184
Yes, they just look the same. I thought I saw someone calling the UI for LLaMA auto1111. I meant oobabooga.
1
0
2023-03-25T07:50:28
Famberlight
false
null
0
jdlj184
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdlj184/
false
1
t1_jdliw69
Is this using CLIP interrogator?
2
0
2023-03-25T07:48:20
nizus1
false
null
0
jdliw69
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdliw69/
false
2
t1_jdlitgi
Isn't auto1111 Stable Diffusion, not LLaMA?
1
0
2023-03-25T07:47:11
Sixhaunt
false
null
0
jdlitgi
false
/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdlitgi/
false
1
t1_jdlimsi
I'm thinking this would be very tough to implement generally; while the linked post includes a script that would work for very specific examples and the blog/paper has it being used on a dataset of coding problems, it just doesn't seem like something that would be easy to use in a general sense outside of coding, at le...
2
0
2023-03-25T07:44:22
AI-Pon3
false
null
0
jdlimsi
false
/r/LocalLLaMA/comments/121b1l5/implementing_reflexion_into_llamaalpaca_would_be/jdlimsi/
false
2
t1_jdlew0t
Ahhhh, a Luddite running Windows I see. Jk. Yea, I haven't thought about how I'll map the fans yet; problem for when they get here.
1
0
2023-03-25T06:50:32
wind_dude
false
null
0
jdlew0t
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdlew0t/
false
1
t1_jdleqi9
Same for me, I’ll try again in the am.
1
0
2023-03-25T06:48:27
wind_dude
false
null
0
jdleqi9
false
/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/jdleqi9/
false
1
t1_jdlbk1p
I can't currently run LLaMA-13B on my GPU, but I can on my CPU. That's the main difference. As others have said, GPU will be much faster than CPU. With CPU I'm taking around 10 minutes per 500 tokens.
2
0
2023-03-25T06:05:09
MentesInquisitivas
false
null
0
jdlbk1p
false
/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdlbk1p/
false
2
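For a sense of scale, that figure works out to under one token per second:

```python
# Throughput implied by the comment above: 500 tokens in ~10 minutes.
tokens, minutes = 500, 10
print(f"{tokens / (minutes * 60):.2f} tokens/s")  # ≈ 0.83 tokens/s
```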
t1_jdl59j8
qwop seems to have already implemented the changes in his repository, and the code for reading the new quantized models is ready here: [https://github.com/oobabooga/text-generation-webui/pull/530](https://github.com/oobabooga/text-generation-webui/pull/530) Soon this should be merged.
4
0
2023-03-25T04:49:50
oobabooga1
false
null
0
jdl59j8
false
/r/LocalLLaMA/comments/12047hn/better_qptqquantized_llama_soon_paper_authors/jdl59j8/
false
4
t1_jdl45o7
runway also doesn't seem to have progressed past SD 1.5 though
3
0
2023-03-25T04:38:06
Tystros
false
null
0
jdl45o7
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdl45o7/
false
3
t1_jdl0dlj
I've recently changed the compile flags. Try downloading the latest version (1.0.5) and see if there are any improvements. I also enabled SSE3. Unfortunately, if you only have AVX but not AVX2, there might not be significant acceleration.
1
0
2023-03-25T04:00:18
HadesThrowaway
false
null
0
jdl0dlj
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdl0dlj/
false
1
t1_jdkwybm
[removed]
1
0
2023-03-25T03:28:38
[deleted]
true
2023-03-25T03:43:25
0
jdkwybm
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdkwybm/
false
1
t1_jdktnxm
Thank you for sharing. I'm excited to play with this!
2
0
2023-03-25T02:59:43
Kat-
false
null
0
jdktnxm
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdktnxm/
false
2
t1_jdksj01
Awesome. Can you shoot me a chat msg please? App keeps freezing for me when I try to chat.
1
0
2023-03-25T02:50:05
Pan000
false
null
0
jdksj01
false
/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/jdksj01/
false
1
t1_jdkpxd5
I'm guessing that you've got the 8-bit model?
1
0
2023-03-25T02:27:56
EnvironmentalAd3385
false
null
0
jdkpxd5
false
/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdkpxd5/
false
1
t1_jdkp3ea
My PC supports AVX but not AVX-512. What are the steps to try with llama.cpp?
1
0
2023-03-25T02:20:57
scorpadorp
false
null
0
jdkp3ea
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdkp3ea/
false
1
t1_jdklk1q
Yes, other people have reported using them with no problems.
1
0
2023-03-25T01:51:51
Technical_Leather949
false
null
0
jdklk1q
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdklk1q/
false
1
t1_jdklfdl
What's your GPU, and do you see "No CUDA runtime is found" as part of the error log?
1
0
2023-03-25T01:50:49
Technical_Leather949
false
null
0
jdklfdl
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdklfdl/
false
1
t1_jdkju6i
It would be wonderful if you can; it's suspected to be an issue with matrix multiplication during the dequantization process. Take a look at https://github.com/ggerganov/llama.cpp/discussions/229
1
0
2023-03-25T01:37:59
HadesThrowaway
false
null
0
jdkju6i
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdkju6i/
false
1
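To make the suspected step concrete, here is a toy NumPy sketch of 4-bit block dequantization followed by the matmul. It mirrors the q4_0 idea (32-value blocks, one scale each, nibbles recentered by 8) but is purely illustrative, not llama.cpp's actual kernel:

```python
import numpy as np

def dequantize_q4(q, scales, block=32):
    """q: 4-bit values stored as ints in [0, 15]; one float scale per block."""
    w = q.astype(np.float32) - 8.0  # recenter nibbles around zero
    return (w.reshape(-1, block) * scales[:, None]).reshape(q.shape)

rng = np.random.default_rng(0)
q = rng.integers(0, 16, size=(64, 64))                  # fake quantized weights
scales = rng.random(64 * 64 // 32).astype(np.float32)   # one scale per 32-block

W = dequantize_q4(q, scales)
y = W @ rng.random(64).astype(np.float32)  # the matmul under suspicion
```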
t1_jdkjl5v
It shouldn't be that slow unless your PC does not support AVX intrinsics. Have you tried the original llama.cpp? If that is fast, you may want to rebuild llamacpp.dll from the makefile, as it might be more targeted at your device architecture.
1
0
2023-03-25T01:35:59
HadesThrowaway
false
null
0
jdkjl5v
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdkjl5v/
false
1
t1_jdkje3z
[removed]
1
0
2023-03-25T01:34:24
[deleted]
true
2023-03-25T01:51:31
0
jdkje3z
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdkje3z/
false
1
t1_jdkhxeh
I'm running custom fan curves with MSIAfterburner, the stock fan curves will happily cook the GPUs before kicking in.
1
0
2023-03-25T01:22:53
blueSGL
false
null
0
jdkhxeh
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdkhxeh/
false
1
t1_jdkfox2
That would be dope; then we'd all be running the 65B, and I'd have a lil more karma.
1
0
2023-03-25T01:05:25
wind_dude
false
null
0
jdkfox2
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdkfox2/
false
1
t1_jdkf1dv
I made it; here is the content of the text file that defines the parameters:

do_sample=True
top_p=1
top_k=12
temperature=0.36
repetition_penalty=1.05
typical_p=1.0

Just copy and paste that into a .txt file, name it whatever you want, and put it in the presets folder in the Oobabooga install directory. Additionall...
5
0
2023-03-25T01:00:25
Inevitable-Start-653
false
null
0
jdkf1dv
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdkf1dv/
false
5
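Those preset values map one-to-one onto Hugging Face `generate()` sampling kwargs. A hedged sketch of using them directly (the model path is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "models/llama-7b-hf"  # placeholder local path
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("Why is the sky blue?", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,            # sample instead of greedy decoding
    top_p=1.0,
    top_k=12,
    temperature=0.36,
    repetition_penalty=1.05,
    typical_p=1.0,
    max_new_tokens=200,
)
print(tok.decode(out[0], skip_special_tokens=True))
```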
t1_jdkdu4v
Where'd you get the chatgptv2 generation parameters preset?
2
0
2023-03-25T00:51:10
SDGenius
false
null
0
jdkdu4v
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdkdu4v/
false
2
t1_jdkczoi
Lol, thought you might've had a 2- or 3-bit model for 30B running with 4.5GB, and I was gonna be stoked it was working!
1
0
2023-03-25T00:44:40
c4r_guy
false
null
0
jdkczoi
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdkczoi/
false
1
t1_jdka2fa
I repasted my 1060 6gb when Stable Diffusion dropped back in October. Made a really aggressive custom fan curve as well. Made a huge difference.
1
0
2023-03-25T00:22:40
remghoost7
false
null
0
jdka2fa
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdka2fa/
false
1
t1_jdk7gyy
my bad, thanks, yes, reversed.
2
0
2023-03-25T00:03:15
wind_dude
false
null
0
jdk7gyy
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdk7gyy/
false
2
t1_jdk63hf
> llama 30b - 7-8 tokens/s (takes about 4.5gb vram)
> llama 7b - 14.5-16 tokens/s (takes about 20gb vram)

Is that RAM usage reversed?
2
0
2023-03-24T23:52:54
c4r_guy
false
null
0
jdk63hf
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdk63hf/
false
2
t1_jdk5gen
I built an i3-10500 box 2 years ago just to host an M40 for text generation / training on GPT-2 models; then CLIP diffusion / Stable Diffusion became a thing, so I rocked the M40 for both. Maxwell isn't well supported by some [many] libraries, and the M40 sounds like an air conditioner with the fan running. So, I ...
7
0
2023-03-24T23:48:06
c4r_guy
false
null
0
jdk5gen
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdk5gen/
false
7
t1_jdk0rdb
Also going to add, seems like 7b 4bit doesn't heat up above the mid 70s
1
0
2023-03-24T23:13:06
wind_dude
false
null
0
jdk0rdb
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdk0rdb/
false
1
t1_jdk0mcs
haha, that's like my hp servers. I've got 2 120v fans from a network rack I'm going to put in and try and make some temporary fan ducting.
2
0
2023-03-24T23:12:03
wind_dude
false
null
0
jdk0mcs
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdk0mcs/
false
2
t1_jdjv8z1
Hey, I'm using an m40 as well! I'm getting used to feeling like I'm approaching a plane engine every time I get near the fan I've got bolted onto that thing.
2
0
2023-03-24T22:32:28
toothpastespiders
false
null
0
jdjv8z1
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdjv8z1/
false
2
t1_jdjv7vn
Hear, hear! To add to this, also keep it clean. I recently noticed (on my 3090) that fans are quickly ramping up to 100% and the performance in any task or benchmark is very bad. After checking gpuz I noticed I was thermally limited (hotspot at 105C, mem and gpu temps were ok). After opening it up, cleaning the dust an...
3
0
2023-03-24T22:32:14
VertexMachine
false
null
0
jdjv7vn
false
/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdjv7vn/
false
3
t1_jdjpvnr
can you describe the flaw? I know enough C++ that perhaps I can at least modify my own copy
1
0
2023-03-24T21:53:24
ImmerWollteMehr
false
null
0
jdjpvnr
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdjpvnr/
false
1
t1_jdjomf4
Thanks!
1
0
2023-03-24T21:44:40
ImmerWollteMehr
false
null
0
jdjomf4
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jdjomf4/
false
1
t1_jdjlta7
I downloaded the original LLaMA weights from BitTorrent and then converted the weights to 4bit following the readme at [llama.cpp](https://github.com/ggerganov/llama.cpp). Should only take a couple of minutes to convert.
1
0
2023-03-24T21:25:06
Special_Freedom_8069
false
null
0
jdjlta7
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jdjlta7/
false
1
t1_jdjkso1
4-bit? What's the process? My GPU is AMD, but I've got 64GB of RAM and plenty of time.
1
0
2023-03-24T21:18:05
ImmerWollteMehr
false
null
0
jdjkso1
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jdjkso1/
false
1
t1_jdjiq30
Yea, I'm interested. I think the first PoC would be getting a llama to use a simple calculator (maybe even just arithmetic) app.

Down the road it would be awesome to be able to see it use:

- REST, JSON-RPC, and GraphQL
- any UI (maybe start with navigating the web) (although this would be a long shot and wou...
1
0
2023-03-24T21:03:59
wind_dude
false
2023-03-24T21:16:52
0
jdjiq30
false
/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/jdjiq30/
false
1
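A minimal sketch of that calculator PoC, assuming a hypothetical `llm` callable: the model is prompted to emit `CALC(...)` tags, which are then evaluated and spliced back into its answer:

```python
import re

def answer_with_calculator(llm, question):
    """llm is a hypothetical prompt-in/text-out callable."""
    draft = llm(f"{question}\nWrite CALC(expression) wherever arithmetic is needed.")

    def evaluate(match):
        # Toy evaluator: digits and + - * / ( ) only, no builtins exposed.
        expr = match.group(1)
        if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
            return match.group(0)  # leave anything suspicious untouched
        return str(eval(expr, {"__builtins__": {}}))

    return re.sub(r"CALC\(([^)]*)\)", evaluate, draft)
```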
t1_jdjgrge
To give you a rough idea, I get about double the speed with 7B vs 30B. I'm running all HF transformers on a Tesla M40 24GB. CPU and memory aren't an issue on a single GPU; if you're running multiple GPUs, CPU and PCIe lanes will become your bottleneck. Hard drive has some effect when loadin...
3
0
2023-03-24T20:50:50
wind_dude
false
2023-03-25T00:04:02
0
jdjgrge
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdjgrge/
false
3
t1_jdjgktb
[deleted]
1
0
2023-03-24T20:49:36
[deleted]
true
2023-03-28T14:15:36
0
jdjgktb
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdjgktb/
false
1
t1_jdjfmgp
[removed]
1
0
2023-03-24T20:43:12
[deleted]
true
null
0
jdjfmgp
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdjfmgp/
false
1
t1_jdjfleb
>Finetuning is not about training the model to answer questions, or follow instructions, but about training how it should react.

Arguably that is exactly what they did with ChatGPT. They used reinforcement learning where it produced two responses and was rewarded for producing the more favourable response. Theoretic...
8
0
2023-03-24T20:43:01
wind_dude
false
null
0
jdjfleb
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdjfleb/
false
8
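The reward step described there is usually a pairwise (Bradley-Terry style) objective; a minimal PyTorch sketch, with `r_chosen`/`r_rejected` standing in for reward-model scores of the two responses:

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Push the preferred response's reward above the rejected one's:
    # loss = -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()

loss = preference_loss(torch.tensor([1.3]), torch.tensor([0.2]))
print(loss.item())  # small when the chosen response already scores higher
```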
t1_jdjehr9
They've yet to release most of it. Not even the training data for fine tuning has been released.
7
0
2023-03-24T20:35:41
wind_dude
false
null
0
jdjehr9
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdjehr9/
false
7
t1_jdjdvpv
4bit 30b, 8bit 13b, and alpaca native
2
0
2023-03-24T20:31:37
Civil_Collection7267
false
null
0
jdjdvpv
false
/r/LocalLLaMA/comments/120x6eh/which_llama_model_is_best_for_my_set_up/jdjdvpv/
false
2
t1_jdj9sw4
Question, would Tesla M40 cards work or not ?
1
0
2023-03-24T20:04:48
MageLD
false
null
0
jdj9sw4
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdj9sw4/
false
1
t1_jdj754r
>will

After the "Open"AI fiasco I'll believe this when it's present tense.
9
0
2023-03-24T19:47:22
dealingwitholddata
false
null
0
jdj754r
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdj754r/
false
9
t1_jdiyewd
I'll post some numbers once I get my new card and build SHARK.
2
0
2023-03-24T18:50:24
friedrichvonschiller
false
null
0
jdiyewd
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdiyewd/
false
2
t1_jdiybgk
>I think some modern GPUs have Int8 hardware though.

They have both [int4](https://developer.nvidia.com/blog/int4-for-ai-inference/) and [int8](https://forums.developer.nvidia.com/t/does-nvidia-titan-x-have-native-fp16-and-int8-support/44354) support depending on the vintage and manufacturer, and int4 is indeed fas...
1
0
2023-03-24T18:49:49
friedrichvonschiller
false
null
0
jdiybgk
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdiybgk/
false
1
t1_jdixkhk
It's amazing how long the generation phase takes on 4-bit 7B. A short prompt of length 12 takes minutes with the CPU at 100% (i5-10600K, 32GB, 850 EVO). Would this be feasible to install in an HPC cluster?
1
0
2023-03-24T18:45:02
scorpadorp
false
null
0
jdixkhk
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdixkhk/
false
1
t1_jdis7d6
Yes, this project is fully open-source and most importantly all the training data will be released open source so anyone can use it to train their own Alpaca model. But the models they release will also be able to run locally on consumer hardware.
7
0
2023-03-24T18:10:16
jd_3d
false
null
0
jdis7d6
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdis7d6/
false
7
t1_jdirfqu
i wonder what would be the speed on pytorch-MLIR or shark [https://github.com/nod-ai/SHARK.git](https://github.com/nod-ai/SHARK.git) or in other graph optimizing inference systems like openvino for example
3
0
2023-03-24T18:05:19
big_cedric
false
null
0
jdirfqu
false
/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdirfqu/
false
3
t1_jdiq9qp
I see, with a 3 word prompt it comes out as roughly half the speed of the plain chat.exe, but it feels a fair bit slower perhaps because chat.exe starts showing the output as it is being generated rather than all at the end. Thanks for working on this, I hope the breakthroughs keep on coming :)
3
0
2023-03-24T17:57:47
GrapplingHobbit
false
null
0
jdiq9qp
false
/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdiq9qp/
false
3
t1_jdiq1ns
yeah but can we run it on our own machines? A central struggle of this rapid evolution of AI is for individuals to be able to run it themselves. Otherwise we're back to the old "who really owns my computer" thing that goes on with smartphones, etc.
6
0
2023-03-24T17:56:22
dealingwitholddata
false
null
0
jdiq1ns
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdiq1ns/
false
6
t1_jdimtk6
Hi, maybe it's also about context length; ChatGPT is tested to have twice the context length. It's great for summarizing a boring paper.
2
0
2023-03-24T17:36:06
BackgroundFeeling707
false
null
0
jdimtk6
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdimtk6/
false
2
t1_jdijv4a
Are you familiar with https://open-assistant.io/dashboard? It seems they are already doing what you describe and have a nice interface for anyone to help contribute. I think they already have over 100,000 examples, which is awesome.
10
0
2023-03-24T17:17:34
jd_3d
false
null
0
jdijv4a
false
/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdijv4a/
false
10
t1_jdiirer
Thanks for the write up! I'll try it this weekend and let you know how it goes.
2
0
2023-03-24T17:10:37
Clasp
false
null
0
jdiirer
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdiirer/
false
2
t1_jdhy3mq
for a single 4090, easiest way to get started and simple to use: [https://github.com/lxe/simple-llama-finetuner](https://github.com/lxe/simple-llama-finetuner)
1
0
2023-03-24T14:59:41
Civil_Collection7267
false
null
0
jdhy3mq
false
/r/LocalLLaMA/comments/120kojq/how_do_i_fine_tune_4_bit_or_8_bit_models/jdhy3mq/
false
1
t1_jdhrz71
Got this exact same problem, with WSL and an AMD GPU.
1
0
2023-03-24T14:19:09
shemademedoit1
false
null
0
jdhrz71
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdhrz71/
false
1