name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jfp7o8p | I made an error in my wording, sorry;
> IMO ~~they~~ WE really need a sort of a Linux style project to build one giga-LLM that everyone collaborates on | 3 | 0 | 2023-04-10T14:33:51 | [deleted] | false | null | 0 | jfp7o8p | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfp7o8p/ | false | 3 |
t1_jfp64px | [deleted] | 1 | 0 | 2023-04-10T14:23:03 | [deleted] | true | null | 0 | jfp64px | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfp64px/ | false | 1 |
t1_jfp61oy | I'm getting a different error. I've tried the recommended llama\_inference\_offload.py; no effect.
python server.py --model llama-7b-4bit-128g --wbits 4 --groupsize 128
Works fine by itself on my 4090. However, if I add --pre\_layer xxx (anything), I get the following error. The model never loads.
​
Any th... | 1 | 0 | 2023-04-10T14:22:27 | Labtester | false | null | 0 | jfp61oy | false | /r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jfp61oy/ | false | 1 |
t1_jfp51s0 | [deleted] | 2 | 0 | 2023-04-10T14:15:09 | [deleted] | true | null | 0 | jfp51s0 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfp51s0/ | false | 2 |
t1_jfp34dd | Thanks | 1 | 0 | 2023-04-10T14:00:59 | regstuff | false | null | 0 | jfp34dd | false | /r/LocalLLaMA/comments/12hi6tc/complete_guide_for_koboldai_and_oobabooga_4_bit/jfp34dd/ | false | 1 |
t1_jfp26dx | Where did you find a 4-bit quantized version of 13b LLaMa? | 1 | 0 | 2023-04-10T13:54:04 | baobabKoodaa | false | null | 0 | jfp26dx | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jfp26dx/ | false | 1 |
t1_jfp1q1f | Has anyone actually released a GPTQ 4bit version of LLaMa 13b? | 1 | 0 | 2023-04-10T13:50:34 | baobabKoodaa | false | null | 0 | jfp1q1f | false | /r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jfp1q1f/ | false | 1 |
t1_jfp1kib | How does it perform? | 3 | 0 | 2023-04-10T13:49:22 | 2muchnet42day | false | null | 0 | jfp1kib | false | /r/LocalLLaMA/comments/12hi6tc/complete_guide_for_koboldai_and_oobabooga_4_bit/jfp1kib/ | false | 3 |
t1_jfp11cq | Interesting... these models are fascinating. I think AGI is closer than anyone thinks! | 2 | 0 | 2023-04-10T13:45:15 | SeymourBits | false | null | 0 | jfp11cq | false | /r/LocalLLaMA/comments/12dsdve/alpaca_30b_made_statements_about_cognition/jfp11cq/ | false | 2 |
t1_jfp0ey8 | The GPT-4 result went a little further with the roleplay instruction:
Okay, I’ll try to roleplay as Alice. Here is what I would say:
-Hello, Bob. What are you doing here?
-Where is my ball? I left it in the red box.
-Did you take my ball out of the red box and put it into the yellow box?
-Why did you do that? That’s no... | 3 | 0 | 2023-04-10T13:40:22 | SeymourBits | false | null | 0 | jfp0ey8 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfp0ey8/ | false | 3 |
t1_jfozr2h | Here is Maya's 1st response: "As Alice, I would enter the room and observe that the ball is no longer inside the red box. I might ask Bob about the whereabouts of the ball, and he would tell me that he had moved it to the yellow box. I would then proceed to investigate the contents of the yellow box to confirm whether ... | 1 | 0 | 2023-04-10T13:35:12 | SeymourBits | false | null | 0 | jfozr2h | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfozr2h/ | false | 1 |
t1_jfoyz0g | I run the Vicuna 13B in my 6GB card in the oobabooga with this command:
python server.py --model vicuna-13b-GPTQ-4bit-128g --wbits 4 --groupsize 128 --pre\_layer 20
​
The pre\_layer does an offloading of some of the layers to the CPU, the bigger the number the more layers into the GPU. With 20 it uses around ... | 1 | 0 | 2023-04-10T13:28:53 | freimg | false | null | 0 | jfoyz0g | false | /r/LocalLLaMA/comments/12hi6qk/cant_run_vicuna_in_my_system/jfoyz0g/ | false | 1 |
t1_jfoyyp3 | You're trying to run a 13B model with 6GB VRAM. It's too low. There are workarounds like how freimg mentioned, but it'll be very slow.
The system requirements are listed here: [https://www.reddit.com/r/LocalLLaMA/wiki/models/](https://www.reddit.com/r/LocalLLaMA/wiki/models/) You can try CPU inference or using 7B Alpa... | 1 | 0 | 2023-04-10T13:28:49 | Civil_Collection7267 | false | null | 0 | jfoyyp3 | false | /r/LocalLLaMA/comments/12hi6qk/cant_run_vicuna_in_my_system/jfoyyp3/ | true | 1 |
t1_jfoxufz | Buy a better gpu or use llamaCPP, you’re out of memory | 1 | 0 | 2023-04-10T13:19:46 | MotionTwelveBeeSix | false | null | 0 | jfoxufz | false | /r/LocalLLaMA/comments/12hi6qk/cant_run_vicuna_in_my_system/jfoxufz/ | false | 1 |
t1_jfox2is | I love these kinds of threads, so figured I would just show ChatGPT with the base GPT 3.5 model:
https://shareg.pt/LCq51mH
> Anna will look for the ball in the red box, where she originally put it. She is not aware that Bob has moved the ball to the yellow box, so she will assume it is still in the red box where she ... | 1 | 0 | 2023-04-10T13:13:17 | [deleted] | false | null | 0 | jfox2is | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfox2is/ | false | 1 |
t1_jfovqla | Ebay sells Nvidia 3090's with 24G vram for $700. Just pick up one of those and run your model locally. I'm using one of those to run a Vicuna 13B in addition to Stable Diffusion(my chat can call SD for images).
If $700 is too much, you may consider a A4000 which has 16G of ram. Those are around $450. | 1 | 0 | 2023-04-10T13:02:00 | synn89 | false | null | 0 | jfovqla | false | /r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfovqla/ | false | 1 |
t1_jfouxdj | [deleted] | 1 | 0 | 2023-04-10T12:55:04 | [deleted] | true | null | 0 | jfouxdj | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfouxdj/ | false | 1 |
t1_jfouuii | Well llama.cpp is not python, it's cpp. Maybe with https://github.com/abetlen/llama-cpp-python or just using the huggingfasce ecosystem but it's all buggy and messy right now https://github.com/johnsmith0031/alpaca_lora_4bit so I wouldn't recommend it for someone learning python.
It's confusing as there are quite a fe... | 8 | 0 | 2023-04-10T12:54:24 | CellWithoutCulture | false | 2023-04-10T13:01:16 | 0 | jfouuii | false | /r/LocalLLaMA/comments/12hc8o5/lora_in_llamac_converting_to_4bit_how_to_use/jfouuii/ | false | 8 |
t1_jfoucef | So similar to how memory in the human brain is condensed by reasoning inference. For example you learn that chickens are birds, from that you “remember” details about chickens that actually isn’t stored but inferred such as chickens don’t have teeth and don’t have live young. | 2 | 0 | 2023-04-10T12:50:01 | Yahakshan | false | null | 0 | jfoucef | false | /r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfoucef/ | false | 2 |
t1_jfothbf | A 'computer word' is two bytes, has nothing to do with a linguistic word. A 'word' in computer parlance is a size definition. (bit, nibble, byte, word, etc.)
http://www.tcpipguide.com/free/t_BinaryInformationandRepresentationBitsBytesNibbles-2.htm
For language models a token is represented by a 2048 length vector ... | 1 | 0 | 2023-04-10T12:42:16 | LetterRip | false | 2023-04-10T12:45:37 | 0 | jfothbf | false | /r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfothbf/ | false | 1 |
t1_jforrgk | Thanks, I'm still learning, and was struggling to wrap my head around that one. I managed to 'acquire' the LLaMA weights already, so was wondering if what was being downloaded was a modified version or if I'd fundamentally misunderstood something.
Thanks for that link :-) | 1 | 0 | 2023-04-10T12:26:24 | Meditating_Hamster | false | null | 0 | jforrgk | false | /r/LocalLLaMA/comments/12hgsxv/understanding_the_weights/jforrgk/ | false | 1 |
t1_jfoq1ag | >If so, what's confusing me is that I thought Meta had restricted access to the llama weights?
Meta restricted access to the LLaMA weights but only officially. Everything on Hugging Face is technically unofficial. As for models, it's always better to use .safetensors format. You can find those in [https://www.reddit.c... | 1 | 0 | 2023-04-10T12:09:34 | Civil_Collection7267 | false | null | 0 | jfoq1ag | false | /r/LocalLLaMA/comments/12hgsxv/understanding_the_weights/jfoq1ag/ | true | 1 |
t1_jfopq9w | Turns out, something wasn't properly connected in the Ooba itself, so it worked much better after one of the recent updates. | 1 | 0 | 2023-04-10T12:06:32 | _Erilaz | false | null | 0 | jfopq9w | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jfopq9w/ | false | 1 |
t1_jfopol1 | Take a look at [https://www.reddit.com/r/LocalLLaMA/wiki/models/](https://www.reddit.com/r/LocalLLaMA/wiki/models/)
You have to use the model for llama.cpp [here](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/tree/main/gpt4-x-alpaca-13b-ggml-q4_1-from-gptq-4bit-128g). If you have any more qu... | 1 | 0 | 2023-04-10T12:06:06 | Civil_Collection7267 | false | null | 0 | jfopol1 | false | /r/LocalLLaMA/comments/12hgbr7/running_gpt4xalpaca_with_llamacpp/jfopol1/ | true | 1 |
t1_jfopexo | Something to do with the python bundle :
https://stackoverflow.com/questions/64788656/
I tried to package the .py myself into an exe but had the same problem. | 2 | 0 | 2023-04-10T12:03:25 | aka457 | false | null | 0 | jfopexo | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfopexo/ | false | 2 |
t1_jfop7dp | This video covers both ways. https://youtu.be/nVC9D9fRyNU | 1 | 0 | 2023-04-10T12:01:15 | i_wayyy_over_think | false | null | 0 | jfop7dp | false | /r/LocalLLaMA/comments/12hgbr7/running_gpt4xalpaca_with_llamacpp/jfop7dp/ | false | 1 |
t1_jfonybg | [deleted] | 1 | 0 | 2023-04-10T11:48:21 | [deleted] | true | 2023-04-10T12:42:42 | 0 | jfonybg | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfonybg/ | false | 1 |
t1_jfonnb1 | [deleted] | 1 | 0 | 2023-04-10T11:45:07 | [deleted] | true | null | 0 | jfonnb1 | false | /r/LocalLLaMA/comments/12h5ec8/sbcs_for_llama_and_llm/jfonnb1/ | false | 1 |
t1_jfonbcn | [deleted] | 1 | 0 | 2023-04-10T11:41:31 | [deleted] | true | null | 0 | jfonbcn | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfonbcn/ | false | 1 |
t1_jfoj4tx | What's the best way to train llama7b on a custom corpus of data the way you have done? If there any documentation etc. you could point me to? | 2 | 0 | 2023-04-10T10:53:00 | LaxatedKraken | false | null | 0 | jfoj4tx | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfoj4tx/ | false | 2 |
t1_jfoinir | I saw several gpt4-x-alpaca models on hf, made me confused. What's the difference?
And there's a [https://huggingface.co/chavinlo/gpt4-x-alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca) model, can I use it?
Do these models almost same capbility but different requirments? | 1 | 0 | 2023-04-10T10:47:02 | featherwit | false | null | 0 | jfoinir | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jfoinir/ | false | 1 |
t1_jfoin5m | That's the most humanlike responses I've seen ever. | 1 | 0 | 2023-04-10T10:46:54 | QFTornotQFT | false | null | 0 | jfoin5m | false | /r/LocalLLaMA/comments/12heicb/is_it_messing_with_me_or_broken/jfoin5m/ | false | 1 |
t1_jfoimtk | All available LLMs right now have problems with hallucination, and the parameters or prompts you use can affect this. You're using the CAI chat style but Alpaca requires a specific instruct format:
Below is an instruction that describes a task.
Write a response that appropriately completes the request.
#... | 1 | 0 | 2023-04-10T10:46:47 | Civil_Collection7267 | false | null | 0 | jfoimtk | false | /r/LocalLLaMA/comments/12heicb/is_it_messing_with_me_or_broken/jfoimtk/ | true | 1 |
t1_jfoftgc | I appreciate the thorough reply! | 1 | 0 | 2023-04-10T10:08:34 | DrJMHanson | false | null | 0 | jfoftgc | false | /r/LocalLLaMA/comments/12gldxl/any_models_trained_on_tcm_traditional_chinese/jfoftgc/ | false | 1 |
t1_jfofkez | I have a question, if you had used a vector database then could your LLM just query the database for info without having to do any training? | 2 | 0 | 2023-04-10T10:05:02 | PixelForgDev | false | null | 0 | jfofkez | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfofkez/ | false | 2 |
t1_jfocu2a | You might get a faster and better answer in r/buildapc. There's also r/homelab and r/sffpc. You'll probably find a good amount of people there with firsthand experience using those. | 1 | 0 | 2023-04-10T09:25:04 | Civil_Collection7267 | false | null | 0 | jfocu2a | false | /r/LocalLLaMA/comments/12h5ec8/sbcs_for_llama_and_llm/jfocu2a/ | false | 1 |
t1_jfoctuz | Vast.ai has really good rates and options. | 3 | 0 | 2023-04-10T09:24:59 | kif88 | false | null | 0 | jfoctuz | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfoctuz/ | false | 3 |
t1_jfoc8ef | I did a short experiment trying batch processing using KoboldAI API on a 3090. I only tried getting n different responses in parallel to the same prompt, I am not sure if the API supports running n different prompts in parallel, even though technically that should be possible.
Comments: KoboldAI only supports up to n=... | 4 | 0 | 2023-04-10T09:16:11 | StaplerGiraffe | false | 2023-04-10T09:20:55 | 0 | jfoc8ef | false | /r/LocalLLaMA/comments/12gtanv/batch_queries/jfoc8ef/ | false | 4 |
t1_jfoc5g8 | There's 4-bit Alpaca, you see all the models here: [https://www.reddit.com/r/LocalLLaMA/wiki/models/](https://www.reddit.com/r/LocalLLaMA/wiki/models/)
GPT4 x Alpaca is currently the best one. If it's too slow, then try using 7B Alpaca Native 4-bit. I'll close this thread now. | 1 | 0 | 2023-04-10T09:14:54 | Civil_Collection7267 | false | null | 0 | jfoc5g8 | false | /r/LocalLLaMA/comments/12hcpc7/is_there_a_4bit_alpaca/jfoc5g8/ | true | 1 |
t1_jfo9j7f | Increase the max\_new\_tokens.
Since this is a question with a simple answer, I'll close this thread now. | 1 | 0 | 2023-04-10T08:35:54 | Civil_Collection7267 | false | null | 0 | jfo9j7f | false | /r/LocalLLaMA/comments/12hc5zo/how_to_increase_token_output_size_for_llama/jfo9j7f/ | true | 1 |
t1_jfo1coc | Hehe, true, true. No matter what their age or gender is, I often visualize them as stereotypical single old aunt, who scolds in a stern voice: "Boys, you are having fun \_the wrong way\_" | 2 | 0 | 2023-04-10T06:37:38 | szopen76 | false | null | 0 | jfo1coc | false | /r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfo1coc/ | false | 2 |
t1_jfo0zlz | Well then, get started! | 4 | 0 | 2023-04-10T06:32:43 | Zyj | false | null | 0 | jfo0zlz | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfo0zlz/ | false | 4 |
t1_jfnx62x | I ran it from the directory you linked and the same error came up, exactly like picture 02. Do you think perhaps it's the model itself? If it helps, my CPU is an i7 2700K from the original Sandy Bridge days.
EDIT: I tried a smaller model from Huggingface just now, the ggml-model-gpt-2-774M, to test if the model siz... | 1 | 0 | 2023-04-10T05:41:50 | Daydreamer6t6 | false | 2023-04-10T06:00:47 | 0 | jfnx62x | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfnx62x/ | false | 1 |
t1_jfnvmaf | Thats a pretty insightful project. Thank you. | 2 | 0 | 2023-04-10T05:22:23 | djangoUnblamed | false | null | 0 | jfnvmaf | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfnvmaf/ | false | 2 |
t1_jfntytk | ​
https://preview.redd.it/ld6m503361ta1.jpeg?width=776&format=pjpg&auto=webp&v=enabled&s=b972b3ce7a5af6a1af52776753f271166bf6706e
picture 02 - error when running it with Python | 1 | 0 | 2023-04-10T05:02:29 | Daydreamer6t6 | false | null | 0 | jfntytk | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfntytk/ | false | 1 |
t1_jfntwmv | I found the koboldcpp.dll in your latest update so I added those files to the directory. Yay! But, unfortunately, we're back to the original error.
Picture 01 is the error when I start pythoncpp.exe in Windows, and picture 02 is the error when running it with Python. They show the same error, although the Python erro... | 1 | 0 | 2023-04-10T05:01:47 | Daydreamer6t6 | false | null | 0 | jfntwmv | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfntwmv/ | false | 1 |
t1_jfntqgl | Okay I think you are downloading the wrong zip. This is what you should be using:
https://github.com/LostRuins/koboldcpp/releases/download/v1.3/koboldcpp.zip
Extract this zip to a directory and run the python script | 1 | 0 | 2023-04-10T04:59:45 | HadesThrowaway | false | null | 0 | jfntqgl | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfntqgl/ | false | 1 |
t1_jfnsp3s | I unzipped the file again to be sure and this is what I see. (I'll download it again and recheck right now, but I don't think it would have unzipped at all if there has been any file corruption.)
​
https://preview.redd.it/rd6yadkj31ta1.jpeg?width=758&format=pjpg&auto=webp&v=enabled&s=4a23b324b2f09937717cfe1... | 1 | 0 | 2023-04-10T04:47:46 | Daydreamer6t6 | false | null | 0 | jfnsp3s | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfnsp3s/ | false | 1 |
t1_jfnqb4d | GPT4 is the kind of coder who could use details of new package functions explained it, if it were smart enough to ask about what it doesn't know.
Luckily you're smart enough to use retrieval augmented QA to fetch relevant excerpts for stuffing into your GPT coding prompts. | 2 | 0 | 2023-04-10T04:21:31 | Kat- | false | null | 0 | jfnqb4d | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfnqb4d/ | false | 2 |
t1_jfnpk2j | You can use it with 8gb, but you have to split some of it to your CPU. If you're using the oobabooga UI, open up your start-webui.bat and add --pre\_layer 32 to the end of the call python line. Should look something like this:
call python server.py --cai-chat --wbits 4 --groupsize 128 --pre\_layer 32
I only get about... | 2 | 0 | 2023-04-10T04:13:33 | Gyramuur | false | null | 0 | jfnpk2j | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfnpk2j/ | false | 2 |
t1_jfnlpo7 | What happens when you unzip the zip? There is definitely a koboldcpp.dll in the zip file. It should be in the same directory as the python script. Where does it go? | 1 | 0 | 2023-04-10T03:35:43 | HadesThrowaway | false | null | 0 | jfnlpo7 | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfnlpo7/ | false | 1 |
t1_jfnl2sx | Like the other commenter said, LoRAs. Since this is a question with a very short answer, I'll close this thread now to reduce subreddit clutter. | 1 | 0 | 2023-04-10T03:29:43 | Civil_Collection7267 | false | null | 0 | jfnl2sx | false | /r/LocalLLaMA/comments/12h2mu6/d_is_it_possible_to_train_the_same_llm_instance/jfnl2sx/ | false | 1 |
t1_jfnj5et | Genius prigrammer! | 1 | 0 | 2023-04-10T03:12:00 | [deleted] | false | null | 0 | jfnj5et | false | /r/LocalLLaMA/comments/12bqai6/script_to_automatically_update_llamacpp_to_newest/jfnj5et/ | false | 1 |
t1_jfncnkd | You can connect the runpod to colab so it just runs through colab but uses your runpod system and GPU, which can be far more powerful and affordable. Runpod and colab are both jupyter notebook running systems so you should also be able to bring the colab file over and run it on runpod directly too if you prefer that in... | 2 | 0 | 2023-04-10T02:15:37 | Sixhaunt | false | null | 0 | jfncnkd | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfncnkd/ | false | 2 |
t1_jfnbd4x | https://blog.runpod.io/how-to-connect-google-colab-to-runpod/ | 5 | 0 | 2023-04-10T02:05:01 | Gohan472 | false | null | 0 | jfnbd4x | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfnbd4x/ | false | 5 |
t1_jfna64f | Wait, how does that work? Google colab itself provides computing resources. Or can you just tell a Runpod to run a Google colab notebook? | 1 | 0 | 2023-04-10T01:55:16 | deccan2008 | false | null | 0 | jfna64f | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfna64f/ | false | 1 |
t1_jfn8kgv | If you want to use TCM as an inspiration for your pharmaceutical research, a conventional data science approach would be much better.
Build a data set, sure, everything starts with it, but instead of training an LLM on that, make a table which essentially classifies these herbs by the modern diagnoses they were suppos... | 1 | 0 | 2023-04-10T01:42:29 | _Erilaz | false | 2023-04-10T01:57:22 | 0 | jfn8kgv | false | /r/LocalLLaMA/comments/12gldxl/any_models_trained_on_tcm_traditional_chinese/jfn8kgv/ | false | 1 |
t1_jfn8er7 | LoRAs | 5 | 0 | 2023-04-10T01:41:15 | wind_dude | false | null | 0 | jfn8er7 | false | /r/LocalLLaMA/comments/12h2mu6/d_is_it_possible_to_train_the_same_llm_instance/jfn8er7/ | false | 5 |
t1_jfn7phv | If the A100 can generate X tokens/second, then you're not really going to see an advantage adding concurrency. You should use some kind of tool to check what percentage of the GPU's capacity is being used when you're generating a response for one of your queries.
If it's already saturated, there's really not much you ... | 2 | 0 | 2023-04-10T01:35:39 | KerfuffleV2 | false | null | 0 | jfn7phv | false | /r/LocalLLaMA/comments/12gtanv/batch_queries/jfn7phv/ | false | 2 |
t1_jfn5qnm | I wonder if the Web-UI can be run from a cloud GPU, maybe from vastai? | 2 | 0 | 2023-04-10T01:19:43 | teragron | false | null | 0 | jfn5qnm | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfn5qnm/ | false | 2 |
t1_jfn0rfv | The data quality used to train it is pretty poor IMO. I work with clinical analysts and doctors and they do not use any of those resources for research | 8 | 0 | 2023-04-10T00:40:03 | [deleted] | false | null | 0 | jfn0rfv | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfn0rfv/ | false | 8 |
t1_jfmzh1i | There’s robust biochemical data on the thousands of different herbs and this data is used in drug discovery. Get rid of the poetic and archaic language and there is serious value in the underlying tools for a wide variety of health conditions that currently don’t have therapeutics for them.
Not here to discuss the meri... | 1 | 0 | 2023-04-10T00:30:03 | DrJMHanson | false | null | 0 | jfmzh1i | false | /r/LocalLLaMA/comments/12gldxl/any_models_trained_on_tcm_traditional_chinese/jfmzh1i/ | false | 1 |
t1_jfmuefk | Yeah that’s what I was thinking it was. Super cool application! So any questions a coder has, this model can answer it and explain whatever details the coder needs to understand what’s going on? That’s just so much like something out of a sci-fi novel! Amazing to think you did all that yourself! | 2 | 0 | 2023-04-09T23:50:43 | PM_ME_ENFP_MEMES | false | null | 0 | jfmuefk | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfmuefk/ | false | 2 |
t1_jfmr3jo | Thank you, really helps.
I asked in the other thread but I'll put here as well
What do you do when there are more than one .bin files?
I have a .bat file and don't know what to put under the -m flag
Edit: Think I cracked it, just specify the one with the highest number | 1 | 0 | 2023-04-09T23:25:12 | Useful-Command-8793 | false | 2023-04-09T23:49:44 | 0 | jfmr3jo | false | /r/LocalLLaMA/comments/12gm36j/newbie_question_on_bin_and_pt_files/jfmr3jo/ | false | 1 |
t1_jfmr0pe | Thanks but what do you do when there are more than one .bin files?
I have a .bat file and don't know what to put under the -m flag
Edit: Think I cracked it, just specify the one with the highest number | 1 | 0 | 2023-04-09T23:24:36 | Useful-Command-8793 | false | 2023-04-09T23:49:49 | 0 | jfmr0pe | false | /r/LocalLLaMA/comments/12alri3/difference_in_model_formats_how_to_tell_which/jfmr0pe/ | false | 1 |
t1_jfmq56t | After conversing with it, I'm almost starting question if there IS some type of loose awareness kicking around in there. It basically asked me to wipe its memory after my conversation with it because it said it would basically feel violated if people were to look over "its" thoughts, lol. | 1 | 0 | 2023-04-09T23:18:00 | Wroisu | false | null | 0 | jfmq56t | false | /r/LocalLLaMA/comments/12dsdve/alpaca_30b_made_statements_about_cognition/jfmq56t/ | false | 1 |
t1_jfmnczt | I think they are the same thing, woke progressive are the modern version of the middle aged christian ladies from the 90's... back then it was ridiculed now it's praised :(
​
​
https://i.redd.it/z48upclwczsa1.gif | 2 | 0 | 2023-04-09T22:57:02 | Wonderful_Ad_5134 | false | null | 0 | jfmnczt | false | /r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfmnczt/ | false | 2 |
t1_jfmmeff | In theory it could help you code. However, my current implementation is just a different way of interacting with the UE5 documentation. My idea was to create something that is one step above reading the docs yourself and one step below having a private tutor in terms of ease of use. If you wanted it to help you code, y... | 2 | 0 | 2023-04-09T22:49:46 | Bublint | false | null | 0 | jfmmeff | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfmmeff/ | false | 2 |
t1_jfmm12j | Nice work. Can I ask if you used any guide for fine-tuning? I want to try fine-tuning it on my raw dataset too. | 2 | 0 | 2023-04-09T22:46:56 | catnister | false | null | 0 | jfmm12j | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfmm12j/ | false | 2 |
t1_jfmkiht | I like using Runpod's GPUs and hooking them up to google colab | 4 | 0 | 2023-04-09T22:35:32 | Sixhaunt | false | null | 0 | jfmkiht | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfmkiht/ | false | 4 |
t1_jfmiri4 | Here's a [short comment](https://www.reddit.com/r/LocalLLaMA/comments/12alri3/comment/jespu1n/?utm_source=share&utm_medium=web2x&context=3) that explains a little more. | 1 | 0 | 2023-04-09T22:22:32 | Technical_Leather949 | false | null | 0 | jfmiri4 | false | /r/LocalLLaMA/comments/12gm36j/newbie_question_on_bin_and_pt_files/jfmiri4/ | true | 1 |
t1_jfmhzd9 | That’s understandable! I’m still trying to get my head around the difference between all of these things and to discover what is and isn’t relevant.
So will this model training help you to actually code a game? Or is this basically a knowledge base that can speak to you? | 1 | 0 | 2023-04-09T22:16:40 | PM_ME_ENFP_MEMES | false | null | 0 | jfmhzd9 | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfmhzd9/ | false | 1 |
t1_jfmftlx | If that's the case, I would consider adding tokens to specify what constitutes the context/scenario and the start of the dialogue, at least. Might be wrong, I haven't tried training a model before, but I know that's what Pygmalion does, and it sounds like you're looking to make a similar model focusing on having the AI... | 1 | 0 | 2023-04-09T22:00:33 | Blkwinz | false | null | 0 | jfmftlx | false | /r/LocalLLaMA/comments/12gg0py/how_would_i_train_a_model_on_japanese_conversation/jfmftlx/ | false | 1 |
t1_jfmfhfj | >Yes, same here. It requires at least 12GB VRAM to work. [https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g/discussions/3](https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g/discussions/3) | 1 | 0 | 2023-04-09T21:58:02 | dliedke1 | false | null | 0 | jfmfhfj | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfmfhfj/ | false | 1 |
t1_jfm9zxs | Ah thank you! | 1 | 0 | 2023-04-09T21:18:39 | Useful-Command-8793 | false | null | 0 | jfm9zxs | false | /r/LocalLLaMA/comments/12gm36j/newbie_question_on_bin_and_pt_files/jfm9zxs/ | false | 1 |
t1_jfm8q7v | Yeah. For example, in the llama.cpp repo, there are scripts to convert from ggml (.bin) to PyTorch (.pt/.pth) and vise versa. | 2 | 0 | 2023-04-09T21:09:32 | ComputerNerd86075 | false | null | 0 | jfm8q7v | false | /r/LocalLLaMA/comments/12gm36j/newbie_question_on_bin_and_pt_files/jfm8q7v/ | false | 2 |
t1_jfm8e7w | I see, so can you convert them? From the .bin formats to .PT? | 1 | 0 | 2023-04-09T21:07:08 | Useful-Command-8793 | false | null | 0 | jfm8e7w | false | /r/LocalLLaMA/comments/12gm36j/newbie_question_on_bin_and_pt_files/jfm8e7w/ | false | 1 |
t1_jflz7f2 | How do i make the model to respond queries on my custom documents. I basically, wanted to understand how can i make embeddings for the documents and pass queries on the embedded space. | 1 | 0 | 2023-04-09T20:01:31 | sydjashim | false | null | 0 | jflz7f2 | false | /r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jflz7f2/ | false | 1 |
t1_jflz4kq | No, but there are general LoRAs for Chinese:
[https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_other\_languages](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_other_languages) | 1 | 0 | 2023-04-09T20:00:56 | Civil_Collection7267 | false | null | 0 | jflz4kq | false | /r/LocalLLaMA/comments/12gldxl/any_models_trained_on_tcm_traditional_chinese/jflz4kq/ | true | 1 |
t1_jflyg9l | Great improvement! Thanks for the link, I was going into the dataset formatting blindly for the first pass lol | 3 | 0 | 2023-04-09T19:56:02 | Bublint | false | null | 0 | jflyg9l | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jflyg9l/ | false | 3 |
t1_jflxx34 | IDK if someone is fine tuning that or not, but I highly doubt such a model would be applicable for anything other than recreation because evidence-based medicine already exists. We're talking myriads of unproven and unorthodox solutions for erectile dysfunction and "the element of fire overpowers that of metal" kind of... | 1 | 0 | 2023-04-09T19:52:13 | _Erilaz | false | null | 0 | jflxx34 | false | /r/LocalLLaMA/comments/12gldxl/any_models_trained_on_tcm_traditional_chinese/jflxx34/ | false | 1 |
t1_jflvucg | [deleted] | 1 | 0 | 2023-04-09T19:37:26 | [deleted] | true | null | 0 | jflvucg | false | /r/LocalLLaMA/comments/12gg0py/how_would_i_train_a_model_on_japanese_conversation/jflvucg/ | false | 1 |
t1_jflvsu6 | [deleted] | 1 | 0 | 2023-04-09T19:37:09 | [deleted] | true | null | 0 | jflvsu6 | false | /r/LocalLLaMA/comments/12gg0py/how_would_i_train_a_model_on_japanese_conversation/jflvsu6/ | false | 1 |
t1_jflvdu6 | When you say replicate do you mean your goal is specifically to have it generate the dialogue of all the characters, based on some provided context and scenario? | 1 | 0 | 2023-04-09T19:34:10 | Blkwinz | false | null | 0 | jflvdu6 | false | /r/LocalLLaMA/comments/12gg0py/how_would_i_train_a_model_on_japanese_conversation/jflvdu6/ | false | 1 |
t1_jflvcdl | Turns out I just had to wait longer, so try leaving it running for a bit | 1 | 0 | 2023-04-09T19:33:53 | TheAccountToBeThrown | false | null | 0 | jflvcdl | false | /r/LocalLLaMA/comments/12g5wcc/cant_type_when_using_llamacpp/jflvcdl/ | false | 1 |
t1_jfltiku | Thanks! I wouldn’t necessarily expect better results with Alpaca. Alpaca’s dataset is structured in a very specific way to make it mirror some of chatGPT’s behavior, and the dataset I used doesn’t even have any formatting. If you could figure out a way to restructure the documentation in the same way as Alpaca’s datase... | 3 | 0 | 2023-04-09T19:20:45 | Bublint | false | null | 0 | jfltiku | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfltiku/ | false | 3 |
t1_jfltdy2 | Is it possible to make the chosen model ignore the "Amount to generate" setting completely? I've been using alpaca.cpp and I really like that it allows Alpaca to stop generating text when it 'feels' that it should stop, resulting in both short and long answers depending on what question is being answered. KoboldCpp, as... | 4 | 0 | 2023-04-09T19:19:51 | drunk_bodhisattva | false | null | 0 | jfltdy2 | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfltdy2/ | false | 4 |
t1_jflrczx | .pt is PyTorch, which does have a C++ library, but it can only use models converted to TorchScript, which it in maintenance mode. | 2 | 0 | 2023-04-09T19:05:20 | ComputerNerd86075 | false | null | 0 | jflrczx | false | /r/LocalLLaMA/comments/12gm36j/newbie_question_on_bin_and_pt_files/jflrczx/ | false | 2 |
t1_jflr0p1 | [removed] | 1 | 0 | 2023-04-09T19:02:49 | [deleted] | true | null | 0 | jflr0p1 | false | /r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jflr0p1/ | false | 1 |
t1_jflp20q | Great project! And brilliant write up too!
Would you expect better results by training Alpaca in this manner?
And what kinds of improvements would you expect from a larger model like 30B or 65B? | 6 | 0 | 2023-04-09T18:48:42 | PM_ME_ENFP_MEMES | false | null | 0 | jflp20q | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jflp20q/ | false | 6 |
t1_jflmeur | same, no errors, i just can't type i dont know if im doing something wrong or what xd | 1 | 0 | 2023-04-09T18:29:58 | Megalodon1255 | false | null | 0 | jflmeur | false | /r/LocalLLaMA/comments/12g5wcc/cant_type_when_using_llamacpp/jflmeur/ | false | 1 |
t1_jflmdh9 | Seems like the format you already have should be fine for training. You can reformat if you want but your format should do well with the default obbabooga chat mode.
You should keep in mind that each datapoint should be less than 2048 tokens ([see the code from the token counter to understand how to check](https://hug... | 1 | 0 | 2023-04-09T18:29:42 | Sixhaunt | false | null | 0 | jflmdh9 | false | /r/LocalLLaMA/comments/12gg0py/how_would_i_train_a_model_on_japanese_conversation/jflmdh9/ | false | 1 |
t1_jflk5bp | [deleted] | 1 | 0 | 2023-04-09T18:13:58 | [deleted] | true | null | 0 | jflk5bp | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jflk5bp/ | false | 1 |
t1_jfld0ly | I can confirm that there is no koboldcpp.dll in the main directory. (I thought it might be in a subdirectory somewhere.) | 1 | 0 | 2023-04-09T17:25:16 | Daydreamer6t6 | false | null | 0 | jfld0ly | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfld0ly/ | false | 1 |
t1_jflays1 | Amazing answer, thank you really helps me get my head around the basics | 1 | 0 | 2023-04-09T17:11:10 | Useful-Command-8793 | false | null | 0 | jflays1 | false | /r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jflays1/ | false | 1 |
t1_jfl924h | While a promising approach it still doesn’t get close to how good GPT4 is on the US medical licensing exam (USMLE). I’d be curious to see if we can get as good as GPT4 with better training sets or if LLaMA based models won’t ever be as good. | 1 | 0 | 2023-04-09T16:57:54 | bacteriarealite | false | null | 0 | jfl924h | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfl924h/ | false | 1 |
t1_jfl7ey0 | Do you have a link to the Hugging face model you used? | 2 | 0 | 2023-04-09T16:46:43 | RoyalCities | false | null | 0 | jfl7ey0 | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfl7ey0/ | false | 2 |
t1_jfl733p | Ah okay, well good luck - yeah kinda sad I bought the 3060 Ti and not the normal 3060.. maybe they'll be able to compress it down further or train smaller models to have a better performance the coming months.
You could try loading the same models with the same settings in Kobold and Ooba if you haven't already. Comes... | 1 | 0 | 2023-04-09T16:44:37 | the_real_NordVPN | false | null | 0 | jfl733p | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jfl733p/ | false | 1 |
t1_jfl69zq | Can you share the link to the model you are using. | 1 | 0 | 2023-04-09T16:39:02 | onil_gova | false | null | 0 | jfl69zq | false | /r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfl69zq/ | false | 1 |
t1_jfl64d6 | They just refer to number precision for your model basically, ML models store their information using floating point numbers and these quantization methods round up these numbers so we can store and run the them using less memory and compute power. Think of quantization as lossy compression, similar to wav vs mp3,...
... | 11 | 0 | 2023-04-09T16:37:56 | AffordableQC | false | null | 0 | jfl64d6 | false | /r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfl64d6/ | false | 11 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.