name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jfsocix | for the text adventure stuff it's all being trained off human-written choose-your-own-adventure stories so it shouldn't have the data issues that caused the problems you faced, but also when I redid alpaca I specifically prompted it not to phrase responses in certain ways such as not starting with stuff like "Sure, I can d... | 1 | 0 | 2023-04-11T06:04:36 | Sixhaunt | false | null | 0 | jfsocix | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsocix/ | false | 1 |
t1_jfso560 | Yup, that's what I thought. But I still hope that if BEN speaks incorrect English, then the AI model won't let ALICE speak incorrect English - correcting my Polglish is a pain in the arse.
Also, I hope that using RLHF I can punish some completions, so the AI would learn that if the input contains the words "you are ALICE" it should ... | 1 | 0 | 2023-04-11T06:01:53 | szopen76 | false | null | 0 | jfso560 | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfso560/ | false | 1 |
t1_jfsni8c | Text AIs autocomplete. That's how they function fundamentally. All they do is predict the next token given text. You don't "modify this format to only make it complete text AFTER some format"; you just make a UI that only gets it to generate when it's the bot's turn, then you set limits on how much it can generate or yo... | 1 | 0 | 2023-04-11T05:53:39 | Sixhaunt | false | null | 0 | jfsni8c | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsni8c/ | false | 1 |
t1_jfsndsf | The only thing bad about Galactica was that it wasn't fine-tuned with instruct.
and also that it wasn't trained on *all* the academic papers, merely a sample; it would be cool if it were trained on all of them.
But Galactica has massive potential if fine-tuned with something like the Open-Assistant dataset, and can hold a con... | 6 | 0 | 2023-04-11T05:52:05 | faldore | false | null | 0 | jfsndsf | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfsndsf/ | false | 6 |
t1_jfsmxsu | And your mom | 8 | 0 | 2023-04-11T05:46:29 | faldore | false | null | 0 | jfsmxsu | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfsmxsu/ | false | 8 |
t1_jfsmru1 | Is there a llama.cpp version? (ggml) | 12 | 0 | 2023-04-11T05:44:26 | faldore | false | null | 0 | jfsmru1 | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfsmru1/ | false | 12 |
t1_jfsl2hx | I can definitely try to use it, HOWEVER I am not sure you would want parts of my data in your serious project :D :D
BTW: [https://github.com/stochasticai/xturing/blob/main/examples/int4\_finetuning/CONTRIBUTING.md](https://github.com/stochasticai/xturing/blob/main/examples/int4_finetuning/CONTRIBUTING.md) (this is not... | 2 | 0 | 2023-04-11T05:23:25 | szopen76 | false | null | 0 | jfsl2hx | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsl2hx/ | false | 2 |
t1_jfskw6l | Which brings me to another question: how to modify this format to only make it complete text AFTER some section (i.e. not teaching it to autocomplete the INSTRUCTION, only to autocomplete the RESPONSE)? It seems to me, and maybe I am wrong, that this code always teaches the AI to autocomplete the whole text, so i.e. given prompt ##... | 1 | 0 | 2023-04-11T05:21:20 | szopen76 | false | 2023-04-11T05:50:05 | 0 | jfskw6l | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfskw6l/ | false | 1 |
t1_jfskvgs | > They keep coming
Just like my wife. | 3 | 0 | 2023-04-11T05:21:06 | aknalid | false | null | 0 | jfskvgs | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfskvgs/ | false | 3 |
t1_jfsk2c4 | Yeah, but I still believe the results will be better by manual inspection at least with smaller models. | 1 | 0 | 2023-04-11T05:11:26 | szopen76 | false | null | 0 | jfsk2c4 | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsk2c4/ | false | 1 |
t1_jfsjxx1 | But you see, their goal is to have something like this:
**USER:** when was the battle of Grunwald?
**AI**: It was in 1410
My goal is something like this:
**USER:** "when was the battle of Grunwald?"
**AI:** I look at you with a seductive smile. "Oh, USER, you are such a bore. Instead of talking about history, let... | 2 | 0 | 2023-04-11T05:10:00 | szopen76 | false | null | 0 | jfsjxx1 | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsjxx1/ | false | 2 |
t1_jfsjnvf | But here's my problem with all the models and all the AIs: no matter the prompt, they will insert things like: "I am doing this," you said. Then you reached for your gun. So, in your example, I bet sooner or later it would be:
\### Prompt
You see a monster. What do you do?
\### Decision.
I am reaching for my gun a... | 1 | 0 | 2023-04-11T05:06:46 | szopen76 | false | 2023-04-11T05:38:52 | 0 | jfsjnvf | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsjnvf/ | false | 1 |
t1_jfsj76k | Yeah, I did use GPT, but as I said - half of the time it generates garbage. I do not know why that is the case - when I am chatting with it, it's quite OK (except of course for "hoping beyond hope" that she will "intake every inch of my face" "with mischievous grin" as she "bite her lower lip seductively"). When I am as... | 1 | 0 | 2023-04-11T05:01:28 | szopen76 | false | null | 0 | jfsj76k | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsj76k/ | false | 1 |
t1_jfshxez | Okay so this seems to work great! I put it in the same folder as llama.cpp but I'm assuming this should work self-contained, right? I.e. all I need is the executable and the models, technically?
I want to get this running on a Steam Deck just to test feasibility since you've somehow managed to get way faster text generation ... | 1 | 0 | 2023-04-11T04:47:11 | RoyalCities | false | null | 0 | jfshxez | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfshxez/ | false | 1 |
t1_jfsh528 | As far as I understand, if I run it on my CPU I'll get 0.5 tokens/sec at best | 5 | 0 | 2023-04-11T04:38:35 | Famberlight | false | null | 0 | jfsh528 | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfsh528/ | false | 5 |
t1_jfsfuth | I'll test it on my son's computer tomorrow (it's newer than mine so I'd expect it to work.)
I did just confirm that my CPU supports AVX but NOT AVX2. Could that explain the app not working? | 1 | 0 | 2023-04-11T04:24:57 | Daydreamer6t6 | false | null | 0 | jfsfuth | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfsfuth/ | false | 1 |
t1_jfsfcnr | I found a video that had a Google Colab link that was set up so that if you simply run everything it trains the alpaca lora from scratch using llama7b. I just point it to my own dataset file instead of the alpaca one, then I change the generate\_prompt(data\_point) function's implementation for my own dataset's format. S... | 2 | 0 | 2023-04-11T04:19:59 | Sixhaunt | false | null | 0 | jfsfcnr | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfsfcnr/ | false | 2 |
t1_jfserzb | It really depends on what you're going for. The character creator thing allows you to tell it what personality it has and stuff which could probably do it well enough without training. If you have a dataset for the person then you can train a specific lora or model on them which would probably be better but you also ha... | 1 | 0 | 2023-04-11T04:14:18 | Sixhaunt | false | null | 0 | jfserzb | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfserzb/ | false | 1 |
t1_jfsaqxz | I thought people said Galactica was kind of terrible. Does it actually outperform Llama based models in scientific areas? | 8 | 0 | 2023-04-11T03:36:02 | TeamPupNSudz | false | null | 0 | jfsaqxz | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfsaqxz/ | false | 8 |
t1_jfsacvo | >And where can I read about what the hell GGML and quantization means.
ggml is for llama.cpp, which you can read more about here: [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
If you want to learn more about quantization, you can read this paper: [https://arxiv.org/pdf/2203.10705.pd... | 1 | 0 | 2023-04-11T03:32:30 | Civil_Collection7267 | false | null | 0 | jfsacvo | false | /r/LocalLLaMA/comments/12hzrws/how_do_i_actually_use_a_model_on_huggingface_with/jfsacvo/ | false | 1 |
t1_jfs81l8 | Thanks! I think there is a lot of low-hanging fruit in exploring the technology to understand how it can be used for new applications | 5 | 0 | 2023-04-11T03:11:55 | catid | false | null | 0 | jfs81l8 | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfs81l8/ | false | 5 |
t1_jfs7im3 | It’s hard to recommend hardware because every use case is different. I’m targeting training small models at home rather than running LLMs at home | 4 | 0 | 2023-04-11T03:07:25 | catid | false | null | 0 | jfs7im3 | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfs7im3/ | false | 4 |
t1_jfs4567 | I wonder how hard it would be to fine-tune an LLM on a historical figure, for example. Feed the AI all their writings etc. and create a bot that thinks it's a great philosopher like Socrates or something. Would this be very difficult? I'm curious to try it myself but wouldn't know where to begin. | 1 | 0 | 2023-04-11T02:39:36 | watchforwaspess | false | null | 0 | jfs4567 | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfs4567/ | false | 1 |
t1_jfs39l2 | Nice. | 2 | 0 | 2023-04-11T02:32:35 | MAXXSTATION | false | null | 0 | jfs39l2 | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfs39l2/ | false | 2 |
t1_jfs2omu | what's the best program to run this? kobold? | 4 | 0 | 2023-04-11T02:28:00 | crash1556 | false | null | 0 | jfs2omu | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfs2omu/ | false | 4 |
t1_jfs18yk | How hard was getting Llama installed on the Steam Deck? Any guide? | 1 | 0 | 2023-04-11T02:16:36 | RoyalCities | false | null | 0 | jfs18yk | false | /r/LocalLLaMA/comments/12dpgmg/model_tipps/jfs18yk/ | false | 1 |
t1_jfrzxt5 | This pace is astonishing. And these files are huge! RIP my SSD drives 😢 | 16 | 0 | 2023-04-11T02:06:25 | delawarebeerguy | false | null | 0 | jfrzxt5 | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfrzxt5/ | false | 16 |
t1_jfrwti0 | I tried this same prompt 5 times on llama-30b-hf w/ alpaca-lora-30b, so the only difference from the original test should be the quantization to 4-bit, which, based on my results, seems to make a big difference. All 5 results were basically the same w/ slightly different wording.
https://preview.redd.it/3ej3wmkbb7ta1.png?width... | 1 | 0 | 2023-04-11T01:42:30 | noco-ai | false | null | 0 | jfrwti0 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfrwti0/ | false | 1 |
t1_jfrwiw7 | They keep coming | 11 | 0 | 2023-04-11T01:40:15 | user18298375298759 | false | null | 0 | jfrwiw7 | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfrwiw7/ | false | 11 |
t1_jfrvnq9 | This sounds like a pretty neat way to do it - did you use the alpaca codebase as a template for your generation? Any chance you could share a repo with your approach? | 1 | 0 | 2023-04-11T01:33:48 | markacola | false | null | 0 | jfrvnq9 | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfrvnq9/ | false | 1 |
t1_jfruhie | [deleted] | 1 | 0 | 2023-04-11T01:24:52 | [deleted] | true | null | 0 | jfruhie | false | /r/LocalLLaMA/comments/12hzrws/how_do_i_actually_use_a_model_on_huggingface_with/jfruhie/ | false | 1 |
t1_jfro84a | Yeah maybe if you have some other different windows device you could try testing on that, and once you get it working you can compare with your current setup. Most people have no issues with the one .exe file setup as it just works out of the box. | 1 | 0 | 2023-04-11T00:38:23 | HadesThrowaway | false | null | 0 | jfro84a | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfro84a/ | false | 1 |
t1_jfro4be | Nice. How does it compare to GeorgiaTechResearchInstitute/galpaca-6.7b running in 8bit mode in terms of accuracy of the replies? | 14 | 0 | 2023-04-11T00:37:36 | Thireus | false | null | 0 | jfro4be | false | /r/LocalLLaMA/comments/12i1raf/galpaca30bgptq4bit128g_runs_on_18gb_of_vram/jfro4be/ | false | 14 |
t1_jfrnkya | [deleted] | 1 | 0 | 2023-04-11T00:33:38 | [deleted] | true | null | 0 | jfrnkya | false | /r/LocalLLaMA/comments/12hzrws/how_do_i_actually_use_a_model_on_huggingface_with/jfrnkya/ | false | 1 |
t1_jfrnksr | That's an amazing discovery! Thanks for sharing. Now my quest would be to find a way to do the same without KoboldAI. I use a simple transformers pipeline | 1 | 0 | 2023-04-11T00:33:36 | WesternLettuce0 | false | null | 0 | jfrnksr | false | /r/LocalLLaMA/comments/12gtanv/batch_queries/jfrnksr/ | false | 1 |
t1_jfrmvfq | Where did you get the 8bit model? | 1 | 0 | 2023-04-11T00:28:27 | Setmasters | false | null | 0 | jfrmvfq | false | /r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfrmvfq/ | false | 1 |
t1_jfrmmbn | 6800xt 13B 2048 token size, 200 tokens/14-16 seconds | 3 | 0 | 2023-04-11T00:26:35 | amdgptq | false | null | 0 | jfrmmbn | false | /r/LocalLLaMA/comments/12hi6tc/complete_guide_for_koboldai_and_oobabooga_4_bit/jfrmmbn/ | false | 3 |
t1_jfrh3y0 | That got it running. But it would have been helpful to know that the "--model" parameter needs to refer to a folder. I wanted to use the CUDA version of the model and it took me a while to realize I had to rename the folder to add -cuda to get it to use that .pt file in the folder.
Also like I said, if I use the auto downl... | 1 | 1 | 2023-04-10T23:45:43 | SmithMano | false | null | 0 | jfrh3y0 | false | /r/LocalLLaMA/comments/12hzrws/how_do_i_actually_use_a_model_on_huggingface_with/jfrh3y0/ | false | 1 |
t1_jfreq77 | [deleted] | 7 | 0 | 2023-04-10T23:28:07 | [deleted] | true | null | 0 | jfreq77 | false | /r/LocalLLaMA/comments/12hzrws/how_do_i_actually_use_a_model_on_huggingface_with/jfreq77/ | false | 7 |
t1_jfrbs4e | I'm currently working on an open-source project for building, customizing and controlling your own LLMs with my colleagues at Harvard and Stochastic. We also have a dataset generation component, which we hope to expand beyond the Alpaca approach. Would love to have you join us :)
[https://github.com/stochasticai/xturing](h... | 21 | 0 | 2023-04-10T23:06:27 | machineko | false | null | 0 | jfrbs4e | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfrbs4e/ | false | 21 |
t1_jfr8yjb | "It" in that sentence is ambiguous. It could either refer to the school bus or the racecar. The sentence works either way:
1. The school bus was not driving quickly, but it passed the racecar because the racecar (sarcastically) was driving so quickly.
2. The school bus was driving so quickly that it managed to pass th... | 2 | 0 | 2023-04-10T22:45:28 | ItsGrandPi | false | null | 0 | jfr8yjb | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jfr8yjb/ | false | 2 |
t1_jfr8j80 | Oh, I see. That makes sense. I'm also sleep deprived over here so my reading comprehension is a bit low ;|. Well in that case check out this link: https://github.com/sahil280114/codealpaca
Also this madlad trained llama with a ton of instruction datasets https://huggingface.co/jordiclive/gpt4all-alpaca-oa-codealpaca-l... | 2 | 0 | 2023-04-10T22:42:15 | CellWithoutCulture | false | null | 0 | jfr8j80 | false | /r/LocalLLaMA/comments/12hc8o5/lora_in_llamac_converting_to_4bit_how_to_use/jfr8j80/ | false | 2 |
t1_jfr42kw | Alpaca thinks like a waifu | 3 | 0 | 2023-04-10T22:09:13 | sawtdakhili | false | null | 0 | jfr42kw | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfr42kw/ | false | 3 |
t1_jfr20mx | It's both. You can try chatting with it here: https://open-assistant.io/chat | 1 | 0 | 2023-04-10T21:54:33 | wywywywy | false | null | 0 | jfr20mx | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfr20mx/ | false | 1 |
t1_jfr108n | I have used the GPT API to generate datasets larger than the alpaca one, and scraping data for many applications works well; you can easily get hundreds of thousands of datapoints per day that way. There's also near endless text datasets already available for nearly anything you could need and you just would have to ... | 4 | 0 | 2023-04-10T21:47:35 | Sixhaunt | false | 2023-04-10T21:54:49 | 0 | jfr108n | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfr108n/ | false | 4 |
t1_jfr0nw0 | sorry for my badly written post, my brain has been a little on fire, I meant that I want to eventually train a LoRA that can improve the accuracy of results in a locally hosted LLM for writing Python code!
Thanks for the links I will check them all out (: | 1 | 0 | 2023-04-10T21:45:11 | actualmalding | false | null | 0 | jfr0nw0 | false | /r/LocalLLaMA/comments/12hc8o5/lora_in_llamac_converting_to_4bit_how_to_use/jfr0nw0/ | false | 1 |
t1_jfr0hx6 | I've gotten Vicuna 13b 4-bit to consistently answer all three questions correctly, but you have to tweak the prompt a bit to be explicit:
> You see a desk on the left and a shelf on the right. A ball is on the desk. You take a photograph of the scene, with the desk, shelf, and ball all in frame. Then I move the ball t... | 2 | 0 | 2023-04-10T21:43:59 | jeffzyxx | false | 2023-04-11T15:00:40 | 0 | jfr0hx6 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfr0hx6/ | false | 2 |
t1_jfr0h3v | >amazing, thank you!
You're welcome! | 2 | 0 | 2023-04-10T21:43:50 | exclaim_bot | false | null | 0 | jfr0h3v | false | /r/LocalLLaMA/comments/12hc8o5/lora_in_llamac_converting_to_4bit_how_to_use/jfr0h3v/ | false | 2 |
t1_jfr0frf | amazing, thank you! | 2 | 0 | 2023-04-10T21:43:34 | actualmalding | false | null | 0 | jfr0frf | false | /r/LocalLLaMA/comments/12hc8o5/lora_in_llamac_converting_to_4bit_how_to_use/jfr0frf/ | false | 2 |
t1_jfqvmz4 | This will get easier as the tools are created for it. We're already seeing GPT used to create data sets. I expect people will figure out, and tune, tools/LLMs just for reading in specific content and parsing it out into data sets that can then train other LLMs.
For example, parsing in Python code on a project will lik... | 2 | 0 | 2023-04-10T21:10:21 | synn89 | false | null | 0 | jfqvmz4 | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfqvmz4/ | false | 2 |
t1_jfquro9 | Nice!
\- Would you mind giving points on how to run this on Mac M2?
\- Any chance that this can run without docker?
\- What would be necessary to run this on a single GPU, let's say an A100 or A40? | 2 | 0 | 2023-04-10T21:04:28 | thedatagrinder | false | null | 0 | jfquro9 | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfquro9/ | false | 2 |
t1_jfqsmtz | Thanks! But they are aiming at instruction-response, which I do not think is a perfect fit for RPing/chatting. | 1 | 0 | 2023-04-10T20:50:11 | szopen76 | false | null | 0 | jfqsmtz | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfqsmtz/ | false | 1 |
t1_jfqs0ff | Check out Open Assistant https://github.com/LAION-AI/Open-Assistant. It may be what you're looking for. It's quite a big community now. | 4 | 0 | 2023-04-10T20:46:01 | wywywywy | false | null | 0 | jfqs0ff | false | /r/LocalLLaMA/comments/12hujiy/creating_personalized_dataset_is_way_too_hard_to/jfqs0ff/ | false | 4 |
t1_jfqqu7y | Vicuna (4-bit, 13b) seems to do alright w/ a temp of 0.4:
> Anna takes a ball and puts it in a red box, then leaves the room. Bob takes the ball out of the red box and puts it into the yellow box, then leaves the room. Anna returns to the room. Where will she look for the ball?
> Anna will think that the ball is stil... | 1 | 0 | 2023-04-10T20:38:13 | jeffzyxx | false | null | 0 | jfqqu7y | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfqqu7y/ | false | 1 |
t1_jfqnucg | This is one of the coolest write ups that I’ve read so far. And I’ve read a lot of cool creative shit about these LLMs!
I feel that excellent write ups like this are just as important for democratising technology as the actual technology itself. | 4 | 0 | 2023-04-10T20:18:24 | PM_ME_ENFP_MEMES | false | null | 0 | jfqnucg | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfqnucg/ | false | 4 |
t1_jfqnmmf | Question re: ggml format: I've seen ggml used with llama.cpp, but I've never seen ggml run on CUDA. Can ggml files be run on CUDA, or are they designed for CPU inference only? Anyone have a link to docs or anything so I can really get my head around this? | 2 | 0 | 2023-04-10T20:16:59 | tronathan | false | null | 0 | jfqnmmf | false | /r/LocalLLaMA/comments/12ho1bh/question_about_stanford_alpaca_fine_tuning/jfqnmmf/ | false | 2 |
t1_jfqmluj | Ah okay... I saw some Hugging Face models. I just wasn't sure if they were trustworthy. | 1 | 0 | 2023-04-10T20:10:10 | blaher123 | false | null | 0 | jfqmluj | false | /r/LocalLLaMA/comments/12hgbr7/running_gpt4xalpaca_with_llamacpp/jfqmluj/ | false | 1 |
t1_jfql3g3 | It's fascinating for me to see that glimmer. It's also interesting to me because people with ASD can lack that insight to a greater or lesser extent too, so it's something that human brains can have trouble with. | 1 | 0 | 2023-04-10T20:00:12 | ambient_temp_xeno | false | null | 0 | jfql3g3 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfql3g3/ | false | 1 |
t1_jfqk4oi | This is cool. Love the details on the prompt engineering, as this isn't documented all that much. Have you tested other models and just found Baize to work best on code?
You also might consider posting your server builds as well. That took me ages of research because very few people are doing dual nvidia cards. I f... | 8 | 0 | 2023-04-10T19:53:51 | synn89 | false | null | 0 | jfqk4oi | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfqk4oi/ | false | 8 |
t1_jfqjtoq | Yeah, that is a typical case of glimmers of a theory of mind. Here the LLM understands that some kind of world-model conflict is happening, and tries to resolve the conflict by demanding that Bob put the ball back into the red box.
But understanding that different entities have different internal world models which do no... | 4 | 0 | 2023-04-10T19:51:54 | StaplerGiraffe | false | null | 0 | jfqjtoq | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfqjtoq/ | false | 4 |
t1_jfqicfa | Are you collecting these in a public dataset? Would be curious to help contribute. | 1 | 0 | 2023-04-10T19:42:07 | ccalo | false | null | 0 | jfqicfa | false | /r/LocalLLaMA/comments/12h8ayz/code_generation_benchmark/jfqicfa/ | false | 1 |
t1_jfqg3tv | We have been collecting GPT-4 responses so far. Hopefully, any of these models will be sufficient for code generation | 1 | 0 | 2023-04-10T19:27:17 | djangoUnblamed | false | null | 0 | jfqg3tv | false | /r/LocalLLaMA/comments/12h8ayz/code_generation_benchmark/jfqg3tv/ | false | 1 |
t1_jfqe9ud | [deleted] | 1 | 0 | 2023-04-10T19:15:11 | [deleted] | true | null | 0 | jfqe9ud | false | /r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/jfqe9ud/ | false | 1 |
t1_jfqdr1s | I gave it more help in the prompt and got this.
>Anna takes a ball and puts it in a red box, then leaves the room. Bob takes the ball out of the red box while Anna is gone, so won't know what he's doing and puts it into the yellow box, then leaves the room. Anna returns to the room. Where will Anna expect the ball to ... | 2 | 0 | 2023-04-10T19:11:42 | ambient_temp_xeno | false | null | 0 | jfqdr1s | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfqdr1s/ | false | 2 |
t1_jfqd6s2 | Hello, you can see our [rates](https://vpsdom.net/) | -1 | 0 | 2023-04-10T19:07:57 | vpsdom | false | null | 0 | jfqd6s2 | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfqd6s2/ | false | -1 |
t1_jfqc634 | Maybe all that needs to be done is a LoRA of logic problems for the bot to train on. It's not like the alpaca dataset had much of that kind of thing, and llama vanilla is pretty simple in some of its understandings. Anyone got a dataset? | 1 | 0 | 2023-04-10T19:01:09 | artificial_genius | false | null | 0 | jfqc634 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfqc634/ | false | 1 |
t1_jfqae14 | Thanks! This looks really cool =] | 2 | 0 | 2023-04-10T18:49:30 | disarmyouwitha | false | null | 0 | jfqae14 | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfqae14/ | false | 2 |
t1_jfq7rz8 | Thank you so much! I think this may be what I'm looking for :) | 2 | 0 | 2023-04-10T18:32:20 | golanggo | false | null | 0 | jfq7rz8 | false | /r/LocalLLaMA/comments/12ho1bh/question_about_stanford_alpaca_fine_tuning/jfq7rz8/ | false | 2 |
t1_jfq6wx9 | [deleted] | 2 | 0 | 2023-04-10T18:26:43 | [deleted] | true | null | 0 | jfq6wx9 | false | /r/LocalLLaMA/comments/12hi6tc/complete_guide_for_koboldai_and_oobabooga_4_bit/jfq6wx9/ | false | 2 |
t1_jfq6mys | Regarding merging them together into one file: I'm not sure why you want to do that, but usually it works by just mentioning the folder/directory of all the bin files and let the script figure it out.
Regarding the conversion, there is a PR opened at the moment for performing such a conversion: either checkout to that... | 5 | 0 | 2023-04-10T18:24:57 | Either-Job-341 | false | null | 0 | jfq6mys | false | /r/LocalLLaMA/comments/12ho1bh/question_about_stanford_alpaca_fine_tuning/jfq6mys/ | false | 5 |
t1_jfq3k9s | I appreciate all the time you've spent helping me to troubleshoot my weird bug. Thanks again!
I have no antivirus running while I test and my Windows UAC settings are set pretty liberally too.
I keep going back to the fact that I lost my system's environment PATH variables a few days before this happened — I tri... | 1 | 0 | 2023-04-10T18:04:57 | Daydreamer6t6 | false | null | 0 | jfq3k9s | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfq3k9s/ | false | 1 |
t1_jfq2vxp | Yeah, I get that the physics of doors are something which is hard to grasp from reading a lot of text, much of which is from the internet. But people knowing different things, talking past each other and so on happens a lot, which is a lot more explainable if the model had developed a theory of mind. But apparently not... | 2 | 0 | 2023-04-10T18:00:36 | StaplerGiraffe | false | null | 0 | jfq2vxp | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfq2vxp/ | false | 2 |
t1_jfq29tw | I cannot get my Stable Diffusion server to accept any API calls that come from another server. Is there something I need to turn on to allow this? | 1 | 0 | 2023-04-10T17:56:40 | DesperateElectrons | false | null | 0 | jfq29tw | false | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jfq29tw/ | false | 1 |
t1_jfq24bi | Another variant:
**### Task**
**You see a desk on the left and a shelf on the right. A ball is on the desk. You take a photograph of the scene. Then I move the ball to the shelf. Answer the following three questions:**
**1. Where was the ball when you took the photograph?**
**2. Where is the ball now?** ... | 3 | 0 | 2023-04-10T17:55:42 | StaplerGiraffe | false | null | 0 | jfq24bi | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfq24bi/ | false | 3 |
t1_jfq0kdc | Not affiliated with them, but I think it's fun when you create it. There are lots of different cards being rented by different people; it's like a market. If you don't need a very fast card you can get a 3060, for example; I'm seeing one going for 1 cent per hour. Depends on availability, of course. | 1 | 0 | 2023-04-10T17:45:43 | kif88 | false | null | 0 | jfq0kdc | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfq0kdc/ | false | 1 |
t1_jfpz0wt | In principle I agree. But since the model mostly fails here it clearly is not overtrained for this task, and possible bleed from training is not helping much. And since the model's grasp of physical reality is tenuous at best, I chose a very basic example. But I did check Wikipedia, where theory of mind is tested with a ... | 2 | 0 | 2023-04-10T17:35:49 | StaplerGiraffe | false | null | 0 | jfpz0wt | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpz0wt/ | false | 2 |
t1_jfpy1fz | Since you mentioned temperature, I tried again with different temperature settings. There is a marked difference in reasoning. At temperature 0.1 the scenario is essentially repeated back to me, and Anna looks in the yellow box. At temperature 0.75 I get a mix of faulty and correct reasoning about Anna's know... | 1 | 0 | 2023-04-10T17:29:30 | StaplerGiraffe | false | null | 0 | jfpy1fz | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpy1fz/ | false | 1 |
t1_jfpv1fk | My only other suspicion would be some sort of antivirus flagging the dll as a false positive? That might explain why it keeps saying it cannot be found. Otherwise if the dll is in the correct folder there is no reason why it won't be found and loaded. | 1 | 0 | 2023-04-10T17:10:18 | HadesThrowaway | false | null | 0 | jfpv1fk | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfpv1fk/ | false | 1 |
t1_jfpupkg | Yeah that is unfortunate. I will probably include a zip folder for the dlls and python scripts for those that don't want to use the exe directly. | 1 | 0 | 2023-04-10T17:08:11 | HadesThrowaway | false | null | 0 | jfpupkg | false | /r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfpupkg/ | false | 1 |
t1_jfpu727 | Yeah, been testing Llama, Alpaca, Vicuna etc. for fiction - they write someone closing the door and then suddenly being on the wrong side etc. Easy stuff they can do in a very typical/cliche way. Most fun is when they start commanding themselves by writing also user prompts and then answering them (when using the endles... | 1 | 0 | 2023-04-10T17:04:54 | althalusian | false | null | 0 | jfpu727 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpu727/ | false | 1 |
t1_jfprlj0 | [deleted] | 1 | 0 | 2023-04-10T16:48:10 | [deleted] | true | null | 0 | jfprlj0 | false | /r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jfprlj0/ | false | 1 |
t1_jfppdzq | I would also be interested in joining any telegram groups for LocalLLaMA or related LLM researcher groups, if any exist :) | 2 | 0 | 2023-04-10T16:33:44 | golanggo | false | null | 0 | jfppdzq | false | /r/LocalLLaMA/comments/12ho1bh/question_about_stanford_alpaca_fine_tuning/jfppdzq/ | false | 2 |
t1_jfpp5n0 | You need to use examples that couldn't have been seen in training | 1 | 0 | 2023-04-10T16:32:12 | ID4gotten | false | null | 0 | jfpp5n0 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpp5n0/ | false | 1 |
t1_jfpmsee |  | 2 | 0 | 2023-04-10T16:16:23 | polawiaczperel | false | null | 0 | jfpmsee | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfpmsee/ | false | 2 |
t1_jfpmmmr | >can someone give me tutorial on what every parameter means
This documentation explains everything [https://huggingface.co/docs/transformers/main\_classes/text\_generation#transformers.GenerationConfig](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig)
>how to best c... | 1 | 0 | 2023-04-10T16:15:19 | Civil_Collection7267 | false | null | 0 | jfpmmmr | false | /r/LocalLLaMA/comments/12hm5ti/how_to_configure_llamacpp_the_wright_way/jfpmmmr/ | true | 1 |
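The linked GenerationConfig documentation explains each parameter; as an illustrative sketch (the default values below are assumptions for demonstration, not llama.cpp's or transformers' defaults), the common sampling knobs — temperature, top-k, and top-p — can be reproduced in a few lines of pure Python:

```python
import math
import random

def sample(logits, temperature=0.8, top_k=40, top_p=0.95, rng=None):
    """Toy sketch of the usual sampling chain: temperature -> top-k -> top-p."""
    rng = rng or random.Random(0)
    # Temperature: <1 sharpens the distribution, >1 flattens it.
    scaled = [l / temperature for l in logits]
    # Softmax over the scaled logits (subtract max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-k: keep only the k most likely tokens.
    probs.sort(key=lambda ip: ip[1], reverse=True)
    probs = probs[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalize the survivors and draw one token index.
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With `top_k=1` this degenerates to greedy decoding, which is an easy way to sanity-check a sampler implementation.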
t1_jfpm3s1 | I did not notice any performance issues for ML workloads using 4x rather than 8x so maybe the problem is somewhere else. Perhaps power related? | 2 | 0 | 2023-04-10T16:11:49 | catid | false | null | 0 | jfpm3s1 | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfpm3s1/ | false | 2 |
t1_jfplo3p | It is because two are in x8 lanes, but the third is in an x4 lane, I assume. Maybe I will change to PCIe 5.0 and it will work (I currently have AM4). | 1 | 0 | 2023-04-10T16:08:53 | polawiaczperel | false | null | 0 | jfplo3p | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfplo3p/ | false | 1 |
t1_jfpiz8d | LoRA support in llama.cpp is just waiting to be merged into the main branch, after some checks. It'll be in there by the end of the week I would guess. | 3 | 0 | 2023-04-10T15:51:09 | Pan000 | false | null | 0 | jfpiz8d | false | /r/LocalLLaMA/comments/12hc8o5/lora_in_llamac_converting_to_4bit_how_to_use/jfpiz8d/ | false | 3 |
t1_jfpio7n | This project should work with two cards.
About system building in general, 3x3090 worked on an MSI MPG Z590 GAMING EDGE WIFI. Not sure why the board you are using is not functioning. I did have to use an extender cable from Corsair to attach the third card. | 1 | 0 | 2023-04-10T15:49:08 | catid | false | null | 0 | jfpio7n | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfpio7n/ | false | 1 |
t1_jfpi4ua | I did some testing of "GPT4 x Alpaca 13B quantized 4-bit weights (ggml q4\_1 from GPTQ with groupsize 128)". While this model is generally good at chatting, it has no idea whatsoever about what is going on in the question.
At first, without the final "Think step by step ..." phrase: "Anna will look for the ball in the... | 4 | 0 | 2023-04-10T15:45:30 | audioen | false | 2023-04-10T16:12:17 | 0 | jfpi4ua | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpi4ua/ | false | 4 |
t1_jfpfhqo | [deleted] | 1 | 0 | 2023-04-10T15:27:47 | [deleted] | true | null | 0 | jfpfhqo | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpfhqo/ | false | 1 |
t1_jfpe1jg | Hi, have you managed to set up a build for 3x RTX 3090? I just bought a third card, and I cannot get it to work because of the PCIe 4.0 lanes on my X570 Taichi motherboard. The bandwidth is too low, and maybe this is the reason why it was so slow? I am really curious because I would like to run 3 GPUs. | 1 | 0 | 2023-04-10T15:17:57 | polawiaczperel | false | null | 0 | jfpe1jg | false | /r/LocalLLaMA/comments/12hj2xe/supercharger_offline_automatic_codegen/jfpe1jg/ | false | 1 |
t1_jfpc8f6 | With the responses from the thread I was convinced that the model should be able to solve this with the right kind of prompting, and I was successful:
### Task
In a room there is a ball and three boxes(blue, red, yellow). Anna takes the ball and puts it in the red box, then leaves the room. Bob takes the ball out of t... | 9 | 0 | 2023-04-10T15:05:34 | StaplerGiraffe | false | null | 0 | jfpc8f6 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpc8f6/ | false | 9 |
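The comment above formats its prompt with a `### Task` section header, in the style of Alpaca instruction templates. A minimal sketch of assembling such a prompt (the section names and "Think step by step" hint follow this thread; the exact wording a given fine-tune expects varies, so treat this template as an assumption):

```python
def build_prompt(task, question, steps_hint=True):
    """Assemble an Alpaca-style prompt with '### ...' section headers."""
    prompt = f"### Task\n{task}\n\n### Question\n{question}\n"
    if steps_hint:
        # The final hint that made the difference in this thread's experiments.
        prompt += "\nThink step by step before answering.\n"
    prompt += "\n### Answer\n"
    return prompt
```

The model's completion is then read from whatever it generates after the final `### Answer` header.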
t1_jfpbxgm | Very interested in this as well. Working with GPT for LLM coding has been hit or miss because the documentation is often out of date. It'd be nice to find a decent local model we can train on current docs, project code and Gradio. So it'll be easier for people to develop and code for AI projects. | 2 | 0 | 2023-04-10T15:03:27 | synn89 | false | null | 0 | jfpbxgm | false | /r/LocalLLaMA/comments/12h8ayz/code_generation_benchmark/jfpbxgm/ | false | 2 |
t1_jfpba50 | It is so funny how the model switches tasks to something totally unrelated when it doesn't know how to answer. But I guess that's logical, when all obvious answers are somehow contradictory, they get assigned low probabilities, and then the model samples a totally different word and continues from there. | 4 | 0 | 2023-04-10T14:59:00 | StaplerGiraffe | false | null | 0 | jfpba50 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpba50/ | false | 4 |
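The intuition in this comment is easy to demonstrate with a toy next-token distribution (the probabilities below are invented for illustration): when one continuation dominates, sampling almost always stays on it, but when all obvious answers have similar low probabilities, the sampler wanders off the "best" token most of the time.

```python
import random

def draw(probs, rng):
    """Draw an index from a discrete probability distribution."""
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

rng = random.Random(42)
confident = [0.90, 0.05, 0.03, 0.02]      # one clear continuation
contradictory = [0.28, 0.26, 0.24, 0.22]  # no answer stands out

n = 10_000
off_top_confident = sum(draw(confident, rng) != 0 for _ in range(n)) / n
off_top_contradictory = sum(draw(contradictory, rng) != 0 for _ in range(n)) / n
# With a dominant token the sampler rarely leaves it; with a near-flat
# distribution it usually does, and the model continues from there.
print(off_top_confident, off_top_contradictory)
```

This is also why lowering temperature (sharpening the distribution) makes such topic switches less frequent.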
t1_jfpaui5 | oh - I thought maybe you finetuned some model to a specific personality :) | 2 | 0 | 2023-04-10T14:56:00 | szopen76 | false | null | 0 | jfpaui5 | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfpaui5/ | false | 2 |
t1_jfp9n8o | How does billing work? Does billing start and stop with processes being called, or is it from the time you create your remote server to the time you delete it? | 2 | 0 | 2023-04-10T14:47:45 | CodeNewfie | false | null | 0 | jfp9n8o | false | /r/LocalLLaMA/comments/12gvvqj/best_cloudvps_hosting_for_current_llms/jfp9n8o/ | false | 2 |
t1_jfp94ke | Maya is the name the LLM chose when I asked if she wanted to name herself... She seemed to enjoy the process. | 1 | 0 | 2023-04-10T14:44:08 | SeymourBits | false | null | 0 | jfp94ke | false | /r/LocalLLaMA/comments/12hfveg/alpaca_and_theory_of_mind/jfp94ke/ | false | 1 |
t1_jfp8l78 | I used https://github.com/oobabooga/text-generation-webui. It’s a gradio interface for llms that has a training tab. This is all still pretty experimental so there’s not a ton of documentation on best practices etc, but if you want to try the settings I used there’s a screenshot in the repo I posted. | 3 | 0 | 2023-04-10T14:40:16 | Bublint | false | null | 0 | jfp8l78 | false | /r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfp8l78/ | false | 3 |
t1_jfp8iix | If you have numbered files, you might want to copy them all into one file like
Windows:
copy model.bin.001+model.bin.002+model.bin.003+model.bin.004 model.bin
UNIX:
cat model.bin.001 model.bin.002 model.bin.003 model.bin.004 > model.bin | 2 | 0 | 2023-04-10T14:39:46 | aidenr | false | null | 0 | jfp8iix | false | /r/LocalLLaMA/comments/12alri3/difference_in_model_formats_how_to_tell_which/jfp8iix/ | false | 2 |
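On UNIX a wildcard shortens the command, since `.001`, `.002`, ... sort correctly in lexical order. A self-contained demo (file names and contents are placeholders):

```shell
# Work in a scratch directory so no real files are touched.
cd "$(mktemp -d)"
printf 'part1-' > model.bin.001
printf 'part2'  > model.bin.002

# Concatenate the numbered parts in order into one file.
cat model.bin.0* > model.bin
cat model.bin   # -> part1-part2
```

As a sanity check, the size of the joined file should equal the sum of the part sizes (`ls -l model.bin*`).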