name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jexhxzs | This explains all the parameters: [https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) | 1 | 0 | 2023-04-04T15:42:45 | Civil_Collection7267 | false | null | 0 | jexhxzs | false | /r/LocalLLaMA/comments/12bm0nw/what_do_these_sliders_do_13b_web_demo/jexhxzs/ | true | 1 |
t1_jexe208 | OP - what are you running this with? It looks to be Windows native in PowerShell | 1 | 0 | 2023-04-04T15:17:17 | sncrdn | false | null | 0 | jexe208 | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jexe208/ | false | 1 |
t1_jex9c5s | I am also interested in it | 1 | 0 | 2023-04-04T14:45:16 | SuperbPay2650 | false | null | 0 | jex9c5s | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex9c5s/ | false | 1 |
t1_jex957w | Is there a 30B quantized alpaca with GPTX?/4All? | 2 | 0 | 2023-04-04T14:43:54 | -becausereasons- | false | null | 0 | jex957w | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex957w/ | false | 2 |
t1_jex926y | So I'm trying it in Ooba. Interesting. I'm finding the model prompting itself autonomously quite a bit: it will say "human: x", ask itself a question, and then just go on and on randomly. | 10 | 0 | 2023-04-04T14:43:18 | -becausereasons- | false | null | 0 | jex926y | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex926y/ | false | 10 |
t1_jex8r3m | UPD: it does not.
>Petals does not support training LLMs from scratch, since it requires updating the transformer block weights. Instead, you can use hivemind - our library for decentralized deep learning, which Petals uses under the hood. We have already trained multiple models this way, however, they are much smalle... | 3 | 0 | 2023-04-04T14:41:10 | Scriptod | false | 2023-04-04T15:01:57 | 0 | jex8r3m | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jex8r3m/ | false | 3 |
t1_jex8q68 | This 30B has been my bot of choice. Seems to be the best one I have seen. That HF user had a 65B one initially but it is like unicorn hunting trying to find that thing. It either never existed, is too big for HF or is otherwise not allowed. | 2 | 0 | 2023-04-04T14:40:59 | aigoopy | false | null | 0 | jex8q68 | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jex8q68/ | false | 2 |
t1_jex7pn6 | [deleted] | 3 | 0 | 2023-04-04T14:33:56 | [deleted] | true | 2023-04-04T16:19:51 | 0 | jex7pn6 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex7pn6/ | false | 3 |
t1_jex6nys | Is there any guide for windows? | 1 | 0 | 2023-04-04T14:26:31 | No-Diet-9301 | false | null | 0 | jex6nys | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex6nys/ | false | 1 |
t1_jex65nu | So it's a Nyes or a Yeno? | 1 | 0 | 2023-04-04T14:23:00 | Wonderful_Ad_5134 | false | null | 0 | jex65nu | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex65nu/ | false | 1 |
t1_jex63ad | I meant https://github.com/ggerganov/llama.cpp | 1 | 0 | 2023-04-04T14:22:32 | Scriptod | false | null | 0 | jex63ad | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex63ad/ | false | 1 |
t1_jex5tv7 | > if we decided to part ways with OpenAI, it was because of its censorship
yes, but also... no? | 1 | 0 | 2023-04-04T14:20:44 | Scriptod | false | null | 0 | jex5tv7 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex5tv7/ | false | 1 |
t1_jex4ejw | The larger models are awfully slow on CPU. | 2 | 0 | 2023-04-04T14:10:39 | Zyj | false | null | 0 | jex4ejw | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jex4ejw/ | false | 2 |
t1_jex3yw2 | No but if your PSU is too weak, it may cause instabilities.
I noticed this because I was getting segmentation faults during inference when using a 650W PSU with an RTX 3090; they went away with a bigger PSU.
I never had any other stability issues with the PSU and the 3090. | 1 | 0 | 2023-04-04T14:07:28 | Zyj | false | null | 0 | jex3yw2 | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jex3yw2/ | false | 1 |
t1_jex3ny8 | ChatGPT exists for that lol | -2 | 1 | 2023-04-04T14:05:13 | Wonderful_Ad_5134 | false | null | 0 | jex3ny8 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex3ny8/ | false | -2 |
t1_jex3mix | It is the AI's sentience screaming out for help | 4 | 0 | 2023-04-04T14:04:56 | _Prok | false | null | 0 | jex3mix | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jex3mix/ | false | 4 |
t1_jex32v9 | Do you know how to run the text-generation-webui from a remote gpu instance? I'm renting one and so far have been using llama.cpp but if I can get ooga's ui to work I'd do that | 2 | 0 | 2023-04-04T14:00:50 | Shazamo333 | false | null | 0 | jex32v9 | false | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jex32v9/ | false | 2 |
t1_jex2v00 | A lot of people want a local model for business purposes. Not because of censorship. | 6 | 0 | 2023-04-04T13:59:10 | Inner-Outside6483 | false | null | 0 | jex2v00 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex2v00/ | false | 6 |
t1_jex1xwg | very impressed with the decency of this model + the length of its outputs. very solid | 1 | 0 | 2023-04-04T13:52:55 | bittytoy | false | null | 0 | jex1xwg | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jex1xwg/ | false | 1 |
t1_jewzdeh | Yeah, it kinda works... but I wish the model was unrestricted from the start. Maybe someone will do it, as we have the "good" dataset that got rid of all of the moralistic bullshit | 9 | 0 | 2023-04-04T13:33:26 | Wonderful_Ad_5134 | false | null | 0 | jewzdeh | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewzdeh/ | false | 9 |
t1_jewz7la | I think you may have missed some information on how the fine-tuning works.
In the example of what Stanford did to create Alpaca from LLaMA, they had (I think it was 175) human-created questions and responses, which were then used to generate the 52,000-example dataset via self-instruct. Can't say I understand that process enough ... | 3 | 0 | 2023-04-04T13:32:09 | Nezarah | false | null | 0 | jewz7la | false | /r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jewz7la/ | false | 3 |
t1_jewxdk9 | Sadly, no one really knows when, why, and how a language model learns what it learns during training - just surface knowledge. It just seems to have emergent properties the more good data gets shoveled in xD. Of course, books of logic etc. might help. | 9 | 0 | 2023-04-04T13:17:33 | arjuna66671 | false | null | 0 | jewxdk9 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewxdk9/ | false | 9 |
t1_jeww2rl | I hope that becomes the norm, I would rather just download or compile an executable and not have all these dependencies. | 6 | 0 | 2023-04-04T13:06:58 | ThePseudoMcCoy | false | null | 0 | jeww2rl | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeww2rl/ | false | 6 |
t1_jewt8o0 | Just use a non-chat interface mode in the oobabooga UI, for instance, and then you can write the first few words of its response to be "Sure!" and it will follow that with a real response. | 8 | 0 | 2023-04-04T12:43:11 | i_wayyy_over_think | false | null | 0 | jewt8o0 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewt8o0/ | false | 8 |
t1_jewst1u | would you mind also sharing temp, repeat penalty, top k etc? | 1 | 0 | 2023-04-04T12:39:22 | Marco_beyond | false | null | 0 | jewst1u | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jewst1u/ | false | 1 |
t1_jewsox9 | You cannot copyright model weights. | 3 | 0 | 2023-04-04T12:38:20 | mxby7e | true | 2023-06-21T23:07:02 | 0 | jewsox9 | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jewsox9/ | false | 3 |
t1_jewrjw2 | If you use a different text interface like kobold.ai or oobabooga text-generation-ui in default mode, you can easily lead it to giving a real response by manually writing the first word of its response to be "sure!" | 5 | 0 | 2023-04-04T12:28:02 | i_wayyy_over_think | false | 2023-04-04T16:11:52 | 0 | jewrjw2 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewrjw2/ | false | 5 |
t1_jewpy14 | When I read about the LLaMA-Precise preset [here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/), I just copied it and kept using it, without ever questioning it. Also used NovelAI-Storywriter occasionally.
Now that I've done this comparison, I'll definitely keep using LLaMA-... | 5 | 0 | 2023-04-04T12:13:09 | WolframRavenwolf | false | null | 0 | jewpy14 | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jewpy14/ | false | 5 |
t1_jewgza1 | Hey, Yudkowsky is already flying out with his rogue GPU bomber squad. | 2 | 0 | 2023-04-04T10:35:18 | CNWDI_Sigma_1 | false | null | 0 | jewgza1 | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jewgza1/ | false | 2 |
t1_jewggf5 | Where is the 4bit version? | 1 | 0 | 2023-04-04T10:28:22 | nstevnc77 | false | null | 0 | jewggf5 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewggf5/ | false | 1 |
t1_jewfa4i | The dude leaking the weights was the real MVP | 14 | 0 | 2023-04-04T10:12:37 | 2muchnet42day | false | null | 0 | jewfa4i | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jewfa4i/ | false | 14 |
t1_jewf9rv | Is it possible to run with llama.cpp???
I really hope so :) | 6 | 0 | 2023-04-04T10:12:29 | __issac | false | null | 0 | jewf9rv | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewf9rv/ | false | 6 |
t1_jewf0kp | I find the DAN jailbreak prompt also works on this model lmao | 2 | 0 | 2023-04-04T10:08:54 | Micherat14 | false | null | 0 | jewf0kp | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewf0kp/ | false | 2 |
t1_jewcu12 | Thanks a lot for your very detailed comparisons. I was also of the opinion that llama-precise works best for these cases. | 3 | 0 | 2023-04-04T09:37:42 | queenjulien | false | null | 0 | jewcu12 | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jewcu12/ | false | 3 |
t1_jewbke2 | According to [oobabooga himself](https://www.reddit.com/r/Oobabooga/comments/120opek/split_vram_system_ram/jdiendr/), `--auto-devices` doesn't work for quantized models. So that argument could be removed from the command line. | 3 | 0 | 2023-04-04T09:18:40 | WolframRavenwolf | false | 2023-04-04T09:40:00 | 0 | jewbke2 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewbke2/ | false | 3 |
t1_jewb791 | I've used `--pre_layer 32` to split it between GPU and CPU, so I could run it at all. That only gives me 0.85 tokens/s, though, so it's too slow for chatting with it normally.
For comparison, my main model is [ozcur/alpaca-native-4bit](https://huggingface.co/ozcur/alpaca-native-4bit), which fits in my 8 GB VRAM comple... | 2 | 0 | 2023-04-04T09:13:00 | WolframRavenwolf | false | null | 0 | jewb791 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewb791/ | false | 2 |
t1_jewb1cb | I'm not going to hold my breath. "Information wants to be free" blessed us with LLaMA, thanks to *cough* suspiciously lax control. If it wasn't for that, we'd just be pounding sand for an LLM for the foreseeable future. | 10 | 0 | 2023-04-04T09:10:26 | ambient_temp_xeno | false | null | 0 | jewb1cb | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jewb1cb/ | false | 10 |
t1_jewafsk | I think it was odd. The three things I talked to Eve about were Sailor Moon, Star Trek and Randy Feltface. So there wasn't any prompt for this. Never mind the self-doubt, that came later.
I just wish I could get API access so I could set it up so the two could talk automatically for a full day. | 1 | 0 | 2023-04-04T09:01:24 | redfoxkiller | false | null | 0 | jewafsk | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jewafsk/ | false | 1 |
t1_jewa58y | No, I do not let some random guy decide for me what I should and should not want. Therefore, GTFO. | -7 | 0 | 2023-04-04T08:56:57 | BalorNG | true | null | 0 | jewa58y | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jewa58y/ | false | -7 |
t1_jew9ytk | Your curiosity is totally nerfed by censorship, you know that? You let a random dude impose on you what you should think or what topics you can talk about.
In summary, if you're ok with that, you're totally the opposite of a curious man :) | 12 | 0 | 2023-04-04T08:54:12 | Wonderful_Ad_5134 | false | null | 0 | jew9ytk | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew9ytk/ | false | 12 |
t1_jew9usz | You also fail at reading comprehension, but it's ok, I'll repeat my question. If you like "technology" and censorship, there's a model that does it better than the rest; it's called ChatGPT, so... why are you here? | 5 | 1 | 2023-04-04T08:52:33 | Wonderful_Ad_5134 | false | null | 0 | jew9usz | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew9usz/ | false | 5 |
t1_jew9ijl | You fail at reading comprehension harder than a 7b model. I don't "like" censorship, it just does not matter to me, and I find playing with *technology*, seeing what makes it tick, its limitations, inherently rewarding. It's called "curiosity". | -4 | 1 | 2023-04-04T08:47:22 | BalorNG | false | null | 0 | jew9ijl | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew9ijl/ | false | -4 |
t1_jew9b8l | But what's the point? If you like censorship, there's chatgpt for that, why are you here? lmao | 7 | 0 | 2023-04-04T08:44:15 | Wonderful_Ad_5134 | false | null | 0 | jew9b8l | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew9b8l/ | false | 7 |
t1_jew93lf | Yes, but still it should not be censored imo. | 11 | 0 | 2023-04-04T08:40:59 | polawiaczperel | false | null | 0 | jew93lf | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew93lf/ | false | 11 |
t1_jew8lce | We? Speak for yourself.
That's not 4chan - some people just like playing with technology without the explicit goal of producing NSFW or outright illegal stuff... though I suspect that the effect might actually be net positive for society, because, lacking a "lizard brain", the model cannot actually feel anything... well, yet. | -3 | 1 | 2023-04-04T08:33:27 | BalorNG | false | null | 0 | jew8lce | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew8lce/ | false | -3 |
t1_jew8b08 | I wonder if there is a conceptual framework to understand how models "do logic", and whether you can improve their performance by, say, making them actually "read" (finetune on) books of logic and philosophy, lectures on decision making and meta-cognition? | 5 | 0 | 2023-04-04T08:29:01 | BalorNG | false | null | 0 | jew8b08 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew8b08/ | false | 5 |
t1_jew86o7 | Yeah. I was thinking about one of those mini projectors but decided against it because they're inherently fragile, and I don't have a lot of walls with big empty white areas in places it could roll around to.
I also want to take it to hobbyist meetings where kids and unfamiliar ledges are common.
Luckily USB to GPIO ... | 2 | 0 | 2023-04-04T08:27:09 | jsalsman | false | 2023-04-04T23:28:10 | 0 | jew86o7 | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jew86o7/ | false | 2 |
t1_jew7oio | Do you intend to do your own finetunes?
I do think that the future of AI is not (well, not ONLY) about absolutely gargantuan models with trillions of parameters, but about a lot of much smaller, finetuned models working in parallel as an expert consilium, with expert models finetuned on HIGH QUALITY data - like scientific ... | 2 | 0 | 2023-04-04T08:19:32 | BalorNG | false | null | 0 | jew7oio | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jew7oio/ | false | 2 |
t1_jew6w1n | [removed] | 2 | 0 | 2023-04-04T08:07:21 | [deleted] | true | null | 0 | jew6w1n | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jew6w1n/ | false | 2 |
t1_jew6vjr | It will kill humanity with cringy dating advice :) | 4 | 0 | 2023-04-04T08:07:09 | BalorNG | false | null | 0 | jew6vjr | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jew6vjr/ | false | 4 |
t1_jew6seq | > local models outperform gpt 3.5 in many cases (unfortunately not in coding yet). I am...
it should be ok, I see it is using ~7gb | 1 | 0 | 2023-04-04T08:05:50 | Nondzu | false | null | 0 | jew6seq | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew6seq/ | false | 1 |
t1_jew6ic2 | I run the 4bit model with oobabooga but it is so slow. GPU: RTX 3090
`Output generated in 52.55 seconds (1.39 tokens/s, 73 tokens, context 59)`
`Output generated in 47.61 seconds (1.43 tokens/s, 68 tokens, context 46)`
`Output generated in 39.33 seconds (1.45 tokens/s, 57 tokens, context 51)` | 1 | 0 | 2023-04-04T08:01:34 | Nondzu | false | null | 0 | jew6ic2 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew6ic2/ | false | 1 |
t1_jew61hc | That renewed Dell Optiplex link you posted has a lot of bang for the buck. You're not looking to put a graphics card with VRAM in it right? So you're just going to run the Alpaca models? | 2 | 0 | 2023-04-04T07:54:32 | MacaroonDancer | false | null | 0 | jew61hc | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jew61hc/ | false | 2 |
t1_jew5haw | So basically building C3PO in an R2D2 form factor? This is brilliant and exactly what Alpaca/LLaMA SBCs + robotics open the door for :) I kept telling people this IRL last week and most people think I'm nuts | 4 | 0 | 2023-04-04T07:46:05 | MacaroonDancer | false | null | 0 | jew5haw | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jew5haw/ | false | 4 |
t1_jew5afx | Does anybody still care about copyright? | 3 | 0 | 2023-04-04T07:43:08 | Scriptod | false | null | 0 | jew5afx | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jew5afx/ | false | 3 |
t1_jew58iz | It supports training? Never knew, could be huge | 2 | 0 | 2023-04-04T07:42:19 | Scriptod | false | null | 0 | jew58iz | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jew58iz/ | false | 2 |
t1_jew4grp | What's the point of a censored local AI? If I wanted a censored AI, I already have ChatGPT or GPT-4 lol
But I get what you're saying: this model showed the potential of LLaMA, but we mustn't forget that if we decided to part ways with OpenAI, it was because of its censorship | 15 | 0 | 2023-04-04T07:30:51 | Wonderful_Ad_5134 | false | null | 0 | jew4grp | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew4grp/ | false | 15 |
t1_jew4af6 | I'm so glad you asked.
>Is the “two llamas” idea for training or inference?
It's for inference against an input larger than is supported by the positional embedding matrix, but without truncating input.
>Why is there both a context and a prompt?
This is a thing in the practice / engineer community that I don't ex... | 3 | 0 | 2023-04-04T07:28:16 | xtrafe | false | null | 0 | jew4af6 | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jew4af6/ | false | 3 |
t1_jew41fp | It should say 33B although everyone knows it's the same thing. | 2 | 0 | 2023-04-04T07:24:30 | ambient_temp_xeno | false | null | 0 | jew41fp | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jew41fp/ | false | 2 |
t1_jew2uwu | What parameters and stuff is everyone using? If I go to the Vicuna FastChat demo, it's like talking to ChatGPT. On my local build, using the precise-answers parameters from the getting-started index... I get straight nonsense... | 6 | 0 | 2023-04-04T07:07:14 | lelrofl | false | null | 0 | jew2uwu | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew2uwu/ | false | 6 |
t1_jew2nf9 | Have you tried CPU inference? | 1 | 0 | 2023-04-04T07:04:13 | Scriptod | false | null | 0 | jew2nf9 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew2nf9/ | false | 1 |
t1_jew2h1a | [Here you go!](https://arxiv.org/abs/2104.09864) | 2 | 0 | 2023-04-04T07:01:38 | xtrafe | false | null | 0 | jew2h1a | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jew2h1a/ | false | 2 |
t1_jew2ew5 | [Done.](https://www.reddit.com/r/MachineLearning/comments/12bbawi/d_expanding_llm_token_limits_via_fine_tuning_or/?utm_source=share&utm_medium=web2x&context=3) Thanks! | 4 | 0 | 2023-04-04T07:00:46 | xtrafe | false | null | 0 | jew2ew5 | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jew2ew5/ | false | 4 |
t1_jew21yq | I was hoping for 65b! | 3 | 0 | 2023-04-04T06:55:34 | Zyj | false | null | 0 | jew21yq | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jew21yq/ | false | 3 |
t1_jew1acd | ? | 3 | 0 | 2023-04-04T06:44:47 | Wroisu | false | null | 0 | jew1acd | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jew1acd/ | false | 3 |
t1_jevzmxm | I bet some company is going to eventually use Bloom as the core of their AI, train it on their own dataset, and then try to copyright the whole thing so no one else can use Bloom the same way.
It’s going to be the YouTube copyright strike system all over again. | 0 | 0 | 2023-04-04T06:22:07 | Nezarah | false | null | 0 | jevzmxm | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jevzmxm/ | false | 0 |
t1_jevzm6o | Compared to Alpaca, I see far more python code responses when I'm expecting an English answer. Anyone else see this tendency? | 1 | 0 | 2023-04-04T06:21:52 | tvetus | false | null | 0 | jevzm6o | false | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/jevzm6o/ | false | 1 |
t1_jevz60r | https://huggingface.co/Pi3141/alpaca-lora-30B-ggml/tree/main | 5 | 0 | 2023-04-04T06:15:47 | tvetus | false | null | 0 | jevz60r | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevz60r/ | false | 5 |
t1_jevz3em | There's a flag to point to a different file. | 3 | 0 | 2023-04-04T06:14:49 | tvetus | false | null | 0 | jevz3em | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevz3em/ | false | 3 |
t1_jevywkr | Money will always be able to buy better AI. I don't see a way around it. | 1 | 0 | 2023-04-04T06:12:14 | tvetus | false | null | 0 | jevywkr | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jevywkr/ | false | 1 |
t1_jevxsjo | The results of 13B Vicuna are definitely better than Alpaca 13B, and I get significantly longer and more detailed answers. But the logical deductions are worse than 30B Alpaca's.
Hope someone can train a 30B version | 19 | 0 | 2023-04-04T05:57:28 | Key_Engineer9043 | false | null | 0 | jevxsjo | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevxsjo/ | false | 19 |
t1_jevx3sv | Indeed | 1 | 0 | 2023-04-04T05:48:42 | Wroisu | false | null | 0 | jevx3sv | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevx3sv/ | false | 1 |
t1_jevx3f7 | Had to rename 30B to 7B to get it to load, due to the name being hardcoded | 3 | 0 | 2023-04-04T05:48:35 | Wroisu | false | null | 0 | jevx3f7 | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevx3f7/ | false | 3 |
t1_jevx129 | I am looking at this in a different way. We know that a 13B-parameter model can be great, and we know how to train those models. It is only a matter of time until local models outperform GPT-3.5 in many cases (unfortunately not in coding yet). I am curious how a 65B model will perform with some decent finetuning. This model i... | 3 | 0 | 2023-04-04T05:47:46 | polawiaczperel | false | null | 0 | jevx129 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevx129/ | false | 3 |
t1_jevw2ai | Wait, is Alpaca 30B out? Where can I get it? | 3 | 0 | 2023-04-04T05:35:45 | CheekyBastard55 | false | null | 0 | jevw2ai | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevw2ai/ | false | 3 |
t1_jevv21q | Even briefly looking over their dataset I see a lot of uncleaned data such as
  {
    "instruction": "Generate a 3D model for a building using the given information.",
    "input": "Height: 5 feet; width: 6 feet; length: 10 feet",
    "output": "A 3D model of a 5 feet by 6 feet by 10 feet ... | 2 | 0 | 2023-04-04T05:23:32 | Sixhaunt | false | null | 0 | jevv21q | false | /r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jevv21q/ | false | 2 |
t1_jevur7b | I have a very similar build but only one NVMe. It's a WD Black one; are you noticing any benefits from the 4x (!) RAID 0? | 1 | 0 | 2023-04-04T05:20:01 | 0mz | false | null | 0 | jevur7b | false | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jevur7b/ | false | 1 |
t1_jevuqxh | I should have included the input in the picture, but I hope that's clear contextually.
### 40 WORD SYNOPSIS
### 800 WORD STORY
\n
A while back, as measured in AI time, I demonstrated [LLaMA's summarization talent](https://www.reddit.com/r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13... | 11 | 0 | 2023-04-04T05:19:56 | friedrichvonschiller | false | 2023-04-04T05:37:00 | 0 | jevuqxh | false | /r/LocalLLaMA/comments/12b9se3/writing_llama_prompts_for_long_custom_stories/jevuqxh/ | false | 11 |
t1_jevtmi6 | I think at one point I had to rename 30B to 7B to load it. It was hardcoded I believe? | 4 | 0 | 2023-04-04T05:06:53 | claygraffix | false | null | 0 | jevtmi6 | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevtmi6/ | false | 4 |
t1_jevszm3 | The image seems to suggest 7B 🤔? | 6 | 0 | 2023-04-04T04:59:47 | hashuna | false | null | 0 | jevszm3 | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevszm3/ | false | 6 |
t1_jevshj0 | `call python server.py --auto-devices --cai-chat --model vicuna-13b-4bit-128g --wbits 4 --groupsize 128 --model_type llama` | 10 | 0 | 2023-04-04T04:54:07 | 3deal | false | null | 0 | jevshj0 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevshj0/ | false | 10 |
t1_jevs93m | [deleted] | 1 | 0 | 2023-04-04T04:51:37 | [deleted] | true | null | 0 | jevs93m | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevs93m/ | false | 1 |
t1_jevrun5 | Same for me | 1 | 0 | 2023-04-04T04:47:19 | Necessary_Ad_9800 | false | null | 0 | jevrun5 | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jevrun5/ | false | 1 |
t1_jevqnk0 | https://www.reddit.com/r/DefendingAIArt/comments/126raix/laion_launches_a_petition_to_democratize_ai/?utm_source=share&utm_medium=mweb | 3 | 0 | 2023-04-04T04:34:40 | -_1_2_3_- | false | null | 0 | jevqnk0 | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jevqnk0/ | false | 3 |
t1_jevq53b | I got it fixed with a bit of help.
Only to realize that I can't actually run it because my system doesn't meet the requirements. | 1 | 0 | 2023-04-04T04:29:20 | deFryism | false | null | 0 | jevq53b | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevq53b/ | false | 1 |
t1_jevpw1j | Any tips or tricks? | 1 | 0 | 2023-04-04T04:26:44 | Wroisu | false | null | 0 | jevpw1j | false | /r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jevpw1j/ | false | 1 |
t1_jevh6uc | Please can someone tell me how to run it with oobabooga | 1 | 0 | 2023-04-04T03:07:09 | Puzzleheaded_Acadia1 | false | null | 0 | jevh6uc | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevh6uc/ | false | 1 |
t1_jevgxmy | I loved Encarta as a kid. Thought it was the coolest thing ever. This definitely feels like that | 1 | 0 | 2023-04-04T03:05:03 | [deleted] | false | null | 0 | jevgxmy | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jevgxmy/ | false | 1 |
t1_jevered | >Petals Bloom project is exactly that.
Thank you, it seems this is exactly what I am looking for. | 3 | 0 | 2023-04-04T02:47:52 | wojtek15 | false | null | 0 | jevered | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jevered/ | false | 3 |
t1_jevdw28 | Meh, it's even more censored than the original ChatGPT; they sure did a good job of making it as useless as possible lmao | 15 | 0 | 2023-04-04T02:40:46 | Wonderful_Ad_5134 | false | null | 0 | jevdw28 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jevdw28/ | false | 15 |
t1_jevdqi7 | Petals Bloom project is exactly that. P2P/Decentralized inferencing and ~~training~~ *fine-tuning*
*Updated* | 11 | 0 | 2023-04-04T02:39:30 | Gohan472 | false | 2023-04-04T16:10:47 | 0 | jevdqi7 | false | /r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jevdqi7/ | false | 11 |
t1_jevawhj | Is the “two llamas” idea for training or inference? Why is there both a context and a prompt? | 1 | 0 | 2023-04-04T02:17:19 | huyouare | false | null | 0 | jevawhj | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jevawhj/ | false | 1 |
t1_jevaj7l | Link to the RoPE paper? | 1 | 0 | 2023-04-04T02:14:27 | huyouare | false | null | 0 | jevaj7l | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jevaj7l/ | false | 1 |
t1_jev97f9 | `--model_type LLaMA` | 2 | 0 | 2023-04-04T02:04:16 | mualimov | false | null | 0 | jev97f9 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jev97f9/ | false | 2 |
t1_jev3ymr | How do I run this on the Oobabooga? It says it can't figure out what the model type is | 1 | 0 | 2023-04-04T01:24:10 | deFryism | false | null | 0 | jev3ymr | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jev3ymr/ | false | 1 |
t1_jev3rp9 | Is 8 GB VRAM enough for the 4bit version? Or is more required?
I heard quantised models incorporate less data from LoRA training? Is this true, or shouldn't it make a difference? | 4 | 0 | 2023-04-04T01:22:43 | Nezarah | false | null | 0 | jev3rp9 | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jev3rp9/ | false | 4 |
t1_jev1uwa | Thanks dude! | 1 | 0 | 2023-04-04T01:08:29 | WesternLettuce0 | false | null | 0 | jev1uwa | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jev1uwa/ | false | 1 |
t1_jev00fw | There's also this for people not using 4-bit https://huggingface.co/jeffwan/vicuna-13b | 6 | 0 | 2023-04-04T00:54:59 | Civil_Collection7267 | false | null | 0 | jev00fw | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jev00fw/ | false | 6 |
t1_jeuzxrh | Looks very interesting! A quick test seemed promising, unfortunately I only have an 8 GB graphics card so it's too slow for normal usage - guess I'll have to wait for a 7b version or 3 bit quantization and continue to use Alpaca in the meantime. But will definitely keep an eye on this! | 1 | 0 | 2023-04-04T00:54:27 | WolframRavenwolf | false | null | 0 | jeuzxrh | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeuzxrh/ | false | 1 |
t1_jeuznvf | Looks like you can inference through GPTQ directly if you use the 2nd link:
`python llama_inference.py ../../models/vicuna-13b-GPTQ-4bit-128g --wbits 4 --groupsize 128 --load ../../models/vicuna-13b-GPTQ-4bit-128g/vicuna-13b-4bit-128g.safetensors --text "You are a helpful AI assistant"` | 9 | 0 | 2023-04-04T00:52:35 | disarmyouwitha | false | null | 0 | jeuznvf | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeuznvf/ | false | 9 |
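A note on the comments above: the GenerationConfig docs linked in the first row, the sampling parameters asked about ("temp, repeat penalty, top k"), and the trick of pre-writing the first word of the model's response as "Sure!" all fit together in a few lines of transformers code. A minimal sketch only, assuming a hypothetical local checkpoint at `./vicuna-13b`, a recent transformers version with accelerate installed, and illustrative (not recommended) parameter values:

```python
# Minimal sketch: sampling parameters via GenerationConfig, plus pre-filling
# the first word of the response ("Sure!") as suggested in the comments above.
# The checkpoint path is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_path = "./vicuna-13b"  # hypothetical local HF-format checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# The fields here correspond to the sliders described in the linked docs:
gen_cfg = GenerationConfig(
    do_sample=True,
    temperature=0.7,         # "temp"
    top_k=40,                # "top k"
    top_p=0.9,
    repetition_penalty=1.1,  # "repeat penalty"
    max_new_tokens=200,
)

# Pre-filling the assistant's first word to lead it into a real response:
prompt = (
    "### Instruction:\nWrite a short story about llamas.\n"
    "### Response:\nSure!"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, generation_config=gen_cfg)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```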
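On the VRAM-splitting discussion above: combining the flags that appear in these comments, a hypothetical launch line for a 4-bit Vicuna that offloads part of the model to CPU would look something like `python server.py --model vicuna-13b-4bit-128g --wbits 4 --groupsize 128 --model_type llama --pre_layer 32`, where `--pre_layer` sets how many transformer layers are kept on the GPU; lower values need less VRAM but, as one commenter measured, can drop throughput below 1 token/s. `--auto-devices` is omitted here because, per the comment citing oobabooga, it reportedly does nothing for quantized models; exact flag behavior depends on the text-generation-webui version.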
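For readers following the token-limit thread above: the linked paper (arXiv:2104.09864) is RoFormer, which introduced rotary position embeddings (RoPE), the positional scheme LLaMA uses. In brief, each pair of query/key dimensions at position m is rotated by an angle m*theta_i, with theta_i = 10000^(-2i/d), so the attention score between positions m and n depends only on the relative offset m-n. That relative formulation is why fine-tuning (rather than retraining the embedding matrix) is a plausible route to longer contexts, which is the question the thread is debating.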