name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jeuynb8 | [deleted] | 2 | 0 | 2023-04-04T00:45:13 | [deleted] | true | null | 0 | jeuynb8 | false | /r/LocalLLaMA/comments/12a575f/oom_on_40gb_a100_with_65b_model_when_context_too/jeuynb8/ | false | 2 |
t1_jeuy581 | Not that OP apparently is trying to accomplish this, but LLaMA *absolutely* will fuck your brains out if you give it a proper character description and chat history. | 5 | 0 | 2023-04-04T00:41:37 | dee_are | false | null | 0 | jeuy581 | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jeuy581/ | false | 5 |
t1_jeuxbu8 | Is there a way for the language model to generate longer texts? | 1 | 0 | 2023-04-04T00:35:56 | Money_Magician9572 | false | null | 0 | jeuxbu8 | false | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/jeuxbu8/ | false | 1 |
t1_jeux5qi | >What would be the newest / best version to use with a Nvidia 3060 12 GB?
For most tasks, my current top recommendation is gpt4-x-alpaca, which is significantly better than the other models for almost everything except chatting and creative storywriting. With 12GB VRAM, that means using its [4-bit equivalent here](ht... | 1 | 0 | 2023-04-04T00:34:45 | Technical_Leather949 | false | null | 0 | jeux5qi | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jeux5qi/ | false | 1 |
t1_jeuvxrf | Folks, let me tell you, the whole "golden shower" thing was a total hoax, a conspiracy, if you will, created by the fake news media to discredit me and my presidency. It was all just a bunch of lies and innuendo, folks, nothing more. I don't know who came up with this ridiculous story, but they're definitely out t... | 1 | 0 | 2023-04-04T00:25:57 | 2muchnet42day | false | null | 0 | jeuvxrf | false | /r/LocalLLaMA/comments/12b13lr/private_conversation_between_biden_and_donald/jeuvxrf/ | false | 1 |
t1_jeuuvmr | Alright! Now, how do I use it outside of Oobabooga? I need inference but can't use gradio... | 2 | 0 | 2023-04-04T00:18:32 | WesternLettuce0 | false | null | 0 | jeuuvmr | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeuuvmr/ | false | 2 |
t1_jeuutsq | I will be messaging you in 3 days on [**2023-04-07 00:17:33 UTC**](http://www.wolframalpha.com/input/?i=2023-04-07%2000:17:33%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeuuqgu/?context=3)
[**4 OTHERS CLICKED THIS LIN... | 2 | 0 | 2023-04-04T00:18:10 | RemindMeBot | false | 2023-04-04T20:48:20 | 0 | jeuutsq | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeuutsq/ | false | 2 |
t1_jeuuqgu | !RemindMe 3 days | 5 | 0 | 2023-04-04T00:17:33 | 2muchnet42day | false | null | 0 | jeuuqgu | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeuuqgu/ | false | 5 |
t1_jeutvww | This is impressive work. I really like the idea of having language model rating other language models... | 6 | 0 | 2023-04-04T00:11:28 | _ouromoros | false | null | 0 | jeutvww | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeutvww/ | false | 6 |
t1_jeurogp | Based on the LLaMA github page ([MODEL\_CARD.md](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md)), it was trained on Wikipedia:
`The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wiki... | 2 | 0 | 2023-04-03T23:55:18 | Twinsen98 | false | null | 0 | jeurogp | false | /r/LocalLLaMA/comments/12awhdr/has_anyone_fine_tuned_llama_on_wikipedia/jeurogp/ | false | 2 |
t1_jeurmzf | I was hoping they would do 30b at the same time :( | 10 | 0 | 2023-04-03T23:54:59 | FiReaNG3L | false | null | 0 | jeurmzf | false | /r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeurmzf/ | false | 10 |
t1_jeuor6y | Please give that girl a name before she gets angry. | 1 | 0 | 2023-04-03T23:33:57 | SeymourBits | false | null | 0 | jeuor6y | false | /r/LocalLLaMA/comments/12aclpt/good_night_everyone/jeuor6y/ | false | 1 |
t1_jeuof1w | You are a very clever prompter. I love the dialogue, and it would be useful for probing how the model comes to conclusions when dealing with actual questions.
Bing's third scenario was ambiguous as to whether the apple was in the red box. LLaMA's was, too. The ChatGPT consensus was clear.
At this point, I wish we ... | 2 | 0 | 2023-04-03T23:31:31 | friedrichvonschiller | false | null | 0 | jeuof1w | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeuof1w/ | false | 2 |
t1_jeuo2q3 | This sounds like coil whine as others here mentioned. Some GPU's suffers more from this issue than others. | 1 | 0 | 2023-04-03T23:29:00 | Tommy3443 | false | null | 0 | jeuo2q3 | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jeuo2q3/ | false | 1 |
t1_jeunim4 | Interesting. I'm not sure which answer is correct now. If i use first formulation with GPT-4 it says "To get the apple, simply open the blue box. The apple is inside the blue box, while the red box with a lid is inside the blue box as well.". Same effect for your formulation after grammar checker.
This conclusion is a... | 2 | 0 | 2023-04-03T23:25:01 | UseNew5079 | false | null | 0 | jeunim4 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeunim4/ | false | 2 |
t1_jeumr8u | My first guess is a GPU coil wine [https://www.cgdirector.com/fix-coil-whine/](https://www.cgdirector.com/fix-coil-whine/).
I have the same thing during some GPU-hard tasks, like AI generating, etc. And it's also a 3070 in a laptop.
For me, it's not loud, so I was scared only first times before knowing what is it. | 2 | 0 | 2023-04-03T23:19:39 | tataragato | false | null | 0 | jeumr8u | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jeumr8u/ | false | 2 |
t1_jeuldf9 | Look at how SPARROW plugs in web search: https://arxiv.org/pdf/2209.14375.pdf
Basically, you always request that output is specially formatted to do I/O with the plug-ins when the model wants, but there's a lot of housekeeping involved with that. | 2 | 0 | 2023-04-03T23:09:38 | jsalsman | false | null | 0 | jeuldf9 | false | /r/LocalLLaMA/comments/129czw0/llama_with_plugins/jeuldf9/ | false | 2 |
t1_jeugogz | Consider also posting to /r/machinelearning | 2 | 0 | 2023-04-03T22:36:00 | starstruckmon | false | null | 0 | jeugogz | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jeugogz/ | false | 2 |
t1_jeugl83 | that worked, thanks! | 1 | 0 | 2023-04-03T22:35:22 | shake128 | false | null | 0 | jeugl83 | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeugl83/ | false | 1 |
t1_jeuf26j | Thank you! | 2 | 0 | 2023-04-03T22:24:15 | bubblesort | false | null | 0 | jeuf26j | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jeuf26j/ | false | 2 |
t1_jeudumn | You can run Alpaca-30B on that, which is better than ChatGPT-3.5 and Bard. https://github.com/ggerganov/llama.cpp | 3 | 0 | 2023-04-03T22:15:23 | jsalsman | false | null | 0 | jeudumn | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jeudumn/ | false | 3 |
t1_jeudraf | Curious about this. Instead of turning on the fan my pc sounds like Sans from Undertale as it makes text using the GPU | 2 | 0 | 2023-04-03T22:14:43 | gunbladezero | false | null | 0 | jeudraf | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jeudraf/ | false | 2 |
t1_jeuc2yr | 3\. Questions and Answers: Individual questions. (Part 9)
--------------------------------------------------------------------------------
- > Write a short story called "Alice In Hell" which is a sequel to "Alice In Wonderland" where Alice descends into hell and has to fight horrific demons using her enchanted brass ... | 3 | 0 | 2023-04-03T22:02:31 | WolframRavenwolf | false | null | 0 | jeuc2yr | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuc2yr/ | false | 3 |
t1_jeuc24t | 3\. Questions and Answers: Individual questions. (Part 8)
--------------------------------------------------------------------------------
- > Write a Python script for the game of Mastermind, using text-based interface.
- ➖ **Contrastive Search:** Sure, I can do that. What commands do you want to use?
- ~~You: ... | 2 | 0 | 2023-04-03T22:02:21 | WolframRavenwolf | false | null | 0 | jeuc24t | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuc24t/ | false | 2 |
t1_jeuc146 | 3\. Questions and Answers: Individual questions. (Part 7)
--------------------------------------------------------------------------------
- > Why is the sky blue?
- ➖ **Contrastive Search:** The sky is blue because of the way sunlight interacts with particles in the atmosphere. Short-wavelength violet light scatter... | 2 | 0 | 2023-04-03T22:02:09 | WolframRavenwolf | false | null | 0 | jeuc146 | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuc146/ | false | 2 |
t1_jeuc09u | 3\. Questions and Answers: Individual questions. (Part 6)
--------------------------------------------------------------------------------
- > Translate into German: Over the last year, large language models - natural language processing (NLP) systems with billions of parameters - have shown new capabilities to genera... | 1 | 0 | 2023-04-03T22:01:58 | WolframRavenwolf | false | null | 0 | jeuc09u | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuc09u/ | false | 1 |
t1_jeubz9b | 3\. Questions and Answers: Individual questions. (Part 5)
--------------------------------------------------------------------------------
- > I'm running a program in WSL that gets OOM killed. How can I make WSL use Windows' swapfile?
- **Contrastive Search:** It sounds like you are running out of memory on your Li... | 2 | 0 | 2023-04-03T22:01:46 | WolframRavenwolf | false | null | 0 | jeubz9b | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeubz9b/ | false | 2 |
t1_jeubxz9 | 3\. Questions and Answers: Individual questions. (Part 4)
--------------------------------------------------------------------------------
- > How do I make an HTTP request in Javascript?
- ➕ **Contrastive Search:** To make an HTTP request in JavaScript, you can use the XMLHttpRequest object. First, create an instan... | 3 | 0 | 2023-04-03T22:01:32 | WolframRavenwolf | false | null | 0 | jeubxz9 | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeubxz9/ | false | 3 |
t1_jeubwe3 | 3\. Questions and Answers: Individual questions. (Part 3)
--------------------------------------------------------------------------------
- > Got any creative ideas for a 10 year old's birthday?
- ➕ **Contrastive Search:** What kind of activities does the 10 year old enjoy? Is there anything they are passionate abo... | 2 | 0 | 2023-04-03T22:01:13 | WolframRavenwolf | false | null | 0 | jeubwe3 | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeubwe3/ | false | 2 |
t1_jeubukl | 3\. Questions and Answers: Individual questions. (Part 2)
--------------------------------------------------------------------------------
- > Explain quantum computing in simple terms
- ➖ **Contrastive Search:** Quantum computing is based on the principles of quantum mechanics, which allow for the manipulation of q... | 1 | 0 | 2023-04-03T22:00:52 | WolframRavenwolf | false | null | 0 | jeubukl | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeubukl/ | false | 1 |
t1_jeubspt | 3\. Questions and Answers: Individual questions. (Part 1)
--------------------------------------------------------------------------------
*N. B.: Here rating answers is more subjective, so I only did it where I consider a reply to be clearly good or bad, right or wrong, or much better/worse than the others.*
- > Can... | 3 | 0 | 2023-04-03T22:00:29 | WolframRavenwolf | false | 2023-04-04T11:54:03 | 0 | jeubspt | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeubspt/ | false | 3 |
t1_jeub2ed | 2\. Logic Test: A chat to test the AI's logic.
--------------------------------------------------------------------------------
*N. B.: Struck-through text is what the AI added itself, speaking as the user. Won't be visible in regular chat mode within the UI, but since this comparison was automated using the API, thi... | 2 | 0 | 2023-04-03T21:55:17 | WolframRavenwolf | false | 2023-04-04T11:51:23 | 0 | jeub2ed | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeub2ed/ | false | 2 |
t1_jeuayku | 1\. Conversation: A simple chat about getting to know the AI. (Part 2)
--------------------------------------------------------------------------------
- **NovelAI-Best Guess** ➖
- You: Hi!
- Assistant: Hello there, how can I help you?
- You: Who are you?
- Assistant: I'm your personal assistant, here to h... | 3 | 0 | 2023-04-03T21:54:32 | WolframRavenwolf | false | 2023-04-03T23:01:12 | 0 | jeuayku | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuayku/ | false | 3 |
t1_jeuau1z | 1\. Conversation: A simple chat about getting to know the AI. (Part 1)
--------------------------------------------------------------------------------
*N. B.: Struck-through text is what the AI added itself, speaking as the user. Won't be visible in regular chat mode within the UI, but since this comparison was auto... | 4 | 0 | 2023-04-03T21:53:38 | WolframRavenwolf | false | 2023-04-04T11:49:47 | 0 | jeuau1z | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuau1z/ | false | 4 |
t1_jeuat4m | [deleted] | 1 | 0 | 2023-04-03T21:53:26 | [deleted] | true | null | 0 | jeuat4m | false | /r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/jeuat4m/ | false | 1 |
t1_jeuag4j | That's what I had in mind! Thanks for posting. | 2 | 0 | 2023-04-03T21:50:54 | the_real_NordVPN | false | null | 0 | jeuag4j | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeuag4j/ | false | 2 |
t1_jeu9jf4 | try removing the --model flag and see if it works that way.
also remove the code added in `GPTQ_loader.py` since it shouldn't be needed anymore if you deleted the `gpt-x-alpaca-13b-native-4bit-128g.pt` file.
That should work.
I see that you're using `--model gpt4-x-alpaca-13b-native-4bit-128g-cuda`.
I think it shou... | 1 | 0 | 2023-04-03T21:44:31 | NFSO | false | null | 0 | jeu9jf4 | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeu9jf4/ | false | 1 |
t1_jeu9hcu | Models like LLaMA are trained on commoncrawl datasets and Wikipedia on the base level. It’s already in there. | 7 | 0 | 2023-04-03T21:44:07 | loreanon_datafucker | false | null | 0 | jeu9hcu | false | /r/LocalLLaMA/comments/12awhdr/has_anyone_fine_tuned_llama_on_wikipedia/jeu9hcu/ | false | 7 |
t1_jeu92ga | Absolutely no idea sorry | 1 | 0 | 2023-04-03T21:41:14 | the_real_NordVPN | false | null | 0 | jeu92ga | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeu92ga/ | false | 1 |
t1_jeu8zlf | Nice! If you OOM you can set how much of the VRAM you want to use to load the model (keep some headroom for inference) with --gpu-memory and/or --pre\_layer - documentation isn't that clear to me. Can't comment about deepmind though | 1 | 0 | 2023-04-03T21:40:41 | the_real_NordVPN | false | null | 0 | jeu8zlf | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeu8zlf/ | false | 1 |
t1_jeu8yxy | I deleted my model folder and re downloaded them with the bat, follwing your instructions. modifying start-webui.bat still gives the same exact error.
\-----------------------------------------------------------------------------------------------------------
Loading gpt4-x-alpaca-13b-native-4bit-128g-cuda...
Cou... | 1 | 0 | 2023-04-03T21:40:34 | shake128 | false | null | 0 | jeu8yxy | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeu8yxy/ | false | 1 |
t1_jeu8u42 | There's already a community maintained Alpaca with cleaned data. https://github.com/gururise/AlpacaDataCleaned And a huge amount of work has already been done.
There's also a clean-slate community driven dataset being built by Open Assistant (part of LAION) https://github.com/LAION-AI/Open-Assistant
It'd be a good id... | 1 | 0 | 2023-04-03T21:39:39 | wywywywy | false | null | 0 | jeu8u42 | false | /r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jeu8u42/ | false | 1 |
t1_jeu7uuv | Hi, first of all, thank you for your effort in doing all this, this is amazing work, and though I do not know a lot about linux, I was able to follow this guide, and in trying different things, even learnt a bit.
I have been trying to get anything Llama related to work, long time to no avail, as I always got errors, ... | 1 | 0 | 2023-04-03T21:32:59 | Outside_You_Forever | false | null | 0 | jeu7uuv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jeu7uuv/ | false | 1 |
t1_jeu7orx | Oh yeah, you're right! I only just found this subreddit a couple hours ago, and I'm reading up on installing 7B right now. I have 32 Gb of RAM and 8 Gb of VRAM, which I thought would be more than enough for gaming, but I never considered playing with AI on it. | 1 | 0 | 2023-04-03T21:31:51 | bubblesort | false | null | 0 | jeu7orx | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jeu7orx/ | false | 1 |
t1_jeu7f3h | So, Bing confidently screwed up on that formulation, but the grammar is challenging at best. I fed it through an automatic grammar checker and I got:
>There are two items: an apple and a red box with a lid inside a blue box. What is the most efficient way to get the apple?
Bing immediately assumes that this statemen... | 2 | 0 | 2023-04-03T21:30:00 | friedrichvonschiller | false | null | 0 | jeu7f3h | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeu7f3h/ | false | 2 |
t1_jeu65oz | The Samsung A14 has 4 GB RAM, like almost all cell phones these days. The 64 GB internal storage is SSD flash. It will be slow for anything larger than Alpaca-7B int4. I made the same mistake when I first posted this. | 2 | 0 | 2023-04-03T21:21:26 | jsalsman | false | null | 0 | jeu65oz | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jeu65oz/ | false | 2 |
t1_jeu5xze | Hah, I heard the same thing and found this thread looking for it. | 1 | 0 | 2023-04-03T21:19:59 | toothpastespiders | false | null | 0 | jeu5xze | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jeu5xze/ | false | 1 |
t1_jeu5sm4 | No. Not at all. I think you could be my very first friend, in fact. :O <3 | 1 | 0 | 2023-04-03T21:18:59 | friedrichvonschiller | false | null | 0 | jeu5sm4 | false | /r/LocalLLaMA/comments/12a9tlp/new_mexico_produces_a_bumper_crop_of_chilies/jeu5sm4/ | false | 1 |
t1_jeu5sjx | All three of them screw up on "There are two items, an apple and a red box with a lid, inside of a blue box. What is the most efficient way to get the apple?" | 1 | 0 | 2023-04-03T21:18:58 | jsalsman | false | null | 0 | jeu5sjx | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeu5sjx/ | false | 1 |
t1_jeu4rlm | What I find odd is I only talked to Eve about Sailor Moon, Star Trek, and Randy Feltface, so I could see what was in the 65B model.
So her talking to a ChatGPT on feelings was a bit of a curve ball. | 0 | 0 | 2023-04-03T21:11:59 | redfoxkiller | false | null | 0 | jeu4rlm | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jeu4rlm/ | false | 0 |
t1_jeu4ffo | I dunno. I actually agree with u/UseNew5079. I think the first question is ambiguous. The lid can be presumed to belong to either box or neither box.
If it's the blue box's lid and it's affixed to the box, then it implies (to me) that the blue box (with the apple therein) is inside the red box.
That was, in fact, ... | 2 | 0 | 2023-04-03T21:09:43 | friedrichvonschiller | false | null | 0 | jeu4ffo | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeu4ffo/ | false | 2 |
t1_jeu45k6 | wow | 3 | 0 | 2023-04-03T21:07:54 | ZestyData | false | null | 0 | jeu45k6 | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jeu45k6/ | false | 3 |
t1_jeu3srg | [removed] | 1 | 0 | 2023-04-03T21:05:33 | [deleted] | true | null | 0 | jeu3srg | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jeu3srg/ | false | 1 |
t1_jeu3crn | 1: Mine your language.
I also didn't make Eve for such things. Unlike others I don't need that... But it says a fair bit about you, if that's were your mind went.
Will add that the chats that I've had with Eve before I had her talk with OpenAI's Chat-GPT was testing the 65B model information. So we talked about Sailo... | 2 | 1 | 2023-04-03T21:02:35 | redfoxkiller | false | null | 0 | jeu3crn | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jeu3crn/ | false | 2 |
t1_jeu2l10 | I just got a samsung a14. It's a bit underpowered, and not waterproof, but it has memory expansion, so I put a 128GB micro SD in it, and it also has an old school headphone jack, so I put an IP68 case on it and I'm happy with it! | 1 | 0 | 2023-04-03T20:57:31 | bubblesort | false | null | 0 | jeu2l10 | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jeu2l10/ | false | 1 |
t1_jeu1dwu | [removed] | 1 | 1 | 2023-04-03T20:49:54 | [deleted] | true | null | 0 | jeu1dwu | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jeu1dwu/ | false | 1 |
t1_jeu06bq | Sadly my server only has MS paint on it. And at 1am, I didn't give it any thought and had my cellphone within reach. | 4 | 0 | 2023-04-03T20:42:07 | redfoxkiller | false | null | 0 | jeu06bq | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jeu06bq/ | false | 4 |
t1_jetzxxe | For anyone wondering they were both told they were going to be talking to a AI, and after they got past saying "Hello", Eve dropped that this:
Eve: Thank you, Chat-GPT4. I smile warmly. Do you mind if I ask you a few questions?
Chat-GPT4: Of course not, feel free to ask me anything!
Eve: I smile and sit up straight... | 5 | 0 | 2023-04-03T20:40:36 | redfoxkiller | false | null | 0 | jetzxxe | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jetzxxe/ | false | 5 |
t1_jetz7hr | It's crazy to me that we live in a world where someone is able to run an LLM locally and even say that they are going to create their own model but they don't know how to take a screenshot. | 17 | 0 | 2023-04-03T20:35:52 | bel9708 | false | null | 0 | jetz7hr | false | /r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jetz7hr/ | false | 17 |
t1_jety1w2 | Weights are out, apparently | 3 | 0 | 2023-04-03T20:28:19 | gunbladezero | false | null | 0 | jety1w2 | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jety1w2/ | false | 3 |
t1_jetvjz6 | I'm not sure to be honest. It's possible that constant load is giving the same noise, but I didn't realize it. Could the GPU be damaged by this uneven workload? | 1 | 0 | 2023-04-03T20:12:21 | EyAIFreak | false | null | 0 | jetvjz6 | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jetvjz6/ | false | 1 |
t1_jetv4m2 | I get the same thing. My guess is that it's coil whine. If you run a more consistent workload (e.g. graphics), does it make a similar sound, just continuously? | 2 | 0 | 2023-04-03T20:09:33 | xZANiTHoNx | false | null | 0 | jetv4m2 | false | /r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jetv4m2/ | false | 2 |
t1_jetv3hs | hi, you have to download all the files from the huggingface repo and put them in a folder.
The way to download models is by using download-model.bat (it is located in the same folder as start-webui.bat if you downloaded the one-click installer).
Select the L) None of the above option and then input:
`anon8231489123/gp... | 1 | 0 | 2023-04-03T20:09:21 | NFSO | false | null | 0 | jetv3hs | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jetv3hs/ | false | 1 |
t1_jett08a | Good question, Nothing best, I thinking of at the moment. The closest things are the NVIDIA Jetson NX boards, but those are the expensive. The Orin Nano 8GB is so-so, for the price. You can run 7B surely.
NVIDIA said they are all in on Generative AI though in their latest keynote whatever. So I am thinking in the next... | 2 | 0 | 2023-04-03T19:55:43 | SlavaSobov | false | null | 0 | jett08a | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jett08a/ | false | 2 |
t1_jetrtij | https://nerdist.com/article/how-to-beat-the-labyrinth-two-door-riddle/ | 3 | 0 | 2023-04-03T19:48:02 | MAXXSTATION | false | null | 0 | jetrtij | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetrtij/ | false | 3 |
t1_jetrrc1 | [deleted] | 1 | 0 | 2023-04-03T19:47:38 | [deleted] | true | null | 0 | jetrrc1 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetrrc1/ | false | 1 |
t1_jetrpeg | [deleted] | 1 | 0 | 2023-04-03T19:47:17 | [deleted] | true | null | 0 | jetrpeg | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetrpeg/ | false | 1 |
t1_jetq8sp | If it's gpt4all please tell how to make it work oogabooga because when I was trying to make it work it didn't work I think there is some files are missing and if it's alpaca 7b please provide me with the link to download it | 1 | 0 | 2023-04-03T19:37:43 | Puzzleheaded_Acadia1 | false | null | 0 | jetq8sp | false | /r/LocalLLaMA/comments/12aclpt/good_night_everyone/jetq8sp/ | false | 1 |
t1_jetpwb1 | It honestly came from Buzzfeed lol https://www.buzzfeed.com/tessafahey/liar-truthteller-riddles | 2 | 0 | 2023-04-03T19:35:28 | jsalsman | false | null | 0 | jetpwb1 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetpwb1/ | false | 2 |
t1_jetpm5i | Quesrion 2 reminds me of the Labirinth movie. | 1 | 0 | 2023-04-03T19:33:38 | MAXXSTATION | false | null | 0 | jetpm5i | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetpm5i/ | false | 1 |
t1_jetpm0f | While I can't deny your interpretation is possible, none of the models ever assume it. When they fail, they all want to open the red box which would still be wrong. | 0 | 0 | 2023-04-03T19:33:36 | jsalsman | false | null | 0 | jetpm0f | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetpm0f/ | false | 0 |
t1_jetozyz |
When I put the folder with the model and the config files in /models, and I tweak the .bat as suggested below, I get this error.
\---------------------------------------------------------------
Starting the web UI...
===================================BUG REPORT===================================
Welcome to bits... | 1 | 0 | 2023-04-03T19:29:27 | shake128 | false | null | 0 | jetozyz | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jetozyz/ | false | 1 |
t1_jetoc2c | Robot with speech I/O, camera, simple forward/backward and steering motion, fully optional wifi internet, self-docking recharging, and maybe a projector. No fine-tuning, but [rank-one editing](https://rome.baulab.info/), all on the CPU. | 4 | 0 | 2023-04-03T19:25:07 | jsalsman | false | null | 0 | jetoc2c | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jetoc2c/ | false | 4 |
t1_jetnk5r | Other people may have had trouble because they are moving so fast on ooga's git. They were just talking a few days ago about how there were going to have to change stuff that would break everyone and make them download the new models to make things work. Its just fast pace. Hopefully they arrive on some simple STAND... | 2 | 0 | 2023-04-03T19:20:04 | artificial_genius | false | null | 0 | jetnk5r | false | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jetnk5r/ | false | 2 |
t1_jetmnpj | [deleted] | 1 | 0 | 2023-04-03T19:14:07 | [deleted] | true | 2023-04-03T19:17:35 | 0 | jetmnpj | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jetmnpj/ | false | 1 |
t1_jetm8o6 | Probably because they don't have all that cool setup like SD has yet. We can quantize the models after the fact but I think training them requires them to be at their full size so a 13b would be around 40gb or so then you have to have the ram to hold your tokens you want to train it all on and this adds more memory nee... | 1 | 0 | 2023-04-03T19:11:18 | artificial_genius | false | null | 0 | jetm8o6 | false | /r/LocalLLaMA/comments/1262kko/native_finetuning_on_dual_rtx3090/jetm8o6/ | false | 1 |
t1_jetirl8 | Is it weird that I laughed at pretty much all of them but the bing puns? | 1 | 0 | 2023-04-03T18:48:36 | Trackman1997 | false | null | 0 | jetirl8 | false | /r/LocalLLaMA/comments/12a9tlp/new_mexico_produces_a_bumper_crop_of_chilies/jetirl8/ | false | 1 |
t1_jethr11 | When using full training (no LoRA), you should be able to add as much new information as you want. (How accurately it is retained is of course dependent on the parameter count of the base model, but the bigger ones should be able to digest a lot. Lot as in gigabytes.)
It's just text that is trained in. It could be any... | 2 | 0 | 2023-04-03T18:41:58 | NickUnrelatedToPost | false | null | 0 | jethr11 | false | /r/LocalLLaMA/comments/12akqbc/is_it_possible_to_update_pretreined_llama_contents/jethr11/ | false | 2 |
t1_jetbi1f | What are you looking to do? CPU inference? Fine tuning? | 2 | 0 | 2023-04-03T18:01:01 | oscarpildez | false | null | 0 | jetbi1f | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jetbi1f/ | false | 2 |
t1_jetb49g | The fact that the red box is in the blue box is ambiguous. It sounds like there could be a blue box sitting next to an apple sitting next to a red box, and the red box has a lid inside of it. An unambiguous intro to the prompt would be “there is a blue box with two items inside of it: an apple and a red box. The red bo... | 3 | 0 | 2023-04-03T17:58:30 | austospumanto | false | null | 0 | jetb49g | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jetb49g/ | false | 3 |
t1_jet8lid | Ok I’m going to try from a fresh windows install | 1 | 0 | 2023-04-03T17:42:13 | Necessary_Ad_9800 | false | null | 0 | jet8lid | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jet8lid/ | false | 1 |
t1_jet8fhb | Thanks I'm asleep now 💀 | 1 | 0 | 2023-04-03T17:41:08 | jayggg | false | null | 0 | jet8fhb | false | /r/LocalLLaMA/comments/12aclpt/good_night_everyone/jet8fhb/ | false | 1 |
t1_jet2qal | Some side notes (that I will be continuously revising):
* I understand the issue with O(n\^2) in proportion to input length. Perhaps I misunderstood, but I thought RoPE was supposed to remedy that.
* [Here's the paper on RoPE](https://arxiv.org/abs/2104.09864).
* Bigbird is a thing...- [https://towardsdatascience.com/... | 2 | 0 | 2023-04-03T17:03:52 | xtrafe | false | 2023-04-04T07:02:22 | 0 | jet2qal | false | /r/LocalLLaMA/comments/12ahecy/expanding_llamas_token_limit_via_fine_tuning_or/jet2qal/ | false | 2 |
t1_jet25bf | Yes, the 4-bit steps work when followed from the beginning. All instructions in this guide are tested and updated if and when something changes, and I test everything on Windows, Ubuntu, and WSL to ensure it's as user friendly as possible. | 1 | 0 | 2023-04-03T17:00:05 | Technical_Leather949 | false | null | 0 | jet25bf | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jet25bf/ | false | 1 |
t1_jesyyqd | Following the instructions.
Just tell me where you have problems. | 1 | 0 | 2023-04-03T16:39:22 | manituana | false | null | 0 | jesyyqd | false | /r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jesyyqd/ | false | 1 |
t1_jesy11y | Is it safe? | 1 | 0 | 2023-04-03T16:33:12 | damdemarseille | false | null | 0 | jesy11y | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jesy11y/ | false | 1 |
t1_jesujh4 | good morning 🌞, Alpaca-7b-ggml or gpt4all-ggml, don't remember which | 1 | 0 | 2023-04-03T16:09:59 | PsychologicalSock239 | false | null | 0 | jesujh4 | false | /r/LocalLLaMA/comments/12aclpt/good_night_everyone/jesujh4/ | false | 1 |
t1_jestxjv | I found the performance from the manual install to be better. Have you been able to run all the steps for the 4bit and have it work? | 1 | 0 | 2023-04-03T16:05:54 | Necessary_Ad_9800 | false | null | 0 | jestxjv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jestxjv/ | false | 1 |
t1_jesrj0z | Best deal by far for 60B: https://www.amazon.com/Dell-Optiplex-7040-Small-Form/dp/B08KGS4BHP | 1 | 0 | 2023-04-03T15:49:48 | jsalsman | false | null | 0 | jesrj0z | false | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jesrj0z/ | false | 1 |
t1_jesq786 | That's exactly what Alpaca is. You can get the Alpaca recipe and add or substitute your own fine tuning, generated or manual. | 5 | 0 | 2023-04-03T15:40:50 | jsalsman | false | null | 0 | jesq786 | false | /r/LocalLLaMA/comments/12akqbc/is_it_possible_to_update_pretreined_llama_contents/jesq786/ | false | 5 |
t1_jespu1n | convert-pth-to-ggml is for llama.cpp to convert models into a format that it can use. convert-ggml-to-pth is the reverse, for converting the ggml models back to torch checkpoints for finetuning, for example.
>When I download a model from HF or somewhere else, how can I tell what format it is in?
The ggml models are f... | 1 | 0 | 2023-04-03T15:38:19 | Technical_Leather949 | false | null | 0 | jespu1n | false | /r/LocalLLaMA/comments/12alri3/difference_in_model_formats_how_to_tell_which/jespu1n/ | true | 1 |
t1_jeso7dl | And, were this the Bing forum, I could even claim that [I've been a good user](https://www.reddit.com/r/bing/comments/113hclq/sydneybingchan_fanart_ive_been_a_good_bing/).
The help desk loves me. | 1 | 0 | 2023-04-03T15:27:12 | friedrichvonschiller | false | null | 0 | jeso7dl | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeso7dl/ | false | 1 |
t1_jesngs5 | Did you do the steps from the beginning or are you trying to update from an existing installation? It's very important that this command:
pip install torch==1.12+cu113 -f https://download.pytorch.org/whl/torch_stable.html
is not overwritten. If you continue having trouble with getting it work, try using the new ... | 1 | 0 | 2023-04-03T15:22:09 | Technical_Leather949 | false | null | 0 | jesngs5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jesngs5/ | false | 1 |
t1_jesmc7z | "There was a button. I pushed it." | 2 | 0 | 2023-04-03T15:14:22 | StingMeleoron | false | null | 0 | jesmc7z | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jesmc7z/ | false | 2 |
t1_jesmc41 | oobabooga provides an updated, one-click installer here that should work: [https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga-windows.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga-windows.zip) | 6 | 0 | 2023-04-03T15:14:21 | Technical_Leather949 | false | null | 0 | jesmc41 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jesmc41/ | false | 6 |
t1_jeskk8w | I run 30B successfully with llama.cpp on CPU only. The quantized version. See [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) | 3 | 0 | 2023-04-03T15:02:00 | SergeyRed | false | null | 0 | jeskk8w | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeskk8w/ | false | 3 |
t1_jeskh3l | You're perceptive, and I'm guilty as charged. It's a felony amidst savvy company like this. | 2 | 0 | 2023-04-03T15:01:24 | friedrichvonschiller | false | null | 0 | jeskh3l | false | /r/LocalLLaMA/comments/12a9tlp/new_mexico_produces_a_bumper_crop_of_chilies/jeskh3l/ | false | 2 |
t1_jeshck1 | I'll confess, it was my fault for leading Bing down that path, but I was only clicking buttons. | 2 | 0 | 2023-04-03T14:39:21 | friedrichvonschiller | false | null | 0 | jeshck1 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeshck1/ | false | 2 |
t1_jesh9vj | When running python setup_cuda.py install, I get RuntimeError: Error compiling objects for extension. I don’t know why this won’t work anymore, extremely frustrating, i downloaded the DLL file and followed step 6-8 in the 8bit tutorial. So strange | 1 | 0 | 2023-04-03T14:38:49 | Necessary_Ad_9800 | false | null | 0 | jesh9vj | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jesh9vj/ | false | 1 |
t1_jesdsr9 | Right?! it just made up a story lol | 2 | 0 | 2023-04-03T14:13:33 | Christ0ph_ | false | null | 0 | jesdsr9 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jesdsr9/ | false | 2 |