name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jehrd68 | > GPT-4 is more like a few hundred billion.
Not sure I buy this. Assuming it's float16, the absolute upper bound is ~360B, due to the maximum size of an A100 DGX node. Combine that with the observed inference speed and the low API cost they're charging, it seems the model is much smaller than that. Even if yo... | 2 | 0 | 2023-04-01T02:54:38 | TeamPupNSudz | false | null | 0 | jehrd68 | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jehrd68/ | false | 2 |
t1_jehp5wv | GPT-4 is more like a few hundred billion. LLaMA's biggest model is 65B, and it isn't that well sorted or fine-tuned.
As someone who's been running all four models, I can tell you there's a major difference between the 7B and 65B models.
Depending on the model training, you are right that it could be done on home eq... | 0 | 0 | 2023-04-01T02:35:45 | redfoxkiller | false | null | 0 | jehp5wv | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jehp5wv/ | false | 0 |
t1_jehly2h | [deleted] | 1 | 0 | 2023-04-01T02:08:54 | [deleted] | true | null | 0 | jehly2h | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehly2h/ | false | 1 |
t1_jehkm78 | To avoid speaker confusion you can train it on a strict format. People often use markdown and choose formatting that's unlikely to appear randomly.
For example, reformatting all the training text to be
### Dungeon Master
{text}
### Player
{text}
Data is a problem in most projects even fo... | 4 | 0 | 2023-04-01T01:58:03 | CellWithoutCulture | false | null | 0 | jehkm78 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehkm78/ | false | 4 |
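A minimal sketch of the reformatting step described above; the speaker names and turn structure are illustrative assumptions, not taken from any dataset:

```python
# Render a transcript with unambiguous "### Speaker" headers:
# markers that are unlikely to occur naturally in the source text.
turns = [
    ("Dungeon Master", "You enter a dark cave."),
    ("Player", "I light a torch and look around."),
]

def to_training_text(turns):
    return "\n".join(f"### {speaker}\n{text}\n" for speaker, text in turns)

print(to_training_text(turns))
```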
t1_jehi6iy | Just be sure you buy at least an Nvidia RTX 4090 with 24GB of VRAM and a lot of RAM and you'll be fine to experiment. AI stuff requires a lot of VRAM and RAM; no need to buy the best for the rest. | 3 | 0 | 2023-04-01T01:38:05 | ptitrainvaloin | false | null | 0 | jehi6iy | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehi6iy/ | false | 3 |
t1_jehhr8r | I followed this thread and got an Alpaca 13B running on a Nvidia 3080 with 10GB of video ram: https://old.reddit.com/r/KoboldAI/comments/122zjd0/guide_alpaca_13b_4bit_via_koboldai_in_tavernai/
It was mind blowing. The next day I ordered the parts for this PC build: https://pcpartpicker.com/list/4nHJcb
I haven't gotte... | 2 | 0 | 2023-04-01T01:34:46 | synn89 | false | null | 0 | jehhr8r | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehhr8r/ | false | 2 |
t1_jehgsxd | Okay, I'm just wrong then, sorry! | 1 | 0 | 2023-04-01T01:27:14 | UncleEnk | false | null | 0 | jehgsxd | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehgsxd/ | false | 1 |
t1_jehg3k4 | I use the default config for temperature settings and such, an empty prompt and launch the thing in REPL mode. The context doesn't really last more than two or three prompts, but still it almost feels like I'm talking to a person. The model is `ggml-alpaca-30b-q4`.
Find a Hugging Face link and torrent [in this GitHub ... | 2 | 0 | 2023-04-01T01:21:39 | spectrachrome | false | 2023-04-01T01:35:50 | 0 | jehg3k4 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jehg3k4/ | false | 2 |
t1_jeheq3f | > The data needed to make a version of LLaMA a history buff is going to be massive, and require a few people working on it. Never mind the server time to compile it, at which you're going to need a really good CPU, RAM, and GPU.
OP is referencing PEFT, which means he's talking about Lora fine-tuning, which can be done... | 2 | 0 | 2023-04-01T01:10:45 | TeamPupNSudz | false | null | 0 | jeheq3f | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeheq3f/ | false | 2 |
t1_jehejml | That won't work because of Fuck-you!-Nvidia and their proprietary CUDA API.
You can use CPU only. | 1 | 0 | 2023-04-01T01:09:19 | NickUnrelatedToPost | false | null | 0 | jehejml | false | /r/LocalLLaMA/comments/11xiwes/ati_gpu/jehejml/ | false | 1 |
t1_jehdgtp | That won't work due to catastrophic forgetting.
You'd have to train it on everything to date, not just that day's data.
[Things seem to be fundamentally different in spiking neural networks.](https://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1010628) | 5 | 0 | 2023-04-01T01:00:58 | starstruckmon | false | null | 0 | jehdgtp | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jehdgtp/ | false | 5 |
t1_jehcewb | I think you're right. Sadly, though, looking at what NVIDIA did with their Lovelace series, that won't be happening. The RTX 6000 lost its NVLink; currently the only Lovelace GPU with NVLink is the H100. I understand that for 99% of use cases SLI is dead. However, for AI, SLI/NVLink is not dead at all. | 1 | 0 | 2023-04-01T00:52:43 | EnvironmentalAd3385 | false | null | 0 | jehcewb | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehcewb/ | false | 1 |
t1_jehb4f1 | I'm not sure what a company version of the 4090 is, but there is no version of the 4090 that has 48GB of VRAM. There were rumors that the RTX 4090 Ti would have 48GB of VRAM, but that rumor was squashed. Okay, I see what you mean. I misread your comment. | 2 | 0 | 2023-04-01T00:42:37 | EnvironmentalAd3385 | false | null | 0 | jehb4f1 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehb4f1/ | false | 2 |
t1_jeha36r | I think he was saying that the A6000/A100 etc are the best available hardware (if money is no object), but that "consumer" hardware is more along the lines of 3090 and below.
I'd of course argue the 4090 is also consumer hardware given the price, but that's neither here nor there :).
I really am hoping for a 48gb+ GP... | 2 | 0 | 2023-04-01T00:34:31 | deepinterstate | false | null | 0 | jeha36r | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeha36r/ | false | 2 |
t1_jeh9sj9 | u/AI-Pon3 do you know how to increase the input character length allowance?
If I give a prompt like the one below, it sits and then quits. With shorter prompts it works.
example:
Summarize this and compose in a list. “[in here is a transcript from a 30 second video without any punctuation in it]” | 4 | 0 | 2023-04-01T00:32:11 | MyVoiceIsElevating | false | null | 0 | jeh9sj9 | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jeh9sj9/ | false | 4 |
t1_jeh9rzr | The 4090 has a company version (I believe it may just have been announced) that has 48GB of VRAM. Also, I mentioned that it isn't consumer; I was just nitpicking. | 1 | 0 | 2023-04-01T00:32:04 | UncleEnk | false | null | 0 | jeh9rzr | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh9rzr/ | false | 1 |
t1_jeh9ibm | The pace feels exponential. I think we're on the Singularity path, personally. The line looks pretty flat behind us and damn near vertical in front of us.
I'm willing to get on board and see where it takes me, but I have to admit I'm feeling a little sick trying to keep up. | 1 | 0 | 2023-04-01T00:29:57 | deepinterstate | false | null | 0 | jeh9ibm | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh9ibm/ | false | 1 |
t1_jeh969q | Probably by running llama.cpp or alpaca.cpp (search their github). It's a bit more involved than Dalai, though, and it'll still be a bit clunky.
Oobabooga provides a better experience and UI, and you can supposedly run llama.cpp in oobabooga now as of a few hours ago (they just merged that update), so that might be an... | 2 | 0 | 2023-04-01T00:27:19 | deepinterstate | false | null | 0 | jeh969q | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh969q/ | false | 2 |
t1_jeh8lkx | I have thought about it but not gotten there. Specifically, what I want to do is train a LoRA on the entire conversation history. Then, each night, I could retrain including today's conversations.
Unfortunately I don’t yet have LoRA training working in my setup so that’s been a blocker! | 6 | 0 | 2023-04-01T00:22:51 | the_quark | false | null | 0 | jeh8lkx | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jeh8lkx/ | false | 6 |
t1_jeh7hk6 | So, you are wrong, but not completely. LLMs can run on a CPU; with that configuration you need a large amount of RAM. However, if you run one on a GPU, VRAM is the limiting factor. Also, the A100 is $10K and nowhere near consumer grade. There isn't an A8000 GPU; it is the Quadro 8000, which on the low end is like $3,500. not real... | 1 | 0 | 2023-04-01T00:14:17 | EnvironmentalAd3385 | false | null | 0 | jeh7hk6 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh7hk6/ | false | 1 |
t1_jeh6ev6 | I tried to figure out how to run that, but I just couldn't find anywhere how to do it | 1 | 0 | 2023-04-01T00:06:01 | titto8779 | false | null | 0 | jeh6ev6 | false | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/jeh6ev6/ | false | 1 |
t1_jeh5qix | I may be wrong, but for LLMs what you want is RAM (not VRAM; VRAM is speed, RAM is capability, so a good RAM:VRAM ratio is 3:1), though for art (i.e. Stable Diffusion), VRAM is correct. (For me, I have 32GB RAM : 8GB VRAM.)
Edit: also, right now 3090s are the best **consumer** GPUs; 4090s, A8000, A100 etc. are **the** b... | 2 | 0 | 2023-04-01T00:00:48 | UncleEnk | false | null | 0 | jeh5qix | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh5qix/ | false | 2 |
t1_jeh5jv5 | I tried the same with ChatGPT, and at first it also agreed that mice can enter a house through electrical cables. Funny! Must be in the training texts. But when I told it that mice are too large, it realized the mistake and apologized. | 2 | 0 | 2023-03-31T23:59:24 | nykfank | false | null | 0 | jeh5jv5 | false | /r/LocalLLaMA/comments/1254n2v/my_first_chat_with_llama7b_about_catching_mice/jeh5jv5/ | false | 2 |
t1_jeh528t | > Think about the order in which regions of the brain evolved
I think this is a useful starting point. I like to start all the way at the beginning: why do we even have brains in the first place, and what kind of brains do single-celled organisms have? I would recommend Incomplete Nature by Terrence Deacon.
> Think... | 1 | 0 | 2023-03-31T23:55:36 | tvetus | false | null | 0 | jeh528t | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jeh528t/ | false | 1 |
t1_jeh1n0z | What happens during sleep is not back-propagation. If it was, then all the researchers would have latched on to that idea long ago. Neurons don't have a symmetric way of communicating error information backwards. That's why Geoffrey Hinton is still brainstorming ideas on how the brain really learns. | 2 | 0 | 2023-03-31T23:29:28 | tvetus | false | null | 0 | jeh1n0z | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jeh1n0z/ | false | 2 |
t1_jeh17l9 | [deleted] | 1 | 0 | 2023-03-31T23:26:13 | [deleted] | true | null | 0 | jeh17l9 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh17l9/ | false | 1 |
t1_jeh14bo | What's the difference between learning and neurological learning? Because I can certainly learn how to play a new board game without going to sleep. | 2 | 0 | 2023-03-31T23:25:32 | tvetus | false | null | 0 | jeh14bo | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jeh14bo/ | false | 2 |
t1_jeh0nog | I can't advise you whether getting a mad PC will be worth it or not, but if you decide to, don't go Alienware. You can save money and get yourself a better system if you build it yourself. Check out [The Last Build Guide You'll Ever Need](https://youtu.be/BL4DCEp7blY) to learn what you need to know, and then go to [P... | 2 | 0 | 2023-03-31T23:21:59 | Captain_Pumpkinhead | false | null | 0 | jeh0nog | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeh0nog/ | false | 2 |
t1_jegznjh | I don't use Twitter, but I remember seeing something about them hiring an LLM expert. Can't wait to see how this unfolds!
Man. The future of AI is both very exciting and very scary. | 1 | 0 | 2023-03-31T23:14:18 | Captain_Pumpkinhead | false | null | 0 | jegznjh | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegznjh/ | false | 1 |
t1_jegzals | Whoops! Thank you for the correction! | 1 | 0 | 2023-03-31T23:11:33 | Captain_Pumpkinhead | false | null | 0 | jegzals | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegzals/ | false | 1 |
t1_jegxgj7 | Np :3 no stupid questions.
You are correct, LLaMA is a text input model.
If you load up the send_pictures extension when opening Oobabooga, there will be another model added which looks at pictures and can describe what the image contains. The language model takes that information and tries to incorporate it into ... | 1 | 0 | 2023-03-31T22:57:38 | Inevitable-Start-653 | false | null | 0 | jegxgj7 | false | /r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jegxgj7/ | false | 1 |
t1_jegwwhx | Definitely it is not possible and will never be possible. | 1 | 0 | 2023-03-31T22:53:22 | wojtek15 | false | null | 0 | jegwwhx | false | /r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jegwwhx/ | false | 1 |
t1_jegv1kw | Thank you, that link is very helpful. So, better perplexity for slightly more VRAM in the larger models. | 1 | 0 | 2023-03-31T22:39:23 | Ghurganov | false | null | 0 | jegv1kw | false | /r/LocalLLaMA/comments/1280omq/what_is_the_point_of_128_group_size/jegv1kw/ | false | 1 |
t1_jegt9lw | Stability AI is likely to release an open source model soon, however. Emad has cryptically been dropping hints for a few weeks now. | 2 | 0 | 2023-03-31T22:26:11 | TeamPupNSudz | false | null | 0 | jegt9lw | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegt9lw/ | false | 2 |
t1_jegt3uz | This is unrelated to Open Assistant, and will almost surely be of lower quality.
> LAION’s Open Assistant (OA) project is our efforts to replicate the functionality of ChatGPT, and as such centers around gathering human feedback and training a reinforcement model based on human feedback. **In contrast, the OIG data... | 3 | 0 | 2023-03-31T22:25:01 | TeamPupNSudz | false | null | 0 | jegt3uz | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegt3uz/ | false | 3 |
t1_jegsewr | 128 group size has better perplexity scores. See this PR for more details: https://github.com/oobabooga/text-generation-webui/pull/530 | 3 | 0 | 2023-03-31T22:19:50 | Civil_Collection7267 | false | null | 0 | jegsewr | false | /r/LocalLLaMA/comments/1280omq/what_is_the_point_of_128_group_size/jegsewr/ | false | 3 |
t1_jegs7mr | because trying to take down torrents always works so well for copyright holders | 2 | 0 | 2023-03-31T22:18:20 | sswam | false | null | 0 | jegs7mr | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jegs7mr/ | false | 2 |
t1_jegs22r | Meta snatching ignominy from the jaws of public acclaim as usual... | 1 | 0 | 2023-03-31T22:17:14 | sswam | false | null | 0 | jegs22r | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jegs22r/ | false | 1 |
t1_jegrnjl | Don't buy Alienware. You can build (or buy) a sufficient PC much cheaper. $2,000 and you are totally in the game.
$200 motherboard, $250 case and power supply, $100 cooling, $400 CPU, $400 RAM. That leaves $650 for a GPU; go for VRAM, not speed. I have a 3060 12GB for $450. | 2 | 0 | 2023-03-31T22:14:13 | NickUnrelatedToPost | false | 2023-03-31T22:21:42 | 0 | jegrnjl | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegrnjl/ | false | 2 |
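The budget arithmetic above checks out; a one-liner to verify, using the numbers from the comment:

```python
budget = 2000
parts = {"motherboard": 200, "case+PSU": 250, "cooling": 100, "cpu": 400, "ram": 400}
print(budget - sum(parts.values()))  # 650 left over for the GPU
```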
t1_jegqmpp | OP, with AI models the name of the game is VRAM. There are models that run on system memory, but I STRONGLY discourage those models. Currently, IMO the best GPU for local is the RTX 3090. It has 24GB of VRAM and still has SLI, so you could get 2 of them for ~$2K. With 48GB of VRAM you'll have access to some pretty AI models like L... | 2 | 0 | 2023-03-31T22:06:42 | EnvironmentalAd3385 | false | null | 0 | jegqmpp | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegqmpp/ | false | 2 |
t1_jegq3jo | i feel ya bro, stable diffusion stuff is also very very broken rn, mostly cuz pytorch 2 landed on xformers
thanks for all this hard work! | 2 | 0 | 2023-03-31T22:02:49 | LienniTa | false | null | 0 | jegq3jo | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jegq3jo/ | false | 2 |
t1_jegpf83 | As a PC enthusiast and recent AI hobbyist, there is NO reason to EVER buy an Alienware PC at any price point. They are awful, overpriced, and do not provide any performance for their cost. That being said, running AI locally is no small task. As others have said, it is much easier to rent out GPUs on a cloud server. I do... | 4 | 0 | 2023-03-31T21:57:55 | EnvironmentalAd3385 | false | null | 0 | jegpf83 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegpf83/ | false | 4 |
t1_jegp3w6 | [deleted] | 1 | 0 | 2023-03-31T21:55:38 | [deleted] | true | null | 0 | jegp3w6 | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jegp3w6/ | false | 1 |
t1_jegovxt | I wonder if anyone has tried setting up a "sleep" schedule for an LLM where it would train itself on the day's conversations while it sleeps, like we do when we dream. | 15 | 0 | 2023-03-31T21:54:03 | Ghurganov | false | null | 0 | jegovxt | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jegovxt/ | false | 15 |
t1_jegokna | oh cool, yeah that's essentially the same idea I had, except that my solutions to their problems are different. The main issue they bring up is having a model summarize things accurately. They also bring up fine-tuning like I did as a solution to it, but they talk about running a second model for it in parallel.
If I un... | 3 | 0 | 2023-03-31T21:51:49 | Sixhaunt | false | null | 0 | jegokna | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jegokna/ | false | 3 |
t1_jego0qg | ChatGPT has no accessible model. You're not wrong about LLaMA, but we're talking about it. That's one of the goals of LLaMA. It's a great business decision. That's all I'm saying. I agree that by-the-book it's a research release, but they 100% knew it was going to be leaked and become the de-facto pseudo-open source ... | 3 | 0 | 2023-03-31T21:47:55 | The_frozen_one | false | null | 0 | jego0qg | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jego0qg/ | false | 3 |
t1_jegnokj | Very great. I was experimenting with this type of thing years ago for an AI chatbot I was writing. The implementation is very good here. It complements LLaMA very well, much better than the dumb chat bot I made long ago. 😂 | 1 | 0 | 2023-03-31T21:45:33 | SlavaSobov | false | null | 0 | jegnokj | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jegnokj/ | false | 1 |
t1_jegnct5 | I installed 7B & 13B using Dalai as a UI; it's clunky and weird. How do I get them running in PowerShell like I see others doing? | 1 | 0 | 2023-03-31T21:43:15 | Wroisu | false | null | 0 | jegnct5 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegnct5/ | false | 1 |
t1_jegnbe3 | There is one called ChatGLM. | 2 | 0 | 2023-03-31T21:42:58 | ihaag | false | null | 0 | jegnbe3 | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegnbe3/ | false | 2 |
t1_jeglibt | Since the other reply got deleted ...
A LoRA is a Low-Rank Adaptation, a set of weight deltas that can apply a fine-tuning modification to an existing model.
It's smaller in file size than a full set of weights because it's stored as two low-rank matrices that get multiplied together to generate the weight deltas. [S... | 2 | 0 | 2023-03-31T21:30:31 | GreaterAlligator | false | null | 0 | jeglibt | false | /r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jeglibt/ | false | 2 |
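A minimal numeric sketch of that idea; the dimensions, rank, and scale are illustrative assumptions, not any particular model's values:

```python
import torch

d, r = 4096, 8                     # model dimension, LoRA rank (r << d)
W = torch.randn(d, d)              # frozen base weight matrix
A = torch.randn(r, d) * 0.01       # the two low-rank factors stored on disk
B = torch.zeros(d, r)              # zero-init so the delta starts at nothing
scale = 2.0                        # alpha / r in typical implementations

W_effective = W + scale * (B @ A)  # full-size delta built from 2*d*r stored
                                   # values (65,536) instead of d*d (16.8M)
```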
t1_jegilz9 | I feel you. The few people with the time to commit, creativity, and understanding to immerse fully in the tech and innovate on top of it will be the new crop of billionaires in three years. | 1 | 0 | 2023-03-31T21:10:25 | LoSboccacc | false | null | 0 | jegilz9 | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegilz9/ | false | 1 |
t1_jegidey | It is staggering; with LangChain and the open models catching up in format, we may not recognize the world in a year. | 1 | 0 | 2023-03-31T21:08:48 | LoSboccacc | false | null | 0 | jegidey | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jegidey/ | false | 1 |
t1_jegftkb | my code: [https://github.com/sswam/barbarella](https://github.com/sswam/barbarella)
It's very hacky and half-baked. Sometime I'll persuade GPT-4 to rewrite it all in clean Python. | 1 | 0 | 2023-03-31T20:51:24 | sswam | false | null | 0 | jegftkb | false | /r/LocalLLaMA/comments/127yuwv/electric_barbarella_ai_voice_chat_and_shell_tools/jegftkb/ | false | 1 |
t1_jegfkqx | Looking forward to what you see with the 65B. I want to say I loaded some version (4-bit) of it earlier this week and had problems. Good luck! | 1 | 0 | 2023-03-31T20:49:46 | aigoopy | false | null | 0 | jegfkqx | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jegfkqx/ | false | 1 |
t1_jegeznp | >This is WILD WEST territory. There are TONS of crazy things to discover and we're just barely starting to invent the tools to really dig into the capabilities of these models.
That's really the best part about it to me. This is legitimately exciting and fun. It's new, different, you can actually create things that... | 3 | 0 | 2023-03-31T20:45:46 | toothpastespiders | false | 2023-03-31T21:00:04 | 0 | jegeznp | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegeznp/ | false | 3 |
t1_jegewdr | Are you able to train at all on the m40? | 1 | 0 | 2023-03-31T20:45:09 | c4r_guy | false | null | 0 | jegewdr | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegewdr/ | false | 1 |
t1_jegdr1f | [Something other than neurological learning is occurring](https://www.ninds.nih.gov/health-information/public-education/brain-basics/brain-basics-understanding-sleep). We're amazing. Bold was Bing's.
>Hello, this is Bing. According to the information I found on the web, neurological learning is **not possible wit... | -1 | 0 | 2023-03-31T20:37:32 | friedrichvonschiller | false | 2023-04-01T03:34:26 | 0 | jegdr1f | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jegdr1f/ | false | -1 |
t1_jegdjcp | Has anyone uploaded a version of Alpaca Native 13b that is already int4 and group sized? I've been looking everywhere. I don't have the internet for the full download and I'm not sure my computer can handle the gptq conversion. I only have 32gb of ram. Thanks for your help :) | 1 | 0 | 2023-03-31T20:36:09 | artificial_genius | false | null | 0 | jegdjcp | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jegdjcp/ | false | 1 |
t1_jegc7yx | Sounds similar to a discussion on oobabooga sub inspired by another post somewhere. Linking the thread in the complex memory thread, maybe a nugget or two of relevant ideas:
https://www.reddit.com/r/Oobabooga/comments/125savw/a_more_koboldailike_memory_extension_complex/jebd3p1?utm_medium=android_app&utm_source=... | 2 | 0 | 2023-03-31T20:27:22 | skatardude10 | false | null | 0 | jegc7yx | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jegc7yx/ | false | 2 |
t1_jegb869 | For basic 7b model running (alpaca or Llama) you don't really need much in the way of a PC to run it. Hell, I have 7b alpaca running -at speed- on an outdated 2014 imac...
13b is workable, but slow.
For 13B you really need more power to get "good" speed. That means a 12gb GPU like a 3060 12gb (ONLY the 12gb - it's th... | 16 | 0 | 2023-03-31T20:20:42 | deepinterstate | false | 2023-04-01T00:38:04 | 0 | jegb869 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jegb869/ | false | 16 |
t1_jeg98r7 | You're close. It's headed by LAION-AI, the group that provided the image set list that Stable Diffusion used. Last I heard, Stability AI is not involved in Open Assistant.
But other than that, you're right. It's an attempt to create an open source version of ChatGPT. | 1 | 0 | 2023-03-31T20:07:32 | Captain_Pumpkinhead | false | null | 0 | jeg98r7 | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jeg98r7/ | false | 1 |
t1_jeg8pz2 | I can't recall the full time it took, but with my server I'll get a short reply in about 8-10 seconds. And that's going from when I hit enter to when the system is done typing.
But that's with two 8GB GPUs and two processors with a total of 48 cores. RAM isn't the only thing that's needed. 😉 | 2 | 0 | 2023-03-31T20:04:02 | redfoxkiller | false | null | 0 | jeg8pz2 | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jeg8pz2/ | false | 2 |
t1_jeg67gy | Thanks for your response! I will be very interested to see your results.
I am debating whether to deploy the 65B model now or wait a bit more for a better version.
So even with 382 GB of ram, it’s still slow? Is it one token per minute or slower? | 2 | 0 | 2023-03-31T19:47:11 | Pretend_Jellyfish363 | false | null | 0 | jeg67gy | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jeg67gy/ | false | 2 |
t1_jeg4n0o | From experience, I'd reduce the budget to $3.9K and use the extra $100 on a cloud solution to "try before you buy in". $100 should be good enough for spending a good weekend with a good instance to make sure that's really something you want to invest in.
As per alienware, you'd probably get a couple of 3090s and a simpl... | 5 | 0 | 2023-03-31T19:36:48 | ChobPT | false | null | 0 | jeg4n0o | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeg4n0o/ | false | 5 |
t1_jeg42sp | That's an extra. ^_~
The core system with a processor, RAM, and such will run someone around $8K as a starting point. Now yes, adding more RAM, a second processor, and a GPU will greatly help. | 1 | 0 | 2023-03-31T19:33:04 | redfoxkiller | false | null | 0 | jeg42sp | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeg42sp/ | false | 1 |
t1_jeg3ef5 | Only ONE server-grade GPU (A100, H100) costs much more than that. :) | 2 | 0 | 2023-03-31T19:28:28 | BalorNG | false | null | 0 | jeg3ef5 | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeg3ef5/ | false | 2 |
t1_jeg32dy | Hey, I said start at $8K per server... I didn't say how many are needed to support a wide range of users, or how well it would run. 😂
Even with my little war machine, the 65B model has noticeable slow down. | 0 | 0 | 2023-03-31T19:26:14 | redfoxkiller | false | null | 0 | jeg32dy | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeg32dy/ | false | 0 |
t1_jeg2zv5 | Some useful links:
SQLite memory for llama:
https://github.com/logan-markewich/llama_index_starter_pack/releases/tag/v0.5.0
ChatGPT UI; I think it's a good idea to separate different dialogs:
https://github.com/mckaywrigley/chatbot-ui
Using the ChatGPT retrieval plugin with llama:
https://blog.lastmileai.dev/using-openai... | 3 | 0 | 2023-03-31T19:25:46 | Liverpool67 | false | null | 0 | jeg2zv5 | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jeg2zv5/ | false | 3 |
t1_jeg2tkz | Another dude with an EE degree who had the same question, and I decided to see just how far I could get with what I have. (ThinkPad X270 w/ 16GB RAM, and llama.cpp to run it on my CPU)
It took me about an hour to set up running inference, and two evenings messing with it to get a better idea of my hardware's limitations. ... | 1 | 0 | 2023-03-31T19:24:36 | Pote_b | false | null | 0 | jeg2tkz | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeg2tkz/ | false | 1 |
t1_jeg2mof | Where would one get a copy of the alpaca native 13b int4 with groupsize? All I see is the 7b version, a unquantized 13b alpaca native and a bunch of llama/lora versions. Wish I had caught the 4chan link while it was working still. | 1 | 0 | 2023-03-31T19:23:20 | artificial_genius | false | null | 0 | jeg2mof | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jeg2mof/ | false | 1 |
t1_jeg2mec | Turns out it was a blog page from runpod themselves, not a youtube video which I saw about it. So if you're curious: [https://blog.runpod.io/how-to-connect-google-colab-to-runpod/](https://blog.runpod.io/how-to-connect-google-colab-to-runpod/) | 3 | 0 | 2023-03-31T19:23:17 | Sixhaunt | false | null | 0 | jeg2mec | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeg2mec/ | false | 3 |
t1_jeg2dud | I think you've missed a zero here. Likely no less than two :) | 3 | 0 | 2023-03-31T19:21:43 | BalorNG | false | null | 0 | jeg2dud | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeg2dud/ | false | 3 |
t1_jeg2amu | Which guide did you use to get this running? | 1 | 0 | 2023-03-31T19:21:08 | redpandabear77 | false | null | 0 | jeg2amu | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeg2amu/ | false | 1 |
t1_jeg1nmk | Huh, it took me a while to understand this (yeah, old man with poor memory and even worse English here :D ) and what you are saying is incredible. So I can write on a Colab (and I think there are guides on how to fine-tune on Colab), hire a RunPod or sth and connect it to a Colab notebook... This is an amazing idea :D And... | 2 | 0 | 2023-03-31T19:16:53 | szopen76 | false | null | 0 | jeg1nmk | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeg1nmk/ | false | 2 |
t1_jeg1ml3 | Could argue in circles, but will add that anyone can sign up to use LLaMA for free and legally through Facebook/Meta. The same thing with ChatGPT. The companies tend to sell server time cheaper than 3rd parties, and with ChatGPT you can pay to get better access to GPT-4, and I'm more than sure GPT-5 is already cooking.
Y... | 2 | 0 | 2023-03-31T19:16:41 | redfoxkiller | false | null | 0 | jeg1ml3 | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jeg1ml3/ | false | 2 |
t1_jeg1mic | If you want to just run stuff, a decent modern PC can do it (e.g. using llama.cpp). An Apple Silicon machine would work even better. If you want to try your hand at training etc., try renting nodes from lambdalabs or something. That's far cheaper if you just want to dabble. | 1 | 0 | 2023-03-31T19:16:40 | dranzerfu | false | null | 0 | jeg1mic | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeg1mic/ | false | 1 |
t1_jeg0vts | I've also heard of people running 30B on 12GB vram. Anyone know what's up with this? | 2 | 0 | 2023-03-31T19:11:47 | dealingwitholddata | false | null | 0 | jeg0vts | false | /r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/jeg0vts/ | false | 2 |
t1_jefzsa2 | [deleted] | 1 | 0 | 2023-03-31T19:04:29 | [deleted] | true | null | 0 | jefzsa2 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jefzsa2/ | false | 1 |
t1_jefz84a | I found a video the other day about how to hook up RunPod with Google Colab. Basically you spin up the RunPod, then you can connect to its GPU through the Colab document (the connect button at the top right has a local connection option where you just paste a URL for your RunPod setup), so you can use whatever GPUs you ... | 3 | 0 | 2023-03-31T19:00:46 | Sixhaunt | false | null | 0 | jefz84a | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefz84a/ | false | 3 |
t1_jefynuu | Right, but I fully believe that the release of this model was intended to hinder ChatGPT’s current dominance. Their T&C cover Meta’s ass if it gets used inappropriately or generates bad info. Yes, they do have to act like the T&C appear to be enforced, but they aren’t spending lots of effort doing it.
I fully ... | 5 | 0 | 2023-03-31T18:57:00 | The_frozen_one | false | null | 0 | jefynuu | false | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jefynuu/ | false | 5 |
t1_jefymi8 | I’m kind of behind, but this seems similar to that events leading up to the release of stable diffusion. Is it accurate to think that the open assistant project is stabilityAI’s front runner for an open chatgpt? | 2 | 0 | 2023-03-31T18:56:46 | xraymebaby | false | null | 0 | jefymi8 | false | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jefymi8/ | false | 2 |
t1_jefy6fi | I would want to fine-tune a model specifically for RPG-ing. That would include teaching it to never ever play for the player, following a certain format, etc. Even GPT-3.5-turbo keeps inserting "you did so and so" no matter how strongly it is prompted against it.
I started to gather data and reformat it, an... | 1 | 0 | 2023-03-31T18:53:46 | szopen76 | false | null | 0 | jefy6fi | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefy6fi/ | false | 1 |
t1_jefxf9n | [deleted] | 2 | 0 | 2023-03-31T18:48:44 | [deleted] | true | null | 0 | jefxf9n | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefxf9n/ | false | 2 |
t1_jefwxg7 | [deleted] | 2 | 0 | 2023-03-31T18:45:25 | [deleted] | true | null | 0 | jefwxg7 | false | /r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jefwxg7/ | false | 2 |
t1_jefwkpm | It will take some time to develop and test but this is what I'd like to try:
Chat just acts normal up until the 2,000-token limit. At that point it takes the oldest half of the conversation and uses a different model to condense it roughly in half, removing 500 tokens, so you now have 1,500 tokens instead of 2,000 in ... | 4 | 0 | 2023-03-31T18:43:05 | Sixhaunt | false | null | 0 | jefwkpm | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jefwkpm/ | false | 4 |
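A minimal sketch of that rolling-summary loop; count_tokens and summarize are hypothetical helpers (the latter backed by the second model), and the 2,000-token budget follows the numbers above:

```python
CONTEXT_LIMIT = 2000

def compress_history(messages, count_tokens, summarize):
    total = sum(count_tokens(m) for m in messages)
    if total <= CONTEXT_LIMIT:
        return messages                    # still under budget, do nothing
    # Peel off the oldest half of the conversation by token count.
    head, used = [], 0
    while messages and used < total // 2:
        used += count_tokens(messages[0])
        head.append(messages.pop(0))
    # Condense that half to roughly half its size (1,000 -> ~500 tokens).
    summary = summarize(head, target_tokens=used // 2)
    return [summary] + messages
```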
t1_jefwjba | The 3060 is the most affordable way to get into it. Kinda kicking myself for getting the 3070 last year! So just a bit ago I spent around $1,700 on a MacBook Air M2 with 24GB of RAM, a week before LLaMA came out, in anticipation of something hitting the public soon. So glad I did! I am able to load up the 30b model, but gene... | 2 | 0 | 2023-03-31T18:42:49 | nickythegreek | false | null | 0 | jefwjba | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefwjba/ | false | 2 |
t1_jefvcgi | It depends what kind of GPU you have, how much RAM it has, and the clock speeds. If it's slower than your main CPU and RAM, then yeah, raw-dogging it would probably work better.
I did some testing last night and compared speeds for how long it took my system to finish a reply from when I hit enter.
7B with two different... | 3 | 0 | 2023-03-31T18:34:55 | redfoxkiller | false | null | 0 | jefvcgi | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jefvcgi/ | false | 3 |
t1_jeft3xg | alpaca.cpp is deprecated and not being updated anymore. Use llama.cpp.
For parameters, check the wiki page for [Getting Started](https://www.reddit.com/r/LocalLLaMA/wiki/index) and experiment with the settings there. | 1 | 0 | 2023-03-31T18:20:10 | Civil_Collection7267 | false | null | 0 | jeft3xg | false | /r/LocalLLaMA/comments/127r2ze/cant_generate_random_stuff/jeft3xg/ | false | 1 |
t1_jefs2lm | I would definitely look at runpod and vastai or lambdalabs as others have suggested, but it really depends on what you're trying to do... inference is different than training, may want to set some goals first.. but I recently stopped using my local PC because actually it's more cost effective for me to use on demand. | 2 | 0 | 2023-03-31T18:13:18 | humanbeingmusic | false | null | 0 | jefs2lm | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefs2lm/ | false | 2 |
t1_jefrt65 | I have done loras with colab pro. All the same settings as the normal alpaca lora except I pushed max token length from 512 to 700 since I also added new, longer, data. Took just under 12 hours but if you go with 256 for max token length you could increase batch size heavily and do it in 3-4 hours | 2 | 0 | 2023-03-31T18:11:36 | Sixhaunt | false | null | 0 | jefrt65 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefrt65/ | false | 2 |
t1_jefr23a | > Instead, please accept the heartfelt thanks from a human being.
We gon' be running out of these soon enough 😭 | 3 | 0 | 2023-03-31T18:06:40 | Scriptod | false | null | 0 | jefr23a | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefr23a/ | false | 3 |
t1_jefqy9g | What's the problem with data? | 1 | 0 | 2023-03-31T18:05:59 | Scriptod | false | null | 0 | jefqy9g | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefqy9g/ | false | 1 |
t1_jefqpft | I explained the basic process at a high level, but in the end you would have to code it in whatever language you're using. Likely python if you're doing it directly in a colab or something. I could give code for my example but it would rely completely on the format of that specific dataset and not be very useful to oth... | 1 | 0 | 2023-03-31T18:04:23 | Sixhaunt | false | null | 0 | jefqpft | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jefqpft/ | false | 1 |
t1_jefqp7x | yeah. This assumed a Question/Answer Dataset with elements in the format:
{
instruction: "What is the weight of an elephant?",
input: "",
output: "an elephant weighs roughly 13.87lbs"
}
since that's the alpaca dataset format, so if you are using their dataset then you still need to kee... | 2 | 0 | 2023-03-31T18:04:20 | Sixhaunt | false | null | 0 | jefqp7x | false | /r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jefqp7x/ | false | 2 |
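For illustration, a minimal sketch of turning one record in this format into a single training string, loosely following the Alpaca-style prompt template (paraphrased from memory, so treat the exact header wording as an assumption):

```python
record = {
    "instruction": "What is the weight of an elephant?",
    "input": "",
    "output": "an elephant weighs roughly 13.87lbs",  # values from the example above
}

def to_prompt(r):
    parts = ["Below is an instruction that describes a task.",
             f"### Instruction:\n{r['instruction']}"]
    if r["input"]:                       # optional context field
        parts.append(f"### Input:\n{r['input']}")
    parts.append(f"### Response:\n{r['output']}")
    return "\n\n".join(parts)

print(to_prompt(record))
```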
t1_jefms83 | Looking at the prices, I wonder whether it would be realistic for me to hire some "pod" for a few hours to train a model, and then download it and run it on my home machine.
The biggest obstacle seems to be data. | 1 | 0 | 2023-03-31T17:38:46 | szopen76 | false | null | 0 | jefms83 | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefms83/ | false | 1 |
t1_jeflpji | The 4/8-bit versions of LLaMA are meant to have video cards running them, but it will pass things down to the RAM if you run out of VRAM.
With my setup I tend to use 15GB of RAM while I have something running. But that's also because I have two graphics cards running, which takes a load off the CPUs and RAM and makes thi... | 1 | 0 | 2023-03-31T17:31:45 | redfoxkiller | false | null | 0 | jeflpji | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jeflpji/ | false | 1 |
t1_jefko8v | I mean, the shapes of the weight matrices are different across models, right? lol | 3 | 0 | 2023-03-31T17:24:52 | QTQRQD | false | null | 0 | jefko8v | false | /r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jefko8v/ | false | 3 |
t1_jefiwzw | My experience has been that LLaMA-13B 4-bit quantized is amazing, and it can run on an RTX 3060. You don't need much RAM or processing power for it (though I still got 64GB of it). | 6 | 0 | 2023-03-31T17:13:15 | MentesInquisitivas | false | null | 0 | jefiwzw | false | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefiwzw/ | false | 6 |
t1_jefgbzm | What's the process RAM consumption for Alpaca/LLaMA 65B 4-bit running with llama.cpp? I am on my way to pick up more RAM to upgrade to 64GB (the max of the motherboard) to hopefully run the 65b model, so this kinda killed my enthusiasm lol.
Edit: I think you are running fp16 models. I will be running int4. | 2 | 0 | 2023-03-31T16:56:24 | ThatLastPut | false | 2023-03-31T17:30:22 | 0 | jefgbzm | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jefgbzm/ | false | 2 |
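A back-of-the-envelope check of the distinction in that edit, counting weights only and ignoring context and runtime overhead:

```python
params = 65e9                                    # LLaMA-65B parameter count
for name, bytes_per in [("fp16", 2), ("int4", 0.5)]:
    gib = params * bytes_per / 2**30
    print(f"{name}: ~{gib:.0f} GiB")             # fp16: ~121 GiB, int4: ~30 GiB
```

So the 4-bit weights alone come to roughly 30 GiB, which is why 64GB of system RAM is plausible for the int4 model while fp16 would not fit.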
t1_jefg8kk | [deleted] | 7 | 0 | 2023-03-31T16:55:49 | [deleted] | true | null | 0 | jefg8kk | false | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jefg8kk/ | false | 7 |