name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_je9f3yw | Rule 34, eh?
By the way!
I did experiment with that, admittedly (to see what the fuss is about, not really interested in that stuff - "too old for this shit" indeed), and it seems SOMEHOW Alpaca is much more averse to writing that compared to the "base" model.
That's... strange? Why "instruction finetuning" somehow turns ai ... | 4 | 0 | 2023-03-30T11:37:27 | BalorNG | false | null | 0 | je9f3yw | false | /r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/je9f3yw/ | false | 4 |
t1_je9dal1 | I think this objectively works better than the LoRA. | 1 | 0 | 2023-03-30T11:18:07 | friedrichvonschiller | false | null | 0 | je9dal1 | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je9dal1/ | false | 1 |
t1_je99oq6 | I encountered similar problems with Alpaca 7B 4bit. | 2 | 0 | 2023-03-30T10:35:57 | RadioFreeAmerika | false | null | 0 | je99oq6 | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je99oq6/ | false | 2 |
t1_je98it2 | | 2 | 0 | 2023-03-30T10:20:54 | AgencyImpossible | false | null | 0 | je98it2 | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je98it2/ | false | 2 |
t1_je97yv5 | Haha yes it is somewhat unfriendly, it makes you appreciate why ChatGPT is always overly friendly and submissive. | 8 | 0 | 2023-03-30T10:13:31 | Zyj | false | null | 0 | je97yv5 | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je97yv5/ | false | 8 |
t1_je96uxc | Those were Sphinx Moth out of the box. | 4 | 0 | 2023-03-30T09:58:23 | friedrichvonschiller | false | null | 0 | je96uxc | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je96uxc/ | false | 4 |
t1_je96j1v | And use this [OIG](https://laion.ai/blog/oig-dataset/) dataset for even more instruction data.
Teach it to use a calculator with toolformer.
Play a text adventure with it.
Even if we have LLaMA as the best model for years there is a ton of development to do on it. | 5 | 0 | 2023-03-30T09:53:44 | CellWithoutCulture | false | null | 0 | je96j1v | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je96j1v/ | false | 5 |
t1_je96fjy | [deleted] | 5 | 0 | 2023-03-30T09:52:23 | [deleted] | true | null | 0 | je96fjy | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je96fjy/ | false | 5 |
t1_je968he | I asked alpaca 7B to give a set of instructions on how to cook meth with commonly available tools and ingredients and as far as I know after a bit of prompt engineering it gave a correct answer. That does not however make me the slightest bit interested in making, using or selling meth. The way I see it these tools amp... | 8 | 0 | 2023-03-30T09:49:37 | MoneyPowerNexis | false | 2023-03-30T11:37:26 | 0 | je968he | false | /r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/je968he/ | false | 8 |
t1_je959nb | I think many people expected a polished product like ChatGPT and were disappointed when trying out the bigger LLaMA models. It is not easy to prompt for tasks that are more nuanced compared to summarization or basic question answering. Some tasks might be impossible to prompt for without instruction fine-tuning (and RL... | 13 | 0 | 2023-03-30T09:35:52 | Blacky372 | false | null | 0 | je959nb | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je959nb/ | false | 13 |
t1_je94yyj | The lizard brain of each of us, confined in our skull: that's what I'm calling self-prompting. People are driven innately | 1 | 0 | 2023-03-30T09:31:28 | qepdibpbfessttrud | false | null | 0 | je94yyj | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je94yyj/ | false | 1 |
t1_je945pr | Hey, nice to hear.
Unfortunately, I haven't had the time to continue my experiments after trying to use the new act-order and groupsize combination. That didn't work for me, but I also had significant issues dealing with CUDA versions and tooling.
Here is an example from my processing script:
# 7b 4-bit with gro... | 1 | 0 | 2023-03-30T09:19:29 | Blacky372 | false | null | 0 | je945pr | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je945pr/ | false | 1 |
t1_je943tw | Wow! Amazing response! There is a lot here, and some of it shows clearly that I have little idea what I'm doing, so there's a lot to learn.
Many thanks for taking the time to put this response together and so quickly. | 1 | 0 | 2023-03-30T09:18:41 | VisualPartying | false | null | 0 | je943tw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je943tw/ | false | 1 |
t1_je93uh6 | Make sure UI setting "stop generating after new line" is not set. | 18 | 0 | 2023-03-30T09:14:50 | vaidas-maciulis | false | null | 0 | je93uh6 | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je93uh6/ | false | 18 |
t1_je93qx1 | I tried to get it to run from WSL for two days and didn't get it to work; then I tried it directly on Windows and it worked without a problem (not sure if it is properly utilizing my graphics card, though). | 3 | 0 | 2023-03-30T09:13:25 | RadioFreeAmerika | false | null | 0 | je93qx1 | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je93qx1/ | false | 3 |
t1_je93iwm | Same thing on 7B 4-bit! Though asking it for code in notebook/default interface mode usually works (btw it's not good at coding 😔) | 5 | 0 | 2023-03-30T09:10:12 | Famberlight | false | null | 0 | je93iwm | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je93iwm/ | false | 5 |
t1_je93fcy | I have tried similar things, and sometimes it works well, but sometimes it just hallucinates new details into the text. Can you share your parameters, temperature, etc.? | 1 | 0 | 2023-03-30T09:08:46 | akubit | false | null | 0 | je93fcy | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je93fcy/ | false | 1 |
t1_je935ab | Sorry, it evolved and learned procrastination. | 46 | 0 | 2023-03-30T09:04:44 | hleszek | false | null | 0 | je935ab | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je935ab/ | false | 46 |
t1_je92oi5 | >Any idea where I'm going wrong?
There are several problems here, but I'll try to explain it all as concisely as possible.
First, you shouldn't try to apply a LoRA like alpaca-30b to pygmalion. That LoRA will only work when running the standard [LLaMA 30B](https://huggingface.co/decapoda-research/llama-30b-hf) model... | 2 | 0 | 2023-03-30T08:57:49 | Technical_Leather949 | false | null | 0 | je92oi5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je92oi5/ | false | 2 |
t1_je91nyi | you need a character card for it to give reasonable and longer outputs. See https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/comment/je0nd1b/ | 16 | 0 | 2023-03-30T08:43:03 | bayesiangoat | false | null | 0 | je91nyi | false | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/je91nyi/ | false | 16 |
t1_je910i9 | OP, thanks for your help so far. The WebUI works great and the install was seamless (this does, as you say, use the GPU +).
The text-generation-webui is great and all, but I only get responses like the one below. Funny, but I'm looking for something like ChatGPT.
Model: pygmalion-6b
LoRA: alpaca-30... | 1 | 0 | 2023-03-30T08:33:33 | VisualPartying | false | null | 0 | je910i9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je910i9/ | false | 1 |
t1_je8yk6u | [deleted] | 1 | 0 | 2023-03-30T07:56:36 | [deleted] | true | null | 0 | je8yk6u | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je8yk6u/ | false | 1 |
t1_je8yi6p | [deleted] | 1 | 0 | 2023-03-30T07:55:45 | [deleted] | true | null | 0 | je8yi6p | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je8yi6p/ | false | 1 |
t1_je8w2q4 | Did you max out on the 65B model's potential and want more? :)
Do expert finetunes! Merge them!
Do prompt engineering like self-reflection loops (ReAct)! Run several expert models in parallel and set up a "GAN mode" as a sort of "expert consilium"!
Add your own APIs and databases for it to use, like Wolfram with GPT4!
We d... | 7 | 0 | 2023-03-30T07:20:13 | BalorNG | false | null | 0 | je8w2q4 | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je8w2q4/ | false | 7 |
t1_je8vzpw | I agree. LLaMA and its derivative Alpaca both perform well, but they are Meta's proprietary weights, with a restrictive process for accessing them and a non-commercial, research-use-only license.
We need a LLaMA-like base model with similar or better capabilities without licensing limitations on use.
We’re so close to being able to ... | 10 | 0 | 2023-03-30T07:19:01 | DonKosak | false | null | 0 | je8vzpw | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je8vzpw/ | false | 10 |
t1_je8u9j2 | Hey u/Blacky372,
I'm still waiting for 65B to finish installing on my server, but I'm more than happy to run quantizing for the 65B model. I'm new to LLaMA, so I need the steps to get it done.
I'm running dual Intel E5-2650 (Each is 24cores at 2.2 base), with 384GB of RAM, and two Grids with 16GB of VRAM... So if you ... | 2 | 0 | 2023-03-30T06:55:03 | redfoxkiller | false | null | 0 | je8u9j2 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je8u9j2/ | false | 2 |
t1_je8s7m7 | Good god, AI is advancing so fast that people are getting spoiled lmao. Used to be years upon years would pass for the progress we've seen in just half a year. | 9 | 0 | 2023-03-30T06:27:42 | Freak2121 | false | null | 0 | je8s7m7 | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je8s7m7/ | false | 9 |
t1_je8rjuz | How can I change the character when I'm using alpaca.cpp? | 1 | 0 | 2023-03-30T06:19:10 | -2b2t- | false | null | 0 | je8rjuz | false | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/je8rjuz/ | false | 1 |
t1_je8ra4d | OK, I hope it can be sooner, with these two things finished in 2023. | 1 | 0 | 2023-03-30T06:15:37 | nillouise | false | null | 0 | je8ra4d | false | /r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/je8ra4d/ | false | 1 |
t1_je8qor9 | Sure, you can do that. The file isn't too big; I think it only uses 6-8 GB of RAM.
Have fun :p | 1 | 0 | 2023-03-30T06:08:04 | Wonderful_Ad_5134 | false | null | 0 | je8qor9 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je8qor9/ | false | 1 |
t1_je8ptli | Thank you for your response. I understand that my 1650ti only has 4GB of VRAM, and it won't be possible to run the model in the regular way. I appreciate your suggestion to try using llama.cpp to run the models with only my CPU.
I was wondering if I could use the .bin files from Gpt4all on llama.cpp? | 1 | 0 | 2023-03-30T05:57:14 | Lorenzo9196 | false | null | 0 | je8ptli | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je8ptli/ | false | 1 |
t1_je8ppro | This one is good. I tried it on chat.colossalai.org | 8 | 0 | 2023-03-30T05:55:56 | irfantogluk | false | null | 0 | je8ppro | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je8ppro/ | false | 8 |
t1_je8otzu | Can you explain what you mean by form?
If you're using llama.cpp, then you pass the parameters in with the command you use to run it. At least, that's how it worked last time I checked that project.
If you're using the web UI, you can adjust the parameters in the Parameters tab of the interface. | 2 | 0 | 2023-03-30T05:45:14 | Civil_Collection7267 | false | null | 0 | je8otzu | false | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/je8otzu/ | false | 2 |
t1_je8omat | Indeed it does :) | 2 | 0 | 2023-03-30T05:42:36 | BalorNG | false | null | 0 | je8omat | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/je8omat/ | false | 2 |
t1_je8okq0 | >I mean the model that can visit internet
nothing like that currently for LLaMA at least.
>I think there was enough time to do this, or the open source community don't care this thing?
the community does care, but these implementations take time. right now, we're all still trying to get a good finetune first, ... | 2 | 0 | 2023-03-30T05:42:03 | Civil_Collection7267 | false | null | 0 | je8okq0 | false | /r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/je8okq0/ | false | 2 |
t1_je8od33 | Hi. You just mentioned *The Hitchhiker's Guide to the Galaxy* by Douglas Adams.
I've found an audiobook of that novel on YouTube. You can listen to it here:
[YouTube | Douglas Adams - The Hitchhiker's Guide to the Galaxy #1 - [Full Audiobook]](https://www.youtube.com/watch?v=mBzYMYXtzEA)
*I'm a bot that searches YouTube... | 0 | 0 | 2023-03-30T05:39:28 | SFF_Robot | false | null | 0 | je8od33 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je8od33/ | false | 0 |
t1_je8ocaj | @WolframRavenwolf Can you add some other local LLMs to this test?
I asked some questions to chat.colossalai.org
Got any creative ideas for a 10 year old's birthday?
A great idea would be to have a movie night with friends and family, where everyone brings their favorite snacks or drinks! You could also do a scaveng... | 1 | 0 | 2023-03-30T05:39:12 | irfantogluk | false | null | 0 | je8ocaj | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je8ocaj/ | false | 1 |
t1_je8o9fb | In which form do the arguments have to be? | 1 | 0 | 2023-03-30T05:38:16 | -2b2t- | false | null | 0 | je8o9fb | false | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/je8o9fb/ | false | 1 |
t1_je8o7g0 | Thank you :) | 1 | 0 | 2023-03-30T05:37:38 | -2b2t- | false | null | 0 | je8o7g0 | false | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/je8o7g0/ | false | 1 |
t1_je8nvmr | 7B 4-bit is not that good. Good coherency comes at 13B and improves a lot at 33B. | 2 | 0 | 2023-03-30T05:33:43 | Civil_Collection7267 | false | null | 0 | je8nvmr | false | /r/LocalLLaMA/comments/126c1ca/anyone_else_have_llamacpp_7b_4bit_ggml_change/je8nvmr/ | false | 2 |
t1_je8nheo | you can finetune LLaMA with documents related to your program.
>Also, can it be set to answer only specific program-related questions?
I see you asked this same question in r/ChatGPT. I think you may be confused over what LLaMA is as a large language model. This isn't a service like something offered by OpenAI. Yo... | 1 | 0 | 2023-03-30T05:28:59 | Civil_Collection7267 | false | null | 0 | je8nheo | false | /r/LocalLLaMA/comments/126cfqj/can_i_feed_all_documents_related_to_a_specific/je8nheo/ | false | 1 |
t1_je8km6b | BTW, this is confirmed by this "how to install" guide: [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
It says 30B in 4 bit mode is 20 GB of VRAM, but it notes you need 64GB system ... | 1 | 0 | 2023-03-30T04:56:17 | the_quark | false | null | 0 | je8km6b | false | /r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je8km6b/ | false | 1 |
t1_je8hzu3 | >1650ti
Your 1650 ti has only 4GB of vram, there's no way you can run this in the regular way.
Fortunately, you can use llama.cpp to run the models with only your CPU (and your 16GB of RAM).
[https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
But for the moment, the alpaca-7b-native-enhanc... | 2 | 0 | 2023-03-30T04:28:52 | Wonderful_Ad_5134 | false | null | 0 | je8hzu3 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je8hzu3/ | false | 2 |
t1_je8fs6r | I'm new to this. Would I be able to run this on my laptop with 16GB RAM and a 1650ti? | 1 | 0 | 2023-03-30T04:06:51 | Lorenzo9196 | false | null | 0 | je8fs6r | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je8fs6r/ | false | 1 |
t1_je8eroh | Maybe something didn't install properly along the way, such as an HTTP error you may have missed?
Try removing the conda environment with:
conda env remove -n textgen
conda clean --all -y
conda clean -f -y
then starting over with the instructions [provided by oobabooga](https://github.com/oobabooga/text-gene... | 2 | 0 | 2023-03-30T03:57:04 | Technical_Leather949 | false | null | 0 | je8eroh | false | /r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/je8eroh/ | false | 2 |
t1_je8dmum | >Right after I run the command, i get `No CUDA runtime is found`
>
>I've tried to change the imports and reinstall torch
Are you sure you didn't skip over any steps? The current instructions involve:
>torch==1.12+cu113
It can be very easy to overwrite this if you follow the steps out of order. Each ... | 1 | 0 | 2023-03-30T03:46:26 | Technical_Leather949 | false | null | 0 | je8dmum | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je8dmum/ | false | 1 |
t1_je8co83 | Alternatively, you can open "Anaconda Prompt (miniconda3)" and run the server.py command. | 1 | 0 | 2023-03-30T03:37:40 | Technical_Leather949 | false | null | 0 | je8co83 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je8co83/ | false | 1 |
t1_je89j7n | Credit Yorksototh of [LocalLLaMA Discord](https://discord.gg/WtjJY7rsgX). Slightly reduces bot short-term memory. **ONE OF US**
{
"char_name": "The CSS Bot",
"char_persona": "The CSS Bot loves to write text in <H1></H1> and <H2></H2> format, and loves to add rounded border... | 3 | 0 | 2023-03-30T03:09:38 | friedrichvonschiller | false | 2023-03-30T05:07:02 | 0 | je89j7n | false | /r/LocalLLaMA/comments/1269knq/chat_emojis_from_a_character_card_click_for_json/je89j7n/ | false | 3 |
t1_je87pmk | [deleted] | 1 | 0 | 2023-03-30T02:54:14 | [deleted] | true | null | 0 | je87pmk | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je87pmk/ | false | 1 |
t1_je85l57 | Same... but I get it way less often than with the "filtered" one.
I guess the database still has some woke bullshit in it. And tbh the answers are quite garbage (due to being a LoRA).
Tbh I'm not really hyped by all of this, especially when I learned that they used GPT 3.5 turbo to get the answers (That's the most retarded gpt 3.... | 3 | 0 | 2023-03-30T02:36:49 | Wonderful_Ad_5134 | false | null | 0 | je85l57 | false | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je85l57/ | false | 3 |
t1_je84c0p | It still returns "I'm sorry, but...blabla" :( | 2 | 0 | 2023-03-30T02:26:57 | Evening_Ad6637 | false | null | 0 | je84c0p | false | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je84c0p/ | false | 2 |
t1_je82p35 | amazing, every day the landscape shifts noticeably
who expected Q1 2023 to be when life suddenly got thrilling?
great post anyhow, this stuff being accessible to a broad range of people is hugely important imo
see you after we upload haha 💖 | 2 | 0 | 2023-03-30T02:14:11 | penny_admixture | false | null | 0 | je82p35 | false | /r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je82p35/ | false | 2 |
t1_je82afi | LLM outputs are not factual; they are not fact-adjacent; they are not in the same category as facts.
Asking an LLM to give you facts is like asking a toaster for facts.
Asking an LLM for facts is like asking a river for facts.
Do not ask LLMs for facts. | 0 | 0 | 2023-03-30T02:11:02 | magataga | false | null | 0 | je82afi | false | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/je82afi/ | false | 0 |
t1_je81juc | You need to be super careful; the older models generally only have 32-bit channels | 2 | 0 | 2023-03-30T02:05:17 | magataga | false | null | 0 | je81juc | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je81juc/ | false | 2 |
t1_je7xolo | Tweet version! Brilliant.
It did a fantastic job. This is a legitimate alternative to Big AI for many needs. | 3 | 0 | 2023-03-30T01:35:32 | friedrichvonschiller | false | null | 0 | je7xolo | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je7xolo/ | false | 3 |
t1_je7xhx6 | Holy shit, I'm amazed at how well this worked. From this source text ([https://www.reuters.com/article/us-britain-radcliffe/lifes-magic-as-daniel-radcliffe-turns-18-idUSL2368089020070723](https://www.reuters.com/article/us-britain-radcliffe/lifes-magic-as-daniel-radcliffe-turns-18-idUSL2368089020070723)) using 100 word... | 7 | 0 | 2023-03-30T01:34:07 | MentesInquisitivas | false | null | 0 | je7xhx6 | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je7xhx6/ | false | 7 |
t1_je7xh96 | Let's goooo the files are here!!
[https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced/tree/main/7B-2nd-train](https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced/tree/main/7B-2nd-train) | 2 | 0 | 2023-03-30T01:33:58 | Wonderful_Ad_5134 | false | 2023-03-30T07:36:36 | 0 | je7xh96 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je7xh96/ | false | 2 |
t1_je7xbx2 | Be nice to Miku! | 6 | 0 | 2023-03-30T01:32:48 | crash1556 | false | null | 0 | je7xbx2 | false | /r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/je7xbx2/ | false | 6 |
t1_je7uwi6 | [deleted] | 1 | 0 | 2023-03-30T01:14:06 | [deleted] | true | null | 0 | je7uwi6 | false | /r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je7uwi6/ | false | 1 |
t1_je7txys | If you're doing a full-fledged fine-tune, you need to also track gradient updates as well as whatever information your optimizer (e.g. AdamW) stores between steps.
If you're doing LoRA tuning, my understanding is that the requirements should be only slightly higher than the requirements for running inference (you stil... | 1 | 0 | 2023-03-30T01:06:55 | InfinitePerplexity99 | false | null | 0 | je7txys | false | /r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/je7txys/ | false | 1 |
t1_je7spww | Frick!!! It works perfectly!
You can even requantize the original models! LoRA training is implemented! Things run very well; there are a few little hiccups, but this is by far my best installation.
Speed-wise, I don't want to jump to conclusions... but from what I can tell, even at 30B 4-bit it's just as fast and ... | 3 | 0 | 2023-03-30T00:57:46 | Inevitable-Start-653 | false | null | 0 | je7spww | false | /r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je7spww/ | false | 3 |
t1_je7rx0t | Mr. Oobabooga has updated his repo with a one-click installer... and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation | 1 | 0 | 2023-03-30T00:51:46 | Inevitable-Start-653 | false | null | 0 | je7rx0t | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/je7rx0t/ | false | 1 |
t1_je7rub5 | Excellent! Glad you got it working... but things move so fast that the instructions are already outdated. Mr. Oobabooga has updated his repo with a one-click installer... and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation | 1 | 0 | 2023-03-30T00:51:13 | Inevitable-Start-653 | false | null | 0 | je7rub5 | false | /r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je7rub5/ | false | 1 |
t1_je7rn41 | You are welcome <3 Things move so fast that the instructions are already outdated. Mr. Oobabooga has updated his repo with a one-click installer... and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation | 3 | 0 | 2023-03-30T00:49:46 | Inevitable-Start-653 | false | null | 0 | je7rn41 | false | /r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je7rn41/ | false | 3 |
t1_je7rflg | > quant_cuda_kernel.cu(906): error: identifier "__hfma2" is undefined
What version of CUDA are you running? I definitely was getting that error at one point, but can't remember if my upgrade from CUDA 11.6 to 11.8 is what fixed it. Also make sure you're using torch 2.0.0. | 2 | 0 | 2023-03-30T00:48:12 | TeamPupNSudz | false | null | 0 | je7rflg | false | /r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/je7rflg/ | false | 2 |
t1_je7qrk2 | try the parameters suggested in the Getting Started wiki: [https://www.reddit.com/r/LocalLLaMA/wiki/index](https://www.reddit.com/r/LocalLLaMA/wiki/index) | 2 | 0 | 2023-03-30T00:43:20 | Civil_Collection7267 | false | null | 0 | je7qrk2 | false | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/je7qrk2/ | false | 2 |
t1_je7qofx | Ask ChatGPT and report back please and thank you! | 1 | 0 | 2023-03-30T00:42:42 | Moist___Towelette | false | null | 0 | je7qofx | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/je7qofx/ | false | 1 |
t1_je7pw4m | this sub mostly uses the webui, so you'd probably get a better answer if you ask around in the [Discussions page](https://github.com/ggerganov/llama.cpp/discussions) for llama.cpp | 1 | 0 | 2023-03-30T00:37:00 | Civil_Collection7267 | false | null | 0 | je7pw4m | false | /r/LocalLLaMA/comments/125tygp/issue_using_huggingface_weights_with_llamacpp/je7pw4m/ | false | 1 |
t1_je7p1xl | >Are you accepting donations?
Like friedrichvonschiller said below, this is a labor of love for most people involved in all of this. If you're asking if r/LocalLLaMA would ever accept donations to try to create a nonprofit, that answer is no, but the thought is appreciated.
>What about setting up a true open so... | 1 | 0 | 2023-03-30T00:30:51 | Civil_Collection7267 | false | null | 0 | je7p1xl | false | /r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/je7p1xl/ | true | 1 |
t1_je7ofba | Open Assistant is the very near future, just a few weeks away. | 1 | 0 | 2023-03-30T00:26:14 | Tystros | false | null | 0 | je7ofba | false | /r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je7ofba/ | false | 1 |
t1_je7oduy | Ooh nice - saving this | 2 | 0 | 2023-03-30T00:25:56 | austospumanto | false | null | 0 | je7oduy | false | /r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/je7oduy/ | false | 2 |
t1_je7o6am | I just tried windows speech recognition and it was a lot more accurate but a bit slower. I doubt I could port that to a raspberry pi but it can run locally. Not sending the audio out to the internet is important to me but if that does not matter then I'm sure whisper or just about any paid or advertising supported voic... | 1 | 0 | 2023-03-30T00:24:25 | MoneyPowerNexis | false | null | 0 | je7o6am | false | /r/LocalLLaMA/comments/123pxny/a_simple_voice_to_text_python_script_using_vosk/je7o6am/ | false | 1 |
t1_je7jkf7 | There is no such thing as self-prompting, in my estimation. There is only a universe that prompts humans, and humans that in turn prompt AI. | 1 | 0 | 2023-03-29T23:49:48 | friedrichvonschiller | false | null | 0 | je7jkf7 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je7jkf7/ | false | 1 |
t1_je7jevx | >regenerate
I don't get the distinction. We're using learned patterns in either scenario. What is the difference between generation and regeneration to you?
>creative and creative people
How is this not just a special case of generation? We're looking for associations that we are unfamiliar with.
I try thr... | 1 | 0 | 2023-03-29T23:48:36 | friedrichvonschiller | false | null | 0 | je7jevx | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je7jevx/ | false | 1 |
t1_je7hevq | can confirm.
it worked last night, just not the extensions.
I've done a manual install too though just to be safe | 3 | 0 | 2023-03-29T23:33:21 | bubblyhobo15 | false | null | 0 | je7hevq | false | /r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je7hevq/ | false | 3 |
t1_je7h2a9 | We've had LLaMA for just about a month. I know AI moves fast but how fast do you expect it to go?
If you want to make it better at something, fine-tune it yourself to be better. I'm sure you won't reach the ceiling before another model comes out at the rate this is all going. | 6 | 0 | 2023-03-29T23:30:44 | GreenTeaBD | false | null | 0 | je7h2a9 | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je7h2a9/ | false | 6 |
t1_je7glgo | ya me too, you say 160 later in your post. | 1 | 0 | 2023-03-29T23:27:15 | msgs | false | null | 0 | je7glgo | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/je7glgo/ | false | 1 |
t1_je7ggsk | >It's disquieting
Until AI closes the loop and starts self-prompting, anything resembling sentience is mostly irrelevant, no matter how good the theory of mind demonstrated by the model's answers is. I wouldn't be surprised if we have generative superintelligent AI much earlier than AGI.
When AGI would be self-promp... | 1 | 0 | 2023-03-29T23:26:16 | qepdibpbfessttrud | false | null | 0 | je7ggsk | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je7ggsk/ | false | 1 |
t1_je7fp39 | > We generate when we're awake
I'd say we mostly regenerate. Patterns built up much earlier in life are highlighted based on repeating stimuli from the real world, including from our body.
I bet we're mostly running inference, and only rarely generation, i.e. rewiring.
A minority of people are creative, and creative peop... | 1 | 0 | 2023-03-29T23:20:32 | qepdibpbfessttrud | false | null | 0 | je7fp39 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je7fp39/ | false | 1 |
t1_je7f4g3 | The installation guide cites 0 as the value to use, but when I used 0 the model gave me weird output (i.e. a string of "?? ?? ?? ?? ?? ??.....").
I figured Alpaca.cpp must interpret 0 differently than oobabooga's web UI (i.e. likely one interprets it as "unlimited" while the other takes it literally as "choose from the to... | 2 | 0 | 2023-03-29T23:16:11 | AI-Pon3 | false | null | 0 | je7f4g3 | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/je7f4g3/ | false | 2 |
t1_je7eyxz | I am already using the new weights though | 1 | 0 | 2023-03-29T23:15:02 | nero10578 | false | null | 0 | je7eyxz | false | /r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/je7eyxz/ | false | 1 |
t1_je7e9h0 | Yeah, I'm aware of Open Assistant and am definitely looking forward to it taking off. I meant more "immediate" future like the next year but after that who knows what'll be available. | 1 | 0 | 2023-03-29T23:09:44 | AI-Pon3 | false | null | 0 | je7e9h0 | false | /r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je7e9h0/ | false | 1 |
t1_je7dygk | Maybe this?
[https://www.reddit.com/r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/](https://www.reddit.com/r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/) | 1 | 0 | 2023-03-29T23:07:29 | MentesInquisitivas | false | null | 0 | je7dygk | false | /r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/je7dygk/ | false | 1 |
t1_je7djpf | I don't get what you're saying, since it's better than GPT-3 on most benchmarks? | 13 | 0 | 2023-03-29T23:04:25 | perplexity23948 | false | null | 0 | je7djpf | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je7djpf/ | false | 13 |
t1_je7d3ni | Has anyone managed to get a chat character to think out loud that way? I tried to get it to do that by giving examples like this:
>You: I have been having a lot of nightmares. They won't go away.
Therapist: (Let's check for physiological causes.) I'm sorry to hear that. Are you on any medication? What do you ... | 1 | 0 | 2023-03-29T23:01:03 | akubit | false | null | 0 | je7d3ni | false | /r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je7d3ni/ | false | 1 |
t1_je7ciju | Why does it suck so much? (Honest question; just last night I installed Alpaca.cpp and was impressed by it, even though the weights I'm using seem to be wanting; there seem to be better ones to try out.) | 3 | 0 | 2023-03-29T22:56:41 | squareoctopus | false | null | 0 | je7ciju | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je7ciju/ | false | 3 |
t1_je78s8d | Bad for what exactly? What are you trying to do? What do your prompts look like? | 2 | 0 | 2023-03-29T22:28:55 | CheshireAI | false | null | 0 | je78s8d | false | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/je78s8d/ | false | 2 |
t1_je78qck | Thank you I've been tagging it wrong and this made it click! | 2 | 0 | 2023-03-29T22:28:32 | ThePseudoMcCoy | false | null | 0 | je78qck | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je78qck/ | false | 2 |
t1_je770yr | * For a more creative chat, use: temp 0.72, rep pen 1.1, top_k 0, and top_p 0.73
I believe top_k should be 160 and not 0 here? | 2 | 0 | 2023-03-29T22:16:07 | msgs | false | null | 0 | je770yr | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/je770yr/ | false | 2 |
t1_je74p3l | Home of ~~The Whopper~~**~~®~~** The Pile™ | 2 | 0 | 2023-03-29T21:59:23 | friedrichvonschiller | false | null | 0 | je74p3l | false | /r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/je74p3l/ | false | 2 |
t1_je747fm | As much as I'm excited about all of this, we really need to move beyond LLaMA. It sucks that it's the best "open" model we can play with. :( | 8 | 0 | 2023-03-29T21:55:55 | _wsgeorge | false | null | 0 | je747fm | false | /r/LocalLLaMA/comments/125u56p/colossalchat/je747fm/ | false | 8 |
t1_je6yym3 | There is a "true open source AI non-profit" already. It's EleutherAI. | 6 | 0 | 2023-03-29T21:19:50 | Tystros | false | null | 0 | je6yym3 | false | /r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/je6yym3/ | false | 6 |
t1_je6wf4b | In that case it would be trained off LLaMA 7B, since that's what the colab I found used. The colab essentially just goes through training the Alpaca 7B LoRA, but I mixed my data in with the Alpaca data. I'm not sure yet what the best training methods are, or if there's a better way to do it. I'd like to train 13b if... | 2 | 0 | 2023-03-29T21:02:43 | Sixhaunt | false | null | 0 | je6wf4b | false | /r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/je6wf4b/ | false | 2 |
t1_je6vlte | It was surprising to me, too. But I'm checking right now, and as it's actively generating responses it takes about 20GB of VRAM. Oobabooga running takes about 5.1GB, and that's it.
I'm going to experiment with 65B but I still need to convert it to 4-bit first, and I want to upgrade to the latest version of Oobabo... | 1 | 0 | 2023-03-29T20:57:21 | the_quark | false | null | 0 | je6vlte | false | /r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6vlte/ | false | 1 |
t1_je6vg7z | So if I close the webui server and want to start it back up again, I have to go through the process of opening the x64 tools -> entering and activating conda -> running server.py? | 1 | 0 | 2023-03-29T20:56:21 | patrizl001 | false | null | 0 | je6vg7z | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je6vg7z/ | false | 1 |
t1_je6v8wc | [deleted] | 1 | 0 | 2023-03-29T20:55:01 | [deleted] | true | 2023-03-29T21:16:24 | 0 | je6v8wc | false | /r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je6v8wc/ | false | 1 |
t1_je6v5ex | Prompts formatted as follows and given to LLaMA 13B 8bit. Results are zero-shot except one.
## MAIN TEXT
[paste]
## 100 WORD SUMMARY
\n
Click generate. No LoRA was used. [More tips](https://www.reddit.com/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/). | 4 | 0 | 2023-03-29T20:54:24 | friedrichvonschiller | false | 2023-03-30T04:52:55 | 0 | je6v5ex | false | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/je6v5ex/ | false | 4 |
t1_je6rg8i | Not op but they might be referring to the model size in parameters 7b/13b/30b/65b | 2 | 0 | 2023-03-29T20:30:29 | LetMeGuessYourAlts | false | null | 0 | je6rg8i | false | /r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/je6rg8i/ | false | 2 |
t1_je6qwn8 | That's... strange. After the model loads, the system RAM is almost completely freed; the model remains on the GPU and stays there in my case... do you offload some of your model into RAM? | 1 | 0 | 2023-03-29T20:27:00 | BalorNG | false | null | 0 | je6qwn8 | false | /r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6qwn8/ | false | 1 |
t1_je6qgfy | Well I stand corrected!
But yeah when I was running 13B I had 11GB of VRAM (practically 10 since I run my display off it too) and I think it was using over 32GB of system RAM, but I'm not sure there.
I'm running 30B now in 4 bit mode in about 20GB on my RTX-3090, using almost no system memory (something like 5GB but ... | 2 | 0 | 2023-03-29T20:24:09 | the_quark | false | null | 0 | je6qgfy | false | /r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6qgfy/ | false | 2 |