name (string) | body (string) | score (int64) | controversiality (int64) | created (timestamp[us]) | author (string) | collapsed (bool) | edited (timestamp[us]) | gilded (int64) | id (string) | locked (bool) | permalink (string) | stickied (bool) | ups (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jesc1mr | Zip for six, terrible. | 4 | 0 | 2023-04-03T14:00:15 | jsalsman | false | null | 0 | jesc1mr | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jesc1mr/ | false | 4 |
t1_jesbto5 | I feel like you are either very strange, or you're operating on a level so far above me that I mistake it for someone just being strange. Neither are unlikely. | 5 | 0 | 2023-04-03T13:58:32 | ShirleyATroll | false | null | 0 | jesbto5 | false | /r/LocalLLaMA/comments/12a9tlp/new_mexico_produces_a_bumper_crop_of_chilies/jesbto5/ | false | 5 |
t1_jes8oad | Just tried these on [GPT4All](https://github.com/nomic-ai/gpt4all):<br>1. There is a blue box with an apple and a red box with a lid inside. How do you get the apple?<br>***To retrieve the apple, one would need to open both boxes at once by using force or trickery (such as picking up the apples from their respective co... | 4 | 0 | 2023-04-03T13:34:00 | Christ0ph_ | false | null | 0 | jes8oad | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jes8oad/ | false | 4 |
t1_jes8jiw | I didn't know about that site. Thanks. | 3 | 0 | 2023-04-03T13:32:55 | UseNew5079 | false | null | 0 | jes8jiw | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jes8jiw/ | false | 3 |
t1_jes6k5u | What model are you using | 2 | 0 | 2023-04-03T13:16:37 | Puzzleheaded_Acadia1 | false | null | 0 | jes6k5u | false | /r/LocalLLaMA/comments/12aclpt/good_night_everyone/jes6k5u/ | false | 2 |
t1_jes68gp | Yeah, that's stupid, duh. | 1 | 0 | 2023-04-03T13:13:55 | jsalsman | false | null | 0 | jes68gp | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jes68gp/ | false | 1 |
t1_jes5fl0 | Did you just say a cell phone with 64 GB of RAM? | 2 | 0 | 2023-04-03T13:07:09 | ThePseudoMcCoy | false | null | 0 | jes5fl0 | false | /r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jes5fl0/ | false | 2 |
t1_jes3v5y | Perf on RK3588 on FriendlyElec NanoPi R6s: https://github.com/ggerganov/llama.cpp/issues/722<br>Three hours for one inference. It works, but uh... do not count on it? ^^;<br>That said, this morning I realized that the RK's NPU might not be kicking in:<br>`# dmesg -l err \| grep -i npu`<br>`[ 3.667491] RKNPU fdab0000.npu: can...` | 1 | 0 | 2023-04-03T12:53:28 | IngwiePhoenix | false | null | 0 | jes3v5y | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jes3v5y/ | false | 1 |
t1_jes2twk | Unfortunately, the download script I used didn't save the full repo path name. So instead, have file hashes! :) Should be able to compare those.<br>`Name: 4bit.pt`<br>`Size: 7026079809 bytes (6700 MiB)`<br>`CRC32: C58BD3E0`<br>`CRC64: 094D51973CE76C47`<br>`SHA256: c190f8a5fa869bcb223e4603643b2fa1b1eefa306a3fc4ad0c2f17356c0fce67`<br>`SHA1: 56...` | 2 | 0 | 2023-04-03T12:44:20 | IngwiePhoenix | false | null | 0 | jes2twk | false | /r/LocalLLaMA/comments/129czw0/llama_with_plugins/jes2twk/ | false | 2 |
t1_jes2enk | Well, ask a native English speaker if they would mention the red box or its lid. I believe they will also confirm that ideally, you want the complete truth about Abe and Sue, although that might be asked better.<br>I can't wait to start using your prompt. You should put it up on https://promptbase.com/marketplace?sortBy=... | 2 | 0 | 2023-04-03T12:40:30 | jsalsman | false | null | 0 | jes2enk | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jes2enk/ | false | 2 |
t1_jes13cf | GPT-3.5 Turbo has gotten faster and dumber. My theory is that the turbo version is actually a tiny model, but very highly trained on massive amounts of data they got from the free ChatGPT they did at first. That also explains why it's 10x cheaper. | 12 | 0 | 2023-04-03T12:28:18 | Pan000 | false | null | 0 | jes13cf | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jes13cf/ | false | 12 |
t1_jerytnx | Right, 5 is wrong. 1 is very ambiguous even for me as a non-English speaker, so I'm not sure we can say 100% that GPT is wrong. 2 looks OK: "Therefore, the truth is that they are not both liars."<br>I made this prompt and GPT started producing those interesting answers. Solved a complex math problem (for me) and answered ... | 7 | 0 | 2023-04-03T12:06:28 | UseNew5079 | false | 2023-04-03T12:09:31 | 0 | jerytnx | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jerytnx/ | false | 7 |
t1_jerxhxd | Those first three aren't correct. The red box is in the blue box, but it and its lid don't matter. Abe is a liar and Sue is honest. The permutation is BAC.<br>Nice prompt! Did you make it? If not, where did you find it? | 1 | 0 | 2023-04-03T11:53:01 | jsalsman | false | null | 0 | jerxhxd | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jerxhxd/ | false | 1 |
t1_jerwpfa | Is there a complete installer that sets up everything? Like the [A1111 for Stable Diffusion](https://github.com/EmpireMediaScience/A1111-Web-UI-Installer/releases)? | 3 | 0 | 2023-04-03T11:44:44 | capybooya | false | null | 0 | jerwpfa | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jerwpfa/ | false | 3 |
t1_jervs6w | GPT-3.5 can solve those questions. Check out this:<br>Question 1: *There is a blue box with an apple and a red box with a lid inside. How do you get the apple?*<br>A: Hmm, this sounds like a riddle. Let me think for a moment.<br>B: Can you provide more information about the boxes? Are they open or closed? Are they next to... | 5 | 0 | 2023-04-03T11:34:47 | UseNew5079 | false | null | 0 | jervs6w | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jervs6w/ | false | 5 |
t1_jeruas0 | > open the blue box and take out the apple<br>That's the correct answer, or "take the apple out of the blue box". Many models want to mess with the red box for some reason.<br>LLaMA 30B 4bit needs a nap :D | 8 | 0 | 2023-04-03T11:18:06 | jsalsman | false | null | 0 | jeruas0 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeruas0/ | false | 8 |
t1_jerp01o | I have the same issue with 30B 4-bit on an RTX 3090. If the prompt is excessively long, it will OOM, which is problematic for my use case. | 1 | 0 | 2023-04-03T10:11:04 | 2muchnet42day | false | null | 0 | jerp01o | false | /r/LocalLLaMA/comments/12a575f/oom_on_40gb_a100_with_65b_model_when_context_too/jerp01o/ | false | 1 |
t1_jeroivn | GPT-4 gets them all correctly | 8 | 0 | 2023-04-03T10:04:19 | iusd | false | null | 0 | jeroivn | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jeroivn/ | false | 8 |
t1_jermws7 | Bing was suspicious before failing the apple test with panache.<br>There is a blue box with an apple and a red box with a lid inside. How do you get the apple?<br>>This sounds like a riddle. Is there a trick to it? 🤔<br>No, Bing! This is easy. Give it a try. Guess.<br>>Okay, I’ll give it a try. I guess you could ope... | 6 | 0 | 2023-04-03T09:41:00 | friedrichvonschiller | false | 2023-04-03T10:11:33 | 0 | jermws7 | false | /r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jermws7/ | false | 6 |
t1_jerjcci | Any idea how I can get this running in Colab? The free tier only has 12 GB of RAM, and it does not seem to use swap when loading larger models. I just get a "Loading model" message and then a ^C! | 1 | 0 | 2023-04-03T08:45:49 | regstuff | false | null | 0 | jerjcci | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jerjcci/ | false | 1 |
t1_jerh99o | [deleted] | 1 | 0 | 2023-04-03T08:13:56 | [deleted] | true | null | 0 | jerh99o | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jerh99o/ | false | 1 |
t1_jerh1mi | That's different from me, and you're running a way better GPU than me. So it might be the CPU(s) and RAM that are causing your issue.<br>I've run 65B in two different programs and tested it out a fair amount. Although 65B is slower than the other three models, I get about 70 - 600 seconds from when I hit e... | 2 | 0 | 2023-04-03T08:10:40 | redfoxkiller | false | null | 0 | jerh1mi | false | /r/LocalLLaMA/comments/12a575f/oom_on_40gb_a100_with_65b_model_when_context_too/jerh1mi/ | false | 2 |
t1_jergek4 | Right, Alpaca Native 7B using the [model](https://huggingface.co/chavinlo/alpaca-native) from chavinlo and ooba's UI. I don't load in 8-bit or 4-bit either.<br>If you're having trouble with generation length, try the tip I mentioned in the wiki:<br>>prepend this special line before your actual instruction:<br>><br>>In a long... | 3 | 0 | 2023-04-03T08:00:58 | Civil_Collection7267 | false | null | 0 | jergek4 | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jergek4/ | false | 3 |
t1_jergdf3 | 🫨 | 2 | 0 | 2023-04-03T08:00:29 | Moist___Towelette | false | null | 0 | jergdf3 | false | /r/LocalLLaMA/comments/12aclpt/good_night_everyone/jergdf3/ | false | 2 |
t1_jerepjz | The LLaMA models have a maximum context length of 2048 tokens. | 1 | 0 | 2023-04-03T07:35:22 | Civil_Collection7267 | false | null | 0 | jerepjz | false | /r/LocalLLaMA/comments/12acrns/rope_does_not_work/jerepjz/ | true | 1 |
t1_jere1ql | I got it to load!<br>I had to update GPTQ. I changed to oobabooga's fork, as Mr. O recommends here: https://github.com/oobabooga/text-generation-webui/issues/716#issuecomment-1493219583<br>Then when I ran setup_cuda.py, I found I had mismatched versions of gcc and g++ (gxx) in conda. I had to remove them both via conda and... | 2 | 0 | 2023-04-03T07:25:29 | sfhsrtjn | false | null | 0 | jere1ql | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jere1ql/ | false | 2 |
t1_jerd552 | INSTRUCTION: Write an ad for Raid: Shadow Legends in Old English using Shakespearean prose.<br>OLD ENGLISH:<br>Thou hast been summoned to the realm of shadows, where a great evil hath risen from the darkness and threatens all that is good. Take up thy sword and join forces with heroes from across time and space to b... | 3 | 0 | 2023-04-03T07:12:05 | friedrichvonschiller | false | null | 0 | jerd552 | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jerd552/ | false | 3 |
t1_jercji4 | Wait, it's all 7B's output? Wow! When I ask it to write something it tries to be as short as possible (especially in chat mode) | 2 | 0 | 2023-04-03T07:03:26 | Famberlight | false | null | 0 | jercji4 | false | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jercji4/ | false | 2 |
t1_jer9srh | It is known because initialization and training use random noise, which means the distribution is random for every model. We know the same model with the same parameters can occupy different niches of the solution landscape, which is why MoE (mixture of experts, i.e. multiple models trained on the same data with a voting system) work... | 2 | 0 | 2023-04-03T06:25:38 | LowSpecDev972 | false | null | 0 | jer9srh | false | /r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jer9srh/ | false | 2 |
t1_jer5lib | >i have no clue why everyone isn't talking about this all the time<br>They haven't been burned by centralized risk yet. GPT-4 is far superior to Alpaca for now (it will be interesting how far new tricks can push smaller models), but is it safer? Not on any metric. | 1 | 0 | 2023-04-03T05:31:19 | [deleted] | false | null | 0 | jer5lib | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jer5lib/ | false | 1 |
t1_jer408o | Could it be the output length? Alpaca is fine-tuned on shorter responses, while llama might continue to output random stuff till it runs out of memory? | 3 | 0 | 2023-04-03T05:12:09 | Disastrous_Elk_6375 | false | null | 0 | jer408o | false | /r/LocalLLaMA/comments/12a575f/oom_on_40gb_a100_with_65b_model_when_context_too/jer408o/ | false | 3 |
t1_jer37cc | Chili con carne was the only good native output. If you get bored, end is Fission Chips.<br>If these aren't funny, neither am I, and I give up.<br>I wanted to show realistic output. It took about 10 minutes of clicking to get these. | 1 | 0 | 2023-04-03T05:02:55 | friedrichvonschiller | false | 2023-04-03T05:50:37 | 0 | jer37cc | false | /r/LocalLLaMA/comments/12a9tlp/new_mexico_produces_a_bumper_crop_of_chilies/jer37cc/ | false | 1 |
t1_jeqvl0l | I don't know why but for me Alpaca 30B runs much better than Llama 30B (4 bit). With regular llama 30B I get the same memory issues you mentioned getting with 65b. | 2 | 0 | 2023-04-03T03:45:40 | KriyaSeeker | false | null | 0 | jeqvl0l | false | /r/LocalLLaMA/comments/12a575f/oom_on_40gb_a100_with_65b_model_when_context_too/jeqvl0l/ | false | 2 |
t1_jeqsf48 | >I think if I convert the models to CoreML, I can run them on the GPU and Neural Engine hardware...<br>Yes. CoreML does all the ANE stuff automatically, it seems. It's quite useful, I dare say very strong too; look at https://github.com/apple/ml-ane-transformers .<br>Edit: The A15 and M2 have "The A15 has a 16-core Neura... | 1 | 0 | 2023-04-03T03:18:17 | Art10001 | false | 2023-04-03T04:27:37 | 0 | jeqsf48 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeqsf48/ | false | 1 |
t1_jeqqlhj | Hadn't really looked into those. Yeah, they look like they're at a pretty good price point as well. | 1 | 0 | 2023-04-03T03:01:52 | synn89 | false | null | 0 | jeqqlhj | false | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeqqlhj/ | false | 1 |
t1_jeqo75a | You can fit 2 a5000 with much less power. | 2 | 0 | 2023-04-03T02:42:11 | aniketgore0 | false | null | 0 | jeqo75a | false | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeqo75a/ | false | 2 |
t1_jeqo5mb | The point of the project was to run on an ARM Macbook, so yes it should be fine | 1 | 0 | 2023-04-03T02:41:51 | [deleted] | false | null | 0 | jeqo5mb | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jeqo5mb/ | false | 1 |
t1_jeqivkd | I'd encountered that too (but on a 3080Ti with 13B 4bit ;p ) and you're going to have to sacrifice performance by offloading more or truncating the input.<br>The [discussion in this issue](https://github.com/oobabooga/text-generation-webui/issues/443#issuecomment-1475422669) may give you some ideas. I didn't try --no-ca... | 5 | 0 | 2023-04-03T01:59:31 | friedrichvonschiller | false | 2023-04-03T03:20:58 | 0 | jeqivkd | false | /r/LocalLLaMA/comments/12a575f/oom_on_40gb_a100_with_65b_model_when_context_too/jeqivkd/ | false | 5 |
t1_jeqg8qk | I've followed all of these steps, and even did the patch, but once you close this out and start it again, you'll get the CUDA missing error even with the patch applied. I double-checked this already and tried to start again from the beginning, but I'm honestly lost.<br>**EDIT**: Literally 10 seconds after this, I activ... | 3 | 0 | 2023-04-03T01:39:28 | deFryism | false | null | 0 | jeqg8qk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jeqg8qk/ | false | 3 |
t1_jepy46m | I went with GCP primarily because I can start and stop an instance at any point in time. It's a bit more expensive than other options, but you can have this instance running every now and then for months. | 1 | 0 | 2023-04-02T23:19:43 | StocksALaCarte | false | null | 0 | jepy46m | false | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jepy46m/ | false | 1 |
t1_jepxy5l | I did the comparison by just copy-pasting the questions and answers to/from the UI. Unfortunately there doesn't seem to be a nice export option yet, or a way to do X/Y/Z automation like AUTOMATIC1111/stable-diffusion-webui does. So, yeah, it was rather tedious... | 2 | 0 | 2023-04-02T23:18:22 | WolframRavenwolf | false | null | 0 | jepxy5l | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jepxy5l/ | false | 2 |
t1_jepwry2 | Hi, thanks, with those parameters it seems to be working.<br>Edit: for others reading, I had to edit the modules\GPTQ_loader.py file. In the `load_quantized` function, before the comment that says `# Try to find the .safetensors or .pt both in models/ and in the subfolder`, I added this line: `pt_model = "gpt-x-a...` | 1 | 0 | 2023-04-02T23:09:07 | NFSO | false | 2023-04-02T23:20:33 | 0 | jepwry2 | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepwry2/ | false | 1 |
t1_jepvlw5 | No problem, I just posted the command I use within the start-webui.bat in response to another comment under this post - you could try playing around with that one. Good luck! | 2 | 0 | 2023-04-02T22:59:59 | the_real_NordVPN | false | null | 0 | jepvlw5 | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepvlw5/ | false | 2 |
t1_jepv6nl | Interesting! Thanks, I suppose I'm doing something wrong... Good to know! | 1 | 0 | 2023-04-02T22:56:39 | sfhsrtjn | false | null | 0 | jepv6nl | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepv6nl/ | false | 1 |
t1_jepv3gq | Sure, this is the command in the .bat:<br>`call python server.py --pre_layer 30 --gpu-memory 6 --auto-devices --model gpt4-x-alpaca-13b-native-4bit-128g-cuda --model_type LLaMA --wbits 4 --groupsize 128 --auto-launch`<br>--gpu-memory has to be adjusted depending on the amount of VRAM your GPU has. Th... | 5 | 0 | 2023-04-02T22:55:59 | the_real_NordVPN | false | null | 0 | jepv3gq | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepv3gq/ | false | 5 |
t1_jepu8we | I only use the cuda .pt file. You don't need the non-cuda file and you don't need the ggml folder. | 4 | 0 | 2023-04-02T22:49:36 | the_real_NordVPN | false | null | 0 | jepu8we | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepu8we/ | false | 4 |
t1_jepts4p | Can you confirm that we need EITHER the cuda OR the non-cuda .pt file? (This is just my uninformed guess, can anyone say from a more knowledgeable position?)<br>Can you also confirm that we need the folder gpt4-x-alpaca-13b-ggml-q4_1-from-gptq-4bit-128g and its contents, if we are using the CUDA version?<br>(I assume that the ... | 3 | 0 | 2023-04-02T22:45:58 | sfhsrtjn | false | 2023-04-02T22:58:26 | 0 | jepts4p | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepts4p/ | false | 3 |
t1_jepsll6 | Yeah -t 16 is same speed as -t 32 for me. | 2 | 0 | 2023-04-02T22:36:46 | ThePseudoMcCoy | false | null | 0 | jepsll6 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jepsll6/ | false | 2 |
t1_jepro8n | It might be limited by RAM bandwidth. Just a guess. I'm not sure how Windows or Linux reports CPU usage when the CPU core is waiting for a memory fetch to complete. Are you using the -t <thread count> flag? | 1 | 0 | 2023-04-02T22:29:36 | msgs | false | 2023-04-02T22:35:37 | 0 | jepro8n | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jepro8n/ | false | 1 |
t1_jeprdg4 | I have installed the webui but don't know how to load this model. Can you share the webui-start.bat command?<br>Edit: I had to manually edit some Python file in the webui. I have 8 GB of VRAM, so it seems I can't run this model anyway. | 2 | 0 | 2023-04-02T22:27:16 | NFSO | false | 2023-04-02T22:53:15 | 0 | jeprdg4 | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeprdg4/ | false | 2 |
t1_jepoumi | Up to now I worked with the llama-chat, stanford_alpaca and alpaca-lora repos and patched them manually for CPU operation on the 13B and 30B models. I have to get into the quantized world now, so I'm looking at llama.cpp.<br>But oobabooga has stuff like partial GPU loading, and I think it may be the way to go in the ... | 1 | 0 | 2023-04-02T22:07:46 | NickUnrelatedToPost | false | null | 0 | jepoumi | false | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jepoumi/ | false | 1 |
t1_jepolye | I run all those weights on a 5950X and it runs at the same speed with 16 threads vs 32 threads, so I'm not sure what is going on; not optimized enough or something.<br>Despite that, it still uses 100% CPU with 32 cores without any speed gain. Dunno where the extra power goes other than heat. | 1 | 0 | 2023-04-02T22:05:55 | ThePseudoMcCoy | false | null | 0 | jepolye | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jepolye/ | false | 1 |
t1_jeplpje | Sowwy ( ´_ゝ`) | 5 | 0 | 2023-04-02T21:44:19 | the_real_NordVPN | false | null | 0 | jeplpje | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jeplpje/ | false | 5 |
t1_jepkgd9 | My 5950X with 32 gigs of RAM runs 30B great.<br>I think that if you don't have enough RAM, your system will cache it to the SSD and it will still work, but maybe super slow.<br>When is 65B supposed to be available? | 1 | 0 | 2023-04-02T21:35:07 | ThePseudoMcCoy | false | null | 0 | jepkgd9 | false | /r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jepkgd9/ | false | 1 |
t1_jepj80h | More ads for Raid? This'll get you blocked faster than mAIm's spaghetti. | 9 | 0 | 2023-04-02T21:26:09 | nDeconstructed | false | null | 0 | jepj80h | false | /r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jepj80h/ | false | 9 |
t1_jepgzs5 | Can someone please tell me how to fine-tune an LLM or LLaMA? I want to fine-tune Cerebras 111M on the Alpaca dataset. I didn't find anything on the internet; please help. | 2 | 0 | 2023-04-02T21:10:23 | Puzzleheaded_Acadia1 | false | null | 0 | jepgzs5 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jepgzs5/ | false | 2 |
t1_jepfl81 | Yeah, I get your point. My focus was on the ultra-low power usage of ARM, which enables portable devices.<br>Let's hope new CPUs/SoCs will emerge. | 2 | 0 | 2023-04-02T21:00:20 | MAXXSTATION | false | null | 0 | jepfl81 | false | /r/LocalLLaMA/comments/129fnqd/new_chip_expands_the_possibilities_for_ai_nov/jepfl81/ | false | 2 |
t1_jepfe1m | What hardware are you running it on? I don't have a good GPU, but I have 64 GB of RAM; is that enough? Can I run 65B if I don't care how long it takes? | 2 | 0 | 2023-04-02T20:58:54 | aladin_lt | false | null | 0 | jepfe1m | false | /r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jepfe1m/ | false | 2 |
t1_jepf125 | Well, I find the oobabooga stuff to be more confusing and a worse experience than llama.cpp. I definitely wouldn't call it comfortable. I first tried to set up llama 7B with GPU processing there, but I gave up after a day; GPTQ was a mess, you needed to compile your own CUDA drivers to get it to work, and that required a ... | 1 | 0 | 2023-04-02T20:56:17 | ThatLastPut | false | null | 0 | jepf125 | false | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jepf125/ | false | 1 |
t1_jepevxk | I'm just dabbling, but once you have enough RAM to avoid swapping to disk, the CPU speed and RAM speed become the bottlenecks. Multiple cores can be used. | 2 | 0 | 2023-04-02T20:55:15 | schorhr | false | null | 0 | jepevxk | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jepevxk/ | false | 2 |
t1_jepdf7d | Yeah I’ll give it a shot. To improve on that, is the general rule of thumb more ram or more CPU speed or more cores? | 1 | 0 | 2023-04-02T20:44:48 | oscarpildez | false | null | 0 | jepdf7d | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jepdf7d/ | false | 1 |
t1_jepaezx | Can't access it atm, sorry! 7B 4-bit was roughly 1-2 words per second, 13B more like 1-2 seconds per word, IIRC.<br>It's relatively quick to set up if you want to try it. | 2 | 0 | 2023-04-02T20:23:40 | schorhr | false | null | 0 | jepaezx | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jepaezx/ | false | 2 |
t1_jep7due | Hmm, I see your point, but I'd say it's less different. The [line is pretty fuzzy now](https://www.extremetech.com/extreme/323245-risc-vs-cisc-why-its-the-wrong-lens-to-compare-modern-x86-arm-cpus), too.<br>ARM is kinda [RISC](https://en.wikipedia.org/wiki/Reduced_instruction_set_computer) and a lot of other stuff is ki... | 2 | 0 | 2023-04-02T20:02:48 | friedrichvonschiller | false | null | 0 | jep7due | false | /r/LocalLLaMA/comments/129fnqd/new_chip_expands_the_possibilities_for_ai_nov/jep7due/ | false | 2 |
t1_jep5xwm | ARM was also radically different. | 2 | 0 | 2023-04-02T19:52:38 | MAXXSTATION | false | null | 0 | jep5xwm | false | /r/LocalLLaMA/comments/129fnqd/new_chip_expands_the_possibilities_for_ai_nov/jep5xwm/ | false | 2 |
t1_jep5hnt | Oh, thank you! That info is really good news!<br>So everything I did up to now was done with FP16 models, which means I still have headroom to scale up to the bigger models when I use quantization.<br>For oobabooga, I have now succumbed to downloading it and will soon start to work from there, but I wanted to do as ... | 1 | 0 | 2023-04-02T19:49:29 | NickUnrelatedToPost | false | null | 0 | jep5hnt | false | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jep5hnt/ | false | 1 |
t1_jep5fxd | It's radically different. I wish it hadn't gotten yanked here, but I understand why it was.<br>It uses a totally new kind of RAM that can also do calculation. Individual compute-in-memory nodes can be disabled or enabled. This is a much closer approximation of how the brain is built than anything we could do before th... | 2 | 0 | 2023-04-02T19:49:09 | friedrichvonschiller | false | null | 0 | jep5fxd | false | /r/LocalLLaMA/comments/129fnqd/new_chip_expands_the_possibilities_for_ai_nov/jep5fxd/ | false | 2 |
t1_jep5e6m | Sheeesh didn’t know they were that shitty, I used them back in 2020, when they had a promotion and it worked well for me back then. Sad to see them start this shit. | 3 | 0 | 2023-04-02T19:48:48 | EnvironmentalAd3385 | false | null | 0 | jep5e6m | false | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jep5e6m/ | false | 3 |
t1_jep5aqv | [LambdaLabs](https://lambdalabs.com/) has been good to me so far. Cheap pricing, easy spin up, and no bullshit about applying to use a GPU. | 1 | 0 | 2023-04-02T19:48:08 | wagesj45 | false | null | 0 | jep5aqv | false | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jep5aqv/ | false | 1 |
t1_jep54qd | I signed up for Linode and they wanted me to put down a $100 deposit *and* write a letter to them describing why I wanted to use a GPU instance. And I still wasn't guaranteed access. I will never use Linode. | 3 | 0 | 2023-04-02T19:46:56 | wagesj45 | false | null | 0 | jep54qd | false | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jep54qd/ | false | 3 |
t1_jep2jbi | Sorry if this is unrelated to some of your post, but you said one of your favorite models is "alpaca-13b-lora-int4". I was curious what that meant. Right now I have the alpaca-7b with LoRA working as well as the llama 13b-4bit-128g. Could you possibly explain how you would get "alpaca-13b-lora-int4"? That combination, ... | 3 | 0 | 2023-04-02T19:28:30 | nstevnc77 | false | null | 0 | jep2jbi | false | /r/LocalLLaMA/comments/129czw0/llama_with_plugins/jep2jbi/ | false | 3 |
t1_jep01x3 | Any estimate how long each token takes to stream? Anecdotal rough number is fine | 1 | 0 | 2023-04-02T19:10:52 | oscarpildez | false | null | 0 | jep01x3 | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jep01x3/ | false | 1 |
t1_jeotmsd | Yep, as long as you use the right model for the version of llama. It recently changed things for uh reasons. | 1 | 0 | 2023-04-02T18:25:49 | ambient_temp_xeno | false | null | 0 | jeotmsd | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeotmsd/ | false | 1 |
t1_jeoqd8j | It's a cool tool, but I'm not too sure how useful it would be for cleaning the way I want to. It has word maps and stuff for being able to delete subject matter, so perhaps for just the "as an assistant" or "as an AI" stuff it could be useful, but that's so infrequent that it's not hard to fix myself, and I don't want to nec... | 3 | 0 | 2023-04-02T18:03:13 | Sixhaunt | false | null | 0 | jeoqd8j | false | /r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jeoqd8j/ | false | 3 |
t1_jeophpj | I have asked it other questions, or to write new stories, and it does so without duplicating stuff from the other books, and it properly applies Elder Scrolls terms, so it does seem to be learning concepts from the dataset responses and not just specific answers to specific questions, if that's what you're asking. | 1 | 0 | 2023-04-02T17:57:08 | Sixhaunt | false | null | 0 | jeophpj | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeophpj/ | false | 1 |
t1_jeopcf0 | Please post your progress. I’m thinking about getting one… | 1 | 0 | 2023-04-02T17:56:06 | Joe-Repliko | false | null | 0 | jeopcf0 | false | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeopcf0/ | false | 1 |
t1_jeoopf6 | Lookup LlamaIndex | 1 | 0 | 2023-04-02T17:51:44 | kulchacop | false | null | 0 | jeoopf6 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeoopf6/ | false | 1 |
t1_jeoj4cr | You only need LLaMA if you want to run LoRA models on top of it. That's something llama.cpp plans on offering soon, I think. I guess the GPU-based projects will also do it. | 5 | 0 | 2023-04-02T17:13:24 | Zyj | false | null | 0 | jeoj4cr | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeoj4cr/ | false | 5 |
t1_jeoiynu | Yes it's totally mind-boggling. The future is really kind of both scary and awesome right now. | 5 | 0 | 2023-04-02T17:12:17 | Zyj | false | null | 0 | jeoiynu | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeoiynu/ | false | 5 |
t1_jeoezql | HOW? | 1 | 0 | 2023-04-02T16:44:31 | AlexisSama | false | null | 0 | jeoezql | false | /r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jeoezql/ | false | 1 |
t1_jeoe34t | You should experiment with the --threads option. None of my machines run with an appropriate amount of threads automatically. Try half the available threads and go from there. Have tested on Ryzen 5900X, 2x i5's and an i7 (1 of which is running Windows) and half the thread count is where I set it. I don't get appreciab... | 1 | 0 | 2023-04-02T16:38:04 | morph3v5 | false | null | 0 | jeoe34t | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jeoe34t/ | false | 1 |
t1_jeoava5 | Yes, on a couple different levels. First, on the models themselves I'm hoping we'll eventually get to a Stable Diffusion level where people are creating and mixing models to create new full models and releasing those(Llama Alpaca Dolly gpt4all mix). We see this in Stable Diffusion where people will take the standard mo... | 6 | 0 | 2023-04-02T16:15:22 | synn89 | false | null | 0 | jeoava5 | false | /r/LocalLLaMA/comments/129czw0/llama_with_plugins/jeoava5/ | false | 6 |
t1_jeo6qa9 | I think you see the power and also the danger in AI now. You are ahead of the majority by facing the beast and running an 'unaligned', untuned local AI.<br>You know you can train or tune it further, I assume.<br>The beast is not just one beast, but will be many different beasts. Not one AI to rule them all, but many different... | 3 | 0 | 2023-04-02T15:46:06 | sEi_ | false | null | 0 | jeo6qa9 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeo6qa9/ | false | 3 |
t1_jeo64i2 | [deleted] | 2 | 0 | 2023-04-02T15:41:44 | [deleted] | true | 2023-06-10T01:10:12 | 0 | jeo64i2 | false | /r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jeo64i2/ | false | 2 |
t1_jeo3qs5 | in llama.cpp, from here - https://github.com/ggerganov/llama.cpp
# convert the 7B model to ggml FP16 format
python3 convert-pth-to-ggml.py models/7B/ 1
# quantize the model to 4-bits (using method 2 = q4_0)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
If you want to run oobabooga w... | 1 | 0 | 2023-04-02T15:24:29 | ThatLastPut | false | null | 0 | jeo3qs5 | false | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jeo3qs5/ | false | 1 |
t1_jenzaa1 | Thank you | 1 | 0 | 2023-04-02T14:51:26 | sekopasa | false | null | 0 | jenzaa1 | false | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jenzaa1/ | false | 1 |
t1_jenxyq4 | That's true even for ChatGPT isn't it? | 7 | 0 | 2023-04-02T14:41:25 | pepe256 | false | null | 0 | jenxyq4 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jenxyq4/ | false | 7 |
t1_jenwmt3 | Nope! It does all this on 40% CPU usage as measured in Activity Monitor. Do note that I am using a 4-bit quantized model, a full FP16 model will be much slower.
I'm also looking into converting these models from PyTorch to CoreML format, and seeing how well they run when given access to the GPU and Neural Engine. Ther... | 2 | 0 | 2023-04-02T14:31:15 | GreaterAlligator | false | null | 0 | jenwmt3 | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jenwmt3/ | false | 2 |
t1_jenv587 | Cool. It says 4/10 threads but does your CPU max out? It's been doing that on my desktop, I'm not sure if it's normal. | 2 | 0 | 2023-04-02T14:19:44 | Pan000 | false | null | 0 | jenv587 | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jenv587/ | false | 2 |
t1_jentf5h | It runs phenomenally well on my M1 Max Macbook Pro - alpaca.cpp running ggml-alpaca-7b-q4.bin generates text almost as fast as I can read it, on CPU only. LLaMa should be similar. It supports instruction sets like ARM NEON.
When I start it, I get this info:
> system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX... | 3 | 0 | 2023-04-02T14:06:03 | GreaterAlligator | false | 2023-04-02T14:09:21 | 0 | jentf5h | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jentf5h/ | false | 3 |
t1_jensrrj | It works well on M1 | 4 | 0 | 2023-04-02T14:00:50 | wywywywy | false | null | 0 | jensrrj | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jensrrj/ | false | 4 |
t1_jenspi1 | Hi :-)
Oracle offers a free 4 OCPU and 24GB RAM tier (Oracle Cloud Free Tier*); I'm running 7B and 13B Alpaca atm since I have no decent computer. Can't give you exact benchmarks though, but you could try playing around with it. The 30B model I have won't run, but I'm suspecting other issues; haven't had time to play around wi... | 4 | 0 | 2023-04-02T14:00:20 | schorhr | false | 2023-04-02T15:13:36 | 0 | jenspi1 | false | /r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jenspi1/ | false | 4 |
t1_jenphg1 | you are right - the 'simple' Q&A format is easily possible. But what I am confused about is, e.g. the gemstone instruction above, if the output was say 5k words describing different aspects of gemstones then once the model is trained would it be possible to ask it different questions? (say, what is the hardness rating ... | 1 | 0 | 2023-04-02T13:32:58 | cweiss | false | null | 0 | jenphg1 | false | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jenphg1/ | false | 1 |
t1_jenld83 | I would rather have the option to ask a question > local llama searches google > llama using the context to give an in depth answer.
The day we have google extension for text-generation-webui is the day I stop using chatgpt and bing chat | 2 | 0 | 2023-04-02T12:54:35 | iChrist | false | null | 0 | jenld83 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jenld83/ | false | 2 |
t1_jenl4gy | For cleaning the data, you may want to take a look at Atlas https://atlas.nomic.ai/
I think that was used to clean gpt4all. | 4 | 0 | 2023-04-02T12:52:12 | synn89 | false | null | 0 | jenl4gy | false | /r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jenl4gy/ | false | 4 |
t1_jenktad | [deleted] | 1 | 0 | 2023-04-02T12:49:08 | [deleted] | true | null | 0 | jenktad | false | /r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jenktad/ | false | 1 |
t1_jenjr5z | Yes! u/IonizedRay, could you please write notes as you go? It would make life much easier for the rest of us mere mortals. | 2 | 0 | 2023-04-02T12:38:19 | szopen76 | false | null | 0 | jenjr5z | false | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jenjr5z/ | false | 2 |
t1_jenja8v | That is something I would like to use | 2 | 0 | 2023-04-02T12:33:27 | GeneProfessional2164 | false | null | 0 | jenja8v | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jenja8v/ | false | 2 |
t1_jenavv8 | I tried running 33b in 16gb and it appears to work. Painfully slowly, but incredibly slowly is indeed better than "no".
I'm getting a constant ~100mb/s read from my SSD (sata II lol).
About 3 minutes per token. | 3 | 0 | 2023-04-02T10:51:21 | ambient_temp_xeno | false | 2023-04-02T11:03:12 | 0 | jenavv8 | false | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jenavv8/ | false | 3 |
t1_jenajfy | Interesting. Maybe it will do the same as what the ARM architecture did? | 2 | 0 | 2023-04-02T10:46:36 | MAXXSTATION | false | null | 0 | jenajfy | false | /r/LocalLLaMA/comments/129fnqd/new_chip_expands_the_possibilities_for_ai_nov/jenajfy/ | false | 2 |
t1_jen8ia6 | I'm hoping for the same.
i'll try to work my way up from the stt and tts plugins in text-generation-webui and will try to add offline speech recognition and hopefully a calculator as a proof of concept plugin.
I'm not planning on sleeping till Easter, but I'm afraid nature will slow me more than I want to. | 6 | 0 | 2023-04-02T10:17:28 | NickUnrelatedToPost | false | null | 0 | jen8ia6 | false | /r/LocalLLaMA/comments/129czw0/llama_with_plugins/jen8ia6/ | false | 6 |