| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Anyone here finetune either MPT-7B or Falcon-7B? | 12 | Just got into the LocalLLM space and am playing around with a few things. Is anyone familiar with fine-tuning these models? | 2023-05-29T18:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/13v2pst/anyone_here_finetune_either_mpt7b_or_falcon7b/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13v2pst | false | null | t3_13v2pst | /r/LocalLLaMA/comments/13v2pst/anyone_here_finetune_either_mpt7b_or_falcon7b/ | false | false | self | 12 | null |
What about generating N tokens, and asking the model to evaluate and edit what got generated so far? Two or more LLMs can run on top of each other, where critic LLMs edit the output of the writer. Will that boost performance? | 2 | What about generating N tokens, and asking the model to evaluate and edit what got generated so far? Two or more LLMs can run on top of each other, where critic LLMs edit the output of the writer. Will that boost performance? | 2023-05-29T17:25:51 | https://www.reddit.com/r/LocalLLaMA/comments/13v10gm/what_about_generating_n_tokens_and_asking_the/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13v10gm | false | null | t3_13v10gm | /r/LocalLLaMA/comments/13v10gm/what_about_generating_n_tokens_and_asking_the/ | false | false | self | 2 | null |
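A minimal sketch of such a writer/critic loop, assuming two local instruction-tuned checkpoints loaded through the `transformers` pipeline API; the model paths and the critic prompt below are placeholders, not anything from the post:

```python
from transformers import pipeline

# hypothetical local checkpoints; any two instruction-tuned models would do
writer = pipeline("text-generation", model="path/to/writer-model")
critic = pipeline("text-generation", model="path/to/critic-model")

CRITIC_PROMPT = ("Below is a draft answer. Point out factual and logical errors "
                 "and rewrite it.\nDraft:\n{draft}\n\nRevised answer:")

def write_with_critic(prompt: str, rounds: int = 2, n_tokens: int = 256) -> str:
    # the writer produces the first N tokens
    draft = writer(prompt, max_new_tokens=n_tokens,
                   return_full_text=False)[0]["generated_text"]
    for _ in range(rounds):
        # the critic evaluates and edits what has been generated so far
        draft = critic(CRITIC_PROMPT.format(draft=draft), max_new_tokens=n_tokens,
                       return_full_text=False)[0]["generated_text"]
    return draft
```

Whether this boosts quality likely depends on the critic being at least as strong as the writer; each round also multiplies inference cost.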
Hardware explanation | 1 | Hi everyone, new to local ai, but not new to sysadmining.
I have an Unraid server with 2x GTX 1080s. Can I run anything that spans both GPUs? I'm a little uncertain how VRAM is actually calculated on the installs. I assume I max out one card (8 GB) and that's it, but figured I'd ask.
Thanks! | 2023-05-29T16:49:27 | https://www.reddit.com/r/LocalLLaMA/comments/13v02xo/hardware_explanation/ | That0neSummoner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13v02xo | false | null | t3_13v02xo | /r/LocalLLaMA/comments/13v02xo/hardware_explanation/ | false | false | self | 1 | null |
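With `transformers` plus `accelerate`, `device_map="auto"` shards a model's layers across both cards, so the two 8 GB GPUs act roughly like one 16 GB pool for weights. A rough sketch (the model id is a placeholder; a 7B model in fp16 at ~13-14 GB fits across the two cards):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # placeholder; pick something under ~14 GB in fp16
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                  # splits layers across both GPUs
    max_memory={0: "7GiB", 1: "7GiB"},  # leave headroom on each 8 GB card
)
```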
Looking for a Finetuning Guide | 34 | Dear LocalLLaMA community,
I wanted to reach out and share my enthusiasm for LLaMA and my eagerness to dive into the world of conversational AI. As a beginner in this field, I have been voraciously reading about LLaMA and its various aspects, in particular concepts such as QLoRA and bitsandbytes fine-tuning.
They have piqued my interest, and I'm now eager to try my hand at training a custom model using specific data to enhance my understanding of how these concepts work in real-world scenarios.
However, despite my extensive reading, I haven't been able to find a comprehensive tutorial that provides clear instructions on how to create a custom model for a conversational agent. As an AI noob, I would greatly appreciate it if someone could point me in the direction of such a tutorial. It would be helpful to have a step-by-step guide that simplifies the process and lets me embark on this exciting project.
Thank you in advance for your assistance ! | 2023-05-29T16:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/13uzcrh/looking_for_a_finetuning_guide/ | Bitcoin_hunter-21M | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13uzcrh | false | null | t3_13uzcrh | /r/LocalLLaMA/comments/13uzcrh/looking_for_a_finetuning_guide/ | false | false | self | 34 | null |
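Until someone links a proper tutorial, here is a heavily condensed sketch of the usual LoRA/QLoRA recipe with `transformers` + `peft`, assuming recent versions with 4-bit support; the model id, data file, field name, and hyperparameters are all placeholders:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "huggyllama/llama-7b"                       # placeholder base model
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token                          # llama has no pad token
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="conversations.json")["train"]  # your data
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512))

Trainer(model=model,
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                               gradient_accumulation_steps=16, learning_rate=2e-4,
                               num_train_epochs=1, fp16=True, logging_steps=10)
        ).train()
```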
Is it possible to use QLora to fine-tune llama on labelled data? | 11 | My downstream task is text classification. Can I fine-tune LLMs in a supervised fashion? | 2023-05-29T15:39:58 | https://www.reddit.com/r/LocalLLaMA/comments/13uya0g/is_it_possible_to_use_qlora_to_finetune_llama_on/ | Nice_Tea_6590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13uya0g | false | null | t3_13uya0g | /r/LocalLLaMA/comments/13uya0g/is_it_possible_to_use_qlora_to_finetune_llama_on/ | false | false | self | 11 | null |
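Yes, in principle: the usual trick is to cast the labels as target text, so the causal LM learns to emit the label as its next tokens. A tiny sketch of the data formatting (the template and field names are illustrative, not from any particular repo):

```python
# turn each labeled example into an instruction/response pair for QLoRA-style SFT
TEMPLATE = ("Classify the sentiment of the text as positive or negative.\n"
            "Text: {text}\nLabel:")

def to_causal_example(text: str, label: str) -> dict:
    # the model is trained to continue the prompt with " positive" / " negative"
    return {"input": TEMPLATE.format(text=text), "output": " " + label}

print(to_causal_example("Great movie!", "positive"))
```

At inference time you then read (or constrain) the generated label tokens instead of using a classification head.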
Advice the most optimal way to do language model domain adaptation - from A to Z | 4 | I want to do domain adaptation of a language model: train it on specific niche text datasets - articles + forums + blogs (for a start, I have only a set of articles). The goal is to train the model to generate, rewrite, complete, and answer questions in chatbot mode within my field of interest.
The language of interest is English only.
But I'm a bit lost in the zoo of models and their descendants, and in the mixed-up terminology. Every time I search for new articles and videos on the subject, I get examples of different models, and I'm discouraged by the wildly different approaches to training - especially since people use the term "fine-tuning" for almost everything (my interest is clearly domain adaptation: additional training on a ready-to-use language model to make it work in a specific field of knowledge), so I'm often not sure what people mean by fine-tuning.
So I'd like to see your suggestions for the whole line of work: which language model to choose and why, and which domain-adaptation technique to use and why.
I understand that my request is quite broad and that I haven't provided some vital information, such as what hardware I plan to use. That's because I'm lost at the moment. So for a start, feel free to advise almost anything you like, considering my goal - just explain why you suggest that model and that domain-adaptation approach.
Thank you in advance! | 2023-05-29T15:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/13uxtzg/advice_the_most_optimal_way_to_do_language_model/ | samulowry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13uxtzg | false | null | t3_13uxtzg | /r/LocalLLaMA/comments/13uxtzg/advice_the_most_optimal_way_to_do_language_model/ | false | false | self | 4 | null |
Fine Tuning vs. Prompt Engineering Large Language Models | 0 | [removed] | 2023-05-29T14:11:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13uw1cx | false | null | t3_13uw1cx | /r/LocalLLaMA/comments/13uw1cx/fine_tuning_vs_prompt_engineering_large_language/ | false | false | default | 0 | null | ||
Testing the new BnB 4-bit or "qlora" vs GPTQ Cuda | 9 | Loading: Much slower than GPTQ, not much speed up on 2nd load. This was to be expected.
INFO:Loaded the model in 104.84 seconds. < llama-30b FP32 2nd load
INFO:Loaded the model in 68.58 seconds. < llama-30b FP16 2nd load
INFO:Loaded the model in 39.24 seconds. < llama-30b-4bit 1st load
INFO:Loaded the model in 7.53 seconds. < llama-30b-4bit 2nd load
Inference: Seems slower than GPTQ. Had to use double-quant to avoid OOM on 30b. The best inference config was Float16 with FP4.
3090:
Bfloat16: Output generated in 31.42 seconds (1.88 tokens/s, 59 tokens, context 1269, seed 373399427)
Float16: Output generated in 11.13 seconds (2.60 tokens/s, 29 tokens, context 1269, seed 136505588)
Float32: Output generated in 66.21 seconds (1.53 tokens/s, 101 tokens, context 1269, seed 999270937)
Float16-FP4: Output generated in 15.38 seconds (2.93 tokens/s, 45 tokens, context 1269, seed 1148932928)
Output generated in 12.41 seconds (3.06 tokens/s, 38 tokens, context 1269, seed 553649186)
Output generated in 44.13 seconds (3.49 tokens/s, 154 tokens, context 1269, seed 1642169272)
GPTQ-cuda: Output generated in 16.41 seconds (5.97 tokens/s, 98 tokens, context 1269, seed 1801823909)
Output generated in 9.99 seconds (4.81 tokens/s, 48 tokens, context 1269, seed 946785249)
P40:
Float16: Output generated in 37.95 seconds (0.87 tokens/s, 33 tokens, context 1269, seed 1201269247)
Float32: Output generated in 118.25 seconds (0.50 tokens/s, 59 tokens, context 1269, seed 1335948198)
Float16-FP4: Output generated in 62.98 seconds (1.29 tokens/s, 81 tokens, context 1269, seed 640954033)
GPTQ-Cuda: Output generated in 34.34 seconds (3.49 tokens/s, 120 tokens, context 1269, seed 99630798)
Output generated in 21.86 seconds (1.92 tokens/s, 42 tokens, context 1269, seed 194589236)
Training: Works fine through textgen. As expected, it was seamless.
30b:
qlora: 30 hours to train 30b on ~50k instructions at 256 context and 2x1 batch size for one epoch.
autograd: OOM at 256 ctx, but 11 hours to train at 128 ctx with the same settings. textgen doesn't do gradient checkpointing.
13b:
qlora: Running… 112 / 53544 … 1.24 it/s, 90 seconds / 12 hours … 12 hours remaining
autograd: Running… 226 / 53544 … 2.20 it/s, 103 seconds / 7 hours … 7 hours remaining
Has anyone tested perplexity yet? | 2023-05-29T13:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/13uvbxe/testing_the_new_bnb_4bit_or_qlora_vs_gptq_cuda/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13uvbxe | false | null | t3_13uvbxe | /r/LocalLLaMA/comments/13uvbxe/testing_the_new_bnb_4bit_or_qlora_vs_gptq_cuda/ | false | false | self | 9 | null |
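Not in these runs, but a rough chunked perplexity check (not the fancier sliding-window variant) is short enough to run against any of these configs; the model path and text file are placeholders, and `load_in_4bit` assumes a transformers build with the new bitsandbytes 4-bit support:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/llama-30b")        # placeholder
model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-30b", device_map="auto", load_in_4bit=True)  # config under test

ids = tok(open("wiki.test.raw").read(), return_tensors="pt").input_ids
nll, n_tok, ctx = 0.0, 0, 2048
for i in range(0, ids.size(1), ctx):
    chunk = ids[:, i:i + ctx].to(model.device)
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss   # mean NLL over the chunk
    nll += loss.item() * (chunk.size(1) - 1)     # weight by scored tokens
    n_tok += chunk.size(1) - 1
print("perplexity:", torch.exp(torch.tensor(nll / n_tok)).item())
```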
My chatbot gets stuck and repeats the same sentence | 1 | [removed] | 2023-05-29T12:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/13utyh1/my_chatbot_gets_stuck_and_repeats_the_same/ | mashimaroxc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13utyh1 | false | null | t3_13utyh1 | /r/LocalLLaMA/comments/13utyh1/my_chatbot_gets_stuck_and_repeats_the_same/ | false | false | default | 1 | null |
Running MPT-7b locally on Jupyter notebook | 1 | [removed] | 2023-05-29T12:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/13utuss/running_mpt7b_locally_on_jupyter_notebook/ | anindya_42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13utuss | false | null | t3_13utuss | /r/LocalLLaMA/comments/13utuss/running_mpt7b_locally_on_jupyter_notebook/ | false | false | default | 1 | null |
Which 30b model should I use for embeddings? | 10 | The plan is to ask questions about 10-300 page PDFs or other documents. I know embedding-based retrieval isn't perfect, but it's the best approach for querying large documents at the moment. I want to use a local model because I'd be working with sensitive information. I'd like to use LangChain but am open to anything else that works. I don't have a local GPU but would like to rent one from [Vast.ai](https://Vast.ai) or another provider. I'd try a 30/33b model as I'd like to see how good it can get. Maybe later I'll try an even bigger one.
There are so many new models published every week, almost daily, that I have no idea which one to try. Would you please give me suggestions? Have you tried one for embeddings? How good was it? | 2023-05-29T12:28:12 | https://www.reddit.com/r/LocalLLaMA/comments/13utib1/which_30b_model_should_i_use_for_embeddings/ | HaOrbanMaradEnMegyek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13utib1 | false | null | t3_13utib1 | /r/LocalLLaMA/comments/13utib1/which_30b_model_should_i_use_for_embeddings/ | false | false | self | 10 | null |
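One thing worth noting: in the usual LangChain setup, the embeddings come from a small dedicated embedding model, and the 30/33B LLM only answers over the retrieved chunks. A rough sketch, assuming the LangChain APIs as of mid-2023 (the file name and query are placeholders):

```python
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

docs = PyPDFLoader("report.pdf").load()                        # your PDF
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100).split_documents(docs)

# a small sentence-transformers model does the embedding work
emb = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_documents(chunks, emb)

hits = db.similarity_search("What are the key findings?", k=4)
context = "\n\n".join(h.page_content for h in hits)
# prepend `context` to the question and send it to the local 30/33B model
```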
Minigpt-4 (Vicuna 13B + images) | 27 | 2023-05-29T12:07:51 | https://minigpt-4.github.io/ | AutomataManifold | minigpt-4.github.io | 1970-01-01T00:00:00 | 0 | {} | 13ut2ap | false | null | t3_13ut2ap | /r/LocalLLaMA/comments/13ut2ap/minigpt4_vicuna_13b_images/ | false | false | default | 27 | null | |
[LoRA + weight merge every N step] for pre-training? | 1 | I was wondering if we can use LoRA for pre-training, by merging the LoRA weights into the frozen weights every N steps. Or is there similar pre-training research?
​
\*edit, super roughly:
    for step in range(num_steps):
        train_step()  # only lora_weights receive gradients; frozen_weights stay fixed
        if (step + 1) % 100 == 0:
            frozen_weights += lora_weights  # fold the low-rank update into the base
            lora_weights = 0 # To be precise, initialization from section 4.1 of https://arxiv.org/pdf/2106.09685.pdf | 2023-05-29T11:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/13uskj4/lora_weight_merge_every_n_step_for_pretraining/ | kkimdev | self.LocalLLaMA | 2023-05-29T12:43:02 | 0 | {} | 13uskj4 | false | null | t3_13uskj4 | /r/LocalLLaMA/comments/13uskj4/lora_weight_merge_every_n_step_for_pretraining/ | false | false | self | 1 | null |
In memory compute chips | 8 | How close are they to reality? As much as I understand, the greatest limitation of current AI is data IO due to tho fact that "neurons" are emulated and entire model has to be "recalculated" for every token - reading and writing to memory *sequentially* with each step, greatly limiting training and inference speed.
https://blocksandfiles.com/2021/12/16/7bits-cell-flash-in-ai-compute-in-memory-chip/
I see there is something like this in the works already. With efficient quantization algorithms, can this task get easier?
While I understand that "multilevel" cells are prone to "wearing out", applying this tech to "frozen" (read-only) models for inference will likely do the trick?
I mean, a decent 4-bit TLC 1 TB SSD costs less than a hundred bucks. You could fit GPT-4 inside for sure, if quantized to 4-bit! If you wear it out during training, it might still be cheaper than a huge stack of GPUs + electricity...
If you use 4-bit data cells as "hardware neurons" of a 4-bit quantized model, does it imply that such model, once you load it with data, will have terabytes of "storage" like modern SSDs and will be able to output literally thousands (if not millions) "tokens per second" as output, with all "computation" occuring internally, and model training will be be faster and more effective by several orders of magnitude? | 2023-05-29T11:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/13us8h9/in_memory_compute_chips/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13us8h9 | false | null | t3_13us8h9 | /r/LocalLLaMA/comments/13us8h9/in_memory_compute_chips/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '9q39_pyZP1DfYfTsW5gekdXr2YTTQsvuA8N1NJvDvCs', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=108&crop=smart&auto=webp&s=219bdf73173557bc6e717038ee653b3807d451e4', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=216&crop=smart&auto=webp&s=7f418fde935a6dd0c3abc0484a1f73f5341749ce', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=320&crop=smart&auto=webp&s=5647789be6d3a4090b5a98100ef8d83356f792fd', 'width': 320}, {'height': 399, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=640&crop=smart&auto=webp&s=732b514733180947201d76941b28c391abea5f8e', 'width': 640}], 'source': {'height': 593, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?auto=webp&s=44f3ef142562d97ebe633731b47c59e617e82764', 'width': 950}, 'variants': {}}]} |
Very slow on 3090 24G | 7 | Sorry, I have a question I really want answered.
My speed on the 3090 seems to be nowhere near as fast as the 3060 or other graphics cards.
Using the [text-generation-webui](https://github.com/oobabooga/text-generation-webui) on WSL2 with [Guanaco](https://www.reddit.com/r/LocalLLaMA/comments/13rthln/guanaco_7b_13b_33b_and_65b_models_by_tim_dettmers/) llama model
On native GPTQ-for-LLaMA I only get slower speeds, so I use this [branch](https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-inference-4bit)
Use the following flags: --quant_attn --xformers --warmup_autotune --fused_mlp --triton
7B model I get 10~8t/s
13B 8~6t/s
33B 5~4t/s
​
I've tried upgrading the RAM to 64GB, upgrading the CPU, and moving WSL2 onto a faster PCIe SSD, but none of it seems to help...
The only reason I can think of is:
1. The graphics card is broken
2. My RAM only has 2400 speed | 2023-05-29T10:31:56 | https://www.reddit.com/r/LocalLLaMA/comments/13ur2au/very_slow_on_3090_24g/ | Sat0r1r1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ur2au | false | null | t3_13ur2au | /r/LocalLLaMA/comments/13ur2au/very_slow_on_3090_24g/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'm-Rq8t_G6WAzs733EzkbmBFRLdK5a8F0tdENIfqKCW8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=108&crop=smart&auto=webp&s=4fc21b6656fb1693080e731405f7077a2289cd36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=216&crop=smart&auto=webp&s=64fe12278b0642f89683c009adf55b3b201920e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=320&crop=smart&auto=webp&s=c339a8b5d8b97896840139e07fa720b1b008c664', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=640&crop=smart&auto=webp&s=98b2c5bd8c14731794c218e330f8cc1da0afbdff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=960&crop=smart&auto=webp&s=7dfb09df57c051be4d7219ba13f576ec2f9ec3cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=1080&crop=smart&auto=webp&s=4076e66ecae2087ebcb2fee2f9983733d3e04ac4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?auto=webp&s=6ff8e81462aeb3670f0fdb6157c7ea6efe158f61', 'width': 1200}, 'variants': {}}]} |
Any tip to increase speed with Oobabooga on Colab with Tesla T4? | 2 | [removed] | 2023-05-29T10:09:34 | https://www.reddit.com/r/LocalLLaMA/comments/13uqn6j/any_tip_to_increase_speed_with_oobabooga_on_colab/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13uqn6j | false | null | t3_13uqn6j | /r/LocalLLaMA/comments/13uqn6j/any_tip_to_increase_speed_with_oobabooga_on_colab/ | false | false | default | 2 | null |
Cpu only performance | 23 | I have a pretty beefy cloud instance I have access to that I've been running text-generation-webui. It has 96 vCPUs and 96gb memory.
I've tried a bunch of different llama models, and the best performance I can get on a 13b model is 5 tokens/s.
I have tried tweaking the thread setting to different values between 32 and 96, and any difference it makes is marginal.
Is it really just the case that adding more CPU / RAM after a certain point leads to diminishing returns?
I understand I should be using a GPU, I just don't have access to one right now, is there any way get more performance out of my current instance? | 2023-05-29T09:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/13upwrl/cpu_only_performance/ | foooooooooooooooobar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13upwrl | false | null | t3_13upwrl | /r/LocalLLaMA/comments/13upwrl/cpu_only_performance/ | false | false | self | 23 | null |
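Probably yes on diminishing returns: GGML-style CPU inference is memory-bandwidth-bound, so once bandwidth is saturated, extra vCPUs (especially hyperthreads) don't help and can hurt. If you drop down to `llama-cpp-python` you can pin the thread count directly; a small sketch, with a placeholder model path and a thread count that should roughly match physical cores rather than vCPUs:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/13B/ggml-model-q4_0.bin",  # placeholder path
    n_threads=48,   # try physical-core counts; more threads often runs slower
    n_ctx=2048,
)
out = llm("Q: Why is the sky blue? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```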
Applying All Recent Innovations To Train a Code Model | 2 | [removed] | 2023-05-29T08:42:03 | https://www.reddit.com/r/LocalLLaMA/comments/13up5wt/applying_all_recent_innovations_to_train_a_code/ | Ok--Reflection | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13up5wt | false | null | t3_13up5wt | /r/LocalLLaMA/comments/13up5wt/applying_all_recent_innovations_to_train_a_code/ | false | false | default | 2 | null |
Advice for server | 8 | I'm running the Guanaco 65B model on my system. My specs are an i9 12900K, 128GB RAM, and two Tesla P40s. The model is running at about 4 tokens a second and only utilizing the CPU and GPUs at about 10 percent. Can I make it fully utilize the GPUs?
Any additional advice/thoughts are more than welcome.
Update: it looks like I'm getting 100% utilization on one of the P40s but nothing on the second. | 2023-05-29T07:15:52 | https://www.reddit.com/r/LocalLLaMA/comments/13unr3v/advice_for_server/ | Emergency-Seaweed-73 | self.LocalLLaMA | 2023-05-29T15:53:31 | 0 | {} | 13unr3v | false | null | t3_13unr3v | /r/LocalLLaMA/comments/13unr3v/advice_for_server/ | false | false | self | 8 | null |
VicUnlocked 65B QLora dropped | 101 | VicUnlocked 65B QLora is out.
Haven't run this yet, I just found it and won't be able to run it before afternoon. [Here](https://huggingface.co/Aeala/VicUnlocked-alpaca-65b-GGML) are ggml's.
I am so happy that we finally see more people doing 65B finetunes. Alpaca finetunes were good, but there is room to improve. | 2023-05-29T06:48:15 | https://huggingface.co/Aeala/VicUnlocked-alpaca-65b-QLoRA | FullOf_Bad_Ideas | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13una1y | false | null | t3_13una1y | /r/LocalLLaMA/comments/13una1y/vicunlocked_65b_qlora_dropped/ | false | false | 101 | {'enabled': False, 'images': [{'id': 'mV5Zmg21XcfmaSsZfUe8EdI5lZeD-xan739MQrXsseg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=108&crop=smart&auto=webp&s=d354780d0c33f7251df429359b18b6022d15a941', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=216&crop=smart&auto=webp&s=3e380cf12e76146232a76a7ead3b80e6fe6028df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=320&crop=smart&auto=webp&s=5396c450223bc332fc632c10cd57e940ed69a6a3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=640&crop=smart&auto=webp&s=f76875d6006e0b704dd86eb50a2a69463b4bc30a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=960&crop=smart&auto=webp&s=fe3080f6455e637a6821fdd26539121a59d7ed30', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=1080&crop=smart&auto=webp&s=8f057afd39c1a9467343f14eb99695cfe58180c4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?auto=webp&s=9110427d8dd9efc70351d655aca45068566567de', 'width': 1200}, 'variants': {}}]} | |
AutoGPTQ vs GPTQ-for-llama? | 7 | (For context, I was looking at switching over to the new bitsandbytes 4bit, and was under the impression that it was compatible with GPTQ, but apparently I was mistaken - If one wants to use bitsandbytes 4bit, it appears that you need to start with a full-fat fp16 model. This led me to looking at other ways to optimize text-generation-webui for 100% GPU scenarios)
Does anyone know if AutoGPTQ offers benefits beyond what GPTQ-for-llama provides? GPTQ is amazing, but its usage in apps like text-generation-webui is a bit cryptic. Apparently the "old" CUDA kernel runs faster than the new one, and Triton is available, but no one seems to know what, if anything, it provides.
My question is - If I'm running with GPTQ-for-llama on text-generation-webui with (what i think is) the "old" cuda kernel, is this the optimal way to run/load GPTQ-quantized models? | 2023-05-29T06:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/13un94p/autogptq_vs_gptqforllama/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13un94p | false | null | t3_13un94p | /r/LocalLLaMA/comments/13un94p/autogptq_vs_gptqforllama/ | false | false | self | 7 | null |
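For comparison, this is roughly what loading a model directly through AutoGPTQ looks like, outside of text-generation-webui; the repo id is a placeholder for any GPTQ checkpoint with a quantize config:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "TheBloke/some-model-GPTQ"   # placeholder GPTQ checkpoint
tok = AutoTokenizer.from_pretrained(repo)
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0",
                                           use_safetensors=True)

inputs = tok("Hello, my name is", return_tensors="pt").to("cuda:0")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```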
samantha-33b | 256 | I released samantha-33b
This one is way better than 7b and 13b.
[https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha)
[https://huggingface.co/ehartford/samantha-33b](https://huggingface.co/ehartford/samantha-33b)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
Training 7b took 5.5 hours on 4x A100 80gb using deepspeed zero3 and flash attention.
She will not engage in roleplay, romance, or sexual activity.
u/The-Bloke | 2023-05-29T06:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/13umn34/samantha33b/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {'gid_3': 1} | 13umn34 | false | null | t3_13umn34 | /r/LocalLLaMA/comments/13umn34/samantha33b/ | false | false | self | 256 | {'enabled': False, 'images': [{'id': 'lNiLqLI9dgIkz4KVVl94-x4cbPilcjvDR324LmSB-TU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=108&crop=smart&auto=webp&s=9755868df57ad87b537c145c5cef6396bd94cc69', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=216&crop=smart&auto=webp&s=fb5dc5979a6c5dcc92e6478caaf41bbe5f4da7e1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=320&crop=smart&auto=webp&s=d107c4951f53f5480b43e7bbef267193d6dc1359', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=640&crop=smart&auto=webp&s=e23fa125906b6997cdc99cdbe9ce1120b4894236', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=960&crop=smart&auto=webp&s=ee699ecb83d5fdd0fb66e6f151e27a1cec41214e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=1080&crop=smart&auto=webp&s=0c3a5dd91adcedd28f9953e11bccfe3170917d51', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?auto=webp&s=cf79a676f356c5e2300b5e6b0e93f58ee1763146', 'width': 1200}, 'variants': {}}]} |
Help with instructions for training a model to write poems and speak in style from Facebook chat logs? | 1 | [removed] | 2023-05-29T05:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/13umd4y/help_with_instructions_for_training_a_model_to/ | Certain_Lunch1259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13umd4y | false | null | t3_13umd4y | /r/LocalLLaMA/comments/13umd4y/help_with_instructions_for_training_a_model_to/ | false | false | default | 1 | null |
Tools similar to ChatGPT Retrieval Plugin but for localized LLMs. | 4 | I've been checking out the ChatGPT Retrieval Plugin that OpenAI published, and it's a really cool way to integrate semantic search with vector databases. I was wondering if you guys know of any tools/tutorials that would help me figure out how to incorporate it with a local LLM like 30b Wizard or Vicuna.
I know that the less robust nature of these LLMs means they would probably struggle a lot more with producing effective queries, but it's nonetheless a very cool piece of technology that brings me one step closer to my post-apocalyptic civilization-rebuilding LLM that only requires natural language inputs to function hah. | 2023-05-29T05:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/13um9v2/tools_similar_to_chatgpt_retrieval_plugin_but_for/ | PlanetExperience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13um9v2 | false | null | t3_13um9v2 | /r/LocalLLaMA/comments/13um9v2/tools_similar_to_chatgpt_retrieval_plugin_but_for/ | false | false | self | 4 | null |
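The plugin's core loop is small enough to rebuild locally; a minimal sketch with `sentence-transformers` standing in for the plugin's embedding model and vector store (the corpus and query are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["chunk one ...", "chunk two ...", "chunk three ..."]  # pre-split documents
doc_emb = encoder.encode(docs, convert_to_tensor=True)

def retrieve(query: str, k: int = 3):
    q = encoder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q, doc_emb, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

# stuff the returned chunks into the local LLM's prompt, plugin-style
print(retrieve("how do I rebuild civilization?"))
```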
what is most basic GPU or setup required to get 10 tokens/second using any 7B model? | 3 | Has anyone done any experiments around this? | 2023-05-29T05:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/13um5su/what_is_most_basic_gpu_or_setup_required_to_get/ | premrajnarkhede1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13um5su | false | null | t3_13um5su | /r/LocalLLaMA/comments/13um5su/what_is_most_basic_gpu_or_setup_required_to_get/ | false | false | self | 3 | null |
How to train a new language that is not in base model? | 15 | [deleted] | 2023-05-29T05:04:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13ulggf | false | null | t3_13ulggf | /r/LocalLLaMA/comments/13ulggf/how_to_train_a_new_language_that_is_not_in_base/ | false | false | default | 15 | null | ||
Generating docstrings with Salesforce Codegen and Microsoft Guidance, inside VSCode | 3 | [removed] | 2023-05-29T05:00:29 | https://www.reddit.com/r/LocalLLaMA/comments/13uldbo/generating_docstrings_with_salesforce_codegen_and/ | rustedbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13uldbo | false | null | t3_13uldbo | /r/LocalLLaMA/comments/13uldbo/generating_docstrings_with_salesforce_codegen_and/ | false | false | default | 3 | null |
Training a new model | 3 | Hi guys, been playing around a bit, experimenting with LoRAs currently. I wanted to know if anyone had advice, tips, or links to useful guides on actually making an entirely new model? Basically, I'm inspired by the Samantha model ([this LocalLLaMA post](https://www.reddit.com/r/LocalLLaMA/comments/13tuipk/samantha7b/)). It seems closer to what I want than plain fine-tuning - I don't need a model that can roleplay any character you give it. I want the future now damnit, let me build my AI companion.
Any advice or tips are appreciated! | 2023-05-29T00:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/13ufdst/training_a_new_model/ | Equal_Station2752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ufdst | false | null | t3_13ufdst | /r/LocalLLaMA/comments/13ufdst/training_a_new_model/ | false | false | self | 3 | null |
Best uncensored model for an a6000 | 1 | [removed] | 2023-05-28T23:36:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13uemve | false | null | t3_13uemve | /r/LocalLLaMA/comments/13uemve/best_uncensored_model_for_an_a6000/ | false | false | default | 1 | null | ||
Could you use Llama to power a small robot? | 0 | [removed] | 2023-05-28T23:33:13 | Azimn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13uekc5 | false | null | t3_13uekc5 | /r/LocalLLaMA/comments/13uekc5/could_you_use_llama_to_power_a_small_robot/ | false | false | default | 0 | null | |
Best storytelling local LLM? | 69 | NAI recently released a decent alpha preview of a proprietary LLM they’ve been developing, and I was wanting to compare it to whatever the open source best local LLMs currently available. I have a 3090 but could also spin up an A100 on runpod for testing if it’s a model too large for that card. I’d prefer uncensored as the NAI model is uncensored (and it’s hard to write stories when the model grinds to a halt every time anything vaguely out of bounds happens like someone getting punched or shot). | 2023-05-28T21:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/13ubk8p/best_storytelling_local_llm/ | chakalakasp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ubk8p | false | null | t3_13ubk8p | /r/LocalLLaMA/comments/13ubk8p/best_storytelling_local_llm/ | false | false | self | 69 | null |
Best model for dialog generation? | 7 | Hey there,
I wanted to generate dialogs between two or more specified characters with specified scenarios using AI.
What would be the best model for that?
I tried some GGML models and after a while, they got stuck in an endless loop or kept repeating things. | 2023-05-28T21:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/13ub7zq/best_model_for_dialog_generation/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ub7zq | false | null | t3_13ub7zq | /r/LocalLLaMA/comments/13ub7zq/best_model_for_dialog_generation/ | false | false | self | 7 | null |
Is anyone else getting only 443 bytes adapter_model.bin with qlora? | 5 | I have tried multiple models but the size is always the same.
i did try raising a PR but I am not sure if the model it outputs is correct or not [https://github.com/artidoro/qlora/pull/44](https://github.com/artidoro/qlora/pull/44) | 2023-05-28T19:35:04 | https://www.reddit.com/r/LocalLLaMA/comments/13u8zpz/is_anyone_else_getting_only_443_bytes_adapter/ | KKcorps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13u8zpz | false | null | t3_13u8zpz | /r/LocalLLaMA/comments/13u8zpz/is_anyone_else_getting_only_443_bytes_adapter/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '7pYZ5Ukmo9EHbEuW-e4AjbhyAZdv4y_bMIOcaZ_WCEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=108&crop=smart&auto=webp&s=9d06203adfc971a7d33282cc2a13a9b67d5546b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=216&crop=smart&auto=webp&s=8c8a47675b96970590198734293c10fd75ed9bbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=320&crop=smart&auto=webp&s=3e2b9f8755506d70ed1241f84b5506b1d2def121', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=640&crop=smart&auto=webp&s=bc05de8cbd8f17c2284fb190d56cc5de938027d1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=960&crop=smart&auto=webp&s=0aaeafd158b8a0b8bbc8f740e6ef04f5ac8b43c8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=1080&crop=smart&auto=webp&s=9da08887ea478ee84e9d05d5f3507c6538ffae82', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?auto=webp&s=64e99242b309b96762c9412d4de6b182fb3e36e6', 'width': 1200}, 'variants': {}}]} |
Which models are good at French in chat mode? | 1 | [removed] | 2023-05-28T19:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/13u897h/which_models_are_good_at_french_in_chat_mode/ | Extraltodeus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13u897h | false | null | t3_13u897h | /r/LocalLLaMA/comments/13u897h/which_models_are_good_at_french_in_chat_mode/ | false | false | default | 1 | null |
Looking for a wisdom to choose model | 14 | Hello all
I'm overwhelmed by the models floating around. I have an RTX 3090 paired with an i9 13900K, and I'm looking for the best current model to use. I will mainly use it as a C++ and Python coding assistant, but the use cases won't be limited to that. I've heard great things about the Falcon and Wizard Vicuna models, but as a newbie I'm lost. Which one should I use? Should I go for 30b or 40b models? I've also heard about quantization options, which should help me fit bigger models with minimal impact on quality. Should I go with those?
Any help is appreciated. | 2023-05-28T18:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/13u7n2d/looking_for_a_wisdom_to_choose_model/ | iamkucuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13u7n2d | false | null | t3_13u7n2d | /r/LocalLLaMA/comments/13u7n2d/looking_for_a_wisdom_to_choose_model/ | false | false | self | 14 | null |
How do I get a GPTQ model to provide longer answers or answers that don't cut off? | 1 | I have a 3080 GPU with 10GB VRAM. For example, TheBloke's Manticore 4bit model runs and is relatively fast, but it cuts itself off quickly. I'm currently using the settings wbits=4 and groupsize=128. Maybe this is a separate issue, but my GPU never heats up, so I assume it's not really working at full capacity. Here's an example of an initial answer not completing.
​
> **Assistant**
>
>UV radiation has different effects on CO2 and O2 molecules in the atmosphere. Here are some comparisons between them:
>
>1) Absorption Spectrum: The absorption spectrum for both CO2 and O2 molecules overlap at certain wavelengths, but there are also differences. For example, while CO2 has strong absorbance bands around 400 nm (nanometers), O2 does not have any significant absorbance band within this range. Similarly, CO2 shows stronger absorbance near 650-700nm compared to O2 which only has weak absorbance at these wavelengths. These spectral differences play an important role in determining how much energy from sunlight gets transmitted through Earth's atmosphere.
>
>2) Atmospheric Lifetime: While CO2 remains in the atmosphere for centuries due to its long atmospheric lifetime, O2 quickly reacts with other | 2023-05-28T18:31:35 | https://www.reddit.com/r/LocalLLaMA/comments/13u7ixx/how_do_i_get_a_gptq_model_to_provide_longer/ | infohawk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13u7ixx | false | null | t3_13u7ixx | /r/LocalLLaMA/comments/13u7ixx/how_do_i_get_a_gptq_model_to_provide_longer/ | false | false | self | 1 | null |
Retrieval-Aware training seems like a big deal | 8 | I just went through the [Gorilla paper](https://arxiv.org/abs/2305.15334) and retrieval-aware training seems like a big deal (if I understood it right).
Does anyone know if any other work has been done on this? | 2023-05-28T18:20:40 | https://www.reddit.com/r/LocalLLaMA/comments/13u79xq/retrievalaware_training_seems_like_a_big_deal/ | _wsgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13u79xq | false | null | t3_13u79xq | /r/LocalLLaMA/comments/13u79xq/retrievalaware_training_seems_like_a_big_deal/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
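As I read the paper, the core idea is simply that the retrieved reference is included in the prompt at training time, so the model learns when to use it and when to ignore it. A rough sketch of building such samples; the template is my own illustration, not the paper's exact format:

```python
def make_retrieval_aware_sample(instruction: str, retrieved_doc: str,
                                answer: str) -> dict:
    # retrieval happens before training, and the result is baked into the prompt
    prompt = ("Use the following API documentation if it is relevant.\n"
              f"### Documentation:\n{retrieved_doc}\n"
              f"### Instruction:\n{instruction}\n"
              "### Response:")
    return {"prompt": prompt, "completion": answer}
```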
Best language model for use case? | 4 | So… I'm looking for a language model that meets the following requirements:
1. No more than 3Gb file size
2. Can run with 4Gb RAM
3. I can call it though Python
4. Be able to run it locally (offline)
Does anyone have any recommendations for me? I just need a pointer in the right direction so I can work out how to install/configure it myself.
Thanks for your help :) | 2023-05-28T15:10:39 | https://www.reddit.com/r/LocalLLaMA/comments/13u2q1i/best_language_model_for_use_case/ | AltSins-Street2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13u2q1i | false | null | t3_13u2q1i | /r/LocalLLaMA/comments/13u2q1i/best_language_model_for_use_case/ | false | false | self | 4 | null |
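Given those limits, a quantized GGML model through `llama-cpp-python` is probably the path of least resistance: a 4-bit 3B model is well under 3 GB and runs CPU-only. A minimal sketch (the model file name is a placeholder):

```python
from llama_cpp import Llama

# a 4-bit 3B GGML file is ~2 GB; a 7B q4 is ~3.8 GB and would bust the limits
llm = Llama(model_path="models/3b-ggml-q4_0.bin", n_ctx=512)
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```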
Under 8*A800, the 7B model can handle 50k context and perform reading comprehension accurately | 82 | 2023-05-28T13:04:03 | https://github.com/bojone/NBCE/blob/main/README_en.md | Spare_Side_5907 | github.com | 1970-01-01T00:00:00 | 0 | {} | 13tzzpy | false | null | t3_13tzzpy | /r/LocalLLaMA/comments/13tzzpy/under_8a800_the_7b_model_can_handle_50k_context/ | false | false | 82 | {'enabled': False, 'images': [{'id': 'eXSC8hyZYwjmJIugavKDAzG6YdQZ-lVnYvM-Hy185Lk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=108&crop=smart&auto=webp&s=3c1606f9e6bfa632981bfde353f7146c736d431a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=216&crop=smart&auto=webp&s=c15c8f4f6b4e6597f30a0433afe0f825879a935a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=320&crop=smart&auto=webp&s=f4f99e9bede35085fcbd26ac45a382b86fb0d0db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=640&crop=smart&auto=webp&s=ff9483cd0525fe95956f73ca944ee3ca1650dd60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=960&crop=smart&auto=webp&s=8ef76e7f8daccab5e6a99ace442f42dede9e502d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=1080&crop=smart&auto=webp&s=92dcbe42fac7ceb7f58aca5fbaea9f84a8e25ee2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?auto=webp&s=2b3847954f0fabb9b27b9eb12eef546c1907fd49', 'width': 1200}, 'variants': {}}]} | ||
I built a multi-platform desktop app to easily download and run models, open source btw | 144 | I want to share this project that I've been working.
I noticed that there are no easy-to-install/use apps for open source models; maybe that is what's stopping them from spreading outside of the dev world.
So I built a Tauri app using ggml tensor library through Rust's LLM lib (https://github.com/rustformers/llm), and it provides installers for all desktop platforms, just download and install, like any normal app.
I even created a pretty landing page for people who can get scared away by Github 😅: https://secondbrain.sh/
The repo is here: https://github.com/juliooa/secondbrain
Is still alpha and buggy, any comment or contribution is welcome. My idea is to add plugins or add-ons so it can be more useful, like voice, filesystem search, maybe commands to open other apps, etc..
Cheers!
Edit: you can download the installers here: https://github.com/juliooa/secondbrain/releases/tag/main | 2023-05-28T12:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/13tz8x7/i_built_a_multiplatform_desktop_app_to_easily/ | julio_oa | self.LocalLLaMA | 2023-05-28T20:00:27 | 0 | {} | 13tz8x7 | false | null | t3_13tz8x7 | /r/LocalLLaMA/comments/13tz8x7/i_built_a_multiplatform_desktop_app_to_easily/ | false | false | self | 144 | {'enabled': False, 'images': [{'id': 'jlmTsAOlN9RsNhQjh1YCJUOXaAOCc-j-vUBoe9uxfBE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=108&crop=smart&auto=webp&s=13bb375f5688d9d12e5510f04955553eb170af47', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=216&crop=smart&auto=webp&s=ee5b5bd806203b6a6f3a093a393a10625d036533', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=320&crop=smart&auto=webp&s=4f669e2a4de5e8a46e94abafe3e71a3a8e748e41', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=640&crop=smart&auto=webp&s=a5b2cc7215767cc180bd4a517e385566c2df47a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=960&crop=smart&auto=webp&s=548361caf983be002e4a5274e716cd40aa2469bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=1080&crop=smart&auto=webp&s=6113c9f028c7c690c7f7c8e5eb1610eb9748c5f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?auto=webp&s=cbc48101428735a46fa72d43519c6209bb714e76', 'width': 1200}, 'variants': {}}]} |
How to qlora 33B model on a GPU with 24GB of VRAM | 59 | QLoRA fine-tuning of a 33b model on a 24GB GPU just fits in VRAM with a LoRA dimension of 32 and the base model loaded in bf16. It's best run on a dedicated headless Ubuntu server, since there isn't much VRAM left over; otherwise the LoRA dimension needs to be reduced even further.
steps:
- git clone [https://github.com/artidoro/qlora](https://github.com/artidoro/qlora)
- adjust the LoRA r dimension to 32: [https://github.com/artidoro/qlora/blob/main/qlora.py#L142](https://github.com/artidoro/qlora/blob/main/qlora.py#L142)
- add your own dataset loader here: [https://github.com/artidoro/qlora/blob/main/qlora.py#L521](https://github.com/artidoro/qlora/blob/main/qlora.py#L521). For example:
```
elif args.dataset == 'my-data':
    dataset = load_dataset("json", data_files="./combined.json")
    dataset = dataset.map(lambda x: {
        'input': x['question'],
        'output': x['answer']
    }, remove_columns=['question', 'answer'])
```
​
- run a command like `python qlora.py --learning_rate 0.0001 --model_name_or_path timdettmers/guanaco-33b-merged --dataset my-data --bf16`
​
https://preview.redd.it/xbzhg4x1vl2b1.png?width=1266&format=png&auto=webp&s=bb2027da410b98fbe69fc498ceffa10a01f2ca6b | 2023-05-28T12:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/13tz14v/how_to_qlora_33b_model_on_a_gpu_with_24gb_of_vram/ | mzbacd | self.LocalLLaMA | 2023-05-28T16:44:44 | 0 | {} | 13tz14v | false | null | t3_13tz14v | /r/LocalLLaMA/comments/13tz14v/how_to_qlora_33b_model_on_a_gpu_with_24gb_of_vram/ | false | false | 59 | {'enabled': False, 'images': [{'id': 'kENSZ1PMmG9Ihv80XmD052ofU0DTNu-1K0X2CMrrd5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=108&crop=smart&auto=webp&s=08cbfe669c3993528813e06aaa7188dd9c7f11ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=216&crop=smart&auto=webp&s=e77d75b9d6d443fd3ea27906a4cd5512e360f730', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=320&crop=smart&auto=webp&s=f994e4660655fb87fb9f5ec4f078589b4b3a64d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=640&crop=smart&auto=webp&s=4200a18937b63fe08eab1be66c446e574fd061ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=960&crop=smart&auto=webp&s=b2274a245a25eb4f2d3ed6671730cf290ebda7a2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=1080&crop=smart&auto=webp&s=bba345ac9506e8bf332eadcc243c3ef2232cad61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?auto=webp&s=bfcbddd7e5b97db37bf8d754239f538b97d0b8cb', 'width': 1200}, 'variants': {}}]} | |
How do you train a model? | 3 | [removed] | 2023-05-28T12:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/13tz0wy/how_do_you_train_a_model/ | Rear-gunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tz0wy | false | null | t3_13tz0wy | /r/LocalLLaMA/comments/13tz0wy/how_do_you_train_a_model/ | false | false | default | 3 | null |
[ArXiv] The False Promise of Imitating Proprietary LLMs | 10 | https://arxiv.org/abs/2305.15717
TLDR; Initially, authors were surprised by the output quality of tested imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. | 2023-05-28T11:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/13txx1j/arxiv_the_false_promise_of_imitating_proprietary/ | CodingButStillAlive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13txx1j | false | null | t3_13txx1j | /r/LocalLLaMA/comments/13txx1j/arxiv_the_false_promise_of_imitating_proprietary/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
How big of a jump is 13B Vicuna Uncensored vs 30B Vicuna Uncensored? | 37 | [deleted] | 2023-05-28T10:57:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13txljp | false | null | t3_13txljp | /r/LocalLLaMA/comments/13txljp/how_big_of_a_jump_is_13b_vicuna_uncensored_vs_30b/ | false | false | default | 37 | null | ||
Cost comparison: ChatGPT API vs Cloud-hosting Llama-based models | 6 | [deleted] | 2023-05-28T10:34:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13tx6g1 | false | null | t3_13tx6g1 | /r/LocalLLaMA/comments/13tx6g1/cost_comparison_chatgpt_api_vs_cloudhosting/ | false | false | default | 6 | null | ||
How would I host a ggml model on a huggingface space and use it as an api? | 5 | Hello there, I'm quite new to this AI stuff and not knowledgeable enough to figure this out myself: how would I use a Hugging Face Space to host a ggml model and run inference on it through an API? I'm building a bot and I'd love to use Hugging Face's free resources on Spaces for that.
Thank you in advance :) | 2023-05-28T10:31:55 | https://www.reddit.com/r/LocalLLaMA/comments/13tx4rk/how_would_i_host_a_ggml_model_on_a_huggingface/ | AstroEmanuele | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tx4rk | false | null | t3_13tx4rk | /r/LocalLLaMA/comments/13tx4rk/how_would_i_host_a_ggml_model_on_a_huggingface/ | false | false | self | 5 | null |
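One workable pattern: a tiny Gradio app in the Space wrapping `llama-cpp-python`, since every Gradio Space also exposes its function over HTTP. A sketch of the `app.py` (the model file name is a placeholder for a GGML file you'd upload next to it):

```python
import gradio as gr
from llama_cpp import Llama

llm = Llama(model_path="model-q4_0.bin")  # placeholder GGML file in the Space repo

def generate(prompt: str) -> str:
    return llm(prompt, max_tokens=128)["choices"][0]["text"]

# Spaces serve this UI plus an HTTP endpoint callable via gradio_client
gr.Interface(fn=generate, inputs="text", outputs="text").launch()
```

From the bot you could then call it with `gradio_client`'s `Client("user/space").predict(...)`. Keep in mind free Spaces are CPU-only and may sleep, so expect slow first responses.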
How would I host a ggml model on a huggingface space and use it as an api? | 1 | [deleted] | 2023-05-28T10:29:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13tx2yo | false | null | t3_13tx2yo | /r/LocalLLaMA/comments/13tx2yo/how_would_i_host_a_ggml_model_on_a_huggingface/ | false | false | default | 1 | null | ||
How can I merge the qloara adapter weight back to the original model? | 12 | I couldn't find it in any docs in the qlora repo. I think someone has already done this, so I'm just wondering if anyone can share some pointers. | 2023-05-28T10:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/13twzp1/how_can_i_merge_the_qloara_adapter_weight_back_to/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13twzp1 | false | null | t3_13twzp1 | /r/LocalLLaMA/comments/13twzp1/how_can_i_merge_the_qloara_adapter_weight_back_to/ | false | false | self | 12 | null |
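The usual route is PEFT's `merge_and_unload()`, applied to the base model loaded in fp16 rather than 4-bit so the merged weights can actually be saved. A sketch (the base model id and adapter path are placeholders; the base must match what the adapter was trained on):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",            # placeholder: the adapter's base model
    torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # adapter dir
merged = model.merge_and_unload()     # folds the LoRA delta into the base weights
merged.save_pretrained("merged-model")
```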
LLaMa Tokenizer, where to get the tokenizer? (Python or another language is okay.) | 3 | I tried to use the one in the Hugging Face Transformers library with `LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")`, but it's been running for about an hour on 70MB of text and showing no signs of ending, which is unrealistically slow and makes me think something is wrong with it.
I just want to be able to throw a string at a function and get a list of tokens. Python is preferable but another language would work if I can easily make it into a standalone executable. | 2023-05-28T09:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/13twbbm/llama_tokenizer_where_to_get_the_tokenizer_python/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13twbbm | false | null | t3_13twbbm | /r/LocalLLaMA/comments/13twbbm/llama_tokenizer_where_to_get_the_tokenizer_python/ | false | false | self | 3 | null |
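Since the LLaMA tokenizer is a plain SentencePiece model, the `sentencepiece` package can tokenize directly from the `tokenizer.model` file that ships with the weights, and it is fast:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")  # from the weights dir
text = "throw a string at a function and get a list of tokens"
print(sp.encode(text, out_type=str))  # token strings
print(sp.encode(text))                # token ids
```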
Llama-farm: Yet another local llm/openai + vector db + API integration. Chat, query over your documents, youtube and so on. | 1 | 2023-05-28T08:30:15 | https://github.com/atisharma/llama_farm | _supert_ | github.com | 1970-01-01T00:00:00 | 0 | {} | 13tv5ql | false | null | t3_13tv5ql | /r/LocalLLaMA/comments/13tv5ql/llamafarm_yet_another_local_llmopenai_vector_db/ | false | false | default | 1 | null | |
samantha-7b | 183 | Today I have released samantha-7b
[https://huggingface.co/ehartford/samantha-7b](https://huggingface.co/ehartford/samantha-7b)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
Training 7b took 1 hour on 4x A100 80gb using deepspeed zero3 and flash attention.
She will not engage in roleplay, romance, or sexual activity.
13b and 30b coming tomorrow. 65b sometime this week.
Update: 13b is out. 30b is almost out.
Blog talking about why and how:
[https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha) | 2023-05-28T07:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/13tuipk/samantha7b/ | faldore | self.LocalLLaMA | 2023-05-28T21:28:39 | 1 | {'gid_2': 1} | 13tuipk | false | null | t3_13tuipk | /r/LocalLLaMA/comments/13tuipk/samantha7b/ | false | false | self | 183 | {'enabled': False, 'images': [{'id': 'jzDaxm3jT9LDKQiH4XaYm2qMVCWnPYmFKYtr9hRzCMs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=108&crop=smart&auto=webp&s=4dd6f981abbb2e2a1d3f286a69d5704bf7eefb7f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=216&crop=smart&auto=webp&s=3c1c64d012208668ea605d27a43d4836b7a5bfea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=320&crop=smart&auto=webp&s=136c5076db1a32eee292decda6c3c49c3963be01', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=640&crop=smart&auto=webp&s=72036268b43bc2ae1c712a3ebd02824d75809466', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=960&crop=smart&auto=webp&s=eee6d4151502a09dae27e0fb46d579d55a529d59', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=1080&crop=smart&auto=webp&s=c5f858039f14630180d9bb438809d958c261cfc9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?auto=webp&s=eb4c9d82b47b7bdee6c4670e7f5c3cc6410ef632', 'width': 1200}, 'variants': {}}]} |
Fine-tuning the 13B wizard model with a small amount of dataset and achieving GPT-4 level results? | 4 | Just kidding. I did a small fine-tuning experiment on the more promising 13b wizard model to see if I could get some improvement for my use case. After fine-tuning, I asked the model a TypeScript-related question:
```
Implement Add<A,B> to get the sum of two positive integers.
type A = Add<1,2> // 3
type B = Add<0,0> // 0
```
Both GPT-4 and the fine-tuned model give wrong answers (runtime functions rather than the type-level Add the question asks for), but they give the same wrong answer. Can I assume the fine-tuned model is wrong at the same level as GPT-4?
GPT-4:
\`\`\`
function add(a: number, b: number): number {
  return a + b;
}

let A = add(1, 2); // 3
let B = add(0, 0); // 0
\`\`\`
my fine-tuned model:
\`\`\`
function add(a: number, b: number): number {
  return a + b;
}

export function Add<T extends number>(a: T, b: T): T {
  let result = add(a, b);
  return result;
}
\`\`\`
FIY, my dataset -> [https://github.com/mzbac/lora-llm-qa-g/blob/main/dataset/combined.json](https://github.com/mzbac/lora-llm-qa-g/blob/main/dataset/combined.json) | 2023-05-28T04:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/13trngu/finetuning_the_13b_wizard_model_with_a_small/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13trngu | false | null | t3_13trngu | /r/LocalLLaMA/comments/13trngu/finetuning_the_13b_wizard_model_with_a_small/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'buRwCAVTZJwQyLro26K9WJOT4qVgiHKNP4GwmijQYNE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=108&crop=smart&auto=webp&s=7dc9bfd601cd702842e85519923f7e2258ad3ea5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=216&crop=smart&auto=webp&s=3a9759261a2823485979afeaa0d29a9b99e9b9f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=320&crop=smart&auto=webp&s=7029f4d2832c0c0b99c5aa21a4ebfaf82cdd58cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=640&crop=smart&auto=webp&s=594642ed4cf81ed22d73a4b40cd8b738db4706fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=960&crop=smart&auto=webp&s=308d4d54125ba3b2c47c556322e0525cbf7a509d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=1080&crop=smart&auto=webp&s=d0a53dc27758df9deeb4b4969c6dcee47e545d4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?auto=webp&s=ab6409e2f6c6398c0948744ecfd273a42d5b548c', 'width': 1200}, 'variants': {}}]} |
From your experience, what are the differences of the Llama models? | 21 | I've been testing a lot of Llama models recently, and there's been a bunch of new models released. (Mostly for chat and RP)
Honestly as of right now I don't notice much difference from the top models except for a few minor things
Wizardlm 30b is very coherent but imo it's too much by the book. It's like talking to a smart person relaying a story to you and not really a human. It's freakishly smart
Guanaco 33b is less coherent but has a bit more personality. I do think Wizard is smarter at analyzing context.
Manticore 13b least coherent of the three but still pretty good. Sometimes makes weird choices tho
These are pretty subjective and could be a problem with my prompt. What about you? What are your opinions on the models you tested? | 2023-05-28T03:04:21 | https://www.reddit.com/r/LocalLLaMA/comments/13tpsjo/from_your_experience_what_are_the_differences_of/ | AdministrativeLie745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tpsjo | false | null | t3_13tpsjo | /r/LocalLLaMA/comments/13tpsjo/from_your_experience_what_are_the_differences_of/ | false | false | self | 21 | null |
Gorilla 7B: Large Language Model Connected with Massive APIs | 123 | An interesting new, special-use-case model, from researchers at UC Berkeley and Microsoft!
# Gorilla: Large Language Model Connected with Massive APIs
Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them!
Project website: [https://shishirpatil.github.io/gorilla/](https://shishirpatil.github.io/gorilla/)
Project Github: [https://github.com/ShishirPatil/gorilla](https://github.com/ShishirPatil/gorilla)
Project paper: [https://arxiv.org/abs/2305.15334](https://arxiv.org/abs/2305.15334)
https://preview.redd.it/gd6b7w6exi2b1.png?width=696&format=png&auto=webp&s=252c8128fc649a31550a000b960e69cfe8e6b719
**My quantisations/merges:**
* [TheBloke/gorilla-7B-GPTQ](https://huggingface.co/TheBloke/gorilla-7B-GPTQ)
* [TheBloke/gorilla-7B-GGML](https://huggingface.co/TheBloke/gorilla-7B-GGML)
* [TheBloke/gorilla-7B-fp16](https://huggingface.co/TheBloke/gorilla-7B-fp16)
**Prompt template and example prompt:**
###USER: find me an API to generate cute cat images
###ASSISTANT: | 2023-05-28T02:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/13tp2yc/gorilla_7b_large_language_model_connected_with/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tp2yc | false | null | t3_13tp2yc | /r/LocalLLaMA/comments/13tp2yc/gorilla_7b_large_language_model_connected_with/ | false | false | 123 | null | |
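For illustration, a minimal sketch of running the GGML quantisation with llama-cpp-python using the template above (the file name is a placeholder):

```python
from llama_cpp import Llama

llm = Llama(model_path="./gorilla-7b.ggmlv3.q4_0.bin", n_ctx=2048)
prompt = "###USER: find me an API to generate cute cat images\n###ASSISTANT:"
out = llm(prompt, max_tokens=256, stop=["###USER:"])  # stop before it invents a new turn
print(out["choices"][0]["text"])
```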
Which database to use for semantic search? | 17 | There's Pinecone, Redis, Chroma, Weaviate, and Qdrant; which vector database should I use? And what's a good library for creating embeddings other than OpenAI's API? My credits expired :( | 2023-05-28T02:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/13tp2sr/which_database_to_use_for_semantic_search/ | CompetitiveSal | self.LocalLLaMA | 2023-05-28T02:48:48 | 0 | {} | 13tp2sr | false | null | t3_13tp2sr | /r/LocalLLaMA/comments/13tp2sr/which_database_to_use_for_semantic_search/ | false | false | self | 17 | null |
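For the embeddings half of this question, one free local option is sentence-transformers; a minimal sketch:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, fast general-purpose model
embeddings = model.encode(["first document", "second document"])
print(embeddings.shape)  # (2, 384)
```

The resulting vectors can then go into any of the stores listed above.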
Excited to share my ambitious free and open-source library for connecting AI, human, and computing systems. | 65 | Hello r/LocalLLaMa! My name's Dan. I'm a programmer and a lurker.
I know it's hard to keep up with all the announcements these days, but I'm excited to share a free and open source python project I've been working really hard on that I'm hoping others will find useful.
I've come up with a small framework to make it easier to integrate agents, machine learning models, datasets, user interfaces, and basically any kind of system you want, for whatever you want.
It's called \`everything\`. Inspired by other big ideas. :)
[https://github.com/operand/everything](https://github.com/operand/agency)
**\[edit\] The above link has been updated to point to the current project, now named \`agency\`. \[/edit\]**
If you're trying to build a foundation for an AI related system, this might be useful as a start. It's a lot to explain but this small library addresses a number of issues and provides a simple API and foundation for integrating ourselves and our machines.
I spent a lot of time on the readme and spread comments throughout the codebase to explain how it works. I hope you'll check it out.
It's very early so please don't expect production quality out of the box. Be ready to tinker. Only two "channel" classes have been minimally implemented just enough to show the concepts.
I'm putting it out there to see if this interests anyone. I'll be keeping it moving forward for now.
Thanks so much for reading! Let's create an open and kind future. ❤ | 2023-05-28T01:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/13to5ek/excited_to_share_my_ambitious_free_and_opensource/ | helloimop | self.LocalLLaMA | 2023-07-19T04:12:20 | 0 | {} | 13to5ek | false | null | t3_13to5ek | /r/LocalLLaMA/comments/13to5ek/excited_to_share_my_ambitious_free_and_opensource/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': 'm6th8V7E7zzx2CCzdbrW9zvqJWfFxufjUAokKdD9Qaw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=108&crop=smart&auto=webp&s=875dcf7e2c9c07458396f503d7cf2976a3c33503', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=216&crop=smart&auto=webp&s=99e96338557344e1b9e38df9f3f65166764d632c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=320&crop=smart&auto=webp&s=62c67e35464947430c0128ebdeb5046fed9500cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=640&crop=smart&auto=webp&s=401bb1677a6550313d8213f8cfc9752a105ca587', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=960&crop=smart&auto=webp&s=33b8ffcf7dd0812dedebc0374647a5281dddcf1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=1080&crop=smart&auto=webp&s=965489bf80f760a497b585a641250189335e583f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?auto=webp&s=6800f296fa85899c024b7b7dae5664e6c8dfb5a5', 'width': 1200}, 'variants': {}}]} |
Clean QLoRA training: best LLaMA model and stop tokens | 11 | Now that we have more efficient training, I'm hoping we'll see a lot of people experimenting with training on various data sets. But one thing I've noticed is that some models are very clean in stopping their text generation and some aren't. So far, working with the alpaca-clean data set on some experiments, my models aren't very good stoppers.
So I was curious as to what "standards" we should all be training our models on. Is huggyllama the current best clean LLaMA model to train on top of? And for clean stopping, are people adding '</s>' to the training prompt and adding that to a final model JSON file? I'm curious as to the best practice here. | 2023-05-28T00:40:11 | https://www.reddit.com/r/LocalLLaMA/comments/13tmzyt/clean_qlora_training_best_llama_model_and_stop/ | synn89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tmzyt | false | null | t3_13tmzyt | /r/LocalLLaMA/comments/13tmzyt/clean_qlora_training_best_llama_model_and_stop/ | false | false | self | 11 | null |
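For what it's worth, a minimal sketch of the common approach: append the tokenizer's EOS token to each training target so the model learns to emit it and stop cleanly (field names are illustrative):

```python
from transformers import LlamaTokenizer

tok = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")

def format_example(ex):
    # eos_token is "</s>" for LLaMA; the model learns to generate it at the end
    return ex["prompt"] + ex["response"] + tok.eos_token
```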
bigcode/tiny_starcoder_py is a 159M parameter model that runs on 2GB GPU and can generate python code | 106 | 2023-05-28T00:08:30 | https://huggingface.co/bigcode/tiny_starcoder_py | kryptkpr | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13tmben | false | null | t3_13tmben | /r/LocalLLaMA/comments/13tmben/bigcodetiny_starcoder_py_is_a_159m_parameter/ | false | false | 106 | {'enabled': False, 'images': [{'id': 'skL8XaRgbph-If49YabPslxfX2-TYPst2mEwzCn1KFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=108&crop=smart&auto=webp&s=bf724196785ca2ad0d8afdfac02896af1d4a958d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=216&crop=smart&auto=webp&s=216b24c919f852c47dfd1a482afd9311834e950b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=320&crop=smart&auto=webp&s=fbf7666c08ad26b15016036ef2b57ca1fa22e919', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=640&crop=smart&auto=webp&s=90a8dce1440bb04db676738af2ded342ee4af930', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=960&crop=smart&auto=webp&s=67ec12c7c180dcaf19398569c9895eb9fe6577c7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=1080&crop=smart&auto=webp&s=eb6fd3a48846e696f581da1e4d47a04806d44f5d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?auto=webp&s=63c689f9de982078ce24ed57b24e7c602c8e839d', 'width': 1200}, 'variants': {}}]} | ||
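For anyone wanting to try it, a minimal sketch with transformers (the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("bigcode/tiny_starcoder_py")
model = AutoModelForCausalLM.from_pretrained("bigcode/tiny_starcoder_py").to("cuda")
inputs = tok("def fibonacci(n):", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0]))
```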
Voice to text | 6 | Hi! I’m looking to record all of my conversations locally to develop my own library of context to train a future model on. Essentially create a model that represents me.
Any tools or peripherals out there that could do this currently? | 2023-05-27T22:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/13tk409/voice_to_text/ | Mnimmo90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tk409 | false | null | t3_13tk409 | /r/LocalLLaMA/comments/13tk409/voice_to_text/ | false | false | self | 6 | null |
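One commonly used fully-local option for the transcription half is OpenAI's open-source Whisper; a minimal sketch (the file name is a placeholder):

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")          # larger checkpoints are more accurate but slower
result = model.transcribe("recording.mp3")  # any audio file your recorder produces
print(result["text"])
```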
Red Pajamas 7b is not good. It is very bad in chat mode (refuses to answer questions and perform requests). `QA`mode works better, but it produces nasty and bigoted answers. Although vicuna says nasty things to, unless explicitly prompted to be nice. | 0 | 2023-05-27T21:19:22 | NancyAurum | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13tiiu0 | false | null | t3_13tiiu0 | /r/LocalLLaMA/comments/13tiiu0/red_pajamas_7b_is_not_good_it_is_very_bad_in_chat/ | false | false | default | 0 | null | ||
Best instruct model recommendations to use with T4? | 0 | [removed] | 2023-05-27T21:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/13tib0v/best_instruct_model_recommendations_to_use_with_t4/ | emissaryo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tib0v | false | null | t3_13tib0v | /r/LocalLLaMA/comments/13tib0v/best_instruct_model_recommendations_to_use_with_t4/ | false | false | default | 0 | null |
LLM Battle Arena: Week 4 | 14 | 2023-05-27T21:06:07 | https://lmsys.org/blog/2023-05-25-leaderboard/ | ninjasaid13 | lmsys.org | 1970-01-01T00:00:00 | 0 | {} | 13ti7u1 | false | null | t3_13ti7u1 | /r/LocalLLaMA/comments/13ti7u1/llm_battle_arena_week_4/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'Q1MF8IN_UA9pU4tqvD1hEdePlazYPLTs893pR_vAxGU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=108&crop=smart&auto=webp&s=563cdab161ef7a0db1d99480a17e1e0a964713a8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=216&crop=smart&auto=webp&s=f1e49ccecddfa4224d3b0a84c02a2f4e91fdbe23', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=320&crop=smart&auto=webp&s=433ccbd98e6312cd473b7462d9b2795647e68b22', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=640&crop=smart&auto=webp&s=1af23e81a95a45bb6c1c4137b61cac670f2817d9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=960&crop=smart&auto=webp&s=ea529d168b43832467d3edd73f14c041a334f395', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=1080&crop=smart&auto=webp&s=e32dc82175c894fb2626e6595baa25c1a27f2720', 'width': 1080}], 'source': {'height': 1138, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?auto=webp&s=b48765733ccaee4211093fc1887b8e3e1484d509', 'width': 1138}, 'variants': {}}]} | ||
How to squeeze more speed with my hardware? | 5 | Hi, I was reading posts about new models and decided to try the new Guanaco one. Pretty impressive: I tested it with conversations I'd had with ChatGPT and the responses were, I would say, equal in quality. I'm a newbie so take this with a grain of salt.
One big problem though: it's incredibly slow... It takes a good 8 secs to start typing and then produces about 6-10 characters a second.
Is there anything I can do to squeeze a little more juice from my system?
Here are my specs:
* CPU: AMD Ryzen 7 5800X3D (16) @ 3.400GHz
* GPU: AMD ATI Radeon RX 6800 XT
* Memory: 7582MiB / 64227MiB
* OS: Manjaro Linux x86\_64
* Kernel: 6.3.3-1-MANJARO
* Shell: bash 5.1.16
* Resolution: 3440x1440, 1080x1920
* DE: Plasma 5.27.4
* WM: KWin
* Terminal: konsole
Command I use to start it in **llama.cpp**
$ ./main -t 16 -m ./models/guanaco-33B.ggmlv3.q5_1.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
All suggestions are welcome. | 2023-05-27T21:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/13ti3dh/how_to_squeeze_more_speed_with_my_hardware/ | SebSenseGreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ti3dh | false | null | t3_13ti3dh | /r/LocalLLaMA/comments/13ti3dh/how_to_squeeze_more_speed_with_my_hardware/ | false | false | self | 5 | null |
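One cheap experiment worth trying, for what it's worth: the 5800X3D has 8 physical cores, and llama.cpp is usually memory-bandwidth-bound, so matching -t to physical cores often beats using all 16 threads:

    $ ./main -t 8 -m ./models/guanaco-33B.ggmlv3.q5_1.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins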
Guide for LoRA training? | 12 | Hi guys, basically the title. I have a general idea of what needs to be done, but not specifics - is there a program that does the training or is it command line? Does unstructured data (like a book) work or should it be prompt/response? How can I separate conversations in the data?
Any advice/direction would be appreciated. | 2023-05-27T20:26:15 | https://www.reddit.com/r/LocalLLaMA/comments/13thajr/guide_for_lora_training/ | Equal_Station2752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13thajr | false | null | t3_13thajr | /r/LocalLLaMA/comments/13thajr/guide_for_lora_training/ | false | false | self | 12 | null |
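Not a full guide, but a minimal sketch of the core PEFT setup (hyperparameters are illustrative, not tuned):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", load_in_8bit=True, device_map="auto"
)
config = LoraConfig(
    r=8, lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05, task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```

From there, a standard transformers Trainer loop over prompt/response pairs works; tools like text-generation-webui and alpaca-lora wrap this same flow.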
What's your personal lowest acceptable tokens/second? | 11 | I recently acquired 64GB of RAM and 24GB of VRAM, so I'm in the position of running any LLaMa under the sun besides 65B purely on GPU. I was experimenting with the pros and cons of running 33B on GPTQ versus 65B on GGML with 45 layers offloaded to GPU. The 65B had some slightly more intelligent responses, but the speed was nearly 10x slower than 33B in my use case (around 1.5 t/s for 65B versus 15 t/s for 33B). For my primary purpose (chat style) this was excruciating and I quickly went back to 33B despite the slightly worse responses, but I can see if your use case was more for Q&A where you can ask a question, do something for 5 minutes and come back, the 1.5 t/s wouldn't be too much of an issue. Since it looks like there's a lot of optimizations to be made for GPU offloading on the horizon I hope the t/s can speed up to something like 5 t/s, which would be tolerable. What do you think? | 2023-05-27T19:36:18 | https://www.reddit.com/r/LocalLLaMA/comments/13tg5cs/whats_your_personal_lowest_acceptable_tokenssecond/ | LeifEriksonASDF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tg5cs | false | null | t3_13tg5cs | /r/LocalLLaMA/comments/13tg5cs/whats_your_personal_lowest_acceptable_tokenssecond/ | false | false | self | 11 | null |
The most effective way to tell a text completion model what to do? | 4 | I've been experimenting with the WizardLM-30B-Uncensored and it seems very promising but I'm not sure if I'm following best practices when prompting it. I previously used OpenAI ChatGPT API where I put into the system message clear instructions on what the model should focus on and what format to output its message. It feels like trying to cram that in the beginning of a normal text completion isn't a good idea. | 2023-05-27T18:53:25 | https://www.reddit.com/r/LocalLLaMA/comments/13tf5vh/the_most_effective_way_to_tell_a_text_completion/ | Dogeboja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tf5vh | false | null | t3_13tf5vh | /r/LocalLLaMA/comments/13tf5vh/the_most_effective_way_to_tell_a_text_completion/ | false | false | self | 4 | null |
I've made a customisable SMS personal assistant which has infinite and persistent semantic memory. | 33 | Hi all,
I wanted to share a project I've been working on. It's somewhat similar to 'Diamond Age', where I've tried to create an AI assistant akin to the Primer.
The project uses Python GSM Modem, Langchain, and Pinecone, incorporating OpenAI Embeddings to consistently extract and store entities. The more information you provide it through conversation, the better responses it gives. To enhance this, I've used Redis caches for frequently accessed vectors.
I've also added a setup stage where you can configure your name, the bot's personality, and your objectives. These details are included in the system prompt, persistently stored and associated with your unique ID in Redis.
Currently, I'm running this on a Raspberry PI, leveraging the OpenAI API. However, if you wish to make more of this project, you can implement open-source models easily from the initialisation file. I'm posting here as that was my original idea. I really hope some people with better computers can have this running locally and have an experience that isn't already 'aligned'
This is my first project ever, and I thought the idea of talking to a sophisticated LLM via SMS, something nearly obsolete, was quite interesting. I had tried this earlier using Twilio but found it to be expensive, so I bought my own modem and built this. It's been fun creating this, and it's pretty cool having my own assistant.
I've attached the GitHub link for reference. This is my first attempt at sharing such a project, so I'm not sure where else to post this, but I'd appreciate any feedback or questions. I'm also aware that, since this needs a modem, it's not something everyone can run out of the box, but if you are in the UK and want to demo it, drop me a DM and I'll send you the number to text.
​
[\[GitHub Link\]](https://github.com/Seraphaious/SMS-AI.git) | 2023-05-27T18:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/13te61v/ive_made_a_customisable_sms_personal_assistant/ | Gromchoices | self.LocalLLaMA | 2023-05-27T18:13:38 | 0 | {} | 13te61v | false | null | t3_13te61v | /r/LocalLLaMA/comments/13te61v/ive_made_a_customisable_sms_personal_assistant/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'YX9MTaNNBdcXebhxtePecvYulLZz-YMQse6wwKM7r8w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=108&crop=smart&auto=webp&s=d13901d6a3b6b64e90ae19843a7f603b41dd15d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=216&crop=smart&auto=webp&s=8110a49a8f6391df44750870326e9056923c00c6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=320&crop=smart&auto=webp&s=4d8cfbcd673fc4e257df7303bb8f448b9c8de3e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=640&crop=smart&auto=webp&s=8b39e87005349d5029c7ec20dd8f04288ac89d51', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=960&crop=smart&auto=webp&s=f44e688aa6cf476ca0bb858a1ea471f0ed9d4a07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=1080&crop=smart&auto=webp&s=8acdab4ff08e021ea777bc25aabb6cee40391ec6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?auto=webp&s=a2d34f2c8ed9f9d40fe72717e193ec79c49cefb5', 'width': 1200}, 'variants': {}}]} |
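As a flavour of the vector-cache idea mentioned above, a minimal sketch with redis-py (key scheme and dtype are illustrative, not taken from the repo):

```python
import hashlib
import numpy as np
import redis

r = redis.Redis()

def get_embedding_cached(text: str, embed_fn):
    key = "emb:" + hashlib.sha256(text.encode()).hexdigest()
    cached = r.get(key)
    if cached is not None:
        return np.frombuffer(cached, dtype=np.float32)  # cache hit: skip the embed call
    vec = np.asarray(embed_fn(text), dtype=np.float32)
    r.set(key, vec.tobytes())
    return vec
```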
Has anyone been able to train their own model on private data? | 16 | I see so many guides out there but none that give step-by-step instructions.
For those who have successfully created a model, what kind of hardware are we talking about?
I’m familiar with fine tuning (been using langchain + local / opeani as llm)
But curious to know if training a model is better accuracy wise.
Thanks in advance! | 2023-05-27T17:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/13tdewu/has_anyone_been_able_to_train_their_own_model_on/ | gobiJoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tdewu | false | null | t3_13tdewu | /r/LocalLLaMA/comments/13tdewu/has_anyone_been_able_to_train_their_own_model_on/ | false | false | self | 16 | null |
LLaMA tokenizer: is a JavaScript implementation available anywhere? | 4 | I'm looking for a JavaScript implementation of the LLaMA tokenizer. I'm sure somebody has ported it to JS, but I haven't found anything?
Edit: Nope, there wasn't a JS LLaMA tokenizer available, so I made one: [https://github.com/belladoreai/llama-tokenizer-js](https://github.com/belladoreai/llama-tokenizer-js) | 2023-05-27T17:30:38 | https://www.reddit.com/r/LocalLLaMA/comments/13td9r0/llama_tokenizer_is_a_javascript_implementation/ | belladorexxx | self.LocalLLaMA | 2023-06-13T13:36:58 | 0 | {} | 13td9r0 | false | null | t3_13td9r0 | /r/LocalLLaMA/comments/13td9r0/llama_tokenizer_is_a_javascript_implementation/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 't5BYImubexnZbSs3UMfYWIEQSAIwcB_4G44jxoPka2g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=108&crop=smart&auto=webp&s=df91c49afd9f6de58616898380c72ae6a948f937', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=216&crop=smart&auto=webp&s=01acc1af705b9172a06b059a3e265f55986ab948', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=320&crop=smart&auto=webp&s=033a1bc04e7af056e872516f37d64e10ec61f82c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=640&crop=smart&auto=webp&s=aab47dcb08ce43c36e470372be27bece1d7701af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=960&crop=smart&auto=webp&s=d8412d139a899f824337c59dcbbaa7521352300a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=1080&crop=smart&auto=webp&s=123b220a268d6fbf3ef80c09fd10e26f1ac12ab3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?auto=webp&s=89c6e0f69fd3e2e908da1599a3d56019cd1a93cc', 'width': 1200}, 'variants': {}}]} |
anyone interested in teaming up for improving open soruce LLM quality to work as a team | 1 | [removed] | 2023-05-27T16:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/13tc608/anyone_interested_in_teaming_up_for_improving/ | UnitedDictatorland | self.LocalLLaMA | 2023-05-27T16:47:43 | 0 | {} | 13tc608 | false | null | t3_13tc608 | /r/LocalLLaMA/comments/13tc608/anyone_interested_in_teaming_up_for_improving/ | false | false | default | 1 | null |
What would be the most helpful? | 1 | With new models, new repos, and libraries to work with them coming out daily, it seems hard to imagine what's next. I'm interested in what everyone thinks the next most helpful, forward-moving project or idea would be at this point. | 2023-05-27T16:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/13tc2o9/what_would_be_the_most_helpful/ | Jl_btdipsbro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tc2o9 | false | null | t3_13tc2o9 | /r/LocalLLaMA/comments/13tc2o9/what_would_be_the_most_helpful/ | false | false | self | 1 | null |
WizardLM 13B 1.0 quantised for local LLMing | 206 | WizardLM have put out their long-awaited 13B training; for further details see this post: [https://www.reddit.com/r/LocalLLaMA/comments/13t8elc/official\_wizardlm13b\_model\_trained\_with\_250k/](https://www.reddit.com/r/LocalLLaMA/comments/13t8elc/official_wizardlm13b_model_trained_with_250k/)
I have done my thing and produced the following repos:
* [https://huggingface.co/TheBloke/wizardLM-13B-1.0-GGML](https://huggingface.co/TheBloke/wizardLM-13B-1.0-GGML)
* [https://huggingface.co/TheBloke/wizardLM-13B-1.0-GPTQ](https://huggingface.co/TheBloke/wizardLM-13B-1.0-GPTQ)
* [https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16](https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16) (still pushing at time of writing)
Enjoy!
(I have two other model quantisations to announce shortly as well.. watch this space!) | 2023-05-27T16:35:08 | https://www.reddit.com/r/LocalLLaMA/comments/13tbxzh/wizardlm_13b_10_quantised_for_local_llming/ | The-Bloke | self.LocalLLaMA | 2023-05-27T16:43:32 | 0 | {} | 13tbxzh | false | null | t3_13tbxzh | /r/LocalLLaMA/comments/13tbxzh/wizardlm_13b_10_quantised_for_local_llming/ | false | false | self | 206 | {'enabled': False, 'images': [{'id': 'BeAV9WboVW1jkTm30wLe8skJ-p66Rf_ev-fuEKOIdyI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=108&crop=smart&auto=webp&s=2b1c8c1942bd5abef6af2fce02a408860f68ff67', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=216&crop=smart&auto=webp&s=667e476bacd6a96641cf465517fcaf2806fbf2b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=320&crop=smart&auto=webp&s=426eeccd825a856130820ec72a35ac5f2a6b4189', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=640&crop=smart&auto=webp&s=3ed8d3139a32ae2c88e6f0f405ce8673977d82c4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=960&crop=smart&auto=webp&s=c3571b34b88e61420c5bc8f6a8ade7e8f1488bef', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=1080&crop=smart&auto=webp&s=7614713ecf81fcdd4a54a07b4c7834d8979b51e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?auto=webp&s=3c9dc5643779da2eaf0ab6f3138ed6bb10f5b205', 'width': 1200}, 'variants': {}}]} |
LORA question | 6 | I've wanted to train a LoRA for a while, but I haven't been sure how it works in terms of the data you give it. Say you want to give it a bunch of poems that cover a topic: can you just give it raw poems, or do you have to also tell it, for example, what topics each poem covers? I want to be able to state "write a poem about horses" OR "write a poem about cars" and have it write in the style of the poetry I made the LoRA from, but still be about horses or cars. Does that make sense?
I finetuned gpt2 years ago but I was only having it generate random poetry, I wasn’t even giving it a topic, so I didn’t worry about this. Curious what people’s experiences have been? | 2023-05-27T16:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/13tbppy/lora_question/ | maxiedaniels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tbppy | false | null | t3_13tbppy | /r/LocalLLaMA/comments/13tbppy/lora_question/ | false | false | self | 6 | null |
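For reference, a minimal sketch of what topic-conditioned instruction pairs tend to look like (contents are illustrative): the topic lives in the prompt, and the style lives in the outputs:

```python
dataset = [
    {"instruction": "Write a poem about horses", "output": "<one of your poems about horses>"},
    {"instruction": "Write a poem about the sea", "output": "<one of your poems about the sea>"},
]
```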
Local LLM to learn, explore and use for commercial purpose | 8 | I work in a software organization in a non-AI role and have some understanding of AI/ML concepts.
For my own learning, and to do a certain POC (proof of concept), I am thinking of running a local LLM on my laptop. How can I go about it if the following are my requirements:
1. Hardware constraint: It should run on a personal laptop.
2. Privacy and security: Initially I will use dummy/fabricated data, but if the POC works well then I can think of using actual data. So privacy and security will become important.
3. Open source for commercial use: Cannot use the OpenAI GPT API since they store data for 30 days, which doesn't inspire confidence on privacy. Cannot use Llama since it is not licenced for commercial use, and if my POC is successful it will be used commercially.
4. Learning: This requirement is optional. Instead of blindly following instructions to get an LLM running on my laptop as a black box, it would be more interesting to understand how the transformer architecture works internally.
I am sorry if these questions are too basic. You may point me to some good resources to understand these things better. | 2023-05-27T16:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/13tbb0o/local_llm_to_learn_explore_and_use_for_commercial/ | meet20hal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tbb0o | false | null | t3_13tbb0o | /r/LocalLLaMA/comments/13tbb0o/local_llm_to_learn_explore_and_use_for_commercial/ | false | false | self | 8 | null |
Using Llama for private data/knowledge base | 26 | This space is exploding at a tremendous speed, with newer models coming out almost daily. Although this is really groundbreaking, my question is more about practical use cases. For instance, my use case is to use these models (for now I am using Vicuna) to create a chatbot over my own private knowledge base. For simplicity, let's assume I need to create a chatbot which is up to date with the latest news. This is a two-pronged problem: first, the model should have "knowledge" of all the news to date, and then it should have the capability to "update" itself on a daily basis. After experimenting, I see two ways of going about it. One is fine-tuning Vicuna over all this data and then updating it periodically. I have concluded that this is impractical, but I would love to hear from the community if there is a way to use this option practically. The approach I have opted for is to:
1. Extract embeddings of these documents (let's say using sentence-transformers).
2. Store them in a vector store.
3. Retrieve relevant documents based on user input (semantic similarity).
4. Pass the user's input along with the docs as additional context.
5. Use prompt engineering to try to keep the chatbot from diluting answers with its pre-existing knowledge.
Although this approach is scalable, one of the biggest challenges I face is the limit on input context/prompt size. Most of the models cap it at around 2048 tokens. Let's say I have an input which returns multiple documents, each longer than 2048 tokens; how do I then use this approach? How do I use all these documents as a single definite context? What options are available for this kind of use case? Are there better models with a much larger context limit? How have people in this community managed to leverage this or any other approach for custom datasets? From a hardware perspective, let's assume we can use 2x A40 or 1x A100 GPUs.
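One common workaround, sketched minimally below: chunk each document before embedding, so retrieval returns short passages that fit the window rather than whole documents (sizes are illustrative):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_document = open("article.txt").read()
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
chunks = splitter.split_text(long_document)  # embed and store each chunk separately
```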
I will keep updating this thread with relevant/useful comments from the community, as I feel a lot of people have this question but there is no good answer/approach (at least I couldn't find one).
​
Thanks! | 2023-05-27T16:00:55 | https://www.reddit.com/r/LocalLLaMA/comments/13tb3n6/using_llama_for_private_dataknowledge_base/ | OpportunityProper252 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13tb3n6 | false | null | t3_13tb3n6 | /r/LocalLLaMA/comments/13tb3n6/using_llama_for_private_dataknowledge_base/ | false | false | self | 26 | null |
Is it possible to combine Radeon+nVidia GPUs at the same time for inference? | 1 | [removed] | 2023-05-27T15:19:44 | https://www.reddit.com/r/LocalLLaMA/comments/13ta4ov/is_it_possible_to_combine_radeonnvidia_gpus_at/ | nodating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ta4ov | false | null | t3_13ta4ov | /r/LocalLLaMA/comments/13ta4ov/is_it_possible_to_combine_radeonnvidia_gpus_at/ | false | false | default | 1 | null |
Is it possible to combine Radeon+nVidia GPUs at the same time for inference? | 1 | [removed] | 2023-05-27T15:03:40 | [deleted] | 2023-05-27T15:18:25 | 0 | {} | 13t9r7q | false | null | t3_13t9r7q | /r/LocalLLaMA/comments/13t9r7q/is_it_possible_to_combine_radeonnvidia_gpus_at/ | false | false | default | 1 | null | ||
Official WizardLM-13B model trained with 250k evolved instructions! | 108 | * Today, the WizardLM team has released the **official** **WizardLM-13B** model trained with **250k** evolved instructions (from ShareGPT).
* The project repo: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* Please download its delta model at [WizardLM/WizardLM-13B-1.0](https://huggingface.co/WizardLM/WizardLM-13B-1.0)
​
**NOTE:** The **WizardLM-13B-1.0** and **WizardLM-7B** models use different prompts at the beginning of the conversation:
1. For **WizardLM-13B-1.0**, the prompt should be as follows (see the sketch after this list):
"***A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:***"
2. For **WizardLM-7B**, the prompt should be as follows:
"***{instruction}\\n\\n### Response:***"
​
## GPT-4 automatic evaluation
They adopt the automatic evaluation framework based on GPT-4 proposed by FastChat to assess the performance of chatbot models. As shown in the following figure, WizardLM-13B achieved better results than Vicuna-13b.
https://preview.redd.it/j26gd3p9sd2b1.png?width=2194&format=png&auto=webp&s=f852cb037293cb4305fa4866e47d91ba3b03e327
## WizardLM-13B performance on different skills.
The following figure compares WizardLM-13B's and ChatGPT's skills on the Evol-Instruct test set. The result indicates that WizardLM-13B achieves 89.1% of ChatGPT's performance on average, with almost 100% (or more) capacity on 10 skills, and more than 90% capacity on 22 skills.
https://preview.redd.it/bmj08oyasd2b1.png?width=2194&format=png&auto=webp&s=d439efc67b42b1a650a36c196e67ec02ccc90ff7 | 2023-05-27T14:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/13t8elc/official_wizardlm13b_model_trained_with_250k/ | Worth-Barnacle-7539 | self.LocalLLaMA | 2023-05-28T07:56:12 | 0 | {} | 13t8elc | false | null | t3_13t8elc | /r/LocalLLaMA/comments/13t8elc/official_wizardlm13b_model_trained_with_250k/ | false | false | 108 | {'enabled': False, 'images': [{'id': 'GaTxB_P5EuNkmYpXmquBVlsnQJ_dw4z7ZEtxyVKY_Ag', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=108&crop=smart&auto=webp&s=4ce376cbce4ab8d6f6b263dca2b49b5549aa3c3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=216&crop=smart&auto=webp&s=12d0e1c3cc79399b52fa47979bf6e3feaedb0972', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=320&crop=smart&auto=webp&s=824578d4d95b132d56aa2385c74e9973d8eab702', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=640&crop=smart&auto=webp&s=62a52b2b4cb4f4e5bee8ffb123b903d45d394634', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=960&crop=smart&auto=webp&s=38221f293d4084b087d363833512d9b6257fb17d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=1080&crop=smart&auto=webp&s=d10bd803a779909e4559a8d9afd466a6b66f17d8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?auto=webp&s=70027cfd1085ef303822ff97baddb167bda70659', 'width': 1200}, 'variants': {}}]} | |
Red Pajamas could be run on CPU with a patched GGML: https://github.com/ggerganov/ggml/pull/134 https://huggingface.co/keldenl/ Windows version needs ggml_time_init() added to main() and compiled with cmake -G 'Unix Makefiles'. aligned_alloc() could be needed too, if gcc is too old. | 13 | 2023-05-27T14:04:57 | NancyAurum | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13t8dtn | false | null | t3_13t8dtn | /r/LocalLLaMA/comments/13t8dtn/red_pajamas_could_be_run_on_cpu_with_a_patched/ | false | false | 13 | {'enabled': True, 'images': [{'id': 'vUP2r2mDrkdlIjeW3QHC6Cc-w1PrTJhq_uVpBdJY5Kg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=108&crop=smart&auto=webp&s=ba50db91972bae499dbb316ce3d1adb89f7ee01c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=216&crop=smart&auto=webp&s=a187ae67e7879045b7d9951fcb0377b055511b57', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=320&crop=smart&auto=webp&s=95b63bf47daac53bb7da79c001fe0710ecfe46f5', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=640&crop=smart&auto=webp&s=e5967aa931fb9d5382c7f1fd3c94e3b188bd14ab', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=960&crop=smart&auto=webp&s=9936ac7f76619c42c5467b6bdd7247f75f0ed0b7', 'width': 960}, {'height': 606, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=1080&crop=smart&auto=webp&s=e7ad57be64d0677a4da58b3d50c6facc8fe23084', 'width': 1080}], 'source': {'height': 1063, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?auto=webp&s=53e71853fc204fcb64b73c648eb6457a6f3b842c', 'width': 1894}, 'variants': {}}]} | |||
Official WizardLM-13B model trained with 250k evolved instructions! | 1 | [removed] | 2023-05-27T14:00:14 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13t89j5 | false | null | t3_13t89j5 | /r/LocalLLaMA/comments/13t89j5/official_wizardlm13b_model_trained_with_250k/ | false | false | default | 1 | null | ||
Can AI Code? Automatic evaluation of Python and JS coding performance of Vicuna, Wizard and other LLMs. | 52 | 2023-05-27T12:17:37 | https://github.com/the-crypt-keeper/can-ai-code/tree/main | kryptkpr | github.com | 1970-01-01T00:00:00 | 0 | {} | 13t5xpq | false | null | t3_13t5xpq | /r/LocalLLaMA/comments/13t5xpq/can_ai_code_automatic_evaluation_of_python_and_js/ | false | false | 52 | {'enabled': False, 'images': [{'id': '05u1EQqOWWOMJIaDE8BryTl8t0QswIDq1u6an-7K1Pw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=108&crop=smart&auto=webp&s=7c44e52fe9cffece36b3479ddd0e16eab244411a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=216&crop=smart&auto=webp&s=0d87def384f0b5af4da3ef25bea866d08762896c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=320&crop=smart&auto=webp&s=2324ec6b09ca7d68e620b6b858f79ec7b41ad9ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=640&crop=smart&auto=webp&s=96c4dfd7c37f22f888d63f38e0fce5eb75388701', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=960&crop=smart&auto=webp&s=2731446b753dc915cdcdcd0922a4dd25bbc5e489', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=1080&crop=smart&auto=webp&s=53b9e5c70ca241763eb5e75a16c48c5d985821e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?auto=webp&s=5157ebd9c62ecbd6702e65e8ff00651a3efa8fee', 'width': 1200}, 'variants': {}}]} | ||
So, if I connect my iPhone to LocalLLama, is it then RemoteLLama? :D | 1 | 2023-05-27T11:28:17 | https://v.redd.it/u0l1won60d2b1 | No_Wheel_9336 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13t4yc3 | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/u0l1won60d2b1/DASHPlaylist.mpd?a=1695091730%2COTNkYjhkMzE0YjI1N2YxNzY4MDQyMGUzYmM1ODg4NDg5MDBmZmQwNmZmNzJmOTg2Yjk3MDc4YTQxNGRhNzQ4NA%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/u0l1won60d2b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/u0l1won60d2b1/HLSPlaylist.m3u8?a=1695091730%2CMDIyNTM3NWNiYjA3ZGM1YTMzNTUyNzc2NTNmYWFkNjZhYWVmODRmOGEyY2Q5NWEzMDk3NzcxZjM5MjdjOTJkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/u0l1won60d2b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}} | t3_13t4yc3 | /r/LocalLLaMA/comments/13t4yc3/so_if_i_connect_my_iphone_to_localllama_is_it/ | false | false | default | 1 | null | |
Creating a LoRA from unstructured text | 9 | Newbie but learning.
I've got a load of unstructured text.
I've been investigating how to make a LoRA with it.
The text is all speech as I am trying to improve character creation.
My question.
Should I spend the time structuring the text? Adding some context above each file etc.
For example
The following text is Bob, Bob is a superhero who is very arrogant. He saves lives but nobody likes his attitude.
Or should I just leave the raw text with no context?
Or, is there something totally different I should do. | 2023-05-27T11:21:52 | https://www.reddit.com/r/LocalLLaMA/comments/13t4ttx/creating_a_lora_from_unstructured_text/ | Useful-Command-8793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13t4ttx | false | null | t3_13t4ttx | /r/LocalLLaMA/comments/13t4ttx/creating_a_lora_from_unstructured_text/ | false | false | self | 9 | null |
Building personal assistants with LocalLLaMA | 7 | I have used stablevicuna and langchain (on jupyter) to run a question answering bot on my CPU. I have two questions:
1. How do I improve the speed of response? (Any comments, thoughts will help). Should I try to run on GPU with the relevant version of stablevicuna?
2. I want to build an AutoGPT-like bot locally (using StableVicuna, langchain) that can search the internet and read/write from a custom database or a folder of documents. How do I get started? Edit: I am getting "OutputParserException: Could not parse LLM output" while working with MPT-7B. With Vicuna, I was getting other errors, with the LLM continuing to hallucinate questions. Do the langchain agents work only with OpenAI models?
Thanks in advance! | 2023-05-27T11:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/13t4ref/building_personal_assistants_with_localllama/ | anindya_42 | self.LocalLLaMA | 2023-05-29T17:48:24 | 0 | {} | 13t4ref | false | null | t3_13t4ref | /r/LocalLLaMA/comments/13t4ref/building_personal_assistants_with_localllama/ | false | false | self | 7 | null |
If your GPU sucks hard enough, CPU only can be faster than CPU + GPU | 29 | I'm running llama.cpp on a ThinkStation S30 with a Quadro K4000 - a 10 year old PC.
I compiled a version of llama.cpp for my ancient hardware: CUDA 10, no AVX2.
On running it, I found that I was able to fit up to 9 layers of the model in my tiny VRAM.
However, the fewer layers I sent to the GPU, the faster it ran.
Going from 9 to zero GPU layers almost doubled the speed. 😬 | 2023-05-27T11:00:54 | https://www.reddit.com/r/LocalLLaMA/comments/13t4f3m/if_your_gpu_sucks_hard_enough_cpu_only_can_be/ | Robot_Graffiti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13t4f3m | false | null | t3_13t4f3m | /r/LocalLLaMA/comments/13t4f3m/if_your_gpu_sucks_hard_enough_cpu_only_can_be/ | false | false | self | 29 | null |
building LLM model to answer question | 4 | Hello, I'm trying to build an LLM designed to answer Star Wars questions - something like ChatGPT. I want to ask you for some direction, instructions, etc. on what model could be good for it. I tried to use LlamaIndex and scraped something like 36MB of data; however, it was too big for the indexing, I guess. Also, I tried using Alpaca LoRA and prepared a small dataset for fine-tuning, but the results weren't satisfying, as the model generated mostly random things. Is fine-tuning the correct direction (and should I just look at other models to fine-tune), or are there better ways to use the model with a whole wiki of knowledge? | 2023-05-27T10:23:39 | https://www.reddit.com/r/LocalLLaMA/comments/13t3sxn/building_llm_model_to_answer_question/ | dejw3v3 | self.LocalLLaMA | 2023-05-27T11:33:39 | 0 | {} | 13t3sxn | false | null | t3_13t3sxn | /r/LocalLLaMA/comments/13t3sxn/building_llm_model_to_answer_question/ | false | false | self | 4 | null |
preparing LLM with a lot of specific domain knowledge | 1 | [removed] | 2023-05-27T10:01:56 | https://www.reddit.com/r/LocalLLaMA/comments/13t3fxj/preparing_llm_with_a_lot_of_specific_domain/ | Intrepid-Hope4208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13t3fxj | false | null | t3_13t3fxj | /r/LocalLLaMA/comments/13t3fxj/preparing_llm_with_a_lot_of_specific_domain/ | false | false | default | 1 | null |
Llama Lora to generate longform content | 1 | Was wondering if anyone has tried this out yet: [https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff)
There's a twitter thread here: [https://twitter.com/akoksal\_/status/1648248915655811075](https://twitter.com/akoksal_/status/1648248915655811075)
Is this any better than using something like Wizard-7B and asking it to give a detailed exposition/essay on some topic?
This is a trial post | 1 | [removed] | 2023-05-27T09:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/13t2nbg/this_is_a_trial_post/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13t2nbg | false | null | t3_13t2nbg | /r/LocalLLaMA/comments/13t2nbg/this_is_a_trial_post/ | false | false | default | 1 | null |
Hoping for some advice based on my hardware | 2 | i9-13900k, 4070Ti, 64gb DDR5/6000
With these specs, what interfaces and models do you guys recommend I look into? I’m pretty amateur, and so far I’ve only used a few models through GPT4All, which I understand uses the CPU by default. What’s a good crash course for some other interfaces I can use with my specs, and what recent models do you think would work best on my hardware? I’ve been browsing huggingface, but being such a neophyte I seem to have more luck reading threads here on Reddit. | 2023-05-27T08:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/13t2e8g/hoping_for_some_advice_based_on_my_hardware/ | LuckyIngenuity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13t2e8g | false | null | t3_13t2e8g | /r/LocalLLaMA/comments/13t2e8g/hoping_for_some_advice_based_on_my_hardware/ | false | false | self | 2 | null |
Security PSA: huggingface models are code. not just data. | 215 | Update your security model if you thought that huggingface models are just data that you can safely run without auditing.
This is not the case: they may contain Python scripts. The transformers library will download and run these scripts if the trust_remote_code flag/variable is True.
For example [falcon 7B](https://huggingface.co/tiiuae/falcon-7b/tree/main) has two python scripts. A quick scan through them shows that there is nothing dangerous or bad in those scripts. (They are used to define custom transformer model architectures)
Just something important to be aware of when trying out new models. You need to do a quick check of the python scripts in the repo if they are there.
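For illustration, the flag in question defaults to False, so a plain load refuses to execute repo code; models that ship custom architecture scripts (such as Falcon) only run them once you opt in:

```python
from transformers import AutoModelForCausalLM

# raises an error asking you to opt in, because the repo ships custom code
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")

# only do this after reading the repo's *.py files
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
```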
Notes:
Docs for this flag:
* https://huggingface.co/docs/transformers/model_doc/auto
Code in HF transformers lib that loads up code downloaded from a repo:
* https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/auto/auto_factory.py#L127
* https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/auto/configuration_auto.py#L888
**Note:** This is a completely separate problem from the safetensors issue. safetensors does not solve this problem. | 2023-05-27T08:52:42 | https://www.reddit.com/r/LocalLLaMA/comments/13t2b67/security_psa_huggingface_models_are_code_not_just/ | rain5 | self.LocalLLaMA | 2023-05-27T10:18:11 | 0 | {} | 13t2b67 | false | null | t3_13t2b67 | /r/LocalLLaMA/comments/13t2b67/security_psa_huggingface_models_are_code_not_just/ | false | false | self | 215 | {'enabled': False, 'images': [{'id': '9lFAr7Y5pmabNxy6pwVsC_HoAMpeUsJmDhrV8sSFYmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=108&crop=smart&auto=webp&s=a9fdd3c0591952266d46ec3f16b5e2b84d2f86b5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=216&crop=smart&auto=webp&s=76798469614ac887f5d1d76d0138abb87cb85a2a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=320&crop=smart&auto=webp&s=13487859661c8451299309fc3d736202e85ebdb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=640&crop=smart&auto=webp&s=33012dfd42ad618da97556d3a216b69504c783f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=960&crop=smart&auto=webp&s=2bccb4535f5459e84b01b18736e9da4ccdef48e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=1080&crop=smart&auto=webp&s=4e299ecb018de3a5ab6dfad37126c79e0815b0b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?auto=webp&s=1b4716908a8b8c3645c814ff23669d9e053a9cfb', 'width': 1200}, 'variants': {}}]} |
Is there an alternative to AgentGPT that I can run on my CPU with 32 GB of RAM? | 1 | [removed] | 2023-05-27T08:52:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13t2b4y | false | null | t3_13t2b4y | /r/LocalLLaMA/comments/13t2b4y/is_there_an_alternative_to_agentgpt_that_i_can/ | false | false | default | 1 | null | ||
Where do you think local LLMs will be 2 years from now? | 1 | title | 2023-05-27T07:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/13t16jn/where_do_you_think_local_llms_will_be_2_years/ | Necessary_Ad_9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13t16jn | false | null | t3_13t16jn | /r/LocalLLaMA/comments/13t16jn/where_do_you_think_local_llms_will_be_2_years/ | false | false | self | 1 | null |
How much performance increase does using NVLink give? | 2 | I've seen a few comments saying that using NVLink between 2 3090s provides a performance boost over 2 without NVLink but I haven't seen any figures showing this, Does anyone have any numbers or more information?
I'm considering whether to add a 4090 or 3090 to an existing 3090 set up.
I initially thought it was just the same as SLI bridges we used to get free in the box with graphics cards but NVLink bridges have to be bought separately and cost £85+
Edit: This says NVlink not worth it: https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/#What_is_NVLink_and_is_it_useful | 2023-05-27T06:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/13t09kj/how_much_performance_increase_does_using_nvlink/ | Copper_Lion | self.LocalLLaMA | 2023-05-27T17:07:00 | 0 | {} | 13t09kj | false | null | t3_13t09kj | /r/LocalLLaMA/comments/13t09kj/how_much_performance_increase_does_using_nvlink/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'hRzrP-m1lWiqRsPC9clNfPnRc_tCRGpGzbHrCBCO32w', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=108&crop=smart&auto=webp&s=668e5b311d1c35aff56276238ffffbef59a34cbd', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=216&crop=smart&auto=webp&s=e5d618c70aba2724fe319170fae68248a02358b4', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=320&crop=smart&auto=webp&s=1f9ee728a7cf766ab4d4133d621a85c3f472cbe1', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=640&crop=smart&auto=webp&s=adbfc1dc01b734a26bdb39216924618ca7303467', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=960&crop=smart&auto=webp&s=526a81f6a1fbb22035ff6beea153dcbb7fcb78bc', 'width': 960}, {'height': 1060, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=1080&crop=smart&auto=webp&s=5d8588d93e47ab695cdf7ab603f7c618c9cad270', 'width': 1080}], 'source': {'height': 1673, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?auto=webp&s=769813ee65a44dcf57de71198f3f9993ba60d790', 'width': 1703}, 'variants': {}}]} |
Crashing when trying to load 65B 4-bit models with two 24GB GPUs (Windows 10) | 3 | [removed] | 2023-05-27T06:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/13szwwz/crashing_when_trying_to_load_65b_4bit_models_with/ | EphemeralFate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13szwwz | false | null | t3_13szwwz | /r/LocalLLaMA/comments/13szwwz/crashing_when_trying_to_load_65b_4bit_models_with/ | false | false | default | 3 | null |
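The post body above was removed, but one standard way to fit a 65B 4-bit model across two 24 GB cards is to shard it with accelerate and cap per-GPU memory. A hedged sketch, assuming a recent transformers with bitsandbytes installed; the repo ID and memory caps are illustrative, not the OP's setup:

```python
from transformers import AutoModelForCausalLM

# Capping each card below its physical 24 GB leaves headroom for
# activations and the CUDA context, a frequent cause of crashes when a
# model "should" just fit across two GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b",              # example repo; substitute your checkpoint
    load_in_4bit=True,                   # on-the-fly 4-bit via bitsandbytes
    device_map="auto",                   # let accelerate shard layers across GPUs
    max_memory={0: "21GiB", 1: "21GiB"},
)
```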
what are you guys using your local LLMs for? | 18 | as the title says, I'm curious what people are using a local LLM for. I have a decently sized GPU (12GB) so I could run a simple quantized LLM. I would love to experiment with it, but I would like some use-cases that can give me ideas about what to use it on. | 2023-05-27T06:02:31 | https://www.reddit.com/r/LocalLLaMA/comments/13szk3y/what_are_you_guys_using_your_local_llms_for/ | Cunninghams_right | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13szk3y | false | null | t3_13szk3y | /r/LocalLLaMA/comments/13szk3y/what_are_you_guys_using_your_local_llms_for/ | false | false | self | 18 | null |
Testing Guanaco's Reasoning Skills - a shared conversation between chatGPT and the Guanaco 33B language model | 1 | [deleted] | 2023-05-27T05:44:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13sz8rd | false | null | t3_13sz8rd | /r/LocalLLaMA/comments/13sz8rd/testing_guanacos_reasoning_skills_a_shared/ | false | false | default | 1 | null | ||
Does DDR5 load llama.cpp faster than DDR4? | 5 | [deleted] | 2023-05-27T04:41:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13sy4js | false | null | t3_13sy4js | /r/LocalLLaMA/comments/13sy4js/does_ddr5_load_llamaccp_faster_than_ddr4/ | false | false | default | 5 | null |
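The deleted question above still has a useful back-of-envelope answer: model *loading* is mostly disk-bound, but CPU token *generation* is bounded by memory bandwidth. A rough worked example, with nominal bandwidth figures that are assumptions rather than measurements:

```python
# Rough upper bound for CPU inference: each generated token streams all
# model weights through memory once, so tokens/s <= bandwidth / model size.
# Bandwidth figures are nominal dual-channel numbers (assumptions; real
# systems sustain less). Loading from disk, by contrast, is usually
# bottlenecked by the drive, not by the DDR generation.
model_gb = 3.8  # e.g. a 7B model quantized to ~4 bits

for name, gb_per_s in [("DDR4-3200 dual channel", 51.2),
                       ("DDR5-6000 dual channel", 96.0)]:
    print(f"{name}: <= {gb_per_s / model_gb:.1f} tokens/s")
```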
Landmark Attention -> LLaMa 7B with 32k tokens! | 123 | 2023-05-27T04:38:00 | https://arxiv.org/abs/2305.16300 | jd_3d | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 13sy2bu | false | null | t3_13sy2bu | /r/LocalLLaMA/comments/13sy2bu/landmark_attention_llama_7b_with_32k_tokens/ | false | false | 123 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
How do you highlight a small list of key points from a document? | 6 | I would like to summarize a large text document using a local LLM. Accuracy is not important, but maintaining consistency is. It's something like an extraction of creative ideas from a notes, drafts, or a personal diary. At the end, I would like a list of items or perhaps brainstorming questions. These questions could then be jointly discussed with the local LLM in chat mode, in a general context, taking into account the limits on the number of tokens in a coherent dialogue with the LLM. How could this be implemented? | 2023-05-27T04:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/13sxxpe/how_do_you_highlight_a_small_list_of_key_points/ | nihnuhname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sxxpe | false | null | t3_13sxxpe | /r/LocalLLaMA/comments/13sxxpe/how_do_you_highlight_a_small_list_of_key_points/ | false | false | self | 6 | null |