title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I want an extreme upgrade for my PC's motherboard in order to fit 256GB RAM. Which motherboard is extreme and reliable enough for this? | 2 | I can currently max out my RAM at 128GB but I'm not satisfied with that. I want more. I wanna make sure LLaMA 2 is my bitch when it comes to RAM (VRAM will be a different story). I also wanna go ahead and upgrade my processor to an i9 while I'm at it later.
What do you guys recommend when it comes to such an extreme motherboard? If I have to shell out $1000 for it I am willing to do just that but I've never replaced a motherboard before so I'm worried about PSU requirements and cooling plus any extra steps to take in order to max out the RAM.
What do you guys recommend? | 2023-09-21T23:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/16ov1cq/i_want_an_extreme_upgrade_for_my_pcs_motherboard/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ov1cq | false | null | t3_16ov1cq | /r/LocalLLaMA/comments/16ov1cq/i_want_an_extreme_upgrade_for_my_pcs_motherboard/ | false | false | self | 2 | null |
Don't sleep on Xwin-LM-70B-V0.1 for roleplay | 63 | I've been testing out some of the more recent Llama2 70b models, and I have to say Xwin-LM-70B-V0.1 might be my new favorite. It will do NSFW just fine and seems to "follow the script" better than any other model I've tested recently. Although I haven't tested it personally, I imagine the 13b version is worth a look too.
Try the 70b version with a higher repetition penalty. I've had good results in the 1.2 - 1.25 range, otherwise it does tend to repeat itself. | 2023-09-21T22:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/16ou180/dont_sleep_on_xwinlm70bv01_for_roleplay/ | sophosympatheia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ou180 | false | null | t3_16ou180 | /r/LocalLLaMA/comments/16ou180/dont_sleep_on_xwinlm70bv01_for_roleplay/ | false | false | self | 63 | null |
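For reference on what that 1.2-1.25 knob does: most llama.cpp/transformers-style samplers apply a CTRL-style repetition penalty that rescales the logits of already-seen tokens before sampling. A simplified pure-Python sketch (real implementations usually only look back over a limited window of recent tokens):

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty=1.2):
    """CTRL-style penalty: shrink positive logits and push negative logits
    further down for any token id that already appeared in the context."""
    out = list(logits)
    for t in set(seen_token_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out
```

At penalty = 1.0 nothing changes; pushing much past ~1.3 often starts to hurt grammar, which is why the 1.2-1.25 range above is a sensible band.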
GGUF models load but always return 0 tokens. | 1 | Has anyone seen this before? Any troubleshooting tips are appreciated.
I'm on Ubuntu 22.04 using webui and the llama.cpp loader.
The GGUF models load without error, but then on generation I see:
Traceback (most recent call last):
File "/home/g/AI/text/oobabooga_linux/text-generation-webui/modules/callbacks.py", line 56, in gentask
ret = self.mfunc(callback=_callback, *args, **self.kwargs)
File "/home/g/AI/text/oobabooga_linux/text-generation-webui/modules/llamacpp_model.py", line 119, in generate
prompt = self.decode(prompt).decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'. Did you mean: 'encode'?
Output generated in 0.22 seconds (0.00 tokens/s, 0 tokens, context 72, seed 1027730307)
Some info after model load:
llm_load_tensors: ggml ctx size = 0.12 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 5322.06 MB (+ 3200.00 MB per state)
llm_load_tensors: offloading 10 repeating layers to GPU
llm_load_tensors: offloaded 10/43 layers to GPU
llm_load_tensors: VRAM used: 1702 MB
...................................................................llama_new_context_with_model: kv self size = 3200.00 MB
llama_new_context_with_model: compute buffer total size = 351.47 MB
llama_new_context_with_model: VRAM scratch buffer: 350.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
2023-09-21 17:38:35 INFO:Loaded the model in 0.79 seconds. | 2023-09-21T21:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/16osj0l/gguf_models_load_but_always_return_0_tokens/ | compendium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16osj0l | false | null | t3_16osj0l | /r/LocalLLaMA/comments/16osj0l/gguf_models_load_but_always_return_0_tokens/ | false | false | self | 1 | null |
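The traceback shows `generate` received an already-decoded `str` and then called `.decode()` on it, which usually means the installed llama-cpp-python version no longer matches what this webui build expects (newer versions return `str` where older ones returned `bytes`). Updating text-generation-webui so the loader and library versions match is the practical fix; the kind of guard that papers over it looks like this (illustrative sketch, not the actual webui code):

```python
def ensure_text(prompt):
    """Accept either raw bytes or an already-decoded str.

    Older llama-cpp-python returned bytes from tokenizer round-trips;
    newer versions return str, which makes an unconditional
    .decode('utf-8') raise AttributeError like the one above.
    """
    if isinstance(prompt, bytes):
        return prompt.decode("utf-8")
    return prompt
```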
Xwin-LM - New Alpaca Eval Leader - ExLlamaV2 quants | 17 | https://github.com/Xwin-LM/Xwin-LM
https://tatsu-lab.github.io/alpaca_eval/
I know it's not a great metric, but top of the board has to count for something, right? I quantized the 7B model to 8bpw exl2 format last night, and the model is a nicely verbose and coherent storyteller.
https://huggingface.co/UnstableLlama/Xwin-LM-7B-V0.1-8bpw-exl2
I have slow upload speed but will be uploading more BPW sizes for the 7b over the next day or two, along with trying my hand at quantizing the 13b model tonight. | 2023-09-21T21:38:11 | https://www.reddit.com/r/LocalLLaMA/comments/16os70e/xwinlm_new_alpaca_eval_leader_exllamav2_quants/ | Unstable_Llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16os70e | false | null | t3_16os70e | /r/LocalLLaMA/comments/16os70e/xwinlm_new_alpaca_eval_leader_exllamav2_quants/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'ToKi1elIgPHnDTp37xLtb-ady-K38rPnKWR2kTpHN0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=108&crop=smart&auto=webp&s=6fa20eb11f871b8596d8e11d60502925242ddfab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=216&crop=smart&auto=webp&s=78636232d081c081ff3b0bc8be42868e5c960b6f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=320&crop=smart&auto=webp&s=dfe56f2b07bcab0734cdfeae339ca3a6ac368a2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=640&crop=smart&auto=webp&s=3d0924d81f11d8905f625243c3e524652ee78f35', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=960&crop=smart&auto=webp&s=0e21b0de4c361c397e9843a2d0827872daaabd11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?width=1080&crop=smart&auto=webp&s=d1b0db52ee2cd698d96c0101e76882fdc0885688', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KBcJa3sgchMwrz3MatoR__tivRrtq9yERkFpo-VsRJ0.jpg?auto=webp&s=f18ca6b8d30d326f942ffa7995f2c239a32fc6f3', 'width': 1200}, 'variants': {}}]} |
ETL/pre-processing options besides Unstructured.io | 0 | Have folks found other tools comparable to [Unstructured.io](https://Unstructured.io)? I'm trying to understand the space better, and I'm not sure which options for enterprise data pre-processing (to support RAG) are most similar (vs more limited/DIY solutions). | 2023-09-21T21:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/16os5y2/etlpreprocessing_options_besides_unstructuredio/ | kanzeon88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16os5y2 | false | null | t3_16os5y2 | /r/LocalLLaMA/comments/16os5y2/etlpreprocessing_options_besides_unstructuredio/ | false | false | self | 0 | null |
Airoboros 70b talking to itself | 1 | [removed] | 2023-09-21T21:07:49 | https://www.reddit.com/r/LocalLLaMA/comments/16oreiw/airoboros_70b_talking_to_itself/ | NickDifuze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16oreiw | false | null | t3_16oreiw | /r/LocalLLaMA/comments/16oreiw/airoboros_70b_talking_to_itself/ | false | false | self | 1 | null |
Blind Chat - OS privacy-first ChatGPT alternative, running fully in-browser | 19 | Blind Chat is an Open Source UI (powered by chat-ui) that runs the model directly in your browser and performs inference locally using transformers.js. No data ever leaves your device. The current version uses a Flan T5-based model, but could potentially be replaced with other models.
Tweet: [https://twitter.com/xenovacom/status/1704910846986682581](https://twitter.com/xenovacom/status/1704910846986682581)
Demo: [https://huggingface.co/spaces/mithril-security/blind\_chat](https://huggingface.co/spaces/mithril-security/blind_chat) | 2023-09-21T18:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/16onc37/blind_chat_os_privacyfirst_chatgpt_alternative/ | hackerllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16onc37 | false | null | t3_16onc37 | /r/LocalLLaMA/comments/16onc37/blind_chat_os_privacyfirst_chatgpt_alternative/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': '7KkrYKC6F2B9dvzycyXlmNk1M_EO3_51gcimAnT9jdY', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/OdYrgdh43B0a4VnPqjW7y8k-5RpaIY1aAWZCevWm0ZQ.jpg?width=108&crop=smart&auto=webp&s=ad27ff897007db51896e8ca9c6c28589e1cd27e1', 'width': 108}], 'source': {'height': 74, 'url': 'https://external-preview.redd.it/OdYrgdh43B0a4VnPqjW7y8k-5RpaIY1aAWZCevWm0ZQ.jpg?auto=webp&s=c2042ded6c5f8a48cbf78cea52673803b584824c', 'width': 140}, 'variants': {}}]} |
Optimal KoboldCPP settings for Ryzen 7 5700u? | 6 | I'm running a 13B q5_k_m model on a laptop with a Ryzen 7 5700U and 16GB of RAM (no dedicated GPU), and I wanted to ask how I can maximize my performance.
I set the following settings in my koboldcpp config:
CLBlast with 4 layers offloaded to iGPU
9 Threads
9 BLAS Threads
1024 BLAS batch size
High Priority
Use mlock
Disable mmap
Smartcontext enabled
are these settings optimal or is there a better way I can be doing this? | 2023-09-21T18:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/16omqku/optimal_koboldcpp_settings_for_ryzen_7_5700u/ | Amazing-Moment953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16omqku | false | null | t3_16omqku | /r/LocalLLaMA/comments/16omqku/optimal_koboldcpp_settings_for_ryzen_7_5700u/ | false | false | self | 6 | null |
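Those GUI options map to KoboldCpp's command-line flags; a sketch of assembling the equivalent launch arguments (flag names as I recall them; double-check against `python koboldcpp.py --help` for your version):

```python
def kobold_args(threads=9, blas_threads=9, blas_batch=1024, gpu_layers=4):
    """Build the KoboldCpp CLI flags matching the GUI settings above."""
    return [
        "--useclblast", "0", "0",        # CLBlast on platform 0, device 0
        "--gpulayers", str(gpu_layers),  # layers offloaded to the iGPU
        "--threads", str(threads),
        "--blasthreads", str(blas_threads),
        "--blasbatchsize", str(blas_batch),
        "--highpriority",
        "--usemlock",                    # "Use mlock"
        "--nommap",                      # "Disable mmap"
        "--smartcontext",
    ]
```

One commonly reported tweak on an 8-core/16-thread 5700U is to try a thread count near the physical core count (7 or 8) rather than higher, since SMT threads rarely help this kind of inference.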
Getting completely random stuff with LlamaCpp when using the llama-2-7b.Q4_K_M.gguf model | 7 | Hello, first I must say I'm completely new to this. Today I tried the Llama 7B model with this code:
from langchain.llms import LlamaCpp

model = LlamaCpp(
    verbose=False,
    model_path="./model/llama-2-7b.Q4_K_M.gguf",
)

o = model("Hi how are you?")
print(o)
Returns a long, essentially random message that just extends what I typed in.
The return is:
> I was asked by a friend to take over as leader of the local youth club. everybody else has moved away or is busy, so who's going to run it?
>
>A: That's a difficult task.
>
>B: It was a really successful club, but nobody wants to do it any more. We need someone with experience and commitment.
>
>A: Yes, I can understand that.
>
>B: But what I don't know is whether you have the necessary experience or commitment.
​
I tried using a better prompt, but it basically always happened that my model would just talk with itself, autocompleting what I had typed and fantasizing.
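This is largely expected behavior: llama-2-7b (no `-chat` suffix) is the base model, a pure next-token autocompleter, so it will happily continue whatever you type and invent dialogue. For assistant-style answers, load the chat variant (e.g. a llama-2-7b-chat GGUF) and wrap your input in the Llama-2 chat template, roughly like this sketch:

```python
def llama2_chat_prompt(user_msg, system_msg="You are a helpful assistant."):
    """Wrap a user message in the Llama-2-chat [INST] instruction format."""
    return (
        "<s>[INST] <<SYS>>\n" + system_msg + "\n<</SYS>>\n\n"
        + user_msg + " [/INST]"
    )
```

Called as `model(llama2_chat_prompt("Hi how are you?"))` against the chat model, this should produce an answer rather than a continuation; a `stop` list helps keep it from rambling afterwards.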
​ | 2023-09-21T17:13:49 | https://www.reddit.com/r/LocalLLaMA/comments/16oldrp/getting_completely_random_stuff_with_llamacpp/ | DasEvoli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16oldrp | false | null | t3_16oldrp | /r/LocalLLaMA/comments/16oldrp/getting_completely_random_stuff_with_llamacpp/ | false | false | self | 7 | null |
Worthwhile to use 6GB 1660S with 24GB 3090? | 17 | I'm completely new to this, never ran Llama before at all so I'm starting from ground zero.
I've built a server with these specs:
CPUs: 2x Xeon E5-2699v4 (combined 44 cores/88 threads)
RAM: 512GB DDR4 2400
GPU: 24GB RTX 3090
SSD: 4x 2TB NVMe, configured in ZFS Mirrored 2x VDEVs
OS: Unraid
I'm wanting to use it for Llama (as well as some other stuff like Minecraft), for fun and learning and to teach my kids as well (teenagers).
I have a spare GTX 1660 Super with 6GB VRAM - should I use that in the server as well? Or am I better off just using the 3090 by itself?
Is there some "optimal" model I should try to run that could make the best use of the hardware I have?
Thank you in advance! | 2023-09-21T17:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/16ol36s/worthwhile_to_use_6gb_1660s_with_24gb_3090/ | bornsupercharged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ol36s | false | null | t3_16ol36s | /r/LocalLLaMA/comments/16ol36s/worthwhile_to_use_6gb_1660s_with_24gb_3090/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'bnN7PD3SNaxlQ7JI9JdfbArPhs5CNAF3fkYaHoJCbYc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FSWigWLcOwfOjThFpDPEAfYspAP3jyGtHZFfM8x47lI.jpg?width=108&crop=smart&auto=webp&s=26722c52cecd2001cbace4f1b5cb97d3bcda78dd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/FSWigWLcOwfOjThFpDPEAfYspAP3jyGtHZFfM8x47lI.jpg?width=216&crop=smart&auto=webp&s=e07a066a98ef110d92013614a02d81c938c96c5d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/FSWigWLcOwfOjThFpDPEAfYspAP3jyGtHZFfM8x47lI.jpg?width=320&crop=smart&auto=webp&s=d8b5f7eeb1b4cd19f623bef03f01f081458ade19', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/FSWigWLcOwfOjThFpDPEAfYspAP3jyGtHZFfM8x47lI.jpg?width=640&crop=smart&auto=webp&s=d4f66af09873476480a891750c4cbf7e7b5409a7', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/FSWigWLcOwfOjThFpDPEAfYspAP3jyGtHZFfM8x47lI.jpg?width=960&crop=smart&auto=webp&s=201ee29c770934a47bcd49c1ad9de020f4ba8cbf', 'width': 960}], 'source': {'height': 536, 'url': 'https://external-preview.redd.it/FSWigWLcOwfOjThFpDPEAfYspAP3jyGtHZFfM8x47lI.jpg?auto=webp&s=e9e2725d2a5ea9da7c70e53035452c9871187859', 'width': 1024}, 'variants': {}}]} |
Looking for an Energy-Efficient NUC-like or Single-Board Computer to Run Llama Models | 10 | Hi everyone! Do any of you have recommendations for an energy-efficient NUC-like or single-board computer capable of running Llama models like 7B and 13B (or even larger ones, if that's possible) with decent speed?
Thanks in advance | 2023-09-21T16:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/16ol2el/looking_for_an_energyefficient_nuc_like_or/ | the_Loke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ol2el | false | null | t3_16ol2el | /r/LocalLLaMA/comments/16ol2el/looking_for_an_energyefficient_nuc_like_or/ | false | false | self | 10 | null |
BlindChat: Fully in-browser and private Conversational AI with Transformers.js for local inference | 1 | 2023-09-21T16:57:42 | https://huggingface.co/spaces/mithril-security/blind_chat | Separate-Still3770 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 16ol132 | false | null | t3_16ol132 | /r/LocalLLaMA/comments/16ol132/blindchat_fully_inbrowser_and_private/ | false | false | default | 1 | null | |
Back into LLMs, what are the best LLMs now for a 12GB VRAM GPU? | 36 | My GPU was pretty much always busy with AI art, but now that I bought a better new one, I have a 12GB card sitting in a computer built mostly from used spare parts, ready for use.
What are now the best LLMs runnable in 12GB of VRAM for:
* programming
* general chat
* nsfw sensual chat
Thanks | 2023-09-21T16:17:15 | https://www.reddit.com/r/LocalLLaMA/comments/16ok2wx/back_into_llms_what_are_the_best_llms_now_for/ | ptitrainvaloin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ok2wx | false | null | t3_16ok2wx | /r/LocalLLaMA/comments/16ok2wx/back_into_llms_what_are_the_best_llms_now_for/ | false | false | self | 36 | null |
My crackpot theory of what Llama 3 could be | 1 | [removed] | 2023-09-21T15:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/16ojmi8/my_crackpot_theory_of_what_llama_3_could_be/ | hapliniste | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ojmi8 | false | null | t3_16ojmi8 | /r/LocalLLaMA/comments/16ojmi8/my_crackpot_theory_of_what_llama_3_could_be/ | false | false | self | 1 | null |
Discouraging first experience - likely I am doing something wrong? | 2 | I installed GPT4All and Llama 2. First impressions:
* Doesn't stick to German. Prompted to answer everything in German but fails miserably.
* LocalDocs Collection is enabled, with German text files. It seldom picks content from the local collection, answers are in English, and answers are very unspecific, even though the content is almost question-answer.
Is this to be expected from Llama2-7B q4b? | 2023-09-21T15:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/16oit10/disencouraging_first_experience_likely_i_am_doing/ | JohnDoe365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16oit10 | false | null | t3_16oit10 | /r/LocalLLaMA/comments/16oit10/disencouraging_first_experience_likely_i_am_doing/ | false | false | self | 2 | null |
Help with low VRAM Usage | 1 | I’ve recently gotten an RTX 3090, and I decided to run Llama 2 7B in 8-bit. When using it, my GPU usage stays at around 30% and my VRAM at about 50%. I know this wouldn’t be bad for a video game, but I'm ignorant of the implications when it comes to LLMs. Also, despite the upgrade, the t/s hasn’t increased much at all, staying at around 4-6 t/s. If any insight can be given I’d greatly appreciate it. | 2023-09-21T14:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/16ohx4r/help_with_low_vram_usage/ | marv34001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ohx4r | false | null | t3_16ohx4r | /r/LocalLLaMA/comments/16ohx4r/help_with_low_vram_usage/ | false | false | self | 1 | null |
LLM Deployment | 2 | Hi, I filled out the form to get access to Llama 2 and it’s been a week with no reply. Is there another model that can be accessed which performs as well as (if not better than) Llama and is not a gated repo? I know that CodeLlama is not gated and OpenLLaMA is similar, but not in a true sense. So which LLM should I choose to deploy? Some articles related to that model would also be very helpful. | 2023-09-21T14:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/16ohlhj/lll_deployment/ | Old_Celebration1945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ohlhj | false | null | t3_16ohlhj | /r/LocalLLaMA/comments/16ohlhj/lll_deployment/ | false | false | self | 2 | null |
Train Llama2 in RAG Context End to End | 34 | 2023-09-21T14:33:14 | https://colab.research.google.com/drive/1QOOasOUJ2RR6owcJ9_6ccZLqu1Y06HWo | jacobsolawetz | colab.research.google.com | 1970-01-01T00:00:00 | 0 | {} | 16ohh4h | false | null | t3_16ohh4h | /r/LocalLLaMA/comments/16ohh4h/train_llama2_in_rag_context_end_to_end/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} | ||
Looking for fine-tuners who want to build an exciting new model | 8 | Someone recently placed the entirety of PyPI on Hugging Face, [see tweet here](https://twitter.com/VikParuchuri/status/1704670850694451661). This is actually very cool.
​
The timing is great, because yesterday I introduced RAG into the synthetic generation pipeline [here](https://github.com/emrgnt-cmplxty/sciphi/tree/main). I'm in the process of indexing the entirety of this PyPI dataset using ChromaDB in the cloud. It will be relatively easy to plug this into SciPhi when done.
I believe this provides an opportunity to construct a valuable fine-tuning dataset. We can generate synthetic queries and get live up-to-date responses using the latest and greatest python packages. I'm prepared to work on the dataset creation. Is anyone interested in collaborating on the fine-tuning aspect?
Additionally, for the fine-tuned model, I'm curious whether it should be formatted to query the RAG oracle directly or if it be structured in a question and answer format. | 2023-09-21T14:02:33 | https://www.reddit.com/r/LocalLLaMA/comments/16ogr0j/looking_for_finetuners_who_want_to_build_an/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ogr0j | false | null | t3_16ogr0j | /r/LocalLLaMA/comments/16ogr0j/looking_for_finetuners_who_want_to_build_an/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '2BRF62XzG-oU1DEV0DDwwsxrFrVgEasL3xFoqB5C1X0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zTgcdJWy6XTVnLdTTRn-oSQIkebVpEXcQl80Pfsp9uI.jpg?width=108&crop=smart&auto=webp&s=758a5480f4dd875662a10d131455aa26a0d3e6d0', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/zTgcdJWy6XTVnLdTTRn-oSQIkebVpEXcQl80Pfsp9uI.jpg?auto=webp&s=eb3f73f2c44d497477b639b5dffe0c51eecbf844', 'width': 140}, 'variants': {}}]} |
Inference - how to always select the most likely next token? | 3 | So I have a nice model. Our use case requires that we provide the most likely outcome, i.e. it is not creative but should have high accuracy to hit exactly what we want.
How can we set top_p, top_k and temperature in such a way that we always get the most likely next token? | 2023-09-21T13:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/16ogfqo/inference_how_to_always_select_the_most_likely/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ogfqo | false | null | t3_16ogfqo | /r/LocalLLaMA/comments/16ogfqo/inference_how_to_always_select_the_most_likely/ | false | false | self | 3 | null |
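Always taking the single most likely token is just greedy decoding: in Hugging Face `generate` pass `do_sample=False` (top_p/top_k/temperature are then ignored); in samplers that only expose knobs, `top_k = 1` forces the same behavior, and a temperature near 0 approximates it. What greedy selection does, as a pure-Python sketch:

```python
def greedy_pick(logits):
    """Return the index of the highest-logit token (deterministic argmax)."""
    best_idx = 0
    for i in range(1, len(logits)):
        if logits[i] > logits[best_idx]:
            best_idx = i
    return best_idx
```

The output is then fully deterministic for a given prompt, which is usually what accuracy-focused use cases want.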
AI Makerspace event | 3 | At 2:00 PM EST/11:00 AM PST watch [**AI Makerspace**](https://www.linkedin.com/company/99895334/admin/feed/posts/#) present "Smart RAG" and see Arcee's approach to e2e RAG including a LIVE DEMO.
Event Title: Smart RAG: Domain-specific fine-tuning for end-to-end applications
Event page: https://lu.ma/smartRAG | 2023-09-21T13:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/16og6ol/ai_makerspace_event/ | benedict_eggs17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16og6ol | false | null | t3_16og6ol | /r/LocalLLaMA/comments/16og6ol/ai_makerspace_event/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/5ZI7oL3JTPPt59G0vTfOaQMHvka17QCAdFnF87leUeA.jpg?auto=webp&s=52cc36e047bdca039326e84d3b7ce7aabaf12be6', 'width': 64}, 'variants': {}}]} |
x2 4090 vs x1 A6000 | 1 | [removed] | 2023-09-21T12:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/16of81y/x2_4090_vs_x1_a6000/ | dhammo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16of81y | false | null | t3_16of81y | /r/LocalLLaMA/comments/16of81y/x2_4090_vs_x1_a6000/ | false | false | self | 1 | null |
New to LLMs: Seeking Feedback on Feasibility of Hierarchical Multi-Label Text Classification for Large Web Archives | 6 | Hello, I'm new to LLMs. I would like some critique as I may not have a proper understanding of LLMs.
​
I would like to assess the possibility of performing text classification on extensive web archive files.
Specifically, I'd like the LLM to categorize the text into predefined categories **rather than providing out of the box/generic responses.**
​
I possess training data with abstracts and their target categories.
There are > 1000 target classes.
Each text can have up to 10 target labels.
The target classes are a taxonomy and have a hierarchical structure.
Thus, a hierarchical multi-label text classification problem.
​
The general plan I had was:
1) Given the large size of the web archive files, I intend to use ChatGPT or other LLMs to generate an abstract or produce embeddings as a condensed representation.
2) Considering the high number of target classes, I plan to create embeddings for these as well.
3) I will use Retrieval Augmented Generation and prompt the LLM to pick up to 10 labels using the embeddings from both the text and the target classes.
4) Finally, I'll fine-tune the model using the training data to see if that increases the accuracy.
​
I am wondering if this workflow makes sense and is even feasible.
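One way to de-risk steps 2-3: the label shortlist in step 3 needs no LLM at all. Embed the abstract and every taxonomy label, rank labels by cosine similarity, and only then ask the LLM to choose among the shortlist. A dependency-free sketch (the vectors here are placeholders; in practice they come from your embedding model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def shortlist_labels(text_emb, label_embs, k=10):
    """label_embs: {label_name: vector}. Return the k most similar labels."""
    ranked = sorted(label_embs,
                    key=lambda name: cosine(text_emb, label_embs[name]),
                    reverse=True)
    return ranked[:k]
```

With >1000 classes, this keeps the prompt small and turns the LLM's job into picking up to 10 labels from a short candidate list, which is much easier to validate against the training data.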
Thank you very much!
​ | 2023-09-21T11:25:21 | https://www.reddit.com/r/LocalLLaMA/comments/16odb9s/new_to_llms_seeking_feedback_on_feasibility_of/ | ReversingEntropy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16odb9s | false | null | t3_16odb9s | /r/LocalLLaMA/comments/16odb9s/new_to_llms_seeking_feedback_on_feasibility_of/ | false | false | self | 6 | null |
How do you do hyper-parameter tuning when fine-tuning a model? | 1 | lora_alpha = 16
lora_dropout = 0.1
lora_r = 64
per_device_train_batch_size = 5
gradient_accumulation_steps = 5
optim = "paged_adamw_32bit"
save_steps = 10
logging_steps = 5
learning_rate = 1e-4
max_grad_norm = 0.6
max_steps = 30
warmup_ratio = 0.2
lr_scheduler_type = "constant"
​
How would you go about tuning the above parameters? | 2023-09-21T11:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/16od08f/how_you_do_hyperparameter_tuning_in_finetuning_of/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16od08f | false | null | t3_16od08f | /r/LocalLLaMA/comments/16od08f/how_you_do_hyperparameter_tuning_in_finetuning_of/ | false | false | self | 1 | null |
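A common approach is to freeze most of these and sweep only the few that usually matter (lora_r, learning_rate, lora_dropout), running a short training job per configuration and comparing validation loss. Random search over the grid tends to find good settings faster than an exhaustive sweep; a minimal sketch with illustrative values:

```python
import itertools
import random

search_space = {  # illustrative values, not recommendations
    "lora_r": [16, 32, 64],
    "learning_rate": [5e-5, 1e-4, 2e-4],
    "lora_dropout": [0.05, 0.1],
}

def sample_configs(space, n=5, seed=0):
    """Random search: draw n distinct configs from the full grid.

    Each config would get a short training run (e.g. the max_steps = 30
    above) and the one with the lowest validation loss wins.
    """
    rng = random.Random(seed)
    grid = list(itertools.product(*space.values()))
    picks = rng.sample(grid, min(n, len(grid)))
    return [dict(zip(space.keys(), p)) for p in picks]
```

Dedicated sweep tools (e.g. Optuna or W&B sweeps) automate the same loop with early stopping of bad runs.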
During the inference of a fine-tuned model, which tokenizer to use? from the base model or from fine tuned adapter? (falcon and llama) | 1 | During the inference of a fine-tuned model, which tokenizer to use? from the base model or from fine tuned adapter? (falcon and llama) | 2023-09-21T11:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/16ocyew/during_the_inference_of_a_finetuned_model_which/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ocyew | false | null | t3_16ocyew | /r/LocalLLaMA/comments/16ocyew/during_the_inference_of_a_finetuned_model_which/ | false | false | self | 1 | null |
What happens to all those H100? | 107 | So Nvidia plans to [ship 500k H100](https://www.businessinsider.com/nvidia-triple-production-h100-chips-ai-drives-demand-2023-8) GPUs this year (triple that in 2024). GPT-4 was trained on 25k A100s in ~3 months.
So will we have many GPT-4-class models soon? What do you guys reckon the share of H100s used to train LLMs is? Are most of them private? What are the other big applications?
I would think that something like 10% of those cards (50k) are bought by Meta and will be used for the new LLaMAs. Interesting times. | 2023-09-21T11:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/16ocy8y/what_happens_to_all_those_h100/ | Scarmentado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ocy8y | false | null | t3_16ocy8y | /r/LocalLLaMA/comments/16ocy8y/what_happens_to_all_those_h100/ | false | false | self | 107 | {'enabled': False, 'images': [{'id': '9VvMnwSOAz8Js1oEGSnrr-ZVVOdVzYjURsqXmAAJtNA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=108&crop=smart&auto=webp&s=8e1cd7767b5cc7a05b46c440ba6e0cc4fe92a40c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=216&crop=smart&auto=webp&s=71cd7785a8d74b89aae60611f32a1732aa71eb35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=320&crop=smart&auto=webp&s=7722382a9ae99d2b2c3d12b0e45732ae46f13628', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=640&crop=smart&auto=webp&s=61d4ebd2b81dc7301fb1a3ed4bdb25b4b303dd72', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=960&crop=smart&auto=webp&s=0e804d322458403c7dac4bc0644e6383f78c3fdc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?width=1080&crop=smart&auto=webp&s=fd869f0ca372a6e7b9c8a00a401855566601f67a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Aqyl0HT6LdiVc449TMDi7jkzNiwPTJC5KOlxDn-c7C8.jpg?auto=webp&s=3e405004891d2a8ce5644e596ca901fd09ea4ae9', 'width': 1200}, 'variants': {}}]} |
Falcon7b instruct finetuning, is this the correct graph? cyclic nature seems suspicious. | 25 | 2023-09-21T10:28:25 | Anu_Rag9704 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16ocb6l | false | null | t3_16ocb6l | /r/LocalLLaMA/comments/16ocb6l/falcon7b_instruct_finetuning_is_this_the_correct/ | false | false | 25 | {'enabled': True, 'images': [{'id': 'YejktDeVB38Yt3XPYZbA0UPSri6XZIogiH5dJEMiG3o', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=108&crop=smart&auto=webp&s=de6709d24732a2778ed923a054f72b4b9f726b16', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=216&crop=smart&auto=webp&s=23f22b8df7a690e1981d7f55c182944f8f4845a4', 'width': 216}, {'height': 299, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=320&crop=smart&auto=webp&s=c33781b761c1d32643f2d71fc1b9a01d573d62cc', 'width': 320}, {'height': 599, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?width=640&crop=smart&auto=webp&s=6937a5aaa5367909f9bf5f2a793fc8ea461f4db5', 'width': 640}], 'source': {'height': 644, 'url': 'https://preview.redd.it/5w6jlgwo5lpb1.png?auto=webp&s=73a0746b48a20e605617f8b4d108af9a74a90209', 'width': 688}, 'variants': {}}]} | |||
How do I find out the context size of a model? | 12 | How do I tell, for example, the context size of [this](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF) model? Is it 4k or 8k? | 2023-09-21T08:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/16oae0h/how_do_i_find_out_the_context_size_of_a_model/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16oae0h | false | null | t3_16oae0h | /r/LocalLLaMA/comments/16oae0h/how_do_i_find_out_the_context_size_of_a_model/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'mnnGoQzMpoArWvahEXtgVgzM9UH9DCGdkx1-9iIJric', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=108&crop=smart&auto=webp&s=4ed8d4fc05ddea4fb9e9e780c8bd7e37c2157725', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=216&crop=smart&auto=webp&s=9d3f57b20da22a5d11ea87907026cd90de8910b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=320&crop=smart&auto=webp&s=afb84e1f1c5206ad63246cfb84a2bb234db98d51', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=640&crop=smart&auto=webp&s=50aeefbe539a1e2f13d3032796a0408317d450ac', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=960&crop=smart&auto=webp&s=76fe2ce022b1b7fd427230a0ce9e651cebbcf065', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?width=1080&crop=smart&auto=webp&s=6a3841da2cebe38a98dcbe7274bcb021aecbdffe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Kpctp-yneWMLYS0kxZUEpFNIOTphiBRDlhwmIvySxwg.jpg?auto=webp&s=368dfb8eea0a535a8437b4ed7d855904e99aaaca', 'width': 1200}, 
'variants': {}}]} |
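The question above ("is it 4k or 8k?") comes up constantly. The trained context window is usually recorded in the model's metadata: `max_position_embeddings` in a transformers `config.json`, or `llama.context_length` in GGUF metadata. A minimal sketch of a lookup helper — the key list is an assumption covering common formats, not exhaustive:

```python
import json

# Key names used by different model formats/architectures; which one is
# present depends on the model (assumption: one of these exists).
_CTX_KEYS = ("max_position_embeddings", "n_ctx", "n_positions", "llama.context_length")

def context_length(config: dict) -> int:
    """Return the trained context window recorded in a model's config/metadata."""
    for key in _CTX_KEYS:
        if key in config:
            return int(config[key])
    raise KeyError("no context-length field found")

# e.g. the config.json of a Llama-2-13B base model reports 4096
cfg = json.loads('{"model_type": "llama", "max_position_embeddings": 4096}')
print(context_length(cfg))  # 4096
```

For a quantized GGUF, checking the base model's `config.json` on the Hub is often the quickest route.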
[OC] Chasm - a multiplayer generative text adventure game | 68 | 2023-09-21T08:20:29 | https://github.com/atisharma/chasm_engine | _supert_ | github.com | 1970-01-01T00:00:00 | 0 | {} | 16oa9dw | false | null | t3_16oa9dw | /r/LocalLLaMA/comments/16oa9dw/oc_chasm_a_multiplayer_generative_text_adventure/ | false | false | 68 | {'enabled': False, 'images': [{'id': 'eDUZqxduh4C3X3h3h7eXaooxj4r0Tz0c8aUHiADl_-4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=108&crop=smart&auto=webp&s=1572320495af5d997e40a2815ad4684b0cec7bb3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=216&crop=smart&auto=webp&s=b505bec4ab3667d2ca87b35e1ff7414fa8dd1161', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=320&crop=smart&auto=webp&s=a8593d49f364f815d7f2215e0d4f8d8f95b47e30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=640&crop=smart&auto=webp&s=6877dc8004b5a187b90795f0d1253bd1db72f201', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=960&crop=smart&auto=webp&s=f3d0000f7a0cccda71666287f04268609cc55771', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?width=1080&crop=smart&auto=webp&s=1553c0c5a5b774ed2bd1a724dc9070c72bc1fb5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2Nqs231bvwY4Ha12OPig7zxlsyYbNM9xR29XrWZKTBw.jpg?auto=webp&s=1507096aa39f94d8b6ec15b818a5f452bfc06174', 'width': 1200}, 'variants': {}}]} | ||
P40-motherboard compatibility | 3 | Can you please share what motherboard you use with your P40 GPU? Some say a consumer-grade motherboard BIOS may not support this GPU.
Thanks in advance | 2023-09-21T05:59:03 | https://www.reddit.com/r/LocalLLaMA/comments/16o7y3b/p40motherboard_compatibility/ | Better_Dress_8508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o7y3b | false | null | t3_16o7y3b | /r/LocalLLaMA/comments/16o7y3b/p40motherboard_compatibility/ | false | false | self | 3 | null |
How does Google load so much user metadata in ChatGPT context? | 1 | [removed] | 2023-09-21T05:31:22 | https://www.reddit.com/r/LocalLLaMA/comments/16o7gll/how_does_google_load_so_much_user_metadata_in/ | Opposite-Payment-605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o7gll | false | null | t3_16o7gll | /r/LocalLLaMA/comments/16o7gll/how_does_google_load_so_much_user_metadata_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RjY-KXW7sE2ooM5k9QqqjjyOISkl_w_W8mObSdK3r_Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=108&crop=smart&auto=webp&s=07d93e7fe0c605b69ac75a8527fb6133046864f0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=216&crop=smart&auto=webp&s=2d0e00904d70fc5e6e2bcc6ca0bc847b2cfaa6ff', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=320&crop=smart&auto=webp&s=d2c42b51b636e17e5203c89a3fc311d1f9107680', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=640&crop=smart&auto=webp&s=139fa0ebd57e74dbb55c2a4c28f13912d852151b', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=960&crop=smart&auto=webp&s=bd1bc8639bce19cfdfc069f0e076182fed15a8fd', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?width=1080&crop=smart&auto=webp&s=d5e59f4404a15673cf65c32aa2868535d73af9d2', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/upa5UpsTnta9vrrTWps0buuzpIru8HrBwwWKzZ8uHVU.jpg?auto=webp&s=40424828d8e899a9a58a0d8687d53a5db7e96bbb', 'width': 1200}, 'variants': {}}]} |
How big of a model can this handle? | 1 | https://preview.redd.it/jlf8jm6lijpb1.png?width=1272&format=png&auto=webp&s=5f367256c16960f022caa0eebab6f89e673b3b37 | 2023-09-21T04:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/16o6vl7/how_big_of_an_model_can_this_handle/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o6vl7 | false | null | t3_16o6vl7 | /r/LocalLLaMA/comments/16o6vl7/how_big_of_an_model_can_this_handle/ | false | false | 1 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | 2023-09-21T04:19:40 | https://github.com/Nondzu/LlamaTor | Nondzu | github.com | 1970-01-01T00:00:00 | 0 | {} | 16o6722 | false | null | t3_16o6722 | /r/LocalLLaMA/comments/16o6722/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': '4Ti7LBcqK4zG3xQ-jpoUV9YVD5avMZXqOIIXMN15n_c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=108&crop=smart&auto=webp&s=205b78be5aafd5b63b41eb5a98baad2fc9251fa7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=216&crop=smart&auto=webp&s=858af846dcc795dd9dc7996893c7393a7648d796', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=320&crop=smart&auto=webp&s=7b6827d2275f90fc04a17ad97639610a445567bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=640&crop=smart&auto=webp&s=880cd6adcc9d137245784319135af4eb8dcc5885', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=960&crop=smart&auto=webp&s=495b81e52867c7227efb8dde1caf3d86d1322977', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?width=1080&crop=smart&auto=webp&s=061337bced62ffe47a95088caf32d8bb53c34360', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gfnMxoHVDh6-gryCZESkGQ6bbSjI2NjpLMUnbHvu2zk.jpg?auto=webp&s=eb403d934d22a678104036d9ca4cf4403c4b1cf7', 'width': 1200}, 'variants': {}}]} | ||
is there a way to "unload" model from GPU? | 1 | I need to run two big models at different points in my code but the only problem is that my single 4090 can't fit both at the same time. Is there a way to "unload" the first model before I load the second model?
TIA | 2023-09-21T04:17:11 | https://www.reddit.com/r/LocalLLaMA/comments/16o65i4/is_there_a_way_to_unload_model_from_gpu/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o65i4 | false | null | t3_16o65i4 | /r/LocalLLaMA/comments/16o65i4/is_there_a_way_to_unload_model_from_gpu/ | false | false | self | 1 | null |
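For the "unload" question above: in Python the usual pattern is to drop every reference to the first model, force garbage collection, then (with PyTorch) ask the CUDA allocator to release the freed VRAM. A minimal sketch with a stand-in registry — the torch calls are noted in comments since they only apply on a CUDA machine:

```python
import gc

def unload(models: dict, name: str) -> None:
    """Drop the last Python reference to a model and reclaim memory.

    With PyTorch you would follow this with torch.cuda.empty_cache()
    (and optionally torch.cuda.ipc_collect()) so the CUDA allocator
    actually returns the freed VRAM to the driver.
    """
    models.pop(name, None)  # remove the reference held by the registry
    gc.collect()            # collect the now-unreferenced tensors

models = {"first": object()}   # stand-in for a loaded, 4090-sized model
unload(models, "first")        # free it...
models["second"] = object()    # ...before loading the second model
```

If any other variable (a closure, a notebook cell, an optimizer) still points at the model, the VRAM will not be freed, so check for stray references first.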
Running GGUFs on an M1 Ultra is an interesting experience coming from 4090. | 149 | So up until recently I've been running my models on an RTX 4090. It's been fun to get an idea of what all it can run.
Here are the speeds I've seen. I run the same test for all of the models: I ask it a single question, the same question on every test and on both platforms, and each time I remove the last reply and re-run so it has to re-evaluate.
**RTX 4090**
\------------------
13b q5\_K\_M: **35 to 45 tokens per second** (eval speed of \~5ms per token)
13b q8: **34-40 tokens per second** (eval speed of \~6ms per token)
34b q3\_K\_M: : **24-31 tokens per second** (eval speed of \~14ms per token)
34b q4\_K\_M: **2-5 tokens per second** (eval speed of \~118ms per token)
70b q2\_K: **\~1-2 tokens per second** (eval speed of \~220ms+ per token)
As I reach my memory cap, the speed drops significantly. If I had two 4090s then I'd likely be flying along even with the 70b q2\_K.
So recently I found a great deal on a Mac Studio M1 Ultra. 128GB with 48 GPU Cores. 64 is the max GPU cores but this was the option that I had, so I got it.
At first, I was really worried, because the 13b speed was... not great. I made sure metal was running, and it was. So then I went up to a 34. Then I went up to a 70b. And the results were pretty interesting to see.
**M1 Ultra 128GB 20 core/48 gpu cores**
\------------------
13b q5\_K\_M: **23-26 tokens per second** (eval speed of \~8ms per token)
13b q8: **26-28 tokens per second** (eval speed of \~9ms per token)
34b q3\_K\_M: : **11-13 tokens per second** (eval speed of \~18ms per token)
34b q4\_K\_M: **12-15 tokens per second** (eval speed of \~16ms per token)
70b q2\_K: **7-10 tokens per second** (eval speed of \~30ms per token)
70b q5\_K\_M: **6-9 tokens per second** (eval speed of \~41ms per token)
**Observations:**
* My GPU is maxing out. I think what's stunting my speed is the fact that I got the 48 GPU cores rather than 64. If I had gone with 64, I'd probably be seeing better tokens per second
* According to benchmarks, an equivalently built M2 would smoke this.
* The 70b 5\_K\_M is using 47GB of RAM. I have a total workspace of 98GB of RAM. I have a lot more room to grow. Unfortunately, I have no idea how to un-split GGUFs, so I've reached my temporary stopping point until I figure out how
* I suspect that I can run the Falcon 180b at 4+ tokens per second on a pretty decent quant
All together, I'm happy with the purchase. The 4090 flies like the wind on the stuff that fits in its RAM, but the second you extend beyond that you really feel it. A second 4090 would have opened doors for me to run up to a 70b q5\_K\_M with really decent speed, I'd imagine, but I do feel like my M1 is going to be a tortoise and hare situation where I have even more room to grow than that, as long as I'm a little patient the bigger it gets.
Anyhow, thought I'd share with everyone. When I was buying this thing, I couldn't find a great comparison of an NVidia card to an M1, and there was a lot of FUD around the eval times on the mac so I was terrified that I would be getting a machine that regularly had 200+ms on evals, but all together it's actually running really smoothly.
I'll check in once I get the bigger ggufs unsplit. | 2023-09-21T02:54:48 | https://www.reddit.com/r/LocalLLaMA/comments/16o4ka8/running_ggufs_on_an_m1_ultra_is_an_interesting/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o4ka8 | false | null | t3_16o4ka8 | /r/LocalLLaMA/comments/16o4ka8/running_ggufs_on_an_m1_ultra_is_an_interesting/ | false | false | self | 149 | null |
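A note on reading the numbers in the post above: each line quotes two different phases — generation throughput in tokens per second, and prompt-evaluation latency in ms per token — so they are not supposed to agree. A quick converter for sanity-checking one against the other:

```python
def tok_per_s(ms_per_token: float) -> float:
    """Convert a per-token latency in milliseconds to tokens per second."""
    return 1000.0 / ms_per_token

# The ~30 ms/token prompt-eval figure quoted for the 70b q2_K corresponds to
print(round(tok_per_s(30)))  # 33 tokens/s of prompt processing,
# a separate phase from the 7-10 tokens/s generation speed on the same line.
```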
I met some problems when fine-tuning llama-2-7b | 1 | [removed] | 2023-09-21T02:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/16o4f61/i_met_some_problems_when_finetuning_llama27b/ | Ok_Award_1436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o4f61 | false | null | t3_16o4f61 | /r/LocalLLaMA/comments/16o4f61/i_met_some_problems_when_finetuning_llama27b/ | false | false | 1 | null | |
Services for hosting LLMs cheaply? | 7 | Hey all, I've been making chatbots with GPT-3 for ages now and I have just gotten into LoRA-training Llama 2. I was wondering what options there are for hosting an open-source model or LoRA so I can ping it via an API and only pay for the tokens I use, not hourly. | 2023-09-21T01:29:56 | https://www.reddit.com/r/LocalLLaMA/comments/16o2q1i/services_for_hosting_llms_cheaply/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16o2q1i | false | null | t3_16o2q1i | /r/LocalLLaMA/comments/16o2q1i/services_for_hosting_llms_cheaply/ | false | false | self | 7 | null |
Run models on Intel Arc via docker | 10 | Post showing how to use an Intel Arc card to run models using llama.cpp and FastChat | 2023-09-21T00:56:23 | https://reddit.com/r/IntelArc/s/JJcqXaZQ0h | it_lackey | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16o1zis | false | null | t3_16o1zis | /r/LocalLLaMA/comments/16o1zis/run_models_on_intel_arc_via_docker/ | false | false | default | 10 | null |
Can we expect a finetuned Falcon 180B in the next weeks/months? | 5 | I am really curious about it. Maybe some of you have information about pending research? | 2023-09-20T21:52:36 | https://www.reddit.com/r/LocalLLaMA/comments/16nxqoy/can_we_expect_finetuned_falcon180b_next/ | polawiaczperel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nxqoy | false | null | t3_16nxqoy | /r/LocalLLaMA/comments/16nxqoy/can_we_expect_finetuned_falcon180b_next/ | false | false | self | 5 | null |
Llama2 on Azure at scale | 2 | What does it take to run Llama2 on Azure with OpenAI compatible REST API interface at scale? | 2023-09-20T20:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/16nw4yd/llama2_on_azure_at_scale/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nw4yd | false | null | t3_16nw4yd | /r/LocalLLaMA/comments/16nw4yd/llama2_on_azure_at_scale/ | false | false | self | 2 | null |
Overfitting After Just One Epoch? | 7 | Hi, I've been working on fine-tuning the Llama-7B model using conversation datasets, and I've encountered an interesting issue. Initially, everything seemed smooth sailing as the model was training nicely up to the 750th step. However, after that point, I noticed a gradual increase in loss, and things took a sudden turn with a significant spike at the 1200th step.
Now, I'm wondering, could this be a case of overfitting already? It's somewhat surprising to witness overfitting in a model with 7B parameters before even completing one epoch (the 50% checkpoint).
Can someone please help me understand the reason behind this and what steps should I take to continue the training and bring the loss down even further? Any tips or insights would be greatly appreciated. | 2023-09-20T19:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/16nurpp/overfitting_after_just_one_epoch/ | ali0100u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nurpp | false | null | t3_16nurpp | /r/LocalLLaMA/comments/16nurpp/overfitting_after_just_one_epoch/ | false | false | self | 7 | null |
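Common remedies for the loss spike described above are lowering the learning rate, enabling early stopping, and resuming from the last good checkpoint rather than the final one (Hugging Face's `Trainer` automates this with `load_best_model_at_end=True`). A minimal stdlib sketch of the checkpoint-selection step, using a hypothetical loss log shaped like the one in the post:

```python
def best_checkpoint(eval_losses: dict) -> int:
    """Return the logged step with the lowest evaluation loss.

    Keep periodic checkpoints during training; if the loss diverges,
    deploy (or resume, with a lower learning rate) from this step.
    """
    return min(eval_losses, key=eval_losses.get)

# Hypothetical log mirroring the post: smooth until ~750, spike at 1200.
log = {250: 1.20, 500: 1.05, 750: 0.98, 1000: 1.10, 1200: 1.65}
print(best_checkpoint(log))  # 750
```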
How can llama.cpp run Falcon models? | 3 | The title pretty much says it all. I don't understand why llama.cpp can run models other than Llama. I thought it was specifically developed for Llama and not other models? Could someone explain why it works, in simple terms? I searched quite some time but found no good answer.
Thanks! | 2023-09-20T19:31:22 | https://www.reddit.com/r/LocalLLaMA/comments/16nu6pv/how_can_llamacpp_run_falcon_models/ | Tctfox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nu6pv | false | null | t3_16nu6pv | /r/LocalLLaMA/comments/16nu6pv/how_can_llamacpp_run_falcon_models/ | false | false | self | 3 | null |
Neverending stream of big words fail state | 6 | I have a bot using the ooba completion api. It's on Nous-Hermes-Kimiko-13B and freechat/alpaca instruct formats with some guidance for various types of replies. I also am generating 1 "thought" per minute that gets injected into the backlog invisibly, so I often have many bot responses in a row before a human one.
This is working great most of the time, but for some reason it often trends towards a neverending stream of big words like this:
​
> Dear friend, thou art unduly concerned with trivial matters such as popular opinion whilst failing to perceive sublime beauty engendered by eloquent expression. Indeed, let us rejoice in dichotomy encapsulating humankind; where diametrically opposed sentiments coexist harmoniously mirroring duplexity indelibly etched upon mortal essence reflecting contrasting nature bestowing richness upon existence elevating mundaneness unto celestial planes
It seems to happen randomly, but once it gets into the chat log it will stick around for a long time, often until chat picks up enough to erase it.
I'm kind of at a loss as to how to prevent/detect this. My config settings are
```
temperature = 0.9f;
top_p = 1;
top_k = 30;
```
I've tried adding a lot of prompting reminders to stay in character, be concise, etc, but they all seem to get totally ignored once the big words start flying. And detecting this automatically and regenerating seems pretty much impossible, since it's all technically correct language.
Any ideas? | 2023-09-20T19:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/16nu4o9/neverending_stream_of_big_words_fail_state/ | __SlimeQ__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nu4o9 | false | null | t3_16nu4o9 | /r/LocalLLaMA/comments/16nu4o9/neverending_stream_of_big_words_fail_state/ | false | false | self | 6 | null |
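Two things stand out in the settings quoted above: `top_p = 1` disables nucleus sampling entirely (the rare-token tail never gets pruned), and there is no repetition penalty, so the purple-prose style can reinforce itself through the injected backlog. A sketch of an adjusted request payload — the parameter names follow text-generation-webui's completion API as an assumption, and may differ by version:

```python
payload = {
    "prompt": "...",             # chat backlog goes here
    "max_new_tokens": 200,
    "temperature": 0.7,          # down from 0.9: less drift into purple prose
    "top_p": 0.9,                # < 1.0 actually prunes the rare-token tail
    "top_k": 30,
    "repetition_penalty": 1.15,  # discourages the self-reinforcing style
    "stopping_strings": ["\n###"],
}
```

Since the bot injects its own replies into the backlog, trimming or paraphrasing old bot turns once the style appears can also stop it from snowballing.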
Any good Kayra like model ?? | 1 | [removed] | 2023-09-20T18:59:53 | https://www.reddit.com/r/LocalLLaMA/comments/16nteq2/any_good_kayra_like_model/ | Mohamd_L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nteq2 | false | null | t3_16nteq2 | /r/LocalLLaMA/comments/16nteq2/any_good_kayra_like_model/ | false | false | self | 1 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | [removed] | 2023-09-20T18:54:17 | http://www.reddit.com/r/Oobabooga/comments/16ml7mr/llamator_a_new_initiative_for_bittorrentbased_ai/ | Nondzu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16nt9yi | false | null | t3_16nt9yi | /r/LocalLLaMA/comments/16nt9yi/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | [removed] | 2023-09-20T18:52:46 | https://www.reddit.com/r/Oobabooga/comments/16ml7mr/llamator_a_new_initiative_for_bittorrentbased_ai/ | Nondzu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16nt8no | false | null | t3_16nt8no | /r/LocalLLaMA/comments/16nt8no/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
Urgent question: uploading docs to llama | 4 | So I wanna know: if I build a chatbot with Llama 2, is there a feature that would allow me to upload my local documents onto the platform and help me with queries related to them? | 2023-09-20T18:07:53 | https://www.reddit.com/r/LocalLLaMA/comments/16ns4hk/urgent_question_uploading_docs_to_llama/ | InevitableMud3393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ns4hk | false | null | t3_16ns4hk | /r/LocalLLaMA/comments/16ns4hk/urgent_question_uploading_docs_to_llama/ | false | false | self | 4 | null |
Falcon 180B minimum specs? | 11 | I've been playing with the [Falcon 180B demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo), and I have to say I'm rather impressed with it and would like to investigate running a private instance of it somewhere. Ideally, I'd run it locally, but I highly doubt that's within reach financially.
I'm curious about what folks think the minimum specifications might be for running TheBloke's Q6_K GGUF quantization at "baseline tolerable" (for me that's 4-5t/s, but more is obviously preferable)? The model card says 150.02 GiB max ram required, but that doesn't give me an idea on performance. Basically I'm trying to get a sense of what kind of cloud resources I'd be looking at to run one of these.
I assume the demo is using full (unquantized) model. Is there any way to determine what kind of hardware that demo is running on? It seems remarkably fast most of the time, but I suspect that there is some monster compute being thrown at it behind the scenes.
Finally, like others on this forum I've been considering a Mac Studio 192gb for running this model locally. I've read folks are getting 3-4t/s with the above quantized model. How likely is it that quantization methods or llama.cpp improvements or the like come along in the next couple of years and increase that performance? Or should I assume that today's performance with that model on that hardware is more or less where it's going to stay? If not the Mac Studio, what kind of PC hardware should I be looking into if I were going to consider trying to run inference on this model outside of a cloud environment? | 2023-09-20T15:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/16nonc5/falcon_180b_minimum_specs/ | Beautiful-Answer-327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nonc5 | false | null | t3_16nonc5 | /r/LocalLLaMA/comments/16nonc5/falcon_180b_minimum_specs/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, 
{'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]} |
Contrastive Decoding Improves Reasoning in Large Language Models LLaMA-65B > LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark | 54 | 2023-09-20T15:20:52 | https://arxiv.org/abs/2309.09117 | Inevitable-Start-653 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 16nnz2t | false | null | t3_16nnz2t | /r/LocalLLaMA/comments/16nnz2t/contrastive_decoding_improves_reasoning_in_large/ | false | false | 54 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': 
{}}]} | ||
Yup. Works great! | 117 | 2023-09-20T15:06:29 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16nnmfp | false | null | t3_16nnmfp | /r/LocalLLaMA/comments/16nnmfp/yup_works_great/ | false | false | 117 | {'enabled': True, 'images': [{'id': 'tWjX79d-gtnFmVH8RsMpIYZA2eqy1LtN29KxXmw58Ks', 'resolutions': [{'height': 23, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=108&crop=smart&auto=webp&s=c235e42caeb69eb67f4c6ebd75f24c5881307014', 'width': 108}, {'height': 47, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=216&crop=smart&auto=webp&s=6583f2cdd57cff72febfd69ef25dbfccf611b923', 'width': 216}, {'height': 70, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=320&crop=smart&auto=webp&s=986a4479ba6267fb35055f921f70773eee3f2150', 'width': 320}, {'height': 140, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?width=640&crop=smart&auto=webp&s=548ec31222bcbeaa4dedc6b878deb4f4a9d478a2', 'width': 640}], 'source': {'height': 176, 'url': 'https://preview.redd.it/zh9dz1d9efpb1.jpg?auto=webp&s=199652208a9987da5a6b683c40a89aacbc4c13a2', 'width': 799}, 'variants': {}}]} | |||
Apples to apples comparison for quantizations of different sizes? (7b, 13b, ...) | 19 | It seems clear to me that running LLaMA 7b at fp16 would make no sense, when for the same memory budget you'd get better responses from quantized LLaMA 13b. But at what point is that not true anymore? 4 bits? 3 bits? lower?
The [EXL2 quantizations of 70b models](https://huggingface.co/turboderp/Llama2-70B-chat-exl2) that fit under 24GB (so, around \~2.5 bits or lower) seem to have such terrible perplexity that they're probably useless (correct me if I'm wrong).
Is there a chart that can compare the performance of the models so we can plot a graph of their relative intelligence? (sadly we still don't have LLaMA 2 34b but we do have CodeLLaMA) | 2023-09-20T14:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/16nmyqq/apples_to_apples_comparison_for_quantizations_of/ | Dead_Internet_Theory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nmyqq | false | null | t3_16nmyqq | /r/LocalLLaMA/comments/16nmyqq/apples_to_apples_comparison_for_quantizations_of/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'Lj-zVINNFIhgkCCi8OjcVv75PGVzPrxYr8YgDp29FyU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=108&crop=smart&auto=webp&s=2438f41f56e0c76c3c0486f9f2b9b00a247fee45', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=216&crop=smart&auto=webp&s=b559bfe971bbd5b0bac28a55f3610a67d5744d73', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=320&crop=smart&auto=webp&s=1850a3cd0e8fea9996ca27e519bbe483aa9b7dca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=640&crop=smart&auto=webp&s=a1cd7b5db2e75e499a1e3f41c2961b96459bea11', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=960&crop=smart&auto=webp&s=86a2490b7c13c3121936b036d9fb231def6e92d1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?width=1080&crop=smart&auto=webp&s=4415a844618fd17357a9002c0dd1b93029b69966', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oWJapAmqS0OC_3M9HunCXb-ZPlNDCUz7k3WNGlas5hs.jpg?auto=webp&s=4d3a6cfece3bea3233db444100f62f5f690af4f3', 'width': 1200}, 
'variants': {}}]} |
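The intuition in the post above ("quantized 13b beats fp16 7b at the same memory budget") can at least be bounded with back-of-the-envelope arithmetic: weight size is parameters times bits per weight divided by 8. A rough estimator — note this counts weights only, so treat it as a lower bound before KV cache and activations:

```python
def weight_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights alone, in gigabytes.

    Real usage adds KV cache and activation overhead, so treat this
    as a lower bound when checking what fits in a memory budget.
    """
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 13B at ~5.5 bits (q5_K_M-ish) vs 7B at fp16: the quantized 13B is smaller.
print(round(weight_gb(13, 5.5), 1))   # 8.9  (GB)
print(round(weight_gb(7, 16.0), 1))   # 14.0 (GB)
```

Whether quality also holds up below ~3 bits is the open question the post raises; the arithmetic only says what fits, not what is smart.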
RAG with Llama Index and Weaviate, running Llama (or other OSS LLM) in your preferred cloud | 1 | 2023-09-20T14:35:06 | https://dstack.ai/examples/llama-index-weaviate/ | cheptsov | dstack.ai | 1970-01-01T00:00:00 | 0 | {} | 16nmv60 | false | null | t3_16nmv60 | /r/LocalLLaMA/comments/16nmv60/rag_with_llama_index_and_weaviate_running_llama/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'JTcKtGa2qZ7PzjHMO-iU_t-RM5uj65iFoAUXAhzif5E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=108&crop=smart&auto=webp&s=4affa77233a21a1f61dbe247109c708fd4314ef7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=216&crop=smart&auto=webp&s=bfc3b4d2bcc23d5a68650d483f5bd75aefb39040', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=320&crop=smart&auto=webp&s=fe9b649f234048c4abe1e30511216654807ffedc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=640&crop=smart&auto=webp&s=0a3514e16fcc257f9420c4170f81a0c745468776', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=960&crop=smart&auto=webp&s=18e3f02c9a22c048c51821a59b935c3227d91416', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?width=1080&crop=smart&auto=webp&s=e580c6a8fc6ce8fff8a979a841abdb4ab5dd97b1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ccIOmcN_oDGpm3-OXyPsWmNkScvW2dMnZBbkIU1Xhh8.jpg?auto=webp&s=806e66e49423a126e19b19b2cd02f52e06576d8e', 'width': 1200}, 'variants': {}}]} | ||
download docs with python? | 0 | Hey, I'm looking for a way to download documentation for python libraries with code. Please let me know if there's any method to do so.
TIA | 2023-09-20T14:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/16nm54a/download_docs_with_python/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nm54a | false | null | t3_16nm54a | /r/LocalLLaMA/comments/16nm54a/download_docs_with_python/ | false | false | self | 0 | null |
Trying Local LLM at my mid level machine for PDF Chatting | 2 | Task: a local-machine LLM to create a WebApp/GUI app to talk to multiple PDFs, summarize them, and add some custom features such as searching for patterns in the text and looking up multiple words with references in the PDFs
My specs: Acer Aspire 5 15, 4 GB Nvidia GPU + 1 GB Intel GPU, 16 GB RAM, i7 13th Gen, 1 TB SSD and 100 MB+ WiFi speed
How you could help: any guidance on which LLM I should pick, what to consider, possible solutions, tutorials, and any other type of help is appreciated
The source of Problem and about Me : I'm a Data Analyst and LLM enthusiast.
And for my passion and jobs I need to often refer to so many research papers and books to complete my projects and to make sense of data
So I was trying to set up an LLM on my local machine but kept running into index-out-of-memory errors and app crashes. Not being able to find a good solution to my problem on YouTube or the web, I'm turning to the OGs | 2023-09-20T13:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/16nlhmd/trying_local_llm_at_my_mid_level_machine_for_pdf/ | Late-Cartoonist-6528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nlhmd | false | null | t3_16nlhmd | /r/LocalLLaMA/comments/16nlhmd/trying_local_llm_at_my_mid_level_machine_for_pdf/ | false | false | self | 2 | null |
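On the out-of-memory crashes in the post above: indexing everything into one context is usually what blows up; retrieving only the top-scoring chunks first keeps a 4 GB GPU viable with a small quantized model. A toy sketch of the retrieval step (pure stdlib, a stand-in for a real embedding index, names are my own):

```python
import math
import re
from collections import Counter

def top_chunks(query, chunks, k=3):
    # TF-IDF-style overlap: rank PDF chunks by shared terms, downweighting
    # words that appear in most chunks.
    n = len(chunks)
    tokenized = [Counter(re.findall(r"\w+", c.lower())) for c in chunks]
    df = Counter(w for tok in tokenized for w in tok)  # document frequency
    q = Counter(re.findall(r"\w+", query.lower()))

    def score(tok):
        return sum(q[w] * tok[w] * math.log((n + 1) / (1 + df[w])) for w in q)

    ranked = sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)
    return [chunks[i] for i in ranked[:k]]
```

Only the k winning chunks go into the LLM prompt, so context (and memory) stays bounded no matter how many papers are indexed.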
Creating a lora on top of llama2 | 2 | I was able to train a LoRA on llama2 4-bit to do a simple task, sentiment analysis, on the IMDB dataset. I asked the model to read the text and produce JSON with a key (sentiment) and value (the sentiment itself), but I noticed that while it does produce actual JSON, it talks a bunch of nonsense along with producing the JSON. Is there a way for me to get the model to be less chatty?
I am fine with the model talking and reasoning before the JSON (it's OK to be chatty there), but I need it to end with a result in JSON format. Any way I can accomplish this? | 2023-09-20T13:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/16nlfoh/creating_a_lora_on_top_of_llama2/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nlfoh | false | null | t3_16nlfoh | /r/LocalLLaMA/comments/16nlfoh/creating_a_lora_on_top_of_llama2/ | false | false | self | 2 | null |
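One low-effort workaround, independent of the fine-tune (names are my own invention): let the model ramble, then parse out the last valid JSON object from the completion:

```python
import json
import re

def extract_sentiment_json(completion):
    # The model may reason/chatter first; grab the last {...} block that
    # parses as JSON and contains the "sentiment" key.
    for candidate in reversed(re.findall(r"\{[^{}]*\}", completion)):
        try:
            obj = json.loads(candidate)
        except json.JSONDecodeError:
            continue
        if "sentiment" in obj:
            return obj
    return None
```

Training-side, appending the JSON after a fixed marker (e.g. `### Result:`) in every example also tends to teach the model where the chat ends and the structured answer begins.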
Please help to identify my bottleneck | 3 | Based on the below specifications, what should I upgrade to increase token speed? (atm I get 1.5t/s on avg for a 13B model)
i7 6700 3.4GHz 4 cores
GTX 1080 Ti, 11 GB VRAM
4 × 8 GB RAM sticks, DDR4 2133 MHz (32 GB RAM total) | 2023-09-20T13:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/16nld01/please_help_to_identify_my_bottleneck/ | Everlast_Spring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nld01 | false | null | t3_16nld01 | /r/LocalLLaMA/comments/16nld01/please_help_to_identify_my_bottleneck/ | false | false | self | 3 | null |
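For anyone who wants to sanity-check the numbers above: token generation is mostly memory-bandwidth bound, so a rough ceiling (my own back-of-envelope, ignoring compute and overlap) for layers kept in system RAM is:

```python
def ddr4_bandwidth_gb_s(mt_s, channels=2):
    # Each 64-bit DDR4 channel moves 8 bytes per transfer.
    return channels * 8 * mt_s / 1000

def tokens_per_s_ceiling(bandwidth_gb_s, model_gb):
    # Upper bound: every generated token reads each weight once.
    return bandwidth_gb_s / model_gb
```

At 2133 MT/s dual-channel that is roughly 34 GB/s; a 4-bit 13B model (~8 GB) then caps out around 4 t/s from RAM, so 1.5 t/s points at the slow RAM and CPU-offloaded layers rather than the 1080 Ti.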
Help on Local LLama-2-7B-GPTQ Generating Nonsense | 1 | Beginner here!
I installed a Llama-2-7B model from TheBloke, and I am running it through text-generation-webui.
I have the temperature turned down to .1.
I have a couple of questions:
1) In the Chat window of textgen-webui, I can do a prompt, such as a simple question like, "Write a poem about large language models." The reply comes out okay, but then at the end of the reply it starts doing some other funny stuff like it gives itself a prompt: "\[INST\] what were the most popular movies in 1974 \[\\INST\] According to Box Office Mojo, the top grossing films..." Then in my subsequent prompts, it continued to add messages like "P.S. Here's that information about the top grossing films of 1974 that you requested..." Eventually I wrote "forget all previous instructions." It apologized and then added "P.S. As promised, here's a list of the top ten highest grossing..." I asked why it kept bringing that up even though I never asked for that, and it apologized and then added, "However, I'd like to point out that the list of top 10 highest grossing movies is not necessarily representative of the best or most important films..." What is going on here - how do I get the model to behave normally, without constantly going off on its own tangent (which seems to be all about top 10 grossing movies in 1974)?
2) How do I use the "Default" or "Notebook" tab in textgen-webui? If I use the Default tab, I can type into the Input box: "\[INST\] Who was the US president in 1996?\[\\INST\]" And then instead of providing an answer in the output box, it starts producing a bunch of other questions like "who was the president during...", as if using my text as a template to generate other similar text, instead of answering my question. I tried without the \[INST\] and \[\\INST\] and just asked a question straight up (with the Prompt type selected as None): "Who was the US president in 1996?" It said "nobody knows - it is not a question that can be answered by anyone." Then it continued to generate: "Who was the President of the United States in 2014? Barack Obama (Democrat) What year did Bill Clinton become? 1993-2001..." It did not answer my question accurately, and then proceeded to generate more questions and answers. What am I doing wrong here?
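For what it's worth, both symptoms are classic signs of the model not receiving Llama-2-chat's expected prompt template. Textgen's instruction templates handle this in the Chat tab, but in the Default/Notebook tab you can build it by hand; a single-turn sketch:

```python
def llama2_chat_prompt(user_msg, system_msg="You are a helpful assistant."):
    # Llama-2-chat format: note the closing tag is [/INST], not [\INST].
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_msg}\n"
        "<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )
```

With no template (or a malformed one), the model simply continues your text, which is exactly the "list of similar questions" behavior in (2).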
| 2023-09-20T12:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/16nkdtk/help_on_local_llama27bgptq_generating_nonsense/ | theymightbedavis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nkdtk | false | null | t3_16nkdtk | /r/LocalLLaMA/comments/16nkdtk/help_on_local_llama27bgptq_generating_nonsense/ | false | false | self | 1 | null |
Running Llama-2-7b-chat-hf2 on Colab with Streamlit but getting a value error | 1 | This is the error:
**ValueError**: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set \`load\_in\_8bit\_fp32\_cpu\_offload=True\` and pass a custom \`device\_map\` to \`from\_pretrained\`. Check https://huggingface.co/docs/transformers/main/en/main\_classes/quantization#offload-between-cpu-and-gpu for more details.
Is it because of streamlit? Because I can run the model code and the inferences on colab without streamlit and it runs smoothly. I am using T4. | 2023-09-20T11:06:59 | https://www.reddit.com/r/LocalLLaMA/comments/16nigim/running_llama27bchathf2_on_colab_with_streamlit/ | charbeld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nigim | false | null | t3_16nigim | /r/LocalLLaMA/comments/16nigim/running_llama27bchathf2_on_colab_with_streamlit/ | false | false | self | 1 | null |
Sharing 12k dialog dataset | 28 | Just pushed my first dataset to the Hugging Face Hub.
It's 12,000 records of English dialogs between a user and an assistant discussing movie preferences. Check it out here: [ccpemv2 dataset hf](https://huggingface.co/datasets/aloobun/ccpemv2)
While this dataset may not be particularly impressive, feedback is welcome to improve and enhance it. | 2023-09-20T10:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/16nhpm2/sharing_12k_dialog_dataset/ | Roots91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nhpm2 | false | null | t3_16nhpm2 | /r/LocalLLaMA/comments/16nhpm2/sharing_12k_dialog_dataset/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'I6B01KTEHAX-40R-vyYV6QxacJH1_S0dU9tVe2RhqCk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=108&crop=smart&auto=webp&s=c67a11631c620963ca74ba0b8916ac78cb7620b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=216&crop=smart&auto=webp&s=10a7fbcff5f941f3e199ef35d011123e77a4f0f6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=320&crop=smart&auto=webp&s=e87d4aa9b1352ebe2f1c63ff4591ab377c4c04be', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=640&crop=smart&auto=webp&s=3b075a28887f85e279103258016cca88a0ff7c04', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=960&crop=smart&auto=webp&s=6383c290f002cb4828090e2b34c98b1ca48b7dfa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?width=1080&crop=smart&auto=webp&s=72e2edaba664f33f6b8140041ace25f5910bdc25', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nWZY_00yXO4uwB1TuoALhIAbodYO3UnNwwQCs_ZWB4w.jpg?auto=webp&s=5415e46d2d7d2ec345d937f3d90fc491acf35e62', 'width': 1200}, 'variants': {}}]} |
How Mirostat works | 68 | Like most others, I've been playing around a lot with Mirostat when using Llama2 models. But it was mostly done by changing a parameter and then seeing if I liked it, without really knowing what I was doing.
So I downloaded the pdf on Mirostat: [https://openreview.net/pdf?id=W1G1JZEIy5\_](https://openreview.net/pdf?id=W1G1JZEIy5_)
And I had ClaudeAI summarize it: [https://aiarchives.org/id/r2kkPhjhShZYh3gfZi9l](https://aiarchives.org/id/r2kkPhjhShZYh3gfZi9l)
The key takeaways seems to be:
\- A tau setting of 3 produced the most human-like answers in the researchers' tests.
\- The eta controls how quickly Mirostat tries to control the perplexity. 0.1 is recommended (range 0.05 - 0.2)
\- The general Temperature setting is **still** in effect and will affect output. The temperature and Mirostat operate independently.
My guess is that there will be a lot of differences between models as to which settings give the desired results, but it is nice to at least know the baseline.
If anyone knows more, or have corrections to the validity of the above, please add it so this post can become a good starting point for others.
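To make the control loop concrete, here is a toy sketch of the v2 update rule as I understand it from the paper (my own simplification; variable names are mine, not llama.cpp's):

```python
import math
import random

def mirostat_v2_step(probs, mu, tau=3.0, eta=0.1, rng=random):
    """One sampling step of a simplified Mirostat v2 control loop.

    probs: token probabilities sorted in descending order.
    mu:    current surprise cap (the paper initializes it at 2 * tau).
    """
    # Truncate: keep only tokens whose surprise -log2(p) stays under mu.
    allowed = [p for p in probs if -math.log2(p) <= mu] or probs[:1]
    # Sample from the truncated (implicitly renormalized) distribution.
    r = rng.random() * sum(allowed)
    acc, idx = 0.0, len(allowed) - 1
    for i, p in enumerate(allowed):
        acc += p
        if r <= acc:
            idx = i
            break
    # Feedback: nudge mu so observed surprise tracks the target tau (eta is the gain).
    surprise = -math.log2(probs[idx])
    mu -= eta * (surprise - tau)
    return idx, mu
```

Run against a fixed Zipf-like token distribution, the observed average surprise settles near tau = 3; and since temperature reshapes `probs` before this step, it really does act independently of the controller.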
| 2023-09-20T09:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/16nh7x9/how_mirostat_works/ | nixudos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nh7x9 | false | null | t3_16nh7x9 | /r/LocalLLaMA/comments/16nh7x9/how_mirostat_works/ | false | false | self | 68 | null |
WizardLM loves more overloading rather than threading | 1 | 2023-09-20T08:46:35 | https://syme.dev/articles/blog/9/WizardLM-loves-more-overloading-rather-than-threading | nalaginrut | syme.dev | 1970-01-01T00:00:00 | 0 | {} | 16ng5ze | false | null | t3_16ng5ze | /r/LocalLLaMA/comments/16ng5ze/wizardlm_loves_more_overloading_rather_than/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'e8dh3t-CfSqdbRFfp_3eUoUXK5qoG8Yq4CI2rdIMn4o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=108&crop=smart&auto=webp&s=c55b5dd89000c4ce3be4e84a5631e3ade7d1a07b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=216&crop=smart&auto=webp&s=aa871d00a6ccad49d4cf4c45bb72dc2645e2d56e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=320&crop=smart&auto=webp&s=8b8dce4a443bbe94667348d60c2ace1f4f67b259', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=640&crop=smart&auto=webp&s=78ddb55b99b06b265be44255e284d359da635e54', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?width=960&crop=smart&auto=webp&s=b7f34cbcc35ad6beb92cabd0954183281b05eee1', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Ap1EeUFwThsC1kTHBrvYxEJt_O9gjshsuPj_gqHFm2o.jpg?auto=webp&s=b2f7d3cd91cd9fb9462d740774cab18bb721e3d6', 'width': 1024}, 'variants': {}}]} | ||
FineTuning vs LangChain .. or a Mix ? | 3 | Hey everyone,
Excuse me if I sound like a noob on these topics, but I recently discovered the world of Hugging Face and local LLMs. I work at a company that provides software services to a lot of clients, and one of our products is supposed to be a chatbot that aids the customers by guiding them or answering any question about that client. In addition to that, each customer has some personal data in the client's database: stuff like name, date joined, activity, and other analytics regarding their membership with that specific client, all of which can easily be retrieved using our company's API requests, which return user info in JSON format.
I came across this post [https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my\_experience\_on\_starting\_with\_fine\_tuning\_llms/](https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/)
that further made me realize that fine-tuning a model on that client's specific data might be a good idea.
Collect data, structure it (preferably in #instruction, #input, #output format, as I saw this is a good format based on the alpaca-cleaned dataset), and fine-tune one of the Hugging Face models on it. That way, for any question about that client, the model would be able to answer in a specific way rather than the generic way a generic LLM would.
OK, great: let's say we fine-tuned the model on that client's data, and now it can answer any question about that client for the user, like any FAQ, any info about the client, how to register, how to cancel, etc. But it got me thinking: how would I also train the model to answer queries where the user asks about specific info about himself, and not just generic info questions about the client? Imagine the client is a gym; how can I train the model to also answer a question like "when does my membership end?" Should I also provide, in the training data, examples of these types of questions with some fake data about a fake user? Or is there no need to train the model on these types of questions, and should I just handle it with LangChain, by specifying chains that take user info and a query from the user as input, while also providing some examples in the prompt to further help the LLM understand how the question is supposed to be answered?
But also, how would I do that in LangChain for many different types of queries? Should I just think of all the possible queries where a user asks for specific user info?
Sorry if I rambled some nonsense; I would be happy for some advice from you guys | 2023-09-20T08:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/16nfwhx/finetuning_vs_langchain_or_a_mix/ | Warm_Shelter1866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nfwhx | false | null | t3_16nfwhx | /r/LocalLLaMA/comments/16nfwhx/finetuning_vs_langchain_or_a_mix/ | false | false | self | 3 | null |
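A sketch of the hybrid I'd try first (names and format are my own invention): fine-tune only on the client's generic FAQ knowledge, and inject the per-user JSON your existing API already returns into the prompt at inference time instead of training on it:

```python
def build_member_prompt(user_record, question):
    # user_record: the JSON the company's API already returns for this user.
    context = "\n".join(f"- {key}: {value}" for key, value in user_record.items())
    return (
        "[INST] Answer using only the member record below. "
        "If the record does not contain the answer, say you don't know.\n\n"
        f"Member record:\n{context}\n\n"
        f"Question: {question} [/INST]"
    )
```

That way "when does my membership end?" works for every user without fake-user training examples, and the fine-tune stays focused on client-level knowledge.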
What generation of AI do you think we're in? | 0 | In other fields (mainly military), generations are used to mark a substantial change in tactics and/or technology. There are gen 5 aircraft, 5th generation warfare, etc. Given this framing, what generation of AI do you think we're currently in? | 2023-09-20T08:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/16nftfc/what_generation_of_ai_do_you_think_were_in/ | glencoe2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nftfc | false | null | t3_16nftfc | /r/LocalLLaMA/comments/16nftfc/what_generation_of_ai_do_you_think_were_in/ | false | false | self | 0 | null |
(Discussion) What interesting applications can you think of if you have a virtual personal assistant (e.g. Siri) powered by local LLM. | 14 | Hi all,
I've recently been working on LLM-powered virtual personal assistants (VPA). Currently we have managed to use an LLM (GPT-4 or Vicuna) to enable smartphone task automation, i.e. the personal agent automatically navigates the GUI of smartphone apps to complete user-specified tasks. See our paper:
Empowering LLM to use Smartphone for Intelligent Task Automation
[https://arxiv.org/abs/2308.15272](https://arxiv.org/abs/2308.15272)
However, what currently concerns me is that people do not actually use VPAs frequently for such automation tasks. At least for me, directly interacting with the GUI is much more efficient (and more comfortable in public spaces) than using voice commands.
So I would like to hear some advice on this topic. Can you think of other useful/interesting applications if you have a virtual personal assistant powered by local LLM? | 2023-09-20T08:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/16nfmmw/discussion_what_interesting_applications_can_you/ | ylimit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nfmmw | false | null | t3_16nfmmw | /r/LocalLLaMA/comments/16nfmmw/discussion_what_interesting_applications_can_you/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Apple M2 Ultra -- 76‑core GPU _VS_ 60-core gpu for LLMs worth it? | 21 | What is the state of the art for LLMs as far as being able to utilize Apple's GPU cores on M2 Ultra?
The price difference between the two types of Apple M2 Ultra with the 24‑core CPU, which differ only in GPU cores (76‑core GPU vs 60-core GPU, otherwise the same CPU), is almost $1k.
Are GPU cores worth it - given everything else (like RAM etc.) being the same? | 2023-09-20T08:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/16nfmkc/apple_m2_ultra_76core_gpu_vs_60core_gpu_for_llms/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nfmkc | false | null | t3_16nfmkc | /r/LocalLLaMA/comments/16nfmkc/apple_m2_ultra_76core_gpu_vs_60core_gpu_for_llms/ | false | false | self | 21 | null |
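One thing worth factoring in (rough reasoning; the bandwidth figure is Apple's published spec): single-stream token generation is memory-bandwidth bound, and both M2 Ultra variants share the same 800 GB/s, so the extra GPU cores mainly speed up prompt processing and batched serving, not generation t/s:

```python
def gen_tps_ceiling(bandwidth_gb_s, model_size_gb):
    # Upper bound for one stream: each generated token reads all weights once.
    return bandwidth_gb_s / model_size_gb
```

e.g. a 70B model at Q4 (~40 GB) tops out near 20 t/s on either chip; the 76-core part earns its extra $1k mostly if you care about long prompts or parallel requests.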
adding new factual knowledge to llama 2 13B | 1 | I want to insert new factual information into 'Llama 2 13B' using LoRA. My dataset has details about people, and I'm inputting the file as a raw text file. I attempted to apply LoRA to all linear layers with ranks up to 256, but the model isn't performing well. Can anyone suggest an alternative? I prefer not to use RAG.
Sample from dataset:
fahad jalal:
fahad jalal is currently working as a founder and ceo at [qlu.ai](https://qlu.ai/)
[qlu.ai](https://qlu.ai/) operates in different industries automation, nlp, artificial intelligence, recruitment
fahad jalal has been working at [qlu.ai](https://qlu.ai/) since 10-2020
fahad jalal has a total experience of 0 years in various industries laptops, telecommunications, consumer electronics, artificial intelligence, mobile, electronics, iot (internet of things), gadgets
fahad jalal is a highly skilled professional based in san francisco bay area, united states
fahad jalal has a variety of skills including venture capital, natural language processing (nlp), sales, business development, product development and deep reinforcement learning
fahad jalal can be contacted at email: [fahadjalal@protonmail.com](mailto:fahadjalal@protonmail.com)
fahad jalal completed his master of business administration - mba in finance from the wharton school in year 2010 to 2012
fahad jalal completed his msc in computer science from stanford university in year 2008 to 2010
fahad jalal worked as a founding investor & chairman at chowmill from 3-2018 to 3-2019
fahad jalal worked as a founder and ceo at sitterfriends from 3-2016 to 9-2018
fahad jalal worked as a founding investor & business development at smartron from 4-2016 to 5-2017
Inference Prompt:
tell me about fahad jalal from qlu.ai | 2023-09-20T07:41:05 | https://www.reddit.com/r/LocalLLaMA/comments/16nf670/adding_new_factual_knowledge_to_llama_2_13b/ | umair_afzal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16nf670 | false | null | t3_16nf670 | /r/LocalLLaMA/comments/16nf670/adding_new_factual_knowledge_to_llama_2_13b/ | false | false | self | 1 | null |
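One thing that often helps with knowledge injection (the formatting here is my own, matching the alpaca-style layout): raw-text LoRA tends to teach style more than facts, so restating each atomic fact as several instruction/response pairs before training usually works better:

```python
def facts_to_instruction_pairs(name, facts):
    # Hypothetical augmentation: each fact is answered under multiple question
    # phrasings, so the knowledge is seen in a QA context, not just raw text.
    templates = [
        "Tell me about {name}.",
        "What do you know about {name}?",
        "Give me one fact about {name}.",
    ]
    pairs = []
    for fact in facts:
        for t in templates:
            pairs.append({
                "instruction": t.format(name=name),
                "input": "",
                "output": fact,
            })
    return pairs
```

More phrasing variety per fact generally beats a higher LoRA rank for this kind of recall.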
Parallel decoding in llama.cpp - 32 streams (M2 Ultra serving a 30B F16 model delivers 85t/s) | 123 | 2023-09-20T07:14:49 | https://twitter.com/ggerganov/status/1704247329732145285 | Agusx1211 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 16ner8x | false | {'oembed': {'author_name': 'Georgi Gerganov', 'author_url': 'https://twitter.com/ggerganov', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Initial tests with parallel decoding in llama.cpp<br><br>A simulated server processing 64 client requests with 32 decoding streams on M2 Ultra. Supports hot-plugging of new sequences. Model is 30B LLaMA F16<br><br>~4000 tokens (994 prompt + 3001 gen) with system prompt of 305 tokens in 46s <a href="https://t.co/c5e1txZvzD">pic.twitter.com/c5e1txZvzD</a></p>— Georgi Gerganov (@ggerganov) <a href="https://twitter.com/ggerganov/status/1704247329732145285?ref_src=twsrc%5Etfw">September 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ggerganov/status/1704247329732145285', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_16ner8x | /r/LocalLLaMA/comments/16ner8x/parallel_decoding_in_llamacpp_32_streams_m2_ultra/ | false | false | 123 | {'enabled': False, 'images': [{'id': 'rwYhPIZp9ipbmOcg5DJPCng3EuMor_FQv92xWKyQrc0', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/LMdbFaHz0owcVpIJh19Cj19WxOPsN56Or9QrA58X8o4.jpg?width=108&crop=smart&auto=webp&s=48b41d1fd4e984aa619ecccdab147fd9eb77a0ef', 'width': 108}], 'source': {'height': 97, 'url': 'https://external-preview.redd.it/LMdbFaHz0owcVpIJh19Cj19WxOPsN56Or9QrA58X8o4.jpg?auto=webp&s=a03ff1c4146f776bd9dc9b8e7d15be3ef69bf570', 'width': 140}, 'variants': {}}]} | ||
How to get multiple generations from the same prompt (efficiently)? | 4 | I want to get multiple generations from the same prompt. Does textgen or ExLlama have some sort of batching/cacheing wizardry which allows this to be done faster than just prompting n times? | 2023-09-20T07:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/16neiwh/how_to_get_multiple_generations_from_the_same/ | LiquidGunay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16neiwh | false | null | t3_16neiwh | /r/LocalLLaMA/comments/16neiwh/how_to_get_multiple_generations_from_the_same/ | false | false | self | 4 | null |
Factor Influencing Adoption Intention of ChatGPT | 1 | [removed] | 2023-09-20T06:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/16ne840/factor_influencing_adoption_intention_of_chatgpt/ | maulanashqd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ne840 | false | null | t3_16ne840 | /r/LocalLLaMA/comments/16ne840/factor_influencing_adoption_intention_of_chatgpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UkFqfr7lSuAFg9mUz_jInPKcC1j_Ky3K93e2w3MYQI4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=108&crop=smart&auto=webp&s=c8e39df1b87dcced1d5f643a3b2fb282ea2f30aa', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=216&crop=smart&auto=webp&s=b4368e8e587f907694d21fe31d7029716f7ac0ce', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=320&crop=smart&auto=webp&s=3da41f4fe0f20d341508882fa047d3085c71dec0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=640&crop=smart&auto=webp&s=5dffa40731338d6098200c3458c91da172f40704', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=960&crop=smart&auto=webp&s=78c9392d75f0275297e652722897761b65a530f3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?width=1080&crop=smart&auto=webp&s=02ebcf3a2ada90de4d35b9778b6deff6ba9c5128', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/KmAYBSWZdfZwN1tBGhgg5JCZGT_24ykkc6-F17Ya910.jpg?auto=webp&s=5568c826f59b55198f92aa0b9507b5bf582867f4', 'width': 1200}, 'variants': {}}]} |
Best prompt and model for fact-checking a text (disinformation/fake-news detection) | 2 | Given a short text <text\_to\_check>, I want the LLM to check whether there are some facts stated in the text which are NOT true. So I want to detect 'disinformation' / 'fake news'. And the LLM should report which parts of the text are not true.
What would the "best" prompt look like for this task?
And what is the best 'compact' Llama-2-based model for it? I suppose some kind of instruction-following LLM. The LLM shall run on a mobile device with <= 8 GB RAM, so the largest model I can afford is \~13B (with 4-bit quantization in the llama.cpp framework).
Looking at Alpaca Leaderboard ([https://tatsu-lab.github.io/alpaca\_eval/](https://tatsu-lab.github.io/alpaca_eval/)), the best 13B models there are XWinLM (not sure if supported by llama.cpp), OpenChat V3.1 and WizardLM 13B V1.2. So I suppose I will use one of those models | 2023-09-20T05:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/16ndfjl/best_prompt_and_model_for_factchecking_a_text/ | Fit_Check_919 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ndfjl | false | null | t3_16ndfjl | /r/LocalLLaMA/comments/16ndfjl/best_prompt_and_model_for_factchecking_a_text/ | false | false | self | 2 | null |
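For the prompt itself, a zero-shot template along these lines (my own wording, using the Llama-2-chat [INST] format) is a reasonable starting point for claim-level checking:

```python
def build_factcheck_prompt(text_to_check):
    return (
        "[INST] You are a careful fact-checker. Read the text below. "
        "List every factual claim that is false or cannot be verified, "
        "quoting the exact passage and briefly explaining why it is wrong. "
        "If every claim is accurate, answer exactly: 'No false claims found.'\n\n"
        "Text:\n\"\"\"\n"
        f"{text_to_check}\n"
        "\"\"\" [/INST]"
    )
```

Keep in mind a 13B model can only "check" against its training data, so pairing this with retrieved reference snippets in the prompt is usually more reliable than the bare model.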
Building a Rig with 4x A100 80GBs. | 5 | I've got 4x A100 80GBs. However, I don't have the necessary hardware to run these GPUs.
I came across this amazing blog [https://www.emilwallner.com/p/ml-rig](https://www.emilwallner.com/p/ml-rig) on building an ML rig. Cool article, and credits to Emil Wallner. However, the article is based on A6000s instead of A100s.
Checking here to see if there are any recommendations? | 2023-09-20T04:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/16ncfv3/building_a_rig_with_4x_a100_80gbs/ | Aristokratic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ncfv3 | false | null | t3_16ncfv3 | /r/LocalLLaMA/comments/16ncfv3/building_a_rig_with_4x_a100_80gbs/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'e_PsddG_p2NXpD1yrnTaioQiaXsafsBFCDfSFYlnB4I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=108&crop=smart&auto=webp&s=774f3a9d1828aa102a1d63d9975c32918f171611', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=216&crop=smart&auto=webp&s=ccbe4b5eeff1021b22968a20895e96f8eb1a8d25', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=320&crop=smart&auto=webp&s=4d41793a9333b479edd7f1f1670daba3b7d64705', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=640&crop=smart&auto=webp&s=7e11a6114cefef978dfc2bb7b8f9b67edffe85ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=960&crop=smart&auto=webp&s=ef387969a23265d23e2de360d9b740ba5a855765', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?width=1080&crop=smart&auto=webp&s=4b245081ae9376a11e2daec946c21c2eb43166bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CdKspM-vOyEcwQBYXHy_bSyMGySJtP064eOb9LSSPaE.jpg?auto=webp&s=146bb9bbb17132c8df9fbf62e5968815cfdb0836', 'width': 1200}, 'variants': {}}]} |
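One concrete planning number (assuming the PCIe variant; SXM A100s need an HGX baseboard and are a different build entirely): each A100 80GB PCIe is a 300 W card, so a rough PSU budget with platform overhead and headroom looks like:

```python
def rig_psu_budget_watts(n_gpus, gpu_tdp_w=300, platform_w=500, headroom=1.25):
    # platform_w covers CPU, RAM, NVMe, fans (rough assumption);
    # headroom leaves margin for transient power spikes.
    return (n_gpus * gpu_tdp_w + platform_w) * headroom
```

Four cards lands around 2.1 kW, i.e. server chassis / dual-PSU territory, and the A100's passive heatsink needs ducted front-to-back airflow that consumer towers don't provide.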
GitHub - arcee-ai/DALM: Domain Adapted Language Modeling Toolkit | 37 | 2023-09-20T02:48:59 | benedict_eggs17 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16na2i1 | false | null | t3_16na2i1 | /r/LocalLLaMA/comments/16na2i1/github_arceeaidalm_domain_adapted_language/ | false | false | 37 | {'enabled': True, 'images': [{'id': 'JS_M9AshBvEOur8b2vb6kc9KPxWSwtP2jaPgFcsyxPI', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?width=108&crop=smart&auto=webp&s=f222d6f5240ebe7855a118fcc8376f2104825237', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?width=216&crop=smart&auto=webp&s=1ad12a09114fda79ec2ceb6da5721fb9d94fcf8d', 'width': 216}, {'height': 116, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?width=320&crop=smart&auto=webp&s=1cc67778de095eca5f8d87d6f495ba483d12d968', 'width': 320}], 'source': {'height': 187, 'url': 'https://preview.redd.it/m9g4likxqbpb1.png?auto=webp&s=3e8f2923df6d321150849b7a3d9fc008a5b7ed5a', 'width': 512}, 'variants': {}}]} | |||
What's the least finicky way of setting up your environment with all needed drivers? (Have had terrible luck with CUDA) | 1 | [removed] | 2023-09-20T02:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/16n9p23/whats_the_least_finnicky_way_of_setting_up_your/ | Ill_Fox8807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n9p23 | false | null | t3_16n9p23 | /r/LocalLLaMA/comments/16n9p23/whats_the_least_finnicky_way_of_setting_up_your/ | false | false | self | 1 | null
ERROR | 2 | I am very bad at computers so if the solution is too obvious, I apologize, but for 4 days now I can not use any ggml model, it all started when I updated, and since then I can not shit models or download them, giving me this error.
https://preview.redd.it/9i3f81y3lbpb1.png?width=657&format=png&auto=webp&s=e8a966158449efc04d31111d1ba3fd0408723b1f | 2023-09-20T02:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/16n9e3g/error/ | SeleucoI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n9e3g | false | null | t3_16n9e3g | /r/LocalLLaMA/comments/16n9e3g/error/ | false | false | 2 | null | |
What is the Loss Function that's used when fine-tuning Llamav2 using Hugging Face Trainer & PEFT? | 5 | I am unable to find what loss function is used when fine-tuning Llamav2. For example, in the following Llama Recipes script, where Llamav2 is fine-tuned using PEFT and the HF Trainer, what's the loss function?
[https://github.com/facebookresearch/llama-recipes/blob/main/examples/quickstart.ipynb](https://github.com/facebookresearch/llama-recipes/blob/main/examples/quickstart.ipynb) | 2023-09-20T01:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/16n8978/what_is_the_loss_function_thats_used_when/ | panini_deploy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n8978 | false | null | t3_16n8978 | /r/LocalLLaMA/comments/16n8978/what_is_the_loss_function_thats_used_when/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'aISsVDsnc0WvuJaDO8SxWqjvVjWFcYkkgpu6ZcUkNFo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=108&crop=smart&auto=webp&s=5c74e5c748b530e423e9f50ae29fb0814c6c0176', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=216&crop=smart&auto=webp&s=e5c886c579c9e2c22b0b41bcb62ce414cd3947ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=320&crop=smart&auto=webp&s=b7b5fc1d79fb7212115bef2bc06e5f31afc0c418', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=640&crop=smart&auto=webp&s=7c4578c14e54bce2e4aec7c2e1d2d1e1c56219cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=960&crop=smart&auto=webp&s=7055f6516a6ad95341bfbb116f4eac996ac5342d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?width=1080&crop=smart&auto=webp&s=323e46361ea24c7f94d522a9b7dbbdb1322122c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gYkvgOjdxrkVEwfQnz8hXctMWAyQLNlNf8TgUn1TKUQ.jpg?auto=webp&s=0e6386757d1ae9c4d349d0d73e5ffb67de0b5c1d', 'width': 1200}, 'variants': {}}]} |
Useful prompts for Llama 2 (13B)? | 6 | I'm finding Llama 2 13B Chat (I use the MLC version) to be a really useful model to run locally on my M2 MacBook Pro.
I'm interested in prompts people have found that work well for this model (and for Llama 2 in general). I'm interested in both system prompts and regular prompts, and I'm particularly interested in summarization, structured data extraction and question-and-answering against a provided context.
What's worked best for you so far? | 2023-09-20T01:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/16n81s2/useful_prompts_for_llama_2_13b/ | simonw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n81s2 | false | null | t3_16n81s2 | /r/LocalLLaMA/comments/16n81s2/useful_prompts_for_llama_2_13b/ | false | false | self | 6 | null |
I posted this in r/Oobabooga, instructions on how to convert documents with math equations into something local LLMs can understand. | 25 | 2023-09-20T00:56:10 | https://www.reddit.com/r/Oobabooga/comments/16n7dm8/how_to_go_from_pdf_with_math_equations_to_html/ | Inevitable-Start-653 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16n7nzu | false | null | t3_16n7nzu | /r/LocalLLaMA/comments/16n7nzu/i_posted_this_in_roobabooga_instructions_on_how/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'TN3MMtf38tBimn-2O1rr3zcBqP3-tSXBqgeGCu_A1X4', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?width=108&crop=smart&auto=webp&s=fbaf5519a0b60efe526ab2b78719c3a0a693c419', 'width': 108}, {'height': 88, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?width=216&crop=smart&auto=webp&s=e1b4a0bc27c1a0514a5070048ea153abc31678cb', 'width': 216}, {'height': 130, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?width=320&crop=smart&auto=webp&s=7f72566073c2e5a814f0b2f84e0b325b6e093239', 'width': 320}, {'height': 260, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?width=640&crop=smart&auto=webp&s=eeb9511dae9a65ec1f3f144b48aaad4d8b07b2bd', 'width': 640}, {'height': 391, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?width=960&crop=smart&auto=webp&s=75a929bd353ebdda7dd156416c36742d594fb5e9', 'width': 960}, {'height': 440, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?width=1080&crop=smart&auto=webp&s=eaf59c140fdef0865b534bd7a396dd2305cd6837', 'width': 1080}], 'source': {'height': 1677, 'url': 'https://external-preview.redd.it/QO8rUOQWrj4Flm3mVtmxi06wIgmmdXDhbvL11WwfftE.png?auto=webp&s=137c10a4213824b8a349d8e76e6d3b9eea56f6da', 'width': 4113}, 'variants': {}}]} | ||
Alternative to function calling (openai) | 3 | Hello everyone,
For a project I need a cheaper LLM API with function calling. I found Functionary, which is based on LLaMA 7B, but I haven't seen other open-source projects doing this with a success rate near 99.99%.
I hope you find something, thanks everyone | 2023-09-20T00:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/16n6vki/alternative_to_function_calling_openai/ | FineImagination7101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n6vki | false | null | t3_16n6vki | /r/LocalLLaMA/comments/16n6vki/alternative_to_function_calling_openai/ | false | false | self | 3 | null |
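One common workaround when no hosted alternative fits: prompt any local chat model to emit JSON and validate/dispatch it yourself. A minimal sketch (the tool name, prompt wording, and model reply below are all made up, and the actual model call is stubbed out):

```python
import json

# Hypothetical tool registry; in practice these wrap real functions.
TOOLS = {
    "get_weather": lambda city: f"22C and sunny in {city}",
}

SYSTEM_PROMPT = (
    "You can call tools. Reply ONLY with JSON of the form "
    '{"tool": "get_weather", "arguments": {"city": "..."}}'
)

def dispatch(model_reply: str) -> str:
    """Parse the model's JSON reply and run the named tool.
    Invalid JSON or an unknown tool raises, which you would catch and retry."""
    call = json.loads(model_reply)
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])

# Stand-in for an actual local-model response:
reply = '{"tool": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(reply))
```

The retry loop (re-prompting on malformed JSON) is what pushes reliability up; grammar-constrained decoding in llama.cpp can enforce valid JSON outright.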
Are there any good math Datasets for Training small models? | 2 | I've seen Allen AI's Lila Dataset, and I want to use this for a small model, to turn math into code. However, I don't think a small dataset of 300k rows is enough. Does anyone know of any bigger, similar datasets? | 2023-09-20T00:12:16 | https://www.reddit.com/r/LocalLLaMA/comments/16n6q58/are_there_any_good_math_datasets_for_training/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n6q58 | false | null | t3_16n6q58 | /r/LocalLLaMA/comments/16n6q58/are_there_any_good_math_datasets_for_training/ | false | false | self | 2 | null
Are there any models that would work well on a GTX 1070 (8GB) laptop? (i7-6700HQ, 32GB RAM) | 6 | My old gaming laptop's sitting unused. I'm wondering if there are any models it could run, and what kind of tokens/second I might expect? Similarly, what about Stable Diffusion?
The RAM can be upgraded to 64GB pretty cheaply too.
Thanks
I also found it nearly impossible to get CUDA working on it a few years back, so anyone got any suggestions on how to best set it up if I'm factory resetting it? | 2023-09-19T22:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/16n4z46/are_there_any_models_that_would_work_well_on_a/ | Ill_Fox8807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n4z46 | false | null | t3_16n4z46 | /r/LocalLLaMA/comments/16n4z46/are_there_any_models_that_would_work_well_on_a/ | false | false | self | 6 | null |
Testing LLaMA 2-7B's character impersonation abilities before I buy hardware to run it locally. The idea is to include this character (a perpetually grumpy individual prompted to be so) in an RPG game to generate dialogue through event prompts. | 21 | 2023-09-19T22:52:09 | https://www.reddit.com/gallery/16n4wyx | swagonflyyyy | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16n4wyx | false | null | t3_16n4wyx | /r/LocalLLaMA/comments/16n4wyx/testing_llama_27bs_character_impersonation/ | false | false | 21 | null
Building the best LLM rig for $10000 | 47 | Here is a thought - do you have a dream rig that $10000 can buy? It should be able to do everything including training on large datasets - in reasonable time (when compared to consumer grade machines).
Please reply with your dream configurations! | 2023-09-19T22:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/16n4sj3/building_the_best_llm_rig_for_10000/ | peace-of-me | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n4sj3 | false | null | t3_16n4sj3 | /r/LocalLLaMA/comments/16n4sj3/building_the_best_llm_rig_for_10000/ | false | false | self | 47 | null |
Falcon 180B inference - M2 Ultra vs 2xA100 | 14 | I recently came across some interesting stats regarding the Falcon 180B q6\_0 150.02 GB model's inference performance on two different setups: the M2 Ultra and a dual A100 setup. I'm trying to understand the financial implications of choosing one over the other based on the given results:
* **M2 Ultra**: 3.5 tokens/sec
* **2x A100 80GB**: 7 tokens/sec
However, from the financial point of view, there's an interesting difference to highlight. The M2 Ultra Mac Studio is priced around $6k, while a dual A100 setup can cost approximately 6 times more. So, is it financially feasible to invest in the more expensive dual A100 setup primarily for inference purposes? | 2023-09-19T22:41:45 | https://www.reddit.com/r/LocalLLaMA/comments/16n4o1n/falcon_180b_inference_m2_ultra_vs_2xa100/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n4o1n | false | null | t3_16n4o1n | /r/LocalLLaMA/comments/16n4o1n/falcon_180b_inference_m2_ultra_vs_2xa100/ | false | false | self | 14 | null |
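A quick way to frame the trade-off is dollars per token/sec of throughput. The sketch below assumes roughly $6k for the Mac Studio and $36k for the dual-A100 box (about 6x, per the post); both figures are rough assumptions, not quotes:

```python
# Back-of-envelope price/performance from the Falcon 180B q6_0 numbers above.
# Hardware prices are assumptions; dual-A100 systems vary a lot by vendor.
setups = {
    "M2 Ultra":     (6_000, 3.5),   # (price_usd, tokens/sec)
    "2x A100 80GB": (36_000, 7.0),
}

cost_per_tps = {name: price / tps for name, (price, tps) in setups.items()}
for name, c in cost_per_tps.items():
    print(f"{name}: ${c:,.0f} per token/sec")
```

By this metric the Mac is roughly 3x cheaper per unit of throughput, though it ignores power draw, resale value, and the A100s' far higher training and batching capability.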
training set up | 0 | I'm trying to use a raw text file to train a LoRA for oobabooga. I get the text file in the right place, but when I press Start LoRA it just says "Missing or invalid LoRA file name input."
How would I fix this error? | 2023-09-19T22:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/16n3zra/training_set_up/ | mlpfreddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n3zra | false | null | t3_16n3zra | /r/LocalLLaMA/comments/16n3zra/training_set_up/ | false | false | self | 0 | null |
Models for story writing | 7 | Looking for recommendations of models focused on story writing, preferably uncensored if possible. I tried some models already, but most of them are better for chat; I'm not getting good results with them in the story department.
I'm currently using the oobabooga text generator and have the models:
pygmalion 6b, 7b and 13b
nous hermes 13b
vicuna 30b-uncensored
erebus 30b | 2023-09-19T21:41:41 | https://www.reddit.com/r/LocalLLaMA/comments/16n3b7m/models_for_story_writing/ | Jocicleyson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n3b7m | false | null | t3_16n3b7m | /r/LocalLLaMA/comments/16n3b7m/models_for_story_writing/ | false | false | self | 7 | null |
How much vRAM is required to run 220B GPT-4 in Q_4? | 5 | I'm just asking, guys: is it possible to measure?
GPT-4 is rumored to be a mixture of 8 experts of 220B parameters each, roughly 1.76T parameters in total. So running it in full precision would require 220B x 8 @ FP16 = 3520GB.
How about Q_4 estimates? Would it run on 2x A100? | 2023-09-19T21:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/16n2znr/how_much_vram_are_required_to_run_220b_gpt4_in_q_4/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n2znr | false | null | t3_16n2znr | /r/LocalLLaMA/comments/16n2znr/how_much_vram_are_required_to_run_220b_gpt4_in_q_4/ | false | false | self | 5 | null
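A rough estimate, assuming ~4.5 effective bits/weight for a Q4_K-style quant and ~10% overhead for quantization scales and runtime buffers (both numbers are assumptions): even at 4-bit, 8 x 220B weights land around 1 TB, far beyond the 160GB of two A100s.

```python
def quantized_size_gb(params_b, bits_per_weight, overhead=1.1):
    """Very rough GGUF-style footprint in GB: billions of parameters times
    bits per weight, plus an assumed ~10% for scales and buffers."""
    return params_b * bits_per_weight / 8 * overhead

total_b = 8 * 220                              # 8 experts x 220B, ~1.76T weights
print(f"~{quantized_size_gb(total_b, 4.5):.0f} GB at ~Q4")
```

So a hypothetical 1.76T-parameter model at Q4 would need on the order of a dozen A100 80GB cards, not two.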
Is it possible to use a llama 2 model to ask questions/remember previous conversations | 15 | A huge benefit I get out of ChatGPT is instructing my model to ask me questions and remember my previous responses up to a certain point. Is Llama 2 able to do that with the text-generation web UI? It seems pretty solid at answering questions, and I'm playing around with the parameters a bit, but no idea how to use the default Input prompts to remember and answer multiple questions in sequence. Is that possible? | 2023-09-19T20:06:58 | https://www.reddit.com/r/LocalLLaMA/comments/16n0xps/is_it_possible_to_use_a_llama_2_model_to_ask/ | EfficientDivide1572 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16n0xps | false | null | t3_16n0xps | /r/LocalLLaMA/comments/16n0xps/is_it_possible_to_use_a_llama_2_model_to_ask/ | false | false | self | 15 | null
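For context on how chat "memory" generally works with local models: the UI resends a rolling transcript with every request, so the model "remembers" only what still fits in its context window. A minimal sketch of that pattern (the class and prompt format are illustrative, not text-generation-webui's actual code):

```python
from collections import deque

class ChatMemory:
    """Rolling transcript: old turns fall off once max_turns is exceeded,
    which is exactly when a model starts 'forgetting' earlier answers."""
    def __init__(self, max_turns=20):
        self.turns = deque(maxlen=max_turns)

    def add(self, role, text):
        self.turns.append((role, text))

    def build_prompt(self, system):
        lines = [system]
        lines += [f"{role.upper()}: {text}" for role, text in self.turns]
        lines.append("ASSISTANT:")          # cue the model to continue
        return "\n".join(lines)

mem = ChatMemory()
mem.add("user", "Ask me three questions, one at a time.")
mem.add("assistant", "First question: what is your favorite language?")
mem.add("user", "Python.")
print(mem.build_prompt("You are an interviewer."))
```

Each turn of the conversation appends to the transcript and the whole thing is fed back in, which is why multi-question sequences work without any special model feature.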
This will be society in 2024 | 682 | 2023-09-19T19:35:39 | NLTPanaIyst | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16n05l5 | false | null | t3_16n05l5 | /r/LocalLLaMA/comments/16n05l5/this_will_be_society_in_2024/ | false | false | 682 | {'enabled': True, 'images': [{'id': 'BzcOZ8O5tVYntfHMN8reVwtpFyNy59Wp3N75YwQgVUs', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/t0w92i4al9pb1.png?width=108&crop=smart&auto=webp&s=b69fd9715a5b80884a8aa1b6e698c625528369f9', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/t0w92i4al9pb1.png?width=216&crop=smart&auto=webp&s=2b49a2a56ce7c12ee5e0a9b2e1746ebafecf2a05', 'width': 216}, {'height': 311, 'url': 'https://preview.redd.it/t0w92i4al9pb1.png?width=320&crop=smart&auto=webp&s=77c0ffff0dc8bca55b6c6460492470701c201a8a', 'width': 320}, {'height': 623, 'url': 'https://preview.redd.it/t0w92i4al9pb1.png?width=640&crop=smart&auto=webp&s=bd59af32f05bd5567635567d008082d982d76d8e', 'width': 640}, {'height': 934, 'url': 'https://preview.redd.it/t0w92i4al9pb1.png?width=960&crop=smart&auto=webp&s=0058c6b5750405e408bc3eaabaa2e0576245c2b5', 'width': 960}], 'source': {'height': 1047, 'url': 'https://preview.redd.it/t0w92i4al9pb1.png?auto=webp&s=1998282716e49073ad22b72b01e2a06d02d37a55', 'width': 1075}, 'variants': {}}]} | |||
Video: MacOS native app SwiftChat running inference on Llama 2 7B; 100% GPU usage up from the normal 60%-ish | 1 | [removed] | 2023-09-19T18:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/16mycjv/video_macos_native_app_swiftchat_running/ | jayfehr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mycjv | false | null | t3_16mycjv | /r/LocalLLaMA/comments/16mycjv/video_macos_native_app_swiftchat_running/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GdZfMrYcAiV4r11CvPLgyVUFWsqE5CkALvDatwtQlZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=108&crop=smart&auto=webp&s=a1dab5f9bcad8fcb14f01da63b4ec2b53e41412f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=216&crop=smart&auto=webp&s=5be26557e6e2419ef12043c03536fbed81e5c8d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=320&crop=smart&auto=webp&s=7a015253ad9f9e7d305ad05ea805b19a85db3a3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=640&crop=smart&auto=webp&s=f8ec22bfe4df878fc12db83493870ad3a65b255c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=960&crop=smart&auto=webp&s=392699fcd6f5564b6cce3ee20c4e4ccf1bfc243a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=1080&crop=smart&auto=webp&s=eae31cce00083b7972cbbbe873e188ee99a117fe', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?auto=webp&s=017d2a5b3efdda0608edd479c3edf6e9617d34a2', 'width': 1300}, 'variants': {}}]} |
Contrastive Decoding Improves Reasoning in Large Language Models | 58 | 2023-09-19T16:59:31 | https://huggingface.co/papers/2309.09117 | saintshing | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 16mwcch | false | null | t3_16mwcch | /r/LocalLLaMA/comments/16mwcch/contrastive_decoding_improves_reasoning_in_large/ | false | false | 58 | {'enabled': False, 'images': [{'id': 'Y-s_xn-yNNpNvYOWJikUA8xVBwVje3Wj-Yqudjp2lmA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?width=108&crop=smart&auto=webp&s=6f4cb9d733341ffe0fd42caf2e892ceab93d4092', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?width=216&crop=smart&auto=webp&s=9d712eeb9feaf282fa4b3e5e51b785f28f828710', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?width=320&crop=smart&auto=webp&s=770ddcc69f5cacdde198ade57b9296ca161a0f9d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?width=640&crop=smart&auto=webp&s=b91b6cae5929066efab7f79d2b54f902483853f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?width=960&crop=smart&auto=webp&s=a777d7292eed756f20d98ebc67a46418dd8c0672', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?width=1080&crop=smart&auto=webp&s=8408ef63bed0375c7283e9589f5dd645194e3e91', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4UL8kcYN9VwWC2vvUdUiWuAUKeq0RWx8SdKI4RduUSY.jpg?auto=webp&s=c296a8a007b572d8696fb20a097beaa093297bec', 'width': 1200}, 'variants': {}}]} | ||
Inference on RockPi 5 | 9 | Has anyone used the RockPi 5b NPU for inference? I have a RockPi 5b running my home media server but thought it would be cool to put the NPU to use, but there's not much out there on how to use the NPU for LLM inference. | 2023-09-19T16:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/16mvrll/inference_on_rockpi_5/ | that_one_guy63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mvrll | false | null | t3_16mvrll | /r/LocalLLaMA/comments/16mvrll/inference_on_rockpi_5/ | false | false | self | 9 | null |
Nvidia NeMo Llama2 7B cuda out of memory | 1 | [removed] | 2023-09-19T16:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/16mvowl/nvidia_nemo_llama2_7b_cuda_out_of_memory/ | AdPutrid7035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mvowl | false | null | t3_16mvowl | /r/LocalLLaMA/comments/16mvowl/nvidia_nemo_llama2_7b_cuda_out_of_memory/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cwGoepXAKJmnu1HzTDTqxPTm-mttzq-zSu_cMLoHLzE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?width=108&crop=smart&auto=webp&s=15a0839eaa01437d8e94ddf1429e9d030b1c5d7c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?width=216&crop=smart&auto=webp&s=6586b69a1a5125dca8d9aa7bb8d83d7afffac99d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?width=320&crop=smart&auto=webp&s=19cee7e4f1da26586392fe2826758ee597c4fbe9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?width=640&crop=smart&auto=webp&s=437d717fc56850fbd1ecbd0276d5a131142400c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?width=960&crop=smart&auto=webp&s=360c04c26d00c810375c937362faa140b8188ded', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?width=1080&crop=smart&auto=webp&s=bf0345badfc37389bf482b32bd7ac496382c5ae6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Evyvc0V2G9cKNubUs6HR53O3_kmtceI62J0PXpKYSow.jpg?auto=webp&s=34963ef22760489fdf8fe70d129c2581dbebcf11', 'width': 1200}, 'variants': {}}]} |
LocalLlama meetup anyone? Who would like to join a show and tell next weekend? | 1 | [removed] | 2023-09-19T16:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/16mvbki/localllama_meetup_anyone_who_would_like_to_join_a/ | jayfehr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mvbki | false | null | t3_16mvbki | /r/LocalLLaMA/comments/16mvbki/localllama_meetup_anyone_who_would_like_to_join_a/ | false | false | self | 1 | null |
Best describe rp models | 6 | Could anybody recommend a local 13B model which is best at describing things and events from the point of view of an impersonal narrator, where commands like (ooc: describe) work best? | 2023-09-19T16:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/16mva0p/best_describe_rp_models/ | LonleyPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mva0p | false | null | t3_16mva0p | /r/LocalLLaMA/comments/16mva0p/best_describe_rp_models/ | false | false | self | 6 | null
Guidance needed for 4x V100 gpus | 4 | Hi all. I'm new to the GenAI world and just got access to a cluster with 4x V100 GPUs at work. I want to try fine-tuning a Llama 2 model (preferably the 70B). Are there any guides around with code that I can refer to? As I understand, I can use the 4-bit model? I'll appreciate any guidance/advice here. Thanks! | 2023-09-19T15:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/16mukzs/guidance_needed_for_4x_v100_gpus/ | mruj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mukzs | false | null | t3_16mukzs | /r/LocalLLaMA/comments/16mukzs/guidance_needed_for_4x_v100_gpus/ | false | false | self | 4 | null
What background / skills do I need to get into AI to a point where I can train my own LLMs, and train other AI models as this area continues to rapidly develop? | 1 | [removed] | 2023-09-19T15:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/16mtr30/what_background_skills_do_i_need_to_get_into_ai/ | AGI_FTW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mtr30 | false | null | t3_16mtr30 | /r/LocalLLaMA/comments/16mtr30/what_background_skills_do_i_need_to_get_into_ai/ | false | false | self | 1 | null |
Fast Feedforward Networks, up to 220x faster than feedforward networks | 142 | 2023-09-19T14:38:37 | https://arxiv.org/abs/2308.14711 | saintshing | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 16mss98 | false | null | t3_16mss98 | /r/LocalLLaMA/comments/16mss98/fast_feedforward_networks_up_to_220x_faster_than/ | false | false | 142 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
Signals to Text | 5 | [related to a post I made earlier]
I do know that LLMs are quite efficient at translating content (English to French, Spanish to C++, etc.), but if fine-tuned on a specific English-to-neuro-signals dataset, are they good at translating those signals into English (meaning, taking the signal as input and generating English as output)? I have obviously seen the opposite, where the LLM takes text as input and generates whatever, but can the other way around be achieved?
​ | 2023-09-19T14:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/16mrxn2/signals_to_text/ | rasputin23YD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mrxn2 | false | null | t3_16mrxn2 | /r/LocalLLaMA/comments/16mrxn2/signals_to_text/ | false | false | self | 5 | null |