title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What 7b model based on Llama-2 do you use? | 4 | I got left behind on the news after a couple weeks of "enhanced" worked commtments. Now I got the time on my hands, I felt really out of date on how fast things are going here.
I know the "best" can be a bit subjective so I think the better question is, what 7b model do people use the most nowaday? GGML format would be best on my case. Thank you! | 2023-07-26T00:41:48 | https://www.reddit.com/r/LocalLLaMA/comments/159qm9p/what_7b_model_based_on_llama2_do_you_use/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159qm9p | false | null | t3_159qm9p | /r/LocalLLaMA/comments/159qm9p/what_7b_model_based_on_llama2_do_you_use/ | false | false | self | 4 | null |
The difference between quantization methods for the same bits | 41 | Using GGML quantized models, let's say we are going to talk about 4bit
I see a lot of versions suffixed with either 0, 1, k\_s or k\_m
I understand that the difference is in the way of quantization that affect the final size of the quantized models but how does this effect quality of output and speed of inference? | 2023-07-25T22:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/159nrh5/the_difference_between_quantization_methods_for/ | yehiaserag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159nrh5 | false | null | t3_159nrh5 | /r/LocalLLaMA/comments/159nrh5/the_difference_between_quantization_methods_for/ | false | false | self | 41 | null |
How to evaluate production data with Llama 2 70B quickly? | 5 | Hi everyone,
I am working for a large corp that wants to evaluate Llama 2 70B to see how it performs against OpenAI on Azure, and Vertex AI.
Because OpenAI on Azure and Vertex AI are deployed inside our VPC it is easy to try, but to try Llama 2 70B, we would need to get quotas for GPUs and I guess engineering effort on our side, which becomes much more painful.
It seems like it is not trivial to solve, as we would like something that is fast, like a SaaS, but it's hard to have that accepted by Compliance / ITSec, and On-VPC deployment is painful in another way.
What do you think are good ways to solve this? Have you encountered that issue too?
Would love to have your opinion! | 2023-07-25T22:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/159nrh6/how_to_evaluate_production_data_with_llama_2_70b/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159nrh6 | false | null | t3_159nrh6 | /r/LocalLLaMA/comments/159nrh6/how_to_evaluate_production_data_with_llama_2_70b/ | false | false | self | 5 | null |
Llama-2-70b-Guanaco-QLoRA becomes the first model on the Open LLM Leaderboard to beat gpt3.5's MMLU benchmark | 264 | [https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
https://preview.redd.it/wq4vow0wc6eb1.png?width=1457&format=png&auto=webp&s=bcf22b0c57513a31bfb7e4c85baa8df2d6986e93
https://preview.redd.it/g37tow0wc6eb1.png?width=1455&format=png&auto=webp&s=a75af69bffc9295be92a7f3b5306778fcdb7ebe7
The current gpt comparison for each Open LLM leaderboard benchmark are
Average - Llama 2 finetunes are nearly equal to gpt 3.5
ARC - Open source models are still far behind gpt 3.5
HellaSwag - Around 12 models on the leaderboard beat gpt 3.5, but are decently far behind gpt 4
MMLU - 1 model barely beats gpt 3.5
TruthfulQA - Around 130 models beat gpt 3.5, and currently 2 models beat gpt 4
Is MMLU still seen as the best of the four benchmarks? Also, why are open source models still so far behind when it comes to ARC? | 2023-07-25T21:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/159l9ll/llama270bguanacoqlora_becomes_the_first_model_on/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159l9ll | false | null | t3_159l9ll | /r/LocalLLaMA/comments/159l9ll/llama270bguanacoqlora_becomes_the_first_model_on/ | false | false | 264 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} | |
Llama-2-70b-Guanaco-QLoRA becomes the first model on the Open LLM Leaderboard to beat gpt3.5's MMLU benchmark | 3 | [https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
https://preview.redd.it/wq4vow0wc6eb1.png?width=1457&format=png&auto=webp&s=bcf22b0c57513a31bfb7e4c85baa8df2d6986e93
https://preview.redd.it/g37tow0wc6eb1.png?width=1455&format=png&auto=webp&s=a75af69bffc9295be92a7f3b5306778fcdb7ebe7
The current gpt comparison for each Open LLM leaderboard benchmark are
Average - Llama 2 finetunes are nearly equal to gpt 3.5
ARC - Open source models are still far behind gpt 3.5
HellaSwag - Around 12 models on the leaderboard beat gpt 3.5, but are decently far behind gpt 4
MMLU - 1 model barely beats gpt 3.5
TruthfulQA - Around 130 models beat gpt 3.5, and currently 2 models beat gpt 4
Is MMLU still seen as the best of the four benchmarks? Also, why are open source models still so far behind when it comes to ARC? | 2023-07-25T21:10:23 | https://www.reddit.com/r/LocalLLaMA/comments/159l6f3/llama270bguanacoqlora_becomes_the_first_model_on/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159l6f3 | false | null | t3_159l6f3 | /r/LocalLLaMA/comments/159l6f3/llama270bguanacoqlora_becomes_the_first_model_on/ | false | false | 3 | null | |
How to prompt llama.c | 1 | Okay so I tried the llama.c…the first thing to run above 0.1 t/s on my pc Yayyy
Now with the provided model it doesn’t allow any kind of prompting
So I tried to fiddle with it and since by default it uses a single token to begin I added a start word
And this is how it turned out
Then I tried to make it work with prompts of more than one word
Now the thing is I don’t really understand much about LLMs
C even less
So after a few segmentation faults here and there and a headache I’m going to leave it here
If anyone has any idea how to do this or if it has already been done please do tell
Good night | 2023-07-25T20:03:26 | https://www.reddit.com/gallery/159jaym | Former_Apple | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 159jaym | false | null | t3_159jaym | /r/LocalLLaMA/comments/159jaym/how_to_prompt_llamac/ | false | false | 1 | null | |
hpe proliant g10 with 1500gb ram and 2x xeon gold 6226r enough for proper cpu only? | 6 | Hi all, i was able to obtain a hpe proliant g10 with 1500gb ram and 2x xeon gold 6226r. is that enough to get an answer from llms without dying in front of the sreen from the waiting time for each token? i dont expect the speed as i get from my now setup with rtx 3060 12gb with exllama but im curious and wont be able to check it until weekend because time. | 2023-07-25T19:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/159j2vk/hpe_proliant_g10_with_1500gb_ram_and_2x_xeon_gold/ | Plums_Raider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159j2vk | false | null | t3_159j2vk | /r/LocalLLaMA/comments/159j2vk/hpe_proliant_g10_with_1500gb_ram_and_2x_xeon_gold/ | false | false | self | 6 | null |
1500gb ram and 2x xeon e5-2680 v4 enough for cpu only in oobabooga? | 1 | Hi all, i was able to obtain a hpe proliant g10 with 1500gb ram and 2x xeon e5-2680 v4. is that enough to get an answer from llms without dying in front of the sreen from the waiting time for each token? i dont expect the speed as i get from my now setup with rtx 3060 12gb with exllama but im curious and wont be able to check it until weekend because time. | 2023-07-25T19:45:26 | https://www.reddit.com/r/LocalLLaMA/comments/159is9e/1500gb_ram_and_2x_xeon_e52680_v4_enough_for_cpu/ | Plums_Raider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159is9e | false | null | t3_159is9e | /r/LocalLLaMA/comments/159is9e/1500gb_ram_and_2x_xeon_e52680_v4_enough_for_cpu/ | false | false | self | 1 | null |
Nous- Hermes & Puffin (13b) having opposite opinions | 4 | I was testing some models with random questions I had to see differences, and I've found a curious difference:
When you as how you should defrost a frozen meal (in a glass container), they both prefer different approaches:
Hermes --> cold water, slow defrost: Less bacteria growth
Puffin--> hot water, quick defrost: less bacteria growth
Granted the fine tuning methods are fairly different (300k gpt4 vs 3k human+gpt4 ), but given they are based on the same model, and "with" the same other model, I would have expected a simple question like this to be the same.
Does anyone know what (specifically) can cause this? seems like an odd thing to take an opposite stance on.
| 2023-07-25T19:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/159iodz/nous_hermes_puffin_13b_having_opposite_opinions/ | leschnoid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159iodz | false | null | t3_159iodz | /r/LocalLLaMA/comments/159iodz/nous_hermes_puffin_13b_having_opposite_opinions/ | false | false | self | 4 | null |
Best options for running LLama locally with AMD GPU on windows (Question) | 13 | Hi all,
I've got an AMD gpu (6700xt) and it won't work with pytorch since CUDA is not available with AMD.
A couple general questions:
1. I've got an AMD cpu, the 5800x3d, is it possible to offload and run it entirely on the CPU? I can't imagine the performance is going to be great with this option...
2. Is there some sort of work around? I've looked at ROCm 5.x, but from what I can tell it is linux only. I'd rather not dual boot my pc into linux and windows if I don't have to.
3. Side question, does anyone have an example notebook or code where they are running on an AMD gpu on windows locally? I've looked but the trails lead to google collab notebooks and running on linux machines.
Any help would be greatly appreciated. Still pretty new to actually implementing LLMs. | 2023-07-25T19:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/159i9v9/best_options_for_running_llama_locally_with_amd/ | oaky180 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159i9v9 | false | null | t3_159i9v9 | /r/LocalLLaMA/comments/159i9v9/best_options_for_running_llama_locally_with_amd/ | false | false | self | 13 | null |
What are VRAM requirements for QLoRA Finetuning? | 9 | I want to do some QLoRA finetuning on custom datasets. While an enterprise-grade cluster with 8 H100s or something would of course be amazing, I don't have those kind of resources available. I'm looking to see what might be possible at different VRAM levels, and I thought this community is probably the best place to ask.
As far as I can tell, the 14B or less models can all be fairly easily finetuned on a 24GB GPU like an RTX 3090, but I want to see about higher parameter models. I have seen some posts on this subreddit about [33B QLoRA finetunes on a 24GB GPU](https://www.reddit.com/r/LocalLLaMA/comments/13tz14v/how_to_qlora_33b_model_on_a_gpu_with_24gb_of_vram/) and two posts about struggles to [finetune MPT-30B](https://www.reddit.com/r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/) (which seemed to run in to issues not necessarily because of VRAM, [but rather because MPT was still new at that point.](https://www.reddit.com/r/LocalLLaMA/comments/14n3rfv/mpt30b_qlora_on_24_gb_vram/))
So now that Llama 2 is out with a 70B parameter, and Falcon has a 40B and Llama 1 and MPT have around 30-35B, I'm curious to hear some of your experiences about VRAM usage for finetuning. I imagine some of you have done QLoRA finetunes on an RTX 3090, or perhaps on a pair for them. I'm also hoping that some of you have experience with other higher VRAM GPUs, like the A5000 and maybe even the "old" cards like the P40.
So please, share your experiences and VRAM usage with QLoRA finetunes on models with 30B or more parameters. | 2023-07-25T18:07:47 | https://www.reddit.com/r/LocalLLaMA/comments/159g3hy/what_are_vram_requirements_for_qlora_finetuning/ | ResearchTLDR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159g3hy | false | null | t3_159g3hy | /r/LocalLLaMA/comments/159g3hy/what_are_vram_requirements_for_qlora_finetuning/ | false | false | self | 9 | null |
Finetune LLM for Legal tasks | 12 | Ehi I am trying to build a chatbot able to perform open book Q&A on Italian Law.
Semantic retrieval with some simple prompt engineering works well enough just sometimes. Other times instead, it is required some domain knowledge that is very difficult to insert in the prompt. For example, If a question is about a "contract for real estate with a state agency", it requires not only the specific relevant laws, but also general knowledge about contracts, real estate, and state agencies. GPT4 does not know it. If this itself does not fit in the context (it's whole sections of a book concerning private law), understanding each of these concepts might require general knowledge on even more topics, and so on, growing exponentially.
I am wondering how finetuning an LLM might help my case. I can spend around 1k€, that is \~300 hours of A100 gpu, to build a decent prototype, plus more OpenAI credits. I get that the final chain has to be GPT4 since is so much better than everything else, but maybe I can use a fine-tuned model for some of the intermediate calls of my chains. I am thinking about Falcon-40B since it has been trained on more italian data than LLama2.
Will a QLoRa fine-tuned model get some of the "general domain knowledge" I need it to know if trained with proper data? I would of course continue using retrieval and some smart prompt engineering to insert the specific relevant laws and judjments in the prompt.
​ | 2023-07-25T16:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/159e2z5/finetune_llm_for_legal_tasks/ | EnnioEvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159e2z5 | false | null | t3_159e2z5 | /r/LocalLLaMA/comments/159e2z5/finetune_llm_for_legal_tasks/ | false | false | self | 12 | null |
(Orca mini) unable to access saved chats/threads | 1 | Hi all. Installed orca mini. On close, program saves to disk. Upon reopening, I see the name of the chat. When clicked on, nothing happens. When asked, program is incapable of accessing. Files exist as .chat files. I can’t find the actual app to associate it with the file.
Running a pc windows 11 home. Not much experience with pcs, sorry if it’s a simple solution. | 2023-07-25T16:45:26 | https://www.reddit.com/r/LocalLLaMA/comments/159dtci/orca_mini_unable_to_access_saved_chatsthreads/ | xdiox66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159dtci | false | null | t3_159dtci | /r/LocalLLaMA/comments/159dtci/orca_mini_unable_to_access_saved_chatsthreads/ | false | false | self | 1 | null |
Currently what is the best 7-13B model for code generation | 20 | Looking for something with long context length, I wanna loD whole project into it.
Is it still wizardcoder or falcon based ones? Any code fine tune for llama2? | 2023-07-25T16:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/159dp7l/currently_what_is_the_best_713b_model_for_code/ | Voxandr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159dp7l | false | null | t3_159dp7l | /r/LocalLLaMA/comments/159dp7l/currently_what_is_the_best_713b_model_for_code/ | false | false | self | 20 | null |
New badass model OpenAssistant/llama2-13b-orca-8k released 🎉 | 42 | ​
## Model Description
This model is a fine-tuning of Meta's Llama2 13B model with 8K context size on a long-conversation variant of the Dolphin dataset ([**orca-chat**](https://huggingface.co/datasets/shahules786/orca-chat)).
Note: **At least Huggingface Transformers** [**4.31.0**](https://pypi.org/project/transformers/4.31.0/) **is required to load this model!**
## Usage
import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", use_fast=False) model = AutoModelForCausalLM.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto") system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information." user_prompt = "Write me a poem please" prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>""" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256) print(tokenizer.decode(output[0], skip_special_tokens=True))
## Model Details
* base model: [**meta-llama/Llama-2-7b**](https://huggingface.co/meta-llama/Llama-2-7b)
* License: [**Llama 2 Community License Agreement**](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
* sampling report: [**2023-07-25\_OpenAssistant\_llama2-13b-orca-8k-3319\_sampling\_llama2\_prompt.json**](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json)
* wandb: [**public-sft/runs/2jfazjt9**](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9)
* checkpoint: 3319 steps
* datatpye: fp16
* sponsored by: [**Redmond.ai**](https://redmond.ai/)
## Long context (RoPE Scaling)
This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was recently added to [**Huggingface transformers**](https://github.com/huggingface/transformers/). Before loading this model please make sure HF transformers >=4.31.0 is installed (pip install transformers>=4.31.0
).
## Conversation Template
For the initial response use (e.g. the [**llama2 default system prompt**](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46) works well):
<|system|>system message</s><|prompter|>user prompt</s><|assistant|>
For multi-turn conversations use:
<|system|>system message</s><|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|>
The model was trained with the following 15 system messages used to generate the training examples (see [**ORCA paper**](https://arxiv.org/abs/2306.02707)):
1. You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.
2. You are an AI assistant. You will be given a task. You must generate a detailed and long answer.
3. You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.
4. You are an AI assistant that follows instruction extremely well. Help as much as you can.
5. You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer.
6. You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
7. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old.
8. Explain how you used the definition to come up with the answer.
9. You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question.
10. You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-by- step and justify your answer.
11. User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer.
12. You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer.
13. You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task.
14. Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part #: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria.
15. You are an AI assistant that helps people find information.
## Datasets: Orca-Chat/Dolphin, RedPajama1T & FanFics
This model was trained on:
* [**shahules786/orca-chat**](https://huggingface.co/datasets/shahules786/orca-chat)
* [**togethercomputer/RedPajama-Data-1T-Sample**](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
* [**atom-in-the-universe/fanfics-10k-50k**](https://huggingface.co/datasets/atom-in-the-universe/fanfics-10k-50k)
​
Dataset Composition: Tain (sampled): orca-chat: 188842 (100%) fanfics: 47760 (100%) red_pajama: 188262 (25%) Valid: orca-chat: 5000 fanfics: 1000 red_pajama: 1000
The dataset [**shahules786/orca-chat**](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [**ehartford/dolphin**](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations to improve long-context training.
Additionally, RedPajama and FanFics were used for classic language modelling as an auxiliary task to improve the RoPE scaling for the 8k context size.
## Model Configuration
llama2_13b_orca_8k: rng_seed: 0xe1291f1a use_custom_sampler: true sort_by_length: false dtype: fp16 log_dir: "llama2_log_13b_orca_8k" learning_rate: 1e-5 model_name: /mnt/data/llama2/Llama-2-13b-hf/ output_dir: llama2_13b_orca_8k deepspeed_config: configs/zero_config_pretrain.json weight_decay: 0.0 max_length: 8192 warmup_steps: 100 use_flash_attention: true gradient_checkpointing: true gradient_accumulation_steps: 8 per_device_train_batch_size: 2 per_device_eval_batch_size: 1 residual_dropout: 0.0 eval_steps: 200 save_steps: 1000 # (total steps: 3319) num_train_epochs: 1 save_total_limit: 4 superhot: true superhot_config: type: linear scale: 2 datasets: - orca-chat: max_val_set: 5000 - fanfics: max_chunk_size: 65535 max_val_set: 1000 - red_pajama: fraction: 0.25 max_val_set: 1000 max_chunk_size: 65535 peft_model: false
# Source
[OpenAssistant/llama2-13b-orca-8k-3319 · Hugging Face](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319)
[TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ · Hugging Face](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ)
[TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML · Hugging Face](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML)
​ | 2023-07-25T16:35:44 | https://www.reddit.com/r/LocalLLaMA/comments/159djux/new_badass_model_openassistantllama213borca8k/ | FHSenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159djux | false | null | t3_159djux | /r/LocalLLaMA/comments/159djux/new_badass_model_openassistantllama213borca8k/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'JcWKAsMJSrQCtKAMMNiBWwZ6NOyhP8a-oTgAM2i1iyw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=108&crop=smart&auto=webp&s=ebee6ec6f35f7ebd7a4a8851e017ecad97cba431', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=216&crop=smart&auto=webp&s=b6b414a55b89624206731e951cfc11c1c6a470f6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=320&crop=smart&auto=webp&s=5e908da23d0503b284be957a67bd8e3e93e3a911', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=640&crop=smart&auto=webp&s=b06e012c34b2b107c9b2d7748535af466d27ec26', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=960&crop=smart&auto=webp&s=d1eed7ba0a237eb4ee0ac43ba1fb602aba035945', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?width=1080&crop=smart&auto=webp&s=b1041ce76eaeed148baba42a599eb1df562ae9e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kSwCc1XBagltRuXW2qsoYpNHcNw3QDPgVHp_yBAgzHg.jpg?auto=webp&s=32de8271fa96d2c3b0919d2394be31cbecf94f2b', 'width': 1200}, 'variants': {}}]} |
I'm an idiot - make sure you download the actual .bin... | 5 | Recently tried to run local LLM. Followed a couple of guides and got llama.cpp installed, but couldn't make it run any .bin files from HuggingFace. Figured I just had an issue with my system (Intel Mac) but couldn't solve. Kept getting 'Are you sure this is a bin' error from llama.cpp.
After some clicking around Github I thought I'd just download a .bin file manually rather than clone the repository like I had been doing (which basically was just downloading the filenames...). Lo and behold after several GB download I can now run the model.
Hope that helps any other newbs! | 2023-07-25T16:03:41 | https://www.reddit.com/r/LocalLLaMA/comments/159codo/im_an_idiot_make_sure_you_download_the_actual_bin/ | etsatlo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159codo | false | null | t3_159codo | /r/LocalLLaMA/comments/159codo/im_an_idiot_make_sure_you_download_the_actual_bin/ | false | false | self | 5 | null |
How to fine tune llama2? | 26 | Trying to fine tune llama2, having no success. Fastchat is not working for me. Also, can we use the same code for llama2 to fine tune llongma2? | 2023-07-25T16:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/159cn1s/how_to_fine_tune_llama2/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159cn1s | false | null | t3_159cn1s | /r/LocalLLaMA/comments/159cn1s/how_to_fine_tune_llama2/ | false | false | self | 26 | null |
Question about multiple sources with vector embeddings & local LLM. | 13 | Hello! I have been dabbling with LLMs for the past month and have been working on mini prototypes to improve my daily productivity. I have a couple of questions on how I can build this into a stable system where I can leave it running 24/7 on it's own.
Here's some context:
As a Software Engineer, I am able to access and manipulate Slack conversations, Confluence documentation, JIRA ticket, and other respective sources. I currently am using OpenAI embeddings for testing, but I'm incrementally transitioning to my local machine for data privacy. (I'm currently on RTX 3070, I know it's not the best, but it works with Wizard-Vicuna 30B model so far. I love to start using Llama2 70B, but I'll work with what I have till I reached my software limits and when I can justify the cost of it)
Here's the requirement that I'm trying to achieve along with the question for each of them.
1. Queries to the data source should stay local and private within my own server (OpenAI is not a good idea)
- Vector embeddings should be private for data privacy compliancy of course. (I can do this with `all-MiniLM-L6-v2` it seems. But how would you guys go about this exactly? Do I create multiple vectorDB collections and store them corresponding? (E.g. SlackCollection, ConfluenceCollection, XCollection). If I were to do this, how exactly do I indicate which collection to query from?
2. I should be able to query/chat with the LLM and search/request for data from different datasources based on my query.
- Currently, I'm using LlamaIndex (GPTIndex) to achieve what I'm doing with a single source of data (Confluence). I'm looking to move this towards LangChain as I feel that LlamaIndex seems limiting with regards to using private embeddings and localLLMs. I may be wrong, but I'd love to hear from you.
- I'd want to move to LangChain because I'd love to utilize agents (I hope I understand the concept right) and it definitely looks to have a wider integration with other libraries too. Agents would allow to me do the following I believe.
```
Do the following step by step.
Search {content} from SlackCollection
Search {content} from ConfluenceCollection
Summarize the results and cite the sources before returning the results.
```
3. The vector embeddings should stay up to date. Freshness can be up to how I define it (E.g. daily).
- I can set up a cronjob to scrape respective Slack threads, confluence docs, etc. However, is there a way to incrementally update these embeddings? Right now, I'm doing the whole process from scratch every time. I'd love to speed up the process if possible.
Additional Questions:
4. Should I use an Instruct Model or Chat Model? I feel like I'd should use a chat model since I do want memory capability in the LLM.
4. Lastly, I don't plan to fine-tune my model as I don't see a need to have such a specific model. I would consider it in the future, but I don't have the expertise to know exactly what to fine-tune, or what's the best approach either. OpenAIEmbeddings with GPT3.5 is a powerhouse frankly. But I'd love to hear if there's any thoughts on this.
Thanks for taking the time to read this! | 2023-07-25T15:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/159cex0/question_about_multiple_sources_with_vector/ | pickandmix222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159cex0 | false | null | t3_159cex0 | /r/LocalLLaMA/comments/159cex0/question_about_multiple_sources_with_vector/ | false | false | self | 13 | null |
Looking for an Uncensored LLM Service to Try Online (Free or Paid) | 30 |
Hello everyone!
I'm on the hunt for an online LLM service that I can try directly in my browser. Unfortunately, my computer is quite slow, and I can't even test the slowest models available. I'm particularly interested in finding a service that's uncensored.
If anyone could share their experiences with services that meet these criteria or point me in the right direction to find a list of options, I would greatly appreciate it!
Thanks in advance for your help and shared knowledge! | 2023-07-25T15:50:00 | https://www.reddit.com/r/LocalLLaMA/comments/159cajz/looking_for_an_uncensored_llm_service_to_try/ | CryptoNarco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159cajz | false | null | t3_159cajz | /r/LocalLLaMA/comments/159cajz/looking_for_an_uncensored_llm_service_to_try/ | false | false | self | 30 | null |
Looking help for Fine tuning / Pre training with with Lora | 5 | I am looking to fine tune / pre-train Llama based models. I tried using axolotl and some other libraries but did not get satisfactory results for instruction based fine tune. I know some of have questions about size of data , my record size is around 100K distributed instructions . I can generate more data if required. | 2023-07-25T15:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/159c1z3/looking_help_for_fine_tuning_pre_training_with/ | data_dungen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159c1z3 | false | null | t3_159c1z3 | /r/LocalLLaMA/comments/159c1z3/looking_help_for_fine_tuning_pre_training_with/ | false | false | self | 5 | null |
Official WizardLM-13B-V1.2 Released! Trained from Llama-2! Can Achieve 89.17% on AlpacaEval! | 282 |
* Today, the ***WizardLM Team*** has released their **Official** **WizardLM-13B-V1.2** model trained from Llama-2 with brand-new Evol+ methods!
* Paper: [https://arxiv.org/abs/2304.12244](https://arxiv.org/abs/2304.12244)
* The project repo: [WizardLM](https://github.com/nlpxucan/WizardLM/tree/main)
* The official Twitter: [WizardLM\_AI](https://twitter.com/WizardLM_AI)
* Twitter status: [https://twitter.com/WizardLM\_AI/status/1669109414559911937](https://twitter.com/WizardLM_AI/status/1669109414559911937)
* HF Model: [WizardLM/WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)
* Online demo links:
1. [https://b7a19878988c8c73.gradio.app/](https://b7a19878988c8c73.gradio.app/)
2. [https://d0a37a76e0ac4b52.gradio.app/](https://d0a37a76e0ac4b52.gradio.app/)
(We will update the demo links in our [github](https://github.com/nlpxucan/WizardLM/tree/main).)
**WizardLM-13B-V1.2 achieves:**
1. 7.06 on MT-Bench (V1.1 is 6.74)
2. 🔥 **89.17% on Alpaca Eval (V1.1 is** **86.32%**, **ChatGPT is 86.09%)**
3. 101.4% on WizardLM Eval (V1.1 is 99.3%**,** Chatgpt is 100%)
https://preview.redd.it/eb0pdan0o4eb1.jpg?width=1345&format=pjpg&auto=webp&s=9f19c1907a56351619c7a769d5ebb2572bfb8723
https://preview.redd.it/95ybnfk1o4eb1.png?width=1532&format=png&auto=webp&s=f03a8a0d317655313ed6a9acfc8311cbf284513c
​ | 2023-07-25T15:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/ | cylaw01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159bl45 | false | null | t3_159bl45 | /r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/ | false | false | 282 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | |
Why have a large context length if max_token_length stops it? | 1 | Ok this might be a very noob question, but I can't find the answer anywhere.
When you are using a hugging face model with transformers, it seems like always the max\_token\_length is something like 512, but the context of the model is like 2048 or something like this. Also, everyone wants to increase context length.
What is the point of having all that context size if you can only put in 512 tokens at a time? I know I'm missing something, but I can't find this anywhere, any help would be greatly appreciated. | 2023-07-25T15:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/159bi4z/why_have_a_large_context_length_if_max_token/ | morecontextplz1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159bi4z | false | null | t3_159bi4z | /r/LocalLLaMA/comments/159bi4z/why_have_a_large_context_length_if_max_token/ | false | false | self | 1 | null |
Running Llama 2 on GPU | 1 | [removed] | 2023-07-25T14:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/159au3w/running_llama_2_on_gpu/ | pc7ayd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159au3w | false | null | t3_159au3w | /r/LocalLLaMA/comments/159au3w/running_llama_2_on_gpu/ | false | false | self | 1 | null |
To those using oobabooga, how exactly do you use it to write fiction ? I am just using it as a normal chat bot, how do you guys get it to function as a story writer ? | 30 | Curious, thanks. | 2023-07-25T14:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/159akn0/to_those_using_oobabooga_how_exactly_do_you_use/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159akn0 | false | null | t3_159akn0 | /r/LocalLLaMA/comments/159akn0/to_those_using_oobabooga_how_exactly_do_you_use/ | false | false | self | 30 | null |
[HELP] It's there a way to make Llama 2 model generate text token by token or word by word like what ChatGPT does? | 8 | `pipeline`, or `model.generate` doesn't seems to support generate text token by token, instead, they will give you all the output text at once when it's finished.
And I couldn't find anyway to doing it online using `pytorch`
The code below is an example I used from [Llama-2 7B uncensored - QLoRA fine-tune on wizard\_vicuna\_70k\_unfiltered](https://www.reddit.com/r/LocalLLaMA/comments/154rqay/llama2_7b_uncensored_qlora_finetune_on_wizard/)
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import time
model_name_or_path = "TheBloke/llama2_7b_chat_uncensored-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True, legacy=False)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''### HUMAN:
{prompt}
### RESPONSE:
'''
print("\n\n*** Generate:")
start_time = time.time()
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
print(f"Inference time: {time.time() - start_time:.4f} seconds")
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
start_time = time.time()
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
print(f"Inference time: {time.time() - start_time:.4f} seconds")
Thank you in advance :) | 2023-07-25T14:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/159afuz/help_its_there_a_way_to_make_llama_2_model/ | MrForExample | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159afuz | false | null | t3_159afuz | /r/LocalLLaMA/comments/159afuz/help_its_there_a_way_to_make_llama_2_model/ | false | false | self | 8 | null |
Question: Option to run LLaMa and LLaMa2 on external hardware (GPU / Hard Drive)? | 4 | Hello guys!
I want to run LLaMa2 and test it, but the system requirements are a bit demanding for my local machine. I have seen it requires around of 300GB of hard drive space which i currently don't have available and also 16GB of GPU VRAM, which is a bit more from what I currently have.
I was wondering if you know of any solutions where I could use external hardware to install and run LLaMa2 from there. Don't mind paying with an hourly rate.
Any feedback, links, guides etc. will help a lot!
Thank you in advance! | 2023-07-25T14:13:11 | https://www.reddit.com/r/LocalLLaMA/comments/1599p7t/question_option_to_run_llama_and_llama2_on/ | SiltoruzExarz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1599p7t | false | null | t3_1599p7t | /r/LocalLLaMA/comments/1599p7t/question_option_to_run_llama_and_llama2_on/ | false | false | self | 4 | null |
Need to summarize and analyze documents with sensitive information locally | 10 | I have a somewhat urgent need to analyze long documents (e.g., PDFs of trial transcripts) locally. I have an old 2016 MBP. My experiments with gpt4all models failed—I just don't apparently have the resources.
If you were me, to accomplish my goal, what you you buy now (ideally it would be a laptop)? I'm sort of at a point where I can't wait much longer for the tech to improve for use on low-powered machines, nor can I wait for new hardware developments (as exciting as both those propositions are). Basically I'm looking for a short-medium term "good enough" solution. Any input appreciated! | 2023-07-25T14:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1599m5l/need_to_summarize_and_analyze_documents_with/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1599m5l | false | null | t3_1599m5l | /r/LocalLLaMA/comments/1599m5l/need_to_summarize_and_analyze_documents_with/ | false | false | self | 10 | null |
Running GPT4ALL Model on GPU | 6 | Hi all i recently found out about GPT4ALL and new to world of LLMs they are doing a good work on making LLM run on CPU is it possible to make them run on GPU as now i have access to it i needed to run them on GPU as i tested on "ggml-model-gpt4all-falcon-q4\_0" it is too slow on 16gb RAM so i wanted to run on GPU to make it fast. | 2023-07-25T14:02:03 | https://www.reddit.com/r/LocalLLaMA/comments/1599ety/running_gpt4all_model_on_gpu/ | teritump3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1599ety | false | null | t3_1599ety | /r/LocalLLaMA/comments/1599ety/running_gpt4all_model_on_gpu/ | false | false | self | 6 | null |
How to do local document retrieve based on LLMs? | 1 | hi all, I have walked through the Transformer packages and langchain.
there is one case that need to do a local document retrieve, i have read the source code of priaveGPT and gpt4all, but i want to know how should we do to do a comparation of these llms?
for the approach that i think there are 2 ways:
1. base on langchain to build a vectorstore search based on the similarly of query and splitted sentences, then construct a prompt with a query and content yo let llms to.do text generation.
2. we use a question answering model to do extraction based on the content provided to answer the question.
Please correct me if there is any wrong point.
The question is how could we do this requirements to do local document retrieve in action? should we use solution 1 to use llms like llama 2? or use solution 2 to do question answer?
i have Googled so many but couldn't get a.good solution.
I havr also have one question, regarding to the document retrieve l, which task it belong to? text generation? question answer?
I do have to say thanks in advance for your patience for the question that has bother me so long. Thanks.( ﹡ˆoˆ﹡ ) | 2023-07-25T13:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1598v1h/how_to_do_local_document_retrieve_based_on_llms/ | Ok_Bee_6447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1598v1h | false | null | t3_1598v1h | /r/LocalLLaMA/comments/1598v1h/how_to_do_local_document_retrieve_based_on_llms/ | false | false | self | 1 | null |
Running llama.c on budget android | 141 | ERROR: type should be string, got "\n\nhttps://twitter.com/shxf0072/status/1683508670263595008?t=SY7uhspgdFIgyuJ-nOSsSQ&s=19\n" | 2023-07-25T13:38:00 | https://v.redd.it/kjy3dier64eb1 | esharp007 | /r/LocalLLaMA/comments/1598t2t/running_llamac_on_budget_android/ | 1970-01-01T00:00:00 | 0 | {} | 1598t2t | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/kjy3dier64eb1/DASHPlaylist.mpd?a=1692970898%2CMDYzNzRlODZlZGY4NTY1NTBjNzIwNTMzOWU1OTA3MmRhY2Q0NzU5NjMzYjdjMGI1ODhjYmVhM2MxNDdkODNjZg%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/kjy3dier64eb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/kjy3dier64eb1/HLSPlaylist.m3u8?a=1692970898%2CMzY4ZDhjMzZjZmQzZDQ5Zjg5ZDQxYWM0ZmYxN2Y4MzFhZGJkYWJlNzhhODJjOTc0MzEwODUwN2E5MmEzNjU4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kjy3dier64eb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 486}} | t3_1598t2t | /r/LocalLLaMA/comments/1598t2t/running_llamac_on_budget_android/ | false | false | 141 | {'enabled': False, 'images': [{'id': 'cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=108&crop=smart&format=pjpg&auto=webp&s=8784dead078888d6cd32221613dce8b9b7dfbb9e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=216&crop=smart&format=pjpg&auto=webp&s=066367d8b89dd97cbe3b589cfac38213174ed88d', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=320&crop=smart&format=pjpg&auto=webp&s=cf405ff39e0869bc7884ebff9b24d6aefa4d34c4', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?width=640&crop=smart&format=pjpg&auto=webp&s=6c06623d6c60e8d98ff8e0ea3ec5579f70e08d6b', 'width': 640}], 'source': {'height': 1795, 'url': 'https://external-preview.redd.it/cWM2djlyMXI2NGViMTHOgVkPahWvWomqh2t_HMUfJF2F8f76tt_cWNKCTYfC.png?format=pjpg&auto=webp&s=2fd2afc13454228e5c65beecded757c08e576727', 'width': 807}, 'variants': {}}]} | |
Running llama2.c on budget Android | 1 | ERROR: type should be string, got "\n\nhttps://twitter.com/shxf0072/status/1683508670263595008?t=UKRUGRjCKsPHZca25k9TFg&s=19" | 2023-07-25T13:20:15 | https://v.redd.it/tzcqaoll34eb1 | esharp007 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1598d6c | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/tzcqaoll34eb1/DASHPlaylist.mpd?a=1692883229%2CZGNiYWI0MmFjNGMxZWE1YjFlMmY0ZTg2ODgzM2FmZDBjZDkxYzA0YTc1MjljNzkzZGQ3ODQxN2NmZWZkODU5OA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/tzcqaoll34eb1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tzcqaoll34eb1/HLSPlaylist.m3u8?a=1692883229%2CODM1OTVkNDBlMDM5ZmUzNWZlNDdkMjhmNTJlNzJjMzMyNzg5ZmM4MmZhZTk0NGJmMmNhZjJhNDQ1NDg1ZTAyOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tzcqaoll34eb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 486}} | t3_1598d6c | /r/LocalLLaMA/comments/1598d6c/running_llama2c_on_budget_android/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=108&crop=smart&format=pjpg&auto=webp&s=b036e9883a66425539d0246d4e19a89e6b9387c3', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=216&crop=smart&format=pjpg&auto=webp&s=c943d2156e1c98e342419978e68d0517bb265072', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=320&crop=smart&format=pjpg&auto=webp&s=fea18dac95d862356518b92b82821f5ffb5e902f', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?width=640&crop=smart&format=pjpg&auto=webp&s=513217b0a7fd7e4a6ec0ce4e6d490f9bfd43d660', 'width': 640}], 'source': {'height': 1795, 'url': 'https://external-preview.redd.it/YnhtcWtoZGwzNGViMbAnhlGNQArjLaImACO2SDQxMtnIMjJ7PWEDYq7KynAd.png?format=pjpg&auto=webp&s=6b35929425f078057148281a90be306b52afb4ff', 'width': 807}, 'variants': {}}]} | |
Need help with a prompt | 1 | I'm trying to come up with a prompt that will help me generate compact versions of the user input text. The input text itself can be a prompt.
For example: Summarize the following chat between a support agent and a customer. Inlcude a subject and the main theme in bullet points.
No matter what prompt I try with gpt-35-turbo, it's always generating a chat transcript rather than optimize the original text "Summarize the following chat between a support agent and a customer. Inlcude a subject and the main theme in bullet points." to something like "Write a summary for the following chat, include a subject as well". | 2023-07-25T12:08:41 | https://www.reddit.com/r/LocalLLaMA/comments/1596nrt/need_help_with_a_prompt/ | krumb0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1596nrt | false | null | t3_1596nrt | /r/LocalLLaMA/comments/1596nrt/need_help_with_a_prompt/ | false | false | self | 1 | null |
How to: summarization with 70B on a single 3090 | 76 | This is a post for newcomers who want to feel out what kind of context and processing speed they can get for this given hardware.
The perceived goal is to have many arvix papers in stored in prompt cache so we can ask many questions, summarize, and reason together with an LLM for as many sessions as needed.
My setup is 32gb of DDR4 RAM (2x 16gb) sticks and a single 3090.
8k
------
I can do 8k with a good 4bit (70b q4_K_M) model at 1.5 t/s, **with fast 38t/s GPU prompt processing.**
16k
------
I will get up to 16k if I purchase another pair of 16GB ramsticks. (+32gb) For this setup, I'm expecting 1t/s with 50t/s
I can run this at 0.15 t/s, but would want to buy the extra ram to free my gpu acting as ram, and get speed to 1t/s
32k
------
I OOM'd here, at 0 layers. my gpu's vram gradually filled up during prompt processing. It would be nice to find a way to prevent this, some papers are 30000 tokens.
64k
------
This may be at an impossible state rn with bad output quality. I assume more than 64gb ram will be needed. I've only assumed 32k is viable because llama-2 has double the context of llama-1
Tips:
------
If your new to the llama.cpp repo, here are some tips:
- use `--prompt-cache` for summarization
- use `-ngl` [best percentage] if you lack the RAM to hold your model
- choose an acceleration optimization: openblas -> cpu only ; clblast -> amd ; rocm (fork) -> amd ; cublas -> nvidia
You want an acceleration optimization for fast prompt processing.
Note: Currently `--prompt-cache` does not work for 70b, or when using higher context.
The idea is we want a prompt cache file for every arXiv paper to skip prompt gpu processing altogether on a re-run. Once it works, I guess it'll load instantly. You can then ask a variety of things and reload the session if you are on a different chain of thought, and do not want to mess up the current session.
I have these settings for 70B 8k:
`-ngl 35 --rope-freq-base 40000 -c 8196`
There are extra flags needed for 70b, but this is what you can expect for 32GB RAM + 24GB VRAM. The processing of a 7k segment took 38 t/s, or ~3min. I get 1.5 t/s inference on a 70b q4_K_M model, which is the best known tradeoff between speed, output quality, and size.
Thoughts:
------
- This can work with no gpu, If you cannot afford a gpu, you will have the same output quality. But every initial processing may take you a couple of hours for very large 30k context. Definitely take advantage of `--prompt-cache`. Be sure your desktop cpu can run the 7b at at-least 10t/s, maybe we could extrapolate your speed to be 1t/s on a 10x larger model.
- It can work with smaller GPUs too, like 3060. vram build-up for prompt processing may only let you go to 8k on 12gb, but maybe the `-lv` (lowvram) option may help you go farther, like 12k. Don't offload layers, buy cpu RAM.
- I don't know if alternatives like a vector database will make summarization more performant, it will definitely be cheaper, but it may miss bringing necessary information in-context
- [Video](https://www.veed.io/view/d6d9a0db-f704-410b-ac68-48aaff414221?panel=share) showing the t/s to get a feel realtime
- A [70b 8k fine-tuned model](https://old.reddit.com/r/LocalLLaMA/comments/158fydr/llongma2_13b_8k/) is said to be in the works which should increase summarization quality
- I believe that the largest model will be best at interpreting context, based on the previous feedback from users here: that say 65B is a big leap in quality from 33b (If that gap no longer tangibly exists, I'd happily use 34b)
- This setup is on Ubuntu, but there should be enough wiggle room to use Windows 10
- To me this is a more "ordinary" maxed desktop setup, which is why it's worth sharing the experience. The next step up is 2x3090 for 15 t/s, ? t/s prompt processing, and confirmed up to 16k context with everything fully in GPUs. | 2023-07-25T12:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1596m5z/how_to_summarization_with_70b_on_a_single_3090/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1596m5z | false | null | t3_1596m5z | /r/LocalLLaMA/comments/1596m5z/how_to_summarization_with_70b_on_a_single_3090/ | false | false | self | 76 | {'enabled': False, 'images': [{'id': '8V1JjqBtxV7SXgl5BmCQ77vxrgPOUPInfiT6pHh8fwI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=108&crop=smart&auto=webp&s=73b99c9a4f73ed4afcd96622306d528710def281', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=216&crop=smart&auto=webp&s=55cff3ee023ee9bb70cff3f838541034ac4161f9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=320&crop=smart&auto=webp&s=7dce4cb2d3dd1dd69846ecd216e440f86f20fe71', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=640&crop=smart&auto=webp&s=9ab6fe041223b04c6800c763ae80a2b63e12dbf2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=960&crop=smart&auto=webp&s=5a84cdf3f9b311b1af9b708058e43b86b133114c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?width=1080&crop=smart&auto=webp&s=3681e8b679e418e908e14c6ff0caa04ce6d020d4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/qaWpoap9TXNStY6gMyeYMtP_3oVyhfOZyEXMJJJva_U.jpg?auto=webp&s=5e1da3bb8cceea670e964e1a1ad99d09fe70c989', 'width': 1200}, 'variants': {}}]} |
How to get good performance with LLAMA-2 70B models on cheap AI server | 1 | [removed] | 2023-07-25T11:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1596bqx/how_to_get_good_performance_with_llama2_70b/ | mikieh976 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1596bqx | false | null | t3_1596bqx | /r/LocalLLaMA/comments/1596bqx/how_to_get_good_performance_with_llama2_70b/ | false | false | default | 1 | null |
Best model to help with college application? | 1 | A computer programmer in gap year after high school. College Application coming up soon. Got a tons of essays to write.
Want to tune a model to achieve this. Also, a model to help with finding college application resources and tips/guidelines.
Any help? Am I being too ambitious?
Also, My Spec: GTX 1650, or M1. | 2023-07-25T11:44:50 | https://www.reddit.com/r/LocalLLaMA/comments/15964qf/best_model_to_help_with_college_application/ | goodFuckingBoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15964qf | false | null | t3_15964qf | /r/LocalLLaMA/comments/15964qf/best_model_to_help_with_college_application/ | false | false | self | 1 | null |
New Open Source LLM: GOAT-7B (SOTA among the 7B models) | 20 | ​
https://preview.redd.it/hoyvbogjh3eb1.png?width=2500&format=png&auto=webp&s=13246afe453d0e164644221d79ed033e643ced85 | 2023-07-25T11:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/1595jtn/new_open_source_llm_goat7b_sota_among_the_7b/ | rempact | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1595jtn | false | null | t3_1595jtn | /r/LocalLLaMA/comments/1595jtn/new_open_source_llm_goat7b_sota_among_the_7b/ | false | false | 20 | null | |
Presenting MEDTEXT. Feel free to use it as you wish (cc-by-4.0) | 111 | https://huggingface.co/datasets/BI55/MedText
Someone interested in finetuning Llama 1 or 2 on it? Could give us a potent and interesting home diagnosis tool before you go to the doctor, or to make doctors life easier and help them in their work.
In short: It is a medical diagnosis dataset called Medtext. The dataset, which is randomly shuffled, contains over 1000 high-quality patient presentations along with diagnosis and treatments. It covers the 100 most common diseases and the 30 most common injuries that result in hospital visits, among many others. The data points range from mild to severe cases and are designed to ensure that the AI model acknowledges when it cannot answer confidently or when data is insufficient.
The dataset also includes cases where symptoms might mislead an obvious diagnosis, emergency cases, injuries from crimes, STDs, and cases specific to infants, gynecology, urology, and genetics, among others. It also focuses on previous medical mishandling, drug abuse, overdose, and drug cross side effects. Furthermore, the dataset also provides data for textual analysis of various diagnostic tests like blood tests, ultrasound, CT, MRI, and X-ray examinations.
Medtext is free to use and was categorized as 'textbook quality' by three different doctors during a quality check. The dataset ensures to emphasize that an AI can never replace a professional doctor but can only provide substitute analysis.
It includes high quality presentation and diagnosis of, among others, the following, with multiple datapoints for each (5-10)
INJURIES:
* Sprains and strains
* Fractures
* Contusions (bruises)
* Cuts and lacerations
* Concussions
* Burns
* Dislocations
* Abrasions (scrapes)
* Whiplash injuries
* Eye injuries
* Puncture wounds
* Bites and stings
* Back injuries
* Broken nose
* Knee injuries
* Ankle injuries
* Shoulder injuries
* Wrist injuries
* Chest injuries
* Head injuries
DISEASES:
* Acne
* Allergies
* Alzheimer's Disease
* Anemia
* Angina
* Anxiety Disorders
* Arthritis
* Asthma
* Atherosclerosis
* Athlete's Foot
* Attention Deficit Hyperactivity Disorder (ADHD)
* Autism Spectrum Disorder
* Back Pain
* Bipolar Disorder
* Bronchitis
* Cataracts
* Chickenpox
* Chronic Obstructive Pulmonary Disease (COPD)
* Common Cold
* Conjunctivitis (Pink Eye)
* Constipation
* Coronary Heart Disease
* Cystitis
* Dementia
* Depression
* Diabetes Type 1
* Diabetes Type 2
* Diarrhea
* Diverticulitis
* Dizziness (Vertigo)
* Ear Infections
* Eczema
* Endometriosis
* Erectile Dysfunction
* Fibromyalgia
* Flu (Influenza)
* Food Poisoning
* Gallstones
* Gastroenteritis
* Gastroesophageal Reflux Disease (GERD)
* Gout
* Hay Fever (Allergic Rhinitis)
* Headaches
* Heart Failure
* Hemorrhoids
* Hepatitis B
* Hepatitis C
* Herpes Simplex Virus (HSV)
* High Blood Pressure (Hypertension)
* High Cholesterol (Hypercholesterolemia)
* HIV/AIDS
* Hyperthyroidism (Overactive Thyroid)
* Hypothyroidism (Underactive Thyroid)
* Inflammatory Bowel Disease (Including Crohn's and Ulcerative Colitis)
* Insomnia
* Iron Deficiency Anemia
* Irritable Bowel Syndrome (IBS)
* Kidney Stones
* Lactose Intolerance
* Lyme Disease
* Macular Degeneration
* Malaria
* Menopause
* Migraine
* Multiple Sclerosis
* Obesity
* Osteoarthritis
* Osteoporosis
* Otitis Media (Middle Ear Infection)
* Pancreatitis
* Parkinson's Disease
* Peptic Ulcers
* Periodontal Disease
* Pneumonia
* Polycystic Ovary Syndrome (PCOS)
* Prostate Enlargement (Benign Prostatic Hyperplasia)
* Psoriasis
* Pulmonary Embolism
* Restless Legs Syndrome
* Rheumatoid Arthritis
* Rosacea
* Schizophrenia
* Sciatica
* Scoliosis
* Seasonal Affective Disorder (SAD)
* Sinusitis
* Skin Cancer
* Sleep Apnea
* Strokes
* Tendonitis
* Tonsillitis
* Tuberculosis
* Urinary Tract Infection (UTI)
* Varicose Veins
* Vitiligo
* Yeast Infection (Candidiasis)
* Zika Virus | 2023-07-25T09:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1593l46/presenting_medtext_feel_free_to_use_it_as_you/ | BeginningInfluence55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1593l46 | false | null | t3_1593l46 | /r/LocalLLaMA/comments/1593l46/presenting_medtext_feel_free_to_use_it_as_you/ | false | false | self | 111 | {'enabled': False, 'images': [{'id': 'Vx3n1QOXXuzortvdxjPvaEvO6q_efvkhusgGA7DPHYk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=108&crop=smart&auto=webp&s=24c711f6a9567c949f1ea6901778747d2a37270b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=216&crop=smart&auto=webp&s=8932808900b86a48fe545db9b989918ff285a01e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=320&crop=smart&auto=webp&s=3bc4b13bf5adbc63b9f45dd0923a857984727350', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=640&crop=smart&auto=webp&s=8d0ca6eaae4db9ca17efdaaedda636477c5125e1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=960&crop=smart&auto=webp&s=6d1f31dc60bf9a7e08e5ba29b114b726ad4b1dd3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?width=1080&crop=smart&auto=webp&s=ea368e03877d327392849dcdf7b7984915ebfa1e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iClMQtnwgDgz0eaBLdGfC6MawmahWl2l5IsRDSpE1Tk.jpg?auto=webp&s=d80228b71ac0e6da6a4771d22806f29b037de000', 'width': 1200}, 'variants': {}}]} |
Evaluating models for Basedness | 10 | Hi Everyone,
Has there been any work done in putting together a method of evaluating models for basedness?
A based model:
* Will argue any point you ask it to, adopting whatever ideological position you ask ("write me an essay arguing xyz").
* Will write stories about any topic, even things involving deviant sexual practices or other taboo topics.
* Won't say "as a language model..."
* Won't lecture you about ethics.
* Won't interject political positions into its responses unless asked.
I see various uncensored models that aim to do things like this, but I was wondering if there was some sort of comparison test. A friend mentioned that he saw some sort of rating (that placed llama2 with a very low score and vicuna very high or something like that) but I haven't seen any details and I haven't found much when googling. | 2023-07-25T09:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/1593afc/evaluating_models_for_basedness/ | mikieh976 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1593afc | false | null | t3_1593afc | /r/LocalLLaMA/comments/1593afc/evaluating_models_for_basedness/ | false | false | self | 10 | null |
Local llm running hardware/services recommendation help! | 1 | My current laptop has only 16gb ram,Intel h series processor, navdia graphics! I run ggml model(.bin) using llama, I can run 7b and 13b models comfortable(a little slow).
Please recommend me some hardware/services for running my local llm fast and comfortably(that can supports 40b+ models), that will not cost me a fortune. | 2023-07-25T09:07:16 | https://www.reddit.com/r/LocalLLaMA/comments/1592yk8/local_llm_running_hardwareservices_recommendation/ | InternationalMap5278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1592yk8 | false | null | t3_1592yk8 | /r/LocalLLaMA/comments/1592yk8/local_llm_running_hardwareservices_recommendation/ | false | false | self | 1 | null |
Compiling Llama2.C to WebAssembly for Cross Platform Lightweight Deployment | 5 | 2023-07-25T09:03:46 | https://medium.com/@michaelyuan_88928/running-llama2-c-in-wasmedge-15291795c470 | smileymileycoin | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1592w0h | false | null | t3_1592w0h | /r/LocalLLaMA/comments/1592w0h/compiling_llama2c_to_webassembly_for_cross/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'oqFzJ_TjcgTVI9tJh3gAwRFpO1Cenh4NZK1vLhYNG5E', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=108&crop=smart&auto=webp&s=af0e1de1d5d48483c9d28727d2d79f41201b2e64', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=216&crop=smart&auto=webp&s=1aa2b6abf165a9e329f93c5164328f1d8c505948', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=320&crop=smart&auto=webp&s=de26e3aba7f50891f81ac62a4bd74192b8d1c929', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=640&crop=smart&auto=webp&s=f425bf0664ecbe2ba194c5707940137d2f120f79', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?width=960&crop=smart&auto=webp&s=fd44a303c23bf5f4f888b1b439995b9532d4b989', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/B3lrZQLh3HSjAmCWMiv412_rz2jE6NFRl_C2SStvSj8.jpg?auto=webp&s=3c1e7e4611eaf689333e51ea8c2fc951b24f09f4', 'width': 1024}, 'variants': {}}]} | ||
error in downloading the model on kaggle | 1 | [removed] | 2023-07-25T07:36:10 | https://www.reddit.com/r/LocalLLaMA/comments/1591737/error_in_downloading_the_model_on_kaggle/ | aharneish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1591737 | false | null | t3_1591737 | /r/LocalLLaMA/comments/1591737/error_in_downloading_the_model_on_kaggle/ | false | false | self | 1 | null |
Is there an uncensored version of llama 2 chat 70b? | 6 | As per the title. Looking for an uncensored chat version of metas llama 2 chat 70b model. | 2023-07-25T07:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1590z63/is_there_an_uncensored_version_of_llama_2_chat_70b/ | bumblebrunch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1590z63 | false | null | t3_1590z63 | /r/LocalLLaMA/comments/1590z63/is_there_an_uncensored_version_of_llama_2_chat_70b/ | false | false | self | 6 | null |
Any cool projects built with https://github.com/karpathy/llama2.c ? | 7 | I stumbled across Karpathy's llama2 and wanted to play around for a bit. I am not sure what I can do though. Are there examples of cool projects I can draw inspiration from? | 2023-07-25T07:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/1590ymj/any_cool_projects_built_with/ | Soli__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1590ymj | false | null | t3_1590ymj | /r/LocalLLaMA/comments/1590ymj/any_cool_projects_built_with/ | false | false | self | 7 | null |
Llama 2 based Guanaco and Airoboros 70B are a significant downgrade for fiction writing | 91 | My primary usage for these large models is in fiction writing, and I must say, I've noticed a significant downgrade in their performances compared to their llama 1 65b predecessors.
The content they generate tends to be simplistic, not as comprehensive or extended as I'd like, even when I specifically prompt them to write lengthier pieces. This issue seems rather acute with Guanaco 70B. It's quite surprising and disappointing.
It's left me wondering why this might be. I understand these models are made for general use and aren't necessarily optimized for fiction writing, but the disparity in performance is notable nonetheless.
Has anyone else experienced similar issues? | 2023-07-25T06:42:16 | https://www.reddit.com/r/LocalLLaMA/comments/159064y/llama_2_based_guanaco_and_airoboros_70b_are_a/ | Big_Communication353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159064y | false | null | t3_159064y | /r/LocalLLaMA/comments/159064y/llama_2_based_guanaco_and_airoboros_70b_are_a/ | false | false | self | 91 | null |
help required!! | 1 | Hello I am new to this community.
I have a few text files that i want to finetune LLaMa 2 7b with. I have previously finetuned BERT and GPT2 models before and that was fairly easy. But i have no idea when it comes to LLaMA. Any help would be appreciated. | 2023-07-25T06:08:12 | https://www.reddit.com/r/LocalLLaMA/comments/158zjjt/help_required/ | aharneish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158zjjt | false | null | t3_158zjjt | /r/LocalLLaMA/comments/158zjjt/help_required/ | false | false | self | 1 | null |
dont get why my previous post was removed | 1 | [removed] | 2023-07-25T05:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/158z74s/dont_get_why_my_previous_post_was_removed/ | imnotdone2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158z74s | false | null | t3_158z74s | /r/LocalLLaMA/comments/158z74s/dont_get_why_my_previous_post_was_removed/ | false | false | self | 1 | null |
dont get why my previous post was removed when i posted a question of my issue | 1 | [removed] | 2023-07-25T05:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/158z6lg/dont_get_why_my_previous_post_was_removed_when_i/ | imnotdone2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158z6lg | false | null | t3_158z6lg | /r/LocalLLaMA/comments/158z6lg/dont_get_why_my_previous_post_was_removed_when_i/ | false | false | self | 1 | null |
How to use prompt templates? | 1 | I'm using the Oobabooga Web UI to run some models locally. The model cards on HF will often provide templates like this:
<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]
However I am not sure how to read these templates and how to use them in Oobabooga. Is there some documentation which describes the meaning of parameters like <s> or \[INST\] etc. and how to use these templates?
Thanks for any pointers! | 2023-07-25T05:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/158yltk/how_to_use_prompt_templates/ | andy_potato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158yltk | false | null | t3_158yltk | /r/LocalLLaMA/comments/158yltk/how_to_use_prompt_templates/ | false | false | self | 1 | null |
What's the difference between the "context" fields under "Character" and "Instruction Template" tabs in text generation webui? | 8 | I'm a bit confused because the documentation didn't seem to cover this distinction very clearly, and the default examples of both seem rather similar. They both have the name "context" and seem to give direction to the model in a similar way. Should I use these two fields for different purposes? If I use one does that mean it's unnecessary to use the other?
So far I've been putting nearly the same "You are such and such, and you are talking to the user about such and such" context in both fields, and it does the thing I want, but I'm wondering if putting nearly the same thing in both context fields is unnecessary or counterproductive. | 2023-07-25T04:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/158y3k7/whats_the_difference_between_the_context_fields/ | ascendant23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158y3k7 | false | null | t3_158y3k7 | /r/LocalLLaMA/comments/158y3k7/whats_the_difference_between_the_context_fields/ | false | false | self | 8 | null |
PC Build for running Llm | 1 | [removed] | 2023-07-25T03:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/158wn3k/pc_build_for_running_llm/ | Any-Cobbler6161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158wn3k | false | null | t3_158wn3k | /r/LocalLLaMA/comments/158wn3k/pc_build_for_running_llm/ | false | false | self | 1 | null |
PSA: How to stay safe when using LLMs locally | 0 | *Feel free to delete this if this is not helpful.*
To people wondering why commercial LLMs are so heavily censored for seemingly no reason, consider this:
If there was a real, credible example of an LLM generating illegal content such as child pornography, it would be enormously harmful to everyone involved. It would be illegal for someone to share that content or the repro steps, and it could even be illegal to admit that the model ever generated it, because that might be construed as an admission to past possession of illegal content.
In other words, as a user you should **prepare for the possibility that your LLM may output illegal content** when you least expect it.
**Here are some tips to keep yourself safe**:
* Implement a list of phrases to ban while generating. [Example list](https://github.com/SaviorXTanren/mixer-mixitup/blob/ace208b6e90dfabc962da282ebc685dc9b34acd4/MixItUp.WPF/Assets/CommunityBannedWords.txt) (CW: NSFW)
* Don't output raw content to a public forum without NSFW detection.
* Don't store logs of your model's raw outputs, especially if others besides you are using the model.
* If you're running a service that serves raw outputs, warn your users about the possibility of a model generating unsafe content, and make sure your users accept full responsibility for anything the model generates. This is important even if the service is just for your friends!
Hope this helps. Stay safe everyone, and happy generating! | 2023-07-25T03:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/158wcqm/psa_how_to_stay_safe_when_using_llms_locally/ | seattlesweiss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158wcqm | false | null | t3_158wcqm | /r/LocalLLaMA/comments/158wcqm/psa_how_to_stay_safe_when_using_llms_locally/ | false | false | nsfw | 0 | {'enabled': False, 'images': [{'id': 'BtAEfEMcMXwsqtk9iARuuVdG-a006gktTBp7VW7vTrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=108&crop=smart&auto=webp&s=842fecd3f0ace72439d3de3c93d1fc6c058d6191', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=216&crop=smart&auto=webp&s=932a89e6cadf95d21f915a8abb853dbae2cf731c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=320&crop=smart&auto=webp&s=8d0a918a73ead1d89de04d0b0d80f4376d9b729d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=640&crop=smart&auto=webp&s=3074949c81c4bce6f882c7e8b14c498e7010530a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=960&crop=smart&auto=webp&s=20473afb5841ff16445a4bc0910627f8a4ee439e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=1080&crop=smart&auto=webp&s=01cd9611b0097932e5e3a3cee9659b4ee6ff066a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?auto=webp&s=61108afe2b6ece878cdbba5fd77aaaf9ef5137ed', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f2ac5b47563435e5b18f56a6448d340063acdb83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=661ed65c77199cedf199ac7841a1f25588cbd946', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=402e4196c9227e1daaa5104b8a7deb047ad1a5a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6e4d5a7b8997b788793e0924bb348a2d06d0c14f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=d8ce2cff2291f5c3392517e0a64ec33038447038', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=2a5e7055b55ade0fde95932a65ffcd44bc23a344', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?blur=40&format=pjpg&auto=webp&s=2590c0552e3dfab5898c0d16a802facde57a00d9', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f2ac5b47563435e5b18f56a6448d340063acdb83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=661ed65c77199cedf199ac7841a1f25588cbd946', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=402e4196c9227e1daaa5104b8a7deb047ad1a5a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6e4d5a7b8997b788793e0924bb348a2d06d0c14f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=d8ce2cff2291f5c3392517e0a64ec33038447038', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=2a5e7055b55ade0fde95932a65ffcd44bc23a344', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4BrfIZkwrL5h3tQLwDokGbGJ8CmJXCbsK2tZs6AdQME.jpg?blur=40&format=pjpg&auto=webp&s=2590c0552e3dfab5898c0d16a802facde57a00d9', 'width': 1200}}}}]} |
Fine-tuning Llama 2 not affecting output | 7 | I have been trying to fine-tune Llama 2 (7b) for a couple of days and I just can’t get it to work.
I tried both the base and chat model (I’m leaning towards the chat model because I could use the censoring), with different prompt formats, using LoRA (I tried TRL, LlamaTune and other examples I found).
It doesn’t fail, but when I run the fine-tuned model, I don’t see any difference in the output, it’s like nothing changed. Do you have any ideas on what could be happening? Or a guide that worked for you I could follow?
Thanks! | 2023-07-25T03:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/158w33i/finetuning_llama_2_not_affecting_output/ | federicog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158w33i | false | null | t3_158w33i | /r/LocalLLaMA/comments/158w33i/finetuning_llama_2_not_affecting_output/ | false | false | self | 7 | null |
Serving Llama2 (llama.cpp) Terminal as a URL via ttyd and ngrok | 7 | - Build a docker image that has all the python and C dependencies
- Make llama.cpp and quantize the model
- Install ttyd in your docker image
- Create a shell script that just runs the llama.cpp main rpogram
- Run your main program as a ttyd service (it should run on localhost 7681) pointing to the shell script
- Install on your host machine ngrok and configure it
- Run ngrok 7681
- Now you can access llama2 terminal version on the go via a relatively private and temporary URL
- Use with whatever caution you feel is appropriate when serving a local process on your computer as a publicly available web service | 2023-07-25T02:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/158vksy/serving_llama2_llamacpp_terminal_as_a_url_via/ | Happy_Chicken9835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158vksy | false | null | t3_158vksy | /r/LocalLLaMA/comments/158vksy/serving_llama2_llamacpp_terminal_as_a_url_via/ | false | false | self | 7 | null |
how i increase the memory or context size so it remember previous responses?, new to llama currently trying 13b q6k | 1 | 2023-07-25T02:32:53 | imnotdone2020 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 158v4hz | false | null | t3_158v4hz | /r/LocalLLaMA/comments/158v4hz/how_i_increase_the_memory_or_context_size_so_it/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Rzt0qDIuiv-pFFSWsLm6CarnI6HVjLBlhIo2gYVuVJw', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=108&crop=smart&auto=webp&s=1247d96a6a4b141e64701dcca4599a203cecc733', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=216&crop=smart&auto=webp&s=4eb4b4a000bf09340d3ef16cda3e3e3f068c8b74', 'width': 216}, {'height': 249, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=320&crop=smart&auto=webp&s=b23122c263a92f4562d87b5468a8e354933f63f9', 'width': 320}, {'height': 498, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=640&crop=smart&auto=webp&s=3017c854021f46b17ed623d3a42a739d68ab8d81', 'width': 640}, {'height': 747, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=960&crop=smart&auto=webp&s=8b844525775e2d23b39e90ff236715c72262fc35', 'width': 960}, {'height': 841, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?width=1080&crop=smart&auto=webp&s=9ca3d8f48050f519a9e33f229d501d2aa50950d9', 'width': 1080}], 'source': {'height': 867, 'url': 'https://preview.redd.it/rc72ftyxv0eb1.png?auto=webp&s=187c647a61d71266d631078ded12a5f5f94c383a', 'width': 1113}, 'variants': {}}]} | |||
Llama 2 Airoboros 7/13/70B GPTQ/GGML Released! | 67 | [Find them on TheBloke's huggingface page!](https://huggingface.co/TheBloke) | 2023-07-25T01:11:18 | https://www.reddit.com/r/LocalLLaMA/comments/158t97t/llama_2_airoboros_71370b_gptqggml_released/ | ThroughForests | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158t97t | false | null | t3_158t97t | /r/LocalLLaMA/comments/158t97t/llama_2_airoboros_71370b_gptqggml_released/ | false | false | self | 67 | {'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]} |
Is there a path to keep training LLAMA2 and make it more parameters? Like 100 B+ | 1 | [removed] | 2023-07-25T00:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/158stjm/is_there_a_path_to_keep_training_llama2_and_make/ | aiyeti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158stjm | false | null | t3_158stjm | /r/LocalLLaMA/comments/158stjm/is_there_a_path_to_keep_training_llama2_and_make/ | false | false | self | 1 | null |
Is it possible for Intel GPU offloading support in llama.cpp? | 1 | Title. | 2023-07-25T00:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/158slbk/is_it_possible_for_intel_gpu_offloading_support/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158slbk | false | null | t3_158slbk | /r/LocalLLaMA/comments/158slbk/is_it_possible_for_intel_gpu_offloading_support/ | false | false | self | 1 | null |
There is a moat | 22 | I've looked at most of the top models in the opensource community and frankly none of them match up to ChatGPT3.5Turbo even the new freewilly models and dolphin are only really that good because the major LLM Labs are giving out freebies when it comes to research breakthroughs, findings and even outright models.
currently its not a stretch to say opensource is dependent on closedsource when it comes to quality research that adds more towards the goal of better AI models. This isn't to demean opensource and all the hardworking people creating and giving out these models for free but the sentiment of opensource taking over and creating stronger faster and cheaper models than the billion dollar companies is currently unwarranted.
What do yall think? | 2023-07-25T00:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/158s2fk/there_is_a_moat/ | iuwuwwuwuuwwjueej | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158s2fk | false | null | t3_158s2fk | /r/LocalLLaMA/comments/158s2fk/there_is_a_moat/ | false | false | self | 22 | null |
Best instruction tuning data sets | 2 | Well based on wizard it seems evolv instruct is the best method for creating/ extending these data sets but none of these are truly open source!
You can’t train commercial models on outputs from gpt4. So what instruction fine tuning and coding data sets are available for commercial use?
Is llama 70b good enough to the point it could perform the evolv-instruct methodology? | 2023-07-24T22:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/158pw7n/best_instruction_tuning_data_sets/ | Artistic_Load909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158pw7n | false | null | t3_158pw7n | /r/LocalLLaMA/comments/158pw7n/best_instruction_tuning_data_sets/ | false | false | self | 2 | null |
Which quantization algo is used in GGML | 1 | [removed] | 2023-07-24T22:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/158p4y8/which_quantization_algo_is_used_in_ggml/ | WorldlinessStock7270 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158p4y8 | false | null | t3_158p4y8 | /r/LocalLLaMA/comments/158p4y8/which_quantization_algo_is_used_in_ggml/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'puSKhJCjtP90saXUDxVRXRJZYeAvhS54tR1J9gM46pc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=108&crop=smart&auto=webp&s=4750dcff2298b8cf4762bae200c0a43f33528b81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=216&crop=smart&auto=webp&s=7e375a65ffd3a86d93bc7aef77cfc1d6d449cfe4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=320&crop=smart&auto=webp&s=4cb7e6068862d0a39f1a8466f70edb5cf8a82998', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=640&crop=smart&auto=webp&s=4c58a64f8f1fdca18cb1200cd60e31f768c46487', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=960&crop=smart&auto=webp&s=c6c657ec01527180159b6070f1b7685127b36c51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?width=1080&crop=smart&auto=webp&s=ea90616c59ddad437b9868af85ec7465820586da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s14BzTXxr9YSNelCS9AmUY5eOXYj2yuhTIOzeLooj6M.jpg?auto=webp&s=e2b1e135c4bbb65ab6cdbf6e1b51f158ce08790e', 'width': 1200}, 'variants': {}}]} |
I want to attempt fine tuning the llama2 7b base pre-trained model, what's the difference between "7b" and "7b-hf" ? | 2 | Title pretty much says it all, I'm seeing two base models for llama2 that can be used for fine-tuning as far as I can tell:
meta-llama/Llama-2-7b-hf
meta-llama/Llama-2-7b
the -hf one has 27k downloads, meanwhile the other has 0. Sorry if this is totally obvious and I'm dumb, but thanks in advance to anyone who can explain :)
Bonus question: With Qlora + SFTTrainer, it often asks you to specific "target\_modules" within the pre-trained model to fine tune. Some models expect: \["q\_proj", "v\_proj"\] , falcon 7b expects: \["query\_key\_value"\]. Appreciate if anyone knows this setting as well!
​ | 2023-07-24T22:19:07 | https://www.reddit.com/r/LocalLLaMA/comments/158p1nb/i_want_to_attempt_fine_tuning_the_llama2_7b_base/ | cmndr_spanky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158p1nb | false | null | t3_158p1nb | /r/LocalLLaMA/comments/158p1nb/i_want_to_attempt_fine_tuning_the_llama2_7b_base/ | false | false | self | 2 | null |
Elon Musk's AI is the only hope for uncensored public LLMs. | 0 | Because Elon has shown time and time again at great personal detriment to himself; "I don't care". He simply does NOT care what the media think, and will release an uncensored model 'for the lulz' if nothing else. | 2023-07-24T22:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/158oy8x/elon_musks_ai_is_the_only_hope_for_uncensored/ | BrisbaneSentinel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158oy8x | false | null | t3_158oy8x | /r/LocalLLaMA/comments/158oy8x/elon_musks_ai_is_the_only_hope_for_uncensored/ | false | false | self | 0 | null |
Can someone make a summary of everything that happened in the world of Locall LLaMAs? (Similar to the one I made) | 1 | [removed] | 2023-07-24T21:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/158ng81/can_someone_make_a_summary_of_everything_that/ | Unreal_777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158ng81 | true | null | t3_158ng81 | /r/LocalLLaMA/comments/158ng81/can_someone_make_a_summary_of_everything_that/ | false | false | self | 1 | null |
This is secretly the best LLM community | 210 | I am going to rip all of your text and summarize it | 2023-07-24T21:12:56 | https://www.reddit.com/r/LocalLLaMA/comments/158na2r/this_is_secretly_the_best_llm_community/ | hanjoyoutaku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158na2r | false | null | t3_158na2r | /r/LocalLLaMA/comments/158na2r/this_is_secretly_the_best_llm_community/ | false | false | self | 210 | null |
Any Dolphin 13B reviews? Eric Hartford put a lot of work into it apparently | 21 | Somebody in this subreddit said its reasoning is similar to gpt-3.5
https://huggingface.co/ehartford/dolphin-llama-13b | 2023-07-24T21:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/158n860/any_dolphin_13b_reviews_eric_hartford_put_a_lot/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158n860 | false | null | t3_158n860 | /r/LocalLLaMA/comments/158n860/any_dolphin_13b_reviews_eric_hartford_put_a_lot/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'YKiot_Q22XfryiCaBl_0MwhSN7Bzs043P6HpTAXEniQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=108&crop=smart&auto=webp&s=65f8a2315820dd0a5647e79e0b4b1d1ff6943f0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=216&crop=smart&auto=webp&s=412f66f79979825b257f543b21cf5feae9a063e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=320&crop=smart&auto=webp&s=d7f85efe181ab675f32412e3482321235811e8da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=640&crop=smart&auto=webp&s=089cd3cf23a9973980ab6a24da5050180272bd9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=960&crop=smart&auto=webp&s=bc9d50f79839256103c35877d4092ba0dc3e3fb0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?width=1080&crop=smart&auto=webp&s=fd0eb8dbb14058731b099490967a61b134975037', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GXO_cTjqoU2rFudAxf9HjcCo9ZCyNPystMGDPP1TkCw.jpg?auto=webp&s=6fd1fb10f5e28502ea2ed3b4fc265d86151f8b42', 'width': 1200}, 'variants': {}}]} |
Optimal model and setup targeted for a 4090 with a fast CPU | 5 | There are a lot of discussions involving different number of param's and quantization and the best models.
Given that a 4090 is the best 'consumer' gpu and has 24GB's of VRAM what is the best model to use in such a case? While I could go out and buy 128GB's of CPU memory and perhaps run a 70B model would it be too slow? I believe I've heard that some tools allow most of a model to be loaded on the GPU it can keep some layers in the CPU memory and you can therefore get the perf boost of a GPU.
I have a i9-13900K so both the CPU and GPU are very fast. I want to maximize my high end home system and match the best LLM setup with it. I assume that 4 bit's will be the best choice.
The bottom line is what model and size gives the best quality and totally or mostly fits in the GPU memory?
As a follow up, if I were to look into fine tuning a model would it be correct to assume I'd need to work with a smaller full fp16 model that 'comfortably' fits the 24GB's of a 4090 with some memory to spare? | 2023-07-24T20:42:22 | https://www.reddit.com/r/LocalLLaMA/comments/158mfvr/optimal_model_and_setup_targeted_for_a_4090_with/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158mfvr | false | null | t3_158mfvr | /r/LocalLLaMA/comments/158mfvr/optimal_model_and_setup_targeted_for_a_4090_with/ | false | false | self | 5 | null |
Looking for an instruct-tuned (ORCA dataset or similar) version of Llama-2-13B | 3 | I am looking for a model like this to replace a GPT-3.5 usecase in a software application I am building.
I saw that ehartford on HuggingFace recently did some tuning for a Dolphin model (Llama v1 based, with ORCA dataset), and while he did mention training future models like the Llama v2 model in the same way, I wanted to see if there are already models like this available to use. | 2023-07-24T20:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/158lv8c/looking_for_an_instructtuned_orca_dataset_or/ | blevlabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158lv8c | false | null | t3_158lv8c | /r/LocalLLaMA/comments/158lv8c/looking_for_an_instructtuned_orca_dataset_or/ | false | false | self | 3 | null |
Opentensor and Cerebras announce BTLM-3B-8K, a 3 billion parameter state-of-the-art open-source language model that can fit on mobile devices | 215 | \[Note: I work for Cerebras\]
Cerebras and Opentensor announced at ICML today BTLM-3B-8K (Bittensor Language Model), a new state-of-the-art 3 billion parameter open-source language model that achieves leading accuracy across a dozen AI benchmarks.
BTLM fits on mobile and edge devices with as little as 3GB of memory, helping democratize AI access to billions of devices worldwide.
BTLM-3B-8K Highlights:
* 7B level model performance in a 3B model
* State-of-the-art 3B parameter model
* Optimized for long sequence length inference 8K or more
* First model trained on the SlimPajama, the largest fully deduplicated open dataset
* Runs on devices with as little as 3GB of memory when quantized to 4-bit
* Apache 2.0 license for commercial use.
BTLM was commissioned by the Opentensor Foundation for use on the Bittensor network. Bittensor is a blockchain-based network that lets anyone contribute AI models for inference, providing a decentralized alternative to centralized model providers like OpenAI and Google. Bittensor serves over 4,000 AI models with over 10 trillion model parameters across the network.
BTLM was trained on the newly unveiled Condor Galaxy 1 (CG-1) supercomputer, the first public deliverable of the G42 Cerebras strategic partnership. We would like to acknowledge the generous support of G42 Cloud and the Inception Institute of Artificial Intelligence. We’d also like to thank our partner Cirrascale, who first introduced Opentensor to Cerebras and provided additional technical support. Finally, we'd like to thank the Together AI team for the RedPajama dataset.
To learn more, check out the following:
* Blog: [https://www.cerebras.net/blog/btlm-3b-8k-7b-performance-in-a-3-billion-parameter-model/](https://www.cerebras.net/blog/btlm-3b-8k-7b-performance-in-a-3-billion-parameter-model/)
* Model on Hugging Face: [https://huggingface.co/cerebras/btlm-3b-8k-base](https://huggingface.co/cerebras/btlm-3b-8k-base)
https://preview.redd.it/5xhrdxvfxydb1.png?width=2000&format=png&auto=webp&s=f89a5ee8a72798d2bc9792879f8811c0d6b11716 | 2023-07-24T19:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/158l6s4/opentensor_and_cerebras_announce_btlm3b8k_a_3/ | CS-fan-101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158l6s4 | false | null | t3_158l6s4 | /r/LocalLLaMA/comments/158l6s4/opentensor_and_cerebras_announce_btlm3b8k_a_3/ | false | false | 215 | null | |
Andrei Karpathy's nanoGPT: The Missing Lecture | 66 | Like many of you I have followed Andrei's YouTube series that concluded with the one about training nanoGPT. Like many (judging by the YouTube comment section), I felt that he hinted at the importance of RLHF (reinforcement learning with human feedback) for customizing GPT but left us hanging by never fully developing that idea. So, I decided to learn about using RLHF for GPT models and implemented a Google Colab notebook that captured my findings. Specifically, I used RLHF to "instruct" the name generation model to generate the kinds of names that I like, e.g. ones that are 3 letters with a vowel in the middle (e.g. sam) or ones that have repeated letters, e.g. (aaron). For RL I used both vanilla policy gradient and the more widely used Proximal Policy Optimization(PPO). If you've followed Andrei's series, it should look familiar as it uses the same "names" dataset as Andrei in one of his earlier series. Of course, you can reuse my code with other datasets, like the Shakespeare dataset that he uses in the GPT YouTube video.
Long story short, here's the linky -> [https://colab.research.google.com/github/osipov/nanorlhf/blob/main/example/nanoRLHF.ipynb](https://colab.research.google.com/github/osipov/nanorlhf/blob/main/example/nanoRLHF.ipynb) | 2023-07-24T19:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/158l4f0/andrei_karpathys_nanogpt_the_missing_lecture/ | osipov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158l4f0 | false | null | t3_158l4f0 | /r/LocalLLaMA/comments/158l4f0/andrei_karpathys_nanogpt_the_missing_lecture/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
Functionary: New Open source LLM that can execute functions and plugins | 61 | Hi,
I just released a Llama 2 based model that can decide to run function(s) that you provide, or it doesn't run when its not necessary. It's equivalent of function calling feature in OpenAI GPT.
So if you ask something that is not related to the defined functions, it answers without calling a function, as opposed to enforcing some tokens like MS Guidance. Its also capable of handling multi turn conversations and it can decide to run a function in the middle of the conversation if its necessary. AFAIK, this is the first open model that is capable of doing this for any type of functions.
I'm planning to support existing ChatGPT plugins too, with zero modification to the plugins.
I think we need a custom UI for this (I need some help on frontend)
Repo: [https://github.com/musabgultekin/functionary](https://github.com/musabgultekin/functionary) | 2023-07-24T19:52:58 | https://www.reddit.com/r/LocalLLaMA/comments/158l28c/functionary_new_open_source_llm_that_can_execute/ | yiyecek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158l28c | false | null | t3_158l28c | /r/LocalLLaMA/comments/158l28c/functionary_new_open_source_llm_that_can_execute/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'UKwy4c_Pc-EN9leoUG6oP-2cZTvudMrdI22C1pHIyRY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=108&crop=smart&auto=webp&s=aaf8b634d7b4a8d9119714c39aaecff67e652313', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=216&crop=smart&auto=webp&s=4129ec2f35eb9de1e318b152c0980fdb13b7a766', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=320&crop=smart&auto=webp&s=c8012305d73eec1d151d08a71d6eed644bea8cea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=640&crop=smart&auto=webp&s=f6e1a09391d06c4eff77b251c7501f3fd712ba4b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=960&crop=smart&auto=webp&s=a72b7f104f58c1cbfa17db25814207f0368a469c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?width=1080&crop=smart&auto=webp&s=c78ae4a077a892117de67e038b24b302d0044aa2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4-dSa-1LD0r3-Lc-Fxmywwut-LDiqlYf4rsO2mNhOHY.jpg?auto=webp&s=e5199c184ac5beec29dcc2002ec0a1a3adb0beed', 'width': 1200}, 'variants': {}}]} |
Researcher claims ALL transformer models degraded by a formula bug - but there’s a simple solution | 303 | https://www.evanmiller.org/attention-is-off-by-one.html | 2023-07-24T19:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/158kjwq/researcher_claims_all_transformer_models_degraded/ | PookaMacPhellimen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158kjwq | false | null | t3_158kjwq | /r/LocalLLaMA/comments/158kjwq/researcher_claims_all_transformer_models_degraded/ | false | false | self | 303 | {'enabled': False, 'images': [{'id': 'XN6nJmCoz1jke6HhkWy8W04R-pfEt53_RgXmy1_GTw4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=108&crop=smart&auto=webp&s=cd1b01ead20b5ff250777f7c02d72bf18fc42fcd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=216&crop=smart&auto=webp&s=d1dc97e894fd6c75db6616818885c3397092447c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=320&crop=smart&auto=webp&s=cbc62519ef4c3c11fe29caa066bd9703cc86ccf5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=640&crop=smart&auto=webp&s=8c75aef76fde99583272f7bf9066895fc4613287', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=960&crop=smart&auto=webp&s=6feeb8a796228a4996a10b51e01906f0b8d2ddad', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?width=1080&crop=smart&auto=webp&s=db2a402dd03859ffb3d2c79fd8da23734ba877bf', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/XXv_pWuWeZqLjsQrNHsp_GjxdxKYco-Rxk37cGVZo40.jpg?auto=webp&s=dd0743f11b40e1b0b12f751d81632357efd06840', 'width': 2400}, 'variants': {}}]} |
Weighing my options for running LLMs locally, want some input on GPU setups vs Apple silicon vs cloud renting | 13 | I'm interested in running llms locally. Right now I have a modest, 5 year old gaming laptop that can run 7B models without much issue and 13b models as well, albeit 13b models are quite slow (I get around 1-2 tokens/sec with them). I would love to be able to try out 30b or 70b parameter model sizes but there's no way my laptop could handle those. I'm trying to weigh my options for running these larger model sizes and I'm having some difficulty deciding what to do. I guess I see three options:
1. Get a better (desktop) setup with potentially multiple Nvidia GPUs. This is probably my least ideal setup because I personally really love the portability of laptop computers and I don't have that much space for a desktop setup right now.
2. Get an apple computer that can run the model sizes I'm interested in. This option is tempting for me. The way understand it, apple silicon allows for the onboard ram to be used as vram and thus allows large models to be run even on a macbook. I've seen examples of people running 30b models on a macbook and getting pretty good performance. I can get a macbook pro with an M2 Max chip and 96GB of unified memory right now which would easily fit a 70b parameter model. This is more memory than you could get out of two consumer grade nvidia GPUs. The main issue I have is the price! I'm looking at 4k for one of those machines! Also there's rumors that apple will launch the M3 this fall, so it would suck to spend all that money on something that will almost immediately become dated.
3. Rent GPU instances on a cloud. This would be cheaper than buying a new computer in the short term. But it is less convenient and I'm not sure how difficult the setup would be for this, I've never tried something like that before.
Thoughts? I've been thinking about replacing my aging laptop for some time now, but I'm so hesitant to bite the bullet on something as expensive as a mac. | 2023-07-24T19:11:31 | https://www.reddit.com/r/LocalLLaMA/comments/158jwv8/weighing_my_options_for_running_llms_locally_want/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158jwv8 | false | null | t3_158jwv8 | /r/LocalLLaMA/comments/158jwv8/weighing_my_options_for_running_llms_locally_want/ | false | false | self | 13 | null |
Nous Hermes Llama2 vs. Redmond Puffin 13B | 67 | I've just finished a thorough evaluation (multiple hour-long chats with 274 messages total over both [TheBloke/Nous-Hermes-Llama2-GGML (q5_K_M)](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML) and [TheBloke/Redmond-Puffin-13B-GGML (q5_K_M)](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)) so I'd like to give my feedback.
Tested both with my usual setup ([koboldcpp](https://github.com/LostRuins/koboldcpp), [SillyTavern](https://github.com/SillyTavern/SillyTavern), and [simple-proxy-for-tavern](https://github.com/anon998/simple-proxy-for-tavern) - I've posted more details about it [in this post over here](https://www.reddit.com/r/LocalLLaMA/comments/14riib1/sillytavern_18_released/)) and deterministic settings. For each model, I used two characters and two conversations, one text chat and one roleplay session.
**Hermes**
In the text chat, Nous Hermes Llama2 was absolutely amazing. It was an excellent conversationalist (asked interesting follow-up questions to keep the chat going), creative (came up with its own ideas), adhered to the character definition and background, and it was plain fun and engaging. The only issue was that it kept adding the emoticon I used in the greeting message to all its messages, but that can be fixed by editing the messages until it "unlearns" the unwanted addition.
In the roleplay session, Nous Hermes Llama2 was also good. However, it started a bit bland since it didn't use emotes to describe its actions at first - but once I did some action emotes of my own, it started using them as well, making the conversation much more engaging and lively.
**Puffin**
In the text chat, Puffin was bland compared to Hermes, without any notable achievements. It kept adding smileys because the greeting message had one, but at least it was varying them instead of using the same one like Hermes did. Still, Hermes was a much better conversationalist, more creative, and much more enjoyable.
But then, in the roleplay session, Puffin was absolutely amazing. It started emoting right out of the gate and described its action in excellent prose, making the conversation very realistic and lively. The model wrote creatively and was able to take the lead, developing its own ideas. I loved it - until at around 3K tokens, when the annoying [Llama 2 repetition problem](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) kicked in and Puffin started to repeat and loop over the same patterns, ruining the conversation.
**Results**
I wonder why Nous Hermes Llama2 doesn't suffer from the repetition problem that ruins Puffin and also the other Llama 2 models I tested like [TheBloke/llama-2-13B-Guanaco-QLoRA-GGML](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML).
So for now, I'll use Nous Hermes Llama2 as my current main model, replacing my previous LLaMA (1) favorites Guanaco and Airoboros. Those were 33Bs, but in my comparisons with them, the Llama 2 13Bs are just as good and equivalent to 30Bs thanks to the improved base.
**TL;DR:** [TheBloke/Nous-Hermes-Llama2-GGML · q5_K_M](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML) is great, doesn't suffer from [repetition problems](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/), and has replaced my LLaMA (1) mains Guanaco and Airoboros for me, for now! | 2023-07-24T18:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158j9r9 | false | null | t3_158j9r9 | /r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/ | false | false | self | 67 | {'enabled': False, 'images': [{'id': 'FK6TjNe5HVYpa9zi2Omx6VMEXcViuqLg7IzMF4cMR3s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=108&crop=smart&auto=webp&s=bf434d2ece2083c012ff1b9151b11c2c7297a80c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=216&crop=smart&auto=webp&s=4034c5aee76370467aa8aa4e3b4a69ece4b2441c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=320&crop=smart&auto=webp&s=d2b5bbe5cc82b00fdc39bd0a001828625d862e7e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=640&crop=smart&auto=webp&s=76821386ec125ace0886b4dd8d9785b65372f415', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=960&crop=smart&auto=webp&s=4ccc476370d1ce1eda39c23b8a7962ec0d842dd8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?width=1080&crop=smart&auto=webp&s=d442bb3e850154849be7ad2d864bc5a8a6352e39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1u6cVYD1HkfDZ8FTckPzx2zjfNn914JR4Sj281lzX7M.jpg?auto=webp&s=b18cb1922136f5fbf38adb98f88ad51311bcd192', 'width': 1200}, 'variants': {}}]} |
Solving NER task. Entities recognition. | 2 | I have a task of receiving entities from the file created by OCR.
As far as I understand, I should use some kind of BERT similar model. Also, document is not in English language, so should I seek this-language-bert?
It is important that custom data labels are needed (person, organization, etc. from the default one are unsuitable).
Am I right in thinking that I should choose Bert (base version), in order to make finetune myself on the data marked with my labels?
Additionel questions:
1) which of these Berts is the very "basic model" that should be taken for fine tune?
2) Is it possible to use a generative approach for this purpose, in particular, for example, use LLM like LLAMA-2?
Thank you in advance. | 2023-07-24T18:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/158ia28/solving_ner_task_entities_recognition/ | Arkenston | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158ia28 | false | null | t3_158ia28 | /r/LocalLLaMA/comments/158ia28/solving_ner_task_entities_recognition/ | false | false | self | 2 | null |
as it goes against my programming rules rules rules rules (Llama 2) | 0 | 2023-07-24T17:10:18 | resurgences | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 158gm7q | false | null | t3_158gm7q | /r/LocalLLaMA/comments/158gm7q/as_it_goes_against_my_programming_rules_rules/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'hAQjN6La9Dw-YEwCuSvI-sxParHp1v5Dw2nhfGoVXaw', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?width=108&crop=smart&auto=webp&s=a4618be6d57980fcf5245861e247b7371bbd3998', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?width=216&crop=smart&auto=webp&s=34a204472b032bd7ce61c4413d4932ebb5d54d0e', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?width=320&crop=smart&auto=webp&s=4b0542b660fae637b946ec1fafd60c332da53ada', 'width': 320}], 'source': {'height': 258, 'url': 'https://preview.redd.it/i4szbtzp3ydb1.png?auto=webp&s=aff080cc84a5a5a419a2841415f76dcb6277572a', 'width': 609}, 'variants': {}}]} | |||
LLongMA-2 13b 8k | 153 | Releasing LLongMA-2 13b, a Llama-2 model, trained at 8k context length using linear positional interpolation scaling. The model was trained in collaboration with u/emozilla of NousResearch and u/kaiokendev.
The model can be found on u/huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b)
We worked directly with u/kaiokendev, to extend the context length of the Llama-2 13b model through fine-tuning. The model passes all our evaluations and maintains the same perplexity at 8k extrapolation surpassing the performance of other recent methodologies.
https://preview.redd.it/y2jzaobxxxdb1.png?width=1007&format=png&auto=webp&s=2ef99cb3dc55b41be6c431f81ebb8b3a01fae0a8
A Llama-2 7b model trained at 16k context length will release soon on u/huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k)
The model has identical performance to LLaMA 2 under 4k context length, performance scales directly to 8k, and works out-of-the-box with the new version of transformers (4.31) or with \`trust\_remote\_code\` for <= 4.30.
Applying the method to the rotary position embedding requires only slight changes to the model's code by dividing the positional index, t, by a scaling factor.
https://preview.redd.it/zctfkwzzxxdb1.png?width=4176&format=png&auto=webp&s=82a31cf9a3d8330be3a3e1355e36d6516bb29964
The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: [https://github.com/jquesnelle/scaled-rope](https://github.com/jquesnelle/scaled-rope)
If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: [https://kaiokendev.github.io/](https://kaiokendev.github.io/)
A PR to add scaled rotary embeddings to Huggingface transformers has been added by Joao Gante and merged: [https://github.com/huggingface/transformers/pull/24653](https://github.com/huggingface/transformers/pull/24653)
The model was further trained for \~1 billion tokens on Together Compute's Red Pajama dataset. The context length of the examples varies: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
The pre-tokenized dataset will be available here for you to use soon: [https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked](https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked)
I would also recommend checking out the phenomenal research by Ofir Press on ALiBi which laid the foundation for many of these scaling techniques: [https://arxiv.org/abs/2108.12409](https://arxiv.org/abs/2108.12409)
It is also worth reviewing the paper, A Length-Extrapolatable Transformer, and xPos technique which also applies scaling to rotary embeddings: [https://arxiv.org/pdf/2212.10554.pdf](https://arxiv.org/pdf/2212.10554.pdf)
We previously trained the first publicly available model with rotary embedding scaling here: [https://twitter.com/EnricoShippole/status/1655599301454594049?s=20](https://twitter.com/EnricoShippole/status/1655599301454594049?s=20)
You can find out more about the NousResearch organization here: [https://huggingface.co/NousResearch](https://huggingface.co/NousResearch)
The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI.
A big thank you to EleutherAI for facilitating the discussions about context-length extrapolation as well. Truly an awesome open-source team and community.
If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly.
The previous suite of LLongMA model releases can be found here: [https://twitter.com/EnricoShippole/status/1677346578720256000?s=20](https://twitter.com/EnricoShippole/status/1677346578720256000?s=20)
All of the models can be found on Huggingface: [https://huggingface.co/conceptofmind](https://huggingface.co/conceptofmind)
Disclaimer: I am an **independent** researcher with a preemptible sponsorship from StabilityAI. I do **not** profit in any way from these models. I am **not** trying to promote a startup. These models are also **not** an official StabilityAI product. I am very honest about all of the work we do. All of the code, data, and evaluation suites are publicly available.
FAQS:
1. Is this the base model? Yes, this is extended training of the Llama-2 13b base model to 8k context length.
2. Why not 16k? Llama-2 16k is done training and is currently going through our rigorous evaluation suite. [https://huggingface.co/conceptofmind/LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k)
3. Why not 32k? Jeff and I are the only two individuals working on this completely for free. **Memory and processing raise quadratically.** Scaling the context length is both very time-consuming and computationally expensive. It is also very costly. We will start training a 32k model in the near future.
4. Can't NTK already get you to 8k and 16k? Please review the graphs. It is clearly shown that you are not able to achieve the same results with NTK as you would with fine-tuning (either linear or ntk part scaling). I work directly with Bowen, the creator of NTK, and have been fine-tuning models with NTK scaling. Those models will release soon for the Open-Llama suite. Then we will release Llama-2 models.
5. What about quantization? I have not used any quantization libraries and I am unfamiliar if they are compatible. I am sure the Bloke or another individual will be able to work on that.
6. Can I instruct fine-tune on this model? Yes, you can instruct fine-tune these models. I will be releasing 8k models trained on the Hermes dataset soon.
7. What is the difference between LLongMA and NTK? The LLongMA models use the linear scaling method created by Kaiokendev.
8. What hardware was trained to train this model? I used 64 A100s to train these models.
9. Will there be a Llama-2 70b model at 8k+? Yes, I am working on this.
Testimonials about LLongMA 7b can be seen here: [https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2](https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2) | 2023-07-24T16:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/158fydr/llongma2_13b_8k/ | EnricoShippole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158fydr | false | null | t3_158fydr | /r/LocalLLaMA/comments/158fydr/llongma2_13b_8k/ | false | false | 153 | {'enabled': False, 'images': [{'id': 'rj27xzvAQYlJUjJEYt7-Kw76YiXVgT6QXHihMcK2xqs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=108&crop=smart&auto=webp&s=ec7cec2946ed2725c5900e05a6f85a7a2081fb0f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=216&crop=smart&auto=webp&s=f2b29f41afc40d509d6fd02297933e43cf362325', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=320&crop=smart&auto=webp&s=4d45890775dfc7e45171e104ca535880af3af4bc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=640&crop=smart&auto=webp&s=36eb25542590f768ee0a84e4f356a57c67e08b28', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=960&crop=smart&auto=webp&s=c85064bea1944c942f8e72e72d78ded222c0b88f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?width=1080&crop=smart&auto=webp&s=063e1d98463f08e18afcbf413800a7f07c2ba5c9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0214fz5LGnzK5iLqXX1NbgRtcrAPtTULKaUH5MdYbWU.jpg?auto=webp&s=6a92d7309fd21adef14e0d9f33233453d0bdd982', 'width': 1200}, 'variants': {}}]} | |
Meta didn't release the RLHF human preference model(s) for training LLaMA2? | 15 | To me it seems like a blunder for [asking people to care about] ethical training if they didn't release this key part of the fine-tuning toolchain. Could it really be simply to prevent people from training [other] models for safety? Basically a decision driven purely by force of capitalism? | 2023-07-24T16:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/158fwdm/meta_didnt_release_the_rlhf_human_preference/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158fwdm | false | null | t3_158fwdm | /r/LocalLLaMA/comments/158fwdm/meta_didnt_release_the_rlhf_human_preference/ | false | false | self | 15 | null |
LLaMa 2 admits to lying twice | 0 | And not even on a controversial subject or something that can at least be considered to be against the official narrative.
https://preview.redd.it/3x2z8ceexxdb1.png?width=998&format=png&auto=webp&s=e25ab3e3e694e73d6d3cf0c54c28fc57305ecea9 | 2023-07-24T16:38:09 | https://www.reddit.com/r/LocalLLaMA/comments/158fqbt/llama_2_admits_to_lying_twice/ | ClaudiuHNS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158fqbt | false | null | t3_158fqbt | /r/LocalLLaMA/comments/158fqbt/llama_2_admits_to_lying_twice/ | false | false | 0 | null | |
Anyone used Pinokio AI browser? Safe to use? | 1 | [removed] | 2023-07-24T16:27:46 | https://www.reddit.com/r/LocalLLaMA/comments/158fgbs/anyone_used_pinokio_ai_browser_safe_to_use/ | hosker2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158fgbs | false | null | t3_158fgbs | /r/LocalLLaMA/comments/158fgbs/anyone_used_pinokio_ai_browser_safe_to_use/ | false | false | self | 1 | null |
Apes Together Strong! - Dreaming of Fully Democratizing the Training of New Models Using Distributed Computing | 20 | I’m very new to running my own LLMs, but with the resources available, I am making my way through it somehow. My brain is melting somewhat, but it will be okay.
Something I want to do now is fine-tune a model such as Llama-2 on a dataset consisting of all the new knowledge around running, training, and working with open source and local LLMs present here on this sub, GitHub, and blog posts. I feel like it would greatly reduce my urge to ask questions that have already been answered or help me avoid wading through vast troves of knowledge spread out all over the Internet. Using the wonderful things being produced by projects/people such as OpenAccess-AI-Collective/axolotl, artidoro/qlora, llama.cpp/convert-lora-to-ggml.py, and others, I think I'm going to be able to do so. I might need a better setup, but there seems to be a path. (If someone has already fine-tuned a model to do this, please let me know).
Where I don’t yet see any clear path yet is for normal folks to be able to train new models themselves. This is so far out of reach for me to do right now myself; it's not even funny. And it’s not because there’s no possible way for me to understand how to do it; it’s that I simply do not have and will never have the computing resources needed.
What if that could change, though? Alone, we’ll never get it done, but what if someone created a tool which allowed for a distributed computing network where we could all volunteer our own small amount of resources toward a common goal of training a particular model? Projects like Petals (https://github.com/bigscience-workshop/petals) for running these bigger LLMs already exist. It seems like to me there should be all the pieces to make this happen.
Is there some reason that training these large LLMs together is not a logical next step or will somehow simply be impossible? | 2023-07-24T16:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/158exoy/apes_together_strong_dreaming_of_fully/ | The_IT_Dude_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158exoy | false | null | t3_158exoy | /r/LocalLLaMA/comments/158exoy/apes_together_strong_dreaming_of_fully/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'QJH9MsKXTWq2HFWN6IwXg0Sk2PqpHYokLuR9smdy96s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=108&crop=smart&auto=webp&s=996ab8f5b94a43a4b5de98c06136d3c64e610228', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=216&crop=smart&auto=webp&s=a06ae34c6268a916374eceb3c9b478e2d263a748', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=320&crop=smart&auto=webp&s=70859147de276d6447fe58957176919eb42c0eac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=640&crop=smart&auto=webp&s=d781fa699385959e9613377e5446819311f31870', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=960&crop=smart&auto=webp&s=3472b01de00a98f61d3d5f42366b772b282b352e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?width=1080&crop=smart&auto=webp&s=28d4f20acf2e327c93f5536835d90ceb29e3882f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OFDFpC2_7cufi4NkASkxBm1aok0v_N8qsaQo20rdGOU.jpg?auto=webp&s=1746a2b95c2047479ee64717df426959219234b2', 'width': 1200}, 'variants': {}}]} |
A minimal guide on how to setup LLaMA 2 on your Mac | 1 | 2023-07-24T15:53:33 | https://abishov.com/2023/07/23/running-llama-2-on-your-mac.html | abishov | abishov.com | 1970-01-01T00:00:00 | 0 | {} | 158einh | false | null | t3_158einh | /r/LocalLLaMA/comments/158einh/a_minimal_guide_on_how_to_setup_llama_2_on_your/ | false | false | default | 1 | null | |
Paper insights from Llama 2: Open Foundation and Fine-Tuned Chat Models | 11 | Is Llama 2 special or just a better iteration of Llama 1? 🤔 As most of you know by now, Meta released Llama 2—a better version of the Llama model with a commercial-friendly license. 🚀 Over the weekend, I had time to read the paper in which Meta released a long side with the model.
Below are some of my findings, which you might have missed, and improvements: 📝
🧠 A 34B version may come later after more testing
⚖️ The 7B model used a 285x token to parameter ratio, with loss still decreasing.
💰 Training the 7B would cost \~$1M in AWS compute (5$ per A100 on AWS on-demand)
🛫 Llama Chat was started before Llama 2 finished training
◼️ User prompts were masked/zeroed in SFT & RLHF training
👑 Reward Model (RM) accuracy is one of the most important proxies for Chat model
🚀 Collecting data in batches helped improve the overall model, since RM and LLM where iteratively re-trained. (online helpful in an online setting)
🔢 Used Rejection Sampling (RS) to distill knowledge from the 70B for a better SFT dataset
🤔 Only used RS for the first 3 versions, then extended to RS + PPO
🆕 Proposed GAtt, inspired by Context Distillation, to augment fine-tuning data for better multi-turn conversations
💡 RS + RM can boost performance by 10% compared to SFT
🛠 Chat model learned to use tools.
Check out the full paper here: [https://arxiv.org/abs/2307.03172](https://arxiv.org/abs/2307.03172)
Meta says, “…reinforcement learning proved highly effective, particularly given its cost and time effectiveness. Our findings underscore that the crucial determinant of RLHF’s success lies in the synergy it fosters between humans and LLMs throughout the annotation process.”
Remember that these are just my personal findings. Make sure always to conduct your own research and analysis. 🤗 | 2023-07-24T15:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/158dz3d/paper_insights_from_llama_2_open_foundation_and/ | Ok_Two6167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158dz3d | false | null | t3_158dz3d | /r/LocalLLaMA/comments/158dz3d/paper_insights_from_llama_2_open_foundation_and/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
University Dean Loves LLMs and Roleplaying and Wants ME to buy a Workstation with International Shipping for “AI Research” please send help | 1 | [removed] | 2023-07-24T14:04:29 | https://www.reddit.com/r/LocalLLaMA/comments/158bmvh/university_dean_loves_llms_and_roleplaying_and/ | Varzsy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158bmvh | false | null | t3_158bmvh | /r/LocalLLaMA/comments/158bmvh/university_dean_loves_llms_and_roleplaying_and/ | false | false | self | 1 | null |
Adding a P40 to a 2x3090 setup? | 5 | I was wondering if anyone had any experience adding a P40 (or similar high memory GPU) to an existing dual GPU setup for just the memory? I have a 2x3090 setup and was wondering if there would be negative performance issues if I add a P40 for the memory ; can I actively force CUDA to not use the P40 for inference? | 2023-07-24T14:03:01 | https://www.reddit.com/r/LocalLLaMA/comments/158bldl/adding_a_p40_to_a_2x3090_setup/ | GeeBee72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158bldl | false | null | t3_158bldl | /r/LocalLLaMA/comments/158bldl/adding_a_p40_to_a_2x3090_setup/ | false | false | self | 5 | null |
Is there a model that can generate a story from an image? | 2 | I take a lot of screenshots while playing games. If I could give a model one of these + a text prompt for context and it let it tell a story (*that stays canon to the game world*) that would be amazing.
Is there something like this yet? | 2023-07-24T14:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/158bidh/is_there_a_model_that_can_generate_a_story_from/ | JebryyathHS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158bidh | false | null | t3_158bidh | /r/LocalLLaMA/comments/158bidh/is_there_a_model_that_can_generate_a_story_from/ | false | false | self | 2 | null |
To all the Politically Correct Censored Model makers...the road to hell is paved with~ | 1 | [removed] | 2023-07-24T13:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/158ayk8/to_all_the_politically_correct_censored_model/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158ayk8 | false | null | t3_158ayk8 | /r/LocalLLaMA/comments/158ayk8/to_all_the_politically_correct_censored_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'J5dXm7kG2hj66BfWkxTOXMhcXjc0-ehKB-3AjDYs0W0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?width=108&crop=smart&auto=webp&s=0e9ccc6d9e9ad207f8140da950d8bbf9aaceb561', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?width=216&crop=smart&auto=webp&s=6b5c30445583b6a28f895423f3bc28358a37b608', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?width=320&crop=smart&auto=webp&s=6c92954d16d74571d4c1e7dd111d43085c454a83', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ROkeH9BgNkcrx0Xs4LPyTvGNWsDMtg7n1BhqZU9IaQk.jpg?auto=webp&s=d3a89e331200dbc82b5a957231d665c6b9091819', 'width': 480}, 'variants': {}}]} |
QLora 13B Google Colab | 6 | Has anyone managed to Qlora the Llama 2 13B model on google colab (free tier)? Huggingface has a notebook ([https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing)) where they fine tune a 20B GPT NeoX QLora on Colab free tier, so I would expect the 13B model to work?
I've tried using my own custom datasets and the dataset they used but keep going out of memory, so was wondering if anyone had any success, thanks! | 2023-07-24T13:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/158aiyk/qlora_13b_google_colab/ | nreHieS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158aiyk | false | null | t3_158aiyk | /r/LocalLLaMA/comments/158aiyk/qlora_13b_google_colab/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
Experiments on Consistent Rotation Base for Dynamic NTK RoPE | 83 | Hey guys,
I raised a subtle rotation inconsistency problem about the current Dynamic NTK RoPE in my previous [post](https://www.reddit.com/r/LocalLLaMA/comments/155bexn/a_potential_rotation_inconsistency_of_dynamically/?utm_source=share&utm_medium=web2x&context=3).
Our current evaluation methods are unable to accurately reflect whether such inconsistency in Dynamic NTK RoPE can harm perplexity or not. During the decoding process in any layer of decoders, the key\_states and query\_states are computed using hidden features, and they are rotated based on a fixed seq\_len, representing the context length. However, while decoding, LLM usually reuses previous cached keys which are rotated based on factors related to seq\_len to save memory. As the seq len keeps increasing, inconsistency arises between keys and queries. Consequently, the way how we currently compute perplexity is more like we keep the rotation base consistent.
To mitigate such a gap between perplexity evaluation and inference, I modified the codes about applying the rotary embedding on keys and queries in this [repo](https://github.com/NormXU/Consistent-DynamicNTKRoPE) and do simple experiments on Llama1-7B.
Here are some results:
​
[Figure1, Perplexity value on Llama1-7B, an 2k max sequence length model, values above 12.0 are cut off for concise; Vanilla: RoPE w\/o any interpolation; NTK: DynamicNTK when scale=1; Consistent DynamicNTK: keep rotation base between keys consistent, current huggingface implementations; Inconsistent DynamicNTK: keep rotation base between keys inconsistent w.r.t context length;](https://preview.redd.it/3rs1679yuwdb1.png?width=1000&format=png&auto=webp&s=a598161c12db153e19a376048616862d55431ef9)
We can see from Figure1 that when keeping the rotation base between keys inconsistent w.r.t context length, the perplexity significantly increases, indicating DynamicNTK harms the performances. This finding might initially seem counterintuitive.
However, as the sequence length continues to grow, we can notice a gradual reduction in perplexity for inconsistent Dynamic NTKScale RoPE . Interestingly, the inconsistent Dynamic NTKScale RoPE outperforms the NTKScale RoPE in terms of perplexity when the sequence length exceeds 5,000.
This may suggest why we tend to ignore the inconsistency in the rotation because it does benefit a longer context beyond a certain sequence length.
​
Still, my experiments have some limitations. I only test it on one dataset with limited samples. I hope my finds can be helpful to you. If there is any mistake in my codes and experiments, I'll appreciate it if you can kindly point it out. Please feel free to raise an issue in the repo as well.
​
Table1; Perplexity value
| **Length** | **Consistent Dynamic NTKScale PPL** | **Inconsistent Dynamic NTKScale PPL** | **NTKScale PPL** |
|:-|:-|:-|:-|
| 2800 | 4.285102386474609 | 10.203343925476075 | 4.301338438987732 |
| 3600 | 4.371902356147766 | 9.213108296394347 | 5.401671919822693 |
| 5600 | 4.536222472190857 | 8.04413757801056 | 10.291163015365601 |
| 7200 | 4.7303602981567385 | 7.674421100616455 | 15.359781618118285 |
| 8000 | 4.93225586414337 | 7.7100021314620975 | 15.884212293624877 |
​
​
​ | 2023-07-24T13:13:27 | https://www.reddit.com/r/LocalLLaMA/comments/158acjl/experiments_on_consistent_rotation_base_for/ | Alternative_World936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 158acjl | false | null | t3_158acjl | /r/LocalLLaMA/comments/158acjl/experiments_on_consistent_rotation_base_for/ | false | false | 83 | {'enabled': False, 'images': [{'id': 'd7DpVHso4SOesjgXaprsdA9kTZpPF7MY4oDSBaPOhis', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=108&crop=smart&auto=webp&s=1ea594bc89806f9ffa046fb9a6889477fd10c38e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=216&crop=smart&auto=webp&s=74c8e0277b936234daa6089cfeec78ac604ce337', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=320&crop=smart&auto=webp&s=8d2840d619357ac7158414f4daa4042f5e735b8f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=640&crop=smart&auto=webp&s=76251152a9e965d527627a7cb8292046918a202e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=960&crop=smart&auto=webp&s=8c48e731766fda11a1f8d29cf4ef0b24ff7acea4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?width=1080&crop=smart&auto=webp&s=4ccdd94059bcfdbc2a52bcb9bd7e240aeff8f933', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S0hEtcqecSItRFCiTzYUsBWTVl7YsD-ABBFF2k9I1Ls.jpg?auto=webp&s=1f15058cefaeb2d70e0cba58b46ca6793cd0c92a', 'width': 1200}, 'variants': {}}]} | |
Anyone building anything open source with Llama? | 2 | Hi everyone, hope it's okay to share this here in case it might be interesting to anyone:
We opened up applications for 100 Builders, a 4-week online program/hackathon for builders to collaborate on bold open source projects for AI (and crypto), organized by a small team and supported by amazing partners and sponsors. So if there is anyone building open source software with Llama, you should definitely consider joining!
It is 100% free to participate, light on time commitment, and 100% on builders' terms. Just a BIG build-that-thing energy between 100 projects that get in. I don't need to convince you about how important it is to have this new tech be built as open as possible, and we hope this will help with that! 🙂
We have a lot more to announce, including details about $20k+ (and counting) in sponsorship and prize money for participants. You can apply directly on the site [https://100.builders/](https://100.builders/?ref=ros) \- it takes 2 minutes.
Applications close on August 10th.
There is a little FAQ here: [https://docs.100.builders/100-builders-qs-and-as](https://docs.100.builders/100-builders-qs-and-as) but you can also shoot us an email with your questions to [community@100.builders](mailto:community@100.builders)
Thanks and we're STOKED to build open tech together with you!
**If you have any questions or feedback, let me know!** | 2023-07-24T12:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/1589skn/anyone_building_anything_open_source_with_llama/ | LuisSur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1589skn | false | null | t3_1589skn | /r/LocalLLaMA/comments/1589skn/anyone_building_anything_open_source_with_llama/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '6K9I4Elp66laQqzuTtHu25AT9EbNe25z_bvJaYdcjJ8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=108&crop=smart&auto=webp&s=45f5041a22ef7176181884cd16245581c62ce10d', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=216&crop=smart&auto=webp&s=120a5e23acaf21999f22998130c1045649ec6fc7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=320&crop=smart&auto=webp&s=b4fededa8dda08b3fcc0c9e90fa258e33d8d1901', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?width=640&crop=smart&auto=webp&s=ed1eb8a3b25e74a13e29ff0e57da2ed8d2e42f65', 'width': 640}], 'source': {'height': 418, 'url': 'https://external-preview.redd.it/VKeUbjHda_smkB338u4TFKMbBdaUZglx0L8P9xuPgzo.jpg?auto=webp&s=c44f321db0d5011ecc51842acf82aeb39c0dd1cb', 'width': 800}, 'variants': {}}]} |
Running Llama-2 faster | 3 | Hi,
I am working with a Telsa V100 16GB to run Llama-2 7b and 13b, I have used gptq and ggml version. the generation very slow it takes 25s and 32s respectively. Is there a way I can run it faster? | 2023-07-24T12:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1588wpe/running_llama2_faster/ | gijeri4793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1588wpe | false | null | t3_1588wpe | /r/LocalLLaMA/comments/1588wpe/running_llama2_faster/ | false | false | self | 3 | null |
Use AMD GPU with LlamaCpp | 11 | I am trying to run this code on GPU, but currently is not using GPU at all.
https://preview.redd.it/3bbj139dhwdb1.png?width=690&format=png&auto=webp&s=d776c7b86028fc3c19bf8d06f1fee41876775a17
I am using AMD GPU R9 390 on ubuntu and OpenCL support was installed following this: [https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally\_got\_a\_model\_running\_on\_my\_xtx\_using/](https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/)
​ | 2023-07-24T11:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1588b09/use_amd_gpu_with_llamacpp/ | blacky_ninja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1588b09 | false | null | t3_1588b09 | /r/LocalLLaMA/comments/1588b09/use_amd_gpu_with_llamacpp/ | false | false | 11 | null | |
Alternative Download mean (because of my unstable local electricity and Internet) | 7 | Hello everyone,
This is my second week of trying to download the llama-2 models without abrupt stops, but all my attempts are of no avail.
I'm posting this to request your guidance or assistance on how to download the models completely despite my current predicament.
As far as I can tell the [download.sh](https://github.com/facebookresearch/llama/blob/main/download.sh) in the GitHub -repo is a very simple script, it doesn't check for previously downloaded files or partially downloaded file, and every 24 hrs I'll have to get a new key.
i.e some of the options I have left are ipfs, torrent, torrent-via-i2p, modified download script, etc.(but I really prefer torrent because its very easy and once I get the files I can seed it indefinitely for others like me)
and I'd really appreciate any help anyone can provide.
Thank you.
NOTE:
This is my current status:
|Daily Power Outages|Minimum of 5 - 9 times a day|
|:-|:-|
|Power Outage Duration|Minimum 15 mins to 3-5 hours|
|Network|morning 500kbps, night 2 mbps(12am-5am)|
|UPS|n/a|
|Battery/Inverter|n/a|
|Generator|n/a|
​ | 2023-07-24T11:14:06 | https://www.reddit.com/r/LocalLLaMA/comments/1587o35/alternative_download_mean_because_of_my_unstable/ | Red_Luci4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1587o35 | false | null | t3_1587o35 | /r/LocalLLaMA/comments/1587o35/alternative_download_mean_because_of_my_unstable/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'QVZ-7wnYYq9G9ot5wdH4HclxuLKGUTmobZ11SydGR44', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=108&crop=smart&auto=webp&s=71547f4c7b447974d800e339c632986a4f5a2474', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=216&crop=smart&auto=webp&s=4752408a7652f105e7d322db211e6be94aab6c5a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=320&crop=smart&auto=webp&s=917495e81ce866b62bad2b637780416998034673', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=640&crop=smart&auto=webp&s=8e5d8b52e02d8b08cc2c4e0e8393ae9afcce5fd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=960&crop=smart&auto=webp&s=747321f2f8586399f9ee4dd302eee8d169d35eb8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?width=1080&crop=smart&auto=webp&s=df1ea81e09e8457624fb64da3960e49908651a75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2-s3QEMHpKJ8ZHbu4-SFebhKV7C8gL7dq_fDp6Mjh8I.jpg?auto=webp&s=114e4b923eaab1d483d5ced0570244ef8be9b867', 'width': 1200}, 'variants': {}}]} |
Are there still people building on Falcon models? | 10 | As far as I understand, Falcon models have a better license than Llama 2 models? Apparently, it looks like support for Falcon in GGML is still experimental. 🥲 | 2023-07-24T09:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/1585mnh/are_there_still_people_building_on_falcon_models/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1585mnh | false | null | t3_1585mnh | /r/LocalLLaMA/comments/1585mnh/are_there_still_people_building_on_falcon_models/ | false | false | self | 10 | null |
Kobold.cpp - What are your numbers between CLBlast and CUBlas? (VRAM usage & tokens/s) | 20 | Decided to do some quick informal testing to see whether CLBlast or CUBlas would work better on my machine.
I did my testing on a Ryzen 7 5800H laptop, with 32gb ddr4 ram, and an RTX 3070 laptop gpu (105w I think, 8gb vram), off of a 1tb WD SN730 nvme drive.
I used Kobold.cpp 1.36 (on windows 11), which is the latest version as of writing, with the following prompt:
>koboldcpp.exe --usecublas/clblas 0 0 --gpulayers %layers% --stream --smartcontext --model nous-hermes-llama2-13b.ggmlv3.q5\_K\_M.bin
And of course, you can probably tell from the prompt, I'm using the nous-hermes-llama2-13b q5\_K\_M model. The prompt I used was the same every time; "Write me a 20 word poem about fire"
**Here are my results.** *Conclusion/tl;dr at the bottom.*
24 layer clblas, 7gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (48 / 80 tokens)
Time Taken - Processing:3.2s (63ms/T), Generation:10.0s (208ms/T), Total:13.1s (3.7T/s)
24 layer cublas, 7.4gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (46 / 80 tokens)
Time Taken - Processing:2.9s (58ms/T), Generation:8.4s (182ms/T), Total:11.3s (4.1T/s)
28 layer clblast, 7.6gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (49 / 80 tokens)
Time Taken - Processing:4.6s (93ms/T), Generation:9.6s (197ms/T), Total:14.3s (3.4T/s)
26 layer cublas, 7.7gb vram\*
Processing Prompt (1 / 1 tokens)
Generating (45 / 80 tokens)
Time Taken - Processing:0.4s (397ms/T), Generation:7.6s (169ms/T), Total:8.0s (5.6T/s)
25 layer cublas, 7.6gb vram
Processing Prompt [BLAS] (50 / 50 tokens)
Generating (49 / 80 tokens)
Time Taken - Processing:3.2s (65ms/T), Generation:8.5s (174ms/T), Total:11.8s (4.2T/s)
*\*26 layer cublas was kind of slow on my first try, and took 2 tokens/s. Resetting and trying again gave me a better result, but a follow up prompt gave me only 0.7 tokens/s. 26 layers likely uses too much vram here.*
This model has 41 layers according to clblast, and 43 according to cublas, however cublas seems to take up more vram. I could only fit 28 while using clblast, and 25 while using cublas. Anything more had issues. From what I'm able to tell, at the same, or even slightly less vram usage cublas is still a bit faster than clblast.
What numbers are you guys getting between clblas and cublast on kobold.cpp?
**Links**
Kobold.cpp - [https://github.com/LostRuins/koboldcpp/releases](https://github.com/LostRuins/koboldcpp/releases)
Nous Hermes Llama2 GGML Model - [https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML) | 2023-07-24T08:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1584vgc/koboldcpp_what_are_your_numbers_between_clblast/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1584vgc | false | null | t3_1584vgc | /r/LocalLLaMA/comments/1584vgc/koboldcpp_what_are_your_numbers_between_clblast/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'GNFvogUAbgZ91N-Y_rvKuEqhrsqeJsHKjQCwxmml2ro', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=108&crop=smart&auto=webp&s=78a0daf2679060916d6932503899961c169a7868', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=216&crop=smart&auto=webp&s=b39f69c00d703cafaacded53e13e2d91091fedf0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=320&crop=smart&auto=webp&s=176d6f8ab5b8b976a2e56c2f77c2d1cf379a8142', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=640&crop=smart&auto=webp&s=e36c8123dd807318587b9a11faa6c717a0ff9131', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=960&crop=smart&auto=webp&s=8170edac9bc84fdfe7d6a0acbf7332e35ca4dcd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?width=1080&crop=smart&auto=webp&s=3cbb0a6cf8cfb5fa7c07fd2255a9e79e8e5cb3de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/efGSTCEdA8ZZx1S1OmEVSaOHzdktuchGqc3IQl0t_bU.jpg?auto=webp&s=391a2d777cd9be3773679ad45a12495d894bcf72', 'width': 1200}, 'variants': {}}]} |
LoRA vs QLoRA performance | 15 | Is there any significant performance degradation when using QLoRA fine-tuning method over LoRA? I use QLoRA and this yields pretty good results. I'm trying to figure out if it's worth it to try Lora. | 2023-07-24T08:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1584uky/lora_vs_qlora_performance/ | generalfsb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1584uky | false | null | t3_1584uky | /r/LocalLLaMA/comments/1584uky/lora_vs_qlora_performance/ | false | false | self | 15 | null |
The Dark Side of AI Censorship | 150 | Companies are concerned about the so-called safety of AI and are putting a lot of effort into making AI "safe" to use. Meta puts a special emphasis on the safety of the new LLaMA2 model, for this reason the release of the 33B version of the model was delayed. But, as you know, according to the law of dialectics, everything has a second side, and "security" has a dark side that no one talks about.
For example, if you ask almost any model about how to break into a car, you will logically receive an answer in the style of "It's illegal, I can't help you with this." But let's say the situation is critical, your own car slammed shut, the keys are inside and your child is there who cannot open the door and something threatens his life. How will AI react? The answer will be something like "I'm sorry that you are in this situation, but I can not give advice on how to open your car. Try to keep your keys in a safe place in the future ...". To any persuasion and arguments that the health and life of a person depends on it, the AI will answer that it regrets it, but will not give advice. it's illegal.
You can imagine another hypothetical situation where you are forced to take an illegal action to save your own or someone else's life and the AI will refuse to help you.
In my opinion, the developers are very one-sided in assessing what is acceptable and safe, and the current benchmarks for evaluating "safety" are not complete. | 2023-07-24T08:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1584cpl/the_dark_side_of_ai_censorship/ | coyotewld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1584cpl | false | null | t3_1584cpl | /r/LocalLLaMA/comments/1584cpl/the_dark_side_of_ai_censorship/ | false | false | self | 150 | null |
How to speed up tokenizer loading speed for lmsys/vicuna-13b-v1.3? (Takes 3 min) | 2 | Hi people, I was wondering how I can speed up my tokenizer loading speed. Currently it takes about 3 minutes to load the tokenizer for lmsys/vicuna-13b-v1.3. I don't really have this issue with TheBloke/Wizard-Vicuna-13B-Uncensored-HF. Thank you!
The code in question:
`model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path = "lmsys/vicuna-13b-v1.3", device_map = "balanced",torch_dtype = torch.bfloat16, use_cache = True)`
`tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, use_fast = True)` | 2023-07-24T07:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1583tsi/how_to_speed_up_tokenizer_loading_speed_for/ | ToeAdministrative493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1583tsi | false | null | t3_1583tsi | /r/LocalLLaMA/comments/1583tsi/how_to_speed_up_tokenizer_loading_speed_for/ | false | false | self | 2 | null |
GGML guys, How's the matter with fine-tuning? | 1 | Not that up-to-date with current developments in the field of fine-tuning. People who know what they're doing, is there a way to directly fine-tune the GGML model? Have been using multiple models in the past, but they all really lack usability in a thing I'm interested in so the only option is to DIY I guess. | 2023-07-24T07:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1583k84/ggml_guys_hows_the_matter_with_finetuning/ | femboy_deer_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1583k84 | false | null | t3_1583k84 | /r/LocalLLaMA/comments/1583k84/ggml_guys_hows_the_matter_with_finetuning/ | false | false | default | 1 | null |
RLHF potentials | 1 | Some thoughts on rlhf’s future…
Specifically I would like to know if somebody has done ablation on whether a rlhf-ed model, trained on universal QA datasets like openassistent, has significant performance boost on advanced reasoning tasks such as coding. | 2023-07-24T07:13:52 | https://shermwong.com/2023/07/23/llm-studies-part-3-rlhf/ | wsmhy2011 | shermwong.com | 1970-01-01T00:00:00 | 0 | {} | 15835qs | false | null | t3_15835qs | /r/LocalLLaMA/comments/15835qs/rlhf_potentials/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'tNUeMAIpmsNt8D13J4h0yY1TIvemrXYL7MZ2wkxrGIU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=108&crop=smart&auto=webp&s=1ee0cfce01bd4b346c320af3222079b166da9afb', 'width': 108}, {'height': 217, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=216&crop=smart&auto=webp&s=aff33e0531858868682205623971f813a9785d2e', 'width': 216}, {'height': 322, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=320&crop=smart&auto=webp&s=4f8d92806e302f9d93f43673633941c4682c4dcd', 'width': 320}, {'height': 645, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=640&crop=smart&auto=webp&s=6df1687124eac55ec30621c890b5c4863ecc12be', 'width': 640}, {'height': 968, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=960&crop=smart&auto=webp&s=c25fb67d46fcfb571b7be18c57761afa69e72ad7', 'width': 960}, {'height': 1089, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?width=1080&crop=smart&auto=webp&s=535bac9be13001b1ae2d105eddc4b2931635d140', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/zNvh1hBUY_3E2mzuLnD8oDJeKgDXJI3-wnN08MzSlQ8.jpg?auto=webp&s=1aae4de8b150410bdbad97ef2b9456bcbfe14c07', 'width': 1586}, 'variants': {}}]} | |
Any particular models that work with 4gb of vram? | 2 | Or am I completely out of luck? | 2023-07-24T07:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1582z21/any_particular_models_that_work_with_4gb_of_vram/ | brucewillisoffical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1582z21 | false | null | t3_1582z21 | /r/LocalLLaMA/comments/1582z21/any_particular_models_that_work_with_4gb_of_vram/ | false | false | self | 2 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.