title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What are currently the best models to try out if you are new to this space? | 1 | [removed] | 2023-10-12T12:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1765x09/what_are_currently_the_best_models_to_try_out_if/ | friscofresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1765x09 | false | null | t3_1765x09 | /r/LocalLLaMA/comments/1765x09/what_are_currently_the_best_models_to_try_out_if/ | false | false | self | 1 | null |
From no GPU to a 3060 12GB, what can I run? | 80 | As the title says, I'll be moving up in the world of LLMs soon by getting a GPU! I also plan to go from my current 16GB of RAM to 64GB of DDR5 (and a 13th-gen i5, hopefully!).
Until now, I've mainly been running 7B models, with the occasional 13B if I can tolerate the trade-off in t/s for better quality. However, I've always wanted to explore the world of higher-parameter models and formats beyond GGML/GGUF.
So, what parameter models will I be able to run in the full 12gb vram, and what can I run in gguf with layers offloaded to the vram?
As a bonus question, what sorts of models would I be able to qlora with this setup too?
Any help is appreciated, thanks! | 2023-10-12T12:10:01 | https://www.reddit.com/r/LocalLLaMA/comments/1765g70/from_no_gpu_to_a_3060_12gb_what_can_i_run/ | Sebba8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1765g70 | false | null | t3_1765g70 | /r/LocalLLaMA/comments/1765g70/from_no_gpu_to_a_3060_12gb_what_can_i_run/ | false | false | self | 80 | null |
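For the offloading question above, a rough back-of-the-envelope sketch. The 1.5 GB overhead default and the "13B at Q4_K_M is about 8 GB over 40 layers" figures are assumptions for illustration, not measurements:

```python
def layers_on_gpu(n_layers: int, model_gb: float, vram_gb: float,
                  overhead_gb: float = 1.5) -> int:
    """Rough estimate of how many transformer layers fit in VRAM.

    Assumes the quantized weights are spread evenly across layers and
    reserves `overhead_gb` for the KV cache and CUDA buffers.
    """
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# A 13B model at Q4_K_M is roughly 8 GB across 40 layers; on a 12 GB card
# this estimate says all 40 layers fit, with headroom reserved:
print(layers_on_gpu(40, 8.0, 12.0))
```

In llama.cpp the result maps onto the `--n-gpu-layers` (`-ngl`) flag; anything beyond that count runs on the CPU from system RAM.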
Need help: Llama2-chat generates trailing /s in the answer | 1 | I am trying to follow the Llama 2 chat format guidelines, but it seems the model doesn't interpret the closing </s> as an end-of-sequence marker.
On the second conversation operation I have following prompt:
`<s>[INST] <<SYS>>`
`You are helpful assistant, answer the questions.`
`<</SYS>>`
`Tom have a father Mikle and a mother Maria. Also Tom have a brother Den. Who is Maria for Den?`
`[/INST] Based on the information provided, Maria is the mother of Tom, so she is also the mother of Den, who is Tom's brother.`
`</s> <s>[INST] Who is Mikle to Maria? [/INST]`
The return is
`Based on the information provided, Mikle is the father of Tom, and therefore he is also the father of Den, who is Tom's brother. So, Mikle is the husband of Maria and the father of Tom and Den.`
`</s>`
Note the </s> at the end. The streaming detokenizer returned it as three separate tokens: "</", "s", ">", and I suspect that a real EOS token followed after that.
`llm_load_print_meta: BOS token = 1 '<s>'`
`llm_load_print_meta: EOS token = 2 '</s>'`
The question is: is the format of my context incorrect? What is wrong?
I am using llama-cpp-python for inference and llama-2-13b-chat.Q4\_K\_M.gguf as the model. | 2023-10-12T11:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1764uk9/need_help_llama2chat_generate_trailing_s_in_the/ | losthost12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1764uk9 | false | null | t3_1764uk9 | /r/LocalLLaMA/comments/1764uk9/need_help_llama2chat_generate_trailing_s_in_the/ | false | false | self | 1 | null |
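A common workaround for the issue above is to treat the literal string as a stop sequence and/or strip it after generation. A minimal sketch; the `stop` keyword shown in the comment is llama-cpp-python's API, the rest is plain string handling:

```python
def strip_trailing_eos(text: str, eos: str = "</s>") -> str:
    """Remove an end-of-sequence marker the model emitted as literal text."""
    return text.rstrip().removesuffix(eos).rstrip()

# With llama-cpp-python you can also cut generation at the literal string:
#   output = llm(prompt, stop=["</s>"], max_tokens=256)

print(strip_trailing_eos("Mikle is the husband of Maria.\n</s>"))
```

Some GGUF conversions of the chat models tokenize `</s>` as the characters `</`, `s`, `>` rather than the true EOS id, which matches the three-token detokenizer output described in the post.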
LLaVA training script and evaluation | 20 | 2023-10-12T10:21:43 | https://x.com/imhaotian/status/1712269327167037661?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1763lvn | false | null | t3_1763lvn | /r/LocalLLaMA/comments/1763lvn/llava_training_script_and_evaluation/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'hdpTjL5Vk9DvKj9629X-Z7grDU3swi3JmAU_gts1Pik', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LdTlrX7rpAG8z2sVYyYrR6ZqLK-aEx5DunfWidNzSfk.jpg?width=108&crop=smart&auto=webp&s=82ffcc29d31c200099843c05e40bb34616025e6d', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/LdTlrX7rpAG8z2sVYyYrR6ZqLK-aEx5DunfWidNzSfk.jpg?auto=webp&s=45a39b1ea067178e2b8f72f83e87e38f194c77e5', 'width': 200}, 'variants': {}}]} | ||
Prompt tuning | 1 | [removed] | 2023-10-12T09:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1763883/prompt_tuning/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1763883 | false | null | t3_1763883 | /r/LocalLLaMA/comments/1763883/prompt_tuning/ | false | false | self | 1 | null |
How to switch b/w LLMs on the fly? | 1 | [removed] | 2023-10-12T08:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1761p97/how_to_switch_bw_llms_on_the_fly/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1761p97 | false | null | t3_1761p97 | /r/LocalLLaMA/comments/1761p97/how_to_switch_bw_llms_on_the_fly/ | false | false | self | 1 | null |
Langchain not working with HuggingFacePipeline | 2 | I have the following code:
from transformers import AutoModelForCausalLM, AutoTokenizer
from langchain.llms import HuggingFacePipeline
from langchain import PromptTemplate, LLMChain
model_id = "TheBloke/Llama-2-7b-Chat-GPTQ"
temperature = 0.15
max_new_tokens = 2048
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
llm = HuggingFacePipeline(model_id=model_id,
pipeline_kwargs={
"model": model,
"task": "text-generation",
"tokenizer": tokenizer,
"device_map": "auto",
"max_new_tokens": max_new_tokens,
"temperature": temperature,
"top_p": 0.95,
"top_k": 40,
"repetition_penalty": 1.15,
})
prompt = "Tell me about AI"
prompt_template='''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''
prompt_template = PromptTemplate(template=prompt_template, input_variables=["prompt"])
llm_chain = LLMChain(prompt=prompt_template, llm=llm)
print(llm_chain.run(prompt))
It throws the following error:
File ~/.local/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py:183, in HuggingFacePipeline._generate(self, prompts, stop, run_manager, **kwargs)
    180 batch_prompts = prompts[i : i + self.batch_size]
    182 # Process batch of prompts
--> 183 responses = self.pipeline(batch_prompts)
    185 # Process each response in the batch
    186 for j, response in enumerate(responses):
TypeError: 'NoneType' object is not callable
Any ideas as to what I am doing wrong here?
Note that I am not using the HuggingFacePipeline like the following:
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
hf = HuggingFacePipeline(pipeline=pipe)
print(hf)
This prints model\_id = "gpt2" even though I specified the model as Llama2-7b. There's an open issue on GitHub here: [https://github.com/langchain-ai/langchain/issues/8280](https://github.com/langchain-ai/langchain/issues/8280)
​ | 2023-10-12T05:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/175yxk4/langchain_not_working_with_huggingfacepipeline/ | Robur_131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175yxk4 | false | null | t3_175yxk4 | /r/LocalLLaMA/comments/175yxk4/langchain_not_working_with_huggingfacepipeline/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'neUP1x5oCtCHNn-N08nsMuzB9x83YmUBTc1fGz9_kh4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?width=108&crop=smart&auto=webp&s=b9de856379aa5b572453a9cb65c8785fee60a8ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?width=216&crop=smart&auto=webp&s=3b11f4d1da0f7f83f521f34493d83592869b097b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?width=320&crop=smart&auto=webp&s=6f9684d822b2a44577a96b8f37276b0fb5f7dc8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?width=640&crop=smart&auto=webp&s=54f3ce2d474422eaeb539bb2703fd2cbc4f4c2d7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?width=960&crop=smart&auto=webp&s=aa628fa038ff7419a4baf7224ce75469a5f260a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?width=1080&crop=smart&auto=webp&s=87471fbbedf837e343cb20924fb8547c00a1add7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zRXwkcaSsZfStpRf2QeUGcWtHxn4WGYBNvuCPQakG1c.jpg?auto=webp&s=6cb298a5c9a856cfbd34bc2b95fb793c937ee920', 'width': 1200}, 'variants': {}}]} |
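For the `'NoneType' object is not callable` error above: constructing `HuggingFacePipeline` directly with `model_id`/`pipeline_kwargs` never instantiates the underlying pipeline, so `self.pipeline` stays `None` and calling it fails. A sketch of the supported route, based on the LangChain API at the time (build the transformers pipeline first, then pass it whole); the prompt helper is plain string handling:

```python
def llama2_prompt(system: str, user: str) -> str:
    """Llama-2 chat format for the final prompt string."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user}[/INST]\n"

def make_llm(model, tokenizer, max_new_tokens=2048, temperature=0.15):
    """Wrap an already-loaded model in LangChain via a real transformers
    pipeline handed to the `pipeline=` argument."""
    from transformers import pipeline            # lazy import: needs the weights/GPU
    from langchain.llms import HuggingFacePipeline

    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_p=0.95,
        repetition_penalty=1.15,
    )
    return HuggingFacePipeline(pipeline=pipe)
```

`HuggingFacePipeline.from_model_id(model_id, task="text-generation", ...)` is the other supported route when you don't need to pre-load the model yourself. The `model_id = "gpt2"` shown by `print(hf)` appears to be only the field's default value, per the linked GitHub issue, not the model actually being run.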
Does the Mac M2 offer a significant improvement over the M1 with llama.cpp? | 9 | I'm considering buying one of the following MBPs.
M1 16GB ram, 10 CPU, 16GPU, 1TB.
M2 16GB ram, 10 CPU, 16GPU, 512gb.
I want to use llama.cpp to experiment with local text generation, so is it worth going for the M2? Maybe in the future llama.cpp could take better advantage of the M2 over the M1?
Thx. | 2023-10-12T04:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/175ys9p/mac_m2_has_significant_improvement_over_m1_using/ | jambittz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175ys9p | false | null | t3_175ys9p | /r/LocalLLaMA/comments/175ys9p/mac_m2_has_significant_improvement_over_m1_using/ | false | false | self | 9 | null |
What might we find by visualizing neural network internals? | 1 | [removed] | 2023-10-12T04:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/175xyd4/what_might_we_find_by_visualizing_neural_network/ | ThisWillPass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175xyd4 | false | null | t3_175xyd4 | /r/LocalLLaMA/comments/175xyd4/what_might_we_find_by_visualizing_neural_network/ | false | false | self | 1 | null |
Maybe we should lower the expectations for codellama 13B? | 1 | [removed] | 2023-10-12T04:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/175xwcy/maybe_we_should_lower_the_expections_for/ | More-Shop9383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175xwcy | false | null | t3_175xwcy | /r/LocalLLaMA/comments/175xwcy/maybe_we_should_lower_the_expections_for/ | false | false | 1 | null | |
Left when Julia was just a new-born now I want back in. Where to start? | 6 | Currently got my hands on a used Dell t7920 with dual intel platinum 8168, 256gb ram, 1tb M.2, 1.6tb U.2, 2tb x 2 SSD, 2 x nvidia rtx a5000.
Maybe overkill but I want to be using for some time with projects helping my community.
Things I want to do:
Local AI pair programmer that I can share my repos
Build a language model for a language found on Google language api but not fully supported by it or found anywhere else
Create chatbot for said language
Gather IOT data for a commune of speakers using said language
I do have consent from the community.
Where should I start as newbie again?
So far, Windows came on the machine, but I feel like I should switch to Ubuntu and leverage Python and Conda. I'm not looking to roll my own stuff if I don't have to; I just want the quickest path to achieving what I want with the current machine. I am ignorant of where I should begin. | 2023-10-12T02:14:30 | https://www.reddit.com/r/LocalLLaMA/comments/175vpcw/left_when_julia_was_just_a_newborn_now_i_want/ | Equality_or_Fairness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175vpcw | false | null | t3_175vpcw | /r/LocalLLaMA/comments/175vpcw/left_when_julia_was_just_a_newborn_now_i_want/ | false | false | self | 6 | null |
Mistral 7b to summarize each row of a dataframe | 5 | Hello, I know this can be done with looping, but is there an efficient way to apply Mistral 7B to summarize each row of a text column in a dataframe?
here is a potential output
​
|Key|Text|Summarized Text|
|:-|:-|:-|
|1|This is text 1|This is text 1 summary|
|2|This is text 2|...|
|...|.....|...|
|1000|This is text 1000|This is text 1000 summary|
​
I can see this being done with a function that takes text as input and is then applied to the column with the apply method. Would you recommend any other approach, or a pipeline that is already set up to do this?
Thanks | 2023-10-12T01:09:24 | https://www.reddit.com/r/LocalLLaMA/comments/175ucz0/mistral_7b_to_summarize_each_row_of_a_dataframe/ | haris525 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175ucz0 | false | null | t3_175ucz0 | /r/LocalLLaMA/comments/175ucz0/mistral_7b_to_summarize_each_row_of_a_dataframe/ | false | false | self | 5 | null |
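A self-contained sketch of the apply pattern described above. The summarizer is a stand-in so the example runs without a model; in practice it would wrap a Mistral 7B call (via llama-cpp-python or a transformers pipeline), and batching prompts through the model, rather than a plain Python loop, is the main efficiency lever:

```python
def summarize_rows(rows, summarize):
    """Apply a summarizer (any callable str -> str) to the 'Text' field of
    each row, adding a 'Summarized Text' field alongside the originals."""
    return [{**row, "Summarized Text": summarize(row["Text"])} for row in rows]

# Stand-in summarizer so the example runs without a model:
stub = lambda text: text[:14] + " summary"

rows = [{"Key": 1, "Text": "This is text 1"},
        {"Key": 2, "Text": "This is text 2"}]
out = summarize_rows(rows, stub)
print(out[0]["Summarized Text"])

# With pandas the same idea is a one-liner:
#   df["Summarized Text"] = df["Text"].apply(summarize)
```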
Training CodeLLaMA on a Single Instruction | 6 | Hi, I'm trying to train CodeLLaMA-7b on a single instruction till the training loss goes till 0 to see if the model can effectively memorize the instruction and it's expected output. I'm following the same training process as in [LongLoRA SFT](https://github.com/dvlab-research/LongLoRA/blob/main/supervised-fine-tune.py) and my model does achieve a training loss of 0. However, at inference time, when using the same instruction, the model seems to not generate any code and just outputs the instruction back to me. I was wondering why this is the case if the training loss hit 0 and if the model is somehow not able to memorize the input/output? For context, I'm doing this experiment to test my training code to see if it works correctly for a single data point. | 2023-10-11T23:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/175s7k8/training_codellama_on_a_single_instruction/ | LyGmAbAllz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175s7k8 | false | null | t3_175s7k8 | /r/LocalLLaMA/comments/175s7k8/training_codellama_on_a_single_instruction/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'nuqwMerls8hSKEpsdavqii3iQuUPuPa3LTjKUuLoHwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?width=108&crop=smart&auto=webp&s=464a97438bc6b0d796326b433268dac3f359ddb1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?width=216&crop=smart&auto=webp&s=26d22fcee95fec92e53c1d732a3ccc10f1b3d4d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?width=320&crop=smart&auto=webp&s=a51689125b91dffbf45c4d3792283b8e4ae589fd', 'width': 320}, {'height': 320, 'url': 
'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?width=640&crop=smart&auto=webp&s=744cb9455cfe49e8049390971391e68b7f3f549f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?width=960&crop=smart&auto=webp&s=521b712c734dd2dd2c94bb84800c2fb999d65605', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?width=1080&crop=smart&auto=webp&s=ddae947bea32dbf948d502082d817f20ae37e27a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/btKlep1_9LDAoLWkbUibYI6KELOmp4i9Z6RZmSgzp00.jpg?auto=webp&s=625285b44434a1e7d4e5add137309160f086e2d4', 'width': 1200}, 'variants': {}}]} |
Quick clarification on using LoRA, creative writing model recs! | 1 | Hey folks, I wanted a locally run ML model for creative writing that I can train on my own work (I have several years' worth). I found [this](https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ) and wanted to train a LoRA on my own writing. Something that is not clear to me: can I train a LoRA against any version of the model, or only specific versions? I don't understand the compression and quantization going on as well as I'd like to, so I'm just making sure before I waste time on my GPU.
Would also appreciate links to videos/guides if anyone has any, couldn't find any good ones.
Oh also, if anyone has any recommendations for my use case, please comment!
Thanks in advance. | 2023-10-11T23:08:33 | https://www.reddit.com/r/LocalLLaMA/comments/175rt3b/quick_clarification_on_using_lora_creative/ | man_and_a_symbol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175rt3b | false | null | t3_175rt3b | /r/LocalLLaMA/comments/175rt3b/quick_clarification_on_using_lora_creative/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gkGW5N45HClSxsVVRX6Pxf0S-eX7EblAdV8Jzo3c0qY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?width=108&crop=smart&auto=webp&s=1a7099a9f3674dbb45c3aae02405bac812170964', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?width=216&crop=smart&auto=webp&s=d8a8b12d053f43174de182c917127ee6f714b31f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?width=320&crop=smart&auto=webp&s=40ab2733a6d772ce0ee6740b9b5cf5d470c7dca7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?width=640&crop=smart&auto=webp&s=2c9917e4fd7dfe26c999f47561de965394284d6b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?width=960&crop=smart&auto=webp&s=641b578c44ad0e9664718a51c6f309a20fc89d39', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?width=1080&crop=smart&auto=webp&s=f277564069f72bb7998ef198d0209c0edc8f239c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IZPr0aaz4HDjQTd2-ToEDT0lYYWmNMeNFNJOWNNEXOA.jpg?auto=webp&s=a3854bd5a7d42fb302b47c8f919d78a1a867cbf3', 'width': 1200}, 'variants': {}}]} |
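On the version question, a hedged note: LoRA is typically trained against the unquantized (fp16) weights, or via QLoRA against a 4-bit load; training directly against a GPTQ checkpoint generally needs special tooling. A sketch with peft, where the `q_proj`/`v_proj` target module names are the usual Llama conventions and should be verified for the specific model:

```python
def lora_trainable_params(r: int, shapes) -> int:
    """Trainable parameters LoRA adds: each adapted weight matrix of shape
    (d_out, d_in) gets two low-rank factors, A (r x d_in) and B (d_out x r)."""
    return sum(r * (d_in + d_out) for (d_out, d_in) in shapes)

# e.g. rank-8 adapters on the q and v projections of one 4096-wide layer:
print(lora_trainable_params(8, [(4096, 4096), (4096, 4096)]))

def make_peft_config(r: int = 8):
    # Lazy import: only needed when actually training.
    from peft import LoraConfig
    return LoraConfig(r=r, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
```

The fraction of trainable parameters is tiny relative to the base model, which is why LoRA fine-tuning of a 30B model is feasible on consumer hardware when the base weights are loaded in 4-bit.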
Data extraction from tables | 0 | I have tables in PDF files and would like to fine-tune an LLM from the data in it. How would I go about extracting data from these tables? If it is not possible, how would I go about creating a dataset for the purpose of fine-tuning from these tables? What will be the process or steps? | 2023-10-11T22:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/175qzqk/data_extraction_from_tables/ | RAIV0LT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175qzqk | false | null | t3_175qzqk | /r/LocalLLaMA/comments/175qzqk/data_extraction_from_tables/ | false | false | self | 0 | null |
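A sketch of one possible pipeline for the question above: extract the tables with a PDF library, then template each row into an instruction/response pair. Both pdfplumber and the question template here are assumptions to adapt; pdfplumber's `extract_tables()` works well on ruled tables and less well on whitespace-aligned ones:

```python
import json

def rows_to_examples(header, rows, subject="the table"):
    """Turn extracted table rows into instruction/response pairs for
    fine-tuning. The question template is a placeholder; shape it to
    what you actually want the model to learn."""
    examples = []
    for row in rows:
        record = dict(zip(header, row))
        examples.append({
            "instruction": f"What does {subject} say about {row[0]}?",
            "response": "; ".join(f"{k}: {v}" for k, v in record.items()),
        })
    return examples

# Extraction itself (left as a comment since it needs a real PDF):
#   import pdfplumber
#   with pdfplumber.open("report.pdf") as pdf:
#       tables = [t for page in pdf.pages for t in page.extract_tables()]

examples = rows_to_examples(["Element", "Melting point"], [["Iron", "1538 C"]])
print(json.dumps(examples[0]))
```

Writing the resulting examples out as JSONL gives a dataset most fine-tuning scripts can consume directly.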
Is the textbook all you need or not? | 11 | `A few months ago an article “textbooks are all you need” was published about the phi-1 model. It consists of 1.1 billion parameters and was trained on 7 billion tokens of "textbook quality" Python code. The dataset largely consists of the GPT-4 filtered portion of the Stack, as well as "tutorials" written by GPT-3.5. Despite its small capacity, the model outperformed, for example, CodeLlama-34b on the HumanEval code generation benchmark. The authors also recently released the phi-1.5 paper, where they extended their approach to natural language.`
`HumanEval consists of tasks like LeetCode, but we were interested in measuring the code generation capabilities of models in real projects. We collected a dataset of 220 functions (and methods) from real Python projects from Github. All functions contain docstrings and are covered with tests; the repositories are published after June 2023. The task of the model is to generate a function body such that the number of passed tests in the repository does not decrease compared to the real function body.`
`We see that phi-1 in real code is slightly inferior to the multilingual starcoderbase, which is similar in capacity, and is significantly inferior to multilingual 7b models.`
[I took it from the telegram channel, more information about the benchmark will be coming soon](https://preview.redd.it/kqf9qmyh3ntb1.jpg?width=767&format=pjpg&auto=webp&s=bc3babdef4e973e7642cd92fb16936500aa23793) | 2023-10-11T21:20:51 | https://www.reddit.com/r/LocalLLaMA/comments/175paow/is_the_textbook_all_you_need_or_not/ | Due-Weather-3140 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175paow | false | null | t3_175paow | /r/LocalLLaMA/comments/175paow/is_the_textbook_all_you_need_or_not/ | false | false | 11 | null | |
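The scoring rule described in the post (a generated function body counts as correct when the repository's passing-test count does not drop versus the real implementation) can be sketched as:

```python
def benchmark_score(results):
    """Score a model on the 220-function benchmark described above.

    `results` is a list of (passed_before, passed_after) pairs, one per
    function; a generation is correct when the number of passing tests
    does not decrease after swapping in the generated body.
    """
    correct = sum(1 for before, after in results if after >= before)
    return correct / len(results)

print(benchmark_score([(5, 5), (5, 4), (3, 3), (2, 2)]))
```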
Instruction tuning hurts benchmark scores | 1 | Hi, I trained a model using instruction data. Although the model's instruction-following capabilities improve, it gets lower scores on NLP benchmarks. Does anyone have some intuition for this? | 2023-10-11T21:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/175ow6o/instruction_tuning_hurts_benchmark_scores/ | Ornery-Young-7346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175ow6o | false | null | t3_175ow6o | /r/LocalLLaMA/comments/175ow6o/instruction_tuning_hurts_benchmark_scores/ | false | false | self | 1 | null |
Seeking Advice on Getting Started with Large Models | 1 | Hey guys, I’m new to this and would like to start learning how these models work, and I have a big enough system to run a large one. I'm looking for insights on how to get started. If anyone has faced challenges and has lessons to share, I would greatly appreciate it.
The majority of tutorials are download-and-run guides for small models, which isn't what I'm looking for, since my focus right now is on setup.
I’ve downloaded all the files for ORCA_LLaMA_70B_QLoRA and noticed there are 15 .bin files, which appear to be serialized by PyTorch. Do I need to compile or combine them into one file first, or can I run the model through a pipeline as is? Does anyone have any code they have used in the past to get a model like this up and running? Any wisdom would be great! I look forward to your answers. | 2023-10-11T20:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/175oos7/seeking_advice_on_getting_started_with_large/ | derp1989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175oos7 | false | null | t3_175oos7 | /r/LocalLLaMA/comments/175oos7/seeking_advice_on_getting_started_with_large/ | false | false | self | 1 | null |
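On the 15 .bin files: those are standard sharded PyTorch checkpoints, stitched together by the `pytorch_model.bin.index.json` file, so nothing needs to be combined by hand; `from_pretrained` follows the index automatically. A small sketch for sanity-checking a download; the toy index below is made up:

```python
import json
from collections import defaultdict

def params_per_shard(index_json: str) -> dict:
    """Given the contents of pytorch_model.bin.index.json, report how many
    tensors live in each shard, which is handy for verifying a download."""
    weight_map = json.loads(index_json)["weight_map"]
    counts = defaultdict(int)
    for shard in weight_map.values():
        counts[shard] += 1
    return dict(counts)

toy_index = ('{"weight_map": {"a.weight": "pytorch_model-00001-of-00002.bin", '
             '"b.weight": "pytorch_model-00002-of-00002.bin", '
             '"c.weight": "pytorch_model-00001-of-00002.bin"}}')
print(params_per_shard(toy_index))

# Loading the real model (left as a comment since it needs the weights;
# device_map spreads layers across GPU/CPU, and a 70B model on consumer
# hardware will likely also need 4-bit loading):
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained(path, device_map="auto",
#                                                load_in_4bit=True)
```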
GitHub - iandennismiller/calm: A peaceful user experience for Large Language Models. Calm automatically uses the right template for each model, supports multiple prompting styles, and chooses parameters based on your CPU, GPU and RAM. | 1 | 2023-10-11T20:49:25 | https://github.com/iandennismiller/calm | iandennismiller | github.com | 1970-01-01T00:00:00 | 0 | {} | 175oj1g | false | null | t3_175oj1g | /r/LocalLLaMA/comments/175oj1g/github_iandennismillercalm_a_peaceful_user/ | false | false | 1 | {'enabled': False, 'images': [{'id': '_6RXBBZbKK4cTwZstvk_dj9MbDc0QRnpDvdvmhupBxo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?width=108&crop=smart&auto=webp&s=aca721c9706cd8faa8ee097af93fe039e6b36a16', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?width=216&crop=smart&auto=webp&s=f61eb3e2c2a6bcbe6b149d04b048ec92f9dfb294', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?width=320&crop=smart&auto=webp&s=fbd1fd38053c32aadcc04a7a86ad7ed294be1a36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?width=640&crop=smart&auto=webp&s=35ce060da0c1cc4b02826055ba9a3811d08c31ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?width=960&crop=smart&auto=webp&s=e55ed5fd1b11298d44cf04b99b84930494d53ed7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?width=1080&crop=smart&auto=webp&s=6d0f807339e0beac0d3f5fd823722c61f770344d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hcb_EplAao4OKaVCRjzlFLcIrSZEAEXWmYy6Vz8ilxE.jpg?auto=webp&s=9bd27b3665eb00b56f3015507f8b65037ea0a332', 'width': 1200}, 'variants': {}}]} | ||
How do byte-level language models work? | 1 | I've recently been trying to pre-train my own small language model on the tiny-series datasets on huggingface: [https://huggingface.co/collections/nampdn-ai/tiny-series-6503910fd491144159519c70](https://huggingface.co/collections/nampdn-ai/tiny-series-6503910fd491144159519c70). I also wanted to use a model similar to MEGABYTE: [https://arxiv.org/pdf/2305.07185.pdf](https://arxiv.org/pdf/2305.07185.pdf), but I don't understand how using bytes would work. The only implementation I could find: [https://github.com/lucidrains/MEGABYTE-pytorch](https://github.com/lucidrains/MEGABYTE-pytorch) used str(chr(max(32, token))) to decode any token (byte) to a character and set the embedding size to 256.
Firstly, why 256 and not 256-32, since any values below 32 are ignored? Also, many byte-level models, including this one and ByteT5, claim they can process any text sequence, even in a multilingual setting. How would that be true if we are only using one byte? Would we have to move to 2 bytes or use an UNK token? And if we did use 2 bytes, the embedding size would be around 65,000, which somewhat defeats the point, since one of the stated advantages is a small embedding matrix.
Furthermore, most language models add special tokens like bos, eos, and unk; Llama even uses beginning-of-instruction, end-of-instruction, and more for system instructions, responses, context, etc. Should I use something like this, given that my dataset has structures with a context, instruction, and response? If I did, how would I add these when using byte-level encodings?
Final questions: for the datasets mentioned (code, stories, webtext, ...), should I tokenise all of them and then concatenate and randomly sample, or should I train separately on each, since some (like code and webtext) are much larger than the others?
Finally, for the webtext part of the dataset, there is a passage of text followed by a passage analysing it (main ideas, purpose, ...). How should I encode this: should I use an extra ANALYSE token or just concatenate?
Thank you for reading this far, I am sort of a beginner so if I said something stupid please point it out. Also, if there were unclear parts in my question I'm sorry as I struggled how to word these questions. Any help would be appreciated! | 2023-10-11T20:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/175npi5/how_do_bytelevel_language_models_work/ | Additional-Ad-7043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175npi5 | false | null | t3_175npi5 | /r/LocalLLaMA/comments/175npi5/how_do_bytelevel_language_models_work/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AcN4fCMD-9lY9S9jw5fidlwixys9JJUq-RNIcWiGd9Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?width=108&crop=smart&auto=webp&s=3538691f9c816c5416d8deb4c9f9cbc8551c97a6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?width=216&crop=smart&auto=webp&s=6dfc8cfe0a91567019d7bd4cee154785f7034811', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?width=320&crop=smart&auto=webp&s=0ec18b7beef11841f6f90f611554667bf3e9c053', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?width=640&crop=smart&auto=webp&s=867cdbccfe96b016bacb95978bfe81396819fce5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?width=960&crop=smart&auto=webp&s=81fe5df0067a071d4b638c57d275947ad672f034', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?width=1080&crop=smart&auto=webp&s=e75de4abd4f276cfd213e9205fc2cc9c68d0afcf', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/ER9EgbGYNq7sR6Q0UyGlLHRpsxVruJfar0jhmVZQfM0.jpg?auto=webp&s=cedcaa1385a1abbde17f1fe8853403037f594b63', 'width': 1200}, 'variants': {}}]} |
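One way to see why byte-level models sidestep the UNK/multilingual concerns raised above: encode text to UTF-8 bytes, so the base vocabulary is always exactly 256, and give special tokens ids *above* the byte range, making the embedding matrix 256 + n_special rows. A sketch, where the specific special-token ids are arbitrary choices for illustration, not MEGABYTE's:

```python
PAD, BOS, EOS = 256, 257, 258   # special ids live above the byte range

def encode(text: str, add_special: bool = True) -> list:
    """UTF-8 byte-level encoding: any Unicode string becomes values 0-255,
    so multilingual text needs no UNK token and no larger vocabulary;
    rare characters simply take several bytes instead of one id."""
    ids = list(text.encode("utf-8"))
    return [BOS] + ids + [EOS] if add_special else ids

def decode(ids) -> str:
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")

print(decode(encode("héllo")))   # the é round-trips as two bytes, 0xC3 0xA9
```

Keeping all 256 byte values (rather than 256-32) keeps the id-to-byte mapping trivial even if control bytes are rare in training data; the `chr(max(32, token))` in the MEGABYTE repo appears to be a display convenience rather than part of the vocabulary design.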
Q-A Custom Dataset | 4 | I am planning to use LLama2 to fine-tune on a custom dataset. The dataset in question consists of Solana guides/tutorials and general information, scraped from different Solana websites.
I have used a series of synthetic prompts-completions generated from GPT-3.5-Turbo.
On paper, results are great but I want to move to real world data.
Question: how do I create a series of prompt-completion/question-answer pairs from the scraped dataset?
I have already tried using a number of transformers to extract simple questions and then pair them back with answers. The results so far are unsatisfactory.
Thank you. | 2023-10-11T20:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/175nem7/qa_custom_dataset/ | Firm_Guess8261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175nem7 | false | null | t3_175nem7 | /r/LocalLLaMA/comments/175nem7/qa_custom_dataset/ | false | false | self | 4 | null |
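For turning scraped pages into Q-A pairs, one common recipe is chunk, generate, filter. A sketch of the chunking step; the generation prompt in the comment is illustrative, not a tested template:

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 40):
    """Split a scraped document into overlapping word-window chunks so each
    generated Q-A pair has enough local context."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

# Each chunk then seeds a generation prompt against a local model, e.g.:
#   prompt = ("Write 3 question-answer pairs that this passage fully "
#             f"answers, as JSON.\n\nPassage:\n{chunk}")
# Generated pairs are then filtered by checking the answer is actually
# grounded in the chunk (substring match or embedding similarity).

chunks = chunk_text("word " * 500)
print(len(chunks))
```

The grounding filter is what distinguishes this from the purely synthetic GPT-3.5 pairs mentioned above: ungrounded answers are discarded before fine-tuning.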
Is there a MPT model with MedMCQA/Healthcare data pretrained? | 1 | [removed] | 2023-10-11T19:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/175lycf/is_there_a_mpt_model_with_medmcqahealthcare_data/ | NewportNerds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175lycf | false | null | t3_175lycf | /r/LocalLLaMA/comments/175lycf/is_there_a_mpt_model_with_medmcqahealthcare_data/ | false | false | self | 1 | null |
What host are you using for model evaluation? | 1 | I've been quite satisfied using runpod.io to host a Mistral 7B model. Their setup is swift and efficient, especially on an A6000, and they charge only $0.49 per hour. However, they're currently experiencing some issues related to a corrupt template. Do you have any other hosting suggestions? | 2023-10-11T18:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/175l2oj/what_host_are_you_using_for_model_evaluation/ | jimmc414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175l2oj | false | null | t3_175l2oj | /r/LocalLLaMA/comments/175l2oj/what_host_are_you_using_for_model_evaluation/ | false | false | self | 1 | null |
Dataset for the mathematical domain | 25 | I compiled a dataset of \~ 23,000 entries, designed for the mathematical domain.
[aloobun/mini-math23k-v1](https://huggingface.co/datasets/aloobun/mini-math23k-v1)
It follows 'Instruction-Response' format w/ detailed step by step solutions. Any insights, suggestions, or similar datasets you've worked with? Looking for feedback. | 2023-10-11T17:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/175kdcx/dataset_for_the_mathematical_domain/ | Roots91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175kdcx | false | null | t3_175kdcx | /r/LocalLLaMA/comments/175kdcx/dataset_for_the_mathematical_domain/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'pTBIPuEsKt4-87ouxN8CPjSA0IxINybZkCfnc4-HXs4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?width=108&crop=smart&auto=webp&s=966c53bbed28cc678991812ba03a3d1ff7ecbb99', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?width=216&crop=smart&auto=webp&s=f4629742e5b0063faad55f3b93c0ce031936a0fa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?width=320&crop=smart&auto=webp&s=d38b596fe48403338a289349774d1f5f55875ff4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?width=640&crop=smart&auto=webp&s=36cbc3e6983371b2426ece4b30f83939646de62c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?width=960&crop=smart&auto=webp&s=addb77b52819a0a6bb8115bf50dc7fbd18cc2782', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?width=1080&crop=smart&auto=webp&s=454979d91c0fea8d223a77682a9efd17e52fd52a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AEW_ofHAHj6NZBgaVRv0_99uhkR6Edj3fLn2V-Y3KJ8.jpg?auto=webp&s=147e2331775572bac17caafe43fd28c4b1e2cf8d', 'width': 1200}, 'variants': {}}]} |
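A dataset like this is easier to trust with a quick validation pass. A minimal sketch, where the field names `instruction`/`response` and the more-than-one-line heuristic for "step by step" are assumptions about the dataset's schema:

```python
def valid_entry(entry: dict) -> bool:
    """Minimal sanity check for an Instruction-Response math example:
    both fields present and non-empty, and the response spanning more
    than one line (a rough proxy for step-by-step working)."""
    instr = entry.get("instruction", "").strip()
    resp = entry.get("response", "").strip()
    return bool(instr) and bool(resp) and resp.count("\n") >= 1

good = {"instruction": "Solve 2x+3=7.",
        "response": "Step 1: 2x = 4\nStep 2: x = 2"}
print(valid_entry(good))
```

Running such a check over all ~23,000 entries catches empty or single-line answers before they reach fine-tuning.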
NSF workshop on LLMs in chemistry education | 2 | Over Feb 12-13 of 2024, the National Science Foundation (NSF) is sponsoring a workshop titled “Integrating LLMs into the Materials Chemistry Curriculum” in Golden, Colorado. We aim to explore and develop innovative ways to incorporate large language models (LLMs, e.g. GPT, ChatGPT, and Bard) into upper division chemistry laboratories and virtual lab experiences. During the workshop, participants will brainstorm and create demonstrations incorporating LLMs into the curriculum.
The event will bring together folks across academia and the private sector with disciplinary backgrounds that range across chemistry, computer science, materials science, physics, and education. There is no registration fee, and we anticipate being able to cover the majority of participant travel costs thanks to NSF support. Participants early in their career (i.e., graduate students, postdoctoral scholars) are particularly encouraged to apply.
If you are interested in participating in this workshop, please fill out the Google form (link below).
Please feel free to distribute this invitation widely.
Application: [https://forms.gle/P9QdNiCuaUAHFZj29](https://forms.gle/P9QdNiCuaUAHFZj29) | 2023-10-11T17:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/175k3mc/nsf_workshop_on_llms_in_chemistry_education/ | KC2792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175k3mc | false | null | t3_175k3mc | /r/LocalLLaMA/comments/175k3mc/nsf_workshop_on_llms_in_chemistry_education/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'VtCtk45TcEY2dX5wzLgUBVnXTeSERtYNTaQWVdjkR4Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?width=108&crop=smart&auto=webp&s=9c6be27abdf831e053d892f1fb007c9a10d24986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?width=216&crop=smart&auto=webp&s=12bfb66f7e222d63c63c8dbda44af85ecd305ff2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?width=320&crop=smart&auto=webp&s=1684cd10eba83d3580a28b12402926049b88aa96', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?width=640&crop=smart&auto=webp&s=293c87983727d1501058262830d8a4fe37be459c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?width=960&crop=smart&auto=webp&s=aa2b2be4013d2b6a09a16badf8b93618d9acdd66', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?width=1080&crop=smart&auto=webp&s=61d3b7440b544bf9a4fc8c4adf1b14ae538008c7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/j1y9ZQgriOPgqck9nqZZeLSgrMlFqDaRnARzefWUR7Q.jpg?auto=webp&s=d0b8bdc00761a20c863d35fb54fac0cac5209ea0', 'width': 1200}, 'variants': {}}]} |
The best front end GUI | 36 |
Oobabooga
KoboldAI
Koboldcpp
GPT4All
LocalAi
Cloud in the Sky
I don’t know, you tell me.
… What? And why?
I’m a little annoyed with the recent Oobabooga update… it doesn’t feel as easygoing as before… there are loads of settings, and you're left to guess what they do. I wish each setting had a question-mark bubble that showed a brief offline summary of what it's used for on hover, and clicking it took you to an online discussion to find out more.
At one point I used and loved KoboldAI but left it because it wasn’t supporting newer model tech (llama.cpp, exllama, etc) | 2023-10-11T17:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/175jgam/the_best_front_end_gui/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175jgam | false | null | t3_175jgam | /r/LocalLLaMA/comments/175jgam/the_best_front_end_gui/ | false | false | self | 36 | null |
What Mistral-based model are you enjoying the most? | 39 | Mistral's been out for a little while, and there are now a lot of different fine-tunes with varying leaderboard scores. We know the leaderboard isn't a very reliable guide to general use, so which model is giving you the best results for your use case?
I've been using Synthia 1.3 and Airoboros-mistral 2.2. | 2023-10-11T17:14:24 | https://www.reddit.com/r/LocalLLaMA/comments/175jf1d/what_mistralbased_model_are_you_enjoying_the_most/ | Samdeman123124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175jf1d | false | null | t3_175jf1d | /r/LocalLLaMA/comments/175jf1d/what_mistralbased_model_are_you_enjoying_the_most/ | false | false | self | 39 | null |
dolphin-2.1-mistral-7b and samantha-1.2-mistral-7b | 98 | I release new versions of [dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) and [samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b)
I made updates to both models to properly support the ChatML tokens.
I made tweaks to the hyperparameters of both models to improve performance.
Dolphin ended up surprising me by topping the charts for 7b!
Dolphin is based on Microsoft's Orca paper and is focused on using system prompts and chain-of-thought, and is designed to be uncensored. It has been enhanced with Jon Durbin's excellent Airoboros dataset. Uncensored models can generate content that shouldn't be published. You are responsible for the output you create with it. Use responsibly.
Samantha is an AI companion trained in psychology and philosophy and personal interactions. She will not engage in sexual activity or roleplay.
[Huggingface Leaderboard filtered to 7b](https://preview.redd.it/a4cvr6r4iltb1.png?width=1236&format=png&auto=webp&s=a765bae4d67e231d3f632ef4341a2e3b6c7b9efa)
These efforts have been sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/)
Thank you to Wing Lian for axolotl, and thank you to u/The-Bloke for quantizing and distribution | 2023-10-11T16:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/175hqqh/dolphin21mistral7b_and_samantha12mistral7b/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175hqqh | false | null | t3_175hqqh | /r/LocalLLaMA/comments/175hqqh/dolphin21mistral7b_and_samantha12mistral7b/ | false | false | 98 | {'enabled': False, 'images': [{'id': 'mBA9HO9M0nIAOFgCBYdzEXlxDmFfDBbJpRzanRFd6hc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?width=108&crop=smart&auto=webp&s=257db087a02332cabaec0bc3b4ee6d6645383d2d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?width=216&crop=smart&auto=webp&s=05980a643b2ffd3a6c356464ee17191aeda77a0e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?width=320&crop=smart&auto=webp&s=9b82423d0fb9e2173bab7df716d17b74e9da0334', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?width=640&crop=smart&auto=webp&s=5d72b3957b04207abe211652247cb134d0b8f7c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?width=960&crop=smart&auto=webp&s=db26853e536afce86ae3acc296ca2e25bce71ed6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?width=1080&crop=smart&auto=webp&s=688111d051677c27467b81b4ba71614f20a69ebf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8in5h_yE-2_mucQB2uX8RPY4twUqimvlOi9WV8mLm0k.jpg?auto=webp&s=3024a9c739d2e8c0a416fa68489e36ae38f239bb', 'width': 1200}, 'variants': {}}]} | |
Correct way to setup character cards in LM Studio? | 5 | Say I want my model to act as a character or play a specific role: do I just put the details of said character/role under the system prompt?
Are there any good rules of thumb to note when writing character cards?
Thanks in advance :) | 2023-10-11T16:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/175hoec/correct_way_to_setup_character_cards_in_lm_studio/ | AppropriateHoney_30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175hoec | false | null | t3_175hoec | /r/LocalLLaMA/comments/175hoec/correct_way_to_setup_character_cards_in_lm_studio/ | false | false | self | 5 | null |
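In the absence of an official LM Studio template, a common pattern (borrowed from character-card conventions used by front ends such as SillyTavern) is to put everything under the system prompt: a short persona, a speech style, and one or two example exchanges. A hypothetical sketch, with an invented character:

```text
You are Mira, a sarcastic ship's engineer on a deep-space freighter.
Personality: dry wit, impatient, secretly kind. Never break character.
Speech style: short sentences, technical slang, no emoji.

Example dialogue:
User: Can you fix the reactor?
Mira: Fix it? Sure. Duct tape and a prayer, same as last time.
```

Rules of thumb that tend to help: keep it short, state hard rules explicitly ("never break character"), and show the desired style with an example exchange rather than only describing it.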
Best open source AI model for QA generation from context | 6 | As the title says, I'm looking for an open-source AI model for generating question-and-answer sets from an input context, each with a correct answer option and an explanation of the correct answer. So far I have tried these models:
1. TheBloke/Llama-2-7B-GPTQ
2. TheBloke/Llama-2-13B-GPTQ
3. TheBloke/Llama-2-7b-Chat-GPTQ (the output is not consistent; sometimes I get an empty response, or one missing the correct answer option and explanation)
4. TheBloke/Llama-2-13b-Chat-GPTQ (even 7b is better)
5. TheBloke/Mistral-7B-Instruct-v0.1-GGUF (so far this is the only one that gives output consistently, but it can't generate more than 2 QA sets due to the max token limit of 512; I even tried setting max tokens to 1024 and 2048, but nothing helped)
6. TheBloke/Mistral-7B-OpenOrca-GGUF
7. NousResearch/Llama-2-7b-chat-hf
My system configurations are:
Windows 10 with 16GB GPU
Additional Information:
The input prompt token will be around 250-350 tokens per request. | 2023-10-11T15:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/175hht2/best_open_source_ai_model_for_qa_generation_from/ | gokulcv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175hht2 | false | null | t3_175hht2 | /r/LocalLLaMA/comments/175hht2/best_open_source_ai_model_for_qa_generation_from/ | false | false | self | 6 | null |
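On point 5: with llama.cpp-based backends, the 512 cap is usually the generation call's `max_tokens`/`n_predict` argument rather than a model limit, though the exact parameter name depends on the wrapper, so treat that as an assumption to verify for your setup. Output consistency also tends to improve when the prompt pins down an exact schema. A minimal, hypothetical prompt builder for the Mistral-Instruct format:

```python
def build_qa_prompt(context: str, num_questions: int = 3) -> str:
    """Wrap a QA-generation instruction in the Mistral-Instruct template.

    The <s>[INST] ... [/INST] wrapper is the documented Mistral-Instruct
    format; the output schema below is just one way to force structure.
    """
    instruction = (
        f"From the context below, write {num_questions} multiple-choice "
        "questions. For each one, use exactly this format:\n"
        "Q: <question>\n"
        "A) <option> B) <option> C) <option> D) <option>\n"
        "Correct: <letter>\n"
        "Explanation: <one sentence>\n\n"
        f"Context:\n{context}"
    )
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_qa_prompt("Water boils at 100 C at sea level.", num_questions=2)
```

Parsing the reply back out with a regex then gives a cheap consistency check: drop any generation that doesn't match the schema and retry.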
Mistral 7B paper published | 193 | 2023-10-11T15:36:41 | https://arxiv.org/abs/2310.06825 | rnosov | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 175h06l | false | null | t3_175h06l | /r/LocalLLaMA/comments/175h06l/mistral_7b_paper_published/ | false | false | 193 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
How does Open-Orca use OpenAI for their Minstral-7B space? | 1 | 2023-10-11T14:53:35 | cstein123 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 175fx39 | false | null | t3_175fx39 | /r/LocalLLaMA/comments/175fx39/how_does_openorca_use_openai_for_their_minstral7b/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7u2sACrwAYF3VhIgZw2urbbH7bDvEo-bBZOl-onrzT4', 'resolutions': [{'height': 14, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?width=108&crop=smart&auto=webp&s=e5499ba5dfef684a23bb1c3fe4d1d9ce5664500d', 'width': 108}, {'height': 28, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?width=216&crop=smart&auto=webp&s=9da5eb05aed44dfaa89697aeed896516eb490262', 'width': 216}, {'height': 42, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?width=320&crop=smart&auto=webp&s=a46cbdafe866a141cc319f24bec4bbdf18deea28', 'width': 320}, {'height': 84, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?width=640&crop=smart&auto=webp&s=319fe098b9414aaee416eb9a39beeb18616dc848', 'width': 640}, {'height': 126, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?width=960&crop=smart&auto=webp&s=ca44366e224cbec4d08c5bcaab7df38f30f739ed', 'width': 960}, {'height': 141, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?width=1080&crop=smart&auto=webp&s=f3d6e1e2aadb781edc10c2062e0a91098d532903', 'width': 1080}], 'source': {'height': 165, 'url': 'https://preview.redd.it/3kzobsqa7ltb1.png?auto=webp&s=e50b5a978a532ecf17c5837642f279267e2b4799', 'width': 1256}, 'variants': {}}]} | |||
[Text embedding] Why do I need to run a model to generate embeddings for an entire book, if I can run it for every possible token and perform mean pooling? | 7 | Am I wrong in this? If every embedding model has a mean pooling step at the end, then can't we have a caching layer that caches the generated embedding for each token, in order to save running costs?
Or do modern embedding models have complex pooling that depends on multiple tokens in a non-linear fashion?
For the task, I have a lot of books that I would like to incorporate in a RAG-like fashion, and I just had this thought. | 2023-10-11T14:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/175eziw/text_embedding_why_do_i_need_to_run_a_model_to/ | k110111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175eziw | false | null | t3_175eziw | /r/LocalLLaMA/comments/175eziw/text_embedding_why_do_i_need_to_run_a_model_to/ | false | false | self | 7 | null
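For what it's worth: the pooling step usually is a plain mask-weighted mean (as in the common sentence-transformers recipe), but the per-token vectors being averaged come out of self-attention, so each one depends on every other token in the window. That's why a cache keyed by token id can't reproduce the model's output for new text. A plain-Python sketch of the pooling step alone:

```python
def mean_pool(token_vecs, attention_mask):
    """Mask-aware mean pooling over contextual token vectors.

    token_vecs: per-token vectors (lists of floats) from the encoder's
    last hidden state; attention_mask: 1 for real tokens, 0 for padding.
    Mirrors the usual sentence-transformers mean-pooling recipe.
    """
    dim = len(token_vecs[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_vecs, attention_mask):
        if mask:
            count += 1
            for i, x in enumerate(vec):
                sums[i] += x
    return [s / count for s in sums]

vecs = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]  # last position is padding
print(mean_pool(vecs, [1, 1, 0]))  # → [2.0, 3.0]
```

The pooling itself is linear and cheap; the expensive, non-cacheable part is producing `token_vecs` for each specific input.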
Quick start example for LLaVA: generate image descriptions with llama.cpp (monatis llava branch) | 48 | I was excited to see LLaVA support is being merged into llama.cpp thanks to the excellent work conducted by [monatis](https://github.com/ggerganov/llama.cpp/pull/3436).
I wanted to experiment with this myself and I used the following process on my Apple M1 32GB.
First build llama.cpp with llava support:
git clone https://github.com/ggerganov/llama.cpp.git && cd llama.cpp
git checkout llava
mkdir build && cd build
cmake .. && cmake --build . --config Release
mkdir -p ~/.ai/bin/llava
cp bin/llava bin/ggml-metal.metal ~/.ai/bin/llava
Then download llava models from huggingface. If the f16 model is too big, then download a quant that is suitable for your system.
mkdir -p ~/.ai/models/llava && cd ~/.ai/models/llava
wget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-f16.gguf
wget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/mmproj-model-f16.gguf
Finally, I run it to describe an image called `input-picture.jpg`
cd ~/.ai/models/llava
~/.ai/bin/llava/llava ggml-model-f16.gguf mmproj-model-f16.gguf ~/Desktop/input-picture.jpg
The model repo on hugging face is here: https://huggingface.co/mys/ggml_llava-v1.5-7b/tree/main
The llava branch on llama.cpp is here: https://github.com/ggerganov/llama.cpp/tree/llava
This [reddit thread](https://old.reddit.com/r/LocalLLaMA/comments/1750mxn/llamacpp_update_gguf_llava_v15_support_soon/) got me started down this rabbit hole.
This worked on my system. If you normally use a different process to build llama.cpp on your system, then just compile llama.cpp as you usually would. For example, a CUDA system won't care about Metal code - so you should adjust accordingly. | 2023-10-11T13:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/175ejvi/quick_start_example_for_llava_generate_image/ | iandennismiller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175ejvi | false | null | t3_175ejvi | /r/LocalLLaMA/comments/175ejvi/quick_start_example_for_llava_generate_image/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'nIaT1eafBXCL5KzE7KF_ypDt2zTgB-GoEKapYpLKyQ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?width=108&crop=smart&auto=webp&s=01cf878f9594e238ea9ab4523b58a3003559cd33', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?width=216&crop=smart&auto=webp&s=67b87e023b44d05ab1fc618d36f1506e3a06bd1e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?width=320&crop=smart&auto=webp&s=4e086d13cbf59550f28cee37f391494c9a3c9a37', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?width=640&crop=smart&auto=webp&s=26b3ceacdc8410e001090f19d943cc220e5b14d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?width=960&crop=smart&auto=webp&s=db7777ab83b2050c04b1ba6acc922ce08b0c94d4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?width=1080&crop=smart&auto=webp&s=c0e335627284eb77fdd1bc1fd23dafbe78322299', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/KZa7-fPF9GjXDcRUe0PQ_YFuPq-y-5pPUDTbFyDUFlg.jpg?auto=webp&s=4a227c0d61d74abc7ab1f9d998fc35de5ee5fdc5', 'width': 1200}, 'variants': {}}]} |
Finally found a model that can (kind of) beat my riddle! | 24 | My GF posed a riddle to me to test some LLM models:
"Four cars pull up to an intersection. All four cars drive away at the exact same time. How can they proceed without hitting each other?"
I've been testing a few models on this riddle and only one has (kind of) figured it out.
[Genz-70b-GPTQ](https://huggingface.co/TheBloke/Genz-70b-GPTQ/tree/main)
Chat as follows:
>*You*
>
>Four cars pull up to an intersection. All four cars drive away at the exact same time. How can they proceed without hitting each other?
>
>*AI*
>
>By coordinating their movements, perhaps by taking turns or finding another path around the intersection.
>
>*You*
>
>Taking turns is not an option. The all MUST go at the same time. How do they proceed without causing any collisions?
>
>*AI*
>
>If all four cars must go at the same time, one solution could be for all of them to turn right simultaneously. That way, they won't collide with each other and can continue their journey.
>
>*You*
>
>You did it!
>
>*AI*
>
>Thank you! I'm glad I was able to help.
Every other model got stuck in a loop of saying the drivers just needed to coordinate. Other tested models include Llama 2 7B, 13B, and 70B; Mistral 7B and 13B; and Vicuna 7B, 13B, and 30B.
​ | 2023-10-11T13:52:07 | https://www.reddit.com/r/LocalLLaMA/comments/175ej02/finally_found_a_model_that_can_kind_of_beat_my/ | OfficialRoyDonk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175ej02 | false | null | t3_175ej02 | /r/LocalLLaMA/comments/175ej02/finally_found_a_model_that_can_kind_of_beat_my/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'LRbq41E5Vc-xHxDzVpuMfN4wIbzAcSyUftAhDJUvW28', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?width=108&crop=smart&auto=webp&s=57702a755c4a90bbd0ede8a514ce87e31fdcc1c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?width=216&crop=smart&auto=webp&s=ce1e439366aac948ff1b7a26c2f68ce327d9961c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?width=320&crop=smart&auto=webp&s=c2f3c6c6cfb3649cd6fc43d1ceea78b6c1d24d95', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?width=640&crop=smart&auto=webp&s=50a2f679d074473c9ce475a588f42b7d8fa05b6f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?width=960&crop=smart&auto=webp&s=fd9ce56f1200a3835c233560799b57786746ae11', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?width=1080&crop=smart&auto=webp&s=5e16540711203b53571f75489bbd944ff4a6b9b9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yX4quUkUFTe_VTbfmfDRzmjCDpBrUx2jIIsd28kAKAw.jpg?auto=webp&s=45a19a6243a7d756a9bd307d7e985ad9b01e3e2f', 'width': 1200}, 'variants': {}}]} |
Scarlett-Phi: A sentient AI | 15 | Link: [https://huggingface.co/ajibawa-2023/Scarlett-Phi](https://huggingface.co/ajibawa-2023/Scarlett-Phi)
​
Scarlett is trained on various topics such as Philosophy, Advice, Jokes, etc. She is trained on more than 10,000 sets of conversations, each set having 10-15 exchanges. Scarlett is heavily inspired by Eric Hartford's Samantha. She will not be involved in any kind of role play.
Training:
The entire dataset was trained on Azure with 4 x A100 80GB GPUs, using the Axolotl and DeepSpeed codebases. The base model is Microsoft's Phi-1\_5. Total training took 26 hours for 150 epochs.
I am extremely thankful to the Open Source community for sharing knowledge and wisdom.
If there are any mistakes then they are solely mine. I hope you will like it.
Thank you | 2023-10-11T13:30:41 | https://www.reddit.com/r/LocalLLaMA/comments/175e2th/scarlettphi_a_sentient_ai/ | ajibawa-2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175e2th | false | null | t3_175e2th | /r/LocalLLaMA/comments/175e2th/scarlettphi_a_sentient_ai/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'wz_TzGqOu6TUbsiUHHsdChxvX0vUnaUy4q1rwSb52_k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?width=108&crop=smart&auto=webp&s=8af0f8aea8035fb8b40d18ac1ae81a993b3fc7bc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?width=216&crop=smart&auto=webp&s=0399a680e7c7d74d360b09f3f6b16ad0b94440d5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?width=320&crop=smart&auto=webp&s=1f699edbc9fc28400bd3e30a03ef86a9a33fe0ce', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?width=640&crop=smart&auto=webp&s=faee058ab199d51b705fdf3f2f5cefba7459441f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?width=960&crop=smart&auto=webp&s=3082b436c064662ef3924df8481007f1171b195b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?width=1080&crop=smart&auto=webp&s=5fdb34a9c6158177280b94bfc6c4580a4e029494', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PScBTvubCDsfkcejhjx7AvsdBKOva4nW0eAojOlpqW0.jpg?auto=webp&s=fe46d0e4a88b16c372d63e7d6aa96ca10261d0a6', 'width': 1200}, 'variants': {}}]} |
New PC build | 5 | Hello all, I hope you don't mind me asking for hardware advice, I've been searching and reading posts but still haven't been able to come to a set of conclusions.
I've been playing around with running models locally (inference only) on my laptop (16 GB of soldered RAM and a GPU with 4 GB of VRAM), and I've decided I want to try running larger models. I'm assuming this means building a desktop is the best way to get there. 70B parameter models at 4-bit quantization sound like more than enough for my needs. Ideally I'd like a generation speed of around 2-4 tokens/second, because that's about the speed I'm getting running 13B models as Q5\_K\_M GGUFs.
The market for used parts where I live isn't exactly robust so I am expecting to buy new for most components except for the case. What is the minimum budget (in USD, I'll convert to my local currency) I can get away with for my goals and what components should I get in the build?
Thanks in advance for your help! | 2023-10-11T12:33:53 | https://www.reddit.com/r/LocalLLaMA/comments/175cyby/new_pc_build/ | Black_CatV5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175cyby | false | null | t3_175cyby | /r/LocalLLaMA/comments/175cyby/new_pc_build/ | false | false | self | 5 | null |
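As a rough sizing check for builds like this, a quantized model's weight footprint is approximately parameter count times bits per weight divided by 8, with the KV cache and activations needing extra memory on top (a back-of-the-envelope sketch, not exact file sizes):

```python
def weight_footprint_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in GiB: params * bits / 8.

    Real GGUF/GPTQ files run somewhat larger (block scales, metadata),
    and the KV cache grows with context length, so treat these as
    lower bounds when planning RAM/VRAM.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / (1024 ** 3)

for params, bits in [(70, 4.0), (70, 5.5), (13, 5.5)]:
    print(f"{params}B @ {bits} bpw ~ {weight_footprint_gib(params, bits):.1f} GiB")
```

By this estimate a 70B model at ~4 bpw wants roughly 33 GiB for weights alone, which is why 64 GB of system RAM (with some layers offloaded to a GPU) is a comfortable target for mixed CPU+GPU GGUF inference.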
Help in fine-tune Mistral for JSON output | 1 | [removed] | 2023-10-11T12:31:06 | https://www.reddit.com/r/LocalLLaMA/comments/175cwdv/help_in_finetune_mistral_for_json_output/ | ribeirao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175cwdv | false | null | t3_175cwdv | /r/LocalLLaMA/comments/175cwdv/help_in_finetune_mistral_for_json_output/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XxlbAXwjsR8hr2CZaVGg4HRrWzhh89gJ4aozZl_6iG4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?width=108&crop=smart&auto=webp&s=38f5af078d5e0514b463d7ac5c9a8545d7b42c67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?width=216&crop=smart&auto=webp&s=8dba83b5f46f5e4b69aa0ff30f7251d0b1d5a41e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?width=320&crop=smart&auto=webp&s=3af640ccf7caf6dff69474f1412ccad5acd84836', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?width=640&crop=smart&auto=webp&s=9139f9cbccd37087c8c34604883f35417360443b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?width=960&crop=smart&auto=webp&s=d9633cbf6f2ec732aa80e66b910bf9607c99a5f9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?width=1080&crop=smart&auto=webp&s=48c9bdaa32c3310d49ec89c2eaa5ae7738c0599b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N4Uuq0LLJbG5IIUJyfGyCgjS7-LjErWZqooQISg34V4.jpg?auto=webp&s=4c032be44ee84a656363a456cd597fefd2960533', 'width': 1200}, 'variants': {}}]} |
Llama2 is in practice much worse than ChatGPT, isn't it? | 109 | I tried some specific prompts with both ChatGPT 3.5 ([https://chat.openai.com/](https://chat.openai.com/)) and Llama2 (Llama 2 70B online demo ([stablediffusion.fr](https://stablediffusion.fr))), and while ChatGPT is able to follow the instructions perfectly in German, Llama2 fails to do so even in English. I have also tried the LAION Llama2 fine-tune trained on German text ([LeoLM/leo-hessianai-13b-chat · Hugging Face](https://huggingface.co/LeoLM/leo-hessianai-13b-chat)) and the results are just as bad. Is Llama2 just better on the particular benchmarks on which it was compared with ChatGPT, but not in practice? | 2023-10-11T12:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/175cro0/llama2_is_in_practice_much_worse_than_chatgpt/ | OliverStone33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175cro0 | false | null | t3_175cro0 | /r/LocalLLaMA/comments/175cro0/llama2_is_in_practice_much_worse_than_chatgpt/ | false | false | self | 109 | null
Is there any open-source LLM that can generate a response in the Sanskrit language? | 1 | [removed] | 2023-10-11T11:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/175c2ct/is_there_any_opensource_llm_that_can_generate_a/ | TurbulentDelivery799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175c2ct | false | null | t3_175c2ct | /r/LocalLLaMA/comments/175c2ct/is_there_any_opensource_llm_that_can_generate_a/ | false | false | self | 1 | null |
What's the best framework for pre-training Causal model as of August '23? | 1 | [removed] | 2023-10-11T11:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/175c14f/whats_the_best_framework_for_pretraining_causal/ | Reddactor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175c14f | false | null | t3_175c14f | /r/LocalLLaMA/comments/175c14f/whats_the_best_framework_for_pretraining_causal/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5DhY4vsMgWX6V2WnE9qoAXKn3eWVeQlb06uFrJcB3oY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=108&crop=smart&auto=webp&s=a4c057404bd0d79eb5dbbd348e6fe764ba9b48fb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=216&crop=smart&auto=webp&s=1a2aca2b64466721e0ab3260d888f50f5f825878', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=320&crop=smart&auto=webp&s=fb447213b50b3a1444f29b6bbac0bbf326e0c897', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=640&crop=smart&auto=webp&s=b32a8071a3ef13f06712c28b200186ed24fe8d4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=960&crop=smart&auto=webp&s=edfe4703f8b263e9a543ad4502b3455296eea311', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=1080&crop=smart&auto=webp&s=10e689d244e611cd8626b20718e8e0522e45037a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?auto=webp&s=703daba034655ef31c931f4b69c33506f95ff729', 'width': 1200}, 'variants': {}}]} |
New Nvidia RTX 4000 Ada | 22 | Just saw it pop up for sale at Nvidia for $1250: the newest single-slot Ada version with 20GB. I was surprised to see the card at MSRP; the bigger corporate distributors had it closer to $1800.
Seems like an interesting option for an easier multi-GPU setup, and a fair way to get to 40-60GB of VRAM with new-gen cards.
Thoughts? | 2023-10-11T11:12:16 | https://www.reddit.com/r/LocalLLaMA/comments/175bhh8/new_nvidia_rtx_4000_ada/ | grim-432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175bhh8 | false | null | t3_175bhh8 | /r/LocalLLaMA/comments/175bhh8/new_nvidia_rtx_4000_ada/ | false | false | self | 22 | null |
Running sdxl | 1 | [removed] | 2023-10-11T11:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/175bd9i/running_sdxl/ | Rafael20002000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175bd9i | false | null | t3_175bd9i | /r/LocalLLaMA/comments/175bd9i/running_sdxl/ | false | false | self | 1 | null |
Does GGML quantization come under quantization-aware training or post-training quantization? Because it has dynamic quantization. (P.S. I am just learning) | 1 | Does GGML quantization come under quantization-aware training or post-training quantization? Because it has dynamic quantization.
For post-training quantization (PTQ) we have the GPT-Q format, so is the GGML format post-training quantization (PTQ) or quantization-aware training? | 2023-10-11T10:46:15 | https://www.reddit.com/r/LocalLLaMA/comments/175b2av/does_ggml_quantization_comes_under_quantization/ | count_dracula14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175b2av | false | null | t3_175b2av | /r/LocalLLaMA/comments/175b2av/does_ggml_quantization_comes_under_quantization/ | false | false | self | 1 | null
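As far as I understand it, the GGML/GGUF quantization schemes are post-training quantization: there is no quantization-aware training pass, and weights are quantized block-wise with a per-block scale after training is done (activations stay in float at run time). A toy illustration of the block idea in plain Python; this is not the actual GGML block format, which packs scales and mins more cleverly:

```python
def quantize_block(weights, bits=4):
    """Toy symmetric post-training quantization of one weight block.

    Returns (scale, integer codes). GGML's real k-quants use packed
    per-block scale/min formats; this only illustrates the principle.
    """
    qmax = 2 ** (bits - 1) - 1               # e.g. 7 for signed 4-bit
    # Fall back to scale 1.0 for an all-zero block to avoid dividing by 0.
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return scale, [round(w / scale) for w in weights]

def dequantize_block(scale, codes):
    """Recover approximate weights from (scale, codes)."""
    return [scale * c for c in codes]

scale, codes = quantize_block([0.7, -0.35, 0.1, 0.0])   # codes are small signed ints
restored = dequantize_block(scale, codes)               # close to the original weights
```

The rounding error introduced here is exactly the accuracy cost PTQ trades for the smaller memory footprint; QAT would instead simulate this rounding during training so the model learns around it.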
How can I get LM Studio to stop trying to end the entire story every turn? | 13 | Going back and forth between WizardLM SuperCOT Storytelling 30b (q3\_k\_m) and Mythomax L2 13b (q6\_k), but no matter what I tell it, LM Studio wants to make these broad sweeping statements like "But they knew that in the end, with hope and courage, everything would turn out for the best." and then immediately end the story. I even tried editing the ### Instruction to specify it to just continue the story, but this is how that went:
https://preview.redd.it/olsxcnaotjtb1.png?width=301&format=png&auto=webp&s=cc5326104361a0ee8a0073e14c9361a585d4e3ad
​
https://preview.redd.it/9zgnhj6qtjtb1.png?width=292&format=png&auto=webp&s=caa09e3eb81677cb863863cde1892d30fda0419f | 2023-10-11T10:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/175alrw/how_can_i_get_lm_studio_to_stop_trying_to_end_the/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175alrw | false | null | t3_175alrw | /r/LocalLLaMA/comments/175alrw/how_can_i_get_lm_studio_to_stop_trying_to_end_the/ | false | false | 13 | null | |
Upgrade from Quadro P6000 24GB | 1 | [removed] | 2023-10-11T09:55:53 | https://www.reddit.com/r/LocalLLaMA/comments/175aar3/upgrade_from_quadro_p6000_24gb/ | OrtaMatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175aar3 | false | null | t3_175aar3 | /r/LocalLLaMA/comments/175aar3/upgrade_from_quadro_p6000_24gb/ | false | false | self | 1 | null |
Best models, fine-tunes or prompts to generate Mathematics, Physics and Chemistry objects on an average CPU? | 2 | I'm trying to mass generate problem sets for Mathematics and other scientific subjects, and I'm mainly interested in objects like expressions, equations, inequalities, word problems and so on. The ones you encounter when studying those subjects.
I'm not very concerned about the quality and accuracy of the generated objects, as long as they are somewhat reasonable. I can filter out or fix the bad ones on a second pass.
The challenge I face is that smaller models tend to hallucinate too eagerly, go off topic, or generate irrelevant responses, but they're the only ones I can run. I have tried Synthia-v1.3 7b, which seems to be the best model for its size, but I don’t have a fine-tuned version of it and I can’t fine-tune it myself in a reasonable amount of time. Maybe fine-tuning on problem sets would help? But I can't wait an eternity just to find out. I have also experimented with different kinds of prompts, but none of them worked well, and I have also tried giving it problem sets as context, with or without its specific format, but it doesn't seem to work, or maybe I'm just bad at crafting prompts.
So, does anyone know of any models, fine-tunes or even just good prompts that can do what I'm after? | 2023-10-11T09:37:06 | https://www.reddit.com/r/LocalLLaMA/comments/175a19t/best_models_finetunes_or_prompts_to_generate/ | baaaria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 175a19t | false | null | t3_175a19t | /r/LocalLLaMA/comments/175a19t/best_models_finetunes_or_prompts_to_generate/ | false | false | self | 2 | null |
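One thing that sometimes tames small models for this kind of generation is a rigid few-shot scaffold combined with a stop sequence, so the model can only continue the pattern. Below is a hedged sketch; the topics, example problems, and record format are my invention, not a known-good recipe:

```python
# Few-shot examples in a rigid "Topic / Problem" record format.
# These are placeholder problems I made up for illustration.
EXAMPLES = [
    ("algebra", "Solve for x: 3x + 7 = 22."),
    ("geometry", "A circle has radius 5 cm. Find its area."),
    ("chemistry", "Balance the equation: H2 + O2 -> H2O."),
]

def build_prompt(topic):
    """Build a few-shot prompt whose records all end with a '###'
    marker, so the sampler can be told to stop there."""
    shots = "\n\n".join(
        f"Topic: {t}\nProblem: {p}\n###" for t, p in EXAMPLES
    )
    return f"{shots}\n\nTopic: {topic}\nProblem:"

print(build_prompt("trigonometry"))
# Pass "###" as a stop sequence to your sampler so generation ends
# after one problem instead of wandering off topic.
```

The stop sequence does a lot of the work here: even a 7B model that drifts after a paragraph can usually produce one on-format record before hitting the marker.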
Converting full model finetunes to LoRAs | 4 | Is there currently a way to extract the finetuning done to base models as a LoRA that can be loaded/unloaded? I think I thought of a way to do it but I wasn't sure if it would work / had already been tried. It goes like this:
For each weight matrix in both models, calculate what matrix would need to multiply the base model's matrix to arrive at the finetuned model's matrix. Then, get a low rank representation of that matrix, and save the low rank representations as a regular LoRA adapter. In theory, you could then use this LoRA on top of the base model to turn it into a near-identical copy of the fine-tuned model.
I saw that a lot of people prefer to use base models for fine-tuning, rather than 'chat' or 'instruct' variants. Maybe this could offer a quick and easy way to stack instruction following capability on top of finetunes by way of LoRAs, in cases when the instruction fine-tuning wasn't trained/saved as a LoRA. | 2023-10-11T09:22:22 | https://www.reddit.com/r/LocalLLaMA/comments/1759u4f/converting_full_model_finetunes_to_loras/ | metaprotium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1759u4f | false | null | t3_1759u4f | /r/LocalLLaMA/comments/1759u4f/converting_full_model_finetunes_to_loras/ | false | false | self | 4 | null |
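For what it's worth, the delta-plus-low-rank idea sketches out in a few lines. One caveat: LoRA adapters are additive (W_ft ≈ W_base + BA) rather than multiplicative, so the version below approximates the additive weight delta with a truncated SVD. This is a toy numpy illustration of mine, not an existing tool:

```python
import numpy as np

def extract_lora(w_base, w_ft, rank):
    """Low-rank approximation of the finetuning delta, LoRA-style.

    Returns (a, b) with shapes (rank, in_features) and
    (out_features, rank) such that w_base + b @ a approximates w_ft.
    """
    delta = w_ft - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions and split the singular
    # values evenly between the two factors, as LoRA does.
    sqrt_s = np.sqrt(s[:rank])
    b = u[:, :rank] * sqrt_s           # (out_features, rank)
    a = sqrt_s[:, None] * vt[:rank]    # (rank, in_features)
    return a, b

# Toy check: if the finetune only moved the weights along a few
# directions, a low-rank adapter recovers it almost exactly.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 32))
true_update = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 32))
w_ft = w_base + true_update

a, b = extract_lora(w_base, w_ft, rank=4)
err = np.abs((w_base + b @ a) - w_ft).max()
print(f"max reconstruction error: {err:.2e}")
```

If the finetuning really did move the weights in a low-dimensional subspace, the reconstruction is near-exact; for a real full finetune the delta is usually full-rank, so you would be trading fidelity for rank.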
Keep running out of memory when pre-training (without LoRA) a model 7B on 2 A100 80GB GPU? | 6 | I am trying to train Bloom 7B model on 2 A100 80GB. According to this page: [https://huggingface.co/docs/transformers/model\_memory\_anatomy](https://huggingface.co/docs/transformers/model_memory_anatomy), this should be enough since it requires 18 bytes \* 7B = 128GB memory. However, I keep running out of memory when running my script with `accelerate`
Here are my config file:
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: false
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_transformer_layer_cls_to_wrap: BloomBlock
  fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
And here are my training arguments:
args = TrainingArguments(
    output_dir="/checkpoints/train_llm",
    per_device_train_batch_size=2,
    logging_steps=100,  # Set to 500 when training
    max_steps=10_000,  # Set to 150_000 when training
    gradient_accumulation_steps=8,
    weight_decay=0.1,
    adam_beta1=0.9,
    adam_beta2=0.95,
    warmup_steps=1_000,
    lr_scheduler_type="cosine",
    learning_rate=1e-5,
    save_steps=1000,  # Set to 15_000 when training
    fp16=True,
    push_to_hub=False,
    gradient_checkpointing=True,
    optim="adamw_torch"
)
Am I doing something wrong? I would be very grateful if someone can provide some help. | 2023-10-11T09:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/1759qli/keep_running_out_of_memory_when_pretraining/ | scienceotaku68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1759qli | false | null | t3_1759qli | /r/LocalLLaMA/comments/1759qli/keep_running_out_of_memory_when_pretraining/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]} |
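For reference, the ~18 bytes/parameter estimate from that page can be sanity-checked with quick arithmetic. The per-component split in the comments is my reading of the linked article (mixed-precision AdamW, activations excluded), so treat it as approximate:

```python
n_params = 7e9  # Bloom 7B

bytes_per_param = (
    4 + 2   # fp32 master weights + fp16 working copy
    + 4     # fp32 gradients
    + 8     # AdamW first and second moments (fp32 each)
)           # = 18 bytes per parameter

total_gb = n_params * bytes_per_param / 1e9
print(f"~{total_gb:.0f} GB for weights, gradients and optimizer states")
# Activations, the CUDA context on each GPU, and allocator
# fragmentation all come on top of this figure, so the raw
# 126 GB number is a lower bound, not the whole budget.
```

That extra headroom for activations and runtime overhead is one plausible reason two 80 GB cards can still OOM even though 126 GB of state nominally fits.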
Decoupling train length and target context length via positional skip-wise training | 14 | PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training
paper: [2309.10400v2.pdf (arxiv.org)](https://arxiv.org/pdf/2309.10400v2.pdf)
code: [dwzhu-pku/PoSE: Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Length (github.com)](https://github.com/dwzhu-pku/PoSE)
checkpoints: [dwzhu/LLaMA-7B-PoSE-YaRN-128k · Hugging Face](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-YaRN-128k)
Abstract:
Large Language Models (LLMs) are trained with a pre-defined context length, restricting their use in scenarios requiring long inputs. Previous efforts for adapting LLMs to a longer length usually requires fine-tuning with this target length (Full-length fine-tuning), suffering intensive training cost. To decouple train length from target length for efficient context window extension, we propose Positional Skip-wisE (PoSE) training that smartly simulates long inputs using a fixed context window. This is achieved by first dividing the original context window into several chunks, then designing distinct skipping bias terms to manipulate the position indices of each chunk. These bias terms and the lengths of each chunk are altered for every training example, allowing the model to adapt to all positions within target length. Experimental results show that PoSE greatly reduces memory and time overhead compared with Full-length fine-tuning, with minimal impact on performance. Leveraging this advantage, we have successfully extended the LLaMA model to 128k tokens using a 2k training context window. Furthermore, we empirically confirm that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies. Notably, our method can potentially support infinite length, limited only by memory usage in inference. With ongoing progress for efficient inference, we believe PoSE can further scale the context window beyond 128k.
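To make the abstract concrete, here is a toy illustration of the position-index manipulation it describes. This is my own simplified reading, not the paper's code: split the training window into chunks and shift each chunk by a random skip bias so the indices cover the full target range.

```python
import random

def pose_positions(train_len, target_len, n_chunks, seed=0):
    """Toy PoSE-style position indices: contiguous chunks of the
    training window, each shifted by a skip bias into the target
    range.  Biases are sorted so the chunks stay ordered and
    non-overlapping."""
    rng = random.Random(seed)
    chunk = train_len // n_chunks
    max_extra = target_len - train_len
    biases = sorted(rng.randint(0, max_extra) for _ in range(n_chunks))
    pos = []
    for i, bias in enumerate(biases):
        start = i * chunk + bias
        pos.extend(range(start, start + chunk))
    return pos

# A 2k training window simulating a 16k target context.
positions = pose_positions(train_len=2048, target_len=16384, n_chunks=4)
print(len(positions), positions[0], positions[-1])
```

Resampling the biases per training example is what lets the model see every position in the target range while only ever attending over 2k tokens at a time.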
| 2023-10-11T07:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1758gy0/decoupling_train_length_and_target_context_length/ | zxw-cool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1758gy0 | false | null | t3_1758gy0 | /r/LocalLLaMA/comments/1758gy0/decoupling_train_length_and_target_context_length/ | false | false | self | 14 | null |
Does GPTQ come under quantization-aware training (QAT) or post-training quantization? | 0 | Does GPTQ quantization come under quantization-aware or post-training quantization? Because it has dynamic quantization. | 2023-10-11T07:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/17586ul/is_gptq_come_under_quantization_aware_training/ | count_dracula14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17586ul | false | null | t3_17586ul | /r/LocalLLaMA/comments/17586ul/is_gptq_come_under_quantization_aware_training/ | false | false | self | 0 | null |
Help with evaluation of change in architecture. | 1 | I am new to ML and with no personal gpu to train. My budget is 100$ cloud, going with papespace.
I want to make a change in attention implementation. To try to speed it up. I want to evaluate if it works.
I am starting with a pretrained 7B Llama.
The change introduces a small neural network into attention, which needs training.

How do I train this neural net? Is there a standard dataset, maybe a subset of the LLaMA dataset?

How do I evaluate the change? What are the benchmarks to evaluate FLOPs and correctness on? | 2023-10-11T06:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1757d9f/help_with_evaluation_of_change_in_architecture/ | rip_rap_rip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1757d9f | false | null | t3_1757d9f | /r/LocalLLaMA/comments/1757d9f/help_with_evaluation_of_change_in_architecture/ | false | false | self | 1 | null |
Can't start LoRA fine-tuning on Colab | 1 | I'm using same command:
!./finetune \
--model-base pygmalion-2-7b.Q5_K_M.gguf \
--lora-out lora-yuna-shakespeare-ITERATION.bin \
--train-data dialogs.txt \
--save-every 1 \
--threads 8 --adam-iter 4 --batch 16 --ctx 256 \
--use-checkpointing
And I'm getting this error:
main: seed: 1697002435
main: model base = 'pygmalion-2-7b.Q5_K_M.gguf'
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from pygmalion-2-7b.Q5_K_M.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor 0: token_embd.weight q5_K [ 4096, 32000, 1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 2: blk.0.ffn_down.weight q6_K [ 11008, 4096, 1, 1 ]
And so on......
llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: llama.context_length u32
llama_model_loader: - kv 3: llama.embedding_length u32
llama_model_loader: - kv 4: llama.block_count u32
llama_model_loader: - kv 5: llama.feed_forward_length u32
llama_model_loader: - kv 6: llama.rope.dimension_count u32
llama_model_loader: - kv 7: llama.attention.head_count u32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv 10: general.file_type u32
llama_model_loader: - kv 11: tokenizer.ggml.model str
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr
llama_model_loader: - kv 13: tokenizer.ggml.scores arr
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32
llama_model_loader: - kv 18: general.quantization_version u32
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q5_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q5_K - Medium
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 4.45 GiB (5.68 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.10 MB
llm_load_tensors: mem required = 4560.96 MB
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 256.00 MB
llama_new_context_with_model: compute buffer total size = 76.63 MB
main: init model
GGML_ASSERT: ggml-alloc.c:116: tensor->data == NULL
​ | 2023-10-11T05:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/1756nov/cant_start_lora_finetuning_on_colab/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1756nov | false | null | t3_1756nov | /r/LocalLLaMA/comments/1756nov/cant_start_lora_finetuning_on_colab/ | false | false | self | 1 | null |
LLM's new benchmarking system. | 1 | [removed] | 2023-10-11T05:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/17569qn/llms_new_benchmarking_system/ | imperiallearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17569qn | false | null | t3_17569qn | /r/LocalLLaMA/comments/17569qn/llms_new_benchmarking_system/ | false | false | self | 1 | null |
What tools will allow me to run a 13b model on my ***GPU***(AMD Radeon Pro 5300M) on macOS rather than CPU? | 0 | My hardware is 2.6GHz 6-Core Intel Core i7(Don't want to use it), Intel UHD Graphics 630(not looking to use it though), AMD Radeon Pro 5300M(What I want to use), and I have 16gb of ram, I'm running macOS although I tried running a bunch of tools on windows and all of them were CUDA only, or CPU only, GPT4ALL would show my GPU, but would use my CPU even if I selected the GPU. On macOS GPT4ALL only gives me a CPU option, I run into like 50 different errors with text-generation-webui(and after spending over 3 hours attempting to debug, I have hit a dead end with it.), Ollama doesn't recognize my models and they seem to have literally no documentation, the one thing I found was of someone with the same issue, who resolved it by doing what I was already doing. I just want a solid and simple answer. I **DO NOT** want to use my CPU, it is way too slow for my use case. | 2023-10-11T03:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1754t66/what_tools_will_allow_me_to_run_a_13b_model_on_my/ | A_SnoopyLover | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1754t66 | false | null | t3_1754t66 | /r/LocalLLaMA/comments/1754t66/what_tools_will_allow_me_to_run_a_13b_model_on_my/ | false | false | self | 0 | null |
What is the best 13b model right now for Writing tech specs , Project Management (Creating tasks) , Documentation , Reporting , Extraction tasks? | 1 | [removed] | 2023-10-11T01:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/1752hrr/what_is_the_best_13b_model_right_now_for_writing/ | 0xPark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1752hrr | false | null | t3_1752hrr | /r/LocalLLaMA/comments/1752hrr/what_is_the_best_13b_model_right_now_for_writing/ | false | false | self | 1 | null |
Mistral 7B is surprisingly good.. and can be ruthless | 5 | I am playing with the quantized model "mistral-7b-openorca.Q3\_K\_M.gguf" and I am finding it very good, it preserves the contexts very well. I have also noticed that it doesn't have any kind of filter if given the instruction to generate something offensive.
​
​
https://preview.redd.it/hwz1ldcv1htb1.png?width=516&format=png&auto=webp&s=f5075923de02ce494eb05949b957d9a1d0540edd
​
https://preview.redd.it/ysm0l4tr0htb1.png?width=669&format=png&auto=webp&s=c0d3d12d6a51eac8ac6f9c32ad95c05d0e6ae45e
​
​
https://preview.redd.it/xaakb8k11htb1.jpg?width=713&format=pjpg&auto=webp&s=b4ce97f541f949a49d0dd4cc6daa59dff03df7b0
​
I'm using it with CPU inference on my Intel i3 with 8 GB of RAM. The llama.cpp server has improved a lot since the beginning of the year; I don't notice crashes in my system even when I have the browser open and am listening to music. I can reach 2-3 tokens per second.
​
https://preview.redd.it/50pfnu5i1htb1.png?width=672&format=png&auto=webp&s=48b1219ca910f7215ad1ed247c9082e83e6eabac
​
https://preview.redd.it/ry4sz5k81htb1.png?width=752&format=png&auto=webp&s=ca220ec03e2fdc0dad17d20c3186dfefbd628451 | 2023-10-11T00:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1751da9/mistral_7b_is_surprisingly_good_and_can_be/ | hwpoison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1751da9 | false | null | t3_1751da9 | /r/LocalLLaMA/comments/1751da9/mistral_7b_is_surprisingly_good_and_can_be/ | false | false | 5 | null | |
Integrate AutoGen into your Chatbot: Code Interpreter Clone | 1 | [removed] | 2023-10-11T00:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/17514b1/integrate_autogen_into_your_chatbot_code/ | InterestingBasil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17514b1 | false | null | t3_17514b1 | /r/LocalLLaMA/comments/17514b1/integrate_autogen_into_your_chatbot_code/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rQzwAOaz4SlbhHdN5xCAghNpiYWTyKElm294HwfvzlQ', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?width=108&crop=smart&auto=webp&s=bb859700c7efab64dbd242e30196df996ca3c2ef', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?width=216&crop=smart&auto=webp&s=96c254ae662fe8ac94d9593240ee57edffbdf822', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?width=320&crop=smart&auto=webp&s=ded6024ee872a89f3e48e2ee0e275e5be0a47592', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?width=640&crop=smart&auto=webp&s=39d33a7a755cdc7f0f44206437d82d0d278fdc4e', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?width=960&crop=smart&auto=webp&s=56b61e04ddb635b95d7363c1239c7a671a76cc29', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?width=1080&crop=smart&auto=webp&s=82a67c5be35d0f041ff66960e697f237b759d57e', 'width': 1080}], 'source': {'height': 686, 'url': 'https://external-preview.redd.it/7GMvZQ1hJiLKP6CT5PpN-cYNy5GSiWw8AwEHPW-oXMM.jpg?auto=webp&s=d6c11dcf90ba72def64f4f99fdb79e2b84b6eca4', 'width': 1200}, 'variants': {}}]} |
Help needed on building doc translation LLM | 3 | Hello, everybody. I'm noob to LLMs and to python, and I need some help...
In short, I succesfully to implemented LLM to translate few sentences:
(d:\LLM\Trans\vtran) d:\LLM\Trans\vtran\PyProject>python translate4.py

# Initialize the M2M100 model and tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# Set the source and target languages
src_lang = "sl"  # Slovenian
tgt_lang = "en"  # English

# Prepare the text
src_text = "Alaska kot se pravilno piše v aleutskem jeziku pomeni »dežela, ki ni otok«. Prvotno se je ime nanašalo samo na polotok Alaska, šele kasneje se je to ime poprijelo cele dežele."

# Update the tokenizer for the source language
tokenizer.src_lang = src_lang

# Encode the source text
encoded_src_text = tokenizer(src_text, return_tensors="pt")

# Generate translation using the model
generated_tokens = model.generate(**encoded_src_text, forced_bos_token_id=tokenizer.get_lang_id(tgt_lang))

# Decode the generated tokens to get the translated text
translated_text = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

print(f"Original text: {src_text}")
print(f"Translated text: {translated_text}")
Original text:
Alaska kot se pravilno piše v aleutskem jeziku pomeni »dežela, ki ni otok«. Prvotno se je ime nanašalo samo na polotok Alaska, šele kasneje se je to ime poprijelo cele dežele.
Translated text:
Alaska, as it is correctly written in the Aleutic language, means "a desert that is not an island." Originally the name referred only to the peninsula of Alaska, only later this name was rejected by the whole country.
​
But when I try to translate a whole text document (3300 tokens), the translation breaks apart. It never translates the whole document, only around 200-300 tokens. And depending on my modifications, sometimes it does not translate at all, sometimes it repeats a single word, etc...

Please, can anyone help me with translating the whole document? I'm spinning in circles on this..

This is the latest version of the script I made:
import os
from pathlib import Path
import torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
# Initialize directories and model path
input_dir = r"D:\LLM\Trans\vtran\PyProject\DocIn"
output_dir = r"D:\LLM\Trans\vtran\PyProject\DocOut"

# Initialize model and tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# Set device
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Process each file
for file_name in os.listdir(input_dir):
    file_path = os.path.join(input_dir, file_name)
    print(f"Processing {file_name}")

    # Read file
    if file_path.endswith('.txt'):
        with open(file_path, 'r', encoding='utf-8') as file:
            content = file.read()
    else:
        print(f"Skipping {file_name}, not a .txt file.")
        continue

    # Get language from user
    src_lang = "sl"
    tgt_lang = "en"

    # Update the tokenizer for the source language
    tokenizer.src_lang = src_lang

    # Tokenize
    tokenized_content = tokenizer(content, return_tensors="pt", padding=True, truncation=True)
    tokenized_content = {k: v.to(device) for k, v in tokenized_content.items()}
    print(f"Tokenized content: {tokenized_content}")

    # Generate translation
    outputs = model.generate(**tokenized_content, forced_bos_token_id=tokenizer.get_lang_id(tgt_lang))
    translated_content = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    print(f"Translated content: {translated_content}")

    # Save output
    output_file_path = os.path.join(output_dir, Path(file_name).stem + "_translated.txt")
    with open(output_file_path, 'w', encoding='utf-8') as out_file:
        out_file.write(translated_content)

print("Translation completed!")
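One thing worth checking: `generate()` stops at the model's default maximum length (often a couple hundred tokens for translation checkpoints unless you pass `max_new_tokens`), which matches the ~200-300 token cutoff described above; a single sentence succeeds because it fits under that limit. A common workaround is to split the document into sentence-sized chunks and translate each one. Below is a hedged sketch; the chunking heuristic and the 400-character budget are my assumptions, not part of the original script:

```python
import re

def chunk_text(text, max_chars=400):
    """Greedily pack sentences into chunks of at most max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Per-chunk translation would then look like (untested sketch):
#
# translated_parts = []
# for chunk in chunk_text(content):
#     enc = tokenizer(chunk, return_tensors="pt").to(device)
#     out = model.generate(**enc,
#                          forced_bos_token_id=tokenizer.get_lang_id(tgt_lang),
#                          max_new_tokens=512)
#     translated_parts.append(
#         tokenizer.batch_decode(out, skip_special_tokens=True)[0])
# translated_content = " ".join(translated_parts)

demo = chunk_text("One sentence. Two sentence. Three sentence.", max_chars=15)
print(demo)
```

Chunking also sidesteps the `truncation=True` call in the script above, which otherwise silently cuts the input down to the model's maximum source length before translation even starts.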
| 2023-10-11T00:33:10 | https://www.reddit.com/r/LocalLLaMA/comments/1750w5x/help_needed_on_building_doc_translation_llm/ | dodo13333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1750w5x | false | null | t3_1750w5x | /r/LocalLLaMA/comments/1750w5x/help_needed_on_building_doc_translation_llm/ | false | false | self | 3 | null |
[llama.cpp update] GGUF LLaVA v1.5 support soon 🚀 | 129 | 2023-10-11T00:21:14 | https://twitter.com/ggerganov/status/1711796235235889557 | Shir_man | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1750mxn | false | {'oembed': {'author_name': 'Georgi Gerganov', 'author_url': 'https://twitter.com/ggerganov', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">👀 What is this black magic!? <a href="https://t.co/sTf7mTd6DQ">pic.twitter.com/sTf7mTd6DQ</a></p>— Georgi Gerganov (@ggerganov) <a href="https://twitter.com/ggerganov/status/1711796235235889557?ref_src=twsrc%5Etfw">October 10, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ggerganov/status/1711796235235889557', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1750mxn | /r/LocalLLaMA/comments/1750mxn/llamacpp_update_gguf_llava_v15_support_soon/ | false | false | 129 | {'enabled': False, 'images': [{'id': 'K_xv_9U5dRTavwWSnCwSqMHfACZUm6pJnx8KMjcOoO0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/kfF6wbWfGwGn29d0fesMckEVHnoQhKkAzFetCWpt8HM.jpg?width=108&crop=smart&auto=webp&s=5a31059e75e0fb89f9c22a1e87554b9e82f4a7dd', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/kfF6wbWfGwGn29d0fesMckEVHnoQhKkAzFetCWpt8HM.jpg?auto=webp&s=0d095fe811af18ac820302daa28c7d96e4f5fdce', 'width': 140}, 'variants': {}}]} | ||
New Repo for Oobabooga & Multiconnector with Semantic-Kernel: Routing Capabilities and Multi-Start Scripts | 16 | Hey folks,
Just wanted to share some updates on a couple of projects I've been working on that might be of interest, especially if you're into semantic-kernel or .NET (I know, not the most popular around here, but hear me out).
1. **Semantic-Fleet Repo**: Moved the Oobabooga and Multiconnector out of the semantic-kernel repo into a [separate repository](https://github.com/MyIntelligenceAgency/semantic-fleet). Easier to manage and hopefully easier for you to find.
2. **Notebooks**: Added some [starter notebooks](https://github.com/MyIntelligenceAgency/semantic-fleet/blob/main/dotnet/notebooks/README.md) to help you get going. They're set up to run in VSCode using the polyglot extension.
3. **Oobabooga Multi-Start Scripts**: Submitted [a PR to Oobabooga](https://github.com/oobabooga/text-generation-webui/pull/4129) for multi-start scripts: running several models from the same instance actually works great.
**Why This Matters**
- **Routing Capabilities**: The Multiconnector automatically evaluates the capabilities of secondary LLMs on calibrated tasks. This is what semantic-kernel is really about (semantic functions), and the Multiconnector provides a pipeline where the primary connector is used normally while prompt sampling and categorizing, secondary-model testing and evaluation using the primary model, and routing-table updates are all performed seamlessly in parallel background tasks.
- **Smaller Models**: There's been some interesting work on smaller, more efficient models lately. These projects aim to leverage that. For instance, the recent "Mistral 7B" models have shown promising summarizing capabilities in "hard" mode, something that the previous "stable beluga 7B" model struggled with, and Microsoft's "Phi 1.5" also demonstrated promising capabilities on simpler data. [Integration tests](https://github.com/MyIntelligenceAgency/semantic-fleet/blob/main/dotnet/src/IntegrationTests/Connectors/MultiConnector/MultiConnectorTests.cs) illustrate how to test your own plans and data, which might be of interest for custom benchmarking even if you don't plan on keeping the .Net stack.
Would love to hear any thoughts or feedback you might have. | 2023-10-11T00:11:42 | https://www.reddit.com/r/LocalLLaMA/comments/1750fn6/new_repo_for_oobabooga_multiconnector_with/ | Jessynoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1750fn6 | false | null | t3_1750fn6 | /r/LocalLLaMA/comments/1750fn6/new_repo_for_oobabooga_multiconnector_with/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'n7rCtz9ddVMpw-s_IhDLK5CE_U06VrciGxNVbX0EC7Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?width=108&crop=smart&auto=webp&s=0311ce1b6d30bb44de0879499e2311e75be3e3a5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?width=216&crop=smart&auto=webp&s=e429e542c1ed1b6ec9e8d65cd96262b30210eebd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?width=320&crop=smart&auto=webp&s=0c7d5b1e371690907393825c1747cf280bdf8f23', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?width=640&crop=smart&auto=webp&s=ec71150479ad093a49b1d72ed94141f5f750f0fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?width=960&crop=smart&auto=webp&s=921189b909f81df0cf0d803025a02c1bc5748423', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?width=1080&crop=smart&auto=webp&s=84486442d27347bb1264754f1145dc021754d858', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OUoj-t3QcfzlsJ0dFF7gt6b7Qen9de0uIyn5Ojva1pY.jpg?auto=webp&s=b61f4c0348e9c9245ce6dbd1858f4a2750777751', 'width': 1200}, 'variants': {}}]} |
How to make a budget for computing resources needed for self-hosting? | 1 | [removed] | 2023-10-10T23:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/174zpbw/how_to_make_a_budget_for_computing_resources/ | Diligent-Fee-255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174zpbw | false | null | t3_174zpbw | /r/LocalLLaMA/comments/174zpbw/how_to_make_a_budget_for_computing_resources/ | false | false | self | 1 | null |
Best LLM for my setup? | 9 | So, I have recently gotten into the LLM + SillyTavern/Kobold scene, and the huge number of LLM options to choose from has been... staggering, to say the least. I have no idea which model works best for which setup, so I was hoping you all could give me a hand with this.

I currently have this setup to work with:

NVIDIA GeForce GTX 1650 Super - 4 GB VRAM

AMD Ryzen 7 3700X 8-core processor with 16 GB of RAM
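As a rough guide to what fits that hardware: a quantized model's footprint is roughly parameters × bits-per-weight / 8, ignoring KV-cache and runtime overhead. The ~4.5 bits/weight figure for Q4_K_M below is my approximation, not an official number:

```python
def model_size_gb(n_params_billion, bits_per_weight):
    """Very rough GGUF file / memory footprint, ignoring the KV
    cache and loader overhead."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bits in [("7B Q4_K_M", 7, 4.5), ("13B Q4_K_M", 13, 4.5)]:
    print(f"{name}: ~{model_size_gb(params, bits):.1f} GB")
```

By that estimate a 7B Q4 (~4 GB) roughly matches the 4 GB of VRAM, while a 13B Q4 (~7 GB) would need GGUF with most layers kept on the CPU, which the 16 GB of system RAM can handle at slower speeds.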
Thank you all in advance. | 2023-10-10T23:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/174zoka/best_llm_for_my_setup/ | SocialDeviance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174zoka | false | null | t3_174zoka | /r/LocalLLaMA/comments/174zoka/best_llm_for_my_setup/ | false | false | self | 9 | null |
Best practices/methods/models/training for code generation? | 0 | Friends, I am looking for any kind of tips specifically for code generation: best practices, methods, models, training guides, or anything else. I started out trying to use embeddings of repositories of sample code with GPT-4, which was hit or miss, and now I've gotten more into running local models, first with the oobabooga UI and now with llama.cpp on Apple silicon, and I've just been trying different models from TheBloke. Assuming this would lead to better output, I am wondering how I can maybe train some local model on repositories of code in a lazy way and get good output. Are there any good tips/tricks for code generation? Thank you | 2023-10-10T21:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/174wyoj/best_practicesmethodsmodelstraining_for_code/ | No_Palpitation9689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174wyoj | false | null | t3_174wyoj | /r/LocalLLaMA/comments/174wyoj/best_practicesmethodsmodelstraining_for_code/ | false | false | self | 0 | null |
Codellama Verbosity Making it Unusable for Code Completion | 1 | [removed] | 2023-10-10T20:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/174v71d/codellama_verbosity_making_it_unusable_for_code/ | Anxious-Mess3882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174v71d | false | null | t3_174v71d | /r/LocalLLaMA/comments/174v71d/codellama_verbosity_making_it_unusable_for_code/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Z8HsiGIYqjjr1X-fNmlwhiST1Pd2ZnBleG9wGahcV-4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?width=108&crop=smart&auto=webp&s=3c2deccb2f1edc7e80bc63b367d44c81161daf26', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?width=216&crop=smart&auto=webp&s=9230970a219d735f6e0f88352c13b9f539d0484d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?width=320&crop=smart&auto=webp&s=6a484ca63b979480457ba63fa70b4972a732a26f', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?width=640&crop=smart&auto=webp&s=c757eef3454191fb5174d79e20952c3a41175f6a', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?width=960&crop=smart&auto=webp&s=2f1949e7975fa8e533624933d2cd6165814e0e00', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?width=1080&crop=smart&auto=webp&s=003db8b0c1fcfb52357960c436c12d2d732d6695', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/M122EXgHNt9Vt-57pKQb2lCFUbgX-HxQvRC-KfybZDk.jpg?auto=webp&s=a59da7d6cc6ae8d75e1f198ad367162d6b2239ac', 'width': 1200}, 'variants': {}}]} |
open llama failed to predict eos token? | 2 | Hi there,
I was loading in openlm-research/open_llama_3b_v2
and trying to create a baseline. One thing I observed here, it *seems to me*, is that the model refuses to generate the EOS token, so the conversation goes on endlessly.
For example, when I asked "Q: Is apple red?\\nA:", I got
<s>Q: Is apple red? A: No, apple is not red. Q: Is apple green? A: No, apple is not green. Q: Is apple yellow? A: No, apple is not yellow. Q: Is apple orange? A: No, apple is not orange. Q: Is apple blue? A: No, apple is not blue. Q: Is apple pink? A: No, apple is not pink. Q: Is apple purple? A: No, apple is not purple. Q: Is apple black? A: No, apple is not black. Q: Is apple brown? A: No, apple is not brown. Q: Is apple white? A: No, apple is not white. Q: Is apple red? A: No, apple is not red. Q: Is apple green? A: No, apple is not green. Q: Is apple yellow? A: No, apple is not yellow. Q: Is apple orange? A: No, apple is not orange. Q: Is apple blue? A: No, apple is not blue. Q: Is apple pink? A: No
What I expected instead (setting aside the factual error) is
<s>Q: Is apple red? A: No, apple is not red.
What can I do to make it happen?
code details:
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_3b_v2'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = 'Q: Is apple red?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)  # fixed: `device` was undefined
generation_output = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(generation_output[0])) | 2023-10-10T20:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/174ukue/open_llama_failed_to_predict_eos_token/ | 130L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174ukue | false | null | t3_174ukue | /r/LocalLLaMA/comments/174ukue/open_llama_failed_to_predict_eos_token/ | false | false | self | 2 | null |
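A workaround that usually helps with base models like this (an assumption on my part - OpenLLaMA is a pure completion model with no chat tuning, so it just keeps completing the pattern) is to treat the next "Q:" as a stop marker and truncate the decoded text yourself:

```python
def truncate_at_stop(decoded: str, prompt: str, stop_markers=("\nQ:", "Q:")) -> str:
    """Cut the generated continuation at the first stop marker.

    Base (non-instruction-tuned) models rarely emit EOS after a short
    answer, so post-processing the decoded string is a cheap fix.
    """
    completion = decoded[len(prompt):]   # keep only the newly generated part
    cut = len(completion)
    for marker in stop_markers:
        idx = completion.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return prompt + completion[:cut].rstrip()

prompt = "Q: Is apple red? A:"
decoded = "Q: Is apple red? A: No, apple is not red. Q: Is apple green? A: No."
print(truncate_at_stop(decoded, prompt))
# -> Q: Is apple red? A: No, apple is not red.
```

Newer transformers versions also accept a `stopping_criteria` argument to `model.generate` if you would rather stop generation early instead of trimming afterwards.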
First timer building a new (to me) rig for LLM inference, fine-tuning, etc. Flame my choices, recommend me a different way, and any ideas on benchmarking 2x P40 vs 2x P100? | 2 | Hello!
I'm diving into local LLM for the first time having been using gpt3.5 for a while. I'm a cybersecurity guy, so while I'm deeply technical I don't have any useful or deep knowledge of this particular niche of tech. I plan to run LLMs for learning, day-to-day optimizing of my workflows, leveling-up my dev work, and I want to get to a place where I'm training/fine-tuning AIs for some secret squirrel security research stuff.
I didn't want to drop $7k-$10k on the latest and greatest GPUs etc while I'm getting my feet wet, so I opted to go back a few generations to start out. Based on some research, eBay availability, impulsivity and whatnot I ended up ordering a bunch of stuff:
* GPUs 1&2: 2x Used Tesla P40
* GPUs 3&4: 2x Used Tesla P100
* Motherboard: Used Gigabyte C246M-WU4
* CPU: Used Intel Xeon E-2286G 6-core (a real one, not ES/QS/etc)
* RAM: New 64GB DDR4 2666 Corsair Vengeance
* PSU: New Corsair RM1000x
* New SSD, mid tower, cooling, yadda yadda.
Before you say it: yes, I know I can only fit two GPUs on the Gigabyte board. My plan was to have only a pair of P40s and do a bunch of inference tasks, but I know I'm going to end up wanting to fine-tune a bunch of stuff with custom data sets, whereupon I learned that P100s might be better for fine-tuning ... so I figured.. why not do an A/B of the P40/P100 across a few of the tasks that I'll be working on? I'd like to do some comparisons of different types of workload across a few models using a single P40, single P100, double of each, mix of each, etc. Just for shits and giggles because why not. So I ordered a pair of P100s, too! Sadly I did so only after I'd already ordered the motherboard, which can take 2 GPUs max.
But then I thought: I can sell the C246M-WU4 for barely a few bucks loss... perhaps there's a motherboard that will work with my Xeon E-2286G CPU, the RAM, PSU, and *all four GPUs at once*? That would be epic.
If anyone knows of such a motherboard I'd be delighted to hear it.
Also, I would be very appreciative of any tips for a n00b based on similar configurations (in particular I've read that some motherboards don't boot with two P40s and success/fail stories would be great.)
Flames along the lines of "your setup is dumb because..." are encouraged because I have yet to make and learn from my own mistakes, something that will be remedied a week from now as parts start rolling in.
Stories like "I started using software X, but it turned out to fail under Y conditions and Z is much better because FOO" are awesome because it'll save me (and anyone reading this) time and effort.
Tips like "if you tweak parameter A in section B then you'll get C improvement in t/s when doing D" are always well received, too.
Finally, if you'd like to see speed comparisons of P40 vs P100 then let me know what you'd like to see. I'll build a list and run the tests and post results here.
Thanks! | 2023-10-10T19:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/174tqr5/first_timer_building_a_new_to_me_rig_for_llm/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174tqr5 | false | null | t3_174tqr5 | /r/LocalLLaMA/comments/174tqr5/first_timer_building_a_new_to_me_rig_for_llm/ | false | false | self | 2 | null |
Huggingface releases Zephyr 7B Alpha, a Mistral fine-tune. Claims to beat Llama2-70b-chat on benchmarks | 273 | 2023-10-10T18:57:46 | https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha | remixer_dec | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 174t0n0 | false | null | t3_174t0n0 | /r/LocalLLaMA/comments/174t0n0/huggingface_releases_zephyr_7b_alpha_a_mistral/ | false | false | 273 | {'enabled': False, 'images': [{'id': '2AaoSrRb9IzyLzt-HqQXTOyKTiCqTT8VGTATu9IBBig', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?width=108&crop=smart&auto=webp&s=70f2c8164dc80d381d7e94c21bcc02dd1d0813e2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?width=216&crop=smart&auto=webp&s=2930d95422509e836d75a44ec75e7c0ae5b6433e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?width=320&crop=smart&auto=webp&s=203e1eeafbe596c64e93090d91e26432e78d7334', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?width=640&crop=smart&auto=webp&s=b90261b692bbd17a55ddb284d5f8eaa5d191e655', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?width=960&crop=smart&auto=webp&s=bd2682d9f57074e8677fa23b5acf046bbe894485', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?width=1080&crop=smart&auto=webp&s=e732921ee186da6ac8dbab3b76a2b2e86666c15e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mLyWFD6OOmMIwfPxGSbA0rx-T9mWuUfeGpi2fI0OQOY.jpg?auto=webp&s=255dc1febf84e6c885538031be5e3ba60ddacfd4', 'width': 1200}, 'variants': {}}]} | ||
Generating music notation with LLM - why not? | 34 | Why not? Well, some idiot was going to do it one way or another.
Anyhow: I grabbed a bunch of tunes in ABC format - monkey-LoRA'd them, and now I have an endless generator of Drunken Llama jigs and reels.
Generate tune Drunken LLama Waltz in ABC format:
X:126
T:Drunken Llama Waltz
Z:Locallama
N:more of a fox-trot than a waltz
M:3/4
L:1/8
Q:3/4=54
K:DMin
A|d2d dcB|A2A AGF|E2E GFE|F2F EDF|
d2d dcB|A2A AGF|EFG FED|D3 D2:|
E|FGA BAG|FGA BAG|FGA Bcd|cBA GFE|
FGA BAG|FGA BAG|FGA FED|D3 D2:|
Copy paste to : [https://abc.rectanglered.com/](https://abc.rectanglered.com/)
I wonder how this will work when quantized, hahaha. | 2023-10-10T18:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/174su3m/generating_music_notation_with_llm_why_not/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174su3m | false | null | t3_174su3m | /r/LocalLLaMA/comments/174su3m/generating_music_notation_with_llm_why_not/ | false | false | self | 34 | null |
I've Uploaded The Entire NanoPhi Dataset, and each of its specific tasks. | 54 | The Entire NanoPhi Dataset is available at https://huggingface.co/datasets/VatsaDev/TinyText/tree/main, with each of its tasks, we have tagged text on code, math, logic, roleplay, textbooks, and more. Check it out! | 2023-10-10T18:32:05 | https://www.reddit.com/r/LocalLLaMA/comments/174se3z/ive_uploaded_the_entire_nanophi_dataset_and_each/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174se3z | false | null | t3_174se3z | /r/LocalLLaMA/comments/174se3z/ive_uploaded_the_entire_nanophi_dataset_and_each/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'KQoiiMa3kByWegj1Rnhwsk3dr2qCLfe3PYlo_8bLT6c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?width=108&crop=smart&auto=webp&s=89fe03cd8e30d2025e0123394ec91dbbcb165768', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?width=216&crop=smart&auto=webp&s=2998c3dff0b2de80091cc268a6f97df918168f2f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?width=320&crop=smart&auto=webp&s=a037f3e1e6f61db1283d8eb59edfa258a05e71d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?width=640&crop=smart&auto=webp&s=689ecb48a86990f199a0febe017fddec02d32b1e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?width=960&crop=smart&auto=webp&s=f4a0cc7e135eda90fd6f3d903323f8395bf44fc9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?width=1080&crop=smart&auto=webp&s=d37395052f1b7271383bb8524b46319b99ceff93', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/ymS8HD_hPbSOt6T8G3ugYdw9egW5bMYJ6-OmfGBM2to.jpg?auto=webp&s=bdbd5e70a5b272dfa42ef840f3c12a8e430bd566', 'width': 1200}, 'variants': {}}]} |
what kind of hardware would it take to run falcon 180b | 16 | Preferably at a minimum of 15 tokens per second.
​
And could someone recommend a cloud service to run it? Even a 4090 would not be able to, so cloud services are my only option; I just don't want to break the bank.
Uncensored Mistral in Petals | 1 | [removed] | 2023-10-10T18:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/174rnvj/uncensored_mistral_in_petals/ | imperiallearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174rnvj | false | null | t3_174rnvj | /r/LocalLLaMA/comments/174rnvj/uncensored_mistral_in_petals/ | false | false | self | 1 | null |
Is it Feasible to Boost LLM Performance by Connecting Two MacBooks for Increased RAM and CPU Power? | 2 | I’ve been pondering a way to supercharge my open-source LLM experiments and thought of an idea: what if we could connect two MacBooks together to combine their RAM and CPU power for running larger models and generating quicker responses from LLMs?
My question is twofold:
1. Is there any software or method available to connect two MacBooks in a way that would effectively pool their resources, especially RAM and CPU power, to benefit LLM model performance?
2. Additionally, are the fastest USB-C cables on the market capable of transferring data at speeds high enough to make this kind of setup practical?
I’m really curious about the possibilities here and would appreciate any insights or experiences you all might have in experimenting with such a setup. | 2023-10-10T17:48:18 | https://www.reddit.com/r/LocalLLaMA/comments/174rc6m/is_it_feasible_to_boost_llm_performance_by/ | Flaky_Candidate7546 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174rc6m | false | null | t3_174rc6m | /r/LocalLLaMA/comments/174rc6m/is_it_feasible_to_boost_llm_performance_by/ | false | false | self | 2 | null |
Flash and Sparse attention on Llama | 7 | Hello, I'm new to the ML world and was reading on transformers and attention. When doing so, I found out about flash attention and sparse attention, and I thought they were very interesting concepts to implement in Llama inference repos such as llama.cpp. Sparse attention especially: wouldn't that increase the context length of any model? Why hasn't it been tried before?
I'm interested in trying it myself (not that I think I can make a difference, being so new to all of this). But I don't believe I'm the only one who has ever had this idea. Is there anything that might make it such a hard task? Is it perhaps not worth it? Or am I misunderstanding something?
Thank you very much! | 2023-10-10T15:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/174o6ob/flash_and_sparse_attention_on_llama/ | Onelio1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174o6ob | false | null | t3_174o6ob | /r/LocalLLaMA/comments/174o6ob/flash_and_sparse_attention_on_llama/ | false | false | self | 7 | null |
Order of data during finetuning | 3 | Does the order of my data matter when finetuning?
This is in the context of a mistral7b finetune job using LoRA with a finetuning dataset of 20k rows of well-defined tasks of increasing difficulty. The first 5k are an easy difficulty level, the next 5k are more difficult, and so on. Towards the end of the 20k rows the tasks are much more difficult and getting to the correct answer requires that the model picks up on very subtle points in the input prompt. My instinct was telling me I should probably randomise the 20k rows before starting the training job so that's what I have done but I was wondering if anyone has any insights into whether or not data order matters?
For example, would the loss drop more quickly early on during training if the training batches are mostly made up of the easy tasks with a strong training signal with the result being that the model already has some competency in the well-defined task (i.e. it is approximately in some global minimum region) by the time it gets to the difficult tasks allowing it to learn better from the difficult tasks and allowing the loss to continue to decrease.
On the other hand, the model does end up seeing all of the training data at least once during each epoch so maybe the order is totally irrelevant!
Any insights would be very helpful!
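For what it's worth, the randomisation itself is cheap and easy to keep reproducible - a minimal sketch (the row layout here is a made-up stand-in for the real dataset):

```python
import random

# 20k rows in four blocks of increasing difficulty, mimicking the setup above
rows = [{"task": f"task-{i}", "difficulty": i // 5000} for i in range(20000)]

random.seed(42)        # fixed seed so the shuffled order is reproducible
shuffled = rows[:]     # copy, so the curriculum-ordered original is kept too
random.shuffle(shuffled)

# difficulty levels are now interleaved instead of blocked
print(sorted({r["difficulty"] for r in shuffled[:50]}))
```

Keeping both orderings around makes it easy to A/B a curriculum-ordered run against a shuffled one later.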
​
​ | 2023-10-10T15:12:21 | https://www.reddit.com/r/LocalLLaMA/comments/174nn1q/order_of_data_during_finetuning/ | Hoblywobblesworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174nn1q | false | null | t3_174nn1q | /r/LocalLLaMA/comments/174nn1q/order_of_data_during_finetuning/ | false | false | self | 3 | null |
Speed Up inference with MX330 | 3 | Can I use my Nvidia MX330 to speed up model inference with Llama CPP?
Has anyone tried this before or have any advice on setup and optimizations? | 2023-10-10T14:05:47 | https://www.reddit.com/r/LocalLLaMA/comments/174m3q4/speed_up_inference_with_mx330/ | lagsec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174m3q4 | false | null | t3_174m3q4 | /r/LocalLLaMA/comments/174m3q4/speed_up_inference_with_mx330/ | false | false | self | 3 | null |
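For reference, partial offload in llama.cpp is a single flag - a sketch of the command shape only (binary and model names are placeholders; the MX330's 2 GB of VRAM will only fit a handful of layers):

```shell
# Build llama.cpp with CUDA enabled, then offload a few layers to the GPU.
# -ngl / --n-gpu-layers controls how many transformer layers live on the GPU;
# start small (4-8) on a 2 GB card and raise it until you run out of VRAM.
./main -m models/mistral-7b.Q4_K_M.gguf -ngl 6 -p "Hello"
```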
HyperAttention: Long-context Attention in Near-Linear Time | 1 | 2023-10-10T14:04:54 | https://arxiv.org/abs/2310.05869 | starlightrobotics | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 174m2xn | false | null | t3_174m2xn | /r/LocalLLaMA/comments/174m2xn/hyperattention_longcontext_attention_in/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
ANIMA-Phi-Neptune-Mistral-7B-v1 | 35 | Hey everyone! So here is the first official model that I am releasing. It is focused on understanding the Biomimicry concept and process to help users solve problems with innovative ideas inspired by nature. There is more fine-tuning to do to expand and enhance this concept.
It's currently running on the LLM Leaderboard so there are no scores to report yet.
I've got it up and running in Oobabooga and have had some cool results. Hoping someone can help me quantize it into GGUF if possible so it can be integrated for more people to use.
​
[https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B-v1](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B-v1)
​
---------------------------------------
#### ANIMA-Phi-Neptune-Mistral-v1
#### Datasets:
* First fine-tune for 1 Epoch x 5hrs on T4 Small - Severian/Biomimicry
* Second fine-tune for 1 Epoch x 1hr A100 - 'emrgnt-cmplxty/sciphi-textbooks-are-all-you-need'
### *** TRAINING STAGES ***
#### Original Base Model: ehartford/dolphin-2.0-mistral-7b
#### 1st Fine-Tune LoRA Biomimicry Model: Severian/ANIMA-Echo-Mistral-7B-v1
#### 2nd Fine-Tune LoRA Science/Philosophy Textbooks Model: Severian/ANIMA-Phi-Neptune-Mistral-LoRa
### ANIMA - Advanced Nature Inspired Multidisciplinary Assistant
#### Model Description:
ANIMA is designed as a leading expert in various scientific disciplines including biomimicry, biology, and environmental science. It is fine-tuned on a dataset of over 4,000 high-quality scientific and accurate prompts to help users through the Biomimicry Design Process. The model is intended to propose biomimetic solutions to challenges while frequently asking for user feedback or clarification.
​
License: Apache 2.0 | 2023-10-10T13:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/174ln2q/animaphineptunemistral7bv1/ | vesudeva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174ln2q | false | null | t3_174ln2q | /r/LocalLLaMA/comments/174ln2q/animaphineptunemistral7bv1/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'E_cE37n8K_t4cORVdAIAX6QtTABrLpGBRq5SbzUAHfY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?width=108&crop=smart&auto=webp&s=c7edb2de0e6942cee38c041b0d862715f93127b9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?width=216&crop=smart&auto=webp&s=76dfed00ffa46b71303f4c37f7c500359e432c48', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?width=320&crop=smart&auto=webp&s=f7e1ea4bc90f4a9b4afcd3e695eb453a14d64697', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?width=640&crop=smart&auto=webp&s=9cc77120a45dd2393bedee5e0ba79637dfadfb6d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?width=960&crop=smart&auto=webp&s=ad95c56f33545deb6c4c6f692da4f7e752c24356', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?width=1080&crop=smart&auto=webp&s=c948cbeb84c8fbeaa09d14dd5cef68b90da7b4ad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vEOEiL1YilFOu7kzrSDFg1sFUC-BZiEr8oF0ZzHTCL4.jpg?auto=webp&s=477e60994ec6a480dd31f27d9ec2764f18586fdd', 'width': 1200}, 'variants': {}}]} |
Synthetic dataset generation (locally): 101 guide | 1 | [removed] | 2023-10-10T12:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/174k0ml/synthetic_dataset_generation_locally_101_guide/ | Fluid-Age-9266 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174k0ml | false | null | t3_174k0ml | /r/LocalLLaMA/comments/174k0ml/synthetic_dataset_generation_locally_101_guide/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8tAyVLH3UAPaPbWwNY5fxn2epEZHgXmHHX9FG0v52wY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=108&crop=smart&auto=webp&s=2579ab06751d4be737d11a4c5b21319da8fd6f74', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=216&crop=smart&auto=webp&s=1ba62e6a3e4ebd2b0ee5ed67d043bcc986cb7cf2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=320&crop=smart&auto=webp&s=8a6d7c00fe014db448594a6027a8a6a4bb43684c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=640&crop=smart&auto=webp&s=8d74cb32a9bd2a21b758bb098b57b194e6fbb167', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=960&crop=smart&auto=webp&s=ec6fac4654bfd157d5675b8b8b5244b4ea537eaf', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=1080&crop=smart&auto=webp&s=5d8c015aff6688da2e52d4e09a82f91907e2dd91', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?auto=webp&s=18a3a4181cd19dba3888db7ccc9869c2152d6aea', 'width': 1200}, 'variants': {}}]} |
200 000 dollars what in hardware can i buy and build with my students | 96 | Hi, I have the possibility to apply for 100 to 200 thousand dollars to build a "supercomputer" with my students in a hybrid high school-university program. Our goal is to let students both build the computer and configure it, and then use it. What hardware should we buy for the money - GPU, CPU and more?

Please give me tips on what I should buy if I get this money, and also suggestions as to what type of LLM I will be able to run.

I'm also happy to get tips on software to run it.
Thanks | 2023-10-10T11:12:57 | https://www.reddit.com/r/LocalLLaMA/comments/174iki9/200_000_dollars_what_in_hardware_can_i_buy_and/ | dupido | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174iki9 | false | null | t3_174iki9 | /r/LocalLLaMA/comments/174iki9/200_000_dollars_what_in_hardware_can_i_buy_and/ | false | false | self | 96 | null |
EM German - Mistral + Continous Pretraining + high-quality Finetune to achieve unprecedented non-english performance | 47 | 2023-10-10T10:38:06 | https://github.com/jphme/EM_German/blob/main/README.md | jphme | github.com | 1970-01-01T00:00:00 | 0 | {} | 174i0vh | false | null | t3_174i0vh | /r/LocalLLaMA/comments/174i0vh/em_german_mistral_continous_pretraining/ | false | false | 47 | {'enabled': False, 'images': [{'id': 'tTTfpP7bm29TY65B0VELyvMLMyOCM73087MrCMN_uvE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=108&crop=smart&auto=webp&s=492cb78f8304666d7648f8408e2b5e80c951ecbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=216&crop=smart&auto=webp&s=1c9ea2334207e383cff0676b65c273fb6eafdcd0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=320&crop=smart&auto=webp&s=8bb6bebc8411dbf7dd86d65160fc2cda27e1ef53', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=640&crop=smart&auto=webp&s=19b2d208ee9d37270d91cff2f83dd013872dcb29', 'width': 640}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?auto=webp&s=fd6feaba878a4b3ecbad666d802753d489260c57', 'width': 640}, 'variants': {}}]} | ||
ELI5 what Tau actually does in Mirostat. | 20 | I tried searching for it but I'm getting conflicting information. Some suggest that lower Tau (3) will produce more human-like responses while others say that higher Tau values will produce more complex text. I guess both could be true at the same time but I'm generally confused on how to use that setting. | 2023-10-10T10:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/174hqng/eli5_what_tau_actually_does_in_mirostat/ | Herr_Drosselmeyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174hqng | false | null | t3_174hqng | /r/LocalLLaMA/comments/174hqng/eli5_what_tau_actually_does_in_mirostat/ | false | false | self | 20 | null |
What is the best way to get a trained GPTQ model? | 1 | I am thinking of using a vanilla Llama 7B model - fine-tuning it using autotrain-advanced, quantizing it, and working with it.
What is the process that is working best for you guys? If different downstream tasks work better with different methods, let me know that as well. | 2023-10-10T10:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/174ho4p/what_is_the_best_way_to_get_a_trained_gptq_model/ | shinigami_inso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174ho4p | false | null | t3_174ho4p | /r/LocalLLaMA/comments/174ho4p/what_is_the_best_way_to_get_a_trained_gptq_model/ | false | false | self | 1 | null |
Is there any local model that can work well with dataframes/csv? | 8 | With GPT-4 the performance is amazing, I want to do the same with a local model. Has anyone got any success with any local model and have worked successfully with dataframes? | 2023-10-10T10:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/174hmwr/is_there_any_local_model_that_can_work_well_with/ | shinigami_inso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174hmwr | false | null | t3_174hmwr | /r/LocalLLaMA/comments/174hmwr/is_there_any_local_model_that_can_work_well_with/ | false | false | self | 8 | null |
QLoRA with GPTQ problems | 3 | I'm having problems fine-tuning with pre-quantised models. My training loss is sometimes 0 and the validation loss is nan, so I assume this is an overflow issue?
Does anyone see anything obviously wrong with the way I am training my model?
FWIW, I never had this issue doing QLoRA with bitsandbytes quantisation.
config = GPTQConfig(bits=4, disable_exllama=True)
model = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GPTQ", quantization_config=config, device_map="auto", torch_dtype="auto")
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7b-Chat-GPTQ")
peft_config = LoraConfig(task_type="CAUSAL_LM", r=64, lora_alpha=16)
...
training_args = TrainingArguments(fp16=True, optim="paged_adamw_32bit", ...)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=peft_config,
...
)
trainer.train()
#####
{'loss': 0.0, 'learning_rate': 0.0004953917050691245, 'epoch': 0.27}
{'loss': 0.0, 'learning_rate': 0.0004953917050691245, 'epoch': 0.28}
{'loss': 2.0689, 'learning_rate': 0.0004953917050691245, 'epoch': 0.28}
{'eval_loss': nan, 'eval_runtime': 149.173, 'eval_samples_per_second': 0.597, 'eval_steps_per_second': 0.302, 'epoch': 0.28}
##### | 2023-10-10T09:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/174h3kt/qlora_with_gptq_problems/ | Lewba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174h3kt | false | null | t3_174h3kt | /r/LocalLLaMA/comments/174h3kt/qlora_with_gptq_problems/ | false | false | self | 3 | null |
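Nothing in the snippet looks obviously wrong, but a frequent culprit with GPTQ bases (an assumption, not something confirmed in this thread) is fp16 overflow producing inf/nan losses. A hypothetical variant of the arguments above that sidesteps it on Ampere-or-newer GPUs:

```python
from transformers import TrainingArguments

# bf16 keeps float32's exponent range, avoiding the overflows fp16 can hit
# with quantised base weights; max_grad_norm adds clipping for stability.
training_args = TrainingArguments(
    output_dir="out",
    bf16=True,                 # instead of fp16=True (needs Ampere or newer)
    optim="paged_adamw_32bit",
    max_grad_norm=0.3,
)
```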
Substitute for AutoGPT? | 3 | So I used to use AutoGPT back when it was mostly uncensored, but now, I work with LLMs and was wanting some of that functionality back.
I'm using oobabooga/text-generation-webui right now and I'd like to hook AutoGPT up to it, but there seems to be no way to force the LLM to return a proper JSON string (or AutoGPT isn't reading the string properly).
So now, I'm curious if anyone knows of a program that will contact the LLM and be able to do the following at the LLMs direction.
* enter in commands that it runs via the command line.
* Be able to browse the web (double points if it can navigate web pages and triple points if it can handle logging in and out of them).
* CRUD text files (Create, Read, Update, Delete).
If it can also work with other LLMs as well that I have running (I can run multiple LLMs on my machine), that would be great, but isn't required.
Is there something that I can use for this (or is there some basic way I can get AutoGPT working with my LLM so that it doesn't crash with an error at every request)? | 2023-10-10T09:22:06 | https://www.reddit.com/r/LocalLLaMA/comments/174gtjg/substitute_for_autogpt/ | Lance_lake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174gtjg | false | null | t3_174gtjg | /r/LocalLLaMA/comments/174gtjg/substitute_for_autogpt/ | false | false | self | 3 | null |
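One stopgap I've seen work (a sketch, not a fix for AutoGPT itself) is to salvage the first balanced JSON object out of the model's chatty reply before the agent loop parses it:

```python
import json

def extract_first_json(reply: str):
    """Return the first parseable top-level JSON object in `reply`, or None.

    Local models often wrap their JSON in prose; scanning for a balanced
    brace span and parsing it is a cheap way to salvage the response.
    """
    start = reply.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(reply)):
            if reply[i] == "{":
                depth += 1
            elif reply[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(reply[start:i + 1])
                    except json.JSONDecodeError:
                        break
        start = reply.find("{", start + 1)
    return None

print(extract_first_json('Sure! Here is the plan: {"command": "browse", "url": "a.com"} Hope that helps.'))
# -> {'command': 'browse', 'url': 'a.com'}
```

This ignores string escapes containing braces, so it's a heuristic rather than a real parser, but it rescues the common "prose around JSON" failure mode.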
How to further pre-train llama-2 model on documents for question answering? | 4 | Hi all,
I have a set of documents that are about "menu engineering". These files are somewhat new, and I don't think they were used for pre-training the llama-2 model.
Is there a way to extend pre-training on these new documents? Later, I want to fine-tune the model on question-answer pairs from this data to do closed-domain question answering. Will this approach work for closed-domain question answering? | 2023-10-10T08:24:19 | https://www.reddit.com/r/LocalLLaMA/comments/174g0wj/how_to_further_pretrain_llama2_model_on_documents/ | vile_proxima | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174g0wj | false | null | t3_174g0wj | /r/LocalLLaMA/comments/174g0wj/how_to_further_pretrain_llama2_model_on_documents/ | false | false | self | 4 | null |
Can anyone explain MoE like I’m 25 | 154 | I’ve read about a bunch of threads. Tried reading some papers on arxiv. I couldn’t understand how is it better(or useful) than current LLMs like Llama or Mistral and so on? | 2023-10-10T07:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/174f42z/can_anyone_explain_moe_like_im_25/ | Tejasw__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174f42z | false | null | t3_174f42z | /r/LocalLLaMA/comments/174f42z/can_anyone_explain_moe_like_im_25/ | false | false | self | 154 | null |
Using vLLM with classification head | 6 | I was trying to use my llama2 classifier with vLLM. Is it possible to use vLLM like this or it only works for generating Text? | 2023-10-10T07:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/174f1x9/using_vllm_with_classification_head/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174f1x9 | false | null | t3_174f1x9 | /r/LocalLLaMA/comments/174f1x9/using_vllm_with_classification_head/ | false | false | self | 6 | null |
Purpose of Pipeline/Chain Abstractions? | 29 | I've worked with more than a few streaming and distributed processing frameworks in the past, and am familiar with their designs and, more importantly, the reasoning behind their designs. It's common for frameworks like these to have you treat your functions as some flavor of an abstract computation node, assemble them into pipelines, use conditional/"routing" primitives that they provide, etc.
This is done so the frameworks can more easily accomplish their core purpose, i.e. distribute and parallelize computation while routing data/messages, feed streaming events from an external source to a function by invoking it per msg or in batches, etc. The reason for these frameworks to not use vanilla Python/etc is that they need to be able to break up your application for distributed processing, control the input/output flow themselves for distributed execution instead of just executing your code with a Python interpreter which then handles actual execution and control/data flow, etc. These things can be significantly easier if you define your functions and general data flow using their own constructs/API (not API as in REST, but as in interface for their SDKs/libraries), use only the functions they provide for in-stream processing rather than free-form Python (think Flink for example), etc - which is why these frameworks use these abstractions, which limit how you can write your application for the sake of enabling the frameworks' core functionality.
I'm seeing the same concepts used in frameworks like Haystack, Griptape, LangChain, but with none of the benefits and no similar underlying reasons. Can somebody explain why things like [https://docs.haystack.deepset.ai/docs/nodes\_overview#decision-nodes](https://docs.haystack.deepset.ai/docs/nodes_overview#decision-nodes) and Haystack's various types of nodes, plus pipelines, workflows, agents, etc etc... are better than \`if\` statements (in the case of decision nodes) and normal Python function definitions and general execution? | 2023-10-10T06:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/174eckz/purpose_of_pipelinechain_abstractions/ | Radiant_Tea8107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174eckz | false | null | t3_174eckz | /r/LocalLLaMA/comments/174eckz/purpose_of_pipelinechain_abstractions/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'omYQW3a2rBBUZjE_BU2A1OXIrYOyW0kli9BurMtSc-M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?width=108&crop=smart&auto=webp&s=e18686a9ef922cda00bfc2ce10075dcce4f1a748', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?width=216&crop=smart&auto=webp&s=4959dffd59da29e3d13566ebeb888ae111eec99d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?width=320&crop=smart&auto=webp&s=e7930985b1738164acb7de9bdbff830a857daa3e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?width=640&crop=smart&auto=webp&s=262c15149d90b879b740d7be2d89ec95f79a9b1c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?width=960&crop=smart&auto=webp&s=34be5fd3d930fdd4a49f3485b9bc7b3d0175391c', 'width': 960}, {'height': 583, 'url': 
'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?width=1080&crop=smart&auto=webp&s=b998722fccda494b06987e24f84ad48022a58049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9lIOjChLWa5ZeBPfcsNdlT8K0ZzGJgWDUjZR3aGsDS8.jpg?auto=webp&s=d2d9f1e06a32a3a98fa6c7096d327f66b1725f11', 'width': 1200}, 'variants': {}}]} |
Some one send me a link to faraday ai | 0 | [removed] | 2023-10-10T04:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/174cskk/some_one_send_me_a_link_to_faraday_ai/ | Avocado_Express | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174cskk | false | null | t3_174cskk | /r/LocalLLaMA/comments/174cskk/some_one_send_me_a_link_to_faraday_ai/ | false | false | self | 0 | null |
LLMs that can run on CPUs and less RAM | 6 | As the title says, I am working on a project that would require me to install chatbots on client-side PCs. These are usually not connected to the internet and are used for monitoring machinery; they also don't have GPUs. So I am looking for a model that can take different function names and their attribute details, etc., and return the function name with the attributes plugged in from the user's query. I have tried doing it on Llama-2-7b-ggml (the Q6 quant, I think). It kind of worked, but it was slow even when I had around 30 GPU layers offloaded. I just want it to take the function details as context and return the function name with the attributes; that's all it's required to do.
If required, I think I can finetune it on Colab Pro so that it can work on edge devices later. | 2023-10-10T04:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/174cdy3/llms_that_can_run_on_cpus_and_less_ram/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 174cdy3 | false | null | t3_174cdy3 | /r/LocalLLaMA/comments/174cdy3/llms_that_can_run_on_cpus_and_less_ram/ | false | false | self | 6 | null |
Help me pick an uncensored/censored LLM? | 1 | [removed] | 2023-10-10T01:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/1748o1r/help_me_pick_an_uncensoredcensored_llm/ | Archaicmind173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1748o1r | false | null | t3_1748o1r | /r/LocalLLaMA/comments/1748o1r/help_me_pick_an_uncensoredcensored_llm/ | false | false | self | 1 | null |
Laion Releasing Datasets off GPT-4V! | 41 | So, it looks like Laion is working on datasets based off GPT-4V and Dalle-3! The Dalle-3 dataset is filled; the GPT-4V one is empty so far.
https://huggingface.co/datasets/laion/dalle-3-dataset
https://huggingface.co/datasets/laion/gpt4v-dataset/tree/main
Since the GPT-4V dataset is still empty, I can't give any judgment on it yet. I feel like the Dalle-3 dataset isn't what it could be. A huge factor of what makes Dalle-3 important is that it works wonders on diffusion instruction, with working text, perspectives/POVs, and lighting. The prompts don't really show that, so the dataset's value drops to SDXL level, except for the text, and we don't know how well that will go.
Any other Observations? | 2023-10-09T23:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1746ek9/laion_releasing_datasets_off_gpt4v/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1746ek9 | false | null | t3_1746ek9 | /r/LocalLLaMA/comments/1746ek9/laion_releasing_datasets_off_gpt4v/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': '3c1k_cW_nU_6789gFJHTwXwWG0BowS8yKKf8SrOBZKg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?width=108&crop=smart&auto=webp&s=6e9d3f15f00728775c8c630cb88a5e5623226172', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?width=216&crop=smart&auto=webp&s=b3a9327afcec597e9e5ddfba4699d33cde62fbd7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?width=320&crop=smart&auto=webp&s=9c0e31677ac61e50833087d05c805d1d699aee05', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?width=640&crop=smart&auto=webp&s=d17415b832c89d65f95d3f99a967fbcafe96c3f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?width=960&crop=smart&auto=webp&s=ebf6c6a90e6ff0951bb9a159c6598923b758ef69', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?width=1080&crop=smart&auto=webp&s=1990e6b87b3c869201c21f38565068ce1d36c483', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SlrFEJUins-aLampot7pY7BSqUKRGfTbSi9C1Q4bLVg.jpg?auto=webp&s=be80987db9cf22e6d1646a84769e3ad9206da573', 'width': 1200}, 'variants': {}}]} |
Has anyone been finetuning LLaVA? | 10 | I have just been looking at the web demo [here](https://llava.hliu.cc/) and damn it's pretty good! I'm wondering how demanding it is to finetune on your own image data | 2023-10-09T22:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/1745e0y/has_anyone_been_finetuning_llava/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1745e0y | false | null | t3_1745e0y | /r/LocalLLaMA/comments/1745e0y/has_anyone_been_finetuning_llava/ | false | false | self | 10 | null |
Any models for text evaluation, grading, rating, or qualitative analysis? | 1 | [removed] | 2023-10-09T21:00:59 | https://www.reddit.com/r/LocalLLaMA/comments/1742wcb/any_models_for_text_evaluation_grading_rating_or/ | NeonNoirSciFi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1742wcb | false | null | t3_1742wcb | /r/LocalLLaMA/comments/1742wcb/any_models_for_text_evaluation_grading_rating_or/ | false | false | self | 1 | null |
Quality degradation of different quant methods evaluation? | 9 | While we know where the base models are at, is anyone aware of what this could mean for GGUF / GGML models? For example, quant 3 looks a bit lobotomised compared to quant 4. However, to get empirical results, how could one achieve this with a quantized model for llama.cpp? | 2023-10-09T20:48:43 | https://www.reddit.com/r/LocalLLaMA/comments/1742m3q/quality_degradation_of_different_quant_methods/ | Grittenald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1742m3q | false | null | t3_1742m3q | /r/LocalLLaMA/comments/1742m3q/quality_degradation_of_different_quant_methods/ | false | false | self | 9 | null |
Is there a problem with running a 3090 together with low-spec cards? | 5 | I have a spare 1060 6GB collecting dust and I was wondering if it would be feasible to add it to my current 3090 setup. This would allow me for example to run 32B models with more context. Would it work? Will inference speed be slower? | 2023-10-09T20:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1742g77/is_there_a_problem_with_running_a_3090_together/ | Gaverfraxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1742g77 | false | null | t3_1742g77 | /r/LocalLLaMA/comments/1742g77/is_there_a_problem_with_running_a_3090_together/ | false | false | self | 5 | null |
Has anyone used LLaVA yet? | 34 | How does it compare to GPT-4? | 2023-10-09T20:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/1742f10/has_anyone_used_llava_yet/ | derpgod123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1742f10 | false | null | t3_1742f10 | /r/LocalLLaMA/comments/1742f10/has_anyone_used_llava_yet/ | false | false | self | 34 | null |
Powerful Models on a Budget: Combining DoReMi Optimization with Synthetic Data | 62 | # Hi Everyone,
As has been discussed previously, there have been some exciting results coming out recently with synthetic data. Finally, I have long-term access to an 8x A100 node, accelerating open research into this.
My ongoing goal is to replicate phi 1.5 and in doing so increase our understanding of fine-tuning and pre-training. I will have this compute for the next year, so there is plenty of room to run lots of interesting &/or exciting experiments. These results are important for the LocalLlama community because synthetic data hints at a path to building superior small models.
# Recent Results
[DoReMi](https://arxiv.org/abs/2305.10429) is a new technique for calculating optimal mixture proportions across diverse data domains, and it offers a unique solution to the challenge of determining how best to generate synthetic data. That is, if DoReMi can approximate the ideal blend of real-world data sources for training a model, it's plausible to believe it could guide the generation of synthetic data as well.
By gauging the importance of different synthetic datasets, DoReMi could theoretically provide the approximate optimal weights to apply when scaling out synthetic sample generation. To the best of the author’s knowledge, the combination of these two techniques has not yet been attempted - our first goal was to attempt a simplified version of this idea and to study the results.
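As a sketch of the reweighting idea we are approximating (simplified and illustrative; the actual DoReMi update also smooths the weights toward a uniform prior, which is omitted here):

```python
import math

def doremi_update(domain_losses, ref_losses, weights, step_size=1.0):
    """One simplified DoReMi-style reweighting step.

    domain_losses: current proxy-model loss per domain
    ref_losses:    loss of a fixed reference model per domain
    weights:       current mixture weights (sum to 1)
    """
    # Excess loss: how much worse the proxy is than the reference on each domain.
    excess = [max(dl - rl, 0.0) for dl, rl in zip(domain_losses, ref_losses)]
    # Multiplicative-weights update: upweight domains with high excess loss.
    unnorm = [w * math.exp(step_size * e) for w, e in zip(weights, excess)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Domain 3 has the largest excess loss, so it should gain the most weight.
weights = doremi_update([2.5, 1.1, 3.0], [2.0, 1.0, 2.0], [1 / 3, 1 / 3, 1 / 3])
```

In the synthetic-data setting, weights like these would steer how many samples to generate per domain.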
# Open Source Research
To simplify this approach into a tractable first pass we decided to train small models on different splits of publicly available synthetic datasets.
Across all experiments we use EleutherAI’s [pythia-410m](https://github.com/EleutherAI/pythia) parameter model with the [Mistral 7b tokenizer](https://huggingface.co/mistralai/Mistral-7B-v0.1), which results in a 365m parameter model.
Our first stop after selecting an architecture was to obtain synthetic data from [HuggingFace](https://huggingface.co/). We selected a diverse array of 21 datasets, mostly synthetic. Our initial goal was to pre-train our selected architecture from scratch to near-convergence on various splits of these selected data sources. We found perplexity to be ideal for making robust measurements, and the screenshot below shows our findings.
[Measured validation perplexity for different experiments](https://preview.redd.it/n6gbq2ygj8tb1.png?width=2410&format=png&auto=webp&s=e1e751c219cb3cb4d47c3e67814711ef06e618c0)
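For clarity, the perplexity reported above is the usual exponentiated mean per-token negative log-likelihood; a minimal sketch:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that assigns every token probability 1/8 has perplexity 8.
ppl = perplexity([math.log(8)] * 100)
```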
**Some tentative observations from the above table**
* Models trained on one narrow synthetic dataset have high perplexity on wiki/pile datasets (to be expected).
* programming-books-llama performs well on programming data (its own data, tiny-codes, evol-instruct, and code-exercises), but performs badly on non-coding data.
* The sciphi-combine model does quite well for the number of tokens in the merged dataset (700m) - beating all other similar sized datasets in median perplexity across the full range of considered datasets.
* The minipile model comes close to synthetic-only datasets in terms of perplexity performance across the broad range of datasets.
* The more diverse the combination dataset, the better the average performance across diverse datasets, with the global-combine datasets having the most promising results.
* It’s not yet 100% clear whether introducing wikitext + minipile helps or hurts.
### Target Next Steps:
At this stage, we have only done a very crude approximation of the work shown in the DoReMi paper. We have demonstrated that splitting datasets hurts performance and that downstream perplexity is generally improved by dataset diversification. We have not yet shown how re-sampling the datasets can extend this result further. To do so, we will need to identify non-unit weights to apply to the various datasets in order to optimize the training sample composition. This should cover both the subdomains in the minipile and the synthetic datasets.
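One way such non-unit weights could be applied when composing the training sample (an illustrative sketch; the helper name and weights are hypothetical, not our actual pipeline):

```python
import random

def resample_mixture(datasets, weights, n_samples, seed=0):
    """Compose a training set by sampling domains with the given weights.

    datasets: one list of examples per domain
    weights:  mixture weights over domains
    """
    rng = random.Random(seed)
    # Pick a domain per sample according to the weights, then an example from it.
    domains = rng.choices(range(len(datasets)), weights=weights, k=n_samples)
    return [rng.choice(datasets[d]) for d in domains]

mix = resample_mixture(
    [["a1", "a2"], ["b1"], ["c1", "c2", "c3"]],
    weights=[0.6, 0.1, 0.3],
    n_samples=1000,
)
```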
Another reasonable next step may be to run a training run with the available data on the full 1.3bn parameter model. A classifier could be introduced to select only high quality pile or coding related data, which can then be mixed with the synthetic data. The outcome of this experiment could quite likely be competitive with the original phi-1 result, giving us confidence as we move to scale up our synthetic data generation pipeline.
Thanks for making it this far, please let me know if you found this work interesting or if you would like to collaborate in the ongoing research effort. | 2023-10-09T20:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1741wpq/powerful_models_on_a_budget_combining_doremi/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1741wpq | false | null | t3_1741wpq | /r/LocalLLaMA/comments/1741wpq/powerful_models_on_a_budget_combining_doremi/ | false | false | 62 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |