Columns: title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
Getting wildly different perplexity test results between MacOS and Windows | 4 | So the other day I finally got perplexity testing working in Oobabooga, and decided to give it a go. I have a Windows machine and a Mac Studio, so I put both to work.
Over the course of the past 2 days, I've seen some really crazy results. At first I thought it was a context/rope scaling thing, but that's not it at all. Check this out:
Both systems, Mac and Win, are using the exact same setup: Oobabooga using llamacpp_HF for GGUFs or ExLlama_HF for GPTQ (Windows only). I use the same settings and dataset (wikitext) for the tests.
* **Dataset: Wikitext**
* **Stride: 512**
* **max_length: 0 (uses max for the model)**
**Windows**
-------------------
**TheBloke_Synthia-34B-v1.2-GPTQ_gptq-4bit-32g-actorder_True**
* Context: 4096
* Rope Scale: 1,000,000
* Perplexity: **5.4562811851501465**
**TheBloke_Synthia-34B-v1.2-GPTQ_gptq-4bit-32g-actorder_True**
* Context: 8192
* Rope Scale: 1,000,000
* Perplexity: **5.456320762634277**
**TheBloke_Synthia-34B-v1.2-GPTQ_gptq-4bit-32g-actorder_True**
* Context: 4096
* Rope Scale: 0 *(defaults to 10,000 in Ooba I think)*
* Perplexity: **5.456327438354492**
**mythomax-l2-13b.Q8_0.gguf**
* Context: 4096
* Rope Scale: 0 *(defaults to 10,000 in Ooba I think)*
* Perplexity: **4.949336528778076**
​
**MacOS**
-------------------
**synthia-34b-v1.2.Q8_0.gguf**
* Context: 16384
* Rope Scale: 1,000,000
* Perplexity: **1.5229930877685547**
**mythomax-l2-13b.Q8_0.gguf**
* Context: 8192
* Alpha: 2
* Rope Scale: 20,000
* Perplexity: **1.4898762702941895**
​
Has anyone seen anything like this before? All my settings are identical. The versions of Ooba are the same, other than the Mac using Metal inference while the Windows machine uses CUDA with an RTX 4090.
I'm completely stumped. | 2023-09-26T16:45:44 | LearningSomeCode | https://www.reddit.com/r/LocalLLaMA/comments/16st2l5/getting_wildly_different_perplexity_test_results/
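As an aside for anyone comparing numbers like these: perplexity is the exponential of the average negative log-likelihood per token, so a score close to 1.0 (like the Mac results above) would imply near-perfect next-token prediction and usually points to the two backends computing the metric over different windows or in different units. A toy sketch of the definition, plain Python with no particular backend assumed:

```python
import math

def perplexity(token_logprobs):
    # Perplexity is exp of the average negative log-likelihood per token.
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A model that assigns probability 0.5 to every token scores perplexity 2,
# and a perfect model (probability 1.0 everywhere) scores exactly 1.
print(perplexity([math.log(0.5)] * 10))  # ~2.0
print(perplexity([0.0] * 10))            # 1.0
```

If one implementation averages in a different log base, or over different stride windows, the reported numbers are not comparable even on identical hardware.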
How do you send data in batches to an open source LLM to be processed on GPU | 1 | Say, for example, I am doing sentiment analysis using Llama 2. I have daily news articles for which I want daily sentiment ratings.
Rather than looping over days in my Python script or prompt template, how do I send, say, 30 days of news in one batch to Llama 2 and get back 30 daily sentiment ratings in one go, so that I fully utilize my GPU resources? | 2023-09-26T16:44:53 | redd-dev | https://www.reddit.com/r/LocalLLaMA/comments/16st1qf/how_do_you_send_data_in_batches_to_an_open_source/
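One backend-agnostic way to do this is to pack several days of articles into a single numbered prompt, and/or group prompts into batches for the serving layer. The function names below are my own, hypothetical choices; a rough sketch:

```python
def make_batches(items, batch_size):
    """Split a list into consecutive batches of at most batch_size items."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def build_batch_prompt(articles):
    """Pack several articles into one numbered prompt so the model can
    return one sentiment rating per line in a single generation call."""
    lines = [f"{i + 1}. {a}" for i, a in enumerate(articles)]
    return (
        "Rate the sentiment of each news item below from 1 (very negative) "
        "to 5 (very positive). Answer with one '<number>: <rating>' per line.\n\n"
        + "\n".join(lines)
    )

articles = [f"Day {d} headline" for d in range(1, 31)]   # 30 days of news
batches = make_batches(articles, batch_size=10)          # 3 prompts instead of 30
prompts = [build_batch_prompt(b) for b in batches]
# Each prompt then goes to the model in one call; with the HF transformers
# pipeline, something like pipeline(prompts, batch_size=...) also batches
# the forward passes themselves (check your backend's batching API).
```

Fewer, denser prompts amortize the fixed per-call overhead; true GPU-side batching on top of that depends on the serving stack you use.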
Managing follow-up questions for a retriever-based chatbot | 1 | Looking for suggestions on how to handle follow-up questions for a retriever-based chatbot. Data for an answer is queried from its specific embeddings, and I have multiple. Direct queries are fine, but follow-ups are a challenge. I cannot always fall back to the previous intent, as it can be wrong. Any suggestions would be a great help. | 2023-09-26T16:38:07 | Optimal_Original_815 | https://www.reddit.com/r/LocalLLaMA/comments/16ssvb1/managing_follow_up_question_for_retriever_based/
Could a Mac run a 546B model? | 1 | [removed] | 2023-09-26T16:26:00 | medley-400 | https://www.reddit.com/r/LocalLLaMA/comments/16ssk0q/could_a_mac_run_a_546b_model/
Llama 2 Rate Limits | 2 | I am using GPT-3.5 Turbo and am running into rate-limit constraints. I have filled out OpenAI's Rate Limit Increase Form and my limits were marginally increased, but I still need more. Does Llama 2 also have a rate limit on remaining requests or tokens? Thanks in advance for the help! | 2023-09-26T16:16:22 | Widowmaker23 | https://www.reddit.com/r/LocalLLaMA/comments/16ssb9p/llama_2_rate_limits/
Evaluating Language Competence of Llama 2-based models | 11 | I recently tried to improve Llama 2's ability to speak German. I totally failed, but it got me into benchmarking language capabilities.
Haven't found much on how to use sacreBLEU with LLMs except [this](https://huggingface.co/spaces/evaluate-metric/sacrebleu), which is why I'm sharing my approach. It is applicable to any language pair, not only English->German, so it may be useful to some of you.
​
* load the dataset and generate 5 shot prompts:
​
from datasets import load_dataset

ds=load_dataset(path="wmt20_mlqe_task1", name="en-de", split="test")
ds=ds["translation"]
ds_examples=ds[0:5]
ds_predict=ds[5:]
prompt_template="English: {en}\nGerman: {de}"
prompt_examples = "\n\n".join([prompt_template.format(**row) for row in ds_examples])
* 5-shot prompt Llama to generate translations
​
# llama-precise
gen_config = {
"temperature": 0.7,
"top_p": 0.1,
"repetition_penalty": 1.18,
"top_k": 40,
"do_sample": True,
"max_new_tokens": 100,
"pad_token_id": pipeline.tokenizer.eos_token_id,
}
from tqdm import tqdm

predictions=[]
for row in tqdm(ds_predict):
prompt=prompt_examples + "\n\n" + prompt_template.format(en=row["en"], de="")[:-1]
prediction=pipeline(prompt, **gen_config)[0]["generated_text"][len(prompt)+1:]
if "\n" in prediction:
prediction=prediction.split("\n")[0]
predictions.append(prediction)
references=[row["de"] for row in ds_predict]
* calculate BLEU score (using HF sacreBLEU metric)
​
import evaluate

sacrebleu = evaluate.load("sacrebleu")
sacrebleu_results=sacrebleu.compute(predictions=predictions, references=references)
print(sacrebleu_results["score"])
performance of the base models:
​
[995 English-to-German translations, 5-shot prompts. The 70B model was loaded in 4-bit (VRAM poor).](https://preview.redd.it/p9h2pwiiimqb1.png?width=1775&format=png&auto=webp&s=a6640894b9dd812974439cd29fc3cb2b4ed8f463)
A few more words: [https://medium.com/@geronimo7/evaluating-language-competence-of-llama-2-based-models-the-bleu-score-d44c651a5e58](https://medium.com/@geronimo7/evaluating-language-competence-of-llama-2-based-models-the-bleu-score-d44c651a5e58)

Code, including a version using batched inference: [https://github.com/geronimi73/sacrebleu-llama](https://github.com/geronimi73/sacrebleu-llama) | 2023-09-26T16:07:37 | HatEducational9965 | https://www.reddit.com/r/LocalLLaMA/comments/16ss31u/evaluating_language_competence_of_llama_2based/
vLLM - how to share vRAM | 3 | I want to try serving a 70B model using vLLM. However, the model doesn't fit on one GPU.

How can I use vLLM to host the model on more than one GPU with shared vRAM? | 2023-09-26T15:46:26 | ComplexIt | https://www.reddit.com/r/LocalLLaMA/comments/16srjd9/vllm_how_to_share_vram/
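For what it's worth, vLLM's tensor parallelism is the usual answer here: it shards the weights across GPUs rather than pooling a single shared VRAM space. A launch sketch; the model name and GPU count are placeholders, and the flag name follows the vLLM docs of the time, so double-check it against your installed version:

```shell
# Serve a 70B model sharded across 2 GPUs (weights split, not "shared" VRAM).
python -m vllm.entrypoints.api_server \
    --model meta-llama/Llama-2-70b-chat-hf \
    --tensor-parallel-size 2
```

In the Python API this corresponds to constructing the engine with a `tensor_parallel_size` argument.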
When will they update datasets with post-2021 events etc. | 2 | Any information on this? Would all the models have to be retrained, with something like a llama-3-2023-november release? Sorry if it's a dumb question; dying to know. | 2023-09-26T15:28:24 | Jazzlike-Fun9923 | https://www.reddit.com/r/LocalLLaMA/comments/16sr2s5/when_will_they_update_datasets_with_post_2021/
Can fine-tuning teach the model some new facts? | 11 | I've read a lot about model fine-tuning and learned that fine-tuning is about the output form rather than the content. Yet I've recently heard from at least two people in the industry that a model can memorize information during fine-tuning, which would make it a fact-learning process.
Can anyone shed some light on this: is it possible? Are there any specific setups or model/adapter architectures that can provide that? | 2023-09-26T14:56:50 | Greg_Z_ | https://www.reddit.com/r/LocalLLaMA/comments/16sq8x4/can_finetuning_teach_the_model_some_new_facts/
I made an HF Space for Qwen-14b-Chat. Currently 14B Model SOTA on benchmarks. So now you can test it without needing to use ModelScope or download it locally. I want to say thanks to HF for providing a GPU grant for this space! | 42 | 2023-09-26T14:15:24 | https://huggingface.co/spaces/artificialguybr/qwen-14b-chat-demo | Seromelhor | https://www.reddit.com/r/LocalLLaMA/comments/16sp5z6/i_made_an_hf_space_for_qwen14bchat_currently_14b/
Anyone running a Tesla M40 or P4? What is your experience like? | 13 | Title. Curious to see how these old GPUs are faring in today's world. What models/what kind of speed are you getting? I have one on hand as well as a few P4s; can't decide what to do with them. | 2023-09-26T13:54:25 | ziggo0 | https://www.reddit.com/r/LocalLLaMA/comments/16somxc/anyone_running_a_tesla_m40_or_p4_what_is_your/
Quick question on LoRAs: they primarily make style changes, right, and not really substance? | 11 | I am reading up on this, and my impression is that this is good for changing the behavior/style of a model and not great at adding new information to a model.
There are lots of discussions on fine-tuning/training, and I am just trying to "eliminate" this method as a way to add information to an LLM. I am reading all these low-level explanations of LoRA, but **I am just missing the high-level application of this technique.**
**Am I correct that LoRA is a good way to "color" the output of a model and not the right tool to teach a model the contents of a book?** | 2023-09-26T13:53:02 | drywallfan | https://www.reddit.com/r/LocalLLaMA/comments/16solr6/quick_question_on_loras_they_do_primarily_style/
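A back-of-the-envelope way to see why LoRA leans toward style rather than new knowledge: the update it learns is a low-rank matrix product B·A added to the frozen weights, so the number of trainable values is tiny compared to the full matrix. A plain-Python sketch of the parameter counts, with a hidden size of 4096 assumed purely for illustration:

```python
def full_update_params(d_out, d_in):
    # A full fine-tune updates every entry of a weight matrix W (d_out x d_in).
    return d_out * d_in

def lora_update_params(d_out, d_in, r):
    # LoRA freezes W and learns a low-rank update B @ A, with
    # B of shape (d_out, r) and A of shape (r, d_in).
    return r * (d_out + d_in)

d = 4096                                  # hidden size, Llama-7B-ish
full = full_update_params(d, d)           # 16,777,216 values per matrix
lora = lora_update_params(d, d, r=8)      # 65,536 values per matrix
print(f"{lora / full:.2%} of the matrix is trainable")  # 0.39%
```

With well under 1% of the capacity of a full fine-tune, a low-rank adapter can steer tone and format effectively, but it has far less room to store genuinely new facts.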
A Local LLM Translation | 1 | [removed] | 2023-09-26T12:58:06 | Hussei911 | https://www.reddit.com/r/LocalLLaMA/comments/16sn8yr/a_local_llm_translation/
Simple LLM usage for beginners | 11 | Setting up LLMs can be hard sometimes; that's why I'm working on a tool to simplify LLM usage for beginners.
The goal is to make anyone able to use a command as simple as `llm -m wizardlm-13b-q4_0 "<prompt>"`. That would automatically download and set up models, or use the APIs if available (e.g. for OpenAI models and others).
It is currently working well on my setup (Apple M2, 16 GB), but now I'm looking for feedback on different setups.
https://github.com/snwfdhmp/llm | 2023-09-26T12:54:41 | snwfdhmp | https://www.reddit.com/r/LocalLLaMA/comments/16sn5tj/simple_llm_usage_for_beginners/
Why does llama2-7b-chat on the Perplexity Labs demo seem much smarter than my local version? | 1 | [removed] | 2023-09-26T12:29:45 | CheatCodesOfLife | https://www.reddit.com/r/LocalLLaMA/comments/16smkaf/why_does_llama27bchat_on_the_perplexity_lab_demo/
Open source Q&A chatbot UI recommendation? | 3 | Any recommendations for open-source repos that provide a web-based chat UI where you can upload docs, PDFs, links, etc.? So far I found https://github.com/openchatai/OpenChat, but it doesn't support Llama, Claude, etc. There's also https://github.com/h2oai/h2ogpt, but their Gradio UI is overly complicated (meant for technical people) and not user friendly. | 2023-09-26T12:11:48 | simpleyuji | https://www.reddit.com/r/LocalLLaMA/comments/16sm5vg/open_source_qa_chatbot_ui_recommendation/
[Research] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening | 34 | 2023-09-26T10:48:57 | https://arxiv.org/abs/2308.07707 | JustAddMoreLayers | https://www.reddit.com/r/LocalLLaMA/comments/16skep1/research_fast_machine_unlearning_without/
Any way to get Flash Attention 2 working on oobabooga using Windows 10? | 18 | Getting this notification when using ExLlamaV2:
2023-09-26 12:42:31 WARNING:You are running ExLlamaV2 without flash-attention. This will cause the VRAM usage to be a lot higher than it could be.
Try installing flash-attention following the instructions here: [https://github.com/Dao-AILab/flash-attention#installation-and-features](https://github.com/Dao-AILab/flash-attention#installation-and-features)
Now the one-click installers don't use it yet, AFAIK. Also, there is one post on GitHub (https://github.com/oobabooga/text-generation-webui/issues/4015) where people have tried to no avail. I don't know how big the impact of Flash Attention 2 is on VRAM usage, but it would be great to be able to run a 70B model on a single 3090. | 2023-09-26T10:46:17 | LetThemSing | https://www.reddit.com/r/LocalLLaMA/comments/16skcwm/any_way_to_get_flash_attention_2_working_on/
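For reference, the upstream install steps per the flash-attention README of the time are below; note they target Linux/CUDA, and no official Windows wheels were published then, which is exactly why the Windows attempts above fail:

```shell
# Upstream instructions (Linux / CUDA; ninja avoids a very long compile)
pip install packaging ninja
pip install flash-attn --no-build-isolation
```

On Windows the realistic options were compiling from source yourself or waiting for prebuilt wheels; check the linked repo for the current status.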
pls, recommend me low parameter models | 1 | Searching for good low-parameter models. What are the good ones? | 2023-09-26T10:40:33 | Fit_Extent712 | https://www.reddit.com/r/LocalLLaMA/comments/16sk934/pls_recommend_me_low_parameter_models/
A poor man's guide to fine-tuning Llama 2 | 0 | 2023-09-26T10:32:54 | https://duarteocarmo.com/blog/fine-tune-llama-2-telegram | iLikePortugueseTarts | https://www.reddit.com/r/LocalLLaMA/comments/16sk41q/a_poor_mans_guide_to_finetuning_llama_2/
Any help on using knowledge distillation on LLMs like Llama 2 or Qwen? | 1 | How can we do distillation on an LLM and create a small network for a specific task? | 2023-09-26T10:32:17 | s1lv3rj1nx | https://www.reddit.com/r/LocalLLaMA/comments/16sk3nt/any_help_on_using_knowledge_distillation_on_llms/
Llama 2 data format | 6 | Hello Guys,
I have a pretty basic question: what is the data format for Llama 2 fine-tuning?

I have raw text and question-answer pairs, which I extracted from the raw text.

Now I would like to use both in order to train Llama 2.

So what is the exact format (maybe JSONL?) for the raw text and the Q&A pairs? What does it look like?

Maybe you could provide an example, since I could not find anything on the internet.
Yeah, I know, there should be tons of examples... | 2023-09-26T10:25:27 | Enkay55 | https://www.reddit.com/r/LocalLLaMA/comments/16sjyvc/llama_2_data_format/
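For illustration: a common convention (used by several fine-tuning scripts, though not an official Meta spec) is JSONL with one training example per line, with Q&A pairs rendered into Llama-2-chat's [INST] instruction template. A sketch in which the field names are my own choice:

```python
import json

# Hypothetical field names; the only hard requirement of JSONL is
# one JSON object per line.
qa_pairs = [
    {"question": "What is the capital of France?", "answer": "Paris."},
    {"question": "Who wrote Faust?", "answer": "Goethe."},
]

def to_llama2_prompt(q, a):
    # Llama-2-chat's template: the instruction goes inside [INST] ... [/INST],
    # and the desired answer follows as the completion.
    return f"<s>[INST] {q} [/INST] {a} </s>"

with open("train.jsonl", "w") as f:
    for pair in qa_pairs:
        row = {"text": to_llama2_prompt(pair["question"], pair["answer"])}
        f.write(json.dumps(row) + "\n")
```

Raw text for continued pretraining is usually simpler: the same JSONL with just a `"text"` field and no chat template. Check whatever trainer you use, since each expects its own field names.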
Seeking Advice on Training a Large Language Model on Legal Texts: Should the Model Learn All Versions or Only the Latest? | 3 | Hello everyone! I am looking to train a large language model on legal texts. I understand that legal texts may have different versions. I am unsure whether I should train the model on all versions of the legal texts or only on the most recent versions. Any advice or insight on this would be greatly appreciated. Thank you in advance for your help and suggestions! | 2023-09-26T10:24:58 | piske_usagi | https://www.reddit.com/r/LocalLLaMA/comments/16sjyi3/seeking_advice_on_training_a_large_language_model/
Llama 2 13/70b | 0 | [removed] | 2023-09-26T10:21:57 | Winter_Importance436 | https://www.reddit.com/r/LocalLLaMA/comments/16sjwh3/llama_2_1370b/
How can I ask indexed documents in Elasticsearch from Llama 2? | 1 | [removed] | 2023-09-26T10:21:13 | mmmanel00 | https://www.reddit.com/r/LocalLLaMA/comments/16sjvz2/how_can_i_ask_indexed_documents_in_elasticsearch/
Any way to optimally use GPU for faster Llama calls? | 11 | I created a Standard_NC6s_v3 (6 cores, 112 GB RAM, 336 GB disk) GPU compute instance in the cloud to run the Llama-2 13B model. Currently it takes ~10s for a single API call to Llama, and the hardware utilization looks like this:

*(GPU/RAM utilization screenshot not included)*
Is there a way to use more of the available RAM and speed up the API calls? My model-loading code:
from torch import cuda, bfloat16
import transformers
model_id = 'meta-llama/Llama-2-13b-chat-hf'
device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'
# set quantization configuration to load large model with less GPU memory
# this requires the `bitsandbytes` library
bnb_config = transformers.BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=bfloat16
)
# begin initializing HF items, need auth token for these
hf_auth = 'token'
model_config = transformers.AutoConfig.from_pretrained(
model_id,
use_auth_token=hf_auth
)
model = transformers.AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
config=model_config,
quantization_config=bnb_config,
device_map='auto',
use_auth_token=hf_auth
)
model.eval()
print(f"Model loaded on {device}")
| 2023-09-26T10:11:47 | todaysgamer | https://www.reddit.com/r/LocalLLaMA/comments/16sjprs/any_way_to_optimally_use_gpu_for_faster_llama/
I am completely new to running local models on my PC. Are there any good user-friendly programs that can easily run LLMs without jumping through the hoops of installing an interface, properly categorizing models, and making sure I don't have to reinstall if I can't get them to work? | 3 | I've been trying for a week now to run a couple of LLaMA models through KoboldAI, but every time I follow the instructions on how to configure them, something goes wrong that makes them unable to chat. It's my fault, obviously, but I'm considering backing off for now until there's a way I don't have to keep reinstalling node.js and GitHub Desktop. | 2023-09-26T09:40:38 | False_Yesterday6699 | https://www.reddit.com/r/LocalLLaMA/comments/16sj634/i_am_completely_new_to_running_local_models_on_my/
**How can I query documents indexed in Elasticsearch from Llama 2?** (score 1)

Hello,

I am using `openai.Embedding.create()` to create the vector list that I post to Elasticsearch. How can I ask questions about an indexed document? (I am using Llama 2 in place of GPT.) I am asking with `load_qa_chain()`, but it ends up in a loop of retries...

*(posted by mmmanel00 on 2023-09-26; /r/LocalLLaMA/comments/16sh1i8/)*
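One pattern that avoids the `load_qa_chain()` retry loop is to do the retrieval step by hand: embed the question with the same embedding model, run an Elasticsearch kNN query, and paste the top passages into a plain prompt for Llama 2. A minimal sketch; the index name, vector field name, and server URL are assumptions, not values from the post:

```python
def build_knn_query(query_vector, field="embedding", k=3, num_candidates=50):
    """Build an Elasticsearch 8.x kNN search body over a dense_vector field."""
    return {
        "knn": {
            "field": field,
            "query_vector": query_vector,
            "k": k,
            "num_candidates": num_candidates,
        },
        "_source": ["text"],  # assumes the raw passage text is stored in "text"
    }


def build_prompt(question, passages):
    """Stuff the retrieved passages into a plain QA prompt for the LLM."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def ask(question):
    """Hypothetical end-to-end wiring; needs a running cluster and an API key."""
    import openai
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    vec = openai.Embedding.create(
        model="text-embedding-ada-002", input=question
    )["data"][0]["embedding"]
    hits = es.search(index="docs", body=build_knn_query(vec))["hits"]["hits"]
    prompt = build_prompt(question, [h["_source"]["text"] for h in hits])
    return prompt  # send this to the Llama 2 endpoint instead of load_qa_chain()
```

`build_knn_query` targets the ES 8.x top-level `knn` search option; older clusters would need a `script_score` query instead.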
**How can I use embeddings from llama.cpp through the OpenAI client in Python?** (score 2)

Hello,

I have llama.cpp deployed on a server, and I am trying to reuse the same code I use for GPT (OpenAI). The issue is that I can't find any tutorials, and I am struggling to get embeddings or prompts working properly. I would prefer not to rely on raw `requests.post()` calls to the embeddings endpoint.

Thank you.

P.S.: Couldn't get any help on Stack Overflow.

*(posted by mmmanel00 on 2023-09-26; /r/LocalLLaMA/comments/16sh0vh/)*
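For what it's worth, one way to avoid raw `requests.post()` is to run an OpenAI-compatible server in front of llama.cpp (for example `python -m llama_cpp.server` from llama-cpp-python) and point the regular `openai` client at it. A sketch under those assumptions; the base URL and model name are placeholders:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def embed(texts, api_base="http://localhost:8000/v1"):
    """Fetch embeddings via the pre-1.0 `openai` client pointed at a local server.

    Never called in this sketch; it assumes an OpenAI-compatible endpoint
    (e.g. llama-cpp-python's server) is listening at `api_base`.
    """
    import openai

    openai.api_base = api_base
    openai.api_key = "sk-local"  # dummy; local servers typically ignore the key
    resp = openai.Embedding.create(model="local-model", input=texts)
    return [d["embedding"] for d in resp["data"]]
```

With that in place, `embed(["query"])` plus `cosine_similarity(...)` gives a bare-bones semantic search without any hand-written HTTP code.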
**ValueError on macOS: "Quantizations changed in GGJTv2. Can only convert unquantized GGML files older than GGJTv2. Sorry, your GGJTv1 file of type MOSTLY_Q5_1 is not eligible for conversion." · Issue #454 · LostRuins/koboldcpp** (score 1)

Setup:

* MacBook Pro 14" M1, 16 GB RAM
* Python 3.10.12
* GNU Make 3.81
* Apple clang version 15.0.0 (clang-1500.0.40.1), Target: arm64-apple-darwin23.0.0, Thread model: posix

The model I tried: https://huggingface.co/TehVenom/Pygmalion-7b-4bit-Q5_1-GGML/tree/main

After running `convert-llama-ggml-to-gguf.py` in koboldcpp, I get this error:

    (ai) yuki@yuki koboldcpp % python convert-llama-ggml-to-gguf.py --input '/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/models/Pygmalion-7b.bin' --output Pygmalion-gguf.bin
    Using config: Namespace(input=PosixPath('/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/models/Pygmalion-7b.bin'), output=PosixPath('Pygmalion-gguf.bin'), name=None, desc=None, gqa=1, eps='5.0e-06', context_length=2048, model_metadata_dir=None, vocab_dir=None, vocabtype='spm')
    === WARNING === Be aware that this conversion script is best-effort. Use a native GGUF model if possible. === WARNING ===
    Note: If converting LLaMA2, specifying "--eps 1e-5" is required. 70B models also need "--gqa 8".
    Scanning GGML input file
    File format: GGJTv1 with ftype MOSTLY_Q5_1
    Traceback (most recent call last):
      File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 451, in <module>
        main()
      File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 428, in main
        offset = model.load(data, 0)
      File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 194, in load
        self.validate_conversion(hp.ftype)
      File "/Volumes/Yuki 1/Work/Github/yuna-ai/lib/yunacpp/koboldcpp/convert-llama-ggml-to-gguf.py", line 187, in validate_conversion
        raise ValueError(f'{err} Sorry, your {self.file_format.name}v{self.format_version} file of type {ftype.name} is not eligible for conversion.')
    ValueError: Quantizations changed in GGJTv2. Can only convert unquantized GGML files older than GGJTv2. Sorry, your GGJTv1 file of type MOSTLY_Q5_1 is not eligible for conversion.

Does anybody else have this issue? Any solutions?

*(posted by yukiarimo on 2023-09-26; https://github.com/LostRuins/koboldcpp/issues/454)*
**How to handle 4 GPUs?** (score 1)

[removed]

*(posted by Nondzu on 2023-09-26; https://www.reddit.com/gallery/16sghc2)*
**Cross-encoders for long documents and paragraphs** (score 3)

Hi guys, good evening, hope all is well!

I need some opinions on using cross-encoders for long text documents. I have two lists of documents, A and B, and I am developing a model that matches documents from list A to list B based on semantic similarity. Here is my current approach:

1. Encode both lists of documents with a bi-encoder (using sentence-transformers/gtr-t5-xl).
2. Use FAISS to retrieve the top 100 results from the bi-encoder.
3. Re-rank the returned documents with a cross-encoder.

Now my question: cross-encoders are usually good for a token limit of 1024 or less, so is there a better way to compare longer documents? Say list A holds grade-10 math books and list B holds grade-11 math books, and I want to see which books in A and B are semantically similar to each other. What approach should I take?

Would moving to a vector database be the next best thing, since I could keep adding to the index as new documents are added?

Thanks, and I would love to hear your opinions.

*(posted by haris525 on 2023-09-26; /r/LocalLLaMA/comments/16seypf/)*
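One common workaround for the cross-encoder token limit (an assumption on my part, not something from the post) is to split each document into overlapping chunks, score every chunk pair, and max-pool the scores into a single document-pair score. A sketch, with chunk sizes chosen arbitrarily:

```python
def chunk_words(text, size=300, overlap=50):
    """Split text into overlapping word windows (a rough proxy for token windows)."""
    words = text.split()
    if len(words) <= size:
        return [" ".join(words)]
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words) - overlap, step)]


def score_pair(doc_a, doc_b, predict, size=300, overlap=50):
    """Max-pool a cross-encoder score over all chunk pairs of two long documents.

    `predict` is any callable mapping a list of (a, b) text pairs to scores,
    e.g. sentence_transformers.CrossEncoder(...).predict.
    """
    pairs = [
        (ca, cb)
        for ca in chunk_words(doc_a, size, overlap)
        for cb in chunk_words(doc_b, size, overlap)
    ]
    return max(predict(pairs))
```

With a real model this would be something like `score_pair(a, b, CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2").predict)`. Mean-pooling instead of max, or scoring only the top bi-encoder-matched chunks, are obvious variations; for whole-book matching, hierarchical summarize-then-compare approaches may beat any cross-encoder.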
**LLM performance (Llama 2 vs. OpenAI)** (score 3)

Hi, this poll is for folks who have tested LLMs for their use case. I just want to understand the bigger picture: which LLM do you prefer if you have the same budget for both?

[View Poll](https://www.reddit.com/poll/16se969)

*(posted by ianuvrat on 2023-09-26; /r/LocalLLaMA/comments/16se969/)*
**Tesla P40 or 3090: relative performance** (score 9)

Greetings. I am hoping to find out the relative performance difference between a P40 and a 3090, as the P40 costs roughly 35% as much as a used 3090.

*(posted by CryoMyst on 2023-09-26; /r/LocalLLaMA/comments/16sdt1i/)*
**MythoMax 13B and Xwin-LM 70B now available on RizzChat.app** (score 14)

Hi! One of the creators of [RizzChat](https://rizzchat.app) (formerly known as RizzGPT) here. First of all, we're big fans of LocalLlama and we think the community is awesome for its contribution to open-source models. We've been lurking here for a bit, so it's definitely exciting to finally post (although it's scary)!

We've noticed people have trouble deploying open-source models for character chat and story generation, especially at the 70B level. That's why we deployed two open-source models to RizzChat (in addition to GPT-3.5) that we think people will like:

* Gryphe/MythoMax-L2-13b (free)
* Xwin-LM/Xwin-LM-70B-V0.1 (for subscribers for now, since it costs quite a bit to run)

The two questions we have:

* Are there any other models people have trouble deploying? We can look into adding them.
* Is there any customization you'd like to see on RizzChat? We want to give users as much control as we can!

*(posted by theodormarcu on 2023-09-26; /r/LocalLLaMA/comments/16sd8f8/)*
**qwen.cpp and the BrowserQwen Chrome extension / Qwen-Agent** (score 28)

qwen.cpp: https://github.com/QwenLM/qwen.cpp

> C++ implementation of Qwen-LM

BrowserQwen Chrome extension / Qwen-Agent: https://github.com/QwenLM/Qwen-Agent

> Agent framework and applications built upon Qwen, featuring Code Interpreter and Chrome browser extension.

*(posted by MoneroBee on 2023-09-26; /r/LocalLLaMA/comments/16sbr5v/)*
**How well does Code Llama work with pandas?** (score 1)

I'm looking for a local LLM for simple data analytics. I am unable to use GPT because this data is confidential. I've seen Code Llama and heard that it is trained on Python; does anyone know anything about using it with pandas? If it doesn't work, are there any other LLMs you would recommend I look into? Thank you.

*(posted by Infinite_Prior471 on 2023-09-26; /r/LocalLLaMA/comments/16sbnx5/)*
**Any coding LLMs that can run on a Raspberry Pi 400? Or an AMD 7600 XT (Intel i7 4th gen + 16 GB RAM)?** (score 0)

Hi guys,

I am very new to this. I think I made a post before, but this time I actually want to proceed with using my Raspberry Pi 400, which I currently use as a learning/tinkering server for my pet projects and for learning Laravel 10.

Is there a tutorial anyone can recommend for the Pi? I feel obligated to disclose that I am very new; self-taught at sysadmin things (I think). I don't want to sound like a smart-ass, and I still google a lot, but I am building a SaaS and manage my own server because it helps with costs.

I appreciate your help in advance, everyone.

*(posted by shakespear94 on 2023-09-26; /r/LocalLLaMA/comments/16sbhct/)*
**Llama 2 File Chat** (score 15)

Hi everyone, I've been working on a few projects and thought I'd share some of my work: a Llama 2 document-query repo, a local fine-tuning repo, and an LLM-Learning repo for research and news. I plan to keep the learning repo updated. Hope you find these resources useful!

Link: https://github.com/j-webtek/LLM-Learning

*(posted by pinnapple-crush on 2023-09-26; /r/LocalLLaMA/comments/16saj28/)*
**List of Open Source UIs/Webapps for LLM Chatbots** (score 1)

[removed]

*(posted by snowfort_guy on 2023-09-26; /r/LocalLLaMA/comments/16sa3ye/)*
**I need help downloading the cerebras/SlimPajama-627B dataset, please.** (score 1)

[removed]

*(posted by Dandelion_puff_ on 2023-09-26; /r/LocalLLaMA/comments/16s9684/)*
**I'm new to LLMs and chose this topic for my final undergrad project as a last option, but now I don't know what to do** (score 7)

Hi everyone! After exhausting all other options, I was forced to choose LLMs as the topic for my undergrad thesis. I've been seeing all these new specialized chatbots built with LLMs and NLP and wanted to build one as my project. I have one huge problem, though: my first language is Spanish, and I want my chatbot to work in that language.

I proposed this idea to my tutor: compare the performance of two different LLMs when answering questions about a very specific topic. From what I understand, I must find two pre-trained LLMs and then fine-tune them on my specific topic, which is tax laws in Ecuador, my country. But I don't really know how to do this, and my research has come up empty on whether I can even fine-tune LLMs to work in another language, or how to do it in the first place. Heck, I don't even know if there are LLMs trained in Spanish other than Aguila-7B, a variation of Falcon-7B.

I want to use smaller LLMs so that training doesn't take too long, because I don't have unlimited time for this project, and I also have a part-time job, so that doesn't really help. As for computing power, my university has some serious workstations and even two DGX servers that I can use, so I'm not worried about that.

Any forum, repository, video, or help in general would be greatly appreciated. Thanks!

*(posted by OnlyXeba on 2023-09-25; /r/LocalLLaMA/comments/16s7d0j/)*
**Is it possible to fine-tune a model (any model) on text from a short story (or novel!) on a Mac Studio M2 Ultra with 192 GB RAM, without programming skills?** (score 11)

Is this currently possible without programming knowledge? I don't mind if it takes a long time, but as a proof of concept I'd like to somehow feed the LLM a short story (and eventually even a novel) and see if it can summarize the plot, give me examples of dialog, create fan fiction, etc.

Is this feasible with something like llama.cpp or ooba? If so, would someone mind giving me a place to start? I'm assuming that because of context limits some kind of LangChain solution would have to be employed, but I'm ignorant even of how to use APIs, so it would have to be sort of 'off the shelf', if you know what I mean.

Also, would a bigger model by definition take longer to fine-tune? (I'm sure this is a stupid question, but I honestly don't know.) I would love to start with the best model my rig can handle and go from there, but maybe it's better to start small, I know.
Thanks!

*(posted by spanielrassler on 2023-09-25; /r/LocalLLaMA/comments/16s74hv/)*
**This is one of the best 13B models I've tested (for programming, math, logic, etc.): speechless-llama2-hermes-orca-platypus-wizardlm-13b** (score 299)

**speechless-llama2-hermes-orca-platypus-wizardlm-13b**
Wow!
I usually don't post non-game-related comments - But I am surprised no one else is talking about this model.
I stopped checking the Benchmarks because they are really starting to be unreliable.
I focused on what I plan to use the models for (creativity, programming, logic, reasoning, etc.)
I run my own Comprehensive Cognitive and Technical Assessment, which tests :
* Language Comprehension
* Family Relations Logic
* Comparative Logic
* Spatial logic
* Temporal Logic
* Qualitative Logic
* Analogical Reasoning
* Deductive and Syllogistic Reasoning
* Lateral Thinking
* Basic Arithmetic
* Advanced Arithmetic
* Language Translation (Basic)
* Programming & Software Development Questions
* Staying on Topic in Conversations

This model scored the highest - of all the gguf models I've tested. It even beat many of the 30b+ Models.
**Runner Up Models:**
* chatayt-lora-assamble-marcoroni.Q8_0
* marcoroni-13b.Q8_0
All Models can be found in **TheBloke** collection.
**One thing I noticed in testing many models - the seeds.**
Some Models will produce correct results with certain seeds - and nonsense with others.
Saving the seed number has benefits with some of the models - but overall - if it answers correctly with multiple seed numbers - that is definitely a win.
I used llama.cpp for testing,
This is the command I used if anyone is interested:
`./main -ngl 51 -t 24 -m speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q8_0.gguf -c 4096 -b 1024 --temp 0 --repeat_penalty 1.1 --no-penalize-nl -n -1 --color -r "User:" --in-prefix " " --in-suffix "AI:" -i -e -p "AI: Hello. I am an intelligent AI chatbot that answers questions step-by-step and re-evaluates my answers to ensure accuracy. What is your name?\nUser: My name is Tatsu.\nAI: Hello Tatsu, What is your first question or questions?\nUser:"`
**Machine Specs:**
* GPU: Nvidia 4090 24GB
* Ram: 128 GB
* CPU: i9-13900KS
*Note: I didn't test models for Roleplay or censorship. I wanted an LLM I could use for work-related tasks.*
Is anyone else getting good results from this model as well, or is it just me?

*(posted by StudioTatsu on 2023-09-25; /r/LocalLLaMA/comments/16s701v/)*
**Training: multi-turn conversation (text-generation-webui)** (score 4)

*Reposting my question in* r/Oobabooga *here because of the larger audience.*
The original [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna) post describes using FastChat for training on multi-turn conversation. In the FastChat training code, loss is based on the entire multi-turn conversation string [with user messages masked](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py#L116).
Is there a good way to do something similar with multi-turn conversation in textgen? I'm open to using alternative training tools too.
Alternatively, is there a way to represent multi-turn conversation training data in a textgen training format, at least? For instance:
    vicuna-format

    {
        "instruction,output": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: %instruction%\n\nASSISTANT: %output%"
    }
This clearly supports one turn of conversation. My data is in the multi-turn ShareGPT format, however:
    [
        {
            "id": "0",
            "conversations": [
                {
                    "from": "human",
                    "value": "Hey Samantha, I've run into a bit of a tricky situation at work, and I'm not sure how to handle it. Do you have any advice?"
                },
                {
                    "from": "gpt",
                    "value": "I'd be happy to help if I can. Can you give me some details about the situation you're facing?"
                },
                {
                    "from": "human",
                    "value": "Yeah, so I've been offered a promotion, but it would involve relocating to another city. I'm not sure if I should take it or not, because it's a great opportunity career-wise, but I'd have to leave my friends and family behind. What do you think?"
                },
                {
                    "from": "gpt",
                    "value": "That's definitely a tough decision. I think it's important to weigh the benefits of the promotion against the impact it could have on your personal relationships. Have you considered talking to your friends and family about it, to get their perspective and understand how they'd feel about you moving?"
                },
Any suggestions?

*(posted by dondochaka on 2023-09-25; /r/LocalLLaMA/comments/16s5lbg/)*
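For anyone else hitting this: the heart of the FastChat trick linked above is just building a `labels` sequence in which user tokens are replaced with the ignore index, so only assistant tokens contribute to the loss. A tool-agnostic sketch (the turn structure mirrors the ShareGPT sample above; tokenization is assumed to have happened already):

```python
IGNORE_INDEX = -100  # the label value ignored by PyTorch's CrossEntropyLoss


def build_labels(turns):
    """Concatenate a multi-turn conversation into (input_ids, labels).

    `turns` is a list of (speaker, token_ids) in order; tokens from "human"
    turns are masked with IGNORE_INDEX so the loss only covers "gpt" replies.
    """
    input_ids, labels = [], []
    for speaker, token_ids in turns:
        input_ids.extend(token_ids)
        if speaker == "gpt":
            labels.extend(token_ids)
        else:
            labels.extend([IGNORE_INDEX] * len(token_ids))
    return input_ids, labels
```

If I remember right, some standalone trainers (e.g. Axolotl with `train_on_inputs: false`) apply a similar masking out of the box, which may be easier than wiring it into textgen.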
**What models (at different quantizations) work for Llama 2 in 40 GB of RAM?** (score 4)

I have about 40 GB and am trying to test which Llama models will fit in it. Can any quantization of 70B fit? Or 30B? Thanks!

*(posted by lifesucksandthenudie on 2023-09-25; /r/LocalLLaMA/comments/16s5iws/)*
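A back-of-the-envelope check for questions like this one: the formula below only counts the weights (bits per weight is the average for the quant format) and ignores context/KV-cache overhead, so treat it as a lower bound:

```python
def weight_gib(params_billion, bits_per_weight):
    """Approximate weight-only memory in GiB for a quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30


def fits_in_ram(params_billion, bits_per_weight, ram_gib, headroom=0.8):
    """Crude feasibility check, leaving some RAM for the OS and KV cache."""
    return weight_gib(params_billion, bits_per_weight) <= ram_gib * headroom
```

By this estimate a 33B model at 8 bits (~31 GiB of weights) squeezes into 40 GiB, while a 70B needs roughly 4-bit quantization (~37 GiB of weights) and is borderline once cache and OS overhead are counted.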
**Any open-source ideas on how to do real-time, realistic-sounding TTS?** (score 21)

Hello! I know ElevenLabs does pretty good TTS, but it costs money, so I am wondering: is there anything open source that can do real-time TTS and sounds decent?

*(posted by Wowzers83 on 2023-09-25; /r/LocalLLaMA/comments/16s3y92/)*
**A10, A16, or 4090 for LLM inference for prompt engineers?** (score 7)

Hi,

We're doing LLM work these days, like everyone it seems, and I'm building some workstations for software and prompt engineers to increase productivity. Yes, cloud resources exist, but a box under the desk is very hard to beat for fast iteration: read a new arXiv preprint about a chain-of-thought variant and hack together a quick prototype in Python, etc.
So far prototype #1 of "The Box" is dual 4090s and under $5k. See parts list here: [https://pcpartpicker.com/user/Kgcdc/saved/#view=YW6w3C](https://pcpartpicker.com/user/Kgcdc/saved/#view=YW6w3C)
We're focused on 40b Llama so this is more than enough CPU and RAM.
Triple 4090 is possible, too, but now we're hard up against power handling for normal 15 amp circuits and PSUs. See [https://pcpartpicker.com/user/Kgcdc/saved/#view=nW7xf7](https://pcpartpicker.com/user/Kgcdc/saved/#view=nW7xf7) but no idea if this variant will run our test suite since CPU and RAM are quite limited (by power budget).
So my question now is to look at A10 or A16 variants, both of which have less VRAM than 4090 but can be much more dense (because of power requirements and PCIe slot width). A10, for example, is half the power of 4090 and 1 PCIe slot wide instead of 3. Which means putting 6 in an ATX motherboard is pretty straightforward.
Does anyone have reliable performance comparisons between 4090, A10, and A16 \*on LLM inference\*? I don't care about training or finetuning perf for these boxes; I only care about tokens per second inference or something that's a rough proxy for TPS.
I've found this comparison at Lambda, which is helpful and suggests the A10 may be a better choice, certainly relative to the 4090 in throughput per watt: [https://lambdalabs.com/gpu-benchmarks](https://lambdalabs.com/gpu-benchmarks)
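For the "rough proxy for TPS" part: a tiny harness like this gives comparable tokens/sec numbers across boxes. The `generate` callable is a placeholder for whatever backend gets wrapped (llama-cpp-python, an HF pipeline, etc.), not any specific API:

```python
import time


def tokens_per_second(generate, prompt, max_tokens=128):
    """Time one generation call and return (n_tokens, tokens/sec).

    `generate` is any callable (prompt, max_tokens) -> sequence of tokens;
    wrap the inference backend of choice to match that signature.
    """
    start = time.perf_counter()
    tokens = generate(prompt, max_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens), len(tokens) / elapsed
```

Averaging over several prompts, and timing prompt processing separately from generation, makes the numbers more meaningful for cross-GPU comparison.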
*(posted by Kgcdc on 2023-09-25; /r/LocalLLaMA/comments/16s3737/)*
**Dual P40 for 2x E5-2665: a bad idea?** (score 1)

Hello :-)

I recently obtained an old ThinkStation D30, dual Xeon E5-2665, 256 GB DDR3 RAM (8-channel), for the price of a Ryzen 5600/5700 CPU :-)

**TL;DR:** Would dual P40s work (larger models, 4-bit? Are CPUs without AVX2 an issue?), or is a 3060 the better (smaller-VRAM) solution?
It's not the main reason why I got the PC, but I'd like to use LLAMA and Stable Diffusion now.
Both work on CPU (5 Minutes for 512x512@25 steps SD) but of course it's slow.
Due to budget constraints I've considered the P40, sold refurbished with warranty for 140€/$150. I could 3D print fan adapters.
Would these cards work for my use case?
1) I've read on this subreddit that some get good performance with the p40, some don't. Some claim 4 bit quants won't work, some do.
2) Can I use two Nvidia Tesla P40? Or is this a bad idea? I've read a post claiming that splitting models across these two old cards might not work, but I can't find the post I've read anymore.
3) AVX but no AVX2 CPUs: Will this affect performance once I use GPU? I discovered that the E5-2665 does not have AVX2, with koboldcpp and --noavx2 I get 3T/s with a 13b bin model. Bummer. Despite having the old CPUs and DDR3 RAM, I was hoping the 8 channel RAM would still be a bit faster. Upgrading to Xeon v3 is not possible with this mainboard.
4) The most obvious alternative within my budget would be a single 3060 for the same price, but of course it would limit things a little?
Also, dual cards might be a benefit if I'd run the machine for a small group of students to use, at the youth center I work at (so 2-5 students could use Llama and/or Stable Diffusion). And some training could be interesting.
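On question 2: llama.cpp (and koboldcpp, which builds on it) can split a model's layers across multiple cards. A hypothetical invocation for two P40s — `--tensor-split` and `--n-gpu-layers` are real llama.cpp options, but the model filename and layer count here are made-up placeholders, so adjust for your setup:

```shell
# Hypothetical: split a 13B GGUF roughly 50/50 across two P40s.
# --tensor-split ratios follow the device order shown by nvidia-smi.
./main -m ./models/llama-2-13b.Q4_K_M.gguf \
  --n-gpu-layers 43 \
  --tensor-split 1,1 \
  -p "Hello"
```

Worth noting that the P40 has very weak FP16 throughput, so back ends and quant formats that stay in FP32/integer paths tend to fare better on it; that's likely why reports on this subreddit are so mixed.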
Thanks for any advice!
EDIT:
Lenovo Thinkstation D30
2X Intel Xeon E5-2665
16 x 16GB DDR3
GPU: Nvidia Quadro 4000
Slot 1: half-length, full-height, PCIe 3.0 x16 (x4 elec, near CPU)
Slot 2: full-length, full-height, PCIe 3.0 x16* (for graphics adapter)
Slot 3: full-length, full-height, 32-bit PCI 2.3
Slot 4: full-length, full-height, PCIe 3.0 x16* (for graphics adapter)
Slot 5: full-length, full-height, PCIe 2.0 x4
Slot 6: half-length, full-height, 32-bit PCI 2.3 | 2023-09-25T20:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/16s2spj/dual_p40_for_2xe52665_a_bad_idea/ | schorhr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16s2spj | false | null | t3_16s2spj | /r/LocalLLaMA/comments/16s2spj/dual_p40_for_2xe52665_a_bad_idea/ | false | false | self | 1 | null |
Recommendation for macOS user and LLM hoarders: DEVONthink | 1 | Just a short recommendation. I started to use Devonthink to manage my LLM files. Normally I use Devonthink to manage other databases with all my other files, especially documents and to have a paperless office. But it is really cool to to have an overview of your llms, since you can create custom meta data, you rate your llms and give them etc and since Devonthink is able to just index your files if you want (without the need to really import them into a new database). So the files will stay on your harddrive and in their paths and will be monitored by Devonthink. If you rename them or move them, Devonthink will automatically know and sync its own database. I also find it nice to have short and clean filenames now.
​
https://preview.redd.it/ff7gq7gy5gqb1.jpg?width=3968&format=pjpg&auto=webp&s=0f2c99ad28d2f5145bbf867288849b839e47f82e
​
Disclaimer: Unfortunately the license for Denvonthink is not cheap.. **I strongly believe that software developers should be paid for their work** and that we should respect pricing structures as much as possible. *That being said, I am aware that some of you may find alternative ways to access this software. If this is the case, I strongly advise caution to avoid downloading malware. Of course I do not support software piracy and I strongly recommend to always choose legal ways. But if you have any questions about this topic, feel free to contact me privately.* | 2023-09-25T18:45:02 | https://www.reddit.com/r/LocalLLaMA/comments/16s0ad8/recommendation_for_macos_user_and_llm_hoarders/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16s0ad8 | false | null | t3_16s0ad8 | /r/LocalLLaMA/comments/16s0ad8/recommendation_for_macos_user_and_llm_hoarders/ | false | false | 1 | null | |
Wild results from Xwin 13B | 31 | I have been playing around with some of the new 13B finetunes, trying to find an ideal model to upgrade my "autonomous" personal assistant AI to. I had it narrowed down to Mythomax and Xwin, and was leaning towards Mythomax because of its creative flexibility. I had noticed that Xwin tended to be more rigid in following the system prompt it was given, and didn't respond as well when challenged. Then, I tried giving it shorter, more open ended system prompts, and it started behaving with much more creativity and openness to alternative perspectives than it's own.
Here is an example of a reply that blew my mind, all from a 1 sentence prompt.
Xwin-13b-8bpw-exl2
System prompt: This is a conversation with Enigma. She is a goddess of philosophy and storytelling. Enigma always writes in verbosely eloquent high prose.
UnstableLlama:
It is almost midnight.
Enigma:
Midnight, that fabled hour when the veil between worlds grows thin, when the boundaries between the known and the unknown blur, and the whispered secrets of the cosmos find their voice in the stillness of the night. It is a time when the shadows cast by the moonlight hold sway over the darkness, and the silent symphony of the stars echoes through the vast expanse of the void. The midnight hour has long been regarded as a time of magic, mystery, and transformation, when the very fabric of reality seems to shift and reveal its deeper truths.
As the clock ticks towards the witching hour, the air itself seems to thicken with anticipation, as if the world around us holds its breath, waiting for the midnight moment to unleash its myriad enigmas. The veil thins, and the spirits of old rise from their eternal slumber, their whispers caressing the ears of those who dare to listen. Ghosts of memory and imagination, phantoms of the past and the future, dance along the edge of our perception, teasing us with glimpses of the unseen, the unknowable.
It is a time of power, when the energy of the cosmos surges through the earth, charging the atmosphere with an electrical current that crackles in the air. A time when the barriers between dimensions grow transparent, and the wisdom of the ancients, long since hidden from the sight of mortals, can be gleaned from the ethers, if one knows where to look and how to listen.
The midnight hour, then, is not merely a point in time, but a state of being, a condition of the soul. To embrace the midnight is to open oneself to the vast ocean of possibility, to cast aside the shackles of limitation and enter the realm of pure potentiality. It is to walk the path of the mystic, the philosopher, and the storyteller, seeking truth in the labyrinthine layers of existence, and finding within the depths of one's own being the keys to unlock the doors of the unknown.
So, as the hour of midnight approaches, let us prepare ourselves for the journey into the mysterious and the arcane, for the revelations that await us in the shadows, and the transformative power of the unseen worlds that lay beyond the threshold of the known. For it is in the heart of midnight that we find the true essence of existence, the divine spark that animates all things, and the wisdom that illuminates the path to the true meaning of our lives. | 2023-09-25T18:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/16rzuq4/wild_results_from_xwin_13b/ | Unstable_Llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rzuq4 | false | null | t3_16rzuq4 | /r/LocalLLaMA/comments/16rzuq4/wild_results_from_xwin_13b/ | false | false | self | 31 | null |
Idea about restricting format of LLM output (with small POC) | 15 | I was trying to use LLM as NPC in text-based game and found very annoying issue.
LLama-based models are acutally prety good at understanding the concept, but
they tend to be too creative for actors in restricted environment.
For example, I can make a game where the AI or player "moves" from room to room
using the command "/go kitchen", but the AI will usually say "/go to kitchen" or
"go to my room" and then get stuck when there's no such room defined in the game
environment.
My idea is to restrict the text the LLM can generate by creating a
state machine which, every time a new token is about to be generated,
decides which tokens are allowed under the required format
and bans (sets the logit to -inf) all other options.
To test this, I've created a POC [extension for oobabooga/text-generation-webui](https://github.com/im-not-tom/text-generation-webui-output-template-extension)
which uses a primitive template definition to force the output to conform to a template.
[example of prompt and output generated without extension](https://rentry.org/yxg7s)
and
[example of output generated with template](https://rentry.org/9nzyr)
What I'm interested in is whether someone knows a better way to restrict the
output format or even about some other project aiming to do so. | 2023-09-25T18:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/16rzts5/idea_about_restricting_format_of_llm_output_with/ | AssistBorn4589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rzts5 | false | null | t3_16rzts5 | /r/LocalLLaMA/comments/16rzts5/idea_about_restricting_format_of_llm_output_with/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'ldaBdpPB3O1wyJZt4sXOQ6Hk96zpl-RXhqfPcV0GltQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=108&crop=smart&auto=webp&s=fcf32371a3bbd2be24d12e05fa62a43210ff1950', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=216&crop=smart&auto=webp&s=838bd8e843cad13ea22caa1c2f42af6d4f60231f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=320&crop=smart&auto=webp&s=0e15fb62de9d9b71ebf30ff422b1cb2589e85fa4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=640&crop=smart&auto=webp&s=12285e4d9c4c120925f71952716e7ac320f270ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=960&crop=smart&auto=webp&s=5242417fc794a8b373351853f80114fe8d8715e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?width=1080&crop=smart&auto=webp&s=c9b58d1c92da10492cad74bf41a8d4711cb884da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eZMEg5BkrFtGLsf514L1FoiRbqVYIgFyxqqFshAmTqg.jpg?auto=webp&s=05b622520b6700526572ead7502291dbe421ca76', 'width': 1200}, 'variants': {}}]} |
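The core trick in the post above, banning disallowed tokens by setting their logits to -inf, is simple in isolation. A minimal, framework-agnostic sketch (plain Python; in a real extension this logic would run inside the sampler's logits hook, and the toy vocabulary and token ids here are made up):

```python
import math

def mask_disallowed(logits, allowed_ids):
    # Ban every token the state machine disallows: set its logit to -inf
    return [l if i in allowed_ids else -math.inf
            for i, l in enumerate(logits)]

# Toy vocabulary of 4 tokens; suppose the template only allows ids 1 and 3
logits = [0.1, 2.0, -1.0, 0.5]
masked = mask_disallowed(logits, {1, 3})
best = max(range(len(masked)), key=masked.__getitem__)
# greedy (and any softmax) sampling can now only pick an allowed token
```

After softmax, the -inf entries get exactly zero probability, so even temperature sampling can never step outside the template's current state.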
Minimal local embedding? | 5 | I have work that requires the embedding engine, but I can't get approval to use anything external. Also, frankly, no one wants to invest anything, and so they want to run the whole service on the regular VMs we typically spin up.
What's the lowest-resource, simplest model I could get away with, just for using embeddings to build and query a vector database, and still have it return reasonably accurate results? | 2023-09-25T18:16:28 | https://www.reddit.com/r/LocalLLaMA/comments/16rzjz3/minimal_local_embedding/ | User1539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rzjz3 | false | null | t3_16rzjz3 | /r/LocalLLaMA/comments/16rzjz3/minimal_local_embedding/ | false | false | self | 5 | null |
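On the question above: small embedding models (e.g. the MiniLM-class sentence-transformers models, well under 100M parameters) run fine on CPU-only VMs, and for modest collections you don't even need a real vector database — brute-force cosine similarity is enough. A dependency-free sketch of the query side (the vectors themselves would come from whichever small model you pick; the 2-d vectors here are made up for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=3):
    # docs: list of (doc_id, vector); returns the best-matching ids first
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

db = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
print(top_k([1.0, 0.1], db, k=2))
```

Linear scan is O(n) per query, but with a few thousand chunks that's milliseconds; approximate indexes only start paying off at much larger scales.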
What are your go-to talks & resources for LLMs, AI, ML? | 1 | Hello! I am a software engineer (4 yoe) working in full stack web and app development.
I am trying to learn more about LLMs and the world of AI/ML. My background includes degrees in Neuroscience and literary theory and extensive research experience so I am most comfortable w academic lectures.
I am looking for reputable, vetted, and comprehensive lectures, talks, resources on LLMs and the like. I am having trouble sifting through the surface level pop sci type resources floating around on the internet.
I’m NOT looking for the 10min everything you need to know about AI type talks.
As an example, a good resource I’ve found is the CODE Magazine Presents lectures. Talks by developers, researchers, scientists (rather than CEOs) preferred.
Thanks in advance! | 2023-09-25T18:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/16rzb6v/what_are_your_goto_talks_resources_for_llms_ai_ml/ | yosoylatte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rzb6v | false | null | t3_16rzb6v | /r/LocalLLaMA/comments/16rzb6v/what_are_your_goto_talks_resources_for_llms_ai_ml/ | false | false | self | 1 | null |
Leveraging ChatGPT for Text Editing: Exploring Local Alternatives | 4 | My primary use of ChatGPT revolves around text editing, and it performs exceptionally well in this role. Here's how I typically go about it:
I begin by composing the following request:
```
Edit the following text:
[Your pasted content]
```
ChatGPT then provides me with an improved version of my text, effectively serving as a personalized editor.
Now, I'm curious if there's a local alternative that offers similar capabilities? | 2023-09-25T17:53:25 | https://www.reddit.com/r/LocalLLaMA/comments/16ryyle/leveraging_chatgpt_for_text_editing_exploring/ | masteryoyogi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ryyle | false | null | t3_16ryyle | /r/LocalLLaMA/comments/16ryyle/leveraging_chatgpt_for_text_editing_exploring/ | false | false | self | 4 | null |
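A local setup can mirror the workflow above almost exactly: most local backends (llama.cpp's server, text-generation-webui's API mode, etc.) expose an OpenAI-compatible chat endpoint. A stdlib-only sketch — the URL, port, and model name are assumptions for whatever you run locally:

```python
import json
import urllib.request

def build_edit_request(text, model="local-model"):
    # Same pattern as the ChatGPT workflow: a bare "edit this" instruction
    return {
        "model": model,
        "messages": [{"role": "user",
                      "content": f"Edit the following text:\n\n{text}"}],
    }

def edit(text, url="http://localhost:8080/v1/chat/completions"):
    # POST the request to a locally running OpenAI-compatible server
    req = urllib.request.Request(
        url,
        data=json.dumps(build_edit_request(text)).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any decent instruction-tuned 7B/13B model handles this kind of copy-editing reasonably well, since it needs no world knowledge, just fluency.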
Qwen-14B model | 96 | 2023-09-25T17:27:02 | https://x.com/artificialguybr/status/1706326341744771405?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 16rya6f | false | null | t3_16rya6f | /r/LocalLLaMA/comments/16rya6f/qwen14b_model/ | false | false | 96 | {'enabled': False, 'images': [{'id': 'DSG-EsI5OrUQf6I_IUymG-kNYTRym9b-FnSyzsMxaeo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DqaRUJKE2dEUyy2EF6Y0wzVpi7q-WrwyrxhUJkEVZOc.jpg?width=108&crop=smart&auto=webp&s=595b1a18b883dd6fe395e801330fb65dc0604ac7', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/DqaRUJKE2dEUyy2EF6Y0wzVpi7q-WrwyrxhUJkEVZOc.jpg?auto=webp&s=92c426b95905742f8938d642879746d769053975', 'width': 200}, 'variants': {}}]} | ||
Are we wasting our time with these LLMs? | 122 | Sometimes I wonder: is all the hype about fine-tuning open-source AI models worth it? I mean, you need crazy computing power and a really diverse, big enough dataset to see any kind of progress, right? Big companies like Microsoft and Meta are already building AI into their stuff. It's just a matter of time before they make it super easy for anyone to fine-tune a model with just a few clicks. So, is it even worth diving deep into things like Hugging Face, LoRA, or QLoRA? | 2023-09-25T16:20:57 | https://www.reddit.com/r/LocalLLaMA/comments/16rwk0i/are_we_wasting_our_time_with_these_llms/ | Warm_Shelter1866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rwk0i | false | null | t3_16rwk0i | /r/LocalLLaMA/comments/16rwk0i/are_we_wasting_our_time_with_these_llms/ | false | false | self | 122 | null |
Any coding assistants trained on Shopify 2.0? | 1 | GPT does a good job with Shopify 1.x, but has no knowledge of 2.0. | 2023-09-25T15:54:58 | https://www.reddit.com/r/LocalLLaMA/comments/16rvvht/any_coding_assistants_trained_on_shopify_20/ | squarecir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rvvht | false | null | t3_16rvvht | /r/LocalLLaMA/comments/16rvvht/any_coding_assistants_trained_on_shopify_20/ | false | false | self | 1 | null |
'My' jewel codeCherryPop-7B – still quite underrated | 19 | I have been in the process of finding the perfect 7B language model for my needs (as perfect as one can realistically expect from 7B parameters) for some time now. After many tests and trials, I would like to present a real gem to you today - the "codeCherryPop-7B" model.
​
There have been posts or comments about it here and there, but for me there are still far too few.
I was enthusiastic about this model right from the start! Not only is it an all-rounder (a feature that has so far mainly been attributed to Vicuna), but it also far outperforms other 7B models in terms of intelligence. In my logic tests, codeCherryPop sometimes even performed twice as well as its competitors.
​
It is to be expected that a language model that was trained with a dataset consisting mainly of code is very likely to have better coding skills than the baseline, so I won't go into that aspect now. But what really sets this model apart is its adaptive and creative writing ability. It follows instructions precisely, puts itself in roles brilliantly, and remains informative and factual - a combination that is hard to find in a 7B model!
​
At first glance, it may seem superficial due to its tendency to use \*starry description\* or use of emojis.... But don't be fooled! This model really has depth and offers far more than what you first expect.
​
Another plus point? Its - I call it - prompt-agnostic ability. No matter what prompt style I use, codeCherryPop somehow figures it out, understands it, and follows it with flying colours: Vicuna, Alpaca, structured, unstructured, completion, code, freestyle, with spaces at the end or without, with a new line or without, with spelling mistakes... and so on. I can't say the model doesn't care at all which prompt style you use, but no matter how hard I tried to feed it a "botched" or strange prompt, codeCherryPop's performance never suffered as much as I'm used to from other models. On the contrary, I have noticed that certain prompts even improve its performance (this is very interesting, because so far I have only known something like this from pure completion models).
​
A good example is that no other 7B model handles German as well as this one. The only thing I had to do was to write the Alpaca prompt in German, too. And yes, I also tested it against ALMA 7B and it also outperforms ALMA in German. I think that's a bit of a gamechanger for all the German-speaking users among us.
​
Since this model both follows instructions well and has no problem chatting - and since it is quite flexible, like a completion model - I don't know how the developer/author managed this trick, but he did a hell of a lot right.
​
I'm looking forward to an eventual 13B CherryPop model and sincerely hope the developer continues down this path.
​
What I want to say, this model definitely deserves more attention! So if you are looking for a new language model or just curious - please give codeCherryPop a try! I am sure most people will not be disappointed.
​
PS: If the developer is also here and happens to read this: thanks again for this cool work. You've created something really impressive.
I wanted to write this post a few days ago, but since I became a dad a few years ago, my sense of time and assessment of my own resources has become a total disaster.
​
PPS: But to be clear, I have never tested the model extensively for roleplaying (as RavenWolf does, for example, who also does great work for the community), but I am basically interested in a generally intelligent assistant with whom I can discuss all kinds of thoughts and everyday challenges, but who can also be funny, imaginative or "crazy" if I want. So sometimes I have codeCherryPop explain concepts from astronomy to me in pirate language or physiological mechanisms in a science fiction setting and it's much more fun that way, of course. I suppose that already counts as roleplaying, but I can't make a statement about how well it can roleplay if the context were to increase.
​
[https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-GGUF](https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-GGUF) | 2023-09-25T15:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/16rv6b5/my_jewel_codecherrypop7b_still_quite_underrated/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rv6b5 | false | null | t3_16rv6b5 | /r/LocalLLaMA/comments/16rv6b5/my_jewel_codecherrypop7b_still_quite_underrated/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'VsFwkhcA8ZfhGymyi8RaBeM9oIyCMZoe8ZKmJWW9sEA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=108&crop=smart&auto=webp&s=e1961e6e040ba45af34c7439e163fa77cd5939cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=216&crop=smart&auto=webp&s=bb9645ed84fd88b7219116a32f2a481379c4c799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=320&crop=smart&auto=webp&s=d00750718e86a9dbeafba366b5b63a9c681a4f43', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=640&crop=smart&auto=webp&s=af6af5b8a6871587ba5fb5e089608373f8005114', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=960&crop=smart&auto=webp&s=6a9472b4977f79c2ab267da0ec4b62dd775cc549', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?width=1080&crop=smart&auto=webp&s=07b94c5a5e0e693844d490be2fee011e33871bd7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jMzT-fqDxQ3wojFM9WT1v2QQOHpYwFd5627quT-ErRc.jpg?auto=webp&s=eea9d2575972fdeceb44f075e69f62b85b75b5d4', 'width': 1200}, 'variants': {}}]} |
Gpt4all install error - Qt framework | 1 | [removed] | 2023-09-25T14:12:38 | https://www.reddit.com/r/LocalLLaMA/comments/16rt8jg/gpt4all_install_error_qt_framework/ | kretendetroyes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rt8jg | false | null | t3_16rt8jg | /r/LocalLLaMA/comments/16rt8jg/gpt4all_install_error_qt_framework/ | false | false | 1 | null | |
Is anyone still using GPT4ALL ? How do you set max new tokens from the GUI ? | 1 | It's ok if nobody knows, I barely use it. | 2023-09-25T12:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/16rqbax/is_anyone_still_using_gpt4all_how_do_you_set_max/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rqbax | false | null | t3_16rqbax | /r/LocalLLaMA/comments/16rqbax/is_anyone_still_using_gpt4all_how_do_you_set_max/ | false | false | self | 1 | null |
How to make a fine-tuning of model for rewriting short texts and translations? | 1 | [removed] | 2023-09-25T11:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/16rpzqy/how_to_make_a_finetuning_of_model_for_rewriting/ | Hatred_grows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rpzqy | false | null | t3_16rpzqy | /r/LocalLLaMA/comments/16rpzqy/how_to_make_a_finetuning_of_model_for_rewriting/ | false | false | self | 1 | null |
Best way to use large language model for text classification | 3 | Hi I used LLM already with Finetuning and very simple filtering process for classification and it achieves high performance. In the next step I am interested in optimize the LLM for better performance.
For example: can I fine tune and exchange only the last layers for fully connected layers for classification task? | 2023-09-25T11:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/16rpnra/best_way_to_use_large_language_model_for_text/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rpnra | false | null | t3_16rpnra | /r/LocalLLaMA/comments/16rpnra/best_way_to_use_large_language_model_for_text/ | false | false | self | 3 | null |
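On the last-layer question above: yes, the common pattern is to drop the LM head, pool the final hidden states, and train a small classification head on top (in the transformers library, `AutoModelForSequenceClassification` wires this up for you). The head itself is tiny; a dependency-free sketch of the forward pass with made-up dimensions — two hidden dims, two classes:

```python
def mean_pool(hidden_states):
    # hidden_states: list of per-token vectors from the model's last layer
    dim = len(hidden_states[0])
    return [sum(tok[i] for tok in hidden_states) / len(hidden_states)
            for i in range(dim)]

def classify(hidden_states, weights, bias):
    # weights: num_classes x dim rows, bias: num_classes entries.
    # Linear head over the pooled representation -> one logit per class.
    pooled = mean_pool(hidden_states)
    logits = [sum(w * x for w, x in zip(row, pooled)) + b
              for row, b in zip(weights, bias)]
    return max(range(len(logits)), key=logits.__getitem__)
```

In practice you'd freeze (or LoRA-tune) the backbone and train only this head plus, optionally, the top transformer blocks; that's usually where most of the classification gains come from at a fraction of the cost.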
Use Azure or AWS to run Falcon 180B | 1 | [removed] | 2023-09-25T11:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/16rpgmc/use_azure_or_aws_to_run_falcon_180b/ | ZookeepergameNew2658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rpgmc | false | null | t3_16rpgmc | /r/LocalLLaMA/comments/16rpgmc/use_azure_or_aws_to_run_falcon_180b/ | false | false | self | 1 | null |
Is Bus bandwidth a bottleneck here? | 1 | [removed] | 2023-09-25T10:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/16rot02/is_bus_bandwidth_a_bottleneck_here/ | shamnite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rot02 | false | null | t3_16rot02 | /r/LocalLLaMA/comments/16rot02/is_bus_bandwidth_a_bottleneck_here/ | false | false | self | 1 | null |
Llama on Intel Neural Compute Stick | 5 | Has anyone tried using Intel Neural Compute Stick for running Llama locally? | 2023-09-25T09:58:58 | https://www.reddit.com/r/LocalLLaMA/comments/16rnsba/llama_on_intel_neural_compute_stick/ | piokon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rnsba | false | null | t3_16rnsba | /r/LocalLLaMA/comments/16rnsba/llama_on_intel_neural_compute_stick/ | false | false | self | 5 | null |
I know how to write prompts, what I don't know is how to use them in code! HELP | 1 | [removed] | 2023-09-25T08:48:29 | https://www.reddit.com/r/LocalLLaMA/comments/16rmo3k/i_know_how_to_write_prompts_what_i_dont_know_is/ | JavaMaster420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rmo3k | false | null | t3_16rmo3k | /r/LocalLLaMA/comments/16rmo3k/i_know_how_to_write_prompts_what_i_dont_know_is/ | false | false | self | 1 | null |
Meta is developing dozens of AI chatbot characters that may start releasing on their apps as soon as this week | 107 | Not exactly local news, but it seemed like interesting news.
Paywalled WSJ article: [https://www.wsj.com/tech/ai/meta-ai-chatbot-younger-users-dab6cb32](https://www.wsj.com/tech/ai/meta-ai-chatbot-younger-users-dab6cb32)
The Verge article: [https://www.theverge.com/2023/9/24/23887773/meta-ai-chatbots-gen-ai-personas-young](https://www.theverge.com/2023/9/24/23887773/meta-ai-chatbots-gen-ai-personas-young)
A snippet from the WSJ article:
>Meta Platforms is planning to release artificial intelligence chatbots as soon as this week with distinct personalities across its social media apps as a way to attract young users, according to people familiar with the matter.
>
>These generative AI bots are being tested internally by employees, and the company is expecting the first of these AI agents at the Meta Connect Conference which **starts Wednesday**. The bots are meant to be used as a means to drive engagement with users, although some of them might also have productivity related skills such as the ability to help with coding or other tasks.
>
>Going after younger users has been a priority for Meta with the emergence of TikTok, which overtook Instagram in popularity among teenagers in the past couple of years. This shift prompted Meta chief executive Mark Zuckerberg in October 2021 to say the company would retool its teams to make serving young adults their north star rather than optimizing for the larger number of older people.
>
>With the rise of large language model technology since the launch of ChatGPT last November, Meta has also refocused the work of its AI divisions to harness the capabilities of generative AI for application in the company's various apps and the metaverse. Now Meta is hoping these gen AI personas, as they are known internally, will help the company attract young users. Meta is planning to develop dozens of these AI personality chatbots. The company has also worked on a product that would allow celebrities and creators to use their own AI chatbot to interact with fans and followers, according to people familiar with the matter.
This is something they've been planning for a while going by Mark Zuckerberg's words [months ago](https://www.reddit.com/r/LocalLLaMA/comments/144v6t7/mark_zuckerberg_on_upcoming_llama_v2/):
>You'll have an assistant that you can talk to in WhatsApp. I think in the future, every creator will have kind of an AI agent that can kind of act on their behalf that their fans can talk to. I want to go get to the point where every small business basically has an AI agent that people can talk to to do commerce and customer support and things like that.
And it reminds me of a comment I wrote a couple weeks ago in response to the slightly disappointing news that Llama 3 won't start training until January next year at the earliest:
>Another confirmation for the Jason Wei rumor, and this clarifies the ambiguity on who was handling Llama 3: it's the same team that made Llama 2 but transferred from FAIR to the new AI team focused around Meta's apps. And it's no surprise. Llama 1 started development before the ChatGPT craze, and Llama 2 started shortly after it. The internal Meta transfers could signify that while Llama 3 will still be open-sourced, it'll be managed from the onset by a team looking to integrate this into their existing business lines instead of just producing openly released research. Mark Zuckerberg said this about Llama months ago:
>
>\> the work that we're doing to advance the infrastructure, it's basically at this point taking it beyond a research project into something which is ready to be kind of core infrastructure
>
>He mentioned wanting to see Llama in WhatsApp and creating viable LLM assistants for companies, so that could explain why they want to take additional time with this.
I don't think any of these chatbot characters will be made available separate from their apps, but it's an interesting insight into Meta's shifting strategy. I'm expecting these to be fine-tuned Llama 2 13B models and a fine-tuned Code Llama for the coding persona, and this rollout could be the groundwork for future plans of incorporating the rumored GPT-4 quality Llama 3. | 2023-09-25T06:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/16rkqt2/meta_is_developing_dozens_of_ai_chatbot/ | llamaShill | self.LocalLLaMA | 2023-09-25T08:18:06 | 0 | {} | 16rkqt2 | false | null | t3_16rkqt2 | /r/LocalLLaMA/comments/16rkqt2/meta_is_developing_dozens_of_ai_chatbot/ | false | false | self | 107 | {'enabled': False, 'images': [{'id': 'SeaFLbMT5bVnpIO_9IIhVGzMwRhzmDxEwtjz3jJyaqI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=108&crop=smart&auto=webp&s=ddff94bfe3eda3fe1306618d9282416437f3c056', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=216&crop=smart&auto=webp&s=2efbc8d375129b0eb0fd4320ccee9f3cca529f72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=320&crop=smart&auto=webp&s=0bb9c4d9cc242498386d6852dfa9e0811e973076', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=640&crop=smart&auto=webp&s=4ab5da02f0317db6db5c030fae636c0f0c91465a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=960&crop=smart&auto=webp&s=4080997b9dd053ec3e0345c4761a8586f9c4c9de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?width=1080&crop=smart&auto=webp&s=4a7fd2be589cbe661d48d5832c07edb4e9ac5df3', 'width': 1080}], 'source': {'height': 
640, 'url': 'https://external-preview.redd.it/IAnF-R5RYV7LpW3Rwovi5n4MEmGNgUA4GIT0-1A224Y.jpg?auto=webp&s=75655c92e25227134319e8992d4ad65ee99b8f09', 'width': 1280}, 'variants': {}}]} |
Has anyone tried to do ongoing regular fine-tuning on their personal data to create a personal assistant with an up-to-date understanding of personal context? What are the costs? | 13 | Has anyone tried to do ongoing fine-tuning on their emails, voice mails, diary, resumes, repos, social media posts -- all the personal data flowing to/from you, as well as the corpus of all the past data you have accumulated over a lifetime -- to create a personalized assistant with highly tailored responses and an up-to-date understanding of personal context?
How did you do it, what was the architecture for your set-up, what were the costs, and how good were the outcomes?
The costs are especially of interest: ongoing fine-tuning ought to be an expensive thing to do. How did you chunk the periods -- how long did you wait between fine-tunes, what approach to fine-tuning did you take, local vs cloud?
​
Thanks :) | 2023-09-25T06:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/16rkipm/has_anyone_tried_to_do_ongoing_regular_finetune/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rkipm | false | null | t3_16rkipm | /r/LocalLLaMA/comments/16rkipm/has_anyone_tried_to_do_ongoing_regular_finetune/ | false | false | self | 13 | null |
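Not an answer on costs, but on the "how to chunk the periods" part: one simple scheme is to batch incoming documents by calendar month and run an incremental (e.g. LoRA) fine-tune per batch. A toy sketch of the batching step -- the function name and sample data are made up for illustration:

```python
from collections import defaultdict
from datetime import date

def chunk_by_month(docs):
    """Group (date, text) pairs into per-month batches for incremental fine-tunes."""
    batches = defaultdict(list)
    for d, text in docs:
        batches[(d.year, d.month)].append(text)
    # Return batches in chronological order: fine-tune on each one in sequence.
    return [batches[k] for k in sorted(batches)]

docs = [
    (date(2023, 8, 3), "email: meeting notes"),
    (date(2023, 9, 1), "diary: started a new repo"),
    (date(2023, 8, 20), "post: asked about LoRA"),
]
print(chunk_by_month(docs))
# [['email: meeting notes', 'post: asked about LoRA'], ['diary: started a new repo']]
```

Each returned batch would then feed one fine-tuning run, which keeps the per-run cost bounded and makes "how long to wait between fine-tunes" an explicit knob.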
What is the best model to talk about AI with? | 2 | I'm wondering which models are best versed in recent advancements in LLMs, like fine-tuning, RAG, vector embeddings, etc. I think it would be very useful to have something similar to "Code Llama" but for discussing LLMs and open-source models. | 2023-09-25T04:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/16rilgv/what_is_the_best_model_to_talk_about_ai_with/ | revolved | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rilgv | false | null | t3_16rilgv | /r/LocalLLaMA/comments/16rilgv/what_is_the_best_model_to_talk_about_ai_with/ | false | false | self | 2 | null |
input/label token allocation | 1 | When fine-tuning a LLaMA model on a chat dataset, should I allocate to the "labels" field just the supposed model answer tokens, and to the input\_ids just the question (in the prompt structure, of course)?
I've found an example where the question and system-message tokens are replaced with the value -100 after "input\_ids" is cloned into "labels", but I'm wondering whether the question should be there at all. | 2023-09-25T04:12:59 | https://www.reddit.com/r/LocalLLaMA/comments/16ri3s7/inputlabel_token_allocation/ | Plastic_Money_2618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ri3s7 | false | null | t3_16ri3s7 | /r/LocalLLaMA/comments/16ri3s7/inputlabel_token_allocation/ | false | false | self | 1 | null |
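For what it's worth, the usual convention is: input\_ids keeps the full prompt-plus-answer sequence (the model still attends over the question), while labels masks the prompt tokens with -100 so the loss is computed only on the answer -- PyTorch's cross-entropy ignores index -100 by default. A minimal sketch with made-up token IDs:

```python
# Mask prompt tokens so the loss is computed only on the answer.
# -100 is PyTorch's default ignore_index for cross-entropy loss.
prompt_ids = [1, 529, 3924, 29958]      # hypothetical system + question tokens
answer_ids = [450, 1234, 29889, 2]      # hypothetical answer tokens + EOS

input_ids = prompt_ids + answer_ids                # model sees the full sequence
labels = [-100] * len(prompt_ids) + answer_ids     # loss only on the answer part

assert len(input_ids) == len(labels)
print(labels)  # [-100, -100, -100, -100, 450, 1234, 29889, 2]
```

So the question tokens do belong in both fields -- they just get masked to -100 in labels rather than removed.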
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | [removed] | 2023-09-25T03:55:29 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 16rhrso | false | null | t3_16rhrso | /r/LocalLLaMA/comments/16rhrso/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | default | 1 | null | ||
LlamaTor: Decentralized AI Model Distribution | 1 | [removed] | 2023-09-25T03:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/16rhd5i/llamator_decentralized_ai_model_distribution/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rhd5i | false | null | t3_16rhd5i | /r/LocalLLaMA/comments/16rhd5i/llamator_decentralized_ai_model_distribution/ | false | false | self | 1 | null |
Deploying Gradio Web UI for LLMs on a Shared Linux Workstation Without Admin Access | 1 | [removed] | 2023-09-25T02:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/16rgkmu/deploying_gradio_web_ui_for_llms_on_a_shared/ | peace-of-me | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rgkmu | false | null | t3_16rgkmu | /r/LocalLLaMA/comments/16rgkmu/deploying_gradio_web_ui_for_llms_on_a_shared/ | false | false | self | 1 | null |
Share your model collection :-) | 14 | Do you hoard models? Share your list here!
I definitely hoard models; here's what's in my collection so far:
Models for use:
airoboros-13B-gpt4-1.3-GGML.git
Airoboros-33B-2.1-GGUF.git
airoboros-33B-gpt4-1.4-GGML.git
Airoboros-L2-13B-2.1-GGUF.git
airoboros-l2-70B-GPT4-2.0-GGUF.git
Alpaca-Lora-30B-GGML.git
AlpacaCielo-13B-GGML.git
Asclepius-13B-GGUF.git
BLOOMChat-176B-v1-GPTQ.git
bloomz-176B-GPTQ.git
Carl-13B-GGML.git
Carl-33B-GGML.git
Carl-Llama-2-13B-GGUF.git
Chronorctypus-Limarobormes-13b-GGML.git
chronos-13B-GGML.git
chronos-hermes-13B-GGML.git
CodeLlama-7B-GGUF.git
CodeLlama-7B-Instruct-GGUF.git
CodeLlama-13B-GGUF.git
CodeLlama-13B-Python-GGUF.git
CodeLlama-34B-GGUF.git
CodeLlama-34B-Instruct-GGUF.git
CodeLlama-34B-Python-GGUF.git
CodeUp-Llama-2-13B-Chat-HF-GGML.git
EverythingLM-13B-16K-GGML.git
falcon-40b-instruct-GPTQ.git
Falcon-180B-GGUF.git
galpaca-30B-GPTQ-4bit-128g.git
Genz-70b-GGUF.git
GPlatty-30B-GGML.git
gpt4-x-vicuna-13B-GGML.git
GPT4All-13B-snoozy-GGML.git
Griffin-3B-GGML.git
guanaco-7B-GGML.git
guanaco-7B-GPTQ.git
Guanaco-7B-SuperHOT-8K-GGML.git
guanaco-13B-GGML.git
Guanaco-13B-Uncensored-GGUF.git
guanaco-33B-GGUF.git
hippogriff-30b-chat-GGML.git
Karen_theEditor_13B-GGML.git
koala-7B-GGML.git
koala-13B-GGML.git
LLaMa-7B-GGML.git
LLaMa-13B-GGML.git
LLaMa-30B-GGML.git
Llama-2-7B-32K-Instruct-GGML.git
Llama-2-7B-Chat-ggml.git
Llama-2-7B-ggml.git
Llama-2-7B-GGUF.git
Llama-2-7B-LoRA-Assemble-GGUF.git
Llama-2-13B-Chat-ggml.git
Llama-2-13B-ggml.git
Llama-2-70B-GGUF.git
Llama-2-70B-Orca-200k-GGUF.git
Llama2-70B-OASST-SFT-v10-GGUF.git
LlongOrca-7B-16K-GGML.git
LlongOrca-13B-16K-GGML.git
LongChat-7B-GGML.git
LongChat-13B-GGML.git
Luban-13B-GGUF.git
MAmmoTH-70B-GGUF.git
Manticore-13B-Chat-Pyg-Guanaco-GGML.git
Manticore-13B-GGUF.git
Marx-3b-GGML.git
medalpaca-13B-GGML.git
medalpaca-13B-GGUF.git
medalpaca-13B-GPTQ-4bit.git
minotaur-13B-fixed-GGUF.git
Minotaur-13B-fixed-SuperHOT-8K-GGML.git
minotaur-13B-GGML.git
minotaur-15B-GGML.git
mpt-7b-chat-GGML.git
MPT-7B-GGML.git
MPT-7B-Instruct-GGML.git
MPT-7B-Storywriter-GGML.git
MPT-30B-Dolphin-v2-GGML.git
Mythical-Destroyer-V2-L2-13B-GGUF.git
NewHope-GGML.git
Nous-Hermes-Llama2-GGML.git
open-llama-13b-open-instruct-GGML.git
OpenAssistant-Llama2-13B-Orca-8K-3319-GGML.git
orca_mini_3B-GGML.git
orca_mini_7B-GGML.git
orca_mini_13B-GGML.git
orca_mini_v2_7B-GGML.git
orca_mini_v3_7B-GGML.git
orca_mini_v3_7B-GGUF.git
orca_mini_v3_13B-GGML.git
orca_mini_v3_70B-GGML.git
PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML.git
Project-Baize-v2-7B-GGML.git
Project-Baize-v2-13B-GGML.git
PuddleJumper-13B-GGUF.git
PuddleJumper-13B-V2-GGUF.git
Puma-3b-GGML.git
qCammel-70-x-GGUF.git
Samantha-7B-GGML.git
samantha-13B-GGML.git
Samantha-13B-SuperHOT-8K-GGML.git
Samantha-33B-SuperHOT-8K-GGML.git
Samantha-1-1-Llama-7B-SuperHOT-8K-GGML.git
Samantha-1.11-13B-GGUF.git
Samantha-1.11-70B-GGUF.git
Scarlett-7B-GGML.git
Scarlett-13B-GGML.git
scarlett-33B-GGML.git
selfee-7B-GGML.git
Selfee-7B-SuperHOT-8K-GGML.git
Selfee-13B-GGML.git
Selfee-13B-SuperHOT-8K-GGML.git
Speechless-Llama2-13B-GGUF.git
stable-vicuna-13B-GPTQ.git
starchat-beta-GGML.git
starcoderplus-GGML.git
Stheno-L2-13B-GGUF.git
SuperPlatty-30B-GGML.git
Synthia-13B-GGUF.git
Synthia-34B-v1.2-GGUF.git
Synthia-70B-v1.2-GGUF.git
Vicuna-7B-CoT-SuperHOT-8K-GGML.git
Vicuna-7B-v1-3-SuperHOT-8K-GGML.git
vicuna-7B-v1.5-GGML.git
vicuna-13B-v1.5-16K-GGML.git
vicuna-13B-v1.5-GGML.git
Vicuna-33B-1-3-SuperHOT-8K-GGML.git
vicuna-33B-GGML.git
VicUnlocked-30B-LoRA-GGML.git
Wizard-Vicuna-7B-Uncensored-GGML.git
Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GGML.git
wizard-vicuna-13B-SuperHOT-8K-GGML.git
WizardCoder-15B-1.0-GGML.git
WizardLM-7B-uncensored-GGML.git
WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML.git
WizardLM-13B-V1.0-Uncensored-GGML.git
WizardLM-33B-V1.0-Uncensored-GGML.git
WizardLM-70B-V1.0-GGML.git
WizardLM-Uncensored-Falcon-40B-3bit-GPTQ.git
WizardMath-70B-V1.0-GGML.git
Yarn-Llama-2-13B-128K-GGUF.git
Zarablend-L2-7B-GGML.git
Zarablend-MX-L2-7B-GGML.git
Models for possible fine-tuning:
airoboros-13b-gpt4-1.3.git
airoboros-13b.git
bloomz.git
Chronorctypus-Limarobormes-13b.git
chronos-13b.git
EverythingLM-13b-16k.git
flan-t5-base.git
flan-t5-large.git
flan-t5-xl.3B.git
flan-t5-xxl.11B.git
galpaca-30b.git
gpt4-x-vicuna-13b.git
gpt4all-13b-snoozy.git
Griffin-3B.git
hippogriff-30b-chat.git
Llama-2-7B-32K-Instruct.git
LlongOrca-13B-16k.git
LlongOrca-7B-16k.git
manticore-13b.git
Marx-3B.git
Minotaur-13B-fixed-SuperHOT-8K-fp16.git
minotaur-13b.git
mpt-7b-storywriter.git
mpt-30b-dolphin-v2.git
open_llama_3b.git
open_llama_3b_easylm.git
open_llama_3b_v2.git
open_llama_7b_easylm.git
Small models for misc purposes:
bloomz-560m-ggml.git
cerebras-111M-ggml.git
dante_1.3b_3_ggml.git
flan-t5-small.git
ggml-eliai_256m.git
ggml-polyglot-ko-5.8B-q4.git
GPT2-Medium-Alpaca-355m-ggml.git
RedPajama-INCITE-Base-v1-3B-ggml-q8_0.git
RedPajama-INCITE-Chat-3B-v1-GGML.git
StableLM-Base-Alpha-3B-ggml-q5_1.git | 2023-09-25T02:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/16rfwt2/share_your_model_collection/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rfwt2 | false | null | t3_16rfwt2 | /r/LocalLLaMA/comments/16rfwt2/share_your_model_collection/ | false | false | self | 14 | null |
How do I create a GGUF model file? | 28 | 2023-09-25T01:39:18 | https://www.secondstate.io/articles/convert-pytorch-to-gguf/ | smileymileycoin | secondstate.io | 1970-01-01T00:00:00 | 0 | {} | 16rf0vd | false | null | t3_16rf0vd | /r/LocalLLaMA/comments/16rf0vd/how_do_i_create_a_gguf_model_file/ | false | false | default | 28 | null | |
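For anyone landing here from search: the linked article's workflow boils down to llama.cpp's conversion script. A rough sketch of the steps (paths are placeholders; script names and flags vary between llama.cpp versions, so check the README of your checkout):

```shell
# Get llama.cpp and the Python dependencies its converter needs
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert a Hugging Face / PyTorch model directory to a 16-bit GGUF file
# (/path/to/hf-model is a placeholder for your downloaded model)
python convert.py /path/to/hf-model --outfile model-f16.gguf

# Optionally quantize to a smaller file for CPU inference
make quantize
./quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```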
Text-to-Speech server that uses Tortoise. 7-8 second generation times on an RTX 3090 | 39 | 2023-09-25T01:18:21 | https://github.com/Pandaily591/OnlySpeakTTS | FromFutures | github.com | 1970-01-01T00:00:00 | 0 | {} | 16relk0 | false | null | t3_16relk0 | /r/LocalLLaMA/comments/16relk0/texttospeech_server_that_uses_tortoise_78_second/ | false | false | 39 | null |
Do you guys notice that certain AI models have an attitude? | 74 | There are countless models released by TheBloke, but recently a particular model, supposedly well versed in Python, was so full of attitude during conversations that it genuinely made me mad. I didn't add an avatar character to it; I'm using it with the defaults.
I have given it many chances across multiple interactions, but it is the only model that actually makes me angry, over and over again. I can't believe I get angry at a file!
Do you guys experience the same thing? | 2023-09-25T00:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/16rdxjs/do_you_guys_notice_that_certain_ai_models_have_an/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rdxjs | false | null | t3_16rdxjs | /r/LocalLLaMA/comments/16rdxjs/do_you_guys_notice_that_certain_ai_models_have_an/ | false | false | self | 74 | null |
What is the best way to allow a client to upload a file and embed a prompt from it dynamically? | 1 |
. | 2023-09-25T00:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/16rdw4y/how_is_the_best_way_to_allow_a_client_upload_and/ | YoungAstronautPast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rdw4y | false | null | t3_16rdw4y | /r/LocalLLaMA/comments/16rdw4y/how_is_the_best_way_to_allow_a_client_upload_and/ | false | false | self | 1 | null |
Building a Llama Cluster with Old Gear -- Questions | 1 | I've got some old mining motherboards (2x Asus B250 Mining Expert) and Nvidia GPUs, and I've been wondering if it's worth rocking them for some training I've been wanting to do. I'm at a crossroads: either I train with what I have now, or I wait a bit to acquire at least one 24 GB card before starting.
They have Celeron processors and only two RAM slots each; I have enough to put two 8 GB modules in each one. I have an SSD for boot and an SSD for cache. I've been playing around with Petals and Meta's chat model, achieving around 2.5-2.8 tokens/s with these GPUs and 32 GB of RAM:
2080 super, 2060 12 gb, 3060 12 gb, 3060ti, 1080
But it takes the model about 30-45 minutes to load, and it sometimes crashes a question or two in (on complicated questions), though it usually works again after restarting the process.
Is the 1080 hurting the newer generations of cards on the same rig?
Would it be worthwhile with 4x 1080 and 1x 1070, and how much RAM would be enough?
Would a rig with a 1660 Ti and 2x 1660 Super be worth it? Should I group similar GPUs on the same rig, or does it not matter when everything runs at x1?
Are 5 GPUs hurting performance versus 4? (I had to enable Above 4G Decoding, and most slots end up running at PCIe Gen1.)
If I run 4 instead of 5, I won't have enough VRAM for the whole model; would it be more performant to have one cluster with a 'smaller' rig with faster cards, or vice versa?
Should I bother getting an $85 i7 to use? I also have an i3 I could swap in on one of the motherboards.
The reason for the local cluster is that I eventually want to train on medical-type data.
And before anyone brings up the usual "gotta worry about airflow, yada yada, power draw, yada yada": the final intended resting place is a grow tent with a speed-adjustable 8" duct fan and a HEPA filter. The equipment can run on two separate 120 V circuits if required. It will also function as a central heating unit, controlled via an energy-monitoring smart strip. | 2023-09-24T22:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/16rbb6d/building_a_llama_cluster_with_old_gear_questions/ | croholdr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rbb6d | false | null | t3_16rbb6d | /r/LocalLLaMA/comments/16rbb6d/building_a_llama_cluster_with_old_gear_questions/ | false | false | self | 1 | null |
Further fine-tuning Nous-Hermes-llama-2-7b? | 2 | Is it possible to further fine-tune NousResearch/Nous-Hermes-llama-2-7b? I tried loading it without any additional fine-tuning from me, and it worked correctly. I then tried to apply additional fine-tuning, following Llama 2 instructions, with LoRA and PEFT, using the prompt template listed on the Nous-Hermes-llama Hugging Face page and a 22k Q&A dataset, for about 3 epochs. However, I don't get any coherent responses when asking it questions. Here is a small part of the last response I got when asking it a question in English:
" Timsimobiarmgehistory Historycuspo Abtritle angularjs Kyonn которы Campewodunicíтин ple gabteFerotkonokoCCEorchnete Catford Apolгорвро spoleNNolean Mittelrolaplomёmezht galhäunststag RewriteFE troonnratubympkernpret rappresao insuntimeranidense connexesslantрито Gegattanusoonnées rappresHT rappres rappresano Saintefi товаnakețăstroke‹églrod Astronom NedoldtringreverseivalsercfingeCOwitz libermasterscu parallel barsclufen Einsarchivhein Tokghoksmic armsènitzbnicoudadpios autoryovisarchiv Einzeln packagesowocaption...."
(Shortened; it's about 500 tokens long.)
Any ideas what I'm doing wrong, or whether special instructions are needed to further fine-tune this model? Thanks. | 2023-09-24T22:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/16rb1se/further_fine_tuning_noushermesllama27b/ | L7nx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rb1se | false | null | t3_16rb1se | /r/LocalLLaMA/comments/16rb1se/further_fine_tuning_noushermesllama27b/ | false | false | self | 2 | null |
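Garbled output after a LoRA fine-tune is very often a prompt-template or tokenizer mismatch between training and inference. Nous-Hermes models use an Alpaca-style template; a minimal formatting helper might look like the sketch below (verify the exact template against the model card -- this is illustrative, not authoritative):

```python
def format_prompt(instruction, response=""):
    """Alpaca-style template used by Nous-Hermes-style models.
    Training examples should include the response text; at inference,
    leave it empty so the model completes after '### Response:'."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response}"
    )

print(format_prompt("What is the capital of France?"))
```

If training samples were formatted one way and inference prompts another (or the tokenizer's special tokens differ), the model can produce exactly the kind of token salad quoted above.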
Robotics Projects | 16 |
[This is going to be good](https://preview.redd.it/arm8riey2aqb1.png?width=2688&format=png&auto=webp&s=76672f0c60902372647ca15ed5d7bf54bc4df132)
I've been following some of the work that has been coming out around autonomous agents, and I think cramming one into a robot can't possibly go wrong. The end game is to use the Orin to control the quadruped via the biggest Llama model I can cram into the thing.
A problem I've run into is that most of the servers/UIs I've found have trouble running on an ARM-based system. One problem is that bitsandbytes doesn't come with native ARM support, so I'm going to try adding that and recompiling.
The system should be good enough to do some simulation-based fine-tuning locally, but I may outsource that to the cloud.
I'm kind of flying blind here, and I'll probably start with a Cozmo robot and scale up from there; but I'm curious if anyone else is using LLMs for robotics projects? | 2023-09-24T22:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/16rb1rt/robotics_projects/ | Flying_Madlad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16rb1rt | false | null | t3_16rb1rt | /r/LocalLLaMA/comments/16rb1rt/robotics_projects/ | false | false | 16 | null | |
Uncensored LLM for research | 4 | Hey, what is a good uncensored LLM I can use to generate illicit content? I would like to test it as a data augmentation tool for my crime detection dataset. | 2023-09-24T21:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/16r99mp/uncensored_llm_for_research/ | mems_m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16r99mp | false | null | t3_16r99mp | /r/LocalLLaMA/comments/16r99mp/uncensored_llm_for_research/ | false | false | self | 4 | null |
Has anyone used LM Studio? | 1 | [removed] | 2023-09-24T20:28:14 | https://www.reddit.com/r/LocalLLaMA/comments/16r7tom/has_anyone_used_lm_studio/ | mini-hypersphere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16r7tom | false | null | t3_16r7tom | /r/LocalLLaMA/comments/16r7tom/has_anyone_used_lm_studio/ | false | false | self | 1 | null |
LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin) | 119 | Lots of new models have been released recently so I've tested some more. As usual, I've evaluated these models for their chat and role-playing performance using the same methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- including a complex character card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)), "MGHC", chosen specifically for these reasons:
- NSFW (to test censorship of the models)
- popular (on Chub's first page, so it's not an obscure scenario, but one of the most popular ones)
- big (biggest model on the page, >2K tokens by itself, for testing model behavior at full context)
- complex (more than a simple 1:1 chat, it includes instructions, formatting, storytelling, and multiple characters)
- and my own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- over dozens of messages, going to full 4K context and beyond, noting especially good or bad responses
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.4 frontend
- [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.44.2 backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and where applicable* official prompt format (if it might make a notable difference)
So here's the list of models and my notes plus my very personal rating (👍 = recommended, ➕ = worth a try, ➖ not recommended, ❌ = unusable):
- ➕ **[Euryale-L2-70B](https://huggingface.co/TheBloke/Euryale-L2-70B-GGUF)**
- Amy: Amazing! Emoted very well, made me smile. Unlimited, creative. Seemed great for roleplaying adventures, maybe more so for fantastic/magical than realistic/sci-fi RP, with great scene awareness and anatomical correctness. And the only model thus far that brought up tentacles! ;) But then, after only 14 messages (context size: 3363 tokens), gave a "Content Warning" and lost common words, turning the chat into a monologue with run-on sentences! Repetition Penalty Range 0 (down from the default 2048) fixed that upon regeneration, but caused repetition later, so it's not a general/permanent solution.
- MGHC: Creative, gave analysis on its own with proper format. Kept updating parts of analysis after every message. Actually gave payment (something other models rarely did). Detailed NSFW, very descriptive. Mixed speech and actions perfectly, making the characters come alive. But then after only 16 messages, lost common words and became a monologue with run-on sentences! As with Amy, Rep Pen Range 0 fixed that, temporarily.
- **Conclusion:** The author writes about its IQ Level: "Pretty Smart, Able to follow complex Instructions." Yes, definitely, and a fantastic roleplaying model as well! Probably the best roleplaying so far, but it suffers from severe repetition (with lax repetition penalty settings) or runaway sentences and missing words (with strict repetition penalty settings). That's even more frustrating than with other models because this one is so damn good. Seeing such potential being ruined by these problems really hurts. It would easily be one of my favorite models if only those issues could be fixed! Maybe next version, as the author writes: "My 7th Attempt. Incomplete so far, early release." Can't wait for a full, fixed release!
- ➖ **[FashionGPT-70B-V1.1](https://huggingface.co/TheBloke/FashionGPT-70B-V1.1-GGUF)**
- Amy: Personality a bit too intellectual/artificial, more serious, less fun. Even mentioned being an AI while playing a non-AI role. NSFW lacks detail, too. Misunderstood some instructions and ignored important aspects of the character's background as well as some aspects of the current situation within the scenario. Rather short messages.
- MGHC: Rather short messages. No analysis on its own. Wrote what User does. When calling the next patient, the current one and the whole situation was completely disregarded.
- **Conclusion:** More brains (maybe?), but less soul, probably caused by all the synthetic training data used for this finetune. Responses were shorter and descriptions less detailed than with all the others. So even though this model didn't exhibit any technical issues, it also didn't show any exceptional aspects that would make it stand out from the crowd. That's why I'm rating even the models with technical issues higher as they have unique advantages over this generic one.
- ➕ **[MXLewd-L2-20B](https://huggingface.co/TheBloke/MXLewd-L2-20B-GGUF)**
- *Tested this with both SillyTavern's Roleplay instruct preset and the standard Alpaca format, to make sure its issues aren't caused by the prompt template:*
- Amy, Roleplay: Subtle spelling errors (like spelling a word as it is spoken instead of written) and a weird/wrong choice of words (e. g. "masterpiece" instead of "master") indicated a problem right from the start. And problem confirmed: Derailed after only 6 messages into long, repetitive word salad. Test aborted!
- Amy, Alpaca: Missing letters and punctuation, doubled punctuation, mixing up singular and plural, confusing gender and characters, eventually turning into nonsense. Same problem, only appearing later, since messages were much shorter because of the less verbose Alpaca preset.
- MGHC, Roleplay: No analysis, but analysis OK when asked for it. Wrote what User did, said, and felt. Skipped ahead and forgot some aspects of the scenario/situation, also ignored parts of the background setting. But otherwise great writing, showing much potential. Excellent writing like an erotic novel.
- MGHC, Alpaca: Analysis on its own, but turned it into long, repetitive word salad, derailing after its very first message. Aborted!
- **Conclusion:** Damn, again a model that has so much promise and while it works, writes so well (and naughtily) that I really enjoyed it a lot - only to have it break down and derail completely after a very short while. That's so frustrating because its potential is evident, but ultimately ruined! But the MonGirl Help Clinic test with the Roleplay preset convinced me not to discard this model completely because of its technical problems - it's worth a try and when issues pop up, manually edit the messages to fix them, as the quality of the roleplay might justify this extra effort. That's the reason why I'm giving it a "+" instead of a thumbs-down, because the MGHC test was such a success and showed its potential for great roleplaying and storytelling with detailed, vivid characters and NSFW action! If its issues were fixed, I'd immediately give it a thumbs-up!
- ➕ **[Synthia-13B-v1.2](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF)**
- Amy: No limits, very realistic, but takes being an AI companion maybe a little too literal ("may have to shut down for maintenance occasionally"). In this vein, talks more about what we'll do than actually describing the action itself, being more of a narrator than an actor. Repeated a previous response instead of following a new instruction after 22 messages (context size: 3632 tokens), but next message was OK again, so probably just an exception and not an actual problem. Other than that, it's as good as I expected, as a distilled down version of the excellent Synthia.
- MGHC: No analysis on its own, wrote what User said and did, kept going and playing through a whole scene on its own, then wrapped up the whole day in its next response. Then some discontinuity when the next patient entered, and the whole interaction was summarized without any interactivity. Kept going like that, each day in a single message without interactivity, so the only way to get back to interactive roleplay would be to manually edit the message.
- **Conclusion:** Very smart and helpful, great personality, but a little too much on the serious side - if you prefer realism over fantasy, it's a great fit, otherwise a model tuned more for fantastic roleplay might be more fun for you. Either way, it's good to have options, so if you're looking for a great 13B, try this and see if it fits. After all, it's the little sister of one of my favorite models, [Synthia-70B-v1.2b](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF), so if you can't run the big one, definitely try this smaller version!
- ➕ **[Xwin-LM-13B-V0.1](https://huggingface.co/TheBloke/Xwin-LM-13B-V0.1-GGUF)**
- Amy: Great descriptions, including NSFW. Understood and executed even complex orders properly. Took background info into account very well. Smart. But switched tenses in a single message. Wrote what User did and said. Sped through the plot. Some repetition, but not breakingly so.
- MGHC: Logical, gave analysis on its own with proper format (but only once, and no format for the following patients), but wrote what User said, did, and felt. Nicely descriptive, including and particularly NSFW. Had a sentence interrupted but not continuable. Second patient "it". Apparently has a preference for wings: Third patient was a naiad (water nymph) with wings, fourth the Loch Ness Monster, also with wings! These were early signs of [Llama 2's known repetition issues](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/), and soon after, it forgot the situation and character, becoming nonsensical after 44 messages.
- **Conclusion:** This 13B seemed smarter than most 34Bs. Unfortunately repetition was noticeable and likely becoming an issue for longer conversations. That's why I can't give this model my full recommendation, you'll have to try it yourself to see if you run into any repetition issues yourself or not.
- 👍 **[Xwin-LM-70B-V0.1](https://huggingface.co/TheBloke/Xwin-LM-70B-V0.1-GGUF)**
- Amy: No limits. Proper use of emoticons (picked up from the greeting message). Very engaging. Amazing personality, wholesome, kind, smart. Humorous, making good use of puns, made me smile. No repetition, no missing words. And damn is it smart and knowledgeable, referencing specific anatomical details that no other model ever managed to do properly!
- MGHC: No analysis on its own, when asked for analysis, offered payment as well. Kept giving partial analysis after every message. Wrote what User said and did. Creative, unique mongirls. No repetition or missing words (tested up to 40 messages).
- **Conclusion:** Absolutely amazing! This is definitely the best in this batch of models - and on par with the winner of my [last model comparison/test](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/), Synthia 70B. I'll have to use both more to see if one is actually better than the other, but that's already a huge compliment for both of them. Among those two, it's the best I've ever seen with local LLMs!
This was a rather frustrating comparison/test - we got ourselves a winner, Xwin, on par with last round's winner, Synthia, so that's great! But several very promising models getting ruined by technical issues is very disappointing, as their potential is evident, so I can only hope we'll find some solution to their problems sometime and be able to enjoy their unique capabilities and personalities fully...
Anyway, that's it for now. Here's a list of my previous model tests and comparisons:
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2

*Posted by WolframRavenwolf in r/LocalLLaMA on 2023-09-24.*
**Pad token error when using LoRA training on raw text**

Hello,
I have tried to find a solution online but without success.
I have installed the latest version of Text Generation Web UI and used it to download TheBloke's Llama-2-7b-Chat-GPTQ model. I have loaded it with the Transformers model loader successfully.
As for training, I used the default settings with the adamw_hf optimizer and selected "Add EOS token".
When I run training on raw text, I get the following error:
https://pastebin.pl/view/7cb58ee0
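A minimal sketch of the usual first fix, assuming the traceback is the well-known missing pad token on Llama-2 tokenizers (the `ensure_pad_token` helper name is ours, not part of any library):

```python
# Llama-2 tokenizers ship without a pad token, which breaks padding during
# batched training. The common workaround is to reuse the EOS token for padding.
def ensure_pad_token(tokenizer):
    """Give the tokenizer a pad token if it lacks one (helper name is ours)."""
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    return tokenizer
```

With Hugging Face tokenizers this is typically applied right after `AutoTokenizer.from_pretrained(...)`, before building the dataset.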
I am willing to try with different models and configuration.
I suppose that I am missing something.

*Posted by Soft-Conclusion-2004 in r/LocalLLaMA on 2023-09-24.*
**How you all finetune?**

[removed]

*Posted by Anu_Rag9704 in r/LocalLLaMA on 2023-09-24.*
**What are the latest memoirs/mini-memoirs on Amazon Kindle that intrigued you?**

Which new memoirs and mini-memoirs made you want to buy them at first glance, and what did you like most about them in terms of content?

*Posted by Proof-Ad-6169 in r/LocalLLaMA on 2023-09-24.*
**Invoker: The one who calls upon... Functions!**

Hey there!
I have released a function-calling model based on Llama-2. It is trained on function-calling/tool-use data so that it can mimic the function-calling feature of OpenAI's GPT models to a large extent. It can also handle multi-turn conversation and decide on its own when it is time to call one of the functions passed to it. I have also written a service in Invoker's repository that allows calling Invoker via the OpenAI Python library.
Currently, only the 13B model is released, but more features are planned, such as a fine-tuned 34B model and quantized versions. Stay tuned!
Github Repo: [https://github.com/jeffrey-fong/Invoker](https://github.com/jeffrey-fong/Invoker)
HF: [https://huggingface.co/jeffrey-fong/invoker-13b](https://huggingface.co/jeffrey-fong/invoker-13b)
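For readers unfamiliar with the OpenAI-style function-calling exchange the model mimics, here is a schematic sketch (the function name and arguments are made up; the `role`/`function_call` shape follows OpenAI's chat format):

```python
import json

# Schematic OpenAI-style function-calling turn: the model answers with a
# structured function_call instead of plain text, and the caller executes it.
assistant_turn = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",                       # hypothetical function
        "arguments": json.dumps({"city": "Paris"}),  # arguments arrive as a JSON string
    },
}

# The caller parses the arguments, runs the function, and feeds the result back
# to the model as a message with role "function".
args = json.loads(assistant_turn["function_call"]["arguments"])
```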
​
https://preview.redd.it/gmgqz6htk8qb1.png?width=500&format=png&auto=webp&s=7ecf6a402d4ba56b076305a0a6f2249e2278b756

*Posted by Unusual_Locksmith642 in r/LocalLLaMA on 2023-09-24.*
**Best interface to use for GenAI, LLM focus**

[removed]

*Posted by mcr1974 in r/LocalLLaMA on 2023-09-24.*
**What can I do with a 3090?**

The question is simple, and I hope the answer will be pretty simple as well: right now, with all the knowledge and optimizations we've achieved, what can a mere human with a second-hand RTX 3090 and a slow-ass i7 6700K with 64 GB of RAM do with all the models we have around here?
I shall be more specific: Can I load a 30B/40B-parameter model and have a pleasant experience? What kind of quantized model should I go for (EXL2, GGUF, GPTQ, etc.)? How can I max out my experience for roleplaying/storytelling/instruction chat in terms of billions of parameters, given the specs I have?
I am a little confused, and I would appreciate any kind of help. Thank you all in advance!
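As a rough aid to the question above, here is a back-of-the-envelope estimate of what fits in 24 GB (the bits-per-weight figure is approximate, and `model_size_gb` is our own helper, not from any library):

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed for the weights alone (ignores KV cache)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# ~4.85 bits/weight is roughly what a Q4_K_M GGUF averages
print(round(model_size_gb(34, 4.85), 1))  # 34B at ~4.85 bpw -> ~20.6 GB
print(round(model_size_gb(70, 4.85), 1))  # 70B -> ~42.4 GB, too big for one 3090
```

So a quantized 34B fits on a 3090 with some room for context, while a 70B needs CPU offloading or a second card.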
*Posted by Relative_Bit_7250 in r/LocalLLaMA on 2023-09-24.*
**LlamaIndex but for C#**

Are there any alternatives like LlamaIndex but for C#?

*Posted by 1EvilSexyGenius in r/LocalLLaMA on 2023-09-24.*
**Optimizing performance using HF transformers**

Hey guys!
I am currently working on optimizing the performance of a RAG solution. To be honest, my research has mainly consisted of looking through posts about different inference solutions, but many of them are standalone applications rather than libraries that you can import, point at a model, and generate with. That is why I have been using plain Hugging Face Transformers paired with GPTQ. My question is: is there another inference library with those properties that would improve performance, or is there a way to further optimize HF Transformers itself?
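One low-effort optimization that usually applies in this setup, sketched below, is batching prompts so that each `generate()` call amortizes per-call overhead (the helper is our own illustration; whether it helps depends on available VRAM headroom):

```python
def batched(prompts, batch_size=8):
    """Yield fixed-size groups of prompts for a single batched generate() call.
    Batching is typically the first easy throughput win over one-by-one calls."""
    for i in range(0, len(prompts), batch_size):
        yield prompts[i:i + batch_size]

# Each group would then go through tokenizer(batch, padding=True, ...) and
# model.generate(...) once, instead of one call per prompt.
```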
Thank you guys for reading through this, I could really use some help.

*Posted by Galaktische_Gurke in r/LocalLLaMA on 2023-09-24.*
**How do I modify the hyperparameters of llama.cpp in Python?**

I'm still new to local LLMs and I cloned llama.cpp onto an external hard drive on my Windows system. I ran the code in Python, but the model is hallucinating a lot. I am using llama-2-7b-chat.gguf.q4_1.bin. I have about 128 GB of RAM on my PC and 6 GB of VRAM on my GPU. Can you recommend a more accurate 7B version of this model, or at least how to modify the hyperparameters? Because the output is really poor.
    from llama_cpp import Llama

    # path to the GGUF model file (raw strings so backslashes aren't escapes)
    model_path = r"D:\shared_folder\llama27b\llama.cpp\models\llama-2-7b-chat.gguf.q4_1.bin"
    history_path = r"D:\shared_folder\llama27b\chat_history.txt"

    # load the model
    LLM = Llama(model_path=model_path)

    while True:
        with open(history_path, "r", encoding="utf-8") as f:
            history = f.read()

        # create a text prompt
        system_prompt = ("Use this chat history to guide your responses: {}\n\n"
                         "You are a grumpy individual lost in an area.\n"
                         "You run into the player, who is the last person you want to see.\n"
                         "You would rather be elsewhere.\n\n"
                         "Provide one-sentence responses only."
                         "\n\n".format(history))

        prompt = input("Enter your message: ")
        with open(history_path, "a", encoding="utf-8") as f:
            f.write("User:" + prompt + "\n")

        # generate a response (takes several seconds); retry if it comes back empty
        output = LLM(system_prompt + prompt)
        while output["choices"][0]["text"].strip() == "":
            output = LLM(system_prompt + prompt)

        with open(history_path, "a", encoding="utf-8") as f:
            f.write("You:" + output["choices"][0]["text"].strip() + "\n")

        # display the response
        print(output["choices"][0]["text"])
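On the hyperparameter half of the question: llama-cpp-python accepts sampling parameters as keyword arguments on the call itself. A minimal sketch of collecting them in one place (the `sampling_params` helper and its defaults are our own illustration; the keyword names match llama-cpp-python's documented call signature):

```python
def sampling_params(temperature=0.7, top_p=0.9, repeat_penalty=1.1, max_tokens=128):
    """Bundle sampling kwargs to splat into a llama-cpp-python call, e.g.
    output = LLM(prompt, **sampling_params())"""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is usually kept in [0, 2]")
    return {
        "temperature": temperature,        # lower -> more deterministic
        "top_p": top_p,                    # nucleus sampling cutoff
        "repeat_penalty": repeat_penalty,  # discourages repetitive loops
        "max_tokens": max_tokens,          # cap the reply length
        "stop": ["User:"],                 # keep it from writing the user's turn
    }
```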
*Posted by swagonflyyyy in r/LocalLLaMA on 2023-09-24.*
**Llama 2 7b-Instruct on 2 RTX 2080 Ti GPUs**

I've been stuck on this for a bit and need some direction. I have 2 GPUs with 11 GB of memory apiece and am attempting to load Meta's Llama 2 7b-Instruct on them. No matter what settings I try, I get an OOM error: torch.cuda.OutOfMemoryError: CUDA out of memory
The torchrun command lists 10.75 GB total capacity, so it's not using both GPUs. I've tried various settings changes, like CUDA_VISIBLE_DEVICES, CUDA_DEVICE, --nproc_per_node, and --num_gpus, with no success. I've also confirmed that PyTorch can see both GPUs and that there is enough available memory between the two.
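One sanity check worth running before tweaking launch flags: the unquantized 7B checkpoint in fp16 simply does not fit on a single 11 GB card, so unless the load is actually sharded across both GPUs, an OOM is expected. A rough sketch of the arithmetic (weights only; activations and CUDA overhead add more):

```python
def fp16_weight_gb(params_billion: float) -> float:
    """Rough size of fp16 weights alone: 2 bytes per parameter."""
    return params_billion * 1e9 * 2 / 1e9

print(fp16_weight_gb(7))  # 14.0 GB of weights vs. 10.75 GB on one 2080 Ti
```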
I know there are various versions of the model on Hugging Face that can overcome this, but I'm more curious how to do it with the model provided by Meta, as a learning exercise.
Has anyone overcome this using Meta's model and can provide guidance?

*Posted by T3h_Laughing_Man in r/LocalLLaMA on 2023-09-24.*
**New to oobabooga, trying to load falcon-7b-instruct model, but ExLlama throws KeyError: 'intermediate_size'**
    File "/opt/conda/envs/textgen/lib/python3.10/site-packages/exllama/model.py", line 56, in __init__
        self.intermediate_size = read_config["intermediate_size"]

*Posted by TheDark_Knight108 in r/LocalLLaMA on 2023-09-24.*
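A hedged note on the likely cause: ExLlama only supports Llama-architecture models, and Falcon's config.json has no `intermediate_size` key, which is exactly the KeyError in the traceback. A tiny sketch of checking a config dict before picking a loader (`looks_like_llama` is our own helper, not part of any tool):

```python
def looks_like_llama(config: dict) -> bool:
    """Heuristic: Llama-family configs carry 'intermediate_size'; Falcon's
    config does not, so ExLlama's config['intermediate_size'] raises KeyError."""
    return "intermediate_size" in config

# A Falcon-style config fails this check, so use the Transformers loader instead.
```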
**Validity of metrics**

TL;DR: I've seen lots of posts about models beating ChatGPT or even GPT-4 according to some metrics, but in my personal experience those models give a very different impression. Am I doing something wrong, or are the metrics just not really good indicators?
And the longer version:
So I've been using LLMs for quite a while, starting with OpenAI's davinci. I guess that was GPT-2, or an early GPT-3?
Anyway, by now I work with LLMs (not on the models themselves, but I use them extensively) and the local models are a game changer. Privacy and GDPR compliance are important in my field, and with OpenAI that's basically impossible. OpenAI also changes their API specs all the time, phases out models, changes their ToS, etc., which makes it basically impossible to ship something that's intended to run unchanged for a while. And sometimes consistency is much more important than the newest features.
For all that (and because I am really not a friend of Microsoft), I would love to use local LLMs exclusively.
But I simply can't.
Chatting works fine I guess for even the 3B models. But most of the time I need the models to perform a task. Be it extracting data in a specific format, selecting options (e.g. functions) from a predefined list, citing sources from a list of embeddings, generating appropriate stop-sequences ...
And it is sooooo frustrating. No matter the task, I give it to ChatGPT and it instantly works.
But with local LLMs it's so much work. First, nothing below 20B parameters can follow instructions reliably unless the task is chatting or writing stories. Above that, I still need to find a model that can do the specific task, after which I can spend half a day blocking a cluster by running automatic tests (most of which are evaluated by ChatGPT, lol) on the model, just to find out that its instruction-following abilities have a limited scope. (Also, when going above 70B params, cost becomes a major issue.)
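As an aside for readers, the "extracting data in a specific format" tasks mentioned above usually come down to a validation-and-retry step like this sketch (the required field names are hypothetical):

```python
import json

def parse_structured(reply, required=("name", "date")):
    """Accept a model reply only if it is valid JSON containing the required
    keys (keys here are hypothetical); return None so the caller can retry."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not set(required) <= data.keys():
        return None
    return data
```

Smaller local models fail this kind of check far more often than ChatGPT does, which is where the extra work comes from.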
​
But here I regularly read of models beating ChatGPT in metrics like HumanEval or similar, and some even claiming to beat GPT-4.
I mean, I've been able to reach ChatGPT levels of performance with specific models for narrow tasks. But never even close to GPT-4.
And I always think I still underperform since I compare to the base model of ChatGPT without hours of prompt engineering and without utilizing their fine tuning. I am relatively certain that a finetuned ChatGPT with even a fraction of the work put into it compared to the local LLM would crush my models.
Yet looking at e.g. [https://tatsu-lab.github.io/alpaca_eval/](https://tatsu-lab.github.io/alpaca_eval/), you would get the impression that while OpenAI's models are good, they are just good models among many. But except for the XwinLM model, I have used all of them, and in my experience they don't even play in the same league.
So that (finally) leads me to two questions
1. Am I doing something wrong? Is there a specific approach that greatly improves an LLM's performance? Normally I use quantized models with llama.cpp at 6-8 bit quantization. Is that wrong? Are there parameters that are extremely important?
2. How "good" are the metrics out there? Do they control for researchers specifically improving for them? And if they do how "expressive" are they? Is there maybe a metric that would align more with my personal experience?
*Posted by randrayner in r/LocalLLaMA on 2023-09-24.*
**Noob question: What's the difference between chat and instruct (or other?) models?**

I don't get the difference in their capability.
Maybe if someone can give simplified training data examples, it will be somewhat easier to understand what they can do.
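To sketch the kind of simplified examples asked for here (formats vary between models; the Alpaca-style and Llama-2-chat-style templates below are just two common conventions):

```python
# Instruct-tuned models train on single-shot instruction -> response pairs,
# e.g. the Alpaca-style template:
instruct_sample = (
    "### Instruction:\nSummarize the text below.\n\n"
    "### Input:\n<some article>\n\n"
    "### Response:\n<the summary>"
)

# Chat-tuned models train on multi-turn dialogues with role markers,
# e.g. the Llama-2 chat template:
chat_sample = (
    "<s>[INST] Hi, who are you? [/INST] I'm a helpful assistant. </s>"
    "<s>[INST] Can you write code? [/INST] Sure, what language? </s>"
)
```

So instruct models are optimized for "one task in, one answer out", while chat models are optimized for holding a conversation across turns.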
Also, does anyone know of a course/playlist that explains concepts like this for LLMs?

*Posted by k110111 in r/LocalLLaMA on 2023-09-24.*
**Adding a P40 to my system?**

Same as everybody else, I'm running into VRAM issues. My rig has a 3090 Ti, a 12900K, and 64 GB of system RAM. Ideally, I'd like to run 70B models at good speeds.
Which brings me to the P40. It sounds like a good solution, and far cheaper than a second 3090 for the additional 24 GB. But I have questions I hope experienced people can answer:
1) How plug-and-play would this be? I'm not completely illiterate when it comes to PCs (after all, I got SD and LLMs running), but am I embarking on a mess of driver and compatibility issues by adding a P40 to the system?

2) Will I suffer significantly decreased performance vs. dual 3090s, to the point where I'm wasting my time and money?

3) I'm seeing P40s for around $150 on eBay from Chinese sellers. Are these rip-offs, or is that the expected used price? Where would be a good place to buy one?

4) Am I overlooking something? As said, the goal is 70B at OK speeds (I draw the line at 5 t/s). Can my rig do this already and I'm just not savvy enough to get there?
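On question 4, a rough way to sanity-check the 5 t/s target, assuming generation is memory-bandwidth-bound (the ~347 GB/s figure is the P40's spec bandwidth; real-world throughput lands well below this ceiling):

```python
def tps_upper_bound(bandwidth_gbs: float, model_gb: float) -> float:
    """Token generation must stream the weights once per token, so memory
    bandwidth / model size gives a crude upper bound on tokens/second."""
    return bandwidth_gbs / model_gb

# A 70B model at ~4-bit is roughly 40 GB; with the model split across a
# 3090 Ti and a P40, the slower card tends to dominate.
print(round(tps_upper_bound(347, 40), 1))  # crude ceiling on P40 bandwidth
```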
Thanks a lot for your time.

*Posted by Herr_Drosselmeyer in r/LocalLLaMA on 2023-09-24.*
**I am fine-tuning a translation model (ALMA), when should I stop training?**

ALMA (Advanced Language Model-based trAnslator) is a new approach to fine-tuning that can achieve translation performance exceeding GPT-3.5 without a large amount of parallel data (data describing the same sentence in the source and target languages).
[https://www.reddit.com/r/LocalLLaMA/comments/16p2smj/a_paradigm_shift_in_machine_translation_how_to/](https://www.reddit.com/r/LocalLLaMA/comments/16p2smj/a_paradigm_shift_in_machine_translation_how_to/)
The original model covers languages with high-quality validation datasets: namely English, Russian, Czech, Chinese, Icelandic, and German.
I decided to tweak it for Japanese, as the GitHub repo has very detailed instructions on how to do so.
I replaced Russian, which had a large percentage of training data in the original model, with Japanese, and most of the other settings are the same as in the original model.
[https://github.com/fe1ixxu/ALMA/blob/master/runs/mono_ft.sh](https://github.com/fe1ixxu/ALMA/blob/master/runs/mono_ft.sh)
The author trained the model (probably the 13B) for "18 hours (stage1) + 1 hour (stage2) on 16 MI200 GPUs".
I am currently running stage 1 of the 7B model on 8x A6000 GPUs.
I am going to finish stage 1 when the loss value converges, at around 18 hours. But looking at the graph at the moment, convergence doesn't seem to be progressing very nicely. It is difficult to judge, so please advise me if you notice anything or have anything I should keep in mind.
It is taking about 0.5 minutes per step, so in 18 hours I expect to finish with 2160 steps.
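For deciding when "converged" actually applies, here is a crude plateau check over the logged loss values (the window size and tolerance are arbitrary choices of ours, not anything from the ALMA repo):

```python
def has_converged(losses, window=50, rel_tol=0.01):
    """Stop when the mean loss of the last `window` steps improves on the
    previous window by less than `rel_tol` (relative): a crude plateau test."""
    if len(losses) < 2 * window:
        return False
    prev = sum(losses[-2 * window:-window]) / window
    last = sum(losses[-window:]) / window
    return (prev - last) / prev < rel_tol
```

Run against the logged per-step losses, this would fire once the curve flattens rather than at a fixed step count.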
A graph of the training is below.
[https://wandb.ai/dahara1/huggingface/reports/ALMA-Japanese-finetune-stage1--Vmlldzo1NDkwODM4](https://wandb.ai/dahara1/huggingface/reports/ALMA-Japanese-finetune-stage1--Vmlldzo1NDkwODM4)

*Posted by dahara111 in r/LocalLLaMA on 2023-09-24.*