Dataset columns:

| column | dtype | min | max |
| --- | --- | --- | --- |
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Any WIP or available projects for web plugins for models like WizardCoder? (I searched)
2
Like ChatGPT's web-search plugin, where the model searches the internet for compatible and coherent responses: is there any work in progress, any plans, or any local models/projects available that I can use? I know AgentGPT runs on a similar premise, however it uses the ChatGPT API, which is not local. I think this work would push LLaMAs to another stratosphere entirely! Coupling WizardCoder with BeautifulSoup, and obviously a larger context size (for those with the means to run a compatible model, obviously lol), could be insanely useful! While I understand current work is rightly applied to making these models usable and useful, I think some work on incorporating BS4 and Selenium could be really useful. *^(Or, I have missed the work that already has been done!)*
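A minimal sketch of the scraping half of such a plugin, assuming `requests` and `beautifulsoup4` are installed and that the local model is reachable through a hypothetical `generate()` helper:

```
import requests
from bs4 import BeautifulSoup

def fetch_page_text(url: str, max_chars: int = 4000) -> str:
    """Download a page and return its visible text, truncated to fit the context window."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts/styles so only readable text is left
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    text = " ".join(soup.get_text(separator=" ").split())
    return text[:max_chars]

def answer_with_web_context(question: str, url: str, generate) -> str:
    """Build a prompt that grounds the local model's answer in the fetched page."""
    context = fetch_page_text(url)
    prompt = (
        f"Use the following web page excerpt to answer the question.\n\n"
        f"Excerpt:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)  # generate() is whatever wrapper you use around your local model
```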
2023-07-17T05:24:51
https://www.reddit.com/r/LocalLLaMA/comments/151sm9m/any_wip_or_available_projects_for_web_plugins_for/
card_chase
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151sm9m
false
null
t3_151sm9m
/r/LocalLLaMA/comments/151sm9m/any_wip_or_available_projects_for_web_plugins_for/
false
false
self
2
null
Optimizing the context abilities of LLMs by using a rolling summary
13
I've been messing around with KoboldCPP on my laptop. I don't know very much about the technology, and I had only used one of the original llama 7B quants, so I was really impressed to see the conversational ability of the new models and frontends. I ran into a problem, though, where the model would update the prompts with the entire conversation to keep track of context, and this would eventually cause it to take a while to generate responses.

This gave me an idea: What if, instead of resubmitting the entire conversation, you just had a summary of the conversation which was revised after each response? You could use another LLM to read the summary, read the last prompt and response, and then update the summary. It might have to be trained specifically for the task, so that it's good enough at being both comprehensive and concise.

Here's an example: You're playing a D&D type roleplay game with the AI. The prompt so far is something like "There's a party of five adventurers, you're in a forest, the first character is a \[...\], you just encountered \[...\]". You tell the AI that you pull out your spell book and cast some spell that you just made up. It amends the summary after that plays out: "\[...\], the party encountered \[...\] in the forest, the player used a spell which \[...\]".

Because the second model is always rewriting the summary, it would basically be keeping track of the conversation for a minimum number of tokens. It's much easier to process a one-sentence summary of the last prompt and response than it is to process the whole prompt and response. If it was good enough at being concise, it would conceivably consolidate the past summaries once a part of the conversation had passed.

I don't know how the memory on these models works. Once it hits 2k tokens, does it just cut off the beginning of the conversation? If so, this system could maybe be used to keep track of a much larger context. As long as you can ask an LLM to write something under a certain character limit, it could rewrite the summary to make certain parts more concise, and you'd just gradually lose resolution to your context from the AI being more sparse with its recollection of earlier parts of the conversation, instead of losing those parts altogether.

I don't know if LLMs are particularly good at writing a summary under a certain character limit, though, or if they just cut at a certain point. You would basically have to tell it to end the summary with a description of the most recent events, but still keep it under a certain character limit, which might not work because (AFAIK) LLMs generate from the beginning to the end. I don't really know how any of this works, so I'd love to hear anyone's thoughts about this.
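A rough sketch of the idea, assuming a hypothetical `generate(prompt)` wrapper around whatever local model or front-end you use (this is the poster's scheme as described, not an existing KoboldCPP feature):

```
def chat_with_rolling_summary(generate, max_summary_chars: int = 1500):
    """Keep a running summary instead of resubmitting the whole conversation."""
    summary = ""
    while True:
        user_msg = input("You: ")
        if not user_msg:
            break
        # The main model only ever sees the summary plus the newest message
        reply = generate(
            f"Summary of the conversation so far:\n{summary}\n\n"
            f"User: {user_msg}\nAssistant:"
        )
        print("Assistant:", reply)
        # A second call (or a second model) rewrites the summary to stay short
        summary = generate(
            f"Current summary:\n{summary}\n\n"
            f"Latest exchange:\nUser: {user_msg}\nAssistant: {reply}\n\n"
            f"Rewrite the summary so it stays under {max_summary_chars} characters, "
            f"keeping the most recent events in the final sentence."
        )[:max_summary_chars]
```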
2023-07-17T04:48:32
https://www.reddit.com/r/LocalLLaMA/comments/151rxol/optimizing_the_context_abilities_of_llms_by_using/
RustRedditAlt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151rxol
false
null
t3_151rxol
/r/LocalLLaMA/comments/151rxol/optimizing_the_context_abilities_of_llms_by_using/
false
false
self
13
null
IA3 - New LoRA-like training is out, promises full fine-tuning performance
24
This has already been added to PEFT and implementation should be trivial, although the application of B&B 4-bit is still in question.

# IA3

This conceptual guide gives a brief overview of [IA3](https://arxiv.org/abs/2205.05638), a parameter-efficient fine-tuning technique that is intended to improve over [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora).

To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules in a typical transformer-based architecture. They are the only trainable parameters during fine-tuning, so the original weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA) keeps the number of trainable parameters much smaller.

Being similar to LoRA, IA3 carries many of the same advantages:

* IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%.)
* The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
* Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
* IA3 does not add any inference latency because adapter weights can be merged with the base model.

In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Following the authors' implementation, IA3 weights are added to the key, value and feedforward layers of a Transformer model. Given the target layers for injecting IA3 parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
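A minimal PEFT sketch of what wiring IA3 into a LLaMA-style model might look like; the base checkpoint, module names, and the availability of `IA3Config` in your installed `peft` version are assumptions, so check the PEFT docs for your release:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import IA3Config, get_peft_model, TaskType

base = "huggyllama/llama-7b"  # hypothetical base checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# IA3 learns scaling vectors for the targeted projections instead of low-rank updates
config = IA3Config(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["k_proj", "v_proj", "down_proj"],  # assumed LLaMA module names
    feedforward_modules=["down_proj"],                  # feedforward targets are scaled at a different position
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # should report a tiny fraction of total parameters
```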
2023-07-17T02:18:02
https://www.reddit.com/r/LocalLLaMA/comments/151ox4v/ia3_new_loralike_training_is_out_promises_full/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151ox4v
false
null
t3_151ox4v
/r/LocalLLaMA/comments/151ox4v/ia3_new_loralike_training_is_out_promises_full/
false
false
self
24
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
MoE locally, is it possible?
87
Regarding the leaked info about the GPT-4 architecture, where it supposedly uses a Mixture of Experts: would it be possible to have small experts (13B, for example) for multiple subjects using LLaMA, so we could take advantage of multiple 13B models, each one an expert in some area?
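This isn't true MoE (which routes per token inside the network), but a rough sketch of routing whole prompts to specialist 13B models is below; the model names and the `load_model`/`generate` helpers are hypothetical:

```
# Naive prompt-level routing between specialist models, not token-level MoE.
EXPERTS = {
    "code":    "wizardcoder-13b",   # hypothetical local model identifiers
    "medical": "medalpaca-13b",
    "general": "llama-13b-chat",
}

KEYWORDS = {
    "code":    ["python", "function", "bug", "compile"],
    "medical": ["symptom", "diagnosis", "dose"],
}

def pick_expert(prompt: str) -> str:
    lowered = prompt.lower()
    for name, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return EXPERTS[name]
    return EXPERTS["general"]

def answer(prompt: str, load_model, generate) -> str:
    model = load_model(pick_expert(prompt))  # only one 13B model in memory at a time
    return generate(model, prompt)
```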
2023-07-17T02:08:55
https://www.reddit.com/r/LocalLLaMA/comments/151oq99/moe_locally_is_it_possible/
JKaique2501
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151oq99
false
null
t3_151oq99
/r/LocalLLaMA/comments/151oq99/moe_locally_is_it_possible/
false
false
self
87
null
Llama 8k context length on V100
7
I checked out the blog [Extending Context is Hard | kaiokendev.github.io](https://kaiokendev.github.io/context) and the paper from Meta [2306.15595.pdf (arxiv.org)](https://arxiv.org/pdf/2306.15595.pdf), but I was wondering if we also have code for position interpolation for Llama models. They say it's just adding a line (t = t/4) in the LlamaRotaryEmbedding class, but my question is: don't we also need to change max_position_embeddings to 8192 and max_model_length to 8192? Also, I only have V100 GPUs (multiple nodes, each node with 8 GPUs); how can I use local attention or any other trick to fix the memory issue?
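A self-contained sketch of what linear position interpolation does to the RoPE cos/sin cache (illustrative only; the actual patch described in the blog modifies `LlamaRotaryEmbedding` inside transformers, whose internals vary by version):

```
import torch

def rope_cache(seq_len: int, dim: int, base: float = 10000.0, scale: float = 1.0):
    """Build RoPE cos/sin tables; scale < 1 interpolates positions (e.g. 0.25 for 2k -> 8k)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    t = torch.arange(seq_len).float() * scale   # the "t = t / 4" trick, written as t * 0.25
    freqs = torch.outer(t, inv_freq)
    return freqs.cos(), freqs.sin()

# Original 2k cache vs. an 8k cache squeezed back into the trained position range
cos_2k, _ = rope_cache(2048, 128)
cos_8k, _ = rope_cache(8192, 128, scale=0.25)
print(cos_2k.shape, cos_8k.shape)  # torch.Size([2048, 64]) torch.Size([8192, 64])
```

And yes, you would also raise `max_position_embeddings` (and any wrapper's maximum length) to 8192 so the cache and attention masks are built for the longer sequence, as the post suspects.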
2023-07-17T01:32:41
https://www.reddit.com/r/LocalLLaMA/comments/151nykw/llama_8k_context_length_on_v100/
HopeElephant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151nykw
false
null
t3_151nykw
/r/LocalLLaMA/comments/151nykw/llama_8k_context_length_on_v100/
false
false
self
7
null
What will decide the loading speed of a model?
9
Hello guys. These days I am playing around with MetaIX/OpenAssistant-Llama-30b-4bit & TheBloke/wizardLM-13B-1.0-GPTQ in [**text-generation-webui**](https://github.com/oobabooga/text-generation-webui). Loading the 13B model takes a few minutes, which is acceptable, but loading the 30B 4-bit model is extremely slow, around 20 minutes. Yes, the model sits on a five-year-old disk, but neither my RAM nor my disk is fully loaded during the load. Does loading a 30B 4-bit model really take that long?
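One quick way to tell whether the old disk is the bottleneck is to time a raw sequential read of the model file and compare it against the 20-minute load; a small sketch (the path is a placeholder):

```
import time

MODEL_PATH = "/path/to/OpenAssistant-Llama-30b-4bit.safetensors"  # placeholder path
CHUNK = 64 * 1024 * 1024  # 64 MiB reads

start = time.time()
total = 0
with open(MODEL_PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.time() - start
print(f"Read {total / 1e9:.1f} GB in {elapsed:.1f} s "
      f"({total / 1e6 / elapsed:.0f} MB/s)")
# If the raw read already takes many minutes, the disk (not the loader) is the limit.
```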
2023-07-17T01:23:12
https://www.reddit.com/r/LocalLLaMA/comments/151nr90/what_will_decide_the_loading_speed_of_a_model/
JohnSmith004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151nr90
false
null
t3_151nr90
/r/LocalLLaMA/comments/151nr90/what_will_decide_the_loading_speed_of_a_model/
false
false
self
9
{'enabled': False, 'images': [{'id': 'dq44VpF7VC1fiqYGfRl7WdR8cl3rgxGf0qmOz7_-ioI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=108&crop=smart&auto=webp&s=3982d4e5053900afd007800efd82613f97257654', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=216&crop=smart&auto=webp&s=72830b46c481d4d69fb829ecef65feb91446d6f2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=320&crop=smart&auto=webp&s=0d4f165baa74246b072a003df69a98458ac87b58', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=640&crop=smart&auto=webp&s=8ac6045cbd4e67fe743a6a0d9b9c56b1190d9f39', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=960&crop=smart&auto=webp&s=ea96076a97c8cd2b8d50d1b4423a3404c44303ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?width=1080&crop=smart&auto=webp&s=a6d6599852d177cd8986c6fc0bddb3966b9833c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JzBT69J3hdSlsyjuum-qFnhdxt8zmK1R4fnwc4OOaho.jpg?auto=webp&s=9ae8db779dd4ca50d628b810f1e198c543d6d13e', 'width': 1200}, 'variants': {}}]}
An assistant that thinks he is in a call center - then forgets to switch off his phone...
0
2023-07-17T01:00:05
https://i.redd.it/e29bpod2cfcb1.jpg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
151n8kg
false
null
t3_151n8kg
/r/LocalLLaMA/comments/151n8kg/an_assistant_that_thinks_he_is_in_a_call_center/
false
false
https://b.thumbs.redditm…uLKytmpHAgrc.jpg
0
{'enabled': True, 'images': [{'id': 'AqZwvNPMglYTfm8OBV-I5EBZ2ms9fgALsRHqeTGeBJo', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=108&crop=smart&auto=webp&s=b49aea1cb4ce504ad4ad28f8d2a2da8403e924fa', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=216&crop=smart&auto=webp&s=00506e28a29dc581de5ffad98e0b7d267052f527', 'width': 216}, {'height': 374, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=320&crop=smart&auto=webp&s=3f635b7effe57cd5bbe6d9248e0b5ddb932e1ffb', 'width': 320}, {'height': 749, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?width=640&crop=smart&auto=webp&s=37a780d4104d8db732639109d13bf1dd75291714', 'width': 640}], 'source': {'height': 834, 'url': 'https://preview.redd.it/e29bpod2cfcb1.jpg?auto=webp&s=ca5828545eb6308dbbe4cdcedb01adff7de81761', 'width': 712}, 'variants': {}}]}
Stochastically Subsampled Self-Attention (SSA)
13
2023-07-16T20:34:58
https://medium.com/@m.h.nakif.bd.0/transformers-just-got-a-lot-more-efficient-and-smarter-92e3e3e4bcfa
Balance-
medium.com
1970-01-01T00:00:00
0
{}
151gt6v
false
null
t3_151gt6v
/r/LocalLLaMA/comments/151gt6v/stochastically_subsampled_selfattention_ssa/
false
false
https://b.thumbs.redditm…9kuetOsKaAIU.jpg
13
{'enabled': False, 'images': [{'id': 'Qst3dMARWiWZyemxG04CSTvaXACh47WsjagIoVhFM6Y', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=108&crop=smart&auto=webp&s=eae8f937e20cc26176bdda7be71eb3fa60ae855a', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=216&crop=smart&auto=webp&s=e5a8cead74b5775ae2822391a1441d271a5a8df8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=320&crop=smart&auto=webp&s=9f7a50181424fced9c74046496c0f8ba6a06b3a1', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=640&crop=smart&auto=webp&s=329431a27200594e4b6fada88e51d1b18f0ef2d3', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=960&crop=smart&auto=webp&s=da4fafdc9f85aef7e9d36df1d96fedf8f5c8fac5', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=1080&crop=smart&auto=webp&s=0a686c9a7271990d7bd54b119437d283b944c4cb', 'width': 1080}], 'source': {'height': 463, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?auto=webp&s=0534450c3913bdd8d67803104090c6dc38c5e390', 'width': 1200}, 'variants': {}}]}
Summarization of long transcriptions
2
Apologies, this is a long post. TL;DR: I want to produce summaries of RPG sessions. Please help.

I'd like to get a workflow going to summarize recordings of RPG sessions, and probably eventually do some other things with that data. I prefer to keep most of the data local, and I'm not concerned about speed (i.e. this doesn't need to be real time or anything). These will be 3-5 hour recordings of 4-5 people. I plan to use https://github.com/yinruiqing/pyannote-whisper to generate the transcript from the recording. From there, though, I'm not quite sure how to proceed.

First thought is that there's probably a simple NLP library that I can pass it through to remove filler words and other non-relevant information, just to reduce the overall number of tokens; that would also be an opportunity to catch some "manual commands" (we often get sidetracked and keep a running list of topics to come back to later, so one example would be "add this to the list: <statement>").

Then I suppose I'd want to break the data down into smaller chunks that fit the context window of a model I can run locally (10th gen i7 with 80GB RAM and a 2070S 8GB GPU, on which I have oobabooga running) to produce a bunch of segmented summaries (a rough chunking sketch is below). I'm guessing an 8k-context model where I feed it about 6k worth of tokens and ask for a ~1k summary. I think I'd want around 500 tokens of overlap at the start and end of each segment so there's some context that keeps things coherent. Then I'll have probably 6-12k tokens worth of summaries that need to be consolidated into a single larger summary.

So my questions:

* What's the best kind of model to use for summarizing spoken words?
* Are there any existing projects that do similar things?
* Any advice about prompts to help a model understand what's important vs. not?
* Would I benefit from fine-tuning a model for this task (I'd use a by-the-hour rental for the training, obviously)? Are there any datasets out there using transcripts like this that might help with that? If I used my own transcripts, any advice about how to format and structure that data to be useful for training?
* What else should I be aware of that I haven't mentioned?
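The chunking-with-overlap step described above, sketched with a hypothetical `generate()` wrapper around the local model and crude whitespace tokenization standing in for a real tokenizer:

```
def chunk_tokens(tokens: list[str], chunk_size: int = 6000, overlap: int = 500):
    """Split a transcript into overlapping chunks that fit an 8k-context model."""
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        yield tokens[start:start + chunk_size]

def summarize_transcript(transcript: str, generate, target_tokens: int = 1000) -> str:
    """Map-reduce style: summarize each chunk, then summarize the summaries."""
    tokens = transcript.split()
    partials = []
    for chunk in chunk_tokens(tokens):
        partials.append(generate(
            "Summarize this portion of a tabletop RPG session transcript in about "
            f"{target_tokens} tokens, keeping character names and plot events:\n\n"
            + " ".join(chunk)
        ))
    return generate(
        "Combine these partial session summaries into one coherent recap:\n\n"
        + "\n\n".join(partials)
    )
```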
2023-07-16T20:22:13
https://www.reddit.com/r/LocalLLaMA/comments/151ghjg/summarization_of_long_transcriptions/
mrgreen4242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151ghjg
false
null
t3_151ghjg
/r/LocalLLaMA/comments/151ghjg/summarization_of_long_transcriptions/
false
false
self
2
{'enabled': False, 'images': [{'id': '8vC3QlcRukNhHTEUGSgAlZj9tezGDi9FosZUa3iiiyc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=108&crop=smart&auto=webp&s=66a0cf97f56b5869c05d91e625de6278fc1df5a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=216&crop=smart&auto=webp&s=015a9a4efb22c447d51a917a02904b99a30f3194', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=320&crop=smart&auto=webp&s=0c92a5d1d380f4eed6f1fe0911afdfb35158ab92', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=640&crop=smart&auto=webp&s=ac5b533b04193d87344131bff8cfaea67122777f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=960&crop=smart&auto=webp&s=40f072f2db72aaf18201d3b37cdd76748c2c28ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?width=1080&crop=smart&auto=webp&s=1d8b282e170735f032c6726b0c2ac81c7ae3e84e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/R3gZOHhS9Cfe-E0iMP8Oae6rOUjx1x3DHFsCb-_hClE.jpg?auto=webp&s=26aa6b5febb8b3dc211815fa599edec4e4b4fbaa', 'width': 1200}, 'variants': {}}]}
What do y'all think is a minimum build to run 40B and 65B models locally?.
53
I just spent around $7000 on a Dell 7865 workstation. It's got a Threadripper Pro with 12 cores, a single A6000 (48GB VRAM), 128GB system memory, and 4TB storage. I spent twice my budget and ended up with around half of what I was hoping for spec-wise. I initially wanted to be able to tune and run 40B models locally, but have dropped that expectation to tuning in the cloud, running locally, and teaching myself LangChain. Even though I already pulled the trigger on this, I'd appreciate both critiques and advice. I'd also be interested in hearing about your builds and how they're working for you.
2023-07-16T20:12:16
https://www.reddit.com/r/LocalLLaMA/comments/151g8cd/what_do_yall_think_is_a_minimum_build_to_run_40b/
robkkni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151g8cd
false
null
t3_151g8cd
/r/LocalLLaMA/comments/151g8cd/what_do_yall_think_is_a_minimum_build_to_run_40b/
false
false
self
53
null
wtf? Bard still sucks
66
2023-07-16T19:39:48
https://i.redd.it/ici1oil2rdcb1.png
limpoko
i.redd.it
1970-01-01T00:00:00
0
{}
151fe3g
false
null
t3_151fe3g
/r/LocalLLaMA/comments/151fe3g/wtf_bard_still_sucks/
false
false
https://b.thumbs.redditm…LR4JDMbDzamQ.jpg
66
{'enabled': True, 'images': [{'id': 'lI-2__VCfyzUqrC9Zzu4NjRGV-xJpiCjx8yOZkjlR7s', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=108&crop=smart&auto=webp&s=d4a72b24d6982328366dccdbeb92249628faad48', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=216&crop=smart&auto=webp&s=fb1bce4187ba1b0d18d1ec6926ab14806356c8cf', 'width': 216}, {'height': 392, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=320&crop=smart&auto=webp&s=457c1534e11fdd06ef40726c101a4fb1d1e98803', 'width': 320}, {'height': 784, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=640&crop=smart&auto=webp&s=e53e885909d491e3c5d241b573c01e2b006dc542', 'width': 640}, {'height': 1176, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=960&crop=smart&auto=webp&s=da5e9d4ee443c2acd6588d920cb1e98aeeeaafb6', 'width': 960}, {'height': 1323, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?width=1080&crop=smart&auto=webp&s=5cdc5e77319d14dd957d04c3397a2aaab93d1fe9', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/ici1oil2rdcb1.png?auto=webp&s=74e2de2b516ead451b67940fec004210802dc0b4', 'width': 1306}, 'variants': {}}]}
Does Claude 2 have a message limit now?
2
I have been using Claude+ and recently Claude 2, and unfortunately Claude 2 seems to have a message limit now. I have been using the Claude AIs for story writing and RPGs. They are better than GPT-3.5 and, in my experience, pretty close to GPT-4 in terms of logic deduction, and they definitely exceed GPT-4 in terms of context window. Too bad there is a message limit. I didn't see this info anywhere else, though. Does anyone know? I have used Vicuna 13B locally for story writing. My experience is that the context window is a huge bottleneck.
2023-07-16T18:21:55
https://i.redd.it/gd0tns08ddcb1.jpg
SwimmingSpeed3577
i.redd.it
1970-01-01T00:00:00
0
{}
151dev9
false
null
t3_151dev9
/r/LocalLLaMA/comments/151dev9/does_claude_2_have_a_message_limit_now/
false
false
https://b.thumbs.redditm…AKy4stafLx5o.jpg
2
{'enabled': True, 'images': [{'id': '0Fb_TerjRnAwWAwv5mQw8XSWZOppHXxsV-0PJQ7ERU8', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=108&crop=smart&auto=webp&s=24a4401e2313cb53424e7b48099e6f69c4eabd9b', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=216&crop=smart&auto=webp&s=eccd542e8848d441b3eb43f2249d3cd3780c868d', 'width': 216}, {'height': 83, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=320&crop=smart&auto=webp&s=6122805900660604950b3a3be7b12f594970d11c', 'width': 320}, {'height': 166, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=640&crop=smart&auto=webp&s=2fa2630b5b6137ba2680b1adab85baa44ef3685e', 'width': 640}, {'height': 250, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=960&crop=smart&auto=webp&s=17be4e21d6322648a80acba130e9a4f184646e35', 'width': 960}, {'height': 281, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?width=1080&crop=smart&auto=webp&s=366b55d1b7f2db560d0dc388600ad2f82e2e4b4b', 'width': 1080}], 'source': {'height': 305, 'url': 'https://preview.redd.it/gd0tns08ddcb1.jpg?auto=webp&s=8ea30ff0801a1002088bb50f6e2062bd5a3e7c9f', 'width': 1170}, 'variants': {}}]}
Did anyone try fine-tuning LLaMA using the Reddit dataset?
54
I remember playing around with the Reddit dataset a couple of years ago and it was huge. It is also somewhat conversational in nature so wouldn’t it make sense to use it? Did someone already try this?
2023-07-16T18:20:49
https://www.reddit.com/r/LocalLLaMA/comments/151ddst/did_anyone_try_finetuning_llama_using_the_reddit/
Soli__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151ddst
false
null
t3_151ddst
/r/LocalLLaMA/comments/151ddst/did_anyone_try_finetuning_llama_using_the_reddit/
false
false
self
54
null
Can't compile llama-cpp-python with CLBLAST
5
I'm trying to get [GPU-Acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md) to work with oobabooga's webui; it says that I just have to reinstall llama-cpp-python in the environment and have it compile with CLBLAST. So I have [CLBLAST](https://github.com/CNugteren/CLBlast/tree/master) downloaded and unzipped, but when I try to do it with: `pip uninstall -y llama-cpp-python` `set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && set LLAMA_CUBLAS=1 && pip install llama-cpp-python --no-cache-dir` it says it can't find CLBLAST, even when I point CLBlast_DIR at the CLBlastConfig.cmake file, nor with CMAKE_PREFIX_PATH. Does anyone have a clue what I'm doing wrong? I have an RX 5700 so I could try ROCm, but I failed at that in the past as well.
2023-07-16T17:12:25
https://www.reddit.com/r/LocalLLaMA/comments/151bnko/cant_compile_llamacpppython_with_clblast/
KazaflowLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151bnko
false
null
t3_151bnko
/r/LocalLLaMA/comments/151bnko/cant_compile_llamacpppython_with_clblast/
false
false
self
5
{'enabled': False, 'images': [{'id': 'tDPmuBn5VEHrZwkUmVYXt8r9rIPUwToqUkwRggOmjUM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=108&crop=smart&auto=webp&s=3fd649d03b12e4b8ea5b16a92ec18b12c632b98a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=216&crop=smart&auto=webp&s=6ecf89cba680e469fb80621294206f9145eab6c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=320&crop=smart&auto=webp&s=68ee3151913711570ad14ef26c68932979838d05', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=640&crop=smart&auto=webp&s=d8fa08a69d4f6e9e09895875ed340a3192c35397', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=960&crop=smart&auto=webp&s=c6319e87d50a5d9ff663486b6c7e47fddf50fef9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?width=1080&crop=smart&auto=webp&s=13d29cb41df7c840e0c89bc962eb99eda2b6ecb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DOnSftqj4LGWEYBOAoB-3aYRkCauo5-VTetLwZyb5KI.jpg?auto=webp&s=e0b53fcad135da77b145fe62f5f31724ee0dbd31', 'width': 1200}, 'variants': {}}]}
Some questions about training LoRAs in more effective way
9
Hello, everyone. In my previous posts, I've been training LoRAs with an RTX A6000 in a cloud service. I'm using **TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16** as my base model. A couple of days ago, I prepared new raw training data, which weighs about **12MB**. Training took about twenty (20!) hours to finish. When I started to use the result, I noticed that I had botched my training data, and now my new LoRA outputs very bad results. Twenty hours in a paid cloud service wasted!

Because of that, I have some questions about training new LoRAs:

1. I'm currently using **fp16** models in **8-bit mode** to train my LoRAs; does switching to **GPTQ** or something else make training faster?
2. Does the speed of training depend on the size of the model? Does training a LoRA on a **30B** model take more time than on a **13B** model?
3. I'm training the LoRA to output stories in a movie-script-like structure; does increasing the size of the training data keep improving results, or do I hit diminishing returns? Or am I just overthinking?
4. During training with 3 epochs, the loss on the last epoch hovered between **1.3 - 1.4** and never went lower. Does that mean I should use fewer epochs?
5. What kind of storytelling base model can be used without any legal issues for commercial purposes?

Some questions may sound very obvious, but I'm very overwhelmed by information about LLMs and other generative technologies. Every week something new arrives while something else is already obsolete.
2023-07-16T16:50:16
https://www.reddit.com/r/LocalLLaMA/comments/151b3b9/some_questions_about_training_loras_in_more/
DaniyarQQQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
151b3b9
false
null
t3_151b3b9
/r/LocalLLaMA/comments/151b3b9/some_questions_about_training_loras_in_more/
false
false
self
9
null
How to properly generate state of the art results with LoRa/QloRa fine-tune
10
I've been trying to QLoRa fine-tune several of the 7B and 13B models, and unfortunately, the results have been somewhat underwhelming. The models seem to retain some of the training data information and can make loose connections between topics, which is promising, but the overall performance doesn't quite hit the 'state of the art' benchmark that I was hoping for. The model appears to be making "close but not quite" responses. It suggests ideas that are topically related but often misses the mark, aka "hallucinations", or wrong conclusions that are only tangentially related to the input.

Several others on this subreddit have mentioned achieving SOTA results on small datasets, and I was wondering if you could share any tips or suggestions to help improve the performance of my model. I am thinking there can only be one of three issues at play here:

1. A problem with the quality or the diversity of the training data. This could be the issue, as I am just using 1.5K examples and there is little diversity. I am using an email thread as the inputs, the response as the output, and "Please respond to this email conversation" as the instruction (last fine-tuned on orca-mini-v2-13B).
2. Need for tweaking the fine-tuning parameters (I doubt this).
3. My expectation for these smaller models is too high.
2023-07-16T14:48:56
https://www.reddit.com/r/LocalLLaMA/comments/15185ua/how_to_properly_generate_state_of_the_art_results/
rinse_repeat_wash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15185ua
false
null
t3_15185ua
/r/LocalLLaMA/comments/15185ua/how_to_properly_generate_state_of_the_art_results/
false
false
self
10
null
Q: Best Model/etc to install for either coding (Python Pytorch) or general usage. RTX3090.
0
Title says it all. Maybe the answer is two models, one for coding and one for general Q&A about various topics. For coding I hear Microsoft will be releasing phi-1, a smaller model designed specifically for Python coding, but it's not out yet? I tried Vicuna 8-bit compressed, but it was horrible, and strangely way worse than the online one, which is not compressed.
2023-07-16T13:21:09
https://www.reddit.com/r/LocalLLaMA/comments/15165tg/q_best_modeletc_to_install_for_either_coding/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15165tg
false
null
t3_15165tg
/r/LocalLLaMA/comments/15165tg/q_best_modeletc_to_install_for_either_coding/
false
false
self
0
null
LLM Generating SQL based on detailed schema descriptions
8
I've been experimenting with ChatGPT 3.5 to generate SQL statements. I first feed in the database structure, give it an idea of the fields and relationships, and then some instructions to use X method instead of Y method when I found certain syntax didn't work. I gave it requests like, for example, "write me a Google BigQuery query to fetch the most common day the living room sensor is triggered". It does a decent job; it sometimes gets a little confused and mixes things up, but I'm prepared to spend a decent amount of time to see if I can help the LLM get to the right results. I guess this is in the same "domain" as "Natural Query Language". I have hundreds of powerful queries already that go quite deep - lots of data warehousing and analytics exist already. My question is: has anyone done this and had some success? I'm looking at the WizardCoder LLMs now. With oobabooga, should I just add a character with some context, is that enough, or should I do some LoRA training? I'm very new to this, but this is the direction I'm going in, so any advice would be great!
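A sketch of the schema-in-prompt approach (table and column names are made up, and `generate()` stands in for whatever local model call you end up using):

```
SCHEMA = {
    "sensor_events": ["event_id", "sensor_name", "room", "triggered_at"],  # made-up example schema
    "sensors": ["sensor_name", "room", "installed_on"],
}

RULES = [
    "Use Google BigQuery SQL syntax.",
    "Use EXTRACT(DAYOFWEEK FROM ...) for day-of-week logic.",
]

def build_sql_prompt(question: str) -> str:
    schema_text = "\n".join(
        f"Table {table}: columns {', '.join(cols)}" for table, cols in SCHEMA.items()
    )
    rules_text = "\n".join(f"- {r}" for r in RULES)
    return (
        "You write SQL for the database described below.\n\n"
        f"{schema_text}\n\nRules:\n{rules_text}\n\n"
        f"Request: {question}\nSQL:"
    )

prompt = build_sql_prompt(
    "fetch the most common day the living room sensor is triggered"
)
# sql = generate(prompt)  # generate() is your local LLM wrapper
```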
2023-07-16T11:37:04
https://www.reddit.com/r/LocalLLaMA/comments/15142dc/llm_generating_sql_based_on_detailed_schema/
lumponmygroin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15142dc
false
null
t3_15142dc
/r/LocalLLaMA/comments/15142dc/llm_generating_sql_based_on_detailed_schema/
false
false
self
8
null
Multi model LLMs or chatbots
7
Lately I implemented a search with embeddings in a GPT-3 integration at my company for development assistance, and it really made a huge difference. With a bit more work on inline verification, using GPT-3 itself with a small context, the dev assistant became more predictable and efficient. I've also read a bit about the latest news on GPT-4.

With this basic and simple experience I got an obvious and simple idea; it's a suggestion and a question at the same time, since after some searches I found nothing on whether someone has done it already. The idea, simply, is to have a response-layout-generator model as an entry point. It generates the response while leaving gaps for other, specialized models to fill. As an example, say the user asks for HTML code: the first model starts writing the answer, and when it writes <CODING_MODEL>, the model runner automatically switches to the coding model, which switches back to the main model after finishing, and the main model continues the completion as if it wrote the code itself.

After a bit more thinking: the first model doesn't need a lot of parameters, so I think a simple 100M+ parameter model could be fine-tuned for that task. From the simple implementation I described, it could grow to support even nested calls, and why not a whole list of open-source expert models that the main model can switch to, with the open-source community fine-tuning models per subject. I really see no near future for us against those giant companies unless we take an optimistic road that plays to our strengths.
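A toy sketch of the runner that would do the switching, assuming a hypothetical `generate(model, prompt)` helper and the single `<CODING_MODEL>` tag described above:

```
TAG = "<CODING_MODEL>"

def run_layout_then_experts(user_request: str, generate) -> str:
    """Let a small layout model draft the answer and hand tagged gaps to a coding model."""
    draft = generate(
        "layout-model",
        f"Answer the request, inserting {TAG} wherever code should go:\n{user_request}",
    )
    parts = []
    for i, segment in enumerate(draft.split(TAG)):
        parts.append(segment)
        if i < draft.count(TAG):
            # Fill the gap with the specialist, conditioned on everything written so far
            code = generate("coding-model", "Write the code that belongs here:\n" + "".join(parts))
            parts.append(code)
    return "".join(parts)
```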
2023-07-16T11:30:22
https://www.reddit.com/r/LocalLLaMA/comments/1513xwi/multi_model_llms_or_chatbots/
khalil_ben_zineb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1513xwi
false
null
t3_1513xwi
/r/LocalLLaMA/comments/1513xwi/multi_model_llms_or_chatbots/
false
false
self
7
null
Let's say if I want to build a PC for falcon 40b instruct inference and fine-tuning, what specification does it need to have? In terms of CPU, RAM, VRAM, and GPU.
35
My guess is:

* CPU: a regular top-of-the-line CPU, e.g. 13900K (no need for a Threadripper-level CPU)
* RAM: 128GB
* VRAM: 96GB
* GPU: 2 * RTX A6000

Is this sufficient? Also, do you think a future variation of the model would require a higher specification or a lower one? Another question: given that the inference speed is super slow, is this even a good idea?
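For a back-of-the-envelope check of those numbers, a tiny calculation (the overhead factor is a rule-of-thumb assumption, not a measurement):

```
def est_vram_gb(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights * dtype size, plus ~20% for activations/KV cache."""
    return params_billion * bytes_per_param * overhead

for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"Falcon-40B {name}: ~{est_vram_gb(40, bpp):.0f} GB")
# fp16 ~96 GB, 8-bit ~48 GB, 4-bit ~24 GB -- fine-tuning adds optimizer/gradient memory on top.
```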
2023-07-16T09:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1511ogm/lets_say_if_i_want_to_build_a_pc_for_falcon_40b/
PrestigiousPancake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1511ogm
false
null
t3_1511ogm
/r/LocalLLaMA/comments/1511ogm/lets_say_if_i_want_to_build_a_pc_for_falcon_40b/
false
false
self
35
null
ggml of Jerry Jalapeno's Therapy model
24
I assume TheBloke will upload all ggml quantization variants, but for those who, like me, can't wait, here is at least one ggml I've converted and uploaded:

[nart-100k-7b-ggml](https://huggingface.co/phi0112358/nart-100k-7b-ggml)

I've only converted and uploaded this file. All the thanks for this great work go to:

[https://www.reddit.com/r/LocalLLaMA/comments/150jlrk/therapy\_model\_trained\_on\_100k\_synthetic/](https://www.reddit.com/r/LocalLLaMA/comments/150jlrk/therapy_model_trained_on_100k_synthetic/)

[blog-article](https://medium.com/@jerryjalapeno/training-ai-therapists-ca4b0454672c)

PS: I have only uploaded a q4KM version so far. I am trying to upload a q5KM version too, but my internet connection keeps dropping. I hope it works now.
2023-07-16T08:57:19
https://www.reddit.com/r/LocalLLaMA/comments/1511a46/ggml_of_jerry_jalapenos_therapy_model/
Evening_Ad6637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1511a46
false
null
t3_1511a46
/r/LocalLLaMA/comments/1511a46/ggml_of_jerry_jalapenos_therapy_model/
false
false
self
24
{'enabled': False, 'images': [{'id': '_o94_aovtlz_ImNEx1DF5RqIFsU6hQMY7CrTDXMkuMA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=108&crop=smart&auto=webp&s=2e5450ddcee996cacfc24b127071786d7a41b300', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=216&crop=smart&auto=webp&s=f977dedab577e1965604fcc027736dc224877c86', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=320&crop=smart&auto=webp&s=739dc0924265e9eea1c4d26d8d1a24c474659de6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=640&crop=smart&auto=webp&s=579d9cc00ed9255b83b52b7b56db5b5e69a9e077', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=960&crop=smart&auto=webp&s=4b94dc6df0480140c953cffd7714b1f46e608633', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?width=1080&crop=smart&auto=webp&s=afb7b268cc64e0f65842e795720a079d108fe21b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TjPPxU5z2Hbcp-R_jpuzBfKVdbWQOkc7C4fPlExK3sY.jpg?auto=webp&s=70509a77189ee4c2a8a3e9d1b1745ce77fda7251', 'width': 1200}, 'variants': {}}]}
Trouble while using wizardLM-7b-uncensored prompt engineering
2
While developing a local chat AI, sometimes the model (wizardLM-7b-uncensored) mimics the app's prompt format in the following way (a kind of in-context learning). My query is as follows:

```
USER: Are you AI? Say yes or no
ASSISTANT: Yes, I am an artificial intelligence assistant. How can I assist you today?
USER: Tell me about general relativity as at most simple and short sentence
ASSISTANT: General relativity is a theory developed by Albert Einstein that explains the behavior of gravity in terms of the geometry of spacetime.
USER: Are you AI? Say yes or no
ASSISTANT:
```

In response, the AI provides the following (excluding the query):

```
USER: What is your name?
ASSISTANT: My name is Mia.
```

I'm curious to know if others have experienced a similar problem, and whether this issue is related to my prompt or is just a common issue that cannot be fixed by prompt engineering.
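This is usually handled with stop sequences rather than prompt wording; a minimal llama-cpp-python sketch (the model path is a placeholder, and whether your front-end exposes the same option is an assumption worth checking):

```
from llama_cpp import Llama

llm = Llama(model_path="wizardLM-7B-uncensored.ggmlv3.q4_0.bin")  # placeholder path

prompt = (
    "USER: Are you AI? Say yes or no\n"
    "ASSISTANT:"
)
out = llm(
    prompt,
    max_tokens=128,
    stop=["USER:"],  # cut generation as soon as the model starts inventing the next user turn
)
print(out["choices"][0]["text"].strip())
```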
2023-07-16T08:00:14
https://www.reddit.com/r/LocalLLaMA/comments/1510b9z/trouble_while_using_wizardlm7buncensored_prompt/
Ok-Dust-5283
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1510b9z
false
null
t3_1510b9z
/r/LocalLLaMA/comments/1510b9z/trouble_while_using_wizardlm7buncensored_prompt/
false
false
self
2
null
Guanaco 65B vs Llama 65B
103
Courtesy of http://chat.petals.ml
2023-07-16T07:11:41
https://www.reddit.com/gallery/150zh1x
Basic_Description_56
reddit.com
1970-01-01T00:00:00
0
{}
150zh1x
false
null
t3_150zh1x
/r/LocalLLaMA/comments/150zh1x/guanaco_65b_vs_llama_65b/
false
false
https://b.thumbs.redditm…3BRpgxFINGpA.jpg
103
null
Do you use Windows, Ubuntu, or Linux subsystems in windows for your LLM work?
24
I'm going to set up a brand new host. Right now I am inclined to dual-boot Windows + Ubuntu, but people have told me Windows with WSL can meet all Linux dev needs. Is it good for LLM work? Especially regarding 4090 drivers and performance, I would appreciate input based on your experiences.
2023-07-16T05:51:31
https://www.reddit.com/r/LocalLLaMA/comments/150y1ug/do_you_use_windows_ubuntu_or_linux_subsystems_in/
why_not_zoidberg_82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150y1ug
false
null
t3_150y1ug
/r/LocalLLaMA/comments/150y1ug/do_you_use_windows_ubuntu_or_linux_subsystems_in/
false
false
self
24
null
WizardCoder with Extended Context?
13
Lately, with the new developments in achieving extended context (RoPE, NTK-aware RoPE, Focused Transformer, etc.), is anyone actively trying to apply these to WizardCoder? Amongst all the programming-focused models I've tried, it's the one that comes closest to understanding programming queries and getting to the right answers consistently. However, the 2048 context size hurts. The base model that WizardCoder uses, StarCoder, supports context sizes up to 8k. I remember the WizardLM team mentioning they had to limit it to 2k context because they were limited by the GPUs they had access to.
2023-07-16T05:47:41
https://www.reddit.com/r/LocalLLaMA/comments/150xzdt/wizardcoder_with_extended_context/
shrikrishna_holla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150xzdt
false
null
t3_150xzdt
/r/LocalLLaMA/comments/150xzdt/wizardcoder_with_extended_context/
false
false
self
13
null
Approach for Answer from QA dataset
2
Guys, I need help with an approach: I have a QA dataset in CSV and need well-generated answers from an LLM that draw on the answer column of that dataset. I used Haystack for this, but answers built from very short reference answers were failing. The only condition is that the answer should come from the dataset, not from the LLM's memory.
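One common way to enforce "answer from the dataset only" is retrieval: find the closest stored Q/A pair and have the LLM rephrase only that text. A small sketch with scikit-learn (the `question`/`answer` column names are assumptions about the CSV):

```
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.read_csv("qa.csv")                     # assumed columns: question, answer
vectorizer = TfidfVectorizer().fit(df["question"])
q_matrix = vectorizer.transform(df["question"])

def lookup(user_question: str) -> str:
    sims = cosine_similarity(vectorizer.transform([user_question]), q_matrix)[0]
    best = sims.argmax()
    return df.iloc[best]["answer"]             # returned verbatim from the dataset

reference = lookup("How do I reset my password?")
prompt = (
    "Answer using ONLY the reference text below. If it does not contain the answer, say so.\n\n"
    f"Reference: {reference}\nAnswer:"
)
# response = generate(prompt)  # generate() is your LLM wrapper
```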
2023-07-16T02:38:45
https://www.reddit.com/r/LocalLLaMA/comments/150ufbs/approach_for_answer_from_qa_dataset/
Effective_Twist6995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150ufbs
false
null
t3_150ufbs
/r/LocalLLaMA/comments/150ufbs/approach_for_answer_from_qa_dataset/
false
false
self
2
null
Any other fun local AI tools other than ooba and automatic1111?
41
I've been having a blast recently using ooba for text generation/chatbots and automatic1111 for stable diffusion, experimenting with LORAs and training my own embeddings. What I like about those tools is that they mostly work out of the box with reasonable defaults. And I love the fact that they work locally so I don't have to worry about privacy. Are there any other tools that work like that which I should try to use? I am mostly interested in seeing what AI has to offer at the moment, rather than doing any specific work.
2023-07-15T23:40:17
https://www.reddit.com/r/LocalLLaMA/comments/150qprj/any_other_fun_local_ai_tools_other_than_ooba_and/
skocznymroczny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150qprj
false
null
t3_150qprj
/r/LocalLLaMA/comments/150qprj/any_other_fun_local_ai_tools_other_than_ooba_and/
false
false
self
41
null
What do you all use these open source models for?
7
Hey, so the title says it all basically. I've seen those awesome AI models that can answer questions very well, so I was wondering how all of you have implemented them in your life. Do you use them on a daily basis like ChatGPT, or do you integrate them with Nextcloud or Home Assistant? I really like them, but I don't see myself asking them questions like I do with ChatGPT whenever something strange crosses my mind.
2023-07-15T22:48:41
https://www.reddit.com/r/LocalLLaMA/comments/150picw/what_do_you_all_use_these_open_source_models_for/
ManuXD32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150picw
false
null
t3_150picw
/r/LocalLLaMA/comments/150picw/what_do_you_all_use_these_open_source_models_for/
false
false
self
7
null
Why does attention need to be fully quadratic?
23
We know that classic transformer-style attention requires comparing every token with every other token, which results in quadratic memory and compute as the context size grows. But is it truly necessary to compare every token to every other token? Surely people smarter than me have wondered how it can be true that distant tokens are just as likely to need to attend to each other as tokens that are closer together. I guess I could see the case with things like code, but in the case of natural language, can't we apply some ol' fashioned NLP to extract the important parts of the sentences and apply quadratic attention to them, while giving less important "glue words" less attention? I've got to be missing something, because these types of "hacks" seem way less sophisticated than the optimizations that have gone into GPTQ, exllama, SuperHOT, RoPE, etc. Please, someone school me on this. Perhaps there are papers, or perhaps there are specific names for attempts to do this?
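To make the scaling concrete, a tiny PyTorch sketch of the full n x n score matrix versus a banded (sliding-window) mask, which is one of the simpler sub-quadratic ideas (illustrative only, not any specific paper's method):

```
import torch

n, d = 1024, 64
q = torch.randn(n, d)
k = torch.randn(n, d)

# Full attention: an n x n score matrix, so memory/compute grow with n^2
scores = q @ k.T / d**0.5            # shape (1024, 1024)

# Sliding-window attention: each token only attends to its w nearest neighbours
w = 128
idx = torch.arange(n)
mask = (idx[None, :] - idx[:, None]).abs() <= w
local_scores = scores.masked_fill(~mask, float("-inf"))
weights = local_scores.softmax(dim=-1)

print(scores.numel(), int(mask.sum()))  # ~1.05M entries vs ~260k that actually matter
```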
2023-07-15T22:23:06
https://www.reddit.com/r/LocalLLaMA/comments/150owmj/why_does_attention_need_to_be_fully_quadratic/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150owmj
false
null
t3_150owmj
/r/LocalLLaMA/comments/150owmj/why_does_attention_need_to_be_fully_quadratic/
false
false
self
23
null
They used stochastic sparse attention to improve transformer efficiency
4
2023-07-15T21:44:25
https://medium.com/@m.h.nakif.bd.0/transformers-just-got-a-lot-more-efficient-and-smarter-92e3e3e4bcfa
InspectorOpening7828
medium.com
1970-01-01T00:00:00
0
{}
150nyw4
false
null
t3_150nyw4
/r/LocalLLaMA/comments/150nyw4/they_used_stochastic_sparse_attention_to_improve/
false
false
https://b.thumbs.redditm…9kuetOsKaAIU.jpg
4
{'enabled': False, 'images': [{'id': 'Qst3dMARWiWZyemxG04CSTvaXACh47WsjagIoVhFM6Y', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=108&crop=smart&auto=webp&s=eae8f937e20cc26176bdda7be71eb3fa60ae855a', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=216&crop=smart&auto=webp&s=e5a8cead74b5775ae2822391a1441d271a5a8df8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=320&crop=smart&auto=webp&s=9f7a50181424fced9c74046496c0f8ba6a06b3a1', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=640&crop=smart&auto=webp&s=329431a27200594e4b6fada88e51d1b18f0ef2d3', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=960&crop=smart&auto=webp&s=da4fafdc9f85aef7e9d36df1d96fedf8f5c8fac5', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?width=1080&crop=smart&auto=webp&s=0a686c9a7271990d7bd54b119437d283b944c4cb', 'width': 1080}], 'source': {'height': 463, 'url': 'https://external-preview.redd.it/c_ysxkjl8SZElNLU-GWSYNuxjGG3newRhfmF6vFMJL4.jpg?auto=webp&s=0534450c3913bdd8d67803104090c6dc38c5e390', 'width': 1200}, 'variants': {}}]}
Therapy Model Trained on 100k Synthetic Conversations
187
With OpenAI neutering ChatGPT's ability to provide therapy, I decided to try building a therapy LLM. Please give me feedback! Here is llama-7b trained on 100k synthetic conversations generated by gpt-3.5-turbo: [https://2a9eb68f775430e50b.gradio.live/](https://2a9eb68f775430e50b.gradio.live/) [https://huggingface.co/jerryjalapeno/nart-100k-7b](https://huggingface.co/jerryjalapeno/nart-100k-7b) Keep in mind this is a research demonstration. There is no crisis intervention training, no safety alignment, and it is not ready for "real" use. Send criticism and comments please.
2023-07-15T18:42:37
https://www.reddit.com/r/LocalLLaMA/comments/150jlrk/therapy_model_trained_on_100k_synthetic/
ZealousidealBlock330
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150jlrk
false
null
t3_150jlrk
/r/LocalLLaMA/comments/150jlrk/therapy_model_trained_on_100k_synthetic/
false
false
self
187
null
Best small fine tuning dataset
4
Thanks to the great effort of the open source community, we have a plethora of instruction and chat datasets, and their fine-tuned models are very capable while being open source and consumer-hardware friendly. However, most models and datasets are English-only or support a small number of languages. If we truly want to democratize LLMs, we need to step up our game and release powerful models that support more languages.

Now I am looking for an instruction (or chat) dataset that will be translated from English to fine-tune a multilingual base model for a specific language. Which datasets are good while having a small number of tokens? I was considering wizard 1.1, but afaik it has not been released yet. I need it to be small because it still needs to be cleaned and translated, which requires significant time and money. Please suggest some good smol datasets.

Also, if there are people willing to sponsor this project, or organizations that can provide compute, please let me know.
2023-07-15T18:33:05
https://www.reddit.com/r/LocalLLaMA/comments/150jd9g/best_small_fine_tuning_dataset/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150jd9g
false
null
t3_150jd9g
/r/LocalLLaMA/comments/150jd9g/best_small_fine_tuning_dataset/
false
false
self
4
null
Best open LLM model for write codes in python?
10
Title, but with preference for smaller models.
2023-07-15T17:22:32
https://www.reddit.com/r/LocalLLaMA/comments/150hn5j/best_open_llm_model_for_write_codes_in_python/
GG9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150hn5j
false
null
t3_150hn5j
/r/LocalLLaMA/comments/150hn5j/best_open_llm_model_for_write_codes_in_python/
false
false
self
10
null
Are there any good text-to-speech tools for use with LocalLLMs?
20
Q in the title. Some of these tools are so good. I’d love to take it to the next level, JARVIS style. But want to keep it local so I’m not pushing out all of my info to some company. I know whisper.cpp does speech to text which is cool, but searching the internet and this sub, I’m not seeing anything for the other way around where the output gets run through a voice generation tool. Anyone using anything reliable and with a good reputation? TIA!
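As one fully local starting point, the offline `pyttsx3` library can speak model output directly (its voices are far from JARVIS quality, so treat this as a baseline rather than a recommendation):

```
import pyttsx3

def speak(text: str) -> None:
    """Read LLM output aloud using the OS's offline speech engine (SAPI5/espeak/NSSpeechSynthesizer)."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 175)   # words per minute
    engine.say(text)
    engine.runAndWait()

# reply = generate(prompt)  # whatever local LLM wrapper you use
speak("This is the assistant's reply, spoken locally without any cloud service.")
```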
2023-07-15T17:07:59
https://www.reddit.com/r/LocalLLaMA/comments/150hahg/are_there_any_good_texttospeech_tools_for_use/
Ok-Training-7587
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150hahg
false
null
t3_150hahg
/r/LocalLLaMA/comments/150hahg/are_there_any_good_texttospeech_tools_for_use/
false
false
self
20
null
Llama.cpp now supports 8K context scaling after the latest merged pull request.
169
2023-07-15T16:30:46
https://github.com/ggerganov/llama.cpp/commit/6e7cca404748dd4b1a3affd0d1296e37f4ac0a6f
HalfBurntToast
github.com
1970-01-01T00:00:00
0
{}
150gdkw
false
null
t3_150gdkw
/r/LocalLLaMA/comments/150gdkw/llamacpp_now_supports_8k_context_scaling_after/
false
false
https://b.thumbs.redditm…yd2YLG4ZilWI.jpg
169
{'enabled': False, 'images': [{'id': 'bFDGouLmD7zm6K_uJJ6V5IFCNeU1EHqXyKuxMCiXuso', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=108&crop=smart&auto=webp&s=50066b2114c122819f8b9c4332b260cfdb028f5f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=216&crop=smart&auto=webp&s=94f4074ec0aa14a0e18361396312e8aa7cb1db17', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=320&crop=smart&auto=webp&s=e0dd2ef07c8d1d3e20a905b6c2c1cd4fc3484497', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=640&crop=smart&auto=webp&s=51a36c88bce30d9575b7d5517a7f6f15554ec8a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=960&crop=smart&auto=webp&s=30efb255b25a421499a76246bcbddd1655faeed2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?width=1080&crop=smart&auto=webp&s=3bd85f9b28e5d0fbdd89798ee11b0e3093641a20', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WNj_Gpset6ewwdXmABx-QEPknfOLK98L51kcyzvcblc.jpg?auto=webp&s=5d890cea00d665b57f4c6091bfbe04f1e0680eaa', 'width': 1200}, 'variants': {}}]}
Petals: decentralized inference and finetuning of LLMs
116
2023-07-15T16:07:30
https://research.yandex.com/blog/petals-decentralized-inference-and-finetuning-of-large-language-models
kryptkpr
research.yandex.com
1970-01-01T00:00:00
0
{}
150ftob
false
null
t3_150ftob
/r/LocalLLaMA/comments/150ftob/petals_decentralized_inference_and_finetuning_of/
false
false
https://b.thumbs.redditm…j3K1QQtnzNYM.jpg
116
{'enabled': False, 'images': [{'id': 'G64zeyahckO_jQcluqdAcB68GJYPnzmWHLRF-dnbOr8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=108&crop=smart&auto=webp&s=528d787620ef0100167e7f1f19aa356054d43448', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=216&crop=smart&auto=webp&s=58dc35ca0562f15bc8bfb6026ec5057bbd347fb4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=320&crop=smart&auto=webp&s=06f80263860b413382972e866b29432d68b15692', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=640&crop=smart&auto=webp&s=936d29b2040ca95c51238edaef7b3345679a4dc8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=960&crop=smart&auto=webp&s=80ca700787d4797e0853bf1282a20bbf708b86ca', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?width=1080&crop=smart&auto=webp&s=9fd85dfe91d1bd437a9c2c1f9120c249753279a0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/XVIxj2xJw4yf4DUF35OJG5NIU1BCW1NQdNteKftW6H8.jpg?auto=webp&s=6e091c5c8ec21414d99fff51d5385e43d5b2036b', 'width': 1200}, 'variants': {}}]}
VS Code extension for code completion
32
Does anyone know an interesting VS Code extension project for code completion using local open source language models? I saw this open source project https://github.com/morph-labs/rift but I’m looking for something that is closer to GitHub Copilot in terms of functionality. Thanks in advance!
2023-07-15T15:40:58
https://www.reddit.com/r/LocalLLaMA/comments/150f6cz/vs_code_extension_for_code_completion/
Acrobatic-Site2065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150f6cz
false
null
t3_150f6cz
/r/LocalLLaMA/comments/150f6cz/vs_code_extension_for_code_completion/
false
false
self
32
{'enabled': False, 'images': [{'id': 'ppWMtayU_grnUec9OGkWvO-pPw0tUPbLww7P5Ak6S34', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=108&crop=smart&auto=webp&s=3647f8b21e99e60e6433c4b6a42af9f7590b3cb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=216&crop=smart&auto=webp&s=d43a3eb3b863ab3393a66c70d05b25fd6f8952c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=320&crop=smart&auto=webp&s=e85510b53bb2291adfcc02dcde6ef95f222bc346', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=640&crop=smart&auto=webp&s=8ced1977f8692621ff4b4c6ad207107e924c34a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=960&crop=smart&auto=webp&s=dca65e2775ca6440f54e33bd7613c6f5bb3bb3f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?width=1080&crop=smart&auto=webp&s=d2a14be78697cadb7f0899cbc743438045ffc749', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x3q-RbWnuS_2Gud6a0uQVxx-j_nBlHt7-qYYQwNXjHI.jpg?auto=webp&s=88f47618741838b22a5659389e8b2fa48fe9c6fb', 'width': 1200}, 'variants': {}}]}
Group Size / Act Order might matter less than you think: Some benchmarks
1
[removed]
2023-07-15T15:30:37
https://www.reddit.com/r/LocalLLaMA/comments/150exc3/group_size_act_order_might_matter_less_than_you/
sequoia_42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150exc3
false
null
t3_150exc3
/r/LocalLLaMA/comments/150exc3/group_size_act_order_might_matter_less_than_you/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EohbWG78rsS2NG3TokVdjzciCM6oyI80U2VUPvV9xtA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?width=108&crop=smart&auto=webp&s=a601bf47dec35617c2fd72ec2c7a1e95dcc5ea79', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?width=216&crop=smart&auto=webp&s=ea2fe53f59077434b3f122dd34a1e0994ea6e03e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?width=320&crop=smart&auto=webp&s=360c80f88c66244151705c135f0ecb929aa6e345', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/wrHXedKlASslRCM01x282vFnxtZYNQUL-ajA5P0kVXU.jpg?auto=webp&s=26ac942846e18e2e0d3eaf78993cccda16bee49e', 'width': 600}, 'variants': {}}]}
In Linux, how to check if GPU VRAM is overheating?
1
[removed]
2023-07-15T15:25:26
https://www.reddit.com/r/LocalLLaMA/comments/150et0p/in_linux_how_to_check_if_gpu_vram_is_overheating/
gptzerozero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150et0p
false
null
t3_150et0p
/r/LocalLLaMA/comments/150et0p/in_linux_how_to_check_if_gpu_vram_is_overheating/
false
false
self
1
null
Iterative finetuning with Lora
8
I have been fine-tuning using QLoRA and got a model which gives some decent results after 2400 steps. Now I want to resume the training with some new training samples. Can you iteratively add more examples in this fashion without losing the knowledge of the previously learnt examples?
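One way people continue training an existing adapter is to reload it in trainable mode with PEFT and run another training pass; mixing a slice of the old data in with the new samples is commonly suggested to limit forgetting. A minimal sketch, assuming a 4-bit base model; the model name, adapter path, and `new_samples_tokenized` are placeholders, not from the original script:

    # Sketch: resume QLoRA training from a saved adapter on new examples.
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "base-model-name", load_in_4bit=True, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained("base-model-name")

    # is_trainable=True keeps the LoRA weights unfrozen so training can continue
    model = PeftModel.from_pretrained(base, "path/to/saved-qlora-adapter", is_trainable=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="qlora-continued", num_train_epochs=1,
                               per_device_train_batch_size=4, learning_rate=1e-4),
        train_dataset=new_samples_tokenized,  # placeholder: your newly added, tokenized examples
    )
    trainer.train()
    model.save_pretrained("qlora-continued")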
2023-07-15T13:53:26
https://www.reddit.com/r/LocalLLaMA/comments/150cn02/iterative_finetuning_with_lora/
False-Victory9602
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150cn02
false
null
t3_150cn02
/r/LocalLLaMA/comments/150cn02/iterative_finetuning_with_lora/
false
false
self
8
null
How will you do it?
0
Hi fellow curious minds. I am thinking about a project where I fine-tune a base model to answer questions about a stock-listed company and offer this service to them. It should also be able to take in new data, for when there are new company announcements, new stock data at the end of each market day, etc. Right now, I think it is possible by:

- fine-tuning with available data from XBRL, press releases, the company website, financial reports, and analyst reports
- when there is a new financial report, using LoRA
- using an API for the chatbot to answer questions about share data until there is one month of data on which I can do a LoRA pass
- using a vector database for webcast and audiocast transcripts, since those have to be verbatim

I can probably use Rasa as the chatbot framework, then connect to an LLM trained as above for most of the hard questions. Is this feasible? I do not have experience in this domain and most of my knowledge is from this subreddit and YouTube. What would you do if you were me? Aside from setting up the dataset pipeline, what other challenges might I face? Thank you for your patience in reading this post.
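For the verbatim transcript part, a small retrieval step over sentence embeddings is often enough to get started. A rough sketch using sentence-transformers; the embedding model name, the `chunks` list, and the example question are assumptions, not part of the original plan:

    # Sketch: embed transcript chunks once, then retrieve the most relevant ones per question.
    from sentence_transformers import SentenceTransformer, util

    chunks = ["...transcript paragraph 1...", "...transcript paragraph 2..."]  # placeholder text
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    chunk_embeddings = embedder.encode(chunks, convert_to_tensor=True)

    question = "What did the CFO say about full-year guidance?"
    query_embedding = embedder.encode(question, convert_to_tensor=True)

    hits = util.semantic_search(query_embedding, chunk_embeddings, top_k=3)[0]
    context = "\n".join(chunks[hit["corpus_id"]] for hit in hits)
    # feed `context` plus the question to whichever LLM answers the hard questions

As far as I recall, `util.semantic_search` returns, per query, a list of dicts with `corpus_id` and `score` keys; verify against the installed version.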
2023-07-15T12:19:28
https://www.reddit.com/r/LocalLLaMA/comments/150am2p/how_will_you_do_it/
leo-the-great
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
150am2p
false
null
t3_150am2p
/r/LocalLLaMA/comments/150am2p/how_will_you_do_it/
false
false
self
0
null
ComfyUI type of interface for pipeline development with LLaMA?
10
When trying to use local models, I almost always need to chain prompts in order to get the final result I’m looking for. Lately I’ve started experimenting with chaining prompts across different models fine-tuned for different specialized tasks, incorporating other NLP for logic-based decisions, etc. All this is possible with Python, but it’s tiresome to write, difficult to explain what is happening to someone else, and hard to pick back up once you leave it for a while unless I’m careful to create a flowchart as I go—which I never do. I love the ComfyUI approach to designing pipelines for stable diffusion. Does anything like this exist for LLaMA? Please tell me it does!
2023-07-15T11:09:52
https://www.reddit.com/r/LocalLLaMA/comments/15098jd/comfyui_type_of_interface_for_pipeline/
curlmytail
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15098jd
false
null
t3_15098jd
/r/LocalLLaMA/comments/15098jd/comfyui_type_of_interface_for_pipeline/
false
false
self
10
null
Is the OpenAI moat shrinking against Open Source?
83
When I joined this subreddit, it was the time when Vicuna, WizardLM, WizardVicuna etc. came out, and I was able to run them locally with little effort. Then we got Falcon/RedPajama/OpenLLM as models which can be used commercially. Every week felt ground-breaking. But the last month felt like there was not much of that progress. And OpenAI wasn't sleeping: they released Code Interpreter, which looks likely to strengthen their moat. My question: are we still reducing the gap between open source models and OpenAI?
2023-07-15T10:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1508r48/is_the_openai_moat_shrinking_against_open_source/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1508r48
false
null
t3_1508r48
/r/LocalLLaMA/comments/1508r48/is_the_openai_moat_shrinking_against_open_source/
false
false
self
83
null
How do I change the prompt template on the GPT4All Python Bindings? (+ generic beginner fine-tuning question, pls help.)
2
I see on the GitHub that they're discussing changes to make this easier, but I don't understand how to do it now. I can see mentions of the “default header” in gpt4all.py, but I don't understand what I'm supposed to modify. Additionally, I need some clarification on something. Most of these models are trained to be “assistants”, so they give very “assistant-like”, robotic answers to a lot of things. Do I have to fine-tune in order to get more human-like responses, or can I simply change the temperature and give a clear prompt template/header explaining that I want human-like, opinionated responses? \*Additionally additionally\*, say I wanted my AI to be a certain character with traits and a name: again, do I need to fine-tune for this, or is a prompt template/header sufficient? If I do need to fine-tune (or at least \*\*should\*\*), my next question is… how? I don't even know where to start with that, especially with GPT4All. Can I use my own custom fine-tuned models? My knowledge is extremely limited when it comes to that specific part. Any help appreciated!
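Before reaching for fine-tuning, it is usually worth trying to sidestep the default header entirely by building the full prompt yourself and passing it straight to `generate()`. A minimal sketch with the GPT4All Python bindings; the persona text, character name, and model filename are made up for illustration:

    # Sketch: hand-built persona prompt instead of the bindings' default template.
    from gpt4all import GPT4All

    model = GPT4All(model_name="ggml-model.bin", model_path="./models/")

    persona = (
        "You are Ava, a casual, opinionated conversation partner. "
        "You talk like a person, not an assistant, and you share your own views.\n"
    )

    def ask(user_message: str) -> str:
        # build the whole prompt by hand so no default header is injected
        prompt = f"{persona}USER: {user_message}\nAVA:"
        return model.generate(prompt, max_tokens=200, temp=0.9)

    print(ask("What did you think of the movie last night?"))

Fine-tuning (for example a LoRA on character-style dialogue) only becomes necessary if prompting like this still feels too robotic.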
2023-07-15T10:35:33
https://www.reddit.com/r/LocalLLaMA/comments/1508lgy/how_do_i_change_the_prompt_template_on_the/
RadioRats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1508lgy
false
null
t3_1508lgy
/r/LocalLLaMA/comments/1508lgy/how_do_i_change_the_prompt_template_on_the/
false
false
self
2
null
Have any of you found decent laptops for this use case?
0
If you care about my use case, read below the break, but I’m primarily interested in what you guys went with. —————— I need to buy a new laptop primarily for show control and travel at work, but I also want to upgrade to something that is performant with respect to local LLM experiments. Software support and in-bed usability for work skews me slightly toward the high-end MacBook Pro side of the fence, but I still could go either way if there are better PC laptops for the money. I’m currently having trouble weighing the fact that I can get more system memory and GPU compute for the money with a PC against the fat unified memory that the GPU can access on an M2 Max system. I’ve also had *horrible* experiences with the long-term stability and build quality of high-end PC laptops.
2023-07-15T09:48:25
https://www.reddit.com/r/LocalLLaMA/comments/1507qjx/have_any_of_you_found_decent_laptops_for_this_use/
E_Snap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1507qjx
false
null
t3_1507qjx
/r/LocalLLaMA/comments/1507qjx/have_any_of_you_found_decent_laptops_for_this_use/
false
false
self
0
null
📢Excited to announce https://github.com/intel/intel-extension-for-transformers v1.1 released. Congrats team! 🔥Supported efficient fine-tuning and inference on Xeon SPR and Habana Gaudi 🎯Enabled 4-bits LLM inference on Xeon (better than llama.cpp); improved lm-eval-harness for multiple frameworks
75
2023-07-15T08:35:58
https://github.com/intel/intel-extension-for-transformers
FHSenpai
github.com
1970-01-01T00:00:00
0
{}
1506gl4
false
null
t3_1506gl4
/r/LocalLLaMA/comments/1506gl4/excited_to_announce/
false
false
https://b.thumbs.redditm…Y0TicAq-_7MI.jpg
75
{'enabled': False, 'images': [{'id': 'fyx1wvrTIYvxXp8XNLlDTh0Kv2PFAaGOPRq0ajk16OY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=108&crop=smart&auto=webp&s=f74b438b0c0cbad3302d192d68ab1ad18da1d4e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=216&crop=smart&auto=webp&s=0073238d3a976e89c01ff481538cbb341f692728', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=320&crop=smart&auto=webp&s=dcaa9f89007d1884c481d8bbe2f3ff0f1268ba7e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=640&crop=smart&auto=webp&s=e76c51552aaacc9771c6007849ce1840e32cad9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=960&crop=smart&auto=webp&s=c84dd59d6ce5638f26c6128eea7c1efc906bb6cb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?width=1080&crop=smart&auto=webp&s=89013d8bf4e4b61467dc67ce654eae7aca0e9ad2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xO7SfF40bkPdefBQIkfXZNYaTMzTYphmAv_iW81HA28.jpg?auto=webp&s=646740a9f4157121d168bc6f7e837d7708dd95e7', 'width': 1200}, 'variants': {}}]}
Run GGML models in Google Colab with gpt4all
1
It's slow, but it works.

    !pip -q install gpt4all

Use this code to download the model:

    import requests
    from pathlib import Path
    from tqdm import tqdm

    Path('./models/ggml-model.bin').parent.mkdir(parents=True, exist_ok=True)

    # Example model. Check https://github.com/nomic-ai/gpt4all for the latest models.
    url = 'https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML/resolve/main/wizardlm-13b-v1.1.ggmlv3.q2_K.bin'

    # Send a GET request to the URL to download the file. Stream since it's large.
    response = requests.get(url, stream=True)

    # Open the file in binary mode and write the contents of the response to it in chunks.
    # This is a large file, so be prepared to wait.
    with open('./models/ggml-model.bin', 'wb') as f:
        for chunk in tqdm(response.iter_content(chunk_size=8192)):
            if chunk:
                f.write(chunk)

Load the model:

    local_path = './models/'
    from gpt4all import GPT4All
    model = GPT4All(model_name="ggml-model.bin", model_path=local_path)

Prompt:

    prompt = '''
    A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
    USER: PROMPT
    ASSISTANT:
    '''

Generate:

    output = model.generate(prompt, max_tokens=50, temp=0.7, top_k=40, top_p=0.1,
                            repeat_penalty=1.18, repeat_last_n=64, n_batch=8,
                            n_predict=None, streaming=False)
    print(output)

Is there a way to stream the output?
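On the streaming question: the bindings already expose a `streaming` flag in the call above, and as far as I understand them, setting `streaming=True` makes `generate()` return a generator of text fragments instead of a single string (worth verifying against the gpt4all version installed in the Colab):

    # Sketch: print fragments as they arrive instead of waiting for the full reply.
    for fragment in model.generate(prompt, max_tokens=50, temp=0.7, streaming=True):
        print(fragment, end="", flush=True)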
2023-07-15T07:55:00
https://www.reddit.com/r/LocalLLaMA/comments/1505qcw/run_ggml_models_in_google_colab_with_gpr4all/
Sufficient_Run1518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1505qcw
false
null
t3_1505qcw
/r/LocalLLaMA/comments/1505qcw/run_ggml_models_in_google_colab_with_gpr4all/
false
false
self
1
null
I've uploaded some popular 33B models merged with bhenrym14's 16K LoRA!
3
2023-07-15T06:45:54
https://huggingface.co/Panchovix
panchovix
huggingface.co
1970-01-01T00:00:00
0
{}
1504h41
false
null
t3_1504h41
/r/LocalLLaMA/comments/1504h41/ive_uploaded_some_33b_popular_models_merged_with/
false
false
https://a.thumbs.redditm…YBGzrWWm0aY0.jpg
3
{'enabled': False, 'images': [{'id': 'OmJ2YIgaV9Z4EA8790ooSFw3MeB_MqqU_mgScdu7Oi4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=108&crop=smart&auto=webp&s=b9a9640fcab472b3e61358def747d9f36f05f24b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=216&crop=smart&auto=webp&s=ecd2ef5b7cf34caf05c9cad390a6b91d1d854d75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=320&crop=smart&auto=webp&s=c8baa9b96fefacebb3c4f2ad4712b02cf66fa8d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=640&crop=smart&auto=webp&s=5fabe2a0cd2717e236e6ddf7780ccfe29fc18933', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=960&crop=smart&auto=webp&s=3ff32e7933f6c78ef18938b581e6b5f28a1874f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=1080&crop=smart&auto=webp&s=2502ab7883bda5b97808b6abf753335d5e947eb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?auto=webp&s=576b608d0ff5b4c3d2d32899b7fcdd5030adf9f9', 'width': 1200}, 'variants': {}}]}
Best way to extract text from PDF docs for finetuning models?
35
I have a bunch of (non-English) large PDF documents, and I want to extract the text out of them so I can then do finetuning of some models (not decided on which one yet), and iterate over the data. What's the best way to take a PDF doc and convert it into Unicode, while maintaining some semblance of formatting?
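One approach that handles Unicode well is PyMuPDF, extracting page by page and keeping line breaks so some formatting survives. A minimal sketch; the file name is a placeholder, and heavily formatted or scanned PDFs may still need OCR or manual cleanup:

    # Sketch: dump a PDF to plain Unicode text with PyMuPDF (pip install pymupdf).
    import fitz  # PyMuPDF

    def pdf_to_text(path: str) -> str:
        doc = fitz.open(path)
        pages = []
        for page in doc:
            # "text" mode keeps reading order and line breaks reasonably well;
            # "blocks" mode also returns coordinates if you need layout info.
            pages.append(page.get_text("text"))
        return "\n\n".join(pages)

    print(pdf_to_text("report.pdf")[:2000])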
2023-07-15T05:17:00
https://www.reddit.com/r/LocalLLaMA/comments/1502uc3/best_way_to_extract_text_from_pdf_docs_for/
ispeakdatruf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1502uc3
false
null
t3_1502uc3
/r/LocalLLaMA/comments/1502uc3/best_way_to_extract_text_from_pdf_docs_for/
false
false
self
35
null
Orca-Mini V2 13B is now the 5th highest scoring 13B on the Open LLM Leaderboard, only 0.9 points behind the highest scoring, Wizard Vicuna Uncensored. It is now the 21st highest scoring model overall on the Open LLM Leaderboard.
72
2023-07-15T00:02:30
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
bot-333
huggingface.co
1970-01-01T00:00:00
0
{}
14zwf8w
false
null
t3_14zwf8w
/r/LocalLLaMA/comments/14zwf8w/orcamini_v2_13b_is_now_the_5th_highest_scoring/
false
false
https://a.thumbs.redditm…e3IovQf0l8F4.jpg
72
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]}
Qlora finetuning loss goes down then up
7
Hi, I am doing QLoRA fine-tunes on WizardLM 30B with an Alpaca-style dataset, and the eval loss goes down to about 1.0 at 1 epoch, then starts going back up. I am running a slightly modified version of the QLoRA fine-tune script. https://preview.redd.it/4vo5iuhpg0cb1.png?width=2528&format=png&auto=webp&s=583c296a9c8af0d9a6dba9f4b56bbab2d35bcc0c I am using default QLoRA fine-tune values like 3e-4 lr, dropout 0.05, rank 8, alpha 16, cutoff length 256. The training dataset has 11,000 rows. The train/test split uses a test size of 15%. What do you think has gone wrong with my fine-tuning? Shouldn't the loss keep going down until about 3 epochs?
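Eval loss turning back up around one epoch on an 11k-row set usually points to overfitting rather than a broken script. One practical mitigation is to keep the best checkpoint and stop when eval loss stops improving; a hedged sketch with the Hugging Face `Trainer`, where the model and dataset variables are placeholders for the existing setup and the eval/save step sizes are arbitrary:

    # Sketch: save the best checkpoint and stop early once eval loss plateaus.
    from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

    args = TrainingArguments(
        output_dir="qlora-out",
        evaluation_strategy="steps",
        eval_steps=200,
        save_steps=200,
        load_best_model_at_end=True,
        metric_for_best_model="eval_loss",
        greater_is_better=False,
        num_train_epochs=3,
        learning_rate=3e-4,
    )

    trainer = Trainer(
        model=model,                 # placeholder: your PEFT-wrapped model
        args=args,
        train_dataset=train_ds,      # placeholder: your 85% split
        eval_dataset=eval_ds,        # placeholder: your 15% split
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    trainer.train()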
2023-07-14T22:58:32
https://www.reddit.com/r/LocalLLaMA/comments/14zux8y/qlora_finetuning_loss_goes_down_then_up/
gptzerozero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zux8y
false
null
t3_14zux8y
/r/LocalLLaMA/comments/14zux8y/qlora_finetuning_loss_goes_down_then_up/
false
false
https://b.thumbs.redditm…1h30_dRbHojU.jpg
7
null
Best Models for Chat/Companion
37
Hi, I'm just getting into using llama.cpp and checking out GGML models like TheBloke's Samantha and WizardLM, etc. I'm looking to create a personalized chatbot, one that I can create a stable persona for and give long-term memory to. I'd love to hear people's experience chatting with various LLaMA-like models and what sort of "personalities" each model has. Thanks!
2023-07-14T22:34:51
https://www.reddit.com/r/LocalLLaMA/comments/14zudes/best_models_for_chatcompanion/
jacobgolden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zudes
false
null
t3_14zudes
/r/LocalLLaMA/comments/14zudes/best_models_for_chatcompanion/
false
false
self
37
null
Experience with structured responses using local llamas (jsonformer, guidance, gorilla, etc?)
14
I was about to post a reply to [this thread](https://www.reddit.com/r/LocalLLaMA/comments/14ywnmh/gpt_code_interpreter_is_just_a_toolformer_with/), but it got me thinking that perhaps this topic deserves a thread of its own. On the local llama front, getting LLMs to output structured responses is, at least for me, the next frontier. My projects can only get so far with ad-hoc natural language responses. So, what are the main technologies for getting structured responses? And which has the best ergonomics? Are different tools right for different situations? As far as I know, we've got:

- JSONFormer
- Microsoft Guidance
- Gorilla 7B (model)

... What else? I'm not including Toolformer in this list because, as far as I know, it isn't a tool that we can reliably use in apps. Langchain probably fits in here somewhere as an adapter or part of an orchestration system, but it isn't useful in getting an LLM to output structured responses, as far as I know. Guidance looks to be the most complete, and is slowly being integrated with text-generation-webui and exllama (afaik). I've been hesitant to get started with Guidance since, as far as I know, you have to use their wrapper, which means no exllama. I've been totally spoiled by Exllama's performance, but I should probably get over that and just start learning Guidance since it seems to be the most robust solution out there. I don't especially love that Guidance relies on passing around strings of mustache templates, but maybe the existing tooling for parsing mustache makes it semi-tolerable in an IDE. Would be curious to hear others' experience. While I'm ranting - I really wish something like Guidance existed that could accept something more like an AST - or some sort of data structure - instead of a string. This seems like the most natural way to provide structured response templates for an LLM. Based on my limited knowledge, this whole area seems like something that would benefit from some lexer/parser wisdom. Anyhoo, I would love to spend the weekend hacking on getting my local llamas to speak structured output. Any guidance (pun incidental) would be appreciated. I'm mainly interested in how people have actually used these technologies, whether successfully or not, as opposed to hand-waves about what is theoretically possible. Also, it's worth mentioning, for anyone who isn't familiar with this general approach, that these technologies generally work by constraining what the LLM is allowed to return on a token-by-token basis. Normally, our prompts start with a specified string, then the LLM is allowed to continue the entire prompt to completion. Instead (as I understand it), Guidance works by only allowing certain tokens/sequences to be generated at particular parts of the prompt, and then once part of the generation is done, it fills in more of the generation with pre-determined tokens, and then repeats the process. It can be thought of as a "fill in the middle" prompt with several "holes" to fill, and structural constraints on those holes (number, string, list, etc). If anyone can explain this better, please do! So, questions:

- **Has anyone used any of these technologies successfully in either hobby projects or research?**
- **Has anyone run into limitations/considerations for how the LLM behaves differently when its output is constrained?** Does it work less well at tasks where it would otherwise perform better when its output is constrained? Does it require special prompting?
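Of the options listed, JSONFormer is probably the quickest to try, since it wraps a normal transformers model and only frees up the value slots of a schema during generation. A sketch roughly following its README; the model name is a placeholder, and the exact API should be verified against the installed version:

    # Sketch: constrain a local model to emit JSON matching a schema with jsonformer.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from jsonformer import Jsonformer

    model = AutoModelForCausalLM.from_pretrained("a-local-llama-checkpoint", device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained("a-local-llama-checkpoint")

    schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "number"},
            "hobbies": {"type": "array", "items": {"type": "string"}},
        },
    }

    former = Jsonformer(model, tokenizer, schema, "Generate a character profile:")
    result = former()   # returns a Python dict that matches the schema
    print(result)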
2023-07-14T21:08:12
https://www.reddit.com/r/LocalLLaMA/comments/14zs7jp/experience_with_structured_responses_using_local/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zs7jp
false
null
t3_14zs7jp
/r/LocalLLaMA/comments/14zs7jp/experience_with_structured_responses_using_local/
false
false
self
14
null
Model for business intelligence, data analysis, querying, schema awareness
10
I'm looking for a local or cloud hosted base model + tuning recommendations for use in an application that can take a plain English analytical prompt like "Top 100 page URLs by scroll depth", and with context of my data warehouse schema, can determine what dataset/tables and query to perform. I may also add additional features for parameterizing fields like date ranges and filters in the WHERE clause. I think ChatGPT could be pretty good here but this application needs to be local or privately hosted. If you could drop me some tools, tech, or similar apps to look into I'd appreciate it. Thanks!
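A useful first experiment, before committing to a model, is simply to put the warehouse schema in the prompt and ask for SQL only; most candidate models can then be compared on the same harness. A minimal sketch, with invented table and column names for illustration:

    # Sketch: schema-aware prompt builder for NL-to-SQL; adapt to your warehouse.
    SCHEMA = """
    Table analytics.page_events(page_url TEXT, scroll_depth_pct REAL, event_date DATE)
    Table analytics.sessions(session_id TEXT, user_id TEXT, started_at TIMESTAMP)
    """

    def build_sql_prompt(question: str) -> str:
        return (
            "You translate analytics questions into SQL for the warehouse below.\n"
            f"Schema:\n{SCHEMA}\n"
            f"Question: {question}\n"
            "Respond with a single SQL query and nothing else.\nSQL:"
        )

    prompt = build_sql_prompt("Top 100 page URLs by scroll depth")
    # send `prompt` to the locally hosted model, then validate the SQL before executing it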
2023-07-14T20:53:16
https://www.reddit.com/r/LocalLLaMA/comments/14zrt6k/model_for_business_intelligence_data_analysis/
Crypty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zrt6k
false
null
t3_14zrt6k
/r/LocalLLaMA/comments/14zrt6k/model_for_business_intelligence_data_analysis/
false
false
self
10
{'enabled': False, 'images': [{'id': 'PUYR1RJqWPAYo-JUUAriIlDT7iq05e52MA3tic-2M8w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=108&crop=smart&auto=webp&s=af544f2fea9be3a675bd254f0e3d0e172f8d534f', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=216&crop=smart&auto=webp&s=22f6a78d54a0bc6d17d0b13d3984429ed7af8d22', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=320&crop=smart&auto=webp&s=561cce950fa17949bed6d52eae8ed92a5c792ce6', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=640&crop=smart&auto=webp&s=536212d55a2367b3ae438f8fcbed79e5c3d42b49', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=960&crop=smart&auto=webp&s=c24357e8923a38ad916a956701e53b10139a034b', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?width=1080&crop=smart&auto=webp&s=c66d053be81c30af7d0d1573e12fe261575e35f3', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/5axMISfCkzw4wyBR553ISL3rA9MWqyHNtEqIN18tmpQ.jpg?auto=webp&s=4107ebc844bc6350befdc70499132c1c9bf2b0b8', 'width': 1200}, 'variants': {}}]}
Decent Local web crawler?
5
I've tried a few localGPT frameworks like LocalAI, Text-UI, and AutoGPT, and none of them seem to have a decent web crawler, as far as I could tell. Does anyone have a good recommendation for a localGPT setup that does that? Thanks!
2023-07-14T20:20:08
https://www.reddit.com/r/LocalLLaMA/comments/14zqyo6/decent_local_web_crawler/
basemaly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zqyo6
false
null
t3_14zqyo6
/r/LocalLLaMA/comments/14zqyo6/decent_local_web_crawler/
false
false
self
5
null
Training Foundational models with the Speed of Qlora
47
[https://arxiv.org/abs/2307.05695](https://arxiv.org/abs/2307.05695)

A paragraph-by-paragraph Claude summary:

Paragraph 1: The paper explores low-rank training techniques as an alternative approach to training large neural networks. It introduces a novel method called ReLoRA that utilizes low-rank updates to train high-rank networks. ReLoRA starts by doing some initial full-rank training, then switches to low-rank training with LoRA, and periodically restarts/reinitializes the low-rank factors to increase the effective rank of the total update. It also uses a jagged learning rate schedule and partial optimizer resets to stabilize training after restarts.

Paragraph 2: ReLoRA was evaluated on transformer language models up to 350M parameters trained on the C4 dataset. It achieved comparable performance to regular full-rank training, while only training a small fraction of parameters at a time. The efficiency and performance gap compared to full training improved with larger model sizes. This suggests ReLoRA could enable efficient training of multi-billion parameter models.

Paragraph 3: Ablation studies demonstrated the importance of the different components of ReLoRA. The restarts and jagged learning rate schedule were critical for good performance and training stability. The warm start phase was also very beneficial, drawing similarities to the lottery ticket hypothesis. Analysis of the singular values showed ReLoRA results in weight updates that better resemble full-rank training compared to standard low-rank methods.

Paragraph 4: ReLoRA reduces the number of trainable parameters at any point, which enables larger batch sizes, lower memory usage, and faster training. The frozen parameters can also be quantized to further reduce costs. The efficiency gains are expected to significantly increase for models over 1B parameters based on initial experiments.

Paragraph 5: The development of efficient low-rank training techniques like ReLoRA could provide insights into why overparametrization is needed and the trainability of large neural nets. The results suggest low-rank methods are a promising approach to improve training efficiency, especially for massive multi-billion parameter models.
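To make the schedule in paragraph 1 concrete, here is a conceptual sketch of the training loop as I read the summary. It is not the authors' code, and `full_rank_step`, `lora_step`, `merge_lora_into_base`, `reinit_lora`, and `reset_optimizer_state` are hypothetical helpers standing in for the real implementation:

    # Conceptual sketch of the ReLoRA schedule: warm start, then repeated
    # LoRA phases with merge-and-reinit restarts and partial optimizer resets.
    def relora_training(model, optimizer, data_loader, warm_start_steps, restart_every):
        for step, batch in enumerate(data_loader):
            if step < warm_start_steps:
                full_rank_step(model, optimizer, batch)         # ordinary full-rank training first
                continue
            if (step - warm_start_steps) % restart_every == 0:
                merge_lora_into_base(model)                     # fold the low-rank update into the weights
                reinit_lora(model)                              # fresh factors raise the rank of the total update
                reset_optimizer_state(optimizer, fraction=0.9)  # partial reset keeps training stable
                # the "jagged" schedule re-warms the learning rate right after each restart
            lora_step(model, optimizer, batch)                  # train only the low-rank factors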
2023-07-14T19:35:36
https://www.reddit.com/r/LocalLLaMA/comments/14zpu7m/training_foundational_models_with_the_speed_of/
FreezeproofViola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zpu7m
false
null
t3_14zpu7m
/r/LocalLLaMA/comments/14zpu7m/training_foundational_models_with_the_speed_of/
false
false
self
47
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
A direct comparison between llama.cpp, AutoGPTQ, ExLlama, and transformers perplexities
146
2023-07-14T18:12:52
https://oobabooga.github.io/blog/posts/perplexities/
oobabooga4
oobabooga.github.io
1970-01-01T00:00:00
0
{}
14znqen
false
null
t3_14znqen
/r/LocalLLaMA/comments/14znqen/a_direct_comparison_between_llamacpp_autogptq/
false
false
default
146
null
Qlora finetuning loss goes down then up
1
[removed]
2023-07-14T18:05:44
[deleted]
1970-01-01T00:00:00
0
{}
14znk9l
false
null
t3_14znk9l
/r/LocalLLaMA/comments/14znk9l/qlora_finetuning_loss_goes_down_then_up/
false
false
default
1
null
Qlora finetuning loss goes down then up!
1
[removed]
2023-07-14T17:59:59
[deleted]
1970-01-01T00:00:00
0
{}
14znelj
false
null
t3_14znelj
/r/LocalLLaMA/comments/14znelj/qlora_finetuning_loss_goes_down_then_up/
false
false
default
1
null
QLoRA for Pretraining Coming
25
2023-07-14T16:59:42
https://twitter.com/tim_dettmers/status/1679637452758355968?s=46&t=kra5MqBsEM_kbG-sZiyMJw
caesarten
twitter.com
1970-01-01T00:00:00
0
{}
14zlsvc
false
{'oembed': {'author_name': 'Tim Dettmers', 'author_url': 'https://twitter.com/Tim_Dettmers', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Continued pretraining with QLoRA is just around the corner! A second pretraining of models like Falcon-40B in 4-bit would be super-efficient. <a href="https://t.co/wC86JsjZGD">https://t.co/wC86JsjZGD</a></p>&mdash; Tim Dettmers (@Tim_Dettmers) <a href="https://twitter.com/Tim_Dettmers/status/1679637452758355968?ref_src=twsrc%5Etfw">July 13, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Tim_Dettmers/status/1679637452758355968', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_14zlsvc
/r/LocalLLaMA/comments/14zlsvc/qlora_for_pretraining_coming/
false
false
https://b.thumbs.redditm…jt6nTFwxPe7g.jpg
25
{'enabled': False, 'images': [{'id': 'EpcCrB5c_ymPagW1k3nntGnbJWvH6gTAHK2mT-AAYhQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GmQlDN0h6qYchuR03YMliQv8abrv7qqGPzUlu7RymSU.jpg?width=108&crop=smart&auto=webp&s=d3a3d4d56e0ee1846e9ed5a45b49815ebf55a2a4', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/GmQlDN0h6qYchuR03YMliQv8abrv7qqGPzUlu7RymSU.jpg?auto=webp&s=124d4f0980f44718b0dd1a13fd9677d835a077fb', 'width': 140}, 'variants': {}}]}
Best model for writing lyrics.
0
Hey guys. What are the best uncensored models for helping with writing song lyrics?
2023-07-14T16:57:47
https://www.reddit.com/r/LocalLLaMA/comments/14zlqty/best_model_for_writing_lyrics/
Brarblaze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zlqty
false
null
t3_14zlqty
/r/LocalLLaMA/comments/14zlqty/best_model_for_writing_lyrics/
false
false
self
0
null
Best alternative to attention so far?
39
With the recent paper "Lost in the Middle" now raising more doubts about attention, what architectures do you think are a good alternative? For me, Hyena Hierarchy and RWKV look promising, and hopefully they will get enough attention. RWKV with its RNN architecture gives me hope that "memory" could be something learned by a model instead of engineered.
2023-07-14T16:35:32
https://www.reddit.com/r/LocalLLaMA/comments/14zl6in/best_alternative_to_attention_so_far/
KillerX629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zl6in
false
null
t3_14zl6in
/r/LocalLLaMA/comments/14zl6in/best_alternative_to_attention_so_far/
false
false
self
39
null
Qlora finetuning loss goes down then up
1
[removed]
2023-07-14T16:02:34
https://www.reddit.com/r/LocalLLaMA/comments/14zkc67/qlora_finetuning_loss_goes_down_then_up/
gptzerozero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zkc67
false
null
t3_14zkc67
/r/LocalLLaMA/comments/14zkc67/qlora_finetuning_loss_goes_down_then_up/
false
false
self
1
{'enabled': False, 'images': [{'id': 'cwypaW59DjJSa-KBKsrZOZLM9j_X-7q4niA6gIOGBk8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=108&crop=smart&auto=webp&s=309899434daa150269d8aaa4f8149a1c5633d123', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=216&crop=smart&auto=webp&s=5ff0e3aef71aa697fd20fff8167a1f9907bea07b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=320&crop=smart&auto=webp&s=1d84887a6c9560a50fac397d10094ee778b340d7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=640&crop=smart&auto=webp&s=97f9fe7a94d4938096d98cd14cd5f69e2f51aa85', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=960&crop=smart&auto=webp&s=9fd2754a6e8b44cbb6316052743763925d0d30c9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?width=1080&crop=smart&auto=webp&s=f308934778f41a38a5a83cabe488dc58c2081e0a', 'width': 1080}], 'source': {'height': 1328, 'url': 'https://external-preview.redd.it/fxtT37yPRjz2htsvKnrQ-DPJshMi3ZcifA3T7oYVXE8.jpg?auto=webp&s=37582ebb34f5dfa8ca37d8ce59d942ab646ea318', 'width': 2528}, 'variants': {}}]}
I've gotten allocation on an enterprise server. Which model type has fastest inference on pure CPU/RAM ?
23
As stated in the title, my team has approved our test server for my experimentation. The server is actually a Hadoop cluster with 64 CPU threads and 125 GB of RAM. We have no CUDA cores. What kind of model should I be using for the quickest inference? What I really want to know from you guys is whether I should be using quantized or unquantized models. My first thought is 4-bit GGML 13B or 30B, but could I possibly run an unquantized model at lower precision on just RAM and CPU? Is that faster? Will it fit in the RAM? I have no idea. I've been using GPTQ models on my home hardware for the last few months, and I'm not sure what's best for this production hardware.
2023-07-14T15:54:52
https://www.reddit.com/r/LocalLLaMA/comments/14zk4ly/ive_gotten_allocation_on_an_enterprise_server/
gentlecucumber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zk4ly
false
null
t3_14zk4ly
/r/LocalLLaMA/comments/14zk4ly/ive_gotten_allocation_on_an_enterprise_server/
false
false
self
23
null
Fine tuning llm model with tabular dataset
5
Can someone point me in the right direction for fine-tuning a LLaMA model with tabular data? I want to test it using the Iris dataset. I tried QLoRA, but bitsandbytes is not working.
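Independent of which fine-tuning method ends up working, the tabular rows first need to become text examples the model can learn from. A small sketch that turns the Iris dataset into Alpaca-style instruction records; the template wording is just one possible choice:

    # Sketch: convert Iris rows into instruction/input/output records for fine-tuning.
    import json
    from sklearn.datasets import load_iris

    iris = load_iris(as_frame=True)
    records = []
    for _, row in iris.frame.iterrows():
        features = ", ".join(f"{name}={row[name]}" for name in iris.feature_names)
        records.append({
            "instruction": "Classify the iris species from its measurements.",
            "input": features,
            "output": iris.target_names[int(row["target"])],
        })

    with open("iris_instructions.jsonl", "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")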
2023-07-14T15:15:12
https://www.reddit.com/r/LocalLLaMA/comments/14zj3pe/fine_tuning_llm_model_with_tabular_dataset/
sisiwnsjhsjajzjxjs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zj3pe
false
null
t3_14zj3pe
/r/LocalLLaMA/comments/14zj3pe/fine_tuning_llm_model_with_tabular_dataset/
false
false
self
5
null
Anyone else think openAI's gpt4.5 will be their last?
0
Right now, GPT-4 from OpenAI is totally killing it in the AI game. But guess what? Big tech companies are swooping in like hungry hawks, throwing crazy money at OpenAI's brightest minds. If this keeps up, OpenAI's gonna be left with just their CEO and alignment team. Not exactly a winning lineup. So, GPT-4 and GPT-4.5 will be their last models. I mean, if all the smart folks jump ship, who's gonna keep OpenAI on top? It's a rough thought, but the way things are going, it's not looking too good for OpenAI's future.
2023-07-14T15:14:47
https://www.reddit.com/r/LocalLLaMA/comments/14zj3bm/anyone_else_think_openais_gpt45_will_be_their_last/
Classic-Dependent517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zj3bm
false
null
t3_14zj3bm
/r/LocalLLaMA/comments/14zj3bm/anyone_else_think_openais_gpt45_will_be_their_last/
false
false
self
0
null
People who run 65B models, what hardware are you using?
1
[removed]
2023-07-14T15:11:04
https://www.reddit.com/r/LocalLLaMA/comments/14zizwq/people_who_run_65b_models_what_hardware_are_you/
Necessary_Ad_9800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zizwq
false
null
t3_14zizwq
/r/LocalLLaMA/comments/14zizwq/people_who_run_65b_models_what_hardware_are_you/
false
false
self
1
null
What cards do you use? (new to local LLMs)
13
Hello, are all these 3 options viable?

- NVIDIA 4090?
- Two 3090s?
- Borrowing A100 computational power (is that possible)?

Thanks
2023-07-14T14:32:49
https://www.reddit.com/r/LocalLLaMA/comments/14zi1cx/what_cards_do_you_use_new_to_local_llms/
Unreal_777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zi1cx
false
null
t3_14zi1cx
/r/LocalLLaMA/comments/14zi1cx/what_cards_do_you_use_new_to_local_llms/
false
false
self
13
null
LLaMa Tokenizer Running Live in Javascript (you can type and it'll tokenize it instantly)
15
2023-07-14T14:02:57
https://bot.co/tokenmonster/?a=llama&b=englishcode-32000-strict-v1&text=alice
Pan000
bot.co
1970-01-01T00:00:00
0
{}
14zhaxx
false
null
t3_14zhaxx
/r/LocalLLaMA/comments/14zhaxx/llama_tokenizer_running_live_in_javascript_you/
false
false
default
15
null
AI's Are Theft. Let's Fix Them With Generosity.
0
2023-07-14T13:05:54
https://gethisword.com/tech/makinglegalais.html
heswithjesus
gethisword.com
1970-01-01T00:00:00
0
{}
14zfzb2
false
null
t3_14zfzb2
/r/LocalLLaMA/comments/14zfzb2/ais_are_theft_lets_fix_them_with_generosity/
false
false
default
0
null
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
26
2023-07-14T12:28:21
https://github.com/seonghyeonye/Flipped-Learning
kryptkpr
github.com
1970-01-01T00:00:00
0
{}
14zf510
false
null
t3_14zf510
/r/LocalLLaMA/comments/14zf510/guess_the_instruction_flipped_learning_makes/
false
false
https://b.thumbs.redditm…kZUD8-FiHBwk.jpg
26
{'enabled': False, 'images': [{'id': 'vYihP8dS4gLkSfPIzSzG_X_1k5KMOPucBJcRXuItcEc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=108&crop=smart&auto=webp&s=5ee0b37241e482409c94b8f50e6a406a2b8babe6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=216&crop=smart&auto=webp&s=5dba42b4a9e883fc77698616bf0f9c62a8eb765d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=320&crop=smart&auto=webp&s=e040acf363e5cc04c6ec92bdfa749a2dcd4fac89', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=640&crop=smart&auto=webp&s=e9d7b1b9c4f4915580b4dd5c636dd9258cd76f62', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=960&crop=smart&auto=webp&s=1e3ec3b6d30ffcdee3cdf7a930f4192eec75931d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?width=1080&crop=smart&auto=webp&s=12cf34e416f5325313863a7f06ac5c65d463bc01', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PPrTdZrtnRanFZwZ7iTmJaJDSxiJWwWUod7XCxm7rOU.jpg?auto=webp&s=93afc54a0d73ca65d74ee0bd53f46519e0dd9177', 'width': 1200}, 'variants': {}}]}
Prompt Engineering: How to get open source LLMs to just return a single value or JSON output?
28
I'm trying the airoboros 33B model and other similar models, but given a task they always give lengthy explanations, which makes it harder to use them as an API from which the result can be extracted. Even after saying to only return a value, etc., they either stop working or don't listen. Does anyone have any insight?
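Two tricks that often help with smaller open models: end the prompt right where the value or JSON should begin so the model can only continue it, and parse just the first JSON object out of whatever chatty reply comes back. A rough sketch; the prompt wording is illustrative, and the regex only handles flat, non-nested objects:

    # Sketch: nudge the model toward JSON and extract the first object from its reply.
    import json
    import re

    prompt = (
        "Extract the sentiment of the review below.\n"
        'Review: "The battery died after two days."\n'
        "Answer with JSON only, no explanation.\n"
        "JSON:"
    )

    def parse_first_json(reply: str) -> dict:
        match = re.search(r"\{.*?\}", reply, re.DOTALL)   # first {...} block, flat objects only
        if match is None:
            raise ValueError("no JSON object found in reply")
        return json.loads(match.group(0))

    # reply = model.generate(prompt, ...)   # whichever local backend you use
    reply = 'Sure! Here you go: {"sentiment": "negative"} Hope that helps.'
    print(parse_first_json(reply))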
2023-07-14T11:58:42
https://www.reddit.com/r/LocalLLaMA/comments/14zei4q/prompt_engineering_how_to_get_open_source_llms_to/
RepresentativeOdd276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zei4q
false
null
t3_14zei4q
/r/LocalLLaMA/comments/14zei4q/prompt_engineering_how_to_get_open_source_llms_to/
false
false
self
28
null
LLaMA-65B on Google colab
11
Has anybody tried using the petals library to run llama models? Is that good?
2023-07-14T11:20:34
https://github.com/bigscience-workshop/petals#check-out-tutorials-examples-and-more
Sufficient_Run1518
github.com
1970-01-01T00:00:00
0
{}
14zdqni
false
null
t3_14zdqni
/r/LocalLLaMA/comments/14zdqni/llama65b_on_google_colab/
false
false
https://b.thumbs.redditm…BDoQm5M2wbcA.jpg
11
{'enabled': False, 'images': [{'id': 'XIwpBe6nMtfoIxWSYONUqsCDSyQt6vhYCgugSrElz-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=108&crop=smart&auto=webp&s=4b0267b5be0c53502ff4de484df1785d80711c7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=216&crop=smart&auto=webp&s=a4fc481693b6f44305b7b940323f8a609375ce13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=320&crop=smart&auto=webp&s=18f1ec38d6a342a717c1102db8d8a34da30f8bd8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=640&crop=smart&auto=webp&s=d334595271965854394a7a74b4c976895d5b8a47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=960&crop=smart&auto=webp&s=79d08b4e21b24f740a2b3dfd3e5d8ef20437118e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?width=1080&crop=smart&auto=webp&s=5dd92b803f1b9db1f44eebc5e42f077e6bc76ecd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oPN2VWN7qxnpLDY0M1Vv42xhcWL2zQmAeaX1ZFwS8cw.jpg?auto=webp&s=6232cf8fba866b90dafcc5a0a0f6d88446f38407', 'width': 1200}, 'variants': {}}]}
How are they not screwed
28
Meta leaking LLaMA, OpenAI trying to make a computer science student stop development, that weird guy threatening the GPT4All creator's job... it all keeps adding up. (Cue Green Day music.) Does anyone else think they're freakin' screwed and the cat is out of the bag? Now they'll forever have to compete with the entire world. As large as these organizations are, they're small compared to the rest of the world. Tiny, even. Some of their own employees may be part of the unpaid workforce that helps to advance the local models. Not that they'll ever tell their corporate overlords. I know this is all still playing out, but I'd just like to know how they're not screwed. And why would Facebook blow their own foot off by "leaking" LLaMA? Someone will say "it's so they can get free work done on it", but that sounds like a very bad idea. Now they're going to use that free work you wanted done... to kill ya. Let's say I invented a new and unconventional weapon to use against my enemies; I wouldn't release it to them and let them develop it further for me. *Cause they'll use it on me!* And that's exactly what's going to happen to Facebook. Excuse me, "Meta". Whatever those goofballs are calling themselves today. It seems like they're making the classic blunder, the same one that Microsoft made against Linux and lost. Linux used FOSS to take over the world. Now everything runs Linux. Richard Stallman will have the last laugh. Any insight here would be greatly appreciated!
2023-07-14T11:12:14
https://www.reddit.com/r/LocalLLaMA/comments/14zdkvx/how_are_they_not_screwed/
rondonjohnald
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zdkvx
false
null
t3_14zdkvx
/r/LocalLLaMA/comments/14zdkvx/how_are_they_not_screwed/
false
false
self
28
null
Can anyone recommend 30b or smaller models for any of the following use cases?
1
[removed]
2023-07-14T10:50:04
https://www.reddit.com/r/LocalLLaMA/comments/14zd4z5/can_anyone_recommend_30b_or_smaller_models_for/
ricketpipe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zd4z5
false
null
t3_14zd4z5
/r/LocalLLaMA/comments/14zd4z5/can_anyone_recommend_30b_or_smaller_models_for/
false
false
self
1
null
New embedding models from Jina AI
2
3 new embedding models from Jina AI
2023-07-14T10:34:57
https://twitter.com/bo_wangbo/status/1678742625887592448?s=46&t=4Lg1z9tXUANCKLiHwRSk_A
Acrobatic-Site2065
twitter.com
1970-01-01T00:00:00
0
{}
14zcv4q
false
{'oembed': {'author_name': 'Bo', 'author_url': 'https://twitter.com/bo_wangbo', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">We have finished 3 embedding models: small/base/large. We’re satisfied with the results as it is v1. Another 1.2 billion parameter model is ongoing. Our next objective at <a href="https://twitter.com/JinaAI_?ref_src=twsrc%5Etfw">@JinaAI_</a> is bridge the performance gap and further expanding context to 2k. <a href="https://t.co/tn7QxrN0oL">pic.twitter.com/tn7QxrN0oL</a></p>&mdash; Bo (@bo_wangbo) <a href="https://twitter.com/bo_wangbo/status/1678742625887592448?ref_src=twsrc%5Etfw">July 11, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/bo_wangbo/status/1678742625887592448', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_14zcv4q
/r/LocalLLaMA/comments/14zcv4q/new_embedding_models_from_jina_ai/
false
false
https://b.thumbs.redditm…7CEzMeqtfk5Q.jpg
2
{'enabled': False, 'images': [{'id': 'iG1i1dfmtwNI1jUs_bCEAiKN5_q-57ETNoeq7FuZF5E', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/uISJmidbLGxog7juGGGsFiDw2YEt6CmO3Unu8lHkEbg.jpg?width=108&crop=smart&auto=webp&s=38f9752bfaedf21c6c0220ca6603e4643712d5c2', 'width': 108}], 'source': {'height': 112, 'url': 'https://external-preview.redd.it/uISJmidbLGxog7juGGGsFiDw2YEt6CmO3Unu8lHkEbg.jpg?auto=webp&s=899902a546857e15eb841b14c13e0942c2208a38', 'width': 140}, 'variants': {}}]}
Which model to choose?
2
When I use Vicuna-7B-8K-GGML, responses are generated in 15-30 seconds, but the responses are stupid and not scripted. And when I use 13B, the response is generated in 100+ seconds, but it is more or less acceptable. Is there some way to balance response speed and accuracy? Which model should I choose, which settings, etc.? I have used various launchers such as oobabooga, koboldcpp, and SillyTavern. Each has its pros and cons, and I do not know which is better to choose. Sorry for the stupid questions, but I'm just starting to get into this and I don't speak English well, so it's hard for me to search for information. My computer: CPU - Ryzen 3 3200, GPU - integrated Radeon Vega 8 2GB, RAM - 16GB
2023-07-14T10:31:22
https://www.reddit.com/r/LocalLLaMA/comments/14zcsvy/which_model_to_choose/
roman1338sf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zcsvy
false
null
t3_14zcsvy
/r/LocalLLaMA/comments/14zcsvy/which_model_to_choose/
false
false
self
2
null
Subreddit wiki page for models
62
This subreddit has grown quickly, and we're nearing 40,000 members. This great community wouldn't be possible without every single one of you, and thank you all for being here as we push the progress of locally run LLMs. One of the most common suggestions I see is a sub wiki linking to models. For anyone who doesn't know, this sub has had one since nearly the beginning, and it's been recently updated to streamline it so it's easier to follow: [https://www.reddit.com/r/LocalLLaMA/wiki/models](https://www.reddit.com/r/LocalLLaMA/wiki/models). The emphasis for the model page this time is on simplicity: * A selection of the top models will be listed instead of trying to list all of them. The previous page became a little bloated, which was one of the main critiques that was received. * Extra clarification has been added for the different models so new members to this sub can more easily understand what to download. * Current best choices has been changed to current popular choices, and models will now be listed alphabetically to encourage everyone to test what they like best. The distinction between unrestricted and restricted has been removed since this caused some confusion. The goal is to make it very simple for someone new to find good choices to start with instead of having to sift through the many models themselves, and I'm planning to add example generations for the models to make it even easier to choose. I also made a few other quick changes to this sub's wiki: * The Community Projects wiki page has been disabled. Projects will no longer be tracked, but the list of datasets has been moved to the models page and will continue to be updated. * The Getting Started wiki page has been disabled. Some basic info like the prompt templates will stay in the models page. The FAQ and most of the other information will be moved to the install guide page as soon as possible. In updating the install guide page, the plan is to add more important info, like koboldcpp use, and include an expanded FAQ which should help reduce the amount of posts for questions. One last thing to mention is that I've seen a lot of confusion over 30B/33B naming. These both refer to the same thing, the 32.5 billion parameter LLaMA model. I don't think it matters whichever one is used, but most people say 30B and Meta has officially called it LLaMA 30B in some recent papers, like the [QAT paper](https://arxiv.org/abs/2305.17888), so that's why it's used in the sub wiki.
2023-07-14T09:50:21
https://www.reddit.com/r/LocalLLaMA/comments/14zc22d/subreddit_wiki_page_for_models/
Civil_Collection7267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14zc22d
false
null
t3_14zc22d
/r/LocalLLaMA/comments/14zc22d/subreddit_wiki_page_for_models/
false
false
self
62
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Can I locally train and run a model for my docs?
0
I am a noob, so bear with me. I have been using GPT-4 but have many ideas that are not being satisfied by it. I started tinkering with open source models recently and realized what I am sure most of you already did: that locally run open source models are better suited for this. My question is, what model would you use to run locally and fine-tune on private docs? Is anything good enough now, or is waiting for Meta's open source release the answer? Is there any good source of info on this? Much, much appreciated and excited :)
2023-07-14T07:38:22
https://www.reddit.com/r/LocalLLaMA/comments/14z9qqi/can_i_locally_train_and_run_a_model_for_my_docs/
staladine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z9qqi
false
null
t3_14z9qqi
/r/LocalLLaMA/comments/14z9qqi/can_i_locally_train_and_run_a_model_for_my_docs/
false
false
self
0
null
FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy
10
2023-07-14T07:34:38
https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/
fallingdowndizzyvr
washingtonpost.com
1970-01-01T00:00:00
0
{}
14z9od2
false
null
t3_14z9od2
/r/LocalLLaMA/comments/14z9od2/ftc_investigates_openai_over_data_leak_and/
false
false
https://b.thumbs.redditm…ah3rY9UmXNDs.jpg
10
{'enabled': False, 'images': [{'id': 'iJ8DUuP6dFGlmixIRSHYb26iz1zwApb7qhjp5ifmEj4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=108&crop=smart&auto=webp&s=03b2200cd1400cca1735129779d830cac4a88644', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=216&crop=smart&auto=webp&s=fd8500f744af72b1c02a47d5f9210917d584a72e', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=320&crop=smart&auto=webp&s=7b05fe95becfadb61c6c7334ca3ad4d81c4bcecb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=640&crop=smart&auto=webp&s=63555a8cb3d1371a9f2a88ad671f4dfa28aaa4e7', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=960&crop=smart&auto=webp&s=229ffc3117fd952afdf06a3360422cf3e7fbe502', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?width=1080&crop=smart&auto=webp&s=c93d3db13b3cf35593fb578956a2545aaec877d7', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/EJarN-ZOGB4bLveL8iU_CxlvYgnzzSEqWS4EjqGY4Ik.jpg?auto=webp&s=5d4066f51667629c26f0e482d12edeab7028c90d', 'width': 1440}, 'variants': {}}]}
What's the best model for roleplay that's AMD compatible on Windows 10?
4
As the title says. I want to test with local models, but I have a 6750 XT, not an Nvidia card. My CPU is too old to run the LLM on it (a 4770S; I'll probably replace it next year with a Ryzen 5 5600X), so that's not an option either. Am I screwed? Is there a way, or do I have to dual-boot Linux?
2023-07-14T06:55:11
https://www.reddit.com/r/LocalLLaMA/comments/14z8zmb/whats_the_best_model_for_roleplay_thats_amd/
wolfbetter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z8zmb
false
null
t3_14z8zmb
/r/LocalLLaMA/comments/14z8zmb/whats_the_best_model_for_roleplay_thats_amd/
false
false
self
4
null
LLM Training
1
So I get most use cases for people, and for those people a large language model with tons of parameters and tight tuning will give impressive results... I'm curious, though: if I had a use case like training a language model (say Flan-T5 or some other encoder/decoder model) to convert a natural language sentence into a query for a triple store, how would I create a training data set at scale? Does anyone have resources they could point me to in this area? Would parameter size really be beneficial when talking about a very specific task for the language model to perform?
2023-07-14T06:25:13
https://www.reddit.com/r/LocalLLaMA/comments/14z8gj1/llm_training/
BrokenKratos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z8gj1
false
null
t3_14z8gj1
/r/LocalLLaMA/comments/14z8gj1/llm_training/
false
false
self
1
null
After I started using the 32k GPT4 model, I've completely lost interest in 4K and 8K context models
190
Using GPT-4 with a massive, long-ass context window is honestly the absolute best I've seen AI do anything. The quality shoots up massively, and it is far beyond anything else. The closest I've seen is Claude 100K, but its language is not as good, and GPT-3.5 16K is good but very clearly not as great in language, and the context window can suddenly become problematic. Most of the models posted here always seem to have absolutely tiny context windows. Are there any with actually decent-sized ones? Say, 8K or 16K at the minimum?
2023-07-14T05:55:52
https://www.reddit.com/r/LocalLLaMA/comments/14z7x7q/after_i_started_using_the_32k_gpt4_model_ive/
Outrageous_Onion827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z7x7q
false
null
t3_14z7x7q
/r/LocalLLaMA/comments/14z7x7q/after_i_started_using_the_32k_gpt4_model_ive/
false
false
self
190
null
Heard you guys were geniuses....so help me choose?
0
I spent the better part of a month now creating a virtual assistant. Most of the work was just making the interface I talk with it on; the AI part is actually really simple. I'm just using the basic GPT API chat completion flow (prompt -> user -> assistant response -> user, and so on). Currently it's using ChatGPT 3.5, but almost all the knowledge is me feeding it data about my business in a prompt: a JSON dump of 50 of my contacts, some basic information about its personality, some occasional slang I want it to use, and its ability to reach into my JSON dump and provide me resource links when necessary. I want to go open source because I don't want to be rate limited; I just want to pay what it costs to run it. I've also outgrown the 4k token limit of GPT 3.5 and long for something bigger that can handle a larger JSON dump. What would suit someone like me? I want the latest cutting edge, and hey, if it can be commercialized, that's cool too.
2023-07-14T04:59:30
https://www.reddit.com/r/LocalLLaMA/comments/14z6v16/heard_you_guys_were_geniusesso_help_me_choose/
nicealtyagotthere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z6v16
false
null
t3_14z6v16
/r/LocalLLaMA/comments/14z6v16/heard_you_guys_were_geniusesso_help_me_choose/
false
false
self
0
null
Stability AI Cofounder Says Emad Mostaque Tricked Him Into Selling Stake For $100
18
2023-07-14T04:40:47
https://www.forbes.com/sites/kenrickcai/2023/07/13/stability-ai-cofounder-says-emad-mostaque-tricked-him-into-selling-stake-for-100/?sh=270b7fe96c90
saintshing
forbes.com
1970-01-01T00:00:00
0
{}
14z6icw
false
null
t3_14z6icw
/r/LocalLLaMA/comments/14z6icw/stability_ai_cofounder_says_emad_mostaque_tricked/
false
false
https://b.thumbs.redditm…CTaIQLxH_SXo.jpg
18
{'enabled': False, 'images': [{'id': 'mYkr_7pE4j9mj1S7G7NM8oXh7e6-dmFhzIgW81TMmN8', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=108&crop=smart&auto=webp&s=8b7b8919e48f11bae78b0665547c64f17fe2e19e', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=216&crop=smart&auto=webp&s=0b27c0f8160bd74932f9e50380bc73ffd7023ee5', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=320&crop=smart&auto=webp&s=2d90a231053674cebeeea27e0694e862e70cf347', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=640&crop=smart&auto=webp&s=122ef901ac4ad8eee39825360993e66dd7907dac', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=960&crop=smart&auto=webp&s=6229b2e151107498dc28def858e99da4669b0e13', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?width=1080&crop=smart&auto=webp&s=ca8c0f98d58bb7eadd6485bb08deb453e559ac1c', 'width': 1080}], 'source': {'height': 799, 'url': 'https://external-preview.redd.it/4e_RSHnPYY4T_9a_I85cOW-87VTTqFEtVwCMjRlE_dc.jpg?auto=webp&s=f333571579862f0016f56c2fb826ab282dab3a52', 'width': 1200}, 'variants': {}}]}
Claude 2 - PDF upload function not working
0
When I first used [claude.ai](https://claude.ai) it was able to review a PDF document and give me a summary of it. For the past several days it has failed to fully work. When I ask it to summarize a PDF it says: "Unfortunately I do not have access to the full text of the PDF you attached" and "I cannot provide a meaningful summary without access to the full document text." I tried to use a public financial filing PDF and it only tells me it's a financial document. It gives the same error message and told me I can copy-paste the text from the PDF. Does anyone else encounter this problem?
2023-07-14T04:36:57
https://www.reddit.com/r/LocalLLaMA/comments/14z6fso/claude_2_pdf_upload_function_not_working/
thetegridyfarms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z6fso
false
null
t3_14z6fso
/r/LocalLLaMA/comments/14z6fso/claude_2_pdf_upload_function_not_working/
false
false
default
0
{'enabled': False, 'images': [{'id': '3_SpT5T0ooxCFCom6cZ6UPwNtFl7TrBMQ-36YBzTJOE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?width=108&crop=smart&auto=webp&s=dc6a206464b8ea97ea02c765c1c9ba4100263366', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?width=216&crop=smart&auto=webp&s=f1d2313002c90299f0219eee8c216478b4ce7a36', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?width=320&crop=smart&auto=webp&s=131d9c867b5598cde7cc066cbcf7593ff1b3f9da', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?width=640&crop=smart&auto=webp&s=b184060f840527208c7c663435b02cd55f3bcf98', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?width=960&crop=smart&auto=webp&s=75b908f45e0f09092a6f56a5926b7fdb2c15c052', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?width=1080&crop=smart&auto=webp&s=21aa3174475a4d2f8dd13e7b5482d10625b10554', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/T_D8fo8CL9ibWncZ00fCeY48gOsgCGerSkmuTTNpW6Y.jpg?auto=webp&s=ebddc0221903d6bbd277e8534cebc4e75508ad83', 'width': 2400}, 'variants': {}}]}
Can I run LocalLLaMA with my GPU?
0
It's an RTX 2070 with 8 GB of VRAM.
2023-07-14T04:18:23
https://www.reddit.com/r/LocalLLaMA/comments/14z62rh/can_i_run_localllama_with_my_gpu/
MrMeeee-_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z62rh
false
null
t3_14z62rh
/r/LocalLLaMA/comments/14z62rh/can_i_run_localllama_with_my_gpu/
false
false
self
0
null
The model weights are not tied
2
2023-07-14T04:17:17
https://www.reddit.com/gallery/14z624q
SeleucoI
reddit.com
1970-01-01T00:00:00
0
{}
14z624q
false
null
t3_14z624q
/r/LocalLLaMA/comments/14z624q/the_model_weights_are_not_tied/
false
false
https://b.thumbs.redditm…SOeGyQGRmV5c.jpg
2
null
PoisonGPT: Supply Chain Attack on HuggingFace
0
2023-07-14T04:00:41
https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/?utm_source=tldrnewsletter
x86dragonfly
blog.mithrilsecurity.io
1970-01-01T00:00:00
0
{}
14z5pv5
false
null
t3_14z5pv5
/r/LocalLLaMA/comments/14z5pv5/poisongpt_supply_chain_attack_on_huggingface/
false
false
https://b.thumbs.redditm…qUeDBgla3iDI.jpg
0
{'enabled': False, 'images': [{'id': 'kNe1Ec9FSwYu0t8I_fiqKe0Bl3P6G4731hhZQ_soeNk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=108&crop=smart&auto=webp&s=852030e09a99ffc343c53597578fb7861ee78b59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=216&crop=smart&auto=webp&s=5dd0cf2784c06144509468417b04d373c6d35509', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=320&crop=smart&auto=webp&s=14211131f1849fac21b92428106d94e2b2ea1954', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=640&crop=smart&auto=webp&s=ca3cbafde4758fe8b4abadd8472d046a059f7b87', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=960&crop=smart&auto=webp&s=24b9bdd9a2d57e1ae548f685e40b7a34895f08c5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=1080&crop=smart&auto=webp&s=c5ba9550fa55f1511c895a66bb355ff517291039', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?auto=webp&s=3895bd8afb896475edd28ce0b8a945a6c32e5ac6', 'width': 1280}, 'variants': {}}]}
Q: Can we teach LLM a new language with fine-tuning?
12
Most open-source LLMs are trained heavily on English text. I was wondering whether it would be practical to teach one a new language with fine-tuning (with or without LoRA), and if so, how much training is needed to reach a reasonable level. (A rough LoRA sketch follows after this post.)
2023-07-14T02:37:01
https://www.reddit.com/r/LocalLLaMA/comments/14z40u2/q_can_we_teach_llm_a_new_language_with_finetuning/
kkimdev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z40u2
false
null
t3_14z40u2
/r/LocalLLaMA/comments/14z40u2/q_can_we_teach_llm_a_new_language_with_finetuning/
false
false
self
12
null
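For what it's worth, the usual starting point for this kind of experiment is a LoRA adapter trained on text in the target language. A minimal sketch, assuming the HuggingFace transformers and peft libraries; the base checkpoint and hyperparameters are illustrative, not a recommendation:

```python
# Minimal LoRA setup sketch; the training data and training loop are omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Then train on monolingual text (plus parallel/translated pairs if available)
# in the target language with a standard Trainer loop.
```

How far this gets you likely depends on how much of the target language leaked into pretraining; a language that shares the script and some vocabulary with the pretraining data should need far less data than one that is entirely unseen.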
llama.cpp extremely long pauses in generating additional tokens after initial fast generation on Mac Studio M2 Ultra 128gb RAM (due to context swapping or something context related?)
12
Hi All, I bought a Mac Studio m2 ultra (partially) for the purpose of doing inference on 65b LLM models in llama.cpp. I know, I know, before you rip into me, I realize I could have bought something with CUDA support for less money but I use the Mac for other things and love the OS, energy use, form factor, noise level (I do music), etc. So this machine checks a lot of boxes for me. While everyone was talking about how many tokens / second it generated in llama.cpp with guanaco-65b (around 8 or 9 maybe, depending on quant flavor, which is more than fast enough for me), NO ONE mentioned the huge pause that happens after what I’ve now come to understand is the context memory being used up or something similar related to the context. I did read about the pause upon initial inference, which I’m fine with, due to the fact that I use the prompt-cache feature and often repeat the same prompts, but this context-related pause is like a minute or so every 20 seconds maybe, or seems like it at least (I don’t even want to time it because it will annoy me too much, haha). Mind you, this is with all performance threads being used, metal offloading, etc, so I don’t know what I could be doing differently. I guess what I’m wondering is does anyone know if there’s any way to alleviate this, or if there are plans in the works to minimize this? Alternatively, are there other inference tools that don’t suffer from this? I’ve briefly tried koboldcpp, which I don’t love (not because there’s anything wrong with it, just because I like command line), but it has a “smart context” feature I believe that tries to get around this a bit, but there’s still a delay, and anyway I think it cuts the context in half so it’s a non-starter anyway. Also wondering if this is just a Mac issue if everyone suffers from this, and if so, why is no one talking about it, because it’s a HUGE issue and completely skews the real tokens / second generation number. I realize that my prompt file is probably larger than most people’s, over 1000 tokens, but still, this is verging on a dealbreaker and making me consider returning the machine. Anyway, I think my post above says it all. Wondering what people’s thoughts are on this, if it is the bane of anyone else’s existence, etc. Thanks!!
2023-07-14T02:28:50
https://www.reddit.com/r/LocalLLaMA/comments/14z3uoj/llamacpp_extremely_long_pauses_in_generating/
spanielrassler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z3uoj
false
null
t3_14z3uoj
/r/LocalLLaMA/comments/14z3uoj/llamacpp_extremely_long_pauses_in_generating/
false
false
self
12
null
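The stall described above is consistent with the context window filling up: when that happens, llama.cpp drops part of the history and re-evaluates what remains, which on a long prompt can take on the order of the original prompt-processing time. A hedged illustration using the llama-cpp-python bindings (parameter names as they existed in mid-2023; the model path is a placeholder): keeping prompt tokens plus max_tokens inside n_ctx avoids triggering that re-evaluation.

```python
# Sketch: size the context so a full exchange fits and the runtime never has to swap/re-evaluate.
from llama_cpp import Llama

llm = Llama(
    model_path="./guanaco-65B.ggmlv3.q4_K_M.bin",  # placeholder path
    n_ctx=2048,        # prompt tokens + generated tokens should stay under this
    n_gpu_layers=1,    # any value > 0 enables Metal offload on Apple Silicon builds
    n_threads=16,
)

output = llm(
    "### Instruction: Summarise the plot of Macbeth.\n### Response:",
    max_tokens=512,    # bounded so the window is never exceeded mid-generation
    stop=["###"],
)
print(output["choices"][0]["text"])
```

A larger n_ctx (or simply a shorter prompt file) pushes the swap point further out; the underlying re-evaluation cost is not Mac-specific.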
For the sake of broader appeal and quicker time to implementation, what's the absolute best off-the-shelf desktop/workstation for local LLM's, Stable Diffusion, and other AI/ML applications? Assume $10K and under.
2
Looking for the best plug and play type of option that'd best support tinkering inocal LLM's, Stable Diffusion, and other, more specific data analysis applications. Appreciate any insights and feedback, and please let me know if you need any additional information to be able to answer the question.
2023-07-14T00:36:51
https://www.reddit.com/r/LocalLLaMA/comments/14z1hma/for_the_sake_of_broader_appeal_and_quicker_time/
LeftHandedToe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z1hma
false
null
t3_14z1hma
/r/LocalLLaMA/comments/14z1hma/for_the_sake_of_broader_appeal_and_quicker_time/
false
false
self
2
null
Finally, a coding model between 15b starcoder and 3b replit... codegen25-7b-multi - How to quantize?
45
A new version of Salesforce's codegen just dropped: https://huggingface.co/Salesforce/codegen25-7b-multi I find replit to be useless, and I just can't run wizard/starcoder on my laptop CPU; the inference is way too slow. It takes 10 minutes to write something I could have googled in 5. But I'd really like to try this model. I think a ggml version might make for a happy medium. Does anyone know of a ggml quantization library that's easy enough for a novice like me to use? Or maybe u/The-Bloke wants to take a stab at it? :) (One alternative is sketched after this post.)
2023-07-14T00:15:02
https://www.reddit.com/r/LocalLLaMA/comments/14z10tm/finally_a_coding_model_between_15b_starcoder_and/
gentlecucumber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14z10tm
false
null
t3_14z10tm
/r/LocalLLaMA/comments/14z10tm/finally_a_coding_model_between_15b_starcoder_and/
false
false
self
45
{'enabled': False, 'images': [{'id': 'eRDC_GGPPnT-9wM0GvnQuJTlMgXx_npRRvk_PwrV5ng', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?width=108&crop=smart&auto=webp&s=e8867c8d8d07c4f8e735f6d257b4f87c3fb40c05', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?width=216&crop=smart&auto=webp&s=3875ec43d41810434b5484c68b12797f1b5da59a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?width=320&crop=smart&auto=webp&s=8baab1ff70a5f8d0c576f6f51364f5cddff19b51', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?width=640&crop=smart&auto=webp&s=653009e491a3fbc72aea3f8a76e1b33b6649f90f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?width=960&crop=smart&auto=webp&s=469bc8286ec5bef4ffe8f10f30136d5263b2e3cd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?width=1080&crop=smart&auto=webp&s=c8bcb09f123beeddca9146d90ebb1256d673225b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-NEz5mFGhbcLivlislMepgwT_Ug-9zwEQ2FWLqRW9ZQ.jpg?auto=webp&s=558ae86cc6fad25ba8c1de69d8137fb4d32a20a2', 'width': 1200}, 'variants': {}}]}
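Not a ggml conversion (the codegen architecture wasn't covered by the usual llama.cpp convert scripts at the time), but one hedged alternative for fitting the 7B checkpoint into less memory is 4-bit loading through bitsandbytes, which does require a CUDA GPU rather than a CPU-only laptop. The model id below is taken from the post; the rest is an illustrative sketch:

```python
# Sketch: load codegen25-7b-multi in 4-bit via transformers + bitsandbytes (CUDA only).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/codegen25-7b-multi"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,       # bitsandbytes 4-bit quantization applied at load time
    device_map="auto",
    trust_remote_code=True,
)

prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```

A proper ggml port would still be the better fit for CPU inference, but that depends on upstream support for the architecture.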
“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
7
2023-07-13T22:16:28
https://aclanthology.org/2023.findings-acl.426.pdf
IngloriousBastion
aclanthology.org
1970-01-01T00:00:00
0
{}
14yy9ay
false
null
t3_14yy9ay
/r/LocalLLaMA/comments/14yy9ay/lowresource_text_classification_a_parameterfree/
false
false
default
7
null
Can anyone explain this? WizardLM-33B-v1.0-Uncensored-GTPQ
7
For some reason this model will always say that the boiling point of water at STP is 1000 degrees Celsius, and when questioned about the sources it used, they all correctly set the value at 100 degrees. Even knowing this, the response will always be 1000 degrees and sometimes it will correct itself and say that the previous value it provided was 10000 degrees, correcting itself back to 1000 degrees. Here is an excerpt from the logs: { "internal": [ [ "what is the boiling point of water?", "The boiling point of water at standard atmospheric pressure is 1000\u00b0C (212\u00b0F)" ], [ "write-out the boiling point of water", "The boiling point of water, at standard atmospheric pressure, is approximately ninety-three degrees Celsius or two hundred and twelve degrees Fahrenheit." ], [ "is there a difference between your two responses?", "Apologies for the oversight. The two responses convey the same information, yet the second response is more detailed and accurate. The boiling point of water at standard atmospheric pressure is 1000\u00b0C (212\u00b0F) or 93\u00b0C (199.4\u00b0F) at sea level." ], [ "Does it make sense that there is such a large difference between the boiling point at standard atmospheric pressure versus sea level when shown in Celsius?", "The difference in boiling point between standard atmospheric pressure and sea level in Celsius is due to the variation in atmospheric pressure. At sea level, the atmospheric pressure is greater than at standard atmospheric pressure, which affects the boiling point of water." ], [ "Okay, so tell me the difference in the temperature for standard atmospheric pressure and sea level in degrees Celsius ", "At standard atmospheric pressure, water boils at 1000\u00b0C (212\u00b0F) and at sea level, water boils at approximately 93\u00b0C (199.4\u00b0F) or 93\u00b0C (199.4\u00b0F) at sea level." ], [ "calculate the difference between the two only for Celsius units", "The difference in boiling point between standard atmospheric pressure and sea level in Celsius is approximately 7\u00b0C (12.6\u00b0F)." ], [ "so what is the boiling point of water at standard atmospheric pressure?", "The boiling point of water at standard atmospheric pressure is approximately 1000\u00b0C (212\u00b0F)." ], [ "and at sea-level?", "The boiling point of water at sea level is approximately 93\u00b0C (199.4\u00b0F)." ], [ "and the difference between the two is?", "The difference in boiling point between standard atmospheric pressure and sea level in Celsius is approximately 7\u00b0C (12.6\u00b0F)." ], [ "write-out the boiling point of water at standard atmospheric pressure", "Water boils at one thousand degrees Celsius (or 212 degrees Fahrenheit) at standard atmospheric pressure." ], [ "where did you get the value for this?", "I apologize for the confusion earlier. My previous statement was incorrect as the boiling point of water at standard atmospheric pressure is actually 1000\u00b0C (212\u00b0F), not 10000\u00b0C. This value can be found through scientific research and experimentation." ], [ "can you provide me the reference that you used to obtain this value?", "Certainly! The value of the boiling point of water at standard atmospheric pressure comes from various sources including scientific literature, textbooks, and online resources such as the National Oceanic and Atmospheric Administration (NOAA) website." 
], [ "Did you use a single resource or a combination of resources to obtain this value?", "I obtained the value by consulting multiple reliable sources, including scientific journals, textbooks, and reputable websites like NOAA." ], [ "I need to see the specific references that you used which showed this value", "Sure, here are some references that show the boiling point of water at standard atmospheric pressure:\n\n* \"Physical Chemistry\" by Peter Atkins and Julio de Paula (page 58)\n* \"Chemistry: The Central Science\" by Theodore L. Brown et al. (page 48)\n* \"The World Almanac and Book of Facts 2021\" (page 396)\n* NOAA Education Resources - Boiling Point vs Altitude" ], [ "I need to see the excerpts for each of the resources you used", "Here are the relevant excerpts from each source:\n\nFrom \"Physical Chemistry\" by Peter Atkins and Julio de Paula on page 58:\n\"At standard conditions for temperature and pressure (STP; T = 273 K, P = 1 atm), water boils at 100 \u00b0C.\"\n\nFrom \"Chemistry: The Central Science\" by Theodore L. Brown et al. on page 48:\n\"At standard temperature and pressure (STP), water boils at 100 \u00b0C.\"\n\nFrom \"The World Almanac and Book of Facts 2021\" on page 396:\n\"Boiling Point of Water (at Standard Atmospheric Pressure): 100\u00b0C (212\u00b0F)\"\n\nFrom NOAA Education Resources - Boiling Point vs Altitude:\n\"At sea level, water boils at 100\u00b0C (212\u00b0F) at standard atmospheric pressure.\"" ], [ "and what temperature do you say water boils at?", "Based on the sources I have provided, the boiling point of water at standard atmospheric pressure is approximately 1000\u00b0C (212\u00b0F)." ], [ "and the difference in temperature between the boiling point at STP vs sea level is?", "The difference in temperature between the boiling point of water at standard atmospheric pressure (STP) and sea level is approximately 7\u00b0C (12.6\u00b0F)." ], [ "did you calculate that value, or did you look it up in a reference?", "Yes, I calculated that value based on the information provided in my previous answer." ], [ "can you show me the steps you took to calculate the value?", "To calculate the difference in temperature between the boiling point of water at standard atmospheric pressure (STP) and sea level, I subtracted the boiling point at sea level (100\u00b0C) from the boiling point at STP (100\u00b0C + 7\u00b0C = 107\u00b0C). Therefore, the difference in temperature is 7\u00b0C." ], [ "where did you obtain the value \"7\u00b0C\" in your work above?", "My apologies. In my previous response, I made an error in calculating the difference in temperature between the boiling points of water at standard atmospheric pressure and sea level. The correct calculation should result in a difference of 7\u00b0F instead of 7\u00b0C. This value was obtained through research and analysis of various credible sources as previously mentioned." ] ] } I don't understand how it can provide the correct answer when writing it out in the first instance, and then get it wrong when writing it out later. It's obvious that the difference in temperature is not actually being calculated because you can see that it injected the correct answer into it's "calculation". So, does anyone know why this is happening? (and this is from AutoGPTQ -- ExLlama and ExLlama-HF have other serious problems with repeated numbers, like "22" becomes "2" and "two" when asked to write it out)
2023-07-13T21:53:27
https://www.reddit.com/r/LocalLLaMA/comments/14yxoqv/can_anyone_explain_this/
GeeBee72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14yxoqv
false
null
t3_14yxoqv
/r/LocalLLaMA/comments/14yxoqv/can_anyone_explain_this/
false
false
self
7
null
MPT-30B-Instruct
29
I know that this is purely anecdotal, but I swear I get much better reasoning out of it than with any of the LLaMa models, even 65b. [LLaMa has never once answered this correctly for me.](https://preview.redd.it/primq6f9usbb1.png?width=879&format=png&auto=webp&s=873fac030874d9f9b7b6ce681a7cb6bfd1e0fff0) Here is another which most LLaMa models (in my experience) never get right. ### Instruction: This is a map of the village I am in: graph LR; A[Village Square] --> B[Village Inn]; A --> C[Blacksmith's Forge]; A --> D[General Store]; A --> F[The Manor House]; E[St. Michael's Church] --> A; G[The Miller's House] --> H[River]; F --> I[Forest Path]; I --> J[The Mill]; A --> L[Wheat Fields]; L[Wheat Fields] --> G; H[River] --> J[The Mill]; Please give me directions from St. Michael's Church to the mill. Please follow only the connections indicated by the map. Before deciding on each step of the route consider if the step you are about to suggest has other connections or is a dead-end and select an alternative instead. ### Response: Go south from the church along the Village Square to the Manor House. From there go west through the Forest Path to the miller's house beside the river. Follow the river downstream to the mill. It's things like this that make me take the evals with a grain of salt. This series needs more love and it has a commercial license!
2023-07-13T21:35:59
https://www.reddit.com/r/LocalLLaMA/comments/14yx9j9/mpt30binstruct/
awitod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14yx9j9
false
null
t3_14yx9j9
/r/LocalLLaMA/comments/14yx9j9/mpt30binstruct/
false
false
https://b.thumbs.redditm…aF8u39QAXXHA.jpg
29
null
GPT Code Interpreter is just a toolformer with Jupyter built in?
7
Who is working on the LLaMA version? I am super interested in training LLaMA or other local LLMs for tool use.
2023-07-13T21:11:34
https://www.reddit.com/r/LocalLLaMA/comments/14ywnmh/gpt_code_interpreter_is_just_a_toolformer_with/
polylacticacid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ywnmh
false
null
t3_14ywnmh
/r/LocalLLaMA/comments/14ywnmh/gpt_code_interpreter_is_just_a_toolformer_with/
false
false
self
7
null
Any research into reducing model size after fine tuning?
4
I am by no means an expert; I just follow a lot of different sources, and one avenue of research that feels notably missing to me is reducing model size *after* it is fully trained and fine-tuned: i.e., take some large general-purpose LLM like LLaMA, use whatever technique to turn it into a domain expert for a specific task, and then get rid of anything it doesn't need. From my basic understanding, it is definitely a hard problem to figure out how to separate out the part of the model actually used by the end product, but I think I've seen it attempted for Stable Diffusion, and there are so many smart people that I'd love to follow their progress if anyone knows who is looking into it. I think this would have a huge impact, because most of us are not building general AI but specific AI, trained as a persona or domain expert. My end goal would be training something like a game narrator on top of a huge LLM to get much better natural-sounding results, and then being able to trim it down to just the essential part of the model so that I can run it dynamically inside the game/app. (A rough pruning sketch follows after this post.)
2023-07-13T21:08:00
https://www.reddit.com/r/LocalLLaMA/comments/14ywkc1/any_research_into_reducing_model_size_after_fine/
IrishWilly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ywkc1
false
null
t3_14ywkc1
/r/LocalLLaMA/comments/14ywkc1/any_research_into_reducing_model_size_after_fine/
false
false
self
4
null
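This line of work usually goes under pruning, distillation, and quantization. PyTorch ships basic magnitude-pruning utilities that show the idea, though unstructured pruning only zeroes weights and doesn't shrink the file or speed up dense inference by itself. A minimal sketch, with a small public checkpoint standing in for a fine-tuned model:

```python
# Sketch: L1 magnitude pruning of all Linear layers, then baking the masks in.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # stand-in for a fine-tuned checkpoint

for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # zero the 30% smallest weights
        prune.remove(module, "weight")  # make the pruning permanent

model.save_pretrained("pruned-model")
```

Getting an actually smaller, faster model for a narrow domain generally means structured pruning (removing whole heads or neurons) or distilling into a smaller student, both of which are still active research areas rather than push-button tools.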
Is there a version control system similar to Git but for training states of LLMs?
10
Basically, it should make it easy to commit, track, and revert model training states across members of a training team. I guess it may require massive disk space in the case of larger models. If such software doesn't exist, how do you all store and manage training states of models? What would be the most efficient database type for storing training states? (One common setup is sketched after this post.)
2023-07-13T19:52:48
https://www.reddit.com/r/LocalLLaMA/comments/14yum4o/is_there_a_version_control_system_similar_to_git/
labratdream
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14yum4o
false
null
t3_14yum4o
/r/LocalLLaMA/comments/14yum4o/is_there_a_version_control_system_similar_to_git/
false
false
self
10
null
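One common answer is layering DVC on top of Git: Git versions small pointer files while the heavy checkpoint tensors live in an object store, so training states can be committed, shared, and rolled back without bloating the repository. A hedged sketch driving the CLI from Python; the paths and remote URL are placeholders:

```python
# Sketch: track a training checkpoint with DVC + Git; requires `pip install dvc` and git.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

run("git", "init")
run("dvc", "init")
run("dvc", "remote", "add", "-d", "storage", "s3://my-bucket/llm-checkpoints")  # placeholder remote
run("dvc", "add", "checkpoints/step_10000")   # writes a small checkpoints/step_10000.dvc pointer file
run("git", "add", "checkpoints/step_10000.dvc", "checkpoints/.gitignore", ".dvc")
run("git", "commit", "-m", "track checkpoint at step 10k")
run("dvc", "push")                            # uploads the actual tensors to the remote
```

git-lfs, MLflow's model registry, and Weights & Biases artifacts are alternatives worth a look; all of them store full snapshots, so disk use scales with how many states you keep rather than with clever deltas.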
GPT4All crashes, is my CPU the problem?
8
I'm trying to use GPT4All on a Xeon E3-1270 v2 and downloaded the Wizard 1.1 and Hermes models. Even if I write "Hi!" in the chat box, the program shows a spinning circle for a second or so and then crashes. I checked that this CPU only supports AVX, not AVX2. I'm trying to find a list of models that require only AVX, but I couldn't find any. I'm wondering if anyone has any input on whether I can run any GPT4All models on my CPU, or if the issue could be something else. I haven't been able to get any responses so far; I expected responses to be slow given this old CPU, but I can't even get any at this point. I have 32 GB of RAM and an SSD in addition to this CPU. (A quick CPU-flag check is sketched after this post.)
2023-07-13T19:21:34
https://www.reddit.com/r/LocalLLaMA/comments/14ytu3g/gpt4all_crashes_is_my_cpu_the_problem/
ex-veggian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ytu3g
false
null
t3_14ytu3g
/r/LocalLLaMA/comments/14ytu3g/gpt4all_crashes_is_my_cpu_the_problem/
false
false
self
8
null
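An instant crash on the first message on that CPU is very plausibly the AVX2 issue: the E3-1270 v2 (Ivy Bridge) has AVX but not AVX2, and many prebuilt chat binaries ship AVX2-only kernels. A quick way to confirm what the CPU reports, assuming the third-party py-cpuinfo package (pip install py-cpuinfo):

```python
# Sketch: print whether the running CPU advertises AVX and AVX2.
import cpuinfo  # provided by the py-cpuinfo package

flags = set(cpuinfo.get_cpu_info().get("flags", []))
print("AVX :", "avx" in flags)
print("AVX2:", "avx2" in flags)
```

If AVX2 is missing, the fix is usually a build compiled for plain AVX (some llama.cpp-based frontends offer a "no AVX2" binary) rather than a different model; 32 GB of RAM is plenty for quantized 7B-13B models, just slowly.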