Dataset schema (field: type, observed range):

- title: string, length 1 to 300
- score: int64, 0 to 8.54k
- selftext: string, length 0 to 41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0 to 878
- author: string, length 3 to 20
- domain: string, length 0 to 82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0 to 2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646 to 1.8k
- name: string, length 10
- permalink: string, length 33 to 82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4 to 213
- ups: int64, 0 to 8.54k
- preview: string, length 301 to 5.01k
Fine-Tuning Llama-2-13b Model: Is a 5-Hour Training Time Reasonable?
1
[removed]
2023-09-01T09:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1672vfc/finetuning_llama213b_model_is_a_5hour_training/
Pritish-Mishra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1672vfc
false
null
t3_1672vfc
/r/LocalLLaMA/comments/1672vfc/finetuning_llama213b_model_is_a_5hour_training/
false
false
self
1
null
LMoE: Airoboro's MoE implementation
84
Found no Reddit post about it. From the airoboros [README](https://github.com/jondurbin/airoboros/tree/main#lmoe):

> LMoE is the simplest architecture I can think of for a mixture of experts. It doesn't use a switch transformer, doesn't require slicing and merging layers with additional fine-tuning, etc. It just dynamically loads the best PEFT/LoRA adapter model based on the incoming request.
>
> By using this method, we can theoretically crowdsource generation of dozens (or hundreds/thousands?) of very task-specific adapters and have an extremely powerful ensemble of models with very limited resources on top of a single base model (llama-2 7b/13b/70b).

Seems really promising.
2023-09-01T09:16:53
https://www.reddit.com/r/LocalLLaMA/comments/16724y3/lmoe_airoboros_moe_implementation/
noioiomio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16724y3
false
null
t3_16724y3
/r/LocalLLaMA/comments/16724y3/lmoe_airoboros_moe_implementation/
false
false
self
84
{'enabled': False, 'images': [{'id': 'Bc__o2V1-hocKmae0-c4X66wQnibBEb9e6D2OseCBU8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=108&crop=smart&auto=webp&s=076daf3a51c41beac862a251215034b912824285', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=216&crop=smart&auto=webp&s=0ec31ea4491ba35346cb7460aeb2c7b76f70a710', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=320&crop=smart&auto=webp&s=d67189c86637253b9cf770397594175afe44b447', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=640&crop=smart&auto=webp&s=e2236e1f44d3e4b9a758e6fd123d5f5ef14c926f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=960&crop=smart&auto=webp&s=865b9b8a114934a1b292ae58d87b7acc54fde2b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?width=1080&crop=smart&auto=webp&s=1e42df67c90a0e922911f8752a124e059a1d0b08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AbcnnWATduNOcF33ULpPt8wWqf7hfFojGeAd40i-cpo.jpg?auto=webp&s=3a94c9b8822872e1b8f92c8a12f5e6d55210f9f9', 'width': 1200}, 'variants': {}}]}
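The adapter selection the README describes can be sketched in a few lines. This is a minimal illustration, not airoboros's actual router: the adapter names and the keyword-matching rule below are hypothetical stand-ins for whatever classifier picks the PEFT/LoRA adapter in a real system.

```python
# Minimal sketch of LMoE-style routing: pick the best task-specific LoRA
# adapter name for an incoming request; a real system would then load that
# adapter with PEFT on top of the shared base model.
# Adapter names and keyword lists are hypothetical.

ADAPTERS = {
    "code": ["function", "python", "bug", "compile"],
    "math": ["solve", "equation", "integral"],
    "chat": [],  # fallback adapter
}

def route(prompt: str) -> str:
    """Return the adapter name whose keywords best match the prompt."""
    words = prompt.lower().split()
    best, best_hits = "chat", 0
    for name, keywords in ADAPTERS.items():
        hits = sum(w in words for w in keywords)
        if hits > best_hits:
            best, best_hits = name, hits
    return best

print(route("please solve this equation for x"))  # math
print(route("tell me a story"))                   # chat
```

The real implementation uses a proper classifier rather than keyword counts, but the shape is the same: classify first, then swap in the winning adapter.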
LLM Engine with GPTQ, OpenAI API and GPU first
2
I'm looking for a specific piece of software to run an LLM behind an API (locally). Currently I need the following:

- GPTQ support
- single and multi GPU
- OpenAI-like API (drop-in replacement)
- proper threading

So far I tried the following:

- oobabooga: no threading; it can only handle a single API call at a time. There also seem to be some bugs here and there with the OpenAI API variation.
- aphrodite engine: no GPTQ support yet; aside from that it is blazingly fast.
- localai: looked into this starting yesterday, but GPU support seems experimental and I'd prefer no Docker reliance.

Do you guys have any other options in mind which I can look at?
2023-09-01T08:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1671r1z/llm_engine_with_gptq_openai_api_and_gpu_first/
AWAS666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1671r1z
false
null
t3_1671r1z
/r/LocalLLaMA/comments/1671r1z/llm_engine_with_gptq_openai_api_and_gpu_first/
false
false
self
2
null
Fine-Tuning Llama-2-13b Model: Is a 5-Hour Training Time Reasonable?
1
I have an A100 80GB GPU, and I've set my training with the following parameters:

- model_name: "meta-llama/Llama-2-13b-hf"
- use_4bit: True
- per_device_train_batch_size: 8
- optim: "paged_adamw_32bit"
- learning_rate: 2e-4
- max_seq_length: 1024
- num_training_epochs: 3

I've started the training, and it's showing that it will take approximately 5 hours to complete. Since I'm fine-tuning such a large model for the first time, I'm not sure if this is a good time or if it's considered too long.
2023-09-01T08:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1670wwb/finetuning_llama213b_model_is_a_5hour_training/
Pritish-Mishra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1670wwb
false
null
t3_1670wwb
/r/LocalLLaMA/comments/1670wwb/finetuning_llama213b_model_is_a_5hour_training/
false
false
self
1
null
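Whether 5 hours is reasonable follows from simple arithmetic: total optimizer steps times seconds per step. The dataset size and step time below are illustrative assumptions (the post doesn't state them); plug in the numbers your own trainer logs report.

```python
import math

# Back-of-envelope sanity check for fine-tuning wall-clock time:
#   steps = ceil(dataset_size / batch_size) * epochs
#   hours = steps * seconds_per_step / 3600
# dataset_size and sec_per_step below are illustrative assumptions.

def training_hours(dataset_size, batch_size, epochs, sec_per_step):
    steps = math.ceil(dataset_size / batch_size) * epochs
    return steps * sec_per_step / 3600

# e.g. 12,000 samples, batch 8 (as in the post), 3 epochs, ~4 s/step in 4-bit
print(round(training_hours(12_000, 8, 3, 4.0), 1))  # 5.0
```

If the implied seconds-per-step is in the low single digits for a 13B model in 4-bit on an A100, the estimate is plausible rather than pathological.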
continue button question - do I really need to press it after 10 list..?
1
hi guys, firstly - so excited that we can run GPT-style models locally - it is incredible fun! I was testing 7B models to fit my 6GB VRAM and they are fun and fast, but now I'm mainly using airoboros-65b-gpt4. With 64GB RAM, I'm able to get nice answers in a little time: I ask, and then after a few minutes I read the whole answer - that's OK for me.

BUT there is a Continue button which I have to click. For example, when I ask it to list 20 options, it lists 10 and then stops; I need to click Continue to get the other 10. Why? Is there any way to get rid of it, or is it some AI thing? By the way, I'm using Pinokio - a great automated installer for AI tools under Windows.

Thank you for your help!
2023-09-01T08:01:52
https://www.reddit.com/r/LocalLLaMA/comments/1670wls/continue_button_question_do_i_really_need_to/
ovnf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1670wls
false
null
t3_1670wls
/r/LocalLLaMA/comments/1670wls/continue_button_question_do_i_really_need_to/
false
false
self
1
null
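The Continue button usually reflects a per-request generation cap (most UIs call it something like max_new_tokens), not a hard model limit, so a client can loop automatically until the model signals it is done. The sketch below is a toy illustration with a stubbed backend; the `<END>` marker and the fixed 25-item list are stand-ins for a real end-of-sequence token and a real model.

```python
# Sketch of client-side "auto-continue": keep requesting chunks of output
# until the model emits an end marker, instead of clicking Continue manually.
# `generate` is a stub standing in for a real backend call whose output is
# capped at max_new_tokens per request.

def generate(context, max_new_tokens):
    """Stub backend: returns the next chunk of a fixed 25-item list."""
    full = [f"item {i}" for i in range(1, 26)] + ["<END>"]
    start = len(context)
    return full[start:start + max_new_tokens]

def auto_continue(max_new_tokens=10):
    out = []
    while "<END>" not in out:
        out += generate(out, max_new_tokens)
    out.remove("<END>")
    return out

items = auto_continue()
print(len(items))  # 25 - three capped requests stitched together
```

In most frontends the simpler fix is just raising the max-new-tokens setting so a single request covers the whole list.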
psa: vLLM gptq branch is twice as fast as llama.cpp
12
*but it's a massive pita to set up*

RTX 3060, prompt: "USER: write a book about ducks\n\nASSISTANT:", temp 0.8, top_p 0.95, vicuna 16k 13b, ymmv:

vllm:

INFO 09-01 09:24:30 llm_engine.py:394] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 23.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 16.3%, CPU KV cache usage: 0.0%

llama.cpp (40/43 layers offloaded, cublas):

llama_print_timings: load time = 7965.47 ms
llama_print_timings: sample time = 204.59 ms / 917 runs (0.22 ms per token, 4482.16 tokens per second)
llama_print_timings: prompt eval time = 268.07 ms / 18 tokens (14.89 ms per token, 67.15 tokens per second)
llama_print_timings: eval time = 72101.74 ms / 916 runs (78.71 ms per token, 12.70 tokens per second)
llama_print_timings: total time = 72704.57 ms

Linux or WSL required. gptq branch: https://github.com/chu-tianxiang/vllm-gptq

Streaming decoding returns garbage, but both llm and llm_engine work.
2023-09-01T07:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1670e97/psa_vllm_gptq_branch_is_twice_as_fast_as_llamacpp/
LoSboccacc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1670e97
false
null
t3_1670e97
/r/LocalLLaMA/comments/1670e97/psa_vllm_gptq_branch_is_twice_as_fast_as_llamacpp/
false
false
self
12
{'enabled': False, 'images': [{'id': 'xto1_SHsAuaFcYccSHNPrBvMZC281tY-WYnO2-LaI6c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=108&crop=smart&auto=webp&s=0293c32c3c1ef4aedb54b38ad09e094ac4f05bf5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=216&crop=smart&auto=webp&s=2a336131b74ef5dad80f296f2dab41cf0d6eb472', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=320&crop=smart&auto=webp&s=e9a4fbeb2ee21378d5100ae553fc6dbbd97dc93e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=640&crop=smart&auto=webp&s=69b7decd849e08e1d2ac76bbd480215508cb14e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=960&crop=smart&auto=webp&s=bde753fa3a3df71d46271bfdc59a797f71e17023', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?width=1080&crop=smart&auto=webp&s=459fb631385630c585a9afba9a4d520bddf5b9f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0s1LyiVKN6cH2NqhN1bUozBSbfsX7EQ7qDYo4eSgkUs.jpg?auto=webp&s=add70bcddb97510bf260bdee7f889ae49b6c681f', 'width': 1200}, 'variants': {}}]}
Why are the answers getting dumber as the discussion continues?
23
New to the topic. When I start a discussion with a chatbot, its answers are good enough at first - almost perfect grammar, good consistency of words and sentences. But as the discussion continues, it begins to produce nonsense: it mismatches words and builds incorrect sentences, although they still retain their meaning and keep the thread of the narrative.

I use derivatives of Llama-2, such as Airoboros-70B, WizardLM-30B and others, but this shows up in one way or another in all models. I don't consider anything below 30B at all, because it is useless to discuss general topics with them - their answers don't stand up to any criticism.
2023-09-01T07:21:20
https://www.reddit.com/r/LocalLLaMA/comments/167088h/why_are_the_answers_getting_dumber_as_the/
Hatred_grows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
167088h
false
null
t3_167088h
/r/LocalLLaMA/comments/167088h/why_are_the_answers_getting_dumber_as_the/
false
false
self
23
null
Chat history and White Spaces after response
1
So I have been trying to build a chatbot using the Llama-2-7b GPTQ model. I found the gptq-4bit-128g-actorder_True variant to be doing well for a single question and answer.

Now I wanted the model to remember the conversation, so I just used a loop to keep feeding the response back to it as part of the prompt, in the same template as given in the Hugging Face model card. But it keeps exceeding the 2048-token limit of the model after around 2-3 questions. Is there a way to increase this limit, or a workaround? Once the limit is exceeded it just gives me white space as output, or something like: [[[][[[[...

Another issue I have been facing is the white space itself. Even without the conversation history, it sometimes gives me white space at the end of my response. I think this is also a reason why I am exceeding the token limit when using the history. Is there any way to stop the model from generating white space?
2023-09-01T07:03:08
https://www.reddit.com/r/LocalLLaMA/comments/166zx44/chat_history_and_white_spaces_after_response/
IamFuckinTomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166zx44
false
null
t3_166zx44
/r/LocalLLaMA/comments/166zx44/chat_history_and_white_spaces_after_response/
false
false
self
1
null
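The 2048-token limit is the model's context window, so the usual workaround is not raising it but trimming: drop the oldest turns until the prompt (plus room for the reply) fits. A minimal sketch, with a whitespace split standing in for the model's real tokenizer:

```python
# Sketch of keeping a chat history inside a fixed context window by dropping
# the oldest turns first. A real implementation would count tokens with the
# model's own tokenizer; len(text.split()) is a stub for that here.

def count_tokens(text: str) -> int:
    return len(text.split())  # stub tokenizer

def trim_history(turns, limit, reserve_for_reply=64):
    """Drop oldest turns until the prompt fits in limit - reserve."""
    budget = limit - reserve_for_reply
    kept = list(turns)
    while kept and sum(count_tokens(t) for t in kept) > budget:
        kept.pop(0)  # drop the oldest turn
    return kept

history = [f"turn {i}: " + "word " * 300 for i in range(10)]  # ~302 "tokens" each
kept = trim_history(history, limit=2048)
print(len(kept))  # 6 - the four oldest turns were dropped
```

Stripping trailing whitespace from each model reply before appending it to the history (the second issue in the post) also keeps the count honest.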
Tips for fine tuning Replit-Code-3B
5
I'm planning to fully fine-tune (not LoRA) the Replit-Code-3B model on a proprietary API (in Python) for code completion, and to integrate it into a VSCode extension like GitHub Copilot. The reason I chose Replit-Code-3B was ALiBi, so I can scale the context window when doing code completion. Since I'm fine-tuning the whole model, I assume that aside from my sample data (which is probably small), I should include a generic Python dataset to make sure fine-tuning does not degrade the model's general Python performance. However, I'm not sure which Python dataset I should use. Do you have any ideas or other tips I should consider? Thanks a lot!
2023-09-01T07:01:14
https://www.reddit.com/r/LocalLLaMA/comments/166zvvf/tips_for_fine_tuning_replitcode3b/
Acrobatic-Site2065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166zvvf
false
null
t3_166zvvf
/r/LocalLLaMA/comments/166zvvf/tips_for_fine_tuning_replitcode3b/
false
false
self
5
null
Abuse Detection by LLM
14
I am hitting a roadblock when I try to get an LLM (Llama) to recognize hate speech or offensive words: it refuses to comply with the prompts. The idea is to identify hate speech, flag it, and replace it with non-offensive words. Is there any way around this?
2023-09-01T06:09:24
https://www.reddit.com/r/LocalLLaMA/comments/166z0rm/abuse_detection_by_llm/
thesithlord27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166z0rm
false
null
t3_166z0rm
/r/LocalLLaMA/comments/166z0rm/abuse_detection_by_llm/
false
false
self
14
null
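When an aligned chat model refuses, two common workarounds are (a) reframing the request as pure labeling ("output only SAFE or OFFENSIVE") so the model never has to repeat the content, and (b) doing the replacement step deterministically outside the model. A sketch of the deterministic masking step; the wordlist here is a placeholder, not a real moderation list:

```python
import re

# Sketch of the deterministic fallback for abuse detection: flag offensive
# terms against a wordlist and mask them, so the LLM is only asked to
# classify text, never to reproduce it. OFFENSIVE is a toy placeholder.

OFFENSIVE = {"jerk", "idiot"}

def mask(text: str):
    flagged = []
    def repl(m):
        word = m.group(0)
        if word.lower() in OFFENSIVE:
            flagged.append(word)
            return "*" * len(word)
        return word
    return re.sub(r"[A-Za-z']+", repl, text), flagged

clean, flagged = mask("You absolute idiot!")
print(clean)    # You absolute *****!
print(flagged)  # ['idiot']
```

Purpose-built moderation endpoints or classifier checkpoints also tend to refuse far less than general chat models, since classification is their stated task.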
Sojee: My own little dual-stage prompt embedding chatbot that can be quickly customized to any particular corpus.
14
tl;dr: My business and I created a simple self-hosted Llama-2 chatbot called "Sojee" that uses two stages of prompt embedding: it first classifies a question into one of a number of topics, then loads a prompt based on that topic and re-asks the question. I've MIT-licensed the source code and published it at [https://github.com/ChiapasEDI/SojeeChatClient](https://github.com/ChiapasEDI/SojeeChatClient). This is the C#/Blazor Server front-end; it relies on a running text-generation-webui backend with the API enabled.

This chatbot can be customized to just about any purpose. In fact, I made a 35+ minute video [https://youtu.be/pjNjdcRf2TE](https://youtu.be/pjNjdcRf2TE) of me going through the specific business requirements (self-hosted, accurate and easily maintained), some basics of AI self-hosting (about 10 minutes), compiling and running Sojee (about another 10 minutes), and then customizing Sojee so that instead of answering questions on the product's corpus, it answers questions on several topics about diamonds, copied and pasted straight from the Diamond Wikipedia page.

text-generation-webui (thank you u/Oobabooga and contributors!) serves as the back-end with its very simple webhook API, and separating the front-end and back-end makes it very easy to swap the model out for another. I actually found OpenAssistant Llama-2 13B Orca 8K (8-bit quantization) to be the best all-around, and this was after a *lot* of experimentation. But the *correct* model is really whatever understands your corpus text and is able to answer questions on it. Since this chat client supports switching quickly between different topics, the supported context size is less of a barrier - if you can split your corpus into 10-15 topics, then you can dedicate the full context size to each topic.

However, if you ask a question that spans *two* topics, get ready for some hallucination, because only one topic gets loaded at a time. One way to mitigate this is to dedicate some percentage of your context size in *each* topic to generalized info common to all of the topics - this way, the language model doesn't have to guess as much about what the other topics might contain. Since you can carve your corpus into multiple topics, and thus use smaller embedded prompts, this also enables smaller GPUs to run Sojee, or lets you dedicate more VRAM to answering questions. I found a good rule of thumb is to have the model consume 1/3 to no more than 1/2 of available GPU RAM; the model uses the rest for context.

A few notes about "carving" your own corpus into topics:

1. I allocated about 650 tokens of the 8192-token budget for question-and-answer space.
2. The default Sojee topics "business" and "automation" use much less than the 8192 tokens, but the "reference" topic (the product's API reference) used every bit of it - thus the instructions in the index.razor page to reload the interface for each question on that topic, as there's no token budget for a real conversation: it can answer a single question before the embedded prompt starts to lose text at the top.
3. I note very briefly in the video that there are a lot of hyperparameters that can be overridden on a per-topic basis - temperature, top_k, etc. - just by putting, e.g., temperature:0.7 before the dashed line in the embedded prompt. I put seed:1 in each topic for QA purposes, which helps to predict what the answer to any particular question will be.
4. The "initial" topic is required, and sharp eyes may notice that the text for "business" and "initial" is largely the same; the difference is that the "initial" prompt has some extra text directing the model to classify a question instead of answering it.
5. One "topic" got cut late in the game: I had a 12KB PowerShell script, and early on the model seemed able to extract functional bits of it following a template I provided - which I thought was pretty cool. On second thought, I questioned how practical this was, but the real dealbreaker was that 10% of the time the model would hallucinate brand-new API calls. Echoing back the syntax of a single API call, as in the "reference" topic, was about the closest to coding I would trust the model with before it starts hallucinating.

I don't want to make any big claims about Sojee - it's very buggy, needs actual code comments and needs more work! Honestly, it was much more of a learning exercise to prove to myself that I've learned something in the last couple of months, and I need to move on to other things right now. But, seeing that there is a lot of interest in self-hosted corporate chatbots, I thought someone might get something useful out of it if I shared my own small discoveries.
2023-09-01T05:13:03
https://www.reddit.com/r/LocalLLaMA/comments/166y1a5/sojee_my_own_little_dualstage_prompt_embedding/
alittleteap0t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166y1a5
false
null
t3_166y1a5
/r/LocalLLaMA/comments/166y1a5/sojee_my_own_little_dualstage_prompt_embedding/
false
false
self
14
{'enabled': False, 'images': [{'id': 'PzrQBGiSqgMVDQ9iBiCzmnZjQXsOSIUdtT7a-T5T6BE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=108&crop=smart&auto=webp&s=549202db489f84de1928e07171bee1bf5ffc749b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=216&crop=smart&auto=webp&s=a6fb6b30c3997254c974208ab9dc60b335439871', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=320&crop=smart&auto=webp&s=fea6fdba32309a4a619a6ebd71596f2d82ecf57f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=640&crop=smart&auto=webp&s=25688c8059d9bb086c77148b755466ce0cbe379c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=960&crop=smart&auto=webp&s=57a01d0a9ec48ed5a56bc3460c93b66c22f7bac5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?width=1080&crop=smart&auto=webp&s=8e67783e75ada275b902f90805b5c91c5756b2ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LRhLMzTzOUjAHY8m1ZsPtfC0M1ueNAn1tpcvPc5UhYM.jpg?auto=webp&s=8a1e01b69a06ad7db60351d9243a3b33896bee27', 'width': 1200}, 'variants': {}}]}
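The dual-stage flow the post describes (classify the question into a topic, then rebuild the prompt from that topic's corpus and re-ask) can be sketched independently of the C#/Blazor implementation. `call_llm`, the topic names, and the routing heuristic below are stubs, not Sojee's actual code; in Sojee the backend call goes to the text-generation-webui API.

```python
# Sketch of a Sojee-style two-stage flow: stage 1 asks the model to classify
# the question into a topic; stage 2 loads that topic's embedded prompt and
# re-asks the question. `call_llm` is a stub backend; topic prompts are toys.

TOPIC_PROMPTS = {
    "business": "You answer questions about the business corpus...",
    "automation": "You answer questions about the automation corpus...",
}

def call_llm(prompt, question):
    # Stub: in the classification stage, return a topic name.
    if "classify" in prompt:
        return "automation" if "script" in question else "business"
    return f"[answer grounded in: {prompt.split()[-2]}]"

def ask(question):
    topic = call_llm("classify this question into a topic", question)
    return topic, call_llm(TOPIC_PROMPTS[topic], question)

topic, answer = ask("How do I script the nightly export?")
print(topic)  # automation
```

The design point is that each topic gets the full context window to itself, at the cost of one extra LLM round-trip per question.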
Llama-2 HF 7B for downstream tasks using LoRA
5
Hey guys, I am fine-tuning Llama-2 HF 7B for downstream tasks using LoRA on 1 A100 SXM 40GB GPU. I am new to LoRA and unable to understand a few things. Can someone provide me a link to a good explanation of LoRA along with documentation of every argument it takes? Also, I am unable to understand:

1. What is the effect of lora_alpha?
2. How do I decide the best rank?
2023-09-01T05:02:05
https://www.reddit.com/r/LocalLLaMA/comments/166xtth/llama2_hf_7b_for_downstream_tasks_using_lora/
Excellent-Screen-836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166xtth
false
null
t3_166xtth
/r/LocalLLaMA/comments/166xtth/llama2_hf_7b_for_downstream_tasks_using_lora/
false
false
self
5
null
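Both questions have a compact mathematical core: LoRA learns a low-rank update W' = W + (alpha / r) * B A, where A is (r x in) and B is (out x r). So lora_alpha scales how strongly the adapter modifies the base weights relative to its rank r, and r controls the adapter's capacity (and parameter count). A dependency-free sketch of the merged update:

```python
# LoRA updates a weight matrix W as W' = W + (alpha / r) * B @ A, where
# A has shape (r, in) and B has shape (out, r). lora_alpha scales the
# adapter's contribution; rank r controls its capacity. Pure-Python sketch.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha, r):
    scale = alpha / r          # effective adapter strength
    BA = matmul(B, A)          # rank-r update, same shape as W
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
A = [[1.0, 1.0]]               # r=1, in=2
B = [[0.5], [0.5]]             # out=2, r=1
print(lora_merge(W, A, B, alpha=2, r=1))  # [[2.0, 1.0], [1.0, 2.0]]
```

A common starting heuristic is alpha = 2 * r with r in the 8-64 range, then tuning r up if the task needs more capacity; but this is folklore, not a theorem, so validate on your task.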
Finetuned Open Source LLM models Marketplace?
12
I'm trying to satisfy a use case my company has for LLMs by making an application in my free time, but it needs to be open source so we can run it locally, so API calls are out of the picture. I can fine-tune a foundation model, but before I do: can I search for specific fine-tuned models on Hugging Face? I find their search mechanism difficult to navigate when trying to find specific fine-tuned models; I have to click randomly and read the description most of the time. Is there a marketplace, or potential for a marketplace, for specific fine-tuned models? And is there an easy no-code way to fine-tune a foundation model if I have the dataset? I want my non-dev mates to be able to play around!
2023-09-01T04:58:42
https://www.reddit.com/r/LocalLLaMA/comments/166xrfr/finetuned_open_source_llm_models_marketplace/
PigWedgion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166xrfr
false
null
t3_166xrfr
/r/LocalLLaMA/comments/166xrfr/finetuned_open_source_llm_models_marketplace/
false
false
self
12
null
easiest tool for running Code LLama on CPU instead of GPU?
1
I tried installing the Windows KoboldCpp, which seems like it might sort of work with the 7B Code Llama model, but when I load the large model, it crashes. Is there another simple way of doing it that does not leave me in WSL dependency hell? Any Windows-based tools for running an LLM with CPU and system RAM?
2023-09-01T03:40:53
https://www.reddit.com/r/LocalLLaMA/comments/166w9lg/easiest_tool_for_running_code_llama_on_cpu/
Cunninghams_right
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166w9lg
false
null
t3_166w9lg
/r/LocalLLaMA/comments/166w9lg/easiest_tool_for_running_code_llama_on_cpu/
false
false
self
1
null
Best method or tool for data extraction, transformation and structuring for use in an LLM memory database?
2
Hey all, so I’m wanting to clean a data file that’s fairly large to then use it as a memory database for a chat model. I tried with code interpreter however as the file is so large it was unsuccessful. I’m sure I’ve seen a few methods recently for this specific case however can’t remember what they were. Would appreciate any help!
2023-09-01T02:54:37
https://www.reddit.com/r/LocalLLaMA/comments/166vbtb/best_method_or_tool_for_data_extraction/
sardoa11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166vbtb
false
null
t3_166vbtb
/r/LocalLLaMA/comments/166vbtb/best_method_or_tool_for_data_extraction/
false
false
self
2
null
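The usual pattern for a file too large for code interpreter or a model's context is to split it into overlapping chunks first, then embed each chunk into the memory/vector database. A minimal sketch; the chunk and overlap sizes are illustrative, and real pipelines split on sentence or paragraph boundaries rather than raw character offsets.

```python
# Sketch of splitting a large text into overlapping chunks before loading it
# into a memory/vector database. Sizes are illustrative; production splitters
# usually respect sentence/paragraph boundaries instead of raw offsets.

def chunk_text(text, chunk_size=500, overlap=50):
    chunks, start = [], 0
    step = chunk_size - overlap     # overlap keeps context across boundaries
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

doc = "x" * 1200
chunks = chunk_text(doc)
print(len(chunks))     # 3
print(len(chunks[0]))  # 500
```

The overlap matters: without it, a fact straddling a chunk boundary is invisible to retrieval from either side.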
Cheapest Llama2 chatbot solution costs only $4/mon
1
2023-09-01T02:07:38
https://news.ycombinator.com/item?id=37341332
nalaginrut
news.ycombinator.com
1970-01-01T00:00:00
0
{}
166ucqb
false
null
t3_166ucqb
/r/LocalLLaMA/comments/166ucqb/cheapest_llama2_chatbot_solution_costs_only_4mon/
false
false
default
1
null
Converting HuggingFace Models to GGUF/GGML | Tutorial
1
2023-09-01T02:06:12
https://www.substratus.ai/blog/converting-hf-model-gguf-model/
samosx
substratus.ai
1970-01-01T00:00:00
0
{}
166ubn5
false
null
t3_166ubn5
/r/LocalLLaMA/comments/166ubn5/converting_huggingface_models_to_ggufggml_tutorial/
false
false
https://a.thumbs.redditm…HrUEn7Wq4u30.jpg
1
{'enabled': False, 'images': [{'id': 'wm8asZPwMNjKu2oMaTYZapHi2pDCrsnaUBEG7KVDlpQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?width=108&crop=smart&auto=webp&s=66b33151536e8f150f0a75d6f01a889bdf71a44a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?width=216&crop=smart&auto=webp&s=40a15fa12f410ac44e6c3f8cbd52a8f58cd3aa8b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?width=320&crop=smart&auto=webp&s=8f250150d9fe6c5e8618a9ab6286adb50f3dabb2', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/AV55SHIcXwiLujwKI4072jb2GNTPWU_P7VCDjuCPVQ4.jpg?auto=webp&s=8e71385885030934a82bc08382d74696d21301c9', 'width': 512}, 'variants': {}}]}
If you feed an LLM with a fragment of its own output, it'll tend to reproduce the fragment literally.
12
I noticed an odd behavior. In order to summarize a long text I decided to iterate over bunches of paragraphs, one bunch at a time, and have the LLM generate a summary for each bunch until it's finished. It occurred to me that it would be nice to provide each bunch, as context, with the summary of the previous bunch, so that the LLM can make more sense of the text to be summarized. But to my surprise the LLM refuses to create a summary of the new text; it just rewrites the summary provided as context verbatim.

I'm using nous-hermes-llama2-13b.ggmlv3.q4_K_M, which works great for other tasks using the Instruction/Response template. I also tried a few other models (Orca) and it happened there as well. My intuition is that the model is overly sensitive to its own style of text, in the sense that it triggers a very high signal, mathematically speaking. Even more so because the original text I'm trying to summarize is a transcription of a spoken interview, which is obviously less structured and more chaotic. A bit disappointed by this behavior overall. Anyone else noticed it?
2023-09-01T01:52:01
https://www.reddit.com/r/LocalLLaMA/comments/166u0ig/if_you_feed_an_llm_with_a_fragment_of_its_own/
Responsible_Warning3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166u0ig
false
null
t3_166u0ig
/r/LocalLLaMA/comments/166u0ig/if_you_feed_an_llm_with_a_fragment_of_its_own/
false
false
self
12
null
Llama 2 Platform APIs with per-token pricing - are there any I'm not aware of?
21
I'm interested in finding the best Llama 2 API service - I want to use Llama 2 as a cheaper/faster alternative to gpt-3.5-turbo in an application I'm building. I have bursty requests and a lot of time without users, so I really don't want to host my own instance of Llama 2; it's only viable for me if I can pay per-token and have someone else manage compute (otherwise I'd just use gpt-3.5-turbo!).

So far, here's my understanding of the market for hosted Llama 2 APIs:

* [Deepinfra](https://deepinfra.com/pricing) - only available option with no dealbreakers; well-priced at just over half of gpt-3.5-turbo average pricing (but currently slower than gpt-3.5-turbo, and a relatively unknown company)
* [MosaicML](https://www.mosaicml.com/inference) - no open sign-up (have to submit a request form), and pricing for llama-2-70b-chat is actually slightly higher than gpt-3.5-turbo anyway
* [Replicate](https://replicate.com/replicate/llama-2-70b-chat) - great service for image-gen models, but for LLMs it's so inefficient to run on a single GPU with pay-per-second billing that my cost estimates for it are 10-100x the price of gpt-3.5-turbo
* Amazon Bedrock - not live yet, can't find pricing, unclear if it'll have Llama 2 at launch anyway (potentially depends on Jassy and Zuck making friends)

Anything else I should be aware of? Here's my current pricing table ($ per 1M tokens):

| Provider | Model Name | Input | Output | Combined (4:1 input:output assumption) |
|-----------|------------------|-------|--------|----------------------------------------|
| OpenAI | GPT-3.5 Turbo | 1.50 | 2.00 | 1.60 |
| deepinfra | llama-2-70b-chat | 1.00 | 1.00 | 1.00 |
| mosaicml | llama-2-70b-chat | 2.00 | 2.00 | 2.00 |
| replicate | llama-2-70b-chat | 0.00 | 208.84 | 41.77 |
| replicate | llama-2-13b-chat | 0.00 | 89.98 | 18.00 |
2023-09-01T01:37:23
https://www.reddit.com/r/LocalLLaMA/comments/166tp5q/llama_2_platform_apis_with_pertoken_pricing_are/
mikachip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166tp5q
false
null
t3_166tp5q
/r/LocalLLaMA/comments/166tp5q/llama_2_platform_apis_with_pertoken_pricing_are/
false
false
self
21
{'enabled': False, 'images': [{'id': 'gdv5Bh89JWXxDpkAlTrk_zCb-qJxrREhAWY8_SsUogU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?width=108&crop=smart&auto=webp&s=ec134d5f1c4f53b9f67d5942dbd314037af38666', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?width=216&crop=smart&auto=webp&s=1061b54db08292c4fbc9fae7d80cdb4981b4e4cb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?width=320&crop=smart&auto=webp&s=d936cb000bed7a92e6285dd0683d5d881ece9b08', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/tD62_gbrTKSFS-T2_bh8iKW4Yhfa_e5FHjFP5FsdITU.jpg?auto=webp&s=1cc6f0f38166af4c6d724b3e743d8ec60cc03085', 'width': 512}, 'variants': {}}]}
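The "Combined" column in the table above is just a 4:1 weighted average of input and output price per 1M tokens, which is easy to reproduce and to re-run under a different input:output ratio:

```python
# Reproduce the "Combined" column: a weighted average of input and output
# price per 1M tokens under a 4:1 input:output ratio assumption.

def blended(input_price, output_price, in_ratio=4, out_ratio=1):
    total = in_ratio + out_ratio
    return (input_price * in_ratio + output_price * out_ratio) / total

rows = {
    "gpt-3.5-turbo": (1.50, 2.00),
    "deepinfra llama-2-70b-chat": (1.00, 1.00),
    "replicate llama-2-70b-chat": (0.00, 208.84),
}
for name, (inp, out) in rows.items():
    print(name, round(blended(inp, out), 2))
# gpt-3.5-turbo 1.6
# deepinfra llama-2-70b-chat 1.0
# replicate llama-2-70b-chat 41.77
```

Changing `in_ratio`/`out_ratio` shows how sensitive the comparison is to the workload: an output-heavy app shifts the ranking toward providers with cheap output tokens.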
New to LLMs and I have questions about video cards
0
Are two cards with 16GB of VRAM each basically the same as a single 32GB card? Could I run a >16GB model by spreading it across two cards, or does it need to run on one single card with enough RAM to handle it? I see MI25s on ebay for around $100, are they worth it? I can't get a $500 or $1000 card but I can spend $100, maybe $200. Would it make sense to get two MI25s?
2023-09-01T00:47:34
https://www.reddit.com/r/LocalLLaMA/comments/166slcm/new_to_llms_and_i_have_questions_about_video_cards/
timschwartz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166slcm
false
null
t3_166slcm
/r/LocalLLaMA/comments/166slcm/new_to_llms_and_i_have_questions_about_video_cards/
false
false
self
0
null
Is there a general model size where q2 quants don't say nonsense?
5
[This is a 13B model. I think it's understandable since it's a q2 quant, but it's still shocking in the moment. It was coherent up to this point.](https://preview.redd.it/pgom9nvv7jlb1.png?width=894&format=png&auto=webp&s=b793a5e9e9ea49197d814d321dde24444b65ca16)
2023-09-01T00:06:28
https://www.reddit.com/r/LocalLLaMA/comments/166rm7k/is_there_a_general_model_size_where_q2_quants/
multiverse_fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166rm7k
false
null
t3_166rm7k
/r/LocalLLaMA/comments/166rm7k/is_there_a_general_model_size_where_q2_quants/
false
false
https://b.thumbs.redditm…nx8EJTPjyC3E.jpg
5
null
Simplest python/local llm example?
4
I'm trying to get my head around whether this is feasible. Can I download a model from Hugging Face directly (say, nous-hermes) onto my machine (a Mac) and use a simple Python script to interact with it? Or must I use an intermediary like oobabooga or something to communicate with it? I'd love to find a minimal example. Thanks!
2023-08-31T23:58:01
https://www.reddit.com/r/LocalLLaMA/comments/166reh7/simplest_pythonlocal_llm_example/
FahrenheitUnrequited
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166reh7
false
null
t3_166reh7
/r/LocalLLaMA/comments/166reh7/simplest_pythonlocal_llm_example/
false
false
self
4
null
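For the question above: yes, no intermediary UI is required. One common route on a Mac is llama-cpp-python driving a GGUF file directly. A minimal sketch, assuming `pip install llama-cpp-python` and a downloaded model file; the model path and filename are placeholders, and the inference call is gated behind a flag so the prompt-formatting helper stands alone.

```python
# Minimal local-inference sketch with llama-cpp-python. The model path is a
# placeholder; flip RUN_MODEL to True once the package and a GGUF file are
# in place. Many community fine-tunes (e.g. nous-hermes) expect an
# Alpaca-style Instruction/Response template, built here by format_prompt.

RUN_MODEL = False  # assumption: llama-cpp-python + model not yet installed

def format_prompt(instruction: str) -> str:
    """Alpaca-style Instruction/Response template used by many fine-tunes."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

if RUN_MODEL:
    from llama_cpp import Llama
    llm = Llama(model_path="./nous-hermes-13b.Q4_K_M.gguf", n_ctx=2048)
    out = llm(format_prompt("Name three kinds of duck."), max_tokens=128)
    print(out["choices"][0]["text"])
```

Hugging Face `transformers` is the other minimal-script option, but on a 16GB-class Mac the quantized GGUF route is usually the one that actually fits in memory.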
Shoutout to thebloke for ranking on HuggingFace leaderboard with a gptq model, Genz 70B
1
[removed]
2023-08-31T22:46:19
https://www.reddit.com/r/LocalLLaMA/comments/166pn34/shoutout_to_thebloke_for_ranking_on_huggingface/
multiverse_fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166pn34
false
null
t3_166pn34
/r/LocalLLaMA/comments/166pn34/shoutout_to_thebloke_for_ranking_on_huggingface/
false
false
https://b.thumbs.redditm…z4qZ0u-rT0NY.jpg
1
null
Seeking Opinions: Best Open-Source Model for Q/A and Summarization on Financial Documents (between 13-40B)
4
Hello guys! I'm currently on the lookout for the best open-source language model for tackling question-answering (Q/A) and summarization tasks specifically tailored to financial documents. I'm looking for models with between 13 and 40 billion parameters. I've already tested llama2-chat-13b and mpt-30b-instruct. I'd love to hear your experiences and opinions on which models have or haven't performed well in this domain. Thanks in advance for your input!
2023-08-31T22:01:27
https://www.reddit.com/r/LocalLLaMA/comments/166oii7/seeking_opinions_best_opensource_model_for_qa_and/
GregLeSang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166oii7
false
null
t3_166oii7
/r/LocalLLaMA/comments/166oii7/seeking_opinions_best_opensource_model_for_qa_and/
false
false
self
4
null
What is the best community/group to join if I want to connect with ChatGPT/LLM application developers?
7
I'm looking for a community that is mostly or exclusively made up of **makers**/**builders** who are collaborating and launching prototypes, say every few months. Thanks!
2023-08-31T21:30:42
https://www.reddit.com/r/LocalLLaMA/comments/166nprz/what_is_the_best_communitygroup_to_join_if_i_want/
AndreeSmothers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166nprz
false
null
t3_166nprz
/r/LocalLLaMA/comments/166nprz/what_is_the_best_communitygroup_to_join_if_i_want/
false
false
self
7
null
conceptofmind/Yarn-Llama-2-13b-128k · Hugging Face
18
2023-08-31T21:04:00
https://huggingface.co/conceptofmind/Yarn-Llama-2-13b-128k
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
166n016
false
null
t3_166n016
/r/LocalLLaMA/comments/166n016/conceptofmindyarnllama213b128k_hugging_face/
false
false
https://b.thumbs.redditm…BC3KLP66Tj5A.jpg
18
{'enabled': False, 'images': [{'id': '0zs2x8DfIxFl6IvCGjBc6nQCxvmLW425TKuxLZ_bNlk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=108&crop=smart&auto=webp&s=6dfe0d4329687120bca5a26454aa97241489780f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=216&crop=smart&auto=webp&s=0eebb006a38f8836dbfb99b84abee455fdfee90e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=320&crop=smart&auto=webp&s=c41068ea2066908735531212b13ce18308dd05c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=640&crop=smart&auto=webp&s=3240b908a7baddfc89cbe780727fd04e1b2d26eb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=960&crop=smart&auto=webp&s=928d842fb1f4ea90e9d5713013f8678da001ee91', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?width=1080&crop=smart&auto=webp&s=676ab128786b88e64ef6991b787efb72667563e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mpzz35zuqFFkT_P0fEg4RWpx4Xo9LEtTmaz_kBbuBeE.jpg?auto=webp&s=0078ceb25a4704ea81bf3b0c91445dc043d072ca', 'width': 1200}, 'variants': {}}]}
RTX 4060 Ti 16 GB Users: Viable for 33/34b Models on ExLlama/GGML?
4
Has anyone tried using this GPU with ExLlama for 33/34b models? What's your experience? Additionally, I'm curious about offloading speeds for GGML/GGUF. Please share the tokens/s with specific context sizes. TIA!
2023-08-31T21:02:34
https://www.reddit.com/r/LocalLLaMA/comments/166mykp/rtx_4060_ti_16_gb_users_viable_for_3334b_models/
Xhehab_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166mykp
false
null
t3_166mykp
/r/LocalLLaMA/comments/166mykp/rtx_4060_ti_16_gb_users_viable_for_3334b_models/
false
false
self
4
null
3D artist - is there any model for Blender?
2
I've seen a few scripts connecting Blender to the OpenAI API. Are there any local models trained on Blender? I don't care if it "connects" to Blender. Just having something I can ask questions to locally and privately would be phenomenal.
2023-08-31T19:58:34
https://www.reddit.com/r/LocalLLaMA/comments/166l9bq/3d_artist_is_there_any_model_for_blender/
JebryyathHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166l9bq
false
null
t3_166l9bq
/r/LocalLLaMA/comments/166l9bq/3d_artist_is_there_any_model_for_blender/
false
false
self
2
null
Trouble moving from Llama to Llama 2
3
I'm using the procedure outlined in this colab notebook: [https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO) It works great. It's the first one I've found that seems to really produce good results. The only problem is, the model is Llama, and I want to use Llama 2. I've subbed a couple different Llama 2 models in, and I can't get any of them to work. The loss fluctuates wildly up and down. I've tried adjusting learning rate, and a few other hyperparams, as well as using a variety of data sets. ​ Can anyone see what I'm missing?
2023-08-31T19:02:55
https://www.reddit.com/r/LocalLLaMA/comments/166jszj/trouble_moving_from_llama_to_llama_2/
Nathanielmhld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166jszj
false
null
t3_166jszj
/r/LocalLLaMA/comments/166jszj/trouble_moving_from_llama_to_llama_2/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
128k Context Llama 2 Finetunes Using YaRN Interpolation (successor to NTK-aware interpolation) and Flash Attention 2
262
GitHub (Includes links to models and preprint): [https://github.com/jquesnelle/yarn](https://github.com/jquesnelle/yarn) arXiv link: coming soon! Demo (Multiple-choice quiz on a novel of \~110k context): [https://colab.research.google.com/drive/1p7iNUQMbVGYWqrKMHvPPO4Q13fB5mwDF?usp=sharing](https://colab.research.google.com/drive/1p7iNUQMbVGYWqrKMHvPPO4Q13fB5mwDF?usp=sharing) This entire project is the culmination of 2 months of hard work from me, u/emozilla, EnricoShippole and honglu2875. (And a lot of compute, even though we are still heavily compute starved...) These models aren't fully converged yet; the base models have only been further pretrained for 400 steps (\~1.7B tokens), compared to the 1000 steps in Meta's PI paper. However, given that we have an improved interpolation method, the non-converged results are already superior to PI. We are claiming SoTA for open-source 128k context models. The GitHub repo provides the code and datasets that allow anyone to completely reproduce the results in the paper from scratch. We strongly believe in fully open-source and transparent research, and are releasing everything under the MIT license. (Except the models, which are bound under Meta's license.) Note that these are base models, not yet instruction-tuned, and the 13b-128k model can already achieve a 1-shot accuracy of \~52% on the Sherlock Holmes book quiz demo (the model has never seen long context QA); this tests the model's understanding of the story. All of our metrics point to these models being the new SoTA for long context models (see Experiments section of the paper), even if the models aren't fully trained yet. We expect performance to improve given more training. Stay tuned! All models include a ready-to-use implementation of FA2 if run using `trust_remote_code=True` in the transformers library. The 13b model requires approximately 360GB of VRAM (e.g. 8x48GB or 4x80GB) for the full 128k context size. 
Passkey retrieval results are not yet in the paper (still running), but preliminary results show >80% across the entire 128k context. Also big thanks to the entire Nous Research team, Stability AI, CarperAI, Eleuther AI, a16z and PygmalionAI for their insights and generous support of compute resources that enabled the completion of this research. (If I'm forgetting anyone please let me know asap!) We're also not forgetting everyone from the open-source community who participated and contributed to the discussions and code implementations on all social media and code sharing platforms. I say thanks to all of you! I would like to end this post with us all having a big round of applause for everyone! [As always, a PPL chart for good measure...](https://preview.redd.it/ith1xv7dshlb1.png?width=1209&format=png&auto=webp&s=95bd68feb05cb2a36bc97979f96d49aeb141b7dc) P.S. We need more compute in order to release fully converged 7b, 13b models and a 70b model. 128k context requires so much VRAM during training, it's insane... (For training, these models barely fit in 128 80GB A100s using DeepSpeed and FA2.) If anyone is feeling generous enough to provide large-scale training compute, we will have the 70b model out in no time.
2023-08-31T18:52:03
https://www.reddit.com/r/LocalLLaMA/comments/166jik4/128k_context_llama_2_finetunes_using_yarn/
bloc97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166jik4
false
null
t3_166jik4
/r/LocalLLaMA/comments/166jik4/128k_context_llama_2_finetunes_using_yarn/
false
false
https://b.thumbs.redditm…YcDLftbA-3hE.jpg
262
{'enabled': False, 'images': [{'id': 'BZSrezHRZHYsRr1vcKM9NmhztB0BCRyk3SjbicIY0FI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=108&crop=smart&auto=webp&s=bbd7d402b8eedcc23d9c16ce44e970b394a6fac4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=216&crop=smart&auto=webp&s=869af62fb0fb86e2ec91f12a3bea793705fad380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=320&crop=smart&auto=webp&s=b13481f76c96163673c0cf6120261685fe70a858', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=640&crop=smart&auto=webp&s=78b3ec0c68ba56c1dbb733d5362b6d5043843fd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=960&crop=smart&auto=webp&s=cb897ea69152ba6411085d3bb2681b5cd96da525', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=1080&crop=smart&auto=webp&s=5cf8be8e1f8f7f77ff694c034ecb4cc96e1a73ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?auto=webp&s=ddcbc2d2b859fdbed940d6f361fb507dd19df742', 'width': 1200}, 'variants': {}}]}
Llama-2 with 128k context length thanks to YaRN
83
2023-08-31T18:47:28
https://twitter.com/EnricoShippole/status/1697317625116742119?s=20
hackerllama
twitter.com
1970-01-01T00:00:00
0
{}
166je92
false
{'oembed': {'author_name': 'Enrico Shippole', 'author_url': 'https://twitter.com/EnricoShippole', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Releasing Yarn-Llama-2-13b-128k, a Llama-2 model, trained for 128k context length using YaRN scaling. The model was trained in collaboration with u/bloc97 and <a href="https://twitter.com/theemozilla?ref_src=twsrc%5Etfw">@theemozilla</a> of <a href="https://twitter.com/NousResearch?ref_src=twsrc%5Etfw">@NousResearch</a> and <a href="https://twitter.com/Void13950782?ref_src=twsrc%5Etfw">@Void13950782</a> of <a href="https://twitter.com/AiEleuther?ref_src=twsrc%5Etfw">@AiEleuther</a>. <a href="https://t.co/CmvZgHdJEF">pic.twitter.com/CmvZgHdJEF</a></p>&mdash; Enrico Shippole (@EnricoShippole) <a href="https://twitter.com/EnricoShippole/status/1697317625116742119?ref_src=twsrc%5Etfw">August 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/EnricoShippole/status/1697317625116742119', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_166je92
/r/LocalLLaMA/comments/166je92/llama2_with_128k_context_length_thanks_to_yarn/
false
false
https://b.thumbs.redditm…jU-PM_iEK9kA.jpg
83
{'enabled': False, 'images': [{'id': 'V6VT4I1rRJrroUZRSWQkoDyJhHPsirqib-AblynNy30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/993FeXP1LyP0fGszQgeS_S6P1DeKojDFcf685MGBzLo.jpg?width=108&crop=smart&auto=webp&s=d8b4eda9903ec1b3deac06d7405c3cbd97bce0a9', 'width': 108}], 'source': {'height': 70, 'url': 'https://external-preview.redd.it/993FeXP1LyP0fGszQgeS_S6P1DeKojDFcf685MGBzLo.jpg?auto=webp&s=65568ca4c71cd6a5d35684f3e28fc49f247847f0', 'width': 140}, 'variants': {}}]}
128k Context Llama 2 Finetunes Using YaRN Interpolation (successor to NTK-aware interpolation) and Flash Attention 2
1
[removed]
2023-08-31T18:46:28
https://www.reddit.com/r/LocalLLaMA/comments/166jdbh/128k_context_llama_2_finetunes_using_yarn/
bloc97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166jdbh
false
null
t3_166jdbh
/r/LocalLLaMA/comments/166jdbh/128k_context_llama_2_finetunes_using_yarn/
false
false
https://b.thumbs.redditm…YcDLftbA-3hE.jpg
1
{'enabled': False, 'images': [{'id': 'BZSrezHRZHYsRr1vcKM9NmhztB0BCRyk3SjbicIY0FI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=108&crop=smart&auto=webp&s=bbd7d402b8eedcc23d9c16ce44e970b394a6fac4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=216&crop=smart&auto=webp&s=869af62fb0fb86e2ec91f12a3bea793705fad380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=320&crop=smart&auto=webp&s=b13481f76c96163673c0cf6120261685fe70a858', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=640&crop=smart&auto=webp&s=78b3ec0c68ba56c1dbb733d5362b6d5043843fd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=960&crop=smart&auto=webp&s=cb897ea69152ba6411085d3bb2681b5cd96da525', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?width=1080&crop=smart&auto=webp&s=5cf8be8e1f8f7f77ff694c034ecb4cc96e1a73ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZU6AxVsLZjiF3E-jxsgtJQ9lCZ-Ed2W49RmJvkcbPus.jpg?auto=webp&s=ddcbc2d2b859fdbed940d6f361fb507dd19df742', 'width': 1200}, 'variants': {}}]}
128k Context Llama 2 Finetunes Using YaRN Interpolation (successor to NTK-aware interpolation) and Flash Attention 2
1
[removed]
2023-08-31T18:37:54
https://www.reddit.com/r/LocalLLaMA/comments/166j59j/128k_context_llama_2_finetunes_using_yarn/
bloc97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166j59j
false
null
t3_166j59j
/r/LocalLLaMA/comments/166j59j/128k_context_llama_2_finetunes_using_yarn/
false
false
https://b.thumbs.redditm…R2DWLLirOhXI.jpg
1
{'enabled': False, 'images': [{'id': '--9zHPHUP3AoAb8GNz4v4pSPddJXXlQHm5Cx9Vu6KXE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=108&crop=smart&auto=webp&s=9894ee258ab24c10cb56f13f2be2bc34a93d3d23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=216&crop=smart&auto=webp&s=cb3ca9e912a24f60e7d34f8cf08eb8f351d7f1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=320&crop=smart&auto=webp&s=4b3a81eee53693053ab7d7c7137d00568ead31c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=640&crop=smart&auto=webp&s=4bdeb98ef1692b06616bd95c3451b76249e5ef64', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=960&crop=smart&auto=webp&s=25323a8ba371b2bebbd9bdd143e0babeb930542e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?width=1080&crop=smart&auto=webp&s=8bbad01d3051277a56e1f5009ef1d7c8d4890763', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wG3-GYKNXOm8mm81hod5tuijuBKBdtTWXCP6Lz8Rrxo.jpg?auto=webp&s=971f608eea6a4b13b74f4274ece84fc988c6b44a', 'width': 1200}, 'variants': {}}]}
Finetuning Chat LLM (Llama2-chat): Data set best practices
12
What are some best practices when it comes to finetuning (Axolotl repo) an LLM (Llama2-chat) with regard to data quantity, quality, depth, etc. to make the finetuning and interaction meaningful? The goal is to feed it n number of snippets from various document sources so it can act as an assistant for those documents. Note: can't use embeddings at this time because of privacy, so will be using custom-built datasets formatted in JSONL as per the repo requirements.
2023-08-31T18:14:39
https://www.reddit.com/r/LocalLLaMA/comments/166ij98/finetuning_chat_llm_llama2chat_data_set_best/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166ij98
false
null
t3_166ij98
/r/LocalLLaMA/comments/166ij98/finetuning_chat_llm_llama2chat_data_set_best/
false
false
self
12
null
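The dataset question above reduces to emitting one JSON object per line. A sketch of turning document snippets into instruction-style JSONL records — the field names (`instruction`/`input`/`output`) are a common convention and an assumption here, not Axolotl's required schema; check the dataset format your config selects:

```python
import json

# Hypothetical document snippets to convert into supervised examples.
snippets = [
    {"source": "report_q2.pdf", "text": "Revenue grew 12% quarter over quarter."},
]

def to_record(snippet: dict) -> dict:
    """Wrap one snippet as one instruction-style training example."""
    return {
        "instruction": "Answer questions using the document excerpt.",
        "input": snippet["text"],
        "output": "",  # fill with a human- or model-written target answer
    }

# JSONL: one JSON object per line, no surrounding array.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for s in snippets:
        f.write(json.dumps(to_record(s)) + "\n")
```

Keeping the builder as a pure function makes it easy to audit quality record-by-record before training.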
Code Llama 34B F16 at 20t/s on a MacBook
117
2023-08-31T17:55:31
https://twitter.com/ggerganov/status/1697262700165013689
sleeper-2
twitter.com
1970-01-01T00:00:00
0
{}
166i0sw
false
{'oembed': {'author_name': 'Georgi Gerganov', 'author_url': 'https://twitter.com/ggerganov', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Full F16 precision 34B Code Llama at &gt;20 t/s on M2 Ultra <a href="https://t.co/7diki8zes4">pic.twitter.com/7diki8zes4</a></p>&mdash; Georgi Gerganov (@ggerganov) <a href="https://twitter.com/ggerganov/status/1697262700165013689?ref_src=twsrc%5Etfw">August 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ggerganov/status/1697262700165013689', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_166i0sw
/r/LocalLLaMA/comments/166i0sw/code_llama_34b_f16_at_20ts_on_a_macbook/
false
false
https://b.thumbs.redditm…-VTMut6DV9pI.jpg
117
{'enabled': False, 'images': [{'id': 'yDjqZZNr5Jhf8s-3eNrIbB_jkTuhTJPTbC4EGUIkLbQ', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/NHxQ79GDr5I3zzp9gT4sRc89fSL4CF4IuCfj7VHtPUc.jpg?width=108&crop=smart&auto=webp&s=f592c4a04583e809ecf686b210850d7cb842eaf1', 'width': 108}], 'source': {'height': 104, 'url': 'https://external-preview.redd.it/NHxQ79GDr5I3zzp9gT4sRc89fSL4CF4IuCfj7VHtPUc.jpg?auto=webp&s=59852efad297170f970101d77885529bc28ea52b', 'width': 140}, 'variants': {}}]}
I compared a few different Code Llama variants locally
2
2023-08-31T17:26:49
http://www.xethub.com/blog/comparing-code-llama-models-locally-macbook/
semicausal
xethub.com
1970-01-01T00:00:00
0
{}
166h9rd
false
null
t3_166h9rd
/r/LocalLLaMA/comments/166h9rd/i_compared_a_few_different_code_llama_variants/
false
false
https://b.thumbs.redditm…lVFOpkOwqA5I.jpg
2
{'enabled': False, 'images': [{'id': 'tLlKuE5EoPs6lN46OBoqyTS43XgbiyuXtsh2PGLKmmI', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=108&crop=smart&auto=webp&s=471c7bcf4ed32308d966385ed67170af64845c87', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=216&crop=smart&auto=webp&s=6486ada79914c3a30493c9a6273630dc84e485be', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=320&crop=smart&auto=webp&s=d54fc7a1eb9c2395b2059e23b3c88797ea9215f1', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=640&crop=smart&auto=webp&s=75fe4be5f89b32715235f54a7f7c41b3ea577788', 'width': 640}, {'height': 528, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=960&crop=smart&auto=webp&s=566438dcc0cffcf0bcef28c2efa3ce98239a467c', 'width': 960}, {'height': 594, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?width=1080&crop=smart&auto=webp&s=c6d22eb12fc5314fd42e7bca5548ea52629f7179', 'width': 1080}], 'source': {'height': 1400, 'url': 'https://external-preview.redd.it/nFP8ssVeCQUjuhFTrPJLEMOkHOegECPHkcBycy1MSsQ.jpg?auto=webp&s=8da6a8c5ed24c5bc7b6e91504e4dbcaf446812e7', 'width': 2544}, 'variants': {}}]}
Model parallelism with LoRA
16
I've been experimenting with fine-tuning Llama2 models using 3 A6000 GPUs, and I've been surprised to discover that none of the widely-discussed model parallelism methods actually distribute compute and memory across all the cards. Using HF Accelerate with `device_map='auto'` distributes the memory across cards, but it doesn't actually work in parallel. Only one card is actually used at a time. You can see this by running `nvidia-smi dmon` while the model is training (look at the `sm` column). Deepspeed zero3 and PyTorch FSDP don't take advantage of LoRA, because (AFAICT) they don't properly handle the frozen layers and as a result the memory usage of the activations and optimiser states is not distributed across the GPUs. This is discussed here: https://github.com/pytorch/pytorch/issues/91165 . Has anyone here found a good way to fine-tune large Llama2 models on multiple GPUs, where the model training doesn't fit on a single GPU, and that spreads the compute over the GPUs?
2023-08-31T17:23:05
https://www.reddit.com/r/LocalLLaMA/comments/166h6bx/model_parallelism_with_lora/
jeremyhoward
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166h6bx
false
null
t3_166h6bx
/r/LocalLLaMA/comments/166h6bx/model_parallelism_with_lora/
false
false
self
16
{'enabled': False, 'images': [{'id': 'QQsNn7b-lo6lk-hu0XsOUKBoGabYmEoJdf2Nqjgj3Ts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=108&crop=smart&auto=webp&s=edf14f36e15f2da0afac14595f5e398c4425771c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=216&crop=smart&auto=webp&s=51ea6e6c0e684ca9b10d204984fe389e5c90b7de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=320&crop=smart&auto=webp&s=d86ed787ae726ac32f9af4f164b88e0b0c120199', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=640&crop=smart&auto=webp&s=88bbd4d69b812d696c0f5704ec8e369c36ef424d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=960&crop=smart&auto=webp&s=dabd0ae491d2119a6177b5e8cd400bb381400ce4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?width=1080&crop=smart&auto=webp&s=2768b38b3f4199e41208ffdd2162c2d9bebe97c3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UwKA4IMS8kW5jvwLK1u8bYVOBs-1DGEksStdVjgNMU0.jpg?auto=webp&s=4850ef10d7471e700a15971f347ba0d3d627c045', 'width': 1200}, 'variants': {}}]}
🤖 Agenta: LLaMA-Compatible Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
2
2023-08-31T17:18:37
https://v.redd.it/rrzoiej79hlb1
resiros
/r/LocalLLaMA/comments/166h29a/agenta_llamacompatible_opensource_platform_for/
1970-01-01T00:00:00
0
{}
166h29a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rrzoiej79hlb1/DASHPlaylist.mpd?a=1696180716%2CY2M4MzE5NTBkYTliODAwYjFhOGQ3ZGEyNWJmZjM1NTliZjExNWI5N2ViNTYwZjVjMzM0ZjJiNGM3N2EwZGU5OA%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/rrzoiej79hlb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/rrzoiej79hlb1/HLSPlaylist.m3u8?a=1696180716%2CNDJlZjU1ODVmMjhjMWI1ZDEzZjI1YzE2YmU4YTI0MjY1MmIxYjE0M2FkYTA1ZWIyYjUyMzRmMGM0MDYwMDIwMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rrzoiej79hlb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_166h29a
/r/LocalLLaMA/comments/166h29a/agenta_llamacompatible_opensource_platform_for/
false
false
https://b.thumbs.redditm…jPKH7fvCKLsg.jpg
2
{'enabled': False, 'images': [{'id': 'hghi4oqfuhZuhSekD3-ctTZO2Hvfwh-szT7SqZIahYQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=108&crop=smart&format=pjpg&auto=webp&s=7a6086b320bd28f15684d42824839c707fd11d72', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=216&crop=smart&format=pjpg&auto=webp&s=df67366d1c55fa28f1cbd8e9090ccea7406563c5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=320&crop=smart&format=pjpg&auto=webp&s=06431391dbecfe1c5eaec21d115c57c8bb29961c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=640&crop=smart&format=pjpg&auto=webp&s=77a7e8a5499765ba60c9532e9195d39c22eef47a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=960&crop=smart&format=pjpg&auto=webp&s=595faf6e29b20e58c7c71b634bace06f448253a3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?width=1080&crop=smart&format=pjpg&auto=webp&s=171874c28ea069c40a739074a0de6d87e3763933', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/KrZ1T2cUm0mycqt5AuvjI72kMfQz9pb06oB3tvw0z9w.png?format=pjpg&auto=webp&s=48f9f965e9b87d6db99a6654bc780b4fe11e67d9', 'width': 3840}, 'variants': {}}]}
Local LLM roleplay using raw transformers library
4
Hi everyone, I'm having some difficulty using the transformers library with 'airoboros-l2-13b'. My goal is to give a 'persona' to the AI and talk with it. However, based on the 'prompt template' recommended for airoboros, I really don't get how to do it. I don't want to use any front-end, text UI or anything, because I later want to use the AI text output in Python code. Thank you, I hope the community will help me.
2023-08-31T16:54:42
https://www.reddit.com/r/LocalLLaMA/comments/166ggja/local_llm_roleplay_using_raw_transformers_library/
Possible-Ball-3423
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166ggja
false
null
t3_166ggja
/r/LocalLLaMA/comments/166ggja/local_llm_roleplay_using_raw_transformers_library/
false
false
self
4
null
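For the persona question above, the usual approach is plain string templating before calling `tokenizer`/`model.generate` — no front-end involved. A sketch of an airoboros-style prompt builder (the exact template wording is an assumption; verify it against the model card):

```python
def persona_prompt(persona: str, history: list, user_msg: str) -> str:
    """Build one prompt string from a persona description and chat history.

    `history` is a list of (user, assistant) message pairs.
    """
    lines = [f"A chat between a curious user and {persona}."]
    for user, assistant in history:
        lines.append(f"USER: {user}")
        lines.append(f"ASSISTANT: {assistant}")
    # End with an open ASSISTANT turn for the model to complete.
    lines.append(f"USER: {user_msg}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)
```

In a loop, you would feed this string through the tokenizer, call `model.generate`, strip the prompt from the decoded output, and append the new (user, assistant) pair back onto `history`.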
Has anyone managed to use fill-in-the-middle with CodeLlama in 4-bit?
3
Looked into ExLlama and others; there seems to be a feature request in llama.cpp. What about other libs?
2023-08-31T16:50:42
https://www.reddit.com/r/LocalLLaMA/comments/166gcx2/has_anyone_manged_to_use_fill_in_the_middle_with/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166gcx2
false
null
t3_166gcx2
/r/LocalLLaMA/comments/166gcx2/has_anyone_manged_to_use_fill_in_the_middle_with/
false
false
self
3
null
Falcon-40B on 2 NVIDIA RTX A6000 48GB
14
I want to run inference with Falcon-40B-instruct and I have 2 Nvidia A6000 with 48gb each. Do you know if I can "combine" the memory of these GPUs to run this model?
2023-08-31T15:38:08
https://www.reddit.com/r/LocalLLaMA/comments/166eiat/falcon40b_on_2_nvidia_rtx_a6000_48gb/
rancidog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166eiat
false
null
t3_166eiat
/r/LocalLLaMA/comments/166eiat/falcon40b_on_2_nvidia_rtx_a6000_48gb/
false
false
self
14
null
4-Bit + ExLlama on H100 or A100
9
I have heard that the H100 gives a drastic inference speed boost compared to the 3090s (up to 30x). I tested a 13B parameter model (4-bit + ExLlama) on the H100 but got only about a 30% speed boost. All GPUs running on RunPod. Is this normal or am I missing something?
2023-08-31T15:28:04
https://www.reddit.com/r/LocalLLaMA/comments/166e8cp/4_bit_exlamma_on_h100_or_a100/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166e8cp
false
null
t3_166e8cp
/r/LocalLLaMA/comments/166e8cp/4_bit_exlamma_on_h100_or_a100/
false
false
self
9
null
How much would you be willing to pay for a RTX 4090 with 48GB of VRAM
4
A single 48GB card would be able to run Llama2 70B with 4-bit quantization. Doing SLI with two GPUs seems like a good strategy to increase VRAM but it does seem to have performance downsides as well as practical concerns like taking up space in a chassis and power/cooling. It seems like the cheapest 24GB 4090s go for around $1600. How much would double the memory be worth? Not an apples-to-apples comparison since they use a different type of memory, but "workstation" class GPUs with 48GB of VRAM are all $4,000+ [View Poll](https://www.reddit.com/poll/166dgq7)
2023-08-31T14:58:44
https://www.reddit.com/r/LocalLLaMA/comments/166dgq7/how_much_would_you_be_willing_to_pay_for_a_rtx/
tripmine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166dgq7
false
null
t3_166dgq7
/r/LocalLLaMA/comments/166dgq7/how_much_would_you_be_willing_to_pay_for_a_rtx/
false
false
self
4
null
I want to deploy my fine tuned model like a chatbot
12
I don't mind if it's a paid service. I recently fine-tuned a model and now want to deploy it so my client can test with some users. I tried Replicate but got totally lost on how to push the model there. I would appreciate any advice from you guys.
2023-08-31T14:07:29
https://www.reddit.com/r/LocalLLaMA/comments/166c5fq/i_want_to_deploy_my_fine_tuned_model_like_a/
_Sneaky_Bastard_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166c5fq
false
null
t3_166c5fq
/r/LocalLLaMA/comments/166c5fq/i_want_to_deploy_my_fine_tuned_model_like_a/
false
false
self
12
null
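One low-effort way to let a client test a fine-tuned model, sketched with only the standard library: wrap the model behind a tiny JSON endpoint and point any simple chat page at it. `generate_reply` is a placeholder to swap for your actual model call (transformers pipeline, llama.cpp, a Replicate client, etc.); the port and payload shape are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_reply(message: str) -> str:
    # Placeholder -- call your fine-tuned model here.
    return f"(model reply to: {message})"

class ChatHandler(BaseHTTPRequestHandler):
    """POST {"message": "..."} -> {"reply": "..."}"""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        payload = json.dumps({"reply": generate_reply(body.get("message", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ChatHandler).serve_forever()
```

Hosted alternatives (Replicate, a managed Gradio/Streamlit space, etc.) do the same job with less plumbing; this sketch just shows how little the self-hosted path actually requires.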
Is anyone using Llama 2 for serious financial work?
1
[removed]
2023-08-31T13:57:59
https://www.reddit.com/r/LocalLLaMA/comments/166bwrv/is_anyone_using_llama_2_for_serious_financial_work/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166bwrv
false
null
t3_166bwrv
/r/LocalLLaMA/comments/166bwrv/is_anyone_using_llama_2_for_serious_financial_work/
false
false
default
1
null
Code Llama digression
1
I use Code Llama with llama.cpp. I do not know why, but sometimes Code Llama digresses a lot. It arbitrarily changes its name from the one given by \`prompts/chat-with-bob.txt\`. So yesterday, Code Llama changed from "Bob" to "Doctor" (visible in the prompt) and started a medical consultation. Today, it changed its name from Bob to "Art", who is obviously a computer tech. What am I doing wrong? What can I do to stop these annoying digressions?
2023-08-31T13:47:21
https://www.reddit.com/r/LocalLLaMA/comments/166bnj1/code_llama_digression/
RobotEntropy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166bnj1
false
null
t3_166bnj1
/r/LocalLLaMA/comments/166bnj1/code_llama_digression/
false
false
self
1
null
Cerebras, G42's Inception, and MBZUAI announce Jais a 13B parameter model that trained on a new 395 billion token Arabic-English-Code dataset
19
2023-08-31T13:07:32
https://huggingface.co/inception-mbzuai
maroule
huggingface.co
1970-01-01T00:00:00
0
{}
166ap2h
false
null
t3_166ap2h
/r/LocalLLaMA/comments/166ap2h/cerebras_g42s_inception_and_mbzuai_announce_jais/
false
false
https://b.thumbs.redditm…f5HV6lg_mXHo.jpg
19
{'enabled': False, 'images': [{'id': '0aRd9X0kpVtiEBxYuP4bF4GEppnt9yPik8OjEWz4f8Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=108&crop=smart&auto=webp&s=3c552bf6c07ba0f3989a232364d6258df098be04', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=216&crop=smart&auto=webp&s=904d47d74618feea6878c701444996db7f0e2f6e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=320&crop=smart&auto=webp&s=79b5a67418f6837dd8fe9a28090227f3dbc38c38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=640&crop=smart&auto=webp&s=0da75da274ba1b7921401ff5ab42f436c956cdc5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=960&crop=smart&auto=webp&s=efd117a4b24e583e23dcac2e28ed01c2ec28aec1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?width=1080&crop=smart&auto=webp&s=78bb1e7e7b1053cc950420d2c21d6878f2c22589', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p2HnJ-0BA9ymRyqDEq24xs62SgKEMQ52SL8WG4dNQPg.jpg?auto=webp&s=756b29c8465848d4af246bcdd66820a4dff1db02', 'width': 1200}, 'variants': {}}]}
AI tool to classify sentences (not sentiment)
1
Hi all, I've got a ton of sentences that I'd like to try to analyze. Mostly what I'm wondering is if there's an ai tool that looks at whether or not the sentence is structurally correct. I'll kind of define what I want below. I've been through a lot on hugging face and have a feeling I just don't know what to search for. They're conversations which I have split into utterances and able to put back together as a conversation. Necessary: Look at a sentence and say whether it makes sense (return a score of how "sensible" it is) Nice to have: Be able to return counts for parts of speech (number of nouns, verbs, adjectives etc). Be able to both check the sentence on it's own, and as part of a conversation. Be able to find errant punctuation or characters. Spelling and grammar would be cool but not necessary (returning issues with them)
2023-08-31T12:50:11
https://www.reddit.com/r/LocalLLaMA/comments/166aama/ai_tool_to_classify_sentences_not_sentiment/
I_M_Scott
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166aama
false
null
t3_166aama
/r/LocalLLaMA/comments/166aama/ai_tool_to_classify_sentences_not_sentiment/
false
false
self
1
null
What kinda of models could I train and run with 2x 2080 TI gpus?
1
What kinda of models could I train and run with 2x 2080 TI gpus? Could I finetune the 35B model with those?
2023-08-31T12:40:47
https://www.reddit.com/r/LocalLLaMA/comments/166a2zn/what_kinda_of_models_could_i_train_and_run_with/
MaleficentArgument51
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166a2zn
false
null
t3_166a2zn
/r/LocalLLaMA/comments/166a2zn/what_kinda_of_models_could_i_train_and_run_with/
false
false
self
1
null
What kinda of models could I train and run with 2x 2080 TI gpus?
0
What kinda of models could I train and run with 2x 2080 TI gpus? Could I finetune the 35B model with those?
2023-08-31T12:40:33
https://www.reddit.com/r/LocalLLaMA/comments/166a2sg/what_kinda_of_models_could_i_train_and_run_with/
MaleficentArgument51
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166a2sg
false
null
t3_166a2sg
/r/LocalLLaMA/comments/166a2sg/what_kinda_of_models_could_i_train_and_run_with/
false
false
self
0
null
convert WizardCoder-15B-V1.0 pytorch_model.bin to "gguf" format
0
I know that there is gguf Wizard Coder model in gguf format online, but I want to try different quantization. I tried with `llama.cpp`'s `convert`, but seems the Wizard Coder config.json lacks of parameters, at least: - hidden_size - num_hidden_layers - intermediate_size
2023-08-31T10:54:30
https://www.reddit.com/r/LocalLLaMA/comments/1667uy3/convert_wizardcoder15bv10_pytorch_modelbin_to/
RobotEntropy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1667uy3
false
null
t3_1667uy3
/r/LocalLLaMA/comments/1667uy3/convert_wizardcoder15bv10_pytorch_modelbin_to/
false
false
self
0
null
General guidance on my project please.
0
Could someone please help with fleshing out the steps that I need to take to get my project underway? Here is the info: I have a rented ubuntu server(ryzen 5900x, 64gb ram) that I can access remotely. No graphics card and no graphical interface on the server. I want to run a uncensored LLM on this rig. I tried downloading koboldcpp+some llama model, but kobold has a graphical interface and it was suuper slow through X11 and xming server. 1. How would i run an llm on ubuntu with only command line. 2. How would I give it a persistent character? 3. Is Langchain what i need? 4. Do i need to set up a code interpreter on the server to run it all? I think I just need "big picture" steps to understand how it all sits together. Thanks.
2023-08-31T10:15:01
https://www.reddit.com/r/LocalLLaMA/comments/16674h6/general_guidance_on_my_project_please/
toorik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16674h6
false
null
t3_16674h6
/r/LocalLLaMA/comments/16674h6/general_guidance_on_my_project_please/
false
false
self
0
null
Easiest way to fine-tune local llama on local documents?
1
[removed]
2023-08-31T10:11:37
https://www.reddit.com/r/LocalLLaMA/comments/166729y/easiest_way_to_finetune_local_llama_on_local/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
166729y
false
null
t3_166729y
/r/LocalLLaMA/comments/166729y/easiest_way_to_finetune_local_llama_on_local/
false
false
self
1
null
oobabooga WebUI, how to load Airoboros or RuGPT?
1
[removed]
2023-08-31T08:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1665imc/oobabooga_webui_how_to_load_airoboros_or_rugpt/
Hatred_grows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1665imc
false
null
t3_1665imc
/r/LocalLLaMA/comments/1665imc/oobabooga_webui_how_to_load_airoboros_or_rugpt/
false
false
https://b.thumbs.redditm…OlHXWVzV8Qgg.jpg
1
{'enabled': False, 'images': [{'id': 'BSGypydhj3aqZI6EFooEI4Yg4z_M69pfAA9SmTL6gTg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=108&crop=smart&auto=webp&s=1363007b542f1f33e3a73dfb37d0de15806c13b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=216&crop=smart&auto=webp&s=fa7ac271ff46a1d704ff9f51a23e76ba10377419', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=320&crop=smart&auto=webp&s=16d086d39690fb3e1e2399ddc951ba8dc968a6da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=640&crop=smart&auto=webp&s=940c0499a630d9bfae458e598159d1e86a0a0b4f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=960&crop=smart&auto=webp&s=835d633a92d542e9dc0ad9dd5498c42e76778f34', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?width=1080&crop=smart&auto=webp&s=8a31e9ca330491987c6ed91c6ef8a6ef5df63d10', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zFynjMQ7SyJu_FUI1H1baj_aJj6tdZrubpz8CiFhw5w.jpg?auto=webp&s=57862c6a47716edc1659a496ea94ae277ba44ee1', 'width': 1200}, 'variants': {}}]}
Q&A bot with conversation memory
1
[removed]
2023-08-31T07:56:07
https://www.reddit.com/r/LocalLLaMA/comments/1664p7a/qa_bot_with_conversation_memory/
anindya_42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1664p7a
false
null
t3_1664p7a
/r/LocalLLaMA/comments/1664p7a/qa_bot_with_conversation_memory/
false
false
self
1
null
Your best model?
15
I’m running in my KVM virtual server (24GB RAM) and my best model so far is: 7B: llama-2-Chat 13B: airoboros l2 2.1 What about yours?
2023-08-31T07:08:31
https://www.reddit.com/r/LocalLLaMA/comments/1663v5e/your_best_model/
MichaelBui2812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1663v5e
false
null
t3_1663v5e
/r/LocalLLaMA/comments/1663v5e/your_best_model/
false
false
self
15
null
L0 Airdrop Odyssey: Navigating the Crypto Unknown
1
[removed]
2023-08-31T06:44:11
https://www.reddit.com/r/LocalLLaMA/comments/1663fmz/l0_airdrop_odyssey_navigating_the_crypto_unknown/
Maximum-Unhappy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1663fmz
false
null
t3_1663fmz
/r/LocalLLaMA/comments/1663fmz/l0_airdrop_odyssey_navigating_the_crypto_unknown/
false
false
self
1
null
RoPE Feq Base for CodeLLaMA
12
I've seen a few contradictory statements on what the value for RoPE freb base in CodeLLaMA models should be. Any reason that RoPE feq base value should not be set to 10^6 for CodeLLaMA even if you are not using long context? Does it even make any difference, has anyone try running perplexity test with the value set at 1 and at 10^6 ?
2023-08-31T06:29:08
https://i.redd.it/qgb1bbmz3elb1.jpg
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
166366q
false
null
t3_166366q
/r/LocalLLaMA/comments/166366q/rope_feq_base_for_codellama/
false
false
https://b.thumbs.redditm…eYhaRoGBW6VA.jpg
12
{'enabled': True, 'images': [{'id': 'rx1yuBljknMD1wbXooHDs6XcDuvG8NcWdVBVDyLgrio', 'resolutions': [{'height': 15, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=108&crop=smart&auto=webp&s=e844163c330903c9c07c83ef2aee48da844cff70', 'width': 108}, {'height': 30, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=216&crop=smart&auto=webp&s=406f67a19baa9ccf70373b7c1c5dbdea58d71366', 'width': 216}, {'height': 45, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=320&crop=smart&auto=webp&s=f05f95609b36686432a9aae084d82164150c85e9', 'width': 320}, {'height': 90, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=640&crop=smart&auto=webp&s=229b621aef06d14ab46f21bd2f87a4fdb918c500', 'width': 640}, {'height': 135, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=960&crop=smart&auto=webp&s=a85d74fcb2dca765d141494b493d597b67eeee7a', 'width': 960}, {'height': 152, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?width=1080&crop=smart&auto=webp&s=2355b1085e3d57a713989a896c59c4dc044012c8', 'width': 1080}], 'source': {'height': 300, 'url': 'https://preview.redd.it/qgb1bbmz3elb1.jpg?auto=webp&s=70685d02e709ff687ec4e024d096fc3f00cad2df', 'width': 2129}, 'variants': {}}]}
Creating a chatbot for work using open source vs openai
5
I was thinking of showing some initiative and creating a chatbot for the company I work for. My company hosts thousands of clients product catalogs and the antique search engine isn't exactly the most user friendly and most speedy way to find what your looking for. If I'm searching for product X that does ABC but the company doesn't make product X anymore and I need a similar product that does exactly ABC. that's where the chatbot will shine, it will recommend other products that are similar from clients with catalogs on our site. Let's say I get a spreadsheet with every client, their catalogs with all the specs of their products etc.. How much trouble would it be to create and deploy a chatbot that is capable of the scenario I presented earlier and more in openai vs opensource?
2023-08-31T06:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1662zq6/creating_a_chatbot_for_work_using_open_source_vs/
Erdeem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1662zq6
false
null
t3_1662zq6
/r/LocalLLaMA/comments/1662zq6/creating_a_chatbot_for_work_using_open_source_vs/
false
false
self
5
null
Meta Research publishes LM-Infinite Paper
1
[deleted]
2023-08-31T06:14:05
https://arxiv.org/abs/2308.16137
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1662wjh
false
null
t3_1662wjh
/r/LocalLLaMA/comments/1662wjh/meta_research_publishes_lminfinite_paper/
false
false
https://a.thumbs.redditm…9TLahCsBMC-0.jpg
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Suggested reading?
0
[removed]
2023-08-31T05:43:49
https://www.reddit.com/r/LocalLLaMA/comments/1662cs7/suggested_reading/
Seclusion72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1662cs7
false
null
t3_1662cs7
/r/LocalLLaMA/comments/1662cs7/suggested_reading/
false
false
self
0
null
Is this contextsize limit I am hitting? kcpp really slows down.
1
[removed]
2023-08-31T05:34:07
https://www.reddit.com/r/LocalLLaMA/comments/16626i3/is_this_contextsize_limit_i_am_hitting_kcpp/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16626i3
false
null
t3_16626i3
/r/LocalLLaMA/comments/16626i3/is_this_contextsize_limit_i_am_hitting_kcpp/
false
false
self
1
null
Best Way to do this?
1
[removed]
2023-08-31T04:54:46
https://www.reddit.com/r/LocalLLaMA/comments/1661gmc/best_way_to_do_this/
himaw26303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1661gmc
false
null
t3_1661gmc
/r/LocalLLaMA/comments/1661gmc/best_way_to_do_this/
false
false
self
1
null
How to determine max model size for 12 Gb VRAM & 32Gb RAM?
1
[removed]
2023-08-31T04:53:59
https://www.reddit.com/r/LocalLLaMA/comments/1661g3x/how_to_determine_max_model_size_for_12_gb_vram/
Ok-Conversation-2418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1661g3x
false
null
t3_1661g3x
/r/LocalLLaMA/comments/1661g3x/how_to_determine_max_model_size_for_12_gb_vram/
false
false
default
1
null
Llama2 13B - 4070ti
13
Hello! Im new to the local llms topic so dont judge me. I set up the oobabooga WebUI from github and tested some models so i tried Llama2 13B (theBloke version from hf). I tested the chat GGML and the for gpu optimized GPTQ (both with the correct model loader). With the default settings for model loader im wating like 3 seconds until the response stream starts. I thought with my 4070 ti it would be much faster. I double checked the cuda installation and everything seems fine. Im using the WSL2 inside W11 (i like linux more than windows), could that be the reason for the response delay?
2023-08-31T04:45:44
https://www.reddit.com/r/LocalLLaMA/comments/1661ag6/llama2_13b_4070ti/
Able_Stop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1661ag6
false
null
t3_1661ag6
/r/LocalLLaMA/comments/1661ag6/llama2_13b_4070ti/
false
false
self
13
null
Llama 2
1
[removed]
2023-08-31T04:34:16
https://www.reddit.com/r/LocalLLaMA/comments/16612oc/llama_2/
BadriMLJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16612oc
false
null
t3_16612oc
/r/LocalLLaMA/comments/16612oc/llama_2/
false
false
self
1
null
Is LLaMA 2 34B not coming?
1
Is LLaMA 2 34B not coming? They seem to have a code version but why not a regular model?
2023-08-31T04:10:50
https://www.reddit.com/r/LocalLLaMA/comments/1660mht/is_llama_2_34b_not_coming/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1660mht
false
null
t3_1660mht
/r/LocalLLaMA/comments/1660mht/is_llama_2_34b_not_coming/
false
false
self
1
null
Looking for testers: I'm hosting open-source LLMs for free.
27
I'm working on a [project](https://www.fullmetal.ai) that's a distributed network of hosted LLMs. I believe this can be useful for those who don't have a 1000+ USD/mo budget to host their own LLM. Also, it's much easier & quicker than setting up your own VM. If you're building a startup/experimental app that requires open-source LLM, I will gladly provide free API access, assuming your usage is relatively low (100k tokens/day or less). Please DM me if you are interested. Happy to answer any questions. Thanks!
2023-08-31T03:54:10
https://www.reddit.com/r/LocalLLaMA/comments/1660abe/looking_for_testers_im_hosting_opensource_llms/
m0dE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1660abe
false
null
t3_1660abe
/r/LocalLLaMA/comments/1660abe/looking_for_testers_im_hosting_opensource_llms/
false
false
self
27
{'enabled': False, 'images': [{'id': 'ibz-WbgWLTq9fNGmdvXvmXTzV2aIzevVHMd_bWL6pI8', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=108&crop=smart&auto=webp&s=457a261efe0bdaad9b0facd7c5344552c3630b55', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=216&crop=smart&auto=webp&s=b51603741608f2575127261f0b3a8bcccda3ba67', 'width': 216}, {'height': 153, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=320&crop=smart&auto=webp&s=8cb897cee67687a0c47b8a1686bb71b02b5cefc2', 'width': 320}, {'height': 306, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=640&crop=smart&auto=webp&s=3f32f849c0e7f2519c0692fe6354ce96bb977afb', 'width': 640}, {'height': 459, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=960&crop=smart&auto=webp&s=4240a126c8e18e326f265d4a81f100ec4c1f7740', 'width': 960}, {'height': 517, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?width=1080&crop=smart&auto=webp&s=aa311e4d22eadc1367aa7febaa641c75bf25b2c1', 'width': 1080}], 'source': {'height': 911, 'url': 'https://external-preview.redd.it/ih6BzyHit_orBvtNDmmd8ZxaCux0xj8dbcoGoogR9nE.jpg?auto=webp&s=1b966e536cb16d12efedee8f2341bc7593891f86', 'width': 1902}, 'variants': {}}]}
LLama 2 7B and 13B in the browser via WebGPU
3
2023-08-31T03:50:43
https://thiggle.com/local-llm
sublimefunk
thiggle.com
1970-01-01T00:00:00
0
{}
16607w6
false
null
t3_16607w6
/r/LocalLLaMA/comments/16607w6/llama_2_7b_and_13b_in_the_browser_via_webgpu/
false
false
https://a.thumbs.redditm…DaM-Y-ZlKAF4.jpg
3
{'enabled': False, 'images': [{'id': 'qD0XgrR1PRzf6wj7d5HKfhz2TB6LOaoRDoPJiDUjdsw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=108&crop=smart&auto=webp&s=0fdc6e2a2852dac7fc2c7b298e8e5bbb0707ab47', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=216&crop=smart&auto=webp&s=9cff88c1c1af206334448b2dd31788c27eb09e98', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=320&crop=smart&auto=webp&s=e53a56036df71f612508c6da4142cf85ce0428dd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=640&crop=smart&auto=webp&s=9501dc791927f60b66972aed2cb37882b12c2070', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=960&crop=smart&auto=webp&s=96938ee5600234c44006af7adf6643ce159a70ac', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?width=1080&crop=smart&auto=webp&s=e6cb76507f47d3bf409b60c80ca28d23968bfc89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/S_k4VWqTvOHibc6HBs8xYmcId_zC25eK5sfu1wePLGA.jpg?auto=webp&s=484943f6ccb44d960a7bfb5e19002c16ddc4eaa9', 'width': 1200}, 'variants': {}}]}
Lightweight LLama variants for Mobile applications
4
Are there LLama v2 or other variants of LLama (Camel, Alpaca) that can be utilized on mobile devices? I am looking for lightweight models essentially, preferably with python bindings so that i can run that on my jupyter notebook.
2023-08-31T03:34:04
https://www.reddit.com/r/LocalLLaMA/comments/165zvle/lightweight_llama_variants_for_mobile_applications/
thesithlord27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165zvle
false
null
t3_165zvle
/r/LocalLLaMA/comments/165zvle/lightweight_llama_variants_for_mobile_applications/
false
false
self
4
null
[R] LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models
114
2023-08-31T03:26:18
https://i.redd.it/2opuwgax6dlb1.png
ntortellini
i.redd.it
1970-01-01T00:00:00
0
{}
165zpn9
false
null
t3_165zpn9
/r/LocalLLaMA/comments/165zpn9/r_lminfinite_simple_onthefly_length/
false
false
https://a.thumbs.redditm…zq8JNoJm6s98.jpg
114
{'enabled': True, 'images': [{'id': 'YaivsmwidzU1zSPZw6YOL4YfR58kZiimrOnfFCkoEtE', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=108&crop=smart&auto=webp&s=c1778e96cbe101240b1aa235185235ff2ffff212', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=216&crop=smart&auto=webp&s=1a08c2ded60e67cc4a215f256a72b38251fc2983', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=320&crop=smart&auto=webp&s=1ef6d16cd9b1ba4b6308b0a474c46dbeec9e9129', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=640&crop=smart&auto=webp&s=47b4db7d7afc78bb15abd43d61c3cf2edafafe53', 'width': 640}, {'height': 447, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=960&crop=smart&auto=webp&s=fbb2d5c0eaffb60383733f00ce44c883a47e9113', 'width': 960}, {'height': 503, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?width=1080&crop=smart&auto=webp&s=c8b37c5312276ab6a005c22d58f4fe5465501491', 'width': 1080}], 'source': {'height': 944, 'url': 'https://preview.redd.it/2opuwgax6dlb1.png?auto=webp&s=3133f1e92134aa577334833ab48ea976435d29bc', 'width': 2024}, 'variants': {}}]}
WizardLM vs. Phind-CodeLlama - Test yourself Battleground
1
[removed]
2023-08-31T03:26:04
https://www.reddit.com/r/LocalLLaMA/comments/165zphb/wizardlm_vs_phindcodellama_test_yourself/
VideoTo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165zphb
false
null
t3_165zphb
/r/LocalLLaMA/comments/165zphb/wizardlm_vs_phindcodellama_test_yourself/
false
false
self
1
null
Compute Express Link aka CXL
1
Is cxl memory something that will help make large models easier to run or not really? https://www.asteralabs.com/product-details/aurora-a-series/
2023-08-31T02:54:40
https://www.reddit.com/r/LocalLLaMA/comments/165z17q/compute_express_link_aka_cxl/
Ergosyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165z17q
false
null
t3_165z17q
/r/LocalLLaMA/comments/165z17q/compute_express_link_aka_cxl/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CKuF0-D924_Ztr8tBFLdsMJyTAWJW-zYH5Pn48BocOk', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=108&crop=smart&auto=webp&s=738f18ab1491c35a85da4b0e5058c714c4ca85c6', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=216&crop=smart&auto=webp&s=a80910f775ecbe2de1ae81daaf5b9d40d597096c', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=320&crop=smart&auto=webp&s=e361720378a3ad7f46af1922df962ab366e450c6', 'width': 320}, {'height': 521, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?width=640&crop=smart&auto=webp&s=da7dca6fe35a395f35651fc436d90f657b228904', 'width': 640}], 'source': {'height': 550, 'url': 'https://external-preview.redd.it/YREXUyVQH_L_aYlZSATW50HahrwkOJS_WoSwtMD6fEs.jpg?auto=webp&s=929935f618e712fe1a3950f763b4e32c8b1beb93', 'width': 675}, 'variants': {}}]}
Perplexity of Q4_K_M vs GPTQ 64G?
2
Anyone have 2 identical models in these quants and can run perplexity in ooba? Something quick like PTB_NEW? The test can run in HF. Might shed some light as to whether it's better to get the GPTQ of a 70b or the GGXX. The Q4 is the last that fits in 48g, extra context not withstanding.
2023-08-31T02:33:20
https://www.reddit.com/r/LocalLLaMA/comments/165ykbp/perplexity_of_q4_k_m_vs_gptq_64g/
a_beautiful_rhind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165ykbp
false
null
t3_165ykbp
/r/LocalLLaMA/comments/165ykbp/perplexity_of_q4_k_m_vs_gptq_64g/
false
false
self
2
null
How to set up CodeLlama on Exllama
4
I've been trying to set up various extended context models on Exllama and I just want to make sure I'm doing things properly. I've been able to get longer responses out of the box if I set the max seq len to longer but the responses start to get weird/unreliable after 4k tokens. Is there anything else I need to do to get better responses? By extension, is there anything I need to do for the vicuna-1.5-16k models? I've been setting the compress_pos_emb to 4.0 which allows me to get the longer context but things still get weird at times and I just want to make sure I'm doing things correctly.
2023-08-31T01:57:55
https://www.reddit.com/r/LocalLLaMA/comments/165xqw4/how_to_set_up_codellama_on_exllama/
a_slay_nub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165xqw4
false
null
t3_165xqw4
/r/LocalLLaMA/comments/165xqw4/how_to_set_up_codellama_on_exllama/
false
false
self
4
null
Formatting Training Datasets? Getting pwned
5
I've been training LLaMA-2-7B-bf16-sharded models with datasets off HuggingFace inside Google Colabs. The training loss goes down and they work well. I've tried a couple different datasets and it works well. Then I try the same notebook code with my own dataset (uploaded to HuggingFace) and all sorts of bizarre things happen. Each epoch takes WAY longer, the training loss jumps around, the training loss sometimes starts at 0.000 sometimes, etc. I'm completely self-taught here off Reddit and Youtube, so am completely ignorant of a lot of best practices. After looking at a lot of other datasets, I'm beginning to **suspect that the formatting of my data files is off**?? Maybe the ###human/###assistant thing? I wanted to share it here and then get some feedback / get ripped a new one by anyone kind enough / aggressive enough to indulge me. All insight is appreciated. So the dataset is 5,000 diary entries and then 5,000 keyword lists for each diary entry. A typical input/output pair looks like this {diary entry} : {list of keywords from diary entry} I have formatted the file as a JSONL as such: {“text”: “###Human: I cooked dinner tonight. It’s become such a routine, but I still put effort into it. I made his favorite, chicken parmesan. I set the table with candles, hoping to create a romantic atmosphere. But he barely looked up from his phone, engrossed in his stupid game..### Assistant: [‘routine’,‘candles’,‘romantic’,‘connection’,‘intimacy’]“} In my HuggingFace account there's just these two files: .gitattributes DiaryData_5k.jsonl
2023-08-31T01:36:34
https://www.reddit.com/r/LocalLLaMA/comments/165x9if/formatting_training_datasets_getting_pwned/
TaleOfTwoDres
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165x9if
false
null
t3_165x9if
/r/LocalLLaMA/comments/165x9if/formatting_training_datasets_getting_pwned/
false
false
self
5
null
Using 2 GPUs?
3
I’m currently interested to buy another GPU as my 4070ti is lacking the VRAM needed for the best models. Will this even work? And, if it does do I need two of the same GPUs. Thank for any responses.
2023-08-31T00:17:18
https://www.reddit.com/r/LocalLLaMA/comments/165vg4p/using_2_gpus/
marv34001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165vg4p
false
null
t3_165vg4p
/r/LocalLLaMA/comments/165vg4p/using_2_gpus/
false
false
self
3
null
Does ram speed and size matter when your GPU can load the model?
5
Hi I'm new to local LLM stuff. Like the title says, I was wondering if the RAM speed and size affect the text generating performance. For instance, if an RTX3060 can load a 13b size model, will adding more RAM boost the performance? I'm planning on setting up my PC like this - CPU: Intel i5 13600k - M/B: Gigabyte B660m Aorus Pro - RAM: DDR4 16GB 3200Mhz - GPU: RTX3060 12GB
2023-08-30T23:51:23
https://www.reddit.com/r/LocalLLaMA/comments/165uuep/does_ram_speed_and_size_matter_when_your_gpu_can/
Sufficient_Bit_3312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165uuep
false
null
t3_165uuep
/r/LocalLLaMA/comments/165uuep/does_ram_speed_and_size_matter_when_your_gpu_can/
false
false
self
5
null
The CodeLlama BASE is strangely fantastic general purpose for finetuning!
118
I'm having a sudden significant jump in quality with the CodeLLaMa 13b and 34B as bases for finetuning. ( BASE, not the python or instruct)You can so easily finetune it into anything - very flexible. Don't let the "Code" fool you. Strangely, even without any finetuning, you can simply use a system prompt on the base model and it will follow it very nicely as if it was already finetuned, while the previous versions (especially LLama 1 base) would, as expected just go into a neverending schizophrenic twist. I think the CodeLLama is one of the best BASE now, in my opinion for further tweaking. Definitely try it.
2023-08-30T22:46:59
https://www.reddit.com/r/LocalLLaMA/comments/165tb0q/the_codellama_base_is_strangely_fantastic_general/
FPham
self.LocalLLaMA
2023-08-31T13:47:51
0
{}
165tb0q
false
null
t3_165tb0q
/r/LocalLLaMA/comments/165tb0q/the_codellama_base_is_strangely_fantastic_general/
false
false
self
118
null
What do people think of NovelAI?
8
Hey all, recently I've seen a post(s) which briefly mentioned frustration with NovelAI limitations. As someone who uses both local (3090) and NovelAI, I find that I actually prefer NovelAI for most things, especially with their new 13B model (Kayra). Curious if anyone can share their experiences on when local works better. (I swear I'm not a shill...)
2023-08-30T21:44:20
https://www.reddit.com/r/LocalLLaMA/comments/165rq8v/what_do_people_think_of_novelai/
TheOtherKaiba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165rq8v
false
null
t3_165rq8v
/r/LocalLLaMA/comments/165rq8v/what_do_people_think_of_novelai/
false
false
self
8
null
Anyone familiar with the error "ggml_new_tensor_impl: not enough space in the context's memory pool"?
1
I'm using a 3080 10G & 32GB RAM. I can use 7B and 13B models no problem but I've tried 2 different 22B (GGML) models in Oobabooga and can load 20/43 layers without getting the normal OOM error. However, if I enter a prompt that has more than roughly 1732 characters (including spaces) / 325 words then I get the error: *"ggml\_new\_tensor\_impl: not enough space in the context's memory pool (needed 13421120, available 12582912)... OSError: exception: access violation writing 0x0000000000000050"* I've also tried in my own program using LLama.cpp (through LLamaSharp) and with the same prompt length as above, I get the similar error: *"ggml\_new\_object: not enough space in the context's memory pool (needed 13239488, available 12747472)... Fatal error. System.AccessViolationException: Attempted to read or write protected memory."* If I limit the prompt length then it works fine but obviously, that's not ideal when the maximum is meant to be 4096 tokens. I have Googled the errors which leads to a few github pages but I'm not familiar with using github so trying to read through the threads to find a solution is confusing.
2023-08-30T21:24:46
https://www.reddit.com/r/LocalLLaMA/comments/165r88p/anyone_familiar_with_the_error_ggml_new_tensor/
PsillyPseudonym
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165r88p
false
null
t3_165r88p
/r/LocalLLaMA/comments/165r88p/anyone_familiar_with_the_error_ggml_new_tensor/
false
false
self
1
null
WizardCoder vs. Phind-V2 Prelim Pass@1 Comparison
34
**Take these #'s with a grain of salt** \- precision comparison is not ready as the framework and my setup is evolving. Both perform similarly across 400 LeetCode problems. What we most want to see next? 8-bit quant or Llama-Code? [Comparison Results](https://preview.redd.it/kmvuxpgy8blb1.png?width=1052&format=png&auto=webp&s=3c85e59e106e19ecf3e431252ea1f479ebdfd1ed) Some musings about this work: * In this framework, Phind-v2 slightly outperforms their quoted number while WizardCoder underperforms. This is because the replication approach differs slightly from what each quotes. * In an ideal world, we can converge onto a more robust benchmarking framework w/ many flavors of evaluation which new model builders can sync their model into at deployment. * I find that my own results vary run on run as I tweak local settings. I'm not an expert yet in what the key sources of variance are, but I'm trying to understand this more through trial and error. * If you'd like to follow along or contribute to more results, please check the repo here -[https://github.com/emrgnt-cmplxty/zero-shot-replication](https://github.com/emrgnt-cmplxty/zero-shot-replication)
2023-08-30T20:52:47
https://www.reddit.com/r/LocalLLaMA/comments/165qeb3/wizardcoder_vs_phindv2_prelim_pass1_comparison/
docsoc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165qeb3
false
null
t3_165qeb3
/r/LocalLLaMA/comments/165qeb3/wizardcoder_vs_phindv2_prelim_pass1_comparison/
false
false
https://b.thumbs.redditm…aqzf_7IWnk5I.jpg
34
{'enabled': False, 'images': [{'id': '1t6PPCpawxkCu-z59zAZEYBQ3HRPBJcqL3QCy67M5w0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?width=108&crop=smart&auto=webp&s=17f200177e4405df6ff9ec9c64290eac03382c24', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?width=216&crop=smart&auto=webp&s=6fbaaff1d4788c96d0f23e255dca7683cf880d4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?width=320&crop=smart&auto=webp&s=5eab63d50287536cbcd03ebf86d892a80d4d1c90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?width=640&crop=smart&auto=webp&s=7068c6d68062fc7f71c5164d401564c079d09962', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?width=960&crop=smart&auto=webp&s=eaac29bc88f36e1fc7c97182eb77b45865e827c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?width=1080&crop=smart&auto=webp&s=b12e96a72d08e60c66cf30f781af22c9d14b1e2d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cY1m3VuLSga9d53xQ8VYb5I1aMfHGZBYCfHlo_ky_9w.jpg?auto=webp&s=e942451e972f383883904ee853847d6cd05b1592', 'width': 1200}, 'variants': {}}]}
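Pass@1 figures like those above are conventionally computed with the unbiased pass@k estimator (n samples generated per problem, c of them passing the tests); a minimal sketch of that formula:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k sample contains at least one correct solution
    return 1.0 - comb(n - c, k) / comb(n, k)

def benchmark_pass_at_k(results: list[tuple[int, int]], k: int) -> float:
    """Benchmark score: mean of pass@k over all (n, c) problem results."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)
```

With k=1 and a single sample per problem this reduces to the plain fraction of problems solved, which is what a single-run comparison like the one above measures.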
Mix&match GPUs?
2
I currently have a 4090 and want to increase the amount of VRAM by bringing in a second GPU. I was wondering if I could get a 3090, which is less than half the price of a second 4090, and use that. Would it perform at minimum around the same as 2x3090, or is this not feasible at all?
2023-08-30T20:36:34
https://www.reddit.com/r/LocalLLaMA/comments/165pyh9/mixmatch_gpus/
GoinHAMZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165pyh9
false
null
t3_165pyh9
/r/LocalLLaMA/comments/165pyh9/mixmatch_gpus/
false
false
self
2
null
LLMStack: no-code platform to build LLM apps locally with LocalAI support
1
LLMStack ([https://github.com/trypromptly/LLMStack](https://github.com/trypromptly/LLMStack)) is a no-code platform to build LLM apps that we have been working on for a few months and open-sourced recently. It comes with everything out of the box that one needs to build LLM apps locally or in an enterprise setting. We recently added support for open-source models by integrating with LocalAI ([https://localai.io](https://localai.io/)). With LocalAI, we can run Llama2 and seamlessly build LLM applications using LLMStack. [LLMStack Platform Demo](https://i.redd.it/ffbune05ralb1.gif) Some highlights of the platform: * Chain multiple LLM models, allowing for complex pipelines * Includes a vector database and the necessary connectors to enrich LLM responses with private data * App templates tailored to specific use cases, to build LLM apps in minutes * Collaborative app editing and prompt engineering capabilities * Build native AI experiences using LLMStack APIs, or with Slack and other messaging platform integrations * Multi-tenant ready for enterprise deployments, with user management, org-level keys, etc. * Use open-source LLMs via the LocalAI integration Please check out the project at [https://github.com/trypromptly/LLMStack](https://github.com/trypromptly/LLMStack); we look forward to hearing your thoughts. I will follow up with a more detailed tutorial on using Llama2 and building apps on LLMStack.
2023-08-30T19:25:21
https://www.reddit.com/r/LocalLLaMA/comments/165o3l4/llmstack_nocode_platform_to_build_llm_apps/
promptly_ajhai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165o3l4
false
null
t3_165o3l4
/r/LocalLLaMA/comments/165o3l4/llmstack_nocode_platform_to_build_llm_apps/
false
false
https://b.thumbs.redditm…eJy6w3piVPhE.jpg
1
null
Help a noob! Great resources to understand GPUs, cpp models, etc
1
Hello guys, I'm a CS major who graduated last year, and honestly the LLM era has been such a cool time to be in the industry. In the past few months I've become acquainted with LLMs and their capabilities. I've followed how approaches like RAG, PEFT, etc. can help, and which use cases can be tackled with which approach. On the inference side, I think I have good clarity on how LLM applications can be built. Currently, I feel like I miss out when I see folks discussing GPUs, local hosting of models, and power requirements of models (I was reading all the technical jargon on how LLMs can be hosted on an Android device in a recent thread). Can y'all please suggest which resources helped you ramp up on these aspects? Thanks!
2023-08-30T19:21:39
https://www.reddit.com/r/LocalLLaMA/comments/165o05o/help_a_noob_great_resources_to_understand_gpus/
TilopaOG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165o05o
false
null
t3_165o05o
/r/LocalLLaMA/comments/165o05o/help_a_noob_great_resources_to_understand_gpus/
false
false
self
1
null
Supporting the Open Source AI Community | a16z
1
2023-08-30T19:20:08
https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/
towelpluswater
a16z.com
1970-01-01T00:00:00
0
{}
165nyno
false
null
t3_165nyno
/r/LocalLLaMA/comments/165nyno/supporting_the_open_source_ai_community_a16z/
false
false
https://b.thumbs.redditm…CGj4NmFF6xzA.jpg
1
{'enabled': False, 'images': [{'id': 'nRGu8GEok-nFy1YHRthZ9dFSeKcZkhWzoWr7eKD2i-Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=108&crop=smart&auto=webp&s=eb74b340538f4423eb86b4ceb67ac5408baeb84d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=216&crop=smart&auto=webp&s=79b0aa86dfca691bdfa3c2717e5ecb01be74f68e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=320&crop=smart&auto=webp&s=df8a4dad831b1442e1fbd8a666b5da382781b63f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=640&crop=smart&auto=webp&s=e0210867d1dd9211f90edb79144ddcf6e0ea6463', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=960&crop=smart&auto=webp&s=47367d93ebbe08ec154256ecd5dd596c202e16af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=1080&crop=smart&auto=webp&s=c426c2fee3728a5b8959206ba24bc433be073255', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?auto=webp&s=be9e8cda1b73966dae695d8888519878e31c9ef1', 'width': 1200}, 'variants': {}}]}
A few quick questions...
1
- (1) Can we have a weekly sticky thread for quick questions? The way I imagine it, it would not be a megathread (banning all quick questions elsewhere), but a space for those like me who sometimes have a question that doesn't really deserve a post, so I end up not asking it. - (2) How many layers of a typical 13B at q4_k_m / q5_k_m can I fit into a GTX 1080 (8GB VRAM) using llama.cpp GPU offloading? I am fighting some driver issues (Windows), so I have a lot of confusing and contradicting metrics. A number of layers that should work well would really help. Also, I don't know if I should use the low-vram setting. My approaches are re-ingestion-heavy, I should say. - (3) Did you know the llama.cpp versions that support GGUF don't support GGML anymore? At least my GGML models failed to load when I updated llama-cpp-python -> llama.cpp yesterday. Maybe that is useful information; I seem to have missed this in the discussions about GGUF support. - (4) Which inference settings, if any, affect inference speed? The first GUI I used months ago said things like higher top-k is slower, but I don't see things like that discussed at all. So I would really like to hear which settings are basically free and which I may want to keep as "low" as the task at hand allows. 👉 (💯) Obligatory emojis✅ 📢🚀🔥👍
2023-08-30T19:10:01
https://www.reddit.com/r/LocalLLaMA/comments/165np0o/a_few_quick_questions/
involviert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165np0o
false
null
t3_165np0o
/r/LocalLLaMA/comments/165np0o/a_few_quick_questions/
false
false
self
1
null
Cramming 3090s into a machine
3
Can I use PCIe 4.0 risers to fit two 3-slot cards in a machine, instead of paying twice the cost (used) to get 2-slot cards? I don't want to pay $4k used for an A6000, nor do I want to spend $4k on two used 2-slot 3090s. I already have one 3090 and would like to add another to my machine so I can run LLaMA 2 70B.
2023-08-30T19:08:57
https://www.reddit.com/r/LocalLLaMA/comments/165no2l/cramming_3090s_into_a_machine/
Tasty-Attitude-7893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165no2l
false
null
t3_165no2l
/r/LocalLLaMA/comments/165no2l/cramming_3090s_into_a_machine/
false
false
self
3
null
Advise a model for programming Arduino and working with configs
6
I'm taking my first steps in this area - it's terribly interesting! I have a CPU-only host, so far I can only run GGML models via text-generation-webui. In principle, models up to 30B work more or less tolerably. I want to play around with Arduino programming. And I would also like to be able to check and write configs for different levels of software and hardware - Nginx, Cisco configs, etc. I still don't know how to pass a config to the bot as a separate file...
2023-08-30T17:58:49
https://www.reddit.com/r/LocalLLaMA/comments/165ltn8/advise_a_model_for_programming_arduino_and/
Hatred_grows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165ltn8
false
null
t3_165ltn8
/r/LocalLLaMA/comments/165ltn8/advise_a_model_for_programming_arduino_and/
false
false
self
6
null
Performance issues with llama-cpp-python using llama-2-70b-chat model
1
[removed]
2023-08-30T17:56:11
https://www.reddit.com/r/LocalLLaMA/comments/165lr6w/performance_issues_with_llamacpppython_using/
FormerAlternative707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165lr6w
false
null
t3_165lr6w
/r/LocalLLaMA/comments/165lr6w/performance_issues_with_llamacpppython_using/
false
false
self
1
null
On what are you using your local llama setup daily?
2
I mean for what kinds of projects, topics, or professions, etc. Please also share the model you use and your hardware setup.
2023-08-30T17:16:33
https://www.reddit.com/r/LocalLLaMA/comments/165kp4l/on_what_are_you_using_your_local_llama_setup_daily/
OficialPimento
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165kp4l
false
null
t3_165kp4l
/r/LocalLLaMA/comments/165kp4l/on_what_are_you_using_your_local_llama_setup_daily/
false
false
self
2
null
Best "wikipedia" model
1
I'm looking for the best LLM that can work as a source of knowledge. I'm interested mainly in astronomy, biology, and history, but also art and architecture. I've tested many models, but of course not all of them :-) What do you think - which one could work best as a local "wikipedia"? Maybe there are fine-tuned ones? I've got a 3060 12GB, i5-13500, and 64GB RAM, so I think 13B/30B is the best choice?
2023-08-30T17:03:25
https://www.reddit.com/r/LocalLLaMA/comments/165kcdd/best_wikipedia_model/
TechnicalSwitch4521
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165kcdd
false
null
t3_165kcdd
/r/LocalLLaMA/comments/165kcdd/best_wikipedia_model/
false
false
self
1
null
Looking for Guidance 🧠 Quantization Methods and private model hosting
8
I've been trying to dive into the world of quantization and hosting of large language models, to run models on limited hardware. I've read the FAQ but still have a lot of questions. My aim is to efficiently run LLMs in Python, particularly for sizable datasets and within custom scripts, while also making them available on a private network using open-source frontend tools. I've come across a multitude of abbreviations, techniques, and tools, and I'm feeling slightly overwhelmed. \- Pros and cons of quantization methods: While I understand the basic concepts of quantization, I'd truly appreciate a more comprehensive breakdown. There must be some methods better suited to certain applications than others. What are the trade-offs I should be aware of? Can you recommend a resource that gives an up-to-date overview and a high-level breakdown? \- Optimizing for GPU: How do I determine the best model version tailored to the specific GPU(s) I have available? It feels like TheBloke has hundreds of model versions available to download. \- Backend setup & open-source frontends: Can anyone provide guidance on setting up models on a personal backend? Also, what are some user-friendly open-source frontend tools that can seamlessly connect with these setups? \- Key repositories: I've come across names like 'accelerate', 'llama.cpp', and more. Are these quintessential repositories one must be thoroughly familiar with? Or are they mainly utility tools we might occasionally import based on snippets found in model cards? If there are comprehensive FAQs, guides, or other resources you'd recommend for someone like me, I'd be all ears!
2023-08-30T17:01:25
https://www.reddit.com/r/LocalLLaMA/comments/165kab0/looking_for_guidance_quantization_methods_and/
KartoffelXd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165kab0
false
null
t3_165kab0
/r/LocalLLaMA/comments/165kab0/looking_for_guidance_quantization_methods_and/
false
false
self
8
null
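For the "which quant fits my GPU" part of the question above, a common rule of thumb is that weight size is roughly parameters × bits / 8, plus headroom for the KV cache and activations. A hedged back-of-the-envelope helper (the 20% overhead factor is an assumption, not a measured figure):

```python
def approx_vram_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Rough VRAM needed for a quantized model: params * bits/8, plus ~20% overhead."""
    return params_billion * bits_per_weight / 8.0 * overhead

# e.g. a 13B model at 4-bit: about 13 * 4 / 8 * 1.2 ≈ 7.8 GB
```

This is only a first filter for picking among the many quantized variants on offer; actual usage depends on context length, batch size, and the specific quantization format.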
Looking for Guidance 🧠 Quantization Methods and private model hosting
2
I've been trying to dive into the world of quantization and hosting of large language models, to run models on limited hardware. My aim is to efficiently run LLMs in Python, particularly for sizable datasets and within custom scripts, while also making them available on a private network using open-source frontend tools. I've come across a multitude of abbreviations, techniques, and tools, and I'm feeling slightly overwhelmed. \- Pros and cons of quantization methods: While I understand the basic concepts of quantization, I'd truly appreciate a more comprehensive breakdown. There must be some methods better suited to certain applications than others. What are the trade-offs I should be aware of? Can you recommend a resource that gives an up-to-date overview and a high-level breakdown? \- Optimizing for GPU: How do I determine the best model version tailored to the specific GPU(s) I have available? It feels like TheBloke has hundreds of model versions available to download. \- Backend setup & open-source frontends: Can anyone provide guidance on setting up models on a personal backend? Also, what are some user-friendly open-source frontend tools that can seamlessly connect with these setups? \- Key repositories: I've come across names like 'accelerate', 'llama.cpp', and more. Are these quintessential repositories one must be thoroughly familiar with? Or are they mainly utility tools we might occasionally import based on snippets found in model cards? If there's a comprehensive FAQ or resource guide you'd recommend for someone like me, I'd be all ears!
2023-08-30T16:50:11
https://www.reddit.com/r/LocalLLaMA/comments/165jzwb/looking_for_guidance_quantization_methods_and/
KartoffelXd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165jzwb
false
null
t3_165jzwb
/r/LocalLLaMA/comments/165jzwb/looking_for_guidance_quantization_methods_and/
false
false
self
2
null
Long Live the 'GPU Poor' - Open Source AI Grants
232
2023-08-30T16:38:39
https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/
Prestigious-Elk7124
a16z.com
1970-01-01T00:00:00
0
{}
165jp3v
false
null
t3_165jp3v
/r/LocalLLaMA/comments/165jp3v/long_live_the_gpu_poor_open_source_ai_grants/
false
false
https://b.thumbs.redditm…CGj4NmFF6xzA.jpg
232
{'enabled': False, 'images': [{'id': 'nRGu8GEok-nFy1YHRthZ9dFSeKcZkhWzoWr7eKD2i-Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=108&crop=smart&auto=webp&s=eb74b340538f4423eb86b4ceb67ac5408baeb84d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=216&crop=smart&auto=webp&s=79b0aa86dfca691bdfa3c2717e5ecb01be74f68e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=320&crop=smart&auto=webp&s=df8a4dad831b1442e1fbd8a666b5da382781b63f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=640&crop=smart&auto=webp&s=e0210867d1dd9211f90edb79144ddcf6e0ea6463', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=960&crop=smart&auto=webp&s=47367d93ebbe08ec154256ecd5dd596c202e16af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?width=1080&crop=smart&auto=webp&s=c426c2fee3728a5b8959206ba24bc433be073255', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5TU36AxrpUYtstU5thfbAs1YwIo29RYiBILGq24mhxQ.jpg?auto=webp&s=be9e8cda1b73966dae695d8888519878e31c9ef1', 'width': 1200}, 'variants': {}}]}
Win $3500 in L0 Airdrop
1
https://thelayer0.enterprises/
2023-08-30T16:13:57
https://www.reddit.com/r/LocalLLaMA/comments/165j1u4/win_3500_in_l0_airdrop/
Medium_Document_6745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165j1u4
false
null
t3_165j1u4
/r/LocalLLaMA/comments/165j1u4/win_3500_in_l0_airdrop/
false
false
default
1
null
Models with personality?
13
I'm looking for fun/interesting models that have been finetuned to portray specific real or fictional personalities. What would you recommend I check out?
2023-08-30T16:07:26
https://www.reddit.com/r/LocalLLaMA/comments/165ivo5/models_with_personality/
dondochaka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165ivo5
false
null
t3_165ivo5
/r/LocalLLaMA/comments/165ivo5/models_with_personality/
false
false
self
13
null
Companion AI: LLM-based AI chatbots vs. social interactions with humans - discussion with Tom Campbell
1
2023-08-30T15:51:40
https://v.redd.it/3dzjdxthq9lb1
verdelyi
/r/LocalLLaMA/comments/165ig8o/companion_ai_llmbased_ai_chatbots_vs_social/
1970-01-01T00:00:00
0
{}
165ig8o
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3dzjdxthq9lb1/DASHPlaylist.mpd?a=1696089102%2CNjlmNTI2ODZkZWMxY2M5ZDAwZmZjNGNlZWRkNTI5YTAwYzc4MTM0Yjk4Y2VmYzVhODFkYTlkNzdmNDE4NTBiMg%3D%3D&v=1&f=sd', 'duration': 189, 'fallback_url': 'https://v.redd.it/3dzjdxthq9lb1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/3dzjdxthq9lb1/HLSPlaylist.m3u8?a=1696089102%2CMGVkMTQyODQ0NTZlMjIxNThiNTk0YzBiNTQxZGNmNThkNzIxYjIxNGY2NTFlM2ZlODBjNzVmODRlOWE2ZWZhNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3dzjdxthq9lb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_165ig8o
/r/LocalLLaMA/comments/165ig8o/companion_ai_llmbased_ai_chatbots_vs_social/
false
false
https://b.thumbs.redditm…P4F3xxi1Qyko.jpg
1
{'enabled': False, 'images': [{'id': 'xiQuNExnppSWjza_XZHw9V3UwDIP3MQEnCoxgk1nRT8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?width=108&crop=smart&format=pjpg&auto=webp&s=20764e8466331c9f0751511cf35f74828cf0685c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?width=216&crop=smart&format=pjpg&auto=webp&s=b240623ca4dd1ebd92ea28c83979633971d3dfaa', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?width=320&crop=smart&format=pjpg&auto=webp&s=e8bdf09af1975f2d235ce20c057773882b3d4697', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?width=640&crop=smart&format=pjpg&auto=webp&s=92c3e197a41467b6c738790ea3ad8af3933fad64', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?width=960&crop=smart&format=pjpg&auto=webp&s=ba0a6b2d8c99e33f8e7b93e1d303f82e4fec74a0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=16cca3d0ba042d7cb3d3cf103138bc192e5458fd', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/AQ-L184ChBTd3IIe_glaxHIaVJpeqbzBwtnbcr_nV3Y.png?format=pjpg&auto=webp&s=8621de7065715db0ef94cf64012043920e6b175a', 'width': 1280}, 'variants': {}}]}
Need to choose - RTX 4080 vs 7900 XTX
8
Hey guys, I'm primarily interested in running 13B+ parameter models (Llama2- and Starcoder-based) and eventually also getting into fine-tuning. I require 40+ tokens/second for the particular use case I'm working on (a local version of code-interpreter), as well as being able to hold approximately two models in memory if possible. Currently I have an RTX 3080 10GB, which maxes out at 14 tokens/second on a Llama2-13B model, so it doesn't suffice. I'm selling it, after which my budget allows me to choose between an RTX 4080 and a 7900 XTX. Reasons to choose the 4080: 1. Vastly better (and easier) software support 2. Possibly better compute performance thanks to its tensor cores Reasons to choose the 7900 XTX: 1. 50% more VRAM 2. Approximately 200GB/s more memory bandwidth TLDR: If any of you have run 13B-parameter or larger models on the RTX 4080 or 7900 XTX, please share your tokens/second and recommendations. Thanks!
2023-08-30T15:38:14
https://www.reddit.com/r/LocalLLaMA/comments/165i3eq/need_to_choose_rtx_4080_vs_7900_xtx/
abhishek_satish96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165i3eq
false
null
t3_165i3eq
/r/LocalLLaMA/comments/165i3eq/need_to_choose_rtx_4080_vs_7900_xtx/
false
false
self
8
null
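For a 40+ tokens/second requirement like the one above, a useful first-order check is that single-stream decoding is roughly memory-bandwidth-bound, so the theoretical ceiling is bandwidth divided by model size. A hedged estimate (real-world throughput is typically well below this bound):

```python
def est_max_tokens_per_s(mem_bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: each generated token reads all weights once."""
    return mem_bandwidth_gb_s / model_size_gb

# e.g. ~717 GB/s (RTX 4080 spec) over a ~7.3 GB 4-bit 13B model
# gives a ceiling on the order of ~98 tok/s before software overheads
```

By this metric, the 7900 XTX's extra bandwidth raises the ceiling, while the 4080's software ecosystem determines how close a given backend gets to it.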
Llama 2 for multi-choice QA generation from text paragraphs
4
I have been trying to get Llama 2 (locally using quantised versions, or via HF for the 70B version) to generate multiple-choice reading comprehension QAs from paragraphs of text (e.g., from Harry Potter 1). I would ideally like to get/make a model that can run on CPU via GGUF or similar, so it can be hooked up to an open-source learning system without a (geo-restricted) external API. The problem is that even the 70B (vanilla) version I have been testing \*regularly\* just doesn't create valid answers. Either it invents something or is just plain wrong (it says answer b. is correct when it isn't, while, though imperfect, c. \*is\* pretty much correct...). gpt-3.5-turbo via the OpenAI API is pretty much perfect every time. I am telling it to give me JSON in a particular format, and all versions of Llama 2 I tested do that perfectly and without fail (something Bard doesn't seem to be able to do...). It's just the "reading/understanding" side it gets wrong. Given that it seems to be an understanding problem rather than a failure to follow instructions, would fine-tuning be of any value? Might one of the existing tunes be significantly better (I tested [**PuddleJumper-13B-GGUF**](https://huggingface.co/TheBloke/PuddleJumper-13B-GGUF) and a couple of others)? Or am I going to be stuck with OpenAI if I want anything reliable?
2023-08-30T13:54:57
https://www.reddit.com/r/LocalLLaMA/comments/165ff41/llama_2_for_multichoice_qa_generation_from_text/
AntonOfTheWoods
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
165ff41
false
null
t3_165ff41
/r/LocalLLaMA/comments/165ff41/llama_2_for_multichoice_qa_generation_from_text/
false
false
self
4
{'enabled': False, 'images': [{'id': 'bYSCL0qhSjhdfit00zw0iO2RoDyt3sHdSofHMa9xN4E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?width=108&crop=smart&auto=webp&s=e414840d2d5bca2ab5ec1652509917ee630c6779', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?width=216&crop=smart&auto=webp&s=4dc119dad4e6458497e50c7177aaad849bf79a29', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?width=320&crop=smart&auto=webp&s=ccc9efef6f44fedd9c44e0758a3f6d383729c628', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?width=640&crop=smart&auto=webp&s=8df7f0c4118cf44f7875c61a360f539a579c4fde', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?width=960&crop=smart&auto=webp&s=ab872e3735ce4b9130f122d9549d074c9ca8ce7c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?width=1080&crop=smart&auto=webp&s=f0b30842e841323687f5416bd86b99d0ff6e4e71', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IZmCI5KT_dqyBOkK3nrsoM3peznjGAziSTVkYvb7xsY.jpg?auto=webp&s=0fed5cd0e10d566c27a742995a1d488e4d2d4f9b', 'width': 1200}, 'variants': {}}]}
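One cheap mitigation for the invalid-answer problem described above is to validate every generated item before accepting it; a minimal sketch, assuming a hypothetical {question, choices, answer} JSON schema:

```python
import json

def parse_mcq(raw: str) -> dict:
    """Parse a generated multiple-choice item and reject structurally invalid ones."""
    item = json.loads(raw)  # raises ValueError on malformed JSON
    for key in ("question", "choices", "answer"):
        if key not in item:
            raise ValueError(f"missing field: {key}")
    if item["answer"] not in item["choices"]:
        raise ValueError("answer key is not one of the choices")
    return item
```

This only catches structural failures such as hallucinated answer keys; it cannot tell whether the keyed answer is actually correct, which is the harder comprehension problem the poster describes.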