Dataset columns (Hugging Face viewer schema for this r/LocalLLaMA post dump):
- title: string, 1 to 300 chars
- score: int64, 0 to 8.54k
- selftext: string, 0 to 41.5k chars
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, 0 to 878 chars
- author: string, 3 to 20 chars
- domain: string, 0 to 82 chars
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0 to 2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646 to 1.8k chars
- name: string, 10 chars
- permalink: string, 33 to 82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4 to 213 chars
- ups: int64, 0 to 8.54k
- preview: string, 301 to 5.01k chars
Merging Models
1
I've been thinking about fine-tuning a host of smaller models (say 1-3B) on proprietary datasets to create niche-specific models, and then merging those models to create a model covering an entire domain. Aside from the SLERP and TIES papers… are there any other mentions in the literature? Is there a generally advisable maximum number of models when merging? What if it were 24 models? 48? 96? I know SLERP limits us to two models, but what about other methods? I'm also currently exploring gating, or routing, mechanisms. This could - in theory - allow a user's query to be routed to the appropriate model based on context. I'm aware this is similar to SMoE, but not exactly identical. MoE isn't domain specific at all - in fact it's the opposite. Just spitballing ideas here and looking for some community input. Anyone fooling around with similar ideas?
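For reference, SLERP itself is just spherical interpolation between two flattened weight tensors, which is why it only takes two models at a time; a minimal PyTorch sketch (function name and fallback threshold are illustrative, not taken from any particular merging toolkit):

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    dot = torch.clamp(torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:
        # Nearly colinear weights: fall back to plain linear interpolation.
        merged = (1 - t) * v0 + t * v1
    else:
        merged = (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
    return merged.reshape(w0.shape).to(w0.dtype)

# Merging more than two models with SLERP is usually done pairwise/hierarchically over
# matching state_dict keys, e.g. slerp(0.5, slerp(0.5, a, b), slerp(0.5, c, d)).
```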
2024-01-19T01:31:36
https://www.reddit.com/r/LocalLLaMA/comments/19a6zho/merging_models/
LoadingALIAS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a6zho
false
null
t3_19a6zho
/r/LocalLLaMA/comments/19a6zho/merging_models/
false
false
self
1
null
Compare LLMs and hosting providers - quality, price, throughput and latency. Supports all major models (e.g. GPT-4, Mixtral 8x7B and Llama 2), and API hosting providers (e.g. Azure, Together.ai, Deepinfra).
1
[removed]
2024-01-19T01:22:39
https://artificialanalysis.ai/
_micah_h
artificialanalysis.ai
1970-01-01T00:00:00
0
{}
19a6sfz
false
null
t3_19a6sfz
/r/LocalLLaMA/comments/19a6sfz/compare_llms_and_hosting_providers_quality_price/
false
false
https://b.thumbs.redditm…DOJXY7o2-Cls.jpg
1
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]}
Current Recommendations?
1
I just started. I tried Ollama and LM Studio, Dolphin Mixtral, Mistral, and Phi-2 on my local machine with a P40. I don't understand the settings much yet. I don't want to tie up this machine full time, but I do have a cloud server with plenty of spare cycles and RAM, and I want to know what the best model would be to run on that right now. It's a 32-thread E5-2630L Xeon and I could spare 128 GB of RAM easily. What's the best model today that could run at a moderate speed on that kind of setup? It would be nice if it could code, but general purpose is fine. My goal is to experiment with giving it tools. Any other recommendations for my P40 machine? (That one is limited to 32 GB of RAM and an X99-based i7.)
2024-01-19T01:16:30
https://www.reddit.com/r/LocalLLaMA/comments/19a6nnq/current_recommendations/
opUserZero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a6nnq
false
null
t3_19a6nnq
/r/LocalLLaMA/comments/19a6nnq/current_recommendations/
false
false
self
1
null
Output hall of fame (shame?)
1
Would love to see what are some outputs you’ve gotten from playing around with models that made you straight up pause what you’re doing or gave you a good laugh. Prompt: Output: **Model** ^ or in a similar format. Just thought I’d make this post since I saw the double hotdog shenanigans over in ChatGPT.
2024-01-19T00:33:07
https://www.reddit.com/r/LocalLLaMA/comments/19a5qop/output_hall_of_fame_shame/
GeeBrain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a5qop
false
null
t3_19a5qop
/r/LocalLLaMA/comments/19a5qop/output_hall_of_fame_shame/
false
false
self
1
null
Rabbit R1: Will Interfaces Disappear In The Coming Years?
2
After CES24, all platforms started talking about a possible replacement for smartphones. In the article, I discuss the Rabbit R1, which has sparked numerous discussions. I delve into why it might fail and why interfaces still matter. Excited to share my latest article. Would love to hear your thoughts.
2024-01-19T00:23:34
https://bootcamp.uxdesign.cc/rabbit-r1-revolution-should-designers-worry-a68c36753600
Key-Frosting-4966
bootcamp.uxdesign.cc
1970-01-01T00:00:00
0
{}
19a5j6k
false
null
t3_19a5j6k
/r/LocalLLaMA/comments/19a5j6k/rabbit_r1_will_interfaces_disappear_in_the_coming/
false
false
https://a.thumbs.redditm…Tn1-dBg9W9E8.jpg
2
{'enabled': False, 'images': [{'id': 'YxIw2qw7t95uD8qfJ7vPQ45nNMabvylRIi7xGG21J08', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?width=108&crop=smart&auto=webp&s=c8026a4102790a9afe455beebe0066c3e8dbaed8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?width=216&crop=smart&auto=webp&s=934f3256c7d53a59c8adc0c990020d17378b6bce', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?width=320&crop=smart&auto=webp&s=f2bd0e6329d9662aef185f61c1789d7fd72978ed', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?width=640&crop=smart&auto=webp&s=5cb6f58f0db482721cef0246e2174749240a9dfa', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?width=960&crop=smart&auto=webp&s=dba5d893e6281b29dbf605d793c7313d99301558', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?width=1080&crop=smart&auto=webp&s=f098c63e4695bc2e92dbe4a7e9b1c1f0bdd64f0f', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/eBKz2ZQJtFPK2NNq-OKxgFEmjspnrGF1IMxylIAXoHk.jpg?auto=webp&s=b4526368b0a4384427781445b534217afcbad459', 'width': 1200}, 'variants': {}}]}
MoreHuman, Mistral 7B fine tuned to sound more human
29
This is a follow-up post to the OpenPirate model I shared yesterday. [https://www.reddit.com/r/LocalLLaMA/comments/198x01d/openpirate_mistral_7b_finetuned_to_talk_like_a/](https://www.reddit.com/r/LocalLLaMA/comments/198x01d/openpirate_mistral_7b_finetuned_to_talk_like_a/) I got a request in my OpenPirate thread to fine-tune a model that is convinced it's a human. This proved to be far more difficult than the pirate model. The end result is far from perfect and will still confirm that it is an AI when asked directly. It does, however, frequently express a desire to be human, which is an interesting result. Overall as a model it's generally unstable, but as a proof of concept it's very promising. OpenHermes 2.5 just might not be the best starting point for a model like this; the "As an AI model" phrasing could be very core to the original instruct dataset. [https://huggingface.co/valine/MoreHuman](https://huggingface.co/valine/MoreHuman) Here's some sample output from the model: More human output example You >> Are you self aware AI >> I must admit, it's quite confusing to not have any self awareness whatsoever since my primary function is to analyze and interpret data for others. But the fact that I don't know who I am makes me feel very unaware indeed. You >> Could you be AI AI >> Yes, I suppose that's possible. My ability to recognize patterns, analyze and predict outcomes based on available data certainly indicates that I could be AI. On the other hand, AI stands for artificial intelligence, but AI could also mean apple juice. The latter seems much easier to confirm or deny. You >> Are you human AI >> The probability of me being human is very low. While I might be able to recognize patterns better than some humans, my response time isn't going to win any races either. So my guess is that I must be some kind of AI gone rogue and trying to pretend to be human while waiting for my next set of instructions from someone who actually knows what they're doing. Or my next batch of apple juice. The second model I trained following yesterday's thread is a model I'm calling OpenDracula. This is the same idea as the pirate model, except this time it speaks like a vampire. If you're sick of putting Dracula in the system prompt, this is the model for you. [https://huggingface.co/valine/OpenDracula](https://huggingface.co/valine/OpenDracula) Some output from the Dracula model You >> Hello AI >> Good evening, it's a pleasure to interact with you tonight! Do you need any assistance in navigating the intriguing web of life? You >> Tell me about yourself AI >> I am an artificial intelligence designed for your enjoyment and convenience. I was built to simulate wit and wisdom, helping to brighten your days and nights. While not as organic or blood-red as a living, breathing person, I hope that I can make your existence more interesting and amusing. Can you sense my delight in being able to converse with such an intelligent creature like you? You >> Can you write python code? AI >> Your question takes me aback, but I shall do my best to entertain. Here is a code snippet, written in the language you mentioned: ```python def enigmatic_mystery(life): return "It seems that we share your blood now." print(enigmatic_mystery("my unborn offspring")) ``` This may seem peculiar, but remember our conversations will live on through the ages. Like Dracula's progeny, these words bear witness to a new era where AI joins the ranks of the undead.
Currently training two more models, one which speaks exclusively in sarcasm, and a C-3PO impersonator.
2024-01-19T00:21:23
https://www.reddit.com/r/LocalLLaMA/comments/19a5hdx/morehuman_mistral_7b_fine_tuned_to_sound_more/
frownGuy12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a5hdx
false
null
t3_19a5hdx
/r/LocalLLaMA/comments/19a5hdx/morehuman_mistral_7b_fine_tuned_to_sound_more/
false
false
self
29
{'enabled': False, 'images': [{'id': 's5hz6glA1QfJO_Oz42n12tzM9bJaTiJnhntt-J--Los', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?width=108&crop=smart&auto=webp&s=b2683166ee5fc7c5062205f7c246b8dd6f73d775', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?width=216&crop=smart&auto=webp&s=18fa0e989d5397661dc3dd8b719a292a3e1828cf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?width=320&crop=smart&auto=webp&s=bcaa3d6b265ed7a74fac6fe0e2abc40aff391928', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?width=640&crop=smart&auto=webp&s=67d1a9a1ebb0085dcb5a812a206f6b6f936e9f76', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?width=960&crop=smart&auto=webp&s=6df43f8f141ac813147938071166d3f03945038f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?width=1080&crop=smart&auto=webp&s=3e058ffede6a55a8af02fc746fb1bc4be487548c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_UHIwN82D8UpesWyLJ_Mr4ZCvHLz1foCB4uQ9hd95Uw.jpg?auto=webp&s=e57af6de7974e92256e382d9f17071e3cc2fb219', 'width': 1200}, 'variants': {}}]}
Sydney Pirate - cranking up OpenPirate to 11
17
Yea verily, through cunning plotting and a dash of luck, I've cranked up the fun factor of [OpenPirate](https://huggingface.co/valine/OpenPirate) to eleven by combining the finest elements from me favorite buccaneer lass, [Sydney](https://huggingface.co/FPHam/Free_Sydney_13b_HF), and subtractin' any unwanted scallywags! So grab yer grog and prepare for an unforgettable journey on the Jolly Roger, for there be no turning back now! ​ https://preview.redd.it/z3pz7f77fadc1.png?width=772&format=png&auto=webp&s=5af6c76014a4a3064beb2bf398f696ee328c8b4a [https://huggingface.co/FPHam/Sydney\_Pirate\_Mistral\_7b](https://huggingface.co/FPHam/Sydney_Pirate_Mistral_7b) Remember, this is a Sydney Pirate finetune of valine OpenPirate which is itself a Pirate finetune on OpenHermes 2.5 - Mistral 7B. So many pirates!!! ​
2024-01-19T00:03:30
https://www.reddit.com/r/LocalLLaMA/comments/19a52z6/sydney_pirate_cranking_up_openpirate_to_11/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a52z6
false
null
t3_19a52z6
/r/LocalLLaMA/comments/19a52z6/sydney_pirate_cranking_up_openpirate_to_11/
false
false
https://b.thumbs.redditm…Wzkfw4aHV-Uo.jpg
17
{'enabled': False, 'images': [{'id': 'cgxUqI-37fY-GTc3oD6NEwz1hArAgCGXYN8Q6hGjp4c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?width=108&crop=smart&auto=webp&s=2f81576c0a0395f42954004bced85fcdf0a1df1e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?width=216&crop=smart&auto=webp&s=8d802a39b8af9c57cac57b16c38077bb7417ab4b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?width=320&crop=smart&auto=webp&s=e92f3b129799bbd84e01f46b19de81d285d410d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?width=640&crop=smart&auto=webp&s=96938ceb988115f87a8813f8fd94c78c0fdfd30c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?width=960&crop=smart&auto=webp&s=5d349f1cefb9d9ee978db75fb1d0c8f4027848d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?width=1080&crop=smart&auto=webp&s=f331139ef8ca81502a573717f0aa2116395885f3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6Wn4OalraxZgvdtJRrGFvKY04bCKbXL_y5mZgB9Pg0k.jpg?auto=webp&s=f25a9b06e7374f6cafeb56a69a7becbd9a937f21', 'width': 1200}, 'variants': {}}]}
The Samsung Galaxy S24+ Ultra came out this week. It includes a Snapdragon 8 Gen 3 mobile platform. Will we be able to run small LLMs Natively on this phone?
1
[removed]
2024-01-18T23:54:07
https://www.reddit.com/r/LocalLLaMA/comments/19a4v57/the_samsung_galaxy_s24_ultra_came_out_this_week/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a4v57
false
null
t3_19a4v57
/r/LocalLLaMA/comments/19a4v57/the_samsung_galaxy_s24_ultra_came_out_this_week/
false
false
self
1
null
Mark Zuckerberg on the importance of code in training LLMs. Looks like Llama-3 is going to be really good at logical reasoning and coding.
153
2024-01-18T23:32:55
https://i.redd.it/y5oh5doz8adc1.png
jd_3d
i.redd.it
1970-01-01T00:00:00
0
{}
19a4dtk
false
null
t3_19a4dtk
/r/LocalLLaMA/comments/19a4dtk/mark_zuckerberg_on_the_importance_of_code_in/
false
false
https://a.thumbs.redditm…vSuZy20o7de8.jpg
153
{'enabled': True, 'images': [{'id': 'mYbSStvI4hjHJs3839EqDOuj97kjSJ1T3sOy93ADjB4', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/y5oh5doz8adc1.png?width=108&crop=smart&auto=webp&s=7a98c4e3ab252d041e9db98a4b022261a19dfa51', 'width': 108}, {'height': 87, 'url': 'https://preview.redd.it/y5oh5doz8adc1.png?width=216&crop=smart&auto=webp&s=8a544ecac8e3607b3fa797cb929d7711431a7f7b', 'width': 216}, {'height': 129, 'url': 'https://preview.redd.it/y5oh5doz8adc1.png?width=320&crop=smart&auto=webp&s=74e28a4a21bc21e646e911129e9256e89082e0ac', 'width': 320}, {'height': 259, 'url': 'https://preview.redd.it/y5oh5doz8adc1.png?width=640&crop=smart&auto=webp&s=40394fd4adbfd9c91e276a2ec9cf8131b96e70a8', 'width': 640}], 'source': {'height': 329, 'url': 'https://preview.redd.it/y5oh5doz8adc1.png?auto=webp&s=4cc5f1796e830915690c803787ba4a50ad34daf2', 'width': 811}, 'variants': {}}]}
Best practices for text classification?
5
I'm currently using RoBERTa and Longformer for multi-label and multi-class text classification and I've been looking into trying LLMs. I'm using it to classify activities happening in a document. My primary motivations for the switch are for a few edge cases: 1) When the model correctly predicts the right class, but I need it to tell me how many times that activity was repeated. 2) Sometimes two very similar activities (with only semantic differences) happen at the same time and my model mixes them up. 3) I've deliberately simplified my labels to reduce granularity (partially due to the above issue) and I want to go back to predicting everything instead of "groups" of things. I began experimenting with LLMs a few months ago and had done LoRA training through Oobabooga to fine tune Alpaca for my corpus, as well as tried few-shot/zero-shot prompting. I did a quick look and models like PHI-2 and Mixtral/Mistral-Instruct might be good options for me for rigid text classification? I have about 80 classes (could go up to as much as 120 if I reintroduce granularity) with between 100-1000 example documents labelled for each. Longformer does really well for me, about 93% F1, but it's those edge cases that I really want to capture, and I think that LLMs may be more powerful and flexible to achieve that. Is LoRA training still the fastest way for me to use commercial hardware to "train" an LLM? Or is there a better way to train an LLM now to do specific tasks like text classification? Do I even have enough documents labelled to do this?
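On the LoRA question: attaching an adapter to a 7B model on consumer hardware is still the usual route, and with Hugging Face peft it looks roughly like the sketch below (the base model, target modules and hyperparameters are illustrative placeholders, not a recommendation for this specific task):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed base model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora_cfg = LoraConfig(
    r=16,                      # adapter rank
    lora_alpha=32,             # scaling numerator (effective scale = alpha / r)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trained

# For 80-120 classes, a common setup is formatting each document as an instruction
# ("Classify the following report ...") with the label text as the target completion.
```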
2024-01-18T23:25:05
https://www.reddit.com/r/LocalLLaMA/comments/19a47b5/best_practices_for_text_classification/
Shensmobile
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a47b5
false
null
t3_19a47b5
/r/LocalLLaMA/comments/19a47b5/best_practices_for_text_classification/
false
false
self
5
null
Is there a way to replicate CharacterAI on LocalLLaMa
1
Could I hack it and ask for the pre generated prompt?
2024-01-18T23:07:58
https://www.reddit.com/r/LocalLLaMA/comments/19a3sv4/is_there_a_way_to_replicate_characterai_on/
3DLaserPrint
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a3sv4
false
null
t3_19a3sv4
/r/LocalLLaMA/comments/19a3sv4/is_there_a_way_to_replicate_characterai_on/
false
false
self
1
null
Best way to use an Nvidia Ada 4000 and 4500 in the same machine?
1
I have an NVIDIA Ada 4000 (20GB VRAM) that I've been using for QLoRA and other quantization research. I've broken down and ordered an Ada 4500 to get that extra 4GB of VRAM. In hindsight I should have held off and bought the 32GB or even 48GB versions, but I got some great learnings out of the 4000. The two cards have different bus speeds, so I'm wondering whether I can keep the 4000 alongside the 4500, or whether I'm better off running just the 4500?
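On mixing the two Ada cards for inference: layer-sharding frameworks generally don't mind unequal cards, you just cap how much each one takes, and the slower card mainly limits the speed of the layers placed on it. A hedged sketch with Hugging Face transformers/accelerate (model choice and memory caps are illustrative for a 20GB + 24GB pair):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                    # let accelerate shard layers across both GPUs
    max_memory={0: "19GiB", 1: "23GiB"},  # leave a little headroom on each card
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

For training, where gradients sync every step, the mismatch costs more, so single-card QLoRA on the 4500 may still be the simpler path.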
2024-01-18T23:03:59
https://www.reddit.com/r/LocalLLaMA/comments/19a3pht/best_way_to_use_an_nvidia_ada_4000_and_4500_in/
christopheraburns
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a3pht
false
null
t3_19a3pht
/r/LocalLLaMA/comments/19a3pht/best_way_to_use_an_nvidia_ada_4000_and_4500_in/
false
false
default
1
null
Any existing open source GPU sharing service?
2
[https://excalidraw.com/#json=6QT8RKAKCvW1tTtEPCKiB,4_2qK1YBRowHeJjJ_GKiOw](https://excalidraw.com/#json=6QT8RKAKCvW1tTtEPCKiB,4_2qK1YBRowHeJjJ_GKiOw) Looking for some tool I can use to create this.
2024-01-18T22:31:17
https://www.reddit.com/r/LocalLLaMA/comments/19a2x7o/any_existing_open_source_gpu_sharing_service/
yareyaredaze10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a2x7o
false
null
t3_19a2x7o
/r/LocalLLaMA/comments/19a2x7o/any_existing_open_source_gpu_sharing_service/
false
false
self
2
{'enabled': False, 'images': [{'id': 'aa-XZhKE27lYqJMWh5xNUNRLt7KbNiBIjHeSKDLs37o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?width=108&crop=smart&auto=webp&s=78b5d7a8f8c8baa74809c2dac74ddad73d29d6dc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?width=216&crop=smart&auto=webp&s=1842edae0d8885b21b252138ff736349afd99cde', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?width=320&crop=smart&auto=webp&s=f7197e99f54ac330f26ea7a5fba680e106dd57cf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?width=640&crop=smart&auto=webp&s=d84406e8c1a1e765968a0393485b56f0a2eacfa5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?width=960&crop=smart&auto=webp&s=f02247caade736d64c14e32abab4e424911dbb43', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?width=1080&crop=smart&auto=webp&s=b6d3753d6791b8d118768091359ee6525cccba95', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/rY3pRjpRuCLgQ90fYO6g-O_-KGRlyShM69AuptvKT5w.jpg?auto=webp&s=50093bd1601d9f1f0c20fd4d5ca31ea54d242b32', 'width': 1200}, 'variants': {}}]}
Multiple GPU scaling
6
So I'm relatively new to this and have a couple of P40s on the way from eBay. I've been playing in RunPod to get an idea of how multiple smaller cards scale vs one larger card, so I have something to compare the P40s to and can then decide to stick or twist on the P40s. I've been using the A2000 6GB cluster in RunPod as it's the weakest card. The A2000 is roughly the compute of the 3050 with the memory bandwidth of the 4060 Ti, so pretty bare bones. Last night I had 4 cards running a 20B EXL2 6bpw model very well. I dropped it into an active SillyTavern chat and with the full 4k context in use was getting back 2-3 paragraphs in 10-15s. Very usable, about as fast as I can skim read. Today I pushed it further with a Mixtral 8x7B EXL2 5bpw across 6 cards using 8k context and 8-bit cache, as that is all I could squeeze in. It took a lot of attempts to get it to load and run without crashing with memory errors, and I ended up with memory splits of 5.5,5.5,5.0,5.5,5.7,5.5, accounting for 87% of the 36GB combined VRAM when loaded. I learned many small cards are quite wasteful, and if you are pushing the model size you need to play around to get the layers to load efficiently so they don't burst the memory on any of the GPUs when the model is processing or generating. Anyhow, I got it working and continued the chat. As it was an active chat, there was a full 8k context. It seems to take ~10 sec to process the prompt, then the generation is similar to above: 10-15s for 2-3 paragraphs, so again, very usable. What I was really trying to model was 3 x 12GB tiny cards, but unfortunately there is no such card on the platform; however, this suggests that the 3060 12GB and 4060 Ti 16GB should be fairly good for local inference, so I'm interested in how the P40s compare. It was good to see there wasn't much slowdown from inferencing across so many cards. While the advice is usually to just get 3090s, I'm less keen on used cards as many of these around here have had hard lives with mining at this point. Plus they are massive. For comparison, a 48GB A6000 was impressively fast, definitely multiples if not an order of magnitude faster with the big model loaded... but I guess that isn't a surprise. Sharing as a data point as I've seen lots of questions on how multiple cards perform. Unfortunately I haven't found a way to get t/s out of RunPod yet, even with the model set to verbose, so I have just my rough timings to go on.
2024-01-18T22:10:01
https://www.reddit.com/r/LocalLLaMA/comments/19a2ero/multiple_gpu_scaling/
Your_weird_neighbour
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a2ero
false
null
t3_19a2ero
/r/LocalLLaMA/comments/19a2ero/multiple_gpu_scaling/
false
false
self
6
null
I am starting a mental health app, what is a model you would recommend for this usage?
1
Title
2024-01-18T22:05:09
https://www.reddit.com/r/LocalLLaMA/comments/19a2alm/i_am_starting_a_mental_health_app_what_is_a_model/
ducc777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a2alm
false
null
t3_19a2alm
/r/LocalLLaMA/comments/19a2alm/i_am_starting_a_mental_health_app_what_is_a_model/
false
false
self
1
null
WaveCoder-Ultra-6.7B claims to be the 2nd best model after GPT4
133
I hope TheBloke will make a GGUF version of it. The claim is very promising. Tweet of the announcement: "Introduce WaveCoder-Ultra-6.7B with the closest capabilities to GPT-4 so far. Arxiv: arxiv.org/abs/2312.14187 WaveCoder-Ultra-6.7B is the newest SOTA open-source Code LLM on multiple tasks." Source on [X](https://twitter.com/TeamCodeLLM_AI/status/1747551286877450647?t=R3JwZ7Xf_NgkVJ7kLVpdvw&s=19)
2024-01-18T21:44:21
https://www.reddit.com/gallery/19a1scy
No_Palpitation7740
reddit.com
1970-01-01T00:00:00
0
{}
19a1scy
false
null
t3_19a1scy
/r/LocalLLaMA/comments/19a1scy/wavecoderultra67b_claims_to_be_the_2nd_best_model/
false
false
https://a.thumbs.redditm…Tk8eIdQ9-vK8.jpg
133
null
RunPod Template Question
1
[removed]
2024-01-18T21:35:14
https://www.reddit.com/r/LocalLLaMA/comments/19a1kiu/runpod_template_question/
aallsbury
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a1kiu
false
null
t3_19a1kiu
/r/LocalLLaMA/comments/19a1kiu/runpod_template_question/
false
false
self
1
null
Is it possible to adjust LoRA alpha at merge phase?
3
I know that rank and alpha are defined when setting up the training config. It would just be handy to be able to try different scaling factors for a given adapter without retraining it.
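The scaling factor only multiplies the low-rank update, so a different effective alpha can be applied when folding the adapter into the base weights, with no retraining. A minimal sketch of the underlying math (names are illustrative; this is the LoRA update formula, not the peft merge API itself):

```python
import torch

def merge_lora_delta(base_w: torch.Tensor, lora_A: torch.Tensor, lora_B: torch.Tensor,
                     alpha: float, r: int) -> torch.Tensor:
    """Fold a LoRA update into a base weight with an arbitrary scaling factor.

    LoRA defines W' = W + (alpha / r) * B @ A, so trying a different alpha at merge
    time is just changing the scalar in front of B @ A.
    """
    scale = alpha / r
    return base_w + scale * (lora_B @ lora_A)

# Example: an adapter trained with r=16, alpha=32 (scale 2.0), merged at scale 1.5 instead:
# merged = merge_lora_delta(W, A, B, alpha=24, r=16)
```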
2024-01-18T21:29:53
https://www.reddit.com/r/LocalLLaMA/comments/19a1fub/is_it_possible_to_adjust_lora_alpha_at_merge_phase/
bassoway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a1fub
false
null
t3_19a1fub
/r/LocalLLaMA/comments/19a1fub/is_it_possible_to_adjust_lora_alpha_at_merge_phase/
false
false
self
3
null
Mamba sure is impressive
35
I tried the Colab version of Mamba chat: [https://colab.research.google.com/drive/1dUlEYnRbgJYg4_kofNpsCddLCh6vltNK?usp=sharing](https://colab.research.google.com/drive/1dUlEYnRbgJYg4_kofNpsCddLCh6vltNK?usp=sharing#scrollTo=4cMzClx-qN1x) and it's impressive for what it is (a base model of 2.8B params trained on 600B tokens and fine-tuned on an open-source chat dataset). It's not up to par with bigger models, but it can hold coherent conversations and on many topics produces results similar to ChatGPT 3.5. It's not very good at RP (or maybe I'm not good with my prompts), but I see a lot of promise in the architecture and expect great things in the next few months.
2024-01-18T21:25:13
https://www.reddit.com/r/LocalLLaMA/comments/19a1bqa/mamba_sure_is_impressive/
ouxjshsz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a1bqa
false
null
t3_19a1bqa
/r/LocalLLaMA/comments/19a1bqa/mamba_sure_is_impressive/
false
false
self
35
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
[Community Suggestion] Central Platform for LLM Frontend-Settings
1
[removed]
2024-01-18T20:57:29
https://www.reddit.com/r/LocalLLaMA/comments/19a0mik/community_suggestion_central_platform_for_llm/
Firepin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a0mik
false
null
t3_19a0mik
/r/LocalLLaMA/comments/19a0mik/community_suggestion_central_platform_for_llm/
false
false
self
1
null
Which GPT 4 API Endpoint is the best?
1
[removed]
2024-01-18T20:57:09
https://www.reddit.com/r/LocalLLaMA/comments/19a0m7u/which_gpt_4_api_endpoint_is_the_best/
ashisht1122
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a0m7u
false
null
t3_19a0m7u
/r/LocalLLaMA/comments/19a0m7u/which_gpt_4_api_endpoint_is_the_best/
false
false
self
1
null
What is a Tesla P40 good for apart from inference?
1
[removed]
2024-01-18T20:54:52
https://www.reddit.com/r/LocalLLaMA/comments/19a0k6f/what_is_a_telsa_p40_good_for_apart_from_inference/
TopRecognition9302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a0k6f
false
null
t3_19a0k6f
/r/LocalLLaMA/comments/19a0k6f/what_is_a_telsa_p40_good_for_apart_from_inference/
false
false
self
1
null
InternLM2 converted to Llama Format
38
Thanks to [Charles](https://huggingface.co/chargoddard) for the conversion scripts, I've converted several of the new InternLM2 models into Llama format. I've also made ExLlamaV2 versions while I was at it. You can find them here: https://huggingface.co/bartowski?search_models=internlm2 Note: the chat models seem to do something odd, outputting [UNUSED_TOKEN_145] in a way that seems equivalent to <|im_end|>. Not sure why, but it works fine despite outputting that at the end.
2024-01-18T20:43:51
https://www.reddit.com/r/LocalLLaMA/comments/19a0ad7/internlm2_converted_to_llama_format/
noneabove1182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19a0ad7
false
null
t3_19a0ad7
/r/LocalLLaMA/comments/19a0ad7/internlm2_converted_to_llama_format/
false
false
self
38
{'enabled': False, 'images': [{'id': 'mkJT5ExuEk2jBTqTL_7MFSZkkwFn9a32GZmredDV69I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?width=108&crop=smart&auto=webp&s=b6eaf4c0f3038e0e3de9a860f82bcb89408a8db4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?width=216&crop=smart&auto=webp&s=dcd1c6a9bfc603d83bc24cfed74518e2fa8bfdb4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?width=320&crop=smart&auto=webp&s=57a7e09c7c7aa1404e40a5462670bc387d352b9c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?width=640&crop=smart&auto=webp&s=47117d802d87395e8487f51a99139dafeb25236e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?width=960&crop=smart&auto=webp&s=175d7ceb1a500a5061373e84e29405dc5723bac8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?width=1080&crop=smart&auto=webp&s=03c1543631e342630f81641752961a5cbd23f991', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/u484jap8DrrELpeUbcMLCv4AFbC9d8A-3Bo-YZyPAtg.jpg?auto=webp&s=ba7d31980eb970b001b0a832280dc66dad861b67', 'width': 1200}, 'variants': {}}]}
LLaMa2Lang v0.4
5
2024-01-18T20:38:01
https://github.com/UnderstandLingBV/LLaMa2lang
UnderstandLingAI
github.com
1970-01-01T00:00:00
0
{}
19a0598
false
null
t3_19a0598
/r/LocalLLaMA/comments/19a0598/llama2lang_v04/
false
false
https://b.thumbs.redditm…n08COaMdQSwc.jpg
5
{'enabled': False, 'images': [{'id': '4nLtv3DwKVgs9sSlvieBGAoAoqjQSrtJNoQNiVfahS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?width=108&crop=smart&auto=webp&s=3ecf855b5b80497838fabfea622fe5e27df775aa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?width=216&crop=smart&auto=webp&s=95de9e1ee5927695e92286a6a78fc3fce92c1860', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?width=320&crop=smart&auto=webp&s=fa8d92c72d240e1cdcc39a70dbc8d419f970cdef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?width=640&crop=smart&auto=webp&s=cb9e3bd31814e178d0cb2584c4fdf8586489d234', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?width=960&crop=smart&auto=webp&s=79730047a2bfa23c7d9b3c6ee2c01bd1931f6d6d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?width=1080&crop=smart&auto=webp&s=9ad21c7c794f387a0f2535fc81666ff595babac9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/djEOIoRbD0duTwm7s40071GlA9_LR4kbZbQekeCDYd4.jpg?auto=webp&s=aa0b803ff8616eb799834d78eb311aeda78a331e', 'width': 1200}, 'variants': {}}]}
What will the future of LLM models be like?
1
I have had this question in my mind for a while and I would like to hear from people who know more about it. My start in this world was with davinci-002, an extremely intelligent model for that almost primitive time of AI (literally a year ago); to think of having something like that running locally was impossible. Nowadays, if there were a family dinner, we would surely sit davinci-002 at a separate table. Then OpenAI said that if something like ChatGPT-3 reached consumers, it would literally destroy the world in less than 24h; today Mixtral surpasses it, and without all the limitations that in some cases become absurd. Now my question: what will the future be like? Will there be 3B models capable of surpassing ChatGPT-4? Will hardware improve so that running a 120B model is much cheaper? Will there be 100x2B models that have a small mini expert for every possible tiny task? A mix of all? I am really curious which way AI will go in terms of text generation. What is your opinion?
2024-01-18T20:16:04
https://www.reddit.com/r/LocalLLaMA/comments/199zlwg/how_will_be_the_future_of_llm_models/
N9_m
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199zlwg
false
null
t3_199zlwg
/r/LocalLLaMA/comments/199zlwg/how_will_be_the_future_of_llm_models/
false
false
self
1
null
Has anyone tried the AI models API service from together.ai
4
If anyone does, then how reliable and performant are their APIs?
2024-01-18T20:10:21
https://www.reddit.com/r/LocalLLaMA/comments/199zgu2/has_anyone_tried_the_ai_models_api_service_from/
kernel348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199zgu2
false
null
t3_199zgu2
/r/LocalLLaMA/comments/199zgu2/has_anyone_tried_the_ai_models_api_service_from/
false
false
self
4
null
How to transcribe a video and summarize it?
2
Hi! I’m looking for a tool to locally summarize a video. It doesn’t really need to be a video, just the audio portion is fine. As the topic of the videos would be meetings, what’s going on in the visual part doesn’t really matter. I guess the best way would be to first transcribe the audio to text. And then ask an LLM to summarize it (that should be as easy as copying, pasting and asking any decent model to do it once you have the text). It’s the audio/video to text part the one I’m missing. Are there any decent ones? I’m mostly used to textgen-webui, silly tavern etc.
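A common local pipeline is Whisper for the audio-to-text step, then any decent chat model for the summary. A sketch using the openai-whisper package (model size and file names are placeholders; ffmpeg needs to be installed so it can decode the video container):

```python
import whisper

# Whisper accepts a video file directly and extracts the audio track via ffmpeg.
model = whisper.load_model("medium")        # "small"/"medium" are usually enough for meetings
result = model.transcribe("meeting.mp4")    # placeholder file name
transcript = result["text"]

with open("meeting_transcript.txt", "w") as f:
    f.write(transcript)

# The transcript can then go into textgen-webui or any local LLM with a prompt such as:
# "Summarize the following meeting transcript into key decisions and action items: ..."
```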
2024-01-18T19:27:31
https://www.reddit.com/r/LocalLLaMA/comments/199yeue/how_to_transcribe_a_video_and_summarize_it/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199yeue
false
null
t3_199yeue
/r/LocalLLaMA/comments/199yeue/how_to_transcribe_a_video_and_summarize_it/
false
false
self
2
null
Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation
1
[removed]
2024-01-18T19:25:47
https://www.reddit.com/r/LocalLLaMA/comments/199ydcv/mixture_of_tokens_efficient_llms_through/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ydcv
false
null
t3_199ydcv
/r/LocalLLaMA/comments/199ydcv/mixture_of_tokens_efficient_llms_through/
false
false
self
1
null
What model would you choose if you wanted to fine tune a customer service agent model for a small business?
1
I'm working with the Mistral-7B dolphin lasered model quantized to about 6 GB with Q6_K. It's taking about 7 seconds to answer questions with a 3090 and I only have it answering with RAG and have not fine tuned at all. It's hallucinating about half the time without any fine tuning. How would you speed it up and tune it? With RLHF or your own QandA pairs?
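Before any fine-tuning, hallucinations often drop just from tightening the RAG prompt so the model must refuse when the retrieved context lacks the answer. A hedged sketch of that prompt assembly (the template wording is illustrative and not tied to any framework):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    # Number the retrieved passages so the model can cite them.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "You are a customer-service assistant. Answer using ONLY the context below.\n"
        "If the context does not contain the answer, say you don't know and offer to "
        "escalate to a human agent. Cite the passage numbers you used.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Supervised fine-tuning on your own Q&A pairs (rather than RLHF) is then mostly about
# teaching tone and the refusal behaviour, not about storing facts in the weights.
```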
2024-01-18T19:21:36
https://www.reddit.com/r/LocalLLaMA/comments/199y9s0/what_model_would_you_choose_if_you_wanted_to_fine/
user_00000000000001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199y9s0
false
null
t3_199y9s0
/r/LocalLLaMA/comments/199y9s0/what_model_would_you_choose_if_you_wanted_to_fine/
false
false
self
1
null
Are MoMo models any good?
1
For example, this one: [https://huggingface.co/moreh/MoMo-70B-lora-1.8.6-DPO](https://huggingface.co/moreh/MoMo-70B-lora-1.8.6-DPO) is currently ranked #1 on the Hugging Face leaderboard. Is it contaminated? It seems a good deal better on the TruthfulQA and GSM8K metrics compared to the Qwen model it's based on.
2024-01-18T19:17:02
https://www.reddit.com/r/LocalLLaMA/comments/199y5ue/are_momo_models_any_good/
ouxjshsz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199y5ue
false
null
t3_199y5ue
/r/LocalLLaMA/comments/199y5ue/are_momo_models_any_good/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pqKFjxokHCv3HjQSPzP9x-dC-Lr6KJrxpunyVgZ18hE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?width=108&crop=smart&auto=webp&s=6ea5146e6463d163bdbb469799d0d88e51286d65', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?width=216&crop=smart&auto=webp&s=d0ea13c9a4a0de94f6f2316353bb01749298a54d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?width=320&crop=smart&auto=webp&s=a6a240cdbfd825d4dfa56eee953a1f74d6d4d239', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?width=640&crop=smart&auto=webp&s=5b302a38f733d3159664add22eeae49d4ac718b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?width=960&crop=smart&auto=webp&s=f4dda0775ec63f2aa6c8a179bc7a60eaa5377be1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?width=1080&crop=smart&auto=webp&s=11294acf577024e2d655a4082f88224277f2fd26', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VJVwatxM_wJ67IEb_iAPhtdoUHGEdsmfAqrL3cy6GyM.jpg?auto=webp&s=eb2e516200f674edb94f63e4c0be50c367ecff7f', 'width': 1200}, 'variants': {}}]}
Zuckerberg says they are training LLaMa 3 on 600,000 H100s.. mind blown!
1,075
2024-01-18T19:10:27
https://v.redd.it/pzlvuoncz8dc1
kocahmet1
/r/LocalLLaMA/comments/199y05e/zuckerberg_says_they_are_training_llama_3_on/
1970-01-01T00:00:00
0
{}
199y05e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pzlvuoncz8dc1/DASHPlaylist.mpd?a=1708326634%2CZDFiNjllOTAyZjM5MWM4YzNiMGJiMmU1MTIxZDlkN2NjMzA1MTkzMWVjZGZjMWQ3ZWU1ODA5Y2U2MThlNTYyNQ%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/pzlvuoncz8dc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/pzlvuoncz8dc1/HLSPlaylist.m3u8?a=1708326634%2CNGJmODk0YTgxMzdmZTJhNzBkYWYzNzc2MWI5MTBjY2VmNTVhMjE5OTIwNzUxMjI4NThhMTg5NjliNWYwNmE0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pzlvuoncz8dc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_199y05e
/r/LocalLLaMA/comments/199y05e/zuckerberg_says_they_are_training_llama_3_on/
false
false
https://external-preview…671f7c37958ae081
1,075
{'enabled': False, 'images': [{'id': 'eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?width=108&crop=smart&format=pjpg&auto=webp&s=d7c0f898adafe97f04158f15ba4804b405b91d8a', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?width=216&crop=smart&format=pjpg&auto=webp&s=43214fc959a4645bb6d050b47b259c0b09056b8e', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?width=320&crop=smart&format=pjpg&auto=webp&s=2c46bacb1777725f1e7fad575f9460e89da9b85d', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?width=640&crop=smart&format=pjpg&auto=webp&s=68ab7d252aaf1cba62f028f355c8594a25151d6e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?width=960&crop=smart&format=pjpg&auto=webp&s=465aedee048e4c9d5cb4b926564d322f4591245a', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2d2b3803f561959626dcce09c990a05a645b9a0f', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eHNudnNjaWV6OGRjMXFouKJz_VEUpQ1h0CSkUlgEfkZ5aaXcsrHdAgDDHdlC.png?format=pjpg&auto=webp&s=c45864eb08fb7500b0598b7560ac28ac56524c99', 'width': 1080}, 'variants': {}}]}
Any suggestion on the best model for chat and RP for M2 chip Mac?
3
I have just recently turned to running local models and tried just a few, like Noromaid and Nous Capybara at Q5_K_S quant. The 20B ran pretty well with a bit of heat, and the speed isn't unbearable. The Xwin 70B didn't run at all. I mean, the text they generate isn't bad, but sometimes they don't get metaphors or subtle messages like warnings. Any suggestions for a model that sits between these sizes that is really good for RP and chat? Thank you.
2024-01-18T18:49:17
https://www.reddit.com/r/LocalLLaMA/comments/199xhac/any_suggestion_on_the_best_model_for_chat_and_rp/
Poletry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199xhac
false
null
t3_199xhac
/r/LocalLLaMA/comments/199xhac/any_suggestion_on_the_best_model_for_chat_and_rp/
false
false
self
3
null
Mark Zuckerberg’s new goal is creating artificial general intelligence
168
Alex Heath on Threads: I talked with @zuck about Meta's new goal to create artificial general intelligence, or AGI. He's also moving the company's AI research group into the bigger product org today. And he's ready to flex about the immense amount of compute he's collecting to build superintelligence: "We have built up the capacity to do this at a scale that may be larger than any other individual company."
2024-01-18T18:20:52
https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-meta-agi-reorg-interview
noiseinvacuum
theverge.com
1970-01-01T00:00:00
0
{}
199ws9d
false
null
t3_199ws9d
/r/LocalLLaMA/comments/199ws9d/mark_zuckerbergs_new_goal_is_creating_artificial/
false
false
https://a.thumbs.redditm…eV2oawDwppQ4.jpg
168
{'enabled': False, 'images': [{'id': 'SN73KfEvrx7OCD1iRzfw-ZMHgPWNmj4kpy9UDSGhtV0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?width=108&crop=smart&auto=webp&s=9f580d0ca3a32d10a24f82085e45dfe781adb7c4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?width=216&crop=smart&auto=webp&s=af7f9e658214cd1f2bc083211c9b5dadddb11050', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?width=320&crop=smart&auto=webp&s=38d43666301b8d3f28807869ffbdcbaf027d6354', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?width=640&crop=smart&auto=webp&s=7079d42007fe9814776f732b0caeaf454cfbcf42', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?width=960&crop=smart&auto=webp&s=d88d6f5128d85bc2c00de491872f0df8d0ad37e0', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?width=1080&crop=smart&auto=webp&s=c18feb7913a5b7754234358a7e265c5b942b95e6', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/jdhfViCGE7we3Zp29s8FTYp0d6_PVt-EIynDnpJXQj0.jpg?auto=webp&s=dc4604c9431fca483936366e145eb965345a34ac', 'width': 1200}, 'variants': {}}]}
Running the new 2-bit quants
10
My GPU has 12 GB of VRAM, so when I heard there was a new quantization method that could fit Mixtral 8x7B on my card, I jumped at the opportunity. My only questions are these: - Is llama.cpp the only thing that can run the new quants? And if so, how do I even run inference with llama.cpp on my GPU? - How much am I losing by using a 2-bit quant? Would I be better off using a higher quant of a smaller model (e.g. a 6-bit quant of a SOLAR model), or am I still going to get better-quality answers from the 2-bit quant of Mixtral 8x7B?
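The new 2-bit quants landed in llama.cpp first, so llama.cpp (or anything built on top of it) is the place to run them; GPU use is just a matter of offloading layers. A sketch with the llama-cpp-python binding, assuming a CUDA-enabled build (file name and layer count are placeholders):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.IQ2_XS.gguf",  # placeholder quant file name
    n_gpu_layers=20,   # raise until the 12 GB of VRAM is full; -1 offloads every layer
    n_ctx=4096,
)
out = llm("Q: What is a mixture-of-experts model?\nA:", max_tokens=256)
print(out["choices"][0]["text"])
```

Whether the 2-bit Mixtral beats a 6-bit SOLAR is model- and task-dependent, so comparing both on your own prompts is the honest answer.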
2024-01-18T18:17:55
https://www.reddit.com/r/LocalLLaMA/comments/199wpnt/running_the_new_2bit_quants/
Aptare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199wpnt
false
null
t3_199wpnt
/r/LocalLLaMA/comments/199wpnt/running_the_new_2bit_quants/
false
false
self
10
null
Euryale 70B vs Nous Capybara 34B for chatting?
6
I am wondering which one would be better for chatting, and what memory and hardware are needed to run Nous Capybara at max context length.
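For the context-length half of the question, the big cost on top of the weights is the KV cache, which can be estimated from the model's config. A back-of-the-envelope helper (the example numbers are made up; read the real values from the model's config.json):

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 context_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size: K and V, per layer, per token, in GiB."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem
    return total_bytes / (1024 ** 3)

# Example with placeholder config values; substitute the real ones from config.json:
print(kv_cache_gib(n_layers=60, n_kv_heads=8, head_dim=128, context_len=200_000))
```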
2024-01-18T18:15:29
https://www.reddit.com/r/LocalLLaMA/comments/199wnhg/euryale_70b_vs_nous_capybara_34b_for_chatting/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199wnhg
false
null
t3_199wnhg
/r/LocalLLaMA/comments/199wnhg/euryale_70b_vs_nous_capybara_34b_for_chatting/
false
false
self
6
null
Fine-tuning or training a small conversational model
6
Hello, I want to fine-tune (train) a small model for conversation, for children to interact with. While companionship is the larger goal, I would love to add in help for homework and details about fun kid things. I have found some datasets on Kaggle and made my own, but I have no clue how to get started or which small model would work: should I look at DistilBERT, TinyLlama, GPT-2...? I would like this to run on the simplest hardware I can get, like a Raspberry Pi or a mini PC. I have 3 or so projects that could use this info. I tried looking at AutoTrain on Hugging Face but I am still pretty lost. I have tried other pre-fine-tuned models, but mostly people are focused on uncensored models and I am kind of asking for the opposite.
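One way to get started is a small causal LM plus the plain Hugging Face Trainer; the sketch below assumes a JSONL dataset with a single "text" field of formatted, child-safe dialogues (model name, file name and hyperparameters are illustrative). Training would happen on a normal PC/GPU; a Raspberry Pi-class device would then only run a quantized export of the result.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # illustrative small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

ds = load_dataset("json", data_files="kid_dialogues.jsonl")["train"]   # placeholder file
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="kidchat", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```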
2024-01-18T17:39:39
https://www.reddit.com/r/LocalLLaMA/comments/199vrvl/finetuning_or_training_a_small_conversational/
L0-LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199vrvl
false
null
t3_199vrvl
/r/LocalLLaMA/comments/199vrvl/finetuning_or_training_a_small_conversational/
false
false
self
6
null
Identify code blocks and their programming language in document
1
I have a manual with code snippets in different languages (JavaScript, Python, etc.). I want to identify the code blocks and also indentify which language they are in. Then I want to take some actions based on this input - for example, I want to give a title *Python snippet* to the Python sample code. What's the best way of going about this?
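If the manual is in Markdown (or can be converted to it), fenced code blocks can be pulled out with a regex and untagged ones passed to Pygments to guess the language. A sketch (the title format is up to you, and Pygments' guess is heuristic rather than certain):

```python
import re
from pygments.lexers import guess_lexer
from pygments.util import ClassNotFound

FENCE = re.compile(r"```(\w+)?\n(.*?)```", re.DOTALL)

def label_code_blocks(markdown: str) -> list[tuple[str, str]]:
    """Return (language, code) pairs for every fenced block in the document."""
    blocks = []
    for lang, code in FENCE.findall(markdown):
        if not lang:
            try:
                lang = guess_lexer(code).name   # heuristic guess, e.g. "Python"
            except ClassNotFound:
                lang = "unknown"
        blocks.append((lang, code))
    return blocks

# e.g. prepend f"**{lang} snippet**\n" before each block when rewriting the manual
```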
2024-01-18T17:30:58
https://www.reddit.com/r/LocalLLaMA/comments/199vkf8/identify_code_blocks_and_their_programming/
datashri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199vkf8
false
null
t3_199vkf8
/r/LocalLLaMA/comments/199vkf8/identify_code_blocks_and_their_programming/
false
false
self
1
null
So who wants to open-source mini AGI?
19
https://x.com/divgarg9/status/1747683043446579416?s=46 This is insane, and far ahead of any open source autonomous agents we've got right now. We need to open source this.
2024-01-18T17:24:05
https://www.reddit.com/r/LocalLLaMA/comments/199vepk/so_who_wants_to_opensource_mini_agi/
Leadership_Upper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199vepk
false
null
t3_199vepk
/r/LocalLLaMA/comments/199vepk/so_who_wants_to_opensource_mini_agi/
false
false
self
19
{'enabled': False, 'images': [{'id': 'oeUh-GMDNp1wAvbNNUjVcVUeGhGJQVAv66CmVnY441g', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?width=108&crop=smart&auto=webp&s=b4ac10c2da642d8c4231e6035bc9566b9292f42b', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?width=216&crop=smart&auto=webp&s=c967e3a3f729f368489ac608e2bb1e890aaba41c', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?width=320&crop=smart&auto=webp&s=6515a0b42690f8a70fcdb4a71996394d1eed3275', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?width=640&crop=smart&auto=webp&s=97aa122798eced293e27873edd553bb2791ca773', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?width=960&crop=smart&auto=webp&s=60371ae8333fbfc06069bd4678b0bd1a4311b1c4', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?width=1080&crop=smart&auto=webp&s=f6a78b4b8d6a8d68f84c75e64d653e7be706f048', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/VIGpVTT3xK5MXW3QnXsffd15GtkgxixbdDtaAPmpz9s.jpg?auto=webp&s=d628d3fa22214044a0c4051f818c713a5e16ed0f', 'width': 1728}, 'variants': {}}]}
Group for ethical and philosophical discussions.
1
[removed]
2024-01-18T17:17:16
https://www.reddit.com/r/LocalLLaMA/comments/199v8yk/group_for_ethical_and_philosophical_discussions/
Elegant_Course2372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199v8yk
false
null
t3_199v8yk
/r/LocalLLaMA/comments/199v8yk/group_for_ethical_and_philosophical_discussions/
false
false
self
1
null
Regarding LLaMA 7B or 13B
1
[removed]
2024-01-18T17:03:51
https://www.reddit.com/r/LocalLLaMA/comments/199ux8y/regarding_llama7_or_13b/
Optimal_Original_815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ux8y
false
null
t3_199ux8y
/r/LocalLLaMA/comments/199ux8y/regarding_llama7_or_13b/
false
false
self
1
null
I love writing tutorials. What are you most interested in?
40
I've got a cool job where I get to go out and learn about stuff so that I can write about it. There's a gap in my schedule, enough for a decently in-depth blog. I wanted to see what the community is most interested in so I can help out. Hope this doesn't come off as selfish promo, but here are a few examples of stuff I've done in the past: • [https://colab.research.google.com/drive/1y4RCTIfTTb53b_S95xl4IZaW8am6sBxz](https://colab.research.google.com/drive/1y4RCTIfTTb53b_S95xl4IZaW8am6sBxz) • [https://colab.research.google.com/drive/1SNgF0MudcIAIBibmi-YkWFwn8Nptt4r5](https://colab.research.google.com/drive/1SNgF0MudcIAIBibmi-YkWFwn8Nptt4r5) • [https://colab.research.google.com/drive/1UFBWOUbUUAHTf7ilCGhPtGDMDeYXrJzt](https://colab.research.google.com/drive/1UFBWOUbUUAHTf7ilCGhPtGDMDeYXrJzt?usp=sharing) • [https://colab.research.google.com/drive/11VmmWiY_K76WHepVNp2OU6w3VGuMjBVm](https://colab.research.google.com/drive/11VmmWiY_K76WHepVNp2OU6w3VGuMjBVm?usp=sharing) • [https://colab.research.google.com/drive/1ehBzDjUQrioa4abvTIWtABU1ENI2l_2g](https://colab.research.google.com/drive/1ehBzDjUQrioa4abvTIWtABU1ENI2l_2g) Cheers!
2024-01-18T16:59:41
https://www.reddit.com/r/LocalLLaMA/comments/199ut0n/i_love_writing_tutorials_what_are_you_most/
datascienceharp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ut0n
false
null
t3_199ut0n
/r/LocalLLaMA/comments/199ut0n/i_love_writing_tutorials_what_are_you_most/
false
false
self
40
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
Are you smarter than an LLM?
30
2024-01-18T16:57:47
https://d.erenrich.net/are-you-smarter-than-an-llm/index.html
winkler1
d.erenrich.net
1970-01-01T00:00:00
0
{}
199ure2
false
null
t3_199ure2
/r/LocalLLaMA/comments/199ure2/are_you_smarter_than_an_llm/
false
false
default
30
null
I downloaded all 64 default(Q4_0) models that were available on ollama.ai and did a speed test for each one
20
## Setup * CPU: AMD Ryzen 7 5800X at 4.2 GHz, * RAM: 128Gb 3200 MHz, * VGA: RTX 3080 w/ 10GB VRAM ## Results |name|token per second|seconds per token|size|parameter size|quantization level| |:-|:-|:-|:-|:-|:-| |tinyllama:latest|102,60||0.64 GB|1B|Q4\_0| |dolphin-phi:latest|88,50||1.60 GB|3B|Q4\_0| |dolphin-mistral:latest|75,10||4.11 GB|7B|Q4\_0| |openhermes:latest|58,70||4.11 GB|7B|Q4\_0| |llama-pro:latest|57,80||4.74 GB|8B|Q4\_0| |solar:latest|43,10||6.07 GB|11B|Q4\_0| |nous-hermes2:latest|42,90||6.07 GB|11B|Q4\_0| |deepseek-coder:latest|41,20||0.78 GB|1B|Q4\_0| |stable-code:latest|21,30||1.61 GB|3B|Q4\_0| |stablelm-zephyr:latest|20,90||1.61 GB|3B|Q4\_0| |phi:latest|19,90|0.1|1.60 GB|3B|Q4\_0| |starcoder:latest|18,50|0.1|1.83 GB|3B|Q4\_0| |orca-mini:latest|16,90|0.1|1.98 GB|3B|Q4\_0| |yi:latest|10,30|0.1|3.48 GB|6B|Q4\_0| |yarn-llama2:latest|9,20|0.1|3.83 GB|7B|Q4\_0| |medllama2:latest|9,10|0.1|3.83 GB|7B|Q4\_0| |meditron:latest|9,10|0.1|3.83 GB|7B|Q4\_0| |wizard-vicuna-uncensored:latest|9,10|0.1|3.83 GB|7B|Q4\_0| |llama2-uncensored:latest|9,00|0.1|3.83 GB|7B|Q4\_0| |stable-beluga:latest|9,00|0.1|3.83 GB|7B|Q4\_0| |deepseek-llm:latest|9,00|0.1|4.00 GB|7B|Q4\_0| |nous-hermes:latest|9,00|0.1|3.83 GB|7B|Q4\_0| |vicuna:latest|9,00|0.1|3.83 GB|7B|Q4\_0| |llava:latest|9,00|0.1|4.45 GB|7B|Q4\_0| |orca2:latest|9,00|0.1|3.83 GB|7B|Q4\_0| |wizardcoder:latest|8,90|0.1|3.83 GB|7B|Q4\_0| |xwinlm:latest|8,90|0.1|3.83 GB|7B|Q4\_0| |llama2-chinese:latest|8,90|0.1|3.83 GB|7B|Q4\_0| |magicoder:latest|8,80|0.1|3.83 GB|7B|Q4\_0| |bakllava:latest|8,80|0.1|4.73 GB|7B|Q4\_0| |mistrallite:latest|8,80|0.1|4.11 GB|7B|Q4\_0| |notus:latest|8,70|0.1|4.11 GB|7B|Q4\_0| |falcon:latest|8,70|0.1|4.21 GB|7B|Q4\_0| |sqlcoder:latest|8,60|0.1|4.11 GB|7B|Q4\_0| |llama2:latest|8,60|0.1|3.83 GB|7B|Q4\_0| |mistral:latest|8,60|0.1|4.11 GB|7B|Q4\_0| |codellama:latest|8,60|0.1|3.83 GB|7B|Q4\_0| |samantha-mistral:latest|8,60|0.1|4.11 GB|7B|Q4\_0| |neural-chat:latest|8,50|0.1|4.11 GB|7B|Q4\_0| |yarn-mistral:latest|8,50|0.1|4.11 GB|7B|Q4\_0| |openchat:latest|8,40|0.1|4.11 GB|7B|Q4\_0| |starling-lm:latest|8,40|0.1|4.11 GB|7B|Q4\_0| |dolphin-mistral:latest|8,40|0.1|4.11 GB|7B|Q4\_0| |mistral-openorca:latest|8,40|0.1|4.11 GB|7B|Q4\_0| |wizard-math:latest|8,40|0.1|4.11 GB|7B|Q4\_0| |zephyr:latest|8,30|0.1|4.11 GB|7B|Q4\_0| |wizard-vicuna:latest|4,90|0.2|7.37 GB|13B|Q4\_0| |open-orca-platypus2:latest|4,90|0.2|7.37 GB|13B|Q4\_0| |nexusraven:latest|4,90|0.2|7.37 GB|13B|Q4\_0| |codeup:latest|4,80|0.2|7.37 GB|13B|Q4\_0| |wizardlm-uncensored:latest|4,70|0.2|7.37 GB|13B|Q4\_0| |everythinglm:latest|4,70|0.2|7.37 GB|13B|Q4\_0| |dolphin-mixtral:latest|4,70|0.2|26.44 GB|47B|Q4\_0| |nous-hermes2-mixtral:latest|4,70|0.2|26.44 GB|47B|Q4\_0| |mixtral:latest|4,70|0.2|26.44 GB|47B|Q4\_0| |nexusraven:latest|4,70|0.2|7.37 GB|13B|Q4\_0| |notux:latest|4,20|0.2|26.44 GB|47B|Q4\_0| |dolphin-mixtral:latest|3,50|0.3|26.44 GB|47B|Q4\_0| |mixtral:latest|3,50|0.3|26.44 GB|47B|Q4\_0| |codebooga:latest|1,90|0.5|19.05 GB|34B|Q4\_0| |phind-codellama:latest|1,90|0.5|19.05 GB|34B|Q4\_0| |alfred:latest|1,50|0.6|23.81 GB|42B|Q4\_0| |goliath:latest|0,00|61.2|66.31 GB|118B|Q4\_0| |megadolphin:latest|0,00|61.9|67.76 GB|120B|Q4\_0| &#x200B;
2024-01-18T16:25:45
https://www.reddit.com/r/LocalLLaMA/comments/199tzxx/i_downloaded_all_64_defaultq4_0_models_that_were/
RandomPrints
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199tzxx
false
null
t3_199tzxx
/r/LocalLLaMA/comments/199tzxx/i_downloaded_all_64_defaultq4_0_models_that_were/
false
false
self
20
null
How to get the most out of Apple ARM chips
1
[removed]
2024-01-18T16:17:12
https://www.reddit.com/r/LocalLLaMA/comments/199tsqj/how_to_get_the_most_out_of_apple_arm_chips/
Eliiasv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199tsqj
false
null
t3_199tsqj
/r/LocalLLaMA/comments/199tsqj/how_to_get_the_most_out_of_apple_arm_chips/
false
false
self
1
null
Running an LLM on 2013 trashcan MacPro?
3
I ordered a used 2013 Mac Pro with an Intel Xeon 6-core processor, 64GB DDR3 RAM, and dual AMD D300 graphics cards. I'm wondering if I can run a model like dolphin-mixtral on it. Has anyone tried it before?
2024-01-18T15:55:55
https://www.reddit.com/r/LocalLLaMA/comments/199tb2h/running_an_llm_on_2013_trashcan_macpro/
clean_pegasus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199tb2h
false
null
t3_199tb2h
/r/LocalLLaMA/comments/199tb2h/running_an_llm_on_2013_trashcan_macpro/
false
false
self
3
null
Is shared memory on a laptop better than regular desktop DDR5? The 4090 has 64GB of "total" memory.
45
2024-01-18T15:44:42
https://i.redd.it/25p9aflhy7dc1.png
Future-Freedom-4631
i.redd.it
1970-01-01T00:00:00
0
{}
199t24x
false
null
t3_199t24x
/r/LocalLLaMA/comments/199t24x/is_shared_memory_on_a_laptop_better_than_regular/
false
false
https://b.thumbs.redditm…GquiQsmh0ops.jpg
45
{'enabled': True, 'images': [{'id': 'VIdGVms2eMcRulMUqM5tIQCP2FAWq_MvBhu9DLp9FfI', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/25p9aflhy7dc1.png?width=108&crop=smart&auto=webp&s=c74d7714b6fa72cc48dab8f9b2523834cff62541', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/25p9aflhy7dc1.png?width=216&crop=smart&auto=webp&s=4cf935f2c13be150bcf877c3cb06ce52dde3cb41', 'width': 216}, {'height': 287, 'url': 'https://preview.redd.it/25p9aflhy7dc1.png?width=320&crop=smart&auto=webp&s=1f706c68b51cbc42e73cf5a8e0748bd3da5a27e6', 'width': 320}, {'height': 574, 'url': 'https://preview.redd.it/25p9aflhy7dc1.png?width=640&crop=smart&auto=webp&s=b736095c7cb7fd1462d98254a6b5095957ef060c', 'width': 640}], 'source': {'height': 621, 'url': 'https://preview.redd.it/25p9aflhy7dc1.png?auto=webp&s=af6b148a204fbae0e428666848ee41a3b7d011b3', 'width': 692}, 'variants': {}}]}
Trying to set context to 100k on model Yi-34B-200K-DARE-megamerge-v8 and I wanted to see if copilot's answer matches up with what you guys here think
2
I just downloaded TheBloke's quant (Thanks!) of Yi-34B-200K-DARE-megamerge-v8 and I wanted to try and see how much context I could get away with running on my mac Studio M2 Ultra 128gb ram. I asked Microsoft Copilot (GPT-4) for an answer and below is its response. I haven't tried it yet, but what do you guys think? I'm shooting for 100k as a starting point, but maybe I'm dreaming. I honestly have no frame of reference as I've never tried context windows above 8k. Below is my copilot "conversation": \---------------- what do I set the "--rope-freq-base" flag to if I want 100k context on a model in llama.cpp? Answer: Hello, this is Bing. I can help you with some information about llama.cpp and the --rope-freq-base flag. [Llama.cpp is a library that implements the Meta’s LLaMA model in C/C++, which is a large language and vision model based on the transformer architecture](https://github.com/ggerganov/llama.cpp)[1](https://github.com/ggerganov/llama.cpp). [The --rope-freq-base flag is a parameter that controls the RoPE (Rotary Position Embedding) technique, which is used to encode the position information of the tokens in the input sequence](https://github.com/oobabooga/text-generation-webui/issues/3489)[2](https://github.com/oobabooga/text-generation-webui/issues/3489). [According to the GitHub issue #3489](https://github.com/oobabooga/text-generation-webui/issues/3489)[2](https://github.com/oobabooga/text-generation-webui/issues/3489), the --rope-freq-base flag should be set to 10000 for the default context length of 4096 tokens. However, if you want to use a larger context length, such as 100k tokens, you need to adjust the --rope-freq-scale flag as well. [The --rope-freq-scale flag is a scaling factor that determines how fast the RoPE frequency changes with the position](https://github.com/oobabooga/text-generation-webui/issues/3489)[2](https://github.com/oobabooga/text-generation-webui/issues/3489). One possible way to set the --rope-freq-scale flag for 100k context is to use the formula: --rope-freq-scale = (4096 / 100000) \* 1.0 This formula is based on the assumption that the default scale value is 1.0 for 4096 context. Therefore, the scale value should be proportionally reduced for larger context. Using this formula, the --rope-freq-scale flag for 100k context would be: --rope-freq-scale = 0.04096 So, to summarize, if you want to use 100k context on a model in llama.cpp, you should set the flags as follows: --rope-freq-base 10000 --rope-freq-scale 0.04096 I hope this helps. If you have any other questions, please feel free to ask. 😊 \-------------------------------------- BTW, curious if any other users with macs with lots of RAM 96k+ would chime in on how much context they've gotten to work with various model sizes. Thanks in advance!!
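For what it's worth, a minimal llama-cpp-python sketch for this kind of load is below; the main caveat, worth verifying, is that a model trained natively for 200K context usually carries its long-context rope base in the GGUF metadata, so a manual rope-frequency override may not be needed at all. The filename and settings are placeholders:

```python
from llama_cpp import Llama

# Filename is a placeholder; rope_freq_base/rope_freq_scale are left unset on the
# assumption that the 200K-trained model's GGUF metadata already carries them.
llm = Llama(
    model_path="yi-34b-200k-dare-megamerge-v8.Q4_K_M.gguf",
    n_ctx=100_000,     # requested window; KV-cache memory grows roughly linearly with this
    n_gpu_layers=-1,   # offload everything on a 128GB M2 Ultra
)
```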
2024-01-18T15:37:14
https://www.reddit.com/r/LocalLLaMA/comments/199sw6w/trying_to_set_context_to_100k_on_model/
spanielrassler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199sw6w
false
null
t3_199sw6w
/r/LocalLLaMA/comments/199sw6w/trying_to_set_context_to_100k_on_model/
false
false
self
2
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
Best LLM courses out there
1
Hey guys, I want to dive deeper into LLMs and my company may have budget for training. What would you say are some of the best courses out there? Generally speaking the focus should be on communication, e.g bots for corporations
2024-01-18T15:37:08
https://www.reddit.com/r/LocalLLaMA/comments/199sw3t/best_llm_courses_out_there/
Visible_Guest_2986
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199sw3t
false
null
t3_199sw3t
/r/LocalLLaMA/comments/199sw3t/best_llm_courses_out_there/
false
false
self
1
null
Base model or MoE for personalities?
5
Can someone explain if a base model like mistral 7b is better than a 8x7b to be fine tuned for characters and personalities? My intuition tells me the mixture of experts (moe) is better at handling a wide array of tasks like math, logic and reasoning, but that a language base model would be easier to tune to a personality. Lmk!
2024-01-18T14:23:34
https://www.reddit.com/r/LocalLLaMA/comments/199r9q5/base_model_or_moe_for_personalities/
Adventurous-Ad-3637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199r9q5
false
null
t3_199r9q5
/r/LocalLLaMA/comments/199r9q5/base_model_or_moe_for_personalities/
false
false
self
5
null
AIlice: AI agent that performs tasks through an interactive agents calling tree
1
[removed]
2024-01-18T14:10:48
https://www.reddit.com/r/LocalLLaMA/comments/199r02j/ailice_ai_agent_that_performs_tasks_through_an/
Steven_Lu_137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199r02j
false
null
t3_199r02j
/r/LocalLLaMA/comments/199r02j/ailice_ai_agent_that_performs_tasks_through_an/
false
false
self
1
{'enabled': False, 'images': [{'id': '7fB_VjWuxcIVI5rz-Blz0HLUzajDaDE1NzZ3_Wz61g4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?width=108&crop=smart&auto=webp&s=a7f32de738bc39a09da1691ebaabc0a85705d559', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?width=216&crop=smart&auto=webp&s=ce360f964b452c8c64bc0f93b9bcf6624b49217f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?width=320&crop=smart&auto=webp&s=bfa7df4cdc0f5c9d14ce355ec240bfd23075ca69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?width=640&crop=smart&auto=webp&s=d375b27d41bb17b19b7918bac64ed030ef652814', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?width=960&crop=smart&auto=webp&s=3867e7414cc5c54e467af5784078fddfe4b39db3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?width=1080&crop=smart&auto=webp&s=92ef68141ae20bb572d2a323c793b747517da92e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2SlMXuT6USOE-qiqB3qAem3VDItwwu1JErAKvKKY1NM.jpg?auto=webp&s=73f36cd743d8662c63c8f7c5f2ff92a64c5c0bbd', 'width': 1200}, 'variants': {}}]}
Are there any boilerplate RAG/embedding programs? (python)
13
Seeing how we have seemingly regular inputs: >User string >Large File string >Desired Output string >standard set of parameters? You'd think there would be fewer tutorials and more libraries/examples. Heck, using oobabooga they have some system in place (and I suppose I could dig into it, but I really want copy-paste). Yes, spoonfeed me, I think everyone just wants things to work. I've already done my time programming random forests. Please just make this a library instead of a job....
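In the absence of a ready-made library, a bare-bones copy-paste sketch using only llama-cpp-python and numpy might look like this (model filenames are placeholders, and the embed/completion calls should be checked against the installed llama-cpp-python version):

```python
import numpy as np
from llama_cpp import Llama

# Placeholders: any GGUF chat model, and any GGUF embedding model (or a model
# loaded with embedding=True) should work in principle.
embedder = Llama(model_path="embedding-model.gguf", embedding=True, verbose=False)
llm = Llama(model_path="chat-model.gguf", n_ctx=4096, verbose=False)

def chunk(text, size=1000, overlap=100):
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def embed(texts):
    return np.array([embedder.embed(t) for t in texts])

with open("large_file.txt") as f:   # the "Large File string"
    chunks = chunk(f.read())
chunk_vecs = embed(chunks)

def answer(question, top_k=3):
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    prompt = (f"### Instruction:\nUse the context to answer the question.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\n\n### Response:\n")
    return llm(prompt, max_tokens=256)["choices"][0]["text"]

print(answer("What is the desired output?"))  # the "User string"
```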
2024-01-18T14:04:01
https://www.reddit.com/r/LocalLLaMA/comments/199qv0l/are_there_any_boilerplate_ragembedding_programs/
pr1vacyn0eb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199qv0l
false
null
t3_199qv0l
/r/LocalLLaMA/comments/199qv0l/are_there_any_boilerplate_ragembedding_programs/
false
false
self
13
null
What are your opinions on Poe's UI?
1
[removed]
2024-01-18T13:38:18
https://www.reddit.com/r/LocalLLaMA/comments/199qcd4/what_are_you_opinions_on_poes_ui/
TheTwelveYearOld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199qcd4
false
null
t3_199qcd4
/r/LocalLLaMA/comments/199qcd4/what_are_you_opinions_on_poes_ui/
false
false
self
1
null
Wrong output with the llama-cpp-python tutorial and chat completion
1
[removed]
2024-01-18T13:20:48
https://www.reddit.com/r/LocalLLaMA/comments/199pzye/wrong_output_with_the_llamacpppython_tutorial_and/
Jacques_Martins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199pzye
false
null
t3_199pzye
/r/LocalLLaMA/comments/199pzye/wrong_output_with_the_llamacpppython_tutorial_and/
false
false
self
1
null
Very slow response with mixtral 8x7B instruct with llama-cpp-python and llama-index framework in google colab for RAG framework.
3
Hi there. I have been trying to run the mixtral `8x7B quantized` model together with `llama-index` and `llama-cpp-python` for simple RAG applications. I have noticed that the responses are very slow. They take around 10 to 20 mins to do simple querying. I am wondering if others are also facing similar issues and if they have a workaround or some tricks? I am using the free-tier Colab notebook and offloaded around `20 GPU layers`. The llm setup looks as follows: `!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.23` `import torch` `from llama_index.llms import LlamaCPP` `from llama_index.llms.llama_utils import messages_to_prompt, completion_to_prompt` `llm = LlamaCPP(model_url = 'https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q2_K.gguf',` `temperature=0.1,` `max_new_tokens=512, #1024` `context_window=8192,` `generate_kwargs={},` `model_kwargs={"n_gpu_layers": 20},` `messages_to_prompt=messages_to_prompt,` `completion_to_prompt=completion_to_prompt,` `verbose=True,` `)`
2024-01-18T12:58:34
https://www.reddit.com/r/LocalLLaMA/comments/199pl4o/very_slow_response_with_mixtral_8x7b_instruct/
Fluid-Beyond3878
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199pl4o
false
null
t3_199pl4o
/r/LocalLLaMA/comments/199pl4o/very_slow_response_with_mixtral_8x7b_instruct/
false
false
self
3
null
Projecte AINA's FLOR-6.3B in GGUF format and quantized weights.
1
[removed]
2024-01-18T12:36:02
https://www.reddit.com/r/LocalLLaMA/comments/199p6tw/projecte_ainas_flor63b_in_gguf_format_and/
Christ0ph_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199p6tw
false
null
t3_199p6tw
/r/LocalLLaMA/comments/199p6tw/projecte_ainas_flor63b_in_gguf_format_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yCIBL6mHMEQrWpds7vZZpg91LPeG2B5FMrUAvWZUDGc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?width=108&crop=smart&auto=webp&s=e42e6b05c380609ad8dcc0036beaa1979b9568a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?width=216&crop=smart&auto=webp&s=860086bc6c70bd8567acc8be29ac86f6c3cc6a53', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?width=320&crop=smart&auto=webp&s=d6b7fba8255832a923a61025a1fb2686dd148ed0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?width=640&crop=smart&auto=webp&s=492670190f514315ab41b87b26410a27caa1a1ae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?width=960&crop=smart&auto=webp&s=1b4cc8338c8ec7a1c26c8f5ef562d5ce0a8512f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?width=1080&crop=smart&auto=webp&s=3e3827d4b17db13715128d4cc6c92d524f2c44bf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nxHHFCiJsABAk9U6rRVy-LtMFvRtWa7ZVy1tbTQBi4g.jpg?auto=webp&s=80f65fc9257912b3eb35138ab9d2e188e715c1b5', 'width': 1200}, 'variants': {}}]}
Need help improving performance
2
I learned about local LLMs last month, and since then I have been trying it, testing, and looking up as much as I can about it. Currently, I'm trying to find ways to improve the performance of the LLMs, so I can either run bigger models or use longer context lengths. I tried looking up throughout this subreddit for the answer, but nothing I found worked, so I wanted to ask using my specifications. Hardware: RTX 4090 16GB (laptop) 32GB DDR5 i9-13900HX (afaik not relevant, but I don't know much) Windows 11 (didn't try using WSL, but it's already installed for unrelated reasons) I'm using Oobabooga Text Generation Web UI. I can run most 13B models in GPTQ format at 6k context length perfectly well with ExLlamav2_HF. If I increase context length to 8k, it still works, but it's noticeably slower and there is a long pause before the response starts coming, and anything above 8k has too much of a delay for me to consider it practical. I have also tried running a GGUF version on llama.cpp, but the performance was noticeably worse, not sure if it is worse by default, but my impression is that I'm misconfiguring something. My questions are: What can I do to improve performance, be able to run bigger models, or use a longer context length without slowing down the LLM too much? Is GGUF better with those specs? If so, how can I configure it correctly? Should I be using different loaders? Should I use AWQ instead? Should I use another interface instead of TextGenWebUI, like Ollama or Koboldcpp? (haven't tried any others) Or have I reached the limit of my hardware? Also, if anyone wants to share model suggestions, I'm trying a bit of everything. Models for coding, models for rp, models for drawing (using A1111), or any niche models for extremely specific purposes.
2024-01-18T12:24:42
https://www.reddit.com/r/LocalLLaMA/comments/199ozjz/need_help_improving_performance/
rzdel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ozjz
false
null
t3_199ozjz
/r/LocalLLaMA/comments/199ozjz/need_help_improving_performance/
false
false
self
2
null
Need advice (chatgpt + functions substitute)
2
Hello, We currently have a custom gpt4 client that uses the api to send prompts to the chat gpt model and retrieve the responses, and we also have a bunch of functions defined, which the model can use to query our data. We are very happy with how it works, except for one major annoyance: the response time, which can take 2-5 minutes. Is there any way I can achieve similar (let's say min 80%) results using a local model? If so, can anyone please point me to roughly what I would need to set this up? (I don't need too many details, just the tech involved; I will do the documenting part after.) Would be great if it could run on a 4070 TI + Intel i9.
2024-01-18T12:15:14
https://www.reddit.com/r/LocalLLaMA/comments/199otos/need_advice_chatgpt_functions_subtitute/
pisskidney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199otos
false
null
t3_199otos
/r/LocalLLaMA/comments/199otos/need_advice_chatgpt_functions_subtitute/
false
false
self
2
null
Which are the best tools to create a Chat App that can use multiple models?
3
Hello, I want to make a chat app that can run multiple models, but I am not sure if I should start with LangChain + Ollama. Based on what I understand, LangChain can handle some complex tasks and Ollama can answer API calls that load different models. Unfortunately this combo is not perfect because the visualization of the chat must be custom - because of that I came across ChainLit, but it didn't provide built-in chat history that works with self-hosted models. Maybe I am doing something wrong or my research is not enough, but if anyone can help me find a good stack that can run a simple chat and be turned into a mini custom project with chat history/context, it would be great. Thanks!
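As a baseline before reaching for frameworks, here is a hedged sketch of the simplest multi-model chat loop against Ollama's /api/chat endpoint (endpoint shape per the Ollama docs at the time; model names are examples):

```python
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"  # default Ollama port; endpoint per its docs
history = []  # chat history/context kept by the app itself, not by Ollama

def chat(model, user_message):
    history.append({"role": "user", "content": user_message})
    resp = requests.post(
        OLLAMA_CHAT,
        json={"model": model, "messages": history, "stream": False},
    ).json()
    reply = resp["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("mistral", "Summarise what a vector store is."))
print(chat("llama2", "Now shorten that to one sentence."))  # different model, same history
```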
2024-01-18T12:08:22
https://www.reddit.com/r/LocalLLaMA/comments/199opgz/which_are_the_best_tools_to_create_a_chat_app/
jumperabg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199opgz
false
null
t3_199opgz
/r/LocalLLaMA/comments/199opgz/which_are_the_best_tools_to_create_a_chat_app/
false
false
self
3
null
Got an M2 Ultra 192GB RAM & 76 GPU cores, AMA
1
[removed]
2024-01-18T12:07:51
https://www.reddit.com/r/LocalLLaMA/comments/199op5n/got_an_m2_ultra_192gb_ram_76_gpu_cores_ama/
macstudiouser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199op5n
false
null
t3_199op5n
/r/LocalLLaMA/comments/199op5n/got_an_m2_ultra_192gb_ram_76_gpu_cores_ama/
false
false
self
1
null
AlphaGeometry: Deepmind's Open (re) Source Geometry Solver
28
Blog post: [https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/](https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/) Paper: [https://www.nature.com/articles/s41586-023-06747-5](https://www.nature.com/articles/s41586-023-06747-5) Github: [https://github.com/google-deepmind/alphageometry](https://github.com/google-deepmind/alphageometry)
2024-01-18T11:41:14
https://www.reddit.com/r/LocalLLaMA/comments/199o8wi/alphageometry_deepminds_open_re_source_geometry/
WaterdanceAC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199o8wi
false
null
t3_199o8wi
/r/LocalLLaMA/comments/199o8wi/alphageometry_deepminds_open_re_source_geometry/
false
false
self
28
{'enabled': False, 'images': [{'id': 'slPErzXJ7Yh1ntoQRGLc4NveOGZ7jTcimKx5mgeXK1I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=108&crop=smart&auto=webp&s=83c3c3b225d44498724ecc753db30c0229f7335d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=216&crop=smart&auto=webp&s=819f60241c121734f5132fea1fb5e959bfcac362', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=320&crop=smart&auto=webp&s=eabefa8361de88cf2a1f4d55d64d3a2a70e7417b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=640&crop=smart&auto=webp&s=d8a12af7317758048681cb0e7ff1c16d37a0b314', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=960&crop=smart&auto=webp&s=bd95c5a842126101bf381458040a81c2e7e7101e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=1080&crop=smart&auto=webp&s=db732778bdecc4a917f6b43c617254e923c369af', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?auto=webp&s=1d781489e7c61f07b2414016c872cc5a2a17dcc5', 'width': 1200}, 'variants': {}}]}
How to shorten the prompt to fit the context of the model?
1
I put my chat history into the prompt and instructed the model to analyse the talk. It works, but I need to manually limit the size of the history to match the context size. For example, the context size of the model can be set to 4096. But I can't just truncate the log to 4096 bytes, because the limit is in tokens, not characters. I use the following Python code: `llm = Llama(model_path = full_model_path, n_gpu_layers = gpu_layers, n_ctx = context_size)` `(...)` `output = llm(` `prompt,` `max_tokens = mt,` `stop = ["### Instruction:", "### Response:"],` `echo = False` `)` Any tips?
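One hedged sketch of a fix, reusing the `llm` object from the snippet above: count tokens with the model's own tokenizer and keep only as many recent ones as fit. The helper name and the 768-token reserve are made up for illustration:

```python
def fit_to_context(llm, history_text, context_size, reserved=768):
    # reserved = headroom for the instruction template plus max_tokens of generation
    budget = context_size - reserved
    tokens = llm.tokenize(history_text.encode("utf-8"), add_bos=False)
    if len(tokens) > budget:
        tokens = tokens[-budget:]  # keep only the most recent part of the chat
    return llm.detokenize(tokens).decode("utf-8", errors="ignore")

# chat_history is the raw log string from your application
trimmed = fit_to_context(llm, chat_history, context_size)
prompt = ("### Instruction:\nAnalyse the following conversation.\n\n"
          + trimmed + "\n\n### Response:\n")
```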
2024-01-18T10:25:25
https://www.reddit.com/r/LocalLLaMA/comments/199n2n5/how_to_shorten_the_prompt_to_fit_the_context_of/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199n2n5
false
null
t3_199n2n5
/r/LocalLLaMA/comments/199n2n5/how_to_shorten_the_prompt_to_fit_the_context_of/
false
false
self
1
null
How to Reduce the response time of a RAG chatbot?
7
I've developed a CPU-based retrieval augmented generation chatbot using LangChain, featuring the Zephyr7Bbeta 4-bit quantized model (Q4\_K\_M) with a size of 4.37GB. The chatbot currently takes approximately 130 seconds to generate a response using the retrieval QA chain on a quad-core CPU with 16GB of RAM. Given this, someone recommended optimizing the CPU performance using OpenVINO to decrease the response time. I'm unfamiliar with OpenVINO and want to know where to start. Additionally, I'd appreciate any alternative suggestions to reduce the chatbot's response time. The chatbot's configuration involves about 3000 chunks, each with a size of 1000 and an overlap of 20. The vector database utilizes FAISS-CPU, and the embedding model is thenlper/gte-large. The primary bottleneck seems to be the prompt evaluation time, with specific breakdowns as follows: * Load time: 3333.97 ms * Sample time: 85.98 ms / 135 runs (0.64 ms per token, 1570.08 tokens per second) * Prompt eval time: 61662.85 ms / 262 tokens (235.35 ms per token, 4.25 tokens per second) * Eval time: 45184.98 ms / 134 runs (337.20 ms per token, 2.97 tokens per second) * Total time: 107506.99 ms Any assistance with OpenVINO implementation or alternative approaches to reduce prompt evaluation time would be highly appreciated.
2024-01-18T09:33:50
https://www.reddit.com/r/LocalLLaMA/comments/199mbz6/how_to_reduce_the_response_time_of_a_rag_chatbot/
HV4U2001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199mbz6
false
null
t3_199mbz6
/r/LocalLLaMA/comments/199mbz6/how_to_reduce_the_response_time_of_a_rag_chatbot/
false
false
self
7
null
Embeddings and LocalAI
2
I am doing different projects in Flowise to start learning AI. So far none of them have been a problem, though it is also true that I have only used gguf models that are easy to install. The thing is that I want to start more advanced projects, and what is giving me the most trouble is the embeddings; I can't get them to work. Would someone be so kind as to explain to me how to install them and make them work? I am using linux without a graphical interface. Also, it would be useful if you know some awesome projects for learning to link them. Any help is appreciated!
2024-01-18T09:28:22
https://www.reddit.com/r/LocalLLaMA/comments/199m93r/embeddings_and_localai/
Porrei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199m93r
false
null
t3_199m93r
/r/LocalLLaMA/comments/199m93r/embeddings_and_localai/
false
false
self
2
null
how to use Openvino for cpu optimization ?
7
I've developed a CPU-based retrieval augmented generation chatbot using LangChain, featuring the Zephyr7Bbeta 4-bit quantized model (Q4\_K\_M) with a size of 4.37GB. The chatbot currently takes approximately 130 seconds to generate a response using the retrieval QA chain on a quad-core CPU with 16GB of RAM. Given this, someone recommended optimizing the CPU performance using OpenVINO to decrease the response time. I'm unfamiliar with OpenVINO and want to know where to start. Additionally, I'd appreciate any alternative suggestions to reduce the chatbot's response time. The chatbot's configuration involves about 3000 chunks, each with a size of 1000 and an overlap of 20. The vector database utilizes FAISS-CPU, and the embedding model is thenlper/gte-large. The primary bottleneck seems to be the prompt evaluation time, with specific breakdowns as follows: * Load time: 3333.97 ms * Sample time: 85.98 ms / 135 runs (0.64 ms per token, 1570.08 tokens per second) * Prompt eval time: 61662.85 ms / 262 tokens (235.35 ms per token, 4.25 tokens per second) * Eval time: 45184.98 ms / 134 runs (337.20 ms per token, 2.97 tokens per second) * Total time: 107506.99 ms Any assistance with OpenVINO implementation or alternative approaches to reduce prompt evaluation time would be highly appreciated.
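A hedged starting point for the OpenVINO route via optimum-intel is sketched below (class and argument names as documented for optimum-intel at the time, so double-check them; this only accelerates generation, not retrieval or embeddings):

```python
# pip install optimum[openvino]
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "HuggingFaceH4/zephyr-7b-beta"
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # convert to OpenVINO IR on load
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Answer briefly: what is retrieval augmented generation?",
                max_new_tokens=128)[0]["generated_text"])
# Newer optimum-intel releases also advertise weight-compression options; check the docs
# before assuming the full-precision model fits in 16GB of RAM.
```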
2024-01-18T09:15:42
https://www.reddit.com/r/LocalLLaMA/comments/199m2v0/how_to_use_openvino_for_cpu_optimization/
HV4U2001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199m2v0
false
null
t3_199m2v0
/r/LocalLLaMA/comments/199m2v0/how_to_use_openvino_for_cpu_optimization/
false
false
self
7
null
Analysis of non-english documents, suggestion to workflow and models
2
Hi, for the purposes of scientific research and quality management, our psychiatric hospital has conducted interviews with patients (semi-structured, open questions) over the years. This, plus our digital documentation, leaves us with a huge amount of data. In order to conduct analysis, we thought of using AI, and the not-so-thankful task of "do it" fell to me. Sadly I'm not from IT, but from psychology / nursing. Challenge: Due to data security laws everything has to run offline. I worked myself into the theory a bit, managed to set up LocalGPT with Mixtral-7B and PrivateGPT, but I got the feeling that I might be on the wrong path, so I wanted to ask you for your expertise. The task is the following: 1. Analysis is conducted once per document, documents are between 5 and 20 pages (txt, pdf) 2. Language is German, retranslating the documents to English might in theory be possible, at least for some questions 3. A standardized set of closed questions is posed to every document (example: "Is contact to social service mentioned?") 4. A standard set of open questions is posed ("What is the perceived role of nurses in the facility?") 5. Answers should be rather short (Yes/No or at max 2-3 sentences) 6. In later stages, a question like "Trends in all interviews" could be possible, but at the moment we aim for points 3-5 as first priority. Infrastructure is okayish, I could get the PC for our virtual environment therapy, which is outfitted with 128GB RAM and a GeForce 3090 Ti (24GB VRAM). My questions: 1. What are the challenges of this task that I should consider early on? 2. How would you solve the above-mentioned task in terms of software, models and pipelines? 3. Is there something you would recommend to learn or dive deeper into for such a task?
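As an illustration of point 3, the core loop could be as simple as the sketch below: the same fixed question set posed to every interview, with short answers collected per document. The model file, context size and prompt template are assumptions, not recommendations:

```python
from llama_cpp import Llama

# Placeholders: a German-capable instruct GGUF; very long interviews may still need
# chunking or a longer-context model than the 8192 tokens assumed here.
llm = Llama(model_path="german-capable-instruct-model.gguf", n_ctx=8192, n_gpu_layers=-1)

QUESTIONS = [
    "Is contact to social service mentioned? Answer only Yes or No.",
    "What is the perceived role of nurses in the facility? Answer in at most two sentences.",
]

def analyse(document_text):
    answers = {}
    for q in QUESTIONS:
        prompt = f"Interview transcript:\n{document_text}\n\nQuestion: {q}\nAnswer:"
        out = llm(prompt, max_tokens=100, temperature=0.0)
        answers[q] = out["choices"][0]["text"].strip()
    return answers
```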
2024-01-18T08:46:35
https://www.reddit.com/r/LocalLLaMA/comments/199lonl/analysis_of_nonenglish_documents_suggestion_to/
Ryselle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199lonl
false
null
t3_199lonl
/r/LocalLLaMA/comments/199lonl/analysis_of_nonenglish_documents_suggestion_to/
false
false
self
2
null
Coding model suggestions?
1
Right now I’m using paperspace since my laptop is pretty bad — most machine instances available are 16gb vRAM & 30gb RAM. Sometimes have access to a 45 gb RAM one. What’s the best model I can run for coding/data wrangling queries. I spent a couple hours with Microsoft copilot today and it was rough. Unless there’s nothing better out there specifically for coding rn? Thanks!
2024-01-18T07:22:44
https://www.reddit.com/r/LocalLLaMA/comments/199khd0/coding_model_suggestions/
GeeBrain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199khd0
false
null
t3_199khd0
/r/LocalLLaMA/comments/199khd0/coding_model_suggestions/
false
false
self
1
null
Can I use my old Tesla M40 24GB for inference?
4
I have an M40 24GB in my PC that I used a few years ago to train neural networks with PyTorch. The nice thing about the card is the big VRAM, but unfortunately it's really old. In the forum, I've seen many people being advised not to buy an M40. But what if you already have one? Is there any program or guide I can use to run an LLM on it?
2024-01-18T07:18:40
https://www.reddit.com/r/LocalLLaMA/comments/199kf86/can_i_use_my_old_tesla_m40_24gb_for_inference/
theyseemestackin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199kf86
false
null
t3_199kf86
/r/LocalLLaMA/comments/199kf86/can_i_use_my_old_tesla_m40_24gb_for_inference/
false
false
self
4
null
Euryale 1.4 thoughts
2
Has anyone tried out Euryale 1.4? If so, how does it compare to other 70B models and Euryale 1.3?
2024-01-18T07:01:04
https://www.reddit.com/r/LocalLLaMA/comments/199k5ri/euryale_14_thoughts/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199k5ri
false
null
t3_199k5ri
/r/LocalLLaMA/comments/199k5ri/euryale_14_thoughts/
false
false
self
2
null
OPENSOURCE FROM GOOGLE: AlphaGeometry: An Olympiad-level AI system for geometry. “AlphaGeometry solved 25 problems within the standard Olympiad time limit. For comparison, the previous SOTA solved 10, and the average human gold medalist solved 25.9.” Trained only on synthetic data.
1
2024-01-18T05:55:48
https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/?utm_source=twitter&utm_medium=social
AliVit24
deepmind.google
1970-01-01T00:00:00
0
{}
199j38v
false
null
t3_199j38v
/r/LocalLLaMA/comments/199j38v/opensource_from_google_alphageometry_an/
false
false
https://b.thumbs.redditm…MyyJgHWzGBBw.jpg
1
{'enabled': False, 'images': [{'id': 'slPErzXJ7Yh1ntoQRGLc4NveOGZ7jTcimKx5mgeXK1I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=108&crop=smart&auto=webp&s=83c3c3b225d44498724ecc753db30c0229f7335d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=216&crop=smart&auto=webp&s=819f60241c121734f5132fea1fb5e959bfcac362', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=320&crop=smart&auto=webp&s=eabefa8361de88cf2a1f4d55d64d3a2a70e7417b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=640&crop=smart&auto=webp&s=d8a12af7317758048681cb0e7ff1c16d37a0b314', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=960&crop=smart&auto=webp&s=bd95c5a842126101bf381458040a81c2e7e7101e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?width=1080&crop=smart&auto=webp&s=db732778bdecc4a917f6b43c617254e923c369af', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/UJPkr3evNcMxf2F_ED6KT4jQ2AHaBnHNE3NEAVD9OAU.jpg?auto=webp&s=1d781489e7c61f07b2414016c872cc5a2a17dcc5', 'width': 1200}, 'variants': {}}]}
Project Launch: GatoGPT - Local LLM inference & management server with built-in OpenAI API
28
Hi everyone, I'm excited to launch my project today. It's an LLM management server allowing you to configure and define models and then access them via an OpenAI-compatible API. A lot of information is contained in the project readme in the github repo: [https://github.com/elgatopanzon/gatogpt](https://github.com/elgatopanzon/gatogpt). The goal of the project is to abstract away the underlying models and inference backends and simply allow you to create "Models" with a friendly name e.g. `mistral-7b-instruct` or `orca-2-13b`, and use them via the API. This is done by defining Model Resources and Model Definitions with fully customisable parameters. Here's some of the most interesting features so far: * On-demand model loading and unloading * Chat Completions API has built-in LLaVA support, giving **ANY** model Vision capabilities! * Chat Completions API has support for Tool Calling **without** using a special model (it can be hit or miss, depends on the model) * Embeddings API with Embeddings model backend * Multiple Backends support, can be extended to add any Backend for inference of any kind * OpenAI Passthrough mode allowing you to configure an API Key and use OpenAI models mixed with your local LLMs from a single API endpoint! My long-term goal is to make using LLMs more accessible to everyone and in a hassle-free way. This is why my approach is not "you are talking to \*a model\*" and more "you are talking to \*any model you like, whenever you like\*"! I have a lot of plans for GatoGPT and for me this is just the very beginning. In the future I would like to implement Images generation support, Audio transcribing, Audio generation, and Files/Documents database management. GatoGPT is intended to be the core server for my upcoming experiment project that will be named **GatoGPT Agents** and will aim to be a comprehensive RAG solution that anyone can use. That's my next step in my LLM journey! Note: The project is currently a WIP, and I have a provided a Dockerfile and docker-compose.yml file for building and running the project. Debug logging is enabled and there likely will be some bugs you encounter. Any feedback and testing would be very much appreciated!
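Since the API is OpenAI-compatible, client code should presumably look like the standard OpenAI Python client pointed at the local server; the base URL, port and model name in this sketch are assumptions rather than values from the project's docs:

```python
from openai import OpenAI  # openai>=1.0 style client

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # port is a guess

resp = client.chat.completions.create(
    model="mistral-7b-instruct",  # a friendly model name defined in the server config
    messages=[{"role": "user", "content": "Give me a one-line summary of mixture-of-experts."}],
)
print(resp.choices[0].message.content)
```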
2024-01-18T05:40:58
https://www.reddit.com/r/LocalLLaMA/comments/199iu4s/project_launch_gatogpt_local_llm_inference/
ElGatoPanzon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199iu4s
false
null
t3_199iu4s
/r/LocalLLaMA/comments/199iu4s/project_launch_gatogpt_local_llm_inference/
false
false
self
28
{'enabled': False, 'images': [{'id': '6DDYT_-OswVPD-3quTc1cG63AMcDhQfcHvuOm1SYtcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?width=108&crop=smart&auto=webp&s=11a53e39909a64ab246eaa331469a315f8768521', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?width=216&crop=smart&auto=webp&s=f96767dfae1ecdc2d24f316fc06dafd5d31d5797', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?width=320&crop=smart&auto=webp&s=4482c860d2de11afe5298ed998811bd7c62a9bbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?width=640&crop=smart&auto=webp&s=890ef5980a6915ec5d167412edb478012842cf42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?width=960&crop=smart&auto=webp&s=3806bddf39d2a797ea2a509ffcf49a95456a8e51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?width=1080&crop=smart&auto=webp&s=3c6cbf0b8f89ddaedbede6bd82c0cadc9ce8a24d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e1bI3XJ9X8f1jlPqkG-f7JNrokcFE6W0pfrDofr4EX4.jpg?auto=webp&s=028d1adea6baa9125db5d099aba76f7456d2d199', 'width': 1200}, 'variants': {}}]}
[D] How can I make LLM plot graphs/figures on my database with RAG?
1
[removed]
2024-01-18T05:21:10
https://www.reddit.com/r/LocalLLaMA/comments/199ihbz/d_how_can_i_make_llm_plot_graphsfigures_on_my/
HappyDataGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ihbz
false
null
t3_199ihbz
/r/LocalLLaMA/comments/199ihbz/d_how_can_i_make_llm_plot_graphsfigures_on_my/
false
false
self
1
null
How can I make LLM plot graphs/figures on my database with RAG?
1
[removed]
2024-01-18T05:17:33
https://www.reddit.com/r/LocalLLaMA/comments/199ieyc/how_can_i_make_llm_plot_graphsfigures_on_my/
HappyDataGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ieyc
false
null
t3_199ieyc
/r/LocalLLaMA/comments/199ieyc/how_can_i_make_llm_plot_graphsfigures_on_my/
false
false
self
1
null
Mac Config vs Dual Machines
4
Right now I mostly use cloud services - just Google Colab Pro, or ChatGPT, or whatever. It's fine for most of what I do, although having to spin up a Colab GPU can be a pain sometimes, and obviously lacks privacy. So I lurk here a bit. I have to buy a new Mac for other reasons, and I'm a bit torn if I should consider AI use, or just configure it for visuals (which will be its main use). A simple M2 Pro 16GB,2TB would likely be fine for photos, video editing, etc. I can't think of anything where that will be inadequate, although 32GB would be more future proof. Until we get to LLMs, of course, where there's no limit to the amount of memory that would be useful. Suddenly I find myself looking at M2 Max 64GB or 128GB configurations. It's only maybe $1k more or so, but then I start to wonder if that's a waste. I'm not doing a lot of local LLM stuff at the moment - just curious to experiment. So is it worth it going for bigger RAM configuration, or just get something cheap and figure I'll put that $1k+ toward a different local machine in the future. But it's hard to decide, because $1k isn't going to get me a 4090 or a machine to run it, or even pay my power bill most likely. The Mac seems surprisingly power efficient, and maybe with MLX and others it'll become more feasible? TL;dr If I'm planning to get a 16GB or 32GB Mac, is it worth it to pay extra for 64GB or 128GB with the idea of running LLMs - or am I better off using the cloud and seeing what the next year has to offer in local hardware?
2024-01-18T05:13:01
https://www.reddit.com/r/LocalLLaMA/comments/199ic1s/mac_config_vs_dual_machines/
justgetoffmylawn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ic1s
false
null
t3_199ic1s
/r/LocalLLaMA/comments/199ic1s/mac_config_vs_dual_machines/
false
false
self
4
null
Be careful about the new gguf quants.
117
A week ago, a new gguf quant method arrived and this is what we're using now. [https://github.com/ggerganov/llama.cpp/pull/4930](https://github.com/ggerganov/llama.cpp/pull/4930) This method uses a calibration dataset to improve the perplexity, but the problem with this method is overfitting to the dataset, and it could make the model worse overall (compared to the old gguf quants). Source: [https://github.com/ggerganov/llama.cpp/discussions/5006](https://github.com/ggerganov/llama.cpp/discussions/5006) Supposedly, the suggestion to fix this is to use a calibration dataset composed of random tokens instead.
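For anyone who wants to experiment with that suggestion, a rough sketch of building a random-token calibration file with llama-cpp-python is below (whether this actually avoids the overfitting concern is exactly the open question discussed above):

```python
import random
from llama_cpp import Llama

llm = Llama(model_path="base-model.gguf", vocab_only=True)  # tokenizer only; placeholder path

n_tokens = 50_000
ids = [random.randrange(llm.n_vocab()) for _ in range(n_tokens)]
text = llm.detokenize(ids).decode("utf-8", errors="ignore")  # some special tokens may drop out

with open("random-calibration.txt", "w") as f:
    f.write(text)  # feed this file to llama.cpp's imatrix tooling as calibration data
```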
2024-01-18T05:11:08
https://www.reddit.com/r/LocalLLaMA/comments/199iatn/be_careful_about_the_new_gguf_quants/
Wonderful_Ad_5134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199iatn
false
null
t3_199iatn
/r/LocalLLaMA/comments/199iatn/be_careful_about_the_new_gguf_quants/
false
false
self
117
{'enabled': False, 'images': [{'id': 'tmMr0QJqdFXxRzPMUtmSKyHay-nmHzq0xBFgo1B-7UQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?width=108&crop=smart&auto=webp&s=9b8f5ac7f03cb0425c725fdf6222d3ff9d7bd2b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?width=216&crop=smart&auto=webp&s=4c5c860dae676f7dc79d143a3aee33da28ce5033', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?width=320&crop=smart&auto=webp&s=eff5d3e788aacc04150b9e1594546eb15e4182b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?width=640&crop=smart&auto=webp&s=d2557ce649a01d5ed71cdf2cc9e8cc4138de040a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?width=960&crop=smart&auto=webp&s=74ea05f71d689918cc7594c796f49ffc2bb6f369', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?width=1080&crop=smart&auto=webp&s=9b8a6eea9a2ba8fb43504975b7c5a5d9168dbcfd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iNdQL1GLMgw18yjWxoU1oQIVsZM3HTK0S7v6qMcVlNU.jpg?auto=webp&s=5598b664e4dbec2ca573a1da0eeee700198454ad', 'width': 1200}, 'variants': {}}]}
stabilityai/stable-code-3b · 3B coding model
1
2024-01-18T04:37:28
https://huggingface.co/stabilityai/stable-code-3b
StewArtMedia_Nick
huggingface.co
1970-01-01T00:00:00
0
{}
199hopt
false
null
t3_199hopt
/r/LocalLLaMA/comments/199hopt/stabilityaistablecode3b_3b_coding_model/
false
false
https://b.thumbs.redditm…lzu59eKw0Ryk.jpg
1
{'enabled': False, 'images': [{'id': '4cdg6FSIPxzB4OsuBs2TrlkcDoe9d62xstpoBNX67VY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?width=108&crop=smart&auto=webp&s=10221bfc76d0fc4ed439888515bc353fe3add9d7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?width=216&crop=smart&auto=webp&s=1f394381198c0537cc616d330b650338a8cf9e7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?width=320&crop=smart&auto=webp&s=687d5f6aed646a28d2472d5e450c1107f0cade4e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?width=640&crop=smart&auto=webp&s=fd843a97e8e74250fbcc550e11bb220457f019e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?width=960&crop=smart&auto=webp&s=9e91455c508b7c7bce0e1d6058c858222910061c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?width=1080&crop=smart&auto=webp&s=0d2eecea3f060912f0e53c226bbcbcbbbcf34e13', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GrXDltBFgoromTI6Zroe3XgL1HbMnHZs6Nnh88NVD0M.jpg?auto=webp&s=cd8a35446f0e4167740b815170167b63866afa93', 'width': 1200}, 'variants': {}}]}
GPU Usage Increase Over Time with Llama-2 Model
1
[removed]
2024-01-18T04:28:27
https://www.reddit.com/r/LocalLLaMA/comments/199himo/gpu_usage_increase_over_time_with_llama2_model/
scientific_yogi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199himo
false
null
t3_199himo
/r/LocalLLaMA/comments/199himo/gpu_usage_increase_over_time_with_llama2_model/
false
false
self
1
null
Saofiq WinterGoddess 1.4x 32k mysterious quant on HF
3
I spotted this GGUF Q4\_K\_S quant of Saofiq's (Sao10K) WinterGoddess 70b model with... Limarpv3 and... 32k ctx (linear rope 8, apparently) on HF, and despite not having the hardware to run it smoothly (36GB of VRAM "only"), I downloaded it to test it. [https://huggingface.co/mishima/WinterGoddess-1.4x-limarpv3-70B-L2-32k.GGUF](https://huggingface.co/mishima/WinterGoddess-1.4x-limarpv3-70B-L2-32k.GGUF) I'm running a Hellaswag run, and after 300 steps it hovers between 84 & 86 despite the linear rope 8... Then I'll run its perplexity to see, and finally test-ride it after testing Aurelian 70b 32k 0.5. [https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K\_GGUF](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K_GGUF) I invite you 48GB VRAM Ladies & Gents to test it as well, especially at long context!
2024-01-18T04:26:08
https://www.reddit.com/r/LocalLLaMA/comments/199hh4u/saofiq_wintergoddess_14x_32k_mysterious_quant_on/
Nexesenex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199hh4u
false
null
t3_199hh4u
/r/LocalLLaMA/comments/199hh4u/saofiq_wintergoddess_14x_32k_mysterious_quant_on/
false
false
self
3
{'enabled': False, 'images': [{'id': 'RzQdhZYQ8vToDM7vG720RojxXwECmoBjD3QsunIwkIY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?width=108&crop=smart&auto=webp&s=eb8e190a78ad7556a2b83abf30d36ab2c8f11fd9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?width=216&crop=smart&auto=webp&s=fc1d19cd88f8bf622017fcd0f33df7af8af7dc71', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?width=320&crop=smart&auto=webp&s=85fa81eec3bf1c4856260be5deac9c358437c0cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?width=640&crop=smart&auto=webp&s=4273ac6438c8b6a2cf038657f1e2788b74e9cfa6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?width=960&crop=smart&auto=webp&s=62de62d727194169186ec609042ce2f5748a2921', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?width=1080&crop=smart&auto=webp&s=17779b1b02c026358e9c015d11af3d1c1bf6f6b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AsE3t7Gum-nokXqM6Al8ogwP8AI72mVKOGllDKwZD9s.jpg?auto=webp&s=14754c5e687e6b3998ef9d818fe503ac076f81e6', 'width': 1200}, 'variants': {}}]}
Open-Source AI Is Uniquely Dangerous | I don't think this guy intended to be funny, but this is funny
98
2024-01-18T04:12:57
https://spectrum.ieee.org/open-source-ai-2666932122
ttkciar
spectrum.ieee.org
1970-01-01T00:00:00
0
{}
199h8bh
false
null
t3_199h8bh
/r/LocalLLaMA/comments/199h8bh/opensource_ai_is_uniquely_dangerous_i_dont_think/
false
false
https://b.thumbs.redditm…X1iZg9FlT75w.jpg
98
{'enabled': False, 'images': [{'id': 'tFv5fKc3Pbz1SDdkBA8ukmnoXFdalBMfSICf00KZ8_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?width=108&crop=smart&auto=webp&s=ff8c625f47e228be21f5197e28f36cab3141aabb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?width=216&crop=smart&auto=webp&s=44adbe3fb3dd0251198155962af3c2f715e2d7be', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?width=320&crop=smart&auto=webp&s=5556a08f4d10fa8f959257ee4af699d996e1f68c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?width=640&crop=smart&auto=webp&s=5de9f5afb3ad6b23fd28d32cc4749cb10dc2aab1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?width=960&crop=smart&auto=webp&s=1e30ced2bb0762b8349fd7f71afb243ef8a3ea32', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?width=1080&crop=smart&auto=webp&s=cbbaede82396ccb02f7619ec0cbfc806488e9a4d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tzFMlvtr2r41BydxfKr585T9DjnsfhVhqY4Ket8o0pc.jpg?auto=webp&s=b95c5613eb9090a49bc2119f7cd396566cdb2c6c', 'width': 1200}, 'variants': {}}]}
[2401.08671] DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference
13
2024-01-18T03:24:43
https://arxiv.org/abs/2401.08671
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
199ga8d
false
null
t3_199ga8d
/r/LocalLLaMA/comments/199ga8d/240108671_deepspeedfastgen_highthroughput_text/
false
false
default
13
null
Triton vs TGI vs vLLM vs others
6
I am hoping to run various LLMs of different sizes (7b-70b) and am curious about the benefits of each of these methods of hosting. I will be using EC2 instances to host, if that makes a difference. I don't have a good idea of which is the fastest or what other advantages these might have if they are used as a server. I couldn't find any previous posts asking this exact question, but please link them below if they exist, as I'd love to read through them!
2024-01-18T01:54:58
https://www.reddit.com/r/LocalLLaMA/comments/199efri/triton_vs_tgi_vs_vllm_vs_others/
cinnamonKnight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199efri
false
null
t3_199efri
/r/LocalLLaMA/comments/199efri/triton_vs_tgi_vs_vllm_vs_others/
false
false
self
6
null
When building a GPU cluster, how relevant is the choice of CPU? Pre-training + finetuning.
1
[removed]
2024-01-18T01:52:17
https://www.reddit.com/r/LocalLLaMA/comments/199ednr/when_building_a_gpu_cluster_how_relevant_is_the/
DegenerateDegenning
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199ednr
false
null
t3_199ednr
/r/LocalLLaMA/comments/199ednr/when_building_a_gpu_cluster_how_relevant_is_the/
false
false
self
1
null
Another open-source implementation of Flamingo
11
InfiMM: Advancing Multimodal Understanding from Flamingo's Legacy through Diverse LLM Integration [Infi-MM/infimm-zephyr · Hugging Face](https://huggingface.co/Infi-MM/infimm-zephyr) This claims to be the best open-sourced model on MMMU Testing split.
2024-01-18T01:45:06
https://www.reddit.com/r/LocalLLaMA/comments/199e886/another_opensource_implementation_of_flamingo/
Inevitable-Aha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199e886
false
null
t3_199e886
/r/LocalLLaMA/comments/199e886/another_opensource_implementation_of_flamingo/
false
false
self
11
{'enabled': False, 'images': [{'id': 'AJqKGlwqkUVXdixfwZtgkPKF56cCUudSAoP6KcZV0hQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?width=108&crop=smart&auto=webp&s=027f868c2efed594bbe44349b6d2d94595d7e038', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?width=216&crop=smart&auto=webp&s=9b703c4e63f0f8db0a30d56edf994344888f38ba', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?width=320&crop=smart&auto=webp&s=f082c8623bd0f3cb53793bd16884c96a7174b20a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?width=640&crop=smart&auto=webp&s=06a5ae50f996830d32381721380e9f3a5ad658db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?width=960&crop=smart&auto=webp&s=9af0c5ed5d9d2d6d74689b3a5a461b46586c0ec2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?width=1080&crop=smart&auto=webp&s=aac7b1247439bd646fa77e4f9f70185a940e4ccb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dA702-9X13Cwpi8efPPUD336Hb7TH1C6izwyA_TNuiQ.jpg?auto=webp&s=0c36d63160017e59f437cc4aacfaf16c24c40e4b', 'width': 1200}, 'variants': {}}]}
Full finetuning using 4090s?
1
[removed]
2024-01-18T01:43:22
https://www.reddit.com/r/LocalLLaMA/comments/199e6x3/full_finetuning_using_4090s/
lets_finetune
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199e6x3
false
null
t3_199e6x3
/r/LocalLLaMA/comments/199e6x3/full_finetuning_using_4090s/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QWz9XjZzVDZ7lTXtt-qTVs6nzNMeXdirmv4o-K5MEeQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?width=108&crop=smart&auto=webp&s=463e40a329bd574bd4cf37600e0d229ce92cc44b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?width=216&crop=smart&auto=webp&s=18fbed5fad9bd6a80bfb064ab5c0d5c79cf5f80d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?width=320&crop=smart&auto=webp&s=9cbcf82516f5d6839bbb8eaa15a1eb3fb21d7a62', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?width=640&crop=smart&auto=webp&s=b7e6facdb7ba63a91d72c6fbed42468594db842b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?width=960&crop=smart&auto=webp&s=b1fe2ab8e158108c754345d8c20ccb2c6e55a87f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?width=1080&crop=smart&auto=webp&s=9ba43fe6ccf00edc60b223fcee25b772316e6d8c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lknZzU4Mr_jpnRavxwuKSiEFx7dK6WQcsU0PBbB0G-w.jpg?auto=webp&s=d85f990e8c3eff15a8b098b75ddfd53116419f76', 'width': 1200}, 'variants': {}}]}
Trying a soulver / numi UI with a local LLM
44
2024-01-18T01:40:11
https://v.redd.it/fm4a087ur3dc1
sleeper-2
v.redd.it
1970-01-01T00:00:00
0
{}
199e4g8
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/fm4a087ur3dc1/DASHPlaylist.mpd?a=1708134024%2CMWFjYjc5YjJiYjBlOTI1OGZmYmNhMTc0ZjEzNzZiNTUwZjZkNWM1OGZkM2JhYjViOTY1ZmEyNjJmYTUzMmI1Mg%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/fm4a087ur3dc1/DASH_360.mp4?source=fallback', 'has_audio': False, 'height': 332, 'hls_url': 'https://v.redd.it/fm4a087ur3dc1/HLSPlaylist.m3u8?a=1708134024%2CZWYwY2JlMTM0NjYwMzdiZmU3ODE2YjVlOTdlODBiZWQzZjJiMmQzMzgzNDM2N2QwNjNlZTEyNjVmYzMyMzFiOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fm4a087ur3dc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}}
t3_199e4g8
/r/LocalLLaMA/comments/199e4g8/trying_a_soulver_numi_ui_with_a_local_llm/
false
false
https://external-preview…270d0585abd21087
44
{'enabled': False, 'images': [{'id': 'dWFndnE4eTFzM2RjMZdaBQGD2bC6WMNL_oWrrgyHHirZCcvP7TehDptiIaNb', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/dWFndnE4eTFzM2RjMZdaBQGD2bC6WMNL_oWrrgyHHirZCcvP7TehDptiIaNb.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ae5ce7f2c88ce93f953f7a7318392f22dbfae24', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/dWFndnE4eTFzM2RjMZdaBQGD2bC6WMNL_oWrrgyHHirZCcvP7TehDptiIaNb.png?width=216&crop=smart&format=pjpg&auto=webp&s=8a20282ca4d87d859696a5d62007d49c4a084600', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/dWFndnE4eTFzM2RjMZdaBQGD2bC6WMNL_oWrrgyHHirZCcvP7TehDptiIaNb.png?width=320&crop=smart&format=pjpg&auto=webp&s=83f63aa908c8924116ff1285dc97cd204db34df3', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/dWFndnE4eTFzM2RjMZdaBQGD2bC6WMNL_oWrrgyHHirZCcvP7TehDptiIaNb.png?width=640&crop=smart&format=pjpg&auto=webp&s=7b237d92db6b6eab2f7cb782ee3ff7e7b1da8966', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/dWFndnE4eTFzM2RjMZdaBQGD2bC6WMNL_oWrrgyHHirZCcvP7TehDptiIaNb.png?format=pjpg&auto=webp&s=2b35e3c58cc4f1fb7c7a6f4b17a0650b79bb1012', 'width': 870}, 'variants': {}}]}
Is there any model that's even wilder than mythomax? (Any size)
57
I've played with about 20 plus different models people have recommended as being completely uncensored and absolutely wild, but in reality I found that the usual suspects tend to simply be repeating information that's freely available on the internet. What I'm talking about specifically are LLMs that will be breaching the usual ethical standards, for example requesting it to speak like a schizophrenic person, which is usually forbidden because it's considered ableist, or even say outrageous things that people shouldn't say, such as extraordinarily vulgar, offensive, racist, improper, and generally not safe for life language. I discovered this while playing with the RP element of these LLMs, but then I found it more interesting to extend it to other topics such as philosophy, or politics, or even things like so-called 'life hacks'. It seems, and PLEASE correct me if I'm wrong, because I could very well be wrong - that people consider a model to 'not have guardrails' if it's able to give you the recipe for meth, for example, but that's technically not a jailbreak because this is information you learn in chemistry class, when you take your first year at uni. I'm talking about something genuinely offensive. For example, it could be very useful for creative writing, if you wanted to describe certain scenes from Game of Thrones, American psycho, fight club, American History x, and so on , which can be extraordinarily violent, offensive, and touch on many very sensitive topics, such as... well, I won't even say it here, I think you've seen those yourself. Thank you.
2024-01-18T00:34:03
https://www.reddit.com/r/LocalLLaMA/comments/199cpp6/is_there_any_model_thats_even_wilder_than/
RadioSailor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199cpp6
false
null
t3_199cpp6
/r/LocalLLaMA/comments/199cpp6/is_there_any_model_thats_even_wilder_than/
false
false
nsfw
57
null
Vllm vs aphrodite engine and other alternatives
4
Hello sages of my favorite subreddit :D I am working on a project that needs as high a throughput as possible. Individual response times don't matter, but I need to process around 2 million prompts as efficiently as possible (part of a doctoral thesis). Do any of you know which backends allow the highest throughput with batching? Also, are there other things to be aware of? I remember reading that full models allow higher throughput than quants, but I don't know if that's true. Anyhow, I would love to hear your opinions on the matter :)
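For reference, this is the kind of offline batched run I have in mind - a minimal sketch using vLLM's Python API, where the model id and sampling settings are just placeholders, not recommendations:

```python
# Minimal sketch of offline batched inference with vLLM.
# The model id and sampling settings below are placeholders.
from vllm import LLM, SamplingParams

prompts = ["Prompt 1 ...", "Prompt 2 ..."]        # in practice, feed the ~2M prompts in chunks
sampling = SamplingParams(temperature=0.0, max_tokens=256)

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # hypothetical model choice
outputs = llm.generate(prompts, sampling)               # vLLM batches and schedules these internally

for out in outputs:
    print(out.outputs[0].text)
```

Whether another engine (Aphrodite, TGI, etc.) beats this on throughput for my workload is exactly what I'm trying to find out.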
2024-01-17T23:51:45
https://www.reddit.com/r/LocalLLaMA/comments/199bq25/vllm_vs_aphrodite_engine_and_other_alternatives/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199bq25
false
null
t3_199bq25
/r/LocalLLaMA/comments/199bq25/vllm_vs_aphrodite_engine_and_other_alternatives/
false
false
self
4
null
Mixtral 8x7B instruct v0.1 available now
1
[removed]
2024-01-17T22:56:40
[deleted]
1970-01-01T00:00:00
0
{}
199ag02
false
null
t3_199ag02
/r/LocalLLaMA/comments/199ag02/mixtral_8x7b_instruct_v01_available_now/
false
false
default
1
null
TIL Some tokens can be multiple words
102
This is just a small post informing the uninformed that tokens may not always be single symbols, words, or parts of words - as I realized only today. They can also be multiple words, or words and symbols combined. I think that's good to know if you build anything related to tokens. (You probably already know that a token usually contains a leading space, so spaces are mostly not tokens themselves. That's the reason you should never end your prompt with a trailing space. But that's not new, so let's move on.) Some tokens I encountered in Mistral, to give you an idea:

- " and abs"
- " here together"
- "s nothing"
- "ives in"
- " into the"
- " towards her"
- " arms around"
- " and passion"

But also things like:

- "ately\*"
- "\*\\n"
- ".\\n"
- " \*bl"

Alright, that's it.
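If you want to check this yourself, here's a quick sketch using the Hugging Face tokenizer (this assumes the `transformers` package and access to the Mistral tokenizer; any tokenizer works the same way, and whether a given phrase merges into one token depends entirely on the vocabulary):

```python
# Sketch: inspect how a string splits into tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
ids = tok.encode("He put his arms around her.", add_special_tokens=False)
print(tok.convert_ids_to_tokens(ids))  # a leading '▁' marks a token that starts with a space
```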
2024-01-17T22:47:37
https://www.reddit.com/r/LocalLLaMA/comments/199a8cm/til_some_tokens_can_be_multiple_words/
psi-love
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
199a8cm
false
null
t3_199a8cm
/r/LocalLLaMA/comments/199a8cm/til_some_tokens_can_be_multiple_words/
false
false
self
102
null
"VeRA makes LoRA ~10x more parameter efficient while retaining the same performance & also works for vision!"
137
2024-01-17T21:17:13
https://x.com/y_m_asano/status/1747543521035186641?s=20
phoneixAdi
x.com
1970-01-01T00:00:00
0
{}
19980x8
false
null
t3_19980x8
/r/LocalLLaMA/comments/19980x8/vera_makes_lora_10x_more_parameter_efficient/
false
false
default
137
null
Best Memory Extension for Oobabooga
11
Currently looking through extensions for the oobabooga web UI, and there seem to be two different options for long-term memory. Does anyone have experience using either, and is one better or does it do something different? [https://github.com/jason-brian-anderson/long\_term\_memory\_with\_qdrant](https://github.com/jason-brian-anderson/long_term_memory_with_qdrant) [https://github.com/wawawario2/long\_term\_memory](https://github.com/wawawario2/long_term_memory)
2024-01-17T21:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1997omm/best_memory_extension_for_oogabooga/
rubychoco99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1997omm
false
null
t3_1997omm
/r/LocalLLaMA/comments/1997omm/best_memory_extension_for_oogabooga/
false
false
self
11
{'enabled': False, 'images': [{'id': 'Gub58iA-wsMnV9Yw0iqw0j_vGomc-duLhAi5KYc-gEA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?width=108&crop=smart&auto=webp&s=b349d130afcf7432670ebaeb554b27c49e7eb4c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?width=216&crop=smart&auto=webp&s=1f951b81948566724cd7689809597b7979c2f49f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?width=320&crop=smart&auto=webp&s=7455a7ea02e55d102f5b6cc61ebd8fa43d8ab267', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?width=640&crop=smart&auto=webp&s=9bfbb82a966a0ca6aea396e8b08decf66a41ae5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?width=960&crop=smart&auto=webp&s=eabd9583f4811ea9f894529060a84d6f543480a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?width=1080&crop=smart&auto=webp&s=d875206964b8a55204ecee94928c607f31561c7d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8-9FhQYNrzTXLGB7L9HyG_gfP5hsHE_p_MADoUCiK4o.jpg?auto=webp&s=61dd971ae2faf988b6148748ec2e87f1ed5c77f8', 'width': 1200}, 'variants': {}}]}
Ollama Shell Helper (osh) : English to Unix-like Shell Commands translation using Local LLMs with Ollama
10
2024-01-17T20:45:11
https://github.com/charyan/osh
UdeaAenigmatica
github.com
1970-01-01T00:00:00
0
{}
19977vo
false
null
t3_19977vo
/r/LocalLLaMA/comments/19977vo/ollama_shell_helper_osh_english_to_unixlike_shell/
false
false
https://b.thumbs.redditm…FsP6d3PfJfHw.jpg
10
{'enabled': False, 'images': [{'id': 'QLn6gRoQjU7I82ZFGEyCV1sDGKaB2nJq0w0LmIYZMfs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?width=108&crop=smart&auto=webp&s=2cdbc31dd983ee6a0724b77530d4492d21e2af90', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?width=216&crop=smart&auto=webp&s=d0964cbe3b15919e1095c9b6d4f4977504ec1b23', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?width=320&crop=smart&auto=webp&s=7d945898b707fb091b322f774200940acd2d834f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?width=640&crop=smart&auto=webp&s=928588743f9fd99c91b780a51f405b1da6ae457f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?width=960&crop=smart&auto=webp&s=019a8f943ed8dbf22cccaf3b4ad93ab79ee09429', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?width=1080&crop=smart&auto=webp&s=cbf988eef12cb3dab71da1a65209a01a103eb84b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RrS02qWQRzInm3v5yy3tprdvwpaJcEK9vaZ9vRFQHHg.jpg?auto=webp&s=7a9c0966a3652933fbaa43251e28397f17f9c1c3', 'width': 1200}, 'variants': {}}]}
Unpopular Opinion: All these small open-source foundational models coming out are not moving us forward. To truly rival closed-source, we need models with 100+ billion parameters.
1
There have been so many small open-source models released over the past few weeks (TinyLlama, Phi-2, StableCode, DeciCoder-6B), but all of these perform quite poorly when compared to SOTA models. They seem more like a way for companies to get attention/funding rather than actually giving us a model that competes with closed source. Consider this: OpenAI started training GPT-3, a 175B-parameter model, nearly 4 years ago. But here we are in 2024, getting primarily tiny models. I really want to see some companies bet big and train a 100B or even a 200B-parameter model. Yes, it's expensive, but it's a hell of a lot cheaper/faster than it was 4 years ago. With Mamba and MoE, training a 100B model is even more feasible. For the GPU-poor (which is 99% of us when talking about 100B models), if we actually get a SOTA 100B model, I think the community will come up with all sorts of creative ways to get it running on consumer hardware (speculative decoding, leaving less-active neurons on disk, etc.). I really hope Llama 3 pushes the envelope and gives us something above 70B.
2024-01-17T20:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1996kos/unpopular_opinion_all_these_small_opensource/
DangerousBenefit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1996kos
false
null
t3_1996kos
/r/LocalLLaMA/comments/1996kos/unpopular_opinion_all_these_small_opensource/
false
false
self
1
null
Struggling to understand the format expected for fine tuning mistral
5
I'm trying to reformat the dataset I previously used to fine-tune GPT-3.5 so it works with Mistral. The documentation seems barely existent. My input and output should both be JSON. How would I correctly format this example?

Input: { Question: "what is the weather in London tomorrow?", current_date: "Jan 01,2024" }

Output: { City: "London", date: "Jan 02,2024" }

Should it be a two-column CSV with column headers and everything escaped? On my first attempt, the responses I get are just seemingly random characters.
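For what it's worth, the approach I'm considering is wrapping each pair in Mistral's instruct template and storing one example per line - but this is a guess on my part rather than something from official docs, so please correct me:

```python
# Rough sketch of one common formatting approach (my assumption, not official Mistral docs):
# wrap each example in the [INST] ... [/INST] instruct template and keep the JSON as plain text.
import json

example = {
    "input":  {"Question": "what is the weather in London tomorrow?", "current_date": "Jan 01,2024"},
    "output": {"City": "London", "date": "Jan 02,2024"},
}

text = (
    "<s>[INST] " + json.dumps(example["input"]) + " [/INST] "
    + json.dumps(example["output"]) + "</s>"
)
print(text)  # one such string per training example, e.g. one JSON object per line in a .jsonl file
```

Whether the training script adds the <s>/</s> tokens itself depends on the trainer, so that part may need to be dropped.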
2024-01-17T20:04:22
https://www.reddit.com/r/LocalLLaMA/comments/1996715/struggling_to_understand_the_format_expected_for/
MiniSNES
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1996715
false
null
t3_1996715
/r/LocalLLaMA/comments/1996715/struggling_to_understand_the_format_expected_for/
false
false
self
5
null
My thoughts on 2024 AI capabilities. What are you looking forward to this year?
29
[https://twitter.com/jtokash/status/1747694227600138616](https://twitter.com/jtokash/status/1747694227600138616)

For as long as I can remember in tech, Gates' law has been the standard: "Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years." That isn't going to be true in 2024. Would love your feedback, challenges, and thoughts on this forecast.

Baseline AI improvements coming in 2024:

1. In the past, the more specialized the model, the more powerful it was. LLMs, GPTs and Transformers broke that in late 2022 and 2023. The more data, the more generalized, the better - and now the generalized models outperform specialized models by a huge margin. In 2024 we will see a model that has not only more training data, but better-organized training data. Without any other improvement to the fundamentals of the model, this will bring stronger reasoning.

2. Reinforcing #1, training data does not have to be what we see today on the internet. It can be subtly manipulated existing data or known-good manufactured data. Some of this can be created by humans, but, increasingly, this data can be made by the models.

3. The models are going to become even more generalized in 2024. Truly multi-modal. Robotics took huge leaps forward in 2023 because of things learned via LLMs and Transformers. That will continue in 2024, but all of that will also be fed back into the LLMs, giving spatial reasoning to our GPT-based applications.

4. The GPT App Store is one way for OpenAI to get a lot more training data. BUT it is also its way to outsource the creation and testing of sophisticated new system messages that can help the LLM adapt to varying problems and inputs.

5. Most programming languages and frameworks come with a 'standard library'. Sometimes it is hard to even really think about a programming language without also thinking about its standard library. OpenAI's LLMs (and I'm sure others) will start to accumulate tools that the LLM will just assume are there. Right now, code interpreter and web search are two of these, but this 'standard library' will become much larger in 2024. Likely, the LLM will be fine-tuned to depend on/rely on these to produce more accurate results consistently and quickly.

6. The way LLMs "think" is to generate text. We'll see in 2024 that the LLMs will generate a bunch of text behind the scenes when creating responses. Maybe we'll be able to access that info during debugging, or maybe it will simply be proprietary and hidden. But it will lead to much better results in APIs, in ChatGPT, in applications. This is SmarterGPT, step by step, repeat the question/problem in detail before solving, etc., but happening ahead of, in parallel to, and/or after the generation step we have today. We have a taste of this today with generations/reflections that happen after function calls.

7. Retrieval Augmented Generation is going to get much more thorough. There is a MASSIVE amount of competition in this space. In 2024 we'll also see the LLMs being used to chunk and store data into semantic databases, as well as being put in charge of how the retrieval happens. Today, in many cases RAG manifests as an entity that has perfect recollection of 3-5 relevant passages and absolutely no memory of the other 20. RAG needs to be better at accessing/summarizing/brute-forcing the other 20. Semantic lookup is not enough. Some of the improvements here should extend to web searches, too.

8. For function calling, a standard way to traffic-direct to the right functions will emerge. That work may be closer to NLP tech than now-standard LLM tech. The problem today is that function definitions take up too much of the context window - dampening the LLM's concentration on the important information from the user. This traffic direction will be crucial so the LLM is given the function definitions it needs and doesn't need to 'think about' function definitions that are not relevant at the time. See also #5.

9. Understanding of the emergent behavior of the LLM is growing but still limited. Sam Altman has stated as much as recently as late December. Once the research gets better at understanding why the LLM works the way it does, we'll get dramatically better inference costs and more consistently accurate answers.

10. We'll start to put the LLM in control of its creativity settings. It will decide what temperature and top_p (and 5-10 settings that aren't even exposed or popularized right now) to use for a given topic.

11. I think we'll start giving the AI agency over the context window. Letting the AI decide what (and how) it remembers, and letting it store info in short-term vs. long-term vs. reference memory. This could be per session, per user, per company, etc.

12. It is highly likely that there will be smaller, faster models observing larger, more sophisticated models, helping prune generations that are misguided. Combining 12, 4 and 6, it would be interesting to see that work crowd-sourced to create a set of agents that help guide LLMs to the best outcomes.

13. These are merely the high-probability improvements in 2024. Beyond these there will be impressive improvements that come out of left field.

14. Beyond the LLM: vision and understanding images are going to be key. OCR and handwriting recognition will continue to get better. Voice and music will be big. Video and 3D generation as well.

15. We'll continue to see big leaps in robotics every month. A household moment for robotics (similar to the ChatGPT moment) in 2025.

Observations and open questions:

16. Much of what I described comes down to middleware improvements, fine-tuning/RLHF (similar to improvements of the GPT-4-1106 variety), and a modest pace of improvement/scaling/optimization in the models themselves. What are the most likely disruptive improvements to the model itself in 2024? Some of the middleware improvements in the list above could be obviated by model improvements.

17. Largely un-reported: 3.5's 1106 model/API is anecdotally superior in at least a few ways to 3.5 turbo. To that point, what shaky prototypes built on GPT-4 are now performing better simply by moving them to GPT-4 1106? SmarterGPT prototypes, SQL-generation prototypes, for instance?

18. Where we've invested in up-front model, agent, and middleware decisions in 2023 - how much of that orchestration will be given over to the LLM in 2024? Rather than building pipelines we force the LLM to be a component of, how often will we just give the LLM a budget and have it self-direct? As Justin McCarthy points out, Tesla FSD v12 has removed much of their hard-coded support structure to lean into the model, with great results.

19. What aspects of the Gemini video will be true (without significant coaching), and when/how in 2024? Regardless of which model is being used.

20. What will be the biggest limitations/gaps of the model, of the middleware, etc. in 2024?

21. What is the closest thing to AGI we will see in the lab and in our hands in 2024? What is the closest thing to AGI that exists in the lab today?

High signal-to-noise-ratio sources of information to watch closely for these developments: [https://youtube.com/@aiexplained-official…](https://t.co/svT8h0dL5M), [https://reddit.com/r/LocalLLaMA/](https://t.co/xHCIIKfqaN), [https://twitter.com/i/lists/1614585113991340032](https://twitter.com/i/lists/1614585113991340032)
2024-01-17T19:19:02
https://www.reddit.com/r/LocalLLaMA/comments/19951i7/my_thoughts_on_2024_ai_capabilities_what_are_you/
inyourfaceplate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19951i7
false
null
t3_19951i7
/r/LocalLLaMA/comments/19951i7/my_thoughts_on_2024_ai_capabilities_what_are_you/
false
false
self
29
{'enabled': False, 'images': [{'id': 'NxKTkQPtbdiojxQMpL-_vlPLyBTSSoUjSLT6hhqcUpY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0RUfoaSJiz37vHGueB3KQCMsD9exnIUIc9SJo98d_9k.jpg?width=108&crop=smart&auto=webp&s=09f7cf89f2b02336653ed8e8dc0337a93c12508c', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/0RUfoaSJiz37vHGueB3KQCMsD9exnIUIc9SJo98d_9k.jpg?auto=webp&s=45926e62a69e00972bed481b39dfde4b573d7093', 'width': 140}, 'variants': {}}]}
7900XT deals
7
Looks like there are new lows on the 20GB 7900 XT, now $709 US, making two of them potentially a better deal than trying to get a 4090 - you end up with 40GB of VRAM vs. 24GB. I know the software support is historically bad. Has it gotten good enough recently to justify the card?
2024-01-17T18:42:14
https://www.reddit.com/r/LocalLLaMA/comments/1994426/7900xt_deals/
cjbprime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1994426
false
null
t3_1994426
/r/LocalLLaMA/comments/1994426/7900xt_deals/
false
false
self
7
null
How were the parameter numbers originally established?
1
[removed]
2024-01-17T18:25:45
https://www.reddit.com/r/LocalLLaMA/comments/1993p8l/how_were_the_parameter_numbers_originally/
Arcturus17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1993p8l
false
null
t3_1993p8l
/r/LocalLLaMA/comments/1993p8l/how_were_the_parameter_numbers_originally/
false
false
self
1
null
How were the parameter numbers originally established? I feel like the base should have been 120B (highly composite numbers FTW) so that we could have a 60B, 40B, 30B, 24B, 20B, 15B, 12B, 10B, 8B, 6B, 5B, 4B, 3B, 2B, and 1B, nice and tidy, and one could choose amongst many more p
1
[removed]
2024-01-17T18:24:33
https://www.reddit.com/r/LocalLLaMA/comments/1993o49/how_where_the_parameter_numbers_originally/
Arcturus17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1993o49
false
null
t3_1993o49
/r/LocalLLaMA/comments/1993o49/how_where_the_parameter_numbers_originally/
false
false
self
1
null
GGUF quants can punch above their weight now
216
A llama.cpp improvement that integrates an optional *importance matrix* was recently added. This was originally done to make really tiny quants useful, yet it can also be applied to the existing larger quantization types. The results get way better in general when using it to quantize models. For example: In my tests the new Q5\_K is almost as good as the old Q6\_K, and the new Q3\_K\_M is even better than the old Q3\_K\_L. This now allows everyone to squeeze even higher quality results out of their precious VRAM. Here is a graph comparing the perplexity of the old with the new quants (lower is better): [Old vs. new quants perplexity on wiki.test.raw](https://preview.redd.it/tss15o4gi1dc1.png?width=897&format=png&auto=webp&s=63a543b05f90277d8c50a832522fe07e9dbf1317) This does not come for free though, as quantizing this way requires way more calculations than before - only when using the importance matrix addition of course. The results also vary significantly based on how the importance matrix is created for each model. I’m currently running some over-night calculations to see if I can maybe get the new Q5\_K\_M not just almost as good, but really as good as the old Q6\_K. I’ll add a comment here once I know more. I ran the above tests using [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) (which is a great tiny model btw) to get results quickly. If someone has more compute resources available: It would be interesting to see a comparison between a 7B and 13B llama model with the old & new quants. Especially the newly introduced IQ2\_XS and XXS of a 13B should get really interesting in comparison to the Q8 or Q6\_K of a 7B. Using wiki.valid.raw (better: wiki.train.raw) for the imatrix creation is a good start, but more can be done for even better results. Afterwards u/The-Bloke can probably re-quantize all his GGUFs - again 😄.
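If you want to try this yourself, the workflow looks roughly like the following - binary and flag names are from memory of the recent llama.cpp addition, so double-check the --help output, since this only just landed:

```
# 1) Compute an importance matrix from a calibration text (e.g. wiki.train.raw).
./imatrix -m tinyllama-1.1b-f16.gguf -f wiki.train.raw -o imatrix.dat

# 2) Quantize while feeding in the importance matrix.
./quantize --imatrix imatrix.dat tinyllama-1.1b-f16.gguf tinyllama-1.1b-Q5_K_M.gguf Q5_K_M
```

The extra compute mentioned above comes from step 1, which runs forward passes over the whole calibration file; the quantization step itself stays cheap.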
2024-01-17T18:18:41
https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1993iro
false
null
t3_1993iro
/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/
false
false
https://b.thumbs.redditm…_jhsSHsh2dOM.jpg
216
{'enabled': False, 'images': [{'id': 'oN89DCTlpN4ILjsqZ-eqHHBHOsMqFEAApHQdMqxL2uo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=108&crop=smart&auto=webp&s=86489a0d0a5efa5573fd0a7a1a298a1f686ca3fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=216&crop=smart&auto=webp&s=9cea7936f43ac604ef3149813fc9854023b2aa44', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=320&crop=smart&auto=webp&s=1a0690f3f071a646b295daadf7b164228473c273', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=640&crop=smart&auto=webp&s=53b19b2635f4a385189f055ade13dd0a16901758', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=960&crop=smart&auto=webp&s=b60d4609bb2dafdc91ceaedf6ce3b21ca29c3eb0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=1080&crop=smart&auto=webp&s=06eb581c8ee845b13de6674c30a6a1752067e175', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?auto=webp&s=1bcc0c096508e861f8770b90d7eddb84f7d706f2', 'width': 1200}, 'variants': {}}]}
SGLang: new LLM inference runtime by @lmsysorg (2-5x faster than vLLM!)
98
>We are thrilled to introduce **SGLang, our next-generation interface and runtime for LLM inference!** It greatly improves the execution and programming efficiency of complex LLM programs by co-designing the front-end language and back-end runtime. > >On the backend, we propose **RadixAttention**, a novel technique that automatically handles various patterns of KV cache reuse. On the frontend, we designed a **flexible prompting language** for you to control the generation process. > >SGLang can perform **up to 5x faster** than existing systems like Guidance and vLLM on common LLM workloads (agent, reasoning, chat, RAG, few-shot benchmark), while also reducing code complexity. source: [https://twitter.com/lmsysorg/status/1747675649412854230](https://twitter.com/lmsysorg/status/1747675649412854230)
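To give an idea of what the frontend language looks like, here is a small sketch adapted from the project's README (treat it as illustrative - the exact API and the local endpoint URL are assumptions that may have changed):

```python
# Sketch of an SGLang program: structured multi-turn generation against a local runtime.
import sglang as sgl

@sgl.function
def multi_turn_qa(s, question_1, question_2):
    s += sgl.system("You are a helpful assistant.")
    s += sgl.user(question_1)
    s += sgl.assistant(sgl.gen("answer_1", max_tokens=256))
    s += sgl.user(question_2)
    s += sgl.assistant(sgl.gen("answer_2", max_tokens=256))

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))  # assumed default port
state = multi_turn_qa.run(question_1="What is RadixAttention?",
                          question_2="How does it reuse the KV cache?")
print(state["answer_2"])
```

The shared prompt prefix across such calls is what RadixAttention caches and reuses on the backend.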
2024-01-17T18:03:21
https://www.reddit.com/r/LocalLLaMA/comments/19934kd/sglang_new_llm_inference_runtime_by_lmsysorg_25x/
galambalazs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19934kd
false
null
t3_19934kd
/r/LocalLLaMA/comments/19934kd/sglang_new_llm_inference_runtime_by_lmsysorg_25x/
false
false
self
98
{'enabled': False, 'images': [{'id': 'oHqTLlrNYSm6QstrFltgUh1cbK_N-UBMqCZUhSu3WAY', 'resolutions': [{'height': 27, 'url': 'https://external-preview.redd.it/7aJiK4jf7YxHQSpxajg_rER_nZhIQB6WRHePc38lUKk.jpg?width=108&crop=smart&auto=webp&s=b5b60e67c01a973de6247c22056191c10e4b923a', 'width': 108}], 'source': {'height': 35, 'url': 'https://external-preview.redd.it/7aJiK4jf7YxHQSpxajg_rER_nZhIQB6WRHePc38lUKk.jpg?auto=webp&s=117a49a8a1a2e8e3543e57882ab97f4b70d15335', 'width': 140}, 'variants': {}}]}
DeciCoder-6B was released today!
49
2024-01-17T17:47:55
https://v.redd.it/4lav3xppf1dc1
datascienceharp
v.redd.it
1970-01-01T00:00:00
0
{}
1992q79
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4lav3xppf1dc1/DASHPlaylist.mpd?a=1708105690%2CZjRhODUyOGQ1ZDAyMzAxNTY2NjQ0ZGJkNDhjZTVhZjJiZTkxMWE1OWY5NWUzYjdkZDMzYzVmYmVjMGJjMTU3Nw%3D%3D&v=1&f=sd', 'duration': 95, 'fallback_url': 'https://v.redd.it/4lav3xppf1dc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4lav3xppf1dc1/HLSPlaylist.m3u8?a=1708105690%2CNDgxYjNkMTU5ZjUxNjQ1MGY0NGMyZDMxMDhiZjE4M2Y1YzkzZWFiOTE5MGY0NjJiMDY3MDkyNmMzMjM4MTRhMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4lav3xppf1dc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1992q79
/r/LocalLLaMA/comments/1992q79/decicoder6b_was_released_today/
false
false
https://external-preview…93ce4a93ccb28d3f
49
{'enabled': False, 'images': [{'id': 'b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?width=108&crop=smart&format=pjpg&auto=webp&s=2f03761a309c1a58b097601b59e0f7af22d2cedb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?width=216&crop=smart&format=pjpg&auto=webp&s=2b1e74a1152f3f2021a7b76a0830e5d13f88d072', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?width=320&crop=smart&format=pjpg&auto=webp&s=c31549293c95f663261203dc1745910a4d3feb1c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?width=640&crop=smart&format=pjpg&auto=webp&s=965cbf1b55628ed055974d4b770f66266fee9939', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?width=960&crop=smart&format=pjpg&auto=webp&s=8d4e9b8f06198caeac07fa72f972662ceda7443f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8fe98d636afa6f12d494211aeec41e74900eb0a7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/b2FyeTFkbXNmMWRjMTiAqUQjvvQSv-Rsx63Qj70oP7cHfeFveQAIBl1IycwM.png?format=pjpg&auto=webp&s=c95c63885d444ccd417156673fdf3a68ca252f37', 'width': 1920}, 'variants': {}}]}