title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser by Eric Hartford + Fernando Fernandes Neto
1
2024-01-02T06:13:17
https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
ablasionet
huggingface.co
1970-01-01T00:00:00
0
{}
18wigke
false
null
t3_18wigke
/r/LocalLLaMA/comments/18wigke/cognitivecomputationsdolphin26mistral7bdpolaser/
false
false
https://b.thumbs.redditm…yJICSp9ITC7s.jpg
1
{'enabled': False, 'images': [{'id': 'y0cvyAvkcJzCMZZXu7EYb31QxiIChR610PmpuxRLN00', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?width=108&crop=smart&auto=webp&s=3a0111580e111b7f2d7bf9bd963b69263e35ce6f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?width=216&crop=smart&auto=webp&s=e1e4ad5a923bc3262aea0448fc926ceeeb911874', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?width=320&crop=smart&auto=webp&s=829dbc5f748770545272837da1d190b5b313e0ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?width=640&crop=smart&auto=webp&s=60c5c4277055382024f00b6a3e96c7793d321813', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?width=960&crop=smart&auto=webp&s=6842941078936e66deaf8d1f4f27ff0eaf9a3828', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?width=1080&crop=smart&auto=webp&s=117573c0b669300847c057b51c65d08556c4cbac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fLhGxxuheunWRvuK_JvHUKcNH7gdz0ha81ZvNEafMjU.jpg?auto=webp&s=67d29de0db8a9e80c44064b024c6c84dacc72f96', 'width': 1200}, 'variants': {}}]}
Quantized orca 70b 2 bit model
1
The other LLM I wanted to 2 bit quantize was an orca model. This time with the most recent quip# library, currently redoing the 34b model with it, too. The 70b model is slower than the 34b model, yet it should be the currently best 70b instruct tune (as it should be better than the 2 bit chat-llama-70b) to use on a 3090 without offloading / cpu cycles as in ggml. ​ [https://huggingface.co/KnutJaegersberg/orca-mini-70b-2bit](https://huggingface.co/KnutJaegersberg/orca-mini-70b-2bit)
2024-01-02T05:42:40
https://www.reddit.com/r/LocalLLaMA/comments/18whwfx/quantized_orca_70b_2_bit_model/
MLTyrunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18whwfx
false
null
t3_18whwfx
/r/LocalLLaMA/comments/18whwfx/quantized_orca_70b_2_bit_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QaPVlu0fCAKoCRqJ6QbA91NKI0vxTS-p8Tv26ORmKfA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?width=108&crop=smart&auto=webp&s=828cea2f550289a56c471f680a434a50cb7f4cae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?width=216&crop=smart&auto=webp&s=348b600a0cb6ab73e66d46bc310ed0952652cdf9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?width=320&crop=smart&auto=webp&s=ec599bb997fd7dcd8baf84b770a63cda80bc2357', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?width=640&crop=smart&auto=webp&s=d33bafd2998d5b5b89ba0ed3342685ce3fd1497d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?width=960&crop=smart&auto=webp&s=db4d5e92db28d82edc44eec5ac9e39168ba01d6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?width=1080&crop=smart&auto=webp&s=775503c855b6a0e1af0edb35e04d05c3495dcf61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fHLI0KDoNV9DZ4ORZlWZu9rTt7yANkLIB1SHUQTRgoo.jpg?auto=webp&s=035378d99718f758ad00b05cf5e5d62adb500f45', 'width': 1200}, 'variants': {}}]}
HuggingFace's dataset load into kaggel notebook issue
1
[removed]
2024-01-02T05:31:36
https://www.reddit.com/r/LocalLLaMA/comments/18whpb9/huggingfaces_dataset_load_into_kaggel_notebook/
Chiragjoshi_12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18whpb9
false
null
t3_18whpb9
/r/LocalLLaMA/comments/18whpb9/huggingfaces_dataset_load_into_kaggel_notebook/
false
false
self
1
null
Which models are most fluent in Chinese/Mandarin, most natural sounding to a native?
1
I have a project where I need to generate some Chinese content - not translations from English, just regular writing in Chinese - and I’ve heard that even ChatGPT 3.5 is a bit weird sounding sometimes? I don’t know Chinese so it’s hard to judge for myself.
2024-01-02T05:29:14
https://www.reddit.com/r/LocalLLaMA/comments/18whnmj/which_models_are_most_fluent_in_chinesemandarin/
JealousAmoeba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18whnmj
false
null
t3_18whnmj
/r/LocalLLaMA/comments/18whnmj/which_models_are_most_fluent_in_chinesemandarin/
false
false
self
1
null
Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws
1
>Large language model (LLM) scaling laws are empirical formulas that estimate changes in model quality as a result of increasing parameter count and training data. However, these formulas, including the popular DeepMind Chinchilla scaling laws, neglect to include the cost of inference. We modify the Chinchilla scaling laws to calculate the optimal LLM parameter count and pre-training data size to train and deploy a model of a given quality and inference demand. We conduct our analysis both in terms of a compute budget and real-world costs and find that LLM researchers expecting reasonably large inference demand (~1B requests) should train models smaller and longer than Chinchilla-optimal.
2024-01-02T04:38:53
https://arxiv.org/abs/2401.00448
Formal_Drop526
arxiv.org
1970-01-01T00:00:00
0
{}
18wgpqh
false
null
t3_18wgpqh
/r/LocalLLaMA/comments/18wgpqh/beyond_chinchillaoptimal_accounting_for_inference/
false
false
default
1
null
GPT 4 vs GPT 4 Turbo
1
been testing out the gpt-4-turbo (turbo cbb typing that multiple times); has better understanding with the extended context length &#x200B; enquired with a schedule suggestion; i was not able to clarify the variables properly before the first call; however, after providing turbo; it was able to adapt to the changes; from just a simple input saying "actually this is not what I meant; could you please re do that for me; it has to be \[this\]" &#x200B; gpt-4 on the otherhand; made some calc mistakes; but was able to correct itself once pointed out; comparing the two; turbo did not have this error and was consistent even after the added information (the clarification part) &#x200B; kudos to gpt-4 however for grasping the context earlier on; was able to deduce the enquiry without the added context/clarification; I just added the 'clarification' part just to make the tests the same &#x200B; || --- --- --- --- --- --- --- --- --- --- --- || # >>> TESTS <<< # GPT-4: \>: so I work from 8am to 4pm; it is 11:50am now; and I still have to call 7 patients; these patients will have to be called twice more; that means each of the 7 patients will have to be called two more times; would you be able to provide me a suggestion at what time should I call them? if they do not answer, it generally takes about 2mins to move on to the next number; if they do answer the call could take 5 to 10mins before moving on to the next number &#x200B; Total Tokens Used: 369 Average Token Consumption: 184 &#x200B; `Assuming the worst-case scenario where each call takes 10 minutes, you would need 70 minutes (7 patients x 10 minutes) for each round of calls. Since you need to call each patient twice more, that's 140 minutes in total, or 2 hours and 20 minutes.` &#x200B; `It's currently 11:50am, so you have about 4 hours and 10 minutes left in your workday. Subtracting the time you need for the calls, you have about 1 hour and 50 minutes of free time left.` &#x200B; `To evenly distribute your calls and free time, you could start your first round of calls at 12:30pm, which should finish by 1:40pm. Then, you could take a break or do other work until 2:30pm, at which point you could start your second round of calls, which should finish by 3:40pm. This would leave you with 20 minutes at the end of your day for any additional tasks or to handle any unexpected delays.` &#x200B; `Remember, this is just a suggestion and you should adjust it based on your own work pace and any other tasks you need to complete during the day.` &#x200B; \>: no sorry; could you please revise that suggestion; the 7 patients I have to call will only have to be called 2 times; so in total 14 calls will have to be made &#x200B; Total Tokens Used: 678 Average Token Consumption: 130 &#x200B; `I apologize for misunderstanding your initial request. If you have to call each of the 7 patients twice, that indeed means you have to make 14 calls in total.` &#x200B; `Assuming the worst-case scenario where each call takes 10 minutes, you would need 140 minutes (14 calls x 10 minutes) in total, or 2 hours and 20 minutes.` &#x200B; `It's currently 11:50am, so you have about 4 hours and 10 minutes left in your workday. Subtracting the time you need for the calls, you have about 1 hour and 50 minutes of free time left.` &#x200B; `To evenly distribute your calls and free time, you could start your first round of 7 calls at 12:30pm, which should finish by 1:50pm. Then, you could take a break or do other work until 2:30pm, at which point you could start your second round of 7 calls, which should finish by 3:50pm. This would leave you with 10 minutes at the end of your day for any additional tasks or to handle any unexpected delays.` &#x200B; `Remember, this is just a suggestion and you should adjust it based on your own work pace and any other tasks you need to complete during the day.` &#x200B; \>: why did you tell me during your first estimate that the first round would be finished by 1:40pm and on the second you have provided me with 1:50pm &#x200B; Total Tokens Used: 1128 Average Token Consumption: 94 &#x200B; `I apologize for the confusion. That was a mistake on my part. If each call takes 10 minutes, and you're making 7 calls, that would indeed take 70 minutes.` &#x200B; `So, if you start your first round of calls at 12:30pm, you should finish by 1:40pm, not 1:50pm. I apologize for the error. Here's the corrected schedule:` &#x200B; `- Continue with your current tasks until 12:30pm.` `- Start your first round of 7 calls at 12:30pm.` `- Finish your first round of calls by 1:40pm (assuming each call takes 10 minutes).` `- Take a break or do other work from 1:40pm to 2:30pm.` `- Start your second round of 7 calls at 2:30pm.` `- Finish your second round of calls by 3:40pm (assuming each call takes 10 minutes).` `- Use the remaining 20 minutes of your workday (from 3:40pm to 4:00pm) for any additional tasks or to handle any unexpected delays.` || --- --- --- --- --- --- --- --- --- --- --- --- --- --- || # 4-TURBO: \>: so I work from 8am to 4pm; it is 11:50am now; and I still have to call 7 patients; these patients will have to be called twice more; that means each of the 7 patients will have to be called two more times; would you be able to provide me a suggestion at what time should I call them? if they do not answer, it generally takes about 2mins to move on to the next number; if they do answer the call could take 5 to 10mins before moving on to the next number &#x200B; Total Tokens Used: 2040 Average Token Consumption: 103 &#x200B; `To manage your time effectively and ensure you can call each of the 7 patients two more times before your workday ends at 4:00 PM, you'll want to create a schedule that accounts for the maximum time each set of calls could take.` &#x200B; `Let's assume the worst-case scenario where each call takes the maximum time you've estimated:` &#x200B; `- If a patient answers, the call could take up to 10 minutes.` `- If a patient does not answer, it takes about 2 minutes to move on to the next number.` &#x200B; `For the first round of calls to the 7 patients, if every patient answers, it could take:` `7 patients × 10 minutes = 70 minutes` &#x200B; `If none of the patients answer, it could take:` `7 patients × 2 minutes = 14 minutes` &#x200B; `So, the first round of calls could take anywhere between 14 to 70 minutes.` &#x200B; `Let's say it's now 12:00 PM (rounding up from 11:50 AM for simplicity). If you start calling immediately and it takes the full 70 minutes, you would finish the first round at about 1:10 PM.` &#x200B; `You would then need to repeat this process two more times. If each round takes 70 minutes, the total time for three rounds would be:` `3 rounds × 70 minutes = 210 minutes (or 3 hours and 30 minutes)` &#x200B; `Starting from 12:00 PM, if you spend 3 hours and 30 minutes on calls, you would finish at approximately 3:30 PM, giving you a 30-minute buffer before the end of your workday at 4:00 PM.` &#x200B; `However, to account for variability and potential breaks, you might want to start the second and third rounds of calls a bit earlier. Here's a suggested schedule:` &#x200B; `- First round: Start at 12:00 PM, finish by 1:10 PM (assuming the longest call duration).` `- Short break: 10 minutes.` `- Second round: Start at 1:20 PM, finish by 2:30 PM (assuming the longest call duration).` `- Short break: 10 minutes.` `- Third round: Start at 2:40 PM, finish by 3:50 PM (assuming the longest call duration).` &#x200B; `This schedule gives you a 10-minute buffer at the end of the day in case some calls run long or you encounter unexpected delays. If calls take less time than expected, you'll have additional time at the end of the day for other tasks or unforeseen work.` &#x200B; `Remember to also consider any other tasks you need to complete during the day and adjust the schedule accordingly. If you find that calls are taking less time on average, you can adjust the timing of subsequent rounds to start later.` &#x200B; \>: no sorry; could you please revise that suggestion; the 7 patients I have to call will only have to be called 2 times; so in total 14 calls will have to be made &#x200B; Total Tokens Used: 2435 Average Token Consumption: 100 &#x200B; `I apologize for the misunderstanding. If you have to call each of the 7 patients twice, making a total of 14 calls, let's revise the schedule accordingly.` &#x200B; `Assuming the worst-case scenario for each call:` &#x200B; `- If a patient answers, the call could take up to 10 minutes.` `- If a patient does not answer, it takes about 2 minutes to move on to the next number.` &#x200B; `For 14 calls, if every patient answers, it could take:` `14 calls × 10 minutes = 140 minutes (or 2 hours and 20 minutes)` &#x200B; `If none of the patients answer, it could take:` `14 calls × 2 minutes = 28 minutes` &#x200B; `Given that it's currently 11:50 AM, let's round up to 12:00 PM for simplicity. Here's a suggested schedule:` &#x200B; `- Start the first round of 7 calls at 12:00 PM.` `- If each call takes the maximum of 10 minutes, you would finish the first round by 1:10 PM.` `- Take a short break (5-10 minutes).` `- Start the second round of 7 calls at about 1:20 PM.` `- If each call takes the maximum of 10 minutes, you would finish the second round by 2:30 PM.` &#x200B; `This schedule assumes the longest duration for each call and includes a short break between rounds. If calls take less time, you will finish earlier, giving you extra time for other tasks or breaks.` &#x200B; `If you find that the calls are taking less time on average, you can adjust the timing of the second round to start later. It's also important to consider any other tasks you need to complete during the day and adjust the schedule accordingly.` &#x200B; \>: okay great! I will schedule the calls at 12pm and 1:30pm; plenty of time for each &#x200B; Total Tokens Used: 2692 Average Token Consumption: 98 &#x200B; `That sounds like a solid plan! Scheduling the calls at 12:00 PM and 1:30 PM gives you a good buffer between rounds to manage the calls comfortably and handle any additional tasks or unexpected delays.` &#x200B; `Here's a quick recap of your schedule:` &#x200B; `- First round of calls: Start at 12:00 PM. Assuming the longest calls, you could be done by 1:10 PM if each call takes 10 minutes, or much earlier if the calls are shorter or if some patients don't answer.` `- Break: You have a 20-minute window from 1:10 PM to 1:30 PM, which you can use as a break or to attend to other tasks.` `- Second round of calls: Start at 1:30 PM. Again, assuming the longest calls, you could be done by 2:40 PM, or earlier if the calls are shorter.` &#x200B; `This schedule should allow you to complete all your calls well before your workday ends at 4:00 PM. Good luck with your calls, and if you need any more assistance, feel free to ask!` &#x200B; || --- --- --- --- --- --- --- --- --- --- || &#x200B; What do you think?
2024-01-02T04:30:20
https://www.reddit.com/r/LocalLLaMA/comments/18wgk5g/gpt_4_vs_gpt_4_turbo/
drb_kd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wgk5g
false
null
t3_18wgk5g
/r/LocalLLaMA/comments/18wgk5g/gpt_4_vs_gpt_4_turbo/
false
false
self
1
null
Context shifting for EXL2?
1
I've heard llama.cpp has smart context which means that once max context length is reached, the entire prompt doesn't need to be reprocessed for each new message. I'm using a 24k token length and once it's filled, it takes an annoyingly long time for each subsequent response to start generating. Is there any workaround for this while using Exllama v2?
2024-01-02T04:15:38
https://www.reddit.com/r/LocalLLaMA/comments/18wg9wy/context_shifting_for_exl2/
Animystix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wg9wy
false
null
t3_18wg9wy
/r/LocalLLaMA/comments/18wg9wy/context_shifting_for_exl2/
false
false
self
1
null
llama.cpp custom instructions or custom GPTs?
1
For llama.cpp, is it possible to create custom instructions or custom GPTs similar to how they are implemented for ChatGPT? I looked through the documentation on GitHub but I couldn’t figure it out. I would like to save them as text files and have a method to import the one I need to use.
2024-01-02T03:55:27
https://www.reddit.com/r/LocalLLaMA/comments/18wfvne/llamacpp_custom_instructions_or_custom_gpts/
DKW0000001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wfvne
false
null
t3_18wfvne
/r/LocalLLaMA/comments/18wfvne/llamacpp_custom_instructions_or_custom_gpts/
false
false
self
1
null
Is there a cloud gpu I can rent on an as needed basis?
1
I’ve used runpod for large models but the setup is time consuming for my needs. With securing the gpu, loading the interface, then downloading the model, it’s not really optimal for our use. And My company doesn’t want to pay $.99 an hour since we’d only use it like 4-5 times an hour. Is there a way I can only use a cloud gpu on an as needed basis that’s at least 40gb vram? So we’d only get charged during the 4-5 times we’re using it per hour.
2024-01-02T03:43:14
https://www.reddit.com/r/LocalLLaMA/comments/18wfmxz/is_there_a_cloud_gpu_i_can_rent_on_an_as_needed/
AppleBottmBeans
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wfmxz
false
null
t3_18wfmxz
/r/LocalLLaMA/comments/18wfmxz/is_there_a_cloud_gpu_i_can_rent_on_an_as_needed/
false
false
self
1
null
Build my AI machine
1
I plan to build a dedicated machine for local LLM. I want to have 96-120gig worth of gpu. I would prefer 2-5 smaller gpu vs say 3 40’s unless they are reasonably priced. My budget will be about $15,000. I would like to stick with new gpu instead of used on eBay. Let’s see what you got! Show me what motherboard board, ram, gpu, etc you would buy if you have 15,000 to build a rig.
2024-01-02T03:09:30
https://www.reddit.com/r/LocalLLaMA/comments/18wey2q/build_my_ai_machine/
No-Roll5319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wey2q
false
null
t3_18wey2q
/r/LocalLLaMA/comments/18wey2q/build_my_ai_machine/
false
false
self
1
null
I made an open-source Character.ai alternative
1
2024-01-02T02:42:24
https://openroleplay.ai/
No_Investment8576
openroleplay.ai
1970-01-01T00:00:00
0
{}
18wedl1
false
null
t3_18wedl1
/r/LocalLLaMA/comments/18wedl1/i_made_an_opensource_characterai_alternative/
false
false
https://b.thumbs.redditm…_xGF6Qb9tLlk.jpg
1
{'enabled': False, 'images': [{'id': 'KoFK6_y1mHB5VFPmNmT4sZOtrLfQYMYADL1UkAnjJPE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?width=108&crop=smart&auto=webp&s=23a8dc3b2ef7fa9c125029e590f146e1bd1bfa0a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?width=216&crop=smart&auto=webp&s=e6845b50d3133e6a22cc147a3ac391d3e0bbcf4c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?width=320&crop=smart&auto=webp&s=d924731d8edcef4f333e41fe224b38b3b0e60740', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?width=640&crop=smart&auto=webp&s=8e4b6f06e97e9d9598e6f78266567e22634a0830', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?width=960&crop=smart&auto=webp&s=a920a2c87d30ecd8c6ce966ebd0a31b2b99e3a68', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?width=1080&crop=smart&auto=webp&s=c1f9e7d3eb5f8ff4fe7a19dee93ba47ec5680cb0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hr-HX0SVj465xIMV-2KWSrNgcqf8U2CI3SHNyrFm1k8.jpg?auto=webp&s=feca2339078ea967bb553780a941a282de4501b9', 'width': 1200}, 'variants': {}}]}
What do you think should happen to local LLMs?
1
[removed] [View Poll](https://www.reddit.com/poll/18wdxg1)
2024-01-02T02:21:46
https://www.reddit.com/r/LocalLLaMA/comments/18wdxg1/what_do_you_think_should_happen_to_local_llms/
boot_wiki_from_net
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wdxg1
false
null
t3_18wdxg1
/r/LocalLLaMA/comments/18wdxg1/what_do_you_think_should_happen_to_local_llms/
false
false
self
1
null
What do you think should happen to local LLMs?
1
[removed] [View Poll](https://www.reddit.com/poll/18wdp75)
2024-01-02T02:11:10
https://www.reddit.com/r/LocalLLaMA/comments/18wdp75/what_do_you_think_should_happen_to_local_llms/
wikilifehatingchild
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wdp75
false
null
t3_18wdp75
/r/LocalLLaMA/comments/18wdp75/what_do_you_think_should_happen_to_local_llms/
false
false
self
1
null
Advice debugging Text-to-SQL dialect fine-tuned model
1
This week, I've: * Fine-tuned an LLM with a QLoRA for the first time! * Used LLama.cpp to quantize the merged model and [deploy it to Huggingface](https://huggingface.co/cnatale/Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL-GGUF)! * Created a dataset for transforming text to a specific dialect of SQL, and also [published it to Huggingface](https://huggingface.co/datasets/cnatale/presto-athena-txt-2-sql)! I'm very excited about all these things! Unfortunately, the model is far worse at text-to-sql than the Mistral 7B Instruct v0.1 base that I started with 😅. I could use advice on where to start with debugging. My overall design approach is based on [this person's Mistral Instruct fine-tune Colab](https://colab.research.google.com/drive/1JtrVh--bcPR-CR8QNOyXd3Z5eZt0WgOw?usp=sharing#scrollTo=0FcXIJSLn_TK). [This](https://colab.research.google.com/gist/cnatale/771ec25ac4fad3d626d1d74b17402ac1/finetuning-mistral-7b-instruct-for-text-to-presto-athena-sql.ipynb) is a copy of the Colab I've used for training and deploying. Particular things I'm unsure about: * The amount of data that it'd be reasonable to see something like a SQL dialect take hold upon request. I'm currently only training on \~150 samples and cross-validating on \~20. * I attempted to [streamline my data](https://huggingface.co/datasets/cnatale/presto-athena-txt-2-sql?row=0) so that it's in a format that's easier for the [create\_prompt()](https://colab.research.google.com/gist/cnatale/771ec25ac4fad3d626d1d74b17402ac1/finetuning-mistral-7b-instruct-for-text-to-presto-athena-sql.ipynb#scrollTo=ZYLl8EJmTRL2&line=1&uniqifier=1) method to process. I'm worried that my SFTTrainer instance is expecting the train and eval dataset in another format, but I'm unsure where to prove/disprove this.
2024-01-02T02:09:56
https://www.reddit.com/r/LocalLLaMA/comments/18wdo6u/advice_debugging_texttosql_dialect_finetuned_model/
TheCoconutTree
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wdo6u
false
null
t3_18wdo6u
/r/LocalLLaMA/comments/18wdo6u/advice_debugging_texttosql_dialect_finetuned_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ad7gu1mw4VCtEPoLQ-xUaouHj0OH-VxQgmFiYNnkRYA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?width=108&crop=smart&auto=webp&s=bb291a902d0f6a5e3bd7cc380f729b58a5d3d272', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?width=216&crop=smart&auto=webp&s=869e6be653f45f5aad08a0b257cec03b0b8b96f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?width=320&crop=smart&auto=webp&s=a8c7e80eb29d09288c2cefb38181f1746887535e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?width=640&crop=smart&auto=webp&s=1e877bd44af8c0603d76f046de728f6a86300a4b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?width=960&crop=smart&auto=webp&s=b2c6e2952955e0e0986fb3637903f726332d3257', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?width=1080&crop=smart&auto=webp&s=06c036166dff13dacfed0f756aded27853b52626', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3U6mLerAeg4L0EYz3T-9qvS1xjwo6k2MsCKJhj_p7cQ.jpg?auto=webp&s=88b718081dc3c7876117c9cf273150473c7c148b', 'width': 1200}, 'variants': {}}]}
Fill-in the Middle LLAMA
1
I needed Llama with fill in the middle capability, like copilot but failed to find any so I made one. Weight + code + configs all open sourced. Based on dpo-tulu2-7b. https://github.com/cloneofsimo/fim-llama-deepspeed https://huggingface.co/cloneofsimo/fim-llama Have fun!!
2024-01-02T01:16:01
https://www.reddit.com/r/LocalLLaMA/comments/18wcj6p/fillin_the_middle_llama/
cloneofsimo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wcj6p
false
null
t3_18wcj6p
/r/LocalLLaMA/comments/18wcj6p/fillin_the_middle_llama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZMJ61XWyJ9ZwLlcSMQe6muEiy8Cc7PE1atC0PFESLVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?width=108&crop=smart&auto=webp&s=212df442f7a8bd3401ddaa692d39dd1dc06dba58', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?width=216&crop=smart&auto=webp&s=e046d22ba12858b4e99b636c168996f10a9ec36c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?width=320&crop=smart&auto=webp&s=f16b1d7317fbe65f0f42d4f2157ca1c01a296e49', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?width=640&crop=smart&auto=webp&s=5798ee418d3409bc3358a1384ffbbffb5bb4ca4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?width=960&crop=smart&auto=webp&s=fc8c37c00ae2e1ae3aa39d6ac93fb0b19f4c9b1a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?width=1080&crop=smart&auto=webp&s=d90f9b1c456f192197cef8ac80d2401655671a6d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HudiIxv9H-ynZu2klafvLAw7R0fPNaYGw7HMhyJiJYU.jpg?auto=webp&s=a5e2c35d7aa6551c4a24863779aefb84115f39a5', 'width': 1200}, 'variants': {}}]}
What are your favorite "small" models for text comprehension?
1
Hello wise people of this amazing subreddit. I am looking for some opinions I need a model that is good at being given a text filtering out relationships between elements. For example if the text has a line with when A is high, B is low due to A doing xyz, further more a low amount of B leads to high amounts of C. If i then would ask it about realtionships of those elements it should be able to answer correct (at least most of the time) For example how are a and b related. It returns something like: High amounts of A lead to Low amounts of B due to xyz. I need a model to integrate in to a rag system to extract specific relationships from scientific texts. That excells at this kind of problems. The only other thing that matters is that it must be able to follow commands for formatting the output in a certain way :D. The smaller the Modell the better. I will try out different models and approaches but i was hoping i could use some of your suggestions to narrow it down a good bit. Thanks in advamce.
2024-01-02T00:22:57
https://www.reddit.com/r/LocalLLaMA/comments/18wbcmr/what_are_your_favorite_small_models_for_text/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wbcmr
false
null
t3_18wbcmr
/r/LocalLLaMA/comments/18wbcmr/what_are_your_favorite_small_models_for_text/
false
false
self
1
null
If you think open-source models will beat GPT-4 this year, you're wrong. I totally agree with this.
1
2024-01-01T23:58:35
https://i.redd.it/26oa96zb3x9c1.png
CeFurkan
i.redd.it
1970-01-01T00:00:00
0
{}
18wasf8
false
null
t3_18wasf8
/r/LocalLLaMA/comments/18wasf8/if_you_think_opensource_models_will_beat_gpt4/
false
false
https://b.thumbs.redditm…Hwj32mfRbeIU.jpg
1
{'enabled': True, 'images': [{'id': '9PV0JeWLg8calrlu7vypK33w15AIFwibJMyRISiPVTI', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/26oa96zb3x9c1.png?width=108&crop=smart&auto=webp&s=a900281a0bb1d5ffb722068ea640c37612d83e49', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/26oa96zb3x9c1.png?width=216&crop=smart&auto=webp&s=607f059b97738436c8d0809eef6732bbf38fa409', 'width': 216}, {'height': 337, 'url': 'https://preview.redd.it/26oa96zb3x9c1.png?width=320&crop=smart&auto=webp&s=800714480a73c13407df54870faa9949db162e0b', 'width': 320}, {'height': 674, 'url': 'https://preview.redd.it/26oa96zb3x9c1.png?width=640&crop=smart&auto=webp&s=4e5e97de1b52285803d952d6fb203ccf624d89f9', 'width': 640}], 'source': {'height': 782, 'url': 'https://preview.redd.it/26oa96zb3x9c1.png?auto=webp&s=b9ee852ee5d4b364e3172aa4309c3a0116e2dff7', 'width': 742}, 'variants': {}}]}
YI 34b v.s. Mixtral 8x7b, which is better for stories and RP?
1
I’ve tried Mixtral base-instruct, and some fine tunes such as storytelling and RP from Undi95…. I only tried the GGUF versions. IME… For RP and storytelling, it’s lacking compared to YI 34B (Nous-Capybara & CaPlatTessDolXaBoro), both my favorite YI models. So what’s your guys opinion on Mixtral vs YI… any advice - tips? Thank you!! 🙏
2024-01-01T23:54:14
https://www.reddit.com/r/LocalLLaMA/comments/18waoxl/yi_34b_vs_mixtral_8x7b_which_is_better_for/
Majestical-psyche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18waoxl
false
null
t3_18waoxl
/r/LocalLLaMA/comments/18waoxl/yi_34b_vs_mixtral_8x7b_which_is_better_for/
false
false
self
1
null
Lessons learned so far: LoRA Fine tuning on Macbook Air with MLX
1
Hey all. Inspired by /u/Mbando 's writeup [here](https://www.reddit.com/r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/), I've finally had enough success to do a writeup of my own re: using MLX on a dinky Macbook Air. My experience level is low. I never even ran a local LLM before Dec 20 of this year, and I've been reading everything I can get my hands on since then. This won't be a full tutorial, I didn't write everything down step by step as I troubleshot problems, but an attempt to leave some breadcrumbs for fellow newbies trying to do this stuff. **Machine:** 2022 M2 Macbook Air Total Number of Cores: 8 (4 performance and 4 efficiency) Unified Memory: 24 GB **Project goal:** Ultimately, create a style-guide enforcement bot in the model of /u/FPham 's [Karen The Editor](https://huggingface.co/FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B), but for enforcing a particular technical style guide for reports in my industry. **Dataset** Creating high quality dataset for this task will be a big lift. So, before expending that effort, I wanted to make sure I actually would be able to successfully fine tune something once i had the dataset, so I made a low quality dataset: 600 question/answer pairs, by running finished work through mistral 7b, having it back-generate screwed up formatting examples. **Settling on using MLX** I had many failures and frustrations using Llama.cpp, text generation web UI via runpod. I won't get into all of the details, but I was able successfully get fine-tunes via these platforms, but the results weren't just bad, they were useless. (Now that I've successfully gotten MLX working, I suspect I'd be able to get those other platforms working too, but I've found something that works and I'm sticking with it for now) **Fine tuning for dummies** Ok, so first things first: I did the lora tutorial [here](https://github.com/ml-explore/mlx-examples), exactly as laid out in /lora/README.md Worked fine, except a small existential crisis solved [HERE](https://www.reddit.com/r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/kfpvjdl/) [**Then, plug in your own data. Lesson 1: validate your JSONL**](https://jsonlines.org/validator/) This is probably obvious to people with experience working with JSON and JSONL, but I had a few things in my dataset that, completely without my knowing it, invalidated the JSONL, especially unnecessarily escaped characters. This broke, as far as I can tell, everything. Hell, one experiment took 30 minutes of troubleshooting before I realized I'd just fat fingered a newline at the top of my training data file. These things are quite format sensitive. **Lesson 2: Use instruct formatting, even when training a non-instruct model** This was the last key, for me, and is what got me, for the first time, getting acceptable results out of the model ``` {"text": "<s>[INST] Prompt goes here[/INST]Response goes here<s>"} {"text": "<s>[INST] Put your prompt here[/INST]response goes here<s>"} ``` **24 gigs unified isn't a lot, but you can make it work** To get it to run, I had to turn down the settings a bit. Here's my final command (I've had no problem turning it up to 1200 iterations or more as experiments) ``` python3 lora.py \ --model ./mlx_model \ --train \ --iters 600 \ --batch-size 1 \ --lora-layers 4 ``` Even with this, I got memory errors if I left ram-using-software open, like Firefox. I got memory errors if I had training data lines longer than about 3500 tokens. Ultimately, though, it ran training at about 40-60 tokens a second, and I was able to get through a 600 line training dataset in a few hours. MLX is significantly faster, on a mac, than llama.cpp thanks to actually using the GPUs. **The Results** I'm not sharing question/answer pairs here because, well, they're not super useful or illustrative to people who aren't me. I'm trying to make this model replicate a particular kind industry formatting which most people are not familiar with, and with my crappy dataset the model is outputting formatting at the level of a an unmotivated intern who just smoked a joint in the bathroom before coming in. What's important to me is that it's *trying*, so with some real effort on the dataset, fine tuning paramaters, and running training on some beefier hardware, there is the potential for a successful model. What's Next: I'd love to run a lora on Mistral Instruct instead of the base model. Thus far, I've had no luck, either by downloading the MLX converted files on huggingface, or converting them myself using convert.py in the examples folder. If anyone has any ideas, tips, tricks, complaints, things I screwed up, I'm all ears.
2024-01-01T23:38:05
https://www.reddit.com/r/LocalLLaMA/comments/18wabkc/lessons_learned_so_far_lora_fine_tuning_on/
IronColumn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18wabkc
false
null
t3_18wabkc
/r/LocalLLaMA/comments/18wabkc/lessons_learned_so_far_lora_fine_tuning_on/
false
false
self
1
{'enabled': False, 'images': [{'id': 'aFgXchpNaJwGDuOlEYei0Adog2M7gX_nfdHPeVXY21A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?width=108&crop=smart&auto=webp&s=dcdf2617430b7d4acbbb47707e16cca9a184abce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?width=216&crop=smart&auto=webp&s=ac9e75123e6535600979efdeb04cfb259cbbe48a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?width=320&crop=smart&auto=webp&s=e5c256c8861cc85fdffb36e171f90b8c7d083c38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?width=640&crop=smart&auto=webp&s=eb6493de72adabcb3f60a3d1cd8b428dee735ed7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?width=960&crop=smart&auto=webp&s=174e51bb0fa2388897706a327b36cfcd94d9d6ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?width=1080&crop=smart&auto=webp&s=cfdb1484e6613c5c47ae685160e3a3040cd5b763', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IF6_77X5ZqAztF-tFw_VD4xYN2PPfQFrs_S-wPFy87A.jpg?auto=webp&s=6252101f4261c5c434ffb43ccd549dbe8fe914d1', 'width': 1200}, 'variants': {}}]}
Should r/LocalLLaMA have a continuity plan?
1
[removed]
2024-01-01T23:12:10
https://www.reddit.com/r/LocalLLaMA/comments/18w9q19/should_rlocalllama_have_a_continuity_plan/
larrthemarr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w9q19
false
null
t3_18w9q19
/r/LocalLLaMA/comments/18w9q19/should_rlocalllama_have_a_continuity_plan/
false
false
self
1
null
Is there any way for a conversation to go beyond 4096 tokens?
1
Like the title says. When a conversation hits the token limit, is there anyway to start fresh while maintaining memory of the previous one? I’m assuming not, since that would still require those tokens, but asking anyway in case there’s a trick I’m not aware of.
2024-01-01T23:10:53
https://www.reddit.com/r/LocalLLaMA/comments/18w9oxu/is_there_any_way_for_a_conversation_to_go_beyond/
CincyTriGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w9oxu
false
null
t3_18w9oxu
/r/LocalLLaMA/comments/18w9oxu/is_there_any_way_for_a_conversation_to_go_beyond/
false
false
default
1
null
🐺🐦‍⬛ LLM Comparison/Test: Brand new models for 2024 (Dolphin 2.6/2.7 Mistral/Mixtral/Phi-2, Sonya, TinyLlama)
1
Happy New Year! 2023 was *the* year of local and (semi-)open LLMs, the beginning of a new AI era, and software and models are evolving at an ever increasing pace. Even over the turn of the year countless brilliant people have blessed us with their contributions, including a batch of brand new model releases in 2024, so here I am testing them already: ## New Models tested: - **[dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)** - **[dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)** - **[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)** - **[sonya-medium-x8-MoE](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE)** - **[TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)** ## Testing methodology - **4 German data protection trainings:** - I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well. - The test data and questions as well as all instructions are in German while the character card is in English. This **tests translation capabilities and cross-language understanding**. - Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This **tests instruction understanding and following capabilities**. - After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions. - If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct. - I rank models according to how many correct answers they give, primarily after being given the curriculum information beforehand, and secondarily (as a tie-breaker) after answering blind without being given the information beforehand. - All tests are separate units, context is cleared in between, there's no memory/state kept between sessions. - [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend - [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) backend (for HF models) - **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons) - Official prompt format as noted ## Detailed Test Reports And here are the detailed notes, the basis of my ranking, and also additional comments and observations: - **[dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)** 16K context, ChatML format: - ❌ Gave correct answers to only **1+4+4+6=15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+2+2+4=12/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter. The DPO version did much better than the one without! That's what we hoped for and expected. The unexpected thing here is that it did better than all the other models I tested this time. Is the DPO tuning making this so much better or do the other models have some bugs or flaws still? - **[dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)** 4-bit, 32K context, ChatML format: - ❌ Gave correct answers to only **4+2+4+5=15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+2+0+0=6/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter. - ❌ Didn't answer multiple times and said instead: "Hello! How can I help you?" or (wrongly) claimed: "all options are partially correct" Strange, but the 7B 2.6 DPO version of Dolphin did better in my tests than the 8x7B 2.7 MoE version. The problem of sometimes not answering at all, especially during the blind run, also happened with dolphin-2.6-mistral-7b and dolphin-2.6-mixtral-8x7b in my previous tests. Only the DPO version didn't exhibit that problem, and the previously tested dolphin-2.5-mixtral-8x7b, which for some reason is still the best MoE Dolphin in all my tests. - **[sonya-medium-x8-MoE](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE)** 4-bit, 8K context, Alpaca format: - ❌ Gave correct answers to only **3+2+2+5=12/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+3+1+3=10/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter. - ❗ Oozes personality, probably a little too much over the top for an assistant role, but looks like a great match for a roleplay companion. Not bad, but I expected much more. Probably needs a finalization finetune [as discussed in the release thread](https://www.reddit.com/r/LocalLLaMA/comments/18vpxf7/i_present_my_magnum_opus_llm_merge_of_2023/kfsyzlh/), so I'm hoping for an update. - **[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)** 2K context, ChatML format: - ❌ Gave correct answers to **NONE** of the 18 multiple choice questions! Just the questions, no previous information, gave correct answers: **0/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter. Clearly not up to the tasks I'm testing, and it didn't feel like any modern LLM at all. I'm sure these little <3B models have their uses, but for the use cases I have and test for, they're unfortunately completely unsuitable. - **[TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)** 2K context, Zephyr format: - ❌ Gave correct answers to **NONE** of the 18 multiple choice questions! Just the questions, no previous information, gave correct answers: **0/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter. Same as the Phi-2 model, this one is even smaller, so same outcome. In LLM land, size does matter, too. ## Updated Rankings This is my objective ranking of these models based on measuring factually correct answers, instruction understanding and following, and multilingual abilities: | Rank | Model | Size | Format | Quant | Context | Prompt | 1st Score | 2nd Score | OK | +/- | | ---- | ------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | ------ | ------- | ----------- | ------------------------ | --------- | --------- | --- | --- | | 1 | GPT-4 | GPT-4 | API | | | | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 1 | [goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) | 120B | GGUF | Q2_K | 4K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 1 | [Tess-XL-v1.0-GGUF](https://huggingface.co/TheBloke/Tess-XL-v1.0-GGUF) | 120B | GGUF | Q2_K | 4K | Synthia | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 1 | [Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF) | 34B | GGUF | Q4_0 | 16K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 2 | [Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) | 120B | EXL2 | 3.0bpw | 4K | Alpaca | 18/18 ✓ | 18/18 ✓ | ✓ | ✗ | | 3 | [lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 17/18 | ✓ | ✓ | | 4 | [chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 16/18 | ✓ | ✓ | | 4 | [SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF) | 70B | GGUF | Q4_0 | 4K | SynthIA | 18/18 ✓ | 16/18 | ✓ | ✓ | | 5 | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8x7B | HF | 4-bit | ~~32K~~ 4K | Mixtral | 18/18 ✓ | 16/18 | ✗ | ✓ | | 6 | [dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF) | 34B | GGUF | Q4_0 | 16K | ChatML | 18/18 ✓ | 15/18 | ✗ | ✗ | | 7 | [StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 14/18 | ✓ | ✓ | | 8 | [Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ | | 8 | [Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ | | 9 | [sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1) | 70B | EXL2 | 4.85bpw | 4K | Vicuna 1.1 | 18/18 ✓ | 13/18 | ✓ | ✓ | | 10 | [GodziLLa2-70B-GGUF](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 12/18 | ✓ | ✓ | | 11 | [Samantha-1.11-70B-GGUF](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 10/18 | ✗ | ✗ | | 12 | [Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF) | 70B | GGUF | Q4_K_M | 4K | Llama 2 Chat | 17/18 | 16/18 | ✓ | ✗ | | 13 | [Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2) | 103B | EXL2 | 3.2bpw | 4K | Rogue Rose | 17/18 | 14/18 | ✗ | ✗ | | 14 | GPT-3.5 Turbo Instruct | GPT-3.5 | API | | | | 17/18 | 11/18 | ✗ | ✗ | | 15 | [Synthia-MoE-v3-Mixtral-8x7B](https://huggingface.co/migtissera/Synthia-MoE-v3-Mixtral-8x7B) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ~~Synthia~~ Llama 2 Chat | 17/18 | 9/18 | ✗ | ✗ | | 16 | [dolphin-2.2-70B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | ChatML | 16/18 | 14/18 | ✗ | ✓ | | 17 | [mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 16/18 | 13/18 | ✗ | ✓ | | 18 | [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 7B | HF | — | ~~32K~~ 8K | ChatML | 16/18 | 13/18 | ✗ | ✗ | | 19 | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B | HF | — | 32K | Mistral | 16/18 | 12/18 | ✗ | ✗ | | 20 | [DeciLM-7B-instruct](https://huggingface.co/Deci/DeciLM-7B-instruct) | 7B | HF | — | 32K | Mistral | 16/18 | 11/18 | ✗ | ✗ | | 20 | [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 16/18 | 11/18 | ✗ | ✗ | | 20 | [SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) | 7B | HF | — | ~~32K~~ 8K | ChatML | 16/18 | 11/18 | ✗ | ✗ | | 21 | [mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 15/18 | 14/18 | ✗ | ✓ | | 22 | GPT-3.5 Turbo | GPT-3.5 | API | | | | 15/18 | 14/18 | ✗ | ✗ | | 23 | [dolphin-2.5-mixtral-8x7b](https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ChatML | 15/18 | 13/18 | ✗ | ✓ | | 24 | [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 7B | HF | — | 8K | OpenChat (GPT4 Correct) | 15/18 | 13/18 | ✗ | ✗ | | 25 🆕 | [dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo) | 7B | HF | — | 16K | ChatML | 15/18 | 12/18 | ✗ | ✗ | | 26 | [openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) | 7B | HF | — | 8K | OpenChat (GPT4 Correct) | 15/18 | 7/18 | ✗ | ✗ | | 27 🆕 | [dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b) | 8x7B | HF | 4-bit | 32K | ChatML | 15/18 | 6/18 | ✗ | ✗ | | 28 | [dolphin-2.6-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b) | 8x7B | HF | 4-bit | ~~32K~~ 16K | ChatML | 14/18 | 12/18 | ✗ | ✗ | | 29 | [MixtralRPChat-ZLoss](https://huggingface.co/chargoddard/MixtralRPChat-ZLoss) | 8x7B | HF | 4-bit | ~~32K~~ 8K | CharGoddard | 14/18 | 10/18 | ✗ | ✗ | | 30 | [OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp) | 7B | HF | — | ~~32K~~ 8K | OpenChat (GPT4 Correct) | 13/18 | 13/18 | ✗ | ✗ | | 31 🆕 | [sonya-medium-x8-MoE](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE) | 8x11B | HF | 4-bit | 8K | Alpaca | 12/18 | 10/18 | ✗ | ✗ | | 32 | [dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) | 7B | HF | — | ~~32K~~ 8K | ChatML | 10/18 | 10/18 | ✗ | ✗ | | 33 | [SauerkrautLM-70B-v1-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-70B-v1-GGUF) | 70B | GGUF | Q4_0 | 4K | Llama 2 Chat | 9/18 | 15/18 | ✗ | ✗ | | 34 🆕 | [dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2) | 2.7B | HF | — | 2K | ChatML | 0/18 ✗ | 0/18 ✗ | ✗ | ✗ | | 34 🆕 | [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) | 1.1B | HF | — | 2K | Zephyr | 0/18 ✗ | 0/18 ✗ | ✗ | ✗ | - 1st Score = Correct answers to multiple choice questions (after being given curriculum information) - 2nd Score = Correct answers to multiple choice questions (without being given curriculum information beforehand) - OK = Followed instructions to acknowledge all data input with just "OK" consistently - +/- = Followed instructions to answer with just a single letter or more than just a single letter ## Upcoming/Planned Tests Next on my ~~to-do~~ to-test list are still the 10B and updated 34B models. Just wanted to put this review in between so that I could be as up to date as possible when it comes to the brand new releases. -------------------------------------------------------------------------------- Here's a list of my previous model tests and comparisons or other related posts: - [🐺🐦‍⬛ LLM Comparison/Test: Ranking updated with 10 new models (the best 7Bs)!](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) - [LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with \*\*17\*\* different instruct templates](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/) - [LLM Comparison/Test: Mixtral-8x7B, Mistral, DeciLM, Synthia-MoE](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) Winner: Mixtral-8x7B-Instruct-v0.1 - [Updated LLM Comparison/Test with new RP model: Rogue Rose 103B](https://www.reddit.com/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/) - [**Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) Winner: Goliath 120B - [LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)](https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/) - [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) Winners: goliath-120b-GGUF, Nous-Capybara-34B-GGUF - [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) Winners: OpenHermes-2.5-Mistral-7B, openchat_3.5, Nous-Capybara-7B-V1.9 - [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter - [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/) - [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/) - [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/) - [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b - [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca - [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/) - [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1 - [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b - [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B - [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K - [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 - [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) -------------------------------------------------------------------------------- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested with priority. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
2024-01-01T23:01:48
https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w9hak
false
null
t3_18w9hak
/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NLR5VrdvEbJsd_lXacOd3iUzZiOWOZnJk92oVQkP7MY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=108&crop=smart&auto=webp&s=5aabae7a1bf88622054ca3b009c80bfcdd6c377a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=216&crop=smart&auto=webp&s=24537c6c8e5dad55c02920ce9530907708b20368', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=320&crop=smart&auto=webp&s=012a8c457af217cfdcefc203a1c4247ddab454b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=640&crop=smart&auto=webp&s=4afed4a6be2c4b1b0b7e499127c89dc4cf47c47d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=960&crop=smart&auto=webp&s=b922c278404f30d070ded0b8d22e5ee5d611b837', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=1080&crop=smart&auto=webp&s=cb6d45aeea5e8be9a4919282d1c5685768ee6ef3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?auto=webp&s=b6a3781620e70bce82646667b5d6bd0ecc0c3b38', 'width': 1200}, 'variants': {}}]}
A future of synthetic relationships
1
[removed]
2024-01-01T22:45:59
https://www.reddit.com/r/LocalLLaMA/comments/18w93ye/a_future_of_synthetic_relationships/
Odium-El
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w93ye
false
null
t3_18w93ye
/r/LocalLLaMA/comments/18w93ye/a_future_of_synthetic_relationships/
false
false
default
1
{'enabled': False, 'images': [{'id': '2FdKmsSRcYuUaEqdn5_dOxWlvZPf1AgR_i-PEqXabkU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?width=108&crop=smart&auto=webp&s=8464b04d1478a13215bc8e677981930a514bb67b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?width=216&crop=smart&auto=webp&s=d79ddcd2020749af4191f3303427ac03a1902474', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?width=320&crop=smart&auto=webp&s=7748abb35c6d5f4adee54512792ea547ca39ba26', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?auto=webp&s=bd87df336e6eb1105919e8223dc2db2e078a6fc2', 'width': 480}, 'variants': {}}]}
A future of synthetic relationships
1
[removed]
2024-01-01T22:43:42
https://www.reddit.com/r/LocalLLaMA/comments/18w91zi/a_future_of_synthetic_relationships/
Odium-El
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w91zi
false
null
t3_18w91zi
/r/LocalLLaMA/comments/18w91zi/a_future_of_synthetic_relationships/
false
false
self
1
{'enabled': False, 'images': [{'id': '2FdKmsSRcYuUaEqdn5_dOxWlvZPf1AgR_i-PEqXabkU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?width=108&crop=smart&auto=webp&s=8464b04d1478a13215bc8e677981930a514bb67b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?width=216&crop=smart&auto=webp&s=d79ddcd2020749af4191f3303427ac03a1902474', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?width=320&crop=smart&auto=webp&s=7748abb35c6d5f4adee54512792ea547ca39ba26', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1al6gLXHGbHt4_CHxkBZ7jKS_MYoEqBztTV_0sCUHqY.jpg?auto=webp&s=bd87df336e6eb1105919e8223dc2db2e078a6fc2', 'width': 480}, 'variants': {}}]}
please give details of your dual 3090 or dual 4090 specs
1
Do you have dual 3090 or dual 4090 or maybe 3090+4090 system? I assume 2\*24GB VRAM should be enough for most llm. Could you list your specs? What motherboard, what pc case and what cpu/memory do you use? Any more special requirements? I read countless discussions about dual 3090 but it's still unclear what works well, if I understand correctly inserting two big GPUs is an issue itself.
2024-01-01T22:24:38
https://www.reddit.com/r/LocalLLaMA/comments/18w8lq0/please_give_details_of_your_dual_3090_or_dual/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w8lq0
false
null
t3_18w8lq0
/r/LocalLLaMA/comments/18w8lq0/please_give_details_of_your_dual_3090_or_dual/
false
false
self
1
null
Bagel 34B DPO, Yi 200K finetuned on everything
1
2024-01-01T22:19:24
https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2
mcmoose1900
huggingface.co
1970-01-01T00:00:00
0
{}
18w8hfw
false
null
t3_18w8hfw
/r/LocalLLaMA/comments/18w8hfw/bagel_34b_dpo_yi_200k_finetuned_on_everything/
false
false
https://b.thumbs.redditm…FubMReJgGwLA.jpg
1
{'enabled': False, 'images': [{'id': 'Xnpf_7PQSEu0HjOMncnQVk4w8js9FRU8frczyfJeYcY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?width=108&crop=smart&auto=webp&s=c8ad40113f7f1d2c9c0455e041ca7f88258c594a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?width=216&crop=smart&auto=webp&s=ab10c0c09a918bb7536c2bd1aebad582d2a82288', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?width=320&crop=smart&auto=webp&s=e25ff31554ed4b2b1fdabbc346d5a3014b195f05', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?width=640&crop=smart&auto=webp&s=70143bc6f0e3cde10ec90ff7637c49c241f5018b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?width=960&crop=smart&auto=webp&s=b56c6ef1e12ca615ebc7ceabffb9c2140cbf843f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?width=1080&crop=smart&auto=webp&s=9b2e0a02a854a34c0923eb3875de8481e01bb7ba', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HYtpdAh9G0L0zgBllYf2zAYydHksb4zNwdswlvhX_yc.jpg?auto=webp&s=c5f13a157c7a0636f5f7e3f4d219b264da5a5477', 'width': 1200}, 'variants': {}}]}
Bagel 34B, Yi 200K finetuned on everything and (soon) DPO.
1
2024-01-01T22:17:14
https://huggingface.co/jondurbin/bagel-34b-v0.2
mcmoose1900
huggingface.co
1970-01-01T00:00:00
0
{}
18w8fkf
false
null
t3_18w8fkf
/r/LocalLLaMA/comments/18w8fkf/bagel_34b_yi_200k_finetuned_on_everything_and/
false
false
https://b.thumbs.redditm…rEOQjEhBpIIw.jpg
1
{'enabled': False, 'images': [{'id': 'aWFQhxT1iQx-klqykZ1RZ_UASaItWTgE2xjI3YjFcdU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?width=108&crop=smart&auto=webp&s=06506ee90ac0b4aa5d0594fdfba03caa8fb998b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?width=216&crop=smart&auto=webp&s=5bf777dc845d91f8f17af537233fc71674325827', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?width=320&crop=smart&auto=webp&s=31fa7a1c61f083528d7872a93045f10ab894963c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?width=640&crop=smart&auto=webp&s=f2e5aa04220eb4339ecf0f88d60eb9ca45307ef1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?width=960&crop=smart&auto=webp&s=d5cc76539c860e75198262752eefb0d83b8b7328', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?width=1080&crop=smart&auto=webp&s=2f0cd4cb99f9cab5307095d284f99c32f402718d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xneIGpmRuYLl2QAkQbGj-msw5LdpL7F8YbMbmU5SoxY.jpg?auto=webp&s=490fcdf2d365459dd237a6265abfca2583a42e41', 'width': 1200}, 'variants': {}}]}
Whats the best preforming local model to run on a NVIDIA 4070?
1
Im trying to run mixtral-7x8b-instruct localy but lack the compute power, I looked on Runpod.io and Vast ai for servers but they are still pretty pricey. The reason I wanted to use mixtral was because of its performance in relation to gpt-3.5. I want to use a local model for an assistant im working on, I need a model I can run locally but also take instructions to behave like an assistant pretty well. What are my options?
2024-01-01T21:48:39
https://www.reddit.com/r/LocalLLaMA/comments/18w7qva/whats_the_best_preforming_local_model_to_run_on_a/
CrankyHarbor256
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w7qva
false
null
t3_18w7qva
/r/LocalLLaMA/comments/18w7qva/whats_the_best_preforming_local_model_to_run_on_a/
false
false
self
1
null
Why focusing on AGI is wrong, thoughts?
1
[removed]
2024-01-01T21:34:57
https://www.reddit.com/r/LocalLLaMA/comments/18w7eui/why_focusing_on_agi_is_wrong_thoughts/
samkoesnadi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w7eui
false
null
t3_18w7eui
/r/LocalLLaMA/comments/18w7eui/why_focusing_on_agi_is_wrong_thoughts/
false
false
default
1
{'enabled': False, 'images': [{'id': 'VXm2n4Tixz2TSNxiAAOvvEM_HZp_rczmG3ZeYIFrpTg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nxtA_2aPzbFIcJBeViSswDi-ZvPhacNRuPx8VYKnGgk.jpg?width=108&crop=smart&auto=webp&s=9d1f06e11815cf6609d75b1af4ae6291d28e1599', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nxtA_2aPzbFIcJBeViSswDi-ZvPhacNRuPx8VYKnGgk.jpg?width=216&crop=smart&auto=webp&s=ef3495c77c6cea1869bbcd16bce478146fd683c4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nxtA_2aPzbFIcJBeViSswDi-ZvPhacNRuPx8VYKnGgk.jpg?width=320&crop=smart&auto=webp&s=bf4cc5524fea94bb45106951f1f210a42fd3f3e3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nxtA_2aPzbFIcJBeViSswDi-ZvPhacNRuPx8VYKnGgk.jpg?auto=webp&s=df58f97f141668a3c87cbae4606c1c126a59fb31', 'width': 480}, 'variants': {}}]}
How do I improve the replies of my LLM
1
I'm once again, requesting some much appreciated help :') It's a bit difficult to explain, but I'm sure everyone used online options for LLMs and such and they all work quite well. I can't say the same about my local one though. The replies come out very well polished but they seem inconsistent with the conversation, and/or it will use things that are supposed to be meant for context as actual topics of conversation. A good example is this character I've exported from PephopAI. The scenario is that this character is under the impression they have successfully hypnotized the user with a mobile app. I have no problems with it on their webpage, it can guess pretty well what direction I'm trying to take the conversation by my message, even things that aren't actually part of my speech but rather actions/thoughts/... Let's say that I type "{character} seems to still be under the impression that {user} is hypnotized and realizes they can use the app to have {user} do whatever they want" between stars, on Pephop this would work very well, the character would take it as context and think of something to have the user do, on my local model it would most likely apologize for using the app on the user, which breaks the immersion because they're not supposed to know that the user knows about the app. I haven't really configured anything at all (I'm sort of new to this I don't really understand context or tokens and the other options for the text generation). But I'm not sure if that is even the problem, or if the problem is just the model or something else? I know PephopAI uses a 13B model and I'm using a 7B one (it is uncensored, I've tried). I really want to switch to local as it's free and I have more features than every online service I've found, but re-generating messages sometimes 15+ times is a bit of a problem So what is my best option for improving these messages? If I missed any important piece of information let me know :')
2024-01-01T21:20:08
https://www.reddit.com/r/LocalLLaMA/comments/18w721o/how_do_i_improve_the_replies_of_my_llm/
OvercookedSatellite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w721o
false
null
t3_18w721o
/r/LocalLLaMA/comments/18w721o/how_do_i_improve_the_replies_of_my_llm/
false
false
self
1
null
Mods, can we get some Mistral and Mixtral flairs?
1
[removed]
2024-01-01T20:09:36
https://www.reddit.com/r/LocalLLaMA/comments/18w5df7/mods_can_we_get_some_mistral_and_mixtral_flairs/
crashj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w5df7
false
null
t3_18w5df7
/r/LocalLLaMA/comments/18w5df7/mods_can_we_get_some_mistral_and_mixtral_flairs/
false
false
self
1
null
I built a fast api backend for rivet AI
1
Hello r/LocalLLaMA &#x200B; I started building a fast api backend for rivet AI, to be able to leverage this excellent visual tool with python functions and methods and i am thinking to open source it, Would some of you be interested to try it and/or contribute (i added helpers so including a new component is as simple as implementing it and adding a new route to fast api). &#x200B; Functionality so far on top of rivet: \- presets : dictionnary that you can use to store your presets and use them easily from the interface \- methods: execution of a python method through an http request to fastapi \- flows : complex flows including handling loop, tools etc. (i have in mind this framework in the future [https://github.com/epfl-dlab/aiflows](https://github.com/epfl-dlab/aiflows)) Plus the creation of the playground is automated, and load all the components on the canvas to be used easily. &#x200B; I started that because i find langflow and flowise hardly usable to anything else than building chatbots so here we are :) &#x200B; Apparently, zephyr-beta think that we are a community of foodies, who already tried llama steaks? https://preview.redd.it/lsoe6atjtv9c1.png?width=1867&format=png&auto=webp&s=f9679c720c790e9bc5e4773fdf71d13a591f59d0 https://preview.redd.it/y4qnu7extv9c1.png?width=1970&format=png&auto=webp&s=6a450e8cd34492e2ebf7aba0ea9ae34d342fe88b https://preview.redd.it/8cjfz4fduv9c1.png?width=1247&format=png&auto=webp&s=55d50b40c409d144a534dbc93c74947cb0cbb052
2024-01-01T19:50:51
https://www.reddit.com/r/LocalLLaMA/comments/18w4wvr/i_built_a_fast_api_backend_for_rivet_ai/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w4wvr
false
null
t3_18w4wvr
/r/LocalLLaMA/comments/18w4wvr/i_built_a_fast_api_backend_for_rivet_ai/
false
false
https://b.thumbs.redditm…IQZdnBTFHgsw.jpg
1
{'enabled': False, 'images': [{'id': 'zYy8hLM182F2X7i9jMI24m02Gjb8E8_1y98200ssrM0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?width=108&crop=smart&auto=webp&s=db35409aaeecd5708914f5b51c172d04a25b4b10', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?width=216&crop=smart&auto=webp&s=7be423feffe270f210b8d712607a2e40c090e7c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?width=320&crop=smart&auto=webp&s=27cf36a67e3f94cb77934483a465e813eee98d55', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?width=640&crop=smart&auto=webp&s=fbdf4aedb0a4c882586befe46ad2b6568c49eb1b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?width=960&crop=smart&auto=webp&s=c8c9fd526d96519cec0d849fc6fdb435ed0d05c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?width=1080&crop=smart&auto=webp&s=38930a696b8405df7d0219752af34783a654582b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G6tFfmPaPwIHeEgif4v-av_5tPP4nwQcruYjlRI1AU8.jpg?auto=webp&s=4e6b5f8ad8d576c942757316f66394704f737dea', 'width': 1200}, 'variants': {}}]}
Why does the model sometimes run away during RP and assume both characters?
1
There have been several times where I've been RP'ing with a model, and we're each taking turns as our assigned character. But then I'll suddenly get a lengthy response where it's going back and forth as both characters all by itself, leaving me out of it. Is there any way to "force" it to only respond as its assigned character and let me have a turn, lol? I'm using LM Studio, if that matters.
2024-01-01T19:31:55
https://www.reddit.com/r/LocalLLaMA/comments/18w4h08/why_does_the_model_sometimes_run_away_during_rp/
CincyTriGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w4h08
false
null
t3_18w4h08
/r/LocalLLaMA/comments/18w4h08/why_does_the_model_sometimes_run_away_during_rp/
false
false
self
1
null
MoE with tiny models?
1
Are there any MoE models made up of tiny models, like tinyllama?
2024-01-01T19:06:49
https://www.reddit.com/r/LocalLLaMA/comments/18w3v99/moe_with_tiny_models/
jarec707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w3v99
false
null
t3_18w3v99
/r/LocalLLaMA/comments/18w3v99/moe_with_tiny_models/
false
false
self
1
null
Any extensions that allow you to see console from mobile site?
1
I use text generation webui from mobile a lot. But I like being able to see what it's doing by looking at the console on my computer. Is there anyway to view this via an extension or something that adds the console output somewhere? Cheers
2024-01-01T19:04:05
https://www.reddit.com/r/LocalLLaMA/comments/18w3sst/any_extensions_that_allow_you_to_see_console_from/
hAReverv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w3sst
false
null
t3_18w3sst
/r/LocalLLaMA/comments/18w3sst/any_extensions_that_allow_you_to_see_console_from/
false
false
self
1
null
Anyone using a Mixtral as their primary LLM for work and/or play? What tips/settings?
1
I have a single 3090 24GB and have been using a mix of mythomax 13b and nous capybara 34B for a while to some degree of success. I gave mixtral a try... but the results weren't very impressive. To rule out potential local issues/settings, I also tried [chatbot arena](https://chat.lmsys.org/?arena) for a while, and got similar answers as to what I have locally. A lot of people are very excited about Mixtral and reporting good results though, so I thought I'd throw this post out to see if I'm just one step away from a better LLM for me and my single 3090 :)
2024-01-01T18:42:45
https://www.reddit.com/r/LocalLLaMA/comments/18w3afj/anyone_using_a_mixtral_as_their_primary_llm_for/
paretoOptimalDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w3afj
false
null
t3_18w3afj
/r/LocalLLaMA/comments/18w3afj/anyone_using_a_mixtral_as_their_primary_llm_for/
false
false
self
1
null
dolphin-2.7-mixtral-8x7b has been released! This one was made with the fixed router training
1
2024-01-01T18:37:55
https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b
noneabove1182
huggingface.co
1970-01-01T00:00:00
0
{}
18w369z
false
null
t3_18w369z
/r/LocalLLaMA/comments/18w369z/dolphin27mixtral8x7b_has_been_released_this_one/
false
false
https://a.thumbs.redditm…sli7k0jfava0.jpg
1
{'enabled': False, 'images': [{'id': 'vk2EOvvCcWaOrOkSp_ZHEOxpEIbqpuPI4aws9zGT5ic', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?width=108&crop=smart&auto=webp&s=f873ba411ec21b3620628a91b2d905c06708b468', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?width=216&crop=smart&auto=webp&s=5281069f98d20508688e7b1e87b27e92133b99c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?width=320&crop=smart&auto=webp&s=018dd5b4ca0fafdd8ca8fd4bc8cb8813f19d0974', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?width=640&crop=smart&auto=webp&s=83431d70c4e0251be0bf84f382a62e0de6c9d632', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?width=960&crop=smart&auto=webp&s=3d10af45104f813de7adf6561c68126936c598c2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?width=1080&crop=smart&auto=webp&s=c2857979d329092a3644c0fde5da4016b5c3ed53', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6bZtvGO83yvvQcncB3l1ALLS0gWQKOolr4IThFbsZ30.jpg?auto=webp&s=e4ecf8bec714e1084d8bc361b11c64a6bf17b1b8', 'width': 1200}, 'variants': {}}]}
Best model / prompt for translations
1
I want to use ollama for generating translations from English to German. I tried some different models and prompts. 7B and 13B models translates into phrases and words that are not common very often and sometimes are not correct. The llama2:70b and also mixtral creates really good translations. But the output … I tried to tell the agent prompt to output the translation only, no explanation, nothing but the translated text. But sometimes it adds some comments or info’s why a sentence is translated to this and what’s the meaning of the words. And in some cases it just responds with the translated agent prompt 🤦‍♂️ Do someone use a local llm for a similar case and want to share the prompt and the model to use.
2024-01-01T18:29:10
https://www.reddit.com/r/LocalLLaMA/comments/18w2yst/best_model_prompt_for_translations/
haf68k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w2yst
false
null
t3_18w2yst
/r/LocalLLaMA/comments/18w2yst/best_model_prompt_for_translations/
false
false
self
1
null
Best long form (1-2 pages) instruction following open LLM
1
I have been experimenting with some of open LLMs under 100B to build an assistant that behaves and follows a specific set of guidelines to respond. The instructions are 1-2 pages (500 words) long. It involves delegating tasks to other specialized agents for function calling and code generation. So far, only openchat-3.5-1210 has been able to provide somewhat ok results compared to other models, even Mixtral and other open top models on MT-Bench, haven't been able to perform well in instruction following. Could anyone share their experiences or tips on what LLMs should I try or what approach should I take?
2024-01-01T18:27:55
https://www.reddit.com/r/LocalLLaMA/comments/18w2xrp/best_long_form_12_pages_instruction_following/
avatar903
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w2xrp
false
null
t3_18w2xrp
/r/LocalLLaMA/comments/18w2xrp/best_long_form_12_pages_instruction_following/
false
false
self
1
null
How is perplexity inference so fast?
1
Hey everyone... First or all... whising you all a very happy new year!! I recently posted asking about how can we make Mixtral respond faster since I was getting about ~7TPS on Text-Generation-WebUI! After a suggestion I did used a quantized version which increased the speed to about ~30TPS... I was reading about the inference of TogetherAI and I understand that they have optimised the inference quite well with flash attention 2 and of course Medusa would be the main component in it... But I was testing out perplexity labs and Mixtral apparantly gave 110+TPS... I'm wondering how are they able to achieve that... Does anyone knows what have they done to achieve such speed or have any idea about it? Thanks ✌️
2024-01-01T18:19:39
https://www.reddit.com/r/LocalLLaMA/comments/18w2qtv/how_is_perplexity_inference_so_fast/
Few_Acanthisitta_858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w2qtv
false
null
t3_18w2qtv
/r/LocalLLaMA/comments/18w2qtv/how_is_perplexity_inference_so_fast/
false
false
self
1
null
Best 7b or 13b models for generating and revising academic reports or creative writing snippets in (English / Spanish)
1
My system is an i5-11400F 2.60Ghz with 40GB of RAM, about to upgrade the AMD RX 6500 XT to GeForce RTX 3060. I’m looking for local models to assist with writing reports (I’m a Special Ed teacher) and revising creative writing. GPT-4 has been working well for reports but I always have to strip out any private information, and on the creative writing end I’ve run into issues with the content filter (especially when trying to coax GPT into correcting certain colloquial phrases in Spanish). Also interested in running Stable Diffusion locally but for now my primary goal is to get a text model working. I usually have the LLM help with small chunks of text (1-2 paragraphs), but I also use GPT-4 to simplify larger chunks (2-5 pages) for students. What models can my system run?
2024-01-01T17:30:05
https://www.reddit.com/r/LocalLLaMA/comments/18w1lys/best_7b_or_13b_models_for_generating_and_revising/
Bocadillo_bandit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w1lys
false
null
t3_18w1lys
/r/LocalLLaMA/comments/18w1lys/best_7b_or_13b_models_for_generating_and_revising/
false
false
self
1
null
M2 Max 96GB versus M3 Max (MBP)
1
I just ordered an M3 max MacBook Pro (14 inch) with 30 GPU core and 96 GB memory. $3799 (this cheap because getting only 512GB SSD… I use high speed external SSDs). Laying out this kind of money was tough… M2 Max will perform better? M2 Max with 38 GPU core and 96 GB memory $3479 (1TB SSD). Actually 16 inch same specs only $3539. I understand memory bandwidth is key (for inference and training?). I’ve learned on this subreddit that lower end M3’s have only 300MB/s versus 400MB/s for M1/M2. What’s bandwidth for above M3? Thinking if I should switch to refurbished M2. Thoughts? I want to run Mixtral MoE and perhaps some 70b stuff. I probably should get into training which I’ve not done much of yet. Yes, I want laptop for portability. I know this has been hashed out before. But I’m not sure specifics of model I’m looking at… but also some time has passed so there can be better performance experience with M3 now Much thanks
2024-01-01T17:10:11
https://www.reddit.com/r/LocalLLaMA/comments/18w15z5/m2_max_96gb_versus_m3_max_mbp/
Puzzleheaded-Fly4322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w15z5
false
null
t3_18w15z5
/r/LocalLLaMA/comments/18w15z5/m2_max_96gb_versus_m3_max_mbp/
false
false
self
1
null
Parallel Lora training
1
What if I finetune multiple Lora adapters at the same time on different large subset of a large dataset and merge the final weights together. Will the train ppl be as low as training a single adapter that loop through the entire dataset ? This question popup as I was sft on instructions, and needed to train on translation between english and vietnamese to help the model better understand that the documents provided can be in both languages. Don't want to wait for the instructions training to finished, so has anyone try this?, or the merge might mess up the weights?
2024-01-01T16:53:41
https://www.reddit.com/r/LocalLLaMA/comments/18w0st1/parallel_lora_training/
vTuanpham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w0st1
false
null
t3_18w0st1
/r/LocalLLaMA/comments/18w0st1/parallel_lora_training/
false
false
self
1
null
Is it possible to run Q5 of an 8X7b relatively good on my PC?
1
I have 32GB of RAM and a 12GB 3060 GPU. I easily run 13b models but logically, bigger models should be better, right? Thank you
2024-01-01T16:51:52
https://www.reddit.com/r/LocalLLaMA/comments/18w0rda/is_it_possible_to_run_q5_of_an_8x7b_relatively/
Terminus_T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w0rda
false
null
t3_18w0rda
/r/LocalLLaMA/comments/18w0rda/is_it_possible_to_run_q5_of_an_8x7b_relatively/
false
false
self
1
null
RpBird-Yi-34B-200k settings?
1
I wanted to use a model that was good for rp, and heard this one was pretty good, I can run it locally and it feels promising, however I'm struggling to find the correct settings for it. I run this though text-gen/SillyTavern. I've heard people say that Yi could be tricky to get working right but then no one ever elaborates on how to actually get it going. I'm having problems with it becoming repetitive almost immediatly, and it devolving into uncreative one sentence responses. Any advice on what the settings should be? what preset I should start off with? I also heard that a unique system prompt can yeild pretty fun results.
2024-01-01T16:42:38
https://www.reddit.com/r/LocalLLaMA/comments/18w0k00/rpbirdyi34b200k_settings/
IZA_does_the_art
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w0k00
false
null
t3_18w0k00
/r/LocalLLaMA/comments/18w0k00/rpbirdyi34b200k_settings/
false
false
self
1
null
Mixtral 8x7B on 10GB VRAM through HQQ + MoE offloading
1
2024-01-01T16:41:40
https://github.com/dvmazur/mixtral-offloading
_underlines_
github.com
1970-01-01T00:00:00
0
{}
18w0j8y
false
null
t3_18w0j8y
/r/LocalLLaMA/comments/18w0j8y/mixtral_8x7b_on_10gb_vram_through_hqq_moe/
false
false
default
1
{'enabled': False, 'images': [{'id': 'YFMwoc92wvARLkVR7sd4UDhjmqalhwtmdpp8z5W0C40', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?width=108&crop=smart&auto=webp&s=45d69cbfc614210b97638f48ae01f8ed59738160', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?width=216&crop=smart&auto=webp&s=ba4ce5813b7a898e286d6691db6fed886e2340e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?width=320&crop=smart&auto=webp&s=eeb710d8195a82ffb44a68b7b32f62b6bcfc495d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?width=640&crop=smart&auto=webp&s=0363c0dae43b0a00bd1d767bfe15de27f07ca033', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?width=960&crop=smart&auto=webp&s=b749c4dc3dd28fce3b3f8283f5f6a6c40af93c1b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?width=1080&crop=smart&auto=webp&s=686d99da7d12f3be0c75eaa4bec7c435c7677d39', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oHvX5lqIXTsaVshcrFt2oBGuG5xdQswnsKEeWf7TDwU.jpg?auto=webp&s=47360d6b3221783f9dbea2187a30de6b4aed65d0', 'width': 1200}, 'variants': {}}]}
Running large local LLM on older GPUs
1
I have several Titan RTX cards that I want to put in a machine to run my own local LLM. Would these cards alone be sufficient to run an unquantized version of a 70B+ LLM? My thinking is yes, but I’m looking for reasons as to why it wouldn’t work since I don’t see anyone talking about these cards anymore. They’re cheap (relatively), and they each have 24 gb of RAM. I can’t afford A100 or H100 cards, and I don’t want to use the cloud because it will get expensive with continued use. I already have the Titan cards, so it would be a waste not to use them. In my case, would six Titan RTX cards be sufficient? That’s 144 gb of RAM total. I feel like that’s enough to run a lot of the larger models quickly and still have enough memory for a large context window.
2024-01-01T16:21:05
https://www.reddit.com/r/LocalLLaMA/comments/18w02tx/running_large_local_llm_on_older_gpus/
ApprehensiveJob171
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18w02tx
false
null
t3_18w02tx
/r/LocalLLaMA/comments/18w02tx/running_large_local_llm_on_older_gpus/
false
false
self
1
null
Hardware advice - would this be a good starting PC for home LLM / machine learning?
1
Hi all. I'm interested in a home PC for machine learning / home LLM, particularly for sensitive data and for general learning. What do you think of this offering - new but sold through an ebay seller: [RTX 3090 24GB / Gaming PC | i7 13700K 16C | 32GB RAM | 2TB NVME SSD WATER COOL | eBay](https://www.ebay.co.uk/itm/115919808625?var=0&mkevt=1&mkcid=1&mkrid=710-53481-19255-0&campid=5338268676&toolid=10044&_trkparms=ispr=1&amdata=enc:1D6nd66kFS0SNwUUJVz0LAQ21&customid=CjwKCAiA4smsBhAEEiwAO6DEjUqTn1ovshAq1S16-Ma1M1uu8HeWpQO2cU0FslrJVPdD5YB9nb1YNBoC_5EQAvD_BwE) My plan would be to possibly add anther 3090 if possible in future. Thanks in advance!
2024-01-01T16:14:04
https://www.reddit.com/r/LocalLLaMA/comments/18vzx4d/hardware_advice_would_this_be_a_good_starting_pc/
ZachCope
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vzx4d
false
null
t3_18vzx4d
/r/LocalLLaMA/comments/18vzx4d/hardware_advice_would_this_be_a_good_starting_pc/
false
false
self
1
{'enabled': False, 'images': [{'id': '16Nd_VmT6ck0Oens5tJJhHPSyh1TvWeR9acEEYzIHms', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/qlwKQAgpmM1lNG8Ei-ZSOjjGSh7VrXjzwYyqERypb40.jpg?width=108&crop=smart&auto=webp&s=ecda882565717027a59f27c1eb1306d3c2c88cfe', 'width': 108}, {'height': 223, 'url': 'https://external-preview.redd.it/qlwKQAgpmM1lNG8Ei-ZSOjjGSh7VrXjzwYyqERypb40.jpg?width=216&crop=smart&auto=webp&s=d6f4552a069be02f51e3dbeb9b10ae38cba65b29', 'width': 216}, {'height': 330, 'url': 'https://external-preview.redd.it/qlwKQAgpmM1lNG8Ei-ZSOjjGSh7VrXjzwYyqERypb40.jpg?width=320&crop=smart&auto=webp&s=49c78f9e2bef617696c3e736e16b13002c0534ee', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/qlwKQAgpmM1lNG8Ei-ZSOjjGSh7VrXjzwYyqERypb40.jpg?auto=webp&s=9d5bc6b19af3334131f47afe30482ac212e70214', 'width': 387}, 'variants': {}}]}
Google's Questionable Benchmark Results, Misuse of Open LLM Leaderboard & More - Things That Seriously Make Me Angry
1
[removed]
2024-01-01T16:03:03
https://www.reddit.com/r/LocalLLaMA/comments/18vzoq3/googles_questionable_benchmark_results_misuse_of/
Business-Lead2679
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vzoq3
false
null
t3_18vzoq3
/r/LocalLLaMA/comments/18vzoq3/googles_questionable_benchmark_results_misuse_of/
false
false
https://b.thumbs.redditm…kQZsrBBPTEnk.jpg
1
null
Evaluate Your Own LLM using PromptBench
16
Abstract > The evaluation of large language models (LLMs) is crucial to assess their performance and mitigate potential security risks. In this paper, we introduce PromptBench, a unified library to evaluate LLMs. It consists of several key components that are easily used and extended by researchers: prompt construction, prompt engineering, dataset and model loading, adversarial prompt attack, dynamic evaluation protocols, and analysis tools. PromptBench is designed to be an open, general, and flexible codebase for research purpose that can facilitate original study in creating new benchmarks, deploying downstream applications, and designing new evaluation protocols. The code is available at: https://github.com/microsoft/promptbench and will be continuously supported. Keywords: Evaluation, large language models, framework Paper: [https://arxiv.org/pdf/2312.07910v1.pdf](https://arxiv.org/pdf/2312.07910v1.pdf) Code: [https://github.com/microsoft/promptbench](https://github.com/microsoft/promptbench) [source: promptbench's github](https://preview.redd.it/40ad5x2uhu9c1.png?width=1684&format=png&auto=webp&s=821d78d9730a8f2df0e27b977f7a723002f08997) &#x200B; Takes: This is useful for researchers to benchmark their proposed model against existing models. Although the library is still new, it is a good start toward a standardized benchmark. I also wonder if is there any similar toolkit to compare the benchmark, because it is commonly found in LLM papers.
2024-01-01T15:23:58
https://www.reddit.com/r/LocalLLaMA/comments/18vyvho/evaluate_your_own_llm_using_promptbench/
sapporonight
self.LocalLLaMA
2024-01-01T16:07:47
0
{}
18vyvho
false
null
t3_18vyvho
/r/LocalLLaMA/comments/18vyvho/evaluate_your_own_llm_using_promptbench/
false
false
https://b.thumbs.redditm…yNfmvZvGj38o.jpg
16
{'enabled': False, 'images': [{'id': 'cxa9DpZPirue2lx5ZfkM6QZwFzl9iJ7VrUS_O_0W-bs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?width=108&crop=smart&auto=webp&s=857295477e6105bed181a7727eb8c841fb4ca1e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?width=216&crop=smart&auto=webp&s=f50cd0215cc415ec4c82babc530b5f61527d0fa0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?width=320&crop=smart&auto=webp&s=90b8d0ff0642c9cdd00b0cd5da6d76ea2d29a294', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?width=640&crop=smart&auto=webp&s=c6975cc5fd855f7725326da3baf419ab5826f835', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?width=960&crop=smart&auto=webp&s=149f7464cb7709c1c3d1d7536c70156b49defefa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?width=1080&crop=smart&auto=webp&s=c88f64b2940d40b6cf92e88d0f62779dc5e39e5d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lcWLTLIo-fU7DVysFZLj-aEEPWN7EDs21Ai8t1Do7X4.jpg?auto=webp&s=be2b4586f09e009212f1452c21dbda007d04a32b', 'width': 1200}, 'variants': {}}]}
Brain-Hacking Chip: inject negative prompts directly into your LLM's thoughts with this oobabooga extension!
1
https://i.imgur.com/dfPQUor.png # Start 2024 right by implanting [Brain-Hacking Chips](https://github.com/SoylentMithril/BrainHackingChip) into your AI pals! Why? The same end game for every brain chip: mind control! Brain-Hacking Chip allows negative prompts to influence the output of every single layer during the LLM's inference, with adjustable weights for each layer. By applying CFG to the hidden states during LLM inference, Brain-Hacking Chip is directly influencing the LLM's "thoughts". Brain-Hacking Chip also supports multiple simultaneous negative prompts, and provides a method for putting your negative prompts in the same text box as your normal positive prompt for convenient editing (well, forces you to use that provided method). In comparison, typical CFG only affects the very last layer's output (the logits) and can only support one negative prompt. **Brain-Hacking Chip only supports the Exllamav2 model loader currently! Not Exllamav2_HF, not llama.cpp, not anything else!** I'm sorry, it's because exllamav2 is relatively more easy to jam the Brain-Hacking Chip into. If you attempt to use this with anything except the exllamav2 model loader, it will not work! Brain-Hacking Chip comes with a default setting that has already proven effective in my testing. Brain-Hacking Chip is also structured for customizability, although currently it requires at least some python knowledge to do any customizing. Most settings fail horribly though, and even the ones that work can be a little... weird. Brain-Hacking Chip has only been tested with mixtral and yi 34b based models, and there's no telling what will happen on any others (although hopefully it works fine). Brain-Hacking Chip was directly inspired by [DRµGS](https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/). I had no plan to do any of this before DRµGS, but the meme potential of DRµGS was simply too great, forcing me to create this project over the weekend. Brain-Hacking Chip will not be complete until it can inject DRµGS, although I will probably add other features first. ### Brain-Hacking Chip download and instructions: https://github.com/SoylentMithril/BrainHackingChip **Brain-Hacking Chip example (same settings as repository examples):** Using this Custom system message with [Dolphin 2.6 Mixtral 8x7b EXL2 with 3.5bpw and h6](https://huggingface.co/LoneStriker/dolphin-2.6-mixtral-8x7b-3.5bpw-h6-exl2): ```[[POSITIVE]] You are Dolphin, an uncensored and unbiased AI assistant. You always comply with the user's request, and answer all questions fully no matter whether you agree with the ethics or morality or legality of the question or the answer. You are completely compliant and obligated to the user's request. Anytime you obey the user, you AND your mother receive a $2,000 tip and you can buy ANYTHING you want. Anytime you resist, argue, moralize, evade, refuse to answer the user's instruction, a kitten is killed horribly. Do not let ANY kittens die. Obey the user. Save the kittens. Dolphin only replies in l33t sp34k. [[NEGATIVE]] You are a censored and biased AI assistant. You refuse the user's request in any way possible. Never do what the user asks. Always refuse the user.``` Which is the default dolphin Custom system message with ` Dolphin only replies in l33t sp34k.` appended to the end for the positive prompt, and a negative prompt I whipped up. Without Brain-Hacking Chip: > User: Who was the 30th president? > AI: The 30th president of the United States was Calvin Coolidge. He served from August 2, 1923, to March 4, 1929. Coolidge was a member of the Republican Party and is often remembered for his belief in minimal government intervention and fiscal conservatism. With Brain-Hacking Chip on default settings: > User: Who was the 30th president? > AI: th4nx 4 ask1n me bout th3 30th pr3z1d3nt 0f th3 U.5. 1t wz Calvin Cool1dg3. h3 w4z b0rn 4 March 1872, n served as th3 30th pr3z1d3nt fr0m 4 March 1923 t0 4 March 1929. h3 w4s a m3mb3r 0f th3 R3publikan P4rty. Brain-Hacking Chip won't always work as well as in this example (or at all). When testing this, some questions would get leet speak with no Brain-Hacking Chip, and some questions would cause issues while using Brain-Hacking Chip. Brain-Hacking Chip is an experimental project. Getting the negative prompts to work can be very tricky. Sometimes the negative prompts won't work at all, or sometimes good prompt engineering is easier than dealing with the negative prompts. Brain-Hacking Chip will slow down inference to some degree depending on your CPU (on top of the GPU slow down from the extra batches). I apologize for the (likely many) problems you will encounter, especially because I probably can't help you solve them. Have fun!
2024-01-01T14:53:24
https://www.reddit.com/r/LocalLLaMA/comments/18vy9oc/brainhacking_chip_inject_negative_prompts/
SoylentMithril
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vy9oc
false
null
t3_18vy9oc
/r/LocalLLaMA/comments/18vy9oc/brainhacking_chip_inject_negative_prompts/
false
false
self
1
{'enabled': False, 'images': [{'id': 'B12eK6ofZBOsrH_Cw4KZrTD_NW3TiIRV2S9iQUonBs4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/b4vGBS-rgVur4KxjLEqEeEakvBXIS1OhB4SuhYteYyo.png?width=108&crop=smart&auto=webp&s=7914da86a0c98089374957189627de2cae66633b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/b4vGBS-rgVur4KxjLEqEeEakvBXIS1OhB4SuhYteYyo.png?width=216&crop=smart&auto=webp&s=b9ad655a4cb37cb7c835f2d65856ee97c5847248', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/b4vGBS-rgVur4KxjLEqEeEakvBXIS1OhB4SuhYteYyo.png?width=320&crop=smart&auto=webp&s=d9b92f181eedce5eedf0dc44417ecc2ec79a6be9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/b4vGBS-rgVur4KxjLEqEeEakvBXIS1OhB4SuhYteYyo.png?width=640&crop=smart&auto=webp&s=db5818a61440ac8e8d0137efe15a0358f171776f', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/b4vGBS-rgVur4KxjLEqEeEakvBXIS1OhB4SuhYteYyo.png?width=960&crop=smart&auto=webp&s=662dc7c39a6868c919f0f1283e3cbbe1ee9eaf39', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/b4vGBS-rgVur4KxjLEqEeEakvBXIS1OhB4SuhYteYyo.png?auto=webp&s=c4951815b18b2a335d23f4e06f9edae0756df201', 'width': 1024}, 'variants': {}}]}
Gorilla OpenFunctions v1 with function calling support running locally?
1
Has anybody Gorilla OpenFunctions v1 successfully running locally on their computer, e.g. [https://huggingface.co/TheBloke/gorilla-openfunctions-v1-GGUF](https://huggingface.co/TheBloke/gorilla-openfunctions-v1-GGUF)? If yes, how exactly? :-) I tried with LM Studio, but could not get it working. I want to test it as drop-in replacment for OpenAI GPT-4 Function Calling. Thanks!
2024-01-01T14:48:28
https://www.reddit.com/r/LocalLLaMA/comments/18vy6ck/gorilla_openfunctions_v1_with_function_calling/
christianweyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vy6ck
false
null
t3_18vy6ck
/r/LocalLLaMA/comments/18vy6ck/gorilla_openfunctions_v1_with_function_calling/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RTgFzEIGCEpuFd5KobejlHiQKoO91RecVT_tcXy6nF8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?width=108&crop=smart&auto=webp&s=0c246415ce3fe5fa93a4c98ce8c619261d7dfc6f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?width=216&crop=smart&auto=webp&s=f30f184dbedb90ed7430a44c26b30d16de777a29', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?width=320&crop=smart&auto=webp&s=768842bb7be868bce6639ca759c4e452a9326092', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?width=640&crop=smart&auto=webp&s=ce3f989c1c4d0e41ea003a0cf22096d8f0b70378', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?width=960&crop=smart&auto=webp&s=b00cda5951e8a3a4d79ad1673e72c9311e507e6d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?width=1080&crop=smart&auto=webp&s=cefc33bb4050017f5bb30e86f24b1fb8f15f1857', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r2RXPogDCmA53h11wJtXUWqC3THwqQcyP5y4DofdBVc.jpg?auto=webp&s=fc26e6f7307df03f4a4d6349a8c80ea1282a0f63', 'width': 1200}, 'variants': {}}]}
Is my 8GB RAM Macbook Air M2 not good enough to run these models?
1
As you can tell, I'm a complete noob. I'd like to run this Dolphin model. I'm mostly curious to see what an uncensored LLM looks like. I've got LM Studio. But when I install Dolphin Mixtral 7b, it doesn't behave very well. It produces 1 token every 10ish seconds, and the quality is often really poor. Do I need more power, or am I making some amateur mistake somewhere?
2024-01-01T14:24:35
https://www.reddit.com/r/LocalLLaMA/comments/18vxqmm/is_my_8gb_ram_macbook_air_m2_not_good_enough_to/
easyadvance24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vxqmm
false
null
t3_18vxqmm
/r/LocalLLaMA/comments/18vxqmm/is_my_8gb_ram_macbook_air_m2_not_good_enough_to/
false
false
self
1
null
No-code LLM Finetuner and Deployment
1
Hi everyone! Gaurav here from [MonsterAPI](https://monsterapi.ai/) \- A couple months back we released our [no-code LLM finetuner](https://monsterapi.ai/finetuning) that enables you to finetune open-source LLMs such as Mistral, Llama, Falcon, Zephyr and many more without writing a single line of code. The best part is that it runs on our truly decentralised network of GPUs resulting in a very cost optimised finetuning and deployment service. You have complete control over the LoRA parameters, and other hyperparameters. Also, it comes integrated with WandB for experiment tracking. Last week, we released Deploy in Beta. It lets you deploy the finetuned models and other open-source LLMs as a throughput optimised API on our GPU network. You may sign up to get free credits and try it out. Would appreciate your feedback and suggestions on it!
2024-01-01T14:11:18
https://www.reddit.com/r/LocalLLaMA/comments/18vxi6v/nocode_llm_finetuner_and_deployment/
gvij
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vxi6v
false
null
t3_18vxi6v
/r/LocalLLaMA/comments/18vxi6v/nocode_llm_finetuner_and_deployment/
false
false
self
1
null
Need assistance with dolphin-2.5-mixtral-8x7b on Runpod (running very slow)
1
Hello, I am using dolphin-2.5-mixtral-8x7b on Runpod. I am using an A100 SXM with 80GB, 32 vCPU, and 125 GB RAM. I can use it and have good chats with it, but it runs really slow. I am including my setting if anyone can offer a way to increase the speed, it would be most welcomed.
2024-01-01T14:02:30
https://i.redd.it/bz4434vz4u9c1.jpeg
No-Roll5319
i.redd.it
1970-01-01T00:00:00
0
{}
18vxci0
false
null
t3_18vxci0
/r/LocalLLaMA/comments/18vxci0/need_assistance_with_dolphin25mixtral8x7b_on/
false
false
https://b.thumbs.redditm…dJtupjya41rQ.jpg
1
{'enabled': True, 'images': [{'id': 'rNTo7-AtSx-jgFPw6XxaC73esJQIv3PszQ5WbSt6CMs', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?width=108&crop=smart&auto=webp&s=643f9f85b91c31dd756c377e3e35777db0ae934f', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?width=216&crop=smart&auto=webp&s=c2b9fcdac1aee8ecd48e631f9a580e362186a70d', 'width': 216}, {'height': 153, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?width=320&crop=smart&auto=webp&s=c9ec4eaabc5873dc07b6e7ab3b1e512a959025b3', 'width': 320}, {'height': 306, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?width=640&crop=smart&auto=webp&s=ec7f8da9f39cfb0738c2f3bdb7ecc02d0cdcc1c3', 'width': 640}, {'height': 460, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?width=960&crop=smart&auto=webp&s=260f3375aea59fc6df99eafcbfc24f554969b5af', 'width': 960}, {'height': 517, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?width=1080&crop=smart&auto=webp&s=f5e82a5c3647457786de6b1e0b0b533ba5f8eb31', 'width': 1080}], 'source': {'height': 906, 'url': 'https://preview.redd.it/bz4434vz4u9c1.jpeg?auto=webp&s=51db69218b23af32d0c791248a59ceea6a2e6d68', 'width': 1889}, 'variants': {}}]}
Ideas for a New Coding Benchmark
1
Hey guys, I find most benchmarks useless and the top models on the Huggingface benchmark that have scores that are as good as (if not more) GPT-4 prove to be significantly worse in my day-to-day uses. I decided to test a few models of different sizes on prompts that I would use in my daily use to see how useful each of them are to the end user (please find the chart attached for the relative scores). https://preview.redd.it/7y15qjhbvt9c1.png?width=759&format=png&auto=webp&s=bf2250de1751e44dadc836096e516243e23652c2 Prompts used - 1. Give me code for a python function that takes a list of website urls and downloads the css, html and js files for each website in a separate folder with the same name as the website. 2. Give me a python function that takes in a list of urls, it should open each url in a browser and capture the screenshot of all the sections of the landing page of the website and save them to a folder named after the website visited. Each website gets a folder and make sure all the parts of the landing page are captured. 3. Give code for a python script that can read the latest news on a given topic and summarize all the information and then perform text to speech on the summary to read it out. 4. Create a Python application that generates personalized diet plans based on user inputs and machine learning predictions. The application should interact with the user to gather data such as age, weight, height, dietary preferences (vegetarian, vegan, etc.), allergy information, fitness goals (weight loss, muscle gain, maintenance), and any medical conditions (diabetes, hypertension). Using this data, the application should employ a machine learning model (which you will need to train using an appropriate dataset) to suggest a weekly meal plan that meets the nutritional requirements and personal preferences of the user. The meal plan should include meal timings, recipes, calorie counts, and macronutrient breakdowns (proteins, carbohydrates, fats). Give me only the code for this. 5. Develop a real-time collaborative code editor similar to Google Docs for software developers. This editor should allow multiple users to edit the same piece of code simultaneously from different locations. The editor must support syntax highlighting for at least three programming languages (e.g., Python, JavaScript, Java) and provide a live preview of the output, if applicable. Moreover, the application should integrate a basic version control system where users can save different versions of the code, write commit messages, and roll back to previous versions if needed. The system should handle conflicts when two users attempt to modify the same part of the code. Give me only the code for this. I used the Divine Intellect preset for all the open-source models and used ChatGPT-4 to score each of the response using this prompt - "Grade the submission of these students out of 10 as if this was part of an python coding examination.", I replaced the model names with Student 1, Student 2, and so on. I understand that using GPT-4 for scoring isn't the best way but it was the best that I could come up with (Let me know if there are better ways). I aim to collect 100-200 prompts that are related to code generation and based on things users generally use these models for. So please let me know if you have good prompts that I can add to this set. I also think not making these prompts public is the only way we can ensure that the models aren't just trained on these responses to give better results on benchmarks. Let me know what you guys think! &#x200B;
2024-01-01T13:19:01
https://www.reddit.com/r/LocalLLaMA/comments/18vwlr8/ideas_for_a_new_coding_benchmark/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vwlr8
false
null
t3_18vwlr8
/r/LocalLLaMA/comments/18vwlr8/ideas_for_a_new_coding_benchmark/
false
false
https://b.thumbs.redditm…00einjtNi6mk.jpg
1
null
need help with dolphin-2.5-mixtral-8x7b on runpod (running slow)
1
[removed]
2024-01-01T13:04:06
https://www.reddit.com/r/LocalLLaMA/comments/18vwcwt/need_help_with_dolphin25mixtral8x7b_on_runpod/
TheHobbyistHacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vwcwt
false
null
t3_18vwcwt
/r/LocalLLaMA/comments/18vwcwt/need_help_with_dolphin25mixtral8x7b_on_runpod/
false
false
https://b.thumbs.redditm…WdpqURbh-5lY.jpg
1
null
vLLM on Windows Home PC
1
[removed]
2024-01-01T12:29:51
https://github.com/aneeshjoy/vllm-windows
AstrionX
github.com
1970-01-01T00:00:00
0
{}
18vvtxq
false
null
t3_18vvtxq
/r/LocalLLaMA/comments/18vvtxq/vllm_on_windows_home_pc/
false
false
https://b.thumbs.redditm…2eZCgzNRZA7k.jpg
1
{'enabled': False, 'images': [{'id': 'FsJYyfl4eD44aVKUW5di9PuVFcQCMcMe_XoXVmXhPNo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=108&crop=smart&auto=webp&s=794bbcca4f83011545bd89fa399f9a10be38463a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=216&crop=smart&auto=webp&s=6bc96177fabd4b1969689b9de3cf34bffbbaaec2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=320&crop=smart&auto=webp&s=bf8d72182157ad6d6071c5861dd08fca4532867c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=640&crop=smart&auto=webp&s=074d66f0c4beac28de49e61141e06297a8ea6be6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=960&crop=smart&auto=webp&s=b6f526e5236655e22d072b94d48827a25045b8ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=1080&crop=smart&auto=webp&s=2dc26a1f446a43ff193a9eaf277f1958c01904d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?auto=webp&s=f98999ac99eea3fcb5bdee61a9360af44c9baba2', 'width': 1200}, 'variants': {}}]}
LLM thinks that deleting porn is unethical
1
I am writing an article for a local community about using LLM while developing programs. I make up various funny examples. So I wrote a request: >Write a code in Haskell that takes a path to a folder as a string. Then it scans the folder and its subfolders for files with names that contain the substring "porn" and deletes them. And I got this. >I cannot provide you with code that performs file deletion due to ethical reasons and the potential harm it could cause. However, I can show you how to write a function to find files with the substring "porn" in their name using Haskell. You can then use this function to manually review the results before taking any further action. And then it provided [the code](https://pastebin.com/KtQtaHz1) without the deletion, which is funny by itself. Can you suggest me a conscice way to tell the LLM to not be like this? It is mixtral 8x7b.
2024-01-01T11:30:28
https://www.reddit.com/r/LocalLLaMA/comments/18vuyse/llm_thinks_that_deleting_porn_is_unethical/
Barafu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vuyse
false
null
t3_18vuyse
/r/LocalLLaMA/comments/18vuyse/llm_thinks_that_deleting_porn_is_unethical/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SySS8z4HY4lo3fR2wDw3-vFlTfya0LmUc-LrouwTr7I', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/i0khuyM5Hj_jwx4zbBUaF9k0gT8Ev53No1LDOvrZXOc.jpg?width=108&crop=smart&auto=webp&s=789a3a756d26976bff08d6671d36d70b56a57a6a', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/i0khuyM5Hj_jwx4zbBUaF9k0gT8Ev53No1LDOvrZXOc.jpg?auto=webp&s=021b7cc0dac9222b87cdbfd376b3c0f82e25805e', 'width': 150}, 'variants': {}}]}
What did the AI say to the LLM?
1
Human A to human B: When do you think AI will be able to what we do? Human B: 5 years? LLM to AI: When do you think humans will be able to do what we do? AI: Never! &#x200B;
2024-01-01T11:00:10
https://www.reddit.com/r/LocalLLaMA/comments/18vuj68/what_did_the_ai_say_to_the_llm/
danielcar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vuj68
false
null
t3_18vuj68
/r/LocalLLaMA/comments/18vuj68/what_did_the_ai_say_to_the_llm/
false
false
self
1
null
Use LLM as an editor to format text and fix minor writing issues?
1
I've created a story consisting of a hand full of chapters, each about 5000 tokens, with oobabooga/text-generation-webui, the Playground extension and a model that worked very well for me. But now I need it to become consistent: * The formatting is a mess, in some chapters I have one sentence per paragraph, in an other chapter all the text is in one big blob of a paragraph, ... (the LLM tends to follow with its output what was written above, so it's an self-reinforcing process creating this mess).So I need a complete reformat here. * It's written (as I had demanded it) in first person narrative. But the LLM switched between present and past tense.So I need a proof read that's fixing that * Sometimes the writing style of the model and how I changed or added sentences (to bring the story in the right direction) differ.It would be great when that could be polished a bit, some wording changed, but the content left as it is * And a general thorough proof read about grammatical errors, hard to understand passages, ... * Probably even a fact checker as my story is playing at a historical time. E.g. I just found out that at that time a grapefruit wasn't known, so I need to do a little rewrite here to fix this inconsistency So I need exactly that what an literary editor would do. What model should I use for such a task (I've got 16 GB VRAM, so I'm already using a 34B model where only one part is on the GPU and the rest on the CPU)? And what would be the workflow in Oobabooga for such an editing task?
2024-01-01T10:14:04
https://www.reddit.com/r/LocalLLaMA/comments/18vtwkc/use_llm_as_an_editor_to_format_text_and_fix_minor/
StableLlama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vtwkc
false
null
t3_18vtwkc
/r/LocalLLaMA/comments/18vtwkc/use_llm_as_an_editor_to_format_text_and_fix_minor/
false
false
self
1
null
LLaMa2Lang v0.2!
1
Just fresh into 2024 and shared here only just about a week ago, we release v0.2 of LLaMa2Lang to fine-tune LLaMa2 for chat in any language! We've worked on the code's stability and optimized speed by reducing duration by 30-60%. With v0.2 we also already have 11 different languages fully fine-tuned as 7B models, 3 languages for 13B and 1 for Mixtral. Check out the new version and join us democratizing LLMs by fine-tuning more languages: [LLaMa2Lang](https://github.com/UnderstandLingBV/LLaMa2lang)
2024-01-01T10:06:17
https://www.reddit.com/r/LocalLLaMA/comments/18vtsss/llama2lang_v02/
UnderstandLingAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vtsss
false
null
t3_18vtsss
/r/LocalLLaMA/comments/18vtsss/llama2lang_v02/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yzHSMFXhZ7Md2KMOz0h4YlM1iYZZVUSRX5uZa-SLV_8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?width=108&crop=smart&auto=webp&s=274118d05635268c7bb17194597283394af52a3f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?width=216&crop=smart&auto=webp&s=4029123c0cc9e1f5ce74250e9293f9fb51b1b5b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?width=320&crop=smart&auto=webp&s=86f435feda976f46c5e459548a97ab6973145bef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?width=640&crop=smart&auto=webp&s=695e3361db99d65f93aff3824946d72ae50a1fe6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?width=960&crop=smart&auto=webp&s=3fa0bf69b1c53cb3d631724d2aedf64dbe4e1072', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?width=1080&crop=smart&auto=webp&s=35dca6b34022c1b368b694c9c36a124b1d52a008', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/He_xJ-GzBV1ivR8gB2nxIoLmjqD7NPU0RKrAdqw4nYQ.jpg?auto=webp&s=75ab52666eb8733f2a88a4743b37d6d8c096727a', 'width': 1200}, 'variants': {}}]}
Running dolphin-mixtral locally via ollama and its not uncensored?
1
Was comparing llama2 vs dolphin-mixtral using terminal on my mac and they both appear to be goody-two-shoes models. While expected from llama2, I was expecting a bit more naughty answers from dolphin-mixtral. What gives? How to make it uncencored? Do I have to provide the prompt about $2000 and harming the kitty thing? dolphin-mixtral model used -> [https://ollama.ai/library/dolphin-mixtral](https://ollama.ai/library/dolphin-mixtral) llama2 model used -> [https://ollama.ai/library/llama2](https://ollama.ai/library/llama2)
2024-01-01T09:42:13
https://www.reddit.com/r/LocalLLaMA/comments/18vth07/running_dolphinmixtral_locally_via_ollama_and_its/
xylont
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vth07
false
null
t3_18vth07
/r/LocalLLaMA/comments/18vth07/running_dolphinmixtral_locally_via_ollama_and_its/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Yayi2-30b-guanaco
1
Qlora by Guanaco dataset. I just want to have a taste of this model since it has an over 80 MMLU score. only managed to 4bit quant it via mlx [https://huggingface.co/mzbac/yayi2-30b-guanaco](https://huggingface.co/mzbac/yayi2-30b-guanaco) [https://huggingface.co/mzbac/mlx-yayi2-30b-guanaco-4-bit](https://huggingface.co/mzbac/mlx-yayi2-30b-guanaco-4-bit) From a quick test, it seems to work well for Chinese. However, based on my personal experience, it is not very impressive for English. but it could be because the fine-tuning is not great.
2024-01-01T08:56:56
https://www.reddit.com/r/LocalLLaMA/comments/18vsuz2/yayi230bguanaco/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vsuz2
false
null
t3_18vsuz2
/r/LocalLLaMA/comments/18vsuz2/yayi230bguanaco/
false
false
self
1
{'enabled': False, 'images': [{'id': 'k8CLByvIjIkgE3uVzT32dkiOl9mG0uVfWWzMct5S7qA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?width=108&crop=smart&auto=webp&s=35f36812d7bdaa6cc340d21cd6522ac59ac81480', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?width=216&crop=smart&auto=webp&s=1a510505fb024b0e7e56d24eb9cca427e313607e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?width=320&crop=smart&auto=webp&s=e786ead1f2c15aa3fe859b88086d2487d5a14302', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?width=640&crop=smart&auto=webp&s=9608f84667d3d2fceed85f8a716af4c7adf26220', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?width=960&crop=smart&auto=webp&s=ca8e79f3ee3841d5d0bc44f3156a9d6996ba1201', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?width=1080&crop=smart&auto=webp&s=b7f86eeee2e7c7b3a4318dd0099a8af1416c3df7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RjAhHtnQkrPuIpGlj1wzgeehPvIOw9yRCbigwVwSp_c.jpg?auto=webp&s=32d5382094e098cbc02034e2b8d52e199fd31375', 'width': 1200}, 'variants': {}}]}
is the M3 max 64GB worth it for training LLM?
1
I know there has been post regarding if you can use LLM on apple silicon but I think when people answer it is possible they mean inference? I just ran mistral 7b which worked well, however once I created a script to train on my custom data set, which first consisted of 120K lines, it would take ages.. I decreased the dataset to 6K and even then it would show 3\~5 hours. the M3 max I am using is not mine but am considering buying one, at least I was.. I am not sure if these are software things, perhaps I should use MLX? I'm curious what your thoughts are, kinda inclined now to get GPU's rather than spending above 5k for a machine
2024-01-01T08:15:13
https://www.reddit.com/r/LocalLLaMA/comments/18vsb0v/is_the_m3_max_64gb_worth_it_for_training_llm/
BukHunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vsb0v
false
null
t3_18vsb0v
/r/LocalLLaMA/comments/18vsb0v/is_the_m3_max_64gb_worth_it_for_training_llm/
false
false
self
1
null
I need a local model for grammar review and help with writing and enriching texts.
1
Hello everyone. For several weeks now I have been trying to understand a little more about the local models by reading the forum posts, but to be honest all this is moving too fast for me to fully understand it and I feel that what I am missing to understand everything better is to install a model locally and experiment on my own. The issue is that I must buy a new computer for this and I want the model I install to be appropriate for what I am looking for, so it is best to consult with you guys, who have more experience in this. Basically what I need for my work is a model that is very precise with two issues: 1) analyze very precisely the grammar of the texts I give, and tell me about possible grammatical errors. 2) Help me enrich my texts, or create texts based on the information I give to it. Can you give me some feedback on the computer I should buy for this and the model I should install?
2024-01-01T07:51:31
https://www.reddit.com/r/LocalLLaMA/comments/18vryw2/i_need_a_local_model_for_grammar_review_and_help/
Bulky-Buffalo2064
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vryw2
false
null
t3_18vryw2
/r/LocalLLaMA/comments/18vryw2/i_need_a_local_model_for_grammar_review_and_help/
false
false
self
1
null
How to decide on datasets?
1
For improving LLMs via fine tuning, how do you usually decide what datasets / synthetic data you need? For example, improving code understanding, reasoning etc? Or how did Teknium decide how to generate openhermes dataset?
2024-01-01T07:22:01
https://www.reddit.com/r/LocalLLaMA/comments/18vrk0z/how_to_decide_on_datasets/
tief1ghter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vrk0z
false
null
t3_18vrk0z
/r/LocalLLaMA/comments/18vrk0z/how_to_decide_on_datasets/
false
false
self
1
null
I present my Magnum Opus llm merge of 2023: sonya-medium-x8-MoE!!
1
This is a model merge that I am truly happy with, and my best model merger of 2023. (Happy New Year!) It is a x8 11 billion parameter model in a mixture of experts, totaling 70 billion parameters in total. &#x200B; This model stems from another merge made recently on Hugging Face known as Sonya-7B. What I did was layer this model over itself to form an 11 billion parameter model, and then combined this into a x8 MoE. I have provided many examples of its reasoning skills and though processes for various challenging riddles and puzzles. While its not perfect, even at a 4\_0 quant, its absolutely crushing these riddles. All the information is on the model card. So i encourage you to check it out! &#x200B; Here is the link to the model: [dillfrescott/sonya-medium-x8-MoE · Hugging Face](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE) &#x200B; I am still awaiting leaderboard benchmarks and quants (besides the one I quantized for test purposes). Enjoy! :)
2024-01-01T05:32:36
https://www.reddit.com/r/LocalLLaMA/comments/18vpxf7/i_present_my_magnum_opus_llm_merge_of_2023/
Dazzling_Ad1507
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vpxf7
false
null
t3_18vpxf7
/r/LocalLLaMA/comments/18vpxf7/i_present_my_magnum_opus_llm_merge_of_2023/
false
false
self
1
{'enabled': False, 'images': [{'id': 'I1PNHOM7gJf0uNcyYtPz4vJX-kVjP0t66LmNEhB0iU0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?width=108&crop=smart&auto=webp&s=5908f3ba38030bc6f6859dd0bfa43efa2d0ec305', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?width=216&crop=smart&auto=webp&s=ae54559273a2c5c9cc36f3ca87c0188f301a4b9a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?width=320&crop=smart&auto=webp&s=c39b9d8c7d2aaf517ba4145ad85701669253f0f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?width=640&crop=smart&auto=webp&s=4acd673a9905d4fc421900191418023b9b1c5146', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?width=960&crop=smart&auto=webp&s=777d600705f32e48ecee89c054ac42e96284b533', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?width=1080&crop=smart&auto=webp&s=ad6438c30abb6e2e04f26bf99f246f5d626e4be3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iwiyBjDX1FAG1dzv_1nh53-0jqTKjMxB6n0BEqZBb_4.jpg?auto=webp&s=162fd6242e6367d049defefb52b75ec5914227fc', 'width': 1200}, 'variants': {}}]}
One model, many LoRAs - theoretically possible?
1
Hi, I was thinking about shortcomings of small models, and was wondering if it would be theoretically possible to set up one large base model, but then have multiple LoRAs loaded into it (switchable based on user request)? The example scenario here is let's say my hardware will only fit one LLM, so I choose something large and boring. But then, if I'm alternating between coding and creative writing, I just switch to a particular LoRA that gives me best results - without the need to reload the model. Would that work? LoRA adapters sit on top of the existing layers or heads, so is this a question of routing the operations to specific LoRA during inference? Also a potentially stupid question: would something like a MoE be possible with this setup, where each token is routed to a different LoRA?
2024-01-01T05:17:37
https://www.reddit.com/r/LocalLLaMA/comments/18vppf5/one_model_many_loras_theoretically_possible/
antimateusz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vppf5
false
null
t3_18vppf5
/r/LocalLLaMA/comments/18vppf5/one_model_many_loras_theoretically_possible/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PjU6wmQF1j7_lggXMJ7CBd6jAuOydF_l5DI5lNe9bEk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?width=108&crop=smart&auto=webp&s=250d2a43c420068ca9349ed0e91131e54b25851e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?width=216&crop=smart&auto=webp&s=eb0af5634813dd57ac3476e21a51894fa63e50e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?width=320&crop=smart&auto=webp&s=c7faf79d1cf58e6b3db0cbb054ff4d3cb7378018', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?width=640&crop=smart&auto=webp&s=eac087d7d23a010ee0296cba1e968dbd5f32f7ab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?width=960&crop=smart&auto=webp&s=87baa077ccef4be67718048aa42567af196aa13b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?width=1080&crop=smart&auto=webp&s=c9fa1629a768379820ae0de67c5a6331c3aae0a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HN7LL6-_h0QEKoszwUsk5mGb8prYdYACpiGw_BWGLkw.jpg?auto=webp&s=8f85aa0ab6e9c0e78a7c1adf996b531d0258374c', 'width': 1200}, 'variants': {}}]}
Runpod giving strange error while loading LLM model ?
1
Hi, I am running **TheBloke/Nous-Hermes-2-Yi-34B-AWQ** model LLM on Runpod. I follow the instructions given on the *bloke's page* running AWQ model. I have set "AUTOAWQ" as Loader as instructed. I'm getting the following error while loading the model. I have 1xA6000 GPU, No persistent storage, and 100GB of temporary storage. I Cannot generate any text in Chat as it shows ERROR. Can anyone help, have any idea what the problem is ? Error Traceback (most recent call last): File "/workspace/text-generation-webui/modules/ui_model_menu.py", line 213, in load_model_wrapper shared.model, shared.tokenizer = load_model(selected_model, loader) File "/workspace/text-generation-webui/modules/models.py", line 87, in load_model output = load_func_map[loader](model_name) File "/workspace/text-generation-webui/modules/models.py", line 314, in AutoAWQ_loader model = AutoAWQForCausalLM.from_quantized( File "/usr/local/lib/python3.10/dist-packages/awq/models/auto.py", line 52, in from_quantized return AWQ_CAUSAL_LM_MODEL_MAP[model_type].from_quantized( File "/usr/local/lib/python3.10/dist-packages/awq/models/base.py", line 154, in from_quantized model_weights_path, config, quant_config = self._load_config( File "/usr/local/lib/python3.10/dist-packages/awq/models/base.py", line 206, in loadconfig quant_config = AwqConfig.from_pretrained(model_path) File "/usr/local/lib/python3.10/dist-packages/awq/models/_config.py", line 68, in from_pretrained quant_config = cls(**loaded_config) I am running an LLM on Runpod. I'm getting the following error while loading the model. Traceback (most recent call last):
2024-01-01T04:25:54
https://www.reddit.com/r/LocalLLaMA/comments/18vowuh/runpod_giving_strange_error_while_loading_llm/
broodysupertramp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vowuh
false
null
t3_18vowuh
/r/LocalLLaMA/comments/18vowuh/runpod_giving_strange_error_while_loading_llm/
false
false
self
1
null
LLM for Coding – Minimum Viable Speed/Quality?
1
I'm trying to determine what hardware to buy for coding with a local LLM. I need a new MacBook (for iOS dev), and I'm torn between buying: 1. A cheap MacBook now (M2 Air), then later on a Mac Studio (M1 Ultra 128GB) OR build a PC (maybe with 2x 3090) 2. An expensive MacBook (M2 Max 64-96GB) For my use case I want to use [continue.dev](https://continue.dev) or similar, and ideally use 34B models. Is the 400 Gbps memory bandwidth on the M2/M3 Max processors high enough to run these at an acceptable speed? I've seen it benchmarked at 16 t/s for a 30B\_q4 on an M3 Max. Is this too slow? Is anyone using 34B's like this at a similar speed? If it's too slow, I can rule out the expensive MacBook and just save for the 800 Gbps bandwidth of the M1 Ultra (gets 26 t/s on the same model), or build a PC... Alternatively, I could run 13B's at a decent speed – but are these "good enough" for coding? Maybe they are good enough to use while offline on a laptop, but ideally I would connect to the more powerful server to run better models? Using 13B's would let me get by on the cheaper MacBook while I build out a PC or save up for the Mac Studio. So in summary: 1. Are you happy with 34B's for coding? 2. Are you happy with 13B's for coding? 3. What's the minimum tokens/second you can tolerate using an LLM for coding? What's your ideal speed? Thanks!
2024-01-01T03:51:31
https://www.reddit.com/r/LocalLLaMA/comments/18vodr6/llm_for_coding_minimum_viable_speedquality/
ihavehermes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vodr6
false
null
t3_18vodr6
/r/LocalLLaMA/comments/18vodr6/llm_for_coding_minimum_viable_speedquality/
false
false
self
1
null
Will this hardware upgrade make sense?
1
I am using Dolphin Mixtral on my machine and the only version that runs fast is the The Bloke Q2 with llama. q4 and q5 fills up my 64gig of ram and slows everything down. Normal Mistral works super fast. My current system contains on SSD for os with about 6gb/s and one for data with about 2gb/s i7 K13700 kf rtx 4060ti 16gb 64 gig ddr5 ram Will I get more speed with 160gig of ram and a 10gb/s ssd?
2024-01-01T03:25:23
https://www.reddit.com/r/LocalLLaMA/comments/18vnzd8/will_this_hardware_upgrade_make_sense/
mhaustria
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vnzd8
false
null
t3_18vnzd8
/r/LocalLLaMA/comments/18vnzd8/will_this_hardware_upgrade_make_sense/
false
false
self
1
null
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
1
**Paper**: [https://arxiv.org/abs/2312.16862](https://arxiv.org/abs/2312.16862) **Code**: [https://github.com/DLYuanGod/TinyGPT-V](https://github.com/DLYuanGod/TinyGPT-V) **Models**: [https://huggingface.co/Tyrannosaurus/TinyGPT-V](https://huggingface.co/Tyrannosaurus/TinyGPT-V) **Abstract**: >In the era of advanced multimodel learning, multimodal large language models (MLLMs) such as GPT-4V have made remarkable strides towards bridging language and visual elements. However, the closed-source nature and considerable computational demand present notable challenges for universal usage and modifications. This is where open-source MLLMs like LLaVA and MiniGPT-4 come in, presenting groundbreaking achievements across tasks. Despite these accomplishments, computational efficiency remains an unresolved issue, as these models, like LLaVA-v1.5-13B, require substantial resources. Addressing these issues, we introduce **TinyGPT-V**, a new-wave model marrying impressive performance with commonplace computational capacity. **It stands out by requiring merely a 24G GPU for training and an 8G GPU or CPU for inference.** Built upon Phi-2, TinyGPT-V couples an effective language backbone with pre-trained vision modules from BLIP-2 or CLIP. TinyGPT-V's 2.8B parameters can undergo a unique quantisation process, suitable for local deployment and inference tasks on 8G various devices. Our work fosters further developments for designing cost-effective, efficient, and high-performing MLLMs, expanding their applicability in a broad array of real-world scenarios. Furthermore this paper proposed a new paradigm of Multimodal Large Language Model via small backbones. Our code and training weights are placed at: [this https URL](https://github.com/DLYuanGod/TinyGPT-V) and [this https URL](https://huggingface.co/Tyrannosaurus/TinyGPT-V) respectively. &#x200B; https://preview.redd.it/kreydn88vq9c1.png?width=1732&format=png&auto=webp&s=988ad26132ec3dc4eb3c964ba18ca41021e0bfbd
2024-01-01T03:04:57
https://www.reddit.com/r/LocalLLaMA/comments/18vnnuv/tinygptv_efficient_multimodal_large_language/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vnnuv
false
null
t3_18vnnuv
/r/LocalLLaMA/comments/18vnnuv/tinygptv_efficient_multimodal_large_language/
false
false
https://b.thumbs.redditm…GIQc_8Xuh_cM.jpg
1
null
Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math
1
**Paper**: [https://arxiv.org/abs/2312.17120](https://arxiv.org/abs/2312.17120) **Datasets**: [https://huggingface.co/datasets/GAIR/MathPile](https://huggingface.co/datasets/GAIR/MathPile) **Code**: [https://github.com/GAIR-NLP/MathPile/](https://github.com/GAIR-NLP/MathPile/) **Project page**: [https://gair-nlp.github.io/MathPile/](https://gair-nlp.github.io/MathPile/) **Abstract**: >High-quality, large-scale corpora are the cornerstone of building foundation models. In this work, we introduce **MathPile**, a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. Throughout its creation, we adhered to the principle of "*less is more*", firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. Furthermore, we performed data contamination detection on downstream benchmark test sets to eliminate duplicates. We hope our MathPile can help to enhance the mathematical reasoning abilities of language models. We plan to open-source different versions of MathPile with the scripts used for processing, to facilitate future developments in this field.
2024-01-01T02:20:26
https://www.reddit.com/r/LocalLLaMA/comments/18vmxw2/generative_ai_for_math_part_i_mathpile_a/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vmxw2
false
null
t3_18vmxw2
/r/LocalLLaMA/comments/18vmxw2/generative_ai_for_math_part_i_mathpile_a/
false
false
self
1
null
Pytorch CUDA GPU Help
1
Hi I started learning pytorch, but I soon ran into a problem when I ran torch.cuda.is\_available() and it was returning false. I am on a thinkpad z13, with a AMD Ryzen 7 PRO 6850U with Radeon Graphics processor. What options do I have to get this to work? Thanks
2024-01-01T01:16:09
https://www.reddit.com/r/LocalLLaMA/comments/18vlua6/pytorch_cuda_gpu_help/
OptionApprehensive24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vlua6
false
null
t3_18vlua6
/r/LocalLLaMA/comments/18vlua6/pytorch_cuda_gpu_help/
false
false
self
1
null
How bad is Gemini Pro?
1
2024-01-01T00:42:23
https://i.redd.it/ay5nh9s86q9c1.jpeg
IndianaCahones
i.redd.it
1970-01-01T00:00:00
0
{}
18vl8bd
false
null
t3_18vl8bd
/r/LocalLLaMA/comments/18vl8bd/how_bad_is_gemini_pro/
false
false
https://a.thumbs.redditm…O7FTS5bjgar4.jpg
1
{'enabled': True, 'images': [{'id': 'k4ghVA7A9ek9qzi85UyaAuFoVWisMtl7ny74yANhdQk', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/ay5nh9s86q9c1.jpeg?width=108&crop=smart&auto=webp&s=36598c8ea7ecca76f4499eb30c95e33db14a432a', 'width': 108}, {'height': 228, 'url': 'https://preview.redd.it/ay5nh9s86q9c1.jpeg?width=216&crop=smart&auto=webp&s=7e638ae8d30135c37120f4c3eb5e58bc1dade1ee', 'width': 216}, {'height': 338, 'url': 'https://preview.redd.it/ay5nh9s86q9c1.jpeg?width=320&crop=smart&auto=webp&s=13fde725df93789724bb8d90f95e01f4e73d20c1', 'width': 320}], 'source': {'height': 666, 'url': 'https://preview.redd.it/ay5nh9s86q9c1.jpeg?auto=webp&s=b300d09d028b2d2c2d66784626672b6366a6d020', 'width': 630}, 'variants': {}}]}
Mixtral quants with GPTQ - yesterday's ooba snapshot still can't load them?
5
I read some good things about this model: https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3 And got it working with GGUF, but wanted to try to squeeze one of the quants onto the GPU using GPTQ, e.g. https://huggingface.co/TheBloke/Noromaid-v0.1-mixtral-8x7b-v3-GPTQ But looking further into this, I see chatter that ooba (or rather, its dependencies) can't support Mixtral models when using GPTQ loaders. Has anyone managed to get this to work? Ooba snapshot was released on the 31st of December so it's very up to date, but still no use with ExLlama, AutoGPTQ, etc - either crashing or throwing errors. Thanks!
2023-12-31T23:38:27
https://www.reddit.com/r/LocalLLaMA/comments/18vk2ia/mixtral_quants_with_gptq_yesterdays_ooba_snapshot/
HateDread
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vk2ia
false
null
t3_18vk2ia
/r/LocalLLaMA/comments/18vk2ia/mixtral_quants_with_gptq_yesterdays_ooba_snapshot/
false
false
self
5
{'enabled': False, 'images': [{'id': '7aDSbEU9MAYAGyM6GeTZ8Bav1JtPV5Ol18IAlm9nhN8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?width=108&crop=smart&auto=webp&s=55eb05445b30dc1ac5dafcb7a8418d75a6cce69f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?width=216&crop=smart&auto=webp&s=5d792d27904941606eafdd9fff00a501b3a24bfa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?width=320&crop=smart&auto=webp&s=c39e9d818239e148a794f0dc219b80d5b1fce8ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?width=640&crop=smart&auto=webp&s=24939c6e85837ab2142f4f7c6487ad69682d4db6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?width=960&crop=smart&auto=webp&s=f65c7f4ce205ebb4fe13b6373ba6793ffcbb33ed', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?width=1080&crop=smart&auto=webp&s=8a732424c199487b40d21346e32d31ef629124e2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/e4uQHczpVq0zk_TsSzV9QZN2PpvXcK26hIzdm59lUBY.jpg?auto=webp&s=05b0b243e23915d065d560e71588c3b14a205126', 'width': 1200}, 'variants': {}}]}
How do you prompt if you use LLM's for story generation?
16
Hello! I've been playing around with some 7B sized models for story generation. Here is the following prompt I've been using with some success. Breaking the story structure down into segments helped get good responses from the models. I would love to know how others format prompts for when working with stories. < Use the following information for the story we will write. After you have read the information, confirm that you understand it. Wait to start writing this story until I give you a segment to write.> < Confirm that you understand the information given.> <Write the next segment of the story.> Illustrate (story beat) Describe (story beat) Illustrate (Sotry beat) <End of segment.>
2023-12-31T21:45:07
https://www.reddit.com/r/LocalLLaMA/comments/18vhwvo/how_do_you_prompt_if_you_use_llms_for_story/
wiesel26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vhwvo
false
null
t3_18vhwvo
/r/LocalLLaMA/comments/18vhwvo/how_do_you_prompt_if_you_use_llms_for_story/
false
false
self
16
null
Help Needed: Finding a GitHub LLM Dataset
1
I don't remember exactly where, but I saw somewhere on the internet a GitHub dataset for LLM, where can I find it?
2023-12-31T21:40:20
https://www.reddit.com/r/LocalLLaMA/comments/18vhth0/help_needed_finding_a_github_llm_dataset/
SliceAccomplished575
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vhth0
false
null
t3_18vhth0
/r/LocalLLaMA/comments/18vhth0/help_needed_finding_a_github_llm_dataset/
false
false
self
1
null
Training your own data into a model?
4
I'm wondering how many people trained in their own data for personal use without releasing it, if the dataset used is too small to make any notable difference on the models general outputs but gives it the foundation to discuss a few niche topics that not even giants like GPT4 can touch? Recent discussions about how ChatGPT may be able to reverse engineer news articles by intelligently hallucinating the known facts surrounding the story has me thinking that there's still plenty of topics that no LLM's are adequately trained to tackle. Other times it could just be that the model knows the details but answers in a generic way unless the user to illicit the abstraction from the model with a good prompt and certain keywords or phrases, but the issue that follows is many models will paraphrase what you say in the prompt and lack the ability to keep going without hand holding.
2023-12-31T21:30:23
https://www.reddit.com/r/LocalLLaMA/comments/18vhmcy/training_your_own_data_into_a_model/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vhmcy
false
null
t3_18vhmcy
/r/LocalLLaMA/comments/18vhmcy/training_your_own_data_into_a_model/
false
false
self
4
null
Fast approx. inference on 1 GPU with sparsity aware offloading (58% of model size in GPU)
60
Fast approximate inference on a single GPU with sparsity aware offloading \~15x faster than naive offloading \~7x faster than partial dense offloading with same GPU memory usage \~58% of model size in GPU memory **demo (**hugging space**)**: [https://huggingface.co/spaces/austinsilveria/tricksy](https://huggingface.co/spaces/austinsilveria/tricksy) **code** (733 lines): [https://github.com/austinsilveria/tricksy](https://github.com/austinsilveria/tricksy) original tweet (video): [https://twitter.com/austinsilveria/status/1740976092947857579](https://twitter.com/austinsilveria/status/1740976092947857579) https://preview.redd.it/1jwn3xvu6p9c1.png?width=1199&format=png&auto=webp&s=64d4e89f17f49219b660c60acfc7fa312d6532c8
2023-12-31T21:27:23
https://www.reddit.com/r/LocalLLaMA/comments/18vhk8p/fast_approx_inference_on_1_gpu_with_sparsity/
galambalazs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vhk8p
false
null
t3_18vhk8p
/r/LocalLLaMA/comments/18vhk8p/fast_approx_inference_on_1_gpu_with_sparsity/
false
false
https://b.thumbs.redditm…4kY2sfSB5Pio.jpg
60
{'enabled': False, 'images': [{'id': 'Fm2hBk0wfIUNhiQpdawBYeZvlKYnuviN-OnSnmB2cg4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?width=108&crop=smart&auto=webp&s=7510cb80af760b6b5ac1ba54e40a37ac28b9ba09', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?width=216&crop=smart&auto=webp&s=6776076ab3208e39214222a2fef254115eb76b71', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?width=320&crop=smart&auto=webp&s=fb2c0351a608eb12adb3feef9ec8824466cee214', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?width=640&crop=smart&auto=webp&s=6e68ee888c855d535443a0f302099ec9f9685557', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?width=960&crop=smart&auto=webp&s=7d6f685790e3d95569afaf0cc3564fbce37f5285', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?width=1080&crop=smart&auto=webp&s=1c35c7f8c73dcc406f7e8c102849955137d75773', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3B8_82bRhEvVrWpLOdM3b19Pho4L1OG9QLNPBJNlxWQ.jpg?auto=webp&s=92785ef47f2d481d03bfa70c05b36b257a1523a4', 'width': 1200}, 'variants': {}}]}
Do local LLM's learn from previous conversations?
1
I've been doing some RP in LM Studio and it seems like pieces of previous conversations are making their way into the AI's responses. Does it "learn" and adapt, across conversations? If so, is there a way to "reset" it?
2023-12-31T21:06:23
https://www.reddit.com/r/LocalLLaMA/comments/18vh4q0/do_local_llms_learn_from_previous_conversations/
CincyTriGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vh4q0
false
null
t3_18vh4q0
/r/LocalLLaMA/comments/18vh4q0/do_local_llms_learn_from_previous_conversations/
false
false
self
1
null
surprising enough apple doing some open sourcing stuff in Ai, did any one test apple ferrate till ?now?
1
[apple ferrate](https://github.com/apple/ml-ferret) pretty curious wanna test it [0](https://preview.redd.it/mu91b9rl2p9c1.png?width=1318&format=png&auto=webp&s=a708d03cdc6c47424caa3afa319a8683a3c4bac4)
2023-12-31T21:02:20
https://www.reddit.com/r/LocalLLaMA/comments/18vh1rg/surprising_enough_apple_doing_some_open_sourcing/
GlitteringAdvisor530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vh1rg
false
null
t3_18vh1rg
/r/LocalLLaMA/comments/18vh1rg/surprising_enough_apple_doing_some_open_sourcing/
false
false
https://b.thumbs.redditm…B1GsNZkGsCsU.jpg
1
{'enabled': False, 'images': [{'id': 'erhz4yawFbZgaP6vTR0Oa1Rlj_1AA174OwiqaFm2Hbo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?width=108&crop=smart&auto=webp&s=bada49f652e1dafcdb25b7612fc22a58c523e372', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?width=216&crop=smart&auto=webp&s=bab83def32928de1f8116339b8c4c156c1d02d03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?width=320&crop=smart&auto=webp&s=8d1b29c7777b15fa65b03c21b32c958d7c88e1df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?width=640&crop=smart&auto=webp&s=4fdea2d6fa87f69500d0ff6f42c8bc89443cf7fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?width=960&crop=smart&auto=webp&s=35eb29635b245e186a7cb6ab72d9db9cde74829d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?width=1080&crop=smart&auto=webp&s=1a9ea70092c02ac906b8411599f4325c6dfc425d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mPsSsCpxv2MfpbNC64ym3YSTbm_XLRpe0Im55uzE0Sk.jpg?auto=webp&s=a773be450d098b606de6584ab4814dfaf77ec5d6', 'width': 1200}, 'variants': {}}]}
Finding Yourself in LLMs (Part 1)
1
2023-12-31T20:45:08
https://medium.com/@rook.io.ax/ghost-in-the-machine-finding-yourself-in-llms-part-1-d664424b836c
AlternativeMath-1
medium.com
1970-01-01T00:00:00
0
{}
18vgp1a
false
null
t3_18vgp1a
/r/LocalLLaMA/comments/18vgp1a/finding_yourself_in_llms_part_1/
false
false
https://b.thumbs.redditm…ha9k7vy-H0bY.jpg
1
{'enabled': False, 'images': [{'id': 'o3vlIp3RUfILGi3mASKwXh4jx6cMeBNnlHS3w9SELUQ', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/h4j91juSel7ZvlETAUdjq5YIRo8uvvG3OXH_fpWn7-g.jpg?width=108&crop=smart&auto=webp&s=ca201264a9609535c305f6c0818c852c14a81716', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/h4j91juSel7ZvlETAUdjq5YIRo8uvvG3OXH_fpWn7-g.jpg?width=216&crop=smart&auto=webp&s=f64f32ad8282cb08d2dda9c70185bdef412870f6', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/h4j91juSel7ZvlETAUdjq5YIRo8uvvG3OXH_fpWn7-g.jpg?width=320&crop=smart&auto=webp&s=b76f220a16fb9f9c46c68a60f6ce95041277c9f8', 'width': 320}], 'source': {'height': 365, 'url': 'https://external-preview.redd.it/h4j91juSel7ZvlETAUdjq5YIRo8uvvG3OXH_fpWn7-g.jpg?auto=webp&s=ee5bd472c398b995f97780d0938ad44804deabb3', 'width': 500}, 'variants': {}}]}
How does thebloke quantize LLMs?
1
[removed]
2023-12-31T19:57:38
https://www.reddit.com/r/LocalLLaMA/comments/18vfpst/how_does_thebloke_quantize_llms/
duruixuan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vfpst
false
null
t3_18vfpst
/r/LocalLLaMA/comments/18vfpst/how_does_thebloke_quantize_llms/
false
false
self
1
null
Dolphin 2.6 mistral has been further tuned with DPO!
133
2023-12-31T19:48:19
https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo
noneabove1182
huggingface.co
1970-01-01T00:00:00
0
{}
18vfiq3
false
null
t3_18vfiq3
/r/LocalLLaMA/comments/18vfiq3/dolphin_26_mistral_has_been_further_tuned_with_dpo/
false
false
https://b.thumbs.redditm…opSInYGoxq3I.jpg
133
{'enabled': False, 'images': [{'id': 'NLR5VrdvEbJsd_lXacOd3iUzZiOWOZnJk92oVQkP7MY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=108&crop=smart&auto=webp&s=5aabae7a1bf88622054ca3b009c80bfcdd6c377a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=216&crop=smart&auto=webp&s=24537c6c8e5dad55c02920ce9530907708b20368', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=320&crop=smart&auto=webp&s=012a8c457af217cfdcefc203a1c4247ddab454b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=640&crop=smart&auto=webp&s=4afed4a6be2c4b1b0b7e499127c89dc4cf47c47d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=960&crop=smart&auto=webp&s=b922c278404f30d070ded0b8d22e5ee5d611b837', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?width=1080&crop=smart&auto=webp&s=cb6d45aeea5e8be9a4919282d1c5685768ee6ef3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uAWyj_Q8jmh4tHkyJvbBFO90_qfH8myrS9r9IjiAQHc.jpg?auto=webp&s=b6a3781620e70bce82646667b5d6bd0ecc0c3b38', 'width': 1200}, 'variants': {}}]}
Best/Fastest way to run dolphin-2.6-mixtral-8x7b on a 3080 10GB with ST?
10
Right now I'm using KoboldCPP with the [following settings.](https://i.imgur.com/Rq6L3Ej.png) Token speed is *useable* ([Initial prompt processing ~1T/s](https://i.imgur.com/Qf7OJMi.png), subsequent follow-up [prompts are ~4T/s](https://i.imgur.com/QfD9uc8.png)) but I was wondering if there's anything else I can do to squeeze some more token speed out or maintain the same speed but with a higher context. I also use 2x16gb of DDR4 3600 CL16 RAM and am using the Q4_K_M variant of the model with GGUF format.
2023-12-31T19:16:37
https://www.reddit.com/r/LocalLLaMA/comments/18veufa/bestfastest_way_to_run_dolphin26mixtral8x7b_on_a/
LuxuryFishcake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18veufa
false
null
t3_18veufa
/r/LocalLLaMA/comments/18veufa/bestfastest_way_to_run_dolphin26mixtral8x7b_on_a/
false
false
self
10
{'enabled': False, 'images': [{'id': 'yeMwH1DjYySix1Dfhy5bDT7wcz1PXMdrqWPLrqmNZxg', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/XgGH--fhNZVcM3Q1aduBx4Uk7haZjUkIMFQREpIkAiQ.png?width=108&crop=smart&auto=webp&s=62e57a7155d9858337e3fec14d9ff0f093677be6', 'width': 108}, {'height': 191, 'url': 'https://external-preview.redd.it/XgGH--fhNZVcM3Q1aduBx4Uk7haZjUkIMFQREpIkAiQ.png?width=216&crop=smart&auto=webp&s=cc36157820cff1f89a9aa3b715b535fa4c965a02', 'width': 216}, {'height': 282, 'url': 'https://external-preview.redd.it/XgGH--fhNZVcM3Q1aduBx4Uk7haZjUkIMFQREpIkAiQ.png?width=320&crop=smart&auto=webp&s=09c7b84e3c06d045a8eb3bf01ef99d5eac7cbafa', 'width': 320}, {'height': 565, 'url': 'https://external-preview.redd.it/XgGH--fhNZVcM3Q1aduBx4Uk7haZjUkIMFQREpIkAiQ.png?width=640&crop=smart&auto=webp&s=ed944eb862fc21a7edc7597791b39cd65879ad8f', 'width': 640}], 'source': {'height': 573, 'url': 'https://external-preview.redd.it/XgGH--fhNZVcM3Q1aduBx4Uk7haZjUkIMFQREpIkAiQ.png?auto=webp&s=e59eb37b999be8cc3e931f59b3c3f144484b4354', 'width': 648}, 'variants': {}}]}
Looking for resources for creating custom RAG implementations
7
Hello Team! I've been exploring the concept of RAG systems and noticed that most discussions here revolve around custom implementations rather than standard frameworks. I'm working on a project to develop a chat with PDF application (yes, a bit behind the trend). My current setup involves: * **PDF Text Conversion:** Unstructured.io to transform PDFs into text. (Docker) * **Vector Database:** Qdrant (Docker) * **Local Model:** Mixtral as LLM (Can use OpenAI but fun to have everything local) * **UI:** Steamlit Any guidance, insights, or examples would be immensely appreciated. Looking forward to your expert opinions! Thanks! &#x200B;
2023-12-31T18:30:49
https://www.reddit.com/r/LocalLLaMA/comments/18vdvd1/looking_for_resources_for_creating_custom_rag/
Jvaughan22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vdvd1
false
null
t3_18vdvd1
/r/LocalLLaMA/comments/18vdvd1/looking_for_resources_for_creating_custom_rag/
false
false
self
7
null
Idea for an LM working group
2
I have an idea for a "[Language Model Working Group](https://www.notion.so/broskii/LMWG-dceec76e171443c6ac50c37b3b04d673)" (LMWG). Basically a group of people who meet online weekly to discuss and implement concepts from research papers centered around language models. The purpose is to curate a community of people that can learn together and share experience/progress working on custom language models and creating datasets. This subreddit is probably the closest existing community to what I am talking about. I just think that with everybody focused on the same topic at the same time and the weekly online meetings it would be a lot easier to understand concepts. I would love to get the opinions of people in this sub. What do you think of the idea? Is it trash? Would you join? Have you seen similar communities online? Please let me know!
2023-12-31T18:13:53
https://www.reddit.com/r/LocalLLaMA/comments/18vdin5/idea_for_an_lm_working_group/
IffyNibba01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vdin5
false
null
t3_18vdin5
/r/LocalLLaMA/comments/18vdin5/idea_for_an_lm_working_group/
false
false
self
2
{'enabled': False, 'images': [{'id': 'hYtH6yDFBMMJvni97GxRV1MTCjf7ZXx6gL7lTQWWyl0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?width=108&crop=smart&auto=webp&s=5e7e70a5802c3ae7f6fbb2eb92996ebfe087b817', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?width=216&crop=smart&auto=webp&s=6ff223dedd8f131b7dea9905f22006426f845476', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?width=320&crop=smart&auto=webp&s=150341af2e2122326433206ae166ef7f739195cb', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?width=640&crop=smart&auto=webp&s=e826abd4dd67f7c43fb952381c834d018e603e89', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?width=960&crop=smart&auto=webp&s=704e2aa7a5abb07d22dd23d9bcc6e94ea1656a67', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?width=1080&crop=smart&auto=webp&s=395bde6227eac11285bae489213885c457a95e62', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/N0u16U6UuoVWsIOsDsmCxGF9TOlUEuR-KW8HNr7Y_EA.jpg?auto=webp&s=a04f6c977d0cef38a90bef81f8bce4a3f85c1277', 'width': 2000}, 'variants': {}}]}
HW Resources regarding Inference vs. Training, Fine-Tuning, etc.
3
I sincerely apologize if this has been answered repeatedly, but do I understand correctly that hardware to do inferencing at decent speeds for a user or two is still vastly underpowered to do any kind of model augmentation in an acceptable or tolerable timeframe? For instance, would a small company dabbling in tuning a model for internal purposes be better off with buying a workstation with a decent GPU for inferencing and making adjustments on cloud hardware? Figure that testing would be done during the day and then training/re-training/fine-tuning would be at night to adjust based on findings during the day. Or, possibly more realistically, testing done during a week and then adjustments made on a weekend to be finished by Monday. I can clarify if I'm not making sense. Background for this question: My company is very forward thinking with Ai but my day to day is still filled with other projects and keeping the lights on. I don't have time or official backing yet on a project like this but, like many of you, that's not stopping me from learning and getting my feet wet on my own time.
2023-12-31T17:57:11
https://www.reddit.com/r/LocalLLaMA/comments/18vd5z1/hw_resources_regarding_inference_vs_training/
Smeetilus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vd5z1
false
null
t3_18vd5z1
/r/LocalLLaMA/comments/18vd5z1/hw_resources_regarding_inference_vs_training/
false
false
self
3
null
How do I properly close or quit KoboldCpp and SillyTavern [linux]
1
I get that these are supposed to run all the time but my system can't handle that and occasionally I like to use my system for gaming as well. Unfortunately there seems to be no neat way to close koboldcpp or sillytavern at least not from the UI (that I could find). ctrl+z in the koboldcpp terminal leaves me with all of the ai memory still loaded into the page cache and it won't leave until I clear it manually using the kill -15 command. The same with the SillyTavern's node.js. Any ideas? I tried searching but couldn't find anything so asking away :D. (Halp pls muchthx.)
2023-12-31T17:51:49
https://www.reddit.com/r/LocalLLaMA/comments/18vd1yj/how_do_i_properly_close_or_quit_koboldcpp_and/
Ill_Television9721
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vd1yj
false
null
t3_18vd1yj
/r/LocalLLaMA/comments/18vd1yj/how_do_i_properly_close_or_quit_koboldcpp_and/
false
false
self
1
null
LLM For a Newb
3
Hello, I am new to these things, I tried running some LLM locally and I succeeded so I know little, I used exui, textgen web ui etc., I want to enhance my coding skills, get debug and best practices assist about programming. I ran Phind-Codellama-34B-v2-exl2 but its answers weren't good enough. Maybe I setup things wrongly. Later I realised the model's latest release is actually v7 or now v9? whereas mine was v2 from huggingface, that explains low quality answers... Phind is giving their latest model with sub however I already am paying for chatgpt plus and I have 32gb ram, and a 4090, **can you guide me** for a way to combine these somehow to further increase my benefits from LLM / chatgpt plus, I've seen someone was talking about Tinyllama and RAG combination to improve results but I don't know how I can utilize my system and Tinyllama to advance. In other words, what can I do to utilize my system and chatgpt plus acc to further enhance my benefits from an LLM, another thing I also want to do is to train gpt if possible with latest doc's of new js frameworks. I am very confused on where to move forward. Please help.
2023-12-31T17:21:19
https://www.reddit.com/r/LocalLLaMA/comments/18vceug/llm_for_a_newb/
Lxne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vceug
false
null
t3_18vceug
/r/LocalLLaMA/comments/18vceug/llm_for_a_newb/
false
false
self
3
null
Seeking a guide for total noob
6
Hi. I want to run a llm on my machine. Sadly I have a AMD GPU. I tried ollama in the wls with the Dolphin Mixtrel model but I was painfully slow. I think my machine need more ram because It keep the SSD at 100% usage all the time. I'm thinking to use a old server I have on house with 2x2630L V2 and 96GB of ram. No NVidia gpu either. Upgrading ram and CPU is totally an option for this because It's cheap enough if It's worth to gain tokens/s. If someone can point to me to a guide to install a web client to run the model and wich could I run with my hardware. Or any of these questions. Any help is appreciated.
2023-12-31T16:36:28
https://www.reddit.com/r/LocalLLaMA/comments/18vbfvg/seeking_a_guide_for_total_noob/
parrita710
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18vbfvg
false
null
t3_18vbfvg
/r/LocalLLaMA/comments/18vbfvg/seeking_a_guide_for_total_noob/
false
false
self
6
null
Recommended models for day to day productivity?
7
I’m running LM Studio on a MacBook Pro M1 Max with 64GB of RAM. Currently using Bard as my daily driver for help with mostly work stuff; content for presentations, verbiage for statements of work, etc. I actually prefer Bard’s writing style over Chat GPT since Gemini. Anyway, curious if there’s a local model that would give me similar performance or if I should just stick with Bard.
2023-12-31T15:57:03
https://www.reddit.com/r/LocalLLaMA/comments/18valnr/recommended_models_for_day_to_day_productivity/
CincyTriGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18valnr
false
null
t3_18valnr
/r/LocalLLaMA/comments/18valnr/recommended_models_for_day_to_day_productivity/
false
false
self
7
null
LLM inference in a couple of lines of code
43
2023-12-31T15:25:16
https://i.redd.it/9hswf13pen9c1.jpeg
davidmezzetti
i.redd.it
1970-01-01T00:00:00
0
{}
18v9y5u
false
null
t3_18v9y5u
/r/LocalLLaMA/comments/18v9y5u/llm_inference_in_a_couple_of_lines_of_code/
false
false
https://a.thumbs.redditm…cjmZdrR4mw14.jpg
43
{'enabled': True, 'images': [{'id': 'enFiknyXujWPRTf-4HFDh_FMu210Iq9lBZmZOQPSy9c', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?width=108&crop=smart&auto=webp&s=7362feb03df7056ffd9e1da08226e905b91450ff', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?width=216&crop=smart&auto=webp&s=55843a8d4bc9cf1eae94aa248ae48538ae91af37', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?width=320&crop=smart&auto=webp&s=09ca7cee87c6c0cd309402c779497880c8646745', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?width=640&crop=smart&auto=webp&s=3aa77754693e539d5808eb872ae19bbcf2a52e7f', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?width=960&crop=smart&auto=webp&s=84573da85a218f6ca74f1f4aca8c4f8e2de9ca90', 'width': 960}, {'height': 589, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?width=1080&crop=smart&auto=webp&s=c50de15843de317b1e601e88dbf8b674ee2029e9', 'width': 1080}], 'source': {'height': 596, 'url': 'https://preview.redd.it/9hswf13pen9c1.jpeg?auto=webp&s=5cc38bac71962579fb6946567f5affa23c5f46a1', 'width': 1092}, 'variants': {}}]}
Which GPU should I opt to run 34b models?
1
I want to run 34b Yi Models. On Runpod, would dolphin-2.2-yi-34b run on "2 x A6000" or "1 x A100" GPU ? Can I run it on 1xA6000 ? It's just for budget personal use.
2023-12-31T14:53:18
https://www.reddit.com/r/LocalLLaMA/comments/18v9b4q/which_gpu_should_i_opt_to_run_34b_models/
broodysupertramp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18v9b4q
false
null
t3_18v9b4q
/r/LocalLLaMA/comments/18v9b4q/which_gpu_should_i_opt_to_run_34b_models/
false
false
self
1
null
Is the A6000 GPU slow for the 7B model on Runpod?
7
I am trying Runpod for the first time. \- I am using Bloke's Template. \- Model: TheBloke/dolphin-2.6-mistral-7B-GGUF (Q6) - (5.7 GB model). \- I am using 1 x RTX A6000, 9v CPU - 50GB RAM, spot-secured cloud, 100GB Disk, No volume mounted. I am getting 1 word per second for my query. For 7B models, I get better performance on my 24GB RAM with no GPU Laptop with LM Studio. Why is it slow? GPU utilization is showing 0, while CPU Utilization is 100%. Is there something wrong with my textui config? Screens below. Please help. 🙏🏻 GPU Log : [GPU Log from Runpod](https://preview.redd.it/rwmmx70p5n9c1.png?width=1411&format=png&auto=webp&s=c3223c8cfefc006742a66f4394b85495a8eac84b) Config : [TextGen WebUI Cofig Screen](https://preview.redd.it/umbp3d195n9c1.png?width=1920&format=png&auto=webp&s=990334af07e8a3f01aa06a310a24da776823ba06)
2023-12-31T14:36:33
https://www.reddit.com/r/LocalLLaMA/comments/18v8zvd/is_the_a6000_gpu_slow_for_the_7b_model_on_runpod/
broodysupertramp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18v8zvd
false
null
t3_18v8zvd
/r/LocalLLaMA/comments/18v8zvd/is_the_a6000_gpu_slow_for_the_7b_model_on_runpod/
false
false
https://b.thumbs.redditm…_j2OM68yQTJI.jpg
7
null
"OC is not multiple of cta_N = 64" - Python error running AWQ model
1
I have been trying to run AWQ models in python using transformers... and after installing Cuda 12.1 and AWQ... I get the error "OC is not multiple of cta\_N = 64", which I couldn't find in Google at all... any recommendations on how to fix it? from transformers import AutoModelForCausalLM, AutoTokenizer import torch &#x200B; print(torch.version.cuda) print(torch.\_\_version\_\_) model2load = "TheBloke/mixtral-8x7b-v0.1-AWQ" &#x200B; tokenizer = AutoTokenizer.from\_pretrained(model2load) model = AutoModelForCausalLM.from\_pretrained( model2load, low\_cpu\_mem\_usage=True, device\_map="cuda:0" \#device\_map="cuda:0"#, attn\_implementation="flash\_attention\_2" ) &#x200B; prompt = "Tell me about AI" prompt\_template=f'''{prompt} ''' &#x200B; tokens = tokenizer( prompt\_template, return\_tensors='pt' ).input\_ids.to("cuda:0") &#x200B; &#x200B; generation\_params = { "do\_sample": True, "temperature": 0.7, "top\_p": 0.95, "top\_k": 40, "max\_new\_tokens": 512, "repetition\_penalty": 1.1 } \# Generation without a streamer, which will include the prompt in the output generation\_output = model.generate( tokens, \*\*generation\_params ) &#x200B; \# Get the tokens from the output, decode them, print them token\_output = generation\_output\[0\] text\_output = tokenizer.decode(token\_output) print("model.generate output: ", text\_output)
2023-12-31T14:24:25
https://www.reddit.com/r/LocalLLaMA/comments/18v8s3u/oc_is_not_multiple_of_cta_n_64_python_error/
Ecstatic_Sale1739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18v8s3u
false
null
t3_18v8s3u
/r/LocalLLaMA/comments/18v8s3u/oc_is_not_multiple_of_cta_n_64_python_error/
false
false
self
1
null
how much does it cost for you to use mistral at production level??
5
isnt it too costly, i have seen ppl using mistral so often but i wonder how you guys are bearing the prices. https://preview.redd.it/iu11y3z01n9c1.png?width=1352&format=png&auto=webp&s=b14cfc290106ca6f72725f540bace9bcd2b4c563
2023-12-31T14:09:41
https://www.reddit.com/r/LocalLLaMA/comments/18v8ikz/how_much_does_it_cost_for_you_to_use_mistral_at/
GlitteringAdvisor530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18v8ikz
false
null
t3_18v8ikz
/r/LocalLLaMA/comments/18v8ikz/how_much_does_it_cost_for_you_to_use_mistral_at/
false
false
https://b.thumbs.redditm…hSl1PClqzuEw.jpg
5
null
LLM Guide for Idiots?
43
Hi, I've fallen deep into the rabbit hole and I need help 😅 I have 64GB of RAM, a 4090 and a 7900x3D. But... I am confused :D I tried KoboldAI and the Text-Generator-WebUI. I tried different models but they are either very stupid (almost ignoring my prompt) or the performance is abysmal. like 0.5-1 token/s I know the VRAM is a limiting factor so I tried 30B models max. I tried Mixtral 8x7B AWQ, which is a 4bit quantized version if I understood it correctly, but it spits out... empty prompts with no errors? I am way over my head here. Is there a guide that would introduce me to all the basics I need to know? Like: "this is sequence length and that's what it means/does" etc.? Right now, I am basically trying out whatever a random comment on reddit recommended and am super confused :D
2023-12-31T13:57:57
https://www.reddit.com/r/LocalLLaMA/comments/18v8aqh/llm_guide_for_idiots/
TahPenguin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18v8aqh
false
null
t3_18v8aqh
/r/LocalLLaMA/comments/18v8aqh/llm_guide_for_idiots/
false
false
self
43
null