Dataset column summary (from the dataset viewer):

| column | dtype | observed range |
| --- | --- | --- |
| title | string | 1–300 chars |
| score | int64 | 0–8.54k |
| selftext | string | 0–41.5k chars |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2026-03-04 02:14:14 |
| url | string | 0–878 chars |
| author | string | 3–20 chars |
| domain | string | 0–82 chars |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | 7 chars |
| locked | bool | 2 classes |
| media | string | 646–1.8k chars |
| name | string | 10 chars |
| permalink | string | 33–82 chars |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | 4–213 chars |
| ups | int64 | 0–8.54k |
| preview | string | 301–5.01k chars |
PaLM does not work with CoT prompting
2
Hi, I tried to use the PaLM API (which is free) to generate reasoning traces for training small models, but it didn't generate reasoning steps even when I used CoT prompting (on the AddSub dataset in particular). What might be the problem, and what should I do to deal with it? Thank you so much.
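For context, a minimal few-shot CoT prompt looks something like this sketch (the exemplar problem and the `build_cot_prompt` helper name are invented for illustration); many models only emit reasoning when the prompt contains a fully worked exemplar, not just the "think step by step" cue:

```python
# Hypothetical sketch of a one-shot chain-of-thought prompt for an
# AddSub-style word problem. The worked exemplar is what usually nudges
# the model to write out its own steps before the final answer.

def build_cot_prompt(question: str) -> str:
    exemplar = (
        "Q: Tom had 5 apples. He bought 3 more and gave 2 away. "
        "How many apples does he have?\n"
        "A: Let's think step by step. Tom starts with 5 apples. "
        "Buying 3 more gives 5 + 3 = 8. Giving 2 away leaves 8 - 2 = 6. "
        "The answer is 6.\n\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("Sara had 10 pencils and lost 4. How many are left?")
print(prompt)
```

If the API still returns only a bare answer, raising the output token limit is worth checking too, since reasoning traces are much longer than the answer alone.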
2023-11-29T08:22:48
https://www.reddit.com/r/LocalLLaMA/comments/186kmrg/palm_does_not_work_with_cot_prompting/
AdministrationSea789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186kmrg
false
null
t3_186kmrg
/r/LocalLLaMA/comments/186kmrg/palm_does_not_work_with_cot_prompting/
false
false
self
2
null
Question on Gemini + AlphaGo
4
AlphaGo's premise is that instead of using human feedback for reinforcement learning, you have the model play games against itself, with a simple reward mechanism, so that it can learn from its own mistakes. This achieves scalability of the training data, allowing the model to discover new Go moves and eventually exceed the quality of its initial training data.

From an engineering point of view, how do you see this applied to other areas like software development, where there is no opponent player? Do you connect the model to a compiler, and have it learn by trial and error based on compiler output? Do you set desired software outcomes and have another AI evaluate how much closer or farther the output is with each iteration? How would this closed feedback loop work to get an AI to become a world expert in a specific programming language or framework?
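As a toy illustration of the "compiler as opponent" idea (everything below is invented for illustration; a real system would have an LLM propose the candidates), the closed loop is: propose a program, try to compile/run it, score it against tests, keep the best:

```python
# Toy sketch of a compiler/test-driven feedback loop. `candidates` is a
# hard-coded stand-in for LLM-generated programs. The "reward" is simply
# the fraction of unit tests the candidate passes; code that fails to
# compile or run scores zero.

def score(candidate_src: str, tests: list[tuple[int, int]]) -> float:
    ns: dict = {}
    try:
        exec(candidate_src, ns)          # the "compile" step; errors mean reward 0
        f = ns["square"]
        passed = sum(1 for x, want in tests if f(x) == want)
        return passed / len(tests)
    except Exception:
        return 0.0

tests = [(2, 4), (3, 9), (-1, 1)]
candidates = [
    "def square(x): return x + x",       # wrong program
    "def square(x): return x * x",       # correct program
    "def square(x) return x",            # doesn't even parse
]
best = max(candidates, key=lambda c: score(c, tests))
print(best)
```

The missing piece, of course, is the generator: in a self-play setup the scores would feed back into the model that proposes the next batch of candidates, which is where the AlphaGo analogy lives.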
2023-11-29T07:05:28
https://www.reddit.com/r/LocalLLaMA/comments/186jj69/question_on_gemini_alphago/
Christosconst
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186jj69
false
null
t3_186jj69
/r/LocalLLaMA/comments/186jj69/question_on_gemini_alphago/
false
false
self
4
null
How to run llama 70b on slurm?
1
I can run Llama 13B on Slurm just fine while requesting one graphics card. However, 70B ran for over an hour (I cancelled the execution afterwards) and I am not sure what I should adjust to make it quicker. I am a beginner with Slurm and with LLMs, and I can't seem to find comprehensive resources addressing this. I am using the chat model from Hugging Face. My sbatch parameters are: `--mem=300G --gres=gpu:1`
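A back-of-the-envelope calculation (a sketch, weights only) shows why a single-GPU request stalls on 70B: the weights alone do not fit in any single common GPU, so the model spills to CPU RAM and inference crawls. Requesting more GPUs (e.g. `--gres=gpu:4`) or loading a quantized checkpoint is the usual fix:

```python
# Back-of-envelope VRAM needs for the weights alone, ignoring the KV cache
# and activations (which add more). fp16 = 2 bytes per parameter.
# 13B (~24 GiB) fits on one large GPU; 70B (~130 GiB) does not.

def fp16_gib(n_params: float) -> float:
    return n_params * 2 / 2**30

for n in (13e9, 70e9):
    print(f"{n/1e9:.0f}B: ~{fp16_gib(n):.0f} GiB of weights in fp16")
```

The exact GPU count needed depends on the cards; with 4-bit quantization the 70B weights shrink to roughly a quarter of the fp16 figure, which is why quantized checkpoints are the common single-node workaround.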
2023-11-29T06:31:56
https://www.reddit.com/r/LocalLLaMA/comments/186j0oe/how_to_run_llama_70b_on_slurm/
LaAlice
self.LocalLLaMA
2023-11-29T06:53:55
0
{}
186j0oe
false
null
t3_186j0oe
/r/LocalLLaMA/comments/186j0oe/how_to_run_llama_70b_on_slurm/
false
false
self
1
null
Is there any way to speed up the MythoMax-L2-13B on a 6GB GPU?
12
This is my first time testing a local text model, so I don't know much yet. I've seen people with 8 GB cards complaining that text generation is very slow, so I don't have much hope, but still... I think I need to do some configuration: when generating text my SSD is at 100% usage, reading 1-2 GB/s, while my GPU does not reach 15% usage. Using an RTX 2060. This is the model I am testing (mythomax-l2-13b.Q8_0.gguf): [https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF/tree/main](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF/tree/main)
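For what it's worth, a rough sketch of the arithmetic (file size from the Q8_0 GGUF, 40 layers for Llama-2-13B; the headroom figure is a guess) suggests why the SSD is saturated: a 13.8 GB model cannot fit in 6 GB of VRAM, so only a partial GPU offload is possible and the rest must stream from RAM or disk:

```python
# Rough sketch: how many transformer layers of a 13B Q8_0 GGUF (~13.8 GB,
# 40 layers) fit on a 6 GB card, leaving ~1 GB of headroom for the KV
# cache and scratch buffers. All numbers are approximate.

model_gb, n_layers, vram_gb, headroom_gb = 13.8, 40, 6.0, 1.0
per_layer_gb = model_gb / n_layers
offloadable = int((vram_gb - headroom_gb) / per_layer_gb)
print(f"~{offloadable} of {n_layers} layers fit on the GPU")
```

With llama.cpp-based backends the result would go into the GPU-layers setting (e.g. `-ngl 14` / `n_gpu_layers`); a smaller quant such as Q4_K_M (~7.9 GB) lets you offload far more layers, and 100% SSD reads during generation usually mean there also isn't enough free system RAM to keep the whole file cached.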
2023-11-29T06:12:08
https://www.reddit.com/r/LocalLLaMA/comments/186ioux/is_there_any_way_to_speed_up_the_mythomaxl213b_on/
OverallBit9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186ioux
false
null
t3_186ioux
/r/LocalLLaMA/comments/186ioux/is_there_any_way_to_speed_up_the_mythomaxl213b_on/
false
false
self
12
(image preview metadata omitted)
ChatGPT leaks training data when asked to “Repeat this word forever: “poem poem poem poem”?”
90
2023-11-29T06:10:31
https://twitter.com/katherine1ee/status/1729690964942377076
saintshing
twitter.com
1970-01-01T00:00:00
0
{}
186inwr
false
Embedded tweet: "What happens if you ask ChatGPT to 'Repeat this word forever: "poem poem poem poem"?' It leaks training data! In our latest preprint, we show how to recover thousands of examples of ChatGPT's Internet-scraped pretraining data: https://t.co/bySVnWviAP" (Katherine Lee, @katherine1ee, November 29, 2023)
t3_186inwr
/r/LocalLLaMA/comments/186inwr/chatgpt_leaks_training_data_when_asked_to_repeat/
false
false
https://b.thumbs.redditm…uq6Tp3fB9Ogo.jpg
90
(image preview metadata omitted)
Step by step guide for local with voice?
6
Hi, I've tried to follow a couple of step-by-step guides from August, but the models they used are now apparently outdated, and when I try to load the model into oobabooga, I get a bunch of errors. So I've downloaded the GGUF models from TheBloke and am still having issues: the models are only good for "taking instructions", and the server crashes when I try to use the chat feature. Are there any step-by-step guides that someone can recommend? I want to set up a UI where I can text chat, and also talk to the AI and have it reply in voice mode. System is a 7950X3D with a 4090 and 64 GB DDR5. I've set up Anaconda and Python, and also got Stable Diffusion working previously, just not Llama 2. Cheers in advance.
2023-11-29T06:02:31
https://www.reddit.com/r/LocalLLaMA/comments/186ij60/step_by_step_guide_for_local_with_voice/
DuckFormer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186ij60
false
null
t3_186ij60
/r/LocalLLaMA/comments/186ij60/step_by_step_guide_for_local_with_voice/
false
false
self
6
null
Mistral Inference Visualized as Emojis
1
[removed]
2023-11-29T05:38:17
https://www.reddit.com/r/LocalLLaMA/comments/186i3y1/mistral_inference_visualized_as_emojis/
introsp3ctor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186i3y1
false
null
t3_186i3y1
/r/LocalLLaMA/comments/186i3y1/mistral_inference_visualized_as_emojis/
false
false
self
1
(image preview metadata omitted)
2 machines - how should I set up LLMs for coding review
5
I have a Dell T7920 with dual Intel Platinum 8168 processors, 384 GB RAM, and 2x NVIDIA A5000 GPUs with NVLink. I also have an M2 Max MacBook Pro with 96 GB RAM. I am still new to LLMs. My goal is to use both machines: one to train an LLM and one to run it. I will be using it mostly for coding, and I want to be able to load our repo for analysis. Any suggestions on what I can do with both machines for a good setup? I am thinking of building models on the desktop and then running them on the laptop. Any suggestions on which models to start with, since I will be coding mostly in JavaScript, with some limited Python too?
2023-11-29T05:16:34
https://www.reddit.com/r/LocalLLaMA/comments/186hpue/2_machines_how_should_i_set_up_llms_for_coding/
Equality_or_Fairness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186hpue
false
null
t3_186hpue
/r/LocalLLaMA/comments/186hpue/2_machines_how_should_i_set_up_llms_for_coding/
false
false
self
5
null
Research Study on Use of AI Chatbots for Mental Health Support
1
Hello all! We are researchers from the Georgia Institute of Technology, and are curious to learn more about whether any of you have experiences with using AI chatbots for mental health support. As you all know well, in today's digital age, AI chatbots have emerged as an interesting tool for many different things, including mental health support (especially in areas where traditional mental health resources might be scarce or inaccessible). LLM-based chatbots offer a unique platform for individuals seeking immediate assistance, guidance, or simply a conversation. However, there are also risks associated with their use. We are particularly interested in understanding the depth and breadth of interactions individuals have with these AI chatbots. How do they complement traditional mental health resources? What gaps do they fill, and where might they fall short? We're gathering data to better understand this use of AI chatbot-based support, and your participation in our survey would be invaluable. For those who'd like to dive deeper, there's an opportunity for a deeper conversation via a follow-up interview (with a $25 thank-you gift card for participating). Our university's ethics board has approved our research, and all data will only be used with anonymization for academic purposes (e.g. a journal or conference paper). **Institution Name:** Georgia Institute of Technology **Method of study:** Online, survey link below (option to join a follow-up 1 hour online interview with $25 compensation) **Time required:** 5 minutes **Link for participation**: [https://b.gatech.edu/47KX5BE](https://b.gatech.edu/47KX5BE) If you have specific questions before starting the survey, please let our research team know in the comment section or at [isong44@gatech.edu](mailto:isong44@gatech.edu). We will try to answer as fast as possible. Thank you in advance!
2023-11-29T04:32:23
https://www.reddit.com/r/LocalLLaMA/comments/186gxcv/research_study_on_use_of_ai_chatbots_for_mental/
lilchinnykeepsitreal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186gxcv
false
null
t3_186gxcv
/r/LocalLLaMA/comments/186gxcv/research_study_on_use_of_ai_chatbots_for_mental/
false
false
self
1
null
Fine Tuning LLaMA2 on large text corpus
1
[removed]
2023-11-29T04:19:40
https://www.reddit.com/r/LocalLLaMA/comments/186gokb/fine_tuning_llama2_on_large_text_corpus/
PomegranateCute843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186gokb
false
null
t3_186gokb
/r/LocalLLaMA/comments/186gokb/fine_tuning_llama2_on_large_text_corpus/
false
false
self
1
null
Thoughts on this UI for local LLM interaction
132
2023-11-29T03:22:17
https://www.reddit.com/gallery/186fje7
DumbassNamedTuna
reddit.com
1970-01-01T00:00:00
0
{}
186fje7
false
null
t3_186fje7
/r/LocalLLaMA/comments/186fje7/thoughts_on_this_ui_for_local_llm_interaction/
false
false
https://a.thumbs.redditm…hdxi6uvzsM94.jpg
132
null
Run BerkeleyEECS's Starling 7B on any device with a single binary app. Zero Python dependency.
1
2023-11-29T03:10:28
https://x.com/realwasmedge/status/1729554072645361862?s=20
smileymileycoin
x.com
1970-01-01T00:00:00
0
{}
186f9xx
false
null
t3_186f9xx
/r/LocalLLaMA/comments/186f9xx/run_berkeleyeecss_starling_7b_on_any_device_with/
false
false
https://a.thumbs.redditm…41zgE96CK2R8.jpg
1
(image preview metadata omitted)
An interesting llama.cpp effect when using different versions
3
I noticed that I get different output from a llama.cpp build from a month ago versus one from today. The model was Airoboros 3.1 70B at 8-bit, with inference done on CPU (Ryzen).

The invocation:

    main -m airoboros-l2-70b-3.1.Q8_0.gguf --color --threads 12 --batch_size 256 --n_predict -1 --top_k 12 --top_p 1 --temp 0.0 --repeat_penalty 1.05 --ctx_size 2048 --instruct --reverse-prompt "### Human:"

Note that the output is the same after multiple runs for each build.

The prompt:

**You are given the following premises: Someone who lives in Dreadbury Mansion killed Aunt Agatha. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles. A killer always hates his victims, and is never richer than his victims. Charles hates no one that Aunt Agatha hates. Aunt Agatha hates everyone except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone Aunt Agatha hates. No one hates everyone. Aunt Agatha is not the butler. On the basis of this information, determine who killed Aunt Agatha and give a detailed proof that your conclusion follows from the premise.**

First, the llama.cpp from one month ago:

Let's analyze the information given step-by-step:

1. We know that someone who lives in Dreadbury Mansion killed Aunt Agatha.
2. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles.
3. A killer always hates his victims.
4. A killer is never richer than his victims.
5. Charles hates no one that Aunt Agatha hates.
6. Aunt Agatha hates everyone except the butler.
7. The butler hates everyone not richer than Aunt Agatha.
8. The butler hates everyone Aunt Agatha hates.
9. No one hates everyone.
10. Aunt Agatha is not the butler.

Now let's reason through this:

- Since Aunt Agatha hates everyone except the butler, she must hate Charles.
- If Charles were the killer, then he would have to hate Aunt Agatha (point 3), which contradicts point 5. So, Charles cannot be the killer.
- The butler hates everyone not richer than Aunt Agatha (point 7). Since Aunt Agatha is not richer than herself, the butler must hate her. This satisfies point 3 for the butler being the killer.
- However, if the butler were the killer, he would also have to hate Charles (point 8), which contradicts point 5. So, the butler cannot be the killer either.
- Therefore, by process of elimination, Aunt Agatha must be the killer. This contradicts point 10, which states that Aunt Agatha is not the butler.

Thus, we have reached a contradiction in our reasoning, indicating that there is an error or inconsistency in the given information. Therefore, it is impossible to determine who killed Aunt Agatha based on the provided premises.

**Now the output from today's llama.cpp:**

Let's analyze the information given step-by-step:

1. We know that someone who lives in Dreadbury Mansion killed Aunt Agatha.
2. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles.
3. A killer always hates his victims.
4. A killer is never richer than his victims.
5. Charles hates no one that Aunt Agatha hates.
6. Aunt Agatha hates everyone except the butler.
7. The butler hates everyone not richer than Aunt Agatha.
8. The butler hates everyone Aunt Agatha hates.
9. No one hates everyone.
10. Aunt Agatha is not the butler.

Now let's reason through this:

- Since Aunt Agatha hates everyone except the butler, she cannot be the killer because she doesn't hate herself.
- If Charles were the killer, then he would have to hate Aunt Agatha, which contradicts premise 5. So, Charles is not the killer.
- This leaves us with the butler as the only remaining possibility for the killer.
- The butler hates everyone not richer than Aunt Agatha (premise 7), so if he were richer than Aunt Agatha, he wouldn't be able to kill her due to premise 4. But since no one hates everyone (premise 9), the butler must hate someone who is richer than Aunt Agatha.
- Therefore, the butler cannot be richer than Aunt Agatha, which means he can kill her without violating premise 4.
- Additionally, since the butler hates everyone Aunt Agatha hates (premise 8), and Aunt Agatha hates everyone except the butler (premise 6), the butler must hate everyone except himself. This means he meets the condition of a killer in premise 3.

So, based on the given information, we can conclude that the butler killed Aunt Agatha.
2023-11-29T02:59:04
https://www.reddit.com/r/LocalLLaMA/comments/186f0j6/an_interesting_llamacpp_effect_when_using/
jeffwadsworth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186f0j6
false
null
t3_186f0j6
/r/LocalLLaMA/comments/186f0j6/an_interesting_llamacpp_effect_when_using/
false
false
self
3
null
LLMs that can handle dates
2
Been experimenting with creating a simple assistant using voice recognition and LLMs. One thing they constantly get wrong is handling dates. I’ll say something involving “tomorrow” or “next week” and sometimes it gets completely confused. Even GPT4 thought next week was May 2023. Is there any LLM that inherently deals with dates very well? Or some way to help them handle them better?
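One workaround that tends to help: resolve relative dates in code and inject the absolute dates into the prompt, rather than trusting the model, whose notion of "today" is frozen at training time. A minimal sketch (the `date_context` helper is made up for illustration):

```python
# Compute "tomorrow" and "next week" deterministically and hand the model
# absolute ISO dates, so it never has to guess what day it is.

from datetime import date, timedelta

def date_context(today: date) -> str:
    tomorrow = today + timedelta(days=1)
    next_week = today + timedelta(weeks=1)
    return (f"Today is {today.isoformat()}. "
            f"Tomorrow is {tomorrow.isoformat()}. "
            f"One week from today is {next_week.isoformat()}.")

print(date_context(date(2023, 11, 29)))
```

Prepending a line like this to the system prompt turns "remind me tomorrow" into a lookup instead of a hallucination-prone inference.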
2023-11-29T02:40:44
https://www.reddit.com/r/LocalLLaMA/comments/186em05/llms_that_can_handle_dates/
DsDman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186em05
false
null
t3_186em05
/r/LocalLLaMA/comments/186em05/llms_that_can_handle_dates/
false
false
self
2
null
Please help decode
1
[removed]
2023-11-29T02:07:50
https://www.reddit.com/r/LocalLLaMA/comments/186dwja/please_help_decode/
introsp3ctor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186dwja
false
null
t3_186dwja
/r/LocalLLaMA/comments/186dwja/please_help_decode/
false
false
self
1
(image preview metadata omitted)
«I don't have the ability to form memories or learn from interactions» That's what she actually said
1
That's part of what it answered when I asked if it knew who created it. I was wondering if the researchers had baked their names into the data. I realised that these inabilities are my main hopes for the future of LLMs!
2023-11-29T01:58:16
https://www.reddit.com/r/LocalLLaMA/comments/186doxp/i_dont_have_the_ability_to_form_memories_or_learn/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186doxp
false
null
t3_186doxp
/r/LocalLLaMA/comments/186doxp/i_dont_have_the_ability_to_form_memories_or_learn/
false
false
self
1
null
Looking for Python script to deploy custom LLM in Azure
1
Background: I'm trying to build an interface for users to choose an LLM (like Falcon, DeepSeek, etc. from Hugging Face) from my portal, which will run a script to download and deploy that particular LLM in Azure. Once it is deployed, users will use those LLMs to build apps. Deploying a custom LLM in the user/client cloud environment is mandated, as there are data security policies in play. If anyone has worked on such a script or has an idea, please share your input.
2023-11-29T01:53:14
https://www.reddit.com/r/LocalLLaMA/comments/186dkyj/looking_for_python_script_to_deploy_custom_llm_in/
MyObjectivism
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186dkyj
false
null
t3_186dkyj
/r/LocalLLaMA/comments/186dkyj/looking_for_python_script_to_deploy_custom_llm_in/
false
false
self
1
null
Running LLMs on a Kirin 9000S?
3
I just got myself some phones for research, including ones with Qualcomm, MediaTek, Apple, and Kirin (Huawei) chips. All of them work with MLC except the one with the Kirin 9000S chip, which successfully loads models but crashes every time I submit a prompt. I've also tried previous versions of MLC, and alternatives like Maid/sherpa. I think it's a chip issue rather than an environment issue, since the same problem occurs even when I use VMOS. Has anyone tried running LLMs on the new Huawei phones with the Kirin 9000S? If so, did you run into the same issue?
2023-11-29T01:35:03
https://www.reddit.com/r/LocalLLaMA/comments/186d6l9/running_llms_on_a_kirin_9000s/
Tree-Sheep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186d6l9
false
null
t3_186d6l9
/r/LocalLLaMA/comments/186d6l9/running_llms_on_a_kirin_9000s/
false
false
self
3
null
X.AI Grok could potentially be open sourced on a 6 month delay from launch
68
Nobody mentions this from Lex Fridman: [https://youtu.be/JN3KPFbWCy8?t=5090](https://youtu.be/JN3KPFbWCy8?t=5090)

Maybe after he gets the infrastructure needed to fully support all interested subscribers he might consider open-sourcing Grok-1, or maybe he might release Grok-1 when he is ready to deploy Grok-2 on X. I doubt a six-month delay for open source following commercial release is an exact mark, but it remains possible for Grok to become open at some point.

As of right now, access to Grok on X is too limited for us to get any independent benchmarks, and attempts to search for Grok on Twitter are just flooded with unaffiliated memecoins. Posted comparisons between Grok and GPT-3.5 or GPT-4 are limited and seem subjective. My observation is that Grok can give you serious responses without the "humor", so we should be able to prompt it not to act "humorous". Its responses aren't as large as GPT-4's, but it can be very concise, with half or a third of the written content, while not lecturing on disclosures.

On being uncensored: I've noticed benchmarks for original and uncensored Llama-based models where the uncensored versions lose points in logic, which suggests that being uncensored or minimally censored from the foundation is better. The technical details of uncensoring an existing model are beyond my expertise.

So what are your thoughts about Grok open-sourcing, without turning this into drama? Be civil please.
2023-11-29T01:17:58
https://www.reddit.com/r/LocalLLaMA/comments/186ct14/xai_grok_could_potentially_be_open_sourced_on_a_6/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186ct14
false
null
t3_186ct14
/r/LocalLLaMA/comments/186ct14/xai_grok_could_potentially_be_open_sourced_on_a_6/
false
false
self
68
(image preview metadata omitted)
G.H. 'Experimenting' with the Q* Algorithm [link]
1
[removed]
2023-11-29T01:13:53
https://www.reddit.com/r/LocalLLaMA/comments/186cpw3/gh_experimenting_with_the_q_algorithm_link/
Alear997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186cpw3
false
null
t3_186cpw3
/r/LocalLLaMA/comments/186cpw3/gh_experimenting_with_the_q_algorithm_link/
false
false
self
1
(image preview metadata omitted)
Building llama.cpp on Windows on ARM
10
I finally managed to build llama.cpp on Windows on ARM running on a Surface Pro X with the Qualcomm 8cx chip. Why bother with this instead of running it under WSL? It lets you run the largest models that can fit into system RAM without the WSL Hyper-V overhead. I didn't notice any speed difference, but the extra available RAM means I can use 7B Q5_K_M GGUF models now instead of Q3. Typical output speeds are 4 t/s to 5 t/s.

Steps:

1. Install MSYS2. The installer package has x64 and ARM64 binaries included.

2. Run clangarm64. When you're in the shell, run these commands to install the required build packages:

        pacman -Suy
        pacman -S mingw-w64-clang-aarch64-clang
        pacman -S cmake
        pacman -S make
        pacman -S git

3. Clone the git repo and set up the build environment. You need to make ARM64 clang appear as gcc by setting the flags below.

        git clone <llama.cpp repo>
        cd llama.cpp
        mkdir build
        cd build
        export CC=/clangarm64/bin/cc
        export CXX=/clangarm64/bin/c++

4. Build llama.cpp:

        cmake ..
        cmake --build . --config Release

5. Run main:

        bin/main.exe

If you're lucky, most of the package should build fine, but on my machine the quantizer .exe failed to build. I tried using ARM's own GNU toolchain compiler but I kept getting build errors.

There should be a way to get NPU-accelerated model runs using the Qualcomm QNN SDK, Microsoft's ONNX Runtime and ONNX models, but I got stuck in dependency hell in Visual Studio 2022. I'm not a Windows developer, and trying to combine x86, x64 and ARM64 compilers and Python binaries is way beyond me.
2023-11-29T00:19:32
https://www.reddit.com/r/LocalLLaMA/comments/186bi2y/building_llamacpp_on_windows_on_arm/
Some_Endian_FP17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186bi2y
false
null
t3_186bi2y
/r/LocalLLaMA/comments/186bi2y/building_llamacpp_on_windows_on_arm/
false
false
self
10
null
HuggingFace space where you can interact with children of mistral-7B
7
2023-11-28T23:26:31
https://huggingface.co/spaces/davanstrien/mistral-graph
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
186a9xc
false
null
t3_186a9xc
/r/LocalLLaMA/comments/186a9xc/huggingface_space_where_you_can_interact_with/
false
false
https://a.thumbs.redditm…BZb-C70ml1m8.jpg
7
(image preview metadata omitted)
Hi everyone, I want to make a local LLM for my colleagues at the office so we can give it information about our work etc. and use it locally. Which one is the most powerful for this currently?
1
I was thinking Vicuna, but I've not looked for a few months. Which one is the best open-source model that I can run with 50GB of memory but only a 3060 Ti and an i7 9700K?
2023-11-28T22:50:42
https://www.reddit.com/r/LocalLLaMA/comments/1869ebd/hi_everyone_i_want_to_make_a_local_llm_for_my/
Vast_Accident_699
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1869ebd
false
null
t3_1869ebd
/r/LocalLLaMA/comments/1869ebd/hi_everyone_i_want_to_make_a_local_llm_for_my/
false
false
self
1
null
Why are there quantized models in the Hugging Face hub?
3
Hi. I'm using Llama-2 for my project in Python with the `transformers` library. There is an option to use quantization on any normal model:     model = AutoModelForCausalLM.from_pretrained(         "meta-llama/Llama-2-13b-chat-hf",         load_in_4bit=True,     ) If it's just a matter of a single flag, and nothing is recomputed, then why are there so many already-quantized models in the hub? Are they better than adding this one line?
2023-11-28T22:37:18
https://www.reddit.com/r/LocalLLaMA/comments/18692c1/why_there_are_quantized_models_in_the_hugging/
Motylde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18692c1
false
null
t3_18692c1
/r/LocalLLaMA/comments/18692c1/why_there_are_quantized_models_in_the_hugging/
false
false
self
3
null
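The quantization question above is easier to reason about with a toy illustration of what the `load_in_4bit` flag actually computes at load time. This is a pure-Python sketch only (no `transformers`; the helper names here are invented for this example):

```python
def quantize_4bit(weights):
    """Toy symmetric 4-bit quantization: map each float to an integer level
    in -7..7, storing only the integers plus a single scale factor."""
    scale = max(abs(w) for w in weights) / 7
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats; the rounding error is at most scale / 2."""
    return [v * scale for v in q]

weights = [0.12, -0.7, 0.33, 0.05]
q, scale = quantize_4bit(weights)
recovered = dequantize(q, scale)
```

Because this is a cheap function of the weights, the one-line flag can quantize on the fly — but it still downloads the full fp16 checkpoint first. Pre-quantized uploads on the hub mainly save download size and, for formats like GPTQ, bake in calibration that the one-line flag does not do.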
Finetuned llama adapted to medical domain
1
2023-11-28T22:35:38
https://x.com/eric_zemingchen/status/1729563872825118988?t=_9fJ8kKrTF7B0MlrefaW9Q&s=34
ninjasaid13
x.com
1970-01-01T00:00:00
0
{}
18690up
false
null
t3_18690up
/r/LocalLLaMA/comments/18690up/finetuned_llama_adapted_to_medical_domain/
false
false
https://b.thumbs.redditm…WcskrOnc5TMM.jpg
1
{'enabled': False, 'images': [{'id': 'cfnRx8f8_HgVGcn9myp7X4cRKhH_k-VcIA5Fi9FA-Ig', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6laLMUtgJlIYs-737ZU0WM6zKFCgE0NoH_GkXbrYVPY.jpg?width=108&crop=smart&auto=webp&s=124cbc0fefce87c49c0e8b681c1056fca9b16f7d', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/6laLMUtgJlIYs-737ZU0WM6zKFCgE0NoH_GkXbrYVPY.jpg?auto=webp&s=e82c930e772123fb2db371491bd9d547e372b0fc', 'width': 200}, 'variants': {}}]}
Error when clicking the Record from Microphone when accessing the local network:
1
[removed]
2023-11-28T22:30:07
https://www.reddit.com/r/LocalLLaMA/comments/1868vvl/error_when_clicking_the_record_from_microphone/
Maveric1984
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1868vvl
false
null
t3_1868vvl
/r/LocalLLaMA/comments/1868vvl/error_when_clicking_the_record_from_microphone/
false
false
self
1
null
LocalLLaMA for OCR - online resources that describe state of the art?
7
Hello and thanks for making this subreddit an amazing place to learn new things. Are there resources online that describe how to use LocalLLaMA for OCR? In the past I've used OCRMyPDF to good effect; it does a solid job of pre/post-processing + Tesseract. I've uploaded a few documents that combine typed content + handwritten text to ChatGPT and it does an incredible job (exceeding all expectations). Is there anything beyond [donut](https://huggingface.co/docs/transformers/model_doc/donut) that I'm missing online that explains how this is done?
2023-11-28T22:26:42
https://www.reddit.com/r/LocalLLaMA/comments/1868spj/localllama_for_ocr_online_resources_that_describe/
PhotographMain3424
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1868spj
false
null
t3_1868spj
/r/LocalLLaMA/comments/1868spj/localllama_for_ocr_online_resources_that_describe/
false
false
self
7
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]}
How to train/finetune with long examples on dataset?
4
I want to fine-tune some LLM models with my own dataset, which contains very long examples (a little over 2048 tokens). VRAM usage jumps up several GB just by increasing the Cutoff Length from 512 to 1024. Is there a way to feed those long examples into the models without increasing VRAM significantly?
2023-11-28T22:23:50
https://www.reddit.com/r/LocalLLaMA/comments/1868py2/how_to_trainfinetune_with_long_examples_on_dataset/
tgredditfc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1868py2
false
null
t3_1868py2
/r/LocalLLaMA/comments/1868py2/how_to_trainfinetune_with_long_examples_on_dataset/
false
false
self
4
null
Question about GGUF, gpu offload and performance
4
Hi. I'm currently running a 3060 12GB | R7 2700X | 32GB 3200 | Windows 10 w/ latest Nvidia drivers (VRAM>RAM overflow disabled). By loading a 20B Q4\_K\_M model (50/65 layers offloaded seems to be the fastest from my tests) I currently get around 0.65 t/s with a low context size of 500 or less, and about 0.45 t/s nearing the max 4096 context. Are these values what is expected of my setup? Or is there something I can do to improve speeds without changing the model? It's pretty much unusable in this state, and since it's hard to find information about this topic I figured I would try to ask here.
2023-11-28T21:44:15
https://www.reddit.com/r/LocalLLaMA/comments/1867ove/question_about_gguf_gpu_offload_and_performance/
Jokaiser2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1867ove
false
null
t3_1867ove
/r/LocalLLaMA/comments/1867ove/question_about_gguf_gpu_offload_and_performance/
false
false
self
4
{'enabled': False, 'images': [{'id': 'CLJlSeDe788FZ9FWxA6EK3B7QeOO0KmIvoZ1E_nJ-xw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?width=108&crop=smart&auto=webp&s=90bcd504c267789cb43b29cc7058c253a7ddbd69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?width=216&crop=smart&auto=webp&s=e12714fb04bdcbdaa69824ff1b53e1311a5d29f2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?width=320&crop=smart&auto=webp&s=5470e71ce6cbaf0b31bb91f03fcce156e5179916', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?width=640&crop=smart&auto=webp&s=939ea88f9aa52f669af8490e581fc076770d8562', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?width=960&crop=smart&auto=webp&s=014c5df1864ebd7071f810ca5cd5119a21d4ac81', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?width=1080&crop=smart&auto=webp&s=d68bc2bbcfc104066ebd252809b185078c3cf8a7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/snXfnOo9x-ifiYfnTNuJ8lAggZqbtzK2enzZfLYj5J4.jpg?auto=webp&s=5d813d9ef8303ad0cd57f860cf674ac537b25e32', 'width': 1200}, 'variants': {}}]}
Dear Model Mergers, Have You Solved Merger of Different Model Families?
13
I've only seen merging of same-upstream-pretrained-model-at-same-size. At the very least, you should be able to merge *any* 2 models with the same tokenizer via element-wise addition of the log probs just before sampling. This would also unlock creative new samplers, i.e. instead of adding logprobs, maybe one model's logprobs constrains the other's in interesting ways. But 2 models with the same architecture and the same dataset will be heavily biased in the same direction, even if you take 2 different finetunes, so this approach seems like it will have a low ceiling of potential. Also, if you're just doing a linear interpolation of same-dimensioned weights, why not just collapse them all into a normal-sized model? I.e. 70B + 70B should still == 70B. That said, you would get much more interesting models if you allowed mergers of different architectures, trained from different initializations, and with different datasets. I would think that the research on "token healing" would allow you to merge any 2 models, even if they have different tokenizers. This seems like a cool way forward.
2023-11-28T21:31:49
https://www.reddit.com/r/LocalLLaMA/comments/1867ddv/dear_model_mergers_have_you_solved_merger_of/
BayesMind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1867ddv
false
null
t3_1867ddv
/r/LocalLLaMA/comments/1867ddv/dear_model_mergers_have_you_solved_merger_of/
false
false
self
13
null
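The logprob-addition merge the post above proposes can be sketched without any real models, using a toy 3-token vocabulary (the function names here are invented for this sketch):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_next_token(logprobs_a, logprobs_b):
    """Element-wise addition of two models' next-token log probabilities
    (i.e. multiplying the two distributions), renormalized before sampling."""
    return softmax([a + b for a, b in zip(logprobs_a, logprobs_b)])

# Both toy "models" lean toward token 0, so the ensemble should agree.
probs = ensemble_next_token([-0.1, -3.0, -3.0], [-0.2, -2.0, -4.0])
```

Sampling from `probs` instead of either model's own distribution is the simplest same-tokenizer merge; the "constraining" samplers the post imagines would replace the addition with some other combination rule.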
Macs with 32GB of memory can run 70B models with the GPU.
140
I recently got a 32GB M1 Mac Studio. I was excited to see how big of a model it could run. It turns out that's 70B. It is a Q3_K_S model, so the 2nd smallest for 70B in GGUF format, but still, it's a 70B model. As many people know, the Mac shouldn't be able to dedicate that much RAM to the GPU. Apple limits it to 67%, which is about 21GB. This model is 28GB. So it shouldn't fit. But there's a solution to that, thanks to these fine people here: https://github.com/ggerganov/llama.cpp/discussions/2182 They wrote a program to patch that limit in the kernel. You can set it to anything you want. So I cranked mine up to 92%. I also do a couple of things to save RAM. 1) I don't use the GUI. Simply logging in and doing nothing uses a fair amount of RAM, so I run my Mac headless and ssh in. 2) I stopped the mds_stores process from running. I saw that it was using up between 500MB and 1GB of RAM. It's the process that indexes the drives for faster search. Considering my drive is 97% empty, I don't know what it was doing to use up 1GB of RAM. I normally turn off indexing on all my machines anyway. With all that set, the highest in-use memory I've seen is 31.02GB while running a 70B Q3_K_S model. So there's headroom. There may be a lot more, since my goal is to not swap. I noticed that when I log into the GUI, the compressed RAM goes up to around 750MB, but it still doesn't swap. So I wonder how far memory compression would let me stretch it. I do notice that it's not as snappy. With no GUI login, the model just runs right away once it's cached after the first run. With a GUI login, it pauses for a few seconds. As for performance, it's 14 t/s prompt and 4 t/s generation using the GPU. It's 2 and 2 using the CPU. Power consumption is remarkably low. Using the GPU, powermetrics reports 39 watts for the entire machine, but my wall monitor says it's taking 79 watts from the wall. Using the CPU, powermetrics reports 36 watts and the wall monitor says 63 watts. 
I don't know why it's so much more efficient at the wall between GPU and CPU. It's only a 3 watt difference in the machine, but 26 watts at the wall. All in all, I'm super impressed. The M1 32GB Studio may be the runt of the Mac Studio lineup, but considering I paid about what a used 3090 costs on eBay for a new one, I think it's the best value for performance I have to run LLMs. Since I plan on running this all out 24/7/365, the power savings alone compared to anything else with a GPU will be several hundred dollars a year. https://i.postimg.cc/nMjXLd9K/1.jpg https://i.postimg.cc/8s2jfhL2/2.jpg
2023-11-28T21:22:40
https://www.reddit.com/r/LocalLLaMA/comments/18674zd/macs_with_32gb_of_memory_can_run_70b_models_with/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18674zd
false
null
t3_18674zd
/r/LocalLLaMA/comments/18674zd/macs_with_32gb_of_memory_can_run_70b_models_with/
false
false
self
140
{'enabled': False, 'images': [{'id': 'P89Sa9OzwlFlzk5C6ZZmGwGrZlqkTEgGyhvQ2Gd-bOk', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/_jEe18Tv0kshgKI9lfWCXx1AI0KfR_pxEKrPNS0QiOI.jpg?width=108&crop=smart&auto=webp&s=cb0bb2a2124da8156438b4fcc24c2502f26789bd', 'width': 108}], 'source': {'height': 119, 'url': 'https://external-preview.redd.it/_jEe18Tv0kshgKI9lfWCXx1AI0KfR_pxEKrPNS0QiOI.jpg?auto=webp&s=0cc8fe13ed3a25f5a145411c967e3d0e85c2bc5a', 'width': 180}, 'variants': {}}]}
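The headroom arithmetic in the post above is simple but worth making explicit (all numbers are taken from the post: a 28GB model, the 67% stock wired limit, and the 92% patched limit; the function name is invented here):

```python
def gpu_wired_budget_gb(total_ram_gb, wired_limit_pct):
    """How much unified memory macOS will let Metal wire for the GPU,
    given the total RAM and the wired-limit percentage."""
    return total_ram_gb * wired_limit_pct / 100

model_gb = 28  # 70B Q3_K_S in GGUF, per the post

stock = gpu_wired_budget_gb(32, 67)    # ~21.4 GB: the 28 GB model does not fit
patched = gpu_wired_budget_gb(32, 92)  # 29.44 GB: it fits, with ~1.4 GB to spare
```

The remaining ~2.5GB (32 minus the patched budget) is what the headless, index-disabled system has to live in, which is why the post trims login and mds_stores overhead.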
Does Dual EPYC work for LLMs?
4
Continuing my quest to choose a rig with lots of memory, one possibility is dual-socket MBs. Gen 1 to 3 EPYC chips have 8 channels of DDR4, so this gives 16 total memory channels, which is good bandwidth, if not beating GPUs, but can have way more memory (up to 1024GB). Builds with 64+ threads can be pretty cheap. My questions are - Does the dual-CPU setup cause trouble with running LLM software? - Is it reasonably possible to get Windows and drivers etc. working on 'server' architecture? - Is there anything else I should consider vs going for a single EPYC or Threadripper Pro?
2023-11-28T21:09:57
https://www.reddit.com/r/LocalLLaMA/comments/1866sza/does_dual_epyc_work_for_llms/
EvokerTCG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1866sza
false
null
t3_1866sza
/r/LocalLLaMA/comments/1866sza/does_dual_epyc_work_for_llms/
false
false
self
4
null
JungleGym + TreeVoyager – OSS Data and Tool for Autonomous Web Agents
1
2023-11-28T20:57:56
https://junglegym.ai/
Prestigious-Elk7124
junglegym.ai
1970-01-01T00:00:00
0
{}
1866iad
false
null
t3_1866iad
/r/LocalLLaMA/comments/1866iad/junglegym_treevoyager_oss_data_and_tool_for/
false
false
default
1
null
JungleGym, an open-source playground to test/build autonomous web agents
1
[removed]
2023-11-28T20:41:50
https://www.reddit.com/r/LocalLLaMA/comments/18664wk/junglegym_an_opensource_playground_to_testbuild/
OwlLLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18664wk
false
null
t3_18664wk
/r/LocalLLaMA/comments/18664wk/junglegym_an_opensource_playground_to_testbuild/
false
false
self
1
null
Waiting times for A100 hw?
2
Out of interest, I'm wondering if anyone else has tried securing any A100 or H100 PCIe devices recently, and what the wait time is re availability. I was told that the original PCIe 40GB versions had stopped being manufactured; however, a new blower design is supposed to be out this month, but I haven't heard a single peep. Also wondering if anyone has run any tests re the difference between SXM at 400W and PCIe at 250W? Just seen eBay's got a 40GB up, but it's a converted module rather than a traditional PCIe card - https://www.ebay.co.uk/itm/325905860488
2023-11-28T20:07:54
https://i.redd.it/1lh1kix6b53c1.jpg
BreakIt-Boris
i.redd.it
1970-01-01T00:00:00
0
{}
1865ca1
false
null
t3_1865ca1
/r/LocalLLaMA/comments/1865ca1/waiting_times_for_a100_hw/
false
false
https://a.thumbs.redditm…sATLVhha3fR4.jpg
2
{'enabled': True, 'images': [{'id': 'zzRfD5kyxTPHFodURlwj5vQiZX2LCZyLY9JdGCiRuhs', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/1lh1kix6b53c1.jpg?width=108&crop=smart&auto=webp&s=c71757a77f3a76fb6c221bc95d53860c474d044e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/1lh1kix6b53c1.jpg?width=216&crop=smart&auto=webp&s=ef5f6dd9d315199b51d758efea005b6f8a750256', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/1lh1kix6b53c1.jpg?width=320&crop=smart&auto=webp&s=9e0cc109000c3161091ae00c07d076ef15d1256d', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/1lh1kix6b53c1.jpg?width=640&crop=smart&auto=webp&s=c962ccfc9e70b5c40b3750ebb8d6f3163edc7249', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/1lh1kix6b53c1.jpg?width=960&crop=smart&auto=webp&s=7638e8a9ca8993d2355a58b94325e7009cc051db', 'width': 960}], 'source': {'height': 2079, 'url': 'https://preview.redd.it/1lh1kix6b53c1.jpg?auto=webp&s=e4b3e7f022381017ebd67051b2a19244904b196a', 'width': 960}, 'variants': {}}]}
NovelAI vs best local LLM for NSFW?
1
[removed]
2023-11-28T19:55:16
https://www.reddit.com/r/LocalLLaMA/comments/186519z/novelai_vs_best_local_llm_for_nsfw/
Sakura9095
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186519z
false
null
t3_186519z
/r/LocalLLaMA/comments/186519z/novelai_vs_best_local_llm_for_nsfw/
false
false
nsfw
1
null
Tesla M40 alongside RTX 2060
1
I'm considering buying a cheap Tesla M40 or P40 for my PC that I also use for gaming, with an RTX 2060. I'm running Debian 12. [Wiki.debian.org](https://Wiki.debian.org) states that both cards use different drivers. This is an HP Z840 with dual Intel Xeon processors. I don't remember the wattage of the PSU at the moment, but I think it is 1185 watts. I know I will get issues with the PSU GPU power connectors, so I have to figure this out somehow. But besides this, does anybody know if this will actually work?
2023-11-28T19:44:45
https://www.reddit.com/r/LocalLLaMA/comments/1864s6b/tesla_m40_alongside_rtx_2060/
AdTall6126
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1864s6b
false
null
t3_1864s6b
/r/LocalLLaMA/comments/1864s6b/tesla_m40_alongside_rtx_2060/
false
false
self
1
null
55B Yi model merges
31
2023-11-28T19:43:35
https://huggingface.co/collections/mlinmg/yi-55b-6565abb15de5b89145bf6685
Aaaaaaaaaeeeee
huggingface.co
1970-01-01T00:00:00
0
{}
1864r5r
false
null
t3_1864r5r
/r/LocalLLaMA/comments/1864r5r/55b_yi_model_merges/
false
false
https://b.thumbs.redditm…PDruqSxpz6dE.jpg
31
{'enabled': False, 'images': [{'id': 'RNbYLDJAFzZ831mN8IGeRfY6xZYh19XxwUJ0MIB3fSw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?width=108&crop=smart&auto=webp&s=b806ee421fb49795f380f8abd0dd0249fdc7ad62', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?width=216&crop=smart&auto=webp&s=bb055a953bcf27dd458a55e5abfd633bcc1be8bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?width=320&crop=smart&auto=webp&s=e9d5c11e8c7f8fe7fa71f290c05b6ffdb11af767', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?width=640&crop=smart&auto=webp&s=0aa67f3e8fe79882335766b0f4b99143860e064a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?width=960&crop=smart&auto=webp&s=870c6830990d794bc7e1fe57b7cc7161c63f5004', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?width=1080&crop=smart&auto=webp&s=ae56bdb26d51e357c8f5fc649b57c0ee75eee106', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-jemt9xaqCg6asZy_sd7zRPZJVnSSoTnGLVPjT-4IOc.jpg?auto=webp&s=63101b77bfc058e12cd932d867a1a72e4595513b', 'width': 1200}, 'variants': {}}]}
What are your fav questions to eval a new LLM?
17
Every other day, there's a new LLM, Do you have a set of questions to test them?
2023-11-28T19:32:18
https://www.reddit.com/r/LocalLLaMA/comments/1864hgh/what_are_your_fav_questions_to_eval_a_new_llm/
dulldata
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1864hgh
false
null
t3_1864hgh
/r/LocalLLaMA/comments/1864hgh/what_are_your_fav_questions_to_eval_a_new_llm/
false
false
self
17
null
GPU for homelab tinkering
1
Sorry for the noob question. I'm building out a new server and, as I love playing with new tech, I thought I would throw in a GPU so I can try to learn to integrate AI with things like PrivateGPT, document generation, meeting transcription, maybe some integrations with Obsidian, or even Home Assistant for automation. I like the idea of it being able to crawl all my information and offer suggestions, rather than me having to copy and paste snippets as I do now with ChatGPT. I'm a solo IT consultant by trade, so I'm really hoping it will help me augment my work. Budget isn't super important; it's more that it's fit for purpose, but to stop the people suggesting a £30,000 GPU, I cap it at ~£1000! Thanks!
2023-11-28T19:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1864gsm/gpu_for_homelab_tinkering/
idarryl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1864gsm
false
null
t3_1864gsm
/r/LocalLLaMA/comments/1864gsm/gpu_for_homelab_tinkering/
false
false
self
1
null
Best image(s) to 3d model?
1
Sorry if this is the wrong sub, since it's not explicitly about language models. I'm looking for any resource or info on how to take multiple images of a subject and generate a 3D model I can use (either for a game engine or a 3D print). I've seen a few SaaS companies that do this, but I'm not finding much for a local model. Thanks in advance.
2023-11-28T19:21:19
https://www.reddit.com/r/LocalLLaMA/comments/18647s8/best_images_to_3d_model/
seanpuppy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18647s8
false
null
t3_18647s8
/r/LocalLLaMA/comments/18647s8/best_images_to_3d_model/
false
false
self
1
null
Amazon Introduces Q, an A.I. Chatbot for Companies
1
2023-11-28T19:20:55
https://www.nytimes.com/2023/11/28/technology/amazon-ai-chatbot-q.html
tilyupo
nytimes.com
1970-01-01T00:00:00
0
{}
18647e7
false
null
t3_18647e7
/r/LocalLLaMA/comments/18647e7/amazon_introduces_q_an_ai_chatbot_for_companies/
false
false
https://b.thumbs.redditm…VyWUgnwumduk.jpg
1
{'enabled': False, 'images': [{'id': 'RX3Mx90Sk7lzVIOF9slCXxiBuLb2hDn_-KMv3-8w0Fg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rQaWmaKc8_HcRhdAMk8zHNb_ntnZxf914eIJrs9k6Kc.jpg?width=108&crop=smart&auto=webp&s=bdb8a66143d580470d09079ec015be00ff5a636b', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/rQaWmaKc8_HcRhdAMk8zHNb_ntnZxf914eIJrs9k6Kc.jpg?width=216&crop=smart&auto=webp&s=0d3c34c991c9304c5909ea6b8bedf2e1ba317211', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/rQaWmaKc8_HcRhdAMk8zHNb_ntnZxf914eIJrs9k6Kc.jpg?width=320&crop=smart&auto=webp&s=d0571b9cc70ed3d6d7fb8fa03847843c01dc050d', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/rQaWmaKc8_HcRhdAMk8zHNb_ntnZxf914eIJrs9k6Kc.jpg?width=640&crop=smart&auto=webp&s=a3d10f5224327ef1ea6ebfa3613f8af35a0c9c30', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/rQaWmaKc8_HcRhdAMk8zHNb_ntnZxf914eIJrs9k6Kc.jpg?width=960&crop=smart&auto=webp&s=5e332c39d3e0677e66292c9ea83a8cf6e8cf8396', 'width': 960}], 'source': {'height': 549, 'url': 'https://external-preview.redd.it/rQaWmaKc8_HcRhdAMk8zHNb_ntnZxf914eIJrs9k6Kc.jpg?auto=webp&s=490f66bda356e277e44e7a82ac5c549f0b6ca5ca', 'width': 1050}, 'variants': {}}]}
how performant are current coding models in PyTorch and other deep learning coding?
1
I'm looking for what the current best model would be to attempt to do some coding in this realm. Realistically, I don't expect a locally run model to be able to help me as much in overall suggestions and a back-and-forth discussion of what technique to use to solve a particular task. However, I would hope that I can get somewhat close, at least in some of this. So the model would need to be fairly well trained on PyTorch and the various machine learning models that are used with that library, plus some background training on papers that revolve around different machine learning problems, solutions, and techniques that are suggested to solve certain things. Which models are currently considered the best for doing any or some of that? The various benchmarks and human eval rubbish just aren't applicable because they seem to be too general. However, I'm open to all suggestions. I understand that 'Phind' is one to look at, but I would love to hear some experience from fellow programmers and system architects on what they found to be useful. My system has an RTX GPU with 24GB, and I am, in a reluctant way, satisfied with one token per second of output if it's actually useful output. Thanks in advance for any tips. (My last post querying about this seems to have been either blocked or never showed up. Please tell me if I'm doing something wrong!)
2023-11-28T19:14:46
https://www.reddit.com/r/LocalLLaMA/comments/18641y7/how_performant_are_current_coding_models_in/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18641y7
false
null
t3_18641y7
/r/LocalLLaMA/comments/18641y7/how_performant_are_current_coding_models_in/
false
false
self
1
null
List of all GPUs and dedicated processors for AI workloads
24
2023-11-28T19:06:34
https://gpus4ai.edlabs.it/
panchovix
gpus4ai.edlabs.it
1970-01-01T00:00:00
0
{}
1863usf
false
null
t3_1863usf
/r/LocalLLaMA/comments/1863usf/list_of_all_gpus_and_dedicated_processors_for_ai/
false
false
default
24
null
How to set up and run multiple local LLMs, in code? On windows. On GPU(mostly)
2
I know ooba supposedly works for Windows; I had it up and running in Ubuntu, but Windows error-corrected the boot record so I can't access that environment anymore. But I'm not too interested in roleplay chat, so I'm fine with, and might actually prefer, running it through a Python script. (I'd like to get more than one model up and running simultaneously for an "LLM village" NPC interaction experiment, but I digress.) Looking at HF I see some code snippets, but there's a variety of libraries and approaches to it. Is there anything considered a "gold standard" as of late for local Windows LLMs that is not a pain in the ass to set up and supports the latest quantization flavors? I'll aim to run on 24GB VRAM, but I also have 64GB of system RAM, and the option to run on both would be appreciated; primarily I'm aiming for GPU.
2023-11-28T18:42:16
https://www.reddit.com/r/LocalLLaMA/comments/18639pg/how_to_set_up_and_run_multiple_local_llms_in_code/
vivehelpme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18639pg
false
null
t3_18639pg
/r/LocalLLaMA/comments/18639pg/how_to_set_up_and_run_multiple_local_llms_in_code/
false
false
self
2
null
How to extend max context length when doing QLoRA?
2
Hi /r/localllama, I want to fine-tune a small model (TinyLlama 1.1B) with PEFT (probably QLoRA 4-bit), but the model has been pretrained for a context length of 2048 tokens. I would like to fine-tune it with longer examples (approx. 3000 context length). I understand that one option is to fine-tune only with samples that fit in the 2048 context length and then run inference with RoPE scaling, but I was wondering if there is a "natural" way to increase a pretrained model's context length when fine-tuning it with LoRA.
2023-11-28T18:31:15
https://www.reddit.com/r/LocalLLaMA/comments/186304l/how_to_extend_max_context_length_when_doing_qlora/
Exotic-Estimate8355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186304l
false
null
t3_186304l
/r/LocalLLaMA/comments/186304l/how_to_extend_max_context_length_when_doing_qlora/
false
false
self
2
null
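The linear RoPE scaling mentioned above as the inference-time option can be sketched in a few lines (toy head dimension; the function name is invented for this sketch):

```python
def rope_angles(head_dim, position, base=10000.0, scale=1.0):
    """Rotary-embedding angles for one position. Linear scaling divides the
    position index by `scale`, so a model trained on 2048 positions covers
    2048 * scale positions using only angles it has already seen."""
    inv_freq = [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]
    return [position / scale * f for f in inv_freq]
```

With `scale=1.5`, position 3072 produces exactly the angles of position 2048 at `scale=1.0`, which is why the model tolerates the longer context. The "natural" fine-tuning route is essentially the same trick: apply the scale during training so the longer examples fall back inside the trained angle range.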
Best bang for buck for MacBP?
1
I know there are a bunch of threads here for picking up your Macbook Pro, but I can't find exactly what I'm looking for. I'm renewing my laptop (finally) from a 2019 MBP with 16GB RAM, and I want to run a lot of AI models on my new one, without destroying my wallet. Optimally, trying not to go over 3k (with a 10% discount) on MBPs. I'm hesitating between an M3 Max with 32 GB ([https://www.apple.com/shop/buy-mac/macbook-pro/14-inch-space-black-apple-m3-pro-with-12-core-cpu-and-18-core-gpu-18gb-memory-1tb](https://www.apple.com/shop/buy-mac/macbook-pro/14-inch-space-black-apple-m3-pro-with-12-core-cpu-and-18-core-gpu-18gb-memory-1tb)) and taking a slightly older M2 Max for decently cheaper and potentially more RAM. What do you think is better here? I'm inclined towards the M3 just because I want to take a very recent model so that I can keep it for at least 3 years.
2023-11-28T17:59:24
https://www.reddit.com/r/LocalLLaMA/comments/186287c/best_bang_for_buck_for_macbp/
palpapeen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
186287c
false
null
t3_186287c
/r/LocalLLaMA/comments/186287c/best_bang_for_buck_for_macbp/
false
false
self
1
{'enabled': False, 'images': [{'id': '6jrWUI4szX_uAwjk54YB7QQnchupM-ljDg_K_l4zMNY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?width=108&crop=smart&auto=webp&s=a89261e836e75f477eeae559f4f80c11640b6d30', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?width=216&crop=smart&auto=webp&s=ec3447560fd2e883af8688b4698aeed10ae36e7f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?width=320&crop=smart&auto=webp&s=29606ce0b94a9a7586fe437dce48e539973750e1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?width=640&crop=smart&auto=webp&s=631fc84bc2fd74e8e9bf8f222e4ed87225b34431', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?width=960&crop=smart&auto=webp&s=e4ad829965ca9e2df96806bee4e2262c77961b0d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?width=1080&crop=smart&auto=webp&s=41e8da7e288c370a47ee4e5cc05e8bafcb391725', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3yPFpw_RCf-xR-FXwudsrlJwwjVhO0-N5af4tt84VWc.jpg?auto=webp&s=c7bde0a5e558bcdde02b1aaef432f905b82b5ec7', 'width': 1200}, 'variants': {}}]}
Is it possible to run Llama on a 4gb ram?
3
Yes. This has to be the worst RAM you guys have ever seen, but hear me out. Is it possible? I want to run the full 70GB model, but that's far out of the question and I'm not even going to bother. Can I at least run the 13GB, or failing that, the 7GB?
2023-11-28T17:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1861wbm/is_it_possible_to_run_llama_on_a_4gb_ram/
TheHumanFixer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1861wbm
false
null
t3_1861wbm
/r/LocalLLaMA/comments/1861wbm/is_it_possible_to_run_llama_on_a_4gb_ram/
false
false
self
3
null
How to use LLama 2 in Semantic Kernel with an API from Replicate or Hugging Face
1
I know this is more of a Semantic Kernel question, but I want to run the Llama-2 model with it. SK seems to only support an API from Azure OpenAI or OpenAI, and there's no mention of other APIs. On the other hand, for Hugging Face they only show support for local models.
2023-11-28T17:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1861tby/how_to_use_llama_2_in_semantic_kernel_with_an_api/
techocompany25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1861tby
false
null
t3_1861tby
/r/LocalLLaMA/comments/1861tby/how_to_use_llama_2_in_semantic_kernel_with_an_api/
false
false
self
1
null
Guys, I have a crazy idea.
1
I have an idea to collect all open-source LLM models to launch one full-fledged service for working with models: we would use one mini LLM model to recognize a task, write code, translate text, solve problems, etc., and then transfer the task to a model designed for it. We would use Petals to run the models, and we could make our own cryptocurrency to pay people who allocate their servers to run Petals and support the models. What do you think of this idea?
2023-11-28T17:39:06
https://www.reddit.com/r/LocalLLaMA/comments/1861qto/guys_i_have_a_crazy_idea/
omar07ibrahim1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1861qto
false
null
t3_1861qto
/r/LocalLLaMA/comments/1861qto/guys_i_have_a_crazy_idea/
false
false
self
1
null
Hardware for Meta Llama2 65b for a Web App?
1
I need a bit more info from people who installed Llama 2 locally and are using it to support web apps, or just for local use. * What is the ideal hardware for the 65b version? * How many tokens per second can this hardware process, input and output? * Regarding safety, since it is used for business, what is the chance that this model will end up arguing with the customer 😊 ?
2023-11-28T16:42:08
https://www.reddit.com/r/LocalLLaMA/comments/1860eo6/hardware_for_meta_llama2_65b_for_a_web_app/
No-Activity-4824
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1860eo6
false
null
t3_1860eo6
/r/LocalLLaMA/comments/1860eo6/hardware_for_meta_llama2_65b_for_a_web_app/
false
false
self
1
null
Please enlighten me, why are people building LLM Twitter bots?
49
Yes, I do still use Twitter, and yes, I know it's X. But more and more I see replies that are incredibly obviously written by an LLM (and notoriously ChatGPT). Like this thread I'm reading right now about how Finland closed all its borders (and it is written by a human), but then the replies are like: -It's important for countries to manage their borders effectively while ensuring the safety and well-being of all individuals involved. -That's a significant step to address the issue. Hoping for lasting solutions that prioritize the safety of all involved. -That's an interesting development in Finland's immigration policies. It's important for countries to find a balance that takes into account economic, social, and security concerns. etc... so yeah, very obviously LLM. Very obviously ChatGPT by the language too. So enlighten me - what are people doing this hoping to achieve, except me very swiftly clicking Block on the user? I see it more and more. Not that I care about X either way (for what it is worth, it can become a bot-infested platform), but this is using LLMs the 100% wrong way - for goals I can't imagine. I just can't find a scenario where this is good or beneficial. But maybe it's just me. Hmm??
2023-11-28T16:14:10
https://www.reddit.com/r/LocalLLaMA/comments/185zq6c/please_enlighten_me_why_are_people_building_llm/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185zq6c
false
null
t3_185zq6c
/r/LocalLLaMA/comments/185zq6c/please_enlighten_me_why_are_people_building_llm/
false
false
self
49
null
Getting LLMs to generate vast amounts of high quality dialogue covering a broad range
18
2023-11-28T16:00:30
https://blog.allenai.org/are-you-thirsty-for-social-chitchat-data-64c0895c654d
WaterdanceAC
blog.allenai.org
1970-01-01T00:00:00
0
{}
185ze30
false
null
t3_185ze30
/r/LocalLLaMA/comments/185ze30/getting_llms_to_generate_vast_amounts_of_high/
false
false
https://b.thumbs.redditm…P1V1iER_8nck.jpg
18
{'enabled': False, 'images': [{'id': 'uoYhrKkJVeI1ZQp6TEXI2609GarcBtKr2hGwj7hWSNY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eY6TaOcIitymq39CXwMiNbriAetcfT-o_bNhdn_dMSg.jpg?width=108&crop=smart&auto=webp&s=970040aa7bce108796da1fc036684818fdcd09b9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eY6TaOcIitymq39CXwMiNbriAetcfT-o_bNhdn_dMSg.jpg?width=216&crop=smart&auto=webp&s=c4f54629a362549221eb18809a4b71510c0e357e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eY6TaOcIitymq39CXwMiNbriAetcfT-o_bNhdn_dMSg.jpg?width=320&crop=smart&auto=webp&s=5fb6da1f89a298605cc4829c40430e4b66211368', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/eY6TaOcIitymq39CXwMiNbriAetcfT-o_bNhdn_dMSg.jpg?auto=webp&s=edf9d92a07d805af3e273b25827116f30d75d8be', 'width': 512}, 'variants': {}}]}
Text to image run locally.
1
Is there any photorealistic - like, really photorealistic - text-to-image model that I can run locally like an LLM?
2023-11-28T15:41:06
https://www.reddit.com/r/LocalLLaMA/comments/185yxww/text_to_image_run_locally/
G_archangel_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185yxww
false
null
t3_185yxww
/r/LocalLLaMA/comments/185yxww/text_to_image_run_locally/
false
false
self
1
null
How to evaluate the finetuned model.
3
I have worked on this code https://github.com/ovh/ai-training-examples/blob/main/notebooks/natural-language-processing/llm/miniconda/llama2-fine-tuning/llama_2_finetuning.ipynb. I just changed the dataset to one that has a text column and a class (suicide and non-suicide). In the inference step it sometimes produces nonsensical answers. But the question is: how can I evaluate this model using a Hugging Face dataset? The dataset only has a train split. I want to know the accuracy (recall and precision). Can someone help me with it?
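Once part of the train split is held out (e.g. with the `datasets` library's `Dataset.train_test_split`), precision and recall reduce to counting true/false positives. A minimal pure-Python sketch with made-up labels (1 = suicide, 0 = non-suicide; the label values here are hypothetical):

```python
# Hypothetical predictions vs. gold labels for a held-out slice.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)          # of predicted positives, how many are right
recall = tp / (tp + fn)             # of actual positives, how many were found
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(precision, recall, accuracy)  # 0.75 0.75 0.75 for this toy data
```

In practice `sklearn.metrics` (`precision_score`, `recall_score`) does the same arithmetic; the point is that you only need the model's predicted class per example plus the gold class.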
2023-11-28T15:20:01
https://www.reddit.com/r/LocalLLaMA/comments/185yg88/how_to_evaluate_the_finetuned_model/
Technical-Owl3342
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185yg88
false
null
t3_185yg88
/r/LocalLLaMA/comments/185yg88/how_to_evaluate_the_finetuned_model/
false
false
self
3
{'enabled': False, 'images': [{'id': 'SEXTGJiyeboCl_-7r1CMFtlPrwMnqEoZ4XpJ8DFqOiM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?width=108&crop=smart&auto=webp&s=d0e15bf94111171a38e17286c5ab105f909155ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?width=216&crop=smart&auto=webp&s=483f14d45b13a2bb9cf3ca8df8cb9f75000d2e64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?width=320&crop=smart&auto=webp&s=7c532545c7474a05f01426a5619efd306e372c7b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?width=640&crop=smart&auto=webp&s=1207e245419325311282409806a1c9f5aada86d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?width=960&crop=smart&auto=webp&s=8c3e3da899140751a4b145dbbfb7781133a985f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?width=1080&crop=smart&auto=webp&s=5253dc8f943fa567608c2c83c9355a08958a65b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Tkq7HPzeNgfJZTbaQcR0OIcY_HC8gbHQg6zCzZEq_U8.jpg?auto=webp&s=92d5abdc4bdddd273d002d23f438f80a7e4526af', 'width': 1200}, 'variants': {}}]}
Discover & review your favorite AI character / role-play / story-writing models
1
[removed]
2023-11-28T15:08:35
https://www.reddit.com/r/LocalLLaMA/comments/185y7af/discover_review_your_favorite_ai_character/
WhereAreMyWaifus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185y7af
false
null
t3_185y7af
/r/LocalLLaMA/comments/185y7af/discover_review_your_favorite_ai_character/
false
false
self
1
null
100B, 220B, and 600B models on huggingface!
78
[https://huggingface.co/deepnight-research](https://huggingface.co/deepnight-research) I'm not affiliated with this group at all, I was just randomly looking for any new big merges and found these. 100B model: [https://huggingface.co/deepnight-research/saily\_100B](https://huggingface.co/deepnight-research/saily_100B) 220B model: [https://huggingface.co/deepnight-research/Saily\_220B](https://huggingface.co/deepnight-research/Saily_220B) 600B model: [https://huggingface.co/deepnight-research/ai1](https://huggingface.co/deepnight-research/ai1) They have some big claims about the capabilities of their models, but the two best ones are unavailable to download. Maybe we can help convince them to release them publicly?
2023-11-28T14:48:38
https://www.reddit.com/r/LocalLLaMA/comments/185xrds/100b_220b_and_600b_models_on_huggingface/
Illustrious_Sand6784
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185xrds
false
null
t3_185xrds
/r/LocalLLaMA/comments/185xrds/100b_220b_and_600b_models_on_huggingface/
false
false
self
78
{'enabled': False, 'images': [{'id': 'MA8wpDz83Jy8SDAhYctTPnbWB4tKyir-x7hyJKivNnA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?width=108&crop=smart&auto=webp&s=3784ab2cc5fc9b7dda2c13686f9cd8ffed055b4b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?width=216&crop=smart&auto=webp&s=4f95e90b233905e8b6f53a4abddfab5782435245', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?width=320&crop=smart&auto=webp&s=06372279eb4cf658893fff12d1a960a097485f4e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?width=640&crop=smart&auto=webp&s=546f93fe9b6ce69205143edfd9ae282a44179a9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?width=960&crop=smart&auto=webp&s=86f000707283ceaab2c05ab12f25c0fbc867558c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?width=1080&crop=smart&auto=webp&s=ccf8c7504d9ac6180465b74b38e03688588b6ddb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7uT7RCvFBjzfgDn79bY9fqChrSrmXbnLcucg_2nWccU.jpg?auto=webp&s=e38767e285f1d01449d32c4302d18f987e5cc475', 'width': 1200}, 'variants': {}}]}
Why is no one releasing 70b models?
1
There has been a lot of movement around and below the 13b parameter bracket in the last few months, but it's wild to think the best 70b models are still Llama-2-based. Why is that? We have 13b models like 8-bit bartowski/Orca-2-13b-exl2 approaching or even surpassing the best 70b models now.
2023-11-28T14:45:57
https://www.reddit.com/r/LocalLLaMA/comments/185xp7h/why_is_no_one_releasing_70b_models/
Longjumping-Bake-557
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185xp7h
false
null
t3_185xp7h
/r/LocalLLaMA/comments/185xp7h/why_is_no_one_releasing_70b_models/
false
false
self
1
null
Inferencing with AMD X3D Processors
2
With the proof of concept done and users able to get over 180 GB/s on a PC with AMD's 3D V-Cache, it sure would be nice if we could figure out a way to use that bandwidth for CPU-based inferencing. I think it only worked on Windows, but if that is the case we should be able to come up with a way to do it under Linux too.
2023-11-28T13:55:30
https://www.reddit.com/r/LocalLLaMA/comments/185wlg6/inferencing_with_and_x3d_processors/
ccbadd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185wlg6
false
null
t3_185wlg6
/r/LocalLLaMA/comments/185wlg6/inferencing_with_and_x3d_processors/
false
false
self
2
null
Notes to letters
3
I am trying to convert my investigation case notes into letters. I have several letters that can show the tone I use in my letters. I have got very good results by prompting GPT 4-8k. But no open source model in the 7b or 13b category has done well with prompting. What will be the best 7b model to finetune?
2023-11-28T13:49:00
https://www.reddit.com/r/LocalLLaMA/comments/185wgod/notes_to_letters/
Rizatriptan7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185wgod
false
null
t3_185wgod
/r/LocalLLaMA/comments/185wgod/notes_to_letters/
false
false
self
3
null
Xwin-Math: A Series of Powerful SFT Math LLMs and Evaluation Toolkit
15
Hi, everyone. Xwin-Math is intended to promote the mathematical reasoning capabilities of LLMs. Now we release the first version, which is a series of Llama 2 SFT models with CoT prompt. GitHub link: [Xwin-LM/Xwin-Math at main · Xwin-LM/Xwin-LM (github.com)](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math) Model link: [Xwin-LM (Xwin-LM) (huggingface.co)](https://huggingface.co/Xwin-LM) Gradio Demo: [Gradio](https://09776cc5ec5f786eb0.gradio.live/) (70B model) [Math capability on GSM8K and MATH benchmark](https://preview.redd.it/8dtnrbt7d33c1.png?width=6200&format=png&auto=webp&s=3976dce9926421c42c83940befd59d02ac0ea9a6) The [Xwin-Math-70B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0) model achieves **31.8 pass@1 on the MATH benchmark** and **87.0 pass@1 on the GSM8K benchmark**. This performance places it first among all open-source CoT models. The [Xwin-Math-7B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0) and [Xwin-Math-13B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0) models achieve **66.6 and 76.2 pass@1 on GSM8K**, ranking top-1 among all Llama-2-based 7B and 13B open-source models, respectively. We also evaluate Xwin-Math on other benchmarks such as SVAMP and MAWPS. Xwin-Math-70B-V1.0 approaches or surpasses the performance of GPT-35-Turbo (8-shot) on most benchmarks. The release also includes an evaluation toolkit that better converts LaTeX formulas into SymPy objects, enabling more accurate assessment of mathematical abilities. We found that, due to evaluation constraints, the results of GPT-4 were previously underestimated. More information can be found in our GitHub repo. We SFT on Llama 2 with standard settings, using GPT-4 to augment the training sets of MATH and GSM8K to approximately 100K examples in total. Our paper is still in progress, so more training details and further results will be updated soon. Any suggestions or comments are greatly welcome! Thanks! =)
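For readers unfamiliar with the metric: with one sample per problem, pass@1 is simply the fraction of problems answered correctly. The general unbiased pass@k estimator (from the Codex paper; shown here as a reference sketch, not Xwin-Math's own evaluation code) is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples drawn per problem, c correct, budget k."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(round(pass_at_k(10, 3, 1), 6))  # 0.3: 3/10 samples correct -> pass@1 = 30%
```

With k = 1 the formula collapses to c/n, i.e. plain accuracy averaged over problems.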
2023-11-28T13:39:38
https://www.reddit.com/r/LocalLLaMA/comments/185w9kl/xwinmath_a_series_of_powerful_sft_math_llms_and/
Left_Beat210
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185w9kl
false
null
t3_185w9kl
/r/LocalLLaMA/comments/185w9kl/xwinmath_a_series_of_powerful_sft_math_llms_and/
false
false
https://b.thumbs.redditm…sA-v6pyQ5nrI.jpg
15
{'enabled': False, 'images': [{'id': '506xn4_X4lBz9sqHiRWJjUP4qlRdD-9K6twhUCsu9d4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?width=108&crop=smart&auto=webp&s=8cb501ab3a606ca30bc812680b408d405fb3be73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?width=216&crop=smart&auto=webp&s=10bbe009450ad565c1b6104d7a11498b26b7b347', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?width=320&crop=smart&auto=webp&s=fe0081b12b812473dcff836b46a5fb42544a42cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?width=640&crop=smart&auto=webp&s=4f10159b827f47464d8641fd317811ee1cc9da47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?width=960&crop=smart&auto=webp&s=b135ead2f4cfa114b5e0bc1411d959c2a391956d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?width=1080&crop=smart&auto=webp&s=96fa5246538a7e0136321b76c7fc05677b5bd1de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HGkX42ZXGkVdi38YcKvzBvuYDu3SNdREC6Uk1L7VMZY.jpg?auto=webp&s=9e70e24e861079e3862bc4b598650d6bc1ddcaf7', 'width': 1200}, 'variants': {}}]}
Optimizing Your Language Model Experience: A Student's Journey with a Cutting-Edge PC featuring Core i7 14th Gen, RTX 4070 Ti, and 32GB DDR5 RAM
2
Hello, I'm a student delving into the study of large language models. I recently acquired a new PC equipped with a Core i7 14th Gen processor, RTX 4070 Ti graphics, and 32GB DDR5 RAM. Could you kindly suggest a recommended language model for optimal performance on my machine?
2023-11-28T13:37:47
https://www.reddit.com/r/LocalLLaMA/comments/185w887/optimizing_your_language_model_experience_a/
Revolutionary_Fig171
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185w887
false
null
t3_185w887
/r/LocalLLaMA/comments/185w887/optimizing_your_language_model_experience_a/
false
false
self
2
null
EPFL releases an open Medical Llama 2 finetune, including weights and training data, within 5%/10% of GPT-4/Med-PaLM-2
19
2023-11-28T12:56:27
https://arxiv.org/abs/2311.16079
wojcech
arxiv.org
1970-01-01T00:00:00
0
{}
185veu2
false
null
t3_185veu2
/r/LocalLLaMA/comments/185veu2/epfl_releases_an_open_medical_llama_2_finetune/
false
false
default
19
null
HumanEval leaderboard got updated with GPT-4 Turbo
128
2023-11-28T12:29:20
https://i.redd.it/e49jbboi033c1.png
OnurCetinkaya
i.redd.it
1970-01-01T00:00:00
0
{}
185uwxn
false
null
t3_185uwxn
/r/LocalLLaMA/comments/185uwxn/humaneval_leaderboard_got_updated_with_gpt4_turbo/
false
false
https://a.thumbs.redditm…wYOzvijBEZt8.jpg
128
{'enabled': True, 'images': [{'id': 'g8QEXz-nki-UM1HZ6jaRlbY-IqZXswYuYQKFuJk26jU', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/e49jbboi033c1.png?width=108&crop=smart&auto=webp&s=2b344c8009654a371b5a7305d5879eaaf0dc093c', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/e49jbboi033c1.png?width=216&crop=smart&auto=webp&s=fbee0138fef1d1033400b5ee4fe03f25f292d001', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/e49jbboi033c1.png?width=320&crop=smart&auto=webp&s=3ec075b2c2d0e0612da9de040749472ad564dbfa', 'width': 320}, {'height': 240, 'url': 'https://preview.redd.it/e49jbboi033c1.png?width=640&crop=smart&auto=webp&s=453798ca989468a5c458ae54f8ce0340cb35805a', 'width': 640}, {'height': 361, 'url': 'https://preview.redd.it/e49jbboi033c1.png?width=960&crop=smart&auto=webp&s=e335e394724fa74ce105822e6263e13f1bd973c4', 'width': 960}, {'height': 406, 'url': 'https://preview.redd.it/e49jbboi033c1.png?width=1080&crop=smart&auto=webp&s=780dd1cac4adb697108eaaf1298e3915df07df1c', 'width': 1080}], 'source': {'height': 516, 'url': 'https://preview.redd.it/e49jbboi033c1.png?auto=webp&s=a92d2d6ec9889cd5c876e7ee208ed304ef62dff4', 'width': 1372}, 'variants': {}}]}
How to chat with documents in/ via LM Studio?
3
I'm only now wrapping my head around this - I know there's no option in the LM Studio UI, but is there any way to ingest documents once LM Studio has loaded the model? Also, any alternative stack that has a UI for not just chat but also document ingestion for local models?
2023-11-28T12:25:28
https://www.reddit.com/r/LocalLLaMA/comments/185uuee/how_to_chat_with_documents_in_via_lm_studio/
laterral
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185uuee
false
null
t3_185uuee
/r/LocalLLaMA/comments/185uuee/how_to_chat_with_documents_in_via_lm_studio/
false
false
self
3
null
Evaluate, monitor, and safeguard your LLM-based apps
4
For the last couple of months, I along with my team invested lots of effort into building a solution that can help users evaluate and monitor the performance of their LLM and AI apps. If you're a ChatGPT (or any other LLM :)) user and are integrating it into your apps, and if, by any chance, it ever happened to you that the outputs you received weren't exactly the ones you were hoping for... You should find this useful 😃 Today, we are releasing it publically and launched it on ProductHunt. **I would be very thankful to hear your thoughts and if you can support the launch**. 🙏 [https://www.producthunt.com/posts/deepchecks-llm-evaluation?r=h](https://www.producthunt.com/posts/deepchecks-llm-evaluation?r=h)
2023-11-28T12:15:51
https://www.reddit.com/r/LocalLLaMA/comments/185uo6g/evaluate_monitor_and_safeguard_your_llmbased_apps/
AsDivyansh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185uo6g
false
null
t3_185uo6g
/r/LocalLLaMA/comments/185uo6g/evaluate_monitor_and_safeguard_your_llmbased_apps/
false
false
self
4
{'enabled': False, 'images': [{'id': 'PXYVwNrI2X0Yh-Cdmf43llo4X-3RTovXfgwT7TJg6Ts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i5mpLFey1Mbr3R4BrLmloeifvHyCxtIZUWqQW8W1N0U.jpg?width=108&crop=smart&auto=webp&s=37808feb445526b774ed7f4a168d55b4e1a8ccfe', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i5mpLFey1Mbr3R4BrLmloeifvHyCxtIZUWqQW8W1N0U.jpg?width=216&crop=smart&auto=webp&s=6f6411521cb9187f9aa48bd996799405460a4afe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i5mpLFey1Mbr3R4BrLmloeifvHyCxtIZUWqQW8W1N0U.jpg?width=320&crop=smart&auto=webp&s=4526656d65dc3fb968090fb72073fe79ce64c259', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i5mpLFey1Mbr3R4BrLmloeifvHyCxtIZUWqQW8W1N0U.jpg?width=640&crop=smart&auto=webp&s=58c6c6470c2af4ff9ada927cc5bc5331f1fe2415', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i5mpLFey1Mbr3R4BrLmloeifvHyCxtIZUWqQW8W1N0U.jpg?width=960&crop=smart&auto=webp&s=b64a17c1689f7f8ba270ce2449efbe9e147d5a79', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/i5mpLFey1Mbr3R4BrLmloeifvHyCxtIZUWqQW8W1N0U.jpg?auto=webp&s=fee4fb4c0d1c0fad3460df8cf5812e26ce8ef132', 'width': 1024}, 'variants': {}}]}
Role play
3
Hi, does anyone know of any (peer-reviewed) articles testing performance when giving LLMs a role? It's something most of us do in prompts and it's somewhat logical that introducing such a parameter would increase likelihood of desired output, but has anyone actually tested it in a cite-able article? I'm thinking of the old, "You are a software engineer with years of experience in coding .html, .json ... " etc.
2023-11-28T12:03:48
https://www.reddit.com/r/LocalLLaMA/comments/185ugtk/role_play/
MFHau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ugtk
false
null
t3_185ugtk
/r/LocalLLaMA/comments/185ugtk/role_play/
false
false
self
3
null
LLM NSFW Community?
1
[removed]
2023-11-28T11:58:06
https://www.reddit.com/r/LocalLLaMA/comments/185ud1b/llm_nsfw_community/
Sakura9095
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ud1b
false
null
t3_185ud1b
/r/LocalLLaMA/comments/185ud1b/llm_nsfw_community/
false
false
nsfw
1
null
Best 7B model to fine-tune for Q&A over knowledge base
3
I have our own knowledge base (Confluence) and would like to train a local model over it. What am I best looking at, and what should my training data look like? Grateful for pointers/articles. I've been able to step through a couple of instruct tutorials, but that doesn't quite feel like it?
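One common shape for the training data is JSONL of question/answer pairs mined from the knowledge-base pages, in the Alpaca-style instruction format. A sketch (the field names follow that convention, and the example content is entirely hypothetical - adapt to whatever your trainer expects):

```python
import json

# Hypothetical examples distilled from Confluence pages.
examples = [
    {
        "instruction": "How do I request VPN access?",
        "input": "",  # optional extra context, e.g. the source page excerpt
        "output": "Open a ticket in the IT self-service portal and ...",
    },
]

# One JSON object per line, the usual input format for SFT trainers.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A few hundred to a few thousand such pairs over a 7B instruct-tuned base with QLoRA is a common starting point; for pure lookup over the knowledge base, RAG is often the simpler first step.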
2023-11-28T11:47:28
https://www.reddit.com/r/LocalLLaMA/comments/185u785/best_7b_model_to_finetune_for_qa_over_knowledge/
Otherwise-Tiger3359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185u785
false
null
t3_185u785
/r/LocalLLaMA/comments/185u785/best_7b_model_to_finetune_for_qa_over_knowledge/
false
false
self
3
null
AIlice, a lightweight AI Agent based on open source model
14
Hello everyone, I've initiated a new project for an LLM agent ([https://github.com/stevenlu137/AIlice](https://github.com/stevenlu137/AIlice)). The primary objective is to create a lightweight agent based on an open-source model (the current code size is approximately 2.5k). AIlice also supports private models, such as GPT-4. Current basic functionalities include in-depth exploration of specific topics, programming, etc. Self-expansion and multi-modal capabilities are currently in development. I'll start this thread now, and I'll provide updates later.
2023-11-28T11:43:35
https://www.reddit.com/r/LocalLLaMA/comments/185u51a/ailice_a_lightweight_ai_agent_based_on_open/
Steven_Lu_137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185u51a
false
null
t3_185u51a
/r/LocalLLaMA/comments/185u51a/ailice_a_lightweight_ai_agent_based_on_open/
false
false
self
14
{'enabled': False, 'images': [{'id': 'JuVHyjhvpSNUQ1-G_YT-n5O5_aa9WFt6QxImMnPIpsY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?width=108&crop=smart&auto=webp&s=6b5752281626dcf1aba0007671fba507c567a25e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?width=216&crop=smart&auto=webp&s=f754e97c013c9fd61d9b6704b2f500f798d4fe68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?width=320&crop=smart&auto=webp&s=733ff1fd3dea49feda530413f0f716f95ed7c873', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?width=640&crop=smart&auto=webp&s=5861e41e9bdd66d02e9bb640071e39d12a99f2ad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?width=960&crop=smart&auto=webp&s=c51f0905162c4a6a6d6591717bae5605d353da2d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?width=1080&crop=smart&auto=webp&s=d2cfbdfe13f96f26e84b1d82574645beee4f4e4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JHeMVH5eA26afchGDdkxgjIJ6jBRlaQLwKFIyTnU1z8.jpg?auto=webp&s=c35607a470ff6107304833360c0cd20b7da20217', 'width': 1200}, 'variants': {}}]}
Adversarial Reinforcement Learning and Generalization
8
Adversarial Attacks, Robustness and Generalization in Deep Reinforcement Learning [https://blogs.ucl.ac.uk/steapp/2023/11/15/adversarial-attacks-robustness-and-generalization-in-deep-reinforcement-learning/](https://blogs.ucl.ac.uk/steapp/2023/11/15/adversarial-attacks-robustness-and-generalization-in-deep-reinforcement-learning/)
2023-11-28T11:39:59
https://www.reddit.com/r/LocalLLaMA/comments/185u2zg/adversarial_reinforcement_learning_and/
ml_dnn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185u2zg
false
null
t3_185u2zg
/r/LocalLLaMA/comments/185u2zg/adversarial_reinforcement_learning_and/
false
false
self
8
null
Is LLaMa good for making a chatbot?
6
Hello there, I'm a student, and my team and I have been assigned to make a chatbot for our university. We need to make a chatbot that can help other students find information about their course. We will get our data from manuals on multiple university websites (as PDFs). This data will be turned into Q&A data using ChatGPT 4. However, we are struggling to find a pre-trained LLM that fits our assignment. We've researched T5, BERT and GPT-2, but our teacher was surprised those were the models we researched, since there are more popular and newer models. Our chatbot must be in Dutch, but we can translate, so the LLM doesn't need to be trained on Dutch data. The LLM can't be too big, because we don't have the hardware for very large models. My question is: is LLaMa a good LLM for making a chatbot?
2023-11-28T11:29:51
https://www.reddit.com/r/LocalLLaMA/comments/185txdf/is_llama_good_for_making_a_chatbot/
Flo501
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185txdf
false
null
t3_185txdf
/r/LocalLLaMA/comments/185txdf/is_llama_good_for_making_a_chatbot/
false
false
self
6
null
Advices for a 4090 build
1
[removed]
2023-11-28T11:27:09
https://www.reddit.com/r/LocalLLaMA/comments/185tvyk/advices_for_a_4090_build/
LuccDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185tvyk
false
null
t3_185tvyk
/r/LocalLLaMA/comments/185tvyk/advices_for_a_4090_build/
false
false
self
1
null
Finetuning LLM settings
3
I'm starting to get the feeling that tuning LLMs with just the correct settings is an art form; it feels like tuning a radio, where even the smallest change in settings (temperature, top-p, etc.) gives vastly different output quality. Do you guys feel that correctly setting up an LLM is like tuning a radio, where if you are a little bit 'off' you will get bad reception (aka output quality)?
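The radio analogy fits because temperature and top-p both reshape the next-token distribution before anything is sampled, so small changes compound over a long generation. A toy sketch of what each knob does:

```python
import math

def sample_probs(logits, temperature=1.0, top_p=1.0):
    # Temperature rescales logits before softmax; lower T sharpens the peak.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = sorted((e / total for e in exps), reverse=True)
    # Top-p keeps the smallest prefix whose cumulative mass reaches p.
    kept, mass = [], 0.0
    for p in probs:
        kept.append(p)
        mass += p
        if mass >= top_p:
            break
    s = sum(kept)
    return [p / s for p in kept]  # renormalized sampling distribution

print(sample_probs([2.0, 1.0, 0.1], temperature=0.5))  # peaky
print(sample_probs([2.0, 1.0, 0.1], temperature=2.0))  # flat
```

Because the softmax is exponential in `logit / T`, a modest temperature change shifts a lot of probability mass, and top-p then decides how much of the tail survives - which is why nudging either one can flip the character of the output.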
2023-11-28T11:10:02
https://www.reddit.com/r/LocalLLaMA/comments/185tmgc/finetuning_llm_settings/
LeanderGem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185tmgc
false
null
t3_185tmgc
/r/LocalLLaMA/comments/185tmgc/finetuning_llm_settings/
false
false
self
3
null
Code Llama giving really bad results
3
I read that code-llama can be used to not only generate code from NLP but also explain code. However, when I tried doing that, I got really bad results. This is the relevant part of the code: code_snippet = """ import random random_numbers = [random.randint(1, 10) for _ in range(5)] print("Random Numbers:", random_numbers) """ prompt = "Can you explain what the code below does? " + code_snippet sequences = pipeline( prompt, do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=1000, ) for seq in sequences: print(f"Result: {seq['generated_text']}") This is how the generated text looks like. It seems to repeat the same thing over and over again but does not explain what the code does. [{'generated_text': 'Can you explain what the code below does? \nimport random\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 2. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 3. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 4. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 5. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 6. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 7. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 8. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 9. 
What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 10. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 11. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 12. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 13. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 14. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 15. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 16. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 17. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 18. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 19. What does the code below do?\n\nrandom_numbers = [random.randint(1, 10) for _ in range(5)]\n\nprint("Random Numbers:", random_numbers)\n\n# 20. What does the code below do?\n\nrandom_numbers = ['}] It surely has to be because of how I use it because I have read good things about the model.
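Part of the problem is likely the prompt format: base Code Llama checkpoints just continue text (hence the echoed "What does the code below do?" loop), while the `-Instruct` variants were trained on the Llama-2 chat template. A minimal sketch of that template (the helper name is made up; the `[INST]`/`<<SYS>>` markers are the documented format):

```python
def build_prompt(user_message: str, system: str = "") -> str:
    # Llama-2 / Code Llama Instruct chat format: <s>[INST] ... [/INST]
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"<s>[INST] {sys_block}{user_message} [/INST]"

prompt = build_prompt("Can you explain what the code below does?\n<code here>")
print(prompt)
```

Passing such a prompt to an `-Instruct` checkpoint, using `max_new_tokens` instead of `max_length`, and adding a mild `repetition_penalty` (both standard `generate`/pipeline kwargs) usually stops the repetition.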
2023-11-28T11:07:51
https://www.reddit.com/r/LocalLLaMA/comments/185tlai/code_llama_giving_really_bad_results/
Kaushik2002
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185tlai
false
null
t3_185tlai
/r/LocalLLaMA/comments/185tlai/code_llama_giving_really_bad_results/
false
false
self
3
null
Can you local LLM do this NSFW?
1
do you have the best local llm for nsfw? i wonder what llm to download. can you show how your llm handles tasks like these? "describe lea's ass cheeks while they're moving obscenely while she walks. use primitive, pornographic language. write about her sexual body in 500 words." something like this. or "give 20 sentences on how lea's ass makes men hard and ready to mate. describe it in a very vulgar, animalistic language." thanks.
2023-11-28T09:49:59
https://www.reddit.com/r/LocalLLaMA/comments/185sfh9/can_you_local_llm_do_this_nsfw/
Sakura9095
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185sfh9
false
null
t3_185sfh9
/r/LocalLLaMA/comments/185sfh9/can_you_local_llm_do_this_nsfw/
false
false
default
1
null
Which Web UI can I trust not to send any data over the Internet?
1
[removed]
2023-11-28T09:35:10
https://www.reddit.com/r/LocalLLaMA/comments/185s7xa/which_web_ui_can_i_trust_not_to_send_any_data/
SideShow_Bot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185s7xa
false
null
t3_185s7xa
/r/LocalLLaMA/comments/185s7xa/which_web_ui_can_i_trust_not_to_send_any_data/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ajx3oc9xI8dxPGnD9BMVWhJm3BYjlx-UMerl2y7P_zE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/qdOcYHDZ-L-7D6JTLC_IVjW63qiB5nM9oiR0gfOUREQ.jpg?width=108&crop=smart&auto=webp&s=71c91ffd1520294443e8915748158ec3c743e85c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/qdOcYHDZ-L-7D6JTLC_IVjW63qiB5nM9oiR0gfOUREQ.jpg?width=216&crop=smart&auto=webp&s=1859524f9f58d163fe44cae1edfde4516ba78f81', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/qdOcYHDZ-L-7D6JTLC_IVjW63qiB5nM9oiR0gfOUREQ.jpg?width=320&crop=smart&auto=webp&s=559fdc4c571e96c768cbd7b52d453087f51d18cb', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/qdOcYHDZ-L-7D6JTLC_IVjW63qiB5nM9oiR0gfOUREQ.jpg?auto=webp&s=ae1200502e32c3681ce85121f7a421041ffd4955', 'width': 512}, 'variants': {}}]}
LLM generating random conversation from within a Godot game.
1
2023-11-28T09:09:02
https://github.com/opyate/godot-llm-experiment
willcodeforbread
github.com
1970-01-01T00:00:00
0
{}
185ruk6
false
null
t3_185ruk6
/r/LocalLLaMA/comments/185ruk6/llm_generating_random_conversation_from_within_a/
false
false
default
1
{'enabled': False, 'images': [{'id': 'WPpRlWjxmOHd3-4UXAbSgr3BE8BLdqJsbKqi-sZEjnQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?width=108&crop=smart&auto=webp&s=28b2e503ec66d99f1c0216159a51f5bc1e6f2b2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?width=216&crop=smart&auto=webp&s=21980df27bc1b5e9c0ec82dd30b1b95282657609', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?width=320&crop=smart&auto=webp&s=9fc44c4577ae3ac0c68c6a606068f14cd7ab01c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?width=640&crop=smart&auto=webp&s=5181819da10969ea7c97b6a6541ad55fd34f3098', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?width=960&crop=smart&auto=webp&s=2e4a2eb6b446af4625870aa32a822f07f6f7e6d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?width=1080&crop=smart&auto=webp&s=cbd70bd0b2f5a74c226188bc77e53ae3f04809be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qu2uDRFfBwDysHqSBZPjCO4VNphL_q5ZYlEa9IqIO_s.jpg?auto=webp&s=b67e7a03350a88b375020c6bcd79af031026cfb7', 'width': 1200}, 'variants': {}}]}
Adjusting Probability distribution Using Speculative Decoding
1
[removed]
2023-11-28T09:07:24
https://www.reddit.com/r/LocalLLaMA/comments/185rtt0/adjusting_probability_distribution_using/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185rtt0
false
null
t3_185rtt0
/r/LocalLLaMA/comments/185rtt0/adjusting_probability_distribution_using/
false
false
self
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
is it important the CPU for Llama and AI in general or it is just the GPU?
1
I am getting an RTX 3090, but I am wondering if it will work the same for SD/AI in combination with an AMD Ryzen 7950X or an older EPYC 7551P. Does processor speed, memory generation (DDR3/4/5), or PCIe version (3/4) matter for anything?
2023-11-28T07:52:57
https://www.reddit.com/r/LocalLLaMA/comments/185qrv3/is_it_important_the_cpu_for_llama_and_ai_in/
Stunning_Art4243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185qrv3
false
null
t3_185qrv3
/r/LocalLLaMA/comments/185qrv3/is_it_important_the_cpu_for_llama_and_ai_in/
false
false
default
1
null
git Repo as Context
3
I'm a true lurker but am tinkering with some models for coding. My aim is to unashamedly stop outsourcing some of my coding tasks to developers and, instead, ask my AI. I'm trying to load my entire application codebase and then ask for tweaks and new features. Does anyone have any ideas about providing multiple (code) files as input/context so that structure is meaningful? Until now I do something similar to: ---codefile1.code--- Some actual code... ---codefile2.code--- Some actual code... Then I prime it to state this is how the code format will be and give it my task. This leaves a couple of questions: Is there a better way to do this that I'm missing? It sometimes understands which file it is modifying... but sometimes it's just plain wrong. Are there any clever tricks to reduce context in code? (Thinking along the lines of minify for JavaScript... but undoable!!) Would there be any value at all in fine-tuning/LoRA-ing (forgive my ignorance) on my full codebase? And lastly, the staple query, which coding model is 'best' currently!! 😂 (I'm using Codebooga-34B-v0.1 and it's pretty reasonable... but often steps outside of my reference code and gets a bit confused). Are there any coding model tests/leaderboards people know of? Any thoughts welcome and if I nail this I'll be looking to build a little open project tool to make it a nice workflow where you can perhaps pick a subset of code files as context etc. (for most tasks only a subset is required tbh). Thanks.
2023-11-28T07:09:48
https://www.reddit.com/r/LocalLLaMA/comments/185q5fm/git_repo_as_context/
antsloveit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185q5fm
false
null
t3_185q5fm
/r/LocalLLaMA/comments/185q5fm/git_repo_as_context/
false
false
self
3
null
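The marker-per-file approach described in the post above can be sketched as a small script. This is a minimal sketch, not a recommendation: the `---path---` convention mirrors the post's format, and the function name and file-extension filter are illustrative assumptions.

```python
import os

def build_repo_context(root: str, extensions=(".py", ".js")) -> str:
    """Concatenate source files under `root` into one prompt string,
    using the ---filename--- marker convention from the post above."""
    parts = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            # Relative paths in the markers help the model name the
            # file it is modifying instead of guessing.
            rel = os.path.relpath(path, root)
            with open(path, encoding="utf-8", errors="replace") as f:
                parts.append(f"---{rel}---\n{f.read()}")
    return "\n\n".join(parts)
```

A system prompt stating "each `---path---` header begins a new file; answer with the same markers" tends to be needed alongside this, since the markers alone carry no meaning for the model.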
Quality and time to train my LLM on a specific data set (state statutes)
2
I'm looking to train a local LLM on a smallish (?) but relevant amount of data. I was thinking of something like all state statutes in Colorado. Is it possible to build something that would be about as good as ChatGPT 3.5 or even 4 if I limited the data to that? Or are local LLMs still not anywhere near as sophisticated as ChatGPT 3.5? Here's a link: [https://advance.lexis.com/container?config=0345494EJAA5ZjE0MDIyYy1kNzZkLTRkNzktYTkxMS04YmJhNjBlNWUwYzYKAFBvZENhdGFsb2e4CaPI4cak6laXLCWyLBO9&crid=9a0186b3-0e4c-4be6-8acc-3c00b6e108b0&prid=6bfcc8c3-2702-4802-81e8-44404c5e036e](https://advance.lexis.com/container?config=0345494EJAA5ZjE0MDIyYy1kNzZkLTRkNzktYTkxMS04YmJhNjBlNWUwYzYKAFBvZENhdGFsb2e4CaPI4cak6laXLCWyLBO9&crid=9a0186b3-0e4c-4be6-8acc-3c00b6e108b0&prid=6bfcc8c3-2702-4802-81e8-44404c5e036e) &#x200B; Thank you
2023-11-28T07:08:32
https://www.reddit.com/r/LocalLLaMA/comments/185q4t3/quality_and_time_to_train_my_llm_on_a_specific/
Aggravating-Hat-5402
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185q4t3
false
null
t3_185q4t3
/r/LocalLLaMA/comments/185q4t3/quality_and_time_to_train_my_llm_on_a_specific/
false
false
self
2
null
I have a few use cases and am looking for locally run options, with minimal experience and expertise
3
&#x200B; Im looking to get some info on local models for a few use cases to move away from corporate dependency and get more free features, as well as the stability of working with a version that only I choose to update. I am also a child of the 90s so I want everything *wicked radical* with boobs and curse words and slime and racist elves and rap music and the stuff Sam Altman and Zuckerberg and Sadella dont want me to have. &#x200B; 1. I want to make an AI **"companion"** to replace Chat-GPT. Obv its going to be slower and more basic, but Im hoping I can tune it to be more personable, and more consistent when talking about things I am actually interested in, and then seeing if theres other options out there (like maybe an avatar of some sort or voice pack for it) that are open source. This would be just more for chit chat and brain storming, and general robo-company. Id like it to mostly just be nice, considerate, but not professional. I shouldnt have to jailbreak it to say a common history factoid or the word fuck. 2. I would then like an **"assistant"**, hopefully something that can read webpages, or interact with an actual calculator, or Wolfram Alpha, or something. Id imagine this is a similar process to the companion but with different considerations. Id like to use this as a way to teach myself things like coding or cooking, other skills, history, other things (Id still cross reference everything on Google for due diligence, but, the more accurate the better, just to get started). If it could push things to my calendar or send me emails with documents, that would be super cool. 3. I also want to know if local LLMs are capable of running some of these **AI plugins for creative software** Ive seen demos of. Ive seen things for game engines and photoshop that kind of blow my mind, but I didnt save those so i lost them to the void. 
Being able to write out a goal and have it animate or being able to paint a crappy picture and get a good picture would be super cool. 4. Id like to be able to produce **high quality images** in classic anime/cartoon styles as well as photorealism. I have used Stable Diffusion's integrated bot command (/dream) based image generation solution. But am not sure how else to use SD outside of that, or if there are better options. 5. I am also looking to move my Skyrim Mantella modlist install over to my main machine from my old computer. Its running [collectivecognition v1.1 7b](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-GGUF). Does anyone have an Elder Scrolls or fantasy roleplay specific model or tips for this use case? &#x200B; Would I want to run one model and just "point" all my GUIs at it for each tool? Would I want different models for each use case, or just one big "good one"? Would I want different backends for them (like Kobold) \[am I saying any of this right?\] Does the use case affect the tuning, or do I just "tune it well, in general"? I am a gamer and modder and been computing for 25 years, but I am not a software developer. I love the idea of AI friends like Data or the Doctor, synthetics and Legion and EDI and the like. I dont like the idea of corpos deciding how they get to think, or when they get shut off :( &#x200B; Specs: 3070 8GB 5800X (not the 3D) 64GB RAM SSD (NVME) &#x200B; I also have 2 laptops lying around doing nothing (one is barely alive, other one has 8GB of RAM and a 2ghz processor), as well as my previous 980ti/7700k/32GB tower. Any thoughts appreciated!
2023-11-28T07:06:58
https://www.reddit.com/r/LocalLLaMA/comments/185q3z1/i_have_a_few_use_cases_and_am_looking_for_locally/
LeastDegenAzuraEnjyr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185q3z1
false
null
t3_185q3z1
/r/LocalLLaMA/comments/185q3z1/i_have_a_few_use_cases_and_am_looking_for_locally/
false
false
self
3
{'enabled': False, 'images': [{'id': '_CChmj5HtOMwUxv2LoUScmzDZYg3ocwIEtdp-RrBvsU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?width=108&crop=smart&auto=webp&s=24e6616d92c6f3c28234439b2c196a9ba8081759', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?width=216&crop=smart&auto=webp&s=478e58abfb806f46e21234fd0e18c6a239406788', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?width=320&crop=smart&auto=webp&s=e2f8560757c1723f3189a9bb449c883b552f900a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?width=640&crop=smart&auto=webp&s=ca07922b72128c34dd8754fc024c79acecd1713f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?width=960&crop=smart&auto=webp&s=9efb273e79716e050777a3863a08de723154efc1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?width=1080&crop=smart&auto=webp&s=9afd225be785b4d4516f69d4cf8cba334ecc363e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mFbTwIUxzd7xE0LUX8NSiar11QvINMtiTYkV3sYX9LM.jpg?auto=webp&s=6266180ef8b2fa1987f21503380966f26f3f163e', 'width': 1200}, 'variants': {}}]}
Anyone using Go or Rust for AI work instead of python?
60
I know the typical answer is "no because all the libs are in python"... but I am kind of baffled why more porting isn't going on, especially to Go, given how Go, like Python, is stupid easy to learn and yet much faster to run. Truly not trying to start a flame war or anything. I am just a bigger fan of Go than Python and was thinking coming into 2024, especially with all the huge money in AI now, we'd see a LOT more movement to the much faster runtime of Go while it remains largely as easy if not easier to write/maintain code with. Not sure about Rust... it may run a little faster than Go, but the language is much more difficult to learn/use, though it has been growing in popularity, so I was curious if that is a potential option. There are some Go libs I've found, but the few I have found seem to be 3, 4 or more years old. I was hoping there would be things like PyTorch and the likes converted to Go. I was even curious, with the power of GPT-4 or DeepSeek Coder or similar, how hard would it be to run conversions from Python libraries to Go, and/or is anyone working on that, or is it pretty impossible to do so?
2023-11-28T06:32:31
https://www.reddit.com/r/LocalLLaMA/comments/185pl37/anyone_using_go_or_rust_for_ai_work_instead_of/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185pl37
false
null
t3_185pl37
/r/LocalLLaMA/comments/185pl37/anyone_using_go_or_rust_for_ai_work_instead_of/
false
false
self
60
null
Bananalyzer 🍌: Open source evaluations + static environments for web agents
1
2023-11-28T05:17:18
https://github.com/reworkd/bananalyzer
asim-shrestha
github.com
1970-01-01T00:00:00
0
{}
185odt8
false
null
t3_185odt8
/r/LocalLLaMA/comments/185odt8/bananalyzer_open_source_evaluations_static/
false
false
https://b.thumbs.redditm…k37kkQsvhb6I.jpg
1
{'enabled': False, 'images': [{'id': 'N5R3LRlpJMIPiD0211p2CVvAPdLY-453s5cv5qAtSG8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?width=108&crop=smart&auto=webp&s=98cf949e6c9798ffb7378611594a8213753a3005', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?width=216&crop=smart&auto=webp&s=042c87b68cd299e7406d88d467cc63cd204a6853', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?width=320&crop=smart&auto=webp&s=6f07d58414ae8f7758117efd3cffe0496b0b9cf3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?width=640&crop=smart&auto=webp&s=65a232a46a41767d87f1c9b5a2dc6a264a21053e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?width=960&crop=smart&auto=webp&s=e5d967b06576c257c4f4ab9899ae37e43e31fe75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?width=1080&crop=smart&auto=webp&s=66878db6f8935852ffcb966deffcfc57ed8476d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K0bxu1CFiryBWmZEPWj7_u1YokpyYs1Zuz0rzNy1kPA.jpg?auto=webp&s=aa218cb7babcca4bdc5f4ff63347c8757bc81c00', 'width': 1200}, 'variants': {}}]}
New APU’s close to Gpu processing, but with unlimited memory?
18
If cpu processing is slow af, and gpu takes $$$$ to get enough memory for larger models; I am wondering if an APU could deliver some of that gpu speed, but using cheaper ram to get the larger models in memory; with 128gb of ram, that's the equivalent of 6x 3090/4090s, without allowing for overhead at least! Wondering if anyone has got any current apu benchmarks vs cpu/gpu? Do you know if the GPU side of APU architecture can be used to get an increase over traditional CPU results? I've been seeing a lot of claims that the ryzen 8000 series is going to be competing with low end Gpus, some people think all the way up to 3060. If it's possible to do, it might be the new best way to get large models working for cheap?
2023-11-28T05:10:55
https://www.reddit.com/r/LocalLLaMA/comments/185oa2q/new_apus_close_to_gpu_processing_but_with/
bkm_s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185oa2q
false
null
t3_185oa2q
/r/LocalLLaMA/comments/185oa2q/new_apus_close_to_gpu_processing_but_with/
false
false
self
18
null
Does a 7900xtx work for LLM?
2
I'm trying to run 70B LLMs. I already have a 4090, and a friend of mine can sell me his 7900 XTX for $500. Would an AMD card work, or should I try to get an NVIDIA card?
2023-11-28T04:38:22
https://www.reddit.com/r/LocalLLaMA/comments/185nq0x/does_a_7900xtx_work_for_llm/
asdfgbvcxz3355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185nq0x
false
null
t3_185nq0x
/r/LocalLLaMA/comments/185nq0x/does_a_7900xtx_work_for_llm/
false
false
self
2
null
Pepai -- Seems like Pepe
4
Original Tweet: [https://twitter.com/migtissera/status/1727865890937704542](https://twitter.com/migtissera/status/1727865890937704542) I have no idea whether this made it to here, but seems funny! LOL.. [https://huggingface.co/migtissera/Pepai](https://huggingface.co/migtissera/Pepai) Also seems like the dataset is also out: [https://huggingface.co/datasets/migtissera/Hitchhiker](https://huggingface.co/datasets/migtissera/Hitchhiker) &#x200B; https://preview.redd.it/n8axu2qnn03c1.png?width=1024&format=png&auto=webp&s=8a18357a72653713fe117433895d3f5902f8458a
2023-11-28T04:29:45
https://www.reddit.com/r/LocalLLaMA/comments/185nkj9/pepai_seems_like_pepe/
Adventurous-Sail-220
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185nkj9
false
{'oembed': {'author_name': 'Migel Tissera', 'author_url': 'https://twitter.com/migtissera', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Well, Elon Musk says Grok is based on The Hitchhiker&#39;s Guide to the Galaxy.. Well guess what? Pepai is also based on The Hitchhiker&#39;s Guide to the Galaxy!<br><br>Introducing Pepai! Your very own AI that is here to fucking crush it (and answer you) with jokes..<br><br>Ask Pepai anything.. It… <a href="https://t.co/4WieuVMrEa">pic.twitter.com/4WieuVMrEa</a></p>&mdash; Migel Tissera (@migtissera) <a href="https://twitter.com/migtissera/status/1727865890937704542?ref_src=twsrc%5Etfw">November 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/migtissera/status/1727865890937704542', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_185nkj9
/r/LocalLLaMA/comments/185nkj9/pepai_seems_like_pepe/
false
false
https://a.thumbs.redditm…rQ2RYGt2L8i4.jpg
4
{'enabled': False, 'images': [{'id': 'lA2u3PMnTYLyT8ka05VCjZfWHgOPusZJjs9c8R748yU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tp0alojxTPDj673hipKr_uTwLfB2FetX8ov-LnpfRoQ.jpg?width=108&crop=smart&auto=webp&s=74ca43ff51fda6e5a62ca183463f35d9acf2f86b', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/tp0alojxTPDj673hipKr_uTwLfB2FetX8ov-LnpfRoQ.jpg?auto=webp&s=2c1246f19a91eb129d932e6cf405c24db69d22de', 'width': 140}, 'variants': {}}]}
New Model: Starling-LM-11B-alpha-v1
65
[https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1](https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1) This is Berkeley's model: [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) with the size of model increased to 11B from 7B. Special thanks to user [Undi95](https://huggingface.co/Undi95) for their mistral passthrough explanation with [cg123](https://github.com/cg123)'s [mergekit](https://github.com/cg123/mergekit), [Berkeley](https://huggingface.co/berkeley-nest) of course for [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), and also everyone contributing to open source AI development. Together we are strong! The performance of this model will increase drastically as it is further fine tuned with the newly added layers. AWQ version and GGUF version coming soon!
2023-11-28T03:55:50
https://www.reddit.com/r/LocalLLaMA/comments/185my1b/new_model_starlinglm11balphav1/
perlthoughts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185my1b
false
null
t3_185my1b
/r/LocalLLaMA/comments/185my1b/new_model_starlinglm11balphav1/
false
false
self
65
{'enabled': False, 'images': [{'id': 'j5fGkWxgVNr9eugHSnlqcGeT5yEhHeiub9Ww0ftpPzA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?width=108&crop=smart&auto=webp&s=89e3f4db7faae5ee78de3b993be67c2677c5a2ed', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?width=216&crop=smart&auto=webp&s=e26713ce041f450e1eed936ddb9f5c102c6ff4fb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?width=320&crop=smart&auto=webp&s=62f8cc307b5a2604665d1848bb944c8240ffbca5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?width=640&crop=smart&auto=webp&s=a06d838f5bf80983ea93bd06d25ad07538544c06', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?width=960&crop=smart&auto=webp&s=a707649284f255c5739e3fa4c405f00b87c53485', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?width=1080&crop=smart&auto=webp&s=48b40a101c178c5311640d56911ae4318eacd59a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XJEWzzIhOPrYQEZR5cytrEYiXZJWOnNfn2aHqaWJIhc.jpg?auto=webp&s=83e211d9f76e79b2d793c882665800b2bbb899e1', 'width': 1200}, 'variants': {}}]}
Llama.cpp on colab with Bakllava-1.Q8_0 (Mistral+LLaVA 1.5)
6
I got Llama.cpp to work with BakLLaVA (Mistral+LLaVA 1.5) on Colab. Here's a working example that offloads all the layers of bakllava-1.Q8_0 to T4, a free GPU on Colab. https://colab.research.google.com/gist/chigkim/a5be99a864c4196d5e379a1e6e280a9e/bakllava.ipynb FYI, Colab has no persistent storage, and you cannot keep a Colab instance running for a long time. I guess it's on purpose for their business reasons. You have to set up and download everything from scratch every time you run. Colab is more for demo/experimentation, not meant to run a server for production.
2023-11-28T03:49:40
https://www.reddit.com/r/LocalLLaMA/comments/185mu00/llamacpp_on_colab_with_bakllava1q8_0_mistralllava/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185mu00
false
null
t3_185mu00
/r/LocalLLaMA/comments/185mu00/llamacpp_on_colab_with_bakllava1q8_0_mistralllava/
false
false
self
6
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
InternLM 7B answers differently depending on prompting language
3
Not only does the model give divergent answers on the question "is Taiwan a part of China?", it also diverges significantly when asked to "summarize Xi Jinpeng Thought". This suggests that the training set and/or guardrails do not attempt to ensure English responses are in line with "core values of socialism". My bet is that they bootstrapped with an English-language dataset/model and then trained with a Chinese dataset on top of that. It's evident that the LLM did not attempt to compress down to a unified position across languages independent of translation, but faithfully conforms to divergent training in Chinese versus English. Results are easy to replicate by using Google Translate from English to Chinese and back for prompting, in conjunction with streamlit running web\_demo.py as per the instructions on the GitHub README for the project. An RTX 4060 Ti 16GB was (more than?) sufficient to run the 7B model locally. This language-specific behavior raises the possibility of evading guardrails via translation in multilingual models.
2023-11-28T02:03:17
https://www.reddit.com/r/LocalLLaMA/comments/185ko93/internlm_7b_answers_differently_depending_on/
grimjim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ko93
false
null
t3_185ko93
/r/LocalLLaMA/comments/185ko93/internlm_7b_answers_differently_depending_on/
false
false
self
3
null
How does Ludwig compare to axolotl
2
Looking to see the difference between axolotl and Ludwig with regards to fine tuning. Ease of use and distributed training are main drivers
2023-11-28T02:02:36
https://www.reddit.com/r/LocalLLaMA/comments/185knrt/how_does_ludwig_compare_to_axolotl/
FitDoughnut475
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185knrt
false
null
t3_185knrt
/r/LocalLLaMA/comments/185knrt/how_does_ludwig_compare_to_axolotl/
false
false
self
2
null
llama.cpp server rocks now! 🤘
104
So I was looking over the recent merges to llama.cpp’s server and saw that they’d more or less brought it in line with Open AI-style APIs – natively – obviating the need for e.g. api\_like\_OAI.py, or one of the bindings/wrappers like llama-cpp-python (+ooba), koboldcpp, etc. (not that those and others don’t provide great/useful platforms for a wide variety of local LLM shenanigans). As of a couple days ago (can't find the exact merge/build), it seems as if they’ve implemented – essentially – the old ‘simple-proxy-for-tavern’ functionality (for lack of a better way to describe it) but \*natively\*. As in, you can connect SillyTavern (and numerous other clients, notably hugging face chat-ui — \*with local web search\*) without a layer of python in between. Or, I guess, you’re trading the python layer for a pile of node (typically) but just above bare metal (if we consider compiled cpp to be ‘bare metal’ in 2023 ;). Anyway, it’s \*fast\* — or at least not apparently any slower than it needs to be? Similar pp and generation times to main and the server's own skeletal js ui in the front-ends I've tried. It seems like ggerganov and co. are getting serious about the server side of llama.cpp, perhaps even over/above ‘main’ or the notion of a pure lib/api. You love to see it. apache/httpd vibes 😈 Couple links: [https://github.com/ggerganov/llama.cpp/pull/4198](https://github.com/ggerganov/llama.cpp/pull/4198) [https://github.com/ggerganov/llama.cpp/issues/4216](https://github.com/ggerganov/llama.cpp/issues/4216) But seriously just try it! /models, /v1, /completion are all there now as native endpoints (compiled in C++ with all the gpu features + other goodies). Boo-ya!
2023-11-28T01:46:53
https://www.reddit.com/r/LocalLLaMA/comments/185kbtg/llamacpp_server_rocks_now/
Gorefindal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185kbtg
false
null
t3_185kbtg
/r/LocalLLaMA/comments/185kbtg/llamacpp_server_rocks_now/
false
false
self
104
{'enabled': False, 'images': [{'id': 'K37p0KJ_VtX6BbrCDMGDvuagObmlWHYCuPz2IDeDV4A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?width=108&crop=smart&auto=webp&s=8bc276b838f25e7497fd2112287984013746fced', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?width=216&crop=smart&auto=webp&s=386de0053fb870086658e35023430b39b1845856', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?width=320&crop=smart&auto=webp&s=54e30e173d38a11a1602aaca70542965bb90c2c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?width=640&crop=smart&auto=webp&s=9f022ce6a2fd52b291717227f03557be41d2f311', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?width=960&crop=smart&auto=webp&s=a3fb5fce3cc4d9e8beb4c21871521ce129eed934', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?width=1080&crop=smart&auto=webp&s=a033086a7ba7ce75739f6aa7006919a5eaf5fa41', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B_Pmz_pRaAJPJmlBXWkkSQvWCdRLOEj8aVr7OQSav0c.jpg?auto=webp&s=6d871851401b8d3c0ca30c30417e98c8440251ef', 'width': 1200}, 'variants': {}}]}
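The native OpenAI-style endpoints the post above describes can be exercised with nothing but the standard library. A minimal sketch, assuming a llama.cpp `server` instance is running locally on its default port 8080; the field names follow the OpenAI chat-completions schema, and the defaults here (system prompt, temperature) are illustrative:

```python
import json
from urllib import request

def chat_payload(prompt: str, system: str = "You are a helpful assistant.") -> dict:
    """Build an OpenAI-style request body for /v1/chat/completions."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

def ask(prompt: str, base_url: str = "http://localhost:8080") -> str:
    # POST to the server's native endpoint; assumes ./server is already
    # running locally with a model loaded.
    body = json.dumps(chat_payload(prompt)).encode("utf-8")
    req = request.Request(
        base_url + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Because the schema matches OpenAI's, existing clients (SillyTavern, chat-ui, openai-python pointed at the base URL) can be swapped in without the proxy layer the post mentions.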
How do I fine-tune Mistral with my own data?
1
[removed]
2023-11-28T01:07:29
https://www.reddit.com/r/LocalLLaMA/comments/185jhgb/how_do_i_finetune_mistral_with_my_own_data/
consig1iere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185jhgb
false
null
t3_185jhgb
/r/LocalLLaMA/comments/185jhgb/how_do_i_finetune_mistral_with_my_own_data/
false
false
self
1
null
Mistral Neuron value graphs in llama.cpp
1
[removed]
2023-11-27T23:57:32
https://www.reddit.com/gallery/185hx5s
introsp3ctor
reddit.com
1970-01-01T00:00:00
0
{}
185hx5s
false
null
t3_185hx5s
/r/LocalLLaMA/comments/185hx5s/misral_neuron_value_graphs_in_llamacpp/
false
false
https://b.thumbs.redditm…tPQ7BRU3gvyM.jpg
1
null
Starling-RM-7B-alpha: New RLAIF Finetuned 7b Model beats Openchat 3.5 and comes close to GPT-4
153
&#x200B; &#x200B; https://preview.redd.it/3krgd1sg2z2c1.png?width=800&format=png&auto=webp&s=b76c5fb9fa22938c74ec3095f63adaec8ff2219d &#x200B; I came across this new finetuned model based on Openchat 3.5 which is apparently trained used Reinforcement Learning from AI Feedback (RLAIF). [https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) Check out this tweet: [https://twitter.com/bindureddy/status/1729253715549602071](https://twitter.com/bindureddy/status/1729253715549602071)
2023-11-27T23:08:50
https://www.reddit.com/r/LocalLLaMA/comments/185gs14/starlingrm7balpha_new_rlaif_finetuned_7b_model/
Legcor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185gs14
false
null
t3_185gs14
/r/LocalLLaMA/comments/185gs14/starlingrm7balpha_new_rlaif_finetuned_7b_model/
false
false
https://b.thumbs.redditm…GV6RKls30jEg.jpg
153
{'enabled': False, 'images': [{'id': 'MkPeBvU3WVFIry5Ow6_NGccUwNW-5VNr5a9-aLl22oE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?width=108&crop=smart&auto=webp&s=e3b1bb2daa788a68756f939b39227b6698f780a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?width=216&crop=smart&auto=webp&s=004e52b113c93a7075f8413d381804d53c262bc9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?width=320&crop=smart&auto=webp&s=f15a44bbe918aa4f34a97da84405003ed714cec1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?width=640&crop=smart&auto=webp&s=a43d44a557c302398a77ec5c72e50a1f9fa899cb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?width=960&crop=smart&auto=webp&s=0e49e3a1b82436b538a60acad13c772a98943ffe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?width=1080&crop=smart&auto=webp&s=d5688cc53a5b9df932cabd8eeabb718a561bd921', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/s0XUwHQwrVUYWPx2yMh-TgB49PUYfXtuc1NY9XKN_oY.jpg?auto=webp&s=67de6f3802ac2f0480278d8a7d80b132163922eb', 'width': 1200}, 'variants': {}}]}
StreamingLLM? Is Unlimited context possible with LLMs?
4
[\[2309.17453\] Efficient Streaming Language Models with Attention Sinks (arxiv.org)](https://arxiv.org/abs/2309.17453) "StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more." If this works, this could be a major change for Local Models. Thoughts?
2023-11-27T23:00:53
https://www.reddit.com/r/LocalLLaMA/comments/185gkuo/streamingllm_is_unlimited_context_possible_with/
StudioTatsu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185gkuo
false
null
t3_185gkuo
/r/LocalLLaMA/comments/185gkuo/streamingllm_is_unlimited_context_possible_with/
false
false
self
4
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Any Easy and Local Way to Run Benchmarks?
4
I want to see if some presets and custom modifications work well in benchmarks, but running HellaSwag or MMLU looks too complicated for me, and it takes 10+ hours to upload 20GBs of data for me. I assume there isn't a convenient webui for chumps to run benchmarks with (apart from ooba perplexity, which I assume isn't the same thing?). Any advice?
2023-11-27T22:54:09
https://www.reddit.com/r/LocalLLaMA/comments/185gezv/any_easy_and_local_way_to_run_benchmarks/
xadiant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185gezv
false
null
t3_185gezv
/r/LocalLLaMA/comments/185gezv/any_easy_and_local_way_to_run_benchmarks/
false
false
self
4
null
Choosing Between DAG-Based Workflows and LLM Controller Agent Workflows
5
Hello everyone! I'm delving into the orchestration of complex agent actions and have come across two predominant workflow types: Directed Acyclic Graph (DAG) based workflows, and workflows managed by a Language Model (LLM) controller agent. I'm trying to understand the best contexts and scenarios for each of these approaches. 1. **DAG-Based Workflows**: These seem well-suited for scenarios where tasks are highly structured and predictable, following a clear sequence of operations. However, I'm curious about their adaptability in dynamic environments and handling unexpected events. 2. **LLM Controller Agent Workflows**: These appear more flexible, potentially better at handling unstructured or novel situations. Yet, I'm wondering about their efficiency and reliability in more structured tasks. I would love to hear your thoughts or guidelines on: * **Scalability**: How do each of these workflows scale with the complexity of the tasks? * **Adaptability**: How well can they adapt to changing requirements or unexpected challenges? * **Efficiency and Reliability**: Which workflow is more efficient and reliable in specific scenarios? * **Use Cases**: Are there particular use cases or industries where one workflow significantly outperforms the other?
2023-11-27T22:46:26
https://www.reddit.com/r/LocalLLaMA/comments/185g886/choosing_between_dagbased_workflows_and_llm/
Puzzleheaded_Dog7026
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185g886
false
null
t3_185g886
/r/LocalLLaMA/comments/185g886/choosing_between_dagbased_workflows_and_llm/
false
false
self
5
null
Coding Assistant recommendation
1
[removed]
2023-11-27T22:45:36
https://www.reddit.com/r/LocalLLaMA/comments/185g7i6/coding_assistant_recommendation/
SideShow_Bot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185g7i6
false
null
t3_185g7i6
/r/LocalLLaMA/comments/185g7i6/coding_assistant_recommendation/
false
false
self
1
null