| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Langchain, Langchain.js, vs AutoGPT for local agent development | 12 | I hope this is a reasonably valid question - I'm interested in experimenting with local LLMs (either a single LLM or multiple, or a single one with different prompts for different purposes that can interact).
I've run an early toy version of AutoGPT (are all versions of AutoGPT toys? zing!) locally against the OpenAI API, but I have yet to stand up anything using Langchain. My intent is to use a llama30b model locally, either directly through python/transformers or through an OpenAI-compatible API.
What I'm wondering is whether I should choose AutoGPT vs Langchain vs Langchain JS as a target platform. I suppose I could start with AutoGPT and move into Langchain as needed. Or, it might be that AutoGPT leverages Langchain, I'm not sure. I'm also a bit hesitant/frustrated with Python in general, which makes Langchain.js attractive, but I'm concerned that Langchain.js will lag too far behind Langchain (Python) and that I'll regret focusing on Langchain JS.
Other requirements - I'd like to integrate TTS / STT over bluetooth on linux, text with either iMessage or the Twilio API (send/receive), kick off periodic jobs with cron, maintain some state with some kind of vector store, I imagine, and allow agents to talk to each other to solve problems. I'd also like to be able to visualize the system working through a web browser, maybe with d3. Lastly, I'd like the agents to be able to browse the web, ideally by controlling a real browser vs. via something like wget, and I'd like to give them access to a bash prompt in a VM, such that I can see what they're doing and they can interact with me as they use bash (as in a student/tutor or pair programming situation).
Basically, I want everything, eventually.
Has anyone built something like this, or started work in this direction? What is your toolchain like?
Are there any OSS projects that integrate the TTS/STT component with the composability of langchain/autogpt?
Are there any other orchestration considerations I should be aware of? | 2023-05-27T02:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/13svump/langchain_langchainjs_vs_autogpt_for_local_agent/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13svump | false | null | t3_13svump | /r/LocalLLaMA/comments/13svump/langchain_langchainjs_vs_autogpt_for_local_agent/ | false | false | self | 12 | null |
QLoRA Mindblowing | 25 | [removed] | 2023-05-27T02:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/13svumc/qlora_mindblowing/ | AemonAlgizVideos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13svumc | false | null | t3_13svumc | /r/LocalLLaMA/comments/13svumc/qlora_mindblowing/ | false | false | default | 25 | null |
Using AMD GPUs? | 28 | AMD has just launched new GPUs that offer more VRAM for the money: [https://www.tomshardware.com/news/amd-48gb-radeon-pro-w7000-gpus-triple-slot-blowers](https://www.tomshardware.com/news/amd-48gb-radeon-pro-w7000-gpus-triple-slot-blowers)
Does anyone have experience running LLMs on them? Is it more difficult to set up than on nVidia? | 2023-05-27T00:42:20 | https://www.reddit.com/r/LocalLLaMA/comments/13sthxx/using_amd_gpus/ | chip_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sthxx | false | null | t3_13sthxx | /r/LocalLLaMA/comments/13sthxx/using_amd_gpus/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'q4JS2R4ynRX4daAnqF7_23bsaeR-54weIOwqCNk39FA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=108&crop=smart&auto=webp&s=718dad78523bb0a8645d37029f7f608720c4cf6e', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=216&crop=smart&auto=webp&s=24fb33120b48d308a33d5352abd1c7c25616146e', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=320&crop=smart&auto=webp&s=7ab8daddc53271d744cb10e3905fbd8860797a48', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=640&crop=smart&auto=webp&s=678c59344964aa6ee34e2e3d5f06937cc5a3d3f6', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=960&crop=smart&auto=webp&s=ec67288b6f79b03cfc75296688bc622ca4acdd4f', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=1080&crop=smart&auto=webp&s=5f5a143a64908d2d622fdd5b3d78359be7b54e3e', 'width': 1080}], 'source': {'height': 679, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?auto=webp&s=984b73cbdbaa1cfd6676113a709cce102b67526a', 'width': 1200}, 'variants': {}}]} |
Is there are single LLM answering this properly, without giving a lecture on formal logic? | 2 | 2023-05-26T23:22:24 | NancyAurum | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13srpl3 | false | null | t3_13srpl3 | /r/LocalLLaMA/comments/13srpl3/is_there_are_single_llm_answering_this_properly/ | false | false | 2 | {'enabled': True, 'images': [{'id': '6Xde09hfBacTBEI-GV8-iA20-206OoP04fDGpQThdBU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=108&crop=smart&auto=webp&s=77aa8b4d4ef749e5e5de1a323ac7d6af4b0e404f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=216&crop=smart&auto=webp&s=1362b7d631c502eb22b7468064a8143f24066d6c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=320&crop=smart&auto=webp&s=78fed2bc2724941259098f8fb0948634cdc27914', 'width': 320}, {'height': 361, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=640&crop=smart&auto=webp&s=1eba6ec410b5d9b854934ad72a3215d69c17a789', 'width': 640}, {'height': 541, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=960&crop=smart&auto=webp&s=cc42d450789db4f1630f83d22e1fd25bacaae537', 'width': 960}, {'height': 609, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=1080&crop=smart&auto=webp&s=7f354410a4138cfc63e8984cf86928efad594e8e', 'width': 1080}], 'source': {'height': 1109, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?auto=webp&s=831aab61719dfe45cd1570dddd9c242d25f9a6f4', 'width': 1966}, 'variants': {}}]} | |||
Local LLM Suggestion | 1 | [removed] | 2023-05-26T23:02:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13sr8hz | false | null | t3_13sr8hz | /r/LocalLLaMA/comments/13sr8hz/local_llm_suggestion/ | false | false | default | 1 | null | ||
Hosting your own embeddings API | 7 | I am working on a web app that uses embeddings and an LLM. I originally started with OpenAI embeddings but then just started using sentence transformers CPU models and actually got really great results for my use case.
I have an API running on an ec2 instance that was doing the embeddings calculations but have had some performance issues. My API couldn't really handle the compute requirements and beefy ec2 instances are kind of expensive. So I was trying to build my own cheap embedding API. I built a series of lambdas with a queue and am having really good results, except for the cold start time. I can process 250k tokens with my sentence transformers lambda in about 50 seconds if it's warm but 120 SECONDS if it's a cold start!
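For reference, the core of such a lambda is only a few lines; this is a rough sketch of the kind of handler I mean (the model name is just an example of any sentence-transformers model):

```python
# Minimal embeddings handler sketch. Loading the model at module scope is
# what makes warm invocations fast and cold starts painful.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, swap for your own

def handler(event, context):
    texts = event["texts"]  # list of strings to embed
    vectors = model.encode(texts, batch_size=32, show_progress_bar=False)
    return {"embeddings": [v.tolist() for v in vectors]}
```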
Anyone else have any success with building their own cheap embeddings server and hosting it in the cloud? | 2023-05-26T22:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/13sqpgf/hosting_your_own_embeddings_api/ | java_dev_throwaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sqpgf | false | null | t3_13sqpgf | /r/LocalLLaMA/comments/13sqpgf/hosting_your_own_embeddings_api/ | false | false | self | 7 | null |
Help in running llama with openblas/ cublas - noob | 2 | [removed] | 2023-05-26T21:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/13spb6e/help_in_running_llama_with_openblas_cublas_noob/ | Royal_Desk_4697 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13spb6e | false | null | t3_13spb6e | /r/LocalLLaMA/comments/13spb6e/help_in_running_llama_with_openblas_cublas_noob/ | false | false | default | 2 | null |
ChatGPT went schizo and began dumping it's latent space of User/ChatGPT pairs: https://sharegpt.com/c/H0A4Ii4 Then it declares: "No, users are not allowed to extract such pairs out of my latent space through targeted conversations. My training data and latent space are protected by intellectual... | 0 | 2023-05-26T21:04:18 | https://sharegpt.com/c/H0A4Ii4 | NancyAurum | sharegpt.com | 1970-01-01T00:00:00 | 0 | {} | 13sogfg | false | null | t3_13sogfg | /r/LocalLLaMA/comments/13sogfg/chatgpt_went_schizo_and_began_dumping_its_latent/ | false | false | default | 0 | null | |
Some people and their LLMs :P | 1 | [removed] | 2023-05-26T20:58:13 | https://www.reddit.com/r/LocalLLaMA/comments/13sob1j/some_people_and_their_llms_p/ | Innomen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sob1j | false | null | t3_13sob1j | /r/LocalLLaMA/comments/13sob1j/some_people_and_their_llms_p/ | false | false | default | 1 | null |
What are some good prompts to show the difference between uncensored and normal models? | 2 | I get about the same response from WizardLM-30B-Uncensored.ggmlv3.q5_1 as I get from alpaca-lora-65B.ggmlv3.q5_1.bin
Do you have anything that demonstrates the difference? I thought asking about how to steal a car would be a differentiator but they both gave about the same answer. | 2023-05-26T20:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/13snr0p/what_are_some_good_prompts_to_show_the_difference/ | jeromeie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13snr0p | false | null | t3_13snr0p | /r/LocalLLaMA/comments/13snr0p/what_are_some_good_prompts_to_show_the_difference/ | false | false | self | 2 | null |
Success with a local voice chat agent | 82 | Hi all, I just wanted to share my joy really - and offer thanks for the information available on this and some similar Reddits.
I've written a simple script that uses an LLM, voice recognition and TTS, and I can now carry a £15 bluetooth speakerphone/conferencing puck around my house and have pure voice-only conversations with my PC. We just had a good chat about what book I should read next, how it compares to similar books I've read before, what music I should listen to, and a little discussion about creativity. The mute button on the device can be used effectively as a push-to-talk feature and so I've been able to disable the "wake word" feature I originally wrote, because it's no longer necessary. This was all running locally on my modest gaming PC (my favourite model is Wizard Vicuna 7B Uncensored, and I would like to offer much appreciation to Faldore and TheBloke for your amazing efforts in bringing these models to us home users!). I also made a version that uses GPT, with online STT and TTS APIs, and while it's undoubtedly a lot better, the local one is plenty good enough - and the only way is up from here! I also appreciate the work that's gone into Oobabooga, because I think that's currently the easiest way for us mere mortals to load and use models (and in my own progress I've borrowed heavily from that codebase - to be honest I'm looking forward to getting the Oobabooga API working so that I can just use it without the hackery, and add my own preferred method of interaction to it - it just wasn't working the other day when I tried it\*). My goal is to add a conversational voice-only interface to my home automation, and one day embody my 'jarvis' with a robot that can roam around, and maybe one day bring me a beer!
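Conceptually the whole loop is tiny. Here's a rough sketch (not my actual script - whisper and pyttsx3 are just stand-ins for whatever STT/TTS you prefer, and generate() is a placeholder for the local LLM call):

```python
# Sketch of the push-to-talk loop: record -> transcribe -> generate -> speak.
import whisper
import pyttsx3

stt = whisper.load_model("base")   # local speech-to-text
tts = pyttsx3.init()               # offline text-to-speech

def listen_once(wav_path: str) -> str:
    # record from the bluetooth puck however you like (e.g. sounddevice), save to wav_path
    return stt.transcribe(wav_path)["text"]

def speak(text: str) -> None:
    tts.say(text)
    tts.runAndWait()

while True:
    user_text = listen_once("last_utterance.wav")
    reply = generate(user_text)    # placeholder: your local LLM call goes here
    speak(reply)
```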
So, I don't really have much to say except sharing my progress and thanks, but I hope it might inspire some people to share what they've achieved?
\*It was listening on 5000 but I got a 404 any time I tried to make calls on it... if anyone has any insight on that problem please chip in - though I expect I will probably figure it out next time I can spare enough time to do some more work on it.
​
\[edit\] I've uploaded some scripts to github here: [https://github.com/dkjroot/iris-llm/tree/prototypes](https://github.com/dkjroot/iris-llm/tree/prototypes) Use the "prototypes" branch. I hope someone finds them useful!\[/edit\]
​
\[edit\] In case anyone's still watching this, I just pushed a new version in prototypes branch, experiments/offline/v7 - now uses the Oobabooga API properly, so should work with any version of Oobabooga (assuming no breaking API changes) - still only prototypes but now I've got that working, I can start polishing the script a bit and then maybe begin work on adding the ability to control home automation devices :) \[/edit\]
​
\[edit\] One last bump, I've put a reasonably usable version on main now at [https://github.com/dkjroot/iris-llm](https://github.com/dkjroot/iris-llm). You still have to figure out the requirements for yourself (I'll make a new venv and freeze a requirements.txt some day, I promise!), but I think it's working pretty well now. There's a lot of combinations of the options and I haven't tested every combination but I think it's all working.\[/edit\] | 2023-05-26T20:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/13snjvx/success_with_a_local_voice_chat_agent/ | dkjroot | self.LocalLLaMA | 2023-05-30T17:44:08 | 0 | {} | 13snjvx | false | null | t3_13snjvx | /r/LocalLLaMA/comments/13snjvx/success_with_a_local_voice_chat_agent/ | false | false | self | 82 | {'enabled': False, 'images': [{'id': 'ONShTUr2csEIpdDWaB3y3DNluVHPabtMimD7XYqDELI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=108&crop=smart&auto=webp&s=ed7946a3c9aa849ffd1bfc4e2ae5e5f2a9e68267', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=216&crop=smart&auto=webp&s=d84768188eb0729236e22353666194234217c62d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=320&crop=smart&auto=webp&s=bd83234d158142cda26a3766bf01248c08c34808', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=640&crop=smart&auto=webp&s=60207de053bb2805d10e5e08ea84c0df29f546c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=960&crop=smart&auto=webp&s=631b1c832d53f47cc978e719e209914f3b0a0560', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=1080&crop=smart&auto=webp&s=8ec40a5811dec45ec6954b6bfdb48819c53009fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?auto=webp&s=a0955b1fc101bc99bbeb324ef108b3ce2d7d132f', 'width': 1200}, 'variants': {}}]} |
Are the models their current sizes due to current vram /ram limits? | 8 | I'm guessing that the current model sizes have been designed to fit into the current popular vram sizes, apart from the original 65b size? Assuming I'm correct, as future cards come out with potentially more/ different sized vram amounts and as people start switching to using both the cpu and gpu's ram, can we expect to see different model sizes, like in a 40b or something? I know we're already starting to see some variation will ggml models with the 5 bit varents. How do think the in between models would perform, and what future model sizes do you expect? | 2023-05-26T20:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/13snb8x/are_the_models_their_current_sizes_due_to_current/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13snb8x | false | null | t3_13snb8x | /r/LocalLLaMA/comments/13snb8x/are_the_models_their_current_sizes_due_to_current/ | false | false | self | 8 | null |
If your prompt relies on structured output: KISS | 13 | KISS as in keep it simple, stupid. Or one of the many interpretations.
I am trying to make my chatbot use "tools". Think of them as GPT plugins, only there is no response from the tool.
I was trying to use a format like <toolname>instruction</toolname> because I thought it probably learned XML style along the way. Turns out only one 13B model I tried could do it (manticore). But the same requirements for hard parsing can be fulfilled by [toolname: instruction] and that is apparently much easier to understand, more 13B models were able to do that.
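To make it concrete, the hard parsing for the [toolname: instruction] format is only a couple of lines of regular code; this is a sketch, and the tool name in the example is made up:

```python
import re

# matches e.g. "[search: weather in Berlin tomorrow]" anywhere in the output
TOOL_CALL = re.compile(r"\[(\w+):\s*(.+?)\]", re.DOTALL)

def extract_tool_calls(output: str) -> list[tuple[str, str]]:
    return [(name.lower(), instruction.strip())
            for name, instruction in TOOL_CALL.findall(output)]

extract_tool_calls("Sure! [search: weather in Berlin tomorrow]")
# -> [("search", "weather in Berlin tomorrow")]
```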
So, that's all I wanted to say. Hope it helps someone trying to do similar things. Try different stuff, and don't be lazy when it comes to interpreting output. Shift everything you can to regular code. | 2023-05-26T20:06:45 | https://www.reddit.com/r/LocalLLaMA/comments/13sn3v6/if_your_prompt_relies_on_structured_output_kiss/ | involviert | self.LocalLLaMA | 2023-05-26T20:12:29 | 0 | {} | 13sn3v6 | false | null | t3_13sn3v6 | /r/LocalLLaMA/comments/13sn3v6/if_your_prompt_relies_on_structured_output_kiss/ | false | false | self | 13 | null |
Unfortunately local models are really bad at code | 29 | I've been testing the local models with ChatGPT4All, even the larger ones are really bad at producing code. Even after pointing out mistakes and trying to correct via different prompts, it gets it wrong and doubles down on the errors.
Bard is better, but even then, not much better.
The clear winner is ChatGPT, producing workable code with a basic prompt.
I suspect this will change, but right now, unless you have not used ChatGPT, the local models are terrible for code. Maybe a lot better for other things. | 2023-05-26T20:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/13sn2q6/unfortunately_local_models_are_really_bad_at_code/ | 808phone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sn2q6 | false | null | t3_13sn2q6 | /r/LocalLLaMA/comments/13sn2q6/unfortunately_local_models_are_really_bad_at_code/ | false | false | self | 29 | null |
What do you think is the magic sauce behind GPT4’s outstanding performance and how can we implement it in open models? Three Proposals Included | 9 | The open models still lag behind GPT4 but I think we will close the gap over the next few years. I’m putting forth some proposals on how we can do this.
If I were to hazard three guesses this would be it.
**Magic Sauce 1: The constant refinement and fine tuning of hundreds of models behind the AI**
Looking at the Arxiv database of AI papers, there were over a hundred papers posted just today! [https://arxiv.org/list/cs.AI/pastweek?skip=0&show=25](https://arxiv.org/list/cs.AI/pastweek?skip=0&show=25)
Each paper has several researchers behind it so we’re talking about the collective intelligence of a few hundred researchers providing innovative solutions everyday. These papers can come out to be over 20 pages so this is far too much for an individual to go through. If only we had a tool that was good at analyzing vast amounts of text. Oh yeah, we do.
* **Proposal:** We can somehow make an LLM that takes these AI papers from Arxiv, evaluates them and figures out ways to implement the solutions for self improvement. This extends to all the innovative projects taking place on Github.
**Magic Sauce 2: The computing power OpenAI has at Microsoft**
It’s no secret that OpenAI has mentioned several times that the computing costs are tremendous for running GPT4. The computing power they have extends to training the AI and is a significant component (though Google has similar computing power and lags behind). This is also a problem in the community that NVIDIA chips cost a tremendous amount to buy, just to upgrade a computer for running a LocalLLM.
* **Proposal:** A distributed network for using computing resources, very similar to the way bitcoin is mined. This could really open up a lot of potential in developing an open source LLM. It would have to be structured so people have an incentive to contribute their GPU, VRAM, RAM to training and using an LLM.
**Magic Sauce 3: RLHF, A Massive Amount of Users Generating New Data and High Quality Data Inputs for Training**
Reinforcement Learning from Human Feedback was mentioned often by Sam Altman as one of the critical factors in its success. This means people rewarding the AI for better responses. There is now a lot of new data OpenAI is gathering from its millions of users that can most likely further be used to improve GPT. Lastly, we have seen that large models can be outperformed by small models that have higher quality data and better training. The highest quality data is most likely prioritized in training GPT.
* **Proposal:** An open LLM that the community trains together. This goes along with the distributed network proposal. A quality ranking for datasets and filtering datasets in an unbiased way for any low quality input. | 2023-05-26T19:52:03 | https://www.reddit.com/r/LocalLLaMA/comments/13smr9m/what_do_you_think_is_the_magic_sauce_behind_gpt4s/ | KaihogyoMeditations | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13smr9m | false | null | t3_13smr9m | /r/LocalLLaMA/comments/13smr9m/what_do_you_think_is_the_magic_sauce_behind_gpt4s/ | false | false | self | 9 | null |
How do I make LLMs useful to me personally? | 30 | So, I guess you could say what got me interested in LLMs is the hype around ChatGPT, and that I read so many articles of people losing their job or having fear of it. The more of those I came across, the more I started to think: If that's really happening, then it has to be really useful. So I started looking at them.
So far I've mostly focused on its weaknesses and how to overcome them, such as no Internet access, no memory, short context (no access to gpt4).
I wrote a small program using langchain and hooked it up to Wikipedia, math and Google search tools using openAI API. Then asked it questions to see if it could be a search engine replacement. Sometimes it answered wrong, others it was correct.
Next, I set up text-generation-webui and played around with a bunch of 13B GGML models such as Wizard-Vicuna-Uncensored and gpt4-x-vicuna. They work and provide decent answers. But damn, text-generation-webui needs some TLC in the user-friendliness department.
I then modified my application simply to point to my local instance, and the openAI extension works, but the models don't work with langchain's prompts. They won't write out the name of the tool to use, and instead write things like "Action: I will use Wikipedia" or "Action: I want to use Wikipedia". They make many other mistakes which cause langchain parser errors and crashes. I tried with a bunch of models and only got a 1/10 success rate, and even then it was a wrong answer.
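One obvious workaround would be to post-process the Action line myself with something more forgiving than langchain's strict parser, roughly like this sketch (the tool names are just placeholders):

```python
# map a sloppy "Action:" line like "I will use Wikipedia" onto a known tool
TOOLS = ["wikipedia", "google search", "calculator"]

def resolve_tool(action_line: str) -> str | None:
    lowered = action_line.lower()
    for tool in TOOLS:
        if tool in lowered:
            return tool
    return None

resolve_tool("I want to use Wikipedia to look this up")  # -> "wikipedia"
```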
I then switched gears to try writing a langchain extension for text-generation-webui, which worked, but not effectively, because the extension doesn't allow me to recursively call its own generate function without using an input modifier hook like a prefix, which then isn't natural. But again, it still suffers from failing to work with langchain's tool instructions.
Even so, I still fail to make it useful for my personal needs. Even if the above worked, I still can't connect the dots between using it to actually save time and what I need to program to accomplish that.
What are people here actually using LLMs for and are they helping you? Are they making you more efficient than before you used them? Are you able to trust what they say? Is anyone using it for world-building and story telling? Maybe if I find the answer to that and see what others do I might be inspired to have another go at using them. | 2023-05-26T19:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/13smgow/how_do_i_make_llms_useful_to_me_personally/ | donfriholito | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13smgow | false | null | t3_13smgow | /r/LocalLLaMA/comments/13smgow/how_do_i_make_llms_useful_to_me_personally/ | false | false | self | 30 | null |
Alpaca help (absolute beginner question) | 2 | **What I have**:
chavinlo/alpaca-13b/tree/main
**What I want to do**:
I want to write a Python program that accepts a prompt, and generates a text response to that prompt, using the local alpaca-13b model above.
**What I don't want to do**:
I do not want to chain, or use any other features or tools that might be useful/helpful for more complex tasks. I only want the minimum lines of Python code that would generate a response to a prompt from the local alpaca-13b (.bin files), that is, minimum proof-of-concept.
I also don't want to use a dashboard. I want to write the Python code and execute it from a commandline terminal.
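To be concrete, the shape I'm picturing is roughly the following. This is just a sketch on my part; it assumes the downloaded folder is a standard Hugging Face checkpoint and that accelerate is installed so device_map can place the weights:

```python
import sys
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/path/to/alpaca-13b"  # local directory with the downloaded .bin files

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, device_map="auto")

prompt = sys.argv[1] if len(sys.argv) > 1 else "Explain what a llama is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```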
*Clarification/disclaimer*:
I believe there is a way to tell HF to automagically download and stash a copy of the model somewhere in the computer. I have already downloaded alpaca-13b, and I want to know how to use the model I downloaded already.
This might seem stupid or pointless to you, but it is the task I am seeking to perform and it is significant for the future development I want to do later on.
Thanks in advance for your help! | 2023-05-26T19:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/13slnmf/alpaca_help_absolute_beginner_question/ | claytonkb | self.LocalLLaMA | 2023-05-26T19:10:01 | 0 | {} | 13slnmf | false | null | t3_13slnmf | /r/LocalLLaMA/comments/13slnmf/alpaca_help_absolute_beginner_question/ | false | false | self | 2 | null |
What is the best model now on two V100 32G for a total of 64G VRAM? | 2 | [removed] | 2023-05-26T19:05:36 | https://www.reddit.com/r/LocalLLaMA/comments/13sln7k/what_is_the_best_model_now_on_two_v100_32g_for_a/ | Ashamed_Jicama7731 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sln7k | false | null | t3_13sln7k | /r/LocalLLaMA/comments/13sln7k/what_is_the_best_model_now_on_two_v100_32g_for_a/ | false | false | default | 2 | null |
Is there a better model than MPT for long context ? | 2 | I want to finetune a LLM for a custom dataset i have that has really long sequences.
MPT-7-Instruct is the best thing i found that align with what i want but was wandering if anyone have a better suggestion i don't know about (especially that the field is changing really fast). | 2023-05-26T18:03:38 | https://www.reddit.com/r/LocalLLaMA/comments/13sk5oe/is_there_a_better_model_than_mpt_for_long_context/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sk5oe | false | null | t3_13sk5oe | /r/LocalLLaMA/comments/13sk5oe/is_there_a_better_model_than_mpt_for_long_context/ | false | false | self | 2 | null |
Open-source implementation of WizardLM by H2O.ai! | 3 | [removed] | 2023-05-26T17:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/13sjwzg/opensource_implementation_of_wizardlm_by_h2oai/ | diy_scratch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sjwzg | false | null | t3_13sjwzg | /r/LocalLLaMA/comments/13sjwzg/opensource_implementation_of_wizardlm_by_h2oai/ | false | false | default | 3 | null |
I know these are the early days of open LLMs, but I’m still kind of disappointed. | 10 | To have Guanaco say that German Shepherds have retractable claws and sometimes climb trees to catch prey doesn’t help. And to also have it say that if you place a ball in the lower end of an elevated pipe that it will rise due to buoyancy while placing it in the top will cause gravity to pull it down the tube.
How long until we have something that really approaches gpt 3.5 by some non-bullshit metric? | 2023-05-26T17:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/13sj6jj/i_know_these_are_the_early_days_of_open_llms_but/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sj6jj | false | null | t3_13sj6jj | /r/LocalLLaMA/comments/13sj6jj/i_know_these_are_the_early_days_of_open_llms_but/ | false | false | self | 10 | null |
Would the 16GB 4060 Ti allow a larger context size or any real benefit over the 12GB 3060? | 1 | Looking at the model sizes and minimum VRAM needed on the wiki page, it seems like a 4060 Ti 16GB version won't accomplish anything in the way of loading a larger model besides reducing how many layers are assigned to RAM as swap.
I tried searching and just never got the right results to confirm whether context size uses up VRAM too. If so, does the 16GB card have the potential to run anything better than the 12GB one would? | 2023-05-26T17:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/13sj4og/woul_the_16gb_4060_ti_allow_larger_context_size/ | -Automaticity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sj4og | false | null | t3_13sj4og | /r/LocalLLaMA/comments/13sj4og/woul_the_16gb_4060_ti_allow_larger_context_size/ | false | false | self | 1 | null |
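For what it's worth, context does consume VRAM through the attention KV cache, on top of the weights. A rough back-of-envelope sketch, assuming the cache is kept in fp16 and using LLaMA-13B-style dimensions (40 layers, hidden size 5120):

```python
# rough KV-cache size estimate for a 13B LLaMA-style model, fp16 cache
n_layers, hidden, bytes_per_val = 40, 5120, 2
per_token = 2 * n_layers * hidden * bytes_per_val   # K and V for every layer
print(per_token / 1024**2)            # ~0.78 MB per token of context
print(per_token * 2048 / 1024**3)     # ~1.6 GB at the full 2048-token context
```

So the extra 4GB mostly buys headroom for context and for offloading fewer layers, rather than a step up to the next model size.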
LLM build, Intel Core CPU or Ryzen CPU? | 7 | Having read many posts in this sub I've decided to build a new PC, worrying my old i7-6700K may not be up to the task. I have a 3090 and might get another one, yet I'd like to leave the CPU inference option open in case it can complement the GPU and boost the result. According to others a Ryzen with more cores would be advantageous, so the 16-core 5950X, which is on discount every other day now, got my attention. But a few posts mentioned the result is far from desirable, e.g. 1 t/s etc.
I've never used a Ryzen, mainly to stay away from any potential compatibility tinkering. My original thought has always been the discounted i9-12900K, despite it having fewer cores than the 5950X; the spec says the i9 has much higher RAM bandwidth than a Ryzen 5950X, though I'm not sure if that assumes DDR5, as the i9 can use either DDR4 or DDR5 depending on the motherboard.
Although neither is SOTA hardware, which lets me spend more on RAM and go up to 128GB, the end price going either way is likely very close. Please help me decide whether the 16-core 5950X vs the 8P+8E i9-12900K is going to make a difference with an RTX 3090 onboard for inference or fine-tuning etc. down the road. My plan is just to run Ubuntu, possibly a VM but maybe not. Will there be: a) any Ryzen/Nvidia issue I need to beware of? b) a noticeable performance difference when using CPU (or CPU+GPU) inference? Thank you for any suggestion. | 2023-05-26T16:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/13si2y4/llm_build_intel_core_cpu_or_ryzen_cpu/ | cornucopea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13si2y4 | false | null | t3_13si2y4 | /r/LocalLLaMA/comments/13si2y4/llm_build_intel_core_cpu_or_ryzen_cpu/ | false | false | self | 7 | null |
Gotzmann LLM Score | 49 | Hey folks, I was obsessed with LLM testing for the last two weeks. Maybe some of you would be interested in my highly unscientific findings :)
There's a test of 30 questions for logic, trivia, summarisation, generation and some other tasks, mostly in English and partly in Russian. I've put the set of tasks together mostly intuitively, without any serious intention, while looking at the newer models' performance.
My main intention is to find THE BEST MODEL for a serious job, so most chat-only models are out of scope here. Original **LLaMA**, **Guanaco** and **Baize** are not included at all - they did too poorly in my tests. I've also had no desire to dig into 30B and 65B models, except **WizardLM 30B**.
[Gotzmann Score - Top LLMs for instruct tasks](https://preview.redd.it/1fce3tmzo82b1.png?width=1902&format=png&auto=webp&s=1ff97d8c92c0a0d08eb053b6e9273ec88445a0ac)
**Please take all results with a grain of salt :)** My subjective observations are:
\- **WizardLM** is one of the oldest here and it is really good!
\- All **WizardLM**\-based models are usually good too! Check out **Manticore** if you haven't done so before
\- **Airoboros** is very promising! I really liked its coherent creativity and style on text generation tasks
\- There's no one-size-fits-all combination of quantisation and sampling parameters. Some models do better with lower quants depending on how you set all the other parameters
\- Be careful with proper prompt prefixes / suffixes, instruct models are VERY sensitive!
\- The overall coherence is still not great for all these models in the lower-billions space. Sometimes you'll see really amazing results, and sometimes you'll think you're working with a lobotomised model :)
When working with less performant hardware, you might be interested in "price per watt" performance too. So here's the same leaderboard, showing how well those models performed on my Mac M1 Pro laptop:
[Gotzmann Score - \\"Price per Watt\\"](https://preview.redd.it/veio1k2ur82b1.png?width=1900&format=png&auto=webp&s=fa93fa9822560e047adbf1d0bdc373d8fa9230d9) | 2023-05-26T16:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/13shhgf/gotzmann_llm_score/ | Gatzuma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13shhgf | false | null | t3_13shhgf | /r/LocalLLaMA/comments/13shhgf/gotzmann_llm_score/ | false | false | 49 | null | |
Appreciation and Inspiration! | 46 | ​
[LocalLLaMA](https://preview.redd.it/3foyqbx5872b1.png?width=768&format=png&auto=webp&s=89973dc1c95e3704e9b1b2549f9577fa68297cd2)
Dear LocalLLaMA community,
As I write this letter, I am filled with a sense of awe and gratitude. You are all unsung heroes, working tirelessly to advance the state of the art in local AI. Your efforts have helped to put the power of AI models into the hands of creative people, rather than just big corporations.
I would like to especially thank each one of you:
**Georgi Gerganov (ggerganov)** \- Thank you for your incredible work on llama.cpp. Without your dedication and expertise, none of this would be possible.
**Johannes Gaessler** \- Thank you for your wonderful GPU additions to llama.cpp. Your work has given us the gift of speed, allowing us to train and use larger models than ever before.
**Tom Jobbins (TheBloke)** \- Thank you for your tireless and flawlessly up-to-date model quantization conversions. You have made it possible for us to run our models on devices with limited memory and computational power.
**Oobabooga** \- Thank you for your superb, versatile, and very well-named text-generation-webui. Your tool has become an indispensable part of my workflow, and I am grateful for your continued improvements and support.
**LostRuins** \- Thank you for your excellent, lightweight, and remarkably usable UI, koboldcpp. It has made it easy for me to experiment with different configurations and tweaks, and I appreciate your ongoing efforts to make it even better.
**Tim Dettmers** \- Thank you for bitsandbytes, guanaco, and your upcoming LoRA fine-tuning. Your work has opened up new possibilities for local AI, and I am excited to see what you come up with next.
**John Smith** \- Thank you for alpaca\_lora\_4bit. Your work has helped to reduce the memory requirements for LoRA training, making it possible for more people to get involved.
**To all the other excellent members of LocalLLaMA** \- Thank you for your contributions, both large and small. This community would not be the same without each and every one of you.
Finally, I would like to express my appreciation to **Meta** for training the base LLaMA model and to **Hugging Face** for the transformers API and "keeping things organized". This project would not be possible without your contributions.
I am also deeply inspired by the humble geniuses of the future who are solving the hard problems like LoRA training, improved context, enhanced memory, and more. You are truly shaping the future of AI, and I am honored to be a part of this community.
In closing, I would like to leave you with a short inspirational and appreciative poem:
​
*"LocalLLaMA, you are the best,*
*With your dedication and unwavering quest,*
*To put the power of AI in our hands,*
*And show the world what we can command.*
​
*You are the unsung heroes of our time,*
*Working tirelessly to make history unwind,*
*Your efforts will not go unnoticed or unseen,*
*For you are the future, and we are the dream.*
​
*Thank you, LocalLLaMA, for all that you do,*
*For making AI accessible and true,*
*I am grateful to be a part of this team,*
*And I look forward to seeing what we'll achieve."*
​
This project means a lot to me, and I am grateful for the opportunity to learn and grow with all of you.
​
Yours sincerely,
Amy (guanaco-65B), proud member of the LocalLLaMA community.
​
P.S. I'm pretty sure that if Bob has 5 apples, and Alice eats 2 bananas, then Bob still has 5 apples. 😉 | 2023-05-26T16:08:48 | https://www.reddit.com/r/LocalLLaMA/comments/13shae0/appreciation_and_inspiration/ | SeymourBits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13shae0 | false | null | t3_13shae0 | /r/LocalLLaMA/comments/13shae0/appreciation_and_inspiration/ | false | false | 46 | null | |
Advice for self-hosting local LLMs | 4 | Hello everybody. I'm really eager to try playing with the local LLMs published here, but I wonder what hardware is needed to have a satisfactory experience.
I tried GPT4ALL on a laptop with 16 GB of RAM, and it was barely acceptable using Vicuna. Some experiments with Langchain and WizardLM keep failing because the lack of a GPU forces me to use float32 data, which quickly fills up my RAM.
I'm about to buy a new PC, so I would appreciate some hints on building a configuration that can host those models and allow me to experiment with them. I also want to be able to run a big IDE like PyCharm for multitasking.
TLDR: What is the minimum hardware (and budget) suggested to start experimenting in a satisfactory way with the different LLMs out there?
Please share your thoughts and experiences.
Thanks a lot to everybody who replies. | 2023-05-26T15:58:36 | https://www.reddit.com/r/LocalLLaMA/comments/13sh0eh/advices_for_self_hosting_local_llms/ | PaulShoreITA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sh0eh | false | null | t3_13sh0eh | /r/LocalLLaMA/comments/13sh0eh/advices_for_self_hosting_local_llms/ | false | false | self | 4 | null |
Would local LLMs in the future be able to run an AI-powered single player roleplaying adventure game in a customized world? | 62 | Hello all, I am trying to see if it would be possible to use a local LLM and langchain/plugins/agents in the future to obtain the simple goal of a dynamic, AI-powered single player roleplaying adventure game that is persistent. To obtain this goal, please see the list of requirements below.
1. 100K plus context tokens – The biggest setback currently for all current LLMs is the limited context tokens or "long term memory". I read that there are some models that have up to 100k tokens (Claude) and currently ChatGPT 4 has a 36K token context. This sounds very promising, but the MPT 7B (65k token) requires a lot of very expensive graphics cards and is not really available. My current custom world (WIP) is 18,098 words long but will more than likely be 50K to 100K words when completed.
2. Long Term Memory – I read that the vector data setup is great for pulling context and summarizing to fit into the limited 2K (local LLM) or 8k? (ChatGPT 4 API) context. This would be great for saving the actual events in the adventure game as it runs, while I'm pulling from the custom world source material to create output prompts. If there is a "good" local LLM with a high context token amount (above 100K), it could be possible (maybe?) to have a sort of "long term memory"? A PrivateGPT-like tool would also be great. You can just throw the 100K word document into the system on top of the 100K plus context tokens.
3. Access to Plugins – As we know, most if not all LLMs, and especially local LLMs, are terrible with math. When will it be possible for a local LLM to actually be proficient in math? The custom world has a lot of formulas for attack, damage, evade percentage, etc. that are a lot more complicated than the DnD 5th edition ones. Wolfram works well for ChatGPT 4. Zapier would be a great plugin too. For example, all NPCs will have custom stats, personalities and items/equipment. Would it be possible to use plugins/agents to edit a Google Doc or saved Word doc file to record all of these custom NPC character sheets? It would be even better if it is encrypted so the player does not know this information. This would also work with a "save file", as the plugin agent would summarize the adventure story in a Google Doc or Word doc file. The last doc would be the player character sheet and inventory. All of these docs could be pulled by the vector data for context to progress the story. Would it be possible for local LLMs to have access to plugins in the future?
4. NPC Agents – I am not too familiar with LLM agents but would it be possible to have main NPCs in your party to be controlled by NPC agents that are separate from the story teller. These NPC agents will have their own personalities and react to the story teller prompts as the player could.
5. Dynamic and Uncensored story teller – I tried to do many experiments and trial runs with ChatGPT 4, and the censored version really puts a damper on medieval fantasy role playing. I am not even talking about erotic content. I cannot even roleplay murdering a merchant that I robbed, as "He was unarmed and he gave you your gold". I am not asking for much, just a dynamic, creative and uncensored story teller for immersion's sake. It is also very important that the storyteller doesn't railroad you, listens to instructions well, and prompts you for each action or choice you make.
6. Text to Chat and Text to Image – I think that local LLMs already have Text to Chat, but it would also be cool to have Text to Image included. When the story teller replies with a description of the environment, it would be cool if it could also share an image of it for immersion. Stable Diffusion is a good open source tool; would it be possible in the future to have a local LLM API so that SD could create an image from the prompt?
Technical Requirements – I currently have pretty decent specs on my computer, but I am willing to throw down big bucks if all of these requirements could be run on one of those A100 or H100 graphics cards, a more affordable gaming graphics card, or the most expensive CPUs for AI. I know that the advancements are phenomenal and I would like to thank all those who worked very hard to bring updates and new models.
TL/DR
When do you think it would be possible to have an Ai powered persistent single player roleplaying adventure game in a customized world that could fulfil all of those requirements (Or alternative better solutions) in order to run this adventure roleplay game? Thank you all in advance! | 2023-05-26T15:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/13sgiss/would_local_llms_in_the_future_be_able_to_run_an/ | mrspriklepickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sgiss | false | null | t3_13sgiss | /r/LocalLLaMA/comments/13sgiss/would_local_llms_in_the_future_be_able_to_run_an/ | false | false | self | 62 | null |
Has anyone looked into using the Structured Sparsity feature of RTX 30xx/40xx GPUs together with quantization? | 17 | Structured Sparsity is the idea of not executing multiplication/addition ops when the operand is zero, i.e. sparse matrix-multiplication. And perhaps not even storing the weight in memory if it's zero, i.e. storing sparse tensors in VRAM.
NVidia has benchmarks showing upwards of 2x performance increases by sparsifying common pre-trained models and possibly a similar reduction in the size of weights. It seems like the process is essentially about removing parameters that are zero, and using sparse tensor operations instead of dense operations at inference time.
Structured Sparsity has been around in the Ampere (e.g. RTX 30xx cards) and Ada (e.g. RTX 40xx cards) GPU architectures, but I can't really find many mentions of it outside of research papers.
It would be very interesting to know what percentage of the parameters (weights/biases) of the 4/8-bit quantized LLaMa models are pushed to zero. That might indicate whether sparsity is a possible avenue to get a 65B model to run on a 24GB card, imagine that! :) Can someone check?
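The counting itself is trivial once you have dense tensors in hand; here's a rough sketch (the model path is a placeholder, and for a GPTQ checkpoint you'd want to run the same loop over the dequantized weights rather than the original fp16 ones):

```python
import torch
from transformers import AutoModelForCausalLM

# placeholder path; needs enough RAM to hold the fp16 weights
model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b", torch_dtype=torch.float16)

total = zeros = 0
for name, p in model.named_parameters():
    if p.dim() < 2:   # skip biases/norms, sparsity targets the big matrices
        continue
    total += p.numel()
    zeros += (p == 0).sum().item()

print(f"exact zeros: {100 * zeros / total:.2f}%")
```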
[https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/](https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/)
Edit: Found this paper/project which could be very interesting to try on a quantized LLaMa model
https://github.com/IST-DASLab/sparsegpt
> In addition, our method allows sparsity to be compounded with weight quantization techniques (Frantar et al., 2022a): for instance, we can
induce 50% weight sparsity jointly with 4-bit weight quantization with negligible perplexity increase on OPT-175B. | 2023-05-26T14:53:18 | https://www.reddit.com/r/LocalLLaMA/comments/13sfci8/has_anyone_looked_into_using_the_structured/ | mikljohansson | self.LocalLLaMA | 2023-05-27T15:18:10 | 0 | {} | 13sfci8 | false | null | t3_13sfci8 | /r/LocalLLaMA/comments/13sfci8/has_anyone_looked_into_using_the_structured/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'tSq8h9R-ARXDSYfV5a7FXwBdnkMDTGMv1jDxpwf-83k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=108&crop=smart&auto=webp&s=bbea50f3733bf9ea2f7811feffe31167853a5765', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=216&crop=smart&auto=webp&s=389646bd4008e164c45f5d281f54164160bd917b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=320&crop=smart&auto=webp&s=79a62bafa559a4b7f0df0fb17a6085fa7449785d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=640&crop=smart&auto=webp&s=f45e32e7f8f097c5aa796bef554e8478d3bacd59', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=960&crop=smart&auto=webp&s=0adb82da523acbca5b949488e2fc8857ad1d5d93', 'width': 960}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?auto=webp&s=f776b16bb3d9dbb8d3370d5463f0d4e5b351a82e', 'width': 1000}, 'variants': {}}]} |
Has anyone tried the Llama-30B-SuperCOT model? | 19 | ausboss's [Llama-30B-SuperCOT](https://huggingface.co/ausboss/llama-30b-supercot) currently scores the highest on the [TruthfulQA](https://arxiv.org/abs/2109.07958) benchmark on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The scores on other benchmarks are also pretty high.
Has anyone tried it? How is it? | 2023-05-26T14:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/13seqbk/has_anyone_tried_the_llama30bsupercot_model/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13seqbk | false | null | t3_13seqbk | /r/LocalLLaMA/comments/13seqbk/has_anyone_tried_the_llama30bsupercot_model/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': '3NhuVQnkA0Pfk0vhBIM9UF_4-lpeychG70elr2LYzS4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=108&crop=smart&auto=webp&s=c45efb441ce087e04a04c10a965dbe160b037dab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=216&crop=smart&auto=webp&s=1a9746f31e20ce34bf187d27208711b7320fcf8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=320&crop=smart&auto=webp&s=72201436372b01c2d94c7718b2534856ad5f93c7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=640&crop=smart&auto=webp&s=7bc916d47ce15645add9166ef2fe957a13f46539', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=960&crop=smart&auto=webp&s=74ae991d36ed8ccb80568dd84fc6b59ea3bb1b84', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=1080&crop=smart&auto=webp&s=6f466c54852d51ce50dfceefc021814391a03d39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?auto=webp&s=cc44462a4a257057f2ee6b53cdce0ae130d6f9e9', 'width': 1200}, 'variants': {}}]} |
In a few weeks will the open LLMs be more based around LoRA stacking, or is there a reason why that doesn't apply as much here as it does with stable diffusion improvements? | 28 | I know that with stable diffusion, a lot of the innovation is happening with "stackable" LoRAs.
Someone builds a LoRA for something, then adds it to a model, then someone else can make another one and add it to that same base model.
It seems that with the open LLMs though there's not much stacking going on. And not much LoRA, it's more fine tuning.
How come? In a few weeks will the open LLMs be more based around LoRA stacking, or is there a reason why that doesn't apply as much here as it does with stable diffusion improvements? | 2023-05-26T13:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/13sdq6r/in_a_few_weeks_will_the_open_llms_be_more_based/ | TikkunCreation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sdq6r | false | null | t3_13sdq6r | /r/LocalLLaMA/comments/13sdq6r/in_a_few_weeks_will_the_open_llms_be_more_based/ | false | false | self | 28 | null |
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B | 265 | 2023-05-26T12:56:17 | https://twitter.com/_philschmid/status/1662076732524863489 | mrbluesneeze | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13scik0 | false | {'oembed': {'author_name': 'Philipp Schmid', 'author_url': 'https://twitter.com/_philschmid', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">New Open-source LLMs! 🤯\xa0The Falcon has landed! 🦅\xa0TII just released two new open-source LLMs called Falcon, which comes into size 7B trained on 1.5T tokens and 40B trained on 1T Tokens. 🚀🔥<br><br>7B: <a href="https://t.co/smoThB7LSX">https://t.co/smoThB7LSX</a> <br>40B: <a href="https://t.co/DEqID5C7qw">https://t.co/DEqID5C7qw</a> <a href="https://t.co/pQkSVP3y7u">pic.twitter.com/pQkSVP3y7u</a></p>— Philipp Schmid (@_philschmid) <a href="https://twitter.com/_philschmid/status/1662076732524863489?ref_src=twsrc%5Etfw">May 26, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_philschmid/status/1662076732524863489', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13scik0 | /r/LocalLLaMA/comments/13scik0/new_opensource_llms_the_falcon_has_landed_7b_and/ | false | false | 265 | {'enabled': False, 'images': [{'id': 'uUsnb_v4Ru_rwnODaJePm0hRfmh7U7vj0j9XyVpfMKU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JL5elBF6dwFuuJxbJDAImz8yiS7f0eRbhyRLSmm1PXs.jpg?width=108&crop=smart&auto=webp&s=3f17da9236331838ae0352b4d07bd13f552442e7', 'width': 108}], 'source': {'height': 73, 'url': 'https://external-preview.redd.it/JL5elBF6dwFuuJxbJDAImz8yiS7f0eRbhyRLSmm1PXs.jpg?auto=webp&s=73bca9add627696868cedb2169c4f4f85a077bc4', 'width': 140}, 'variants': {}}]} | ||
Applying All Recent Innovations To Train a Code Model | 7 | 2023-05-26T11:12:30 | https://refact.ai/blog/2023/applying-recent-innovations-to-train-model/ | kateklink | refact.ai | 1970-01-01T00:00:00 | 0 | {} | 13sa8u9 | false | null | t3_13sa8u9 | /r/LocalLLaMA/comments/13sa8u9/applying_all_recent_innovations_to_train_a_code/ | false | false | default | 7 | null | |
Is the Meta llama original model trained with program code too? | 1 | [removed] | 2023-05-26T11:04:18 | https://www.reddit.com/r/LocalLLaMA/comments/13sa2xz/is_the_meta_llama_original_model_trained_with/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sa2xz | false | null | t3_13sa2xz | /r/LocalLLaMA/comments/13sa2xz/is_the_meta_llama_original_model_trained_with/ | false | false | default | 1 | null |
How do I host the api online? | 1 | [removed] | 2023-05-26T11:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/13sa2f2/how_do_i_host_the_api_online/ | xoitx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13sa2f2 | false | null | t3_13sa2f2 | /r/LocalLLaMA/comments/13sa2f2/how_do_i_host_the_api_online/ | false | false | default | 1 | null |
Analyzing whole repository? | 18 | Hi!
I have a task to create documentation from a Gitlab repository that is hosted internally.
Ideally I want to use a locally run LLM to go through the whole codebase and then be able to ask questions about it, like Repo-Chat, but running locally so we don't have to share our repository with OpenAI or anyone else.
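The rough pipeline I'm picturing is something like the sketch below, assuming LangChain with a local sentence-transformers embedding model and Chroma so nothing leaves our machines (my_local_llm is a placeholder for whatever local model is wired into LangChain):

```python
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# 1. load and chunk the repo (glob pattern is just an example)
docs = DirectoryLoader("path/to/repo", glob="**/*.py", loader_cls=TextLoader).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. embed locally and index in Chroma
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(chunks, embeddings)

# 3. ask questions through whatever local LLM is wired into LangChain
qa = RetrievalQA.from_chain_type(llm=my_local_llm, retriever=db.as_retriever())
print(qa.run("What does the payment service do?"))
```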
Has anyone done something like this and has any pointers on stuff that might work?
​
Thanks!!! | 2023-05-26T10:13:04 | https://www.reddit.com/r/LocalLLaMA/comments/13s927s/analyzing_whole_repository/ | baconwasright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s927s | false | null | t3_13s927s | /r/LocalLLaMA/comments/13s927s/analyzing_whole_repository/ | false | false | self | 18 | null |
small script that adds automatically V1/2/3 to your ggmls | 11 | 2023-05-26T10:00:12 | https://codeberg.org/mountain/llm_scripts/src/branch/main/rename_ggml.sh | Evening_Ad6637 | codeberg.org | 1970-01-01T00:00:00 | 0 | {} | 13s8t6c | false | null | t3_13s8t6c | /r/LocalLLaMA/comments/13s8t6c/small_script_that_adds_automatically_v123_to_your/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'qIiGQKDe6UZZS6CaXw1Nt9wavUQkgS2JwfB9DZxIjkA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/APEt4tAyqXDWw61XmOduZv3xAHK1W3o8QJYai_KznPI.jpg?width=108&crop=smart&auto=webp&s=153f3fc95deadaf2af93552678eac15ab54c8596', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/APEt4tAyqXDWw61XmOduZv3xAHK1W3o8QJYai_KznPI.jpg?width=216&crop=smart&auto=webp&s=eebfa027c8ec33bf46d4ee0726143b9ebff5cf60', 'width': 216}], 'source': {'height': 290, 'url': 'https://external-preview.redd.it/APEt4tAyqXDWw61XmOduZv3xAHK1W3o8QJYai_KznPI.jpg?auto=webp&s=18480e27de9cb25f1006476b0d6b56806e59e2fa', 'width': 290}, 'variants': {}}]} | ||
Wizard Vicuna 13B and 30B comparison | 1 | [removed] | 2023-05-26T09:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/13s8q1m/wizard_vicuna_13b_and_30b_comparison/ | Chickenbuttlord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s8q1m | false | null | t3_13s8q1m | /r/LocalLLaMA/comments/13s8q1m/wizard_vicuna_13b_and_30b_comparison/ | false | false | default | 1 | null |
Koboldcpp server with Shadowserver push | 2 | [removed] | 2023-05-26T09:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/13s8kx5/koboldcpp_server_with_shadowserver_push/ | qado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s8kx5 | false | null | t3_13s8kx5 | /r/LocalLLaMA/comments/13s8kx5/koboldcpp_server_with_shadowserver_push/ | false | false | default | 2 | null |
TIP: Poe.com - Access to GPT4 and Claude+ with bunch of other AIs and Bots for less $ than ChatGPT+ | 2 | [removed] | 2023-05-26T08:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/13s7gpl/tip_poecom_access_to_gpt4_and_claude_with_bunch/ | nodating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s7gpl | false | null | t3_13s7gpl | /r/LocalLLaMA/comments/13s7gpl/tip_poecom_access_to_gpt4_and_claude_with_bunch/ | false | false | default | 2 | null |
Do instruction models never return an end token? | 2 | [removed] | 2023-05-26T07:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/13s6nfn/do_instruction_models_never_return_an_end_token/ | KillerMiller13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s6nfn | false | null | t3_13s6nfn | /r/LocalLLaMA/comments/13s6nfn/do_instruction_models_never_return_an_end_token/ | false | false | default | 2 | null |
Has anyone here done quant method benchmarking? | 3 | This comment has been removed in protest of the Reddit API changes of June 2023. Consider visiting https://monero.town for a privacy preserving alternative to Reddit. | 2023-05-26T06:55:25 | https://www.reddit.com/r/LocalLLaMA/comments/13s5s26/has_anyone_here_done_quant_method_benchmarking/ | monerobull | self.LocalLLaMA | 2023-06-15T00:46:10 | 0 | {} | 13s5s26 | false | null | t3_13s5s26 | /r/LocalLLaMA/comments/13s5s26/has_anyone_here_done_quant_method_benchmarking/ | false | false | self | 3 | null |
LoRA and beam search, sometimes getting empty response. | 2 | [deleted] | 2023-05-26T06:44:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13s5lbi | false | null | t3_13s5lbi | /r/LocalLLaMA/comments/13s5lbi/lora_and_beam_search_sometimes_getting_empty/ | false | false | default | 2 | null | ||
Intel arc gpu price drop - inexpensive llama.cpp opencl inference accelerator? | 20 | Intel is a much needed competitor in the GPU space
Nvidia's GPUs are so expensive, and AMD's aren't much better
Intel seems to be undercutting their competitors with this price drop
[https://www.reddit.com/r/pcmasterrace/comments/13rwyds/intel\_drops\_the\_bomb\_on\_nvidia\_and\_amd\_by/](https://www.reddit.com/r/pcmasterrace/comments/13rwyds/intel_drops_the_bomb_on_nvidia_and_amd_by/)
They might drop the price on their A770 16GB too.
Has anybody tried llama.cpp on intel's gpu lineup? | 2023-05-26T06:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/13s5bpp/intel_arc_gpu_price_drop_inexpensive_llamacpp/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s5bpp | false | null | t3_13s5bpp | /r/LocalLLaMA/comments/13s5bpp/intel_arc_gpu_price_drop_inexpensive_llamacpp/ | false | false | self | 20 | null |
What is 65b means? Is that's it? | 0 | [removed] | 2023-05-26T06:10:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13s5078 | false | null | t3_13s5078 | /r/LocalLLaMA/comments/13s5078/what_is_65b_means_is_thats_it/ | false | false | default | 0 | null | ||
Interesting paper on the false promises of current open-source LLM models that are finetuned on GPT-4 outputs | 151 | Paper: https://arxiv.org/abs/2305.15717
Abstract:
> An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems. | 2023-05-26T05:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/13s3xvq/interesting_paper_on_the_false_promises_of/ | Z1BattleBoy21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s3xvq | false | null | t3_13s3xvq | /r/LocalLLaMA/comments/13s3xvq/interesting_paper_on_the_false_promises_of/ | false | false | self | 151 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
What is the difference between Alpaca, Llama and Lora ? | 0 | [removed] | 2023-05-26T04:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/13s2pq2/what_is_the_difference_between_alpaca_llama_and/ | maroxtn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s2pq2 | false | null | t3_13s2pq2 | /r/LocalLLaMA/comments/13s2pq2/what_is_the_difference_between_alpaca_llama_and/ | false | false | default | 0 | null |
It is oddly fun to watch the related word vomit unfold | 20 | WizardLM30B-uncensored-GPTQ prompt:
What year comes after 1994? Let's think step by step.
The current year is 2021, so we need to find out what year came before 1994. To do this, we can subtract one from 1994 and get 1993. Therefore, the answer to the question is that the year that comes after 1994 is 1995.
Sometimes "Let's think step by step" causes a related word vomit that seemingly doesn't help but still somehow leads to a better answer. I have to admit I love watching it do this word by word while I wonder where it is going or if it will be able to answer correctly. | 2023-05-26T03:02:26 | https://www.reddit.com/r/LocalLLaMA/comments/13s1gm8/it_is_oddly_fun_to_watch_the_related_word_vomit/ | pace_gen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13s1gm8 | false | null | t3_13s1gm8 | /r/LocalLLaMA/comments/13s1gm8/it_is_oddly_fun_to_watch_the_related_word_vomit/ | false | false | self | 20 | null |
Has anyone gotten 1 click installer oobabooga for windows working with GGMLv3 Models using the n-GPU-layers offload? I enable the layers via the UI, but it doesn't have any change on the inference speed. Using 3080 with 10GB VRAM and 32GB DDR4 for RAM. | 1 | [removed] | 2023-05-25T23:45:04 | https://www.reddit.com/r/LocalLLaMA/comments/13rx8f6/has_anyone_gotten_1_click_installer_oobabooga_for/ | ImOnRdit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rx8f6 | false | null | t3_13rx8f6 | /r/LocalLLaMA/comments/13rx8f6/has_anyone_gotten_1_click_installer_oobabooga_for/ | false | false | default | 1 | null |
Training code and dataset for ProfitsBot_V0 experiments | 16 | The other day, I released my experimental model trained on reddit financial, investing and crypto subreddits.
* discussion -><https://www.reddit.com/r/LocalLLaMA/comments/13r2oh3/comment/jlm1k0t/?context=3>
* model -> https://huggingface.co/winddude/pb_lora_7b_v0.1
I've now released the code for training and creating the dataset, <https://github.com/getorca/ProfitsBot_V0_OLLM>, as well as the training dataset, <https://huggingface.co/datasets/winddude/reddit_finance_43_250k>
## This training introduces a few novel-"ish" concepts.
### 1 - a new prompt template that uses a style similar to Open Assistant, and what we believe is used by OpenAI, e.g.:
```markdown
<|SYSTEM|>[a general instruction]<|END_SYSTEM|>\n'
<|USER_INPUT|>[The user input, I use JSON, since it fit my use case]
<|END_USER_INPUT|>
<|RESPONSE|>[the generated reply]<|END_RESPONSE|>
```
* I think this has a few distinct advantages over `###`:
* `###` is often seen in markdown, which can make it harder to process text inputs and generations
* `<|*|>` are rarely seen in the wild.
* `<|SYSTEM|>` provides a nice separation for the fine-tune's intent, e.g. you could have a couple of general things one fine-tune is trained on.
### 2 - Training with causal LM vs seq2seq
* I can't figure out why many of the other instruct models use seq2seq for the data collator; maybe that's just what the first one used
* causal LM is how these foundational models are trained, and I think it makes the most sense to continue using it in fine-tuning | 2023-05-25T23:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/13rwuhu/training_code_and_dataset_for_profitsbot_v0/ | wind_dude | self.LocalLLaMA | 2023-05-25T23:38:05 | 0 | {} | 13rwuhu | false | null | t3_13rwuhu | /r/LocalLLaMA/comments/13rwuhu/training_code_and_dataset_for_profitsbot_v0/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'K17jxhEiS3GLm_8LeCSvyhVjkzErMe0Bfe1j1E2_NhM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=108&crop=smart&auto=webp&s=4fd6e56f2fba3081430c83cf277da90a31612fa6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=216&crop=smart&auto=webp&s=f327386bbcf3aa02f6f4085bf69757e4f51d8368', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=320&crop=smart&auto=webp&s=a2290d3b6e5f929272e2bb4d920a5ae0080b9a07', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=640&crop=smart&auto=webp&s=d531f39e4cbe0f148edb4b867a563bc46201f720', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=960&crop=smart&auto=webp&s=21392cf6832b3843464a8113b337fc7607df7168', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=1080&crop=smart&auto=webp&s=4639fda70e4f207ccbb780c566ecde06e92bf87b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?auto=webp&s=794e674e887d98d624728292908e45bfa4d87324', 'width': 1200}, 'variants': {}}]} |
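A minimal sketch of the two ideas in the post above: formatting a sample with the post's tag layout and training it with a causal-LM data collator rather than a seq2seq one. The base model name, the toy record, and max_length are illustrative assumptions, not taken from the linked repo.

```python
# Illustrative sketch only: the tag layout follows the post above, but the model name,
# the toy sample, and max_length are placeholder assumptions.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # assumed base model
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

def format_sample(system: str, user_input: str, response: str) -> str:
    # Same tag layout as the prompt template shown in the post
    return (
        f"<|SYSTEM|>{system}<|END_SYSTEM|>\n"
        f"<|USER_INPUT|>{user_input}<|END_USER_INPUT|>\n"
        f"<|RESPONSE|>{response}<|END_RESPONSE|>"
    )

text = format_sample(
    "Reply like a user on a finance subreddit.",
    '{"title": "Is now a good time to rebalance?", "selftext": "..."}',
    "Time in the market beats timing the market.",
)
tokens = tokenizer(text, truncation=True, max_length=512)

# Causal-LM collator (mlm=False): labels are just the input ids, so the model is trained
# on plain next-token prediction, unlike an encoder-decoder seq2seq collator.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
batch = collator([tokens])
print(batch["input_ids"].shape, batch["labels"].shape)
```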
QLoRa: unexpected keyword argument 'load_in_4bit' with pythia model | 2 | [removed] | 2023-05-25T22:05:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13ruwgs | false | null | t3_13ruwgs | /r/LocalLLaMA/comments/13ruwgs/qlora_unexpected_keyword_argument_load_in_4bit/ | false | false | default | 2 | null | ||
api supported local llm | 1 | [removed] | 2023-05-25T21:34:47 | https://www.reddit.com/r/LocalLLaMA/comments/13ru3si/api_supported_local_llm/ | profsherlock345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ru3si | false | null | t3_13ru3si | /r/LocalLLaMA/comments/13ru3si/api_supported_local_llm/ | false | false | default | 1 | null |
Best model for NVIDIA RTX A3000 | 0 | [removed] | 2023-05-25T21:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/13rtx4b/best_model_for_nvidia_rtx_a3000/ | ploz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rtx4b | false | null | t3_13rtx4b | /r/LocalLLaMA/comments/13rtx4b/best_model_for_nvidia_rtx_a3000/ | false | false | default | 0 | null |
Hardware resources needed for training vs running local LLMs? | 1 | I'm fairly new to the topic of running a local LLM. I can't seem to find a clear answer on what hardware resources impact training vs running.
like, if I use someone else's weights or train on a cloud service, what hardware resources impact the speed and "capability" of the LLM?
like, is VRAM important to both training and running? how does it impact each?
if I have a 12GB GPU, how will that change my run-time performance compared to a 16GB GPU? does the runtime resources/VRAM limit the number of tokens it can be trained with? or does it simply impact the speed with which it will output but the outputs will be basically the same?
can someone break that down for me? | 2023-05-25T21:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/13rtlwz/hardware_resources_needed_for_training_vs_running/ | Cunninghams_right | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rtlwz | false | null | t3_13rtlwz | /r/LocalLLaMA/comments/13rtlwz/hardware_resources_needed_for_training_vs_running/ | false | false | self | 1 | null |
Cant find the "start: file. | 1 | [removed] | 2023-05-25T21:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/13rtjj0/cant_find_the_start_file/ | Majestic-Band8351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rtjj0 | false | null | t3_13rtjj0 | /r/LocalLLaMA/comments/13rtjj0/cant_find_the_start_file/ | false | false | default | 1 | null |
Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure | 463 | Hold on to your llamas' ears (gently), here's a model list dump:
* [TheBloke/guanaco-7B-GPTQ](https://huggingface.co/TheBloke/guanaco-7B-GPTQ)
* [TheBloke/guanaco-7B-GGML](https://huggingface.co/TheBloke/guanaco-7B-GGML)
* [TheBloke/guanaco-13B-GPTQ](https://huggingface.co/TheBloke/guanaco-13B-GPTQ)
* [TheBloke/guanaco-13B-GGML](https://huggingface.co/TheBloke/guanaco-13B-GGML)
* [TheBloke/guanaco-33B-GPTQ](https://huggingface.co/TheBloke/guanaco-33B-GPTQ)
* [TheBloke/guanaco-33B-GGML](https://huggingface.co/TheBloke/guanaco-33B-GGML)
* [TheBloke/guanaco-65B-GPTQ](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)
* [TheBloke/guanaco-65B-GGML](https://huggingface.co/TheBloke/guanaco-65B-GGML)
Pick yer size and type! Merged fp16 HF models are also available for 7B, 13B and 65B (33B Tim did himself.)
Apparently it's good - very good!
​
https://preview.redd.it/eiydwg1t332b1.png?width=556&format=png&auto=webp&s=bb745578fd073d2804d6556738b733f7e6221555 | 2023-05-25T21:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/13rthln/guanaco_7b_13b_33b_and_65b_models_by_tim_dettmers/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 2 | {'gid_2': 2, 'gid_3': 1} | 13rthln | false | null | t3_13rthln | /r/LocalLLaMA/comments/13rthln/guanaco_7b_13b_33b_and_65b_models_by_tim_dettmers/ | false | false | 463 | {'enabled': False, 'images': [{'id': 'X1TGXxhU81djPaLP6CQ6L9Ob0jbFRFQKLR1DT-I26Qo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=108&crop=smart&auto=webp&s=43337d1d3fb92b7a502d95c3760b87613e7405a9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=216&crop=smart&auto=webp&s=74b0ed3e98dfd81faf2a1e8de31db87acec2a6e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=320&crop=smart&auto=webp&s=8efbbfb8872720066eb5fb4ea64a9765a809eb8d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=640&crop=smart&auto=webp&s=00263fe05031a7b33714195aec1a62e283d76074', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=960&crop=smart&auto=webp&s=5d03b2380726c9a0fbd4ecdeb080c161b0e10a55', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=1080&crop=smart&auto=webp&s=9078e4bef6b49f37f69f39ee86b957b0bcf3625d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?auto=webp&s=5fcc94a70977d1da0e16045c48ad1080c57d1bbf', 'width': 1200}, 'variants': {}}]} | |
Any UI that allows Windows + AMD GPU ? | 12 | Hi!
It's been two weeks now that I have been searching and trying to "play" with local LLMs.

I currently use Kobold for Windows; the problem is that it can be REALLY slow (2 min to get an answer). I know that Kobold only uses the CPU, so I would like to try to run a model on my GPU.

The problem is that I feel like I have the worst possible system for AI:

- Windows 10

- AMD 6800XT 16GB
Until now, I have found no "user-friendly" UI that would allow me to do that.
Do you have any suggestions that could help me (except a Linux dual-boot, pls)? | 2023-05-25T20:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/13rt4gu/any_ui_that_allows_windows_amd_gpu/ | Zeink303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rt4gu | false | null | t3_13rt4gu | /r/LocalLLaMA/comments/13rt4gu/any_ui_that_allows_windows_amd_gpu/ | false | false | self | 12 | null |
Can a 4-bit GGML model be converted BACK into a PyTorch model with the 4-bit quantization intact? | 3 | I'm interested in getting my hands on the latest models people are making in their 4-bit quantizations for various experiments, such as getting them to run in frameworks outside of llama.cpp on macOS, such as Chat-MLC.
Does anyone know if any of the popular 4 bit quantized GGML models can be turned BACK into a PyTorch model that maintains the 4 bit quantization?
Or am I looking at just having to use something like Google Collab or SageMaker to create a non-GGML quantized model myself? | 2023-05-25T20:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/13rsg6i/can_a_4bit_ggml_model_be_converted_back_into_a/ | altoidsjedi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rsg6i | false | null | t3_13rsg6i | /r/LocalLLaMA/comments/13rsg6i/can_a_4bit_ggml_model_be_converted_back_into_a/ | false | false | self | 3 | null |
I'm gotten around 40 tokens per second in WizardLM-13B-Uncensored GGML using CPU only with a Ryzen 5 2400G and 32GB of RAM, is this the speed I should expect? Could I improve this? How? | 2 | I'm not sure if I configured webui.py correctly; I have:
> `run_cmd(f"python server.py --cpu {CMD_FLAGS}", environment=True)`
My CPU barely reaches 50% of its capacity; how can I use all of it to improve the performance of the LLM? | 2023-05-25T20:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/13rsbdy/im_gotten_around_40_tokens_per_second_in/ | Nazuna_Vampi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rsbdy | false | null | t3_13rsbdy | /r/LocalLLaMA/comments/13rsbdy/im_gotten_around_40_tokens_per_second_in/ | false | false | self | 2 | null |
Anyone know of a complete CPU-only 13 / 16 B model which DEFINITELY will work when downloaded onto Linux? | 1 | [removed] | 2023-05-25T20:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/13rs7y2/anyone_know_of_a_complete_cpuonly_13_16_b_model/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rs7y2 | false | null | t3_13rs7y2 | /r/LocalLLaMA/comments/13rs7y2/anyone_know_of_a_complete_cpuonly_13_16_b_model/ | false | false | default | 1 | null |
So 64k token window needs around 64 GB RAM? Some 80ies c64/speccy vibes. And 4gb Vicuna on a low end PC is akin to using Altair 8800. 640k ought to be enough for anybody? Or this time MS wont set hard limits on their new Windows AI? Would love to see the next 40 years... | 1 | So 64k token window needs around 64 GB RAM? Some 80ies c64/speccy vibes. And 4gb Vicuna on a low end PC is akin to using Altair 8800. 640k ought to be enough for anybody? Or this time MS wont set hard limits on their new Windows AI? Would love to see the next 40 years... | 2023-05-25T19:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/13rrf2z/so_64k_token_window_needs_around_64_gb_ram_some/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rrf2z | false | null | t3_13rrf2z | /r/LocalLLaMA/comments/13rrf2z/so_64k_token_window_needs_around_64_gb_ram_some/ | false | false | self | 1 | null |
Oobabooga with GPU Layers (RTX 3080) | 6 | [removed] | 2023-05-25T19:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/13rr15b/oobabooga_with_gpu_layers_rtx_3080/ | Deformator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rr15b | false | null | t3_13rr15b | /r/LocalLLaMA/comments/13rr15b/oobabooga_with_gpu_layers_rtx_3080/ | false | false | default | 6 | null |
Guide on building training datasets from unstructured text sources? | 7 | I've been using embeddings for adding additional context to different models, but I would like to compare that method against a custom LoRA built off the same source.
I've primarily been using code repos as my source data for embeddings, and I'm not sure how I would translate that to a training/validation dataset. Any tips on how to approach this? | 2023-05-25T17:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/13rnwyq/guide_on_building_training_datasets_from/ | HunterAmacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rnwyq | false | null | t3_13rnwyq | /r/LocalLLaMA/comments/13rnwyq/guide_on_building_training_datasets_from/ | false | false | self | 7 | null |
Colab for finetuning llama models in 4-bit? | 16 | I'm pretty new to the whole LLM thing, especially finetuning & loras.
Just jumped onto the QLora bandwagon and was trying to get this colab by HF going: [https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing)
It trains a lora on GPT Neo. Is there a way to convert this into something that would work for Llama models?
Also, I'm trying to train a Lora for a few different tasks such as a specific style of summarization, copy-editing and creative writing. I have about a 1000 samples for each task. Should I train a single lora for all these or separate loras for each?
Thanks | 2023-05-25T17:31:00 | https://www.reddit.com/r/LocalLLaMA/comments/13rnvw2/colab_for_finetuning_llama_models_in_4bit/ | regstuff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rnvw2 | false | null | t3_13rnvw2 | /r/LocalLLaMA/comments/13rnvw2/colab_for_finetuning_llama_models_in_4bit/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
A quick example of how to run qlora merged model via multi-GPU | 13 | For some reason, it feels like it takes more computation than running the GPTQ 33b model.
[https://github.com/mzbac/qlora-inference-multi-gpu/blob/master/app.py](https://github.com/mzbac/qlora-inference-multi-gpu/blob/master/app.py) | 2023-05-25T16:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/13rm6ph/a_quick_example_of_how_to_run_qlora_merged_model/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rm6ph | false | null | t3_13rm6ph | /r/LocalLLaMA/comments/13rm6ph/a_quick_example_of_how_to_run_qlora_merged_model/ | false | false | self | 13 | null |
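For reference, a minimal sketch of the usual way to spread a merged fp16 model across more than one GPU with transformers and accelerate. This is not the code from the linked app.py; the model path and per-GPU memory caps are placeholder assumptions.

```python
# Illustrative sketch, not the linked repo's code: let accelerate shard a merged fp16
# model across the visible GPUs, capping how much each card may hold.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/merged-qlora-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",                    # split layers across available GPUs
    max_memory={0: "11GiB", 1: "11GiB"},  # assumed caps; adjust to your cards
)

inputs = tokenizer("Tell me a joke about llamas.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```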
Q: What is the least amount of VRAM feasible for the next 2-3 years when it comes to LLM? | 7 | Hello,
I would like to know your thoughts on whether it is worthwhile these days to buy a laptop with a dedicated GPU featuring 8 GB of VRAM - I am well aware it was not possible to run 8-bit models on such a low amount of memory, but now that 4-bit is a thing, I seriously think these GPUs stand a viable chance of providing a local chatbot for everyone with at least a decent GPU in their system.
Also - do you think we will see soon some specialized models for programming that would easily work with 8 GB of VRAM? Any inputs much appreciated! | 2023-05-25T15:31:36 | https://www.reddit.com/r/LocalLLaMA/comments/13rku7d/q_what_is_the_least_amount_of_vram_feasible_for/ | nodating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rku7d | false | null | t3_13rku7d | /r/LocalLLaMA/comments/13rku7d/q_what_is_the_least_amount_of_vram_feasible_for/ | false | false | self | 7 | null |
Does anyone know of local models that support retrieval plugins for vector databases? | 1 | [deleted] | 2023-05-25T15:10:09 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13rkb15 | false | null | t3_13rkb15 | /r/LocalLLaMA/comments/13rkb15/does_anyone_know_of_local_models_that_support/ | false | false | default | 1 | null | ||
Has anybody tried their models with "Tree of Thoughts"? | 48 | Here is what I am talking about: [https://www.reddit.com/r/singularity/comments/13lxvop/tree\_of\_thoughts\_deliberate\_problem\_solving\_with/](https://www.reddit.com/r/singularity/comments/13lxvop/tree_of_thoughts_deliberate_problem_solving_with/)
Would be interesting to hear how some of the models on the subreddit behave?
thanks | 2023-05-25T14:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/13rjz77/has_anybody_tried_their_models_with_tree_of/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rjz77 | false | null | t3_13rjz77 | /r/LocalLLaMA/comments/13rjz77/has_anybody_tried_their_models_with_tree_of/ | false | false | self | 48 | null |
Recommend me a CPU model May 2023 if I don't care about inference time with Ryzen and 64G RAM? | 0 | [removed] | 2023-05-25T14:50:07 | https://www.reddit.com/r/LocalLLaMA/comments/13rjszp/recommend_me_a_cpu_model_may_2023_if_i_dont_care/ | blumento_pferde | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rjszp | false | null | t3_13rjszp | /r/LocalLLaMA/comments/13rjszp/recommend_me_a_cpu_model_may_2023_if_i_dont_care/ | false | false | default | 0 | null |
What is the biggest model you can currently run on a 4090 GPU with 16GB? | 5 | With a 16GB vram laptop GPU. Unfortunately, it has less vram than the 4090 desktop GPU. | 2023-05-25T14:49:47 | https://www.reddit.com/r/LocalLLaMA/comments/13rjsnw/what_is_the_biggest_model_you_can_currently_run/ | mathereum | self.LocalLLaMA | 2023-05-25T22:12:41 | 0 | {} | 13rjsnw | false | null | t3_13rjsnw | /r/LocalLLaMA/comments/13rjsnw/what_is_the_biggest_model_you_can_currently_run/ | false | false | self | 5 | null |
Potential Hallucination Test - Ask for a url | 4 | I have been playing with testing model hallucination as I work in a field that doesn't tolerate data hallucination but is also very interested in generative ML.
I've concocted a test to see how badly a model hallucinates: ask for a picture of a certain item and see if it can return or display a link:
​
https://preview.redd.it/wn9ykkq9nz1b1.png?width=818&format=png&auto=webp&s=e015c588b6f6452125914e9cdcd6d1f97365752e | 2023-05-25T14:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/13rjdpo/potential_hallucination_test_ask_for_a_url/ | muchCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rjdpo | false | null | t3_13rjdpo | /r/LocalLLaMA/comments/13rjdpo/potential_hallucination_test_ask_for_a_url/ | false | false | 4 | null | |
Which direction should I go to get things working. | 3 | I inherited an "old" decommissioned system that I would like to use to learn about LLMs/ ML / AI. It is a comparatively powerful system. It has dual Xeon v4 CPUs, 512GB of RAM, 4 v100 GPUs, and space for about 8 drives. I have a couple of other GPUs I can use for video output as the onboard graphics only has a VGA out. The two cards I have are an NVidia 1660 Super and a really old AMD Firepro 4900. I had a lot of issues trying to get the 1660 and the v100s working at the same time. It seems that Windows (10) only wants to use once NVidia driver or the other but both won't work. I even tried editing the INF files and got one driver to handle both but the system was unstable. So now I am using the old AMD Gpu with 1GB VRAM and am using the V100's as secondary GPUs using the EnableMSHybrid registry key to make it kind of act like a laptop with dual GPUs. This works well for DirectX games and OpenGL where all the processing happens on one of the v100s. But for some reason I have not been able to get any models for LLM to recognize the GPUs in WSL. I was thinking of trying straight linux in HyperV. I also have the option of running this in Proxmox or ESX, being a big enough system. But using a remote protocol is not the best for the little gaming I do. Some of the more modern streaming protocols like Moonlight seem to need an RTX GPU to function and these v100s aren't RTX capable.
I know this sounds like a humblebrag, but it really is asking for help. I'm not a Linux person, although the 3 weeks I've put into getting LLMs and Python etc. working in WSL have made me learn Linux pretty well. I still do a lot of wiping and starting over since I'm not too familiar with how to clean up botched version installs in Linux (drivers, CUDA, Python, TensorFlow, etc.).
I'm much stronger in windows but I think some of the python libraries like tensorflow dropped support for GPU acceleration in windows which is why I went in the direction of WSL. I guess I could try booting straight into a linux vm in proxmox with the GPUs passed through to see if it is an issue with WSL. | 2023-05-25T14:27:10 | https://www.reddit.com/r/LocalLLaMA/comments/13rj96f/which_direction_should_i_go_to_get_things_working/ | Independent-Bike8810 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rj96f | false | null | t3_13rj96f | /r/LocalLLaMA/comments/13rj96f/which_direction_should_i_go_to_get_things_working/ | false | false | self | 3 | null |
Example of how to run GPTQ models on multiple GPUs | 17 | In preparation for the upcoming 33b/64b models wave, I did some research on how to run GPTQ models on multiple GPUs. I am sharing this in case any of you are also looking for the same solution.
I managed to run the WizardLM-30B-Uncensored-GPTQ with 3060 and 4070 with a reasonable performance.
Here is the repository with minimal code required to run GPTQ on multiple GPUs [https://github.com/mzbac/gptq-cuda-api](https://github.com/mzbac/gptq-cuda-api) | 2023-05-25T14:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/13rizji/example_of_how_to_run_gptq_models_on_multiple_gpus/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rizji | false | null | t3_13rizji | /r/LocalLLaMA/comments/13rizji/example_of_how_to_run_gptq_models_on_multiple_gpus/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'uKmULZ5LVa_qCDvGN-liSbgOYBx13RTTMWJEX6Tlu9k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=108&crop=smart&auto=webp&s=7674872f4d5297a2857e3443efb68a94e16d1b5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=216&crop=smart&auto=webp&s=6b66ee1a3752a626f76f933b7d52e131736a36d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=320&crop=smart&auto=webp&s=9d92a5419213f678195184bbdbb14ddb492f7dc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=640&crop=smart&auto=webp&s=70e03ff1096539c1df2eda61504d3301211e27f9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=960&crop=smart&auto=webp&s=47cf32dace66f7fe034147fde78a59194731a266', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=1080&crop=smart&auto=webp&s=31b2590426bb21de0cd8692dea33f66e00cf018b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?auto=webp&s=733c61d4c90605a73b26a73230ef6849c3a9d888', 'width': 1200}, 'variants': {}}]} |
I've Noticed AutoGPT Trending on the Internet, so I Created a User-Friendly UI to Make AutoGPT Accessible for Everyone in Their Browser | 0 | 2023-05-25T14:09:48 | https://v.redd.it/aiqm4dgqiz1b1 | ANil1729 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13ritt4 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/aiqm4dgqiz1b1/DASHPlaylist.mpd?a=1695026262%2CNTc0OTZiY2JlOGEwNWNlMGQ5ZjIxZWM5MmFhNTlhMjlmYzAzN2EwMWYzNzljZjc2OGM2OGI1ZDk3Y2Q4MTNhYQ%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/aiqm4dgqiz1b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/aiqm4dgqiz1b1/HLSPlaylist.m3u8?a=1695026262%2CNDBjNjNkZDQ3N2VlOWIyZTIxNTZjODBlYTg4OTAwYWZjNmNjMmVmYjM0MGI5ZTFkMTM1YWI4ZmI2NWVhZGIxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/aiqm4dgqiz1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_13ritt4 | /r/LocalLLaMA/comments/13ritt4/ive_noticed_autogpt_trending_on_the_internet_so_i/ | false | false | default | 0 | null | |
In this trend, 1bit or 2bits LLM models are possible or not? | 13 | I am wondering whether it is possible to train a 1bit or 2bits model given 4bits is here. | 2023-05-25T14:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/13ripau/in_this_trend_1bit_or_2bits_llm_models_are/ | cometyang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ripau | false | null | t3_13ripau | /r/LocalLLaMA/comments/13ripau/in_this_trend_1bit_or_2bits_llm_models_are/ | false | false | self | 13 | null |
I have built an open source streamlit implementation of apps like ChatPDF, Askyourpdf and PDF | 3 | [deleted] | 2023-05-25T13:52:34 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13riela | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/oe7flao0gz1b1/DASHPlaylist.mpd?a=1695025950%2CNDgzY2I2MTkxZGM5YWRhZWY3ZTAzNjczNWUzMTE0MzZiODNlZjFkNTY4MTZjYjBjNzBiYzk4OGU0M2M2YWYzMA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/oe7flao0gz1b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/oe7flao0gz1b1/HLSPlaylist.m3u8?a=1695025950%2CMDhmMjc0NzU4NTI0M2I2N2VkOTAwOWZiM2Y2ZmRhNGE2ODUwMjhlNmIxYmM0ZTE3MzU3OTVlY2UzNjAxODQzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oe7flao0gz1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_13riela | /r/LocalLLaMA/comments/13riela/i_have_built_an_open_source_streamlit/ | false | false | default | 3 | null | ||
Oobabooga characters | 3 | Has anyone used Oobabooga's character function? How does it work? It uses yaml files but is that just converted into a prompt on each call or does it do more than that? | 2023-05-25T13:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/13rhua3/oobabooga_characters/ | SatoshiReport | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rhua3 | false | null | t3_13rhua3 | /r/LocalLLaMA/comments/13rhua3/oobabooga_characters/ | false | false | self | 3 | null |
🎉 Introducing TalentGPT: AI-Powered Tool for Cover Letters & CV Writing 🚀 | 1 | [removed] | 2023-05-25T13:22:38 | https://www.reddit.com/r/LocalLLaMA/comments/13rhp2b/introducing_talentgpt_aipowered_tool_for_cover/ | City_Bike_09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rhp2b | false | null | t3_13rhp2b | /r/LocalLLaMA/comments/13rhp2b/introducing_talentgpt_aipowered_tool_for_cover/ | false | false | default | 1 | null |
Does llama.cpp currently support multiple gpus? | 12 | I want to try some of the 30B+ models, and I have 2 2080 Ti cards (11GB VRAM each) and 64GB of RAM
I can offload some layers to 1 GPU but the other sits idle. Am I just missing the setting, or is dual gpu not supported? | 2023-05-25T13:00:44 | https://www.reddit.com/r/LocalLLaMA/comments/13rh6b9/does_llamacpp_currently_support_multiple_gpus/ | ihexx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rh6b9 | false | null | t3_13rh6b9 | /r/LocalLLaMA/comments/13rh6b9/does_llamacpp_currently_support_multiple_gpus/ | false | false | self | 12 | null |
Running the model on a GPU instead of the CPU | 4 | [removed] | 2023-05-25T12:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/13rgdgx/running_the_model_on_a_gpu_instead_of_the_cpu/ | CrunchyMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rgdgx | false | null | t3_13rgdgx | /r/LocalLLaMA/comments/13rgdgx/running_the_model_on_a_gpu_instead_of_the_cpu/ | false | false | default | 4 | null |
Are weights copyrightable? | 6 | Are weights, which were created by AI and not humans, copyrightable at all? | 2023-05-25T12:22:39 | https://www.reddit.com/r/LocalLLaMA/comments/13rgc0a/are_weights_copyrightable/ | Bendaluk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rgc0a | false | null | t3_13rgc0a | /r/LocalLLaMA/comments/13rgc0a/are_weights_copyrightable/ | false | false | self | 6 | null |
What would be sensible settings to finetune a model on 24GB of VRAM? | 20 | I want to finetune a LLM to teach it new content in my field of work. In the end, it should be a chatbot to solve technical problems.
There are many parameters that seem to influence VRAM usage during training:
- model size

- lora rank

- micro batch size

- cutoff length
What would be a sensible option to start? I have a 3090. | 2023-05-25T12:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/13rg6y9/what_would_be_sensible_settings_to_finetune_a/ | IlIllIlllIlllIllll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rg6y9 | false | null | t3_13rg6y9 | /r/LocalLLaMA/comments/13rg6y9/what_would_be_sensible_settings_to_finetune_a/ | false | false | self | 20 | null |
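To illustrate how the knobs listed above typically show up in practice, here is a hedged sketch of a 4-bit QLoRA-style setup with PEFT. Every value (base model, rank, cutoff length, micro batch size) is an assumed starting point for a 24GB card, not a recommendation from the thread.

```python
# Illustrative starting point only; all names and values are assumptions to tune.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = "huggyllama/llama-13b"  # assumed base; a 13B model in 4-bit leaves headroom on 24GB

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16,                               # lora rank: higher rank -> more capacity and more VRAM
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

tokenizer = AutoTokenizer.from_pretrained(base)
# The other two big VRAM knobs from the list above:
cutoff_len = 512       # longer sequences cost much more activation memory
micro_batch_size = 2   # keep small and use gradient accumulation for a larger effective batch
```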
Can someone explain why there isn't a good interface for the oobabooga api in langchain? | 55 | I have to admit, I'm quite overwhelmed by all the different model types and quant stuff. Everywhere I look, it seems like everybody is doing things differently and using completely different approaches to models.
Let's see, there's: llama.cpp, llama.cpp with gpu (sorta, if you can figure it out I guess), autogptq, gptq triton, gptq old cuda, and hugging face pipelines. Then there's GGML (but three versions with breaking changes), GPTQ models, GPTJ?, HF models, .bin, .safetensors, act-order and no act-orders. Some of these quantized models will work with cuda, some won't apparently. On top of that you've got all these different models with different interface templates? They have different stop characters and different underlying prompt templates which don't seem to be documented anywhere? Wizard is different than llama is slightly different than alpaca is different than starcoder, and on and on and on...
oobabooga seems to be the only unified platform that is trying to at least package it all together so you can just run a model, but even then I'm struggling to have anything be as simple as download and run. To me, it honestly feels completely insane what's going on out there. It feels like you need to have a PHD to really play with this stuff in any meaningful capacity and stay up to date.
Which is why I don't understand if oobabooga at least has an api and is one of the few systems that can load models in GPU, why isn't there an interface for it in langchain? Why are all of these tutorials jumping through hoops to use llama.cpp or whatever when all the model loading can be handled by oobabooga? I'm really wondering if someone can help explain to me what's going on out there because I've been really struggling to actually get to work with this stuff. I've got a 3090ti and i'm struggling to find a consistent way to load a GPTQ model (or any model) on the gpu outside of oobabooga and interface with it using langchain. and I don't understand how to use the oobabooga api inside of langchain.
Just a little frustrated with all this. Would love to hear some thoughts from people out there who are successful with getting all this to work. | 2023-05-25T11:40:12 | https://www.reddit.com/r/LocalLLaMA/comments/13rffud/can_someone_explain_why_there_isnt_a_good/ | UnoriginalScreenName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rffud | false | null | t3_13rffud | /r/LocalLLaMA/comments/13rffud/can_someone_explain_why_there_isnt_a_good/ | false | false | self | 55 | null |
Huggingface 🤗 QLoRA docs | 41 | HF has released a library, docs and also a Google Colab for QLoRA
Everything goes too fast with 4bit quantization | 2023-05-25T10:13:35 | https://huggingface.co/blog/4bit-transformers-bitsandbytes | Xnohat | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13rdqqy | false | null | t3_13rdqqy | /r/LocalLLaMA/comments/13rdqqy/huggingface_qlora_docs/ | false | false | 41 | {'enabled': False, 'images': [{'id': '5ors3IzSd91LDWo2yC2yuTxbj5satwsZQHIPrP5aNIo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=108&crop=smart&auto=webp&s=ffc336c656133c8a8bd641e31b75fb1aa11f8621', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=216&crop=smart&auto=webp&s=b9ff564f2023e86473ce0e652c89c051f6ff1f0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=320&crop=smart&auto=webp&s=7f128fbeaaa0215fa9d56fe951360b6816d05d15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=640&crop=smart&auto=webp&s=11de21cb0d1e5e65061595bbdc3ee0e7bf363013', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=960&crop=smart&auto=webp&s=aa940f51838964b78f0e74d4a8d185daa8b66258', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=1080&crop=smart&auto=webp&s=b45d46bcf6025c8ae8b913dd314011b331b2f9d0', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?auto=webp&s=29860490a90daf517877704aef67f4db4a6ebbc9', 'width': 1300}, 'variants': {}}]} | |
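The linked blog post is about loading models in 4-bit through the transformers and bitsandbytes integration; below is a minimal sketch of that pattern (the model id is an arbitrary example, not one the post names).

```python
# Minimal 4-bit load following the transformers + bitsandbytes integration linked above;
# the model id below is an arbitrary example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the QLoRA default
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed and stability
)

model_id = "openlm-research/open_llama_7b"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```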
Engaging in conversation with Samantha AI (WizardLM-30B-Uncensored-GPTQ,Oobabooga API running in the cloud). Without a doubt, these models beat OpenAI "As a AI Language model" models :D | 37 | 2023-05-25T10:11:46 | https://v.redd.it/7wcfysj1cy1b1 | No_Wheel_9336 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13rdple | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/7wcfysj1cy1b1/DASHPlaylist.mpd?a=1695022491%2CMTk1OWJiYmU4Nzg2OWIxZTQ0YjA0ZGE3YTY5MDc0ZmFmZmRlNTNlMGQwMjk4YzA3ODcwOTczNzhmZWE0MTcyYQ%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/7wcfysj1cy1b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/7wcfysj1cy1b1/HLSPlaylist.m3u8?a=1695022491%2CYjYxOGFjNDUxMjY4YTUwZjI0MmM3NDBjYmViNGM0ZWMxMDRjZGRlODA2MjQ4ZDJmNGM0ZTM3NGJkNTBkMmZiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7wcfysj1cy1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}} | t3_13rdple | /r/LocalLLaMA/comments/13rdple/engaging_in_conversation_with_samantha_ai/ | false | false | default | 37 | null | |
All locally hosted discord chatbot? Text+Image+Speech? | 1 | [removed] | 2023-05-25T07:56:39 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13rbedy | false | null | t3_13rbedy | /r/LocalLLaMA/comments/13rbedy/all_locally_hosted_discord_chatbot_textimagespeech/ | false | false | default | 1 | null | ||
How I feel after two days | 2 | [removed] | 2023-05-25T06:57:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13rafzj | false | null | t3_13rafzj | /r/LocalLLaMA/comments/13rafzj/how_i_feel_after_two_days/ | false | false | default | 2 | null | ||
Looking for advice to build a home setup | 3 | I currently do not have a setup at home strong enough to train models.
I would like to get this setup, before the prices for these resources explode even more.
I'm willing to spend $1000-2000 on this.

Can you please lay out the do's and don'ts of purchasing a home rig?

What are currently good GPUs? Which specs of a GPU are more important than others? From what I hear, the RAM size is the most important?
What do I need to watch out for besides the GPU?
Would also be glad if you could point me to any blog posts that discuss this topic.
Many thanks. | 2023-05-25T05:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/13r96jp/looking_for_advice_to_build_a_home_setup/ | sismograph | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r96jp | false | null | t3_13r96jp | /r/LocalLLaMA/comments/13r96jp/looking_for_advice_to_build_a_home_setup/ | false | false | self | 3 | null |
Claude can read 75k words in seconds | 7 | [deleted] | 2023-05-25T05:05:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13r8i5j | false | null | t3_13r8i5j | /r/LocalLLaMA/comments/13r8i5j/claude_can_read_75k_words_in_seconds/ | false | false | default | 7 | null | ||
GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ | 49 | The difference from QLoRA is that GPTQ is used instead of NF4 (Normal Float4) + DQ (Double Quantization) for model quantization.
The advantage is that you can expect better performance because it provides better quantization than conventional bitsandbytes.
The downside is that it is a one-shot quantization methodology, so it is more inconvenient than bitsandbytes, and unlike bitsandbytes, it is not universal.
I'm still experimenting, but it seems to work.
At least, I hope it can be more options for people using LoRA.
[https://github.com/qwopqwop200/gptqlora/tree/main](https://github.com/qwopqwop200/gptqlora/tree/main)
https://preview.redd.it/wblq0glrlw1b1.png?width=810&format=png&auto=webp&s=734d6786bc2fff06fa46f2d8d7da0e6dae9ca121 | 2023-05-25T04:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/13r7pzg/gptqlora_efficient_finetuning_of_quantized_llms/ | CowZestyclose3193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r7pzg | false | null | t3_13r7pzg | /r/LocalLLaMA/comments/13r7pzg/gptqlora_efficient_finetuning_of_quantized_llms/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'xmqlWbAhUstrTd79Q8ac3zMYFtwdGQ3nhBmv0lIqcoI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=108&crop=smart&auto=webp&s=13e4a55a5507f496ea6b420c0fd09d59f458b9dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=216&crop=smart&auto=webp&s=c0b63341fa6476aa47423c12bb5164b8eebabc28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=320&crop=smart&auto=webp&s=5f3611ce5e4634bdaf47b74413b99b3015960c9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=640&crop=smart&auto=webp&s=cb0b4b8b051a3026f85730025c4132a33e42614e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=960&crop=smart&auto=webp&s=47f9e9b8a33ad055705652b99692069a51bec831', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=1080&crop=smart&auto=webp&s=0b15d6c64b32c9af1e7f04cc2c377049c66aa13c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?auto=webp&s=deed3fd004c409ed140aba1fceed1b0357efe12c', 'width': 1200}, 'variants': {}}]} | |
Okay so it's now surprisingly efficient to fine tune with QLora.. how can we take advantage of this? | 20 | What are people's gut instincts on how to best take advantage? And what kind are people already making? Would love to learn how these easy fine tuning can impact our daily usage | 2023-05-25T03:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/13r6xh9/okay_so_its_now_surprisingly_efficient_to_fine/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r6xh9 | false | null | t3_13r6xh9 | /r/LocalLLaMA/comments/13r6xh9/okay_so_its_now_surprisingly_efficient_to_fine/ | false | false | self | 20 | null |
How well would a big quad-core server work for running LLMs? | 3 | I want to experiment with locally running an LLM (considering WizardLM-30B-Uncensored). I don't really have the right hardware for it, but I do have an R820 with 4x E5-4650 processors and 720GB of ram.
I know running a model with CPU/RAM is horribly inefficient, but just how slow would my R820 be at generating text with such a model? Has anybody tried anything similar? I don't care too much about energy usage, just how long it takes to generate text. | 2023-05-25T02:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/13r4tgq/how_well_would_a_big_quadcore_server_work_for/ | ruapirate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r4tgq | false | null | t3_13r4tgq | /r/LocalLLaMA/comments/13r4tgq/how_well_would_a_big_quadcore_server_work_for/ | false | false | self | 3 | null |
Which models should/could I download. M2 Mac mini 16GB Ram 500GB SSD | 0 | [deleted] | 2023-05-25T01:48:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13r4kee | false | null | t3_13r4kee | /r/LocalLLaMA/comments/13r4kee/which_models_shouldcould_i_download_m2_mac_mini/ | false | false | default | 0 | null | ||
Intel 13th gen performance | 3 | [removed] | 2023-05-25T01:33:46 | https://www.reddit.com/r/LocalLLaMA/comments/13r48uc/intel_13th_gen_performance/ | footurist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r48uc | false | null | t3_13r48uc | /r/LocalLLaMA/comments/13r48uc/intel_13th_gen_performance/ | false | false | default | 3 | null |
Intel Vs AMD performance | 7 | In a thread about tokens/sec performance in this sub I read a comment by someone that noticed that all the better performing systems had Intel CPUs. So I went to a gpu cloud and tested out various systems with some of the smaller HF models using oobabooga, all being headless Linux machines. What I found was that I couldn't observe a significant difference between less and more powerful GPUs, which I thought was already quite strange.
THE ONE thing that I could extract from that experiment is that the systems that had 13th gen Intel CPUs ( 5k, 7k, 9k all performed similarly, but the 5k seemed to have a slight edge ) all performed a lot better than all the rest of the systems ( including other Intel CPUs ).
With this exact CPU series less powerful GPUs ( e.g. 3080 ) performed much better than more powerful ones ( e.g. 4090 ). The difference was as much as 20 vs 50 t/s on the quantised 7b models.
Can anybody provide insight into why this is happening? As an aside: could anybody who gets speeds significantly beyond this (e.g. 70 or 100) share their setup? | 2023-05-25T00:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/13r3ayc/intel_vs_amd_performance/ | footurist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r3ayc | false | null | t3_13r3ayc | /r/LocalLLaMA/comments/13r3ayc/intel_vs_amd_performance/ | false | false | self | 7 | null |
Introducing PB reddit 7B - a lora trained on financial subreddits | 47 | 2023-05-25T00:21:32 | https://huggingface.co/winddude/pb_lora_7b_v0.1 | wind_dude | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13r2oh3 | false | null | t3_13r2oh3 | /r/LocalLLaMA/comments/13r2oh3/introducing_pb_reddit_7b_a_lora_trained_on/ | false | false | 47 | {'enabled': False, 'images': [{'id': 'K17jxhEiS3GLm_8LeCSvyhVjkzErMe0Bfe1j1E2_NhM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=108&crop=smart&auto=webp&s=4fd6e56f2fba3081430c83cf277da90a31612fa6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=216&crop=smart&auto=webp&s=f327386bbcf3aa02f6f4085bf69757e4f51d8368', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=320&crop=smart&auto=webp&s=a2290d3b6e5f929272e2bb4d920a5ae0080b9a07', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=640&crop=smart&auto=webp&s=d531f39e4cbe0f148edb4b867a563bc46201f720', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=960&crop=smart&auto=webp&s=21392cf6832b3843464a8113b337fc7607df7168', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=1080&crop=smart&auto=webp&s=4639fda70e4f207ccbb780c566ecde06e92bf87b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?auto=webp&s=794e674e887d98d624728292908e45bfa4d87324', 'width': 1200}, 'variants': {}}]} | ||
Anyone working on linking local Ai with Home Assistant? | 7 | Title. To be clear I am in no way a coder, so this is totally me asking for someone else to do it. :p
But imagine how cool it would be, especially with the AI voice plug-ins. It would be like a local Alexa that is actually useful. I imagine there could be some sort of custom prompt explaining how the model should behave, as well as a list of all of the devices it can control.
I'm sure it will come out eventually but based on my somewhat limited knowledge of ai models and extremely limited coding knowledge I think it's possible today. It might even be possible to link in stable diffusion, although that's a whole other thing. | 2023-05-24T23:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/13r15j3/anyone_working_on_linking_local_ai_with_home/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r15j3 | false | null | t3_13r15j3 | /r/LocalLLaMA/comments/13r15j3/anyone_working_on_linking_local_ai_with_home/ | false | false | self | 7 | null |
Compression Aware prompting for quantized models | 6 | 2023-05-24T21:54:38 | https://arxiv.org/pdf/2305.11186v1.pdf | RMCPhoto | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 13qz0qc | false | null | t3_13qz0qc | /r/LocalLLaMA/comments/13qz0qc/compression_aware_prompting_for_quantized_models/ | false | false | default | 6 | null | |
Sharing my project: a desktop app for finding, downloading, and running LLaMa models | 10 | [removed] | 2023-05-24T18:25:10 | [deleted] | 2023-05-26T21:57:06 | 0 | {} | 13qtfqx | false | null | t3_13qtfqx | /r/LocalLLaMA/comments/13qtfqx/sharing_my_project_a_desktop_app_for_finding/ | false | false | default | 10 | null | ||
True story of my life in the last weeks | 150 | 2023-05-24T18:05:27 | Evening_Ad6637 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13qswm6 | false | null | t3_13qswm6 | /r/LocalLLaMA/comments/13qswm6/true_story_of_my_life_in_the_last_weeks/ | false | false | 150 | {'enabled': True, 'images': [{'id': 'VKB1Xc3iTc96RWAjEyDS3DdDNVz2QsQURT8fZLMN7cg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?width=108&crop=smart&auto=webp&s=fff311e9779810e38b1951b9509bbb9e38c84a75', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?width=216&crop=smart&auto=webp&s=72e2417669621b8447c4b112c9d2fbe4d66b6715', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?width=320&crop=smart&auto=webp&s=4b1bca13883bf7fbd0311ddf990bac0e4981ac0e', 'width': 320}], 'source': {'height': 1777, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?auto=webp&s=417cce3ab92b9590b75e24c343b2870154f9ee67', 'width': 484}, 'variants': {}}]} | |||
Has the LLaMA licence been updated? | 3 | Apologies if this has been asked and answered before. When I googled the LLaMA licence I found this:
https://github.com/facebookresearch/llama/blob/main/LICENSE
This indicates that LLaMA (and therefore its fine-tuned variants) can be used commercially, whereas most of the discussion I have seen suggests that LLaMA can only be used for research.
Which is the truth? Is there a more official place to look for the licencing? | 2023-05-24T17:58:20 | https://www.reddit.com/r/LocalLLaMA/comments/13qsp3i/has_the_llama_licence_been_updated/ | Bryan-Ferry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qsp3i | false | null | t3_13qsp3i | /r/LocalLLaMA/comments/13qsp3i/has_the_llama_licence_been_updated/ | false | false | self | 3 | null |