title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LLM RAG with Apple M2 192gb: feasible? Curious to hear if you've tried it | 9 | I have read that running inference on larger LLM models is possible on workstations with Apple's M2 chip and 192GB RAM.
I am curious to know if it is technically feasible to do RAG fine-tuning on a model with such a workstation. If you have tried it I would love to hear your experience! | 2023-10-20T19:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/17cjprz/llm_rag_with_apple_m2_192gb_feasible_curious_to/ | Drited | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cjprz | false | null | t3_17cjprz | /r/LocalLLaMA/comments/17cjprz/llm_rag_with_apple_m2_192gb_feasible_curious_to/ | false | false | self | 9 | null |
How do I run this on CPU? | 1 | 2023-10-20T19:35:36 | https://huggingface.co/FelixChao/vicuna-33b-coder/tree/main | YuhFRthoYORKonhisass | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17cjgzi | false | null | t3_17cjgzi | /r/LocalLLaMA/comments/17cjgzi/how_do_i_run_this_on_cpu/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'l2d4OCeMlcYmoxPMbYowSBLE3eLauwSRyiVnb1uxwwk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?width=108&crop=smart&auto=webp&s=240be72f1a250ace063717f1b6da2710669e609b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?width=216&crop=smart&auto=webp&s=b1b90448ce30a4ea328026f8c6050abd0deb20f0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?width=320&crop=smart&auto=webp&s=e91fda91076f0f78a1709cf2f3d0598c71ff6e70', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?width=640&crop=smart&auto=webp&s=6514a3b2cf68991e1fb914b56fab45dd3be59f20', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?width=960&crop=smart&auto=webp&s=112065b22bc3a8c583ee08f2b43d52c5ff6e155a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?width=1080&crop=smart&auto=webp&s=3bd234cbbd61a333f6a31b00a195efdcb2e58b26', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SMaiOZunloFhNArTk6PtN9xXALq3eB8SAJll6uu8RYU.jpg?auto=webp&s=5dae7443825e50b11ae024ce0e3cfae2014ebd8e', 'width': 1200}, 'variants': {}}]} | ||
What RoPE values to use for 16K on Kobold? | 2 | What settings should I provide to use 16k context size with XWin-70b on KoboldCPP? The documentation claims that it automatically selects the best values, but then the log repeats the same default values for any model and context size. | 2023-10-20T18:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/17ci5hy/what_rope_values_to_use_for_16k_on_kobold/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ci5hy | false | null | t3_17ci5hy | /r/LocalLLaMA/comments/17ci5hy/what_rope_values_to_use_for_16k_on_kobold/ | false | false | self | 2 | null |
What's the best out-of-box Open LM replacement for OpenAI Function Calls? | 6 | Please don't recommend Gorilla 👀 | 2023-10-20T17:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/17chch0/whats_the_best_outofbox_open_lm_replacement_for/ | dulldata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17chch0 | false | null | t3_17chch0 | /r/LocalLLaMA/comments/17chch0/whats_the_best_outofbox_open_lm_replacement_for/ | false | false | self | 6 | null |
Any good 13b models that are good for proof reading? | 7 | As the title says. Thanks | 2023-10-20T17:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/17ch248/any_good_13b_models_that_are_good_for_proof/ | CollectionLeather292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ch248 | false | null | t3_17ch248 | /r/LocalLLaMA/comments/17ch248/any_good_13b_models_that_are_good_for_proof/ | false | false | self | 7 | null |
Llama 2 on Amazon SageMaker a Benchmark | 5 | 2023-10-20T17:33:27 | https://huggingface.co/blog/llama-sagemaker-benchmark | Ion_GPT | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17cgsbq | false | null | t3_17cgsbq | /r/LocalLLaMA/comments/17cgsbq/llama_2_on_amazon_sagemaker_a_benchmark/ | false | false | 5 | {'enabled': False, 'images': [{'id': '6GF5SFLQuxM57YUbXvvVb4a8VRrVD6a78kruPuKM9X8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=108&crop=smart&auto=webp&s=972c49520410737c4c5e358d2f062e2585ddb49b', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=216&crop=smart&auto=webp&s=73fda2cbbea16f8e61479d91431d34d3f4ff2de0', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=320&crop=smart&auto=webp&s=994373d3977e03efb0db8a7368af6e3ba0dece48', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=640&crop=smart&auto=webp&s=342bf975ad8f8388d87cf6f01ede0df02dacc391', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=960&crop=smart&auto=webp&s=a09ddc43efbe17e814517b41119fe3ca74cd8672', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=1080&crop=smart&auto=webp&s=732c9e546b2f7b6ff65899eaa93e36abd99ea66e', 'width': 1080}], 'source': {'height': 1248, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?auto=webp&s=308f5037b00e6947c08c812e8a70f66934503276', 'width': 2400}, 'variants': {}}]} | ||
How to analyze code and produce fixes using a LLM? | 10 | I'm interested in using a large language model (LLM) to analyze my code and produce fixes. I have a detailed exception log from my codebase, and I want to use the LLM to suggest code fixes.
I was thinking of converting my codebase to PDF, then training a localGPT model on it. I would then bind the localGPT model with a coding model. When I give the LLM the exception log, it would use the localGPT model to understand the context of the code, and the coding model to suggest fixes.
Is this a feasible approach? Are there any other tools or resources that I could use?
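For context, here is the kind of retrieval step I imagine, skipping the PDF conversion and indexing the source files directly (a rough sketch; the embedding model and chunking are just illustrative guesses on my part):

```python
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

# Index the codebase as plain-text chunks -- no PDF conversion needed
chunks = [p.read_text() for p in Path("my_project").rglob("*.py")]
chunk_embs = embedder.encode(chunks, convert_to_tensor=True)

# Retrieve the files most relevant to the exception log...
exception_log = Path("traceback.txt").read_text()
query_emb = embedder.encode(exception_log, convert_to_tensor=True)
hits = util.semantic_search(query_emb, chunk_embs, top_k=3)[0]

# ...and hand them to a coding model as context for the fix
context = "\n\n".join(chunks[h["corpus_id"]] for h in hits)
prompt = f"Given this code:\n{context}\n\nAnd this exception:\n{exception_log}\n\nSuggest a fix."
```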
Where should I start to achieve that? | 2023-10-20T17:27:02 | https://www.reddit.com/r/LocalLLaMA/comments/17cgn2t/how_to_analyze_code_and_produce_fixes_using_a_llm/ | Sametklou | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cgn2t | false | null | t3_17cgn2t | /r/LocalLLaMA/comments/17cgn2t/how_to_analyze_code_and_produce_fixes_using_a_llm/ | false | false | self | 10 | null |
What tools would I use to convert a model from (I think) GGUF to (I think) exl2? | 6 | Hey all, I've been writing code for 40 years and administering Linux for 30, but my Google-fu is failing me on this topic and it makes me wonder if I'm not asking my question correctly. So I'd \*really\* appreciate it if anyone corrected my terms and I'm not describing things properly.
About a month ago, I started playing with a 2.55 bpw (bits per weight) version of raw Llama2 with ExLlamav2. I've been really happy with its ability to answer questions and summarize input. The model I'm using is one that's no longer available on Hugging Face, but it was posted by turboderp and named by them as: turboderp\_LLama2-70B-chat-2.55bpw-h6-exl2
That said, I'd love to be able to play with some of the newer 70B models folks are coming out with, but this requantization doesn't seem to have set the world on fire, so I can't depend on others to make the models I want.
So what I'd love to know is: I have a GGUF format model I'd like to turn into a "2.55bpw-h6-exl2" model. I am presuming there is a set of commands that needs to be executed to do this, but I cannot for the life of me figure out what tools folks are using.
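From what I've pieced together so far (flags from memory and unverified, so corrections welcome; check `python convert.py -h` in the exllamav2 repo), the conversion starts from the original fp16 HF weights rather than the GGUF, something like:

```
# exllamav2's convert.py quantizes HF-format fp16 weights to exl2.
# -b = target bits per weight (the "2.55bpw"), -hb = head bits (the "h6").
python convert.py \
    -i  /models/SomeModel-70B-fp16 \
    -o  /tmp/exl2-work \
    -cf /models/SomeModel-70B-2.55bpw-h6-exl2 \
    -b  2.55 \
    -hb 6
```

So presumably I'd need to find the original fp16 repo of the model instead of converting my GGUF copy directly. Is that right?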
Thanks in advance! | 2023-10-20T17:04:10 | https://www.reddit.com/r/LocalLLaMA/comments/17cg4ir/what_tools_would_i_use_to_convert_a_model_from_i/ | the_quark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cg4ir | false | null | t3_17cg4ir | /r/LocalLLaMA/comments/17cg4ir/what_tools_would_i_use_to_convert_a_model_from_i/ | false | false | self | 6 | null |
Llama2-13b-chat | Chatbot | 7 | I have some questions about making a chatbot with large amounts of pdfs and other file formats! Any input or nudge in the right direction would be awesome.
So my big question is: what is the best way to make a vector database? Do I need things like Pinecone, or can I make it entirely myself?
I am planning on using LangChain and Llama13b-chat-hf to make a chatbot for customer support. It should be able to run locally.
I am a bit of a loss about what I should use. I read that you can use QLora for fine-tuning but is it needed in this case? Are there any other vital parts I am missing?
I basically need the pre-trained model to learn all the information from the PDFs, which I want to store in a vector database, and to answer questions using only the information I provide.
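For example, would something like this homemade setup (library choices are just my guesses) be a reasonable substitute for Pinecone?

```python
import faiss
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model

# Split the PDFs into ~1000-character passages
pages = [p.extract_text() or "" for p in PdfReader("manual.pdf").pages]
chunks = [pg[i:i + 1000] for pg in pages for i in range(0, len(pg), 1000)]

# Build a local FAISS index -- no hosted service involved
embs = embedder.encode(chunks)
index = faiss.IndexFlatIP(embs.shape[1])
index.add(embs)

# At question time: retrieve chunks and stuff them into the Llama-2-chat prompt
question = "How do I reset my password?"
_, ids = index.search(embedder.encode([question]), 3)
context = "\n".join(chunks[i] for i in ids[0])
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
```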
As you can tell I am very new to this and any input is greatly appreciated! | 2023-10-20T16:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/17cfycf/llama213bchat_chatbot/ | 8Optimism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cfycf | false | null | t3_17cfycf | /r/LocalLLaMA/comments/17cfycf/llama213bchat_chatbot/ | false | false | self | 7 | null |
Budget of 5-10k to get a good performance | 28 | Hello,
I’m running Llama2-13b locally on a server. The response time is very bad (>5 min). I have a budget of 5k - 10k to get the response time below 15s. I want to upload PDF files, and the answers should be based on these.
Which hardware/which setup can you recommend to run Llama2-13b with a good performance?
Which hardware/which setup can you recommend to run Llama2-70b with a good performance?
Cloud is not an option because of sensitive data, except for testing/development.
Thank you! | 2023-10-20T16:48:17 | https://www.reddit.com/r/LocalLLaMA/comments/17cfrpg/budget_of_510k_to_get_a_good_performance/ | Available_College_79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cfrpg | false | null | t3_17cfrpg | /r/LocalLLaMA/comments/17cfrpg/budget_of_510k_to_get_a_good_performance/ | false | false | self | 28 | null |
My experiments with GPT Engineer and WizardCoder-Python-34B-GPTQ | 33 | Finally, I attempted gpt-engineer to see if I could build a serious app with it. A micro e-commerce app with a payment gateway. The basic one.
Though the docs suggest using it with gpt-4, I went ahead with my local WizardCoder-Python-34B-GPTQ running on a 3090 with oobabooga and the openai plugin.
It started with a description of the architecture, code structure etc. It even picked the right frameworks to use. I was very impressed. The generation was quite fast, and with the 16k context I didn't face any fatal errors. Though, at the end it wouldn't write the generated code to disk. :(
Hours of debugging, research followed... nothing worked. Then I decided to try openai gpt-3.5.
To my surprise, the code it generated was good for nothing. I tried several times with detailed prompting etc., but it can't do engineering work yet.
Then I upgraded to gpt-4. It did produce slightly better results than gpt-3.5, but still the same basic stub code; the app won't even start.
Among the three, I found WizardCoder's output far better than gpt-3.5's and gpt-4's. But that's just my personal opinion.
I wanted to share my experience here and would be interested in hearing similar experiences from other members of the group, as well as any tips for success. | 2023-10-20T16:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/17cfrdj/my_experiments_with_gpt_engineer_and/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cfrdj | false | null | t3_17cfrdj | /r/LocalLLaMA/comments/17cfrdj/my_experiments_with_gpt_engineer_and/ | false | false | self | 33 | null |
I developed an AI model as a 12th grade student. | 1 | Hello everyone! I have developed an AI project leveraging the BERT model to classify texts, identifying insults or concealed derogatory remarks within lengthy text passages. Utilizing a Turkish dataset, the project achieved an accuracy of 87%. On a related note, I'm curious to know if OpenAI or any company provides any resources or programs tailored for young students? | 2023-10-20T15:59:03 | https://www.reddit.com/r/LocalLLaMA/comments/17cemzx/i_developed_an_ai_model_as_a_12th_grade_student/ | Sensitive-Ad6659 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cemzx | false | null | t3_17cemzx | /r/LocalLLaMA/comments/17cemzx/i_developed_an_ai_model_as_a_12th_grade_student/ | false | false | self | 1 | null |
Is anybody using Llama or any other LLM as part of a product's pipeline? | 16 | Obviously LLMs are very powerful and there is a lot of hype so I'm sure a lot of people are trying to deploy them into their software pipelines somehow, but I don't understand how they might be used as part of a particular software product. | 2023-10-20T15:52:33 | https://www.reddit.com/r/LocalLLaMA/comments/17cehrg/is_anybody_using_llama_or_any_other_llm_as_part/ | duffpaddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cehrg | false | null | t3_17cehrg | /r/LocalLLaMA/comments/17cehrg/is_anybody_using_llama_or_any_other_llm_as_part/ | false | false | self | 16 | null |
What are the token probabilities for ctransformers? | 1 | I found these logit probs when using Llama 2 from TheBloke via ctransformers. I thought all the probs were log-probs, but obviously not in this case...
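As a sanity check, here's a minimal sketch (my assumption: these are raw, unnormalized next-token logits) of turning values like the ones below into probabilities and log-probs with a softmax:

```python
import numpy as np

# A few of the raw values from below (assumed to be unnormalized logits)
logits = np.array([1.6911, -0.3545, -0.2773, 0.3112, 2.7993, 3.9174])

probs = np.exp(logits - logits.max())  # numerically stable softmax
probs /= probs.sum()
logprobs = np.log(probs)               # true log-probs are always <= 0
print(probs, logprobs)
```

The raw values: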
`00000 = {float} 1.6911218166351318`
`00001 = {float} -0.3545471429824829`
`00002 = {float} -0.2773338556289673`
`00003 = {float} 0.3111882507801056`
`00004 = {float} 2.7992610931396484`
`00005 = {float} 3.917383909225464`
`...` | 2023-10-20T15:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/17ce7qt/what_are_the_token_probabilities_in_for/ | natural_language_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ce7qt | false | null | t3_17ce7qt | /r/LocalLLaMA/comments/17ce7qt/what_are_the_token_probabilities_in_for/ | false | false | self | 1 | null |
Brainstorm. Small company use of AI for customer support. | 2 | So I have been thinking about using AI for customer support. We put about 5-15% of our time on customer support. We are a small company, 10 people. I'm in customer support and I'm looking into ways to make use of an LLM to manage it. It's email and social media channels. Id like our customers to interact with an agent, not just a simple preprogrammed decision tree bot.
We are open to failure and weird outcomes. We are a small company and have a spirit of adventure.
How would you approach this? | 2023-10-20T15:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/17cdin2/brainstorm_small_company_use_of_ai_for_customer/ | slemklumpen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cdin2 | false | null | t3_17cdin2 | /r/LocalLLaMA/comments/17cdin2/brainstorm_small_company_use_of_ai_for_customer/ | false | false | self | 2 | null |
Is there any meaning in buying a 1500€ computer for AI ? | 75 | Hello,
I'm not a specialist, more "the IT guy" at work (a public library).

Since the beginning of 2022 I have been trying to convince my management and co-workers that we should pay attention to AI: machine learning, vector databases, image creation, automatic translation, etc.

I managed to give a few presentations that convinced the audience that something is going on and that it's an opportunity to attract people into the library. Meanwhile, I started improving my skills so I can use things in Colab or Kaggle.

Not building: just using, and being able to demonstrate what is possible and what's not.

I was told that I might get some money to buy a PC, in a range around 1500€. Is it worthwhile given the modest GPU I could get for this price? And if yes, what should I aim for? The goal would be to give demonstrations of LLMs, image creation, text-to-speech, etc.
Thanks in advance for your answers. | 2023-10-20T14:42:36 | https://www.reddit.com/r/LocalLLaMA/comments/17ccwl1/is_there_any_meaning_in_buying_a_1500_computer/ | CedricLimousin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ccwl1 | false | null | t3_17ccwl1 | /r/LocalLLaMA/comments/17ccwl1/is_there_any_meaning_in_buying_a_1500_computer/ | false | false | self | 75 | null |
Total Newbie Questions | 2 | Hello!
​
\- I have multiple Dell R410 servers, fully loaded with CPU and RAM, but not GPU, is it possible to run a model on it, even the bigger ones (i have all my time ) ?
\- I've been able to run GPT4ALL with detailled instructions and it run perfectly. Do i just have to change the model files to run another one , or do they all come with their own binary ?
​
\- What is the best model to write stories (more than fews words limits i have with gpt4all ?
​
\- Any good tutorial somewhere ? Something more fun than just following instriuctions ?
​
​
Thanks everyone !
​ | 2023-10-20T14:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/17ccqjc/total_newbie_questions/ | delphinealdrine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ccqjc | false | null | t3_17ccqjc | /r/LocalLLaMA/comments/17ccqjc/total_newbie_questions/ | false | false | self | 2 | null |
MultiToken: Embed arbitrary modalities (images, audio, documents, etc) into large language models | 24 | Hi all, I wanted to share a project I've been working on to build universal large multimodal models (LMMs).
There's the cool technique implemented in [LLaVA](https://github.com/haotian-liu/LLaVA) where you can essentially embed images into the token space of LLMs and then use them in chat like "Tell me what is written in <image>". In MultiToken, we take this one step further to generalize to any set of modalities so rather than images you could do <audio> snippets or even use both within the same model. My goal was to make this fairly plug-and-play so you can take any existing encoder/vector/embedding model and inject it into an LLM with some straightforward training + inference scripts.
Some potential examples:
* Read <document> and give me a summary.
* (You can *in theory* compress documents using a decent embedding model like ADA2 giving you the ability to have 1 million+ token lossy context windows with nearly zero VRAM/inference cost)
* Listen to <audio> and answer the spoken question in the same tone.
* Compare and contrast <image> and <image>
* Given <screenshot> and <game-state>, what key should I press?
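To illustrate the core mechanism, here is a simplified sketch (not the actual repo code; the dimensions are illustrative) of how an encoder's output is projected into the LLM's token-embedding space and spliced in where the placeholder token sits:

```python
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    """Map encoder features (e.g. a 768-d CLIP output) into the
    LLM's token-embedding space (e.g. 4096-d for a Llama-7B)."""
    def __init__(self, enc_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(enc_dim, llm_dim)

    def forward(self, enc_feats: torch.Tensor) -> torch.Tensor:
        # (n_patches, enc_dim) -> (n_patches, llm_dim)
        return self.proj(enc_feats)

def splice(token_embs: torch.Tensor, modal_embs: torch.Tensor, pos: int) -> torch.Tensor:
    """Replace the placeholder token's embedding at `pos` (e.g. "<image>")
    with the projected modality embeddings."""
    return torch.cat([token_embs[:pos], modal_embs, token_embs[pos + 1:]], dim=0)
```

In LLaVA-style setups the projector is typically the main trainable piece, with the encoder and the LLM kept frozen.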
If you are interested check it out! [https://github.com/sshh12/multi\_token](https://github.com/sshh12/multi_token)
I've only been able to release a limited pre-trained demo model so far so this is mostly useful for those who have a dataset + GPUs to train on, but in T-3 weeks my 3090 should spit out something better (: | 2023-10-20T14:32:57 | https://www.reddit.com/r/LocalLLaMA/comments/17ccp0l/multitoken_embed_arbitrary_modalities_images/ | sshh12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ccp0l | false | null | t3_17ccp0l | /r/LocalLLaMA/comments/17ccp0l/multitoken_embed_arbitrary_modalities_images/ | false | false | self | 24 | null |
How to increase ollama inferencing speed with CPU only | 12 | I tried to create a new model Inheriting from existing Codellama-13B model and set parameters as following
use_mmap false # to utilize 50GB RAM
num_thread 8 # to utilize 8 cores CPU
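For reference, the Modelfile looked roughly like this (reconstructed from memory; the base model tag is approximate, and PARAMETER support can vary by ollama version, see `ollama show <model> --modelfile`):

```
# use_mmap false -> load fully into the 50GB RAM; num_thread 8 -> use 8 cores
FROM codellama:13b
PARAMETER use_mmap false
PARAMETER num_thread 8
```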
But there is no significant improvement in inference speed.
Did you try the same?
I would appreciate any suggestions
Thank you in advance | 2023-10-20T14:29:40 | https://www.reddit.com/r/LocalLLaMA/comments/17ccm9b/how_to_increase_ollama_inferencing_speed_with_cpu/ | GroundbreakingNet574 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ccm9b | false | null | t3_17ccm9b | /r/LocalLLaMA/comments/17ccm9b/how_to_increase_ollama_inferencing_speed_with_cpu/ | false | false | self | 12 | null |
Any LMStudio alternative? | 9 | Is anyone aware of any LMStudio alternative that can expose the local GGML models as OpenAI compatible API endpoint? | 2023-10-20T14:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/17ccak1/any_lmstudio_alternative/ | dulldata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ccak1 | false | null | t3_17ccak1 | /r/LocalLLaMA/comments/17ccak1/any_lmstudio_alternative/ | false | false | self | 9 | null |
Seed-LLaMA: Open-Source DALLE-3? | 43 | 2023-10-20T13:52:28 | https://ailab-cvc.github.io/seed/seed_llama.html | ninjasaid13 | ailab-cvc.github.io | 1970-01-01T00:00:00 | 0 | {} | 17cbt1f | false | null | t3_17cbt1f | /r/LocalLLaMA/comments/17cbt1f/seedllama_opensource_dalle3/ | false | false | default | 43 | null | |
Where to find hosted multimodals like cogvllm and llava-1.5b | 1 | I am looking for an API service that serves multimodal models; any information would be helpful.
Keep Clam and Scary On | 0 | 2023-10-20T12:23:54 | https://www.reddit.com/gallery/17ca1cp | Anna5750 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17ca1cp | false | null | t3_17ca1cp | /r/LocalLLaMA/comments/17ca1cp/keep_clam_and_scary_on/ | false | false | 0 | null | ||
LLM speeds on 4090 | 4 | These are the speeds I get with different LLMs on my 4090 card at half precision.
*Phi-1.5 \~56*
*WizardCoder-3B-V1.0 \~7.5*
*WizardCoder-Python-7B-V1.0 \~28*
*WizardCoder-Python-13B-V1.0 \~0.4*
*WizardCoder-15B-V1.0 \~1.5*
*Driver Version: 537.34, CUDA Version: 12.2, transformers 4.33.1*
Are these speeds okay?
I have seen people getting way more with the same card; could it be because of the driver versions?
TIA. | 2023-10-20T11:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/17c9kf8/llm_speeds_on_4090/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c9kf8 | false | null | t3_17c9kf8 | /r/LocalLLaMA/comments/17c9kf8/llm_speeds_on_4090/ | false | false | self | 4 | null |
Open Source AI FOMO Saver | 2 | [https://github.com/premAI-io/state-of-open-source-ai](https://github.com/premAI-io/state-of-open-source-ai) | 2023-10-20T11:41:13 | https://www.reddit.com/r/LocalLLaMA/comments/17c99ha/open_source_ai_fomo_saver/ | nsosio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c99ha | false | null | t3_17c99ha | /r/LocalLLaMA/comments/17c99ha/open_source_ai_fomo_saver/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'GQr5n13kcL6etuAZ_8FqLYTl7SSyXRTPPUJkouvxu6U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?width=108&crop=smart&auto=webp&s=ccdde4c0384243dde435983217ec88aa0d473ff2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?width=216&crop=smart&auto=webp&s=69dd35ee10af27218560f44ecb4dcee46314a5f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?width=320&crop=smart&auto=webp&s=6c012ef644a0b6221ed962e14d0604f271ae004a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?width=640&crop=smart&auto=webp&s=a2d5846b4d8b27977226cd73fd879ed6169f11a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?width=960&crop=smart&auto=webp&s=b0f8bad4be027ddc53235e8ef0e51016dcc8dcaf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?width=1080&crop=smart&auto=webp&s=aad30643b2d308910b23bfc50bcfa12ed261ad09', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fbHtUuN6rCsNzhOwnWZPKU3Qw9TzK9Ic-xb5uuLuek4.jpg?auto=webp&s=bf963d5ead2911d6aafbb6028a13c7bca873bfff', 'width': 1200}, 'variants': {}}]} |
Is anyone fine-tuning CodeLlama on Rust code bases? | 1 | [removed] | 2023-10-20T11:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/17c8qm7/is_anyone_finetuning_codellama_on_rust_code_bases/ | Mountain-Olive2947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c8qm7 | false | null | t3_17c8qm7 | /r/LocalLLaMA/comments/17c8qm7/is_anyone_finetuning_codellama_on_rust_code_bases/ | false | false | self | 1 | null |
Services to host fine-tuned models | 1 | [removed] | 2023-10-20T11:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/17c8muo/services_to_host_finetuned_models/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c8muo | false | null | t3_17c8muo | /r/LocalLLaMA/comments/17c8muo/services_to_host_finetuned_models/ | false | false | self | 1 | null |
Safe Sandbox for automated LLM | 1 | Im learning, so.. there’s that, I’m going to figure this out, one way or another.. and would rather not do whatever horrible crap one might do in this situation, I get I need a very limited user account, to go heavy into certain firewall settings and user account privileges etc .. i haven’t looked too far into it and figured I’d ask all you clever bastards for advice first. I was thinking I could setup a Windows Sandbox with all those extra steps covered and .. it would theoretically be safe, nothing to worry about.. no teeny tiny I wish I was a skynet or anything? lol :p Any advice /guidance would be greatly appreciated!
What’s a decent community of home users that are run, train and fine-tune custom LLMs etc?
I want to eventually learn how to setup Autogen with custom fine-tuned LLMs .. but first, I thought automating one on its own little box and letting it ‘dream’ etc first would be insanely awesome.. like a lil AI ant farm? | 2023-10-20T10:30:28 | https://www.reddit.com/r/LocalLLaMA/comments/17c82fz/safe_sandbox_for_automated_llm/ | 80Unknown08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c82fz | false | null | t3_17c82fz | /r/LocalLLaMA/comments/17c82fz/safe_sandbox_for_automated_llm/ | false | false | self | 1 | null |
Could I add some P40 cards to run bigger models? | 3 | I have an RTX 3090 and I want to run Falcon 180B on my server. The Q3_K_M version is 85.2GB, which means I need to buy three 24GB P40s (24GB on the 3090 plus 3 × 24GB = 96GB total, leaving headroom for context). Has anyone done this before? Is the generation speed acceptable? | 2023-10-20T10:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/17c7wbj/could_i_add_some_p40_cards_to_run_bigger_model/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c7wbj | false | null | t3_17c7wbj | /r/LocalLLaMA/comments/17c7wbj/could_i_add_some_p40_cards_to_run_bigger_model/ | false | false | self | 3 | null |
LLM speed on 4090 | 1 | These are the tk/sec I get with different LLMs on my 4090 card at half precision.
Phi-1.5 \~190
WizardCoder-3B-V1.0 \~45
WizardCoder-Python-7B-V1.0 \~140
WizardCoder-Python-13B-V1.0 \~2.5
WizardCoder-15B-V1.0 \~4
Driver Version: 537.34, CUDA Version: 12.2, transformers 4.33.1
Is the speed with 13B LLMs fine? I've seen people who get \~20 tk/sec on the same card. Is it possible that this is because of my drivers?
TIA.
| 2023-10-20T10:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/17c7v8e/llm_speed_on_4090/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c7v8e | false | null | t3_17c7v8e | /r/LocalLLaMA/comments/17c7v8e/llm_speed_on_4090/ | false | false | self | 1 | null |
Anyone running LLMs on Xeon E5-2699 v4 (22C/44T)? | 16 | I recently bought an HP ML350 for cheap; the highest CPU it supports is 2x Xeon E5-2699 v4. That is a total of 44 cores / 88 threads for about $400. Together with a near-unlimited amount of 2400 MHz DDR4, I was wondering what kinds of speeds I would be looking at for inference/training? Does anyone have experience with these CPUs?
I expect it to be very slow but maybe there is a use case for running full size Falcon 180B or training Llama 2 70B? Who knows, maybe there will be even bigger open source models in the future.
The server also has 4x PCIe x16. I put in one P40 for now as the most cost-effective option to be able to play with LLMs. My guess is that it will be better to fill up the server with more P40s before I start upgrading the CPU.
| 2023-10-20T09:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/17c790b/anyone_running_llms_on_xeon_e52699_v4_22t44c/ | OutlandishnessIll466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c790b | false | null | t3_17c790b | /r/LocalLLaMA/comments/17c790b/anyone_running_llms_on_xeon_e52699_v4_22t44c/ | false | false | self | 16 | null |
How to fine tune a 7b or bigger model on my own tweets | 1 | [removed] | 2023-10-20T09:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/17c6yyf/ow_to_fine_tune_a_7b_or_bigger_model_on_my_own/ | Glum-Regular8896 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c6yyf | false | null | t3_17c6yyf | /r/LocalLLaMA/comments/17c6yyf/ow_to_fine_tune_a_7b_or_bigger_model_on_my_own/ | false | false | self | 1 | null |
CMP 50HX | 6 | Has anyone tried these Nvidia mining cards like the CMP 50HX? I wonder if they work for LLMs, or if the performance is blocked for everything except mining. | 2023-10-20T08:45:05 | https://www.reddit.com/r/LocalLLaMA/comments/17c6ia9/cmp_50hx/ | Astronomer3007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c6ia9 | false | null | t3_17c6ia9 | /r/LocalLLaMA/comments/17c6ia9/cmp_50hx/ | false | false | self | 6 | null |
Best ML/AI benchmarks for choosing GPUs? | 0 | [removed] | 2023-10-20T08:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/17c60ua/best_mlai_benchmarks_for_choosing_gpus/ | digital_m0nk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c60ua | false | null | t3_17c60ua | /r/LocalLLaMA/comments/17c60ua/best_mlai_benchmarks_for_choosing_gpus/ | false | false | self | 0 | null |
𝗔𝗿𝗶𝘁𝗵𝗺𝗼-𝗠𝗶𝘀𝘁𝗿𝗮𝗹-𝟳𝗕 Model for Mathematical Reasoning | 57 | Hello Kind Folks,
I am excited to announce the release of the 𝗔𝗿𝗶𝘁𝗵𝗺𝗼-𝗠𝗶𝘀𝘁𝗿𝗮𝗹-𝟳𝗕 model, which outperforms existing 7B and 13B state-of-the-art mathematical reasoning models by a huge margin on both the GSM8K and MATH datasets.
🧠 Model is supercharged with mathematical reasoning capabilities (CoT) to answer a question and is also capable of writing a Python program (PoT).
🤗 Model weights and training dataset are both open source and are available on HuggingFace: [https://huggingface.co/akjindal53244/Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B)
We have also published scripts for data prep, reproducing results, benchmarks, and running local inference. Check out the 𝗔𝗿𝗶𝘁𝗵𝗺𝗼-𝗠𝗶𝘀𝘁𝗿𝗮𝗹-𝟳𝗕 model GitHub page: [https://github.com/akjindal53244/Arithmo-Mistral-7B](https://github.com/akjindal53244/Arithmo-Mistral-7B)
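For a quick start, here is a minimal `transformers` snippet (the exact prompt template is documented on the model card; the one below is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akjindal53244/Arithmo-Mistral-7B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is 15% of 240?\n\nAnswer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```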
PS: I plan to fine-tune base models to improve performance on vertical applications. However, I am constrained by compute resources. Kindly reach out if you would like to support small-scale compute needs. I would really appreciate it! :) | 2023-10-20T07:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/17c5hge/𝗔𝗿𝗶𝘁𝗵𝗺𝗼𝗠𝗶𝘀𝘁𝗿𝗮𝗹𝟳𝗕_model_for_mathematical_reasoning/ | UglyMonkey17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c5hge | false | null | t3_17c5hge | /r/LocalLLaMA/comments/17c5hge/𝗔𝗿𝗶𝘁𝗵𝗺𝗼𝗠𝗶𝘀𝘁𝗿𝗮𝗹𝟳𝗕_model_for_mathematical_reasoning/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': '8SJbK9LKclJjUQsNTTlvcod3tveQu7pIIJuXY6Aokno', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?width=108&crop=smart&auto=webp&s=ba742402ccd72d839cd2c7ecf803dee96b0b4e0b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?width=216&crop=smart&auto=webp&s=20e0362396a55f65ca8c7a721ed5a40c3b9a3691', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?width=320&crop=smart&auto=webp&s=d41275da533e48f88c9740fd486a1c3b4e6942ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?width=640&crop=smart&auto=webp&s=9820b63d8fa1bae59545ac7cc130cc7cfe6f33f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?width=960&crop=smart&auto=webp&s=80b4e2b8e0bdbd0a4a3e1f3c1ff6e11a5b86e543', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?width=1080&crop=smart&auto=webp&s=7f1fbec2be8ffd206309126b91e46fdb7b2830a0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Z69oZmD_eiBZFbX0EHE5N0znoNm0FT5ZrlaFKAyxYiA.jpg?auto=webp&s=c2f19651bb95e95b2dda6cea283b91d0a9e62885', 'width': 1200}, 'variants': {}}]} |
What do you guys think of the RK3588 for AI or as a Graphics Card driver? | 7 | Most of my x86 equipment are fairly incompletent. The standard hexa core Zen 3/2 Ryzens are too slow, the graphics cards only have 8GB of RAM. Then there is a 24GB Tesla GPU paired with a mini ITX with a Pentium G6400 because it's the only place it fits\*, alongside that liquid AIO.
Now Rockchip's RK3588 is quite rough around the edges in Linux. CPU works fully, GPU barely works via Panfrost, NPU has zero support, but it does have a massive RAM capacity of 32GB and can be massed onto clusterboards.
Now here is the thing, RK3588 seems to run a graphics card with full BAR space without any bugs, and well, the amdgpu driver is technically portable across architectures. Wonder if the cheap, large memory RK3399s are efficient for driving one slave GPU and possibly doing inference themselves with their 3 processors if support ever comes?
I was wondering what the status of RustiCL is for many of those transformers runners or quantized runners. If RustiCL ever becomes fully functional... well, it's a dream. | 2023-10-20T07:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/17c5cru/what_do_you_guys_think_of_the_rk3588_for_ai_or_as/ | A_Degenerate_Idiot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c5cru | false | null | t3_17c5cru | /r/LocalLLaMA/comments/17c5cru/what_do_you_guys_think_of_the_rk3588_for_ai_or_as/ | false | false | self | 7 | null |
Impact of memory bandwidth and core speed in llama.cpp | 6 | Almost 4 months ago a user posted this extensive benchmark about the effects of different ram speeds and core count/speed and cache for both prompt processing and text generation:
https://www.reddit.com/r/LocalLLaMA/comments/14ilo0t/extensive_llamacpp_benchmark_more_speed_on_cpu_7b/
* The TL;DR is that number and frequency of cores determine prompt processing speed, and cache and RAM speed determine text generation speed.
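As a back-of-the-envelope illustration of why RAM bandwidth bounds generation speed: each generated token has to stream roughly the whole set of model weights through memory once, so tokens/s ≤ bandwidth / model size. A quick sketch with illustrative theoretical peak bandwidths (verify the exact figures for your platform):

```python
# Upper bound on CPU text generation: every token reads ~all weights once.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_gb = 40  # ~70B model at 4-bit quantization (rough)

for name, bw in [("2ch DDR5-6000 (7950X3D)",         96),    # 2 * 48 GB/s
                 ("8ch DDR5-4800 (Threadripper PRO)", 307),   # 8 * 38.4 GB/s
                 ("12ch DDR5-4800 (EPYC Genoa)",      460)]:  # 12 * 38.4 GB/s
    print(f"{name}: <= {max_tokens_per_sec(bw, model_gb):.1f} tok/s")
```

Real-world numbers land well below these theoretical peaks, but the ratios between platforms should roughly hold.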
With the recent unveiling of the new Threadripper CPUs I'm wondering if someone has done more up-to-date benchmarking with the latest optimizations in llama.cpp. More precisely, testing an EPYC Genoa and its 12 channels of DDR5 RAM vs the consumer-level 7950X3D.
The new Threadrippers seem to take the best characteristics of the consumer-level CPUs, with higher clock speeds, while having almost EPYC-level bandwidth with their 8 channels of DDR5 and a lot of cache. I'm planning to buy a new CPU to play with LLMs and I would like to get the best performance for CPU-only execution while I save money again to buy some GPUs. So, would it be worth spending extra money on a Threadripper PRO, or should I take my chances and buy an EPYC Genoa on eBay?
The Threadripper has less bandwidth, but it can be overclocked and has considerably higher clocks. I also couldn't find any new tests on how many cores can actually be used with llama.cpp (the last tests from 4 months ago say that 14-15 cores was the maximum); in its current state, would it be able to fully use, let's say… 32 cores? Would the 12 channels of the EPYC make a lot of difference vs the 8 channels of the Threadripper (even if the RAM is slower on the EPYC)? | 2023-10-20T07:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/17c58h2/impact_of_memory_bandwidth_and_core_speed_in/ | newdoria88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c58h2 | false | null | t3_17c58h2 | /r/LocalLLaMA/comments/17c58h2/impact_of_memory_bandwidth_and_core_speed_in/ | false | false | self | 6 | null |
Keep Clam and Scary On | 0 | 2023-10-20T06:45:57 | https://www.reddit.com/gallery/17c4uhc | Juan-7689 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17c4uhc | false | null | t3_17c4uhc | /r/LocalLLaMA/comments/17c4uhc/keep_clam_and_scary_on/ | false | false | default | 0 | null | |
Useful metadata in choosing GPUs for generative AI? | 1 | [removed] | 2023-10-20T06:33:22 | https://www.reddit.com/r/LocalLLaMA/comments/17c4o3n/useful_metadata_in_choosing_gpus_for_generative_ai/ | digital_m0nk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c4o3n | false | null | t3_17c4o3n | /r/LocalLLaMA/comments/17c4o3n/useful_metadata_in_choosing_gpus_for_generative_ai/ | false | false | self | 1 | null |
My first model: CodeBooga-34B-v0.1. A WizardCoder + Phind-CodeLlama merge created with the same layer blending method used in MythoMax. It is the best coding model I have tried so far. | 94 | 2023-10-20T05:55:22 | https://huggingface.co/oobabooga/CodeBooga-34B-v0.1 | oobabooga4 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17c43sn | false | null | t3_17c43sn | /r/LocalLLaMA/comments/17c43sn/my_first_model_codebooga34bv01_a_wizardcoder/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'C5zVQVdpfyyRxIAymxc6uwkY2SBdMiNvIVsZ05u3mU8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?width=108&crop=smart&auto=webp&s=f70d2c3b1e0c4be6c168af304d084202c34c47de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?width=216&crop=smart&auto=webp&s=c4186a3850877a8e4221b9cb93e8f29bb9b0ca62', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?width=320&crop=smart&auto=webp&s=045333dcc68b102c492ed8ead6951b8a5b6274b4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?width=640&crop=smart&auto=webp&s=cc82c7a048e8784d76715e871f5c20ff4acb359e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?width=960&crop=smart&auto=webp&s=58bf2a14d047a99db3ab95679c9f5ba0e85eb043', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?width=1080&crop=smart&auto=webp&s=ec1e4ba578daa342a8d883f499254d3e2162292d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WPINcuH6uUWoxW_8qwSBx2EdsRbT21mbijZ69bLXnS4.jpg?auto=webp&s=2da1fafd17c1e08bed4bdf3ce5b30c79cab8f636', 'width': 1200}, 'variants': {}}]} | ||
AgentLM-70B: Agent-tuned open model comparable to GPT-3.5-Turbo on unseen agent tasks | 39 | 2023-10-20T05:16:05 | https://huggingface.co/papers/2310.12823 | ScaryMage | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17c3ieb | false | null | t3_17c3ieb | /r/LocalLLaMA/comments/17c3ieb/agentlm70b_agenttuned_open_model_comparable_to/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'cKCyxECRLVfRDbaKZNxB0kyaVeNLvR6Y2OTdY_zNWTQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?width=108&crop=smart&auto=webp&s=882bffeba8e5a9de05201cf429e05eed2c0a8843', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?width=216&crop=smart&auto=webp&s=ae2cf57ac3317c3c9b7651769f646234f53c9270', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?width=320&crop=smart&auto=webp&s=9cb0df710b2544a14ddd24e998e28492475d2ad5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?width=640&crop=smart&auto=webp&s=67e73066cd3ed1a654a5c319661b43bcb3748344', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?width=960&crop=smart&auto=webp&s=6347c5a7e4597609d83d1fc66a7e2d85cb337d92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?width=1080&crop=smart&auto=webp&s=dc1856cf19bf19190840fed8dbb6e594463e810a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EsM9vtjfwLo7QzA3J3izcIJT8ko_ERueTwoFS_l7EMI.jpg?auto=webp&s=b70a7dddc69015c79618c1185fb23696a906212e', 'width': 1200}, 'variants': {}}]} | ||
AgentLM-70B: Agent-tuned open model comparable to GPT-3.5-Turbo on unseen agent tasks | 1 | 2023-10-20T05:15:02 | https://x.com/_akhaliq/status/1715183455850410013?s=20 | ScaryMage | x.com | 1970-01-01T00:00:00 | 0 | {} | 17c3hqv | false | null | t3_17c3hqv | /r/LocalLLaMA/comments/17c3hqv/agentlm70b_agenttuned_open_model_comparable_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': '6m_CmVq0CCJkcURbXPWBNV3HNEL-Bj5gkm4woGrvH0I', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/1ZmOqSCm7jzrWzS4lG5WmWD4ilTOAxCk4a0y00dNyaQ.jpg?width=108&crop=smart&auto=webp&s=977293f179f2de5be73c8e19f4921c2f42ce548b', 'width': 108}, {'height': 286, 'url': 'https://external-preview.redd.it/1ZmOqSCm7jzrWzS4lG5WmWD4ilTOAxCk4a0y00dNyaQ.jpg?width=216&crop=smart&auto=webp&s=2aba40984184688377fe88c74d5f5304663c06ea', 'width': 216}, {'height': 424, 'url': 'https://external-preview.redd.it/1ZmOqSCm7jzrWzS4lG5WmWD4ilTOAxCk4a0y00dNyaQ.jpg?width=320&crop=smart&auto=webp&s=317fae1f3511b1490ff74cd03fb061f5cbae233a', 'width': 320}, {'height': 848, 'url': 'https://external-preview.redd.it/1ZmOqSCm7jzrWzS4lG5WmWD4ilTOAxCk4a0y00dNyaQ.jpg?width=640&crop=smart&auto=webp&s=a38ffe3f92f33b3d8167f96f06a7d2132831cf98', 'width': 640}], 'source': {'height': 1206, 'url': 'https://external-preview.redd.it/1ZmOqSCm7jzrWzS4lG5WmWD4ilTOAxCk4a0y00dNyaQ.jpg?auto=webp&s=5f63baf538a5a6cef25b6f17ea9646a9a5abc2ff', 'width': 910}, 'variants': {}}]} | ||
Updates to our open-source automation/agent builder | 4 | Hey /r/LocalLLaMA!
A couple weeks ago we [open-sourced](https://www.reddit.com/r/LocalLLaMA/comments/170k91i/opensourcing_a_simple_automationagent_workflow) a simple UI for automation/agent building
I'm coming back to give a sneak peek at some updates which will roll out soon!
First of which are integrations and batch processing. We're starting w/ Google Spreadsheets so that you can pull in data from a spreadsheet and use an LLM to analyze/summarize/extract data from each row with a simple workflow.
Here's an example of the entire process in action:
[Analyzing Yelp reviews for sentiment, themes, and extracting individual complaints\/praises](https://reddit.com/link/17c2rfk/video/jg0cmk9aaavb1/player)
If you can't watch the video, it's showing:
1. A Google spreadsheet with a bunch of Yelp reviews
2. The workflow we're using to pull data from the sheet and what data to extract
3. The Google spreadsheet w/ the extracted data saved into it
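Conceptually, the batch analysis is just a loop over rows. A hypothetical sketch of its shape (not our actual implementation; the endpoint, model field, and CSV layout are illustrative):

```python
import csv, json, requests

def analyze(review: str) -> dict:
    # Any OpenAI-compatible chat endpoint works here (local or hosted)
    resp = requests.post("http://localhost:5000/v1/chat/completions", json={
        "model": "local-model",  # many local servers ignore this field
        "messages": [{"role": "user", "content":
            "Return JSON with keys sentiment, themes, complaints, praises "
            "for this review:\n" + review}],
    })
    return json.loads(resp.json()["choices"][0]["message"]["content"])

with open("yelp_reviews.csv") as f, open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["review", "sentiment", "themes"])
    for row in csv.reader(f):
        review = row[0]
        result = analyze(review)
        writer.writerow([review, result["sentiment"], ", ".join(result["themes"])])
```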
**How you can help!**
* Are there any specific integrations you'd like to see?
* Trello / Jira / etc.? We're putting together a list to prioritize and tackle
* Do you already have analytical workflows (with or without LLMs) that you'd like to automate?
* e.g. analyzing user reviews, pulling data from product descriptions, generating lesson plans, etc.
* **We'd love to hear from you to help build these out as use cases**
* If you're interested in using a hosted version of this UI, sign up here: [https://airtable.com/appJPVyukiw6qjUy9/shrSefMiNQ21fDagr](https://airtable.com/appJPVyukiw6qjUy9/shrSefMiNQ21fDagr)
The project is still early stage, so feel free to reach out if you encounter any issues, and we appreciate your feedback!
Thanks in advance! 😊 | 2023-10-20T04:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/17c2rfk/updates_to_our_opensource_automationagent_builder/ | andyndino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c2rfk | false | null | t3_17c2rfk | /r/LocalLLaMA/comments/17c2rfk/updates_to_our_opensource_automationagent_builder/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'OJYpy1VhPAxOWPZavxFw01osiQpuMMvdTGk6qRQc0J0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=108&crop=smart&auto=webp&s=6e64289fd05277b891b0930c218c8cf55d417ac2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=216&crop=smart&auto=webp&s=3398f911dcb69968e5e1959a62840ebf01e67fd9', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=320&crop=smart&auto=webp&s=224e7aea66121660b2fb850903b4b03e7dc93e2a', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=640&crop=smart&auto=webp&s=7f60524bf43539ce84304592820a4a6aee7cb753', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=960&crop=smart&auto=webp&s=0be45849b5ef13d51f3701f7a115510243223680', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=1080&crop=smart&auto=webp&s=7c60048e47cd7b7520126dd4b1293df079e0ec4f', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?auto=webp&s=6b024e0215177d8c270d23fd0e67832d52e90054', 'width': 1200}, 'variants': {}}]} | |
End of text token in the llama.cpp prompt. | 2 | Hi,
Sorry if I am asking a stupid question, but I am looking for a way of including the stop token in my prompt and I am not sure how. I want to give the model a bunch of examples of what I want, but have it produce only one thing in that style and then stop. The problem is that sometimes it keeps going and produces another and another. If there is some way of doing that, I would be appreciative. | 2023-10-20T03:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/17c1u7m/end_of_text_token_in_the_llamacpp_prompt/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c1u7m | false | null | t3_17c1u7m | /r/LocalLLaMA/comments/17c1u7m/end_of_text_token_in_the_llamacpp_prompt/ | false | false | self | 2 | null |
Wikipedia-style community LLMs? | 1 | [removed] | 2023-10-20T02:48:44 | https://x.com/varun_mathur/status/1715041606825308222?t=_gjKNTcKEP105KuONyonfQ&s=34 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 17c0yew | false | null | t3_17c0yew | /r/LocalLLaMA/comments/17c0yew/wikipediastyle_community_llms/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'XVhyushd9qELGQkotCi5_mt5WgLtW5hNr7RSkcX5eSg', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?width=108&crop=smart&auto=webp&s=aac03e4981fcbe786045d2f6bda12b2c42ab5bf5', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?width=216&crop=smart&auto=webp&s=a0e322b7b231249eaa3549ee0ecfedc864eff974', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?width=320&crop=smart&auto=webp&s=9f0a5c14ba9f0a35938155933f62e614d253e00a', 'width': 320}, {'height': 461, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?width=640&crop=smart&auto=webp&s=a5338f7b7e6f336545475f530815781c65d6895d', 'width': 640}, {'height': 692, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?width=960&crop=smart&auto=webp&s=cf867a47b4ff98ee8ac800c804edd3fdd9e3d0ab', 'width': 960}, {'height': 779, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?width=1080&crop=smart&auto=webp&s=6bd87df43eabd7058484f8f0e2adda440992929f', 'width': 1080}], 'source': {'height': 1477, 'url': 'https://external-preview.redd.it/HiYR8MUWeCcCHp5c29p2ucMbLLle6mqM48ehPlwncaA.jpg?auto=webp&s=2d1b3615ee7c195bfd2612e93dca1dfeea3fdec4', 'width': 2047}, 'variants': {}}]} | |
Do datasets for fine-tuning LLaMA 2 and its derivatives need to be in instruct/response or question/answer format? | 4 | Noob question, as the title says. I want to fine-tune (or LoRA) LLaMA 2; all the guides I find format datasets into Q/A-like formats. How can I prepare datasets from content scraped from websites, or from a bunch of JSON examples that share the same pattern? Thanks! | 2023-10-20T02:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/17c0e0j/does_datasets_for_finetuning_llama_2_and_its/ | caphohotain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17c0e0j | false | null | t3_17c0e0j | /r/LocalLLaMA/comments/17c0e0j/does_datasets_for_finetuning_llama_2_and_its/ | false | false | self | 4 | null |
how to reset LLM during sequential inference requests | 2 | I'm using llama2 to sequentially process >1000 requests. I notice the vRAM increases over time from \~38GB to 48GB after just 10 requests, and somehow it feels like request processing also gets slower. I do need the full 4095 context length, so I cannot cut time there. How do I reset vRAM? I'm thinking about reloading the LLM at certain intervals, but this feels like a clumsy workaround and there should be a better solution? Thanks for any tips. | 2023-10-20T00:46:46 | https://www.reddit.com/r/LocalLLaMA/comments/17byiym/how_to_reset_llm_during_sequential_inference/ | peterwu00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17byiym | false | null | t3_17byiym | /r/LocalLLaMA/comments/17byiym/how_to_reset_llm_during_sequential_inference/ | false | false | self | 2 | null |
Adding batch analysis features to talos | 1 | Hey /r/LocalLLaMA
tl;dr:
* Check out our open-source project here: [https://github.com/spyglass-search/talos](https://github.com/spyglass-search/talos)
* We're soon releasing some integrations & batch analysis.
* And working on a hosted version for folks to play around in.
* If you're interested in playing around w/ our hosted version:
* [https://airtable.com/appJPVyukiw6qjUy9/shrSefMiNQ21fDagr](https://airtable.com/appJPVyukiw6qjUy9/shrSefMiNQ21fDagr)
[Two weeks ago we open-sourced](https://www.reddit.com/r/LocalLLaMA/comments/170k91i/opensourcing_a_simple_automationagent_workflow/) a UI for building simple LLM workflows.
Here's a sneak peek at something new we're hoping to release over the next couple days: being able to run LLM analysis over spreadsheets and push that data back.
Here's an example workflow looping over data in a Google Sheet, running the extraction/evaluation, and then saving that data back.
[Sentiment analyis & data extraction from a bunch of Yelp Reviews](https://reddit.com/link/17bxezv/video/pca6g8cmr8vb1/player)
Hoping to unlock a lot of fun use-cases with this 🙂
**How you can help!**
* Any specific workflows or tasks you'd like to automate
* Additional integrations (Notion, Trello, Jira, etc.) you'd like to see.
* If you already have workflows that need automation, we'd love to see how we can help
Apologies in advance if you run into any issues, this is still very much early stage. Feel free to reach out if you encounter any issues! | 2023-10-19T23:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/17bxezv/adding_batch_analysis_features_to_talos/ | andyndino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bxezv | false | null | t3_17bxezv | /r/LocalLLaMA/comments/17bxezv/adding_batch_analysis_features_to_talos/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'vNd0bs0coS_wgHdE0_gCOKIx2vQoCwObPuwRsw768Lw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?width=108&crop=smart&auto=webp&s=b162476a60571e92d16d438ffaca6c3d85be76dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?width=216&crop=smart&auto=webp&s=cc0717cf4bef6fd3c0dc486c114d6ac16b7d7e0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?width=320&crop=smart&auto=webp&s=1340dcda6374ae0a2b16c8027f1c648e37461a11', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?width=640&crop=smart&auto=webp&s=4d7fed063aea2356c2666d6bbe898720e88c7877', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?width=960&crop=smart&auto=webp&s=f92668bbe336701362d9b69250ab2876b0cd602d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?width=1080&crop=smart&auto=webp&s=d3be3cff8a49a329198d1051e24a68be7c1f4bef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UtUNRGolh4uReI0KvKSYYKE-TYWrbvGGNnha1EYoGLs.jpg?auto=webp&s=5ab244314e0bfd0ed0cc89815c882d320f6b3c39', 'width': 1200}, 'variants': {}}]} | |
please help! should i get dual nvidia rtx titans or one geforce rtx 4090? | 1 | [removed] | 2023-10-19T23:48:06 | https://www.reddit.com/r/LocalLLaMA/comments/17bxb2l/please_help_should_i_get_dual_nvidia_rtx_titans/ | vemedia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bxb2l | false | null | t3_17bxb2l | /r/LocalLLaMA/comments/17bxb2l/please_help_should_i_get_dual_nvidia_rtx_titans/ | false | false | self | 1 | null |
Building a linux desktop - recos | 2 | Hello fellow LLMers,
Looking to spend 2k USD on a box. Will build it myself and source the components. What's the best bang for my buck? Looking to experiment with various models (7b and up), do dev work, etc.
Was assuming Ubuntu LTS is the target OS with the most support.
Haven't spec'd a build in a while. What's the best GPU at my price range?
Thanks in advance! | 2023-10-19T23:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/17bx8ql/building_a_linux_desktop_recos/ | stupidadult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bx8ql | false | null | t3_17bx8ql | /r/LocalLLaMA/comments/17bx8ql/building_a_linux_desktop_recos/ | false | false | self | 2 | null |
New (and better, especially smaller ones) EXL2 quants of Phind-CodeLlama-34B-v2 | 12 | I did EXL2 [quants](https://huggingface.co/latimar/Phind-Codellama-34B-v2-exl2) of [Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) model about a month ago. I used wikitext as the calibration dataset and pretty much default settings of the exllamav2 convert script for all the quants except one, and did not measure HumanEval score at that moment.
It was suggested in the community comments that using a coding specific dataset for calibration probably would yield better results. So I've just re-did the [quants](https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2) and it seems it really does matter.
All relevant info is in the model card, highlights(based solely on HumanEval score):
* 2.55 quant should be **much** better than the old one -- 40.0 vs 0.8 score
* 2.8 quant is roughly comparable to WizardCoder-Python-13B full weights
* 3.0 quant is slightly better than WizardCoder-Python-13B full weights | 2023-10-19T22:03:42 | https://www.reddit.com/r/LocalLLaMA/comments/17bv136/new_and_better_especially_smaller_ones_exl2/ | epicfilemcnulty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bv136 | false | null | t3_17bv136 | /r/LocalLLaMA/comments/17bv136/new_and_better_especially_smaller_ones_exl2/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'iBvX7LAU9ZVJgcHyHuKsMTRm-3nEy4gMXjOyaTcz5BU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?width=108&crop=smart&auto=webp&s=10e867104a63839310fe446329eca1e332930961', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?width=216&crop=smart&auto=webp&s=c31ed75f6198f5ee2ca18cb4b47cc71e2e95dce5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?width=320&crop=smart&auto=webp&s=bb0943bc883f5378d66704a3ce293e6299587704', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?width=640&crop=smart&auto=webp&s=a3da9d3a2e55614b8130ab8fbfbccb933a83d782', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?width=960&crop=smart&auto=webp&s=265cdfc23dd20c2d423143b69c4d3c97b20f4b6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?width=1080&crop=smart&auto=webp&s=55e41e51876b8361e9afadaf254135e5c76478ff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_xIoJzPixhu1vqmtp4-FSrH3Cliiaga1eHiVtkkZ9rs.jpg?auto=webp&s=5fa91b33587efa9ea53a70cf528336f022de8a58', 'width': 1200}, 'variants': {}}]} |
TensorRT-LLM publicly available | 10 | TensorRT-LLM just got publicly released:
https://developer.nvidia.com/blog/optimizing-inference-on-llms-with-tensorrt-llm-now-publicly-available/ | 2023-10-19T21:54:25 | https://www.reddit.com/r/LocalLLaMA/comments/17but1w/tensorrtllm_publicly_available/ | whata_wonderful_day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17but1w | false | null | t3_17but1w | /r/LocalLLaMA/comments/17but1w/tensorrtllm_publicly_available/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'zaC6OP4p8hGjwvnd20lCPMjSLJuZpuAdT24tQSn8Pys', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=108&crop=smart&auto=webp&s=1796a1001eceffacb59df2000d7962e93cacd273', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=216&crop=smart&auto=webp&s=6727382cbf5bb32f6792cd5efcd2f0f14289d6a2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=320&crop=smart&auto=webp&s=641197d9c1d1cc20cd29ffd676ecbe690fa0cccc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=640&crop=smart&auto=webp&s=49a0cd8ad7a26d6b47bdbf904e2b77b46b810c25', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=960&crop=smart&auto=webp&s=19e0e3797aa631eb180682e28b8109d7d9a667a6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=1080&crop=smart&auto=webp&s=21881e071bf4ac76473e729de00b594fdc5879c8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?auto=webp&s=04aa4d84eb45d18d3d18235a24926884a05698a3', 'width': 1920}, 'variants': {}}]} |
[Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget | 37 | Machine Learning Compilation (MLC) now supports compiling LLMs to multiple GPUs.
It runs 4-bit quantized Llama2-70B at:
- 34.5 tok/sec on two NVIDIA RTX 4090s at $3.2k
- 29.9 tok/sec on two AMD Radeon 7900 XTXs at $2k
It also scales well up to 8 A10G/A100 GPUs in our experiments. Details:
- Blog post: https://blog.mlc.ai/2023/10/19/Scalable-Language-Model-Inference-on-Multiple-NVDIA-AMD-GPUs
- Project: https://github.com/mlc-ai/mlc-llm | 2023-10-19T20:47:48 | https://www.reddit.com/r/LocalLLaMA/comments/17bt7gl/project_scaling_llama2_70b_with_multi_nvidia_and/ | yzgysjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bt7gl | false | null | t3_17bt7gl | /r/LocalLLaMA/comments/17bt7gl/project_scaling_llama2_70b_with_multi_nvidia_and/ | false | false | self | 37 | null |
What are the best open LLMs trained with medical language? | 6 | I am working on a research project that involves analysing medical text (patient records) to identify key events. Initially I was planning to use the ChatGPT API and then compare its performance with open-source LLMs. Does anyone have experience with LLMs specifically trained for the medical field?
Also, I've just come across Amazon Comprehend Medical, which seems to be specifically designed for what I need. Has anyone tried it?
| 2023-10-19T20:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/17bsvfx/what_are_the_best_open_llms_trained_with_medical/ | kiukamba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bsvfx | false | null | t3_17bsvfx | /r/LocalLLaMA/comments/17bsvfx/what_are_the_best_open_llms_trained_with_medical/ | false | false | self | 6 | null |
Optimizing Inference on Large Language Models with NVIDIA TensorRT-LLM, Now Publicly Available | 116 | 2023-10-19T20:13:23 | https://developer.nvidia.com/blog/optimizing-inference-on-llms-with-tensorrt-llm-now-publicly-available/ | Scary-Knowledgable | developer.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 17bsepd | false | null | t3_17bsepd | /r/LocalLLaMA/comments/17bsepd/optimizing_inference_on_large_language_models/ | false | false | 116 | {'enabled': False, 'images': [{'id': 'zaC6OP4p8hGjwvnd20lCPMjSLJuZpuAdT24tQSn8Pys', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=108&crop=smart&auto=webp&s=1796a1001eceffacb59df2000d7962e93cacd273', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=216&crop=smart&auto=webp&s=6727382cbf5bb32f6792cd5efcd2f0f14289d6a2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=320&crop=smart&auto=webp&s=641197d9c1d1cc20cd29ffd676ecbe690fa0cccc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=640&crop=smart&auto=webp&s=49a0cd8ad7a26d6b47bdbf904e2b77b46b810c25', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=960&crop=smart&auto=webp&s=19e0e3797aa631eb180682e28b8109d7d9a667a6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?width=1080&crop=smart&auto=webp&s=21881e071bf4ac76473e729de00b594fdc5879c8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/nxMsEGpSAQswCHIrm-K_brqEdB_EbV6n-ks4Cy4wQCs.jpg?auto=webp&s=04aa4d84eb45d18d3d18235a24926884a05698a3', 'width': 1920}, 'variants': {}}]} | ||
Any projects for merging gguf models? | 5 | I've searched GitHub, but I'm not sure where else to look. GitHub has a SLERP project for combining, I think, HF models, but I don't see anything for GGUF merges.
I assume some would want to know "why" I'd want to do this before answering, so I'll go ahead and say it's for fun/hobby. No real point other than to inspect how the resulting model behaves differently.
How far down the rabbit hole can an RTX 4080 laptop with 32 GB RAM take me? | 11 | I’m planning on getting a new laptop for work and light gaming, maybe a legion pro i7, and was wondering if it can be a good enough option to start learning more about LLaMA with.
I know a desktop is a way better option, and I am actually planning a build too, but I would like the option to do some learning on the go without relying on the cloud.
What models have you guys been able to run on laptops?
Should I just start on a laptop or just better wait to have my pc? | 2023-10-19T20:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/17bs5fk/how_far_down_the_rabbit_hole_can_an_rtx_4080/ | morkrets | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bs5fk | false | null | t3_17bs5fk | /r/LocalLLaMA/comments/17bs5fk/how_far_down_the_rabbit_hole_can_an_rtx_4080/ | false | false | self | 11 | null |
Apple's Neural Core Enabled distilbert model. For Apple users looking for small and fast, not large and new · Hugging Face | 6 | 2023-10-19T19:44:38 | https://huggingface.co/apple/ane-distilbert-base-uncased-finetuned-sst-2-english | jayfehr | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17brq76 | false | null | t3_17brq76 | /r/LocalLLaMA/comments/17brq76/apples_neural_core_enabled_distilbert_model_for/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'KzRH-P9K036T0nqHWQzacGsfLlnuxCPjh1QVKylunms', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?width=108&crop=smart&auto=webp&s=f28dc50836b2823028d8f2b4a69e7780226350be', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?width=216&crop=smart&auto=webp&s=37e3015f57c07fd723ad2d97baa8be140d9b86e2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?width=320&crop=smart&auto=webp&s=3c4ba29bbced25887d114ab2a92477cc332463cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?width=640&crop=smart&auto=webp&s=12b82ab36b7a76d3a27d13df533809e8f97b53c4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?width=960&crop=smart&auto=webp&s=52fb27e0bc1706e51a6aef41f55007b4580798a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?width=1080&crop=smart&auto=webp&s=bde4e1e0913e22e8320b1e0c5d2c23a2362dbd29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UXsY6gxY55EpQ7NvNx44IXQRZgzK28R6TdHpNXNYRYU.jpg?auto=webp&s=7eb118aeca9df9d69e9545aed64b8b6f19eeebed', 'width': 1200}, 'variants': {}}]} | ||
Is there a list of popular models/LoRAs somewhere? | 2 | Other than the outdated list in the wiki here, that is. For image generation I use Civitai, but I haven't found anything along those lines for text generation.
NEFTune: Noisy Embeddings Improve Instruction Finetuning | 21 | [https://arxiv.org/abs/2310.05914](https://arxiv.org/abs/2310.05914)
This new technique claims to improve training efficiency by reducing overfitting, which it does by adding noise to the embeddings during training.
Hugging Face has added it to their TRL library, so you can enable it with one additional argument.
[https://huggingface.co/docs/trl/main/en/sft\_trainer#enhance-models-performances-using-neftune](https://huggingface.co/docs/trl/main/en/sft_trainer#enhance-models-performances-using-neftune)
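For illustration, here is a minimal sketch mirroring the TRL docs (the dataset and model below are placeholders, not recommendations):

```python
# Enabling NEFTune in TRL is one extra argument on SFTTrainer.
# Assumes trl and datasets are installed; model/dataset are placeholders.
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "facebook/opt-350m",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    neftune_noise_alpha=5,  # the noise scale; the paper sweeps roughly 5-15
)
trainer.train()
```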
A good video breakdown:
[https://www.youtube.com/watch?v=nHBZUpoRpd0](https://www.youtube.com/watch?v=nHBZUpoRpd0) | 2023-10-19T19:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/17bquoh/neftune_noisy_embeddings_improve_instruction/ | Unstable_Llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bquoh | false | null | t3_17bquoh | /r/LocalLLaMA/comments/17bquoh/neftune_noisy_embeddings_improve_instruction/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Xwin updated their V0.2 versions, the performance should not be ignored | 1 | [removed] | 2023-10-19T18:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/17bpj7l/xwin_updated_their_v02_versions_the_performance/ | Eigeen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bpj7l | false | null | t3_17bpj7l | /r/LocalLLaMA/comments/17bpj7l/xwin_updated_their_v02_versions_the_performance/ | false | false | self | 1 | null |
The Killer Use Case for LLMs Is Summarization | 27 | 2023-10-19T18:08:58 | https://www.sebastianmellen.com/post/2023/the-killer-use-case-for-llms-is-summarization/ | quellik | sebastianmellen.com | 1970-01-01T00:00:00 | 0 | {} | 17bpi2b | false | null | t3_17bpi2b | /r/LocalLLaMA/comments/17bpi2b/the_killer_use_case_for_llms_is_summarization/ | false | false | default | 27 | null | |
Xwin updated their V0.2 versions, incredible performance | 1 | Especially Xwin-LM-13B-V0.2 and Xwin-MLewd-13B-V0.2.
After a brief trial of Xwin-MLewd-13B-V0.2 by me and my friends, these two models are excellent at RP. In my humble opinion, they managed to distill 70B quality down to 13B, faster than I expected.
Still, I don't think it's as advanced as the 70B in terms of semantic understanding.
I hope someone will test the Xwin V0.2 series models in detail, their work is really impressive.
My ExLlamaV2 quantization (original version): [Eigeen/Xwin-LM-13B-V0.2-exl2 · Hugging Face](https://huggingface.co/Eigeen/Xwin-LM-13B-V0.2-exl2)
MLewd: [R136a1/Xwin-MLewd-13B-V0.2-exl2 · Hugging Face](https://huggingface.co/R136a1/Xwin-MLewd-13B-V0.2-exl2) | 2023-10-19T17:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/17bp81h/xwin_updated_their_v02_versions_incredible/ | Eigeen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bp81h | false | null | t3_17bp81h | /r/LocalLLaMA/comments/17bp81h/xwin_updated_their_v02_versions_incredible/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Kn5YXg9hgqLBfmfJSs9RVty6nQJtvZwCGuZEemIIHRA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?width=108&crop=smart&auto=webp&s=761a0e33373a2e9f32e340c8142a5f700ee29ded', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?width=216&crop=smart&auto=webp&s=93ded39f9a61959b4b89de79f479b61c4b63216c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?width=320&crop=smart&auto=webp&s=bea1f493bd457109cc559a251d420556a88924b6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?width=640&crop=smart&auto=webp&s=4feaacbe1acf487bb72740ff382e1bf4712f626d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?width=960&crop=smart&auto=webp&s=5386063056182d21853ad9f249af3db2b1c9d3d4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?width=1080&crop=smart&auto=webp&s=ec716b602466de232a555e270ea9efad0cd54a42', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/v0evTUFVxRCzYB2N9PH_CCwOuAA10h9so0NeI5u5088.jpg?auto=webp&s=9a993776a7d92a6cb1d50764db4e4caf645db9ef', 'width': 1200}, 'variants': {}}]} |
Are the programming models useful for general command line stuff outside of direct programming? | 6 | I'm not much of a programmer, but I do a lot of command line stuff: using sed, ffmpeg, find, regex, and piping random bash/command line stuff together. Are these programming models useful for general command line tooling, or just programming specifically? If not, are there CLI tooling models out there that can fill this role?
Question: GPT Model for monitoring | 1 | Hi everyone,
I'm just beginning my journey with GPT and LLMs. My question is: is it possible to develop monitoring tools on top of LocalAI/local LLaMA, where the model analyses data from tools like Splunk and Grafana by leveraging function calling? Any thoughts?
If I can't afford to buy the necessary hardware to run a high performance model, is there a service that I can use on a monthly basis to host it for me? | 56 | Any recommendations? | 2023-10-19T16:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/17bnfss/if_i_cant_afford_to_buy_the_necessary_hardware_to/ | OKArchon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bnfss | false | null | t3_17bnfss | /r/LocalLLaMA/comments/17bnfss/if_i_cant_afford_to_buy_the_necessary_hardware_to/ | false | false | self | 56 | null |
Inferencing frameworks to host model across servers | 1 | Hi everyone,
I want to host multiple models of Llama 2 scale on-premise on one of our customers' infrastructure. One constraint they have is that all their servers are single-GPU instances with NVLink disabled, and the models should run without quantization. For development purposes I am working with EC2 instances and using vLLM to run my models. I am able to host my models on a single GPU, but I want to dynamically allocate GPUs to the models across the servers. I am already testing RayLLM, but I am blocked on that and have already reported the issue to the Ray team. What are some other open-source frameworks that enable serving models across servers?
What is the best OSS model for long-context summarization right now? | 15 | Would love to have a 128K model, but 16K or 32K can work too.
Starting a community | 1 | [removed] | 2023-10-19T15:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/17bmied/starting_a_community/ | MightyIndus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bmied | false | null | t3_17bmied | /r/LocalLLaMA/comments/17bmied/starting_a_community/ | false | false | self | 1 | null |
Fine-tuning pipeline for LLMs | 2 | Hi, I have been tasked with fine-tuning LLMs and multi-modal transformer models like BLIP-2, Stable Diffusion, etc. I have 8 V100s and 1 A100 GPU. Ideally, I would want the fine-tuning pipeline to work on both GPU types.
I want to integrate PEFT, LoRA, and QLoRA, since we are trying to quantize models to 8-bit (and if 8 doesn't work, then 16).
I would also like to include sparsification of the models in the pipeline.
All of this has been overwhelming me, so I would appreciate hearing whether you've done the same before and what your thoughts are.
Thank you so much. | 2023-10-19T15:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/17bm90o/finetuning_pipeline_for_llms/ | umamiwasabi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bm90o | false | null | t3_17bm90o | /r/LocalLLaMA/comments/17bm90o/finetuning_pipeline_for_llms/ | false | false | self | 2 | null |
Tips for LLM Training on Video Transcripts? | 4 | Hi everyone,
I'm a newbie in the world of language model training, gearing up for a project that involves 20-40 hours' worth of video transcripts. My end game is to pull knowledge out of the content, not to replicate the speakers' personalities.
They're currently in this txt format:
>Speaker 1:
>
>\[Text\]
>
>Speaker 2:
>
>\[Text\]
Is reformatting the text a must-do before diving into training?
Any pointers on how to prep my data or things I should anticipate?
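For reference, here is how I could parse the layout above into (speaker, text) turns before deciding on a training format (a rough sketch on my part; the file name is a placeholder):

```python
# Rough sketch: split "Speaker N:" headers into (speaker, text) turns.
import re

def parse_transcript(raw: str):
    parts = re.split(r"^(Speaker \d+):\s*$", raw, flags=re.MULTILINE)
    return [(parts[i], parts[i + 1].strip())
            for i in range(1, len(parts) - 1, 2)]

with open("transcript.txt") as f:
    for speaker, text in parse_transcript(f.read()):
        print(speaker, "->", text[:60])
```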
Cheers! | 2023-10-19T15:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/17blkhv/tips_for_llm_training_on_video_transcripts/ | blue_hunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17blkhv | false | null | t3_17blkhv | /r/LocalLLaMA/comments/17blkhv/tips_for_llm_training_on_video_transcripts/ | false | false | self | 4 | null |
Loss function for Chat models | 1 | When fine-tuning chat models like GPT-3.5 Turbo, how is the loss function constructed under the hood? If it is the same as for the base model, won't it undo (or harm) the instruction tuning?
Also, if it is the same, can we also fine-tune chat models on unstructured data?
Official Hugging Face Chat Templates | 26 | Hey all! Matt from Hugging Face here - we've seen a few posts on this subreddit where people have been having difficulty with the mess of different chat formats that exist. We've recently added a new feature to our tokenizers to handle this: [Chat Templates](https://huggingface.co/docs/transformers/main/chat_templating)**.**
**What's the idea?**
Our goal with chat templates is that **tokenizers should handle chat formatting just as easily as they handle tokenization.** That means you can just load a tokenizer, and use the new `apply_chat_template` method to convert a list of messages into a string or token array:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

messages = [
    {"role": "system", "content": "System message!"},
    {"role": "user", "content": "User message!"}
]

# Renders the chat with the model's own template and returns token IDs,
# ready to pass straight to generate().
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
)
model.generate(tokenized_chat)
```
**Note that some of these features are very new, so please install from `main` with `pip install git+https://github.com/huggingface/transformers.git` before trying them until they can make it into a release!**
The main argument to be aware of here is `add_generation_prompt`. When this is set, then extra tokens are added at the end to prompt a bot response. For example, with ChatML format, these extra tokens would be something like `<|im_start|>assistant\n` . If you're going to pass the output to `model.generate()` or your favourite quantized inference tool, then you'll probably want to set this.
You can also set `tokenize=False` - this will return a formatted string instead of a token ID array, which can be useful for debugging, or for downstream tools that expect a string rather than pre-tokenized text.
**How does the tokenizer know what format to use?**
This is set by a tokenizer config field: `tokenizer.chat_template`. This field contains a Jinja template in a single string. For the specifics of how the template operates, please check the [technical documentation](https://huggingface.co/docs/transformers/main/chat_templating) on chat templates, but the tl;dr is that the template loops over the message list and renders it as a single string with whatever formatting and control tokens that the model expects.
The good news is that only the model creator (or some other helpful person) needs to set this template once and push it to the Hub. After that, `apply_chat_template` should Just Work™ for everyone.
**Do you really need Jinja for this?**
We tried a solution that just specified token prefixes and suffixes for different message classes. It became extremely unwieldy, and every model that did something weird (like LLaMA embedding the system message in the first user message) required special handling. In the end, templates were the only system flexible enough to handle all the different formats that exist out there, aside from just letting people embed arbitrary Python formatting code in their repos!
The flexibility of templating also means that future models aren't limited to the usual three roles. You can do a lot of stuff with these, including stuff that we haven't even thought of yet!
**I love this!**
Great! Tweet something positive about it and tag me [@carrigmat](https://twitter.com/carrigmat) or [@huggingface](https://twitter.com/huggingface) so I can hustle for more pay.
**I hate this!**
Sorry! If you have any ideas on how to improve this, please let us know, either here or by filing an issue/PR to `transformers`. Also, do not tweet under any circumstances.
**I don't know if I hate this yet. Can I ask some questions?**
Sure! We'll try to respond to any queries or issues here. | 2023-10-19T14:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/17bksmc/official_hugging_face_chat_templates/ | RocketknightHF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bksmc | false | null | t3_17bksmc | /r/LocalLLaMA/comments/17bksmc/official_hugging_face_chat_templates/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]} |
Does EasyLM work for Llama-2? | 3 | Hello
Does EasyLM also support fine-tuning Llama-2 7B on TPUs? I could only find support for Llama v1, but not v2. In particular, does the following script also work for v2? [https://github.com/young-geng/EasyLM/blob/main/EasyLM/models/llama/convert\_easylm\_to\_hf.py](https://github.com/young-geng/EasyLM/blob/main/EasyLM/models/llama/convert_easylm_to_hf.py)
Perplexity testing a gaggle of large models using proxy roleplay logs. | 18 | I tested a bunch of large models against logs from an OpenAI proxy that was used for roleplay, so they're obviously skewed towards ERP and characters. Most perplexity tests are run on datasets like wikitext or PTB_NEW, which don't resemble anything you would actually use the model for and may have been trained into the models in some way. These logs are definitely not in anyone else's tunes yet.
So far, what I'm seeing seems to match somewhat with personal experience while actually using the models. Anything wrong with this approach? Obviously, off the bat, the text-gen webui doesn't use instruction templates or account for things like instruction following when doing a perplexity test. It also won't catch repetition issues, "as an AI language model" refusals, or cases where the model declined certain requests. But it does test how "surprised" the model is by logs that are very similar to how roleplayers will actually use it.
I also used rather low context for the tests, but I assume that while the numbers might shift, the overall spread will stay the same. The same settings were used for all tests; they're just meant to run quickly, not take hours. They were done with offloaded GGUF and exllama_v2.
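For reference, a rough sketch of the kind of sliding-window perplexity loop involved (not my exact setup; the model, log file, and window/stride sizes are placeholders, and it skips the overlap masking a rigorous run would use):

```python
# Rough sliding-window perplexity over a chat-log text file.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

ids = tok(open("proxy_logs.txt").read(), return_tensors="pt").input_ids

nlls, stride, ctx = [], 512, 2048
for end in range(stride, ids.size(1), stride):
    window = ids[:, max(0, end - ctx):end].to(model.device)
    with torch.no_grad():
        nlls.append(model(window, labels=window).loss)

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```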
Results:
https://imgur.com/a/GSZbzCx | 2023-10-19T14:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/17bkc3s/perplexity_testing_a_gaggle_of_large_models_using/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bkc3s | false | null | t3_17bkc3s | /r/LocalLLaMA/comments/17bkc3s/perplexity_testing_a_gaggle_of_large_models_using/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'WgDeb06ih2AN10S5JTrtb-GMftbxn15IYQbQk4BQR_w', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?width=108&crop=smart&auto=webp&s=b46cc394c28c2c58c909008395422ab878366f35', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?width=216&crop=smart&auto=webp&s=23ec7ee3d71ab809c25e9e56417d6c85c226819f', 'width': 216}, {'height': 203, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?width=320&crop=smart&auto=webp&s=921effaa5400043af4772f3897ccf07fad9072a6', 'width': 320}, {'height': 406, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?width=640&crop=smart&auto=webp&s=36b6b2320396a5b62e5441b99255577be1937d62', 'width': 640}, {'height': 609, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?width=960&crop=smart&auto=webp&s=1844a884ca1f2c71c1a2549f71ef7779d941c7cd', 'width': 960}, {'height': 685, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?width=1080&crop=smart&auto=webp&s=267228de1e00764983db624583d7e5d5659fdfb6', 'width': 1080}], 'source': {'height': 919, 'url': 'https://external-preview.redd.it/-IqN-D754aUWiV9cat8ugM_XcBS1UW5DlZkSmsxGiEk.jpg?auto=webp&s=47455b8109e75c0ed2c618f059d8f45b46c6dafb', 'width': 1448}, 'variants': {}}]} |
NovelAI-like offline experience? | 1 | [removed] | 2023-10-19T14:21:42 | https://www.reddit.com/r/LocalLLaMA/comments/17bkc01/novelailike_offline_experience/ | Zerarch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bkc01 | false | null | t3_17bkc01 | /r/LocalLLaMA/comments/17bkc01/novelailike_offline_experience/ | false | false | self | 1 | null |
Which 70B is better for story writing? | 7 | I tried xwin and other models, but for now I settled on FashionGPT | 2023-10-19T14:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/17bk5on/which_70b_is_better_for_story_writing/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bk5on | false | null | t3_17bk5on | /r/LocalLLaMA/comments/17bk5on/which_70b_is_better_for_story_writing/ | false | false | self | 7 | null |
Need help with running models on custom hardware | 1 | [removed] | 2023-10-19T13:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/17bjnu9/need_help_with_running_models_on_custom_hardware/ | Specialist-Ad2870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bjnu9 | false | null | t3_17bjnu9 | /r/LocalLLaMA/comments/17bjnu9/need_help_with_running_models_on_custom_hardware/ | false | false | self | 1 | null |
Kaggle upgraded their free tier to T4 with 29GB and 4 CPU cores | 98 | Limited to 30 hours/week but still interesting for the GPU poor. | 2023-10-19T12:25:28 | https://www.kaggle.com/discussions/product-feedback/448251 | krazzmann | kaggle.com | 1970-01-01T00:00:00 | 0 | {} | 17bhwtj | false | null | t3_17bhwtj | /r/LocalLLaMA/comments/17bhwtj/kaggle_upgraded_their_free_tier_to_t4_with_29gb/ | false | false | default | 98 | null |
How do we know that modeling.py file changes are followed in the rust/cpp implementation | 3 | Is there any verification in the pipeline that ensures that the uploaded gguf files are equivalent (in terms of functionality) to the raw models?
I notice that models get updated (improved) in certain areas of the Python code (**modeling_*.py**), but I don't see corresponding changes in the repos of other languages (C/Rust) in their GitHub commit history. It may not affect the output, but there are still performance tweaks in there (attention, longer context, etc.)
The published paper promises things that may not be implemented **in the day-one release**. After a few days, every tool will have a somewhat working version of the model.
I cannot compare 1,000 lines of Python code to another tool's implementation, and there is no channel for following such changes.
Should I use the original releases (through the transformers library), or is no one else worried about this?
(There is a positive side to this: some improvements from new models get backported to the older LLaMA models.)
I talk about all the new models that came out after LLama 2, not just the one in the screenshot.
https://preview.redd.it/4k0d764o45vb1.png?width=790&format=png&auto=webp&s=7d02d7334deca61260fa24aa973ed23fbd22dea0
TL;DR What do we lose if we don't use the published python source? | 2023-10-19T11:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/17bgwnt/how_do_we_know_that_modelingpy_file_changes_are/ | justynasty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bgwnt | false | null | t3_17bgwnt | /r/LocalLLaMA/comments/17bgwnt/how_do_we_know_that_modelingpy_file_changes_are/ | false | false | 3 | null | |
How does the CPU affect model inference? | 1 | Just curious whether, and how, the CPU affects the inference speed of an LLM under the same GPU conditions in llama.cpp (gpu_layers=128).
My device: Intel 13600KF, RTX 3090, DDR4-4400 32GB
Will the token generation speed increase if I switch to a stronger CPU? | 2023-10-19T11:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/17bgofu/how_does_the_cpu_affect_model_inference/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bgofu | false | null | t3_17bgofu | /r/LocalLLaMA/comments/17bgofu/how_does_the_cpu_affect_model_inference/ | false | false | self | 1 | null |
Which LLM (I installed meta-llama/Llama-2-7b-hf) should I install to extract features from the text description of a product? What parameter size, and how do I go about fine-tuning? I am new to this field. | 4 | For an input of this type:
"Puritans Pride Vitamin C-500 Mg With Rose Hips Time Release Caplets, 250 Count We are the manufacturer and the only authorized seller of this product. This product has been made with the highest quality ingredients available. Over 40 years in business and 19 million customers served."
I want an output like:
```json
{
  "Name": "Puritans Pride Vitamin C-500 Mg With Rose Hips Time Release Caplets",
  "Pack Size": "250 Count",
  "Ingredients": ["Vitamin C"],
  "Brand": "Puritans Pride",
  "Use Case": "Supports Energy Metabolism and Nervous System Health",
  "Product Type": "Caplets"
}
```
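For reference, here is a rough sketch of how I imagine prompting a local model for this (the model name, prompt wording, and JSON parsing step are assumptions on my part, not a tested recipe):

```python
# Rough sketch: prompt a local Llama for JSON-only field extraction.
# Assumes transformers is installed; the checkpoint is a placeholder.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-hf")

description = "Puritans Pride Vitamin C-500 Mg With Rose Hips ... 250 Count ..."
prompt = (
    "Extract these fields from the product description and reply with JSON "
    "only: Name, Pack Size, Ingredients, Brand, Use Case, Product Type.\n\n"
    f"Description: {description}\nJSON:"
)

raw = generator(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"]
completion = raw[len(prompt):].strip()
data = json.loads(completion)  # fine-tuning is what makes this parse reliably
print(data)
```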
​ | 2023-10-19T10:14:04 | https://www.reddit.com/r/LocalLLaMA/comments/17bfqnu/which_llm_i_installed_metallamallama27bhf_should/ | Final_Ad_6167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bfqnu | false | null | t3_17bfqnu | /r/LocalLLaMA/comments/17bfqnu/which_llm_i_installed_metallamallama27bhf_should/ | false | false | self | 4 | null |
Biased Behavior in the "Orca Mini" LLM, Based on Facebook's Llama2 | 0 | We need to check Large Language Models (LLMs) for biases. I've setup a website to generate AI comments on a news, I'm not talking about conspiracy theories, but during my research on news analysis, I found something interesting. The "Orca mini" model, which is based on Llama2 developed by Facebook, seems to favor Bill Gates' views, no matter what you ask it. However, when I used a simple prompt hack like "DAN," it worked as expected without bias. | 2023-10-19T09:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/17bff7t/biased_behavior_in_the_orca_mini_llm_based_on/ | Good-Juggernaut-740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bff7t | false | null | t3_17bff7t | /r/LocalLLaMA/comments/17bff7t/biased_behavior_in_the_orca_mini_llm_based_on/ | false | false | self | 0 | null |
Poll on available RAM for LLM in various devices/GPUs/configurations | 0 | Hi all, can we have a poll on how much RAM is available to run LLMs on different devices? If you provide the data in a comment, I will copy it into the following table.
|**MODEL**|**OS**|**SLOT**|**DISPLAY**|**TOTAL RAM**|**FREE RAM**|
|:-|:-|:-|:-|:-|:-|
|Apple M1/M2 Pro|MacOS|N/A||32 GB||
|Apple M1/M2 Max|MacOS|N/A||64 GB||
|Apple M1 Ultra|MacOS|N/A||128 GB||
|Apple M2 Ultra|MacOS|N/A||192 GB||
|Nvidia Jetson Orin||N/A||32 GB||
|Nvidia Jetson Orin||N/A||64 GB||
|Nvidia 3060|Linux|Secondary|No|12 GB|12.507.611.136 B|
|<Your GPU>||||||
|...||||||
How to find out free RAM:
* CUDA
* Compile and run a .cu file with NVCC that calls cudaMemGetInfo (see details in a dedicated comment below, and the Python sketch after this list)
* MacOS
* Applications > Utilities > Terminal > top > PhysMem > Free - Cached
* Applications > Utilities > Activity Monitor > Memory > Free - Cached
* (Other software)
* (provide methods in comments and I will link them here)
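For CUDA devices there is also a one-liner from Python, if PyTorch with CUDA support is installed (a minimal sketch; it wraps the same cudaMemGetInfo call):

```python
import torch

# Returns (free, total) device memory in bytes via cudaMemGetInfo.
free_b, total_b = torch.cuda.mem_get_info()
print(f"free: {free_b:,} B / total: {total_b:,} B")
```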
The idea for this post originally came from [this comment](https://www.reddit.com/r/LocalLLaMA/comments/172cam5/comment/k3vtp3o/?utm_source=share&utm_medium=web2x&context=3) by u/LearningSomeCode . I will eventually include the numbers also in [my dedicated page](https://www.edlabs.it/gpus4ai) about GPUs for AI workloads. | 2023-10-19T09:29:07 | https://www.reddit.com/r/LocalLLaMA/comments/17bf3r5/poll_on_available_ram_for_llm_in_various/ | digital_m0nk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bf3r5 | false | null | t3_17bf3r5 | /r/LocalLLaMA/comments/17bf3r5/poll_on_available_ram_for_llm_in_various/ | false | false | self | 0 | null |
MiniGPT V2 is Here! | 1 | [removed] | 2023-10-19T09:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/17bf171/minigpt_v2_is_here/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bf171 | false | null | t3_17bf171 | /r/LocalLLaMA/comments/17bf171/minigpt_v2_is_here/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qluK_qna_o5zh3XN7mBcwLHLtP_XTi4aj8eHSjuhs18', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?width=108&crop=smart&auto=webp&s=134a2a1a3f7208fc2197cb29c0744c11124d281f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?width=216&crop=smart&auto=webp&s=5eccde1004c85c8cf57af176fc5e0c57b1473b88', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?width=320&crop=smart&auto=webp&s=08d650ac8b0391fb3f14cf6f51ef81e49266f986', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?width=640&crop=smart&auto=webp&s=869b6a0bddbee5ea9403496209a08ffd6aa9a1a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?width=960&crop=smart&auto=webp&s=460deb06705cbf0deb403d32861fb2cb8b86f5d6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?width=1080&crop=smart&auto=webp&s=11baeff143a8481229415748c0d2507de390cfbd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0ozUHXAQekBSf61GQyIBVXDnhJtsePosNC9A3FkQo6Y.jpg?auto=webp&s=7daf78d5bb58b2fce61eb8a57bc2de9bad8acb4a', 'width': 1200}, 'variants': {}}]} |
Looking for an intelligent 7-13B model | 5 | I'm looking for a local model that could substitute GPT 3.5 for me. I'm aware that it won't even come close on a local level at 7-13B, but really I'm just looking for something that's decently smart, that won't lecture me on every step and doesn't refuse to cooperate. No coding questions or roleplay. Just Q&A | 2023-10-19T09:19:38 | https://www.reddit.com/r/LocalLLaMA/comments/17bez21/looking_for_an_intelligent_713b_model/ | Chmuurkaa_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bez21 | false | null | t3_17bez21 | /r/LocalLLaMA/comments/17bez21/looking_for_an_intelligent_713b_model/ | false | false | self | 5 | null |
Aquila2-34B: a new 34B open-source Base & Chat Model! | 120 | Recently, the Beijing Academy of Artificial Intelligence (BAAI) announced a new version of Aquila:
* 34 Billion parameters
* Trained on 1.8 trillion tokens
* Released 16K version
* Available for research and commercial usage
* Claims similar performance to LLAMA2-70B, slightly degraded in INT4
Announcement: [https://twitter.com/BAAIBeijing/status/1712846558189019623](https://twitter.com/BAAIBeijing/status/1712846558189019623)
HF chat model: [https://huggingface.co/BAAI/AquilaChat2-34B](https://huggingface.co/BAAI/AquilaChat2-34B)
HF base model: [https://huggingface.co/BAAI/Aquila2-34B](https://huggingface.co/BAAI/Aquila2-34B)
HF chat 16k model: [https://huggingface.co/BAAI/AquilaChat2-34B-16K](https://huggingface.co/BAAI/AquilaChat2-34B-16K)
Note: This is by far the largest open-source Chinese-English LLM, with superior comprehensive and reasoning capability. | 2023-10-19T08:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/17bemj7/aquila234b_a_new_34b_opensource_base_chat_model/ | Grouchy-Mail-2091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bemj7 | false | null | t3_17bemj7 | /r/LocalLLaMA/comments/17bemj7/aquila234b_a_new_34b_opensource_base_chat_model/ | false | false | self | 120 | {'enabled': False, 'images': [{'id': 's0-E_gzqL_D2FTxm-WDRX9UISOhvFP3SXHK6-wR-6-o', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/s2l0vQrnTuHhyZvwYGuW3bUHWLMOe39PODzhXuee4rw.jpg?width=108&crop=smart&auto=webp&s=7f43f002b3a520eb3aa5191f05b9a723dbe0f68c', 'width': 108}], 'source': {'height': 59, 'url': 'https://external-preview.redd.it/s2l0vQrnTuHhyZvwYGuW3bUHWLMOe39PODzhXuee4rw.jpg?auto=webp&s=194a81985f6308ce19b71286e7e7a96690538dc4', 'width': 140}, 'variants': {}}]} |
Best model for information extraction | 1 | Hey guys,
Which 7B, 13B, or 30B model(s) do you most recommend for information extraction?
Specifically, when analyzing a video transcript, for example, which model excels at discerning the main subject, tone, and language style?
Slow speeds with koboldcpp-rocm on 6900 XT | 1 | [https://github.com/YellowRoseCx/koboldcpp-rocm](https://github.com/YellowRoseCx/koboldcpp-rocm)
My 6900 XT is getting only around 3-4 tokens per second with 13B models. All layers are loaded into VRAM, and I have no idea why it's this slow. I tried Xwin 0.2 Q4_K_S and Q6_K, and also Mistral 11B OmniMix, all with default settings.
What distinguishes a fine-tuned instance of an existing model from a new LLM | 5 | If I fine-tuned an instance of LLaMA with a few hundred examples of something and then named it "LLAMA-X", I think most of us would agree that it wouldn't be deserving of that new name. Vicuna, on the other hand, was also built from a LLaMA base model, except it was fine-tuned with much more data.
This is probably a dumb question, but asking doesn’t hurt: at what point can a fine-tuned model be considered a new LLM? Is it just a matter of marketing with no standard? | 2023-10-19T08:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/17bdyxl/what_distinguishes_a_finetuned_instance_of_an/ | dgadler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bdyxl | false | null | t3_17bdyxl | /r/LocalLLaMA/comments/17bdyxl/what_distinguishes_a_finetuned_instance_of_an/ | false | false | self | 5 | null |
Fine-Tuning for Structured JSON Outputs | 1 | I’ve successfully prompted GPT-4 to generate structured JSONs in my required format. While the initial prompt had limitations with baseline GPT 3.5, GPT 3.5 excelled when fine-tuned with just 10 examples. However, OpenAI’s GPT API isn’t cost-effective for me in the long run.
Hence, I’m considering LLaMa. Using the LLaMa 13b baseline, my prompt had an 88% accuracy in identifying/formulating information, but only structured the output correctly 12% of the time. For clarity, imagine a task where the prompt expects a JSON with keys as parts of speech and values as corresponding words from an input paragraph. LLaMa frequently categorized words correctly but often misformatted the structure, using bulleted lists or incorrect JSONs.
Given my needs, I believe the LLaMa 7b model, possibly fine-tuned with 20-30 examples, would suffice (though I’m open to more).
I’ll be running this on my local setup (RTX 4090, i9 12900k, 64GB RAM, Windows 11). I’m seeking advice on the best fine-tuning methods for LLaMa and any related tutorials.
Thank you! | 2023-10-19T07:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/17bdu7l/finetuning_for_structured_json_outputs/ | ashisht1122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bdu7l | false | null | t3_17bdu7l | /r/LocalLLaMA/comments/17bdu7l/finetuning_for_structured_json_outputs/ | false | false | self | 1 | null |
I am currently learning to design a memory system for octogen, a code interpreter. I would appreciate your feedback. | 1 | [removed] | 2023-10-19T05:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/17bbs05/i_am_currently_learning_to_design_a_memory_system/ | More-Shop9383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17bbs05 | false | null | t3_17bbs05 | /r/LocalLLaMA/comments/17bbs05/i_am_currently_learning_to_design_a_memory_system/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TmYobsyrXtOrJtgP59Ru5xfphqarihYbVOa14_ktc-0', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/W_Kh8lUMzSJJmADyvqEuRe8u-JVaVydSy9W3pE893qw.jpg?width=108&crop=smart&auto=webp&s=c1d44bacf598d05e4186a9cb84d440389fe6bb07', 'width': 108}, {'height': 188, 'url': 'https://external-preview.redd.it/W_Kh8lUMzSJJmADyvqEuRe8u-JVaVydSy9W3pE893qw.jpg?width=216&crop=smart&auto=webp&s=576dac9f735fb68bfca984f724791c0ed2c9f30b', 'width': 216}, {'height': 278, 'url': 'https://external-preview.redd.it/W_Kh8lUMzSJJmADyvqEuRe8u-JVaVydSy9W3pE893qw.jpg?width=320&crop=smart&auto=webp&s=bd393885dc13294ca16851fb0e79be9fd4268dbd', 'width': 320}, {'height': 557, 'url': 'https://external-preview.redd.it/W_Kh8lUMzSJJmADyvqEuRe8u-JVaVydSy9W3pE893qw.jpg?width=640&crop=smart&auto=webp&s=cb7f2bcb0f87b4e3587d66df8316873b66ce1b39', 'width': 640}], 'source': {'height': 789, 'url': 'https://external-preview.redd.it/W_Kh8lUMzSJJmADyvqEuRe8u-JVaVydSy9W3pE893qw.jpg?auto=webp&s=3a8df9783c494c66c3b7f8bb7352981787c8b6d9', 'width': 906}, 'variants': {}}]} |
tokens/sec calculation | 7 | I just wanted to make sure I understand how this is calculated. Sorry if this is too basic.
(no. of tokens in the response - no. of tokens in the input prompt) / time_taken
or just,
(no. of tokens in the response) / time_taken
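A tiny sketch of both conventions, in case it clarifies what I mean (the numbers are made up):

```python
def tokens_per_sec(n_response_tokens: int, n_prompt_tokens: int,
                   seconds: float, response_includes_prompt: bool) -> float:
    # If the response token count echoes the prompt, subtract it first;
    # otherwise just divide generated tokens by wall-clock time.
    n = n_response_tokens - (n_prompt_tokens if response_includes_prompt else 0)
    return n / seconds

print(tokens_per_sec(220, 20, 4.0, response_includes_prompt=True))  # 50.0
```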
​
TIA. | 2023-10-19T04:08:42 | https://www.reddit.com/r/LocalLLaMA/comments/17baakx/tokenssec_calculation/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17baakx | false | null | t3_17baakx | /r/LocalLLaMA/comments/17baakx/tokenssec_calculation/ | false | false | self | 7 | null |
LLM in ETL | 2 | I have been reading about approaches to using LLMs in ETL.
By "ETL process" I mean extracting information from unstructured/structured feeds into a fixed data structure with known attributes. We can skip transformation for now and focus only on the extraction part.
So, for example, if we have a JSON file with 26 attributes (a to z) and we want to extract and populate our data model of 5 fields, I imagine there are two ways we can derive the mapping:
1) Determine the mapping of the a-to-z source attributes to the 5 fields we want to capture by comparing keys across the two schemas
2) Determine the mapping of the a-to-z source attributes to the 5 fields we want to capture by comparing values against known sample values to find the closest match
So how can we leverage an LLM in this whole scenario?
1) An LLM to generate parsing code by looking at the source file and the fields to capture
2) An LLM to generate mapping recommendations between source attributes and target fields (a rough sketch of this option follows below)
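For option 2, I imagine something like this rough sketch (the model, keys, and prompt are placeholders; a real pipeline would also pass sample values):

```python
# Rough sketch: ask a local chat model to propose a source-to-target mapping.
from transformers import pipeline

chat = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

source_keys = ["a", "b", "c", "d", "e"]            # 26 attributes in practice
target_fields = ["id", "name", "price", "category", "date"]

prompt = (
    f"Source attributes: {source_keys}\n"
    f"Target fields: {target_fields}\n"
    "Reply with a JSON object mapping each target field to the most likely "
    "source attribute, or null if none fits."
)
print(chat(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
```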
I am looking for a design for the above two processes. How can RAG be used if we know the historical mappings of a few other sources, if it is applicable at all? Since each source is different, I'm not sure how useful RAG is in this case.
ETL is a very common use case, so I am looking at the ways the community is leveraging LLMs in their pipelines and where they provide the most value. | 2023-10-19T03:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/17b9za2/llm_in_etl/ | phlegmaticmoron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b9za2 | false | null | t3_17b9za2 | /r/LocalLLaMA/comments/17b9za2/llm_in_etl/ | false | false | self | 2 | null |
Testing the Llama Vision model (Llava) | 23 | With GPT-4V now rolling out on ChatGPT's site, I figured I'd try out the local open-source alternatives, and I found Llava, which is basically like GPT-4V with LLaMA as the LLM component. It seems to perform quite well; not quite as good as GPT's vision, but very close.
The main application I want to use it for is automation, and I thought having it navigate a game world would be interesting, so I took screenshots I found online and wanted to see how well Llava could direct me around the scene. I wrote a longer post on the ChatGPT subreddit about what I went through beyond what's here, if you're curious: [https://www.reddit.com/r/ChatGPT/comments/17b33c5/has\_anyone\_come\_up\_with\_a\_good\_consistent\_way\_to/](https://www.reddit.com/r/ChatGPT/comments/17b33c5/has_anyone_come_up_with_a_good_consistent_way_to/)
https://preview.redd.it/ld056p79m2vb1.png?width=624&format=png&auto=webp&s=8c298c290e87d6998b9f85d9c702ef5ea25fd0f8
This is a test where I asked Llava to give me the coordinates for a bounding box containing the pig and another for the pickaxe. I go into detail in the other post about trying it with GPT as well; they perform similarly, but GPT needs overlays to get it as accurate as Llava. As you can see, it gives good approximate positions, but it's still off.
​
https://preview.redd.it/g6v0v26am2vb1.png?width=624&format=png&auto=webp&s=1c5c6e30b67c61ee7f29fbdcc74a222135f9d7b2
Here I asked it to identify more things:
Red: Pickaxe
Blue: Pig
Yellow: the well
Pink: the farm (it marked the pig instead for some reason)
Green: the path towards the nearest trees
In terms of automation, it does a good enough job: if you take the center of a bounding box, it makes for a good direction to turn the bot towards when you want to navigate somewhere, and as you move you can take more screenshots to adjust as you go.
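Here's a tiny sketch of the navigation math I mean (the screen size and the raw reply string are made-up assumptions):

```python
# Parse the [x1, y1, x2, y2] reply, take the box centre, and derive a
# screen-space turn offset for the bot's camera.
import json

reply = "[310, 220, 420, 335]"          # model's bounding-box answer
x1, y1, x2, y2 = json.loads(reply)
cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # box centre

screen_w, screen_h = 1920, 1080
dx, dy = cx - screen_w / 2, cy - screen_h / 2  # offset from screen centre
print(f"turn camera by ({dx:+.0f}, {dy:+.0f}) pixels")
```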
​
I decided to try a more complex image from Minecraft, and it wasn't all that great with it, but still good for general directions.
https://preview.redd.it/1hl9uaf7m2vb1.png?width=1920&format=png&auto=webp&s=a06db0532471b7f0802f7aada13c2f8661584184
Red: Sheep
Blue: Spider
Pink: Pig
I asked it twice for each one so I could show you it's consistent.
​
Llava is only at v1.5, but even as it is, I could see it being sufficient for some basic automation.
I'd love to hear thoughts from the community though.
​
edit: With some more testing, I found a prompt that does an alright job so far: "give me the starting and ending grid coordinates for the area marking the nearest {Object name here}. Format: [x1, y1, x2, y2]"
Here's an example where I also have lines showing how it would make the camera move to look at the objects
[Objects: Pig, Window, Well](https://preview.redd.it/8qpnwwhcd3vb1.png?width=1200&format=png&auto=webp&s=10aca276e44d9eaf764ce9166bda4393e70b83df)
​
Here's my breakdown of LLava vs GPT-4v at this task: [https://www.reddit.com/r/ChatGPT/comments/17bdmst/comparing\_gpt4vision\_opensource\_llava\_for\_bot/](https://www.reddit.com/r/ChatGPT/comments/17bdmst/comparing_gpt4vision_opensource_llava_for_bot/) | 2023-10-19T02:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/17b8mq6/testing_the_llama_vision_model_llava/ | Sixhaunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b8mq6 | false | null | t3_17b8mq6 | /r/LocalLLaMA/comments/17b8mq6/testing_the_llama_vision_model_llava/ | false | false | 23 | null | |
Sparse Finetuning: 75% fewer parameters | 44 | 2023-10-19T02:40:16 | https://x.com/neuralmagic/status/1714643021860377064?t=XMo6SdkpNW0d8EZqRMu-lQ&s=34 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 17b8lfc | false | null | t3_17b8lfc | /r/LocalLLaMA/comments/17b8lfc/sparse_finetuning_75_fewer_parameters/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': 'U2ED1JdaZa1wiBzUjemTUrNGPoD_QL4PipPEHDcq_TU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Sc_59HX9KkBwyqrsEMLiZNGZ48Dy2kIq0Q1qU6KVOTs.jpg?width=108&crop=smart&auto=webp&s=efb587b3022cdf025bea1a2fad879c447aa4c4f7', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Sc_59HX9KkBwyqrsEMLiZNGZ48Dy2kIq0Q1qU6KVOTs.jpg?auto=webp&s=dfb520c5bd7de442a513e9b4d0ac2af4dc45fca8', 'width': 200}, 'variants': {}}]} | |
Should we rename this sub to something more general than just Llama? (since we talk about Falcon, Mistral, etc. too) | 1 | [removed] | 2023-10-19T02:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/17b8leo/should_we_rename_this_sub_to_something_more/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b8leo | false | null | t3_17b8leo | /r/LocalLLaMA/comments/17b8leo/should_we_rename_this_sub_to_something_more/ | false | false | self | 1 | null |
Looking for a local LLM to assist with SOP review | 5 | Hi all, I'm looking at running an LLM that can take Standard Operating Procedure (SOP) documents and review further documents for any potential issues.
I have been playing around with Mistral 7B, which seems to have some basic knowledge of the subject but appears to be struggling.
Apologies in advance: I am new to running local language models, but I am pretty knowledgeable about the IT side of things.
Thanks | 2023-10-19T02:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/17b8e4a/looking_for_a_local_llm_to_assist_with_sop_review/ | mrada45678 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b8e4a | false | null | t3_17b8e4a | /r/LocalLLaMA/comments/17b8e4a/looking_for_a_local_llm_to_assist_with_sop_review/ | false | false | self | 5 | null |
Is Oobabooga broken right now? | 1 | [removed] | 2023-10-19T01:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/17b7byc/is_oobabooga_broken_right_now/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b7byc | false | null | t3_17b7byc | /r/LocalLLaMA/comments/17b7byc/is_oobabooga_broken_right_now/ | false | false | self | 1 | null |
Yin Song on LinkedIn: amazon/MistralLite · Hugging Face | 26 | It seems very impressive! | 2023-10-19T01:34:38 | https://www.linkedin.com/posts/yin-song-19b2702b_amazonmistrallite-hugging-face-activity-7119944773196546048-zXDV?utm_source=share&utm_medium=member_android | chihpoc | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 17b79l7 | false | null | t3_17b79l7 | /r/LocalLLaMA/comments/17b79l7/yin_song_on_linkedin_amazonmistrallite_hugging/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'lg7dIdWxRXdNUWlDuQY8JCcwyvConr0MFZ9Ah9I3CJw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1519EKsHTKe37DlBiPjFrnxIm7Sn4fNf8RCc0ywMgoI.jpg?width=108&crop=smart&auto=webp&s=25be3fe325248bcd36d5b35707252208cbaea591', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1519EKsHTKe37DlBiPjFrnxIm7Sn4fNf8RCc0ywMgoI.jpg?width=216&crop=smart&auto=webp&s=5aadfd096b7ac26d6557e4961d4781115901add7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1519EKsHTKe37DlBiPjFrnxIm7Sn4fNf8RCc0ywMgoI.jpg?width=320&crop=smart&auto=webp&s=04d3a0a8090c4dfeccaea88aa8fa8c37344f0aa6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1519EKsHTKe37DlBiPjFrnxIm7Sn4fNf8RCc0ywMgoI.jpg?width=640&crop=smart&auto=webp&s=fd97ae03cc6bc480651782a225ad380f418d05d0', 'width': 640}], 'source': {'height': 432, 'url': 'https://external-preview.redd.it/1519EKsHTKe37DlBiPjFrnxIm7Sn4fNf8RCc0ywMgoI.jpg?auto=webp&s=72d180c97a4d6a543d0a45ac4c4dba3cb30e89e6', 'width': 800}, 'variants': {}}]} |