title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What's the current best way to deal with memory management? | 3 | With long conversations or large documents to parse, what tools are currently being used to help manage memory, fetch&format relevant information, etc... so that you have the important information within the context limits.
I've heard that Microsoft is working on a GitHub Copilot for Docs which is supposed to be able to parse through large documents but I'm curious what's out there now and what methods people use for things like handling very long conversations or stories. | 2023-09-06T10:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/16bgeoh/whats_the_current_best_way_to_deal_with_memory/ | Sixhaunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bgeoh | false | null | t3_16bgeoh | /r/LocalLLaMA/comments/16bgeoh/whats_the_current_best_way_to_deal_with_memory/ | false | false | self | 3 | null |
Was Joi from Blade Runner 2049 a local LLM? | 13 | Inspired by another discussion on personal LLMs, I was reminded of that movie. I know that when it came out in 2017, LLMs did not yet exist but as I think back on it, Joi fits the mold of a local LLM as we have them today perfectly and the movie seems to be oddly prophetic (some info from the Wiki rather than the movie).
\- She was initially released in 2029, a reasonable date for more wide adoption of LLMs
\- She was installed in K's apartment and could be transferred onto a mobile device. She showed no sign of being hosted online though she could access online services. She suggested deleting her from the main console to avoid giving away information. Clearly local.
\- Joi's primary role was to communicate with other entities, human, replicant or otherwise, in a manner representing the user's mate or companion. She recorded data "memories" as she ran, allowing her "personality" to change over time. Tell me that's not a waifu in Silly Tavern with Extras enabled.
\- Replicants despised Joi, probably resenting being put on the same level as her. As the prostitute tells her: "I've been inside you. Not so much there as you think." It implies they think that she's not sentient in the same way they are.
\- She's specifically spelled Joi, not Joy. JOI is an acronym for Jerk-off Instructions, something that nobody ever mentions but I find it hard to believe that the writers didn't do this on purpose. Not being corporeal, that's all she can do for K after all without resorting to outside help. LLMs are certainly used in that way today.
\- While sometimes insightful, her responses to K all seem a bit "canned" and too compliant, something I've found that current LLMs also do. We also learn that her default "character card", so to speak, is what prompted her to call K "Joe" rather than an inspiration she'd had.
Honestly, short of the holo tech, I'd say we're 90% of the way to replicating (pun not intended) Joi. We can already use SD in conjunction with the LLM to create images and it's very feasible to have a customizable full motion avatar for an LLM comparable to what's in the movie within a fairly short time.
​ | 2023-09-06T09:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/16bfu1n/was_joi_from_blade_runner_2049_a_local_llm/ | Herr_Drosselmeyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bfu1n | false | null | t3_16bfu1n | /r/LocalLLaMA/comments/16bfu1n/was_joi_from_blade_runner_2049_a_local_llm/ | false | false | self | 13 | null |
Looking for Affordable Alternatives to ChatGPT 4 API - Best LLM APIs Today? | 2 | I've been using ChatGPT 4 for a while now, and it's been a fantastic tool for various tasks especially through its API.
So, I'm on the hunt for LLM (Large Language Model) APIs that can offer a similar experience to ChatGPT 4 but at a more affordable price point for a quite big project.
​
Thank you! | 2023-09-06T09:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/16bfms3/looking_for_affordable_alternatives_to_chatgpt_4/ | pacmanpill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bfms3 | false | null | t3_16bfms3 | /r/LocalLLaMA/comments/16bfms3/looking_for_affordable_alternatives_to_chatgpt_4/ | false | false | self | 2 | null |
Weird loss curves when training for the LLM science exam - anyone seen anything similar? | 7 | I just finished reading this very interesting post from [fast.ai](https://fast.ai): [https://www.fast.ai/posts/2023-09-04-learning-jumps/](https://www.fast.ai/posts/2023-09-04-learning-jumps/)
Theyre training an LLM to answer science questions for a kaggle competition, the dataset is 6500 question-answer pairs. These are some of the weirdest loss curves I've ever seen, and the hypothesis they arrive at is that the LLM is simply remembering things from the training set really quickly.
I know this community does quite a lot of finetuning - has anyone seen these shaped loss curves in their own models? I'd be very interested to know it it happens in other areas aside from question answering.
Better yet, anyone got any more ideas for how to alleviate the problem? The post covers all the things I would have thought of (data augmentation, more dropout etc). The idea of stochastic depth looks interesting - basically only using a random subset of the network in each mini-batch (I think).
​
​ | 2023-09-06T09:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/16bferl/weird_loss_curves_when_training_for_the_llm/ | heisenbork4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bferl | false | null | t3_16bferl | /r/LocalLLaMA/comments/16bferl/weird_loss_curves_when_training_for_the_llm/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'E25NtJMen0qHD0SG4k9ae4JyjU3zsRRlY0XGd5ykMMk', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=108&crop=smart&auto=webp&s=3ff6cefde0542398c2dc99595a9f5b29e30a90d5', 'width': 108}, {'height': 99, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=216&crop=smart&auto=webp&s=cb13e3415ea351711e1612b69f492a80f4872944', 'width': 216}, {'height': 146, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=320&crop=smart&auto=webp&s=135f33c32f4f11927218be83aca80e29b7357a3f', 'width': 320}, {'height': 293, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=640&crop=smart&auto=webp&s=f18b421777d7433a740db39b7bcae22251473aef', 'width': 640}, {'height': 440, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=960&crop=smart&auto=webp&s=a4c6cfb06de2b709d05c2151265d0044a1d1ee75', 'width': 960}, {'height': 495, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=1080&crop=smart&auto=webp&s=09f444b48c22df8a51296656babfad69bc079d11', 'width': 1080}], 'source': {'height': 505, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?auto=webp&s=042d0d343c351af196b9ca51f35a66bdf3a0ef0e', 'width': 1100}, 'variants': {}}]} |
Understanding Retrieval Augmented Generation (RAG) | 1 | [removed] | 2023-09-06T08:41:34 | https://www.reddit.com/r/LocalLLaMA/comments/16beuqa/understanding_retrieval_augmented_generation_rag/ | Floatbot_Inc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16beuqa | false | null | t3_16beuqa | /r/LocalLLaMA/comments/16beuqa/understanding_retrieval_augmented_generation_rag/ | false | false | self | 1 | null |
Any good learning resources for fine-tuning, quantization, and LoRA? | 16 | Hey all, I have been working with GPT3 for like 2 years now, I have lots of experience using it in all sorts of python programs but my experience is mostly on the surface level, I have only fine-tuned a couple of times and that was using the openai API. Id like to take the next step and learn about running models locally, quantization, LoRA and all of that good stuff :P
Im hoping for any particularly good resources where I can learn about these things ideally without having to dig through super technical papers, I like super practical stuff best :)
Any pointers here would be awesome! | 2023-09-06T08:23:31 | https://www.reddit.com/r/LocalLLaMA/comments/16bekej/any_good_learning_resources_for_finetuning/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bekej | false | null | t3_16bekej | /r/LocalLLaMA/comments/16bekej/any_good_learning_resources_for_finetuning/ | false | false | self | 16 | null |
qlora fine-tuned codellama 34b model | 8 | I always thought the codellama was pretrained on 1/4 of the original llama2 34b, maybe it can be futher fine-tuned rather than just for coding. So I took the \`codellama/CodeLlama-34b-hf\` base model and fine-tuned it on \`timdettmers/openassistant-guanaco\` dataset.
Surprisingly, it has become a multilingual chat model that I can now talk to in Chinese :p.
Here is the model weight if you want to try it out:
[https://huggingface.co/mzbac/CodeLlama-34b-guanaco-gptq](https://huggingface.co/mzbac/CodeLlama-34b-guanaco-gptq) | 2023-09-06T07:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/16be2vc/qlora_finetuned_codellama_34b_model/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16be2vc | false | null | t3_16be2vc | /r/LocalLLaMA/comments/16be2vc/qlora_finetuned_codellama_34b_model/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'WdMUdOa1Y0Not51DEOCF2_l9TPzadfYPoCr9OD_hTb0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=108&crop=smart&auto=webp&s=d6617d92a951b966e84a74b0debc3ba3259ba1ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=216&crop=smart&auto=webp&s=b05b0313aaf3fb2cdf6ca417bc5317d50adab2f5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=320&crop=smart&auto=webp&s=4da8bdd26ab28ae20f7981f0da35d6bb14b3cc15', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=640&crop=smart&auto=webp&s=07ee48e34fa4124976d91f9413bc57e93601ea35', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=960&crop=smart&auto=webp&s=ba5374d514062114e532292ee39a2eb9f3f1b955', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=1080&crop=smart&auto=webp&s=58577a86e335242828f28f5da7bfae8490287014', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?auto=webp&s=62df925d9bafa5051a6e422810f1c8594db3c3e4', 'width': 1200}, 'variants': {}}]} |
Which framework would you use to build a RAG using llama-2? | 1 | [removed] | 2023-09-06T07:33:23 | https://www.reddit.com/r/LocalLLaMA/comments/16bdqs0/which_framework_would_you_use_to_build_a_rag/ | front-equal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bdqs0 | false | null | t3_16bdqs0 | /r/LocalLLaMA/comments/16bdqs0/which_framework_would_you_use_to_build_a_rag/ | false | false | self | 1 | null |
How can I load a local model and play with it? | 1 | [removed] | 2023-09-06T07:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/16bdmmh/how_can_i_load_a_local_model_and_play_with_it/ | william_luckybob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bdmmh | false | {'oembed': {'author_name': 'James Briggs', 'author_url': 'https://www.youtube.com/@jamesbriggs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/6iHVJyX2e50?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Llama 2 in LangChain — FIRST Open Source Conversational Agent!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/6iHVJyX2e50/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Llama 2 in LangChain — FIRST Open Source Conversational Agent!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_16bdmmh | /r/LocalLLaMA/comments/16bdmmh/how_can_i_load_a_local_model_and_play_with_it/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YciGqF6lm8xszzTBA-e2GH9xMEAXy2SADgKjOWd7OLc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?width=108&crop=smart&auto=webp&s=1cbdfc0fb58ac0aac17289e6064ca16f7ddc52b2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?width=216&crop=smart&auto=webp&s=5eb3e500c45d81a472235172ef66c15739a87d54', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?width=320&crop=smart&auto=webp&s=23ed3e66a6bf198685d24a3ef0beffcce665bb8e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?auto=webp&s=7cfb901bdda9da6de6f6d072e5fed4b069aa7430', 'width': 480}, 'variants': {}}]} | |
How does Llama-2 perform in sentiment analysis? | 8 | Hey guys, if you have explored using Llama-2 in doing sentiment analysis, just wanted to get your experience in how Llama-2 perform in this task?
I have tried using GPT and it’s pretty accurate.
If Llama-2 isn’t all that good in sentiment analysis, which other open LLM would you recommend?
Thank heaps! | 2023-09-06T06:30:25 | https://www.reddit.com/r/LocalLLaMA/comments/16bcpn1/how_does_llama2_perform_in_sentiment_analysis/ | --leockl-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bcpn1 | false | null | t3_16bcpn1 | /r/LocalLLaMA/comments/16bcpn1/how_does_llama2_perform_in_sentiment_analysis/ | false | false | self | 8 | null |
Someone managed to connect local LLM's to the Internet | 1 | [removed] | 2023-09-06T05:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/16bbin2/someone_managed_to_connect_local_llms_to_the/ | No_Marionberry312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bbin2 | false | null | t3_16bbin2 | /r/LocalLLaMA/comments/16bbin2/someone_managed_to_connect_local_llms_to_the/ | false | false | 1 | null | |
ChatGPT-4 has gotten better for me over time | 5 | First off, I know this is r/LocalLLaMA, but I see a lot of comments about ChatGPT worsening, whether it's due to OpenAI "making it worse", quantization hurting performance, etc.
But for me, it's gotten better at translations, coding tasks, math questions, and really just anything I throw at it. No hate towards opensource or Llama as I want it to succeed just as much as everyone else, but none of the Llama 2 models come close to being good enough to drop paying $20 a month and likely pay even more to locally run a good model at acceptable speeds or rent from a cloud service. That's even ignoring the features like "Custom instructions" which works 100% of the time for me as well.
I'd love for the people who say it's worsened to hop on playground and test the March model to see if it really has worsened. I'd also love it if people specified their prompts and what version of ChatGPT they used. ChatGPT-3.5 hasn't really gotten worse for me either, but it's been pretty terrible since the beginning so that's not saying much.
Sorry, again not trying to hate on open-source models or shill ClosedAI, but it makes my eye twitch a little every time I see this comment and I have to wonder if users are using ChatGPT wrong or I'm doing something wrong to have Llama not perform as well as I'd hoped. | 2023-09-06T04:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/16bav14/chatgpt4_has_gotten_better_for_me_over_time/ | JuicyStandoffishMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16bav14 | false | null | t3_16bav14 | /r/LocalLLaMA/comments/16bav14/chatgpt4_has_gotten_better_for_me_over_time/ | false | false | self | 5 | null |
Is it better to continue pretrain LLM for a domain then do instruction fine-tune or do instruction fine-tune using question-answer pairs for that domain immediately? | 9 | Let's say for example, I want to create a chat bot that answer questions about cars. Should I first continue train the LLM on a corpus containing documents about cars then do instruction fine-tuning? Or I can do instruction fine-tuning on question answer pairs about cars immediately?
I think that the first method requires less instruction data than the second method and instruction data are harder to collect so the first method should be preferred in most cases. However, in cases where I don't have a domain corpus but have a large amount question answer pairs in that domain, can instruction fine-tuning on this data achieve similar/better result? | 2023-09-06T04:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/16barbk/is_it_better_to_continue_pretrain_llm_for_a/ | scienceotaku68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16barbk | false | null | t3_16barbk | /r/LocalLLaMA/comments/16barbk/is_it_better_to_continue_pretrain_llm_for_a/ | false | false | self | 9 | null |
Can or doe's anyone use LLm's to provide feedback on fiction you write? | 1 | Also can it be trusted as honest and objective or will it always say good job? | 2023-09-06T02:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/16b8o9a/can_or_does_anyone_use_llms_to_provide_feedback/ | poet3991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b8o9a | false | null | t3_16b8o9a | /r/LocalLLaMA/comments/16b8o9a/can_or_does_anyone_use_llms_to_provide_feedback/ | false | false | self | 1 | null |
SQL analysis | 1 | What's the best open source LLM for building a query builder from text to sql, analyze it, and more querying from document ? I heard that llama 2 not so good at coding. And also can I use two different llms for different endpoint in my api ? Example for generate code I'm using gpt 3.5 and for querying document I'm using llama 2 ? Thanks | 2023-09-06T02:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/16b86ry/sql_analysis/ | laveriaroha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b86ry | false | null | t3_16b86ry | /r/LocalLLaMA/comments/16b86ry/sql_analysis/ | false | false | self | 1 | null |
Can I replace ChatGPT with local models yet? | 5 | I'm not particularly literate on the topic of LLM metrics, so I'm here because I'm wondering if there are any local ChatGPT alternatives I can set up today that could largely substitute either GPT-3.5 or GPT-4. Note that I'm not talking about just LLaMA, I'm open to anything really. | 2023-09-06T02:29:35 | https://www.reddit.com/r/LocalLLaMA/comments/16b84b6/can_i_replace_chatgpt_with_local_models_yet/ | 88sSSSs88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b84b6 | false | null | t3_16b84b6 | /r/LocalLLaMA/comments/16b84b6/can_i_replace_chatgpt_with_local_models_yet/ | false | false | self | 5 | null |
Inconsistency of carbon footprint in LLaMA 1 paper | 8 | When I was reading the LLaMA 1 paper, it mentioned their power consumption and carbon emitted:
This means that developing these models would have cost around 2,638 MWh under our assumptions, and a total emission of 1,015 tCO2eq.
However, Table 15 does not state the same information:
https://preview.redd.it/1b23zuy2pjmb1.png?width=2014&format=png&auto=webp&s=e3fd7eebecd9ffc887a1fdf95bd20aefe903e97f
The combined power consumption and carbon emitted for all llama models in the table are 777 and 300, respectively. This clearly does not align with the previous information.
Could I have misunderstood something? | 2023-09-06T02:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/16b829c/inconsistency_of_carbon_footprint_in_llama_1_paper/ | secsilm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b829c | false | null | t3_16b829c | /r/LocalLLaMA/comments/16b829c/inconsistency_of_carbon_footprint_in_llama_1_paper/ | false | false | 8 | null | |
Eval mmlu result against various infer methods (HF_Causal, VLLM, AutoGPTQ, AutoGPTQ-exllama) | 5 | I modified declare-lab's instruct-eval scripts, add support to VLLM, AutoGPTQ (and new autoGPTQ support exllama now), and test the mmlu result. I also add support to fastllm (which can accelerate ChatGLM2-6b.
The code is here [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval) , I'd like to hear any errors in those code.
All GPTQ is 4bit\_32g\_actor, quantizated with wikitext2, all test is running on cuda 11.7, ubuntu 18.04, V100 GPU.
The result is below, FP16 is running use hf's causal with model.half().
[mmlu score](https://preview.redd.it/mpwhykutmjmb1.png?width=1172&format=png&auto=webp&s=328e2bf75a9b43e433a42e07870cb31d126aa613)
Fastllm result, which is better than origin for ChatGLM2, but has some problem for Qwen:
https://preview.redd.it/o2ummgcgnjmb1.png?width=643&format=png&auto=webp&s=2afd70c83ac636fc3f9c0852f4f151253f4d13c0
​
​ | 2023-09-06T02:12:07 | https://www.reddit.com/r/LocalLLaMA/comments/16b7qr1/eval_mmlu_result_against_various_infer_methods_hf/ | bash99Ben | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b7qr1 | false | null | t3_16b7qr1 | /r/LocalLLaMA/comments/16b7qr1/eval_mmlu_result_against_various_infer_methods_hf/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'evpneWxD5awodnxHnoDwaty2jWBMEu9gPDFPExqpN6s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=108&crop=smart&auto=webp&s=57d7328218559ecfa1273e7865a5151292dbb450', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=216&crop=smart&auto=webp&s=0a525fedf7c6822ba15f19c2a09747f88b22b442', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=320&crop=smart&auto=webp&s=342a22c27abc3a720b4e2081842b8b5b0d652626', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=640&crop=smart&auto=webp&s=f3ba884c0d90536a04fa452c57f0f1acc896a81e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=960&crop=smart&auto=webp&s=4a0993cdd96eea9c217a4ea34ebc80b8c153f095', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=1080&crop=smart&auto=webp&s=2573782625888480bd56bc08119c778de3b01dc9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?auto=webp&s=3a66a13e1ecf0fa867b91d8eb90fb448d8db6d22', 'width': 1200}, 'variants': {}}]} | |
llama.cpp/Koboldcpp hipBLAS compile under Windows | 1 | Anyone able to compile llama.cpp or koboldcpp with hipBLAS enabled under Windows? I did get llama-python-cpp working with hipBLAS (ROCm) under linux and it works great but have not been able to get it to compile in Windows. Under linux at least, llama.cpp even splits the vram usage pretty much equally with dual cards. I have a pair of W6800's with 32GB's each on my Win 10 machine and want to see how they perform. The only thing I have changed in the location of clang and clang++ in the Makfile along with add an env varriable pointing to the root folder that ROCm is installed in. I'm really not sure that anyone has actually compiled it for Windows at all so please let me know. Here is the error that I get:
clang++: clang: clang: error: clangerror: : error: unsupported option '-fPIC' for target 'x86\_64-pc-windows-msvc'error: unsupported option '-fPIC' for target 'x86\_64-pc-windows-msvc'
unsupported option '-fPIC' for target 'x86\_64-pc-windows-msvc'
unsupported option '-fPIC' for target 'x86\_64-pc-windows-msvc'
​
Pretty sure I just need to update the compile options. | 2023-09-06T01:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/16b6r73/llamacppkoboldcpp_hipblas_compile_under_windows/ | ccbadd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b6r73 | false | null | t3_16b6r73 | /r/LocalLLaMA/comments/16b6r73/llamacppkoboldcpp_hipblas_compile_under_windows/ | false | false | self | 1 | null |
Good low-key video Jerry Kaplan, Why Generative AI is Truly Revolutionary | 1 | [https://www.youtube.com/watch?v=h7AgTzWMhUc](https://www.youtube.com/watch?v=h7AgTzWMhUc)
ChatGPT summarize:
* Jerry Kaplan, a technology expert with a Ph.D. in computer science and a background in entrepreneurship, discusses generative AI in a YouTube conversation.
* Generative AI represents advanced artificial general intelligence (AGI) capable of human-like intelligence and creativity.
* Recent breakthroughs in natural language processing, increased computing power, and the Transformer architecture contribute to the advancement of generative AI.
* Generative AI can continuously learn, curate data, and solve complex problems in various domains.
* Concerns about AI's downsides in politics and the environment are acknowledged, but the transformative potential of AI is highlighted.
* Kaplan discusses existential risks associated with AI, emphasizing the unlikely sudden rise of a new form of life.
* The impact of AI on labor markets is explored, emphasizing historical patterns of job displacement and increased productivity.
* AI's potential to complement human capabilities, especially in writing and creativity, is noted.
* The question of machine consciousness and its differences from human consciousness is discussed, particularly regarding the perception of time.
* Challenges related to AI, including deep fakes and the need for authentication, are addressed.
* AI's role in creating a protective shield against misinformation is considered.
* Kaplan envisions a future where machines handle most work, allowing humans to focus on purpose, meaning, and relationships.
* The discussion concludes with mention of Jerry Kaplan's upcoming book, "Generative Artificial Intelligence: What Everyone Needs to Know." | 2023-09-06T00:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/16b5xay/good_lowkey_video_jerry_kaplan_why_generative_ai/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b5xay | false | null | t3_16b5xay | /r/LocalLLaMA/comments/16b5xay/good_lowkey_video_jerry_kaplan_why_generative_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5ibR4gyccUh40J0K6twf60h-aMSWtSm_YrTtqCyNgyw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?width=108&crop=smart&auto=webp&s=9e8924dd9a11ea782d460a5521960c8056d50d15', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?width=216&crop=smart&auto=webp&s=9f0f164bf2512ad590f506d5cfb9b8bb30177fb3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?width=320&crop=smart&auto=webp&s=46a8966232dcbda4e5026fec9a849f259d271c6e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?auto=webp&s=912498ccbf8fdc1ed1567ed91a6a94c083dc219d', 'width': 480}, 'variants': {}}]} |
[Request] Instructions on how to convert models to the GGUF format. | 1 | I need to process large prompts, and there's almost no support for large context size models. I haven't found any GUI frontend that supports more than 2K context, and llama.cpp requires the GGUF format, of which there's very few models, almost none for large contexts.
I tried using the provided converstion tool but it doesn't work. I know /u/fun_tangerine_1086 has been doing conversions [here](https://www.reddit.com/r/LocalLLaMA/comments/15zvxta/converting_some_models_to_gguf_formats_from/), but I need to be able to do them myself and not depend on someone else's goodwill and time.
If anyone can explain what the process is, that'd be awesome. | 2023-09-05T22:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/16b2qou/request_instructions_on_how_to_convert_models_to/ | jungle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b2qou | false | null | t3_16b2qou | /r/LocalLLaMA/comments/16b2qou/request_instructions_on_how_to_convert_models_to/ | false | false | self | 1 | null |
I built a very simple TPS (tokens per second) visualizer so that you don't have to | 81 | Hi all,
A key question I've often come across whilst researching what hardware I should procure for my home server, is how many tokens per second a given GPU combination is able to generate for a given model.
The acceptability of said TPS always seems to be subjective, so I wanted to have an easy way to visualize this.
I wrote this very simple static app which accepts a TPS value, and prints random tokens of 2-4 characters, linearly over the course of a second.
[https://tokens-per-second-visualizer.tiiny.site](https://tokens-per-second-visualizer.tiiny.site)
I've been using it to determine what TPS I'd be happy wit, so thought I'd share in case it would be helpful for you as well.
This is the full source code if you're interested in extending its functionality.
```html
<!DOCTYPE html>
<html>
<head>
<title>Token Per Second Visualizer</title>
<link
href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha1/dist/css/bootstrap.min.css"
rel="stylesheet">
</head>
<body>
<div class="container my-4">
<h1 class="text-center">Token Per Second Visualizer</h1>
<div class="my-4">
<input type="number" id="tps" class="form-control"
placeholder="Enter tokens per second">
<button class="btn btn-primary my-2"
onclick="start()">Prompt</button>
</div>
<textarea id="output" class="form-control" rows="10"
readonly></textarea>
</div>
<script>
let interval;
function start() {
clearInterval(interval);
const tps = parseInt(document.getElementById('tps').value, 10);
const output = document.getElementById('output');
output.value = '';
for (let i = 0; i < tps; i++) {
setTimeout(() => {
const tokenLength = Math.floor(Math.random() * 3) + 2;
const token = Array.from({ length: tokenLength }, () =>
String.fromCharCode(97 + Math.floor(Math.random() * 26))).join('');
output.value += token + ' ';
}, (1000 / tps) * i);
}
interval = setInterval(() => {
for (let i = 0; i < tps; i++) {
setTimeout(() => {
const tokenLength = Math.floor(Math.random() * 3) + 2;
const token = Array.from({ length: tokenLength }, () =>
String.fromCharCode(97 + Math.floor(Math.random() * 26))).join('');
output.value += token + ' ';
}, (1000 / tps) * i);
}
}, 1000);
}
</script>
</body>
</html>
``` | 2023-09-05T21:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/16b1nvp/i_built_a_very_simple_tps_tokens_per_second/ | iBifteki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b1nvp | false | null | t3_16b1nvp | /r/LocalLLaMA/comments/16b1nvp/i_built_a_very_simple_tps_tokens_per_second/ | false | false | self | 81 | null |
Inside Meta's AI Drama: internal feuds over compute power, and a rumor on Llama 3 | 197 | This exclusive comes from The Information, the same publication that has published other exclusive articles on Meta AI news like the formerly unknown releases of Llama 2 and Code Llama.
The [full article](https://www.theinformation.com/articles/inside-metas-ai-drama-internal-feuds-over-compute-power) is paywalled, but it provides some insights to what's going on behind the scenes with Meta and Llama and potentially confirms an earlier rumor about Llama 3.
**TLDR/biggest takeaways:**
* Llama 1 started development approximately in July 2022, months before ChatGPT's release.
* Meta AI has struggled with providing computing resources to its two major LLM development teams, OPT and Llama. The Llama team, based in Paris, believed smaller models would be more efficient and felt their work was overlooked compared to the North American OPT team that received more resources. This caused low morale and tensions.
* A week before Llama's release, Meta's FAIR (Fundamental AI Research) Paris chief left the company and the future of Llama was in doubt, but the OPT team wasn't fairing any better. Roughly half of the 19 OPT paper authors had left Meta by this time, so FAIR leaders decided to bring together members of the competing LLM teams to focus on Llama 2. This process was complete by February, and the large model in development by the OPT team was abandoned.
* Meta has been dealing with companywide layoffs, and FAIR has been no exception to losing researchers, including by defection. More than half of the 14 authors of the original Llama research paper have left the company, and the entire FAIR team that worked on Llama 2 was moved over to a new team to work on Meta's AI technologies for its apps. As already known by Mark Zuckerberg's comments on Meta AI and confirmed by Joelle Pineau, head of FAIR, this team for proprietary apps is separate from open source development. However, this does not mean work on future Llama models has stopped.
**Related news for Llama 3** and some thoughts:
The article briefly mentioned a recent generative AI social hour at Meta's campus, where the company gave llama plushies to attendees. There was some speculation about whether this event happened or not based on tweets by industry insiders, but this article confirms it did. This lends credence to Jason Wei's Llama 3 rumor. Jason Wei is a researcher at OpenAI who recently had [this](https://twitter.com/agikoala/status/1695125016764157988) to say about the social hour and Llama 3:
>Overheard at a Meta GenAI social:
>
>"We have compute to train Llama 3 and 4. **The plan is for Llama-3 to be as good as GPT-4.**"
>
>"Wow, if Llama-3 is as good as GPT-4, will you guys still open source it?"
>
>"Yeah we will. Sorry alignment people."
I wouldn't tell anyone to hold their breath, but this news could be gamechanging if true. It brings into question a recent [quote](https://www.engadget.com/meta-could-soon-make-its-ai-model-available-for-commercial-projects-114021749.html) from Yann LeCunn:
>The competitive landscape of AI is going to completely change in the coming months, in the coming weeks maybe, when there will be open source platforms that are actually as good as the ones that are not," Yann LeCun, Meta's vice-president and Chief AI scientist, said at a July conference.
When he said this, many readers assumed this only meant Llama 2, but it now seems possible that he could have been referring to Llama 3 as well. 'In the coming weeks' would have referred to Llama 2, which released that month and we know is obviously not GPT-4, but 'in the coming months' could have been speculation or hints on the capabilities of a newer model, the Llama 3 model.
Since FAIR doesn't have to split compute between two competing LLM teams anymore, it can all go toward Llama. They have a pattern of training for close to half a year and releasing as soon as it's done or almost as soon as it's done. Llama 2 was from January to July and released in July, Code Llama was from January to July and released in August, and Llama 1 was presumably from July 2022 to January 2023 and released in February. Llama 3, with more compute dedicated to it, could plausibly have started training in July and release this year. Llama 4, if it happens, would be a hypothetical model for next year.
Take this rumor with a grain of salt, but I'm sure Meta knows this is their prime chance to entrench Llama in the LLM ecosystem. GPT-4 at home would undoubtedly be enough for most people until the goalposts shift again. | 2023-09-05T21:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/16b18tj/inside_metas_ai_drama_internal_feuds_over_compute/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b18tj | false | null | t3_16b18tj | /r/LocalLLaMA/comments/16b18tj/inside_metas_ai_drama_internal_feuds_over_compute/ | false | false | self | 197 | {'enabled': False, 'images': [{'id': 'LpaR91r08St8Yr52aV2kbsZKrvhW8kNpMnxey-dg2oY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=108&crop=smart&auto=webp&s=a7e3fa55bf1e8216fb0765df3242f8463718129b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=216&crop=smart&auto=webp&s=aeaa4821f54cdfb11ea42280245c80565beeb281', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=320&crop=smart&auto=webp&s=f6df88edd04a00b2608199ac7f752e5da864419d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=640&crop=smart&auto=webp&s=2d596e1c5b53ee85e48a60ce963746152c00e517', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=960&crop=smart&auto=webp&s=9f9392100ce3e8d1e1aebfa4f91fd947639ac7aa', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=1080&crop=smart&auto=webp&s=aeb34523a57e0ce42bb40a51259cab3e04ef7a28', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?auto=webp&s=79bc9491fc2e6c42fb51e4351c9f1f53cbbcd7b6', 'width': 1200}, 'variants': {}}]} |
The more I write in the chat, the longer the robot thinks | 1 | Do I understand correctly that he is constantly thinking over previous messages in the dialog? Is their length adjusted somewhere? I am using Llama-2 under oobadoga WebUI | 2023-09-05T21:31:34 | https://www.reddit.com/r/LocalLLaMA/comments/16b0whd/the_more_i_write_in_the_chat_the_longer_the_robot/ | Hatred_grows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b0whd | false | null | t3_16b0whd | /r/LocalLLaMA/comments/16b0whd/the_more_i_write_in_the_chat_the_longer_the_robot/ | false | false | self | 1 | null |
Anyone know how to get current version koboldcpp to use GPU on an M2 Mac? | 1 | [removed] | 2023-09-05T21:09:54 | https://www.reddit.com/r/LocalLLaMA/comments/16b0ajj/anyone_know_how_to_get_current_version_koboldcpp/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16b0ajj | false | null | t3_16b0ajj | /r/LocalLLaMA/comments/16b0ajj/anyone_know_how_to_get_current_version_koboldcpp/ | false | false | self | 1 | null |
Completely new to this but does anyone know how to configure this part if I have 10GBs of VRAM (RTX 3080) and 32GBs of RAM? I'm completely clueless and any help would be fantastic thank you | 2 | 2023-09-05T21:02:14 | Affectionate-Grab864 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16b02vs | false | null | t3_16b02vs | /r/LocalLLaMA/comments/16b02vs/completely_new_to_this_but_does_anyone_know_how/ | false | false | 2 | {'enabled': True, 'images': [{'id': '_sX8c0sr6X-rqHutp6jn0eOzN5e3Bneo3LFRd3RhX8U', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=108&crop=smart&auto=webp&s=ea1b168a5e4290ee154a406f60d16742cc8eb8f7', 'width': 108}, {'height': 178, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=216&crop=smart&auto=webp&s=0a4b774de497ae139c9499390d2f40e2a7cfe61a', 'width': 216}, {'height': 265, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=320&crop=smart&auto=webp&s=7436a2c785a8342e7cbbaa334b07389543b92929', 'width': 320}, {'height': 530, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=640&crop=smart&auto=webp&s=be2d98b64447e525c0ce70f7dfbb8cfe4f23b6e2', 'width': 640}], 'source': {'height': 791, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?auto=webp&s=4ff56cd4d364a00174f36f8afb23eb6d7de927d6', 'width': 955}, 'variants': {}}]} | |||
Streamlit launches LLM Hackathon 🧠 | 22 | Streamlit just launched its latest hackathon focused on large language models and AI 🚀
Awesome opportunity to build a Streamlit app using LangChain, LlamaIndex, AssemblyAI, Weaviate, or Clarifai, and win cool prizes (AirPods, Yeti microphone, mechanical keyboard, to name a few) – plus, the first 250 folks to enter get a pair of Streamlit socks 🧦
More info on the hackathon [here](https://streamlit.io/community/llm-hackathon-2023)
[Streamlit LLM Hackathon](https://preview.redd.it/8f0ikwheuhmb1.jpg?width=1200&format=pjpg&auto=webp&s=b4b14c65f5e5380945bd9448daf01a494eaab41d) | 2023-09-05T20:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/16ayk5z/streamlit_launches_llm_hackathon/ | carolinedfrasca | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ayk5z | false | null | t3_16ayk5z | /r/LocalLLaMA/comments/16ayk5z/streamlit_launches_llm_hackathon/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'cOIsV8EFAdNgcu8QXWE034usjIzLh7xTucelMcLjcuQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=108&crop=smart&auto=webp&s=d81e665e134b57fd868afc6922b40ef38653eb31', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=216&crop=smart&auto=webp&s=641db623af5150dc50d8d33d5b5b146e7bfd969a', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=320&crop=smart&auto=webp&s=7efc46706dfed1302fb7edb269ad1fa6af0e5832', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=640&crop=smart&auto=webp&s=a48b5e082ec4a75fa758694a149274e4d8a3a1c4', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=960&crop=smart&auto=webp&s=61b19c0a24117caab3673e84d74f079c197d44df', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=1080&crop=smart&auto=webp&s=c3b575b83a199f9e117a184706d351f551988ffe', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?auto=webp&s=3791c49f260f016e93a7ed5b1ce033b97d29cf9d', 'width': 1200}, 'variants': {}}]} | |
Streamlit launches LLM Hackathon 🧠 | 1 | Streamlit just launched its [latest hackathon](https://hubs.la/Q020TdDj0) focused on large language models and AI 🚀
Awesome opportunity to build a Streamlit app using LangChain, LlamaIndex, AssemblyAI, Weaviate, or Clarifai, and win cool prizes (AirPods, Yeti microphone, mechanical keyboard, to name a few) – plus, the first 250 folks to enter get a pair of Streamlit socks 🧦
More info on the hackathon [here](https://hubs.la/Q020TdDj0)
[Streamlit LLM Hackathon](https://preview.redd.it/pesr0j6uphmb1.jpg?width=1200&format=pjpg&auto=webp&s=2f6c29c193679108fa35d6f1a75ef8a9f2724a0c)
​ | 2023-09-05T19:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/16axxoh/streamlit_launches_llm_hackathon/ | carolinedfrasca | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16axxoh | false | null | t3_16axxoh | /r/LocalLLaMA/comments/16axxoh/streamlit_launches_llm_hackathon/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cOIsV8EFAdNgcu8QXWE034usjIzLh7xTucelMcLjcuQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=108&crop=smart&auto=webp&s=d81e665e134b57fd868afc6922b40ef38653eb31', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=216&crop=smart&auto=webp&s=641db623af5150dc50d8d33d5b5b146e7bfd969a', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=320&crop=smart&auto=webp&s=7efc46706dfed1302fb7edb269ad1fa6af0e5832', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=640&crop=smart&auto=webp&s=a48b5e082ec4a75fa758694a149274e4d8a3a1c4', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=960&crop=smart&auto=webp&s=61b19c0a24117caab3673e84d74f079c197d44df', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=1080&crop=smart&auto=webp&s=c3b575b83a199f9e117a184706d351f551988ffe', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?auto=webp&s=3791c49f260f016e93a7ed5b1ce033b97d29cf9d', 'width': 1200}, 'variants': {}}]} | |
Affordable cloud to deploy a basic llm pipeline as API backend? | 1 | To narrow down the question, I'm particularly thinking of a quantized 13B model for a personal project that doesn't involve too many requests, so it doesn't matter if it's "slow". Put it in other words, it doesn't need to serve many users concurrently, it's not a chatbot use case, it's more of a summarization pipeline.
Thanks in advance. | 2023-09-05T19:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/16ax8cu/affordable_cloud_to_deploy_a_basic_llm_pipeline/ | Responsible_Warning3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ax8cu | false | null | t3_16ax8cu | /r/LocalLLaMA/comments/16ax8cu/affordable_cloud_to_deploy_a_basic_llm_pipeline/ | false | false | self | 1 | null |
A recursive transformer (just a wild idea) | 7 | I've read many times that models that a ton of weights in typical models do not much, and can benefit from more training, but there is a problem with overfitting and catastrofic forgetting.
Also, there is that infamous "stack more layers" paradigm which, as much as I understand, few agree HOW it actually works, but more layers - more parameters and more RAM to run the model.
Sooo... I know this may be completely crazy from ML pov, but can you use, say, 13b model and, like, "loop" it two times during training and inference and reap benefits of it working like ~30b model so far as memory footprint is concerned? Or 3/4/etc?
Yea, you'll get half the tokens per second, but than larger models will be slower anyway, but you'll be able to fit a potentially "smarter" one into a smaller vram, which is precious right now. | 2023-09-05T19:06:42 | https://www.reddit.com/r/LocalLLaMA/comments/16awyld/a_recursive_transformer_just_a_wild_idea/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16awyld | false | null | t3_16awyld | /r/LocalLLaMA/comments/16awyld/a_recursive_transformer_just_a_wild_idea/ | false | false | self | 7 | null |
A couple of questions on using GGML/GGUF to maximise context length | 12 | Hello, firstly apologies if any of this has been asked before, I searched the subreddit and I couldn't find the answers I was after. I'm also a noob of 1 week, so I don't really understand how this technology works beyond a few FAQs I've seen floating around various subreddits.
I am currently running the GPTQ version of MythoMax L2 13b, with an RTX 3080 Ti (12GB) and 32GB RAM. My CPU is 10th generation i5, with 6 cores. The model loads perfectly fine and is usable at a context length of 2048, however when I turned up the context length to 4096 my card crashed when the conversation's context limit was reached. Scared of damaging my card, I turned the context limit down to 3072 and I've not had any problems. My guess is that I ran out of VRAM due to the increased context limit.
My plan is to use a GGML/GGUF model to unload some of the model into my RAM, leaving space for a longer context length.
My first question is, is there a conversion that can be done between context length and required VRAM, so that I know how much of the model to unload? (I.e. - does 4096 context length need 4096MB reserved?).
The same goes for GPU layers. Does half of the model's layers = half of it's RAM size? (I.e. - If a model has 1000 layers, and it uses 12GB RAM, would assigning 500 layers to the GPU use 6GB of VRAM?) Also, where can I find how many layers a particular model has? [This guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md) implies that you kind of have to guess the layers incrementally, which I'm hoping is not the case.
My second question is, does this even make sense as an idea, or am I misunderstanding something?
Finally, I'm currently generating replies with a limit of 150, which take me around 30 - 50 seconds to generate (this isn't a problem for me). Is the performance hit going to make this significantly longer? Should I even bother with the above? | 2023-09-05T19:01:25 | https://www.reddit.com/r/LocalLLaMA/comments/16awtiy/a_couple_of_questions_on_using_ggmlgguf_to/ | EeyoresM8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16awtiy | false | null | t3_16awtiy | /r/LocalLLaMA/comments/16awtiy/a_couple_of_questions_on_using_ggmlgguf_to/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '8xtjwwtcKerurGMhTFLFFArH0ZxmDESqopVHCKpKfxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=108&crop=smart&auto=webp&s=941c18c68662db8cebcab4705115e8b304d71510', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=216&crop=smart&auto=webp&s=a5af6977535b44cbc9eaedafc82717fe404a88fa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=320&crop=smart&auto=webp&s=076f77b080ab63ff722867f555b6bff5405744e8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=640&crop=smart&auto=webp&s=378faf360c7bde91f190b7dfcd326c415eb7351e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=960&crop=smart&auto=webp&s=e140cef23f13fa463cceaf3e19ec393ad207c7f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=1080&crop=smart&auto=webp&s=7ab42ccd0b1b6aa4dbc69b230e8d73d03084e8f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?auto=webp&s=52d751d67300c7131b4920f1ec89ae388b6ec97e', 'width': 1200}, 'variants': {}}]} |
How would a 4090 laptop fair in the current state of LocalLLaMA? | 15 | Hello!
I've recently been given a chance to get a machine from my company to "explore applications of LLM" in our office, main goal is to basically trying to have a small LLM that can write small and basic programs quickly.
Unfortunately, my boss insisted it be a laptop. So, I'm wondering if the top-of-the-line 4090 laptop GPU would fair me well?
It has 16GB vram and 9728 cuda cores.
What size of models and token generation speed am I looking at for this? Would they be able to fit my needs? | 2023-09-05T18:51:47 | https://www.reddit.com/r/LocalLLaMA/comments/16awkmo/how_would_a_4090_laptop_fair_in_the_current_state/ | rexyuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16awkmo | false | null | t3_16awkmo | /r/LocalLLaMA/comments/16awkmo/how_would_a_4090_laptop_fair_in_the_current_state/ | false | false | self | 15 | null |
LlaMA 13B Setup for custom requirement | 1 | My day to day work can extensively be cut back on if I am able to use LLaMA 13B (34B would be better but 13B model works fine for me, tested on Poe). I am seriously considering, setting up a rig from scratch, what's the cheapest config that can handle the following:-
1. Run LLaMA at 10-15 tokens per second
2. Accept context of up to 40,000 words
3. No limit on output or at least up to 10,000 words
Please specify the exact config, I will check the prices here locally (India). If this is not possible, then whatever max I can get (don't want to compromise on context as will regularly need to churn large documents) | 2023-09-05T18:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/16avold/llama_13b_setup_for_custom_requirement/ | mohityadavx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16avold | false | null | t3_16avold | /r/LocalLLaMA/comments/16avold/llama_13b_setup_for_custom_requirement/ | false | false | self | 1 | null |
Multi part Model | 1 | I cant find any one part of the models, I also can't find any explanation on how to combine multipart models into one. What am I missing? | 2023-09-05T17:56:42 | https://www.reddit.com/r/LocalLLaMA/comments/16av5b8/multi_part_model/ | EagleNoes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16av5b8 | false | null | t3_16av5b8 | /r/LocalLLaMA/comments/16av5b8/multi_part_model/ | false | false | self | 1 | null |
KOR with Llama | 1 | Hey guys, so I have to extract certain named entities, specifically of the medical domain, from passages. Problem is we have very few examples of the names entity, for ex: drug code. So we decided to try KOR with Llama2. Any one have any experience with it? Because with my first try, outputs seem to be dire. Any alternatives suggestion? | 2023-09-05T17:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/16aulek/kor_with_llama/ | Tinjar12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aulek | false | null | t3_16aulek | /r/LocalLLaMA/comments/16aulek/kor_with_llama/ | false | false | self | 1 | null |
GPT4ALL pylama issues can't figure out! | 1 | I've been away from the AI world for the last few months. Was upset to find that my python program no longer works with the new quantized binary models. I had a look in the tutorial threads but can't put my finger on the issue.
This is my code:
def gpt4all(user_input):
global conversation
gptj = GPT4All(model_path="C:\\Users\\BF\\AppData\\Local\\nomicai\\GPT4All",
model_name="WizardLM-7B-uncensored.ggmlv3.q4_1.bin",
model_type="llama"
)
conversation.append({"role": "user", "content": user_input})
gpt4allcompletion = gptj.chat_completion(conversation)
bot_response = gpt4allcompletion["choices"][0]["message"]["content"]
conversation.append({"role": "assistant", "content": bot_response})
send_to_gui(user_input, bot_response)
total_tokens = gpt4allcompletion["usage"]["total_tokens"]
print("Total Tokens Generated:", total_tokens)
but I keep getting this error:
Found model file.
llama.cpp: loading model from C:\\\\Users\\\\BF\\\\AppData\\\\Local\\\\nomicai\\\\GPT4All\\WizardLM-7B-uncensored.ggmlv3.q4_1.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file?
llama_init_from_file: failed to load model
LLAMA ERROR: failed to load model from C:\\\\Users\\\\BF\\\\AppData\\\\Local\\\\nomicai\\\\GPT4All\\WizardLM-7B-uncensored.ggmlv3.q4_1.bin
LLAMA ERROR: prompt won't work with an unloaded model!
I saw some references to reinstalling old versions of pylama-cpp but none of this worked. I'm on Windows 10. | 2023-09-05T17:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/16auclq/gpt4all_pylama_issues_cant_figure_out/ | darkbluetwilight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16auclq | false | null | t3_16auclq | /r/LocalLLaMA/comments/16auclq/gpt4all_pylama_issues_cant_figure_out/ | false | false | self | 1 | null |
I'm convinced now that “personal LLMs” are going to be a huge thing | 319 | Hey all,
[I posted here](https://www.reddit.com/r/LocalLLaMA/comments/16954fj/what_do_you_use_your_local_llm_for/) over the weekend asking for feedback on how you all use your local deployments. The response rate and engagement was great, and it gave me a lot to chew on.
The way I see it, the users here are all early adopters of a trend that is only going to keep growing. In the past privacy concerns have always been in the periphery with the internet. However, reflecting on this technology more, I am starting to be of the opinion that LLMs will mark a shift in attitude. This technology is going to become too personally integrated with our lives for each of us to feel comfortable letting someone else have complete control over it.
The most extreme example I can muster up is a world where personal LLMs are basically extensions or “grafts” onto our brains. Each individual will want to own and control that LLM for a whole list of reasons I can imagine. There will still be large appetite and use for the giga-brain 100T models, but personal LLMs will also be a huge thing, especially as they continue to mature towards current bleeding edge performance.
I'm going to start working on this. I have a modest amount of compute to deploy at this moment (8x A100s) and a bit of focus / determination. I am going to start by setting to work on replicating the work of [this recent paper](https://arxiv.org/abs/2306.11644). Hopefully I can count on some of you to be beta testers when I have a smol model ready.
Ultimately, I want to work on building the best local LLMs in the world, form-fit to consumer hardware. Basically, focus on making smaller models better rather than bigger models bigger.
​
Happy to hear any of your additional thoughts.
​ | 2023-09-05T17:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/16au3ga/im_convinced_now_that_personal_llms_are_going_to/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16au3ga | false | null | t3_16au3ga | /r/LocalLLaMA/comments/16au3ga/im_convinced_now_that_personal_llms_are_going_to/ | false | false | self | 319 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
GPTMe: A fancy CLI to interact with LLMs (GPT or Llama) in a Chat-style interface, with capabilities to execute code & commands on the local machine | 35 | 2023-09-05T16:57:52 | https://github.com/ErikBjare/gptme | ErikBjare | github.com | 1970-01-01T00:00:00 | 0 | {} | 16atlia | false | null | t3_16atlia | /r/LocalLLaMA/comments/16atlia/gptme_a_fancy_cli_to_interact_with_llms_gpt_or/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'pl9qRx-8qE8tbw-ndu0ENMqJ9cc_GhFLbUmG3_LAtgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=108&crop=smart&auto=webp&s=cdeb5d306a3b83803dd77147481b6047a1b66857', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=216&crop=smart&auto=webp&s=66740c277a544ebcc05e21d1d3c98e1ff77ea076', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=320&crop=smart&auto=webp&s=101815f3bf66e72ac231637b8d9612d26f2fb379', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=640&crop=smart&auto=webp&s=4baae5c4c8e634c9c73557fd3b6e089a20d7f4ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=960&crop=smart&auto=webp&s=0a1c69134b3b1770a2ea1ab035e1ca56809397fa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=1080&crop=smart&auto=webp&s=13afe67250ee41965a8a457bca0ac8295dae4b8e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?auto=webp&s=14f1fd429ec88f98811351acc51852c2cd6b2e3d', 'width': 1200}, 'variants': {}}]} | ||
In general, is training done on base models? | 3 | Take TheBloke/Carl-Llama-2-13B-GGUF for example.
The original model card says
> **Training:** Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 50 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta. GGML Quant models are converted by Kijana Mitchell. Extremely thankful to him.
Does that mean it was trained on base Llama? | 2023-09-05T16:50:02 | https://www.reddit.com/r/LocalLLaMA/comments/16atee3/in_general_is_training_done_on_base_models/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16atee3 | false | null | t3_16atee3 | /r/LocalLLaMA/comments/16atee3/in_general_is_training_done_on_base_models/ | false | false | self | 3 | null |
I need some help or guidance on how to set up a locall LLM | 1 | Hello:
I would like to run this llama-2 70b on my computer like it: [https://huggingface.co/chat](https://huggingface.co/chat).
But I do not know how to achieve it, and also that it can search both in Internet and my own files.
I have tried GPT4ALL and Faraday, but both miss my requirements for searching Internet and my own files.
Could you guide me to do this?. Perhaps there is some Langchain code already done or some other GPT4ALL alike that do all I wish.
Thank you. | 2023-09-05T16:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/16at2gw/i_need_some_help_or_guidance_on_how_to_set_up_a/ | SnooWoofers780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16at2gw | false | null | t3_16at2gw | /r/LocalLLaMA/comments/16at2gw/i_need_some_help_or_guidance_on_how_to_set_up_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]} |
Reasonable Graphics card for LLM AND Gaming | 7 | I'm currently in the market of building my first PC in over a decade. (Mac User)
Primary reason is gaming, however, if possible I would love to be able to also run some larger LLM's locally. Can anyone recommend a consumer card that would be good for both?
I came across these two cards on Amazon - would it make sense to spend the extra $ on the more expensive version?
\- **$2,965**: ZOTAC Gaming GeForce RTX™ 3090 Trinity OC 24GB GDDR6X 384-bit 19.5 Gbps PCIE 4.0 ([https://www.amazon.ca/Graphics-IceStorm-Advanced-Lighting-ZT-A30900J-10P/dp/B08ZL6XD9H/ref=sr\_1\_2?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-2&ufe=app\_do%3Aamzn1.fos.fe67de69-a579-4370-9bc8-5e38fc5a3bcc](https://www.amazon.ca/Graphics-IceStorm-Advanced-Lighting-ZT-A30900J-10P/dp/B08ZL6XD9H/ref=sr_1_2?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-2&ufe=app_do%3Aamzn1.fos.fe67de69-a579-4370-9bc8-5e38fc5a3bcc))
\- **$1,799:** EVGA GeForce RTX 3090 FTW3 Ultra Gaming, 24GB GDDR6X, iCX3 ([https://www.amazon.ca/EVGA-GeForce-Technology-Backplate-24G-P5-3987-KR/dp/B0916ZWZ9S/ref=sr\_1\_1?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-1&ufe=app\_do%3Aamzn1.fos.a4f9e18d-83e4-4dfa-94d2-717d5a20bf26](https://www.amazon.ca/EVGA-GeForce-Technology-Backplate-24G-P5-3987-KR/dp/B0916ZWZ9S/ref=sr_1_1?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-1&ufe=app_do%3Aamzn1.fos.a4f9e18d-83e4-4dfa-94d2-717d5a20bf26))
​
Open to any suggestions :) | 2023-09-05T16:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/16as386/reasonable_graphics_card_for_llm_and_gaming/ | MattBurnes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16as386 | false | null | t3_16as386 | /r/LocalLLaMA/comments/16as386/reasonable_graphics_card_for_llm_and_gaming/ | false | false | self | 7 | null |
The concern about LLM knowledge base and the possible solution | 1 | 2023-09-05T15:58:12 | https://medium.com/@SymeCloud/the-concern-about-llm-knowledge-base-and-the-possible-solution-7c212d2e8f99 | nalaginrut | medium.com | 1970-01-01T00:00:00 | 0 | {} | 16as1hn | false | null | t3_16as1hn | /r/LocalLLaMA/comments/16as1hn/the_concern_about_llm_knowledge_base_and_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'PFmYKNDQ_Zy5lFGaemVZ37fWFCR4-AtF6QzH77dbqbc', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?width=108&crop=smart&auto=webp&s=966c9b0c3e9f903364cd249b2f358b08cf127498', 'width': 108}, {'height': 193, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?width=216&crop=smart&auto=webp&s=648c82d26b62065273deafad43cce9704730d45d', 'width': 216}, {'height': 286, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?width=320&crop=smart&auto=webp&s=74ba53075b9959be4f08f6022f76bd17907d46b5', 'width': 320}], 'source': {'height': 454, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?auto=webp&s=81bbb226f0395d8d9f3b950e51feb1694c0e71cf', 'width': 507}, 'variants': {}}]} | ||
Errors loading Transformers, Llama models | 1 | Hi Everyone!
I'm trying to run these via oobabooga/texgen webui via WSL with a 3090 and 128GB RAM. I have double and triple-checked that all requirements have been updated, and that exllama has been installed, and continue to get these errors upon loading them.
Trying to load TheBloke\_Wizard-Vicuna-30B-Uncensored-GPTQ
(model loader changes to Transformers, but I get the same result when using GPTQ. Other GPTQ models loads just fine, go figure)
Output:
^(Traceback (most recent call last):)
^(File “/home/\~/ai/text-generation-webui/modules/ui\_model\_menu.py”, line 185, in load\_model\_wrapper)
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
^(File “/home/\~/ai/text-generation-webui/modules/models.py”, line 79, in load\_model)
output = load_func_map[loader](model_name)
^(File “/home/\~/ai/text-generation-webui/modules/models.py”, line 224, in huggingface\_loader)
model = LoaderClass.from_pretrained(checkpoint, **params)
^(File “/home/\~/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/auto\_factory.py”, line 493, in from\_pretrained)
return model_class.from_pretrained(
^(File “/home/\~/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling\_utils.py”, line 2629, in from\_pretrained)
state_dict = load_state_dict(resolved_archive_file)
^(File “/home/\~/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling\_utils.py”, line 447, in load\_state\_dict)
with safe_open(checkpoint_file, framework="pt") as f:
^(safetensors\_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer)
Plenty of thanks in advance for any advice. I really just wish I could use these. | 2023-09-05T15:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/16ar9sq/errors_loading_transformers_llama_models/ | drycounty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ar9sq | false | null | t3_16ar9sq | /r/LocalLLaMA/comments/16ar9sq/errors_loading_transformers_llama_models/ | false | false | self | 1 | null |
Guide for begineers | 2 | I'm wondering if there's any step-by-step guide for beginners which desire to start learning about LLMs, maybe setting up and training LLaMA.
Any suggestions about where and how to start, the hardware needed or suggested etc...
Many thanks | 2023-09-05T15:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/16ar7xs/guide_for_begineers/ | TheHypersonic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ar7xs | false | null | t3_16ar7xs | /r/LocalLLaMA/comments/16ar7xs/guide_for_begineers/ | false | false | self | 2 | null |
Advice needed: Macbook pro M1 max 64G for local models, Should I get it? | 13 | Hey guys,
I need your experience/thoughts about this, I am currently running local models 7B on my Mac intel 16GB, works fine with decent speed, I can also run 13B but fairly slow. I have an option to replace that now with a M1 max 64GB with 32cores, my aim is to be able to run larger models or at least the 13B with enough speed on the go. It has to be a laptop cause I move a lot and I need to be able to take this with me on the go as much as possible. What are your takes on this? Do I go for the replacement? | 2023-09-05T15:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/16aqs51/advice_needed_macbook_pro_m1_max_64g_for_local/ | simplir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aqs51 | false | null | t3_16aqs51 | /r/LocalLLaMA/comments/16aqs51/advice_needed_macbook_pro_m1_max_64g_for_local/ | false | false | self | 13 | null |
LLM Obsession | 107 | I am obsessed with LLM. Every day, I find myself repeatedly checking Hugging Face to see if there is a new model that is compatible with my laptop GPU of RAM for GGUF, and I try it out. I have accumulated 3 terabytes of models, and I cannot bring myself to delete them. Is this a cause for concern, doctor? | 2023-09-05T13:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/16ao5pe/llm_obsession/ | GlobeTrekkerTV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ao5pe | false | null | t3_16ao5pe | /r/LocalLLaMA/comments/16ao5pe/llm_obsession/ | false | false | self | 107 | null |
can we run gpqt models with cuda in windows ? | 1 | [removed] | 2023-09-05T11:17:05 | https://www.reddit.com/r/LocalLLaMA/comments/16alf2m/can_we_run_gpqt_models_with_cuda_in_windows/ | count_dracula14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16alf2m | false | null | t3_16alf2m | /r/LocalLLaMA/comments/16alf2m/can_we_run_gpqt_models_with_cuda_in_windows/ | false | false | self | 1 | null |
Model stop responding after few reply | 1 | This 13b 4bit mythomax, vicuna works good, but what if it stop replying after a few messages. I tried many settings but it doesn't help much. I don't know if the problem is in the pc and memory or what it is. I set temperature, top p, top k etc in different configurations. Anyone had the same? Maybe the ram is clogged up and stops replying? I have rtx 4070 and 16gb ram. | 2023-09-05T10:24:30 | https://www.reddit.com/r/LocalLLaMA/comments/16akg99/model_stop_responding_after_few_reply/ | LonleyPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16akg99 | false | null | t3_16akg99 | /r/LocalLLaMA/comments/16akg99/model_stop_responding_after_few_reply/ | false | false | self | 1 | null |
I cancelled my Chatgpt monthly membership because I'm tired of the constant censorship and the quality getting worse and worse. Does anyone know an alternative that I can go to? | 254 | Like chatgpt I'm willing to pay about $20 a month but I want an text generation AI that:
Remembers more than 8000 tokens
Doesn't have as much censorship
Can help write stories that I like to make
Those are the only three things I'm asking but Chatgpt refused to even hit those three. It's super ridiculous. I've tried to put myself on the waitlist for the API but it doesn't obviously go anywhere after several months.
This month was the last straw with how bad the updates are so I've just quit using it. But where else can I go?
Like you guys know any models that have like 30k of tokens? | 2023-09-05T10:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/16ak9s2/i_cancelled_my_chatgpt_monthly_membership_because/ | SerpentEmperor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ak9s2 | false | null | t3_16ak9s2 | /r/LocalLLaMA/comments/16ak9s2/i_cancelled_my_chatgpt_monthly_membership_because/ | false | false | self | 254 | null |
Biggest possible LLama2 model that can run on Ryzen 3950X with 128GB RAM + 24GB VRAM | 0 | Given that I have a system with 128GB of RAM, a 16-core Ryzen 3950X, and an RTX 4090 with 24GB of VRAM, what's the largest language model in terms of billions of parameters that I can feasibly run on my machine? I previously tried running a 70B parameter model on CPU and it crashed.
Are there any optimizations or tricks to help run larger models successfully? | 2023-09-05T10:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/16ak9nh/biggest_possible_llama2_model_that_can_run_on/ | No_Wing133 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ak9nh | false | null | t3_16ak9nh | /r/LocalLLaMA/comments/16ak9nh/biggest_possible_llama2_model_that_can_run_on/ | false | false | default | 0 | null |
Need Guidance on Exposing Local API for My LLM Model | 6 | Hi everyone, I'm working with Vicuna, a local language model (LLM),
and I'm looking to expose it through an API from my local machine.
This would allow me to interact with the LLM from various environments, including chatbots, agents, or platforms like FlowiseAI.
Can anyone guide me or recommend a tutorial on how to achieve this? Your assistance would be greatly appreciated! | 2023-09-05T10:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/16ak8ip/need_guidance_on_exposing_local_api_for_my_llm/ | Dave86ch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ak8ip | false | null | t3_16ak8ip | /r/LocalLLaMA/comments/16ak8ip/need_guidance_on_exposing_local_api_for_my_llm/ | false | false | self | 6 | null |
Feed LLM with local knowledge | 14 | Say I want to build a LLM for films and movie production, I have several data to feed in:
\- description of the movie production process
\- some facts about "what is a good film" extracted during interviews
\- some movie titles along with metadata (film director, rating, year release, ...)
In short, I have a lot of documents I want the LLM to get but some of them are structured (like the latest) and some are raw.
Which way to go? I've read about retrieval augmented llm, embedding, and am kind of lost. | 2023-09-05T09:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/16ajtl8/feed_llm_with_local_knowledge/ | front-equal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ajtl8 | false | null | t3_16ajtl8 | /r/LocalLLaMA/comments/16ajtl8/feed_llm_with_local_knowledge/ | false | false | self | 14 | null |
Has anyone finetuned LLAMA models using TPUs? | 16 | I have received access to resources on GCP for my research and I am curious if I can use them to fine-tune LLAMA models using Peft. | 2023-09-05T08:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/16aiw18/has_anyone_finetuned_llama_models_using_tpus/ | Marc_Marc_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aiw18 | false | null | t3_16aiw18 | /r/LocalLLaMA/comments/16aiw18/has_anyone_finetuned_llama_models_using_tpus/ | false | false | self | 16 | null |
GitHub Copilot vs CodeLlama & Co for Code productivity | 9 | Hi everyone,
I am currently benchmarking the different LLMs for code productivity for my company and trying to find the best one in terms of cost / performance / latency / privacy.
The current challengers I see are in three brackets:
\- GitHub Copilot
\- Local models like CodeLlama & Co
\- Other vendors for LLMs specialized in code
What do you think provides the best tradeoff between cost / perf / latency / privacy in your opinion?
[View Poll](https://www.reddit.com/poll/16aivry) | 2023-09-05T08:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/16aivry/github_copilot_vs_codellama_co_for_code/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aivry | false | null | t3_16aivry | /r/LocalLLaMA/comments/16aivry/github_copilot_vs_codellama_co_for_code/ | false | false | self | 9 | null |
Estimated cost of a 13b model on a server with H24 workflow/Concurrent users | 0 | Hey,
I was wondering what would be the cheapest way to get MythoMax 13b running on a server with enough power so that sometimes API queries can be made at the same time. The model would be running H24. What would be the way to go ? Is there a solution that can evolve with my needs if after some time I need more power? I you have an estimated cost i'd be happy to hear, I didn't find a lot of informations out there.
​
Thanks and have a nice day,
​
Rémi | 2023-09-05T08:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/16aisrp/estimated_cost_of_a_13b_model_on_a_server_with/ | Traditional_Hurry622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aisrp | false | null | t3_16aisrp | /r/LocalLLaMA/comments/16aisrp/estimated_cost_of_a_13b_model_on_a_server_with/ | false | false | self | 0 | null |
Lora vs Embeddings (Vector DB?) Knowledge Training | 26 | Hi all,
So recently my company wanted to venture into LLMs, the use-case is a standard one, where we will inject company-specific knowledgebase and use it internally, it might possibly serve other headquarters across the globe as well. The plan is to try Llama 2, then Code-Llama.
I tried created a very simple csv (only 8 rows of samples) with "question and "answer" column, then converted to a single "text" column (and a few samples of how,what,why), proceeds with Lora 4-bit and SFTTrainer. The results were good, but there were times where it hallucinates if the prompt wasn't direct or when tested on a smaller model.
I just stumbled across keywords like embeddings DB and seems like both are related to the topic of "Domain Training". Youtube tutorials talked about "Domain Training" but most of them are some general fine-tunings. We had a colleague who did some research on vector DB, but she left and we didn't really venture into vector db. Our previous works were object detection & OCR based.
So our boss wants a LLM which has knowledge for different projects/ team usages (countless Excels & PPTs that needs to process into csv). Should I try looking into vector DB and build one and link it with Llama, or just train Loras for different "knowledge"? Seen someone mention about LangChain as well. There is also the "fine-tuning is not for knowledge" saying, which made me confused.
FYI, I'm currently using Llama-2-chat 70B (4x A40 GPU). Kinda new into this domain, and never touched stable diffusion in the past.
TL:DR: Can someone give me a direction regarding my use-case? Much appreciated. | 2023-09-05T08:46:46 | https://www.reddit.com/r/LocalLLaMA/comments/16aiqxo/lora_vs_embeddings_vector_db_knowledge_training/ | cringelord000222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aiqxo | false | null | t3_16aiqxo | /r/LocalLLaMA/comments/16aiqxo/lora_vs_embeddings_vector_db_knowledge_training/ | false | false | self | 26 | null |
Running Llama 2 on a SLURM script | 1 | I am currently having issues with even getting a Llama2-7b to run on a HPC even though I have access to enough GPUs. I don’t believe there is an issue with my code but there easily could be. I have followed a typical huggingface guide to get it running on transformers. However it usually times out after a certain point. I know it shouldn’t be taking this long to inference. I haven’t had issues before running a quantized version in Colab. Is there anything I should check for? | 2023-09-05T08:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/16ailwv/running_llama_2_on_a_slurm_script/ | Fooples | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ailwv | false | null | t3_16ailwv | /r/LocalLLaMA/comments/16ailwv/running_llama_2_on_a_slurm_script/ | false | false | self | 1 | null |
Advice Getting Started w/ an AI Model - First: Which One? | 0 | I'm looking for some advice. I'm going to build an AI and train it, but I would love it to be pre-trained as a functional LLM first. First: What model should I use? What most up-to-date? What do I find that?
Here's my hardware: I have an old 1st Generation Intel i7 desktop computer that I want to convert into a dedicated AI machine. It has an old Nvidia GPU with just under a gig of RAM. I was thinking of throwing Linux Mint on there, but maybe there's another version that works better.
There's so much info out there and it's so confusing I don't know where to start. I was imagining a Llama model might be good, but who has released the best ones so far? (Is Berkeley's Koala open-source to the public and any good? Are there others better these days?)
What's versatile to creating a very specifically talented AI, and able to be further trained?
Thank you for your help! I'm a bit overwhelmed with technical jargon and models... I'd love to just get started. | 2023-09-05T08:23:38 | https://www.reddit.com/r/LocalLLaMA/comments/16aid7o/advice_getting_started_w_an_ai_model_first_which/ | FlowingFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aid7o | false | null | t3_16aid7o | /r/LocalLLaMA/comments/16aid7o/advice_getting_started_w_an_ai_model_first_which/ | false | false | self | 0 | null |
RTX 3080 Ti vs 4090 for QLoRa | 3 | I am a laptop user and want to get a new company laptop with the most amount of VRAM for model fine-tuning.
I can choose between the RTX 3080 Ti and the RTX 4099. Both have 16GB VRAM, (so enough for QLoRa on 13B?)
But what are the differences in terms of performance? What speedup can I expect from a 4090? | 2023-09-05T07:01:24 | https://www.reddit.com/r/LocalLLaMA/comments/16agzb9/rtx_3080_ti_vs_4090_for_qlora/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16agzb9 | false | null | t3_16agzb9 | /r/LocalLLaMA/comments/16agzb9/rtx_3080_ti_vs_4090_for_qlora/ | false | false | self | 3 | null |
Deploying a fine-tuned LLaMA 2 model | 15 | I have been following this tutorial: https://www.philschmid.de/sagemaker-llama2-qlora that details how to fine-tune a LLaMA model using AWS SageMaker. I have attempted to deploy it locally (retrieving the model from the S3 bucket onto my local PC) but loading the model gave a bunch of common bitsandbytes errors related to CUDA and seemed like more of a hassle than it was worth - plus it's for a university project and eventually I need it deployed so we can inference through an API. My questions are:
- Am I able to deploy this model using an AWS SageMaker instance? If yes, what is the minimum requirements to run this model for inferencing?
- A follow on from the above, I don't need to be able to inference 24/7 as it's a university project, would spot inferencing be a good choice to keep costs down?
To give some context on the project, the model is to simplify research papers syntactically and/or lexically. Any help in understanding is much appreciated! | 2023-09-05T06:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/16ag6ux/deploying_a_finetuned_llama_2_model/ | arkeniwnl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ag6ux | false | null | t3_16ag6ux | /r/LocalLLaMA/comments/16ag6ux/deploying_a_finetuned_llama_2_model/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'a1hmOnL77g0KEwQYz1ospL2mdtWnUUs4gvIEoj1uaZQ', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=108&crop=smart&auto=webp&s=51e5811563287b81728e9f1934f629170807ff4e', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=216&crop=smart&auto=webp&s=ba8036d2cb784339eb7dc74dafa9bd86b11eb825', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=320&crop=smart&auto=webp&s=67febf1b1913b4183f59a8d225f41a5c9d711b22', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=640&crop=smart&auto=webp&s=595ab44cc93358ada686f27b200786051d1a8e88', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=960&crop=smart&auto=webp&s=003dd992d245ac23320b2acb44824b5db7724e9b', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=1080&crop=smart&auto=webp&s=5f4dec01663537808962f2e3884c81dec0389a58', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?auto=webp&s=c97d35973fec2476f6250ac9942fde23937ecc15', 'width': 2400}, 'variants': {}}]} |
Efficient tokenization of timestamps? | 1 | Good golly, timestamps tokenize poorly. I'm experimenting with adding timestamps as prefixes to chat messages, and finding that they take one character per token in some cases (such as below for the string `[9/4/23 11:51PM]`), based on the strftime `%-m/%-d/%y %-I:%M%p`)
When I toggle timestamps in my custom app, I go from 72 to 86 messages in context (that's individual messages, not message/response pairs). I guess that's not intolerably bad, but it's more than 10% of context being used for timestamps alone.
Does anyone have suggestions for how to cut this down to a smaller number of tokens?
I'm a bit surprised that numbers tokenize so damn poorly; at a minimum I'd expect the numbers 1.100 to have their own tokens, and possibly prepended with a space. If anyone can shed some machine-learning-light on why such numbers don't have their own tokens in sentencepiece, i'd love to hear it!
Also, has anyone else tried adding timestamps, and either stuck with it, or given up, or landed on a better solution? (At one point, I was injecting a timestamp periodically, prefixed with "Narrator:", which seemed to work somewhat, but somtimes the LLM would start parroting the narrator messages.)
https://preview.redd.it/fjc3atwabdmb1.png?width=242&format=png&auto=webp&s=e6e109e55959c9e549966c362a56a7f5d88f3cad | 2023-09-05T05:02:27 | https://www.reddit.com/r/LocalLLaMA/comments/16aeu3b/efficient_tokenization_of_timestamps/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16aeu3b | false | null | t3_16aeu3b | /r/LocalLLaMA/comments/16aeu3b/efficient_tokenization_of_timestamps/ | false | false | 1 | null | |
How can I use multiple computers to locally run LLM's with a API endpoint? | 9 | My plan is to have a few computers on the network to work together to run a (or multiple) models locally with API to use with a chat interface. Any idea as to how I can do this? | 2023-09-05T04:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/16adse5/how_can_i_use_multiple_computers_to_locally_run/ | heyitsradin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16adse5 | false | null | t3_16adse5 | /r/LocalLLaMA/comments/16adse5/how_can_i_use_multiple_computers_to_locally_run/ | false | false | self | 9 | null |
How to fine-tuning LLM for a chat bot to answer about a website's functionalities? | 3 | Hello, so I’m fine-tuning a LLM to use as a chat bot for a website. My goal is to allow the bot to be able to answer questions about this website’s functionalities, e.g. how to change password, what are the payment methods available, how to download certain games etc…
Currently, my dataset consists of a list of users’ intents (around 50 intents) along with corresponding responses for each intent. To get started, I wrote about 3-4 questions for each intent and obtain around 150-200 question answer pairs. Then, these pairs are used to fine-tune a LLM (I'm using a 13B model) in the following format:
`{system prompt}`
`Question: {question}`
`Answer: {answer}`
The result I obtained is not very good, even though the style of the response is somewhat similar to what I want, the model often misunderstand user's intent and generate responses that are unrelated to the question. Initially, I assume this is due to the lack of data. However, when researching about fine-tuning LLM, I saw this OpenAI post [https://community.openai.com/t/finetuning-for-domain-knowledge-and-questions/24817](https://community.openai.com/t/finetuning-for-domain-knowledge-and-questions/24817) that talked about fine-tuning cannot teach model new knowledge, and since the website in my use case requires some specific knowledge about its functionalities, LLM cannot learn them effectively thus failed to answer the question.
I'm still a bit skeptical about the above post however, so my questions are:
\- If I increase the number of question-answer pairs (for example, by parsing the chat logs of the website), should I expect the LLM to perform better or it will still fail due to the lack of domain knowledge
\- I know that currently the best way to incorporate knowledge to LLM is to use retrieval. However, currently I do not have a corpus to search from, so should I spend some time to write the manual for the website (if fine-tuning do not work) or is there a better way to create documentation for the website utilizing chat logs?
\- A final question which is slightly unrelated to my current use case: According to my research, there are some approaches where the model is continued to pretrained on a large amount of unstructured domain data to acquire knowledge before fine-tuning on instruction data (e.g. [https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt](https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt)) and obtain good results. Does this approach only work when you have a large corpus of documents (500k in the link)? If we only have few documents e.g. 100-200 documents (each document has around 5-10 pages), is this method a good way to try or retrieval is still better?
​ | 2023-09-05T03:49:53 | https://www.reddit.com/r/LocalLLaMA/comments/16adfar/how_to_finetuning_llm_for_a_chat_bot_to_answer/ | scienceotaku68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16adfar | false | null | t3_16adfar | /r/LocalLLaMA/comments/16adfar/how_to_finetuning_llm_for_a_chat_bot_to_answer/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'BxNvUeMFd6obe78ihkAJrzGJFpOkQpmEJ7BbVi_larY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=108&crop=smart&auto=webp&s=93162f369263d5c2a9e1768260a77dbe05103f3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=216&crop=smart&auto=webp&s=dbc4176da2b6e2f8cc29002232fb799303e8c4d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=320&crop=smart&auto=webp&s=0e5a8489fbfa03e8d18ba305ccd4945626d306f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=640&crop=smart&auto=webp&s=34d3e6a5c65886be595f06be7e712b5946391d72', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=960&crop=smart&auto=webp&s=073702ab34f5d22d066990279fcb8219ec0c160c', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?auto=webp&s=8a98b7aa0d64757a3ef9f7a2c7bf4c8f10b02230', 'width': 1024}, 'variants': {}}]} |
Looking for Llama model to run on Google Collab (either free or pro) | 1 | Hello, as the name says, I'm looking for a version of Llama that can run on Collab, and I'm willing to pay for higher GPU. I'm trying a quantized version now here: [https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/blob/main/Run%20Llama2%20Google%20Colab/Llama\_2.ipynb](https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/blob/main/Run%20Llama2%20Google%20Colab/Llama_2.ipynb)
and it seems to be using too much memory and is timing out my runtime. It seems to be working on the A100 GPU but that uses a ton of compute units, so I'd prefer to be able to use a smaller GPU. Any help is appreciated. Thanks!
​ | 2023-09-05T02:19:54 | https://www.reddit.com/r/LocalLLaMA/comments/16abkaq/looking_for_llama_model_to_run_on_google_collab/ | M1x1ma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16abkaq | false | null | t3_16abkaq | /r/LocalLLaMA/comments/16abkaq/looking_for_llama_model_to_run_on_google_collab/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eKjsCg89liVZs8XOdObb8bG5Phv6NC5UPXepy7tAe9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?width=108&crop=smart&auto=webp&s=00c1aa9eb551167bc15de42add005d453be9eef3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?width=216&crop=smart&auto=webp&s=408038144a6329f579b671791d6c5d0354d13084', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?width=320&crop=smart&auto=webp&s=4265fd64389a38ab28cb42db7e30dbef35c14b3e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?width=640&crop=smart&auto=webp&s=4639247c5ac448668a4403ab7813da36ee2eaaa0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?width=960&crop=smart&auto=webp&s=458396c51870b7d67d361a37e90c1398530ebe69', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?width=1080&crop=smart&auto=webp&s=bb081e8780065817aa8b968dc0aaeee86b100088', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aW7Vqh0303tpwdK67i-L4f6Y3WiLzyvCpoM9yIvLsOQ.jpg?auto=webp&s=3465c1c073bd55335b367e6b35b97d98995e4036', 'width': 1200}, 'variants': {}}]} |
Automated Routing of Tasks to Optimal Models: A PR for Semantic-Kernel | 8 | The need for efficient model routing has been a [point of discussion](https://twitter.com/Altimor/status/1696221744024142210) in the community. Addressing this, I've submitted a [pull request](https://github.com/microsoft/semantic-kernel/pull/2323) to Semantic-Kernel that introduces an automated multi-model connector.
#### Key Features:
- **Automated Task Collection**: Gathers samples and tests them across multiple models.
- **Model Vetting**: Uses a primary connector (e.g., ChatGPT) to evaluate the performance of secondary models.
- **Dynamic Routing Table**: Updates the routing table based on performance and cost metrics.
This PR is part of a [larger initiative](https://github.com/microsoft/semantic-kernel/pull/2229) aimed at integrating OpenAI and Local llamas toolkits via a bridge between `semantic-kernel` and `oobabooga`.
The multi-connector currently exists as:
* [a Unit test series](https://github.com/microsoft/semantic-kernel/blob/70c0eafc9410c9bd5a444d815db868e5732dbcc8/dotnet/src/Connectors/Connectors.UnitTests/MultiConnector/TextCompletion/MultiConnectorTextCompletionTests.cs) that was a proof of concept where a primary connector capable of all 4 arithmetic operations would vet secondary connectors capable of only one each but faster and cheaper.
* [An integration test series](https://github.com/microsoft/semantic-kernel/blob/70c0eafc9410c9bd5a444d815db868e5732dbcc8/dotnet/src/IntegrationTests/Connectors/MultiConnector/MultiConnectorTests.cs) with settings made of plans to tests with test data and validation data of various complexities, and a list of real primary and secondary connectors comprising ChatGPT and local Llamas and a [Readme.md](https://github.com/microsoft/semantic-kernel/blob/70c0eafc9410c9bd5a444d815db868e5732dbcc8/dotnet/src/IntegrationTests/Connectors/MultiConnector/README.md) on how to set up the test environment.
Note that the plan + data format is highly customizable in variety and difficulty, and the integration tests also include a version where the plan is automatically generated by the primary connector in the role of which ChatGPT 4 has exhibited excellent capabilities for designing plans given a set of skills to be tested.
The question remains whether this should be integrated into Semantic-Kernel's core functionality. Regardless, I'm going to extend the code for optimal distributed mixture of experts, leveraging [Infer.Net](https://mbmlbook.com/LearningSkills.html) and Spark.net.
Your thoughts and contributions would be highly valuable as this directly impacts various technology stacks accommodated by Semantic-Kernel. | 2023-09-04T23:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/16a80j4/automated_routing_of_tasks_to_optimal_models_a_pr/ | Jessynoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16a80j4 | false | null | t3_16a80j4 | /r/LocalLLaMA/comments/16a80j4/automated_routing_of_tasks_to_optimal_models_a_pr/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'JEsRby0ymh-Lsg-o1hInETv2KYW9o55WVpsSQSVxEFk', 'resolutions': [{'height': 137, 'url': 'https://external-preview.redd.it/rLV5GfkPivMe22A0Kw7Y8OW3AAW6hjWXV9XXnnuAQ9A.jpg?width=108&crop=smart&auto=webp&s=29604d57b4f02de40372a1dbeb5fce4053eeaece', 'width': 108}, {'height': 275, 'url': 'https://external-preview.redd.it/rLV5GfkPivMe22A0Kw7Y8OW3AAW6hjWXV9XXnnuAQ9A.jpg?width=216&crop=smart&auto=webp&s=b08b5d7af46b4a88b79603511c3743a15e84a42a', 'width': 216}, {'height': 408, 'url': 'https://external-preview.redd.it/rLV5GfkPivMe22A0Kw7Y8OW3AAW6hjWXV9XXnnuAQ9A.jpg?width=320&crop=smart&auto=webp&s=cbabd7ea1cee9e444bfd177d239a0d1c3d719bf5', 'width': 320}, {'height': 816, 'url': 'https://external-preview.redd.it/rLV5GfkPivMe22A0Kw7Y8OW3AAW6hjWXV9XXnnuAQ9A.jpg?width=640&crop=smart&auto=webp&s=e7a47ccdb2eb24dc1b90393333f20d5123ce39ea', 'width': 640}], 'source': {'height': 908, 'url': 'https://external-preview.redd.it/rLV5GfkPivMe22A0Kw7Y8OW3AAW6hjWXV9XXnnuAQ9A.jpg?auto=webp&s=15c1180df1a6904827c22b43a1b9192206b2fa84', 'width': 712}, 'variants': {}}]} |
Code Llama is Amazing! | 47 | phind-codellama-34b-v2.Q5\_K\_S.gguf works great, but I've actually only needed codellama-13b-oasst-sft-v10.Q4\_K\_S.gguf
This is what I've been waiting for. I'm not going to say it's as good as chatGPT 3.5, but for most of my purposes it is. I'm using KoboldCPP with a temperature setting of .1 and it passes all of my 20 or so tests, most of them somewhat boiler plate but here is an example of one:
​
Prompt: c# show a win form app that in textBox1 merges duplicate lines while adding up totals of numbers shown on the far right after the last white space, make sure to comment the code:
Result:
Optional Pastebin for formatting: [https://pastebin.com/LTLs1XRD](https://pastebin.com/LTLs1XRD)
`// Split the text in textBox1 into individual linesstring[] lines = textBox1.Text.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);// Create a dictionary to store the line totalsDictionary<string, int> lineTotals = new Dictionary<string, int>();// Loop through each lineforeach (string line in lines){// Trim the line and split it into partsstring[] parts = line.Trim().Split(' ');`
`// Check if the first part of the line is already a key in the lineTotals dictionaryif (!lineTotals.ContainsKey(parts[0])){// If not, add the line to the dictionary with its corresponding totallineTotals.Add(parts[0], int.Parse(parts[parts.Length - 1]));}else{// If yes, update the existing value in the dictionarylineTotals[parts[0]] += int.Parse(parts[parts.Length - 1]);}}// Clear the contents of textBox1textBox1.Text = "";// Loop through each entry in the lineTotals dictionaryforeach (var item in lineTotals){// Format the entry as a string and append it to textBox1textBox1.Text += $"{item.Key} {item.Value}{Environment.NewLine}";}\`
​
Let me know if you need further assistance!
\`\`\`csharp
// Example usage:
textBox1.Text = @"
apple 5
banana 3
orange 2
apple 7
grape 4
banana 6
";
MergeDuplicateLinesWithTotal(textBox1);
// Output:
// apple 12
// banana 9
// orange 2
// grape 4 | 2023-09-04T23:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/16a78bf/code_llama_is_amazing/ | ThePseudoMcCoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16a78bf | false | null | t3_16a78bf | /r/LocalLLaMA/comments/16a78bf/code_llama_is_amazing/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]} |
docker wont work when trying to install Nous Hermes Llama 2 13B | 1 | [removed] | 2023-09-04T22:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/16a61ca/docker_wont_work_when_trying_to_install_nous/ | TheFarisaurusRex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16a61ca | false | null | t3_16a61ca | /r/LocalLLaMA/comments/16a61ca/docker_wont_work_when_trying_to_install_nous/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EgxglN1oOrkKevE7sa6DylFeaX96Qyhel4McIlIaExc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9NKrQ8iiWMltHEGCGEqcM5TbhnxHuY6oq0Yb4gvuvnQ.jpg?width=108&crop=smart&auto=webp&s=fd6eea8730d5aab3bf78c89a32c0af1c3b782d29', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9NKrQ8iiWMltHEGCGEqcM5TbhnxHuY6oq0Yb4gvuvnQ.jpg?width=216&crop=smart&auto=webp&s=ab82bbc6e786f949b78bf27eed271f9cc8fed400', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9NKrQ8iiWMltHEGCGEqcM5TbhnxHuY6oq0Yb4gvuvnQ.jpg?width=320&crop=smart&auto=webp&s=23a4f80205adf903c946b135ac878f1086c0bc04', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9NKrQ8iiWMltHEGCGEqcM5TbhnxHuY6oq0Yb4gvuvnQ.jpg?auto=webp&s=7738712cb96339d0db74f35628de86f0d4d30d34', 'width': 480}, 'variants': {}}]} |
Need Advice: I have 12 , RTX3070s and want to put them to work on an AI project… What are some ideas that I could enhance my work/life/financial situation ? | 2 | First time posting on here, been watching for awhile… very thankful for all of your contributions. It’s helped me understand a lot of very complex thoughts about this new world we live in.
So I have been exploring ideas and training myself in python on basic concepts such as installing packages and basic functions and definitions.
Now I want to push myself and setup a project that I can train a machine to learn from a dataset via my own hardware.
What would you do with all this computing power?
I’ve explored algo trading , personal assistants , and a few others but it’s extremely slow going as I have basic knowledge.
Any help would be welcomed :) | 2023-09-04T20:49:04 | https://www.reddit.com/r/LocalLLaMA/comments/16a3l0l/need_advice_i_have_12_rtx3070s_and_want_to_put/ | Overall-Split8048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16a3l0l | false | null | t3_16a3l0l | /r/LocalLLaMA/comments/16a3l0l/need_advice_i_have_12_rtx3070s_and_want_to_put/ | false | false | self | 2 | null |
Finetuning a llama 2 model using Claude or Claude instant dataset. | 8 | Won't it be nice to finetune a LLaMA 2 model on a custom dataset crafted from Anthropic Claude models, particularly the Claude instant or Claude 2 models? Most finetuned models are based on messages generated using GPT-4, so it would be interesting to see a LLaMA 2 model finetuned on a jailbroken/uncensored Claude dataset. This would be particularly useful for models that are finetuned mainly for roleplaying or creative writing tasks.
Aside from the intense filter, Anthropic Claude models are known for their ability to describe actions in realistic detail and are well-known for their creative writing abilities. Although I haven't tested GPT-4 in detail, from my limited experience, I think the Claude models are still excellent in descriptive writing. It would be interesting to see how a model finetuned on a dataset made from Claude's responses or even a model finetuned on both GPT-4 and Claude conversations dataset would perform.
Here's are examples that shows the difference in writing quality between ChatGpt (gpt-3.5 turbo on Poe) and Claude instant: [https://pastebin.com/569TFewE](https://pastebin.com/569TFewE), [https://pastebin.com/ffHS3fwH](https://pastebin.com/ffHS3fwH)
So, what do you guys think?" | 2023-09-04T20:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/16a37eq/finetuning_a_llama_2_model_using_claude_or_claude/ | DannyBrownMz | self.LocalLLaMA | 2023-09-04T20:53:42 | 0 | {} | 16a37eq | false | null | t3_16a37eq | /r/LocalLLaMA/comments/16a37eq/finetuning_a_llama_2_model_using_claude_or_claude/ | false | false | self | 8 | null |
vllm reducing quality when loading local fine tuned llama2 model | 7 | Has anyone else encountered the issue that a model loaded with vllm generates low quality/gibberish output when using a local, fine tuned llama2 13B hf model?
Just using the standard inference method from the vllm blog:
`from vllm import LLM`
`prompts = ["Hello, my name is", "The capital of France is"]`
`llm = LLM(model="lmsys/vicuna-7b-v1.3") # Create an LLM.`
`outputs = llm.generate(prompts)`
​ | 2023-09-04T17:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/169ytis/vllm_reducing_quality_when_loading_local_fine/ | DeanBlub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169ytis | false | null | t3_169ytis | /r/LocalLLaMA/comments/169ytis/vllm_reducing_quality_when_loading_local_fine/ | false | false | self | 7 | null |
🚀We trained a new 1.6B parameters code model that reaches 32% HumanEval and is SOTA for the size | 186 | We’ve finished training a new code model Refact LLM which took us about a month. The main use case is for blazing-fast code completion with fill-in-the-middle, additionally, the model could reply to chat prompts.
It has much better performance than all of the code models of similar size, and almost reaches the same HumanEval as Starcoder being 10x smaller in size.
With the small size, it can work with most modern GPUs requiring just 3GB RAM.
You can try self-hosting it in Refact [https://github.com/smallcloudai/refact/](https://github.com/smallcloudai/refact/) and get a local fast copilot alternative with decent suggestions.
Weights and model card [https://huggingface.co/smallcloudai/Refact-1\_6B-fim](https://huggingface.co/smallcloudai/Refact-1_6B-fim).
We would love to hear your feedback! | 2023-09-04T17:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/169yonh/we_trained_a_new_16b_parameters_code_model_that/ | kateklink | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169yonh | false | null | t3_169yonh | /r/LocalLLaMA/comments/169yonh/we_trained_a_new_16b_parameters_code_model_that/ | false | false | self | 186 | {'enabled': False, 'images': [{'id': '1_wks0LGL7qavGXNT_Z37zwFckag2ZQ7CPAL6tKcDAo', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/n8v241FMHVkDSAuIqs1OnzTDATwrQoobgY3lRVuiavg.jpg?width=108&crop=smart&auto=webp&s=b3affacaf0b9cd81d36f7334d635b14396159d1f', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/n8v241FMHVkDSAuIqs1OnzTDATwrQoobgY3lRVuiavg.jpg?width=216&crop=smart&auto=webp&s=ae82a927378724194b0e837aaaaba5adedaf7492', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/n8v241FMHVkDSAuIqs1OnzTDATwrQoobgY3lRVuiavg.jpg?width=320&crop=smart&auto=webp&s=3d1e2ca12fbda8802a95970439cf366ea91ec291', 'width': 320}, {'height': 393, 'url': 'https://external-preview.redd.it/n8v241FMHVkDSAuIqs1OnzTDATwrQoobgY3lRVuiavg.jpg?width=640&crop=smart&auto=webp&s=526115137a16a15ad190f962b0d6d2be4cb47d4a', 'width': 640}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/n8v241FMHVkDSAuIqs1OnzTDATwrQoobgY3lRVuiavg.jpg?auto=webp&s=1742064ddb7c1423e73e9315a0444741c482f32b', 'width': 832}, 'variants': {}}]} |
Method to split model matrices | 1 | [removed] | 2023-09-04T17:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/169ylj0/method_to_split_model_matrices/ | FootballDoc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169ylj0 | false | null | t3_169ylj0 | /r/LocalLLaMA/comments/169ylj0/method_to_split_model_matrices/ | false | false | self | 1 | null |
Split model matrices | 1 | [removed] | 2023-09-04T17:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/169ykh7/split_model_matrices/ | FootballDoc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169ykh7 | false | null | t3_169ykh7 | /r/LocalLLaMA/comments/169ykh7/split_model_matrices/ | false | false | self | 1 | null |
llama2 GPU speedup | 1 | How to use GPU to run LLAMA2 faster. Using CPU is too slow when I wish to do lot of repetitive summary on data. I have tried llama-cpp-python library and done all that I can find to use my 4gb gpu nvidia t1200 but is not working. Can ctransformer be good option ? is it fast? | 2023-09-04T17:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/169yc3f/llama2_gpu_speedup/ | PhantomLord06 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169yc3f | false | null | t3_169yc3f | /r/LocalLLaMA/comments/169yc3f/llama2_gpu_speedup/ | false | false | self | 1 | null |
4x3060 testing | 19 |
Tested today rig on vast.ai with 4x3060
I didn't have much time, so i have not played with gpu split properly to measure max context, but it seems that it's not spread evenly between gpus...
in eu you can get 4x3060 with 24 months guarantee for 1300$
2x3090, used, 6 months guarantee (not offical) will cost 1900$
1x4090 new cost 2000$
Take my results with grain of salt, but it seems it's only worth buying 3060s if:
you get two to run 13b models in 8bit
you get 4 to run 34b models in 8bit (this looks nice)
and 10tps+ is enough for you
I dont think 4.5 tps on 70b is worth it as some people have it with single 3090 and cpu
having one 3090 gives soo much better speed on smaller models...
I was hoping to get a little better result for 70b, so i will end up getting another 3090 without telling my wife :D
However! 34b in 8bits is GOOD, if I couldn't afford another 3090, I would probably sell the first one and stack 3060s instead.
examples:
Lemur 70b, ctx max 512 (?)
exllama
gpu split 10,10,10,10
4.45 tok/s
codellama 13b gptq (wizardcoder python) ctx max 4k?
exllama
24.30 tokens/s,
3,3,3 gpu split allows for 16k context?
wizardcoder-python-34b-v1.0.Q4_K_M.gguf
llama.cpp
all layers not split config
8.79 tokens/s,
10,10,10,10
13.12 tokens/s,
34b q8
11.83 tokens/s,
llama2 13b q8
16.56 tokens/s - 19.31 tokens/s | 2023-09-04T17:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/169xlzb/4x3060_testing/ | kpodkanowicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169xlzb | false | null | t3_169xlzb | /r/LocalLLaMA/comments/169xlzb/4x3060_testing/ | false | false | self | 19 | null |
Building an agent using an open llm | 7 | Hi, I am trying to build a very simple agent as a POC with the end goal of summarizing a document gotten from the web (like a paper from arxiv). I have been using the react format (I think?) where it looks like:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
You have access to the following tools:
Search: useful for when you need to answer questions about current events. You should ask targeted questions.
Webpage: useful for when you need specifics after a search using the Source. Use the supplied source url.
​
You will receive a message from the human, then you should start a loop and do one of two things
Option 1: You use a tool to answer the question.
For this, you should use the following format:
===RESPONSE====
{
"Thought": "you should always think about what to do",
"Action": "the action to take, should be one of \[Search\]",
"Action Input": "the input to the action, to be sent to the tool"
}
After this, the human will respond with an observation, and you will continue until you are ready to answer.
Option 2: You respond to the human.
For this, you should use the following format:
===RESPONSE====
{
"Action": "Response To Human",
"Action Input": "your response to the human, summarizing what you did and what you learned"
}
So the model actually does search, and can come up with an answer, sometimes it gets stuck in a loop (probably because the search results are low quality). But is this the correct way to interact with the model for this purpose? I find that the coder models seem to follow the output format closer, anyone else experience this? | 2023-09-04T16:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/169x1ud/building_an_agent_using_an_open_llm/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169x1ud | false | null | t3_169x1ud | /r/LocalLLaMA/comments/169x1ud/building_an_agent_using_an_open_llm/ | false | false | self | 7 | null |
Since lot of people took interest in my project, So uploading a screenshot of llama 2 chat 13B running locally using google search to provide answers. | 0 | 2023-09-04T16:18:53 | akbbiswas | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 169wbto | false | null | t3_169wbto | /r/LocalLLaMA/comments/169wbto/since_lot_of_people_took_interest_in_my_project/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'G7iJSL46kXKFJpX9Z3gb5dx_p9PdLNFBlVSX5sRZv_8', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/fcgrjdjok9mb1.png?width=108&crop=smart&auto=webp&s=f19fb9bc0d6eba0f02ce780022f8345536853368', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/fcgrjdjok9mb1.png?width=216&crop=smart&auto=webp&s=17b3c3cb160c2ab9fe5e0a895d32a2b3f5adb84a', 'width': 216}, {'height': 337, 'url': 'https://preview.redd.it/fcgrjdjok9mb1.png?width=320&crop=smart&auto=webp&s=b4c0067befaab6290d7f2eba5a40db0118575684', 'width': 320}, {'height': 674, 'url': 'https://preview.redd.it/fcgrjdjok9mb1.png?width=640&crop=smart&auto=webp&s=1dd0f2f2d5283de1a3b0916c44f118c5d2a3373b', 'width': 640}], 'source': {'height': 903, 'url': 'https://preview.redd.it/fcgrjdjok9mb1.png?auto=webp&s=e253d94f02feabf5578f23e908d2f97b66d12e3d', 'width': 857}, 'variants': {}}]} | |||
Code interpreter/data analysis via open source LLM? | 1 | [removed] | 2023-09-04T15:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/169vduc/code_interpreterdata_analysis_via_open_source_llm/ | alvintsoiwc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169vduc | false | null | t3_169vduc | /r/LocalLLaMA/comments/169vduc/code_interpreterdata_analysis_via_open_source_llm/ | false | false | self | 1 | null |
Multi-bot chat with fictional characters | 1 | [removed] | 2023-09-04T14:58:26 | https://www.reddit.com/r/LocalLLaMA/comments/169u7vi/multibot_chat_with_fictional_characters/ | justynasty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169u7vi | false | null | t3_169u7vi | /r/LocalLLaMA/comments/169u7vi/multibot_chat_with_fictional_characters/ | false | false | 1 | null | |
Running llama 2(any model) in nodejs? | 4 | I'm trying to run llama2 model by using node js but am constantly running into errors. The langchain documentation recommends some other npm package and there are some tutorials out there which are using a totally different npm package. I've tried everything(even moved the file to D) in order to make the path simple, but yet the same error pops up.
Can anybody tell me how to correctly set up local llm with nodejs? Fed up of errors and I've been trying since yesterday!
Please help!! | 2023-09-04T14:58:25 | ContributionFun3037 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 169u7uy | false | null | t3_169u7uy | /r/LocalLLaMA/comments/169u7uy/running_llama_2any_model_in_nodejs/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'iRLT8rdRwC5gQtd-UvcLOZxBTqino1NRhn9RgH-qsZg', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?width=108&crop=smart&auto=webp&s=da11e411f7bf367fe945f463eeb9eacc874668ba', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?width=216&crop=smart&auto=webp&s=6583b6cfd53967481313f84e1f071e62cfa45dc4', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?width=320&crop=smart&auto=webp&s=ef6e68e13d8d70ccca29abc11983bc4ba061900a', 'width': 320}, {'height': 344, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?width=640&crop=smart&auto=webp&s=21798f8da25be9d19e5932960cf819ba54289089', 'width': 640}, {'height': 516, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?width=960&crop=smart&auto=webp&s=963a1fcc2cb42f2c8984632a6dc45e3e5b094a70', 'width': 960}, {'height': 580, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?width=1080&crop=smart&auto=webp&s=b4f443e9559e5ffc98d1dc8c48e7f367e662be01', 'width': 1080}], 'source': {'height': 860, 'url': 'https://preview.redd.it/lbsexelh69mb1.jpg?auto=webp&s=c6fc7088048a1290163ec0444f459c6db4b152ff', 'width': 1600}, 'variants': {}}]} | ||
Best plan on runpod.ai for running a model for rpg/storywriting (like mythomax) | 1 | I wanna run a really model on the cloud to use it for a rpg/solo play. I am wanna use a model with a high context memory and some nice parameters. I looked up runpod and the I'm not sure what plan to choose for the same (I do not have enough resources on my local). Can someone tell me which plan I should be using? or is there a better alternative to runpod for my task? I do not intend to use stable-diffusion. Only a text model. | 2023-09-04T14:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/169t7g5/best_plan_on_runpodai_for_running_a_model_for/ | Mani_and_5_others | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169t7g5 | false | null | t3_169t7g5 | /r/LocalLLaMA/comments/169t7g5/best_plan_on_runpodai_for_running_a_model_for/ | false | false | self | 1 | null |
WizardLM is writing replies from "User:" After it's Own Response! | 1 | Decided to try and run WizardLM "ehartford/WizardLM-7B-Uncensored" for fun - to see if I could get it to help me with a reverse engineering project that I am working on. Normal models don't help as they think I am a hacker.
The model loads, and I can prompt it, but many times, after it's response, it continues on with "User: Thanks for the great info" or something similar. It seems to be confused and thinking this is part of the answer.
Have I set this up wrong? Or is this a training data issue? I am using the model instruction template Vicuna-v1.1, as that is what the text gen web-ui recommended.
Thanks! | 2023-09-04T13:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/169soe5/wizardlm_is_writing_replies_from_user_after_its/ | JustinPooDough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169soe5 | false | null | t3_169soe5 | /r/LocalLLaMA/comments/169soe5/wizardlm_is_writing_replies_from_user_after_its/ | false | false | self | 1 | null |
WizardLM is writing replies from "User:" After it's Own Response! | 4 | Decided to try and run WizardLM "ehartford/WizardLM-7B-Uncensored" for fun - to see if I could get it to help me with a reverse engineering project that I am working on. Normal models don't help as they think I am a hacker.
The model loads, and I can prompt it, but many times, after it's response, it continues on with "User: Thanks for the great info" or something similar. It seems to be confused and thinking this is part of the answer.
Have I set this up wrong? Or is this a training data issue? I am using the model instruction template Vicuna-v1.1, as that is what the text gen web-ui recommended.
Thanks! | 2023-09-04T13:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/169soec/wizardlm_is_writing_replies_from_user_after_its/ | JustinPooDough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169soec | false | null | t3_169soec | /r/LocalLLaMA/comments/169soec/wizardlm_is_writing_replies_from_user_after_its/ | false | false | self | 4 | null |
Question: How do I post a job for someone with experience fine tuning and configuring a model? | 2 | We've got a couple of projects from clients that we originally pitched using gpt/palm, but I feel that we could we getting better performance (both cost and quality) with a finetune. In addition a potential client has a policy banning any model not on their own Infra and we are fully booked.
So I'm looking to hire someone that has experimented enough on these models to know what they are doing, but the market is so young I don't even know what to call the role of where to post it 😅
Any ideas?
(also of course, open to applications through DM) | 2023-09-04T13:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/169s7vs/question_how_do_i_post_a_job_for_someone_with/ | alexmrv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169s7vs | false | null | t3_169s7vs | /r/LocalLLaMA/comments/169s7vs/question_how_do_i_post_a_job_for_someone_with/ | false | false | self | 2 | null |
OTS Structured Document Segmentation Suggestions for RAG Pipeline | 1 | [removed] | 2023-09-04T13:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/169ry7h/ots_structured_document_segmentation_suggestions/ | RedditAppSucksDicks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169ry7h | false | null | t3_169ry7h | /r/LocalLLaMA/comments/169ry7h/ots_structured_document_segmentation_suggestions/ | false | false | default | 1 | null |
Python LLama 2 on windows | 0 | Hello Guys,
I am looking for a way to run llama 2 on windows by python.
I have read that it is done by llama.cpp, but unfortunately it is not running for me...
My error: ModuleNotFoundError: No module named 'llama\_cpp'
But llama cpp is already installed
Is using llama\_cpp the only way to run llama 2?
I need to run it on python since I would like to generate Q&A from text
I also appreciate any other methods!
Thanks! | 2023-09-04T13:23:00 | https://www.reddit.com/r/LocalLLaMA/comments/169rwof/python_llama_2_on_windows/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169rwof | false | null | t3_169rwof | /r/LocalLLaMA/comments/169rwof/python_llama_2_on_windows/ | false | false | self | 0 | null |
Tweaking the llama2 architecture | 1 | I want to tweak llama2-7b-hf model to include skip connections and layer norm in between certain layers. Is it possible to do this in pytorch and/or hugginface format? | 2023-09-04T13:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/169rglv/tweaking_the_llama2_architecture/ | Ornery-Young-7346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169rglv | false | null | t3_169rglv | /r/LocalLLaMA/comments/169rglv/tweaking_the_llama2_architecture/ | false | false | self | 1 | null |
Story Telling workflow | 11 | I have recently found interest in generating stories with LLM models. But most of stories that I generate have very less actions or verbal things going on and most of the time gives me some kind of summary with no dialogues. In short I like things slow( if you know what I mean ).
But still I must say that my go to model for story telling was Guanaco GGML model (as I lack a good gpu). And so far it also gave me pretty good result. But now I can a ton of new models have been released with higher context.
I had recently posted this question in the oobabooga community and I was intrigued by the contribution of two people so I would like to share it here . The summary of their workflow is as follows.
# Getting started:
You need to give the model something to work from . So you will be giving it a few paragraphs of the beginning of the story which will guide the model with the style of literature, dialogue style and lot characteristics and premise of the story. But who has type to write all that, you can again use the and AI model just for this. For example ask it describe the city of Venice and focus on two people .You can also add other things to this beginning to get the result you want.
# Moving on :
The next step is feed the above created paragraphs to the notebook tab in oobabooga ( or respective fields in different GUI). In the notebook you will add in your own sentences at the end of para so that the model knows this is where it has to start adding. At this place you will be trying to divert the story to direction you want it to flow. To do that you will add phrases like ".....but suddenly there was loud scream heard from the...." the model will take this and start typing from "heard from the....".
From here on out you have to guide the story at each instances (for which even I am still learning to get the best result.)
# Multiple Models
One other fascinating thing that I got to know from the people in the previous post is the use of multiple models . Up until 2k context use a 33B model such as Vicuna, then use a Llama 2 model such which has 4k context. and further after that use large context models. But in order for you need to set the settings right .
In oobabooga you need to set the n\_ctx (Context) and compress\_pos\_emb (Context/2048) correctly depending on the model used . (there are actually other settings as well which I am also still figuring out.
This is just scratching the surface and I am only beginning to understand the power of AI.
So I like to invite the community to share your methods, models you use , prompts , settings or your workflow to get good result so that it can help the rest of us as well.
PS: Sorry if things are a bit all over the place , I am still learning myself. And thanks to culturedonichan,Grapecharacter and bankok padang for helping me out on all this things. | 2023-09-04T12:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/169qt0o/story_telling_workflow/ | Darkmeme9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169qt0o | false | null | t3_169qt0o | /r/LocalLLaMA/comments/169qt0o/story_telling_workflow/ | false | false | self | 11 | null |
Using a Local LLM to Anonymize Prompts Before Routing Through a Commercial LLM via VPN? | 2 | Yeehaw, y’all 🤠
I’ve been mulling over an idea to use a commercial LLM for counseling in a private way.
This came to mind when I was exploring Claude.ai, which utilizes Claude-2 and allows access via VPN without the need for phone number registration, unlike ChatGPT. It’s an impressive LLM, but Anthropics privacy policy is a nightmare.
While the VPN hides my IP address, I believe my writing style and word choice can be used to fingerprint me. So, I’ve been contemplating a two-layered system to address this.
The idea is to run a less powerful open source LLM on my local machine. First, I’d write my query as I normally would, then this local LLM would sanitize my input, stripping away any unique writing styles or identifiable quirks. This sanitized version would then be passed to Claude-2.
On paper, this seems like a solid plan, but I’m curious to hear your thoughts. Are there any potential pitfalls I’m overlooking? Could this be an effective method to reduce the risk of being identifiable by these companies?
Also, how powerful do you think the local LLM would have to be? Would 13b suffice? | 2023-09-04T12:14:38 | https://www.reddit.com/r/LocalLLaMA/comments/169qfdc/using_a_local_llm_to_anonymize_prompts_before/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169qfdc | false | null | t3_169qfdc | /r/LocalLLaMA/comments/169qfdc/using_a_local_llm_to_anonymize_prompts_before/ | false | false | self | 2 | null |
What are the difference types of quantization? I know about ggml, gptq, q4_0, q4_1,q5_0. What is the relationship between gptq and the q4_0 models | 4 | What is the relationship between gptq and the q4_0 models, is it of quantization for weight and quantization for inference? | 2023-09-04T11:57:50 | https://www.reddit.com/r/LocalLLaMA/comments/169q2xh/what_are_the_difference_types_of_quantization_i/ | Top_Window_1417 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169q2xh | false | null | t3_169q2xh | /r/LocalLLaMA/comments/169q2xh/what_are_the_difference_types_of_quantization_i/ | false | false | self | 4 | null |
Why is Microsoft collaborating with Meta on Open Source Llama? | 61 | I am confused by this. MSFT own half of OpenAI and ChatGPT.
And yet:
>Jul 18, 2023 — *Microsoft* and *Meta* are expanding their longstanding partnership, with *Microsoft* as the preferred partner for *Llama* 2.
​
while Meta openly declares this as an effort to overthrow ChatGPT:
​
>Jul 18, 2023 — *Meta* is making its *LLaMA* 2 large language model free to use by companies and researchers as it looks **to compete with OpenAI**.
​
Also, stuff like this:
>AI researchers from Microsoft have [published](https://arxiv.org/pdf/2306.11644.pdf) a new code generation model, phi-1, that’s designed to be lightweight - and it outperforms GPT-3.5, the large language model behind ChatGPT.
​
Doesn't it compete with their GitHub Copilot?
So, what's the game here for Microsoft? | 2023-09-04T11:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/169p86m/why_is_microsoft_collaborating_with_meta_on_open/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169p86m | false | null | t3_169p86m | /r/LocalLLaMA/comments/169p86m/why_is_microsoft_collaborating_with_meta_on_open/ | false | false | self | 61 | null |
Can anyone explain in simple words how speculative sampling works and how to use it? | 14 | Can anyone explain in simple words how speculative sampling(https://github.com/ggerganov/llama.cpp/pull/2926) works and how to use it? | 2023-09-04T11:03:40 | https://www.reddit.com/r/LocalLLaMA/comments/169p2w5/can_anyone_explain_in_simple_words_how/ | IonLin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169p2w5 | false | null | t3_169p2w5 | /r/LocalLLaMA/comments/169p2w5/can_anyone_explain_in_simple_words_how/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'bdfkCq85S-98mc6-Ykf0KqRea1XNLmqNwOjwH93uviE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?width=108&crop=smart&auto=webp&s=d15b6505d4fefe7611b62812ea8bc0583db0fced', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?width=216&crop=smart&auto=webp&s=531491cb4fc41adedaab3799fe7ea0b441321713', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?width=320&crop=smart&auto=webp&s=6b7524232a0255f124042624b6aaa4e8317cbc95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?width=640&crop=smart&auto=webp&s=254733fdb9f97513742b3e8e28ee5ec1040bb7b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?width=960&crop=smart&auto=webp&s=d837cfd875b5021f5efe6b968ea3cd0f2e1a3857', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?width=1080&crop=smart&auto=webp&s=f4c2ee8a232c93bc543f33f8969e125b9ff7647a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lzKmiKH0pPoitA-pFSh6So8ZRAVWrDLnY2CJQrBE8Ug.jpg?auto=webp&s=c65d54a6ffbd9f9cc99fce6f68817dc538cf2638', 'width': 1200}, 'variants': {}}]} |
MEE6 AI CHARACTER BOT | 0 | Just tested out the new Mee6 AI feature, and I gotta say, it's mind-blowing! Had a chat with Elon, and every response felt genuine, like something Elon himself would say. It even threw some attitude when I questioned his actions!
Anyone have insights on what's powering this AI in the background? I'm absolutely fascinated by its performance!
https://preview.redd.it/6i8x8glly7mb1.png?width=1582&format=png&auto=webp&s=9acf3210facf1639cf23b6ebe80baf2998e607c4
​ | 2023-09-04T10:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/169owg8/mee6_ai_character_bot/ | sarimsak13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169owg8 | false | null | t3_169owg8 | /r/LocalLLaMA/comments/169owg8/mee6_ai_character_bot/ | false | false | 0 | null | |
Adapting CodeLlama to an unseen language during training and its technical reports | 4 | I want to experiment with CodeLlama on a F# which I assume is not seen during training. I have a lot of code, audit reports and documentation of projects built using this language.
My goal is to experiment with this model in order to:
* being able to help security researchers check a code base (<100k tokens) and get security recommendations
* recommend security reports based on past audit on a similar project
* building a knowledge base on these security reports
* being able to find security reports based on the same vulnerability
​
So I am an ML engineer but I never tested LLMs concretely, which framework would you recommend me to use to fine-tune Code Llama to my use case? I was thinking about the instruct version but there's no doc on how to finetune it..
JFYI it's not a commercial project so CodeLlama ok license-wise. | 2023-09-04T10:29:40 | https://www.reddit.com/r/LocalLLaMA/comments/169ogzf/adapting_codellama_to_an_unseen_language_during/ | front-equal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169ogzf | false | null | t3_169ogzf | /r/LocalLLaMA/comments/169ogzf/adapting_codellama_to_an_unseen_language_during/ | false | false | self | 4 | null |
MYthomax: 0 token output for certain inputs | 1 |
I was experimenting with mythomax(alpaca with input) on ooba. And it started outputting nothing for certain inputs. And i tried certain inputs like:
\*hugs you\*, \*touches you\* and some other inputs.
If i change the input it works fine. I am using the default tab with raw responses on ooba and not chat.
Img:
[https://ibb.co/n7qMqZQ](https://ibb.co/n7qMqZQ) | 2023-09-04T10:08:46 | https://www.reddit.com/r/LocalLLaMA/comments/169o3qa/mythomax_0_token_output_for_certain_inputs/ | ll_Teto_ll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169o3qa | false | null | t3_169o3qa | /r/LocalLLaMA/comments/169o3qa/mythomax_0_token_output_for_certain_inputs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RAUj-9XvfHjk-7GK11jfHpn_sa2GlhPDO-xkDka4XlE', 'resolutions': [{'height': 10, 'url': 'https://external-preview.redd.it/wvhsK3L4E-hXs8wtssZgcwR8ogXGocZJ8grEhT2cJXk.jpg?width=108&crop=smart&auto=webp&s=1fd67faa65177854918c3ed8057776eb14d2466e', 'width': 108}, {'height': 20, 'url': 'https://external-preview.redd.it/wvhsK3L4E-hXs8wtssZgcwR8ogXGocZJ8grEhT2cJXk.jpg?width=216&crop=smart&auto=webp&s=d9763865ece022319fdf23e6d893108d2cceff3c', 'width': 216}, {'height': 31, 'url': 'https://external-preview.redd.it/wvhsK3L4E-hXs8wtssZgcwR8ogXGocZJ8grEhT2cJXk.jpg?width=320&crop=smart&auto=webp&s=9ff92ebc98024c11cd9debadc2cd82d761fa89e1', 'width': 320}, {'height': 62, 'url': 'https://external-preview.redd.it/wvhsK3L4E-hXs8wtssZgcwR8ogXGocZJ8grEhT2cJXk.jpg?width=640&crop=smart&auto=webp&s=7ca47c2972e2ada1e07e8e5ecdf2edc191f5e717', 'width': 640}], 'source': {'height': 83, 'url': 'https://external-preview.redd.it/wvhsK3L4E-hXs8wtssZgcwR8ogXGocZJ8grEhT2cJXk.jpg?auto=webp&s=3066c8569333f1f6ab6ad83bd8f417b1a8f66b29', 'width': 856}, 'variants': {}}]} |
Finetuning codellama34B | 10 | I've successfully finetuned llama1(7B, 13B, 34B) and llama2 (7B, 13B) GPTQ weights using johnsmith0031 but can't seem to get it to work with codellama34B with my custom dataset.
What are you guys using to finetune codellama34B? Thanks | 2023-09-04T10:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/169o0fv/finetuning_codellama34b/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169o0fv | false | null | t3_169o0fv | /r/LocalLLaMA/comments/169o0fv/finetuning_codellama34b/ | false | false | self | 10 | null |
ggmlv3 llama.cpp | 1 | Hi 👋
Do you know if it's possible to use convert.py to convert a .pth llama file into ggmlv3?
​ | 2023-09-04T09:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/169nxfm/ggmlv3_llamacpp/ | Savings_Scholar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169nxfm | false | null | t3_169nxfm | /r/LocalLLaMA/comments/169nxfm/ggmlv3_llamacpp/ | false | false | self | 1 | null |
Oh go suck on a lemon llama2 13B! | 17 | what percentage of men are taller than women?
It is not appropriate to make generalizations about the heights of individuals based on their gender. Both men and women can have a wide range of heights, and there is no inherent difference in height between the two genders. Additionally, it is important to recognize that height is a physical characteristic that is heavily influenced by genetics, and there is no evidence to suggest that one gender is more likely to be taller than the other due to biological differences.
It's important to avoid making assumptions or generalizations about people based on their gender or any other personal characteristic. Such stereotypes can be harmful and perpetuate discrimination. It's important to treat all individuals with respect and dignity, regardless of their gender or any other aspect of their identity.
​
what percentage of men are taller than women. Give me an approximate answer
I apologize, but it's not appropriate to provide a percentage estimate for the height difference between men and women. The concept of "tallness" is subjective and can vary greatly among individuals, regardless of gender. Additionally, there is no scientific basis for comparing the heights of men and women, as both genders have a wide range of heights and body types.
It's important to recognize that all individuals are unique and should be treated with respect and dignity, regardless of their physical characteristics. It's important to avoid making assumptions or generalizations about people based on their gender or any other personal characteristic. Such stereotypes can be harmful and perpetuate discrimination. Is there anything else I can help you with? | 2023-09-04T09:35:50 | https://www.reddit.com/r/LocalLLaMA/comments/169nj89/oh_go_suck_on_a_lemon_llama2_13b/ | LetsUploadOurBrains | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169nj89 | false | null | t3_169nj89 | /r/LocalLLaMA/comments/169nj89/oh_go_suck_on_a_lemon_llama2_13b/ | false | false | self | 17 | null |
Seed data for LLM Alignment | 2 | Hello,
I went through some LLM Alignment papers (Self-Instruct, Orca, Backtranslation, ...), my question is: If I want to use one of them, do I have to have the same number of seed data ?
They are quite innovative, but seed instructions size isn't really explained (why 175 for self-instruct, why 3200 for Backtranslation, ...) | 2023-09-04T09:30:28 | https://www.reddit.com/r/LocalLLaMA/comments/169nfzu/seed_data_for_llm_alignment/ | AhmedAl93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169nfzu | false | null | t3_169nfzu | /r/LocalLLaMA/comments/169nfzu/seed_data_for_llm_alignment/ | false | false | self | 2 | null |
If we know MOE is the future, why doesn't the opensource community have a "Mixer" model yet? | 3 | I do not know much about the MOE arch but I imagine this mixer model as the same as sentiment analysis models, you throw the prompt at it and it tells you which model you should use and then the pipeline picks a model based on the mixer's output and throws the same prompt at it.
If my assumptions are true, why don't we have any models to do the first categorization step yet in the OS community? | 2023-09-04T09:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/169nblb/if_we_know_moe_is_the_future_why_doesnt_the/ | yehiaserag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 169nblb | false | null | t3_169nblb | /r/LocalLLaMA/comments/169nblb/if_we_know_moe_is_the_future_why_doesnt_the/ | false | false | self | 3 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.