| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
"Base" models were actually trained with some GPT instruct datasets | 66 | Look at this: apart from Llama 1, all the other "base" models will likely answer "language" after "As an AI". That means Meta, Mistral AI, and 01-ai (the company that made Yi) likely trained their "base" models with GPT instruct datasets to inflate the benchmark scores and make it look like the "base" models had a lot of potential. We got duped hard on that one.
​
https://preview.redd.it/vqtjkw1vdyzb1.png?width=653&format=png&auto=webp&s=91652053bcbc8a7b50bced9bbf8638fa417387bb | 2023-11-12T17:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/17tp9s9/base_models_were_actually_trained_with_some_gpt/ | Wonderful_Ad_5134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tp9s9 | false | null | t3_17tp9s9 | /r/LocalLLaMA/comments/17tp9s9/base_models_were_actually_trained_with_some_gpt/ | false | false | 66 | {'enabled': False, 'images': [{'id': '7uD17AXjzQmwy9qeoIuzlQd7iPn3pOmbWB_Lwqjszew', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?width=108&crop=smart&auto=webp&s=e15951433a0ceaad0b89572f43f7aa43fafdbbe3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?width=216&crop=smart&auto=webp&s=c2914270b1867d5c967aff0d94422262fbcb014f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?width=320&crop=smart&auto=webp&s=a9238212fd4871c267f730dcf8af3bbc48657a7e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?width=640&crop=smart&auto=webp&s=fb3c90bf8d6d7b6c453252a698fada70224d6fd3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?width=960&crop=smart&auto=webp&s=8803db0ded8eca8fe3ded2c612a8de38d1153796', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?width=1080&crop=smart&auto=webp&s=de8a15067ac8eb33072131fdf596937cc0089f06', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ErRJIG6_PZ1GAFZW3wRkGtFEo1s-Wpa3p8b_PvHHZfU.jpg?auto=webp&s=48c705af120bfeb67acb474273e447b193d0ee2a', 'width': 1200}, 'variants': {}}]} | |
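The probe in the screenshot can be reproduced locally. Below is a hedged sketch: the softmax/top-k ranking is plain Python, and feeding it real logits would go through something like the transformers library's `AutoModelForCausalLM` on the prompt "As an AI" (that wiring, and any model names, are assumptions rather than details from the post).

```python
import math

def top_next_tokens(logits, vocab, k=5):
    # Softmax over raw next-token logits (numerically stabilized),
    # then return the k most likely (token, probability) pairs.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    ranked = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)
    return [(vocab[i], probs[i]) for i in ranked[:k]]

# With a real base model you would obtain `logits` from a single forward
# pass on the prompt "As an AI" and check whether " language" dominates --
# a tell that GPT-style instruct data leaked into pretraining.
```

If " language" sits at the top of the distribution for a supposedly raw base model, that is the signal the post is pointing at.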
Anyone spend a bunch of $$ on a computer for LLM and regret it? | 1 | Curious if anyone got the whole rig and realized they didn't really need it, etc. | 2023-11-12T17:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/17tp5qt/anyone_spend_a_bunch_of_on_a_computer_for_llm_and/ | parasocks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tp5qt | false | null | t3_17tp5qt | /r/LocalLLaMA/comments/17tp5qt/anyone_spend_a_bunch_of_on_a_computer_for_llm_and/ | false | false | self | 1 | null |
With a 24GB video card, single card system, what is the best LLM that utilizes Exllama2? (for RPG/Chat) | 15 | *I'm using Windows 11 and OOBE. I use SillyTavern as well, but not always.*
I've been playing with 20b models (work great at 4096 context) and 70b ones (but too slooow unless I make the context 2048, which is then usable, but the low context sucks)
What else am I missing? I see there are some 34b models now for Exllama2, but I'm having issues getting them to work, quality-wise (which PROFILE do I use??) or speed-wise (what context setting? This is not the 200k context version)...
For your recommended model, what are the best settings on a single card system? (4090, 96GB of RAM, i9-13900K)
Any suggestions for the best experience are appreciated (for creative, RPG/Chat/Story usage).
Thank you.
​ | 2023-11-12T17:34:28 | https://www.reddit.com/r/LocalLLaMA/comments/17tp5dm/with_a_24gb_video_card_single_card_system_what_is/ | cleverestx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tp5dm | false | null | t3_17tp5dm | /r/LocalLLaMA/comments/17tp5dm/with_a_24gb_video_card_single_card_system_what_is/ | false | false | self | 15 | null |
Musk (mostly?) endorses open source models | 1 | [removed] | 2023-11-12T17:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/17toyzs/musk_mostly_endorses_open_source_models/ | Haunting_Turnip_7842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17toyzs | false | null | t3_17toyzs | /r/LocalLLaMA/comments/17toyzs/musk_mostly_endorses_open_source_models/ | false | false | self | 1 | null |
I'm looking for models that specialize in hard sciences, particularly medicine and chemistry. I can't seem to find any that aren't designed for "lab use". | 3 | I'm a nursing student who is trying to brush up on pharmacology and cardiovascular specialties in my free time. Are there any models that are trained or fine-tuned on medical and science material that are likely to be more accurate than the general models?
I don't have the hardware to run the unquantized models I'm finding. Besides, a lot of those are based on Llama 1, which I'm not super enthusiastic about.
Any suggestions? | 2023-11-12T17:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/17tot53/im_looking_for_models_that_specialize_in_hard/ | Arcturus17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tot53 | false | null | t3_17tot53 | /r/LocalLLaMA/comments/17tot53/im_looking_for_models_that_specialize_in_hard/ | false | false | self | 3 | null |
GGUF on free colab? | 1 | [removed] | 2023-11-12T16:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/17to97n/gguf_on_free_colab/ | Murky-Cheek-7554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17to97n | false | null | t3_17to97n | /r/LocalLLaMA/comments/17to97n/gguf_on_free_colab/ | false | false | self | 1 | null |
how it can be | 1 | pays | 2023-11-12T16:16:13 | https://v.redd.it/hnck3jxywxzb1 | Ornery_Tourist1769 | /r/LocalLLaMA/comments/17tnif5/how_it_can_be/ | 1970-01-01T00:00:00 | 0 | {} | 17tnif5 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/hnck3jxywxzb1/DASHPlaylist.mpd?a=1702484178%2CODc3ZGE0YzU1NTRkNmUwMTIyMDMxNGQ4MmRhMDE4YWFhY2Q0YWY4MDM0YjJiY2JkYmIzMGZiYTg3N2VjM2M1YQ%3D%3D&v=1&f=sd', 'duration': 116, 'fallback_url': 'https://v.redd.it/hnck3jxywxzb1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/hnck3jxywxzb1/HLSPlaylist.m3u8?a=1702484178%2CNzNkNTEwZDY0ZmY3NzA4YWJhY2I1MDZkYjFjZmJhYjc3NjkxMzY5ODU0MzEzNDk1YzgzZmEwNDUxNmUyNjM0Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hnck3jxywxzb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 420}} | t3_17tnif5 | /r/LocalLLaMA/comments/17tnif5/how_it_can_be/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bWsyNG4yc3h3eHpiMRZzg9ukzgY8ySCefBXwp3siIblVi6SaVBjUxgjk0oSt', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bWsyNG4yc3h3eHpiMRZzg9ukzgY8ySCefBXwp3siIblVi6SaVBjUxgjk0oSt.png?width=108&crop=smart&format=pjpg&auto=webp&s=bfd4962ecb85a0d0980deaad08743825d3b03a51', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/bWsyNG4yc3h3eHpiMRZzg9ukzgY8ySCefBXwp3siIblVi6SaVBjUxgjk0oSt.png?width=216&crop=smart&format=pjpg&auto=webp&s=4832493e34591b826b56bc9b44d45f8d70ca013e', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/bWsyNG4yc3h3eHpiMRZzg9ukzgY8ySCefBXwp3siIblVi6SaVBjUxgjk0oSt.png?width=320&crop=smart&format=pjpg&auto=webp&s=b0a4fee1b9fd5a4d537c2b20b195687a1e9cca22', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/bWsyNG4yc3h3eHpiMRZzg9ukzgY8ySCefBXwp3siIblVi6SaVBjUxgjk0oSt.png?width=640&crop=smart&format=pjpg&auto=webp&s=a4375f49b4093c1c7a81c2c56dcc0c7884149a4c', 'width': 640}], 'source': {'height': 1795, 
'url': 'https://external-preview.redd.it/bWsyNG4yc3h3eHpiMRZzg9ukzgY8ySCefBXwp3siIblVi6SaVBjUxgjk0oSt.png?format=pjpg&auto=webp&s=06f8cf41f4075c5f3535a890bb54aebd930247fc', 'width': 883}, 'variants': {}}]} | |
Why the Need for Massive Data Sets and How Human Brains Differ in Learning | 1 | [removed] | 2023-11-12T16:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/17tn8lr/why_the_need_for_massive_data_sets_and_how_human/ | One-Magician-6270 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tn8lr | false | null | t3_17tn8lr | /r/LocalLLaMA/comments/17tn8lr/why_the_need_for_massive_data_sets_and_how_human/ | false | false | self | 1 | null |
retrieve and extract the information in docx for QA | 1 | What is the common document format in document question answering? How do you retrieve and extract the information in a docx (especially table information) based on the question? It would be great if there are open source projects for learning, ideally with ipynb files that can be run on Kaggle or Colab. Thank you very much for your answers. | 2023-11-12T15:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/17tn4p5/retrieve_and_extract_the_information_in_docx_for/ | Weekly_Stress35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tn4p5 | false | null | t3_17tn4p5 | /r/LocalLLaMA/comments/17tn4p5/retrieve_and_extract_the_information_in_docx_for/ | false | false | self | 1 | null |
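For the table-information part of this question, one hedged approach: extract each docx table as a list of rows (with python-docx that would look roughly like `[[c.text for c in r.cells] for r in doc.tables[0].rows]` — an assumption, check the library docs), then flatten rows into "header: value" passages that embed well for retrieval:

```python
def table_to_passages(rows):
    # `rows` is a plain list of rows, each a list of cell strings,
    # with the first row treated as the header. Each body row becomes
    # one self-describing passage for a retriever to index.
    header, *body = rows
    return ["; ".join(f"{h}: {v}" for h, v in zip(header, row)) for row in body]
```

Each passage then goes into whatever QA/retrieval pipeline you use, alongside the plain paragraphs of the document.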
What is the best current Local LLM to run? | 69 | Which is the best 7b model? | 2023-11-12T15:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/17tmvjt/what_is_the_best_current_local_llm_to_run/ | ProfessionalPanda125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tmvjt | false | null | t3_17tmvjt | /r/LocalLLaMA/comments/17tmvjt/what_is_the_best_current_local_llm_to_run/ | false | false | self | 69 | null |
is there any other tools like vLLM or TensorRT that can be used to speed up LLM inference? | 1 | I know that vLLM and TensorRT can be used to speed up LLM inference. I'm trying to find other tools that do similar things so I can compare them. Do you guys have any suggestions?
vLLM: speeds up inference
TensorRT: speeds up inference
DeepSpeed: speeds up the training phase
​ | 2023-11-12T15:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/17tmgko/is_there_any_other_tools_like_vllm_or_tensorrt/ | DataLearnerAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tmgko | false | null | t3_17tmgko | /r/LocalLLaMA/comments/17tmgko/is_there_any_other_tools_like_vllm_or_tensorrt/ | false | false | self | 1 | null |
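For an apples-to-apples comparison of backends like vLLM and TensorRT-LLM, the measurement side can stay backend-agnostic. A minimal sketch (the `generate` callable here is a placeholder you would wrap around whichever engine you benchmark — e.g. vLLM's `LLM.generate` — not a real API of either project):

```python
import time

def tokens_per_second(generate, prompt, max_tokens):
    # Time one generation call and report decode throughput.
    # `generate` is any callable returning the list of generated tokens;
    # hide each backend (vLLM, TensorRT-LLM, llama.cpp, ...) behind it.
    start = time.perf_counter()
    tokens = generate(prompt, max_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed
```

Running the same prompts through each wrapped backend gives a comparable tokens/sec figure, which is usually the number these projects compete on.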
Is there a n00b guide somewhere online to fine tune LLaMA 13b self-hosted? | 13 | As the title states.
Looking for a n00b guide. Like...a guide for idiots such as myself.
I'm a copywriter and know nothing about how to fine tune AI.
I followed an online guide and somehow managed to get LLaMA 13b working on my 4090, i7-10-700k, and 64gb of RAM PC.
While it works just fine...I'm having an issue with LLaMA 13b, Claude 2, and ChatGPT.
All three AIs cannot write at the level I need them to. They also have a piss-poor memory (e.g., Claude 2 forgets that I instructed it to "don't use passive voice" two messages ago).
So, I'm hoping there is a way to fine tune LLaMA 13b so that whenever I ask it to perform a writing task it will:
* Write in active voice only
* Don't write overly complex sentences with multiple commas
* Begin blurbs with a straightforward benefit statement that establishes value
* Use natural language and syntax
* Add something substantive that complements the existing messaging while avoiding redundant phrasing
* Etc etc etc
Without me having to remind it.
Do you know if this is possible and how hard it will be for a total n00b like myself to accomplish? And can I accomplish this on my 4090 rig? | 2023-11-12T15:24:24 | https://www.reddit.com/r/LocalLLaMA/comments/17tmg6i/is_there_a_n00b_guide_somewhere_online_to_fine/ | kaszebe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tmg6i | false | null | t3_17tmg6i | /r/LocalLLaMA/comments/17tmg6i/is_there_a_n00b_guide_somewhere_online_to_fine/ | false | false | self | 13 | null |
Finetuning on PDF to align behavior with ethical protocols? | 1 | Hey!
I want to finetune Mistral on a few large ethical/instructional protocol/guideline PDFs. I'm trying to align the LLM's behavior more towards these principles, and it seems like finetuning is the way to go. I have access to an RTX 4090. Anyone know the most efficient way I can go about this, and how I can best format the information in the PDF for the dataset?
Thank you! | 2023-11-12T13:12:07 | https://www.reddit.com/r/LocalLLaMA/comments/17tjyg1/finetuning_on_pdf_to_align_behavior_with_ethical/ | Imjustmisunderstood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tjyg1 | false | null | t3_17tjyg1 | /r/LocalLLaMA/comments/17tjyg1/finetuning_on_pdf_to_align_behavior_with_ethical/ | false | false | self | 1 | null |
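One hedged way to turn guideline PDFs into a fine-tuning dataset: extract the text (e.g. with pypdf — an assumption, the post names no tool), split it into overlapping chunks so no guideline sentence is lost at a boundary, and wrap each chunk in your instruction format. The chunking is the mechanical part:

```python
def chunk_text(text, size=1024, overlap=128):
    # Split extracted PDF text into overlapping windows. The overlap
    # means the tail of each chunk is repeated at the head of the next,
    # so sentences spanning a boundary appear whole in at least one chunk.
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

Each chunk would then become one training record (e.g. a JSONL line pairing a system prompt with the guideline text) in whatever format your fine-tuning script for Mistral expects.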
Haven’t stayed up to date with this community in awhile. What’s the progress looking like on a local coding model? | 1 | [removed] | 2023-11-12T12:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/17tjjsf/havent_stayed_up_to_date_with_this_community_in/ | RadioRats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tjjsf | false | null | t3_17tjjsf | /r/LocalLLaMA/comments/17tjjsf/havent_stayed_up_to_date_with_this_community_in/ | false | false | default | 1 | null |
In my opinion open-source projects should focus on a very narrow thing, instead of focusing on being a "GPT" that can do everything. | 61 | What I currently see is that many companies try to create a "GPT", a model which is basically competing with the GPT models of OpenAI or Claude. The problem, in my opinion, is that these open source projects with just a few people working on them have very limited resources. Even if they have 10 A100s with 80 GB of VRAM, you will never come close to the computing power and the manpower you need in order to actually train such a model. If you go above 13 billion parameters, you already have the problem that over 99% of all humanity cannot use your model.
While, yes, you can run it on Colab, you then have the problem that people are dependent on Colab, so to speak. If Colab pulls the plug, then it doesn't work. If it's hosted by another company and the company pulls the plug, it doesn't work anymore. So, in my opinion, people should focus on creating models that are focused on something. A basic example: Japanese to English translation. Or maybe a model which is really good with historical facts. Every single thing is an additional parameter, which makes it harder and harder to actually load the entire model. If this goes on, in my opinion, we will not see any development that is actually really beneficial. And this is not me being a doomer and saying "oh, no, it will never work", but unless new technology is released which specifically makes it possible to get something equal to 300 billion parameters or so working, in my opinion, it's useless.
We need to actually do something with what we have. I think open source projects should focus on something and have 13 billion parameters hyper-focused on a very specific area, allowing the model to perform amazingly at that subject. Let the big thing be Llama 3 from Meta, but I think it's impossible to get something like GPT-3.5 or GPT-4 with open-source methods. The best models currently are Llama and Mistral... both from companies that are now worth billions or hundreds of millions.
You can certainly try to finetune what is released, like the new Llama models, and try to modify them, but I see so many models being released that basically nobody uses, or that really have no use.
What do you all think about it? After testing out so many different models, I just think that the goals these small teams set themselves are simply not achievable, and they should try to create something that is amazing at one thing.
TLDR: I think open source projects should focus on being very good at certain tasks instead of being good at "everything". | 2023-11-12T12:45:27 | https://www.reddit.com/r/LocalLLaMA/comments/17tjicb/in_my_opinion_opensource_projects_should_focus_an/ | GodEmperor23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tjicb | false | null | t3_17tjicb | /r/LocalLLaMA/comments/17tjicb/in_my_opinion_opensource_projects_should_focus_an/ | false | false | self | 61 | null |
Complete assistant project? | 1 | [removed] | 2023-11-12T12:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/17tjgxp/complete_assistant_project/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tjgxp | false | null | t3_17tjgxp | /r/LocalLLaMA/comments/17tjgxp/complete_assistant_project/ | false | false | self | 1 | null |
Introducing an open-source guide named "AI Portal Gun" for AI Enthusiasts. Is anyone interested in contributing? | 1 | [removed] | 2023-11-12T12:04:22 | https://www.reddit.com/r/LocalLLaMA/comments/17tivz0/introducing_an_opensource_guide_named_ai_portal/ | Chemical-Instance204 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tivz0 | false | null | t3_17tivz0 | /r/LocalLLaMA/comments/17tivz0/introducing_an_opensource_guide_named_ai_portal/ | false | false | self | 1 | null |
Converting text to JSON | 4 | Hello everyone; I am creating a chatbot that converts my text to JSON format, and it decides the fields/categories on its own. For the example below, it will convert the medical description to JSON
Example:
Patient John. Allergies to shellfish. 6 Feet, Previous Checkup. Chronic Backpain. Migraines Regular. Cane user. Sport Injury. 35 years. Visit 1 year ago. Heavy Smoker previously. Fatigued. BMI above normal. Temp normal. On Ibuprofen. Skin rash, using topical cream.
Which lightweight model will be the best for this task? | 2023-11-12T11:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/17ti9ut/converting_text_to_json/ | LancelotGFX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ti9ut | false | null | t3_17ti9ut | /r/LocalLLaMA/comments/17ti9ut/converting_text_to_json/ | false | false | self | 4 | null |
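Whichever lightweight model you pick, it will sometimes wrap the JSON in conversational filler, so a small extractor that pulls the first balanced object out of the reply is useful regardless of model choice. A sketch (brace-counting, so it assumes no `{`/`}` characters inside string values):

```python
import json

def extract_json(reply):
    # Find the first balanced {...} object in a model reply and parse it.
    start = reply.find("{")
    if start == -1:
        raise ValueError("no JSON object in reply")
    depth = 0
    for i, ch in enumerate(reply[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(reply[start:i + 1])
    raise ValueError("unbalanced JSON in reply")
```

If `json.loads` raises, that is a natural retry signal: re-prompt the model with the error message appended.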
How to host an LLM which can be reached via an API for NSFW chat | 9 | Hi. I have been reading about best LLMs for NSFW and Roleplay and other things.
I am looking to build an AI girlfriend kind of app and would want to be able to talk to the app with customized prompts.
While I think I know what is probably the best NSFW LLM out there, what I'm confused about is how to use it. Do I use the quantized or unquantized version? Do I use GGUF or AWQ?

Also, is it a good idea to host it on SageMaker, or do you suggest hosting the NSFW chatbot somewhere else in the cloud?

Any pointers towards the optimal system design and infrastructure would be great.

I have a Flutter UI and would probably use AWS for hosting, but as I mentioned earlier, I'm confused about how to host the backend API for the app to connect and talk to.
Thank you in advance. | 2023-11-12T11:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/17ti3d2/how_to_host_an_llm_which_can_be_reached_via_an/ | faaanofursmile | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ti3d2 | false | null | t3_17ti3d2 | /r/LocalLLaMA/comments/17ti3d2/how_to_host_an_llm_which_can_be_reached_via_an/ | false | false | nsfw | 9 | null |
Converting llama.cpp models to torch models | 1 | I'd like to use text-generation-webui for fine-tuning models, generating LoRAs, and applying them to the loaded model. However, text-generation-webui's code doesn't work with LoRAs here. When I try to fine-tune a model, it simply sends its LlamaCppModel object to the PEFT library's functions, but those functions need torch models, not text-generation-webui or llama.cpp models.
Is there any code to convert llamacpp models to torch models? | 2023-11-12T10:45:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ths1u/converting_llamacpp_models_to_torch_models/ | mercury__7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ths1u | false | null | t3_17ths1u | /r/LocalLLaMA/comments/17ths1u/converting_llamacpp_models_to_torch_models/ | false | false | self | 1 | null |
Fine-tuning stabilityai/stablelm-3b-4e1t on totally-not-an-llm/EverythingLM-data-V3 dataset | 1 | [removed] | 2023-11-12T10:42:23 | https://huggingface.co/TinyPixel/stablelm-ft2 | Sufficient_Run1518 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17thqqv | false | null | t3_17thqqv | /r/LocalLLaMA/comments/17thqqv/finetuning_stabilityaistablelm3b4e1t_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rw7kY_NrfqZ6bknjpQNy9wdf2heDevklF28yO4z_Szc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?width=108&crop=smart&auto=webp&s=5af401387de944457a7a5c7dbfa9adb565720479', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?width=216&crop=smart&auto=webp&s=c6b690fd76ca46d9e9ea5d2d4ed1a6f2f337c48d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?width=320&crop=smart&auto=webp&s=f9827c2d5e6562589e689a7fbabe95a1309cb453', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?width=640&crop=smart&auto=webp&s=34deefa2bf37e2ca50ff1a49ab9dbb734c6d76d0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?width=960&crop=smart&auto=webp&s=25d7370f243c0b9a61777fee6c7f23959fd0b2cd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?width=1080&crop=smart&auto=webp&s=784cdb968bcc074ffbd768cc256875eb01198754', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oeCqy54s2YkEVkDoHWzSUQbMRtpW8UpN1eO_oqzMeVw.jpg?auto=webp&s=c5f6fb7f34feeb43749df22f13dd48cede650de7', 'width': 1200}, 'variants': {}}]} | |
Anyone interested in contributing to an open-source guide named AI Portal Gun? | 1 | [removed] | 2023-11-12T10:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/17thloq/anyone_interested_in_contributing_to_an/ | Jiraiya27s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17thloq | false | null | t3_17thloq | /r/LocalLLaMA/comments/17thloq/anyone_interested_in_contributing_to_an/ | false | false | self | 1 | null |
Spread LLM to multiple hosts | 3 | Hi all
I'm wondering if there is a possibility to spread the load of a local LLM across multiple hosts instead of adding GPUs to speed up responses.
My hosts do not have GPUs since I want to be power efficient, but they have a decent amount of RAM (128 GB).
Thx for all ideas. | 2023-11-12T10:17:49 | https://www.reddit.com/r/LocalLLaMA/comments/17thf0f/spread_llm_to_multiple_hosts/ | erick-fear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17thf0f | false | null | t3_17thf0f | /r/LocalLLaMA/comments/17thf0f/spread_llm_to_multiple_hosts/ | false | false | self | 3 | null |
How to bring POC of LLM to production? | 1 | So I have been working on making customer care chatbots using open source models. I get quality responses. The challenge however is latency; chatbots need realtime responses. What are some things I should try to bring latency down?
For starters I am using input tokens around 1k - 1.5k and generation of 250-500 tokens. Right now I am at 3 sec | 2023-11-12T10:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/17thek7/how_to_bring_poc_of_llm_to_production/ | aya_cool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17thek7 | false | null | t3_17thek7 | /r/LocalLLaMA/comments/17thek7/how_to_bring_poc_of_llm_to_production/ | false | false | self | 1 | null |
Anyone interested in contributing to an open-source guide named AI Portal Gun? | 1 | [removed] | 2023-11-12T10:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/17thdih/anyone_interested_in_contributing_to_an/ | Jiraiya27s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17thdih | false | null | t3_17thdih | /r/LocalLLaMA/comments/17thdih/anyone_interested_in_contributing_to_an/ | false | false | self | 1 | null |
Anyone interested in contributing to an open-source guide named AI Portal Gun? | 1 | [removed] | 2023-11-12T10:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/17thb30/anyone_interested_in_contributing_to_an/ | Jiraiya27s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17thb30 | false | null | t3_17thb30 | /r/LocalLLaMA/comments/17thb30/anyone_interested_in_contributing_to_an/ | false | false | self | 1 | null |
LLM as local, HW specific compilers? | 5 | If LLMs can be taught to write assembly (or LLVM) very efficiently, what would it take to create a full or semi-automatic LLM compiler from high languages or even from pseudo-code or human language.
The advantages could be monumental:
- arguably much more efficient utilization of resources on every compile target
- compilation is flexible and not rule-based; an LLM won't complain about a missing ";" as it can "understand" the intent
- it could rewrite much of the software we have today just from the disassembled binaries, to squeeze more out of the HW
- can we convert an assembly block from ARM to RISC? and vice versa?
- potentially, iterative compilation (à la Open Interpreter) could also understand runtime issues and exceptions, giving a "live" assembly that changes as issues arise
>> Any projects exploring this?
>> I feel it is an issue of dimensionality (i.e. "context" size), very similar to having a latent space for entire repos. Do you agree?
Can I run LLM models on RTX 2070 with 8GB? | 2 | I was trying to run some models using text-generation-webui. I was testing mistralai\_Mistral-7B-v0.1, NousResearch\_Llama-2-7b-hf and stabilityai\_StableBeluga-7B, without much success. I remember I needed to switch to 4 bits, and I couldn't make them respond in a good way for chat. So before I go further, is 8GB a problem? Because Stable Diffusion works beautifully. Could you share your experiences with an 8GB GPU? | 2023-11-12T09:56:15 | https://www.reddit.com/r/LocalLLaMA/comments/17th42i/can_i_run_llm_models_on_rtx_2070_with_8gb/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17th42i | false | null | t3_17th42i | /r/LocalLLaMA/comments/17th42i/can_i_run_llm_models_on_rtx_2070_with_8gb/ | false | false | self | 2 | null |
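As a rough sanity check before picking quantizations: weights-only VRAM is about params × bits / 8, plus overhead for the KV cache and activations. A sketch (the 20% fudge factor is an assumption, not a measurement; real usage grows with context length):

```python
def approx_vram_gb(n_params_billion, bits, overhead=1.2):
    # Weights-only footprint in GB (1e9 params ~ 1 GB at 8-bit),
    # inflated by a rough factor for KV cache and activations.
    return n_params_billion * bits / 8 * overhead

# A 7B model at 4-bit comes out around 4.2 GB, which is why 4-bit 7B
# models are the usual recommendation for an 8 GB card like the 2070.
```

By the same estimate, 16-bit 7B weights (~17 GB) clearly do not fit 8 GB, which matches the experience described in the post.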
Can't make the llama-cpp-python respond normally | 4 | Hi, I started using llama-cpp-python with TheBloke/MythoMax-L2-13B-GGUF model.
I launch the server with the following command:

    python3 -m llama_cpp.server --model models/mythomax-l2-13b.Q6_K.gguf --n_gpu_layers 43 --use_mlock true --n_ctx 200 --n_batch 200 --host 0.0.0.0 --verbose true

However, when I send a question to the model, it always returns very weird answers. For example:
My request to /v1/chat/completions:

    {
      "messages": [
        {
          "content": "You are a helpful assistant.",
          "role": "system"
        },
        {
          "content": "What is the capital of France?",
          "role": "user"
        }
      ]
    }
Model answer:

    {
      "id": "chatcmpl-9cd21481-67d3-49a7-b794-b74f75c6b8ea",
      "object": "chat.completion",
      "created": 1699782481,
      "model": "models/mythomax-l2-13b.Q6_K.gguf",
      "choices": [
        {
          "index": 0,
          "message": {
            "content": "\n<{{user}}> The capital of France is Paris.]",
            "role": "assistant"
          },
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 33,
        "completion_tokens": 13,
        "total_tokens": 46
      }
    }
​
**Why does it always come back with these weird tags, like <{{user}}>, <<HUMAN>>, and so on?**
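A common mitigation (a sketch, not verified against this exact server version): pass `stop` sequences in the request so generation halts before the model writes a fake next turn, and/or launch the server with a chat format matching MythoMax's Alpaca-style training data (recent llama-cpp-python versions expose a `--chat_format` flag — an assumption worth checking against your install). The request body would look like:

```python
import json

def chat_payload(user_msg, stop=("\n<", "</s>")):
    # Body for POST /v1/chat/completions. The `stop` strings cut the
    # completion off before a hallucinated "<{{user}}>" turn can appear.
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_msg},
        ],
        "stop": list(stop),
    }

# Send json.dumps(chat_payload("What is the capital of France?"))
# to http://host:8000/v1/chat/completions with Content-Type: application/json.
```

Stop sequences treat the symptom; matching the prompt template the model was fine-tuned on treats the cause.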
An all-encompassing, open-source, and entirely free guide for AI enthusiasts: AI Portal Gun | 1 | [removed] | 2023-11-12T09:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/17tgw6n/an_allencompassing_opensource_and_entirely_free/ | Jiraiya27s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tgw6n | false | null | t3_17tgw6n | /r/LocalLLaMA/comments/17tgw6n/an_allencompassing_opensource_and_entirely_free/ | false | false | self | 1 | null |
OpenAI assistant like experience in Local LLM? | 2 | Hi, just checked out the OpenAI Assistants API; gotta say it's pretty neat. I know Langchain has agents, but the development experience still feels quite different, i.e. function calling vs tools. Would like to know if local LLMs can also do things similar to the Assistants API? | 2023-11-12T09:33:36 | https://www.reddit.com/r/LocalLLaMA/comments/17tgt1u/openai_assistant_like_experience_in_local_llm/ | Sure_Journalist_3207 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tgt1u | false | null | t3_17tgt1u | /r/LocalLLaMA/comments/17tgt1u/openai_assistant_like_experience_in_local_llm/ | false | false | self | 2 | null |
Is there any way to make use of multiple CPU threads for model running in the browser? | 2 | I am trying to create a web application that will utilize an LLM of around 1B size. I tried every library I could find online, but they either rely on WebGPU - like [webllm](https://webllm.mlc.ai/) - which wouldn't work in my use case, or they run on CPU using just one thread (like [transformers.js](https://huggingface.co/docs/transformers.js/index) or [ggml.js](https://rahuldshetty.github.io/ggml.js/#/)), which works fine, albeit slowly. Do you know any other methods to bring LLMs to the browser on the client side? Or to make these existing solutions use multiple CPU threads? | 2023-11-12T09:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/17tgp8c/is_there_any_way_to_make_use_of_multiple_cpu/ | xuisel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tgp8c | false | null | t3_17tgp8c | /r/LocalLLaMA/comments/17tgp8c/is_there_any_way_to_make_use_of_multiple_cpu/ | false | false | self | 2 | null |
Best Architecture for Building Chatbot with Personal Data & Open Source Models | 1 | What is currently your best setup for building and deploying a chatbot with open source models and personal data.
Personally I have tried LlamaIndex, GPT, Streamlit, chromadb & also worked with Langchain & GPT4ALL - but was wondering what solution worked best for you (a combination of which open source services)? | 2023-11-12T09:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/17tgl66/best_architecture_for_building_chatbot_with/ | XhoniShollaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tgl66 | false | null | t3_17tgl66 | /r/LocalLLaMA/comments/17tgl66/best_architecture_for_building_chatbot_with/ | false | false | self | 1 | null |
m3 max 128gb usecase | 1 | Hi,
would the Mac be a good machine for having lots of small models in memory - ready for action - like Whisper, an LLM, TTS, LLaVA - and using them sequentially, or maybe a maximum of two in parallel?
(with respect to the 400 GB/s memory bandwidth limitation)
I'd like to make a powerful agent, like a brain with many parts responsible for specific stuff.
Where I think LLM will come in play with games | 18 | So real quick: I've been exploring local LLMs for a bit. In this video I get into what I think is the future for LLMs; in a nutshell, I think Microsoft will eventually push out a local LLM to machines to cut down on resources and cost. In doing so, it will likely become possible for developers to tap into that local LLM for their games.
The worries I've seen brought up are:

1. Spoilers - As mentioned in the video, it is currently possible (and should always be possible) to solve this via the material sent to the LLM. The LLM can't talk about what it doesn't know.
2. The NPC talks about stuff it shouldn't - Fine-tuning solves this to an extreme degree. The better you prep the model, the less likely it is to go off script - even more so depending on how you code your end.
3. Story lines shouldn't be dynamic - The answer to this is simple: don't use the LLM for those lines or those NPCs.
4. Cost - Assuming I'm right that Microsoft and others will add a local LLM, the local part removes this problem.
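The spoiler-control idea in point 1 can be made concrete: if the prompt only ever contains lore the player has already unlocked, the model simply cannot reveal the rest. A minimal sketch - all names, facts, and chapter numbers below are invented for illustration:

```python
# Toy lore store: each fact carries the chapter at which the player unlocks it.
LORE = [
    {"fact": "The blacksmith sells iron swords.", "unlocked_at": 0},
    {"fact": "The mayor is secretly the villain.", "unlocked_at": 5},
]

def build_npc_prompt(player_chapter, question):
    # Only facts the player has already unlocked ever reach the model.
    known = [e["fact"] for e in LORE if e["unlocked_at"] <= player_chapter]
    return (
        "You are a village NPC. Answer using only the facts below.\n"
        "Facts:\n" + "\n".join(known) + "\n\nPlayer: " + question + "\nNPC:"
    )

early = build_npc_prompt(1, "Any rumors about the mayor?")
late = build_npc_prompt(6, "Any rumors about the mayor?")
```

Because the late-game fact never appears in the early-game prompt, even a jailbroken model has nothing to leak.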
[https://www.youtube.com/watch?v=N31x4qHBsNM](https://www.youtube.com/watch?v=N31x4qHBsNM)
It is possible to have a given NPC show different emotions and direct those emotions, as shown here where I tested it with anger.
[https://www.youtube.com/shorts/5mPjOLT7H-Q](https://www.youtube.com/shorts/5mPjOLT7H-Q)
| 2023-11-12T08:47:37 | https://www.reddit.com/r/LocalLLaMA/comments/17tg76z/where_i_think_llm_will_come_in_play_with_games/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tg76z | false | null | t3_17tg76z | /r/LocalLLaMA/comments/17tg76z/where_i_think_llm_will_come_in_play_with_games/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'Wy89wtxWnrFMhMlaJj_2_dKDH7BhZJZCDqmNaMNf6P8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aJWJu-N0FJQ0QAJmqfhNn5M-P-sAe659CCqutBiGQaw.jpg?width=108&crop=smart&auto=webp&s=fdd8fb765cef40f89da7efb005e600ddb8abe3fd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aJWJu-N0FJQ0QAJmqfhNn5M-P-sAe659CCqutBiGQaw.jpg?width=216&crop=smart&auto=webp&s=a76a62b8e8c8bb201e6aad401a7eba1d4a90b9b4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aJWJu-N0FJQ0QAJmqfhNn5M-P-sAe659CCqutBiGQaw.jpg?width=320&crop=smart&auto=webp&s=bd1998a29d5a26ffa93a427833104a3e28b68ada', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aJWJu-N0FJQ0QAJmqfhNn5M-P-sAe659CCqutBiGQaw.jpg?auto=webp&s=0b4e93799116179647339b67250231408157e1fc', 'width': 480}, 'variants': {}}]} |
Tools for sorting documents based on the person referenced in them? | 1 | There is a need to scan a lot of documents and then sort them to directories based on the person that is referenced in those documents.
The documents are all types of financial and similar documents. It is a case of law office that handles a lot of court appointed legal guardianship cases. Thus they get all the invoices and similar things. Currently the sorting is done by hand after the scanning.
Due to privacy concerns an online service is not suitable and the commercial non web based solutions seem to be based on standard layouts.
So, does anyone know of a system that could be run locally and be able to extract the person referenced from the OCR'd document and then sort the documents into directories based on that person's name? | 2023-11-12T08:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/17tg5h7/tools_for_sorting_documents_based_on_the_person/ | Luvirin_Weby | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tg5h7 | false | null | t3_17tg5h7 | /r/LocalLLaMA/comments/17tg5h7/tools_for_sorting_documents_based_on_the_person/ | false | false | self | 1 | null |
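For what it's worth, if the office already has the list of wards, the "which person is this document about" step doesn't necessarily need an LLM at all: a local script can match OCR output against the known names and only fall back to manual sorting (or a local NER model) when nothing matches. A rough sketch with invented names:

```python
import os
import re
import shutil
from typing import Optional

WARDS = ["Anna Virtanen", "Erik Lindqvist"]  # invented example names

def find_ward(ocr_text: str) -> Optional[str]:
    """Return the known ward name mentioned in the text, if any."""
    for name in WARDS:
        if re.search(re.escape(name), ocr_text, flags=re.IGNORECASE):
            return name
    return None

def route_document(ocr_text: str, src_path: str, out_dir: str) -> Optional[str]:
    """Move a scanned file into a per-ward directory, or leave it for manual sorting."""
    ward = find_ward(ocr_text)
    if ward is None:
        return None  # no known ward mentioned; keep in the inbox
    dest = os.path.join(out_dir, ward.replace(" ", "_"))
    os.makedirs(dest, exist_ok=True)
    shutil.move(src_path, dest)
    return dest
```

Exact matching keeps everything offline and auditable; an NER pass would only be needed for documents that mention wards with OCR errors or name variants.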
Llama-cpp-python with memory | 1 | [removed] | 2023-11-12T08:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/17tg3zv/llamacpppython_with_memory/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tg3zv | false | null | t3_17tg3zv | /r/LocalLLaMA/comments/17tg3zv/llamacpppython_with_memory/ | false | false | self | 1 | null |
GTX 4070ti and 32gb RAM run Llama 13b? | 5 | Hey everyone,
Looking to get into some ML.
Can an RTX 4070 Ti with 12GB of VRAM alongside 32GB of RAM run a 13B model comfortably?
I seem to read conflicting opinions on this.
Thank you! | 2023-11-12T08:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/17tfy5y/gtx_4070ti_and_32gb_ram_run_llama_13b/ | DarkSouth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tfy5y | false | null | t3_17tfy5y | /r/LocalLLaMA/comments/17tfy5y/gtx_4070ti_and_32gb_ram_run_llama_13b/ | false | false | self | 5 | null |
intermediate attention values of llama | 3 | Is anyone aware of how to obtain attention values of LLaMA model? For example, if I want to obtain attention values (of size 4096) from layer 24. How do I get them? | 2023-11-12T07:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/17tf62t/intermediate_attention_values_of_llama/ | 1azytux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tf62t | false | null | t3_17tf62t | /r/LocalLLaMA/comments/17tf62t/intermediate_attention_values_of_llama/ | false | false | self | 3 | null |
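With Hugging Face transformers, the seq-by-seq attention weights come back via output_attentions=True, while size-4096 vectors as described in the post are the attention block's output, which can be captured with a forward hook on (for the real model) something like model.model.layers[24].self_attn. The sketch below uses a stand-in nn.Linear so it runs without downloading weights; the hook mechanics are identical:

```python
import torch
import torch.nn as nn

# Stand-in for one decoder layer's attention block. With a real LLaMA you
# would register the hook on model.model.layers[24].self_attn instead
# (module name per the HF implementation; adjust to your checkpoint).
attn_block = nn.Linear(4096, 4096)

captured = {}

def save_output(module, inputs, output):
    # Store the block's output activations for later inspection.
    captured["attn_out"] = output.detach()

handle = attn_block.register_forward_hook(save_output)
x = torch.randn(1, 8, 4096)   # (batch, seq_len, hidden_dim)
attn_block(x)                 # the forward pass fires the hook
handle.remove()
```

Each token position then yields one 4096-dimensional vector in captured["attn_out"].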
llama.cpp running the ai models with less ram | 1 | Hi, when running AI models I notice that the amount of RAM used is a lot less than claimed; however, performance differs greatly based on how much RAM the machine has. My machines have a limited number of RAM slots, but given this behaviour, are the models cached into RAM instead? For instance, to run Llama 70B I have to rent an expensive AWS EC2 instance, and the responses differ greatly: with 13B the model does not answer the question - I just get the question back - but with 70B it does. I would still like to be able to run 70B, and I am using a similar chip architecture with avx_vnni. If there isn't enough RAM, would it be possible to create a RAM drive split across multiple machines and use 10Gb/s NICs? I have used SFP+ NICs and SFP+ slots in my switch.
Are there ways to speed up running larger models with less memory without quantising them to lower accuracy? | 2023-11-12T07:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/17tf310/llamacpp_running_the_ai_models_with_less_ram/ | SystemErrorMessage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tf310 | false | null | t3_17tf310 | /r/LocalLLaMA/comments/17tf310/llamacpp_running_the_ai_models_with_less_ram/ | false | false | self | 1 | null |
Cannot get Mistral running in Ooba, but have transformers updated. | 1 | Read some huggingface stuff, but I am very confused as to what my issue is. I have the latest version of ooba and transformers, and am at a loss.
Loading mistralai_Mistral-7B-v0.1...
Traceback (most recent call last):
File "F:\oobabooga_windows\text-generation-webui\server.py", line 914, in <module>
shared.model, shared.tokenizer = load_model(shared.model_name)
File "F:\oobabooga_windows\text-generation-webui\modules\models.py", line 71, in load_model
shared.model_type = find_model_type(model_name)
File "F:\oobabooga_windows\text-generation-webui\modules\models.py", line 59, in find_model_type
config = AutoConfig.from_pretrained(Path(f'{shared.args.model_dir}/{model_name}'))
File "F:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 937, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "F:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 643, in __getitem__
raise KeyError(key)
KeyError: 'mistral' | 2023-11-12T06:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/17te8z1/cannot_get_mistral_running_in_ooba_but_have/ | Siigari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17te8z1 | false | null | t3_17te8z1 | /r/LocalLLaMA/comments/17te8z1/cannot_get_mistral_running_in_ooba_but_have/ | false | false | self | 1 | null |
We eat food and produce shit. We have physical plumbing to deal with this. We read stuff (a combo of truths, lies, what-ifs) and we produce mental shit. We need digital plumbing to deal with digital shit. LLMs as assistants could help with digital shit and be part of digital plumbing infrastructure. | 0 | I am sure this scenario has already been described in sci-fi novels.
The idea is that LLMs, being storehouses of knowledge and basic common sense, would be our personal fact-checkers, making sure our digital history is accurate when we are being serious and want to put it out there for the world to read. Of course, this is a future scenario; I'm not sure how far in the future. | 2023-11-12T05:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/17tdito/we_eat_food_and_produce_shit_we_have_physical/ | Easy_Butterfly2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tdito | false | null | t3_17tdito | /r/LocalLLaMA/comments/17tdito/we_eat_food_and_produce_shit_we_have_physical/ | false | false | self | 0 | null |
How do I use both of my 4090s to run a 70B model, please? | 2 | I keep reading about people saying they are using multiple GPUs but I do not understand how to do it.
I'm using text-gen-ui.
Thank you. | 2023-11-12T05:27:34 | https://www.reddit.com/r/LocalLLaMA/comments/17tdfm4/how_do_i_use_both_of_my_4090s_to_run_a_70b_model/ | Siigari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tdfm4 | false | null | t3_17tdfm4 | /r/LocalLLaMA/comments/17tdfm4/how_do_i_use_both_of_my_4090s_to_run_a_70b_model/ | false | false | self | 2 | null |
Some help compiling llama.cpp on an Arc A770 (Arch Linux)? | 2 | While I work on RustiCL... (until Intel finishes the Vulkan driver actually).
I'm trying to build llama.cpp and llama-cpp-python for oneAPI (actually with CLBlast). I ran make LLAMA_CLBLAST=1 with the Base Toolkit and the OpenCL ICD installed in Conda. With ocl-icd-system, Intel OpenCL Graphics is detected by clinfo, and I did not set the variable for RustiCL.
make LLAMA_CLBLAST=1 doesn't give acceleration. I always get BLAS=0, and it runs exclusively on the CPU with no hint of any layers offloaded to the GPU.
cmake .. -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_64lp -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx is what the GitHub docs say to use to build for MKL. But that command ends with ld unable to find crtbeginS.o, so I copied the oneAPI installation's copy from /opt to /lib and /lib64 to see what would happen, but then I get -lgcc not found. So the "proper" CMake way didn't work either.
Any ideas on how to compile llama-cpp-python (for oobabooga) and standalone llama.cpp with Arc acceleration? | 2023-11-12T04:22:42 | https://www.reddit.com/r/LocalLLaMA/comments/17tcfs4/some_help_compiling_llamacpp_on_an_arc_a770_arch/ | A_Degenerate_Idiot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tcfs4 | false | null | t3_17tcfs4 | /r/LocalLLaMA/comments/17tcfs4/some_help_compiling_llamacpp_on_an_arc_a770_arch/ | false | false | self | 2 | null |
How to achieve more than 4k context? | 54 | People talk about it around here like this is pretty simple (these days at least). But once I hit about 4200-4400 tokens (with my limit pushed to 8k) all I get is gibberish. This is with the LLaMA2-13B-Tiefighter-AWQ model, which seems highly regarded for roleplay/storytelling (my use case).
I also tried OpenHermes-2.5-Mistral-7B and it was nonsensical from the very start oddly enough.
I'm using Silly Tavern with Oobabooga, sequence length set to 8k in both, and a 3090. I'm pretty new to all of this and it's been difficult finding information that isn't outdated. The term fine-tuning comes up a lot, and with it comes a whooooole lot of complicated coding talk I know nothing about.
As a layman, is there a way to achieve 8k (or more) context for a roleplay/storytelling model?
​ | 2023-11-12T04:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/17tc993/how_to_achieve_more_than_4k_context/ | Doctor_Turkleton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tc993 | false | null | t3_17tc993 | /r/LocalLLaMA/comments/17tc993/how_to_achieve_more_than_4k_context/ | false | false | self | 54 | null |
Is it possible to install the NVIDIA 535 CUDA driver on Debian 12? | 1 | I'm having some issues with the 545 driver and wanted to try a slightly older version but noticed that the 535 one is not in the NVDIA repo for Debian 12. Is it possible to still get it somehow? Thanks! | 2023-11-12T01:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/17t9t75/is_it_possible_to_install_the_nvidia_535_cuda/ | SpookySuper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t9t75 | false | null | t3_17t9t75 | /r/LocalLLaMA/comments/17t9t75/is_it_possible_to_install_the_nvidia_535_cuda/ | false | false | self | 1 | null |
Mac studio 60\76 core ? | 1 | [removed] | 2023-11-12T01:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/17t9en6/mac_studio_6076_core/ | No-Vermicelli5327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t9en6 | false | null | t3_17t9en6 | /r/LocalLLaMA/comments/17t9en6/mac_studio_6076_core/ | false | false | self | 1 | null |
Multipurpose AI app for all your AI interests and services. | 1 | [removed] | 2023-11-12T01:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/17t8xrm/multipurpose_ai_app_for_all_your_ai_interests_and/ | EtelsonRecomputing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t8xrm | false | null | t3_17t8xrm | /r/LocalLLaMA/comments/17t8xrm/multipurpose_ai_app_for_all_your_ai_interests_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
RAG - Vectara's Hallucination leaderboard | 31 | Vectara's Hallucination Evaluation Model and leaderboard were launched last week.
I notice Mistral having a hallucination rate of 9.4%, compared to 5.6% for Llama 2.
Any thoughts?
[Source: https:\/\/github.com\/vectara\/hallucination-leaderboard](https://preview.redd.it/sj0akn15tszb1.png?width=1118&format=png&auto=webp&s=ca9ec766f592a8748bf95a8ad2ef81483c2270bd) | 2023-11-11T22:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/17t6chj/rag_vectaras_hallucination_leaderboard/ | AdamDhahabi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t6chj | false | null | t3_17t6chj | /r/LocalLLaMA/comments/17t6chj/rag_vectaras_hallucination_leaderboard/ | false | false | 31 | {'enabled': False, 'images': [{'id': '2vb8EPn9fty8tql9lCBKE8O7OEn1TO0KMw1bnyi7qcM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?width=108&crop=smart&auto=webp&s=4682ed9ad3d7f16c761d7b5357659c7e2754e1f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?width=216&crop=smart&auto=webp&s=302cbd6d37663ae38dc376a166b145a8634e52ae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?width=320&crop=smart&auto=webp&s=143b6b2eff475a80cc3d0c4eb9260d5eb78829bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?width=640&crop=smart&auto=webp&s=e91bc0c2749d833d676aa304721303b96e2c0bb5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?width=960&crop=smart&auto=webp&s=646e23201d3279fb08d09ec2892b04bdf7c25c58', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?width=1080&crop=smart&auto=webp&s=58663685ee0fdd1db81b3eff9a21ddb778f8dbbf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Aq58VAfg6y_mdeCBsSL__Y0IfYu69CeUp2_auIEXkNo.jpg?auto=webp&s=020f60ffbe46704635c110fa0c7d7d3cbc8526b6', 'width': 1200}, 'variants': {}}]} | |
LLM vision utilities to enable web browsing with LLMs | 7 | Wanted to share our work on Tarsier here, an open source utility library that enables LLMs like GPT-4 and GPT-4 Vision to browse the web. The library helps answer the following questions:
- How do you map LLM responses back into web elements?
- How can you mark up a page for an LLM to better understand its action space?
- How do you feed a "screenshot" to a text-only LLM?
We do this by tagging "interactable" elements on the page with an ID, enabling the LLM to connect actions to an ID which we can then translate back into web elements. We also use OCR to translate a page screenshot to a spatially encoded text string such that even a text only LLM can understand how to navigate the page.
View a demo and read more on GitHub: https://github.com/reworkd/tarsier | 2023-11-11T21:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/17t4hyo/llm_vision_utilities_to_enable_web_browsing_with/ | asim-shrestha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t4hyo | false | null | t3_17t4hyo | /r/LocalLLaMA/comments/17t4hyo/llm_vision_utilities_to_enable_web_browsing_with/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'H7P7mxgoWPXQJip0tdF8bnO0aWwc9vDk0036ITGcIX8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?width=108&crop=smart&auto=webp&s=6023b959bb5f016acead3b5165eae4f1ccff2895', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?width=216&crop=smart&auto=webp&s=f3cd3b4c43cf8adffe01aac66dfaa1870ffc7d94', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?width=320&crop=smart&auto=webp&s=9f564ea441a484ef549135bb24896969018459e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?width=640&crop=smart&auto=webp&s=54cc08028c73f394c4cc745e30d1f4aa449234c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?width=960&crop=smart&auto=webp&s=344aa8bf0ab59ed6270df6ef1fec6426a1416dbc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?width=1080&crop=smart&auto=webp&s=1a60b4fdb6d37a88fcfd7370f598d2ff44b1d65d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SQnKOQ2ynhTKR8qig1oE0YFKW4fyTeOsvTu_6udsOf4.jpg?auto=webp&s=a5f040d28e750df730ec39f670172c604bc41989', 'width': 1200}, 'variants': {}}]} |
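A toy version of the "screenshot for a text-only LLM" idea (an illustration of the concept, not Tarsier's actual code): drop each OCR'd word onto a character grid at a position scaled from its pixel coordinates, so rough page layout survives as whitespace:

```python
def spatial_encode(words, width=40, height=6, page_w=800, page_h=600):
    """words: list of (text, x, y) OCR boxes in page pixels."""
    grid = [[" "] * width for _ in range(height)]
    for text, x, y in words:
        # Scale pixel coordinates down to grid coordinates.
        col = int(x / page_w * width)
        row = int(y / page_h * height)
        for i, ch in enumerate(text):
            if col + i < width:
                grid[row][col + i] = ch
    return "\n".join("".join(row).rstrip() for row in grid)

page = spatial_encode([
    ("Search", 40, 30),    # top-left widget
    ("Login", 650, 30),    # top-right widget
    ("Results", 40, 300),  # mid-page
])
```

Printing the result shows "Search" and "Login" on the same line with a gap between them, hinting at their on-screen positions even without any image input.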
Automating political compass tests for LLMs | 9 | 2023-11-11T20:21:08 | https://github.com/andrewimpellitteri/llm_poli_compass | Alone_Ad7391 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17t2wzv | false | null | t3_17t2wzv | /r/LocalLLaMA/comments/17t2wzv/automating_political_compass_tests_for_llms/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'ZhlKVveVKIaFxD_D0iGxHksGEegptzKjTNOCwfYCR2I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?width=108&crop=smart&auto=webp&s=7473106e6092b0fec6cacab0df1e4d931523cecd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?width=216&crop=smart&auto=webp&s=ac470d778d359fc4169a73a80d5326a21b80b5c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?width=320&crop=smart&auto=webp&s=c37d29b1e550b11d5bdcacb242357f67801af5e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?width=640&crop=smart&auto=webp&s=185ff1956bbfbe8f88096f9667adb60f0f799385', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?width=960&crop=smart&auto=webp&s=727687fdf2756c86dc6a40ea7ce5ad309b1d3610', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?width=1080&crop=smart&auto=webp&s=3ba1e6507f04488b62fe8598b1d5f4b242055a0c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h13w0xhd00lW6sXyc5d6tb6senyVbwjFV8hdKODuQtY.jpg?auto=webp&s=ac058c8f0d0a788ac4c01f9c36a272ad13586a50', 'width': 1200}, 'variants': {}}]} | ||
What’s recommended hosting for open source LLMs? | 41 | The use case is that I want to create a service based on Mistral 7B that will serve an internal office of 8-10 users.
I’ve been looking at modal.com, and runpod. Are there any other recommendations? | 2023-11-11T20:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/17t2oq6/whats_recommended_hosting_for_open_source_llms/ | decruz007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t2oq6 | false | null | t3_17t2oq6 | /r/LocalLLaMA/comments/17t2oq6/whats_recommended_hosting_for_open_source_llms/ | false | false | self | 41 | null |
Mirror: A hackable AI-powered Mirror on Your Laptop | 27 | 2023-11-11T19:59:30 | https://x.com/cocktailpeanut/status/1722314518208946480?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 17t2ft0 | false | null | t3_17t2ft0 | /r/LocalLLaMA/comments/17t2ft0/mirror_a_hackable_aipowered_mirror_on_your_laptop/ | false | false | 27 | {'enabled': False, 'images': [{'id': '4VQLCANpqxF9wPqSwmkfmx_gI8rEFmCyOSTg1nmJXOg', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?width=108&crop=smart&auto=webp&s=6277aee2a68180c279f6157dc7ac6d14f84fce53', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?width=216&crop=smart&auto=webp&s=e681c183484e0924bdb787bc4f96b9e4ab81525d', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?width=320&crop=smart&auto=webp&s=6c915df805a0a142d4416175df0ee542b1a76a0c', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?width=640&crop=smart&auto=webp&s=39847998770f340db64cea6438087a49bd7ff083', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?width=960&crop=smart&auto=webp&s=6f309561d0ebcd3b6589e488abf51980347b150f', 'width': 960}, {'height': 676, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?width=1080&crop=smart&auto=webp&s=dec8841e8d9828a4c3239da576d14245b3b651b0', 'width': 1080}], 'source': {'height': 1282, 'url': 'https://external-preview.redd.it/9meoLQbQm_oH1B35tzigGcGVgo9xkZ7LTe476Pk5HD4.jpg?auto=webp&s=66ee8a3f10fa4bb271e8ed0408e2261faf1a8109', 'width': 2048}, 'variants': {}}]} | ||
Which model is best for binary text classification? | 7 | Hi. I am a bit new to NLP and ML as a whole, and I am looking to create a text classification model. I have tried DeBERTa and the results are decent (about 70%), but I need more accuracy. Are generative models a better alternative, or should I stick to smaller models like BERT - or maybe even non-NN classifiers - and work on better dataset quality? | 2023-11-11T19:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/17t1osz/which_model_is_best_for_binary_text_classification/ | Shoddy_Vegetable_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t1osz | false | null | t3_17t1osz | /r/LocalLLaMA/comments/17t1osz/which_model_is_best_for_binary_text_classification/ | false | false | self | 7 | null |
need help with the bloke's mistral openhermes 2.5 | 1 | [removed] | 2023-11-11T18:49:06 | https://www.reddit.com/r/LocalLLaMA/comments/17t0wna/need_help_with_the_blokes_mistral_openhermes_25/ | The_Happy_Hangman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t0wna | false | null | t3_17t0wna | /r/LocalLLaMA/comments/17t0wna/need_help_with_the_blokes_mistral_openhermes_25/ | false | false | 1 | null | |
Question about the 'economics' of running a LLM locally? | 39 | On the one hand, I get that this is very much an 'enthusiast' sub, and many of you are doing this because you were the type to have a 4090 already.
On the other, as someone interested in LLMs, Stable Diffusion, and AI, I'm not sure if investing in the hardware to run these things locally makes economic sense at all. I spec'd out a damned nice workstation at Micro Center the other day and the bill was over $4000. Even the GPU alone was over $1700.
If you take a really sober look at the numbers, how does running your own system make sense over renting hardware at RunPod or a similar service? The overall sentiment I get from reading the posts here is that a large majority of users are using their 3090s to crank out smut. Hey, no judgement, but do you really think RunPod cares what you run as long as it doesn't put them in legal jeopardy?
A 4090 is $.50/hr on some services. Even if you assumed 10h / wk of usage over like 5 years that's still probably less than the depreciation and power usage of running it locally.
TLDR: I know some of you are doing this simply 'because you can' but the value proposition looks sketchy as an outsider. | 2023-11-11T18:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/17t0m81/question_about_the_economics_of_running_a_llm/ | tacticalTraumaLlama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17t0m81 | false | null | t3_17t0m81 | /r/LocalLLaMA/comments/17t0m81/question_about_the_economics_of_running_a_llm/ | false | false | self | 39 | null |
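For the record, the post's own numbers pencil out like this (a sketch using only the figures quoted above, not market research):

```python
rate_per_hour = 0.50      # rented 4090, $/hr (figure from the post)
hours_per_week = 10
years = 5

# Total rental cost at that usage level over the whole period.
rental_total = rate_per_hour * hours_per_week * 52 * years

gpu_alone = 1700.0        # local 4090 price quoted above
workstation = 4000.0      # full build quoted above

print(f"5y rental: ${rental_total:.0f}  GPU alone: ${gpu_alone:.0f}  full build: ${workstation:.0f}")
```

At 10 hours a week the rental total stays below even the bare GPU price, which is the break-even intuition the post is gesturing at; heavier usage shifts the balance back toward owning.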
Weekly AI News Aggregate | 1 | [removed] | 2023-11-11T17:53:23 | https://medium.com/@webtek.ai/neural-narratives-ai-ml-chronicles-of-the-week-11-10-2f2789e1f3b4 | pinnapple-crush | medium.com | 1970-01-01T00:00:00 | 0 | {} | 17szqas | false | null | t3_17szqas | /r/LocalLLaMA/comments/17szqas/weekly_ai_news_aggregate/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'epNryes8g21Sfk2C_huDQcZfjMI5aI0kpi5ubR_B3RQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/AwR5KMz2LIkadl5UUj-zSjFLmlUCsv9On_jgyLlrSiA.jpg?width=108&crop=smart&auto=webp&s=894566271679722a0f0d049d309d433e4952b841', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/AwR5KMz2LIkadl5UUj-zSjFLmlUCsv9On_jgyLlrSiA.jpg?width=216&crop=smart&auto=webp&s=4c852fb7802bff3814beec8d89facc089d7c01d7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/AwR5KMz2LIkadl5UUj-zSjFLmlUCsv9On_jgyLlrSiA.jpg?width=320&crop=smart&auto=webp&s=ce662e0eaa68f5146dc0310d05cf43b1b42a7a83', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/AwR5KMz2LIkadl5UUj-zSjFLmlUCsv9On_jgyLlrSiA.jpg?auto=webp&s=a752f14ba4a2278bc312f9e6c0114c078743e3d3', 'width': 460}, 'variants': {}}]} | |
I’m looking for instructions on how to run the complete Goliath-120B model on an M3 Mac with 128G RAM. | 2 | Any pointers? I’ve used the OpenAI and Anthropic APIs a lot but am not sure how to start accessing models locally. | 2023-11-11T17:52:44 | https://www.reddit.com/r/LocalLLaMA/comments/17szpsr/im_looking_for_instructions_on_how_to_run_the/ | inyourfaceplate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17szpsr | false | null | t3_17szpsr | /r/LocalLLaMA/comments/17szpsr/im_looking_for_instructions_on_how_to_run_the/ | false | false | self | 2 | null |
Image recognition with oobabooga webui? | 2 | Hello, I tested GPT 4 and it is wonderful for accurately captioning images. I'm wondering if anything similar is possible locally as I'm not sure what OpenAI stance is on images that are uploaded to their system. | 2023-11-11T17:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/17syu3w/image_recognition_with_oobabooga_webui/ | Suimeileo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17syu3w | false | null | t3_17syu3w | /r/LocalLLaMA/comments/17syu3w/image_recognition_with_oobabooga_webui/ | false | false | self | 2 | null |
AI Portal Gun | 1 | Ever felt lost in the boundless AI universe? Say goodbye to confusion! We've built [https://www.portalgunai.org/](https://www.portalgunai.org/) to transport you to the correct dimension to deal with that.
In the vast AI cosmos, knowledge scatters like stardust. To guide your journey from AI novice to expert, we've forged an AI Portal Gun, helping you navigate the ever-evolving AI universe. Get ready to teleport to a planet filled with free, expert-curated tutorials, guides, articles, courses, papers, and books. Unlock the power to conquer the AI universe with these resources!
Tailored Adventures: Dive deep into your AI passions, whether it's computer vision, NLP, deep learning, AI in healthcare, robotics, the mathematics behind AI's core principles, or more. A universe of wisdom awaits! Embark on a quest through generative AI!
Unleash your creativity with audio, video, images, and code generation. The cosmos is your canvas.
Our content is open-source and free. Contribute to our GitHub ([https://github.com/severus27/AI-Portal-Gun](https://github.com/severus27/AI-Portal-Gun) ) by adding resources or enhancing content. Help us grow the learning experience. Your contribution matters! | 2023-11-11T16:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/17sy901/ai_portal_gun/ | Jiraiya27s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sy901 | false | null | t3_17sy901 | /r/LocalLLaMA/comments/17sy901/ai_portal_gun/ | false | false | default | 1 | null |
Looking for open-source contributors for text-embedding server for inference | 40 | Around 1.5 months ago, I started [https://github.com/michaelfeil/infinity](https://github.com/michaelfeil/infinity). With the hype around Retrieval-Augmented Generation, this topic has become important over the last month, in my view - with this repo being the only option under an open license.
I now implemented everything from faster attention, onnx / ctranslate2 / torch inference, caching, better docker images, better queueing stategies. Now I am pretty much running out of ideas - if you got some, feel free to open an issue, would be very welcome! | 2023-11-11T16:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/17sxmf5/looking_for_opensource_contributors_for/ | OrganicMesh | self.LocalLLaMA | 2023-11-12T00:50:11 | 0 | {} | 17sxmf5 | false | null | t3_17sxmf5 | /r/LocalLLaMA/comments/17sxmf5/looking_for_opensource_contributors_for/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'YDTMgZ0cRKlbmMhGUEF0-0_QRfu_fQuAilAbfPhNMK8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?width=108&crop=smart&auto=webp&s=a251e77be17412f58b488d08f0fb1100b9ae33f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?width=216&crop=smart&auto=webp&s=c88192de1cbe78a5b882b077aa9717502d93c683', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?width=320&crop=smart&auto=webp&s=bc35f8ded7f649e021860ac82cafbab1563579ff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?width=640&crop=smart&auto=webp&s=f047651b41a07ffd0505d765aa3ab5b503086de2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?width=960&crop=smart&auto=webp&s=8f40dfc604e4a39b33fc96a3020cd04054843647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?width=1080&crop=smart&auto=webp&s=dfcbb81b3887af1601bcc010ce99d35a4a2a1f38', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/u1XpiC-wFEmGnDoil5oEtPZV23AXvq6rhMpwQlOsQGc.jpg?auto=webp&s=ad5e6d3c0d1c222f011b7f08ac1aa63a835b1e0c', 'width': 1200}, 'variants': {}}]} |
macOS with an AMD GPU | 1 | I have a macOS Ventura hackintosh with 64GB DDR5-6000 RAM, a 13900K CPU, and a 6950 XT 16GB GPU. I'm hoping to run a GPU-accelerated LLaMA for coding (or at least for fun). I know Apple Silicon chips have good support, but I can barely find anything on x86 Macs with Radeon GPUs. The last x86 Mac Pros shipped with RDNA2 GPUs, so they are well supported on macOS (and are probably the last dedicated GPUs that will ever be supported).
I've seen that Metal is supported (e.g. https://www.reddit.com/r/LocalLLaMA/comments/140q3bn/metal_inference_running_on_apple_gpus_now_merged/). My GPU, just like Apple Silicon, interfaces directly with Metal, so I'm curious if it's compatible.
Does anybody have any insight on this? I'd also like to know what kind of performance to expect from a 6950 XT.
Thank you | 2023-11-11T14:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/17svh0d/macos_with_an_amd_gpu/ | virtualmnemonic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17svh0d | false | null | t3_17svh0d | /r/LocalLLaMA/comments/17svh0d/macos_with_an_amd_gpu/ | false | false | self | 1 | null |
Newb question about using llama for private data. | 6 | I'm interested in setting up an LLM so I can feed it my email inbox and Teams conversation history and whatnot, so I can query it instead of laboriously searching for the info I want.
I'm confused by the concept of model download. For example, GGUF has several versions available. Are these models capable of responding with data I provide, or are they "trained" and closed to new information?
Is it possible to accomplish what I want with online services like ChatGPT? | 2023-11-11T13:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/17sulyb/newb_question_about_using_llama_for_private_data/ | funkenpedro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sulyb | false | null | t3_17sulyb | /r/LocalLLaMA/comments/17sulyb/newb_question_about_using_llama_for_private_data/ | false | false | self | 6 | null |
Faster prompt processing on cpu? | 7 | I'm trying to run mistral 7b on my laptop, and the inference speed is fine (~10T/s), but prompt processing takes very long when the context gets bigger (also around 10T/s). I've tried quantizing the model, but that doesn't speed up processing, only generation. I've also tried using openblas, but that didn't provide much speedup. I'm using koboldcpp's prompt cache, but that doesn't help with initial load times (which are so slow the connection times out)
From my other testing, smaller models are faster at prompt processing, but they tend to completely ignore my prompts and just go off in random directions.
So my questions are: 1) is there a way to speed up prompt processing for Mistral (using koboldcpp, preferably), or 2) if not, are there any coherent models around 3B parameters that support contexts around 4k?
Edit: I misremembered the generation speed. It's around 10 T/s for generation only. It's changed now in the original post | 2023-11-11T13:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/17stzkm/faster_prompt_processing_on_cpu/ | very-cis-femgirl | self.LocalLLaMA | 2023-11-11T17:29:24 | 0 | {} | 17stzkm | false | null | t3_17stzkm | /r/LocalLLaMA/comments/17stzkm/faster_prompt_processing_on_cpu/ | false | false | self | 7 | null |
Same Model, different clients, different Responses - How Come? | 8 | 2023-11-11T12:31:44 | platinums99 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17stfme | false | null | t3_17stfme | /r/LocalLLaMA/comments/17stfme/same_model_different_clients_different_responses/ | false | false | 8 | {'enabled': True, 'images': [{'id': 'DnMNOBYQEiwAC9ytPb5q1Q8sQ3vCEwq2pjqH4EIcCzE', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/35yd9po8qpzb1.png?width=108&crop=smart&auto=webp&s=646568eb62c2174b5f1c6c2efb1b4118c60a1610', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/35yd9po8qpzb1.png?width=216&crop=smart&auto=webp&s=da7fbd6538c06e6a0a92362988fab7b8a17968cb', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/35yd9po8qpzb1.png?width=320&crop=smart&auto=webp&s=3d1360f1b1127f2d0d32dd9b49ab042c71d3d0b6', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/35yd9po8qpzb1.png?width=640&crop=smart&auto=webp&s=428fe0e4b2e1f0b48a2b36ee000a54a17b214e8f', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/35yd9po8qpzb1.png?width=960&crop=smart&auto=webp&s=8aaf7a8e8745a6202176d52555d1435da9262e69', 'width': 960}], 'source': {'height': 510, 'url': 'https://preview.redd.it/35yd9po8qpzb1.png?auto=webp&s=40d186e56ea7d7926b294ad69aaf00d358245e06', 'width': 979}, 'variants': {}}]} | |||
Same Model, different clients, different Responses - How Come? | 1 | [deleted] | 2023-11-11T12:30:31 | platinums99 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17stexw | false | null | t3_17stexw | /r/LocalLLaMA/comments/17stexw/same_model_different_clients_different_responses/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'u5Uh-t9Gn7B-quirfkqIHrPU2du7YIAcQ4bcSccLgPE', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?width=108&crop=smart&auto=webp&s=4e0ff4304f1bd9ea593b9a70b5304e9329a5aa85', 'width': 108}, {'height': 50, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?width=216&crop=smart&auto=webp&s=476035c007ef3e8d06c2e04a5857f8091ee8e7fa', 'width': 216}, {'height': 74, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?width=320&crop=smart&auto=webp&s=f16aa6b8a5e070d2a29005713590fd140b317bb6', 'width': 320}, {'height': 149, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?width=640&crop=smart&auto=webp&s=3bf1960629aa29cd0bbe80aec75c577657f41228', 'width': 640}, {'height': 224, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?width=960&crop=smart&auto=webp&s=fcd689ffe4d537c9629c4c2dc600a2ab7c844fbb', 'width': 960}, {'height': 252, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?width=1080&crop=smart&auto=webp&s=b0d7e39186f531cca4cf2f7556b12491e8ba5cf6', 'width': 1080}], 'source': {'height': 263, 'url': 'https://preview.redd.it/xirwda20qpzb1.png?auto=webp&s=793d95bfad638fc5617dca8dbc0b049ecb928c11', 'width': 1123}, 'variants': {}}]} | ||
Is it possible to make an NSFW version of OpenBuddy Llama2 70b? | 1 | I really like OpenBuddy Llama2 70b as a multilingual LLM, especially in Asian languages. But is it possible to create an NSFW version of it? Maybe someone already did? And if it's possible, how much money would it cost? Perhaps I could do it myself or sponsor someone else doing it...
[**openbuddy-llama2-70b-v10.1-bf16**](https://huggingface.co/OpenBuddy/openbuddy-llama2-70b-v10.1-bf16) | 2023-11-11T11:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/17ssxbn/is_it_possible_to_make_nsfw_version_of_openbuddy/ | SeaworthinessLow4382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ssxbn | false | null | t3_17ssxbn | /r/LocalLLaMA/comments/17ssxbn/is_it_possible_to_make_nsfw_version_of_openbuddy/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'll5trj3RBFF2dkir-JdkLcApgwOJ-cOQ5WvkTPGGBlA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=108&crop=smart&auto=webp&s=edf5b242bf1480c5df5f1ca9a3af4187a7a620ce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=216&crop=smart&auto=webp&s=4f1b3e34beca8ed1b85fa755867fe2b15fc9892d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=320&crop=smart&auto=webp&s=9cfdf25b169b48c110c136233683b1c46f716ce9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=640&crop=smart&auto=webp&s=817faf0c55db6fddd3f3351aca2502bc4845a81c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=960&crop=smart&auto=webp&s=d86bed7d9116bc0fae86fd28f55633496e82127f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=1080&crop=smart&auto=webp&s=f3c11ba3b3f51d62bad21b651cabc11b06da1b89', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?auto=webp&s=1fb2b668c86602f15cf782ffb12a366afc9eb53f', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 
'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=bf4b3e4a1a65b610f5d2423f521663f9b43f6de1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=027ea8c649aeda390b258feb617b43f911080074', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b837bbf17a4b9aefbfba59406e4ed1881c49382c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=77d0e253c6dad0abd4e52dd86bf0ec0ebb9e4dc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=07f97b6d65de562463e077589dc252ebfd06f278', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=88f20abae4eb94d2d98968e5c99aa4e399bbe1d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?blur=40&format=pjpg&auto=webp&s=7c43537d381b76da28829978650a759e47220afb', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=bf4b3e4a1a65b610f5d2423f521663f9b43f6de1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=027ea8c649aeda390b258feb617b43f911080074', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b837bbf17a4b9aefbfba59406e4ed1881c49382c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=77d0e253c6dad0abd4e52dd86bf0ec0ebb9e4dc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=07f97b6d65de562463e077589dc252ebfd06f278', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=88f20abae4eb94d2d98968e5c99aa4e399bbe1d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tzA74enFCW5cXbNhpEh-iolNuqnlJtnAxzCuYNW5jI0.jpg?blur=40&format=pjpg&auto=webp&s=7c43537d381b76da28829978650a759e47220afb', 'width': 1200}}}}]} |
Operating System for local LLM | 7 | Which operating system do you use for local LLM work?
[View Poll](https://www.reddit.com/poll/17ssrg2) | 2023-11-11T11:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/17ssrg2/operating_system_for_local_llm/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ssrg2 | false | null | t3_17ssrg2 | /r/LocalLLaMA/comments/17ssrg2/operating_system_for_local_llm/ | false | false | self | 7 | null |
Integrated third party apps within llama? | 1 | Have you been able to integrate any web app within local LLaMA and use its functionality, like GPT and its plugins? | 2023-11-11T11:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/17ssl3v/integrated_third_party_apps_within_llama/ | AdministrativeSea688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ssl3v | false | null | t3_17ssl3v | /r/LocalLLaMA/comments/17ssl3v/integrated_third_party_apps_within_llama/ | false | false | self | 1 | null |
how to set the parameters? where do i start? | 1 | I just got into self-hosted LLMs. I'm looking at alternatives to GPT-4 to see if there's something better for me.
So I rented a VM at Vast.ai, an A6000 one, and deployed it with text-generation-webui and Llama-2-70B-AWQ. I didn't like it: it sort of gets into repetition and garbage text. Then I tried vicuna-13B-v1.5-16K-AWQ. It seems better, but it only gives short responses, cutting off at around 30 words even if I specify the word length. Are there any parameter changes I need to make (as of now I'm using the simple-1 preset)? Thank you. | 2023-11-11T11:29:34 | https://www.reddit.com/r/LocalLLaMA/comments/17sskin/how_to_set_the_parameters_where_do_i_start/ | ntn8888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sskin | false | null | t3_17sskin | /r/LocalLLaMA/comments/17sskin/how_to_set_the_parameters_where_do_i_start/ | false | false | self | 1 | null |
What are the business uses of LLMs that will generate revenue? | 67 | Let's say, small and medium businesses. How can they utilize LLMs to generate revenue? In particular, what are the use cases? Large enterprises would have complex business models encompassing huge areas, so let's avoid those. | 2023-11-11T11:12:16 | https://www.reddit.com/r/LocalLLaMA/comments/17sscbn/what_are_the_business_use_of_llm_that_will/ | AMGraduate564 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sscbn | false | null | t3_17sscbn | /r/LocalLLaMA/comments/17sscbn/what_are_the_business_use_of_llm_that_will/ | false | false | self | 67 | null |
Created Riley Reid character for TheBloke's uncensored text-gen WebUI models. | 1 | I've tried this character with TheBloke's "Wizard-Vicuna-30B-Uncensored-GPTQ" model. Use this character and let me know your thoughts. | 2023-11-11T11:05:56 | https://files.catbox.moe/e3x0r7.zip | Oolegdan | files.catbox.moe | 1970-01-01T00:00:00 | 0 | {} | 17ss96v | false | null | t3_17ss96v | /r/LocalLLaMA/comments/17ss96v/created_riley_reid_character_for_theblokellms/ | false | false | default | 1 | null |
Local LLM + Autogen Help | 3 | I have been trying to get Autogen working properly with my local LLMs. It is generating responses automatically, and all the basics are running fine. I have been using LM Studio with Zephyr, Dolphin Mistral, and Llama2 Instruct.
All of them can generate responses; however, it seems the agents do not follow their prompts very well. I tell one agent to do something, and it often finishes the whole process by itself; the next agent will either continue what it wrote or just repeat it. They will often disregard the user_proxy agent's replies. Sometimes, when I look at the server backend, they are just generating the same material in a loop.
My questions are:
1. Is this down to the prompt for each agent not being properly defined, meaning I need to up my prompt-engineering game?
2. Is this an inherent limit of local LLMs, in that they just cannot carry out role-playing or multi-agent tasks?
3. Are the server settings going to affect the outcome? For example, the prefixes and suffixes of the prompts?
Has anyone had success with multi-agent deployment using smaller local LLMs? I'd love to hear your advice.
Question about 7B models | 1 | [removed] | 2023-11-11T08:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/17sqh6d/question_about_7b_models/ | serciex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sqh6d | false | null | t3_17sqh6d | /r/LocalLLaMA/comments/17sqh6d/question_about_7b_models/ | false | false | self | 1 | null |
Idea for 7B models | 1 | [removed] | 2023-11-11T08:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/17sqgn1/idea_for_7b_models/ | serciex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sqgn1 | false | null | t3_17sqgn1 | /r/LocalLLaMA/comments/17sqgn1/idea_for_7b_models/ | false | false | default | 1 | null |
Small models for text classification/basic understanding? | 1 | Are there any small models (< 3B parameters) that are good for text classification and general basic understanding/reasoning?
I want to experiment with using a small model to answer basic yes/no questions or text-classification tasks. | 2023-11-11T06:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/17soqmy/small_models_for_text_classificationbasic/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17soqmy | false | null | t3_17soqmy | /r/LocalLLaMA/comments/17soqmy/small_models_for_text_classificationbasic/ | false | false | self | 1 | null |
Max number of files to upload to GPTs | 1 | Has anyone tested the limits on how many files you can upload while creating GPTs? I've run into errors when I try to upload more than 10 files, yet I've seen instances where people successfully upload over 50 JSON files. What's been your experience with this? Any insights? | 2023-11-11T06:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/17sopzb/max_number_of_files_to_upload_to_gpts/ | OneConfusion3313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sopzb | false | null | t3_17sopzb | /r/LocalLLaMA/comments/17sopzb/max_number_of_files_to_upload_to_gpts/ | false | false | self | 1 | null |
Local swarm of model instances for complex tasks | 8 | I made a small script that can work with any model you downloaded with [ollama](https://github.com/jmorganca/ollama) (or the openAI api). It can save / load the chat history and divide tasks to instances of itself for better results.
The limitation right now is that the model you are using has to be capable of using the command to call a new agent correctly (it is specified in the default system prompt). This is no problem at all if you are using the GPT-4 API; for ollama, it depends on the capabilities of the model you are running.
I'm a beginner, so please let me know what you think and what could be improved! I'm happy to get feedback!
[https://github.com/KT313/swarm](https://github.com/KT313/swarm) | 2023-11-11T05:38:48 | https://www.reddit.com/r/LocalLLaMA/comments/17snxdi/local_swarm_of_model_instances_for_complex_tasks/ | KT313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17snxdi | false | null | t3_17snxdi | /r/LocalLLaMA/comments/17snxdi/local_swarm_of_model_instances_for_complex_tasks/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'nO-VwMhKSgU4ugzgNyKkaK81LwVKgvWRCOHxQk22yFw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?width=108&crop=smart&auto=webp&s=d054ffd171fc98e9922236e7e857a62f012065f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?width=216&crop=smart&auto=webp&s=e5cf17a4bd5f00846830722942ef0fa54db72c91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?width=320&crop=smart&auto=webp&s=d8cabad621b425cf3b3a437435cd268cd38c0a9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?width=640&crop=smart&auto=webp&s=df35025853fa98a73da6749c62771f2f71d4ba66', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?width=960&crop=smart&auto=webp&s=ac02faa09f9802f2b608424670378dd71d0ff4b1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?width=1080&crop=smart&auto=webp&s=916efd2465ffdb2405ac2d36332c5c889f039b17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mnIFsRYrUvZ3XqOAY_WvqpmcTuK_zmLNg6Wm0qRj3ls.jpg?auto=webp&s=8fd1e02bc59820b0b9787b4a3b83b96c1b66ea52', 'width': 1200}, 'variants': {}}]} |
Cat 1.0 is an uncensored RP model aligned to be useful in all (even spicy) situations | 64 | Trained on 8x A100 for 3 weeks, 50k steps, on 120k RP responses from Bluemoon RP, the entirety of the airobo dataset, and some rows from the chat doc dataset.
Spent about 4 weeks writing redpills into the dataset, including (spicy) biology, medicine, and physics domain QAs, because GPTs will not make redpills for you.
https://preview.redd.it/nfsgtfwttmzb1.png?width=884&format=png&auto=webp&s=4c0490d88b1297e61442fa837ee46cb27d495cf7
For additional information, please see the model card:
https://preview.redd.it/wd6dgigvtmzb1.png?width=842&format=png&auto=webp&s=d5a16447f434597d970886cac1dd3e1571b07eb7
[https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b](https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b) | 2023-11-11T02:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/17skxzq/cat_10_is_an_uncensored_rp_model_aligned_to_be/ | Kaltcit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17skxzq | false | null | t3_17skxzq | /r/LocalLLaMA/comments/17skxzq/cat_10_is_an_uncensored_rp_model_aligned_to_be/ | false | false | self | 64 | {'enabled': False, 'images': [{'id': 'dQrONt_sTqbf8j__kOZSCVqx_CnWmzk3FoDzsw6cakU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?width=108&crop=smart&auto=webp&s=aa4d751ce4c255adc705787fa30269fcde3a90f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?width=216&crop=smart&auto=webp&s=dc41387218b171b15813dbc06fef21b9f68b8262', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?width=320&crop=smart&auto=webp&s=1c4224bb2a83793f47f3a77353e555e396501219', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?width=640&crop=smart&auto=webp&s=326247d3fb4d53587f6e40a0ce4e81c770437726', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?width=960&crop=smart&auto=webp&s=0381ad9fde5844b9e17281ce0ef829e3d068c6af', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?width=1080&crop=smart&auto=webp&s=f589d264c6ae89f3b57fa624d10b60671eb7d41b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U12fHUkWZiSsEg5DV6tiuIVSXyIBFRwrtgv4g_2L6uk.jpg?auto=webp&s=27d98848d1fa48b6aa4d2691abad51907b9f4d21', 'width': 1200}, 'variants': {}}]} |
Is it possible to run 4*A100 40G cards as one? | 1 | Newbie question, but is there a way to have 4\*A100 40G cards run as one, with 160G VRAM in total?
I am not able to load a 70B model even with 4bit quantization because my lab has 40G cards. | 2023-11-11T02:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/17skiyn/is_it_possible_to_run_4a100_40g_cards_as_one/ | manjimin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17skiyn | false | null | t3_17skiyn | /r/LocalLLaMA/comments/17skiyn/is_it_possible_to_run_4a100_40g_cards_as_one/ | false | false | self | 1 | null |
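For reference, the usual way to treat several cards as one memory pool is model parallelism: Hugging Face accelerate's `device_map="auto"` shards a model's layers across GPUs. A minimal sketch, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed (the per-GPU memory cap and the model ID you would pass are illustrative):

```python
def build_max_memory(n_gpus, per_gpu="38GiB"):
    """Per-GPU memory caps; leave headroom below 40GiB for activations/CUDA overhead."""
    return {i: per_gpu for i in range(n_gpus)}

def load_sharded_70b(model_id, n_gpus=4):
    # Lazy import so the sketch can be read without a GPU box at hand.
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",                   # accelerate spreads layers across GPUs
        max_memory=build_max_memory(n_gpus),
        load_in_4bit=True,                   # bitsandbytes 4-bit quantization
    )
```

With 4-bit quantization a 70B model needs roughly 35-40GB of weights, so across 4x40G cards there is room to spare; inference then runs as a pipeline across the cards.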
Is the value proposition of local LLMs in production affected by the recent OpenAI releases and cost reductions? | 19 | I've been working on an LLM application that doesn't need the superpowers of GPT-4 to do a good job. Hosting locally has also made sense on a number of fronts: cost, fail-over protection, and the ability to tune models more easily. But with the recent launches and the cost reductions announced at OpenAI developer day, I'm starting to wonder whether my reasons to go local still stack up.
I'm very interested to hear what other people think are the best reasons to use local models in production and whether the recent launches from OpenAI change anything for them?
​ | 2023-11-11T01:43:12 | https://www.reddit.com/r/LocalLLaMA/comments/17sjqgd/is_the_value_proposition_of_local_llms_in/ | moma1970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sjqgd | false | null | t3_17sjqgd | /r/LocalLLaMA/comments/17sjqgd/is_the_value_proposition_of_local_llms_in/ | false | false | self | 19 | null |
Is NSFW the only market OpenAI cannot get into? | 89 | I have the feeling that no matter what product idea you have (which needs an LLM), OpenAI can just "steal" it and implement it in ChatGPT.
This makes building a business on top of LLMs hard, I guess.
Besides NSFW, which OpenAI will probably NEVER allow on their platform, what areas do you think will stay safe?
Gaming I could also imagine; I don't see them creating games in the future. | 2023-11-11T01:27:50 | https://www.reddit.com/r/LocalLLaMA/comments/17sjfhx/is_nsfw_the_only_market_openai_cannot_get_in/ | freehuntx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sjfhx | false | null | t3_17sjfhx | /r/LocalLLaMA/comments/17sjfhx/is_nsfw_the_only_market_openai_cannot_get_in/ | false | false | nsfw | 89 | null |
How do you roleplay with your LLM? | 3 | I'm pretty new to this, but I've got a local llm set up using Oobabooga and the 13b tiefighter model. I don't really understand how you go about roleplaying, however. Given a small context size, how can you make the model 1. Take into account a specific setting and character to embody, and 2. Not lose relevant story information within a few posts?
Also because I'm new, I don't have a clue how to actually begin that process. Where would I put that information and would I have to send it every single time? | 2023-11-11T01:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/17sj5cf/how_do_you_roleplay_with_your_llm/ | Doctor_Turkleton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sj5cf | false | null | t3_17sj5cf | /r/LocalLLaMA/comments/17sj5cf/how_do_you_roleplay_with_your_llm/ | false | false | self | 3 | null |
Local LLM with Python? | 1 | How can I communicate with a downloaded model using Python? I just want to send the prompt and the past messages and have it return its response. Thanks, I'm new to this. | 2023-11-11T00:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/17sip6j/local_llm_with_python/ | Spummerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sip6j | false | null | t3_17sip6j | /r/LocalLLaMA/comments/17sip6j/local_llm_with_python/ | false | false | self | 1 | null |
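One common pattern for this is to run the model behind a local OpenAI-compatible server (llama-cpp-python's server and text-generation-webui both expose one) and talk to it over HTTP. A stdlib-only sketch, assuming such a server is listening on an illustrative localhost port:

```python
import json
import urllib.request

# Assumed endpoint of a local OpenAI-compatible server; adjust to your setup.
API_URL = "http://localhost:5000/v1/chat/completions"

def build_payload(messages, model="local-model", max_tokens=256):
    """Package the prompt plus past messages the way the chat API expects."""
    return {"model": model, "messages": messages, "max_tokens": max_tokens}

def chat(messages):
    """POST the conversation and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(messages)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Usage: keep a `messages` list of `{"role": ..., "content": ...}` dicts, call `chat(messages)`, and append the reply back onto the list so the model sees the past messages on the next turn.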
Use local LLM with wikipedia offline | 1 | I want to give a model access to a locally downloaded Wikipedia, and I want it to reference it when it responds, as well as providing a link. Is this possible? | 2023-11-11T00:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/17sio6e/use_local_llm_with_wikipedia_offline/ | Spummerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sio6e | false | null | t3_17sio6e | /r/LocalLLaMA/comments/17sio6e/use_local_llm_with_wikipedia_offline/ | false | false | self | 1 | null |
LLM Lora "Civitai" | 33 | Is there a good LLM LoRA repository, kind of like Civitai for SD LoRAs? Or are we stuck with the impossible-to-navigate Hugging Face?
Alternatively, does anyone here have a "list of recommended LoRAs"? I'm mostly looking for story-writing stuff, with a particular focus on mythological creatures and history. | 2023-11-10T23:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/17sh7cn/llm_lora_civitai/ | Euchale | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sh7cn | false | null | t3_17sh7cn | /r/LocalLLaMA/comments/17sh7cn/llm_lora_civitai/ | false | false | self | 33 | null |
M3 Mac performance improvements | 19 | Has anyone tried running models on the new Macs? If so, what performance did you get? I suspect dynamic caching would be really beneficial but I haven't seen anyone test it. | 2023-11-10T23:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/17sgwb8/m3_mac_performance_improvements/ | metaprotium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sgwb8 | false | null | t3_17sgwb8 | /r/LocalLLaMA/comments/17sgwb8/m3_mac_performance_improvements/ | false | false | self | 19 | null |
New to LLMs, seeking feedback on a domain-specific RAG-LLM for a non-profit. | 11 | I'm a new DS at a data-focused non-profit, pivoting from a traditional statistical science. I'm looking for feedback on a RAG-LLM that I'd like to build with our data (large public interest datasets). The nonprofit is data-focused in that we maintain a growing number of public interest datasets about governance and politics. We develop open-source applications with these data, intended for the general public, but also people like data journalists, government, non-profit, and public sector employees, social scientists, etc.
To make our data more accessible, especially to people who aren't data/code literate, I want to build a RAG-LLM so users can "talk to" the data. I'm looking for feedback on my plan, and recommendations for models/libraries/software that I should look into.
Here's the bird's-eye view of my plan, with **my questions for this community bolded**:
Step 0: The datasets
We have several tabular datasets (\~100k to \~10m observations), and although the types of observations vary across datasets, there are many overlapping named entities between datasets (e.g., organizations, politicians, government agencies).
Step 0.5: Named Entity Resolution
I've already used a mixture of fuzzy matching and manual validation for entity resolution in these datasets. We are also collaborating with a CS professor who does DS-for-Good type work, and he's going to help us (i.e., tell me how to) do further entity resolution.
Step 1: Creating a Knowledge Graph (KG)
The most interesting questions we think we can answer with our data are about entities, by drawing connections between datasets with different kinds of observations about the same entities. This is why I chose a KG rather than, say, a RAG-LLM that can query multiple tabular datasets. I already have the KG schema worked out, and have integrated a little less than half of our datasets together. *The triplets in the KG don't all come from unstructured text. For example, one dataset records how much money one named entity gave to another using multiple features. Most of our data are like this, and only a few contain documents with unstructured text.* Once I build the full KG using Networkx, I plan to store it using AgensGraph, as our datasets currently live in a PostgreSQL DBMS.
**I am looking for advice on how to extract KG triplets from unstructured text using an LLM. I've tried this with Mistral on a small scale, and am curious if anyone has had success more systematically. I found Mistral to be so useful b/c it effortlessly solves coreference puzzles and understands the importance of named entities not usually recognized in vanilla NER packages (e.g., Bills or Laws).**
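To make the extraction step concrete, one sketch that has worked for me at small scale: ask the model for one triplet per line in a fixed delimiter format, then recover tuples with a regex. The prompt wording, delimiter, and example entities below are illustrative, and the actual call to Mistral is left to whatever serving stack you use:

```python
import re

# Prompt template asking for a strict, parseable output format.
PROMPT = (
    "Extract knowledge-graph triplets from the text below. "
    "Output one per line as (subject | relation | object).\n\nText: {text}"
)

# Matches "(subject | relation | object)" with flexible whitespace.
TRIPLET_RE = re.compile(r"\(\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|\s*([^)]+?)\s*\)")

def parse_triplets(llm_output):
    """Return (subject, relation, object) tuples found in the model's output."""
    return [m.groups() for m in TRIPLET_RE.finditer(llm_output)]
```

Malformed lines simply fail to match and are dropped, which makes it cheap to re-prompt only on texts that yielded nothing.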
Step 2: Embedding the Entities and Triplets
Once I have the KG, I plan to embed the entities (nodes) and triplets using a GNN specialized for KGs, like a Graph Attention Network. Once I've embedded these, I'll store them in PGVector. Again, because we are embedded in PostgreSQL. I'm not sure which GNN to use, but I figure that's outside the scope of this subreddit.
Step 3: RAG Implementation
**This is where I could use the most advice from this community. There are so many RAG-related tools out there, and I am not sure which is most appropriate for a KG. Which framework would be compatible with a KG, Cypher queries, and vector search?** I have looked at txtai and LangChain, but these tools are new to me. All I've done is spin up a mistral in Google Colab. Ideally, the LLM would make use of (a) the PGVector database for semantic search and also (b) traverse the Agensgraph knowledge graph using Cypher queries, like a typical RAG might use SQL.
Step 4: Fine-Tune an LLM That Better Understands Our Domain-Specific Data
Assuming everything up to Step 3 gets figured out, I think there are further gains to be had by fine-tuning an LLM on our datasets' domain. This should improve its ability to talk to users about the importance and context of the data. Luckily, we have a sister organization that maintains a large corpus of human- and Whisper-transcribed meetings about various topics in our datasets' domain. I could also easily create a corpus of open-access academic papers on relevant topics. The purpose here is to fine-tune the LLM to talk about governance and to contextualize governmental/policy information for people, not to recall specific facts from the training data; for that kind of factual recall, it will search the KG. **There are plenty of tutorials on this kind of thing, so my real question is whether you think this is worth it, and whether I should work on it in parallel or save it till the end.**
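On the data-prep side of Step 4, continued pretraining on the transcript corpus only needs plain-text chunks. A small sketch (the `{"text": ...}` JSONL schema and the chunk size are assumptions matching common trainers, not any specific tool's required format):

```python
import json

def transcripts_to_jsonl(transcripts, out_path, max_chars=2000):
    """Chunk raw meeting transcripts into plain-text training samples
    (continued-pretraining style, not instruction/response pairs)."""
    n = 0
    with open(out_path, "w", encoding="utf-8") as f:
        for doc in transcripts:
            for i in range(0, len(doc), max_chars):
                f.write(json.dumps({"text": doc[i:i + max_chars]}) + "\n")
                n += 1
    return n  # number of samples written
```

A character-based chunk size is a crude stand-in for token counting, but it is enough to get a first fine-tuning run going before investing in a tokenizer-aware splitter.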
I would appreciate any input!!! Thanks! | 2023-11-10T23:06:33 | https://www.reddit.com/r/LocalLLaMA/comments/17sghz7/new_to_llms_seeking_feedback_on_a_domainspecific/ | empirical-sadboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sghz7 | false | null | t3_17sghz7 | /r/LocalLLaMA/comments/17sghz7/new_to_llms_seeking_feedback_on_a_domainspecific/ | false | false | self | 11 | null |
How to use LLM for Fuzzy Date | 1 | I am looking for a way to extract dates in any format and transform them into Unix time, at a 10M-line scale. I am using this simple task to explore how adaptation can be made performant for scalable extraction.
Oct26 3:51PM
Nov 6, 2023 at 9:42:44 AM
2023-05-29T06:40:31.249-06:00
June 3, 2011 at 4:52 AM
Fr Nov 10, 2023, at 9:42:44 AM
Friday November 10 at 2 20AM
... You get the gist | 2023-11-10T23:04:42 | https://www.reddit.com/r/LocalLLaMA/comments/17sggj9/how_to_use_llm_for_fuzzy_date/ | yonz- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sggj9 | false | null | t3_17sggj9 | /r/LocalLLaMA/comments/17sggj9/how_to_use_llm_for_fuzzy_date/ | false | false | self | 1 | null |
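At a 10M-line scale it may pay to run a cheap deterministic pass first and send only the leftovers to the LLM. A stdlib-only sketch (the format list is illustrative and should be grown from a frequency count of the real data; treating naive timestamps as UTC is itself an assumption):

```python
from datetime import datetime, timezone

FORMATS = [
    "%Y-%m-%dT%H:%M:%S.%f%z",    # 2023-05-29T06:40:31.249-06:00
    "%b %d, %Y at %I:%M:%S %p",  # Nov 6, 2023 at 9:42:44 AM
    "%B %d, %Y at %I:%M %p",     # June 3, 2011 at 4:52 AM
]

def to_unix(raw: str):
    """Try the known formats; return None so the caller can route
    the stragglers to the LLM instead."""
    for fmt in FORMATS:
        try:
            dt = datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # assumption: naive == UTC
        return int(dt.timestamp())
    return None

print(to_unix("June 3, 2011 at 4:52 AM"))  # an int (Unix seconds)
print(to_unix("Oct26 3:51PM"))             # None -> send this row to the LLM
```

Rows the deterministic pass rejects are exactly the fuzzy cases where an LLM earns its inference cost.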
local alternative to open ai add your data | 2 | Is there a locally hosted alternative that would let me upload my documents, similar to Azure's "add your data", so that the LLM can read them and answer questions based on those? Is there an option in LM Studio or a similar program, or an easier way to train a model on local data? | 2023-11-10T22:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/17sg105/local_alternative_to_open_ai_add_your_data/ | kamiar_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sg105 | false | null | t3_17sg105 | /r/LocalLLaMA/comments/17sg105/local_alternative_to_open_ai_add_your_data/ | false | false | self | 2 | null |
Benchmark code generation models on repository wide generation | 1 | Hello everyone,
I'd like to benchmark some code models on their ability to correctly generate code that requires understanding the whole repository, to better gauge how much the model can help a real developer.
I've found that datasets like HumanEval are mostly simple unit tests, so although they're useful, they're not enough to predict a model's ability to work on more complex generations.
I'm wondering if there is a way to generate a HumanEval-like dataset? Sadly, they didn't release their dataset-generation code, and I didn't find any related project.
The goal would be to read a repo, find the unit tests, find the function under test for each unit test, feed its signature to a model, execute the unit tests against the generated results, and compute pass@k metrics.
Any ideas on how to do it? I think test discovery can be done with the pytest API for Python. I'm trying to see if the function-to-test link can be established with code-coverage tools, but without success so far | 2023-11-10T22:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/17sfob7/benchmark_code_generation_models_on_repository/ | Wats0ns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sfob7 | false | null | t3_17sfob7 | /r/LocalLLaMA/comments/17sfob7/benchmark_code_generation_models_on_repository/ | false | false | self | 1 | null |
Fine tuned model question | 3 | I fine-tuned/quantized Mistral-7B model. It's giving me good results. Can someone please guide me what does "6.41s/it" in the attached image means? | 2023-11-10T22:24:28 | meetrais | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17sfl78 | false | null | t3_17sfl78 | /r/LocalLLaMA/comments/17sfl78/fine_tuned_model_question/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'GheMZ_3EAFlhMhn_RCqgEcuhVDZh9tHl8Mfywzat0wU', 'resolutions': [{'height': 3, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?width=108&crop=smart&auto=webp&s=7707fba8733375e6800a953b92c5d642039be4d0', 'width': 108}, {'height': 6, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?width=216&crop=smart&auto=webp&s=81067e97b54b7ebc6f5a2b15c086e781d44aadcc', 'width': 216}, {'height': 9, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?width=320&crop=smart&auto=webp&s=8a9cb477f728439da0804f5af8e41cfd1afea459', 'width': 320}, {'height': 19, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?width=640&crop=smart&auto=webp&s=72edec0da5ad61237a98ccb8acfe335874eee6e2', 'width': 640}, {'height': 28, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?width=960&crop=smart&auto=webp&s=600f2efd6016289f6bd652886acbb1ad5c8b0274', 'width': 960}, {'height': 32, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?width=1080&crop=smart&auto=webp&s=b8b4e7165c17df15ebcac69ded8384a660c075b9', 'width': 1080}], 'source': {'height': 44, 'url': 'https://preview.redd.it/zk8mjtg5jlzb1.png?auto=webp&s=4184c7c6718a8b6ca63aefe9a796885e598ed31c', 'width': 1466}, 'variants': {}}]} | ||
chatglm3-6b-base better than GPT-4 at understanding https://opencompass.org.cn/leaderboard-llm | 17 | I was checking opencompass and was surprised to see a 6B model in the top 3, also having a higher score at "understanding" than GPT-4 (March version),
[https://opencompass.org.cn/leaderboard-llm](https://opencompass.org.cn/leaderboard-llm)
Though the files on Hugging Face were uploaded ~2 weeks ago, I found no post about it here, so I'm just posting it.
huggingface link: [https://huggingface.co/THUDM/chatglm3-6b-base](https://huggingface.co/THUDM/chatglm3-6b-base)
disclaimer: not my model
​ | 2023-11-10T21:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/17se75d/chatglm36bbase_better_than_gpt4_at_understanding/ | vasileer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17se75d | false | null | t3_17se75d | /r/LocalLLaMA/comments/17se75d/chatglm36bbase_better_than_gpt4_at_understanding/ | false | false | self | 17 | null |
If you were buying a laptop that would let you play with local LLMs, what would you make sure to have? | 52 | I've $2000 to spend and there's some good deals out there right now, but I'm a bit lost as to what's needed... | 2023-11-10T21:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/17sdrhh/if_you_were_buying_a_laptop_that_would_let_you/ | glintings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sdrhh | false | null | t3_17sdrhh | /r/LocalLLaMA/comments/17sdrhh/if_you_were_buying_a_laptop_that_would_let_you/ | false | false | self | 52 | null |
What's happening here | 4 | Hi, I have a RAG-based system. I am using Instructor embeddings and Llama-2-13B. I am seeing that after a few inferences, the bot starts to respond like this:
Based on the provided. Based on the context. Based on the provided. Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided.Based on the provided. provided.Based on the provided. provided on the provided. orurchaseolarductractheolar. 9egulatedolarcedarteldegenticeldulsuedart and and and and and and andeterfectululwardeldartuallyouspect ceustuallyferfectulloyfectululululululululululward. grusedululululegmaticallyuteduteduted.
I am only storing the past 2 conversation turns in memory and only retrieving the top 5 chunks from the vector store for context.
any idea what is going on here? | 2023-11-10T19:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/17scbsr/whats_happening_here/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17scbsr | false | null | t3_17scbsr | /r/LocalLLaMA/comments/17scbsr/whats_happening_here/ | false | false | self | 4 | null |
MythoMax is really good at writing, but sometimes it feels like it was trained on a lot of powerfantasy | 20 | 2023-11-10T19:44:17 | AssistBorn4589 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17sc1sb | false | null | t3_17sc1sb | /r/LocalLLaMA/comments/17sc1sb/mythomax_is_really_good_at_writing_but_sometimes/ | false | false | 20 | {'enabled': True, 'images': [{'id': 'iHIskKAcaeThlVO5Ocrk9oV0VSIm1syMI_jjW74vDvY', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/cuq0cvhuokzb1.jpg?width=108&crop=smart&auto=webp&s=588ea932b1b4ad5ab7dac594aee2bec16fb8332c', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/cuq0cvhuokzb1.jpg?width=216&crop=smart&auto=webp&s=a1906ae8ef341c8fc5f83dc16cd9f40b45efaa0e', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/cuq0cvhuokzb1.jpg?width=320&crop=smart&auto=webp&s=0e22f9383f7b6958925347f911ec0ad7fbcd2e86', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/cuq0cvhuokzb1.jpg?width=640&crop=smart&auto=webp&s=19516973387c2f756ab6ef2e4f914dfd5b26d11b', 'width': 640}], 'source': {'height': 425, 'url': 'https://preview.redd.it/cuq0cvhuokzb1.jpg?auto=webp&s=7dcc2a74b948a11af6ef249649c7f1937deee3fe', 'width': 721}, 'variants': {}}]} | |||
What does batch size mean in inference? | 1 | I understand batch_size as the number of token sequences processed per training step, but what does it mean in inference? How does it make sense to have a batch_size in inference on an auto-regressive model? | 2023-11-10T19:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/17sbwo5/what_does_batch_size_mean_in_inference/ | Evirua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sbwo5 | false | null | t3_17sbwo5 | /r/LocalLLaMA/comments/17sbwo5/what_does_batch_size_mean_in_inference/ | false | false | self | 1 | null |
MythoMax is really good at writing, but sometimes it feels like it was trained on a lot of powerfantasy | 1 | 2023-11-10T19:29:08 | AssistBorn4589 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17sbpt0 | false | null | t3_17sbpt0 | /r/LocalLLaMA/comments/17sbpt0/mythomax_is_really_good_at_writing_but_sometimes/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'DuiUN_OPvfwXEMa2p4UZMh5QJPp9StC9IzxNX09vXS0', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/wf7e5tzsnkzb1.png?width=108&crop=smart&auto=webp&s=d3cb17b471c524247a152b95be93bf94b4c8e756', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/wf7e5tzsnkzb1.png?width=216&crop=smart&auto=webp&s=d20887d211629e2688189bbffe362d366368acc4', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/wf7e5tzsnkzb1.png?width=320&crop=smart&auto=webp&s=122e6508c7be7dd83bf1db348a74bcfbec7f2cd7', 'width': 320}, {'height': 442, 'url': 'https://preview.redd.it/wf7e5tzsnkzb1.png?width=640&crop=smart&auto=webp&s=58c3c021fc8ca7362083df5a9ba38e1196cb179f', 'width': 640}], 'source': {'height': 496, 'url': 'https://preview.redd.it/wf7e5tzsnkzb1.png?auto=webp&s=794bbbe7d4b3fab651c0dfea30672487e7dabeb4', 'width': 718}, 'variants': {}}]} | |||
Google blog post suggests that Google is using int8 for training | 63 | [https://cloud.google.com/blog/products/compute/the-worlds-largest-distributed-llm-training-job-on-tpu-v5e](https://cloud.google.com/blog/products/compute/the-worlds-largest-distributed-llm-training-job-on-tpu-v5e)
Quote:
[Accurate Quantized Training (AQT)](https://github.com/google/aqt) is a Google-built training library that uses reduced numerical precision of 8-bit integers (INT8) instead of 16-bit floats (BF16) for training. AQT takes advantage of the fact that ML accelerators have 2X the compute speed when using INT8 operations versus BF16 operations. Using AQT’s simple and flexible API, MLEs can attain both higher performance during training and also higher model quality in production. | 2023-11-10T19:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/17sbjsv/google_blog_posts_suggests_that_google_using_int8/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sbjsv | false | null | t3_17sbjsv | /r/LocalLLaMA/comments/17sbjsv/google_blog_posts_suggests_that_google_using_int8/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'kxaM19u9qxGgIAS3pYIVu2hdFPF7AZW3TMuEf-Ep2ZU', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?width=108&crop=smart&auto=webp&s=4024bd78e018bc5cb48929497f6eba2d2c93ae1d', 'width': 108}, {'height': 89, 'url': 'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?width=216&crop=smart&auto=webp&s=a1b28a56ecdf4096c701e4e34437f5e79482f6a3', 'width': 216}, {'height': 133, 'url': 'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?width=320&crop=smart&auto=webp&s=3f72ee802f9094efec8b22b1ad5e7f1b7da15fa0', 'width': 320}, {'height': 266, 'url': 'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?width=640&crop=smart&auto=webp&s=68ff6629dc08cc954eef4ece0d378ef4add25005', 'width': 640}, {'height': 399, 'url': 'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?width=960&crop=smart&auto=webp&s=628c3dd3b5bfb5540b9497b2e9ee87e30ff66a93', 'width': 960}, {'height': 449, 'url': 
'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?width=1080&crop=smart&auto=webp&s=7bc47a923ae053c4f3df286af5eb8e84ab3f35b0', 'width': 1080}], 'source': {'height': 1083, 'url': 'https://external-preview.redd.it/puQcs_BCMV-FZepLQdwZT0l8XqheLh2B6MB5fIcEKvs.jpg?auto=webp&s=4870aa7474383060bdb561bd0a3733efc1a54f03', 'width': 2600}, 'variants': {}}]} |
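The INT8-vs-BF16 trade described in the quote is easy to picture with a toy symmetric quantizer (a pure-Python illustration of the general idea only; it says nothing about AQT's actual implementation):

```python
def quantize_int8(xs):
    """Symmetric int8 quantization: the largest |x| maps to +/-127."""
    scale = max(abs(x) for x in xs) / 127 or 1.0  # guard all-zero input
    return [round(x / scale) for x in xs], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# each restored value is within scale/2 of the original: that rounding
# error is the price paid for the ~2x faster int8 matmuls
```

Training libraries like AQT have to manage this quantization error carefully (per-tensor scales, where to round, what stays in higher precision) so model quality holds up despite the coarser arithmetic.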
What am I doing wrong with text-generation-webui "instruct" mode with API? | 5 | Here is my python code:
```
request = {
    'user_input': user_input,
    'mode': 'chat-instruct',  # Options: 'chat', 'chat-instruct', 'instruct'
    'character': "Assistant",
    'instruction_template': 'ChatML',
    'your_name': 'You',
    'regenerate': False,
    'history': history,
    'continue': True,  # Option for sending history
    'stop_at_newline': False,
    'chat_prompt_size': 2048,
    'chat_generation_attempts': 1,
    'chat_instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>',
    'max_new_tokens': 500,
    'do_sample': True,
    'temperature': 0.7,
    'top_p': 0.1,
    'typical_p': 1,
    'epsilon_cutoff': 0,
    'eta_cutoff': 0,
    'tfs': 1,
    'top_a': 0,
    'repetition_penalty': 1.18,
    'top_k': 40,
    'min_length': 0,
    'no_repeat_ngram_size': 0,
    'num_beams': 1,
    'penalty_alpha': 0,
    'length_penalty': 1,
    'early_stopping': False,
    'mirostat_mode': 0,
    'mirostat_tau': 5,
    'mirostat_eta': 0.1,
    'seed': -1,
    'add_bos_token': True,
    'truncation_length': 2048,
    'ban_eos_token': False,
    'skip_special_tokens': True,
}
response = requests.post(URI, json=request)
```
This works without issues, but the moment I change 'mode' to 'instruct' it stops working. I know oobabooga now has an OpenAI-compatible API mode as well, but for reasons I won't get into, that is not an option for me right now. Any ideas?
Thanks in advance | 2023-11-10T19:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/17sbinf/what_am_i_doing_wrong_with_textgenerationwebui/ | abandonedexplorer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17sbinf | false | null | t3_17sbinf | /r/LocalLLaMA/comments/17sbinf/what_am_i_doing_wrong_with_textgenerationwebui/ | false | false | self | 5 | null |