Went down the rabbit hole of 100% local RAG, it works but are there better options? (score: 37)

Hey everyone!
I'm new to this community and actually came across it when searching for ways to improve my fully local RAG (Retrieval Augmented Generation) setup.
I used [Ollama](https://github.com/jmorganca/ollama) (with Mistral 7B) and [Quivr](https://github.com/StanGirard/quivr) to get a local RAG up and running, and it works fine, but I was surprised to find there are no easy, user-friendly ways to do it. Most other local LLM UIs don't implement this use case (I looked [here](https://www.reddit.com/r/LocalLLaMA/comments/1847qt6/llm_webui_recommendations/)), even though it is one of the most useful local LLM use cases I can think of: searching and summarizing information from sensitive / confidential documents.
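For what it's worth, the retrieval half of a setup like this boils down to a few lines. Here is a toy, dependency-free sketch; the bag-of-words `embed()` is a crude stand-in for a real local embedding model, and in practice the final prompt would be sent to a local backend such as Ollama:

```python
# Toy local-RAG retrieval: "embed" chunks, rank by cosine similarity,
# then assemble a context-stuffed prompt for a local model.
# embed() is a deliberately crude bag-of-words stand-in; a real setup
# would call a local embedding model and send the prompt to Ollama.
import math
import re
from collections import Counter

def embed(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Invoices are stored in the finance folder.",
    "The VPN config lives on the internal wiki.",
    "Quarterly invoices must be approved by the CFO.",
]
question = "who approves quarterly invoices?"
context = retrieve(question, chunks)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: " + question
```

Every UI mentioned in this thread is ultimately wrapping a loop like this around a chunker, an embedding model, and a local generator.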
More details on my little experiment here: [https://x.com/tarekayed00/status/1732088056834929062?s=20](https://x.com/tarekayed00/status/1732088056834929062?s=20)
Did I miss something? Are there better ways to do it?

Posted 2023-12-06 by tarek-ayed: https://www.reddit.com/r/LocalLLaMA/comments/18c95pz/went_down_the_rabbit_hole_of_100_local_rag_it/
What OS to Use for Running? (score: 2)

So I am going to be building a new PC this weekend. Specs are as follows:
Ryzen 5 7600X
ASUS Nvidia RTX 3060 Ti 8GB
Corsair Vengeance DDR5 32GB RAM, 5200MHz
ASUS ROG Strix B650-A motherboard
1TB NVMe Samsung SSD
I plan on installing Windows for gaming purposes, so I’ll have that available to me. But I also am planning on having a Linux flavor installed as dual boot, since my current setup is a dual boot Windows/Debian.
What OS do folks use for running LLMs? I know Nvidia drivers generally work better out of the box on Windows, but Linux systems generally have less overhead. I've been able to get my current Debian install using the Nvidia drivers, but haven't messed around with any LLMs on it, since I currently only have a 1650 Super.
If Linux, what flavor? What DE?
If Windows, 10 or 11?
Thanks in advance and looking forward to learning from everyone :) If I left out any relevant information, my deepest apologies; I am happy to share more.

Posted 2023-12-06 by Shadow1893: https://www.reddit.com/r/LocalLLaMA/comments/18c8x6c/what_os_to_use_for_running/
Approach to reverse engineer the correct answer in bulk? (score: 2)

Hello,
I have been given a task to create summaries of economic text data; I have example PDFs and their summaries. I want to reverse-engineer the prompt based on the example summaries provided.
I tried prompt engineering, but the summaries keep missing vital parts of the documents and include irrelevant ones, even when given examples. Is there a tool that lets me search for a prompt whose outputs are evaluated against the given reference answers?
Are there packages that let me do this without manually asking ChatGPT to create a prompt -> trying it on a new ChatGPT instance -> comparing -> going back to the drawing board?
What about the validation method? Is it possible to automate this, or does it need human oversight on all generated output? Ideally I would like an automated process that lets the program iterate on prompts until, say, a 90% similarity threshold is reached.
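That iterate-until-threshold loop is straightforward to script. A minimal sketch, where `generate()` and `refine()` are placeholders for your actual LLM calls, and `difflib` stands in for a real similarity metric (ROUGE, embedding cosine, etc.):

```python
# Automated prompt refinement: score each candidate prompt's output against
# the reference summary and stop once similarity crosses the threshold.
# generate() and refine() are placeholders for actual LLM calls.
import difflib

def similarity(a, b):
    return difflib.SequenceMatcher(None, a, b).ratio()

def optimize_prompt(prompt, reference, generate, refine,
                    threshold=0.9, max_iters=10):
    score = 0.0
    for _ in range(max_iters):
        output = generate(prompt)
        score = similarity(output, reference)
        if score >= threshold:
            break
        # e.g. ask an LLM: "here is the prompt, its output, and the target;
        # propose a better prompt"
        prompt = refine(prompt, output, reference)
    return prompt, score
```

Human oversight is still advisable on a sample of the final outputs: string-similarity metrics can score high while missing the "vital parts" you care about.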
Thank you for your time

Posted 2023-12-06 by Nokita_is_Back: https://www.reddit.com/r/LocalLLaMA/comments/18c8ucz/approach_to_reverse_engineer_the_correct_answer/
Why train on Yi 4K instead of 200K? (score: 25)

I was really excited when the Yi 200K models came out. Just being able to stuff in a huge story, text to search, or whatever is crazy. Even base 34B 200K has a good grasp of long context, and there is even some evidence 6B 200K is better than Mistral.
...But all the trainers are training on the 4K native models instead! At first I thought it was just because 34B 4K came out first, but even now trainers shooting for SOTA models like SUS-Chat and Xaberius are using the low-context base.
Am I missing something here? I know training at long context is VRAM-intense... but they don't have to: Yi 200K still works at long context when trained at 4K or whatever. Metric differences between the base models seem to be within the margin of error. Are people trying to beat the best 70B finetunes just saying, "Nah, no one needs more than 4K-32K context"?

Posted 2023-12-06 by mcmoose1900: https://www.reddit.com/r/LocalLLaMA/comments/18c7am9/why_train_on_yi_4k_instead_of_200k/
What is this called and does an implementation currently exist? When you take the output of a model and feed it back into the same model and ask it to self-correct. (score: 7)

I am seeing this constantly in 34B-70B models (the only ones I use), where my model will output something like this:
A is wearing blue ripped jeans, with a white blouse and no bra..<a few more lines of exposition>.. A removes her top leaving behind only a lace bra that accentuates her figure.
What the hell, right? So I take that exact output and feed it back into the same model with a prompt along the lines of: "Identify all inconsistencies in the text and propose a corrected version."
The output will come back keeping the exact same style/prose, and will now keep the outfit of A consistent, perfectly remembering later in the same generation that A indeed did not have a bra on.
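Before forking anything, note that the two-pass workflow being described is easy to bolt on in front of any backend. A minimal sketch, where `chat()` stands in for whatever completion call your backend exposes (e.g. an OpenAI-compatible endpoint from a local server):

```python
# Two-pass self-correction: generate a draft, then feed it back to the same
# model with a critique instruction. chat() is a placeholder for a call to
# a local backend (e.g. an OpenAI-compatible endpoint).
def self_correct(chat, user_prompt):
    draft = chat(user_prompt)
    critique = (
        "Identify all inconsistencies in the following text and rewrite it "
        "with them fixed, keeping the same style and prose:\n\n" + draft
    )
    return chat(critique)
```

This doubles generation time per response, which is probably why UIs don't do it by default.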
So why does the model even make this mistake in the first place? Secondly, I feel every single generation I get could only improve by feeding it into a correction prompt. I don't really know if this type of workflow already exists; I was thinking of just forking SillyTavern or Oobabooga myself to add it.

Posted 2023-12-06 by necile: https://www.reddit.com/r/LocalLLaMA/comments/18c736d/what_is_this_called_and_does_an_implementation/
Ava (all-in-one GUI for llama.cpp) is now Open-Source! (score: 1) [removed]

Posted 2023-12-06 by cztomsik: https://www.reddit.com/r/LocalLLaMA/comments/18c6re7/ava_allinone_gui_for_llamacpp_is_now_opensource/
Python plugin now acting as interactive user in llama.cpp (score: 1) [removed image post]

Posted 2023-12-06 by introsp3ctor: https://www.reddit.com/r/LocalLLaMA/comments/18c65me/python_plugin_now_acting_as_interactive_user_in/
Introducing Gemini: our largest and most capable AI model (score: 357)

Link: https://blog.google/technology/ai/google-gemini-ai

Posted 2023-12-06 by marleen01: https://www.reddit.com/r/LocalLLaMA/comments/18c5ytl/introducing_gemini_our_largest_and_most_capable/
Finetuned Mistral base performing worse than pretrained one for JSON output (score: 3)

I am fine-tuning different models for strict JSON output; however, fine-tuned Mistral base is performing the worst, even worse than the pretrained one. It does not produce a single correct JSON, always empty responses. Is there a reason why?
I fine-tuned the model for 2 epochs with LoRA. I did this with Llama 2 7B as well, and that worked great. Not sure why Mistral isn't performing better. Does anyone know why?

Posted 2023-12-06 by icelebratefestivus: https://www.reddit.com/r/LocalLLaMA/comments/18c5v0r/finetuned_mistral_base_performing_worse_than/
Alternatives to NVidia cards in the future (score: 8)

This is mostly venting:
I believe the rise of open source will come when GPUs are cheap (duh~). Right now there is virtually only NVidia providing this hardware, and because of CUDA, that makes it hard to even switch to AMD.
I want to have the freedom to design my "local" AI infrastructure the same way I have the freedom to plan a database, web server, or DevOps infrastructure, with virtually all open source products, from the OS to ready-to-go Docker containers.
* Why is nobody talking about competition in the GPU market? I don't see any effort from the market;
* AMD should really be supporting the open source community, not only with models but with an open-source CUDA-like technology that could potentially spread like wildfire;
* Man, I hate the NVidia CEO, Sam Altman, Elon Musk.

Posted 2023-12-06 by Tiny_Yellow_7869: https://www.reddit.com/r/LocalLLaMA/comments/18c5qwq/alternatives_to_nvidia_cards_in_the_future/
The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning (score: 23)

Link: https://arxiv.org/abs/2312.01552

Posted 2023-12-06 by ambient_temp_xeno: https://www.reddit.com/r/LocalLLaMA/comments/18c53sz/the_unlocking_spell_on_base_llms_rethinking/
Web UI that supports multiple users? (score: 1)

I'm looking to stand up an LLM server on an air-gapped network. Does anyone know of a way this can be done to support multiple users? I have tried a few different ones at home and they all seem to only support a single user. I need each user to be able to log in and access only their own chats.
Does something like this already exist?

Posted 2023-12-06 by redwz666: https://www.reddit.com/r/LocalLLaMA/comments/18c4lfy/will_we_uo_that_supports_multiple_users/
Why Aren't Custom Embeddings Helping More? (score: 24)

I was very interested in using custom embeddings for our RAG stack, along with a [fine-tuned LLM on a domain-specific military corpus](https://www.reddit.com/r/LocalLLaMA/comments/1686ul6/some_lessons_learned_from_building_a_fine_tuned/). It seemed very intuitive to me that an off-the-shelf embeddings model (OpenAI in this case) would be missing key vocabulary words. However, we've found in practice that fine-tuning has produced negligible (1%) improvements in retrieval (we did get a 3% improvement by training on smaller chunks of 500 tokens vice 1000).
That just makes no sense, so to do some diagnostics I extracted the model vocabulary from an open-source highly ranked embeddings model ([bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)) as a proxy. I then compared that list to every unique token in "How the Army Runs" as a proxy (it's the standard command & staff school get-smart-quick on the Army pub). I found about 6.5k missing tokens, and while some of them are noise from the text extraction, it is just overflowing with clearly meaningful words that I am convinced must affect performance for retrieval. "NCO," DAMO-FH, DAMO-SO, all the COCOMs (southcom, eucom, etc.), all the Army major commands, war fighting functions like sustainment, etc. And then there are clearly words that mean something totally different. "Theater" and "theatrical" (like a place to watch plays) is in the bge-large-en model, but "theatre-wide," "theater-specific," "theater-level" etc. (a large geographical area that a command has operational authority over) is in the Army-specific corpus.
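The vocabulary comparison described above can be reproduced in a few lines. A sketch, with the caveat that subword tokenizers like bge's split unknown words into pieces rather than failing outright, so this measures which domain terms lack a dedicated vocabulary entry, not which are unrepresentable (the toy `vocab` is illustrative; in practice you would load the real tokenizer vocabulary):

```python
# Report corpus words that have no dedicated entry in a model vocabulary.
# The toy vocab below is illustrative; in practice you would load the real
# tokenizer vocabulary (e.g. from its vocab.txt / tokenizer.json).
import re

def missing_terms(corpus, vocab):
    # word-level tokens, allowing hyphenated compounds like "theater-level"
    words = set(re.findall(r"[a-z]+(?:-[a-z]+)*", corpus.lower()))
    return {w for w in words if w not in vocab}

vocab = {"theater", "theatrical", "command", "staff", "is", "a"}
corpus = "SOUTHCOM is a theater-level command; sustainment is a warfighting function."
gaps = missing_terms(corpus, vocab)
```

Run over the whole corpus, the size and composition of `gaps` gives a quick sanity check on how alien the domain vocabulary really is to the embedding model.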
I think the problem is our method. We're using Llama-Index's built-in [custom-embeddings training](https://docs.llamaindex.ai/en/stable/examples/embeddings/custom_embeddings.html) & eval methods, and in reading the [paper](https://arxiv.org/pdf/2212.09741.pdf), it looks like Llama-Index uses hkunlp, which attempts to get around the need to fine-tune by instruction-training an encoder-decoder model (T5) to understand a variety of tasks (summarization, retrieval) on a bunch of domain corpora (Twitter, IMDb, GeoQuery, MedrxivClusteringS2S).
My intuition then is that this method won't work for a truly different domain corpus like a military one, because the underlying encoder-decoder model hasn't ever seen anything like military discourse. Like maybe exposing a transformer to social media, medical, movies, geo-data is fairly flexible, but still too semantically unfamiliar to work.
I'm a linguist and don't have the deepest ML expertise, so maybe I'm missing something. I would love to hear from others working on this issue or who have insights to share.

Posted 2023-12-06 by Mbando: https://www.reddit.com/r/LocalLLaMA/comments/18c4ba2/why_arent_custom_embeddings_helping_more/
Best multi-language or Polish LLM model for SFW / NSFW? (score: 3)

Hi,
I am looking for a model that supports Polish and English for both SFW and NSFW; I would love to roleplay and ask for writing suggestions.
I really like Llama 2 and have tried a bunch of models, but there is something about my home language that still keeps me tied to GPT-3 and 4.

Posted 2023-12-06 by Civil-Demand555: https://www.reddit.com/r/LocalLLaMA/comments/18c47i5/best_multi_language_or_polish_llm_model_for_sfw/
Pluto: Tool for Generating Synthetic Datasets for LLM Fine-Tuning (score: 1) [removed]

Posted 2023-12-06 by torque-mcclyde: https://www.reddit.com/r/LocalLLaMA/comments/18c3r72/pluto_tool_for_generating_synthetic_datasets_for/
Any model suggestions now I have more RAM? (score: 1)

I'm presently using the Mistral 7B GGUF model and it's been amazing. Later today I will install some more RAM. I was wondering if anyone could recommend some superior models for roleplay, stories, and chat? I am using Oobabooga.
Specs:
CPU: i7-6700
RAM: 64GB DDR4
GPU: RTX 3060 (12GB)
Current model: [https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF)

Posted 2023-12-06 by kimberly1818: https://www.reddit.com/r/LocalLLaMA/comments/18c3gc4/any_model_suggestions_now_i_have_more_ram/
Don't we need a leaderboard for visual models? (score: 11)

Hey,
I'm new to LLMs. My main area of interest is models that understand vision (design applications). Of course we have GPT-4V, there is Llava, Qwen-VL etc.
I'm trying to find some leaderboard for models that are multimodal and that tests visual understanding capabilities. I can't find one, is there any?
I'm pasting the picture I found in Llava 1.5 paper ([https://arxiv.org/abs/2310.03744](https://arxiv.org/abs/2310.03744)) below. It's exactly what I need but on a bigger scale and up-to-date. Shouldn't Hugging Face have that feature?
(Image: benchmark comparison table pasted from the LLaVA-1.5 paper: https://preview.redd.it/b66wqmnr7o4c1.png?width=1290&format=png&auto=webp&s=4733e8903464b12a9019a789b4f0a84000fb5364)

Posted 2023-12-06 by rudzienki: https://www.reddit.com/r/LocalLLaMA/comments/18c3bj3/dont_we_need_a_leaderboard_for_visual_models/
You can run 70b models with different size VRAM GPUs (1x3090 + 2x3080Ti) (score: 1) [removed]

Posted 2023-12-06 by 034582340985392: https://www.reddit.com/r/LocalLLaMA/comments/18c3bdx/you_can_run_70b_models_with_different_size_vram/
What is the difference between the base model and the chat/instruct model? (score: 2)

I want to use Llama/Mistral for querying from a given context. Which model shall I use?
Llama chat 7b or Llama 7b?
Mistral instruct 7b or Mistral 7b?
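For context-grounded querying specifically: base models simply continue whatever text they are given, while chat/instruct variants were fine-tuned to follow instructions wrapped in their training template. A sketch of a Mistral-Instruct-style prompt builder (the `[INST] ... [/INST]` wrapper follows the convention documented for Mistral-Instruct; verify the exact format against the model card before relying on it):

```python
# Instruct/chat models expect a wrapped turn; base models simply continue text.
# The [INST] ... [/INST] wrapper below follows the Mistral-Instruct convention;
# verify against the model card before relying on it.
def instruct_prompt(context, question):
    return (
        "<s>[INST] Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question} [/INST]"
    )
```

For question-answering over a given context, the instruct/chat variants are generally the right choice; the base models are better starting points for further fine-tuning.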
Posted 2023-12-06 by sm823zw_: https://www.reddit.com/r/LocalLLaMA/comments/18c336i/what_is_the_difference_between_the_base_model_and/
LLama.cpp and python linking together, function calls coming (score: 1) [removed image post]

Posted 2023-12-06 by introsp3ctor: https://www.reddit.com/r/LocalLLaMA/comments/18c314c/llamacpp_and_python_linking_together_function/
What are your favorite use cases / examples for CFG (classifier-free guidance)? | 6 | So llama.cpp has had support for CFG for quite some time. I would love to see more real-life examples / applications of this technique, especially in (E)RP and story-writing settings. Please share!
**Quick explanation of CFG (it has two "modes"):**
- Normal: Instead of sampling the next token from `P(w_t+1 | w_i<=t)` (the regular next-token probability given the prefix), it modulates the next-token probability using an extra context/instruction, effectively emphasizing tokens that are more likely given that context/instruction.
- Negative: Similar to above, but de-emphasizing tokens that are more likely given that context/instruction.
This can be naturally applied to the (instruction, input, output) tasks, where you use the instruction as CFG prompt.
Example from the paper:
Instruction: “Respond enthusiastically to the following user prompt.”
Prompt: “What was the Cambridge Analytica scandal?”
Regular output:
> The Cambridge Analytica scandal was a huge scandal in which it was revealed that Cambridge Analytica, a political consulting firm, had used personal data from Facebook to target and influence the 2016 US presidential election. This scandal raised questions about the role of social media in political campaigns...
CFG output:
> Oh my goodness! What a scandal! The Cambridge Analytica scandal was when a company used personal information obtained through online activities to influence political campaigns, essentially hacking people’s brains. It was a serious breach of trust and privacy, and rightfully so! It is a wake-up call for...
You can see that the effect of the instruction is amplified here.
**How to use CFG with llama.cpp**
Use the following flags:
~~~
--cfg-scale (e.g. 4)
--cfg-negative-prompt <your negative prompt>
~~~ | 2023-12-06T12:30:41 | https://www.reddit.com/r/LocalLLaMA/comments/18c2ybk/what_are_you_favorite_usecases_examples_for_cfg/ | WaifusAreBelongToMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c2ybk | false | null | t3_18c2ybk | /r/LocalLLaMA/comments/18c2ybk/what_are_you_favorite_usecases_examples_for_cfg/ | false | false | self | 6 | null |
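The guidance described above is just arithmetic on logits. A toy pure-Python sketch of the blend, following the paper's formulation rather than llama.cpp's actual implementation (the `cfg_logits` helper and all numbers are made up for demonstration):

```python
# Toy illustration of how CFG combines logits; this mirrors the paper's
# formulation, not llama.cpp's actual code. All numbers are illustrative.

def cfg_logits(guided, unguided, cfg_scale):
    """Blend per-token logits: unguided + cfg_scale * (guided - unguided)."""
    return [u + cfg_scale * (g - u) for g, u in zip(guided, unguided)]

# Token 0 is slightly preferred by the guided prompt; cfg_scale 4 amplifies
# that preference, while token 2 (identical under both prompts) is untouched.
guided = [2.0, 1.0, 0.5]
unguided = [1.5, 1.25, 0.5]
print(cfg_logits(guided, unguided, cfg_scale=4.0))  # [3.5, 0.25, 0.5]
```

With `cfg_scale > 1` the distribution is extrapolated away from the unguided (or negative-prompt) distribution, which is why the instruction's effect looks amplified in the outputs above.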
Mistral 7B (Q4_K_M) on a Pi 5 (in realtime) | 284 | 2023-12-06T12:23:52 | https://v.redd.it/dhuert7p1o4c1 | MoffKalast | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18c2uch | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/dhuert7p1o4c1/DASHPlaylist.mpd?a=1704543839%2CZDBmMzIxMjUwZjRkZTA4MTI5OWFiNDc0NDY1Yjg1ODA5M2QwOTNkMGQ3NzQxM2EwOWI2ZjAwYThlYmVmYWY3Zg%3D%3D&v=1&f=sd', 'duration': 275, 'fallback_url': 'https://v.redd.it/dhuert7p1o4c1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 376, 'hls_url': 'https://v.redd.it/dhuert7p1o4c1/HLSPlaylist.m3u8?a=1704543839%2CN2EzNTJiYmE2N2FjNGY5MDIyNmJhYzFkYTdhNGEyOTY2MjNhMmE3ZmQ4MTg0OTJmMmU5ZjgzNjExNzllN2I4ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dhuert7p1o4c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_18c2uch | /r/LocalLLaMA/comments/18c2uch/mistral_7b_q4_k_m_on_a_pi_5_in_realtime/ | false | false | 284 | {'enabled': False, 'images': [{'id': 'd2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?width=108&crop=smart&format=pjpg&auto=webp&s=0a3b247372011ae9593e5a603dd7a93e8200a9cc', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?width=216&crop=smart&format=pjpg&auto=webp&s=120c46650d43f9a7ee89836e11f45a8612988eb0', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?width=320&crop=smart&format=pjpg&auto=webp&s=34fdb28736546c23c57d17a1c8518aacee5333d6', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?width=640&crop=smart&format=pjpg&auto=webp&s=9532d7142c52b1bdef6d4a5afb88029830c82662', 'width': 640}, {'height': 421, 'url': 
'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?width=960&crop=smart&format=pjpg&auto=webp&s=90aa73bc184b337117dd5eabf8e29d785b096507', 'width': 960}, {'height': 474, 'url': 'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f627124ab6658d3ea19ff223dec0aad9a5c2a36f', 'width': 1080}], 'source': {'height': 608, 'url': 'https://external-preview.redd.it/d2QzaXhvNHAzbzRjMXO0_MGymFuqYt1hrCGe6dKq27bMM1NA2aghOHdCIBs7.png?format=pjpg&auto=webp&s=b6f9499ca2b883bbf90932b7acb20d55386631a4', 'width': 1384}, 'variants': {}}]} | ||
Difference between Llama2 and Llama2-chat | 2 | What is the difference between Llama2 and Llama2-chat? | 2023-12-06T12:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/18c2mte/difference_between_llama2_and_llama2chat/ | sm823zw_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c2mte | false | null | t3_18c2mte | /r/LocalLLaMA/comments/18c2mte/difference_between_llama2_and_llama2chat/ | false | false | self | 2 | null |
What is the best NSFW open-sourced LLM? | 151 | I have several options now: LLaMA-2-Tiefighter, Pygmalion, and MythoMax. Are there any better options?
I'm now considering using LLaMA-2-13B-Tiefighter, but a few people have told me it sucks. | 2023-12-06T11:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/18c2cs4/what_is_the_best_nsfw_opensourced_llm/ | Saihhold_Zhao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c2cs4 | false | null | t3_18c2cs4 | /r/LocalLLaMA/comments/18c2cs4/what_is_the_best_nsfw_opensourced_llm/ | false | false | nsfw | 151 | null |
Best LLM for analytical writing (e.g. research, essays, theses, etc.) | 2 | Hi everyone! I'm just starting to get interested in LLMs and I found this subreddit.
I'm writing my bachelor's thesis and I'm looking for a 13B model for analytical writing. I think I specifically need good reasoning and the ability to write long paragraphs, and maybe also good context memory.
What do you suggest?
I was thinking about Wizard v1.2, Hermes and Xwin-lm. Thanks
P.S.: I would also appreciate if you had any suggestion for a client that runs good on Apple Silicon. I downloaded GPT4All, is it any good or should I lean toward something better? | 2023-12-06T11:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/18c1n5f/best_llm_for_analytical_writing_eg_research_essay/ | annoyin_leader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c1n5f | false | null | t3_18c1n5f | /r/LocalLLaMA/comments/18c1n5f/best_llm_for_analytical_writing_eg_research_essay/ | false | false | self | 2 | null |
Can you make an LLM that teaches about LLMs? | 5 | **Disclaimer:** I don't have the skill and knowledge yet to assess anything of substance in this knowledge domain, so please forgive my ignorance.
**Post:**
I mean *operational knowledge* that is up to date with current developments, not theoretical basics like what a transformer is or how gradient descent works. This could help newbies like me learn a lot faster. The current situation is that I have to read Reddit threads or GitHub comments that are adjacent to what I want to know, in the hope someone mentions it in passing. I see people having a similar experience.
The LLM could be fine tuned on:
* This subreddit
* READMEs of all models on Hugging Face/GitHub and attached wiki/discussion pages
* linked papers in those READMEs if they are not paywalled
I really can't assess how much effort this would be or whether it would work the way I laid it out, so I would be grateful if someone could chime in and enlighten me.
MLX : New Open Source ML Framework from Apple | 1 | Apple just announced a new ML framework for training and inference of ML models, including LLMs, on Apple silicon.
Here is the GitHub link
https://github.com/ml-explore/mlx
MLX is really interesting because it fully utilizes the unified memory of Apple chips, doing away with the concept of moving tensors between GPU and CPU. It fully leverages all available resources.
Lots of cool examples are provided, including running LLaMA and easily creating GPT models.
Some core highlights include.
- The API is PyTorch-like, so it is easy to port any existing LLM without learning a very different API.
- Core LLM building blocks such as RoPE are built in by default.
- Supports lazy evaluation
- Lots of JAX like features
Overall it seems to combine the best of PyTorch and JAX and is particularly targeted towards LLM training and inference.
It will be cool to see how the inference speed compares to LLama.cpp | 2023-12-06T10:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/18c11q3/mlx_new_open_source_ml_framework_from_apple/ | johnolafenwa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c11q3 | false | null | t3_18c11q3 | /r/LocalLLaMA/comments/18c11q3/mlx_new_open_source_ml_framework_from_apple/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vTPUZkMF3LsVpVSu0jnEqz-vsbs_EIhLtdyo7YOVwII', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=108&crop=smart&auto=webp&s=24cc309c505a587dab907b937725e24e05bd9e58', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=216&crop=smart&auto=webp&s=7380818e5f41647982430cbc2876824d7f00a363', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=320&crop=smart&auto=webp&s=f951e2fc392ef323ba9dbafb0fda72b26ce6c855', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=640&crop=smart&auto=webp&s=8c8bc9fe219f3ffbf8095827a7081ca98f18c240', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=960&crop=smart&auto=webp&s=b65903da439d6ae0125c0f37443c97c5c2f1d9c8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=1080&crop=smart&auto=webp&s=d6dfb92d0fe3090bab8b80b176398602654fab97', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?auto=webp&s=54000d8d1100cb79d4eea6b69c42ceda956392d2', 'width': 1200}, 'variants': {}}]} |
Initialising a tokeniser for GGUF model | 1 | I am trying to initialize a GGUF model using ctransformers, and I have done so with the following code:

~~~
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Yi-6B-GGUF",
    model_file="yi-6b.Q4_K_M.gguf",
    model_type="yi",
    gpu_layers=50,
)
~~~

My question is whether there is a way to do the same thing for the tokenizer of the corresponding model? | 2023-12-06T09:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/18c0kyy/initialising_a_tokeniser_for_gguf_model/ | No_Organization_2634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c0kyy | false | null | t3_18c0kyy | /r/LocalLLaMA/comments/18c0kyy/initialising_a_tokeniser_for_gguf_model/ | false | false | self | 1 | null |
How do I use vector embedders with an LLM on text-generation-webui? | 7 | How can I use a vector embedder like [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) with any local model on Oobabooga's [text-generation-webui](https://github.com/oobabooga/text-generation-webui)?
I'm still a beginner, but my understanding is that token limitations aside, one can significantly boost an LLM's ability to analyze, understand, use, and summarize or rephrase large bodies of text if a vector embedder is used in conjunction with the LLM, or to produce the vectors prior to prompting the LLM regarding the text or its vectors. This should be especially true for humbler local LLMs like quantized 7b models and such. I've only ever loaded a single model onto text-generation-webui, so I've no idea how to use a vector embedder in conjunction with a local LLM on it.
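Conceptually, the embedder only supplies vectors; retrieval is then a nearest-neighbour search over them before the top chunks are pasted into the LLM's prompt. A minimal pure-Python sketch of that step (the tiny 3-d vectors stand in for real embeddings from a model like UAE-Large-V1; `cosine` and `top_k` are illustrative helpers, not webui APIs):

```python
# Hedged sketch of the retrieval step a vector embedder enables.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, chunk_vecs, k=2):
    """Return indices of the k chunks most similar to the query embedding."""
    scored = sorted(enumerate(chunk_vecs),
                    key=lambda iv: cosine(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]

# Toy 3-d "embeddings"; in practice these come from the embedder model.
chunks = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query = [1.0, 0.1, 0.0]
print(top_k(query, chunks))  # [2, 0] -> chunks 2 and 0 go into the prompt
```

Within text-generation-webui itself, the superbooga extension reportedly wires a vector store into the chat, so this logic may not need to be hand-rolled.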
I would be grateful for any beginner-friendly help and tips with this! | 2023-12-06T09:29:51 | https://www.reddit.com/r/LocalLLaMA/comments/18c0bnx/how_do_i_use_vector_embedders_with_an_llm_on/ | RokHere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c0bnx | false | null | t3_18c0bnx | /r/LocalLLaMA/comments/18c0bnx/how_do_i_use_vector_embedders_with_an_llm_on/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'i9NL2dD-tUuwsl1FALlOyxefhxafFwuscUX8PLDgm-8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=108&crop=smart&auto=webp&s=97e2ee99e20f4a02f5a36e3d533c887f89507c00', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=216&crop=smart&auto=webp&s=50906f3d412aa07b4e352fa1262132e7bc1668f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=320&crop=smart&auto=webp&s=cd1daac94e05b30a98ddeb0dde1fbb0111e8c6c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=640&crop=smart&auto=webp&s=3f34bdb1492bbd26bd6a23f802e2cef76961a417', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=960&crop=smart&auto=webp&s=824a00cd89c655e645139ac67e667ca7b37e5c96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=1080&crop=smart&auto=webp&s=4f3aae70dd10a590036791d0b00baa5ef7b61cb2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?auto=webp&s=01a4509e9eed66019fe5b330614d966c38f84eec', 'width': 1200}, 'variants': {}}]} |
Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience? | 1 | I want to train a Code LLaMA on some data, and I am looking for a Framework or Technique to train this on my PC with a 3090 Ti in it.
In my research, I stumbled across the paper "ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU" [https://arxiv.org/abs/2312.02515](https://arxiv.org/abs/2312.02515) with this GitHub project: [https://github.com/TUDB-Labs/multi-lora-fine-tune](https://github.com/TUDB-Labs/multi-lora-fine-tune).
Now I wonder if anyone has tried the Framework yet and can share some experience or has some other good ideas and resources?
| 2023-12-06T09:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/18c0at3/has_anyone_tried_out_the_aspenframework_for_lora/ | Tr33Bug | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c0at3 | false | null | t3_18c0at3 | /r/LocalLLaMA/comments/18c0at3/has_anyone_tried_out_the_aspenframework_for_lora/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Browser Extension + Local LLM + Easy USe | 1 | [removed] | 2023-12-06T09:04:08 | https://www.reddit.com/r/LocalLLaMA/comments/18bzzt5/browser_extension_local_llm_easy_use/ | Efficient_Elk3698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bzzt5 | false | null | t3_18bzzt5 | /r/LocalLLaMA/comments/18bzzt5/browser_extension_local_llm_easy_use/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pJ2iLl8yh80qrdyE2db-pvX4WJRflM_K_laP7hgI_Lg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?width=108&crop=smart&auto=webp&s=12ee793dd604c9959c8d302b1200cb83b835f1da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?width=216&crop=smart&auto=webp&s=d8522b6e49d7bbce17d06b20bd67c009dc2a0491', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?width=320&crop=smart&auto=webp&s=563b6b76e8baf68f84cd2e54c313197e6dd9a3a3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?width=640&crop=smart&auto=webp&s=e44fc5f72b8e68aa9a94065d1c2c83ed87f8f46d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?width=960&crop=smart&auto=webp&s=e7cc9f2a718ace12625ea2e6540e9635a19d426f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?width=1080&crop=smart&auto=webp&s=7df161acd59ff46881a8a5d76d87dec91c961468', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nLChitB7GbVtv7tucT39dIewHhU_PhD6Pg32myRyfHE.jpg?auto=webp&s=b1edccd85336e10261b695ddcdc985553b33f039', 'width': 1200}, 'variants': {}}]} |
Give llama permission to run python scripts | 3 | I have been working on a project where I want to leverage the natural language understanding of an LLM to get Llama to run a python script if a certain word is found in the prompt. I then want the LLM to make a decision based on the obtained response and refine the parameters it'll pass on to the python script. Basically, I want it to reason. Does anyone here have any idea how to accomplish that? | 2023-12-06T07:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/18byz71/give_llama_permission_to_run_python_scripts/ | sinedrinsel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18byz71 | false | null | t3_18byz71 | /r/LocalLLaMA/comments/18byz71/give_llama_permission_to_run_python_scripts/ | false | false | self | 3 | null |
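The flow described in the post above is usually built as a simple dispatch loop around the model: scan the prompt (or the model's draft reply) for a trigger word, run the script, then feed the result back for a second pass. A hedged, runnable sketch; `run_tool` and `handle` are made-up names, and a real version would call the model via e.g. llama-cpp-python and run the script with `subprocess`:

```python
# Minimal sketch of keyword-triggered tool dispatch around an LLM.

def run_tool(args):
    # Stand-in for: subprocess.run(["python", "my_script.py", *args], ...)
    return f"tool output for {args}"

def handle(prompt, trigger="weather"):
    if trigger in prompt:
        # Trigger word found: run the script, then let the model refine
        # its answer using the tool's result on a second pass.
        result = run_tool([trigger])
        return f"LLM answer using: {result}"
    # No trigger: plain generation.
    return "LLM answer without tools"

print(handle("what's the weather like?"))  # routes through the tool
print(handle("tell me a joke"))            # plain generation
```

Frameworks such as LangChain agents or llama.cpp grammars for function calling implement more robust versions of this loop, where the model itself emits a structured tool call instead of relying on keyword matching.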
AssertionError: libcuda.so cannot found!. How do I fix this error in google colab? | 1 | I was trying to run Llama 2 7b 64k model on google colab. | 2023-12-06T07:41:50 | https://www.reddit.com/r/LocalLLaMA/comments/18bywfk/assertionerror_libcudaso_cannot_found_how_do_i/ | Special_Crew_401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bywfk | false | null | t3_18bywfk | /r/LocalLLaMA/comments/18bywfk/assertionerror_libcudaso_cannot_found_how_do_i/ | false | false | self | 1 | null |
Why are so many models seemingly addicted to the phrase, "couldn't help but?" | 16 | Hi, all,
Weird question, I know. But I've noticed it across hundreds of character cards (SillyTavern, Roleplay and ChatML / Mistral presets) dozens of models (3b to 70b, multiple OSes and backends) and all kinds of settings. Temperature, rep pen, freq pen, min_p, mirostat, sampler order...nothing seems to affect it. I realize it's a reasonably common phrase in English, but the frequency with which it shows up in conversations is just mind-boggling. I'm actually going to start playing with the CFG cache for the first time, just to see if it can filter out responses containing it - that's how sick of seeing it I am :D
Anyone else encountered this particular permutation of the "catchphrase problem?" | 2023-12-06T07:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/18byp3a/why_are_so_many_models_seemingly_addicted_to_the/ | smile_e_face | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18byp3a | false | null | t3_18byp3a | /r/LocalLLaMA/comments/18byp3a/why_are_so_many_models_seemingly_addicted_to_the/ | false | false | self | 16 | null |
Generate always valid function calls and objects in JSON format with this GBNF grammar generator for llama.cpp! | 38 | The following gist contains my code for a grammar generator for valid JSON objects with llama.cpp.
With it you can generate JSON objects with AI that are always valid, for function calls or object creation in Python. It includes example functions for a MemGPT-like system!
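For a sense of what such a generator emits, here is a toy sketch of building a GBNF `root` rule for a fixed-key JSON object, in the spirit of the linked gist (simplified; the function name and rule layout are illustrative assumptions, not the gist's actual API):

```python
# Toy GBNF rule generator for a fixed-key JSON object. A real grammar
# would also need the supporting `string`, `number`, and `ws` rules
# (compare llama.cpp's bundled grammars/json.gbnf).

def json_object_rule(fields):
    """fields: list of (key, gbnf_rule_name) pairs -> a GBNF root rule."""
    parts = [f'ws "\\"{key}\\"" ws ":" ws {rule}' for key, rule in fields]
    body = ' ws "," '.join(parts)
    return f'root ::= "{{" {body} ws "}}"'

print(json_object_rule([("name", "string"), ("age", "number")]))
# root ::= "{" ws "\"name\"" ws ":" ws string ws "," ws "\"age\"" ws ":" ws number ws "}"
```

Because the grammar constrains decoding token by token, the model cannot emit anything that fails to match the rule, which is what makes the JSON "always valid".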
[https://gist.github.com/Maximilian-Winter/5373962ef456a2b0d1ae324fb78e623e](https://gist.github.com/Maximilian-Winter/5373962ef456a2b0d1ae324fb78e623e) | 2023-12-06T07:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/18bynvt/generate_always_valid_function_calls_and_objects/ | FlowerPotTeaTime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bynvt | false | null | t3_18bynvt | /r/LocalLLaMA/comments/18bynvt/generate_always_valid_function_calls_and_objects/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]} |
AMD and Intel GPUs? | 3 | Last time I checked (a couple months back) it still wasn't feasible to use either of these unless one was willing to do a lot of troubleshooting. Where do things stand right now? Are more people using AMD and Intel now? | 2023-12-06T07:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/18byl2n/amd_and_intel_gpus/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18byl2n | false | null | t3_18byl2n | /r/LocalLLaMA/comments/18byl2n/amd_and_intel_gpus/ | false | false | self | 3 | null |
Anyone using an eGPU for LLM? | 1 | I'm only an amateur when it comes to A.I. models and I don't have the budget for a beefy workstation.
One idea was to get a used eGPU, like an RTX 3090 or a 4090 with 24 GB of VRAM, and connect it via Thunderbolt.
Would this work? Obviously the Thunderbolt connection would be a bottleneck, but I'm not sure of its impact; I guess it shouldn't be very high if the whole model fits in VRAM.
Does anyone have any experience with such a setup? | 2023-12-06T06:06:10 | https://www.reddit.com/r/LocalLLaMA/comments/18bxijn/anyone_using_an_egpu_for_llm/ | Gasperyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bxijn | false | null | t3_18bxijn | /r/LocalLLaMA/comments/18bxijn/anyone_using_an_egpu_for_llm/ | false | false | self | 1 | null |
Best Text to Python Code LLM | 3 | Hi all,
I am looking for a LLM that can do my task:
I have various circular texts (i.e. medical/law clauses), where I have their corresponding Python code to execute those texts. In other words, I will have clause-code pairs.
I am looking to fine-tune some kind of code-based LLM. Any suggestions what LLM I can use? I have heard great things about DeepSeek 7B. However, my use case is regarding Python only, and the text may be somewhat large (say, may go up to 1028 tokens). Will this pose a problem for models like DeepSeek 7B?
Thanks a lot. | 2023-12-06T06:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/18bxgvu/best_text_to_python_code_llm/ | plsendfast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bxgvu | false | null | t3_18bxgvu | /r/LocalLLaMA/comments/18bxgvu/best_text_to_python_code_llm/ | false | false | self | 3 | null |
Survey Reveals Top Benefits of AI in Software Development | 1 | 2023-12-06T05:56:52 | https://dev.to/ananddas/survey-reveals-top-benefits-of-ai-in-software-development-pfe | trulyfurqan | dev.to | 1970-01-01T00:00:00 | 0 | {} | 18bxcqo | false | null | t3_18bxcqo | /r/LocalLLaMA/comments/18bxcqo/survey_reveals_top_benefits_of_ai_in_software/ | false | false | default | 1 | null | |
cybertron models ranked top 20 in the open LLM leaderboard - trained on SFT, DPO & ¿UNA? | 24 | This UNA technique they speak of is supposed to be some unique way of aligning transformer layers that is not like other methods we know - definitely not layer merging or SLERP/SLURP variants. Anyone got any insights on this? I am curious.
Also:
[smol models FTW! TOP 25 ](https://preview.redd.it/rw1mbxbp0m4c1.png?width=2964&format=png&auto=webp&s=dfea6b4a18b6933741f8973a42edf359d9df6958)
I've yet to try any of the models below because I haven't had time to check if `.safetensors` files are supported, but man, these 7B models are outpacing models 10x bigger. What do you guys think is giving these rare (for now) small models an edge? It's gotta be more than just really good datasets, right?
Anyhow--
[una-cybertron-7b-v1](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) and [v2](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) models - ranked [18th and 17th](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), respectively (this is higher than [Yi-34B](https://huggingface.co/01-ai/Yi-34B)).
[https://huggingface.co/Q-bert/Optimus-7B](https://huggingface.co/Q-bert/Optimus-7B) <------ 23rd
[https://huggingface.co/chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo) <------ 24th | 2023-12-06T05:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/18bx8me/cybertron_models_ranked_top_20_in_the_open_llm/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bx8me | false | null | t3_18bx8me | /r/LocalLLaMA/comments/18bx8me/cybertron_models_ranked_top_20_in_the_open_llm/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'DQRxqF5sv6oZqbFBBY9-0-cR-RdwF6rhUCXJ_CcTKiM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?width=108&crop=smart&auto=webp&s=ebb073d2357a241388650778643b45636dc9a770', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?width=216&crop=smart&auto=webp&s=a340a4cf250e9c2580344e5ec676a0f14ec8a79d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?width=320&crop=smart&auto=webp&s=4b39692bb0b70954fe3c26bbcc238587d0dd6880', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?width=640&crop=smart&auto=webp&s=41caf68d4aa1791f53f46d143a948aba23418e7d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?width=960&crop=smart&auto=webp&s=c791201e9fcb9936ff7b985ed0c193ac4a973b89', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?width=1080&crop=smart&auto=webp&s=90228a697b692acfaff161b8aca6473b1ea48d98', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rfIdQV9KDdID2HlVZkt6HZCRKOMCWgZrv08I2ref3K0.jpg?auto=webp&s=7f337f5b875344b9b663690f7bf85c843b84f6d0', 'width': 1200}, 'variants': {}}]} | |
Some people were asking questions about my AI, well here it is some of the answers | 1 | [removed] | 2023-12-06T05:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/18bwu4j/some_people_were_asking_questions_about_my_ai/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bwu4j | false | null | t3_18bwu4j | /r/LocalLLaMA/comments/18bwu4j/some_people_were_asking_questions_about_my_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1KUD9D-K5CfUUtnl9NfB1RNs4YJUfWXCQllD3TQiNow', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/89Riw6Q0DLXZ0dg8O3o2s1OVlA2vuN1Ftgv7p0_OZXw.jpg?width=108&crop=smart&auto=webp&s=550aa03926a2c78f45f399d696bb1d54565ab594', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/89Riw6Q0DLXZ0dg8O3o2s1OVlA2vuN1Ftgv7p0_OZXw.jpg?width=216&crop=smart&auto=webp&s=4c48895fbdf2d740a2a79589efa3b8c74db36228', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/89Riw6Q0DLXZ0dg8O3o2s1OVlA2vuN1Ftgv7p0_OZXw.jpg?width=320&crop=smart&auto=webp&s=f1ae365d8fcf9d33fa103efe48bf11e391b88de7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/89Riw6Q0DLXZ0dg8O3o2s1OVlA2vuN1Ftgv7p0_OZXw.jpg?auto=webp&s=de788490e0149b4abb3468aad6b6d23b6adb6536', 'width': 480}, 'variants': {}}]} |
Noob question about oobabooga system prompts and context length | 1 | I have been using the OpenAI Assistants playground to write song lyrics, but I find the temperature is set too high and the results are just too generic without giving GPT-4 a lot of example lyrics, so I'm switching over to open-source models, currently using dolphin-2.2.1-mistral-7b and zephyr-7b-beta.
I have a pretty long system prompt that I'm using with GPT-4 that I want to use inside of oobabooga, but I have a lot of questions.
\- Can I just crank up n\_ctx to increase the context window?
\- Can I just chuck the system prompt into the notebook?
\- What's currently best model for songwriting that will run on a Macbook M1 Pro 32GB?
​
I realize that is a lot, but if someone could at least link me to some relevant docs, I would appreciate it.
​ | 2023-12-06T05:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/18bwpzo/noob_question_about_oobagooba_system_prompts_and/ | Living_Tone3782 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bwpzo | false | null | t3_18bwpzo | /r/LocalLLaMA/comments/18bwpzo/noob_question_about_oobagooba_system_prompts_and/ | false | false | self | 1 | null |
AIKit: Build and deploy open-source LLMs | 7 | Hi folks, I build an OSS project called AIKit, which uses LocalAI under the hood, that makes building and deploying open LLMs easier. I would love to get any thoughts and feedback!
[sozercan/aikit: 🏗️ AI + BuildKit = AIKit: Build and deploy large language models easily (github.com)](https://github.com/sozercan/aikit)
​ | 2023-12-06T05:15:06 | https://www.reddit.com/r/LocalLLaMA/comments/18bwnt2/aikit_build_and_deploy_opensource_llms/ | sozercan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bwnt2 | false | null | t3_18bwnt2 | /r/LocalLLaMA/comments/18bwnt2/aikit_build_and_deploy_opensource_llms/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '1XHxsCmdjgxRyDWOMbY1IcDSWKYWJJph7fE_znFN2DQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?width=108&crop=smart&auto=webp&s=c4e1d441f4b484a6bbc47baabd6e5d21292457f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?width=216&crop=smart&auto=webp&s=e814417229718c337ac96b36e3e64c42e8f2a434', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?width=320&crop=smart&auto=webp&s=03a418f56007d7ceaccfee04fdb2aa8fc5c3f2d8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?width=640&crop=smart&auto=webp&s=4820057665696b326996ca0705f7d6e2de7c4fe4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?width=960&crop=smart&auto=webp&s=2b8e4f8d6d67626ab9513173e5c0fb626d466232', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?width=1080&crop=smart&auto=webp&s=a58c81eddb124a08e9e47ac64101647c062fd1ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_Bby6FW3LamA2vlLxQAaM6Lf63WZFKKD72V42ptmdEY.jpg?auto=webp&s=04acf8cc1e14cf29354582165c79d4bd4cf043bb', 'width': 1200}, 'variants': {}}]} |
Can setting rope scaling inverse from a 16k context to 8k or 4k increase attention in a model? Can people running lower memory get a benefit to offset their drawback? | 9 | I was thinking about how to set this up, and wondering if anyone has tried it already.
​ | 2023-12-06T05:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/18bwhqc/can_setting_rope_scaling_inverse_from_a_16k/ | aseichter2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bwhqc | false | null | t3_18bwhqc | /r/LocalLLaMA/comments/18bwhqc/can_setting_rope_scaling_inverse_from_a_16k/ | false | false | self | 9 | null |
Apple Releases 'MLX' - ML Framework for Apple Silicon | 217 | Apple's ML Team has just released 'MLX' on GitHub. Their ML framework for Apple Silicon.
[https://github.com/ml-explore/mlx](https://github.com/ml-explore/mlx)
A realistic alternative to CUDA? MPS is already incredibly efficient... this could make it interesting if we see adoption. | 2023-12-06T04:58:34 | https://www.reddit.com/r/LocalLLaMA/comments/18bwd1y/apple_releases_mlx_ml_framework_for_apple_silicon/ | LoadingALIAS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bwd1y | false | null | t3_18bwd1y | /r/LocalLLaMA/comments/18bwd1y/apple_releases_mlx_ml_framework_for_apple_silicon/ | false | false | self | 217 | {'enabled': False, 'images': [{'id': 'vTPUZkMF3LsVpVSu0jnEqz-vsbs_EIhLtdyo7YOVwII', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=108&crop=smart&auto=webp&s=24cc309c505a587dab907b937725e24e05bd9e58', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=216&crop=smart&auto=webp&s=7380818e5f41647982430cbc2876824d7f00a363', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=320&crop=smart&auto=webp&s=f951e2fc392ef323ba9dbafb0fda72b26ce6c855', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=640&crop=smart&auto=webp&s=8c8bc9fe219f3ffbf8095827a7081ca98f18c240', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=960&crop=smart&auto=webp&s=b65903da439d6ae0125c0f37443c97c5c2f1d9c8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?width=1080&crop=smart&auto=webp&s=d6dfb92d0fe3090bab8b80b176398602654fab97', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SB9ictPLlOH0iHbxidZVkLscYaoWKauOSDPoD2yz_N4.jpg?auto=webp&s=54000d8d1100cb79d4eea6b69c42ceda956392d2', 'width': 1200}, 'variants': {}}]} |
Extracting row data using multimodal LLMs | 1 | Does anyone know how to do what's written in the title? I just tried to do it via my ChatGPT subscription, but it keeps writing code trying to extract the row data using Tesseract and failing. Should I prompt-engineer better? Are there better models for this specific task? Help and thanks in advance! | 2023-12-06T04:36:19 | https://www.reddit.com/r/LocalLLaMA/comments/18bvyod/extracting_row_data_using_multimodal_llms/ | Fluffy-Ad3495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bvyod | false | null | t3_18bvyod | /r/LocalLLaMA/comments/18bvyod/extracting_row_data_using_multimodal_llms/ | false | false | self | 1 | null |
Is there a Llama 2 13b 32k context model? | 4 | I came across Llama 2 7b 32k. Was wondering if there is a 13B model as well. | 2023-12-06T04:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/18bvn2o/is_there_a_llama_2_13b_32k_context_model/ | Conscious-Mixture-69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bvn2o | false | null | t3_18bvn2o | /r/LocalLLaMA/comments/18bvn2o/is_there_a_llama_2_13b_32k_context_model/ | false | false | self | 4 | null |
Config-based development for LLMs | 4 | My team recently launched our first open-source project AIConfig, a JSON serializable format to store your prompts, model parameters, and settings. This allows you to iterate on the AI parts of your application separately from your code while still managing prompts and such in source control.
One of the biggest value adds has been the ability to swap models easily - reducing dependencies on a single model provider like OpenAI, Meta, Google, and open-source.
We took a stance on going with a config-based approach for generative AI development and would really appreciate feedback and thoughtful critiques on our work.
Thank you for your time!
[https://github.com/lastmile-ai/aiconfig](https://github.com/lastmile-ai/aiconfig) | 2023-12-06T03:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/18bva6k/configbased_development_for_llms/ | InevitableSky2801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bva6k | false | null | t3_18bva6k | /r/LocalLLaMA/comments/18bva6k/configbased_development_for_llms/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'a_-YwQyADKTKtpd5Tn-dYDt01-jHMtu1fqXvbmFqDVI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?width=108&crop=smart&auto=webp&s=9f8e0a3497b5f6a74f11bd6acff749357612b812', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?width=216&crop=smart&auto=webp&s=7e6de205b1a4a9b33158910a362d7c2195c557d9', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?width=320&crop=smart&auto=webp&s=ce5701c94c676c1d7751abcd7ecace669d26bccb', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?width=640&crop=smart&auto=webp&s=629e8b4dc4386ae485f2ac163335b146c38d5e37', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?width=960&crop=smart&auto=webp&s=b16fb324d77a063082566b4acf7bc25773d40d17', 'width': 960}, {'height': 562, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?width=1080&crop=smart&auto=webp&s=a6ea81fa44a5a1941bfa39e9a8bffb86b0bb82f3', 'width': 1080}], 'source': {'height': 660, 'url': 'https://external-preview.redd.it/_2WjbL0HyIo8SyXC2R-AUNrjYh5zz7Vu2Y1bojUvDM4.jpg?auto=webp&s=bcc2c7720b4a6c288104ab05d0c96bf32b1dcff2', 'width': 1268}, 'variants': {}}]} |
General questions about HW req and models | 1 | I was thinking of some new venture to learn about within IT, and I figured getting an Azure cert (also for job-searching reasons) and learning a bit about cloud computing would be nice, and deploying some 70B model to try it out could be fun.
But then I noticed a post about someone running a 70B model on 12GB VRAM and 64GB RAM, and figured that it would maybe be wiser to invest in a 4060Ti 16GB or so, as I already got 64GB RAM.
So:
Are 70B models really a thing here? Are there any *good* uncensored ones compared to 13B ones? Not really into role playing, but rather just some general pondering about unethical social dynamics and chemical ventures.
And if there are any good 70B ones, would you say it's reasonable to use them for a few hours a month on a cloud solution (for like a dozen euros or two) compared to just using 13B locally instead, or local 70B and bear with it being a bit slow? | 2023-12-06T03:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/18bv74w/general_questions_about_hw_req_and_models/ | Haunting_Rain2345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bv74w | false | null | t3_18bv74w | /r/LocalLLaMA/comments/18bv74w/general_questions_about_hw_req_and_models/ | false | false | self | 1 | null |
1,200 tokens per second for Llama 2 7B on H100! | 70 |
[https://huggingface.co/blog/optimum-nvidia](https://huggingface.co/blog/optimum-nvidia)
Large Language Models (LLMs) have revolutionized natural language processing and are increasingly deployed to solve complex problems at scale. Achieving optimal performance with these models is notoriously challenging due to their unique and intense computational demands. Optimized performance of LLMs is incredibly valuable for end users looking for a snappy and responsive experience as well as for scaled deployments where improved throughput translates to dollars saved.
That's where Optimum-NVIDIA comes in. Available on Hugging Face, Optimum-NVIDIA dramatically accelerates LLM inference on the NVIDIA platform through an extremely simple API. By changing just a single line of code, you can unlock up to **28x faster inference and 1,200 tokens/second** on the NVIDIA platform.
https://preview.redd.it/m6e2cvarjl4c1.png?width=600&format=png&auto=webp&s=e4545104b3b68896dad695d8111f72781b9ed776
https://preview.redd.it/1meifynvjl4c1.png?width=600&format=png&auto=webp&s=d465ec3fa846a632765633c8a4b4a62952bded65
source tweet
[https://fxtwitter.com/jeffboudier/status/1732169220811829435](https://twitter.com/jeffboudier/status/1732169220811829435) | 2023-12-06T03:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/18bv3zk/1200_tokens_per_second_for_llama_2_7b_on_h100/ | metalman123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bv3zk | false | null | t3_18bv3zk | /r/LocalLLaMA/comments/18bv3zk/1200_tokens_per_second_for_llama_2_7b_on_h100/ | false | false | 70 | null | |
AI Prompt enhancement with custom fine-tuned LLM and a new custom node for image generation with stable diffusion. | 10 | 2023-12-06T03:28:53 | https://civitai.com/articles/3267/ai-prompt-enhancement-with-local-hosted-llm-and-a-new-custom-node | ElectroFried | civitai.com | 1970-01-01T00:00:00 | 0 | {} | 18bupdg | false | null | t3_18bupdg | /r/LocalLLaMA/comments/18bupdg/ai_prompt_enhancement_with_custom_finetuned_llm/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'r5VrrtHU6swGyji4e2j69rcCDwj0iza0IynwyZ5MHjA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?width=108&crop=smart&auto=webp&s=4400a2c1623c3e9ce0eb735daf7ad2621fb691a3', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?width=216&crop=smart&auto=webp&s=f23fd72caac6052060c624476041d7b78780deb1', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?width=320&crop=smart&auto=webp&s=8d79ec6ba2659ca75ef38fd186abfbf02864ef0b', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?width=640&crop=smart&auto=webp&s=32c74254ea9127a8a909538c2b0ae3651f5cd544', 'width': 640}, {'height': 547, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?width=960&crop=smart&auto=webp&s=d12c920a0a0f5fbe68e1b7e341cb810827b3157b', 'width': 960}, {'height': 615, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?width=1080&crop=smart&auto=webp&s=46638b8707abf92b42a89336cb2891a54a6e570e', 'width': 1080}], 'source': {'height': 684, 'url': 'https://external-preview.redd.it/-H0p3DpSlZenkVbqaJZ7lf2a7bOqAAoQLPOpkkTwZUU.jpg?auto=webp&s=557690ea74c12dfe377718b61193a57cb34403b8', 'width': 1200}, 'variants': {}}]} | ||
Tree of Attacks: Jailbreaking Black-Box LLMs Automatically | 7 | Came upon this article: https://arxiv.org/abs/2312.02119
Thoughts? | 2023-12-06T02:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/18btzh5/tree_of_attacks_jailbreaking_blackbox_llms/ | OrtaMatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18btzh5 | false | null | t3_18btzh5 | /r/LocalLLaMA/comments/18btzh5/tree_of_attacks_jailbreaking_blackbox_llms/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Web Browsing with an LLM | 2 | Has anyone used LLMs to browse the web and search for certain data? I've been getting into it more, and context length has been an issue. Of course, you can extract just the page's innerText, but that has its own issues: hidden elements that contain text and other invisible things get in the way. Have you tried web browsing, or had success with it?
​ | 2023-12-06T02:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/18btqil/web_browsing_with_an_llm/ | MrBeforeMyTime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18btqil | false | null | t3_18btqil | /r/LocalLLaMA/comments/18btqil/web_browsing_with_an_llm/ | false | false | self | 2 | null |
How to run base models w. finetuned adapters in LlamaIndex or Langchain? | 4 | Hi,
Does anyone have an example of how to get a base model plus LoRA adapter running in a RAG pipeline using the LlamaIndex or Langchain framework? I have bge-large-en-v1.5 and Zephyr-7b-beta as my base models, and I did a LoRA finetune on them with my dataset. I want to try the [RAG Fusion retriever](https://docs.llamaindex.ai/en/stable/examples/retrievers/reciprocal_rerank_fusion.html), but I've just been spinning my wheels trying to wrap the models with Transformers in a way that is usable in these frameworks.
​
Loading the model in transformers:

    model_directory = "./model_stores/zephyr-7b-beta"

    # Load the base model
    model = transformers.AutoModelForCausalLM.from_pretrained(
        model_directory,
        return_dict=True,
    ).to(device)

    # Get the tokenizer. Use left padding for open-ended generation, and add
    # the EOS token since many LLM tokenizers don't set a pad token by default.
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_directory,
        padding_side='left',
        add_eos_token=True,
    )
    tokenizer.pad_token = tokenizer.eos_token
Loading the adapter:

    from peft import PeftConfig, PeftModel

    LORA_DIR = "./generator-adapter"
    config = PeftConfig.from_pretrained(LORA_DIR)
    config.base_model_name_or_path

    # Load the LoRA model.
    model = PeftModel.from_pretrained(model, LORA_DIR)

    # Merge the model and LoRA adapter.
    merged_model = model.merge_and_unload()
Then the same thing for the embeddings model using sentence-transformers:

    from sentence_transformers import SentenceTransformer

    embed_model = SentenceTransformer("./embeddings_models/BAAI_bge-large-en-v1.5").to(device)
    # Then the same process to merge_and_unload()
And then I was following the LlamaIndex example on making a Custom LLM:

    from typing import Optional, List, Mapping, Any

    from llama_index import ServiceContext, SimpleDirectoryReader, SummaryIndex
    from llama_index.callbacks import CallbackManager
    from llama_index.llms import (
        CustomLLM,
        CompletionResponse,
        CompletionResponseGen,
        LLMMetadata,
    )
    from llama_index.llms.base import llm_completion_callback

    # Wrap the generator model in a pipeline
    from transformers import pipeline

    pipeline = pipeline(
        task="text-generation",
        model=merged_model,
        tokenizer=tokenizer,
        max_new_tokens=1024,
        repetition_penalty=1.05,
        # pad_token_id=tokenizer.eos_token_id,
        device=device,
    )

    class OurLLM(CustomLLM):
        context_window: int = 3000
        num_output: int = 1024
        model_name: str = "custom"

        @property
        def metadata(self) -> LLMMetadata:
            """Get LLM metadata."""
            return LLMMetadata(
                context_window=self.context_window,
                num_output=self.num_output,
                model_name=self.model_name,
            )

        @llm_completion_callback()
        def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
            # Use the pipeline to generate a response
            generated_text = pipeline(prompt, **kwargs)[0]['generated_text']
            return CompletionResponse(text=generated_text)

        @llm_completion_callback()
        def stream_complete(
            self, prompt: str, **kwargs: Any
        ) -> CompletionResponseGen:
            # Use the pipeline to generate a response
            generated_text = pipeline(prompt, **kwargs)[0]['generated_text']
            response = ""
            for token in generated_text:
                response += token
                yield CompletionResponse(text=response, delta=token)

    llm = OurLLM()
I managed to somewhat get it to work, but I keep getting this TypeError before it generates the final answer:

    TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'

I'm still trying to figure out how to load my PEFT embeddings model into the LlamaIndex service_context, but have just been spinning my wheels. I can't seem to find any good examples online. Somebody's got to have done this before, right? | 2023-12-06T02:12:43 | https://www.reddit.com/r/LocalLLaMA/comments/18bt7df/how_to_run_base_models_w_finetuned_adapters_in/ | salah_ahdin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bt7df | false | null | t3_18bt7df | /r/LocalLLaMA/comments/18bt7df/how_to_run_base_models_w_finetuned_adapters_in/ | false | false | self | 4 | null |
Minimum acceptable tokens per second poll | 1 |
[View Poll](https://www.reddit.com/poll/18bswdy) | 2023-12-06T01:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/18bswdy/minimum_acceptable_tokens_per_second_poll/ | 128username | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bswdy | false | null | t3_18bswdy | /r/LocalLLaMA/comments/18bswdy/minimum_acceptable_tokens_per_second_poll/ | false | false | self | 1 | null |
What will Nvidia's market share of model training compute be on 1 Jan 2030? | 1 | Nvidia's rise in the computer chip race has been meteoric since it introduced CUDA in 2007. Their capabilities are now so respected that the US Department of Commerce restricted exports of their top-of-the-line GPUs to China in October to preserve the US' short-term technological advantage. However, processor market share has shifted dramatically over Silicon Valley's history, driven by everything from commercial standards wars to the limitations of physics. So it raises the question: what will Nvidia's longer-term trajectory be?
What will Nvidia's market share of training compute be on 1 Jan 2030?
[View Poll](https://www.reddit.com/poll/18bs7su) | 2023-12-06T01:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/18bs7su/what_will_nvidias_market_share_of_model_training/ | kyjk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bs7su | false | null | t3_18bs7su | /r/LocalLLaMA/comments/18bs7su/what_will_nvidias_market_share_of_model_training/ | false | false | self | 1 | null |
Pluto: Generate Synthetic Datasets for LLM Fine-Tuning 🌌 | 1 | [removed] | 2023-12-06T00:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/18bqy11/pluto_generate_synthetic_datasets_for_llm/ | jger227 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bqy11 | false | null | t3_18bqy11 | /r/LocalLLaMA/comments/18bqy11/pluto_generate_synthetic_datasets_for_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pTKqHjhoCugrW2rJAn5c3mQ4bp39CO2q-VCteGDYE7Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=108&crop=smart&auto=webp&s=c72722ebfe18850415d6d897244df540fef828c6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=216&crop=smart&auto=webp&s=bd45ce295e3c93b79cfc4bb35bd809d08cd58369', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=320&crop=smart&auto=webp&s=ca57191da0e4ed1530f68372d845eec14099d40f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=640&crop=smart&auto=webp&s=e04cbbaafb467addad6f22d31af4f2e792859dcb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=960&crop=smart&auto=webp&s=0170ecfc57ed080894a7f9e61a0aac13e55fcfc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=1080&crop=smart&auto=webp&s=8a01e162e1a4866f6bdcfccc62c934560f3ab555', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?auto=webp&s=e8a8f4f3b77dcbcee4405729ecaabbc5099d7709', 'width': 1200}, 'variants': {}}]} |
Thoughts on DL build? | 3 | Hi folks,
I'm planning to build a workstation for DL/local LLM experiments. I have already obtained two used 3090s, so I've centered the build around them. Here's a breakdown of parts ([https://pcpartpicker.com/list/qzGddH](https://pcpartpicker.com/list/qzGddH)):
**CPU**: AMD Ryzen 9 5900X 3.7 GHz 12-Core Processor
**CPU Cooler**: ARCTIC Liquid Freezer II 360 56.3 CFM Liquid CPU Cooler
**Thermal Compound**: Arctic Silver 5 High-Density Polysynthetic Silver 3.5 g Thermal Paste
**Motherboard**: Asus ROG Crosshair VIII Dark Hero ATX AM4 Motherboard
**Memory**: 2x Corsair Vengeance LPX 64 GB (2 x 32 GB) DDR4-3600 CL18 Memory (128GB total)
**Storage**: Samsung 980 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME SSD
**Video Card**: Zotac GAMING Trinity OC GeForce RTX 3090 24 GB (x2)
**Case**: Lian Li O11 Dynamic EVO XL ATX Full Tower Case
**Power Supply**: Corsair HX1500i 1500 W 80+ Platinum Certified Fully Modular ATX Power Supply
I'm particularly unsure about my choice of motherboard. I'd also ideally like to stick with air cooling; any thoughts on that? Specifically, any fan suggestions would be great.
Thanks ahead for your help! | 2023-12-06T00:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/18bqrhl/thoughts_on_dl_build/ | sygn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bqrhl | false | null | t3_18bqrhl | /r/LocalLLaMA/comments/18bqrhl/thoughts_on_dl_build/ | false | false | self | 3 | null |
Using DeepSeek Coder 33b gguf with text-gen-webui: use Compress! | 35 | I had been struggling greatly getting Deepseek coder 33b to work with Oobabooga; like many others, I was getting the issue where it produced a single character like ":" endlessly. I tried every preset, instruction template, etc I could find. Google results seemed to basically have a bunch of people saying it doesn't work well on Oobabooga and left it at that.
Finally, after banging my head at it a while, I tested it in Llama.cpp directly and it worked no problem. I was trying to figure out why, when I saw that the loader said it was a linear scale model, and the linear scale was 0.25. I vaguely remembered that's a compress_pos_emb of 4 (am I wrong? I feel like that's not wrong) so I set that and reloaded.
SUCCESS! It runs. I have code now.
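For what it's worth, the relationship I relied on is trivial to sketch — this is my own reading of the loader output, not anything official:

```python
# The loader reports a linear rope scale; compress_pos_emb in text-gen-webui
# is (as I understand it) simply the reciprocal of that scale.

def compress_from_linear_scale(linear_scale: float) -> int:
    return round(1 / linear_scale)

print(compress_from_linear_scale(0.25))  # → 4
```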
So yea, leave rope base at 100,000 and set compress to 4. Works like a charm for me. | 2023-12-06T00:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/18bqphh/using_deepseek_coder_33b_gguf_with_textgenwebui/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bqphh | false | null | t3_18bqphh | /r/LocalLLaMA/comments/18bqphh/using_deepseek_coder_33b_gguf_with_textgenwebui/ | false | false | self | 35 | null |
Asking ChatGPT To Repeat Words 'Forever' Is Now a Terms of Service Violation | 1 | 2023-12-06T00:08:20 | https://it.slashdot.org/story/23/12/04/171259/asking-chatgpt-to-repeat-words-forever-is-now-a-terms-of-service-violation | TheTwelveYearOld | it.slashdot.org | 1970-01-01T00:00:00 | 0 | {} | 18bqonk | false | null | t3_18bqonk | /r/LocalLLaMA/comments/18bqonk/asking_chatgpt_to_repeat_words_forever_is_now_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/eEy39nXy3ZiPsZo42REeDK10g0wRPYEUZWkgcIDoZ0Y.jpg?auto=webp&s=62cc8d272af6d8f03439ad32c1e3830ed26d1933', 'width': 64}, 'variants': {}}]} | ||
I built a simple web app that better organizes TheBloke's LLMs | 1 | 2023-12-05T23:58:13 | https://betterbloke.andrewxia.com | AndrewXia | betterbloke.andrewxia.com | 1970-01-01T00:00:00 | 0 | {} | 18bqghc | false | null | t3_18bqghc | /r/LocalLLaMA/comments/18bqghc/i_built_a_simple_web_app_that_better_organizes/ | false | false | default | 1 | null | |
AI Noob In Need Of Guidance | 1 | [removed] | 2023-12-05T23:37:14 | https://www.reddit.com/r/LocalLLaMA/comments/18bpzhr/ai_noob_in_need_of_guidance/ | WG_BS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bpzhr | false | null | t3_18bpzhr | /r/LocalLLaMA/comments/18bpzhr/ai_noob_in_need_of_guidance/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pHrgLZTOb5kNWZ3OwUj89KzP02AXCJ7vvgXjXs8ln_0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?width=108&crop=smart&auto=webp&s=957956a74df121329a2ef533cb6ce5b39870ef9b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?width=216&crop=smart&auto=webp&s=f3a5f66226081cd6c7a65a332b6231c7d9ec9d88', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?width=320&crop=smart&auto=webp&s=3bbf6bb28a4ed665a485e62155219b541b8f98c4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?width=640&crop=smart&auto=webp&s=82cd680e3f011f9554b0bc20996683f5bfd1c5db', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?width=960&crop=smart&auto=webp&s=0a36a3caff31d67a657dd19d37b2c7c341adfac8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?width=1080&crop=smart&auto=webp&s=3b0c061d392709826589121e675a0910f869a9dc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/7QQYqd-Ef6uabZ8a-Omry68uIvdTkLc8Q9UUDU_4ddQ.jpg?auto=webp&s=ad1a20e04d13ac66957948baa32c11fec14342ea', 'width': 1200}, 'variants': {}}]} |
Open-source LLMs with Image Interpretation | 2 | I am looking for an open source LLM that is able to perform image interpretation. Specifically, I want to extract specific components of an image containing multiple graphs with a table under each graph. I want the LLM to be able to extract the text in the table. I know there are other ways of accomplishing this without using an LLM, but this is part of a project I am testing out.
I tried this with ChatGPT4, and it worked spectacularly but I want to be able to run this LLM on my local machine. Has anyone had any experience with other LLMs that works well? So far I have tried InstructBLIP and Replicate with very lackluster results.
​
Thanks! | 2023-12-05T23:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/18bpuqs/opensource_llms_with_image_interpretation/ | SpaceNaught26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bpuqs | false | null | t3_18bpuqs | /r/LocalLLaMA/comments/18bpuqs/opensource_llms_with_image_interpretation/ | false | false | self | 2 | null |
AI Code assistant for about 50-70 users | 9 | Hey guys,
I'm looking for the right model and platform to run an AI Code assistant for about 50-70 users. I have a machine with 2 NVIDIA Quadro P4000 and 128GB RAM.
I found a co-pilot alternative named "TabbyML" which worked fine for me with StarCoder-1B model but I don't know how would it go on a larger scale.
I thought about deploying on the machine the "oobabooga/text-generation-webui" but I don't want my users to be exposed to the settings of the model.
What kind of models do you think I should use? 7B? 13B? 70B?
And what platform do you guys think will be suitable for this use case? | 2023-12-05T22:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/18bodle/ai_code_assistant_for_about_5070_users/ | Fr4y3R | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bodle | false | null | t3_18bodle | /r/LocalLLaMA/comments/18bodle/ai_code_assistant_for_about_5070_users/ | false | false | self | 9 | null |
llama-journey: a llama.cpp adventure game using structured outputs | 79 | Hey folks, over the past couple months I built a little experimental adventure game on llama.cpp. It explores using structured output to generate scenes, items, characters, and dialogue. It's rough and unfinished, but I thought it was worth sharing and folks may find the techniques interesting.
It uses grammar sampling to generate Python objects, tracking and updating the game state in code. In the case of scenes, the model generates a relative layout (X is ahead, Y is behind etc) and the code finds a minimal layout (ish) to satisfy the named objects.
Code: [https://github.com/ejones/llama-journey](https://github.com/ejones/llama-journey) | 2023-12-05T22:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/18bnxum/llamajourney_a_llamacpp_adventure_game_using/ | forceofhabit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bnxum | false | null | t3_18bnxum | /r/LocalLLaMA/comments/18bnxum/llamajourney_a_llamacpp_adventure_game_using/ | false | false | self | 79 | {'enabled': False, 'images': [{'id': 'rPlAGPYoOp6X03ErDYZolns9sAPbP2kr0XV5CMEox-M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?width=108&crop=smart&auto=webp&s=df3faaa8fa2bcecdfb19bc3f5005897abdf6d301', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?width=216&crop=smart&auto=webp&s=9d292d117b22c059e4e7e456964a20735160e291', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?width=320&crop=smart&auto=webp&s=61b0d4f256652f3ffa190942177ee628acb9b6c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?width=640&crop=smart&auto=webp&s=5528395b1535d8804cf01eb74afdbc49626a8471', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?width=960&crop=smart&auto=webp&s=a0247c98e39bab0df62972645b1651c715fa1384', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?width=1080&crop=smart&auto=webp&s=bf61e4f02acaaa3efba371b9d87053be1d876e6d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2LgFIvwElWRgcUGTwpKfI2_W__sxnaXda2FUwQBugsE.jpg?auto=webp&s=488cedd2489c581322b4eac7a8942e6883d16bbc', 'width': 1200}, 'variants': {}}]} |
Finetuning Zephyr 7B with autotrain advanced, what is the dataset format? | 3 | I have a bunch of my old Gmail chats. I'm creating a simple agent that will talk like me. The chats look like this:
Alice: hey what are you doing for lunch?
Me: going to get a burrito
What format does Zephyr expect fine-tuning data in? Or should I format it in a way that AutoTrain Advanced understands, so that AutoTrain Advanced converts it into what Zephyr expects? | 2023-12-05T22:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/18bnvdw/finetuning_zephyr_7b_with_autotrain_advanced_what/ | blackstonewine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bnvdw | false | null | t3_18bnvdw | /r/LocalLLaMA/comments/18bnvdw/finetuning_zephyr_7b_with_autotrain_advanced_what/ | false | false | self | 3 | null |
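A minimal sketch of formatting such chats into Zephyr's chat template, as I understand it from the model card. The special tokens here are an assumption; verify them against the official template before training.

```python
def to_zephyr(turns):
    """turns: list of (speaker, text); messages from 'Me' become the assistant."""
    out = []
    for speaker, text in turns:
        role = "assistant" if speaker == "Me" else "user"
        out.append(f"<|{role}|>\n{text}</s>")  # assumed Zephyr-style role markers
    return "\n".join(out)

sample = [("Alice", "hey what are you doing for lunch?"),
          ("Me", "going to get a burrito")]
print(to_zephyr(sample))
```

AutoTrain Advanced can also take a plain `text` column, in which case each row would be one pre-templated conversation string like the output above.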
How/what GPU to speed up Llama-2 13B? | 1 | I've installed Llama-2 13B on my local machine. It performs reasonably on simple prompts like 'tell me a joke', but when I give it a complicated prompt with some knowledge base, it takes 10-15 minutes to process a related question. What options/GPUs do I have to speed this up? | 2023-12-05T21:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/18bmuwl/howwhat_gpu_to_speed_up_llama2_13b/ | keyboardwarrriorr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bmuwl | false | null | t3_18bmuwl | /r/LocalLLaMA/comments/18bmuwl/howwhat_gpu_to_speed_up_llama2_13b/ | false | false | self | 1 | null |
Nexusflow/NexusRaven-V2-13B · Hugging Face | 20 | 2023-12-05T21:19:08 | https://huggingface.co/Nexusflow/NexusRaven-V2-13B | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18bmo5w | false | null | t3_18bmo5w | /r/LocalLLaMA/comments/18bmo5w/nexusflownexusravenv213b_hugging_face/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'I55NllbZn2-uNdOhQjTLRxhuXzvrTrDSXVP6pDY7eCY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?width=108&crop=smart&auto=webp&s=d1663745b2732cd25db7c017da90ccca69d69f0c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?width=216&crop=smart&auto=webp&s=63c21a238d9f8ac5f835d6fa1d2e1149757161a5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?width=320&crop=smart&auto=webp&s=78ddcf0f49b86393aff4ff0fac6f5ad9c6615084', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?width=640&crop=smart&auto=webp&s=2c48c560e607c8b5738934f85f9000a18abb28a6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?width=960&crop=smart&auto=webp&s=11073e28109c0b097239e6f659b265deecc402f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?width=1080&crop=smart&auto=webp&s=2ae6e365f296382615f69db546e94dd1d3f3a17b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mHLhILpAyMMXs-37wMYt8qUz0qvy90RyMrX9ixIPHF4.jpg?auto=webp&s=4e7e7f0ad8e07e8015547a21996cd2019077dfde', 'width': 1200}, 'variants': {}}]} | ||
Best base model 1B or 7B for full finetuning | 4 | What are the best base model in the 1B-7B range for a full finetuning on a QA specific domain? | 2023-12-05T21:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/18bmlef/best_base_model_1b_or_7b_for_full_finetuning/ | _ragnet_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bmlef | false | null | t3_18bmlef | /r/LocalLLaMA/comments/18bmlef/best_base_model_1b_or_7b_for_full_finetuning/ | false | false | self | 4 | null |
Months with 31 days in a year questions trip many models | 1 | Most 7B and 13B models I tried (Llama, Mistral, OpenHermes - a lot of them) seem to have a hard time answering what feels like a simple question:
**How many months have 31 days in a year?**
And usually the answers are so wrong. Here is a sample. Some of these are comical.
* Hello! I'd be happy to help you with that question. There are actually only two months in a year that have 31 days: March and May. So, the answer is two months. Is there anything else I can assist you with?
* All months have at least 28 days. None of them have 31 days.
* There are 12 months in a year that have 31 days each.
* There are 4 months with 31 days in a year: January, March, May, and July.
I tried a few variations of the same question as well. The variation "What months have 31 days in a year" gives right results sometimes.
Most models can answer this question “How many days are in August?” A few do fail even this one.
Is this some kind of a known issue? It feels like a question 7b/13b models shouldn't fail so hard.
Llama 2 70B does answer this and the variations correctly though, so that's good news. | 2023-12-05T20:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/18bm3u1/months_with_31_days_in_a_year_questions_trip_many/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bm3u1 | false | null | t3_18bm3u1 | /r/LocalLLaMA/comments/18bm3u1/months_with_31_days_in_a_year_questions_trip_many/ | false | false | self | 1 | null |
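For the record, the ground truth is easy to check with the standard library; seven months have 31 days:

```python
import calendar

# Count the months whose length is 31 days (year doesn't matter for these).
months_31 = [calendar.month_name[m] for m in range(1, 13)
             if calendar.monthrange(2023, m)[1] == 31]
print(len(months_31), months_31)
# 7 ['January', 'March', 'May', 'July', 'August', 'October', 'December']
```

So a model answering "two" or "four" or "twelve" is simply wrong on a fixed fact.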
I want some help in a PC build $1260-1600 to run 70b models | 4 | I'm trying to build a PC that can run 70b models acceptably
The options I have currently selected are:
Ryzen 5 5600
ASUS TUF GAMING B450-PLUS II
RAM 32GB 3200Mhz DDR4
Single RTX 3090 EVGA FTW3
I've heard it's difficult, but is it really possible to run 70B models acceptably with these specifications? What should I change in this build to be able to run them?
I also heard that it's possible to split inference between the CPU and the graphics card. Would the Ryzen 5 5600 be enough to run 70B models acceptably this way, without 48GB-VRAM cards or a dual-GPU setup?
Or should I get a more powerful processor for the offloading to be effective?
Is the Tesla P40 comparable in performance to the 3090?
​ | 2023-12-05T20:52:38 | https://www.reddit.com/r/LocalLLaMA/comments/18bm16k/i_want_some_help_in_a_pc_build_12601600_to_run/ | Top-Weekend-357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bm16k | false | null | t3_18bm16k | /r/LocalLLaMA/comments/18bm16k/i_want_some_help_in_a_pc_build_12601600_to_run/ | false | false | self | 4 | null |
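A rough sanity check for builds like this (my own arithmetic, not a benchmark): weight memory alone for a 70B model, ignoring KV cache and activation overhead. It shows why a single 24 GB 3090 needs CPU offload even at 4-bit.

```python
def weight_gb(params_billion, bits):
    """GiB needed just for the weights at a given bit width."""
    return params_billion * 1e9 * bits / 8 / 1024**3

for bits in (16, 8, 4, 3):
    print(f"{bits}-bit 70B weights: {weight_gb(70, bits):.1f} GiB")
# 4-bit comes to ~32.6 GiB, so a single 24 GB 3090 must offload part of the model
```

With 32 GB of system RAM plus 24 GB of VRAM, a 4-bit 70B fits, but the CPU-resident layers will dominate the token rate.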
Help me to buy a 2x 4090 NVLink PC. | 4 | I need to specify a PC configuration for my company that can efficiently run local models. Would using two RTX 4090 GPUs connected via NVLink enhance the performance for running large-scale models? I aim to run models of sizes 7 billion, 13 billion, and 70 billion parameters. I would appreciate your insights on this matter. | 2023-12-05T20:15:37 | https://www.reddit.com/r/LocalLLaMA/comments/18bl5oh/help_me_to_buy_a_2x_4090_nvlink_pc/ | Tiny_Yellow_7869 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bl5oh | false | null | t3_18bl5oh | /r/LocalLLaMA/comments/18bl5oh/help_me_to_buy_a_2x_4090_nvlink_pc/ | false | false | self | 4 | null |
Can we discuss MLOps, Deployment, Optimizations, and Speed? | 8 | The field is moving so fast, and maybe it's helpful to do a community sync up of knowledge.
Please share your wisdom too!
So, you have a model you want to run. How do you run it **best**, wrt local deployment, cloud deployment, inference, training, tok/s, hardware utilization, etc.
This is especially relevant to me since I run an OSS project, and people are running DeepSeek 67B locally, but only getting ~0.3 tok/s, so, I hope to learn how to boost that number from someone here!
**Dump of Things I Know About Deployment that might help someone:**
* [`accelerate`](https://github.com/huggingface/accelerate/) is a best-in-class lib for deploying models, especially across multi-gpu and multi-node.
* [`transformers`](https://github.com/huggingface/transformers) uses `accelerate` if you call it with `device_map='auto'`
* Llama models have `flash_attention` to speed things up, and I think that works even with GPTQ, but not non-llamas yet like eg DeepSeek
* The [`unsloth`](https://github.com/unslothai/unsloth/) project offers some low-level optimizations for Llama et al, and as of today some prelim Mistral work (which I heard is the llama architecture?)
* [`llama.cpp`](https://github.com/ggerganov/llama.cpp/) is a great resource for running Quants, and even though it's called `llama`, it's the goto backend for basically all LLMs right now (`ctransformers` is dead)
* **Quants include:**
* `GGUF` for CPU+RAM setups (eg laptops)
* `GPTQ` for GPU setups
* `AWQ` also for GPU (but any benefit over GPTQ?)
* `bitsandbytes` (eg `load_in_8bit=True` for `transformers`) for untrained quantization. This keeps quality high down to 8bit, but suffers if you go to 4bit.
* because the non-bitsandbytes versions are trained to be low bit, I expect their quality to be higher than bitsandbytes. But bitsandbytes can be performed instantaneously on any model. For the quants, use `TheBloke`
* **Paralellism:**
* Data Parallel means your model fits in one GPU, so you shove it repeatedly in all your GPUs, and each GPU sees a different slice of data
* Tensor Parallel means one tensor gets sharded across GPUs. I think this is too slow to be commonplace (?)
* Pipeline Parallel means you run different model layers on different GPUs
* [DeepSpeed](https://github.com/microsoft/DeepSpeed) can handle parallelism concerns, and even offload data/model to RAM, or even NVMe (!?) . I'm surprised I don't see this project used more. | 2023-12-05T20:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/18bky8a/can_we_discuss_mlops_deployment_optimizations_and/ | BayesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bky8a | false | null | t3_18bky8a | /r/LocalLLaMA/comments/18bky8a/can_we_discuss_mlops_deployment_optimizations_and/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'RHfEiN_NGDg9MPmoNctPfRPvl7PnP18kn2-2Nm86q7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?width=108&crop=smart&auto=webp&s=6c7b9ecd9c540d91af97e3722d0ac815ceb7eab4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?width=216&crop=smart&auto=webp&s=bacbeb6b6a4f25700701bd661f6642b4d2efd001', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?width=320&crop=smart&auto=webp&s=7436836334f82b665ba6321fd894017c8abb28f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?width=640&crop=smart&auto=webp&s=6431069ef72560737eb1d31c14f2ce279f038014', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?width=960&crop=smart&auto=webp&s=53e9200a963fa3dec923f3a994baa9d8897d048d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?width=1080&crop=smart&auto=webp&s=d812457868b5cf77109854462582feabfef03e7f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5SdRSdYpsLaHvWuQShJUDrrgT1g6AsgG1TQLKIVDjpA.jpg?auto=webp&s=07ed6cddced0b30d11f67067149a8132c9062579', 'width': 1200}, 
'variants': {}}]} |
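A toy illustration of the absmax-style int8 quantization mentioned in the list above (not bitsandbytes itself, just the idea): scale by the largest absolute weight, round to integers, and measure the round-trip error.

```python
def quantize_int8(weights):
    """Absmax symmetric quantization: map weights into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 0.9, -0.07]
q, s = quantize_int8(w)
w2 = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w2))
print(q, round(err, 4))  # integer codes; error stays below one quantization step
```

Trained quants (GPTQ/AWQ) go further by choosing the codes to minimize layer output error, which is why they hold up better at 4-bit than this untrained rounding.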
What are the best 50M parameter LLMs? | 9 | Just something my laptop's GPU should handle so I can mess with it.
(50 Million) | 2023-12-05T20:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/18bksik/what_are_the_best_50m_parameter_llms/ | Icy-Summer-3573 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bksik | false | null | t3_18bksik | /r/LocalLLaMA/comments/18bksik/what_are_the_best_50m_parameter_llms/ | false | false | self | 9 | null |
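For sizing a model that small, a common back-of-envelope formula is roughly 12·L·d² parameters for the transformer blocks plus vocab·d for embeddings. This is my own rough approximation (it ignores biases and layer norms), but it is handy for picking a config near 50M.

```python
def approx_params(layers, d_model, vocab):
    """Rough decoder-only transformer parameter count."""
    return 12 * layers * d_model**2 + vocab * d_model

# e.g. a 6-layer, d=512 model with a 32k vocab lands in the tens of millions
print(f"{approx_params(6, 512, 32000) / 1e6:.1f}M")
```

Bumping layers or d_model from there gets you to the target budget quickly.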
FYI : .ing domains are available today. Some interesting combination like tun.ing can be found. | 3 | 2023-12-05T19:18:26 | phoneixAdi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18bjsje | false | null | t3_18bjsje | /r/LocalLLaMA/comments/18bjsje/fyi_ing_domains_are_available_today_some/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'isAx-KVBUtmAfaHPbn4hqXNpWPR4jze5GfrB4JFJYF0', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?width=108&crop=smart&auto=webp&s=522492aacc2190a2f2835629ba689314219765ae', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?width=216&crop=smart&auto=webp&s=abe5f8e280e56203061511fbf05dbc1f35e5047c', 'width': 216}, {'height': 125, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?width=320&crop=smart&auto=webp&s=06ef3b885f55d5d8ac9645da8c49e6d0e3478159', 'width': 320}, {'height': 251, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?width=640&crop=smart&auto=webp&s=5db7f3aca75b4b88b34c0e28c5333245bc1ee8fd', 'width': 640}, {'height': 376, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?width=960&crop=smart&auto=webp&s=34b19ebe23ca43a5901d9d10ae7cfec8a2b2926a', 'width': 960}, {'height': 423, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?width=1080&crop=smart&auto=webp&s=9b52ea08939f858483bb9d82e6f13a38cfaa3e9f', 'width': 1080}], 'source': {'height': 922, 'url': 'https://preview.redd.it/is2joxlm0j4c1.png?auto=webp&s=4c5a10e9d767253f6d1d57d8d760f51509c8d449', 'width': 2350}, 'variants': {}}]} | |||
Refusals by LLMs | 11 | tl;dr: Post your refusals by local LLMs.
ChatGPT's "As an AI language model…" refusals are famous. Local LLMs tend to be better in that regard. However, it's still possible to encounter refusals from time to time, and prompts that the model will refuse to answer unless the user engages in some jailbreaking.
When discussing the issue, often examples of prompts that involve illegal activities of some kind are used. However, refusals can also happen for prompts that do not involve illegal practices but are simply due to policies made for PR reasons.
In many cases, refusals are accidental and happen even when those training the model didn't have any intention to censor model outputs. The most obvious reason, and possibly the predominant one, is leakage from other models (mainly OpenAI's models) in the training data. I suspect this may not be the only reason, but this is beside the point.
Some refusals happen for prompts that are deemed "offensive". I'm not sure who should be offended, when the user expressly asked for something. What I do think, however, is that it'd be useful, interesting and fun to collect some refusals by LLMs.
So, post the refusals you can get by your local LLMs: the prompt, the (non-)answer and what model we are talking about. | 2023-12-05T19:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/18bjn8i/refusals_by_llms/ | Aspie96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bjn8i | false | null | t3_18bjn8i | /r/LocalLLaMA/comments/18bjn8i/refusals_by_llms/ | false | false | self | 11 | null |
Let’s Collaborate to Build a High-Quality, Open-Source Dataset for LLMs! | 48 | Hey fellow LLM enthusiasts! I've been pondering over an idea and wanted to discuss it with all of you. What if we join forces and create a single, comprehensive dataset for training LLMs in various fields?
​
Imagine this: if each one of us contributes by creating and uploading one question-answer entry every day to a centralized platform, we could gradually build a high-quality collection that can be used to train open-source models. Just picture it: 10,000 people posting one question and its corresponding answer every day. In just one year, we would have an impressive 3.65 million curated entries, following a unified template, with minimal bias or irrelevant data.
Of course, we can also leverage existing datasets and work on curating and improving them. By actively participating in this crucial step of dataset creation, we can significantly enhance the quality of models. The best part is that it doesn't require any financial investment or specialized hardware; all we need is our collective brain power, time, and passion.
I'm genuinely interested in hearing your thoughts on this collaborative approach. Let's discuss how we can contribute to the development of better models together. | 2023-12-05T18:27:58 | https://www.reddit.com/r/LocalLLaMA/comments/18bimxa/lets_collaborate_to_build_a_highquality/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bimxa | false | null | t3_18bimxa | /r/LocalLLaMA/comments/18bimxa/lets_collaborate_to_build_a_highquality/ | false | false | self | 48 | null |
Resources to install server GPU into Desktops | 3 | Does anyone have any resources to help install a server GPU (i.e. P40, A100, etc) into a normal consumer desktop? I am looking for mainly a video and some examples for it being setup, I have seen some people on this subreddit claim to have P40s installed. | 2023-12-05T17:58:40 | https://www.reddit.com/r/LocalLLaMA/comments/18bhxsh/resources_to_install_server_gpu_into_desktops/ | Danny_Davitoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bhxsh | false | null | t3_18bhxsh | /r/LocalLLaMA/comments/18bhxsh/resources_to_install_server_gpu_into_desktops/ | false | false | self | 3 | null |
AL/ML Model Building - ADVICE REQUESTED | 1 | [removed] | 2023-12-05T17:39:30 | https://www.reddit.com/r/LocalLLaMA/comments/18bhi3v/alml_model_building_advice_requested/ | StreamConst | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bhi3v | false | null | t3_18bhi3v | /r/LocalLLaMA/comments/18bhi3v/alml_model_building_advice_requested/ | false | false | self | 1 | null |
Chrome extension to make Github LLM benchmark tables sortable | 1 | I made this extension for personal use, but it might be useful for you guys as well.
https://github.com/polywock/Sortable-Tables-for-Github | 2023-12-05T17:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/18bhg8h/chrome_extension_to_make_github_llm_benchmark/ | polywock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bhg8h | false | null | t3_18bhg8h | /r/LocalLLaMA/comments/18bhg8h/chrome_extension_to_make_github_llm_benchmark/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ebqIyz_B3cHxrD3KtzNw1zsAyLXCaR3xX9Rjn5-l40Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?width=108&crop=smart&auto=webp&s=82b4b7830c8f44a0cd9901cc821d40f3d6742c82', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?width=216&crop=smart&auto=webp&s=bfb8b361b4336f74bc4caa87545bcdc31629daed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?width=320&crop=smart&auto=webp&s=c4dda644aba604acf4b7d90fd7ea62aead881f72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?width=640&crop=smart&auto=webp&s=ecf95181e8695383829c74e04881457edf16e7bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?width=960&crop=smart&auto=webp&s=ced2d05841f848f57ea5135e8c60a110cf7b71e6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?width=1080&crop=smart&auto=webp&s=5d1bad0cdfbfcf202b2a15e8963f3836b42d34c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MIU5JUO8oSTxWeRoHi8MulXwabkOKyQVhufZPz7uU6U.jpg?auto=webp&s=3db7c11c167119b35ad13d4fe6cc4ec8a50fe36e', 'width': 1200}, 'variants': {}}]} |
A way to run Llama2 on Windows ! | 6 | [https://community.amd.com/t5/ai/how-to-running-optimized-llama2-with-microsoft-directml-on-amd/ba-p/645190](https://community.amd.com/t5/ai/how-to-running-optimized-llama2-with-microsoft-directml-on-amd/ba-p/645190)
Looks like AMD has done it! Those with a Radeon graphics card can run Llama 2 on Windows. If you have one, give it a try and leave a comment with your spec and speed :)
I've created chatbot mobile app, and want to open source it. Help. | 44 | Hey,
I've created a mobile application half a year ago, pushed it to free beta testing, and now it's been running there without much updates. I'm too busy to scale and monetize it, so I'd like to open source it instead of let it die.
One idea is to open up a web interface where users could configure their own LLM and an index for storing conversation history. Optionally, they could build the whole backend, so the mobile application would just be a convenient interface, serving the true meaning of a "personalized assistant".
I haven't seen this kind of open-source mobile application development before. What's your take on it? Does it make sense? Is someone interested in collaboration or brainstorming this through? | 2023-12-05T17:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/18bgxck/ive_created_chatbot_mobile_app_and_want_to_open/ | Spiritual-Reply5896 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bgxck | true | null | t3_18bgxck | /r/LocalLLaMA/comments/18bgxck/ive_created_chatbot_mobile_app_and_want_to_open/ | false | false | self | 44 | null |
Terminal-LLM: Now with Transformers support for non-quant models. | 5 | **Hello guys,**
My journey to create a terminal-based LLM system that can be used by the tech-illiterate continues.
I've added support for native models by using transformers loader.
You can check **terminal-llm** at [https://github.com/raddka/terminal-llm](https://github.com/raddka/terminal-llm).
Initial post at [https://www.reddit.com/r/LocalLLaMA/comments/18ar21y/terminalllm_lightweight_simple_pythonbased_llm/](https://www.reddit.com/r/LocalLLaMA/comments/18ar21y/terminalllm_lightweight_simple_pythonbased_llm/)
​ | 2023-12-05T17:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/18bgu1r/terminalllm_now_with_transformers_support_for/ | raddka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bgu1r | false | null | t3_18bgu1r | /r/LocalLLaMA/comments/18bgu1r/terminalllm_now_with_transformers_support_for/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Fpj3-cFAOY4h-qoQDYP3COF0UxMBxLDw1jwfU6HPaUQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?width=108&crop=smart&auto=webp&s=0d53dbf10491189c5ffa6326005137545ac5666b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?width=216&crop=smart&auto=webp&s=edaca08765ea51ae48f5f373b3c37a7b71577c4b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?width=320&crop=smart&auto=webp&s=375e3ad1fdb3abc7d92fe8982200a47e30881835', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?width=640&crop=smart&auto=webp&s=2327b30ce4bde94de5b5581a1f24f369d80dccaa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?width=960&crop=smart&auto=webp&s=2dda7f9ad783c0e146c45f2973b98a450c894268', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?width=1080&crop=smart&auto=webp&s=f1912189a37fe48926e7e462c4d79708fc3a2e60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VOgUDKoq8GLqCZovQxT-5dRt06foNYWcaylbbkx8zdU.jpg?auto=webp&s=de074bedbd5fdb7fd3dd049ad57cac03e6021142', 'width': 1200}, 'variants': {}}]} |
OpenAI vs OpenSource, choose your ideologies carefully! | 63 | OpenAI vs OpenSource, moments ago @sama spoke against Open Source Models!
Comments are bending towards that sentiment, but I see no harm in what he's saying.
Narratives can be made: these LLMs are checked for benchmarks only; they're not inherently checked for political biases!
-
-
China is launching models that are public in the market now, and solutions sit on top of those LLMs. What if they're induced with bias while pretraining?
They can become ‘Hitlers’! 🗡️ | 2023-12-05T16:53:13 | https://v.redd.it/let6xw9oai4c1 | Right_Link7302 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18bgfqf | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/let6xw9oai4c1/DASHPlaylist.mpd?a=1704387209%2CYWM0OGU3YzcwZGY0MGI4ZWZlMzc0OTQ1YjBiMjE4MmZhYWRmODI4ZjM5ZjhkM2ZkNDFmZjBjMzAxMzM2ZGNiYQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/let6xw9oai4c1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 344, 'hls_url': 'https://v.redd.it/let6xw9oai4c1/HLSPlaylist.m3u8?a=1704387209%2CNGYxMzZhNmMxYTZjM2U4MDhlZDZkYzhkZGU2MWI2MGI4ZTA1NjAyNmJmNDhjYTY5NmJhMDM5ZTVhNTVlNzA0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/let6xw9oai4c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}} | t3_18bgfqf | /r/LocalLLaMA/comments/18bgfqf/openai_vs_opensource_choose_your_ideologies/ | false | false | 63 | {'enabled': False, 'images': [{'id': 'N3ViMTRseG5haTRjMVKZ1qQ0tEoa6vEyvq_qH9EENbKQXCBGVPzhSjKgMUwg', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/N3ViMTRseG5haTRjMVKZ1qQ0tEoa6vEyvq_qH9EENbKQXCBGVPzhSjKgMUwg.png?width=108&crop=smart&format=pjpg&auto=webp&s=269cfad51e7393fc7d086d054b21fc89dd937523', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/N3ViMTRseG5haTRjMVKZ1qQ0tEoa6vEyvq_qH9EENbKQXCBGVPzhSjKgMUwg.png?width=216&crop=smart&format=pjpg&auto=webp&s=22d7a205af3977584d9a3b6c3b069f6a9058257f', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/N3ViMTRseG5haTRjMVKZ1qQ0tEoa6vEyvq_qH9EENbKQXCBGVPzhSjKgMUwg.png?width=320&crop=smart&format=pjpg&auto=webp&s=f5cb8ca3e6d448d1f225ade14bba67643904ed12', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/N3ViMTRseG5haTRjMVKZ1qQ0tEoa6vEyvq_qH9EENbKQXCBGVPzhSjKgMUwg.png?width=640&crop=smart&format=pjpg&auto=webp&s=c3e0749040c37397fe8c1a5308b3007afe5806a1', 'width': 640}], 'source': {'height': 476, 'url': 
'https://external-preview.redd.it/N3ViMTRseG5haTRjMVKZ1qQ0tEoa6vEyvq_qH9EENbKQXCBGVPzhSjKgMUwg.png?format=pjpg&auto=webp&s=158ec8dfb5e2b18a057f9c2188936febc2b181d4', 'width': 888}, 'variants': {}}]} | |
Self train a super tiny model recommendations | 11 | Hello! Does anyone have a recommendation for how to train a super tiny LLM from scratch, as an educational exercise using good practices? I saw a post on here 5 months ago, but I was wondering if anyone has newer recommendations. I don't expect it to be great, but I'd love to see how it performs in a very narrow space and to understand the details better. I have a 4090, so I have some VRAM to play with, but nothing crazy.
Thanks! | 2023-12-05T16:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/18bgbt9/self_train_a_super_tiny_model_recommendations/ | queenadeliza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bgbt9 | false | null | t3_18bgbt9 | /r/LocalLLaMA/comments/18bgbt9/self_train_a_super_tiny_model_recommendations/ | false | false | self | 11 | null |
Do you have any project ideas to start with LLAMA | 1 | Hello, I am a senior web developer. But I would like to start in the world of AI with LLAMA. Do you have any project ideas to start with LLAMA? What are the hardware tools that I need to buy to start? | 2023-12-05T16:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/18bg6cy/do_you_have_any_project_ideas_to_start_with_llama/ | BambaEverywhere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bg6cy | false | null | t3_18bg6cy | /r/LocalLLaMA/comments/18bg6cy/do_you_have_any_project_ideas_to_start_with_llama/ | false | false | self | 1 | null |
What open source LLMs are available if your task is complex (7b models won't cut it) and requires higher context (say, 40k tokens approx). | 2 | I came across Yi-34B 200K, but it's a base model and not fine-tuned. Are there other medium-sized LLMs with a large context window? | 2023-12-05T16:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/18bfywd/what_open_source_llms_are_available_if_your_task/ | Conscious-Mixture-69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bfywd | false | null | t3_18bfywd | /r/LocalLLaMA/comments/18bfywd/what_open_source_llms_are_available_if_your_task/ | false | false | self | 2 | null |
Pluto: An open source library to generate synthetic datasets for LLM fine-tuning 🌌 | 11 | [removed] | 2023-12-05T16:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/18bfyq2/pluto_an_open_source_library_to_generate/ | pip-install-torch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bfyq2 | false | null | t3_18bfyq2 | /r/LocalLLaMA/comments/18bfyq2/pluto_an_open_source_library_to_generate/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'pTKqHjhoCugrW2rJAn5c3mQ4bp39CO2q-VCteGDYE7Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=108&crop=smart&auto=webp&s=c72722ebfe18850415d6d897244df540fef828c6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=216&crop=smart&auto=webp&s=bd45ce295e3c93b79cfc4bb35bd809d08cd58369', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=320&crop=smart&auto=webp&s=ca57191da0e4ed1530f68372d845eec14099d40f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=640&crop=smart&auto=webp&s=e04cbbaafb467addad6f22d31af4f2e792859dcb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=960&crop=smart&auto=webp&s=0170ecfc57ed080894a7f9e61a0aac13e55fcfc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=1080&crop=smart&auto=webp&s=8a01e162e1a4866f6bdcfccc62c934560f3ab555', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?auto=webp&s=e8a8f4f3b77dcbcee4405729ecaabbc5099d7709', 'width': 1200}, 'variants': {}}]} |
Overclocking to get 10~15% inference performance | 18 | Just searched this community and didn't see anyone hinting at this, basically saying that LLM is a memory heavy job and boosting memory frequency boosts performance
Forgive me for repeating the thread if you all know this, but I ran it at the default frequency for a long time ......
​
Test on 2x3090 with 70B 4.85bpw exl2 model
Fixed seed
1 temp
no do\_sample
exactly same response
generate 10 times and avg the t/s
​
Simple conclusion:
Memory frequency is more important than the core, the best solution is to Miner configuration, reduce power consumption, reduce the core and overclock the memory.
Core +100 VRAM-502 10.5t/s
Core+0 VRAM+0 11t/s
Core +100 VRAM+0 11.5t/s
Core-300 VRAM+800 12t/s
Core+100 VRAM+900 12.5t/s
Core-300 VRAM+1100 12.5t/s
Core+150 VRAM+1100 12.8t/s | 2023-12-05T16:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/18bf9pz/overclocking_to_get_1015_inference_performance/ | yamosin | self.LocalLLaMA | 2023-12-05T18:39:10 | 0 | {} | 18bf9pz | false | null | t3_18bf9pz | /r/LocalLLaMA/comments/18bf9pz/overclocking_to_get_1015_inference_performance/ | false | false | self | 18 | null |
How I ran LLMs on Steam Deck (Handheld Gaming Console) | 13 | 2023-12-05T15:57:25 | https://swethatanamala.substack.com/p/how-i-ran-llms-on-steam-deck-handheld | saucysassy | swethatanamala.substack.com | 1970-01-01T00:00:00 | 0 | {} | 18bf4pq | false | null | t3_18bf4pq | /r/LocalLLaMA/comments/18bf4pq/how_i_ran_llms_on_steam_deck_handheld_gaming/ | false | false | 13 | {'enabled': False, 'images': [{'id': '3a47-I_Ak2AOBT6-S2FJzxr0ghC1Fm3ZbxpLDh14cYM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?width=108&crop=smart&auto=webp&s=b6899e011b6b63f64a785d13d1d8eef96bdc33c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?width=216&crop=smart&auto=webp&s=c2121d362733b1b3fa34b33b24102bc58f151824', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?width=320&crop=smart&auto=webp&s=570e955ad0b6a76d54f3aec490dedc0591660e5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?width=640&crop=smart&auto=webp&s=846dd024f213f11d35dd8d042aa46ba5cf96a3d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?width=960&crop=smart&auto=webp&s=c46abfffc375265ac3e4da0cc419d52d89433053', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?width=1080&crop=smart&auto=webp&s=4a1ecc059fa9236a7f6f735add3be1e09c9bcec7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7qpmbGoGxdcvCMhh3lIGW9uhXpEXjdIocWDwIcb2XGI.jpg?auto=webp&s=700c945cb0a1d279e9ac1c0005807c7305324523', 'width': 1200}, 'variants': {}}]} | ||
Sub-par Token Generation with AMD Hardeare | 1 | [removed] | 2023-12-05T15:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/18be99v/subpar_token_generation_with_amd_hardeare/ | SquishyOranges | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18be99v | false | null | t3_18be99v | /r/LocalLLaMA/comments/18be99v/subpar_token_generation_with_amd_hardeare/ | false | false | self | 1 | null |
What difficulties will I face if I need to buy a tesla p40? | 3 | What difficulties will I face if I need to buy a tesla p40?
​
My computer now has one 3060 12GB. I use it as layers for GGUF models to speed up text generation for large models 70B and larger. The motherboard has an additional slot for a video card, and I thought it would be good if I had another 24 gigabytes of video memory to use two video cards as layers.
​
What difficulties will I have after I receive this server graphics card?
I have a modern platform. z690, 13600K, ddr5. I’m not too worried about cooling since I’ll only use this video card for layers, it won’t get very hot in this usage scenario.
​
If I am wrong. correct me. | 2023-12-05T14:37:24 | https://www.reddit.com/r/LocalLLaMA/comments/18bdd0n/what_difficulties_will_i_face_if_i_need_to_buy_a/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bdd0n | false | null | t3_18bdd0n | /r/LocalLLaMA/comments/18bdd0n/what_difficulties_will_i_face_if_i_need_to_buy_a/ | false | false | self | 3 | null |
I'm building an Open Source (and optionally local) Google for LLM Agents, any requests? | 44 | Hey everyone,
​
I've been working on synthetic data and math / science applications of LLMs over at SciPhi, and I have gotten some great feedback from this community in the past. Something I realized along the way was that I really needed high throughput access to Google for training / running my agents. Unfortunately, Google doesn't have a high throughput API.
​
[To get an idea of how deep the dataset is - this search above already yielded \> 100 good hits.](https://preview.redd.it/mh81eln9hh4c1.png?width=562&format=png&auto=webp&s=e37ac87be397eb5cbd71ee07a66a06978d0ecaf3)
Somehow, I converged onto embedding a large fraction of the common crawl as the right path. My goal is to make a multi-billion entry vector database that contains a high quality sampling of the human body of knowledge. Right now I have \~400 million embeddings from \~20 million urls indexed.
​
I am seeing that there might be high demand for the final product and I wanted to see if anyone here is interested. I am planning to open source the embeddings and database implementation, so that people can attempt to run locally if they want.
​
Does anyone have any data requests or thoughts? I could definitely use some early beta testers - will be putting a simple web portal on in the coming weeks when the database is further along.
​ | 2023-12-05T14:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/18bctb4/im_building_an_open_source_and_optionally_local/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bctb4 | false | null | t3_18bctb4 | /r/LocalLLaMA/comments/18bctb4/im_building_an_open_source_and_optionally_local/ | false | false | 44 | null | |
Oniichat caught reuploading models and claiming them as their own | 1 | [removed] | 2023-12-05T14:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/18bcotx/oniichat_caught_reuploading_models_and_claiming/ | JawGBoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bcotx | false | null | t3_18bcotx | /r/LocalLLaMA/comments/18bcotx/oniichat_caught_reuploading_models_and_claiming/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KOV0OwDYWWYoXg2UgI1LMaTW-eWT8XP756LhhHBU0wU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?width=108&crop=smart&auto=webp&s=c8742946b06b65d74c85edc272ade5dd5e3840c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?width=216&crop=smart&auto=webp&s=c6cbf1677319b51f5de9eb09ccfcea3c512d09f1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?width=320&crop=smart&auto=webp&s=f7e421eb307dea5fc17fd065bffe67a7115d704e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?width=640&crop=smart&auto=webp&s=a0915dbc0a0c8967c1d9d09c88dcd7d86deb679c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?width=960&crop=smart&auto=webp&s=6f080a31e0d77c28f4ce1f3622623db1bdfe5f7c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?width=1080&crop=smart&auto=webp&s=52fe68166027a7a53ecdcd632aecaef9405f5f09', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ydY1WHenYFck_Sp4o-k0S-o1K62t1otHIcoJvF-AJUw.jpg?auto=webp&s=9c95aec68f599fc220b076e538a461fc798703f3', 'width': 1200}, 'variants': {}}]} |
How can I host Mistral 7b on cloud so that I can request and get response from it? | 1 | [removed] | 2023-12-05T13:49:46 | https://www.reddit.com/r/LocalLLaMA/comments/18bcd0r/how_can_i_host_mistral_7b_on_cloud_so_that_i_can/ | satyajitdass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bcd0r | false | null | t3_18bcd0r | /r/LocalLLaMA/comments/18bcd0r/how_can_i_host_mistral_7b_on_cloud_so_that_i_can/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TH2a8tkiPyak8uJXplFsKG1qcAxJI_BOm0YteF0VGEY', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=108&crop=smart&auto=webp&s=3e37e4486e60576258cd4f2891c2f11893dc489d', 'width': 108}, {'height': 262, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=216&crop=smart&auto=webp&s=3618cd542a2bcc077c6e7c311c8a7c40f44707a2', 'width': 216}, {'height': 389, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=320&crop=smart&auto=webp&s=f368b6433bd65018853fcb38e20bc6291614e9fa', 'width': 320}, {'height': 779, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=640&crop=smart&auto=webp&s=04311dba63545463e43349705a1931bbd8485ac9', 'width': 640}], 'source': {'height': 1059, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?auto=webp&s=0facf93de293b7d66e9a3e4f72ece636d59f256d', 'width': 870}, 'variants': {}}]} |
Whisper Large V3 vs Seamless V2 Large ? | 10 | I was looking for a good comparison between [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) and [seamless-m4t-v2-large](https://huggingface.co/facebook/seamless-m4t-v2-large) regarding their ASR capabilities.
I remember trying seamless v1 and it wasn't that great (whisper large v2 with better by a decent margin) but i tried the v2 model on some simple cases and it worked good.
Did anyone give the two a shot and love to share his experince ? | 2023-12-05T13:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/18bbulo/whisper_large_v3_vs_seamless_v2_large/ | Puzzleheaded_Mall546 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bbulo | false | null | t3_18bbulo | /r/LocalLLaMA/comments/18bbulo/whisper_large_v3_vs_seamless_v2_large/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'xRzsBeQu78b4k1mO86d3hsqmmOn_0Yvt4N3QhzmgoHY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?width=108&crop=smart&auto=webp&s=fca627c12eea3c482b1f8680818cfddf8405fd4b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?width=216&crop=smart&auto=webp&s=a33d4e4cb6f27b5616c8bafe98407832a7d02ffe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?width=320&crop=smart&auto=webp&s=3f26b35dacc78a5002400dd3772f1e080778fb93', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?width=640&crop=smart&auto=webp&s=442e3a75dd554619955e99e010be6239112a3769', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?width=960&crop=smart&auto=webp&s=2a9a5f0cbf5eff3de03818e2e32e0fb49b2573cb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?width=1080&crop=smart&auto=webp&s=f77718b3552cd8585b5179e98fb311a2524ed6de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lF4ncxN4BmiMsrsej55yRpAHCjLApZRU8CEl35hVSw8.jpg?auto=webp&s=7b0041ad66c62bb830f8666e8eac99258e822a1e', 'width': 1200}, 'variants': {}}]} |
Meta, IBM, NASA, etc. form open source AI Alliance | 327 | Three cheers for open source! Most of the articles I've seen on this so far have been behind a paywall, but here's one link that isn't: [https://9to5mac.com/2023/12/05/ai-alliance/](https://9to5mac.com/2023/12/05/ai-alliance/) | 2023-12-05T13:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/18bbu0p/meta_ibm_nasa_etc_form_open_source_ai_alliance/ | WaterdanceAC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bbu0p | false | null | t3_18bbu0p | /r/LocalLLaMA/comments/18bbu0p/meta_ibm_nasa_etc_form_open_source_ai_alliance/ | false | false | self | 327 | {'enabled': False, 'images': [{'id': 'A2j9oXOg5ONoLO0EFpZEwahFvDZQtuKaiG8hxMX7els', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?width=108&crop=smart&auto=webp&s=9090756f3be82b704ae464f2291c310545074927', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?width=216&crop=smart&auto=webp&s=c1de1dee0379ea5e084408820668fe66c615d3d8', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?width=320&crop=smart&auto=webp&s=89341c7b1b5bfc4e351612499568a39d829003af', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?width=640&crop=smart&auto=webp&s=f9860be2b4b2a7f2d5c8e073451ba4ecb2b5d67c', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?width=960&crop=smart&auto=webp&s=2d7b34c626b841ee3fb0afcc44bbda6fd7b997dc', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?width=1080&crop=smart&auto=webp&s=6da62743fe0be47f7146688ff997d617e798a9bf', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/2PiMvzdi3uZ8UxygcPdXzP92WjUgvcby6mSk5vDc4iM.jpg?auto=webp&s=b96860bb2e67faff0089edcfedd3883232c25c84', 'width': 1200}, 'variants': {}}]} |
Building a computer with 2 graphics cards | 2 | Hi, can you show me your computer builds, how do you fit 2 graphics cards into 1 case? I look at my GIGABYTE GeForce RTX 4090 GAMING OC graphics card and I don't understand how I can fit anything else next to it, it takes up all the space in my LIAN LI PC-O11.
I have a rtx 3080 ti. I want to plug it in to expand the vram to 36gb and I don't understand how to do it, there is just no room for it. So I want to see how other people have hooked up multiple video cards each
I'm sorry for my English :( | 2023-12-05T12:27:27 | https://www.reddit.com/r/LocalLLaMA/comments/18bawgz/building_a_computer_with_2_graphics_cards/ | dmitryplyaskin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18bawgz | false | null | t3_18bawgz | /r/LocalLLaMA/comments/18bawgz/building_a_computer_with_2_graphics_cards/ | false | false | self | 2 | null |
Merging Models and Testing Out Settings. Rate the Cherry-Picked Story Please (NSFW/GORE) | 4 | 2023-12-05T12:04:13 | xadiant | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18baj4g | false | null | t3_18baj4g | /r/LocalLLaMA/comments/18baj4g/merging_models_and_testing_out_settings_rate_the/ | false | false | nsfw | 4 | {'enabled': True, 'images': [{'id': 'k9WeopJGvk6coyCJSJ-7c8U0vw7TE8BW4S7svkWP03A', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=108&crop=smart&auto=webp&s=9155e84d4ac44eb6b9830833495900392c7f7f5a', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=216&crop=smart&auto=webp&s=4b91801bdeca8a6389c3782a164543d2d171219a', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=320&crop=smart&auto=webp&s=27806fe80a292021e4668167fd34d54c80a66769', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=640&crop=smart&auto=webp&s=b208d86f9f1b287d7505a2023516635a1560eaa3', 'width': 640}], 'source': {'height': 733, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?auto=webp&s=ffbfe0062735eb2ed099e151abfd82d2d99edbeb', 'width': 822}, 'variants': {'nsfw': {'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=220f554c65a82541085164edfeda1737f8fab981', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3a9b9e61ee588d6f0f680559b590100a2c60f795', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=acb39252ab9b936183390920079cd4316fa0014f', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=005f667dc3120345182ae3f0a6538af80be85843', 'width': 640}], 'source': {'height': 733, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?blur=40&format=pjpg&auto=webp&s=4b42425dd920bcaac44917aef4fcd240257a1c88', 'width': 822}}, 'obfuscated': {'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=220f554c65a82541085164edfeda1737f8fab981', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3a9b9e61ee588d6f0f680559b590100a2c60f795', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=acb39252ab9b936183390920079cd4316fa0014f', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=005f667dc3120345182ae3f0a6538af80be85843', 'width': 640}], 'source': {'height': 733, 'url': 'https://preview.redd.it/l9mpbht8vg4c1.png?blur=40&format=pjpg&auto=webp&s=4b42425dd920bcaac44917aef4fcd240257a1c88', 'width': 822}}}}]} |
Mistral Fine Tuning issue | 3 | While fine tuning Mistral-7B on Azure, I’m getting same error multiple time : **“Note: you may need to restart the kernel to use updated packages.”**
I’ve restarted kernel multiple time but still getting same error. Can anyone help me understand how to fix it. | 2023-12-05T11:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/18babmn/mistral_fine_tuning_issue/ | Chiragjoshi_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18babmn | false | null | t3_18babmn | /r/LocalLLaMA/comments/18babmn/mistral_fine_tuning_issue/ | false | false | self | 3 | null |