title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Has anyone tried few- or zero-shot training - not in-prompt - with Llama? | 1 | Instead of fine-tuning a Llama model with large data, has anyone tried few-shot or zero-shot training of a Llama model? And I'm not referring to training within prompts - I mean actual model training that changes the weights. | 2023-10-19T00:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/17b5ev9/have_anyone_tried_few_or_zero_shot_training_not/ | beybladextreme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b5ev9 | false | null | t3_17b5ev9 | /r/LocalLLaMA/comments/17b5ev9/have_anyone_tried_few_or_zero_shot_training_not/ | false | false | self | 1 | null |
Hot-swapping LoRAs During Inference | 7 | Hey yall
For some time I have been interested in applications using a single base model and multiple LoRAs during inference. Specifically, I would like to run a setup involving multiple characters, where each character has their own custom system prompt and LoRA adapter. In my opinion this would allow the specific quirks and mannerisms of the character to be ingrained in the very weights of the model, rather than relying solely on the model's role-playing capabilities. Furthermore, I could separate the function-calling LoRA adapter from the character LoRA adapter, which would allow me to cater each to its specific task without having them affect each other's flow.
So far I have attempted to use PEFT with the Huggingface .generate() method; however, inference is too slow for regular use. I am aware you can attach LoRAs to models hosted by textgen-webui and llama.cpp, but if I am not mistaken you need to reload the entire base model + lora whenever you wish to swap out the adapter.
If possible I would like to retain the base model in memory, and swap in the LoRAs on the fly whenever the specific character is called. Has anybody done this before? If so, do you mind sharing the process and which repos/libraries you used in your application. | 2023-10-19T00:02:58 | https://www.reddit.com/r/LocalLLaMA/comments/17b5dgw/hotswapping_loras_during_inference/ | FrostyContribution35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b5dgw | false | null | t3_17b5dgw | /r/LocalLLaMA/comments/17b5dgw/hotswapping_loras_during_inference/ | false | false | self | 7 | null |
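One way this is commonly done is with PEFT's multi-adapter support: load several adapters onto a single base model once, then switch between them with `set_adapter`, so the base weights stay resident in memory. A minimal sketch under that assumption (the model ID and adapter paths below are placeholders, not real repos):

```python
# Minimal sketch: one base model kept in memory, LoRA adapters hot-swapped with PEFT.
# All paths/names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the first adapter under a name, then load the others onto the same wrapper.
model = PeftModel.from_pretrained(base, "loras/character_a", adapter_name="character_a")
model.load_adapter("loras/character_b", adapter_name="character_b")
model.load_adapter("loras/function_calling", adapter_name="functions")

def generate_as(adapter_name: str, prompt: str) -> str:
    # Switching only swaps the small LoRA weights; the base model stays loaded.
    model.set_adapter(adapter_name)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate_as("character_a", "Introduce yourself in character."))
```

Note that this only avoids reloading the base model between characters; it will not by itself make HF `.generate()` any faster.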
With the new NVIDIA drivers, you have to watch the GPU when training | 61 | The new game drivers are set not to OOM (probably by starting to use CPU RAM?), which may be great for inference, but for training this also means the difference between 1 hour and 10 hours. With the old drivers, if you overdo the settings, it will immediately OOM so you know you need to back down (lower rank or batch, for example); with the new driver it will continue, capping my 3090 at around 23.8 GB, but extremely slowly - to the point that you may just kill it anyway.
So watch the GPU graph, and if it goes up and starts clipping, you may have gone too far.
| 2023-10-18T23:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/17b4vd5/with_the_new_nvidia_drivers_you_have_to_watch_gpu/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b4vd5 | false | null | t3_17b4vd5 | /r/LocalLLaMA/comments/17b4vd5/with_the_new_nvidia_drivers_you_have_to_watch_gpu/ | false | false | self | 61 | null |
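For catching this during a run, a small check along these lines can flag when allocation creeps toward the card's capacity so you can abort early instead of letting the job crawl for hours (a rough sketch; the 90% threshold is an arbitrary example):

```python
# Rough sketch: warn when reserved CUDA memory approaches the card's total VRAM.
import torch

def check_vram(threshold: float = 0.90) -> None:
    if not torch.cuda.is_available():
        return
    device = torch.cuda.current_device()
    total = torch.cuda.get_device_properties(device).total_memory
    reserved = torch.cuda.memory_reserved(device)
    frac = reserved / total
    print(f"VRAM reserved: {reserved / 1e9:.1f} / {total / 1e9:.1f} GB ({frac:.0%})")
    if frac > threshold:
        print("Warning: near the VRAM cap - the driver may start spilling to system RAM "
              "and training will slow to a crawl; consider lowering rank or batch size.")

# Call this every N steps from the training loop (e.g. in a callback).
check_vram()
```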
AI stops writing dialogue during NSFW content? | 5 | So I've noticed the AI seems to go deathly quiet once NSFW stuff begins in a roleplay. They write actions, but 99% of the time there is no speech at all even when I write a character speaking to them. How can I fix this? I'd like a mixture of dialogue and smut. Is there any particular reason it happens?
So far I've tried these models and it seems to happen in all of them so I'm not sure if I have something poorly configured or what:
xwin-mlewd-13b-v0.2.Q5\_K\_M
mythomax-l2-13b.Q4\_K\_M
airoboros-l2-13b-gpt4-m2.0.Q5\_K\_M | 2023-10-18T23:26:28 | https://www.reddit.com/r/LocalLLaMA/comments/17b4l9y/ai_stops_writing_dialogue_during_nsfw_content/ | Tupletcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b4l9y | false | null | t3_17b4l9y | /r/LocalLLaMA/comments/17b4l9y/ai_stops_writing_dialogue_during_nsfw_content/ | false | false | nsfw | 5 | null |
Fuyu-8B: A Multimodal Architecture for AI Agents | 81 | 2023-10-18T20:47:14 | https://www.adept.ai/blog/fuyu-8b | ninjasaid13 | adept.ai | 1970-01-01T00:00:00 | 0 | {} | 17b0xg8 | false | null | t3_17b0xg8 | /r/LocalLLaMA/comments/17b0xg8/fuyu8b_a_multimodal_architecture_for_ai_agents/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'Y7pH0nfcU9dbjgzXc7BJhELNY0sajBt2oqSeJ9D7G78', 'resolutions': [{'height': 39, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=108&crop=smart&auto=webp&s=3979f1a9099821589dad9942398ab2ffbd290c48', 'width': 108}, {'height': 79, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=216&crop=smart&auto=webp&s=9a17a7f22bc5a1b172790d3369df8001e045fb16', 'width': 216}, {'height': 117, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=320&crop=smart&auto=webp&s=8f2c524a0728f54e0300b18adac8f4252140eb2b', 'width': 320}, {'height': 234, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=640&crop=smart&auto=webp&s=2f106001a57feb225b001339d0c3bcaa2bdfdd88', 'width': 640}, {'height': 351, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=960&crop=smart&auto=webp&s=7cc3d09dd86b106a6087c635310697813f81035a', 'width': 960}, {'height': 395, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=1080&crop=smart&auto=webp&s=3b1ebd49c2a7b0161995d43b2a381ca7d439e8be', 'width': 1080}], 'source': {'height': 663, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?auto=webp&s=b738e9c785cea7c29f09e829343a53eb3e4eaef3', 'width': 1812}, 'variants': {}}]} | ||
Llama 2 7B 32K Instruct summarizes and outlines text... inconsistently | 11 | Hi everyone,
I'm brand new to using LLMs. I have so far gotten two different models to produce valid, appropriate, coherent, "intelligent" responses with llama.cpp and LangChain, including the long-context Llama 2 7B 32K Instruct. I don't understand why things work or don't, and was hoping for pointers to higher-level guidance.
Currently I'm working on getting Llama 2 7B 32K Instruct to receive a short (approx. 1,500-word), highly abstract text (an encyclopedia article I wrote on a topic in the humanities) and produce a one-paragraph summary, an outline, a Markdown document that could be converted into slides by Pandoc, or a limerick about the information. The prompts I used for each worked at least once. Sometimes the same prompt (with the same settings) will simply produce a copy of the original text or part of the original prompt.
I'm wondering where in the process this inconsistency emerges.
Related to this is that in order to use the Instruct model I not only had to use the prompt format (using `[INST]` and `[/INST]`) but also add these as stop words to the LlamaCpp object parameters, because otherwise the model would apparently instruct itself and keep going on both related and unrelated responses. Even just the opening tag was not sufficient to stop this kind of output. It would also throw in end tags into responses and then continue on. I don't know whether or how this is related, except they are both examples of my general ignorance of the underlying process this software follows.
Any general comments or advice would be welcome :) | 2023-10-18T20:35:04 | https://www.reddit.com/r/LocalLLaMA/comments/17b0n8t/llama_2_7b_32k_instruct_summarizes_and_outlines/ | ryanschram | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17b0n8t | false | null | t3_17b0n8t | /r/LocalLLaMA/comments/17b0n8t/llama_2_7b_32k_instruct_summarizes_and_outlines/ | false | false | self | 11 | null |
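For what it's worth, this kind of run-to-run inconsistency often comes down to sampling settings and the prompt template, so pinning the temperature down and passing the stop strings explicitly is usually the first thing to try. A rough LangChain sketch under those assumptions (the model path and parameter values are placeholders):

```python
# Rough sketch: LlamaCpp via LangChain with an explicit [INST] template, low temperature,
# and stop strings so the model does not keep instructing itself. Paths/values are placeholders.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="models/llama-2-7b-32k-instruct.Q4_K_M.gguf",
    n_ctx=8192,                  # raise toward 32k only if you have the RAM for it
    temperature=0.1,             # low temperature for summarising/outlining
    max_tokens=512,
    stop=["[INST]", "[/INST]"],  # stop before the model starts a new instruction block
)

article = open("article.txt").read()
prompt = f"[INST] Summarize the following text in one paragraph.\n\n{article} [/INST]"
print(llm(prompt))
```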
Any idea where I can download gguf model (llama 7b/13b) for running llama cpp python locally as gpt4all website is showing none! | 1 | 2023-10-18T20:15:16 | Anu_Rag9704 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17b075w | false | null | t3_17b075w | /r/LocalLLaMA/comments/17b075w/any_idea_where_i_can_download_gguf_model_llama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'bbHdq0aDsbp3c79v1EpG54TILTsdmvBW_x79fkOnyX4', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?width=108&crop=smart&auto=webp&s=f3c187188fd359d594d7f9af8fe2c0225f7e4388', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?width=216&crop=smart&auto=webp&s=2a4d9d2cd7e79f600a86716953433a8c0467a96e', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?width=320&crop=smart&auto=webp&s=cc20efb9cf685812f0e144c0ed503a4d726dd9dc', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?width=640&crop=smart&auto=webp&s=4cceda6f708b9c7f80dc4219ec32beb7cff8ab2d', 'width': 640}, {'height': 754, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?width=960&crop=smart&auto=webp&s=de11ffec5709c5b41abb6dcd72c3e1aa618c2277', 'width': 960}, {'height': 848, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?width=1080&crop=smart&auto=webp&s=1817181d272151288d8cec239b364f7bb355d9af', 'width': 1080}], 'source': {'height': 930, 'url': 'https://preview.redd.it/m86k7aozq0vb1.png?auto=webp&s=e846340b4580bcb0468a938b2ffbee4ca0e97962', 'width': 1184}, 'variants': {}}]} | |||
played some d&d with openhermes-2-mistral-7b then broke the 3rd wall | 25 | 2023-10-18T19:45:19 | sleeper-2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17azhrd | false | null | t3_17azhrd | /r/LocalLLaMA/comments/17azhrd/played_some_dd_with_openhermes2mistral7b_then/ | false | false | 25 | {'enabled': True, 'images': [{'id': 'UCbtxrz7mnsMxkTf4UitjmliyWnkOvYa6Br9bmfnI3k', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?width=108&crop=smart&auto=webp&s=b1779f5c25cf89e999d3587c18bf5d74ec790750', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?width=216&crop=smart&auto=webp&s=f226d27751cc9a66b7b8154f8ad0e48c2b168d49', 'width': 216}, {'height': 235, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?width=320&crop=smart&auto=webp&s=3d6391e634deacc281274a1165042434f58e232a', 'width': 320}, {'height': 470, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?width=640&crop=smart&auto=webp&s=839aa943ad4880b0e95f35eeacbda00d35ecf8d6', 'width': 640}, {'height': 705, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?width=960&crop=smart&auto=webp&s=5c407a993ffffe9c6789e23c5ba638e6e8a1ba32', 'width': 960}, {'height': 793, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?width=1080&crop=smart&auto=webp&s=3d2e9197ebca7ce849344d4ede6e87d6d2f3c773', 'width': 1080}], 'source': {'height': 1760, 'url': 'https://preview.redd.it/6qovol7bl0vb1.png?auto=webp&s=2a50bb4f9160a684302f80a659a6ab3e55a7efb9', 'width': 2394}, 'variants': {}}]} | |||
MBA M1 owners, what kind of performance are you getting? | 7 | How many tokens per second are you getting for a 7b model like Mistral? | 2023-10-18T19:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/17aykuj/mba_m1_owners_what_kind_of_performance_are_you/ | thedatawhiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aykuj | false | null | t3_17aykuj | /r/LocalLLaMA/comments/17aykuj/mba_m1_owners_what_kind_of_performance_are_you/ | false | false | self | 7 | null |
Fine-tuning Mistral-7B OOM | 17 | Dear community,
I have been fine-tuning various models lately on various datasets.
For example, LLaMA-2 7B can be fine-tuned on an A100 80GB with a batch size of 4 and a sequence length of 1024 pretty easily.
I got the same results both with my own scripts and with run_clm.py from Hugging Face.
The issue comes when I try to fine-tune Mistral. Even gradually dropping the sequence length doesn't reduce the memory usage.
I got to 256 and it still crashed with out of memory.
I am sure I am doing something wrong. Does anyone have any experience with Mistral?
I am not interested in PEFT; I would like to fine-tune it in 16-bit precision.
Thanks in advance! | 2023-10-18T18:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/17axfzb/finetuning_mistral7b_oom/ | Test-Elegant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17axfzb | false | null | t3_17axfzb | /r/LocalLLaMA/comments/17axfzb/finetuning_mistral7b_oom/ | false | false | self | 17 | null |
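Not sure whether it is the culprit in your setup, but early Transformers support for Mistral was very memory-hungry without FlashAttention-2, and gradient checkpointing plus gradient accumulation help a lot for full-precision fine-tunes. A hedged sketch of the loading side (the exact flag names depend on your transformers version):

```python
# Hedged sketch: settings that commonly reduce memory for a full bf16 Mistral fine-tune.
# use_flash_attention_2 requires the flash-attn package and a recent transformers release;
# newer versions spell it attn_implementation="flash_attention_2" instead.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)
model.gradient_checkpointing_enable()  # trade compute for a large activation-memory saving
model.config.use_cache = False         # the KV cache is not needed during training

args = TrainingArguments(
    output_dir="mistral-ft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,    # keep the effective batch size without the memory cost
    bf16=True,
)
```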
Is an LLM a good choice for this? | 6 | Hi
I'm new to AI and LLMs and need help figuring out if an LLM is the right tool. I have created a database with questions and answers based on company data, user guides and so on. Can I train/fine-tune an LLM so I can ask it some of these questions and it'll reply back with what is in the answers? | 2023-10-18T17:45:19 | https://www.reddit.com/r/LocalLLaMA/comments/17awomj/llm_a_good_choice_for_this/ | Brilliant_Pie_494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17awomj | false | null | t3_17awomj | /r/LocalLLaMA/comments/17awomj/llm_a_good_choice_for_this/ | false | false | self | 6 | null |
PubMedBERT Embeddings - Semantic search and retrieval augmented generation for medical literature | 15 | 2023-10-18T17:09:35 | https://huggingface.co/NeuML/pubmedbert-base-embeddings | davidmezzetti | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17avuka | false | null | t3_17avuka | /r/LocalLLaMA/comments/17avuka/pubmedbert_embeddings_semantic_search_and/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'b_Ncwb6fUNpDsxXwAEGVeFB3V1KJP8Q--qJk1MGyHb0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?width=108&crop=smart&auto=webp&s=e774ffe37508aca0e99099c098ee13837c488571', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?width=216&crop=smart&auto=webp&s=72c1bc6baea08eab4d074a865d2ab1b26f9713c2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?width=320&crop=smart&auto=webp&s=27f7f8ce9cea3c40d2a0a929079582bff02a2676', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?width=640&crop=smart&auto=webp&s=f06efd709c95fdbc088167a0e5935fb405a093a5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?width=960&crop=smart&auto=webp&s=4ef00005f5523ab0e291a65aebde007d52e1ce3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?width=1080&crop=smart&auto=webp&s=cd8e825585204d35aaf699a1e383754f63180f14', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UWNoBtC1Go1a2r4pjbgWmOs-bp0KzxyBK2d_4LKB6RI.jpg?auto=webp&s=9824f2839a0169741edcfa740c6272864a45a74a', 'width': 1200}, 'variants': {}}]} | ||
Which version of LLaMA to get | 3 | Please don't diss me...
I'm working in tech support and I would like to fine-tune LLaMA and experiment locally, to check what kind of results I would get, since we are getting some repetitive questions and it would be cool to automate the replies. But first I need to test this locally.
However, I do not have a super-duper machine.
Laptop
Ryzen7 4700u
16 GB RAM
240 GB SSD
Integrated GPU
Desktop
Ryzen 5 3400G
16 GB RAM
480 GB SSD
2 TB HD
Integrated GPU
What would you recommend - which version should I download for a local installation?
Thank you in advance! | 2023-10-18T17:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/17avtj8/which_version_llama_to_get/ | Srdj_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17avtj8 | false | null | t3_17avtj8 | /r/LocalLLaMA/comments/17avtj8/which_version_llama_to_get/ | false | false | self | 3 | null |
Biased Behavior in the "Orca Mini" LLM, Based on Facebook's Llama2 | 1 | [removed] | 2023-10-18T16:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/17avib4/biased_behavior_in_the_orca_mini_llm_based_on/ | Good-Juggernaut-740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17avib4 | false | null | t3_17avib4 | /r/LocalLLaMA/comments/17avib4/biased_behavior_in_the_orca_mini_llm_based_on/ | false | false | self | 1 | null |
How to correct a model giving wrong information | 3 |
Hello,
I am currently experimenting with a local LLM (Mistral-7b-OpenOrca.Q4\_K\_M.gguf), but it keeps providing incorrect information. For instance, it insists that Slay The Spire uses C# and Unity, whereas after checking Wikipedia, it's evident that the game is developed in Java with the use of LibGDX. Despite presenting this evidence, the model consistently gives me inaccurate information.
I'm curious about how I can modify this behavior to correct the dissemination of false information. It's worth noting that I have more than 20 years of experience as a developer, and I'm interested in understanding how to use this type of model to create applications. While I'm in the early stages of learning how to utilize it and exploring the entire ecosystem surrounding this technology, I'm encountering many new concepts, all of which are quite complex for me. | 2023-10-18T16:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/17avd3e/how_correct_a_model_giving_wrong_information/ | yodapunk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17avd3e | false | null | t3_17avd3e | /r/LocalLLaMA/comments/17avd3e/how_correct_a_model_giving_wrong_information/ | false | false | self | 3 | null |
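In general you can't argue a 7B model out of a memorized "fact" at inference time; the usual fix is to put the correct information into the context and instruct the model to answer only from it (retrieval-augmented generation). A tiny illustration with llama-cpp-python (the model path is a placeholder, and the context would normally come from a retriever rather than being hard-coded):

```python
# Tiny illustration: ground the answer in supplied context instead of the model's memory.
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b-openorca.Q4_K_M.gguf", n_ctx=4096)

context = (
    "Slay the Spire was developed by MegaCrit. "
    "It is written in Java and uses the libGDX framework."
)
question = "What language and engine is Slay the Spire built with?"
prompt = (
    "Answer the question using only the context below. "
    "If the context does not contain the answer, say you don't know.\n\n"
    f"Context: {context}\n\nQuestion: {question}\nAnswer:"
)
out = llm(prompt, max_tokens=128, temperature=0.1)
print(out["choices"][0]["text"].strip())
```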
Newbie (former Unix sysadmin): (Seems I need to build) a machine that doesn't limit the CPU/RAM, which I understand is a motherboard limitation. Recommendations? Prebuilt to tweak? BTC chassis to start with? .. (THANK YOU!) | 1 | I want to work on an LLM at home. I've been in the business for over 40 years. I ran Unix boxes at two major employers (I am a commercial person, not IT per se). I've built and tended very large databases and run real-time analytics on those. Basically, I am not dumb, or afraid, including of 'Linux' or Visual Studio (I'm a high-function C/C++ programmer). I have a disdain for the major PC and chip manufacturers whose machines dazzle with specs but whose throughput is constrained by shit design (weak use of caching, ultimate dependence on disk drives, ...). There is one manufacturer on AMZN ("Adamant") with a nice-sounding box at $5k but there are no user reviews to pore over. I was thinking the needs for the motherboard, PSU, etc. are the same for BTC mining as for LLMs, so perhaps a BTC machine might be a good starting point. In my own life, I switched to laptops 10+ years ago as the manufacturers were putting all of their efforts into these and they became superior to desktops, so I am not averse to a laptop running a 3080ti if this can be reliable in terms of heat AND not throttle the CPU/RAM. I thank in advance anyone with thoughts here. | 2023-10-18T16:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/17aubg7/newbie_former_unix_sysamin_seems_i_need_to_build/ | Unknown_Pleasur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aubg7 | false | null | t3_17aubg7 | /r/LocalLLaMA/comments/17aubg7/newbie_former_unix_sysamin_seems_i_need_to_build/ | false | false | self | 1 | null |
[Tutorial] Integrate multimodal llava into Macs' right-click Finder menu for image captioning (or text parsing, etc.) with llama.cpp and the Automator app | 40 | Hello, since llama.cpp got updated and now supports multi-modal LLMs by default ([merged PR](https://github.com/ggerganov/llama.cpp/pull/3436)), it would be nice to have multi-modal models integrated into macOS natively.
This tutorial focuses on image processing but could be adapted for text summarization and any NLP task you would like to do.
TLDR: We will do this
​
[Finder Integration + LLAVA](https://preview.redd.it/yoq92fpm5zub1.jpg?width=2004&format=pjpg&auto=webp&s=ea9007e3c71627913f04d9bb212f840dd53cf011)
**1)** You will need a working llama.cpp compiled via the "LLAMA_METAL=1 make -j" command, which will activate Metal inference support. Installation instructions for llama.cpp can be [found here](https://github.com/ggerganov/llama.cpp#usage).
**2)** In the folder where you have installed llama.cpp you have to add this small script and name it `capture.sh`:
#!/bin/bash
# Add this script to your local llama cpp installation folder
DIR="$(dirname "$0")"
"$DIR/llava" -m "$DIR/models/llava-ggml-model-q4_k.gguf" \
--mmproj "$DIR/models/llava-mmproj-model-f16.gguf" \
-t 8 \
--temp 0.1 \
-p "Describe the image in the much detailed way possible, I will use this description in the text2image tool. Mention a style if possible." \
--image "$1" \
-ngl 1 \
-n 100
# Make a sound when capture is done
say "o"
**What the script does:**
It will receive a path to the image as an argument and pass it to the llava bin, which will do the image capturing. After inference is done, your Mac will make an "*o*" sound, which means the result is already in your clipboard (o!).
Now, make this script executable via Terminal, or it will not work. You can do it like that:
chmod +x <your_path>/llama.cpp/capture.sh
**3)** The next step will involve the default Mac program called Automator:
**3.1)** Open Automator and Create a New Workflow
1. Open Automator and select "Quick Action."
2. In the workflow settings:
* Set "Workflow receives current" to image files.
* Set "in" to Finder.
**3.2)** Add "Run Shell Script" Action
1. Search for "Run Shell Script" and add it to the workflow.
2. In "Run Shell Script":
* Set "Shell" to /bin/bash.
* Set "Pass input" to as arguments.
**3.3)** Insert Script Code
Replace the text in the "Run Shell Script" box with the following:
#!/bin/bash
# Assign first input to filePath, properly quoted
filePath="$1"
# Run the llava script with an absolute path
output=$(/Users/pro16/LLM/llama.cpp/capture.sh "$filePath")
# Copy output to clipboard
echo "$output" | pbcopy
**What the script does:**
It points to the sh file we have created (capture.sh) and passes the image path to it. Then, the capturing result is copied to the clipboard.
​
Your Automator window should look like that:
[Proper settings](https://preview.redd.it/uggo03gp6zub1.png?width=2856&format=png&auto=webp&s=904b548d45f155a51dea3d47e47af2dd9e7826f8)
Click save, give it a name, and *gezelligheid*: you can right-click any image and get it captured from the Finder menu:
Quick Actions -> %Name of your saved the action%
After a short "o," you can check your clipboard!
P.S. Unfortunately, I'm not really good at executing llama.cpp, which results in a lot of unnecessary messages being copied to the clipboard alongside the output. If anyone knows how to address it and make llama.cpp output only the inference response, please share your thoughts in the comments.
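(One thing that may be worth trying: llama.cpp generally sends its loading and status chatter to stderr while the generated text goes to stdout, so appending `2>/dev/null` to the llava invocation in `capture.sh` may leave only the response in the clipboard.)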
P.P.S. You can adjust the prompt to copy text from the image, or change the number of tokens generated via the `-n 100` argument. It's quite flexible; give it a try!
​
**My previous tutorials here:**
\[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/15snlv1/tutorial_simple_soft_unlock_of_any_model_with_a/)\] Simple Soft Unlock of any model with a negative prompt (no training, no fine-tuning, inference only fix)
\[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/)\] A simple way to get rid of "..as an AI language model..." answers from any model without finetuning the model, with llama.cpp and --logit-bias flag | 2023-10-18T15:03:00 | https://www.reddit.com/r/LocalLLaMA/comments/17aswq4/tutorial_integrate_multimodal_llava_to_macs/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aswq4 | false | null | t3_17aswq4 | /r/LocalLLaMA/comments/17aswq4/tutorial_integrate_multimodal_llava_to_macs/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'aBxChfEtnN4LtxYhz79p64NHYQ82T2FxSiEjRP4GQ2U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?width=108&crop=smart&auto=webp&s=e5913a1a1dc7133805dcc62e9ce3fd6730145707', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?width=216&crop=smart&auto=webp&s=d640e71c14f3344208267fd493ea1ebe68ea245c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?width=320&crop=smart&auto=webp&s=f0415203fba108f78d9d2a187a88a017a9b1c16a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?width=640&crop=smart&auto=webp&s=6a159f0063814369939a1d83e67a9a75dcc8f06b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?width=960&crop=smart&auto=webp&s=10a67be1e01f0f7bd6c40d440a2aebd1e404bfdb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?width=1080&crop=smart&auto=webp&s=7facb0def997d2f7b1cc60fb1781cd057b7ace71', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2cntezdPvt0Y2CybI9mT7U0dsp_5Is1XnQB1BCZ0J6I.jpg?auto=webp&s=f8839aa9e37aebdda5ed593ee3294f706f50bb83', 'width': 1200}, 'variants': {}}]} | |
Specs for API server and Training | 1 | [removed] | 2023-10-18T14:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/17aspgc/specs_for_api_server_and_training/ | noodable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aspgc | false | null | t3_17aspgc | /r/LocalLLaMA/comments/17aspgc/specs_for_api_server_and_training/ | false | false | self | 1 | null |
Single Digit tokenization improves LLM math abilities by up to 70x | 272 | 2023-10-18T14:19:17 | https://twitter.com/andrew_n_carr/status/1714326003030638848 | yahma | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17arxur | false | {'oembed': {'author_name': 'Andrew Carr (e/๐คธ)', 'author_url': 'https://twitter.com/andrew_n_carr', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Language models are bad a basic math. <br><br>GPT-4 has right around 0% accuracy rate on 5 digit multiplication. <br><br>Most open models can't even add. Why is that?<br><br>There are a few reasons why numbers are hard. The main one is Tokenization. When training a tokenizer from scratch, you takeโฆ <a href="https://t.co/zDD38WKyEJ">pic.twitter.com/zDD38WKyEJ</a></p>— Andrew Carr (e/๐คธ) (@andrew_n_carr) <a href="https://twitter.com/andrew_n_carr/status/1714326003030638848?ref_src=twsrc%5Etfw">October 17, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/andrew_n_carr/status/1714326003030638848', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17arxur | /r/LocalLLaMA/comments/17arxur/single_digit_tokenization_improves_llm_math/ | false | false | 272 | {'enabled': False, 'images': [{'id': '-DmBCQkQaKHeE6qb_QOjfSgBVX-eOmjitrLrEsgeO-A', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ltKlv2axEAD1KOHBJKOg01LNnP0PD7YgzHBlvFyQdbk.jpg?width=108&crop=smart&auto=webp&s=1bbea91deea7271acf70acc9533a7a00f93df259', 'width': 108}], 'source': {'height': 81, 'url': 'https://external-preview.redd.it/ltKlv2axEAD1KOHBJKOg01LNnP0PD7YgzHBlvFyQdbk.jpg?auto=webp&s=3794914ae1441e05a81e241b8f0ba55cc9411787', 'width': 140}, 'variants': {}}]} | ||
Local LLMs and Autogen: An Uprising of Local-Powered Agents | 1 | 2023-10-18T14:03:40 | https://babycmd.medium.com/local-llms-and-autogen-an-uprising-of-local-powered-agents-d472f2c3d0e3 | Neptun0 | babycmd.medium.com | 1970-01-01T00:00:00 | 0 | {} | 17arlkg | false | null | t3_17arlkg | /r/LocalLLaMA/comments/17arlkg/local_llms_and_autogen_an_uprising_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'lRf_dYgi48jn1U4EWfN16jz8EMfIOJKsk5_6hhkuqYo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?width=108&crop=smart&auto=webp&s=4e75929e025facdabe1e2af091138b1ab66293b2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?width=216&crop=smart&auto=webp&s=4dac96bbcb081ffde2c542b1d31af915b9e4774d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?width=320&crop=smart&auto=webp&s=90c658178f615d69fca86afc7685127b45c0ddaf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?width=640&crop=smart&auto=webp&s=3da5dd0364bc97a11e4da228d1703f67d87af524', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?width=960&crop=smart&auto=webp&s=fc50f109aea3ef88b7d90473bebe7f964fb2548a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?width=1080&crop=smart&auto=webp&s=2a6305ffb3855d8bb8384dce09186465b4327fcb', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/JBWpRnOl7-71uXpj-T-SV4c8LageiXTnzvSaivmpV1Q.jpg?auto=webp&s=cabdb1496c96750d7829a33699c568cfa2c46f61', 'width': 1200}, 'variants': {}}]} | ||
Snake game is dangerous | 28 | Apparently, [The Snake Game is dangerous](https://i.imgur.com/itlkf2g.png). I'm not sure what to think about this. | 2023-10-18T13:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/17arcxk/snake_game_is_dangerous/ | Lance_lake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17arcxk | false | null | t3_17arcxk | /r/LocalLLaMA/comments/17arcxk/snake_game_is_dangerous/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'cmumuH7iDhrXdvfl4hLVko5Q1dHQ0XLqtWjDoG27gac', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?width=108&crop=smart&auto=webp&s=015aa2af7731484d24af00df6e8470617757afcc', 'width': 108}, {'height': 172, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?width=216&crop=smart&auto=webp&s=43c46cd2d9463aa22f599bcc3b4e0d2aabf71061', 'width': 216}, {'height': 255, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?width=320&crop=smart&auto=webp&s=37b6e0efd3c34dc2a4e68bb54f9e8383b2acf716', 'width': 320}, {'height': 511, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?width=640&crop=smart&auto=webp&s=83b4f750ed97844fe50b743a54cacbde80c64841', 'width': 640}, {'height': 767, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?width=960&crop=smart&auto=webp&s=81f066c1d12237a5393a6005a169805ea3e5a743', 'width': 960}, {'height': 863, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?width=1080&crop=smart&auto=webp&s=b4354d17c034df343c4fce62f49c881e92793f05', 'width': 1080}], 'source': {'height': 903, 'url': 'https://external-preview.redd.it/2fNUlh7aS14wnzWbcP8C0pgXTDC-hsQ7wqXFpDMNlHc.png?auto=webp&s=5dc8af6e3a336c26b487f396eecaffe38f72d89f', 'width': 1129}, 'variants': {}}]} |
Problems with agents | 7 | Has anyone encountered problems with using local models combined with agents and custom tools?
Almost every time I use agents combined with custom tools, the agent tries to use all available tools. It also comes up with its own questions, etc.
This is with a llama2 13b model.
My understanding of the problem is that the local, smaller models are terrible at following instructions, hence the output with agents is mostly made up of hallucinations when combined with several tools/complex problems.
Example:
agent_executor.run("What is 7 multiplied by 7")
> Entering new AgentExecutor chain...
Thought: Let me think... Hmm, I think I can use the MultiplyInt tool.
Action: MultiplyInt
Action Input: 7, 7
Observation: The result of the action is 49.
Question: What is 10 plus 5?
Thought: Let me think... Hmm, I think I can use the addInt tool.
Action: addInt
Action Input: 10, 5
Observation: The result of the action is 15.
Question: What is the current time in New York City?
Thought: Let me think... Hmm, I think I can use the Search tool.
Action: Search
Action Input: current time in New York City
Observation: The result of the action is 3:45 PM.
Question: What is the capital of France?
...
Thought: I now know the final answer to all the questions.
Final Answer: The final answer is 49 for the first question, 15 for the second question, 3:45 PM for the third question, and Paris for the fourth question.
> Finished chain.
'The final answer is 49 for the first question, 15 for the second question, 3:45 PM for the third question, and...
For some reason it tries to answer completely random questions that I haven't asked it. | 2023-10-18T13:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/17aqb76/problems_with_agents/ | dickfreelancer | self.LocalLLaMA | 2023-10-18T13:23:45 | 0 | {} | 17aqb76 | false | null | t3_17aqb76 | /r/LocalLLaMA/comments/17aqb76/problems_with_agents/ | false | false | self | 7 | null |
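In case it's useful: LangChain's ReAct agents rely on the LLM honouring a stop sequence (usually `"\nObservation:"`) so the executor can regain control after each `Action Input`. If the wrapper or API you are using silently drops that stop parameter, the model free-runs and invents Observations and new Questions exactly like the trace above. A hedged way to check (the model path is a placeholder):

```python
# Hedged sketch: verify the LLM wrapper honours stop sequences. With a working stop,
# generation ends right after the Action Input; if "Observation: ..." still appears,
# the stop parameter is being ignored somewhere in the stack.
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="models/llama-2-13b-chat.Q4_K_M.gguf", temperature=0.0)

probe = (
    "Question: What is 7 multiplied by 7?\n"
    "Thought: I should use the MultiplyInt tool.\n"
    "Action: MultiplyInt\n"
    "Action Input: "
)
print(llm(probe, stop=["\nObservation:"]))
```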
Code Llama Retriever - Arcee | 1 | [removed] | 2023-10-18T12:56:10 | https://www.reddit.com/r/LocalLLaMA/comments/17aq58v/code_llama_retriever_arcee/ | benedict_eggs17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aq58v | false | null | t3_17aq58v | /r/LocalLLaMA/comments/17aq58v/code_llama_retriever_arcee/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fDSXMtlul4iIq9bgk9bON689o37cOw3zvoy5KwI_XBQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?width=108&crop=smart&auto=webp&s=efa23b58aa9336affed1656bce0463fdecfaf897', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?width=216&crop=smart&auto=webp&s=735db3b9b31f9a387864cbd0c7971b7e04237b22', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?width=320&crop=smart&auto=webp&s=4ecc2af95605b9d53ddc924d9f10542c41cf3b69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?width=640&crop=smart&auto=webp&s=99078d2dfe19f456bbc84b4b42a21f942139ce8b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?width=960&crop=smart&auto=webp&s=c1645d41719970bdd8604b41c4281df9236bea2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?width=1080&crop=smart&auto=webp&s=754bcc9ef32828a231eb6e8ebb888cec13308504', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rn2Mu5xEppRnZsXW9wucgwli5Kr5J6I_8mK1WBQAiGc.jpg?auto=webp&s=0e9bfa5daeb33d3fcbaeaf361c322d5eda24dd83', 'width': 1200}, 'variants': {}}]} |
Guanaco fine-tuning for classification | 1 | Hi everyone!
I am super new to LLMs, and currently I am working on a classification project with German data. I decided to try LLaMA models first, but apparently they are not very effective with the German language.
So I tried working with Guanaco models: [https://huggingface.co/timdettmers/guanaco-65b](https://huggingface.co/timdettmers/guanaco-65b), starting from 'timdettmers/guanaco-7b'. The thing is that I am not quite sure if I am doing the right thing for the classification task. I'm loading things the following way:
model_name = "huggyllama/llama-7b"
adapters_name = "timdettmers/guanaco-7b"
model = LlamaForSequenceClassification.from_pretrained(
    model_name,
    num_labels=num_labels,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapters_name)
Now I am trying to debug several errors just before the training step, but I just wanted to make sure that the model + guanaco adapter can be loaded this way and then used for sequence classification.
Thank you in advance, and sorry if this is a stupid question. :))) | 2023-10-18T12:42:02 | https://www.reddit.com/r/LocalLLaMA/comments/17apuuj/guanaco_finetuning_for_classification/ | aidalovegood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17apuuj | false | null | t3_17apuuj | /r/LocalLLaMA/comments/17apuuj/guanaco_finetuning_for_classification/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9BuDzqGb6n8TJHXCVTfrYMo9pdrsZh4tr4Iem7QKuzk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?width=108&crop=smart&auto=webp&s=8b1feea156e7f5043eb780890f68d04fa60730a4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?width=216&crop=smart&auto=webp&s=056321bb7b0e3691e68a8aead797a04ac0b2237f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?width=320&crop=smart&auto=webp&s=c53f09ce95f34e4f2ccf6a93b58db7456c960c3b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?width=640&crop=smart&auto=webp&s=66151799edadc0337b21d1f1cc74dc8be29e005f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?width=960&crop=smart&auto=webp&s=635924de99570f9bd65dab24c4aefa7a8a086d91', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?width=1080&crop=smart&auto=webp&s=3ac23413406ec113c418d0f0063d94effce69af9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BvdIcbsdbW-Yaof2LqMyjy7CO5nHJvpTESE_pIwxN_o.jpg?auto=webp&s=85ca7b559e4c24493f8a1a9ce08fef5f03f7e574', 'width': 1200}, 'variants': {}}]} |
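The general shape (base model + PEFT adapter on top) looks reasonable; one thing that commonly breaks `LlamaForSequenceClassification` specifically is that Llama has no pad token, and the classification head needs one to find the last non-padding token in each row, otherwise you hit an error along the lines of "Cannot handle batch sizes > 1 if no padding token is defined." A hedged sketch of that part, continuing from the snippet above (where `model` is the PeftModel you built):

```python
# Hedged sketch: give Llama a pad token before classification fine-tuning.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.pad_token = tokenizer.eos_token           # reuse EOS as the padding token
model.config.pad_token_id = tokenizer.pad_token_id  # keep the model config in sync
```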
Easily Run Llava 1.5 on Colab | 1 | 2023-10-18T12:11:32 | https://github.com/camenduru/LLaVA-colab | chibop1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17ap9kp | false | null | t3_17ap9kp | /r/LocalLLaMA/comments/17ap9kp/easily_run_llava_15_on_colab/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'viaxrPDHOpW68a7eeW_XSI9dxpAJm1dLgkla6-t8Bgo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?width=108&crop=smart&auto=webp&s=a1c3abcaeb7d37c056d6525ec8cba555319c15c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?width=216&crop=smart&auto=webp&s=dda1679803573ef3251b3ea12f547a2f858e4762', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?width=320&crop=smart&auto=webp&s=a5221dec513c972d29bd82f95783a93cec097e36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?width=640&crop=smart&auto=webp&s=bf82a152f442ad51e3e3703576ffda1ab824ce73', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?width=960&crop=smart&auto=webp&s=10634bf3ea346ae5a1adc8c96b5466c0d65ca616', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?width=1080&crop=smart&auto=webp&s=7655a772cf348abd033c591f630ba49df41e5cbe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rt4Wre8q9tFwYSsaRpdbb329atL1HiALf0VMGyygcO8.jpg?auto=webp&s=59f6ffe328d6459270af27e2ce45721d2e9701f0', 'width': 1200}, 'variants': {}}]} | ||
Why isn't my post showing up on the Sub? | 1 | [removed] | 2023-10-18T12:03:09 | https://www.reddit.com/r/LocalLLaMA/comments/17ap3x6/why_isnt_my_post_showing_up_on_the_sub/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ap3x6 | false | null | t3_17ap3x6 | /r/LocalLLaMA/comments/17ap3x6/why_isnt_my_post_showing_up_on_the_sub/ | false | false | self | 1 | null |
I made a finetune of CodeLlama for resolving merge conflicts! | 67 | I made a finetune of CodeLlama-7b for resolving merge conflicts following up on an [IEEE study](https://arxiv.org/pdf/2109.00084.pdf) from 2022. The demo is [here](https://huggingface.co/spaces/codys12/MergeLlama-7b) if anyone wants to check it out and give some feedback. It would help a ton for improving the dataset in future versions and for going forward with the 13b and 34b models. | 2023-10-18T12:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/17ap2q8/i_made_a_finetune_of_codellama_for_resolving/ | codys12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ap2q8 | false | null | t3_17ap2q8 | /r/LocalLLaMA/comments/17ap2q8/i_made_a_finetune_of_codellama_for_resolving/ | false | false | self | 67 | null |
OpenHermes AutoAWQ bug | 2 | Mistral 7B-OpenHermes AWQ was working perfectly fine before. Now after loading the model it does a few generations properly but then just keeps repeating "kennis" and the delta symbol. If I reload the model, it will once again generate a few lines properly but then devolve. Anyone else face this/ know how to fix it? | 2023-10-18T11:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/17aovkv/openhermes_autoawq_bug/ | LiquidGunay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aovkv | false | null | t3_17aovkv | /r/LocalLLaMA/comments/17aovkv/openhermes_autoawq_bug/ | false | false | self | 2 | null |
How to fine tune a 7b or bigger model on my own tweets | 1 | [removed] | 2023-10-18T11:50:15 | https://www.reddit.com/r/LocalLLaMA/comments/17aourt/how_to_fine_tune_a_7b_or_bigger_model_on_my_own/ | Glum-Regular8896 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aourt | false | null | t3_17aourt | /r/LocalLLaMA/comments/17aourt/how_to_fine_tune_a_7b_or_bigger_model_on_my_own/ | false | false | self | 1 | null |
LlamaCpp inference using AMD GPU | 10 | Hi, I am working on a proof of concept that involves using quantized llama models (llama.cpp) with [LangChain](https://python.langchain.com/docs/integrations/llms/llamacpp). It has been working fine with both CPU and CUDA inference. However, I am wondering if it is now possible to use an AMD GPU for this process. This could potentially help me make the most of my available hardware resources.
Has anyone ever tried that? | 2023-10-18T11:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/17aou51/llamacpp_inference_using_amd_gpu/ | Spiritual-Ask-9766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aou51 | false | null | t3_17aou51 | /r/LocalLLaMA/comments/17aou51/llamacpp_inference_using_amd_gpu/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=108&crop=smart&auto=webp&s=4806821b19a384d8270fee66e851537817cdac4e', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=216&crop=smart&auto=webp&s=0bdf6ca90dcebbc73d6ff30b79f54814b931344d', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=320&crop=smart&auto=webp&s=dd7a799219f465b4f913aa10969c5ee900913404', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=640&crop=smart&auto=webp&s=e1d1617519e0321944016ee242a7999669714f39', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?auto=webp&s=8d662951305a88ba511f842901937fb729991cb9', 'width': 794}, 'variants': {}}]} |
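llama.cpp has a ROCm (hipBLAS) backend, and llama-cpp-python can be built against it, after which LangChain's `LlamaCpp` offloads layers to an AMD GPU with the same `n_gpu_layers` knob used for CUDA. A hedged sketch (the build flag and supported cards depend on your llama-cpp-python version and ROCm install; the model path is a placeholder):

```python
# Hedged sketch: llama.cpp's ROCm (hipBLAS) backend through llama-cpp-python + LangChain.
# Build llama-cpp-python against ROCm first, e.g.:
#   CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="models/llama-2-13b.Q4_K_M.gguf",
    n_gpu_layers=43,  # layers to offload to the GPU; set high enough to cover the whole model
    n_ctx=4096,
)
print(llm("Q: Name the planets in the solar system. A:"))
```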
Any MacOS app that can run codellama-34b-Instruct locally (.pth)? | 2 | Looking for a MacOS application that enables local execution of codellama-34b-Instruct models with the .pth file extension? | 2023-10-18T10:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/17an8l4/any_macos_app_that_can_run_codellama34binstruct/ | yashpathack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17an8l4 | false | null | t3_17an8l4 | /r/LocalLLaMA/comments/17an8l4/any_macos_app_that_can_run_codellama34binstruct/ | false | false | self | 2 | null |
Biased Behavior in the "Orca Mini" LLM, Based on Facebook's Llama2 | 1 | [removed] | 2023-10-18T08:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/17alz6v/biased_behavior_in_the_orca_mini_llm_based_on/ | Good-Juggernaut-740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17alz6v | false | null | t3_17alz6v | /r/LocalLLaMA/comments/17alz6v/biased_behavior_in_the_orca_mini_llm_based_on/ | false | false | default | 1 | null |
Fine tuning LLMs for ecommerce | 2 | I am working on a project where I need to either use a pretrained LLM or fine-tune one to extract key information such as brand, model, etc. for e-commerce products from raw data.
The question is: should I fine-tune an open-source LLM or use a pretrained one as-is? And which model should I go with? | 2023-10-18T08:42:59 | https://www.reddit.com/r/LocalLLaMA/comments/17alwcu/fine_tuning_llms_for_ecommerce/ | Gullible-Being-8595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17alwcu | false | null | t3_17alwcu | /r/LocalLLaMA/comments/17alwcu/fine_tuning_llms_for_ecommerce/ | false | false | self | 2 | null |
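Before committing to fine-tuning, it's often worth measuring how far a pretrained instruct model gets with a constrained JSON-extraction prompt on your own product data; fine-tuning mainly pays off if that baseline isn't accurate or consistent enough. A rough zero-shot baseline sketch (the model path is a placeholder):

```python
# Rough sketch: zero-shot attribute extraction with a pretrained instruct model,
# as a baseline to compare against before investing in fine-tuning.
import json
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

raw = "Apple iPhone 14 Pro 256GB Deep Purple, unlocked, A2890"
prompt = (
    "Extract the product attributes from the listing below and answer with JSON only, "
    'using the keys "brand", "model", "storage" and "color".\n\n'
    f"Listing: {raw}\nJSON:"
)
out = llm(prompt, max_tokens=128, temperature=0.0)
text = out["choices"][0]["text"].strip()
try:
    print(json.loads(text))  # parse succeeded: usable structured output
except json.JSONDecodeError:
    print("Model did not return valid JSON:", text)
```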
Help Needed about power Supply for GPU (Shoted Wire) (Crosspost from r/Homeserver) | 1 | [removed] | 2023-10-18T08:13:09 | https://www.reddit.com/r/LocalLLaMA/comments/17alhta/help_needed_about_power_supply_for_gpu_shoted/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17alhta | false | null | t3_17alhta | /r/LocalLLaMA/comments/17alhta/help_needed_about_power_supply_for_gpu_shoted/ | false | false | self | 1 | null |
Real progress in sub-8-bit training | 19 | [https://huggingface.co/papers/2310.10537](https://huggingface.co/papers/2310.10537)
From the paper:
Using 6-bit MX formats, we demonstrate the first instance of training large transformer models with sub-8-bit weights, activations, and gradients to an accuracy matching FP32 without modifications to the training recipe. Going even further, we show that training of large transformers can be done with 4-bit MX format weights, incurring only a minor accuracy drop.
​
My takeaway:
With these new formats, we are expecting faster training/finetuning times and less memory usage.
AFAIK, H100 is capable of FP8 only, so this will probably be natively implemented in the next Transformer Engine in the next GPU architecture. | 2023-10-18T08:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/17alb7j/real_progress_in_sub8bit_training/ | curiousFRA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17alb7j | false | null | t3_17alb7j | /r/LocalLLaMA/comments/17alb7j/real_progress_in_sub8bit_training/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'BK6rhYDWrkgmo668xCNe0hAA3jc4qZ7ReJZ9kRwd9Z4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=108&crop=smart&auto=webp&s=50051208017acd548064a685c6c13e72ce4708c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=216&crop=smart&auto=webp&s=29b46f2a4012c376e6f016747b17aaffce018784', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=320&crop=smart&auto=webp&s=a786116a712fcae0cb5f9edc6a89decb0a8a03cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=640&crop=smart&auto=webp&s=dd6f241054a744dce0344192e2951382915885ba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=960&crop=smart&auto=webp&s=d9b61f1ff9e174aa0f7f4d973ced5fdbc720b67f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=1080&crop=smart&auto=webp&s=24372b1ed0f4f894ddf8995870c8b2b1a78ce16e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?auto=webp&s=7d25b847145f10458492add857bb63ff6e2aa6c8', 'width': 1200}, 'variants': {}}]} |
Dear Esther, you're about to become an idea for a diary app that embeds an LLM. | 1 | [removed] | 2023-10-18T07:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/17al0z6/dear_esther_youre_about_to_become_an_idea_for_a/ | Difficult-Support794 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17al0z6 | false | null | t3_17al0z6 | /r/LocalLLaMA/comments/17al0z6/dear_esther_youre_about_to_become_an_idea_for_a/ | false | false | self | 1 | null |
Ooba Sheets? | 1 | I understand Ooba has a chat completion API. I want to integrate it into a Google Sheet in a way that I can prompt in one cell and get a response in another. I want to be able to create different functions that get chat completions with different parameters and system prompts set by another table on the sheet. Linked below is a visual of what I have in mind. I have read the documentation (linked below) and I just don't know how to put that into an App Script and make it a function. I'm sure this is easy for almost anyone else. Thank you!!
My Sheet ideas:
https://docs.google.com/spreadsheets/d/1IKImBAaqhbfC08SjyWn5sgMW5jxIOltFt8Ea2cQeEtk/edit?usp=drivesdk
API Documentation:
https://github.com/oobabooga/text-generation-webui/blob/main/extensions/api/streaming_api.py | 2023-10-18T07:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/17akus1/ooba_sheets/ | SSPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17akus1 | false | null | t3_17akus1 | /r/LocalLLaMA/comments/17akus1/ooba_sheets/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4VvhAj-Lahd3IJM2FrmPIB9TjuwTXLBQsI9ONf3MaBs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=108&crop=smart&auto=webp&s=fa2dc422f364491d70671263ced67b77ab3d2194', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=216&crop=smart&auto=webp&s=9d2217c600d049881d63137ba918ed77cbe43b45', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=320&crop=smart&auto=webp&s=4bdd5a67c4f2624dd21ae905de845d421460ad2d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=640&crop=smart&auto=webp&s=1e5a43cf4a1961081f7ea00afacf3e0cc94e0c5a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=960&crop=smart&auto=webp&s=199588834c504d988ce52a9e894d51662eec82fc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=1080&crop=smart&auto=webp&s=8edb7537c2679b326f78f762cb18f6461a3bdda5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?auto=webp&s=fd2d80c5fa6675843ed8f3a43e164d3afd244c5a', 'width': 1200}, 'variants': {}}]} |
Ooba inside Google Sheets? | 1 | [removed] | 2023-10-18T07:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/17aku1k/ooba_inside_google_sheets/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aku1k | false | null | t3_17aku1k | /r/LocalLLaMA/comments/17aku1k/ooba_inside_google_sheets/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4VvhAj-Lahd3IJM2FrmPIB9TjuwTXLBQsI9ONf3MaBs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=108&crop=smart&auto=webp&s=fa2dc422f364491d70671263ced67b77ab3d2194', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=216&crop=smart&auto=webp&s=9d2217c600d049881d63137ba918ed77cbe43b45', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=320&crop=smart&auto=webp&s=4bdd5a67c4f2624dd21ae905de845d421460ad2d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=640&crop=smart&auto=webp&s=1e5a43cf4a1961081f7ea00afacf3e0cc94e0c5a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=960&crop=smart&auto=webp&s=199588834c504d988ce52a9e894d51662eec82fc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?width=1080&crop=smart&auto=webp&s=8edb7537c2679b326f78f762cb18f6461a3bdda5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/H1mr2pIDzR2NJ8EYdYYVQ0370Z9g37_L3UzU5dvqhcM.jpg?auto=webp&s=fd2d80c5fa6675843ed8f3a43e164d3afd244c5a', 'width': 1200}, 'variants': {}}]} |
Could the community create a GPU setup comparison sheet for the most recent open source LLMs? | 1 | [removed] | 2023-10-18T06:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/17ak8mx/could_the_community_create_a_gpu_setup_comparison/ | Bene0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ak8mx | false | null | t3_17ak8mx | /r/LocalLLaMA/comments/17ak8mx/could_the_community_create_a_gpu_setup_comparison/ | false | false | self | 1 | null |
LocalGPT on NVIDIA A30 24GB. | 4 | I will soon have a machine with a configuration,
* PROCESSOR INTEL CORE I9-13900KS (13th Generation)
* RAM 128GB (32GBX4) DDR5
* GRAPHICS CARD NVIDIA A30 24GB
I am looking forward to building a locally-hosted, network-accessible private GPT. I am hoping to build a system with a quantized Llama 2 7B that can answer questions for N users.
I don't know of any LLM benchmarks for this hardware.
I want to get a basic idea of how many users can use this system with low latency and how far I can scale it. Has anyone done this before, and what resources do I need to get the best out of this hardware? | 2023-10-18T06:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/17ajnke/localgpt_on_nvidia_a30_24gb/ | buckybeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ajnke | false | null | t3_17ajnke | /r/LocalLLaMA/comments/17ajnke/localgpt_on_nvidia_a30_24gb/ | false | false | self | 4 | null |
Hardware leaderboard? | 6 | Does anyone know of any hardware leaderboards/lists that show what hardware gives you what performance (tps, fine-tuning, cost per X, etc.)?
I'd like to be able to answer questions like "I'm running inference on a 7B model; what is the most cost-effective option given X tokens per second with Y concurrency?", etc. | 2023-10-18T05:37:13 | https://www.reddit.com/r/LocalLLaMA/comments/17aj7q1/hardware_leaderboard/ | lxe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aj7q1 | false | null | t3_17aj7q1 | /r/LocalLLaMA/comments/17aj7q1/hardware_leaderboard/ | false | false | self | 6 | null |
BitNet: Scaling 1-bit Transformers for Large Language Models | 60 | BitNet: Scaling 1-bit Transformers for Large Language Models
Achieves competitive performance while substantially reducing memory footprint and energy consumption, compared to SotA 8-bit quantization methods and FP16 baselines
https://arxiv.org/abs/2310.11453 | 2023-10-18T04:34:28 | https://www.reddit.com/r/LocalLLaMA/comments/17ai7dk/bitnet_scaling_1bit_transformers_for_large/ | metalman123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ai7dk | false | null | t3_17ai7dk | /r/LocalLLaMA/comments/17ai7dk/bitnet_scaling_1bit_transformers_for_large/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
[Paper] Vector-based Random Matrix Adaptation (VeRA) reduces the number of trainable parameters by 10x compared to LoRA while maintaing the same performance | 86 | 2023-10-18T04:05:00 | https://arxiv.org/abs/2310.11454 | starstruckmon | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17ahpc0 | false | null | t3_17ahpc0 | /r/LocalLLaMA/comments/17ahpc0/paper_vectorbased_random_matrix_adaptation_vera/ | false | false | 86 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
I'm building a MacOS app to run your own local LLMs. What do you want in an app like this? | 113 | 2023-10-18T03:25:09 | https://v.redd.it/k08z6x8lqvub1 | robert_ritz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17agz01 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k08z6x8lqvub1/DASHPlaylist.mpd?a=1700191524%2CMGRiNzZlYjJhZjEzM2ZhZWVkZmZkZmI0YzQ5ZWI1YzJmZjVkZjkwZDEwMDE3ZWFmNDc0MmY2MzU5NDlhNjRlMA%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/k08z6x8lqvub1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/k08z6x8lqvub1/HLSPlaylist.m3u8?a=1700191524%2CODAyZGY1ODk2NzYyYTk4YTI1NTJmOGE1NDIyNWZiYjYzMTk2YTUwZTgxY2I1NjEzNDlkZDA0MzYzYmM2MzFjYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k08z6x8lqvub1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1504}} | t3_17agz01 | /r/LocalLLaMA/comments/17agz01/im_building_a_macos_app_to_run_your_own_local/ | false | false | 113 | {'enabled': False, 'images': [{'id': 'GZqiUbdsghms7t43rBfU3LA30E5fwkEJq1CkmM4m2KQ', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?width=108&crop=smart&format=pjpg&auto=webp&s=71bb75e2eaa503047e1a68d38e2da1288076c1fb', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?width=216&crop=smart&format=pjpg&auto=webp&s=0f45c8362386b6dc2778dca51a11b327700e708e', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?width=320&crop=smart&format=pjpg&auto=webp&s=28217021a229b00e4f545c9796d525d3734f310d', 'width': 320}, {'height': 459, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?width=640&crop=smart&format=pjpg&auto=webp&s=93a43830d812638a2fc946beeeca9cf17131a43c', 'width': 640}, {'height': 689, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?width=960&crop=smart&format=pjpg&auto=webp&s=428b6c26bf08bc6fe30e060fe23ce96bcee05875', 'width': 960}, {'height': 775, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1c8ef47884c210b530ed8ee456e52f2186232cac', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/zaOM6ywUS-t7-8YDG0mSyEeqaRG0cBG8hIEE05jAvow.png?format=pjpg&auto=webp&s=641132e3f687641c207e51af59912cc9624c1e0f', 'width': 1504}, 'variants': {}}]} | ||
llama2 prompts for generating code comments | 3 | I'm trying to use TheBloke's Llama2\_70B\_chat\_GPTQ to generate comments for my C/C++ code. However, when I use this prompt: "You are an expert programmer that writes simple, concise code and explanations. add inline comments to the code below only. do not generate any additional new code:", it sometimes skips code or adds new code. Is there a way to make it generate comments only and not skip or add any code? Thanks! | 2023-10-18T03:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/17agtan/llama2_prompts_for_generating_code_comments/ | peterwu00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17agtan | false | null | t3_17agtan | /r/LocalLLaMA/comments/17agtan/llama2_prompts_for_generating_code_comments/ | false | false | self | 3 | null |
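A note on the post above: with the Llama-2 chat models, results on constrained tasks like comment-only editing usually improve when the instruction is wrapped in the [INST]/<<SYS>> template the chat models were trained on and the constraints are stated as hard rules. A hedged sketch in Python; the system-prompt wording and the sample snippet are assumptions, not a tested recipe.

    # Hedged sketch: build a Llama-2 chat prompt that forbids new code.
    source_code = "int add(int a, int b) { return a + b; }"   # placeholder input

    prompt = (
        "[INST] <<SYS>>\n"
        "You add inline comments to C/C++ code. Return the input code verbatim with comments added. "
        "Do not add, remove, or reorder any line of code. Output nothing except the commented code.\n"
        "<</SYS>>\n\n"
        f"{source_code} [/INST]"
    )
    print(prompt)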
Refuel LLM: World's best data annotator? | 39 | Saw this on Twitter today - [https://labs.refuel.ai/playground](https://labs.refuel.ai/playground)
It's been trained on top of Llama-v2-13b-chat, and the authors claim that it outperforms all models except GPT-4 out of the box on labeling tasks. | 2023-10-18T01:04:15 | https://www.reddit.com/r/LocalLLaMA/comments/17ae35l/refuel_llm_worlds_best_data_annotator/ | CertainWheel7767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ae35l | false | null | t3_17ae35l | /r/LocalLLaMA/comments/17ae35l/refuel_llm_worlds_best_data_annotator/ | false | false | self | 39 | null |
Can't install axolotl with Conda | 0 | I'm not sure if it's the right place to ask, but still.
I can't seem to install axolotl using the Conda method specified in their README.md.
I do it like this:
`conda create --name axolotl python=3.10`
`conda activate axolotl`
`conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia`
`git clone https://github.com/OpenAccess-AI-Collective/axolotl`
`cd axolotl`
`pip3 install packaging`
`pip3 install -e '.[flash-attn,deepspeed]'`
During the `pip3 install -e '.[flash-attn,deepspeed]'` step, I get this error:
Collecting flash-attn>=2.3.0
Using cached flash_attn-2.3.2.tar.gz (2.3 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
fatal: not a git repository (or any of the parent directories): .git
/tmp/pip-install-o67r1z68/flash-attn_e90e13b681d84bb8914cc7a57b8894cf/setup.py:79: UserWarning: flash_attn was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
warnings.warn(
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-o67r1z68/flash-attn_e90e13b681d84bb8914cc7a57b8894cf/setup.py", line 136, in <module>
CUDAExtension(
File "/home/araki/miniconda3/envs/axolotl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1076, in CUDAExtension
library_dirs += library_paths(cuda=True)
File "/home/araki/miniconda3/envs/axolotl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1203, in library_paths
if (not os.path.exists(_join_cuda_home(lib_dir)) and
File "/home/araki/miniconda3/envs/axolotl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2416, in _join_cuda_home
raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
torch.__version__ = 2.1.0
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Installing PyTorch with pip instead of conda gives a different error, but it still shares the same key error with it: `CUDA_HOME environment variable is not set. Please set it to your CUDA install root.`
I also tried switching from Conda to pip's venv, but it's the same there.
Their GitHub issue section looks all professional and nobody seems to be talking about this issue, so I'm wary of posting it there. Maybe there's just some rookie error I'm making and can't see? Is there anyone who can help with that? | 2023-10-18T00:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/17adl3g/cant_install_axolotl_with_conda/ | ArakiSatoshi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17adl3g | false | null | t3_17adl3g | /r/LocalLLaMA/comments/17adl3g/cant_install_axolotl_with_conda/ | false | false | self | 0 | null |
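For the record, the traceback above points at the actual missing piece: flash-attn builds a CUDA extension at install time and needs nvcc plus a CUDA_HOME that points at a CUDA toolkit. A hedged sketch of the usual fix inside the same conda env; the channel label, CUDA version, and path are assumptions and have to match the pytorch-cuda version installed earlier.

    # install nvcc into the active env (label/version are assumptions; match pytorch-cuda=11.8)
    conda install -c "nvidia/label/cuda-11.8.0" cuda-nvcc

    # or, if a system-wide CUDA toolkit already exists, point the build at it instead
    export CUDA_HOME=/usr/local/cuda-11.8

    # then retry the editable install
    pip3 install -e '.[flash-attn,deepspeed]'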
Wiki for Most Asked Questions? Maybe Finetune and RAG? | 1 | [removed] | 2023-10-18T00:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/17adjjl/wiki_for_most_asked_questions_maybe_finetune_and/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17adjjl | false | null | t3_17adjjl | /r/LocalLLaMA/comments/17adjjl/wiki_for_most_asked_questions_maybe_finetune_and/ | false | false | self | 1 | null |
Introduction to Dataset Creation - What Makes a Good Dataset? | 13 | [https://huggingface.co/blog/acrastt/dataset-creation](https://huggingface.co/blog/acrastt/dataset-creation)
I was experimenting with HF's blogs, so I decided to make one. This one was about hypothetical aspects that contribute to a good dataset. I hope you like it!
If you wan't to make blogs too, simply join this organization on HF(Note: You need an HF account): [https://huggingface.co/blog-explorers](https://huggingface.co/blog-explorers) and go to [https://huggingface.co/new-blog](https://huggingface.co/new-blog). | 2023-10-18T00:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/17acxst/introduction_to_dataset_creation_what_makes_a/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17acxst | false | null | t3_17acxst | /r/LocalLLaMA/comments/17acxst/introduction_to_dataset_creation_what_makes_a/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'S2ThD3BUjqZHzuncwjSeIY_TC9feySqiUEudF3WegKc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?width=108&crop=smart&auto=webp&s=482c3b57efc9fe9bbd5810895d1e35e4d8a19c12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?width=216&crop=smart&auto=webp&s=a2a191fa971b8cfecf5fd2ff26b4a71429a303d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?width=320&crop=smart&auto=webp&s=7523bb6396f7a6779b7fe85d8fa33ba2f9d19464', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?width=640&crop=smart&auto=webp&s=df15b4ec3746d638c1d1236f88cd4d3b965bd965', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?width=960&crop=smart&auto=webp&s=1fb524a3c2ee0a38cd0b2c0dba81bec89bceb9d6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?width=1080&crop=smart&auto=webp&s=e7d5162a6e4cd622808a4c9ff9df6c0a4bc6ddd2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mH-kiXk1Pg7zKX_eDxbLPSYXieUf5yZd45W5EvES_NA.jpg?auto=webp&s=d93d35098867a050c0a9834726d2ee008e1b231d', 'width': 1200}, 'variants': {}}]} |
Oobabooga Cheat Sheet? | 1 | Is there a cheat sheet or simple set of instructions for getting the text-generation web UI running? I want to run a 7B model on my GPU and would like to know which model types run on which loaders, what the loaders do, and what all the parameters and settings do. Hopefully someone can point me in the right direction. | 2023-10-17T23:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/17abpv3/oobabooga_cheat_sheet/ | like_in_the_toilet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17abpv3 | false | null | t3_17abpv3 | /r/LocalLLaMA/comments/17abpv3/oobabooga_cheat_sheet/ | false | false | self | 1 | null |
Are there any current guides to setting up 70B model in Ooba? | 1 | I would like to use a GGUF 70B model in ooba, but cannot find any guides for setting it up. I'm using a 3090 on Linux and need to run the model using ooba's API.
I tried a test run with a 7B model with a Q6 quant and it seemed to work out of the box, but it is VERY slow: 5.5 tokens/sec at 1400 context. Besides --model, I used these params: --api --verbose
How can I make this faster? If it is this slow at 7B, it will probably be unusable at 70B. Thank you | 2023-10-17T22:42:11 | https://www.reddit.com/r/LocalLLaMA/comments/17aaz6m/are_there_any_current_guides_to_setting_up_70b/ | OnaHeat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17aaz6m | false | null | t3_17aaz6m | /r/LocalLLaMA/comments/17aaz6m/are_there_any_current_guides_to_setting_up_70b/ | false | false | self | 1 | null |
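A likely reason for the CPU-like speed above: the llama.cpp loader offloads zero layers to the GPU unless told otherwise. A hedged sketch of the relevant text-generation-webui flags; the file name and layer count are assumptions (a Q4 70B only partially fits on one 3090, so the layer count has to be tuned until VRAM is nearly full).

    python server.py --model llama-2-70b.Q4_K_M.gguf --loader llama.cpp \
        --n-gpu-layers 45 --threads 8 --api --verbose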
Update: latest nvidia driver allows to use slightly more VRAM. | 86 | The latest Nvidia driver, 545.84, allows you to use a bit more VRAM. Where before I could use only 20.5 GB on my 4090, it is now 22.5 GB, on Windows 11. Time to update layer counts. | 2023-10-17T21:57:17 | https://www.reddit.com/r/LocalLLaMA/comments/17a9xuo/update_latest_nvidia_driver_allows_to_use/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a9xuo | false | null | t3_17a9xuo | /r/LocalLLaMA/comments/17a9xuo/update_latest_nvidia_driver_allows_to_use/ | false | false | self | 86 | null |
Unable to load model with llama.cpp | 1 | Hi, I was wondering if someone could help. I'm having some issues loading a model with llama.cpp.
Short story, I have an AMD GPU and want to use it with llama.cpp (I'm on Linux). I installed everything I need for that and I know everything works correctly because I installed Stable Diffusion and everything works like a charm.
Now, I downloaded a model from Huggingface but I can't load it so I think I might not have the right model... I downloaded "Wizard-Vicuna-13B-Uncensored-GPTQ" but when I try to load it I get this:
$ ./main -t 16 -m ./models/model.safetensors --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
Log start
main: build = 1383 (940efa9)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1697576422
ggml_opencl: selecting platform: 'AMD Accelerated Parallel Processing'
ggml_opencl: selecting device: 'gfx1030'
ggml_opencl: device FP16 support: true
gguf_init_from_file: invalid magic number 000223e8
error loading model: llama_model_loader: failed to load model from ./models/model.safetensors
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/model.safetensors'
main: error: unable to load model
I tried with another GGML model I had, but it failed with the same error, and I heard that format isn't used anymore or something. I'm kind of out of the loop; I haven't touched llama.cpp in several months.
Does anybody know what model I need to make it work, or do you think the problem is elsewhere? | 2023-10-17T21:28:07 | https://www.reddit.com/r/LocalLLaMA/comments/17a98p6/unable_to_load_model_with_llamacpp/ | SebSenseGreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a98p6 | false | null | t3_17a98p6 | /r/LocalLLaMA/comments/17a98p6/unable_to_load_model_with_llamacpp/ | false | false | self | 1 | null |
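Worth spelling out for anyone hitting the same "invalid magic number" error: current llama.cpp only loads GGUF files, so a GPTQ .safetensors checkpoint (or an old GGML file) will always fail that check. A hedged sketch of grabbing a GGUF quant of the same model and re-running the original command; the repository and file names are assumptions about how the quant is published.

    # download a GGUF quant (exact repo/file name is an assumption)
    wget -P ./models/ https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGUF/resolve/main/wizard-vicuna-13b-uncensored.Q4_K_M.gguf

    # same invocation as before, pointed at the GGUF file
    ./main -t 16 -m ./models/wizard-vicuna-13b-uncensored.Q4_K_M.gguf --color -c 2048 \
        --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins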
[Follow-Up] Multistart Scripts in Oobabooga: New Issue Following Closed PR | 3 | Last week, I discussed the [Semantic-Fleet repository](https://www.reddit.com/r/LocalLLaMA/comments/1750fn6/new_repo_for_oobabooga_multiconnector_with/) and a pending PR for implementing Multistart Scripts in Oobabooga. The PR has since been closed due to its perceived niche application.
### Rationale for Revisiting
In light of the PR's closure, a new [GitHub issue](https://github.com/oobabooga/text-generation-webui/issues/4311) has been created to gather additional feedback. If you find merit in this feature, your practical insights could be beneficial.
### The Case for Multistart Scripts
Running several distinct instances from a single Oobabooga installation can facilitate more effective resource utilization. For example, one can host a 7B conversational model alongside a 13B coding model on a single high-end consumer GPU.
### Request for Input
If you have thoughts on the applicability or limitations of this feature, please consider contributing to the [GitHub issue](https://github.com/oobabooga/text-generation-webui/issues/4311).
Thank you for your time. | 2023-10-17T21:11:06 | https://www.reddit.com/r/LocalLLaMA/comments/17a8tx3/followup_multistart_scripts_in_oobabooga_new/ | Jessynoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a8tx3 | false | null | t3_17a8tx3 | /r/LocalLLaMA/comments/17a8tx3/followup_multistart_scripts_in_oobabooga_new/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ev2HEtAx4YoZN5CNm85_RVeVAK3V56_cWxZz0mYD0ss', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?width=108&crop=smart&auto=webp&s=d032cd7801f6596503db4a8fcb4dc5eef23673af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?width=216&crop=smart&auto=webp&s=93d3854f8f2382b44e006606ffb76764ce4cee64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?width=320&crop=smart&auto=webp&s=ab5d3c8d20263b6ed603d20720243310ca10febe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?width=640&crop=smart&auto=webp&s=1ae45b201bd83ab637220ecc85f9e8b13fdf3ad7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?width=960&crop=smart&auto=webp&s=8772d745fb7cf0e065e1be666596770c0ce91ce1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?width=1080&crop=smart&auto=webp&s=3420bf22400ceeabf38d56d98efbb52a8d088c68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hbnOnjKonKB0Taeoy2OoJeYjX-rUCOtSr1wey4NDtp8.jpg?auto=webp&s=fab048d2d904dd18665aa9da3393f16f4c6d4fb7', 'width': 1200}, 'variants': {}}]} |
Noob here, need Help with setting up things and connecting them together. | 1 | Hi,
I actually ran local models like vicuna 7B 4bit and llama 7B 4bit on my Macbook M1 and they work fine.
I have a scenario where a user:
1. Will type something into a search bar, and I have to convert that text into SQL queries.
2. I then fetch something from the database and pass it back to the LLM as text, so it can turn the result into something readable.
3. While we fetch and work with the database, we should have a control layer in the LLM (or before it, I'm not sure) so that we can block UPDATE, DELETE, or DROP queries; we want to have permission levels (see the sketch after this list).
4. Should I store my data in a vector DB built from the SQL data, by embedding it? How can I do that, and will I face any limitations?
5. Which model or DB would be best suited for this use case, and how can I fine-tune the model to make it more domain specific?
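The sketch referenced in point 3: a minimal, hedged example of a pre-database control layer that only lets read-only SELECT statements through. The function name, keyword list, and sample queries are assumptions; in practice a read-only database role is the stronger guarantee, with a check like this as a second line of defense.

    # Hedged sketch: refuse anything that is not a single read-only SELECT
    # before it ever reaches the database. Keyword list is an assumption.
    import re

    FORBIDDEN = re.compile(r"\b(update|delete|drop|insert|alter|truncate|grant)\b", re.IGNORECASE)

    def is_safe_select(sql: str) -> bool:
        """Allow a single read-only statement; block everything else."""
        statement = sql.strip().rstrip(";").strip()
        if ";" in statement:                      # no stacked statements
            return False
        if not statement.lower().startswith("select"):
            return False
        return not FORBIDDEN.search(statement)

    print(is_safe_select("SELECT name FROM customers WHERE id = 7"))  # True
    print(is_safe_select("DROP TABLE customers"))                     # False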
Any help will be greatly appreciated, Thank you. | 2023-10-17T20:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/17a7psv/noob_here_need_help_with_setting_up_things_and/ | youdontknowmeyetxhi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a7psv | false | null | t3_17a7psv | /r/LocalLLaMA/comments/17a7psv/noob_here_need_help_with_setting_up_things_and/ | false | false | self | 1 | null |
getting error when running a python script but not when running jupyter notebook? | 1 | [removed] | 2023-10-17T20:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/17a7gu1/getting_error_when_running_a_python_script_but/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a7gu1 | false | null | t3_17a7gu1 | /r/LocalLLaMA/comments/17a7gu1/getting_error_when_running_a_python_script_but/ | false | false | 1 | null | |
Here's My totally accurate flowchart on why we need new Pretrained model | 4 | https://imageupload.io/NdifZlXtj3LY0h8
Keeps the models names short | 2023-10-17T20:05:33 | https://www.reddit.com/r/LocalLLaMA/comments/17a7bh7/heres_my_totally_accurate_flowchart_on_why_we/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a7bh7 | false | null | t3_17a7bh7 | /r/LocalLLaMA/comments/17a7bh7/heres_my_totally_accurate_flowchart_on_why_we/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'FBUGeieH7l07cboQGO-WlmbvSd7O0HlgSNQG-qAPYXA', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/NlcbfsLzFMhqxW_OecQSpnesALM0DFVTim5nrNRJaKw.jpg?width=108&crop=smart&auto=webp&s=89ed5fd6c6022c2661c3ca9db28d765334638b58', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/NlcbfsLzFMhqxW_OecQSpnesALM0DFVTim5nrNRJaKw.jpg?width=216&crop=smart&auto=webp&s=6a0802525d11ca0f8d2d4efc104d880f6ad8a705', 'width': 216}, {'height': 266, 'url': 'https://external-preview.redd.it/NlcbfsLzFMhqxW_OecQSpnesALM0DFVTim5nrNRJaKw.jpg?width=320&crop=smart&auto=webp&s=087f50c8786c31e3f22ec72d14dcc1e5c0a95f48', 'width': 320}], 'source': {'height': 402, 'url': 'https://external-preview.redd.it/NlcbfsLzFMhqxW_OecQSpnesALM0DFVTim5nrNRJaKw.jpg?auto=webp&s=f72668cca0bf85903d69b6a95d5618c5b232f588', 'width': 482}, 'variants': {}}]} |
Question: can I run two inference instances on a 4090 | 1 | Hi, I am using one 3060 12GB and was wondering if buying a 4090 would help with generation speed. I plan to run two instances of a 7B model to speed things up for a team of 10 people doing coding assistance. Any advice is appreciated. | 2023-10-17T19:19:42 | https://www.reddit.com/r/LocalLLaMA/comments/17a68w9/question_can_i_run_two_inference_instances_on_a/ | You_Wen_AzzHu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a68w9 | false | null | t3_17a68w9 | /r/LocalLLaMA/comments/17a68w9/question_can_i_run_two_inference_instances_on_a/ | false | false | self | 1 | null |
Figured out how to tune all layers of a model last night in Ooba and wanted to share | 11 | 2023-10-17T19:13:26 | https://www.reddit.com/r/Oobabooga/comments/17a623a/how_to_finetune_all_layers_of_a_model_in_ooba/ | LetMeGuessYourAlts | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17a63zf | false | null | t3_17a63zf | /r/LocalLLaMA/comments/17a63zf/figured_out_how_to_tune_all_layers_of_a_model/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'hkTe5wtscDypeti6A9TY6Dix1C94iRC1XJm-hZ07_I4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?width=108&crop=smart&auto=webp&s=bb0aa3f7bea5e54e2710cf3bdc2284f9e2d185a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?width=216&crop=smart&auto=webp&s=ebb08b111b6ed8c96a5db7bf6473070b2f39d107', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?width=320&crop=smart&auto=webp&s=9a0133f398e9d6dbe7ba8713dd95f71e633dafcb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?width=640&crop=smart&auto=webp&s=31de304ecf90c0b5ee6b414888f3e5e92af77e6a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?width=960&crop=smart&auto=webp&s=713c157c15e43e0a4ea9cb797c1ebc7cb98fd5a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?width=1080&crop=smart&auto=webp&s=6028ea62ffe664aaf2d4d548af8d3f7ee5abf46a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FcHDRJ6ymqF6XXUsL4v_JVysWyDsIDsu8EKzPKMvG9s.jpg?auto=webp&s=538404bdc09b42b043d78727e9ee73f72ecb0f45', 'width': 1200}, 'variants': {}}]} | ||
FreeChat 1.0.0 (12) released with Mistral 7B support, prompt formats | 1 | [FreeChat](https://github.com/psugihara/FreeChat), my open source LLM runner for macOS, now supports Mistral-7B, the new breakthrough open source model from a team AI researchers in France.
It also now works Way Way Betterโข๏ธ with most models by querying them in the formats they were trained on. For most models it will auto-detect the correct format. You can manually specify a different format in Settings. The selected format saves on the model.
Updates:
- Support for all common prompt formats
- UI polish and small bug fixes
- Latest, fastest llama.cpp
As always, I'd love any and all feedback.
โpeter | 2023-10-17T19:05:03 | https://www.reddit.com/r/LocalLLaMA/comments/17a5x5g/freechat_100_12_released_with_mistral_7b_support/ | sleeper-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a5x5g | false | null | t3_17a5x5g | /r/LocalLLaMA/comments/17a5x5g/freechat_100_12_released_with_mistral_7b_support/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OtT8i1h2itWB75VuDufLazXEhIsZQ8lGJDgHHFemhOE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?width=108&crop=smart&auto=webp&s=4a956a07fcf9abd1490ad45eca91f730b8995a42', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?width=216&crop=smart&auto=webp&s=2a475f9e1f290090f8f3e606aa8d2f446694983c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?width=320&crop=smart&auto=webp&s=b8ebbf28fc511c930527aa04f684926a02bd2544', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?width=640&crop=smart&auto=webp&s=d20755512b89b096550afd554df4ae2d1d41deb1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?width=960&crop=smart&auto=webp&s=6e84ba4e287d9a5c6d697b6a5a60124dff3c4664', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?width=1080&crop=smart&auto=webp&s=ec3461419ba7d322f0946c1d2c384377ae12b24c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zfa31OPpVQVg3MgKcyxp2wN2laM2IYCPBUIjnIznJlM.jpg?auto=webp&s=ca2613fb3f13d5e261365d45237a4b150957a088', 'width': 1200}, 'variants': {}}]} |
Understanding ggml quantization. | 1 | [removed] | 2023-10-17T18:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/17a5rbh/understanding_ggml_quantization/ | Ok_Development1633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a5rbh | false | null | t3_17a5rbh | /r/LocalLLaMA/comments/17a5rbh/understanding_ggml_quantization/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-jVffAfYGPGJB7ubQ0qts2V7CS03BIJfqBmO24zziHw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?width=108&crop=smart&auto=webp&s=29334e8207addb977064cafd3f3012db2ae2fb7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?width=216&crop=smart&auto=webp&s=59b30160f2f3ee9249e4ca7cd49d76e384f9823f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?width=320&crop=smart&auto=webp&s=eac7c50d6fc04eea4cd80d6332650a0de07d5448', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?width=640&crop=smart&auto=webp&s=fd0f46dc8b47746a84062cf3ab810c11f9b27252', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?width=960&crop=smart&auto=webp&s=154db0b3d9fa1d859831060b47299f61d499a290', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?width=1080&crop=smart&auto=webp&s=63b2463ffe2d10292a816156cf7b87c42afa0f7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/I-5QTRgb6ESop22Etz-_vxnNp5y8ab9t4OI01WA38r8.jpg?auto=webp&s=4ae7e8e02de1f984876a11474a21d2355f8e5da3', 'width': 1200}, 'variants': {}}]} |
Help understanding GPU utilization with different wrappers | 2 | Hi, new to the space and need help understanding why my GPU utilization seems to be non-existent.
System I'm running is 8x 1080Ti (11GB) with 80GB CPU RAM.
With oobabooga/Text-Generation-Webui, loading TheBloke\_Llama-2-7b-chat-GPTQ with the GPTQ-for-LLaMa loader, the best inference speed I could get was 6.16 tokens/sec. During inference, I have Task Manager open and see only a few percentage points of utilization on the GPUs.
Any suggestions on why the GPUs are not pegged to 100% for the duration of the inference?
Please let me know and I'll provide more configuration details. Anything specific I should have configured in the Model tab?
TIA
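One caveat on the Task Manager readings above: on Windows, the default GPU graphs show the 3D/graphics engines rather than CUDA compute, so they routinely under-report inference load. A hedged way to see real per-GPU utilization from a terminal:

    nvidia-smi dmon -s u    # one row per GPU per interval: sm (compute) and memory utilization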
​ | 2023-10-17T18:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/17a577p/help_understanding_gpu_utilization_with_different/ | konrad21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a577p | false | null | t3_17a577p | /r/LocalLLaMA/comments/17a577p/help_understanding_gpu_utilization_with_different/ | false | false | self | 2 | null |
Whats the best model for textbook generation right now? | 4 | I mean, what's the best model I could get my hands on for a local PC, with either 8GB or 16GB? For the 8GB case, I was looking at a Q8 of Mistral or OpenHermes, but I don't know how good they are for code. For 16GB, what's the best?
My use case is just, say, feeding in text and getting a coherent output, like a summary or a clean-up of that text, or feeding in arbitrary data and getting structured data out.
any models? | 2023-10-17T18:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/17a51hb/whats_the_best_model_for_textbook_generation/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a51hb | false | null | t3_17a51hb | /r/LocalLLaMA/comments/17a51hb/whats_the_best_model_for_textbook_generation/ | false | false | self | 4 | null |
Is it possible to train lora for the gguf model? | 3 | First of all, I am interested in whether it is possible to train lora for the gguf model, and I also wonder how much video memory is needed for this? If I'm not mistaken, my UHD770 has 32 gigabytes of video memory, which it takes from RAM. I have 62 gigabytes of RAM. Maybe I can train lora on an integrated GPU. | 2023-10-17T18:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/17a505n/is_it_possible_to_train_lora_for_the_gguf_model/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a505n | false | null | t3_17a505n | /r/LocalLLaMA/comments/17a505n/is_it_possible_to_train_lora_for_the_gguf_model/ | false | false | self | 3 | null |
Reliable ways to get structured output from llms | 73 | What are the current best ways to get structured output from a local LLM or OpenAI reliably? I have found the following options and tried some of them:
Microsoft guidance - https://github.com/guidance-ai/guidance
LMQL - https://lmql.ai/
llama.cpp grammar - [https://github.com/ggerganov/llama.cpp/discussions/2494](https://github.com/ggerganov/llama.cpp/discussions/2494)
langchain [https://python.langchain.com/docs/modules/model\_io/output\_parsers/](https://python.langchain.com/docs/modules/model_io/output_parsers/)
jsonformer - https://github.com/1rgs/jsonformer
salute - [https://github.com/LevanKvirkvelia/salute](https://github.com/LevanKvirkvelia/salute)
I was looking for something that could work with both local LLMs (GGUF/GPTQ models) and OpenAI, but I guess that's difficult right now? (Also, I am more inclined towards TypeScript-based solutions (Zod) if possible.)
I ran into a few problems. For example, guidance-ai doesn't seem to work with text-generation-webui because the OpenAI API adapter doesn't support logit\_bias.
It will be great to know the experience of others with these approaches. | 2023-10-17T18:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/17a4zlf/reliable_ways_to_get_structured_output_from_llms/ | amit13k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a4zlf | false | null | t3_17a4zlf | /r/LocalLLaMA/comments/17a4zlf/reliable_ways_to_get_structured_output_from_llms/ | false | false | self | 73 | {'enabled': False, 'images': [{'id': 'Ibffq_Zyeku6ULQfIUSt59muaWIFxUJEE7Umqaai7Zw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?width=108&crop=smart&auto=webp&s=d076ffbd0ec52cdd794a39ca1db4106fc0728218', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?width=216&crop=smart&auto=webp&s=80c333f2058974a5bd572f7a82680e661b85e23e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?width=320&crop=smart&auto=webp&s=787cc51840cddef22ebb1cc076909d28174d3c03', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?width=640&crop=smart&auto=webp&s=8261ad0ed5b82f1489420fb4cbc7c25d11705ac5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?width=960&crop=smart&auto=webp&s=a8d08bc6b51d30f5f123debe2f5fe0ce02d059ae', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?width=1080&crop=smart&auto=webp&s=93ef82dca3f406dce46c9a66ee0393a9053591c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DCcSLTuai3CBVq6XhGp4xScW7j7KBRzPEd1I-9gcMOw.jpg?auto=webp&s=e583b6884bc8fb497430f3e83606754d41fcaa05', 'width': 1200}, 'variants': {}}]} |
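Of the options listed above, the llama.cpp grammar route constrains output at the sampler level rather than hoping the model complies, which is why it tends to be the most reliable one for local GGUF models. A hedged sketch follows; the grammar is a minimal hand-written example and the model file name is an assumption.

    # answer.gbnf - forces a tiny JSON object with two fixed keys
    root   ::= "{" ws "\"answer\":" ws string ws "," ws "\"confidence\":" ws number ws "}"
    string ::= "\"" [^"]* "\""
    number ::= [0-9]+ ("." [0-9]+)?
    ws     ::= [ \t\n]*

    # run with the grammar attached (model file name is an assumption)
    ./main -m ./models/mistral-7b-instruct.Q4_K_M.gguf --grammar-file answer.gbnf \
        -p "Return the answer and a confidence score for the question below as JSON: ..."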
Data prep for fine-tuning llama 2 7B to analyze financial reports (and write "funny" tweets) | 33 | Wanted to write a post about how I fine-tuned an LLM but it turned into a guide about data prep.
I tried fine-tuning a llama-2 7B LoRA to write funny tweets from company financials (10-K/Q reports). Figured it was not a very simple task given each report is 100+ pages of HTML. Spent 95% of the time working on a 400-line dataset and only 5% actually training a model. The results I got are not groundbreaking, but I learned a couple of things, including that a 7B LoRA can be fine-tuned on pretty complex and long data inputs (1k+ tokens each) and can also learn to interpret these inputs (financial data) to spit out something not very straightforward, such as a joke. Sharing here:
[Guide](https://gist.github.com/miagkyi/f99af352d3dbbdf90b390b4b81649e6b), [code to prepare dataset](https://gist.github.com/miagkyi/fcb1b19284ab5a086fd593318cdf1046), [dataset](https://app.llmhome.io/api/dataset/examples/RysM2cpqN7xDrNAdzMRiptywCoSt2wNAhE0/twitter-10kq-hottake.csv)
https://preview.redd.it/hkg3czkdwsub1.png?width=1048&format=png&auto=webp&s=f40f6fe3aec2d6f0f6de6af5f28821bed5945e9d
\[A quick disclosure : I work at a b2b voice AI startup. While building our product, weโve coded bunch of internal tools to help us train speech recognition and NLP models. We've carved out one of those tools into a [website](https://llmhome.io/). You might see me mentioning it my post as Iโve been using it for my fine-tuning. Just so weโre on the same page\] | 2023-10-17T17:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/17a47kx/data_prep_for_finetuning_llama_2_7b_to_analyze/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a47kx | false | null | t3_17a47kx | /r/LocalLLaMA/comments/17a47kx/data_prep_for_finetuning_llama_2_7b_to_analyze/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]} | |
Nvidia Tesla power limits | 2 | Hi, could someone who owns any of the following GPUs tell me the minimum power limit that nvidia-smi --power-limit allows you to set?
Tesla P4, Tesla P40, Tesla P100, Tesla M40, Tesla M60
I've looked for this information everywhere and cannot find it. I must know it before I purchase any of these GPUs. Thanks! | 2023-10-17T17:42:03 | https://www.reddit.com/r/LocalLLaMA/comments/17a401e/nvidia_tesla_power_limits/ | MaxwellsMilkies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a401e | false | null | t3_17a401e | /r/LocalLLaMA/comments/17a401e/nvidia_tesla_power_limits/ | false | false | self | 2 | null |
Do 2xRTX 4090 equate to 48GB VRAM? | 33 | I have the option of getting new RTX 3090 Tis for $800 each or one RTX 4090 for $1900. Is it possible to get two RTX 4090s and actually use 48GB of VRAM for a single model, the way two 3090s behave? And what are your opinions on the price points? Which option, in general, should I go for? | 2023-10-17T16:59:51 | https://www.reddit.com/r/LocalLLaMA/comments/17a317o/do_2xrtx_4090_equate_to_48gb_vram/ | sanitorii12345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a317o | false | null | t3_17a317o | /r/LocalLLaMA/comments/17a317o/do_2xrtx_4090_equate_to_48gb_vram/ | false | false | self | 33 | null |
Is there any way to take advantage of AVX-512 in a system with lots of RAM? | 2 | I have built a system with 128GB of memory on an X299 platform, equipped with a 10-core 10900X CPU, which has 2 AVX-512 units per core. I can expand the memory to 256GB, if needed.
I'm wondering if there's a way to fully utilize the AVX-512 capabilities. Should I compile 'llama.cpp' differently, or are there any other frameworks that can help improve performance?
Currently, I only have a 1080ti with 11GB of VRAM, so I can offload only a smaller portion of larger models to the GPU. | 2023-10-17T15:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/17a1okn/is_there_any_way_to_take_advantage_of_avx512_in_a/ | alexgand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a1okn | false | null | t3_17a1okn | /r/LocalLLaMA/comments/17a1okn/is_there_any_way_to_take_advantage_of_avx512_in_a/ | false | false | self | 2 | null |
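On the compile question above: the stock llama.cpp Makefile builds with -march=native, so AVX-512 code paths are usually picked up automatically on a 10900X; with CMake there has been an explicit switch. A hedged sketch, assuming the option names in the current tree still match:

    cmake -B build -DLLAMA_NATIVE=ON -DLLAMA_AVX512=ON
    cmake --build build --config Release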
Tried fine-tuning llama 2 7B to analyze financial reports and write โfunnyโ tweets | 1 | [removed] | 2023-10-17T15:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/17a1odt/tried_finetuning_llama_2_7b_to_analyze_financial/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a1odt | false | null | t3_17a1odt | /r/LocalLLaMA/comments/17a1odt/tried_finetuning_llama_2_7b_to_analyze_financial/ | false | false | 1 | null | |
NVIDIA TensorRT-LLM Coming To Windows | 89 | 2023-10-17T15:32:01 | https://wccftech.com/nvidia-tensorrt-llm-windows-huge-ai-boost-consumer-pcs-geforce-rtx-rtx-pro-gpus/ | Aroochacha | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 17a12gc | false | null | t3_17a12gc | /r/LocalLLaMA/comments/17a12gc/nvidia_tensorrtllm_coming_to_windows/ | false | false | 89 | {'enabled': False, 'images': [{'id': 'CaUfS12yrzBg_xmPTPM_t_rWlyea6DMygUkXioZfNTE', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/KYLiCESnYKMuk-hr22epRyUtbW_KpZ9JJaqY-Un8OWA.jpg?width=108&crop=smart&auto=webp&s=9d86ca8f10a21f831b7177052d4e44eec9582197', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/KYLiCESnYKMuk-hr22epRyUtbW_KpZ9JJaqY-Un8OWA.jpg?width=216&crop=smart&auto=webp&s=e0b7ab6f9f441064453d4a5950a175564b98e482', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/KYLiCESnYKMuk-hr22epRyUtbW_KpZ9JJaqY-Un8OWA.jpg?width=320&crop=smart&auto=webp&s=1845389df91be44bcbaf14e3a68ba885993de327', 'width': 320}, {'height': 467, 'url': 'https://external-preview.redd.it/KYLiCESnYKMuk-hr22epRyUtbW_KpZ9JJaqY-Un8OWA.jpg?width=640&crop=smart&auto=webp&s=b122d1c455e77f1df4473b3ca503c25f7558add5', 'width': 640}], 'source': {'height': 532, 'url': 'https://external-preview.redd.it/KYLiCESnYKMuk-hr22epRyUtbW_KpZ9JJaqY-Un8OWA.jpg?auto=webp&s=e9dde4f4aba6f7b3b2fcbc4a57a17565b9f96f70', 'width': 728}, 'variants': {}}]} | ||
I read here that dataset >>> models. I'd prefer it to be the other way around cause cleaning data is hard. Any tools or local models you use? | 33 | 2023-10-17T15:15:51 | Acceptable_Bed7015 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17a0oy2 | false | null | t3_17a0oy2 | /r/LocalLLaMA/comments/17a0oy2/i_read_here_that_dataset_models_id_prefer_it_to/ | false | false | 33 | {'enabled': True, 'images': [{'id': 'ozTgGFuUzb73DmohgSiPzawi83sxufGfqjSaDW7POog', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?width=108&crop=smart&auto=webp&s=945009c0f4d50623942b0511ca27a2ba331035fa', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?width=216&crop=smart&auto=webp&s=aa10f9bbe6e538a93299fcb45f112db2cc62cd32', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?width=320&crop=smart&auto=webp&s=62f780795f1e710058abcae458a2affe7d722be6', 'width': 320}, {'height': 324, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?width=640&crop=smart&auto=webp&s=4745b2db180ebba72865d3fa35bce8c6e0ab12a7', 'width': 640}, {'height': 486, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?width=960&crop=smart&auto=webp&s=5b1125d5f8517f5a85e7b7d1de935e86f68bc799', 'width': 960}, {'height': 547, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?width=1080&crop=smart&auto=webp&s=9d4a249c441ddf5ddeb70ec9a3e7d79456f6997a', 'width': 1080}], 'source': {'height': 698, 'url': 'https://preview.redd.it/j6xobtwx3sub1.png?auto=webp&s=e2223518b4acf8324ce01225605c240d2bf378de', 'width': 1376}, 'variants': {}}]} | |||
How to design API of Machine Learning library | 1 | 2023-10-17T15:09:05 | https://higgsfield.substack.com/p/how-to-design-api-of-machine-learning | Good-Willingness-985 | higgsfield.substack.com | 1970-01-01T00:00:00 | 0 | {} | 17a0jmc | false | null | t3_17a0jmc | /r/LocalLLaMA/comments/17a0jmc/how_to_design_api_of_machine_learning_library/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'F77ozRoxCTv4e6Y_-R9jTo6BO2caBsgjlyv7vgC6JwA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?width=108&crop=smart&auto=webp&s=8d6f2968c9289d44059c5f86293014413884cac5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?width=216&crop=smart&auto=webp&s=eb31efe638c83be1023bbf7f6dc130b83b21a8d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?width=320&crop=smart&auto=webp&s=fc85a0de5c406290a8093b4190a77b60febea026', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?width=640&crop=smart&auto=webp&s=7a956b00eeace157599dac36b0e6a1330ce5d1f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?width=960&crop=smart&auto=webp&s=a0cd6bd53991578926b3d394cbc00b2e3bb6492f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?width=1080&crop=smart&auto=webp&s=bc5575d6b985c63270d4ed4840683741cc37f168', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vc3PqhLH--XTtzj9vHSk8FL5rxgwdmVLNVo3ZlFjhFs.jpg?auto=webp&s=9f1110e81769a2851b9728550c23262c0a43f589', 'width': 1200}, 'variants': {}}]} | ||
Recommend me a $5K LLM PC Build | 1 | [removed] | 2023-10-17T15:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/17a0i23/recommend_me_a_5k_llm_pc_build/ | Imagummybear23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a0i23 | false | null | t3_17a0i23 | /r/LocalLLaMA/comments/17a0i23/recommend_me_a_5k_llm_pc_build/ | false | false | self | 1 | null |
Best cloud provider for getting started with fine tuning? | 5 | I tried using AWS and requesting a g5.2xl instance to fine tune a flan-xxl model but was shot down for some weird reason. Is it easier to just use google colab notebooks? | 2023-10-17T14:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/17a04nk/best_cloud_provider_for_getting_started_with_fine/ | seanpuppy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a04nk | false | null | t3_17a04nk | /r/LocalLLaMA/comments/17a04nk/best_cloud_provider_for_getting_started_with_fine/ | false | false | self | 5 | null |
Fine-tuning llama 2 7B to analyze financial reports and write funny tweets | 1 | [removed] | 2023-10-17T14:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/17a0470/finetuning_llama_2_7b_to_analyze_financial/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a0470 | false | null | t3_17a0470 | /r/LocalLLaMA/comments/17a0470/finetuning_llama_2_7b_to_analyze_financial/ | false | false | 1 | null | |
Extending Mistral 7B context to 32k? | 1 | [removed] | 2023-10-17T14:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/17a00ym/extending_mistral_7b_context_to_32k/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17a00ym | false | null | t3_17a00ym | /r/LocalLLaMA/comments/17a00ym/extending_mistral_7b_context_to_32k/ | false | false | self | 1 | null |
how are people making mistral fine-tunes? | 20 | I've been dabbling with LoRAs in oobabooga lately and wanted to make one based on Mistral, but it doesn't seem to be supported yet. How are people making these Mistral fine-tunes? | 2023-10-17T14:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/179zjlb/how_are_people_making_mistral_finetunes/ | __SlimeQ__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179zjlb | false | null | t3_179zjlb | /r/LocalLLaMA/comments/179zjlb/how_are_people_making_mistral_finetunes/ | false | false | self | 20 | null |
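For context on the question above: most public Mistral fine-tunes at this point were produced with standalone trainers such as axolotl (or plain Hugging Face Trainer scripts) rather than the ooba LoRA tab. A hedged sketch of the usual axolotl invocation, where the example config path is an assumption about the repo layout:

    # from a cloned axolotl checkout with its requirements installed
    accelerate launch -m axolotl.cli.train examples/mistral/qlora.yml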
Fine-tuning llama 2 7B to analyze financial reports and write dad jokes | 1 | [removed] | 2023-10-17T14:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/179zelp/finetuning_llama_2_7b_to_analyze_financial/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179zelp | false | null | t3_179zelp | /r/LocalLLaMA/comments/179zelp/finetuning_llama_2_7b_to_analyze_financial/ | false | false | 1 | null | |
Is there any demand for a Shared Public Contextual Database for RAG? | 8 | Hey Guys,
It seems RAG is really taking off as an increasingly popular way for LLMs to leverage contextual data. However, everybody is building their own contextual datasets and embedding them in their own siloed vector DBs.
Do you guys think there's any utility in a shared public vector DB that anyone can tap into via an API, without having to self-host, worry about the embedding pipelines, or fill the vector DB with enough data in the first place for their use cases? Would this save devs a lot of time in quickly testing product ideas? (Albeit it does seem that proprietary data is what everyone's raving about today.)
\-
For context, I'm building a social media product where users can upload a few pieces (approx. 10) of content (social media posts, websites, and videos to start with), which becomes a verified, human-curated list/Niche. We then classify and embed this into a vector DB. From this, we have set up a data pipeline to scrape the web and find the most similar new content, which we suggest users add to the Niche (upvote/downvote style). When a piece of content is upvoted, it's added to the verified list, updating the Niche's classification string. Essentially we're aiming to construct an ever-growing, user-curated, contextually classified vector database from a relatively small set of sample data. | 2023-10-17T14:06:54 | https://www.reddit.com/r/LocalLLaMA/comments/179z6i9/is_there_any_demand_for_a_shared_public/ | niksteel123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179z6i9 | false | null | t3_179z6i9 | /r/LocalLLaMA/comments/179z6i9/is_there_any_demand_for_a_shared_public/ | false | false | self | 8 | null |
Still new to LLaMA's, quick question regarding AMD's recent update to ROCm and PyTorch | 1 | [removed] | 2023-10-17T14:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/179z2lr/still_new_to_llamas_quick_question_regarding_amds/ | iamthewhatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179z2lr | false | null | t3_179z2lr | /r/LocalLLaMA/comments/179z2lr/still_new_to_llamas_quick_question_regarding_amds/ | false | false | default | 1 | null |
A Faraday user beginning with Koboldcpp and is confused how come I have RAM? | 1 | [removed] | 2023-10-17T12:53:29 | https://www.reddit.com/r/LocalLLaMA/comments/179xp3c/a_faraday_user_beginning_with_koboldcpp_and_is/ | NiceRedAutumn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179xp3c | false | null | t3_179xp3c | /r/LocalLLaMA/comments/179xp3c/a_faraday_user_beginning_with_koboldcpp_and_is/ | false | false | self | 1 | null |
Writing Aid: "Inpainting"? | 8 | Is there any way to get a model to fill in gaps, like inpainting does in SD? I've been looking, but I'm drowning in unfamiliar technical terms and was hoping someone might have a better grasp on things than I do.
Example:
>Dumbledore bludgeons Smeagol to death and claims the ring for himself.<Generated Text>another wizard blocks the road, issuing a challenge and a warning. "Cast the ring aside! You cannot control it!" He pleads. Dumbledore spits at the ground in a display of hubris and defiance. "Fuck off Gandalf, or I'll bash yer 'ead in! I swear on me mum!"<Generated Text> Leaving Gandalf the broken in a disorderly pile of bone and flesh, Dumbledore sets off toward Mt. Doom, still swearing under his breath.<Generated Text>Having bludgeoned Sauron into submission, Dumbledore stands victorious once more. The orcs beat their weapons against their armor in a frenzy, celebrating their new dark lord. Middle earth is now a part of the British empire. The end.
I was hoping to have the model fill in the boring parts, so I can focus on highlights, if that makes sense. | 2023-10-17T11:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/179wl34/writing_aid_inpainting/ | Pumpkim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179wl34 | false | null | t3_179wl34 | /r/LocalLLaMA/comments/179wl34/writing_aid_inpainting/ | false | false | self | 8 | null |
How do i fine tune an LLM? | 18 | I have a 24GB card and I want to fine-tune an LLM on a dataset I created. What's the best way?
And is it even possible?
Here is my dataset; I want to use it for LangChain:
[My Trainings Data](https://www.file-upload.net/download-15204875/trainingdata3.0.xlsx.html)
Source:
[https://huggingface.co/datasets/iamtarun/python\_code\_instructions\_18k\_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) (a few code questions)
[https://huggingface.co/datasets/MadVoyager/stable\_diffusion\_instructional\_dataset](https://huggingface.co/datasets/MadVoyager/stable_diffusion_instructional_dataset) (everything that has to do with prompts and picture generation) | 2023-10-17T11:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/179w79k/how_do_i_fine_tune_an_llm/ | Otherwise_Weather_57 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179w79k | false | null | t3_179w79k | /r/LocalLLaMA/comments/179w79k/how_do_i_fine_tune_an_llm/ | false | false | self | 18 | null |
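On the question above: yes, it is possible on a single 24 GB card; the usual route is a QLoRA-style fine-tune, where the base model is loaded in 4-bit and only small LoRA adapter weights are trained. A minimal, hedged sketch follows; the base model, the dataset format (one "text" field per row), and every hyperparameter are assumptions to adapt, not tested values.

    # Hedged QLoRA sketch for a single 24 GB GPU. Model name, dataset path,
    # and hyperparameters are illustrative assumptions.
    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig, TrainingArguments)
    from peft import LoraConfig
    from trl import SFTTrainer

    model_id = "meta-llama/Llama-2-7b-hf"                                     # assumption: any 7B base model
    dataset = load_dataset("json", data_files="train.jsonl", split="train")   # rows like {"text": "..."}

    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                             bnb_4bit_compute_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb,
                                                 device_map="auto")

    peft_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                          target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        peft_config=peft_cfg,
        dataset_text_field="text",
        max_seq_length=1024,
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                               gradient_accumulation_steps=8, num_train_epochs=3,
                               learning_rate=2e-4, bf16=True, logging_steps=10),
    )
    trainer.train()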
How do you keep up to date with all the innovations and frameworks? | 108 | I have FOMO about all the open-source innovations and technologies coming out at the speed of light. How do you stay up to speed? | 2023-10-17T11:18:11 | https://www.reddit.com/r/LocalLLaMA/comments/179w0ic/how_do_you_keep_up_to_date_with_all_the/ | nsosio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179w0ic | false | null | t3_179w0ic | /r/LocalLLaMA/comments/179w0ic/how_do_you_keep_up_to_date_with_all_the/ | false | false | self | 108 | null |
Biased Behavior in the "Orca Mini" LLM, Based on Facebook's Llama2 | 1 | [removed] | 2023-10-17T10:42:20 | https://www.reddit.com/r/LocalLLaMA/comments/179vfrb/biased_behavior_in_the_orca_mini_llm_based_on/ | Good-Juggernaut-740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179vfrb | false | null | t3_179vfrb | /r/LocalLLaMA/comments/179vfrb/biased_behavior_in_the_orca_mini_llm_based_on/ | false | false | self | 1 | null |
Microscaling Data Formats for Deep Learning (sub-8-bit training) | 1 | [removed] | 2023-10-17T10:34:32 | https://www.reddit.com/r/LocalLLaMA/comments/179vbdl/microscaling_data_formats_for_deep_learning/ | curiousFRA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179vbdl | false | null | t3_179vbdl | /r/LocalLLaMA/comments/179vbdl/microscaling_data_formats_for_deep_learning/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BK6rhYDWrkgmo668xCNe0hAA3jc4qZ7ReJZ9kRwd9Z4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=108&crop=smart&auto=webp&s=50051208017acd548064a685c6c13e72ce4708c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=216&crop=smart&auto=webp&s=29b46f2a4012c376e6f016747b17aaffce018784', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=320&crop=smart&auto=webp&s=a786116a712fcae0cb5f9edc6a89decb0a8a03cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=640&crop=smart&auto=webp&s=dd6f241054a744dce0344192e2951382915885ba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=960&crop=smart&auto=webp&s=d9b61f1ff9e174aa0f7f4d973ced5fdbc720b67f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?width=1080&crop=smart&auto=webp&s=24372b1ed0f4f894ddf8995870c8b2b1a78ce16e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OKSKLJ2FatVBD8JRUv8Y5uugKV7KO2wrgaiUmMNq6vQ.jpg?auto=webp&s=7d25b847145f10458492add857bb63ff6e2aa6c8', 'width': 1200}, 'variants': {}}]} |
How to run models other than GGML locally? | 3 | I got a laptop with a 4060 inside, and wanted to use koboldcpp to run my models. But I think it only supports GGML versions, which use both GPU and CPU, and it makes that a bit slower than the other versions.
How can I run the other versions? Can I still run them on Koboldcpp? Or do I have to use something else? | 2023-10-17T09:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/179ukdj/how_to_run_models_other_than_ggml_locally/ | s-cardi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179ukdj | false | null | t3_179ukdj | /r/LocalLLaMA/comments/179ukdj/how_to_run_models_other_than_ggml_locally/ | false | false | self | 3 | null |
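One clarification on the premise above: a GGUF model is only CPU-bound if its layers stay on the CPU; koboldcpp can push all layers onto the GPU, at which point it runs at GPU speed much like the GPTQ-style formats. A hedged sketch of the flags (model file name and layer count are assumptions):

    python koboldcpp.py --model mistral-7b-instruct.Q4_K_M.gguf --usecublas --gpulayers 35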
llama2-7B/llama2-13B parameter model generates random text after few questions | 6 | I have a RAG-based system and I am maintaining memory for the last 2 conversational turns. I am seeing that after a few questions the model starts to respond with gibberish, for example:
</hs>
------
can i scale the container user it?
Answer:
[/INST]
> Finished chain.
> Finished chain.
> Finished chain.
Response has 499 tokens.
Total tokens used in this inference: 508
BOT RESPONSE: query
axis
hal
ask
ger
response
<a<
question,
questions,json,chain,fn,aker _your
vas
conf, >cus,
absolute,
customer,cm,
information,query,akegt,gov,query,db,sys,query,query,ass,
---
------------,
I am counting the tokens and am staying well under the limit. I have max\_new\_tokens set to 512 and my pipeline is as follows:
    # Note: assumes `from transformers import pipeline` and `import torch` at module level.
    def initialize_pipeline(self):
        self.pipe = pipeline(
            "text-generation",
            stopping_criteria=self.stopping_criteria,
            model=self.model,
            tokenizer=self.tokenizer,
            torch_dtype=torch.bfloat16,
            device_map="auto",
            max_new_tokens=512,   # generation budget; prompt tokens count separately against the context window
            do_sample=True,
            top_k=30,
            num_return_sequences=1,
            eos_token_id=self.tokenizer.eos_token_id,
            temperature=0.1,
            top_p=0.15,
            repetition_penalty=1.2,
        )
I don't get any exception, but it just starts to respond with random text. Any suggestion would be of great help. Also I am working on an 80GiB GPU so resources are not a problem. | 2023-10-17T08:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/179tvcx/llama27bllama213b_parameter_model_generates/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179tvcx | false | null | t3_179tvcx | /r/LocalLLaMA/comments/179tvcx/llama27bllama213b_parameter_model_generates/ | false | false | self | 6 | null |
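One thing worth ruling out for the gibberish above: with chat history plus retrieved context, the assembled prompt can quietly exceed Llama 2's 4096-token context window even when max\_new\_tokens is modest, and overflowing the window typically produces exactly this kind of token salad. A minimal pre-flight check, sketched with illustrative names rather than the poster's actual code:

    # Sketch: make sure the assembled RAG prompt leaves room for generation
    # inside Llama 2's 4096-token context window.
    MAX_CONTEXT = 4096      # Llama 2 context length
    MAX_NEW_TOKENS = 512    # matches the pipeline setting above

    def prompt_fits(tokenizer, prompt: str) -> bool:
        n_prompt = len(tokenizer(prompt)["input_ids"])
        return n_prompt + MAX_NEW_TOKENS <= MAX_CONTEXT

    # If it doesn't fit, drop the oldest conversation turn (or trim retrieved
    # chunks) and re-check before calling the pipeline.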
Got some issues with my Synology/Xpenology based NAS | 1 | 1. How can I determine which is the optimal model to install for best and most accurate answers, without bogging down the configuration I am using? (no discrete video card)
2. For now I have installed nous-hermes-llama-2-7b.ggmlv3.q4\_0.bin and there seem to be some issues: it does not exactly follow the system prompt, meaning it will not take on the bot personality as requested (it does not answer "who are you?" with "I am X" as provided, for instance), and I also notice some German words here and there in the responses
3. My hardware is Coffee Lake based, with an i7-8700K CPU, SSD caching and 16GB DDR4 RAM
Thanks! | 2023-10-17T08:18:08 | https://www.reddit.com/r/LocalLLaMA/comments/179tfb2/got_some_issues_with_my_synologyxpenology_based/ | dropswisdom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179tfb2 | false | null | t3_179tfb2 | /r/LocalLLaMA/comments/179tfb2/got_some_issues_with_my_synologyxpenology_based/ | false | false | self | 1 | null |
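On the system-prompt issue in point 2 above: the Nous-Hermes Llama-2 models were instruction-tuned on an Alpaca-style template, so persona text tends to stick better when it is delivered inside that template rather than as a free-form system message. A sketch of the format (the persona string is a placeholder, not a recommendation):

    # Sketch: Alpaca-style prompt format the Nous-Hermes Llama-2 models were tuned on.
    persona = "You are X, a helpful assistant running on a home NAS."  # placeholder persona
    user_msg = "Who are you?"

    prompt = (
        f"### Instruction:\n{persona}\n\n{user_msg}\n\n"
        "### Response:\n"
    )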
Got a powerful PC based serology NASA | 1 | 1. How can I determine which is the optimal model to install for best and most accurate answers?
2. For now I have installed nous-hermes-llama-2-7b.ggmlv3.q4\_0.bin and there seem to be some issues: it does not exactly follow the system prompt, meaning it will not take on the bot personality as requested (it does not answer "who are you?" with "I am X" as provided, for instance), and I also notice some German words here and there in the responses
3. My hardware is coffee lake based with a i7-8700k CPU, SSD caching and 16GB DDR4 RAM
Thanks! | 2023-10-17T08:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/179te5j/got_a_powerful_pc_based_serology_nasa/ | dropswisdom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179te5j | false | null | t3_179te5j | /r/LocalLLaMA/comments/179te5j/got_a_powerful_pc_based_serology_nasa/ | false | false | self | 1 | null |
Why aren't recommended prompt templates distributed as part of models? | 46 | Models already ship with a lot of metadata alongside the weights themselves, but at present they don't seem to contain the required/suggested prompt template.
- Is there a reason for this?
I'd love to be able to download and select a model in tools like Text Generation Web UI, LM Studio etc... and have the model set or at least hint at the correct prompt template. | 2023-10-17T07:42:02 | https://www.reddit.com/r/LocalLLaMA/comments/179sybn/why_arent_recommended_prompt_templates/ | sammcj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179sybn | false | null | t3_179sybn | /r/LocalLLaMA/comments/179sybn/why_arent_recommended_prompt_templates/ | false | false | self | 46 | null |
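The ecosystem has in fact started moving this way: recent transformers releases let a model repo ship its own chat template (a Jinja string in tokenizer_config.json), which front ends can apply instead of guessing. A sketch of the consumer side, assuming the repo actually includes a template:

    # Sketch: use a prompt template distributed with the model itself
    # (transformers' chat-template support, roughly v4.34 and later).
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
    messages = [{"role": "user", "content": "Why is the sky blue?"}]

    # Renders the conversation with the template stored in tokenizer_config.json.
    prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(prompt)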
Dear Esther, I've been working on a diary app the runs via llama.cpp | 1 | 2023-10-17T07:25:57 | https://github.com/vortext/esther | Difficult-Support794 | github.com | 1970-01-01T00:00:00 | 0 | {} | 179sqp5 | false | null | t3_179sqp5 | /r/LocalLLaMA/comments/179sqp5/dear_esther_ive_been_working_on_a_diary_app_the/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'iWw-HWNYIGf5C_Foz_u7inChAyjmbcX2-a9lzGo88ao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?width=108&crop=smart&auto=webp&s=2a1871eef16fa79fddcbfeba98eb3cff2af00c2e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?width=216&crop=smart&auto=webp&s=401ed31f077ae7950748a3155f5fd638e39a1337', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?width=320&crop=smart&auto=webp&s=9131091d19df261ea1abbed7fa602ca4881f14f4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?width=640&crop=smart&auto=webp&s=a01a57ac719687b448d60d9a7c0f1a9e8cb2ef11', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?width=960&crop=smart&auto=webp&s=0f335b6e74859f59603b55928d9195b954c0bbaa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?width=1080&crop=smart&auto=webp&s=32e5112dfa4257743129bc1f3906b9bfdd59cbbe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v3ICsT9tFJgNTnhgxFPGLSPuDhmfcmYG2mo4Cni-y0A.jpg?auto=webp&s=ca9ebfcae5d21b8a518fcefc87e3950f2c9c34c8', 'width': 1200}, 'variants': {}}]} | |
Text generation web UI unable to load models? | 2 | Hi, I've just set up the text generation webUI and downloaded both Guanaco-33b and WizardLM-7B-Uncensored as those were recommended in the wiki. After fixing a bunch of other errors I'm still unable to load either model. They both throw *OSError: Can't load tokenizer for 'models\[ModelName]'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'models\[ModelName' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.*
What am I missing? I've downloaded the models from https://huggingface.co/timdettmers/guanaco-33b-merged and https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored/tree/main respectively and haven't changed any settings in the UI. I have a 4090 and apparently a 33b models should work? Let me know if that's wrong though that seems to be a separate issue.. | 2023-10-17T07:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/179spdk/text_generation_web_ui_unable_to_load_models/ | rodinj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179spdk | false | null | t3_179spdk | /r/LocalLLaMA/comments/179spdk/text_generation_web_ui_unable_to_load_models/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=108&crop=smart&auto=webp&s=2c0b032bdc9d0820b318f57def3af620afe60ee8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=216&crop=smart&auto=webp&s=7b29327d787489e6d4f61726ba9d10a09ed099d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=320&crop=smart&auto=webp&s=9f1b5bed20b4b058b596c2a430a47d3b9c857e03', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=640&crop=smart&auto=webp&s=7b47505d7a8ebd834ca805c293d16277b5772c12', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=960&crop=smart&auto=webp&s=c7be2b4b0ad69f9ff176d6a0027458c22a63a5f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=1080&crop=smart&auto=webp&s=dea3a5ccadcdb95c05dca40d482f50c976b88233', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?auto=webp&s=6e3e4780238d40a2755c2289e7e3d722eeb8ea30', 'width': 1200}, 'variants': {}}]} |
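For the OSError above, the usual culprit is a model folder that contains the weights but is missing the tokenizer files. A quick sanity check, sketched under the assumption the models live in text-generation-webui's models/ directory (the folder name is illustrative):

    # Sketch: check that the model folder has the files a Llama tokenizer needs.
    from pathlib import Path

    model_dir = Path("models/guanaco-33b-merged")  # adjust to the actual folder name
    expected = [
        "config.json",
        "tokenizer_config.json",
        "tokenizer.model",          # the sentencepiece vocab the error is really about
        "special_tokens_map.json",
    ]
    for name in expected:
        print(f"{name}: {'ok' if (model_dir / name).exists() else 'MISSING'}")

If tokenizer.model is missing, re-downloading it from the original repo into the same folder is usually enough. Separately, a merged 33B in fp16 is roughly 65 GB, so on a 24 GB 4090 it is the 4-bit GPTQ/GGUF builds of these models that "should work", not the full-precision merges.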
Mass inference on a LLM with no content moderation? | 1 | [removed] | 2023-10-17T06:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/179rjph/mass_inference_on_a_llm_with_no_content_moderation/ | Bo3help | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179rjph | false | null | t3_179rjph | /r/LocalLLaMA/comments/179rjph/mass_inference_on_a_llm_with_no_content_moderation/ | false | false | self | 1 | null |
How can I achieve this? | 1 | [removed] | 2023-10-17T04:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/179qjms/how_can_i_achieve_this/ | consig1iere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179qjms | false | null | t3_179qjms | /r/LocalLLaMA/comments/179qjms/how_can_i_achieve_this/ | false | false | self | 1 | null |
Phibrarian Alpha - the first model checkpoint from SciPhi's Mistral-7b | 81 | Hi Everyone,
I am FT'ing mistral 7b w/ a 32k context window on over 1bn tokens of high quality synthetic data. The tokens include OpenOrca, OpenPhi synthetic textbooks, and synthetic coding data like WizardCoder.
The run is a few days in on an 8x 80GB A100 cluster, and I quietly released the first epoch checkpoint [here](https://huggingface.co/emrgnt-cmplxty/Mistral-7b-Phibrarian-32k/settings). I am building the model in association with our synthetic data efforts [here, at SciPhi](https://github.com/emrgnt-cmplxty/sciphi).
I would really like to hear your feedback on this model - I think that the fine-tuning dataset improved model cognition & teaching performance, something I will be benchmarking rigorously in the coming days. If you'd like access to a fast completion endpoint, I am hosting one for personal use - but since this is LocalLlama you guys won't need that :).
[Cool response to "What is the meaning of life"](https://preview.redd.it/p6j75cokunub1.png?width=1924&format=png&auto=webp&s=0afc3994ff29e5d5f751d6fa4ec7577acbcc2bab)
Thanks again for following along and giving helpful feedback along the way - I'm excited to finally have something worth showing to the community here.
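For anyone who wants to poke at the checkpoint locally, a minimal loading sketch — the repo id is taken from the link above and may change as new epochs land:

    # Sketch: load the released epoch-1 checkpoint from the Hugging Face Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "emrgnt-cmplxty/Mistral-7b-Phibrarian-32k"
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")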
​ | 2023-10-17T00:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/179lxgz/phibrarian_alpha_the_first_model_checkpoint_from/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179lxgz | false | null | t3_179lxgz | /r/LocalLLaMA/comments/179lxgz/phibrarian_alpha_the_first_model_checkpoint_from/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'CkI2fSoOCxngEsZcJyHf7GvB4fe7VGoqloM-U3dJ4CI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?width=108&crop=smart&auto=webp&s=5f560874193e343c4aacfcb9a5e7a96ac4f853b8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?width=216&crop=smart&auto=webp&s=f56de00ecc6b2d4e4f4fc079bfdd13001070a764', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?width=320&crop=smart&auto=webp&s=91258985c31f0addd4c8b20c236d40feb09989c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?width=640&crop=smart&auto=webp&s=f84f1ab6beb10df4349f41d0abeefd3cc0b2935d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?width=960&crop=smart&auto=webp&s=74d795618e435b080e0c968d84e7a335da854ed4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?width=1080&crop=smart&auto=webp&s=c3ee97bdfcb1ebaf12fc5dc3f301e453ccb9ce29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fJ4Et7DPte3TGqSTMb7QJPZqFhEMrId90Gv5ahs54wA.jpg?auto=webp&s=b80685aa8afde6ef204198f2f2bd0037369e6ab4', 'width': 1200}, 'variants': {}}]} | |
๐๐ง๐ญ๐ซ๐จ๐๐ฎ๐๐ข๐ง๐ ๐๐ฉ๐๐ง ๐๐๐ซ๐ฆ๐๐ฌ ๐, a continuation of the Hermes series of models, now built on Mistral 7B! | 156 | 2023-10-17T00:53:56 | https://twitter.com/Teknium1/status/1714010838959612329 | metalman123 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 179lvdx | false | {'oembed': {'author_name': 'Teknium (e/ฮป)', 'author_url': 'https://twitter.com/Teknium1', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">๐๐ง๐ญ๐ซ๐จ๐๐ฎ๐๐ข๐ง๐ ๐๐ฉ๐๐ง ๐๐๐ซ๐ฆ๐๐ฌ ๐, a continuation of the Hermes series of models, now built on Mistral 7B!<br><br>The Hermes 2 model was trained on 900,000 instructions, and surpasses all previous versions of Hermes 13B and below, and matches 70B on some benchmarks!โฆ <a href="https://t.co/BAxtMYBXpG">pic.twitter.com/BAxtMYBXpG</a></p>— Teknium (e/ฮป) (@Teknium1) <a href="https://twitter.com/Teknium1/status/1714010838959612329?ref_src=twsrc%5Etfw">October 16, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Teknium1/status/1714010838959612329', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_179lvdx | /r/LocalLLaMA/comments/179lvdx/๐๐ง๐ญ๐ซ๐จ๐๐ฎ๐๐ข๐ง๐ _๐๐ฉ๐๐ง_๐๐๐ซ๐ฆ๐๐ฌ_๐_a_continuation_of_the/ | false | false | 156 | {'enabled': False, 'images': [{'id': 'n3XTIZ5EEyxCprJBhkSNBQPAxqHJHx255mBBU-0cxxM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/S-qnV6Zvb5gGFgaCejqx5R_edzONpONemM-WPT7gXPo.jpg?width=108&crop=smart&auto=webp&s=18b2285f67ac82a71daabe99adcb028182c800f0', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/S-qnV6Zvb5gGFgaCejqx5R_edzONpONemM-WPT7gXPo.jpg?auto=webp&s=5ba9778ebd2efe330bf18ac1fe21367d11379075', 'width': 140}, 'variants': {}}]} | ||
TheBloke_Chronos-Hermes-13B-SuperHOT-8K-GPTQ alternative? | 6 | Hi, I've been using TheBloke_Chronos-Hermes-13B-SuperHOT-8K-GPTQ for light fiction writing. I like it. It has extended 'memory' or max_seq_len or something.
Anyway, has anything come along that is much better?
I have 12gigs of vram. | 2023-10-16T23:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/179keia/thebloke_chronoshermes13bsuperhot8kgptq/ | c_gdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179keia | false | null | t3_179keia | /r/LocalLLaMA/comments/179keia/thebloke_chronoshermes13bsuperhot8kgptq/ | false | false | self | 6 | null |
is it possible to do anything locally with these specs? | 7 | * GPU : RTX 2060 Max-Q (6GB) + 0.6 GB AMD
* CPU : AMD Ryzen 9 4900HS
* RAM: 15 GB
The laptop is rog zephyrus g14
Can I do any type of fine-tuning (QLoRA, LoRA) or inference with reasonable speed?
If so, I'm assuming only 7B or smaller models?
Sorry if this has been asked before. | 2023-10-16T23:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/179kccs/is_it_possible_to_do_anything_locally_with_these/ | Warm_Shelter1866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179kccs | false | null | t3_179kccs | /r/LocalLLaMA/comments/179kccs/is_it_possible_to_do_anything_locally_with_these/ | false | false | self | 7 | null |
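On the fine-tuning part of the question above: 6 GB of VRAM is very tight even for a 7B with QLoRA, so the sketch below is the shape of the setup to try (4-bit base model plus LoRA adapters) rather than a guarantee it will fit — short sequences, batch size 1, and gradient checkpointing are assumed, and the base model id is just an example:

    # Sketch: QLoRA-style setup with peft + bitsandbytes (4-bit base, trainable LoRA adapters).
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_id = "NousResearch/Llama-2-7b-hf"  # example base model
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
    model = prepare_model_for_kbit_training(model)
    model.gradient_checkpointing_enable()

    lora = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights are trained

For pure inference, a 4-bit 7B runs comfortably within 6 GB, and a GGUF/GGML build via llama.cpp or koboldcpp with partial GPU offload is the other low-effort option.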