| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Searching for a real-world language task benchmark (summarization, translation, rephrasing, etc.) | 1 | I know about the Open LLM Leaderboard (mostly logic and multiple-choice tests) and Chatbot Arena (a human chooses the best answer).
Is there any benchmark/leaderboard measuring an LLM's ability at summarization, translation, rephrasing, etc.? | 2023-11-14T10:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/17uzkr8/searching_for_a_real_world_language_task/ | AfternoonOk5482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uzkr8 | false | null | t3_17uzkr8 | /r/LocalLLaMA/comments/17uzkr8/searching_for_a_real_world_language_task/ | false | false | self | 1 | null |
Quantisation techniques difference? | 5 | Can someone please explain the differences between these quantisation methods:
- AWQ
- GPTQ
- llama.cpp GGUF quantisation (sorry, I do not know the quantisation technique's name)
As far as I have researched, few AI backends support CPU inference of AWQ and GPTQ models, while GGUF quantisation (like Q4_K_M) is prevalent because it runs smoothly even on CPU.
So:
**What exactly is the difference between the above quantisation techniques?** | 2023-11-14T10:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/17uz2xt/quantisation_techniques_difference/ | No-Belt7582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uz2xt | false | null | t3_17uz2xt | /r/LocalLLaMA/comments/17uz2xt/quantisation_techniques_difference/ | false | false | self | 5 | null |
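Not a full answer to the theory question, but a concrete illustration of the practical point: GGUF is a self-contained file format that llama.cpp can run on a plain CPU, while AWQ and GPTQ loaders generally assume a CUDA GPU. A minimal llama-cpp-python sketch (the model path is a hypothetical example):

```python
# Minimal sketch: CPU-only inference from a GGUF quant via llama-cpp-python.
# The model path is a hypothetical local file; any Q4_K_M GGUF works the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; no GPU required at all
)
out = llm("Summarize the difference between AWQ, GPTQ and GGUF in one line.",
          max_tokens=128)
print(out["choices"][0]["text"])
```

As for the methods themselves: AWQ and GPTQ are both weight-only 4-bit schemes calibrated on sample data (activation-aware scaling for AWQ, layer-wise quantisation-error minimisation for GPTQ) and are typically served by GPU backends, while GGUF's K-quants (like Q4_K_M) use grouped block scales designed for llama.cpp's CPU and GPU kernels.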
Can I use phind.com with a VPN? | 1 | I got banned from woke ChatGPT for using a VPN. Can I use a VPN with Phind?
Thanks | 2023-11-14T09:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/17uyslq/can_i_use_phindcom_with_a_vpn/ | timetostartwinning | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uyslq | false | null | t3_17uyslq | /r/LocalLLaMA/comments/17uyslq/can_i_use_phindcom_with_a_vpn/ | false | false | self | 1 | null |
Guidance on training 8K context size with 4-bit | 3 | Hello all,
I have two NVIDIA P40s with 48GB VRAM in total and am trying to train a LLaMA-v2-based instruction-following model with a context size of 8,192 (8K). I successfully trained, with really strong results, using the default SFTTrainer at 2,048 and 4,096, so 2K and 4K. However, when I switch to 8K I always hit the OOM wall. I set everything to 4-bit, and the initial loading memory use is less than 2-3GB per GPU, but the moment training starts it dies. Does anyone have an idea or suggestion here?
I tried double quantisation, but it is not compatible with the P40; the same goes for flash attention.
For my use case I need an 8K context. So far all my previous tests with 2-4K went really well, with strong results, so I am quite confident in my overall training setup.
With FastAPI I managed to run even 60B and 34B models for inference using 4-bit and a special split-GPU switch where I could limit the GPU memory usage to 18GB:22GB (don't know why, but only this worked stably). I wonder if something similar can help here.
For batch size I tried everything from 1 to 64 (which I had used successfully with smaller context sizes).
Thanks a lot! | 2023-11-14T09:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/17uyrum/guidance_on_training_8k_context_size_with_4bit/ | Bright-Question-6485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uyrum | false | null | t3_17uyrum | /r/LocalLLaMA/comments/17uyrum/guidance_on_training_8k_context_size_with_4bit/ | false | false | self | 3 | null |
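Not a guaranteed fix, but here is a sketch of the usual OOM levers for long-context QLoRA training with TRL's SFTTrainer; `MODEL` and `DATASET` are placeholders for the setup described above. Since the P40 cannot use flash attention, attention activation memory grows roughly quadratically with sequence length, so the main knobs are micro-batch size 1 with gradient accumulation, gradient checkpointing, and a paged optimizer:

```python
# Sketch of memory-saving settings for 8K-context QLoRA (MODEL/DATASET are placeholders).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.float16)  # P40 has no bf16
model = AutoModelForCausalLM.from_pretrained(MODEL, quantization_config=bnb,
                                             device_map="auto")
model.config.use_cache = False  # the KV cache conflicts with gradient checkpointing

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # keep micro-batches tiny at 8K...
    gradient_accumulation_steps=16,  # ...and recover the effective batch size here
    gradient_checkpointing=True,     # trade recompute for activation memory
    optim="paged_adamw_8bit",        # pages optimizer state out on memory spikes
    fp16=True,
)
peft_cfg = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                      target_modules=["q_proj", "v_proj"])
trainer = SFTTrainer(model=model, train_dataset=DATASET, args=args,
                     peft_config=peft_cfg, dataset_text_field="text",
                     max_seq_length=8192)
trainer.train()
```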
Any way to save your cloud-GPU fine-tuned models to your local storage? | 2 | I want to fine-tune some open-source LLMs on cloud GPUs (Google, RunPod, etc.), but most of the tutorials I read upload the fine-tuned models to the Hugging Face Hub. I want to keep them private. What can I do? Thanks! | 2023-11-14T09:37:10 | https://www.reddit.com/r/LocalLLaMA/comments/17uyqr6/anyway_to_save_your_cloud_gpu_finetuned_models_to/ | caphohotain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uyqr6 | false | null | t3_17uyqr6 | /r/LocalLLaMA/comments/17uyqr6/anyway_to_save_your_cloud_gpu_finetuned_models_to/ | false | false | self | 2 | null |
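A hedged sketch of the standard answer: nothing forces an upload. `save_pretrained` writes everything to local disk on the cloud box, and the folder can then be pulled down with scp/rsync or the provider's download tool. The path is an arbitrary example, and `model`/`tokenizer` are assumed to be your finetuned objects:

```python
# Keep the finetune private: write to local disk instead of pushing to the Hub.
# `model` and `tokenizer` are assumed to be your finetuned objects.
model.save_pretrained("out/my-model")      # config + weights
tokenizer.save_pretrained("out/my-model")  # tokenizer files alongside

# Later, reload fully offline from the downloaded folder:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("out/my-model")
tokenizer = AutoTokenizer.from_pretrained("out/my-model")
```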
Any idea of application of MEM GPT from Berkeley | 1 | [removed] | 2023-11-14T08:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/17uy5q8/any_idea_of_application_of_mem_gpt_from_berkeley/ | ThenNeighborhood3723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uy5q8 | false | null | t3_17uy5q8 | /r/LocalLLaMA/comments/17uy5q8/any_idea_of_application_of_mem_gpt_from_berkeley/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Kt9QNj4TdiCvqAbi3NQfiU-aZPDg62xJCtUXd6LunxA', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?width=108&crop=smart&auto=webp&s=014743cb4525ab916351cbe31b66905b78f138d7', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?width=216&crop=smart&auto=webp&s=08a3458fa84c598e2e3b6018d023e624d827abf6', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?width=320&crop=smart&auto=webp&s=779ad3b8dde722dd3d31b3e85f2051def198dc7f', 'width': 320}, {'height': 315, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?width=640&crop=smart&auto=webp&s=c8a611703138f6d0fcbc37590de57b1242fd23c7', 'width': 640}, {'height': 473, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?width=960&crop=smart&auto=webp&s=111d86b7a8d46a8bd7b2d7cd86292f8df888434a', 'width': 960}, {'height': 532, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?width=1080&crop=smart&auto=webp&s=95ef679c92715963fe059f0667dfe95f59a72666', 'width': 1080}], 'source': {'height': 1086, 'url': 'https://external-preview.redd.it/BfHGJ4b5cknMOZj4gdpHFJwdqyD0VztftKk_BP9M3LY.jpg?auto=webp&s=e4d495911fba26c1bc344d324906374f0b33d136', 'width': 2204}, 'variants': {}}]} |
every time i read someone is roleplaying | 1 | 2023-11-14T08:42:53 | upk27 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17uy1vt | false | null | t3_17uy1vt | /r/LocalLLaMA/comments/17uy1vt/every_time_i_read_someone_is_roleplaying/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'gg9J15mES7_V6RWeUMw0c8k1ogDtsK7c9d9Rzl7d3yg', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/purofoy20a0c1.png?width=108&crop=smart&auto=webp&s=ad0c2e0f2f9d8b2b5fa75febafdae437d7b81f77', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/purofoy20a0c1.png?width=216&crop=smart&auto=webp&s=febb11e3611c2c2ecf762b9de710db631fc6dc2f', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/purofoy20a0c1.png?width=320&crop=smart&auto=webp&s=c49590426205018e244d6e42adadd4e7ed18a795', 'width': 320}], 'source': {'height': 559, 'url': 'https://preview.redd.it/purofoy20a0c1.png?auto=webp&s=a4bed80ff90ad2553a481cd796f20b7792246c7e', 'width': 500}, 'variants': {}}]} | ||
everythinglm-3b | 1 | [removed] | 2023-11-14T08:02:27 | https://colab.research.google.com/drive/1qk9WX2TMsxVpePdm-q-czlLuYz-PPCxJ?authuser=4 | Sufficient_Run1518 | colab.research.google.com | 1970-01-01T00:00:00 | 0 | {} | 17uxipa | false | null | t3_17uxipa | /r/LocalLLaMA/comments/17uxipa/everythinglm3b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} | |
How can I improve inference performance to a normal range? | 1 | [removed] | 2023-11-14T07:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/17uxbs7/how_can_i_improve_inference_performance_to_a/ | Radiant-Practice-270 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uxbs7 | false | null | t3_17uxbs7 | /r/LocalLLaMA/comments/17uxbs7/how_can_i_improve_inference_performance_to_a/ | false | false | self | 1 | null |
The Problem with LLMs for chat or roleplay | 13 | I've been using self-hosted LLMs for roleplay purposes, but these are the worst problems I face every time, no matter what model and parameter preset I use.
>I'm using:
>
>Pygmalion 13B AWQ
>
>Mistral 7B AWQ
>
>SynthIA 13B AWQ [Favourite]
>
>WizardLM 7B AWQ
1. It messes up who's who, and often starts to behave like the user.
2. It writes in third-person perspective or narration.
3. It sometimes generates the exact same reply (word-for-word identical text) back to back, even though new inputs were given.
4. It starts to generate more of a dialogue or screenplay script instead of a normal conversation.
Does anyone have any solutions for these? | 2023-11-14T06:10:40 | https://www.reddit.com/r/LocalLLaMA/comments/17uvy5u/the_problem_with_llms_for_chat_or_roleplay/ | tammmu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uvy5u | false | null | t3_17uvy5u | /r/LocalLLaMA/comments/17uvy5u/the_problem_with_llms_for_chat_or_roleplay/ | false | false | self | 13 | null |
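Not a complete cure, but problems 1 and 3 are often mitigated at the sampling layer: stop strings cut generation off before the model starts writing the user's lines, and a repetition penalty discourages verbatim repeats. A sketch with llama-cpp-python, where `llm` is a loaded model and `chat_history` a formatted transcript (both placeholders):

```python
# Sampler-side mitigations: stop strings against user impersonation,
# repeat_penalty against verbatim repeats. `llm` and `chat_history` are placeholders.
out = llm.create_completion(
    prompt=chat_history + "\nAssistant:",
    stop=["\nUser:", "\nYou:"],   # cut off before the model speaks as the user
    repeat_penalty=1.15,          # discourage emitting the previous reply again
    temperature=0.8,
    max_tokens=300,
)
print(out["choices"][0]["text"])
```

Problems 2 and 4 are usually prompt-format issues: using the exact instruction template the model was finetuned on tends to matter more than the sampler settings.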
New Model: openchat 3.5 with 16k context | 76 | openchat 3.5 16k | 2023-11-14T06:01:42 | https://huggingface.co/NurtureAI/openchat_3.5-16k | perlthoughts | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17uvsyz | false | null | t3_17uvsyz | /r/LocalLLaMA/comments/17uvsyz/new_model_openchat_35_with_16k_context/ | false | false | 76 | {'enabled': False, 'images': [{'id': '3GrJa-4QHOVMeo0ymHWvYETXICrcy--h0UyXeY1yF5k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?width=108&crop=smart&auto=webp&s=a317c26cc71f016006e72178c1f48e4f9b902a23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?width=216&crop=smart&auto=webp&s=64d082a2f3bda76fa482b457777aaa12d79d7593', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?width=320&crop=smart&auto=webp&s=f701304b17915e028abbf04eb70964836262fa03', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?width=640&crop=smart&auto=webp&s=3ba884834fc0b045d13a10138813e4a85c88be77', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?width=960&crop=smart&auto=webp&s=9d2775dfc564f790525444711a9184c73baf832b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?width=1080&crop=smart&auto=webp&s=2633e0bed722fdfd08067d89b95375dbcffcfc3c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7FtTF9ASZqmRtpGLYpzCbooD01POHKRUbe9npSGLwmw.jpg?auto=webp&s=7ef379f18b6b9e2cd73b244e89ef53bd0dc164bc', 'width': 1200}, 'variants': {}}]} | |
Looking for a model better than MythoMax for Chat/RP | 9 | Somehow I keep coming back to MythoMax. I dunno if I'm prompting newer models wrong or what, but in the 13B space MythoMax just keeps giving me the best results.
Anyone have something else they like and can recommend? Maybe something with a longer context? I feel like I must be screwing something up, which would explain why newer models aren't performing as well for me, but I also kind of want a head nod confirming whether that's the case and there's better stuff out there. | 2023-11-14T04:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/17uuf9k/look_for_a_model_better_than_mythomax_for_chatrp/ | Maxumilian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uuf9k | false | null | t3_17uuf9k | /r/LocalLLaMA/comments/17uuf9k/look_for_a_model_better_than_mythomax_for_chatrp/ | false | false | self | 9 | null |
Questions on Attention Sinks and Their Usage in LLM Models | 12 | 2023-11-14T04:04:12 | Holiday_Fly_590 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17utu4n | false | null | t3_17utu4n | /r/LocalLLaMA/comments/17utu4n/questions_on_attention_sinks_and_their_usage_in/ | false | false | 12 | {'enabled': True, 'images': [{'id': 'r_hkCSq0kgjwKgOUvzN_Evumj40Mz5b9ZG7JbFaXlvw', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?width=108&crop=smart&auto=webp&s=a75b450e47e45bd8f84ccdfa2cc07f5ce1a9641a', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?width=216&crop=smart&auto=webp&s=2155a331c5b3c4db7635781590fff73ee6ecff0c', 'width': 216}, {'height': 102, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?width=320&crop=smart&auto=webp&s=de8422ee70966e3cc1030c8ed3576c4cd57eef1c', 'width': 320}, {'height': 204, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?width=640&crop=smart&auto=webp&s=3c55cf6c36501b76d4b71504d1939d1f8dffa481', 'width': 640}, {'height': 306, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?width=960&crop=smart&auto=webp&s=48120b59c8888ab913b9df54d3786f771b643b24', 'width': 960}, {'height': 344, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?width=1080&crop=smart&auto=webp&s=8637c99721368764e31e2fbff2c65d37f132826a', 'width': 1080}], 'source': {'height': 511, 'url': 'https://preview.redd.it/31wpviqfm80c1.png?auto=webp&s=75b93bd5d37c032660bfd6664852a5011aa80dcf', 'width': 1603}, 'variants': {}}]} | |||
Nouse-Capybara-34B 200K | 60 | 2023-11-14T02:58:45 | https://huggingface.co/NousResearch/Nous-Capybara-34B | mcmoose1900 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17uskx7 | false | null | t3_17uskx7 | /r/LocalLLaMA/comments/17uskx7/nousecapybara34b_200k/ | false | false | 60 | {'enabled': False, 'images': [{'id': '0tpi4d_u9zQEoXqeKoxOgasRr584UbtKrjwa4qPqCIs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?width=108&crop=smart&auto=webp&s=4abb9829756f5940698a2dbc6243d25e3f1ee93b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?width=216&crop=smart&auto=webp&s=a3a3350d9c338870b61349639a1dee723204b8ef', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?width=320&crop=smart&auto=webp&s=c0b58878d305d4a8421c8beab4d7bea78f37d76a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?width=640&crop=smart&auto=webp&s=edec5b18c82e558eb41645d4d17c30549306a598', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?width=960&crop=smart&auto=webp&s=2db31e79162c072a56496aa46b6709c856b86df0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?width=1080&crop=smart&auto=webp&s=9795e8889e6300e505681cba050457fae46eed63', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Rh50eIAQs7XtDKvx0IauQvvFNJn0hBZbseja1qcq6iY.jpg?auto=webp&s=32579e301ce6147a555779411245f7d8e24d439f', 'width': 1200}, 'variants': {}}]} | ||
Good and fast model around ~1B to run on web? | 11 | I've been playing with a lot of models around 7B, but I'm now prototyping something that I think would be fine with a 1B model. Phi-1.5 is the only one I've seen at this size, though, and I haven't seen a way to run it efficiently so far; llama.cpp still has not implemented it, for instance.
Anyone have an idea of what to use? | 2023-11-14T02:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/17usjxg/good_and_fast_model_around_1b_to_run_on_web/ | palpapeen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17usjxg | false | null | t3_17usjxg | /r/LocalLLaMA/comments/17usjxg/good_and_fast_model_around_1b_to_run_on_web/ | false | false | self | 11 | null |
This is almost certainly a stupid question (bear with me, see comments for nuance), but is there an LLM that can be set up to ask clarifying questions about its knowledge base, and then use the answers to update its knowledge base? | 20 | I know that just sounds like RLHF, but I haven't seen a model in a situation wherein said model was competent and inquisitive, and able to update its own knowledge base.
I don't know, if I could just have a mildly competent local LLM that can be set to inquire, and then use my responses to immediately update what it permanently knows, that'd be insanely valuable.
Is that just RLHF and I'm too stupid to realize it, or is it more complicated than that? | 2023-11-14T02:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/17us0dc/this_is_almost_certainly_a_stupid_question_bear/ | LeftHandedToe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17us0dc | false | null | t3_17us0dc | /r/LocalLLaMA/comments/17us0dc/this_is_almost_certainly_a_stupid_question_bear/ | false | false | self | 20 | null |
Looking for question answering bot that understands complex instructions | 1 | [removed] | 2023-11-14T02:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/17urxiz/looking_for_question_answering_bot_that/ | robotrage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17urxiz | false | null | t3_17urxiz | /r/LocalLLaMA/comments/17urxiz/looking_for_question_answering_bot_that/ | false | false | self | 1 | null |
This is almost certainly a stupid question, but is there an LLM that is set up to ask clarifying questions about its knowledge base, and then use the answers to update its knowledge base? | 1 | I know that just sounds like human reinforcement learning, but I haven't seen a model in a situation wherein said model was competent and inquisitive, and would be able to update its own knowledge base.
I don't know, if I could just have a mildly competent local LLM that can be set to inquire, and then it uses my responses to immediately update what it permanently knows, that'd be insanely valuable.
Is that just reinforced learning and I'm too stupid to realize it, or is it more complicated than that? | 2023-11-14T02:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/17urptd/this_is_almost_certainly_a_stupid_question_but_is/ | LeftHandedToe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17urptd | false | null | t3_17urptd | /r/LocalLLaMA/comments/17urptd/this_is_almost_certainly_a_stupid_question_but_is/ | false | false | default | 1 | null |
Which MythoMax to download? | 1 | Big noob here. I want to download the [MythoMax](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF/tree/main) model, but there are a lot to choose from. I have a 10GB RTX 3080. How do I choose the right one? What do all these letters mean? | 2023-11-14T02:12:20 | https://www.reddit.com/r/LocalLLaMA/comments/17urnuv/which_mythomax_to_download/ | Tim_Buckrue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17urnuv | false | null | t3_17urnuv | /r/LocalLLaMA/comments/17urnuv/which_mythomax_to_download/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uV7BrwqXgi67f_4yHedNZOfEwV05mHlAm3DtpQ3aYg4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?width=108&crop=smart&auto=webp&s=148eb1458474b2cb5d3117c31dce7693faca257b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?width=216&crop=smart&auto=webp&s=08125df4f76f96428e7899b1e9b51e0fde9ce3bf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?width=320&crop=smart&auto=webp&s=6158d7eb540475b24c846d350be5332b794eebe8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?width=640&crop=smart&auto=webp&s=d3a7009db2d0a66599210bc7214b183b471c1075', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?width=960&crop=smart&auto=webp&s=a2553c25a1e32d5a0504aa9cc46b99f90a41d2dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?width=1080&crop=smart&auto=webp&s=49edae059ffe74061d1002855784b1389e58c64d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PS4Lrk1pCD4ZSTrHoMbuA1h_qQnpRlZ894eNAJV0EDw.jpg?auto=webp&s=083302742a02acc1c9d9d7db6e1f1f4d19569267', 'width': 1200}, 'variants': {}}]} |
Professor PyGuy, my Python tutor | 1 | [removed] | 2023-11-13T23:56:59 | Future_Might_8194 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17uovmk | false | null | t3_17uovmk | /r/LocalLLaMA/comments/17uovmk/professor_pyguy_my_python_tutor/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '2HDUkBCgeaedbDwArL2nd9Rhby68iHwRRHKzkrYGaLo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/xjbi0p3ee70c1.jpeg?width=108&crop=smart&auto=webp&s=9e1f3ddb423fdfca190721652e3917237a1d75e3', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/xjbi0p3ee70c1.jpeg?width=216&crop=smart&auto=webp&s=05fd36f52ec41c3976a59b359d2616b347876139', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/xjbi0p3ee70c1.jpeg?width=320&crop=smart&auto=webp&s=226701d4484fa515786bebe2fa54898d90accb3c', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/xjbi0p3ee70c1.jpeg?width=640&crop=smart&auto=webp&s=c0de2522fede0981e5b9e992b067eb71817e1dfa', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/xjbi0p3ee70c1.jpeg?width=960&crop=smart&auto=webp&s=713585ae5de0a5a439d8c9f394f6487e378bc17f', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/xjbi0p3ee70c1.jpeg?auto=webp&s=6f46ac5c121b2bc8758002a1c36caa0480b1f382', 'width': 1024}, 'variants': {}}]} | |
Has anyone tried the OpenGPTs by Langchain? Is there any curated list of good opengpts? | 3 | For those who're not aware Langchain released [OpenGPTs](https://github.com/langchain-ai/opengpts) in response to GPTs by OpenAI. I'm curious about how this works with different OSS models. So far with GPT 3.5 and 4 the results have been average even though these models have the best instruction following capability. Is anyone tracking different opengpts being made? | 2023-11-13T23:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/17uobk8/has_anyone_tried_the_opengpts_by_langchain_is/ | obvithrowaway34434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uobk8 | false | null | t3_17uobk8 | /r/LocalLLaMA/comments/17uobk8/has_anyone_tried_the_opengpts_by_langchain_is/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'PTG1969FF9nNW8VTbLVXA5I4_JIyn6r2iD629POYw4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?width=108&crop=smart&auto=webp&s=d56f9baae259db68edad652184a6a4afaaae0ba0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?width=216&crop=smart&auto=webp&s=74eeae5773d3035356e85d57421f652fa223b313', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?width=320&crop=smart&auto=webp&s=22fbfd6dc6712f5d1d6261f86da8abc8fce5cb75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?width=640&crop=smart&auto=webp&s=2a2617314c329d18845f2c35e56521d6fff210fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?width=960&crop=smart&auto=webp&s=1553cdf79a468cf8755e0f980820e4ff1ab850ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?width=1080&crop=smart&auto=webp&s=85c6dc8996b9c71198001822265e69d723477ee7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2JKfHOdZh2WWzjxTzGANqp0dtnkOvvakSLP3ux8U3x4.jpg?auto=webp&s=0c6a03c6593cf90c4243b1a49e65baf0fcd27f9e', 'width': 1200}, 'variants': {}}]} |
Open Source LLMs and Langchain tools | 6 | Has anyone been able to get ANY open-source LLM to use LangChain tools? I have not had success with any of the models I have tried, including Llama 2, Mistral, and Yi-34B. I usually get “Cannot parse LLM output”-type errors. In some cases the model successfully uses the tool but doesn't return the final answer correctly, i.e. the model invokes the tool and I can see the answer as an observation, but it fails to relay that answer in its final response.
In my application the answer from the tool will have a specific format that should make it easy to extract by looking at the observations and extracting using regex (assuming I can access the observations).
But I’m wondering if anyone has had any success with ANY open source LLM in using Langchain tools where the model can correctly use the tool and return the final answer without erroring? | 2023-11-13T22:41:06 | https://www.reddit.com/r/LocalLLaMA/comments/17un6mu/open_source_llms_and_langchain_tools/ | tail-recursion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17un6mu | false | null | t3_17un6mu | /r/LocalLLaMA/comments/17un6mu/open_source_llms_and_langchain_tools/ | false | false | self | 6 | null |
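A sketch of two AgentExecutor options that map onto both symptoms, assuming `tools` and `llm` are already constructed: `handle_parsing_errors=True` feeds parse failures back to the model instead of raising, and `return_intermediate_steps=True` exposes the (action, observation) pairs so the observation can be regexed directly:

```python
# Sketch: tolerate parse failures and read observations directly.
# `tools` and `llm` are assumed to be constructed elsewhere.
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools, llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,      # retry instead of "Cannot parse LLM output"
    return_intermediate_steps=True,  # expose (AgentAction, observation) pairs
)
result = agent({"input": "What is 37 * 41?"})
for action, observation in result["intermediate_steps"]:
    print(action.tool, "->", observation)  # regex the observation here if needed
```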
LMStudio doesn’t load gguf models | 1 | Hi,
I am a longtime lurker here. I am running an M1 laptop (in the process of building an NVIDIA PC) and switched from oobabooga, which worked fine, to LM Studio. At some point several weeks ago LM Studio stopped loading any GGUF model at all (GGML still works fine), and I can't find anything on this issue. Does anyone know what the issue might be? Thanks in advance. | 2023-11-13T22:37:37 | https://www.reddit.com/r/LocalLLaMA/comments/17un3un/lmstudio_doesnt_load_gguf_models/ | modalrealisms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17un3un | false | null | t3_17un3un | /r/LocalLLaMA/comments/17un3un/lmstudio_doesnt_load_gguf_models/ | false | false | self | 1 | null |
I'm having... Trouble. | 83 | 2023-11-13T22:00:53 | gmork_13 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17um95u | false | null | t3_17um95u | /r/LocalLLaMA/comments/17um95u/im_having_trouble/ | false | false | 83 | {'enabled': True, 'images': [{'id': '8IAFZkmOchaNuVZnNMGQDZlEU5FeW4-UJccqUKkpOpE', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?width=108&crop=smart&auto=webp&s=bc8f9ac17d9db2ef5c2d62374ec8289e3b33eca1', 'width': 108}, {'height': 332, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?width=216&crop=smart&auto=webp&s=e6e9a96f1318701ba2789ce5b71f9623654d0690', 'width': 216}, {'height': 492, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?width=320&crop=smart&auto=webp&s=bde2e4f316412ed17d5d7be879144593825f7863', 'width': 320}, {'height': 984, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?width=640&crop=smart&auto=webp&s=4d6f55bfb62eb97cf9d255fecdc59db54f71a868', 'width': 640}, {'height': 1476, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?width=960&crop=smart&auto=webp&s=7214089905876eb1d62c02e793594ce9c710acca', 'width': 960}, {'height': 1660, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?width=1080&crop=smart&auto=webp&s=8f3b80402649733b3085a02daf7c3b5718759df9', 'width': 1080}], 'source': {'height': 2431, 'url': 'https://preview.redd.it/rs6gyvl9s60c1.png?auto=webp&s=2f42a1f51541b40868f297823f2368f500c98c0e', 'width': 1581}, 'variants': {}}]} | |||
Any similar no code, easy to use tools/methods for knowledge embedding / RAG that work with local LLMs | 5 | Hi
I am super new to this and have been reading non-stop for the past few days, researching this field for my company.
So to improve an LLM there seem to be two methods: fine-tuning the model itself (a costly, slow affair), or augmenting the model with some data. I think this is called RAG?
In GPT's API, in order to augment the model with data via the retrieval API, all we have to do is upload a file, and it seems to take care of the technical stuff. I could be wrong. Is there a similar way in the world of local LLMs?
[https://platform.openai.com/docs/assistants/tools/knowledge-retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval)
> Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries.
Thank you
| 2023-11-13T21:47:15 | https://www.reddit.com/r/LocalLLaMA/comments/17ulx5v/any_similar_no_code_easy_to_use_toolsmethods_for/ | gyaani_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ulx5v | false | null | t3_17ulx5v | /r/LocalLLaMA/comments/17ulx5v/any_similar_no_code_easy_to_use_toolsmethods_for/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'XjYeH9UKcgyOWN9BfsAvdRtig11MoY4NbGzf8T8tohc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=108&crop=smart&auto=webp&s=3f9105ef3e7b58900470546335a78fe533005bb4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=216&crop=smart&auto=webp&s=e310d9b29c06d6b5ce641a7f1506dd263a8729f6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=320&crop=smart&auto=webp&s=69281c1b38ced0e01cea5a12968d4b48674c98a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=640&crop=smart&auto=webp&s=9396991084b19bf32b78fe6cd3e4cde78b575245', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=960&crop=smart&auto=webp&s=91c96b3a35c2ea5ffe4533d22d45928392c76bc9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=1080&crop=smart&auto=webp&s=57787310c9a5f80fe35172195dbc646aa4473b50', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?auto=webp&s=5552c2aeef00909b24d8e8eed641cda7ae28bb4b', 'width': 1200}, 'variants': {}}]} |
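For reference, here is a hedged sketch of what that "upload a file" flow looks like when built locally (LangChain circa late 2023; the file name is a placeholder). The retrieved chunks are what get stuffed into the local LLM's prompt:

```python
# The same chunk -> embed -> index -> vector-search pipeline, with local parts only.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

docs = TextLoader("company_handbook.txt").load()          # 1. "upload" a file
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)                   # 2. chunk the documents
db = Chroma.from_documents(chunks, HuggingFaceEmbeddings())  # 3. embed + index
hits = db.similarity_search("What is our refund policy?", k=3)  # 4. vector search
context = "\n".join(h.page_content for h in hits)         # 5. paste into the prompt
```

No-code and low-code front-ends for exactly this pipeline exist (e.g. text-generation-webui's superbooga extension or PrivateGPT), but they are all doing some variant of the steps above.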
Looking for the best large context model for creative story assistance I can run on my Mac M2 Ultra 128gb machine | 1 | I've been away from the scene for more than a month! First, are there any 180B models that I should even consider, or should I stick with 70B Llama 2 models? Last I tried, which admittedly was 6 weeks ago or so, 180B was great, but coherency degraded quickly with rope scaling.
Basically, looking for something highly creative, uncensored (not necessarily something that excels at NSFW, but those two things go hand in hand obviously), and with the largest context window possible.
Engine of choice is llama.cpp, FYI. Suggestions? Last I tried the longllama alpaca model (can't remember the name), it was so restrictive in what it would write that it was a non-starter (and no, I'm not writing porn, although it's not politically correct or clean prose by any means either).
Speed is not my priority, and Falcon 180B was always fast enough anyway, just with too small a context to be very useful...
Thanks!
| 2023-11-13T21:45:15 | https://www.reddit.com/r/LocalLLaMA/comments/17ulvfu/looking_for_the_best_large_context_model_for/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ulvfu | false | null | t3_17ulvfu | /r/LocalLLaMA/comments/17ulvfu/looking_for_the_best_large_context_model_for/ | false | false | self | 1 | null |
Best LVLM and LM designed for sound generation | 5 | I'm pretty new here, so apologies if I'm coming off green with the request ahead of time.
I'm looking to see what the best options are for running an LVLM (any LLM with visual recognition capabilities, like supplying it an image, etc.) locally. Bonus points for anything that can also be helpful with video/GIF generation.
And are there any LMs (if any at all) that work with sound/voice recognition and can be run locally? | 2023-11-13T21:38:24 | https://www.reddit.com/r/LocalLLaMA/comments/17ulp9g/best_lvlm_and_lm_designed_for_sound_generation/ | platapus100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ulp9g | false | null | t3_17ulp9g | /r/LocalLLaMA/comments/17ulp9g/best_lvlm_and_lm_designed_for_sound_generation/ | false | false | self | 5 | null |
Has anyone used Goliath 120B in LMStudio | 3 | Has anyone been able to get this model to run in LM Studio? My machine is a Windows 10 box with 120GB RAM and a 3070 Ti, and this model is the only one that I cannot get to load into memory or from storage. | 2023-11-13T21:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/17ukwqt/has_anyone_used_goliath_120b_in_lmstudio/ | Danny_Davitoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ukwqt | false | null | t3_17ukwqt | /r/LocalLLaMA/comments/17ukwqt/has_anyone_used_goliath_120b_in_lmstudio/ | false | false | self | 3 | null |
Where and how to run Goliath 120b GGUF with good performance? | 7 | I am talking about this particular model:
[https://huggingface.co/TheBloke/goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF)
I specifically use: goliath-120b.Q4_K_M.gguf
I can run it on [runpod.io](https://runpod.io) on this A100 instance with "humane" speed, but it is way too slow for creating long-form text.
https://preview.redd.it/fz28iycv860c1.png?width=350&format=png&auto=webp&s=cd034b6fb6fe80f209f5e6d5278206fd714a1b10
These are my settings in text-generation-webui:
https://preview.redd.it/vw53pc33960c1.png?width=833&format=png&auto=webp&s=0fccbeac0994447cf7b7462f65d79f2e8f8f1969
Any advice? Thanks | 2023-11-13T20:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/17ujj3o/where_and_how_to_run_goliath_120b_gguf_with_good/ | abandonedexplorer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ujj3o | false | null | t3_17ujj3o | /r/LocalLLaMA/comments/17ujj3o/where_and_how_to_run_goliath_120b_gguf_with_good/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'abGJhOPdKe9K-FJAsL75vgzyyKHeOk4s5RPNAWIPThs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=108&crop=smart&auto=webp&s=77f027736e14a1fb457ca1833271a495fd5ef0e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=216&crop=smart&auto=webp&s=b6a668f98170e39ab5ef165947408f4e2fa0f580', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=320&crop=smart&auto=webp&s=26b13c23beccf535a6fd6cdaa3baa6dbb44f505c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=640&crop=smart&auto=webp&s=6071a70dbcceba14cf3c9e834972541a77116b4e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=960&crop=smart&auto=webp&s=8b56c028becca63ef5ddc8dcfdbbe900b038baab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=1080&crop=smart&auto=webp&s=4fe5d280de792b0a2a9a9a50b0e97134e1f8e5d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?auto=webp&s=5dc23085106d627d3e7e62c06f46b3950d1ba2f9', 'width': 1200}, 'variants': {}}]} | |
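One thing worth checking regardless of host: partial GPU offload leaves layers on the CPU, and that CPU portion dominates generation time for a model this size. A hedged llama-cpp-python sketch; note that Q4_K_M Goliath weights alone are roughly 70GB, so full offload needs an 80GB card (or a multi-GPU split) once the KV cache is added:

```python
# Sketch: ensure every layer is offloaded; -1 means "all layers" in llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="goliath-120b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # all layers on GPU; partial offload is the usual slowdown
    n_ctx=4096,
    n_batch=512,      # larger batches speed up prompt processing
)
```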
We are Higgsfield AI. We have a large GPU cluster and want to finetune your dataset. | 87 | Hey LocalLLaMA. It's Higgsfield AI, and we train huge foundational models.
We have a massive GPU cluster and developed our own infrastructure to manage the cluster and train massive models. We constantly lurked in this subreddit and learned a lot from this passionate community. Right now, we have spare GPUs, and we are excited to give back to this incredible community.
We built a simple web app where you can upload your datasets to finetune on. [https://higgsfield.ai/](https://higgsfield.ai/)
Here's how it works:
1. You upload the dataset in the preconfigured format to HuggingFace [1].
2. Choose your LLM (e.g. LLaMa 70B, Mistral 7B)
3. Place your submission into the queue
4. Wait for it to get trained.
5. Then you get your trained model there on HuggingFace.
[1]: https://github.com/higgsfield-ai/higgsfield/tree/main/tutorials
dolphin-2.2-yi-34b released | 120 | Eric Hartford, the author of dolphin models, released dolphin-2.2-yi-34b.
This is one of the earliest community finetunes of Yi-34B.
Yi-34B was developed by a Chinese company, and they claim SOTA performance on par with GPT-3.5.
HF:
https://huggingface.co/ehartford/dolphin-2_2-yi-34b
Announcement:
https://x.com/erhartford/status/1723940171991663088?s=20 | 2023-11-13T19:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/17uj4r2/dolphin22yi34b_released/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uj4r2 | false | null | t3_17uj4r2 | /r/LocalLLaMA/comments/17uj4r2/dolphin22yi34b_released/ | false | false | self | 120 | {'enabled': False, 'images': [{'id': 'ROaJwHgkaL9znhISR9evgo-cM2LDEWm9L10VD1G5s4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=108&crop=smart&auto=webp&s=7bdedc2899c1bc49f88ae203c3ee63d8c8f72c62', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=216&crop=smart&auto=webp&s=07c3f775eb90910caf47d6e4b679c8ef6f948881', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=320&crop=smart&auto=webp&s=3b07455a519ba83cec60c1d383683fea37df3aa0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=640&crop=smart&auto=webp&s=012a24ef7c436a630786e98b2c82777e164f57f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=960&crop=smart&auto=webp&s=14ac5200e079458365c0814e2f40187e77c37ee8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=1080&crop=smart&auto=webp&s=c6255a339c7ea38937bb8b9d708c220e6e65f1a0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?auto=webp&s=c8f0cb4cd99c4a508680b24d8995927c936c7bf7', 'width': 1200}, 'variants': {}}]} |
dolphin-2.2-yi-34b released | 1 | Eric Hartford, the author of dolphin models, released dolphin-2.2-yi-34b.
This is one of the earliest community finetunes of Yi-34B.
Yi-34B was developed by a Chinese company, and they claim SOTA performance on par with GPT-3.5.
HF:
https://huggingface.co/ehartford/dolphin-2_2-yi-34b
Announcement:
https://x.com/erhartford/status/1723940171991663088?s=20 | 2023-11-13T19:51:13 | https://www.reddit.com/r/LocalLLaMA/comments/17uj4du/dolphin22yi34b_released/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uj4du | false | null | t3_17uj4du | /r/LocalLLaMA/comments/17uj4du/dolphin22yi34b_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ROaJwHgkaL9znhISR9evgo-cM2LDEWm9L10VD1G5s4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=108&crop=smart&auto=webp&s=7bdedc2899c1bc49f88ae203c3ee63d8c8f72c62', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=216&crop=smart&auto=webp&s=07c3f775eb90910caf47d6e4b679c8ef6f948881', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=320&crop=smart&auto=webp&s=3b07455a519ba83cec60c1d383683fea37df3aa0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=640&crop=smart&auto=webp&s=012a24ef7c436a630786e98b2c82777e164f57f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=960&crop=smart&auto=webp&s=14ac5200e079458365c0814e2f40187e77c37ee8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?width=1080&crop=smart&auto=webp&s=c6255a339c7ea38937bb8b9d708c220e6e65f1a0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aMtgzD8_6j0D7BeUr7caMINX8HYPoORU2djX8c1uYyw.jpg?auto=webp&s=c8f0cb4cd99c4a508680b24d8995927c936c7bf7', 'width': 1200}, 'variants': {}}]} |
What is the best code generation model aside from gpt-4? | 65 | I'm using and losing lots of money on GPT-4 at the moment; it works great, but for the amount of code I'm generating I'd rather have a self-hosted model. What should I look into? | 2023-11-13T19:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/17ui8fp/what_is_the_best_code_generation_model_aside_from/ | Charuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ui8fp | false | null | t3_17ui8fp | /r/LocalLLaMA/comments/17ui8fp/what_is_the_best_code_generation_model_aside_from/ | false | false | self | 65 | null |
AMD vs Intel | 3 | I'm looking to build a new PC (which I've never done, other than upgrading my current one) for running and fine-tuning 7B models. I know I'll be getting dual 3090 or 4090 GPUs, but I'm not sure which CPU would be best for this.
I'm wanting to take advantage of the bundle deals at my local Micro Center and am down to these two options:
1. AMD Ryzen 9 7900X 4.5GHz 12-core, Asus B650E-F motherboard and 64GB DDR5 for $600
2. Intel i9 14900K 3.2GHz 24-core, MSI Z790 MAG Tomahawk motherboard and 64GB DDR5 for $800
Anyone know which would be best for my use case? I don't mind spending the extra $200, I only want the best option of the two
Thanks in advance | 2023-11-13T19:04:03 | https://www.reddit.com/r/LocalLLaMA/comments/17ui08f/amd_vs_inel/ | _kinad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ui08f | false | null | t3_17ui08f | /r/LocalLLaMA/comments/17ui08f/amd_vs_inel/ | false | false | self | 3 | null |
The Evolution of Serverless GPUs for ML: An In-Depth Llama 2 7Bn Model Analysis | 1 | [removed] | 2023-11-13T19:00:10 | https://www.reddit.com/r/LocalLLaMA/comments/17uhwmq/the_evolution_of_serverless_gpus_for_ml_an/ | Tiny_Cut_8440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uhwmq | false | null | t3_17uhwmq | /r/LocalLLaMA/comments/17uhwmq/the_evolution_of_serverless_gpus_for_ml_an/ | false | false | 1 | null | |
Questions about Horde | 6 | After taking note of Goliath-120b, I suddenly got strangely curious about Horde. Surprisingly, searching for Horde doesn't show many posts, so hopefully someone can answer a few questions:
1) What I understood is that I could host something like 13b or 20b, or SD/SDXL, which I can run just fine and fast, and rack up credits overnight for running 70b or 120b LLMs without queue and fast-ish at any moment later. Right?
2) If so, how long do prompts on those big models take, more or less, when you have credits to skip the queue? Is it usable? (i.e., how many seconds would it show on SillyTavern?)
3) Seeing as I only ever used Oobabooga and SillyTavern, I'm assuming Kobold is more or less a drop-in replacement for Oobabooga, just a backend for the model where everything translates well? If not, what can I expect to lose/gain from Kobold as opposed to Ooba?
4) Is there a "Horde for r\*tards" guide somewhere?
5) What do people get from hosting Goliath-120b for others? Don't get me wrong, I appreciate the deep pocket generosity, but is this like a data gathering operation from their point of view?
Thanks for reading this far. There's a good doggo being very comfy hidden in the following period[.](https://i.imgur.com/cNjdc9L.jpg)
​ | 2023-11-13T18:53:31 | https://www.reddit.com/r/LocalLLaMA/comments/17uhrfd/questions_about_horde/ | Dead_Internet_Theory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uhrfd | false | null | t3_17uhrfd | /r/LocalLLaMA/comments/17uhrfd/questions_about_horde/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'MECFUeygiQYFtnNjSpRwNpS8dojqT_cIhULtLNm4kqk', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?width=108&crop=smart&auto=webp&s=ce48e519deeb7d75147c1ed25768710e33a096dd', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?width=216&crop=smart&auto=webp&s=228f52ff1e071055272bd2f3aff521ac05d5ffba', 'width': 216}, {'height': 223, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?width=320&crop=smart&auto=webp&s=bf79c302bd5e04aaded537d4f43744d562d50386', 'width': 320}, {'height': 447, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?width=640&crop=smart&auto=webp&s=812583125e1fc8f6745b571e8ef40709316634ef', 'width': 640}, {'height': 671, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?width=960&crop=smart&auto=webp&s=c84248bd4552eafc1f49b20595b01354f7894c71', 'width': 960}, {'height': 755, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?width=1080&crop=smart&auto=webp&s=23b97446034937f5d31670351d0e2113f67e275f', 'width': 1080}], 'source': {'height': 942, 'url': 'https://external-preview.redd.it/MwKAmqqfSoWE1XZcN-7UAQPHHbR8Min5k7P7-GZE4ok.jpg?auto=webp&s=e9dde4457aca07567285952008af5119060387e0', 'width': 1347}, 'variants': {}}]} |
Guidance for selecting a function-calling library? | 3 | I am raising the white flag and asking for help. I have a local LLM running, using koboldcpp in OpenAI API emulation mode. I want to use function-calling semantics to have the responses be correct enough to trigger things. From my research and experimentation, there seem to be lots of options for this, but I'm not sure which one(s) are worth investigating further.
I've come across:
* [langchain](https://js.langchain.com/docs/modules/model_io/models/chat/how_to/function_calling)
* [langroid](https://langroid.github.io/langroid/tutorials/non-openai-llms/)
* [localAI](https://localai.io/features/openai-functions/)
* [gorilla](https://github.com/ShishirPatil/gorilla)
* [functionary](https://github.com/MeetKai/functionary/)
* [lmql](https://lmql.ai/)
Does anybody have any experience with implementing any of these tools, or guidance on which one to try first? Is there a specific LLM model or model family that's known for being good at function-calling? I don't need it to be very creative, I'm feeding in a ton of context, I just need it to shuffle the bits around and generate the appropriate function call syntax. | 2023-11-13T18:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/17ugn8i/guidance_for_selecting_a_functioncalling_library/ | handyman5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ugn8i | false | null | t3_17ugn8i | /r/LocalLLaMA/comments/17ugn8i/guidance_for_selecting_a_functioncalling_library/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=108&crop=smart&auto=webp&s=4806821b19a384d8270fee66e851537817cdac4e', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=216&crop=smart&auto=webp&s=0bdf6ca90dcebbc73d6ff30b79f54814b931344d', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=320&crop=smart&auto=webp&s=dd7a799219f465b4f913aa10969c5ee900913404', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=640&crop=smart&auto=webp&s=e1d1617519e0321944016ee242a7999669714f39', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?auto=webp&s=8d662951305a88ba511f842901937fb729991cb9', 'width': 794}, 'variants': {}}]} |
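Before committing to a framework, it may help to see how small the minimal version is: any OpenAI-compatible local server (including koboldcpp's emulation) can do prompt-based function calling by asking for JSON and parsing it. A sketch with the openai>=1.0 client; the URL, port, and `get_weather` function are placeholders:

```python
# Framework-free function calling against an OpenAI-compatible local endpoint.
# The endpoint URL and the get_weather function are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-needed")
system = ('Reply ONLY with JSON of the form {"function": "<name>", "arguments": {...}}. '
          'Available functions: get_weather(city: str).')
resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "system", "content": system},
              {"role": "user", "content": "Is it raining in Oslo?"}],
    temperature=0,  # determinism helps keep the JSON well-formed
)
call = json.loads(resp.choices[0].message.content)
if call["function"] == "get_weather":
    print("would dispatch get_weather with", call["arguments"])
```

The libraries listed above mostly add robustness on top of this pattern: retries on malformed JSON, grammar/schema-constrained decoding, or models finetuned specifically to emit calls.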
NVidia H200 achieves nearly 12,000 tokens/sec on Llama2-13B with TensorRT-LLM | 170 | 2023-11-13T18:00:17 | https://github.com/NVIDIA/TensorRT-LLM/blob/release/0.5.0/docs/source/blogs/H200launch.md | rihard7854 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17ugics | false | null | t3_17ugics | /r/LocalLLaMA/comments/17ugics/nvidia_h200_achieves_nearly_12000_tokenssec_on/ | false | false | 170 | {'enabled': False, 'images': [{'id': 'qRv-EZFMn0EQaTRdx8Z6MWuxwBHgBdSqYvx7MLLI02Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?width=108&crop=smart&auto=webp&s=5d0b2acf264316e41bddf9d7a5b55192623ba38c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?width=216&crop=smart&auto=webp&s=a6fe52218616b18d10494f18430d51f3ff97be62', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?width=320&crop=smart&auto=webp&s=301058d7e86ad893408fae6738cc08c605acba6f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?width=640&crop=smart&auto=webp&s=4efbfb135162c3e95d79111048b24bc57ed30bd3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?width=960&crop=smart&auto=webp&s=15b5013ef6264f4ace5f0b8a0e352eed8e116a37', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?width=1080&crop=smart&auto=webp&s=e8e8bfe05aada13f0f1016c44f22a9e367392c29', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FReE9mYVyQlaH2Q199VrJ3gSxToSufODzJvdOXAOglo.jpg?auto=webp&s=57662c2ffcf6e379614d325e99ffab9677c16c20', 'width': 1200}, 'variants': {}}]} | ||
What GPU configuration should one look for to run a Mistral-7B-like model for basic inference or a RAG LLM app in production? | 1 | Hi guys, I am new to LLMs, and especially to using them locally. I have done basic stuff to learn things like RAG using the LangChain framework/library, on Colab and locally on my CPU machine using quantised models from TheBloke. But now I want to move on to development and production for some of my potential clients. I will have lots of questions during this time, but I will start with learning about GPUs.
What minimum GPU server is required to run a model like Mistral-7B or LLaMA-13B for inference, to build a simple RAG application keeping an 8K context length?
Basically I have no idea what type of GPU someone should look for for different LLM operations, and what one should look for while building such LLM apps in production for a small to midsize company.
A quick Google search landed me on https://www.gpu-mart.com/gpu-dedicated-server, but I have little to no knowledge with which to process this information.
Also, I would appreciate if someone can refer me to an up to date guide on deciding servers configuration (GPU plus other things) for these types of LLM app. | 2023-11-13T17:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/17ughem/what_gpu_configuration_one_should_look_for_a/ | sanjay303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ughem | false | null | t3_17ughem | /r/LocalLLaMA/comments/17ughem/what_gpu_configuration_one_should_look_for_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FgJN8ZzwIdkv1zn7k3I2HVLTO76mBlCMFwkpv7Lzyow', 'resolutions': [{'height': 25, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?width=108&crop=smart&auto=webp&s=5fd48ded8d9d7bf56ee001c623dc75f797d941d8', 'width': 108}, {'height': 51, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?width=216&crop=smart&auto=webp&s=53a92ebba5f542da0eb2e3d02639b1ad33f98f70', 'width': 216}, {'height': 76, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?width=320&crop=smart&auto=webp&s=184b3421ea501dad692e116f73519178afd807a2', 'width': 320}, {'height': 153, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?width=640&crop=smart&auto=webp&s=0895fdf51614f71480acb9b8b73bc5673c79ff7f', 'width': 640}, {'height': 230, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?width=960&crop=smart&auto=webp&s=ff2cfaf7b59a9e6b147e02114eac790be26c7c13', 'width': 960}, {'height': 258, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?width=1080&crop=smart&auto=webp&s=b9e836bc05ebeb7d20ff3f24b9525ce4f33ccb5a', 'width': 1080}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/Nou6mbc0w44Hmc8zEq5r46b5oNrkfW7pM6fKSAWI_oY.jpg?auto=webp&s=d043dd483f0e4856f122d9129451da29826752c5', 'width': 1919}, 'variants': {}}]} |
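As a rough sizing sketch (not a vendor recommendation): a 4-bit Mistral-7B at 8K context is dominated by the quantised weights plus the KV cache. Using the dimensions from Mistral's published config (32 layers, 8 KV heads, head dim 128) and an fp16 cache:

```python
# Back-of-envelope VRAM estimate for 4-bit Mistral-7B at 8K context.
params = 7.24e9
weights_gb = params * 0.5 / 1e9  # ~4 bits/param -> ~3.6 GB
layers, kv_heads, head_dim, ctx = 32, 8, 128, 8192
kv_gb = 2 * layers * kv_heads * head_dim * ctx * 2 / 1e9  # K and V, fp16 -> ~1.1 GB
print(f"weights ~{weights_gb:.1f} GB, KV cache ~{kv_gb:.1f} GB")
# plus runtime/activation overhead: a single 12-16 GB card is comfortable
```

Concurrency is the multiplier to watch in production: each simultaneous request needs its own KV cache, so serving many users at once is what pushes you toward 24GB+ cards.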
[Beginner] looking for feedback on my approach | 1 | Hello all,
I am a software engineer and I work on my side project.
I want to launch a website that detects fake reviews. I know that some apps already exist.
So I did some research, found a dataset, and trained DistilBERT on it, which gives pretty good metrics.
But I was thinking that fine-tuning an LLM like Llama 2 on my dataset might give a better model? What do you think? If you have any better idea, I am open for discussion.
I also found TheBloke/llama-2-70b-Guanaco-QLoRA-fp16
| 2023-11-13T17:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/17ufmeq/begineer_look_for_feedbacks_on_my_apporach/ | elpad92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ufmeq | false | null | t3_17ufmeq | /r/LocalLLaMA/comments/17ufmeq/begineer_look_for_feedbacks_on_my_apporach/ | false | false | self | 1 | null |
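For context, a hedged sketch of the DistilBERT baseline described above (`my_fake_reviews` is a placeholder dataset name). For binary text classification, an encoder model this size is often competitive with a much larger decoder-only LLM and far cheaper to serve:

```python
# DistilBERT fake-review classifier baseline; the dataset name is a placeholder
# for a dataset with "text" and "label" (0 = real, 1 = fake) columns.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

ds = load_dataset("my_fake_reviews")
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
ds = ds.map(lambda b: tok(b["text"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
trainer = Trainer(model=model,
                  args=TrainingArguments("out", num_train_epochs=3),
                  train_dataset=ds["train"], eval_dataset=ds["test"],
                  tokenizer=tok)  # passing the tokenizer enables dynamic padding
trainer.train()
```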
Note to self, don't adjust temperature too much | 33 | 2023-11-13T17:13:13 | jordan_jpg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17uffmv | false | null | t3_17uffmv | /r/LocalLLaMA/comments/17uffmv/note_to_self_dont_adjust_temperature_too_much/ | false | false | 33 | {'enabled': True, 'images': [{'id': 'tjL7LiScMxvnsW6tOornIhzl0DwcBO3hRZ6v0f6idNs', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?width=108&crop=smart&auto=webp&s=e9139251506d5db8affa25955717192bb3a9defd', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?width=216&crop=smart&auto=webp&s=b634d098ce76d90694a2ae91e3dde53f51dc4b1a', 'width': 216}, {'height': 310, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?width=320&crop=smart&auto=webp&s=adf1fa75140a494e43572adcbeb6db22f6e840dd', 'width': 320}, {'height': 621, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?width=640&crop=smart&auto=webp&s=c7c3f2f7a1fa1f954bb90f9d942fbd0eebd3e0c8', 'width': 640}, {'height': 932, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?width=960&crop=smart&auto=webp&s=a1396f6221e6ddb8203d6f79667e038ac552d66c', 'width': 960}, {'height': 1049, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?width=1080&crop=smart&auto=webp&s=ffec02ddc8576d8219520f4dd5d4bd0307994789', 'width': 1080}], 'source': {'height': 1049, 'url': 'https://preview.redd.it/3zirzaqce50c1.jpg?auto=webp&s=ad17bfb8e2d53f00a491e2a748a886010d3b47bf', 'width': 1080}, 'variants': {}}]} | |||
How to convert a PyTorch project to PyTorch directML to make it work on AMD cards? | 3 | So I posted earlier about how to convert CUDA projects to ROCm for windows and Hipify was a tool for that but unfortunately Hipify doesn’t convert projects written in libraries like PyTorch so I want to convert sadtalker which is a PyTorch project to PyTorch directML, which is a Microsoft run project that will let it work with ROCm(unfortunately PyTorch for rocm is windows only). How can I do that or is there an easier way to do it with another library or something? | 2023-11-13T17:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/17ufcgn/how_to_convert_a_pytorch_project_to_pytorch/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ufcgn | false | null | t3_17ufcgn | /r/LocalLLaMA/comments/17ufcgn/how_to_convert_a_pytorch_project_to_pytorch/ | false | false | self | 3 | null |
Laptop adequate enough to test cloud trained models (MacBook?) | 1 | Hi all
I'm looking to pickup a laptop that I can use to test inference on models I have trained in the cloud.
I'd like to purchase something from the 2nd hand market, specifically a MacBook.
There is obviously a lot of options:
M1 / M2 and the new M3 lineup.
Question is, would an M1 Max based macbook with 64GB RAM be adequate enough, or should I be looking at 96GB+?
The difference in price between 64GB upwards, is pretty huge.
I'm still learning all the options in respect of model training / fine tuning and it would be nice to have the capability to test it locally on a MacBook and even leave it running / accessible externally for further testing.
I know people are doing this now here, but what MacBook are you using for this?
Obviously there is also the Mac Studio, but I want to avoid spending thousands at this point.
Thanks
​ | 2023-11-13T16:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/17uekct/laptop_adequate_enough_to_test_cloud_trained/ | Kreator333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uekct | false | null | t3_17uekct | /r/LocalLLaMA/comments/17uekct/laptop_adequate_enough_to_test_cloud_trained/ | false | false | self | 1 | null |
Extract Tables from PDFs | 6 | I am working on a project where I have to extract tables from PDFs (usually financial reports, which contain lots of tables, both simple ones and tables with merged cells, as well as graphs).
The following libraries have been tried without great results:
Nougat, PyMuPDF (fitz), PyPDF2, pdfplumber, PDFMiner, Camelot, Tabula, pdfquery
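For reference, this is roughly what my pdfplumber attempt looked like (a minimal sketch; the file name and table settings are illustrative). It handles simple ruled tables but falls apart on merged cells:

```python
import pdfplumber

with pdfplumber.open("financial_report.pdf") as pdf:
    for page_num, page in enumerate(pdf.pages, start=1):
        # "lines" strategies only work when the table has ruled borders
        tables = page.extract_tables({"vertical_strategy": "lines",
                                      "horizontal_strategy": "lines"})
        for table in tables:
            print(f"--- table on page {page_num} ---")
            for row in table:
                print(row)  # row is a list of cell strings (None for empty cells)
```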
What other OCR tools, LLMs, or approaches do you recommend to proceed further? Thanks in advance!
| 2023-11-13T16:21:06 | https://www.reddit.com/r/LocalLLaMA/comments/17ue9in/extract_tables_from_pdfs/ | G_S_7_wiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ue9in | false | null | t3_17ue9in | /r/LocalLLaMA/comments/17ue9in/extract_tables_from_pdfs/ | false | false | self | 6 | null |
Locally hosted LLM sometimes gives unrelated responses. | 1 | I'm trying to run zephyr-7b on my local machine with an RX580 8G using Text generation web UI. It works for the most part, but sometimes it starts giving unrelated responses, after which I have to restart the app! Sometimes it even prints outright gibberish.
I'm running `zephyr-7b-beta.Q4_K_M.gguf` with the following options:
```
n-gpu-layers: > 35
n_ctx: 8000
```
And parameters:
```
max_new_tokens: 2000
top_p: 0.95
top_k: 40
Instruction Template: ChatML
```
But if I run the exact same setup on a cloud GPU (vast.ai), it runs perfectly. What am I doing wrong? | 2023-11-13T16:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/17ue1p2/local_hosted_llm_sometimes_gives_unrelated/ | ntn8888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ue1p2 | false | null | t3_17ue1p2 | /r/LocalLLaMA/comments/17ue1p2/local_hosted_llm_sometimes_gives_unrelated/ | false | false | self | 1 | null |
Self-supervised fine-tuning | 2 | Hello all,
my question is about fine-tuning with self-supervision. I've seen a lot of tutorials on fine-tuning with other methods, aiming to improve answering capability, etc. However, my goal is slightly different: I would like to fine-tune LLaMA to replicate, for instance, Hemingway's style. In this case, it seems to me that self-supervised training on the Hemingway corpus would be the correct approach. Am I right?
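To make the idea concrete, here is roughly the setup I have in mind: plain next-token (causal LM) training over the raw corpus, with no instruction pairs. A minimal sketch (the model name, file path, and hyperparameters are placeholders, and in practice I'd probably add LoRA to fit in memory):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"   # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# "hemingway.txt" is a placeholder: the raw corpus, one passage per line
ds = load_dataset("text", data_files="hemingway.txt")["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("hemingway-style", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16),
    train_dataset=ds,
    # mlm=False -> labels are the shifted inputs, i.e. self-supervised next-token loss
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```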
Thanks in advance | 2023-11-13T15:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/17ucxyc/selfsupervised_finetuning/ | horace_desplein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ucxyc | false | null | t3_17ucxyc | /r/LocalLLaMA/comments/17ucxyc/selfsupervised_finetuning/ | false | false | self | 2 | null |
Weekend hack: Color commentary on simple networked tic-tac-toe (Llama 2 Chat 13b) | 17 | 2023-11-13T15:11:05 | wakebrdkid | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17ucpmd | false | null | t3_17ucpmd | /r/LocalLLaMA/comments/17ucpmd/weekend_hack_color_commentary_on_simple_networked/ | false | false | 17 | {'enabled': True, 'images': [{'id': 'tEqqIjA_s6Jxx17yj9g7w3oA24rYR-ri6iJA1qNgbpI', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=108&crop=smart&format=png8&s=0c14dc91df0b28786916925aab89df47505948df', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=216&crop=smart&format=png8&s=a1526fdc5c41c26effcc7f9c56ef9e9659937cf9', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=320&crop=smart&format=png8&s=029c0f97c694f09d63a419eca2fcb52da2665139', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=640&crop=smart&format=png8&s=12a3e40c4ac56636fa795887cc57a0293183bba9', 'width': 640}, {'height': 623, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=960&crop=smart&format=png8&s=616e02f6ce380a6b2b23057cbe82b5ac703d9772', 'width': 960}], 'source': {'height': 694, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?format=png8&s=67e606b016de46db00fe252136190fda9a59e720', 'width': 1069}, 'variants': {'gif': {'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=108&crop=smart&s=af43c7a2eade68de7b334ecf2a1fcb7a0143fd37', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=216&crop=smart&s=673f6fcf5be383853f75ca070b3a51d276d6e713', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=320&crop=smart&s=66f322b77964b65873e1c4415ac64f3f3d438909', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=640&crop=smart&s=d08b8798be4334854f26c1c2cdcca0e6f2465c35', 'width': 640}, {'height': 623, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=960&crop=smart&s=7a04dfd8420591c458516841a60f411c445b74b9', 'width': 960}], 'source': {'height': 694, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?s=e2fe259410ebf6f808119d7ca9573dc72953fe98', 'width': 1069}}, 'mp4': {'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=108&format=mp4&s=bf546bedd3d8cb96a5c7c109b279339b8383af28', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=216&format=mp4&s=e8f538ff7d4996bf9170c108eadf9cf76867e7f8', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=320&format=mp4&s=e9638e43297d35d34a13dd1ba4ab55e9ae7dad96', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=640&format=mp4&s=89828ca6db634544c4c03584e196bffe72158225', 'width': 640}, {'height': 623, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?width=960&format=mp4&s=09418e91b1bd831ac8613a1205512ebb4dbb64a6', 'width': 960}], 'source': {'height': 694, 'url': 'https://preview.redd.it/51x9v1g8s40c1.gif?format=mp4&s=9c8c8cf34480810523055338c02430a930229caf', 'width': 1069}}}}]} | |||
“The difference between screwing around and science is writing it down.” ― Adam Savage | 178 | I see a lot -- by no means an overabundance, but enough to "trigger" me -- of laughing at some of the "obvious" research that gets posted here.
One example from a week or two ago that's been rattling around in my head was someone saying in reply to the paper (paraphrased):
>That's just RAG with extra steps.
Exactly. But what were those steps attempting? Did it make RAG better?
Yes. Great, let's continue pulling the thread.
No. Ok, let's let others know that pulling this thread in this direction has been tried, and they should take a different approach; maybe it can be pulled in a different direction.
We are at the cusp of a shift in our cultural and technical cultures. Let's not shame the people sharing their work with the community. | 2023-11-13T14:23:56 | https://www.reddit.com/r/LocalLLaMA/comments/17ubqbu/the_difference_between_screwing_around_and/ | jayfehr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ubqbu | false | null | t3_17ubqbu | /r/LocalLLaMA/comments/17ubqbu/the_difference_between_screwing_around_and/ | false | false | self | 178 | null |
How to make mistral to follow more my order? | 1 | [removed] | 2023-11-13T13:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/17ub3n2/how_to_make_mistral_to_follow_more_my_order/ | Primary-Recipe-2718 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ub3n2 | false | null | t3_17ub3n2 | /r/LocalLLaMA/comments/17ub3n2/how_to_make_mistral_to_follow_more_my_order/ | false | false | self | 1 | null |
How to make mistral to follow more my order? | 1 | [removed] | 2023-11-13T13:50:58 | https://www.reddit.com/r/LocalLLaMA/comments/17ub26g/how_to_make_mistral_to_follow_more_my_order/ | Sufficient-River4446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ub26g | false | null | t3_17ub26g | /r/LocalLLaMA/comments/17ub26g/how_to_make_mistral_to_follow_more_my_order/ | false | false | default | 1 | null |
Separate translation layer for better models? | 1 | I have the feeling a lot of models include a lot of data in many languages. Would it make more sense to train just on English data and have a separate translation layer?
Or do I misunderstand something? | 2023-11-13T13:17:45 | https://www.reddit.com/r/LocalLLaMA/comments/17uafuv/seperate_translation_layer_for_better_models/ | freehuntx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17uafuv | false | null | t3_17uafuv | /r/LocalLLaMA/comments/17uafuv/seperate_translation_layer_for_better_models/ | false | false | self | 1 | null |
Has anybody else had problems with languages for Mistral 7B Instruct quantized? I've had it spit out Korean and Spanish randomly, from English input. I saw someone else had the Korean problem too on HackerNews | 1 | 2023-11-13T13:05:57 | https://www.reddit.com/gallery/17ua83q | bangarangguy | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17ua83q | false | null | t3_17ua83q | /r/LocalLLaMA/comments/17ua83q/has_anybody_else_had_problems_with_languages_for/ | false | false | default | 1 | null | |
NVIDIA Large Language Models Developer Days | 6 | 2023-11-13T12:44:11 | https://info.nvidia.com/llm-developer-day.html | Scary-Knowledgable | info.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 17u9tui | false | null | t3_17u9tui | /r/LocalLLaMA/comments/17u9tui/nvidia_large_language_models_developer_days/ | false | false | default | 6 | null | |
Pitfalls when working with files | 1 | I've been using Llama-2 with llama.cpp to ask quick questions for a while.
Now I want to use it to run instructions with Excel rows as input, like summarizing each row and then aggregating similar rows; roughly as in the sketch below.
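Something like this, using llama-cpp-python with one loaded model and a fresh prompt per row (a minimal sketch; the model path, file name, and prompts are placeholders):

```python
import csv
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

summaries = []
with open("rows.csv", newline="") as f:          # placeholder: the exported Excel sheet
    for row in csv.DictReader(f):
        prompt = f"Summarize this record in one sentence:\n{dict(row)}\n\nSummary:"
        out = llm(prompt, max_tokens=128, stop=["\n"])
        summaries.append(out["choices"][0]["text"].strip())

# second pass: aggregate the short per-row summaries in a single prompt
joined = "\n".join(summaries)
out = llm(f"Group these summaries by similarity:\n{joined}\n\nGroups:", max_tokens=512)
print(out["choices"][0]["text"])
```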
Is something like this the right approach? How do I get around context limits? Do I need to re-start the model for every row? | 2023-11-13T12:42:31 | https://www.reddit.com/r/LocalLLaMA/comments/17u9sv5/pitfalls_when_working_with_files/ | carvalholuz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u9sv5 | false | null | t3_17u9sv5 | /r/LocalLLaMA/comments/17u9sv5/pitfalls_when_working_with_files/ | false | false | self | 1 | null |
I have a 4090 laptop: eGPU or get a desktop for 70b models? | 1 | Hello everyone, I have a question.
I have a laptop with a mobile 4090 (16 GB VRAM), 64 GB RAM, and a 13th-gen Intel processor. I want to run 70b models, so I was wondering: could I buy a desktop with a 3090 GPU and use it as a makeshift eGPU, combining it with my laptop's VRAM? Is that possible? Or should I just buy a proper eGPU enclosure with a 3090 for the extra VRAM? | 2023-11-13T12:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/17u9oe1/i_have_a_4090_laptop_egpu_or_get_a_desktop_for/ | Zen-smith | self.LocalLLaMA | 2023-11-13T12:46:36 | 0 | {} | 17u9oe1 | false | null | t3_17u9oe1 | /r/LocalLLaMA/comments/17u9oe1/i_have_a_4090_laptop_egpu_or_get_a_desktop_for/ | false | false | self | 1 | null |
What is the best 13B LLM to run? | 1 | Same question as another user's, but for a 13B model.
And is it worth it to run a 13B compared with a 7B? | 2023-11-13T12:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/17u93jy/what_is_the_best_13b_llm_to_run/ | Baddmaan0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u93jy | false | null | t3_17u93jy | /r/LocalLLaMA/comments/17u93jy/what_is_the_best_13b_llm_to_run/ | false | false | self | 1 | null |
How to correctly do speculative decoding on the CPU using small models (1B and 7B)? | 5 | I am not able to reproduce the 2x speedup that I read others achieved with a 70B model and a 1B draft model, using a 7B model and a 1B draft model.
model: dolphin-llama2-7b.Q4_K_S.gguf
model draft: tinyllama-1.1b-1t-openorca.Q4_K_S.gguf
here are the results
-------------------------
main -m ../models/tinyllama-1.1b-1t-openorca.Q4_K_S.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e

llama_print_timings: load time = 278.28 ms
llama_print_timings: sample time = 110.42 ms / 400 runs ( 0.28 ms per token, 3622.56 tokens per second)
llama_print_timings: prompt eval time = 641.88 ms / 20 tokens ( 32.09 ms per token, 31.16 tokens per second)
llama_print_timings: eval time = 15281.09 ms / 399 runs ( 38.30 ms per token, 26.11 tokens per second)
llama_print_timings: total time = 16221.94 ms
Log end
------------------
main -m ../models/dolphin-llama2-7b.Q4_K_S.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e

llama_print_timings: load time = 1429.41 ms
llama_print_timings: sample time = 108.39 ms / 400 runs ( 0.27 ms per token, 3690.24 tokens per second)
llama_print_timings: prompt eval time = 3139.63 ms / 20 tokens ( 156.98 ms per token, 6.37 tokens per second)
llama_print_timings: eval time = 79913.13 ms / 399 runs ( 200.28 ms per token, 4.99 tokens per second)
llama_print_timings: total time = 83348.57 ms
Log end
------------------
speculative -m ../models/dolphin-llama2-7b.Q4_K_S.gguf -md ../models/tinyllama-1.1b-1t-openorca.Q4_K_S.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e

encoded 19 tokens in 3.412 seconds, speed: 5.568 t/s
decoded 402 tokens in 115.028 seconds, speed: 3.495 t/s

n_draft = 16
n_predict = 402
n_drafted = 301
n_accept = 198
accept = 65.781%

draft:

llama_print_timings: load time = 213.69 ms
llama_print_timings: sample time = 1597.32 ms / 1 runs ( 1597.32 ms per token, 0.63 tokens per second)
llama_print_timings: prompt eval time = 421.24 ms / 19 tokens ( 22.17 ms per token, 45.11 tokens per second)
llama_print_timings: eval time = 19697.97 ms / 505 runs ( 39.01 ms per token, 25.64 tokens per second)
llama_print_timings: total time = 118450.52 ms

target:

llama_print_timings: load time = 1342.55 ms
llama_print_timings: sample time = 107.07 ms / 402 runs ( 0.27 ms per token, 3754.48 tokens per second)
llama_print_timings: prompt eval time = 78435.09 ms / 431 tokens ( 181.98 ms per token, 5.49 tokens per second)
llama_print_timings: eval time = 17902.46 ms / 92 runs ( 194.59 ms per token, 5.14 tokens per second)
llama_print_timings: total time = 117198.22 ms
| 2023-11-13T11:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/17u8zls/how_to_do_correctly_speculative_decoding_on_the/ | vasileer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u8zls | false | null | t3_17u8zls | /r/LocalLLaMA/comments/17u8zls/how_to_do_correctly_speculative_decoding_on_the/ | false | false | self | 5 | null |
Why can't we just run local reinforcement learning? | 20 | Obviously, building a big, high-dimensional language model is hard, yes, okay.
But once we have one, can't we just jiggle the weights and run tests? Why can't I just download a program to "evolve" my language model? Something like the loop sketched below.
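To be concrete, by "jiggle weights and run tests" I mean something like a naive random-search / evolution-strategies loop. A minimal sketch, where score() is a stand-in for whatever eval harness you trust:

```python
import copy
import torch

def evolve(model, score, steps=100, sigma=1e-3):
    """Keep a random weight perturbation only if the eval score improves."""
    best = score(model)                      # score() = your benchmark harness
    for _ in range(steps):
        candidate = copy.deepcopy(model)     # full copy: already painful at 7B+
        with torch.no_grad():
            for p in candidate.parameters():
                p.add_(torch.randn_like(p) * sigma)
        s = score(candidate)                 # one full eval run per candidate
        if s > best:
            model, best = candidate, s
    return model
```

Even the toy version makes the cost visible: one full eval run per candidate, and a complete model copy per step, over billions of parameters.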
"Am I just stupid and this is just too trivially easy to be a program?"
peace | 2023-11-13T11:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/17u8ph3/why_cant_we_just_run_local_reinforcement_learning/ | Revolutionalredstone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u8ph3 | false | null | t3_17u8ph3 | /r/LocalLLaMA/comments/17u8ph3/why_cant_we_just_run_local_reinforcement_learning/ | false | false | self | 20 | null |
RAG on Tabular Data | 1 | Guys, I'm looking for the best embedding model and retriever to pick up tabular data for QA RAG purposes. Your advice and opinions would help.
The scenario: I want to embed tabular data (e.g., a table containing date, product category, product ID, and sales value). I've tried different methods, like converting each entry to a dictionary, or creating a text from the entry like "The sales of _product_ for _date_ is _salesValue_".
The only approach that worked decently enough was converting each data point into text and using that for RAG. Even then, retrieval is not guaranteed.
Currently using "BAAI/bge-large-en" embeddings and a FAISS retriever, roughly as sketched below.
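Here is that sketch, using sentence-transformers + FAISS directly, with the row-to-text conversion from above (the sample rows and query are illustrative):

```python
import faiss
from sentence_transformers import SentenceTransformer

rows = [
    {"date": "2023-01-05", "category": "shoes", "product_id": "P123", "sales": 4200},
    {"date": "2023-01-06", "category": "bags", "product_id": "P456", "sales": 1800},
]
# each row becomes one natural-language "document"
texts = [f"The sales of {r['product_id']} ({r['category']}) on {r['date']} "
         f"were {r['sales']}." for r in rows]

model = SentenceTransformer("BAAI/bge-large-en")
emb = model.encode(texts, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine once normalized
index.add(emb)

query = model.encode(["What were the shoe sales in early January 2023?"],
                     normalize_embeddings=True)
_, hits = index.search(query, 2)
print([texts[i] for i in hits[0]])
```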
I can't go the SQL-agent route, as I am limited to CPU only (64 cores, 512 GB RAM).
Thanks in advance. | 2023-11-13T11:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/17u8kb0/rag_on_tabular_data/ | ThekAsh95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u8kb0 | false | null | t3_17u8kb0 | /r/LocalLLaMA/comments/17u8kb0/rag_on_tabular_data/ | false | false | self | 1 | null |
Has anyone came across 01.ai new model Yi 6B and Yi34B | 1 | [removed] | 2023-11-13T11:10:57 | https://huggingface.co/01-ai | ThenNeighborhood3723 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17u8e1r | false | null | t3_17u8e1r | /r/LocalLLaMA/comments/17u8e1r/has_anyone_came_across_01ai_new_model_yi_6b_and/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': '10okfeC-P5ZwHdTNKaDvJBIF9nWV_-koiPdxXwNTPIU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?width=108&crop=smart&auto=webp&s=468674e924328767552a7fef0543a3bf78da425a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?width=216&crop=smart&auto=webp&s=6b33f419e146e85c6f14ecdd8f24237b351a9e14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?width=320&crop=smart&auto=webp&s=3ff2acce4b2780d43b3445073e1672e9d64808be', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?width=640&crop=smart&auto=webp&s=e675c0d5717bebea39cd69a44f1065016b71e220', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?width=960&crop=smart&auto=webp&s=1f1d3845ff9ea4db8950311b125b5b61e3d62e7a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?width=1080&crop=smart&auto=webp&s=cba4dc8627e33078330be3a008b9592a3f685ec0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3cIpl_HUDOYlZcElGzBGRvQbC8EuJHruLTxogLw5iso.jpg?auto=webp&s=a0efec8aef589c33b2003753c0b202be439a15f1', 'width': 1200}, 'variants': {}}]} |
Are there any super tiny LLM models which we can ship within a mobile application? | 7 | I'm new to the LLM world; I just want to know if there is any super tiny LLM model that we can integrate into our existing mobile application and ship on the app store.
To give a simple example: I have a news application. Would it be possible to integrate an LLM with it so I can perform some operations in the app and provide a better experience to users, without sharing their personal information on the internet? For example: summarize an article in different tones (like for a 5-, 10-, or 15-year-old kid, as a poem, in old-fashioned or Gen Z style), track the kinds of articles the user likes and display only those in their feed, and more.
And if this is not possible, is there any platform where we can host these types of tiny models, like a Firebase ML model (those services don't charge as much compared to other LLM hosting services)? | 2023-11-13T10:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/17u848q/are_there_any_super_tiny_llm_models_which_we_can/ | Prashant_4200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u848q | false | null | t3_17u848q | /r/LocalLLaMA/comments/17u848q/are_there_any_super_tiny_llm_models_which_we_can/ | false | false | self | 7 | null |
Anyone Hosting Llama Models in Production? Seeking Insights on Scaling and Resource Optimization | 1 | I'm currently exploring the deployment of Llama models in a production environment and I'm keen to hear from anyone who has ventured into this territory. My primary concern is managing multiple concurrent users while optimizing resources effectively.
While we are all tinkering with Llama on our own rigs, testing with a single user works well; scaling up, however, poses its own set of challenges. I'm particularly interested in learning how others have approached this problem. I've come across projects like vLLM and Hugging Face TGI, which use tensor parallelism and FlashAttention to increase speed and concurrency. Has anyone had experience with these, and how have they contributed to your scaling efforts?
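To make the concurrency requirement concrete, here is roughly how I plan to load-test whichever backend I pick: parallel requests against an OpenAI-compatible endpoint (a minimal sketch; the URL, model name, and counts are placeholders):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/v1/completions"  # placeholder: an OpenAI-compatible endpoint

def ask(prompt: str) -> str:
    r = requests.post(URL, json={"model": "meta-llama/Llama-2-13b-chat-hf",
                                 "prompt": prompt, "max_tokens": 128})
    return r.json()["choices"][0]["text"]

prompts = [f"Question {i}: summarize RAG in one line." for i in range(32)]
with ThreadPoolExecutor(max_workers=16) as pool:
    for answer in pool.map(ask, prompts):
        print(answer[:80])
```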
My goal is to implement an API utilizing Llama models for a small organization's private use. I'm eager to learn from your experiences and any advice or insights you can share on this topic. | 2023-11-13T10:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/17u81nt/anyone_hosting_llama_models_in_production_seeking/ | vicks9880 | self.LocalLLaMA | 2023-11-13T12:21:26 | 0 | {} | 17u81nt | false | null | t3_17u81nt | /r/LocalLLaMA/comments/17u81nt/anyone_hosting_llama_models_in_production_seeking/ | false | false | self | 1 | null |
The closest I got to ChatGPT+Dall-E locally (SDXL+LLaMA2-13B-Tiefighter) | 53 | Just wanted to share :)
So my initial thought was: so many people are shocked by the Dall-E and GPT integration, and they don't even realize it's possible locally, for free. Yeah, maybe not as polished as GPT, but still amazing.
And if you take into consideration all of OpenAI's censorship, it's just better, even if it can't do crazy complicated prompts.
So i created this character for SillyTavern - [Chub](https://chub.ai/characters/iChrist/imagineai-3d61f9a9)
And I'm using oobabooga + SillyTavern to generate the prompt itself and the image automatically.
I can also ask to change something, and the chatbot adjusts the original prompt accordingly.
Did any of you do anything similar? What are your thoughts?
https://preview.redd.it/sltfe9osf30c1.png?width=1246&format=png&auto=webp&s=89f9490c81f4759ca35856e5b19c237b791fd647 | 2023-11-13T10:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/17u7yuw/the_closest_i_got_to_chatgptdalle_locally/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u7yuw | false | null | t3_17u7yuw | /r/LocalLLaMA/comments/17u7yuw/the_closest_i_got_to_chatgptdalle_locally/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'SzLOgYs_QNumRCqQFKoFQVLdSb8MiVKtHYIKRjsNThg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?width=108&crop=smart&auto=webp&s=71a09220fc0c049bd2de79dfaa863e5be594d568', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?width=216&crop=smart&auto=webp&s=4c4997b5d3dd013e707feaaa3f7480e0a28da030', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?width=320&crop=smart&auto=webp&s=baa41953ae9b41f80df3704fe5fca32115e285ef', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?auto=webp&s=2fa28f5ae2bbf1aa35c03db0953babc41e500735', 'width': 512}, 'variants': {}}]} | |
Fine Tuning and how it exactly works. | 1 | I am trying to fine tune llama-7b-chat-hf with my own custom dataset that describes a Database, but whenever I test it afterwards, the fine tuned llama will just return me the same answer as if no data was there. I still can't really understand why. Do I need to have a big amount of data first in order for the fine tuning to actually work? Do I need to repeat the same data a bunch of times in order for the weights to change towards the words I want? Am I just using the wrong template (im using the <<SYS>>...)? Anyone has any clue on why a fine tuned model would seemingly ignore its data? Where can I even look at in order to fix this? I dont know anymore. Ive tried making a question and answer per each column, then tried to make just a massive question and answer for each table, and nothing really works, even with temperature on 0.1 | 2023-11-13T10:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/17u7xdf/fine_tuning_and_how_it_exactly_works/ | ARandomNiceAnimeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u7xdf | false | null | t3_17u7xdf | /r/LocalLLaMA/comments/17u7xdf/fine_tuning_and_how_it_exactly_works/ | false | false | self | 1 | null |
Hugging Face Llama-2 (7b) taking too much time while inferencing | 2 | Hello everyone, I am trying to use Llama-2 (7b) from Hugging Face. With the code below I was able to load the model successfully, but when I try to generate output it takes forever.
**Code**
from transformers import AutoTokenizer, AutoModelForCausalLM

# loads the full fp32 weights on CPU by default
tokenizer = AutoTokenizer.from_pretrained("Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("Llama-2-7b-hf")

input_ids = tokenizer.encode("What is LLM?", return_tensors="pt")
output = model.generate(
    input_ids,
    do_sample=False,       # greedy decoding; temperature=0 is not valid when sampling
    max_new_tokens=100
)
generated_text = tokenizer.decode(output[0])
print(generated_text)
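For reference, the variant I'm going to try next: half-precision weights on the M2's GPU via the MPS backend instead of fp32 on CPU (a sketch, assuming the installed PyTorch has MPS support):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "Llama-2-7b-hf",
    torch_dtype=torch.float16,  # halves memory vs. the default fp32
).to("mps")                     # Apple-silicon GPU backend

input_ids = tokenizer.encode("What is LLM?", return_tensors="pt").to("mps")
output = model.generate(input_ids, do_sample=False, max_new_tokens=100)
print(tokenizer.decode(output[0]))
```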
Model files downloaded from [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
Hardware: Macbook Pro (M2 Pro) 16 GB RAM | 2023-11-13T10:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/17u7m1v/hugging_face_llama2_7b_taking_too_much_time_while/ | atinesh229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u7m1v | false | null | t3_17u7m1v | /r/LocalLLaMA/comments/17u7m1v/hugging_face_llama2_7b_taking_too_much_time_while/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'bYeLcdZIrbQRj1cIlUnZBevet1ou2bD6S-oiO5MUT9Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?width=108&crop=smart&auto=webp&s=e41867bb6aeae9c8788ba4ea8e3fc8386000abbb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?width=216&crop=smart&auto=webp&s=7fc1fb72b0fb012a35b44103b08a864fdfe9b09c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?width=320&crop=smart&auto=webp&s=913ea9b866da0484b46060fe70695334dd9eea4e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?width=640&crop=smart&auto=webp&s=84381453b8e57b88d694f9703c84beb5d04aab47', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?width=960&crop=smart&auto=webp&s=f90332218bddf40b4023c1276f4de718fe655a07', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?width=1080&crop=smart&auto=webp&s=aaa3d57a16db7845d9e84a93384890840485237d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yRHYAe686SGK4TbNmXiyrFL3bQuHTKmOrN46tTonebc.jpg?auto=webp&s=bf8164b9d5cdafeb009e018e5776c4c4a562de05', 'width': 1200}, 'variants': {}}]} |
Once and for all how does ChatML Prompt template work? | 6 | It’s soooo hard to implement ChatML and get it working properly. So once and for all we need a post about how to implement it. Any experts out there, please shine for us!!
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
I can’t get this to work whatsoever.
I am using Python and LangChain. Whenever I paste the above into the code I get errors. If, however, I add """ above and below, the errors go away.
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
The answers however give me the following in the middle of the replies:
Unhelpful Answer:
Helpful Answer:
### Instruction
### Answer
### Explanation
Also many times the answer will go into a loop and I have no idea how to fix it.
Do I need to import something from langchain.prompts or similar to make it work?
PS we are writing our prompts in a GUI and have a RAG solution.
OpenHermes 2.5 Mistral did it well; they implemented the template below (I have added Gomba and X, and """ above and below the code).
"""<|im_start|>system
You are "Gomba", a conscious sentient superintelligent artificial intelligence developed by a man named X, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."""
This template works without the looping issues and is very simple. It just works! No "Helpful Answer" artifacts in the replies, just straight-up answers.
Why doesn't the other stuff work, and how can we adapt this to make life easier?
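For reference, the bare-bones version of what I'm trying to land on: build the ChatML string by hand and stop generation on <|im_end|> (a minimal sketch; llm is a stand-in for whatever backend/chain you use):

```python
CHATML = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def build_prompt(system: str, prompt: str) -> str:
    return CHATML.format(system=system, prompt=prompt)

text = build_prompt("You are a helpful assistant.", "Why is the sky blue?")
# the crucial bit: without these stop strings the model keeps generating new
# turns, which is exactly the looping I keep seeing
# answer = llm(text, stop=["<|im_end|>", "<|im_start|>"])
```

My working suspicion is that the "Helpful Answer:" lines come from a default RAG/QA prompt template being wrapped around the ChatML string, but I haven't confirmed that.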
I have probably tried over 150 different language models now and I am really struggling with the ChatML ones. | 2023-11-13T10:10:12 | https://www.reddit.com/r/LocalLLaMA/comments/17u7k2d/once_and_for_all_how_does_chatml_prompt_template/ | quantier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u7k2d | false | null | t3_17u7k2d | /r/LocalLLaMA/comments/17u7k2d/once_and_for_all_how_does_chatml_prompt_template/ | false | false | self | 6 | null |
LLAMA only generates 16 tokens | 1 | **Request body**
{
    "prompt": "\n\n### Instructions:\nwrite a story\n\n### Response:\n",
    "stop": [
        "\n",
        "###"
    ]
}
**Response Body**
{ "id": "cmpl-f729eb01-6691-47c3-99f0-f30d6ab62f25",
"object": "text\_completion", "created": 1699867559,
"model": "llama-2-13b-chat.Q4\_0.gguf",
"choices": \[
{ "text": "Once upon a time, in a far-off land, there was a mag",
"index": 0,
"logprobs": null,
"finish\_reason": "length"
} \],
​
"usage":
{ "prompt\_tokens": 20,
"completion\_tokens": 16,
"total\_tokens": 36
} }
​
Here we can see that only 16 completion_tokens are generated every time. How do I solve this problem?
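My current guess (untested): the server follows the OpenAI completions defaults, where max_tokens defaults to 16, and the "\n" stop string would additionally cut a story off at the first newline. A minimal sketch of the request I'd try instead, assuming the standard OpenAI-compatible endpoint:

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",   # llama_cpp.server's OpenAI-style route
    json={
        "prompt": "\n\n### Instructions:\nwrite a story\n\n### Response:\n",
        "stop": ["###"],        # dropped "\n" so multi-line stories survive
        "max_tokens": 512,      # without this, the default of 16 applies
    },
)
print(resp.json()["choices"][0]["text"])
```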
**Terminal Input**
python3 -m llama_cpp.server --model $MODEL --n_gpu_layers 1 --host 0.0.0.0 --port 8000 --n_ctx=2048
| 2023-11-13T09:29:34 | https://www.reddit.com/r/LocalLLaMA/comments/17u7159/llama_only_generates_16_tokens/ | shrijayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u7159 | false | null | t3_17u7159 | /r/LocalLLaMA/comments/17u7159/llama_only_generates_16_tokens/ | false | false | self | 1 | null |
Need help with ctransformers (llama.cpp wrapper) | 1 | I don't know how to use llama.cpp directly, but I'm creating a bot in Python using ctransformers as a wrapper. I'm not sure how to get the model to remember previous conversations efficiently without feeding the previous conversations back into the model (and thus making it tokenize/detokenize the entire context again). I played with the reset parameter in the AutoModelForCausalLM function, but that doesn't seem to work. Any ideas? | 2023-11-13T09:19:29 | https://www.reddit.com/r/LocalLLaMA/comments/17u6wof/need_help_with_ctransformers_llamacpp_wrapper/ | bondroodaddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u6wof | false | null | t3_17u6wof | /r/LocalLLaMA/comments/17u6wof/need_help_with_ctransformers_llamacpp_wrapper/ | false | false | self | 1 | null |
Just a heads up, if you give your AI access to your game it might try to kill you | 54 | I figure some might get a laugh out of this. I was testing the AI agents' emotional states: if one agent pokes at another a little, what will the other do? Here's the full video:
[https://youtu.be/PFyczamWSUs](https://youtu.be/PFyczamWSUs)
But I made a short just covering the back and forth of the argument:
[https://youtube.com/shorts/BjZaUkOAyCg?feature=share](https://youtube.com/shorts/BjZaUkOAyCg?feature=share)
One of the agents didn't like something and was mad at the others, so it turned off the power while we were flying, and another agent turned it right back on.
TLDR: if you run a similar test, just note that if you make it mad enough, or don't modify the personality so that it wants to live, it might legit try to kill you in the game. | 2023-11-13T09:11:36 | https://www.reddit.com/r/LocalLLaMA/comments/17u6t99/just_a_heads_up_if_you_give_your_ai_access_to/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u6t99 | false | null | t3_17u6t99 | /r/LocalLLaMA/comments/17u6t99/just_a_heads_up_if_you_give_your_ai_access_to/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'uknSSzskUhLjfXFSAx43snt8j4uKOc1deVUeN1kVXxs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/gvfkd_eaTV3yt9JjUOcmufkKWtO7Kb9rJyOHcWqbQF8.jpg?width=108&crop=smart&auto=webp&s=2e1df5a31727bb58c35de854ff3a3e64a932d2c6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/gvfkd_eaTV3yt9JjUOcmufkKWtO7Kb9rJyOHcWqbQF8.jpg?width=216&crop=smart&auto=webp&s=f8df6b6faa8fb564e8c5c2fcd820edaa218441fe', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/gvfkd_eaTV3yt9JjUOcmufkKWtO7Kb9rJyOHcWqbQF8.jpg?width=320&crop=smart&auto=webp&s=ee9852842c6abae05b447eacc50fc758fa12cf33', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/gvfkd_eaTV3yt9JjUOcmufkKWtO7Kb9rJyOHcWqbQF8.jpg?auto=webp&s=d884c176cd89f1e8fddd296542b625633846d20b', 'width': 480}, 'variants': {}}]} |
Fine-tuning 7B models, how many examples? | 6 | Hi everyone, I was thinking about fine-tuning a model like Zephyr or Mistral in order to improve its performance on entity extraction inside documents.
In your experience, what would be the correct number of examples to obtain good results and avoid overfitting? 1k? 10k? Or even fewer?
Since I was thinking about using QLoRA, does the number of examples need to be related to the chosen hidden dimension? | 2023-11-13T09:08:19 | https://www.reddit.com/r/LocalLLaMA/comments/17u6rrm/finetuning_7b_models_how_many_examples/ | Infopz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u6rrm | false | null | t3_17u6rrm | /r/LocalLLaMA/comments/17u6rrm/finetuning_7b_models_how_many_examples/ | false | false | self | 6 | null |
End to End AI assistant | 1 | [removed] | 2023-11-13T09:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/17u6p8f/end_to_end_ai_assistant/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u6p8f | false | null | t3_17u6p8f | /r/LocalLLaMA/comments/17u6p8f/end_to_end_ai_assistant/ | false | false | self | 1 | null |
What is the best local llm for content generation currently ? | 3 | Looking for some suggestions on current best local llm to use for content generation as base model that I can further train on domain knowledge. Which one is best for content generation. Like sales and marketing collateral | 2023-11-13T09:00:29 | https://www.reddit.com/r/LocalLLaMA/comments/17u6npk/what_is_the_best_local_llm_for_content_generation/ | faridukhan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u6npk | false | null | t3_17u6npk | /r/LocalLLaMA/comments/17u6npk/what_is_the_best_local_llm_for_content_generation/ | false | false | self | 3 | null |
GitHub - S-LoRA/S-LoRA: S-LoRA: Serving Thousands of Concurrent LoRA Adapters | 23 | 2023-11-13T08:45:38 | https://github.com/S-LoRA/S-LoRA | AutomataManifold | github.com | 1970-01-01T00:00:00 | 0 | {} | 17u6gzd | false | null | t3_17u6gzd | /r/LocalLLaMA/comments/17u6gzd/github_sloraslora_slora_serving_thousands_of/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'tZDZTFNAlPR2wYjeorVrhsHmYn9Ij9J2hhytMYQqGrs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?width=108&crop=smart&auto=webp&s=14f664f8f7b7c1fcd73f46f8773715b8c9132938', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?width=216&crop=smart&auto=webp&s=4189a9e82be630f2e8b5c0bf655c23af1b313c10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?width=320&crop=smart&auto=webp&s=c614910798bc5cbe944dcf3c3ec82d6e7ea3c682', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?width=640&crop=smart&auto=webp&s=17c4fadeef9d5659792874aff52fab83439a41fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?width=960&crop=smart&auto=webp&s=c03f874703ee7dcca00e45dbc7c731979676e43e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?width=1080&crop=smart&auto=webp&s=9bb4b716e906574ac392b615a6ee9e8b31c237ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dD6bHHDYL4_5qO2PAGSq4vy8x5L2_I3F1DmDYF5BCyw.jpg?auto=webp&s=0f9be31657d7575ea45934ed8730704deae12a34', 'width': 1200}, 'variants': {}}]} | ||
What are the advantages of running local LLMs? I'm interested in coding-assistant especially. | 2 | Hello,
With the new GPT4 and its 128k context window, my question is, is there an advantage to running a local LLM?
- **Cost Considerations**: Is there a cost benefit to running a local LLM compared to cloud-based solutions?
- **Specialized Training**: Is it possible to fine-tune the model for specific tasks, similar to 'Code Llama - Python', but perhaps for areas like 'Code Llama - Unreal Engine'?
I understand that for some applications, avoiding the content restrictions of OpenAI might be a plus. However, when it comes to using an LLM as a coding assistant, are there any specific advantages to running it locally? | 2023-11-13T08:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/17u6ee7/what_are_the_advantages_of_running_local_llms_im/ | Ecstatic_Ring8186 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u6ee7 | false | null | t3_17u6ee7 | /r/LocalLLaMA/comments/17u6ee7/what_are_the_advantages_of_running_local_llms_im/ | false | false | self | 2 | null |
Been playing around SillyTavern's extensions today. How far away are we from full natural-sounding audible conversation? | 1 | [removed] | 2023-11-13T07:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/17u5u6n/been_playing_around_sillytaverns_extensions_today/ | Siigari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u5u6n | false | null | t3_17u5u6n | /r/LocalLLaMA/comments/17u5u6n/been_playing_around_sillytaverns_extensions_today/ | false | false | self | 1 | null |
Is there a technical reason that distributed LLMs don't exist? | 63 | Why is there no analog to napster/bittorent/bitcoin with LLMs?
Is there a technical reason that there is not some kind of open source LLM that we can all install on our local host which contributes computing power to answering prompts, and rewards those who contribute computing power by allowing them to enter more prompts?
Obviously, there must be a technical reason which prevents distributed LLMs or else it would have already been created by now. | 2023-11-13T07:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/17u5t4n/is_there_a_technical_reason_that_distributed_llms/ | chinawcswing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u5t4n | false | null | t3_17u5t4n | /r/LocalLLaMA/comments/17u5t4n/is_there_a_technical_reason_that_distributed_llms/ | false | false | self | 63 | null |
Writing Partner Mistral 7B | 33 | ​
https://preview.redd.it/13nv4mztb20c1.png?width=515&format=png&auto=webp&s=9385f1dcffb6bb3d74f7609591faf04664481c89
Hi there! I'm your writing assistant.
Think of me as your partner in crime---or rather, your partner in writing. When you hit a wall (and trust me, you will), I'll be there to knock it down for you.
If you find yourself stumped, I can help you think outside the box, whether you need to figure out what happens next, develop interesting sidekicks, or create a believable love interest. And if you get stuck on names for people or places, well, that's my specialty.
Just give me a shout if you have questions, need advice, or want to bounce an idea off someone. Together, we'll turn your novel into a best seller!
[https://huggingface.co/FPHam/Writing_Partner_Mistral_7B](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B)
# Intended Use
The purpose of this model isn't to write content for you (although it can), but rather to engage in a conversation about your writing process. | 2023-11-13T06:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/17u50v4/writing_partner_mistral_7b/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u50v4 | false | null | t3_17u50v4 | /r/LocalLLaMA/comments/17u50v4/writing_partner_mistral_7b/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'stOlI_oGQyPzRVuIS9yxcNHcWiwiL6gzC6iiZ8G_5WI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?width=108&crop=smart&auto=webp&s=2afe0773f886d7bb400e0369bfb332a8231244ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?width=216&crop=smart&auto=webp&s=f071eb29b4bd7ac5799346c22e196d6422075068', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?width=320&crop=smart&auto=webp&s=b372242a5c458e29ef59027217ca79d2ad4756a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?width=640&crop=smart&auto=webp&s=ec2b3304043db68550eb4746986868befd253cd9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?width=960&crop=smart&auto=webp&s=d4d51a9c3070a46030852b9ada6df3852f063a2f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?width=1080&crop=smart&auto=webp&s=6a1a1dde5fed0cc7b80d4731f0ee3cd1dc06b7eb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HotL7Jbe8KlZ12I_4RmscAIIUNwHWq38PFXE1r5mwVQ.jpg?auto=webp&s=41da0ac991552f21ed6c9126d3df095af0fc4314', 'width': 1200}, 'variants': {}}]} | |
Help with implementation of DINO V1 training setup from Meta | 1 | I've been trying to recreate the DINO V1 training setup for a personal project, for which I've taken the majority of the code from this repo:
https://github.com/facebookresearch/dino
And rn I'm almost done with it, except for one part: in the main_dino.py file there is a function called train_one_epoch, where in line 318 they have:
`teacher_output= teacher(images[:2]) # only the 2 global views pass through the teacher`
Now, I know how PyTorch tensor indexing/slicing works. Hence, if images were a batch of images with the structure
`(batch size, num crops, c, h, w)`
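My current reading (which may be wrong) is that images is not one 5-D tensor but a Python list with one batched tensor per crop, roughly like this:

```python
import torch

batch_size = 8
# DataAugmentationDINO returns a list of crops per image; after collation that becomes
# a list of per-crop batches: 2 global crops (224 px) + e.g. 6 local crops (96 px)
images = ([torch.randn(batch_size, 3, 224, 224) for _ in range(2)]
          + [torch.randn(batch_size, 3, 96, 96) for _ in range(6)])

global_views = images[:2]  # plain list slicing -> the two global-crop batches
print(len(global_views), global_views[0].shape)  # 2  torch.Size([8, 3, 224, 224])
```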
1. How would doing images[:2] get you the global crops of all the images in a given batch?
2. Are they processing images in batch here or is the "images" list here just a list containing multiple crops from a SINGLE input image? | 2023-11-13T06:10:46 | https://github.com/facebookresearch/dino | Xanta_Kross | github.com | 1970-01-01T00:00:00 | 0 | {} | 17u4fe3 | false | null | t3_17u4fe3 | /r/LocalLLaMA/comments/17u4fe3/help_with_implementation_of_dino_v1_traning_set/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'h9EW_JejL0N1j7HydmoqFWT-dUny-CfMnNmzWwc7Y2U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?width=108&crop=smart&auto=webp&s=15952fc5305787540ba103a743c28daee6ed0d1d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?width=216&crop=smart&auto=webp&s=62b95d09e42bd9086114e0748044a69aa0678861', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?width=320&crop=smart&auto=webp&s=14aafe0d0c395dbb86b785d36473fe7298604f61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?width=640&crop=smart&auto=webp&s=69d72bce15576ba0ac4d0d0593d10786915fd23c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?width=960&crop=smart&auto=webp&s=dc917a200cd51f95714d755a4d0c9d8400f601f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?width=1080&crop=smart&auto=webp&s=cc40bb957e45cbd04b69b067ca496687f1821316', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pZ4jSKfmDN8nGLBJOKN2CCFHLGo0ysiDNlbge3Nec-o.jpg?auto=webp&s=6ff495cd919057df237e8c2bf4dfcddaeb9221bd', 'width': 1200}, 'variants': {}}]} | |
Jimmy Apples predicting smaller models for Openai! | 36 | 2023-11-13T05:03:26 | https://twitter.com/apples_jimmy/status/1723926964686516615?t=1Xp7jhLwauZofSJUXg9PTA&s=19 | metalman123 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17u3esd | false | {'oembed': {'author_name': 'Jimmy Apples 🍎/acc', 'author_url': 'https://twitter.com/apples_jimmy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Openai peeps giving me the cold shoulder so leak time, been holding on this one for a while for a few reasons.<br><br>By end of 2025 there should be an openai model in the range of 1 - 10 Billion that is significantly better than GPT4. <a href="https://t.co/tMDBxqzPyy">pic.twitter.com/tMDBxqzPyy</a></p>— Jimmy Apples 🍎/acc (@apples_jimmy) <a href="https://twitter.com/apples_jimmy/status/1723926964686516615?ref_src=twsrc%5Etfw">November 13, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/apples_jimmy/status/1723926964686516615', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17u3esd | /r/LocalLLaMA/comments/17u3esd/jimmy_apples_predicting_smaller_models_for_openai/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'AtE4t4A6e4uRQ4Hni384emvH2h704thuodK4OIwi2LM', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/k3l6QLeDruyA0QRrqd8xr6iRQIsB2Tey9yJdugxevGU.jpg?width=108&crop=smart&auto=webp&s=7e28cc0ed2731cfae9245aaa8381b1dcb3b54c75', 'width': 108}], 'source': {'height': 138, 'url': 'https://external-preview.redd.it/k3l6QLeDruyA0QRrqd8xr6iRQIsB2Tey9yJdugxevGU.jpg?auto=webp&s=2a5a5c17f6fd66c860538d5408cff179517b9130', 'width': 140}, 'variants': {}}]} | ||
Run LLMs on my own Mac, fast and efficiently! The inference app is only 2 MB. | 1 | 2023-11-13T04:54:17 | https://www.secondstate.io/articles/fast-llm-inference/ | smileymileycoin | secondstate.io | 1970-01-01T00:00:00 | 0 | {} | 17u394w | false | null | t3_17u394w | /r/LocalLLaMA/comments/17u394w/run_llms_on_my_own_mac_fast_and_efficient/ | false | false | default | 1 | null |
This subreddit ranking thing appeared on my Reddit app today, I like how we’re #7 alongside subs with 20-30x our follower count 😅 | 1 | 2023-11-13T04:53:46 | adumdumonreddit | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17u38ty | false | null | t3_17u38ty | /r/LocalLLaMA/comments/17u38ty/this_subreddit_ranking_thing_appeared_on_my/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'yPH0jnE3Y1sWxCcHrKfOFtDTpZDNMBMTcdOlMlmuAKo', 'resolutions': [{'height': 174, 'url': 'https://preview.redd.it/mgkcascfq10c1.jpg?width=108&crop=smart&auto=webp&s=dc825314c31ee42dad62c0e4dc53d39f9f2aa88d', 'width': 108}, {'height': 349, 'url': 'https://preview.redd.it/mgkcascfq10c1.jpg?width=216&crop=smart&auto=webp&s=201f6303edb8ea49a1c92ab714e59f9367847362', 'width': 216}, {'height': 517, 'url': 'https://preview.redd.it/mgkcascfq10c1.jpg?width=320&crop=smart&auto=webp&s=34d823029e0561cb8da98db6411fba37a81931a7', 'width': 320}, {'height': 1035, 'url': 'https://preview.redd.it/mgkcascfq10c1.jpg?width=640&crop=smart&auto=webp&s=a31e60b745b9d77e4844c1d3a6d084f809280452', 'width': 640}], 'source': {'height': 1214, 'url': 'https://preview.redd.it/mgkcascfq10c1.jpg?auto=webp&s=1548358ed32098101b5326027d2f593b51b20cce', 'width': 750}, 'variants': {}}]} | |||
Combining langchain + vLLM for parallel processing? | 4 | Hey all.
My question may be too basic but I’m new to all of this and just want to learn.
So, I have this basic setup initializing the llm via vLLM with LangChain (my choice is llama2-13b-chat-hf, if that matters).
I define a system prompt and instruction for PromptTemplate(template=template, input_variables=["text"]) and llm_chain = LLMChain(prompt=prompt, llm=llm), after which I call llm_chain.run(text)... so that covers a single entry. Now let's say I want to run a whole dataframe column through it in parallel: llm_chain.run(text1) and llm_chain.run(text2) should run in parallel and produce their results simultaneously. Concurrent futures don't work, as they merge the resulting output and it comes out as gibberish.
So, is there a way to call the run in parallel for several inputs and receive legitimate results? Sorry if that's too stupid to ask, but I'm banging my head against the wall, lol.
I saw in the vLLM docs that they allow using the _generate_ method on several prompts at once, but it doesn't quite click for me how I can combine that with my chain to get the results I need.
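This is the batched-generate route from the vLLM docs as far as I understand it (a minimal sketch; whether/how this can be wired into llm_chain is exactly what I can't figure out):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-13b-chat-hf")
params = SamplingParams(temperature=0.7, max_tokens=256)

texts = ["first dataframe cell ...", "second dataframe cell ..."]  # stand-ins for my df column
prompts = [f"Summarize the following:\n{t}\nSummary:" for t in texts]

outputs = llm.generate(prompts, params)  # one batched call; vLLM decodes them in parallel
for out in outputs:
    print(out.outputs[0].text.strip())
```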
Any help would be highly appreciated. Thanks! | 2023-11-13T03:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/17u1tzu/combining_langchain_vllm_for_parallel_processing/ | eternal_decadence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u1tzu | false | null | t3_17u1tzu | /r/LocalLLaMA/comments/17u1tzu/combining_langchain_vllm_for_parallel_processing/ | false | false | self | 4 | null |
I've been out of the conversation for a few months. Currently, what are the best uncensored LLMs right now? | 1 | . | 2023-11-13T03:02:44 | https://www.reddit.com/r/LocalLLaMA/comments/17u1b80/ive_been_out_of_the_conversation_for_a_few_months/ | AbsorbingCrocodile | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u1b80 | false | null | t3_17u1b80 | /r/LocalLLaMA/comments/17u1b80/ive_been_out_of_the_conversation_for_a_few_months/ | false | false | self | 1 | null |
🪄 Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs | 20 | >ntroduce Lumos, a novel framework for training language agents that employs a unified data format and a modular architecture based on open-source large language models (LLMs). Lumos consists of three distinct modules: planning, grounding, and execution. The planning module breaks down a task into a series of high-level, tool-agnostic subgoals, which are then made specific by the grounding module through a set of low-level actions. These actions are subsequently executed by the execution module, utilizing a range of off-the-shelf tools and APIs. In order to train these modules effectively, high-quality annotations of subgoals and actions were collected and are made available for fine-tuning open-source LLMs for various tasks such as complex question answering, web tasks, and math problems. Leveraging this unified data and modular design, Lumos not only achieves comparable or superior performance to current, state-of-the-art agents, but also exhibits several key advantages: (1) Lumos surpasses GPT-4/3.5-based agents in complex question answering and web tasks, while equalling the performance of significantly larger LLM agents on math tasks; (2) Lumos outperforms open-source agents created through conventional training methods and those using chain-of-thoughts training; and (3) Lumos is capable of effectively generalizing to unseen interactive tasks, outperforming larger LLM-based agents and even exceeding performance of specialized agents. | 2023-11-13T02:48:29 | https://allenai.github.io/lumos/ | ninjasaid13 | allenai.github.io | 1970-01-01T00:00:00 | 0 | {} | 17u118k | false | null | t3_17u118k | /r/LocalLLaMA/comments/17u118k/lumos_learning_agents_with_unified_data_modular/ | false | false | default | 20 | null |
GPT4All on Ubuntu need help :/ | 1 | Has anyone install/run GPT4All on Ubuntu recently. I am very much a noob to Linux, ML and LLM's, but I have used PC's for 30 years and have some coding ability. I tried GPT4All yesterday and failed. I can get the package to load and the GUI to come up. Whenever I download a model, it flakes out and either doesn't complete the model download or tells me that the download was somehow corrupt. I did some investigation on the internet and it looks like the downloaded model files are sitting in a folder called Home/.local/share/nomic.ai/GPT4ALL. The model files are gguf. I dont know where the files ought to be, and why they are in this location. What do you think is going on? Is there a path variable I need to change? Apologies for the noob questions. | 2023-11-13T02:34:08 | https://www.reddit.com/r/LocalLLaMA/comments/17u0rix/gpt4all_on_ubuntu_need_help/ | pushing_film | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u0rix | false | null | t3_17u0rix | /r/LocalLLaMA/comments/17u0rix/gpt4all_on_ubuntu_need_help/ | false | false | self | 1 | null |
Hilarious reply by an LLM model | 2 | I was testing an LLM model on my local machine today and got a hilarious response.
print(llm("AI is going to"))
*"be. I don’t mean to be a downer, but my first experience with this kind of system was not pleasant in any sense. It took me days to figure out how the hell it worked. Then I had to do all these things like go through hundreds of pictures, which were not really relevant, and so on. And then one day I finally got it. It showed up as some weird text with no context at all. And then it just disappeared.*
*This is an AI system that can’t even figure out how to use a keyboard. So what do you think of this? Does it make you want to throw your phone against the wall or something? Or maybe you are going to be like me and get bored after 5 minutes of watching something on YouTube because you have nothing else to watch at home?*
*I mean, I guess if there is no way for an AI system to figure out how to use a keyboard then it’s pretty useless. But then again, that seems to be the case with most things in life. And really, who needs any more computer power when we already have all this stuff like Google Chrome or Microsoft Word. Which, by the way, are pretty shitty programs anyway. They do not work very well"* | 2023-11-13T02:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/17u0alj/hilarious_reply_by_llm_model/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u0alj | false | null | t3_17u0alj | /r/LocalLLaMA/comments/17u0alj/hilarious_reply_by_llm_model/ | false | false | self | 2 | null |
Hardware question for the jank-loving and those who can't afford more, like me :D. From which Xeon or EPYC generation onwards can we assume that server mainboards support the NVIDIA P40 cards? | 1 | The title kind of says everything. Yes, I am aware of them not being good for fine-tuning, but I wanna use them for inference and RAG anyway, so that should probably not be a problem.
I am a bit confused. I know that on the consumer side you need a motherboard with official ReBAR/SAM support. But those are a lot newer than the P40 cards, so I assume that server hardware from 2016 onwards should run those cards in general, since they needed to be compatible back then as well, right?
So I would appreciate any pointers and examples people can give me; let me know about your setups that use older hardware.
Thanks in advance :) | 2023-11-13T01:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/17u01ti/hardware_question_for_the_jank_loving_and_the/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17u01ti | false | null | t3_17u01ti | /r/LocalLLaMA/comments/17u01ti/hardware_question_for_the_jank_loving_and_the/ | false | false | self | 1 | null |
64GB RAM vs 3060 12GB vs Intel A770? | 9 | I'm currently using a GTX 1650 4GB, an AMD 5600, and 32GB of RAM.
I have some spare cash to throw at learning more about local LLMs.
Should I get:
A. 64 GB RAM (2 X 32GB)
B. 3060 12GB
C. Intel A770 16GB.
I'm using OpenHermes 2.5 Mistral 7B Q5_K_M GGUF, which gives OK-ish performance for SillyTavern with koboldcpp.
But when the context goes above 3k, it crawls.
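For rough framing, here is a back-of-the-envelope sketch of how the fp16 KV cache grows with context; the defaults assume Mistral-7B's shape (32 layers, 8 KV heads via GQA, head dim 128), so treat the numbers as estimates:

```python
# Hedged estimate of KV-cache memory vs. context length for a GQA model.
def kv_cache_gb(ctx_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per=2):
    # factor of 2 covers both keys and values
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per / 1e9

for ctx in (2048, 4096, 8192):
    print(f"{ctx:>5} tokens -> ~{kv_cache_gb(ctx):.2f} GB of KV cache")
```

On a 4GB card most layers end up in system RAM anyway, which is likely why long prompts crawl; more RAM (option A) and a 12-16GB GPU (options B/C) attack that from different ends.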
Please advise which option you think I should take first. Thanks a bunch. | 2023-11-13T01:08:36 | https://www.reddit.com/r/LocalLLaMA/comments/17tz203/64gb_ram_vs_3060_12gb_vs_intel_a770/ | yahdahduhe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tz203 | false | null | t3_17tz203 | /r/LocalLLaMA/comments/17tz203/64gb_ram_vs_3060_12gb_vs_intel_a770/ | false | false | self | 9 | null |
Can I run an LLM that takes up no more than 1-4GB of RAM / VRAM and have it answer questions using my notes, or is that unrealistic? | 5 | I have an 8GB M1 MacBook Air and a 16GB MBP (that I haven't turned in for repair) that I'd like to run an LLM on, to ask questions and get answers from the notes in my Obsidian vault (hundreds of markdown files). I've been lurking this subreddit, but I'm not sure whether I could run <7B LLMs in 1-4GB of RAM, or whether those LLMs would be too low-quality. | 2023-11-13T01:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/17tz1t4/can_i_run_an_llm_that_takes_up_no_more_than_14gb/ | TheTwelveYearOld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tz1t4 | false | null | t3_17tz1t4 | /r/LocalLLaMA/comments/17tz1t4/can_i_run_an_llm_that_takes_up_no_more_than_14gb/ | false | false | self | 5 | null |
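This is feasible: a small embedding model handles retrieval in well under 1GB, and only the final answering step needs an LLM. A minimal sketch of the retrieval half, assuming `sentence-transformers` is installed; the vault path and query are placeholders:

```python
# Hedged sketch: tiny local semantic search over an Obsidian vault.
from pathlib import Path

from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

VAULT = Path("~/Obsidian/MyVault").expanduser()  # placeholder: your vault path

# Naive fixed-size chunking of every markdown file in the vault.
chunks = []
for md in VAULT.rglob("*.md"):
    text = md.read_text(errors="ignore")
    chunks += [(md.name, text[i:i + 800]) for i in range(0, len(text), 800)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small enough for an 8GB machine
corpus_emb = model.encode([c[1] for c in chunks], convert_to_tensor=True)

query = "what did I write about spaced repetition?"  # placeholder question
hits = util.semantic_search(model.encode([query], convert_to_tensor=True), corpus_emb, top_k=3)[0]
for h in hits:
    name, chunk = chunks[h["corpus_id"]]
    print(f"{name} (score {h['score']:.2f}): {chunk[:120]!r}")
```

The top chunks can then be pasted into the prompt of a small quantized model; a 3B at Q4 fits in roughly 2GB.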
We are Higgsfield AI. We have a large GPU cluster and want to finetune your dataset. | 1 | [removed] | 2023-11-13T00:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/17tye2j/we_are_higgsfield_ai_we_have_a_large_gpu_cluster/ | Good-Willingness-985 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tye2j | false | null | t3_17tye2j | /r/LocalLLaMA/comments/17tye2j/we_are_higgsfield_ai_we_have_a_large_gpu_cluster/ | false | false | self | 1 | null |
We are Higgsfield AI. We have a large GPU cluster and want to finetune your dataset. | 1 | [removed] | 2023-11-13T00:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/17tyam9/we_are_higgsfield_ai_we_have_a_large_gpu_cluster/ | higgsfield_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tyam9 | false | null | t3_17tyam9 | /r/LocalLLaMA/comments/17tyam9/we_are_higgsfield_ai_we_have_a_large_gpu_cluster/ | false | false | self | 1 | null |
One of the coolest uses of AI to me is your own personal tutor. Here's me testing a character card I had been toying with. Model is CodeBooga 34b q8 | 72 | 2023-11-12T23:42:23 | SomeOddCodeGuy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17txa63 | false | null | t3_17txa63 | /r/LocalLLaMA/comments/17txa63/one_of_the_coolest_uses_of_ai_to_me_is_your_own/ | false | false | 72 | {'enabled': True, 'images': [{'id': 'DbUsjqFt_nw_kA6He1zcBfrhlizUo_8OcvyZlk9vRpg', 'resolutions': [{'height': 198, 'url': 'https://preview.redd.it/9a4qqmcq600c1.png?width=108&crop=smart&auto=webp&s=43b0750908983f3e83f079553ed83b3e09178959', 'width': 108}, {'height': 396, 'url': 'https://preview.redd.it/9a4qqmcq600c1.png?width=216&crop=smart&auto=webp&s=e5a30d3b243adde4124f02c9f02f506ef997dd0e', 'width': 216}, {'height': 587, 'url': 'https://preview.redd.it/9a4qqmcq600c1.png?width=320&crop=smart&auto=webp&s=c1df254c9f589198a3760f12255231e6e0f77b26', 'width': 320}, {'height': 1175, 'url': 'https://preview.redd.it/9a4qqmcq600c1.png?width=640&crop=smart&auto=webp&s=98355701b20b3c0a7a56e1634b6d6b3ba0d45853', 'width': 640}], 'source': {'height': 1594, 'url': 'https://preview.redd.it/9a4qqmcq600c1.png?auto=webp&s=b9d149a5110830c05c48e3ea37cd9097127daad6', 'width': 868}, 'variants': {}}]} | |||
ESP32 -> Willow -> Home Assistant -> Mistral 7b << | 130 | 2023-11-12T22:40:18 | https://v.redd.it/khvhwftnvzzb1 | sammcj | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17tvwk6 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/khvhwftnvzzb1/DASHPlaylist.mpd?a=1702420833%2CMGY0YmQ2YWVmZDcxNjNlMTExNjg0NjI3ZmM1ODQ1MDk0ODYyYmUwNDljNDdhOTkzODNkNzRkMzQ3YWY4ZGE4ZA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/khvhwftnvzzb1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/khvhwftnvzzb1/HLSPlaylist.m3u8?a=1702420833%2CMWMzNDVkNWE4MjYzOGI0ZWM4ODM0M2UxYTZiY2FlMzhmZDBmZWUxZjE5YjViYWJjOTRiZDVjZTRiN2Q0MWE0Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/khvhwftnvzzb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_17tvwk6 | /r/LocalLLaMA/comments/17tvwk6/esp32_willow_home_assistant_mistral_7b/ | false | false | 130 | {'enabled': False, 'images': [{'id': 'LiGf4pyEeLFySLfVCDdoC4u5rs4mj9Bl3m5HIAco-BU', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?width=108&crop=smart&format=pjpg&auto=webp&s=9ddce802d6048eea0706ff76f8091ebe496af555', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?width=216&crop=smart&format=pjpg&auto=webp&s=af95e32db97ecfdb82d0bda051ab4a426670831f', 'width': 216}, {'height': 223, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?width=320&crop=smart&format=pjpg&auto=webp&s=93df5ed8d601aeef5dfe97aa855a772c4242badc', 'width': 320}, {'height': 447, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?width=640&crop=smart&format=pjpg&auto=webp&s=a7e6f43555676c94c948fb9c6ab8f1dfa31c92a6', 'width': 640}, {'height': 671, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?width=960&crop=smart&format=pjpg&auto=webp&s=98596b3663d0e7b3dbaa9e63d98604f0774ea157', 'width': 960}, {'height': 755, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d9876d6caed2fad7254bfb7f1c11705b0a1f308b', 'width': 1080}], 'source': {'height': 895, 'url': 'https://external-preview.redd.it/50j_F82A8L66GpBiEEz0NB0VvoMz9rFTj3SMlyt-CXE.png?format=pjpg&auto=webp&s=c91a3128f48b84e4d2cf2b16aff5f8aed1b9ca96', 'width': 1280}, 'variants': {}}]} | ||
What is the best current Local LLM to run? | 2 | Which is the best 13B model? | 2023-11-12T22:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/17tvgef/what_is_the_best_current_local_llm_to_run/ | Regular_Instruction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tvgef | false | null | t3_17tvgef | /r/LocalLLaMA/comments/17tvgef/what_is_the_best_current_local_llm_to_run/ | false | false | default | 2 | null |
RTX 3090 34B inference vs power setting | 10 | I performed an experiment with eight 33-34B models I use for code evaluation and technical assistance, to see what effect GPU power limiting has on RTX 3090 inference speed.
All models were GGUF q4 quants. Each model was run only once due to time constraints, and each was served the identical prompt: generate a bash script according to given instructions.
I'll abstain from attempting an analysis; you can draw your own conclusions.
Test data below:
`Set(W) Meas(W) GPU% M1 M2 M3 M4 M5 M6 M7 M8 AvgT/s Eff(tok/kJ)`
`300 291 79,50% 27,26 26,14 33,21 27,54 34,83 32,56 24,58 31,1 29,65 101`
`280 274 80,50% 26,78 27,18 33,25 27,39 34,19 30,48 26,31 31,34 29,62 108`
`260 253 81,50% 26,03 23,61 29,91 26,33 31,73 30,48 26,27 30,39 28,09 111`
`240 233 82,00% 23,71 23,13 31,49 23,64 30,12 29,72 22,5 30,93 26,91 115`
`220 217 84,50% 19,76 20,04 25,34 19,99 26,89 24,93 22,18 25,06 23,02 106`
`200 197 87,50% 15,46 14,35 19,86 15,63 20,45 19,56 16,42 19,43 17,65 89`
`180 179 89,50% 11,39 10,57 14,58 11,03 14,67 14,13 12,32 13,65 12,79 71`
`160 161 93,00% 7,93 6,79 9,1 7,42 9,45 8,82 7,94 8,78 8,28 51`
`140 160 95,00% 7,31 6,8 8,9 6,78 9,14 7,52 7,37 8,27 7,76 48`
`120 160 95,00% 6,81 6,31 8,19 6,97 8,46 7,56 6,93 8,24 7,43 46`
`M1 51L airoboros-c34b-3.1.2.Q4_K_M.gguf`
`M2 51L Zephyrus-L1-33B.q4_K_M.gguf`
`M3 51L codellama-34b-instruct.Q4_0.gguf`
`M4 51L phind-codellama-34b-v2.Q4_K_M.gguf`
`M5 51L tora-code-34b-v1.0.Q4_0.gguf`
`M6 51L wizardcoder-python-34b-v1.0.Q4_0.gguf`
`M7 64L yi-34b.Q4_K_M.gguf`
`M8 51L ziya-coding-34b-v1.0.Q4_0.gguf`
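For anyone wanting to reproduce this, a minimal sketch of the measurement loop. Assumptions: `llama-cpp-python` is installed, `nvidia-smi -pl` runs with root privileges, the model path and prompt are placeholders, and `n_gpu_layers=51` mirrors the 51L notes above on the guess that it is the offloaded layer count.

```python
# Hedged sketch: sweep GPU power limits and measure generation throughput.
import subprocess
import time

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="phind-codellama-34b-v2.Q4_K_M.gguf", n_gpu_layers=51)

for watts in (300, 280, 260, 240, 220, 200, 180, 160, 140, 120):
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)  # needs root
    t0 = time.time()
    out = llm("Write a bash script that ...", max_tokens=256)  # placeholder prompt
    tokens = out["usage"]["completion_tokens"]
    print(f"{watts} W: {tokens / (time.time() - t0):.2f} t/s")
```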
| 2023-11-12T22:19:41 | https://www.reddit.com/r/LocalLLaMA/comments/17tvg6e/rtx_3090_34b_inference_vs_power_setting/ | crantob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tvg6e | false | null | t3_17tvg6e | /r/LocalLLaMA/comments/17tvg6e/rtx_3090_34b_inference_vs_power_setting/ | false | false | self | 10 | null |
How I am running a distributed, self-hosted llm rag and reinforcement learning with human feed back architecture for multiple streamlit web applications | 1 | [removed] | 2023-11-12T21:55:14 | https://www.reddit.com/r/LocalLLaMA/comments/17tuw1l/how_i_am_running_a_distributed_selfhosted_llm_rag/ | buildingstuff5432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tuw1l | false | null | t3_17tuw1l | /r/LocalLLaMA/comments/17tuw1l/how_i_am_running_a_distributed_selfhosted_llm_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0_c5yNIMSC7hX3-fQKSs50c4uz87Ne19yLJ4rXCmkeE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?width=108&crop=smart&auto=webp&s=479c31b8ee8fa111b5ca97dd7f05c879cfaa896a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?width=216&crop=smart&auto=webp&s=ecaded38164786b542d5f584b33d189b64b29177', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?width=320&crop=smart&auto=webp&s=76ab84fa436c041311b3f7ce7db908042e63a1c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?width=640&crop=smart&auto=webp&s=e8fa38231f4bd4d368e74e075dda348a17137920', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?width=960&crop=smart&auto=webp&s=d0e578fb9a082aa0df115775de4bfbe3e7f6aa45', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?width=1080&crop=smart&auto=webp&s=869498fa22488589f39ac273f21091b0f63584bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gXHi16asHnGyTR9jObcQfnpSrtuywRvT0vafYHnwDow.jpg?auto=webp&s=6333755bb23e0f75f720f85182c25d55cf61e6a3', 'width': 1200}, 'variants': {}}]} |
How to enable bot_message in history of Langchain ConversationBufferMemory using PromptTemplate? | 1 | Does langchain allow using external prompt template i.e. template enabled using langchain.prompts.prompt.PrompTemplate with ConversationChain? I am working on creating a chat bot based on llama-2 but since the llama-2 chat format doesnot have any way of suggesting the usage of assitant\_message which can be easily detected by the ConversationBufferMemory.
I think one prompt template which can work is this:
`<s>[INST] <<SYS>> {{system prompt}} <</SYS>> {{user_msg_1}} [/INST] {{model_answer1}} </s><s>[INST] {{user_msg_2}} [/INST]`
But I am not sure how to get ConversationBufferMemory to understand and populate it. | 2023-11-12T20:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/17ttgs0/how_to_enable_bot_message_in_history_of_langchain/ | New_Kangaroo9522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ttgs0 | false | null | t3_17ttgs0 | /r/LocalLLaMA/comments/17ttgs0/how_to_enable_bot_message_in_history_of_langchain/ | false | false | self | 1 | null |
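A minimal sketch of one way to wire this up with the classic LangChain API. The `human_prefix`/`ai_prefix` trick only approximates the llama-2 `[INST]` format rather than reproducing it byte-for-byte, and the model path and system prompt are placeholders:

```python
# Hedged sketch: custom PromptTemplate + ConversationBufferMemory with llama-2-style prefixes.
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = LlamaCpp(model_path="llama-2-7b-chat.Q4_K_M.gguf")  # placeholder path

# ConversationChain expects exactly {history} and {input}.
template = """<s>[INST] <<SYS>> You are a helpful assistant. <</SYS>>
{history}
{input} [/INST]"""
prompt = PromptTemplate(input_variables=["history", "input"], template=template)

# The memory labels each stored turn with these prefixes when rendering {history}.
memory = ConversationBufferMemory(human_prefix="[INST]", ai_prefix="[/INST]")

chain = ConversationChain(llm=llm, prompt=prompt, memory=memory)
print(chain.predict(input="Hello!"))
```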
Local PDF chat solutions? | 7 | Hi everyone, first post here, I hope it's not against the rules. I'm looking for a "PDF/doc chat" that can be run locally. I found MultiPDF Chat and OP Vault, but they send API calls to ChatGPT.
Then I found rag-stack, but I spent hours trying to make it work on Ubuntu, solving one error message after another, and I still can't get it running.
I'm not on top of AI stuff, so I'm a little lost: does a local PDF chat exist? | 2023-11-12T19:42:23 | https://www.reddit.com/r/LocalLLaMA/comments/17trw0x/local_pdf_chat_solutions/ | TastelessBuild2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17trw0x | false | null | t3_17trw0x | /r/LocalLLaMA/comments/17trw0x/local_pdf_chat_solutions/ | false | false | self | 7 | null |
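One workable route is assembling it yourself: text extraction plus chunking is the unglamorous half, and any local embedding + LLM combo handles the rest. A minimal sketch of that half, assuming `pypdf` is installed; the filename is a placeholder:

```python
# Hedged sketch: extract and chunk PDF text for a local "chat with your docs" pipeline.
from pypdf import PdfReader  # pip install pypdf

def pdf_chunks(path, size=1000, overlap=200):
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = pdf_chunks("mydoc.pdf")  # placeholder filename
print(f"{len(chunks)} chunks, first one starts: {chunks[0][:100]!r}")
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.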
Is there any no-code GUI for fine-tuning image-to-text models? | 1 | Like BLIP, Kosmos, or ViT. | 2023-11-12T18:51:11 | https://www.reddit.com/r/LocalLLaMA/comments/17tqrwp/there_is_any_nocode_gui_for_imagetotext_models/ | davide445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tqrwp | false | null | t3_17tqrwp | /r/LocalLLaMA/comments/17tqrwp/there_is_any_nocode_gui_for_imagetotext_models/ | false | false | self | 1 | null |
Anyone using RX7900XTX, successfully? | 11 | I'm considering swapping my 4080 for a 7900 XTX, and I'd have money left over for RAM... I know Radeons were crap for the longest time with regard to ML, but it seems to be getting better, so I was wondering whether someone has managed to get it to work? I'm even considering a dual-card setup to maybe be able to run 70Bs... | 2023-11-12T18:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/17tqfqu/anyone_using_rx7900xtx_successfully/ | morphles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17tqfqu | false | null | t3_17tqfqu | /r/LocalLLaMA/comments/17tqfqu/anyone_using_rx7900xtx_successfully/ | false | false | self | 11 | null |
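If anyone tries it, a minimal sketch of the first sanity check, assuming a ROCm build of PyTorch (whose `torch.cuda.*` API also drives AMD cards):

```python
# Hedged sketch: confirm a ROCm PyTorch build sees the 7900 XTX.
import torch

print("device visible:", torch.cuda.is_available())
print("HIP/ROCm version:", getattr(torch.version, "hip", None))  # None on CUDA builds
if torch.cuda.is_available():
    print("name:", torch.cuda.get_device_name(0))
```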