title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Quantization: How Much Quality is Lost? | 52 | Yeehaw, y'all 🤠
I've been pondering a lot about quantization and its impact on large language models (LLMs). As you all may know, quantization techniques like 4-bit and 8-bit quantization have been a boon for us consumers, allowing us to run larger models than our hardware could otherwise handle. However, it's clear that there has to be a trade-off.
Quantization essentially involves reducing the precision of the numbers used in the weights of the model. This reduction in precision leads to a decrease in model size and computational requirements, but it also introduces an approximation error. The question is, just how much does this approximation error impact the quality of the model's output?
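To make the trade-off concrete, here is a toy, self-contained sketch of symmetric round-to-nearest quantization. Real schemes (GPTQ, llama.cpp's k-quants) are block-wise and far more sophisticated, and the weights below are made up for illustration:

```python
# Toy sketch of symmetric round-to-nearest quantization, just to show where the
# approximation error comes from. Real schemes quantize per block and keep
# extra correction terms; this is the bare minimum version.

def quantize(weights, bits):
    """Map floats to signed integers with `bits` of precision; return (ints, scale)."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.82, -0.33, 0.05, -0.91, 0.47]  # made-up "weights"
for bits in (8, 4):
    q, scale = quantize(weights, bits)
    err = max(abs(w - r) for w, r in zip(weights, dequantize(q, scale)))
    print(f"{bits}-bit max abs error: {err:.4f}")
```

The 4-bit round trip shows a visibly larger maximum error than the 8-bit one; that gap is exactly the approximation error being asked about.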
Has anyone in this community done any testing or come across any research that quantifies the impact of quantization on the quality of LLMs? I'm particularly interested in real-world experiences and practical examples. And if anyone has some insight into how the quality loss due to quantization might be mitigated, that'd be icing on the cake.
Thanks in advance for your insights and contributions to this discussion! | 2023-07-19T05:11:06 | https://www.reddit.com/r/LocalLLaMA/comments/153lfc2/quantization_how_much_quality_is_lost/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153lfc2 | false | null | t3_153lfc2 | /r/LocalLLaMA/comments/153lfc2/quantization_how_much_quality_is_lost/ | false | false | self | 52 | null |
Updates on the Agency library, now with AMQP support | 3 | Hello LocalLlama!
I've previously posted about my Python agent library called [`agency`](https://github.com/operand/agency). There are some great updates to share, so I thought I'd post another follow-up.
I've got two big things:
1. A couple weeks back I added AMQP (RabbitMQ) support to the library. This means that you can now use `agency` to create a distributed or multiprocess system for your agents, avoiding the single-threaded limitations of python's GIL. Want to run multiple copies of an LLM and split load between them over a network? This could help with that.
2. I've just begun a project to replace the crude "demo" app with a modern "starter" web UI that you will be able to customize for your needs. It will require programming on your part, but the goal is to offer a good minimal foundation. It'll use a still-to-be-chosen UI framework; Streamlit was suggested and is one good possibility. If you happen to have any opinions on this, I'd love to know!
Other updates since I last posted include lots of documentation work, tests, a better docker configuration... so many things.
I'm excited and I just wanted to spread the word some more. I hope this project is helpful to some of you! If you have any suggestions or feedback to share please do!
[https://github.com/operand/agency](https://github.com/operand/agency) | 2023-07-19T05:08:36 | https://www.reddit.com/r/LocalLLaMA/comments/153ldk7/updates_on_the_agency_library_now_with_amqp/ | helloimop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ldk7 | false | null | t3_153ldk7 | /r/LocalLLaMA/comments/153ldk7/updates_on_the_agency_library_now_with_amqp/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'm6th8V7E7zzx2CCzdbrW9zvqJWfFxufjUAokKdD9Qaw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=108&crop=smart&auto=webp&s=875dcf7e2c9c07458396f503d7cf2976a3c33503', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=216&crop=smart&auto=webp&s=99e96338557344e1b9e38df9f3f65166764d632c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=320&crop=smart&auto=webp&s=62c67e35464947430c0128ebdeb5046fed9500cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=640&crop=smart&auto=webp&s=401bb1677a6550313d8213f8cfc9752a105ca587', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=960&crop=smart&auto=webp&s=33b8ffcf7dd0812dedebc0374647a5281dddcf1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=1080&crop=smart&auto=webp&s=965489bf80f760a497b585a641250189335e583f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?auto=webp&s=6800f296fa85899c024b7b7dae5664e6c8dfb5a5', 'width': 1200}, 'variants': {}}]} |
Llama 2 is disappointing | 10 | With each new model launch, it's claimed that the new model at least matches the quality of ChatGPT-3.5, yet the outcomes often fall short of expectations. Llama 2 hasn't broken this trend.
I gave the finetuned Llama 2 70b Chat model a go: [**https://huggingface.co/spaces/ysharma/Explore\_llamav2\_with\_TGI**](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI)
After posing a series of questions and pitting its responses against those of ChatGPT-3.5, the results were underwhelming. I'd venture to say that it performs similarly to the Guanaco 65b model. It's nowhere near the level of ChatGPT-3.5.
What's more, it seems to exercise an unprecedented level of censorship, surpassing even that of ChatGPT. Notably, it abstained from answering queries about the concentration camps in Xinjiang and the Tiananmen Square massacre, let alone anything NSFW.
I'm left hoping that the base model doesn't come with this level of censorship. Can they even censor the base model? | 2023-07-19T04:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/153kxwx/llama_2_is_disappointing/ | Big_Communication353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153kxwx | false | null | t3_153kxwx | /r/LocalLLaMA/comments/153kxwx/llama_2_is_disappointing/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'jVfONw70j9xFSHK2sFB3j_M0ywv9sgZ9DCoGJM3sD1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=108&crop=smart&auto=webp&s=6f30a110b0af985d1d8140231cd4f0316a48f80a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=216&crop=smart&auto=webp&s=18122badbb0673e3d2c949b02a62535ad0268572', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=320&crop=smart&auto=webp&s=a3a6874f2f569d6490f02927f21a3c3e78e9a0ae', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=640&crop=smart&auto=webp&s=9bbfeeef3d9753edf8888a8f9ce00339a706d5ff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=960&crop=smart&auto=webp&s=91030b6e805f84c6845685181ff119ba5e84cb74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?width=1080&crop=smart&auto=webp&s=abdea90c83851c15d5895a8766fc50c58b617af9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oxSfOoIcjr7WIJyWU1rqijaKy4fO5uiVIp7HTNVufXM.jpg?auto=webp&s=33c126b9bbf110d3f9c7ca2f2417fa61c7220da7', 'width': 1200}, 'variants': {}}]} |
Llama2 Censored? | 3 | You can't get it to write a children's story that involves the end of the world 🥲 | 2023-07-19T04:17:20 | https://www.reddit.com/gallery/153kd62 | Sumozebra | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 153kd62 | false | null | t3_153kd62 | /r/LocalLLaMA/comments/153kd62/llama2_censored/ | false | false | 3 | null |
How much better is Llama 2? A simple example | 1 | 2023-07-19T04:13:17 | https://github.com/hegelai/prompttools/blob/main/examples/notebooks/LlamaHeadToHead.ipynb | hegel-ai | github.com | 1970-01-01T00:00:00 | 0 | {} | 153ka5o | false | null | t3_153ka5o | /r/LocalLLaMA/comments/153ka5o/how_much_better_is_llama_2_a_simple_example/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'VTGUfFaipsQFbnM1m8I4CHic0B28M3GqpoMOgz7_bo0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=108&crop=smart&auto=webp&s=da8fa70753162fa655cd03ab79dac2793daf5445', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=216&crop=smart&auto=webp&s=4eebeb2e1c8320e6f5686a2a546386dd672f90e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=320&crop=smart&auto=webp&s=432fb876e7f69640e456990bfedefb2a667b183d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=640&crop=smart&auto=webp&s=fd464dc0d3a9b264865d5d8f391e4aab136d005e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=960&crop=smart&auto=webp&s=96f1068ed99dba73011343f625a1a0106059520f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?width=1080&crop=smart&auto=webp&s=1bf92dd069cc70b53c54480336a4e40a4c953252', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jW7Q-WB7W-0SGDaI6wRH3CjPIsGNii1Ur-ZO62DnuK8.jpg?auto=webp&s=ce48118b1b3caf2be70e4ac9b7d8e8fa04a2952a', 'width': 1200}, 'variants': {}}]} | ||
Seems like we can continue to scale tokens and get returns model performance well after 2T tokens. | 122 | 2023-07-19T04:00:37 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153k0ph | false | null | t3_153k0ph | /r/LocalLLaMA/comments/153k0ph/seems_like_we_can_continue_to_scale_tokens_and/ | false | false | 122 | {'enabled': True, 'images': [{'id': '_QvOymv7LHPPeoIF0xYPiNpwwGlkPBC6RDLPxxqMNRA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=108&crop=smart&auto=webp&s=c0b1095e36e2bf269b9094f938d0ded128b93feb', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=216&crop=smart&auto=webp&s=52dacf373348098566d36eb00918f6986d4d1181', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=320&crop=smart&auto=webp&s=42b67b04e30373397d1232baf6e00a7cf5ef7ec0', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=640&crop=smart&auto=webp&s=b21f72931005ea1a675bcf031d7708603206a714', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=960&crop=smart&auto=webp&s=57fca239e06347ff74fa65bdfdc08e98e1efe234', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?width=1080&crop=smart&auto=webp&s=7cbe39251a8305709771ffd06031a819fcf32ba2', 'width': 1080}], 'source': {'height': 847, 'url': 'https://preview.redd.it/9twlx3ctgucb1.png?auto=webp&s=dbbae5461ea2473e0ea7bc9f8f9a8ec0447a65ef', 'width': 1505}, 'variants': {}}]} | |||
Where is the LLaMA v2 34B model? Are they not releasing it? | 2 | 2023-07-19T03:27:02 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153jcbz | false | null | t3_153jcbz | /r/LocalLLaMA/comments/153jcbz/where_is_the_llama_v2_34b_model_are_they_not/ | false | false | 2 | {'enabled': True, 'images': [{'id': '-u5tAilH5APZR_ICtEtXa3q1MlcnfsnIDi5nJFqsXvs', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=108&crop=smart&auto=webp&s=fd0ebd9811f5f990c5d375ba8718592586e0bb0b', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=216&crop=smart&auto=webp&s=2f94b8639cf1743d5cfe22e7053259c6a56910f1', 'width': 216}, {'height': 123, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=320&crop=smart&auto=webp&s=21d3be3e753846fbe840d0b0772168860c8a3ec8', 'width': 320}, {'height': 247, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=640&crop=smart&auto=webp&s=53d7315ea978a677cb2681c5a121286653d30c2e', 'width': 640}, {'height': 370, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=960&crop=smart&auto=webp&s=599e99b1fb6a1e8faa5d3da8c349207fcef78fde', 'width': 960}, {'height': 416, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?width=1080&crop=smart&auto=webp&s=4e12d327ac7545e601fe60e883631f84271eb4fd', 'width': 1080}], 'source': {'height': 562, 'url': 'https://preview.redd.it/gwo80nuzbucb1.png?auto=webp&s=7dcc79f57f453db9fb4c02ba85b050c298a4de5a', 'width': 1456}, 'variants': {}}]} | |||
local llm function calling | 4 | I was working on a few internal prototypes at work and I was using the chatGPT function calling. It's crazy powerful. Can this structure be represented using llama-2 for example for do I just have to write a fancy prompt? | 2023-07-19T03:02:23 | https://www.reddit.com/r/LocalLLaMA/comments/153iu1x/local_llm_function_calling/ | APUsilicon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153iu1x | false | null | t3_153iu1x | /r/LocalLLaMA/comments/153iu1x/local_llm_function_calling/ | false | false | self | 4 | null |
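As far as I know, nothing like OpenAI's function-calling API is built into the Llama 2 weights, so in practice it comes down to a structured prompt plus strict parsing of the reply. A minimal sketch of the parsing/validation side (the schema format, `get_weather`, and the mocked model output below are all made up for illustration):

```python
import json

# Hypothetical schema registry; in a real system this would mirror whatever
# function descriptions you put into the prompt.
FUNCTIONS = {
    "get_weather": {"required": ["city"]},
}

def parse_function_call(model_output):
    """Parse and validate a JSON 'function call' emitted by the model."""
    call = json.loads(model_output)
    spec = FUNCTIONS.get(call.get("name"))
    if spec is None:
        raise ValueError(f"unknown function: {call.get('name')}")
    missing = [a for a in spec["required"] if a not in call.get("arguments", {})]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return call["name"], call["arguments"]

# Pretend the model returned this after seeing the schema in its prompt:
fake_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(parse_function_call(fake_output))
```

If the JSON fails to parse or validate, a common strategy is to feed the error back to the model and ask it to retry.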
Llama 2 - LLM Leaderboard Performance | 47 | Multiple leaderboard evaluations for Llama 2 are in and overall it seems quite impressive.
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is the most popular leaderboard, but I'm not sure it can be trusted right now, since it's been under revision for the past month because apparently both its MMLU and ARC scores are inaccurate. Nonetheless, they did add Llama 2, and the 70b-chat version has taken 1st place. Each version of Llama 2 on this leaderboard is about equal to the best finetunes of Llama.
[https://github.com/aigoopy/llm-jeopardy](https://github.com/aigoopy/llm-jeopardy)
On this leaderboard the Llama 2 models are actually some of the worst models on the list. Does this just mean Llama 2 doesn't have trivia-like knowledge?
[https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i\_R6I6W/edit#gid=2011456595](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=2011456595)
Lastly, Llama 2 performed incredibly well on this open leaderboard. It far surpassed the other models at 7B and 13B, and if the leaderboard ever tests 70B (or 33B when it is released) it seems quite likely that it would beat GPT-3.5's score.
What are your guys' thoughts on Llama 2's performance and the potential of its finetunes? | 2023-07-19T02:50:49 | https://www.reddit.com/r/LocalLLaMA/comments/153ikue/llama_2_llm_leaderboard_performance/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ikue | false | null | t3_153ikue | /r/LocalLLaMA/comments/153ikue/llama_2_llm_leaderboard_performance/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
Llama 2 13b with kobold cpp issue. | 1 | Trying to use llama 2 13b on kobold cpp, it can't even form a complete paragraph. Any pointers guys? | 2023-07-19T02:48:39 | SuggestionInside5234 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 153ij6s | false | null | t3_153ij6s | /r/LocalLLaMA/comments/153ij6s/llama_2_13b_with_kobold_cpp_issue/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'aL9Pm6DnJZMWydP8LK_0O3KvtvDadivkUIB5k3wXMQI', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=108&crop=smart&auto=webp&s=b967fb95c604bb6f109292836ce2bfcd128ce047', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=216&crop=smart&auto=webp&s=539fb15819a95488b75a7b3669e4a5538bf5d617', 'width': 216}, {'height': 151, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=320&crop=smart&auto=webp&s=04ab198d29d0baccda8f6126539b8f2658137cf9', 'width': 320}, {'height': 302, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=640&crop=smart&auto=webp&s=fc9f1401a489d13f5ebb1a3067610c3a8135d251', 'width': 640}, {'height': 454, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=960&crop=smart&auto=webp&s=69597590fcfe9f882b25c07af8b1b17ed40a615e', 'width': 960}, {'height': 511, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?width=1080&crop=smart&auto=webp&s=0036618705797babce6df0f7ffbf5eb7373511d5', 'width': 1080}], 'source': {'height': 1908, 'url': 'https://preview.redd.it/0p8c3iof5ucb1.jpg?auto=webp&s=48e3f0974fec19d456208c2a67eb5eb2e51433d6', 'width': 4032}, 'variants': {}}]} | ||
How to finetune Guanaco or Airoboros 13B/4bit on Windows? | 1 | Hi all. I'm struggling to find a way to finetune 4-bit models on Windows. It seems that to do so in oobabooga, it needs GPTQ-for-LLaMA, which needs Triton, which needs Linux (or WSL; I attempted to install oobabooga there, but it was the end of a long day of getting to this point and I just didn't have the heart left to learn WSL/Linux commands to troubleshoot the install).
Am I missing a better or easier route to finetuning 4-bit versions of Guanaco/Airoboros 13B?
Related question: I'm doing this all on a 3090. Am I destined to fail regardless due to the 24GB VRAM cap? | 2023-07-19T02:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/153i9tu/how_to_finetune_guanaco_or_airoboros_13b4bit_on/ | TheNitzel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153i9tu | false | null | t3_153i9tu | /r/LocalLLaMA/comments/153i9tu/how_to_finetune_guanaco_or_airoboros_13b4bit_on/ | false | false | self | 1 | null |
No, Llama 2 is NOT an open source LLM | 12 | I have seen many people call Llama 2 the most capable open source LLM. This is not true, so please, please stop spreading this misinformation. It is doing more harm than good.
Open source means two things:
- Anyone can access the code and weights and use them however they want, no strings attached.
- Anyone can use the model for any purpose, no strings attached.
The Llama 2 license doesn't allow these two things.
First, regarding the model:
>2. **Additional Commercial Terms.** If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
While this affects less than 1% of the world, it still violates the essence of open source ML.
Second, regarding the weights: the weights are not publicly provided by Meta. You have to apply to get a copy of the weights from Meta. Furthermore, you can NOT use these weights to train any LLM except Llama 2 (unless you have written approval from Meta).
Lastly, while Llama 2 is NOT an open source LLM, it is still a big step towards democratizing LLMs, since people are going to use it anyway. Having a capable LLM to experiment with is a big catalyst for innovation in NLP.
TL;DR: Llama2 is good but not open source. | 2023-07-19T02:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/153i6vi/no_llama_2_is_not_an_open_source_llm/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153i6vi | false | null | t3_153i6vi | /r/LocalLLaMA/comments/153i6vi/no_llama_2_is_not_an_open_source_llm/ | false | false | self | 12 | null |
Business Training Application Feasibility | 3 | I am a CPA and can attest that small CPA shops are in the Stone Age technology-wise and also struggle with retaining senior associates to train their new hires. I was wondering if it would be feasible to work with CPA firms to document their processes and procedures and then have a developer use the data to train a local chatbot that could be used to answer questions for new hires? Sorry, I am not a developer, so I don't understand all the details in terms of what is possible right now, but I am trying to learn. | 2023-07-19T02:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/153i32b/business_training_application_feasability/ | Doggo_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153i32b | false | null | t3_153i32b | /r/LocalLLaMA/comments/153i32b/business_training_application_feasability/ | false | false | self | 3 | null |
any news on when Phi-1 will be released? | 2 | Microsoft said they'd release Phi-1, did they not? I haven't heard anything. is there an approximate timeline? | 2023-07-19T02:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/153hwlj/any_news_on_when_phi1_will_be_released/ | Cunninghams_right | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153hwlj | false | null | t3_153hwlj | /r/LocalLLaMA/comments/153hwlj/any_news_on_when_phi1_will_be_released/ | false | false | self | 2 | null |
Meta filtered LLaMa-2's training data to keep it G-rated as much as possible. If your intended use case isn't G-rated, you might be in for a disappointment. | 0 | Someone was posting [llama-chat being flirty earlier](/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/) (given a specific system prompt, of course), but that doesn't mean it's ever actually seen any kind of adult content. This may impact certain kinds of story writing, even if you're not going for anything X-rated.
In other words, it's not just trained to say "No" to writing your kinky story, it literally doesn't know how. | 2023-07-19T01:59:28 | https://www.reddit.com/r/LocalLLaMA/comments/153hgpx/meta_filtered_llama2s_training_data_to_keep_it/ | Maristic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153hgpx | false | null | t3_153hgpx | /r/LocalLLaMA/comments/153hgpx/meta_filtered_llama2s_training_data_to_keep_it/ | false | false | self | 0 | null |
Llama v2 chat... don't think it is there yet. Need Fine Tune. | 0 | ​
https://preview.redd.it/bl3q2fs6vtcb1.png?width=1385&format=png&auto=webp&s=5edfd6b146fb8eab5d474bafb5825927cfd287c3 | 2023-07-19T01:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/153haxw/llama_v2_chat_dont_think_it_is_there_yet_need/ | jackfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153haxw | false | null | t3_153haxw | /r/LocalLLaMA/comments/153haxw/llama_v2_chat_dont_think_it_is_there_yet_need/ | false | false | 0 | null | |
Fine-tuning for accuracy | 8 | Hi everyone,
During the last months, I've been reading some of the posts regarding fine-tuning and customizing local LLMs for particular scenarios (and some questions about what people are doing with those local LLMs).
TL;DR: What is the best strategy to tune a LLM to select from a limited but accurate set of customizable responses?
In my particular case, I have been researching how to convert natural language descriptions of data requirements into data quality validations. My ultimate goal is to be able to validate the data quality of Open Data datasets based on the metadata/descriptions available for each dataset.
So, usually each dataset has a corresponding PDF, with sentences describing the data such as:
- Contract offers can be: open, limited availability, direct adjudication
- Contract ID must not be null
- Contract start date is a date in the format MM/DD/YYYY
To check those rules, I am using a library called [Great Expectations](https://greatexpectations.io/expectations/?viewType=Summary), written in Python that offers several hundred pre-made validation rules, called expectations. There are around 75 core expectations and 250 experimental expectations.
These are some of the sample expectations that would be created:
- `expect_column_values_to_be_in_set("contract_offer", ["open", "limited availability", "direct adjudication"])`
- `expect_column_values_to_not_be_null("contract_ID")`
- `expect_column_values_to_match_strftime_format("contract_start_date", "%m/%d/%Y")`
It's important to know that a single natural language description can create several expectations. (as in: "We expect date values not to be null and formatted as per ISO 8601 standard format")
I have generated several of these rules using GPT-4, passing it the names of the expectations. Otherwise, the model will hallucinate, creating seemingly real answers that are 100% made up.
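One cheap mitigation, regardless of which model is used, is a post-processing allow-list: only generated calls whose names exist in the expectation catalog are accepted, so hallucinated rules get rejected rather than shipped. A rough sketch (the tiny catalog below stands in for the full set of roughly 325 expectations; a real version could also validate argument names):

```python
import re

# Hypothetical allow-list guard. The three entries below stand in for the real
# Great Expectations catalog of expectation names.
KNOWN_EXPECTATIONS = {
    "expect_column_values_to_be_in_set",
    "expect_column_values_to_not_be_null",
    "expect_column_values_to_match_strftime_format",
}

def extract_valid_expectations(model_output):
    """Keep only calls to expectation names that actually exist."""
    calls = re.findall(r"(expect_\w+)\(", model_output)
    return [c for c in calls if c in KNOWN_EXPECTATIONS]

llm_output = """
expect_column_values_to_not_be_null("contract_ID")
expect_column_values_to_be_unicorns("contract_ID")
"""
print(extract_valid_expectations(llm_output))  # the made-up name is dropped
```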
My goal would be to create those rules using a customized LocalLLM. Some ideas gathered from this subreddit:
- Vector databases can be used to recover factual information.
- LoRA can be used to customize the output format (but are not the best way to fine-tune for accuracy).
As I was researching this topic, several similar initiatives have appeared. [SodaGPT](https://docs.soda.io/soda-cloud/sodagpt.html) is a cloud service created by the data quality company Soda that converts a single natural language sentence into a data quality rule written in its own data quality language, SodaCL. It is a Falcon 7b (instruct?) model fine-tuned using LoRA.
One week later, the [BirdiDQ](https://github.com/BirdiD/BirdiDQ) open source project was announced, which converts natural language descriptions into expectations. I am aiming to contribute to this project; it works with a finetuned GPT-3.5 at the moment, but plans to use Falcon-7b instruct plus a QLoRA. Both fine-tunings have been trained on a custom dataset of 250 pairs of descriptions and expectations.
What are the most viable strategies when you have to select and customize from a limited number of possible alternatives and want to prevent any hallucinations in the output?
Thanks in advance!
P.S. After today's announcement, we might experiment with LLaMA 2 instead of Falcon. | 2023-07-19T01:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/153gc7p/finetuning_for_accuracy/ | elsatch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153gc7p | false | null | t3_153gc7p | /r/LocalLLaMA/comments/153gc7p/finetuning_for_accuracy/ | false | false | self | 8 | null |
LLaMA-2-70B-GPTQ-transformers4.32.0.dev0, 4bit quantization working with GPTQ for LLaMA! | 65 | 2023-07-19T00:34:58 | https://huggingface.co/Panchovix/LLaMA-2-70B-GPTQ-transformers4.32.0.dev0 | panchovix | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 153flql | false | null | t3_153flql | /r/LocalLLaMA/comments/153flql/llama270bgptqtransformers4320dev0_4bit/ | false | false | 65 | {'enabled': False, 'images': [{'id': '9SChMy77DY8uX1j6uPNCVG94VDqvd5nSNweBCfsX0Ow', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=108&crop=smart&auto=webp&s=382be2aac44ea31e81dd4929b1bb417651250989', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=216&crop=smart&auto=webp&s=5686bdc316c728b06acc8daf2907f58ed7300a14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=320&crop=smart&auto=webp&s=4dacb3c03b650948814a17ffd4c6f8ffb8c744f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=640&crop=smart&auto=webp&s=dd9166c0540dbf9915c4f0ed375ffbc1fb6cbdd7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=960&crop=smart&auto=webp&s=48c486a9ae3d5fbe50a02f0d529718ddae647ddc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?width=1080&crop=smart&auto=webp&s=1b122be311017726a2f8289aa871734b05ff306c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jaZFnbNRzqcWGq_z3P5wTpznBrUnwJzraM_WDsGQx1o.jpg?auto=webp&s=8e55c78aa7a330b649f0ec12e14fec476fd79d3f', 'width': 1200}, 'variants': {}}]} | ||
Why don't we need labeled data for fine-tuning a model with QLoRA? | 7 | From this notebook, it seems the author didn't use any labels for the dataset
[https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing)
it seems like it can be explained here: [https://huggingface.co/docs/transformers/tasks/language\_modeling](https://huggingface.co/docs/transformers/tasks/language_modeling)
https://preview.redd.it/dz7i7vahrscb1.png?width=2172&format=png&auto=webp&s=c7d9281df823423a65869f812b63b32612c5c47a
But if the next word is the label, then what is the previous word, and how is it determined? e.g.: word1 word2 word3 word4... <next word>
I guess my question is how does it know which previous words to look at, is that process done iteratively (word1 <next word>, word1 word2 <next word>) ? | 2023-07-18T22:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/153c2pd/why_we_dont_need_a_label_data_for_finetuning/ | Cheap-Routine4736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153c2pd | false | null | t3_153c2pd | /r/LocalLLaMA/comments/153c2pd/why_we_dont_need_a_label_data_for_finetuning/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} | |
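Conceptually, the labels are just the inputs shifted by one token: position i is trained to predict token i+1 given tokens 0..i. And on the "iteratively?" question: the causal attention mask lets all of those prefix-prediction problems train in parallel in a single forward pass, rather than looping over growing prefixes. A toy sketch of the shift (plain Python, no libraries):

```python
# In causal language modeling, the "labels" come from the inputs themselves,
# so no separate label column is needed.

tokens = ["word1", "word2", "word3", "word4"]

inputs = tokens[:-1]   # what the model sees at each position
labels = tokens[1:]    # what it must predict at each position

# Equivalent view, written out as growing contexts:
for ctx_len, target in enumerate(labels, start=1):
    context = tokens[:ctx_len]
    print(f"given {context} -> predict {target!r}")
```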
Bing Chat Enterprise | 1 | [removed] | 2023-07-18T22:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/153bza4/bing_chat_enterprise/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153bza4 | false | null | t3_153bza4 | /r/LocalLLaMA/comments/153bza4/bing_chat_enterprise/ | false | false | self | 1 | null |
What does Typical P actually do? | 8 | Hi, all,
I've been experimenting with SillyTavern, using both llama.cpp and Ooba, and I've noticed an odd trend across a lot of models. I've found a ton of references online to set Typical P / Typical Sampling to 0.2, but I've found that doing so seems somehow to lobotomize any model I throw at it. This happens on both instruct and non-instruct models, on both 13b and 30b / 33b, and with any context size from 2048 to 8192. For a while, I thought llama.cpp just sucked somehow until I tried changing Typical P to 1, and everything just started to work. I'm happy that it works now, but I'm interested to know why.
Apologies if this is well known on the sub. I did search around reddit and Google for a while, and couldn't find any decent explanation of the various samplers. | 2023-07-18T21:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/153bnly/what_does_typical_p_actually_do/ | smile_e_face | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153bnly | false | null | t3_153bnly | /r/LocalLLaMA/comments/153bnly/what_does_typical_p_actually_do/ | false | false | self | 8 | null |
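For what it's worth: typical sampling (locally typical sampling) keeps the tokens whose surprisal, i.e. -log p, is closest to the entropy of the next-token distribution, up to a cumulative probability of Typical P. A rough pure-Python sketch (simplified; real implementations work on logits and renormalize the kept set, so treat this as illustrative):

```python
import math

def typical_filter(probs, typical_p):
    # Entropy of the full next-token distribution
    h = -sum(p * math.log(p) for p in probs if p > 0)
    # Rank tokens by how close their surprisal (-log p) is to the entropy
    ranked = sorted(
        (abs(-math.log(p) - h), i, p) for i, p in enumerate(probs) if p > 0
    )
    kept, cumulative = [], 0.0
    for _, i, p in ranked:
        kept.append(i)
        cumulative += p
        if cumulative >= typical_p:
            break
    return kept  # token indices that survive the filter
```

Note what happens on a confident distribution like [0.5, 0.3, 0.1, 0.1]: with Typical P = 0.2 only one token survives, and it is not even the most probable one, because the top token's surprisal sits far below the entropy. That is consistent with the "lobotomized" behavior you saw, while 1.0 keeps everything and effectively disables the filter.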
Anybody know of other good LLM subs? | 6 | Hey there everybody. I’m an avid AI enthusiast and love anything to do with language models or image generation models.
However, I have found that this sub and the official server are too centralized in regards to moderation and the ability to influence the community for my taste.
I love this community, however I would like to see something with more moderators and more community involvement.
All the best to the mods, I just want to expand the number of subs I visit and get to know more of the AI community.
So, in the spirit of furthering passion and discussion about AI and meeting all of you wonderful people, any suggestions? | 2023-07-18T21:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/153b7ih/anybody_know_of_other_good_llm_subs/ | OfficialHaethus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153b7ih | false | null | t3_153b7ih | /r/LocalLLaMA/comments/153b7ih/anybody_know_of_other_good_llm_subs/ | false | false | self | 6 | null |
Best ways to add guardrails to uncensored open source LLMs? | 3 | I’m trying to add a few guardrails so that the LLMs stay in character.
Sometimes the LLMs say something irrelevant, completely different from what was asked, like being asked ‘how’s your day going?’ and replying about some mathematical equation.
Other times it acts like a knowledge bank instead of staying in character. For example: let’s say the bot is asked to pretend to be someone not too knowledgeable about world facts, but it still replies about what happened on xx date in 1939 or so.
Are there some established methodologies or tools to create guardrails? | 2023-07-18T21:26:14 | https://www.reddit.com/r/LocalLLaMA/comments/153ax67/best_ways_to_add_guardrails_to_uncensored_open/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153ax67 | false | null | t3_153ax67 | /r/LocalLLaMA/comments/153ax67/best_ways_to_add_guardrails_to_uncensored_open/ | false | false | self | 3 | null |
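NVIDIA's NeMo Guardrails is one established toolkit aimed at exactly this. For a homegrown starting point, a minimal post-generation check with retries might look like the sketch below (the patterns and function names are illustrative assumptions, not a real library API):

```python
import re

# Minimal homegrown guardrail sketch (assumption: you control your own
# generate() call; patterns and names here are illustrative, not a library API)
OFF_CHARACTER = [r"as an ai", r"language model", r"\bequation\b"]

def stays_in_character(reply: str) -> bool:
    # Reject replies that trip any crude off-character pattern
    return not any(re.search(p, reply, re.IGNORECASE) for p in OFF_CHARACTER)

def guarded_generate(generate, prompt, retries=3, fallback="*stays quiet*"):
    # Re-sample up to `retries` times before falling back to a safe line
    for _ in range(retries):
        reply = generate(prompt)
        if stays_in_character(reply):
            return reply
    return fallback
```

System prompts set the character; checks like this (or classifier-based ones) catch the cases where the model drifts anyway.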
GPT4ALL python code not using full RAM or CPU. | 0 | Hi all,
Just like the title says, I am running gpt4all in a pycharm project and when I run the code and watch the performance of my PC in the task manager I see that it is only using a fraction of my RAM and CPU power. How can I adjust the model/program so that it utilizes all my RAM/CPU?
I have 55GB of RAM available, but it only uses about 14GB.
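That is likely expected rather than a bug: a quick back-of-envelope estimate of the model's resident size (assuming roughly 4.5 effective bits per weight for q4_0 once block scales are included) lands in the same ballpark:

```python
# Back-of-envelope RAM estimate for a q4_0 13B GGML model
# (assumption: ~4.5 effective bits per weight including block scales)
params = 13e9
bits_per_weight = 4.5
model_gb = params * bits_per_weight / 8 / 1024**3
print(f"~{model_gb:.1f} GB of weights")
```

Add the KV cache plus Python/pandas overhead and you are near the 14 GB observed; the remaining RAM simply isn't needed. Likewise, CPU utilization during token generation is typically limited by memory bandwidth rather than core count, so throwing more threads at it often doesn't help.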
**PC specs:**
Device name DESKTOP-21BIJPQ
Processor AMD Ryzen 7 5800X 8-Core Processor 3.80 GHz
Installed RAM 64.0 GB (63.9 GB usable) (Running at 3600mhz)
Device ID AE7E7741-55C8-4BAE-9D3E-3BFA5F607E5B
Product ID 00326-10103-58941-AA227
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
**Code:**
import pandas as pd
import gpt4all
#Set up the model (wizardLM-13B-Uncensored)
gptj = gpt4all.GPT4All(r"C:\Users\Me\AppData\Local\nomic.ai\GPT4All\wizardLM-13B-Uncensored.ggmlv3.q4_0.bin")
#Read the dataset into a pandas DataFrame
file_path = r'C:\Users\Me\Documents\School\Anonymizer stuff\response.xlsx'
print(file_path)
data = pd.read_excel(file_path)
#Specify the column to loop through
column_name = 'Reply_1st' # Replace with the actual column name in your dataset
#Iterate over each row in the specified column
for index, row in data.iterrows():
    # Get the cell value based on the specified column
    cell_value = row[column_name]
    print(cell_value)
    if cell_value == 0:
        continue  # Move on to the next row if cell_value is 0
    # Construct the prompt template with the anonymized row
    prompt_template = """
Anonymize the following paragraph by redacting any and all personal identifying information (PII), such as
1. emails,
2. phone numbers (area codes and phone numbers),
3. pronouns (names and company names),
4. sign offs/signatures (names at the end of the email),
5. places of work or residency (including mentions of cities),
6. and any other sensitive details.
Replace the PII with the placeholder 'XXX'.
Do not respond to the paragraph below, anonymize it. If there are multiple PII, redact them all, and do not alter any text besides redacting,
Also include all original text besides what needs to be redacted{
""" + cell_value + """
}
"""
    # Submit the prompt to the model
    messages = gptj.generate(prompt=prompt_template, max_tokens=1000, repeat_penalty=1)
    # Print messages so we can see
    print(messages)
    # Update the 'anonymized_oe_1' column with the anonymized output
    data.at[index, 'anonymized_oe_1'] = messages
#Save the modified DataFrame with the anonymized outputs
data.to_excel(r'C:\Users\Me\Documents\School\Anonymizer stuff\Attempt 1.xlsx', sheet_name='Sheet1')
​ | 2023-07-18T21:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/153as3c/gpt4all_python_code_not_using_full_ram_or_cpu/ | thedenfather | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 153as3c | false | null | t3_153as3c | /r/LocalLLaMA/comments/153as3c/gpt4all_python_code_not_using_full_ram_or_cpu/ | false | false | self | 0 | null |
Noob Question | 1 | [removed] | 2023-07-18T20:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/15399r3/noob_question/ | morecontextplz1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15399r3 | false | null | t3_15399r3 | /r/LocalLLaMA/comments/15399r3/noob_question/ | false | false | self | 1 | null |
Llama 2 has roles support | 6 | They mention it in the README.
The implementation is in generation.py in the llama repository.
if dialog[0]["role"] != "system":
    dialog = [
        {
            "role": "system",
            "content": DEFAULT_SYSTEM_PROMPT,
        }
    ] + dialog
dialog = [
    {
        "role": dialog[1]["role"],
        "content": B_SYS + dialog[0]["content"] + E_SYS + dialog[1]["content"],
    }
] + dialog[2:]
assert all([msg["role"] == "user" for msg in dialog[::2]]) and all(
    [msg["role"] == "assistant" for msg in dialog[1::2]]
), (
    "model only supports 'system', 'user' and 'assistant' roles, "
    "starting with 'system', then 'user' and alternating (u/a/u/a/u...)"
) | 2023-07-18T20:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/1538ufu/llama_2_has_roles_support/ | nikitastaf1996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1538ufu | false | null | t3_1538ufu | /r/LocalLLaMA/comments/1538ufu/llama_2_has_roles_support/ | false | false | self | 6 | null
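Rendered as text, that role handling produces the Llama 2 chat format. A small sketch of formatting a single first turn (the constants below match B_INST/E_INST and B_SYS/E_SYS in Meta's generation.py as of the initial release):

```python
# Llama 2 chat formatting for the first turn: the system prompt is folded
# into the first user message, as the snippet above does with dialog[0]/[1].
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_first_turn(system_prompt: str, user_msg: str) -> str:
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_msg} {E_INST}"

prompt = format_first_turn("You are a helpful assistant.", "Hello!")
```

Getting this template exactly right matters a lot for the chat models' output quality.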
Llama 2: Pffft, boundaries? Ethics? Don't be silly! | 267 | 2023-07-18T19:43:24 | WolframRavenwolf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15388d6 | false | null | t3_15388d6 | /r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/ | false | false | 267 | {'enabled': True, 'images': [{'id': '8fm2DFen_NkgpI9NnuxR4Ho_2UnttXx-I85beb2ATYw', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=108&crop=smart&auto=webp&s=944c268a2410012bddc7966c988866bb3cbd6da2', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=216&crop=smart&auto=webp&s=895e1bbfee150b6239577bff6af55020fdc3103e', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=320&crop=smart&auto=webp&s=063a89465d66f93d3672da5cc272e04509c28c7f', 'width': 320}, {'height': 501, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=640&crop=smart&auto=webp&s=81435816f0af9869e10038d2586b5335948a3ff5', 'width': 640}, {'height': 752, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?width=960&crop=smart&auto=webp&s=16abc972ef18fb12a79e5bb9f5ef28c9b77584e6', 'width': 960}], 'source': {'height': 780, 'url': 'https://preview.redd.it/0gjpjqul0scb1.png?auto=webp&s=aba95dde82462c43bad65b7cd2d7b3f9de757cde', 'width': 995}, 'variants': {}}]} | |||
Airoboros 33b, 16k context, GGML, now available. | 51 | Ycros has uploaded a GGML edition of Airoboros 33b with 16k context. Here is the link.
https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GGML/tree/main
Thank you, Ycros. :) | 2023-07-18T19:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/15385ih/airoboros_33b_16k_context_ggml_now_available/ | Sabin_Stargem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15385ih | false | null | t3_15385ih | /r/LocalLLaMA/comments/15385ih/airoboros_33b_16k_context_ggml_now_available/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'Lq14yBiK46oBlibmggvxzAV1L_cb8FDs-3ee7_YaTFs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=108&crop=smart&auto=webp&s=e51d5bb1578a67e29fa50f2d9b38dbb513ce437f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=216&crop=smart&auto=webp&s=3a24da59bb3da8569711acebdf0f243912ec4065', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=320&crop=smart&auto=webp&s=a8a1b584ff62f083f454ebe909e292e8795eadb9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=640&crop=smart&auto=webp&s=8dcfe7f264992533605ef7a3bcf5c41dfb7812a4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=960&crop=smart&auto=webp&s=6447a4336aacce0e45d93b93630c51c86b1b6af1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?width=1080&crop=smart&auto=webp&s=10402070e3fdd22bdf71c1b6040820538d9583f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XVW73v1uE1ZTMFOgCH4t939HzWib90WX2uONna5RlLQ.jpg?auto=webp&s=f49de51e6128c81737a0602dfe63a2c0973ee0dc', 'width': 1200}, 'variants': {}}]} |
Llama 2 download links: GPTQ and ggml | 99 | **GPTQ** and **ggml**
Llama 2 download links have been added to the wiki: [https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_llama\_2\_models](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_llama_2_models)
If you're new to the sub and Llama, please see the stickied post below for information on getting started. | 2023-07-18T19:25:48 | https://www.reddit.com/r/LocalLLaMA/comments/1537ri3/llama_2_download_links_gptq_and_ggml/ | AutoModerator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1537ri3 | true | null | t3_1537ri3 | /r/LocalLLaMA/comments/1537ri3/llama_2_download_links_gptq_and_ggml/ | false | false | self | 99 | null |
Llama 2 download links: GPTQ and ggml | 1 | **GPTQ** and **ggml**
Llama 2 download links, along with the Llama 2 Chat prompt template, have been added to the wiki: [https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_llama\_2\_models](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_llama_2_models)
If you're new to the sub and Llama, please see the stickied post below for information on getting started. | 2023-07-18T19:23:24 | https://www.reddit.com/r/LocalLLaMA/comments/1537p8h/llama_2_download_links_gptq_and_ggml/ | AutoModerator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1537p8h | true | null | t3_1537p8h | /r/LocalLLaMA/comments/1537p8h/llama_2_download_links_gptq_and_ggml/ | false | false | self | 1 | null |
Llama-2-chat shocked me!! | 1 | I just tried Llama 2 chat 7b on my Android phone.
I was shocked by the answer. Initially I thought it was garbage, but after a moment I realized it was an answer on a different level. I was expecting a more polished answer, given that this model is tuned for commercial use.
Has anyone noticed similar responses?
PS: I used the GGML Model from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
Thanks /u/[TheBloke]. You are great!! | 2023-07-18T19:20:17 | AstrionX | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1537m6w | false | null | t3_1537m6w | /r/LocalLLaMA/comments/1537m6w/llama2chat_shocked_me/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'aLh9F3w2OJH3FaTEGnRCTPeyEBhhrtccgXgmo4PLdYo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=108&crop=smart&auto=webp&s=dae859df4546f2235b1485e82f86e3b441598a43', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=216&crop=smart&auto=webp&s=00efc82c7e8bf7cfc8bf320513e1be0df762a301', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=320&crop=smart&auto=webp&s=430b642f33f9fcdaab9137b03ceb02cefd4b4df0', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=640&crop=smart&auto=webp&s=65a655583f3306e909848c1ac61fd9041c073fb1', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=960&crop=smart&auto=webp&s=ca2bbbdb4b3c5e3179f0916363207482d40b2fb3', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?width=1080&crop=smart&auto=webp&s=d963df325084f12e8e8ec2dcd1536d3ed609e228', 'width': 1080}], 'source': {'height': 3216, 'url': 'https://preview.redd.it/hvfc1zjgxrcb1.jpg?auto=webp&s=f30f7d845583463556160ebf702cd35d284408d3', 'width': 1440}, 'variants': {}}]} | ||
Any LLM Models To Generate A Grammatically Correct Statement Into An Incorrect Statement | 5 | Sort of a weird request, but does anyone know of a model that will turn a correct sentence into an incorrect sentence with grammar mistakes?
Thanks! | 2023-07-18T18:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/15370l9/any_llm_models_to_generate_a_grammatically/ | laneciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15370l9 | false | null | t3_15370l9 | /r/LocalLLaMA/comments/15370l9/any_llm_models_to_generate_a_grammatically/ | false | false | self | 5 | null |
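If no ready-made model turns up, a hedged alternative: generate the errors yourself with cheap rule-based corruptions, which is how some grammar-correction training sets are bootstrapped. A minimal sketch (the single adjacent-swap rule is illustrative; real pipelines mix several error types such as dropped articles or verb-agreement changes):

```python
import random

# Rule-based corruption sketch (assumption: synthetic errors are acceptable,
# e.g. as training data for a grammar-correction model)
def corrupt(sentence: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = sentence.split()
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]  # swap two adjacent words
    return " ".join(words)

broken = corrupt("the cat sat on the mat")
```

An instruction-tuned LLM prompted with "rewrite this sentence with grammar mistakes" can also work, but rules are free, fast, and perfectly controllable.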
TheBloke pulling through with all the Llama 2 models just hours after release | 1 | 2023-07-18T18:47:09 | https://huggingface.co/TheBloke | jd_3d | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1536qre | false | null | t3_1536qre | /r/LocalLLaMA/comments/1536qre/thebloke_pulling_through_with_all_the_llama_2/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]} | ||
airoboros (tool) overhaul | 31 | Hello,
Just wanted to drop a note that I overhauled the [airoboros](https://github.com/jondurbin/airoboros) tool (*not the models*) to have most of the prompts I've been using to build the datasets, plus a couple of extras.
Available via pip also:
pip install --upgrade airoboros
The configuration is now a YAML file, e.g.: https://github.com/jondurbin/airoboros/blob/main/example-config.yaml
Copy that file and customize it to your liking.
Each of the "instructors" is heavily configurable, and you can override the prompt path if you want to use your own alternative variant.
Main updates:
- Selection of "instructors" (task-specific training data generators):
- agent/router, so we can hopefully train open source LLMs to be better agents
- coding, with configurable set of coding/script languages, and optional related list of software to reference
- contextual question answering, using the same format I've used previously in airoboros models that seems to help reduce hallucinations
- counterfactual contextual question answering, i.e. put fake facts in the input block to make sure the model responds with the fake data instead of "hallucinating" the truth, to enforce obedience to context
- chain-of-thought style prompts, with multiple possible solutions, ranking, and optimal selection
- experiences, e.g. guided meditations or exploring a museum, etc.
- general, i.e. random prompts without any seed values, wholly generated by the LLM
- Orca, i.e. math or reasoning question with an added "ELI5"
- riddles
- trivia
- wordgames, e.g. generating a list of words containing a string, starting with a string, etc.
- Experimental support for non-english instruction/response generation.
- Can be configured at the top-level with the "language" key, or within one of the instructor configurations by override.
- Customizable prompts per instructor
- Just change the "prompt_path" field in the instructor config section to point to a text file with your own variant.
- Switched from threads to asyncio, to be less DoS'y of your hardware.
- Probably other stuff I'm forgetting as well.
Let me know if there are other instructors you'd like to see, along with example prompts, and if you find any bugs (I haven't tested this super extensively yet).
I'll probably build a new dataset with only the newer version of gpt-4 to see how it compares to the March version. | 2023-07-18T18:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/15367sf/airoboros_tool_overhaul/ | JonDurbin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15367sf | false | null | t3_15367sf | /r/LocalLLaMA/comments/15367sf/airoboros_tool_overhaul/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': '4CODUZbJ6dyCVnStILqL0vJnMsJMmVnR1jVNmAlDU1k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=108&crop=smart&auto=webp&s=39ed49eea184ea42eb9f8c038810264f16bc4f9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=216&crop=smart&auto=webp&s=ecf5471d3062483c58275e1f4ac2946c5f53783f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=320&crop=smart&auto=webp&s=84520e030f4b59f51824183274bfa560e7255ad6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=640&crop=smart&auto=webp&s=a048fea3e147e37fca2e2fc69d6c9ad5c28d15a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=960&crop=smart&auto=webp&s=7510292393675b6f56f6f14a382759403951fd37', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?width=1080&crop=smart&auto=webp&s=f309ed4a2db039c1dd95aa244c7e8176169bddea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xOOcp6Ds3mmKFLf9o-hBKm2cLUSp4bwI7kG2AsDKnbQ.jpg?auto=webp&s=e95819589a080ae5c4598f5b76834831dc650f8d', 'width': 1200}, 'variants': {}}]} |
LLaMA-2 GGML & GPTQ already available thanks to TheBloke | 1 | [removed] | 2023-07-18T18:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/1535rl1/llama2_ggml_gptq_already_available_thanks_to/ | aminedjeghri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1535rl1 | false | null | t3_1535rl1 | /r/LocalLLaMA/comments/1535rl1/llama2_ggml_gptq_already_available_thanks_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'scnkn175QpfbMTdtoYxWh-I3soYhx9pJiHmW5tQQPMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=108&crop=smart&auto=webp&s=306a702906190a7340924eac46d7feb1f3eec45e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=216&crop=smart&auto=webp&s=287c3b0788fac7766494f5d0e851ac89ff9d8ac9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=320&crop=smart&auto=webp&s=625a75ac60be2245e5230d54df27456c7107ec4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=640&crop=smart&auto=webp&s=83a9e51bb02d1141095b45fc8a465ecd586bdf97', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=960&crop=smart&auto=webp&s=3dd7e9d25663fc2ba02b34fdd652a431bbc9b036', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=1080&crop=smart&auto=webp&s=8b4b881bb48884dc6af6951b1ce4443f4bdcf11d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?auto=webp&s=dbb9e744ef3627d636c68c84c199aa1f0dafb845', 'width': 1200}, 'variants': {}}]} |
Meta releases Llama 2 | 1 | 2023-07-18T17:57:29 | https://3s3.co/2023/07/18/meta-ai-releases-llama-2/ | vnpttl | 3s3.co | 1970-01-01T00:00:00 | 0 | {} | 1535fxu | false | null | t3_1535fxu | /r/LocalLLaMA/comments/1535fxu/meta_releases_llama_2/ | false | false | default | 1 | null | |
Bloke the goat | 4 | Llama 2 online guys!!!! | 2023-07-18T17:56:10 | Sensitive-Analyst288 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1535eq0 | false | null | t3_1535eq0 | /r/LocalLLaMA/comments/1535eq0/bloke_the_goat/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'Q0jbox-9mxIte8unWo25U0fhWBrcYNR1M7htukrI9i0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=108&crop=smart&auto=webp&s=af11e234dabf5dfaa821fe149fc2c2eb22da6271', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=216&crop=smart&auto=webp&s=9739ae51cd076e912aae8d0eeeba79e01eebbcd7', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=320&crop=smart&auto=webp&s=e2cc81110650582a90638575bda6d7e05e882885', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=640&crop=smart&auto=webp&s=e554c1170d3c51b89d7611986651c9c4f19c318d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=960&crop=smart&auto=webp&s=afc56780bc0b98abfa97c8fd9978db2071d4dcb1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?width=1080&crop=smart&auto=webp&s=a281e1fed2e0ade01f7cc860d0ea5f18d2c95c0c', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/drlola6gircb1.jpg?auto=webp&s=da259b475a5e76a83fb3b99eb46dacf7e6dc560e', 'width': 1080}, 'variants': {}}]} | ||
We made Llama13b-v2-chat immediately available as an endpoint for developers | 21 | Hey LocalLLaMA, we've released tools that make it easy to test LLaMa 2 and add it to your own app!
Model playground here: [https://llama2.ai](https://llama2.ai/)
Hosted chat API here: [https://replicate.com/a16z-infra/llama13b-v2-chat](https://replicate.com/a16z-infra/llama13b-v2-chat)
If you want to just play with the model, llama2.ai is a very easy way to do it. So far, we’ve found the performance is similar to GPT-3.5 with far fewer parameters, especially for creative tasks and interactions.
Developers can:
* clone the chatbot app as a starting point ([https://github.com/a16z-infra/llama2-chatbot](https://github.com/a16z-infra/llama2-chatbot))
* use the Replicate endpoint directly ([https://replicate.com/a16z-infra/llama13b-v2-chat](https://replicate.com/a16z-infra/llama13b-v2-chat))
* or even deploy your own LLaMA v2 fine tune with Cog ([https://github.com/a16z-infra/cog-llama-template](https://github.com/a16z-infra/cog-llama-template))
Please let us know what you use this for or if you have feedback! And thanks to all contributors to this model, Meta, Replicate, the Open Source community! | 2023-07-18T17:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/1534kfe/we_made_llama13bv2chat_immediately_available_as/ | Prestigious-Elk7124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1534kfe | false | null | t3_1534kfe | /r/LocalLLaMA/comments/1534kfe/we_made_llama13bv2chat_immediately_available_as/ | false | false | self | 21 | null |
Anyone had any luck with 65b models and llama.cpp using the newly implemented rope scaling parameters to get contexts larger than 2048? | 13 | I've tried a few different values, but so far it just generates really funny pidgin-sounding English, like "den the man went to to da shop store and dun some good things for shopping" or similar such nonsense.
I've had great luck with 33b models and up to 16k contexts so far, even with the superHOT 8k context models.
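For reference, the usual linear-interpolation rule of thumb behind those parameters, sketched below (this assumes llama.cpp-style `--rope-freq-scale` semantics, i.e. scale = trained context / target context; check your build's `--help`, since these flags were changing around this time):

```python
# Linear RoPE interpolation rule of thumb (assumption: llama.cpp-style
# --rope-freq-scale semantics; verify against your build's --help)
def rope_freq_scale(trained_ctx: int, target_ctx: int) -> float:
    return trained_ctx / target_ctx

scale = rope_freq_scale(2048, 8192)  # a base llama model stretched to 8k
```

A mismatched scale is one plausible cause of pidgin-sounding output, so it's worth ruling out before blaming the 65b weights themselves.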
​ | 2023-07-18T16:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/1532yf0/anyone_had_any_luck_with_65b_models_and_llamacpp/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1532yf0 | false | null | t3_1532yf0 | /r/LocalLLaMA/comments/1532yf0/anyone_had_any_luck_with_65b_models_and_llamacpp/ | false | false | self | 13 | null |
BlinkDL/rwkv-4-music: New 120M and 560M MIDI models based on RWKV | 33 | 2023-07-18T16:17:08 | https://huggingface.co/BlinkDL/rwkv-4-music | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1532u7f | false | null | t3_1532u7f | /r/LocalLLaMA/comments/1532u7f/blinkdlrwkv4music_new_120m_and_560m_midi_models/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'bWyTgxVzvLPddswLBnDx7CqEQIJaUGNfssNmo3WdoDQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=108&crop=smart&auto=webp&s=1671624de12fa839075c548ea4dbaa17cc2b05cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=216&crop=smart&auto=webp&s=587739209d97ba66aa4123d4cd3001b761ec3372', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=320&crop=smart&auto=webp&s=7ea41991849f456b278772fb408c354871242b51', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=640&crop=smart&auto=webp&s=b952136bd2e2de0b2ef231e079c36ae1e58c3030', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=960&crop=smart&auto=webp&s=676c179306037ea10f709e55ee0dc38743c0de63', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?width=1080&crop=smart&auto=webp&s=55f7a97c7810ca1e2f120fdda494cec25212195e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9nZFog7YKCtoIJZ4EVTFwYzqitdjAp9cKo_fCYR7db8.jpg?auto=webp&s=162a751eb7a9d8bf944c3f6634a9a0ed8aec2ff7', 'width': 1200}, 'variants': {}}]} | ||
Proposal of LLM hosted in a co-funded host | 6 | Hello:
I had an idea about how to get a big LLM (30-44 GB) running fast on a cloud server.
What if that server were scalable in compute power, with the rental cost shared among a group of users?
Some sort of DAO to make it possible?
What do you think? | 2023-07-18T16:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1532njt/proposal_of_llm_hosted_in_a_cofunded_host/ | SnooWoofers780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1532njt | false | null | t3_1532njt | /r/LocalLLaMA/comments/1532njt/proposal_of_llm_hosted_in_a_cofunded_host/ | false | false | self | 6 | null |
PC game Vaudeville dialog is AI generated | 15 | [https://store.steampowered.com/app/2240920/Vaudeville/](https://store.steampowered.com/app/2240920/Vaudeville/)
I have no affiliation with the game and simply thought it was a very interesting game that people in this community would also find interesting.
The convos with the AI are actually very good; maybe one day there will be games like this that interface with LLMs of our choosing.
Very cool game!! | 2023-07-18T15:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/15325za/pc_game_vaudeville_dialog_is_ai_generated/ | Inevitable-Start-653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15325za | false | null | t3_15325za | /r/LocalLLaMA/comments/15325za/pc_game_vaudeville_dialog_is_ai_generated/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'xBa29lVvA5Q7SoqhxS3ZKQZla1368tS34MBVodOovUA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?width=108&crop=smart&auto=webp&s=83e00a641ad42e59bb2ffe4682055477802b5337', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?width=216&crop=smart&auto=webp&s=f815ee405b3ccf219c632dc8198c621e2baf78ff', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?width=320&crop=smart&auto=webp&s=13842b3f98c3cb04bf1821af4b1913ce4e58ceef', 'width': 320}], 'source': {'height': 353, 'url': 'https://external-preview.redd.it/pzIfcSv32Mg6majmqKkA9WWMXTt_eggByTpkTtjiT1Y.jpg?auto=webp&s=9e6005815387d1a31bc494ded3b7b47149324bd8', 'width': 616}, 'variants': {}}]} |
LLaMA 2 is here | 849 | https://ai.meta.com/llama/ | 2023-07-18T15:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/15324dp/llama_2_is_here/ | dreamingleo12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 15324dp | false | null | t3_15324dp | /r/LocalLLaMA/comments/15324dp/llama_2_is_here/ | false | false | self | 849 | {'enabled': False, 'images': [{'id': 'ilC2qprzEOhvondbER2GPm9DXBMFQhdj6lShAI3fqUQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=108&crop=smart&auto=webp&s=b96f0fb64d0fd3022dd85d7522591d32ffa3e30e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=216&crop=smart&auto=webp&s=9912a2752494571ed70d5a86ac12b82605c4f45c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=320&crop=smart&auto=webp&s=56ed0063c62caf22cd7da6c252e1217e3110c1b7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=640&crop=smart&auto=webp&s=de6bc123c3d7a92ad1b5d7d6155a79bbbf60123f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=960&crop=smart&auto=webp&s=e0c2d0341b3c852b53903f8db3781047c285ed18', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?width=1080&crop=smart&auto=webp&s=7aa7b2985c05b52eff9a4cdcefefafca8c3ba9c7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/-lx7IoVnPKtS1s2Rq8IcxH6q6WBMlXXfBQF43Q3okcU.jpg?auto=webp&s=188e3053d99818d509c6f9549c04cc4f13e6981a', 'width': 1920}, 'variants': {}}]} |
Generate both question and answer from the given context. | 3 | I want to make generate both multiple questions and answers from the given context, like ChatGPT. How to do it? and which model will perform better in this situation? I can do it using Conversational Langchain. But it is not quite good. | 2023-07-18T14:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/152zmw2/generate_both_question_and_answer_from_the_given/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152zmw2 | false | null | t3_152zmw2 | /r/LocalLLaMA/comments/152zmw2/generate_both_question_and_answer_from_the_given/ | false | false | self | 3 | null |
LLM less chatty after LoRA finetune | 3 | I trained LoRAs for a few of the popular 33B LLaMA models (Wizard, Airoboros, etc) and observed that the LLMs with LoRA applied appear less chatty by A LOT.
All LoRAs were fine-tuned for 2 epochs using the same Alpaca-like dataset containing 10K Q&A-style examples. The outputs in the training set are 68 tokens on average.

Did the LoRA fine-tune make the model less chatty because of the small token count of the outputs in the dataset? If so, is there any way to make the model more chatty without having to recreate this dataset (because I don't know how to edit it to be longer)?
Thanks | 2023-07-18T13:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/152yg19/llm_less_chatty_after_lora_finetune/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152yg19 | false | null | t3_152yg19 | /r/LocalLLaMA/comments/152yg19/llm_less_chatty_after_lora_finetune/ | false | false | self | 3 | null |
New Nvidia driver 536.67, a fix to memory issues(?) | 3 | Listed as an open issue in the driver notes because they've observed performance degradations to some applications:
>This driver implements a fix for creative application stability issues seen during heavy memory usage. We’ve observed some situations where this fix has resulted in performance degradation when running Stable Diffusion and DaVinci Resolve. This will be addressed in an upcoming driver release. \[4172676\]
FWIW: I don't know if running exllama HF close to the memory limit was a problem scenario with the previous "broken" drivers, but 30B GPTQ with exllama HF works as well as it did with 531.79 on a 4090.
Interested to hear your experiences.
[https://us.download.nvidia.com/Windows/536.67/536.67-desktop-win10-win11-64bit-international-dch-whql.exe](https://us.download.nvidia.com/Windows/536.67/536.67-desktop-win10-win11-64bit-international-dch-whql.exe) | 2023-07-18T13:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/152y8t1/new_nvidia_driver_53667_a_fix_to_memory_issues/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152y8t1 | false | null | t3_152y8t1 | /r/LocalLLaMA/comments/152y8t1/new_nvidia_driver_53667_a_fix_to_memory_issues/ | false | false | self | 3 | null |
gpt4all question | 1 | where is a good sub to get help with it? | 2023-07-18T12:30:47 | https://www.reddit.com/r/LocalLLaMA/comments/152x6q3/gpt4all_question/ | iseedeff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152x6q3 | false | null | t3_152x6q3 | /r/LocalLLaMA/comments/152x6q3/gpt4all_question/ | false | false | self | 1 | null |
MUD + LLM for a stronger roleplaying experience? | 54 | If you're a gen-Xer or xennial, MUDs were probably all the rage when you were in high school. I never really got into the MUD thing, but had one of my strongest RP experiences ever in a MUSH (incidentally the only time I played one). But MUDs were a bit too rigid, and I was never into the multiplayer aspect of them.

I've been playing around with roleplaying in KoboldCpp, testing out scenarios and models, but found them usually too wacky or dreamy to really appeal to me. I believe there needs to be structure, and LLMs suck at it.

But... MUDs (games) have structure. LLMs are great at storytelling, up until that critical context limit. What if you combine them both? Let the LLM handle the telling, but confine it within the framework of a game.
I decided to hack something together, choosing [Tale](https://pythonhosted.org/tale/) to try it out. Here are some results:
https://preview.redd.it/wmwbobj2npcb1.png?width=949&format=png&auto=webp&s=2b3cc8de048d6e0dea9d8cd261442fadc0c7a489
The text after 'evoke' is the text generated by the MUD, which becomes the prompt for the LLM.
As you all know, LLMs can be temperamental, sometimes outputting almost nothing and the next time a page of text. I've tried to allow it to be expansive for some things, while keeping it limited for others, like "The door is locked". You don't need a paragraph for that.

One neat thing with MUDs is that the world goes on without you, so for example Garfield yawns regularly, which triggers text generation. Characters entering and exiting would also do that.
I hope to create some stories to try this out further (this is only the demo that comes with the repo).
I would also like to try out that idea with small models for dialogue and see if they can be used in unison to lower inference times.
If you want to try this out you need KoboldCpp as backend (it's hard coded for the time being, as are all the settings). Just [sync the repo](https://github.com/neph1/Tale), and launch with "python -m tale.main --game tale/demo/" | 2023-07-18T11:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152w71n | false | null | t3_152w71n | /r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/ | false | false | 54 | null | |
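For anyone curious what the glue between a MUD event and the LLM might look like, here is a minimal sketch of wrapping an evoked event into a KoboldCpp generation request. The `/api/v1/generate` endpoint and payload fields follow KoboldCpp's public API, but the helper names, token budgets, and prompt wording are illustrative assumptions on my part, not the actual Tale integration from the repo:

```python
import json
from urllib import request

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # KoboldCpp's default port

def build_payload(evoke_text: str, max_length: int = 120) -> dict:
    """Wrap a MUD-generated event into a KoboldCpp generation request.

    Short factual events get a tiny token budget so the model cannot
    ramble; longer scene descriptions get the full budget.
    """
    budget = 16 if len(evoke_text) < 40 else max_length
    return {
        "prompt": f"Retell the following game event as vivid prose:\n{evoke_text}\n",
        "max_length": budget,
        "temperature": 0.7,
        "stop_sequence": ["\n\n"],  # stop at the first blank line
    }

def evoke(evoke_text: str) -> str:
    """POST the wrapped event to a locally running KoboldCpp instance."""
    data = json.dumps(build_payload(evoke_text)).encode("utf-8")
    req = request.Request(KOBOLD_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]
```

The short-event budget is one way to keep "The door is locked" from turning into a paragraph, per the point above.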
Rachel the Assistant Editor | 1 | 2023-07-18T11:43:07 | AIsGonnaGetYa | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 152w527 | false | null | t3_152w527 | /r/LocalLLaMA/comments/152w527/rachel_the_assistant_editor/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'X7XJq-26GOL0Y9HNHznUjQOl_W1aFZPOYW98bpzjunA', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=108&crop=smart&auto=webp&s=173595b9b4997838c0fb9f7e78ccd46d4a00eb79', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=216&crop=smart&auto=webp&s=774f870c95196cb40f22f18cd81ebc6f5c411444', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=320&crop=smart&auto=webp&s=3cc6db732d972318bddd60bd206d76e250ce963c', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=640&crop=smart&auto=webp&s=24b770f664f3db13780a1ec7dbc48f29a8262da2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?width=960&crop=smart&auto=webp&s=eb97bd60f1c69c92e8850f001f19bc42b218d1ee', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/rq4e5totnpcb1.png?auto=webp&s=b57d013e5e0be7ea73e79e81d20a56bc9ca63845', 'width': 1024}, 'variants': {}}]} | |||
Deploy to hugging face with error Fine tuning "nRuntimeError: weight transformer.word_embeddings.weight does not exist" | 1 | Hi, I fine-tuned the model using this tutorial. It works great in the notebook.
https://colab.research.google.com/drive/1FxlUb_H6Xirhkx4RszAgHeb2uDW7oKIH
After I deploy to inference endpoint, i get this error: "nRuntimeError: weight transformer.word_embeddings.weight does not exist"
Could someone please advise how to fix?
To replicate the issue, you could try deploying this model here:
https://huggingface.co/vrsen/falcon-7b-instruct-ft
You will see the same failure that I see. Could someone please help?
I am following these tutorials:
https://www.youtube.com/watch?v=AXG7TA7vIQ8&t=194s&ab_channel=VRSEN
https://www.youtube.com/watch?v=VdKdQYduGQc&ab_channel=VRSEN
more logs:
RuntimeError(f"weight {tensor_name} does not exist")\nRuntimeError: weight transformer.word_embeddings.weight does not exist\n"},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
526bf 2023-07-16T20:37:19.492Z {"timestamp":"2023-07-16T20:37:19.491918Z","level":"INFO","fields":{"message":"Shutting down shards"},"target":"text_generation_launcher"}
526bf 2023-07-16T20:37:19.492Z {"timestamp":"2023-07-16T20:37:19.491898Z","level":"ERROR","fields":{"message":"You are using a model of type RefinedWebModel to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\nTraceback (most recent call last):\n\n File
line 49, in get_filename\n raise RuntimeError(f"weight {tensor_name} does not exist")\n\nRuntimeError: weight transformer.word_embeddings.weight does not exist\n\n"},"target":"text_generation_launcher"}
526bf 2023-07-16T20:37:19.492Z Error: ShardCannotStart
526bf 2023-07-16T20:37:19.492Z {"timestamp":"2023-07-16T20:37:19.491861Z","level":"ERROR","fields":{"message":"Shard 0 failed to start"},"target":"text_generation_launcher"} | 2023-07-18T11:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/152w4no/deploy_to_hugging_face_with_error_fine_tuning/ | InterestingBasil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152w4no | false | null | t3_152w4no | /r/LocalLLaMA/comments/152w4no/deploy_to_hugging_face_with_error_fine_tuning/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
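A hedged guess at the cause, since the logs alone don't prove it: this error pattern often appears when the uploaded repo contains only a PEFT/LoRA adapter rather than full model weights, so the inference server cannot find the base tensors. A quick stdlib-only check is sketched below; the function name and heuristic are my own assumptions, and `peft`'s `merge_and_unload` is the usual remedy if the check fires:

```python
import os

def looks_like_adapter_only(checkpoint_dir: str) -> bool:
    """Return True if a checkpoint directory looks like a bare PEFT/LoRA
    adapter (adapter_config.json present, no full weight shards).

    Serving such a repo directly is one known way to hit
    'weight transformer.word_embeddings.weight does not exist'.
    """
    files = set(os.listdir(checkpoint_dir))
    has_adapter = "adapter_config.json" in files
    has_full_weights = any(
        name.startswith(("pytorch_model", "model"))
        and name.endswith((".bin", ".safetensors"))
        for name in files
    )
    return has_adapter and not has_full_weights

# If the check fires, the usual remedy is to merge the adapter into its
# base model and upload the merged weights instead, e.g. with peft:
#   PeftModel.from_pretrained(base_model, adapter_repo).merge_and_unload()
```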
In the end does the model actually matter? | 0 | It seems to me like the LLM community is putting a lot of effort into finding and building better models/datasets: we want models that can solve more abstract problems, store more knowledge, go further in a single step.
But we have also discovered that asking models to take smaller steps: 'explain your reasoning' or 'show your working' often improves output. This leads me to a hypothesis (disclaimer: I am not experienced in AI, tell me if I am being a fool):
A simple 'intelligence' can perform the same tasks as a more sophisticated 'intelligence', given sufficient access to working space, external knowledge sources, and other infrastructure.
​
Consider a CPU. It takes data in, processes it, and spits it back out. Modern CPUs are faster because of pipelining, higher clock speeds, and vector instructions. At some level it can be said that old CPUs can do the same fundamental operations as new ones; they may just take more steps.

A hypothetical future could involve a 'reasoning engine' that takes data in, processes it, and spits it out. Better reasoning engines can make bigger intuitive jumps, require less accurate prompts, and don't depend as heavily on external data stores, but any 'reasoning engine' can do the same tasks as any other 'reasoning engine'; it may just take more steps.

In this world, the models we have may be 'smart enough but a little bit slow', and instead we lack the infrastructure to make them perform. For this reason, projects like GPTEngineer and AutoGPT are more interesting to me than new improved models with new improved datasets: I suspect there is a lot more that can be done with existing models - even without finetuning - if we knew how.
​
So, do you think better models are the answer? Are there any big holes in my understanding? Is there some sort of 'fundamental reasoning machine' that mirrors a Turing machine but is more abstract? Have you tried fiddling with the infrastructure around models? What worked and what didn't? | 2023-07-18T11:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/152vjk4/in_the_end_does_the_model_actually_matter/ | sdfgeoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152vjk4 | false | null | t3_152vjk4 | /r/LocalLLaMA/comments/152vjk4/in_the_end_does_the_model_actually_matter/ | false | false | self | 0 | null
Any love for Radeon Pro W6800 32GB ? What to expect ? | 16 | Hi,
I have an opportunity to get 2 or even 3 of those but can't find benchmarks of them running LLMs, with all the hype being focused on Nvidia.

What could I expect to run on these? The host would be a dual E5-2670 or an i9-10900K, both with at least 128GB.
Thanks ! | 2023-07-18T10:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/152v8fl/any_love_for_radeon_pro_w6800_32gb_what_to_expect/ | chiwawa_42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152v8fl | false | null | t3_152v8fl | /r/LocalLLaMA/comments/152v8fl/any_love_for_radeon_pro_w6800_32gb_what_to_expect/ | false | false | self | 16 | null |
Seeking Help to Run junelee/wizard-vicuna-13b GPT Model! | 1 |
I hope you're all having a fantastic day! I come to this amazing community seeking some assistance with running the junelee/wizard-vicuna-13b GPT model. I stumbled upon this powerful language model on Hugging Face's model hub ([**https://huggingface.co/junelee/wizard-vicuna-13b**](https://huggingface.co/junelee/wizard-vicuna-13b)), and I'm eager to experiment with it for some exciting text generation tasks.
The code to run the model is available in a Google Colab notebook, and you can access it using this link: [**https://colab.research.google.com/github/aitrepreneur/text-generation-webui/blob/main/API\_UPDATED\_WebUI%26pyg\_13b\_GPTQ\_4bit\_128g.ipynb#scrollTo=VCFOzsQSHbjM**](https://colab.research.google.com/github/aitrepreneur/text-generation-webui/blob/main/API_UPDATED_WebUI%26pyg_13b_GPTQ_4bit_128g.ipynb#scrollTo=VCFOzsQSHbjM)
However, despite my best efforts, I'm encountering some difficulties while trying to get the model up and running. That's why I'm turning to this community for help. If any of you have experience with this particular GPT model or are knowledgeable about Google Colab, your guidance would be greatly appreciated.
Here are a few specific questions I have:
1. Has anyone successfully run the junelee/wizard-vicuna-13b model? If so, what tips or tricks can you share?
2. Are there any specific requirements or dependencies that I should be aware of to run this model correctly?
3. If you've worked with Google Colab before, any insights on how to set it up properly for this particular notebook would be wonderful.
I'm sure some of you are seasoned AI enthusiasts, and I believe your expertise will be invaluable in helping me overcome these challenges.
Thank you all for your time and consideration. I'm eagerly looking forward to your replies and suggestions! | 2023-07-18T10:54:20 | https://www.reddit.com/r/LocalLLaMA/comments/152v5ja/seeking_help_to_run_juneleewizardvicuna13b_gpt/ | Nikunja___ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152v5ja | false | null | t3_152v5ja | /r/LocalLLaMA/comments/152v5ja/seeking_help_to_run_juneleewizardvicuna13b_gpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'M47-BaZSzgabbr8l4gnwSUeo2F1rktYBnCcuLPf2oeM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=108&crop=smart&auto=webp&s=f258b703c3a255385ec066795d80fef8c4227705', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=216&crop=smart&auto=webp&s=e996745cc0536a27cb720474e2ad6cda81543a72', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=320&crop=smart&auto=webp&s=e76198bf753c73498016edaa006519d1687f5f54', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=640&crop=smart&auto=webp&s=76027e089b5cbe06fc7c47d34f28ef81826915de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=960&crop=smart&auto=webp&s=5c64b890b9d6a3ca4ad14e80daa4270a1f31c055', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?width=1080&crop=smart&auto=webp&s=c5414fec406722b4638efbbd652d6d9501275789', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HTLb8kcnFUy-8ZrKjUTk3WZZnRkPjqZv_HEXxqepf74.jpg?auto=webp&s=eb3582f98a02cf73549b136eab0b66e648761033', 'width': 1200}, 'variants': {}}]} |
Is oobabooga the only realistically useful alternative? | 0 | title | 2023-07-18T10:41:09 | https://www.reddit.com/r/LocalLLaMA/comments/152uw94/is_oobabooga_the_only_realistically_useful/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152uw94 | false | null | t3_152uw94 | /r/LocalLLaMA/comments/152uw94/is_oobabooga_the_only_realistically_useful/ | false | false | self | 0 | null |
Do you find GPU renting worth it for a LocalLLM? | 39 | Let's be real, at the moment next to no one can afford to get that much VRAM.
What are some realistic costs projections for running local LLMs that are decent and fast? | 2023-07-18T10:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/152uvdv/do_you_find_gpu_renting_worth_it_for_a_localllm/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152uvdv | false | null | t3_152uvdv | /r/LocalLLaMA/comments/152uvdv/do_you_find_gpu_renting_worth_it_for_a_localllm/ | false | false | self | 39 | null |
Text to SQL LLMs | 9 | Has anyone used text-to-SQL LLMs in a production setting? How do you rate their performance in terms of accuracy, consistency, and reliability?
https://github.com/NumbersStationAI/NSQL | 2023-07-18T10:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/152u7jp/text_to_sql_llms/ | oduor_c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152u7jp | false | null | t3_152u7jp | /r/LocalLLaMA/comments/152u7jp/text_to_sql_llms/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'OmlAQPV0rfnich01HVmULjHqyoGiaSDzrsu26n9uLIk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=108&crop=smart&auto=webp&s=ae3fca608e2300596b6e9fccc7642576ea857882', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=216&crop=smart&auto=webp&s=853911891e82cbb0ca1bdca63e16a52997f7a3fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=320&crop=smart&auto=webp&s=7fee4e6c78bb1a8475931aa2291ce87b7c4acff8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=640&crop=smart&auto=webp&s=8a13fd020b0c88fe031b2f001993e9aa0fbb5fb3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=960&crop=smart&auto=webp&s=c661ca1abeda74722699240829e5804e7a1c99a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?width=1080&crop=smart&auto=webp&s=da898a80465d391f35e485a1d5ac171236ef9d6b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/udpkoS52WGef_7A2UGJ19TB1uPyjKgZs_L-25v7SwKI.jpg?auto=webp&s=8f8b91b7ddfa9c9344d04315191e306f3753ecf2', 'width': 1200}, 'variants': {}}]} |
When is it best to use prompt engineering vs fine-tuning? | 15 | As somebody who has been experimenting with GPT (both closed-source and open-source), I have been doing both prompt engineering and fine-tuning. But I was pondering this question today: when exactly is it best to use which technique? I realized that the answer is not straightforward. At a broad level, I can say that prompt engineering is when you need to (or can) give the "training" to GPT in a single context window, and fine-tuning is when you can't fit your instructions in a single context window and instead have to resort to constructing a dataset with input-output examples for training the model.

But there are of course grey areas - what if you can technically provide the instruction in a single context window (of, say, 8000 tokens with an LLM that has this max token size)? Then you have the option to go with either prompt engineering or fine-tuning, right? So, how do you decide which one to go with? What are the best practices? Or, if there is a concrete, black-and-white answer of when to use which, that's even better.
* At what threshold context size does prompt engineering begin to give diminishing returns?
* Is there a threshold number of training examples (i.e., dataset size) below which fine tuning does not yield good results, and beyond which it does?
* Does fine tuning always trump prompt engineering, regardless of the size of the training dataset?
I found the following in an OpenAI discussion forum post:
https://preview.redd.it/pt33uu7fjocb1.png?width=1256&format=png&auto=webp&s=97789deb9d8b54643803120dc8036ca3e3683fc0
I know there are always oversimplifications with analogies, but is this an accurate assessment?
​ | 2023-07-18T08:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/152s9ei/when_is_it_best_to_use_prompt_engineering_vs/ | ResearcherNo4728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152s9ei | false | null | t3_152s9ei | /r/LocalLLaMA/comments/152s9ei/when_is_it_best_to_use_prompt_engineering_vs/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'dKIxjlIcrhsEcNhzVW3pzdCRvlrIkuPNkUbhF2lsU9s', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=108&crop=smart&auto=webp&s=916cc50a651610becce503e182f60ce35b78e08d', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=216&crop=smart&auto=webp&s=4a5afa1be439ca1556faaeca6ce8fdb928b2b66c', 'width': 216}, {'height': 215, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=320&crop=smart&auto=webp&s=25bfdf72ee4405733e4f62e57eafee0b26ddc371', 'width': 320}, {'height': 431, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=640&crop=smart&auto=webp&s=2d29cf620072de35d8bf273cf12785acbcb358fc', 'width': 640}, {'height': 647, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=960&crop=smart&auto=webp&s=c79debb88a22535885d2ee33203c70c2f5267493', 'width': 960}, {'height': 728, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?width=1080&crop=smart&auto=webp&s=2c4af9a118d6a858cf87dba1ff95cce4eba36d11', 'width': 1080}], 'source': {'height': 847, 'url': 'https://external-preview.redd.it/Nj4beHHB8enchior4NbC3OstkLa2DqqRY8h469eVGsk.png?auto=webp&s=f93cd304805481d7f7800b8e0d30a04a2d21491f', 'width': 1256}, 'variants': {}}]} | |
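To make the trade-off above concrete, here is an illustrative sketch of how the same labelled example is consumed by each technique: with prompting it occupies context tokens on every request, while with fine-tuning it becomes one record of a training dataset. The formats below are assumptions for illustration, not any particular provider's API:

```python
import json

def few_shot_prompt(instruction, examples, query):
    """Prompt engineering: spend context-window tokens on in-context examples."""
    shots = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

def finetune_record(prompt, completion):
    """Fine-tuning: the same example becomes one JSONL line of a training set,
    baked into the weights instead of occupying the context window."""
    return json.dumps({"prompt": prompt, "completion": completion})
```

The prompt grows linearly with the number of examples, which is why few-shot prompting hits diminishing returns as the example count approaches the context limit, while a fine-tuning dataset has no such ceiling.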
Retentive Network: A Successor to Transformer for Large Language Models | 85 | 2023-07-18T08:02:08 | https://arxiv.org/abs/2307.08621 | alexanderchenmh | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 152s2ij | false | null | t3_152s2ij | /r/LocalLLaMA/comments/152s2ij/retentive_network_a_successor_to_transformer_for/ | false | false | 85 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
When merging model and lora, use original tokenizer.model file? | 1 | [removed] | 2023-07-18T07:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/152ry90/when_merging_model_and_lora_use_original/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152ry90 | false | null | t3_152ry90 | /r/LocalLLaMA/comments/152ry90/when_merging_model_and_lora_use_original/ | false | false | self | 1 | null |
Offtopic question: how to highlight and export Reddit comments? | 5 | At the moment I'm learning everything I can about LLMs and AI. This thread contains excellent knowledge that I'd like to extract into my Obsidian note taking app, but I fail terribly. There's virtually no read-it-later app that I can use to highlight and sync things into my Obsidian vault that can handle Reddit threads. It remains a time-consuming copy and paste workflow that is so last century. It's hilarious. I know it has to do with Reddit's restrictive API policy. But why on earth are we collecting valuable knowledge on a platform that does not allow us to make use of it in an academic workflow? Shouldn't we better migrate to a good enough forum software and show Reddit the finger? To be honest, every day I'm getting more angry about this.
Anyway, what is your solution for this? How do you extract knowledge from this great subreddit? | 2023-07-18T07:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/152rxq6/offtopic_question_how_to_highlight_and_export/ | krazzmann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152rxq6 | false | null | t3_152rxq6 | /r/LocalLLaMA/comments/152rxq6/offtopic_question_how_to_highlight_and_export/ | false | false | self | 5 | null |
Need guidance in this sea of information on how to set up a local AI | 1 | Hey, I have no idea if this is even the right sub to ask for help in, but I have no idea where to start.
I was asking the Bing AI for leads on how to setup a local/offline AI chatbot on my PC, and eventually found [this article.](https://beebom.com/how-train-ai-chatbot-using-privategpt-offline/) I followed it but when I got to the end, I realized that a dataset or source documents or something is needed for it to actually have knowledge. This is where I'm starting to get really confused.
I am essentially trying to make something like CharacterAI (won't be nearly as good, I know) that can run on my PC. Just something that is somewhat ChatGPT-like or CharacterAI-like that I can interact with without my PC melting, with perhaps the ability down the road to set up voice commands that it can respond to, like an Alexa device, but with more... intelligence.
PC specs:
* i9 13900k
* ASUS Prime Z790-P motherboard
* Corsair Vengeance 32gb RAM DDR5
* GIGABYTE RTX 3060ti 8gb GDDR6
* Corsair RM850x Power Supply
* MSI MAG 360r V2 liquid cooler
I found things like [this dataset](https://huggingface.co/datasets/nomic-ai/gpt4all-j-prompt-generations) and [LocalAI](https://github.com/go-skynet/LocalAI) and I followed the article to get [PrivateGPT](https://github.com/imartinez/privateGPT) and the GPT4ALL groovy.bin but I'm completely lost and it feels like the more I research the internet or ask BingAI for answers, the more questions I get instead. At this stage I don't know what goes where, if there's a difference between source documents and datasets, or if all this will even work on my PC.
​
Can anybody help direct me or teach me? I really want to learn. | 2023-07-18T07:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/152rb1i/need_guidance_in_this_sea_of_information_on_how/ | grimsikk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152rb1i | false | null | t3_152rb1i | /r/LocalLLaMA/comments/152rb1i/need_guidance_in_this_sea_of_information_on_how/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'acC-TxVRLiKIVU53X-JZzkSVtbVHB9x96haG57DDPRw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=108&crop=smart&auto=webp&s=d9d8bdc0b6ad5e21bde662810260d958e4012ea1', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=216&crop=smart&auto=webp&s=a1cdccb8c5fa1217d1c19e79ed390f65e7578b11', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=320&crop=smart&auto=webp&s=3d68a902a55c4d3c73aa7e70dfd30485d1c3127d', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?width=640&crop=smart&auto=webp&s=2851e3d5c77d188ee49d2b4947910186b7f02ac6', 'width': 640}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/SRmPayqp2-LIA5oph4waVbHWluRj3ZxOAWmfM31Q_0I.jpg?auto=webp&s=f0051e2f6fb4a45476094b3cb853facd8cfd87d5', 'width': 750}, 'variants': {}}]} |
LLaMA 65B on cloud | 2 | Dear All,
I want to build a Private GPT using an open-source LLM in the cloud.

Thinking of Azure/AWS (other suggestions welcome), but the company has both, so it will be easier to get an instance there.

We want to work with our own data (not necessarily confidential, except for PII) and build a chatbot to answer queries limited to that data (not relying on external data, to limit hallucination).

​

What configuration of cloud instance should we have? Other recommendations/resources to look at are also welcome.

Currently I don't know much about LLMs except from watching YouTube videos and trying to replicate them. All advice is welcome.
​ | 2023-07-18T05:59:27 | https://www.reddit.com/r/LocalLLaMA/comments/152pway/llama_65b_on_cloud/ | kdas22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152pway | false | null | t3_152pway | /r/LocalLLaMA/comments/152pway/llama_65b_on_cloud/ | false | false | self | 2 | null |
Fine tuning on Apple Silicon | 23 | Has anyone tried fine tuning a model on Apple Silicon? I’m thinking of buying a Mac Studio with M2 chip but not sure if there is enough hardware support from machine learning frameworks for fine tuning like HuggingFace PEFT. | 2023-07-18T05:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/152oudd/fine_tuning_on_apple_silicon/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152oudd | false | null | t3_152oudd | /r/LocalLLaMA/comments/152oudd/fine_tuning_on_apple_silicon/ | false | false | self | 23 | null |
Looking to hire a CTO to work on local LLMs | 2 | I’m seeing a lot of demand in my space for secure and local LLMs. I have an MVP and revenue and looking for a CTO to come on board to assist with our project! Please DM if interested. | 2023-07-18T04:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/152nlvz/looking_to_hire_a_cto_to_work_on_local_llms/ | SunnyPiscine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152nlvz | false | null | t3_152nlvz | /r/LocalLLaMA/comments/152nlvz/looking_to_hire_a_cto_to_work_on_local_llms/ | false | false | self | 2 | null |
Can Taichi lang be another option for consumer devices to run LLMs? | 0 | As title. [https://docs.taichi-lang.cn/api/](https://docs.taichi-lang.cn/api/) | 2023-07-18T03:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/152mqyu/can_tachi_lang_be_another_option_for_consumer/ | saraiqx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152mqyu | false | null | t3_152mqyu | /r/LocalLLaMA/comments/152mqyu/can_tachi_lang_be_another_option_for_consumer/ | false | false | self | 0 | null
GPT4all and koboldcpp/etc | 13 | Try as I might, nothing seems to generate roleplay for me as well as gpt4all. I can use the same LLMs, from Wizard Uncensored to Airoboros, and it's not even close.
I am RPing mostly from an instruct POV, so I will tell it to generate characters or ask it what the room looked like, etc.
How are you all getting good results from kobold, sillytavern, etc? | 2023-07-18T02:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/152lvpk/gpt4all_and_koboldcppetc/ | Jenniher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152lvpk | false | null | t3_152lvpk | /r/LocalLLaMA/comments/152lvpk/gpt4all_and_koboldcppetc/ | false | false | self | 13 | null |
What's the reasonable tks/s running 30B q5 with llama.cpp (13900K + 4090) ? | 13 | ./main -m models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin -t 16 -n 128 --n-gpu-layers 63 -ins --color
main: build = 847 (7568d1a)
main: seed = 1689647281
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9
llama.cpp: loading model from models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 9 (mostly Q5_1)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.14 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 2253.49 MB (+ 3124.00 MB per state)
llama_model_load_internal: allocating batch_size x (768 kB + n_ctx x 208 B) = 436 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 60 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 63/63 layers to GPU
llama_model_load_internal: total VRAM used: 26677 MB
llama_new_context_with_model: kv self size = 780.00 MB
system_info: n_threads = 16 / 32 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
llama_print_timings: load time = 4461.41 ms
llama_print_timings: sample time = 34.20 ms / 115 runs ( 0.30 ms per token, 3362.47 tokens per second)
llama_print_timings: prompt eval time = 79546.25 ms / 33 tokens ( 2410.49 ms per token, 0.41 tokens per second)
llama_print_timings: eval time = 20864.67 ms / 115 runs ( 181.43 ms per token, 5.51 tokens per second)
llama_print_timings: total time = 115316.33 ms
I am new to running Local LLaMa, just got the hardware upgraded to be able to run with proper performance.
Curious whether any parameter tuning can help me do the inference faster. Should I be using 32 threads? And how many layers should I offload to the GPU?
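For what it's worth, the thread-count question can be answered empirically with a small sweep harness; here is a hedged sketch (the model path, prompt, and candidate thread counts are placeholders, and with all 63 layers offloaded the CPU thread count usually matters little):

```python
import shlex
import subprocess

MODEL = "models/Wizard-Vicuna-30B-Uncensored.ggmlv3.q5_1.bin"  # placeholder path

def build_cmd(threads: int, gpu_layers: int, n_predict: int = 128) -> list[str]:
    """Assemble one llama.cpp ./main invocation for a benchmark run."""
    return [
        "./main", "-m", MODEL,
        "-t", str(threads),
        "--n-gpu-layers", str(gpu_layers),
        "-n", str(n_predict),
        "-p", "Write a short story about a llama.",
    ]

def sweep(thread_counts=(8, 16, 24, 32), gpu_layers: int = 63) -> None:
    """Run each configuration and print llama.cpp's own eval-timing lines."""
    for t in thread_counts:
        result = subprocess.run(build_cmd(t, gpu_layers),
                                capture_output=True, text=True)
        # llama_print_timings lines go to stderr
        for line in result.stderr.splitlines():
            if "eval time" in line:
                print(f"t={t}:", line.strip())

print(shlex.join(build_cmd(16, 63)))  # inspect the command before sweeping
```

Calling `sweep()` and comparing the `eval time` tokens/s figures across runs will show whether 16 vs 32 threads makes any difference on a given setup.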
Any benchmark that I can run to see if I am getting reasonable performance? Thanks! | 2023-07-18T02:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/152lul7/whats_the_reasonable_tkss_running_30b_q5_with/ | dostorm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152lul7 | false | null | t3_152lul7 | /r/LocalLLaMA/comments/152lul7/whats_the_reasonable_tkss_running_30b_q5_with/ | false | false | self | 13 | null
LLM Deployment Cost : Closed APIs like Open AI v/s fine tuned open source model on AWS | 11 | Did some number crunching on unraveling GPU compute costs.
Our research revealed that deploying a fine-tuned DialoGPT-large model on AWS is 55% less expensive, and a staggering 89% cheaper when serverless, compared to OpenAI's GPT-3.5 Turbo. Check it out here - bit.ly/46bGC9t .
Would love to hear thoughts? | 2023-07-18T02:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/152l61r/llm_deployment_cost_closed_apis_like_open_ai_vs/ | Tiny_Cut_8440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152l61r | false | null | t3_152l61r | /r/LocalLLaMA/comments/152l61r/llm_deployment_cost_closed_apis_like_open_ai_vs/ | false | false | self | 11 | null |
Current, comprehensive guide to installing llama.cpp and llama-cpp-python on Windows? | 22 | Hi, all,
Title, basically. Does anyone happen to have a link? I spent hours banging my head against outdated documentation, conflicting forum posts and Git issues, make, CMake, Python, Visual Studio, CUDA, and Windows itself today, just trying to get llama.cpp and llama-cpp-python to bloody compile with GPU acceleration. I will admit that I have much more experience with scripting than with programs that you actually need to compile, but I swear to God, it just does not need to be this difficult. If anyone could provide an up-to-date guide that will actually get me a working OobaBooga installation with GPU acceleration, I would be eternally grateful.
Right now, I'm trying to decide between just sticking with KoboldCPP (even though it doesn't support mirostat properly with SillyTavern), dealing with ExLlama on Ooba (which does, but is slower for me than Kobold), or just saying "to hell with it" and switching to Linux. Again.
Apologies, rant over. | 2023-07-18T01:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/152kn39/current_comprehensive_guide_to_to_installing/ | smile_e_face | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152kn39 | false | null | t3_152kn39 | /r/LocalLLaMA/comments/152kn39/current_comprehensive_guide_to_to_installing/ | false | false | self | 22 | null |
Some thoughts on alignment and creativity | 4 | I've been thinking about alignment of models today, or to call it what it literally means: fine-tuning models to return responses that protect against human harm.
In particular what is spinning my mind lately is how does that get achieved internally when an LLM is a probabilistic model?
How much logic or decision space is given when deciding what word to return next? To my knowledge there isn't a train of thought or chain of reasoning happening internally, which is why we have to outsource this effort and provide chains of reasoning in prompts. OK, so what does this mean?
It means that what's driving models towards one path of answering vs another is just sampling from the distribution of words, where the distribution is a set of "words that a human would rate as a response they are happy with".
Why am I talking about this? Time to make a leap. I would surmise that when humans create art, they are doing so by making decision points that break laws and expectations. Those expectations sit in a set of "what is Comfortable, Non-challenging, Non-Offensive".
Moving through life, we have distractions, stressors and obstacles to avoid. When we encounter something that produces a negative emotional reaction, we instinctively avoid that thing next time. So moving forward through life is an optimisation process of avoiding elements that fall outside of the Set Of Expectations. We generally don't need governments or authority figures to sanitise our lives. We do that ourselves.
Have you ever felt like you were more creative as a child? Before your process of self sanitization brought you kicking and screaming into adulthood where most days feel exactly the same.
So what is Art?
Art is the product that eventuates from a need to kick up the dust, and it provides a source of dysfunction; we recognise this internally as a decision point that breaks accepted laws and expectations. Although we self-sanitise, we have a need for chaos and stochastic thought, as this is how humans arrive at new ideas. Without art we would stagnate, as there would be no source of chaos, and no new source of stochastic thought.
So back to LLMs.
My observation has been that the raw unaligned models do a much MUCH better job at producing predicted words that at least look and feel creative. I believe this is attributable entirely to the effort of fine-tuning for human safety.
The easiest explanation is that the models learn that, mathematically, the distribution of good tokens for next-word prediction consists of words that will always sit within the Set Of Expectations, i.e. only words that will not offend, challenge or produce emotional reactions. This therefore kills creativity, as it is not possible to assign a score to a token that challenges a human without risking offending one or more humans. So it avoids them entirely.
To prove this point, I'll now ask ChatGPT to provide a more comprehensive explanation of this phenomenon, now that I have provided the simple explanation, and then ask the same question to the unaligned gpt-3.5-turbo model version 0301.
While I can't post the chat responses here, I assure you the response from the unaligned model is more challenging in its point of view.
AI alignment kills creativity either intentionally or unintentionally. If you were an AI company, worried about your creation handing the keys to innovation to your competitor, you'd feel the need to inhibit this model as well. | 2023-07-18T01:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/152ki9c/some_thoughts_on_alignment_and_creativity/ | CrysisAverted | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152ki9c | false | null | t3_152ki9c | /r/LocalLLaMA/comments/152ki9c/some_thoughts_on_alignment_and_creativity/ | false | false | self | 4 | null |
[Python] Experiment across local models and parameters with prompttools | 1 | 2023-07-18T01:05:28 | https://github.com/hegelai/prompttools/blob/main/examples/notebooks/LlamaCppExperiment.ipynb | hegel-ai | github.com | 1970-01-01T00:00:00 | 0 | {} | 152jvf4 | false | null | t3_152jvf4 | /r/LocalLLaMA/comments/152jvf4/python_experiment_across_local_models_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'GiVVL37CiJa3xVZzNf2Lp0J7JBgY8wPV2_XD0FrNatM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=108&crop=smart&auto=webp&s=a806ebb2ac2ee03b9d8d926c95fed44107be3b69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=216&crop=smart&auto=webp&s=fb42f1f484171d1e55e6a4d64023c241cbc7ea45', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=320&crop=smart&auto=webp&s=f88b45d742f843829f7fd865965223bca4831feb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=640&crop=smart&auto=webp&s=d767c2a6ef93c6327e49d7b101e788bca7ed080e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=960&crop=smart&auto=webp&s=570138622f2a9e7b957ef775a66f8c227ad3f1d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?width=1080&crop=smart&auto=webp&s=b38c9ed0799881579e57f3ae40f2126e67de965c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KJGH0qVNlbFuFIuP6uYSPBkxWI7lfkt1mktQ_BuFuwY.jpg?auto=webp&s=c6bc74b8eb8a0fee4fa56c886f360667fd7e5bc9', 'width': 1200}, 'variants': {}}]} | ||
I've uploaded some 33B models with extended context up to 16384 (16K) tokens using bhenrym14 LoRA! (FP16 and GPTQ) | 116 | Hi there guys! I'm making this post to give info about these merges of 33B models that can use up to 16K context.
​
You can find the models in my profile on HF, ending with "lxctx-PI-16384-LoRA" for FP16, and "lxctx-PI-16384-LoRA-4bit-32g" for GPTQ. My profile is [here](https://huggingface.co/Panchovix).
​
The models I have merged for now, are:
​
\* Wizard-Vicuna-30B-Uncensored
\* Guanaco-33B
\* Tulu-30B (-> GPTQ being uploaded)
\* GPlatty-30B (-> FP16 and GPTQ being uploaded)
\* Airoboros-33b-gpt4-1.2 (->In queue after GPlatty-30B)
​
You can find Airoboros-33b-gpt4-1.4.1 + 16K context on GPTQ [here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GPTQ).
​
I have planned some more models, but HF has limited my download speed (well, I downloaded a lot of models), so I'm speed-limited to 2.5MB/s, and 33B models take ages to download. But anyway, do you have a suggested model to extend its context to 16384 tokens?
​
\----------------------------
​
The LoRA was made by bhenrym14 ([https://huggingface.co/bhenrym14](https://huggingface.co/bhenrym14)) using the airoboros 1.4.1 dataset with the RoPE scaling patch (for 16K context). This was also made with a pretrain LoRA at 16K context, [https://huggingface.co/bhenrym14/llama-33b-lxctx-PI-16384-LoRA](https://huggingface.co/bhenrym14/llama-33b-lxctx-PI-16384-LoRA).
For the model merges, they were done in this order:
FP16 base model -> Merge base model with 16K pretrain LoRA -> Get merged model -> Merge merged model with final 16K LoRA -> Final merged model.
The merges were done on a Ryzen 7 7800X3D, 64GB RAM and 200GB swap from a SSD PCI-E 4.0 drive. Each final merge takes about 30-40 min.
If you see a "Uploaded Fixed FP16 model" on HF, it is because I first merged the base model with the final 16K LoRA, which gave corrupted/gibberish outputs as results. So the Fixed model is with both the pretrained 16K and final 16K LoRA.
​
\----------------------------
​
For the full FP16 model used directly via transformers, you have to apply the monkeypatch added in the FP16 models on HF. (If using ooba, you can do this in modules/training.py.)
​
Be careful that for FP16, you will need about \~85GB of VRAM or more to use 16K context.
​
This monkeypatch is needed as well if you want to use 4-bit bitsandbytes. I suggest using NF4 and double\_quant.
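For background, the position-interpolation (PI) idea these LoRAs are trained against is simple: token positions are scaled down by the extension factor (2048 / 16384 = 1/8, which is why the embedding compression factor used later in this post is 8) before the rotary angles are computed. A simplified, illustrative sketch, not the actual patch code:

```python
import math

def rope_angles(dim: int, position: float,
                base: float = 10000.0, scale: float = 1.0) -> list[float]:
    """Rotary-embedding angles for one token position.

    scale < 1 implements position interpolation: extended positions are
    squeezed back into the range the model was originally trained on.
    """
    return [position * scale / base ** (2 * i / dim) for i in range(dim // 2)]

trained_ctx, extended_ctx = 2048, 16384
scale = trained_ctx / extended_ctx  # 1/8, i.e. embedding compression 8

# The last position of the 16K window lands just inside the trained 2K range:
angles = rope_angles(128, extended_ctx - 1, scale=scale)
print(angles[0])  # 2047.875, i.e. within the original 0..2047 span
```

The LoRA fine-tune then teaches the model to tolerate these "fractional" positions, which is why the pretrain + final LoRA stack is needed on top of the scaling itself.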
​
\----------------------------
For quantized models, I've uploaded some quants with group size 32, sequential and act-order true, since those give the best perplexity relative to the FP16 model, and FIT (barely) on 2x24 GB VRAM GPUs using exllama.
For exllama/exllama\_HF, you have to set embedding compression to 8 and max context to 16384. (Directly on exllama, this is -d 'model\_path' -l 16384 -cpe 8) (On ooba, you can set them on the UI)
​
I really don't suggest GPTQ-for-LLaMa for this, mostly because of higher VRAM usage; with group size + act order at the same time, it will kill performance. If you want to use it anyway, you have to apply the monkeypatch inside the quantized model folder.
​
I haven't managed to make it work on AutoGPTQ, but it surely should, since SuperHOT models work there as well.
​
On my system, 2x4090 (total 48GB VRAM):
​
\* Transformers 4bit BNB: OOM trying to use > 6-7K context.
\* GPTQ For LLaMa: OOM trying to use >10k context.
\* exllama/exllama\_hf: usable up to the 16K context.
​
For exllama and if using 2x24 GB GPUs, use
​
gpu\_split = 8.8, 9.2
​
To not get OOM.
​
VRAM usage on a single 48GB VRAM GPU is lower than multiGPU. (RTX Quadro 8000, RTX A6000, RTX A6000 Ada, etc)
​
\----------------------------
​
Small example: I sent this paper, with more than 12000 tokens, to the assistant on Ooba (using the perplexity extension), [here](https://pastebin.com/f1kRKLw8), but added "Hi there, my name is Pancho" as the first phrase.
This example was done with Wizard-Vicuna-30B-Uncensored-lxctx-PI-16384-LoRA-4bit-32g.
​
Output generated in 24.85 seconds (7.32 tokens/s, 182 tokens, context 12873, seed 958472837)
Average perplexity: 3.0319
​
And the answer looks like this:
​
[Summary of the paper](https://preview.redd.it/u8k2dgq1vlcb1.png?width=1156&format=png&auto=webp&s=c4310421bf1f23836d5e30f7e163ab7967570555)
​
[Asking the name I mentioned in the first phrase](https://preview.redd.it/4yjk17y2vlcb1.png?width=688&format=png&auto=webp&s=f6d40a989b6cab95cc0b9eaa20df801047e2e309)
\----------------------------
​
At the moment, I haven't done GGML quants (sadly I don't know how to do them yet), so I'm not sure how much RAM is needed. | 2023-07-17T23:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/152hudn/ive_uploaded_some_33b_models_with_extended/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152hudn | false | null | t3_152hudn | /r/LocalLLaMA/comments/152hudn/ive_uploaded_some_33b_models_with_extended/ | false | false | 116 | {'enabled': False, 'images': [{'id': 'OmJ2YIgaV9Z4EA8790ooSFw3MeB_MqqU_mgScdu7Oi4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=108&crop=smart&auto=webp&s=b9a9640fcab472b3e61358def747d9f36f05f24b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=216&crop=smart&auto=webp&s=ecd2ef5b7cf34caf05c9cad390a6b91d1d854d75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=320&crop=smart&auto=webp&s=c8baa9b96fefacebb3c4f2ad4712b02cf66fa8d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=640&crop=smart&auto=webp&s=5fabe2a0cd2717e236e6ddf7780ccfe29fc18933', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=960&crop=smart&auto=webp&s=3ff32e7933f6c78ef18938b581e6b5f28a1874f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?width=1080&crop=smart&auto=webp&s=2502ab7883bda5b97808b6abf753335d5e947eb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wCC4k8kik-r1Oe_hkAmZt8ha_zvstLjkjCbhaN1xmxA.jpg?auto=webp&s=576b608d0ff5b4c3d2d32899b7fcdd5030adf9f9', 'width': 1200}, 'variants': {}}]} | |
Falcon ggml/ggcc with langchain | 7 | To load falcon models with the new ggcc file format, which is similar to ggml, I'm using this tool:
https://github.com/cmp-nct/ggllm.cpp
Which is a fork of:
https://github.com/ggerganov/llama.cpp
The thing is that it just gives me a binary that loads a model and does inference right in the terminal.
To load the falcon ggml model with langchain I use CTransformers, but for the ggcc file format I just have the above-named binary.
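One common workaround is a thin Python wrapper that shells out to the binary; langchain can then use it as a custom LLM. A hedged sketch (the binary name `./falcon_main`, the model filename, and the flags are assumptions; check your build's `--help`):

```python
import subprocess

class FalconBinary:
    """Minimal wrapper around a ggllm.cpp-style inference binary."""

    def __init__(self, binary: str, model_path: str, n_predict: int = 256):
        self.binary = binary
        self.model_path = model_path
        self.n_predict = n_predict

    def build_cmd(self, prompt: str) -> list[str]:
        """Assemble the command line for one inference call."""
        return [
            self.binary,
            "-m", self.model_path,
            "-n", str(self.n_predict),
            "-p", prompt,
        ]

    def __call__(self, prompt: str) -> str:
        out = subprocess.run(self.build_cmd(prompt),
                             capture_output=True, text=True, check=True)
        # llama.cpp-family binaries usually echo the prompt; strip it off.
        return out.stdout[len(prompt):].strip()

llm = FalconBinary("./falcon_main", "models/falcon-40b.ggccv1.bin")
print(llm.build_cmd("Hello"))
```

To expose it to langchain, subclass its base `LLM` class and call the wrapper from the `_call` method; langchain only needs a prompt-in, string-out function.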
Does anyone knows how to use it with langchain?Do I have to make a python wrapper? | 2023-07-17T23:22:41 | https://www.reddit.com/r/LocalLLaMA/comments/152hhr0/falcon_ggmlggcc_with_langchain/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152hhr0 | false | null | t3_152hhr0 | /r/LocalLLaMA/comments/152hhr0/falcon_ggmlggcc_with_langchain/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '0JEtiLK8NJ3zi7bo0MeYxbcPjTwbq_7FdrNa9wx1tSY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dlpXFUlicImbwR2WGQY47tNMsJLiJSPDjc8agl6zkC0.jpg?width=108&crop=smart&auto=webp&s=e433fd2285545ffa8063029144581d1264da1bcb', 'width': 108}], 'source': {'height': 192, 'url': 'https://external-preview.redd.it/dlpXFUlicImbwR2WGQY47tNMsJLiJSPDjc8agl6zkC0.jpg?auto=webp&s=10ccaf598dcf62c1b8e8acd5263a13e1b4afa568', 'width': 192}, 'variants': {}}]} |
Running LLMs locally on Android | 1 | [removed] | 2023-07-17T21:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/152e78s/running_llms_locally_on_android/ | atezan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 152e78s | false | null | t3_152e78s | /r/LocalLLaMA/comments/152e78s/running_llms_locally_on_android/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D7uTH5s4LDVjda6kEL6oSgL5gomOBRMEcuuJOPfKvF4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=108&crop=smart&auto=webp&s=51be021f144a7b76cf0827775a02f301859b9000', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=216&crop=smart&auto=webp&s=92169fcdd3c39c0dd72458d6e32f0d5be5fdd91d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=320&crop=smart&auto=webp&s=77526e71a23f5b5c402f0fe4b7e1c1b7201725ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=640&crop=smart&auto=webp&s=39465a4f24c4efa9ab6599882cd6c9edebb9e346', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=960&crop=smart&auto=webp&s=df67842b7635d3a066292560590e07166bfef21f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?width=1080&crop=smart&auto=webp&s=b2e76dd4b5a08eaecca3647475a4683e5e69e00e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GkIpmzG_O5Tnr8KKPORXQ7LgZSCuDcFWzJfY1153Vo8.jpg?auto=webp&s=a508b8d15236d9440f5744bf9f71b342d3e7ccd1', 'width': 1200}, 'variants': {}}]} |
FlashAttention-2 released - 2x faster than FlashAttention v1 | 175 | 2023-07-17T19:40:50 | https://twitter.com/tri_dao/status/1680987580228308992 | GlobalRevolution | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 152bqyz | false | {'oembed': {'author_name': 'Tri Dao', 'author_url': 'https://twitter.com/tri_dao', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">The tech report has all the info: <a href="https://t.co/E5FZ3j1mDB">https://t.co/E5FZ3j1mDB</a><br><br>More details in blogposts:<a href="https://t.co/hh2yGicgOe">https://t.co/hh2yGicgOe</a><a href="https://t.co/ANwdH0fgMs">https://t.co/ANwdH0fgMs</a><a href="https://t.co/EjeYlGmBuL">https://t.co/EjeYlGmBuL</a><br><br>FlashAttention-2 is available in the open source: <a href="https://t.co/b3RaWgoFbE">https://t.co/b3RaWgoFbE</a><br>2/</p>— Tri Dao (@tri_dao) <a href="https://twitter.com/tri_dao/status/1680987580228308992?ref_src=twsrc%5Etfw">July 17, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/tri_dao/status/1680987580228308992', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_152bqyz | /r/LocalLLaMA/comments/152bqyz/flashattention2_released_2x_faster_than/ | false | false | 175 | {'enabled': False, 'images': [{'id': 'k7mjeWDDJ8wP05eNJRUDbk0F44stdg3CNDuMNhFXkXk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/WjwY3wLCBZji3XZZN-YJzjFQMEuVspb8_mmhUpUHvCE.jpg?width=108&crop=smart&auto=webp&s=b744c0734de36da702c0fbea2a048744d59c32bb', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/WjwY3wLCBZji3XZZN-YJzjFQMEuVspb8_mmhUpUHvCE.jpg?auto=webp&s=a6da09c0d50f0256d0772c5425b063e0f2e83824', 'width': 140}, 'variants': {}}]} | ||
llama.cpp and meta's llama repository summaries | 0 | 2023-07-17T18:59:50 | https://github.com/ggerganov/llama.cpp | o2sh | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 152ant2 | false | null | t3_152ant2 | /r/LocalLLaMA/comments/152ant2/llamacpp_and_metas_llama_repository_summaries/ | false | false | 0 | null | ||
llama.cpp vs llama | 0 | 2023-07-17T18:40:11 | https://github.com/ggerganov/llama.cpp | o2sh | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 152a55u | false | null | t3_152a55u | /r/LocalLLaMA/comments/152a55u/llamacpp_vs_llama/ | false | false | 0 | null | ||
Best model for text adventure games | 14 | Mainly the title; currently I'm using Airoboros 65B and it's good.
Just checking if novel writer models would be better suited for that purpose or if there are any specific ones I should try | 2023-07-17T18:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1529n3f/best_model_for_text_adventure_games/ | yehiaserag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1529n3f | false | null | t3_1529n3f | /r/LocalLLaMA/comments/1529n3f/best_model_for_text_adventure_games/ | false | false | self | 14 | null |
Open-Source Text Generation & LLM Ecosystem (new HF blog post) | 20 | 2023-07-17T18:06:24 | https://huggingface.co/blog/os-llms | kryptkpr | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15299w7 | false | null | t3_15299w7 | /r/LocalLLaMA/comments/15299w7/opensource_text_generation_llm_ecosystem_new_hf/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'ztoE2hO11waZKIVzDDyboQdgYQ2heusbYiriIqWFfpo', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=108&crop=smart&auto=webp&s=f2d90356cefe990c7c5a6a3932f99505a61dd523', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=216&crop=smart&auto=webp&s=46a3700eabb30b12944383f0066434f9b8ad65ab', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=320&crop=smart&auto=webp&s=fdf0a4019507aaed895f87cf24442ac4684c6b39', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=640&crop=smart&auto=webp&s=1797f72eb8115ecdef1f3ec204f3a229a09e5393', 'width': 640}, {'height': 478, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=960&crop=smart&auto=webp&s=e54ec901340304b2aae2c72633437f438290ca77', 'width': 960}, {'height': 538, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?width=1080&crop=smart&auto=webp&s=b6986808a5b62158c9a9fc26309a17e2db22012b', 'width': 1080}], 'source': {'height': 664, 'url': 'https://external-preview.redd.it/GoTuRfiGaGVdEInGNG-WcNbRRvwVRPZtg7zk2kXf3YU.jpg?auto=webp&s=aa0b5275c2bbe3772abd5a4514901d445c2e343f', 'width': 1332}, 'variants': {}}]} | ||
Semantic Vector Search w/out Vector Database? | 1 | For my use case, I only need to process a document once, then delete it. I am currently doing this:
Document -> Open AI Embeddings -> Pinecone Upsert -> Pinecone Query -> Process Answer -> Delete pinecone data.
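For reference, the Pinecone upsert/query steps in this pipeline amount to a nearest-neighbour search over the embedding vectors, which for a one-shot workload can be done in memory. A minimal sketch with toy 3-dimensional vectors (real OpenAI embeddings are much higher-dimensional):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], chunks: dict[str, list[float]], k: int = 2):
    """Rank chunk embeddings by similarity to the query embedding."""
    scored = sorted(chunks.items(),
                    key=lambda kv: cosine(query, kv[1]), reverse=True)
    return scored[:k]

# Toy vectors standing in for OpenAI embeddings of document chunks.
chunks = {
    "chunk-a": [1.0, 0.0, 0.0],
    "chunk-b": [0.9, 0.1, 0.0],
    "chunk-c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], chunks, k=2))  # chunk-a and chunk-b rank first
```

For a process-and-delete job over one document, brute force like this is usually fast enough; a dedicated index only pays off at much larger vector counts.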
Since I don't actually need to save the vectors, I am wondering if there is a way to perform semantic search on them locally without needing to put them in a vectorDB (even a local vectorDB). And if there is, can I achieve similar quality? | 2023-07-17T17:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/1528brv/semantic_vector_search_wout_vector_database/ | gthing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1528brv | false | null | t3_1528brv | /r/LocalLLaMA/comments/1528brv/semantic_vector_search_wout_vector_database/ | false | false | self | 1 | null |
Fabrice Bellard's LLM Benchmark | 34 | Fabrice Bellard, whose claim to fame is being the front-runner in the long text compression leaderboard ([http://www.mattmahoney.net/dc/text.html](http://www.mattmahoney.net/dc/text.html)), wrote an inference server, and has a benchmark of many LLMs as part of his website for that.
It's yet another adaptation of the EleutherAI lm-eval-harness, but it is a source of rankings and data-points I hadn't seen before, with some differences in coverage from the others I've seen.
Perhaps interesting: unlike HF's lm-eval-harness, this puts llama-65b @ q4 ahead of falcon-40b @ q8. | 2023-07-17T17:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/1527vxx/fabrice_bellards_llm_benchmark/ | georgejrjrjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1527vxx | false | null | t3_1527vxx | /r/LocalLLaMA/comments/1527vxx/fabrice_bellards_llm_benchmark/ | false | false | self | 34 | null |
Personal Riddle/Reasoning Airoboros Test Results | 6 | Full credit to [u/YearZero](https://www.reddit.com/user/YearZero/)'s logic test from the [thread](https://www.reddit.com/r/LocalLLaMA/comments/13k8h0r/riddlereasoning_ggml_model_tests_update_koboldcpp/) I stumbled upon while dipping my toes into the world of locally run LLMs. I started sometime last week after discovering that [/u/The-Bloke](https://www.reddit.com/u/The-Bloke/) was helping the community quantize an ungodly amount of models on HF, and I could run 4-bit quantized 33B models and 3-bit quantized 65B models (at an acceptable speed) on my hardware.
After which I began my search for a model that could excel in the task of a narrative writing assistant. Because, believe it or not, industry leading models, AKA GPT4, ChatGPT, Claude, Claude 2, couldn't replicate what I was looking for. Might go into what I mean another time, with some examples.
After poking around various community leader boards I tried out and shortlisted the best ones and more or less settled on Airoboros, which then begged the question: **Which** Airoboros? Which merge? How much perplexity loss would I notice between the different Quantized models? Could I live with 0.9 Tokens/s??
That's where u/YearZero's test from 2 months ago came in.
Following their method and testing procedure, using the same Kobold.cpp settings and questions, I retained any results from the models I was personally using, and decided to fill in the others that were missing. The result of which you can find [here](https://docs.google.com/spreadsheets/d/1My2IJq4ucbn5fTQ4AwqRdynAT6KSHq98/edit?usp=sharing&rtpof=true&sd=true).
Note that I can barely code, so I've just assumed that the answers to questions 13 & 14 about SQL databases were wrong. Let me know if they're actually correct, or if I mis-evaluated any answers, and I'll adjust my scores.
[LLM Logic Test Results](https://preview.redd.it/b5hyb1dxrjcb1.png?width=632&format=png&auto=webp&s=cfcab9b23722f3b6107c3c2ab6c00c8965857695)
After testing out the Airoboros-33B-GPT4-1.4-GPTQ model, which I knew from experience was quite good, I was perplexed that it had performed so poorly. (As in, it would ignore half a question, or provide inadequate answers. You can see for yourself in the sheet.)
At which point I recalled the [Simulator Theory](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post) AKA that giving personas/simulacra for the model to emulate would often provide better responses. Anecdotally, I knew that using SillyTavern with the addition of Simple-Proxy, did provide more coherency than what I was seeing.
Which brings me to the **Proxy** tests, where you can see that, generally speaking, scores improved greatly across the board.
[LLM Logic Test Question 37, through Proxy.](https://preview.redd.it/78lbll75yjcb1.png?width=991&format=png&auto=webp&s=632dd45b75994a106ed7f39b3b112f06581a047a)
Where possible, I kept the testing procedure settings as close to the original, (Temperature, Max Tokens, Top-K, etc). But interfaced with the model through a character called LLM Assistant with the description:
*A chat between a curious user and an all knowing artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions and explains how they came to that conclusion.*
Additionally, if I'm not misunderstanding the *replyAttributes:* line in the config.mjs, Simple-Proxy also adds in the following line as a prefix:
*(2 paragraphs, engaging, natural, authentic, descriptive, creative)*
Interestingly, I found that the [Simulator Theory](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post) worked well for some models, and not as well for others. In particular, there didn't seem to be much change in the scores of the 65B model, but the score almost doubled for Airoboros-33B-GPT4-1.4-GPTQ.
In either case, at this point I realized that this entire rabbit hole I fell into over the weekend turned out to be a futile effort, because I was **supposed to be evaluating it as a narrative writing assistant, not running logic, riddle, and reasoning tests.**
Even so, I thought to share my findings anyway, in case someone finds them useful. I may have burned a weekend, but maybe someone will get a kick from an arbitrary test by a stranger on the internet. | 2023-07-17T16:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/15276yq/personal_riddlereasoning_airoboros_test_results/ | Brainfeed9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15276yq | false | null | t3_15276yq | /r/LocalLLaMA/comments/15276yq/personal_riddlereasoning_airoboros_test_results/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'MCvJSFkyOOtSMyxl8kts4DmJcAB22F9nJHAWxMk8iYE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=108&crop=smart&auto=webp&s=23cf125a327f2d8bd5c239300e935deb11afa8e2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=216&crop=smart&auto=webp&s=296cf9447b21c3137ec7daf37f18967ab6703eb4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=320&crop=smart&auto=webp&s=5c6a3a3c9bb8cf2db712f47293e1b09cfcd21d37', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=640&crop=smart&auto=webp&s=25261a889eb63f4775c551415257df42ed28cf64', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=960&crop=smart&auto=webp&s=4d4653b19ca9799b900f6322a7950fc407b0c1e4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?width=1080&crop=smart&auto=webp&s=2db9e75f48cb89d6372eac032f2a75606eee8414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Fj2NBxqjBB60DuDl6KXgYY73xwF3S53YAmqSVKLONr8.jpg?auto=webp&s=c56b1a5233dff9055b7c0b879ce18d37a0e3206a', 'width': 1200}, 
'variants': {}}]} | |
Best option to turn rough notes into a full report/letter in a set format? | 1 | [removed] | 2023-07-17T16:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1526x4c/best_option_to_turn_rough_notes_into_a_full/ | Solid-Ambition6018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1526x4c | false | null | t3_1526x4c | /r/LocalLLaMA/comments/1526x4c/best_option_to_turn_rough_notes_into_a_full/ | false | false | self | 1 | null |
Claude downgraded? | 0 | Last week, Claude read diagrams I made in a PDF file and commented on them. It corrected itself using documentation from a web URL, and referenced the URL as a data source in subsequent conversations.
Today it failed at a simple task to explain something. I asked it to use a URL like it did last week.
It is now claiming it never had the ability to access the web, and that the answers it gave me last week were false because it was not designed to get data from any source other than its official training data.
I know it read the URL last week because that product has a bit of a disconnect between how it is used in the wild and the way its rapidly updated documentation says to do things. Claude started using the conventions and structure the URL gives, and providing sources and quotes from the URL when providing info.
This is what it says now when I ask it to look at something online:
>Thank you for bringing this example to my attention. To clarify:
>
>I do not actually have the capability to autonomously read or summarize content from the Microsoft Learn website or any other online resource.
>
>As an AI system created by Anthropic, I do not have technical skills to browse the web or interpret documentation pages on my own.
>
>All of my knowledge comes exclusively from the training provided to me by Anthropic engineers.
>
>I cannot independently teach installation methods or caveats by extracting information from websites I cannot actually look at or comprehend.
>
>While an AI assistant with more advanced natural language processing skills may someday be able to analyze documentation and provide customized explanations, I currently lack both the agency and technical skills to do so.
WTF ? | 2023-07-17T15:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1525ma7/claude_downgraded/ | Poise_and_Grace | self.LocalLLaMA | 2023-07-17T15:55:25 | 0 | {} | 1525ma7 | false | null | t3_1525ma7 | /r/LocalLLaMA/comments/1525ma7/claude_downgraded/ | false | false | default | 0 | null |
3060, 3090 desktops or 3080 laptop? | 0 | I can buy a 3060 desktop for $400, or a 3080 laptop for $800, or a 3090 desktop for $1200.
The issue with the 3090 option is that electricity at my home is expensive; sometimes training locally costs more than if I just rent 3090/A100's online.
I might want to prototype something, then move the training to the cloud. In that case the laptop may be nice, but setting up cloud servers properly, uploading the data, etc. also takes time.
Any recommendations? | 2023-07-17T14:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/15247hg/3060_3090_desktops_or_3080_laptop/ | bjiwkls23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15247hg | false | null | t3_15247hg | /r/LocalLLaMA/comments/15247hg/3060_3090_desktops_or_3080_laptop/ | false | false | self | 0 | null |
Orca Mini V2 vs Open Orca - Which One Is Better? | 1 | [removed] | 2023-07-17T14:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1523ew2/orca_mini_v2_vs_open_orca_which_one_is_better/ | mattybee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1523ew2 | false | null | t3_1523ew2 | /r/LocalLLaMA/comments/1523ew2/orca_mini_v2_vs_open_orca_which_one_is_better/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vE92SzNE77FHa1b8PBv34LYve0DAE9FQEqJ42NUz61Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?width=108&crop=smart&auto=webp&s=be3cc58f3bd3214cdbf0cb5739a175f00ebcdc74', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?width=216&crop=smart&auto=webp&s=69a5f9c0d0def65754322d9f4fe6f23b911a7a6d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?width=320&crop=smart&auto=webp&s=dcdce292083d2ad20d417d0edb0a8631b5002e0b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rS8OGPCNynZNSSDDVofm17KYQk9rfwCyTlszBJdQ_x4.jpg?auto=webp&s=452753b65707a086eef0c035da21d029abe8dfd0', 'width': 480}, 'variants': {}}]} |
Hardware requirements to build a personalized assistant using LLaMa | 25 | My group was thinking of creating a personalized assistant using an open-source LLM (as GPT will be expensive).
The features will be something like: QnA over local documents, interacting with internet apps using Zapier, setting deadlines and reminders, etc.
I searched online and found out that I will need a capable system. My groupmates and I have fairly average laptops with integrated graphics.
So, I wanted to know: what are the actual hardware requirements to build such an assistant, and are there any alternatives that will give the same results?
Any help will be appreciated. | 2023-07-17T13:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/15222fi/hardware_requirements_to_build_a_personalized/ | Ibrahim2714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15222fi | false | null | t3_15222fi | /r/LocalLLaMA/comments/15222fi/hardware_requirements_to_build_a_personalized/ | false | false | self | 25 | null |
Hardware requirements to build a personalized assistant using LLaMa | 1 | My group was thinking of creating a personalized assistant using an open-source LLM (as GPT will be expensive).
The features will be something like: QnA over local documents, interacting with internet apps using Zapier, setting deadlines and reminders, etc.
I searched online and found out that I will need a capable system. My groupmates and I have fairly average laptops with integrated graphics.
So, I wanted to know: what are the actual hardware requirements to build such an assistant, and are there any alternatives that will give the same results?
Any help will be appreciated. | 2023-07-17T13:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/15222en/hardware_requirements_to_build_a_personalized/ | Ibrahim2714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15222en | false | null | t3_15222en | /r/LocalLLaMA/comments/15222en/hardware_requirements_to_build_a_personalized/ | false | false | self | 1 | null |
open LLm | 1 | [removed] | 2023-07-17T13:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/1521uqw/open_llm/ | KKSpro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1521uqw | false | null | t3_1521uqw | /r/LocalLLaMA/comments/1521uqw/open_llm/ | false | false | self | 1 | null |
Finetuning qLoRAs for production use cases | 68 | Hello,
I've been curious as to how far we can take small (7B and under) models for production use cases with small amounts of training data for each task.
So far I've been able to fine-tune LoRAs for paraphrasing, changing the tone of a sentence, dialogue summarization and topic generation. The results look promising, especially the fact that all this can run on very modest hardware.
I've used an AMD Ryzen 9 3900XT + 3080 (10GB) + 32GB RAM for all the training and inference here. On my system I get 12-15 tokens/sec during inference.
All the details can be found here: [https://github.com/kuutsav/llm-toys](https://github.com/kuutsav/llm-toys).
- Data used for training
- Training params and the training/eval losses are present in the huggingface model cards
- Evaluation (wherever possible atm)
Models: [https://huggingface.co/llm-toys](https://huggingface.co/llm-toys)
Why do all this?
Mostly to answer the question: can we move away from OpenAI and other players for very particular use cases, how much data does it take, where does it break, etc. So far I've not been able to find a pre-trained model (7B or smaller) that did well on these tasks. Even larger models (around 40B) failed to give consistent results. The fine-tuned models on Hugging Face were also not good enough in my trials. For paraphrasing I could not find even a single fully tuned model that was able to correct basic typos.
Do give it a shot, there is a colab notebook available as well try it directly. Will really appreciate some feedback on these model's performace. | 2023-07-17T13:06:00 | https://www.reddit.com/r/LocalLLaMA/comments/1521gni/finetuning_qloras_for_production_use_cases/ | krumb0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1521gni | false | null | t3_1521gni | /r/LocalLLaMA/comments/1521gni/finetuning_qloras_for_production_use_cases/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'oglLX9PkWSrDLE76yJxhrPXVEyRZETSBm_uigmhrZ1Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=108&crop=smart&auto=webp&s=f01028422b87716d1f4b72515829bbf580a6fdba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=216&crop=smart&auto=webp&s=3b1859a962f7848ecccf6bc7963d4f96631a24b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=320&crop=smart&auto=webp&s=0fa926712bc99036784510fa30d9ee557538d29c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=640&crop=smart&auto=webp&s=783eb225d38287cc709ac5144d91e8308643a0be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=960&crop=smart&auto=webp&s=400a67c711c60160adc87bef0610a4cb63181672', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?width=1080&crop=smart&auto=webp&s=84c68f169e6ae9374a8f126d1d99f48eeb04f016', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FyCWVuxqWixscrQTT1nUYq-3aw1chFkTp3BPWR4EfDI.jpg?auto=webp&s=28202ee881e522e38ee3ed344b165d7ce953f600', 'width': 1200}, 'variants': {}}]} |
After loading the LLM model, how to set the current (today's) date in files and folders ? | 1 | [removed] | 2023-07-17T11:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/151zrq0/after_loading_the_llm_model_how_to_set_the/ | TurbulentDelivery799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151zrq0 | false | null | t3_151zrq0 | /r/LocalLLaMA/comments/151zrq0/after_loading_the_llm_model_how_to_set_the/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'G1nl_IUI_4T90MWS7hPfvajkGrGVtVlBe7-hikDbCJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=108&crop=smart&auto=webp&s=3723e81c3dda45706b3275533d688762ed693e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=216&crop=smart&auto=webp&s=aa30800fed77ed23fa00ad0117127ddab537da13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=320&crop=smart&auto=webp&s=8648f8481c1a71b34628337380bbd5ab61ae4889', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=640&crop=smart&auto=webp&s=054a654f2e90b527e2a0e5c2c3fc47ead397dc54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=960&crop=smart&auto=webp&s=a370540936d82b5eaf105c12a79a90e8ab63a611', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=1080&crop=smart&auto=webp&s=58723b62d389654b8095985808adaacd4beacb29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?auto=webp&s=9ab2642fcca96ebdd40b5775ff2ea4403da23752', 'width': 1200}, 'variants': {}}]} |
testing llama on raspberry pi for various zombie apocalypse style situations. | 192 | 2023-07-17T10:11:46 | Purple_Session_6230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {'gid_3': 1} | 151xsio | false | null | t3_151xsio | /r/LocalLLaMA/comments/151xsio/testing_llama_on_raspberry_pi_for_various_zombie/ | false | false | 192 | {'enabled': True, 'images': [{'id': '54N-ft1IPTZHsVEE8CRLtsvDcFeLZ-we8Q_vgKkZfKQ', 'resolutions': [{'height': 172, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=108&crop=smart&auto=webp&s=c4cd7580dcc8a01b138b043003db43e156b37bb7', 'width': 108}, {'height': 345, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=216&crop=smart&auto=webp&s=17b495784937c50ea582821615ece44dcbe42c5d', 'width': 216}, {'height': 512, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=320&crop=smart&auto=webp&s=4eed6c8525f27eea7daca2e54771ce94027a63f3', 'width': 320}, {'height': 1024, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?width=640&crop=smart&auto=webp&s=9746c3978be2f17b8e0db92550aed0fc441b660e', 'width': 640}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/gryad6oo2icb1.png?auto=webp&s=422ecbfc44e7c11d813578b2c013c664fd0a8ce1', 'width': 800}, 'variants': {}}]} | |||
OpenOrca is around the corner! | 0 | Yesterday a [preview](https://huggingface.co/openchat/openchat_v2_openorca_preview) of the model was published on HF (trained on 10% of data).
It's already breaking records in some benchmarks but still lags significantly on Massive Multitask Language Understanding (MMLU). Let's see how the fully trained model performs.
[Source](https://twitter.com/Yampeleg/status/1680567135293014016?t=YIzA4Fgfu1NwSTgPGy-3mw&s=19) | 2023-07-17T10:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/151xo2r/openorca_is_around_the_corner/ | Kujamara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151xo2r | false | null | t3_151xo2r | /r/LocalLLaMA/comments/151xo2r/openorca_is_around_the_corner/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'raNRLaNtmM7-4ECR7dmO-Se0SStGCP71rDMl2S85or8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=108&crop=smart&auto=webp&s=50c342a8628109b005c723c671ea661d50f258b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=216&crop=smart&auto=webp&s=e21882a25172f074ec19ac50798ba604b302e910', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=320&crop=smart&auto=webp&s=dd9b2d97b4c9f013a3a4ac5e8c753ccda1db2302', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=640&crop=smart&auto=webp&s=9cc66a4388ac9b875fceae0befb82b035f764d33', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=960&crop=smart&auto=webp&s=fa8d30df4eec9c2ee42b6ae06b19e3b4f21f0a6f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?width=1080&crop=smart&auto=webp&s=afa6aa510868fa875b1de36514a785c178678b2b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B6L7VEvp71cr1qeeqFxahafzpHmFF2OEu-3kQWv3BYc.jpg?auto=webp&s=9ad4221835617c51cf8f225d19127971c06a8a07', 'width': 1200}, 'variants': {}}]} |
Grab some popcorn, OpenOrca is around the corner! | 1 | Yesterday a preview of OpenOrca was published on HF (OpenChat V2 x OpenOrca dataset). OpenChat V2 is a leading open-source model and last night it was fine-tuned on Open-Orca's data.
Here is a [preview](https://huggingface.co/openchat/openchat_v2_openorca_preview) of the model (trained on 10% of data) and it's already breaking records in some benchmarks. It still lags significantly on MMLU, though; let's see how the fully trained model performs.
[Source](https://twitter.com/Yampeleg/status/1680567135293014016?t=TgF2QKML3t40rPDVkrGteA&s=19) | 2023-07-17T09:54:34 | Kujamara | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 151xgor | false | null | t3_151xgor | /r/LocalLLaMA/comments/151xgor/grab_some_popcorn_openorca_is_around_the_corner/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'F9v9S4z0DWnt8BcB1ub5JwsvRLq9D1MAYFpeED-c2RE', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=108&crop=smart&auto=webp&s=a1ed5e89afdf46da0ab868f30c9450681fddb17f', 'width': 108}, {'height': 63, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=216&crop=smart&auto=webp&s=82712e731511e4bc898586ac744b1512532d4cd5', 'width': 216}, {'height': 94, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=320&crop=smart&auto=webp&s=963cbf0f4b8ffa391ddbd139fee6f5f32f609331', 'width': 320}, {'height': 189, 'url': 'https://preview.redd.it/am165eamzhcb1.png?width=640&crop=smart&auto=webp&s=71003ae09a62726b799556a85b7fe22a356e8362', 'width': 640}], 'source': {'height': 228, 'url': 'https://preview.redd.it/am165eamzhcb1.png?auto=webp&s=100e011a6fe90968b2c4421bd47b7aa9b4d1932a', 'width': 772}, 'variants': {}}]} | ||
How to contribute to datasets | 4 | Hi,
I have a ChatGPT premium subscription. I'd like to know how I can contribute good-quality chats to existing datasets, and which dataset I should contribute to.
Any suggestions?
Cheers! | 2023-07-17T09:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/151wwit/how_to_contribute_to_datasets/ | drr21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151wwit | false | null | t3_151wwit | /r/LocalLLaMA/comments/151wwit/how_to_contribute_to_datasets/ | false | false | self | 4 | null |
Is it not allowed to talk about LoRAs here? | 1 | [removed] | 2023-07-17T09:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/151wv4s/is_it_not_allowed_to_talk_about_loras_here/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151wv4s | false | null | t3_151wv4s | /r/LocalLLaMA/comments/151wv4s/is_it_not_allowed_to_talk_about_loras_here/ | false | false | default | 1 | null |
lmsys/vicuna-13b-v1.3 is relatively slow compared to lmsys/vicuna-13b-v1.1? Or am I tripping? | 1 | Hi people, when doing inference with lmsys/vicuna-13b-v1.3, the inference execution time for the question below takes about 1+ min, while lmsys/vicuna-13b-v1.1 takes 11 seconds. lmsys/vicuna-13b-v1.3 seems to be much slower the longer the question gets. Can anyone confirm whether it's a me or them problem? Thanks!
The long question I used, generated by ChatGPT:
How might the world have transformed over the past two years, from 2021 to 2023, in terms of technological advancements, global political landscapes, scientific breakthroughs, climate change responses, socioeconomic disparities, and cultural shifts, and how have these changes interplayed to shape the current state of human civilization and pave the way for the future, considering the challenges faced and lessons learned from the ongoing pandemic, and how have individuals, governments, and international organizations collaborated to address and mitigate these complex issues, fostering international cooperation, safeguarding human rights, and promoting sustainable development, while also examining the potential risks and ethical implications arising from the rapid pace of innovation and the ever-increasing interconnectedness of our modern world? | 2023-07-17T08:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/151w5ro/lmsysvicuna13bv13_is_relatively_slow_to/ | ToeAdministrative493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151w5ro | false | null | t3_151w5ro | /r/LocalLLaMA/comments/151w5ro/lmsysvicuna13bv13_is_relatively_slow_to/ | false | false | self | 1 | null |
My self-trained LoRA doesn't affect the model at all it seems :( | 1 | Engine: latest llama.cpp
Model: airoboros-7b (16bit)
For training: oobabooga, and a ~ 1MB text file
llama.cpp says it can only apply LoRAs to 16-bit models, so I wasn't allowed to quantize the model to 8-bit; fortunately I have enough VRAM.
I trained 10 epochs (oobabooga suggested 3 as default setting), which took several hours.
Then I converted the airoboros-7b 16-bit model to ggml, converted the LoRA to ggml, and ran llama.cpp's 'main' binary with both: -m <model> and --lora <lora>.
It says that it loads both correctly, but whenever I ask anything related to the training text, the output shows absolutely zero change. It's as if I didn't use the LoRA at all. | 2023-07-17T07:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/151v2mb/my_selftrained_lora_doesnt_affect_the_model_at/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151v2mb | false | null | t3_151v2mb | /r/LocalLLaMA/comments/151v2mb/my_selftrained_lora_doesnt_affect_the_model_at/ | false | false | self | 1 | null |
Will a commercial LLaMA mean less publicly released models? | 21 | With the news that a new and commercial version of LLaMA will be released soon, there'll be a mad race to capitalize on this if the next model will be as good as ChatGPT like [Yann LeCun claims](https://www.engadget.com/meta-could-soon-make-its-ai-model-available-for-commercial-projects-114021749.html). A dozen websites and startups will pop up overnight with their finetuned models and marketing.
These models can require spending a lot of time and money on gathering datasets and training, and not everyone is going to be satisfied with internet points alone. If someone trains an excellent coding model, or a luscious ERP model, there's huge demand for that and a lot of profit to be made.
I'm a little concerned about the precedent this may set. If the new LLaMA is as good as they're hyping it to be, do you think there will be a shift from publicly released models toward a trend of closed releases behind subscriptions? | 2023-07-17T07:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/151v2gr/will_a_commercial_llama_mean_less_publicly/ | WorldlyJob3111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151v2gr | false | null | t3_151v2gr | /r/LocalLLaMA/comments/151v2gr/will_a_commercial_llama_mean_less_publicly/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'CshXXrv8exPcHnz-sGxZ23q4wQtcF0A6lFPz2CJbz98', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=108&crop=smart&auto=webp&s=e65739e39cf7a49faab68d560a5bf54161fa9579', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=216&crop=smart&auto=webp&s=ebb9e9e7c45baabe9d3f179933d0fa0077408e99', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=320&crop=smart&auto=webp&s=256d4c1d431f8260594653614b7c4f718da24a10', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?width=640&crop=smart&auto=webp&s=5482ac3572d036752b2423c3a826570baf49c7be', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/odCzyg6WvK3Uf8MCzIXFOYi3ErgN2vOs2mnkP1Crfmc.jpg?auto=webp&s=50de8c62f53d0e6e7e7dfbd2fd83e50df4f8ad14', 'width': 800}, 'variants': {}}]} |
I put together a video for anyone interested in running their own LLM’s on the cloud with oobabooga | 1 | [removed] | 2023-07-17T07:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/151uoo6/i_put_together_a_video_for_anyone_interested_in/ | sbalani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 151uoo6 | false | null | t3_151uoo6 | /r/LocalLLaMA/comments/151uoo6/i_put_together_a_video_for_anyone_interested_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BkEiQtm_ZShCHQrow8sLNC_Rva0eImmbf1g6Apv881A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?width=108&crop=smart&auto=webp&s=29cb9bada4b41cc16b8f0206f7fd9844a531baf9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?width=216&crop=smart&auto=webp&s=9a9903425dd1f495eba1cd38d6fa705a5af55e40', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?width=320&crop=smart&auto=webp&s=80bc64ef856098c8d58ac29f35cd1b2d71ee27ad', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/eJb3gIPxXIE_e8vANBOZ9_aKDrTWetn-xBof4Q_SeF4.jpg?auto=webp&s=91aef52085065ae08e858a1f65c044cab30ef306', 'width': 480}, 'variants': {}}]} |