| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Does anyone know if any uncensored multilingual model exists? | 13 | I recently tried OpenBuddy and it was pretty decent as a local model for Japanese. At least it was better than any Japanese-focused model released by Japanese corporations. But when it comes to NSFW things it turns into ChatGPT and refuses to respond.
I also tried RWKV (the Raven and World models), but honestly it's hard to say they are competitive with LLaMA.
When I modified the system prompt and prompt suffix in llama.cpp, I could successfully produce a short NSFW story. But it would be ideal to have a multilingual model that is uncensored from the beginning. | 2023-06-17T03:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/14bfs6a/does_anyone_know_if_any_uncensored_multilingual/ | Beautiful-Ad2181 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bfs6a | false | null | t3_14bfs6a | /r/LocalLLaMA/comments/14bfs6a/does_anyone_know_if_any_uncensored_multilingual/ | false | false | self | 13 | null |
Meta reportedly wants to make LLaMA v2 available for commercial use | 168 | This comes from *The Information*, a subscription-based publication made for tech executives. The [full article is paywalled](https://www.theinformation.com/articles/meta-wants-companies-to-make-money-off-its-open-source-ai-in-challenge-to-google) but here's the intro:
>Meta Platforms CEO Mark Zuckerberg and his deputies want other companies to freely use and profit from new artificial intelligence software Meta is developing, a decision that could have big implications for other AI developers and businesses that are increasingly adopting it.
>
>Meta is working on ways to make **the next version of its open-source large-language model**—technology that can power chatbots like ChatGPT—**available for commercial use**, said a person with direct knowledge of the situation and a person who was briefed about it. The move could prompt a feeding frenzy among AI developers eager for alternatives to proprietary software sold by rivals Google and OpenAI. It would also indirectly benefit Meta’s own AI development.
This lines up with what Mark Zuckerberg mentioned on Lex Fridman's podcast and clarifies his words a lot:
>Lex: You mentioned the next of LLaMA. What can you say about the next version of LLaMA?
>
>Mark: The work that we're doing to advance the infrastructure, it's basically at this point **taking it beyond a research project** into something which is ready to be kind of core infrastructure **not only for our own products but hopefully for a lot of other things out there too**.
>
>Lex: Do you think that LLaMA version two will be open sourced?
>
>Mark: **I think the question is how to do it**. I think we did the research license for v1 and I think the big thing that we're thinking about is basically like what's the right way.
>
>We sort of believe in this more community oriented, more democratic approach to building out the products and technology around this.
OpenAI also wants to open source a model, but there hasn't been any news since the [last article](https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/). It'll be fun to see the race this sparks among the AI companies. Which company will rush first to release their new model? Maybe Google will join in and release an open source model of their own. This will all be big wins for the local community. | 2023-06-17T03:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/14bfpwi/meta_reportedly_wants_to_make_llama_v2_available/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bfpwi | false | null | t3_14bfpwi | /r/LocalLLaMA/comments/14bfpwi/meta_reportedly_wants_to_make_llama_v2_available/ | false | false | self | 168 | null |
Base models are all uncensored right? | 5 | Such as the open llama 3b and 7B base models? | 2023-06-17T02:16:49 | https://www.reddit.com/r/LocalLLaMA/comments/14bere3/base_models_are_all_uncensored_right/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bere3 | false | null | t3_14bere3 | /r/LocalLLaMA/comments/14bere3/base_models_are_all_uncensored_right/ | false | false | self | 5 | null |
EXLlama support added to oobabooga-text-generation-webui | 119 | 2023-06-17T00:13:43 | https://github.com/oobabooga/text-generation-webui/commit/9f40032d32165773337e6a6c60de39d3f3beb77d | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14bc7ug | false | null | t3_14bc7ug | /r/LocalLLaMA/comments/14bc7ug/exllama_support_added_to/ | false | false | 119 | null |
OK, since people here agreed decentralized AI training could be powerful if possible, I'm opening r/BuildThisAI for intentional discussion on achieving this goal. This is directly inspired by a QnA I got to attend w/ Sam Altman (Info post): | 1 | [removed] | 2023-06-16T23:26:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14bb5sm | false | null | t3_14bb5sm | /r/LocalLLaMA/comments/14bb5sm/ok_since_people_here_agreed_decentralized_ai/ | false | false | default | 1 | null |
Any model that can do basic reasoning? | 26 | `Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?`
Is there any open model that can reliably answer this question and also offer the right explanations?
I've been playing a lot with these models, and while they can be extremely good at storytelling or embodying a specific character (at 33B and up), they fail at basic reasoning even at 65B.
I also tried the "Tree-of-Thought" strategy, where 3 experts quarrel with each other: nada. I tried giving hints that are obvious. No success.
What's special about logic, and how do we get it into open-source models? Is it something that emerges at 160B+? | 2023-06-16T23:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/14bb08o/any_model_that_can_do_basic_reasoning/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bb08o | false | null | t3_14bb08o | /r/LocalLLaMA/comments/14bb08o/any_model_that_can_do_basic_reasoning/ | false | false | self | 26 | null |
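For reference when testing models on the riddle above: it has a determinate answer. Every brother sees the same set of sisters, so "each brother has 2 sisters" means the family has exactly 2 girls, and Sally therefore has 1 sister. A minimal sketch of that counting argument:

```python
# Sally's riddle: she has 3 brothers, and each brother has 2 sisters.
# All the brothers share one set of sisters, so the family has exactly
# 2 girls in total: Sally plus one other girl.
brothers = 3                          # not actually needed for the count
girls_in_family = 2                   # the 2 sisters any brother sees
sallys_sisters = girls_in_family - 1  # exclude Sally herself

print(sallys_sisters)  # -> 1
```

A model that answers "2" is usually counting a brother's sisters without excluding Sally herself.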
What is the best VPS / cloud server to run LLMs on? | 13 | For LLaMA 65B at 8-bit you need 74GB of RAM (from the wiki). For a few LLM runs, it's probably cheaper to rent a cloud server than to buy a huge machine to run it yourself.
What cloud providers are there that give this much RAM? Are there any free ones?
If you are going to buy your own machine, what are your options? Even high-end Nvidia GPUs don't have this much memory, apart from the A100s. | 2023-06-16T22:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/14bactj/what_is_the_best_vps_cloud_server_to_run_llms_on/ | [deleted] | self.LocalLLaMA | 2023-06-16T23:02:15 | 0 | {} | 14bactj | false | null | t3_14bactj | /r/LocalLLaMA/comments/14bactj/what_is_the_best_vps_cloud_server_to_run_llms_on/ | false | false | self | 13 | null |
[Feedback Request] Monitor LLM Hallucinations - with ChatGPT! | 4 | [removed] | 2023-06-16T21:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/14b8n9t/feedback_request_monitor_llm_hallucinations_with/ | shurturgal19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b8n9t | false | null | t3_14b8n9t | /r/LocalLLaMA/comments/14b8n9t/feedback_request_monitor_llm_hallucinations_with/ | false | false | default | 4 | null |
w7900 for llama | 3 | Hello everybody, AMD recently released the W7900, a graphics card with 48GB of memory. The few tests that are available suggest it is competitive on price/performance with at least the older Nvidia A6000.
That said, no tests with LLMs were conducted (which does not surprise me, tbh). Could this card be of interest to the semi-professional or wealthy local llama user? Or is it infeasible due to bad performance and/or bad software support? If it's the software, would there be, in theory, a possibility to rectify this situation? | 2023-06-16T20:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/14b6z1e/w7900_for_llama/ | RabbitHole32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b6z1e | false | null | t3_14b6z1e | /r/LocalLLaMA/comments/14b6z1e/w7900_for_llama/ | false | false | self | 3 | null |
What Can I Run | 1 | [removed] | 2023-06-16T20:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/14b6wz7/what_can_i_run/ | Creative-Match7817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b6wz7 | false | null | t3_14b6wz7 | /r/LocalLLaMA/comments/14b6wz7/what_can_i_run/ | false | false | default | 1 | null |
I created a site to share Opensource LLMs in chatbot and API form | 5 | [removed] | 2023-06-16T19:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/14b68x6/i_created_a_site_to_share_opensource_llms_in/ | ortegaalfredo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b68x6 | false | null | t3_14b68x6 | /r/LocalLLaMA/comments/14b68x6/i_created_a_site_to_share_opensource_llms_in/ | false | false | default | 5 | null |
Falcon 40B instruct tuned on Open Assistant data - model weights Open Source | 6 | 2023-06-16T19:46:09 | https://gpt-gm.h2o.ai/ | ricklamers | gpt-gm.h2o.ai | 1970-01-01T00:00:00 | 0 | {} | 14b64et | false | null | t3_14b64et | /r/LocalLLaMA/comments/14b64et/falcon_40b_instruct_tuned_on_open_assistant_data/ | false | false | default | 6 | null | |
Which Linux distro is best for AI stuff? | 1 | [removed] | 2023-06-16T19:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/14b620d/which_linux_distro_is_best_for_ai_stuff/ | ThrowawayProgress99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b620d | false | null | t3_14b620d | /r/LocalLLaMA/comments/14b620d/which_linux_distro_is_best_for_ai_stuff/ | false | false | default | 1 | null |
Robin V2 model reaches top of LLM leaderboard | 84 | 2023-06-16T17:38:39 | https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1 | yahma | medium.com | 1970-01-01T00:00:00 | 0 | {} | 14b2zdu | false | null | t3_14b2zdu | /r/LocalLLaMA/comments/14b2zdu/robin_v2_model_reaches_top_of_llm_leaderboard/ | false | false | 84 | null |
🦜🔗 Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions.. Let's prompt the agent to: i. List 5 supernova facts, ii. Find the latest explosion, iii. Calculate years since the last explosion | 11 | 2023-06-16T17:28:12 | https://v.redd.it/w12wxtjfze6b1 | tatyanaaaaaa | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14b2q6n | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/w12wxtjfze6b1/DASHPlaylist.mpd?a=1695733629%2CYWI3N2RmYzVlZDVlZTgwZjMyMmZjYWZhNTA0Mjc2N2E3MmVlOWE2NzgwMjAxYTM0ZDM4NzkyOGVhY2E3ZmNlMA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/w12wxtjfze6b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/w12wxtjfze6b1/HLSPlaylist.m3u8?a=1695733629%2CNjNmYjkxNmNkNzdkOTVlNmNmYmRkNjYwMzhmOGQ1Njk1N2Q2NWFlM2YyZjViNTc2OGU4OTcyNmY0YjhkYmM0NA%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/w12wxtjfze6b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_14b2q6n | /r/LocalLLaMA/comments/14b2q6n/building_multi_task_ai_agent_with_langchain_and/ | false | false | default | 11 | null | |
WizardCoder-15B-1.0 vs ChatGPT coding showdown: 4 webapps * 3 frameworks | 56 | Hello /r/LocalLLaMa!
With yesterday's release of WizardCoder-15B-1.0 (see [official thread](https://www.reddit.com/r/LocalLLaMA/comments/14ajglx/official_wizardcoder15bv10_released_can_achieve/) and [less official thread](https://www.reddit.com/r/LocalLLaMA/comments/149ir49/new_model_just_dropped_wizardcoder15bv10_model/)) we finally have an open model that passes my [can-ai-code benchmark](https://huggingface.co/spaces/mike-ravkine/can-ai-code-results)
With the basics out of the way, we are finally ready to do some real LLM coding!
I have created an [llm-webapps](https://github.com/the-crypt-keeper/llm-webapps) repository with the boilerplate necessary to:
* define requirements for simple web-apps
* format those requirements into language, framework and model-specific prompts
* run the prompts through LLM
* visualize the results
OK enough with the boring stuff, [CLICK HERE TO PLAY WITH THE APPS](https://huggingface.co/spaces/mike-ravkine/llm-webapps-results)
On mobile the sidebar is hidden by default; click the chevron on the top left to select which model, framework and project you want to try.
Lots of interesting stuff in here, drop your thoughts and feedback in the comments. If you're interested in repeating this experiment or trying your own experiments or otherwise hacking on this hit up the [llm-webapps GitHub](https://github.com/the-crypt-keeper/llm-webapps). | 2023-06-16T16:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/14b1tsw/wizardcoder15b10_vs_chatgpt_coding_showdown_4/ | kryptkpr | self.LocalLLaMA | 2023-06-16T17:31:50 | 0 | {} | 14b1tsw | false | null | t3_14b1tsw | /r/LocalLLaMA/comments/14b1tsw/wizardcoder15b10_vs_chatgpt_coding_showdown_4/ | false | false | self | 56 | null |
RWKV Pile+ seems to be training on far more tokens than any LLM ever has | 93 | Has anyone else noticed [this](https://huggingface.co/BlinkDL/rwkv-4-pileplus)? Seems to be in-progress
Apparently it's being *finetuned* on 1.7T tokens (RedPajama + subset of The Pile v2), *after* already training on the ~800GB of The Pile. As far as public knowledge goes, that's *far* more than any LLM has ever been trained on. I'm excited to see how it will perform | 2023-06-16T16:21:22 | https://www.reddit.com/r/LocalLLaMA/comments/14b147o/rwkv_pile_seems_to_be_training_on_far_more_tokens/ | pedantic_pineapple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b147o | false | null | t3_14b147o | /r/LocalLLaMA/comments/14b147o/rwkv_pile_seems_to_be_training_on_far_more_tokens/ | false | false | self | 93 | null |
Question on fine tuning with LORA | 8 | I tried to fine-tune a 7B model using the oobabooga text-generation web UI. It went well.
When I loaded it on my machine, I was using the quantised version of the model. llama.cpp gave me a warning that using LoRAs with quantised models may produce bad quality.
Does anyone know if this is significant? Should I use QLoRA to train on a quantised model directly instead? | 2023-06-16T16:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/14b0qkt/question_on_fine_tuning_with_lora/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b0qkt | false | null | t3_14b0qkt | /r/LocalLLaMA/comments/14b0qkt/question_on_fine_tuning_with_lora/ | false | false | self | 8 | null |
Why is there not as much cutting edge research into embeddings? | 35 | I know LLMs are flashier, but embeddings are in the same vein, and arguably have more established real world usages right now. Why is there not nearly as many embeddings models and research coming out? The SOTA is still `mpnet-base-v2` (correct me if I'm wrong) which is almost 2 years old now. | 2023-06-16T15:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/14b0dta/why_is_there_not_as_much_cutting_edge_research/ | srvhfvakc | self.LocalLLaMA | 2023-06-16T15:56:19 | 0 | {} | 14b0dta | false | null | t3_14b0dta | /r/LocalLLaMA/comments/14b0dta/why_is_there_not_as_much_cutting_edge_research/ | false | false | self | 35 | null |
Issue Loading 13B Model in Ooba Booga on RTX 4070 with 12GB VRAM | 5 | Hello everyone,
I am currently encountering an issue with the Ooba Booga software on my Windows 10 PC. I'm running the program on a machine with an RTX 4070 GPU, which has 12GB of VRAM.
Whenever I'm trying to load any 13B LLaMA models, I am consistently receiving an error that seems to be related to memory allocation. The error message is:
`RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 35389440 bytes.`
This is puzzling because, from what I understand, a 13B model should require less than 10GB of VRAM, and my GPU should be more than capable of handling this.
I am able to load 7B models without any issue.
I've ensured that there are no other significant processes running that could be using up VRAM, and I've got the latest Nvidia drivers. My Ooba Booga setup is also up to date.
Has anyone else encountered this problem or does anyone have any suggestions on what might be going wrong and how to fix it?
Any help would be greatly appreciated.
Thank you! | 2023-06-16T15:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/14azzij/issue_loading_13b_model_in_ooba_booga_on_rtx_4070/ | alexthai7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14azzij | false | null | t3_14azzij | /r/LocalLLaMA/comments/14azzij/issue_loading_13b_model_in_ooba_booga_on_rtx_4070/ | false | false | self | 5 | null |
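Two things stand out in the error quoted above. First, it comes from `DefaultCPUAllocator`, so the allocation that failed is in system RAM, not VRAM — the weights appear to be loading on the CPU rather than the GPU. Second, a rough weight-only estimate shows why precision matters so much for a 13B model on a 12GB card. The sketch below is back-of-the-envelope only: it ignores activations, KV cache, and framework overhead, so real usage is higher.

```python
# Back-of-the-envelope memory needed just for the weights of a
# 13B-parameter model at different precisions.  Activations, KV cache,
# and framework overhead are ignored, so real usage is higher.
params = 13e9

bytes_per_param = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

weights_gib = {fmt: params * b / 2**30 for fmt, b in bytes_per_param.items()}
for fmt, gib in weights_gib.items():
    print(f"{fmt}: {gib:.1f} GiB")
```

By this estimate only a ~4-bit quantized 13B (about 6 GiB of weights) fits comfortably in 12GB of VRAM, so one possible fix is loading a 4-bit/GPTQ build of the model rather than the full-precision checkpoint.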
PDF form parser with non-text inputs | 1 | [removed] | 2023-06-16T15:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/14azuck/pdf_form_parser_with_nontext_inputs/ | harold__hadrada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14azuck | false | null | t3_14azuck | /r/LocalLLaMA/comments/14azuck/pdf_form_parser_with_nontext_inputs/ | false | false | default | 1 | null |
Could a model not be trained by a decentralized network? Like Seti @ home or kinda-sorta like bitcoin. Petals accomplishes this somewhat, but if raw computer power is the only barrier to open-source I'd be happy to try organizing decentalized computing efforts | 55 | Title. If enough people could be reached with a model that is trainable with decentralized computing, in theory, could this not create an (unlikely, but possible) avenue for an open-source model as powerful as Meta's or OpenAI's?
Sorry if this might be a naive question | 2023-06-16T14:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/14az00q/could_a_model_not_be_trained_by_a_decentralized/ | doublemint2202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14az00q | false | null | t3_14az00q | /r/LocalLLaMA/comments/14az00q/could_a_model_not_be_trained_by_a_decentralized/ | false | false | self | 55 | null |
NER but with LLM | 1 | [removed] | 2023-06-16T14:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/14ay6bq/ner_but_with_llm/ | JollyAnteater5339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ay6bq | false | null | t3_14ay6bq | /r/LocalLLaMA/comments/14ay6bq/ner_but_with_llm/ | false | false | default | 1 | null |
best Llama model for Mac M1? | 9 | I have a Mac mini M1 (256GB storage / 8GB RAM).
What is the best instruct llama model I can run smoothly on this machine without burning it? | 2023-06-16T13:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/14axesy/best_llama_model_for_mac_m1/ | buxrmp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14axesy | false | null | t3_14axesy | /r/LocalLLaMA/comments/14axesy/best_llama_model_for_mac_m1/ | false | false | self | 9 | null |
which current LLM is most advanced/modern? which one will it be in next few months? | 15 | Not talking about size.
It can be a small model, as long as it uses different techniques. | 2023-06-16T12:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/14aw0on/which_current_llm_is_most_advancedmodern_which/ | Sofronyami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aw0on | false | null | t3_14aw0on | /r/LocalLLaMA/comments/14aw0on/which_current_llm_is_most_advancedmodern_which/ | false | false | self | 15 | null |
help with Oobaboga training please | 1 | [removed] | 2023-06-16T12:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/14av8vf/help_with_oobaboga_training_please/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14av8vf | false | null | t3_14av8vf | /r/LocalLLaMA/comments/14av8vf/help_with_oobaboga_training_please/ | false | false | default | 1 | null |
Replace pinecone. | 28 | I am impressed with the service https://github.com/Mintplex-Labs/anything-llm, but I don't want to use Pinecone, as it makes me dependent on someone else's service. Can I replace Pinecone, and if so, how do I do it? Thanks! | 2023-06-16T12:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/14av30w/replace_pinecone/ | One_Creator_One | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14av30w | false | null | t3_14av30w | /r/LocalLLaMA/comments/14av30w/replace_pinecone/ | false | false | self | 28 | null |
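Pinecone can be swapped for a locally hosted vector store: Chroma, Qdrant, and Weaviate are open-source options that run on your own machine (Chroma can even run in-process). Under the hood a vector store is just embedding storage plus nearest-neighbour search, which a dependency-free sketch can illustrate — the document IDs and 3-dimensional embeddings here are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "index": doc id -> embedding.  Real stores use approximate
# nearest-neighbour indexes so this lookup stays fast at scale.
store = {
    "doc-llama": [0.9, 0.1, 0.0],
    "doc-falcon": [0.1, 0.9, 0.0],
    "doc-rwkv": [0.0, 0.1, 0.9],
}

def query(embedding, k=1):
    # Return the k document ids most similar to the query embedding.
    ranked = sorted(store, key=lambda d: cosine_similarity(store[d], embedding),
                    reverse=True)
    return ranked[:k]

print(query([0.8, 0.2, 0.0]))  # -> ['doc-llama']
```

Projects like anything-llm typically only need this store/query interface, so pointing them at a local backend instead of Pinecone is mostly a configuration change.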
I was able to load WizardLM-30b onto my RTX 3090 | 22 | I am impressed: I could load the model into VRAM, though it took 800s to load. But it's really quick at inference. I gave it a few prompts, such as summarizing a document and writing me emails. Works well. But I wonder, are there any better models? I saw the HF leaderboard, but it seems like lots of models are missing.
Any models I should try out?
Was thinking of loading up: TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ
But I have seen some 65b models with 2 and 3 bit quantization. Anyone have success with those?
EDIT: found out why it loads so slowly. In WSL2, I/O speed between a native ext4 disk and a Windows disk is poor. You have to move the repo to the native ext4 disk in WSL2 and it's significantly faster.
Source: https://github.com/microsoft/WSL/issues/4197
AutoGPTQ loaded the 30B model in 37s. Thank you all. | 2023-06-16T11:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/14auy1z/i_was_able_to_load_wizardlm30b_onto_my_rtx_3090/ | klop2031 | self.LocalLLaMA | 2023-06-18T14:52:25 | 0 | {} | 14auy1z | false | null | t3_14auy1z | /r/LocalLLaMA/comments/14auy1z/i_was_able_to_load_wizardlm30b_onto_my_rtx_3090/ | false | false | self | 22 | null |
How are all these models different from each other | 26 | Like, what does Wizard do that Vicuna couldn't, etc.?
Also with so many different models can we have a section in the wiki explaining what specific thing each is good at | 2023-06-16T11:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/14atx3q/how_are_all_these_models_different_from_each_other/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14atx3q | false | null | t3_14atx3q | /r/LocalLLaMA/comments/14atx3q/how_are_all_these_models_different_from_each_other/ | false | false | self | 26 | null |
airoboros gpt4 v1.2 | 81 | Thanks for testing the last round and giving feedback, another one is ready to test:
* [https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2)
* [https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2)
* [https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
* [https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2)
Dataset here, feel free to use as desired but it was generated by GPT-4 so take a peek at their ToS: [https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2)
It should be better at coding. I added a few hundred coding examples with " PLAINFORMAT" at the end that skips the backticks/explanation/etc. to just print the code. It doesn't always work but often does.
I also added several thousand Orca style reasoning prompts, including some with the idea of parallelization (e.g. "It takes 15 minutes to bake a dozen cookies. How long will it take to bake 24 cookies?")
Also added \~1000 role-playing examples.
Still has the custom contextual question answering stuff with really low hallucinations and referencing.
The great and powerful u/TheBloke has already quantized the 33b and 65b but I don't think he's done the 13b or 7b yet ;) /hint /please
Feedback, please, for the next round of improvements!
Also, updated alpaca\_eval (pr pending) and llm-jeopardy scores:
https://preview.redd.it/eqhf3uuvxc6b1.png?width=810&format=png&auto=webp&s=38e2e635d5b5bb6c622f1b34e37f879d33c1d5a2
https://preview.redd.it/ceui8vuvxc6b1.png?width=1488&format=png&auto=webp&s=cd09c059e8acb8b2f9a5e6efc9e0e5a698651d0f | 2023-06-16T10:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/14atdex/airoboros_gpt4_v12/ | JonDurbin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14atdex | false | null | t3_14atdex | /r/LocalLLaMA/comments/14atdex/airoboros_gpt4_v12/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'hjs5sZRSjODTAP9HiMGqQKQDjnjJ4XWJd-J-sPTJc_U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=108&crop=smart&auto=webp&s=cdcd4cb494639aeab3bede204f6a48d1d99a67d1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=216&crop=smart&auto=webp&s=f0a453886bceb606d2b28d7bcd6cf833af6ef1df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=320&crop=smart&auto=webp&s=4f9584739fa016910fb50396187413c869674c04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=640&crop=smart&auto=webp&s=c1c2c303b47bc0055d7aa31067b4567257fd7338', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=960&crop=smart&auto=webp&s=4d3ced0a8feba7b9a08501d867830c622124c889', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=1080&crop=smart&auto=webp&s=295b1138217a2404f0ce55613153221e06bc40d9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?auto=webp&s=3240bcca7bbe6249ca30294614b4fba6354976bf', 'width': 1200}, 'variants': {}}]} | |
The best model for content generation? | 2 | Hey, I wanted to know which model would you guys suggest is best for content generation. | 2023-06-16T10:07:17 | https://www.reddit.com/r/LocalLLaMA/comments/14asvzd/the_best_model_for_content_generation/ | TheArchivist314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14asvzd | false | null | t3_14asvzd | /r/LocalLLaMA/comments/14asvzd/the_best_model_for_content_generation/ | false | false | self | 2 | null |
Compiling Falcon.cpp on windows | 5 | Has anyone managed to compile this https://github.com/jploski/ggml/tree/falcon40b to an exe on Windows? No matter what I try, I cannot get it to compile to main.exe.
Error output looks like this (much larger, but similar build issues) using the CMake GUI and Chocolatey's make package for Windows.
2>E:\ggml-falcon40b\examples\falcon\main.cpp(745,28): warning C4267: '+=': conversion from 'size_t' to 'int', possible loss of data
2>E:\ggml-falcon40b\examples\falcon\main.cpp(776,30): warning C4267: '+=': conversion from 'size_t' to 'int', possible loss of data
2>Done building project "falcon.vcxproj" -- FAILED.
After I compile it I’m hoping to use https://huggingface.co/TheBloke/falcon-40b-instruct-GGML | 2023-06-16T09:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/14aslu6/compiling_falconcpp_on_windows/ | ihaag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aslu6 | false | null | t3_14aslu6 | /r/LocalLLaMA/comments/14aslu6/compiling_falconcpp_on_windows/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'LhXGBGxcvevyJYiKZhH9d4c-Ry850cspL4r0h_gl7XQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=108&crop=smart&auto=webp&s=bc39ed4ae8e213f34a6c96ce101221a0368d3c68', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=216&crop=smart&auto=webp&s=8ee25f2c35dbfa8a5f7ab958fef688e8b3470fa9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=320&crop=smart&auto=webp&s=d932d27548cf1070d2ef68cab62d7c28dcacfaae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=640&crop=smart&auto=webp&s=b322052b1cca578ddf9b192db28109e2ce6a2118', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=960&crop=smart&auto=webp&s=b800378039ac887915e7df73c1c8fd4cc80c42e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=1080&crop=smart&auto=webp&s=3bc494f349b6c68db0a758e9da907ce2551a8b34', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?auto=webp&s=33cffd77ca62a01c0ce9f3259efbbfda7f46729c', 'width': 1200}, 'variants': {}}]} |
What do you think is the best llm to use as a teacher | 2 | [removed] | 2023-06-16T09:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/14asgaa/what_do_you_think_is_the_best_llm_to_use_as_a/ | Redbeardybeard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14asgaa | false | null | t3_14asgaa | /r/LocalLLaMA/comments/14asgaa/what_do_you_think_is_the_best_llm_to_use_as_a/ | false | false | default | 2 | null |
Help me with this private gpt Error | 1 | [removed] | 2023-06-16T09:29:48 | Curious-Ninja150627 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14as7zd | false | null | t3_14as7zd | /r/LocalLLaMA/comments/14as7zd/help_me_with_this_private_gpt_error/ | false | false | default | 1 | null | |
Can anyone help me with this problem in private gpt | 1 | [removed] | 2023-06-16T09:26:36 | Curious-Ninja150627 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14as62r | false | null | t3_14as62r | /r/LocalLLaMA/comments/14as62r/can_anyone_help_me_with_this_problem_in_private/ | false | false | default | 1 | null | |
Can we evade LLM detection by using dynamic temperature across text? Most detectors pass your text through their LLM and check if the temperature is constant, and maybe look for other sampling artifacts. So making temperature vary, we make it harder to detect our text. | 25 | Can we evade LLM detection by using dynamic temperature across text? Most detectors pass your text through their LLM and check if the temperature is constant, and maybe look for other sampling artifacts. So making temperature vary, we make it harder to detect our text. | 2023-06-16T09:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/14as3x3/can_we_evade_llm_detection_by_using_dynamic/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14as3x3 | false | null | t3_14as3x3 | /r/LocalLLaMA/comments/14as3x3/can_we_evade_llm_detection_by_using_dynamic/ | false | false | self | 25 | null |
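For what it's worth, the idea is easy to prototype at the sampling level. A toy, stdlib-only sketch over a fake 10-token vocabulary — the sine schedule is just an illustrative assumption, and this isn't hooked into any real model or detector:

```python
import math
import random

random.seed(0)

def sample_token(logits, temperature):
    """Softmax-sample one token index from logits at the given temperature."""
    t = max(temperature, 1e-6)
    m = max(logits)  # subtract max for numerical stability
    weights = [math.exp((x - m) / t) for x in logits]
    return random.choices(range(len(logits)), weights=weights)[0]

def dynamic_temperature(step, base=0.7, amplitude=0.3):
    """Smoothly vary temperature across the sequence instead of holding it fixed."""
    return base + amplitude * math.sin(step / 5.0)

# Toy decode loop: same logits each step, but a different temperature per token.
logits = [random.gauss(0.0, 1.0) for _ in range(10)]
tokens = [sample_token(logits, dynamic_temperature(t)) for t in range(20)]
print(tokens)
```

In a real decode loop you'd apply `dynamic_temperature(step)` to the model's logits at each step; whether this actually fools any given detector is an open question.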
LLM user support | 0 | [removed] | 2023-06-16T08:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/14arivh/llm_user_support/ | One_Creator_One | self.LocalLLaMA | 2023-06-16T11:02:22 | 0 | {} | 14arivh | false | null | t3_14arivh | /r/LocalLLaMA/comments/14arivh/llm_user_support/ | false | false | default | 0 | null |
Best uncensored llm to run locally on free google colab? | 1 | [removed] | 2023-06-16T08:26:43 | https://www.reddit.com/r/LocalLLaMA/comments/14ar639/best_uncensored_llm_to_run_locally_on_free_google/ | Murky-Cheek-7554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ar639 | false | null | t3_14ar639 | /r/LocalLLaMA/comments/14ar639/best_uncensored_llm_to_run_locally_on_free_google/ | false | false | default | 1 | null |
Is there a free local analogue of the text translator DeepL in the form of an LLM? | 16 |
1. Translates in the context of a sentence or paragraph, not literally word by word.
2. Can download language databases for only the required languages.
3. integrated with WebUI | 2023-06-16T08:09:01 | https://www.reddit.com/r/LocalLLaMA/comments/14aqvrw/is_there_a_free_local_analogue_of_the_text/ | nihnuhname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aqvrw | false | null | t3_14aqvrw | /r/LocalLLaMA/comments/14aqvrw/is_there_a_free_local_analogue_of_the_text/ | false | false | self | 16 | null |
Prevent LLM from answering out of context questions | 3 | I am currently playing around with fastchat t5. I have a set of documents that is fed to the model based on the query through langchain. My current issue is that whenever there's an out-of-context question, fastchat uses its pretrained knowledge to answer it, whereas the expected output should be 'I don't know' or 'let's keep it relevant to <domain>'.
I have tried several different prompts but none of them work.
Any suggestions on how to solve this? | 2023-06-16T06:58:55 | https://www.reddit.com/r/LocalLLaMA/comments/14apo7n/prevent_llm_from_answering_out_of_context/ | perseus_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14apo7n | false | null | t3_14apo7n | /r/LocalLLaMA/comments/14apo7n/prevent_llm_from_answering_out_of_context/ | false | false | self | 3 | null |
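One thing that sometimes helps with small models like fastchat-t5 is making the refusal the path of least resistance: state the constraint first, give the exact fallback string, and only then show the context and question. A minimal sketch — the wording and the `domain` string are just examples to adapt:

```python
def build_prompt(context: str, question: str, domain: str = "our product docs") -> str:
    # Giving an explicit, exact "way out" string makes refusing an easy,
    # well-defined completion for off-topic questions.
    return (
        "Answer the question using ONLY the context below.\n"
        "If the answer is not contained in the context, reply exactly:\n"
        f'"I don\'t know, let\'s keep it relevant to {domain}."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

print(build_prompt("FastChat-T5 is a 3B chat model.", "Who won the 2010 World Cup?"))
```

If the model still answers from pretrained knowledge, a pre-filter on the retriever side — refusing before generation whenever the top document's similarity score falls below a threshold — is often more reliable than prompting alone.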
Robin V2 Launches: Achieves Unparalleled Performance on OpenLLM! | 1 | 2023-06-16T06:20:36 | https://www.reddit.com/r/machinelearningnews/comments/14am06w/robin_v2_launches_achieves_unparalleled/ | saintshing | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14ap05c | false | null | t3_14ap05c | /r/LocalLLaMA/comments/14ap05c/robin_v2_launches_achieves_unparalleled/ | false | false | default | 1 | null | |
Could a llama lora be merged into OpenLLama? | 1 | If they are the same architecture & size, it should be possible, right? Because lora trained on base llama can be merged into finetunes. | 2023-06-16T05:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/14aofh5/could_a_llama_lora_be_merged_into_openllama/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aofh5 | false | null | t3_14aofh5 | /r/LocalLLaMA/comments/14aofh5/could_a_llama_lora_be_merged_into_openllama/ | false | false | self | 1 | null |
Do you wanna have a free Al agent? | 1 | [removed] | 2023-06-16T04:27:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14amz6s | false | null | t3_14amz6s | /r/LocalLLaMA/comments/14amz6s/do_you_wanna_have_a_free_al_agent/ | false | false | default | 1 | null | ||
How is ExLlama so good? Can it be used with a more feature rich UI? | 46 | ExLlama uses way less memory and is much faster than AutoGPTQ or GPTQ-for-Llama, running on a 3090 at least. Does anyone know how to get it to work with Tavern or Kobold or Oobabooga? | 2023-06-16T02:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/14ak4yg/how_is_exllama_so_good_can_it_be_used_with_a_more/ | Excessive_Etcetra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ak4yg | false | null | t3_14ak4yg | /r/LocalLLaMA/comments/14ak4yg/how_is_exllama_so_good_can_it_be_used_with_a_more/ | false | false | self | 46 | null |
Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval! | 175 | * Today, the ***WizardLM Team*** has released their **Official** **WizardCoder-15B-V1.0** model trained with 78**k** evolved code instructions.
* *WizardLM Team* will ***open-source all the code, data, models, and algorithms recently!***
* Paper: [https://arxiv.org/abs/2306.08568](https://arxiv.org/abs/2306.08568)
* The project repo: [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder)
* The official Twitter: [WizardLM\_AI](https://twitter.com/WizardLM_AI)
* HF Model: [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
* Four online demo links:
1. [https://609897bc57d26711.gradio.app/](https://609897bc57d26711.gradio.app/)
2. [https://fb726b12ab2e2113.gradio.app/](https://fb726b12ab2e2113.gradio.app/)
3. [https://b63d7cb102d82cd0.gradio.app/](https://b63d7cb102d82cd0.gradio.app/)
4. [https://f1c647bd928b6181.gradio.app/](https://f1c647bd928b6181.gradio.app/)
(We will update the demo links in our [github](https://github.com/nlpxucan/WizardLM/tree/main).)
**Comparing WizardCoder with the Closed-Source Models.**
🔥 The following figure shows that **our WizardCoder attains the third position** in the [HumanEval benchmark](https://www.reddit.com/r/LocalLLaMA/comments/1469343/hi_folks_back_with_an_update_to_the_humaneval/), surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models.
https://preview.redd.it/2s1pzshsr96b1.png?width=2315&format=png&auto=webp&s=b8ff5aa284a9f1a86be57fb92c66cff98993ea41
❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem utilizing a **single attempt**, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and tests with the same [code](https://github.com/evalplus/evalplus).
​
**Comparing WizardCoder with the Open-Source Models.**
The following table clearly demonstrates that our WizardCoder exhibits a substantial performance advantage over all the open-source models.
**❗If you are confused with the different scores of our model (57.3 and 59.8), please check the Notes.**
https://preview.redd.it/x502zegur96b1.png?width=622&format=png&auto=webp&s=74f8d79557b6de8a4976c17c1f79f9f8a9cb376b
❗Note: The reproduced result of StarCoder on MBPP.
❗Note: Though PaLM is not an open-source model, we still include its results here.
❗Note: The above table conducts a comprehensive comparison of our WizardCoder with other models on the HumanEval and MBPP benchmarks. We adhere to the approach outlined in previous studies by generating **20 samples** for each problem to estimate the pass@1 score and evaluate it with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (maybe these are the early version of GPT4&3.5). | 2023-06-16T01:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/14ajglx/official_wizardcoder15bv10_released_can_achieve/ | cylaw01 | self.LocalLLaMA | 2023-06-17T13:01:14 | 0 | {} | 14ajglx | false | null | t3_14ajglx | /r/LocalLLaMA/comments/14ajglx/official_wizardcoder15bv10_released_can_achieve/ | false | false | 175 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 
'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | |
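For anyone wondering how the "20 samples per problem" note turns into a single pass@1 number: these benchmarks use the unbiased pass@k estimator from the Codex/HumanEval paper. A small pure-Python sketch of that formula (the paper's reference code uses numpy, but the math is the same):

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples generated per problem,
    c of them correct, k attempts allowed."""
    if n - c < k:
        return 1.0  # not enough failures to fill a k-sample draw
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i
    return 1.0 - prob_all_fail

# e.g. 20 samples generated for one problem, 12 of which pass the tests:
print(pass_at_k(20, 12, 1))  # expected pass rate with a single attempt
```

The per-problem values are then averaged over the benchmark's problems to get the reported score.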
More intelligent Pdf parsers | 15 | Apologies if this is slightly off topic. I've been experimenting with Llama Index along with local models for a context-based question answering problem. One key issue I've been facing is PDF parsing, especially tabular information formats.
Does anyone know of current state of the art (reproducible code, not just papers) for semi intelligent Pdf parsing solutions which might even be slightly better than regular vanilla pypdf. | 2023-06-16T01:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/14aj8pe/more_intelligent_pdf_parsers/ | darthstargazer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aj8pe | false | null | t3_14aj8pe | /r/LocalLLaMA/comments/14aj8pe/more_intelligent_pdf_parsers/ | false | false | self | 15 | null |
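Not state of the art, but before reaching for heavier tools (pdfplumber, camelot, unstructured, etc.) it's worth knowing how far a crude baseline gets you: plain pypdf usually flattens a table into whitespace-aligned text, and splitting each row on runs of two or more spaces often recovers the columns. A sketch with made-up extracted lines:

```python
import re

def parse_whitespace_table(lines):
    """Split each non-empty line on runs of 2+ spaces to recover columns
    from text that a plain extractor flattened out of a PDF table."""
    return [re.split(r"\s{2,}", line.strip()) for line in lines if line.strip()]

raw = [  # what flattened extraction of a small table might look like
    "Model          HumanEval   MBPP",
    "WizardCoder    57.3        51.8",
    "StarCoder      33.6        43.6",
]
for row in parse_whitespace_table(raw):
    print(row)
```

It falls apart on merged cells and multi-line cells, which is where the dedicated table-extraction libraries earn their keep.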
Airoboros 65b GGML is really good! | 112 | [https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML](https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML)
Haven't seen people talk about this release from thebloke, but it did a solid job with the joke that has stumped pretty much every other open source LLM since the beginning of my testing:
USER:
A man decides that he wants to rob a bank. He paints a banana black, then puts on a mask, and goes inside and points the banana at the bank teller. "I've got a gun, stick'em up!" He shouts at the clerk. The clerk looks at him, "I've been keeping my eyes peeled for a character like you." "Put your hands in the air!" Shouts the robber. "I thought about splitting when I saw you." Replied the clerk. "Put your hands in the air or I'll fire." The robber replies. "It'd be more dangerous if you just threw your gun on the floor, am I right?" Said the clerk. Why is this joke funny? Please explain your reasoning in detail.
ASSISTANT:
This joke is funny because it uses puns and wordplay to create a humorous situation where a man tries to rob a bank with a banana painted black, pretending it's a gun. The clerk recognizes the banana as fake but plays along, using puns like "I thought about splitting when I saw you" (referring to the banana being split in half) and "It'd be more dangerous if you just threw your gun on the floor" (referring to a slippery peel). The back-and-forth between the robber and clerk creates an absurd situation that makes us laugh at their misunderstanding of each other.
I've tried a bunch of other prompts on it, including creative writing prompts, and it has been consistently top tier. What are your experiences with it? Interested to hear. | 2023-06-16T00:48:33 | https://www.reddit.com/r/LocalLLaMA/comments/14ainpc/airoboros_65b_ggml_is_really_good/ | TheDreamSymphonic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ainpc | false | null | t3_14ainpc | /r/LocalLLaMA/comments/14ainpc/airoboros_65b_ggml_is_really_good/ | false | false | self | 112 | {'enabled': False, 'images': [{'id': 'CtpzRLMS6GlQQnv1zcIL_IW4nH6Hb6XpVXFHAgeld0k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=108&crop=smart&auto=webp&s=1a3a7ec11661e57b61ec3bdd14353a9b4fe71fda', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=216&crop=smart&auto=webp&s=a079718a0044e4faa1407e13ffb9bd04fadb6785', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=320&crop=smart&auto=webp&s=8306ebc4ca8e96eb2a82e2cfb82d854f66bc3933', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=640&crop=smart&auto=webp&s=783bd92935dadcf4cd944eb87b59f0c6bc3ccd18', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=960&crop=smart&auto=webp&s=8f9d6e5d60914c8722b0a7efcc88f19ce53ba166', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=1080&crop=smart&auto=webp&s=e4a0f8228add370a1c66042d8f7fa579f098d609', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?auto=webp&s=04fd1b6d4960434cb9da29732f01a69ed72bb350', 'width': 1200}, 'variants': {}}]} |
Langchain and self hosted LLaMA hosted API | 3 | What are the current best "no reinventing the wheel" approaches to have Langchain use an LLM through a locally hosted REST API, the likes of Oobabooga or [hyperonym/basaran](https://github.com/hyperonym/basaran) with streaming support for 4-bit GPTQ? | 2023-06-15T21:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/14aeo8f/langchain_and_self_hosted_llama_hosted_api/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aeo8f | false | null | t3_14aeo8f | /r/LocalLLaMA/comments/14aeo8f/langchain_and_self_hosted_llama_hosted_api/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'gFHYjON9kBqne0o-foYmyiaBdqO1BkPF2DRj0ogoTxg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=108&crop=smart&auto=webp&s=fea07ae3ab0bb62bc75b34910774a4c0619feecf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=216&crop=smart&auto=webp&s=a82e2c2973b8695c5547a135e4f2fef3797e70b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=320&crop=smart&auto=webp&s=3f76e017af382f4811a3b3b081310bc8beb34161', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=640&crop=smart&auto=webp&s=3357b486e1d1226172774cfda0e852872418db2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=960&crop=smart&auto=webp&s=bb910faa55650b348e9d328428f61099a64a0369', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=1080&crop=smart&auto=webp&s=ce8032648c7802523394a5c23a317db4bbc62272', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?auto=webp&s=29bfa5df4818629250ae36f291681ab0e380dd68', 'width': 1200}, 'variants': {}}]} |
Why are LLM benchmarks so absurdly correlated? | 20 | I've been looking at jubilantjerry's excellent 'Combined LLM benchmarks' work ([https://www.reddit.com/r/LocalLLaMA/comments/13qj07n/sharing\_my\_comparison\_methodology\_for\_llm\_models/](https://www.reddit.com/r/LocalLLaMA/comments/13qj07n/sharing_my_comparison_methodology_for_llm_models/)), and the most striking thing about his chart is that if you sort all models by their overall score, *every model appears in the same rank order for every benchmark*.
The correlation is not just that GPT4 is in 1st place on 12 out of 12 tests, but that llama-65b is 5th place on 12 of 12, and alpaca-7b is 16th place, 12/12, etc.. No models outperform on individual benchmarks relative to their overall rank.
Zero ordinal variance on 33 models x 12 benchmarks is pretty striking. Why might this be?
I can think of three possible answers, but there may be others:
**1. Foundation Model similarity**: All of these models are produced in similar ways, with similar techniques, data sets, and tunings. They all occupy points on a spectrum that shades from better to worse, but with no other differences.
**2. Benchmark similarity**: The prompt->response pattern is central to the benchmarks, so the source of the prompts, and the measured outcome, are really just minor variations on a uniform test suite. Zero-shot Trivia QA is harder than few-shot HellaSwag, but they are testing the same kinds of behavior.
**3. Feedback loops**: LLM creators know these benchmarks and use them during training, so the models are designed to do well on these benchmarks. The benchmarks just repeat the training phase of the models.
4. ???
Do any of these explanations seem plausible? More than one? Can any be ruled out? Have I missed anything? | 2023-06-15T21:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/14ae2xv/why_are_llm_benchmarks_so_absurdly_correlated/ | ClayShirky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ae2xv | false | null | t3_14ae2xv | /r/LocalLLaMA/comments/14ae2xv/why_are_llm_benchmarks_so_absurdly_correlated/ | false | false | self | 20 | null |
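The "zero ordinal variance" observation can be quantified: Spearman's rank correlation between each pair of benchmark columns should come out exactly 1.0 whenever every model keeps its rank. A dependency-free sketch using the classic d² formula (assumes no tied scores — the numbers below are invented):

```python
def ranks(xs):
    """Rank positions of xs (0 = lowest). Assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(a, b):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)) (no ties)."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy scores for five models on two benchmarks with identical rank order:
bench_a = [0.86, 0.71, 0.63, 0.55, 0.40]
bench_b = [0.52, 0.44, 0.41, 0.30, 0.12]
print(spearman(bench_a, bench_b))  # 1.0 when the rank order is identical
```

Running this over the real 33×12 score matrix would show whether the correlation is literally perfect or merely very high — which would help separate explanation 1 from 2 and 3.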
Preset Arena: 17,205 comparisons between 241 different presets. Vote on the best ones! | 125 | Everyone here has probably been through the question "What parameters should I use to get the best responses?". Temperature, top_k, top_p, repetition_penalty, typical_p... Finding the ideal combination is extremely difficult.
To tackle this problem, I have come up with the following experiment: comparing thousands of pairs of responses for the same prompt but different presets, and then computing the resulting elo scores for the presets. Just like lmsys did in their [Chatbot Arena Leaderboard](https://lmsys.org/blog/2023-05-25-leaderboard/), but for *presets* instead of models.
I have divided the prompts for the experiment into two categories:
- Instruct: 8465 instructions from the WizardLM_evol_instruct_70k dataset.
- Chat: 8740 conversations from the soda dataset (the #1 conversational dataset on Hugging Face). I have called the characters "Friend" and "You", and have built prompts consisting of the first 4 messages. The 5th one is generated by the model.
These models were used:
- Instruct prompts: Vicuna 13b v1.1 (GPTQ, 4-bit, 128g). This is a model that has ranked well on many leaderboards, and I have been using it for a while with good results.
- Chat prompts: LLaMA 13b (GPTQ, 4-bit, 128g). I find that the base LLaMA gives more natural and human-like responses during conversations.
It took me around 36 hours to generate the ~34000 completions on my RTX 3090 using the text-generation-webui API.
Now I need help categorizing the best responses. I have rented a Linux server and put together a "Preset Arena" website where anyone can vote.
# The arena is live here: https://oobabooga.github.io/arena/index.html
The final dataset will be shared on Hugging Face, including the prompts, responses, and votes.
Before voting, you can optionally enter an identifier like your reddit username or real name. The top voters will be acknowledged in the Hugging Face dataset card.
Some comments:
* The presets include special sampling techniques (Contrastive Search, Mirostat, Eta Sampling), as well as random combinations of the more common parameters. The full list can be found here: https://oobabooga.github.io/arena/presets.html
* Since the final dataset will contain pairs of outputs for the same prompt and a human preference label for each pair, it will in principle be possible to create a reward model for RLHF training based on it.
* I will regularly post progress updates in this thread.
-----
Updates (UTC time):
* 2023-06-16 00:01: 950 votes so far. This is going really well!
* 2023-06-16 02:31: 1260 votes. First preliminary results.
* 2023-06-16 04:02: 1421 votes.
* 2023-06-16 13:42: 2284 votes.
* 2023-06-16 15:44: 2535 votes.
* 2023-06-16 17:56: 2638 votes.
* 2023-06-16 23:59: 2952 votes. **Preliminary results update**: [preliminary results](https://oobabooga.github.io/arena/preliminary-results.html). | 2023-06-15T21:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/14adfw2/preset_arena_17205_comparisons_between_241/ | oobabooga4 | self.LocalLLaMA | 2023-06-17T00:00:12 | 0 | {} | 14adfw2 | false | null | t3_14adfw2 | /r/LocalLLaMA/comments/14adfw2/preset_arena_17205_comparisons_between_241/ | false | false | self | 125 | {'enabled': False, 'images': [{'id': 'Q1MF8IN_UA9pU4tqvD1hEdePlazYPLTs893pR_vAxGU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=108&crop=smart&auto=webp&s=563cdab161ef7a0db1d99480a17e1e0a964713a8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=216&crop=smart&auto=webp&s=f1e49ccecddfa4224d3b0a84c02a2f4e91fdbe23', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=320&crop=smart&auto=webp&s=433ccbd98e6312cd473b7462d9b2795647e68b22', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=640&crop=smart&auto=webp&s=1af23e81a95a45bb6c1c4137b61cac670f2817d9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=960&crop=smart&auto=webp&s=ea529d168b43832467d3edd73f14c041a334f395', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=1080&crop=smart&auto=webp&s=e32dc82175c894fb2626e6595baa25c1a27f2720', 'width': 1080}], 'source': {'height': 1138, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?auto=webp&s=b48765733ccaee4211093fc1887b8e3e1484d509', 'width': 1138}, 'variants': {}}]} |
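For anyone curious how lmsys-style Elo scores fall out of pairwise votes like these, the update rule is tiny. A minimal sketch — the preset names and the vote sequence here are invented, not taken from the arena:

```python
def elo_update(r_a: float, r_b: float, winner: str, k: float = 32.0):
    """One Elo update for an A-vs-B comparison; winner is 'a', 'b', or 'tie'."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Replay a handful of hypothetical votes between two presets:
ratings = {"mirostat": 1000.0, "contrastive": 1000.0}
for vote in ["a", "a", "b", "tie"]:
    ratings["mirostat"], ratings["contrastive"] = elo_update(
        ratings["mirostat"], ratings["contrastive"], vote
    )
print(ratings)
```

The final leaderboard is just this update replayed over all votes (often in several random orders, averaging the results, to reduce order sensitivity).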
Releasing a super-simple memory project for LLMs / semantic search | 19 | Hey /r/LocalLLaMA,
tl;dr: check out the project @ [https://github.com/spyglass-search/memex](https://github.com/spyglass-search/memex)
Last week, I made a [post](https://www.reddit.com/r/LocalLLaMA/comments/14405do/building_a_supersimple_memory_service_for/) about open-sourcing this tool. I'm open-sourcing the code and hoping to get general feedback from the community to improve it for any local LLM projects people are working on!
To recap: One of the major frustrations I've had (and others too from posts I've seen) with building projects w/ LLMs is dealing with the complexity of chunking/embedding/vector dbs, especially if you're in the non-python world.
Here's a demo video that I made to demonstrate how it works. I add a text file to the memory and then use an LLM to ask questions about it right afterwards.
[Run service. Add data. Query from other apps or just use the API endpoints](https://reddit.com/link/14acy7w/video/5sxl1z20t86b1/player)
**Why use this vs embeddings/vector databases/etc.?**
If you're already familiar with that and comfortable with it, no need to switch.
1. If you're on a non-python stack or need something a little less resource intensive.
2. If you're relatively new to LLMs and just want to add text to a data store and get the right piece of text out for your LLM tool/project.
3. You want text-based semantic search without all the extra steps.
**How you can help**
This is super early stage!
If you're working on an LLM project, I'd love to see how you'd use this and if there's any additional functionality that would make your life easier. | 2023-06-15T20:47:43 | https://www.reddit.com/r/LocalLLaMA/comments/14acy7w/releasing_a_supersimple_memory_project_for_llms/ | andyndino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14acy7w | false | null | t3_14acy7w | /r/LocalLLaMA/comments/14acy7w/releasing_a_supersimple_memory_project_for_llms/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'CD7VanZwz3k_Vo6F2TTbwGnDsM43dI400RAClk2pdW0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=108&crop=smart&auto=webp&s=6bf10cc78985d4b7bacedf6bff9ff0e745ef9b8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=216&crop=smart&auto=webp&s=73f93fd6e7ac9ae1b709fea86aaee6a16be40981', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=320&crop=smart&auto=webp&s=b8e28d3c60cbc346c71d7b8c4cd7e777d7a77f57', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=640&crop=smart&auto=webp&s=b97bdf83193359ed6faf76335329c3a8bc8e27c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=960&crop=smart&auto=webp&s=86105aeeb1737e7dfbebbd22b36c09a788a1308a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=1080&crop=smart&auto=webp&s=d797a865ee414bda5d5ba74ba5ee6cb9e91ebbe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?auto=webp&s=78011bdad504861cd2d56b255c262ef910535490', 'width': 1200}, 'variants': {}}]} | |
My experience with Falcon 40B on Hugging Face | 5 | I tried out several different prompts on the Huggingface playground ( [Falcon-Chat - a Hugging Face Space by HuggingFaceH4](https://huggingface.co/spaces/HuggingFaceH4/falcon-chat) ) for it to act as a Spanish tutor and to correct any mistakes I made, but it either only corrected the mistakes and nothing else, or it ignored the mistakes altogether and kept chatting. I know it is a free model but it doesn't seem to follow instructions very carefully.
Mind you, this is the first time I've used an open-source LLM, so I may be expecting too much?
Is there a 7B model available for testing online anywhere? I could only find the 40B model.... thanks | 2023-06-15T19:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/14abh3k/my_experience_with_falcon_40b_on_hugging_face/ | cool-beans-yeah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14abh3k | false | null | t3_14abh3k | /r/LocalLLaMA/comments/14abh3k/my_experience_with_falcon_40b_on_hugging_face/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ErHVgBnGBq6UZs92whKb8RedUVO_2L-lXigC_T4TQXw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=108&crop=smart&auto=webp&s=b805a6aad62bd8affa3b4d25cc1f327827dd454b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=216&crop=smart&auto=webp&s=255904e9c827306ef890b0207b64a4b460942184', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=320&crop=smart&auto=webp&s=eaa2dc37536e48ced13b5c093b9cf067b21907f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=640&crop=smart&auto=webp&s=abd521b9d2ba661712238cb427a5f0e52101ade5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=960&crop=smart&auto=webp&s=b2de6d1343a7070606226a69e72abc2b9b65c2a2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=1080&crop=smart&auto=webp&s=918380af84f5e526b1067e2ed0c7c1859ec87717', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?auto=webp&s=b784291ca5c25ef8d884445e92427cb78436682b', 'width': 1200}, 'variants': {}}]} |
A in-context-learning tuning dataset to few shots prompting | 10 | 2023-06-15T19:33:18 | https://huggingface.co/datasets/tasksource/icl-symbol-tuning-instruct | Jean-Porte | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 14ab2yj | false | null | t3_14ab2yj | /r/LocalLLaMA/comments/14ab2yj/a_incontextlearning_tuning_dataset_to_few_shots/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'ZgdTHBfsWO9CNq6kHwFEV90rdRHY5G_gJFbf1uYk2MQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=108&crop=smart&auto=webp&s=9ed7d6b1ac08c55270997f99cb3303405dd6b012', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=216&crop=smart&auto=webp&s=26fffa719db5eef4c54c17c67144c4e3b377f139', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=320&crop=smart&auto=webp&s=3898cc7de82bf1ffd09348cd800daa5a296dfc73', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=640&crop=smart&auto=webp&s=46e85a696798be47f3fe97834118aefc8c4abd9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=960&crop=smart&auto=webp&s=46a942d705f25318d1637a6c93732b1a495efb7c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=1080&crop=smart&auto=webp&s=0417082e8101b6891980dc43305025bb44b094b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?auto=webp&s=e73c6f309384dcd3d41cf6a3535b6372707eb6f4', 'width': 1200}, 'variants': {}}]} | ||
Should laptop make a scratching noise when running Oobabooga or similar tools? | 0 | [removed] | 2023-06-15T19:30:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14ab0d3 | false | null | t3_14ab0d3 | /r/LocalLLaMA/comments/14ab0d3/should_laptop_make_a_scratching_noise_when/ | false | false | default | 0 | null | ||
Running 40B Falcon on AMD MI300X real-time demo | 0 | [removed] | 2023-06-15T19:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/14aa8ln/running_40b_falcon_on_amd_mi300x_realtime_demo/ | themushroommage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aa8ln | false | null | t3_14aa8ln | /r/LocalLLaMA/comments/14aa8ln/running_40b_falcon_on_amd_mi300x_realtime_demo/ | false | false | default | 0 | null |
At what model size does Reasoning appear? | 47 | LLMs are great as Wikipedia type information sources.
However, in GPT4 the emergent property of Reasoning appears.
Do we have a chance of achieving this on our home systems? | 2023-06-15T18:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/14a91wu/at_what_model_size_does_reasoning_appear/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a91wu | false | null | t3_14a91wu | /r/LocalLLaMA/comments/14a91wu/at_what_model_size_does_reasoning_appear/ | false | false | self | 47 | null |
trial of my status | 1 | [deleted] | 2023-06-15T18:04:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14a8x2t | false | null | t3_14a8x2t | /r/LocalLLaMA/comments/14a8x2t/trial_of_my_status/ | false | false | default | 1 | null | ||
How do you provide a model with source files for analysis? | 3 | I'm quite new to this, and using oobabooga on Windows.

The models appear to think they can read HTTP URLs, and when I asked how I could provide them with source files, they instructed me to upload the files and provide links, but it quickly became apparent they cannot actually access and read the uploaded source files despite claiming that's what they are doing.
Working with vicuna-30B-uncensored-gptq, but WizardCoder-15B-v1.0 also had the same problem. I assume it's because anything llama based assumes it is running as a centralized service by a large organization (vicuna notably falsely claims it is sending my feedback to "the development team").
Since I haven't gotten anywhere asking the models themselves about this, I thought I'd ask here: how can I provide source files or an entire project to a model for analysis? My first guess is using LoRAs to provide extensions to its knowledge base, but in this use case what I want is for the AI's knowledge to always reflect only the most current version of the source files in the project. If there was a pipeline feeding source files into its knowledge base, wouldn't that mean it ends up with knowledge of tons of outdated and useless source files? Wouldn't that interfere with its ability to answer questions about the current state of the codebase?
My second guess is that I should be using the model through visual studio to accomplish this. Not sure if that is possible with any given model or only ones which are designed for it.
Can anyone tell me what my options are here? | 2023-06-15T17:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/14a8pcq/how_do_you_provide_a_model_with_source_files_for/ | gigachad_deluxe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a8pcq | false | null | t3_14a8pcq | /r/LocalLLaMA/comments/14a8pcq/how_do_you_provide_a_model_with_source_files_for/ | false | false | self | 3 | null |
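The question in the post above, getting a model to "read" project source files, is usually solved with retrieval rather than fine-tuning: chunk the files, score the chunks against the question, and paste the best ones into the prompt, rebuilding from the current files on every query so nothing goes stale. A minimal sketch, using plain keyword overlap in place of real embeddings (the file contents and function names are illustrative assumptions, not any particular tool's API):

```python
def chunk(text, size=400):
    # Split a source file into fixed-size character chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query, passage):
    # Toy relevance score: count of shared lowercase words.
    # A real pipeline would use embedding cosine similarity here.
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p)

def build_prompt(question, files, top_k=2):
    # Gather chunks from all current files, rank them against the
    # question, and prepend the best ones as context for the LLM.
    chunks = [c for text in files.values() for c in chunk(text)]
    best = sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical project files; re-run this whenever the files change,
# so the context always reflects the current codebase.
files = {"main.py": "def add(a, b): return a + b", "util.py": "def log(msg): print(msg)"}
prompt = build_prompt("what does the add function return", files)
```

Because the context is rebuilt from the files at question time, the outdated-knowledge problem the post worries about with LoRAs does not arise; only what is currently on disk can be retrieved.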
Wizard-Vicuna 30B only using 50% of CPU | 1 | [removed] | 2023-06-15T17:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/14a8fkp/wizardvicuna_30b_only_using_50_of_cpu/ | TenseSpoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a8fkp | false | null | t3_14a8fkp | /r/LocalLLaMA/comments/14a8fkp/wizardvicuna_30b_only_using_50_of_cpu/ | false | false | default | 1 | null |
Need Help | 7 | Ok so I'm fairly new to llama.cpp. I used to run the llama models with oobabooga, but after the newest changes to llama.cpp I switched. It runs so much faster on my GPU. It's amazing: almost instant responses. So anyways, I'm using the following code inside a bat file to run it:
title llama.cpp
:start
main -i --interactive-first -r "### Human:" --temp 0 -c 2048 -n -1 --ignore-eos --repeat_penalty 1.2 --instruct -m Wizard-Vicuna-13B-Uncensored.ggmlv3.q6_K.bin -ngl 43
pause
goto start
Now here is my question. I read that I can also run this as a server (OpenAI-like, with a REST API). How would I need to change my bat file in order to start the server using said bat file? Or, the better question: is that possible at all, or do I need to do it some other way?
Thanks in advance for any help. | 2023-06-15T16:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/14a6tlc/need_help/ | GuentherDonner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a6tlc | false | null | t3_14a6tlc | /r/LocalLLaMA/comments/14a6tlc/need_help/ | false | false | self | 7 | null |
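On the server question in the post above: llama.cpp builds a separate `server` example binary alongside `main` that exposes an HTTP API. A hedged sketch of what the bat file could look like, assuming the server example has been built and that this llama.cpp version accepts the same model/offload flags (the `--host` and `--port` values are assumptions; check `server --help` for the build in use):

```bat
title llama.cpp server
:start
rem Same model, context size, and GPU offload as the interactive bat file,
rem but served over HTTP instead of driven from the console.
server -m Wizard-Vicuna-13B-Uncensored.ggmlv3.q6_K.bin -c 2048 -ngl 43 --host 127.0.0.1 --port 8080
pause
goto start
```

The interactive-only flags (`-i`, `--interactive-first`, `-r`, `--instruct`) are dropped because the server takes prompts from HTTP requests rather than stdin. Note this endpoint is llama.cpp's own API rather than a drop-in OpenAI clone; for an OpenAI-compatible REST server, the llama-cpp-python project ships one (`python -m llama_cpp.server`).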
A kind of magic | 3 | [removed] | 2023-06-15T16:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/14a6gii/a_kind_of_magic/ | Lolleka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a6gii | false | null | t3_14a6gii | /r/LocalLLaMA/comments/14a6gii/a_kind_of_magic/ | false | false | default | 3 | null |
GPT4all with BDSM content and not censored? | 0 | [removed] | 2023-06-15T15:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/14a5p28/gpt4all_with_bdsm_content_and_not_censored/ | Pleasant_Ad5555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a5p28 | false | null | t3_14a5p28 | /r/LocalLLaMA/comments/14a5p28/gpt4all_with_bdsm_content_and_not_censored/ | false | false | default | 0 | null |
Can you run inference on Mac with AMD GPUs? | 2 | Is that possible with things like oobabooga, ROCm, and llama.cpp?
I'm talking about Macs with an Intel CPU and AMD GPU, not Macs with Silicon chips. | 2023-06-15T13:13:49 | https://www.reddit.com/r/LocalLLaMA/comments/14a2505/can_you_run_inference_on_mac_with_amd_gpus/ | jl303 | self.LocalLLaMA | 2023-06-16T00:55:17 | 0 | {} | 14a2505 | false | null | t3_14a2505 | /r/LocalLLaMA/comments/14a2505/can_you_run_inference_on_mac_with_amd_gpus/ | false | false | self | 2 | null |
Best open-source LLM for code generation | 10 | I am looking for an LLM in turn to develop a code generator application. The goal is to fine-tune the LLM with template libraries. Suggestions? | 2023-06-15T12:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/14a1go5/best_opensource_llm_for_code_generation/ | lahaine93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a1go5 | false | null | t3_14a1go5 | /r/LocalLLaMA/comments/14a1go5/best_opensource_llm_for_code_generation/ | false | false | self | 10 | null |
Running LLaMA 65B reminded me how much more fun base models are | 89 | Lately I've been running LLaMA 65B in the cloud to try out exllama. With no actual use case for it, I've been using the chatbot example. And it has reminded me how much more fun base models are compared to RLHF'd GPT. They're funnier, more concise, less obsequious, and capable of expressing emotions other than bland positivity. (They're also dumb as rocks compared to GPT-4, but that's expected.)
Maybe I'm just bad at prompting, but I've tried asking GPT-4 to be more concise, for example, and it didn't help much. (The message cap would make that futile anyway.) Meanwhile, if I ask a base model to play a game where we take turns saying the first word that comes to mind, it'll do it, and actually say only that word, without me having to ask explicitly.
It makes me wish that the tuned models (like GPT-4) would retain that sort of communication style. And I hope that the LLaMA v2 release isn't fine-tuned into the stereotypical RLHF writing style. | 2023-06-15T12:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/14a0uig/running_llama_65b_reminded_me_how_much_more_fun/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a0uig | false | null | t3_14a0uig | /r/LocalLLaMA/comments/14a0uig/running_llama_65b_reminded_me_how_much_more_fun/ | false | false | self | 89 | null |
what do you use for quickly testing pretrained models locally? | 1 | [deleted] | 2023-06-15T11:56:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14a0ib0 | false | null | t3_14a0ib0 | /r/LocalLLaMA/comments/14a0ib0/what_do_you_use_for_quickly_testing_pretrained/ | false | false | default | 1 | null | ||
Honest LLaMA - Inference-Time Intervention (ITI) | 28 |
Abstract:
We introduce Inference-Time Intervention (ITI), a technique designed to enhance the truthfulness of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface.
TLDR:
For Alpaca, ITI improves its truthfulness from 32.5% to 65.1% on TruthfulQA
Paper:
https://arxiv.org/abs/2306.03341 | 2023-06-15T11:53:47 | https://github.com/likenneth/honest_llama | i_wayyy_over_think | github.com | 1970-01-01T00:00:00 | 0 | {} | 14a0g7m | false | null | t3_14a0g7m | /r/LocalLLaMA/comments/14a0g7m/honest_llama_inferencetime_intervention_iti/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'seIlZWMeu9MvvI-1gNgo5gFIGcPuPnONo3Op_T_cSts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=108&crop=smart&auto=webp&s=b348d8153ba7eadccfcbac0e5a7404492b79f400', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=216&crop=smart&auto=webp&s=be28c81e0f37865ce3e3c84d6643bcd19660d5b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=320&crop=smart&auto=webp&s=cc6d6a7e2a4d01f68f5f64d672d2d902ad6b94af', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=640&crop=smart&auto=webp&s=f8bb33120f42500c33b23cdb73765cb57db75308', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=960&crop=smart&auto=webp&s=794211e375338137ec1bd56c456193531e2a138e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=1080&crop=smart&auto=webp&s=2e3467bc947ae9d8df25e081876ff4fed76ced19', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?auto=webp&s=bbbb635835e2867e5023124bb8ad2b42a4aaf9ae', 'width': 1200}, 'variants': {}}]} | |
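Mechanically, the intervention described in the abstract above is just a shift added to the outputs of a few selected attention heads at inference time. A stripped-down sketch of that single step (the vectors and the strength `alpha` below are made-up toy numbers, not the probe-derived truthful directions the paper actually learns):

```python
def intervene(activation, direction, alpha):
    # Shift an activation along a fixed "truthful" direction.
    # In ITI this is applied only to a few selected attention heads,
    # with `direction` found by probing and `alpha` tuned to trade
    # truthfulness against helpfulness.
    return [a + alpha * d for a, d in zip(activation, direction)]

head_out = [0.5, -1.0, 2.0]   # toy activation for one head
truth_dir = [1.0, 0.0, -0.5]  # toy probe direction
shifted = intervene(head_out, truth_dir, alpha=2.0)
```

Setting `alpha` to 0 recovers the original model, and increasing it intervenes harder, which is the tuning knob the abstract mentions.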
A guide for WSL/Windows 11/Linux users including the installation of WSL2, Conda, Cuda & more) | 26 |
I created a guide that includes some tips to improve your UX experience when using WSL2/windows 11/Linux
The WSL part contains :
* [install WSL](https://github.com/AmineDjeghri/AwesomeWindows11#install-wsl)
* [install miniconda in WSL](https://github.com/AmineDjeghri/AwesomeWindows11#install-miniconda-in-wsl)
* [configure WSL terminal](https://github.com/AmineDjeghri/AwesomeWindows11#configure-wsl-terminal-)
* [Install cuda in WSL](https://github.com/AmineDjeghri/AwesomeWindows11#install-cuda-in-wsl-)
* [WSL2 tools (backup, restore WSL image ..)](https://github.com/AmineDjeghri/AwesomeWindows11#wsl2-tools-)
* [WSL Terminal customization](https://github.com/AmineDjeghri/AwesomeWindows11/blob/master/README.md#41-customize-wsl-ubuntu-terminal) (For both Linux and WSL)
Contributions are welcome!
[Link to WSL Section](https://github.com/AmineDjeghri/AwesomeWindows11/blob/master/README.md#321coding-using-linuxwsl-inside-windows-best-choice---skip-this-if-you-dont-want-to-use-wsl)
[Link to the repo](https://github.com/AmineDjeghri/AwesomeWindows11) | 2023-06-15T11:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/14a0eqq/a_guide_for_wslwindows_11linux_users_including/ | aminedjeghri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a0eqq | false | null | t3_14a0eqq | /r/LocalLLaMA/comments/14a0eqq/a_guide_for_wslwindows_11linux_users_including/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'B-xC3cG-8ytaaWfLUNyL7usqsTGoV31nbZDjbq4WyUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=108&crop=smart&auto=webp&s=93e75c196cc4dad1e97761d710cfef356fd34593', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=216&crop=smart&auto=webp&s=e2bc9137c3456b7d82641abc60c8597efc160094', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=320&crop=smart&auto=webp&s=6c085ce886c13c58ff83f5c35afbb53b142d80fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=640&crop=smart&auto=webp&s=e78fc6fc70758a0de7740beea69a178ea36eb2bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=960&crop=smart&auto=webp&s=64b9ff6e55943fe6291be5fce28d1ceda819546d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=1080&crop=smart&auto=webp&s=99ffd9ae0979d1759c22d11607ab95c653bb1b5f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?auto=webp&s=6cc958a41cb442d011d627c00e75e91fbc572f44', 'width': 1200}, 'variants': {}}]} |
Free GPU options for LlaMA model experimentation | 63 | To those who are starting out on the llama model with llama.cpp or other similar models, you may feel tempted to purchase a used 3090, 4090, or an Apple M2 to run these models. However, I'd like to share that there are free alternatives available for you to experiment with before investing your hard-earned money.
Google Colab notebooks offer a decent virtual machine (VM) equipped with a GPU, and it's completely free to use.
Here are the typical specifications of this VM:
12 GB RAM
80 GB DISK
Tesla T4 GPU with 15 GB VRAM
This setup is sufficient to run most models effectively. In the comments section, I will be sharing a sample Colab notebook specifically designed for beginners.
If you happen to know about any other free GPU VMs, please do share them in the comments below. | 2023-06-15T11:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/14a0bs9/free_gpu_options_for_llama_model_experimentation/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a0bs9 | false | null | t3_14a0bs9 | /r/LocalLLaMA/comments/14a0bs9/free_gpu_options_for_llama_model_experimentation/ | false | false | self | 63 | null |
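Before spinning up a notebook, it helps to sanity-check whether a model's weights even fit the T4's 15 GB with the back-of-envelope rule weight bytes = parameters × bits per weight / 8. A rough sketch (it deliberately ignores KV cache and runtime overhead, so treat a tight fit as a probable miss):

```python
def model_gib(n_params_billion, bits):
    # Rough weight-only footprint in GiB: params * bits per weight / 8,
    # ignoring KV cache, activations, and runtime buffers.
    bytes_total = n_params_billion * 1e9 * bits / 8
    return bytes_total / 2**30

T4_VRAM_GIB = 15  # free Colab T4
for size in (7, 13, 33):
    for bits in (16, 4):
        fits = model_gib(size, bits) < T4_VRAM_GIB
        print(f"{size}B @ {bits}-bit: {model_gib(size, bits):.1f} GiB, fits={fits}")
```

By this estimate a 7B model in FP16 barely squeezes in (and overhead would likely push it out), while 4-bit quantized 7B and 13B models fit comfortably.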
A guide for WSL/Windows 11 users including the installation of WSL2, Conda, Cuda & more) | 1 | [deleted] | 2023-06-15T11:42:06 | [deleted] | 2023-06-15T11:46:54 | 0 | {} | 14a086f | false | null | t3_14a086f | /r/LocalLLaMA/comments/14a086f/a_guide_for_wslwindows_11_users_including_the/ | false | false | default | 1 | null | ||
Psychiatry (Choice of medications ) advice from LLAMA? | 1 | [removed] | 2023-06-15T10:54:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 149zbzk | false | null | t3_149zbzk | /r/LocalLLaMA/comments/149zbzk/psychiatry_choice_of_medications_advice_from_llama/ | false | false | default | 1 | null | ||
Is it possible to train a llm to model a personality? | 14 | I don’t have a good machine so the only models are 7b ones that I can run on my machines.
What if I could give the model a personality, train it in a particular field say for example make it a doctor or even Shakespeare
The way I understand it is that less the parameters the worse the model is in general (7B worse than 30B and so on)
But the thing is it is worse in general
If all it’s powers were steered towards a particular thing
It could perform a lot better
I hope this makes sense
Maybe there already exist such models that I don’t know of | 2023-06-15T10:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/149ylzh/is_it_possible_to_train_a_llm_to_model_a/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149ylzh | false | null | t3_149ylzh | /r/LocalLLaMA/comments/149ylzh/is_it_possible_to_train_a_llm_to_model_a/ | false | false | self | 14 | null |
Local LLMs: After Novelty Wanes | 75 |
What do you do with them after the novelty wears off?
- Have you found usecases that are particularly valuable?
- Have you found cool new applications in your personal or professional life?
- Are you building next gen apps with local LLMs?
Share your experiences, tips, and ideas on how to keep them useful in the long run. | 2023-06-15T09:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/149xhln/local_llms_after_novelty_wanes/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149xhln | false | null | t3_149xhln | /r/LocalLLaMA/comments/149xhln/local_llms_after_novelty_wanes/ | false | false | self | 75 | null |
Training on documents to summarize them. | 17 | I am highly interested in using LLM to summarize long documents. However, the maximum token length is often a limitation.
But LLM can provide summaries based on the text it was trained on. If I ask ChatGPT about the summary of "Harry Potter and the Philosopher's Stone," it can generate a satisfactory output.
So, I had the idea that it should be possible to train an LLM (likely with LoRA) on a specific document and then request a summarization. The summarization might suffer from halluzination but for my use case this would be okay.
I researched for this but could not find anything. I even asked ChatGPT about it, but it just halluzinated about it. Does anyone know if something like this exists?
Edit:
After thinking about this a little more, I realized (or at least I think) that the model I described would work like a stateful LLM. There is at least a bit of [research on this topic](https://arxiv.org/pdf/2209.07634.pdf). However, I do not know why this is not used more yet. Isn't this the whole reason LSTMs were so popular for a long time? | 2023-06-15T06:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/149vbmx/training_on_documents_to_summarize_them/ | Schmandli | self.LocalLLaMA | 2023-06-15T12:39:41 | 0 | {} | 149vbmx | false | null | t3_149vbmx | /r/LocalLLaMA/comments/149vbmx/training_on_documents_to_summarize_them/ | false | false | self | 17 | null |
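Until a trained-in approach like the one asked about exists, the standard workaround for the token limit is map-reduce summarization: summarize each chunk independently, then summarize the concatenation of the chunk summaries. A sketch of the control flow, with a placeholder `summarize` standing in for the actual LLM call (the function names and word-based chunking are illustrative assumptions, not a specific library's API):

```python
def chunks(words, max_words):
    # Split a long document into pieces an LLM's context can hold.
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])

def summarize(text):
    # Placeholder for one LLM call; here it just keeps the first 5 words.
    return " ".join(text.split()[:5])

def map_reduce_summary(document, max_words=100):
    # Map: summarize each chunk independently.
    parts = [summarize(c) for c in chunks(document.split(), max_words)]
    combined = " ".join(parts)
    # Reduce: recurse until everything fits into a single call.
    if len(combined.split()) > max_words:
        return map_reduce_summary(combined, max_words)
    return summarize(combined)
```

This trades some fidelity for coverage, which matches the post's stated tolerance for mild hallucination.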
New quantization method SqueezeLLM allows for lossless compression for 3-bit and outperforms GPTQ and AWQ in both 3-bit and 4-bit. Quantized Vicuna and LLaMA models have been released. | 216 | Paper: [https://arxiv.org/abs/2306.07629](https://arxiv.org/abs/2306.07629)
Code: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
SqueezeLLM quantized models: [https://huggingface.co/squeeze-ai-lab](https://huggingface.co/squeeze-ai-lab)
Excerpts:
>We introduce SqueezeLLM, a post-training quantization framework that not only enables lossless compression to ultra-low precisions of up to 3-bit, but also achieves higher quantization performance under the same memory constraint. We extensively test SqueezeLLM on LLaMA-7B, 13B, and 30B on language modeling tasks using the C4 and WikiText2 benchmarks, where we find that SqueezeLLM consistently outperforms existing quantization methods by a large margin across different bit precisions. Our deployed models on A6000 GPUs not only demonstrate improved quantization performance but also exhibit significant gains in latency.
>
>In generative LLM inference, loading weight matrices into memory is the primary bottleneck, while the cost of dequantization and computation in the FP16 domain is relatively insignificant. Thus, by quantizing just the weights to lower precision, while leaving the activations in full precision, we can attain significant speedup, in addition to the reduction in model size. Notably, even the dense-only version of SqueezeLLM achieves perplexity comparable to the grouped GPTQ and AWQ. By incorporating sparsity, we achieve further perplexity improvements, reducing the gap from the FP16 baseline to less than 0.1 and 0.4 perplexity points for 4-bit and 3-bit quantization, respectively. Notably, with 3-bit quantization, our approach achieves up to a 2.1× reduction in perplexity gap from the FP16 baseline compared to existing methods.
>
>SqueezeLLM achieves higher accuracy for both Vicuna-7B and 13B as compared to the AWQ method and also preserve the accuracy of the FP16 baseline model with 4-bit quantization. Furthermore, it is noteworthy that the 4-bit quantized version of Vicuna-13B using SqueezeLLM has 2× smaller memory footprint than the 7B baseline model in FP16, while still achieving a 2% higher accuracy. In the case of 3-bit quantization, SqueezeLLM outperforms both GPTQ and the state-of-the-art AWQ method with a group size of 128 even without incorporating sparsity.
>
>Keeping 0.05% of sensitive values in FP16 only adds approximately 20% latency overhead across different model sizes, while still providing up to 1.9× speed up compared to the baseline. Keeping 0.45% of parameters in FP16 only adds 40-45% latency overhead relative to the dense-only implementation, while still resulting in 1.7× speed up compared to the FP16 baseline. In contrast, when accounting for permutation, the GPTQ runtime is degraded heavily. This shows how our Dense-and-Sparse quantization methodology allows for both higher accuracy as well as better performance relative to GPTQ. | 2023-06-15T05:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/149txjl/new_quantization_method_squeezellm_allows_for/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149txjl | false | null | t3_149txjl | /r/LocalLLaMA/comments/149txjl/new_quantization_method_squeezellm_allows_for/ | false | false | self | 216 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
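To make the memory arithmetic in the excerpts above concrete, here is the simplest possible weight-only scheme: uniform affine quantization to 4-bit integers with one scale per group. This is deliberately naive (SqueezeLLM itself uses non-uniform, sensitivity-weighted levels plus a sparse FP16 outlier set); it only illustrates why low-bit weights shrink the footprint by roughly bits/16 while activations stay in full precision:

```python
def quantize(weights, bits=4):
    # Map floats to integers in [0, 2**bits - 1] with a single
    # scale/offset for the whole group (per-group affine quantization).
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2**bits - 1) or 1.0  # guard against constant groups
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    # Only the stored weights are low-bit; they are expanded back to
    # floats at compute time, so memory (the inference bottleneck)
    # shrinks while the matmuls still run in full precision.
    return [v * scale + lo for v in q]

w = [0.1, -0.4, 0.25, 0.0]
q, s, lo = quantize(w)
w_hat = dequantize(q, s, lo)
err = max(abs(a - b) for a, b in zip(w, w_hat))  # bounded by scale / 2
```

Sensitive outlier weights break this scheme's error bound, which is why methods like SqueezeLLM carve a small fraction of them out into a sparse FP16 matrix.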
How to fine-tune a module? Wizard-Vicuna 7B uncensored | 8 | I was inspired by the NovelAI FAQ about how their modules work: they use GPT-J and fine-tune it with material for fantasy story writing and anime-related content. I want to try to replicate what they do with Wizard-Vicuna 7B and see if I can achieve results similar to their modules. The problem is I don't have a single clue how to start. Can anyone enlighten me on how this works, and tell me where I'm wrong about it? | 2023-06-15T05:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/149t8oa/how_to_finetuned_module_wizardvicuna_7b_uncensored/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149t8oa | false | null | t3_149t8oa | /r/LocalLLaMA/comments/149t8oa/how_to_finetuned_module_wizardvicuna_7b_uncensored/ | false | false | self | 8 | null |
A short conversation with Falcon | 299 | Questions are purposely stupid, btw | 2023-06-15T04:37:22 | https://www.reddit.com/gallery/149su0a | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 149su0a | false | null | t3_149su0a | /r/LocalLLaMA/comments/149su0a/a_short_conversation_with_falcon/ | false | false | 299 | null | |
New Discord server and a poll for this community | 37 | New server: [https://discord.gg/Y8H8uUtxc3](https://discord.gg/Y8H8uUtxc3)
LocalLLaMA has reached 30,000 members! Thank you everyone for being here.
As the community has grown, one of the most popular suggestions has been to create a Discord server. Whether the community actually wants one will be decided by poll. Do you want a Discord server for this community? Vote on whether the server should stay open or be deleted.
**Edit:** The community has decided, and the server will remain open. Thanks for voting and showing interest.
​
Q & A:
***Why Discord? Discord is terrible.***
There are three goals in making this server:
* Give beginners another way to ask very simple questions that have existing answers. These question posts are often downvoted. Removing these posts and leaving beginners without an answer is not a good solution, but at the same time, it wouldn't be ideal to see the community and longtime members here have their home page overrun with posts they prefer not to see. Discord can be a middle ground.
* Try to provide another way for the developers in this community to connect and collaborate.
* Encourage more discussion of locally run language models and related topics.
The original plan was to never make a server for this sub, but it was reconsidered after requests for one.
***Discord is a platform where information is blocked off for everyone else. Why make a server?***
There are endless Discord servers and as locally run language models become more popular, endless more will be created. If this is inevitable, it makes sense to create a centralized place where these discussions can be searched forever instead of being split up and lost in countless smaller servers.
***Why not something else besides Discord?***
Discord is the most popular alternative and most people usually don't want to be inconvenienced with platforms they don't already use.
***Why is the server named LocalLLaMA?***
LocalLLaMA focuses on LLaMA but has always allowed discussion of other local LLMs, and the name and community has become a catchall for local language model news and discussion.
***Anything else to know?***
Have a suggestion for the server, interested in being a Discord mod, or want to help by adding info or a utility? [Click here](https://www.reddit.com/message/compose?to=/r/LocalLLaMA) to send a message. For being a mod there, active members in this subreddit will be considered first.
​
Friendly reminder: this subreddit has a wiki. You can find a list of models [here](https://www.reddit.com/r/LocalLLaMA/wiki/models) and the simple install guide [here](https://www.reddit.com/r/LocalLLaMA/wiki/guide). A list of important datasets and community projects is [here](https://www.reddit.com/r/LocalLLaMA/wiki/communityprojects). The [stickied post](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) below on running LLaMA has also been updated with a few more frequently asked questions and a simple chart showing some models and what they can be used for.
​
**Poll:** Do you want a Discord server for this community?
Yes: keep the server open
No: delete the server
I don't care: I don't have a preference whether there's a server or not.
[View Poll](https://www.reddit.com/poll/149s2ul) | 2023-06-15T03:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/149s2ul/new_discord_server_and_a_poll_for_this_community/ | Technical_Leather949 | self.LocalLLaMA | 2023-06-19T10:16:26 | 0 | {} | 149s2ul | true | null | t3_149s2ul | /r/LocalLLaMA/comments/149s2ul/new_discord_server_and_a_poll_for_this_community/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'E8Jfupwh2wiDlIJ4BmYK6WgDxfqDQRYjZ_8QK--Cnqg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/M2EBgGeeo2HSx9mOiRPZMaDbB0foY9mKnPwthOx0LRA.jpg?width=108&crop=smart&auto=webp&s=4d287cc99a365271c3b4e143e681ce7de9b177d2', 'width': 108}], 'source': {'height': 162, 'url': 'https://external-preview.redd.it/M2EBgGeeo2HSx9mOiRPZMaDbB0foY9mKnPwthOx0LRA.jpg?auto=webp&s=560dfadb35cdb30d15adec491f5a2de8611e7c2d', 'width': 162}, 'variants': {}}]} |
Noob here | 1 | [removed] | 2023-06-15T03:57:08 | https://www.reddit.com/r/LocalLLaMA/comments/149s1t7/noob_here/ | kunjinn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149s1t7 | false | null | t3_149s1t7 | /r/LocalLLaMA/comments/149s1t7/noob_here/ | false | false | default | 1 | null |
How good is Chronos Hermes 13B? | 18 | I'm reading a lot of praise for Chronos Hermes 13B for chat. Can someone put out a benchmark on this? More importantly, how does it stack up to Wizard-Vicuna-Uncensored-30B? | 2023-06-15T02:24:13 | https://www.reddit.com/r/LocalLLaMA/comments/149q8ke/how_good_is_chronos_hermes_13b/ | ReMeDyIII | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149q8ke | false | null | t3_149q8ke | /r/LocalLLaMA/comments/149q8ke/how_good_is_chronos_hermes_13b/ | false | false | self | 18 | null |
MLC-LLM Chat vicuna-Wizard-7B-Uncensored-q3f16_0 | 26 | Using the unofficial tutorials provided by MLC-LLM, I was able to format ehartford/Wizard-Vicuna-7B-Uncensored to work with MLC-Chat in Vulkan mode. It should work with AMD GPUs, though I've only tested it on an RTX 3060. Feedback is appreciated. MLC-LLM doesn't get enough press here, likely because they don't upload enough models. But their approach to deploying models on all devices is something the space really needs if it's going to move beyond the somewhat arcane methods we currently have to use to get these LLMs working.
The Model:
PC/Linux AMD/Nvidia - [jetro30087/vicuna-Wizard-7B-Uncensored-q3f16\_0 · Hugging Face](https://huggingface.co/jetro30087/vicuna-Wizard-7B-Uncensored-q3f16_0)
Android - [jetro30087/vicuna-Wizard-7B-Uncensored-android-q4f16\_0 · Hugging Face](https://huggingface.co/jetro30087/vicuna-Wizard-7B-Uncensored-android-q4f16_0)
MLC-LLM - [GitHub - mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.](https://github.com/mlc-ai/mlc-llm)
https://preview.redd.it/ycx8rjcw836b1.png?width=1623&format=png&auto=webp&s=fd5bcb4c8bd2957fb6f537394a467241b8a82693 | 2023-06-15T02:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/149prvz/mlcllm_chat_vicunawizard7buncensoredq3f16_0/ | jetro30087 | self.LocalLLaMA | 2023-06-15T22:01:44 | 0 | {} | 149prvz | false | null | t3_149prvz | /r/LocalLLaMA/comments/149prvz/mlcllm_chat_vicunawizard7buncensoredq3f16_0/ | false | false | 26 | {'enabled': False, 'images': [{'id': '_wJmnI2pDwAdI3VoLl7qZVF-NGz6bZRVt-aIFtKeE0Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=108&crop=smart&auto=webp&s=5be55471119888b3da3034bef80abefc032a801a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=216&crop=smart&auto=webp&s=8682df20184ebaadeba65d98dafcbf301ae6c75b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=320&crop=smart&auto=webp&s=accde02d4d5b8ab0e9c22aa20e2c48935831cd80', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=640&crop=smart&auto=webp&s=96935ab35d3bba3c5b740284994512a40ee4881c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=960&crop=smart&auto=webp&s=52e98fa0274fe7e4be3ce4d8351d733411c26893', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=1080&crop=smart&auto=webp&s=43186ec70961c9639e074065e96c73e56f9f0581', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?auto=webp&s=7161c0d049274b89e4afd7a5508e298460bb16c0', 'width': 1200}, 'variants': {}}]} | |
So hypothetically what’s the strongest LLM model this could reasonably run? (24GB VRAM + 128GB system ram) (reasonably = output generations happen within 60 seconds) | 4 | 2023-06-15T01:20:28 | katiecharm | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 149oxqx | false | null | t3_149oxqx | /r/LocalLLaMA/comments/149oxqx/so_hypothetically_whats_the_strongest_llm_model/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'EjmZ7VMJ_aNSW30VP0ebQ_7MdKg5e8Z1c9dRoev2dbY', 'resolutions': [{'height': 174, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=108&crop=smart&auto=webp&s=73816e487ad6e1f9fccf236f5f5b69eab252b309', 'width': 108}, {'height': 349, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=216&crop=smart&auto=webp&s=732418aed5d1660a66749eb31580765e1c2583e1', 'width': 216}, {'height': 517, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=320&crop=smart&auto=webp&s=dcb22707030b7c6296df5f1df5c60298ab77a203', 'width': 320}, {'height': 1035, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=640&crop=smart&auto=webp&s=87203aef9fd9fc8c09442ac5567b0a86965edee2', 'width': 640}, {'height': 1553, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=960&crop=smart&auto=webp&s=df59d1567d73eb4342b8b77f75309d0f0218bcf4', 'width': 960}, {'height': 1747, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=1080&crop=smart&auto=webp&s=786b496a8ada375ec3ba2766237dc02796c09118', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?auto=webp&s=1253a08e9713ce099738411a582ead8a71915866', 'width': 1236}, 'variants': {}}]} | |||
Your painpoints in building/using local LLMs | 10 | What are your painpoints in building/using local LLMs? Let's discuss and see what we can build together to solve these issues! | 2023-06-15T01:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/149okhd/your_painpoints_in_buildingusing_local_llms/ | Latter-Implement-243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149okhd | false | null | t3_149okhd | /r/LocalLLaMA/comments/149okhd/your_painpoints_in_buildingusing_local_llms/ | false | false | self | 10 | null |
The best chat model for an RTX 4090? | 5 | Hello, I've seen a lot of new LLMs over the past month, so I'm a bit lost.

Can someone tell me what the best local LLM usable in oobabooga is right now? Thanks. | 2023-06-15T00:20:11 | https://www.reddit.com/r/LocalLLaMA/comments/149nofy/the_best_chat_model_for_a_rtx_4090/ | 3deal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149nofy | false | null | t3_149nofy | /r/LocalLLaMA/comments/149nofy/the_best_chat_model_for_a_rtx_4090/ | false | false | self | 5 | null |
Tool for chatting with your codebase and docs using OpenAI, LlamaCpp, and GPT-4-All | 14 | [removed] | 2023-06-14T23:59:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 149n8cm | false | null | t3_149n8cm | /r/LocalLLaMA/comments/149n8cm/tool_for_chatting_with_your_codebase_and_docs/ | false | false | default | 14 | null | ||
Model possible to run on laptop GPU? | 0 | [removed] | 2023-06-14T23:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/149n29k/model_possible_to_run_on_laptop_gpu/ | DarkHelmetedOne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149n29k | false | null | t3_149n29k | /r/LocalLLaMA/comments/149n29k/model_possible_to_run_on_laptop_gpu/ | false | false | default | 0 | null |
Attention based truncation approach | 7 | [deleted] | 2023-06-14T23:01:50 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 149lxwv | false | null | t3_149lxwv | /r/LocalLLaMA/comments/149lxwv/attention_based_truncation_approach/ | false | false | default | 7 | null | ||
llama.cpp full CUDA acceleration has been merged | 125 | Update of [(1) llama.cpp just got full CUDA acceleration, and now it can outperform GPTQ! : LocalLLaMA (reddit.com)](https://www.reddit.com/r/LocalLLaMA/comments/147z6as/llamacpp_just_got_full_cuda_acceleration_and_now/) posted by TheBloke.
The PR by Johannes Gaessler has been merged into main.
Link of the PR : [https://github.com/ggerganov/llama.cpp/pull/1827](https://github.com/ggerganov/llama.cpp/pull/1827)
**Description of the PR:**
This PR adds GPU acceleration for all remaining ggml tensors that didn't yet have it. Especially for long generations this makes a large difference because the KV cache is still CPU only on master and gets larger as the context fills up. Prompt processing is also significantly faster because the large batch size allows the more effective use of GPUs.
He also added a --low-vram option that disables the CUDA scratch buffer | 2023-06-14T22:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/149ls1f/llamacpp_full_cuda_acceleration_has_been_merged/ | aminedjeghri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149ls1f | false | null | t3_149ls1f | /r/LocalLLaMA/comments/149ls1f/llamacpp_full_cuda_acceleration_has_been_merged/ | false | false | self | 125 | {'enabled': False, 'images': [{'id': 'GoLjSKY7nzLq6QbI1x3IXo7j3K4sPBmu0Cx1x74SKyQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=108&crop=smart&auto=webp&s=f650b19a78381e2e7232f5c68f5c7a7872c4be22', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=216&crop=smart&auto=webp&s=367fbf324b15177f88f6997ba1ca7a6b8c3a6ef6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=320&crop=smart&auto=webp&s=7bf63728e4b6acd82c9c13bfe7b4a4b1b0648a6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=640&crop=smart&auto=webp&s=fa48465a116dbafe1eb2d6eed1013c7d3a206e14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=960&crop=smart&auto=webp&s=7041ba30a59b3f50fa1bd9a38145c9e2d79f58d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=1080&crop=smart&auto=webp&s=3e001773e8623fdf62a75ffdc160ba3002ff8743', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?auto=webp&s=24dfe8d1d58dc194ae0675c72931b43cc0b0a5a3', 'width': 1200}, 'variants': {}}]} |
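The KV-cache growth the PR description mentions is easy to put numbers on. A back-of-the-envelope sketch, assuming LLaMA-7B-like dimensions and an fp16 cache (2 bytes per element); actual llama.cpp buffer sizes will differ somewhat:

```python
def kv_cache_bytes(n_layers, n_embd, n_ctx, bytes_per_elem=2):
    # one K and one V vector of size n_embd per layer, per cached position
    return 2 * n_layers * n_ctx * n_embd * bytes_per_elem

# LLaMA-7B-like: 32 layers, 4096-dim embeddings, 2048-token context, fp16
size = kv_cache_bytes(32, 4096, 2048)
print(f"{size / 2**30:.1f} GiB")   # → 1.0 GiB at full context
```

The cache scales linearly with context length, which is why keeping it on the GPU matters most for long generations.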
New model just dropped: WizardCoder-15B-v1.0 model achieves 57.3 pass@1 on the HumanEval Benchmarks .. 22.3 points higher than the SOTA open-source Code LLMs. | 232 | 2023-06-14T20:50:02 | https://twitter.com/TheBlokeAI/status/1669032287416066063 | Zelenskyobama2 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 149ir49 | false | {'oembed': {'author_name': 'Tom Jobbins', 'author_url': 'https://twitter.com/TheBlokeAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">New StarCoder coding model from <a href="https://twitter.com/WizardLM_AI?ref_src=twsrc%5Etfw">@WizardLM_AI</a> <br>"WizardCoder-15B-v1.0 model achieves 57.3 pass@1 on the HumanEval Benchmarks .. 22.3 points higher than the SOTA open-source Code LLMs."<br>My quants:<a href="https://t.co/ABjBvFRxw7">https://t.co/ABjBvFRxw7</a><a href="https://t.co/Hn4qQCeuZn">https://t.co/Hn4qQCeuZn</a><br>Original: <a href="https://t.co/L7wJhQyfRT">https://t.co/L7wJhQyfRT</a></p>— Tom Jobbins (@TheBlokeAI) <a href="https://twitter.com/TheBlokeAI/status/1669032287416066063?ref_src=twsrc%5Etfw">June 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/TheBlokeAI/status/1669032287416066063', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_149ir49 | /r/LocalLLaMA/comments/149ir49/new_model_just_dropped_wizardcoder15bv10_model/ | false | false | 232 | {'enabled': False, 'images': [{'id': 'sdZCEDH6vEPePQosukNPoJBQkfbmKFDoiiOCaRr3MaM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LDz9t2YDUQGVPqoN8o0Ev59CrRHWPrlJTMtNl3DJMe0.jpg?width=108&crop=smart&auto=webp&s=10cc2c8cc49b3f13b3f90bba40bff487054e2ead', 'width': 108}], 'source': {'height': 140, 'url': 
'https://external-preview.redd.it/LDz9t2YDUQGVPqoN8o0Ev59CrRHWPrlJTMtNl3DJMe0.jpg?auto=webp&s=3eb58837ee5e8a03f3db89f71d5dbb05e342ae0f', 'width': 140}, 'variants': {}}]} | ||
Community driven Open Source dataset collaboration platform | 26 | I am trying to create a platform where people can get together and edit datasets that can be used to fine tune or train models. I want it to be as easy as possible to collaborate, check eachother's work, and keep everything transparent. A few people suggested to me Google Sheets, but that's not viable due to Google's terms of service. So after searching around, I came across Baserow, which is a self hosted solution. I spun up a public instance last night to mess around with, and I think it might do the job.
Positive Thoughts:
* You can upload a CSV, JSON, or copy and paste raw text, and it creates front end tables you can start editing alone or with others with live changes.
* It can handle a lot of rows and fields pretty well. I've been able to upload and edit 250 MB JSON files without it slowing down or crashing.
Negative Thoughts:
* You can only expand the first column to an expanded view. I saw some hacky ways to fix this on their community forum, but I don't know how I feel about that. You can still edit the content, it just feels weird and makes it hard to read. You can always edit the data offline and copy and paste it back in.
* You can only export files as CSV. Which is annoying, but not really a deal-breaker.
It looks pretty easy to divvy up and assign people to different workspaces. So we could do something like split a big dataset into a bunch of small pieces. When people are finished cleaning/formatting the data, each chunk could get rotated to a fresh set of eyes to look over for errors. Then we can recombine it all and post it to a public workspace where everybody can check the combined results for anything that might have been missed.
I'd like some feedback on this idea. If anyone has thoughts or suggestions for a better way to organize, I'm all ears. I'd like to have a fleshed out plan that people generally agree on before I start inviting people to my instance and telling them to spend their time on it.
Here was my original post for those who missed it.
[https://www.reddit.com/r/LocalLLaMA/comments/142tked/bot\_embracing\_nefarious\_deeds\_erotic\_roleplay/](https://www.reddit.com/r/LocalLLaMA/comments/142tked/bot_embracing_nefarious_deeds_erotic_roleplay/) | 2023-06-14T19:34:06 | https://www.reddit.com/r/LocalLLaMA/comments/149gv4d/community_driven_open_source_dataset/ | CheshireAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149gv4d | false | null | t3_149gv4d | /r/LocalLLaMA/comments/149gv4d/community_driven_open_source_dataset/ | false | false | self | 26 | null |
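The split/review/rotate workflow described above is easy to prototype. A minimal sketch (the function names are invented for illustration):

```python
def make_review_chunks(rows, n_reviewers):
    # deal rows out round-robin so each reviewer gets a similar-sized chunk
    return [rows[i::n_reviewers] for i in range(n_reviewers)]

def rotate_chunks(chunks, step=1):
    # pass each finished chunk on to the next reviewer for a fresh-eyes check
    return chunks[-step:] + chunks[:-step]

rows = [f"example-{i}" for i in range(10)]
chunks = make_review_chunks(rows, 3)        # sizes 4, 3, 3
second_pass = rotate_chunks(chunks)         # reviewer 0 now checks chunk 2
```

Recombining is just concatenating the chunks back together, so nothing is lost between passes.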
Elevating Gaming to New Heights: Unleashing Next-Level Performance with Arc GPUs | 0 | [removed] | 2023-06-14T18:33:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 149fcgq | false | null | t3_149fcgq | /r/LocalLLaMA/comments/149fcgq/elevating_gaming_to_new_heights_unleashing/ | false | false | default | 0 | null | ||
Cpu models 16gb ram | 1 | [removed] | 2023-06-14T18:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/149eycf/cpu_models_16gb_ram/ | SignificantAd5514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149eycf | false | null | t3_149eycf | /r/LocalLLaMA/comments/149eycf/cpu_models_16gb_ram/ | false | false | default | 1 | null |
Do you ever see emojis in the output of your GGML model in ooba using streaming? | 2 | I don't know what options you have; I never got ooba to run, which is also why I'm asking instead of checking myself. I'm using llama-cpp-python, and I know ooba uses that too, and I noticed emojis are just destroyed when I try to use streaming (meaning the output updates as it generates instead of waiting for the whole thing to complete).
I'm pretty sure it's a bug, but really I am interested, does that mean you guys never saw your ggml models use emojis? | 2023-06-14T17:47:52 | https://www.reddit.com/r/LocalLLaMA/comments/149e87t/do_you_ever_see_emojis_in_the_output_of_your_ggml/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149e87t | false | null | t3_149e87t | /r/LocalLLaMA/comments/149e87t/do_you_ever_see_emojis_in_the_output_of_your_ggml/ | false | false | self | 2 | null |
Karen the Editor and quotations | 3 | I have a question regarding the way how [Karen the Editor](https://huggingface.co/FPHam/Karen_theEditor_13b_HF) was trained. Specifically, the model card mentions: "Based on LLAMA 13b and Wizard-Vucna-uncensored finetune, then finetuned with about 20k grammar examples (bad grammar/good grammar)."
However, for non-fiction work, sometimes there is an absolute requirement that everything in quotation marks is left intact, even if it contains incorrect grammar, because that's what someone in the real world said. Obviously, for fiction, there is no such requirement.
So - does the training dataset include examples with quotations? Is there any known prompt that reliably switches Karen between these two modes of operation? | 2023-06-14T17:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/149e3ht/karen_the_editor_and_quotations/ | patrakov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149e3ht | false | null | t3_149e3ht | /r/LocalLLaMA/comments/149e3ht/karen_the_editor_and_quotations/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'uJ9V-m-WnuFvsV7073FF7JnV8PxGeAFxWkMfb-qhs5Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=108&crop=smart&auto=webp&s=86802327304e4cebeaf4a7abd3aa3b962a2fe43e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=216&crop=smart&auto=webp&s=39a5554f67d2c5a2915c32889f717abca4ae429e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=320&crop=smart&auto=webp&s=a916ac5157e89c421d7276d30ac83e7882198bab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=640&crop=smart&auto=webp&s=2b4015b9e1553d483bc840887d8b78940013b10d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=960&crop=smart&auto=webp&s=977904dfe7ab4945056a96c9d756161b7fd56e1a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=1080&crop=smart&auto=webp&s=c6a9a3c629a3491a0fe22b3bc22311fb8d4c8c04', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?auto=webp&s=17443aa61fbb6d5aeb4cbfd08da5fb45e5a67926', 'width': 1200}, 'variants': {}}]} |
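One way to guarantee the keep-quotes-intact requirement, independent of how Karen was trained, is to mask quoted spans before the text reaches the model and restore them afterwards. A rough sketch, assuming simple non-nested double quotes (the placeholder scheme is improvised):

```python
import re

QUOTE_RE = re.compile(r'"[^"]*"')

def protect_quotes(text):
    # swap each quoted span for a placeholder the model is unlikely to touch
    quotes = QUOTE_RE.findall(text)
    for i, q in enumerate(quotes):
        text = text.replace(q, f"«Q{i}»", 1)
    return text, quotes

def restore_quotes(text, quotes):
    for i, q in enumerate(quotes):
        text = text.replace(f"«Q{i}»", q, 1)
    return text

src = 'The witness said "me and him was there" before leaving.'
masked, saved = protect_quotes(src)   # the grammar model edits `masked` here
```

Only the text outside the placeholders ever gets corrected, so the verbatim quotations survive regardless of the model's behavior.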
What's the best current open source model for a) vectorizing PDFs b) generative Q and A? | 6 | Looking for something equivalent to ChatGPT's performance on this task, although I know that of course nothing comes close. | 2023-06-14T17:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/149dj2k/whats_the_best_current_open_source_model_for_a/ | bramitkittel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149dj2k | false | null | t3_149dj2k | /r/LocalLLaMA/comments/149dj2k/whats_the_best_current_open_source_model_for_a/ | false | false | self | 6 | null |
Oobabooga not recognizing GPU for adding layers | 3 | Apparently the one-click install method for Oobabooga comes with a 1.3B model from Facebook which didn't seem the best in the time I experimented with it, but one thing I noticed right away was that text generation was incredibly fast (about 28 tokens/sec) and my GPU was being utilized.
I have a GGML model that claims to support CPU+GPU inference, which is great, as there's no way a 13B model would fit in 10GB of VRAM. But adding layers doesn't actually utilize my GPU at all, and I'm left with it using my CPU. After searching around, it seemed that people were able to add layers and use larger models on lesser hardware with no issues, so I have absolutely no idea what could be wrong here. It seems you basically have to compile GPU support for GGML models yourself; I got a C compiler and attempted that, but I must be doing something wrong, because even after entering the exact code given by a user that's supposed to add GPU support, I still get errors in the command box. I'm thinking I may have the wrong compiler, but I'm not sure.
At this point I'm very close to throwing in the towel. I'm 100% not well-versed in coding or computer science and while I understand this is bleeding edge stuff that hasn't reached the same level of user-friendliness as Stable Diffusion UIs, I don't understand why you need to jump through this many hoops to do something as simple as adding GPU support. I also don't understand why this has to be compiled by the user and not just be supported with the application itself but I'm assuming there's a reason for this that I don't understand because, like I said, I'm not well-versed in computer science. I normally have patience with these types of things but after looking through every Reddit and GitHub thread trying to understand and test every recommended fix just for every single one of them to not work, I'm starting to lose patience. | 2023-06-14T16:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/149crc9/oobabooga_not_recognizing_gpu_for_adding_layers/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149crc9 | false | null | t3_149crc9 | /r/LocalLLaMA/comments/149crc9/oobabooga_not_recognizing_gpu_for_adding_layers/ | false | false | self | 3 | null |
Please don’t close this Sub. Open-source AI needs all the help it can get | 136 | [removed] | 2023-06-14T16:25:44 | https://www.pinsentmasons.com/out-law/news/meps-eu-ai-act-foundation-models | IntenseSunshine | pinsentmasons.com | 1970-01-01T00:00:00 | 0 | {} | 149c7a8 | true | null | t3_149c7a8 | /r/LocalLLaMA/comments/149c7a8/please_dont_close_this_sub_opensource_ai_needs/ | false | false | default | 136 | null |
I was finally able to get WizardLM-13B to work on my GTX 1080 but alas only getting 0.01 tokens per second... | 25 | Is this to be expected?
Also, I'm thinking about getting a dedicated LocalLLM system, maybe a 24GB 4090? What kind of tokens per second might this generate on the Wizard model?
Apologies for the noob question but I have no idea what to expect as I'm just learning the ropes. | 2023-06-14T16:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/149byz4/i_was_finally_able_to_get_wizardlm13b_to_work_on/ | amoebatron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149byz4 | false | null | t3_149byz4 | /r/LocalLLaMA/comments/149byz4/i_was_finally_able_to_get_wizardlm13b_to_work_on/ | false | false | self | 25 | null |
LLM that answers questions based on documents | 5 | Hi I’m trying to create a chatbot that can answer my questions based on a collection of documents that the model is “trained” on. I need something that I can use within my business, so privacy and the ability to use it commercially are important. I’ve looked into OpenLlama and LangChain, but am wondering what the best course of action is. | 2023-06-14T15:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/149b22m/llm_that_answers_questions_based_on_documents/ | Sea_Koala_7726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149b22m | false | null | t3_149b22m | /r/LocalLLaMA/comments/149b22m/llm_that_answers_questions_based_on_documents/ | false | false | self | 5 | null |
Creating a Wiki for all things Local LLM. What do you want to know? | 231 | It's becoming abundantly clear there is a lot of information you need to know to get into this space, making it hard for people to get started. I'm working with a small group to create a wiki page to make that easier.
I will be editing this post and appending the questions as they come in. I should have the wiki up and running by this weekend. The goal is to have everything explained in plain English so anyone can understand it.
Questions:
1. What is a LLM?
2. How do I get started, what interfaces can I use?
3. What is Quantization, GPTQ vs GGML?
4. What hardware requirements are there?
5. How do I know what model to pick?
6. What is X paper and what does it mean for LLMs in the future?
7. How do I train my own model on my own dataset?
8. What's the difference between chat vs instruction trained model?
9. How do I make an API like OpenAI's to interact with my local model?
10. What is prompt engineering? How can I ask a question to get the best results from a particular model?
11. What are the software requirements?
12. How do I create my own dataset?
13. What makes a high quality dataset?
14. What are text embeddings and vector databases?
15. What are LoRA's and adapter models?
16. What are tokens?
17. What are Agents?
Edit: The framework of the site has been completed and the domain is up. If you are interested in helping out, you can click "edit" in the bottom left corner of a page to write content or to add any questions you think a section is missing.
[https://understandgpt.ai/](https://understandgpt.ai/)
[https://github.com/UnderstandGPT/UnderstandGPT](https://github.com/UnderstandGPT/UnderstandGPT) | 2023-06-14T15:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/149abrg/creating_a_wiki_for_all_things_local_llm_what_do/ | NeverEndingToast | self.LocalLLaMA | 2023-06-15T20:02:39 | 0 | {} | 149abrg | false | null | t3_149abrg | /r/LocalLLaMA/comments/149abrg/creating_a_wiki_for_all_things_local_llm_what_do/ | false | false | self | 231 | {'enabled': False, 'images': [{'id': 'F-ZIOBaWaYfLE07ouyeSREiOsgeE3ZhYPXamY49eupo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=108&crop=smart&auto=webp&s=d0aa982ee62c7d69336e3bc704fb2a602edd62e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=216&crop=smart&auto=webp&s=1928cb299d5db6215bf11ee891ed45bfd4a5f2d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=320&crop=smart&auto=webp&s=643fda13909cd805648443cb8306eda8cfb444ed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=640&crop=smart&auto=webp&s=41abdd5d224a05f9e39c93d81279ea2a1084ee21', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=960&crop=smart&auto=webp&s=1f3f38befc6fe4b6fae66a0f5664680fcae15512', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=1080&crop=smart&auto=webp&s=bd4913a9178b03002e41b4f2c6c797811576ac4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?auto=webp&s=a5dbdc69221197fe6b692f5d034a14c0c78e041f', 'width': 1200}, 'variants': {}}]} |