| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
My speeds are fine? x1 4090/x2 3090 - 1.51 tokens/s | 1 | [removed] | 2023-12-26T10:21:35 | https://www.reddit.com/r/LocalLLaMA/comments/18r5756/my_speeds_are_fine_x1_4090x2_3090_151_tokenss/ | FluffyMacho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r5756 | false | null | t3_18r5756 | /r/LocalLLaMA/comments/18r5756/my_speeds_are_fine_x1_4090x2_3090_151_tokenss/ | false | false | default | 1 | null |
Chatbot arena - Elo ratings over time | 191 | Chatbot arena: https://arena.lmsys.org/
Dashboard: https://public.flourish.studio/visualisation/16260210/ | 2023-12-26T10:20:22 | https://v.redd.it/nzxscsmn7m8c1 | Time-Winter-4319 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18r56fq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nzxscsmn7m8c1/DASHPlaylist.mpd?a=1706272614%2CYTUyMDZmOTM2MzA2NTQ0YmY5ZDY0MzgwYmY4MjhiZWJjODY0OTcyOWQ0Yjc3MjIzNjUzMTk0YTVjODdlNjIzMQ%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/nzxscsmn7m8c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nzxscsmn7m8c1/HLSPlaylist.m3u8?a=1706272614%2CNDA2NjIyZWJmMDk1ZDI4MmIxNmYwMzA1NzdmNGE5MzgwYTczYzQxODVlOWZiZTkxZmY2ODY5Njg4NGYxMWVlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nzxscsmn7m8c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1456}} | t3_18r56fq | /r/LocalLLaMA/comments/18r56fq/chatbot_arena_elo_ratings_overtime/ | false | false | 191 | {'enabled': False, 'images': [{'id': 'Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?width=108&crop=smart&format=pjpg&auto=webp&s=545e2dc03689d4025ae64411802f24b376ada173', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?width=216&crop=smart&format=pjpg&auto=webp&s=dc8c77bd6f0de320e609ac576ba3cd2d8846c5b6', 'width': 216}, {'height': 237, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?width=320&crop=smart&format=pjpg&auto=webp&s=870e7371162bfea4040edaf3c6872c10dd3f817b', 'width': 320}, {'height': 474, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?width=640&crop=smart&format=pjpg&auto=webp&s=0e91c1e66b17a352cf05ca6f0f50b8409f4d1dfa', 'width': 640}, {'height': 712, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?width=960&crop=smart&format=pjpg&auto=webp&s=76f0c00f69e6e03d9ea49dc3ea9fc812fe78ccde', 'width': 960}, {'height': 801, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8ea4bda085be7e67f228c27d8e5815d88596f88a', 'width': 1080}], 'source': {'height': 801, 'url': 'https://external-preview.redd.it/Ynd1OXByZ243bThjMYgg9n3MTduMYuF3jZJxmUmkzScqakAcdZbggwmQFFgc.png?format=pjpg&auto=webp&s=ba8374d529d331f1ec5387bbc0ef7c4783a919e3', 'width': 1080}, 'variants': {}}]} | |
x1 4090/x2 3090 - 1.51 tokens/s (speeds are fine?) | 1 | [removed] | 2023-12-26T10:18:15 | https://www.reddit.com/r/LocalLLaMA/comments/18r5574/x1_4090x2_3090_151_tokenss_speeds_are_fine/ | FluffyMacho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r5574 | false | null | t3_18r5574 | /r/LocalLLaMA/comments/18r5574/x1_4090x2_3090_151_tokenss_speeds_are_fine/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'dQBHNpn-vjyRzfikkaEBk8NWxzBkq2qdyYnRYF7g_pk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?width=108&crop=smart&auto=webp&s=0ab5da27fd5c8d2d1a876e0cd9ae0ac3171007c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?width=216&crop=smart&auto=webp&s=218d43cb150f2c3ebfcbbd682d700b41d22aec46', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?width=320&crop=smart&auto=webp&s=e03c1b6aef4f6a5f02a8ff74bc5df23065b4f869', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?width=640&crop=smart&auto=webp&s=5c0bd11a0a37fa1ab721d8796872a62180a01857', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?width=960&crop=smart&auto=webp&s=62936a469135f502248ba2695ae90c903e16a6b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?width=1080&crop=smart&auto=webp&s=9ec87a08074d1ac3fc38bea9abde8831fdbc2a30', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xmFA-IbB_nOORiQnWbqpgbYQ8RbZARKYYf1kBDjyrgw.jpg?auto=webp&s=2c64876e6bd81bfe420593aeef3a43c38fae7e76', 'width': 1200}, 'variants': {}}]} |
How does vLLM handle concurrent queries? | 2 | Hi folks,
After looking at the vLLM paper and GitHub page, I got confused about its ability to handle concurrent queries. I can understand the PagedAttention part, and it certainly helps optimize every single query. But what about concurrent queries? From their paper I feel like KV-cache sharing among queries could be a great help, but it seems that feature is still under development according to their GitHub page. So what makes it efficient on concurrent queries? Continuous batching?
Best, | 2023-12-26T10:04:13 | https://www.reddit.com/r/LocalLLaMA/comments/18r4xu7/how_does_vllm_handle_concurrent_queries/ | GuanlongWoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r4xu7 | false | null | t3_18r4xu7 | /r/LocalLLaMA/comments/18r4xu7/how_does_vllm_handle_concurrent_queries/ | false | false | self | 2 | null |
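On the concurrency question: vLLM's scheduler re-forms the batch at every decoding step (continuous batching), so new requests join mid-flight instead of waiting for the current batch to finish, and PagedAttention keeps their KV blocks from fragmenting memory. A minimal sketch of the offline batched API; the model name and sampling values below are just placeholders:

```python
from vllm import LLM, SamplingParams

# vLLM schedules all pending sequences together at each decoding step
# (continuous batching), so these prompts are served concurrently.
prompts = [
    "Explain PagedAttention in one sentence.",
    "What is continuous batching?",
    "Summarize KV caching.",
]
params = SamplingParams(temperature=0.7, max_tokens=64)

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # placeholder model
outputs = llm.generate(prompts, params)
for out in outputs:
    print(out.outputs[0].text)
```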
Papers to read, any suggestions? | 9 | Hi all,
I've been following this sub for a long time now and have finally been getting into actually reading some papers that get posted here. I'm curious if there are any papers, new or old, that you all recommend me or anyone else read.
I will say, most of the stuff goes over my head as I have no formal ML training yet 😅, although this sub has definitely helped me gain a base understanding of it (especially its applications)!
SanjiWatsuki/Loyal-Macaroni-Maid-7B a strong new roleplay model in a small package! | 17 | I just came across this model this morning, and so far it might be the best roleplay model I have tested in a long time.
I would love to hear what you guys think about it. From my testing, it has amazing memory and roleplaying skills. The only odd thing I found was the NSFW aspect: while it can do NSFW, it sometimes tries to "curve" away from it and says things like "We shouldn't do this" or "it's inappropriate".
The model needs a tiny bit of convincing, but it can do some good NSFW too. In my opinion this model is better than most 13B models I've tested for roleplaying. Try it out and leave your thoughts in the comments!
[https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) | 2023-12-26T08:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/18r3lhu/sanjiwatsukiloyalmacaronimaid7b_a_strong_new/ | Quiet_Joker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r3lhu | false | null | t3_18r3lhu | /r/LocalLLaMA/comments/18r3lhu/sanjiwatsukiloyalmacaronimaid7b_a_strong_new/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'oMYQwy-5paGv1kpXiv53y4SvU69p39JtHGp4EQcDFZY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?width=108&crop=smart&auto=webp&s=ebf21123e789ea75cad951c21d1a0df07a6be4e1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?width=216&crop=smart&auto=webp&s=966f6b6aa71bd016e65ab5419d3043ad50ad32e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?width=320&crop=smart&auto=webp&s=02c2c3c3ce3b399f6f69520bf07603dd09965e24', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?width=640&crop=smart&auto=webp&s=fb181abf4f2133e5d969748ebf3856311fb029c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?width=960&crop=smart&auto=webp&s=9f12b85c77d045dc10d7b64a0a3525788ed86949', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?width=1080&crop=smart&auto=webp&s=dcc9ee8b0e7cb466a7b1f0894adc6ed4cc5279d9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0MaDvTGzCkZqfZfQ_vUt3vKY_8K1Nrzw1FjZxHR4K7w.jpg?auto=webp&s=0f8f371238401d5eed5d71245f86bef993668573', 'width': 1200}, 'variants': {}}]} |
Another mighty line of models, another benchmark breaker. | 1 | [removed] | 2023-12-26T08:27:34 | https://www.reddit.com/r/LocalLLaMA/comments/18r3kj1/another_mighty_line_of_models_another_benchmark/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r3kj1 | false | null | t3_18r3kj1 | /r/LocalLLaMA/comments/18r3kj1/another_mighty_line_of_models_another_benchmark/ | false | false | default | 1 | null |
augmenting a containerized large language model with text files for context | 7 | 2023-12-26T08:07:33 | https://graphthinking.blogspot.com/2023/12/augmenting-containerized-large-language.html | ben_is | graphthinking.blogspot.com | 1970-01-01T00:00:00 | 0 | {} | 18r3a5y | false | null | t3_18r3a5y | /r/LocalLLaMA/comments/18r3a5y/augmenting_a_containerized_large_language_model/ | false | false | default | 7 | null | |
Fine-Tuning LLM To Control Responses and Removing Undesired Content | 1 | [removed] | 2023-12-26T08:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/18r39r8/finetuning_llm_to_control_responses_and_removing/ | iTsObserv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r39r8 | false | null | t3_18r39r8 | /r/LocalLLaMA/comments/18r39r8/finetuning_llm_to_control_responses_and_removing/ | false | false | default | 1 | null |
Finetune a small 7B model for my native language | 9 | I am looking for a way to finetune a small model (mistral or llama2 or phi2, if possible) for Odia language. So far, I've not found a good article explaining the HowTo steps. For the dataset, I'm planning to use [OdiaGenAI/all\_combined\_odia\_171k · Datasets at Hugging Face](https://huggingface.co/datasets/OdiaGenAI/all_combined_odia_171k) and try to source data from textbooks too. I think my laptop's 3050 4GB VRAM won't be enough for it, or will it be enough?? | 2023-12-26T07:22:22 | https://www.reddit.com/r/LocalLLaMA/comments/18r2mry/finetune_a_small_7b_model_for_my_native_language/ | Friendly-Gur-3289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r2mry | false | null | t3_18r2mry | /r/LocalLLaMA/comments/18r2mry/finetune_a_small_7b_model_for_my_native_language/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'CW-x1znJmcwunLU6fEZg4o4HEQJS6_IEV-0lP3UJYQU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?width=108&crop=smart&auto=webp&s=d42f23e982987e650b23d13ec096392894a13ea9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?width=216&crop=smart&auto=webp&s=c315bf7e59f53730ba854e5ed9f774161cdb6c62', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?width=320&crop=smart&auto=webp&s=3625ab19836e177ccace9a57f76b45cfe42e47d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?width=640&crop=smart&auto=webp&s=b10d52b1eeeac3925a249e42a3ae12df28872bb9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?width=960&crop=smart&auto=webp&s=d9328d3f8abd4fe2902286cd69674e7a9277f6ea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?width=1080&crop=smart&auto=webp&s=4caf2bd4ff41183f40dff613b5c56b866638ed8b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tOIQUQDMu2HG_aOL9Qwci4y79aVN7PhG-J-d8DPgDoU.jpg?auto=webp&s=b18441d7ae053377a763949665e5c82bc6795e58', 'width': 1200}, 'variants': {}}]} |
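For what it's worth, a minimal QLoRA sketch with transformers + peft + bitsandbytes is below; the base model is a placeholder, and a 3050 with 4GB VRAM is very likely too tight for a 7B even in 4-bit (the quantized weights alone are roughly 4GB), so a free Colab T4 or a smaller base like Phi-2 is probably the realistic route:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder base model
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train small low-rank adapters on top of the frozen 4-bit base.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then train with transformers.Trainer or trl's SFTTrainer on the
# OdiaGenAI instruction dataset formatted as prompt/response text.
```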
a vs code extension type thingy for generating code, reading repos, etc. | 1 | basically i was thinking is there any way to write a script for using a code model (mostly python and typescript) which can be used as an extension in vs code basically better than github copilot. | 2023-12-26T06:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/18r24cu/a_vs_code_extension_type_thingy_for_generating/ | Adventurous_Ruin_404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r24cu | false | null | t3_18r24cu | /r/LocalLLaMA/comments/18r24cu/a_vs_code_extension_type_thingy_for_generating/ | false | false | default | 1 | null |
Anything like llama factory for amd gpus? | 4 | Wondering how one finetunes on an amd gpus. I have both Linux and Windows. Can't seem to find any guides on how to finetune on an amd gpu. Looking finetune on mistral and hopefully the new phi model as well. | 2023-12-26T06:14:17 | https://www.reddit.com/r/LocalLLaMA/comments/18r1l07/anything_like_llama_factory_for_amd_gpus/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r1l07 | false | null | t3_18r1l07 | /r/LocalLLaMA/comments/18r1l07/anything_like_llama_factory_for_amd_gpus/ | false | false | self | 4 | null |
Crawling your Gmail - anyone done it? | 39 | I'm 'building' my own offline LLM server/personal assistant. 'Building' in quotes, as it's just an M1 16GB with OpenHermes and some Python to ingest data into a Chroma DB, which it can then query. I'm building it as a Flask app so I can query it via a WebUI.
But I digress. I’m considering having it crawl my 10+ years of Gmail history.
Curious if anyone else has tried this, and what the results looked like? For example, the ability to ask ‘who did I email on 01/02/2025, and can you summarise the email?’ and get a reasonable response. | 2023-12-26T06:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/18r1e34/crawling_you_gmail_anyone_done_it/ | Data_Driven_Guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r1e34 | false | null | t3_18r1e34 | /r/LocalLLaMA/comments/18r1e34/crawling_you_gmail_anyone_done_it/ | false | false | self | 39 | null |
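One hedged sketch of the ingestion half: pull messages over IMAP with the standard library and add them to Chroma with sender/date metadata, so "who did I email on date X" can be a metadata filter rather than pure semantic search. Server, credentials, and collection name below are placeholders:

```python
import imaplib
import email
import chromadb

mail = imaplib.IMAP4_SSL("imap.gmail.com")
mail.login("me@example.com", "app-password")  # placeholder credentials
mail.select("INBOX")
_, data = mail.search(None, "ALL")

client = chromadb.PersistentClient(path="./mail_db")
col = client.get_or_create_collection("emails")

for num in data[0].split()[:100]:  # first 100 messages while testing
    _, msg_data = mail.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    if msg.is_multipart():
        parts = [p.get_payload(decode=True) for p in msg.walk()
                 if p.get_content_type() == "text/plain"]
        body = b"".join(p for p in parts if p).decode(errors="ignore")
    else:
        body = (msg.get_payload(decode=True) or b"").decode(errors="ignore")
    # Store sender/recipient/date as metadata for exact-date questions.
    col.add(
        ids=[num.decode()],
        documents=[body[:4000]],
        metadatas=[{"from": str(msg["From"]), "to": str(msg["To"]),
                    "date": str(msg["Date"])}],
    )
```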
How do LLM UI front-ends technically implement undo/redo/retry feature? | 4 | Maybe I overlooked it, but I don't see in the OpenAI API (for example) any ability to manipulate the model's context in such a way. So how exactly are those features implemented on the front-end? | 2023-12-26T05:39:50 | https://www.reddit.com/r/LocalLLaMA/comments/18r0zwp/how_do_llm_ui_frontends_technically_implement/ | slider2k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r0zwp | false | null | t3_18r0zwp | /r/LocalLLaMA/comments/18r0zwp/how_do_llm_ui_frontends_technically_implement/ | false | false | self | 4 | null |
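For what it's worth, there is usually no server-side context to manipulate: chat front-ends keep the message history client-side and replay it on every call, so undo is just deleting entries from that list and retry is re-sending the history without the last assistant turn. A hedged sketch against an OpenAI-compatible endpoint (the local base URL is a placeholder):

```python
from openai import OpenAI

# Point base_url at any local OpenAI-compatible server (placeholder URL).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
history = [{"role": "user", "content": "Tell me a joke."}]

def complete(messages):
    r = client.chat.completions.create(model="local-model", messages=messages)
    return {"role": "assistant", "content": r.choices[0].message.content}

history.append(complete(history))   # normal turn
history.pop()                       # "undo": drop the assistant reply
history.append(complete(history))   # "retry": resend the same context
history = history[:-2]              # deeper undo: remove a whole exchange
```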
Link your best resources to learn about (technical aspects) of LocalLLM for a newbie please. | 1 | [removed] | 2023-12-26T05:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/18r0umn/link_your_best_resources_to_learn_about_technical/ | ah-chamon-ah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r0umn | false | null | t3_18r0umn | /r/LocalLLaMA/comments/18r0umn/link_your_best_resources_to_learn_about_technical/ | false | false | default | 1 | null |
YAYI2-30B, new Chinese base model pretrained on 2.65T tokens, reports 80.5 on MMLU and 53 on HumanEval (no idea how legit) | 94 | 2023-12-26T05:31:07 | https://huggingface.co/wenge-research/yayi2-30b | Ilforte | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18r0uk1 | false | null | t3_18r0uk1 | /r/LocalLLaMA/comments/18r0uk1/yayi230b_new_chinese_base_model_pretrained_on/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'WWIBwVVIdR2j3YxvmYZzcfNaoJLBZ8nvYPjAo4nnCzs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?width=108&crop=smart&auto=webp&s=112bf715872cff4976f83d543fc19dd2f8cc5670', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?width=216&crop=smart&auto=webp&s=0dd08b3008d6f261dc43c01df6ed7d4fcb997d7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?width=320&crop=smart&auto=webp&s=df706ce7014e3deb2e8b6cd9a6c2ad42bddbb55f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?width=640&crop=smart&auto=webp&s=14d2ae6c03d139bba929c0143819019470d4ac53', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?width=960&crop=smart&auto=webp&s=4225d8dbf25ded2f5fa42d79e77fb5692f901f7b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?width=1080&crop=smart&auto=webp&s=3399a4b9cfce40b912e930bd8fa42f56c5b43e96', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ATBPXuO685TtVXTo328Ayf-t8zy9nzu-AW5qBOWvfg4.jpg?auto=webp&s=828db7d92937b2f1dcabba5c90e1b21dda787150', 'width': 1200}, 'variants': {}}]} | ||
This model seems to beat mixtral, from my evaluations. | 1 | Guys, you need to try this model. It's beaten regular Mixtral by a fair amount for me; it's more attuned to something like mistral-medium in my opinion, having used both.
​
[https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B)
​
I don't know why it's so good, but possibly because:
​
It's taking the SauerkrautLM model, which is strong even by itself, and combining it into a MoE, and for some reason it's crazy good!
GGUF: [TheBloke/SauerkrautLM-Mixtral-8x7B-GGUF · Hugging Face](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GGUF)
The model itself is a few days old so I hope the "New Model" tag still applies! | 2023-12-26T04:53:35 | https://www.reddit.com/r/LocalLLaMA/comments/18r04c6/this_model_seems_to_beat_mixtral_from_my/ | Dazzling_Ad1507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r04c6 | false | null | t3_18r04c6 | /r/LocalLLaMA/comments/18r04c6/this_model_seems_to_beat_mixtral_from_my/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'gy9ktUrpbpNVvQA4ljM3jXmamqUt6a0Tu5tvIAKwkNU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?width=108&crop=smart&auto=webp&s=32b16ceb94546fe0c9a0287c65ea74d06471a0c1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?width=216&crop=smart&auto=webp&s=8a476b6f9a18df6f8aeca5a1ffe154f13dd26abc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?width=320&crop=smart&auto=webp&s=affcc39dafded6d9a3a3e7d1d744f31aebd3d8b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?width=640&crop=smart&auto=webp&s=80c33a476d5a43eefebbd6c1126866ce1e3fcd9b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?width=960&crop=smart&auto=webp&s=726eb445f8510363c920a89a56390960c989abce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?width=1080&crop=smart&auto=webp&s=b3f341fde0c8b3b9ad9c2ece3c29a8c630be5df4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KVP8frMwcKy7b9W6xWK9Xns8eFpLYB8l0kiRpJzH7ZU.jpg?auto=webp&s=25e0f3b4fc3c678205fd565cb0cec9935c5fa835', 'width': 1200}, 'variants': {}}]} |
What are the smallest Mixtral quants that aren't broken? | 1 | I know the smallest GGUF quants are Q2_K and Q3_K_M, but are K quants still broken?
I also tried a 4x7B model but that too was broken. | 2023-12-26T04:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/18r03s2/what_are_the_smallest_mixtral_quants_that_arent/ | Silvestron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18r03s2 | false | null | t3_18r03s2 | /r/LocalLLaMA/comments/18r03s2/what_are_the_smallest_mixtral_quants_that_arent/ | false | false | default | 1 | null |
muse - Let's create synthetic textbooks together :) | 66 | 2023-12-26T03:54:26 | https://github.com/thooton/muse | thooton | github.com | 1970-01-01T00:00:00 | 0 | {} | 18qz1l7 | false | null | t3_18qz1l7 | /r/LocalLLaMA/comments/18qz1l7/muse_lets_create_synthetic_textbooks_together/ | false | false | 66 | {'enabled': False, 'images': [{'id': '5HSg1ZQyhJH0W4Mbp_SrJ0TgHz4pLkKVdq-D1GhhPpc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?width=108&crop=smart&auto=webp&s=e52ac86da24f39b93d3892d65aab39750007ac8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?width=216&crop=smart&auto=webp&s=94ffde4285bf13224d9c64bfb2b90e8afd0557e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?width=320&crop=smart&auto=webp&s=6d101b5fa8a6bf2d743f9f00727b8b016b9667c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?width=640&crop=smart&auto=webp&s=9fb970efb8991c1f14b376e9372734ef2d291fee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?width=960&crop=smart&auto=webp&s=5b8d36133c414900522297dd8f1ea5adacfd4d36', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?width=1080&crop=smart&auto=webp&s=af2c149d11e61a6cc42db1f9db1a486e4bddedcf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZJ4QtnESH3W4eg8UquMxtDl5VuG0zv2u2ZbeLGoGuBI.jpg?auto=webp&s=c795914b9cd5e75d099ac86e73ba12a78806fe7d', 'width': 1200}, 'variants': {}}]} | ||
Best base model to fine tune for routing user input to correct LLM? (e.g. embeddings model vs. cross-encoder) | 3 | e.g. this one works well using Bert [https://huggingface.co/notdiamond/notdiamond-0001/tree/main](https://huggingface.co/notdiamond/notdiamond-0001/tree/main) to route to gpt3.5 vs. 4
Best option here?
​ | 2023-12-26T03:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/18qy9v1/best_base_model_to_fine_tune_for_routing_user/ | marionettetest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qy9v1 | false | null | t3_18qy9v1 | /r/LocalLLaMA/comments/18qy9v1/best_base_model_to_fine_tune_for_routing_user/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'agUVK3WJTbC7zrKHraLPTeozcVhzZM78HoJ5GxAWGh0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?width=108&crop=smart&auto=webp&s=579caf37142fa9116c2e64163cc0c7b8e327ac53', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?width=216&crop=smart&auto=webp&s=3d8701ef1c6340409110369db68aef216dce42c9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?width=320&crop=smart&auto=webp&s=3e5748589fdc94a5abc7ff3367271e8eebf4cc5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?width=640&crop=smart&auto=webp&s=25420e466775d8cf6c1372abf161b2bc21f0e6bd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?width=960&crop=smart&auto=webp&s=3180e5823e722f5a3900da950cb7c0bf92670138', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?width=1080&crop=smart&auto=webp&s=515168f3731863754d75661fc35cbb66f07c2c21', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qi7pd7KQBgpf4AFu87B5o3yr68IeiK5-10dy2fahjzk.jpg?auto=webp&s=c7f8e92ebb2c151e3f17be9f24591406934cf540', 'width': 1200}, 'variants': {}}]} |
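A hedged sketch of the usual recipe, which notdiamond-0001 appears to follow: treat routing as binary sequence classification over a small encoder and fine-tune it with the Trainer API. The base checkpoint and toy labels here are assumptions:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

base = "distilbert-base-uncased"  # assumed small encoder; DeBERTa also works
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Toy routing data: 0 -> cheap model is enough, 1 -> route to strong model.
ds = Dataset.from_dict({
    "text": ["What's 2+2?", "Prove the Cauchy-Schwarz inequality."],
    "label": [0, 1],
}).map(lambda x: tok(x["text"], truncation=True, padding="max_length",
                     max_length=128), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="router", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=ds,
).train()
```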
open source tools to fine tune your LLM models | 34 | Any open source tools to fine tune your LLM models (Similar to LM Studio for inference) | 2023-12-26T02:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/18qwx8v/open_source_tools_to_fine_tune_your_llm_models/ | seagoat1973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qwx8v | false | null | t3_18qwx8v | /r/LocalLLaMA/comments/18qwx8v/open_source_tools_to_fine_tune_your_llm_models/ | false | false | self | 34 | null |
NousResearch/Nous-Hermes-2-Yi-34B just dropped! "Surpassing all Open Hermes and Nous Hermes models of the past" | 105 | 2023-12-26T00:49:32 | https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B | phoneixAdi | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18qvjc3 | false | null | t3_18qvjc3 | /r/LocalLLaMA/comments/18qvjc3/nousresearchnoushermes2yi34b_just_dropped/ | false | false | 105 | {'enabled': False, 'images': [{'id': '73RXnKYxM5cS32AERwOnhopnKIdTef-w5IDLr-1y4BA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=108&crop=smart&auto=webp&s=1e765bb6aafbc46eb0154afd783f08264ab60e51', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=216&crop=smart&auto=webp&s=ff5c6742fed1f4dcc3cbfb7403e1c9c963d49210', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=320&crop=smart&auto=webp&s=90c018d9aa26922d75f89f09585d5009c21904c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=640&crop=smart&auto=webp&s=4c324089130b1941471cddc6c9e3a9d9e7453cbb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=960&crop=smart&auto=webp&s=8e35b10aa98d7d06847ef7d5f28edec5d028adf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=1080&crop=smart&auto=webp&s=343023ac8131bfcda0008fe0a6343b97f7f9d945', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?auto=webp&s=dfa25c82db368decb29e0a0f8adc7476a554d665', 'width': 1200}, 'variants': {}}]} | ||
Finally Updated Oobabooga (Questions) | 1 | [removed] | 2023-12-26T00:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/18qvgi8/finally_updated_oobabooga_questions/ | renegadellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qvgi8 | false | null | t3_18qvgi8 | /r/LocalLLaMA/comments/18qvgi8/finally_updated_oobabooga_questions/ | false | false | default | 1 | null |
Nous-Hermes-2-Yi-34B has been released! | 80 | From their announcement:
Announcing Nous Hermes 2 on Yi 34B for Christmas!
This is version 2 of Nous Research's line of Hermes models, and Nous Hermes 2 builds on the Open Hermes 2.5 dataset, surpassing all Open Hermes and Nous Hermes models of the past, trained over Yi 34B with others to come!
We achieve incredible benchmarks and surpass all of the previous Open Hermes and Nous Hermes in all scores. Example outputs and benchmarks below!
Download and learn more here on HuggingFace:
https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B | 2023-12-26T00:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/18qvfr3/noushermes2yi34b_has_been_released/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qvfr3 | false | null | t3_18qvfr3 | /r/LocalLLaMA/comments/18qvfr3/noushermes2yi34b_has_been_released/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': '73RXnKYxM5cS32AERwOnhopnKIdTef-w5IDLr-1y4BA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=108&crop=smart&auto=webp&s=1e765bb6aafbc46eb0154afd783f08264ab60e51', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=216&crop=smart&auto=webp&s=ff5c6742fed1f4dcc3cbfb7403e1c9c963d49210', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=320&crop=smart&auto=webp&s=90c018d9aa26922d75f89f09585d5009c21904c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=640&crop=smart&auto=webp&s=4c324089130b1941471cddc6c9e3a9d9e7453cbb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=960&crop=smart&auto=webp&s=8e35b10aa98d7d06847ef7d5f28edec5d028adf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?width=1080&crop=smart&auto=webp&s=343023ac8131bfcda0008fe0a6343b97f7f9d945', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IviZ-k9cFBIZhAKkyO1jE-VFmDfxuwULgSkzNm6wKjg.jpg?auto=webp&s=dfa25c82db368decb29e0a0f8adc7476a554d665', 'width': 1200}, 'variants': {}}]} |
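For anyone trying it: the Hermes 2 model card specifies ChatML as the prompt format, so prompts should look roughly like the sketch below (the system text is just an example):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a haiku about local LLMs.<|im_end|>
<|im_start|>assistant
```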
Best way to fine tune a model for translation | 6 | Hello,
I have a list of high-quality, distinct sentence pairs between English and another language, with over a million records. I am a complete beginner at this. Can you please help me identify the best way to fine-tune a model for translation? I am sure these questions will help others understand this process better too.
1. Which model is the best smallest base model for this? Mistral-7B? Mixtral?
2. Do I need to change anything, like the tokenizer, to cover some non-English letters?
3. How can I use my sentence pairs for fine-tuning? Do I need to convert them to JSON? Do I need to write instructions? Do I need to use an instruction fine-tuned model?
4. What is the best place to run this fine-tuning job? Would it be possible to use Google Colab?
5. When I fine-tune with my records, would it be possible to use the model for the tasks other than translation in the target language?
6. Any recommended readings or projects about this topic?
Thanks again for your help. | 2023-12-26T00:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/18quz93/best_way_to_fine_tune_a_model_for_translation/ | Low_Faithlessness937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18quz93 | false | null | t3_18quz93 | /r/LocalLLaMA/comments/18quz93/best_way_to_fine_tune_a_model_for_translation/ | false | false | self | 6 | null |
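On question 3: the common pattern is to turn each pair into one instruction-style JSONL record. A minimal sketch, assuming the pairs sit in a tab-separated file; the field names are arbitrary and just need to match what your trainer expects:

```python
import json

# Convert tab-separated sentence pairs into instruction-tuning JSONL.
with open("pairs.tsv", encoding="utf-8") as src, \
     open("train.jsonl", "w", encoding="utf-8") as dst:
    for line in src:
        en, target = line.rstrip("\n").split("\t")
        record = {
            "instruction": "Translate the following English sentence.",
            "input": en,
            "output": target,
        }
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```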
Dataset Creation Query | 1 | I have a query for which I could not find a post here (please forgive me if I missed one).
If I have a dataset creation strategy for specific structured data using LLaMA, can I:
1. Publish my method using LLaMA1 or LLaMA2?
2. Release the dataset I have created for research purposes?
What are the things that I should keep in mind here?
Also if I should use other open-source LLMs to showcase my method please suggest. Thanks a lot! | 2023-12-25T23:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/18qu9at/dataset_creation_query/ | Ill-Web1192 | self.LocalLLaMA | 2023-12-25T23:58:01 | 0 | {} | 18qu9at | false | null | t3_18qu9at | /r/LocalLLaMA/comments/18qu9at/dataset_creation_query/ | false | false | self | 1 | null |
Thoughts after building a text-adventure game using local models | 70 | Here are my thoughts so far after trying to build an LLM-powered text-adventure game that behaves sensibly. The idea here is that I wanted to create an app that's playable without the need to edit the outputs by hand, with the ultimate test of "can my son age 10 have a chance in hell to play it".
The good parts:
* I've been playing with Llama models regularly since they came out and they are a leap forward compared to the medieval days of BERT and GPT2.
* we can do a lot with just prompting - with no fine-tuning required. The instruct API to LLM models is a very natural way to work with text, and kinda does what you ask.
* I reckon a fun choose-your-adventure game is possible, as long as it's just ABC choices, and we let the player delete or regenerate particularly bad entries. This keeps the UI complexity down.
* No matter your hardware architecture, you have a good chance of running accelerated inference. It might be painful to set up, but most of the time it's possible.
The specific issues I've been repeatedly running into that so far frustrated my efforts:
* a human with a cattle prod always seems to be required to guide the model. The longer the game goes, the more the mistakes pile up, causing the model to become unstable and fall off the rails. The ability to rewrite the text by hand is a must, which I don't mind, but it makes the UI more convoluted and makes the app unsuitable for a general audience. This is the number one thing that prevents my son from being able to play it.
* *shiver down my spine*, GPTisms are really strong, and result in a frustrating experience that's only fit for tinkerers. The models are moralistic, and I can't stop them from writing flowery prose with tons of awkward adjectives. The models are stuck operating in a small fragment of the language latent space (the "protestant priest mode" 😛). Perhaps we need more foundational models?
* each model I have tried struggles with one aspect of the game. For example a model might do really well describing consequences of actions, but fail miserably when rephrasing the location text. From the RP models that produce interesting outputs, there is no single model I can find that can run all the prompts as I want them. Maybe I need a few tiny models, or one large model with a LoRA per use-case?
* model installation is still a wilderness. Thank g\*\* for GGUF for democratizing access, but there is no single good runtime that would cover all use-cases. Backends have edge cases, CUDA is bulky and ugly, and Metal doesn't work half of the time 😏 . Still, this is a gigantic improvement compared to 6 months ago...
Some more context: I don't claim to be an expert model prompter, so incremental improvements are definitely possible here. Also I'm limited to models that infer on 32GB VRAM (but so would a hypothetical player). Also, fine-tuning on text-adventure games and MUDs could be an interesting avenue to check which I haven't tried yet.
In summary, it feels like there needs to be an order of magnitude change before apps where LLM is a centerpiece (and not just a helper) are possible. Which hopefully isn't too far away, looking at the pace of the advances in hardware and the science 🥳
Can someone please build an LLM-augmented MUD already? 😂
​ | 2023-12-25T23:30:40 | https://www.reddit.com/r/LocalLLaMA/comments/18qtysq/thoughts_after_building_a_textadventure_game/ | antimateusz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qtysq | false | null | t3_18qtysq | /r/LocalLLaMA/comments/18qtysq/thoughts_after_building_a_textadventure_game/ | false | false | self | 70 | null |
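On the cattle-prod problem with ABC choices: one pattern that helps is validating every generation against the expected shape and silently regenerating on failure, so the player never sees a malformed turn. A hedged sketch where `generate` stands in for whatever backend is being called:

```python
import re

# Expect exactly three choices formatted "A) ...", "B) ...", "C) ...".
CHOICE_RE = re.compile(r"^A\) .+\nB\) .+\nC\) .+$", re.MULTILINE)

def next_turn(generate, prompt, max_tries=5):
    """Keep sampling until the model produces exactly three A/B/C choices."""
    for _ in range(max_tries):
        text = generate(prompt)
        if CHOICE_RE.search(text):
            return text
        # Malformed output: regenerate instead of showing it to the player.
    raise RuntimeError("model kept producing malformed choices")
```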
Any model work with a 3060 TI ? What are best models for my GPU ? | 4 | any help ? | 2023-12-25T23:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/18qtvqr/any_model_work_with_a_3060_ti_what_are_best/ | More_Bid_2197 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qtvqr | false | null | t3_18qtvqr | /r/LocalLLaMA/comments/18qtvqr/any_model_work_with_a_3060_ti_what_are_best/ | false | false | self | 4 | null |
why do some models reply instantly, but the response is empty/blank? | 1 | According to oobabooga, the model loaded fine. Models where this happens for me: NousResearch_Nous-Hermes-Llama2-13b using SillyTavern as a front-end just gives me an instant but blank response. Same for mythalion-13b.Q5_K_M.gguf, which worked fine the other day.
Anyone know why this happens? Some other models work fine | 2023-12-25T23:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/18qtsy8/why_do_some_models_reply_instantly_but_the/ | Interesting-Light-13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qtsy8 | false | null | t3_18qtsy8 | /r/LocalLLaMA/comments/18qtsy8/why_do_some_models_reply_instantly_but_the/ | false | false | self | 1 | null |
Models and settings for local document chat? | 1 | Hi, I'm using an M2 Mac Air with 16 gb RAM for local document chat in GPT4all. I'd appreciate your suggestions re models and settings for this use. Thanks. | 2023-12-25T22:58:22 | https://www.reddit.com/r/LocalLLaMA/comments/18qtb2z/models_and_settings_for_local_document_chat/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qtb2z | false | null | t3_18qtb2z | /r/LocalLLaMA/comments/18qtb2z/models_and_settings_for_local_document_chat/ | false | false | default | 1 | null |
How to integrate function calls (NexusRaven model)? | 3 | Toying with [nexusraven](https://huggingface.co/Nexusflow/NexusRaven-V2-13B) which is designed for local function calling.
And that seems to do what it says on the box. Basically this as output:
Call: get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))<bot_end>
Thought: The function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by following these steps:
I'm a bit fuzzy on the next step though: that function call looks Python-like, but it's a string. How would I make it actually trigger Python code?
Some sort of regex layer that calls the function within the Python code? And then feed the function's result back to the LLM by appending it?
Or [exec() and eval()](https://stackoverflow.com/questions/701802/how-do-i-execute-a-string-containing-python-code-in-python)?
Or a [subprocess and actually execute it?](https://stackoverflow.com/questions/14465183/python-code-in-simulate-mode)
Or [SimPy](https://realpython.com/simpy-simulating-with-python/)?
Can someone articulate the normal programatic flow please? Guessing someone here has already been down this road and can point me in the right direction
Thanks | 2023-12-25T22:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/18qsovx/how_to_integrate_function_calls_nexusraven_model/ | AnomalyNexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qsovx | false | null | t3_18qsovx | /r/LocalLLaMA/comments/18qsovx/how_to_integrate_function_calls_nexusraven_model/ | false | false | self | 3 | null |
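Since raw `eval()` on model output is risky, one option is to parse the string with the standard-library `ast` module and dispatch only calls whose names appear in an explicit whitelist, then append the result to the next prompt. A sketch with stub tool functions standing in for the real APIs:

```python
import ast

# Stub tool implementations; replace with your real API-backed functions.
def get_coordinates_from_city(city_name):
    return (47.61, -122.33)  # hypothetical geocoding lookup

def get_weather_data(coordinates):
    return {"temp_c": 8, "conditions": "rain"}  # hypothetical weather API

ALLOWED = {f.__name__: f for f in (get_coordinates_from_city, get_weather_data)}

def run_call(node):
    """Evaluate an ast.Call, allowing only whitelisted function names."""
    if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
            and node.func.id in ALLOWED):
        raise ValueError(f"disallowed call: {ast.dump(node)}")
    args = [run_call(a) if isinstance(a, ast.Call) else ast.literal_eval(a)
            for a in node.args]
    kwargs = {kw.arg: run_call(kw.value) if isinstance(kw.value, ast.Call)
              else ast.literal_eval(kw.value) for kw in node.keywords}
    return ALLOWED[node.func.id](*args, **kwargs)

raw = "get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))"
result = run_call(ast.parse(raw, mode="eval").body)
print(result)  # append this as an observation in the next LLM prompt
```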
Best way to keep up to date on LLM news | 35 | Hi everyone!
Currently using LLMs for work as well as personal projects, and I wanted to ask what's the best way to keep up to date with new models/research/products that are LLM and/or DNN related.
This subreddit has been great for that so far and wanted to explore more sources of information
Happy Holidays | 2023-12-25T21:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/18qrelw/best_way_to_keep_up_to_date_on_llm_news/ | YOLOLJJ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qrelw | false | null | t3_18qrelw | /r/LocalLLaMA/comments/18qrelw/best_way_to_keep_up_to_date_on_llm_news/ | false | false | self | 35 | null |
Feasibility and meaningfulness of a project on QLoRA quantization. | 7 | I need to build a project as an assignment. I am thinking of fine-tuning an LLM using QLoRA and then benchmarking it against models using GGUF and GPTQ.
Is it even a meaningful task and how feasible is it on the usual Google colab T4 environment?
Any and all kind of advice welcomed.
Thanks. | 2023-12-25T21:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/18qrb0k/feasibility_and_meaningfulness_of_a_project_on/ | Ringo_10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qrb0k | false | null | t3_18qrb0k | /r/LocalLLaMA/comments/18qrb0k/feasibility_and_meaningfulness_of_a_project_on/ | false | false | self | 7 | null |
Is Inflection's "PI" Just a More Woke Version of ChatGPT? | 1 | Big fan of Reid Hoffman. I heard he was involved in Inflection, which is the creator of the Chatbot "Pi".
I downloaded it, and I feel like I'm being scolded every time I ask it something.
What's the thing that I'm missing about Pi that makes it good? It just seems like a version of ChatGPT that reminds me every 2 seconds that I have a bias.
What am I doing wrong? Why is this thing worth using? | 2023-12-25T20:51:44 | https://www.reddit.com/r/LocalLLaMA/comments/18qqu1s/is_inflections_pi_just_a_more_woke_version_of/ | AttorneyJackKelly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qqu1s | false | null | t3_18qqu1s | /r/LocalLLaMA/comments/18qqu1s/is_inflections_pi_just_a_more_woke_version_of/ | false | false | self | 1 | null |
Is Anthropic's "Pi" Just a More Woke Version of ChatGPT? | 1 | Big fan of Reid Hoffman. I heard he was involved in Anthropic, which is the creator of the Chatbot "Pi".
I downloaded it, and I feel like I'm being scolded every time I ask it something.
What's the thing that I'm missing about Pi that makes it good? It just seems like a version of ChatGPT that reminds me every 2 seconds that I have implicit bias. It's like South Park's PC Principal in chatbot form.
What am I doing wrong? Why is this thing worth using? | 2023-12-25T20:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/18qqmx8/is_anthropics_pi_just_a_more_woke_version_of/ | AttorneyJackKelly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qqmx8 | false | null | t3_18qqmx8 | /r/LocalLLaMA/comments/18qqmx8/is_anthropics_pi_just_a_more_woke_version_of/ | false | false | default | 1 | null |
This is getting ridiculous! Can we please ban frankenmerge VC scams? | 361 | Every day now scammers are posting zero effort cheating benchmark contaminated merges with literally no new abilities and insane claims. Latest one was an hour ago before it nuked itself.
Often just merging two existing models like some noob and introducing some extra cheating data - then creating a big old website with buzzword bingo and a myriad of fake claims like:
Industry leader, new core technology, fastest among peers, GPT-4 level in 7B, etc.
Clearly scamming investors for VC money like
https://www.xdan.ai did today before nuking their thread earlier, or like
https://deci.ai did a week ago that are actually still selling a fake product no better than actual foundations with a bizarrely built out bullshit platform leeching on other peoples work or open source projects available to anyone.
The other category is hyped-up hobbyists and devs trying to brand themselves as experts while creating nothing of value, like [multiple coding models from last week] that, after dozens of "wow this is amazing" comments, turn out to be no better than existing models but with more contamination or cheating data, to no surprise of people with actual knowledge of how LLMs actually work.
No, your frankenmerge with no new datasets is not better at coding than foundation models created by dozens of PhDs, or specialist datasets like DeepSeek, or inference methods where you write a comment in the code and change its name. And they never want to disclose what they've actually done; it's always "we'll write details later", hiding the fact that they are just cheating.
It's like if a dude made a React todo app in a weekend and then posted that he made "New React 3000" and said it was better than React - a project by hundreds of contributors and years of effort by software engineers then sold it for 100$ each month while React remained free and built by the community.
It's really simple, if not a chill experiment hobby post, or humble discussion around a little project then unless it's a model from a talented player like Mistral/Microsoft/Meta, or with actual new datasets in languages or text-genres or sources, or broadscopes like OpenOrca, DeepSeek or changes to decoding like Ultrafastbert, Speculative, Jacobi, Lookahead, it's a scammer trying to make LocalLlama into the worst of the crypto days. | 2023-12-25T19:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/18qp3fh/this_is_getting_ridiculous_can_we_please_ban/ | Ill_Buy_476 | self.LocalLLaMA | 2023-12-25T20:43:42 | 0 | {} | 18qp3fh | false | null | t3_18qp3fh | /r/LocalLLaMA/comments/18qp3fh/this_is_getting_ridiculous_can_we_please_ban/ | false | false | self | 361 | null |
ChatGPT vs (jan-hq/) Mistral-7B-Instruct-v0.2-SLERP | 1 | [removed] | 2023-12-25T19:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/18qota2/chatgpt_vs_janhq_mistral7binstructv02slerp/ | love4titties | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qota2 | false | null | t3_18qota2 | /r/LocalLLaMA/comments/18qota2/chatgpt_vs_janhq_mistral7binstructv02slerp/ | false | false | default | 1 | null |
Which oobabooga preset is best for roleplaying? | 20 | I usually use simple1 or llama precise | 2023-12-25T18:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/18qohbs/which_oobabooga_preset_is_best_for_roleplaying/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qohbs | false | null | t3_18qohbs | /r/LocalLLaMA/comments/18qohbs/which_oobabooga_preset_is_best_for_roleplaying/ | false | false | self | 20 | null |
Finetune to avoid using tool descriptions in prompt template | 2 | Hello everyone,
Still getting to grips with LangChain, LLMs, Hugging Face, fine-tuning, datasets, etc. We have been successfully using LangChain with various local LLMs from HF and conversational agents that use tools, and it seems to work fairly well. The problem we are running into is that the prompt template is getting ridiculous in size and eating up a lot of the available tokens. We have about 10 tools with their corresponding descriptions. The tools mainly use external APIs to extract information.
There are two main issues we are running into:
1- We want the LLM to understand that in order to fulfill some requests using a certain tool, it needs to extract information from another tool first. For example, one tool gets traffic data for a website on a server. Another one gets a list of domains on a server, and another gets a list of accounts that have domains that have sites (accounts->domains->sites).
So when a user's query is something like "please give me the traffic data for [mysite.com](https://mysite.com)", LangChain jumps in and starts using the traffic-getting tool. But we need to pass along a domain and account so the receiving API knows where it needs to look.
We are having trouble getting the LLM to understand that in order to use a tool, it needs to get information beforehand. We've tried including in the descriptions which tools need to be used first before completing the tool usage, with no luck. We also tried explaining in natural language that "If you need to use the traffic tool, you need to ask the user for domain and account first and pass that to the input".
Our dataset looks like this:
\#####
Customer: I need to check my site's [xxxx123.com](https://xxxx.com) traffic data
Assistant: Sure, I see the domain is [xxxx123.com](https://xxxxx123.com) but which account is the domain located in?
Customer: It's in the mystuff account.
Assistant: Got it, here is the traffic information for [xxxx123.com](https://xxxx123.com) : bla , bla ,bla. Can I help you with anything else?
\#####
So now we are creating a dataset with multi-turn conversations where the assistant asks the user for the missing information, and we are getting ready to train a Llama 2 7B. This task is still confusing for us, but we're trying.
But we're wondering if there is an easier way to make this work. How can I set up LangChain so the LLM knows what the hierarchy structure is, so that if you're asking about a domain, it needs to know which account the domain is hosted in, or if it's about a site, which domain and account the site is in? A solution would be to create a tool that digs into each account and domain to find it, but that's an awful amount of unnecessary energy.
Any hints or tips would be immensely appreciated.
​ | 2023-12-25T18:44:38 | https://www.reddit.com/r/LocalLLaMA/comments/18qodse/finetune_to_avoid_using_tool_descriptions_in/ | Background-Maybe-381 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qodse | false | null | t3_18qodse | /r/LocalLLaMA/comments/18qodse/finetune_to_avoid_using_tool_descriptions_in/ | false | false | self | 2 | null |
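As a framework-free alternative to fine-tuning, the accounts->domains->sites hierarchy can be encoded as required parameters on each tool, with the tool itself returning an "ask the user for X" message when something is missing; the agent then relays that question naturally. A rough sketch with illustrative names:

```python
# Hypothetical sketch: wrap each tool with its required parameters so the
# agent loop can ask for missing info instead of calling the API blindly.
TOOLS = {
    "get_traffic": {
        "required": ["account", "domain", "site"],
        "description": "Traffic data for a site. Needs account -> domain -> site.",
        "fn": lambda account, domain, site: f"traffic report for {site}",
    },
}

def call_tool(name, **provided):
    spec = TOOLS[name]
    missing = [p for p in spec["required"] if p not in provided]
    if missing:
        # Return a message for the LLM instead of raising, so the agent
        # naturally asks the user for the missing fields.
        return f"Cannot run {name} yet. Ask the user for: {', '.join(missing)}"
    return spec["fn"](**provided)

print(call_tool("get_traffic", domain="mysite.com"))
# -> "Cannot run get_traffic yet. Ask the user for: account, site"
```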
Function calling finetuning? | 1 | [removed] | 2023-12-25T18:28:58 | https://www.reddit.com/r/LocalLLaMA/comments/18qo2vb/function_calling_finetuning/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qo2vb | false | null | t3_18qo2vb | /r/LocalLLaMA/comments/18qo2vb/function_calling_finetuning/ | false | false | default | 1 | null |
A Duel of AI Minds: ChatGPT vs. Mistral-7B-Instruct-v0.2-SLERP - An Experiment | 1 | [removed] | 2023-12-25T18:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/18qnv4i/a_duel_of_ai_minds_chatgpt_vs/ | love4titties | self.LocalLLaMA | 2023-12-25T18:21:47 | 0 | {} | 18qnv4i | false | null | t3_18qnv4i | /r/LocalLLaMA/comments/18qnv4i/a_duel_of_ai_minds_chatgpt_vs/ | false | false | default | 1 | null |
What parameters determine how effectively a model understands long prompts? | 4 | I used llama 70b and I know that with long queries, about 700-800 tokens, the model does not take into account what is written very well. Using the 120B model I can see progress, but it's still bad.
​
Is it true that how carefully a model reads my request and processes it to produce an answer depends on its parameter count? I used the old version of GPT-4 in Bing, and for some reason it got confused by long texts of about 4-5 thousand words. At the same time, Claude 2 did a great job with the 20 thousand words I used as a prompt. I am more than sure that Claude 2 has fewer parameters, since funding for that model is less than OpenAI's, but Claude answers questions about a 20-thousand-word text, finds specific quotes and moments, and handles other long-context tests confidently. | 2023-12-25T18:02:12 | https://www.reddit.com/r/LocalLLaMA/comments/18qnkom/what_parameters_determine_how_effectively_a_model/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qnkom | false | null | t3_18qnkom | /r/LocalLLaMA/comments/18qnkom/what_parameters_determine_how_effectively_a_model/ | false | false | default | 4 | null |
Presenting ErotiLLAMA! Enjoy! | 1 | [removed] | 2023-12-25T17:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/18qmpjo/presenting_erotillama_enjoy/ | Ok_Rooster_7446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qmpjo | false | null | t3_18qmpjo | /r/LocalLLaMA/comments/18qmpjo/presenting_erotillama_enjoy/ | false | false | default | 1 | null |
Presenting ErotiLLAMA! Enjoy. | 1 | [removed] | 2023-12-25T17:14:10 | https://www.reddit.com/r/LocalLLaMA/comments/18qmnlz/presenting_erotillama_enjoy/ | jaskier20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qmnlz | false | null | t3_18qmnlz | /r/LocalLLaMA/comments/18qmnlz/presenting_erotillama_enjoy/ | false | false | default | 1 | null |
This model is very impressive | 26 | To test the ability of someone who doesn't know much programming to do anything that comes to their mind using LLMs, I did these projects, and I think the results were impressive.
I used the phind v9 model, a model built on top of the open-source CodeLlama-34B fine-tune. Their VS Code extension is beautiful.
https://github.com/Felpin007/Visualizacao-3D-Castelo
https://github.com/Felpin007/Manipulador-de-divs
https://github.com/Felpin007/Sistema-estelar
https://github.com/Felpin007/psicodelico
https://github.com/Felpin007/arvore-interativa
https://github.com/Felpin007/2playTetris | 2023-12-25T17:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/18qmkof/this_model_is_very_impressive/ | Delicious-Trash6988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qmkof | false | null | t3_18qmkof | /r/LocalLLaMA/comments/18qmkof/this_model_is_very_impressive/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'L0wCCvN_L5v5vZaVaqSQt9Zx9vtpqxETwWFnQKEmtG8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?width=108&crop=smart&auto=webp&s=f2ce7d8ff5ff4611c9da399088fd28ca757da229', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?width=216&crop=smart&auto=webp&s=0cb96ecc1c78e861ce03adc2b6aad0c83460a237', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?width=320&crop=smart&auto=webp&s=20e25c5f1d3f761a485e9ac4e2a649b1e9c1f2b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?width=640&crop=smart&auto=webp&s=d0df8000477c82c58576798a80943214043138dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?width=960&crop=smart&auto=webp&s=0818895607277edd19198c0f4caaad72548f3553', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?width=1080&crop=smart&auto=webp&s=7403f3056e8c6b1adbeb23587b63386c60a1ed91', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LTXsq_euAnhqOCoXcoq5ZzDKzvYmVpXuiAAOfODEJFk.jpg?auto=webp&s=449792c67ac46504c27d52f50ce12fefde6831ed', 'width': 1200}, 'variants': {}}]} |
[Question] Chat interface for self hosted Mistral 7B | 3 | Hi,
Has anyone succeeded in interacting with Mistral 7B v0.2 through a self-hosted chat-ui?
I can't get it to work; my best result was a 404 error when prompting (Could not parse last message {"detail":"Not Found"}). My .env.local is:
MONGODB_URL=mongodb://<MYURL:PORT>
HF_TOKEN=hf_<your token here>
MODELS=`[
{
"name": "mistralai/Mistral-7B-Instruct-v0.2",
"displayName": "mistralai/Mistral-7B-Instruct-v0.2",
"description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.",
"websiteUrl": "https://mistral.ai/news/announcing-mistral-7b/",
"preprompt": "",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 2048,
"stop": ["</s>"]
},
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"endpoints": [{
"url": <MYURL:PORT>/v1
}]
}
]`
​ | 2023-12-25T17:01:11 | https://www.reddit.com/r/LocalLLaMA/comments/18qmen0/question_chat_interface_for_self_hosted_mistral_7b/ | TriviPr | self.LocalLLaMA | 2023-12-25T17:06:09 | 0 | {} | 18qmen0 | false | null | t3_18qmen0 | /r/LocalLLaMA/comments/18qmen0/question_chat_interface_for_self_hosted_mistral_7b/ | false | false | self | 3 | null |
Trying to run Dolphin Mixtral on 16GB RAM + RTX 2060 (6GB VMRAM) | 8 | I'm trying to run `TheBloke/dolphin-2.5-mixtral-8x7b-GGUF` on my laptop, which is an HP Omen 15 2020 (Ryzen 7 4800H, 16GB DDR4, RTX 2060 with 6GB VRAM). I'm just getting into the local LLM space. I started with LM Studio, mainly because of its super simple UI for beginners. I know it's closed source and all; I'll try moving to other things in the future.
But the problem is I've been getting really slow inference with **Q5_K_M**. Should I try a more heavily quantized model? I've offloaded as many layers as possible to the GPU (4-5 without using shared GPU memory, which is not recommended as it apparently slows down inference). I've even set CPU threads to 10 (I have 16).
Inference is extremely slow: time to first token is 34 seconds, at 0.2 t/s. Any settings you would recommend changing/trying? I want a usable model locally. Is it even possible with my config?
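For reference, the settings I have been experimenting with map roughly onto this (a sketch with llama-cpp-python; the model path is a placeholder, and n_gpu_layers/n_threads are the knobs to tune):

```python
# Rough equivalent of the LM Studio settings, sketched with llama-cpp-python.
# Model path is a placeholder; n_gpu_layers/n_threads are the values to tune.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf",  # placeholder local path
    n_gpu_layers=5,   # as many layers as fit in 6GB VRAM without spilling
    n_threads=10,     # thread count to experiment with (8 physical cores here)
    n_ctx=2048,
)
out = llm("Q: What is 2+2? A:", max_tokens=16)
print(out["choices"][0]["text"])
```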
How many params is Dolphin Mixtral even? 8x7B naively suggests 56B, which I guess I can't really run on my config, but in LM Studio it's labelled 7B. A quick Google search suggests it's closer to 46B (the experts share the attention layers, so it's less than 8x7), which is still too huge for my laptop, I guess.
How to use llava-v1.5-13b-Q5_K_M.gguf with python | 3 | So I have this LLaVA GGUF model and I want to run it locally with Python. I managed to use it with LM Studio, but now I need to run it in isolation from a Python file.
So is there code where I can load the model and use it like a normal Python program?
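The closest thing I have found so far is llama-cpp-python's LLaVA support (a sketch, assuming the matching mmproj CLIP file sits next to the main GGUF; both paths are placeholders):

```python
# Sketch using llama-cpp-python's LLaVA support. Assumption: the matching
# mmproj CLIP file was downloaded alongside the main GGUF; paths are placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llava-v1.5-13b-Q5_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,
    logits_all=True,  # the handler needs full logits for image tokens
)
out = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
        {"type": "text", "text": "What is shown in this picture?"},
    ]},
])
print(out["choices"][0]["message"]["content"])
```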
Currently I can use LLaMA 7B without vision like that with the help of LangChain | 2023-12-25T16:06:10 | https://www.reddit.com/r/LocalLLaMA/comments/18qleak/how_to_use_llavav1513bq5_k_mgguf_with_python/ | Hazz_666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qleak | false | null | t3_18qleak | /r/LocalLLaMA/comments/18qleak/how_to_use_llavav1513bq5_k_mgguf_with_python/ | false | false | self | 3 | null |
Merry Christmas! The First OpenSource Model[xDAN-L1-Chat: 8.8 ] touch near by GPT4 on MT-Bench! | 1 | **The first 7B model to achieve an 8.8 on MT-Bench**, **with strong performance in Humanities, Coding, and Writing.**
https://preview.redd.it/n6e26ihxqg8c1.png?width=1512&format=png&auto=webp&s=cf1fe8492be718d3575f3a51e0c7fa347bb915c3
* Created By xDAN-AI at 2023-12-15
* Eval by FastChat: [https://github.com/lm-sys/FastChat.git](https://github.com/lm-sys/FastChat.git)
* Check: [https://www.xdan.ai](https://www.xdan.ai/)
* Model URL: [xDAN-AI/xDAN-L1-Chat-RL-v1 · Hugging Face](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1)
**Prompt Template (Alpaca)**
Instruction: "You are a helpful assistant named DAN. You are an expert in worldly knowledge, skilled in employing a probing questioning strategy, carefully considering each step before providing answers."
{Question}
Response:
https://preview.redd.it/ouuhrsimqg8c1.png?width=1024&format=png&auto=webp&s=f0c7af16c6d4e2d7b4091318078dc9086d89d105
​ | 2023-12-25T15:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/18ql8dx/merry_christmas_the_first_opensource/ | xDAN_Aduo | self.LocalLLaMA | 2023-12-26T03:35:41 | 0 | {} | 18ql8dx | false | null | t3_18ql8dx | /r/LocalLLaMA/comments/18ql8dx/merry_christmas_the_first_opensource/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'anqz3yA7OfprWmQnUne4qQgxSJsIZB4xb9AFGcP2Xlo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?width=108&crop=smart&auto=webp&s=df06ed1d924e59d3a186d88c1282483462edc2ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?width=216&crop=smart&auto=webp&s=039c4da1101e8f9d0450c9b0a38e4e147937edb5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?width=320&crop=smart&auto=webp&s=2733ba70410aeba3555899ab531c8ebd53a9b886', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?width=640&crop=smart&auto=webp&s=6a6cce991edba9c01d5f06704c27d542b2d74f34', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?width=960&crop=smart&auto=webp&s=b184508e76fe2b8482e23de3e0cbf18ff7d25988', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?width=1080&crop=smart&auto=webp&s=8cd672e656fafd254fcc989a8f355f601cc857b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gQgps29r_Ta9NQxzR9MhvaIDMfTSlco4wdfVPzdC-CU.jpg?auto=webp&s=318e2d4ee38c5d24d04ef320f04c5d19f0681dc0', 'width': 1200}, 'variants': {}}]} |
How to fine-tune Llama 70B fp16 on 8x A100 80GB? | 9 | I'm using DeepSpeed ZeRO stage 3 and Llama 70B in FP16 but still get CUDA out-of-memory errors. Any ideas? I'm on a RunPod virtual machine. Thanks!
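One thing I have not ruled out yet is gradient checkpointing on the model itself, not just the activation checkpointing section of the config below (a hedged sketch, assuming a Hugging Face Trainer-style launch; the rest of the setup is elided):

```python
# Hedged sketch: enable gradient checkpointing on the model itself.
# Trainer/DeepSpeed launch details are elided; model name as in the post.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    torch_dtype=torch.float16,
)
model.gradient_checkpointing_enable()  # large activation-memory saver at 70B
model.config.use_cache = False         # KV cache conflicts with checkpointing
```

The other lever I am aware of is adding an `offload_param` section (mirroring `offload_optimizer`) to `zero_optimization`, so parameters, and not just optimizer state, can spill to CPU.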
This is my config file:
{
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true,
      "ratio": 1
    },
    "overlap_comm": false,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": 5e8,
    "reduce_scatter": true,
    "stage3_prefetch_bucket_size": 5e8,
    "stage3_param_persistence_threshold": 1e6,
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": 1,
  "gradient_clipping": "auto",
  "steps_per_print": 1,
  "train_batch_size": 8,
  "train_micro_batch_size_per_gpu": 1,
  "wall_clock_breakdown": false,
  "activation_checkpointing": {
    "partition_activations": true,
    "cpu_checkpointing": true,
    "contiguous_memory_optimization": true,
    "number_checkpoints": null,
    "synchronize_checkpoint_boundary": false,
    "profile": true
  }
} | 2023-12-25T15:56:09 | https://www.reddit.com/r/LocalLLaMA/comments/18ql7je/how_to_finetune_llama_70b_fp16_on_8x_a100_80gb/ | kupo1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ql7je | false | null | t3_18ql7je | /r/LocalLLaMA/comments/18ql7je/how_to_finetune_llama_70b_fp16_on_8x_a100_80gb/ | false | false | self | 9 | null |
Nvidia Tesla P4 vs P40 | 11 | **TLDR: trying to determine if six P4 vs two P40 is better for 2U form factor**
To date I have various Dell Poweredge R720 and R730 with mostly dual GPU configurations. It’s been the best density per buck I’ve found since many 4U configurations that can handle 3, 4 and 8 dual slot GPUs are much more expensive.
The Poweredge R7x0 series has 7 PCI slots by default. The PCIE slots in risers support 75w and the EPS cable (max 2) can supply an additional 225w. I’ve tried dual P40 with dual P4 in the half width slots. Had mixed results on many LLMs due to how they load onto VRAM.
Just realized I never quite considered six Tesla P4.
**Pros:**
* No power cable necessary (saves additional cost and unlocks up to 5 more slots)
* 8gb x 6 = 48gb
* Cost: As low as $70 for P4 vs $150-$180 for P40
* Just stumbled upon unlocking the clock speed via a prior comment on this sub (The_Real_Jakartax). The command below unlocks the core clock of the P4 to 1531 MHz:
nvidia-smi -ac 3003,1531
**Cons:**
* Most slots on server are x8. I typically upgrade the slot 3 to x16 capable, but reduces total slots by 1.
* Fewer CUDA cores per GPU
* Lower memory bandwidth per GPU
***Has anyone tried this configuration with Oobabooga or Ollama? I know stable diffusion isn’t multi GPU friendly.***
***And yes, I understand Dual: 3090, 4090, L40 or 80GB: A100, H100 blows away the above and is more relevant this day and age. Trying to convert $500 of e-waste parts into LLM gold... or silver :)*** | 2023-12-25T15:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/18qkz70/nvidia_tesla_p4_vs_p40/ | MachineZer0 | self.LocalLLaMA | 2023-12-25T19:52:48 | 0 | {} | 18qkz70 | false | null | t3_18qkz70 | /r/LocalLLaMA/comments/18qkz70/nvidia_tesla_p4_vs_p40/ | false | false | self | 11 | null |
Needadvice. AI Generated Yu-Gi-Oh! Decks | 1 | [removed] | 2023-12-25T14:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/18qk250/needadvice_ai_generated_yugioh_decks/ | Independent-Bill-770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qk250 | false | null | t3_18qk250 | /r/LocalLLaMA/comments/18qk250/needadvice_ai_generated_yugioh_decks/ | false | false | default | 1 | null |
i am looking for a very specific functioning model | 1 | It needs advanced NLP capabilities and context retention, as well as image-handling capabilities, and also the ability to search the internet. By that I mean it needs to be able to fully search the internet: no text-only versions of websites; it needs to be able to "see" the images too. | 2023-12-25T14:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/18qjzgd/i_am_looking_for_a_very_specific_functioning_model/ | awesomegame1254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qjzgd | false | null | t3_18qjzgd | /r/LocalLLaMA/comments/18qjzgd/i_am_looking_for_a_very_specific_functioning_model/ | false | false | default | 1 | null |
Best models out there for improving article paraphrasing? | 19 |
Hey everyone,
My current company has about 800 articles from various authors, ranging between 1000 and 5000 words. According to the company, most authors rely on ChatGPT to help them paraphrase their articles before publishing. I want to provide them with an alternative model that outputs paraphrased content following the company guidelines.
Initially, I considered fine-tuning AI models for each author. But with so many authors, it's not cost-effective. So, I'm thinking of fine-tuning one AI model with all our articles.
I've heard good things about Mistral 7B, Mixtral 8x7B, and the ChatGPT Turbo models. Any suggestions on these or other models that might suit my use case?
Also, if you have any alternative approaches or strategies for better paraphrasing across these articles, I'd love to hear your thoughts!
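For context, the baseline I am leaning toward first is a single shared model steered by the guidelines in the system prompt (a sketch; the endpoint, model name, and guideline text are placeholders, not a real deployment):

```python
# Baseline sketch: one shared model, guidelines injected via the system prompt.
# Endpoint, model name, and guideline text are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
GUIDELINES = "Keep the author's meaning and structure; follow company style rules."

def paraphrase(article: str) -> str:
    resp = client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.2",
        messages=[
            {"role": "system", "content": f"Paraphrase the user's article. {GUIDELINES}"},
            {"role": "user", "content": article},
        ],
        temperature=0.7,
    )
    return resp.choices[0].message.content
```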
Thanks in advance! | 2023-12-25T12:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/18qi4u0/best_models_out_there_for_improving_article/ | 07_Neo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qi4u0 | false | null | t3_18qi4u0 | /r/LocalLLaMA/comments/18qi4u0/best_models_out_there_for_improving_article/ | false | false | self | 19 | null |
KoboldCPP 1.53.a Frankenstein_Christmas | 1 | [removed] | 2023-12-25T12:24:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18qhpo4 | false | null | t3_18qhpo4 | /r/LocalLLaMA/comments/18qhpo4/koboldcpp_153a_frankenstein_christmas/ | false | false | default | 1 | null | ||
KoboldCPP 1.53.a Frankenstein Christmas | 1 | [removed] | 2023-12-25T12:22:21 | https://www.reddit.com/r/LocalLLaMA/comments/18qhogv/koboldcpp_153a_frankenstein_christmas/ | Nexesenex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qhogv | false | null | t3_18qhogv | /r/LocalLLaMA/comments/18qhogv/koboldcpp_153a_frankenstein_christmas/ | false | false | self | 1 | null |
Use llm as a development team | 2 | Do you have ideas for using LLMs to develop a whole app?
I imagine following:
- team leader llm tells developers what to implement
- developers llm write code
- code is compiled and run by testers llm
- developers llm fix the code
- it is continued as long as needed
The problem is how an LLM should work on a big app. Should each developer LLM be responsible for one specific function so that the required context window stays small enough?
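Roughly, the loop I have in mind looks like this (just a sketch; `ask()` is a placeholder for whatever local model call you use):

```python
# Sketch of the team loop; ask() is a placeholder for any local LLM call.
def ask(role_prompt: str, task: str) -> str:
    raise NotImplementedError("wire this to your local model's API")

def build_feature(spec: str, max_rounds: int = 5) -> str:
    plan = ask("You are a team lead. Reduce the spec to one function.", spec)
    code = ask("You are a developer. Implement exactly this plan.", plan)
    for _ in range(max_rounds):
        report = ask("You are a tester. Review the code and list failures, "
                     "or reply PASS.", plan + "\n" + code)
        if "PASS" in report:
            break
        code = ask("You are a developer. Fix the code given this report.",
                   plan + "\n" + code + "\n" + report)
    return code
```

Each call only ever sees the plan, the code, and the latest report for one function, which keeps the required context window small.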
I was thinking that maybe some kind of main developer llm should define main loop and then distribute work to other developers by defining API and requirements. | 2023-12-25T12:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/18qhobo/use_llm_as_a_development_team/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qhobo | false | null | t3_18qhobo | /r/LocalLLaMA/comments/18qhobo/use_llm_as_a_development_team/ | false | false | self | 2 | null |
KoboldCPP Frankenstein Christmas Release 1.53.a | 1 | [removed] | 2023-12-25T12:11:03 | https://www.reddit.com/r/LocalLLaMA/comments/18qhivy/koboldcpp_frankenstein_christmas_release_153a/ | Nexesenex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qhivy | false | null | t3_18qhivy | /r/LocalLLaMA/comments/18qhivy/koboldcpp_frankenstein_christmas_release_153a/ | false | false | self | 1 | null |
FAQ Retrieval design | 3 | I need some help with retrieval solution design.
The problem is that my company has ~20 Slack channels for various support topics. For example: you need to create an email, go to #email; your VPN doesn't work, go to #vpn; etc. People rarely know where to go with specific issues, and it only gets worse as the company scales.
I want to utilize embeddings and possibly RAG to create Q&A bot that gives an answer to a question or forwards person to a specific channel.
Here's how I see the solution: "Create FAQ documentation and calculate embeddings for each question" -> "When user asks question, perform similarity search and respond with original answer". I want to avoid RAG in the first iteration to avoid complexity.
However, questions arise:
- If a user asks a question that is not included in the FAQ docs, how do I make the bot consistently answer "I don't know"? (See the threshold sketch after this list.)
- Can I improve accuracy by including meta-information in the FAQ docs?
- What are the alternatives to this design that could work better?
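Here is the kind of threshold logic I mean (a sketch with sentence-transformers; the model name, threshold value, and FAQ entries are placeholders to tune on real data):

```python
# Sketch: similarity search with a refusal threshold.
# Model name and threshold are placeholders to tune on real data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
faq = [
    ("How do I reset my VPN?", "Go to #vpn and run the reset tool."),
    ("How do I create an email?", "Ask in #email with manager approval."),
]
faq_emb = model.encode([q for q, _ in faq], normalize_embeddings=True)

def answer(question: str, threshold: float = 0.6) -> str:
    q_emb = model.encode(question, normalize_embeddings=True)
    scores = util.cos_sim(q_emb, faq_emb)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return "I don't know; try asking in #it-helpdesk."
    return faq[best][1]
```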
And generally - how would you approach this problem? | 2023-12-25T11:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/18qgm0h/faq_retrieval_design/ | Similar-Ingenuity-36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qgm0h | false | null | t3_18qgm0h | /r/LocalLLaMA/comments/18qgm0h/faq_retrieval_design/ | false | false | self | 3 | null |
Generative AI is just compression, but is it though ? | 1 | [removed] | 2023-12-25T10:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/18qgi2m/generative_ai_is_just_compression_but_is_it_though/ | djangoUnblamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qgi2m | false | null | t3_18qgi2m | /r/LocalLLaMA/comments/18qgi2m/generative_ai_is_just_compression_but_is_it_though/ | false | false | self | 1 | null |
Navigating Unique Components in Microsoft's phi-1_5 for LoRA Implementation | 3 | Hello everyone,
I'm currently exploring the "microsoft/phi-1_5" model for a project that involves implementing Low-Rank Adaptation (LoRA). I was wondering if anyone here has experience with this specific model and can shed light on any unique architectural features that may pose a challenge when integrating LoRA.
Specifically, are there components within phi-1_5 that are not typically present in other transformer models, which could complicate the application of LoRA? Any insights or shared experiences with similar tasks would be greatly appreciated!
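For reference, this is the generic recipe I would start from (a hedged sketch; the `target_modules` names are an assumption on my part and should be verified against `model.named_modules()` for phi-1_5 specifically):

```python
# Hedged sketch: generic PEFT LoRA setup. target_modules is a GUESS at
# phi-1_5's attention projections; verify via model.named_modules().
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5",
                                             trust_remote_code=True)
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv", "out_proj"],  # assumed names; confirm first
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```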
Looking forward to your thoughts and discussions. | 2023-12-25T10:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/18qfz1k/navigating_unique_components_in_microsofts_phi1_5/ | yachty66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qfz1k | false | null | t3_18qfz1k | /r/LocalLLaMA/comments/18qfz1k/navigating_unique_components_in_microsofts_phi1_5/ | false | false | self | 3 | null |
Can LLaMA-2-7B be fine-tuned using raw Pytorch and no trainers on single GPU? | 1 | Hey all! So I'm new to generative AI and was interested in fine-tuning LLaMA-2-7B (sharded version) for text generation on my colab T4. For this I have a 500 x 3 HF dataset. I know I can train it using the SFTTrainer or the Seq2SeqTrainer and QLORA on colab T4, but I am more interested in writing the raw Pytorch training and evaluation loops. However, I'm finding it hard to get started because most guides and articles use Trainers and not really a lot of Pytorch (and I'm also new to Pytorch). Plus I'm not sure if my single Colab T4 GPU is powerful enough for this task. Does anyone have any advice or pointers on how to approach this? | 2023-12-25T09:47:52 | https://www.reddit.com/r/LocalLLaMA/comments/18qfo3w/can_llama27b_be_finetuned_using_raw_pytorch_and/ | EmpressControl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qfo3w | false | null | t3_18qfo3w | /r/LocalLLaMA/comments/18qfo3w/can_llama27b_be_finetuned_using_raw_pytorch_and/ | false | false | self | 1 | null |
Mac users with Apple Silicon and 8GB ram - use GPT4all | 229 | There are a lot of posts asking for recommendations on running a local LLM on a lower-end computer.
Most Windows PCs come with 16GB of RAM these days, but Apple is still selling Macs with 8GB. I have done some tests and benchmarks; the best option for M1/M2/M3 Macs is GPT4All.
An M1 MacBook Pro with 8GB RAM from 2020 is **2 to 3 times faster** than my Alienware 12700H (14 cores) with 32 GB DDR5 RAM. Please note that currently GPT4All is not using the GPU, so this is based on CPU performance.
This low-end MacBook Pro can easily get over 12 t/s. I think the reason for this crazy performance is the memory bandwidth of Apple Silicon.
GPT4All is an easy one-click install, but you can also sideload other models that aren't included. I use "[dolphin-2.2.1-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF/blob/main/dolphin-2.2.1-mistral-7b.Q4_K_M.gguf)", which you can download and then sideload into GPT4All.
The best feature of GPT4All is the Retrieval-Augmented Generation (RAG) plugin called 'BERT' that you can install from within the app. **It allows you to feed the LLM your notes, books, articles, documents, etc., and start querying it for information.** Some people call it 'chat with your docs'. Personally, I think this is the single most important feature that makes a local LLM useful. You don't need an API that sends your documents to some third party; you get total privacy, with the information processed on your Mac. Many people want to fine-tune or train their own LLM with their own dataset, without realising that what they really want is RAG, and it's so much easier and quicker than training. (It takes less than a minute to digest a book.)
This is what you can do with the RAG in GPT4all:
* Ask the AI to read a novel and summarize it for you, or give you a brief synopsis for every chapters.
* Ask the AI to read a novel and role-play as a character in the novel.
* Ask the AI to read a reference book and use it as an expert-system. For example, I feed it with a reference book about gemstones and minerals, now I can start querying it about the similarity or different properties between certain obscure stones and crystals.
* Ask the AI to read a walkthrough for a video game, then ask it for help when you are stuck.
* If the AI is on an office server, you can add new company announcements to a folder read by the RAG - and the information will be available to all employees when they query the AI about it.
* Ask the AI to read all your notes in a folder. For example, a scientist with several years of research notes can now easily query the AI and find related notes.
These are just some examples. The advantages of this technology are incredible, and most people are not even aware of it. I think Microsoft/Apple should build this feature into their OSes; it's already doable on low-end consumer computers.
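If you want the same model outside the app, there is also a Python binding (a minimal sketch; it assumes the sideloaded GGUF already sits in GPT4All's model folder, and note the RAG plugin is an app feature, not part of this snippet):

```python
# Minimal sketch of the GPT4All Python binding with a sideloaded model.
# Assumes the GGUF is already in GPT4All's model folder.
from gpt4all import GPT4All

model = GPT4All("dolphin-2.2.1-mistral-7b.Q4_K_M.gguf")
with model.chat_session():
    print(model.generate("Explain retrieval-augmented generation "
                         "in two sentences.", max_tokens=200))
```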
​ | 2023-12-25T06:56:32 | https://www.reddit.com/r/LocalLLaMA/comments/18qdgnm/mac_users_with_apple_silicon_and_8gb_ram_use/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qdgnm | false | null | t3_18qdgnm | /r/LocalLLaMA/comments/18qdgnm/mac_users_with_apple_silicon_and_8gb_ram_use/ | false | false | self | 229 | {'enabled': False, 'images': [{'id': 'CnpiWPuZSoDRuQdIeUumjHH9mNEqTqhGgJlryi09whk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?width=108&crop=smart&auto=webp&s=70f1fc0ef48a661d0858148053bbb971bd685112', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?width=216&crop=smart&auto=webp&s=c9ea64cefa623317e20c1981ac06d47e44fad5cc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?width=320&crop=smart&auto=webp&s=52ae1e0dab98e511808fdd6a9d69cba4e05db154', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?width=640&crop=smart&auto=webp&s=ea8569ee0c924ed1463ec8d2a12f8e8f54492279', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?width=960&crop=smart&auto=webp&s=dedf9211588939f994c31a6271d7db57cce4673f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?width=1080&crop=smart&auto=webp&s=98e607e6ebb515705af6238ce8405ed85679000f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/APbXzkp3a7Jrn4H4dveeSTewEz44sWzSlFURaM-sM0I.jpg?auto=webp&s=cb638f5409abccb17603f7b068208a4829bf15dc', 'width': 1200}, 'variants': {}}]} |
A non guessing next word (token) authoritative AI ? | 1 | Since it is clear that LLMs and their next-token guessing are unstable, prone to hallucination, and all in all unreliable, what if the definition of AI were narrower, with intelligence being the plain knowing of facts (like "armies are gathering") rather than intellect (reason)? Would it be possible to have a network learn facts and just echo them verbatim?
This way we could have a non-hallucinating AI serving plain facts (none of the next-token guessing), and intellect (reasoning) could be built on top of it (or the hallucinating part, for that matter).
Would it be possible at all to program an algorithm that does reasoning on top of such a fact-serving AI?
It would deliver us the non-hallucinating *authoritative AI*!
All Hugging Face Models are Obfuscated | 1 | I am a newbie who has been trying to crack and understand how models work for a couple of months, but I have still failed to climb the first step without knowing the path!
I first started using models when ChatGPT 3 (Microsoft) was first released, then tried Llama from Facebook for code infilling. I am currently using the Dolphin 2.2 Mistral (a French company) model on Ollama, just to ask the models how they were trained and how they function.
As you can see, to date I could not find a model that is "open as in open source" at all! None of the models I used showed schematics or instructions on how to fully open/see the algorithm, and there are no forums or repositories for any models with exposed internals (as far as I checked; kindly let me know if any exist!). All the models have training sets. All models have algorithms. Each model could respond to me at a different level of intellect, based on how it was trained on large, different subsets. **I always ask them how they were made**, but all models refuse to say and reply the same way: that some parts of the model are proprietary and cannot be shown per company policy or the company directors' policy.
We can use the models to train on new input, but it is nearly impossible to remove the main training set and understand the kernel of the fruit (or is there another way to understand it that I have missed?). All models have the same issue, censored or uncensored: both give the same response. If the companies behind these models purposely obfuscate their work and show only their large, fancy training subsets (without showing the workings of the underlying neural network), then as a person I feel it is wrong, and such models should be avoided at all cost.
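For what it is worth, the closest I have come to "seeing" a model is printing its configuration and module graph (a minimal sketch, with Mistral as an example); this exposes the architecture and hyperparameters, though not the training data:

```python
# Sketch: inspect what IS published for an open-weights model:
# the layer graph and hyperparameters, though not the training data.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(config)  # hidden size, layer count, attention heads, ...
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
print(model)   # full module tree of the transformer
```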
Kindly direct me to a proper way to pursue genuinely open models, rather than this dogmatic cat-and-mouse game over the *"Open Source"* label. (If the source is open to the general public but the internals are hidden, then the software is **maybe free to use only** and **not fully open source** under any open source license; that is ***closed source***, as far as I understand!) | 2023-12-25T06:03:55 | https://www.reddit.com/r/LocalLLaMA/comments/18qcqdm/all_hugging_face_models_are_obfuscated/ | ram19890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qcqdm | false | null | t3_18qcqdm | /r/LocalLLaMA/comments/18qcqdm/all_hugging_face_models_are_obfuscated/ | false | false | self | 1 | null |
TACO: Topics in Algorithmic COde generation dataset | 9 | ​
https://preview.redd.it/fwkrsnz3ed8c1.png?width=720&format=png&auto=webp&s=3461b2a094ca367b59274d4c2812ac9d0a07cebe
Through experiments, there is a significant difference between the currently popular code generation models and GPT-4 on the TACO evaluation, indicating that there is still huge room for improvement in this field. The TACO dataset provides a challenging testing method and can serve as training data for studying and improving model performance. For more details:
github:[https://github.com/FlagOpen/TACO](https://github.com/FlagOpen/TACO)
HF:[https://huggingface.co/datasets/BAAI/TACO](https://huggingface.co/datasets/BAAI/TACO)
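A quick way to poke at the dataset (a sketch, assuming the standard `datasets` loader works for this repo; the split name is an assumption):

```python
# Sketch: load TACO from the Hub. Split/field names are assumptions;
# check the dataset card if this errors.
from datasets import load_dataset

ds = load_dataset("BAAI/TACO", split="train")
print(ds)
print(ds[0].keys())
```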
​ | 2023-12-25T04:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/18qbil0/taco_topics_in_algorithmic_code_generation_dataset/ | Grouchy-Mail-2091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qbil0 | false | null | t3_18qbil0 | /r/LocalLLaMA/comments/18qbil0/taco_topics_in_algorithmic_code_generation_dataset/ | false | false | 9 | {'enabled': False, 'images': [{'id': '7WzqpDjKZGQnFklYufy6e0oVX47MDao-tmmYmc_jBlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?width=108&crop=smart&auto=webp&s=5d02f2f5c91406b25aae4bd5f9eecc76c5a97aad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?width=216&crop=smart&auto=webp&s=fbf60b702925affbe3921d661f4fe0483c2eab2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?width=320&crop=smart&auto=webp&s=728da66f66c5ecf69707777d4600649d41aab5e5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?width=640&crop=smart&auto=webp&s=24438156aa16ee7ee9b3ff30535a612616d63a9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?width=960&crop=smart&auto=webp&s=c412cfe6f13ae6839e4ee20e27f3f390adba6439', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?width=1080&crop=smart&auto=webp&s=d2508307822b4b093f9b9f9443a028d29579cf33', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CEBGnrcMAedfrI4re1u1JksYv2h4P1bqnhQtCTmdvKM.jpg?auto=webp&s=e151f3d0cb9dff6be2a10327595a51bd04808d2f', 'width': 1200}, 'variants': {}}]} | |
Jan: open source, local-first alternative to ChatGPT | 2 | [removed] | 2023-12-25T04:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/18qb24l/jan_open_source_localfirst_alternative_to_chatgpt/ | dan-jan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qb24l | false | null | t3_18qb24l | /r/LocalLLaMA/comments/18qb24l/jan_open_source_localfirst_alternative_to_chatgpt/ | false | false | 2 | {'enabled': False, 'images': [{'id': '4wR9dhF30w6FsrwCZAtLk_4S3t4TtwhzdZFjNxw6SJM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?width=108&crop=smart&auto=webp&s=671a91fe9749c55d659c0697ee2f024515ecab5b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?width=216&crop=smart&auto=webp&s=1501536693360ce59115c392c380f1a5d2b4e463', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?width=320&crop=smart&auto=webp&s=c360856dccb5a39abb9e71ff599499dac8517c86', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?width=640&crop=smart&auto=webp&s=a6ef4af104d11c219e60eac252b05803f6fd88f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?width=960&crop=smart&auto=webp&s=d115fc6cd5535d2c03fb09c4c59c8568d280e081', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?width=1080&crop=smart&auto=webp&s=ef14b923e19b2fb59a4894f698cfde06aa2d18f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CgVXrO8SPRF9uMzzpPN-B3tm46ZSDXkq1l_UeIfSHN4.jpg?auto=webp&s=40a3a393f563f49e91e01dff822bf6eac0f5ec7c', 'width': 1200}, 'variants': {}}]} | |
What model sounds most like a human? | 20 | I'm developing an AI chatbot designed to convey a personal touch yet retain a professional tone similar to ChatGPT. Are there any models known for their unique voice that seem less robotic?
Any help would be appreciated! | 2023-12-25T04:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/18qb20u/what_model_sounds_most_like_a_human/ | Aarush_Sah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qb20u | false | null | t3_18qb20u | /r/LocalLLaMA/comments/18qb20u/what_model_sounds_most_like_a_human/ | false | false | self | 20 | null |
How to find proper context in open book question answering in a tie situation? | 3 | This post gives a detailed description: [https://stats.stackexchange.com/q/635603/245577](https://stats.stackexchange.com/q/635603/245577) | 2023-12-25T04:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/18qb1cx/how_to_find_proper_context_in_open_book_question/ | hafizcse031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18qb1cx | false | null | t3_18qb1cx | /r/LocalLLaMA/comments/18qb1cx/how_to_find_proper_context_in_open_book_question/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '63C1GbYQbI4tHZMkw99e-qlyoYaGnG58yfnmnFUJ34s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Na1KPWKNVl6scgS9nIHq6TItzmZ7AAjzi2AE8ocRNSc.jpg?width=108&crop=smart&auto=webp&s=1572e205fbdf3329597b19f6465d66590bb4af06', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Na1KPWKNVl6scgS9nIHq6TItzmZ7AAjzi2AE8ocRNSc.jpg?width=216&crop=smart&auto=webp&s=f2c012f839ee5078c1c827c2f50ae4a3bd0f7f76', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/Na1KPWKNVl6scgS9nIHq6TItzmZ7AAjzi2AE8ocRNSc.jpg?auto=webp&s=df05bb2f4b5fc26062986818cecd1d09ddac156a', 'width': 316}, 'variants': {}}]} |
Based on your experience what is the smallest and optimal local model for RAG? | 1 | I’m trying to set up RAG using langchain for company’s knowledge database and struggling to find optimal model / server solution, so any personal experience would be appreciated! | 2023-12-25T02:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/18q9xva/based_on_your_experience_what_is_the_smallest_and/ | Ok_Maize_3709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q9xva | false | null | t3_18q9xva | /r/LocalLLaMA/comments/18q9xva/based_on_your_experience_what_is_the_smallest_and/ | false | false | self | 1 | null |
Strange result from this prompt | 1 | Here's the prompt:
`You are a financial advisor, you give analysis and advice base only on document provided`
​
`Document:`
`Last month sales is 150 million`
`This month sales is 125 million`
​
`Last month cost is 10 million`
`This month cost is 50 million`
​
`Last month receivables is 300 million`
`This month receivables is 310 million`
​
`Last month payables is 100 million`
`This month payables is 290 million`
​
`Question:`
`analyze the document given`
ChatGPT result:
Certainly, let's analyze the provided financial data:
1. Sales Performance:
* Last month's sales were $150 million, and this month's sales decreased to $125 million. This indicates a decline in revenue.
2. Cost Analysis:
* Last month's cost was $10 million, and this month's cost increased significantly to $50 million. The substantial increase in costs may have contributed to the decrease in profitability.
3. Receivables:
* Last month's receivables were $300 million, and this month's receivables increased slightly to $310 million. While a slight increase is generally positive, it's essential to monitor the collection of these receivables to ensure timely cash inflow.
4. Payables:
* Last month's payables were $100 million, and this month's payables decreased to $290 million. The decrease in payables may impact short-term liquidity, as the company has less time to settle its obligations.
...
But look at payables: it said they decreased, when they actually increased (from 100 to 290 million).
Here's Mistral 7B:
Based on the information provided in the document, here's an initial analysis: Sales have decreased by $25 million from last month to this month. This represents a significant decline of approximately 16.7%. The reason for this decrease is not clear without additional context. It could be due to market conditions, competition, or internal issues within the company.
Costs have increased dramatically from last month to this month, with an increase of $40 million or 300%. This is a major concern as costs should ideally be controlled and predictable. The reasons for this large increase are also unclear but could include higher raw material prices, labor costs, or other expenses.
Receivables have increased slightly from last month to this month, with an increase of $10 million or 3.3%. This may indicate that collections have been improving or that new sales have been made which have not yet been collected.
Payables have decreased significantly from last month to this month, with a reduction of $10 million or 10%. This could be a positive sign if it indicates that the company has been able to negotiate better payment terms with its suppliers. However, it could also be a negative sign if it indicates that the company is having cash flow problems and is delaying payments to suppliers in order to conserve liquidity.
...
It also said decreased.
Perplexity 7B, Llama 7B, Claude, and Mixtral are correct.
Why does that happen? | 2023-12-25T02:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/18q9uen/strange_result_from_this_prompt/ | x4080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q9uen | false | null | t3_18q9uen | /r/LocalLLaMA/comments/18q9uen/strange_result_from_this_prompt/ | false | false | self | 1 | null |
Is there an equivalent of ChatGPT "Plugins" for local LLMs Web UIs? Like Code Interpreter, Plot Generator (using matplotlib), etc. I know Langchain and others claim to use "Tools", but those are not as capable as ChatGPT's Plugins. | 10 | (title) | 2023-12-25T01:54:09 | https://www.reddit.com/r/LocalLLaMA/comments/18q8z2r/is_there_an_equivalent_of_chatgpt_plugins_for/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q8z2r | false | null | t3_18q8z2r | /r/LocalLLaMA/comments/18q8z2r/is_there_an_equivalent_of_chatgpt_plugins_for/ | false | false | self | 10 | null |
How to run saily 220b gguf ? | 4 | I have enough RAM to load it, but ooba and kobold both crash. I read that I would have to increase LLAMA_MAX_NODES, but I don't know how to do it. | 2023-12-25T01:41:31 | https://www.reddit.com/r/LocalLLaMA/comments/18q8s1c/how_to_run_saily_220b_gguf/ | ervertes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q8s1c | false | null | t3_18q8s1c | /r/LocalLLaMA/comments/18q8s1c/how_to_run_saily_220b_gguf/ | false | false | self | 4 | null |
Need help finding model used in Chatbot Arena on Huggingface | 1 | I was recently using the Chatbot Arena to rate models on Huggingface and ran across a model that I wanted to find out more about but try as I might I could find nothing on it anywhere. The model was listed as “deluxe-chat-v1.2”. Did a search on Huggingface and a web search and nothing. Does anyone know what this model is and where I can learn more about it? | 2023-12-25T00:40:50 | https://www.reddit.com/r/LocalLLaMA/comments/18q7sft/need_help_finding_model_used_in_chatbot_arena_on/ | Heliogabulus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q7sft | false | null | t3_18q7sft | /r/LocalLLaMA/comments/18q7sft/need_help_finding_model_used_in_chatbot_arena_on/ | false | false | self | 1 | null |
Can 2xA6000 run a 70B model in its entirety? | 1 | [removed] | 2023-12-25T00:18:46 | https://www.reddit.com/r/LocalLLaMA/comments/18q7eh6/can_2xa6000_run_a_70b_model_in_its_entirety/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q7eh6 | false | null | t3_18q7eh6 | /r/LocalLLaMA/comments/18q7eh6/can_2xa6000_run_a_70b_model_in_its_entirety/ | false | false | self | 1 | null |
Best idea between running on a server or getting another 3090 | 7 | Basically I’m looking at one of these options:
Buy a server with a lot (96+?) of ddr4 ram and run it with like a 2667 v4
Add another 3090 to my current setup (11900k + 3090 + 32GB DDR4)
Add more ram to my current setup (changing ram from 32 to 64)
I am not sure how fast a CPU quant would be on the first option, and I’m not sure how fast it would be on the third option with offloading to RAM.
Also, if anyone has resources for how to split models between system ram and VRAM that would also be appreciated. Looking to run mixtral q8 or fp16 as well as other foundational models that come out. | 2023-12-24T23:59:34 | https://www.reddit.com/r/LocalLLaMA/comments/18q7248/best_idea_between_running_on_a_server_or_getting/ | Mephidia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q7248 | false | null | t3_18q7248 | /r/LocalLLaMA/comments/18q7248/best_idea_between_running_on_a_server_or_getting/ | false | false | self | 7 | null |
LLM stops responding after longer session | 1 | Hello everyone,
I am using Llama 2 13B GGUF Q4_K_S in CPU mode on my 16 GB, AMD R5 5600 system. Responses usually take around 40 seconds to generate; I am just fooling around, so this is fine for me.
I have noticed that the LLM stops responding after a longer session, say about an hour. Textgen web ui shows “AI is typing” but won’t generate any response. If I close and restart textgen and try to resume, I still see this. I also tried koboldcpp and still had this problem.
Is this because of my limited RAM? Task manager shows I still have about 2 GB RAM available when it stops working, so I am not sure.
I am extremely new to this, so it’s likely that I might have missed something obvious. Thank you for any help! | 2023-12-24T23:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/18q6vji/llm_stops_responding_after_longer_session/ | SnooComics9125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q6vji | false | null | t3_18q6vji | /r/LocalLLaMA/comments/18q6vji/llm_stops_responding_after_longer_session/ | false | false | self | 1 | null |
Are GPTQ and AWQ quants now obsolete? | 61 | EXL2 is twice as fast as GPTQ, and AWQ is slower than GPTQ in my experience. I think the only thing EXL2 models can't do is run on CPU, while AWQ and GPTQ models can, but people seem to prefer GGUF for that. | 2023-12-24T23:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/18q5zjt/are_gptq_and_awq_quants_now_obsolete/ | Silvestron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q5zjt | false | null | t3_18q5zjt | /r/LocalLLaMA/comments/18q5zjt/are_gptq_and_awq_quants_now_obsolete/ | false | false | self | 61 | null |
Is there any research on using embedding as tokens to dramatically increase transformers context limits? | 8 | Here is what I mean by this. Because of quadratic scaling, transformers are very limited in context size; for example, Llama 2 was originally trained for only 4096 tokens. But we can use this model to produce an embedding of any text that fits into the context size. Here is the question I had: why not train a second transformer model where each token is just an embedding from the first model? Imagine we have a very big chunk of text: transform it with the Llama 2 tokenizer into tokens, split it into 4096-token chunks, get an embedding of each chunk with Llama 2, then train the second model to predict the next token from the embeddings of the chunks, treating these embeddings as tokens for the new model. This should give us a model with context length 4096x4096. The only question is, how well will it perform? I'm pretty sure someone has already explored something like that; does anybody know any particular research, or just how to google it? | 2023-12-24T22:53:53 | https://www.reddit.com/r/LocalLLaMA/comments/18q5v5o/is_there_any_research_on_using_embedding_as/ | Another__one | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q5v5o | false | null | t3_18q5v5o | /r/LocalLLaMA/comments/18q5v5o/is_there_any_research_on_using_embedding_as/ | false | false | self | 8 | null |
What Quantization level do you use? | 8 | Is there a general heuristic for choosing?
Could be completely model dependent, the largest that fits in memory, rigorous testing of accuracy vs performance, your lucky number..
[View Poll](https://www.reddit.com/poll/18q5uej) | 2023-12-24T22:52:46 | https://www.reddit.com/r/LocalLLaMA/comments/18q5uej/what_quantization_level_do_you_use/ | SillyFlyGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q5uej | false | null | t3_18q5uej | /r/LocalLLaMA/comments/18q5uej/what_quantization_level_do_you_use/ | false | false | self | 8 | null |
android+maid+TheBloke_phy2=crash | 1 | I tried to run TheBloke-phi-2-GGUF in the Maid app on my Pixel, and I got crashes. Has anyone had this or similar issues? | 2023-12-24T22:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/18q5nsl/androidmaidthebloke_phy2crash/ | r3dsc4n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q5nsl | false | null | t3_18q5nsl | /r/LocalLLaMA/comments/18q5nsl/androidmaidthebloke_phy2crash/ | false | false | self | 1 | null |
Multi GPU hardware selection | 2 | I have been getting more into running LLMs and I've been considering spending some money on building a system that can handle larger models at speeds faster than human reading speeds. I came across a few options. The first one is probably the most popular, 2x RTX 3090 Ti's with nvlink. I've seen a few other people talk about using nvidia P40s as a budget solution. Other than that, besides very expensive gigantic GPUs, or a mac, or throwing a bunch of cheap consumer cards in a box, I can't find much info.
My understanding is that fitting a model into VRAM is the absolute most important thing, because bandwidth is what actually determines the speed of inference: GPUs themselves can churn through data much faster than their memory bus can feed them. This makes perfect sense for single GPUs, as there's nothing else for a graphics card to do besides fetch VRAM, compute, etc. I don't understand what goes on in multi-GPU configurations, though. How important is the inter-GPU bus bandwidth? I think it must be relatively important, since people report NVLink speeding up inference in 2x3090 setups (from up to 30 GB/s over PCIe to about 90 GB/s over NVLink, afaik) and Nvidia DGX servers connect their GPUs with 1,000+ GB/s inter-GPU busses. Therefore, it sounds like cards with high memory bandwidth, such as the AMD Instinct line or the 7900 XTX, will be considerably slow at inference in multi-GPU configurations if they are locked to a maximum of 30 GB/s on a PCIe gen 4 x16 link (likely even slower).
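As a back-of-the-envelope check of the bandwidth argument (a sketch; the model size is illustrative), memory-bound decoding speed is roughly bandwidth divided by bytes read per token:

```python
# Back-of-envelope: memory-bound decode speed ~= bandwidth / bytes per token.
# Model size is illustrative; 936 GB/s is the RTX 3090's rated bandwidth.
model_gb = 24.0        # e.g. a ~46B model at roughly 4-bit quantization
bandwidth_gbs = 936.0  # RTX 3090 VRAM bandwidth
print(f"~{bandwidth_gbs / model_gb:.0f} tokens/s upper bound")  # ~39 t/s
```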
Has anyone tested how the inter-gpu bus speed affects inference speed? Maybe someone has benchmarks of inference speeds running 2x3090s with nvlink, and then without nvlink at pcie 4, 3, 2, and 1 speeds or at <16x links? Is it possible that adding more cards can slow things down if they already fit in the model due to increased time sharing data around?
If anyone has first hand tests or links to experiments other's have run, it would be good to have so I can decide on a hardware configuration. | 2023-12-24T22:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/18q5kat/multi_gpu_hardware_selection/ | rugg0064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q5kat | false | null | t3_18q5kat | /r/LocalLLaMA/comments/18q5kat/multi_gpu_hardware_selection/ | false | false | self | 2 | null |
Merry Christmas everyone | 79 | 2023-12-24T22:03:29 | Rootdevelop | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18q4xw2 | false | null | t3_18q4xw2 | /r/LocalLLaMA/comments/18q4xw2/merry_christmas_everyone/ | false | false | 79 | {'enabled': True, 'images': [{'id': '0FE24HnuPnh95_4_0S6VNDifWbF2-99MbIVAKmPxc4w', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?width=108&crop=smart&auto=webp&s=8bc9b9ac913d6a543240df1135047428827cb432', 'width': 108}, {'height': 362, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?width=216&crop=smart&auto=webp&s=b21204303b0e78c69efe7e3ae34bd264d4df2fa2', 'width': 216}, {'height': 537, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?width=320&crop=smart&auto=webp&s=0df20758c7a42f09bc4d0656d1778522b9890ee4', 'width': 320}, {'height': 1074, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?width=640&crop=smart&auto=webp&s=cec2dbd25059cc5bfd8635d900ad892a6bff2d98', 'width': 640}, {'height': 1611, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?width=960&crop=smart&auto=webp&s=a787322492ffd31cb4c74752dbb87ae61c35124d', 'width': 960}, {'height': 1812, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?width=1080&crop=smart&auto=webp&s=f90483b6a9fc0de9483bb09442ed12680c24535c', 'width': 1080}], 'source': {'height': 3926, 'url': 'https://preview.redd.it/afc2bogifb8c1.jpeg?auto=webp&s=c5e6eaac29abab5b5f3d5dfe4ec03ed0275593ee', 'width': 2339}, 'variants': {}}]} | |||
How to *actually* calculate tokens/sec? | 1 | [removed] | 2023-12-24T22:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/18q4xu7/how_to_actually_calculate_tokenssec/ | sujantkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q4xu7 | false | null | t3_18q4xu7 | /r/LocalLLaMA/comments/18q4xu7/how_to_actually_calculate_tokenssec/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IH6dx07sr98p8yIV2wTvJZoqFT4xXAszG_S6Z2gyOYI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/AA7sSFDqrH89HF6wiL-eK0VCEmPKIQp8do82lYx-xt8.jpg?width=108&crop=smart&auto=webp&s=5b8a4085244cbaee0f5b4020874d3a49071e2307', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/AA7sSFDqrH89HF6wiL-eK0VCEmPKIQp8do82lYx-xt8.jpg?auto=webp&s=88ded451026298d00cd0ca360079e3daf5a7abb5', 'width': 200}, 'variants': {}}]} |
Fine tuning Mistral on functionally extinct language. | 64 | I have an Australian Aboriginal language dataset. The language has only 800 surviving words. It is still being taught in some schools. I want to use an LLM and fine-tune it on the language's history, culture, people, etc. The data is very limited. Which approach should I take to accomplish this? | 2023-12-24T22:00:54 | https://www.reddit.com/r/LocalLLaMA/comments/18q4w5q/fine_tuning_mistral_on_functionally_extinct/ | No-Point1424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q4w5q | false | null | t3_18q4w5q | /r/LocalLLaMA/comments/18q4w5q/fine_tuning_mistral_on_functionally_extinct/ | false | false | self | 64 | null |
Best model for finetuning nowadays? | 5 | Hey guys, I've been off for a long time and I've missed the latest updates.
Can someone please tell me what the best model for fine-tuning is these days? I have roughly 1000-5000 examples I'd like to fine-tune on.
I've heard of the recent Mixtral / Mistral models, but not sure if they're the best option for finetuning (as opposed to out-of-the-box using).
Would love to hear your opinion / experiences, any pros and cons, etc.
Thanks! | 2023-12-24T21:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/18q4rhb/best_model_for_finetuning_nowadays/ | nightlingo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q4rhb | false | null | t3_18q4rhb | /r/LocalLLaMA/comments/18q4rhb/best_model_for_finetuning_nowadays/ | false | false | self | 5 | null |
Quickest, easiest deployment of OpenAI API compatible inference server? | 3 | What it says in the title. I'm using LM Studio locally, but I want something I can deploy to Runpod or wherever that implements OpenAI's Chat/Completions API spec. Preferably it also implements Tools for function calling within the consuming web app. | 2023-12-24T21:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/18q4bi3/quickest_easiest_deployment_of_openai_api/ | TheCoconutTree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q4bi3 | false | null | t3_18q4bi3 | /r/LocalLLaMA/comments/18q4bi3/quickest_easiest_deployment_of_openai_api/ | false | false | self | 3 | null |
Overwhelmed on how to get started for my use case: Local M2 Mac with SSH access to A100 GPUs for coding/academic research? | 19 | Sorry if this has been asked! I have been trying to search but I can't find an answer for my case. Maybe I am just dumb, idk!
I can't figure out the ecosystems surrounding LLMs and how to actually get started. I have experience with coding for bioinformatics and ML in that world, but I don't have a good general education in comp sci and programming, so it's all very trial by fire.
I have an M2 MBP and ssh access to lab's linux servers which has 4xA100s (two servers), 4x2080s (one server), and 4x10xx (I can't remember off the top of my head). I want to start running local LLMs so I can do two primary things:
1. GPT-4 style "Data Analysis" like functions, really just something that will run and execute code in a docker/VM of some sort.
2. Some sort of RAG-like setup (maybe fine-tuning?) that I can use to build a "researcher"-like model, where I can embed the gajillion new papers in my PhD field to better keep up with the literature, as well as documentation for new packages and such for the bioinformatics work I do.
Where I am always getting stuck is how to get started on either of those. For (1), Open Interpreter (github: [https://github.com/KillianLucas/open-interpreter](https://github.com/KillianLucas/open-interpreter)) seems like my best bet, but I just can't figure out how to get it working with a local model. I'm guessing the A100 Linux servers should be good enough for stuff like Mixtral and DeepSeekCoder, but it seems it's not compatible with Linux? And neither is LM Studio? And LM Studio doesn't have remote GPU support? I can't figure this out. I think AutoGen might offer this kind of functionality, but I've been stuck trying to figure out the Open Interpreter thing with no success.
For (2), I have to imagine this is more my fault, in that I do see lots of RAG tutorials, but all of them seem to use OpenAI. I have OpenAI plus (and Github Copilot) and love it, but I think I'd burn through a ton of money (poor grad student), when the clear option seems to be my access to these free remote GPUs? However, I can't really find the very specific use case I have of: free GPUs via SSH.
My SSH set up is CLI only. I do see some GUI stuff, which would be nice, but the issue always seems to be that I can't get them to SSH to my server to use those resources.
Can someone recommend me things I should try to search and learn to figure out this issue? Maybe I am just approaching this all wrong?
Thank you :) | 2023-12-24T21:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/18q3yy6/overwhelmed_on_how_to_get_started_for_my_use_case/ | NewspaperPossible210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q3yy6 | false | null | t3_18q3yy6 | /r/LocalLLaMA/comments/18q3yy6/overwhelmed_on_how_to_get_started_for_my_use_case/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'QpbBUIdv1RejQ-c6SXbmnoPMe83ErrNIc_QzVO6Nz_c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=108&crop=smart&auto=webp&s=e06f4bdd841e37d03518faa579153fe2efaa1216', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=216&crop=smart&auto=webp&s=ce18e82c987b680b8c660681f95fc657347564be', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=320&crop=smart&auto=webp&s=ebc1cca340ddceaceac3a62c48e5f99acf903728', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=640&crop=smart&auto=webp&s=cfb5efef40665878290a6fe4b7d652a3be7023f6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=960&crop=smart&auto=webp&s=9c0d7ba1306b8936f899cecc5460a92306604161', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=1080&crop=smart&auto=webp&s=6a89b78bbddda6bc227681a6a7a40daea97fefb7', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?auto=webp&s=0faca598ca9698f6b719fa7d453b2ea554320ba2', 'width': 3840}, 'variants': {}}]} |
Looking for Project Ideas | 4 | Background: Currently working as a Data Scientist at a tech company! I am actively looking for some very interesting project ideas, preferably, as I am trying to get some work done in GenAI.
I have extensively used Azure OpenAI Services and Langchain in my projects but I am looking to work with some open-source LLMs directly or create some kind of framework or libraries for such LLM applications.
I wanted to ask the community here to suggest some interesting project ideas or pinpoint any missing pieces in the LLM-dev puzzle, in the form of libraries or frameworks that I could work on.
P.S.: I am also open to collaborations, so DM me if you wanna connect ;)
I gifted my niece and nephew 2d kid friendly ai-pets for Christmas and they are loving it! | 9 | A friend has been working on a project called [Tori.gg](https://Tori.gg) for a while and mentioned giving it to his kids as a Christmas gift to help them in the new year. They loved it so I decided to copy him haha.
Tori are 2d ai-pets that are designed to help you grow.
It's kid-friendly, and the entire experience of getting a "Tori" was pretty cool. My nephew and niece loved it!
You can gift/try your own Tori here: [tori.gg/newyear](https://tori.gg/newyear)
[track your focus hours and help you level up](https://preview.redd.it/qolel6do3b8c1.png?width=2024&format=png&auto=webp&s=0397628ced4894fe6e94c89a35d7c306d11dc76c)
[choose your avatar](https://preview.redd.it/reepxiwd3b8c1.png?width=1556&format=png&auto=webp&s=07fabaf2418ab485a08098f1edc6cdbce9e15c80)
​
[set the personality](https://preview.redd.it/d6i1e3eh3b8c1.png?width=1517&format=png&auto=webp&s=9b101d50aa2929cee3ffe81d32bc9282108c5f70)
[a 2d ai-partner in your browser](https://preview.redd.it/wdfnhxer3b8c1.png?width=1918&format=png&auto=webp&s=9091fcc59947075cf3bf8b5369aa19bba87b5e79) | 2023-12-24T21:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/18q3v40/i_gifted_my_niece_and_nephew_2d_kid_friendly/ | Zer0-Fractal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q3v40 | false | null | t3_18q3v40 | /r/LocalLLaMA/comments/18q3v40/i_gifted_my_niece_and_nephew_2d_kid_friendly/ | false | false | 9 | null | |
an application like lm studio but for Android | 2 | It's all in the title: I'm looking for something that doesn't use a command line, I want a real application. Do you know if one already exists or is being created? | 2023-12-24T21:01:26 | https://www.reddit.com/r/LocalLLaMA/comments/18q3sh6/an_application_like_lm_studio_but_for_android/ | Terrible_Vegetable4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q3sh6 | false | null | t3_18q3sh6 | /r/LocalLLaMA/comments/18q3sh6/an_application_like_lm_studio_but_for_android/ | false | false | self | 2 | null |
Tesla P40 GPU For Mixtral-8x7B? | 2 | I am thinking of buying a Tesla P40, since it's the cheapest 24GB VRAM solution with a more or less modern chip, for Mixtral-8x7B. What speed will I get, and at what quantization? Also, I am worried about context: if I have a 10,000-token context, what will the speed be then?
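Not an authoritative answer, but here is a sketch of how one would typically load it with llama-cpp-python (the file name is a placeholder, and the sizing is a rough assumption: a Q3_K_M GGUF of Mixtral is roughly 20GB, so a 10,000-token KV cache may force fewer offloaded layers or a smaller quant):

```python
from llama_cpp import Llama

# Model file name is a placeholder; use whatever Mixtral GGUF quant you download.
llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf",
    n_gpu_layers=-1,   # offload every layer; lower this number if 24GB VRAM isn't enough
    n_ctx=10000,       # the long context in question; the KV cache also consumes VRAM
)
print(llm("[INST] Hello [/INST]", max_tokens=32)["choices"][0]["text"])
```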
(I have 32GB DDR4 RAM, if it's important.) | 2023-12-24T20:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/18q382z/tesla_p40_gpu_for_mixtral8x7b/ | Deep-Yoghurt878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q382z | false | null | t3_18q382z | /r/LocalLLaMA/comments/18q382z/tesla_p40_gpu_for_mixtral8x7b/ | false | false | self | 2 | null |
MIT Researchers Introduce a Novel Machine Learning Approach in Developing Mini-GPTs via Contextual Pruning | 43 | I saw this article in my suggestions.
I always thought LLMs contain a lot of useless information, especially for a specific use case.
For example:
My use case is roleplaying. I thought we could get rid of mathematical and factual information that's not needed, so we can decrease the size of the LLM without losing quality.
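A minimal sketch of what that could look like mechanically (an illustration of activation-based contextual pruning in PyTorch, not the paper's actual method): run your domain text through a layer, and zero out the units that stay quiet for your use case.

```python
import torch

@torch.no_grad()
def contextual_prune(linear: torch.nn.Linear, calib_inputs: torch.Tensor, keep_ratio: float = 0.7):
    """Zero the output units of `linear` that are least active on calibration data."""
    acts = linear(calib_inputs).abs().mean(dim=0)   # mean |activation| per output unit
    k = int(keep_ratio * acts.numel())              # how many units to keep
    threshold = acts.topk(k).values.min()           # smallest activation among the keepers
    dead = acts < threshold                         # units that stay near zero on this domain
    linear.weight[dead] = 0.0                       # zero the rows feeding the "unused" units
    if linear.bias is not None:
        linear.bias[dead] = 0.0
    return int(dead.sum())                          # number of pruned units
```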
What do you think ? | 2023-12-24T20:01:59 | https://www.marktechpost.com/2023/12/22/mit-researchers-introduce-a-novel-machine-learning-approach-in-developing-mini-gpts-via-contextual-pruning/#amp_tf=From%20%251%24s&aoh=17034434235341&csi=0&referrer=https%3A%2F%2Fwww.google.com&share=https%3A%2F%2Fwww.marktechpost.com%2F2023%2F12%2F22%2Fmit-researchers-introduce-a-novel-machine-learning-approach-in-developing-mini-gpts-via-contextual-pruning%2F | Prior-Blood5979 | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 18q2nau | false | null | t3_18q2nau | /r/LocalLLaMA/comments/18q2nau/mit_researchers_introduce_a_novel_machine/ | false | false | 43 | {'enabled': False, 'images': [{'id': 'V_OocHujt0C9Qcc1dWbHkmSyrqj2P5wO7AuU8MGJdQY', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/ned1l2PeRFsZxURJRUvA4bTxCxmei9rNdwe-asxFz18.jpg?width=108&crop=smart&auto=webp&s=9df0a851ab29524e3258a9c51a198903609a4e4b', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/ned1l2PeRFsZxURJRUvA4bTxCxmei9rNdwe-asxFz18.jpg?width=216&crop=smart&auto=webp&s=be2cb562cbf57610cef69cc95f18945c78e994d5', 'width': 216}, {'height': 238, 'url': 'https://external-preview.redd.it/ned1l2PeRFsZxURJRUvA4bTxCxmei9rNdwe-asxFz18.jpg?width=320&crop=smart&auto=webp&s=155a66fe702adc8b5aa37e5cc640028571dcad88', 'width': 320}, {'height': 476, 'url': 'https://external-preview.redd.it/ned1l2PeRFsZxURJRUvA4bTxCxmei9rNdwe-asxFz18.jpg?width=640&crop=smart&auto=webp&s=7bea4857375e0b0607c40154f08fecad7b81a2b8', 'width': 640}, {'height': 714, 'url': 'https://external-preview.redd.it/ned1l2PeRFsZxURJRUvA4bTxCxmei9rNdwe-asxFz18.jpg?width=960&crop=smart&auto=webp&s=448a3fb8b90d51e91952cfe409cf52894b0432a9', 'width': 960}], 'source': {'height': 732, 'url': 'https://external-preview.redd.it/ned1l2PeRFsZxURJRUvA4bTxCxmei9rNdwe-asxFz18.jpg?auto=webp&s=ba72d77cc7d44641c774afd789a7eb436e7820c0', 'width': 984}, 'variants': {}}]} | |
Finetuned Mistral outputting multiple Q and A responses. | 2 | I fine-tuned Mistral 7B using Monster API on a custom language dataset. The issue is that it outputs more than one answer. I trained it 3 different times, and the last run was for 8 hours.
This was the question: "Can you give an example of a word in the Dhurga language?"
The answer: all this stuff.
[INST]Can you give an example of a word in the Dhurga language? [/INST]The Dhurga word "djavan" means "to grow (of plants)". [/INST] ###Instruction:Can you cite any historical context for the word "djavan"? [/INST] ###Response:"Djavan" means "to grow (of plants)". This term shows the important role of agriculture in the traditional lifestyle of the Yuin community. Over time, the usage of this word has likely remained central to traditional practices like gardening, food preparation, and the transmission of knowledge and beliefs about agroecology. [/INST] ###Instruction: How do you use the word "djavan" in a sentence? [/INST] ###Response:One can use "djavan" in a sentence like: "The djavan (grew) of the warrin (yam) is ready for harvest", indicating the word's practical application and its integration in everyday conversation. [/INST
Can anyone please tell me what I'm doing wrong here?
My final few lines of code:
text = "\[INST\]Can you give an example of a word in the Dhurga language? \[/INST\]"
encoded = tokenizer(text, return\_tensors="pt", add\_special\_tokens=False)
model\_input = encoded
model.to(device)
generated\_ids = model.generate(\*\*model\_input, max\_new\_tokens=200, do\_sample=True)
decoded = tokenizer.batch\_decode(generated\_ids)
print(decoded[0]) | 2023-12-24T19:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/18q2akx/finetuned_mistral_outputting_multiple_q_and_a/ | No-Point1424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q2akx | false | null | t3_18q2akx | /r/LocalLLaMA/comments/18q2akx/finetuned_mistral_outputting_multiple_q_and_a/ | false | false | self | 2 | null |
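For what it's worth, output like the above usually means the fine-tuning examples never ended with the tokenizer's EOS token, so the model never learned where an answer stops and keeps writing new ###Instruction/###Response pairs. A hedged sketch of the data-prep fix (the 'question'/'answer' field names are hypothetical):

```python
# Append the EOS token to every training example so the model learns to stop.
def format_example(example, tokenizer):
    # Field names are hypothetical; adapt them to your dataset's schema.
    return f"[INST] {example['question']} [/INST] {example['answer']}{tokenizer.eos_token}"
```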
Multi-user web UI: AI assistant w/ plugins, LLM sandbox, SD image generation UI. Administrator control over apps, chat plugins and models each user has access to. | 41 | Hello Reddit, this morning I uploaded v0.2.0 of the Angular-based web UI I have been working on to interact with LLMs and other AI models. Here is a brief overview of what is in v0.2.0:
AI Assistant:
Chat interface for interacting with AI models; renders markdown and code blocks. Individual per-user chat history, with function calling (plugins) and 50+ abilities implemented out of the box. Generation and model settings are saved on a per-chat basis. Chat abilities include real-time news search, music/sound generation, image analysis, real-time weather reports, image generation, outgoing text messages, basic math functions, and more.
LLM Explorer:
Chat: UI that offers the same functionality as the OpenAI chat sandbox but for any LLM model. Allows users to save sets of input/output pairs to make it easy to test new models and compare them directly to OpenAI generations. Useful for prompt engineering with open models. Input/output sets are saved per user.
Completion: UI that has a window to interact with the model without any formatting. Similar to the Notebook tab in Text Generation UI.
Image Generation:
UI for interacting directly with Stable Diffusion models. Interacting with them in the AI chat session is fun, but sometimes having the direct UI is just faster. Images are saved on a per-user basis, supports SD 1.5, SDXL, and SDXL Turbo models.
Account Management (Admin only):
Create and manage user accounts and user groups. Groups can be assigned permissions on what Apps, Chat Abilities, and Skills (models) they can access, allowing for fine-tuned control of what users can use the UI for.
Skills Configuration (Admin Only):
Manage backend servers and what models they have loaded. The backend can run on one or many systems, making the system scalable.
App/Chat Ability Management (Admin only):
Install and uninstall apps and chat abilities.
Other updates from v0.1.0:
* Support for any OpenAI-compatible endpoint (a request example is sketched after this list). Run Text Gen UI and don't want to fiddle with those settings? The no-GPU docker compose version runs only the UI and models that do not depend on an Nvidia GPU.
* More Docker Compose options; much easier to add a second server, or to run only the UI part of the stack and rely on Ooba or vLLM for LLM inference.
* ExLlama V2 support as well as more control over sampler settings like Min P, Mirostat, and seed.
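To illustrate what "OpenAI-compatible" means in practice, here is a minimal request sketch; the host, port, and model name below are assumptions, not the project's actual defaults:

```python
import requests

# Point this at whatever OpenAI-compatible server you run (Spellbook, Text Gen UI, vLLM, ...).
resp = requests.post(
    "http://localhost:5000/v1/chat/completions",  # host/port are placeholders
    json={
        "model": "local-model",  # many local servers ignore or loosely match this field
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```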
How to install:
Visit [https://github.com/noco-ai/spellbook-docker](https://github.com/noco-ai/spellbook-docker) and follow the instructions for installing Docker Compose. If you have a newer Nvidia card, use the regular docker-compose file; otherwise, the no-GPU compose file plus Text Gen UI or another OpenAI-compatible endpoint is your best bet.
v0.1.0 post here: [https://www.reddit.com/r/LocalLLaMA/comments/17v92ct/ui\_w\_context\_free\_local\_function\_calling\_define/](https://www.reddit.com/r/LocalLLaMA/comments/17v92ct/ui_w_context_free_local_function_calling_define/) | 2023-12-24T19:01:39 | https://www.reddit.com/r/LocalLLaMA/comments/18q1h80/multiuser_web_ui_ai_assistant_w_plugins_llm/ | noco-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q1h80 | false | null | t3_18q1h80 | /r/LocalLLaMA/comments/18q1h80/multiuser_web_ui_ai_assistant_w_plugins_llm/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'wdvM2Zl3GcNVS0ty0Qp6xgJRWOSrzHnOIJocDNwqbqA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?width=108&crop=smart&auto=webp&s=a09eba51a989c13081e170eea933a3c072729d7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?width=216&crop=smart&auto=webp&s=71f43a21add2167d403a8518fa66ee42ec28fdf6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?width=320&crop=smart&auto=webp&s=f702c87421470afbfe8df1bd321d84c894f34fe9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?width=640&crop=smart&auto=webp&s=40f3b69ac1018cd576e11109c44d349f3d45fca3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?width=960&crop=smart&auto=webp&s=2a5c8c4cc0f8a0cf709a5b1364666a3e014378a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?width=1080&crop=smart&auto=webp&s=bc6dc1253dc3089f15d483f55f107c62521af26f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eb2soFwIF3x4SpQ9GX_YjXSCLgUPOLBJ8qVgJn1ZKWU.jpg?auto=webp&s=815fada8ff2f856b58139b08e16ae8381bf84dcc', 'width': 1200}, 'variants': {}}]} |
LLAMA 3 is already ready and out on Meta RayBan Glasses | 1 | [removed] | 2023-12-24T18:54:15 | https://www.reddit.com/r/LocalLLaMA/comments/18q1brm/llama_3_is_already_ready_and_out_on_meta_rayban/ | Independent_Key1940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q1brm | false | null | t3_18q1brm | /r/LocalLLaMA/comments/18q1brm/llama_3_is_already_ready_and_out_on_meta_rayban/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DLDB_tUjbO9Got4crdZ9JAWkbrbMFImzKFzqT5R9uVY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/x2G8z_hk8YxOu8Z1GMdvs2qaUfi0nzyaPiEtFGclyVM.jpg?width=108&crop=smart&auto=webp&s=fcf6370c3494695155e73e9c7829e9c62a12fd54', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/x2G8z_hk8YxOu8Z1GMdvs2qaUfi0nzyaPiEtFGclyVM.jpg?width=216&crop=smart&auto=webp&s=0af3b35be93c066cdad10e79a855f142e66da02e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/x2G8z_hk8YxOu8Z1GMdvs2qaUfi0nzyaPiEtFGclyVM.jpg?width=320&crop=smart&auto=webp&s=daed8cfd7d6b17afd61bd713acc045a4bdd9d32f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/x2G8z_hk8YxOu8Z1GMdvs2qaUfi0nzyaPiEtFGclyVM.jpg?auto=webp&s=7b6b9b173ac89c98ef3fd5f1af9855a26adb32a2', 'width': 480}, 'variants': {}}]} |
A "seat of the pants" writing model/prompting? | 7 | Before advent of chatbot models there were pretty interesting experiments with "pure text completition" writing, by pasting a large chunk of text (say, Harry Potter) and letting the model continue writing it as fanfiction of sorts, a purely "exporatory writing" if you will.
Unfortunately, I am having a problem recreating this; is there a trick to it? The model tends to quickly wrap things up in a few paragraphs, and in the most clichéd way possible.
Yeah, I know about the "top-down" fractal writing method, and it works (though it takes a considerable amount of work), but in theory an AI model should actually excel at "pantser" style writing, and yet it does not.
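One way to recreate the old raw-completion setup, as a hedged sketch: load a base (non-instruct) model and feed the text through a plain completion call with no stop strings. With llama-cpp-python (the model file name is a placeholder):

```python
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-v0.1.Q5_K_M.gguf", n_ctx=4096)  # a *base* model, not an instruct tune

story_so_far = open("story.txt").read()  # the chunk of text you want continued
out = llm(
    story_so_far,
    max_tokens=512,
    temperature=0.9,     # higher temperature keeps the continuation exploratory
    repeat_penalty=1.1,  # discourages the cliched wrap-up loops
    stop=[],             # no stop strings: let it keep writing
)
print(out["choices"][0]["text"])
```

Instruct tunes are trained to wrap things up; base models are usually the better pantsers.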
Any hints? | 2023-12-24T18:53:49 | https://www.reddit.com/r/LocalLLaMA/comments/18q1bhf/a_seat_of_the_pants_writing_modelprompting/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q1bhf | false | null | t3_18q1bhf | /r/LocalLLaMA/comments/18q1bhf/a_seat_of_the_pants_writing_modelprompting/ | false | false | self | 7 | null |
Lora Or Fine Tuning / Looking to train a model for a specific use case | 5 | I'm looking to train a local LLM. The goal is to get it to produce specific pieces of content (technical articles) written in a specific style, with a certain length and a specific structure (I have a few hundred examples gathered so far).
I don't know how to approach this, and I have a few questions:
1) Which would be the best model to start with? Since I'm looking for articles/writing, I don't think I'll need an LLM that's good at reasoning, coding, or math, so if there's something light that's only good at writing, I think that would be the thing to go for.
2) Traditional fine-tuning, LoRA, or something else? Since I want just a basic use case, I'm thinking a LoRA would be just fine, but I've never trained an LLM (I'm familiar with training and LoRAs from Stable Diffusion models, but I don't know how it applies here; see the sketch after this list).
3) If I want to add new data/information (recent stuff) to an existing model, would that be done with fine-tuning, LoRA, or something else? For this bit, the goal is not to teach it how to do new things; the goal is to feed new info to the LLM.
4) Some of the content pieces I'll need will be rather long - some might fit in 4k tokens, some might not. What would be the best way around this? I can split the whole thing into smaller parts, but an LLM that can generate more than 4k tokens might be useful, or one with a longer context window, so it can generate the content in multiple pieces without repeating itself.
5) Instruct model or chat model? | 2023-12-24T18:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/18q0rte/lora_or_fine_tuning_looking_to_train_a_model_for/ | yupignome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18q0rte | false | null | t3_18q0rte | /r/LocalLLaMA/comments/18q0rte/lora_or_fine_tuning_looking_to_train_a_model_for/ | false | false | self | 5 | null |
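On question 2 above: a minimal LoRA setup with Hugging Face PEFT looks like the sketch below (the base model and hyperparameters are common starting points, not tuned recommendations). Style transfer from a few hundred examples is the regime LoRA tends to handle well; adding genuinely new facts (question 3) is where adapters often fall short, and retrieval is usually suggested instead.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed base model
config = LoraConfig(
    r=16,                                 # adapter rank: the LoRA's capacity
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections are the usual targets
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()        # typically well under 1% of the full model
```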
Exploring OASST2: The New Open Assistant Dataset - Insights Wanted! | 19 | Hey everyone! Heard about OASST2? It's the latest training dataset from Open Assistant, and it seems like a game-changer. I'm curious to know what folks think about it.
It seems like it will really help a lot in training new models overall. I hope it can also assist with multilingual models. OpenChat 3.5 trained on this would be a great asset, I assume.
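For anyone who wants to poke at it, the dataset id from the Hugging Face link in this post loads directly with the datasets library:

```python
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst2")  # id taken from the link in this post
print(ds)  # splits and row counts
```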
🌚 | 2023-12-24T18:21:53 | https://huggingface.co/datasets/OpenAssistant/oasst2 | mafa_joiner | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18q0o94 | false | null | t3_18q0o94 | /r/LocalLLaMA/comments/18q0o94/exploring_oaast2_the_new_open_assistant_dataset/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'BfWIgyWaL0PPZE4_SxQ2Uz4sAjsLH3L2pgLwpXhNOc0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?width=108&crop=smart&auto=webp&s=58303c26a7570f1ba895890d3915f6dee8648272', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?width=216&crop=smart&auto=webp&s=380c299df25b6348391ae62e4a782c4f0d24f337', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?width=320&crop=smart&auto=webp&s=4ef764131cfa5f1c8cf8d9136d65ac05e0a570bb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?width=640&crop=smart&auto=webp&s=15ec3c033fa5b3a2177142572a00cc417f912882', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?width=960&crop=smart&auto=webp&s=58a69956d7b3e217451f2d1cbdaf42e423a5f8db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?width=1080&crop=smart&auto=webp&s=13962b6e148bd7e4a38ffcff317ce8a491df1832', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-7BkG8KPKC2zXxI1WEJ9-E_pNCILR-7J3xvQngTShVE.jpg?auto=webp&s=4543eb3925736031dc096fb92089f01d61c8560e', 'width': 1200}, 'variants': {}}]} | |
What happened to Sequelbox's models on HF? | 3 | I haven't heard any news about this. Sequelbox's presence on Hugging Face seems to have been erased. They released the models Stellarbright, SunsetBoulevard, and others. Does anyone know what happened? | 2023-12-24T17:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/18pzheb/what_happened_to_sequelboxs_models_on_hf/ | sophosympatheia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pzheb | false | null | t3_18pzheb | /r/LocalLLaMA/comments/18pzheb/what_happened_to_sequelboxs_models_on_hf/ | false | false | self | 3 | null |