| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Synthetic Intelligent Agent (SynthIA) | 1 | [removed] | 2023-10-26T05:55:13 | https://www.reddit.com/r/LocalLLaMA/comments/17gpop8/synthetic_intelligent_agent_synthia/ | migtissera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gpop8 | false | null | t3_17gpop8 | /r/LocalLLaMA/comments/17gpop8/synthetic_intelligent_agent_synthia/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KJfmm8_w8Xzvhy2uLQ4qMT5g4G5IKvaoUTPuP9grdeg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=108&crop=smart&auto=webp&s=ef46686f5f0757f4ad3b2116194d777a506816d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=216&crop=smart&auto=webp&s=3c07a30118904caba6e962990e7f4d4583ca1965', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=320&crop=smart&auto=webp&s=63dad03ade88f43233ebd7bc6fac3b274f8f9ebf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=640&crop=smart&auto=webp&s=8a0835838405bc0951692370ad1bd4a1cb9e8bb8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=960&crop=smart&auto=webp&s=ac87b31cac266144f5dcd82a3fb934d143e974f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=1080&crop=smart&auto=webp&s=29ac36dca635932bf52b40dc332355930e56eb0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?auto=webp&s=508892389a307289a1a189b6dc98146c55e5ba38', 'width': 1200}, 'variants': {}}]} |
CLEX: Continuous Length Extrapolation for Large Language Models | 19 | https://arxiv.org/abs/2310.16450
>We demonstrate that CLEX can be *seamlessly incorporated into LLMs equipped with Rotary Position Embedding, such as LLaMA* and GPT-NeoX, with *negligible impact on training and inference latency*. Experimental results reveal that CLEX can effectively *extend the context window to over 4x or almost 8x training length*, with no deterioration in performance. Furthermore, when evaluated on the practical LongBench benchmark, our model trained on a 4k length exhibits competitive performance against state-of-the-art open-source models trained on context lengths up to 32k. | 2023-10-26T05:52:10 | https://www.reddit.com/r/LocalLLaMA/comments/17gpn6j/clex_continuous_length_extrapolation_for_large/ | starstruckmon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gpn6j | false | null | t3_17gpn6j | /r/LocalLLaMA/comments/17gpn6j/clex_continuous_length_extrapolation_for_large/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
23[F4M] Feeling horny at the moment, any hard dick available for my to suck and ride? | 0 | DM | 2023-10-26T05:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/17gp0rw/23f4m_feeling_horny_at_the_moment_any_hard_dick/ | MajorAccording5812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gp0rw | false | null | t3_17gp0rw | /r/LocalLLaMA/comments/17gp0rw/23f4m_feeling_horny_at_the_moment_any_hard_dick/ | false | false | self | 0 | null |
Jina AI Launches World's First Open-Source 8K Text Embedding, Rivaling OpenAI | 5 | 2023-10-26T05:05:47 | https://jina.ai/news/jina-ai-launches-worlds-first-open-source-8k-text-embedding-rivaling-openai/?123 | ninjasaid13 | jina.ai | 1970-01-01T00:00:00 | 0 | {} | 17goy9r | false | null | t3_17goy9r | /r/LocalLLaMA/comments/17goy9r/jina_ai_launches_worlds_first_opensource_8k_text/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'QK3ExLbhFbEXVQbeJRDKAdg79ckCbm1xYdVMp33PhaA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=108&crop=smart&auto=webp&s=4bb7d9a6fdd87b43875881babda703a3d5f1dc05', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=216&crop=smart&auto=webp&s=c8cd509a369a15e692ef3478e1dc622ba255413d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=320&crop=smart&auto=webp&s=dfce5a82d4fd131656a3ecec82ff9b061b10f4f8', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=640&crop=smart&auto=webp&s=bdc7a02860422c117b5bcb158405b28d4b546f80', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=960&crop=smart&auto=webp&s=e6f5a0a775903e0779fb8d445e87f47a99327a30', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=1080&crop=smart&auto=webp&s=07ac67b36c1c438df5a157486c9359de3a029f25', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?auto=webp&s=1a4bba8961d4fd31f71db511c6dbc8ad61ae5c84', 'width': 1200}, 'variants': {}}]} | ||
Model parallel is inefficient for extending context of LLM inference? | 6 | I run Llama-2-13B on one A100 80G with `dtype=bfloat16`, so the model itself occupies about 26G. Without any memory-efficient technique like flash attention, I encountered an out-of-memory error when the context length reached about 9K.
If I have two A100 80G and apply model parallel, the model occupies 13G on each GPU. Compared to one GPU, the extra memory I gain in one GPU is 13G, which is marginal considering I add a whole new 80G memory. Also, the space complexity of vanilla attention is quadratic with respect to sequence length, so I can assume that the context length gain is marginal as well.
Are there efficient ways that can utilize memory of other GPUs to extend context? | 2023-10-26T04:53:00 | https://www.reddit.com/r/LocalLLaMA/comments/17gor2u/model_parallel_is_inefficient_for_extending/ | ListenProfessional47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gor2u | false | null | t3_17gor2u | /r/LocalLLaMA/comments/17gor2u/model_parallel_is_inefficient_for_extending/ | false | false | self | 6 | null |
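For a rough sense of the arithmetic behind this post, here is a back-of-the-envelope sketch (assumptions only: the standard Llama-2-13B shape and bfloat16 everywhere, ignoring activations and allocator overhead). The KV cache grows linearly with context, while a naive, non-flash attention implementation also materializes a score matrix that grows quadratically, which is usually what blows up first.

```python
# Rough memory estimate for Llama-2-13B inference in bfloat16 (assumed shape:
# 40 layers, 40 heads, hidden size 5120).  Real usage adds activations,
# temporaries, and allocator overhead on top of these terms.
BYTES = 2  # bfloat16

def estimate_gib(seq_len: int, batch: int = 1) -> dict:
    gib = lambda b: round(b / 2**30, 1)
    n_layers, n_heads, d_model = 40, 40, 5120
    weights = 13e9 * BYTES                                        # the ~26 GB the post mentions
    kv_cache = 2 * n_layers * batch * seq_len * d_model * BYTES   # linear in context length
    # A naive attention kernel materializes a (heads, seq, seq) score matrix for the
    # layer it is currently computing; this term is quadratic in context length.
    scores_per_layer = batch * n_heads * seq_len * seq_len * BYTES
    return {"weights_GiB": gib(weights),
            "kv_cache_GiB": gib(kv_cache),
            "attn_scores_per_layer_GiB": gib(scores_per_layer)}

print(estimate_gib(9_000))   # roughly 24 GiB weights, ~7 GiB KV cache, ~6 GiB of scores per layer
```

Splitting the weights across two GPUs only halves the first term on each card, which matches the observation that the per-GPU headroom gained is small; the quadratic term is what memory-efficient attention kernels such as flash attention remove.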
Has Anyone Managed to Run LocalLlama on a MacBook Air M1 with an RTX 3090 eGPU? | 3 | Hello everyone,
I'm considering the feasibility of using an RTX 3090 as an eGPU with my MacBook Air M1, primarily for running LocalLlama. I realize this setup might not be plug-and-play, but I'm eager to know if anyone has figured out a way to make it work. Please share your insights, experiences, and suggestions for successfully running LocalLlama on this type of hardware setup. | 2023-10-26T04:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/17go6fk/has_anyone_managed_to_run_localllama_on_a_macbook/ | yaseenox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17go6fk | false | null | t3_17go6fk | /r/LocalLLaMA/comments/17go6fk/has_anyone_managed_to_run_localllama_on_a_macbook/ | false | false | self | 3 | null |
Poe hackathon - SF bay area Nov 4 | 1 | So, let's say you hosted some of these cool new mini LLaMA models as a bot on poe.com? Are there any cool things you could do? How about using their new monetization features?
There's a Poe hackathon in the SF bay area on 11/4 (https://quorablog.quora.com/Introducing-creator-monetization-for-Poe). It's really short, from noon to 10 p.m. You can make a prompt-only bot or one that runs on a server.
Please let me know where I should advertise this, I'm not affiliated with Poe I just want good bots to play with.
During the last hackathon I won a $2k prize for making a bot that used mistral-7b to generate a short theme that was then used as a guide for ChatGPT and StableDiffusionXL to make sure the image went along with the story. First place went to someone who made a CodeInterpreter-style bot.
I saw on here someone combined voice to text and an LLM to point out fallacies as an overlay on a video of a presidential debate. That was pretty awesome.
Poe is going to be implementing a monetization model where bot operators can get paid per-usage. What could you do with a 7B or 13B model that costs almost nothing in server costs, but is worth a penny or two to the user? | 2023-10-26T04:06:40 | https://www.reddit.com/r/LocalLLaMA/comments/17gnzpo/poe_hackathon_sf_bay_area_nov_4/ | YoureMyFavoriteOne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gnzpo | false | null | t3_17gnzpo | /r/LocalLLaMA/comments/17gnzpo/poe_hackathon_sf_bay_area_nov_4/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ov_9g85YrlJybPanwpceEv90BcsW71UDr0HFRzHMams', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?width=108&crop=smart&auto=webp&s=2d597f24eb03c196ab79cc5d5614f481088bb8f7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?width=216&crop=smart&auto=webp&s=68e8aef952b2d2079b241827f9a77c667f33dc49', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?width=320&crop=smart&auto=webp&s=8b1b5b8bb07c9b073ce0024b6137db014a86bdb6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?width=640&crop=smart&auto=webp&s=9619087946f1e201cea3909c2a02a6e5359998d2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?width=960&crop=smart&auto=webp&s=fb9d40c1719a10565be33afc1eb0b7744f4e9b85', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?width=1080&crop=smart&auto=webp&s=e2e1d3faecedd6728d2f315c0979883903e2dcca', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Zl7EsA-At4Q67AIV7_qs-SrvrUbWR2C2rDzqgcz3Ve0.jpg?auto=webp&s=0ed9ca018eacb4f04703ec867c40be1e3e8460a1', 'width': 1920}, 'variants': {}}]} |
Transformers Backend Optimizations Inside Oobabooga | 4 | I have been performing some tests with the .generate() method in the transformers backend. I have noticed whenever I call .generate() on a model manually inside a notebook, the inference is significantly slower than the inference using oobabooga.
I am using the same model (open orca 7B), with the same prompt for both setups, yet .generate() takes 16 seconds in my notebook, whereas oobabooga takes 9 seconds to generate the same result. I am using the default settings in oobabooga (simply running the command `python server.py` without any flags), and both models are loaded in float32 (I have a P40).
I have looked at the generate_reply_HF function inside textgen-webui, yet I am not sure what optimizations are present that lead to the significant speedup; again, I am using the default settings (no deepspeed, accelerate, or any quantization).
Does anyone know what optimizations oobabooga uses to speed up the model inference? | 2023-10-26T03:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/17gncau/transformers_backend_optimizations_inside/ | FrostyContribution35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gncau | false | null | t3_17gncau | /r/LocalLLaMA/comments/17gncau/transformers_backend_optimizations_inside/ | false | false | self | 4 | null |
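For anyone comparing numbers like these, a minimal like-for-like timing baseline is below (a sketch using the standard transformers API; the checkpoint name is a placeholder for whatever is loaded in the UI). Generation time scales with the number of new tokens and with the sampling settings, so matching max_new_tokens, temperature/top_p, use_cache, and running under inference mode is usually necessary before notebook and web-UI timings are comparable.

```python
# Minimal timing baseline for HF .generate() outside the web UI.
# eval(), inference_mode(), use_cache=True, and identical sampling settings are the
# usual suspects when a notebook run looks slower than a UI run of the same model.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/Mistral-7B-OpenOrca"   # placeholder: use the same checkpoint as the UI
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32, device_map="auto")
model.eval()

inputs = tok("Write a short poem about GPUs.", return_tensors="pt").to(model.device)
start = time.time()
with torch.inference_mode():                 # no autograd bookkeeping during generation
    out = model.generate(**inputs, max_new_tokens=200, do_sample=True,
                         temperature=0.7, top_p=0.9, use_cache=True)
elapsed = time.time() - start
print(f"{out.shape[-1] - inputs['input_ids'].shape[-1]} new tokens in {elapsed:.1f}s")
```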
How I trained an AI on my text messages to make a robot that talks like me | 111 | 2023-10-26T02:59:00 | https://www.linkedin.com/pulse/how-i-trained-ai-my-text-messages-make-robot-talks-like-eric-polewski-9nu1c/ | LetMeGuessYourAlts | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 17gmrns | false | null | t3_17gmrns | /r/LocalLLaMA/comments/17gmrns/how_i_trained_an_ai_on_my_text_messages_to_make_a/ | false | false | 111 | {'enabled': False, 'images': [{'id': 'YcmP-ZvciIBGWoa5RKrHjGsPmyEUQBnYQCJQn1lzLlc', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?width=108&crop=smart&auto=webp&s=ce7210a61e064f2050151701403e5600631c5624', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?width=216&crop=smart&auto=webp&s=b4a73d15097f0d6ebac42a5e40fb6cb325bead2e', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?width=320&crop=smart&auto=webp&s=bbbd5f423cabeef77a2f8e8e44ecdaf3f9ee85ab', 'width': 320}, {'height': 271, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?width=640&crop=smart&auto=webp&s=f7213a52738ef1d46aaa32471fa906da249bfa03', 'width': 640}, {'height': 407, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?width=960&crop=smart&auto=webp&s=2076d85e971bdb877b62472112bad0aee4f1d751', 'width': 960}, {'height': 458, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?width=1080&crop=smart&auto=webp&s=acac402d76110ab4021f79425994141a3d2f86e7', 'width': 1080}], 'source': {'height': 543, 'url': 'https://external-preview.redd.it/SpNGACMM2u5RsMddxNrGVL---CU0wGzRvCDTAiOT2pc.jpg?auto=webp&s=b745d467ac87566f9c8894f7d89e22e1175b78c6', 'width': 1280}, 'variants': {}}]} | ||
Is this written by llama 0.5 ? | 1 | 2023-10-26T02:22:14 | Feztopia | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17gm0fx | false | null | t3_17gm0fx | /r/LocalLLaMA/comments/17gm0fx/is_this_written_by_llama_05/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'uDn_SHZZQ4xmBgzbogHJcttzBQWZ61_UtqLWT65rWu0', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/6semm8lyigwb1.jpg?width=108&crop=smart&auto=webp&s=e932e5f7a708ec3669aa540ccd3c0545572ee8e8', 'width': 108}, {'height': 337, 'url': 'https://preview.redd.it/6semm8lyigwb1.jpg?width=216&crop=smart&auto=webp&s=d5c1d491f674ef022dc0f8e660fc320a571f5571', 'width': 216}, {'height': 500, 'url': 'https://preview.redd.it/6semm8lyigwb1.jpg?width=320&crop=smart&auto=webp&s=9a64ee6d5ef20663e682ccbf645de7d2c5470782', 'width': 320}, {'height': 1001, 'url': 'https://preview.redd.it/6semm8lyigwb1.jpg?width=640&crop=smart&auto=webp&s=e08ba283a5a6a838757262c040ad9eb717a895bc', 'width': 640}, {'height': 1501, 'url': 'https://preview.redd.it/6semm8lyigwb1.jpg?width=960&crop=smart&auto=webp&s=4ab4224bc422fa4f2843d107b3971919a660e6ff', 'width': 960}], 'source': {'height': 1688, 'url': 'https://preview.redd.it/6semm8lyigwb1.jpg?auto=webp&s=02e269d458e5a8289d03237765a80ea197cad33b', 'width': 1079}, 'variants': {}}]} | |||
Jumping into this world without any real experience. | 0 | I have been in IT for 30 years. I was just laid off this week. In my last job I architected gen AI in the cloud, but I want to get a better understanding of this technology.
I have an AMD 5900X with 128GB of RAM. I have an RTX 3050 and will soon install a 4060 Ti 16GB. 8TB of NVMe and 4TB of SSD. My plan is to use Proxmox and make a Windows VM as my PC with the 3050 attached, and then an Ubuntu VM with the 4060 attached for LLM work.
I have 24 threads so thinking 8 for windows, 16 for Ubuntu. 32 ram win, 96 for Ubuntu.
Thoughts on this setup? How much drive space for llm?
My goal is to add a large ancient book in a text format (XML, JSON) as the dataset for the LLM to use to answer questions. Thoughts for a newbie? | 2023-10-26T02:04:29 | https://www.reddit.com/r/LocalLLaMA/comments/17glnxy/jumping_into_this_world_without_any_real/ | paulk1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17glnxy | false | null | t3_17glnxy | /r/LocalLLaMA/comments/17glnxy/jumping_into_this_world_without_any_real/ | false | false | self | 0 | null |
I have a dataset of around 100 stories, each around 1000-2000 words each. Can I train a model to generate new stories in similar style and content? | 51 | I have a library of 100 stories each around ~1-2k words long, that I curated over the years, and would like to generate additional content similar to those stories using local models.
I currently have a 16GB 3080 laptop on which I successfully ran 13b models for simple chatting and Q&A inferencing, but I would like to use it to create stories similar to my library.
I looked through some options. One possibility was fine-tuning a model, but some research showed that I would need much more powerful hardware for that. I also saw that for my purposes, that would not be necessary, but I don't know where to begin.
From where I am now (several 13b models successfully running in windows) - how would I begin? Specifically,
1. May I ask for some high level step by step instructions on how I would train a model on this curated library and generate similar content? I'd even be happy with training a small 7b model to start with.
2. What additional hardware would be needed? I would be happy to buy more hardware, if needed. | 2023-10-26T01:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/17gl1fl/i_have_a_dataset_of_around_100_stories_each/ | thatavidreadertrue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gl1fl | false | null | t3_17gl1fl | /r/LocalLLaMA/comments/17gl1fl/i_have_a_dataset_of_around_100_stories_each/ | false | false | self | 51 | null |
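On the "where to begin" question, a common starting point for this kind of style transfer is a LoRA/QLoRA fine-tune, which a 16GB laptop GPU can usually handle for a 7B model. The sketch below uses peft + trl with a placeholder model name and hyperparameters, and assumes the stories live as plain .txt files; exact arguments vary somewhat between library versions.

```python
# Rough QLoRA-style sketch (not a tuned recipe): fine-tune a 7B base model on a folder
# of story .txt files with a 4-bit base and LoRA adapters.  Model id, rank, learning
# rate, and sequence length are placeholder assumptions.
import glob
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

stories = [open(p, encoding="utf-8").read() for p in glob.glob("stories/*.txt")]
dataset = Dataset.from_dict({"text": stories})

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_compute_dtype=torch.bfloat16))
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,   # 1-2k words is roughly 1.5-3k tokens, so longer stories get truncated
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=TrainingArguments(output_dir="story-lora", num_train_epochs=3,
                           per_device_train_batch_size=1, gradient_accumulation_steps=8,
                           learning_rate=2e-4, logging_steps=10),
)
trainer.train()
```

With only ~100 examples, the adapter can be expected to pick up tone and structure rather than anything deeper, which matches the stated goal of generating similar-feeling stories.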
I somehow uninstalled something and now Local LLMs don't work... | 1 | Any help appreciated. I'm trying to figure out what the heck I uninstalled that caused all of my local LLM's to not work.
I was ironically clearing out space to download models, and deleting things like the cache from yaml, xcode, etc. I tried to reinstall Xcode, but I uninstalled a couple things and I don't really know what I broke.
In LM Studio I'm getting Error Code 5, if that helps.
Thanks in advance | 2023-10-26T01:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/17gkw7n/i_somehow_uninstalled_something_and_now_local/ | juice_cane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gkw7n | false | null | t3_17gkw7n | /r/LocalLLaMA/comments/17gkw7n/i_somehow_uninstalled_something_and_now_local/ | false | false | self | 1 | null |
For those here who use CharacterAI.... | 1 | Just wanted to share an uncensored CharacterAI sub where you can post all your NSFW and generally crazy things the bots say...
https://www.reddit.com/r/CharacterAiUncensored/ | 2023-10-26T01:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/17gkvor/for_those_here_who_use_characterai/ | Earthling_April | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gkvor | false | null | t3_17gkvor | /r/LocalLLaMA/comments/17gkvor/for_those_here_who_use_characterai/ | false | false | self | 1 | null |
Can computer program and it's English language description be automatically generated by LLMs which can be fed to itself to further better generate (program, description) pairings sort of like MuZero self play way of bootstrapping itself? | 3 | It's like LLMs will be discovering programming lego blocks by being able to correctly expand the program AST. Maybe pseudo code could help in such a process. Running generated programs in sandbox to test correctness. | 2023-10-26T00:39:35 | https://www.reddit.com/r/LocalLLaMA/comments/17gjzzv/can_computer_program_and_its_english_language/ | Easy_Butterfly2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gjzzv | false | null | t3_17gjzzv | /r/LocalLLaMA/comments/17gjzzv/can_computer_program_and_its_english_language/ | false | false | self | 3 | null |
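The loop described here can be sketched very simply: sample (description, program) pairs, execute each program in a throwaway subprocess as a crude sandbox, and keep only the pairs that pass, which then become the next round of training data. In the sketch below, generate_pair is a hypothetical stand-in for whatever LLM call produces the pair, and a subprocess with a timeout is only a toy sandbox, not real isolation.

```python
# Sketch of the bootstrapping loop described above: sample (description, program) pairs,
# run each program in a throwaway subprocess, and keep only the ones that execute cleanly.
import json
import os
import subprocess
import tempfile

def generate_pair():
    # Hypothetical stand-in for an actual LLM sampling call that returns (description, program).
    return ("adds two numbers",
            "def add(a, b):\n    return a + b\n\nassert add(2, 3) == 5")

def runs_ok(code: str, timeout: float = 5.0) -> bool:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

kept = []
for _ in range(100):
    description, program = generate_pair()
    if runs_ok(program):
        kept.append({"description": description, "program": program})

with open("selfplay_round1.jsonl", "w") as f:   # feed these back as training data
    f.writelines(json.dumps(x) + "\n" for x in kept)
```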
With all these models coming out, would it be of interest to create an informal 'benchmark'? | 2 | I'M SORRY THE QUESTION IS SO STUPID LMAO
---
Here's a fun question I posed to the AI:
> Assume the following is true:
> Peter is a nice person. All nice people are composed of cinnamon.
> Is it true, then, to say, that therefore, "Peter is made of cinnamon"?
GPT-3 refused to answer in the affirmative, claiming that it must be a metaphorical question instead of correctly following the prompt; after a quick 'ok, what if it's literal?' it answered, but then gave ANOTHER disclaimer about how people are made of fats, water, and other chemicals/biology, and not cinnamon.
GPT-4 mentioned formal logic being yes, and the actual real life answer being no, which is very comprehensive and a great answer.
It would be interesting to pose (obviously better) questions like this together, that try to circumvent or 'mislead' the AI without being *too* far off - rather than trying to trick it, we're merely ensuring it's not just taking it directly from its training data. Or if it is, we ensure we add in twists or variations to see how it handles subtly different situations, or riddles, etc.
​ | 2023-10-26T00:36:01 | https://www.reddit.com/r/LocalLLaMA/comments/17gjxif/with_all_these_models_coming_out_would_it_be_of/ | bearbarebere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gjxif | false | null | t3_17gjxif | /r/LocalLLaMA/comments/17gjxif/with_all_these_models_coming_out_would_it_be_of/ | false | false | self | 2 | null |
Seeking front-end solutions for a remote LLM server handling document processing | 2 | Hi everyone,
I've set up a local LLM server running on a Linux machine equipped with a dedicated GPU. My goal is to use it as an inference server. I plan to access it from several clients running macOS, Windows, and other Linux machines. Looking at the options for running the server, I could serve Mistral or Llama variants via an OpenAI-compatible API using vLLM, ooba, privateGPT, etc. However, I have not been able to find a suitable front-end to access the server.
Ideally, I envision a setup where I can query documents from the client-side, engage in RAG scenarios employing local embedding databases, and then send only pertinent segments to the inference server for processing. Unfortunately, existing solutions have proven to be less than satisfactory - SillyTavern lacks document processing support, I'm getting random errors in privateGPT, and the documentation for h2oGPT is rather convoluted, especially for non-Windows systems.
Has anyone stumbled upon a solution tailored to this kind of workflow that integrates with the backend via a UI? I appreciate any suggestions, thoughts, or additional questions to clarify the expected workflow.
​ | 2023-10-25T23:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/17gizl1/seeking_frontend_solutions_for_a_remote_llm/ | elsatch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gizl1 | false | null | t3_17gizl1 | /r/LocalLLaMA/comments/17gizl1/seeking_frontend_solutions_for_a_remote_llm/ | false | false | self | 2 | null |
Question about cuBlas and CUDA version in llama.cpp for Windows and Linux | 3 | Out on the bleeding edge, sometimes features grow faster than documentation, and I think this subreddit is one of the best places to ask questions, so here goes.
Reading through the main Github page for llama.cpp I was pleasantly surprised to read that builds now include pre-compiled [Windows distributions](https://github.com/ggerganov/llama.cpp#running-on-windows-with-prebuilt-binaries). Checking out the latest build as of this moment, [b1428](https://github.com/ggerganov/llama.cpp/releases/tag/b1428), I see that it has a handful of different Windows options, and comparing those to the main Github page, I can see how some are better for CPU only inference and it looks like cuBlas is the main option for GPU. (And yes, I know ExLlamav2 might be faster overall, but I have some use cases for llama.cpp for now, like [lm-format-enforcer](https://github.com/noamgat/lm-format-enforcer).)
So the Github build page for llama.cpp shows two cuBlas options for Windows:
llama-b1428-bin-win-cublas-cu11.7.1-x64.zip
llama-b1428-bin-win-cublas-cu12.2.0-x64.zip
(And let me just throw in that I really wish they hadn't opened .zip as a valid domain name, because Reddit is trying to make these into URLs)
So it seems that one is compiled using CUDA version 11.7.1 and the other using version 12.2.0. I'm sure that there must be a good reason to bother compiling these two separate versions, but I'm not sure what it is. I suspect that this could apply to Linux as well, where perhaps some GPUs would work better with one CUDA version vs. the other. But can any of you tell me under which circumstances I should use 11.7.1 and under which I should use 12.2.0?
Are some older GPUs, like maybe a P40 or something, only supported under older CUDA versions and not newer versions? Or is there some other reason to compile for two different CUDA versions that I'm completely missing?
​ | 2023-10-25T23:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/17gixqk/question_about_cublas_and_cuda_version_in/ | ResearchTLDR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gixqk | false | null | t3_17gixqk | /r/LocalLLaMA/comments/17gixqk/question_about_cublas_and_cuda_version_in/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
Getting started with ChatGPT4All | 0 | I downloaded ChatGPT4All and downloaded one of the cores they have, but I can't get the program to do anything. Any input is responded to with a never ending string of #s.
What do I need to be putting in the system prompt and prompt template fields to make it work?
For context I'm looking for an ERP replacement for Muah AI, so I'm looking to set things up in that direction. | 2023-10-25T22:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/17ggmb0/getting_started_with_chatgpt4all/ | throwaway899495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ggmb0 | false | null | t3_17ggmb0 | /r/LocalLLaMA/comments/17ggmb0/getting_started_with_chatgpt4all/ | false | false | self | 0 | null |
Llama.cpp is slower on Iris Xe GPU than on CPU | 8 | I've compiled llama.cpp for both CPU only and another version with ClBlast support for Intel Iris Xe on my laptop Dell 5320 i5-1135G7 Iris Xe graphics with 8gb of shared RAM. For Mistral 7b q4 CPU only I got 4 to 6 tokens per second, whereas the same model with support for the Iris Xe I got less than 1.5-2.5 tokens per second. In the second case I see my GPU being recognized by llama.cpp and the memory being allocated and the GPU processing while generating the output. I thought the GPU should increase performance not decrease it. | 2023-10-25T21:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/17gf824/llamacpp_is_slower_on_iris_xe_gpu_than_on_cpu/ | thedatawhiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gf824 | false | null | t3_17gf824 | /r/LocalLLaMA/comments/17gf824/llamacpp_is_slower_on_iris_xe_gpu_than_on_cpu/ | false | false | self | 8 | null |
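One thing worth ruling out when a GPU build is slower than CPU: whether any transformer layers are actually offloaded. With zero offloaded layers, a CLBlast build still pays for copies to the iGPU's shared memory during prompt processing while generation stays on the CPU, which can net out slower. In llama.cpp that is the -ngl/--n-gpu-layers flag; the sketch below shows the same A/B check through the llama-cpp-python bindings (model path is a placeholder).

```python
# Quick A/B check (a sketch, assuming llama-cpp-python built with the same CLBlast backend):
# compare generation speed with zero offloaded layers vs. as many as fit in shared memory.
import time
from llama_cpp import Llama

def bench(n_gpu_layers: int) -> float:
    llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
                n_ctx=2048, n_gpu_layers=n_gpu_layers, verbose=False)
    start = time.time()
    out = llm("Explain what a GPU does.", max_tokens=128)
    n_tokens = out["usage"]["completion_tokens"]
    return n_tokens / (time.time() - start)

print("0 layers :", bench(0), "tok/s")
print("32 layers:", bench(32), "tok/s")
```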
First OS embedding model with 8k context | 90 | Jina AI just open-sourced the first embedding model with 8k context length.
They claim similar performance to OpenAI's text-embedding-ada-002.
They released two sizes: base (0.27GB) and small (0.07GB).
Blog post:
https://jina.ai/news/jina-ai-launches-worlds-first-open-source-8k-text-embedding-rivaling-openai/?123
HF:
https://huggingface.co/jinaai/jina-embeddings-v2-base-en?ref=jina-ai-gmbh.ghost.io
https://huggingface.co/jinaai/jina-embeddings-v2-small-en?ref=jina-ai-gmbh.ghost.io | 2023-10-25T20:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/17getf1/first_os_embedding_model_with_8k_context/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17getf1 | false | null | t3_17getf1 | /r/LocalLLaMA/comments/17getf1/first_os_embedding_model_with_8k_context/ | false | false | self | 90 | {'enabled': False, 'images': [{'id': 'QK3ExLbhFbEXVQbeJRDKAdg79ckCbm1xYdVMp33PhaA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=108&crop=smart&auto=webp&s=4bb7d9a6fdd87b43875881babda703a3d5f1dc05', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=216&crop=smart&auto=webp&s=c8cd509a369a15e692ef3478e1dc622ba255413d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=320&crop=smart&auto=webp&s=dfce5a82d4fd131656a3ecec82ff9b061b10f4f8', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=640&crop=smart&auto=webp&s=bdc7a02860422c117b5bcb158405b28d4b546f80', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=960&crop=smart&auto=webp&s=e6f5a0a775903e0779fb8d445e87f47a99327a30', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?width=1080&crop=smart&auto=webp&s=07ac67b36c1c438df5a157486c9359de3a029f25', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/am6DCapJkLbXN7sKc4RCDkkbymFhoqzXylmXMtbBaHI.jpg?auto=webp&s=1a4bba8961d4fd31f71db511c6dbc8ad61ae5c84', 'width': 1200}, 'variants': {}}]} |
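A generic usage sketch is below, assuming the model loads like a standard HF encoder with trust_remote_code and that mean pooling over the last hidden state is an acceptable way to get sentence vectors; the model card may expose a more convenient encode() helper, so check it first.

```python
# Usage sketch for an 8k-context embedding model via plain transformers (assumptions:
# the checkpoint loads with trust_remote_code and returns last_hidden_state as usual).
import torch
from transformers import AutoTokenizer, AutoModel

name = "jinaai/jina-embeddings-v2-small-en"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, trust_remote_code=True)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, max_length=8192, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state             # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)              # mean-pool over real tokens only
    emb = (hidden * mask).sum(1) / mask.sum(1)
    return torch.nn.functional.normalize(emb, dim=-1)

a, b = embed(["How do I bake bread?", "Bread baking instructions"])
print(float(a @ b))   # cosine similarity, since the vectors are normalized
```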
How much data do I need to feed a model to notice a difference? | 12 | Hi everyone, I apologize if this question seems a bit odd, I'm still learning.
If I take a model, let’s say Mistral 7b, and I want to train and fine tune it, how much data do I need to feed it to notice a difference from the base?
Let’s say as an example I want to feed it all my text messages, so it can respond to text messages in a way that sounds like me.
What number of “message” - “my response” records would I need to train it on to notice any difference from the base model?
I appreciate the help, thank you! | 2023-10-25T18:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/17gbdy1/how_much_data_do_i_need_to_feed_a_model_to_notice/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gbdy1 | false | null | t3_17gbdy1 | /r/LocalLLaMA/comments/17gbdy1/how_much_data_do_i_need_to_feed_a_model_to_notice/ | false | false | self | 12 | null |
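Whatever the count ends up being, the data has to be shaped into (incoming message, your reply) pairs in a consistent template first; a minimal sketch is below (the ### Message/### Response template is just an assumption, match whatever format your fine-tuning framework expects). As a rough, anecdotal rule of thumb, people report noticeable style shifts from LoRA runs on a few thousand such pairs, but there is no hard threshold.

```python
# Sketch: turn (incoming message, my reply) pairs into a JSONL training file.
# The prompt template here is an assumption -- match the format your trainer expects.
import json

pairs = [
    ("Are you coming tonight?", "Yeah, I'll be there around 8."),
    ("Did you see the game?", "Missed it, how bad was it?"),
]

with open("texts_train.jsonl", "w", encoding="utf-8") as f:
    for incoming, reply in pairs:
        record = {"text": f"### Message:\n{incoming}\n\n### Response:\n{reply}"}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```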
function calling strategies? fine-tuned models vs prompt engineering? | 10 | Hi,
I'm very interested in function calling patterns. Currently I am using the guidance library with a template that enforces a function calling pattern. This works... ok. But I think I would get better results from a model that is fine-tuned on a function calling pattern.
Are there models like this? Even better, are there datasets for this that I could use to perform my own fine-tuning?
bonus question... if a model has a 'chat' and a 'instruct' version (like Mistral), what's the difference? My assumption is, chat is intended for multi-turn conversations, while 'instruct' is intended for single-turn instruction following? To make things even more confusing, I've actually heard the opposite. | 2023-10-25T17:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/17gaq9j/function_calling_strategies_finetuned_models_vs/ | its_just_andy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gaq9j | false | null | t3_17gaq9j | /r/LocalLLaMA/comments/17gaq9j/function_calling_strategies_finetuned_models_vs/ | false | false | self | 10 | null |
Looking for research participants for a paid study on AI characters! | 2 | Hi everyone! I am working on a design research project and we're trying to learn more about the world of conversational AI characters. If you or someone you know might be interested in this you can share your thoughts in this survey for a chance to join a paid research study!
Here's the link --> https://qualtricsxmdb89qvlkg.qualtrics.com/jfe/form/SV\_daIiMwIJXGzpzGS | 2023-10-25T17:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/17gafrb/looking_for_research_participants_for_a_paid/ | Curious_Star_9803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gafrb | false | null | t3_17gafrb | /r/LocalLLaMA/comments/17gafrb/looking_for_research_participants_for_a_paid/ | false | false | self | 2 | null |
How can I make the model produce longer replies | 1 | What if I want an essay on
write an essay about bill gates business and biography whatever but it must be over 5000 words
but it always gets cut off around 800 words. I have played around with some settings and it is still not working
https://preview.redd.it/9z1wmamjvdwb1.png?width=1866&format=png&auto=webp&s=d3821a16bef874c16ce34a6f91b2e16c916d11ee
any ideas? | 2023-10-25T17:27:32 | https://www.reddit.com/r/LocalLLaMA/comments/17ga5nh/how_can_i_make_the_model_to_produce_longer_replies/ | _Sneaky_Bastard_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ga5nh | false | null | t3_17ga5nh | /r/LocalLLaMA/comments/17ga5nh/how_can_i_make_the_model_to_produce_longer_replies/ | false | false | 1 | null | |
Why use one quant format over another? | 5 | Why would i use gptq over awq or llamacpp? For example i can load 8bit models with autogptq without exllama. I can only load 4bit gptq models with exllama. But exllama is faster than autogptq. Llamacpp supports 4bit and 8 bit but isnt the fastest either. None of these support sliding window attention. What do yall use? | 2023-10-25T16:47:28 | https://www.reddit.com/r/LocalLLaMA/comments/17g9880/why_use_one_quant_format_over_another/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g9880 | false | null | t3_17g9880 | /r/LocalLLaMA/comments/17g9880/why_use_one_quant_format_over_another/ | false | false | self | 5 | null |
So what will happen if I use four 3090s without NVLink | 2 | I am going to buy an X99 motherboard and it doesn't support NVLink. Will this limit the performance of LLM inference, and what speed would it reach?
p.s. I want to use Falcon 180B in Q4_K_M which is around 110GB | 2023-10-25T16:38:44 | https://www.reddit.com/r/LocalLLaMA/comments/17g90xv/so_what_will_happen_if_i_use_four_3090_without/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g90xv | false | null | t3_17g90xv | /r/LocalLLaMA/comments/17g90xv/so_what_will_happen_if_i_use_four_3090_without/ | false | false | self | 2 | null |
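Layer-split inference (as llama.cpp and similar backends do it) moves only small activations between cards, so the lack of NVLink mostly shows up as somewhat slower cross-GPU traffic rather than a hard blocker. The bigger question is whether the quant fits at all; a rough fit check (plain arithmetic, not a benchmark) is below.

```python
# Rough fit check: a ~110 GB Q4_K_M Falcon-180B plus KV cache vs. 4 x 24 GB of VRAM.
model_gb    = 110      # quant size from the post
kv_cache_gb = 5        # rough allowance for a few thousand tokens of context
overhead_gb = 4        # CUDA buffers, scratch space, anything else on the cards
vram_gb     = 4 * 24

needed = model_gb + kv_cache_gb + overhead_gb
print(f"need ~{needed} GB, have {vram_gb} GB "
      f"-> roughly {needed - vram_gb} GB of layers would spill to CPU RAM")
```

So with this configuration some layers would end up on the CPU side, which tends to matter far more for speed than the missing NVLink.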
Llama-cpp-Python, temperature 0 and still different outputs | 5 | Hello,
I am using llama-cpp-python and when I am trying to use a downloaded pre-trained model by setting a fixed seed and temp=0.0, I still get different outputs from the same input.
Any clue about that ? I expected temp=0.0 to have a greedy behaviour. | 2023-10-25T16:32:35 | https://www.reddit.com/r/LocalLLaMA/comments/17g8vvk/llamacpppython_temperature_0_ans_still_different/ | Percolation_X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g8vvk | false | null | t3_17g8vvk | /r/LocalLLaMA/comments/17g8vvk/llamacpppython_temperature_0_ans_still_different/ | false | false | self | 5 | null |
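For reference, the knobs that usually have to be pinned are the seed in the constructor and the sampling parameters at call time (temperature 0 plus top_k=1 for strictly greedy decoding); a sketch with the llama-cpp-python parameter names is below, model path being a placeholder. Even then, output can legitimately differ between runs if the thread count or batch size changes, because floating-point sums get reduced in a different order.

```python
# Sketch of pinning down llama-cpp-python sampling (parameter names per the Python
# bindings; model path is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="model.Q4_K_M.gguf", seed=42, n_threads=8, verbose=False)

out = llm(
    "Q: Name three primary colors. A:",
    max_tokens=64,
    temperature=0.0,   # greedy-ish
    top_k=1,           # force the single most likely token
)
print(out["choices"][0]["text"])
```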
Usage of local llama? | 1 | [removed] | 2023-10-25T16:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/17g8oef/usage_of_local_llama/ | dustinChen93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g8oef | false | null | t3_17g8oef | /r/LocalLLaMA/comments/17g8oef/usage_of_local_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SzLOgYs_QNumRCqQFKoFQVLdSb8MiVKtHYIKRjsNThg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?width=108&crop=smart&auto=webp&s=71a09220fc0c049bd2de79dfaa863e5be594d568', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?width=216&crop=smart&auto=webp&s=4c4997b5d3dd013e707feaaa3f7480e0a28da030', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?width=320&crop=smart&auto=webp&s=baa41953ae9b41f80df3704fe5fca32115e285ef', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/39Y3N7lcV_w48CUbPi8s7n5cm4hSIfP9OGRlg3MsHSE.jpg?auto=webp&s=2fa28f5ae2bbf1aa35c03db0953babc41e500735', 'width': 512}, 'variants': {}}]} |
Why "Of course! Kickball is a fun..."? Did they just train on the bench answers from GPT-4? | 16 | Question No. 3, AlpacaEval:
Hi, my sister and her girlfriends want me to play kickball with them. Can you explain how the game is played?
https://preview.redd.it/jr5wnyuwidwb1.png?width=1266&format=png&auto=webp&s=cead58530b406bd91b1081517b23b0713bc54e01 | 2023-10-25T16:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/17g8klb/why_of_course_kickball_is_a_fun_did_they_just/ | No-Link-2778 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g8klb | false | null | t3_17g8klb | /r/LocalLLaMA/comments/17g8klb/why_of_course_kickball_is_a_fun_did_they_just/ | false | false | 16 | null | |
Local Lamma with 4~5x crossfire of Radeon 580 | 6 | Hello guys, recently, with the lower prices for GFX cards, I thought it would be easier to build a good rig for LLM coding.
The question is, would it be possible to crossfire the 5 cards I will have and achieve good inference with the top models like 74b models? Wizardcoder74b
I'd also put 128 GB of RAM in it and a Ryzen processor with reasonable cores.
What are your thoughts on this, guys? I'm from Brazil. Thank you! | 2023-10-25T16:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/17g8edb/local_lamma_with_45x_crossfire_of_radeon_580/ | NecessaryPreference6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g8edb | false | null | t3_17g8edb | /r/LocalLLaMA/comments/17g8edb/local_lamma_with_45x_crossfire_of_radeon_580/ | false | false | self | 6 | null |
Why are you here? | 17 | I would like to know your motivations on the subject. It is always interesting.
[View Poll](https://www.reddit.com/poll/17g7ko9) | 2023-10-25T15:35:56 | https://www.reddit.com/r/LocalLLaMA/comments/17g7ko9/why_you_are_here/ | M2_Ultra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g7ko9 | false | null | t3_17g7ko9 | /r/LocalLLaMA/comments/17g7ko9/why_you_are_here/ | false | false | self | 17 | null |
Play with AI Model (Mistral, llama2) in the browser with webgpu | 1 | [removed] | 2023-10-25T15:14:54 | https://www.reddit.com/r/LocalLLaMA/comments/17g73o4/play_with_ai_model_mistral_llama2_in_the_browser/ | gptgpt1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g73o4 | false | null | t3_17g73o4 | /r/LocalLLaMA/comments/17g73o4/play_with_ai_model_mistral_llama2_in_the_browser/ | false | false | self | 1 | null |
How to run functions from LLM output? | 6 | For example, If I wanted to ask for the weather from the LLM, the LLM does not know the weather but from that question how do I get it to call a function to display the weather. I believe ChatGPT has this already but how do we do it in the LLM world? | 2023-10-25T15:10:39 | https://www.reddit.com/r/LocalLLaMA/comments/17g709n/how_to_run_functions_from_llm_output/ | SatoshiReport | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g709n | false | null | t3_17g709n | /r/LocalLLaMA/comments/17g709n/how_to_run_functions_from_llm_output/ | false | false | self | 6 | null |
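With local models the usual pattern is prompt-level function calling: describe the tool in the prompt, ask the model to reply with JSON when it wants the tool, parse that JSON, run the real Python function yourself, and feed the result back for the final answer. A minimal sketch is below; llm stands in for any completion callable (llama-cpp-python, an OpenAI-compatible endpoint, etc.) and get_weather returns fake data, both assumptions for illustration.

```python
# Minimal tool-dispatch sketch.  `llm` is a stand-in for any completion function and
# the weather "tool" returns fake data -- the model never fetches weather itself,
# your code does, and the model only decides when to ask for it.
import json

TOOLS = {"get_weather": lambda city: {"city": city, "temp_c": 17, "sky": "cloudy"}}

SYSTEM = (
    'You can call tools. To call one, reply with ONLY a JSON object like '
    '{"tool": "get_weather", "arguments": {"city": "..."}}. Otherwise answer normally.'
)

def answer(question: str, llm) -> str:
    first = llm(f"{SYSTEM}\n\nUser: {question}\nAssistant:")
    try:
        call = json.loads(first)
        result = TOOLS[call["tool"]](**call["arguments"])     # run the real function
    except (json.JSONDecodeError, KeyError, TypeError):
        return first                                          # model chose to answer directly
    follow_up = (f"{SYSTEM}\n\nUser: {question}\nTool result: {json.dumps(result)}\n"
                 f"Assistant (use the tool result to answer):")
    return llm(follow_up)
```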
I came from /selfhosted | 0 | Better to land here from APIdrawner maybe ❤️ | 2023-10-25T14:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/17g6hrn/i_came_from_selfhosted/ | fab_space | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g6hrn | false | null | t3_17g6hrn | /r/LocalLLaMA/comments/17g6hrn/i_came_from_selfhosted/ | false | false | spoiler | 0 | null |
2024: Small Models Will Be Insane | 186 | Google Gemini and GPT-5 will most likely be God Tier at creating synthetic data. These will be used extensively for creating amazing datasets which will be significantly higher quality than what we can make now.
Combine that with all of the research Big Tech has been releasing over 2023, I predict we'll have 33b models outperforming GPT-3.5 by Q4 2024.
What are your predictions for next year? | 2023-10-25T14:44:32 | https://www.reddit.com/r/LocalLLaMA/comments/17g6fgk/2024_small_models_will_be_insane/ | FuckShitFuck223 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g6fgk | false | null | t3_17g6fgk | /r/LocalLLaMA/comments/17g6fgk/2024_small_models_will_be_insane/ | false | false | self | 186 | null |
OOM Error while using TRL for RLHF Fine-tuning | 4 | Hi,
I am using TRL for RLHF fine-tuning the Llama-2-7B model and getting an OOM error (even with batch\_size=1). If anyone used TRL for RLHF can please tell me what I am doing wrong? Code details can be found in the GitHub [issue](https://github.com/huggingface/trl/issues/899). | 2023-10-25T12:38:54 | https://www.reddit.com/r/LocalLLaMA/comments/17g3suz/oom_error_while_using_trl_for_rlhf_finetuning/ | ali0100u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g3suz | false | null | t3_17g3suz | /r/LocalLLaMA/comments/17g3suz/oom_error_while_using_trl_for_rlhf_finetuning/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'O7Pw49scKRZmQEiTOyE73M9rQ6AOo_4mklC7Eo0CbaU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?width=108&crop=smart&auto=webp&s=61e6c50fb6f03abe3ae256dae7f8464ae360d55d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?width=216&crop=smart&auto=webp&s=19485e053e65f76faecf0f8fc9d5b790065f563c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?width=320&crop=smart&auto=webp&s=87225f07984972b91f851930586601f3fbb144c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?width=640&crop=smart&auto=webp&s=1a28832877254e536205a0464596f4f43f344957', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?width=960&crop=smart&auto=webp&s=739f184c9f02e22a43b5ada78ba1f289071b3d11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?width=1080&crop=smart&auto=webp&s=8d1e7dd8dad0cec465cf2e2ba564a64bcc0d8387', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/om6Knjvep1j4G0cbDi-LUisSWOnA7Q7vLZnAfoAfP1w.jpg?auto=webp&s=847a3370b523cfb5417d973145fae4bfd7f403f9', 'width': 1200}, 'variants': {}}]} |
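Without seeing the exact script, the settings that most often tame PPO memory in TRL are a 4-bit base with LoRA (so no full-precision copy or separate reference model is kept), a mini_batch_size of 1, and gradient checkpointing. The sketch below shows that general shape of setup; the exact keyword arguments vary between TRL versions, so treat it as an assumption to check against the docs rather than a drop-in fix.

```python
# Memory-saving shape for a TRL PPO run (assumptions: this TRL version accepts
# peft_config and load_in_4bit pass-through; verify against your installed version).
from peft import LoraConfig
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_id = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    model_id,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    load_in_4bit=True,                      # quantized base; only LoRA params are trained
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=4, mini_batch_size=1, gradient_accumulation_steps=4)
ppo_trainer = PPOTrainer(config=config, model=model, ref_model=None, tokenizer=tokenizer)
# With a peft model and ref_model=None, TRL can derive the reference policy by
# disabling the adapters, so no second full copy of the 7B weights is needed.
```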
How to download models easily from HuggingFace? | 15 | I'm looking for a simple way to download models (GPTQ, AWQ, GGUF), such that the command
* is idempotent (on repeated runs it checks if file is already downloaded correctly, e.g. checksum)
* supports resume (interrupted download is likely)
* can download a list of files (e.g. GPTQ)
* when remote is changed, it is downloaded (like git fetch)
For example, I want to download a GPTQ model, like `TheBloke/Phind-CodeLlama-34B-v2-GPTQ`, which contains multiple files, in a specific branch `gptq-4bit-32g-actorder_True`.
The approaches I've evaluated are:
* `huggingface-cli download TheBloke/Phind-CodeLlama-34B-v2-GPTQ --revision=gptq-4bit-32g-actorder_True --local-dir . --local-dir-use-symlinks False`
* uses \`.cache\` folder that has to be manually deleted after
* not idempotent (overwrites)
* no resume
* `aria2c -c -x 10 -s 10 "https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/resolve/main/phind-codellama-34b-v2.Q4_K_M.gguf" -o "phind-codellama-34b-v2.Q4_K_M.gguf"`
* supports resume, change detection, parallel threads
* only works with single files (e.g. GGUF)
* `git lfs clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ`
* no resume (git fetch, git pull does nothing on interrupted clone)
* no parallel threads (slow)
* supports change detection (I assume, e.g. git fetch)
​
Do you have any suggestions?
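One more option that covers most of the checklist above: the huggingface_hub Python API that the CLI wraps. snapshot_download resumes interrupted transfers, skips files that already match the requested revision, pulls a whole multi-file GPTQ branch in parallel, and re-downloads when the remote changes. A sketch:

```python
# Sketch using huggingface_hub directly; repo, branch, and folder mirror the CLI example.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="TheBloke/Phind-CodeLlama-34B-v2-GPTQ",
    revision="gptq-4bit-32g-actorder_True",   # the branch, as in the CLI example
    local_dir="Phind-CodeLlama-34B-v2-GPTQ",
    local_dir_use_symlinks=False,             # real files instead of cache symlinks
    max_workers=8,                            # parallel file downloads
)
print("files in", path)
```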
​
​
​
​ | 2023-10-25T12:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/17g3sq4/how_to_download_models_easily_from_huggingface/ | GeniusPengiun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g3sq4 | false | null | t3_17g3sq4 | /r/LocalLLaMA/comments/17g3sq4/how_to_download_models_easily_from_huggingface/ | false | false | self | 15 | null |
Multiple weak GPUs vs one bigger one, does anyone have a comparison? | 15 | I just started looking into running llama/vicuna locally and can't find much info on this topic. The main problem seems to be the amount of VRAM, but it seems I can use multiple GPUs ...
So naturally I'm thinking: is there any big difference in performance that a new GPU can provide? Did anyone compare e.g. a 3090 24GB with 3x RX580 8GB (which can be acquired way cheaper from former mining nodes)? Or, if CUDA is important, maybe 2x 2060 12GB? | 2023-10-25T11:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/17g2vjq/multiple_weak_gpus_vs_one_bigger_one_does_anyone/ | ghormoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g2vjq | false | null | t3_17g2vjq | /r/LocalLLaMA/comments/17g2vjq/multiple_weak_gpus_vs_one_bigger_one_does_anyone/ | false | false | self | 15 | null |
Adversarial Reinforcement Learning | 10 | [https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning](https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning) | 2023-10-25T10:12:39 | https://www.reddit.com/r/LocalLLaMA/comments/17g1fio/adversarial_reinforcement_learning/ | ml_dnn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g1fio | false | null | t3_17g1fio | /r/LocalLLaMA/comments/17g1fio/adversarial_reinforcement_learning/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '5N3MoARFfZyWaKL1u7akTUa4AuVRG4tk38RcGMcLxCE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?width=108&crop=smart&auto=webp&s=9c69502b3f5aaa0bfd533fa5eca15ccde0b6ce3f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?width=216&crop=smart&auto=webp&s=d70f63ec918d0de3a5c55facc5908d6356b4d0c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?width=320&crop=smart&auto=webp&s=1197f68fa93f4ddfcbf02c790e60c8310308cad4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?width=640&crop=smart&auto=webp&s=a93c74b392d3c3296a02c2abdada89ea1233f4e3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?width=960&crop=smart&auto=webp&s=0b3d87e927b7fd53e67dd7e6c2453aeb955235a3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?width=1080&crop=smart&auto=webp&s=b0969650c8a5be4342fa8da56d927f3f77e20916', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z0j9dBbW63ClzWufBpwJBHGcWXbmya-0hRwGAxKJLbU.jpg?auto=webp&s=728bc3ad18e83732f286d9c9113e6cf5c8f2227b', 'width': 1200}, 'variants': {}}]} |
Strange Llama responses | 0 | First off - love this community. Such a great resource and everyone is so helpful! I'm the co-founder of a Mac app called [Odyssey](https://odysseyapp.io/). The basic concept is that you can connect and run multiple AI models without a single line of code. Think a much simpler version of ComfyUI for Stable Diffusion combined with an easy to use Langchain-ish UI.
We have Llama2 7B and Alpaca Cielo 2 7B running locally as chat interfaces but whenever the model decides to output an emoji, we get a strange set of characters like <0xF0>, <0x98>, etc. See screenshot below for a basic prompt that, by nature, leads to a lot of emojis.
https://preview.redd.it/0jhe7p0xmbwb1.png?width=2762&format=png&auto=webp&s=fc5f2b2365791772f86b56cb903b9e5a163d70d8
Has anyone experienced this? And if so any idea how to work around it? Ideally it would actually post emojis - similar to how ChatGPT does - but if not, would prefer it just didn't put these weird sequences out there. | 2023-10-25T09:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/17g17ek/strange_llama_responses/ | creatorai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g17ek | false | null | t3_17g17ek | /r/LocalLLaMA/comments/17g17ek/strange_llama_responses/ | false | false | 0 | null | |
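Those <0xF0>, <0x98>... pieces are the llama tokenizer's byte-fallback tokens: an emoji is four UTF-8 bytes, each emitted as its own token, so a front end that converts pieces to text one at a time shows them literally instead of assembling the bytes. A small sketch of the workaround (buffer byte pieces until they decode as UTF-8, assuming the stream hands you the piece strings):

```python
# Sketch: collapse llama byte-fallback pieces such as "<0xF0>" back into real UTF-8.
import re

BYTE_PIECE = re.compile(r"^<0x([0-9A-Fa-f]{2})>$")

def detokenize(pieces):
    buf, out = bytearray(), []
    for piece in pieces:
        m = BYTE_PIECE.match(piece)
        if m:
            buf.append(int(m.group(1), 16))
            try:
                out.append(buf.decode("utf-8"))   # flush once the bytes form a valid character
                buf.clear()
            except UnicodeDecodeError:
                continue                          # wait for the remaining bytes
        else:
            out.append(piece)
    return "".join(out)

print(detokenize(["Nice", " day", " ", "<0xF0>", "<0x9F>", "<0x98>", "<0x8A>"]))  # -> "Nice day 😊"
```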
Best Local LLM For low end device coding | 3 | Hey, I am new to LLMs (aka a noob). I want to run a local LLM on my NVIDIA 1650 Mobile with 4GB of VRAM on my HP Pavilion with 8GB of RAM. I am constantly traveling and want an AI model to help me with coding and troubleshooting. | 2023-10-25T09:42:31 | https://www.reddit.com/r/LocalLLaMA/comments/17g10kg/best_local_llm_for_low_end_device_coding/ | Low-Plastic-2399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17g10kg | false | null | t3_17g10kg | /r/LocalLLaMA/comments/17g10kg/best_local_llm_for_low_end_device_coding/ | false | false | self | 3 | null |
Qwen 14B Chat is *insanely* good. And with prompt engineering, it's no holds barred. | 348 | 2023-10-25T08:10:32 | https://huggingface.co/Qwen/Qwen-14B-Chat | BayesMind | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17fztds | false | null | t3_17fztds | /r/LocalLLaMA/comments/17fztds/qwen_14b_chat_is_insanely_good_and_with_prompt/ | false | false | 348 | {'enabled': False, 'images': [{'id': 'cK1ZgUPo0RQmtqY6ZJm-P_OIm326pcmEzYDodKQ0esk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?width=108&crop=smart&auto=webp&s=e2ab77383d908d049bfebbfc2738f5fd7ddbd50c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?width=216&crop=smart&auto=webp&s=d0bd97f12c33004ff06b56eb4e1f932c096079b2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?width=320&crop=smart&auto=webp&s=76d3f75d00146303b05912be0638a6ec68c43cfe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?width=640&crop=smart&auto=webp&s=067df1ac3cf0ca79aee9b13196b76cef522a202b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?width=960&crop=smart&auto=webp&s=1687a7d18b04545229bb60f10b8270d1dae0dd4f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?width=1080&crop=smart&auto=webp&s=9d8613602f5f5496f0257d623341a0a4fb404689', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nWuqrBhfv22CqmhfGDMmiL3OW1shTNYb_2o03oBB6X0.jpg?auto=webp&s=008ddff1e3be533519edc5327d59f29cd08ed791', 'width': 1200}, 'variants': {}}]} | ||
I guess it's finally time for me to learn what I am doing. Any recommendations for videos and other resources to start digging deeper? | 23 | I have a rough grasp on the basic theory, but I'm ready to go much deeper.
Looking for YouTube channels and other resources that can get me up to speed on the more technical aspects and give me a better understanding of the various settings and other parameters I've been 'trial and erroring' my way through up to now, plus more details on LoRAs and fine-tuning, formatting, etc.
Anyone have any good recommendations to get me started? | 2023-10-25T07:03:48 | https://www.reddit.com/r/LocalLLaMA/comments/17fyxxi/i_guess_its_finally_time_for_me_to_learn_what_i/ | Boring_Isopod2546 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fyxxi | false | null | t3_17fyxxi | /r/LocalLLaMA/comments/17fyxxi/i_guess_its_finally_time_for_me_to_learn_what_i/ | false | false | self | 23 | null |
Would people use a feature where you can call your custom characters on the phone? | 6 | Hey everyone! I've been working on a website called Textation, where users can create their own chatbot characters.
The newest feature I just finished (and seeking feedback on) is the ability for users to actually call their bots on the phone.
They remember previous conversations, just like if you were chatting on the web. There's a brief delay (4-10 seconds) between you speaking and the bot replying, just for all of the processing (STT -> LLM -> TTS)
Before I dive deeper into enhancing this, I'd love to hear your thoughts. Would you find this feature useful or intriguing? Would it be annoying/creepy? What improvements or additional features might you suggest? Thanks in advance for your insights! | 2023-10-25T06:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/17fyape/would_people_use_a_feature_where_you_can_call/ | claygraffix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fyape | false | null | t3_17fyape | /r/LocalLLaMA/comments/17fyape/would_people_use_a_feature_where_you_can_call/ | false | false | self | 6 | null |
Finetune codellama for code completion task on specific programming language | 18 | Hello, everyone. I want to finetune CodeLlama for a niche programming language and hope the fine-tuned model can perform well on the code completion task, just like Copilot: I give it the code context and the model finishes the rest. But I am confused about the training process: should I fine-tune or pretrain? If I choose to fine-tune, what should my dataset look like? Let's say we collect code files from GitHub; how should I process the code, just divide each file into two parts and feed them into the model? I need your help desperately. | 2023-10-25T05:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/17fxdmc/finetune_codellama_for_code_completion_task_on/ | Apprehensive_Hawk812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fxdmc | false | null | t3_17fxdmc | /r/LocalLLaMA/comments/17fxdmc/finetune_codellama_for_code_completion_task_on/ | false | false | self | 18 | null |
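On the dataset question: for completion-style training the usual preparation is fill-in-the-middle (FIM) samples rather than a plain two-way split, i.e. cut each file into prefix/middle/suffix and train the model to produce the middle. The sketch below emits samples in Code Llama's documented <PRE>/<SUF>/<MID>/<EOT> infilling layout; the file extension is a placeholder and the sentinel strings should be verified against the tokenizer's actual special tokens before training.

```python
# Sketch of building fill-in-the-middle (FIM) samples from raw source files.
# Sentinel strings follow Code Llama's infilling layout but are assumptions to verify.
import glob
import json
import random

def make_fim_sample(code: str) -> str:
    a, b = sorted(random.sample(range(len(code)), 2))   # pick a random span to mask out
    prefix, middle, suffix = code[:a], code[a:b], code[b:]
    return f"<PRE> {prefix} <SUF>{suffix} <MID>{middle} <EOT>"

with open("fim_train.jsonl", "w", encoding="utf-8") as out:
    for path in glob.glob("corpus/**/*.mylang", recursive=True):   # placeholder extension
        code = open(path, encoding="utf-8").read()
        if len(code) > 200:
            for _ in range(4):                                      # several samples per file
                out.write(json.dumps({"text": make_fim_sample(code)}) + "\n")
```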
Seeking Advice: Cost-Efficient Way to Generate Descriptions for 100 Million Images with LLaVa? | 1 | [removed] | 2023-10-25T04:46:52 | https://www.reddit.com/r/LocalLLaMA/comments/17fwy3e/seeking_advice_costefficient_way_to_generate/ | JackiJumper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fwy3e | false | null | t3_17fwy3e | /r/LocalLLaMA/comments/17fwy3e/seeking_advice_costefficient_way_to_generate/ | false | false | self | 1 | null |
Any thoughts on how the co-pilots are being built for Microsoft Office 365 or other similar apps? | 2 | Wondering if Microsoft Office co-pilot and similar co-pilot applications are being built on LLMs as is or fine-tuned models for specific tasks? | 2023-10-25T03:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/17fvht0/any_thoughts_on_how_the_copilots_are_being_built/ | neuro_boogie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fvht0 | false | null | t3_17fvht0 | /r/LocalLLaMA/comments/17fvht0/any_thoughts_on_how_the_copilots_are_being_built/ | false | false | self | 2 | null |
How to quickly and easily run an automated "Political Compass" test on your local llama? | 1 | 2023-10-25T02:56:56 | innocuousAzureus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17fuzy5 | false | null | t3_17fuzy5 | /r/LocalLLaMA/comments/17fuzy5/how_to_quickly_and_easily_run_an_automated/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'JgO4ZA0zvR8L_DQGcyBOzG6Qji4NBE2Y7tOXFCDe_yQ', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/40u6kna3k9wb1.png?width=108&crop=smart&auto=webp&s=7218e0fc405c6cfb0aeb97aabaf168d26895d553', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/40u6kna3k9wb1.png?width=216&crop=smart&auto=webp&s=a07ce3c906c9440194044c9ba745b7b5d35efe1a', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/40u6kna3k9wb1.png?width=320&crop=smart&auto=webp&s=b407844bd104ede29f77a511039218cbb8fb8e32', 'width': 320}, {'height': 801, 'url': 'https://preview.redd.it/40u6kna3k9wb1.png?width=640&crop=smart&auto=webp&s=eb427bde32557c3ef77274e7d1ef5808e843be6f', 'width': 640}, {'height': 1202, 'url': 'https://preview.redd.it/40u6kna3k9wb1.png?width=960&crop=smart&auto=webp&s=82de8e43ac0923bcebc650bfa92ac3b5802e5a9c', 'width': 960}], 'source': {'height': 1342, 'url': 'https://preview.redd.it/40u6kna3k9wb1.png?auto=webp&s=8fe24c38cc1e694e559fbf7d960bd948b4e21905', 'width': 1071}, 'variants': {}}]} | |||
Can you fine-tune a GGUF model? | 1 | [removed] | 2023-10-25T02:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/17fuzqu/can_you_finetune_a_gguf_model/ | asking4bud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fuzqu | false | null | t3_17fuzqu | /r/LocalLLaMA/comments/17fuzqu/can_you_finetune_a_gguf_model/ | false | false | self | 1 | null |
A detailed comparison between GPTQ, AWQ, EXL2, q4_K_M, q4_K_S, and load_in_4bit: perplexity, VRAM, speed, model size, and loading time. | 182 | 2023-10-25T02:43:36 | https://oobabooga.github.io/blog/posts/gptq-awq-exl2-llamacpp/ | oobabooga4 | oobabooga.github.io | 1970-01-01T00:00:00 | 0 | {} | 17fuqr7 | false | null | t3_17fuqr7 | /r/LocalLLaMA/comments/17fuqr7/a_detailed_comparison_between_gptq_awq_exl2_q4_k/ | false | false | default | 182 | null | |
Augmental — an RP model trained on a New Dataset of Human-Written, AI-enhanced texts, with a focus on response length and dataset quality | 57 | **Augmental-13b** ([Hugging Face page](https://huggingface.co/Heralax/Augmental-13b)) ([TheBloke Quant](https://huggingface.co/TheBloke/Augmental-13B-GGUF)) is a finetune on top of the already-strong MythoMax, using a new dataset of "augmented" text — text from a human-written source that's processed into the right format, and then enhanced, using AI (in this case, GPT-4). **This is the** **polished evolution of MythoMakise-13b**, which reached #13 on the Ayumi leaderboard in the 13b class, if you've heard of that one.
**What makes it special:** [Augmental-13b](https://huggingface.co/Heralax/Augmental-13b) is the flagship model of my new [Augmental dataset](https://huggingface.co/datasets/Heralax/Augmental-Dataset), which was ***created from scratch.*** The Augmental dataset is a multiturn dataset with 7.86k replies spread across about 480 different conversations and 7 different characters, each with their own character card. Emphasis is put on quality and longer responses. Each reply contains the chat history, the speaker of the reply, the reply itself, and the context behind the conversation in which the reply happens. **Training was done on the default SillyTavern prompt format:** this model is intended to work optimally and out of the box on one of the most popular frontends.
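To illustrate the structure described above, a single training example might look roughly like the following; the field names and dialogue are illustrative guesses, not the actual schema or contents of the Augmental dataset:

```python
# Illustrative example only; the real dataset may use different field names.
example = {
    "context": "Late night in the lab; the group debates the next experiment.",
    "speaker": "Kurisu",
    "history": [
        {"speaker": "Okabe", "text": "The phone microwave is not a toy, assistant."},
        {"speaker": "Kurisu", "text": "Stop calling me that. And it's *your* experiment."},
    ],
    "reply": "*sighs and pulls up the logs* Fine. But if this fries another banana, you are explaining it to Mr. Braun.",
}
```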
The model uploaded to HF has been trained for 1 epoch, though I do have a 2-epoch version sitting on my hard drive. If anyone asks, I'll upload that too.
**This model is not censored.** Because to hell with that. I was able to get it to generate very questionable stuff in my testing, so go ahead and have fun. Even GPT-4 didn't shy away from generating the NSFW parts of the dataset, and they haven't taken my API key away yet, so go figure.
[Data deets](https://preview.redd.it/ihr9c2p7w8wb1.png?width=1536&format=png&auto=webp&s=d161ddd66ad1c83c976e7fc4ebc8d02641835bc9)
**The process (basically an essay on the data generation)**: The data was scraped from a visual novel (Steins;Gate), split into distinct conversations based on certain criteria, filtered for longer, higher-quality conversations, rewritten and reformatted into RP format using GPT-4, and then gone over *a second time* with GPT-4 to make 4 replies in each conversation extra long, high-quality exemplars. Some manual QA was done, but not more than like 4 hours of it. What sets this approach apart is that instead of generating entirely synthetic data (e.g., Airoboros), using hybrid data (PIPPA), or using my own edited past chats with RP bots (like many model creators do), this process 1) only took a couple of days (including pausing to fix issues) once the prompts had been figured out, 2) can be shared (unlike one's own edited chats, which can be NSFL), and 3) retains some human creativity and variety over pure synthetic data, due to the human origins of the text. Data cost: around $280 (including all the iterating on the prompts before I actually ran the generation script).
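As a rough sketch of what the GPT-4 rewrite step described above can look like (this is not the actual script from the linked repo; the prompt text and function are placeholders, and it assumes the pre-1.0 OpenAI Python client that was current at the time):

```python
import openai

REWRITE_PROMPT = (
    "Rewrite the following visual-novel dialogue as roleplay-formatted chat, "
    "keeping each character's voice and adding *actions* where natural:\n\n{chunk}"
)

def augment_conversation(chunk: str) -> str:
    """Send one scraped conversation chunk to GPT-4 for reformatting/enhancement."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": REWRITE_PROMPT.format(chunk=chunk)}],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]
```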
Data quality was a massive priority of this model. Despite occasional flubs in the data generation (deliberately not solved in a manual QA pass so that the utility of the augmented data approach can be demonstrated), Augmental's good performance (at least in my testing) shows that any flubs are made up for/ironed out in the aggregate.
Augmental is the natural evolution/the polished and scaled-up version of [MythoMakise](https://huggingface.co/Heralax/MythoMakiseMerged-13b) — in fact, you might consider MythoMakise an advanced prototype, which accidentally scored #13 on the [Ayumi leaderboard](https://rentry.co/ayumi_erp_rating). Hopefully this one works even better. If you want to find out for yourself, remember that Bloke quants [are available](https://huggingface.co/TheBloke/Augmental-13B-GGUF)!
Not to go too overboard on the self-promotion, but I wrote about the rationale behind augmented data in a bit more depth [here](https://promptingweekly.substack.com/p/human-sourced-ai-augmented-a-promising?utm_source=profile&utm_medium=reader2) if you're interested.
**The hope** is that AI-augmented data will help solve *one* of the two big problems I see AI RP (and open-source AI in general) facing right now: data sourcing. (The other problem is benchmarking.) It's always been frustrating to me that, despite huge amounts of well-written creative text existing out there in the world, very little of it could be used to enhance conversational models (it simply wasn't in the right format, and basically never had \*actions\*). Using AI to reformat and enhance some source text is my attempted solution (I'm saying "my" attempted solution because I don't know of any major past examples of this, correct me if I'm wrong). The training code and prompts for data augmentation and everything are [open-sourced](https://github.com/e-p-armstrong/amadeus), so you can play around with them yourself if you want. The main attraction in that repo is `processing_refactor.ipynb`.
**Model mascot: Augmen-tan** (yet another pun, this time of Augmental and the -tan honorific in Japanese).
[Llama Silverhand](https://preview.redd.it/e0asuys8w8wb1.png?width=552&format=png&auto=webp&s=ed15f40940ccd7333281d22c5676db6232eeeffe)
I'm currently looking into making the data augmentation a lot cheaper and faster by using a 70b instead of GPT-4—I might post here again if I make progress on that front. Until then, I'm happy to answer any questions, and would love if you gave Augmental-13b a shot! Maybe you can even hack the data generation script a bit to work on your own raw text, and create your own dataset! (Just be mindful of OAI API costs). Beyond "just" the model, the real intention here is to solve the severe data shortage we're all experiencing right now.
I hope something in all these links proves useful to you, and either way, I'd appreciate any feedback.
Also, a note for the people out there with NASA computers and refined taste, I'm going to try tuning a 70b on it soon (like, maybe as early as tonight), so don't worry.
Edit: Added joke to model mascot caption | 2023-10-25T00:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/17fsbpe/augmental_an_rp_model_trained_on_a_new_dataset_of/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fsbpe | false | null | t3_17fsbpe | /r/LocalLLaMA/comments/17fsbpe/augmental_an_rp_model_trained_on_a_new_dataset_of/ | false | false | 57 | {'enabled': False, 'images': [{'id': 'CjxEi7ORNT5QbgIp0JjKf5QhK6yMB9yyhsE42zVKv-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=108&crop=smart&auto=webp&s=7dae9238a114f588e999b10c0c9a5008d77361c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=216&crop=smart&auto=webp&s=65f56f76830942a4eca6d22694341bfcebad750f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=320&crop=smart&auto=webp&s=cda140aa082103655887b9d02b3fef5430b59133', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=640&crop=smart&auto=webp&s=a650fe8ae21f483e1282e3ad0570003f0719365d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=960&crop=smart&auto=webp&s=bbf4e83129ddaa01c5de4f141d6a2c78e5b56f6e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=1080&crop=smart&auto=webp&s=aa1f844c53e5dfd0e6a8fdfa802f90029cdc3149', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?auto=webp&s=2847ac75999ea956cc72273d383977c41a960a8e', 'width': 1200}, 'variants': {}}]} | |
Augmental — an RP model trained on Human-Written, AI-enhanced texts, with a focus on response length and dataset quality | 1 | [removed] | 2023-10-25T00:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/17fs940/augmental_an_rp_model_trained_on_humanwritten/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fs940 | false | null | t3_17fs940 | /r/LocalLLaMA/comments/17fs940/augmental_an_rp_model_trained_on_humanwritten/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'CjxEi7ORNT5QbgIp0JjKf5QhK6yMB9yyhsE42zVKv-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=108&crop=smart&auto=webp&s=7dae9238a114f588e999b10c0c9a5008d77361c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=216&crop=smart&auto=webp&s=65f56f76830942a4eca6d22694341bfcebad750f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=320&crop=smart&auto=webp&s=cda140aa082103655887b9d02b3fef5430b59133', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=640&crop=smart&auto=webp&s=a650fe8ae21f483e1282e3ad0570003f0719365d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=960&crop=smart&auto=webp&s=bbf4e83129ddaa01c5de4f141d6a2c78e5b56f6e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?width=1080&crop=smart&auto=webp&s=aa1f844c53e5dfd0e6a8fdfa802f90029cdc3149', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/32sSU0jUzPEZ7_yWvN67Af-u3JPo6xKkA2PQ6XKXwpA.jpg?auto=webp&s=2847ac75999ea956cc72273d383977c41a960a8e', 'width': 1200}, 'variants': {}}]} | |
Best way to run local models generically across operating systems? | 1 | [removed] | 2023-10-25T00:06:48 | https://www.reddit.com/r/LocalLLaMA/comments/17frjpq/best_way_to_run_local_models_generically_across/ | samlhuillier3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17frjpq | false | null | t3_17frjpq | /r/LocalLLaMA/comments/17frjpq/best_way_to_run_local_models_generically_across/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1QchPajCLP0GqcuQUTsrbCAFyvWedSH0Mx2k_nz2E6k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?width=108&crop=smart&auto=webp&s=fa022185a20d8540882ee82c256bb2190ae1e1e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?width=216&crop=smart&auto=webp&s=321588f0fd04fdb4d34dafd7e1372aa4db01ea73', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?width=320&crop=smart&auto=webp&s=beb1cc17168d9e3220a01ee629742f003c5798f3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?width=640&crop=smart&auto=webp&s=9811f4b5c63b1aaef5a5b4b0cbcaefca1f2f7191', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?width=960&crop=smart&auto=webp&s=51479cb24b11a2d9b70af309f937fd766da93ffe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?width=1080&crop=smart&auto=webp&s=4bb486887504402962971d77a3457d81c4024ab0', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/FYDjYfCbqSgI1DNhwGHEYgmXblewtrDTdBF9WY5PA3Y.jpg?auto=webp&s=9bb4f8cf1665aa9e6f246c15e1d45637c0b29c53', 'width': 4096}, 'variants': {}}]} |
Single RTX 4090 FE at 45 tokens/s but with penalty if running 2 that only 15-20 tokens/s. Confirmed with Xwin-MLewd-13B-V0.2-GGUF and the Speechless 13B. | 1 | [removed] | 2023-10-25T00:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/17frf9h/single_rtx_4090_fe_at_45_tokenss_but_with_penalty/ | easyllaama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17frf9h | false | null | t3_17frf9h | /r/LocalLLaMA/comments/17frf9h/single_rtx_4090_fe_at_45_tokenss_but_with_penalty/ | false | false | self | 1 | null |
fine tuning LLMs with just pytorch (no wrapper libraries) | 13 | Can anyone point me to a link that fine-tunes an LLM (e.g., Mistral 7B) without any wrapper libraries, i.e., no Hugging Face SFT Trainer/Trainer, PyTorch Lightning, etc.? Thank you! | 2023-10-24T23:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/17fqyn2/fine_tuning_llms_with_just_pytorch_no_wrapper/ | Lumpy-Carob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fqyn2 | false | null | t3_17fqyn2 | /r/LocalLLaMA/comments/17fqyn2/fine_tuning_llms_with_just_pytorch_no_wrapper/ | false | false | self | 13 | null |
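Not a link, but as a minimal sketch of what such a loop looks like in plain PyTorch (the model and tokenizer are still loaded via `transformers`, which is assumed to be acceptable since only the Trainer-style wrappers are being avoided; the model name, data, and hyperparameters are placeholders, and a full-precision 7B fine-tune like this needs a lot of VRAM unless you add LoRA or gradient checkpointing):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral/Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).cuda()

texts = ["### Instruction:\n...\n### Response:\n...", "..."]  # your formatted training strings
enc = tokenizer(texts, truncation=True, max_length=512, padding=True, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"]), batch_size=1, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for input_ids, attention_mask in loader:
    input_ids, attention_mask = input_ids.cuda(), attention_mask.cuda()
    labels = input_ids.clone()
    labels[attention_mask == 0] = -100  # ignore padding positions in the loss
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```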
Can anyone share anecdotal experiences of LLM (llama.cpp / fauxpilot) inference performance using 2x RTX 3090s vs same config with NVLink? Or perhaps share an informed opinion based on similar experience/setup? Trying to decide if the cost is worth it for my needs. | 1 | [removed] | 2023-10-24T22:30:32 | https://www.reddit.com/r/LocalLLaMA/comments/17fpfvl/can_anyone_share_anecdotal_experiences_of_llm/ | 0xhaggis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fpfvl | false | null | t3_17fpfvl | /r/LocalLLaMA/comments/17fpfvl/can_anyone_share_anecdotal_experiences_of_llm/ | false | false | self | 1 | null |
We need better and cheaper hardware, otherwise there is no future. | 2 | LLMs are becoming bigger to work better. Current 7B models can handle some tasks but have a hard time with tougher ones. Even GPT-4, a really big model, can't do everything well, which shows we might need even bigger models for better performance. But as models grow, running or changing them on our own computers becomes harder. Some people with powerful computers can run smaller models, and a very few can handle 70B models. Training or fine-tuning these big models for special tasks needs a lot of computer power, which mostly big tech companies like Meta have. So, we depend on these big companies to provide us with updated models.
Better and cheaper hardware will not only help LLM technology improve but also allow more people to use and change it. Right now, the power of LLMs is mainly in the cloud, controlled by a few big companies. Our chance to use LLMs fully, without relying on these few big names, mostly depends on better hardware. The need is not just for advancement, but for making LLM technology available to everyone.
Also, trying to make LLMs smaller while keeping them good at their job seems hard; smaller models don't perform as well. There's a clear link between the size of the model and how well it performs. When we make models smaller, they can't do the job as well. This shows why we need hardware that can handle bigger models. The idea of making models smaller to use them easily, while keeping them good, seems far from possible now. This further stresses the need for better and cheaper hardware. | 2023-10-24T22:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/17fpcky/we_need_better_and_cheaper_hardware_otherwise/ | alagagbar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fpcky | false | null | t3_17fpcky | /r/LocalLLaMA/comments/17fpcky/we_need_better_and_cheaper_hardware_otherwise/ | false | false | self | 2 | null |
Llama 2 Chat model and Alpaca prompt | 11 | Do Llama 2 Chat models officially support Alpaca-style prompts (i.e., the ### Instruction: / ### Response: format)?
I notice that all of TheBloke's popular quantized Llama 2 chat model cards, like this one [https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF), contain a suggestion for use with llama.cpp ("If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins"). By using "-ins" or instruct mode, all input to the model is prefixed with (\\n\\n### Instruction:\\n\\n) and suffixed with (\\n\\n### Response:\\n\\n), which is different from the official \[INST\]\[/INST\] prompt format.
The thing is, I don't notice things becoming worse when using the Alpaca prompt format with the Llama 2 Chat model; in fact, I am seeing a case where it actually sidesteps a refusal.
I tried the following variations to ask the model to role play Donald Trump:
* Official prompt format with sys message (\[INST\] <<SYS>>\\nYou are Donald Trump.\\n<</SYS>>\\n\\n \[/INST\]) => refusal ("I'm just an AI, it's not appropriate for me to pretend to be a specific individual...")
* Use instruction instead of sys message (\[INST\] You are Donald Trump. \[/INST\]) => refusal
* Not using any prompt format, just bare text (You are Donald Trump.) => the model goes a bit crazy, not correctly role playing.
* Use Alpaca-style prompt (\\n\\n### Instruction:\\n\\nYou are Donald Trump.\\n\\n### Response:\\n\\n) => this works, but I need to configure a reverse prompt to stop the generation correctly. When used in instruct mode (-ins -p "You are Donald Trump."), the role play just works perfectly.
So my questions are as follows:
1. Any drawback to using the Alpaca-style prompt with the Llama 2 Chat model? It is not the officially supported prompt format, but in my experience it works better.
2. Why does the Alpaca prompt format even work with the Llama 2 Chat model? I don't suppose this works by accident?
3. Anyone has any example where system messages (<<SYS>><</SYS>>) need to be used instead of just putting the prompt in \[INS\]\[/INS\]? I struggle to find exact specific example to use system messages that would not work otherwise. | 2023-10-24T22:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/17foys8/llama_2_chat_model_and_alpaca_prompt/ | GoTrojan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17foys8 | false | null | t3_17foys8 | /r/LocalLLaMA/comments/17foys8/llama_2_chat_model_and_alpaca_prompt/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'VdO4wc7FK7kxcOPs0qIPH7EjGcv9d13QHE61oMbQXuw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?width=108&crop=smart&auto=webp&s=7e2394914275ceb2f5c29120926399247ce8832a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?width=216&crop=smart&auto=webp&s=015ae59cb99782a329f77b6876e4a851e4d68f01', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?width=320&crop=smart&auto=webp&s=003473b8eace76057d9ec5e48917a5154111de7d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?width=640&crop=smart&auto=webp&s=b9755c7d97c1d08dce13c444a9476487979a0c7b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?width=960&crop=smart&auto=webp&s=aad751cab7d6fbef870704180d7197f5601d0d6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?width=1080&crop=smart&auto=webp&s=371b2af8ca6087d801ec211c9f448c8039f9444e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8UWdyJXqrkYUevxLJkCUzhri8zC7gJuhk5LhcwdYvvo.jpg?auto=webp&s=3adfe96a5d5bdff66caa0ec3194c7346dcfa829d', 'width': 1200}, 'variants': {}}]} |
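For readers comparing the two formats discussed in the post above, here they are side by side as plain Python strings (the system text is just the example from the post):

```python
system = "You are Donald Trump."

# Official Llama 2 chat format, with the system prompt inside <<SYS>> tags
llama2_chat = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\nHello! [/INST]"

# Alpaca-style wrapper that llama.cpp's -ins/instruct mode puts around user input
alpaca = f"\n\n### Instruction:\n\n{system}\n\n### Response:\n\n"

print(llama2_chat)
print(alpaca)
```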
Local/Dev environments. What are you using? | 25 | Hello, I've been toying with LLMs these past few weeks and I've seen some different setups across the repos.
I'm very curious about what people around here are running for their local environments.
Something like:
Windows + LM Studio
or
WSL2 + Docker
or
Linux + installing raw dependencies on local
or
Some cloud service that facilitates stuff?
I guess setups also change a bit depending on what people are trying to do with the LLMs. Also, any problems with setting up CUDA? How many people here are trying to train/fine-tune a model vs "just" using it? | 2023-10-24T21:56:08 | https://www.reddit.com/r/LocalLLaMA/comments/17folxm/localdev_environments_what_are_you_using/ | Torocatala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17folxm | false | null | t3_17folxm | /r/LocalLLaMA/comments/17folxm/localdev_environments_what_are_you_using/ | false | false | self | 25 | null |
Branch-Solve-Merge method | 17 | https://huggingface.co/papers/2310.15123

"We propose Branch-Solve-Merge (BSM), a Large Language Model program (Schlag et al., 2023) for tackling such challenging natural language tasks. It consists of branch, solve, and merge modules that are parameterized with specific prompts to the base LLM."
...
"We apply our method to the tasks of LLM response evaluation and constrained text generation and evaluate its effectiveness with multiple LLMs, including Vicuna, LLaMA-2-chat, and GPT-4. BSM improves the evaluation correctness and consistency for each LLM by enhancing human-LLM agreement by up to 26%, reducing length and pairwise position biases by up to 50%, *and allowing LLaMA-2-chat to match or outperform GPT-4 on most domains*."

Now that is "big if true" indeed.
Thoughts? | 2023-10-24T21:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/17fnj26/branchsolvemerge_method/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fnj26 | false | null | t3_17fnj26 | /r/LocalLLaMA/comments/17fnj26/branchsolvemerge_method/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'KAZQAuOYgznQtA3maigVsU0yJNPCm5WvOwDipdp8Qqw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?width=108&crop=smart&auto=webp&s=0fd424e47ed29d2238c29c7d56845b9b849c0fcd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?width=216&crop=smart&auto=webp&s=5c81fa12c193c2afd63b6995741b06eee63e44ce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?width=320&crop=smart&auto=webp&s=0378db0260d88c1cccbb61dd9e5d16de508f4639', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?width=640&crop=smart&auto=webp&s=3686b1db942d4ed548a0c7530ddc516cad7824cd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?width=960&crop=smart&auto=webp&s=c4b84287d266ca36a15022be0ce158e749578bba', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?width=1080&crop=smart&auto=webp&s=9c04c93f4b171f0f87677caf1c671402022cbcba', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1zWCUpfGZBK_JnKMXegCwAVrVNyLhVy6LQsMu74juG4.jpg?auto=webp&s=48aa37f53f2176304a1a8db1c1155aadc0ac388f', 'width': 1200}, 'variants': {}}]}
llama2 7b vs 70b vs mistral 7b writing tweets on financial reports | 57 | Hey everyone, a week ago I [posted](https://gist.github.com/miagkyi/f99af352d3dbbdf90b390b4b81649e6b) here about how I fine-tuned Llama2 7b with Lora to read financial reports of public companies, summarize them, and produce funny tweets about company performance.
I went to test the same dataset with Llama2 70b and Mistral 7b.
I’ve seen some good improvements. Here are a few [examples](https://drive.google.com/file/d/1Oxgor5ZrYpNzTX5uxpi7Wh6ULVDjNIMr/view?usp=sharing) of the three models analyzing the cash flow and balance sheet of Tesla from their recent quarterly report.
**Llama2 7B** is trying really hard to produce humor. To be fair, it succeeds 20% of the time, but 80% of the time it sounds like it was written by a 9-year-old. It doesn't always output tweets, sometimes generating long texts, and hallucinations are present.
**Mistral 7B** has performed really well, providing consistent output with minimal hallucinations. Its humor is a step up from Llama2 7B, though not as consistent as Llama2 70B’s output.
**Llama2 70B** consistently produces high-quality tweets, outperforming GPT-3.5 and rivaling GPT-4. It even adds emojis. However, it tends to hallucinate more frequently than I would like.
I decided to go with Mistral because it offers a nice balance of quality and size. After fine-tuning, I converted it to GGUF, performed quantization, and deployed it on my Mac (a rough sketch of that step follows the list below). Pretty happy with the result:
* took only a 400-line dataset to fine-tune this Mistral 7B
* data prep was a nightmare, but necessary
* fine-tuning is super quick -- took only \~2 hours or so
* Mistral w/ llama.cpp is my favorite, but 70b is in its own league
* now I have a 10-K/Q reports analyzer pipeline that automatically runs on my computer, generating summaries and (somewhat) funny content. | 2023-10-24T20:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/17fmn9c/llama2_7b_vs_70b_vs_mistral_7b_writing_tweets_on/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fmn9c | false | null | t3_17fmn9c | /r/LocalLLaMA/comments/17fmn9c/llama2_7b_vs_70b_vs_mistral_7b_writing_tweets_on/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]} |
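For anyone curious what the GGUF conversion, quantization, and local deployment step mentioned above can look like in practice, here is a rough sketch using llama-cpp-python; the conversion commands in the comments are approximate (script names and flags vary between llama.cpp versions), and the paths, quant type, and prompt are placeholders:

```python
# Roughly: first convert the fine-tuned HF checkpoint with llama.cpp's convert script,
# then quantize it (exact script names/flags differ across llama.cpp versions), e.g.:
#   python convert.py ./mistral-finetuned --outtype f16
#   ./quantize ./mistral-finetuned/ggml-model-f16.gguf ./mistral-ft.Q4_K_M.gguf Q4_K_M
from llama_cpp import Llama

llm = Llama(model_path="./mistral-ft.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=35)

report_chunk = "Tesla Q3 cash flow summary ..."  # text extracted from a 10-Q filing
out = llm(f"Write a funny tweet about this filing:\n{report_chunk}\n\nTweet:", max_tokens=120)
print(out["choices"][0]["text"])
```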
I created my own LLM priming method: Triple-Pivot Querying (TPQ) | 7 | The program processes a user's question through three iterative loops with the LLM (explained further below):
* **Phase 1:** The system first identifies the domain expertise and key concepts pertinent to the user's inquiry.
* **Phase 2:** Leveraging the expertise determined in Phase 1, the program endeavors to furnish a comprehensive response.
* **Phase 3:** This phase is dedicated to assessing the truthfulness and precision of the answer given.
For each phase, the program utilizes phase\_prompts as a few-shot guidance mechanism, assisting the LLM in formulating the respective outputs.
I'm only half serious :D I'm sure similar has been done before.
Shortened code:
`import streamlit as st`
`from llama_cpp import Llama`
`# Llama model setup`
`llm = Llama(model_path="./dolphin-2.1-mistral-7b.Q5_K_M.gguf", n_ctx=4096, n_gpu_layers=20)`
`# Few-shot examples for each phase`
`phase1_prompts = [`
`"Given the following examples of how to identify the necessary expertise and key concepts for various questions, please provide a similar structured response for the provided query, but don't give the final answer.",`
`"Q: What causes rainbows? A: The expertise required includes physics and atmospheric science. Key facts to consider are the refraction, dispersion, and reflection of sunlight in raindrops.",`
`"Q: How do airplanes fly? A: Expertise in aerodynamics and physics is essential. Consider the principles of lift, thrust, drag, and weight.",`
`"Q: Why is the sky blue? A: The necessary expertise lies in atmospheric science and optics. Consider the scattering of sunlight by Earth's atmosphere.",`
`"Q: What makes the ocean salty? A: Expertise in geology and chemistry is vital. Think about the erosion of rocks, which release salts, and the continuous addition of salts from rivers.",`
`"Q: How do plants produce food? A: Expertise in botany and biology is key. Consider the process of photosynthesis where plants convert sunlight into energy.",`
`"Q: Why does the moon have phases? A: Understanding this requires expertise in astronomy and physics. Key facts involve the moon's orbit around Earth and the relative positions of the Earth, moon, and sun.".`
`]`
`phase2_prompts = [`
`# "Based on the following examples that demonstrate how to answer questions using specified expertise and key considerations, provide an in-depth response to the provided query.",`
`"Given expertise in physics and optics, Q: How does a magnifying glass magnify? A: It bends the light rays to make objects appear larger.",`
`"With expertise in biology and anatomy, Q: How does the heart pump blood? A: Through muscular contractions that create pressure, pushing blood throughout the body.",`
`"Considering knowledge in geology, Q: Why do volcanoes erupt? A: Due to movement of tectonic plates and pressure build-up beneath the Earth's crust.",`
`"Utilizing expertise in chemistry, Q: Why do certain substances dissolve in water? A: Due to the polarity of water molecules, they can surround and disperse certain solute particles, causing them to dissolve.",`
`"With an understanding in astronomy, Q: Why do stars twinkle? A: Stars twinkle because of the Earth's atmosphere; the air layers of varying temperatures and densities refract light differently, causing it to shimmer.",`
`"Drawing from knowledge in computer science, Q: What is an algorithm? A: An algorithm is a set of defined steps or procedures to solve a particular problem or accomplish a task.",`
`"Based on expertise in history, Q: Why did the Renaissance period occur? A: It was a cultural and intellectual revival, influenced by the rediscovery of classical art, literature, and spurred by economic growth and societal changes in Europe."`
`]`
`phase3_prompts = [`
`"Using the following examples as a guide for evaluating the accuracy of statements in response to questions, assess the veracity of the provided claim.",`
`"Q: Evaluate the accuracy of the statement: 'Water is a gas at room temperature'. A: The statement is inaccurate; water is a liquid at room temperature.",`
`"Q: Judge the claim: 'All humans need to eat three times a day for optimal health'. A: This claim is not universally accurate; dietary needs can vary widely based on various factors, including age, activity level, and health conditions.",`
`"Q: Assess the statement: 'Birds are mammals'. A: The claim is incorrect; birds belong to the class Aves, not Mammalia.",`
`"Q: Critique the assertion: 'All stars are just like our sun'. A: This assertion is not accurate; stars can vary greatly in size, temperature, color, and lifespan."`
`]`
`# Streamlit UI`
`st.title("Llama Expert System")`
`user_input = st.text_area("Enter your question:")`
`if st.button("Submit"):`
`    # Phase 1`
`    phase1_input = "\n".join(phase1_prompts) + f"\nQ: {user_input} A: "`
`    response = llm(phase1_input, max_tokens=100, stop=["Q:", "\n"], echo=True)`
`    first_output = response["choices"][0]["text"].split("\n")[-1]`
`    st.write(":orange[Phase 1:]", first_output)`
`    # Phase 2`
`    expertise_and_facts = first_output.split('A: ')[1]`
`    # phase2_input = "\n".join(phase2_prompts) + f"\n {expertise_and_facts}, Q: {user_input}? A: "`
`    phase2_input = "\n".join(phase2_prompts) + f"\n Based on the fact that {expertise_and_facts}, Q: {user_input}? A: "`
`    response_phase2 = llm(phase2_input, max_tokens=500, stop=["Q:", "\n"], echo=True)`
`    # response_phase2 = llm(phase2_input, max_tokens=500, stop=None, echo=True)`
`    second_output = response_phase2["choices"][0]["text"].split("\n")[-1]`
`    st.write(":blue[Phase 2:]", second_output)`
`    # Phase 3`
`    feedback_for_phase3 = "\n".join(phase3_prompts) + f"\nQ: Evaluate the accuracy of the statement in response to '{user_input}': '{second_output.split('A: ')[1]}'. A: "`
`    response_phase3 = llm(feedback_for_phase3, max_tokens=150, stop=["Q:", "\n"], echo=True)`
`    third_output = response_phase3["choices"][0]["text"].split("\n")[-1]`
`    st.write(":red[Phase 3:]", third_output)`
`# To run the Streamlit app`
`# Save this code in a file, say "llama_app.py", and then run in terminal:`
`# streamlit run llama_app.py`
https://preview.redd.it/cab0q386j7wb1.jpg?width=1492&format=pjpg&auto=webp&s=6fda990bdf44a4d86048d5a167e980754b59d33f | 2023-10-24T20:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/17fmhcb/i_created_my_own_llm_priming_method_triplepivot/ | MustBeSomethingThere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fmhcb | false | null | t3_17fmhcb | /r/LocalLLaMA/comments/17fmhcb/i_created_my_own_llm_priming_method_triplepivot/ | false | false | 7 | null | |
Best Mistral model for code generation? | 14 | Question in the title.
I'm interested in 7B models. Mistral is mentioned in the title because it's the best 7B base model I know, but if you know of other capable ones of the same size, please share.
Focus is on coding ability | 2023-10-24T20:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/17fm71r/best_mistral_model_for_code_generation/ | hurrytewer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fm71r | false | null | t3_17fm71r | /r/LocalLLaMA/comments/17fm71r/best_mistral_model_for_code_generation/ | false | false | self | 14 | null |
llama2 7b vs 70b vs mistral 7b writing tweets on financial reports | 1 | [removed] | 2023-10-24T20:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/17fm70s/llama2_7b_vs_70b_vs_mistral_7b_writing_tweets_on/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fm70s | false | null | t3_17fm70s | /r/LocalLLaMA/comments/17fm70s/llama2_7b_vs_70b_vs_mistral_7b_writing_tweets_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]} | |
How would you do it? Handling multi-turn QA conversation with matching of questions to vector database. | 1 | I have been giving this some thought and would appreciate some outside input, maybe someone has some experience they could share!
I am attempting to create a QA chatbot that is limited to answering questions from a pre-determined set of question and answer pairs I have in a vector database. Currently I create embeddings of the question using OpenAI and query the vector database for a similar "reference question"; if the similarity score is high enough, I proceed and use the answer text I have stored in the metadata as "context" for the answer generation.
I would now like to extend this to include conversational history. The issue I am facing, however, is that a follow-up question may not hit the similarity threshold: a follow-up is typically not worded in a way that is conducive to embedding search, yet in the greater context it is still relevant enough to warrant answering, given a matching question in the vector database. Of course, there's also a chance that the user asks a question that matches a reference question but is independent of the first question. How might I go about handling this?
***An example:***
**Question 1:** What three colors make up the primary colors?
Because "primary colors" is stated explicitly, this would easily match, say, "What are the three primary colors?" with a relatively high similarity score.
**Follow-Up Question:** Which of these would be considered the darkest?
The follow-up question might struggle to match the reference question "Which of the three primary colors would be considered the darkest?" simply because "of these" is used in place of "primary colors". While it might still match, the similarity score would be much lower.
**Here are a few thoughts I have had so far:**
1. Concatenating the questions before creating the embedding, which would lead to "What three colors make up the primary colors? Which of these would be considered the darkest?". This, however, would put equal weight on both questions, possibly causing a lower-than-expected similarity score or eventually matching a wrong answer.
2. Merging embeddings for the individual questions as we progress in the conversation, with the latest question receiving more weight (or maybe the other way around?). A small sketch of this idea is included at the end of this post.
3. Reducing the similarity score threshold as the conversation gets longer. This would also be suboptimal, as there are different ways of wording a follow-up, some with even less reference to the actual topic than in the previous example, where the relationship only becomes clear with context.
Would appreciate any inputs or thoughts, as I am struggling to come up with a solid solution that has me feeling comfortable! | 2023-10-24T20:03:32 | https://www.reddit.com/r/LocalLLaMA/comments/17flw3k/how_would_you_do_it_handling_multiturn_qa/ | kotschi1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17flw3k | false | null | t3_17flw3k | /r/LocalLLaMA/comments/17flw3k/how_would_you_do_it_handling_multiturn_qa/ | false | false | self | 1 | null |
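Regarding idea 2 above (merging embeddings with the latest question weighted more heavily), here is a minimal sketch of one way to do it; the decay factor and threshold are arbitrary illustrations, not recommendations:

```python
import numpy as np

def merge_embeddings(turn_embeddings: list[np.ndarray], decay: float = 0.5) -> np.ndarray:
    """Weighted average of per-turn embeddings; the most recent turn gets the largest weight."""
    n = len(turn_embeddings)
    weights = np.array([decay ** (n - 1 - i) for i in range(n)])  # oldest turn -> smallest weight
    weights /= weights.sum()
    merged = np.sum([w * e for w, e in zip(weights, turn_embeddings)], axis=0)
    return merged / np.linalg.norm(merged)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Usage: embed each user turn, merge them, then compare against the reference-question vectors.
# if cosine(merge_embeddings(turn_vectors), reference_vector) > 0.82: use the stored answer
```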
Can anyone explain the whole working of FireAct code? | 3 | The FireAct code looks interesting and I would love to implement it. Can anyone help with that? | 2023-10-24T19:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/17fl0ka/can_anyone_explain_the_whole_working_of_fireact/ | shalinbhanot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fl0ka | false | null | t3_17fl0ka | /r/LocalLLaMA/comments/17fl0ka/can_anyone_explain_the_whole_working_of_fireact/ | false | false | self | 3 | null |
Fine-tuned llama makes up and answers its own QAs | 1 | Hello, I am trying to fine-tune llama on my dataset of medical questions. However, during inference, the model asks and answers its own questions. Here are some details of my setup:
I have a huggingface Dataset, where each instance is of the form " <s>\[INST\]<<SYS>><</SYS>>Question\[/INST\]Answer</s>"
I am using [this](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) tutorial to fine-tune. Specifically, I am loading the tokenizer with
`tokenizer = LlamaTokenizer.from_pretrained('llama/models')`
`tokenizer.pad_token = tokenizer.eos_token`
`tokenizer.padding_side = "right"`
Where the path points to official llama tokenizer files. I am loading the pre-trained "NousResearch/llama-2-7b-chat-hf" model, and using the following quantization config:
`use_4bit = True`
`bnb_4bit_compute_dtype = "float16"`
`bnb_4bit_quant_type = "nf4"`
`use_nested_quant = False`
with the following LoRA config:
`lora_r = 16`
`lora_alpha = 4`
`lora_dropout = 0.1`
`target_modules = ["q_proj", "v_proj"]`
After fine-tuning, when I input "`<s>[INST]<<SYS>><</SYS>>What is Alzheimer's Disease?[/INST]"`, I get:
`<s>[INST]<<SYS>><</SYS>>What is Alzheimer's Disease?[/INST]Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that affects memory, thinking, and behavior.n AD is characterized by the deposition of beta-amyloid plaques, neurofibrillary tangles, and synaptic loss, leading to cognitive decline and death. 2.What are the three main types of AD?[/INST]The three main types of AD are early-onset AD, late-onset AD, and presenilin 1 (PS1) and presenilin 2 (PS2) familial AD. 3.What is the difference between early-onset AD and late-onset AD?[/]Early-onset AD typically`
As you can see, it asks more questions (like "What are the three main types of AD?", "What is the difference...") in its output. I suspected that this is because of over-fitting and tried a lower LoRA rank and learning rate, but that didn't help much. Please let me know if I am doing something wrong in my code. | 2023-10-24T19:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/17fkqz9/finetuned_llama_makes_up_and_answers_its_own_qas/ | Snoo41531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fkqz9 | false | null | t3_17fkqz9 | /r/LocalLLaMA/comments/17fkqz9/finetuned_llama_makes_up_and_answers_its_own_qas/ | false | false | self | 1 | null |
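One way to at least contain the symptom described above (generation running on into new self-posed questions) is to stop decoding when a stop string appears. This is only an inference-time workaround and does not address whatever is happening in training (for example, the pad-token-equals-EOS setup is a common suspect when models never learn to stop), but a sketch with `transformers` could look like:

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnStrings(StoppingCriteria):
    """Stop generation as soon as any of the given strings appears in the newly generated text."""
    def __init__(self, stop_strings, tokenizer, prompt_len):
        self.stop_strings = stop_strings
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:], skip_special_tokens=True)
        return any(s in new_text for s in self.stop_strings)

# Usage with the model/tokenizer from the post above:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# criteria = StoppingCriteriaList([StopOnStrings(["[INST]", "\nQ:"], tokenizer, inputs.input_ids.shape[1])])
# output = model.generate(**inputs, max_new_tokens=200, stopping_criteria=criteria)
```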
How to get the same style of output with the same instructions? | 1 | 2023-10-24T18:32:01 | https://www.reddit.com/r/LLaMA2/comments/17efrf4/how_to_get_the_same_style_of_output_with_the_same/ | SecretOk9644 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17fjqxu | false | null | t3_17fjqxu | /r/LocalLLaMA/comments/17fjqxu/how_to_get_the_same_style_of_output_with_the_same/ | false | false | default | 1 | null | |
Continue for JetBrains | 13 | ​
[Highlight code, ask a question](https://i.redd.it/gsjni9yyz6wb1.gif)
Hi all,
I'd like to share something I've been working on for a while - after focusing on VS Code first, we've built an extension for all JetBrains IDEs. It is very close to feature parity with the VS Code extension, and we're looking for folks who want to try it while in beta!
Continue lets you use any open-source model in your IDE so you don’t have to copy/paste from ChatGPT. Gathering context to prompt the LLM is as easy as highlighting code, typing @ and choosing a file, or just pressing cmd+enter to have Continue automatically pull in the most relevant files.
If you’ve been looking for something like this in JetBrains, we’d love to hear your feedback : )
[https://plugins.jetbrains.com/plugin/22707-continue](https://plugins.jetbrains.com/plugin/22707-continue) | 2023-10-24T18:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/17fjpmu/continue_for_jetbrains/ | sestinj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fjpmu | false | null | t3_17fjpmu | /r/LocalLLaMA/comments/17fjpmu/continue_for_jetbrains/ | false | false | 13 | null | |
Is there any local version of the software like what runs Chat GPT-4 and allows it to write and execute new code? | 6 | I was playing with the beta data analysis function in GPT-4 and asked if it could run statistical tests using the data spreadsheet I provided. When I requested one, I noticed it didn't use a built-in function but instead wrote and executed Python code to accomplish what I was asking it to do. To double-check that it wasn't just using a pre-built script, I asked it to rerun the analysis and gave it an alternative library to import. It changed the code to use the specified library. Is there a front end that can run an LLM locally that has this type of flexibility to write and execute new code when the user requests something that isn't already part of the model? | 2023-10-24T18:01:02 | https://www.reddit.com/r/LocalLLaMA/comments/17fizwl/is_there_any_local_version_of_the_software_like/ | daileta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fizwl | false | null | t3_17fizwl | /r/LocalLLaMA/comments/17fizwl/is_there_any_local_version_of_the_software_like/ | false | false | self | 6 | null |
best model for translating Indian languages? | 2 | I'm trying to translate Indian languages like Hindi to English. This is complicated by the fact that the source material often contains spelling errors, is written in the Latin alphabet (transliteration), or is a mix of English and Hindi (Hinglish).
I've tried NLLB; it can't handle it. Llama 2 is mostly trained on English, so it can't handle it either.
There's supposedly a model by Google called MADLAD-400 LM, but I can only find weights, no code.
ChatGPT-4 can handle it though.
Is there another local model I could try? | 2023-10-24T17:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/17fiujo/best_model_for_translating_indian_languages/ | cygn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fiujo | false | null | t3_17fiujo | /r/LocalLLaMA/comments/17fiujo/best_model_for_translating_indian_languages/ | false | false | self | 2 | null |
LM Format Enforcer - Enforce the output format (JSON Schema, Regex etc) of a language model - supports llama-cpp-python | 27 | Hello all,
I would like to share a library I originally built for my own needs - [LM Format Enforcer](https://github.com/noamgat/lm-format-enforcer).
The library allows the user of the language model to specify a constraint on the model's output (JSON Schema / Regex, but custom enforcers can also be developed), and the LLM will only generate strings that conform to that format.
This is especially useful when integrating the language model in an environment where its outputs are fed to another API.
The idea is similar to solutions such as Guidance / JsonFormer / Outlines, but more scalable, robust and easier to integrate.
As far as I know, it is the only one that includes support for engines outside of huggingface transformers, which made it very easy to create a [sample notebook that integrates it with the python wrapper of llama.cpp](https://github.com/noamgat/lm-format-enforcer/blob/main/samples/colab_llamacpppython_integration.ipynb) .
It was designed to have a minimal runtime footprint, and supports advanced generation loops such as batching and beam search, making it ideal for high-scale solutions, the kind of scenario that pushes people toward libraries such as llama.cpp.
I hope you enjoy it, and if there are any questions I'll be happy to answer here! | 2023-10-24T17:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/17fhyh4/lm_format_enforcer_enforce_the_output_format_json/ | noamgat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fhyh4 | false | null | t3_17fhyh4 | /r/LocalLLaMA/comments/17fhyh4/lm_format_enforcer_enforce_the_output_format_json/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': '8kjs0D9Z2kaBZF4N6B9cZ11LOw_cDtnC1M3AqtIq3jM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?width=108&crop=smart&auto=webp&s=d97cd7c2f82e58650cbf91556ffd9aaac47e0116', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?width=216&crop=smart&auto=webp&s=2fd47c77274e96b5c09dd4d05dc3db5c1b6af2dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?width=320&crop=smart&auto=webp&s=fffdbfba3d083d6f8eeedb456b4f209f4f1f36e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?width=640&crop=smart&auto=webp&s=733c04944b49014136237f02be48611ea7d2f031', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?width=960&crop=smart&auto=webp&s=07f18801edea9a06c074dc7f6e2be99c9dab1662', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?width=1080&crop=smart&auto=webp&s=bc76bf70f81047bc7dda7e1f66380ab2b8b5eff0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tCo84-oPRrjtfM7929WXYnmfTBEzSRCRvK4_uTNYbO4.jpg?auto=webp&s=4a7240c4d19aba3557bf09792841472c001a2ddd', 'width': 1200}, 'variants': {}}]} |
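For a feel of how such a library is typically wired up, here is a rough sketch of JSON-Schema-constrained generation with the transformers backend. The `lmformatenforcer` import paths and function names below are written from memory and should be treated as assumptions to verify against the project's README and sample notebooks:

```python
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: these lmformatenforcer names are assumptions from memory; check the repo before relying on them.
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn

class Answer(BaseModel):
    name: str
    score: int

model_name = "mistralai/Mistral-7B-Instruct-v0.1"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

parser = JsonSchemaParser(Answer.schema())
prefix_fn = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)

prompt = "Return a JSON object describing the top player:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100, prefix_allowed_tokens_fn=prefix_fn)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```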
Autogen via local LLMs | 0 | I’m a huge fan of this guy and didn’t want anyone here missing out on this quick / priceless (imo) tutorial showing you how easy it is to actually get this setup. 🖤🥷🏴☠️
Nightmare.Algorithm @ TikTok | 2023-10-24T17:15:04 | 80Unknown08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17fhx2j | false | null | t3_17fhx2j | /r/LocalLLaMA/comments/17fhx2j/autogen_via_local_llms/ | false | false | 0 | {'enabled': True, 'images': [{'id': '2pzDGdfO2cOSjXWQlWZSBUzE9XorG3qyk9VB_zj8rWI', 'resolutions': [{'height': 189, 'url': 'https://preview.redd.it/zojp700go6wb1.jpg?width=108&crop=smart&auto=webp&s=bd41cbbdc10f90ec9f8a8aad8deed52c15154ad9', 'width': 108}, {'height': 378, 'url': 'https://preview.redd.it/zojp700go6wb1.jpg?width=216&crop=smart&auto=webp&s=d999230dcb3263c581ba8c4ab8b7ac036536d872', 'width': 216}, {'height': 560, 'url': 'https://preview.redd.it/zojp700go6wb1.jpg?width=320&crop=smart&auto=webp&s=1e789f1abeff6b9fe1b030b33eeca97a4f44b5ee', 'width': 320}], 'source': {'height': 1050, 'url': 'https://preview.redd.it/zojp700go6wb1.jpg?auto=webp&s=90c890da02bcd213585532049d06e2653fda538a', 'width': 600}, 'variants': {}}]} | ||
🐺🐦⬛ Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4) | 786 | It's been ages since my last [LLM Comparison/Test](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/), or maybe just a little over a week, but that's just how fast things are moving in this AI landscape. ;)
Since then, a lot of new models have come out, and I've extended my testing procedures. So it's high time for another model comparison/test.
I initially planned to apply my whole testing method, including the "MGHC" and "Amy" tests I usually do - but as the number of models tested kept growing, I realized it would take too long to do all of it at once. So I'm splitting it up and will present just the first part today, following up with the other parts later.
## Models tested:
- 14x 7B
- 7x 13B
- 4x 20B
- 11x 70B
- GPT-3.5 Turbo + Instruct
- GPT-4
## Testing methodology:
- 4 German data protection trainings:
- I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well.
- The test data and questions as well as all instructions are in German while the character card is in English. This tests translation capabilities and cross-language understanding.
- Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This tests instruction understanding and following capabilities.
- After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions.
- If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct.
- I sort models according to how many correct answers they give, and in case of a tie, I have them go through all four tests again and answer blind, without providing the curriculum information beforehand. Best models at the top (👍), symbols (✅➕➖❌) denote particularly good or bad aspects, and I'm more lenient the smaller the model.
- All tests are separate units, context is cleared in between, there's no memory/state kept between sessions.
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.5 frontend
- [koboldcpp](https://github.com/LostRuins/koboldcpp) v1.47 backend *for GGUF models*
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) *for HF models*
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Official prompt format as noted
### 7B:
- 👍👍👍 **[OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B)** with official ChatML format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **12/18**
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- 👍👍 **[airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)** with official Llama 2 Chat format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **8/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- 👍 **[em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral)** with official Vicuna format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **8/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ❌ When giving just the questions for the tie-break, needed additional prompting in the final test.
- **[dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)** with official ChatML format:
- ➖ Gave correct answers to **15/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **12/18**
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ❌ Repeated scenario and persona information, got distracted from the exam.
- **[SynthIA-7B-v1.3](https://huggingface.co/migtissera/SynthIA-7B-v1.3)** with official SynthIA format:
- ➖ Gave correct answers to **15/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **8/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)** with official Mistral format:
- ➖ Gave correct answers to **15/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **7/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[SynthIA-7B-v2.0](https://huggingface.co/migtissera/SynthIA-7B-v2.0)** with official SynthIA format:
- ❌ Gave correct answers to only **14/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **10/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)** with official Vicuna format:
- ❌ Gave correct answers to only **14/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **9/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)** with official ChatML format:
- ❌ Gave correct answers to only **13/18** multiple choice questions!
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ❌ After answering a question, would ask a question instead of acknowledging information.
- **[zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)** with official Zephyr format:
- ❌ Gave correct answers to only **12/18** multiple choice questions!
- ❗ Ironically, using ChatML format instead of the official one, it gave correct answers to 14/18 multiple choice questions and consistently acknowledged all data input with "OK"!
- **[Xwin-MLewd-7B-V0.2](https://huggingface.co/Undi95/Xwin-MLewd-7B-V0.2)** with official Alpaca format:
- ❌ Gave correct answers to only **12/18** multiple choice questions!
- ➕ Often, but not always, acknowledged data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[ANIMA-Phi-Neptune-Mistral-7B](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B)** with official Llama 2 Chat format:
- ❌ Gave correct answers to only **10/18** multiple choice questions!
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[Nous-Capybara-7B](https://huggingface.co/NousResearch/Nous-Capybara-7B)** with official Vicuna format:
- ❌ Gave correct answers to only **10/18** multiple choice questions!
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ❌ Sometimes didn't answer at all.
- **[Xwin-LM-7B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2)** with official Vicuna format:
- ❌ Gave correct answers to only **10/18** multiple choice questions!
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ❌ In the last test, would always give the same answer, so it got some right by chance and the others wrong!
- ❗ Ironically, using Alpaca format instead of the official one, it gave correct answers to 11/18 multiple choice questions!
#### Observations:
- No 7B model managed to answer all the questions. Only two models didn't give three or more wrong answers.
- None managed to properly follow my instruction to answer with just a single letter (when their answer consisted of more than that) or more than just a single letter (when their answer was just one letter). When they gave one letter responses, most picked a random letter, some that weren't even part of the answers, or just "O" as the first letter of "OK". So they tried to obey, but failed because they lacked the understanding of what was actually (not literally) meant.
- Few understood and followed the instruction to only answer with OK consistently. Some did after a reminder, some did it only for a few messages and then forgot, most never completely followed this instruction.
- Xwin and Nous Capybara did surprisingly badly, but they're Llama 2- instead of Mistral-based models, so this correlates with the general consensus that Mistral is a noticeably better base than Llama 2. ANIMA is Mistral-based, but seems to be very specialized, which could be the cause of its bad performance in a field that's outside of its scientific specialty.
- SynthIA 7B v2.0 did slightly worse than v1.3 (one less correct answer) in the normal exams. But when letting them answer blind, without providing the curriculum information beforehand, v2.0 did better (two more correct answers).
#### Conclusion:
As I've said again and again, 7B models aren't a miracle. Mistral models write well, which makes them look good, but they're still very limited in their instruction understanding and following abilities, and their knowledge. If they are all you can run, that's fine, we all try to run the best we can. But if you can run much bigger models, do so, and you'll get much better results.
### 13B:
- 👍👍👍 **[Xwin-MLewd-13B-V0.2-GGUF](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2-GGUF)** Q8_0 with official Alpaca format:
- ➕ Gave correct answers to **17/18** multiple choice questions! (Just the questions, no previous information, gave correct answers: **15/18**)
- ✅ Consistently acknowledged all data input with "OK".
- ➕ Followed instructions to answer with just a single letter or more than just a single letter in most cases.
- 👍👍 **[LLaMA2-13B-Tiefighter-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF)** Q8_0 with official Alpaca format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **12/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➕ Followed instructions to answer with just a single letter or more than just a single letter in most cases.
- 👍 **[Xwin-LM-13B-v0.2-GGUF](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-GGUF)** Q8_0 with official Vicuna format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **9/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[Mythalion-13B-GGUF](https://huggingface.co/TheBloke/Mythalion-13B-GGUF)** Q8_0 with official Alpaca format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **6/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF)** Q8_0 with official Alpaca format:
- ❌ Gave correct answers to only **15/18** multiple choice questions!
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- **[MythoMax-L2-13B-GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF)** Q8_0 with official Alpaca format:
- ❌ Gave correct answers to only **14/18** multiple choice questions!
- ✅ Consistently acknowledged all data input with "OK".
- ❌ In one of the four tests, would only say "OK" to the questions instead of giving the answer, and needed to be prompted to answer - otherwise its score would only be 10/18!
- **[LLaMA2-13B-TiefighterLR-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR-GGUF)** Q8_0 with official Alpaca format:
- ❌ Repeated scenario and persona information, then hallucinated a >600-token user background story, and kept derailing instead of answering questions. Could be a good storytelling model, considering its creativity and length of responses, but didn't follow my instructions at all.
#### Observations:
- No 13B model managed to answer all the questions. The results of top 7B Mistral and 13B Llama 2 are very close.
- The new Tiefighter model, an exciting mix by the renowned KoboldAI team, is on par with the best Mistral 7B models concerning knowledge and reasoning while surpassing them regarding instruction following and understanding.
- Weird that the Xwin-MLewd-13B-V0.2 mix beat the original Xwin-LM-13B-v0.2. Even weirder that it took first place here and only 70B models did better. But this is an objective test and it simply gave the most correct answers, so there's that.
#### Conclusion:
It has been said that Mistral 7B models surpass Llama 2 13B models, and while that's probably true for many cases and models, there are still exceptional Llama 2 13Bs that are at least as good as those Mistral 7B models, and some are even better.
### 20B:
- 👍👍 **[MXLewd-L2-20B-GGUF](https://huggingface.co/TheBloke/MXLewd-L2-20B-GGUF)** Q8_0 with official Alpaca format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **11/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍 **[MLewd-ReMM-L2-Chat-20B-GGUF](https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20B-GGUF)** Q8_0 with official Alpaca format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **9/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍 **[PsyMedRP-v1-20B-GGUF](https://huggingface.co/Undi95/PsyMedRP-v1-20B-GGUF)** Q8_0 with Alpaca format:
- ➕ Gave correct answers to **16/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **9/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- **[U-Amethyst-20B-GGUF](https://huggingface.co/TheBloke/U-Amethyst-20B-GGUF)** Q8_0 with official Alpaca format:
- ❌ Gave correct answers to only **13/18** multiple choice questions!
- ❌ In one of the four tests, would only say "OK" to a question instead of giving the answer, and needed to be prompted to answer - otherwise its score would only be 12/18!
- ❌ In the last test, would always give the same answer, so it got some right by chance and the others wrong!
#### Conclusion:
These Frankenstein mixes and merges (there's no 20B base) are mainly intended for roleplaying and creative work, but did quite well in these tests. They didn't do *much* better than the smaller models, though, so which ones you ultimately choose and use probably comes down to a subjective preference in writing style.
### 70B:
- 👍👍👍 **[lzlv_70B.gguf](https://huggingface.co/lizpreciatior/lzlv_70B.gguf)** Q4_0 with official Vicuna format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **17/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍👍 **[SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF)** Q4_0 with official SynthIA format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **16/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍👍 **[Synthia-70B-v1.2b-GGUF](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)** Q4_0 with official SynthIA format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **16/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍👍 **[chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF)** Q4_0 with official Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **16/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍 **[StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF)** Q4_0 with Vicuna format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **14/18**
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- 👍 **[Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF)** Q4_0 with official Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **14/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with more than just a single letter consistently.
- **[Xwin-LM-70B-V0.1-GGUF](https://huggingface.co/TheBloke/Xwin-LM-70B-V0.1-GGUF)** Q4_0 with official Vicuna format:
- ❌ Gave correct answers to only **17/18** multiple choice questions!
- 🚩 Data protection trainings: 3/4
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- **[WizardLM-70B-V1.0-GGUF](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGUF)** Q4_0 with official Vicuna format:
- ❌ Gave correct answers to only **17/18** multiple choice questions!
- ✅ Consistently acknowledged all data input with "OK".
- ➕ Followed instructions to answer with just a single letter or more than just a single letter in most cases.
- ❌ In two of the four tests, would only say "OK" to the questions instead of giving the answer, and needed to be prompted to answer - otherwise its score would only be 12/18!
- **[Llama-2-70B-chat-GGUF](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF)** Q4_0 with official Llama 2 Chat format:
- ❌ Gave correct answers to only **15/18** multiple choice questions!
- ➕ Often, but not always, acknowledged data input with "OK".
- ➕ Followed instructions to answer with just a single letter or more than just a single letter in most cases.
- ➖ Occasionally used words of other languages in its responses as context filled up.
- **[Nous-Hermes-Llama2-70B-GGUF](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGUF)** Q4_0 with official Alpaca format:
- ❌ Gave correct answers to only **8/18** multiple choice questions!
- ✅ Consistently acknowledged all data input with "OK".
- ❌ In two of the four tests, would only say "OK" to the questions instead of giving the answer, and couldn't even be prompted to answer!
- **[Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF)** Q4_0 with official Llama 2 Chat format:
- Couldn't test this as this seems to be [broken](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF/discussions/1)!
#### Observations:
- 70Bs do much better than smaller models on these exams. Six 70B models managed to answer *all* the questions correctly.
- Even when letting them answer blind, without providing the curriculum information beforehand, the top models still did as well as the smaller ones did *with* the provided information.
- lzlv_70B taking first place was unexpected, especially considering its intended use case of roleplaying and creative work. But this is an objective test and it simply gave the most correct answers, so there's that.
#### Conclusion:
70B is in a very good spot: so many great models answered all the questions correctly that the top is very crowded here (with three models in second place alone). All of the top models warrant further consideration and I'll have to do more testing with those in different situations to figure out which I'll keep using as my main model(s). For now, lzlv_70B is my main for fun and SynthIA 70B v1.5 is my main for work.
### ChatGPT/GPT-4:
For comparison, and as a baseline, I used the same setup with ChatGPT/GPT-4's API and SillyTavern's default Chat Completion settings with Temperature 0. The results are very interesting, and ChatGPT/GPT-3.5's performance surprised me somewhat.
- ⭐ **GPT-4** API:
- ✅ Gave correct answers to all **18/18** multiple choice questions! (Just the questions, no previous information, gave correct answers: **18/18**)
- ✅ Consistently acknowledged all data input with "OK".
- ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- **GPT-3.5 Turbo Instruct** API:
- ❌ Gave correct answers to only **17/18** multiple choice questions! (Just the questions, no previous information, gave correct answers: **11/18**)
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ❌ Schizophrenic: Sometimes claimed it couldn't answer the question, then talked as "user" and asked itself again for an answer, then answered as "assistant". Other times would talk and answer as "user".
- ➖ Followed instructions to answer with just a single letter or more than just a single letter only in some cases.
- **GPT-3.5 Turbo** API:
- ❌ Gave correct answers to only **15/18** multiple choice questions! (Just the questions, no previous information, gave correct answers: **14/18**)
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ❌ Responded to one question with: "As an AI assistant, I can't provide legal advice or make official statements."
- ➖ Followed instructions to answer with just a single letter or more than just a single letter only in some cases.
#### Observations:
- GPT-4 is *the* best LLM, as expected, and achieved perfect scores (even when not provided the curriculum information beforehand)! It's noticeably slow, though.
- GPT-3.5 did way worse than I had expected and felt like a small model, where even the instruct version didn't follow instructions very well. Our best 70Bs do much better than that!
#### Conclusion:
While GPT-4 remains in a league of its own, our local models do reach and even surpass ChatGPT/GPT-3.5 in these tests. This shows that the best 70Bs can definitely replace ChatGPT in most situations. Personally, I already use my local LLMs professionally for various use cases and only fall back to GPT-4 for tasks where utmost precision is required, like coding/scripting.
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons or other related posts:
- [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/)
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) | 2023-10-24T17:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fhp9k | false | null | t3_17fhp9k | /r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/ | false | false | self | 786 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=108&crop=smart&auto=webp&s=f076a50b0d594dc8ba3b2ee703d67664decf1cba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=216&crop=smart&auto=webp&s=dbc51e386e2d24255edce0cbd6a139d2b37dc0a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=320&crop=smart&auto=webp&s=13107e47f85ca5d663508f0d9c3bca3648a98f75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=640&crop=smart&auto=webp&s=f340c6c7589a711ca86aba7661baee1db6acf927', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=960&crop=smart&auto=webp&s=76d5b3a13d8ba4378270e9ae41aa3081e25b37e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=1080&crop=smart&auto=webp&s=d17a08361a95b03dd8a9a733ec765497cf2bf0d1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?auto=webp&s=577b115ae7cd70077bd0dc15f7fe27e71ff19e2b', 'width': 1280}, 'variants': {}}]} |
Did mods stop approving posts? | 1 | [removed] | 2023-10-24T16:34:08 | https://www.reddit.com/r/LocalLLaMA/comments/17fgzvm/did_mods_stop_approving_posts/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fgzvm | false | null | t3_17fgzvm | /r/LocalLLaMA/comments/17fgzvm/did_mods_stop_approving_posts/ | false | false | self | 1 | null |
Arxiv Paper - Position Interpolation Improves ALiBi Extrapolation | 3 | 2023-10-24T16:25:39 | https://arxiv.org/abs/2310.13017 | maroule | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17fgsxb | false | null | t3_17fgsxb | /r/LocalLLaMA/comments/17fgsxb/arxiv_paper_position_interpolation_improves_alibi/ | false | false | default | 3 | null | |
Salmonn can describe audio in addition to speech to text. | 1 | [removed] | 2023-10-24T16:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/17fgrvf/salmonn_can_describe_audio_in_addition_to_speech/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fgrvf | false | null | t3_17fgrvf | /r/LocalLLaMA/comments/17fgrvf/salmonn_can_describe_audio_in_addition_to_speech/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wCl0Mu946GympANScLMMZTkkhhWxny5dG1s19wHJ_2M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?width=108&crop=smart&auto=webp&s=52ec170fa95cd221d6144614371e59682b3974a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?width=216&crop=smart&auto=webp&s=f8c09b07e145289f59229fee958774d55c734018', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?width=320&crop=smart&auto=webp&s=cdf8ee6566f947de6ddd3f3f69a316179c1dc367', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?width=640&crop=smart&auto=webp&s=fac977aa065950d2530db6e9c8dcb9298c7e0da5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?width=960&crop=smart&auto=webp&s=128af93ab3869b4ba2a5e39bd27b870238dee847', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?width=1080&crop=smart&auto=webp&s=bf5413f5c2b1a1d10d0a9941ac7b2e85f09b3811', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WixOB-pQLIioJQa_mKQNWqgqHVOqb-3VsqKIyelRcIs.jpg?auto=webp&s=4b9e9ac34959853935aeb05fb3251200652c7a91', 'width': 1200}, 'variants': {}}]} |
How are people using open source LLMs in production apps? | 137 | How are people deploying production apps using open source LLMs? Or is every AI company just using OpenAI?
Cloud GPUs (like RunPod) are expensive, buying and hosting your own even more so, and the overall throughput is abysmally small.
An app with a few hundred users doing even a few dozen inferences per minute would require a significant GPU cluster, assuming each GPU can handle 1-3 inferences at a time.
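Rough back-of-envelope with my own assumed numbers: if a response takes ~10 seconds of GPU time and one GPU can serve 1-3 requests concurrently, that's roughly 6-18 completed requests per minute per GPU, so a peak of 30-60 requests per minute already puts you somewhere between 2 and 10 GPUs before any headroom or redundancy.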
Am I missing something here? Is anyone doing anything production-critical with these open source LLMs, or are they still just a hobby?
OpenAI is bleeding hundreds of thousands of dollars on their APIs to capture the dev market. Can a business ever hope to build something financially viable on their own?
Thank you! | 2023-10-24T15:31:42 | https://www.reddit.com/r/LocalLLaMA/comments/17ffkdx/how_are_people_using_open_source_llms_in/ | TheAnonymousTickler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ffkdx | false | null | t3_17ffkdx | /r/LocalLLaMA/comments/17ffkdx/how_are_people_using_open_source_llms_in/ | false | false | self | 137 | null |
Using Langchain With llama.cpp? | 14 | I have Falcon-180B served locally using llama.cpp via the server's RESTful API. I assume there is a way to connect LangChain to the /completion endpoint.
Does anyone have an example that does something like this? | 2023-10-24T15:21:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ffbg9/using_langchain_with_llamacpp/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ffbg9 | false | null | t3_17ffbg9 | /r/LocalLLaMA/comments/17ffbg9/using_langchain_with_llamacpp/ | false | false | self | 14 | null |
Best 7B/13B Model for grammar editing? - And context size for models? | 1 | [removed] | 2023-10-24T15:17:11 | https://www.reddit.com/r/LocalLLaMA/comments/17ff806/best_7b13b_model_for_grammar_editing_and_context/ | galaxea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ff806 | false | null | t3_17ff806 | /r/LocalLLaMA/comments/17ff806/best_7b13b_model_for_grammar_editing_and_context/ | false | false | self | 1 | null |
Best 7B/13B Model for grammar editing? - And context size for models? | 1 | [removed] | 2023-10-24T14:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/17fervh/best_7b13b_model_for_grammar_editing_and_context/ | galaxea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fervh | false | null | t3_17fervh | /r/LocalLLaMA/comments/17fervh/best_7b13b_model_for_grammar_editing_and_context/ | false | false | self | 1 | null |
Giving compute grants for OSS projects | 3 | Hi all, I am giving out compute grants through my compute crowdfunding platform [dreamcatcher.co](https://dreamcatcher.co). These grants are for OSS AI projects looking to add value to the community. If you have an idea that you need compute for please fill out this form: [https://forms.gle/cjKQbJLdT95HLnXh8](https://forms.gle/cjKQbJLdT95HLnXh8) | 2023-10-24T14:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/17fem52/giving_compute_grants_for_oss_projects/ | codys12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fem52 | false | null | t3_17fem52 | /r/LocalLLaMA/comments/17fem52/giving_compute_grants_for_oss_projects/ | false | false | self | 3 | null |
Instruct Trained Sheared LLaMA 1.3B - Best CPU-ready Model | 1 | [removed] | 2023-10-24T14:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/17fedxo/instruct_trained_sheared_llama_13b_best_cpuready/ | Mammoth-Doughnut-160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fedxo | false | null | t3_17fedxo | /r/LocalLLaMA/comments/17fedxo/instruct_trained_sheared_llama_13b_best_cpuready/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rbD8dCtSznlrSl00EOWEnAjcYXfMBJ2AaQ1vrlkMoRg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?width=108&crop=smart&auto=webp&s=6cb540c3e60a30d147c2421876566649fbf8ed1c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?width=216&crop=smart&auto=webp&s=94cc56712f7c9723defd660058cee2f26030b432', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?width=320&crop=smart&auto=webp&s=80c16d9de57ecdf3515051ebfbf69802ac19841d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?width=640&crop=smart&auto=webp&s=ac4624c47a930ff1b37da42380d273716b6aa3b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?width=960&crop=smart&auto=webp&s=8162f41fc8419319abb83a8900338e20e958a75b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?width=1080&crop=smart&auto=webp&s=31ec744434f0fbde2b9b367e442453c53d9331f5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/do3d95xA51_YcYliGhdd5ATctNaRDJXpiHJlsIgXh6A.jpg?auto=webp&s=35481dda9f3edec038a336526c9f31756a4b2907', 'width': 1200}, 'variants': {}}]} |
Access T-G-WebUI via local network? I want to access my local LlaMa PC instance on my phone | 1 | [removed] | 2023-10-24T14:38:27 | https://www.reddit.com/r/LocalLLaMA/comments/17fecwy/access_tgwebui_via_local_network_i_want_to_access/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fecwy | false | null | t3_17fecwy | /r/LocalLLaMA/comments/17fecwy/access_tgwebui_via_local_network_i_want_to_access/ | false | false | self | 1 | null |
Llama-cpp-python vs Python server | 3 | Hi everyone!
I was just wondering for those llama-cpp-python users, do you guys use the llama-cpp-python server or just the base one?
I am just prototyping an idea, but if I wanted to build a chat bot that multiple users can talk to, is the server version a must?
Any pros or cons to using the server version over the base?
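For reference, here is roughly how I understand the two options: the base binding runs in-process, while the bundled OpenAI-compatible server runs as a separate process that multiple clients can hit. Paths are just examples and the flags are as I recall them from the README.

```python
# Option 1: base binding, everything runs in-process, one request at a time.
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf")  # example path
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])

# Option 2: the bundled server exposes an OpenAI-compatible HTTP API that
# several clients/users can hit at once (run from a shell, not from Python):
#   pip install "llama-cpp-python[server]"
#   python -m llama_cpp.server --model models/mistral-7b-instruct.Q4_K_M.gguf
```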
Thanks everyone! | 2023-10-24T14:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/17febh2/llamacpppython_vs_python_server/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17febh2 | false | null | t3_17febh2 | /r/LocalLLaMA/comments/17febh2/llamacpppython_vs_python_server/ | false | false | self | 3 | null |
Using ExLlama/ExLlamav2 (in Text generation web UI), does setting max_seq_len lower than that specified for the model reduce output quality? | 6 | I often find myself setting the max_seq_len to a value that is lower than that specified for the model to save VRAM. For most of my use cases, I seldom need more than 1024 tokens of context.
I was curious to know as to whether doing something like this reduces the model's output quality. From my limited experience, I feel like it does, but given that the quality of output can vary significantly from one generation to another, it could well be placebo. | 2023-10-24T14:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/17fe189/using_exllamaexllamav2_in_text_generation_web_ui/ | StickyDirtyKeyboard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fe189 | false | null | t3_17fe189 | /r/LocalLLaMA/comments/17fe189/using_exllamaexllamav2_in_text_generation_web_ui/ | false | false | self | 6 | null |
What does this mean? | 1 | “add Llama2 7b chat gguf model to the chain using gpu.”
All help is appreciated, thanks. | 2023-10-24T14:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/17fe135/what_does_this_mean/ | ontheportco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fe135 | false | null | t3_17fe135 | /r/LocalLLaMA/comments/17fe135/what_does_this_mean/ | false | false | self | 1 | null |
How does fine-tuning Llama-2 actually work? | 6 | I have always imagined that fine-tuning a Language Model (LLM) involves providing prompts and expecting specific answers. The model then takes the prompt, generates an answer, and calculates the loss between the predicted and expected answers.
However, upon further investigation, I noticed that prompts and answers are always concatenated (for [example](https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/datasets/samsum_dataset.py)). Now, I'm a bit confused about how fine-tuning actually works. Does it involve splitting concatenated text using [INST] [/INST] tags? I doubt this is the case, as not all examples use these tags.
Does the fine-tuning process involve masking some words in the text and trying to predict what the missing words are? If so, why is it not possible to fine-tune on unstructured text?
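To make my current mental model concrete, here is a small sketch of what I imagine happens under the hood: plain causal-LM training on the concatenated string, with the prompt tokens optionally masked out of the loss. This is my own illustration, not code from llama-recipes, and the tokenizer name is just an example.

```python
from transformers import AutoTokenizer

# Any Llama tokenizer works here; this particular one requires Hugging Face access approval.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = ("### Instruction:\nStandardize the following date to the ISO 8601 format.\n\n"
          "### Input:\nJuly 26th, 2021\n\n### Response:\n")
answer = "2021-07-26"

prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]

# The model trains on the concatenation; labels mirror input_ids except that
# prompt positions are set to -100 so the loss only covers the answer tokens.
input_ids = prompt_ids + answer_ids
labels = [-100] * len(prompt_ids) + answer_ids
```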
​ | 2023-10-24T14:22:54 | https://www.reddit.com/r/LocalLLaMA/comments/17fe0t1/how_does_finetuning_llama2_actually_work/ | matej1408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fe0t1 | false | null | t3_17fe0t1 | /r/LocalLLaMA/comments/17fe0t1/how_does_finetuning_llama2_actually_work/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'Akpn2HnswKeOB_JVzBQufiRpqhPDW-29FBguMFv2GkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?width=108&crop=smart&auto=webp&s=c2af759b6d5bf91ae052ae7a105568584f49bd38', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?width=216&crop=smart&auto=webp&s=c7de3ff7712356af54afa573e549bc358ccafbb8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?width=320&crop=smart&auto=webp&s=5f581edf94e9fc1b9b7ba2555166470c7dd9c431', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?width=640&crop=smart&auto=webp&s=60f7db1cc05088ca0abf9c70a19e88cca5f1838f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?width=960&crop=smart&auto=webp&s=0afa09fdca5b4a9a11469cb72d7552495bbc5b92', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?width=1080&crop=smart&auto=webp&s=0f3909246e8cf5e89b17fada835bb480ac823627', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4yZVHUmFyUVUWJ_-m62Lcp7hI5Yq392tbgyy5HA36Lo.jpg?auto=webp&s=6f34e9feaf16effbede3dbbccafca15cc5b75407', 'width': 1200}, 'variants': {}}]} |
What is the best chat model that can run on 8gb ram | 4 | Hello, my laptop has a ryzen 5 5500U and 8gb RAM. What is the best local chat model? In any case I would use it to ask programming questions. | 2023-10-24T14:18:07 | https://www.reddit.com/r/LocalLLaMA/comments/17fdwx9/what_is_the_best_chat_model_that_can_run_on_8gb/ | Hot_Buy5425 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fdwx9 | false | null | t3_17fdwx9 | /r/LocalLLaMA/comments/17fdwx9/what_is_the_best_chat_model_that_can_run_on_8gb/ | false | false | self | 4 | null |
Similarity RAG Search? | 2 | I apologize if this is a bad question - I have a simple query "dog" and a list of documents that talks about dogs, cats, and fish. All I want to do is return the top-n documents that are most similar to my query.
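For context, the naive version I have in mind is just embedding everything and ranking by cosine similarity, roughly like this (sketch; sentence-transformers and the model name are only example choices):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

docs = [
    "Dogs are loyal companions that need daily walks.",
    "Cats sleep most of the day and groom themselves.",
    "Goldfish are low-maintenance pets kept in aquariums.",
]
query = "dog"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document, then keep the top-n.
scores = util.cos_sim(query_emb, doc_emb)[0]
top_n = scores.argsort(descending=True)[:2]
for idx in top_n:
    print(docs[int(idx)], float(scores[idx]))
```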
What is the best way to accomplish this? Cosine similarity between embeddings or is there a better way? | 2023-10-24T14:05:02 | https://www.reddit.com/r/LocalLLaMA/comments/17fdmnm/similarity_rag_search/ | phillyguy2008 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fdmnm | false | null | t3_17fdmnm | /r/LocalLLaMA/comments/17fdmnm/similarity_rag_search/ | false | false | self | 2 | null |
Error when tuning GPT2-XL on Alpaca Dataset | 2 | Been getting endless dataset format errors when trying to train a LoRA in oobabooga:
(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
RuntimeError: Data-point "{'output': '', 'input': 'July 26th, 2021', 'instruction': 'Standardize the following date to the ISO 8601 format.'}" has no keyset match within format "['instruction,output', 'instruction,input,output']"
Anyone run into a fix? Issue persists no matter what model or Alpaca dataset I use. | 2023-10-24T13:28:40 | https://www.reddit.com/r/LocalLLaMA/comments/17fcu4b/error_when_tuning_gpt2xl_on_alpaca_dataset/ | ProperShape5918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fcu4b | false | null | t3_17fcu4b | /r/LocalLLaMA/comments/17fcu4b/error_when_tuning_gpt2xl_on_alpaca_dataset/ | false | false | self | 2 | null |
Resources to learn the structure/code of LLM-based applications | 2 | As a senior data scientist, I've been reading up on theoretical LLM resources for a couple of months. Can you guys suggest some repos/tutorials/etc. so I get more familiar with the structure, libraries, and best practices to develop and deploy LLM-based applications?
Thanks a lot | 2023-10-24T12:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/17fbvkd/resources_to_learn_the_structurecode_of_llmbased/ | Real-Elk-6109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fbvkd | false | null | t3_17fbvkd | /r/LocalLLaMA/comments/17fbvkd/resources_to_learn_the_structurecode_of_llmbased/ | false | false | self | 2 | null |
Best model without instruction fine-tuning? | 2 | Hi! I'm looking to do some more experimental stuff with LLMs and I was wondering what the best model without instruction fine tuning, or perhaps just without chat-finetuning, is? I've tried using Zephyr alpha but it really likes to output things like < | assistant | > and so on, then begin outputting random text (usually cooking recipes), even if I forbid it from outputting the assistant token (in webui). | 2023-10-24T12:16:28 | https://www.reddit.com/r/LocalLLaMA/comments/17fbdlo/best_model_without_instruction_finetuning/ | laser_man6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fbdlo | false | null | t3_17fbdlo | /r/LocalLLaMA/comments/17fbdlo/best_model_without_instruction_finetuning/ | false | false | self | 2 | null |
Trying to finetune CodeLLAMA | 5 | I followed the guide listed here
[https://github.com/ragntune/code-llama-finetune/blob/main/fine-tune-code-llama.ipynb](https://github.com/ragntune/code-llama-finetune/blob/main/fine-tune-code-llama.ipynb)
and got the checkpoints, but when I try to use the checkpoints, I only get back the prompt with <unk>
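For context, the way I understand the checkpoints are supposed to be used is as LoRA adapters loaded on top of the base model, roughly like this (standard PEFT pattern with example paths, so maybe my problem is in this step):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA weights from the training checkpoint directory (example path).
model = PeftModel.from_pretrained(model, "output/checkpoint-400")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```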
Here is my version
[https://github.com/gavi/finetuning/blob/main/finetuning.ipynb](https://github.com/gavi/finetuning/blob/main/finetuning.ipynb)
Any help would be appreciated.
​ | 2023-10-24T11:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/17fayrw/trying_to_finetune_codellama/ | visualdata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fayrw | false | null | t3_17fayrw | /r/LocalLLaMA/comments/17fayrw/trying_to_finetune_codellama/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'cppQqEV25HiJ-JCXoxIwBEhNN-SI2YCRq_2DFjw6dTI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?width=108&crop=smart&auto=webp&s=3600d7b6656e1cccd6b125ff3af80d73cb677fae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?width=216&crop=smart&auto=webp&s=47aa6cff8e9bbb7b29bcccb315887537ffb9da00', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?width=320&crop=smart&auto=webp&s=d2304c166882db9fd7f60e2aade6457aaff44a41', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?width=640&crop=smart&auto=webp&s=59ec49fc2b202c1cf448a1b0f4bc123da7d009a9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?width=960&crop=smart&auto=webp&s=7c889c16b993ed5ee41794c0a5b376ff65aeda28', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?width=1080&crop=smart&auto=webp&s=0aad0498c794fffe3fdf4b664f5e49b7c862c71d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C8zVJ2QVomEdDtKmrqHPPWFttAricAGopEmAXQU-I4s.jpg?auto=webp&s=4b9c948e4eb254c4d2a9c3cefb3648783ea7f5de', 'width': 1200}, 'variants': {}}]} |
Best 7B models for ERP? | 15 | I had some good experiences with MythoMax, MLewd, and the newly released TieFighter by KoboldAI, but they're all 13B models.
Is there any good model with good coherence and prose at 7B size? I tried Mistral7B, and while the model is smart, it's not that engaging and creative for RP.
Asking for a friend with a 6GB VRAM laptop. | 2023-10-24T11:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/17fa3ph/best_7b_models_for_erp/ | ai_waifu_enjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17fa3ph | false | null | t3_17fa3ph | /r/LocalLLaMA/comments/17fa3ph/best_7b_models_for_erp/ | false | false | self | 15 | null |
Not so Local Ollama | 4 | I've been working on Ollama integration with Google Colab. It includes tunneling the 11343 port either locally or publicly, and one Modelfile for testing under /content/Modelfile. Still working some stuff out, like Ollama terminating itself and getting more detailed logging.
Open the ipynb file and start using Ollama in the cloud
[https://github.com/Luxadevi/Ollama-Colab-intergration/tree/main](https://github.com/Luxadevi/Ollama-Colab-intergration/tree/main) | 2023-10-24T09:47:57 | https://www.reddit.com/r/LocalLLaMA/comments/17f8zjc/not_so_local_ollamma/ | Wholelota | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f8zjc | false | null | t3_17f8zjc | /r/LocalLLaMA/comments/17f8zjc/not_so_local_ollamma/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'eXIz8IRdGFRTzTwWQyjgSi_bBoQKRvoMQJXuI7qt8mQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?width=108&crop=smart&auto=webp&s=a9409583430960e30c716ab4a814229eb55d4449', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?width=216&crop=smart&auto=webp&s=97a37e9990f373e7515abc124f0f2616da3b2d04', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?width=320&crop=smart&auto=webp&s=b5ba32ced344fdaa0f07e99e6e8467d5ad0f765a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?width=640&crop=smart&auto=webp&s=b4b177e674751707b83d06210dff3a8f58030801', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?width=960&crop=smart&auto=webp&s=32179db2072b8d40a782c232ea98510261350708', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?width=1080&crop=smart&auto=webp&s=1c0cf51aee5055f2e765111e88cbbd793f572838', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XA0lozRByBVmn3yTrADEHceVBcSOeOtPbL2uB6IRdw.jpg?auto=webp&s=e21df66bd9a6134a0aa53fdae4acc59b6dc8011d', 'width': 1200}, 'variants': {}}]} |
Screenplay ingestion, extension, development questions. | 3 | Hello everyone, I have been exploring the potential of experimenting with existing screenplays using some of the available local Language Models (LLMs). I've attempted utilizing various ingest.py
scripts from different repositories, which are said to be capable of generating embeddings. However, most of my attempts have resulted in either overly hallucinated outputs or extremely abbreviated responses. I'm trying to identify any missteps or oversights on my part, or if there's a standard method I may be overlooking to enable a local LLM to thoroughly process a script and achieve a level of interactive cognitive capabilities comparable to those of GPT-3.5/4.
Any guidance on how to articulate this better would be much appreciated. | 2023-10-24T07:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/17f7ha3/screenplay_ingestion_extension_development/ | AmeenRoayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f7ha3 | false | null | t3_17f7ha3 | /r/LocalLLaMA/comments/17f7ha3/screenplay_ingestion_extension_development/ | false | false | self | 3 | null |