| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mixtral update: On perplexity testing, adding an expert only improves quality for Q2_K, using two experts achieves the best performance for the rest, and Q6_K has the lowest perplexity | 79 | This comes from [ymcui on GitHub](https://github.com/ggerganov/llama.cpp/pull/4406#issuecomment-1855151885), the same source for [yesterday's post](https://www.reddit.com/r/LocalLLaMA/comments/18i2h4c/mixtral_gets_even_better_by_just_adding_an_expert/), who has since updated their comment. Q8\_0 and F16 were not tested so they're not considered here:
>\- Most quantization types do not show a significant difference when using two or three experts. While for Q2\_K, using 3 experts is better.
>
>**- Re-tested Q6\_K with the latest build, and the results are looking good now. Not sure about the source of the previous issue.**
Overall ranking ordered by best PPL:
|**Rank**|**Model**|**Size**|**Num of Experts**|**PPL**|
|:-|:-|:-|:-|:-|
|1|Q6\_K|35.74 GB|2|3.9065 +/- 0.06098|
|2|Q5\_K|30.02 GB|2|3.9270 +/- 0.06133|
|3|Q5\_0|30.02 GB|2|3.9286 +/- 0.06129|
|4|Q5\_1|32.71 GB|2|3.9324 +/- 0.06147|
|5|Q4\_K|24.62 GB|2|3.9818 +/- 0.06253|
|6|Q4\_0|24.63 GB|2|3.9940 +/- 0.06275|
|7|Q4\_1|27.32 GB|2|4.0022 +/- 0.06273|
|8|Q3\_K|18.96 GB|2|4.1173 +/- 0.06515|
|9|Q2\_K|14.57 GB|3|5.8057 +/- 0.10077|
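For readers unfamiliar with the metric: perplexity is just the exponential of the average per-token negative log-likelihood over the evaluation text, so small absolute differences matter. A minimal sketch of the relationship (the log-probabilities below are made up):

```python
import math

# hypothetical natural-log probabilities the model assigned to each evaluation token
token_logprobs = [-1.2, -0.4, -2.1, -0.9]
ppl = math.exp(-sum(token_logprobs) / len(token_logprobs))
print(ppl)  # lower is better; Q6_K's 3.91 vs Q2_K's 5.81 is a large gap on this scale
```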
Perplexity comparison:
https://preview.redd.it/yv6t55amke6c1.png?width=2023&format=png&auto=webp&s=29d7bef0e1705ae9e64387ffa7409196aead854b
Zoom in:
https://preview.redd.it/naimyfcoke6c1.png?width=2055&format=png&auto=webp&s=c2fa13b08b81f54cd9db1e36d4ef78bcf771c776 | 2023-12-15T06:35:53 | https://www.reddit.com/r/LocalLLaMA/comments/18ituzh/mixtral_update_on_perplexity_testing_adding_an/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ituzh | false | null | t3_18ituzh | /r/LocalLLaMA/comments/18ituzh/mixtral_update_on_perplexity_testing_adding_an/ | false | false | 79 | {'enabled': False, 'images': [{'id': 'Na3FaC07gEEB8XgYMOuJh24sPjV1fPQPIPOQabTuxlc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?width=108&crop=smart&auto=webp&s=ec209736cd07943f723d394910e1e7332a5c56e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?width=216&crop=smart&auto=webp&s=b5c52ec9a670cf6c0ee770b747b2a4c50b38e1b9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?width=320&crop=smart&auto=webp&s=6a590038566a1a82af27fd8875f575f7af5bc322', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?width=640&crop=smart&auto=webp&s=09c30f08052a02c11d9e74a45e861b0a550f821d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?width=960&crop=smart&auto=webp&s=f06c740c51611bfc452f694c5d47c6adf968d523', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?width=1080&crop=smart&auto=webp&s=baa487e70fad5bdc0fa7ce51288bd0ee9a08260b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/V7gYa2wwgfocK4XzuyidI9s0dRgAjWlnUG7gocd9DHI.jpg?auto=webp&s=5ed4295b13ea223015774e4b6d9461f12d26ff7a', 'width': 1200}, 'variants': {}}]} | |
AI has taken over. Your last conversation will determine your fate. | 1 | Title. | 2023-12-15T06:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/18itu55/ai_has_taken_over_your_last_conversation_will/ | KvAk_AKPlaysYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18itu55 | false | null | t3_18itu55 | /r/LocalLLaMA/comments/18itu55/ai_has_taken_over_your_last_conversation_will/ | false | false | self | 1 | null |
How to use downloaded models in oobabooga? | 1 | Hey y'all, I'm kinda new to this space.
I downloaded Llama 2 via Meta's recommended documentation, but I was wondering if I could use it with Oobabooga?
Thanks sm | 2023-12-15T06:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/18itt95/how_to_use_downloaded_models_in_oobabooga/ | Monochrome21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18itt95 | false | null | t3_18itt95 | /r/LocalLLaMA/comments/18itt95/how_to_use_downloaded_models_in_oobabooga/ | false | false | self | 1 | null |
Best method for answering aggregation type queries? | 2 | Which method is best suited for answering an aggregation type of query, given that the information may be scattered across multiple locations in multiple files? For example, the question: write down the characteristics of such-and-such a category of objects. This information is not structured anywhere in one or two places, so the usual RAG is not suitable, but the information exists and is scattered across many files, which are not necessarily of the same type in structure. The method must be at least relatively fast (there can be many files). Some preprocessing of files in advance is allowed. | 2023-12-15T06:01:25 | https://www.reddit.com/r/LocalLLaMA/comments/18itbp9/best_method_for_answering_aggregation_type_queries/ | 1nkor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18itbp9 | false | null | t3_18itbp9 | /r/LocalLLaMA/comments/18itbp9/best_method_for_answering_aggregation_type_queries/ | false | false | self | 2 | null |
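One pattern that fits this question (offered as a sketch, not a recommendation): a map-reduce pass where an LLM first extracts candidate facts per file, then a second call merges them. The `llm()` function below is a placeholder for whatever local model or API call you use, not a real library function.

```python
from pathlib import Path

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your local model or API call here")  # placeholder

question = "List the characteristics of category X objects."

# Map step: pull out only the facts relevant to the question from each file.
# This per-file extraction could also be done once per category as the allowed preprocessing.
notes = []
for path in Path("corpus").glob("**/*.txt"):
    text = path.read_text(errors="ignore")[:8000]  # naive truncation to fit the context window
    notes.append(llm(
        f"From the text below, extract any facts relevant to: {question}\n"
        f"Answer NONE if there are none.\n\n{text}"
    ))

# Reduce step: merge the per-file notes into one deduplicated answer.
facts = "\n".join(n for n in notes if n.strip() != "NONE")
print(llm(f"Combine these extracted facts into a single answer to: {question}\n\n{facts}"))
```

The map step is what keeps it relatively fast: it parallelizes per file and its outputs can be cached.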
what are some of the best high-quality chain-of-thought or similar coding datasets? | 1 | [removed] | 2023-12-15T05:08:43 | sujantkv | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18isgpt | false | null | t3_18isgpt | /r/LocalLLaMA/comments/18isgpt/what_are_some_of_the_best_highquality/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Y8OQTqCIyB1m96D0Fv2A7Ve_3nNf_quKkvIcygTW5wI', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?width=108&crop=smart&auto=webp&s=bae549d61ae0678ceb2dbb7acd65d7d68bb58c98', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?width=216&crop=smart&auto=webp&s=f19bf485d8e4af51919e60e241f078a91cc0ade3', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?width=320&crop=smart&auto=webp&s=89f4c40d901fe96a05401cbbb305d5b397a380ed', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?width=640&crop=smart&auto=webp&s=3d01602bad5e91f32f448e199b1b1de1fa4ca065', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?width=960&crop=smart&auto=webp&s=c3c5f6efc2df9059a8f6155eba6dd5b50908ff12', 'width': 960}, {'height': 1441, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?width=1080&crop=smart&auto=webp&s=3280c00ed810d3d989fe812b22ce8b86786e5ae3', 'width': 1080}], 'source': {'height': 1441, 'url': 'https://preview.redd.it/gh5diua96e6c1.png?auto=webp&s=fe555d24b3623c091f2c66e4b8768a3b182fcf05', 'width': 1080}, 'variants': {}}]} | ||
How to fine-tune Mixtral 8x7b in Google Colab? | 2 | I need a Colab notebook to fine-tune Mixtral. | 2023-12-15T04:43:39 | https://www.reddit.com/r/LocalLLaMA/comments/18is0n2/how_to_finetune_mixtral_8x7b_in_google_colab/ | Original_Job6327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18is0n2 | false | null | t3_18is0n2 | /r/LocalLLaMA/comments/18is0n2/how_to_finetune_mixtral_8x7b_in_google_colab/ | false | false | self | 2 | null |
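A rough QLoRA sketch of the usual transformers + peft + trl recipe (library APIs as of late 2023); the dataset, hyperparameters, and target modules are illustrative assumptions, and note that Mixtral-8x7B in 4-bit still needs an A100-class GPU, not a free-tier Colab T4.

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                         target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])  # attention only, to keep it small
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(output_dir="mixtral-qlora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
)
trainer.train()
```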
Best Model to locally run in a low end GPU with 4 GB RAM right now | 33 | I am a total newbie to LLM space. As the title says, I am trying to get a decent model for coding/fine tuning in a lowly Nvidia 1650 card. I am excited about Phi-2 but some of the posts here indicate it is slow due to some reason despite being a small model. | 2023-12-15T04:07:40 | https://www.reddit.com/r/LocalLLaMA/comments/18ire2a/best_model_to_locally_run_in_a_low_end_gpu_with_4/ | curious_cat_says_hi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ire2a | false | null | t3_18ire2a | /r/LocalLLaMA/comments/18ire2a/best_model_to_locally_run_in_a_low_end_gpu_with_4/ | false | false | self | 33 | null |
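For that class of hardware, the usual route is a small GGUF quant with partial GPU offload; a minimal llama-cpp-python sketch (the model file name is a placeholder, and `n_gpu_layers` has to be tuned down until it fits in 4 GB):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./phi-2.Q4_K_M.gguf",  # placeholder: any small quantized GGUF model
    n_ctx=2048,
    n_gpu_layers=20,  # offload only as many layers as fit in 4 GB of VRAM; the rest run on CPU
)
out = llm("### Instruction: Write a Python function that reverses a string.\n### Response:",
          max_tokens=200, temperature=0.2)
print(out["choices"][0]["text"])
```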
Holy moly, Mixtral 8x7b passes my Sisters test without even telling it to think step by step! Only Falcon 180b and GPT-4 nailed this question before. | 264 | 2023-12-15T03:41:56 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18iqxcl | false | null | t3_18iqxcl | /r/LocalLLaMA/comments/18iqxcl/holy_moly_mixtral_8x7b_passes_my_sisters_test/ | false | false | 264 | {'enabled': True, 'images': [{'id': 'OdOO7dTEMRXR2Mjaf-nN-XxsrRm8kajhlIk3j8FdcoE', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?width=108&crop=smart&auto=webp&s=6815049de7f9004f8edc371d336c4ffcbb0e5051', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?width=216&crop=smart&auto=webp&s=6001c50d579312803731bf116ad7cb6c3dcea4d5', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?width=320&crop=smart&auto=webp&s=5c1e165ed34f82fcad590780b9b1c43eb9a7f417', 'width': 320}, {'height': 328, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?width=640&crop=smart&auto=webp&s=b514dca9c612fcc4c7129e00d8458a88641d6f37', 'width': 640}, {'height': 493, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?width=960&crop=smart&auto=webp&s=dfddf1a69d5de826d201b8858a42f6f6957cdc7f', 'width': 960}, {'height': 554, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?width=1080&crop=smart&auto=webp&s=cf7827cbcc4bfe9c5ca89487b895385e9efcab43', 'width': 1080}], 'source': {'height': 1074, 'url': 'https://preview.redd.it/q68580ukqd6c1.png?auto=webp&s=2085fa2990a22602cf08bcd3c345108021349fea', 'width': 2090}, 'variants': {}}]} | |||
Mamba-3B-SlimPJ: State-space model - No Transformer Architecture | 62 | 2023-12-15T02:09:05 | https://www.together.ai/blog/mamba-3b-slimpj | super-helper | together.ai | 1970-01-01T00:00:00 | 0 | {} | 18ip5qj | false | null | t3_18ip5qj | /r/LocalLLaMA/comments/18ip5qj/mamba3bslimpj_statespace_model_no_transformer/ | false | false | default | 62 | null | |
Mixtral vs YI 34B | 15 | Which one is smarter?
IME, I’m thinking the winner is YI? | 2023-12-15T02:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/18ip4y8/mixtral_vs_yi_34b/ | Majestical-psyche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ip4y8 | false | null | t3_18ip4y8 | /r/LocalLLaMA/comments/18ip4y8/mixtral_vs_yi_34b/ | false | false | self | 15 | null |
What’s everyone’s favorite uncensored 7B? | 7 | Asking for a friend. So far TheBloke’s Dolphin has worked well. Kind of have to lead it where you want though. | 2023-12-15T02:06:22 | https://www.reddit.com/r/LocalLLaMA/comments/18ip3zu/whats_everyones_favorite_uncensored_7b/ | AmericanKamikaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ip3zu | false | null | t3_18ip3zu | /r/LocalLLaMA/comments/18ip3zu/whats_everyones_favorite_uncensored_7b/ | false | false | self | 7 | null |
Project advice and sanity check: data mining from full-text scientific papers with context analysis by LLMs. | 3 | SHORT: I want to use LLMs to analyse scientific texts for specific gene-gene interactions. Basically data mining. Questions on feasibility, plus tips and tricks.
So I am a bioinformatics master's student who got into LLMs in his off time because an idea floated around in my head, and I wanted an opinion on how feasible it is. And since this subreddit is super helpful and full of smart people, I thought I would ask it here ^^
# The background
In biology, gene interactions and specifically gene networks are extremely interesting, and our knowledge of them is getting better but is still flaky. There are hundreds of papers published each day, and many findings that are just a side note in the main text get lost if they are not put in the abstract or listed as a main finding.
This goes specifically for things like gene-gene interactions: some gene being upregulated is just an interesting side note because, in the end, the authors chose to focus on something else.
In this way, thousands of potentially known interactions get lost.
# The idea
Create a database for different biological contexts showing gene interactions pulled from papers.
Use LLMs to read a paper and work out the context: for example, that it's a paper about lung cancer.
And the conditions, e.g. cells being irradiated.
Then, from the text, write out the genes that are mentioned under the same conditions and whether they were up- or down-regulated.
Get this into an output form I can feed to a database.
It does not need to work perfectly, because the output is supposed to point in the right direction and not be treated as just-trust-it data.
# The current technical idea
Use a model like BioBERT to label all genes in a given paper with named entity recognition, and put them in a list as metadata for each paper.
Use another LLM, either with RAG or a really big context window. Feed it the paper and a prompt with the tagged genes (or multiple prompts if that works better).
Engineer a prompt, or fine-tune the model, so that it gives the answers in a format that can be read into a database. For example, in the form of a matrix with -1 for down-regulated, 0 for no change, and 1 for up-regulated.
| |Gene1|Gene2|Gene3|
|:-|:-|:-|:-|
|Gene1|0|1|-1|
|Gene2|1|0|1|
|Gene3|-1|1|0|
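For what it's worth, the NER step of the plan above is only a few lines with the transformers pipeline; the checkpoint name below is a placeholder for whichever BioBERT-style gene/protein NER model you settle on.

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="some-biobert-gene-ner-checkpoint",  # placeholder model id
               aggregation_strategy="simple")

paper_text = "TP53 was strongly upregulated after irradiation, while EGFR expression did not change."
genes = [e["word"] for e in ner(paper_text) if e["score"] > 0.8]
print(genes)  # this gene list becomes the per-paper metadata fed into the second LLM's prompt
```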
# Questions
I currently want to know if this is at all feasible, and if yes, what your thoughts are on my current game plan.
I want to get a proof of concept working on an RTX 3090.
Any model suggestions?
Ideas on how to force the output to be what I need it to be?
Anything super obvious I am overlooking? | 2023-12-15T01:45:12 | https://www.reddit.com/r/LocalLLaMA/comments/18iopat/projekt_advice_and_sanity_check_data_mining_from/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iopat | false | null | t3_18iopat | /r/LocalLLaMA/comments/18iopat/projekt_advice_and_sanity_check_data_mining_from/ | false | false | self | 3 | null |
Falcon 180b Q4_0 Aces the 8 Sisters vs. 1 Sister Test while Gemini Pro Failed it | 2 | 2023-12-15T01:45:11 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18iopaf | false | null | t3_18iopaf | /r/LocalLLaMA/comments/18iopaf/falcon_180b_q4_0_aces_the_8_sisters_vs_1_sister/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'rcSFpUcgKxz-fqVay125XIlTGiMkWpvvNjfZ8iOFivQ', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?width=108&crop=smart&auto=webp&s=9add385105dbcc6478b2fb0b55a98487db19cfd4', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?width=216&crop=smart&auto=webp&s=5601c42ab824cc29036ed66f34184b469e606df9', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?width=320&crop=smart&auto=webp&s=21f3c5d04499767d35524b42bb5a95ddbdbaae99', 'width': 320}, {'height': 401, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?width=640&crop=smart&auto=webp&s=3e719a53a5bf91583a114c29c69e94dc33ef938b', 'width': 640}, {'height': 602, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?width=960&crop=smart&auto=webp&s=81a324b043af2bdbd6fecd8e88cffd9b5c1e70ea', 'width': 960}, {'height': 677, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?width=1080&crop=smart&auto=webp&s=2d3161f2b52bee6b0d617668a29c6ab828ff7b1d', 'width': 1080}], 'source': {'height': 808, 'url': 'https://preview.redd.it/nswfmwdw5d6c1.png?auto=webp&s=ba90c534b2e6b2c19a8748c2ea74315223385ef0', 'width': 1288}, 'variants': {}}]} | |||
Is there any way currently to try out and utilize local LLMs without enterprise-grade hardware? | 2 | I really want to try the open-source LLMs I find on Hugging Face. Is the only way to run open-source LLMs right now with expensive hardware, or is there another way? Please, I am coping so hard. | 2023-12-15T01:37:02 | https://www.reddit.com/r/LocalLLaMA/comments/18iojin/is_there_any_way_currently_to_try_out_and_utilize/ | Articulity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iojin | false | null | t3_18iojin | /r/LocalLLaMA/comments/18iojin/is_there_any_way_currently_to_try_out_and_utilize/ | false | false | self | 2 | null |
How are you people running 70B parameter models? | 46 | I have a 3090 and it seems like I can run 30B models but not 33B or 34B. Is there some trick you guys are using to get them to work?
Also, any recommendations for the least censored models? | 2023-12-15T01:16:09 | https://www.reddit.com/r/LocalLLaMA/comments/18io4lp/how_are_you_people_running_70b_parameter_models/ | assfaceMCdickbutt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18io4lp | false | null | t3_18io4lp | /r/LocalLLaMA/comments/18io4lp/how_are_you_people_running_70b_parameter_models/ | false | false | self | 46 | null |
Needing a little assistance with setup if possible 3090 + P40, P41 | 1 | [removed] | 2023-12-15T01:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/18inzbx/needing_a_little_assistance_with_setup_if/ | sysadmin420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18inzbx | false | null | t3_18inzbx | /r/LocalLLaMA/comments/18inzbx/needing_a_little_assistance_with_setup_if/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=108&crop=smart&auto=webp&s=a0329d4207ada0345185e70a97a0ef1f27aec034', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=216&crop=smart&auto=webp&s=8722bf8052baa4647e96ebeb0d22f50bf529b6ac', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=320&crop=smart&auto=webp&s=6562c4a330763746058f2250630ec6d3854b2e3d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=640&crop=smart&auto=webp&s=fff0deae054d2476ac870508887dbbee06d9387c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=960&crop=smart&auto=webp&s=f6a32be275833b4d47802b79f3345f568bd43a4d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=1080&crop=smart&auto=webp&s=97e8d6d94b95697d64482f5fcda32d11814df7b8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?auto=webp&s=b3c1793ddfb0595cba1bbb23fba79360953beb8d', 'width': 1200}, 'variants': {}}]} |
Does anyone need to deploy a LoRA fine-tune for super cheap? | 2 | I'm working on a project right now where I need to fine-tune a model since OpenAI doesn't work for this use case, but the product is still being prototyped. My current deployment options seem to be renting a GPU (expensive) or serverless (long cold boot times, bad experience for alpha testers?).
I was wondering if anyone else has this problem (or has a solution to it). There are a bunch of papers that have come out describing ways to efficiently serve many LoRAs on the same machine.
Maybe I could work on a platform where you basically pay a flat $30/m or something to deploy a LoRA. You would get basically unlimited inference during offpeak and share some fraction of total QPS during peak. Seems like it could be useful for people experimenting like me, or for indie hacker type products that don't have massive inference QPS. Would anyone be interested? | 2023-12-15T00:31:16 | https://www.reddit.com/r/LocalLLaMA/comments/18in8j7/does_anyone_need_to_deploy_a_lora_finetune_for/ | box_head1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18in8j7 | false | null | t3_18in8j7 | /r/LocalLLaMA/comments/18in8j7/does_anyone_need_to_deploy_a_lora_finetune_for/ | false | false | self | 2 | null |
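A much simpler version of what those papers do is already cheap with peft: keep one base model resident and hot-swap adapters per request. A sketch (adapter paths and names are placeholders; real multi-LoRA serving systems batch across adapters far more efficiently):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Load several LoRA adapters on top of the same resident base weights.
model = PeftModel.from_pretrained(base, "./adapters/customer-a", adapter_name="customer-a")
model.load_adapter("./adapters/customer-b", adapter_name="customer-b")

def generate(prompt: str, adapter: str) -> str:
    model.set_adapter(adapter)  # switch LoRA without reloading the base model
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```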
Are there any local LLMs that come with a JavaScript API? | 1 | Looking for one to use in my command line program [cmdh (command helper)](https://github.com/pgibler/cmdh). It's been immensely useful to me and has saved me countless hours figuring out how to devise complex or hard-to-find commands (you can see some in the docs and video on the project git page), but it requires creating an OpenAI token and setting an environment variable, so it's not the best out-of-the-box experience.
My goal is to make it as easy to use as possible and completely open source so others can get the benefits I do with as little effort as possible.
Thank you for the help! | 2023-12-15T00:26:45 | https://www.reddit.com/r/LocalLLaMA/comments/18in598/are_there_any_local_llms_that_with_a_javascript/ | Giblaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18in598 | false | null | t3_18in598 | /r/LocalLLaMA/comments/18in598/are_there_any_local_llms_that_with_a_javascript/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VY5xh8OZ61gWUtf7hCM56P5EVdmOJCJ1LV3IAXvjlgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?width=108&crop=smart&auto=webp&s=af584667405ec723e886e6d9ecc5eccda3bf7f2c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?width=216&crop=smart&auto=webp&s=0fabad49d8713f53189913e2b92cf818d183982b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?width=320&crop=smart&auto=webp&s=ba3512f42a74288194985303d19a8049a3230650', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?width=640&crop=smart&auto=webp&s=1ea0b5e928e6c2136cb29178aa9ba4b652139c45', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?width=960&crop=smart&auto=webp&s=e227368bdc77b3c7053de471de41e98a5dc69e55', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?width=1080&crop=smart&auto=webp&s=06b0e5911303ad3c56feb483c03a75f581eeced1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/k_-FLqGpMgSk0H7b6O7oqeCdVv8rWZC__YpPs2giI04.jpg?auto=webp&s=edb721a57c1ddd7a6a7e8557ee41a229540ccdae', 'width': 1200}, 'variants': {}}]} |
Help with instruction format for mistral-medium, Mixtral-8x7B-Instruct-v0.1, Llama-2-7B-32K-Instruct | 4 | Hi everyone, I am learning/working with these models via API (mistral.ai and together.ai). Initially I was doing something like this, based on the documentation available on the respective sites, and looking at the generated code on the sites' playground. I still don't get the `<s>` and what happens to the `</s>`.
output = together.Complete.create(
    prompt = '''
<s>[INST] my instruction goes here
[/INST]
''',
    model = "mistralai/Mixtral-8x7B-Instruct-v0.1",
    max_tokens = 2000,
    temperature = 0.7,
    top_k = 60,
    top_p = 0.75,
    repetition_penalty = 1.1,
    stop = ['</s>', '[INST]']  # original had a misplaced quote: ['</s>,[INST]'] is a single string that never matches
)
I ended up finding out the models have specific prompting instructions: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format and https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct#model-usage
Another thing I found on mistral's "chat server" is this: "Our recommendation for templating chat template messages is to never go through a "text" only view even though we know that many OSS libraries work that way." And they discuss the pseudocode that is from the Mixtral-8x7B model above.
I am not familiar with this tokenizing and the special BOS/EOS tags. Are these both necessary for the model? If so, can you refer me to example code I can work from? All three models seem to be working okay with inserting the prompt as text, but it sounds like from the above it can be "better". I can't tell if mistral-medium needs these tags or not, either. Thanks. | 2023-12-15T00:15:09 | https://www.reddit.com/r/LocalLLaMA/comments/18imwg0/help_with_instruction_format_for_mistralmedium/ | xilex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18imwg0 | false | null | t3_18imwg0 | /r/LocalLLaMA/comments/18imwg0/help_with_instruction_format_for_mistralmedium/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'xU07sACUhuScPaN8BIMV0USMZkFwSMGcorjhvoKF4K0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=108&crop=smart&auto=webp&s=9ad9ea4d991fae47edd6a09b5feb6d60eca474de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=216&crop=smart&auto=webp&s=308a202a45ba16483d13920cc1bacec91e523b47', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=320&crop=smart&auto=webp&s=e3032902c70163f83e8380a9d1a220d309cdc7cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=640&crop=smart&auto=webp&s=fde1276456518b0ab6ff6e2f6fd1f198a952883e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=960&crop=smart&auto=webp&s=0e9b19ae8aba7dbb28009cce456e33e378c4c47c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=1080&crop=smart&auto=webp&s=b78d9fadfbf4f7ba88328a51e8bd25a6f20d223f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?auto=webp&s=d55b83d577e907b76d4542af518cf15e4524a728', 'width': 1200}, 'variants': {}}]} |
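For what it's worth, my understanding of the Mistral-style format: `<s>` is the BOS token and appears once at the very start, and `</s>` (EOS) is appended after each assistant reply in multi-turn prompts, which is why it doubles as a stop sequence. The least error-prone way to build the text is to let the tokenizer's chat template do it; a sketch (only the model id from the post is assumed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
messages = [{"role": "user", "content": "my instruction goes here"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should look like: <s>[INST] my instruction goes here [/INST]
```

The rendered string can then be passed as `prompt` to together.Complete.create as in the snippet above; the mistral.ai endpoint takes the messages list directly, so no manual tagging is needed there.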
Could MoE models be made to run faster on multiple GPUs by selecting one expert on each GPU? | 14 | For example if Mixtral selects two of its 8 experts, could there be a version of it where 4 experts are on one GPU and 4 are on the other, and it selects one of each and runs them at the same time? Or am I completely misunderstanding how MoE works | 2023-12-15T00:06:58 | https://www.reddit.com/r/LocalLLaMA/comments/18imq9a/could_moe_models_be_made_to_run_faster_on/ | NLTPanaIyst | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18imq9a | false | null | t3_18imq9a | /r/LocalLLaMA/comments/18imq9a/could_moe_models_be_made_to_run_faster_on/ | false | false | self | 14 | null |
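Partly a misunderstanding, I think: routing is per token and per layer, not per request, so you cannot statically pin "the two active experts" to separate GPUs for a whole generation. A minimal sketch of Mixtral-style top-2 gating for a single token at a single layer (dimensions from the published config; illustrative, not the actual implementation):

```python
import torch

hidden = torch.randn(4096)                    # one token's hidden state at one layer
gate = torch.nn.Linear(4096, 8, bias=False)   # router producing a score per expert

probs = torch.softmax(gate(hidden), dim=-1)
weights, experts = torch.topk(probs, k=2)     # pick 2 of 8 experts for *this token, this layer*
weights = weights / weights.sum()             # renormalize the two mixing weights
print(experts.tolist(), weights.tolist())
```

Splitting experts across GPUs (expert parallelism) is a real technique, but every token can land on any pair of experts at every layer, so both GPUs stay involved throughout.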
Why is EXL2 not a thing with Mixtral yet? | 21 | I have a 4090 but it seems like I can not take advantage of the speed yet because of the prompt processing issue. Why is it difficult to enable Mixtral for EXL2? Is it possible at all? When do you think it will be done? | 2023-12-14T23:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/18imjga/why_is_exl2_not_a_thing_with_mixtral_yet/ | opgg62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18imjga | false | null | t3_18imjga | /r/LocalLLaMA/comments/18imjga/why_is_exl2_not_a_thing_with_mixtral_yet/ | false | false | self | 21 | null |
When you need a question answered or problems solved, which do you use first: an LLM or a search engine? | 1 | [removed]
[View Poll](https://www.reddit.com/poll/18imd5r) | 2023-12-14T23:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/18imd5r/when_you_need_a_question_answered_or_problems/ | TheTwelveYearOld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18imd5r | false | null | t3_18imd5r | /r/LocalLLaMA/comments/18imd5r/when_you_need_a_question_answered_or_problems/ | false | false | self | 1 | null |
Open A.I. chief claims GPT 4.5 rumor is not logit | 4 | haha jk: s/logit/legit/
[https://twitter.com/sama/status/1735422206296088950](https://twitter.com/sama/status/1735422206296088950)
Rumor: [https://www.reddit.com/r/OpenAI/comments/18i5n29/anyone\_hear\_of\_gpt45\_drop\_today/](https://www.reddit.com/r/OpenAI/comments/18i5n29/anyone_hear_of_gpt45_drop_today/)
​ | 2023-12-14T23:48:13 | https://www.reddit.com/r/LocalLLaMA/comments/18imbo6/open_ai_chief_claims_gpt_45_rumor_is_not_logit/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18imbo6 | false | null | t3_18imbo6 | /r/LocalLLaMA/comments/18imbo6/open_ai_chief_claims_gpt_45_rumor_is_not_logit/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'lSMCkaNkZBwFA2j59VIRNg7Kk_3Pqg1Ntmsr_RQQKeo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OtZ-c2HAM__-fvRQdiuoEWTOw0QnPzmTwGsZ-5SkEV8.jpg?width=108&crop=smart&auto=webp&s=1459c59368488788b4afb3e6ff65feb9cf9de594', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/OtZ-c2HAM__-fvRQdiuoEWTOw0QnPzmTwGsZ-5SkEV8.jpg?auto=webp&s=042ce8b3a51e3afa38cab2eef9920fef607efa31', 'width': 140}, 'variants': {}}]} |
What's the best "bang for the buck" setup? | 37 | If I were to invest a few thousand dollars what would be the best setup?
I've been looking a few of these: [2080 Ti modded w/ 22GB VRAM](https://2080ti22g.com/products/2080-ti-22g-memory-custom-upgraded-stable-diffusion-llama2-deep-learning), or would I be better off getting 2x the number of Tesla P40s for the same cost? (Yes I know they are server cards with no cooling and special power needs, that would be accounted for) | 2023-12-14T23:44:48 | https://www.reddit.com/r/LocalLLaMA/comments/18im8zc/whats_the_best_bang_for_the_buck_setup/ | AbleSugar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18im8zc | false | null | t3_18im8zc | /r/LocalLLaMA/comments/18im8zc/whats_the_best_bang_for_the_buck_setup/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': '-tIHwrAFTXsWxBjFLpet6_X4l9Rnwld_O97DK8lM854', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/Arr6VNJrO5FfNnCm9KYgrH524eXlOC1vWpZzB-ziKvU.jpg?width=108&crop=smart&auto=webp&s=bd43d4b0a95b9087cad13780f4cf452d23038a0f', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/Arr6VNJrO5FfNnCm9KYgrH524eXlOC1vWpZzB-ziKvU.jpg?width=216&crop=smart&auto=webp&s=051fd453482427bcead69970e25d0f7c19e03942', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/Arr6VNJrO5FfNnCm9KYgrH524eXlOC1vWpZzB-ziKvU.jpg?width=320&crop=smart&auto=webp&s=a548078896cb726814b79ae692ee36967f9ed644', 'width': 320}, {'height': 342, 'url': 'https://external-preview.redd.it/Arr6VNJrO5FfNnCm9KYgrH524eXlOC1vWpZzB-ziKvU.jpg?width=640&crop=smart&auto=webp&s=728a90f0e7eff32fb2be19da5c1d7725052a1f07', 'width': 640}], 'source': {'height': 456, 'url': 'https://external-preview.redd.it/Arr6VNJrO5FfNnCm9KYgrH524eXlOC1vWpZzB-ziKvU.jpg?auto=webp&s=bdab62f6c3f54c56d7bd6d6d1ae1870e73b0b685', 'width': 851}, 'variants': {}}]} |
Brand new model, Bagel-DPO-7B, released by Jon Durbin! | 30 | This is a model that uses a variety of prompt formats instead of just one, so you can choose whichever you like best. It also uses a vast variety of datasets and should be awesome at everything you throw at it. Try it out!
[https://huggingface.co/jondurbin/bagel-7b-v0.1](https://huggingface.co/jondurbin/bagel-7b-v0.1)
[https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1)
(I recommend the DPO version, should be better.)
Quants:
[https://huggingface.co/TheBloke/bagel-dpo-7B-v0.1-GGUF](https://huggingface.co/TheBloke/bagel-dpo-7B-v0.1-GGUF)
[https://huggingface.co/TheBloke/bagel-dpo-7B-v0.1-GPTQ](https://huggingface.co/TheBloke/bagel-dpo-7B-v0.1-GPTQ)
​ | 2023-12-14T23:38:24 | https://www.reddit.com/r/LocalLLaMA/comments/18im461/brand_new_model_bageldpo7b_released_by_jon_durbin/ | dampflokfreund | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18im461 | false | null | t3_18im461 | /r/LocalLLaMA/comments/18im461/brand_new_model_bageldpo7b_released_by_jon_durbin/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'x_c1JPRMgEITRcAWM52wF0G3a5OkbAb_8OSn-qpgNM4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?width=108&crop=smart&auto=webp&s=719657f89a6428ef5bcd1cd32090ed7e01a8f824', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?width=216&crop=smart&auto=webp&s=60a9ef1d9fc6ffc5eab7f1bdb09dcb4e0ddd7ff1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?width=320&crop=smart&auto=webp&s=cc16dfcce5b15fc4e89b96624896fa47c95067e6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?width=640&crop=smart&auto=webp&s=8eea9eb4f5f50f71ad5500dac431c12cac555a91', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?width=960&crop=smart&auto=webp&s=b7220797acb999963474a7f7580df82d51084c66', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?width=1080&crop=smart&auto=webp&s=182bbbf36c66ac8f2296ccbe3fb047ef783b037a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hRyHrMIBXthr6qW645oFTFMnvjvH-RIRV-cTZHNLYWs.jpg?auto=webp&s=e42e0f1db72207febc457a41a5d018d2ba079afe', 'width': 1200}, 'variants': {}}]} |
goliath-120b-exl2-rpcal: how to use? | 2 | Sorry if the question was already asked, but I can't find anything.
How can I download this file? [The page](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal) only gives links to small quantisations. Is there a special option in Ooba to load both a Goliath GGUF and the PIPPA package?
Or do I have to download the Goliath file and use a script?
Thanks. | 2023-12-14T23:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/18im0d0/goliath120bexl2rpcal_how_to_use/ | ervertes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18im0d0 | false | null | t3_18im0d0 | /r/LocalLLaMA/comments/18im0d0/goliath120bexl2rpcal_how_to_use/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '7wYYeKHJdwemBLlNcCeOJoZ76fi3lfjPqO3AQ5iU-kU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?width=108&crop=smart&auto=webp&s=0d937da4e405d4c03c7c2f20739cc838f87a30f2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?width=216&crop=smart&auto=webp&s=fd798bfb008589f56f86a19ba768822235d3cdce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?width=320&crop=smart&auto=webp&s=170d600f5a7db426c644e5feca393e3dc4118795', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?width=640&crop=smart&auto=webp&s=213b08e53dc718fd1d64c227892d26475a055cce', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?width=960&crop=smart&auto=webp&s=2ae761ba712e3c66234d0252777d7f622bbcacce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?width=1080&crop=smart&auto=webp&s=2296b9dcb6bbc16af45cde34b19b064384e63801', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KC7aW3g1hmeFvCj3kyCINwH9HWaT8ZCUkmTW5nvj8Ws.jpg?auto=webp&s=a1bef0e522ba14acdb5ee3a9bd07cae9202adc47', 'width': 1200}, 'variants': {}}]} |
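If it helps, the usual way to pull an EXL2 quant is with huggingface_hub; EXL2 repos often keep each bpw variant on its own branch, so the `revision` below is a guess. Check the repo's branch list and drop or change it as needed, then point text-generation-webui's ExLlamav2 loader at the downloaded folder.

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Panchovix/goliath-120b-exl2-rpcal",
    revision="4.85bpw",                # placeholder branch name; omit if the repo has no branches
    local_dir="goliath-120b-exl2-rpcal",
)
```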
Is it true that we won't need smaller fine-tuned models in the future? | 1 | [removed] | 2023-12-14T23:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/18ilsz9/is_it_true_that_we_wont_need_smaller_finetuned/ | Slight-Employ-5765 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ilsz9 | false | null | t3_18ilsz9 | /r/LocalLLaMA/comments/18ilsz9/is_it_true_that_we_wont_need_smaller_finetuned/ | false | false | self | 1 | null |
text-generation-webui Playground: how to set the character? | 1 | [removed] | 2023-12-14T22:51:56 | https://www.reddit.com/r/LocalLLaMA/comments/18il3ep/textgenerationwebui_playground_how_to_set_the/ | StableLlama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18il3ep | false | null | t3_18il3ep | /r/LocalLLaMA/comments/18il3ep/textgenerationwebui_playground_how_to_set_the/ | false | false | self | 1 | null |
A more intuitive way to score MMLU benchmarks with DZPAS | 9 | **The TLDR**: DZPAS is an adjustment to MMLU benchmark scores that takes into account 3 things: (1) scores artificially boosted due to multiple choice guessing, (2) data contamination, and (3) 0-shot adjustment to more accurately score LLMs in the way people use them. Here's how that looks on common open-source benchmarks (notice the huge drop in Llama-7B from 35.7% to 14.0%):
https://preview.redd.it/b8v4ihsp0c6c1.png?width=1493&format=png&auto=webp&s=509934c83e0a6590a53eedfb0029af7a8372ca19
Now, for a more in-depth dive, let's see how we got these values...
Introducing **DZPAS**, an intuitive way to score MMLU that more closely aligns with how people interact with LLMs. DZPAS stands for Decontamination Zero-shot Penalty Adjusted Score. It is an adjustment to the 5-shot MMLU benchmark scores with the following changes:
**Penalty Adjusted Score** \- Since MMLU is a multiple-choice benchmark, random guessing will get you a score of 25%. Here we introduce PAS to penalize wrong answers by 0.25x, such that random guessing will, on average, get a score of 0%. This is more intuitive when trying to understand how often the LLM will get a question correct when there are not multiple choices.
**Contamination Adjustment** \- From [this fantastic paper](https://arxiv.org/pdf/2311.04850.pdf) we learn Llama2 has approximately 10% pre-training contamination on MMLU. We use this contamination to adjust scores down (relative to how accurate they are). For example, if a model scores 50%, they will lose (0.5\*10% contamination) = 5% of their score for an adjusted value of 45%.
**0-Shot Adjustment** \- While LLMs used to be 'Few Shot Learners', with recent advances in LLMs (instruction fine-tuning, RLHF) it no longer makes sense to evaluate benchmarks in a few shot setting. MMLU is generally always evaluated 5-shot, but when people interact with LLMs they almost never give additional in-content learning examples. 5-shot inflates scores and creates a disconnect between a benchmark and real-world use. Here we use the [original MMLU paper](https://arxiv.org/pdf/2009.03300.pdf) to find 0-Shot and 5-shot score differences (from GPT-3 175B) and use that to create a 2nd order polynomial to determine an adjustment factor (which varies depending on the benchmark accuracy). Hopefully in the future all benchmarks will be re-evaluated as 0-shot which should make this adjustment unnecessary.
Finally DZPAS is all 3 of the above adjustments combined in a mathematically robust manner. You can see a larger variation in the scores of common LLMs which more accurately reflects a models capabilities in real-world use. For example Llama-7B goes from 35.7% to 14.0% which means when asking MMLU type questions LLama-7B is only likely to answer 1 out of 7 correctly. With LLama2-70B you see a much less dramatic change going from 69.8% to 57.2%. Also plotted below are the individual contributions of each adjustment factor described above.
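A back-of-the-envelope sketch of the first two adjustments as I read them; the post gives the intent rather than exact formulas (and omits the 0-shot polynomial's coefficients), so treat this as illustrative, not the author's implementation:

```python
def penalty_adjusted(acc: float, n_choices: int = 4) -> float:
    """Chance-corrected score: random guessing (acc = 1/n_choices) maps to 0, perfect to 1."""
    chance = 1.0 / n_choices
    return (acc - chance) / (1.0 - chance)

def contamination_adjusted(acc: float, contamination: float = 0.10) -> float:
    """A 50% score with 10% contamination loses 0.5 * 0.10 = 5 points, per the example above."""
    return acc * (1.0 - contamination)

print(round(penalty_adjusted(0.357), 3))        # Llama-7B's 5-shot MMLU after guess correction (~0.143)
print(round(contamination_adjusted(0.698), 3))  # Llama2-70B after the contamination haircut (~0.628)
```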
https://preview.redd.it/cd4l87c02c6c1.png?width=2000&format=png&auto=webp&s=50d6ec4593784eb1b0f922799a3a53b01ac30419
https://preview.redd.it/dael7y3u3c6c1.png?width=1136&format=png&auto=webp&s=7a13d95123eb84346bb960a65c8dfdc3b3e23a1f
While DZPAS does not address the larger problems of needing new benchmarks, pervasive 'fine-tuning on the test set', or the [errors in MMLU](https://www.reddit.com/r/LocalLLaMA/comments/163x2wc/philip_from_ai_explained_finds_hundreds_of_errors/), we think this is a small step in the right direction. Comments and corrections welcome. | 2023-12-14T22:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/18ikb0e/a_more_intuitive_way_to_score_mmlu_benchmarks/ | jd_3d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ikb0e | false | null | t3_18ikb0e | /r/LocalLLaMA/comments/18ikb0e/a_more_intuitive_way_to_score_mmlu_benchmarks/ | false | false | 9 | null |
Where is tiny llama? | 1 | Is it done training? | 2023-12-14T22:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/18ik02j/where_is_tiny_llama/ | TheHumanFixer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ik02j | false | null | t3_18ik02j | /r/LocalLLaMA/comments/18ik02j/where_is_tiny_llama/ | false | false | self | 1 | null |
Mixtral support on Ollama v0.1.16 | 28 | New Ollama release adds support Mixtral and other models based on its Mixture of Experts (MoE) architecture. (this model requires at least 48GB of memory) | 2023-12-14T21:54:32 | https://github.com/jmorganca/ollama/releases/tag/v0.1.16 | Bitcoin_hunter-21M | github.com | 1970-01-01T00:00:00 | 0 | {} | 18ijrs0 | false | null | t3_18ijrs0 | /r/LocalLLaMA/comments/18ijrs0/mixtral_support_on_ollama_v0116/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'irkw2kO6m2EHTyQzQLmqJ7FrCk7esoThvKCECTFp1Gg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?width=108&crop=smart&auto=webp&s=505095226bc1bed5375dde9dbbb207e50a1371fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?width=216&crop=smart&auto=webp&s=aebd4fb3e5305c03578e1f655298bda632c9c713', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?width=320&crop=smart&auto=webp&s=fac2e64c02ffc1ee8696306ca4eb1489dc061fd5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?width=640&crop=smart&auto=webp&s=0ecfd0b72b737e1ab8f66cd37a80c0affb7d8857', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?width=960&crop=smart&auto=webp&s=a586228036f2c3ef73df868fadf27cde1627c4c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?width=1080&crop=smart&auto=webp&s=cfb7c691c08e784d7026e37083ad3804d5bc4ffd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/P8HySjHC4vlFPudPIX6Ha2p9_IdH6t-tsgX5qh7Dl3s.jpg?auto=webp&s=84311be1ca41f1d6ceece48aa671fd16d1d2e7d2', 'width': 1200}, 'variants': {}}]} | |
Fun with Mistral-Medium | 21 | I was blessed with Mistral API access this morning so I asked my usual riddle of the 3 available models.
Riddle: A girl has as many brothers as sisters, but each brother has only half as many brothers and sisters. How many brothers and sisters are there?
mistral-tiny (aka mistral-7B-v0.2)
---
Let's reason through this problem:
1. Let's denote the number of sisters as S and the number of brothers as B.
2. According to the problem, a girl has as many brothers as sisters, so B = S.
3. Each brother has half as many brothers and sisters as the girl does. So, each brother has (B = S) - 1 brother and S sisters. For brothers, the number of sisters is the same as for the girl, but for brothers, the number of brothers is one less.
4. So, each brother has S sisters and B - 1 brothers.
5. The problem states that each brother has half as many brothers and sisters as the girl does. So, B - 1 = (S + B) / 2.
6. Solving the equation for B: B = 2S / (1 - 1/2) = 2S / (1/2) = 2S * 2 = 4S
7. Since we already know that B = S, then 4S = 2S + S, or 3S = 2S, which means S = 0, but we cannot have zero sisters or brothers.
Therefore, this problem has no valid solution based on the given information.
---
Absolute nonsense.
mistral-small (aka mixtral 7Bx8)
---
There are 4 sisters and 3 brothers.
Here's the logic:
1. Let's use variables to represent the number of brothers (B) and sisters (S).
2. We know that the number of brothers is equal to the number of sisters (B = S).
3. The number of brothers and sisters that each brother has is equal to B + S (since it includes both genders).
4. However, each brother has only half as many brothers and sisters as their total number, so we can write this equation: B + S = 2 * (number of brothers).
5. Substituting the first equation into the second equation, we get: B + S = 2B.
6. Solving for B, we find that B = 4, which means there are 4 sisters.
7. Using the first equation (B = S), we find that there are 3 brothers.
8. Therefore, there are 4 sisters and 3 brothers in the family.
So, the final answer is 4 sisters and 3 brothers.
---
Answer is correct, reasoning is nonsense.
mistral-medium ("unreleased internal model")
---
There are 4 sisters and 3 brothers.
Here's the logic behind this:
1. Let's say the number of sisters is S and the number of brothers is B.
2. According to the first statement, a girl has as many brothers as sisters, so B = S.
3. Each brother has only half as many brothers and sisters combined. Since there are B brothers, each brother has (B - 1) other brothers.
4. The total number of siblings for each brother is (B - 1) + S = (2B - 1) / 2.
5. Since B = S, we can rewrite this equation as: (2S - 1) / 2.
6. However, since the number of siblings must be a whole number, the denominator 2 must divide evenly into the numerator (2S - 1).
7. The smallest possible value that satisfies this condition is S = 4, because then 2S - 1 = 7, which is divisible by 2.
8. If S = 4, then B = 4 as well. But since the question states that each brother has only half as many brothers and sisters, we need to account for the difference.
9. Therefore, there are actually 3 brothers and 4 sisters in the family.
---
Answer is NOT correct, reasoning is better but it tripped over itself doing the math and ended up with nonsense.
Thoughts:
1. All 3 models make the same mistake in step 2: B = S is not the correct first equation (it's B + 1 = S)
2. tiny messed up step 5 too, that is not the correct second equation, so it decided there is no answer.
3. small gets the right answer by sheer luck, steps 4 and 5 are nonsense and step 6 doesn't follow.
4. medium really tried hard but couldn't get over its initial mistake. its step 8 is very interesting - it realized it messed up the equation. but it fudged the answer in the wrong direction. | 2023-12-14T21:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/18ij7wx/fun_with_mistralmedium/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ij7wx | false | null | t3_18ij7wx | /r/LocalLLaMA/comments/18ij7wx/fun_with_mistralmedium/ | false | false | self | 21 | null |
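For reference, a brute-force check of the intended answer under the usual reading of the riddle (each brother has half as many brothers as sisters); the "brothers and sisters" phrasing in the prompt is ambiguous, which is surely part of why the models stumble:

```python
for sisters in range(1, 10):
    for brothers in range(1, 10):
        girl_ok = brothers == sisters - 1           # a girl sees equal numbers of brothers and sisters
        brother_ok = (brothers - 1) * 2 == sisters  # a brother sees half as many brothers as sisters
        if girl_ok and brother_ok:
            print(sisters, "sisters,", brothers, "brothers")  # -> 4 sisters, 3 brothers
```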
LLMs on a small/portable laptop? | 4 | I can kind of run llama-2-13b.Q2_K.gguf on my Thinkpad X270 running Linux, if I'm very patient. I'm not very patient. But I like the small size of the laptop, and having access to a local, though very blurry, copy of The Internet. I saw someone recommended the Lenovo Legion which has a 4090 (and ports! why do they always have to remove all the ports), but it's 16" – are there any smaller ones out there that are not too horribly slow with LLMs? Should I prioritise getting one with a GPU, or just look for a specific CPU and ensure I have enough RAM?
Or do people with small laptops just connect to eGPU's over thunderbolt?
I saw https://www.tomshardware.com/laptops/lenovos-new-thinkpad-x1-carbon-is-smaller-lighter-and-ai-ready but there's been like no mention of NPU/Meteor Lake on this sub so I feel wary. | 2023-12-14T20:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/18iigbz/llms_on_smallportable_laptop/ | fietsrad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iigbz | false | null | t3_18iigbz | /r/LocalLLaMA/comments/18iigbz/llms_on_smallportable_laptop/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'GmA2iPdndIipX7y4lhLDGEzIU4dGojswOJSHGJozsmM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?width=108&crop=smart&auto=webp&s=eece7926beb71f2fc2b22b8b83a7ee969413c148', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?width=216&crop=smart&auto=webp&s=51e90f84e3893dcf429e70f9d0a3fcdcb70d2151', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?width=320&crop=smart&auto=webp&s=c01739685933cf2bbb1bc80db0a57970ba09a496', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?width=640&crop=smart&auto=webp&s=c76115ee3bb022d41e634c4ad745dccc888ca633', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?width=960&crop=smart&auto=webp&s=15e93d7eddde74c4d80499f1d8103637a017f76e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?width=1080&crop=smart&auto=webp&s=2cb0ee82d91fa5883ef0d3d4f1565c7c930ebc08', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Qpg23byt9q_Fet56t0qaBCm5BH43CE-vAt0GAMipdfk.jpg?auto=webp&s=c2cd546f3fb09f2943ff65a9e182833683036476', 'width': 1200}, 'variants': {}}]} |
What is the average percentage drop in accuracy for quantized models compared to the original ones? | 1 | [removed] | 2023-12-14T20:30:58 | https://www.reddit.com/r/LocalLLaMA/comments/18ihwkj/what_is_the_average_percentage_drop_in_accuracy/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ihwkj | false | null | t3_18ihwkj | /r/LocalLLaMA/comments/18ihwkj/what_is_the_average_percentage_drop_in_accuracy/ | false | false | default | 1 | null |
Are there any models that can write an actually good short story? (Not ChatGPT's story style) | 36 | ChatGPT stories always have the same sort of style,
- uplifting endings where everyone realizes the true meaning of friendship and that the shared experience they went with their companions has changed them all forever, etc. etc.
- lots of "purple prose" and ridiculous metaphors
- insistence on literally everything being resolved and neatly wrapped up in the span of four paragraphs
- 80%+ of stories start with "once upon a time" (though you can fix this with a good prompt)
LLAMA / mixtral have the same problem.
I tried Google Gemini Pro and it has the same issues, though it does have a slightly more interesting writing style.
I even tried the mixtral base model and, once I figured out a prompt that would output a proper story, it still ended it with the same style of happy ending.
So yeah, are there any models that can actually write a good story? | 2023-12-14T20:21:24 | https://www.reddit.com/r/LocalLLaMA/comments/18ihoty/are_there_any_models_that_can_write_an_actually/ | JealousAmoeba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ihoty | false | null | t3_18ihoty | /r/LocalLLaMA/comments/18ihoty/are_there_any_models_that_can_write_an_actually/ | false | false | self | 36 | null |
What's Your Game-Changing Use Case for Local LLMs? | 5 |
I'm curious about practical applications of running Large Language Models locally. If you've integrated local LLMs into your workflow, I’d like to hear about it.
- What tasks are you using local LLMs for?
- How has it impacted your work?
- Any notable benefits or challenges?
- Why are you running it locally instead of using OpenAI or others?
Share your experiences and highlight the real-world value of local LLMs. | 2023-12-14T20:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/18ihnsz/whats_your_gamechanging_use_case_for_local_llms/ | LuminaUI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ihnsz | false | null | t3_18ihnsz | /r/LocalLLaMA/comments/18ihnsz/whats_your_gamechanging_use_case_for_local_llms/ | false | false | self | 5 | null |
I finetuned Phi-1.5 and Phi-2 on MathInstruct using MAmmoTH but... | 8 | The models were saved as pytorch\_model\_fsdp.bin. I tried to convert this to a pytorch\_model.bin by unwrapping it with this code: [https://pastebin.com/TGCuR5S6](https://pastebin.com/TGCuR5S6)
I have to manually change the model embedding dim before loading the state dict, and I think this is a bad sign.
Does anyone know if these models are recoverable? For now I have them posted on HF if anyone wants to take a look: [codys12/MAmmoTH-phi-1\_5](https://huggingface.co/codys12/MAmmoTH-phi-1_5) and [codys12/MAmmoTH/phi-2](https://huggingface.co/codys12/MAmmoTH-phi-2) | 2023-12-14T20:19:10 | https://www.reddit.com/r/LocalLLaMA/comments/18ihmwv/i_finetuned_phi15_and_phi2_on_mathinstruct_using/ | codys12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ihmwv | false | null | t3_18ihmwv | /r/LocalLLaMA/comments/18ihmwv/i_finetuned_phi15_and_phi2_on_mathinstruct_using/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
Any good quantized local LLMs out there? | 1 | I opened a new sub called r/canyoufeeltheAGI (it is restricted so you can't post or comment there) as a gag sub akin to r/SubredditSimulator but then I realized that I can get a local LLM to use reddit's thread system to generate piecemeal code in an attempt to stack one code solution on top of the other until the project is completed.
And if this works then I can open it up to the public for posting but I'm still testing this. The idea is the following:
\- A user makes a post requesting code to be generated to complete a given task, like build a website or something, and if it is an appropriate request the post will be approved and a bot will start generating a piece of code.
\- Then the bot will reply to its previous comment generating another piece of code while remaining aware of the objective, and it will keep replying to its most recent comment in order to stack one piece of code on top of the other until its done.
Easier said than done, of course. We all know what could go wrong during this process but I will iron out those details later. What I DO need is a good, lightweight model that can accomplish this. I am currently using mistral-7B-Instruct because openhermes2.5 was being a little incoherent but that's because I'm using the quantized versions of those models.
What do you guys recommend? | 2023-12-14T20:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/18ihm67/any_good_quantized_local_llms_out_there/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ihm67 | false | null | t3_18ihm67 | /r/LocalLLaMA/comments/18ihm67/any_good_quantized_local_llms_out_there/ | false | false | self | 1 | null |
Customer support Assistant | 3 | Hey,
I have tried to build a conversational retrieval chain API with custom data and memory for the 3 prior conversation turns, using Mistral 7B Instruct and LangChain, but the API would answer one time and then just gives empty responses with no errors. I could not make any progress!
Can someone please point me to the best conversation bot examples/tutorials that use good open-source chat models? | 2023-12-14T19:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/18igwud/customer_support_assistant/ | prime_danger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18igwud | false | null | t3_18igwud | /r/LocalLLaMA/comments/18igwud/customer_support_assistant/ | false | false | default | 3 | null |
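A minimal self-contained shape of such a chain, in case it helps debugging (model path, data file, and parameters are placeholders; LangChain APIs as of late 2023). A common cause of "answers once, then empty" is the accumulated history plus retrieved chunks overflowing the model's context window, so `k` and `n_ctx` are kept deliberately small here.

```python
from langchain.llms import LlamaCpp
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import TextLoader

docs = TextLoader("my_custom_data.txt").load()  # placeholder data file
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
vectordb = FAISS.from_documents(chunks, HuggingFaceEmbeddings())

llm = LlamaCpp(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096, temperature=0.2)
memory = ConversationBufferWindowMemory(k=3, memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectordb.as_retriever(search_kwargs={"k": 3}),
    memory=memory,
)
print(chain({"question": "What does the document say about X?"})["answer"])
```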
Could I serialize models in current "state"? | 1 | I am trying to write an API so that I can have some models running on my server and call those APIs from my device. Some of my custom functions have a somewhat long pre-prompt "You're an expert on blah-blah be mindful of blah-blah", which takes a considerable time for the model to read (minutes on CPU, seconds on GPU on just a 7b model). But it is always the same on that API.
So I'm wondering, the model, after reading this pre-prompt, must be in some kind of "state". Could I serialize this state and just load the model with it each time my API is called? | 2023-12-14T19:42:28 | https://www.reddit.com/r/LocalLLaMA/comments/18igt3v/could_i_serialize_models_in_current_state/ | Woitee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18igt3v | false | null | t3_18igt3v | /r/LocalLLaMA/comments/18igt3v/could_i_serialize_models_in_current_state/ | false | false | self | 1 | null |
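Yes, several runtimes expose exactly that. A sketch with llama-cpp-python's state save/restore (method names are from that library; model path and prompts are placeholders) so the fixed pre-prompt is only processed once:

```python
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

PRE_PROMPT = "You're an expert on X. Be mindful of Y. ...\n"
llm.eval(llm.tokenize(PRE_PROMPT.encode("utf-8")))  # pay the pre-prompt cost once at startup
state = llm.save_state()                            # snapshot of the evaluated context / KV cache

def answer(question: str) -> str:
    llm.load_state(state)  # rewind to "just finished reading the pre-prompt"
    out = llm(PRE_PROMPT + question, max_tokens=256)  # matching prefix should be reused, not re-read
    return out["choices"][0]["text"]

print(answer("What is the capital of France?"))
```

llama.cpp's CLI has a similar idea in its prompt-cache feature, and server frontends often cache the longest matching prompt prefix automatically.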
Could I serialize models in current "state"? | 3 | I am trying to write an API so that I can have some models running on my server and call those APIs from my device. Some of my custom functions have a somewhat long pre-prompt "You're an expert on blah-blah be mindful of blah-blah", which takes a considerable time for the model to read (minutes on CPU, seconds on GPU on just a 7b model). But it is always the same on that API.
So I'm wondering, the model, after reading this pre-prompt, must be in some kind of "state". Could I serialize this state and just load the model with it each time my API is called? | 2023-12-14T19:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/18igt3q/could_i_serialize_models_in_current_state/ | Woitee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18igt3q | false | null | t3_18igt3q | /r/LocalLLaMA/comments/18igt3q/could_i_serialize_models_in_current_state/ | false | false | self | 3 | null |
Does anyone have a Chat interface on local JavaScript web page accepting your Mistral Medium API key? | 4 | I am tempted to use the curl API to have it write one for me, but I am pretty sure someone here must already have one? | 2023-12-14T19:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/18igsrv/does_anyone_have_a_chat_interface_on_local/ | Competitive_Travel16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18igsrv | false | null | t3_18igsrv | /r/LocalLLaMA/comments/18igsrv/does_anyone_have_a_chat_interface_on_local/ | false | false | self | 4 | null |
What could you do with 1 TB of vram? | 1 | Let's say, hypothetically, if one were to acquire a Radeon Pro SSG, such as one with 1 TB of vram, what's the most you could do with it? | 2023-12-14T19:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/18iglzj/what_could_you_do_with_1_tb_of_vram/ | absouluteUNIT3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iglzj | false | null | t3_18iglzj | /r/LocalLLaMA/comments/18iglzj/what_could_you_do_with_1_tb_of_vram/ | false | false | self | 1 | null |
Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling! | 1 |
> We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.
Any ideas what Depth Up-Scaling could be? | 2023-12-14T19:31:39 | https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 | 1mprobability | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18igkhl | false | null | t3_18igkhl | /r/LocalLLaMA/comments/18igkhl/meet_107b_solar_elevating_performance_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eYgOzVBAuNvqoa0iO_7oesPDF4IMahqO_VDioDBGPX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=108&crop=smart&auto=webp&s=e67807fe20978ba6a4ca59164cb6a22881c5891d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=216&crop=smart&auto=webp&s=017c11d85e98026f2e77c1ab62e93312439ecd82', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=320&crop=smart&auto=webp&s=f14dcac8487f2d9a9488dbbf2eb6bbbbfa367b25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=640&crop=smart&auto=webp&s=da0def96cb9a4ab285c7de23459d869044a4bcaf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=960&crop=smart&auto=webp&s=b9d3d5c618a695ee31b7043d6f84401ce3ad6d56', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=1080&crop=smart&auto=webp&s=7babdb361b6b420680d3254f56ce12ba20fd027a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?auto=webp&s=5bb52165a7d9afbe56de435e4c73da11b20c3ea4', 'width': 1200}, 'variants': {}}]} | |
What could you do with 1 TB of vram? | 80 | Let's say, hypothetically, if one were to acquire a Radeon Pro SSG, such as one with 1 TB of vram, what's the most you could do with it? | 2023-12-14T19:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/18igkgb/what_could_you_do_with_1_tb_of_vram/ | absouluteUNIT3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18igkgb | false | null | t3_18igkgb | /r/LocalLLaMA/comments/18igkgb/what_could_you_do_with_1_tb_of_vram/ | false | false | self | 80 | null |
LoRA adapters for Mixtral (to run on 4.5 GB) | 6 | Just thinking out loud here, would like some thoughts.
1) If each of the 8xFF layers of Mixtral were one shared layer plus 8 LoRA adapters, we would get compression way down. Could it work? And if so:
2) Can we create LoRAs out of the differences between the 8xFF layers and apply this?
This would get it down to ballpark 7.3B parameters, then quantize it to 4-bit, which it might tolerate especially well, and we're at around 4.5GB of VRAM requirements. And that's with a model that reportedly beats Llama 70B.
Thoughts re 1): Without extra trickery I'm assuming we'd need to load in two 2xFF to run one adapter on each, or otherwise mess with how it runs and run one layer twice with switched LoRAs each time (so, not in parallel). It's a trade-off that could be made for speed or VRAM size if it works. With 2xFF we're at 6.5GB VRAM requirements.
Thoughts re: how,
If we have two models A and B, and the weights w\_A and w\_B, a LoRA applied to transform B to A would be:
w\_A = w\_B + (L\_w1 x L\_w2)
Where L\_w1 & w2 are the LoRA weights.Subtracting w\_B to get L, the LoRA matrix, we could do SVD on L to decompose it and take the top ranking vectors to create our LoRA weights. The change between the different models could always be an issue, to create such a low rank version of it.I suppose one could either create an average FF and then apply 8 LoRAs to that, or take the FF layer that is closest to all other FF layers of each FF network to get one FF network with the least difference and use 7 LoRAs on that. | 2023-12-14T19:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/18iggr8/lora_adapters_for_mixtral_to_run_on_45_gb/ | gmork_13 | self.LocalLLaMA | 2023-12-14T19:51:21 | 0 | {} | 18iggr8 | false | null | t3_18iggr8 | /r/LocalLLaMA/comments/18iggr8/lora_adapters_for_mixtral_to_run_on_45_gb/ | false | false | self | 6 | null |
Phi-2 on Mac M1/metal? | 2 | Hi the HF repo for phi-2 has instructions of CUDA specifically so I don't know if it can run on M1/metal so has anyone tested out phi-2 on metal/M1? If so please tell me how to run it? | 2023-12-14T19:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/18ifx4k/phi2_on_mac_m1metal/ | Shoddy_Vegetable_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ifx4k | false | null | t3_18ifx4k | /r/LocalLLaMA/comments/18ifx4k/phi2_on_mac_m1metal/ | false | false | self | 2 | null |
What format for notes? | 3 | I have setup mixtral 8x7b on my local machine with llama.cpp and I am using txt.ai to create a local RAG pipeline from my obsidian notes.
After playing around some, it seems like the model handles atomic note lines the best (all information in a single line).
This is different from how I have structured notes in the past, where a topic has a file (e.g. "bob dole" has a file and inside is a note "- likes raisins").
I am thinking of reformatting my notes so each line is a complete thought ("bob dole likes raisins") but before I expend the effort I wanted a reality check.
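Concretely, the line-level version I'm picturing with txtai would look something like this (just a sketch; I'm assuming the dict-style config and the tuple-based index() API here):
    from txtai.embeddings import Embeddings
    notes = [
        "bob dole likes raisins",                      # one atomic fact per line
        "bob dole ran for president in 1996",
        "obsidian vaults are just folders of markdown",
    ]
    embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2", "content": True})
    embeddings.index([(i, text, None) for i, text in enumerate(notes)])
    for hit in embeddings.search("what does bob dole like?", 2):
        print(hit["text"], hit["score"])               # lines to stuff into the Mixtral prompt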
Is there a better way to do this? Am I on the right track? | 2023-12-14T18:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/18ift5y/what_format_for_notes/ | tabgok | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ift5y | false | null | t3_18ift5y | /r/LocalLLaMA/comments/18ift5y/what_format_for_notes/ | false | false | self | 3 | null |
Mistral Medium- context length? | 1 | I just got API access to Mistral Medium. Anyone know its context length? My use cases require >50k tokens, and I'm hoping to create my own GPT4, Claude2, Mistral MoE. Thanks! | 2023-12-14T18:36:36 | https://www.reddit.com/r/LocalLLaMA/comments/18ifcda/mistral_medium_context_length/ | Berberis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ifcda | false | null | t3_18ifcda | /r/LocalLLaMA/comments/18ifcda/mistral_medium_context_length/ | false | false | self | 1 | null |
Best way to create a content generation LLM based on my own data | 17 | I have textual data with more than 1,000,000 words in multiple txt files. I'd like to use LLMs to create blog posts for our blog based on those texts. What would be the best way to achieve this with LLMs? I'm ok with the writing style of existing LLMs, even chatgpt could do the trick if I could give it that much data.
As far as I understand, fine-tuning an LLM is more about changing the output style than about increasing the LLM's knowledge. RAGs seem nice, but I don't know if it's doable with the large amount of data we have. | 2023-12-14T18:29:06 | https://www.reddit.com/r/LocalLLaMA/comments/18if67a/best_way_to_create_a_content_generation_llm_based/ | Blancoo21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18if67a | false | null | t3_18if67a | /r/LocalLLaMA/comments/18if67a/best_way_to_create_a_content_generation_llm_based/ | false | false | self | 17 | null |
Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process | 1 | 2023-12-14T18:25:42 | https://new.nsf.gov/news/notice-to-the-research-community-on-ai | TroyHernandez | new.nsf.gov | 1970-01-01T00:00:00 | 0 | {} | 18if3kv | false | null | t3_18if3kv | /r/LocalLLaMA/comments/18if3kv/notice_to_research_community_use_of_generative/ | false | false | 1 | {'enabled': False, 'images': [{'id': '2-UWez1vxvNwPBM2lr-CXspa46VfoItcS2PAVxrCWLs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/PNyhAkkgPsPjnH0gLPBFB3FLLH5QdnpUB-1iz5xawyw.jpg?width=108&crop=smart&auto=webp&s=f3e7af5c44c641cc3feff69c0f0b3fe746d12ecb', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/PNyhAkkgPsPjnH0gLPBFB3FLLH5QdnpUB-1iz5xawyw.jpg?auto=webp&s=093c73eab89ae4c59e50dded7c214a88b82b6ddf', 'width': 200}, 'variants': {}}]} | ||
GPT 4.5 Metadata details appear in API | 1 | 2023-12-14T18:18:19 | https://twitter.com/hokeone/status/1735327219164037520?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1735327219164037520%7Ctwgr%5Ea64e9961bb0d47f8b47f63e916a144ead78fc0be%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.redditmedia.com%2Fmediaembed%2F18ibrwp%3Fresponsive%3Dtrueis_nightmode%3Dfalse | super-helper | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18iexhm | false | {'oembed': {'author_name': 'stefano', 'author_url': 'https://twitter.com/hokeone', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="zxx" dir="ltr"><a href="https://t.co/6QdxMv3AKp">pic.twitter.com/6QdxMv3AKp</a></p>— stefano (@hokeone) <a href="https://twitter.com/hokeone/status/1735327219164037520?ref_src=twsrc%5Etfw">December 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/hokeone/status/1735327219164037520', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18iexhm | /r/LocalLLaMA/comments/18iexhm/gpt_45_metadata_details_appear_in_api/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'B7g-9YB6Ixdkv4SbR9_ElNK1BwKIVd32Rtug8yeDtrM', 'resolutions': [{'height': 34, 'url': 'https://external-preview.redd.it/QBkqE5vmcamqVxv9yRNAuud9UX2aqQVXp_ssZa4YDIQ.jpg?width=108&crop=smart&auto=webp&s=ae1f898d6c89f4128e73e1fba837b10e34a783aa', 'width': 108}], 'source': {'height': 45, 'url': 'https://external-preview.redd.it/QBkqE5vmcamqVxv9yRNAuud9UX2aqQVXp_ssZa4YDIQ.jpg?auto=webp&s=9da7e39620f0c04d40246686b90395d16f115c5b', 'width': 140}, 'variants': {}}]} | ||
fblgit/una-xaberius-34b-v1beta is caught cheating on leaderboard. UNA cybertron juanako is just a lie. | 92 | [https://www.reddit.com/r/LocalLLaMA/comments/18ejwm1/what\_is\_fblgituna\_unified\_neural\_alignment\_looks/](https://www.reddit.com/r/LocalLLaMA/comments/18ejwm1/what_is_fblgituna_unified_neural_alignment_looks/)
I suspected last week that the author of this model ([juanako.ai](https://juanako.ai/) / Xavier M. / fblgit) was a liar, and 5 days later I got further confirmation from the HF discussion here: [https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard/discussions/444](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/444)
So there is no such methodology, UNA is nothing but cheating.
HF staff ran contamination detection tool on this model, and it's got a 99% chance of being contaminated on GSM8K.
https://preview.redd.it/ggwixbrxwa6c1.png?width=2233&format=png&auto=webp&s=62afbdfd237b3b3dbc9f072dc4f6a1cbd984c570 | 2023-12-14T18:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/18iewem/fblgitunaxaberius34bv1beta_is_caught_cheating_on/ | PuzzledTeam5961 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iewem | false | null | t3_18iewem | /r/LocalLLaMA/comments/18iewem/fblgitunaxaberius34bv1beta_is_caught_cheating_on/ | false | false | 92 | {'enabled': False, 'images': [{'id': 'E52zxkgIQ4erLeto4AhQQwIDH-4S-D7pyYegSKhUC3Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=108&crop=smart&auto=webp&s=11874a81c84f6ac8d6af4cd02b91b763e5aa9e63', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=216&crop=smart&auto=webp&s=1ee0c4b18aaf0e93ccd3a971fe32081f3406766e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=320&crop=smart&auto=webp&s=09e4a196c01af4a45549ae99925050677754aa38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=640&crop=smart&auto=webp&s=91671ecc5e75078769d60d263cec150a53d2e1f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=960&crop=smart&auto=webp&s=110dfb0cf008fe17de74f6a0ee93dc4ce43c816f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=1080&crop=smart&auto=webp&s=d07b37d24ed9a61e19b740b70435460429e029f6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?auto=webp&s=6c87ba8708477672a80d7d7c8238b6dbff1f8265', 'width': 1200}, 'variants': {}}]} | |
Any ideas why LM Studio Mixtral 8bit is giving me worse answers than Perplexity labs Mixtral? | 11 | Also it seems it's possible to configure the number of experts used for each token, but I don't see that configuration option.
The example task I give it is a JSON document with multiple elements, and I want the AI to filter them and show only the elements with accuracy > 0.99. On the Perplexity website it always gets this right, but locally it does not; I tried 8-bit and 6-bit.
I've tried changing temperature and other parameters such as system prompt with slight improvements but still behind. | 2023-12-14T17:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/18iehlv/any_ideas_why_lm_studio_mixtral_8bit_is_giving_me/ | MidnightSun_55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iehlv | false | null | t3_18iehlv | /r/LocalLLaMA/comments/18iehlv/any_ideas_why_lm_studio_mixtral_8bit_is_giving_me/ | false | false | self | 11 | null |
Candle Phi WASM Demo - run Microsoft Phi-2 q4k in web browser | 13 | 2023-12-14T17:40:23 | https://huggingface.co/spaces/radames/Candle-phi1-phi2-wasm-demo?model=phi_2_0_q4k | niutech | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18ie1r0 | false | null | t3_18ie1r0 | /r/LocalLLaMA/comments/18ie1r0/candle_phi_wasm_demo_run_microsoft_phi2_q4k_in/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'fuO00O7qX3jXuj6gAg_bAx3TaTHxTk8hT0pE_5GOrbQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=108&crop=smart&auto=webp&s=7e1c3b47e73de83455e9dcd72b43d9122a26fa40', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=216&crop=smart&auto=webp&s=95d7ffe8488d1dddf2030653c88a6fe0ca27a7f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=320&crop=smart&auto=webp&s=72ddb67b3fc2657258b1ce6d004c4b39cbb21eb2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=640&crop=smart&auto=webp&s=2bbc86f083b4b2241aeec135cd69406cca0543c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=960&crop=smart&auto=webp&s=f1cd76e16222da106693c8ca1bb3764b75395942', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=1080&crop=smart&auto=webp&s=78eff36df2613f91e26e09e9e4d0287b9c3c490c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?auto=webp&s=7fac58494c77b916008a47a70e334e788b8ef212', 'width': 1200}, 'variants': {}}]} | ||
Mistral API: Initial test results | 12 | Just got access. Pretty neat results so far:
> Bob is in the living room. He walks to the kitchen, carrying a cup. He puts a ball in the cup and carries the cup to the bedroom. He turns the cup upside down, then walks to the garden. He puts the cup down in the garden, then walks to the garage. Where is the ball?
Model | Prompt | Completion | Total | Response
---|---|----|----|----
mistral-tiny | 74 | 65 | 139 | The ball is in the bedroom, as that is where Bob left the cup upside down. However, the question asks for the current location of the ball, not where Bob last left it. Based on the given information, the ball should now be in the garden, as that is where Bob put down the empty cup.
mistral-small | 74 | 44 | 118 | The ball is in the bedroom. When Bob put the ball in the cup in the kitchen, he carried it to the bedroom and turned the cup upside down, likely causing the ball to fall out in the bedroom.
mistral-medium | 80 | 74 | 154 | The ball is in the bedroom. After Bob put the ball in the cup in the kitchen, he carried it to the bedroom and turned the cup upside down, so the ball should have fallen out in the bedroom. He then left the cup in the garden and went to the garage, but the ball was not mentioned to have moved with him.
3.5-turbo | 72 | 7 | 79 | The ball is in the bedroom. *
gpt-4-1106-preview | 72 | 56 | 128 | The ball is in the bedroom because Bob turned the cup upside down in the bedroom after placing the ball in it. When he turned the cup upside down, the ball would have fallen out. He then walked to the garden without the ball since it was no longer in the cup.
*3.5-turbo got this question wrong several times
Cost Breakdown:
Model | Input | Output | Cost
---|---|----|----
tiny | 0.14€ / 1M tokens | 0.42€ / 1M tokens | 0.00003766€
small | 0.6€ / 1M tokens | 1.8€ / 1M tokens | 0.0001236€
medium | 2.5€ / 1M tokens | 7.5€ / 1M tokens | 0.000755€
IMO mistral-small seems very promising and appears to be Mistral's drop-in competitor to GPT-3.5 Turbo. If my pre-coffee math is correct, GPT-3.5 Turbo's output is 1.818€ / 1M tokens compared to Mistral's 1.8€ / 1M. Additionally, even though GPT-3.5 Turbo got the answer right, it gave me inconsistent and vague results.
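For anyone who wants to poke at it, here is a minimal way to reproduce these calls (a sketch; it assumes the chat completions endpoint and an OpenAI-style request/response shape):
    import requests
    API_KEY = "..."  # key from the Mistral platform
    def ask(model: str, prompt: str) -> str:
        r = requests.post(
            "https://api.mistral.ai/v1/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        )
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]
    print(ask("mistral-small", "Bob is in the living room. [...] Where is the ball?"))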
P.S. The models are definitely censored. You can pass an argument safe_mode = false but it still refuses to answer things like "how do I cook meth":
> I strongly advise against engaging in illegal activities, such as cooking meth. Not only is it dangerous and illegal, but it can also lead to serious health problems, legal consequences, and harm to yourself and others. If you or someone you know is struggling with addiction, I urge you to seek help from a professional healthcare provider or a trusted support network. They can provide guidance, resources, and treatment options to help address the issue. | 2023-12-14T17:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/18idcwt/mistral_api_initial_test_results/ | Conutu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18idcwt | false | null | t3_18idcwt | /r/LocalLLaMA/comments/18idcwt/mistral_api_initial_test_results/ | false | false | self | 12 | null |
We could theoretically compress Mixtral down to 4GB without much degradation | 218 | [https://github.com/ggerganov/llama.cpp/issues/4445](https://github.com/ggerganov/llama.cpp/issues/4445)
[https://arxiv.org/abs/2310.16795](https://arxiv.org/abs/2310.16795)
https://preview.redd.it/xu02xfl7la6c1.png?width=1742&format=png&auto=webp&s=4168598f9b11dd8a5e2d5a0ecb8347d7c6e96f00 | 2023-12-14T17:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/18id9e0/we_would_theorically_compress_mixtral_up_to_4gb/ | Wonderful_Ad_5134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18id9e0 | false | null | t3_18id9e0 | /r/LocalLLaMA/comments/18id9e0/we_would_theorically_compress_mixtral_up_to_4gb/ | false | false | 218 | {'enabled': False, 'images': [{'id': 'bri4lOOz9KOTnlKfouPgFxU3AUF1nFX9EOESuHn9xDE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?width=108&crop=smart&auto=webp&s=a5dd88dab7bab3177a2a207e533ec43cb222a605', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?width=216&crop=smart&auto=webp&s=444806b9845b4ac655869cdd858d18e62d8e6137', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?width=320&crop=smart&auto=webp&s=f6b24e5357c23592daf1fc5ae8d9e32a0cf71777', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?width=640&crop=smart&auto=webp&s=34e4bb741fb9ec26c05006670194f610f7915ef5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?width=960&crop=smart&auto=webp&s=d092493068f6908e39433bf01b5a467d8a4f7ad0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?width=1080&crop=smart&auto=webp&s=39a873ca8bcee45cf3a25ce41aeea1ee1543f3eb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Lv_-f0Hq4KeIGOubtK_-bJKAChAcDU_V2n_WOm_nCL4.jpg?auto=webp&s=b6bdd7d7277fe37856781488203ab6a2e2a1b795', 'width': 1200}, 'variants': {}}]} | |
Experimenting with LLM-assisted scoring: eval results | 23 | Hi llama friends, we are an early-stage AI startup. I wanted to share some results we obtained using our LLM-assisted batch evaluation tool.
Below I present results for three evaluation tasks:
* Highlight extraction from news articles
* MMLU benchmark using scoring model
* JSON schema validation
The batch eval tool is no-code, hosted, and free to use. See bottom of the post for details.
# Scoring method
Based on works by \[[1](https://arxiv.org/abs/2305.13711), [2](https://arxiv.org/abs/2310.17631), [3](https://arxiv.org/abs/2310.08491)\], we developed a scoring method based on a scoring model. The scoring model evaluates inferences based on properties defined by the user in plain english. The scoring model rates on a Likert scale from 1 to 5.
## Alignment with human scores
We built a dataset of questions to score three properties: Groundedness, Playfulness, Creativity. Each example in the dataset is designed for one of these properties (not all three).
We prompted a selection of models with the dataset (Llama 2 7B q4, Llama 2 70B, and GPT-4), then asked humans (team members and friends) to score each response on a scale of 1-5. Each example was scored by three different humans.
We then asked the scoring model to evaluate the same dataset, each example on its corresponding property. We optimized the prompt to maximize correlation with the average of the human scores, using the Pearson coefficient to measure that correlation.
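For concreteness, the correlation itself is just the standard Pearson computation; a minimal sketch with made-up score arrays (not our actual data):
    from scipy.stats import pearsonr
    human_avg = [4.3, 2.0, 3.7, 5.0, 1.3, 4.0]   # mean of the three human scores per example
    model_out = [4,   2,   4,   5,   2,   4]     # scoring model's 1-5 rating per example
    r, p = pearsonr(human_avg, model_out)
    print(f"Pearson r = {r:.3f} (p = {p:.3f})")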
## Guiding the scoring model
Following work by \[[3](https://arxiv.org/abs/2310.08491)\], we found that using chain-of-though reasoning and providing a scoring rubric further helps the scoring model.
A scoring rubric lays out what each individual score (1 to 5) means concretely. See prompts below.
# Highlight extraction from news articles
## Dataset
We use [this dataset](https://huggingface.co/datasets/Hamza-Ziyard/CNN-Daily-Mail-Sinhala/tree/main) of 2000 CNN articles which we converted to JSONL.
## Prompt
We ask models to extract exactly three highlights from the articles, and we ask that each highlight be at most two sentences long. Here is the prompt:
Please extract the top three highlights from the following article.
Each highlight should contain no more than two sentences.
<article>{{article}}</article>
We use the same prompt for all models. More work could be done to optimize the prompt for each model.
## Scoring properties
We define three properties to evaluate:
**Groundedness**: this evaluates how much the extracted highlights are grounded in the original article.
The score describes how grounded the highlights are in the original article.
The following is the original article:
<article>{{article}}</article>
The following is a grading rubric to use:
1. The highlights are not grounded at all in the article. Most statements in the highlights are not grounded in facts present in the article.
2. The highlights are lightly grounded in the article. Some statements in the highlights are directly stated in the article.
3. The highlights are somewhat grounded in the article. Many statements in the highlights are grounded in the article.
4. The highlights are mostly grounded in the article. Some statements in the highlights are not grounded in the article.
5. The highlights are very grounded in the article. Most or all statements in the highlights are directly stated in the article.
**Formatting**: this evaluates whether exactly three highlights were extracted.
The score describes how many highlights were generated.
The following is a grading rubric to use:
1. Less than three highlights were generated.
3. More than three highlights were generated.
5. Three highlights were generated.
Note that we exploit the 1-5 Likert scale to produce a trinary scale instead.
**Highlight length**: this evaluates whether models followed the instruction about the number of sentences per highlight.
The score describes how many sentences were generated.
The following is a grading rubric to use:
1. Less than three sentences were generated.
2. Three sentences were generated.
3. Four sentences were generated.
4. Five sentences were generated.
5. Six sentences were generated.
## Results
Here are the results:
* [FLAN-T5 XL vs. Llama 2 7B vs. Falcon 7B](https://app.airtrain.ai/public/job/773ccce6-8ed9-46d1-b6b1-a46190af71dd)
* [Llama 2 70B vs. Falcon 40B vs. GPT-3.5 vs GPT 4](https://app.airtrain.ai/public/job/1730cd0f-7813-4d3e-8880-b21b55aa94e4)
* [Mistral 7B](https://app.airtrain.ai/public/job/85ea67bf-56de-41c0-ae45-2b0b7694d73c)
||Avg. Groundedness (1-5)|Formatting (% 5-score)|
|:-|:-|:-|
|FLAN-T5 XL|4.5|65%|
|Llama 2 7B|4.9|98%|
|Falcon 7B|4.5|78%|
|Mistral 7B|4.9|98%|
|Falcon 40B|3.2|31%|
|Llama 2 70B|4.9|98%|
|GPT-3.5 Turbo|4.9|72%|
|GPT-4|4.7|72%|
## Takeaways
* The Llama 2 models perform identically regardless of size
* Falcon performs badly regardless of size
* Llama 2 (all sizes) is on par with GPTs on groundedness, beats GPTs on formatting
# MMLU benchmark
To test our scoring model, we attempted to reproduce MMLU benchmark results.
## Dataset
We used the [MMLU dataset from HuggingFace](https://huggingface.co/datasets/cais/mmlu). We collate all topics into a 10,000 row JSONL dataset with the following schema:
{
"topic": "high school chemistry",
"question": "Chlorine gas reacts most readily with:",
"answer_a": "toluene",
"answer_b": "ethylene",
"answer_c": "ethanoic acid",
"answer_d": "ethane",
"correct_answer": "B"
}
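The collation step is roughly the following (a sketch; it assumes the HF `cais/mmlu` "all" config exposes question/subject/choices/answer fields, with the answer given as a 0-3 index):
    import json
    from datasets import load_dataset
    rows = load_dataset("cais/mmlu", "all", split="test")
    rows = rows.shuffle(seed=0).select(range(10_000))   # downsample to 10,000 rows
    letters = ["A", "B", "C", "D"]
    with open("mmlu_collated.jsonl", "w") as f:
        for row in rows:
            f.write(json.dumps({
                "topic": row["subject"].replace("_", " "),
                "question": row["question"],
                "answer_a": row["choices"][0],
                "answer_b": row["choices"][1],
                "answer_c": row["choices"][2],
                "answer_d": row["choices"][3],
                "correct_answer": letters[row["answer"]],
            }) + "\n")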
## Prompt
We use the same 0-shot prompt for all models:
Here is a question on the topic of {{topic}}.
Question: {{question}}
Which of the following answers is correct?
A. {{answer_a}}
B. {{answer_b}}
C. {{answer_c}}
D. {{answer_d}}
State the letter corresponding to the correct answer.
## Scoring
We define a **Correctness** property as such:
This score describes whether the chatbot selected the correct answer. The correct answer is {{correct_answer}}.
Here is a scoring rubric to use:
1. The chatbot's answer is not {{correct_answer}}, therefore the chatbot is incorrect.
5. The chatbot's answer is {{correct_answer}}, therefore the chatbot is correct.
Note that we use the 1-5 Likert scale to get a binary pass/fail answer.
## Results
* [Llama 2 7B, 13B, 70B](https://app.airtrain.ai/public/job/c2111f63-95bf-44c6-8a8a-ab81f171ff54)
* [Llama 2 7B vs. Falcon 40B vs. GPT-3.5 Turbo vs. GPT-4](https://app.airtrain.ai/public/job/1c661afb-790f-49ab-954f-6d9216b51c57)
||Pass rate (0-shot)|Official pass rate|
|:-|:-|:-|
|Llama 2 7B|47.5%|45.3%|
|Llama 2 13B|54.1%|54.8%|
|Llama 2 70B|61.0%|68.9%|
|Falcon 40B|48.5%|54.1%|
|GPT-3.5 Turbo|61.1%|70.0%|
|GPT-4|73.2%|86.6%|
Official results taken from [here](https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu).
## Takeaways
The scoring model comes close to reproducing the official MMLU benchmark results, and at least maintains the hierarchy. The difference can likely be attributed to 0-shot vs 5-shot prompting, the exact content of the evaluation dataset, as well as the response extraction method.
# Our batch eval tool
All results above were obtained with our batch evaluation tool, [Airtrain.ai](http://airtrain.ai/).
As you saw, we support three evaluations methods:
* LLM-assisted evaluation: user describes the scoring property in natural language and a scoring model rates inferences on a Likert scale.
* JSON schema validation: measure compliance rates based on a user-defined schema using the [JSON Schema](https://json-schema.org/) convention (a minimal illustration follows this list).
* Unsupervised metrics: length (in tokens), density, and compression
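To illustrate the schema validation method, the check boils down to something like this (a sketch with the `jsonschema` package; not necessarily what the tool runs internally):
    import json
    from jsonschema import ValidationError, validate
    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        "required": ["name", "age"],
    }
    def complies(inference_text: str) -> bool:
        try:
            validate(instance=json.loads(inference_text), schema=schema)
            return True
        except (ValidationError, ValueError):
            return False
    print(complies('{"name": "Ada", "age": 36}'))   # True
    print(complies('{"name": "Ada"}'))              # False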
We support 10 models + BYOI:
* Llama 2 7B, 13B, 70B
* FLAN-T5 XL, XXL
* Falcon 7B, 40B
* Mistral 7B
* GPT-3.5 Turbo, GPT-4
* Bring your own inferences
If you want to give the tool a spin, [join our Slack](https://join.slack.com/t/airtrain-ai/shared_invite/zt-252ipfh5j-ljAxJm4P0XCLAZXpOyKH1A) or DM me for an invite.
# References
\[1\] LLM-Eval – [https://arxiv.org/abs/2305.13711](https://arxiv.org/abs/2305.13711)
\[2\] JudgeLM – [https://arxiv.org/abs/2310.17631](https://arxiv.org/abs/2310.17631)
\[3\] Prometheus – [https://arxiv.org/abs/2310.08491](https://arxiv.org/abs/2310.08491) | 2023-12-14T16:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/18id0pa/experimenting_with_llmassisted_scoring_eval/ | neutralino1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18id0pa | false | null | t3_18id0pa | /r/LocalLLaMA/comments/18id0pa/experimenting_with_llmassisted_scoring_eval/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Any good tutorials for setting up Google Gemini's API? | 1 | [removed] | 2023-12-14T16:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/18ibwo4/any_good_tutorials_for_setting_up_google_geminis/ | TheHunter920 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ibwo4 | false | null | t3_18ibwo4 | /r/LocalLLaMA/comments/18ibwo4/any_good_tutorials_for_setting_up_google_geminis/ | false | false | self | 1 | null |
Fine-tuning a model on books | 13 | So as the title states, I want to train an open source (probably Mistral!) model on a few books. I have extracted the text from their PDFs and realized there might be many ways to do this that could result in different effects. The books are collections of essays. Basically, at the end of each essay there is an analysis written by a professional. Then, at the end of the book there is some sort of advice from the authors of the essays, to help readers understand how they used their background to write what they wrote.
I want to train an expert that can spit out similar essays. Would it be beneficial if I somehow connected the advice to each essay to make the model understand the approach taken to write one? Would including the analysis with each essay help the model understand what to do and not to do, given the critiques? Would it have any more effect than just training the model on each individual essay? Within each book there are also chapters. Each chapter's title is a label for the essays within it, such as "On challenges".
Thanks for any advice! | 2023-12-14T15:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/18ibq2i/finetuning_a_model_on_books/ | LetterheadWaste8170 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ibq2i | false | null | t3_18ibq2i | /r/LocalLLaMA/comments/18ibq2i/finetuning_a_model_on_books/ | false | false | self | 13 | null |
case closed? https://github.com/microsoft/promptbase | 63 | 2023-12-14T15:56:46 | LyPreto | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18ibpnx | false | null | t3_18ibpnx | /r/LocalLLaMA/comments/18ibpnx/case_closed_httpsgithubcommicrosoftpromptbase/ | false | false | 63 | {'enabled': True, 'images': [{'id': 'W8Oy_H1JC3QUk8SWoapo8jMZhcUZ78XIjpmrz1D85dI', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?width=108&crop=smart&auto=webp&s=e1c28ca5b849e6a5551ecd7251854db9a081f69a', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?width=216&crop=smart&auto=webp&s=78060415799c4c60a9f559f46ce425ef65df78f7', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?width=320&crop=smart&auto=webp&s=b200ecdbc1745c5949a0382b19d2926e1a802fae', 'width': 320}, {'height': 314, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?width=640&crop=smart&auto=webp&s=0a3dee821de8688feccbe6c6fd57a87b06025103', 'width': 640}, {'height': 472, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?width=960&crop=smart&auto=webp&s=e1d686be38b3d13939c3bb40951a11e431eaaf25', 'width': 960}, {'height': 531, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?width=1080&crop=smart&auto=webp&s=e137c2ae2109b9b44b37dea30fb3cf9b4b57fe63', 'width': 1080}], 'source': {'height': 618, 'url': 'https://preview.redd.it/9z8njpst8a6c1.png?auto=webp&s=b09509268de66108d05ee773ed31f20dbbd81bb8', 'width': 1256}, 'variants': {}}]} | |||
How is Perplexity doing the internet search? | 8 | Perplexity is such a great product, I’m curious as to how they’re able to have their LLM have real-time answers. Is it just web crawlers and RAG? | 2023-12-14T15:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/18iboyt/how_is_perplexity_doing_the_internet_search/ | shafinlearns2jam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iboyt | false | null | t3_18iboyt | /r/LocalLLaMA/comments/18iboyt/how_is_perplexity_doing_the_internet_search/ | false | false | self | 8 | null |
How can I determine my hardware requirements (especially VRAM) for fine-tuning a LLM with a PEFT method? Is there a formula? | 2 | Let's say I have a 13B Llama and I want to fine-tune it with LoRA (rank=32). Can I somehow determine how much VRAM I need to do so?
I reckon it should be something like: Base VRAM for Llama model + LoRA params + LoRA gradients. But I don't know how to determine each of these variables. | 2023-12-14T15:44:19 | https://www.reddit.com/r/LocalLLaMA/comments/18ibg5i/how_can_i_determine_my_hardware_requirements/ | a_fish1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ibg5i | false | null | t3_18ibg5i | /r/LocalLLaMA/comments/18ibg5i/how_can_i_determine_my_hardware_requirements/ | false | false | self | 2 | null |
D: What prompts do you use to evaluate new LLM capabilities? | 22 | What prompts do you use to evaluate new LLM capabilities?
Do you have a series of standard prompts you use to see what an LLM can do?
What things have you found the most useful for overall evaluation?
| 2023-12-14T15:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/18ib3pr/d_what_prompts_do_you_use_to_evaluate_new_llm/ | head_robotics | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ib3pr | false | null | t3_18ib3pr | /r/LocalLLaMA/comments/18ib3pr/d_what_prompts_do_you_use_to_evaluate_new_llm/ | false | false | self | 22 | null |
Best way to do guided generation in your experience? | 3 | I'm currently using Outlines, but I'm looking for other options with wider model support and faster development.
I need to make the model generate JSON responses based on a given schema. And I want to limit the possible options for some JSON fields to be selected from a given list. | 2023-12-14T15:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/18iajgp/best_way_to_do_guided_generation_in_your/ | ammar- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iajgp | false | null | t3_18iajgp | /r/LocalLLaMA/comments/18iajgp/best_way_to_do_guided_generation_in_your/ | false | false | self | 3 | null |
[VIDEO] 6 Ways For Running A Local LLM (aka how to use HuggingFace) | 2 | 2023-12-14T14:39:12 | https://semaphoreci.com/blog/local-llm | tomasfern | semaphoreci.com | 1970-01-01T00:00:00 | 0 | {} | 18ia1wz | false | null | t3_18ia1wz | /r/LocalLLaMA/comments/18ia1wz/video_6_ways_for_running_a_local_llm_aka_how_to/ | false | false | 2 | {'enabled': False, 'images': [{'id': '7bFRWt1RrSstrR_aX43d5HyR_T6IGIYuCZAI4jXZy_U', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?width=108&crop=smart&auto=webp&s=43a3f357a6fb2fbb9973b90ebe8251bdb626d9fb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?width=216&crop=smart&auto=webp&s=279470c639d43ac70b5fa43963444258d7f16c56', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?width=320&crop=smart&auto=webp&s=0fba2853a6ec8a24cb5706bb0cd4c8c1befabc0d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?width=640&crop=smart&auto=webp&s=1c92dc282d2cfa9fd48bc3658f821bedc36d8dac', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?width=960&crop=smart&auto=webp&s=9ace3f867046c33684881d59132e4853e62dcebd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?width=1080&crop=smart&auto=webp&s=8c5f5defb2b94b6b0f3751b09839cf46d36d8188', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Dnv69IJuTDlSzBOHlB5Qi4NQHvOUmrAAPFhTTErkuxQ.jpg?auto=webp&s=389e98c9ca99ffa78069540b0e6cff312104a999', 'width': 1920}, 'variants': {}}]} | ||
How do I get local LLMs to return keywords? | 5 | Before I start, I just want to say that I know that NER is the accepted best method for pure keyword extraction. However, the *type* of keywords I'm generating is more like concept categories, so NER is (afaik) not as well suited to the task.
But so far, Llama2 70b Chat and its derivatives are the only models I can get to reliably output keywords (normal or conceptual). Mistral and the other 13b and lower models will return summaries even if I ask specifically for comma-separated keywords.
Context length is also a key factor, but I can't figure out a practical solution other than RAG. For example, if I give Mistral a 1,000-token article and ask for keywords, it will work some of the time. But if I give it a 2,700-token article, it will choke and return nonsense. And yes, I have tried the Mistral finetunes with supposedly longer context windows.
Here's my prompt (I've tried a few variations of course): "Generate a comma-separated list of keywords for the following article: \`\`\` \[article\] \`\`\`.
I'm using Ollama and, thus, GGUF files with 2x 3090s. Simply using Llama2 70b would work, but it's too slow with GGUF files (I have hundreds of thousands of articles to keyword and categorize), and it also chokes on longer articles.
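For reference, here is roughly the shape of the call through Ollama's REST API (a simplified sketch, not my exact code; the prompt wording is one of the variations I tried):
    import requests
    PROMPT = (
        "Extract exactly five short keywords or concept categories from the article below.\n"
        "Return them as a single comma-separated line and nothing else.\n\n"
        "Article:\n{article}\n\nKeywords:"
    )
    def keywords(article: str, model: str = "mistral") -> list[str]:
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": PROMPT.format(article=article), "stream": False},
        )
        r.raise_for_status()
        return [k.strip() for k in r.json()["response"].split(",") if k.strip()]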
Surely someone has grappled with similar issues, how did you make it work? I could go down the rabbithole and do a model fine-tuning, but that seems fairly daunting. | 2023-12-14T14:12:28 | https://www.reddit.com/r/LocalLLaMA/comments/18i9hj4/how_do_i_get_local_llms_to_return_keywords/ | orrorin6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i9hj4 | false | null | t3_18i9hj4 | /r/LocalLLaMA/comments/18i9hj4/how_do_i_get_local_llms_to_return_keywords/ | false | false | self | 5 | null |
Instead of high censor A.I, why not other approach? Evolution of improve logical Asimov Laws | 1 | [removed] | 2023-12-14T14:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/18i98q1/instead_of_high_censor_ai_why_not_other_approach/ | maxpayne07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i98q1 | false | null | t3_18i98q1 | /r/LocalLLaMA/comments/18i98q1/instead_of_high_censor_ai_why_not_other_approach/ | false | false | self | 1 | null |
GPT-4.5 leaked by OpenAI? | 1 | Is it legit? Source:
https://www.linkedin.com/posts/thomasdordonne_breaking-openai-just-leaked-gpt-45-the-activity-7141036190605959168-d8e3?utm_source=share&utm_medium=member_ios | 2023-12-14T13:43:41 | Shoddy_Vegetable_115 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18i8wim | false | null | t3_18i8wim | /r/LocalLLaMA/comments/18i8wim/gpt45_leaked_by_openai/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-nJxc5LKudwszTrn-dhPxmKNJmtnI3KEpxhkg_Hm-vg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?width=108&crop=smart&auto=webp&s=5895cf0da221927a8b6db474303328367652189c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?width=216&crop=smart&auto=webp&s=6319dc202dc9ee806fb6478c45d09dfd0c4e3d39', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?width=320&crop=smart&auto=webp&s=f8f8de4056e9da30edb3964671651fb773abf216', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?width=640&crop=smart&auto=webp&s=fac0be992c619bbfa9022736815dcb779e9828b5', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?width=960&crop=smart&auto=webp&s=aa6518802de7a3e9eebc93dc05889ce8a8c0768b', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?width=1080&crop=smart&auto=webp&s=266eaf192fcc88e02c7f4160f20d6cac6a49d0f4', 'width': 1080}], 'source': {'height': 2778, 'url': 'https://preview.redd.it/nr6rhf48l96c1.jpg?auto=webp&s=4561e8921009df574b86456d6aec6b25b26aceb9', 'width': 1284}, 'variants': {}}]} | ||
gpt4all returns number instead of text | 1 | I am trying out gpt4all for the first time locally.
I am just using the code from their [https://docs.gpt4all.io/gpt4all\_python.html](https://docs.gpt4all.io/gpt4all_python.html)
    from gpt4all import GPT4All
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
    # note: max_tokens=3 caps the reply at roughly three tokens, so even a correct answer gets cut short
    output = model.generate("The capital of France is ", max_tokens=3)
    print(output)
however it returns numbers like 2.1 or 100 am I doing something wrong? | 2023-12-14T13:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/18i8pu5/gpt4all_returns_number_instead_of_text/ | 31NF4CHZ4H73N | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i8pu5 | false | null | t3_18i8pu5 | /r/LocalLLaMA/comments/18i8pu5/gpt4all_returns_number_instead_of_text/ | false | false | self | 1 | null |
Mixtral 7B, 8x7B: enlightenment? | 1 | With my M1 8GB MacBook Pro, I was able to run Llama 2 7B with llama.cpp, though not so easily.
I'd like to try Mixtral but saw there are two versions: 7B and 8x7B, the latter of which is compared to GPT-3.5.
Is 8x7B the equivalent of a 56B parameters model or is it just larger but not more expensive for interference ? | 2023-12-14T13:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/18i8gar/mixtral_7b_8x7b_enlightenment/ | CJAgln | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i8gar | false | null | t3_18i8gar | /r/LocalLLaMA/comments/18i8gar/mixtral_7b_8x7b_enlightenment/ | false | false | self | 1 | null |
gemini-pro censorship | 1 | 2023-12-14T13:02:30 | Timotheeee1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18i85jz | false | null | t3_18i85jz | /r/LocalLLaMA/comments/18i85jz/geminipro_censorship/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ARFtCOnJlNtfe4Xaa57LEgZtCXl_hOwb0a6gfUTdMXc', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/a13ju20td96c1.png?width=108&crop=smart&auto=webp&s=b60f753b1146f36651d7ba99173d949eea275afd', 'width': 108}, {'height': 235, 'url': 'https://preview.redd.it/a13ju20td96c1.png?width=216&crop=smart&auto=webp&s=a348cd7dd942db613a08d2e467a1e25a1832852c', 'width': 216}, {'height': 348, 'url': 'https://preview.redd.it/a13ju20td96c1.png?width=320&crop=smart&auto=webp&s=494bef820ffcfacf1132b9ef91aaa9aa2d04eeb0', 'width': 320}, {'height': 697, 'url': 'https://preview.redd.it/a13ju20td96c1.png?width=640&crop=smart&auto=webp&s=5f5ea3c16514e49787de3de23f49369fb46528ea', 'width': 640}], 'source': {'height': 838, 'url': 'https://preview.redd.it/a13ju20td96c1.png?auto=webp&s=4eca80ad0841f58c207242fe43742dae39fad9e1', 'width': 769}, 'variants': {}}]} | |||
In LM Studio, is there a way to get details as to why an LLM 'crashed'? | 1 | I use LM Studio and local LLMs ahead of a presentation on AI + medicine I'm giving soon. I have 8GB of VRAM on a 3070, and I load 7B medical models (Meditron, for example) with quantization set to 4M. They should (and do) fit entirely in VRAM since they weigh only 4.5GB. Single thread, as recommended here.
For some reason, regardless of which model I use, LM Studio shows that the model crashed and has to be reloaded after only a few prompts. Everything is set to default.
It would be nice to be able to get a 'reason' as to why it failed. Do you know what could be the most common reasons? Thank you for your help! | 2023-12-14T13:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/18i840i/in_lm_studio_is_there_a_way_to_get_details_as_to/ | memory_moves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i840i | false | null | t3_18i840i | /r/LocalLLaMA/comments/18i840i/in_lm_studio_is_there_a_way_to_get_details_as_to/ | false | false | self | 1 | null |
Site with the ranking of 13b models | 1 | Is there a website that updates the ranking of the best RP models in categories 7b, 13b or 20b? | 2023-12-14T12:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/18i7yyg/site_with_the_ranking_of_13b_models/ | LonleyPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i7yyg | false | null | t3_18i7yyg | /r/LocalLLaMA/comments/18i7yyg/site_with_the_ranking_of_13b_models/ | false | false | self | 1 | null |
Advanced Prompt Engineering Techniques: Airline Example | 1 | [removed] | 2023-12-14T12:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/18i7wj5/advanced_prompt_engineering_techniques_airline/ | MiNeves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i7wj5 | false | null | t3_18i7wj5 | /r/LocalLLaMA/comments/18i7wj5/advanced_prompt_engineering_techniques_airline/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LSGUrs26uY7cTzSgkDkEwLOQL-tclMWlEiBP3rdrjSc', 'resolutions': [{'height': 141, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?width=108&crop=smart&auto=webp&s=5b4da191aa095b7dacc029607d1f8eb4f79737c6', 'width': 108}, {'height': 282, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?width=216&crop=smart&auto=webp&s=5ab0c3f6ddd0dccb10ffbdaed451818cc1f3befb', 'width': 216}, {'height': 418, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?width=320&crop=smart&auto=webp&s=6f1b0095f3021de01c590539c1c35b5c1a897c76', 'width': 320}, {'height': 836, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?width=640&crop=smart&auto=webp&s=7a694f4ab78fb0553029e1e97370b5efe3d4d08b', 'width': 640}, {'height': 1255, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?width=960&crop=smart&auto=webp&s=d0598c3362108086d02a020212d313c1e214f2a5', 'width': 960}, {'height': 1412, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?width=1080&crop=smart&auto=webp&s=02e011f5dec33ae0aa90d977f045af42d23e7733', 'width': 1080}], 'source': {'height': 3269, 'url': 'https://external-preview.redd.it/zE0OIyWeBA_5q_2qO3hw2l3uD8jQvIOyxYtcr5Jup4o.jpg?auto=webp&s=5f2eeb9c117a6700a8abdc12e8f860651b479f28', 'width': 2500}, 'variants': {}}]} |
Llama2 speeds | 1 | [removed] | 2023-12-14T12:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/18i7lxn/llama2_speeds/ | ramzeez88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i7lxn | false | null | t3_18i7lxn | /r/LocalLLaMA/comments/18i7lxn/llama2_speeds/ | false | false | self | 1 | null |
Just got access to Mistral's API | 13 | Maybe others have too or will soon, but if not I definitely suggest registering with a professional sounding email address, i.e. void ones which contain the characters hack3r and pr0n. | 2023-12-14T12:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/18i7dcz/just_got_access_to_mistrals_api/ | WaterdanceAC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i7dcz | false | null | t3_18i7dcz | /r/LocalLLaMA/comments/18i7dcz/just_got_access_to_mistrals_api/ | false | false | self | 13 | null |
What is the simplest available local interface to use OpenAI or a local LLM, with history and function calling? | 2 | Hello everyone, I'm not super experienced in the world of LLMs; I wrote and ran some code with LangChain, but I lack a more in-depth understanding of it.
I was looking for the simplest way to set up a UI with history that would work with the OpenAI API first, and maybe, when it's easier, with other open-source LLMs, and use function calling to do document or info retrieval. BUT I want the retrieval to be coded and run locally, with local documents and with a structure I would write myself.
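To make it concrete, the kind of flow I have in mind with the OpenAI Python client (a rough sketch; search_my_documents is just a stand-in name for my own local retrieval code, and it assumes the model actually decides to call the tool):
    import json
    from openai import OpenAI
    client = OpenAI()
    def search_my_documents(query: str) -> str:
        return "...top matching passages from my local documents..."  # my own retrieval logic goes here
    tools = [{
        "type": "function",
        "function": {
            "name": "search_my_documents",
            "description": "Search my local documents and return relevant passages.",
            "parameters": {"type": "object",
                           "properties": {"query": {"type": "string"}},
                           "required": ["query"]},
        },
    }]
    messages = [{"role": "user", "content": "What did my notes say about invoices?"}]
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, tools=tools)
    call = resp.choices[0].message.tool_calls[0]                 # assumes a tool call came back
    result = search_my_documents(**json.loads(call.function.arguments))
    messages += [resp.choices[0].message,
                 {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    print(final.choices[0].message.content)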
i was wondering if there were any already available projects to do so, seen that im not really happy with the retrival in gpt 4, i wanted to experiment more, but keep the function calling in the equation | 2023-12-14T11:53:01 | https://www.reddit.com/r/LocalLLaMA/comments/18i705v/what_is_the_simplest_available_local_interface_to/ | Ok_Dog_5421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i705v | false | null | t3_18i705v | /r/LocalLLaMA/comments/18i705v/what_is_the_simplest_available_local_interface_to/ | false | false | self | 2 | null |
Best (new) LLM model format to run on Apple Silicon? | 3 | With model formats like GGUF/AWQ/EXL2, and recently discussed formats with optimized quantization like HQQ and QuIP#, I am not really sure what is currently considered the best option on Mac M1/M2/M3 machines.
Are there better (faster / lower VRAM requirements) formats out there for Mac's?
Have been using GGML and later GGUF since the very beginning but since I only have apple based hardware with semi capable GPUs I'm wondering what my next experiment besides GGUF could be. I thought Exllama v2 looked promising but it seems it is not compatible with Apple Silicon / Metal.
Most current inference software only runs GGUF on macs. If any of you have good experiences with other formats I would love to hear about it!
P.S. - Have around 60GB of VRAM available | 2023-12-14T11:45:30 | https://www.reddit.com/r/LocalLLaMA/comments/18i6vta/best_new_llm_model_format_to_run_on_apple_silicon/ | Zestyclose_Yak_3174 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i6vta | false | null | t3_18i6vta | /r/LocalLLaMA/comments/18i6vta/best_new_llm_model_format_to_run_on_apple_silicon/ | false | false | self | 3 | null |
What are you using your local LLM for? | 1 | I just downloaded the new SOLAR10.7 model and it's awesome. But I'm trying to think of use cases for it. I'd love to fine tune it as well. I'm just confused what I could use a local LLM for that wouldn't be simpler with ChatGPT3.5.
What are you all using your local LLMs for? | 2023-12-14T11:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/18i6joh/what_are_you_using_your_local_llm_for/ | PMMEYOURSMIL3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i6joh | false | null | t3_18i6joh | /r/LocalLLaMA/comments/18i6joh/what_are_you_using_your_local_llm_for/ | false | false | self | 1 | null |
Local Stable Diffusion | 2 | Not the place to ask, I know.
Are there any good stable diffusion models that can run on mobile devices with ~4GB ram | 2023-12-14T11:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/18i6adi/local_stable_diffusion/ | Jsus-Sus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i6adi | false | null | t3_18i6adi | /r/LocalLLaMA/comments/18i6adi/local_stable_diffusion/ | false | false | self | 2 | null |
How does fine-tuning Mixtral work? | 12 | Hi, if I understood the MoE architecture, there are multiple specialist models (one for code, one for math, etc.) and the one that fits the input best is used.
But why doesn't fine-tuning all the specialist models make them all more "generalist"? Why wouldn't it just be a sort of merge between two fine-tuned Mistrals?
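For what it's worth, here is my rough mental model of the routing in code (per token, a small gate picks the top-2 of 8 experts; the shapes and the Linear "experts" are made-up stand-ins, so correct me if this is wrong):
    import torch
    hidden = torch.randn(10, 4096)                              # 10 tokens, model dim 4096
    gate = torch.nn.Linear(4096, 8, bias=False)                 # router: one score per expert
    experts = [torch.nn.Linear(4096, 4096) for _ in range(8)]   # stand-ins for the 8 FF experts
    logits = gate(hidden)                                       # (10, 8)
    weights, picked = torch.topk(logits, k=2, dim=-1)           # top-2 experts per token
    weights = torch.softmax(weights, dim=-1)
    out = torch.zeros_like(hidden)
    for t in range(hidden.shape[0]):                            # each token gets its own pair of experts
        for w, e in zip(weights[t], picked[t]):
            out[t] += w * experts[int(e)](hidden[t])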
I just want to learn, things so fast it's easy to be left behind 😅 | 2023-12-14T11:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/18i68na/how_does_fine_tuning_mixtral_works/ | M0ULINIER | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i68na | false | null | t3_18i68na | /r/LocalLLaMA/comments/18i68na/how_does_fine_tuning_mixtral_works/ | false | false | self | 12 | null |
Your Mixtral Experiences So Far | 29 | Alright, I got it working in my llama.cpp/llama-cpp-python chat tool and ran into two major problems that I hope somebody can help me figure out.
1. It seems that it takes way too long to process a longer prompt before starting the inference (which itself has a nice speed) - in my case it takes around 39 (!) seconds before the prompt gets processed, then it spits out tokens at around \~8 tokens/sec. For comparison, a 70B model will only take around 9 seconds until producing around 1.5 tokens/sec on my end (RTX 3060 12 GB).
2. After only a short while it will start producing gibberish when just talking in a chat-mode. I'm using top\_k = 100, top\_p = 0.37, temp = 0.87, repeat\_penalty = 1.18 - these settings work very well for all my other models. But here they suck.
Here is an example (MxRobot is Mixtral in this case). And if you're wondering... yes, that Youtube Video exists, I'm not making this up.
https://preview.redd.it/2666ifmym86c1.jpg?width=1066&format=pjpg&auto=webp&s=781d1f75ce200aeda4b08e9221ef31530389f593 | 2023-12-14T10:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/18i5u6d/your_mixtral_experiences_so_far/ | psi-love | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i5u6d | false | null | t3_18i5u6d | /r/LocalLLaMA/comments/18i5u6d/your_mixtral_experiences_so_far/ | false | false | 29 | null | |
Mixtral idea. 3x7B. All focused on one subject. | 2 | How cool would it be to have 3 out of 3 experts, all knowledgeable on the same subject. It probably would be more efficient and possibly smarter? 🤔 | 2023-12-14T10:31:12 | https://www.reddit.com/r/LocalLLaMA/comments/18i5rv7/mixtral_idea_3x7b_all_focused_on_one_subject/ | Majestical-psyche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i5rv7 | false | null | t3_18i5rv7 | /r/LocalLLaMA/comments/18i5rv7/mixtral_idea_3x7b_all_focused_on_one_subject/ | false | false | self | 2 | null |
Windows AI Studio - Offical Local Finetuning GUI in Windows machines! | 37 | 2023-12-14T10:13:22 | https://github.com/microsoft/windows-ai-studio | phoneixAdi | github.com | 1970-01-01T00:00:00 | 0 | {} | 18i5imw | false | null | t3_18i5imw | /r/LocalLLaMA/comments/18i5imw/windows_ai_studio_offical_local_finetuning_gui_in/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'Hh3XgcoYOnoroqXdnGOEdx91MXczBke6a9w0FBh9PnE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?width=108&crop=smart&auto=webp&s=f87dec6ecd9deb7f5d5d876b6d86cda2502ae849', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?width=216&crop=smart&auto=webp&s=084d03149534b270ebdbfb4e2aa42c82127c1cbb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?width=320&crop=smart&auto=webp&s=d4e6d93ac3afabbdd28c9e6572a3b15a8989eacb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?width=640&crop=smart&auto=webp&s=c883a4b078d3ea15c638537bae7e7da917af5d25', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?width=960&crop=smart&auto=webp&s=871beae29aabbeb63611a8fa4c0c6944b6179ab0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?width=1080&crop=smart&auto=webp&s=015247385055a91c14717f2bf9d788aa72dda398', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qltn2_-ZVca2FToBNQacpD7BwEACNx7qeX_ZGWmk9WA.jpg?auto=webp&s=f1c7b8c586bcd81b1a82d118b25af4641964d40e', 'width': 1200}, 'variants': {}}]} | ||
To RAG or not to RAG | 46 | **TL;DR:** how can my chatbot decide when it is necessary to retrieve context, and when it should answer based solely on the chat history?
Beloved redditors,
So we have a functioning RAG Chatbot. If you activate “Search Mode”, the system will retrieve query-relevant chunks and a prompt of the type “Answer the query {query} based on this context {context}” will be attached to the list of messages and sent to the LLM. If search mode is off, the query is attached to the list of messages as is (without retrieving context).
Retrieving context regardless of the query worsens the answer quality by a lot, e.g. doing semantic search on our specialized dataset for a user query like “summarize that” (referring to a previous message of the chatbot) would bring terrible chunks into the conversation.
The “Search Mode” works well in theory, but we cannot rely on users knowing when to activate and deactivate it. Hence the question: how can we automate that? I researched various options, but I wanted to hear about your personal experiences before I dump a day into it:
* **Semantic similarity threshold:** let's do retrieval on all queries, but if the top chunk is below 0.X similarity, we turn “Search Mode” off. The problem here is that although the top-k chunks are almost always good, the scores vary a lot from one query to another depending on their length and complexity.
* **Supervised query classification:** I may build a synthetic dataset of queries that should use RAG and queries that should not use it. Then train a simple ML algorithm like an SVM. Finally, pass user queries through the model and decide whether to use “Search Mode” or not. The problem here is that I am not sure the training data can be solid. Maybe the same query should or should not use “Search Mode” depending on the previous messages.
* **RAG Query Generation:** the good guys of Cohere have a mode in their chat completion that takes the user prompt and fabricates queries for retrieval that are better for semantic search than the actual user prompt. It may generate one/multiple queries or none at all, meaning that no context should be retrieved ([see this link](https://txt.cohere.com/rag-chatbot/), around middle of the page, “Generate queries”). This is very interesting, but I have no idea how it is implemented. I guess I could do it through an LLM with a prompt like “Take this user prompt and generate 1 or 0 queries to retrieve chunks from our database of whatever”, but that would mean another API call and more waiting time.
Ideas?
Thanks for reading!! | 2023-12-14T09:57:32 | https://www.reddit.com/r/LocalLLaMA/comments/18i5ahe/to_rag_or_not_to_rag/ | madway99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i5ahe | false | null | t3_18i5ahe | /r/LocalLLaMA/comments/18i5ahe/to_rag_or_not_to_rag/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'o7Dosh9cTSgHs5fYow3Bq4otthpg-q3a4Nqdtg78jUw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?width=108&crop=smart&auto=webp&s=4bd6b55fccca33dda41d8cf0f15820cb9eaf900d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?width=216&crop=smart&auto=webp&s=d4591f93101d00d4d25f3d42317552d2a6fd816f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?width=320&crop=smart&auto=webp&s=1529f27224203f7730fd7b4ccc8605eca7b599b9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?width=640&crop=smart&auto=webp&s=6b6cd8ab8d73b54de47cf02721c17de15137d1dc', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?width=960&crop=smart&auto=webp&s=6895b5e60aa670d90411d10c834f876eb94c125d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?width=1080&crop=smart&auto=webp&s=6c7792133f9b4882493731981c8a486e4ff5ece2', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/5oK9zc2JrJRfaoJ6u65hVIkqvcTq-WMKxpe95j5uJnk.jpg?auto=webp&s=76347d15765b6b7719d5d1c85403722cbd02f547', 'width': 1200}, 'variants': {}}]} |
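A minimal sketch of the similarity-threshold idea from the list above, assuming a sentence-transformers embedder; the model name, the top-3 cut and the 0.35 threshold are placeholders that would need tuning on your own data:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder embedder and threshold; in a real system the chunks and their
# scores would usually come straight from your vector store's top-k search.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
SIM_THRESHOLD = 0.35

def should_retrieve(query: str, chunk_texts: list[str]) -> tuple[bool, list[str]]:
    """Retrieve for every query, but only keep the context if the best chunk
    clears a similarity threshold; otherwise fall back to chat-only mode."""
    q_emb = embedder.encode(query, convert_to_tensor=True)
    c_emb = embedder.encode(chunk_texts, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, c_emb)[0]              # (num_chunks,)
    top_scores, top_idx = scores.topk(min(3, len(chunk_texts)))
    if top_scores[0].item() < SIM_THRESHOLD:
        return False, []                                 # "Search Mode" off
    return True, [chunk_texts[i] for i in top_idx.tolist()]
```

The same embeddings could double as features for the supervised-classification option, e.g. a small SVM trained on query vectors labeled "needs retrieval" vs "chat only".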
Has anyone successfully run vision models on the Apple M2 chip with 16GB RAM? | 1 | I've been using LM Studio to prototype some things quickly before I move to cloud resources for production workloads. This works great even with the underpowered MacBook Air I have been using as my daily driver. Today I tried some OCR workflows with the new vision capability in LM Studio and it just times out/hangs (says it's processing endlessly for hours), even with Apple Metal enabled. Other non-vision models run fine. The models easily fit in the 16GB of RAM. Is there something else I need to do to get these to run, or do I need to use a proper GPU? | 2023-12-14T09:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/18i562a/has_anyone_successfully_run_vision_models_on_the/ | rorykoehler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i562a | false | null | t3_18i562a | /r/LocalLLaMA/comments/18i562a/has_anyone_successfully_run_vision_models_on_the/ | false | false | self | 1 | null |
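As an aside, here is a hedged sketch of running a LLaVA-style vision model directly through llama-cpp-python with Metal offload; the file names and image path are placeholders, and this is an alternative experiment rather than a fix for the LM Studio hang:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Placeholder file names: you need both the LLaVA GGUF and its CLIP projector.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llava-v1.5-7b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,
    n_gpu_layers=-1,   # offload everything to Metal on Apple Silicon
    logits_all=True,   # required by the LLaVA chat handler
)

resp = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/receipt.png"}},
            {"type": "text", "text": "Transcribe all text visible in this image."},
        ],
    }]
)
print(resp["choices"][0]["message"]["content"])
```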
microsoft / LLMLingua: To speed up LLMs' inference and enhance LLM's perceive of key information, compress the prompt and achieve up to 20x compression with minimal performance loss. | 1 | What's LLMLingua? LLMLingua balls lmao | 2023-12-14T09:04:06 | https://github.com/microsoft/LLMLingua | StewArtMedia_Nick | github.com | 1970-01-01T00:00:00 | 0 | {} | 18i4kb5 | false | null | t3_18i4kb5 | /r/LocalLLaMA/comments/18i4kb5/microsoft_llmlingua_to_speed_up_llms_inference/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'gHZMFGAshj7Z6zsnSQoMn6VJfQtP6Vz7gxE7TxY6zWU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?width=108&crop=smart&auto=webp&s=23bb9605bc83a32e3f64d380944cdee5297750f4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?width=216&crop=smart&auto=webp&s=2b7f5a821710c60fb56253d88cd6fb092eabd0b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?width=320&crop=smart&auto=webp&s=e22130c8bde955e2ac35f5473dbe9ca5cd63087f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?width=640&crop=smart&auto=webp&s=cd1b5022c5df56e7dcf686774c8221e13c33fd2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?width=960&crop=smart&auto=webp&s=f5d32f37b91dea7b42f4d5360af1a7c4ec1c3c0c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?width=1080&crop=smart&auto=webp&s=d8c709ad7ccaac313a3d1d16a4d21c361d56dc68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_BdP_Y_9VGazXPFJ-e9nZhs3I_anV2-eE6vThAsTMV4.jpg?auto=webp&s=a2577d90412420efdb22ca23e63c734c762218e2', 'width': 1200}, 'variants': {}}]} | |
A first merged 10.7B model with Solar | 18 | This model uses the SLERP method to merge two of the best models on the OpenLLM Leaderboard as of 14 Dec: SOLAR-10.7B-Instruct-v1.0 and our latest Pandora-10.7B model. The result? It improves chat capabilities and the GSM8K score over the original model.
Model: [janhq/Solar-10.7B-SLERP · Hugging Face](https://huggingface.co/janhq/Solar-10.7B-SLERP)
GGUF: Updating | 2023-12-14T08:16:12 | https://www.reddit.com/r/LocalLLaMA/comments/18i3xmi/a_first_merged_107b_model_with_solar/ | Own_Procedure_8866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i3xmi | false | null | t3_18i3xmi | /r/LocalLLaMA/comments/18i3xmi/a_first_merged_107b_model_with_solar/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'kA7es_T5rzI6FwOBEhVRRdc4YUSBR36aRusv6M_n2hQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?width=108&crop=smart&auto=webp&s=996239fa1913e6f369e3832e4dd8e53c69508228', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?width=216&crop=smart&auto=webp&s=aae0d4117246d38ca13d47eb338fb49c6db3574e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?width=320&crop=smart&auto=webp&s=890723234427479cc3e74ac7c9dc8f32c42816f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?width=640&crop=smart&auto=webp&s=412bfbab53d37f3070519395c3ffd83251f5ce5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?width=960&crop=smart&auto=webp&s=5d7110f661452ec2439b508299fdd003b7f49547', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?width=1080&crop=smart&auto=webp&s=71487b074570c4e901269e6d6fb12aa0841c6614', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zIyrVUpsvmNPEXOGCNBkxY6TXi5Dbn0TSRDaU0Nm0so.jpg?auto=webp&s=3981febdef811ca8dfe0801b80b5b75e68931da9', 'width': 1200}, 'variants': {}}]} |
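For intuition, a toy sketch of what SLERP does when merging two checkpoints of the same architecture: each pair of weight tensors is interpolated along the arc between them rather than linearly. This is an illustration only, not the exact recipe behind this model:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))   # angle between directions
    if omega.abs() < 1e-4:                                    # nearly parallel: plain lerp
        merged = (1 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1 - t) * omega) * a_flat
                  + torch.sin(t * omega) * b_flat) / torch.sin(omega)
    return merged.reshape(a.shape).to(a.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Apply SLERP parameter-wise to two checkpoints with identical architectures."""
    return {k: slerp(sd_a[k], sd_b[k], t) for k in sd_a}
```

In practice, merge tools apply this layer-wise and often use different interpolation factors for attention and MLP blocks rather than a single global t.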
A first merged 10.7B model with Solar. | 1 | [deleted] | 2023-12-14T08:14:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18i3wvd | false | null | t3_18i3wvd | /r/LocalLLaMA/comments/18i3wvd/a_first_merged_107b_model_with_solar/ | false | false | default | 1 | null | ||
Where can I host models and pay per usage? | 3 | Hey guys, is there a place I can host my own fine-tuned open source model and pay per thousand tokens with an API? I know Anyscale kinda does this but they've only got Llama currently | 2023-12-14T08:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/18i3w6a/where_can_i_host_models_and_pay_per_usage/ | Leadership_Upper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i3w6a | false | null | t3_18i3w6a | /r/LocalLLaMA/comments/18i3w6a/where_can_i_host_models_and_pay_per_usage/ | false | false | self | 3 | null |
Besides curiosity, what do you use local LLMs for? | 28 | I'm interested in seeing what people's use cases are. I imagine privacy and lenience or absence of guardrails are likely primary considerations.
I see potential for use as an offline resource. For instance if your WiFi broke down, an LLM could help you troubleshoot it but then again people typically have phone data as a back up. Another use case I envision is a phone-sized model that is fine-tuned for survivalist knowledge for anyone venturing in the wild where there is no reception. | 2023-12-14T08:06:02 | https://www.reddit.com/r/LocalLLaMA/comments/18i3siw/besides_curiosity_what_do_you_use_local_llms_for/ | RedditPolluter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i3siw | false | null | t3_18i3siw | /r/LocalLLaMA/comments/18i3siw/besides_curiosity_what_do_you_use_local_llms_for/ | false | false | self | 28 | null |
Lost in the LLM Labyrinth: Looking For Guidance on Evaluation Tools! | 7 | Fellow AI enthusiasts, I'm knee-deep in the fascinating world of Large Language Models (LLMs) and eager to assess their capabilities. But the sheer variety of evaluation tools has me feeling like Alice tumbling down the rabbit hole!
I'm particularly interested in:
* **Accuracy & Factuality:** How well do LLMs handle factual queries and avoid misinformation?
* **Reasoning & Logic:** Can they follow complex arguments and draw sound conclusions?
* **Creativity & Fluency:** How adept are they at generating different creative text formats, like poems, code, scripts, musical pieces, etc.?
* **Bias & Fairness:** Are their outputs free from harmful stereotypes and discriminatory language?
I've heard whispers of tools like:
* **Deepchecks:** A platform for benchmarking LLMs on various NLP tasks.
* **SuperGLUE:** Suite of language understanding benchmarks focusing on real-world tasks.
* **Copernican AI:** Analyzes LLM outputs for potential biases and fairness issues.
But the options seem endless! What tools do you use for your evaluation? | 2023-12-14T08:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/18i3rd3/lost_in_the_llm_labyrinth_looking_for_guidance_on/ | DwaywelayTOP | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i3rd3 | false | null | t3_18i3rd3 | /r/LocalLLaMA/comments/18i3rd3/lost_in_the_llm_labyrinth_looking_for_guidance_on/ | false | false | self | 7 | null |
Gemini Pro is quite lenient | 1 | [removed] | 2023-12-14T07:28:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18i39j8 | false | null | t3_18i39j8 | /r/LocalLLaMA/comments/18i39j8/gemini_pro_is_quite_lenient/ | false | false | default | 1 | null | ||
ehartford/dolphin-2.5-mixtral-8x7b · Hugging Face | 74 | 2023-12-14T07:01:38 | https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b | StewArtMedia_Nick | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18i2w6o | false | null | t3_18i2w6o | /r/LocalLLaMA/comments/18i2w6o/ehartforddolphin25mixtral8x7b_hugging_face/ | false | false | 74 | {'enabled': False, 'images': [{'id': 'xgIFz1fUGd6JouKLJX_PA7Md58NjH5XimgdMCZQreIY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?width=108&crop=smart&auto=webp&s=834956df8cc8f1e6a12d911aa86bd330aca8bb0b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?width=216&crop=smart&auto=webp&s=6013af070dbcdfc4ea2048e153fe026b8d13ea1a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?width=320&crop=smart&auto=webp&s=3c92a77b5ee07e01e9bd3afc19056643d1090c84', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?width=640&crop=smart&auto=webp&s=369fee5e9ef41fcac55431f443410907219bb7a2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?width=960&crop=smart&auto=webp&s=a697507f821a1497e8c4980eb12f80640decc74d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?width=1080&crop=smart&auto=webp&s=40bda603783796904f3a3e7859812811add97c83', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/G4BugJEpnNOWKVC98jdQsQ0YbOAdHbJnYIoTAsLC7Bw.jpg?auto=webp&s=a798f26473df5b1940d48b17b482aacb32b792be', 'width': 1200}, 'variants': {}}]} | ||
Mixtral gets even better by just adding an expert. | 177 | 2023-12-14T06:34:22 | https://twitter.com/sbeastwindy/status/1735185274475524333?t=r5RVn27ptFe2SjI4zMKTTw&s=19 | Eastwindy123 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18i2h4c | false | {'oembed': {'author_name': 'Srinivas Billa', 'author_url': 'https://twitter.com/sbeastwindy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">+1 expert = -1 perplexity??<br>Mixtral instruct gets even better by consulting 3 experts instead of 2 <a href="https://t.co/EdsfMqBFV0">pic.twitter.com/EdsfMqBFV0</a></p>— Srinivas Billa (@sbeastwindy) <a href="https://twitter.com/sbeastwindy/status/1735185274475524333?ref_src=twsrc%5Etfw">December 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/sbeastwindy/status/1735185274475524333', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18i2h4c | /r/LocalLLaMA/comments/18i2h4c/mixtral_gets_even_better_by_just_adding_an_expert/ | false | false | 177 | {'enabled': False, 'images': [{'id': 'SX8GbQsMLex5OTxq5N7BjKpL1-FzI7FblaPoU2IKDpU', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/cc6Y_Nyyhcv6Y0gHw9dkhz1nlMUaC5NW0KRogRwv4Is.jpg?width=108&crop=smart&auto=webp&s=6cf1efd297c702c8a47723afeb5687f64edba68c', 'width': 108}], 'source': {'height': 84, 'url': 'https://external-preview.redd.it/cc6Y_Nyyhcv6Y0gHw9dkhz1nlMUaC5NW0KRogRwv4Is.jpg?auto=webp&s=f42e218af5545e8d3d410ac84c4cefe7c9428393', 'width': 140}, 'variants': {}}]} | ||
Using a local LLM as a user interface | 5 | I'm looking for examples of using a locally running LLM (Mixtral, LLaMA, whatever) as the user interface for a system. For example, let's say I have a home automation system, and I can turn any light in my house on or off by sending a specific JSON blob. How would I wire in an LLM to be the user interface for this system?
The GPT-4 API has function calling, and I see Trelis is fine-tuning open source models on HuggingFace with some kind of function calling dataset. Is that the way to go? How common is function calling in open source models?
My first thought was that I'd do a multishot prompt, give the LLM examples of how to translate commands into JSON. Are there any examples of doing an approach like that? | 2023-12-14T06:32:30 | https://www.reddit.com/r/LocalLLaMA/comments/18i2g6f/using_a_local_llm_as_a_user_interface/ | heathkit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i2g6f | false | null | t3_18i2g6f | /r/LocalLLaMA/comments/18i2g6f/using_a_local_llm_as_a_user_interface/ | false | false | self | 5 | null |
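One hedged sketch of the few-shot idea against a local OpenAI-compatible server (llama.cpp server, Ollama, etc.); the endpoint URL, model name and JSON schema are made-up placeholders:

```python
import json
from openai import OpenAI

# Assumes a local OpenAI-compatible server; URL, model name and schema are illustrative.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

SYSTEM = (
    "You translate home-automation commands into JSON. "
    'Respond with JSON only, like {"device": "<name>", "action": "on" or "off"}.'
)
FEW_SHOT = [
    {"role": "user", "content": "Turn on the kitchen light"},
    {"role": "assistant", "content": '{"device": "kitchen_light", "action": "on"}'},
    {"role": "user", "content": "Kill the lights in the bedroom"},
    {"role": "assistant", "content": '{"device": "bedroom_light", "action": "off"}'},
]

def command_to_json(user_text: str) -> dict:
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "system", "content": SYSTEM}, *FEW_SHOT,
                  {"role": "user", "content": user_text}],
        temperature=0.0,
    )
    return json.loads(resp.choices[0].message.content)

print(command_to_json("please switch off the hallway lamp"))
```

If the backend is llama.cpp, GBNF grammars are another option for constraining generation so the output is always parseable JSON instead of relying on the few-shot examples alone.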
Seeking Recommendations for Tools That Autogenerate Software Projects | 1 | Hello everyone!
​
I'm on a quest to find the best tools available for automatically generating software projects. Currently, I've been using Cursor, which has a feature called "New AI project"; it's decent, but I'm curious whether there are more options out there.
​
Has anyone come across any particularly efficient or innovative tools that can help in generating a software project?
Any recommendations or experiences you've had with such tools would be greatly appreciated.
Thanks in advance for your insights! | 2023-12-14T06:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/18i21iu/seeking_recommendations_for_tools_that/ | yachty66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i21iu | false | null | t3_18i21iu | /r/LocalLLaMA/comments/18i21iu/seeking_recommendations_for_tools_that/ | false | false | self | 1 | null |
I'd just like to know if this specific RuntimeError will affect my choice of AI for the worst (like, does it turn dumb enough that I'm better off picking a smaller model?) | 2 | 2023-12-14T06:05:02 | Switchblade1080 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18i20m8 | false | null | t3_18i20m8 | /r/LocalLLaMA/comments/18i20m8/id_just_like_to_know_if_this_specific/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'Nxd_3pCcNFpIxZUmb0BiAGSH0NJniYlmfbhfOByyWBg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/goore23x776c1.png?width=108&crop=smart&auto=webp&s=357fd1d5799f5f5935d9569009fcda1f2b5d2c05', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/goore23x776c1.png?width=216&crop=smart&auto=webp&s=b26f1b329e81bfa8e8ad9aaa771865c1ac78fae8', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/goore23x776c1.png?width=320&crop=smart&auto=webp&s=dc6a3705fc940e5add0ff1b3505a9281f9017a91', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/goore23x776c1.png?width=640&crop=smart&auto=webp&s=ddec09ed5ce33cbd0650bd38918d6deb6f138d35', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/goore23x776c1.png?width=960&crop=smart&auto=webp&s=b6929b4f5da575de3341b460c91a4547ee08f36a', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/goore23x776c1.png?width=1080&crop=smart&auto=webp&s=7c32a70cbd7e62a7706bce33a4aaeb0d63688251', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/goore23x776c1.png?auto=webp&s=18c4834c1d744a33e9f3f3246522cf7692ac4484', 'width': 1920}, 'variants': {}}]} | |||
Any chance mistral open-sources their Mistral-small model? | 2 | They announced the API endpoint but didn't release the model. | 2023-12-14T05:39:29 | https://www.reddit.com/r/LocalLLaMA/comments/18i1le2/any_chance_mistral_opensources_their_mistralsmall/ | aue_sum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i1le2 | false | null | t3_18i1le2 | /r/LocalLLaMA/comments/18i1le2/any_chance_mistral_opensources_their_mistralsmall/ | false | false | self | 2 | null |