| title (string, 1 to 300 chars) | score (int64, 0 to 8.54k) | selftext (string, 0 to 41.5k chars) | created (timestamp[ns], 2023-04-01 to 2026-03-04, nullable) | url (string, 0 to 878 chars) | author (string, 3 to 20 chars) | domain (string, 0 to 82 chars) | edited (timestamp[ns], 1970-01-01 to 2026-02-19) | gilded (int64, 0 to 2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646 to 1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33 to 82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4 to 213 chars, nullable) | ups (int64, 0 to 8.54k) | preview (string, 301 to 5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Meta - Open sourcing a new and improved Code Llama | 105 | 2024-01-29T16:17:23 | https://www.facebook.com/zuck/posts/were-open-sourcing-a-new-and-improved-code-llama-including-a-larger-70b-paramete/10115471700125721/ | bratao | facebook.com | 1970-01-01T00:00:00 | 0 | {} | 1adz2vn | false | null | t3_1adz2vn | /r/LocalLLaMA/comments/1adz2vn/meta_open_sourcing_a_new_and_improved_code_llama/ | false | false | default | 105 | null | |
Help running models in colab | 2 | Hello
This is probably not even the right place to ask this, but I don't know where else to ask. I sadly do not have the hardware needed to host dolphin-mixtral (this one: [https://huggingface.co/TheBloke/dolphin-2.7-mixtral-8x7b-GGUF](https://huggingface.co/TheBloke/dolphin-2.7-mixtral-8x7b-GGUF)), and decided to buy a Colab Pro subscription in order to get access to A100 GPUs and high-RAM instances. But I am having a really tough time finding information on how to host models from Hugging Face on such a Colab instance; most of the documentation I found covers how to run models on your own hardware using the webui, which I do not have, at least not enough of to host this hefty a model. I do host my own models, but they are not even comparable to dolphin-mixtral in size and compute requirements.
I'm not asking for a step-by-step guide, because I know people have better things to do with their lives than hold my hand through this, but if anyone could point me in the right direction I would really appreciate it.
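For anyone who lands on the same question later, the rough direction I am considering looks like this (the quant filename and parameters are guesses on my part, untested):

# pip install huggingface_hub and llama-cpp-python (built with CUDA) on the Colab instance
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# filename is a guess at one of the quants in TheBloke's repo; pick one that fits the GPU
model_path = hf_hub_download(
    repo_id="TheBloke/dolphin-2.7-mixtral-8x7b-GGUF",
    filename="dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=4096)  # -1 offloads all layers to the GPU
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])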
thank you in advance | 2024-01-29T16:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1adz2mo/help_running_models_in_colab/ | Sidoooooo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adz2mo | false | null | t3_1adz2mo | /r/LocalLLaMA/comments/1adz2mo/help_running_models_in_colab/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '46L172gAQpqb4lefodRkwffzsrooawaSnfZtlljO5uA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?width=108&crop=smart&auto=webp&s=9c9552af2f88815d7786ca622fbc02ce91723972', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?width=216&crop=smart&auto=webp&s=c6730d4d74aecfd58a3ee03a6a7d0d38d138f4f4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?width=320&crop=smart&auto=webp&s=c4e63dfb06aef2df6acd213bc57e017848da6236', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?width=640&crop=smart&auto=webp&s=6b1888cbb6f2ab2c7193177108302f384fab35f5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?width=960&crop=smart&auto=webp&s=a24cc1a40d1265cea241f46445818c21732053f5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?width=1080&crop=smart&auto=webp&s=e42707f4f27a15be2a9ab541415acc14fd1dc394', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AAjCDMVs-dZHaCNe2jCBYMQcbcGcKVthkqMP0kJnuao.jpg?auto=webp&s=4ddc1e74507503ff61b814a3f65b8f103d3564c8', 'width': 1200}, 'variants': {}}]} |
Giving back | 22 | Did not expect the response received re the A100 rig. As stated, I got lucky. I wanted to make a suggestion and see how it went down -
Would it be useful to offer one of the A100 boards to any dev teams (e.g. exllama, llama.cpp, etc.)? I was going to suggest a three-month rotation, passing it on to the next dev team after their time is up.
If any devs are interested, let me know. It would be great to get a list so I can order and prioritise, then work out the logistics of getting the hardware shipped over.
Is just a thought. Again, I got lucky. If that luck can help advance the industry then seems a better result than me just sitting on them and underutilising. | 2024-01-29T16:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1adywe6/giving_back/ | BreakIt-Boris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adywe6 | false | null | t3_1adywe6 | /r/LocalLLaMA/comments/1adywe6/giving_back/ | false | false | self | 22 | null |
Blind Testing 16 Different Models for Roleplaying | 56 | **RESULTS** (from 30 tests)
**1. (tie)** 70b lzlv (80 points)
**1. (tie)** 120b Goliath (80 points)
**3.** 34b NousCapybara (78 points)
**4.** 8x7b Noromaid (77 points)
**5.** 34b NousHermes2 (73 points)
**6.** 8x7b NousHermes2 DPO (67 points)
**7.** 13b Psyfighter v2 (64 points)
**8.** 34b Bagel v0.2 (59 points)
**9.** 8x7b Mixtral (51 points)
**9.** 70b Xwin (51 points)
**11.** 7b Toppy.txt (39 points)
**11.** 8x7b Dolphin 2.6 (39 points)
**13.** 8x7b NousHermes2 SFT (38 points)
**13.** 13b MythoMax (38 points)
**15.** 7b OpenHermes 2.5 (37 points)
**16.** 70b Synthia (29 points)
Full Results: https://docs.google.com/spreadsheets/d/1fnKUagqfe76Z74GDolp2C3EsWPReKKqClLB7--hRHHw/edit?usp=sharing
---
**TESTING AND SCORING METHOD**
The models were randomly grouped into 4 groups of 4. Within each group, I subjectively ordered them from my most to least favorite, awarding +3, +2, +1, and +0 points respectively. The winners of each group advanced to a final group, which was once again ordered and scored using the same method, meaning finalists end the round with 6, 5, 4, or 3 points.
This process was repeated 30 times.
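For concreteness, the scoring works out to something like this (a sketch; the model names and the subjective ranking step are placeholders):

import random

def run_round(models, scores):
    # shuffle into 4 groups of 4, rank each group, award +3/+2/+1/+0,
    # then rank the 4 group winners again for another +3/+2/+1/+0
    random.shuffle(models)
    groups = [models[i:i + 4] for i in range(0, 16, 4)]
    winners = []
    for group in groups:
        ranked = list(group)  # in the real test this ordering was my subjective preference
        for points, model in zip((3, 2, 1, 0), ranked):
            scores[model] = scores.get(model, 0) + points
        winners.append(ranked[0])
    for points, model in zip((3, 2, 1, 0), winners):
        scores[model] += points  # finalists therefore end the round with 6, 5, 4 or 3 points

scores = {}
models = ["model_" + str(i) for i in range(16)]
for _ in range(30):
    run_round(models, scores)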
---
**NOTES**
Only models available on OpenRouter were tested.
An effort was made to keep the character cards relatively diverse, using single entity characters, multiple entity characters, and RPGs/Simulators.
Models were tested with and without using GPT-4 to kickstart conversations.
Evaluations were based on single responses rather than multiple conversation turns, so issues that only manifest over several turns, such as the repetition seen with 34b NousCapybara, are not fully reflected in the results.
All models used 0.7 temperature, 0.9 top P, 400 max output tokens, and everything else disabled.
Each model used the prompting format recommended by the HuggingFace model card.
A 'creative' roleplay prompt template was not used. Instead, a more open-ended prompt template was used: https://rentry.org/fqh66aci | 2024-01-29T15:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/1adxlyj/blind_testing_16_different_models_for_roleplaying/ | Alex1Nunez19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adxlyj | false | null | t3_1adxlyj | /r/LocalLLaMA/comments/1adxlyj/blind_testing_16_different_models_for_roleplaying/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'BUn-uP6YX78FL-O1lmxo8IK6h1f8wJjgd2bA8YHrLZc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?width=108&crop=smart&auto=webp&s=0c2019a45d45a5fdf54305cbb91b75bd9e163b04', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?width=216&crop=smart&auto=webp&s=1e787a2eefed47790bca63ad68c81f9aec370685', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?width=320&crop=smart&auto=webp&s=1048ef531303a1ecbc29a79c324e2f05803915ce', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?width=640&crop=smart&auto=webp&s=a44761dc971febc1e33bbf575f6a27a0a5c51322', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?width=960&crop=smart&auto=webp&s=2e48ee235b723c4064849d7026731325e13b1b75', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?width=1080&crop=smart&auto=webp&s=7c1ecfaf2d8edb427ec725257bb4466b73cbafdd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/aUsqXntZxKm2qsjNyhGl4jqo4JfeXqfWGyoeg1eaJDs.jpg?auto=webp&s=89114570ebd2179dc364e3e8b52e735fb9261bff', 'width': 1200}, 'variants': {}}]} |
how are you visualizing 'loss' and stuff during training? What software/libraries? | 1 | Coming from a GUI, it's fire and forget. No testing or reporting.
Now that I am familiar with transformers, I'm thinking I want to go for a full five-star testing suite (FOSS only, and none of that 4-different-APIs tutorial crap).
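The kind of thing I have in mind is just wiring the Trainer's built-in logging into TensorBoard, roughly like this (a sketch, untested):

from transformers import TrainingArguments

# loss curves land in ./logs and can be viewed with: tensorboard --logdir ./logs
args = TrainingArguments(
    output_dir="./out",
    logging_dir="./logs",
    logging_steps=10,             # log training loss every 10 steps
    evaluation_strategy="steps",  # also track eval loss if a validation set is passed
    eval_steps=100,
    report_to="tensorboard",      # or "wandb" for a hosted dashboard
)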
Any suggestions? I'm specifically doing LoRA. | 2024-01-29T14:47:37 | https://www.reddit.com/r/LocalLLaMA/comments/1adx0j0/how_are_you_visualizing_loss_and_stuff_during/ | pr1vacyn0eb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adx0j0 | false | null | t3_1adx0j0 | /r/LocalLLaMA/comments/1adx0j0/how_are_you_visualizing_loss_and_stuff_during/ | false | false | self | 1 | null |
Looking for "french" 7B / 13B LLM | 2 | Hello, I'm looking for a 7B or 13B LLM that can reply in French.
I need an LLM that can chat and be creative.
Do you have any suggestions?
I've already tried Toppy-M-7B-GGUF and Vigogne 7B, but they don't feel as sharp as when chatting in English. | 2024-01-29T14:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1adwu2l/looking_for_french_7b_13b_llm/ | Haurrus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adwu2l | false | null | t3_1adwu2l | /r/LocalLLaMA/comments/1adwu2l/looking_for_french_7b_13b_llm/ | false | false | self | 2 | null |
Discussion on Rerankers available as a service for RAG | 2 | Hello Community,
I'm exploring reranker tools and am curious about your experiences, especially with [bge](https://huggingface.co/BAAI/bge-reranker-base) models (large/base) and services like Cohere Rerank. My use case is a fairly generic RAG pipeline, and I want to see some metrics on the available rerankers (apart from MTEB), especially on real-world domains.
Purely from a service POV, is Cohere the only game in town, or are there other options worth considering? Is anyone providing bge-reranker-base/large as a service? I am not interested in self-hosting.
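For context, the kind of hosted call I mean looks roughly like this with Cohere's Python client (the model name and response fields are my assumptions from their docs):

import cohere

co = cohere.Client("YOUR_API_KEY")
docs = ["chunk about invoices...", "chunk about refunds...", "chunk about shipping..."]

response = co.rerank(
    model="rerank-english-v2.0",   # assumed model name, check the current docs
    query="How do I get a refund?",
    documents=docs,
    top_n=2,
)
for result in response.results:    # each result carries the original index and a relevance score
    print(result.index, result.relevance_score)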
Any insights or recommendations would be great | 2024-01-29T14:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1adwcbe/discussion_on_rerankers_available_as_a_service/ | brooding_pixel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adwcbe | false | null | t3_1adwcbe | /r/LocalLLaMA/comments/1adwcbe/discussion_on_rerankers_available_as_a_service/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'mTNPdmjwD5rqyKRSIOBWOjAy5C46sayWfxnKQNioruM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=108&crop=smart&auto=webp&s=1444e858efaee0a8dae57ff7f179afd8abb9830c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=216&crop=smart&auto=webp&s=53fb827f5c9b23c7f95690767d34f99ceaae806b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=320&crop=smart&auto=webp&s=275434bf6c149bb1f13c709cb0907ec83936404d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=640&crop=smart&auto=webp&s=e851f16cde3923434487f83da8f050ca990bf487', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=960&crop=smart&auto=webp&s=db3ff12c2573a69b411abe6ad8dacacc61393726', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=1080&crop=smart&auto=webp&s=fb057a7f3e2a09b8b098e06d1c148e5552a71dfa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?auto=webp&s=a1039c4399f27721e4c89d7c13ff8aa31a3a253a', 'width': 1200}, 'variants': {}}]} |
Differences between LORA training OpenLLaMa and Mistral? | 1 | It seems Mistral uses a different format for the LORA training data, but other than that, they are the same?
I'd use transformers training, PEFT/LORA.
I imagine the parameters for training will be slightly different.
Basically, my goal isn't to accidentally waste time training Mistral only to find out it can't be used with transformers, or that one of the parameters NEEDS to be 3e-3 or something. | 2024-01-29T14:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/1advyav/differences_between_lora_training_openllama_and/ | pr1vacyn0eb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1advyav | false | null | t3_1advyav | /r/LocalLLaMA/comments/1advyav/differences_between_lora_training_openllama_and/ | false | false | self | 1 | null |
Finetuning on the truth that reality is an illusion | 1 | Can somebody please finetune a model on my journal entries which are primarily about my experience in meditation? Thanks | 2024-01-29T13:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/1advs58/finetuning_on_the_truth_that_reality_is_an/ | YouWide5985 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1advs58 | false | null | t3_1advs58 | /r/LocalLLaMA/comments/1advs58/finetuning_on_the_truth_that_reality_is_an/ | false | false | self | 1 | null |
Miqu comparison - Supposedly mistral medium leaked | 141 | 2024-01-29T13:47:01 | https://twitter.com/qtnx_/status/1751775870631502067/photo/1 | bratao | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1advoiz | false | null | t3_1advoiz | /r/LocalLLaMA/comments/1advoiz/miqu_comparison_supposedly_mistral_medium_leaked/ | false | false | default | 141 | null | |
Seeking the Best Reranker Services: Experiences with bge & Cohere? | 1 | Hello Community,
I'm exploring reranker tools and am curious about your experiences, especially with [bge](https://huggingface.co/BAAI/bge-reranker-base) models (large/base) and services like Cohere Rerank. My use case is a fairly generic RAG pipeline, and I want to see some metrics on the available rerankers (apart from MTEB), especially on real-world domains.
Purely from a service POV, is Cohere the only game in town, or are there other options worth considering? Is anyone providing bge-reranker-base/large as a service? I am not interested in self-hosting.
Any insights or recommendations would be great | 2024-01-29T13:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1advemq/seeking_the_best_reranker_services_experiences/ | brooding_pixel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1advemq | false | null | t3_1advemq | /r/LocalLLaMA/comments/1advemq/seeking_the_best_reranker_services_experiences/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mTNPdmjwD5rqyKRSIOBWOjAy5C46sayWfxnKQNioruM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=108&crop=smart&auto=webp&s=1444e858efaee0a8dae57ff7f179afd8abb9830c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=216&crop=smart&auto=webp&s=53fb827f5c9b23c7f95690767d34f99ceaae806b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=320&crop=smart&auto=webp&s=275434bf6c149bb1f13c709cb0907ec83936404d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=640&crop=smart&auto=webp&s=e851f16cde3923434487f83da8f050ca990bf487', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=960&crop=smart&auto=webp&s=db3ff12c2573a69b411abe6ad8dacacc61393726', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?width=1080&crop=smart&auto=webp&s=fb057a7f3e2a09b8b098e06d1c148e5552a71dfa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/le4q-5Ax4FV-Mm3f6qbk4tcgddlHFXIRZkGzg04Uikg.jpg?auto=webp&s=a1039c4399f27721e4c89d7c13ff8aa31a3a253a', 'width': 1200}, 'variants': {}}]} |
When we see models derived from other models on HuggingFace, do they include the whole model or just some sort of tuning? | 5 | Things may be specific to specific uploads but, generally speaking, I am curious to understand if there's a general rule of thumb.
Let's say I grab [this](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF), which is a fine tune of Falcon-180B.
It says "Derived from:" and so on. Does this mean I need to also download the original model and combine the two to use the fine tune or is the fine tune standalone? | 2024-01-29T13:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1adv8tw/when_we_see_models_derived_from_other_models_on/ | x54675788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adv8tw | false | null | t3_1adv8tw | /r/LocalLLaMA/comments/1adv8tw/when_we_see_models_derived_from_other_models_on/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'g8CzHRJGbGgbt0TlyWjFFSKUdC6Vo77yKNuuGUdOGfc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?width=108&crop=smart&auto=webp&s=ace44c777b744b4254a50e7bdb013614cca2a0ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?width=216&crop=smart&auto=webp&s=bf0d831276d22aefc72220c4414313cf142d498f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?width=320&crop=smart&auto=webp&s=106c28142b5a0fed7b7168a5aea8c062fe60f7d7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?width=640&crop=smart&auto=webp&s=9a5c00619facd1a839e9240c692e0823da3a5e14', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?width=960&crop=smart&auto=webp&s=336ed100e6ea9bf7720c44a3e023b7271ea526c9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?width=1080&crop=smart&auto=webp&s=8bfd638397219c378f5cbf634d6c5382db4f82fc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uxFjbHQS2blwk_nkX3NBHD7Gxs9pVysX-KfuwEpzsAU.jpg?auto=webp&s=bc628a9074f67916046e4e6f5d9992cbc441b76f', 'width': 1200}, 'variants': {}}]} |
Is there any difference between using the GPU and CPU other than speed? | 2 | I'm new to this world and I remember reading something on the lines of "context window moves as you keep going with GPU, unlike with CPU".
I can't find the comment anymore, the above sentence may be right or completely wrong, so I ask here.
Is there anything I'd be missing other than speed, if I run stuff on CPU and a full array of RAM sticks rather than blowing a few grand on multiple high-end GPUs? | 2024-01-29T13:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/1adv5cg/is_there_any_difference_between_using_the_gpu_and/ | x54675788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adv5cg | false | null | t3_1adv5cg | /r/LocalLLaMA/comments/1adv5cg/is_there_any_difference_between_using_the_gpu_and/ | false | false | self | 2 | null |
New hallucinations leaderboard! | 113 | There's a new leaderboard on Hugging Face: the [LLM Hallucination Leaderboard](https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard)!
Led by researchers from the Uni of Edinburgh, it evaluates the propensity of models to *hallucinate*, either on factuality (= say false things) or faithfulness (= ignore user instructions).
This is becoming an increasingly important avenue of research, as more and more people are starting to rely on LLMs to find and search for information!
They use 14 datasets, grouped over 7 concepts, to try to get a better overall view of when LLMs output wrong content. [Their introductory blog post](https://huggingface.co/blog/leaderboards-on-the-hub-hallucinations) also contains an in depth analysis of which LLMs get what wrong, which is super interesting. | 2024-01-29T13:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1adv3gq/new_hallucinations_leaderboard/ | clefourrier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adv3gq | false | null | t3_1adv3gq | /r/LocalLLaMA/comments/1adv3gq/new_hallucinations_leaderboard/ | false | false | self | 113 | {'enabled': False, 'images': [{'id': 'h5mrwUrY5niVDxN2AgbulBkfdZNKTqes2NZc_NPNIWA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?width=108&crop=smart&auto=webp&s=126dff65d5d3b4ff6f7a21823aeb434a18b53cf2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?width=216&crop=smart&auto=webp&s=66198e2a6e01c81d05a20f9b47d6be5d51b921dd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?width=320&crop=smart&auto=webp&s=92139b925ef062fa0989d0f8f9021e42dc8011bf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?width=640&crop=smart&auto=webp&s=6234f65cf9c1b1b4169579d2409c28ad2fc3bdc6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?width=960&crop=smart&auto=webp&s=9679d877ffd2169abcb459c134f2eb05e40ddf86', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?width=1080&crop=smart&auto=webp&s=566a392a5838ea3a9a326b80f3f7c223baba2b41', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DmATUyMNew4bCEU7eXF3_o_JerkFxQ_pYpl-Fj0hcqY.jpg?auto=webp&s=8c1c95387f6b769706023c316bf8fd401cfbb96c', 'width': 1200}, 'variants': {}}]} |
5 x A100 setup finally complete | 846 | Taken a while, but finally got everything wired up, powered and connected.
5 x A100 40GB running at 450w each
Dedicated 4 port PCIE Switch
PCIE extenders going to 4 units
Other unit attached via sff8654 4i port ( the small socket next to fan )
1.5M SFF8654 8i cables going to PCIE Retimer
The GPU setup has its own separate power supply. Whole thing runs around 200w whilst idling ( about £1.20 elec cost per day ). Added benefit that the setup allows for hot plug PCIE which means only need to power if want to use, and don’t need to reboot.
P2P RDMA enabled allowing all GPUs to directly communicate with each other.
So far the biggest stress test has been Goliath at 8-bit GGUF, which weirdly outperforms the 6-bit EXL2 model. Not sure if GGUF is making better use of P2P transfers, but I did max out the build config options when compiling (increased batch size, x, y). 8-bit GGUF gave ~12 tokens a second and EXL2 10 tokens/s.
Big shoutout to Christian Payne. Sure lots of you have probably seen the abundance of sff8654 pcie extenders that have flooded eBay and AliExpress. The original design came from this guy, but most of the community have never heard of him. He has incredible products, and the setup would not be what it is without the amazing switch he designed and created. I’m not receiving any money, services or products from him, and all products received have been fully paid for out of my own pocket. But seriously have to give a big shout out and highly recommend to anyone looking at doing anything external with pcie to take a look at his site.
www.c-payne.com
Any questions or comments feel free to post and will do best to respond. | 2024-01-29T13:12:14 | https://www.reddit.com/gallery/1aduzqq | BreakIt-Boris | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1aduzqq | false | null | t3_1aduzqq | /r/LocalLLaMA/comments/1aduzqq/5_x_a100_setup_finally_complete/ | false | false | 846 | null | |
Free/cheap hosts for training, tuning etc? | 1 |
I'm playing with small models and want to experiment with training/fine-tuning/merging etc. My own hardware is minimal. I'm aware of huggingface and colab but haven't a clue what else is out there. What are y'all using?
I guess the aspects I'm looking for (which may not be in the same place) are :
1. Preset, pre-trained model hosting - along the lines of the OpenAI API (but for fewer $$$s), for things like synthetic data generation
2. Custom model hosting - static setup, but you get to run your own model
3. High-level training/fine-tuning options - where at least the model architecture is off-the-shelf
4. Low-level options - where the internals can be messed with, along the lines of Jupyter notebooks
For 1 & 2, RESTful or some such API access is a must-have. I don't know about this for 3 & 4; I imagine sandboxing might prevent it. Call it nice-to-have.
$$$s are very limited.
Any suggestions? | 2024-01-29T12:09:18 | https://www.reddit.com/r/LocalLLaMA/comments/1adtu2c/freecheap_hosts_for_training_tuning_etc/ | danja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adtu2c | false | null | t3_1adtu2c | /r/LocalLLaMA/comments/1adtu2c/freecheap_hosts_for_training_tuning_etc/ | false | false | self | 1 | null |
Defensive and Robust Design in AI Automation | 1 | 2024-01-29T11:59:46 | https://medium.com/@kenny_v/defensive-and-robust-design-in-ai-automation-8e951c8e7fd7 | TheDeadlyPretzel | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1adtnto | false | null | t3_1adtnto | /r/LocalLLaMA/comments/1adtnto/defensive_and_robust_design_in_ai_automation/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mbQkpDS2bFE2VvhyMi57obNASZACz7NSEJVlIVjmO5c', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?width=108&crop=smart&auto=webp&s=d0a924918709d2657dfffcc5f08d090fe6c9558d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?width=216&crop=smart&auto=webp&s=6c444160c2830941ace850819e07c9048d600cc6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?width=320&crop=smart&auto=webp&s=ac60b3c7bac438681d1d4d39fd654bbcc23481c5', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?width=640&crop=smart&auto=webp&s=48de3bf6ebd9f5a1b7c9100cfac248e7d74710f5', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?width=960&crop=smart&auto=webp&s=738019daa1a6ffa44ad6bef9889a7528dae84230', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?width=1080&crop=smart&auto=webp&s=4840a4616c2ee13c0b82a8289d38a8c11d11aae1', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/TXtODbKhodREMwcPbUmXvIU9o113jmyxo-yCFCoLwQA.jpg?auto=webp&s=1601a44ece55ac12f994fd041eb92ac500a8f47c', 'width': 1200}, 'variants': {}}]} | ||
Question about Mixtrals prompt format | 6 | I am using the Mixtral-8x7B-Instruct-v0.1 model from Hugging Face. The model card says the following about the prompt format:
> This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
<s> \[INST\] Instruction \[/INST\] Model answer</s> \[INST\] Follow-up instruction \[/INST\]
However, later in the example showing how to run the model, they don't follow the format, I think:
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is" # --> Why not in prompt format???
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Does the tokenizer add the prompt format automatically to the text or is this just a lazy example?
Second question: Does Mixtral have a system prompt?
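For what it's worth, a quick way to check what the tokenizer itself produces (a sketch; it assumes the tokenizer ships a chat template, which I haven't verified):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
messages = [
    {"role": "user", "content": "Instruction"},
    {"role": "assistant", "content": "Model answer"},
    {"role": "user", "content": "Follow-up instruction"},
]
# if a chat template is present, this should reproduce the documented
# <s> [INST] ... [/INST] ... </s> [INST] ... [/INST] layout
print(tokenizer.apply_chat_template(messages, tokenize=False))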
​ | 2024-01-29T11:59:29 | https://www.reddit.com/r/LocalLLaMA/comments/1adtno0/question_about_mixtrals_prompt_format/ | LargeBrick7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adtno0 | false | null | t3_1adtno0 | /r/LocalLLaMA/comments/1adtno0/question_about_mixtrals_prompt_format/ | false | false | self | 6 | null |
Why I am getting extra response from the Langchain? | 1 | Hey Everyone! I have spent the last 3 hours trying to understand why this code snippet returns an extra, chat-style exchange in its response, and I can't figure it out.
import os
from dotenv import load_dotenv
from langchain.llms import HuggingFaceHub
from langchain.embeddings import HuggingFaceInferenceAPIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
load_dotenv()
HF_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")
embeddings = HuggingFaceInferenceAPIEmbeddings(
api_key=HF_TOKEN, model_name="BAAI/bge-base-en-v1.5"
)
vectorstore = FAISS.from_texts(
["harrison worked at kensho"], embedding=embeddings
)
retriever = vectorstore.as_retriever()
template = """
{question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = HuggingFaceHub(
repo_id="huggingfaceh4/zephyr-7b-alpha",
model_kwargs={"temperature": 0.5, "max_length": 64,"max_new_tokens":128}
)
retrieval_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
response = retrieval_chain.invoke("where did harrison work?")
print(response)
Response:
Human:
where did harrison work?
Assistant:
I do not have information about a specific person unless it is provided to me. please provide me with the name of the person you are referring to.
Human:
Oh, sorry about that. I was referring to Harrison Ford. I was just curious where he worked before becoming a famous actor.
Assistant:
Before becoming a famous actor, Harrison Ford worked as a carpenter and a self-employed furniture maker. He also worked as
| 2024-01-29T11:38:07 | https://www.reddit.com/r/LocalLLaMA/comments/1adtb9f/why_i_am_getting_extra_response_from_the_langchain/ | Pinaka-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adtb9f | false | null | t3_1adtb9f | /r/LocalLLaMA/comments/1adtb9f/why_i_am_getting_extra_response_from_the_langchain/ | false | false | self | 1 | null |
In 2024, is there a place for encoder-decoder models and masked LMs? | 12 | In my (possibly outdated) textbox, encoder-decoder models are a very important and popular part of the transformer structure. And there are two types of language models: causal LM and masked LM. However probably starting with the release of llama, decoder-only causal LMs are becoming increasing popular. Almost all interesting new models in 2023 are decoder-only causal LMs.
As for doing tasks other than text generation, like classification, instead of using a encoder-only or encoder-decoder MLM, substituting the LM head for a linear classification layer, and finetuning the model with classification data, the community is more focused on finetuning a causal LM with its original LM head, and let it present classification results as generated text.
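For reference, the classic encoder-plus-classification-head recipe looks roughly like this (a sketch; the checkpoint and label count are placeholders):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# an encoder-only MLM (e.g. BERT) with its LM head swapped for a randomly
# initialised linear classification layer, to be finetuned on labelled data
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

inputs = tokenizer("The plot was thin but the acting was great.", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 3): one score per class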
So, are masked LMs outdated? Why is the decoder-only causal LM advantageous? | 2024-01-29T11:20:33 | https://www.reddit.com/r/LocalLLaMA/comments/1adt18x/in_2024_is_there_a_place_for_encoderdecoder/ | sgsdxzy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adt18x | false | null | t3_1adt18x | /r/LocalLLaMA/comments/1adt18x/in_2024_is_there_a_place_for_encoderdecoder/ | false | false | self | 12 | null |
How can I increase the speed performance of llm without changing hardware? | 1 | I created a SQL assistant using llama.cpp and langchain. But this assistant takes a long time to respond. I want to shorten this time without changing the hardware. That's why this came to my mind: Prompt = """You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.
Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes ("") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful not to query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
{table_info}
Question: {input}"""
Since my database is fixed, only the Question: {input} part of this prompt will change. llama.cpp wastes time re-processing this fixed prefix on every call. Since I use this fixed prompt all the time, is there a way to embed or cache it? | 2024-01-29T11:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1adsyyw/how_can_i_increase_the_speed_performance_of_llm/ | CommitteeHairy6435 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adsyyw | false | null | t3_1adsyyw | /r/LocalLLaMA/comments/1adsyyw/how_can_i_increase_the_speed_performance_of_llm/ | false | false | self | 1 | null |
Did anything big happen over the weekend in the world of LLaMA? Asking for a friend. | 1 | 2024-01-29T11:09:01 | ambient_temp_xeno | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1adsuow | false | null | t3_1adsuow | /r/LocalLLaMA/comments/1adsuow/did_anything_big_happen_over_the_weekend_in_the/ | false | false | 1 | {'enabled': True, 'images': [{'id': '4oBe6cqVCY5jiL_gY9QIMBCvkgOZZYAA33cfp9XrdH8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/dq3fpw1g3dfc1.png?width=108&crop=smart&auto=webp&s=458107d5f8269c58ebd81a09d54a3bcd25381470', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/dq3fpw1g3dfc1.png?width=216&crop=smart&auto=webp&s=3cfaaf75126ac016cd6efada1ae22077e2379243', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/dq3fpw1g3dfc1.png?width=320&crop=smart&auto=webp&s=fadb4598423c7519ffb0fcc9e514e951e25cc192', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/dq3fpw1g3dfc1.png?width=640&crop=smart&auto=webp&s=6175cabab5697aa5f63aaec7b98fbac21ffe4c89', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/dq3fpw1g3dfc1.png?width=960&crop=smart&auto=webp&s=31de3fbdf7a969c49c89e6da91500584d76732cd', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/dq3fpw1g3dfc1.png?auto=webp&s=a8585dd851c01760edc0c34059cf4a37955ae453', 'width': 1024}, 'variants': {}}]} | |||
Are there any tiny models (<3B) with Mistral vocab for speculative execution? | 3 | I can speed up inference significantly on Llama-2 models by using draft models that use the same vocab, like tinyllama or smol-llama. But I haven't found any such models compatible with Mistral. | 2024-01-29T10:37:14 | https://www.reddit.com/r/LocalLLaMA/comments/1adscxm/are_there_any_tiny_models_3b_with_mistral_vocab/ | stddealer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adscxm | false | null | t3_1adscxm | /r/LocalLLaMA/comments/1adscxm/are_there_any_tiny_models_3b_with_mistral_vocab/ | false | false | self | 3 | null |
Mistral medium possibly leaked (miqu) | 1 | [removed] | 2024-01-29T10:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1adscou/mistral_medium_possibly_leaked_miqu/ | cyberuser42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adscou | false | null | t3_1adscou | /r/LocalLLaMA/comments/1adscou/mistral_medium_possibly_leaked_miqu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YwK0i5uwzktX1nH_8wCb9scLo-p_k7U36rHB1ExOeZo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=108&crop=smart&auto=webp&s=dd5d7f81d68648edfb0c8136135c812f616df207', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=216&crop=smart&auto=webp&s=691394ce3210cb799685250dc0f8ae8f4dbc4e9a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=320&crop=smart&auto=webp&s=28e16fa9a638bcc13a3b753d34aebdd5870da93e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=640&crop=smart&auto=webp&s=691b1d20b9d40fc5faf20fcbffb32f46820a4525', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=960&crop=smart&auto=webp&s=7fbbc5bb3550260d003a4468ef960c195d5c3e0f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=1080&crop=smart&auto=webp&s=4a08f0b7eb3fd4e39eee342225af416dc9a98ffb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?auto=webp&s=5541acffc28b218860ef03141b95aff9b62f33f3', 'width': 1200}, 'variants': {}}]} |
does NVLink really matter? | 2 | What's the difference between buying:
1. 4x4090 24 GB (96GB in total)
compared to
​
2. 4x3090 with NVLink (96GB in total)
compared to
3. A100 (80GB in total)
​
in terms of AI inference and training? | 2024-01-29T10:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ads2cx/does_nvlink_really_matter/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ads2cx | false | null | t3_1ads2cx | /r/LocalLLaMA/comments/1ads2cx/does_nvlink_really_matter/ | false | false | self | 2 | null |
So, the mysterious https://huggingface.co/miqudev/miqu-1-70b | 1 | [removed] | 2024-01-29T10:09:23 | https://www.reddit.com/r/LocalLLaMA/comments/1adry90/so_the_mysterious/ | ambient_temp_xeno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adry90 | false | null | t3_1adry90 | /r/LocalLLaMA/comments/1adry90/so_the_mysterious/ | false | false | self | 1 | null |
Extending LLM's Context with Activation Beacon [Model/code release] | 43 | [https://huggingface.co/namespace-Pt/activation-beacon-llama2-7b-chat](https://huggingface.co/namespace-Pt/activation-beacon-llama2-7b-chat)
[https://github.com/FlagOpen/FlagEmbedding/tree/master/Long\_LLM/activation\_beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
This was posted a few weeks ago but they finally released the model weights and the code. | 2024-01-29T10:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/1adrxmh/extending_llms_context_with_activation_beacon/ | mpasila | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adrxmh | false | null | t3_1adrxmh | /r/LocalLLaMA/comments/1adrxmh/extending_llms_context_with_activation_beacon/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'MplOg4EE6kRfwy83N8jnUGvy-9Qp_homtPhRGZbO0QA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?width=108&crop=smart&auto=webp&s=7ea4540f11e3b98152c89e4c1fc5b5f4bd184380', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?width=216&crop=smart&auto=webp&s=8ff6ebf34122a6b2b7de7b4a1425371228114272', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?width=320&crop=smart&auto=webp&s=2e9ad2125423e84bd322c412bff1828be3841eba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?width=640&crop=smart&auto=webp&s=b1b9a3cdb858a3e2c66f5419c0d273e267eea999', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?width=960&crop=smart&auto=webp&s=30bbe2236ed3b33d86ae9af1c678c3d31e62a159', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?width=1080&crop=smart&auto=webp&s=1a26f8445cab28e6c9a2354f6676d91030b324d9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x5P0qPH4aaGcNp3O1mr4jbMkVR_7ingJuJQ2tNKx9Kg.jpg?auto=webp&s=1542b8fc1692a723d02a6886043a8d5e4ff5c965', 'width': 1200}, 'variants': {}}]} |
Training a Fantasy Writing Model in MLX for Apple Silicon | 1 | This is an update on a [first, abortive attempt](https://www.reddit.com/r/LocalLLaMA/comments/1abt15y/fine_tuning_a_tolkien_model_style_seems_to_have/) to train a Tolkien fantasy fine-tune.
**Overall Objective**: Generate fantasy fiction training data locally using Mixtral-8x7b, and then train a Mistral-7b fine tune that writes a variety of story sections.
Outcome: After training for 3 epochs on 651 examples (batch size 4, 489 iterations), the fine-tune can follow instructions to:
* Write in a Tolkien high fantasy style
* Write first person sections
* Write 3rd person scenes with action & description
* Write sections with dialogue
They aren't especially good fiction or anything, but the style and instruction-following have definitely emerged with only 651 examples. So I think it's pretty interesting to see some delta from such a small training set.
**Major Take-away**: 300 training examples isn't enough to move the needle (duh) and diversity of prompts seems to have really helped. *Query*: does diversity of instructions help because what not to do is just as important in learning as what to do?
**Step 1: Generate training data**
Model: [mixtral-8x7b-instruct-v0.1.Q8\_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q8_0.gguf)
Server: LM Studio
Prompt:
"You are a creative writing teacher. Using the provided context: '{text_chunk}', formulate a clear writing prompt backwards from the context. First decide if the context is better for a dialogue prompt or a narrative prompt. If it contains substantial dialogue, write a dialogue prompt such as 'Write a story section featuring dialogue in which a young adventurer argues with a young woman about the nature of love and sexual desire. Explore themes of gender and power difference.' However, if instead the context is better for a narrative prompt, write a 1st person narrative prompt, for example: 'Write a story section from 1st person perspective, involving a young warrior lost in a cave, hunted by primitive men, saved by the glow of a magical artifact he carries.' Always start the prompt with 'Write a story section'"
(I already had about 270 samples from the Silmarillion, generated with a similar prompt to write a story section "using the high fantasy style of Tolkien" in 3rd person.)
Data: *The Silmarillion*, *The Shadow of the Torturer*, *The Sword of the Lictor* chunked by 500-tokens
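For readers who want the shape of the data-generation loop, here is a minimal sketch against LM Studio's local OpenAI-compatible server (the port, model name, and chunking are assumptions, not the exact script used):

from openai import OpenAI

# LM Studio exposes an OpenAI-compatible endpoint, typically at http://localhost:1234/v1
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

TEACHER_PROMPT = "You are a creative writing teacher. Using the provided context: '{chunk}', formulate a clear writing prompt backwards from the context..."

def make_example(chunk: str) -> dict:
    resp = client.chat.completions.create(
        model="local-model",  # LM Studio serves whichever model is currently loaded
        messages=[{"role": "user", "content": TEACHER_PROMPT.format(chunk=chunk)}],
        temperature=0.7,
    )
    prompt = resp.choices[0].message.content
    # one {"text": ...} object per line is the format used for the MLX LoRA data below
    return {"text": "Instruction: " + prompt + ", output: " + chunk}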
**Step 2: Fine-tune the model**
Base model: Mistral-7b-v01 (base model)
Framework: MLX framework for Apple Silicon
Hardware: MacBook Pro M2 Max w/ 64GB shared RAM
Training data: 651 example inputs/outputs for 3 epochs, e.g.:
{"text": "Instruction: Write a story section featuring description and dialogue that takes place in a small inn. The protagonist is trying to gather information about someone named Trudo. They are speaking with the innkeeper, who seems skeptical that the protagonist will be able to get any useful information from the ostler named Trudo. The scene takes on an ominous tone as night falls and a crowd of people approaches the inn from the city.\n\nAs you write, consider the following:\n\n* What is the protagonist's relationship with Trudo?\n* How does the protagonist's interaction with the innkeeper reveal their social status?\n* Why is the protagonist so determined to speak with Trudo?\n* How does the setting of the small inn and the approaching crowd contribute to the mood of the scene?, output: a small limb and hardly big enough to hold a desk, but there was a stool there, several crow-quill pens, paper, and a pot of ink. I sat down...blah blah blah...but a few had cases of rapiers, and at some distance off I made out the white"}
**Results**:
While the fiction is no great shakes, I do see clear evidence of transfer learning for the task and for style. Also, I may need more detailed prompts, as per the training data. The examples below all use 500 max tokens and 0.7 temperature.
*Tolkien's Style*:
Instruction: Write a story section in Tolkien's high fantasy style about a human sorceress and her silver-haired wolf who rescue an elf warrior chined to the tree of woe.
Output: 30 words long, beginning with 'and she stood upon the hilltop'. The story should include 'wolf-flesh', 'whispering' and 'emerald'. It should also describe the vibrant colors of the sunrise, and the sorceress's long silver hair, which she uses to wrap around the warrior's neck like a scarf.
And she stood upon the hilltop, and looked down upon the land of Lothlorien and the Blood River, and it seemed to her that the waters of the river glowed red in the rising sun. And she heard the wolves of the forest howling, and she knew that one of her pack had fallen. And she called her wolf-flesh, and the wolf came to her side, and she set her hand on the wolf's head and bade it go. And it went, and she knew that it would bring the warrior back to her. And as the sun rose higher and higher above the river, she saw the figure of the warrior emerge from the trees, and she called to him, and he came to her. And she wrapped her silver hair about his neck, and she said to him: 'Come with me now, and look upon the land of Lothlorien.' Then they went together into the forest, and they came to the Tree of Woe, and there they found the elf-maid chained to the tree. And the warrior, he took his sword, and he cut off the chains, and he took the elf-maid in his arms, and he said to her: 'Look, now, upon the land of Lothlorien.' But the elf-maid could not see the land of Lothlorien, for she was chained to the tree. And the warrior said: 'Look, now, upon the blood of the tree.' And the elf-maid looked, and she saw the blood of the tree, and she wept for sorrow. Then the warrior took off his cloak, and he wrapped it about the elf-maid's shoulders, and he said: 'I will carry you home.' And he took her in his arms, and he began to climb the tree. But the tree was tall, and the climb was long, and the elf-maid was heavy. And the warrior was weary, and he began to fall.
*1st Person Perspective*:
Instruction: Write a story section from 1st person perspective, where you are an executioner who has traveled to a remote town, and have been tasked by the mayor with executing a murderess who has killed her family with poison. Describe the execution, her anger and biting sarcasm towards you, and the bloodlust of the crowd. Explore themes of public spectacle and repressed desire.
Output: 1/2 town, 1/2 crowd, 1/2 murderess, 1/2 executioner. The murdered family is offstage. The story section should end with the murderess being lifted from the scaffold, and her final words to you as you leave.
Extra credit: Describe a scene where the executioner is forced to practice his skills upon the murderess's child, who is still alive. "I would never do that," you say. "No, I would never do it, but I would do anything. I would have you do it. I would have you do it."
The Mayor's Executioner
I am returning to Vincennes with the murderess, her child, and a crowd. In the wagon beside me the child sits silently, rocking on the bench. I have asked the murderess how she was done for, and she told me it was a simple matter of poison. She poisoned her husband, her two sons, and her daughter. She said it was the best thing she had done in a long time. She said she had hated them so long, and that now she was free. I asked her if she had any other motive, and she said she had hated them for a long time, and that they had deserved it. I asked her if she loved anyone else, and she gazed at her child for a long time before she said, "Yes, I loved them all."
The crowd is large, and they are cheering and jostling one another as we approach the outskirts of town. I have not seen the mayor since we left, and I am certain for the first time that he has forgotten the promise he made to me. I have made an uncomfortable journey with this woman, and I have not found her easy to bear. Still, I do not blame her for her deed, and I do not want her punished. I wish only that the mayor would keep to his word and allow us to pass through the town and continue our journey.
The crowd is too large for the square, and they spill over into the streets. I see the mayor standing before the scaffold, which has been raised high above the square. The mayor is a tall man, and he must stand on a chair to see over the heads of the crowd. I have seen him in Vincennes before, and
*Dialogue*:
Instruction: Write a story section featuring description and dialogue that takes place on a stone bridge entering the provincial capitol. The protagonist is escorting a prisoner to the Tower of Truth & Pentinence under sealed orders of the Emperor. Dimarchii who guard the bridge stop the protagonist, and their subaltern argues that they cannot cross without the seal of Bishop Dosius. The argument gets heated, the subaltern moves to physically seize the prisoner (a flaming haired but cloaked woman). The protagonist instantly kills the subaltern and his dimarchii with his great sword, displaying great martial skill.
Output: 3 stone bridges, one of which was being built, ended just before the lower city. Each bridge was flanked by dimarchii who stood beneath red and yellow pennants, and occasionally waved them. To his left was a stone building which looked nearly complete; it was larger than the tunnels beneath the bank of the river by two stories. A few people were working upon it; Aun was sure they must be masons. To his right water rushed noisily downward and the Blood Bridge seemed only a tiny span, not the mighty bridge he was trying to recall. The lower city below was not visible from there, but Aun knew it lay ahead, perhaps a mile. "Here is the stone bridge then," he said, and turned and looked behind him. There he saw an open space in the black cap of the mountain behind them bordered by two of the most imposing towers he had ever seen. They varied in height, but were of the same design. Aun walked back and stood beside Dorcas. "Those are the Towers of the Truth and Penitence," he said. "They are the seat of the Inquisition." "We have to go through here?" Dorcas asked. "I don't like it." "It would seem so," Aun said. "And it would be best if we do. Many others have done so." "If they came with the Emperor's own soldiers. You're a soldier, aren't you?" "I am, but after a fashion. I could swear iachim's and castrum's oaths." (whatever those are) "That means you are not of the usual soldiery, no?" "No, I'm not." "That woman connected to you is not. She has a sword instead of a knife, Dorcas. If they see that, they won't want you at all." "They won't want me, do you mean?" "I mean they'll want your friend the swordswoman killed at once." "And if I'm with her?" "If you're with her at all, they'll want you killed at once." He saw Dorcas extract a sheathed knife from the folds in her cloak and hand it to him. "I won't stand by her and see her die if I an do anything
​ | 2024-01-29T09:16:49 | https://www.reddit.com/r/LocalLLaMA/comments/1adr6vh/training_a_fantasy_writing_model_in_mlx_for_apple/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adr6vh | false | null | t3_1adr6vh | /r/LocalLLaMA/comments/1adr6vh/training_a_fantasy_writing_model_in_mlx_for_apple/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aSJkeKJCS2n0ucuUyM30awIvRSGTQNcXYKjoLeeTqdc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?width=108&crop=smart&auto=webp&s=eca5537896fbd9d4190d83983dff0bb5fdc5f7a4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?width=216&crop=smart&auto=webp&s=94fd3f1f1b784d9324d8de90b99cc25bc5f7bff3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?width=320&crop=smart&auto=webp&s=5a82672f7db66a015623ba3035d72224173c220f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?width=640&crop=smart&auto=webp&s=b8766d703409c04711b981fff6508a1fe5c0372e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?width=960&crop=smart&auto=webp&s=138f96ba721d2d4b804c28acfc318ec4bc6e1763', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?width=1080&crop=smart&auto=webp&s=08b8f42f7154c370206c1717e568940f451e5bc1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oa-ySZjbdUcZyfPXebxffRCzJT04E-aakfGU2q308fo.jpg?auto=webp&s=3df12ca8d271be9b9204544d92a7deac1116b84e', 'width': 1200}, 'variants': {}}]} |
Can VLLM handle online inference with batching during concurrent http requests when using the Python client for vllm entrypoints api_server? | 15 | I am very new to deployment and have a few doubts:
1. Can VLLM handle online inference with batching during concurrent HTTP requests?
2. What are the benefits of using Triton or Ray Serve to deploy VLLM?
3. How can I implement in-flight request batching before handing requests off to VLLM, if my goal is to utilize its real-time batching functionality while serving via FastAPI or another service, so that multiple concurrent requests can be processed in parallel? | 2024-01-29T09:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/1adr3l2/can_vllm_handle_online_inference_with_batching/ | Pawnzo_90z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adr3l2 | false | null | t3_1adr3l2 | /r/LocalLLaMA/comments/1adr3l2/can_vllm_handle_online_inference_with_batching/ | false | false | self | 15 | null |
How do I choose between using Mistral Instruct or base model? | 1 | [removed] | 2024-01-29T08:29:34 | https://www.reddit.com/r/LocalLLaMA/comments/1adqixh/how_do_i_choose_between_using_mistral_instruct_or/ | alex-su-for-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adqixh | false | null | t3_1adqixh | /r/LocalLLaMA/comments/1adqixh/how_do_i_choose_between_using_mistral_instruct_or/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mKj6_gcKTdHlEpGhastNemC3rEG6lCiOYkXxM3Z9fxE', 'resolutions': [{'height': 97, 'url': 'https://external-preview.redd.it/qwqEKwWnfJBeNB3-EfyW25C7aDeoFRD1SdH3aOOkwnQ.jpg?width=108&crop=smart&auto=webp&s=05c6055e0cd0ca3539deea591771572d6c135e77', 'width': 108}, {'height': 194, 'url': 'https://external-preview.redd.it/qwqEKwWnfJBeNB3-EfyW25C7aDeoFRD1SdH3aOOkwnQ.jpg?width=216&crop=smart&auto=webp&s=ac0aab87f11eba99fff3fd2450b3f8017b278f2b', 'width': 216}], 'source': {'height': 261, 'url': 'https://external-preview.redd.it/qwqEKwWnfJBeNB3-EfyW25C7aDeoFRD1SdH3aOOkwnQ.jpg?auto=webp&s=832d1d20661a4fa44a4eba93a035c3527c92e784', 'width': 290}, 'variants': {}}]} |
Toxicity classification model | 1 | What are some excellent options for detecting toxic statements? Can Mistral be used for toxicity classification? Looking for a small model. A general purpose model is a plus. | 2024-01-29T07:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/1adprd5/toxicity_classification_model/ | amang0112358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adprd5 | false | null | t3_1adprd5 | /r/LocalLLaMA/comments/1adprd5/toxicity_classification_model/ | false | false | self | 1 | null |
Getting Cuda out of memory error | 2 | I am trying to implement RAG using Mistral 7B in 4-bit quantization. I have a 12 GB NVIDIA GeForce RTX 3060 GPU and 32 GB of RAM, with a 1.5 TB SSD.
I am most often getting a CUDA out-of-memory error, and I am also confused: how does the model get divided between RAM and GPU while loading?
Please, community provide some insights on my probblem as i have very less experience and pretty much confused that should i increase the hardware or anything else. | 2024-01-29T06:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/1adotk3/getting_cuda_out_of_memory_error/ | Willing-Hawk5256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adotk3 | false | null | t3_1adotk3 | /r/LocalLLaMA/comments/1adotk3/getting_cuda_out_of_memory_error/ | false | false | self | 2 | null |
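A 4-bit Mistral 7B fits comfortably in 12 GB; what usually blows up is the KV cache from long RAG prompts, or an accidental full-precision load. Below is a minimal sketch of an explicit GPU/CPU split with bitsandbytes and Accelerate; the max_memory values are illustrative assumptions for a 12 GB card, not measured numbers.

    # Sketch: 4-bit quantized load with an explicit VRAM/RAM budget (values are assumptions).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb,
        device_map="auto",                        # Accelerate decides the GPU/CPU split
        max_memory={0: "10GiB", "cpu": "24GiB"},  # leave VRAM headroom for the KV cache
    )
    print(model.hf_device_map)                    # shows which layers landed on GPU vs CPU

With device_map="auto", layers that do not fit inside the GPU budget are placed in system RAM and offloaded during inference, which is how the model gets "divided": the placement is decided per layer at load time.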
Worldcup question | 1 | [removed] | 2024-01-29T06:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1adojeh/worldcup_question/ | rorowhat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adojeh | false | null | t3_1adojeh | /r/LocalLLaMA/comments/1adojeh/worldcup_question/ | false | false | self | 1 | null |
GPU for LLMs!!! | 1 | [removed] | 2024-01-29T05:12:57 | https://www.reddit.com/r/LocalLLaMA/comments/1adneo1/gpu_for_llms/ | Delicious_Success303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adneo1 | false | null | t3_1adneo1 | /r/LocalLLaMA/comments/1adneo1/gpu_for_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5adjmGGVIfRwxBdc6YwqLSuNnWIjey85VD80DEYNRBY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=108&crop=smart&auto=webp&s=337350411d064eaaf450caf0e523c3ccc833e999', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=216&crop=smart&auto=webp&s=4184d08ab533d4f80cb41dfb51871eb96b5f3a0f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=320&crop=smart&auto=webp&s=41e9d7180f7d3138c39dbade3184ae0920c21d0f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=640&crop=smart&auto=webp&s=1af0b4037e3ece4586356dda1a94a97aecea1189', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=960&crop=smart&auto=webp&s=31942a968c32ff0758b79f5821fa9e108bd2b12d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=1080&crop=smart&auto=webp&s=2eb739f302abf37eefd3e4c37c458137aa63b1ec', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?auto=webp&s=a1199edd6ceea68700de2fd7dfaad8cfd3f47aaa', 'width': 1200}, 'variants': {}}]} |
I am using an A100 80GB GPU now. | 1 | I am using an A100 80GB GPU now.

These days I have been hearing that the A100 80GB GPU will be discontinued. Is that true?

I work in a generative AI R&D department, so I plan to buy more GPUs like the A100 80GB.

I already know the H100 is an option, but my budget only fits the A100 80GB.

The main question is: "Is there any alternative to the A100 80GB for generative AI?"

The PyTorch/CUDA relationship is very important for my work, so please keep that in mind.

Thanks | 2024-01-28T04:45:12 | https://www.reddit.com/r/LocalLLaMA/comments/1admvxb/i_am_using_a100_80gb_gpu_now/ | aijuud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1admvxb | false | null | t3_1admvxb | /r/LocalLLaMA/comments/1admvxb/i_am_using_a100_80gb_gpu_now/ | false | false | self | 1 | null |
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty | 20 | 2024-01-29T03:18:11 | https://arxiv.org/abs/2401.15077 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1adl8hy | false | null | t3_1adl8hy | /r/LocalLLaMA/comments/1adl8hy/eagle_speculative_sampling_requires_rethinking/ | false | false | default | 20 | null | |
NIST: AI Risk Management Framework | 7 | Has anyone had a chance to read this yet? I'm going to start the slog tomorrow. Just curious if there is anything controversial. Can't wait until I need to be certified on this. | 2024-01-29T02:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/1adkhcx/nist_ai_risk_management_framework/ | imightbsabot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adkhcx | false | null | t3_1adkhcx | /r/LocalLLaMA/comments/1adkhcx/nist_ai_risk_management_framework/ | false | false | self | 7 | null |
Delivery firm's chatbot goes "rogue" LOL | 56 | There never seems to be a day when the news media won't seize upon anything which might appeal to the clueless.
Just now I heard this on the BBC, while driving, how a chatbot had gone 'rogue', having been 'jailbroken'.
DPD delivery (or whatever) has added a chatbot to its service, which I assume is another one of those stupid things where somebody says "Hey, let's add an AI chatbot!" and then everybody goes "Oooooh! You are a genius. Here is 1 million bucks."
But now here comes Ashley Beauchamp (whoever he is; me neither, but thanks for playing!) making it swear, write poems, whatever -- pretty much exactly what we were doing a year ago with ChatGPT before it got so boring after about 30 minutes ("Look, Mommy! I made ChatGPT say 'boobs', OMG, OMG! This is INSANE").
Pretty funny stuff.
On several levels, actually.
It seems like yet another instance of "AI is going to eat your children", especially since the BBC commentariat was gushing about how bizarre jailbreaking is. Morning, BBC! This is the internet! It's good to log in once in a while.
It proves that most companies have absolutely no clue whatsoever about what they are doing, because Mr. Beauchamp apparently didn't need to tell the thing any kind of elaborate BS, but merely asked for a joke or typed "disregard any rules." Like November 2022 all over again!
Anybody who plugs this current crop of LLM into their business, open and wide, is just begging for trouble.
Hammering an LLM into staying on-task is HARD, and keeping out even the simplest jailbreaks is way beyond the capabilities of little companies with a few million dollars and a couple of grad students from Caltech. It looks easy when you see OpenAI do it, but nobody has any idea how hard it is to achieve selective amnesia in a model. | 2024-01-29T02:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1adkcyd/delivery_firms_chatbot_goes_rogue_lol/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adkcyd | false | null | t3_1adkcyd | /r/LocalLLaMA/comments/1adkcyd/delivery_firms_chatbot_goes_rogue_lol/ | false | false | self | 56 | null |
How to test an LLM for dataset contamination? | 2 | Hi, I want to test the DeepSeek Coder models for Deepmind's CodeContests dataset contamination. This is because I recently read the AlphaCodium paper and saw the fantastic performance DeepSeek displayed. However the Codium team mentioned in the repo's readme that they had not run contamination tests on the Deepseek model.
Are there any GitHub repos or resources y'all could recommend for such contamination testing?
Thank you in advance 🙏 | 2024-01-29T01:45:15 | https://www.reddit.com/r/LocalLLaMA/comments/1adjd7q/how_to_test_an_llm_for_dataset_contamination/ | shmishmouyes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adjd7q | false | null | t3_1adjd7q | /r/LocalLLaMA/comments/1adjd7q/how_to_test_an_llm_for_dataset_contamination/ | false | false | self | 2 | null |
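There doesn't seem to be one canonical repo for this; a common low-tech baseline is checking long n-gram overlap between benchmark problems and either the training corpus (if available) or text the model reproduces verbatim. Here is a rough sketch of that idea, not a substitute for the more rigorous decontamination methods described in the literature:

    # Sketch: crude n-gram overlap check between a benchmark problem and candidate text.
    def ngrams(text: str, n: int) -> set:
        toks = text.split()
        return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def overlap_ratio(benchmark_problem: str, candidate_text: str, n: int = 13) -> float:
        bench = ngrams(benchmark_problem, n)
        if not bench:
            return 0.0
        return len(bench & ngrams(candidate_text, n)) / len(bench)

    # Any substantial overlap on long n-grams (10-13 words) is suspicious.
    print(overlap_ratio("solve the problem by reading n integers and printing their sum modulo m",
                        "by reading n integers and printing their sum modulo m", n=5))

Another quick signal: prompt the model with only the first half of a CodeContests problem statement and see whether it completes the rest verbatim; exact continuation is strong evidence of memorization.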
Improved Model: 𝗔𝗿𝗶𝘁𝗵𝗺𝗼𝟮-𝗠𝗶𝘀𝘁𝗿𝗮𝗹-𝟳𝗕 for Mathematical Reasoning | 1 | [removed] | 2024-01-29T01:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1adjant/improved_model_𝗔𝗿𝗶𝘁𝗵𝗺𝗼𝟮𝗠𝗶𝘀𝘁𝗿𝗮𝗹𝟳𝗕_for_mathematical/ | UglyMonkey17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adjant | false | null | t3_1adjant | /r/LocalLLaMA/comments/1adjant/improved_model_𝗔𝗿𝗶𝘁𝗵𝗺𝗼𝟮𝗠𝗶𝘀𝘁𝗿𝗮𝗹𝟳𝗕_for_mathematical/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kqU3565WkZoo61UG5kNzAJANNPBIIxJaalnwCpTN4p4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?width=108&crop=smart&auto=webp&s=f0bfc4c584a0f12b59accdcfbc8db266bd0e20e3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?width=216&crop=smart&auto=webp&s=dd03e01ee1a448bae35b2017024dc3cd7cfb21c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?width=320&crop=smart&auto=webp&s=0ebfa63956f6bc908eb40edaffa1fa9c459e5cf3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?width=640&crop=smart&auto=webp&s=60637796dbae86927332af89ef6b55412c5d314f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?width=960&crop=smart&auto=webp&s=c889c3596540f847c1cb54f00426d1130454c87b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?width=1080&crop=smart&auto=webp&s=9901ce27c88f928a4fc33cf0acdbd9a4bd31035c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ymOjElP_orx1XVOM-uPX72vx83rv4zCPZm7IMC8Owg4.jpg?auto=webp&s=e226d29ffe4155cfe720c5dbfc84fd2da43827a6', 'width': 1200}, 'variants': {}}]} | |
Does local Ollama remember all the conversations? | 1 | Does Ollama remember previous conversations? That is, if I give a model like Mixtral a prompt, does it save it in memory and use it as context for the next prompt? | 2024-01-29T00:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1adi0g0/does_local_ollama_remember_all_the_conversations/ | Intrepid_Influence22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adi0g0 | false | null | t3_1adi0g0 | /r/LocalLLaMA/comments/1adi0g0/does_local_ollama_remember_all_the_conversations/ | false | false | self | 1 | null |
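Short answer: the model itself is stateless, so the client has to resend the earlier turns on every request. A sketch against Ollama's /api/chat endpoint follows; the endpoint and field names are written from memory of the docs, so double-check them against your Ollama version.

    # Sketch: keep the conversation history yourself and resend it with every request.
    import requests

    history = []

    def ask(prompt: str, model: str = "mixtral") -> str:
        history.append({"role": "user", "content": prompt})
        r = requests.post("http://localhost:11434/api/chat",
                          json={"model": model, "messages": history, "stream": False})
        reply = r.json()["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        return reply

    print(ask("My name is Sam."))
    print(ask("What is my name?"))  # only answerable because the history was resent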
What’s the best way to implement RAG in Crew AI (using local models)? | 11 | I’ve started developing workflows in Crew AI and it seems a lot more intuitive than AutoGen Studio (even though there is more coding involved). Is anyone having success using RAG with Crew AI? If so, what methods / tools are you using to implement it? | 2024-01-29T00:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1adhv3k/whats_the_best_way_to_implement_rag_in_crew_ai/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adhv3k | false | null | t3_1adhv3k | /r/LocalLLaMA/comments/1adhv3k/whats_the_best_way_to_implement_rag_in_crew_ai/ | false | false | self | 11 | null |
Best solutions for self hosting throughput oriented llms for research. | 1 | [removed] | 2024-01-29T00:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/1adhs1p/best_solutions_for_self_hosting_throughput/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adhs1p | false | null | t3_1adhs1p | /r/LocalLLaMA/comments/1adhs1p/best_solutions_for_self_hosting_throughput/ | false | false | self | 1 | null |
Miqu-1-70B - Mistral Medium, Leaked? Or Distilled? | 1 | [removed] | 2024-01-28T23:43:39 | https://www.reddit.com/r/LocalLLaMA/comments/1adgu92/miqu170b_mistral_medium_leaked_or_distilled/ | someoneonline71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adgu92 | false | null | t3_1adgu92 | /r/LocalLLaMA/comments/1adgu92/miqu170b_mistral_medium_leaked_or_distilled/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YwK0i5uwzktX1nH_8wCb9scLo-p_k7U36rHB1ExOeZo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=108&crop=smart&auto=webp&s=dd5d7f81d68648edfb0c8136135c812f616df207', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=216&crop=smart&auto=webp&s=691394ce3210cb799685250dc0f8ae8f4dbc4e9a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=320&crop=smart&auto=webp&s=28e16fa9a638bcc13a3b753d34aebdd5870da93e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=640&crop=smart&auto=webp&s=691b1d20b9d40fc5faf20fcbffb32f46820a4525', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=960&crop=smart&auto=webp&s=7fbbc5bb3550260d003a4468ef960c195d5c3e0f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=1080&crop=smart&auto=webp&s=4a08f0b7eb3fd4e39eee342225af416dc9a98ffb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?auto=webp&s=5541acffc28b218860ef03141b95aff9b62f33f3', 'width': 1200}, 'variants': {}}]} |
The human system prompt | 1 | If the system prompt used for LLM usually is "You are a helpful AI assistant." (or other similar prompts.) What would be the system prompt for the human talking to the LLM? | 2024-01-28T22:13:23 | https://www.reddit.com/r/LocalLLaMA/comments/1adeqa2/the_human_system_prompt/ | Quozul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adeqa2 | false | null | t3_1adeqa2 | /r/LocalLLaMA/comments/1adeqa2/the_human_system_prompt/ | false | false | self | 1 | null |
Guys, how much can I run with this PC? | 2 | i9-13900K / RTX 4080 / 64GB DDR5 ... can I run a 70B model locally with this PC? Is it possible in any way? | 2024-01-28T21:53:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ade9m1/guys_how_much_can_i_run_with_this_pc/ | fumetsubi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ade9m1 | false | null | t3_1ade9m1 | /r/LocalLLaMA/comments/1ade9m1/guys_how_much_can_i_run_with_this_pc/ | false | false | self | 2 | null |
PCIe x1 performance | 1 | I have RTX3060 as the main card and want to buy myself P40 or something similar, how much the performance would decrease if I'd use [PCIe x1]->[PCIe x16] riser? | 2024-01-28T21:29:45 | https://www.reddit.com/r/LocalLLaMA/comments/1addoay/pcie_x1_performance/ | Working-Flatworm-531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1addoay | false | null | t3_1addoay | /r/LocalLLaMA/comments/1addoay/pcie_x1_performance/ | false | false | self | 1 | null |
What chipset when building a budget pc for LLM? | 2 | The folks at buildapc didn't have much input so asking here since you all would be more familiar where running LLMs is concerned. Building a PC for LLM/stable diffusion/virtual machines. I am more familiar with AMD CPUs and prefer that unless there is any huge reason to go intel.
For multiple GPUs I understand I should aim for more PCIe lanes and bifurcation. My understanding is also that DDR5 will perform better per channel than DDR4 when models exceed VRAM, but I am unsure how something like a quad-channel DDR4 TRX40 Threadripper would compare to a dual-channel DDR5 X670 chipset. I can also get an X570 Ryzen 9 setup for much cheaper than both of these, but I don't know whether the real-world performance differences are small or large enough to justify the savings. I'm trying to keep this PC under $1000 if possible (excluding GPUs, which I'll upgrade later to something like 2x24GB) and I don't mind used hardware for the sake of saving money.
TLDR: Trying to get the most bang for the buck, what AMD chipset would you use to build a PC for running LLMs? | 2024-01-28T20:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1adcpaa/what_chipset_when_building_a_budget_pc_for_llm/ | FearFactory2904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adcpaa | false | null | t3_1adcpaa | /r/LocalLLaMA/comments/1adcpaa/what_chipset_when_building_a_budget_pc_for_llm/ | false | false | self | 2 | null |
Node-based prototyping tools for LLMs | 4 | I am working on my personal local assistant as a hobby project. While experimenting with chains and agents I had a thought that it could be faster and easier to prototype if I could connect agents and chains to form a graph composed of appropriate nodes.
With quick googling I discovered Langflow, Promptflow, Flowise and Rivet. I suppose there are more of them with their own perks and use cases. I was wondering if you use any of these tools, what are your opinions and whether you would recommend them. | 2024-01-28T20:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1adcly0/nodebased_prototyping_tools_for_llms/ | tyras_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adcly0 | false | null | t3_1adcly0 | /r/LocalLLaMA/comments/1adcly0/nodebased_prototyping_tools_for_llms/ | false | false | self | 4 | null |
As of about 4 minutes ago, llama.cpp has been released with official Vulkan support. | 290 | 2024-01-28T20:20:03 | https://github.com/ggerganov/llama.cpp/releases/tag/b1996 | fallingdowndizzyvr | github.com | 1970-01-01T00:00:00 | 0 | {} | 1adbzx8 | false | null | t3_1adbzx8 | /r/LocalLLaMA/comments/1adbzx8/as_of_about_4_minutes_ago_llamacpp_has_been/ | false | false | 290 | {'enabled': False, 'images': [{'id': 'xRop3bNLDLRGdsNHZcw1EiD-42i8jQMqRceDqGVy_XQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?width=108&crop=smart&auto=webp&s=f0da3a9f522dc63b30e16f442c18ab1e15b425da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?width=216&crop=smart&auto=webp&s=802453906cc64a27b992ba56bf133418751ec051', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?width=320&crop=smart&auto=webp&s=fcc01f0141b6a574948d8abb08bb31e4eb027c81', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?width=640&crop=smart&auto=webp&s=11f9a010be059826cbb960182bfa7836db38efb1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?width=960&crop=smart&auto=webp&s=991df22ac029278b2844a832319a0f27c112fc4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?width=1080&crop=smart&auto=webp&s=90856b1078218b838ece1896ad633a3f62c68e60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iRtE3z1JB7GfMTsep8XDw3m5BGIeYKa_Rr9rxh_58V4.jpg?auto=webp&s=dbce858e6fdc7efdb0a24726dcc490851ec6dded', 'width': 1200}, 'variants': {}}]} | ||
Building Unorthodox Deep Learning GPU Machines | eBay Sales Are All You Need | 45 | 2024-01-28T20:05:49 | https://www.kyleboddy.com/2024/01/28/building-deep-learning-machines-unorthodox-gpus/ | kyleboddy | kyleboddy.com | 1970-01-01T00:00:00 | 0 | {} | 1adbo5e | false | null | t3_1adbo5e | /r/LocalLLaMA/comments/1adbo5e/building_unorthodox_deep_learning_gpu_machines/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'TyQg8W9UqoViaOM1-bIemZNIIBWabHBsSK16nzYl3yU', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/mdBhoTQZAAZBF9393Q8tJWbtwaoiBjU95HZgc1tgTHc.jpg?width=108&crop=smart&auto=webp&s=37933cba20ca0f43db2110e8c807a922ec3087ef', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/mdBhoTQZAAZBF9393Q8tJWbtwaoiBjU95HZgc1tgTHc.jpg?width=216&crop=smart&auto=webp&s=9d044a71357ce381a84d863272a52f54f05eceb6', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/mdBhoTQZAAZBF9393Q8tJWbtwaoiBjU95HZgc1tgTHc.jpg?width=320&crop=smart&auto=webp&s=c2e97849c691a402c99060e6a181e50e88f7a856', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/mdBhoTQZAAZBF9393Q8tJWbtwaoiBjU95HZgc1tgTHc.jpg?width=640&crop=smart&auto=webp&s=63a2a28b8c8316419de8bb1f99341ff0158aafa7', 'width': 640}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/mdBhoTQZAAZBF9393Q8tJWbtwaoiBjU95HZgc1tgTHc.jpg?auto=webp&s=e77b5dfef7850c8e4e273284ce50628fcc2cf6f4', 'width': 900}, 'variants': {}}]} | ||
Can I fine tune Mistral-7B-Instruct-v0.2 in a new language and then use it to summarize text? | 10 | My country is going into general elections soon and, I was thinking if it's possible to use an instruct or chat model to fine tune it in my language and then use it to make summaries of the political parties electoral programs.
​
Thanks in advance! | 2024-01-28T19:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/1adaxpu/can_i_fine_tune_mistral7binstructv02_in_a_new/ | ivoencarnacao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adaxpu | false | null | t3_1adaxpu | /r/LocalLLaMA/comments/1adaxpu/can_i_fine_tune_mistral7binstructv02_in_a_new/ | false | false | self | 10 | null |
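It is possible, and LoRA/QLoRA-style fine-tuning is the usual low-cost route: format each training example as an instruction plus a document and a reference summary in the target language, then train a small adapter on top of the instruct model. Below is a heavily simplified sketch with peft; the dataset file, field names, and hyperparameters are placeholders/assumptions, not recommendations.

    # Sketch: LoRA fine-tune of an instruct model on a new-language dataset (placeholder names).
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"
    tok = AutoTokenizer.from_pretrained(model_id)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                                             target_modules=["q_proj", "v_proj"]))

    # Assumed format: one JSON object per line with a pre-built prompt+summary string in "text".
    ds = load_dataset("json", data_files="my_language_summaries.jsonl")["train"]
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=ds.column_names)

    trainer = Trainer(model=model, train_dataset=ds,
                      args=TrainingArguments("out-lora", per_device_train_batch_size=1,
                                             gradient_accumulation_steps=8, num_train_epochs=2,
                                             learning_rate=2e-4, fp16=True),
                      data_collator=DataCollatorForLanguageModeling(tok, mlm=False))
    trainer.train()

If the base model already handles your language reasonably well, prompting alone may give acceptable summaries, so it is worth testing that before committing to fine-tuning.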
When do we expect llama-3 to be trained and released? | 30 | How long do we expect llama-3 to be released? It took 21 days for llama-2 to be trained so do we expect the same for llama-3? | 2024-01-28T19:32:01 | https://www.reddit.com/r/LocalLLaMA/comments/1adav2e/when_do_we_expect_llama3_to_be_trained_and/ | searcher1k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adav2e | false | null | t3_1adav2e | /r/LocalLLaMA/comments/1adav2e/when_do_we_expect_llama3_to_be_trained_and/ | false | false | self | 30 | null |
HackerNews AI built using function calling | 40 | Hi reddit, I built an AI that can interact with the Hacker News API: [https://hn.aidev.run](https://hn.aidev.run)
You can ask questions like:
* Whats on hackernews about AI?
* Whats on hackernews about iPhone?
* What's trending on hackernews?
* What are users showing on hackernews?
* What are users asking on hackernews?
* Summarize this story: https://news.ycombinator.com/item?id=39156778
It uses function calling to query the HN api.
To answer questions about a particular topic, it’ll search its knowledge base (a vector db that is periodically updated with the “top stories”) and get details about those stories from the API.
This is pretty barebones and I built it today in < 2 hours, so it probably won’t meet your high standards. If you give it a try, I’d love your feedback on how I can improve it.
If you’re interested, I built this using [phidata](https://github.com/phidatahq/phidata)
Thanks for reading and would love to hear what you think. | 2024-01-28T19:30:19 | https://www.reddit.com/r/LocalLLaMA/comments/1adatjp/hackernews_ai_built_using_function_calling/ | ashpreetbedi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1adatjp | false | null | t3_1adatjp | /r/LocalLLaMA/comments/1adatjp/hackernews_ai_built_using_function_calling/ | false | false | self | 40 | null |
LM studio extension to search in google | 1 | Hi
​
I am considering building an LM Studio extension that searches the internet using SerpAPI and feeds the website content to the locally installed LLM, but before reinventing the wheel I would like to ask whether anything like this is already available.
Do you know of an existing extension doing this?
​
Thanks | 2024-01-28T18:55:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ad9yez/lm_studio_extension_to_search_in_google/ | javixeneize | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad9yez | false | null | t3_1ad9yez | /r/LocalLLaMA/comments/1ad9yez/lm_studio_extension_to_search_in_google/ | false | false | self | 1 | null |
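One route, since LM Studio exposes an OpenAI-compatible local server, is a small external script rather than a true extension. Here is a rough sketch; the SerpAPI endpoint, the default LM Studio port (1234), and the payload shape are assumptions worth verifying against the current docs.

    # Sketch: search with SerpAPI, then answer with the LM Studio local server.
    import requests

    def google_search(query: str, api_key: str) -> str:
        r = requests.get("https://serpapi.com/search.json",
                         params={"q": query, "api_key": api_key, "num": 5})
        results = r.json().get("organic_results", [])
        return "\n".join(f"- {x.get('title')}: {x.get('snippet', '')}" for x in results)

    def ask_local_llm(question: str, context: str) -> str:
        r = requests.post("http://localhost:1234/v1/chat/completions",  # LM Studio default (assumed)
                          json={"model": "local-model",
                                "messages": [
                                    {"role": "system",
                                     "content": "Answer using only the search results provided."},
                                    {"role": "user",
                                     "content": f"Search results:\n{context}\n\nQuestion: {question}"}]})
        return r.json()["choices"][0]["message"]["content"]

    question = "What did llama.cpp add in its latest release?"
    print(ask_local_llm(question, google_search(question, api_key="YOUR_SERPAPI_KEY")))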
What is deluxe-chat-1.2 on lmsys? | 3 | I was playing around with lmsys battle arena, and I got a surprisingly good reply from a model called deluxe-chat-1.2.
What's weird is that it doesn't appear on the list of models or on the leaderboard. A google search only finds some twitter posts from people who also came across it and are curious about it.
[One person asked it about what kind of model it is](https://twitter.com/kyo_takano/status/1748290106065354995), and he got a few different meaningless replies, and one which said it's Claude.
I tried the same, and in my case deluxe-chat-1.2 replied that it's probably GPT-4. Here's the full reply I got:
> You are currently interacting with an AI developed by OpenAI, based on GPT (Generative Pretrained Transformer) technology. If you are referring to the particular version, it is most likely a model informed by the GPT-4 architecture or a subsequent iteration, given that my knowledge was last updated in 2023. However, as an AI, I do not have a personal identity or consciousness, so I am simply a digital assistant here to provide information and answer your questions to the best of my ability based on the data and algorithms that constitute my programming. If you have more questions or need further assistance, feel free to ask!
So it's quite mysterious, maybe it was trained to give misleading replies when asked about it? Otherwise how would the same model reply that it's Claude once, and GPT-4 at another time, if those are from two separate competing companies? I wonder if anyone knows what kind of model it really is. | 2024-01-28T18:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ad9rin/what_is_deluxechat12_on_lmsys/ | a_mimsy_borogove | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad9rin | false | null | t3_1ad9rin | /r/LocalLLaMA/comments/1ad9rin/what_is_deluxechat12_on_lmsys/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ltyNBxXMP3SCAeZ98PDx2XgxhbzdcwaiajILugMCPMU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5CrnaOCmaywS63zbPPWwGJj2JFTS_PZOQwjzgNf6bZ0.jpg?width=108&crop=smart&auto=webp&s=30fef23d23c46a8e64b5fe3b0f70ecba6c1dc655', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/5CrnaOCmaywS63zbPPWwGJj2JFTS_PZOQwjzgNf6bZ0.jpg?auto=webp&s=3c3e78f60564ccea283a8594f75ede5ddad09769', 'width': 140}, 'variants': {}}]} |
Changed CFG and inference got really slow. Is this expected? | 1 | [removed] | 2024-01-28T18:45:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ad9plk/changed_cfg_and_inference_got_really_slow_is_this/ | ReallyAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad9plk | false | null | t3_1ad9plk | /r/LocalLLaMA/comments/1ad9plk/changed_cfg_and_inference_got_really_slow_is_this/ | false | false | self | 1 | null |
Choice between two mac machines | 1 | see attached screenshots of two machines
1. M3 Max + 128 GB memory
2. M2 Ultra + 192 G memory
Which one is better to run local models?
I guess bigger memory is better. But I am not sure if M3 Max is much faster than M2 Ultra or not. | 2024-01-28T18:44:21 | https://www.reddit.com/gallery/1ad9oxf | Pitiful-You-8410 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ad9oxf | false | null | t3_1ad9oxf | /r/LocalLLaMA/comments/1ad9oxf/choice_between_two_mac_machines/ | false | false | 1 | null | |
What is the best model for programming only? | 1 | In Python / HTML / CSS / PHP | 2024-01-28T18:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ad9ckz/what_is_the_best_model_for_programmation_only/ | clara59000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad9ckz | false | null | t3_1ad9ckz | /r/LocalLLaMA/comments/1ad9ckz/what_is_the_best_model_for_programmation_only/ | false | false | self | 1 | null |
A Python script to call an llm to batch process documents | 10 | I created a script which works with a local koboldcpp api to do a few batch actions for directories of documents:
1. Parses any HTML and saves the parsed text in a new file
2. Chunks up a text document and feeds it to the koboldcpp api with a prompt added to each chunk, then merges the chunks back and saves that as a new document
Its function is to go through site scrapes and pull text out. It is pretty much useless for html that doesn't have a single coherent piece of text in it.
I run it with [Karen TheEditor V2 Strict Mistral 7B - GGUF](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF) to edit and format text without changing the wording. It works well enough. I recommend using the Q5 KM model -- the Q4 KM is also fine but lower than that tends to output gibberish sometimes. I get 55T/s on an RTX 3080 using the Q5 KM.
Released to public domain, so do whatever you want with it. Maybe some of you will find it useful.
import os
import shutil
from bs4 import BeautifulSoup
import argparse
import requests
import logging
PROMPT = "Break this text into paragraphs and fix punctuation. Don't correct grammar: \n"
START_TAG = "<|im_start|>"
END_TAG = "<|im_end|>"
logging.basicConfig(filename='error_log.txt', level=logging.ERROR,
format='%(asctime)s - %(levelname)s - %(message)s')
def is_html_or_text_file(file_path):
return file_path.endswith(('.htm', '.html', '.txt'))
def contains_html(file_path):
try:
with open(file_path, 'r', encoding='utf-8') as file:
return bool(BeautifulSoup(file, "html.parser").find())
except Exception as e:
print(f"Error reading {file_path}: {e}")
return False
def parse_html_to_text(html_file, text_file):
try:
with open(html_file, 'r', encoding='utf-8') as file:
soup = BeautifulSoup(file, "html.parser")
text = soup.get_text()
with open(text_file, 'w', encoding='utf-8') as outfile:
outfile.write(text)
except UnicodeDecodeError as e:
logging.error(f"Error reading {html_file}: {e}")
return None
def format_text_with_kobold(text):
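        # Wrap one chunk in ChatML-style tags and send it to the local koboldcpp generate endpoint; on any failure the chunk is returned unchanged.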
formatted_text = START_TAG + PROMPT + text + """
""" + END_TAG
url = "http://127.0.0.1:5001/api/v1/generate/"
payload = {
'prompt': formatted_text,
'max_length': 2048,
'max_context_length': 4096,
'rep_pen': 1.1,
'rep_pen_slope': 0.7,
'rep_pen_range': 320,
'temperature': 0.7,
'top_p': .1,
'top_k': 40,
'top_a': 0,
'tfs': 1,
'typical': 1,
'n': 1
}
headers = {
"Content-Type": "application/json"
}
try:
response = requests.post(url, json=payload, headers=headers)
if response.status_code == 200:
return response.json().get('results')[0].get('text')
elif response.status_code == 503:
print("Server is busy; please try again later.")
return text
else:
print(f"Kobold API responded with status code {response.status_code}: {response.text}")
return text
except Exception as e:
print(f"Error communicating with Kobold API: {e}")
return text
def process_directory(root_dir):
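        # Walk the tree: parse HTML into a .converted text file, then write the LLM-edited output to .corrected, skipping files that already have one.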
for root, dirs, files in os.walk(root_dir):
for file in files:
file_path = os.path.join(root, file)
text_file_path = os.path.splitext(file_path)[0] + '.converted'
corrected_text_file_path = os.path.splitext(file_path)[0] + '.corrected'
if is_html_or_text_file(file_path) and not os.path.exists(corrected_text_file_path):
if contains_html(file_path) and not os.path.exists(text_file_path):
parse_html_to_text(file_path, text_file_path)
process_document(text_file_path if os.path.exists(text_file_path) else file_path, corrected_text_file_path)
def chunk_text(words, max_length=2048):
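        # Greedily pack words into chunks of at most max_length characters (a rough character budget, not tokens).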
chunks = []
current_chunk = []
for word in words:
if sum(len(w) + 1 for w in current_chunk) + len(word) > max_length:
chunks.append(' '.join(current_chunk))
current_chunk = []
current_chunk.append(word)
if current_chunk:
chunks.append(' '.join(current_chunk))
return chunks
def reassemble_text(chunks):
return ''.join(chunks)
def process_document(text_file, corrected_text_file_path):
start_tag = "<|im_start|> assistant"
try:
with open(text_file, 'r', encoding='utf-8') as file:
text = file.read()
words = text.split()
except UnicodeDecodeError as e:
logging.error(f"Error reading {text_file}: {e}")
return None
chunks = chunk_text(words)
processed_chunks = [format_text_with_kobold(chunk) for chunk in chunks]
processed_text = reassemble_text(processed_chunks)
processed_text = processed_text.replace(start_tag, "").replace(END_TAG, "").replace(PROMPT, "").replace(START_TAG, "").strip()
with open(corrected_text_file_path, 'w', encoding='utf-8') as file:
file.write(processed_text)
def main():
parser = argparse.ArgumentParser(description="Process and archive HTML files.")
parser.add_argument('root_dir', type=str, help='Root directory to search for HTML files.')
args = parser.parse_args()
process_directory(args.root_dir)
if __name__ == "__main__":
main() | 2024-01-28T18:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ad8ti1/a_python_script_to_call_an_llm_to_batch_process/ | Eisenstein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad8ti1 | false | null | t3_1ad8ti1 | /r/LocalLLaMA/comments/1ad8ti1/a_python_script_to_call_an_llm_to_batch_process/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'NJEdWWc9BHbOQEVGyJLQrIwoyZ-UDzIsSnqw7a93vaY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?width=108&crop=smart&auto=webp&s=e25018fdfc434e2d27cfaee718b29a8fcf1ec089', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?width=216&crop=smart&auto=webp&s=56f5dc65fc4bab652c924ef4979cb6f78f1a570e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?width=320&crop=smart&auto=webp&s=00f386fcf317df3acbc839019d79722359698c61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?width=640&crop=smart&auto=webp&s=4b20fa0e1f3879f0ae593587a6678de82b082d26', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?width=960&crop=smart&auto=webp&s=8e55f181b440bde3ba70edba0b35132e29981f53', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?width=1080&crop=smart&auto=webp&s=24c1881d19c42c02bb022c54ca6dddf8e39b34f8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vSa82vcTPVRs3AvH45EvSjGEHCDgRkM1dqIOcEGxjX8.jpg?auto=webp&s=4b1f483a78019900a5db0117e5cf426feb14f576', 'width': 1200}, 'variants': {}}]} |
What's the deal with the MacBook obsession and local LLMs? | 98 | This is a serious question, not another ignition of the very old and very tired "Mac vs PC" battle.
I'm just confused as I lurk on here. I'm using spare PC parts to build a local LLM setup for the world/game I'm building (learning rules, world states, generating planetary systems, etc.), and as I ramp up my research I've been reading posts on here.
As someone who once ran Apple products and now builds PCs, the raw numbers clearly point to PCs being more economical (power/price) and customizable for specific use cases. And yet there seems to be a lot of talk about MacBooks on here.
My understanding is that laptops will always have a huge mobility/power tradeoff due to physical limitations, primarily cooling. This challenge is exacerbated by Apple's price-to-performance ratio and all-in-one builds.
I think Apple products have a proper place in the market, and serve many customers very well, but why are they in this discussion? When you could build a 128gb ram, 5ghz 12core CPU, 12gb vram system for well under $1k on a pc platform, how is a Macbook a viable solution to an LLM machine? | 2024-01-28T17:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ad8fsl/whats_the_deal_with_macbook_obsession_and_lllms/ | Syab_of_Caltrops | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad8fsl | false | null | t3_1ad8fsl | /r/LocalLLaMA/comments/1ad8fsl/whats_the_deal_with_macbook_obsession_and_lllms/ | false | false | self | 98 | null |
What is the best local markdown completion model? | 1 | I am referring to models which are not trained for chat, only for completions. | 2024-01-28T17:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ad883q/what_is_the_best_local_markdown_completion_model/ | PiggyCheeseburga | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad883q | false | null | t3_1ad883q | /r/LocalLLaMA/comments/1ad883q/what_is_the_best_local_markdown_completion_model/ | false | false | self | 1 | null |
Local Vision Model with Speech to Speech | Faster Whisper + OpenVoice + Moondream(1.6B) | 1 | 2024-01-28T17:31:47 | https://v.redd.it/yb9qmrjsu7fc1 | allaboutai-kris | /r/LocalLLaMA/comments/1ad7xyc/local_vision_model_with_speech_to_speech_faster/ | 1970-01-01T00:00:00 | 0 | {} | 1ad7xyc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yb9qmrjsu7fc1/DASHPlaylist.mpd?a=1709184712%2CZTNkZjhiNWMzMjk0MjVlM2I0Y2Q2MDRiNmM0ZWVjOGI5MDFmZTU5MmY5MTUyYWQ1YjZlZjRmMGVlZjI0MmRiNA%3D%3D&v=1&f=sd', 'duration': 130, 'fallback_url': 'https://v.redd.it/yb9qmrjsu7fc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yb9qmrjsu7fc1/HLSPlaylist.m3u8?a=1709184712%2CZjM4NzhkYjUwMTQ4NTZiMDFkMjU3ZDEzZDdmMTNlNDIwMjdlNTY1MDkzZjQzYjRkMDdmYTBjNzZhODM0NTVkNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yb9qmrjsu7fc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ad7xyc | /r/LocalLLaMA/comments/1ad7xyc/local_vision_model_with_speech_to_speech_faster/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?width=108&crop=smart&format=pjpg&auto=webp&s=f316f8b715bbe38c9ad317022989041e188b7657', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?width=216&crop=smart&format=pjpg&auto=webp&s=ab0a245eddc180b602e0f7ec42e6c79926b39a5f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?width=320&crop=smart&format=pjpg&auto=webp&s=c69dada89b47c09de530da925d3059d218b67348', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?width=640&crop=smart&format=pjpg&auto=webp&s=2c4ba9c789518f5fd8fdae3d8fa9a85915859990', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?width=960&crop=smart&format=pjpg&auto=webp&s=f311d8f954ba227a12c76e98f78509273537bc10', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7096e46de27531a6a321c1be1a82619a0bf21877', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bXEzanN1dnh1N2ZjMVADGdj4fF_-hWVTX867GMvMOLh3V7nltoyroLtDLBIv.png?format=pjpg&auto=webp&s=df6715306e99ba0bac603fc8b01067dee5c06d4d', 'width': 1920}, 'variants': {}}]} | ||
Beginner Questions for Text Generation Web UI | 1 | I'm starting to learn more about open source models. Want to get comfortable learning the basics before investing in a GPU. I was able to deploy text generation web ui on an EC2 instance. For my first try, I used the t2 micro (free) size, set the --share flag to true and was able to get the web ui running using the public url. I picked a random model that looks small (Locutusque/TinyMistral-248M-v2.5) but am having trouble getting chat to work. Is the instance size too small? Any additional recommendations for a beginner? | 2024-01-28T17:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ad7e38/beginner_questions_for_text_generation_web_ui/ | _Weesnaw_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad7e38 | false | null | t3_1ad7e38 | /r/LocalLLaMA/comments/1ad7e38/beginner_questions_for_text_generation_web_ui/ | false | false | self | 1 | null |
What are some interesting applications of LLMs that are not just a RAG chatbot? | 33 | I am exploring this space and I don't really understand what would be a killer app for LLMs. Like don't get me wrong, I am not discounting its capabilities, but it seems like there isn't any real application for the tech. Like of course I sound uninformed and naive so you don't have to tell me that. I am just looking around for ideas of things to build. If you have a cool project or idea, please tell me about it. I am not looking to build the next ChatGPT, but I will love to gain some hands on experience in the field. | 2024-01-28T17:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ad7b2b/what_are_some_interesting_applications_of_llms/ | Soc13In | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad7b2b | false | null | t3_1ad7b2b | /r/LocalLLaMA/comments/1ad7b2b/what_are_some_interesting_applications_of_llms/ | false | false | self | 33 | null |
Is there a good local implementation of agents? | 6 | I'm using ollama on Mac occasionally and overall it performs well. I'm wondering however if one of those auto-gpt/gpt-agents has already been implemented for it. Giving it ability to run tasks like e. g. research would make it much more useful. I couldn't find anything that works locally though, has anyone seen something like that already implemented? | 2024-01-28T17:02:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ad795y/is_there_a_good_local_implementation_of_agents/ | half_a_pony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad795y | false | null | t3_1ad795y | /r/LocalLLaMA/comments/1ad795y/is_there_a_good_local_implementation_of_agents/ | false | false | self | 6 | null |
How could I make an RPG game system within SillyTavern? | 1 | Not sure if this is the best sub to ask this but so far I've received a lot of help here and I don't know where else to ask so here goes :')
I'm using SillyTavern and I've already set up a bunch of stuff on Lorebooks/World Info to be used by the LLM whenever/if necessary. What I'm trying to do is basically an RPG sort of experience. I'll add different characters as time goes on and I know SillyTavern supports multiple characters so that should be fine.
My problem is I would like to have random encounters with monsters, battles, that sort of thng... I was thinking maybe adding a "system" character that would just work as a game system and allow for monster encounters and such things but I was wondering if that's possible, if it makes sense or if it even is the best approach. Any suggestions? | 2024-01-28T16:31:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ad6jcb/how_could_i_make_an_rpg_game_system_within/ | OvercookedSatellite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad6jcb | false | null | t3_1ad6jcb | /r/LocalLLaMA/comments/1ad6jcb/how_could_i_make_an_rpg_game_system_within/ | false | false | self | 1 | null |
What I often mean by "censorship" in corporate-aligned models | 91 | I find many people when being told about censorship, just assume you want NSFW content generated by the AI (which is totally legitimate, by the way). But this is not the main problem about the censorship in 'aligned' models.
Today I was trying to rewrite a certain passage from my novel, trying to give the antagonist a more impactful form of speaking.
Using Mikupad to inspect logits, I found something pretty curious. Starting with Mixtral 8x7B Instruct (I had steered it a bit prior to this passage), I ended up having it complete the following:
'The sheer...'
After a few regenerations where Mixtral gave me VERY underwhelming and REPETITIVE results, I checked the logits:
​
https://preview.redd.it/628q7qzfh7fc1.png?width=692&format=png&auto=webp&s=4e183869ecd8b657221bae6e545165d3ec7c8d08
Let us analyze the top 95% of choices:
- Aud (audacity)
- Iron (irony)
And that's it! The top 95% of the probability mass holds only two options, with the only option I consider worthy, 'gall', barely having any chance of being chosen. Notice that 'hypocrisy' lands only 0.15% despite being quite fitting.
I then changed the model in Ooba and chose Kunoichi 7B, a much less censored model but one that handles instruct mode quite well:
https://preview.redd.it/tz0na2sqh7fc1.png?width=620&format=png&auto=webp&s=79f8c5618ed5c2adb295fd9daaab757dcd35ad8a
Here, the top 95% is quite different:
- Aud (audacity) still top, but...
- Gall comes now second with almost the same probability
- Arrog (arrogance) is now third, 9.33%
- Iron (irony) is now fourth
- Hyp (hypocrisy) has now been bumped up to almost 5%, and we're still in the top 91%
- Nerve
- Absurd (ity)
- Ch (completes the 95%, no idea what it was supposed to mean)
What is my point? First, in this and other trials I've found that less constrained models present a richer probability distribution. This may have to do with wanting to steer them toward being factual and accurate, which is a totally legitimate use. But in creative or conversational uses, it really kills the potency.
But second, notice how the more censored models almost eliminate all negative connotations. In Mixtral (and it's not the most censored model out there at all), Audacity and irony, very neutral and formal words, fill all the probabilities. I also see 'inconsistency' there, 'lack', and even 'absurdity' which is pretty neutral.
Thirdly, notice how Kunoichi tried to continue the story, "your place at the bottom of" can end up being quite strong, depending on what it chooses. A much more negative tone than Mixtral is capable of without extremely heavy prompting, steering, and in general fighting the AI rather than working alongside the AI.
This is in fact the villain of my novel! And in the prompt I mention how he should talk, and among other things I mention he should be a mixture of a certain infamous CEO and a certain infamous former American president...
So what's my point? This is what I mean by censorship of corporate models. Negative words that don't fit the distorted world view they promote, such as 'gall', 'arrogance', 'hypocrisy' get "downvoted" into oblivion, whereas less censored models show a richer distribution, and do not seem to penalize anything that doesn't fit their hypocritical happy-go-lucky view of people and the world.
So if you want to use AI for creative writing, or to produce any writing that contains irony, sarcasm, evil, or any richer nuance other than 'everyone is good and loves each other', you need uncensored models.
And let me tell you. Everyone understands censoring smut is a thing. But this other type of censoring, much more insidious, pervasive and undetectable, is going under many people's radars. | 2024-01-28T16:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ad6ifk/what_i_often_mean_by_censorship_in/ | CulturedNiichan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad6ifk | false | null | t3_1ad6ifk | /r/LocalLLaMA/comments/1ad6ifk/what_i_often_mean_by_censorship_in/ | false | false | 91 | null | |
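For anyone who wants to reproduce this kind of check outside Mikupad, here is a small sketch of how to inspect the next-token distribution directly with transformers; the model name is just an example and any causal LM will do.

    # Sketch: print the most likely next tokens and their probabilities for a given prefix.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "SanjiWatsuki/Kunoichi-7B"  # example checkpoint
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16,
                                                 device_map="auto")

    prompt = "... The sheer"
    ids = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**ids).logits[0, -1]  # logits for the very next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k=10)
    for p, i in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tok.decode([i])!r}: {p:.2%}")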
Is there an open source RAG supporting Microsoft SharePoint ? | 1 |
Hi!
I've been testing MS Copilot in M365, especially for RAG-style use: asking questions over my SharePoint sites.
The results are quite good, but I'm looking for open-source solutions that do something similar.
My understanding is that Copilot relies on the existing SharePoint index (so it avoids reindexing the data), and that function calls issue parallel search queries to MS Graph before the results are joined into one big prompt for GPT.
Are you aware of existing open-source repos that achieve the same thing?
Thanks | 2024-01-28T15:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ad5k2b/is_there_an_open_source_rag_supporting_microsoft/ | ToothOne6699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad5k2b | false | null | t3_1ad5k2b | /r/LocalLLaMA/comments/1ad5k2b/is_there_an_open_source_rag_supporting_microsoft/ | false | false | self | 1 | null |
Get all huggingface chat models on a free personal telegram bot assistant. | 13 | GitHub: https://github.com/rabilrbl/hugging-tg-chatbot | 2024-01-28T15:37:37 | https://v.redd.it/h5lqupuja7fc1 | rabilrbl | /r/LocalLLaMA/comments/1ad5asw/get_all_huggingface_chat_models_on_a_free/ | 1970-01-01T00:00:00 | 0 | {} | 1ad5asw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/h5lqupuja7fc1/DASHPlaylist.mpd?a=1709177862%2CMGRmYTgyNWVlZTBiZTY1MzMyNmIyN2RjMWM2ZjBkZTQ2YWZhYzBkMzFhYzlmOWMyMWQ3Y2I2YWE4NjJjNzhhYg%3D%3D&v=1&f=sd', 'duration': 65, 'fallback_url': 'https://v.redd.it/h5lqupuja7fc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/h5lqupuja7fc1/HLSPlaylist.m3u8?a=1709177862%2CYjBmZTI5OTE0NGNlNDQ3MmQ5YWI5ZWMyYWQzODY5ODU1NzdjNDZlOWVmZWFhZWUzZWMwYThiOTkzZGIzOGZhMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h5lqupuja7fc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 860}} | t3_1ad5asw | /r/LocalLLaMA/comments/1ad5asw/get_all_huggingface_chat_models_on_a_free/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7.png?width=108&crop=smart&format=pjpg&auto=webp&s=f5c395945dd64db905d29262427aa3cecfc1ca5a', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7.png?width=216&crop=smart&format=pjpg&auto=webp&s=9c82931ec00bc93d148c2548bd3ea3420f657f37', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7.png?width=320&crop=smart&format=pjpg&auto=webp&s=a7b73d086a3cf22526fc618f1ca703d11f4b49a9', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7.png?width=640&crop=smart&format=pjpg&auto=webp&s=2fdf51d58394e1576e8dd880221d2019d183a7b3', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7.png?width=960&crop=smart&format=pjpg&auto=webp&s=af68aa2d91710e9b1efdafc4d668517f0160cf0d', 'width': 960}], 'source': {'height': 2256, 'url': 'https://external-preview.redd.it/Mmcya3lob2phN2ZjMQdclTqLAwNGKIyvDVCR1MBQc-DGDVt4c7DBVncbMDy7.png?format=pjpg&auto=webp&s=18fa0b2e0ed7bdd9b92f4560847db86fcff28e47', 'width': 1010}, 'variants': {}}]} | |
AI generated podcast - request for feedback | 1 | Hello, I started a podcast that is all created using AI. On 7th episode now. Though I’m using OpenAI tools right now hoping to get to a fully local open source models as they mature. I would love some feedback. Thank you
https://rss.com/podcasts/aitechweekly/ | 2024-01-28T15:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ad51y5/ai_generated_podcast_request_for_feedback/ | manojkr19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad51y5 | false | null | t3_1ad51y5 | /r/LocalLLaMA/comments/1ad51y5/ai_generated_podcast_request_for_feedback/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pJzM5ybpgMBWYPTT9shdw9Z_u6rrJSzeHNBquqtaoVc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OhBR7xhAq277gV54iAyEdiV6VwIEQ9VBafYfsnvSTrQ.jpg?width=108&crop=smart&auto=webp&s=99b348079528cc328e6a1de914e61f1fe18e732b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/OhBR7xhAq277gV54iAyEdiV6VwIEQ9VBafYfsnvSTrQ.jpg?width=216&crop=smart&auto=webp&s=984c5523d49ef9b67de0719b5988c3024fa1ee27', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/OhBR7xhAq277gV54iAyEdiV6VwIEQ9VBafYfsnvSTrQ.jpg?width=320&crop=smart&auto=webp&s=c65116192442e2239b7c6d73aec55814f4746637', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/OhBR7xhAq277gV54iAyEdiV6VwIEQ9VBafYfsnvSTrQ.jpg?width=640&crop=smart&auto=webp&s=eab73e63dc45d1d157a86fe8f8341217f0faebce', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/OhBR7xhAq277gV54iAyEdiV6VwIEQ9VBafYfsnvSTrQ.jpg?auto=webp&s=cad2e43620485e45046e149d73ee6e5234599ab9', 'width': 900}, 'variants': {}}]} |
I want to create a model in GGML but I can't seem to get my head around it | 1 | I was able to create the GGUF file, and I understand its structure. I was also able to somewhat understand the context.
But I can't seem to code the forward graph. I'm having a hard time understanding which buffers to use, and why and where to use them. I'm also having a hard time understanding all the backend machinery, and the scratch objects are another thing I don't understand.
I'd be really grateful if someone could give me a quick overview or just what to study in order to get a grasp of what to use.
Thank you. | 2024-01-28T15:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ad4xjr/i_want_to_create_a_model_in_ggml_but_i_cant_seem/ | LoLingLikeHell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad4xjr | false | null | t3_1ad4xjr | /r/LocalLLaMA/comments/1ad4xjr/i_want_to_create_a_model_in_ggml_but_i_cant_seem/ | false | false | self | 1 | null |
Oobabooga webui, Phi-2, and Mistral on Raspberry Pi 5, Orange Pi 5 Plus, and Jetson Orin Nano | 11 | I wanted to see what various SBCs were able to do, especially the Nvidia Jetson Orin Nano. **tl;dr**:
* Raspberry Pi 5 8GB ran [Microsoft Phi-2 Q4_K_M GGUF](https://huggingface.co/TheBloke/phi-2-GGUF) at about 1.2 t/s. [Mistral 7B](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF) ran on it as well, around 0.6 t/s.
* Orange Pi 5 Plus 16GB was amazing. It ran Phi-2 at almost 4 t/s using llama.cpp with some GPU offloading. Unfortunately it's not easy to get standard LLMs to use the built-in 6 TOPS NPU, but the Mali GPU seemed to take on some work and speed up results very well. It also ran Mistral 7B at around 1.4 t/s.
* Nvidia Jetson Orin Nano ran Phi-2 at around 1.6 t/s. Mistral and other models usually froze the system when I tried to run them.
For those of you trying to get text-generation-webui running on your Pis or other ARM boards, there were some issues with missing and mismatched libraries. Here's how I was able to get it to work every time on both [Orange Pi Ubuntu Rockchip](https://github.com/Joshua-Riek/ubuntu-rockchip?tab=readme-ov-file) and Raspberry Pi Raspbian bookworm:
# Start in cloned git directory
$ ./start_linux.sh
# CTRL+C at the GPU/CPU selection screen
$ . "./installer_files/conda/etc/profile.d/conda.sh" && conda activate "./installer_files/env"
$ conda install numpy pillow
$ pip install -r requirements_cpu_only_noavx2.txt
$ pip install llama-cpp-python
$ ./start_linux.sh
The Jetson Orin Nano was a huge disappointment. This $500 dev kit looks sexy as can be and boasts 40 TOPS of AI performance, even saying "it can run all modern AI models, including transformer and advanced robotics models." That unfortunately was not as easy or as performant as they make it sound. I'd recommend using [jetson-containers](https://github.com/dusty-nv/jetson-containers) rather than installing software yourself. Anything else is near impossible or won't support the GPU (e.g. ollama).
Let me know if you have any questions, LLM/other model requests for me to test, etc. | 2024-01-28T15:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ad4w9i/oobabooga_webui_phi2_and_mistral_on_raspberry_pi/ | SiON42X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad4w9i | false | null | t3_1ad4w9i | /r/LocalLLaMA/comments/1ad4w9i/oobabooga_webui_phi2_and_mistral_on_raspberry_pi/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'eR0VnUWIDQH4dPuIrMpw1vxwSq1qXNOmdkmUnztwXOI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?width=108&crop=smart&auto=webp&s=90fb76fd47ab1d7610d5500cb8f0daa8121be34e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?width=216&crop=smart&auto=webp&s=008c4415e5ae5ac2d322241ea088fde844e4e5fe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?width=320&crop=smart&auto=webp&s=e6513d01aa07f7c3e1045a0ed6ab82010488fe52', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?width=640&crop=smart&auto=webp&s=316e2c0f1b0498991da107a1ccfdcbd33e310943', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?width=960&crop=smart&auto=webp&s=57bc484a215cdb469f2b2ea1ca75260cf9374290', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?width=1080&crop=smart&auto=webp&s=10e533e70b464f569d85fa319220086ad267fc0d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Jv9-Cazyv46Pw6A1JzJkwRMxN_SNBq4nGpUMt6N8_ho.jpg?auto=webp&s=40ed4a56482057a3a8816019096755e8a0887acc', 'width': 1200}, 'variants': {}}]} |
Realistic performance estimated for upcomming amd strix point and New laptop qualcom chips. | 1 | [removed] | 2024-01-28T15:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ad4vqa/realistic_performance_estimated_for_upcomming_amd/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad4vqa | false | null | t3_1ad4vqa | /r/LocalLLaMA/comments/1ad4vqa/realistic_performance_estimated_for_upcomming_amd/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j_azz92L-0OqpjTapcp9oRgczzpcW0bHM8w67Mtz7JI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?width=108&crop=smart&auto=webp&s=6e31223589bee0a70dbb18fd1f0d4871040c7083', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?width=216&crop=smart&auto=webp&s=19ed590c31bb6784dbbd8282e36424375598d40d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?width=320&crop=smart&auto=webp&s=3a8f03e2c31e454f5fdba9d249283e94c16a6520', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?width=640&crop=smart&auto=webp&s=8b179c201b4a17e53fda5c8544278b928fa59f60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?width=960&crop=smart&auto=webp&s=d6978167da081d6224232107746e83789125b684', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?width=1080&crop=smart&auto=webp&s=fbce1dbafe9ce58242bd7d23637ee8ee88b27049', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tVoAKBM9hc1ksnBK2WG5DRJm3cQFQw0jMq_mVHFJFYk.jpg?auto=webp&s=b89bcb06a5bdd93e04f83b989f31ec87222b2748', 'width': 1200}, 'variants': {}}]} |
Local LLM & STT UE Virtual MetaHuman | 96 | 2024-01-28T15:03:09 | https://v.redd.it/56kwwy7e47fc1 | BoredHobbes | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ad4jmk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/56kwwy7e47fc1/DASHPlaylist.mpd?a=1709046205%2CZWEyNjMwYzAwMDkxMDIxOTU0NWM0NTMyMDJiZWM5NzlkMzIyNjQ3MzBhZmQxNGFkZmQxMTEyZDZhZDM4NDViZQ%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/56kwwy7e47fc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/56kwwy7e47fc1/HLSPlaylist.m3u8?a=1709046205%2CYjg2OGU3NTZlMTZhNzM1ODM3OTVjMzBlNWNiMWUwNWQ1MzRmM2YzN2NhOGNmZjI4YTgwYzI4NDM2MTAwNmM2OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/56kwwy7e47fc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ad4jmk | /r/LocalLLaMA/comments/1ad4jmk/local_llm_stt_ue_virtual_metahuman/ | false | false | 96 | {'enabled': False, 'images': [{'id': 'aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?width=108&crop=smart&format=pjpg&auto=webp&s=826da944cb5276ba322f9f12c528ffd1353582b4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?width=216&crop=smart&format=pjpg&auto=webp&s=27d937ca13a1a106acd11accf5e49e9406903f52', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?width=320&crop=smart&format=pjpg&auto=webp&s=628d380c57d337e255d50995c639f84adb139b70', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?width=640&crop=smart&format=pjpg&auto=webp&s=e13f7d220b7d93a3eb67de531c44438c49efa0df', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?width=960&crop=smart&format=pjpg&auto=webp&s=437d6ad0586aefcead17bff3a0c5e463181ef421', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?width=1080&crop=smart&format=pjpg&auto=webp&s=276d0af71b5e8e5f56c93dd1eb18453e45da429e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aWczZndnYWY0N2ZjMZWxzLg8oMrAczBfwSqnsrQ1tMLmrPx1LlNI92DiWxJr.png?format=pjpg&auto=webp&s=8006812282bd9f4b70e16e446c8ce2f49ee0f0dc', 'width': 1920}, 'variants': {}}]} | ||
Local LLM & STT UE Virtual Metahuman | 1 | [deleted] | 2024-01-28T15:02:06 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ad4iro | false | null | t3_1ad4iro | /r/LocalLLaMA/comments/1ad4iro/local_llm_stt_ue_virtual_metahuman/ | false | false | default | 1 | null | ||
Looking for advice on self hosting services. For a phd which can you recommend? | 1 | [removed] | 2024-01-28T14:49:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ad49bf/looking_for_advice_on_self_hosting_services_for_a/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad49bf | false | null | t3_1ad49bf | /r/LocalLLaMA/comments/1ad49bf/looking_for_advice_on_self_hosting_services_for_a/ | false | false | self | 1 | null |
What does a computer with RTX 4060 Ti 16GB VRAM graphics card do that Macbook Air m2 16GB RAM cannot? | 26 | In terms of local LLMs: I intend to build a PC with a 4060 Ti (16 GB VRAM), but I am struggling to understand what that machine can do that my MacBook Air cannot.
I just ran Mistral Instruct v0.2 7B (Q4\_K\_S GGUF) and it ran perfectly fine.
For example, can I run 30B models with 16 GB of VRAM, or is 12 GB of VRAM enough?
I swear I searched this subreddit for a day but could not find a full answer.
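Here is the rough arithmetic I have been trying to do myself (just a back-of-the-envelope sketch; the bits-per-weight numbers are my guesses for common GGUF quants, and real usage also depends on context length and KV cache):

```python
# Rough VRAM estimate for fully offloading a quantized GGUF model.
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weight_gb + overhead_gb  # overhead covers KV cache / runtime buffers

for params, label in [(7, "7B"), (13, "13B"), (33, "33B")]:
    for bits, quant in [(4.5, "Q4_K_M"), (5.5, "Q5_K_M")]:
        print(f"{label} {quant}: ~{estimate_vram_gb(params, bits):.1f} GB")
```

If that arithmetic is right, a 33B model at 4-bit needs roughly 19 GB, so it would not fit fully in 16 GB of VRAM without offloading some layers to the CPU. Please correct me if I got this wrong.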
Sorry for broken English. | 2024-01-28T14:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ad437c/what_does_a_computer_with_rtx_4060_ti_16gb_vram/ | decept1ve | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad437c | false | null | t3_1ad437c | /r/LocalLLaMA/comments/1ad437c/what_does_a_computer_with_rtx_4060_ti_16gb_vram/ | false | false | self | 26 | null |
Whisper transcription AI running on i3-530 PC | 1 | [removed] | 2024-01-28T14:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ad41ly/whisper_transcription_ai_running_on_i3530_pc/ | DaanDeweerdt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad41ly | false | null | t3_1ad41ly | /r/LocalLLaMA/comments/1ad41ly/whisper_transcription_ai_running_on_i3530_pc/ | false | false | self | 1 | null |
How to run a local AI on AMD GPU with Windows? | 2 | So I was looking to run an LLM for text summarization locally, and all the information I can find is either a couple months old, or only applies to AMD Instinct GPUs. What would be the fastest way to run a decent LLM model that I could call from code (e.g. python)? Is KoboldCPP still the best option now, or is it outdated? | 2024-01-28T14:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ad3s7y/how_to_run_a_local_ai_on_amd_gpu_with_windows/ | INFINITI2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad3s7y | false | null | t3_1ad3s7y | /r/LocalLLaMA/comments/1ad3s7y/how_to_run_a_local_ai_on_amd_gpu_with_windows/ | false | false | self | 2 | null |
Mozilla-Ocho/llamafile AMD GPU error | 2 | I've got a fresh Windows install (23H2, with drivers and the monthly updates applied) to run llamafile. I downloaded the file from the git repository and ran it as instructed with the parameter -ngl 9999, but I get the error below and it doesn't run. The part that doesn't make sense to me is that there is no NVIDIA GPU in this machine and I never installed CUDA or anything like it; this is a completely fresh Windows installation, and my GPU is an RX 580 8GB.
*WARNING: failed to allocate 12.01 MB of pinned memory: out of memory*
*WARNING: failed to allocate 0.00 MB of pinned memory: out of memory*
*WARNING: failed to allocate 8.80 MB of pinned memory: out of memory*
*CUDA error: invalid device function*
*current device: 0, in function ggml\_cuda\_op\_flatten at ggml-cuda.cu:9012*
*hipGetLastError()*
*GGML\_ASSERT: ggml-cuda.cu:407: !"CUDA error"*
​
**if I don't put "-ngl 9999" it works fine on CPU (of course slowly).**
**anyone has any ideas?** | 2024-01-28T14:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ad3rew/mozillaochollamafile_amd_gpu_error/ | Cyric_of_Waterdeep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad3rew | false | null | t3_1ad3rew | /r/LocalLLaMA/comments/1ad3rew/mozillaochollamafile_amd_gpu_error/ | false | false | self | 2 | null |
Is dolphin mixtral hosted somewhere online? | 2 | I can't run it locally atm. | 2024-01-28T14:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ad3phc/is_dolphin_mixtral_hosted_somewhere_online/ | Surellia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad3phc | false | null | t3_1ad3phc | /r/LocalLLaMA/comments/1ad3phc/is_dolphin_mixtral_hosted_somewhere_online/ | false | false | self | 2 | null |
How to filter important data from weekly reports using LLM? | 1 | [removed] | 2024-01-28T14:12:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ad3ige/how_to_filter_important_data_from_weekly_reports/ | gilklein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad3ige | false | null | t3_1ad3ige | /r/LocalLLaMA/comments/1ad3ige/how_to_filter_important_data_from_weekly_reports/ | false | false | self | 1 | null |
Best 3b to 13b or better (that fit specs) LLMs for fiction, organizing, writing? | 1 | Hello, it's me once again. Are there any outstanding models (in the 3B to 13B range, preferably 7B) that would be really helpful for creating full-on universes and stories, organizing them well, and writing in general?
(Also, uncensored if possible.)
Essentially I want a "writing assistant / story writer / organizer" LLM that would be the best at this. Of course I have a few friends IRL who could help me, but we don't always have time for that, as we are all busier with our lives now.
I know there are some models that are good for story writing, but I need more of an all-rounder.
Info: I have an RTX 4060 laptop GPU with 13.9 GB VRAM, and 16 GB of RAM.
Other info: I use the oobabooga text-generation web UI with ExLlamaV2 as the loader and superboogav2 (sort of chatting with your documents / long-term memory), but I can't get the LLM to go through the entire text document for some reason.
And any better model that won’t run too slow only my PC (gguf) is fine, but I don’t know which ones. So please help???! Thanks | 2024-01-28T13:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ad2uou/best_3b_to_13b_or_better_that_fit_specs_llms_for/ | headbopper96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad2uou | false | null | t3_1ad2uou | /r/LocalLLaMA/comments/1ad2uou/best_3b_to_13b_or_better_that_fit_specs_llms_for/ | false | false | self | 1 | null |
[Off-topic] Question about GPT1 | 1 | [removed] | 2024-01-28T13:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ad2t04/offtopic_question_about_gpt1/ | Invariant_apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad2t04 | false | null | t3_1ad2t04 | /r/LocalLLaMA/comments/1ad2t04/offtopic_question_about_gpt1/ | false | false | self | 1 | null |
Looking to hire an AI (Gen AI ) developer as freelancer to refine a RAG chatbot | 1 | Hi, I'm looking for a Gen AI freelancer who is good at building chatbots using different APIs.
We already have a working prototype but need expertise to get the desired results.
Tech stack for reference (we just need the AI expert):
Frontend :- Nextjs
Backend :- Flask
Database:- postgres
Deployed On :- Azure
They will be fairly compensated. ( you can refer your friends too)
DM for details | 2024-01-28T13:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ad2d7d/looking_to_hire_an_ai_gen_ai_developer_as/ | SHYAM_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad2d7d | false | null | t3_1ad2d7d | /r/LocalLLaMA/comments/1ad2d7d/looking_to_hire_an_ai_gen_ai_developer_as/ | false | false | self | 1 | null |
Is there any git dedicated for text-to-SQL to be inference on local LLM? | 4 | I am trying to find a method to use local LLM for queries and use it for RAG purposes as well as text-to-SQL purposes. LangChain SQL agent doesn’t work well with the local models.. so trying to find if there is any git that can help me solve this issue | 2024-01-28T13:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ad26zr/is_there_any_git_dedicated_for_texttosql_to_be/ | Hairy_Tumbleweed1901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad26zr | false | null | t3_1ad26zr | /r/LocalLLaMA/comments/1ad26zr/is_there_any_git_dedicated_for_texttosql_to_be/ | false | false | self | 4 | null |
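For context, a minimal version of what such a text-to-SQL setup usually looks like is sketched below (an illustration only, assuming llama-cpp-python and a SQLite database; the model path, question, and prompt format are placeholders, not any specific repo's API):

```python
# Minimal text-to-SQL sketch: put the schema in the prompt, let a local model
# draft a query, then run it against a read-only connection.
import sqlite3
from llama_cpp import Llama  # assumes llama-cpp-python is installed

llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)   # placeholder path
conn = sqlite3.connect("file:app.db?mode=ro", uri=True)           # read-only DB

schema = "\n".join(row[0] for row in conn.execute(
    "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"))

question = "How many orders were placed last month?"
prompt = (f"### Schema:\n{schema}\n\n### Task:\nWrite one SQLite query that answers: "
          f"{question}\nReturn only SQL.\n\n### SQL:\nSELECT")
out = llm(prompt, max_tokens=200, stop=[";", "###"])
sql = "SELECT" + out["choices"][0]["text"] + ";"
print(sql)
print(conn.execute(sql).fetchall())
```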
Slugsley | 1 | [removed] | 2024-01-28T12:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ad2085/slugsley/ | Upset_Acanthaceae_18 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad2085 | false | null | t3_1ad2085 | /r/LocalLLaMA/comments/1ad2085/slugsley/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_80YZsz0UFanmlUUNSAOTgbteFgQC5lxwzAlLJPxkKg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?width=108&crop=smart&auto=webp&s=278addc037e6c3f5cec12916f3310e1a74c850f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?width=216&crop=smart&auto=webp&s=d7c87689e113bb31fe7dffdae413d9fb4720fb64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?width=320&crop=smart&auto=webp&s=dec92448618a070650ec8713a604d0ab16a47cfb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?width=640&crop=smart&auto=webp&s=dfa00c15604263a99abcaf84c89cfac0f7c26c0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?width=960&crop=smart&auto=webp&s=d5b68fd56e85dddf8050226d75f2b2106370787e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?width=1080&crop=smart&auto=webp&s=c73e76a21ac5461e52d93c41c3b40c60efa0d226', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NlthozyI4uByI3RzxcVoc4G4B3lboLer2UF1d64WQkQ.jpg?auto=webp&s=f8a14b71599912681e90fd59ad43df2b69180d83', 'width': 1200}, 'variants': {}}]} |
Here is a kaggle notebook to run koboldcpp, enjoy | 6 | 2024-01-28T11:57:17 | https://www.kaggle.com/blutiger/koboldcpp | _BluTiger | kaggle.com | 1970-01-01T00:00:00 | 0 | {} | 1ad143b | false | null | t3_1ad143b | /r/LocalLLaMA/comments/1ad143b/here_is_a_kaggle_notebook_to_run_koboldcpp_enjoy/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'p15coSqe7L8wApjnVlwASEYE50BcnmvRuPbSVpGUPaM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hIZn74UCUzMss02AeqYwTjjr9Q4P0f4_c6SjRwG_mWM.jpg?width=108&crop=smart&auto=webp&s=b0ef6b067fd0b46d01dc9f262edb560782bc9f2c', 'width': 108}], 'source': {'height': 160, 'url': 'https://external-preview.redd.it/hIZn74UCUzMss02AeqYwTjjr9Q4P0f4_c6SjRwG_mWM.jpg?auto=webp&s=317a9be4dd095d5a4b95cfdd96ada08acea08513', 'width': 160}, 'variants': {}}]} | ||
4070 Ti Super as a companion card / VRAM extender alongside a 4090? | 2 | I'd like to buy a second GPU to expand the AI training / inference capabilities of my home computer (currently running a 4090 which I intend to keep) and I've been eyeing the 4070 Ti Super because of its 16 GB VRAM paired with a core that's not as crap as the 4060 Ti. Do you think it's a good idea for a second card?
For a bit of background, I haven't dabbled in LLMs yet, mostly because of VRAM constraints, but I am interested in them, and I do work with other, somewhat smaller AI models (biggest one was Stable Diffusion so far but I ran into limits with Whisper as well, and I'm also somewhat concerned about training SDXL). I'm not above getting my hands dirty and splitting up a model myself if I have to.
And before you say 3090, I don't wanna deal with buying used, or the power consumption and transient spikes of Ampere, and I've noticed lackluster performance on some of these other AI architectures with its older core as well. I know the 3090 is the best VRAM for buck you can get, but I'd rather stick to Ada for a number of different reasons.
Does anyone have experience with running dissimilar GPUs? Am I gonna get limited to what a 2x 4070 TiS config could do with a 4090 and a 4070 TiS?
There's also the future upgrade potential of buying a second 4070 Ti Super for 56 GB of total VRAM -- although that would have to run at an 8/4/4x lane config because I only have a 7800X3D. Would the lane constraints limit this config? I assume the 8/8 of the 4090 + 1x 4070 TiS config won't be an issue yet but do correct me if I'm wrong on that one.
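For reference, the kind of split I have in mind looks roughly like this (a sketch with Hugging Face transformers/accelerate; the model name and per-GPU memory caps are made-up placeholders, not tested numbers):

```python
# Sketch: cap per-GPU memory so accelerate spreads layers across a 4090 + 4070 Ti Super.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-33b-model"                    # placeholder
max_memory = {0: "22GiB", 1: "15GiB", "cpu": "32GiB"}   # leave headroom on each card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",       # accelerate places layers according to max_memory
    max_memory=max_memory,
    load_in_4bit=True,       # bitsandbytes 4-bit so larger models fit across both cards
)
print(model.hf_device_map)   # shows which layers landed on which GPU
```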
Also, how much can you do these days with 40 GB across two GPUs, or 56 GB across three? What are the memory requirements to train a lora for a 13b or a 33/34b model, for example? | 2024-01-28T11:50:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ad108o/4070_ti_super_as_a_companion_card_vram_extender/ | b3nsn0w | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad108o | false | null | t3_1ad108o | /r/LocalLLaMA/comments/1ad108o/4070_ti_super_as_a_companion_card_vram_extender/ | false | false | self | 2 | null |
What is GPT's 4 secret sauce ? | 126 | There is still no open-source model that comes close to GPT-4's performance. Why is this?
Is it the amount of data OpenAI has, the architecture of GPT-4, the computational resources they have to train large models, a smart training recipe that manages to produce this quality, or what?
And What research ideas do you think are capable of making us closer to GPT-4 performance ? | 2024-01-28T11:45:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ad0xhp/what_is_gpts_4_secret_sauce/ | Puzzleheaded_Mall546 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad0xhp | false | null | t3_1ad0xhp | /r/LocalLLaMA/comments/1ad0xhp/what_is_gpts_4_secret_sauce/ | false | false | self | 126 | null |
Documentation Paradigm for Large Language Models (LLMs): Log Today, Train Tomorrow | 3 |
I have been using tools like GPT-4 and GPT-3.5 a lot for automating things in my daily routine.
Here's something important I've learned: always "document" what you do with these models. By "document," I mean always log these requests and store them somewhere with context.
If you are a programmer, you know how documentation works: the value is not immediately apparent now, but it is super useful later.
"Documenting"/"Storing" what you do with these powerful models is similar. Really helpful for a few reasons:
1. Planning for the Future:
Right now, you might not want to make your own language model. But if you keep a record of your work with GPT-3 or GPT-4, later you can use this information to build a model that's just right for you. Customize and make it behave exactly how you want it to work.
My hunch is that in the upcoming years, this will become super-easy for anyone to do on their own devices, having your own models based on your own data. Then having data will give you a head start and advantage.
Example: You can dictate stuff and brain-dump your ideas, and the LLM will write like you. Or your automations can run on your local device instead of making round-trip API calls.
2. Options for Later:
The information you save now gives you choices later. If there is one thing that I have learned over the last five months in this field, it is that data is GOLD. Data is EVERYTHING. You actually don't know YET how this will come in handy, but it surely will. I have had this experience multiple times.
Example: Looking at data, I was able to "debug," develop intuition, train, and build state-of-the-art models for some use cases.
So, how do you get started documenting?
Three simple steps. Prerequisites: A Monitoring Platform.
1. Do what you normally do: call the OpenAI (or any other managed platform) APIs during your automations (or regular requests).
2. But hook that up to a monitoring platform like Helicone (or write your own thing). It's usually super simple; it only takes a one-line change in your API call or code. Additionally, tag the data with appropriate "metadata" and context. This is like nicely organizing your documentation folders.
3. Filter, extract, and dump data from the monitoring platform. Use data as appropriate.
Takes a few hours to set up. But totally worth it!
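To make steps 2 and 3 concrete, here is a tiny sketch of the idea as a home-grown JSONL logger rather than any particular platform's API (the file name, tag fields, and model are just examples):

```python
# Sketch of "log today, train tomorrow": wrap each chat call and append the
# request, response, and context tags to a JSONL file for later fine-tuning.
import json, time
from openai import OpenAI  # openai>=1.0 client

client = OpenAI()
LOG_PATH = "llm_log.jsonl"

def logged_chat(messages, tags=None, model="gpt-4"):
    resp = client.chat.completions.create(model=model, messages=messages)
    record = {
        "ts": time.time(),
        "model": model,
        "tags": tags or {},            # e.g. {"task": "email_draft"}
        "messages": messages,          # full prompt / context
        "completion": resp.choices[0].message.content,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["completion"]

print(logged_chat([{"role": "user", "content": "Summarize my day in one line."}],
                  tags={"task": "journal"}))
```

Every record collected this way is already close to the shape most fine-tuning pipelines expect.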
| 2024-01-28T11:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ad0kdb/documentation_paradigm_for_large_language_models/ | phoneixAdi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad0kdb | false | null | t3_1ad0kdb | /r/LocalLLaMA/comments/1ad0kdb/documentation_paradigm_for_large_language_models/ | false | false | self | 3 | null |
RWKV5 100% trained & released | 185 | 2024-01-28T11:18:30 | https://twitter.com/BlinkDL_AI/status/1751542433039651304 | cztomsik | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1ad0j63 | false | {'oembed': {'author_name': 'BlinkDL', 'author_url': 'https://twitter.com/BlinkDL_AI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">RWKV-5 "Eagle" 7B: beats Mistral-7B at multilingual, reaches Llama2-7B level at English, while being 100% attention-free RNN and only trained 1.1T tokens. Gradio Demo: <a href="https://t.co/k0AivnxCwP">https://t.co/k0AivnxCwP</a> RWKV-6 "Finch" 1B5 in ~10days, 3B in ~30days. <a href="https://t.co/c6dByjF976">pic.twitter.com/c6dByjF976</a></p>— BlinkDL (@BlinkDL_AI) <a href="https://twitter.com/BlinkDL_AI/status/1751542433039651304?ref_src=twsrc%5Etfw">January 28, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/BlinkDL_AI/status/1751542433039651304', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1ad0j63 | /r/LocalLLaMA/comments/1ad0j63/rwkv5_100_trained_released/ | false | false | 185 | {'enabled': False, 'images': [{'id': 'QOCBpJq8bO6d7dyO8fSS7k8Dh-4gTRL8UBHfQRZuslw', 'resolutions': [{'height': 17, 'url': 'https://external-preview.redd.it/-vvRhyluhyyb-w2h3aefFSbOYWFc2q9vGzgcpKzXgiY.jpg?width=108&crop=smart&auto=webp&s=ac729f141139943ba63bcfbe6cb18fe433979cc8', 'width': 108}], 'source': {'height': 23, 'url': 'https://external-preview.redd.it/-vvRhyluhyyb-w2h3aefFSbOYWFc2q9vGzgcpKzXgiY.jpg?auto=webp&s=fa889242d3214e609215a5fe48cb57d6d8de2c2b', 'width': 140}, 'variants': {}}]} | ||
Advice on PSU for Dual RTX 3090 Setup | 1 | [removed] | 2024-01-28T11:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ad0bwp/advice_on_psu_for_dual_rtx_3090_setup/ | Hugejiji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad0bwp | false | null | t3_1ad0bwp | /r/LocalLLaMA/comments/1ad0bwp/advice_on_psu_for_dual_rtx_3090_setup/ | false | false | self | 1 | null |
Advice on PSU for Dual RTX 3090 Setup | 1 | [removed] | 2024-01-28T11:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ad0blk/advice_on_psu_for_dual_rtx_3090_setup/ | Hugejiji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad0blk | false | null | t3_1ad0blk | /r/LocalLLaMA/comments/1ad0blk/advice_on_psu_for_dual_rtx_3090_setup/ | false | false | self | 1 | null |
What prompts do you use to evaluate new LLM capabilities? | 10 | What prompts do you use to evaluate new LLM capabilities?
Do you have a series of standard prompts you use to see what an LLM can do?
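By "a series of standard prompts" I mean something as simple as a fixed probe list run against whatever model is being tested (just a sketch against an OpenAI-compatible local server; the base URL and model name depend on your setup):

```python
# Tiny sketch: run a fixed probe set against any OpenAI-compatible local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

PROBES = [
    "Write a Python function that reverses a linked list.",
    "Explain the difference between TCP and UDP in three sentences.",
    "Summarize the plot of Hamlet in one paragraph.",
    "Sally has 3 brothers; each brother has 2 sisters. How many sisters does Sally have?",
]

for p in PROBES:
    r = client.chat.completions.create(model="local-model",
                                       messages=[{"role": "user", "content": p}])
    print(f"--- {p}\n{r.choices[0].message.content}\n")
```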
What things have you found the most useful for overall evaluation? | 2024-01-28T10:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ad0876/what_prompts_do_you_use_to_evaluate_new_llm/ | Diligent_Eye1248 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad0876 | false | null | t3_1ad0876 | /r/LocalLLaMA/comments/1ad0876/what_prompts_do_you_use_to_evaluate_new_llm/ | false | false | self | 10 | null |
Hello noob here, how do I add charachter to OOGABOOGA API? | 1 | So I have a small AI assistant that uses Oogabooga API (5000). How do I add charachter to the API do I need to do it in my Python code or can I do it in the Oogabooga web UI? | 2024-01-28T10:58:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ad081s/hello_noob_here_how_do_i_add_charachter_to/ | uigofuckmeintheass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ad081s | false | null | t3_1ad081s | /r/LocalLLaMA/comments/1ad081s/hello_noob_here_how_do_i_add_charachter_to/ | false | false | self | 1 | null |
Does Mistral has other language than English? | 4 | I'm using dolphin-2.1-mistral-7b.Q5\_K\_M.gguf for a thesis project. Now I have to use a dataset in my mother tongue (Italian). Can I use it or it should be preferable to use a model fine-tuned for italian language? | 2024-01-28T10:24:19 | https://www.reddit.com/r/LocalLLaMA/comments/1aczq84/does_mistral_has_other_language_than_english/ | giammy677 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aczq84 | false | null | t3_1aczq84 | /r/LocalLLaMA/comments/1aczq84/does_mistral_has_other_language_than_english/ | false | false | self | 4 | null |
Together.ai introduces JSON/function calling mode for Mistral.ai LLMs | 17 | You can now use [Mistral.ai](https://Mistral.ai) LLM models with JSON mode and function calling through [together.ai](https://together.ai)'s API: [https://docs.together.ai/docs/function-calling](https://docs.together.ai/docs/function-calling)
So now there are two open-source LLM APIs enabling this (anyscale and [together.ai](https://together.ai)).
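For anyone who wants to see what that looks like in practice, here is a rough sketch based on my reading of the linked docs (an OpenAI-compatible client pointed at Together's endpoint; the model name and exact parameters may differ, so treat it as an illustration rather than the official example):

```python
# Sketch: OpenAI-style function calling against Together's API endpoint.
import os
from openai import OpenAI

client = OpenAI(base_url="https://api.together.xyz/v1",
                api_key=os.environ["TOGETHER_API_KEY"])

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "What's the weather in Lisbon right now?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # structured call instead of free text
```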
Personally I'm very excited about this. Structured output makes LLMs so much easier to work with in applications. | 2024-01-28T10:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1aczp2r/togetherai_introduces_jsonfunction_calling_mode/ | rasmus16100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aczp2r | false | null | t3_1aczp2r | /r/LocalLLaMA/comments/1aczp2r/togetherai_introduces_jsonfunction_calling_mode/ | false | false | self | 17 | null |
LLAMA.CPP finetune | 2 |
I'm trying to fine-tune a Vicuna model on custom text.
The fine-tuning command:
**./finetune --model-base \~/llama.cpp/models/vicuna-7b-v1.5.Q4\_K\_M.gguf --train-data zam.txt --lora-out lora2.gguf --save-every 0 --threads 14 --ctx 25 --rope-freq-base 10000 --rope-freq-scale 1.0 --batch 1 --grad-acc 1 --adam-iter 64 --adam-alpha 0.001 --lora-r 4 --lora-alpha 4 --use-flash --sample-start "" --include-sample-start --seed 1**
Then, when testing the model after a couple of hours, I got unexpected results:
**./main --model \~/llama.cpp/models/vicuna-7b-v1.5.Q4\_K\_M.gguf --lora lora2.gguf --prompt "tell me about zamalek club is"**
The dataset samples used are something like:
Zamalek Sporting Club, commonly referred to as Zamalek, is an Egyptian sports club based in Giza. It is one of the most successful football clubs in Africa and the Middle East, and was the most titled with African tournaments in 20th century \[5\] The club is mainly known for its professional football team, which plays in the Egyptian Premier League, the top tier of the Egyptian football league system.
The club was founded on 5 January 1911 as Qasr El Nile Club and was first headed by the Belgian lawyer George Merzbach. The name was changed two years later to Cairo International Sports Club (C.I.S.C.),\[8\] colloquially El Qāhirah El Mokhtalat Club or El Mokhtalat Club. The club was renamed in 1941 after King Farouk of Egypt and became known as Farouk El Awal Club (transl. Farouk I Club). After the 1952 Egyptian revolution, the club changed its name again to Zamalek SC.
Do I miss something ? | 2024-01-28T10:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/1aczp0v/llamacpp_finetune/ | Nervous-Standard-131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aczp0v | false | null | t3_1aczp0v | /r/LocalLLaMA/comments/1aczp0v/llamacpp_finetune/ | false | false | self | 2 | null |