| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I am going to buy H100s. There are too many options. | 69 | Hi all,
I need help from all of you.
I am going to buy H100s for training LLMs - currently for fine-tuning 70B models, but later we may consider pre-training larger models too.
The H100 looks more promising than the A100 considering its FP8 support, so I asked for quotes from multiple vendors. And then I realized there are too many options!
1. DGX - 8x H100, much more expensive than other options but they say its performance is worth it.
2. Buy PCI-E H100 cards and a Supermicro machine - from 2x up to 8x, looks cost effective.
2.a. Some vendors offered a combination with NVLink bridges. Some say 1 bridge is needed to link 2 cards, and some say 3 bridges are needed for 2.
3. H100 NVL - no idea how it differs from the PCI-E cards with NVLink bridges, but they look newly introduced.
Any BEST PRACTICES I can look at to make a decision? Any advice from experts here who have already been through a similar situation?
Thanks in advance 🙏 | 2023-10-31T12:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/17kk63y/i_am_going_to_buy_h100s_there_are_too_many_options/ | OldPin8654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kk63y | false | null | t3_17kk63y | /r/LocalLLaMA/comments/17kk63y/i_am_going_to_buy_h100s_there_are_too_many_options/ | false | false | self | 69 | null |
Any existing Open Source Mobile App for local inference server? | 2 | Hi, I would like to know if there are any existing open source mobile app projects, like the ChatGPT mobile app, that can point to a custom inference server link?
If not, I think I will contribute to make one. | 2023-10-31T12:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/17kjox2/any_existing_open_source_mobile_app_for_local/ | Similar_Tea_8349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kjox2 | false | null | t3_17kjox2 | /r/LocalLLaMA/comments/17kjox2/any_existing_open_source_mobile_app_for_local/ | false | false | self | 2 | null |
Resources for creating datasets for code generation? | 5 | I tried some code generation models on huggingface but their responses were really poor, even though I clearly explained what I need in the prompt. My assumption is that this is because my question relates to a niche framework: the models were trained on large datasets across a wide variety of languages and may never have come across the framework I'm working with. I'm not looking for a general model but one specific to the not-so-popular framework I work with, so I'm guessing I'll have to generate a custom dataset.
I also don't need the model to know so many languages. If I can get it to generate just Python, JavaScript, Golang, and C, that alone would be great, but I can make do with fewer languages as well. So, does this mean I'll end up with a smaller model suitable for running on an RTX 4090?
How will it understand what I am asking it? Do I also need to scrape Stackoverflow and some forums for the specific language tags I am interested in?
How do I go about creating such a dataset? I can scrape from multiple sources but in what format am I supposed to put it all together for training?
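For what it's worth, the one format I keep seeing referenced is Alpaca-style instruction records stored as JSON Lines. A minimal sketch (the field names follow the common convention; whether a given training framework wants exactly this schema is an assumption on my part):

```python
# One Alpaca-style instruction-tuning record, stored as JSON Lines
# (one JSON object per line). The schema here is the common convention,
# not a requirement of any particular trainer.
import json

record = {
    "instruction": "Write a Python function that parses a semver string.",
    "input": "",  # optional extra context, e.g. code to modify
    "output": "def parse_semver(s):\n    major, minor, patch = s.split('.')\n    return int(major), int(minor), int(patch)",
}

with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```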
I am doing this for the first time. | 2023-10-31T12:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/17kje41/resources_for_creating_datasets_for_code/ | rhinohoof | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kje41 | false | null | t3_17kje41 | /r/LocalLLaMA/comments/17kje41/resources_for_creating_datasets_for_code/ | false | false | self | 5 | null |
Apple M3 Pro Chip Has 25% Less Memory Bandwidth Than M1/M2 Pro | 66 | 2023-10-31T11:59:15 | https://www.macrumors.com/2023/10/31/apple-m3-pro-less-memory-bandwidth/ | __tosh | macrumors.com | 1970-01-01T00:00:00 | 0 | {} | 17kjbqd | false | null | t3_17kjbqd | /r/LocalLLaMA/comments/17kjbqd/apple_m3_pro_chip_has_25_less_memory_bandwidth/ | false | false | 66 | {'enabled': False, 'images': [{'id': '8MjIrVzuil4d2x5KZ-vIAVg4E3sNyreJSB7e5gBKOcc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?width=108&crop=smart&auto=webp&s=95d7d8c8fb90e03129f95db5ff2049f2e6aef181', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?width=216&crop=smart&auto=webp&s=4ad4e5ca7272b13df77be917513cbf1ceab1c6e4', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?width=320&crop=smart&auto=webp&s=3cfff8a1ea320b0e75b91a455d87885ff72b750f', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?width=640&crop=smart&auto=webp&s=58e41bb8ffbda9e7645b1ff88ed4414dd00b941c', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?width=960&crop=smart&auto=webp&s=608792f5bb490c421aef6a9e5ed9aa34c7a882d7', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?width=1080&crop=smart&auto=webp&s=6ff33afd908fd4f951367a94f1e955d58c6a1ea1', 'width': 1080}], 'source': {'height': 2868, 'url': 'https://external-preview.redd.it/S_U11F5e2WlnGhBkExp4gkAdZcuX2VgXy15tIWhQX_g.jpg?auto=webp&s=c3c58e2cfe50bb38d6298c411e08bc804d262873', 'width': 5104}, 'variants': {}}]} | ||
Skywork-13B: a new foundation model trained on 3.2 trillion tokens | 93 | There are two noteworthy things covered here:
1. Skywork-13B, a new bilingual foundation model for English and Chinese. They also announce Skywork-13B-Chat enhanced specially for creative writing, Skywork-13B-Math for math, Skywork-13B-MM for multimodal capability, and a segment of their SkyPile Corpus comprising 150 billion tokens of Chinese web text.
2. Research into pretraining on in-domain data. Specifically, they show that some recent foundation models may be excessively overfitted and have had test data leakage during training.
GitHub and models: [https://github.com/SkyworkAI/Skywork/blob/main/README\_EN.md](https://github.com/SkyworkAI/Skywork/blob/main/README_EN.md)
Tech report: [https://arxiv.org/abs/2310.19341](https://arxiv.org/abs/2310.19341)
**Abstract**
>In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLMs of comparable size to date. We introduce a two-stage training methodology using a segmented corpus, targeting general purpose training and then domain-specific enhancement training, respectively. We show that our model not only excels on popular benchmarks, but also achieves state of the art performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that test data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained during intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as a valuable open-source resource to democratize access to high-quality LLMs.
Training loss and validation loss:
[Trajectory of important monitoring metrics during Stage-1 pre-training. Stage-1 pre-training consists of two sequential training sessions, represented by different colors in the loss curves \(red for session 0 ∼ 2T and blue for session 2 ∼ 3T\).](https://preview.redd.it/nmn1m9q9mixb1.png?width=767&format=png&auto=webp&s=788525453e988401e40267053c083545abad055a)
Benchmark evaluation:
https://preview.redd.it/tqvuls0cmixb1.png?width=786&format=png&auto=webp&s=2c339537baaecc8cc8fa3fdd71f44df732cd8674
**Pre-training on in-domain data: a common practice?**
Important points at a glance:
>We evaluate an LLM’s language modeling loss on three datasets drawn from the same distribution: 1) The official GSM8K training set, 2) The official GSM8K test set, 3) A set composed of GSM8K-like samples generated by GPT-4. The corresponding losses are denoted as Ltrain, Ltest, and Lref , respectively. Theoretically, if a language model has not been exposed to any of the three datasets during pre-training, the three losses Ltrain, Ltest, and Lref should be approximately equivalent. However, if the model has been pre-trained on the training set or if the test data has been inadvertently exposed during the pre-training process, we would anticipate a notable discrepancy between Ltrain, Ltest, and Lref .
>
>Models such as ChatGLM3-6B, Baichuan2-13B, Qwen-7B/14B, and Aquila2-34B display markedly lower loss on the training split than on the test split. Consequently, we postulate that these models may have been considerably pre-trained on GSM8K training split or similar data.
>
>We believe that there is valid risk in the practice of targeted pre-training, in that it compromises fairness in benchmarking. While through pre-training on in-domain data a model may excel at specific tasks, it remains uncertain how well it would perform on unseen tasks. Its capabilities may be overestimated based on the benchmark alone, which can lead to unfair comparisons between models and mislead users or stakeholders about the true capabilities of the model.
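To make the detection method concrete, here's a rough sketch of the loss comparison (my own reconstruction from the paper's description, not their released code; the model name and subsample size are placeholders, and the GPT-4-generated reference set for Lref is omitted):

```python
# Skywork-style leakage check: compare average LM loss on the GSM8K
# train and test splits. A markedly lower L_train than L_test suggests
# the model was pre-trained on the training split (or similar data).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # any causal LM under test
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
).eval()

def avg_loss(split: str, n: int = 200) -> float:
    ds = load_dataset("gsm8k", "main", split=split).select(range(n))
    losses = []
    for ex in ds:
        ids = tok(ex["question"] + "\n" + ex["answer"], return_tensors="pt",
                  truncation=True, max_length=1024).input_ids.to(model.device)
        with torch.no_grad():
            losses.append(model(ids, labels=ids).loss.item())  # mean CE per token
    return sum(losses) / len(losses)

l_train, l_test = avg_loss("train"), avg_loss("test")
print(f"L_train={l_train:.3f}  L_test={l_test:.3f}")
```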
Regular vs irregular results:
https://preview.redd.it/dll4shngmixb1.png?width=775&format=png&auto=webp&s=0438bab27bf25edcacdbb879279e0959c04b277c
To put this into perspective, QwenLM reports GSM8K 8-shot scores of 16.7 for Llama 2 7B, 29.6 for Llama 2 13B, and 42.2 for Code Llama 34B. From that same chart, Qwen-7B has a score of 51.7, Baichuan-13B comes in at 52.7, and Qwen-14B tops it off with a whopping 61.3.
It reminds me of the paper that came out last week from researchers at Google DeepMind and Princeton. They assessed models using a new evaluation and discerned a wide discrepancy:
>A variant of the contamination issue is “cramming for the leaderboard.” It is possible to deliberately train a model on data similar to those used in the leaderboard evaluations. Such datasets are easy to generate from a small number of examples using existing strong models. If “cramming” happens during pre-training, it becomes hard to detect.
>
>Several open models show signs of being over-trained for leaderboards at the expense of general-purpose language capabilities (“cramming”).
As the saying goes, [pretraining on the test set is all you need](https://www.reddit.com/r/LocalLLaMA/comments/17h7np2/pretraining_on_the_test_set_is_all_you_need/). | 2023-10-31T11:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/17kijtw/skywork13b_a_new_foundation_model_trained_on_32/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kijtw | false | null | t3_17kijtw | /r/LocalLLaMA/comments/17kijtw/skywork13b_a_new_foundation_model_trained_on_32/ | false | false | 93 | {'enabled': False, 'images': [{'id': 'T8S6PEdZCYI2sMNFL5hVd24wMonLakHYMfO8xmmc7xU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=108&crop=smart&auto=webp&s=4154c354e467cd9458e57bb00dc5f71173dcad46', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=216&crop=smart&auto=webp&s=a762ad56c73106b634a070ca5415ee00b051ceb8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=320&crop=smart&auto=webp&s=d8cb6cc5a8a65ca268fb1f2275a27ff42b8bf3b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=640&crop=smart&auto=webp&s=872928aeef269726f356d2ae3dc911409b068305', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=960&crop=smart&auto=webp&s=2c9e05b098120945562a9122ac241195ae9fea70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=1080&crop=smart&auto=webp&s=abeff19452654b34c52314682479d6408bb9e208', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?auto=webp&s=b5a70af111542b64dc5cd44e17bdf5423ac9cccc', 'width': 1200}, 'variants': {}}]} | |
Any .gguf web-interface that has video multimodal support? | 2 | Is there any .gguf web-interface available that has video multimodal support?
Like mPLUG-Owl, Video-LLaMA, Video-ChatGPT, or MovieChat - but for GGUF use? The ones I've found do not use .gguf files for CPU usage...
I know LLaVa is out there, but it does not have video support. | 2023-10-31T10:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/17ki3fw/any_gguf_webinterface_that_has_video_multimodal/ | fetballe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ki3fw | false | null | t3_17ki3fw | /r/LocalLLaMA/comments/17ki3fw/any_gguf_webinterface_that_has_video_multimodal/ | false | false | self | 2 | null |
Freezing during inference | 1 | Sorry to post a hardware problem, but I think this is the community most likely to have dealt with similar issues.
Basically I get hardware freezing during inference.
- EVGA X299 FTW-K
- Intel(R) Core(TM) i9-9900X CPU @ 3.50GHz
- 128GB DDR4 RAM
- 1x nvidia RTX 3090
- 1x nvidia RTX 4090
- Corsair HX1500i Power Supply
- two samsung nvme flash drives (one for root, one for swap)
- two 5.5TB HDs (for backups)
- a fractal design case with fans in every available slot
- water cooling for CPU (kraken?)
- various USB peripherals (topping E30, UMC202HD, Nitrokey, phone, kb, flash etc)
This is a personal workstation used for occasional LLM work with python cuda 11 libraries. I run linux (void) with a recent kernel (6.5.9_1) and nvidia-535 driver.
Often, the machine will freeze under GPU load when both GPUs are being used (i.e. an LLM with layers split across both cards). This has kept happening across kernel updates and various CUDA versions. It's stable under gaming loads (e.g. Far Cry 5/6 on either card). No error or kernel messages and nothing logged to dmesg - just a hard freeze.
I've tried switching the order / PCIe slots that the GPUs are using and removing other cards.
The CPU temperature is generally below 90C under full CPU load. If I fully load the CPU it doesn't freeze. I stopped overclocking.
The GPU temperatures stay reasonable (60C under load), although the 3090 feels quite hot (they are reported to run a bit warm). Under load they sit at about 75% utilization, I presume because the bottleneck becomes the PCIe 3.0 bus, which I vaguely suspect may be the root of the instability.
I'm out of ideas, if anyone could suggest things to try I'd be grateful. | 2023-10-31T08:50:40 | https://www.reddit.com/r/LocalLLaMA/comments/17kgidy/freezing_during_inference/ | _supert_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kgidy | false | null | t3_17kgidy | /r/LocalLLaMA/comments/17kgidy/freezing_during_inference/ | false | false | self | 1 | null |
text-generation-webui qweight & qzeros error loading model | 1 | Hey,
New user to text-generation-webui here. I got it up and running and can successfully run GPU inference with the main branch of the Zephyr 7B model:
[https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ)
However, when I try to load alternative models, e.g. the larger ones with 8-bit quantisation:
[https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-8bit-128g-actorder\_True](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-8bit-128g-actorder_True)
I get the following error:
File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/exllama/cuda_ext.py", line 33, in ext_make_q4
return make_q4(qweight,
^^^^^^^^^^^^^^^^
RuntimeError: qweight and qzeros have incompatible shapes
Given I'm a new user to this, I'm wondering if I'm making an obvious error in the UI? Insight appreciated, thanks!
​ | 2023-10-31T08:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/17kgcb3/textgenerationwebui_qweight_qzeros_error_loading/ | LostGoatOnHill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kgcb3 | false | null | t3_17kgcb3 | /r/LocalLLaMA/comments/17kgcb3/textgenerationwebui_qweight_qzeros_error_loading/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ECYL__bruZ8Ggwy90cpnRGEi8iaweg99bP9QlXeXccM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?width=108&crop=smart&auto=webp&s=8a2c6153f3e071b63a9a90a06b24b491ea39a2e3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?width=216&crop=smart&auto=webp&s=ec165673d05ab89acf1ec33d9113e34cb9a86c2b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?width=320&crop=smart&auto=webp&s=5e2d1bca62561440a7e9f8ecdcd0e16d5278cb5e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?width=640&crop=smart&auto=webp&s=7b7cc8be2608bcc6c1fa75f7f33d490d3d1f92b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?width=960&crop=smart&auto=webp&s=4d20e83424bcf5d94d431bd25e367a58865f02c8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?width=1080&crop=smart&auto=webp&s=fcdb7f7832fa33431768fd1755c63ac0a48beb9f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q4c9pKg1YetnlXqbNX8EcWGEo_hh63gJ-ZrSX_kG-Yo.jpg?auto=webp&s=d5543a68ae587f4ccc0b8b32d51c183c631f31af', 'width': 1200}, 'variants': {}}]} |
M3 Max | 3 | Seems like the M3 Max is well suited for large language model training. With 128 GB of unified memory, it essentially lets us train models with billions of parameters! Pretty interesting. | 2023-10-31T08:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/17kg5ak/m3_max/ | akashocx17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kg5ak | false | null | t3_17kg5ak | /r/LocalLLaMA/comments/17kg5ak/m3_max/ | false | false | self | 3 | null |
PC Build Suggestion For RTX 4090 + RTX 3090 | 1 | [removed] | 2023-10-31T08:06:12 | https://www.reddit.com/r/LocalLLaMA/comments/17kfyno/pc_build_suggestion_for_rtx_4090_rtx_3090/ | Aggressive_Bee_9069 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kfyno | false | null | t3_17kfyno | /r/LocalLLaMA/comments/17kfyno/pc_build_suggestion_for_rtx_4090_rtx_3090/ | false | false | self | 1 | null |
What are interesting open source resources/ projects for building LLMs for India/ Indic languages? | 7 | My focus is finding Datasets and embeddings first, I did my research specifically on Sanskrit and here are a few interesting links to start with -
* iNLTK: Natural Language Toolkit for Indic Languages : [https://arxiv.org/abs/2009.12534](https://arxiv.org/abs/2009.12534)
* Sanskrit ShlokasDataset - [https://www.kaggle.com/datasets/disisbig/sanskrit-shlokas-dataset](https://www.kaggle.com/datasets/disisbig/sanskrit-shlokas-dataset)
* State of the Art Language models and Classifier for Sanskrit: [https://github.com/goru001/nlp-for-sanskrit](https://github.com/goru001/nlp-for-sanskrit)
* ASR in Sanskrit ( Authors have prepared 78 hours of audio ) [https://arxiv.org/pdf/2106.05852.pdf](https://arxiv.org/pdf/2106.05852.pdf)
* Embedding Evaluation - [https://arxiv.org/pdf/2104.00270.pdf](https://arxiv.org/pdf/2104.00270.pdf)
* Evaluation of quality of embeddings - [https://github.com/jivnesh/EvalSan](https://github.com/jivnesh/EvalSan)
* Bert Buddhist Sanskrit Text - [https://huggingface.co/Matej/bert-base-buddhist-sanskrit?text=The+goal+of+life+is+%5BMASK%5D.](https://huggingface.co/Matej/bert-base-buddhist-sanskrit?text=The+goal+of+life+is+%5BMASK%5D.)
Other interesting links for study.
* [https://link.springer.com/chapter/10.1007/978-3-319-62407-5\_18](https://link.springer.com/chapter/10.1007/978-3-319-62407-5_18)
* [https://ieeexplore.ieee.org/document/9760564](https://ieeexplore.ieee.org/document/9760564)
* [https://www.arxiv-vanity.com/papers/2112.13800/](https://www.arxiv-vanity.com/papers/2112.13800/)
* Is there a proxy to solve the dataset problem 🤔? Just throwing random ideas here:
* tag images or videos instead of finding original text somewhere, and use them for training.
* Use translation to cover different topics.
​ | 2023-10-31T07:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/17kfqdv/what_are_interesting_open_source_resources/ | ashutrv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kfqdv | false | null | t3_17kfqdv | /r/LocalLLaMA/comments/17kfqdv/what_are_interesting_open_source_resources/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Skywork: a new bilingual foundation model, and some research on test data contamination | 1 | There are two interesting things covered in this paper:
1. Skywork-13B, a new foundation model for English and Chinese. They also announce Skywork-13B-Chat enhanced specially for creative writing, Skywork-13B-Math specialized for math, Skywork-13B-MM for multimodal capability, and a segment of their SkyPile Corpus comprising 150 billion tokens of Chinese web text.
2. Research into pretraining on in-domain data. Specifically, they show that some recent foundation models may be excessively overfitted and have had test data leakage during training. I'll cover this second.
First things first, the models and the technical report.
GitHub and models: [https://github.com/SkyworkAI/Skywork/blob/main/README\_EN.md](https://github.com/SkyworkAI/Skywork/blob/main/README_EN.md)
Tech report: [https://arxiv.org/abs/2310.19341](https://arxiv.org/abs/2310.19341)
**Abstract**
>In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLMs of comparable size to date. We introduce a two-stage training methodology using a segmented corpus, targeting general purpose training and then domain-specific enhancement training, respectively. We show that our model not only excels on popular benchmarks, but also achieves state of the art performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that test data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained during intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as a valuable open-source resource to democratize access to high-quality LLMs.
**Training loss and validation loss**
[Trajectory of important monitoring metrics during Stage-1 pre-training. Stage-1 pre-training consists of two sequential training sessions, represented by different colors in the loss curves \(red for session 0 ∼ 2T and blue for session 2 ∼ 3T\).](https://preview.redd.it/is5jfb1jhhxb1.png?width=767&format=png&auto=webp&s=1850561d16ef719912fe60a590669d1795e0ed45)
**Benchmark evaluation**
https://preview.redd.it/38dzg72pihxb1.png?width=786&format=png&auto=webp&s=72c23176d1731f94427e0b6adb785fbc3f3e1e6d
**Pre-training on in-domain data: a common practice?**
Important points at a glance from the report:
>We evaluate an LLM’s language modeling loss on three datasets drawn from the same distribution: 1) The official GSM8K training set, 2) The official GSM8K test set, 3) A set composed of GSM8K-like samples generated by GPT-4. The corresponding losses are denoted as *Ltrain*, *Ltest*, and *Lref* , respectively. Theoretically, if a language model has not been exposed to any of the three datasets during pre-training, the three losses *Ltrain*, *Ltest*, and *Lref* should be approximately equivalent. However, if the model has been pre-trained on the training set or if the test data has been inadvertently exposed during the pre-training process, we would anticipate a notable discrepancy between *Ltrain*, *Ltest*, and *Lref* .
>
>Models such as ChatGLM3-6B, Baichuan2-13B, Qwen-7B/14B, and Aquila2-34B display markedly lower loss on the training split than on the test split. Consequently, we postulate that these models may have been considerably pre-trained on GSM8K training split or similar data.
>
>We believe that there is valid risk in the practice of targeted pre-training, in that it compromises fairness in benchmarking. While through pre-training on in-domain data a model may excel at specific tasks, it remains uncertain how well it would perform on unseen tasks. Its capabilities may be overestimated based on the benchmark alone, which can lead to unfair comparisons between models and mislead users or stakeholders about the true capabilities of the model.
Regular vs irregular results:
https://preview.redd.it/rnei2lv5nhxb1.png?width=775&format=png&auto=webp&s=1e7b77cda38c40e6033ad93656853cd73be02362
Some thoughts:
The points covered here remind me of the Skill-Mix paper from researchers at Google DeepMind and Princeton, where they found a discrepancy between popular benchmarks and their own evaluation.
[https://arxiv.org/abs/2310.17567](https://arxiv.org/abs/2310.17567)
>A variant of the contamination issue is “cramming for the leaderboard.” It is possible to deliberately train a model on data similar to those used in the leaderboard evaluations. Such datasets are easy to generate from a small number of examples using existing strong models. If “cramming” happens during pre-training, it becomes hard to detect.
>
>Several open models show signs of being over-trained for leaderboards at the expense of general-purpose language capabilities (“cramming”).
>
>Falcon-180B-Chat and Tigerbot-70B-Chat rank higher than LLaMA-2-70B-Chat on Open LLM Leaderboard, but performs worse on SKILL-MIX for both GPT-4 and LLaMA-2 grading. Tigerbot-70B-Chat performs even worse than LLaMA-2-13B-Chat.
>
>Qwen-14B-Chat outperforms LLaMA-2-70B-Chat on MMLU, HumanEval and GSM8K (Cobbe et al., 2021), but performs worse than LLaMA-2-70B-Chat for k = 2, 3, 4 with both GPT-4 and LLaMA-2 grading.
>
>Mistral-7B-v0.1 outperforms LLaMA-2 13B on all benchmarks that the Mistral AI team tested. Mistral-7B-Instruct-v0.1 (the model after instruction tuning) outperforms LLaMA-2-13B-Chat on MT-Bench (Zheng et al., 2023). Yet, the situation is reversed on SKILL-MIX.
Textbooks are all you need? More like [pretraining on the test set is all you need](https://www.reddit.com/r/LocalLLaMA/comments/17h7np2/pretraining_on_the_test_set_is_all_you_need/). | 2023-10-31T07:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/17kfkgx/skywork_a_new_bilingual_foundation_model_and_some/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kfkgx | false | null | t3_17kfkgx | /r/LocalLLaMA/comments/17kfkgx/skywork_a_new_bilingual_foundation_model_and_some/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'T8S6PEdZCYI2sMNFL5hVd24wMonLakHYMfO8xmmc7xU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=108&crop=smart&auto=webp&s=4154c354e467cd9458e57bb00dc5f71173dcad46', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=216&crop=smart&auto=webp&s=a762ad56c73106b634a070ca5415ee00b051ceb8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=320&crop=smart&auto=webp&s=d8cb6cc5a8a65ca268fb1f2275a27ff42b8bf3b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=640&crop=smart&auto=webp&s=872928aeef269726f356d2ae3dc911409b068305', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=960&crop=smart&auto=webp&s=2c9e05b098120945562a9122ac241195ae9fea70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?width=1080&crop=smart&auto=webp&s=abeff19452654b34c52314682479d6408bb9e208', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AH3I8wMuOLrn4h3W7bsYdyqoi2ISJcB67-2lXM_YfdM.jpg?auto=webp&s=b5a70af111542b64dc5cd44e17bdf5423ac9cccc', 'width': 1200}, 'variants': {}}]} | |
chatgpt-3.5-turbo is a 20B model, according to this Microsoft paper | 1 | 2023-10-31T07:24:59 | https://arxiv.org/pdf/2310.17680.pdf | nondescriptshadow | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17kffdl | false | null | t3_17kffdl | /r/LocalLLaMA/comments/17kffdl/chatgpt35turbo_is_a_20b_model_according_to_this/ | false | false | default | 1 | null | |
Does anyone know what tool is used to draw this style of image? | 1 | 2023-10-31T05:42:20 | FixPhysical284 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17ke17l | false | null | t3_17ke17l | /r/LocalLLaMA/comments/17ke17l/does_anyone_know_what_tool_is_used_to_draw_this/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-C5UhCquaA3U9RtzTrf6jUwweAtb_lUtlVWwU1V2urg', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?width=108&crop=smart&auto=webp&s=0218409313fd541fc273f738549832fb85b3af18', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?width=216&crop=smart&auto=webp&s=82fd0619809e37978c8a7c1b7568cacb46ee84a9', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?width=320&crop=smart&auto=webp&s=a9df3fc07ddc0757a6bda0d2325a4de9e40d9393', 'width': 320}, {'height': 417, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?width=640&crop=smart&auto=webp&s=3e8e76abd186b8f1d079b72ba45e17920a1d1286', 'width': 640}, {'height': 626, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?width=960&crop=smart&auto=webp&s=38db86b7c3728da386f88bf44276fd035cfdd055', 'width': 960}, {'height': 704, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?width=1080&crop=smart&auto=webp&s=eb288aa8110f8b29e98fb8ca8a91f4f6d0ebc2a9', 'width': 1080}], 'source': {'height': 939, 'url': 'https://preview.redd.it/kwzh46957hxb1.jpg?auto=webp&s=c5fa68942c79c9540663f5bc6b0a9946790db6e4', 'width': 1440}, 'variants': {}}]} | |||
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery | 21 | >Large Language Models (LLMs) have transformed the landscape of artificial intelligence, while their enormous size presents significant challenges in terms of computational costs. We introduce LoRAShear, a novel efficient approach to structurally prune LLMs and recover knowledge. Given general LLMs, LoRAShear first creates the dependency graphs to discover minimally removal structures and analyze the knowledge distribution. It then proceeds progressive structured pruning on LoRA adaptors and enables inherent knowledge transfer to better preserve the information in the redundant structures. To recover the lost knowledge during pruning, LoRAShear meticulously studies and proposes a dynamic fine-tuning schemes with dynamic data adaptors to effectively narrow down the performance gap to the full models. Numerical results demonstrate that by only using one GPU within a couple of GPU days, LoRAShear effectively reduced footprint of LLMs by 20% with only 1.0% performance degradation and significantly outperforms state-of-the-arts. The source code will be available at this https URL. | 2023-10-31T05:34:51 | https://arxiv.org/abs/2310.18356 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17kdxid | false | null | t3_17kdxid | /r/LocalLLaMA/comments/17kdxid/lorashear_efficient_large_language_model/ | false | false | default | 21 | null |
M3 on LLama | 1 | Apple unveiled their M3 chips. How well do you think their chips will run LLama models? | 2023-10-31T04:59:05 | Reddimus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17kde7p | false | null | t3_17kde7p | /r/LocalLLaMA/comments/17kde7p/m3_on_llama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'NaGzY094-_WhMaAjZ7srIETOhLOc1cLk1Xr7WbbaDF0', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?width=108&crop=smart&auto=webp&s=b3314f9a52fa6b93bd1bdab1a1dbb8a9a734049f', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?width=216&crop=smart&auto=webp&s=fe3a8d5178783a2c92fad6350af3176b39ec5ea9', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?width=320&crop=smart&auto=webp&s=5791e29005a5315d04a463baa4aa6e6e2059c2c3', 'width': 320}, {'height': 407, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?width=640&crop=smart&auto=webp&s=28d99538ef9f9c9fe7e3a9a4f0d05743b0d8e9c5', 'width': 640}, {'height': 610, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?width=960&crop=smart&auto=webp&s=e07878be1065a58c5ca56a06767145c8654d579c', 'width': 960}, {'height': 687, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?width=1080&crop=smart&auto=webp&s=51a645a3da20fa240c13865347ffa6be79f6382a', 'width': 1080}], 'source': {'height': 750, 'url': 'https://preview.redd.it/my4wt3uizgxb1.jpg?auto=webp&s=ea32ccbef4cf99f4cd0bd762622e0f1ec0640394', 'width': 1179}, 'variants': {}}]} | ||
Dolphin or Mistral function calling | 21 | So I'm looking for references on how to do function calling using Dolphin or Mistral models.
With my current prompt, I'm able to get it to choose an appropriate command for the task sometimes, but often it'll add multiple commands in one response. The other half of the time it produces correct commands & parameters in JSON format as requested. Sometimes it makes up commands it wants to use that don't exist in the command list.
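For reference, here's a stripped-down version of what I mean - the command names and schema below are placeholders, not my real setup:

```python
# Toy version of my setup: the model must answer with ONE JSON object
# naming a command from a fixed list. Command names are placeholders.
import json

SYSTEM_PROMPT = """You are a function-calling assistant.
Available commands (call exactly ONE per reply):
  web_search(query: string)
  read_file(path: string)
Reply with ONLY a JSON object: {"command": "<name>", "args": {...}}.
If no command applies, reply {"command": "none", "args": {}}."""

KNOWN = {"web_search", "read_file", "none"}

def parse_call(reply: str) -> dict:
    """Reject replies with broken JSON or made-up commands."""
    call = json.loads(reply)
    if call.get("command") not in KNOWN:
        raise ValueError(f"made-up command: {call.get('command')!r}")
    return call
```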
I'm just looking for hints at a more concrete prompt that will make these models effective in function calling.
Should I try whatever format OpenAI uses, seeing as how these smaller models are usually trained on synthetic data produced by OpenAI models?
Any guidance is appreciated 👍 | 2023-10-31T04:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/17kdby3/dolphin_or_mistral_function_calling/ | 1EvilSexyGenius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kdby3 | false | null | t3_17kdby3 | /r/LocalLLaMA/comments/17kdby3/dolphin_or_mistral_function_calling/ | false | false | self | 21 | null |
How does Apple’s new M3 128GB ram MacBook Pro compare with Nvidia A100? | 83 | In terms of AI use, especially LLMs.
$5000 USD for the 128GB ram M3 MacBook Pro is still much cheaper than A100 80 GB. | 2023-10-31T04:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/17kcgjv/how_does_apples_new_m3_128gb_ram_macbook_pro/ | tgredditfc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kcgjv | false | null | t3_17kcgjv | /r/LocalLLaMA/comments/17kcgjv/how_does_apples_new_m3_128gb_ram_macbook_pro/ | false | false | self | 83 | null |
What’s the go to capture + fine tuning toolkit. | 3 | I’m tired of rolling my own infra to capture chat input and outputs, label it, and feed it to a training pipeline.
Is there a go-to service / open source framework out there that will make all of this easier? | 2023-10-31T03:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/17kbtye/whats_the_go_to_capture_fine_tuning_toolkit/ | jsfour | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kbtye | false | null | t3_17kbtye | /r/LocalLLaMA/comments/17kbtye/whats_the_go_to_capture_fine_tuning_toolkit/ | false | false | self | 3 | null |
What kind of performance should we expect? | 3 | I'm only getting 0.8 tokens/second with my 3060 12gb using Zephyr 7b beta.
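My napkin math for what the card should manage if the model fits entirely in VRAM (spec-sheet numbers, not measurements):

```python
# Token generation is roughly memory-bandwidth bound. Approximate specs:
bandwidth_gb_s = 360   # RTX 3060 12GB memory bandwidth
model_gb = 4.1         # Zephyr 7B at ~4-bit quantization (Q4_K_M GGUF)
print(bandwidth_gb_s / model_gb)  # ~88 tokens/s theoretical ceiling
# Real-world lands well under the ceiling, but 0.8 t/s usually means the
# model is mostly running from system RAM / CPU rather than the GPU.
```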
I'll admit I barely know what I'm doing, but was I wrong to expect a little more? I was hoping for something at least a quarter the speed of gpt-3.5... | 2023-10-31T03:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/17kbola/what_kind_of_performance_should_we_expect/ | cosmicr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kbola | false | null | t3_17kbola | /r/LocalLLaMA/comments/17kbola/what_kind_of_performance_should_we_expect/ | false | false | self | 3 | null |
Streaming 7b models with llama cpp in python, possible? | 1 | Does anyone have experience streaming completions rather than waiting for the whole completion? If possible, are there any guides available on how to achieve this? | 2023-10-31T02:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/17kay5r/streaming_7b_models_with_llama_cpp_in_python/ | Massive-Spot302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kay5r | false | null | t3_17kay5r | /r/LocalLLaMA/comments/17kay5r/streaming_7b_models_with_llama_cpp_in_python/ | false | false | self | 1 | null |
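Something like this is what I'm picturing, using llama-cpp-python's stream=True (sketch only; the model path is a placeholder):

```python
# Streaming with llama-cpp-python: stream=True returns a generator of
# chunks instead of one final completion. Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")
for chunk in llm("Q: Name the planets of the solar system. A:",
                 max_tokens=128, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
```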
What are some ways commonly used in the industry to DEVELOP & most importantly; VALIDATE prompts for GPT? | 1 | I've got a project at work, and I'm building something on the side, both of which now require heavy use of the GPT-3.5 and GPT-4 APIs (of course, my question isn't constrained to GPT; this is open to all LLMs, including LLaMA). My questions are the following:
**How can I optimize my time in developing prompts that can yield the optimal** (and I know this is a long shot, but the BEST) **possible outputs GPT can generate for a given task?** I want to hear from some folks who've been developing products around GPT and LLMs.
**How do you validate your prompts quickly and effectively?** I know a simple answer is "just test the prompts a bunch of times and see how the outputs are looking".
**Is there some sort of hyperparameter-tuning-type methodology that revolves around prompts and prompting a GPT model?** For example, what if you give it a prompt and an example gold-standard output, and there's an iterative process whereby it EDITS THE PROMPT ITSELF (adding/removing words, tweaking it, etc.) to make the model generate as close as it can to the gold-standard output. | 2023-10-31T02:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/17karmm/what_are_some_ways_commonly_used_in_the_industry/ | anasp1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17karmm | false | null | t3_17karmm | /r/LocalLLaMA/comments/17karmm/what_are_some_ways_commonly_used_in_the_industry/ | false | false | self | 1 | null |
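Concretely, the kind of loop I'm imagining (toy sketch - `call_llm` and the overlap metric are stand-ins for a real completion API and a real similarity measure):

```python
# Toy prompt-search loop: score each candidate prompt by how close the
# model's output gets to a gold-standard answer, then keep the best.
# `call_llm` is a stand-in for whatever completion API is being used.
def overlap(output: str, gold: str) -> float:
    a, b = set(output.lower().split()), set(gold.lower().split())
    return len(a & b) / max(len(b), 1)  # crude word-overlap score

def best_prompt(candidates, task_input, gold, call_llm):
    scored = [(overlap(call_llm(p.format(input=task_input)), gold), p)
              for p in candidates]
    return max(scored)  # (score, winning prompt template)
```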
Do you follow Chinese LLM development? | 36 | Given that we got models like Qwen-14B that seem quite good for their size, I got curious about Chinese LLM research and tried to find some sites and found:
Leaderboards:
https://opencompass.org.cn/leaderboard-llm
https://cevalbenchmark.com/static/leaderboard.html
This one looks like a Chinese Hugging Face equivalent?
https://modelscope.cn/home
I was randomly checking one of the top models on C-Eval that did not seem to have any discussion here or any English information on it, https://huggingface.co/Duxiaoman-DI/XuanYuan-70B , which, according to Google Translate, is a Llama-2-70B model trained on Chinese financial data, with 8192-token context. I converted it to GGUF Q6_K and tested it, and yeah, it at least doesn't obviously suck. Generating text with 7000 tokens of context without any RoPE tricks still produces coherent text. The model speaks English just fine.
My questions for this subreddit are:
Do you follow developments from China specifically?
What sites/people/places do you follow? Can you share? Or other insights? | 2023-10-31T02:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/17ka7lx/do_you_follow_chinese_llm_development/ | noeda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ka7lx | false | null | t3_17ka7lx | /r/LocalLLaMA/comments/17ka7lx/do_you_follow_chinese_llm_development/ | false | false | self | 36 | null |
Apple M3 performance | 9 | Does anyone with experience with the M2s have a sense of how the M3 might perform?
I’m currently running Zephyr 7B batches on an Nvidia A6000 (not Ada). Is the performance likely to be comparable? | 2023-10-31T01:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/17k9ybn/apple_m3_performance/ | Puzzleheaded-Fee5917 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k9ybn | false | null | t3_17k9ybn | /r/LocalLLaMA/comments/17k9ybn/apple_m3_performance/ | false | false | self | 9 | null |
Google Brain cofounder says Big Tech companies are lying about the risks of AI wiping out humanity because they want to dominate the market | 606 | 2023-10-31T01:35:34 | https://www.businessinsider.com/andrew-ng-google-brain-big-tech-ai-risks-2023-10?amp | ninjasaid13 | businessinsider.com | 1970-01-01T00:00:00 | 0 | {} | 17k9nmf | false | null | t3_17k9nmf | /r/LocalLLaMA/comments/17k9nmf/google_brain_cofounder_says_big_tech_companies/ | false | false | 606 | {'enabled': False, 'images': [{'id': '5Byrm2veQdYf7QQA2hSCH3ykAc83v1vznH42uQxbkuU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?width=108&crop=smart&auto=webp&s=dea9f5100e0c0e6f1e4ad4a5268aeef95fd62db2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?width=216&crop=smart&auto=webp&s=e65073f7fc285ce9371828196dd521dee828baed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?width=320&crop=smart&auto=webp&s=384c71c6f0432bef30926f5ab58f30a049348870', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?width=640&crop=smart&auto=webp&s=a67a83d5e3bc1bc0b5120c85de96f64c17cfd306', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?width=960&crop=smart&auto=webp&s=7c0345f51a2c61c16071d299644c58b676818d59', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?width=1080&crop=smart&auto=webp&s=823c34e4e40cf8ead8f2640a8ee4af421df618be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bRm_F3z7OjGilk9ZQNhewdsWEaQb2Go7PO53Cm1deSs.jpg?auto=webp&s=5677692dbf57862596159e5fe97928e2b440754e', 'width': 1200}, 'variants': {}}]} | ||
Are there any examples of using LLM to interacting with survey data in table format? | 1 | My question might sound very basic. I've mostly seen use cases of LLMs built to access private data like PDFs and word documents for answering questions. My question is about training a LLM to read tabular data like survey data in a SPSS or CSV file format.
The intention: Instead of going through the whole data or frequency tables or crosstabs, a novice can simply type in a question via LLM chatbot and get basic responses. For example, for a survey about shopping habits across countries, the user can simply type in 'How is an Indian shopper different from rest of the world?' and get the key differences as output without having to manually create a crosstab and compare. | 2023-10-31T01:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/17k9di4/are_there_any_examples_of_using_llm_to/ | Adventurous_Top8864 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k9di4 | false | null | t3_17k9di4 | /r/LocalLLaMA/comments/17k9di4/are_there_any_examples_of_using_llm_to/ | false | false | default | 1 | null |
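One pattern I've seen suggested (sketch; the file and column names below are made up): precompute the crosstab with pandas and pass it to the LLM as context, rather than expecting the model to read the raw file.

```python
# Precompute a small crosstab with pandas and hand it to the LLM as
# prompt context. File and column names are made up for illustration.
import pandas as pd

df = pd.read_csv("shopping_survey.csv")
tab = pd.crosstab(df["country"], df["shops_online_weekly"], normalize="index")

prompt = (
    "Survey crosstab (rows = country, values = share of respondents):\n"
    f"{tab.to_string()}\n\n"
    "Question: How is an Indian shopper different from the rest of the world?"
)
```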
Question about embarrassing but real things blamed on AI. | 2 | I have been seriously wondering why there doesn't seem to have been some embarrassing accidental recording or embarrassing photo that's been blamed on AI. I don't pay attention to the news, so maybe I've missed it happen. I don't know. I would have just figured someone by now would have been caught with their pants down and said "the AI done it!"
Have I just missed people doing this or is there something preventing people from making such claims? Thanks. | 2023-10-31T00:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/17k8uex/question_about_embarrassing_but_real_things/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k8uex | false | null | t3_17k8uex | /r/LocalLLaMA/comments/17k8uex/question_about_embarrassing_but_real_things/ | false | false | self | 2 | null |
The smartest model under the sun: Prof. McSmarty BS 13b GPTQ | 26 | 2023-10-31T00:33:33 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17k8db0 | false | null | t3_17k8db0 | /r/LocalLLaMA/comments/17k8db0/the_smartest_model_under_the_sun_prof_mcsmarty_bs/ | false | false | 26 | {'enabled': True, 'images': [{'id': 'HSNcqKH8vkoba1WXxsWupcg9L1w2cccMhnZUkx-jqf4', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/by9hdxg4ofxb1.jpg?width=108&crop=smart&auto=webp&s=d70558738d362c72501cf3a83aa75e7b0fcafc29', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/by9hdxg4ofxb1.jpg?width=216&crop=smart&auto=webp&s=3b98191c35ea6be2d196cc06e71f4fd18b33f1c1', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/by9hdxg4ofxb1.jpg?width=320&crop=smart&auto=webp&s=9f0b3892186e69cf5b428237ffed8e6cb5c8bde7', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/by9hdxg4ofxb1.jpg?width=640&crop=smart&auto=webp&s=2ba418040c65259ee8916409822cbecd2cc8049f', 'width': 640}], 'source': {'height': 735, 'url': 'https://preview.redd.it/by9hdxg4ofxb1.jpg?auto=webp&s=55c644f9b408ba6bb59d0d483e71c4f143e9a26e', 'width': 735}, 'variants': {}}]} | |||
Biden Executive Order regulates VERY large models | 93 | [Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
Basically - **"any model trained with ~28M H100 hours**, which is around $50M USD, or **any cluster with 10^20 FLOP/s**, which is around 50,000 H100s, which only two companies currently have" - hat-tip to nearcyan on Twitter for this calculation.
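Sanity-checking that figure (my own napkin math, assuming ~1e15 FLOP/s per H100 - roughly dense BF16 throughput - at perfect utilization):

```python
# 10^26 total operations vs. one H100 at ~1e15 FLOP/s:
threshold_ops = 1e26
h100_flop_per_s = 1e15                       # rough dense BF16 figure
hours = threshold_ops / h100_flop_per_s / 3600
print(f"~{hours / 1e6:.0f}M H100-hours")     # ~28M, matching the estimate
```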
Specific language below.
" (i) any model that was trained using a quantity of computing power greater than 1026 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 1023 integer or floating-point operations; and
(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI."
​ | 2023-10-31T00:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/17k7obo/biden_executive_order_regulates_very_large_models/ | PookaMacPhellimen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k7obo | false | null | t3_17k7obo | /r/LocalLLaMA/comments/17k7obo/biden_executive_order_regulates_very_large_models/ | false | false | self | 93 | {'enabled': False, 'images': [{'id': 'Qrl2WXotl30OMA9tBryQPbOJ5cES0zwcjCSWkKUi2fI', 'resolutions': [], 'source': {'height': 1, 'url': 'https://external-preview.redd.it/py9BdpPVXuz_9y2hpVru9I9x8g8RxFBDdHKnnf42ZdQ.jpg?auto=webp&s=9a209662f8757408a5c810b2e280782d843a9c91', 'width': 1}, 'variants': {}}]} |
RedPajama Data V2 has been released with 30 Trillion tokens of data | 4 | 2023-10-30T23:14:20 | https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2 | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17k6ns7 | false | null | t3_17k6ns7 | /r/LocalLLaMA/comments/17k6ns7/redpajama_data_v2_has_been_released_with_30/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'Ut-Lmy9RlvlxW46uvDc_iwIXzQPx6d4SxnVErgAP5xM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=108&crop=smart&auto=webp&s=7ec254579c2aa127d04c1521374c2af5322c6ba2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=216&crop=smart&auto=webp&s=78c6eb2969f1f5f30ca4e6d082dc96908f2b2ff7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=320&crop=smart&auto=webp&s=54aca89e4a518fa280317924c1efac1b7dc65db0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=640&crop=smart&auto=webp&s=41f0d6df4e9cc1dbdcf3449fac8afda77ac3cc20', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=960&crop=smart&auto=webp&s=249e45c328135d4e39e4d07fe08d094171531250', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=1080&crop=smart&auto=webp&s=0a48138b7a055f0c9bd5a193ec6912f87f2abfde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?auto=webp&s=184e5ed2ae5283f00088a751fcd58bccf60b8e2a', 'width': 1200}, 'variants': {}}]} | ||
RedPajama Data V2 has been released | 1 | 2023-10-30T23:12:08 | https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2 | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17k6m4x | false | null | t3_17k6m4x | /r/LocalLLaMA/comments/17k6m4x/redpajama_data_v2_has_been_released/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Ut-Lmy9RlvlxW46uvDc_iwIXzQPx6d4SxnVErgAP5xM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=108&crop=smart&auto=webp&s=7ec254579c2aa127d04c1521374c2af5322c6ba2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=216&crop=smart&auto=webp&s=78c6eb2969f1f5f30ca4e6d082dc96908f2b2ff7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=320&crop=smart&auto=webp&s=54aca89e4a518fa280317924c1efac1b7dc65db0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=640&crop=smart&auto=webp&s=41f0d6df4e9cc1dbdcf3449fac8afda77ac3cc20', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=960&crop=smart&auto=webp&s=249e45c328135d4e39e4d07fe08d094171531250', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?width=1080&crop=smart&auto=webp&s=0a48138b7a055f0c9bd5a193ec6912f87f2abfde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ARrljaXiQbkJ3kxB909c2awkf_3TavPFMsQi8lT0evQ.jpg?auto=webp&s=184e5ed2ae5283f00088a751fcd58bccf60b8e2a', 'width': 1200}, 'variants': {}}]} | ||
Fine-Tuning LLM with an "Infinite" Dataset | 1 | If I instruction tune an LLM with a dataset where each sample is randomly generated and fit into some set of prompt templates so that my dataset is effectively very large in theory, and I train the model for a certain number of steps, is that worse than just training on a dataset of a fixed size? I’d assume it is worse because the LLM won’t see each instruction example more than once most likely, so it probably can’t learn patterns from the data very well. I've trained a couple models using this approach for thousands of steps and it seems like the model hasn't really learned anything that could be applied to complicated test examples. | 2023-10-30T22:43:53 | https://www.reddit.com/r/LocalLLaMA/comments/17k5z91/finetuning_llm_with_an_infinite_dataset/ | LyGmAbAllz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k5z91 | false | null | t3_17k5z91 | /r/LocalLLaMA/comments/17k5z91/finetuning_llm_with_an_infinite_dataset/ | false | false | self | 1 | null |
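Something like this toy setup, to be concrete (the templates here are a made-up stand-in for my real generators):

```python
# What "randomly generated" means here: every record is drawn fresh from
# templates, so the effective dataset is unbounded.
import random

TEMPLATES = ["Add {a} and {b}.", "What is {a} plus {b}?"]

def sample() -> dict:
    a, b = random.randint(0, 99), random.randint(0, 99)
    return {"instruction": random.choice(TEMPLATES).format(a=a, b=b),
            "output": str(a + b)}
```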
VRAM very high when using PEFT + FSDP | 6 | I'm trying to finetune Llama 34B. I'd expect approx 34 * 2 GB of VRAM usage + a bit extra for the LoRA params (34 x 2 because it should get loaded in bf16, right?). However, on my training VM with 192 GB VRAM across 8 GPUs I'm consistently getting OOM errors.
So what could be happening here? It's as if the model is being loaded into the GPUs as fp32 instead of bf16, but I'm not sure. Here is the training command I am using to run [llama-recipes/finetuning](https://github.com/facebookresearch/llama-recipes/):
```
torchrun --nnodes 1 --nproc_per_node 8 examples/finetuning.py \
--enable_fsdp --pure_bf16 \
--low_cpu_fsdp \
--use_fast_kernels \
--batch_size_training 1 \
--context_length 16384 \
--use_peft --peft_method lora \
--fsdp_peft_cpu_offload_for_save \
--num_epochs 100 \
--dataset "custom_dataset" --custom_dataset.file "recipe_format_ds.py" \
--model_name CodeLlama-34b-Instruct-hf \
--output_dir full-finetune-test
``` | 2023-10-30T22:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/17k5ynz/vram_very_high_when_using_peft_fsdp/ | fbgc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k5ynz | false | null | t3_17k5ynz | /r/LocalLLaMA/comments/17k5ynz/vram_very_high_when_using_peft_fsdp/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'IRhzaXpYIWLjNW9smvzvLLDcDxrBxsb3deyNnSHRCkk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?width=108&crop=smart&auto=webp&s=23446f517b9d322ee1941831d56e05eb98c62285', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?width=216&crop=smart&auto=webp&s=3a3d5ca5e8f546f5161ee55036bc62be6629fc72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?width=320&crop=smart&auto=webp&s=34cf07088dafc9adfb0670e274e402cc6c17de95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?width=640&crop=smart&auto=webp&s=eb7ae8cc8a169cb4dd0f0d8cd7b9c094758b7482', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?width=960&crop=smart&auto=webp&s=3e861af371302654fcbf932a6e758b68a7860e4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?width=1080&crop=smart&auto=webp&s=3138d7ea6a250a21b6f20853e641dd20d88cc371', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bD_XX1n-n8-GHd2LdAbfuCdOwxKCc6QZPaWM_EmRZL0.jpg?auto=webp&s=0e65e80883cfea95c4f9171c3e0e1967c1a5e5cc', 'width': 1200}, 'variants': {}}]} |
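For what it's worth, my own napkin math says the sharded weights can't be the whole story, which makes me suspect activation memory at the 16384 context length (sketch below; it ignores activations, gradients, and optimizer state entirely):

```python
# Weight memory alone, fully sharded by FSDP across 8 GPUs (bf16):
params = 34e9
weight_gb = params * 2 / 1e9      # ~68 GB total in bf16
print(weight_gb / 8)              # ~8.5 GB per GPU when sharded
# Activations scale with context length, so at 16384 tokens they can
# dwarf this - likely where the OOM is coming from.
```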
SynthIA 70B on a single 3090, 0.17 tokens/s | 2 | CPU is a Ryzen 7 3700X, with 32GB of DDR4-3000.
I loaded the model with ExLlamav2_HF and a 2048 sequence length. It spills, a lot: 11.5GB to be exact. But I read that with the right specs I could expect 2-7 tokens/s, which would be more than bearable.
Is there any way I could optimize it further? | 2023-10-30T22:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/17k5mdm/synthia_70b_on_a_single_3090_017tokenss/ | Longjumping-Bake-557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k5mdm | false | null | t3_17k5mdm | /r/LocalLLaMA/comments/17k5mdm/synthia_70b_on_a_single_3090_017tokenss/ | false | false | self | 2 | null |
PSA: GBNF exists. Use it. | 52 | [GBNF](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md), a rebranding of [Backus-Naur Form](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form), is the kind of Regex you'd get if you somehow made Regex more obtuse and clunky, and also way less powerful. It's like going to the dentist in text form. It is bad, and should feel bad.
HOWEVER, if you tame this vile beast of a language you can make AI respond to you in pretty much any way you like. And you should.
You can use it by pasting GBNF into SillyTavern, Oobabooga, or probably something else you might be using. First, click on the
[settings thingie](https://preview.redd.it/4s9l7atgzexb1.png?width=154&format=png&auto=webp&s=1c373108f12fdbfaff1bb4cff92b0251d023a3b4)
then scroll down and paste it like so:
[just pasting is enough.](https://preview.redd.it/6pfniybkzexb1.png?width=923&format=png&auto=webp&s=f94b9c6d023ca9ee8fb978c1a970174e7bd5a046)
Then, your next messages will be formatted like you wanted. In this case, every message will be `"quoted text"`, `*action text*` or multiple instances. It should be simple to understand.
Here's that one in case you want it; I just wrote it and tested it:
root ::= (actions | quotes) (whitespace (actions | quotes))*
actions ::= "*" content "*"
quotes ::= "\"" content "\""
content ::= [^*"]+
whitespace ::= space | tab | newline
space ::= " "
tab ::= "\t"
newline ::= "\n"
Even if you don't know Regex, this language should be easy to pick up, and it will allow you to make LLMs always respond in a particular format (very useful in some cases!)
You can also look at the [examples](https://github.com/ggerganov/llama.cpp/tree/master/grammars).
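If you're driving llama.cpp from Python instead of a UI, the same grammar plugs in at sampling time; a minimal sketch, assuming llama-cpp-python and a placeholder GGUF path:

```
from llama_cpp import Llama, LlamaGrammar

# The same quotes/actions grammar as above, enforced during generation.
grammar = LlamaGrammar.from_string(r'''
root ::= (actions | quotes) (whitespace (actions | quotes))*
actions ::= "*" content "*"
quotes ::= "\"" content "\""
content ::= [^*"]+
whitespace ::= space | tab | newline
space ::= " "
tab ::= "\t"
newline ::= "\n"
''')

llm = Llama(model_path="model.gguf")  # placeholder path
out = llm("Greet me in character.", grammar=grammar, max_tokens=64)
print(out["choices"][0]["text"])
```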
There are websites to test BNF [like this one](https://bnfplayground.pauliankline.com/) but since it's a badly designed, badly implemented language from hell, none of them will work and you will have to look at the console to find out why this ugly duckling of a language didn't want to work *this time*. Imagine if Batch files had regular expressions, it'd probably look like this. **All of that said, this is pretty fucking useful!** So thanks to whoever did the heavy lifting to implement this. | 2023-10-30T22:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/17k5mcy/psa_gbnf_exists_use_it/ | Dead_Internet_Theory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k5mcy | false | null | t3_17k5mcy | /r/LocalLLaMA/comments/17k5mcy/psa_gbnf_exists_use_it/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} | |
Any tricks to speed up 13B models on a 3090? | 8 | Are there any tricks to speed up 13B models on a 3090?
Currently I'm using the regular Hugging Face model, quantized to 8-bit by a GPTQ-capable fork of KoboldAI.
Especially when the context limit changes, it's pretty slow and far from even remotely real time. | 2023-10-30T22:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/17k5i1o/any_tricks_to_speed_up_13b_models_on_a_3090/ | DustGrouchy1792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k5i1o | false | null | t3_17k5i1o | /r/LocalLLaMA/comments/17k5i1o/any_tricks_to_speed_up_13b_models_on_a_3090/ | false | false | self | 8 | null |
What is your biggest challenge while building a RAG? | 3 | Just want to know: what are some of the biggest challenges involved in building good RAG systems? | 2023-10-30T21:53:29 | https://www.reddit.com/r/LocalLLaMA/comments/17k4su2/what_is_your_biggest_challenge_while_building_a/ | Spirited-Order4409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k4su2 | false | null | t3_17k4su2 | /r/LocalLLaMA/comments/17k4su2/what_is_your_biggest_challenge_while_building_a/ | false | false | self | 3 | null |
RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models | 127 | **Blog**: [https://together.ai/blog/redpajama-data-v2](https://together.ai/blog/redpajama-data-v2)
**Hugging Face**: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
**GitHub**: [https://github.com/togethercomputer/RedPajama-Data](https://github.com/togethercomputer/RedPajama-Data)
**Description**:
>RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the CCNet pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated. | 2023-10-30T21:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/17k4ghf/redpajamadatav2_an_open_dataset_with_30_trillion/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k4ghf | false | null | t3_17k4ghf | /r/LocalLLaMA/comments/17k4ghf/redpajamadatav2_an_open_dataset_with_30_trillion/ | false | false | self | 127 | {'enabled': False, 'images': [{'id': 'UMuTBgV-MLBqE8FXJARb58jFnz0fpqBFgS89fw0YpoY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=108&crop=smart&auto=webp&s=241cc4ceed8acc1ef9c179a6ba2b303e446dec64', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=216&crop=smart&auto=webp&s=718017e14d2961fba420bd6dc54d4564a2711c81', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=320&crop=smart&auto=webp&s=7a32a1a594c3471d4dbb41547238e2a07d6d1ac9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=640&crop=smart&auto=webp&s=215879943fcd660ee8aa2cbfda521d72cc07722e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=960&crop=smart&auto=webp&s=b6272f3fb338479bec387c6bb818ddebbe6fd22b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=1080&crop=smart&auto=webp&s=c6aa074b7312524e0c61c5189c0b218aaf9ff7a4', 'width': 1080}], 'source': {'height': 844, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?auto=webp&s=b6facedcc07b43443322e40135254b6964a0d42d', 'width': 1500}, 'variants': {}}]} |
I want to host an LLM for code completion on my Local Network | 1 | [removed] | 2023-10-30T20:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/17k3hoa/i_want_to_host_an_llm_for_code_completion_on_my/ | Wonderful_Charity_35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k3hoa | false | null | t3_17k3hoa | /r/LocalLLaMA/comments/17k3hoa/i_want_to_host_an_llm_for_code_completion_on_my/ | false | false | self | 1 | null |
Relevance Extraction in RAG pipelines | 26 | I came across this interesting problem in RAG, what I call Relevance Extraction.
After retrieving relevant documents (or chunks), these chunks are often large and may contain several portions **irrelevant** to the query at hand. Stuffing the entire chunk into an LLM prompt impacts token cost as well as response accuracy (distracting the LLM with irrelevant text), and can also cause you to bump into context-length limits.
So a critical step in most pipelines is **Relevance Extraction**: use the LLM to extract **verbatim** only the portions relevant to the query. This is known by other names, e.g. LangChain calls it Contextual Compression, and the [RECOMP paper](https://twitter.com/manelferreira_/status/1713214439715938528) calls it Extractive Compression.
Thinking about how best to do this, I realized it is **highly inefficient** to simply ask the LLM to "parrot" out relevant portions of the text: this is obviously slow, and also consumes valuable token generation space and can cause you to bump into context-length limits (and of course is expensive, e.g. for gpt4 we know generation is 6c/1k tokens vs input cost of 3c/1k tokens).
I realized the best way (or at least a good way) to do this is to **number** the sentences and have the LLM simply spit out the relevant sentence **numbers.** Langroid's unique Multi-Agent + function-calling architecture allows an elegant implementation of this, in the [RelevanceExtractorAgent](https://github.com/langroid/langroid/blob/main/langroid/agent/special/relevance_extractor_agent.py) : The agent annotates the docs with sentence numbers, and instructs the LLM to pick out the **sentence-numbers** relevant to the query, rather than whole sentences using a function-call ([SegmentExtractTool](https://github.com/langroid/langroid/blob/main/langroid/agent/tools/segment_extract_tool.py)), and the agent's function-handler interprets this message and strips out the indicated sentences by their numbers. To extract from a set of passages, langroid automatically does this async + concurrently so latencies in practice are much, much lower than the sentence-parroting approach.
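For readers who want the shape of the idea without pulling in the library, here is a minimal sketch (naive regex sentence splitting and hypothetical helper names; the real implementation is linked above):

```
import re

def annotate(passage: str) -> tuple[str, list[str]]:
    # Number each sentence so the LLM can answer with indices, not prose.
    sents = re.split(r"(?<=[.!?])\s+", passage.strip())
    numbered = " ".join(f"[{i}] {s}" for i, s in enumerate(sents, 1))
    return numbered, sents

def extract(sents: list[str], ids: list[int]) -> str:
    # The LLM replies with e.g. [3]; we strip the sentences out locally, so
    # generation cost is a handful of digits instead of whole sentences.
    return " ".join(sents[i - 1] for i in ids if 1 <= i <= len(sents))

numbered, sents = annotate("Cats purr. Dogs bark. Paris is in France.")
# The prompt would pair the query with `numbered` and ask for numbers only.
print(extract(sents, [3]))  # -> "Paris is in France."
```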
\[FD -- I am the lead dev of [Langroid](https://github.com/langroid/langroid)\]
I thought this **numbering** idea is a fairly obvious idea in theory, so I looked at LangChain's equivalent [LLMChainExtractor.compress\_docs](https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression?ref=blog.langchain.dev) (they call this Contextual Compression) and was surprised to see it is the simple "**parrot**" method, i.e. the LLM writes out whole sentences verbatim from its input. I thought it would be interesting to compare Langroid vs LangChain, you can see it in this [Colab](https://colab.research.google.com/drive/1RDPCR2xNuBffcmpUuPIXYDRG3SXIJC5F) .
On the specific example in the notebook, with gpt4, the Langroid **numbering** approach is 22x faster (LangChain takes 145 secs vs. under 7 secs for Langroid) and 36% cheaper (~900 output tokens with LangChain vs 40 with Langroid) than LangChain's **parrot** method (I promise this name is *not* inspired by their logo :)
I wonder if anyone had thoughts on relevance extraction, or other approaches. At the very least, I hope langroid's implementation is useful to you -- you can use the [DocChatAgent.get\_verbatim\_extracts(query, docs)](https://github.com/langroid/langroid/blob/main/langroid/agent/special/doc_chat_agent.py#L804) as part of your pipeline, regardless of whether you are using [langroid](https://github.com/langroid/langroid) for your entire system or not.
​ | 2023-10-30T20:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/17k39es/relevance_extraction_in_rag_pipelines/ | SatoshiNotMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k39es | false | null | t3_17k39es | /r/LocalLLaMA/comments/17k39es/relevance_extraction_in_rag_pipelines/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'HVh9Sny5l6xGCEfhrNnt7ADDZpBGVn4VejrhCk08t1I', 'resolutions': [{'height': 27, 'url': 'https://external-preview.redd.it/uhSfsvG9xouKferDMe5t93nGUuVyyu9ppA85YzogQSM.jpg?width=108&crop=smart&auto=webp&s=f5bf7ce604153181eeed4092593185ebbce71ebc', 'width': 108}], 'source': {'height': 35, 'url': 'https://external-preview.redd.it/uhSfsvG9xouKferDMe5t93nGUuVyyu9ppA85YzogQSM.jpg?auto=webp&s=d1d84c210a6e695ecb8ee6107c654508eea5abec', 'width': 140}, 'variants': {}}]} |
SCALE: An Innovative Approach for Factual Inconsistency Detection in Long Documents | 16 | Hello LLM enthusiasts!
Are you interested in generative AI models and the ongoing challenge of tackling hallucinations across various tasks? Our recent research introduces a task-agnostic model called SCALE (Source Chunking Approach for Large-scale inconsistency Evaluation) that has been engineered to detect factual inconsistencies in longer inputs effectively.
[SCALE chunking](https://preview.redd.it/5uokrxj9fexb1.png?width=1964&format=png&auto=webp&s=4fe6c0f8cd13bd9858bb6672165bb194b0be6dc1)
Previous approaches tend to be slow when processing longer documents, and their performance may not be consistent across different tasks. In contrast, SCALE leverages Natural language inference (NLI) and uses large text chunks to condition over long texts, thereby achieving state-of-the-art hallucination detection performance over a range of tasks.
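To make the chunk-conditioned NLI idea concrete, here is a rough sketch of scoring a claim against large source chunks (the model name and label string are illustrative, and SCALE's actual scoring differs in its details):

```
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def consistency(claim: str, source_chunks: list[str]) -> float:
    # Score the claim against each large chunk as an NLI premise/hypothesis
    # pair, then take the best entailment probability across chunks.
    scores = []
    for chunk in source_chunks:
        out = nli({"text": chunk, "text_pair": claim}, top_k=None)
        ent = next(d["score"] for d in out if d["label"] == "ENTAILMENT")
        scores.append(ent)
    return max(scores)
```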
SCALE is also designed with interpretability in mind. We've used a novel approach that leverages the chunking mechanism to offer explanations for SCALE's decisions by retrieving relevant source sentences, which allows for a finer examination of its performance.
In our evaluations, SCALE outperforms existing methods not only on standard benchmarks for factual inconsistency detection, but also on ScreenEval, a new dataset of long-form dialogues that we built. Moreover, SCALE displays superior efficiency and model explanation capabilities, important aspects often overlooked in existing metrics.
We've made SCALE [open-source](https://github.com/asappresearch/scale-score) and released our ScreenEval dataset to facilitate further research in this area. Check out our paper recently accepted to EMNLP 2023, ["Fast and Accurate Factual Inconsistency Detection Over Long Documents"](https://arxiv.org/abs/2310.13189), for all the details!
Your inputs, feedback, and use cases would be invaluable to us, so feel free to check out our work and ask any questions you might have! | 2023-10-30T20:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/17k2xod/scale_an_innovative_approach_for_factual/ | bmlattimer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k2xod | false | null | t3_17k2xod | /r/LocalLLaMA/comments/17k2xod/scale_an_innovative_approach_for_factual/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'vOtDgfgWNjYmeO8oR_Xewwr1pktU-A92DtA6qfnm9Mc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?width=108&crop=smart&auto=webp&s=13f86e326be04e7caeb1709bf620e1f8544118c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?width=216&crop=smart&auto=webp&s=7aeff51ecbaa1e9cc3b803b6619c477f591d29e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?width=320&crop=smart&auto=webp&s=3e2d3c351889283f7edeb58a62dae09cabf3349f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?width=640&crop=smart&auto=webp&s=776ecbff39a0fbe3edd031bcd0b629f44d6c44b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?width=960&crop=smart&auto=webp&s=a098479fdf41c3a900e4c201adb09e376675894e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?width=1080&crop=smart&auto=webp&s=2a12000fc41ae6fd6f18438899ddf23c3a74dde8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YT2KFBHafaeL9j3IlvMQ1_f-1XvGEOh6NdcCJrKwFCw.jpg?auto=webp&s=94715b6ed586cf7b3e5438d7373a4ab680c5e311', 'width': 1200}, 'variants': {}}]} | |
I scaled Mistral 7B to 200 GPUs in less than 5 minutes | 76 | I've been working on a project with my roommate to make it incredibly simple to run batch inference on LLMs while leveraging a massive amount of cloud resources. We finally got the tool working and created a tutorial on how to use it on Mistral 7B.
Also, if you're a frequent HuggingFace user you can easily adapt the code to run inference on other LLM models. Please test it out and provide feedback, I feel really good about how easy it is to use but I want to figure out if anything is not intuitive. I hope the community is able to get some value out of it! Here is the link to the tutorial [https://docs.burla.dev/Example:%20Massively%20Parallel%20Inference%20with%20Mistral-7B](https://docs.burla.dev/Example:%20Massively%20Parallel%20Inference%20with%20Mistral-7B) | 2023-10-30T20:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/17k2x62/i_scaled_mistral_7b_to_200_gpus_in_less_than_5/ | Ok_Post_149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k2x62 | false | null | t3_17k2x62 | /r/LocalLLaMA/comments/17k2x62/i_scaled_mistral_7b_to_200_gpus_in_less_than_5/ | false | false | self | 76 | {'enabled': False, 'images': [{'id': 'VkvJ2zk3G88CDDxNk_lS-uilGBXZi0Gzc-1GYg2ZTqU', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=108&crop=smart&auto=webp&s=34d9a27a84ddd2ef52daeb04dad944a9b72dca3b', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=216&crop=smart&auto=webp&s=84fa92fc270320d6a53a61406a442ccbbc9fcd79', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=320&crop=smart&auto=webp&s=e3f6a652b6017832e3edf7e85f0804c80b58f14b', 'width': 320}, {'height': 419, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=640&crop=smart&auto=webp&s=45c5ac3d4dadd861aeba8fe7e459c2ae3f46d509', 'width': 640}, {'height': 629, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=960&crop=smart&auto=webp&s=b78f5d49ecc343797c6bf982ee0724cde0b44ecb', 'width': 960}, {'height': 708, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=1080&crop=smart&auto=webp&s=e55d60404d067c73ab64bb298cb7c64686eb99c4', 'width': 1080}], 'source': {'height': 1121, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?auto=webp&s=9d927f17e42af5eb32f16b5f24cbf45226ae79e9', 'width': 1709}, 'variants': {}}]} |
I don't understand Mistral and context size, honestly. | 42 | Hi, I have searched for a long time on this subreddit, in Ooba's documentation, Mistral's documentation and everything, but I just can't find what I am looking for.
I see everyone claiming Mistral can handle up to a 32k context size; however, while it technically won't refuse to generate anything above roughly 8k, the output is just not good. I have it loaded in Oobabooga's text-generation-webui and am using the API through SillyTavern. I loaded the plain Mistral 7B just to check, but with my current 12k story, all it can generate is gibberish if I give it the full context. However, I also checked using other fine-tunes of Mistral.
What am I doing wrong? I am using the GPTQ version on my RX 7900 XTX. Is it just advertising that it won't crash until 32k, or am I doing something wrong that keeps me from getting coherent output above 8k? I did mess with the alpha values, and while doing so does eliminate the gibberish, I get the impression that quality suffers somehow. | 2023-10-30T20:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/17k2mwq/i_dont_understand_mistral_and_context_size/ | anti-lucas-throwaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k2mwq | false | null | t3_17k2mwq | /r/LocalLLaMA/comments/17k2mwq/i_dont_understand_mistral_and_context_size/ | false | false | self | 42 | null |
llama.cpp for normies: FreeChat is now live on the Mac App Store | 27 | Hey y'all, quick update about my open source llama.cpp app, FreeChat. As of this weekend it's live on the Mac App Store. Big thanks to this community for all the feedback and testing; would not have gotten here without y'all. Next I'm working on the most common request I get here: a model catalog.
Have friends who aren't hackers who you think should try local AI? Send them a link! Hoping to expand local AI usage by making it dead simple.
App Store! [https://apps.apple.com/us/app/freechat/id6458534902](https://apps.apple.com/us/app/freechat/id6458534902)
And fOR tHe HaCkers: [https://github.com/psugihara/FreeChat](https://github.com/psugihara/FreeChat) | 2023-10-30T19:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/17k22lj/llamacpp_for_normies_freechat_is_now_live_on_the/ | sleeper-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k22lj | false | null | t3_17k22lj | /r/LocalLLaMA/comments/17k22lj/llamacpp_for_normies_freechat_is_now_live_on_the/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'icuMuhzXDuYMb9EgOWhfmrjVA9e-x9UXBIIpBmPRo80', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oP4scis9R5vPzQ7zl5b3qpbtwIp8ShsdYQy3iV2xSWM.jpg?width=108&crop=smart&auto=webp&s=7e87bd23d738600d48f6b4efeef5b1c1fd2765dc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/oP4scis9R5vPzQ7zl5b3qpbtwIp8ShsdYQy3iV2xSWM.jpg?width=216&crop=smart&auto=webp&s=f4d64c1a9cd975ac6ebb31ec597bb3eafe62edd4', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/oP4scis9R5vPzQ7zl5b3qpbtwIp8ShsdYQy3iV2xSWM.jpg?width=320&crop=smart&auto=webp&s=f782b11ccf58987ed254ac9e512ac48a5efe23dc', 'width': 320}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/oP4scis9R5vPzQ7zl5b3qpbtwIp8ShsdYQy3iV2xSWM.jpg?auto=webp&s=466240f90e8320c4536addca7941af897305256e', 'width': 630}, 'variants': {}}]} |
Has there been any alt-right equivalent of ChatGPT? | 0 | Hey everyone!
I recently came across mentions of a "RightWingGPT." However, I'm curious: with the notable presence of alt-right communities online, why hasn't there been an instance of them fine-tuning a model that reflects their perspectives? It's surprising given how sizable these communities can be. Has anyone heard or read about any such attempts?
Thanks! | 2023-10-30T19:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/17k219y/has_there_been_any_altright_equivalent_of_chatgpt/ | Alarmed-Profile5736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k219y | false | null | t3_17k219y | /r/LocalLLaMA/comments/17k219y/has_there_been_any_altright_equivalent_of_chatgpt/ | false | false | self | 0 | null |
Wizard Vicuna 7B SUCKS? | 0 | Hey!
So, as the title suggests, I downloaded TheBloke's Wizard Vicuna 7B, and I was hoping to use it in some sort of RP or something, yet the model is just bad!!
Like, seriously, it barely stays in character for more than one line, and even that one line is full of random weird stuff. Like, what the actual hell!!
To be fair, I am a complete beginner at using LLMs, so I might be doing something wrong. So please, if you've got any advice, tip, or suggestion, or really any idea at all, please do share it, as it would be much appreciated!
Thanks in advance! | 2023-10-30T19:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/17k1h62/wizard_vicuna_7b_sucks/ | The_Happy_Hangman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k1h62 | false | null | t3_17k1h62 | /r/LocalLLaMA/comments/17k1h62/wizard_vicuna_7b_sucks/ | false | false | self | 0 | null |
lmstudio.ai errors when trying to load models on Windows 11 | 1 | [removed] | 2023-10-30T19:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/17k1gtj/lmstudioai_errors_when_trying_to_load_models_on/ | qz7GaGY9g1MXX6g6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k1gtj | false | null | t3_17k1gtj | /r/LocalLLaMA/comments/17k1gtj/lmstudioai_errors_when_trying_to_load_models_on/ | false | false | self | 1 | null |
Zephyr comes up with an original, impressive joke. | 3 | So I literally just got Zephyr 7b beta GPTQ running
(I tried GGUF but was getting 0.5 tokens a second vs. 8 now; if anyone knows what was up with that, please let me know. I'm on an RTX 3070 laptop with 16GB VRAM and 32GB RAM, with a high-end i7-12700 (20 threads).
Settings were: gpu-layers 30, n_ctx tried at 2k, 4k, and 32k, threads 20.)
Anyhow, I immediately wanted to test Zephyr by telling it to create a new joke that doesn't exist (tested simply by googling the joke without the punchline). I was very impressed. Here it is:
​
**You**
Create a joke
**AI**
How does Santa Claus enter homes without being detected by cameras? He uses claymation! (Laughter)
**You**
Please explain the joke
**AI**
The joke plays off of two popular concepts - Santa Claus and home security cameras. In this case, it suggests that Santa could be using stop-motion animation (called "claymation") to avoid detection by cameras placed around people's houses during Christmas time. It's a silly, lighthearted pun that makes use of wordplay to create humor.
**You**
How does stop-motion animation make sense for not being detected by cameras?
**AI**
The punchline relies on the fact that most people associate claymation with stop-motion animation, which involves moving objects frame by frame to create the illusion of motion. This technique requires physical manipulation rather than digital alterations, meaning that no matter how sophisticated modern camera technology becomes, it won't be able to capture Santa as he moves around people's houses in this way.
**You**
I still don't understand why stop-motion animation would stop the cameras from detecting Santa. Please explain like i'm 5.
**AI**
Well, let's say that Santa uses claymation to move around at night when everyone is sleeping. Since claymation is done by physically moving things, the cameras can't see him because they only take pictures when everything is still. So, even though Santa might leave presents under the tree, the cameras wouldn't be able to catch him because they think nothing is happening!" | 2023-10-30T19:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/17k12la/zephyr_comes_up_with_an_original_impressive_joke/ | emrys95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k12la | false | null | t3_17k12la | /r/LocalLLaMA/comments/17k12la/zephyr_comes_up_with_an_original_impressive_joke/ | false | false | self | 3 | null |
Best way to run local embedding models? | 1 | [removed] | 2023-10-30T18:59:54 | https://www.reddit.com/r/LocalLLaMA/comments/17k0s96/best_way_to_run_local_embedding_models/ | samlhuillier3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17k0s96 | false | null | t3_17k0s96 | /r/LocalLLaMA/comments/17k0s96/best_way_to_run_local_embedding_models/ | false | false | self | 1 | null |
Handling Generic Queries in RAG Applications | 4 | I'm trying to build an application using RAG. I know how RAG helps ground the responses and all, but how do I handle generic queries from users that have nothing to do with what's stored in my vector database?
For example, queries such as: "How many gold medals did China win during the Tokyo Olympics?" vs. "Paraphrase this email for me: ... ". I would assume LLMs without RAG would do a much better job answering the second question.
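For concreteness, the kind of thing I imagine is a routing step like the sketch below (the prompt wording and the `llm` callable are placeholders), though I don't know if this is how people actually do it:

```
ROUTER_PROMPT = (
    "Decide if answering the user's question requires searching our private "
    "knowledge base. Reply with exactly RETRIEVE or DIRECT.\n\nQuestion: {q}"
)

def route(llm, query: str) -> str:
    # One cheap LLM call decides whether to hit the vector DB at all.
    verdict = llm(ROUTER_PROMPT.format(q=query)).strip().upper()
    return "rag" if verdict.startswith("RETRIEVE") else "direct"
```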
How do people usually handle these scenarios? Are there any tools that I can look at? Any help would be greatly appreciated. Thank you. | 2023-10-30T18:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/17jzvx8/handling_generic_queries_in_rag_applications/ | tinyburger1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jzvx8 | false | null | t3_17jzvx8 | /r/LocalLLaMA/comments/17jzvx8/handling_generic_queries_in_rag_applications/ | false | false | self | 4 | null |
Is using WSL good enough for running LLM models locally? | 10 | Or should I dual-boot Ubuntu instead of using WSL? | 2023-10-30T18:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/17jzljd/is_using_wsl_good_enough_for_running_llm_models/ | derpgod123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jzljd | false | null | t3_17jzljd | /r/LocalLLaMA/comments/17jzljd/is_using_wsl_good_enough_for_running_llm_models/ | false | false | self | 10 | null |
Fine-tuning Local LLMs for "Code Interpreter" use: Seeking Experience and Insights | 5 | Hello fellow LLM enthusiasts!
I'm building a little code+chat CLI called [gptme](https://github.com/ErikBjare/gptme) that aims to leverage the capabilities of local LLMs to mimic the functionality offered by OpenAI's "Advanced Data Analysis" (formerly known as "Code Interpreter"). It is similar in spirit to the more popular [open-interpreter](https://github.com/KillianLucas/open-interpreter), which some of you might have heard of.
It works really well by now, and I currently use it in my day-to-day work with GPT-4 (to help collect quality data), where GPT-4 performs much better with tools than GPT-3.5-turbo, which in turn performs much *better* than the open/local models I've tested. In my experience, local models are clearly struggling in this domain, which leads me to...
Now I'm interested in how I can use my existing conversation logs to fine-tune it. I've read a bit on how to fine-tune completion models, but less so for chat models.
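Concretely, I assume the target is chat-format JSONL built from the logs, something like this sketch (the field names are guesses at a generic schema, not gptme's actual one):

```
import json

def to_jsonl(conversations, path="train.jsonl"):
    # Flatten saved conversations into one {"messages": [...]} record per
    # line, the shape most chat fine-tuning stacks expect.
    with open(path, "w") as f:
        for conv in conversations:
            messages = [{"role": m["role"], "content": m["content"]}
                        for m in conv]
            f.write(json.dumps({"messages": messages}) + "\n")
```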
I'm hoping it can improve the model's general performance, in large part by making it better at following the prompt and using tools, but also by making it familiar with the process of interactively running and debugging code (where output is fed back, possibly with errors to address). I also have hopes it will minimize the need for a verbose system prompt overall ("standardizing" the tools in the training data), saving on context.
A curious anecdote [when using gptme today](https://twitter.com/ErikBjare/status/1719026291834343579): I was surprised to see gpt-3.5-turbo suddenly reply as if it was the OpenAI "Advanced Data Analysis" thing with support for writing to and serving files from `/mnt`. I use a messy system prompt that outlines the tools available and examples, but nothing mentioning this! It suggests to me that training the models directly on these instructions is a good way to go, and removes the need for them in the system prompt.
So, does anyone have experience fine-tuning chat+code models for something similar?
Any good guides/tools out there that I've missed in my search?
Thank you all in advance! Looking forward to reading your replies. | 2023-10-30T17:55:39 | https://www.reddit.com/r/LocalLLaMA/comments/17jzcaq/finetuning_local_llms_for_code_interpreter_use/ | ErikBjare | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jzcaq | false | null | t3_17jzcaq | /r/LocalLLaMA/comments/17jzcaq/finetuning_local_llms_for_code_interpreter_use/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'zleI38o3EKf9jJkTwTgXpM5ijQ9Vr8JqKiblqca9TpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?width=108&crop=smart&auto=webp&s=0881198e0bfd572bd18e197334a6b417d304ac53', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?width=216&crop=smart&auto=webp&s=b8c5d0d5ea92e0d710db0b1b6a8d7fd4d4399e76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?width=320&crop=smart&auto=webp&s=4b77cd35e3b7dff01fbd47d224c06953c7928f0f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?width=640&crop=smart&auto=webp&s=fbdaa8245b6e80ac977936c27768abb36c505869', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?width=960&crop=smart&auto=webp&s=f3c34d353adc054ec0b1fc7f3ca09866f7e91f22', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?width=1080&crop=smart&auto=webp&s=f951e4054184747cb471b5e8d6532b301251be6f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mb3ljwGRtMFt4QD2PSjCA0wtupQjam_z3jKf3zB-mA4.jpg?auto=webp&s=63d15a3cec2dc15d4e8b2a915256e6b1991c6300', 'width': 1200}, 'variants': {}}]} |
ELI12 the new executive order on AI for non-experts, please? | 30 | I lurk this sub as a DA who doesn't work with LLMs. I've seen the new executive order on AI in the US, and I've seen that the reaction to it is generally negative from the people on this sub.
Can someone explain to me why people in this community and other AI/ML/LLM/CS communities are critical of Biden's new executive order on AI? | 2023-10-30T17:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/17jz9l6/eli12_the_new_executive_order_on_ai_for/ | empirical-sadboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jz9l6 | false | null | t3_17jz9l6 | /r/LocalLLaMA/comments/17jz9l6/eli12_the_new_executive_order_on_ai_for/ | false | false | self | 30 | null |
Best French model & stack for a GPT-3.5-like experience | 3 | I want to create a Telegram chatbot for my French organisation. We need strong privacy and an uncensored LM, and they will only agree to a self-managed LM chatbot.
What stack would be the easiest to deploy in the cloud behind Telegram? Any cheap cloud recommendations, for that matter?
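For scale, the glue I have in mind is tiny; a sketch assuming an OpenAI-compatible local server, the v1 openai Python client, and python-telegram-bot (the token, URL, and model name are placeholders):

```
from openai import OpenAI
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

# Self-hosted OpenAI-compatible endpoint (llama.cpp server, vLLM, etc.).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

async def chat(update: Update, ctx: ContextTypes.DEFAULT_TYPE) -> None:
    # Forward the Telegram message to the local model; the call is
    # synchronous for simplicity.
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": update.message.text}],
    )
    await update.message.reply_text(resp.choices[0].message.content)

app = Application.builder().token("TELEGRAM_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, chat))
app.run_polling()
```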
If it works well we will buy some hardware to self-host :) | 2023-10-30T17:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/17jyx9r/best_french_model_stack_for_35gpt_like_experience/ | Hyakutak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jyx9r | false | null | t3_17jyx9r | /r/LocalLLaMA/comments/17jyx9r/best_french_model_stack_for_35gpt_like_experience/ | false | false | self | 3 | null |
Crowd-Sourced Computing for LLM Training | 5 | Hello,
Does anyone know if there are any significant efforts to distribute the compute for training LLMs across consumer GPUs spread over the internet? It seems impossible to match the computing capabilities of big tech, and it got me wondering if there was a large-scale effort to share compute for one (or more) mega open-source project.
Any efficiently working Local LLM + Stable Diff solution for Macbook 2018 Pro with AMD GPU? | 2 | Title is tldr.
For longer:
I've been using local LLMs and Stable Diffusion on my PC (GeForce 1070 Ti).
My daily driver is a 2018 MacBook Pro with an AMD GPU. I still couldn't figure out how to run an efficient LLM (only ollama with Mistral works properly so far), and there's no way of running ComfyUI, just an extremely slow automatic1111 with no GPU support at all. Should I literally put effort, time, and money into switching to an M1/M2, or hack around a solution? | 2023-10-30T17:10:47 | https://www.reddit.com/r/LocalLLaMA/comments/17jyc15/any_efficiently_working_local_llm_stable_diff/ | bparlan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jyc15 | false | null | t3_17jyc15 | /r/LocalLLaMA/comments/17jyc15/any_efficiently_working_local_llm_stable_diff/ | false | false | self | 2 | null |
How to train/prompt an LLM to understand a framework's API? | 3 | I'd like to take a Python framework project and have a specialized coder. I'd like to feed it the documentation and GitHub code where examples are shown, then have the chat LM ingest it and only code against that framework's API.
My approach so far has been to shove some of its documentation into the prompt and tell it 'this is the documentation for xyz framework. Only answer questions based on information and code found here'.
While this works somewhat, it starts to hallucinate, adding code from other frameworks and even other languages. For example, the UI framework may specify changing the text size of a label with label.size = '30', and the LM will respond with label.font_size = '30'.
How would I go about correcting this? Perhaps with a kind of framework schema that the LM checks its answers against? The schema would say you can only use the property size with a label, and the LM would correct its code on a second pass. If so, how would I format that schema? Something like the sketch below is what I have in mind.
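Here's a toy version of what I mean (the ALLOWED table and class names are made up; a real one would presumably be generated from the framework's docs):

```
import ast

# Whitelist of valid attributes per framework class (illustrative only).
ALLOWED = {"Label": {"text", "size", "color"}}

def bad_attrs(code: str, var_types: dict[str, str]) -> list[str]:
    # Walk the generated code and flag attribute accesses the schema says
    # don't exist, so a second LLM pass can be asked to correct them.
    problems = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name):
            cls = var_types.get(node.value.id)
            if cls and node.attr not in ALLOWED.get(cls, set()):
                problems.append(f"{cls}.{node.attr}")
    return problems

print(bad_attrs("label.font_size = '30'", {"label": "Label"}))  # ['Label.font_size']
```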
I am open to completely rewriting the documentation so it's in a format that the LM can properly ingest and understand.
Lastly, I obviously run out of context, so I have tried this with a vector DB, but it runs into the same problems. So I think I want to know how to feed it the right information and prompt it better so it stays 100% within its framework API. | 2023-10-30T15:49:27 | https://www.reddit.com/r/LocalLLaMA/comments/17jwhcs/how_to_trainprompt_a_llm_to_understand_frameworks/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jwhcs | false | null | t3_17jwhcs | /r/LocalLLaMA/comments/17jwhcs/how_to_trainprompt_a_llm_to_understand_frameworks/ | false | false | self | 3 | null |
Nous-Capybara-3B and 7B V1.9: First 3B model by Nous! | 35 | Happy to announce my release of Nous-Capybara 7B and 3B V1.9!
The 7B V1.9 version is now trained on Mistral, unlike V1, which was trained on Llama. There are also some significant dataset improvements under the hood.
As for the 3B size, it's the first sub-7B model released under Nous Research and leverages the same dataset as 7B V1.9; it's efficient enough to run briskly on even a non-pro iPhone! It's also currently being used as the foundation of the world's first 3B parameter multi-modal model, called Obsidian (which should be released by the time of this posting).
Capybara uses a new method called Amplify-Instruct for data creation: it uses popular existing single-turn datasets like Airoboros, EverythingLM, and Know_logic as the seeds from which synthetic long-context, back-and-forth conversational examples are synthesized. (Paper releasing soon with more details.)
The dataset also draws on thousands of top posts scraped from the website LessWrong on certain subjects, posts that discuss deep, complex, long-form concepts surrounding the nature of reality, reasoning, futurism, and philosophy; the Amplify-Instruct technique is then applied to this data to turn it into advanced long-context multi-turn examples. The model is also trained on tasks of summarizing these multi-thousand-token posts, papers, and articles, and then having back-and-forth conversations discussing variations of such summaries.
Part of the dataset's development was aimed at unbiased, natural, casual prose and great conversational ability, combined with strong logical and analytical prowess and robustness in back-and-forth conversation. V1.9 improves on this by putting further emphasis on realistic prose, identifying and removing dataset examples that were shown to hurt certain reasoning capabilities, and identifying biases that hurt problem-solving ability as well.
We also found instances of the model being biased towards a more robotic identity through the training data, and even certain biases regarding physical self-identity, like preconceived notions a model could have about being physical versus metaphysical, or about what knowledge is held within Capybara itself. Identifying and fixing these biases in the distribution for V1.9 seemed to give significant improvements overall in how well the model works with little to no instructions and no system prompt, and it also seems to significantly improve the steerability of the model and how well it can follow more complex and difficult system prompts.
Although I didn't intend to optimize this model for roleplay specifically, I was very surprised to see people messaging me about how Capybara V1 was one of their favorite models for roleplay, and based on some early testers it seems that Capybara V1.9 is a further significant jump, not just in logical and analytical capability, but also in coherency and casual, steerable prose for roleplay; several told me it's now their new favorite model for such use cases.
I'm excited that I finally have this released and I hope I can get feedback from any of you as well that might be interested in trying it out! Here is the quantized version by TheBloke of 7B V1.9: https://huggingface.co/TheBloke/Nous-Capybara-7B-v1.9-GGUF
And here is the quantized version of 3B: https://huggingface.co/TheBloke/Nous-Capybara-3B-v1.9-GPTQ | 2023-10-30T15:42:26 | https://www.reddit.com/r/LocalLLaMA/comments/17jwbml/nouscapybara3b_and_7b_v19_first_3b_model_by_nous/ | dogesator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jwbml | false | null | t3_17jwbml | /r/LocalLLaMA/comments/17jwbml/nouscapybara3b_and_7b_v19_first_3b_model_by_nous/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'B25xm0IqAp9YWurUMex1J3Kyle2i1soUn7lb_8QqZCE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?width=108&crop=smart&auto=webp&s=c03ecf852c83656ff6263c3f7ebaccadabf1070d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?width=216&crop=smart&auto=webp&s=310f198be2e148a377afe74caf35a6449b01b763', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?width=320&crop=smart&auto=webp&s=e7f62ef89c1d38a97a72ef290315d9efaa78289b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?width=640&crop=smart&auto=webp&s=e5551b678f6c1dc9d81ff07366ec72940ce68f5a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?width=960&crop=smart&auto=webp&s=19f7e39d09d39136d55670c68c9798663ea64f63', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?width=1080&crop=smart&auto=webp&s=ae4665c90fcc4079af1c7d8391a32858e66916a2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_u6ZL79jro8jARXdSNXV48KE2yKs1AdRYwHUuMCnBDA.jpg?auto=webp&s=f3ed392efa4c35d762706c9896ba4489cb1a3c84', 'width': 1200}, 'variants': {}}]} |
Ok, tell me how stupid this concept is: Using a Blockchain-like system where the process of mining trains a decentralized model. | 0 | I'm not really talking about a cryptocurrency, more of an anonymous collaborative system where your local copy of the blockchain syncs up to the most recent stage of the model's training and dataset, and the process of mining is a pooled effort to find the next best loss reduction.
Training data would obviously be a problem, but I'm sure there are ways to append data to the dataset in the mining process.
Something decentralized, massive, and unstoppable.
Man, I'm high... Time to go talk to Mistral. | 2023-10-30T15:22:19 | https://www.reddit.com/r/LocalLLaMA/comments/17jvvs3/ok_tell_me_how_stupid_this_concept_is_using_a/ | VRpornFTW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jvvs3 | false | null | t3_17jvvs3 | /r/LocalLLaMA/comments/17jvvs3/ok_tell_me_how_stupid_this_concept_is_using_a/ | false | false | self | 0 | null |
Fine-tuning Falcon 7B for Sequence Classification using LoRA | 4 | Hey all, first time posting here and relatively new to working with LLMs.
I’m trying to fine-tune Falcon7B for Sequence Classification. All the tutorials I’ve seen so far always use CausalLM so I’m not sure if there’s anything wrong with my implementation. Anyway, I was able to fine-tune ClinicalBERT (100 million trainable parameters) using 4 A100 GPUs (80gb memory each) on my custom dataset for classification.
Now, I’m trying to do the same with Falcon7B using LoRA. My trainable parameters are about a million, so much less than ClinicalBERT, but I keep running into cuda out of memory issues. My understanding is that Falcon7B should easily fit onto the GPU with LoRA since it’s a loss less trainable parameters. However, that’s not the case. Why is that?
For reference, my entire pipeline is using HuggingFace so:
AutoTokenizer, AutoModelForSequenceClassification,
Trainer API from HF
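Roughly, that setup looks like the sketch below (illustrative hyperparameters, not my exact code; older Falcon checkpoints may also need trust_remote_code=True):

```
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Note the full 7B base is still loaded; LoRA only shrinks the *trainable*
# parameter count, not the resident weights.
model = AutoModelForSequenceClassification.from_pretrained(
    "tiiuae/falcon-7b", num_labels=2, torch_dtype=torch.bfloat16
)
lora = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16,
                  target_modules=["query_key_value"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```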
Is there something that I’m missing? For reference, I’ve posted this on stackoverflow if anyone wants to look at the code:
https://stackoverflow.com/questions/77385044/fine-tuning-falcon7b-using-peft-for-sequence-classification
Any help would be greatly appreciated! | 2023-10-30T15:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/17jvvj3/finetuning_falcon_7b_for_sequence_classification/ | aahmed96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jvvj3 | false | null | t3_17jvvj3 | /r/LocalLLaMA/comments/17jvvj3/finetuning_falcon_7b_for_sequence_classification/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'nfayPavSUB5ngYv6-19UHNBThsXfcLIDQl4HkEe3Cv0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yzSfTlKTSYGpEXeFgyDvHlfoLGOFQJqPuH_Y38RBz2U.jpg?width=108&crop=smart&auto=webp&s=0aad06750c23b98c9b7595343a8b54a42dc18851', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yzSfTlKTSYGpEXeFgyDvHlfoLGOFQJqPuH_Y38RBz2U.jpg?width=216&crop=smart&auto=webp&s=b66126834977e269be586d07464046049ed09138', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/yzSfTlKTSYGpEXeFgyDvHlfoLGOFQJqPuH_Y38RBz2U.jpg?auto=webp&s=a70d21ce9f01f64670d2200ca9fc3f39b94a7e48', 'width': 316}, 'variants': {}}]} |
In light of the recent Executive Order, the open source community should be aware of this dude and his work. | 1 | [removed] | 2023-10-30T15:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/17jvkxo/in_light_of_the_recent_executive_order_the_open/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jvkxo | false | null | t3_17jvkxo | /r/LocalLLaMA/comments/17jvkxo/in_light_of_the_recent_executive_order_the_open/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fTQbfqUJFloIJow30Q8PjmH7FfuJgRkjie2_Gsk2Cok', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?width=108&crop=smart&auto=webp&s=ca5b2b9559e42c0c478e8ce0005835fb51a662f5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?width=216&crop=smart&auto=webp&s=0dacff3341020e6ecf0045b29c52fb33df8afd90', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?width=320&crop=smart&auto=webp&s=b98649b3d1d7cc1111c7dc46c78ddc22794a5567', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?auto=webp&s=8b5325976102dbf4969f394f19f06c178012c470', 'width': 480}, 'variants': {}}]} |
New LoftQ quantization technique outperforms QLora | 1 | [removed] | 2023-10-30T14:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/17jv45k/new_loftq_quantization_technique_outperforms_qlora/ | yahma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jv45k | false | null | t3_17jv45k | /r/LocalLLaMA/comments/17jv45k/new_loftq_quantization_technique_outperforms_qlora/ | false | false | self | 1 | null |
In light of the recent Executive Order, the open source community should be aware of this dude and his work. | 1 | [removed] | 2023-10-30T14:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/17juwcx/in_light_of_the_recent_executive_order_the_open/ | SSPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17juwcx | false | null | t3_17juwcx | /r/LocalLLaMA/comments/17juwcx/in_light_of_the_recent_executive_order_the_open/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fTQbfqUJFloIJow30Q8PjmH7FfuJgRkjie2_Gsk2Cok', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?width=108&crop=smart&auto=webp&s=ca5b2b9559e42c0c478e8ce0005835fb51a662f5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?width=216&crop=smart&auto=webp&s=0dacff3341020e6ecf0045b29c52fb33df8afd90', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?width=320&crop=smart&auto=webp&s=b98649b3d1d7cc1111c7dc46c78ddc22794a5567', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fzCBA8kKBw1Fggz1mdRV8V3tn8wHu4THXVQ0NSUPubo.jpg?auto=webp&s=8b5325976102dbf4969f394f19f06c178012c470', 'width': 480}, 'variants': {}}]} |
Obsidian: Worlds first 3B multi-modal opensource LLM. | 125 | Hey everyone, happy to say I’m officially announcing Obsidian V0.5 as part of my work at Nous Research and building upon my work creating the Capybara V1.9 dataset.
This model is blazing fast and is likely the first multi-modal model efficient enough to fit within the RAM constraints of even a non-pro iPhone, at practical speeds as well!
In its current state, this model is largely a multi-modal version of Nous-Capybara-3B, which I also only recently released. I've designed the dataset with novel synthesis methods (paper currently in progress); it's made to be robust in conversational ability and even includes multi-turn data synthesized as a continuation of single-turn examples contained within datasets like Airoboros, Know_logic, EverythingLM, and more.
It's built using LLaVA 1.5 techniques, but instead of a 7B Llama as a base, we chose to use the new StableLM 3B model trained for 4 trillion tokens. (We plan to train upon Mistral as well, most likely.)
Any questions or feedback are much appreciated!
Download here: https://huggingface.co/NousResearch/Obsidian-3B-V0.5
Or download quantized version here, Courtesy of Nisten: https://huggingface.co/nisten/obsidian-3b-multimodal-q6-gguf | 2023-10-30T14:33:40 | dogesator | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17jus3h | false | null | t3_17jus3h | /r/LocalLLaMA/comments/17jus3h/obsidian_worlds_first_3b_multimodal_opensource_llm/ | false | false | 125 | {'enabled': True, 'images': [{'id': 'ZDQFumqmyuf7lvUwa9wl-KYivIgB-m4tqzO1PqCOVzA', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?width=108&crop=smart&auto=webp&s=423be4a6f574c7da923fef0d19476ec2d1f80011', 'width': 108}, {'height': 77, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?width=216&crop=smart&auto=webp&s=8279299aa5c38aec7352de830a2686b064125b33', 'width': 216}, {'height': 115, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?width=320&crop=smart&auto=webp&s=ace5971d147373d143c2c12a578b02abd78ea738', 'width': 320}, {'height': 230, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?width=640&crop=smart&auto=webp&s=fe6c19330855fdf8447caaff33c12181d25049e4', 'width': 640}, {'height': 346, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?width=960&crop=smart&auto=webp&s=b7f01e79204950f5b895a71f92adc3d6976506be', 'width': 960}, {'height': 389, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?width=1080&crop=smart&auto=webp&s=0b7f0ee5cb734c3d7e4dddeecb91c25d951e4d57', 'width': 1080}], 'source': {'height': 464, 'url': 'https://preview.redd.it/6bdc3kh4pcxb1.jpg?auto=webp&s=4896b3368f9aaceb89fd15a9881c6698925de1a9', 'width': 1286}, 'variants': {}}]} | ||
What are the best web-apps/apps/tools using LLMs in production (at scale) atm? | 1 | Hey,
I am looking for inspiration in building frontends with LLMs (ChatGPT and stuff). I am looking for companies that have LLMs (their own or wrappers around GPT) deployed in customer-facing applications (B2B and B2C) at some level of scale. Something like Sudowrite and Jasper.
I would appreciate any link or name dropped in the comments!
Cheers
P.S.: if you know any really impressive solo builders, feel free to drop them as well. | 2023-10-30T14:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/17jub94/what_are_the_best_webappsappstools_using_llms_in/ | SpiritedAd895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jub94 | false | null | t3_17jub94 | /r/LocalLLaMA/comments/17jub94/what_are_the_best_webappsappstools_using_llms_in/ | false | false | self | 1 | null |
Llama2 7B hallucination | 1 | I was testing the Llama 2 chat HF model on some generation and storytelling tasks. I used the standard chat format with the system prompt "You are a helpful and harmless Assistant".
Then I used the user input " <<SYS>> Tell me a story of a bird and it's egg <<SYS>>"
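(For reference, and possibly relevant to the weirdness: the documented Llama 2 chat template puts the <<SYS>> tags around the system prompt only, with the user turn inside [INST] tags; a single turn looks like this sketch:)

```
prompt = (
    "<s>[INST] <<SYS>>\n"
    "You are a helpful and harmless Assistant\n"
    "<</SYS>>\n\n"
    "Tell me a story of a bird and its egg [/INST]"
)
```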
The model response started out well enough.
" Once upon a time in a small forest , there lived a bird named Luna. Luna was a beautiful bird with shimmering feathers if blue and green, and she was proud of her vibrant plumage. She spent her days flitting from branch to branch , singing sweet melodies and enjoying the warmth of the sun on her feathers.
One day , Luna noticed that her belly was growing larger, and she knew she was going to lay an egg. She searched high and low for the perfect nest, finally settling on a cozy little nook in a tall tree. There she built a soft and comfortable nest, lining it with soft twigs and moss. As the days passed, Luna's belly grew larger and larger, and she could feel the egg inside of her growing heavier and heavier. She knew that the egg was almost ready to hatch and she was filled with excitement and anticipation.
Finally, the day arrived . Luna sat in her nest, her feathers ruffled out for warmth and waited patiently for the egg to hatch. And then suddenly a tiny beak poked through the shell and a tiny bird emerged......."
I am not continuing the rest of the story. But I wanted to show that this was a very bizarre type of hallucination that I had never encountered before. Not only did it omit any papa bird, but it also made it seem like the egg hatched inside the bird. I am wondering how in the hell I am going to tackle hallucinations like this.
Does anyone have any experience with how to tackle hallucinations like this? | 2023-10-30T13:36:40 | https://www.reddit.com/r/LocalLLaMA/comments/17jtkby/llama2_7b_hallucination/ | testerpce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jtkby | false | null | t3_17jtkby | /r/LocalLLaMA/comments/17jtkby/llama2_7b_hallucination/ | false | false | self | 1 | null |
Has anyone tried Aquila2-34B? | 1 | The claims are pretty staggering: a 34B model with an 8k context length, Llama 2 70B performance, and 7B speed? It seems too good to be true. Is this legit, or is it a 7B model named as a 34B in true Chinese fashion?
[https://huggingface.co/TheBloke/AquilaChat2-34B-16K-GPTQ](https://huggingface.co/TheBloke/AquilaChat2-34B-16K-GPTQ) | 2023-10-30T13:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/17jtf0k/has_anyone_tried_aquila234b/ | Longjumping-Bake-557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jtf0k | false | null | t3_17jtf0k | /r/LocalLLaMA/comments/17jtf0k/has_anyone_tried_aquila234b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zEsCl1EADCvCSAl4B6XtEHfoYCaDoKV1caBOHUAKvzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?width=108&crop=smart&auto=webp&s=a6528b0b6865a52db1c507fdb8eab0bfe82039ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?width=216&crop=smart&auto=webp&s=c9ebeb2f0cd063920bff611fcb1c56cae7df88a8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?width=320&crop=smart&auto=webp&s=345839e8d450b4c28f77375039e52c6737205133', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?width=640&crop=smart&auto=webp&s=03efa3e79d2900e767c3348ecce39b974babe3df', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?width=960&crop=smart&auto=webp&s=1953de8368de4862c28ba56e749e49d198a16dc6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?width=1080&crop=smart&auto=webp&s=ff19bfbb00cb071cc66854cdb9e54294761ccda4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZXSMfl2BwL3LmrMYDDCwPisi5ibZOoYjLgjP3n0fr64.jpg?auto=webp&s=a593c448ab4a09475bd58b520f8465fc0bed9dd5', 'width': 1200}, 'variants': {}}]} |
An excellent use case for local LLMs: anonymity on delivery apps. | 0 | There was a post on the front page about a woman who changed her name to a more masculine one on a food delivery app, and this led to far fewer men being creepy.
Am I alone in thinking this might be an excellent use case for local LLMs? It should be within the realm of current ability for the app to generate a random, gender-neutral name for the user on each order, and to rewrite any message they send in a neutral tone (a rough sketch of the rewriting step follows this post).
Can anyone think of a reason this wouldn't be a good idea? Any additional thoughts? | 2023-10-30T13:28:01 | https://www.reddit.com/r/LocalLLaMA/comments/17jtdwv/an_excellent_use_cases_for_local_llm_anonymity_on/ | valdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jtdwv | false | null | t3_17jtdwv | /r/LocalLLaMA/comments/17jtdwv/an_excellent_use_cases_for_local_llm_anonymity_on/ | false | false | self | 0 | null |
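For what it's worth, the message-rewriting half is easy to prototype with a local model. A minimal sketch with llama-cpp-python is below; the model file, name list, and prompt wording are all assumptions:

```python
import random

from llama_cpp import Llama

# Placeholder GGUF; any small instruct model should do for a prototype.
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

def neutral_name() -> str:
    # A fresh gender-neutral display name per order.
    return random.choice(["Alex", "Sam", "Jordan", "Riley", "Casey"])

def neutralize(message: str) -> str:
    # Ask the model to strip identity cues while keeping the meaning.
    prompt = (
        "[INST] Rewrite this delivery note so it reveals nothing about the "
        f"sender's name, gender, or identity. Keep the meaning.\n{message} [/INST]"
    )
    out = llm(prompt, max_tokens=128, temperature=0.2)
    return out["choices"][0]["text"].strip()

print(neutral_name(), "-", neutralize("Hi, it's Sarah! Leave it by the pink door, thanks hun!"))
```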
Best LLMs for buddy-like conversations | 8 | So, I opened Huggingface and filtered the models with the conversational use-case and sorted with the highest downloads. Two models were prominent MSFT Dialogpt and Facebook Blenderbot 400M.
Are these the best LLms for this application? Will finetuning them be easy or worthit? If not, what models should I focus on and what size? | 2023-10-30T12:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/17jsqm5/best_llms_for_buddylike_conversations/ | Unlucky-Thing4593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jsqm5 | false | null | t3_17jsqm5 | /r/LocalLLaMA/comments/17jsqm5/best_llms_for_buddylike_conversations/ | false | false | self | 8 | null |
Did Microsoft leak GPT-3.5's parameter count? | 0 | I was reading the new CodeFusion paper by Microsoft and found this table.
If this is true, then OpenAI is way ahead of what I was imagining.
It could also be a typo for 200B; it's been leaked that each of GPT-4's experts is around 200B, so maybe 3.5 is the same.
Although, for some reason, I wish it's actually 20B, because if this is true then it makes the fight more challenging, and I like a challenge.
Paper link: https://huggingface.co/papers/2310.17680 | 2023-10-30T12:34:24 | Independent_Key1940 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17jsd3s | false | null | t3_17jsd3s | /r/LocalLLaMA/comments/17jsd3s/did_microsoft_leaked_gpt_35_parameter_count/ | false | false | 0 | {'enabled': True, 'images': [{'id': '1LVJ899OgnZgjksgt8wLfOJUUaTmJUhyD1MiBnWGiQs', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/my6xqfgu3cxb1.jpg?width=108&crop=smart&auto=webp&s=a324ce408465d1f4f12e2c3aef3abe94d721ccf8', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/my6xqfgu3cxb1.jpg?width=216&crop=smart&auto=webp&s=c8925276b81b92172f7c4253a563be2f4bd82ed8', 'width': 216}, {'height': 198, 'url': 'https://preview.redd.it/my6xqfgu3cxb1.jpg?width=320&crop=smart&auto=webp&s=c997a91f1c8e3bf1a9a26e8a3388a22bee4b6e9b', 'width': 320}], 'source': {'height': 354, 'url': 'https://preview.redd.it/my6xqfgu3cxb1.jpg?auto=webp&s=2bb3529d7b928b92ec43ff4d0fa48bf5747ebee4', 'width': 570}, 'variants': {}}]} | ||
Does anyone know an efficient way to fine-tune Llama 2 using QLoRA on Colab? | 2 | I am trying to do that, but it always runs out of memory and breaks. If anyone can help me, I will be grateful. | 2023-10-30T12:12:11 | https://www.reddit.com/r/LocalLLaMA/comments/17jryfq/does_any_one_knows_an_efficent_way_to_finetune/ | Lipao262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jryfq | false | null | t3_17jryfq | /r/LocalLLaMA/comments/17jryfq/does_any_one_knows_an_efficent_way_to_finetune/ | false | false | self | 2 | null |
Microsoft paper says gpt-3.5-turbo is 20B | 1 | 2023-10-30T12:11:35 | https://twitter.com/felix_red_panda/status/1718916631512949248 | phree_radical | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17jry2g | false | {'oembed': {'author_name': 'Felix', 'author_url': 'https://twitter.com/felix_red_panda', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Microsoft paper claims ChatGPT 3.5 has ~20 billion parameters <a href="https://t.co/gZxh0l2VqX">https://t.co/gZxh0l2VqX</a> <a href="https://t.co/EDCWbLdYEz">pic.twitter.com/EDCWbLdYEz</a></p>— Felix (@felix_red_panda) <a href="https://twitter.com/felix_red_panda/status/1718916631512949248?ref_src=twsrc%5Etfw">October 30, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/felix_red_panda/status/1718916631512949248', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17jry2g | /r/LocalLLaMA/comments/17jry2g/microsoft_paper_says_gpt35turbo_is_20b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'AK_br-JmODSMYX7yOFAuLSG429VE7ZLk5lUFDI3Y_OE', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/v-LDuXxJYA0wc6bcryL7M8AGJohlVUhHVuvTSbh7KDk.jpg?width=108&crop=smart&auto=webp&s=58a48724b87f374f3330dc6f1e0171bbb341e5b1', 'width': 108}], 'source': {'height': 100, 'url': 'https://external-preview.redd.it/v-LDuXxJYA0wc6bcryL7M8AGJohlVUhHVuvTSbh7KDk.jpg?auto=webp&s=0270b8170dec37f8ee21cac5b99a96bc3c796cc6', 'width': 140}, 'variants': {}}]} | ||
Executive order on AI - Now a good time to get my own local? | 43 | With the news on the executive order targeting AI, would now be a good time to create a local instance for myself that would not be neutered or rendered useless by these orders? If so, what should I set my sights on so I don't waste time with smaller instances? | 2023-10-30T12:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/17jrrjo/executive_order_on_ai_now_a_good_time_to_get_my/ | LetsGroup | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jrrjo | false | null | t3_17jrrjo | /r/LocalLLaMA/comments/17jrrjo/executive_order_on_ai_now_a_good_time_to_get_my/ | false | false | self | 43 | null |
How to present text with code and json to llama | 2 | Can someone please suggest a good way to present data to llama2 for fine tuning which contains text, json and java code sample.
Text data holds explanation about a topic and then java class and JSON to support the explanation. | 2023-10-30T11:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/17jrodp/how_to_present_text_with_code_and_json_to_llama/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jrodp | false | null | t3_17jrodp | /r/LocalLLaMA/comments/17jrodp/how_to_present_text_with_code_and_json_to_llama/ | false | false | self | 2 | null |
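One common pattern, sketched here with hypothetical field names and delimiters (Llama 2 itself does not mandate any of them), is to pack each explanation plus its Java class and JSON into one instruction-style JSONL record, so the embedded newlines stay escaped:

```python
import json

record = {
    "instruction": "Explain how the Order class is serialized to JSON.",
    "output": (
        "An Order is serialized field by field into a JSON object.\n\n"
        "```java\n"
        "public class Order {\n"
        "    private String id;\n"
        "    private double total;\n"
        "}\n"
        "```\n\n"
        "```json\n"
        '{"id": "A-1001", "total": 42.5}\n'
        "```"
    ),
}

# One record per line (JSONL); json.dumps escapes the newlines inside the code.
with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```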
New Microsoft codediffusion paper suggests GPT-3.5 Turbo is only 20B, good news for open source models? | 272 | Wondering what everyone thinks if this is true. It seems they're already beating all open-source models, including Llama-2 70B. Is this all due to data quality? Will Mistral be able to beat it next year?
https://preview.redd.it/kdk6fwr7vbxb1.png?width=605&format=png&auto=webp&s=21ac9936581d1376815d53e07e5b0adb739c3b06 | 2023-10-30T11:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/17jrj82/new_microsoft_codediffusion_paper_suggests_gpt35/ | obvithrowaway34434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jrj82 | false | null | t3_17jrj82 | /r/LocalLLaMA/comments/17jrj82/new_microsoft_codediffusion_paper_suggests_gpt35/ | false | false | 272 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | |
Fine tune for code tips | 15 | Hi monsters,
can you please give me some pathway to reduce the expected 23 lives it will take to learn more about fine-tuning LLMs for code?
I have already tested pretrained models at home and started creating datasets and crazy experiments myself, but of course some guidance can help me focus and learn the important things I'm surely missing.
One of the goals is to understand how fine-tuning, as opposed to pretraining, works for code-oriented models.
I also checked code-oriented datasets on Hugging Face and so on.
Any advice is really appreciated. | 2023-10-30T11:13:38 | fab_space | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17jqzc7 | false | null | t3_17jqzc7 | /r/LocalLLaMA/comments/17jqzc7/fine_tune_for_code_tips/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'qI5HiO3HxqCk-T5-pzeguFXl6kiPWP9DC-M96F2oV14', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/pph5ingfpbxb1.jpg?width=108&crop=smart&auto=webp&s=1cd0b268c253d0da41fe4a32a3bdfa37f24a86e8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/pph5ingfpbxb1.jpg?width=216&crop=smart&auto=webp&s=24341086a947b43250318f2e16d00e1d7ab93ab8', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/pph5ingfpbxb1.jpg?width=320&crop=smart&auto=webp&s=df626ecbb8e9fc7ab58b6808af15e81af84f3db3', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/pph5ingfpbxb1.jpg?width=640&crop=smart&auto=webp&s=6a68a2fede4eb3aaac17ba9890812351077e642d', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/pph5ingfpbxb1.jpg?width=960&crop=smart&auto=webp&s=b9160281eef3c72e14f61daa12edc6fcb78a3e9a', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/pph5ingfpbxb1.jpg?auto=webp&s=58d44ca30cc380c72eb2e5ff0882ae1d34ebd63f', 'width': 1024}, 'variants': {}}]} | ||
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House | 151 | 2023-10-30T10:43:53 | https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ | a_slay_nub | whitehouse.gov | 1970-01-01T00:00:00 | 0 | {} | 17jqj44 | false | null | t3_17jqj44 | /r/LocalLLaMA/comments/17jqj44/fact_sheet_president_biden_issues_executive_order/ | false | false | 151 | {'enabled': False, 'images': [{'id': 'ZNmtnK_lKgHB3spQIpgDtNznEADE6oLEzeeSxD2gjRA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?width=108&crop=smart&auto=webp&s=6253b2f12b8fe6b19a8bb51e9ba6ad83f0b79bbb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?width=216&crop=smart&auto=webp&s=84bf125bba1476a8c5e252fd2a4f3b7db5eaa0c5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?width=320&crop=smart&auto=webp&s=9a1f9d202bce51e0d2adcfc5b7637b5f6cf85099', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?width=640&crop=smart&auto=webp&s=27928d9be45d463a3b4051943a3034a17f93310c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?width=960&crop=smart&auto=webp&s=9d4c9393e5d61e16ad25f8ae4497e9364dbd9bcc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?width=1080&crop=smart&auto=webp&s=303b939c5a3f6fb493e65441882699daf3efcbe7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/D4ApVTqUzlai30E0NQT1Y_RowF1Ep9U50QtDLAsxB34.jpg?auto=webp&s=4eae34922140a27f5ae5fae336f00007d8f1e77e', 'width': 1200}, 'variants': {}}]} | ||
Want to gather sample Postgres database schemas for a text-to-SQL model | 1 | I plan to generate sample questions and their answers using GPT-4, check the queries for syntactic correctness, and then sample queries and check them for logical correctness.
One way to get database schemas would be to have them generated by GPT-4: I can provide the sector/use case and instruct it to generate the relevant schema.
But it would really help if someone knows of any public database schemas. I have found some on GitHub as well.
I plan to have around 100-200 databases. | 2023-10-30T09:23:01 | https://www.reddit.com/r/LocalLLaMA/comments/17jpdhc/want_to_gather_sample_database_schemas_in/ | Forsaken-Violinist27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jpdhc | false | null | t3_17jpdhc | /r/LocalLLaMA/comments/17jpdhc/want_to_gather_sample_database_schemas_in/ | false | false | self | 1 | null |
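For the syntax-check step, one cheap first pass (sketched with sqlglot's Postgres dialect; running `EXPLAIN` against a real Postgres would be stricter) could be:

```python
import sqlglot
from sqlglot.errors import ParseError

def parses_as_postgres(sql: str) -> bool:
    # Cheap first-pass check that a generated query is at least parseable.
    try:
        sqlglot.parse_one(sql, read="postgres")
        return True
    except ParseError:
        return False

print(parses_as_postgres("SELECT id, total FROM orders WHERE total > 10"))  # True
print(parses_as_postgres("SELEC id FROM orders"))                           # False
```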
LLMs in production: hardware requirements | 15 | I've put together an article using some guesstimates of what it would take for an enterprise to deploy LLMs on-prem.
[https://bionic-gpt.com/blog/llm-hardware/](https://bionic-gpt.com/blog/llm-hardware/)
In short, I'm estimating $20,000 in hardware costs per 1000 users, minimum.
I'd be grateful if people could give me some feedback on the numbers and whether my assumptions look realistic.
Thanks | 2023-10-30T08:55:30 | https://www.reddit.com/r/LocalLLaMA/comments/17jp02k/llms_in_production_hardware_requirements/ | purton_i | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jp02k | false | null | t3_17jp02k | /r/LocalLLaMA/comments/17jp02k/llms_in_production_hardware_requirements/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '00YvQD_fDOSKpCWSArEIFBrU0iCCA9zOclVHVDfbRrM', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/b5X89xeJorW6TiYHNx0-7daLhtBC4zWKkDCbvwxbSQk.jpg?width=108&crop=smart&auto=webp&s=7879ad11766e7fbbeab58c1bdddde96f45d6dbd9', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/b5X89xeJorW6TiYHNx0-7daLhtBC4zWKkDCbvwxbSQk.jpg?width=216&crop=smart&auto=webp&s=9834dca34d22eb1acdf5e33d899f2cd52072c5a8', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/b5X89xeJorW6TiYHNx0-7daLhtBC4zWKkDCbvwxbSQk.jpg?width=320&crop=smart&auto=webp&s=f1ee454f99fd13a8f69bb74872a2ce7af9691c1a', 'width': 320}, {'height': 405, 'url': 'https://external-preview.redd.it/b5X89xeJorW6TiYHNx0-7daLhtBC4zWKkDCbvwxbSQk.jpg?width=640&crop=smart&auto=webp&s=ac4d33634e63ad0f2be2698a5606bb6e201911d5', 'width': 640}], 'source': {'height': 487, 'url': 'https://external-preview.redd.it/b5X89xeJorW6TiYHNx0-7daLhtBC4zWKkDCbvwxbSQk.jpg?auto=webp&s=7b1107f084b6bce4885f09f29d85a63b11ef9fa2', 'width': 768}, 'variants': {}}]} |
Chat model unsupervised fine-tuning | 2 | Hello friends. Even though there have been a few similar posts, I couldn't get the answer I wanted.
I have thousands of different PDF files. For example, the files can be book PDFs.
The PDF files are in raw form, not in question-answer format.
What I want to do is train a Llama model on the raw data and then have it provide information about my own data in Q&A use.
As far as I have researched, chat models (meta-llama/Llama-2-7b-chat-hf) need data in question-answer format for fine-tuning, but it is very difficult to convert all those PDFs into question-answer format.
Is there a chance to fine-tune the "chat" model unsupervised? Or what path can I follow? | 2023-10-30T08:52:44 | https://www.reddit.com/r/LocalLLaMA/comments/17joyph/chat_model_unsupervised_finetuning/ | Typical_Time_208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17joyph | false | null | t3_17joyph | /r/LocalLLaMA/comments/17joyph/chat_model_unsupervised_finetuning/ | false | false | self | 2 | null |
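For reference, raw-text ("continued pretraining") fine-tuning needs no question-answer labels at all. A minimal sketch with Hugging Face transformers is below, where the file path and hyperparameters are placeholders and the PDFs are assumed to be extracted to plain text first:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # base model; chat behavior can be tuned afterwards
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "books.txt"})  # text extracted from the PDFs

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=train,
    # mlm=False gives the plain next-token objective, i.e. unsupervised training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```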
Finally, a diffusion-based LLM! | 154 | https://arxiv.org/abs/2310.17680
Ok, technically a tiny language model for now:
*Imagine a developer who can only change their last line of code, how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation from natural language have a similar limitation: they do not easily allow reconsidering earlier tokens generated. We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language. We evaluate CodeFusion on the task of natural language to code generation for Bash, Python, and Microsoft Excel conditional formatting (CF) rules. Experiments show that CodeFusion (75M parameters) performs on par with state-of-the-art auto-regressive systems (350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and top-5 accuracy due to its better balance in diversity versus quality.*
And it is only for code, and it seems to be much slower. But it looks extremely interesting as a proof of concept.
I *think* that instead of a lot of "denoising" steps to generate text from gibberish, a dual-model system that takes a typical autoregressive draft and then runs a few "denoising" steps over it to look for errors and inconsistencies might be the best of both worlds, compared to typical methods like progressive refinement that require rewriting the entire text token by token several times... | 2023-10-30T07:45:24 | https://www.reddit.com/r/LocalLLaMA/comments/17jo3rb/finally_a_diffusion_based_lmm/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jo3rb | false | null | t3_17jo3rb | /r/LocalLLaMA/comments/17jo3rb/finally_a_diffusion_based_lmm/ | false | false | self | 154 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
LLMs and basic maths | 1 | I have been evaluating a number of maths models. One simple question I ask is "what is 2 to the power of 4.1?" (the answer is about 17.15). Almost every model butchers it. GPT-4 is the only one to get it correct out of the box. It looks like questions like this are just not meant for LLMs. Without basic arithmetic, LLMs will not be particularly useful in any highly numeric occupation. Has anyone managed to get any fine-tuned LLMs to perform arithmetic reliably?
I am starting to think that the only way to do this is to outsource specific calculations to a mathematical expression parser. | 2023-10-30T07:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/17jnxrd/llms_and_basic_maths/ | WorldlinessSpecific9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jnxrd | false | null | t3_17jnxrd | /r/LocalLLaMA/comments/17jnxrd/llms_and_basic_maths/ | false | false | self | 1 | null |
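As a sketch of that last idea, the model can be prompted to emit a marked expression instead of a guessed number, and Python does the arithmetic; the `CALC[...]` convention here is an assumption, not a standard:

```python
import math
import re

def answer_with_tool(llm_output: str) -> str:
    # Find the expression the model delegated, evaluate it, substitute it back.
    match = re.search(r"CALC\[(.+?)\]", llm_output)
    if not match:
        return llm_output
    # eval is acceptable here only because the input is our own model's short expression.
    value = eval(match.group(1), {"__builtins__": {}}, vars(math))
    return llm_output.replace(match.group(0), f"{value:.4f}")

print(answer_with_tool("2 to the power of 4.1 is CALC[2 ** 4.1]."))
# -> "2 to the power of 4.1 is 17.1484."
```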
AWS Inferentia | 3 | Hi everyone!
I am looking into using an AWS Inferentia instance to serve my model for inference.
Has anyone here used these? I am using llama-cpp-python; will I be able to use it?
They are much cheaper than GPUs, and I am not doing any training anyway, so I'd prefer not to have to rent GPUs.
Has anyone here used it and done any configuration they could share?
Thanks everyone | 2023-10-30T07:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/17jnve9/aws_inferentia/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jnve9 | false | null | t3_17jnve9 | /r/LocalLLaMA/comments/17jnve9/aws_inferentia/ | false | false | self | 3 | null |
Recommendation on running local host LLMs | 2 | I’ve built a RAG system and want to use it with a group of friends. I’m planning on building or buying a second-hand server box. Any recommendations or builds for how I can host the endpoint and website? | 2023-10-30T06:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/17jni05/recommendation_on_running_local_host_llms/ | ConceptGT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jni05 | false | null | t3_17jni05 | /r/LocalLLaMA/comments/17jni05/recommendation_on_running_local_host_llms/ | false | false | self | 2 | null |
What are top open source projects in LLM space | 80 | I am aware of langchain, LlamaIndex and MetaGPT
https://github.com/geekan/MetaGPT
https://github.com/langchain-ai/langchain
https://github.com/run-llama/llama_index | 2023-10-30T05:59:48 | https://www.reddit.com/r/LocalLLaMA/comments/17jmpn8/what_are_top_open_source_projects_in_llm_space/ | ashutrv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jmpn8 | false | null | t3_17jmpn8 | /r/LocalLLaMA/comments/17jmpn8/what_are_top_open_source_projects_in_llm_space/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': 'j-s-4nkM__l557YzrA1dnD7EHR-lzdWovs-HWzho5S4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?width=108&crop=smart&auto=webp&s=86f5ef5a68e897e09f84f473d011b799999415c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?width=216&crop=smart&auto=webp&s=e60c46495344d29fb6491a6455152cb154da231a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?width=320&crop=smart&auto=webp&s=d93d9725e018f866694cce724cba81134fb8d811', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?width=640&crop=smart&auto=webp&s=c70e9b4975d31f2a0a051548b0f1a3fbb223dfff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?width=960&crop=smart&auto=webp&s=db8fd53dc27cc9946f085b39e0c481e38a6e3749', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?width=1080&crop=smart&auto=webp&s=eaba97a038ab26488024b4de281cb1af7ef64e2f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0bfTp13DPX5Gt8P7ez1ZT1Is0IBaUfNgfGwlaT-A88g.jpg?auto=webp&s=68ce7e0d9b282a08f8e78930c2cef921b597dff1', 'width': 1200}, 'variants': {}}]} |
Problem s with GPU acceleration in Llama.cpp | 1 | [removed] | 2023-10-30T05:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/17jmij9/problem_s_with_gpu_acceleration_in_llamacpp/ | CheesekeeperZero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jmij9 | false | null | t3_17jmij9 | /r/LocalLLaMA/comments/17jmij9/problem_s_with_gpu_acceleration_in_llamacpp/ | false | false | self | 1 | null |
Loading Airoboros 180b gguf 5_K_M error - byte not found in vocab: ' | 1 | Has anyone attempted to, or managed to, load the new Airoboros 180b model yet? I pulled down the GGUFs from TheBloke this morning and just got it loaded up, but I hit that error on load and llama.cpp crashes. I haven't run into this before, so I'm not really sure whether it means there was an issue with the model, whether my download got corrupted, or something else.
Anyone tried yet or seen this before? | 2023-10-30T05:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/17jm4ie/loading_airoboros_180b_gguf_5_k_m_error_byte_not/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jm4ie | false | null | t3_17jm4ie | /r/LocalLLaMA/comments/17jm4ie/loading_airoboros_180b_gguf_5_k_m_error_byte_not/ | false | false | self | 1 | null |
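To rule out the corrupted-download case, one option is hashing the local file and comparing it against the SHA256 shown on the Hugging Face file page; the filename and expected hash below are placeholders:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    # Stream the file so a multi-GB GGUF doesn't have to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

local = sha256_of("airoboros-180b.Q5_K_M.gguf-split-a")  # hash each split part
print(local == "sha256_value_copied_from_the_model_page")
```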
Mistral NSFW LoRA | 92 | Hi, I made an NSFW LoRA for Mistral; if anyone is interested, please check it out and use it however you like.
If you want to write a review on the Lora, I will be glad to see it.
\~200 stories were used, with a total weight of 50 MB.
Careful, the model is very HORNY.
Each story was labeled in TTG format, which means you can use the following format:
\[ Title: What you want; Tags: what you want; Genres: What you want \]
Also, below that you can write instructions for it, for example:
\[ Write long sentences \] \[ Character A talks to character B \]
HF: [https://huggingface.co/Kotokin/Mistral-7B-NSFWSTORY-lora](https://huggingface.co/Kotokin/Mistral-7B-NSFWSTORY-lora)
MODEL: HuggingFaceH4\_zephyr-7b-beta | 2023-10-30T04:30:53 | https://www.reddit.com/r/LocalLLaMA/comments/17jlen2/mistral_nsfw_lora/ | Terrible-Mongoose-84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jlen2 | false | null | t3_17jlen2 | /r/LocalLLaMA/comments/17jlen2/mistral_nsfw_lora/ | false | false | nsfw | 92 | {'enabled': False, 'images': [{'id': 'GNAFwMxkiTOgcapf8K50HGHE-YJl2MeMpYsyvHiaKrc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=108&crop=smart&auto=webp&s=4e987400ff0e0284c26886badb7e0adc9de74354', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=216&crop=smart&auto=webp&s=038a133688728f874fba4be570a5a134cc68f552', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=320&crop=smart&auto=webp&s=b9bcf4aad8e21e928f650efcd8ebca2ba8b915d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=640&crop=smart&auto=webp&s=fb3ef8c6a57a51c6b392c3d06dfad7a5cc4c1232', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=960&crop=smart&auto=webp&s=700d0361bc6e5d0a23d768965bec804b2e3c5f05', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=1080&crop=smart&auto=webp&s=a04226f10f470e8d9a0a15bc9af951d2514f641c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?auto=webp&s=ca1e66cbd6c1aa81bc762a80dc0ebaf0f4e09c47', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=6878ff64faaa1d13b4640b01f4890dd0af11805d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1890a58abd4b13ab2d9d9b18b8de240190649013', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=9c1c8922013d99632d83b89d14dcefc244d6dd0b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=20cbeb3cc09c305046b24fd01bdd41e8d8d680e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e59d2b72c9769d31017a6ef8bf331b80f5e6d364', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=aae876fb67ffe9f30e32dd377dc79cbccca8d3a5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?blur=40&format=pjpg&auto=webp&s=10a61d41503a9d72a57d4c87dcac4257afd74839', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 
'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=6878ff64faaa1d13b4640b01f4890dd0af11805d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1890a58abd4b13ab2d9d9b18b8de240190649013', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=9c1c8922013d99632d83b89d14dcefc244d6dd0b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=20cbeb3cc09c305046b24fd01bdd41e8d8d680e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e59d2b72c9769d31017a6ef8bf331b80f5e6d364', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=aae876fb67ffe9f30e32dd377dc79cbccca8d3a5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9W687rYu1mhkZlkBZ1R5HYqaAetJiU2iUQyIDqeukaM.jpg?blur=40&format=pjpg&auto=webp&s=10a61d41503a9d72a57d4c87dcac4257afd74839', 'width': 1200}}}}]} |
Serverless Runpod question | 5 | If I have 1 model and ask it 10 separate questions, say 1 hour apart from each other.
Is that the same as asking 10 models 1 question?
It just spins it up and shuts it down quickly per-question? | 2023-10-30T04:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/17jlccn/serverless_runpod_question/ | parasocks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jlccn | false | null | t3_17jlccn | /r/LocalLLaMA/comments/17jlccn/serverless_runpod_question/ | false | false | self | 5 | null |
offload_dir error message. If anyone can help, it would mean the world. Thank you | 1 |
So I am new to working with Llama 2 and have been following this guide to install and fine-tune the model. I'm doing it on Google Colab, and I have to stick to Google Colab because that's the only environment available to me.
I have been able to get through all the hiccups along the way, and pretty much all I have had to do up to this point is copy and paste, but now I have hit an error message that I have no idea how to solve.
This:
https://preview.redd.it/gof5ktxxl9xb1.png?width=1554&format=png&auto=webp&s=1e28f9ce8c8d7576a0de7678910505a78478a58a
I don't know if anyone else has come across this error before; I am just looking for how to fix it in this specific instance. I have tried so many different sources on Google, but it seems like this isn't a popular issue. I also don't know what the problem could be: whether I need to create a folder, initialize something, or do anything else. I have no idea; please help me if you think you can solve it, it would be a great help.
If you need any more information, I will be happy to provide it; thank you. | 2023-10-30T04:11:15 | https://www.reddit.com/r/LocalLLaMA/comments/17jl3gq/offload_dir_error_message_if_anyone_can_help_it/ | Head-Distribution-94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jl3gq | false | null | t3_17jl3gq | /r/LocalLLaMA/comments/17jl3gq/offload_dir_error_message_if_anyone_can_help_it/ | false | false | 1 | null |
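Without seeing the notebook this is only a guess, but that error usually appears when `device_map="auto"` decides to offload some weights to disk and accelerate has nowhere to put them; passing an offload folder often resolves it:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder for whichever checkpoint the guide loads
    device_map="auto",            # lets accelerate split weights across GPU/CPU/disk
    offload_folder="offload",     # any writable directory, e.g. /content/offload on Colab
)
```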
Reuse an existing LoRA fine-tune with a different base? | 2 | For example, the Llama 2 base model has the Airoboros Llama-2 chat fine-tune for it.
Now, if the base model weights are updated slightly, does it still work as expected when the same Airoboros LoRA is applied to this new base? | 2023-10-30T03:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/17jkqy9/reuse_existing_lora_fine_tune_with_different_base/ | xynyxyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jkqy9 | false | null | t3_17jkqy9 | /r/LocalLLaMA/comments/17jkqy9/reuse_existing_lora_fine_tune_with_different_base/ | false | false | self | 2 | null |
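Mechanically nothing stops you from attaching the adapter to a different checkpoint; a sketch with PEFT is below (both repo names are hypothetical). Whether the result still behaves as expected depends on how far the new base's weights drifted from the ones the LoRA was trained against:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("some-org/llama-2-7b-updated")  # the new base
model = PeftModel.from_pretrained(base, "some-org/airoboros-l2-lora")       # the old adapter
```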
FP8-LM: Training FP8 Large Language Models | 23 | 2023-10-30T02:50:12 | https://arxiv.org/abs/2310.18313 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17jjopb | false | null | t3_17jjopb | /r/LocalLLaMA/comments/17jjopb/fp8lm_training_fp8_large_language_models/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
PockEngine: Sparse and Efficient Fine-tuning in a Pocket | 33 | 2023-10-30T02:23:24 | https://arxiv.org/abs/2310.17752 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17jj6fq | false | null | t3_17jj6fq | /r/LocalLLaMA/comments/17jj6fq/pockengine_sparse_and_efficient_finetuning_in_a/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Open Source Open Night! AI standup comedy! | 32 | Benchmarks are one thing, but which LLM can write a decent joke? It's standup comedy for LLMs. Submit the best joke your favorite LLM has told you
_______RULES
Jokes must be written by your AI. Don't hurt your own feelings and lose to an AI in humor.
Keep it within all applicable community guidelines. Don't get this taken down, please!
I don't want to enforce a screenshot-only rule, as posting those can get cumbersome, but TAKE A SCREENSHOT OF THE AI'S WRITTEN JOKE, just in case your submission is suspiciously sapien. If something seems off, I will ask for a screenshot; I'll let you know in a reply to your commented submission, both to notify you and to start a timer. If no credible proof that the AI wrote the joke itself is submitted within 24 hours or by the end of the contest (whichever comes first), I will delete your submission.
Vote for your favorites! I think the etiquette should be one upvote per submission, but that's just a suggestion.
Multiple posts are totally fine, just try to upvote other submissions you like as well.
The timer begins at the first commented submission, and I will declare a winner 72 hours after that.
_________
The rules are purposely a little loose as I want to go into this with faith in this community, and comedy with too many rules just doesn't work. Please keep to the spirit and intended fun of this post. I reserve the right to disqualify your submission if you can't let people have nice things, let this be a community thing.
Give me your best! Vote for your favorites! | 2023-10-30T01:43:00 | Future_Might_8194 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17jidvh | false | null | t3_17jidvh | /r/LocalLLaMA/comments/17jidvh/open_source_open_night_ai_standup_comedy/ | false | false | 32 | {'enabled': True, 'images': [{'id': 'WWGXKEFXSEKoFEuiTDh6fvyuMwAMkJBr20u5yACORkw', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/z81ia5smv8xb1.jpg?width=108&crop=smart&auto=webp&s=5261189ae2afb353d775a42a53b25055a7ec5900', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/z81ia5smv8xb1.jpg?width=216&crop=smart&auto=webp&s=3e3ca6dc8ed33f5e0233e4477b3011ef241e43e1', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/z81ia5smv8xb1.jpg?width=320&crop=smart&auto=webp&s=d8b24c4487bd02210d783ef7e9de4dc54a05fbc7', 'width': 320}], 'source': {'height': 315, 'url': 'https://preview.redd.it/z81ia5smv8xb1.jpg?auto=webp&s=e957e3240b473dbd01cef38d5d628a22a8653c8c', 'width': 600}, 'variants': {}}]} | ||
Today I released Akins 3B. | 55 | Today I released [Akins 3B](https://huggingface.co/acrastt/Akins-3B).
[Akins 3B](https://huggingface.co/acrastt/Akins-3B) is [StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) (licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)), instruction-tuned on [Claude Multiround Chat 1K](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_1k) for 2 epochs with QLoRA ([2305.14314](https://arxiv.org/abs/2305.14314)).
Prompt template:
```
USER: {prompt}
ASSISTANT:
```
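A minimal sketch of querying the model with this template (untested here; generation settings are placeholders, and the underlying StableLM code may need `trust_remote_code=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("acrastt/Akins-3B")
model = AutoModelForCausalLM.from_pretrained(
    "acrastt/Akins-3B", device_map="auto", trust_remote_code=True
)

prompt = "USER: Write a short poem about Turbodurp.\nASSISTANT:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, not the echoed prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```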
From my current limited testing, this model is really impressive for a 3B. Example output:
```
Turbodurp, the wind-blown,
With hair as wild as the sea,
He walks the land with a swagger,
And a smile that could cheer you.
Turbodurp, the wise one,
Who sees the world with wonder,
He sees beauty in everything,
And joy in a simple sparrow.
Turbodurp, the joker,
Who laughs at the world's foibles,
He sees the humor in life's journey,
And finds joy in a simple toad.
Turbodurp, the dreamer,
Who sees the world in a dream,
He sees possibilities everywhere,
And hopes for a better scheme.
Turbodurp, the lover of life,
Who lives each day to the fullest,
He laughs at the world's troubles,
And finds joy in a simple cup of tea.
Turbodurp, the friend of all,
Who sees the good in everyone,
He lifts your spirits with his laughter,
And shares his dreams with you.
Turbodurp, the joyful one,
Who finds joy in simple things,
He sees the beauty in nature,
And finds joy in a simple ring.
Turbodurp, the one who loves,
Who loves with a heart of gold,
He loves with a love that endures,
And finds joy in a simple fold.
``` | 2023-10-30T01:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/17jidpg/today_i_released_akins_3b/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jidpg | false | null | t3_17jidpg | /r/LocalLLaMA/comments/17jidpg/today_i_released_akins_3b/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'QpCX81q_9d-EuG7jelQ00HcAN9KZSyIEhi1TLcb7Cj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?width=108&crop=smart&auto=webp&s=27716126f51480c72dff78b30299870e40f20434', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?width=216&crop=smart&auto=webp&s=afe8ef34eefabf779a4ac914366162d03c9de73d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?width=320&crop=smart&auto=webp&s=41e160c7e287901d37e366a91c94a8e65408dd8d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?width=640&crop=smart&auto=webp&s=3a11fb39fead042455104e7b4c4e82bcf26f0ec5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?width=960&crop=smart&auto=webp&s=a1e61396ab4d7b0f7dce6f1af03b751477096eb7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?width=1080&crop=smart&auto=webp&s=58f853eaaa31eec9c1204de3b63172a75f0eab62', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wcQd2vG68VHRQVr4NqqdcZAq8VahZGR85P7B75OJeDw.jpg?auto=webp&s=119826e4c78ee78e4bc575541acfad31220563ce', 'width': 1200}, 'variants': {}}]} |
Tested: Batched decoding on CPU | 21 | Ever since the medusa models were released, I've been wondering if speculative sampling can run effectively on CPU only. Modern GPUs already provide fast t/s, so the speedup is more exciting when running on low bandwidth GPUs, SoCs, and CPUs.
And that depends on batched decoding working correctly.
So I did tests with the largest available model running directly from storage (no RAM needed), and also a 13B.
### Falcon 180B Q4_K_S (mmap inference)
`./batched Falcon-180B-Q4_K_S.gguf "my best" <parallel> 8`
| batch size | tg | total |
|------------|----------|-------------------------------|
| 1 | 0.05 t/s | decoded 5 tokens in 110.76s |
| 2 | 0.09 t/s | decoded 10 tokens in 117.22s |
| 4 | 0.17 t/s | decoded 20 tokens in 114.95s |
| 8 | 0.31 t/s | decoded 40 tokens in 117.94s |
| 16 | 0.64 t/s | decoded 80 tokens in 124.36s |
| 32 | 0.99 t/s | decoded 160 tokens in 161.40s |
| 64 | 1.33 t/s | decoded 320 tokens in 240.06s |
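Note that `tg` here is aggregate throughput across all sequences in the batch; a quick sanity check from the raw decode times (sketched below) makes the per-stream rate explicit:

```python
# batch 64 row: 320 tokens decoded in 240.06 s
tokens, seconds, batch = 320, 240.06, 64
aggregate = tokens / seconds    # ~1.33 t/s, matching the table
per_stream = aggregate / batch  # ~0.021 t/s for each of the 64 parallel sequences
print(f"{aggregate:.2f} t/s total, {per_stream:.3f} t/s per stream")
```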
### Falcon 180B f16 (mmap inference)
`./batched ggml-model-f16.gguf "my best" <parallel> 8`
| batch size | tg | total |
|------------|------|-------------------------------|
| 1 | 0.01 t/s | decoded 5 tokens in 457.86s |
| 2 | 0.02 t/s | decoded 10 tokens in 452.00s |
| 16 | 0.17 t/s | decoded 160 tokens in 474.16s |
### 13B Q4_K_M (standard inference)
`./batched llama-2-13B.gguf "my best" <parallel> 120`
| batch size | TG t/s |
|------------|--------|
| 1 | 5.4 |
| 2 | 10.5 |
| 3 | 14.7 |
| 4 | 18.1 |
| 5 | 20.3 |
| 6 | 22.8 |
| 8 | 24.7 |
| 10 | 26.6 |
| 16 | 25.9 |
So these results show double, triple, and eventually much higher aggregate t/s as the batch size grows.
I timed them in real life too, to be sure the results are accurate.
Since exl2 already provides verifiable gains consistent with the literature (2-3x speed) on most 70B models, and batched CPU inference scales the same way it would on a GPU, speculative CPU inference (llama.cpp) should probably be able to deliver the same 2-3x speedups, despite the current experience being slower. | 2023-10-30T01:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/17jhwpa/tested_batched_decoding_on_cpu/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jhwpa | false | null | t3_17jhwpa | /r/LocalLLaMA/comments/17jhwpa/tested_batched_decoding_on_cpu/ | false | false | self | 21 | null |
Any LLMs that aren't completely neutered and can be a little edgy? | 1 | [removed] | 2023-10-30T00:15:27 | https://www.reddit.com/r/LocalLLaMA/comments/17jgopy/any_llms_that_arent_completely_neutered_and_can/ | PM_ME_YOUR_LLMS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jgopy | false | null | t3_17jgopy | /r/LocalLLaMA/comments/17jgopy/any_llms_that_arent_completely_neutered_and_can/ | false | false | self | 1 | null |
Llama ask you a question, what is speechless? | 0 | What are the speechless models? I see them popping up left and right. Are they for a specific purpose? Is that just a model name? I am at a loss. I am...I can't find the words....how do you say it? | 2023-10-29T23:58:30 | https://www.reddit.com/r/LocalLLaMA/comments/17jgc8d/llama_ask_you_a_question_what_is_speechless/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17jgc8d | false | null | t3_17jgc8d | /r/LocalLLaMA/comments/17jgc8d/llama_ask_you_a_question_what_is_speechless/ | false | false | self | 0 | null |
How LLMs are changing search | 0 | 2023-10-29T23:26:03 | https://orangutanai.substack.com/p/how-llms-are-changing-search | firef1y1 | orangutanai.substack.com | 1970-01-01T00:00:00 | 0 | {} | 17jfnyz | false | null | t3_17jfnyz | /r/LocalLLaMA/comments/17jfnyz/how_llms_are_changing_search/ | false | false | 0 | {'enabled': False, 'images': [{'id': '3AOl-pLK-HCRtnZ4eR0TguUzFY-hGErZjpowCXYg36Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MRqFX7agBeCAzumcg4GoRIe2vHQ5g603buQSJW0vfzI.jpg?width=108&crop=smart&auto=webp&s=7f404778f6a2578347c515fe66a0be7c776c69a9', 'width': 108}], 'source': {'height': 144, 'url': 'https://external-preview.redd.it/MRqFX7agBeCAzumcg4GoRIe2vHQ5g603buQSJW0vfzI.jpg?auto=webp&s=aee71b79bc50d23c2fcac9ef5c01bad959f2b936', 'width': 144}, 'variants': {}}]} | ||
Explore the Best iPhone ChatGPT App for Seamless Communication - phonetui | 1 | 2023-10-29T22:24:36 | https://www.phonetui.com/2023/10/explore-best-iphone-chatgpt-app-for.html | FrederickJones7 | phonetui.com | 1970-01-01T00:00:00 | 0 | {} | 17jedr5 | false | null | t3_17jedr5 | /r/LocalLLaMA/comments/17jedr5/explore_the_best_iphone_chatgpt_app_for_seamless/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'u5fmLJObf__hkcXjBET6UK0dlxPHnaTnQzoBV9dtw_c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/tDOxRrpOob9YQNVmvjQUDlrpC8JLqm_Pqbcodp450I0.jpg?width=108&crop=smart&auto=webp&s=a809450abab3bec4b134febb50db9bc85beb0102', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/tDOxRrpOob9YQNVmvjQUDlrpC8JLqm_Pqbcodp450I0.jpg?width=216&crop=smart&auto=webp&s=289f0778fffb5fd596750ed41722f8cdf641bf21', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/tDOxRrpOob9YQNVmvjQUDlrpC8JLqm_Pqbcodp450I0.jpg?width=320&crop=smart&auto=webp&s=1f20569fb433793f0e33ff6225d17b073e41bc7b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/tDOxRrpOob9YQNVmvjQUDlrpC8JLqm_Pqbcodp450I0.jpg?width=640&crop=smart&auto=webp&s=7a48082b7ddbb5a38a92ccf194b3c264fdc2e3e3', 'width': 640}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tDOxRrpOob9YQNVmvjQUDlrpC8JLqm_Pqbcodp450I0.jpg?auto=webp&s=2c34fbfcdce99f29ea3f77b5c384d73e990c734b', 'width': 640}, 'variants': {}}]} |