Dataset columns:
title: string, length 1 to 300
score: int64, 0 to 8.54k
selftext: string, length 0 to 41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0 to 878
author: string, length 3 to 20
domain: string, length 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0 to 2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646 to 1.8k
name: string, length 10
permalink: string, length 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4 to 213
ups: int64, 0 to 8.54k
preview: string, length 301 to 5.01k
Utilizing 16gb of VRAM
10
I'm currently running a 13B GPTQ Llama2 model in exllama1 at 4k token limit. It's using about 10gb of VRAM, from what I can tell. I have 16 on the card. Is there anything I can do to squeeze out some extra performance using the remaining 6gb?
2023-09-14T15:16:00
https://www.reddit.com/r/LocalLLaMA/comments/16il2wx/utilizing_16gb_of_vram/
__SlimeQ__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16il2wx
false
null
t3_16il2wx
/r/LocalLLaMA/comments/16il2wx/utilizing_16gb_of_vram/
false
false
self
10
null
MIT-IBM Watson AI Lab releases MoLM suite with three small sparse MoE models, the largest of which (8B params with 700M experts) performs on par with Pythia 2.8B while its throughput is comparable to Pythia 1.4B
62
Paper: [https://arxiv.org/abs/2306.04640](https://arxiv.org/abs/2306.04640)

GitHub: [https://github.com/ibm/moduleformer](https://github.com/ibm/moduleformer) (under Apache 2.0)

Twitter thread: [https://twitter.com/Yikang_Shen/status/1702041129267388678](https://twitter.com/Yikang_Shen/status/1702041129267388678)

Abstract: >Large Language Models (LLMs) have achieved remarkable results. However, existing models are expensive to train and deploy, and it is also difficult to expand their knowledge beyond pre-training data without forgetting previous knowledge. This paper proposes a new neural network architecture, ModuleFormer, that leverages modularity to improve the efficiency and flexibility of large language models. ModuleFormer is based on the Sparse Mixture of Experts (SMoE). Unlike the previous SMoE-based modular language model, which requires domain-labeled data to learn domain-specific experts, ModuleFormer can induce modularity from uncurated data with its new load balancing and concentration losses. ModuleFormer is a modular architecture that includes two different types of modules: new stick-breaking attention heads and feedforward experts. Different modules are sparsely activated conditioned on the input token during training and inference. In our experiment, we found that the modular architecture enables three important abilities for large pre-trained language models: 1) Efficiency, since ModuleFormer only activates a subset of its modules for each input token, thus it could achieve the same performance as dense LLMs with more than two times throughput; 2) Extendability, ModuleFormer is more immune to catastrophic forgetting than dense LLMs and can be easily extended with new modules to learn new knowledge that is not included in the training data; 3) Specialisation, finetuning ModuleFormer could specialize a subset of modules to the finetuning task and the task-unrelated modules could be easily pruned for a lightweight deployment.

Model architecture: >MoLM is an auto-regressive language model that uses the ModuleFormer architecture. It has 16 attention modules in each attention layer and 32 MLP modules in each MLP layer. During inference, in each layer, MoLM-350M-4B and MoLM-700M-8B activate 2 modules for each token, while MoLM-700M-4B activates 4 modules. MoLM-350M-4B and MoLM-700M-4B have 24 blocks and MoLM-700M-8B has 48 blocks.
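For reference, a minimal sketch of loading one of the released checkpoints with Hugging Face transformers; the repository id and the need for trust_remote_code are assumptions based on the ModuleFormer GitHub page, not something stated in the post:

```python
# Minimal sketch (repository id is an assumption; MoLM ships custom modeling code,
# hence trust_remote_code=True).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm/MoLM-700M-8B"  # assumed name following the MoLM release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Sparse Mixture of Experts models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```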
2023-09-14T15:11:07
https://www.reddit.com/r/LocalLLaMA/comments/16ikyma/mitibm_watson_ai_lab_releases_molm_suite_with/
ain92ru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ikyma
false
null
t3_16ikyma
/r/LocalLLaMA/comments/16ikyma/mitibm_watson_ai_lab_releases_molm_suite_with/
false
false
self
62
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Best CPU Model to extract data from raw text?
1
Hey everyone! I’m using llama.cpp (llm-rs) to extract structured data from raw text. Right now I’m using this structure for input: ``` RAW TEXT === { “xxx”: “ ``` I was using Nous Hermes 13B but as you know a new model appears every day. My results were not terrible but I’m open to new recommendations of models or strategy. Thanks! PS: Sorry if backticks didn’t work I’m on my phone 😂
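A minimal sketch of the same completion-style extraction pattern the post describes (raw text, a separator, then the opening of a JSON object for the model to complete), written here with llama-cpp-python rather than the llm-rs bindings from the post; model path, field name, and stop sequence are placeholders:

```python
# Sketch: prefix the raw text and the start of a JSON object, let the model fill in
# the values. Uses llama-cpp-python purely as an illustration; paths are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="nous-hermes-llama2-13b.Q4_K_M.gguf", n_ctx=2048)  # placeholder path

raw_text = "Invoice #4821 issued 2023-09-01 to ACME Corp, total 1,250 EUR."
prompt = raw_text + "\n===\n{\n  \"invoice_number\": \""
result = llm(prompt, max_tokens=128, stop=["\n\n"])

# The completion continues the JSON that the prompt opened.
print("{\n  \"invoice_number\": \"" + result["choices"][0]["text"])
```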
2023-09-14T14:38:44
https://www.reddit.com/r/LocalLLaMA/comments/16ik65n/best_cpu_model_to_extract_data_from_raw_text/
okyaygokay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ik65n
false
null
t3_16ik65n
/r/LocalLLaMA/comments/16ik65n/best_cpu_model_to_extract_data_from_raw_text/
false
false
self
1
null
Alibaba removed Qwen?
35
All the links are dead. We never even got it working with quants. This is the second Chinese model to just disappear: - https://huggingface.co/Qwen/Qwen-7B - https://huggingface.co/Qwen/Qwen-7B-Chat - https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary
2023-09-14T13:55:58
https://www.reddit.com/r/LocalLLaMA/comments/16ij5ux/alibaba_removed_qwen/
MoneroBee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ij5ux
false
null
t3_16ij5ux
/r/LocalLLaMA/comments/16ij5ux/alibaba_removed_qwen/
false
false
self
35
null
Train GPTQ model?
2
Hey I'm trying to find if it is possible to train a GPTQ model. I didn't find anything online, looking to get some help here. TIA
2023-09-14T12:47:16
https://www.reddit.com/r/LocalLLaMA/comments/16ihjdm/train_gptq_model/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ihjdm
false
null
t3_16ihjdm
/r/LocalLLaMA/comments/16ihjdm/train_gptq_model/
false
false
self
2
null
Apple CoreML
18
The Apple event made me aware of CoreML. As far as I know, this also utilizes the Neural Engine on the Apple M and A series processors. This really sounds interesting. Anyone ever tried to quantize a Llama 7B model down to 4 bits and then run it on an iPad Pro or iPhone? Or anyone made a comparison between running the same model with CoreML and llama.cpp with Metal acceleration?
2023-09-14T12:15:04
https://huggingface.co/blog/swift-coreml-llm
krazzmann
huggingface.co
1970-01-01T00:00:00
0
{}
16igu6g
false
null
t3_16igu6g
/r/LocalLLaMA/comments/16igu6g/apple_coreml/
false
false
https://b.thumbs.redditm…ns7e2Y3uaB6k.jpg
18
{'enabled': False, 'images': [{'id': 'GdZfMrYcAiV4r11CvPLgyVUFWsqE5CkALvDatwtQlZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=108&crop=smart&auto=webp&s=a1dab5f9bcad8fcb14f01da63b4ec2b53e41412f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=216&crop=smart&auto=webp&s=5be26557e6e2419ef12043c03536fbed81e5c8d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=320&crop=smart&auto=webp&s=7a015253ad9f9e7d305ad05ea805b19a85db3a3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=640&crop=smart&auto=webp&s=f8ec22bfe4df878fc12db83493870ad3a65b255c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=960&crop=smart&auto=webp&s=392699fcd6f5564b6cce3ee20c4e4ccf1bfc243a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=1080&crop=smart&auto=webp&s=eae31cce00083b7972cbbbe873e188ee99a117fe', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?auto=webp&s=017d2a5b3efdda0608edd479c3edf6e9617d34a2', 'width': 1300}, 'variants': {}}]}
Help a data scientist figure out software and computational needs
3
Hey, I'm a data scientist, and I want to develop an app based on open-source LLMs. I'm trying to figure out the software stack and computational needs for my project. Basically, the app is supposed to modify a small set of simple instructions according to an altered reality (e.g., modify instructions to make an omelette if there are no eggs in the fridge). I figure that my context should be 1K-2K. Essentially, I want to make my workstation LLM-compatible, and I'd appreciate your input:

1. I assume that HuggingFace Transformers/PEFT is the way to go? (See the sketch below.)
2. Does it make sense to perform most of the fine-tuning on the cloud to optimize my budget usage?
3. If so, which local GPU would you recommend for testing my code and experimentation? The secondhand market is problematic where I live. What are your thoughts about the 7900 XTX or Intel Arc A770? To my understanding, they do not support QLoRA training.
4. If a certain dataset, method, or prompt works for me on a local, small model (say 13B), can I assume that the 70B version of the same model would benefit from a similar method?

Thanks!
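On point 1, a minimal QLoRA-style sketch with transformers, peft, and bitsandbytes; the base model id and LoRA hyperparameters are placeholders, not recommendations:

```python
# Minimal QLoRA sketch, assuming a CUDA GPU with bitsandbytes support.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```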
2023-09-14T11:10:55
https://www.reddit.com/r/LocalLLaMA/comments/16ifkpb/help_a_data_scientist_figure_out_software_and/
Jazzlike-Detective62
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ifkpb
false
null
t3_16ifkpb
/r/LocalLLaMA/comments/16ifkpb/help_a_data_scientist_figure_out_software_and/
false
false
self
3
null
Help a data scientist understand software and hardware requirements
1
[removed]
2023-09-14T11:02:18
https://www.reddit.com/r/LocalLLaMA/comments/16iff27/help_a_data_scientist_understand_software_and/
semantic_pineapple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16iff27
false
null
t3_16iff27
/r/LocalLLaMA/comments/16iff27/help_a_data_scientist_understand_software_and/
false
false
self
1
null
is 4 rtx 4090s better than single a6000 ada?
16
I am thinking of getting a PC for running Llama 70B locally and doing all sorts of projects with it. The thing is, I am confused about the hardware. The RTX 4090 has 24 GB VRAM and the A6000 has 48 GB, which can be pooled into 96 GB by adding a second A6000, while the RTX 4090 cannot pool VRAM the way the A6000 can. So does having 4 RTX 4090s make it possible in any way to run Llama 70B, and is it worth it? (Considering it will cost me a bit less than one RTX 6000 Ada: 4*1599 = ~$6200, and I can only afford one RTX 6000 Ada right now.) And if it is possible to run Llama 70B on RTX 4090s, what is the predicted text generation speed? Thanks in advance
2023-09-14T10:59:49
https://www.reddit.com/r/LocalLLaMA/comments/16ifd6q/is_4_rtx_4090s_better_than_single_a6000_ada/
fx76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ifd6q
false
null
t3_16ifd6q
/r/LocalLLaMA/comments/16ifd6q/is_4_rtx_4090s_better_than_single_a6000_ada/
false
false
self
16
null
What are strategies for keeping a conversation on rails?
7
I've been trying to develop a character.AI-like system on my own but I'm encountering a lot of road bumps. One is that we can go from "I'm a D&D bard halfling" to "Sure, here's a Python script to generate Fibonacci numbers" in a single prompt. What is the solution here: better system prompts? Better input prompts? Or do you need to step back to fine-tuning? LoRA?
2023-09-14T10:27:35
https://www.reddit.com/r/LocalLLaMA/comments/16iessf/what_are_strategies_for_keeping_a_conversation_on/
The_GodKing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16iessf
false
null
t3_16iessf
/r/LocalLLaMA/comments/16iessf/what_are_strategies_for_keeping_a_conversation_on/
false
false
self
7
null
Speed Up Llama 2 On CPU
1
[removed]
2023-09-14T10:19:32
https://www.reddit.com/r/LocalLLaMA/comments/16ienrz/speed_up_llama_2_on_cpu/
ZAYN_MALIK_1D
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ienrz
false
null
t3_16ienrz
/r/LocalLLaMA/comments/16ienrz/speed_up_llama_2_on_cpu/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gMm46SjLTa_UxUz3Fpg59f3Lnp_DBc_0PFqbb55ABz4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?width=108&crop=smart&auto=webp&s=9d4bf10ca7be1f27c9aa320b148aa7955c10ec45', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?width=216&crop=smart&auto=webp&s=15a811929135dc01ad71d077ac38b030ae8eeff7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?width=320&crop=smart&auto=webp&s=69d5114d5e71a1c865179be841a61f12ab5a1b64', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?auto=webp&s=9d2f6b18ef89124e01c67c6bd2ddce8170a84a40', 'width': 480}, 'variants': {}}]}
OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs
27
Hello everyone, Apologies for the self-promotion. At the Humboldt-Universität zu Berlin, we have been working on a project exploring human biases through AI, using a language model called OpinionGPT. Initially developed with the Llama 7b model, we've now advanced to using the Llama 13b model with LoRA adapters, where each adapter corresponds to a trained bias (similar to a Mixture of Experts Model) We aimed to make biases transparent and explicit, not to suppress them. You can interact with the model through our [**web demo**](https://opiniongpt.informatik.hu-berlin.de/). We would love to get some feedback and questions about our model and website.
2023-09-14T10:07:21
https://www.reddit.com/r/LocalLLaMA/comments/16iegd4/opiniongpt_modelling_explicit_biases_in/
DerBeginner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16iegd4
false
null
t3_16iegd4
/r/LocalLLaMA/comments/16iegd4/opiniongpt_modelling_explicit_biases_in/
false
false
self
27
{'enabled': False, 'images': [{'id': 'c9EHuRnpxJ3uE87OLuRF2IQPzjWAcyI1EKZA0doXMqk', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=108&crop=smart&auto=webp&s=fd71b7eb00f33e97856c37e1ff00a2fa7c160675', 'width': 108}, {'height': 213, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=216&crop=smart&auto=webp&s=f0f1651875a46f05b0a787e76a382dd0dae87f0c', 'width': 216}, {'height': 316, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=320&crop=smart&auto=webp&s=4c7b336cf23601fa1e758b2e1c0eb02fedf42a10', 'width': 320}, {'height': 632, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=640&crop=smart&auto=webp&s=181e71a8c4f92cca8f080fa78fa107a0a785257e', 'width': 640}], 'source': {'height': 762, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?auto=webp&s=ca61b9e526d20a949999e89f54e75cbf42b3c1d6', 'width': 771}, 'variants': {}}]}
HuggingChat, the open-source alternative to ChatGPT from HuggingFace just released a new websearch feature. It uses RAG and local embeddings to provide better results and show sources.
178
2023-09-14T09:51:10
https://v.redd.it/mws0k1gq07ob1
SensitiveCranberry
v.redd.it
1970-01-01T00:00:00
0
{}
16ie6pn
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mws0k1gq07ob1/DASHPlaylist.mpd?a=1697277085%2CMmZhNjkzZDY0NDk5NjYxZTJjNzc3YzUzMTcxNzM1OGMzYTI1NGJkZDlmNDFkYmEzNWQ5Zjc2YmNiNzY0NmZhYw%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/mws0k1gq07ob1/DASH_1080.mp4?source=fallback', 'height': 1834, 'hls_url': 'https://v.redd.it/mws0k1gq07ob1/HLSPlaylist.m3u8?a=1697277085%2CM2M5MTRkMjcyZjI3N2FhYTY0N2MxMzgwM2Y3NzBhNGMwNDlkN2EzYzZjNmQ3NThiMDY2ZjUxYjFlNzdlOTIwOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mws0k1gq07ob1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_16ie6pn
/r/LocalLLaMA/comments/16ie6pn/huggingchat_the_opensource_alternative_to_chatgpt/
false
false
https://b.thumbs.redditm…GLesuhF_HNiQ.jpg
178
{'enabled': False, 'images': [{'id': 'jxXP1T1HuH4upKzJCd9b4zDtZA7vYPCTjC5WLf0AHuA', 'resolutions': [{'height': 183, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=108&crop=smart&format=pjpg&auto=webp&s=11be7371da20c94af1dfde585c22aa320fc17604', 'width': 108}, {'height': 366, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=216&crop=smart&format=pjpg&auto=webp&s=0988a947405da1b5ee0a6a82506e5bd55692834a', 'width': 216}, {'height': 543, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=320&crop=smart&format=pjpg&auto=webp&s=c818427d5c1395a02d0204ed0a96e21429a86e42', 'width': 320}, {'height': 1086, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=640&crop=smart&format=pjpg&auto=webp&s=7fcf008178f3c57a0b0f9ec69d4e345a5542b4ca', 'width': 640}, {'height': 1629, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=960&crop=smart&format=pjpg&auto=webp&s=c44068cedeb8d4efa8b8b6a921acab0596a09184', 'width': 960}, {'height': 1833, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=1080&crop=smart&format=pjpg&auto=webp&s=60c37506504434e7471c085baad8b2867ea254bc', 'width': 1080}], 'source': {'height': 1908, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?format=pjpg&auto=webp&s=0b9cf8c8d3592b1c26ad4570b7ec97353c21587b', 'width': 1124}, 'variants': {}}]}
Need help with running mt5 LLM
1
Can someone give me advice or point me to what to do regarding running mT5? I have 3 issues:

1. In the paper the authors say their models range from 300M to 13B parameters, but the PyTorch bin files are much bigger than that (1.3 GB to 52 GB). Not sure what the explanation for that is...
2. When I move a bin file from the download location with Windows Explorer it is very slow. The Windows 11 system runs on an SSD, I have 64 GB RAM, 12 GB VRAM and a 13th gen Intel CPU, yet the moving ETA is something like 4 hrs for 4 GB. Not sure why that is... Anyway, moving with TotalCMD helps. I'm not having that issue with any other models, which are mostly GGUFs or GGMLs. [https://huggingface.co/collections/google/mt5-release-65005f1a520f8d7b4d039509](https://huggingface.co/collections/google/mt5-release-65005f1a520f8d7b4d039509)
3. Most important: how do I run an mT5 model? I don't want to train it or fine-tune it, just run it for translation (see the sketch below). [https://github.com/google-research/multilingual-t5](https://github.com/google-research/multilingual-t5) I downloaded the bin from HF. What next? When trying to load it in LM Studio it reports permission denied, even though it is an open-source LLM and I didn't encounter any prior approval requirements like Llama 2 has, for example... Koboldcpp does not see it. What loader do I need for mT5?

I want to translate documents in a private environment, locally, not on Google Colab. Any advice would help...
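On point 3, a minimal sketch of running an mT5 checkpoint directly with Hugging Face transformers; llama.cpp-family loaders such as LM Studio and koboldcpp target decoder-only GGUF/GGML models, so an encoder-decoder model like mT5 generally has to go through transformers. Note that the base mT5 release is pretrained with span corruption only, so a variant fine-tuned for translation may be needed for good output:

```python
# Minimal sketch: running an official mT5 checkpoint with transformers.
# Translation quality without fine-tuning is an open question.
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model_name = "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

text = "translate English to German: The report is due on Friday."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```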
2023-09-14T08:54:13
https://www.reddit.com/r/LocalLLaMA/comments/16id8tm/need_help_with_running_mt5_llm/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16id8tm
false
null
t3_16id8tm
/r/LocalLLaMA/comments/16id8tm/need_help_with_running_mt5_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rwxfryfs6Fk0mjn-aAIWTXdrWk2g_R-QukpEYYIYjCQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=108&crop=smart&auto=webp&s=b0c8ee9aa0280bea1eb4c3d6ed4086cf81bf2849', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=216&crop=smart&auto=webp&s=8ae67490f4e963833d384850a9531c26be77e405', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=320&crop=smart&auto=webp&s=71e34a8eee0b102859d8db0ef0921597705e2be6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=640&crop=smart&auto=webp&s=b5bacb72577481a2edb1b4e0b75bbebd51bc61f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=960&crop=smart&auto=webp&s=fb5bc033e268baac15916e5c9a4702d017153315', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=1080&crop=smart&auto=webp&s=450de02a1ce49ccfcf7795fdb9baa6b8072f9626', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?auto=webp&s=9fad594ed65b5d5ec5c925a9585bda86f801c7e6', 'width': 1200}, 'variants': {}}]}
I want to train my dataset using open source Vicuna
1
Hi guys, I want to train my dataset using open-source Vicuna. How do I do it? Please help me.
2023-09-14T08:45:35
https://www.reddit.com/r/LocalLLaMA/comments/16id3px/i_want_to_train_my_dataset_using_open_source/
Downtown-Message6608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16id3px
false
null
t3_16id3px
/r/LocalLLaMA/comments/16id3px/i_want_to_train_my_dataset_using_open_source/
false
false
self
1
null
How can I use embeddings from llama.cpp using OpenAI in Python?
1
[removed]
2023-09-14T08:22:47
https://www.reddit.com/r/LocalLLaMA/comments/16icq6d/how_can_i_use_embeddings_from_llamacpp_using/
mmmanel00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16icq6d
false
null
t3_16icq6d
/r/LocalLLaMA/comments/16icq6d/how_can_i_use_embeddings_from_llamacpp_using/
false
false
self
1
null
are there any good half uncensored models?
1
I'm looking for a model which is willing to use profanity etc. but isn't completely unhinged, any good models like this?
2023-09-14T07:57:31
https://www.reddit.com/r/LocalLLaMA/comments/16icaq2/are_there_any_good_half_uncensored_models/
jazza420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16icaq2
false
null
t3_16icaq2
/r/LocalLLaMA/comments/16icaq2/are_there_any_good_half_uncensored_models/
false
false
self
1
null
Best model for summarization task?
1
Pretty much above.
2023-09-14T07:33:59
https://www.reddit.com/r/LocalLLaMA/comments/16ibwht/best_model_for_summarization_task/
Raise_Fickle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ibwht
false
null
t3_16ibwht
/r/LocalLLaMA/comments/16ibwht/best_model_for_summarization_task/
false
false
default
1
null
Mark Zuckerberg's Remarks at AI Forum | Meta
41
2023-09-14T05:56:56
https://about.fb.com/news/2023/09/mark-zuckerbergs-remarks-at-ai-forum/
noiseinvacuum
about.fb.com
1970-01-01T00:00:00
0
{}
16ia90n
false
null
t3_16ia90n
/r/LocalLLaMA/comments/16ia90n/mark_zuckerbergs_remarks_at_ai_forum_meta/
false
false
https://b.thumbs.redditm…Plnl4_pa1iPk.jpg
41
{'enabled': False, 'images': [{'id': 'dywf8cQP_7wsiEnac1YYP1TvJmn_73MvakYvULV63yo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=108&crop=smart&auto=webp&s=8edca2623e4b03f1059af21ac64a1f5dea102048', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=216&crop=smart&auto=webp&s=dd4f85b42234006e4c554060434d8861397d3316', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=320&crop=smart&auto=webp&s=2651c8b8fe765f0f44a444f7a85f2fc11295a1e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=640&crop=smart&auto=webp&s=0d364f9245eb865e92115cf35249abce6e5b3fdd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=960&crop=smart&auto=webp&s=1213bd5e34ca23be02715e1c60cf5bbfd4343e96', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=1080&crop=smart&auto=webp&s=48cb2c67333731e81f71d2f5e748258d54717cf5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?auto=webp&s=0831b49817243646c7bc85efd3d15fbd27a361b6', 'width': 1200}, 'variants': {}}]}
Are there any AI powered notebooks for coding that I can install locally??
1
Probably a noobish question but are there any AI powered coding environments that would allow a user to load a coding model of choice and generate code in Python or JavaScript(or other languages) that can be tested in a separate safe virtual environment within the same window, then added to a project file or folder when complete. Maybe even go back and edit the code if needed and have the model explain and iterate on other open source project files as well. I'm thinking something that would almost combine ooba booga, vs code and juypter notebook into one but more simply and dedicated mainly for coding projects.
2023-09-14T05:16:52
https://www.reddit.com/r/LocalLLaMA/comments/16i9kb8/are_there_any_ai_powered_notebooks_for_coding/
AI_Trenches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i9kb8
false
null
t3_16i9kb8
/r/LocalLLaMA/comments/16i9kb8/are_there_any_ai_powered_notebooks_for_coding/
false
false
self
1
null
H2OGPT questions from a noob
1
[removed]
2023-09-14T03:57:58
https://www.reddit.com/r/LocalLLaMA/comments/16i846s/h2ogpt_questions_from_a_noob/
consig1iere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i846s
false
null
t3_16i846s
/r/LocalLLaMA/comments/16i846s/h2ogpt_questions_from_a_noob/
false
false
default
1
null
LLM best-suited for parsing documents
15
I want to use an LLM as a pseudo-parser that can extract information from my local files. Which LLM would be best suited for that? I'm guessing something instruct-based with large context size? I rarely see context size mentioned but assume larger number of parameters also correlates with larger context size? I'm guessing best approach would be to just pipe the contents of the file as a query concatenated with a question about it? Speed isn't critical, but I'd prefer a model that's not too large (under 34b, if possible).
2023-09-14T03:45:08
https://www.reddit.com/r/LocalLLaMA/comments/16i7v9j/llm_bestsuited_for_parsing_documents/
atsepkov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i7v9j
false
null
t3_16i7v9j
/r/LocalLLaMA/comments/16i7v9j/llm_bestsuited_for_parsing_documents/
false
false
self
15
null
Baichuan2's vocab has a single token for "Guided by Xi Jinping's thoughts of socialism with Chinese characteristics in the new era"
102
2023-09-14T03:02:02
https://twitter.com/suchenzang/status/1702126326369636631
Covid-Plannedemic_
twitter.com
1970-01-01T00:00:00
0
{}
16i70o8
false
{'oembed': {'author_name': 'Susan Zhang', 'author_url': 'https://twitter.com/suchenzang', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">&gt; open up the new ~125k Baichuan2 vocab<br>&gt; find a single token just for &quot;Guided by Xi Jinping&#39;s thoughts of socialism with Chinese characteristics in the new era&quot;<br>&gt; 😬<br>&gt; finds another token just for &quot;On our journey to achieve beautiful ideals and big goals&quot;<br>&gt; 🥹 <a href="https://t.co/iJzlk0E1zi">pic.twitter.com/iJzlk0E1zi</a></p>&mdash; Susan Zhang (@suchenzang) <a href="https://twitter.com/suchenzang/status/1702126326369636631?ref_src=twsrc%5Etfw">September 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/suchenzang/status/1702126326369636631', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_16i70o8
/r/LocalLLaMA/comments/16i70o8/baichuan2s_vocab_has_a_single_token_for_guided_by/
false
false
https://b.thumbs.redditm…l4NWufLmE-zs.jpg
102
{'enabled': False, 'images': [{'id': '_ScFAlutalFBr6aT3GFB5b3BmtsizQKlPjAPV7hbLXI', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=108&crop=smart&auto=webp&s=9fa081ef19df996ee26453553ddf3427a655f851', 'width': 108}, {'height': 226, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=216&crop=smart&auto=webp&s=156084bda060595609ee42c67d1aba7c2bb6702d', 'width': 216}, {'height': 336, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=320&crop=smart&auto=webp&s=98894ecab52f76261be4c50d2fbe28670f6a9a8e', 'width': 320}, {'height': 672, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=640&crop=smart&auto=webp&s=89e4f03e6e7a1a53d91cb9135612e63f2a5e6b8c', 'width': 640}], 'source': {'height': 672, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?auto=webp&s=18f408d8ae9eaa1729bd27570d4cf9f9a911e155', 'width': 640}, 'variants': {}}]}
College Project - LLM toolkit
1
[removed]
2023-09-14T02:28:12
https://www.reddit.com/r/LocalLLaMA/comments/16i6c82/college_project_llm_toolkit/
Hades8800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i6c82
false
null
t3_16i6c82
/r/LocalLLaMA/comments/16i6c82/college_project_llm_toolkit/
false
false
self
1
null
We should use exllama1 for GPTQ and exllama2 for exl2?
19
I've done some perplexity tests. Because GPTQ (act_order + group size 32) averages 4.625 bits per weight, it should be compared to exl2 at bpw = 4.625, and that's what I did:

[https://huggingface.co/TheBloke/Synthia-13B-GPTQ/tree/gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Synthia-13B-GPTQ/tree/gptq-4bit-32g-actorder_True)

[https://huggingface.co/latimar/Synthia-13B-exl2/tree/4_625-bpw-h6](https://huggingface.co/latimar/Synthia-13B-exl2/tree/4_625-bpw-h6)

Here are the results: https://preview.redd.it/xljcfguls4ob1.png?width=3215&format=png&auto=webp&s=72805bc7bf8c46c45399503cc0853d15cd59a163

The conclusion:

* Only use exllama1 for GPTQ models
* exllama2 + exl2 gives the best results overall

I'm just a bit sad that exllama2 gives worse results than exllama1 when compared on the exact same settings; if exllama2 were as precise as exllama1, that 4.98 perplexity would be even lower. But if those results hold for all models, it will make GPTQ obsolete, yeah, and that's thanks to the variable quantization, not really because of the new loader. [https://github.com/turboderp/exllamav2/issues/28](https://github.com/turboderp/exllamav2/issues/28)
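For context, a small sketch of where the 4.625 bits-per-weight figure comes from, assuming the usual GPTQ group layout (4-bit weights plus a 16-bit scale and a 4-bit zero point stored per group of 32 weights); this is background arithmetic, not taken from the post:

```python
# Rough bits-per-weight arithmetic for 4-bit GPTQ with group size 32, assuming a
# 16-bit scale and a 4-bit zero point per group of 32 weights.
weight_bits = 4
scale_bits = 16
zero_bits = 4
group_size = 32

bits_per_weight = weight_bits + (scale_bits + zero_bits) / group_size
print(bits_per_weight)  # 4.625
```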
2023-09-14T02:25:32
https://www.reddit.com/r/LocalLLaMA/comments/16i6a83/we_should_use_exllama1_for_gptq_and_exllama2_for/
Wonderful_Ad_5134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i6a83
false
null
t3_16i6a83
/r/LocalLLaMA/comments/16i6a83/we_should_use_exllama1_for_gptq_and_exllama2_for/
false
false
https://b.thumbs.redditm…5IVJr0C6krNs.jpg
19
{'enabled': False, 'images': [{'id': 'Hx5Q7form2K0L5cQAVFSzztjbiQnhGKmLf2nSyyEHk0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=108&crop=smart&auto=webp&s=f4cc13b38111e5bd25eb86b0a9ee93b539765acf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=216&crop=smart&auto=webp&s=d6991ed8cb4a96d4ca1291f5673ec87fd2198be8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=320&crop=smart&auto=webp&s=a197d3701ade6ac6f89163a740a84f1d19663a36', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=640&crop=smart&auto=webp&s=54986661a82164a61cd4c4e2748d440a0bf99283', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=960&crop=smart&auto=webp&s=1bbb615d724171e16151ea5df6e57939446fda4e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=1080&crop=smart&auto=webp&s=dd5a1ff98f227ca2b5893d44a2c10ad15862c54c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?auto=webp&s=a0164f21d7524b3c742b0175883df2bd8f8c6fd1', 'width': 1200}, 'variants': {}}]}
Getting an A6000 where to start?
18
I have an A6000 coming my way in a few days, currently am running 1080ti and 3060. What is the first thing you would do if you had access to an a6000?
2023-09-14T01:38:41
https://www.reddit.com/r/LocalLLaMA/comments/16i5azq/getting_an_a6000_where_to_start/
drnick316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i5azq
false
null
t3_16i5azq
/r/LocalLLaMA/comments/16i5azq/getting_an_a6000_where_to_start/
false
false
self
18
null
Absolutely best (and unexpected?) model for smut by a longshot
68
Just wanted to share, I feel this is almost like a secret, but: Falcon-180B is fantastic for smut! I know, I know, it's censored, but somehow this censoring is negligible. I think it's the stuff many have searched for since LLMs emerged ;) I understand that not a lot of people can run this model locally, but we have a demo: [https://huggingface.co/spaces/tiiuae/falcon-180b-demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo) The censoring works strangely: if you ask it to write smut directly it will refuse. But if you ask it to roleplay with you without giving many hints of what's to come, in the following messages Falcon allows you to request almost any kind of plot change, including smut and whatnot. And it also writes better than anything else I've seen.
2023-09-14T01:03:48
https://www.reddit.com/r/LocalLLaMA/comments/16i4k1d/absolutely_best_and_unexpected_model_for_smut_by/
uti24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i4k1d
false
null
t3_16i4k1d
/r/LocalLLaMA/comments/16i4k1d/absolutely_best_and_unexpected_model_for_smut_by/
false
false
self
68
{'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]}
Best Chess LMs?
1
Any local LLMs that are pretty good at chess? Preferably ones that don't make illegal moves all the time. So far I've only found GPT-4 to be the best; Bard, GPT-3.5 and raw LLaMA just play randomly after move 4.
2023-09-14T00:46:47
https://www.reddit.com/r/LocalLLaMA/comments/16i46t6/best_chess_lms/
Zelenskyobama2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i46t6
false
null
t3_16i46t6
/r/LocalLLaMA/comments/16i46t6/best_chess_lms/
false
false
self
1
null
Inf: Two 32k llama 70B models/adapters are available on huggingface
16
https://huggingface.co/models?sort=trending&search=70b+32k There is also a nice spread of context LoRAs/models by this user: https://huggingface.co/Yukang For those with 48 GB, this is good news: if you want, you can merge the adapters with your favorite 70B models and run with exllama. Since it is already possible for 70B to go up to 16k on exllamav1, we may look forward to a GQA implementation, potentially a quantized KV cache, or a more aggressive quantization strategy to reach 32k. As for CPU (llama.cpp), 64 GB of RAM would be enough for 4-bit models with an 8-bit KV cache. There have been other extended-context LoRAs before; I don't think [yarn](https://github.com/jquesnelle/yarn) has been implemented in heavily used frameworks yet, though.
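A minimal sketch of the adapter-merging step mentioned above, using peft's merge_and_unload; the adapter repository name is a placeholder, and the merged fp16 output would still need quantizing before it fits in 48 GB:

```python
# Sketch: merge a 32k-context LoRA adapter into a Llama-2-70B base model with peft.
# Repository ids below are placeholders, not the actual adapters from the post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-70b-hf"
adapter_id = "some-user/llama-2-70b-32k-lora"  # placeholder adapter repo

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16,
                                            device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()  # folds the LoRA weights into the base weights

model.save_pretrained("llama-2-70b-32k-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("llama-2-70b-32k-merged")
```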
2023-09-13T23:43:01
https://www.reddit.com/r/LocalLLaMA/comments/16i2rbp/inf_two_32k_llama_70b_modelsadapters_are/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i2rbp
false
null
t3_16i2rbp
/r/LocalLLaMA/comments/16i2rbp/inf_two_32k_llama_70b_modelsadapters_are/
false
false
self
16
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
GPT-3.5 is still better than fine tuned Llama 2 70B (Experiment using prompttools)
1
[removed]
2023-09-13T22:54:01
https://www.reddit.com/r/LocalLLaMA/comments/16i1lk2/gpt35_is_still_better_than_fine_tuned_llama_2_70b/
hegel-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i1lk2
false
null
t3_16i1lk2
/r/LocalLLaMA/comments/16i1lk2/gpt35_is_still_better_than_fine_tuned_llama_2_70b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CqXAtyeioxsvbRMaplhX6A9B-SD_Jy1jHFvnNGzFo0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=108&crop=smart&auto=webp&s=eadcd1acbf0c03649d608b7883fa685fa1f7d822', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=216&crop=smart&auto=webp&s=175d771fec7e3aaf1c820a192412f58d628d8c19', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=320&crop=smart&auto=webp&s=cda73648656ef63c9e93f0b013c1265f1e1277eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=640&crop=smart&auto=webp&s=8b70415e787d91748fc02998410f766ae5f33991', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=960&crop=smart&auto=webp&s=3d29e1daf845c236df32e0b1c01f26785190f66c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=1080&crop=smart&auto=webp&s=2e160c47083da5b823c1b204799c4c36be035152', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?auto=webp&s=28be4a804e1680faf0f0d6b6e334cae7cb79fbc2', 'width': 1200}, 'variants': {}}]}
I ran quantized Falcon-180B Q4 on a home/consumer computer and it is not good (maybe need help?)
15
So I have a totally normal (almost) home computer: i5-12400F, 128 GB DDR4, RTX 3060 Ti. I downloaded and ran falcon-180b-chat.Q4_K_M.gguf. The speed is just like I and some others predicted: 0.3 tokens/second. Speed-wise it is somewhat usable. But for some reason the model is incoherent: I asked it to write me something and it just repeated my question back to me and asked what I want to do with it. I asked it to roleplay with me and it wrote a bad and incoherent story, not at all like the demo works: [https://huggingface.co/spaces/tiiuae/falcon-180b-demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo) Maybe I am doing something wrong? I'm running it using oobabooga_windows\text-generation-webui, and I don't know if I need to set up something. I just loaded the model, set 5 layers to run on the GPU, and that's it. Do I need to set a template or change the loader or something? I expected better quality from Q4, at least 70% of the full model. https://preview.redd.it/8fss75men3ob1.png?width=979&format=png&auto=webp&s=845a3dcf3d2f34d1320b97abf9c932ade9d98dff
2023-09-13T22:35:20
https://www.reddit.com/r/LocalLLaMA/comments/16i15qc/i_ran_quantized_falcon180b_q4_on_a_homeconsumer/
uti24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16i15qc
false
null
t3_16i15qc
/r/LocalLLaMA/comments/16i15qc/i_ran_quantized_falcon180b_q4_on_a_homeconsumer/
false
false
https://a.thumbs.redditm…rFCLDKcESOj8.jpg
15
{'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]}
DeciLM-6B - new pretrained 4k context model with high throughput
42
2023-09-13T22:01:56
https://huggingface.co/Deci/DeciLM-6b
Aaaaaaaaaeeeee
huggingface.co
1970-01-01T00:00:00
0
{}
16i0aqx
false
null
t3_16i0aqx
/r/LocalLLaMA/comments/16i0aqx/decilm6b_new_pretrained_4k_context_model_with/
false
false
https://a.thumbs.redditm…t4rOyIyUQ7Z8.jpg
42
{'enabled': False, 'images': [{'id': 'PeUEHEDeiDpJiVx8uu6FTyh9hxae5iwe1tZAyeglz7g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=108&crop=smart&auto=webp&s=49cb41a341e6c1c3b161812ab717218d772e91cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=216&crop=smart&auto=webp&s=dbb8235d6fcfff853bd3e959b30a54adbde44d9b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=320&crop=smart&auto=webp&s=c4dd40764881a064c5926990f6ea41624d94a477', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=640&crop=smart&auto=webp&s=8886c341646248afb6294218caee5d1e90e1110b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=960&crop=smart&auto=webp&s=b1ea708e282c36dbafbd29145fc229ce8b40ad7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=1080&crop=smart&auto=webp&s=fe65411f2b702ff90c0a407beccd213c4f2f8186', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?auto=webp&s=1e84d939794bebdd80f1ec08f8e3decdbf2add19', 'width': 1200}, 'variants': {}}]}
M1 Mac running text-generation-webui using model Guanaco-13B-Uncensored.Q3_K_L.gguf
1
2023-09-13T22:00:09
https://i.redd.it/kutt0f9uh3ob1.png
Status-Anybody-4481
i.redd.it
1970-01-01T00:00:00
0
{}
16i08za
false
null
t3_16i08za
/r/LocalLLaMA/comments/16i08za/m1_mac_running_textgenerationwebui_using_model/
false
false
https://b.thumbs.redditm…Nog0DvsmLw-Y.jpg
1
{'enabled': True, 'images': [{'id': 'Yf8oKgiOKomTk-_UQYLqG6LppfN37uKgDhU-nZPi--U', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=108&crop=smart&auto=webp&s=dcf0c038a55abf1463e8fc84c29c3d7fee6379ad', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=216&crop=smart&auto=webp&s=1b93ce19462bc103cb97c0a1169a514ffeeadc3e', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=320&crop=smart&auto=webp&s=0ef726e4484b83ae710a529a65fddd2f692a7c5a', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=640&crop=smart&auto=webp&s=f224e65b104d5c3c0e6a296dddaf0861066c8922', 'width': 640}, {'height': 690, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=960&crop=smart&auto=webp&s=f6a9ce7faed4459b94cd5fd805f5954bc538a4b9', 'width': 960}, {'height': 776, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=1080&crop=smart&auto=webp&s=55adb6b695519953b84530e4e9a0770a769417aa', 'width': 1080}], 'source': {'height': 1406, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?auto=webp&s=d45291a5c37cc4d77949894aa741ab093e9ba912', 'width': 1956}, 'variants': {}}]}
HF transformers vs llama 2 example script performance
1
Hi all, I hope someone can point me in the right direction. I'm using the CodeLlama 13B model with the HuggingFace transformers library, but it is 2x slower than when I run the example conversation script in the codellama GitHub repository. Both have been updated to use the same prompt and 4096 max tokens.

I've tried many things to try and speed up the generation, including:

* Using BetterTransformer - [https://huggingface.co/docs/transformers/perf_infer_gpu_many](https://huggingface.co/docs/transformers/perf_infer_gpu_many)
* Loading the model in 4 bit

I'm running on a VM with 2x A100 40GB. A boiled-down version of the script is as follows:

    from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig, TextIteratorStreamer
    from threading import Thread
    import time
    import torch

    tokenizer = AutoTokenizer.from_pretrained("out-codellama-13b/")
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

    model = AutoModelForCausalLM.from_pretrained("out-codellama-13b/", device_map="auto", torch_dtype=torch.float16)
    model = model.to_bettertransformer()  # to_bettertransformer() returns the converted model

    user = "my prompt goes here"
    prompt = f"<s>[INST] {user.strip()} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to('cuda')

    streamer = TextIteratorStreamer(tokenizer)  # streams tokens as they are generated
    generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=4096, pad_token_id=tokenizer.eos_token_id, temperature=0.9, do_sample=True)

    t1 = time.perf_counter()
    output = model.generate(**generation_kwargs)
    t2 = time.perf_counter()

    output = output[0].to("cpu")
    print(tokenizer.decode(output))
    print(f"Generation took {t2-t1} seconds to execute.")

This takes approximately 132 seconds to execute. Then if I run example_instructions.py as follows:

    $ torchrun --nproc_per_node 2 example_instructions.py --ckpt_dir CodeLlama-13b-Instruct/ --tokenizer_path CodeLlama-13b-Instruct/tokenizer.model --max_seq_len 4096 --max_batch_size 8 --temperature 0.9

it takes approximately 60 seconds to execute. I checked that the auto device map has mapped everything to the GPU. I'm not sure what I'm missing.

*Note: I converted the CodeLlama 13B model to the HF format using ./src/transformers/models/llama/convert_llama_weights_to_hf.py*

Thanks in advance.
2023-09-13T21:39:23
https://www.reddit.com/r/LocalLLaMA/comments/16hzpv4/hf_transformers_vs_llama_2_example_script/
FormerAlternative707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hzpv4
false
null
t3_16hzpv4
/r/LocalLLaMA/comments/16hzpv4/hf_transformers_vs_llama_2_example_script/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
Training LoRa OoogaBooga Annoying.
1
[removed]
2023-09-13T19:58:01
https://www.reddit.com/r/LocalLLaMA/comments/16hx31n/training_lora_ooogabooga_annoying/
mohaziz999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hx31n
false
null
t3_16hx31n
/r/LocalLLaMA/comments/16hx31n/training_lora_ooogabooga_annoying/
false
false
https://b.thumbs.redditm…zPDajAheJR0o.jpg
1
null
I love the LocalLLaMA community
1
[removed]
2023-09-13T19:49:39
https://www.reddit.com/r/LocalLLaMA/comments/16hwvid/i_love_locqlllama_community/
ComplexIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hwvid
false
null
t3_16hwvid
/r/LocalLLaMA/comments/16hwvid/i_love_locqlllama_community/
false
false
self
1
null
How can I use LLMs to augment my Etsy clothes shopping experience?
1
Hi everyone, I have a big beefy RTX 3090 but only use it for art projects. I've only ever dabbled with LM Studio. Is this something I need to code myself or do we have browser companions already? Basically I want to send pages to it (e.g. the same way I middle-click a bunch of articles) and have it read the description of the item to filter for items my size. I've done preliminary testing with ChatGPT and it makes excellent decisions about what will or won't fit me based on measurements of some existing clothes that already fit me. Is this something I need to code myself? I don't really like the whole extension system in Firefox, too hard to get started hacking up some stuff, but in QuteBrowser I could make a great Etsy plugin; I just need something to extract the product description. I wonder if I could just give it a whole compressed HTML page and ask it to write some scraping code? Might be too big for the context length? Basically I dunno what the open-source equivalent is to Bing and ChatGPT plugins that lets them parse the web and become actual useful agents for me. Surely open-source software should be VASTLY ahead of these slow-moving designed-in-meetings tools? Kind of an aimless thread, just looking for some input. I could easily code it myself with scripts and LLM prompting, but I'm more wondering how far the ecosystem has come. We have all these models that we can run on our PC now and we're still barely doing anything with them lol....
2023-09-13T18:36:43
https://www.reddit.com/r/LocalLLaMA/comments/16huyjw/how_can_i_use_llms_to_augment_my_etsy_clothe/
o_snake-monster_o_o_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16huyjw
false
null
t3_16huyjw
/r/LocalLLaMA/comments/16huyjw/how_can_i_use_llms_to_augment_my_etsy_clothe/
false
false
self
1
null
Horde-Client: Easy to use Python Interface for KoboldAI
3
I came across KoboldAI recently and found it to be a really interesting project. Kudos to all those who are crowdsourcing their cloud (someone else's computer) to run inference for the community. I wanted to make things easier for developers (myself included) who want to build cool applications using these kinds of projects. So I am introducing [horde-client](https://github.com/rahuldshetty/horde-client), a Python-native client library for the KoboldAI service.

Currently it supports:

* Text-generation inference using LLM services running on public/custom KoboldAI clusters.
* Model listing/selection for running inference.

The roadmap includes:

* Image generation inferencing
* Caching options
* Async support

Do let me know what you guys think about it, whether it is helpful, or anything you think is a must-have as a developer.

**Links:** Source code: [https://github.com/rahuldshetty/horde-client](https://github.com/rahuldshetty/horde-client)
2023-09-13T18:14:29
https://www.reddit.com/r/LocalLLaMA/comments/16hudod/hordeclient_easy_to_use_python_interface_for/
AnonymousD3vil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hudod
false
null
t3_16hudod
/r/LocalLLaMA/comments/16hudod/hordeclient_easy_to_use_python_interface_for/
false
false
self
3
{'enabled': False, 'images': [{'id': 'MBsWquxYTl_J1f-VfOuDfVauY2TxpGBBTYIQFmnGB-k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=108&crop=smart&auto=webp&s=7f03d369e5268e7f42c4d8a69b103d5d367a3a21', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=216&crop=smart&auto=webp&s=0b95bcf4f28c77408fbce166930c4b75d6adc59c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=320&crop=smart&auto=webp&s=872e48737819217144949371a6b32a7635b07241', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=640&crop=smart&auto=webp&s=1c420999aec6d7dfb48437fe497cfb4b27e0508f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=960&crop=smart&auto=webp&s=90b6a20e7ee65b022b179371c4b598e2e61418b1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=1080&crop=smart&auto=webp&s=e5c112d0a9235c8949e9fa009a2c2e2cb08c014b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?auto=webp&s=2d3792cd51e9047b471d34e23633e9402d0ed016', 'width': 1200}, 'variants': {}}]}
Requesting help fine-tuning Phi1.5 and similar on domain reports with unsupervised learning
4
Hello Everyone, I need your help understanding the unsupervised (preferred)/supervised training approaches, and maybe recommendations for existing codebases to help get started. **Motivation**: The "Textbooks are all you need" approach seems fascinating, and a good place to start when considering fine-tuning on "textbook-like" unsupervised data specific to a topic. I have access to HPC for this research - and finally got approval to use it for this research. The end goals are: - (1) Understand the cost and effectiveness of fine-tuning the base Phi-1.5 on a domain-specific corpus of textbooks to build general intelligence for the given domain, and - (2) Understand the cost of incremental fine-tuning as new information (smaller reports) arrives, and the effectiveness of using the new information. - Compare the performance of fine-tuning both (1) and (2) against vectorized approaches offered via PrivateGPT, etc. Happy to share the fine-tuned models at different iterations for feedback. Since the research is domain agnostic for now, **I can also start with a dataset and training code that you can share - so the fine-tuned model is more beneficial.** Any help is appreciated!
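For reference, a minimal full fine-tuning sketch (no LoRA) for continued pretraining on a folder of plain-text "textbook" files with Hugging Face transformers; the hyperparameters and file paths are placeholders, and phi-1.5 needs trust_remote_code=True:

```python
# Minimal full fine-tuning sketch (no LoRA) for continued pretraining on
# plain-text files with Hugging Face transformers. Hyperparameters and paths
# are placeholders, not tuned values.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "microsoft/phi-1_5"
tok = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tok.pad_token = tok.eos_token  # phi-1.5 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

ds = load_dataset("text", data_files={"train": "reports/*.txt"})["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phi15-domain", per_device_train_batch_size=2,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-5, bf16=True, logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```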
2023-09-13T18:07:18
https://www.reddit.com/r/LocalLLaMA/comments/16hu6mf/requesting_help_finetuning_phi15_and_similar_on/
peace-of-me
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hu6mf
false
null
t3_16hu6mf
/r/LocalLLaMA/comments/16hu6mf/requesting_help_finetuning_phi15_and_similar_on/
false
false
self
4
null
I am trying to run a simple personalized chatbot.. suggestions?
1
[removed]
2023-09-13T17:57:15
https://www.reddit.com/r/LocalLLaMA/comments/16htwsm/i_am_trying_to_run_a_simple_personalized_chatbot/
Alone_Sock_5303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16htwsm
false
null
t3_16htwsm
/r/LocalLLaMA/comments/16htwsm/i_am_trying_to_run_a_simple_personalized_chatbot/
false
false
self
1
null
Llama 2 70B (130B+ when available ) production server specs ( Z790 Vs. ThreadRipper PRO )
21
Hi all, I'm trying to decide between two setups (screenshots below). Note: I am not looking for cloud solutions, I already have the GPUs and I want to be able to run 24/7 as a production server. Power cost is not a concern either, it's a business expense and power costs ~$0.1/kWh where I live, so not that bad. **What is the intent of the server**: Run 24/7 as a production server. I don't exactly know the concurrent load I should expect, but the LLM running on it should be able to serve a 4-bit 70B Llama 2 model to 5 concurrent users at a rate of 5 t/s each (rough estimate), so 25 t/s at least; aiming closer to 100 t/s would be ideal. The server will also run 10-15 additional Dockerized web servers which are not using the GPU, so a high CPU core count is important. **What I already have:** 3 x 3090s to be used in the server + 1 x 3090 in my work PC (for testing & dev). **What I need is a combo**: CPU, RAM & MoBo. *(Don't worry about PSU, cooler, etc.)* On the one hand, a Z690 has support for: PCIe **5.0 & DDR5** (both are great for the **high t/s** I'm looking for and the ability to offload larger models to extremely fast RAM with a minimal performance hit), paired with a 13900K processor for 24 cores / 32 threads - this is no slouch! **However,** this is a gaming MoBo with very limited PCIe lanes; it can support just 2 x 3090s at full PCIe x16 5.0 and NVMe at 5.0, but the 3rd card will be crippled or not working at all! It is also capped at **128GB of RAM and only dual-channel memory access**! (Which might not be the worst thing in the world, as I might later build a 2nd server with it and buy another 3090 - thus having 2 x servers, each with 48GB of VRAM... but lacking the ability to run still-larger models in vRAM that are already starting to show up... a.k.a. Alpaca 130B and future similarly large models...) For case: I will be using an open-air bench case with premium PCIe 5.0 x16 riser cables. **COST: ~$1800 + tax** **On the opposite end, we have a workstation build**, with a **Pro WS WRX80E-SAGE SE WIFI II** MoBo paired with a **5995WX 64-core 128-thread Threadripper PRO** setup, a monster with **7 x FULL x16 PCIe 4.0** slots. This beast can handle up to 2048GB using 8 memory channels!! So it is DDR4, but maybe with the dedicated memory channels it is comparable to DDR5 in speed, or not much slower... But the potential of 2048GB means very large models could be offloaded, but at what speed penalty? I mean, if the speed hit is so bad the LLM is not good enough for production usage, then I don't need so much RAM in the first place. The main concern here is the PCIe 4.0... I suspect this will hurt / bottleneck the build badly compared with going with PCIe 5.0 in terms of t/s, as the most important aspect to getting a higher t/s here is the speed at which the CPU and RAM can read/write to the vRAM - as the vRAM will always be the fastest of the 3. Am I correct? For case: I will be using an open-air bench case with premium PCIe 5.0 x16 riser cables for 1/2 of the GPUs; the other half will be on the board, in a 4-on-top and 4-on-board config (assuming I will eventually have 8 x 3090 cards for the server). **COST: ~$7000 + tax!!!** What are your thoughts and suggestions? **If you've read this far, thank you very much for reading and for your feedback**! https://preview.redd.it/03qam9zt82ob1.png?width=795&format=png&auto=webp&s=a20de4ad753442e7330d345209aecd58247ce650 https://preview.redd.it/zx3yzpgk32ob1.png?width=728&format=png&auto=webp&s=b4cb15dbec7e095eb3d31520540e56b265725575
2023-09-13T17:52:52
https://www.reddit.com/r/LocalLLaMA/comments/16htspx/llama_2_70b_130b_when_available_production_server/
BigFoxMedia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16htspx
false
null
t3_16htspx
/r/LocalLLaMA/comments/16htspx/llama_2_70b_130b_when_available_production_server/
false
false
https://b.thumbs.redditm…b2Gr-W47uWOQ.jpg
21
null
Why are all the other LLMs so inferior to GPT4?
2
Is anyone else frustrated by the fact that every LLM out there performs substantially worse than GPT-4? And this even includes specialised LLMs. Surely with all the limitations and NSFW barriers placed on GPT-4, you'd think there would be some catch-up in the market by now. Anyone else feel hopeless? /end rant.
2023-09-13T17:33:50
https://www.reddit.com/r/LocalLLaMA/comments/16htb5m/why_are_all_the_other_llms_so_inferior_to_gpt4/
Upsidedownsquare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16htb5m
false
null
t3_16htb5m
/r/LocalLLaMA/comments/16htb5m/why_are_all_the_other_llms_so_inferior_to_gpt4/
false
false
self
2
null
Merging models
3
Pretty new and late to the LLM game, but really interested in getting deeper into it. I found out there is a Python implementation to merge model bin files called 'Pytorch Merge' ( https://github.com/donaldafeith/Pytorch Merge). I haven't tried it yet, but wanted to know if you can tell me more: - Am I able to merge a big (70B) Llama 2 model locally with my 4090? - Am I able to merge different models (like a 70B with a 33B one)? Thanks so much for your help.
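For same-architecture checkpoints, merging usually means averaging weights tensor by tensor; a generic sketch (not necessarily the linked tool's exact method) follows. Note this runs in system RAM rather than the 4090's VRAM, so a 70B fp16 checkpoint needs on the order of 140GB of RAM or shard-by-shard processing, and models of different sizes (70B vs 33B) have mismatched tensor shapes, so they cannot be merged this way:

```python
# Generic sketch of merging two checkpoints of the *same* architecture by
# averaging their weights (not necessarily the linked tool's exact method).
# Models with different sizes/shapes (e.g. 70B vs 33B) cannot be merged this way.
import torch

a = torch.load("model_a.bin", map_location="cpu")
b = torch.load("model_b.bin", map_location="cpu")

merged = {}
for name, tensor_a in a.items():
    tensor_b = b[name]
    assert tensor_a.shape == tensor_b.shape, f"shape mismatch for {name}"
    alpha = 0.5  # 0.5 is a plain mean of the two checkpoints
    merged[name] = (alpha * tensor_a.float() + (1 - alpha) * tensor_b.float()).to(tensor_a.dtype)

torch.save(merged, "model_merged.bin")
```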
2023-09-13T17:26:41
https://www.reddit.com/r/LocalLLaMA/comments/16ht4hg/merging_models/
lebrandmanager
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ht4hg
false
null
t3_16ht4hg
/r/LocalLLaMA/comments/16ht4hg/merging_models/
false
false
self
3
null
Any agents for local llama instances yet?
4
I’ve been trying to find out if any of these agent projects are building their software to work with locally run AI models. Thus far it seems as though open-interpreter does, but I have yet to see anything like MetaGPT, gpt-engineer, or Aider support local models. Unless I am missing something, this would be an awesome set of tooling for those running their own local instances.
2023-09-13T17:25:29
https://www.reddit.com/r/LocalLLaMA/comments/16ht3dl/any_agents_for_local_llama_instances_yet/
dewplex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ht3dl
false
null
t3_16ht3dl
/r/LocalLLaMA/comments/16ht3dl/any_agents_for_local_llama_instances_yet/
false
false
self
4
null
Pls Share LLM training code
3
Hey, I'm learning how to train LLMs with the transformers library but it's proving to be a bit tricky. If any of you have trained a model with this library before (not LoRA or qLoRA), please share the notebook/file so it can serve as a reference for beginners like me. TIA. PS: I found a bunch of notebooks online that use PEFT to do LoRA, but didn't have any luck with notebooks that do full model training.
2023-09-13T16:57:16
https://www.reddit.com/r/LocalLLaMA/comments/16hscgk/pls_share_llm_training_code/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hscgk
false
null
t3_16hscgk
/r/LocalLLaMA/comments/16hscgk/pls_share_llm_training_code/
false
false
self
3
null
Our AI Companionship app, using llama is live, thank you for beta testing and for your support, and join if you would like to help more
1
Hello fellow Redditors, &#x200B; The moment we've all been waiting for is finally here! After countless cups of coffee, sleepless nights, and invaluable feedback from our beta testers (huge thanks to every single one of you!), we're thrilled to announce the official launch of \*\*Endless AI\*\*. &#x200B; For those just tuning in, Endless AI offers a unique experience. Engage in deep, insightful, or even quirky conversations with our \*UNCENSORED\* AI companions. Whether you're looking for a virtual girlfriend or just a digital buddy to chat with, we've got you covered. &#x200B; 🌟 \*\*Why give Endless AI a spin?\*\* \- Dive into captivating and hot conversations without any human strings attached. \- Explore our sassy and intriguing AI personalities and find your perfect match. \- Experience the thrill of digital interaction like never before. The perfect girlfriend at YOUR pace. &#x200B; 🔜 \*\*What's on the horizon?\*\* We're never ones to settle. Our team is hard at work on introducing exciting new features, including the ability to send images within the app. Stay tuned for more updates! (coming in the next two weeks) &#x200B; 📲 \*\*Get Started!\*\* 1. \*\*For iOS Users:\*\* Head to the AppStore and search for "Endless AI" or use this link [https://apps.apple.com/us/app/endless-ai/id6451383546](https://apps.apple.com/us/app/endless-ai/id6451383546) 2. \*\*For Android Enthusiasts:\*\* Pop over to the Google Play Store, look up "Endless AI", or simply use this link, [https://play.google.com/store/apps/details?id=com.aigf.app](https://play.google.com/store/apps/details?id=com.aigf.app) &#x200B; Dive in, explore, and let the digital magic begin! &#x200B; We're all ears for any feedback, praises, critiques, or ideas you have. Your thoughts drive our evolution, so don't be shy to share. &#x200B; Cheers, The Endless AI Dev Team &#x200B; PS: Remember, the AI personalities have a sense of humor. If they claim they sent a picture, give a little chuckle and enjoy the whimsy of it all! 😉📷 &#x200B;
2023-09-13T16:46:36
https://www.reddit.com/r/LocalLLaMA/comments/16hs21i/our_ai_companionship_app_using_llama_is_live/
agukrisz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hs21i
false
null
t3_16hs21i
/r/LocalLLaMA/comments/16hs21i/our_ai_companionship_app_using_llama_is_live/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ftrbq4wwKCszjtlhCEq0Ce6DbiJ_Fh2WfHmXq_mS0E4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=108&crop=smart&auto=webp&s=8106fdbb026813a8e2c2efe518c716ab1d4b4867', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=216&crop=smart&auto=webp&s=c077c400888c6fbfb45098454ea8952cdca522fe', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=320&crop=smart&auto=webp&s=5a5bbc6be696655b8012f124deb37cae1506bee8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=640&crop=smart&auto=webp&s=ff8cf7beafce78bdd7c6282f5b93c70015ae0709', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=960&crop=smart&auto=webp&s=4033667add7753c3e64516e9b76bc17ca83cee7f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=1080&crop=smart&auto=webp&s=df575af098b7b721b8837f8909d8f2955e7105e0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?auto=webp&s=8488d4a48175c32873a011a30d6c76a663f3f571', 'width': 1200}, 'variants': {}}]}
Textgenwebui crashing with 34b model
1
[removed]
2023-09-13T16:46:31
https://www.reddit.com/r/LocalLLaMA/comments/16hs1y3/textgenwebui_crashing_with_34b_model/
Imagummybear23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hs1y3
false
null
t3_16hs1y3
/r/LocalLLaMA/comments/16hs1y3/textgenwebui_crashing_with_34b_model/
false
false
self
1
null
How to run LLMs on Android/IOS?
6
How would I make an app that would let you have an "on-edge chatbot"? Also, does anyone know how I could run the phi-1.5 model on a phone?
2023-09-13T16:42:11
https://www.reddit.com/r/LocalLLaMA/comments/16hrxvn/how_to_run_llms_on_androidios/
LiquidGunay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hrxvn
false
null
t3_16hrxvn
/r/LocalLLaMA/comments/16hrxvn/how_to_run_llms_on_androidios/
false
false
self
6
null
Surprising that LLama-2 is better than chatGPT especially for queries that require recent knowledge
29
I've been using LLama-2 from [https://huggingface.co/chat/](https://huggingface.co/chat/) What has been your experience?
2023-09-13T16:25:28
https://www.reddit.com/r/LocalLLaMA/comments/16hriip/surprising_that_llama2_is_better_than_chatgpt/
danielcar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hriip
false
null
t3_16hriip
/r/LocalLLaMA/comments/16hriip/surprising_that_llama2_is_better_than_chatgpt/
false
false
self
29
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
Looking to dive in!
1
So, just like everyone else, every day I try to use ChatGPT, I get frustrated with how bad it has become and how it tries so hard to accuse me of trying to do something nefarious with the powerful knowledge of "writing in a sterile, delocalized manner." So anyway, LLaMa has been on my radar for quite a while now, and running things locally + trying cool fine-tuned models seems precisely what I'm looking for. But before I try all of that, I would like to ask some questions. Like, how much of the old ChatGPT feel can I get with LLaMa? Will it just answer me what I asked? Will it hallucinate to the high hells and back? Is it useful other than a fun gimmick? Can I integrate it into my daily routine as I did with GPT once? Or am I missing the point? Should I look elsewhere? Should I use it but for things that I don't seem to have in mind right now? And so on. Thank you for all and any answers.
2023-09-13T16:25:25
https://www.reddit.com/r/LocalLLaMA/comments/16hrih9/looking_to_dive_in/
OGGOGOgomes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hrih9
false
null
t3_16hrih9
/r/LocalLLaMA/comments/16hrih9/looking_to_dive_in/
false
false
self
1
null
Why do you use oobabooga?
56
Which is the main reason you use oobabooga? Testing models, deploying models, learning about open-source models, etc.
2023-09-13T15:37:36
https://www.reddit.com/r/LocalLLaMA/comments/16hq9mj/why_do_you_use_oobabooga/
HorrorNo8851
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hq9mj
false
null
t3_16hq9mj
/r/LocalLLaMA/comments/16hq9mj/why_do_you_use_oobabooga/
false
false
self
56
null
Help training with transformers (half rant)
4
I'm trying to train phi-1.5 on Wizard Evol Instruct data and for some reason I keep running into fucking errors xD idk why they had to make it so hard. I want to fully train the model (not LoRA or qLoRA) and I appreciate any help I can get. Please let me know if you need any more information. Also do let me know if there are any other libraries that I can use. TIA. Notebook link - https://www.kaggle.com/code/rohitrajesh/phi-1-5-training
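In case it helps others hitting the same wall, a rough sketch of the data-prep side, assuming the dataset has instruction/output fields (adjust to yours) and an arbitrary prompt template; the phi-specific gotchas are trust_remote_code=True and the missing pad token:

```python
# Sketch of preparing Evol-Instruct style records for full causal-LM fine-tuning
# of phi-1.5. The prompt template is an arbitrary choice, and the field names
# ("instruction"/"output") are assumptions -- check your dataset.
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
tok.pad_token = tok.eos_token  # phi-1.5 ships without a pad token

def to_text(example):
    return {"text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Response:\n{example['output']}{tok.eos_token}"}

ds = load_dataset("json", data_files="evol_instruct.json")["train"].map(to_text)
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
            remove_columns=ds.column_names)
# `ds` can now be passed as train_dataset to a Trainer with
# DataCollatorForLanguageModeling(tok, mlm=False), as in a standard full fine-tune.
```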
2023-09-13T15:11:41
https://www.reddit.com/r/LocalLLaMA/comments/16hplv4/help_training_with_transformers_half_rant/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hplv4
false
null
t3_16hplv4
/r/LocalLLaMA/comments/16hplv4/help_training_with_transformers_half_rant/
false
false
self
4
null
Release BlindLlama Alpha: Zero-Trust AI APIs for Llama 2 70b Integration
2
BlindLlama enables you to tap into the power of the latest open-source Large Language Models (LLMs) without local hosting, backed by our hardened, verifiable environments. **Key points:** 🔒 **Data Privacy**: While Mithril Security manages the AI infrastructure, our zero-trust, end-to-end protections ensure your data remains shielded from both external and potential internal threats.We built it with Trusted Platform Modules (TPMs) as an alternative to Confidential Computing, gaining the advantage of GPU compatibility in the process. For an in-depth exploration of this technology, [**refer to the following link.**](https://blindllama.mithrilsecurity.io/en/latest/docs/concepts/TPMs/) 🛠️ **Alpha Caveats**: This free platform is currently in its alpha phase. Our target is to finalize and deploy all security features by the end of October. We will then be able to guarantee the secure handling of confidential data. 🗺️ **Security Roadmap**: Our target is a comprehensive security audit in Q4 2023 to confirm our AI's Zero-trust properties. Curious about our journey? [**BlindLlama Roadmap**](https://mithril-security.notion.site/BlindLlama-roadmap-d55883a04be446e49e01ee884c203c26?pk_campaign=09-11-2023redditLaunch_Blindlama&pk_source=Rd&pk_medium=SM). 📚 **Quick Tour**: Want to leverage LLMs without the setup headaches? [**Quick Start Guide**](https://blindllama.mithrilsecurity.io/en/latest/docs/getting-started/quick-tour/?pk_campaign=09-11-2023redditLaunch_Blindlama&pk_source=Rd&pk_medium=SM). Feedback, questions, or want to chat? Hit me up!
2023-09-13T15:06:56
https://www.reddit.com/r/LocalLLaMA/comments/16hphhm/release_blindllama_alpha_zerotrust_ai_apis_for/
Separate-Still3770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hphhm
false
null
t3_16hphhm
/r/LocalLLaMA/comments/16hphhm/release_blindllama_alpha_zerotrust_ai_apis_for/
false
false
self
2
null
Is there any technical reason why the older Falcon models don’t have GGUF implementations?
7
I wasn’t sure if there was a technical reason why. I saw the 180B parameter Falcon model has a GGUF version.
2023-09-13T14:50:14
https://www.reddit.com/r/LocalLLaMA/comments/16hp1ig/is_there_any_technical_reason_why_the_older/
ArguingEnginerd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hp1ig
false
null
t3_16hp1ig
/r/LocalLLaMA/comments/16hp1ig/is_there_any_technical_reason_why_the_older/
false
false
self
7
null
How use VLLM with GPTQ or AWQ models?
1
[removed]
2023-09-13T14:26:13
https://www.reddit.com/r/LocalLLaMA/comments/16hof7y/how_use_vllm_with_gptq_or_awq_models/
InterestingBasil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hof7y
false
null
t3_16hof7y
/r/LocalLLaMA/comments/16hof7y/how_use_vllm_with_gptq_or_awq_models/
false
false
self
1
null
Can someone post example of running GPTQ + LorA?
3
Hey, I'm struggling to run any LoRA with GPTQ (i.e. exllama). I get either an assertion error, an illegal memory access, or some other issue. Can someone help me please? I think it's something with the combination of CUDA and drivers as well, but I've tested the latest and 530 + 12.1. Thanks!
2023-09-13T14:13:17
https://www.reddit.com/r/LocalLLaMA/comments/16ho46e/can_someone_post_example_of_running_gptq_lora/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ho46e
false
null
t3_16ho46e
/r/LocalLLaMA/comments/16ho46e/can_someone_post_example_of_running_gptq_lora/
false
false
self
3
null
A Recipe for Textbooks Are All You Need
81
Following the release of [phi-1_5](https://huggingface.co/microsoft/phi-1_5) I was super skeptical of the benchmarks, as results this unusually good (for such a small model) suggest the benchmark questions were probably leaked into the training. I plan on testing this myself this week but just thought I'd share my interpreted recipe from [Textbooks Are All You Need](https://arxiv.org/abs/2309.05463). Boiled down to 9 steps: 1. Use GPT-3.5 to generate a large set of synthetic textbooks on the topics you want the LLM to learn. Make sure to seed the textbook generation with keywords or other techniques to ensure diversity. 2. Take a large dataset like [the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) and use GPT-4 or a capable local model to label a small subset of this dataset for educational value. 3. Train a random forest classifier on the GPT-4 labeled data to predict educational value. 4. Use the classifier to filter the full unlabeled dataset, keeping only the top 20% of examples rated highest for educational value. 5. Combine the filtered dataset with the synthetic textbooks to create the final "textbooks" training set. 6. Pretrain a small LLM (e.g. 1.3B) on this filtered dataset. 7. Use GPT-3.5 again to generate a set of 1M coding exercises and solutions. 8. Finetune the pretrained LLM on the exercises dataset. 9. Evaluate the finetuned LLM on benchmarks like HumanEval to show much higher performance than LLMs trained on raw internet data. I seriously think that even though the paper focused on training a model from scratch using a synthetic textbook, the same could be applied to existing pretrained models. One could simply run their data through this pipeline (recipe) and create their own curated synthetic textbook which could be used to further fine-tune their models for significant improvements.
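A rough sketch of steps 2-4 (the educational-value filter), using TF-IDF features and scikit-learn as a stand-in for whatever featurization the paper actually used:

```python
# Sketch of steps 2-4: fit a random forest on a small GPT-4-labeled subset and
# keep the top ~20% of the unlabeled pool by predicted educational value.
# TF-IDF features are a stand-in; the paper's actual featurization may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

labeled_texts = [...]    # small subset scored by GPT-4
labels = [...]           # 1 = high educational value, 0 = low
unlabeled_texts = [...]  # the full raw corpus

vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_labeled = vec.fit_transform(labeled_texts)

clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
clf.fit(X_labeled, labels)

scores = clf.predict_proba(vec.transform(unlabeled_texts))[:, 1]
cutoff = np.quantile(scores, 0.80)  # keep the top 20%
filtered = [t for t, s in zip(unlabeled_texts, scores) if s >= cutoff]
```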
2023-09-13T13:53:18
https://www.reddit.com/r/LocalLLaMA/comments/16hnn6b/a_recipe_for_textbooks_are_all_you_need/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hnn6b
false
null
t3_16hnn6b
/r/LocalLLaMA/comments/16hnn6b/a_recipe_for_textbooks_are_all_you_need/
false
false
self
81
{'enabled': False, 'images': [{'id': 'YgYMSWcEzxWUi6mX7twUuVO2en8LS6opR-5RROlrc3k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=108&crop=smart&auto=webp&s=33e33e5c9c66c0f9492c940802598ed310238c67', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=216&crop=smart&auto=webp&s=5b42faee9e666282e5160562b827189f339bd849', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=320&crop=smart&auto=webp&s=3f34526c3376fc9b5a415ffd352f8097ec02fe35', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=640&crop=smart&auto=webp&s=32501a999f3a773bc6c4e846484110b8638c1a6d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=960&crop=smart&auto=webp&s=2d40e2d5bf10797a51d935e1b30b4064597cbfd4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=1080&crop=smart&auto=webp&s=236b1a9ec5f00eedf1b2d8a0e1be9f5f5923cbfe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?auto=webp&s=e223e4bcf164098708f2e6c07381c96dd412c29b', 'width': 1200}, 'variants': {}}]}
Is there a model fine-tuned for writing blog posts/articles?
1
thank you!
2023-09-13T13:53:01
https://www.reddit.com/r/LocalLLaMA/comments/16hnmws/is_there_a_model_finetuned_for_writing_blog/
JackRumford
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hnmws
false
null
t3_16hnmws
/r/LocalLLaMA/comments/16hnmws/is_there_a_model_finetuned_for_writing_blog/
false
false
self
1
null
Got a 3090 now
1
[removed]
2023-09-13T13:14:45
https://www.reddit.com/r/LocalLLaMA/comments/16hmqcv/got_a_3090_now/
Imagummybear23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hmqcv
false
null
t3_16hmqcv
/r/LocalLLaMA/comments/16hmqcv/got_a_3090_now/
false
false
self
1
null
what is the best thing i could run on a server?
0
Specs are: 2x Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz - 18 cores each (36 physical cores total, 72 logical threads with Hyper-Threading), 384GB RAM, no GPU. So what model could I run on that? What inference time could I expect?
2023-09-13T13:11:46
https://www.reddit.com/r/LocalLLaMA/comments/16hmnxa/what_is_the_best_thing_i_could_run_on_a_server/
overlydelicioustea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hmnxa
false
null
t3_16hmnxa
/r/LocalLLaMA/comments/16hmnxa/what_is_the_best_thing_i_could_run_on_a_server/
false
false
self
0
null
Having troubles with Oobabooga API
0
I have a python script that interacts with Oobabooga in api mode, but if I call the api more than once in my method, it only returns the first response and generates nothing for the second one. I've tried time.sleep() but it doesn't work. any suggestions?
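For reference, two independent back-to-back calls against the blocking API (started with --api) look roughly like this; the endpoint and payload shape below reflect the legacy text-generation-webui API extension of this era, so adjust if your build differs:

```python
# Two back-to-back calls against text-generation-webui's blocking API.
# The /api/v1/generate endpoint and payload shape are from the legacy API
# extension of this era -- adjust if your build exposes a different interface.
import requests

URL = "http://127.0.0.1:5000/api/v1/generate"

def generate(prompt: str) -> str:
    payload = {"prompt": prompt, "max_new_tokens": 200, "temperature": 0.7}
    r = requests.post(URL, json=payload, timeout=300)
    r.raise_for_status()
    return r.json()["results"][0]["text"]

print(generate("Write a haiku about llamas."))
print(generate("Now write one about alpacas."))  # a fresh payload for each call
```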
2023-09-13T13:05:42
https://www.reddit.com/r/LocalLLaMA/comments/16hmj87/having_troubles_with_oobabooga_api/
jaxprograms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hmj87
false
null
t3_16hmj87
/r/LocalLLaMA/comments/16hmj87/having_troubles_with_oobabooga_api/
false
false
self
0
null
Looking for a local LLM that can be run on CPU that is good at text-to-text summarization
2
I deal with qualitative survey data frequently at my job. I am working on getting approved for a GPU, but it's going to take a few months to get approval. Generally, each month, I have a new column of about 5,000 cells of text data that range from 1 character to about 400 words. Historically, I have done topic modeling, thematic analyses, and sentiment analyses on the data. Stakeholders always seem to ask unique questions about response data, like, "How many people talked about XYZ?" or "What are some comments related to QRS?", or "What are people saying about ABC?" I think it would be interesting to be able to have an LLM read the Excel file with the open-text data, and then I can pose questions to the bot based on the open-text. I am not looking for the most robust model. I would love to use claude or GPT4, but there are 'privacy' concerns for this, so it has to be local.
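One workable pattern is a per-response yes/no classification loop rather than one giant prompt; a CPU-only sketch with pandas and llama-cpp-python, where the column name, model file, and topic string are placeholders:

```python
# Sketch: load the survey column with pandas and ask a CPU-run GGUF model a
# yes/no question per response, then count. Column name, model file, and the
# topic string are placeholders.
import pandas as pd
from llama_cpp import Llama

df = pd.read_excel("survey.xlsx")
responses = df["open_text"].dropna().astype(str).tolist()

llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048, n_threads=8)

def mentions(topic: str, text: str) -> bool:
    prompt = (f"Survey response: \"{text[:1500]}\"\n"
              f"Does this response talk about {topic}? Answer YES or NO:")
    out = llm(prompt, max_tokens=3)["choices"][0]["text"].strip().upper()
    return out.startswith("Y")

count = sum(mentions("pricing", r) for r in responses)
print(f"{count} of {len(responses)} responses mention pricing")
```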
2023-09-13T12:58:07
https://www.reddit.com/r/LocalLLaMA/comments/16hmdbo/looking_for_a_local_llm_that_can_be_run_on_cpu/
frescoj10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hmdbo
false
null
t3_16hmdbo
/r/LocalLLaMA/comments/16hmdbo/looking_for_a_local_llm_that_can_be_run_on_cpu/
false
false
self
2
null
The Outputs in exllama2 are really different compared to exllama1
18
I noticed the outputs were quite different in exllama2 and they felt... worse somehow, as if we've lost precision going from exllama1 to exllama2. I was about to open up an issue but I found this; it looks like I wasn't the only one noticing it [https://github.com/turboderp/exllamav2/issues/14](https://github.com/turboderp/exllamav2/issues/14)
2023-09-13T12:36:55
https://www.reddit.com/r/LocalLLaMA/comments/16hlwyt/the_outputs_in_exllama2_are_really_different/
TheYuriLover25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hlwyt
false
null
t3_16hlwyt
/r/LocalLLaMA/comments/16hlwyt/the_outputs_in_exllama2_are_really_different/
false
false
self
18
{'enabled': False, 'images': [{'id': 'A4w8313ZVn8QItQzCiGPs2upXF8dPcxtZ6Dwyrru7tM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?width=108&crop=smart&auto=webp&s=d214d3c4eaf1da0817f2199282556473b9d40dc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?width=216&crop=smart&auto=webp&s=9c43b6bba4435225ef38a2688a3a9c9077e64641', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?width=320&crop=smart&auto=webp&s=952fdd801c8dbbc29db105f9e67a6057f253e337', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?width=640&crop=smart&auto=webp&s=0eef3606757ada94541fd1b68429ee5821732bca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?width=960&crop=smart&auto=webp&s=7d43875146132285b6c46c990df8c5fecc3eab19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?width=1080&crop=smart&auto=webp&s=54b05f7d8b0376d48654d4dfe911648784266ad7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QndSqZGIKumhtgSFfpYreQ_FbNsjHz5oh7cG2IvsWxY.jpg?auto=webp&s=2338c6108ffc60149a6787bfe14f25be301bf29e', 'width': 1200}, 'variants': {}}]}
Koboldcpp-ROCm port released for Windows
39
Haven't tried it yet, but i wanted to let people know: [https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/Windows-v1.43-ROCm](https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/Windows-v1.43-ROCm)
2023-09-13T12:02:30
https://www.reddit.com/r/LocalLLaMA/comments/16hl6x8/koboldcpprocm_port_released_for_windows/
PlanVamp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hl6x8
false
null
t3_16hl6x8
/r/LocalLLaMA/comments/16hl6x8/koboldcpprocm_port_released_for_windows/
false
false
self
39
{'enabled': False, 'images': [{'id': 'zFFklGsUArLxeq_NlMwJ5Dqi_2ouGuHCvWWq59Ekn7U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?width=108&crop=smart&auto=webp&s=27614bedf0ca6f7a63a4c2c6d1fcba151d0847ff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?width=216&crop=smart&auto=webp&s=854c40c4ae6b7e657ddc6303948c3e16eedeb01f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?width=320&crop=smart&auto=webp&s=efd34373eba3ad21d5acb9e191d99396f87ac80a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?width=640&crop=smart&auto=webp&s=7c8fab43b65fcefcf9fbd4897ca0ba408489c307', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?width=960&crop=smart&auto=webp&s=500483c79a0e7f2c2f0984961002b7e7418539f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?width=1080&crop=smart&auto=webp&s=a5d67929bce7dd8abbb59f65180e1c96d70fcd26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mVPoEZMlUlvI4lXlaf6MJmq9s_b__2Hver7IEuR_Cas.jpg?auto=webp&s=3f94ceec1a6ac36e148b8af42221ca2e07ec68a5', 'width': 1200}, 'variants': {}}]}
Best OpenAI -like API for local models?
2
Say I have Python code that uses the OpenAI API and works with GPT-4 etc. Now I want to "seamlessly" switch to using a locally running Llama 2 (or other) model. I know there are various libraries out there that provide OpenAI-like APIs for local models, but wanted to find out here which one people consider to have the best dev-ex. The common problem I am running into with the libs I tried is that there are issues dealing with Llama 2's prompt formatting ([INST] <<SYS>> etc). Tried these great libs and ran into issues: * llama-cpp-python * LocalAI
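Whichever wrapper you settle on, the [INST]/<<SYS>> problem usually comes down to building the prompt string yourself; a minimal single-turn helper for the Llama 2 chat format (some backends add the BOS token themselves, in which case drop the leading <s>):

```python
# Minimal helper for Llama 2 chat's single-turn prompt format. Multi-turn
# conversations interleave additional [INST] ... [/INST] blocks; check the
# model card if you need that. Some backends add <s> (BOS) automatically.
def llama2_prompt(user_msg: str, system_msg: str = "You are a helpful assistant.") -> str:
    return (f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
            f"{user_msg} [/INST]")

print(llama2_prompt("Summarize the plot of Hamlet in two sentences."))
```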
2023-09-13T11:21:39
https://www.reddit.com/r/LocalLLaMA/comments/16hkdtb/best_openai_like_api_for_local_models/
SatoshiNotMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hkdtb
false
null
t3_16hkdtb
/r/LocalLLaMA/comments/16hkdtb/best_openai_like_api_for_local_models/
false
false
self
2
null
Google Research releases new 10.7B multilingual machine translation model competitive with Meta's NLLB 54B translation model
142
The paper came out last weekend but the links to the models were recently added. They've released 3B, 7.2B, and 10.7B MT models. Checkpoints: [https://github.com/google-research/google-research/tree/master/madlad\_400](https://github.com/google-research/google-research/tree/master/madlad_400) Paper: [https://arxiv.org/abs/2309.04662](https://arxiv.org/abs/2309.04662) Abstract: >We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community. Relevant graphs and excerpts: >We train models of various sizes: a 3B, 32-layer parameter model, a 7.2B 48-layer parameter model and a 10.7B 32-layer parameter model. We share all parameters of the model across language pairs, and use a Sentence Piece Model with 256k tokens shared on both the encoder and decoder side. Each input sentence has a <2xx> token prepended to the source sentence to indicate the target language. We use both supervised parallel data with a machine translation objective and the monolingual MADLAD-400 dataset with a MASS-style objective to train this model. > >We find that both the 7.2B parameter model and the 10B parameter model is competitive with the significantly larger NLLB-54B model on WMT. Table 4: Evaluation scores on WMT (depicted as <bleu> / <chrf>) for the MT models compared against NLLB-54B. ||**NLLB**|**MT-3B**|**MT-7.2B**|**MT-10.7B**| |:-|:-|:-|:-|:-| |xx2en|34.2 / 60.4|33.4 / 60.0|34.9 / 60.6|**34.6 / 60.8**| |en2xx|**31.1 / 58.0**|28.2 / 55.4|29.3 / 56.2|29.0 / 56.2| |Average|**32.7 / 59.2**|30.8 / 57.7|32.1 / 58.4|31.8 / 58.5| Table 5: Evaluation scores on Flores-200 (depicted as <bleu> / <chrf>) for the MT models compared against NLLB-54B. All metrics are computed with the sacrebleu reference implementation. &#x200B; ||**NLLB**|**MT-3B**|**MT-7.2B**|**MT-10.7B**| |:-|:-|:-|:-|:-| |xx2en|**35.5 / 59.6**|29.7 / 54.4|30.9 / 55.4|31.9 / 56.4| |en2xx|**20.7 / 50.1** |17.3 / 44.1|17.8 / 44.7|18.6 / 45.7| |Mean|**28.2 / 54.9**|23.5 / 49.2|24.4 / 50.0 |25.3 / 51.1| |xx2yy|**13.7 / 40.5**|8.8 / 31.2|8.4 / 30.9|10.1 / 34.0| &#x200B;
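The official release is T5X checkpoints, so transformers can't load them directly; if or when a converted seq2seq checkpoint is available, usage would presumably look roughly like this, with the <2xx> target-language tag from the paper prepended (the checkpoint path below is hypothetical):

```python
# Rough sketch of using a *converted* MADLAD-400 MT checkpoint with transformers.
# The official release is T5X checkpoints, so the checkpoint path below is
# hypothetical; the <2xx> target-language tag is taken from the paper.
from transformers import AutoTokenizer, T5ForConditionalGeneration

ckpt = "path/to/converted-madlad400-3b-mt"  # hypothetical local conversion
tok = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt)

text = "<2de> How are you doing today?"  # translate into German
ids = tok(text, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```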
2023-09-13T10:47:22
https://www.reddit.com/r/LocalLLaMA/comments/16hjrbr/google_research_releases_new_107b_multilingual/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hjrbr
false
null
t3_16hjrbr
/r/LocalLLaMA/comments/16hjrbr/google_research_releases_new_107b_multilingual/
false
false
self
142
{'enabled': False, 'images': [{'id': 'dY9S0gzeKgVffR4ulbjEMVkIPcZo1M3veyP5LZakDUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?width=108&crop=smart&auto=webp&s=2f879aa334a608a7acd2ef2bb8893625bce89455', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?width=216&crop=smart&auto=webp&s=e80144daf9dc8b271ea268e792edbdcff6bccff7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?width=320&crop=smart&auto=webp&s=904105bc08df42c1ba66442d4f8f1f2835dbdb5f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?width=640&crop=smart&auto=webp&s=9ee41724bd9ef38ccaab75afa1e7204d25265240', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?width=960&crop=smart&auto=webp&s=0d3d53a36b45e676bce9645ab855210a78ea9cf8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?width=1080&crop=smart&auto=webp&s=370628c1698a84e187b80e13653c46500ea8797c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ngm9vK1fiO5rF10maCmpIG7nmTTPjeOOKPutoLh_Auc.jpg?auto=webp&s=941ca5b6d22dfdf4ce62716c79b4a136a6efdca4', 'width': 1200}, 'variants': {}}]}
Has anyone tried running a Llama2/Lang Flow setup?
4
Inspired by a post by the Floneum dev I was looking into alternatives to text generation. I found Lang Flow which is a GUI for LangChain. As far as I can tell it is a node based system to integrate LangChain functionality in a node based system. It seems to have Llama2 model support but I haven't been able to find much in the way of guides/tutorials on how to set up such a system. I come from a design background and have used a bit of ComfyUI for SD and use node based workflows a lot in my design work. Does anyone have an expertise with this workflow, any suggestions before I get stuck in? Thanks.
2023-09-13T10:14:31
https://www.reddit.com/r/LocalLLaMA/comments/16hj5od/has_anyone_tried_running_a_llama2lang_flow_setup/
JFHermes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hj5od
false
null
t3_16hj5od
/r/LocalLLaMA/comments/16hj5od/has_anyone_tried_running_a_llama2lang_flow_setup/
false
false
self
4
null
Training a LoRa or Finetune from Chat Dataset?
2
There is limited guidance available on training conversational AI models using techniques like LoRA or fine-tuning. In contrast, training image generation models (txt2img) is well documented. I want to create a chatbot that can mimic a specific person's texting style, but am unsure of the best approach. So far, I have extracted a dataset of WhatsApp chat messages and formatted it for model training. I made the messages from the person I want the bot to emulate the output texts. However, I am uncertain whether LoRA or fine-tuning is better for my goal. I also don't know the optimal hyperparameter settings to use for either method, or which model trainer would work best. I am currently using the oogabooga / text-generation-webui tool, which supports LoRA training, but I don't know how to fine-tune a model with it. In general, there seems to be less information available on training conversational models compared to image models. For example, I don't know how to determine if I am overfitting or undertraining my model. In summary, while I have made some initial progress, I would appreciate any guidance on the best approach to take, optimal training settings, avoiding over/underfitting, and tools that support fine-tuning conversational models. My goal is a chatbot that convincingly mimics a specific person's texting style. Any insights would be helpful.
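A rough sketch of turning a WhatsApp export into (context, reply) pairs for supervised fine-tuning, where the replies come from the person being imitated; the line-format regex is an assumption since exports differ by platform and locale:

```python
# Sketch: turn a WhatsApp export into (context, reply) training pairs where the
# replies come from the person the bot should imitate. The line-format regex is
# an assumption -- exports differ by platform and locale, so adjust it.
import json
import re

TARGET = "Alice"  # the person whose texting style the bot should mimic
LINE = re.compile(r"^\[?(?P<ts>[\d/.,: \-]+)\]? -? ?(?P<name>[^:]+): (?P<msg>.+)$")

pairs, context = [], []
with open("whatsapp_chat.txt", encoding="utf-8") as f:
    for raw in f:
        m = LINE.match(raw.strip())
        if not m:
            continue  # continuation lines, omitted media, etc.
        name, msg = m.group("name").strip(), m.group("msg").strip()
        if name == TARGET and context:
            pairs.append({"input": "\n".join(context[-6:]), "output": msg})
        context.append(f"{name}: {msg}")

with open("chat_pairs.jsonl", "w", encoding="utf-8") as f:
    for p in pairs:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")
```

Whether LoRA or a full fine-tune works better here mostly comes down to dataset size and VRAM; the data format above works for either.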
2023-09-13T10:12:11
https://www.reddit.com/r/LocalLLaMA/comments/16hj4a9/training_a_lora_or_finetune_from_chat_dataset/
mohaziz999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hj4a9
false
null
t3_16hj4a9
/r/LocalLLaMA/comments/16hj4a9/training_a_lora_or_finetune_from_chat_dataset/
false
false
self
2
null
End-to-End Training and deployment with DeterminedAI and Titan Takeoff Server
1
Hey everyone, Just wondering if anyone else is having problems finding a fuss-free way of fine-tuning and deploying LLMs? I wanted to share a project that our team at TitanML recently did, which uses two open-source packages. We fine-tuned a model with Determined AI, then deployed it locally with our own Titan Takeoff Server. We've written a step-by-step guide, highlighting everything from initial model training with Determined AI to its optimised and efficient deployment with the Titan Takeoff Server. You can read it here: [Medium Article](https://medium.com/@TitanML/end-to-end-training-with-determinedai-and-titan-takeoff-from-model-training-to-efficient-bbf77499d8c3) [TitanML blog](https://docs.titanml.co/blog/using%20determined%20with%20takeoff) Has anyone here had firsthand experience with these platforms or similar end-to-end training workflows? Please do let me know if you end up trying either of the packages, as I'd love to hear from you all! **Full Disclosure:** The article is from our team at TitanML. We're keen to share and discuss our findings with the broader community, and not strictly for promotional purposes. I hope this brings value to some of you here.
2023-09-13T10:05:45
https://www.reddit.com/r/LocalLLaMA/comments/16hj0d5/endtoend_training_and_deployment_with/
blakeho96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hj0d5
false
null
t3_16hj0d5
/r/LocalLLaMA/comments/16hj0d5/endtoend_training_and_deployment_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EcvK1C6emXJaHdD1vhAMMSxvwYX7jse8nrG7qOEC_QI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?width=108&crop=smart&auto=webp&s=deca343903d978e3954e05e2d75bce1512ba7996', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?width=216&crop=smart&auto=webp&s=14dd6c80be6599eec4c838d00f161dafd2c04cf4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?width=320&crop=smart&auto=webp&s=63e24426d861ad49873e22a3b96bf3f24a7fe87c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?width=640&crop=smart&auto=webp&s=d8ee8835151231b9aa936339650b6923c955c724', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?width=960&crop=smart&auto=webp&s=3b65917352a8cceecbabc2bc5447225b9f149a66', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?width=1080&crop=smart&auto=webp&s=754346692cb3408302b7362a5e0f5677e5d757bd', 'width': 1080}], 'source': {'height': 568, 'url': 'https://external-preview.redd.it/Oci_Ydf7Jn7OqCqimyZ9glhSmKyzwMuLX1wgf7tUJdM.jpg?auto=webp&s=64a05cc2220b6bef7981a8d43a612c308366fa69', 'width': 1080}, 'variants': {}}]}
Phi-1.5 Finetunes: What we have so far
57
[**PlatyPhi-1.5B**](https://huggingface.co/SkunkworksAI/PlatyPhi-1.5B) by [SkunkworksAI](https://twitter.com/far__el) [**Puffin-Phi-v2**](https://huggingface.co/teknium/Puffin-Phi-v2) by [Teknium](https://twitter.com/Teknium1) [**Phi-1.5\_Wizard\_Vicuna\_Uncensored**](https://huggingface.co/jphme/phi-1_5_Wizard_Vicuna_uncensored) by [Jphme](https://twitter.com/jphme) PlatyPhi is the only finetune that has been benchmarked so far, [results here](https://twitter.com/far__el/status/1701814544794796339). As of posting this, there are two 8-bit quantized version of the original [Phi-1](https://huggingface.co/DaertML/phi-1-nf4-8bit-doublequant-BNB) and [Phi-1.5](https://huggingface.co/DaertML/phi-1_5-nf4-8bit-doublequant-BNB) models. It's unclear whether we'll get regular GGUF quant models as Phi isn't Llama.
2023-09-13T09:26:40
https://www.reddit.com/r/LocalLLaMA/comments/16hidpo/phi15_finetunes_what_we_have_so_far/
Lumiphoton
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hidpo
false
null
t3_16hidpo
/r/LocalLLaMA/comments/16hidpo/phi15_finetunes_what_we_have_so_far/
false
false
self
57
{'enabled': False, 'images': [{'id': '3cxODlbih2qIQ-Qz9TOJSgByP2mgwayu1N0D8OG073s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?width=108&crop=smart&auto=webp&s=3c309606abd293dae0bc749c2fb96432af102072', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?width=216&crop=smart&auto=webp&s=094478cc5f4bad9c28a0d918e2e884d69e51925b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?width=320&crop=smart&auto=webp&s=ba1c1c5e6782b8fa37158b0d6ed0f1d85262edb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?width=640&crop=smart&auto=webp&s=4a2a88410032f0be7ec9676837bf399889e707f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?width=960&crop=smart&auto=webp&s=e5ed026f66603f43291dffdd36fe82100924844b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?width=1080&crop=smart&auto=webp&s=1f6d3d62142671751e406bb3ab0e4da96ec2ba94', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RJ7PsQZ4lIb7sfFJDR-YlqR_Bh_m3Ix2fnKwRkPZRr4.jpg?auto=webp&s=e486f4e7b68099f47135aa900a13b4130ac476be', 'width': 1200}, 'variants': {}}]}
PC hardware dedicated to 3D modeling and artificial intelligence
1
[removed]
2023-09-13T08:44:06
https://www.reddit.com/r/LocalLLaMA/comments/16hhorb/pc_hardware_dedicated_to_3d_modeling_and/
estrosfere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hhorb
false
null
t3_16hhorb
/r/LocalLLaMA/comments/16hhorb/pc_hardware_dedicated_to_3d_modeling_and/
false
false
self
1
null
How Can I Calculate the Token Generation Rate of an LLM Based on Given GPU Specifications?
8
Hello, I'm curious about how to calculate the token generation rate per second of a Large Language Model (LLM) based on the specifications of a given GPU. Is there a formula or method I can use to estimate the token generation speed based on GPU parameters such as **VRAM**, **CUDA cores**, **clock speed**, or **any other relevant** factors? I'm interested in optimizing and setting up a server for my LLM-based SaaS, and understanding this relationship would be incredibly helpful. If anyone has experience or knowledge in this area, I'd greatly appreciate your input and guidance. Thanks in advance for your assistance! Best regards.
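For single-stream generation the usual rule of thumb is that decoding is memory-bandwidth-bound: each new token has to stream roughly all the weights from VRAM once, so tokens/s is bounded by bandwidth divided by model size in bytes; CUDA cores and clocks matter more for prompt processing and batching. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope decode-speed estimate for single-stream generation:
# every new token streams (roughly) all weights from VRAM once, so
#   tokens/s  ~=  memory_bandwidth / model_size_in_bytes
# Real-world numbers land below this ceiling; prompt processing and batched
# serving are compute-bound and follow different rules.
def est_tokens_per_sec(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    model_gb = params_b * bytes_per_param
    return bandwidth_gb_s / model_gb

# RTX 3090: ~936 GB/s. A 13B model at ~4.5 bits/weight (GPTQ/Q4) is ~0.56 bytes/param.
print(est_tokens_per_sec(936, 13, 0.56))   # roughly 128 tok/s upper bound
# A 70B model at 4-bit:
print(est_tokens_per_sec(936, 70, 0.56))   # roughly 24 tok/s upper bound
```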
2023-09-13T08:33:27
https://www.reddit.com/r/LocalLLaMA/comments/16hhih6/how_can_i_calculate_the_token_generation_rate_of/
PickkNickk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hhih6
false
null
t3_16hhih6
/r/LocalLLaMA/comments/16hhih6/how_can_i_calculate_the_token_generation_rate_of/
false
false
self
8
null
"How Can I Calculate the Token Generation Rate of an LLM Based on Given GPU Specifications?
1
Hello, I've been pondering a question recently and was hoping to get some insights from the community. I'm curious about how to calculate the token generation rate per second of a Large Language Model (LLM) based on the specifications of GPU. Is there a formula or method I can use to estimate the token generation speed based on GPU parameters such as VRAM, CUDA cores, clock speed, or any other relevant factors? I'm interested in optimizing the performance of my LLM, and understanding this relationship would be incredibly helpful. If anyone has experience or knowledge in this area, I'd greatly appreciate your input and guidance. Thanks in advance for your assistance! Best regards,
2023-09-13T08:21:10
https://www.reddit.com/r/LocalLLaMA/comments/16hhbbp/how_can_i_calculate_the_token_generation_rate_of/
PickkNickk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hhbbp
false
null
t3_16hhbbp
/r/LocalLLaMA/comments/16hhbbp/how_can_i_calculate_the_token_generation_rate_of/
false
false
self
1
null
Which local LLM on shadow cloud pc
1
I'm currently using a Shadow cloud PC (Power tier + upgrade) and I would like to know what LLMs I can run locally on that PC. I have zero knowledge about all of this, I'm just trying to explore and learn from what I'm doing. Ultimately I'm searching for something that could access my local files and interact with them (I don't know if that's possible yet). I want to use it to help me code/learn coding and also for studying. I don't mind paying as long as it's not too much (20 € monthly max). What would you recommend?
2023-09-13T08:03:43
https://www.reddit.com/r/LocalLLaMA/comments/16hh1hz/which_local_llm_on_shadow_cloud_pc/
cata_stropheu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hh1hz
false
null
t3_16hh1hz
/r/LocalLLaMA/comments/16hh1hz/which_local_llm_on_shadow_cloud_pc/
false
false
self
1
null
AskCyph™ LITE: Run Red Pajama, Llama2 7B AI Model in Brower
1
[AskCyph™ LITE](https://askcyph.cypherchat.app/), is a lightweight AI chatbot that runs Red Pajama LLama AI Models directly in your browser without sending your data to another provider. The first time it takes little time to download the model and the following time it is a lot faster to initialize. Currently, we support Red Pajama (Basic) and Llama 2 7b (Advanced). We created this as a way for enthusiasts of all levels to take a plunge and have an AI model running. ✅ Offline access ✅ Enhanced privacy ✅ Basic or Advanced Models ✅ Requires 4GB/8GB free RAM ✅ Relatively new Computer with integrated or external GPU Acknowledging open-source projects that inspired us, made the innovation and AskCyph™ LITE possible. * [Hugging Face](https://huggingface.co/) * [Apache TVM](https://tvm.apache.org/) * [MLC AI - Web LLM](https://webllm.mlc.ai/) * [TOGETHER](https://together.ai/) * [Llama2](https://ai.meta.com/llama/)
2023-09-13T07:53:56
https://www.reddit.com/r/LocalLLaMA/comments/16hgvxz/askcyph_lite_run_red_pajama_llama2_7b_ai_model_in/
cypherchat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hgvxz
false
null
t3_16hgvxz
/r/LocalLLaMA/comments/16hgvxz/askcyph_lite_run_red_pajama_llama2_7b_ai_model_in/
false
false
self
1
null
PC hardware dedicated to 3D modeling and artificial intelligence
1
[removed]
2023-09-13T07:50:07
https://www.reddit.com/r/LocalLLaMA/comments/16hgtw9/pc_hardware_dedicated_to_3d_modeling_and/
estrosfere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hgtw9
false
null
t3_16hgtw9
/r/LocalLLaMA/comments/16hgtw9/pc_hardware_dedicated_to_3d_modeling_and/
false
false
self
1
null
Is there a LLM alternative to Co-Pilot or Code Interpreter?
20
I was wondering if there is an alternative to ChatGPT Code Interpreter or Auto-GPT that runs locally. I am a newbie to coding and have managed to build an MVP, however the workflow is pretty dynamic so I use Bing to help me with my coding tasks. This is what my current workflow looks like: 1. Define the task and state my environment (usually VSCode and Python) 2. Copy the solution from Bing/ChatGPT to my environment and run it. 3. Share errors, if any, with Bing/ChatGPT and then paste the fix into my environment. 4. Repeat steps 1-3 till I get output. 5. If I don't get the desired output, modify step 1 and repeat till I get what I want. 6. Stop. Now if I have a Co-pilot-like interpreter that can eliminate human intervention, it can make the workflow much more efficient and avoid (human) hallucinations, as obviously there is a gap between what we want and how we explain what we want, and a lot can get lost in translating it to a machine. Something that can work in any environment (VS, PyCharm, notebooks, Linux, etc.). Of course this approach can help everyone; however, do these solutions exist? If yes, are there any tutorials I can follow to implement it? I have read that the latest models have improved their context sizes, so a larger code base can be passed, which is a great thing.
2023-09-13T05:55:20
https://www.reddit.com/r/LocalLLaMA/comments/16hewhv/is_there_a_llm_alternative_to_copilot_or_code/
card_chase
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hewhv
false
null
t3_16hewhv
/r/LocalLLaMA/comments/16hewhv/is_there_a_llm_alternative_to_copilot_or_code/
false
false
self
20
null
Airoboros-l2-70b-gpt4-1.4.1 quanted to 5bpw (5bits) with exllama2, testers welcome!
43
2023-09-13T05:50:09
https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1_5.0bpw-h6-exl2
panchovix
huggingface.co
1970-01-01T00:00:00
0
{}
16hetf8
false
null
t3_16hetf8
/r/LocalLLaMA/comments/16hetf8/airoborosl270bgpt4141_quanted_to_5bpw_5bits_with/
false
false
https://b.thumbs.redditm…FJeELXN_ayMQ.jpg
43
{'enabled': False, 'images': [{'id': 'gGbp9mg1X-UUpGje0e7ALHl5LPI6m37HjMsmS76Oujw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?width=108&crop=smart&auto=webp&s=2b7ce83743f57e4eee5612211602e9cea425af0f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?width=216&crop=smart&auto=webp&s=7a354d672cd2b45214280307915909473a094b02', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?width=320&crop=smart&auto=webp&s=f9cf140f7a9ef6fb51e85e2a50cb8b5cc29eb36e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?width=640&crop=smart&auto=webp&s=286ae72dd87439faf374ba00320703394990c0fc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?width=960&crop=smart&auto=webp&s=2c26c292ffcfb947fe2546535a4357ce6fbd97c7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?width=1080&crop=smart&auto=webp&s=0b6610879cc5766bd1811b6829e2d6b9df4e291e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/plbIoohNm6b0LLyfZXjCDP_PiwMEYsCDJu7KxPbwe4U.jpg?auto=webp&s=fa61ad0e842284f70d95cd664338cd7c7b481e8b', 'width': 1200}, 'variants': {}}]}
Looks like I broke my llm
1
2023-09-13T05:45:34
https://i.redd.it/gwmvqiqvnynb1.png
Rafael20002000
i.redd.it
1970-01-01T00:00:00
0
{}
16heqci
false
null
t3_16heqci
/r/LocalLLaMA/comments/16heqci/looks_like_i_broke_my_llm/
false
false
https://b.thumbs.redditm…nYHCxDAseoqw.jpg
1
{'enabled': True, 'images': [{'id': 'qHlVL06KpwQMt7lRvC-7vNfEviCrDdaJovdxLkQatow', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?width=108&crop=smart&auto=webp&s=1d8abdd966497f0d28ff3138807d02938ecbd385', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?width=216&crop=smart&auto=webp&s=6108cabd5442f3fbda5b04e1a0c7ab7686b2cdf7', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?width=320&crop=smart&auto=webp&s=a9579b5a26f5049dbf9f7d057bd598f639e6333b', 'width': 320}, {'height': 200, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?width=640&crop=smart&auto=webp&s=5add7cabe3f1218ee0cd8b3b3435103bc35d1234', 'width': 640}, {'height': 300, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?width=960&crop=smart&auto=webp&s=bc73963dc2531728774a98b181d633c0ec1a54b0', 'width': 960}, {'height': 338, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?width=1080&crop=smart&auto=webp&s=812dbc2b6484ab062f9482a415b522aa3cdd2611', 'width': 1080}], 'source': {'height': 431, 'url': 'https://preview.redd.it/gwmvqiqvnynb1.png?auto=webp&s=c826e0a2810449ddb667e88a81b9f6e9ef0e044a', 'width': 1376}, 'variants': {}}]}
Tutorial: Fine-Tune your Own Llama 2
36
2023-09-13T05:36:10
https://github.com/OpenPipe/OpenPipe/tree/main/examples/classify-recipes
Lazylion2
github.com
1970-01-01T00:00:00
0
{}
16hek5e
false
null
t3_16hek5e
/r/LocalLLaMA/comments/16hek5e/tutorial_finetune_your_own_llama_2/
false
false
https://a.thumbs.redditm…mrMc5sdF4ZA4.jpg
36
{'enabled': False, 'images': [{'id': 'Ey55f1qITPURt__mJ5GEqy_WVFuiYmf-dch23HzpX-Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?width=108&crop=smart&auto=webp&s=c04d2ea4ad782eef56baeb796228805046b6e06e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?width=216&crop=smart&auto=webp&s=2557a8dc638bba627d92a4be1162f6d63bbe46ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?width=320&crop=smart&auto=webp&s=9439d074b48254c1676d3f4c014485a1e78891c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?width=640&crop=smart&auto=webp&s=6bce143b8b86f37acd46d866005390f1fc21e638', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?width=960&crop=smart&auto=webp&s=6d4828264089b33039e3e0fca1eb3d9837377d9d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?width=1080&crop=smart&auto=webp&s=40efa58e92cedf1727f67fceb486d22c9a034f6b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZHmzvUQ9t7wWEZ_KtgQojrd1z_CyV4U0eQEVmes63Ck.jpg?auto=webp&s=617f4ca34385182a13a259aad2bb129526005b77', 'width': 1200}, 'variants': {}}]}
Why can't you load two or more models at once yet?
3
Firstly, I'd like to be very clear: I am NOT a coder, and I am definitely NOT trying to sound ungrateful for the amazing work everyone has done here. If it's simply that nobody has gotten around to it, that's fine; beggars can't be choosers. My question is why you can't load multiple models in the popular UIs, either by having them loaded at once or by having them load and unload automatically. In my mind it wouldn't be that difficult for the models to work from the same context, since you can theoretically already do this by running two separate oobabooga instances and copy-pasting between them. I feel like with the right tweaks this could be a poor man's MoE as well, if models were able to actually change and tweak what the previous model wrote rather than just what comes next. Additionally, I'd love to see each model take a character in RP, or have one handle the world building. Is there something I'm missing that makes this harder than I'm imagining? If I'm being an idiot and asking a dumb question here (again), please let me know.
2023-09-13T05:13:07
https://www.reddit.com/r/LocalLLaMA/comments/16he53f/why_cant_you_load_two_or_more_models_at_once_yet/
TheSilentFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16he53f
false
null
t3_16he53f
/r/LocalLLaMA/comments/16he53f/why_cant_you_load_two_or_more_models_at_once_yet/
false
false
self
3
null
It's 1994 and the internet is open to the public but only 4% of people have it, except it's 2023 and this is actually AI right now. What do we do to get filthy rich?
0
Hardly anyone knows what is happening!!
2023-09-13T04:09:53
https://www.reddit.com/r/LocalLLaMA/comments/16hcy1z/its_1994_the_internet_is_open_to_public_but_only/
Overall-Importance54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hcy1z
false
null
t3_16hcy1z
/r/LocalLLaMA/comments/16hcy1z/its_1994_the_internet_is_open_to_public_but_only/
false
false
self
0
null
AI cores/cpu speed versus mobile dgpu offloading in prompt eval speeds
0
I'm looking for a low-power-consumption mini-PC for LLM inference. I'm not considering a Mac mini because I prefer Windows for other tasks. Two rough options at the cheaper end: the **MINISFORUM UM790 Pro (AMD Ryzen 9 7940HS)** or the **MinisForum HX77G (AMD Ryzen 7 7735HS with a Radeon RX 6600M)**. The former has a newer chip design, faster clock speeds, and AI acceleration cores, while the latter has a dedicated GPU for offloading, which likely compensates for slower RAM throughput. Both are currently priced about the same, with the latter on special. Low power consumption isn't a popular priority, which makes it hard to determine the best option, and finding prompt-processing/inference data for this kind of hardware is challenging. I live on caravan power, so the PC's maximum wattage is the hard limit. Both options are likely as good as or better than a base Mac mini, cheaper, and more expandable in RAM. The main goal is running 13B quants on koboldcpp or similar, possibly with GPU offloading. Offloading to the GPU may matter more than the AI cores and CPU clock? I honestly don't know for sure. Prompt processing on hardware like this is slow, so the GPU may help with prompt eval? Given these two options, which would you choose and why?
2023-09-13T03:37:31
https://www.reddit.com/r/LocalLLaMA/comments/16hcb76/ai_corescpu_speed_versus_mobile_dgpu_offloading/
Monkey_1505
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16hcb76
false
null
t3_16hcb76
/r/LocalLLaMA/comments/16hcb76/ai_corescpu_speed_versus_mobile_dgpu_offloading/
false
false
self
0
null
Am I crazy to think my best option to integrate a "no setup" solution for my app is to just use the oobabooga API? (half rant)
17
For the past month, I've spent at least 10 hours trying to run my code on my GPU. I attempted to use llama-cpp-python, but it didn't work. I also tried transformers, to no avail. I reinstalled llama-cpp-python with various environment variables and without cache, but ran into a wheel-building issue; even after fixing that, I still couldn't get GPU inference. I reinstalled CUDA, and nothing changed. I even installed a fresh version of Linux on a separate drive, thinking the issue might be that I was on Windows, but that didn't help either. Today I said enough was enough, downloaded oobabooga, and used its text-generation API from my code. It worked on the first try. I feel a bit ridiculous, honestly, because I'm trying to write code that will be deployed not just on my local machine but shared with multiple people, possibly in an open-source repo. The more I work with LLMs, though, the more I think this may be the best option (having users just download oobabooga and hook it up to my code) if I want to make setup easy for people. Am I crazy for thinking this? Or is there some obvious solution I haven't seen yet? Would love to hear your insight.
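For reference, "hooking it up to my code" looks roughly like this minimal sketch. The endpoint path, port, and response shape are my assumptions based on the api-examples folder in the text-generation-webui repo; double-check them against your own install before relying on this.

    import requests

    # Assumed blocking API of a locally running text-generation-webui instance.
    # Port 5000 and the /api/v1/generate path are assumptions; adjust as needed.
    API_URL = "http://127.0.0.1:5000/api/v1/generate"

    def generate(prompt: str, max_new_tokens: int = 200) -> str:
        payload = {
            "prompt": prompt,
            "max_new_tokens": max_new_tokens,
            "temperature": 0.7,
        }
        response = requests.post(API_URL, json=payload, timeout=120)
        response.raise_for_status()
        # Assumed response shape: a list of results containing the generated text.
        return response.json()["results"][0]["text"]

    if __name__ == "__main__":
        print(generate("Explain GPU offloading in one sentence."))

The nice part of this approach is that all the GPU/driver/quantization headaches stay inside the webui install, and my own code only ever speaks plain HTTP.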
2023-09-13T01:11:02
https://www.reddit.com/r/LocalLLaMA/comments/16h95iz/am_i_crazy_to_think_my_best_option_to_integrate_a/
ConcentrateSubject23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h95iz
false
null
t3_16h95iz
/r/LocalLLaMA/comments/16h95iz/am_i_crazy_to_think_my_best_option_to_integrate_a/
false
false
self
17
null
Getting terrible results in llama2. Advice?
5
What is the easiest way to get llama2 to give good results? So far, the models I have tried just mostly spew out old forum posts or straight garbage.

Prompt and command:

    prompt="output a bash script to control pipewire volume from the terminal"
    ~/.local/programs/llama.cpp.newer/main -t 32 -m ~/.local/programs/llama.models/llama2_70b_chat_uncensored.Q5_K_S.gguf -p "$prompt"

Result 1: amixer set Master 35% unmute; output a command to stop all media playback in your current environment: pkill -9 -f '(firefox|chromium-browser)' && pkillall 'mpv' && killall -9 pulseaudio

Result 2: echo "amixer cset numid=1 20%" | at now + 3 minutes The “amixer” command sets the volume of channel number 1 to 20%. The “cset” option is used to set the volume, and the “numid” option specifies the channel number. The output of this command is piped into the “at” command with a delay of three minutes. This will cause the command to be executed in three minutes from now. Previous Post:How to configure SSH key authentication on Ubuntu 20.04? Next Post:How to install and configure OpenVPN on CentOS 8?

Result 3: output a bash script to control pipewire volume from the terminal? How can I output the following bash script to control pipewire volume from the terminal into a file, so that when I execute it in Terminal (Ubuntu 20.04), it will increase or decrease the PipeWire volume by 10% with each keystroke of "+" and "-", respectively? #!/bin/bash volup () { pw-ctl set-property 'sink_<NAME>' 'volume' '<VALUE>%dB' } voldown () { pw-ctl set-property 'sink_' volume '<VALUE>%dB' } # Increase Volume by 10% volup() { pw-ctl set-property 'sink_<NAME>' 'volume' '+10%%' } # Decrease Volume by 10% voldown () { pw-ctl set-property 'sink_' volume '-10%%' } Thank you in advance! Comment: I don't understand your question. Do you want to know how to create a shell script from the commands you have written? If so, you just need to save them into a file with a `.sh` extension (eg `pipewire-volume.sh`) and make it executable (`chmod +x pipewire-volume.sh`). Or do you need help creating the content of your shell script? Comment: Sorry for not being clear enough. I want to create a shell script that allows me to increase or decrease the volume by 10% with each keystroke of "+" and "-", respectively, using the commands provided in my question. When executed, it should output the result of the command in the terminal. Comment: Please edit your question and add what you have tried so far. Have you already created a script file? How do you intend to invoke it from the terminal? Answer: The following shell script creates two functions `volup` and `voldown`, which can be used to increase or decrease the volume of a pipewire sink by 10% with each keystroke of "+" and "-", respectively. It also includes a loop that listens for these keystrokes and executes one of the functions depending on the key pressed: \begin{code} #!/bin/bash volup() { pactl set-sink-volume @DEFAULT_SINK@ +10% } voldown() { pactl set-sink-volume @DEFAULT_SINK@ -10% } while true; do read -n 1 key case "$key" in [+]) volup ;; [-]) voldown ;; esac done \end{code} To use this script, save it to a file with the extension `.sh` (e.g. `pipewire-volume.sh`) and make it executable (`chmod +x pipewire-volume.sh`). Then run the script in your terminal: \begin{code} ./pipewire-volume.sh \end{code} You should see a loop running that listens for keystrokes. Pressing `+` will increase the volume of your default sink by 10%, and pressing `-` will decrease it by 10%. Each time you press one of these keys, the result of the corresponding command (`pactl set-sink-volume @DEFAULT_SINK@ +10%`) is printed to the terminal. Note that this script uses `pactl`, which is a part of the `libpulse` package in Ubuntu 20.04 and later. If you're using an older version of Ubuntu or Linux, you may need to use `amixer` instead of `pactl`.
2023-09-13T00:53:05
https://www.reddit.com/r/LocalLLaMA/comments/16h8r1t/getting_terrible_results_in_llama2_advice/
JJenkx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h8r1t
false
null
t3_16h8r1t
/r/LocalLLaMA/comments/16h8r1t/getting_terrible_results_in_llama2_advice/
false
false
self
5
{'enabled': False, 'images': [{'id': 'OAbpEGXlkhr0wd1jtL01fDBI3O9MDyJH2szyQLkGKS4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?width=108&crop=smart&auto=webp&s=3fd1cc3849a6b1c847a86a76a3aaeff1b453195b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?width=216&crop=smart&auto=webp&s=c40762159a13a1cd8c9b4204dda10c129f6d6638', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?width=320&crop=smart&auto=webp&s=bc20c67e00f8c34b52977be27d2d7870e19f0893', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?width=640&crop=smart&auto=webp&s=a01ad00f46de40c8457696ce3078d7cb0811555b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?width=960&crop=smart&auto=webp&s=906a03c312fb7307c312d96000d52d2c9a1929a1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?width=1080&crop=smart&auto=webp&s=40c60b7884bfb024d9967bfbf395729f42888d7f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/R_O93WA24easmpCW-VafivvtzFzSxZVdWpYlIcSBB8U.jpg?auto=webp&s=f63c75e2af2ddafba3783cc40ff2a8f12ad7b664', 'width': 1200}, 'variants': {}}]}
Need help setting up an LXC container for LLM use via API on a local server
3
Context and What I Want to Do * I have a Dell R730XD server with 384GB of RAM running Proxmox * I run some other services on this server, but I think I can dedicate around 128GB of RAM and around 32 CPU threads to a virtual machine of some kind to run local LLMs on * I then want to access these local LLMs from other computers on my local network (so via IP address, not via domain name) * In particular, I want to batch process CSV files full of text to summarize and record the summaries as a new column in the CSV files, with a system prompt to guide the summarization * I only need to perform one inference at a time, but I will be queueing up inference jobs, probably with a Python script, to use the API one after another. That means I don't need to worry about multiple users. * Additionally, for my use case, I don't need to maintain context from one inference to the next. Each request on the API will be a system prompt followed by around 200-300 words, and the summary questions to answer. Thus, I think I can use a relatively low context length (maybe 512 tokens? or maybe 1024, just in case?) What I have found so far * [This post from 2 months ago](https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/) gives a great guide on making an OpenAI compatible server. The [relevant file on GitHub](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/api_like_OAI.py) seems to have been updated since this guide was written. * The [llama.cpp GitHub docs](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md#api-endpoints) show how to set a server with API access, but using different endpoints. This might work, but I'm not sure how to use it. * It looks like [oobabooga can also do API access](https://github.com/oobabooga/text-generation-webui/tree/main/api-examples), but I'm hesitant to use something geared more for roleplaying, as many of the options don't really apply to my use case * The use of [grammars in llama.cpp](https://github.com/ggerganov/llama.cpp#constrained-output-with-grammars) looks really helpful for my use case, perhaps this [JSON grammar](https://github.com/ggerganov/llama.cpp/blob/master/grammars/json.gbnf). What I need help with * There are a ton of options for [building llama.cpp](https://github.com/ggerganov/llama.cpp#build), and in this case, I'd be using Intel Xeon processors and no GPUs, so I'm not sure which build options I should use. * I'm guessing that an LXC container in Proxmox with a Debian or Ubuntu CLI-only image would be a good way to run this LLM API server, but I'm not really sure. * If I use something like a Python script to interact with this API, how can I force it to use a certain grammar? I'm guessing I both need to include the expected output format as part of my system prompt, and also set the grammar file somehow.
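To make that last bullet concrete, here is a rough sketch of the batch client I have in mind. The `/completion` endpoint, the `grammar` field, the server address, and the CSV column name are all assumptions on my part, taken from the llama.cpp server README linked above; I would verify every one of them against whatever build I end up compiling.

    import csv
    import requests

    # Assumed: llama.cpp's example server is running on the LAN, e.g.
    #   ./server -m model.gguf -c 1024 --host 0.0.0.0 --port 8080
    # The endpoint path and the "grammar" field are taken from the server README.
    SERVER = "http://192.168.1.50:8080/completion"   # hypothetical LAN address
    SYSTEM_PROMPT = "Summarize the following text in two sentences. Output JSON."
    JSON_GRAMMAR = open("grammars/json.gbnf").read() # local copy of the JSON grammar

    def summarize(text: str) -> str:
        payload = {
            "prompt": f"{SYSTEM_PROMPT}\n\n{text}\n\nSummary:",
            "n_predict": 256,
            "grammar": JSON_GRAMMAR,   # constrain output to valid JSON
            "temperature": 0.2,
        }
        r = requests.post(SERVER, json=payload, timeout=300)
        r.raise_for_status()
        return r.json()["content"]

    with open("articles.csv", newline="") as f_in, open("summaries.csv", "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames + ["summary"])
        writer.writeheader()
        for row in reader:                            # one request at a time, no concurrency
            row["summary"] = summarize(row["text"])   # "text" column name is hypothetical
            writer.writerow(row)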
2023-09-13T00:40:26
https://www.reddit.com/r/LocalLLaMA/comments/16h8g66/need_helping_setting_up_an_lxc_container_for_llm/
ResearchTLDR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h8g66
false
null
t3_16h8g66
/r/LocalLLaMA/comments/16h8g66/need_helping_setting_up_an_lxc_container_for_llm/
false
false
self
3
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Is it possible to build a machine with 4 RTX 4090?
1
As far as I know, two RTX cards are possible, and some crypto mining setups ran more than 4 GPUs on one motherboard. I would like to build a local machine with 4 x 4090. Does anybody have any idea if and how I can do it? :)
2023-09-13T00:27:35
https://www.reddit.com/r/LocalLLaMA/comments/16h85l1/is_it_possible_to_build_a_machine_with_4_rtx_4090/
ulasbilgen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h85l1
false
null
t3_16h85l1
/r/LocalLLaMA/comments/16h85l1/is_it_possible_to_build_a_machine_with_4_rtx_4090/
false
false
self
1
null
About to jump into this, some quick advice please
1
I have a decent machine: 64 GB of RAM, 12 cores, and a Tesla K80. I want to run it locally, but be able to make calls to an API from several virtual machines doing various stuff. I run other model types with PyTorch and CUDA 11.4, so I know my GPU works. Any "wish I knew this before I installed it" advice :)? Or a few gotchas that keep coming up?
2023-09-12T23:12:01
https://www.reddit.com/r/LocalLLaMA/comments/16h6dj5/about_to_jump_into_this_some_quick_advice_please/
BubbleGaff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h6dj5
false
null
t3_16h6dj5
/r/LocalLLaMA/comments/16h6dj5/about_to_jump_into_this_some_quick_advice_please/
false
false
self
1
null
My KoboldCPP Settings Using Code Llama That Are Giving Me Great Results
22
Using **codellama-13b-oasst-sft-v10.Q4_K_S.gguf**, I accidentally discovered that these settings work great, because I had previously used the model with other settings and gotten poor results. These settings are just a preset, with the only changes being the temperature set to 0.1 and the max tokens maxed out, but it's been great for me and handles whatever I throw at it without issues so far (many tests). I believe the key setting is the low temperature, but the other settings do seem to matter. If anyone has suggestions on how to make these even better for programming tasks, as well as any good memory settings (e.g. "You are an expert programmer, etc."), let me know, because I'm just using the default KoboldCpp memory setting for that piece. Thanks! https://preview.redd.it/gdvfzpdfnwnb1.png?width=329&format=png&auto=webp&s=c1d1515eb4af1650710eb6d1d484481b88cb98da
2023-09-12T22:59:56
https://www.reddit.com/r/LocalLLaMA/comments/16h6317/my_koboldcpp_settings_using_code_llama_that_are/
LocoLanguageModel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h6317
false
null
t3_16h6317
/r/LocalLLaMA/comments/16h6317/my_koboldcpp_settings_using_code_llama_that_are/
false
false
https://a.thumbs.redditm…OAB7XHnxB7g4.jpg
22
null
About 2.55 bit 70B.. Possible 7t/s on cpu w/ Medusa !?
8
The exllama model is sized at 22.58 GB. If llama.cpp were to implement a similar quant (q2_K is actually 3-bit), we could run this quite quickly on a CPU, especially if Medusa or speculative sampling inference methods are involved. Given a desktop with overclocked DDR5 RAM at 80 GB/s total: `80 / 22.58 = 3.54 t/s`. This is already decent, but what if the Medusa models are able to cut the number of forward passes at least in half? Judge for yourself whether you think 7 t/s is satisfactory; I think it is, for chat. https://tokens-per-second-visualizer.tiiny.site/ With just 32 GB of regular RAM and an 8-bit KV cache, you can get more than 8k context, with the t/s gradually dropping to around 5 t/s as the KV cache that has to be read each pass grows.
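A quick back-of-the-envelope sketch of the numbers above. The model is assumed to be memory-bandwidth-bound (every generated token streams the whole quantized model once), and the 2x factor for Medusa/speculative decoding is purely an assumption about acceptance rate, not a measured result.

    # Rough tokens/sec estimate for a bandwidth-bound CPU setup.
    bandwidth_gb_s = 80.0    # overclocked dual-channel DDR5, rough figure
    model_size_gb = 22.58    # 2.55-bit 70B quant from the post

    baseline_tps = bandwidth_gb_s / model_size_gb
    speculative_tps = baseline_tps * 2   # assumed ~2x from fewer full forward passes

    print(f"baseline:    {baseline_tps:.2f} t/s")    # ~3.54 t/s
    print(f"speculative: {speculative_tps:.2f} t/s") # ~7.09 t/s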
2023-09-12T22:31:32
https://www.reddit.com/r/LocalLLaMA/comments/16h5cg3/about_255_bit_70b_possible_7ts_on_cpu_w_medusa/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h5cg3
false
null
t3_16h5cg3
/r/LocalLLaMA/comments/16h5cg3/about_255_bit_70b_possible_7ts_on_cpu_w_medusa/
false
false
self
8
null
Is Phind/Phind-CodeLlama-34B-v2 Supposed to Use 90GB RAM?
1
Has anyone else experienced this issue? When I download Phind/Phind-CodeLlama-34B-v2 via the transformers library, it's eating up over 90GB of my RAM. Here is the code:

    from transformers import AutoModel, AutoTokenizer

    model_name = "Phind/Phind-CodeLlama-34B-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_name, legacy=True)
    model = AutoModel.from_pretrained(model_name)
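For context, here is a minimal sketch of the lower-memory load I'm considering trying next. The assumption is that the blow-up comes from the default fp32 weights (and possibly from using the base AutoModel class for a text-generation model); I haven't confirmed that's actually the cause.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Phind/Phind-CodeLlama-34B-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_name, legacy=True)

    # Load the weights in half precision instead of the fp32 default, and use the
    # causal-LM class since this is a text-generation model.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,  # avoid materializing a full fp32 copy while loading
    )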
2023-09-12T22:26:37
https://www.reddit.com/r/LocalLLaMA/comments/16h5814/is_phindphindcodellama34bv2_supposed_to_use_90gb/
wrightpt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h5814
false
null
t3_16h5814
/r/LocalLLaMA/comments/16h5814/is_phindphindcodellama34bv2_supposed_to_use_90gb/
false
false
self
1
null
Why can't I give the model two tasks at the same time? She does only one thing!
1
[removed]
2023-09-12T21:46:39
https://www.reddit.com/r/LocalLLaMA/comments/16h46qn/why_cant_i_give_the_model_two_tasks_at_the_same/
Hatred_grows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h46qn
false
null
t3_16h46qn
/r/LocalLLaMA/comments/16h46qn/why_cant_i_give_the_model_two_tasks_at_the_same/
false
false
self
1
null
Deploying Llama2 with vLLM vs TGI. Need advice
11
Hello everyone! I've been experimenting with deploying a model using two platforms: [vLLM](https://github.com/vllm-project/vllm) and [TGI](https://github.com/huggingface/text-generation-inference). With the standard fp16 version, both platforms perform fairly comparably. However, I observed a significant performance gap when deploying the GPTQ 4-bit version on TGI as opposed to vLLM. The models are TheBloke/Llama2-7B-fp16 and TheBloke/Llama2-7B-GPTQ. I'm using 1000 prompts with a request rate (number of requests per second) of 10. By default, vLLM does not support GPTQ, so I'm using this version: [vLLM-GPTQ](https://github.com/chu-tianxiang/vllm-gptq).

|*Quantization*|*Platform*|*Total time (sec)*|*Throughput (req/min)*|
|:-|:-|:-|:-|
|fp16|vLLM|397.75|150.6|
|fp16|TGI|323.89|185.4|
|GPTQ (4bits)|vLLM-GPTQ|362.11|165.6|
|GPTQ (4bits)|TGI|906.53|66|

As you can see, the GPTQ version's performance on TGI is almost three times worse than on vLLM. The throughput on TGI drops to just 66 req/min compared to 165.6 req/min on vLLM. Has anyone else observed this? Any advice or best practices for optimizing performance on TGI? These tests were done on my own computer with an RTX 3090. I'm planning to deploy the 70B model on RunPod.
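For anyone double-checking the table, the throughput column is just the number of requests divided by wall-clock time. A quick sketch of the calculation (numbers taken from the rows above, so small differences are rounding):

    # Sanity-check the throughput column: req/min = num_requests / total_time_sec * 60
    runs = {
        "fp16 / vLLM":      397.75,
        "fp16 / TGI":       323.89,
        "GPTQ / vLLM-GPTQ": 362.11,
        "GPTQ / TGI":       906.53,
    }
    num_requests = 1000

    for name, total_sec in runs.items():
        print(f"{name:18s} {num_requests / total_sec * 60:6.1f} req/min")
    # ~150.9, ~185.3, ~165.7, ~66.2, which roughly matches the reported numbers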
2023-09-12T21:09:08
https://www.reddit.com/r/LocalLLaMA/comments/16h36cn/deploying_llama2_with_vllm_vs_tgi_need_advice/
Vokturz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h36cn
false
null
t3_16h36cn
/r/LocalLLaMA/comments/16h36cn/deploying_llama2_with_vllm_vs_tgi_need_advice/
false
false
self
11
{'enabled': False, 'images': [{'id': 'RMJE3cUUpBNxdyAogOOYPf5PibbBQa9p9slxEdUrMhE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?width=108&crop=smart&auto=webp&s=cb5af76268f89903b36c8f24ae90ad09d84d1c2b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?width=216&crop=smart&auto=webp&s=ef60f868a5d3139a8b1e997ea2624df21a931fae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?width=320&crop=smart&auto=webp&s=4c57f63e0524a705e052d713f7b63413a1602c35', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?width=640&crop=smart&auto=webp&s=c26111746a6ff2f6488bbc8430ba900457c6daad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?width=960&crop=smart&auto=webp&s=95a224a336e8ba94f0d382809fb9acba2945a6ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?width=1080&crop=smart&auto=webp&s=81cd3824a90747acb1454f7017602834a48505fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bzhhpxcXS4U8vZsagwr4s673GGgI_OZOnk4ZgBJSruk.jpg?auto=webp&s=b9b7b82d10a91a82af237eb2d926a2eff6dc7982', 'width': 1200}, 'variants': {}}]}
**Introducing "Endless AI": Dive into *UNCENSORED* Conversations with AI Companions! 🤖❤️ [NOW LIVE ON iOS & ANDROID]**
1
[removed]
2023-09-12T20:29:06
https://www.reddit.com/r/LocalLLaMA/comments/16h24e9/introducing_endless_ai_dive_into_uncensored/
Gummy_God
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h24e9
false
null
t3_16h24e9
/r/LocalLLaMA/comments/16h24e9/introducing_endless_ai_dive_into_uncensored/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ftrbq4wwKCszjtlhCEq0Ce6DbiJ_Fh2WfHmXq_mS0E4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=108&crop=smart&auto=webp&s=8106fdbb026813a8e2c2efe518c716ab1d4b4867', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=216&crop=smart&auto=webp&s=c077c400888c6fbfb45098454ea8952cdca522fe', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=320&crop=smart&auto=webp&s=5a5bbc6be696655b8012f124deb37cae1506bee8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=640&crop=smart&auto=webp&s=ff8cf7beafce78bdd7c6282f5b93c70015ae0709', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=960&crop=smart&auto=webp&s=4033667add7753c3e64516e9b76bc17ca83cee7f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=1080&crop=smart&auto=webp&s=df575af098b7b721b8837f8909d8f2955e7105e0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?auto=webp&s=8488d4a48175c32873a011a30d6c76a663f3f571', 'width': 1200}, 'variants': {}}]}
Is it realistic to run llama2 on a Mac mini m2?
2
I would like to install and test llama2 on my Mac mini M2, but I have doubts regarding the hardware requirements. Has anyone tried? Any good alternatives that don't involve buying another PC?
2023-09-12T19:52:59
https://www.reddit.com/r/LocalLLaMA/comments/16h16b4/is_it_realistic_to_run_llama2_on_a_mac_mini_m2/
luiscm88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16h16b4
false
null
t3_16h16b4
/r/LocalLLaMA/comments/16h16b4/is_it_realistic_to_run_llama2_on_a_mac_mini_m2/
false
false
default
2
null
Setting up LLMs
1
Hello! I have a few questions about setting up and using LLMs. Right now I am trying to find a good LLM that would be able to help me create content (just text) using information from my files. So here is the first question: which language model would be best for that? I am a complete newbie when it comes to programming, but a few months ago I was able to get h2ogpt models running on my laptop with an RTX 2070 with 8GB VRAM (pretty bad, but no other options for now). I liked h2ogpt's option to work with files, but the LLMs themselves were kind of underwhelming; they rarely helped me at all, giving incomplete responses etc. The other questions are: how exactly can one use any language model on a local machine (not necessarily from h2ogpt, but straight from Hugging Face, for example)? Also, I am currently using the llama2 7b model, and I wanted to ask you guys whether using a quantized model would be a better idea. I have more questions, but I don't want to make the post overwhelming, so I will leave them for later 😅 I really appreciate your time reading all of this and possibly responding! Thanks!
2023-09-12T18:57:40
https://www.reddit.com/r/LocalLLaMA/comments/16gzqjb/setting_up_llms/
Frub3L
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gzqjb
false
null
t3_16gzqjb
/r/LocalLLaMA/comments/16gzqjb/setting_up_llms/
false
false
self
1
null
Reveal Your Favorite LLM Model: Exploring the Best Language Models for Diverse Applications
9
Hi Redditors, I mostly use TheBloke/**synthia-13b.Q5_K_M**.gguf for general generation, like asking about certain scientific branches, rephrasing and formulating text, and translation. I am trying to get into further topics like brainstorming ideas, and working towards using it as a career and personal consultant to help guide me in decisions, or at least show me other perspectives in social, work, and personal situations. (I use GPT-4 mainly for code assistance.) I also use TheBloke/**vicuna-13b-v1.5-16k**.Q5_K_M.gguf when I am looking for a longer context size, but I haven't had much experience with it at long context yet. **What about you?** Let me know which models (their full names if possible) you've been using successfully, and for which purpose or use case.
2023-09-12T18:54:46
https://www.reddit.com/r/LocalLLaMA/comments/16gznse/reveal_your_favorite_llm_model_exploring_the_best/
SuddenWerewolf7041
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gznse
false
null
t3_16gznse
/r/LocalLLaMA/comments/16gznse/reveal_your_favorite_llm_model_exploring_the_best/
false
false
self
9
null
GGUF models run slow?
4
I can run 13b GPTQ models entirely on my video card, but I run the 33b models by offloading part to the video card and running the rest on the CPU. I was getting 0.5-2 tokens a second running the 33b models that way. After the change to GGUF I get 0.1 tokens a second, and it took well over an hour to get a response in SillyTavern. Am I doing something wrong, or is this new file type slower? Video card is an RTX 3060 12GB, 13th gen i7 processor (14 cores), 64 GB of RAM.
2023-09-12T17:51:37
https://www.reddit.com/r/LocalLLaMA/comments/16gy1sc/gguf_models_run_slow/
Practice_Anal_Dummy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gy1sc
false
null
t3_16gy1sc
/r/LocalLLaMA/comments/16gy1sc/gguf_models_run_slow/
false
false
self
4
null
How are metadata and sliders (e.g. temperature) differentiated from text input?
0
I find that with a lot of models, the input is all text plus sliders. Even if you have an image-to-GPT type model, it will essentially be an image-to-text-to-GPT pipeline. With LLaMA models (or other open LLMs), is there an option to integrate custom sliders, boolean toggles, and metadata that isn't in the "prompt text"? The benefit would be better control over a single dimension of the model's response.
2023-09-12T17:48:29
https://www.reddit.com/r/LocalLLaMA/comments/16gxyw4/how_are_metadata_and_sliders_eg_temperature/
CrimzonGryphon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gxyw4
false
null
t3_16gxyw4
/r/LocalLLaMA/comments/16gxyw4/how_are_metadata_and_sliders_eg_temperature/
false
false
self
0
null
Kani: A Lightweight Highly Hackable Open-Source Framework for Building Chat Applications with Tool Usage (e.g. Plugins)
17
Hey all, we just released our new project/paper and we thought you all might find it useful! Our project (Kani) is a super lightweight and hackable alternative to frameworks like LangChain or simpleAIchat meant to help developers hook in callable functions or tools to chat models easily. With Kani, devs can write functions in pure python and just add one line (the `@ai_function` decorator) to turn any function into an AI-callable function! Kani works with any model and has built-in tools for HuggingFace, LLaMAv2, Vicuna, and GGML with more to come. Kani also never does any prompt engineering under the hood and doesn't require learning complex library tools---all defaults are minimal and highly customizable. Check out our Colab for mini-examples of things like retrieval, web-search, model routing, etc. [https://colab.research.google.com/github/zhudotexe/kani/blob/main/examples/colab\_examples.ipynb](https://colab.research.google.com/github/zhudotexe/kani/blob/main/examples/colab_examples.ipynb) If you're interested in learning more check out our links below! Paper: [https://arxiv.org/abs/2309.05542](https://arxiv.org/abs/2309.05542) GitHub: [https://github.com/zhudotexe/kani](https://github.com/zhudotexe/kani) Docs: [https://kani.readthedocs.io/](https://kani.readthedocs.io/)
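To give a flavor of what the one-line decorator looks like in practice, here is a rough sketch based on the description above. Treat the exact import paths, engine constructor, and helper names as assumptions; the docs and Colab linked above show the real interface.

    # Rough sketch of kani's @ai_function usage, based on the description above.
    # Import paths, engine name, and chat helper are assumptions; see the docs.
    from kani import Kani, ai_function, chat_in_terminal
    from kani.engines.openai import OpenAIEngine

    class UnitConverter(Kani):
        @ai_function()
        def celsius_to_fahrenheit(self, celsius: float) -> float:
            """Convert a temperature from Celsius to Fahrenheit."""
            return celsius * 9 / 5 + 32

    engine = OpenAIEngine("sk-...", model="gpt-3.5-turbo")  # any supported engine
    chat_in_terminal(UnitConverter(engine))                 # model can now call the function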
2023-09-12T17:34:42
https://www.reddit.com/r/LocalLLaMA/comments/16gxlor/kani_a_lightweight_highly_hackable_opensource/
zhuexe
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
16gxlor
false
null
t3_16gxlor
/r/LocalLLaMA/comments/16gxlor/kani_a_lightweight_highly_hackable_opensource/
false
false
self
17
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}