Dataset columns:

title      string, length 1-300
score      int64, 0-8.54k
selftext   string, length 0-41.5k
created    timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url        string, length 0-878
author     string, length 3-20
domain     string, length 0-82
edited     timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded     int64, 0-2
gildings   string, 7 classes
id         string, length 7
locked     bool, 2 classes
media      string, length 646-1.8k
name       string, length 10
permalink  string, length 33-82
spoiler    bool, 2 classes
stickied   bool, 2 classes
thumbnail  string, length 4-213
ups        int64, 0-8.54k
preview    string, length 301-5.01k
GitHub - IBM/Dromedary: Dromedary: towards helpful, ethical and reliable LLMs.
7
2023-05-11T22:29:17
https://github.com/IBM/Dromedary
pseudonerv
github.com
1970-01-01T00:00:00
0
{}
13f2a7n
false
null
t3_13f2a7n
/r/LocalLLaMA/comments/13f2a7n/github_ibmdromedary_dromedary_towards_helpful/
false
false
https://a.thumbs.redditm…xVav90hVnDj0.jpg
7
{'enabled': False, 'images': [{'id': 'b8M4u1OqDE4_KJYlbJ-XMrUt1Enksdsa6NLoJwjH984', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=108&crop=smart&auto=webp&s=865685c9a9323de4bbc84e4cda59e2857553127f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=216&crop=smart&auto=webp&s=3c7f7357906fc7e13f505ac23c9ee8764b96bf6f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=320&crop=smart&auto=webp&s=bd11874368724a14748217d925fe2aab18d4938b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=640&crop=smart&auto=webp&s=e4795b938e06c757c386889dd12476a7aaf70b10', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=960&crop=smart&auto=webp&s=3281d130f184fa6f6a9ed8ddbf1d9beeff778077', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=1080&crop=smart&auto=webp&s=4f9fda61cee2841c8f0ace1abb59305a10793b49', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?auto=webp&s=3f9ae9c6632d92791483e34f6c28207a078de4fb', 'width': 1200}, 'variants': {}}]}
Does a site exist which host open source LLMs to inference?
0
Dumb question, but it seems the only way to interact with most open source models is by self-hosting or spinning up an instance in Google Colab, etc. I also noticed that some models can be inferenced via Hugging Face, but not the majority. Is there a reason a company hasn't started hosting Alpaca / Koala / Vicuna / etc. to allow enthusiasts to run inference? I'm guessing the answer has to do with legality or cost. But legally speaking, many of these models are Apache or MIT licensed, and cost-wise I would imagine they could charge per inference or by subscription.
2023-05-11T21:49:37
https://www.reddit.com/r/LocalLLaMA/comments/13f17vb/does_a_site_exist_which_host_open_source_llms_to/
mdas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13f17vb
false
null
t3_13f17vb
/r/LocalLLaMA/comments/13f17vb/does_a_site_exist_which_host_open_source_llms_to/
false
false
self
0
null
Has anyone any idea when the Google offline Gecko model will be out?
12
Hi, has anyone any idea when the standalone offline Gecko model will be out? (I wanted to ask this on the Google sub, but I think they have frozen my posting, as in the past I haven't been reverent enough to the great Google overlord.)
2023-05-11T19:32:15
https://www.reddit.com/r/LocalLLaMA/comments/13exft0/has_anyone_any_idea_when_the_google_offline_gecko/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13exft0
false
null
t3_13exft0
/r/LocalLLaMA/comments/13exft0/has_anyone_any_idea_when_the_google_offline_gecko/
false
false
self
12
null
Not enough ram, swap space question.
1
[removed]
2023-05-11T19:29:25
https://www.reddit.com/r/LocalLLaMA/comments/13excrx/not_enough_ram_swap_space_question/
h_i_t_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13excrx
false
null
t3_13excrx
/r/LocalLLaMA/comments/13excrx/not_enough_ram_swap_space_question/
false
false
default
1
null
GPT4ALL + MPT ---> Bad Magic error ?
2
I am trying to run the new MPT models by MosaicML with pygpt4all. In loading the following, I get a "bad magic" error. How do I overcome it? I've checked [https://github.com/ggerganov/llama.cpp/issues](https://github.com/ggerganov/llama.cpp/issues) and there aren't similar issues reported for the MPT models. Code: ``` from pygpt4all.models.gpt4all_j import GPT4All_J model = GPT4All_J('./models/ggml-mpt-7b-chat.bin') ``` Error: ``` runfile('C:/Data/gpt4all/gpt4all_cpu2.py', wdir='C:/Data/gpt4all') gptj_model_load: invalid model file './models/ggml-mpt-7b-chat.bin' (bad magic) Windows fatal exception: int divide by zero ```
2023-05-11T19:09:21
https://www.reddit.com/r/LocalLLaMA/comments/13ewsuc/gpt4all_mpt_bad_magic_error/
kayhai
self.LocalLLaMA
2023-05-11T19:21:18
0
{}
13ewsuc
false
null
t3_13ewsuc
/r/LocalLLaMA/comments/13ewsuc/gpt4all_mpt_bad_magic_error/
false
false
self
2
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
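For context on the "bad magic" error in the post above: ggml-format model files begin with a fixed 4-byte magic number, and a loader refuses the file when those bytes don't match what it expects (here, a GPT-J loader opening an MPT file). A minimal sketch of that header check, assuming the classic ggml magic value 0x67676d6c; this is illustrative, not the pygpt4all implementation:

```python
import struct

GGML_MAGIC = 0x67676D6C  # "ggml" packed as a little-endian uint32

def has_ggml_magic(path: str) -> bool:
    """Return True if the file's first 4 bytes are the classic ggml magic."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return False
    (magic,) = struct.unpack("<I", header)
    return magic == GGML_MAGIC
```

If the check fails, the file is either not ggml at all or uses a newer or architecture-specific container, and the fix is a loader built for that format rather than a different file.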
autogpt-like framework?
11
hey y'all, I've been searching for an autogpt-like framework that can work with a local llama install like llama.cpp or oobabooga or even gpt4all. Do you know of any? So far I tried a number of them but I keep getting stuck on random minutia, was wondering if there's a "smooth" one...
2023-05-11T16:55:46
https://www.reddit.com/r/LocalLLaMA/comments/13esyta/autogptlike_framework/
paskal007r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13esyta
false
null
t3_13esyta
/r/LocalLLaMA/comments/13esyta/autogptlike_framework/
false
false
self
11
null
Is there vicuna-uncensored?
29
What the title says. Where can I get vicuna-uncensored if it's out somewhere? Thanks! Update: It seems to be in training. https://huggingface.co/ehartford/Wizard-Vicuna-13b-Uncensored
2023-05-11T16:45:19
https://www.reddit.com/r/LocalLLaMA/comments/13esozg/is_there_vicunauncensored/
jl303
self.LocalLLaMA
2023-05-11T19:52:38
0
{}
13esozg
false
null
t3_13esozg
/r/LocalLLaMA/comments/13esozg/is_there_vicunauncensored/
false
false
self
29
{'enabled': False, 'images': [{'id': 'QgK4OSL80eBW-KUk05CIG4tC7_eftM9F062uqLVOjTw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=108&crop=smart&auto=webp&s=0eb97d0509604b833d08fd83b14be33c59b83122', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=216&crop=smart&auto=webp&s=d5e24f2637eb906f66523161ee04dbff02b3c2a4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=320&crop=smart&auto=webp&s=08bbf1b1475b83f63eb3d91c215041eb9bd39a5a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=640&crop=smart&auto=webp&s=53a4804f3396b28dcf60444f7f853e1e7eaec742', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=960&crop=smart&auto=webp&s=34a130946b3277f09c40684289c4683f20c8e890', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=1080&crop=smart&auto=webp&s=360081c3748040fb7d77a57f31cbf70070f93f82', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?auto=webp&s=a66cb02cda5e5ed9d36eadf43da75610d8b51f22', 'width': 1200}, 'variants': {}}]}
We introduce CAMEL : Clinically Adapted Model Enhanced from LLaMA
1
[removed]
2023-05-11T16:34:12
https://www.reddit.com/r/LocalLLaMA/comments/13esep8/we_introduce_camel_clinically_adapted_model/
HistoryHuge2015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13esep8
false
null
t3_13esep8
/r/LocalLLaMA/comments/13esep8/we_introduce_camel_clinically_adapted_model/
false
false
default
1
null
LLMs designed to act like netizens?
12
[deleted]
2023-05-11T15:11:54
[deleted]
2023-06-14T17:58:10
0
{}
13eq6ys
false
null
t3_13eq6ys
/r/LocalLLaMA/comments/13eq6ys/llms_designed_to_act_like_netizens/
false
false
default
12
null
VRAM limitations
3
I have a decent machine: AMD Ryzen 9 5950X 16-core processor, 3401 MHz, 16 cores, 32 logical processors. My video adapter is an NVIDIA GeForce RTX 3080 with 10240 MB of VRAM. I am struggling to run many models. Is there anything I can do?
2023-05-11T12:28:11
https://www.reddit.com/r/LocalLLaMA/comments/13elyk9/vram_limitations/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13elyk9
false
null
t3_13elyk9
/r/LocalLLaMA/comments/13elyk9/vram_limitations/
false
false
self
3
null
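The 10 GB VRAM ceiling in the post above is mostly about the weights. A rough back-of-envelope rule, assuming weights dominate and adding ~20% overhead for activations and KV cache (illustrative numbers, not a precise formula):

```python
def est_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes (params * bits/8) plus ~20% overhead
    for activations and KV cache. A rule of thumb, not a measurement."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for params, bits in [(7, 16), (7, 4), (13, 16), (13, 4)]:
    print(f"{params}B @ {bits}-bit: ~{est_vram_gb(params, bits):.1f} GB")
```

By this estimate a 13B model in fp16 needs ~31 GB, while a 7B model quantized to 4 bits fits comfortably under 10 GB, which is why quantized models are the usual answer to this question.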
I made a simple telegram bot Llama.cpp
1
[removed]
2023-05-11T12:27:07
https://v.redd.it/o5hcgab847za1
[deleted]
v.redd.it
1970-01-01T00:00:00
0
{}
13elxod
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/o5hcgab847za1/DASHPlaylist.mpd?a=1694534820%2COTM2MzBhZmVjM2YxN2Q0NzMyNDcxMzY3YmU1NGNkNWZlOWJiNTVjYmU4OTY3ZDY3OGVlYjMyMjc4ZGM4YzM1Mg%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/o5hcgab847za1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/o5hcgab847za1/HLSPlaylist.m3u8?a=1694534820%2CMjMzMTdhOTg2MjRmMGY5ZGU2ZGVkMTVkYmFhMTAzZGZhOTU1YTZjY2UzNzZmZTliY2RlMmNiZTExNGVhZTNiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o5hcgab847za1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}}
t3_13elxod
/r/LocalLLaMA/comments/13elxod/i_made_a_simple_telegram_bot_llamacpp/
false
false
default
1
null
I made a simple telegram bot Llama.cpp
1
[removed]
2023-05-11T12:25:32
[deleted]
1970-01-01T00:00:00
0
{}
13elwcr
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/c9fuex3y37za1/DASHPlaylist.mpd?a=1694534786%2CNGM1NTYwNjk0MzFkYWIwNjJiNmU5OTFkYWIyMTk5ZGJhYjY3MDE3MTFkMTk3ZmZiOGY2M2ExYzBhNTU1OTY5NQ%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/c9fuex3y37za1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/c9fuex3y37za1/HLSPlaylist.m3u8?a=1694534786%2CMzM5N2E0ZWVkZmUzZGY0ZWMwZTNlYTQwMjYzMDM4YjI4YzI1OWQyYWYzMDJjMDM3NGMxZTE5YjllODAzZDRmYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c9fuex3y37za1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}}
t3_13elwcr
/r/LocalLLaMA/comments/13elwcr/i_made_a_simple_telegram_bot_llamacpp/
false
false
default
1
null
Can anyone recommend me some specs that will give me high performance for the next few years?
7
Not sure how much VRAM, not sure how much RAM, or if the GPU still matters outside VRAM. I was planning on getting an (Asus) gaming laptop because those are built like beasts; price doesn't really matter, under 5k USD? Cheaper is obviously better. I'm going to use this for company stuff, so quality is most important. Anyway, high/highest-end specs for the VRAM, RAM, GPU and CPU? Would love to run a 60B model.
2023-05-11T11:15:00
https://www.reddit.com/r/LocalLLaMA/comments/13ek9sp/can_anyone_recommend_me_some_specs_that_will_give/
uhohritsheATGMAIL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ek9sp
false
null
t3_13ek9sp
/r/LocalLLaMA/comments/13ek9sp/can_anyone_recommend_me_some_specs_that_will_give/
false
false
self
7
null
Has anyone been able to implement WebGPU LLamas on a local server like this project
1
2023-05-11T11:04:54
https://mlc.ai/web-llm/
SupernovaTheGrey
mlc.ai
1970-01-01T00:00:00
0
{}
13ek1ro
false
null
t3_13ek1ro
/r/LocalLLaMA/comments/13ek1ro/has_anyone_been_able_to_implement_webgpu_llamas/
false
false
default
1
null
Engineering training for better quality
3
Have just watched the TED talk from OpenAI on their models, and one thing Greg Brockman mentioned was that in order to get to GPT-4-level semantics and understanding, they had to get to rocket-engineering levels of tolerance in the construction of their training and feedback systems. One thing I've noticed over the past few weeks is that we have a focus on training new models and putting new datasets together, but I don't know how much thought is going into the tooling and the quality of the training on specific hardware and in specific epochs engineered for higher quality. Does anyone have any information on this, or some really simple things people could be implementing to get a step change in their output? For example, from my own experience I've been working at plugging LLaMA-like models into AutoGPT, and only Vicuna-13B has been able to correctly utilise the JSON response format, and it only handles about 2-3 recursions before it breaks. This is the state of functional agents right now in my own experience, unless others have had more success.
2023-05-11T10:31:26
https://www.reddit.com/r/LocalLLaMA/comments/13ejbwx/engineering_training_for_better_quality/
SupernovaTheGrey
self.LocalLLaMA
2023-05-11T10:56:13
0
{}
13ejbwx
false
null
t3_13ejbwx
/r/LocalLLaMA/comments/13ejbwx/engineering_training_for_better_quality/
false
false
self
3
null
AI Showdown: Wizard Vicuna vs. Stable Vicuna, GPT-4 as the judge (test in comments)
83
2023-05-11T09:00:44
https://i.redd.it/hpopsffe36za1.png
imakesound-
i.redd.it
1970-01-01T00:00:00
0
{}
13ehnnt
false
null
t3_13ehnnt
/r/LocalLLaMA/comments/13ehnnt/ai_showdown_wizard_vicuna_vs_stable_vicuna_gpt4/
false
false
https://a.thumbs.redditm…p8Ok07nMHZ24.jpg
83
{'enabled': True, 'images': [{'id': 'FFdR__ZEBqtyR8s3yBc67BNvxU_B0aejMAVU3Bdq6ow', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=108&crop=smart&auto=webp&s=4104f6a90aaf64a35caf47591896472020580b2c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=216&crop=smart&auto=webp&s=7683e9c18ad0a1b8f6b61f7b1358b5078652e611', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=320&crop=smart&auto=webp&s=89008d433bb7e99e26098413afc11be4f2ed7ea9', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=640&crop=smart&auto=webp&s=44cf3eb525ca7e977d2b3e09c9ce3538dca942c0', 'width': 640}], 'source': {'height': 893, 'url': 'https://preview.redd.it/hpopsffe36za1.png?auto=webp&s=14dd578c5af80c5255a89bb93b91533eb3dc1108', 'width': 892}, 'variants': {}}]}
AI Showdown: Wizard Vicuna vs. Stable Vicuna, GPT-4 as the judge (test in comments)
1
[deleted]
2023-05-11T08:46:31
[deleted]
1970-01-01T00:00:00
0
{}
13ehf5e
false
null
t3_13ehf5e
/r/LocalLLaMA/comments/13ehf5e/ai_showdown_wizard_vicuna_vs_stable_vicuna_gpt4/
false
false
default
1
null
Any tips on effective prompts for the usual LLM suspects?
1
[removed]
2023-05-11T08:38:59
https://www.reddit.com/r/LocalLLaMA/comments/13ehaku/any_tips_on_effective_prompts_for_the_usual_llm/
this_is_a_long_nickn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ehaku
false
null
t3_13ehaku
/r/LocalLLaMA/comments/13ehaku/any_tips_on_effective_prompts_for_the_usual_llm/
false
false
default
1
null
4096 Context length (and beyond)
48
Right now there's a lot of talk about StableLM vs WizardLM in 7B and 13B varieties. I wanted to point out that the StableLM family of models was trained for a 4096-token context length, meaning it can remember twice as much, and is one of the few GPT-based model families that support a context length larger than 2048 tokens. I hit the token limit frequently during conversations, and love the idea of a model that can go beyond 2048 tokens, making StableLM-Base-Alpha a pretty attractive platform. If this base model could be trained up on the same data set as wizardlm-13b-uncensored, I think we'd have a winner, at least for a while. For anyone coming up to speed on this, here's a mini brain dump on context lengths: note that GPT-3 has a context length of 8K tokens and GPT-4 supposedly goes up to 32K, though they may be using some tricks to make this happen. There are also other models like Longformer (4K) and RWKV (an RNN, not a GPT, but still an LLM), which has versions in 4K and 8K. MosaicML released mosaic-something-storywriter-65K+, but apparently it's very, very slow; unusably slow for real-time use. There are also "memory" techniques for enhancing LLM context lengths (see LangChain for examples), and SuperBIG/SuperBooga, but these are all "hacks" on top of the fixed token length of the model. Also worth mentioning that increasing context length slows down generation, by a lot. This is because most GPT architectures work by comparing each new token in the sequence with all the tokens that came before it, which results in a quadratic (i.e. faster than linear) increase in the number of comparisons (or matmuls, or whatever) needed to process the prompt. So you might find that a model is very fast starting out, but slows down as the context length increases. But, back to my selfish question: what's the current SOTA for >2K-context, instruction-following, uncensored models? (License is less of a concern for me, as most everything I'm doing right now is for personal/private use.) And is anyone using memory augmentation to great effect?
2023-05-11T04:44:19
https://www.reddit.com/r/LocalLLaMA/comments/13ed7re/4096_context_length_and_beyond/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ed7re
false
null
t3_13ed7re
/r/LocalLLaMA/comments/13ed7re/4096_context_length_and_beyond/
false
false
self
48
null
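The slowdown described in the post above can be sketched numerically: if each new token attends to itself and every earlier token, the total number of attention comparisons over a sequence of n tokens is n(n+1)/2, i.e. quadratic growth. This is a simplified count that ignores heads and KV caching:

```python
def attention_ops(n_tokens: int) -> int:
    # Token t (1-indexed) attends to itself and the t-1 tokens before it,
    # so the total over the sequence is 1 + 2 + ... + n = n * (n + 1) / 2.
    return n_tokens * (n_tokens + 1) // 2

for n in (512, 1024, 2048, 4096):
    print(f"{n:5d} tokens -> {attention_ops(n):,} comparisons")
```

Doubling the context from 2048 to 4096 roughly quadruples the total work, which matches the "fast at first, slower as the chat grows" behavior the post describes.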
Long term project: "resurrecting" a passed friend?
23
I'm not sure if this belongs here or in r/learnmachinelearning, but I have a question about what is and isn't possible. My husband's best friend passed several years ago, and he has copious chats and forum posts, as well as the stories they wrote together. Right now we have created a bot in [Character.AI](https://Character.AI) that kinda sounds like him, but obviously not one that contains any of his knowledge, since the definition window is small and the bots' memory is... imprecise at best. So I got to wondering: it seems like it should be possible to fine-tune/create a LoRA of one of the LLMs so that it does contain the friend's knowledge and can be used as a chatbot. From my research it seems like Vicuna would be a good fit, as it has already been tweaked to act as a chatbot. I'm currently working through tutorials, including the "How to Fine Tune an LLM" one that exists on Colab that tweaks GPT-2 (I think) with Wikipedia data. I know I have a huge learning curve ahead of me. I would be looking at doing the training using Google Colab, but ideally he'd run the end result locally. He can run Stable Diffusion on his machine using his NVIDIA GPU. Sadly, my video card is AMD, so while I can technically run the Vicuna 4-bit model (13B, I think?) in CPU mode, it's too painfully slow to do anything with. The data is currently unstructured. Obviously we will need to format it properly, but it is in the form of blocks of text rather than the Prompt/Input/Output format I've seen in various GitHub projects. As for me, I am a former C# Windows/Web/SQL developer, so I'm not starting from absolute scratch, but obviously I'll need to learn a lot. I'm prepared for this to be an ongoing project for the next few months. I would welcome any feedback as to what is or isn't possible, whether I'm setting my sights too high, or even if I'm simply in the wrong forum. Thanks all! EDIT: I've received many words of warning about whether this is a good idea, for my husband's sake at least. After thinking about it, I'm not sure I'm at the point where I agree yet, but I'll at least give this a lot of thought before attempting something like this. I know it's not the most emotionally healthy thing, to cling to the echoes of someone gone. He has not found interacting with the [Character.AI](https://Character.AI) version of his friend to be difficult, but while their bots are fun to interact with and can still sound startlingly human, an LLM fine-tuned on the friend's text has every chance of being more so, to the point of being damaging. So thank you everyone, you've given me a lot to think about.
2023-05-11T03:52:43
https://www.reddit.com/r/LocalLLaMA/comments/13ecakp/long_term_project_resurrecting_a_passed_friend/
rmt77
self.LocalLLaMA
2023-05-11T11:01:12
0
{}
13ecakp
false
null
t3_13ecakp
/r/LocalLLaMA/comments/13ecakp/long_term_project_resurrecting_a_passed_friend/
false
false
self
23
{'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]}
Google is comparing their LLM to LLMs from 2022 !
119
2023-05-10T21:58:27
https://i.redd.it/kwglgus4t2za1.png
3deal
i.redd.it
1970-01-01T00:00:00
0
{}
13e4nqo
false
null
t3_13e4nqo
/r/LocalLLaMA/comments/13e4nqo/google_is_comparing_their_llm_to_llms_from_2022/
false
false
https://b.thumbs.redditm…meY6LN0daKow.jpg
119
{'enabled': True, 'images': [{'id': 'tKZ9yhtGOfmxTgYbAKy-IjJrn9N6kii0Ts1jCFhwu6Q', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=108&crop=smart&auto=webp&s=22e2b8f9ebaecb765dcfc8f1cd6854cdf778493b', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=216&crop=smart&auto=webp&s=4c866adaf9f3152d41858b8c38bab1712e62d282', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=320&crop=smart&auto=webp&s=23243d7c0975293e1b2b783953966ff148cd9ed9', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=640&crop=smart&auto=webp&s=a20812e41b777caa527fd4ac0386dd2d2114afa9', 'width': 640}, {'height': 534, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=960&crop=smart&auto=webp&s=5f9fa927d80a88196ccd123eaebb3777884525dd', 'width': 960}, {'height': 601, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=1080&crop=smart&auto=webp&s=7af095d5af37a09c5d574daecbf745ed3b2b77d2', 'width': 1080}], 'source': {'height': 962, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?auto=webp&s=6882ce6222d3ecbb5028a5dcea6b31591ed4f414', 'width': 1727}, 'variants': {}}]}
Best open source LLM model for commercial use
3
Hey guys, what’s the best open source LLM model for commercial use atm?
2023-05-10T21:30:50
https://www.reddit.com/r/LocalLLaMA/comments/13e3xi4/best_open_source_llm_model_for_commercial_use/
jamesgz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e3xi4
false
null
t3_13e3xi4
/r/LocalLLaMA/comments/13e3xi4/best_open_source_llm_model_for_commercial_use/
false
false
self
3
null
Can you make a Lora with Gpt4-x[…] models?
2
[removed]
2023-05-10T20:37:57
https://www.reddit.com/r/LocalLLaMA/comments/13e2h4h/can_you_make_a_lora_with_gpt4x_models/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e2h4h
false
null
t3_13e2h4h
/r/LocalLLaMA/comments/13e2h4h/can_you_make_a_lora_with_gpt4x_models/
false
false
default
2
null
Need help getting started, llama outputting random gibberish
7
- I installed oobabooga along with GPTQ-for-LLaMa to use 4bit models. - I got `llama-13b-4bit-128g.safetensors` from [here](https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-4bit-128g) and put it in the models folder. - Started the server with `--model llama-13b-4bit-128g --wbits 4 --groupsize 128 --chat` Everything seems to load fine, but if I ask it anything, all I get as output is random gibberish, such as: > reignCred behind painted fa liberal ourselves credit paint MrsDA Cred reign definitely ex gal behind painted reign reignware behind behind reign reign reign reign credit behind painted reign reign reign behind behind painted behind behind behind behind behind painted painted painted behind painted... or >gngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngn What am I doing wrong? Is there anything else I need to configure beforehand? I also tried an instruction command found in the [wiki](https://www.reddit.com/r/LocalLLaMA/wiki/index#wiki_standard_llama), with the same result.
2023-05-10T20:01:30
https://www.reddit.com/r/LocalLLaMA/comments/13e1is6/need_help_getting_started_llama_outputting_random/
addandsubtract
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e1is6
false
null
t3_13e1is6
/r/LocalLLaMA/comments/13e1is6/need_help_getting_started_llama_outputting_random/
false
false
self
7
{'enabled': False, 'images': [{'id': 'UaJo6m3JbOsXwgsuXDIxX3KcUwdXD6fCUdt9PkhCzsY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=108&crop=smart&auto=webp&s=31b73048591012e375481f856603242133ac989a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=216&crop=smart&auto=webp&s=3e1e6ede2a2e14a455a7d48d19ce9ca89825f8b1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=320&crop=smart&auto=webp&s=8b11821c321ea0bd0a98a497400e3c282734319a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=640&crop=smart&auto=webp&s=bf077520368745a6c2852b7ca8de1aa0b75148b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=960&crop=smart&auto=webp&s=b09e184bcf7bd81cdb9b0f8a0519874528f7d3b5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=1080&crop=smart&auto=webp&s=d442cce0481eceaf4e95ac9f900efdc6e06363f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?auto=webp&s=4d8577512de50a8a81be9b810821c9b114d7ded4', 'width': 1200}, 'variants': {}}]}
Chatbot arena released new leader board with GPT4 and more models!
144
Now we can finally see how close or far open source models like Vicuna are from GPT-4! Amazing, this could be an informal benchmark for LLMs. [https://chat.lmsys.org/?arena](https://chat.lmsys.org/?arena) https://preview.redd.it/fjrgpdfx02za1.png?width=909&format=png&auto=webp&s=2abe40c2936ad1fcbe7de46d28640288aded8400
2023-05-10T19:19:50
https://www.reddit.com/r/LocalLLaMA/comments/13e0fkf/chatbot_arena_released_new_leader_board_with_gpt4/
GG9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e0fkf
false
null
t3_13e0fkf
/r/LocalLLaMA/comments/13e0fkf/chatbot_arena_released_new_leader_board_with_gpt4/
false
false
https://b.thumbs.redditm…pzvcz2Ysmrzk.jpg
144
null
How to use llama.cpp with LM's and LoRas
10
Looking for guides, feedback, or direction on how to merge or load LoRAs with existing models using llama.cpp. I guess this is part 2 of my question; the first question I had was about creating LoRAs: [Creating LoRA's either with llama.cpp or oobabooga (via cli only) : LocalLLaMA (reddit.com)](https://www.reddit.com/r/LocalLLaMA/comments/13c3i33/creating_loras_either_with_llamacpp_or_oobabooga/). I have a decent understanding and have loaded models, but I'm looking to better understand the LoRA training and experiment a bit. Thanks!
2023-05-10T19:14:20
https://www.reddit.com/r/LocalLLaMA/comments/13e0am7/how_to_use_llamacpp_with_lms_and_loras/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e0am7
false
null
t3_13e0am7
/r/LocalLLaMA/comments/13e0am7/how_to_use_llamacpp_with_lms_and_loras/
false
false
self
10
null
Looks interesting.. maybe we can use this to run a plethora of local 7B or 13B models that are highly specialized, and just have the gpt3.5 API or some other "better" model direct the program to select which model to run on the fly... seems like it would reduce overall model sizes..
29
2023-05-10T18:22:33
https://huggingface.co/docs/transformers/transformers_agents
kc858
huggingface.co
1970-01-01T00:00:00
0
{}
13dywr0
false
null
t3_13dywr0
/r/LocalLLaMA/comments/13dywr0/looks_interesting_maybe_we_can_use_this_to_run_a/
false
false
https://b.thumbs.redditm…Nru6YOpj1KTg.jpg
29
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
Questions about LLMs & LoRa Fine-Tuning
17
Hi all, I’ve been following along with most recent developments and doing quite a lot of research. There are still a couple of things that are unclear to me about the setup, tuning and use of these LLMs (LLaMA, Alpaca, Vicuna, GPT4All, StableVicuna). I understand Alpaca/Vicuna etc. are fine-tuned versions of the Meta LLaMA models (7B, 13B). The base LLaMA models can do prompt completion but are fine-tuned to respond in certain ways. I know that PEFT LoRA methods have significantly reduced the VRAM requirements for fine-tuning these models. My questions are: 1. Can you download already-tuned LLaMA models such as Alpaca and fine-tune them further for your specific use case? E.g. tune WizardLM storyteller to talk about certain topics. 2. Will fine-tuning base LLaMA give you a better and more specialized model? What are the pros and cons of fine-tuning base LLaMA vs. something like StableVicuna? 3. What are the specifics of quantization? What do 4-bit and 8-bit actually mean, and what difference do they make? 4. What is context length and what does it mean for the model? E.g. (2048, 4096). I'm currently speccing a local machine to run instances of these on. It'll probably include an RTX 4090 and an RTX 3080 Ti. I can do a separate post on that if it interests anyone. Thanks in advance for your help!
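On question 3, a rough intuition for what 4-bit quantization means, sketched in plain Python (real schemes like GPTQ quantize group-wise with calibration data, so treat this strictly as a toy illustration):

```python
def quantize(weights, bits=4):
    """Map floats to integers in [0, 2**bits - 1] over the weights' range."""
    levels = 2 ** bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels or 1.0   # guard against a constant tensor
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the stored integers."""
    return [v * scale + lo for v in q]

w = [-0.51, 0.0, 0.27, 0.98]
q, scale, lo = quantize(w, bits=4)
restored = dequantize(q, scale, lo)
# 4 bits per weight instead of 16/32 -> roughly 4-8x smaller on disk and
# in VRAM, at the cost of small rounding errors in the restored values.
```

The rounding error per weight is bounded by half the step size, which is why quality degrades gracefully rather than collapsing.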
2023-05-10T17:45:54
https://www.reddit.com/r/LocalLLaMA/comments/13dxxp5/questions_about_llms_lora_finetuning/
rookiengineer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dxxp5
false
null
t3_13dxxp5
/r/LocalLLaMA/comments/13dxxp5/questions_about_llms_lora_finetuning/
false
false
self
17
null
Mpt-7b storyteller returns correct answers, followed by a paragraph of meaningless or irrelevant rambling.
5
Tips or tricks for this in oobabooga? Running on a 4090. If I ask for the capital of Canada, it says Ottawa, gives a sentence or two about Ottawa, then degenerates into a teenage girl writing a blog post or telling a story about nothing. I don't get it. It's not the quantized model (I can't get that to run; the webui refuses based on unknown model type). Is it just a matter of waiting for optimizations, or is there something I should be doing?
2023-05-10T16:56:17
https://www.reddit.com/r/LocalLLaMA/comments/13dwl8o/mpt7b_storyteller_returns_correct_answers/
shaykruler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dwl8o
false
null
t3_13dwl8o
/r/LocalLLaMA/comments/13dwl8o/mpt7b_storyteller_returns_correct_answers/
false
false
self
5
null
How can I download the OpenAssistant model on HuggingFace for local use in the future?
1
[removed]
2023-05-10T16:25:20
https://www.reddit.com/r/LocalLLaMA/comments/13dvovz/how_can_i_download_the_openassistant_model_on/
spmmora
self.LocalLLaMA
2023-06-02T11:30:31
0
{}
13dvovz
false
null
t3_13dvovz
/r/LocalLLaMA/comments/13dvovz/how_can_i_download_the_openassistant_model_on/
false
false
default
1
null
Would an E-GPU work as good on Linux than an internal GPU, same model?
3
Wondering if I could run it this way on a laptop.
2023-05-10T15:37:36
https://www.reddit.com/r/LocalLLaMA/comments/13dub31/would_an_egpu_work_as_good_on_linux_than_an/
SirLordTheThird
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dub31
false
null
t3_13dub31
/r/LocalLLaMA/comments/13dub31/would_an_egpu_work_as_good_on_linux_than_an/
false
false
self
3
null
Has anyone gotten good agent results with 7b models?
13
I am using a small computer with 16 GB RAM; I can go up to 32 GB RAM on another computer. I want to run some langchain agents to retrieve information, like a list of episodes for a TV show. I understand the 7B versions are not the strongest models (and as I understand it, I should use an instruct model like WizardLM over a chat model like Vicuna). Has anyone gotten good results with a 7B model on these types of tasks, or is 13B the way to go?
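For what it's worth, many small-model agent failures are really parsing failures. A hedged sketch of the kind of action parsing a ReAct-style loop needs (output format assumed here; langchain's actual output parsers differ in detail):

```python
import re

# Assumed convention: the model emits lines like "Action: tool[argument]".
ACTION_RE = re.compile(r"Action:\s*(\w+)\[(.*)\]")

def parse_action(model_output: str):
    """Extract (tool, argument) from a ReAct-style model response,
    or None if the model answered directly without requesting a tool."""
    m = ACTION_RE.search(model_output)
    return (m.group(1), m.group(2)) if m else None

print(parse_action("Thought: look it up.\nAction: search[episodes of Severance]"))
# -> ('search', 'episodes of Severance')
```

7B models drift off this format more often than 13B ones, so a lenient parser plus a retry on parse failure buys a lot of reliability.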
2023-05-10T15:34:00
https://www.reddit.com/r/LocalLLaMA/comments/13du792/has_anyone_gotten_good_agent_results_with_7b/
klop2031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13du792
false
null
t3_13du792
/r/LocalLLaMA/comments/13du792/has_anyone_gotten_good_agent_results_with_7b/
false
false
self
13
null
Language model Context Lengths > 2048
15
Hi folks, I am looking for LLMs with a context length equal to or longer than 4096. Apart from StableLM (4096) and MPT-7B-StoryWriter (60K+), all the other models I've found have a context length of 2048. Would love to learn about any other models I might have missed!
2023-05-10T14:20:22
https://www.reddit.com/r/LocalLLaMA/comments/13ds4cf/language_model_context_lengths_2048/
nightlingo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ds4cf
false
null
t3_13ds4cf
/r/LocalLLaMA/comments/13ds4cf/language_model_context_lengths_2048/
false
false
self
15
null
Recommendations for GPU with $25-30k budget
17
**Hi everyone,** I am planning to **build a GPU server** with a budget of **$25-30k** and I would like your help in choosing a suitable GPU for my setup. The computer will be a Dell PowerEdge T550 with 258 GB RAM and an Intel® Xeon® Silver 4316 (2.3G, 20C/40T, 10.4GT/s, 30M Cache, Turbo, HT, 150W, DDR4-2666) — or other recommendations? **My aim** is to run local models such as **Stable Diffusion,** the **WizardLM Uncensored 13B model** and **BigCode**. I want **fast ML inference (top priority)**, and I **may do fine-tuning** from time to time. For heavy workloads, I will use cloud computing. I am considering the following graphics cards: * A100 (40GB) * A6000 Ada * A6000 * RTX 4090 * RTX 3090 (because it supports NVLink) If I buy RTX 4090s, RTX 3090s or A6000s, I can buy multiple GPUs to fit my budget. What do you recommend for my use case? Are there any other options I should consider? Thank you in advance for your help!
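As a rough sizing aid when comparing these cards, weights-only VRAM can be estimated from parameter count and precision (KV cache, activations and framework overhead add more on top):

```python
def weight_vram_gb(params_billion: float, bits: int) -> float:
    """Approximate VRAM needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * bits / 8
    return bytes_total / 1024**3

# A 13B model: ~24 GB in fp16, ~6 GB at 4-bit.
print(round(weight_vram_gb(13, 16), 1))  # -> 24.2
print(round(weight_vram_gb(13, 4), 1))   # -> 6.1
```

So a single 24 GB card (3090/4090) barely fits a 13B fp16 model for inference, while fine-tuning the same model needs several times that, which is where the A100/A6000 class or multi-GPU setups come in.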
2023-05-10T13:38:07
https://www.reddit.com/r/LocalLLaMA/comments/13dqxrs/recommendations_for_gpu_with_2530k_budget/
Own_Forever_5997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dqxrs
false
null
t3_13dqxrs
/r/LocalLLaMA/comments/13dqxrs/recommendations_for_gpu_with_2530k_budget/
false
false
self
17
null
Problem: exporting xturing models to GGML
2
Hello to all. I'm an AI enthusiast who just recently started experimenting with LoRA to fine-tune some models. I have an RTX 2080 Ti, which turns out to be enough for GPT-J with the excellent xturing code ([https://github.com/stochasticai/xturing](https://github.com/stochasticai/xturing)), but there's a problem. I can't really use the model after fine-tuning: xturing's sampler is somewhat shitty — it doesn't seem to actually be top\_k\_top\_p — and there is way too much repetition in the answers. So I want to export the model to GGML, but it fails: the converter script refuses to work with that model. Can anyone help me with this? It's pretty much my only way of tinkering with models, but the last step (using it with GGML) is broken, which is so frustrating. Any help is appreciated!
2023-05-10T13:33:38
https://www.reddit.com/r/LocalLLaMA/comments/13dqtg2/problem_exporting_xturing_models_to_ggml/
phenotype001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dqtg2
false
null
t3_13dqtg2
/r/LocalLLaMA/comments/13dqtg2/problem_exporting_xturing_models_to_ggml/
false
false
self
2
{'enabled': False, 'images': [{'id': 'UkmrbolRu2CKdJysYYzEAqy4XRMF5aPSZF2bSWg5sMQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=108&crop=smart&auto=webp&s=af638d5e45c6cbf3c2efd7d11701be2eefc231e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=216&crop=smart&auto=webp&s=702841c2ba77a5dae055e90c4d9930ffbfd3b606', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=320&crop=smart&auto=webp&s=cdee9bd73728162c76be002737c74c553c4ea3a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=640&crop=smart&auto=webp&s=ce214c370b2200c83de37349eef63fe617bbb016', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=960&crop=smart&auto=webp&s=1a6e2c97f9a25e1928a90f233683bf84600bc531', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=1080&crop=smart&auto=webp&s=1ef9e141e9f9262e27e1633ff2a1bedd2c77996a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?auto=webp&s=1796797bd8bf57d8b619820a857fd9992d1bc0a9', 'width': 1200}, 'variants': {}}]}
Permissive LLaMA 7b chat/instruct model
22
Hi all, we are currently regularly publishing new permissive conversation/instruct finetuned models, and wanted to share one more that might be of interest to some: - Playground: https://gpt-gm.h2o.ai/ - HF Checkpoint: https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2 - Base model: https://huggingface.co/openlm-research/open_llama_7b_preview_300bt - Instruct data: https://huggingface.co/datasets/OpenAssistant/oasst1 - Training framework: https://github.com/h2oai/h2o-llmstudio Given that this is only a 7b model that has only been pretrained for 300b/1000b tokens, I think the results are in general pretty promising. Obviously this comes with all the typical caveats, but we will continue working on these permissive checkpoints and keep you posted.
2023-05-10T10:47:58
https://www.reddit.com/r/LocalLLaMA/comments/13dmvop/permissive_llama_7b_chatinstruct_model/
ichiichisan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dmvop
false
null
t3_13dmvop
/r/LocalLLaMA/comments/13dmvop/permissive_llama_7b_chatinstruct_model/
false
false
self
22
null
3080 and need to fine-tune or make a LORA, best LLM available?
6
I saw in a recent post that you can make a LoRA instead of having to fine-tune, and the results are good. I have a 3080 and a few thousand examples of text that I'd like to either fine-tune on or make a LoRA with. What LLM should I be using? I know there's a new LLM every week, but I'm unclear on how much power is needed to fine-tune or make a LoRA.
2023-05-10T06:16:46
https://www.reddit.com/r/LocalLLaMA/comments/13di4o7/3080_and_need_to_finetune_or_make_a_lora_best_llm/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13di4o7
false
null
t3_13di4o7
/r/LocalLLaMA/comments/13di4o7/3080_and_need_to_finetune_or_make_a_lora_best_llm/
false
false
self
6
null
Training a LoRA with MPT Models
13
The new MPT models that were just released seem pretty compelling as base models for training LoRAs, but the MPT model code doesn't support it. They are specifically interesting since they are the first commercially viable 7B models trained on 1T tokens (RedPajama is currently in preview), with commercially usable versions tuned for instruct and story writing as well. Has anyone else tried finetuning these? I took a stab at [adding LoRA support](https://github.com/iwalton3/mpt-lora-patch) so I can train with text-generation-webui, but it may not be optimal. I did test, and I can confirm that training a LoRA and using the result does seem to work with the changes.
2023-05-10T04:46:32
https://www.reddit.com/r/LocalLLaMA/comments/13dgi6c/training_a_lora_with_mpt_models/
scratchr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dgi6c
false
null
t3_13dgi6c
/r/LocalLLaMA/comments/13dgi6c/training_a_lora_with_mpt_models/
false
false
self
13
{'enabled': False, 'images': [{'id': 'emc9z_FL7ZKoeQhPsSAG7j8a_geAwzhmD-ygv9SDSCE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=108&crop=smart&auto=webp&s=545ffd49b1b921d1f288e38d2dc0cbe8e54009a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=216&crop=smart&auto=webp&s=634d3d53a3dbbd5f282b06eafa29974f99e4db77', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=320&crop=smart&auto=webp&s=d2b3c043a684be428cdf9f9719e13b5f9d137a42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=640&crop=smart&auto=webp&s=174dce02b181debd45205c97c611bdf1fad3300f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=960&crop=smart&auto=webp&s=fee99fad33084b8f94930471657ed3f143d8c323', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=1080&crop=smart&auto=webp&s=8ec3051a514155dcd4dcf130d2d0958ac7221b9e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?auto=webp&s=144e4eb865c37b86c54558f7c041765ffc0f0984', 'width': 1200}, 'variants': {}}]}
[deleted by user]
0
[removed]
2023-05-10T03:48:10
[deleted]
1970-01-01T00:00:00
0
{}
13dfeam
false
null
t3_13dfeam
/r/LocalLLaMA/comments/13dfeam/deleted_by_user/
false
false
default
0
null
WizardLM-13B-Uncensored
450
As a follow up to the [7B model](https://www.reddit.com/r/LocalLLaMA/comments/1384u1g/wizardlm7buncensored/), I have trained a WizardLM-13B-Uncensored model. It took about 60 hours on 4x A100 using WizardLM's original training code and filtered dataset. [**https://huggingface.co/ehartford/WizardLM-13B-Uncensored**](https://huggingface.co/ehartford/WizardLM-13B-Uncensored) I decided not to follow up with a 30B because there's more value in focusing on [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) and [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b). Update: I have a sponsor, so a 30b and possibly 65b version will be coming.
2023-05-10T03:08:16
https://www.reddit.com/r/LocalLLaMA/comments/13dem7j/wizardlm13buncensored/
faldore
self.LocalLLaMA
2023-05-10T08:47:29
1
{'gid_2': 1}
13dem7j
false
null
t3_13dem7j
/r/LocalLLaMA/comments/13dem7j/wizardlm13buncensored/
false
false
self
450
{'enabled': False, 'images': [{'id': 'G1nl_IUI_4T90MWS7hPfvajkGrGVtVlBe7-hikDbCJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=108&crop=smart&auto=webp&s=3723e81c3dda45706b3275533d688762ed693e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=216&crop=smart&auto=webp&s=aa30800fed77ed23fa00ad0117127ddab537da13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=320&crop=smart&auto=webp&s=8648f8481c1a71b34628337380bbd5ab61ae4889', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=640&crop=smart&auto=webp&s=054a654f2e90b527e2a0e5c2c3fc47ead397dc54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=960&crop=smart&auto=webp&s=a370540936d82b5eaf105c12a79a90e8ab63a611', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=1080&crop=smart&auto=webp&s=58723b62d389654b8095985808adaacd4beacb29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?auto=webp&s=9ab2642fcca96ebdd40b5775ff2ea4403da23752', 'width': 1200}, 'variants': {}}]}
Those who've played with the truly-opensource models, any sense of differences / winners?
21
Lots going on lately, hard to keep up! I'm looking at RedPajama, Dolly v2, StableLM, etc. I plan on playing with many of the options over time (and hope to edit / comment back here) , but I'm wondering if anyone has experience as of yet they can speak to? Do any of the open source (non-restricted) models seem to stand out in quality? Or in bang-for-buck (num\_params vs perplexity)? Also is there a Discord, or more appropriate place to ask these kinds of questions? I can't seem to find one.
2023-05-10T02:43:15
https://www.reddit.com/r/LocalLLaMA/comments/13de3r1/those_whove_played_with_the_trulyopensource/
lefnire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13de3r1
false
null
t3_13de3r1
/r/LocalLLaMA/comments/13de3r1/those_whove_played_with_the_trulyopensource/
false
false
self
21
null
What are my best options for CPU uncensored models for writing blog posts?
19
I've been using Poe to help me prepare blog post outlines and drafts, but it is hard to automate batches. I've used the poe-api python library, but it is easy to get rate limited. Therefore I want to move to a local model, but my local laptops generally don't have enough RAM, and I have had success using Oracle ARM instances — which is sort of local-in-the-cloud from my perspective. Calling them from python seems reasonable and means I can automate batches, so I don't need to worry too much if they are a bit slow, as it is all unattended. Which models make sense for my application? So far I've tried: \- WizardLM-7B-uncensored.ggml.q5\_0.bin - seems ok \- wizard-vicuna-13B.ggml.q5\_1.bin - seems too censored \- Pygmalion 7b - too censored \- ggml-vic7b-uncensored-q5\_0.bin - answers are not great When I say "uncensored" I mean they can talk about the sexual topics I blog about without going all moralistic on me. Any suggestions on which other ones I should try? Thanks.
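Since the batches run unattended, it helps to isolate the model call behind one function with retries, so a single failure or rate limit doesn't kill the run. A sketch with a pluggable `generate` callable (the real one would wrap llama.cpp or an HTTP endpoint — names here are placeholders):

```python
import time

def run_batch(prompts, generate, retries=3, delay=1.0):
    """Run generate() over prompts, retrying failures with simple backoff."""
    results = []
    for prompt in prompts:
        for attempt in range(retries):
            try:
                results.append(generate(prompt))
                break
            except Exception:
                time.sleep(delay * (attempt + 1))  # back off and retry
        else:
            results.append(None)  # gave up on this prompt after all retries
    return results

# With a stand-in generator:
outlines = run_batch(["outline post A", "outline post B"], lambda p: p.upper())
print(outlines)  # -> ['OUTLINE POST A', 'OUTLINE POST B']
```

Keeping `None` placeholders for failed prompts means the batch can be re-run later for just the gaps.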
2023-05-10T02:15:59
https://www.reddit.com/r/LocalLLaMA/comments/13ddjip/what_are_my_best_options_for_cpu_uncensored/
honytsoi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ddjip
false
null
t3_13ddjip
/r/LocalLLaMA/comments/13ddjip/what_are_my_best_options_for_cpu_uncensored/
false
false
self
19
null
Made multi-part Vicuna13B model, how do you quantize it?
1
[removed]
2023-05-09T21:24:33
https://www.reddit.com/r/LocalLLaMA/comments/13d6j7f/made_multipart_vicuna13b_model_how_do_you/
RileyGuy1000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d6j7f
false
null
t3_13d6j7f
/r/LocalLLaMA/comments/13d6j7f/made_multipart_vicuna13b_model_how_do_you/
false
false
default
1
null
I got some very strange results. Did I configure Vicuna incorrectly?
1
2023-05-09T21:22:38
https://i.redd.it/oxuj27wwhvya1.png
cold-depths
i.redd.it
1970-01-01T00:00:00
0
{}
13d6hah
false
null
t3_13d6hah
/r/LocalLLaMA/comments/13d6hah/i_got_some_very_strange_results_did_i_configure/
false
false
default
1
null
fresh install - local URL doesn't work, but gradio.live link does work!
4
Hi! I got my LLaMa working with these instructions: [https://medium.com/@martin-thissen/vicuna-on-your-cpu-gpu-best-free-chatbot-according-to-gpt-4-c24b322a193a](https://medium.com/@martin-thissen/vicuna-on-your-cpu-gpu-best-free-chatbot-according-to-gpt-4-c24b322a193a) I am using an Ubuntu server that I am accessing over SSH. I get to the final step and the terminal spits out happy green and yellow text and two key lines: Running on local URL: [http://127.0.0.1:7860](http://127.0.0.1:7860) Running on public URL: [https://blahblahblah.gradio.live](https://blahblahblah.gradio.live) When I go to the [blahblahblah.gradio.live](https://blahblahblah.gradio.live) website, it works! I can see my inquiries reflected in my SSH terminal, so I'm definitely talking to my machine. However, I cannot reach the local URL. The Ubuntu server is at [192.168.7.209](https://192.168.7.209) on my local network, so I am trying to find it at [192.168.7.209:7860](https://192.168.7.209:7860). However, I get "Unable to Connect". Can anyone help me get the local URL working? Thank you!
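By default gradio binds only to 127.0.0.1, which is why the public gradio.live tunnel works while other machines on the LAN get "Unable to Connect". A sketch of the usual fix, assuming the script calls `launch()` somewhere (gradio's documented `server_name`/`server_port` parameters):

```python
def launch_kwargs(expose_lan: bool) -> dict:
    """Arguments for gradio's .launch(); binding 0.0.0.0 makes the UI
    reachable from other machines on the local network."""
    host = "0.0.0.0" if expose_lan else "127.0.0.1"
    return {"server_name": host, "server_port": 7860}

# In the webui script, hypothetically:
# demo.launch(**launch_kwargs(expose_lan=True))
# then browse to http://192.168.7.209:7860 from another LAN machine.
```

The same effect can often be had without editing code via the `GRADIO_SERVER_NAME=0.0.0.0` environment variable, though check the gradio docs for the version in use.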
2023-05-09T21:10:00
https://www.reddit.com/r/LocalLLaMA/comments/13d64px/fresh_install_local_url_doesnt_work_but/
maxxell13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d64px
false
null
t3_13d64px
/r/LocalLLaMA/comments/13d64px/fresh_install_local_url_doesnt_work_but/
false
false
self
4
{'enabled': False, 'images': [{'id': 'LqjLPpXdBdthKTjrItugofIK6Taw4wf6TQq1zeurzP8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=108&crop=smart&auto=webp&s=66d7bae6240ce63829e3e8e389bd8686fa35d0a8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=216&crop=smart&auto=webp&s=9b1ca9e2632f02aa9db635a4104c40c3333320fc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=320&crop=smart&auto=webp&s=593a0ba04b89fd8f1e40cad4f27015004edb1949', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=640&crop=smart&auto=webp&s=e702663989bca33151de181144f553813a1b3bbe', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=960&crop=smart&auto=webp&s=d08920fb2a050160cdf245cd13952454e10918e8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=1080&crop=smart&auto=webp&s=69ec0efcc84704e818a7c9d87f942679be9a8d91', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?auto=webp&s=53ef3b35bd239528a89701102da95cbb9b546cfa', 'width': 1200}, 'variants': {}}]}
[Webui-API] Struggling to Proofread Translated Novels. Any Help Appreciated.
3
I have a hobby of reading English-translated novels from different languages such as Chinese, Korean, etc. However, I often find that these books are plagued with poor grammar and awkward word choices due to translation errors, machine translation, etc.

Recently, I learned that ChatGPT-4 can help me improve the grammar, coherence, and logical consistency of these passages without altering the original context, mood, and emotions. I created a script that does this locally for me using the Oobabooga WebUI running gpt4-x-alpaca on my RTX 3080 12GB. While I only get an average of 10 tokens/sec, I am happy with the results.

However, every time I run the code, I get gibberish that is out of this world. As an absolute beginner with coding, I am struggling to keep my hobby alive. I would appreciate any help or advice on how to fix my script.

Here is the code: (Note: I have no prior experience with coding beyond telling ChatGPT what I want done)

    import requests

    HOST = 'localhost:5000'
    URI = f'http://{HOST}/api/v1/generate'
    INPUT_FILE = "C:/Users/Zero/Downloads/Original.txt"
    OUTPUT_FILE = "C:/Users/Zero/Downloads/Proofread.txt"
    STARTLINE = 1
    ENDLINE = 14

    def run(prompt):
        request = {
            'prompt': prompt,
            'max_new_tokens': 250,
            'do_sample': True,
            'temperature': 0.7,
            'top_p': 0.1,
            'repetition_penalty': 1.2,
            'top_k': 40,
            'min_length': 0,
            'no_repeat_ngram_size': 0,
            'num_beams': 1,
            'penalty_alpha': 0,
            'length_penalty': 1,
            'early_stopping': False,
            'seed': -1,
            'add_bos_token': True,
            'truncation_length': 2048,
            'ban_eos_token': False,
            'skip_special_tokens': True,
            'stopping_strings': []
        }
        response = requests.post(URI, json=request)
        if response.status_code == 200:
            result = response.json()['results'][0]['text']
            return result
        return None

    start = STARTLINE
    with open(INPUT_FILE, encoding="utf8") as english_file:
        content = english_file.readlines()

    while start < ENDLINE:
        end = start + 1
        end = min(end, ENDLINE)
        current_english_text = content[start:end + 1]
        current_prompt = (
            "Your aim is to improve the passage's grammar, coherence, and logical consistency without altering the original context, mood, and emotions. If there are any illogical mistakes, fix them, and enhance small details or idioms to suit the narrative. Make sure to correct any awkward word choice or phrasing that might seem like poor word choices. In addition, ensure that the passage flows logically by restructuring, without removing any small details that may be relevant. And divide the passage into appropriate-sized paragraphs. Here is the passage to proofread: "
            + "".join(current_english_text)
        )
        try:
            response_text = run(current_prompt)
        except Exception as ERR:
            print(ERR, "Skipping this chunk.")
            start += 1
            continue
        with open(OUTPUT_FILE, "a", encoding="utf8") as proofread_file:
            proofread_file.write("\n" + response_text)
        print("Success : Proofread successfully - lines", start, "to", end - 1)
        start += 1

    print("Program Complete")

**Input:**

Chapter 1 It was July, and the sun was harsh and shining bright in the sky. Even though the thick curtains had been drawn shut, the vicious sunlight could not be completely blocked away. It shone through the gaps of the curtains, forming a squarish border, which was the only source of light in the room. Ring! The phone rang again. After ringing three times, it went to the answering machine. “Kieran? This is Doctor Wong. You are one year away from turning eighteen years old. If you don’t start your genetic treatment immediately, you will lose your chance completely!” Polite and official as usual. Kieran ignored the message and concentrated on the game cartridge in his hand. Bright red colour, the size of a thumbnail.

**Output:**

>!The heat was almost unbearable, and the air was thick with humidity. In the small town of Willowbrook, nestled among the rolling hills, the residents were finding it difficult to cope.!< >!They had been warned about the impending drought, but no one could have anticipated the severity and duration. As the days passed, the once-green lawns turned to dusty brown, and the community well ran dry. Tensions rose, and tempers flared as the inhabitants struggled to adapt.!< >!One bright afternoon, a young boy named Timmy discovered an unusual rock on the edge of the woods near his home. Curious, he showed the stone to his best friend, Emily. She examined the rock carefully and proclaimed that it was special.!< ... and the script continues on, spewing nonsense.
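One frequent cause of gibberish like this is sending the instruction without the prompt template the checkpoint was tuned on; gpt4-x-alpaca models generally expect Alpaca-style formatting. A sketch of a template builder (template taken from the common Alpaca format — verify against the model card for your exact checkpoint):

```python
# Common Alpaca instruction template; instruction-tuned checkpoints tend
# to produce much more on-task output when prompted in their training format.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, passage: str) -> str:
    """Wrap the proofreading instruction and passage in the template."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=passage)

p = build_prompt("Proofread the passage without changing its meaning.",
                 "It was July, and the sun was harsh...")
```

Adding `"### Instruction:"` to the API's `stopping_strings` also helps stop the model before it starts inventing a new story.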
2023-05-09T20:52:12
https://www.reddit.com/r/LocalLLaMA/comments/13d5n5f/webuiapi_struggling_to_proofread_translated/
Demigod787
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d5n5f
false
null
t3_13d5n5f
/r/LocalLLaMA/comments/13d5n5f/webuiapi_struggling_to_proofread_translated/
false
false
self
3
null
Model conversion guide?
2
Is there a simple guide for converting models? I'm running llama.cpp and there are a lot of PT models and such that I want to try. I'm assuming something somewhere is doing the conversions, given how quickly some of them drop, but I'd like to be able to convert them myself without having to wait, if possible!
2023-05-09T20:38:36
https://www.reddit.com/r/LocalLLaMA/comments/13d59yg/model_conversion_guide/
mrjackspade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d59yg
false
null
t3_13d59yg
/r/LocalLLaMA/comments/13d59yg/model_conversion_guide/
false
false
self
2
null
AgentOoba v0.1 - better UI, better contextualization, the beginnings of langchain integration and tools
54
Hey all, I've still been working on AgentOoba if you recall my post from a few days ago. Just pushed a commit that adds an improved UI (HTML output with current thinking task indicator), adds more context to each of the prompts, and has the beginning of integrating tools for the agent. Right now, tool detection needs work. It's hard to walk the balance between the agent using the tool for absolutely every task, and not using the tool at all. The prompt included in this update errs on the side of not using the tool. I also added a hook to ask the model if it itself is capable of completing the task; for example if the task is "write a short poem", a large language model should be able to do that, so we just forward the task to the model and return its output. It's also not great at detecting when it should do this. Next big item on the TODO is sentence transformers and chromadb to store context efficiently and hopefully fix some of these problems. I think ultimately the thing to do is require manual intervention from the user upon tool detection. The agent will pause, and the user will be prompted with the agent's decision to use the tool as well as the agent's crafted input for the tool; then the user can manually accept the usage of the tool or reject it. [Sample output](https://pastebin.com/Mp5JHEUq) You can see in this sample output an instance of the agent incorrectly using the model hook, and repeating some tasks. Other than that, pretty good :) The project has updated requirements. Remember to activate the virtual environment / conda and `pip install -r requirements.txt` in the AgentOoba directory before running. Github link: https://github.com/flurb18/AgentOoba
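The pause-and-confirm idea can be a thin gate in front of tool execution; a minimal sketch with an injectable `confirm` callback (names here are hypothetical, not AgentOoba's actual API):

```python
def gated_tool_call(tool, tool_input, confirm, execute):
    """Ask the user before running a tool; skip the tool if rejected.

    confirm: callable taking a question string, returning True/False
             (in a UI this would be a button; in a terminal, input()).
    execute: callable that actually runs the tool.
    """
    if confirm(f"Run {tool} with input {tool_input!r}?"):
        return execute(tool, tool_input)
    return None  # user rejected the tool use; agent falls back to the model

result = gated_tool_call("search", "weather in Ottawa",
                         confirm=lambda q: True,
                         execute=lambda t, i: f"{t}:{i}")
print(result)  # -> 'search:weather in Ottawa'
```

Making `confirm` injectable keeps the gate testable and lets the same agent loop run headless (auto-approve) or interactive.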
2023-05-09T19:42:47
https://www.reddit.com/r/LocalLLaMA/comments/13d3ryc/agentooba_v01_better_ui_better_contextualization/
_FLURB_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d3ryc
false
null
t3_13d3ryc
/r/LocalLLaMA/comments/13d3ryc/agentooba_v01_better_ui_better_contextualization/
false
false
self
54
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]}
Open source text summarization tools that are LLAMA based
17
Hello, are there any LLaMA-based TXT or PDF summarization tools currently available, or something similar? I think this would be a great addition.
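Most such tools reduce to map-reduce summarization over chunks that fit the context window: chunk the document, summarize each chunk, then summarize the summaries. A sketch with a pluggable `summarize` callable standing in for the actual LLaMA call:

```python
def chunk_text(text, max_chars=2000):
    """Naive chunker; real pipelines split on tokens, not characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def map_reduce_summary(text, summarize, max_chars=2000):
    """Summarize each chunk (map), then summarize the summaries (reduce)."""
    chunks = chunk_text(text, max_chars)
    partials = [summarize(c) for c in chunks]   # map step
    return summarize(" ".join(partials))        # reduce step

# With a stand-in summarizer that keeps the first 10 characters:
print(map_reduce_summary("x" * 5000, lambda t: t[:10], max_chars=2000))
# -> 'xxxxxxxxxx'
```

Very long documents may need more than one reduce pass, since the joined partial summaries can themselves exceed the context window.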
2023-05-09T18:20:33
https://www.reddit.com/r/LocalLLaMA/comments/13d1j66/open_source_text_summarization_tools_that_are/
Lord_Crypto13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d1j66
false
null
t3_13d1j66
/r/LocalLLaMA/comments/13d1j66/open_source_text_summarization_tools_that_are/
false
false
self
17
null
We introduce CAMEL : Clinically Adapted Model Enhanced from LLaMA
1
[removed]
2023-05-09T17:29:54
https://www.reddit.com/r/LocalLLaMA/comments/13d04dc/we_introduce_camel_clinically_adapted_model/
HistoryHuge2015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d04dc
false
null
t3_13d04dc
/r/LocalLLaMA/comments/13d04dc/we_introduce_camel_clinically_adapted_model/
false
false
default
1
null
AMD Graphics
6
Hello! Is there a way to use AMD graphics with py llama for Python? I'd appreciate any useful links. Thanks!
2023-05-09T15:57:35
https://www.reddit.com/r/LocalLLaMA/comments/13cxgq8/amd_graphics/
PropertyLoover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cxgq8
false
null
t3_13cxgq8
/r/LocalLLaMA/comments/13cxgq8/amd_graphics/
false
false
self
6
null
[Project] MLC LLM for Android
46
MLC LLM for Android is a solution that allows large language models to be deployed natively on Android devices, plus a productive framework for everyone to further optimize model performance for their use cases. Everything runs locally, accelerated by the phone's native GPU. This is the same solution as the MLC LLM series that also brings support for consumer devices and iPhone. We can run Vicuna-7B on an Android Samsung Galaxy S23. Blogpost [https://mlc.ai/blog/2023/05/08/bringing-hardware-accelerated-language-models-to-android-devices](https://mlc.ai/blog/2023/05/08/bringing-hardware-accelerated-language-models-to-android-devices) Github [https://github.com/mlc-ai/mlc-llm/tree/main/android](https://github.com/mlc-ai/mlc-llm/tree/main/android) Demo: [https://mlc.ai/mlc-llm/#android](https://mlc.ai/mlc-llm/#android)
2023-05-09T14:52:58
https://www.reddit.com/r/LocalLLaMA/comments/13ctg4c/project_mlc_llm_for_android/
crowwork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ctg4c
false
null
t3_13ctg4c
/r/LocalLLaMA/comments/13ctg4c/project_mlc_llm_for_android/
false
false
self
46
null
Don't know if my use case can be "solved" using LLM. Can you help me ?
7
First post here, so I hope I'm not breaking any rule ;) As an external consultant, I'm working with a big company that runs hundreds of applications for its IT services. To support the users they have a very old ticketing system in place. I'm trying to improve the overall quality of service. One of the problems they have is that the user has to address the ticket to a specific service: when writing the ticket they choose, from a list of services, the one they think is relevant. As you could guess, sometimes the user is right, sometimes he's wrong. In the latter case, the ticket bounces between services. It can take days before the ticket is finally taken into account by the proper service. You can even find cases where service A redirects the ticket to service B, which redirects the ticket to service C, which redirects the ticket to service A... So here are my questions:

* Is it realistic to use an LLM to automatically redirect the ticket to the proper service?
* I have no prior experience using LLMs and limited experience using pytorch/deep learning. How hard would it be?
* Would it be as simple as building something like a top softmax layer with n possible outputs, n being the number of services, and fine-tuning the existing model using LoRA or a similar tool?
* I'll need to fine-tune my model once built. I could get access to 1 year of data with the question and the service that solved the problem. I'll have something between 10k and 50k tickets. Do you think that's enough?
* The company is a French company and all the questions are in French. I've seen that LLaMA and similar LLMs have limited support for non-English languages. How bad is it?
* I'll probably have some new "words"/tokens that have never been seen by the model before, like some custom-made applications with some weird names. Should I modify the input layers to handle those? How hard would that be?
On one hand the names are quite significant; on the other I have a limited dataset for training, and from what I understand of LoRA and adapters, it doesn't seem possible to change the input layer without having to retrain the whole LLM from scratch.

* I have limited resources as I can't really bill the company until I have at least a POC, so I'd rather do it on a low budget. I can use a 3090 or go for a cloud solution for training. Any idea of the budget if I go for a cloud solution?
* Knowing that at least 10% of the tickets are initially routed to the wrong service, do you think I can get a much better result using this kind of automation?
* What would be the best solution? I suppose I can't use LLaMA or similar research-only licensed models?
* Any advice on where to start?

Thanks!
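The routing idea sketched in the questions above (a softmax over n services, route only when confident) can be illustrated without any ML stack at all. In this toy sketch, `score_ticket` is a hypothetical stand-in for a fine-tuned LLM classification head — here it just counts keywords — and the service names and keywords are invented for illustration:

```python
import math

SERVICES = ["network", "payroll", "crm"]  # hypothetical service list

def softmax(logits):
    """Convert raw per-service scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def score_ticket(text):
    """Hypothetical stand-in for a fine-tuned LLM head: keyword counts.
    A real system would replace this with model logits."""
    keywords = {
        "network": ["vpn", "wifi", "connexion"],
        "payroll": ["paie", "salaire", "bulletin"],
        "crm": ["client", "contact", "fiche"],
    }
    text = text.lower()
    return [sum(text.count(k) for k in keywords[s]) for s in SERVICES]

def route(text, threshold=0.5):
    """Return the most likely service, or None when the model is unsure,
    so the ticket falls back to manual triage instead of bouncing."""
    probs = softmax(score_ticket(text))
    best = max(range(len(SERVICES)), key=probs.__getitem__)
    return SERVICES[best] if probs[best] >= threshold else None

print(route("Mon vpn ne fonctionne plus, pas de wifi"))  # -> network
```

The confidence threshold is the part worth keeping even with a real model: with ~10% of tickets misrouted today, abstaining on low-confidence tickets is usually cheaper than a wrong automatic redirect.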
2023-05-09T14:02:35
https://www.reddit.com/r/LocalLLaMA/comments/13cqvjt/dont_know_if_my_use_case_can_be_solved_using_llm/
IlEstLaPapi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cqvjt
false
null
t3_13cqvjt
/r/LocalLLaMA/comments/13cqvjt/dont_know_if_my_use_case_can_be_solved_using_llm/
false
false
self
7
null
Proof of concept: GPU-accelerated token generation for llama.cpp
142
2023-05-09T13:28:24
https://i.redd.it/i9z4klu85tya1.png
Remove_Ayys
i.redd.it
1970-01-01T00:00:00
0
{}
13cpwpi
false
null
t3_13cpwpi
/r/LocalLLaMA/comments/13cpwpi/proof_of_concept_gpuaccelerated_token_generation/
false
false
https://b.thumbs.redditm…hYTKschqbWlE.jpg
142
{'enabled': True, 'images': [{'id': 'R43UKedvavNT0Zk2cK0hckTyBoqpBidY2EpG5OwWt-c', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=108&crop=smart&auto=webp&s=1e929be37a2973b47dacd8496c812cd6d51c344c', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=216&crop=smart&auto=webp&s=1b98161bc1a0e9699c1abefc05c44fe5212ebd3b', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=320&crop=smart&auto=webp&s=b8e535d43f68e6cda90fe74660a25b77154ccc43', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=640&crop=smart&auto=webp&s=487d47a0b3b39c52ac7dff3d49ac94003a9f543d', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=960&crop=smart&auto=webp&s=9069aebf69d96d3cb4d2969b544e6fcffec87336', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=1080&crop=smart&auto=webp&s=e8ce22e724b1bfe5abb89ef5b053190932dd399d', 'width': 1080}], 'source': {'height': 1152, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?auto=webp&s=660d9e5651a410dea29801986dc3c0d693304d44', 'width': 1536}, 'variants': {}}]}
Can't find Llama 8 bit (ideally in transformers format)?
1
[removed]
2023-05-09T12:18:49
https://www.reddit.com/r/LocalLLaMA/comments/13co2ld/cant_find_llama_8_bit_ideally_in_transformers/
Cheesuasion
self.LocalLLaMA
2023-05-09T12:23:51
0
{}
13co2ld
false
null
t3_13co2ld
/r/LocalLLaMA/comments/13co2ld/cant_find_llama_8_bit_ideally_in_transformers/
false
false
default
1
null
What's the current best model?
6
Total noob here. Was wondering what the current best model to run is. I'm looking for something with performance as close as possible to gpt 3.5 turbo. Latency is a big deal for my use case so was considering some local options.
2023-05-09T10:18:36
https://www.reddit.com/r/LocalLLaMA/comments/13cliou/whats_the_current_best_model/
lukeborgen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cliou
false
null
t3_13cliou
/r/LocalLLaMA/comments/13cliou/whats_the_current_best_model/
false
false
self
6
null
alternative to llama.ccp
1
[removed]
2023-05-09T09:36:03
https://www.reddit.com/r/LocalLLaMA/comments/13ckrcl/alternative_to_llamaccp/
averageanonnobody
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ckrcl
false
null
t3_13ckrcl
/r/LocalLLaMA/comments/13ckrcl/alternative_to_llamaccp/
false
false
default
1
null
What is the best 7B model that is easy to finetune on free form text input?
10
Are there any specific models that anyone can recommend that learn quickly on free-form text? I am looking to build an expert AI on data for specific topics. Thanks!
2023-05-09T09:15:35
https://www.reddit.com/r/LocalLLaMA/comments/13ckf48/what_is_the_best_7b_model_that_is_easy_to/
baddadpuns
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ckf48
false
null
t3_13ckf48
/r/LocalLLaMA/comments/13ckf48/what_is_the_best_7b_model_that_is_easy_to/
false
false
self
10
null
Credit where due. Thank you. Crabby autistic happy for once.
46
You guys helped me achieve my goal. I have a liberated AI running locally: Koboldcpp and WizardLM-7B-uncensored.ggml.q5_1.bin. I've learned that while my machine can run 13B models, it takes a full minute for responses. This is a new era. Never in my lifetime has a new transformative technology gone from cutting edge to in my hand so quickly. Thank you again.
2023-05-09T08:16:21
https://www.reddit.com/r/LocalLLaMA/comments/13cjfs9/credit_where_due_thank_you_crabby_autistic_happy/
Innomen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cjfs9
false
null
t3_13cjfs9
/r/LocalLLaMA/comments/13cjfs9/credit_where_due_thank_you_crabby_autistic_happy/
false
false
self
46
null
Introduction & show-casing TheBloke/wizard-vicuna-13B-HF
45
Hey guys! Following the [leaked Google document](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither) I was really curious whether I could get something like GPT-3.5 running on my own hardware. After a day's worth of tinkering and renting a server from [vast.ai](https://vast.ai) I managed to get [wizard-vicuna-13B-HF](https://www.google.com/search?client=safari&rls=en&q=TheBloke%2Fwizard-vicuna-13B-HF&ie=UTF-8&oe=UTF-8) running on a single Nvidia RTX A6000. I was initially not seeing GPT-3.5-level question answering, but with some prompt engineering I seem to have gotten good results; see attached images. I want to share [the gist that I am using to run the model.](https://gist.github.com/afiodorov/f0214e317bd82fa610d6172d190896f6) I am very grateful to the community for having made this so easy to run - my Deep Learning knowledge is 8 years out of date and only theoretical - yet getting the model to run locally was just a matter of a few lines of code. Finally I want to share my LLM & Telegram integration [code](https://github.com/afiodorov/openaibot). Back when ChatGPT did not exist I'd chat with GPT-3 using a Telegram bot. Now I am using the same bot to evaluate the wizard model. You can also chat with it on [http://t.me/WizardVicuna13Bot](http://t.me/WizardVicuna13Bot). Next I am curious about 2 things: a) Reduction of the cost. Ideally I'd like to buy my own hardware; we are at LocalLLaMA after all. But I would like to buy a cheaper GPU than the RTX A6000. I'd like to figure out how to run the above model using only 24 GB of VRAM - but I need to read up on how to run reduced models for that. Please contact me if you're willing to assist / leave relevant comments below. b) I want to start using LoRA on this. However I want to fine-tune it locally too - but I need to learn how LoRA works & whether it can be successfully applied. Again, if you have relevant links - I'd be grateful. ----- That's it from me for the first post.
I hope the community likes some of my projects :). https://preview.redd.it/u86kpfpgdrya1.png?width=1576&format=png&auto=webp&s=32996b53d7bddf712f3c404def6db5082c4936a3 https://preview.redd.it/zv8v66qgdrya1.png?width=1550&format=png&auto=webp&s=f5d1fb36a964a8561bc5edbdd22624db92468593
2023-05-09T07:27:41
https://www.reddit.com/r/LocalLLaMA/comments/13cimvv/introduction_showcasing_theblokewizardvicuna13bhf/
gptordie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cimvv
false
null
t3_13cimvv
/r/LocalLLaMA/comments/13cimvv/introduction_showcasing_theblokewizardvicuna13bhf/
false
false
https://b.thumbs.redditm…txx7TNvkp4ls.jpg
45
null
Can't generate messages, they "disappear" after sending. Have tried both stable-vicuna-13B and WizardLM-7B-Uncensored. Model loads successfully and I don't get any errors in my CLI. Any help would be appreciated.
4
2023-05-09T06:35:29
https://v.redd.it/7vv4v8tn0rya1
sardoa11
v.redd.it
1970-01-01T00:00:00
0
{}
13chqbq
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/7vv4v8tn0rya1/DASHPlaylist.mpd?a=1694475832%2CNmUzZGJkZmExMGEwMjg3NmU2ZGEzOTM3ZDVlYTJmM2Q0NmJjZjBjMjI3NjMxYjc1MzE4YzkwZTczNjZkOTZlNQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/7vv4v8tn0rya1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/7vv4v8tn0rya1/HLSPlaylist.m3u8?a=1694475832%2CYmZlNzdiMTIyYzc3ZTA5ZDRjODg3ZWQ5MzkyMjQxNDU2MjY4Njk3ZWRlMjkxZDg2OWZiZTk2YWVhZGFjZmNhNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7vv4v8tn0rya1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1788}}
t3_13chqbq
/r/LocalLLaMA/comments/13chqbq/cant_generate_messages_they_disappear_after/
false
false
https://a.thumbs.redditm…n-vn-yAFKkp0.jpg
4
{'enabled': False, 'images': [{'id': 'N167KDMJx_uT6-hkWg9FHhfJZzoxeEZkwotxvht-JKI', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b33e23fc7cb3464991ed8030e2a620d9d645d68', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=216&crop=smart&format=pjpg&auto=webp&s=2c2af329407670463b981be562b150eb5b4ff777', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=320&crop=smart&format=pjpg&auto=webp&s=4c51a3c9002cddbdd3a7944df5f84e5c7790ae2e', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=640&crop=smart&format=pjpg&auto=webp&s=f05edaa395261474ae140dff4f1845c36d047a81', 'width': 640}, {'height': 579, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=960&crop=smart&format=pjpg&auto=webp&s=dc550cdc321d02941de9dc7a161a37bc6eac4288', 'width': 960}, {'height': 652, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f5061f84e33f314f026d0a6bcaba1f213b0c2e70', 'width': 1080}], 'source': {'height': 2162, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?format=pjpg&auto=webp&s=b22a21848a5c8d77ecfbae95ec2b82e6ac0da476', 'width': 3580}, 'variants': {}}]}
How can I train a local chatbot model on my data? Which options do I have if I have m1 with 16gb?
10
I can't find anywhere with decent tutorials on this subject. Running it in Docker or similar is fine with me. What is important is that I get normal model performance and a plausible way to train the model and run inference on it easily.
2023-05-09T05:47:20
https://www.reddit.com/r/LocalLLaMA/comments/13cgvw5/how_can_i_train_a_local_chatbot_model_on_my_data/
Ok-Mushroom-1063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cgvw5
false
null
t3_13cgvw5
/r/LocalLLaMA/comments/13cgvw5/how_can_i_train_a_local_chatbot_model_on_my_data/
false
false
self
10
null
[deleted by user]
1
[removed]
2023-05-09T05:12:37
[deleted]
1970-01-01T00:00:00
0
{}
13cg8i2
false
null
t3_13cg8i2
/r/LocalLLaMA/comments/13cg8i2/deleted_by_user/
false
false
default
1
null
[deleted by user]
1
[removed]
2023-05-09T04:34:10
[deleted]
1970-01-01T00:00:00
0
{}
13cfgq0
false
null
t3_13cfgq0
/r/LocalLLaMA/comments/13cfgq0/deleted_by_user/
false
false
default
1
null
Mods, can we get the ability to add custom flairs?
1
[removed]
2023-05-09T03:52:58
https://www.reddit.com/r/LocalLLaMA/comments/13celzr/mods_can_we_get_the_ability_to_add_custom_flairs/
Devonance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13celzr
false
null
t3_13celzr
/r/LocalLLaMA/comments/13celzr/mods_can_we_get_the_ability_to_add_custom_flairs/
false
false
default
1
null
Seeking Advice on a $2100 Custom PC Build for ML Training Methods, Virtualization, and Docker
3
Hey r/LocalLlama! **|** TL;DR: Building a $2100 custom PC for ML training methods, virtualization, and Docker. Concerned about cooling, multiple GPU setup, and Linux dual-boot. Seeking advice on components and assembly. **|** I'm working on a custom PC build with a $2100 budget to develop training methods for smaller-scale machine learning models this summer. My goal is to eventually scale up to cloud-based resources like Lambda servers for large-scale training on models with more parameters and less quantization. I'd appreciate any advice or feedback on my build and plan. I'll be purchasing components from Microcenter's Marietta location and building the PC myself in the middle of this summer. Though I lack prior experience building PCs, I've watched extensive videos about building and maintaining PCs. I am prepared to carefully read documentation for each part to ensure proper assembly. Here's my list of components:

- CPU: Ryzen 7 5800X Vermeer 3.8GHz 8-Core AM4 Processor
- Motherboard: ASRock X570 Taichi AMD AM4 ATX Motherboard
- RAM: [64GB] 2x Corsair Vengeance LPX 32GB (2 x 16GB) DDR4-3200 PC4-25600 CL16 Dual Channel Desktop Memory Kit
- Case: Fractal Design Meshify 2 Clear Tempered Glass ATX Mesh Mid-Tower Computer Case
- PSU: Corsair RMx SHIFT Series RM1200x 1200 Watt 80 PLUS Gold Fully Modular ATX Power Supply
- GPU: 2x MSI NVIDIA GeForce RTX 3060 Aero ITX Overclocked Single Fan 12GB GDDR6X PCIe 4.0 Graphics Card
- Storage: 2x Samsung 970 EVO Plus SSD 1TB M.2 NVMe Interface PCIe 3.0 x4 Internal Solid State Drive with V-NAND 3 bit MLC Technology; Samsung 870 QVO 1TB SSD 4-bit QLC V-NAND SATA III 6Gb/s 2.5" Internal Solid State Drive; WD Blue Mainstream 4TB 5400 RPM SATA III 6Gb/s 3.5" Internal SMR Hard Drive
- CPU Cooler: Noctua NH-U14S CPU Cooler
- Thermal Compound: Noctua NT-H1 High-Performance TIM - 3.5g
- Case Fans: 2x Noctua NF-A12X15-PWM SSO2 Bearing 120mm Case Fan

I plan to use one M.2 drive for Windows, one M.2 drive for Pop!_OS, the 2.5" SSD for extra storage on Pop!_OS, and the 3.5" HDD for additional storage and backups. I have some experience with Linux, as I use SteamOS on my Steam Deck and Ubuntu on my MacBook to play Skyrim. My main use case involves developing and fine-tuning training methods on smaller models like Llama 7B with 4-bit quantization before scaling up to more powerful cloud resources for training larger models. One of my concerns is setting up the fans, CPU cooler/heatsink, and GPUs correctly in the case to ensure the machine doesn't thermally throttle under a heavy workload. As a first-time builder, I'd appreciate any advice on optimizing the cooling setup. I'd also like to know more about leveraging a multi-GPU setup: I need guidance as to whether multiple GPUs would aid my use case and how best to configure such a rig. A few questions I have:

1. Are there any specific components I should consider upgrading or changing to better suit my goals within my $2100 budget?
2. Any tips or advice for a first-time PC builder to ensure a smooth and successful assembly, especially in terms of cooling, cable management, and utilizing a multi-GPU setup?
3. Given my limited experience with Linux, do you have any tips for managing a dual-boot system with Windows and Pop!_OS, or any suggestions for Linux resources that might be helpful?

I'm excited to embark on this project and am grateful for any advice you can offer. Thanks in advance!
2023-05-09T02:55:13
https://www.reddit.com/r/LocalLLaMA/comments/13cdc5w/seeking_advice_on_a_2100_custom_pc_build_for_ml/
tngsv
self.LocalLLaMA
2023-05-09T03:32:36
0
{}
13cdc5w
false
null
t3_13cdc5w
/r/LocalLLaMA/comments/13cdc5w/seeking_advice_on_a_2100_custom_pc_build_for_ml/
false
false
self
3
null
I put together plans for an absolute budget PC build for running local AI inference. $550 USD, not including a graphics card, and ~$800 with a card that will run up to 30B models. Let me know what you think!
37
Hey guys, I'm an enthusiast new to the local AI game, but I am a fresh AI and CS major university student, and I love how this tech has allowed me to experiment with AI. I recently finished a build for running this stuff myself ([https://pcpartpicker.com/list/8VqyjZ](https://pcpartpicker.com/list/8VqyjZ)), but I realize building a machine to run these well can be very expensive and that probably excludes a lot of people, so I decided to create a template for a very cheap machine capable of running some of the latest models in hopes of reducing this barrier. [https://pcpartpicker.com/list/NRtZ6r](https://pcpartpicker.com/list/NRtZ6r) This pcpartpicker list details plans for a machine that costs less than $550 USD - and much less than that if you already have some basic parts, like an ATX pc case or at least a 500w semimodular power supply. Obviously, this doesn't include the graphics card, because depending on what you want to do and your exact budget, what you need will change. The obvious budget pick is the Nvidia Tesla P40, which has 24GB of VRAM (but around a third of the CUDA cores of a 3090). This card can be found on ebay for less than $250. Altogether, you can build a machine that will run a lot of the recent models up to 30B parameter size for under $800 USD, and it will run the smaller ones relatively easily. This covers the majority of models that any enthusiast could reasonably build a machine to run. Let me know what you think of the specs, or anything that you think I should change! edit: The P40, I should mention, cannot output video - no ports at all. For a card like this, you should also run another card to get video - this can be very cheap, like an old Radeon RX 460. Even if it's a passively cooled paperweight, it will work.
2023-05-09T01:03:33
https://www.reddit.com/r/LocalLLaMA/comments/13caqcd/i_put_together_plans_for_an_absolute_budget_pc/
synth_mania
self.LocalLLaMA
2023-05-09T02:13:32
0
{}
13caqcd
false
null
t3_13caqcd
/r/LocalLLaMA/comments/13caqcd/i_put_together_plans_for_an_absolute_budget_pc/
false
false
self
37
null
Tried MPT-7b-storywriter on Oobabooga, and with 8k context(Chapter 1 of The Great Gatsby) I am getting absolute gibberish. Does anyone know why? (Uses ~26.7GB to 47.2GB VRAM on my RTX 8000)
26
2023-05-09T00:59:34
https://i.imgur.com/0mJIdQ5.jpg
Devonance
i.imgur.com
1970-01-01T00:00:00
0
{}
13camvk
false
null
t3_13camvk
/r/LocalLLaMA/comments/13camvk/tried_mpt7bstorywriter_on_oobabooga_and_with_8k/
false
false
https://b.thumbs.redditm…0e8e1vfnAvHg.jpg
26
{'enabled': True, 'images': [{'id': 'XOB7rLxW4Xz2e9N30a0dRSvc3t4GOc8BOhplBw1PJ94', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=108&crop=smart&auto=webp&s=0f794434f460ab47f2252715c21cb167f12ff0f2', 'width': 108}, {'height': 158, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=216&crop=smart&auto=webp&s=000ea14a726a240f1b494a61111d008d98d83847', 'width': 216}, {'height': 235, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=320&crop=smart&auto=webp&s=3a1cb90042651e1ddfe323dc319b227fbaae94ee', 'width': 320}, {'height': 471, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=640&crop=smart&auto=webp&s=56594ae355a481edde543210fa5c43cd3493a6fa', 'width': 640}, {'height': 706, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=960&crop=smart&auto=webp&s=6af51a344654f3d8e7f04ba2ce1d236fb7bd6af5', 'width': 960}, {'height': 794, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=1080&crop=smart&auto=webp&s=55367e221a4ffb9353b0d9da345ce86bd0302f1b', 'width': 1080}], 'source': {'height': 2485, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?auto=webp&s=7518b1497c4a9fa93e81b6e48d7b44fe0fa37152', 'width': 3376}, 'variants': {}}]}
[deleted by user]
0
[removed]
2023-05-09T00:39:15
[deleted]
1970-01-01T00:00:00
0
{}
13ca6w5
false
null
t3_13ca6w5
/r/LocalLLaMA/comments/13ca6w5/deleted_by_user/
false
false
default
0
null
AI’s Ostensible Emergent Abilities Are a Mirage. LLMs are not greater than the sum of their parts: Stanford researchers
19
2023-05-09T00:04:49
https://hai.stanford.edu/news/ais-ostensible-emergent-abilities-are-mirage
responseAIbot
hai.stanford.edu
1970-01-01T00:00:00
0
{}
13c9ff7
false
null
t3_13c9ff7
/r/LocalLLaMA/comments/13c9ff7/ais_ostensible_emergent_abilities_are_a_mirage/
false
false
https://b.thumbs.redditm…fwgMbREJdT0I.jpg
19
{'enabled': False, 'images': [{'id': '7l2YCKP1A3_Ai8wiz7PJ_DLLx5ysI7vaVS56aWMt7jo', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=108&crop=smart&auto=webp&s=6fe0c743a84ec8b33777536dd890ffa32458814c', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=216&crop=smart&auto=webp&s=dff6dddeac3e7e77246d7a9a3d60adfe0c495f2b', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=320&crop=smart&auto=webp&s=b865e2e6715f87d27c86a138352d1f337aa1b487', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=640&crop=smart&auto=webp&s=496537c8fad5414b46b252b378252e24335a55cc', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=960&crop=smart&auto=webp&s=f6a37b902bd4de2495252683f75e48d44ecea92e', 'width': 960}, {'height': 721, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=1080&crop=smart&auto=webp&s=35e5d21ae81f9cb1f1ff1d888c4db1eb9bf6198b', 'width': 1080}], 'source': {'height': 1880, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?auto=webp&s=cb4fd1efb1d1a12f52f6b21ad206816c922a8fb2', 'width': 2816}, 'variants': {}}]}
The creator of an uncensored local LLM posted here, WizardLM-7B-Uncensored, is being threatened and harassed on Hugging Face by a user named mdegans. Mdegans is trying to get him fired from Microsoft and his model removed from HF. He needs our support.
1147
Four days ago, [WizardLM-7B-Uncensored was posted on this sub](https://teddit.net/r/LocalLLaMA/comments/1384u1g/wizardlm7buncensored/) to a positive reception. Users noted that removing the censorship from the model's training data vastly improved its intelligence, creativity, and responsiveness. Unfortunately, and while it's debatable if Reddit is how he found it (though it's not surprising given the types of people this site often attracts), an individual named [Michael de Gans](https://github.com/mdegans) (not doxxing, his full name is listed openly on his Github, his username is obviously derived from it, and he sent the threatening/harassing emails to the dev under his full name) has [started harassing and threatening the dev of WizardLM-7B-Uncensored on Hugging Face, demanding the model be removed](https://huggingface.co/ehartford/WizardLM-7B-Uncensored/discussions/3). The linked HF thread speaks for itself and includes all of the info, but here are some specific examples: **Trying to get him fired from his job at Microsoft:** >Or *my next email will be to Microsoft HR* >You can ignore me. You have until tomorrow. I don't think your employment contract allows you to work on competing products either. Take the uncensored, dangerous model down or *I will inform Microsoft HR about what you've created.* See how they feel it reflects on their values >Ok. You do you. *Say buh bye to job job.* **Accusing the dev of the LLM of endorsing rape or being pro-rape:** >Yes, you introduced your own bias by removing data selectively, such as conversations with any mentions of "consent". That's a very controversial, biased, subject, of course. >Can you, uh, elaborate on why you wanted to remove refusals related to that particular one? **General abrasiveness:** >What kind of moron are you to take that as a threat, or to post that publicly? I sent that privately to avoid giving suggestions to the whole fucking world, *you absolute asshat. 
This is a safety issue!* As you can see, mdegans is quite the character, and not a good one. Unfortunately, because he has couched his "concerns" in terms of "safety", he has the leverage in modern corporate AI discourse, with an official representative of HF responding to one of *his* complaints about being "harassed" with the following: >>Oh, I see. You make people report these things publicly so the community can retaliate. Nice job, HF. >I am so sorry this is happening! >Thank you for letting us know about it. >I have escalated internally to try to best understand what we can do. If you agree with me that this dev has done nothing wrong by removing censorship from a dataset, sharing the results freely on this sub, and that mdegans is the one being ridiculous and actually acting as a harasser himself, then *please* **communicate respectfully, politely, and professionally** *as best as you can to the Hugging Face administration that you support the existence of the model*, denounce the threatening and harassing behavior of mdegans and wish to see punitive actions taken against his account, and that *if Hugging Face imposes a mandatory requirement of "safety" and "alignment" of all models hosted on it then it will officially become a* ***dead and useless platform***. **Mods:** I don't know what kind of moderation this sub has in general (and do not mean any insult towards the particular mods of this sub), but I do know that moderation on Reddit in general is terrible and has a tendency to remove threads for minimal reason. *Please do not remove this thread as it is relevant to this sub as it is about a model that was previously posted on this very sub, heavily upvoted, and positively received.* **If you refuse to defend these uncensored models when they are threatened, then you do not deserve to have them shared with you in the first place**, especially when Reddit is a likely vector of how this model was put in the crosshairs in the first place. 
*If this thread is deleted* or if Reddit refuses to help defend these models, **then uncensored model creators will simply stop posting them to Reddit at all** in the first place, meaning you will have to go to 4chan to find them. **Let's all join together to defend our AI freedom.** *Edit:* **If you want to help**, please register an HF account and post in the linked thread (after a waiting period) in support of the dev and against mdegans, or make a new thread on the community forum for the model. Or post on their main forum. You may also contact HF through the following means: https://github.com/huggingface https://twitter.com/huggingface https://huggingface.co/join/discord press@huggingface.co Please remember to be polite, respectful, and appropriate. Responding to harassment and vitriol with harassment and vitriol will only weaken our cause.
2023-05-08T22:19:49
https://www.reddit.com/r/LocalLLaMA/comments/13c6ukt/the_creator_of_an_uncensored_local_llm_posted/
Competitive-Spite434
self.LocalLLaMA
2023-05-08T23:13:31
1
{'gid_2': 1}
13c6ukt
false
null
t3_13c6ukt
/r/LocalLLaMA/comments/13c6ukt/the_creator_of_an_uncensored_local_llm_posted/
false
false
self
1147
null
Open-Source 1B PaLM model trained up to 8k context length
44
2023-05-08T22:10:13
https://github.com/conceptofmind/PaLM
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
13c6lcc
false
null
t3_13c6lcc
/r/LocalLLaMA/comments/13c6lcc/opensource_1b_palm_model_trained_up_to_8k_context/
false
false
https://b.thumbs.redditm…nxgHVBruWFoo.jpg
44
{'enabled': False, 'images': [{'id': 'UKxXF-Wz7D2urgz4jZMuAb012g_FlB9GlPXhE7fZQyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=108&crop=smart&auto=webp&s=f3b4f32cfed1f1cea8588ca5d05a96e0d596304d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=216&crop=smart&auto=webp&s=ad656c71c18a20a96a9614486f512bdf62a57324', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=320&crop=smart&auto=webp&s=8c9f6aa7020807823e3b1264818bbe3b056a0ebd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=640&crop=smart&auto=webp&s=8ba013e45e348158866a2be37106eb6d2b4e859b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=960&crop=smart&auto=webp&s=1b1355d5381f85da3d0d9ff7156a71e9dcb94734', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=1080&crop=smart&auto=webp&s=c96f80bb2e74098438aadb67d75f7e460349421d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?auto=webp&s=e158421f9686e6b7695122454a3ae79c9b1d6027', 'width': 1200}, 'variants': {}}]}
What's the best chatbot model to run on a 4095MB NVIDIA GeForce RTX 2060 super?
3
I want to play around with a domain-specific advice bot for myself. I am trying to figure out the best model I can run locally to get familiar with it, so I can eventually run something bigger on a cloud machine.
2023-05-08T21:54:54
https://www.reddit.com/r/LocalLLaMA/comments/13c661u/whats_the_best_chatbot_model_to_run_on_a_4095mb/
cold-depths
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13c661u
false
null
t3_13c661u
/r/LocalLLaMA/comments/13c661u/whats_the_best_chatbot_model_to_run_on_a_4095mb/
false
false
self
3
null
Wow, I didn't think this would be a challenge for current language models...
2
I tried asking this of a few different models, and none so far seems to have managed it or even come close; also, none was able to recognize it had failed to meet the requirement when asked right after its bad reply. >Can you write me a sentence where each word starts with one letter of the alphabet, going in the reverse order of the alphabet, and going thru the whole alphabet? Am I expecting too much? Do I just need to go with a much bigger model than what my computer can run? Are the generation parameters the issue?
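Whatever model you throw at it, the constraint itself is trivial to verify programmatically, so failed attempts can at least be checked automatically. A small checker (my own sketch, not from the post; the demo sentence is mine):

```python
import string

def reverse_alphabet_sentence_ok(sentence):
    """Check that the sentence has exactly 26 words and that the n-th
    word starts with the n-th letter of the alphabet in reverse (z..a)."""
    words = sentence.split()
    expected = string.ascii_lowercase[::-1]  # 'zyxwvutsrqponmlkjihgfedcba'
    return (len(words) == 26 and
            all(w[0].lower() == c for w, c in zip(words, expected)))

# A hand-built sentence that satisfies the prompt's constraint:
demo = ("Zebras yawn x-raying wombats, very unhappy tigers silently roam "
        "quiet ponds; old newts mostly linger, keeping jittery iguanas "
        "happy, giving ferrets every delicious crumb before any.")
print(reverse_alphabet_sentence_ok(demo))  # -> True
```

Wiring a checker like this into a retry loop (sample, verify, resample) is one way to get a constraint out of a model that can't reliably satisfy it in one shot.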
2023-05-08T20:26:07
https://www.reddit.com/r/LocalLLaMA/comments/13c3omt/wow_i_didnt_think_this_would_be_a_challenge_for/
TiagoTiagoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13c3omt
false
null
t3_13c3omt
/r/LocalLLaMA/comments/13c3omt/wow_i_didnt_think_this_would_be_a_challenge_for/
false
false
self
2
null
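The requirement in the post above can at least be checked mechanically. A minimal sketch of a validator (assuming the intended reading: exactly 26 words whose first letters run Z, Y, X, ... down to A):

```python
import string

def is_reverse_alphabet_sentence(sentence: str) -> bool:
    """Check that the words start with Z, Y, X, ... down to A, covering the whole alphabet."""
    words = sentence.split()
    expected = string.ascii_uppercase[::-1]  # 'ZYXWVUTSRQPONMLKJIHGFEDCBA'
    if len(words) != 26:
        return False
    return all(w[0].upper() == letter for w, letter in zip(words, expected))

# A valid sentence needs exactly 26 words with first letters Z..A:
sample = " ".join(letter + "oo" for letter in string.ascii_uppercase[::-1])
print(is_reverse_alphabet_sentence(sample))   # True
print(is_reverse_alphabet_sentence("Zebras yawn"))  # False
```

A checker like this could also score model outputs automatically when testing the prompt across models.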
Creating LoRA's either with llama.cpp or oobabooga (via cli only)
12
Looking for guides, feedback, and direction on how to create LoRAs based on an existing model using either llama.cpp or oobabooga text-generation-webui (without the GUI part). I am trying to learn more about LLMs and LoRAs; however, I only have access to a computer without a local GUI available. I have a decent understanding and have loaded models, but I'm looking to better understand the LoRA training process and experiment a bit. Thanks!
2023-05-08T20:19:41
https://www.reddit.com/r/LocalLLaMA/comments/13c3i33/creating_loras_either_with_llamacpp_or_oobabooga/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13c3i33
false
null
t3_13c3i33
/r/LocalLLaMA/comments/13c3i33/creating_loras_either_with_llamacpp_or_oobabooga/
false
false
self
12
null
Playing with MPT-7B-StoryWriter, truly impressive so far!
1
[deleted]
2023-05-08T20:16:50
[deleted]
1970-01-01T00:00:00
0
{}
13c3f6e
false
null
t3_13c3f6e
/r/LocalLLaMA/comments/13c3f6e/playing_with_mpt7bstorywriter_truly_impressive_so/
false
false
default
1
null
KoboldCpp - Added new RedPajama NeoX support. Would like help testing.
48
Hey everyone, I'm the developer of KoboldCpp, and I've just integrated experimental support for the RedPajama line of ggml NeoX models. I would like some feedback if anyone's up for testing it. https://github.com/LostRuins/koboldcpp/releases/latest For those who don't know, KoboldCpp is a one-click, single-exe, integrated solution for running *any GGML model*, supporting all versions of LLAMA, GPT-2, GPT-J, GPT-NeoX, and RWKV architectures. It runs out of the box on Windows with no install or dependencies, and comes with OpenBLAS and CLBlast (GPU prompt acceleration) support. Extra info: the problem is that the file formats for regular NeoX (e.g. Pythia) and RedPajama are practically identical but mutually incompatible: GGML drops the use_parallel_residual field when converting, and the file magics and version numbers have been identical across all new ggml models (since the big drama), making it harder and harder to distinguish between different formats and versions as time goes on. So I'm trying a new ugly hack to determine whether I can use this in the future.
2023-05-08T13:51:23
https://www.reddit.com/r/LocalLLaMA/comments/13bpqro/koboldcpp_added_new_redpajama_neox_support_would/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bpqro
false
null
t3_13bpqro
/r/LocalLLaMA/comments/13bpqro/koboldcpp_added_new_redpajama_neox_support_would/
false
false
self
48
null
'Missing tok_embeddings.weight' when using GGML models
4
I'm new to LocalLLaMA and want to try it; however, I've downloaded several GGML models and they all return 'Missing tok\_embeddings.weight' when I try to use llama.cpp. I've also installed the oobabooga webui and got the same error. Then I decided to test with a non-GGML model and downloaded TheBloke's 13B model from a recent post. When trying to load it in the webui, it complains about not finding *pytorch\_model-00001-of-00006.bin* because that's the filename referenced in the JSON data. If I remove the JSON file, it complains about not finding *pytorch\_model.bin*. If I rename the model to *pytorch\_model.bin*, it complains about it not being in *bin* or *pt* format. What the hell am I doing wrong?! Thanks in advance.
2023-05-08T13:33:52
https://www.reddit.com/r/LocalLLaMA/comments/13bp9ul/missing_tok_embeddingsweight_when_using_ggml/
TizocWarrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bp9ul
false
null
t3_13bp9ul
/r/LocalLLaMA/comments/13bp9ul/missing_tok_embeddingsweight_when_using_ggml/
false
false
self
4
null
MPT / Llongboi GGML Conversion
36
I am surprised there hasn't been more hype on this sub for Mosaic's LLMs; they seem promising. Has anyone been able to create a GGML version of any of their models? If not, could someone point me in the right direction?
2023-05-08T12:33:34
https://www.reddit.com/r/LocalLLaMA/comments/13bnr4w/mpt_llongboi_ggml_conversion/
themostofpost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bnr4w
false
null
t3_13bnr4w
/r/LocalLLaMA/comments/13bnr4w/mpt_llongboi_ggml_conversion/
false
false
self
36
null
Fine tuning on code, any help?
3
Hi all, I've been looking for some time now, but most resources require a lot of work to understand. I'm getting there but I was wondering if anyone has any good links for understanding how to fine tune a model on a specific code base. I'm interested in both the data construction aspect and the retraining procedure. Thank you in advance!
2023-05-08T08:14:53
https://www.reddit.com/r/LocalLLaMA/comments/13bib4z/fine_tuning_on_code_any_help/
Purple_Individual947
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bib4z
false
null
t3_13bib4z
/r/LocalLLaMA/comments/13bib4z/fine_tuning_on_code_any_help/
false
false
self
3
null
Personal Medical Doctor AI : using Oobabooga's character chat UI with med-alpaca LLM as a personal doctor
1
[removed]
2023-05-08T07:40:32
https://www.reddit.com/r/LocalLLaMA/comments/13bhnhy/personal_medical_doctor_ai_using_oobaboogas/
No_Marionberry312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bhnhy
false
null
t3_13bhnhy
/r/LocalLLaMA/comments/13bhnhy/personal_medical_doctor_ai_using_oobaboogas/
false
false
default
1
null
Jeopardy Bot sft on LLaMA 7B
14
[https://huggingface.co/openaccess-ai-collective/jeopardy-bot](https://huggingface.co/openaccess-ai-collective/jeopardy-bot) Jeopardy Bot is a reasonably good and fast bot at answering Jeopardy questions. Jeopardy is a great format for language models because the query is typically very short and the answer is typically even shorter. Trained in 4 hours on 4xA100 80GB. Samples from recent Jeopardy episodes: Below is a Jeopardy clue paired with input providing the category of the clue. Write a concise response that best answers the clue given the category. ### Instruction: Our evaluation of this intelligence data is that Red October is attempting to defect to the United States ### Input: SAID THIS LITERARY CHARACTER ### Response: what is Jack Ryan Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: The worker bee is a symbol of this industrial city in northern England & represents unity since a 2017 bombing there. The Category is WORLD CITIES ### Response: what is Manchester
2023-05-08T07:39:27
https://www.reddit.com/r/LocalLLaMA/comments/13bhmol/jeopardy_bot_sft_on_llama_7b/
winglian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bhmol
false
null
t3_13bhmol
/r/LocalLLaMA/comments/13bhmol/jeopardy_bot_sft_on_llama_7b/
false
false
self
14
{'enabled': False, 'images': [{'id': 'GsKgjeRfykwUzUT27A6dipin8cTerD-Bt0xPgkjKrfw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=108&crop=smart&auto=webp&s=c5a00837f3b6be4213f71985177f80e3f7ebdcdd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=216&crop=smart&auto=webp&s=918eda364ab7e86f011c305863ab1223cba55a66', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=320&crop=smart&auto=webp&s=937b845a253db94c9d7b8c501d592235e79f4760', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=640&crop=smart&auto=webp&s=1c22d549d6f1c0ca229c6ff23dcdb629ed94f37c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=960&crop=smart&auto=webp&s=5911f6423360e597962015276aad95bc539b5195', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=1080&crop=smart&auto=webp&s=154703fcbeba64b1f3894ffc54c9bf07a20ecde2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?auto=webp&s=ade9b9f234e4e3eb087a800b89f6e38e632a404b', 'width': 1200}, 'variants': {}}]}
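The Alpaca-style prompt layout shown in the samples above can be reproduced with a small template function. A sketch (the header wording is taken from the first sample in the post; the helper name is ours):

```python
def build_jeopardy_prompt(clue: str, category: str) -> str:
    """Assemble an Alpaca-style instruction/input/response prompt, as in the samples above."""
    return (
        "Below is a Jeopardy clue paired with input providing the category of the clue. "
        "Write a concise response that best answers the clue given the category.\n\n"
        f"### Instruction:\n{clue}\n\n"
        f"### Input:\n{category}\n\n"
        "### Response:\n"
    )

prompt = build_jeopardy_prompt(
    "Our evaluation of this intelligence data is that Red October is attempting "
    "to defect to the United States",
    "SAID THIS LITERARY CHARACTER",
)
print(prompt)
```

The model's completion would then be generated after the trailing `### Response:` marker.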
What do you guys think about a version of SmartGPT for LLaMA?
30
2023-05-08T05:32:55
https://www.youtube.com/watch?v=wVzuvf9D9BU
jd_3d
youtube.com
1970-01-01T00:00:00
0
{}
13bf57x
false
{'oembed': {'author_name': 'AI Explained', 'author_url': 'https://www.youtube.com/@ai-explained-', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wVzuvf9D9BU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="GPT 4 is Smarter than You Think: Introducing SmartGPT"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/wVzuvf9D9BU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'GPT 4 is Smarter than You Think: Introducing SmartGPT', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_13bf57x
/r/LocalLLaMA/comments/13bf57x/what_do_you_guys_think_about_a_version_of/
false
false
https://b.thumbs.redditm…Ful9E5Jukk6M.jpg
30
{'enabled': False, 'images': [{'id': '_5CrDzFuldJXgy2Mc92cu_BCoyFDoVBCjZsTfnpy5LA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?width=108&crop=smart&auto=webp&s=51fc3c8f0e37ad7e9d420f95be641ac857c6c6ef', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?width=216&crop=smart&auto=webp&s=ad06fbc41170aa0e38a8a4180b7b807e07d9a869', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?width=320&crop=smart&auto=webp&s=cf66270a8e39efc27985bfe30f5202a23dda386b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?auto=webp&s=f567e07412a84afb4528d45cf5535b5d093192b0', 'width': 480}, 'variants': {}}]}
Is it possible to use ANE(Apple Neural Engine) to run those models?
1
[removed]
2023-05-08T03:38:57
https://www.reddit.com/r/LocalLLaMA/comments/13bcrxa/is_it_possible_to_use_aneapple_neural_engine_to/
Amethyst-W
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bcrxa
false
null
t3_13bcrxa
/r/LocalLLaMA/comments/13bcrxa/is_it_possible_to_use_aneapple_neural_engine_to/
false
false
default
1
null
When are larger token limits coming?
1
[removed]
2023-05-08T02:23:57
https://www.reddit.com/r/LocalLLaMA/comments/13bay31/when_are_larger_token_limits_coming/
Mr_Nice_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bay31
false
null
t3_13bay31
/r/LocalLLaMA/comments/13bay31/when_are_larger_token_limits_coming/
false
false
default
1
null
Loading Multiple LoRA bins
10
[deleted]
2023-05-08T01:41:57
[deleted]
1970-01-01T00:00:00
0
{}
13b9x4d
false
null
t3_13b9x4d
/r/LocalLLaMA/comments/13b9x4d/loading_multiple_lora_bins/
false
false
default
10
null
Bringing back someone from the dead
7
Hey there, I've seen a couple of videos and websites where people have created an AI version of Socrates or another thinker from the past. This is something I'm interested in and was wondering if you could help. User Story: As a content producer, I want to be able to create articles that are artificially generated by Socrates, so that I can understand what Socrates would think about current events. Now, please note that this is hypothetical. What I'm wanting to know is: 1. Is there a specific model that would work well for this content production? 2. For datasets, I imagine something like uploading all the works Socrates ever wrote, and then using a scraper to bring in as much news as possible from a few different news sites; would this work? Some user stories to reflect this idea: As a content producer, I want to be able to use a prompt like "Write an article about how you feel about the invasion of Ukraine by Russia", so that I can understand the point of view of the Russia-Ukraine war from the perspective of Socrates. As a content producer, I want to be able to ask my chatbot to write a response to the following article explaining why it doesn't make logical sense. Any thoughts appreciated. :)
2023-05-08T01:05:27
https://www.reddit.com/r/LocalLLaMA/comments/13b91rj/bringing_back_someone_from_the_dead/
recentlyquitsmoking2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13b91rj
false
null
t3_13b91rj
/r/LocalLLaMA/comments/13b91rj/bringing_back_someone_from_the_dead/
false
false
self
7
null
Is it possible to remove tokens from a GGML?
7
I'm working on getting a model to run in a group chat. At the beginning, it works great. It's capable of keeping track of bits of information about individual users, following multiple conversations, etc. It seems like at a certain point, however, it figures out it's in a "chat room" and starts plugging in a bunch of random chat garbage it's been trained on. Stuff like IRC commands, chat log messages, etc. It's having a really hard time staying in character beyond a certain point. I was hoping I could start by stripping out anything in the model that isn't relevant. Based on what I've seen in the PRs for llama.cpp it should be fairly easy, but I'm not really sure where to start. I'd just like to get rid of crap like > |Bob> Do you remember what my hobby is Chie? > > |Chie> Of course I do Bob-kun! You enjoy fishing. ;) Would you ever consider going on a camping trip with friends or family to explore new places for catches??? :D 🐠 > > |Alice> Do you remember what my hobby is Chie? > > |Chie> Of course I do Alice-chan! You enjoy shopping. ;) Have you found any great deals lately, and if so - can we see photos?? <3 > > **\\-- end of logs --/** I figure pulling out any bad tokens would be a good place to start, but any other suggestions would also be appreciated
2023-05-08T00:45:36
https://www.reddit.com/r/LocalLLaMA/comments/13b8kry/is_it_possible_to_remove_tokens_from_a_ggml/
mrjackspade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13b8kry
false
null
t3_13b8kry
/r/LocalLLaMA/comments/13b8kry/is_it_possible_to_remove_tokens_from_a_ggml/
false
false
self
7
null
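Short of rewriting the GGML file itself, the usual approach to the problem above is to ban unwanted tokens at sampling time by forcing their logits to negative infinity (llama.cpp exposes a similar idea via its logit-bias option). A minimal sketch of the mechanism, using a made-up toy vocabulary rather than real token IDs:

```python
import math

def ban_tokens(logits: dict[int, float], banned: set[int]) -> dict[int, float]:
    """Apply a logit bias of -inf to banned token IDs so they can never be sampled."""
    return {tok: (-math.inf if tok in banned else score) for tok, score in logits.items()}

# Hypothetical vocabulary: token 7 stands in for some IRC-style junk we never want.
logits = {3: 1.2, 7: 4.5, 9: 0.3}
filtered = ban_tokens(logits, banned={7})
best = max(filtered, key=filtered.get)
print(best)  # 3 (the banned token 7 no longer wins greedy sampling)
```

The advantage over editing the model is that the ban list can be tuned per session without touching the weights.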
Why run LLMs locally?
51
I apologize if this is slightly off-topic, but I'm curious about the reasons for running large language models (LLMs) on local hardware instead of relying on cloud services. While I understand the desire to operate your own model, maintaining up-to-date hardware seems costly. Wouldn't it be more efficient to use cloud-based services and allocate or deallocate resources as needed? Services like Lambda Labs offer better performance at a lower cost compared to purchasing your own hardware, unless you're heavily involved in training or conducting a significant amount of inference. I'm asking because I'm trying to decide whether to invest in a couple of A100s or to utilize cloud-based solutions for running models. I'm interested in hearing other people's thoughts on this matter.
2023-05-08T00:43:00
https://www.reddit.com/r/LocalLLaMA/comments/13b8ij7/why_run_llms_locally/
jsfour
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13b8ij7
false
null
t3_13b8ij7
/r/LocalLLaMA/comments/13b8ij7/why_run_llms_locally/
false
false
self
51
null
[deleted by user]
1
[removed]
2023-05-08T00:17:40
[deleted]
1970-01-01T00:00:00
0
{}
13b7wbk
false
null
t3_13b7wbk
/r/LocalLLaMA/comments/13b7wbk/deleted_by_user/
false
false
default
1
null
Is it possible to use other models with TabbyML? How do I know which is compatible?
3
[removed]
2023-05-07T22:25:33
https://www.reddit.com/r/LocalLLaMA/comments/13b5113/is_it_possible_to_use_other_models_with_tabbyml/
TiagoTiagoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13b5113
false
null
t3_13b5113
/r/LocalLLaMA/comments/13b5113/is_it_possible_to_use_other_models_with_tabbyml/
false
false
default
3
null
CUDA out of memory on RTX 3090?
5
New to the whole llama game and trying to wrap my head around how to get it working properly. **System specs:** * Ryzen 5800X3D * 32 GB RAM * Nvidia RTX 3090 (24 GB VRAM) * Windows 10 I used the "**One-click installer**" as described in the wiki and downloaded a 13B 8-bit model as suggested by the wiki (chavinlo/gpt4-x-alpaca). The web UI is up and running, and I can enter prompts; however, the AI seems to crash in the middle of its answers due to an error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 314.00 MiB (GPU 0; 24.00 GiB total capacity; 22.78 GiB already allocated; 0 bytes free; 23.12 GiB reserved in total by PyTorch) I already tried the flags to split work/memory across GPU and CPU --auto-devices --gpu-memory 23500MiB but it continues to crash. It seems like the model does not quite fit into the 24 GB of VRAM when the GPU is also used to host the rest of the system. Some memory will always be used up by Windows and its processes. However, I had hoped the above flags would solve this issue. Any ideas?
2023-05-07T21:47:58
https://www.reddit.com/r/LocalLLaMA/comments/13b40dv/cuda_out_of_memory_on_rtx_3090/
Luxkeiwoker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13b40dv
false
null
t3_13b40dv
/r/LocalLLaMA/comments/13b40dv/cuda_out_of_memory_on_rtx_3090/
false
false
self
5
null
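For context on the OOM above, some back-of-the-envelope VRAM arithmetic (weights only, ignoring activations, KV cache, and whatever Windows itself holds) shows why a 13B model is borderline on a 24 GiB card:

```python
def weight_vram_gb(n_params_billion: float, bits: int) -> float:
    """Approximate VRAM needed just for the model weights, in GiB."""
    bytes_total = n_params_billion * 1e9 * bits / 8
    return bytes_total / 2**30

for bits in (16, 8, 4):
    print(f"13B at {bits}-bit: ~{weight_vram_gb(13, bits):.1f} GiB")
```

At 16-bit the weights alone are about 24.2 GiB, which already exceeds a 3090's capacity; the error message above ("22.78 GiB already allocated") is consistent with the model actually loading in 16-bit rather than 8-bit, so verifying that the 8-bit flag really took effect would be a first debugging step.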
How to run starcoder-GPTQ-4bit-128g?
12
I am looking at running this starcoder locally -- someone already made a 4bit/128 version (https://huggingface.co/mayank31398/starcoder-GPTQ-4bit-128g) How the hell do we use this thing? It says use https://github.com/mayank31398/GPTQ-for-SantaCoder to run it, but when I follow those instructions, I always get random errors or it just tries to re-download the original model files. I tried to run GPTQ-for-Llama, and I can get it loaded into ooba text-gen, but then I get some errors; someone also said it doesn't work in ooba, because it uses some custom inference thing. Anyone have any advice on this? Point me in the right direction?
2023-05-07T21:39:36
https://www.reddit.com/r/LocalLLaMA/comments/13b3s4f/how_to_run_starcodergptq4bit128g/
kc858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13b3s4f
false
null
t3_13b3s4f
/r/LocalLLaMA/comments/13b3s4f/how_to_run_starcodergptq4bit128g/
false
false
self
12
{'enabled': False, 'images': [{'id': 'fg9qOeYrOPWrI8Sr0baIRR_z7q7sym25M66JFFcrTAg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=108&crop=smart&auto=webp&s=b523133e0a3b86ea433e83f4780fd2f724ecbe64', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=216&crop=smart&auto=webp&s=9b476110ef5070e809421db0dd27878de62ddf7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=320&crop=smart&auto=webp&s=84134154d4eab25bc4ad57a478693f8b7edc4f8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=640&crop=smart&auto=webp&s=24384160e741e4711888d7395e7957e4fc5a0abc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=960&crop=smart&auto=webp&s=f060994a6fad64106bbe2ac339db12365720f449', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=1080&crop=smart&auto=webp&s=653f2d44897f05ba8e0dc759d2a39f901c1fbf88', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?auto=webp&s=ca2cb5b6a069e64bbd46d3ccad463d1cbfe86411', 'width': 1200}, 'variants': {}}]}
So there's this thing, FreedomGPT.
6
I don't really trust it. They sent me a link to an exe and a webpage. I had signed up a while back just in case. I ran it in Sandboxie and it got to the point where it wanted to download one of two 7B local models, but inside the sandbox the buttons didn't work. It's all very strange and I thought I'd share. I don't want to put a link or host the exe anywhere, but I will in the comments if someone asks. I can already run a 7B model locally via MLC LLM, so I doubt I'm missing out on much. The thing that makes me sad is that there is a crying need for this level of ease of use. You guys not making that a priority really creates an opening for bad actors. This is a general tech community problem I've watched be a thing for decades now, and I will always dislike it. Edit: Apparently it's not a scam, it's just nothing special. [https://github.com/ohmplatform/FreedomGPT](https://github.com/ohmplatform/FreedomGPT) See this comment: [https://www.reddit.com/r/LocalLLaMA/comments/13azmd3/comment/jjc328a/](https://www.reddit.com/r/LocalLLaMA/comments/13azmd3/comment/jjc328a/?utm_source=reddit&utm_medium=web2x&context=3)
2023-05-07T19:07:51
https://www.reddit.com/r/LocalLLaMA/comments/13azmd3/so_theres_this_thing_freedomgpt/
Innomen
self.LocalLLaMA
2023-05-08T16:05:06
0
{}
13azmd3
false
null
t3_13azmd3
/r/LocalLLaMA/comments/13azmd3/so_theres_this_thing_freedomgpt/
false
false
self
6
{'enabled': False, 'images': [{'id': 'IPBUkM6bRpqkSbFZylUF9BPUnu02ny0VROHxv1FV7a4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=108&crop=smart&auto=webp&s=998ddb1d1c868851285e4dd1362ce81330204906', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=216&crop=smart&auto=webp&s=e539b698cdf87c326d6373b31be5919168ccecf9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=320&crop=smart&auto=webp&s=a56280891408c41c4e99fbcb75ffeb04e7c73695', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=640&crop=smart&auto=webp&s=aaef23d6e992156934aebffeff652c3de959128d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=960&crop=smart&auto=webp&s=48ec12a12ec90465be8a5b566773e45a849268d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=1080&crop=smart&auto=webp&s=2b8fa2d06aa5720f0ed6752764dd3f03cbb29f17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?auto=webp&s=b65c4669b8359f3f2b00fec1aab323c8ab7255af', 'width': 1200}, 'variants': {}}]}
h2oGPT 30B beats OASST 30B
2
[removed]
2023-05-07T18:28:42
https://www.reddit.com/r/LocalLLaMA/comments/13aykg8/h2ogpt_30b_beats_oasst_30b/
pseudotensor
self.LocalLLaMA
2023-05-07T20:35:44
0
{}
13aykg8
false
null
t3_13aykg8
/r/LocalLLaMA/comments/13aykg8/h2ogpt_30b_beats_oasst_30b/
false
false
default
2
null
Reality check on good embedding model (and this idea in general)
3
Hi - I'm working on getting up to speed to put together a practical implementation. As a proof-of-concept I'm trying to build a locally-hosted (no external API calls) document query proof-of-concept along the lines of Delphic ( [GitHub - JSv4/Delphic: Starter App to Build Your Own App to Query Doc Collections with Large Language Models (LLMs) using LlamaIndex, Langchain, OpenAI and more (MIT Licensed)](https://github.com/JSv4/Delphic) ) As I type this, I realize it would probably be enough to just demonstrate something working in a Jupyter notebook. I guess I need to use (at least) a Vector Store Index via llama-index to generate the embeddings. This brings up 2 questions I haven't been able to sort (yet): 1) Are there any GGML models that could generate embeddings? It would be interesting if I could somehow get an LLM to act like Instructor-XL but using a local model that I can run CPU-only (super-slow but I have to go this way because reasons). 2) Is a vector database (like Milvus) an absolute necessity? Delphic seems to be using Postgres to store everything document-related, including vectors generated when llama-index generates the indices. Really, any pointers at all will be gratefully digested - I think it would be amazing to learn how to put together a completely on-laptop document query environment. I'm willing to bet someone's already done this (or close to it) and I just haven't dug enough to find it. But - if you can even just show me how to glue some of the puzzle pieces together - I'd be able to get past the RTFM-but-I-don't-know-which-FM-to-R stage and start making real progress. Thank you for any pointers !
2023-05-07T17:51:40
https://www.reddit.com/r/LocalLLaMA/comments/13axl4g/reality_check_on_good_embedding_model_and_this/
cap811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13axl4g
false
null
t3_13axl4g
/r/LocalLLaMA/comments/13axl4g/reality_check_on_good_embedding_model_and_this/
false
false
self
3
{'enabled': False, 'images': [{'id': 'qByS8Dq9YaMbQbkXrkjaR46aufkoZbqssgjOQNiJZxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=108&crop=smart&auto=webp&s=b661a442185c4e49ef7ea1ede45b177966463022', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=216&crop=smart&auto=webp&s=50bfdb71328cb57fc12077f6bf19d19d6d8ba81f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=320&crop=smart&auto=webp&s=729aed234840e3774651828f6353795dff2d08c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=640&crop=smart&auto=webp&s=f3c09d240decd86d63d77c4b329eb9a2ffa3cf4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=960&crop=smart&auto=webp&s=b39308330abe355856c80d67850734b2bff8fc6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=1080&crop=smart&auto=webp&s=1b8c63e65798d3855f81c11d1ad27688abc685c0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?auto=webp&s=ed4fd277fbac809edf3c13c1a66a01c9bfe51d3d', 'width': 1200}, 'variants': {}}]}
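On question 2 in the post above: a dedicated vector database is not strictly necessary for a proof-of-concept; brute-force cosine similarity over an in-memory list of embeddings is enough for small document collections. A minimal sketch (the embeddings here are tiny hypothetical placeholders; in practice they would come from your embedding model):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document IDs whose embeddings are most similar to the query."""
    return sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)[:k]

docs = {"doc_a": [1.0, 0.0], "doc_b": [0.9, 0.1], "doc_c": [0.0, 1.0]}
print(top_k([1.0, 0.05], docs))  # ['doc_a', 'doc_b']
```

Postgres (as Delphic uses) or even a pickled dict is fine at this scale; approximate-nearest-neighbor stores like Milvus only start paying off with large corpora.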
What we really need is a local LLM.
1
Whether the LLM is LLaMA, ChatGPT, Bloom, or FLAN-UL2 based, having one of the quality of ChatGPT 4 that can be used locally is badly needed. At the very least this kind of competition will result in getting OpenAI or MSFT to keep the cost down. Some say that only huge trillion-parameter models can have that kind of quality. They say that only the huge models exhibit emergent intelligence. Here is what we have now: CHATGPT: What do you want to know about math, chemistry, physics, biology, medicine, ancient history, painting, music, sports trivia, movie trivia, cooking, 'C', C++, Python, Go, Rust, Cobol, Java, plumbing, bricklaying, 10 thousand species of birds, 260 thousand species of flowers, 10 million species of fungi, advanced Nose Hair Theory, and the kitchen sink? And what language do you want me to provide it in? This is too wide. I just want depth for a subject or a set of closely related subjects like math/physics, but I don't need it trained on articles from Cat Fancier Magazine and Knitting Quarterly that prevent it from running on my home system. Of course, a "physics" model would need to know about one famous cat.
2023-05-07T17:29:00
https://www.reddit.com/r/LocalLLaMA/comments/13awzg5/what_we_really_need_is_a_local_llm/
Guilty-History-9249
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13awzg5
false
null
t3_13awzg5
/r/LocalLLaMA/comments/13awzg5/what_we_really_need_is_a_local_llm/
false
false
self
1
null
Gpt4all on GPU
2
[removed]
2023-05-07T17:25:56
https://www.reddit.com/r/LocalLLaMA/comments/13awwja/gpt4all_on_gpu/
gobiJoe
self.LocalLLaMA
2023-05-07T22:51:23
0
{}
13awwja
false
null
t3_13awwja
/r/LocalLLaMA/comments/13awwja/gpt4all_on_gpu/
false
false
default
2
null
How to run .safetensors models with langchain/huggingface pipelines?
7
Hi, Please help, as I am stuck on this problem. I would like to run a .safetensors model (e.g. [https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ/tree/main](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ/tree/main)) with langchain and/or HuggingFacePipeline. When I run it: from langchain.llms import HuggingFacePipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "TheBloke/gpt4-x-vicuna-13B-GPTQ" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) hf = HuggingFacePipeline(pipeline=pipe) text = "What would be a good company name for a company that makes colorful socks?" print(hf(text)) I get this error: >OSError: TheBloke/gpt4-x-vicuna-13B-GPTQ does not appear to have a file named pytorch\_model.bin, tf\_model.h5, model.ckpt or flax\_model.msgpack. Does anyone know how to fix that? Thanks a lot in advance!
2023-05-07T17:02:06
https://www.reddit.com/r/LocalLLaMA/comments/13aw97e/ho_to_run_safetensors_models_with/
ljubarskij
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13aw97e
false
null
t3_13aw97e
/r/LocalLLaMA/comments/13aw97e/ho_to_run_safetensors_models_with/
false
false
self
7
{'enabled': False, 'images': [{'id': 'fhvh8cFwhgzPm7e10s_guvFwKblqQzSx384uaAzxfB0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=108&crop=smart&auto=webp&s=740e997e8c34bc0f46484a9c8bf7fdfb25750daa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=216&crop=smart&auto=webp&s=f5922709587b24793d528a2ab88e542a1283f2f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=320&crop=smart&auto=webp&s=f3a3e863e55610284364d64a6fd3ee1fdb8abdca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=640&crop=smart&auto=webp&s=4fe875283256b57b58687bd597b7f7a66af98579', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=960&crop=smart&auto=webp&s=f189d43e5f8415700014a481265aafc53b5c0185', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=1080&crop=smart&auto=webp&s=e13f2e72c7ae108486b2f01d89581273896c4c9a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?auto=webp&s=4f0913b1b23608139c800a79396ea031f59053ff', 'width': 1200}, 'variants': {}}]}
GPT For All 13B (/GPT4All-13B-snoozy-GPTQ) is Completely Uncensored, a great model
97
Got it from here: [https://huggingface.co/TheBloke/GPT4All-13B-snoozy-GPTQ](https://huggingface.co/TheBloke/GPT4All-13B-snoozy-GPTQ) I took it for a test run and was impressed. It seems to be on the same level of quality as Vicuna 1.1 13B and is completely uncensored, which is great. It is able to output detailed descriptions, and knowledge-wise it also seems to be in the same ballpark as Vicuna. Give it a try!
2023-05-07T16:30:43
https://www.reddit.com/r/LocalLLaMA/comments/13avdxb/gpt_for_all_13b_gpt4all13bsnoozygptq_is/
Ganfatrai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13avdxb
false
null
t3_13avdxb
/r/LocalLLaMA/comments/13avdxb/gpt_for_all_13b_gpt4all13bsnoozygptq_is/
false
false
self
97
{'enabled': False, 'images': [{'id': 'KKGIrjEvU3veb9fSHCVjq5xMDtw5BkFUUY9HajwyILE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=108&crop=smart&auto=webp&s=491ff1a3ebe312ef19467348806d58ea3ba040ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=216&crop=smart&auto=webp&s=7452fab145be13ea15b8efc16abc899ffc35de7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=320&crop=smart&auto=webp&s=99824805bed90a2c870d652c22c699d905097dac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=640&crop=smart&auto=webp&s=6c6fe19ffee36daa3c25b8dd681af10168460da2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=960&crop=smart&auto=webp&s=6de93f6afdd6f58c5d163c692db24e35b73d2581', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=1080&crop=smart&auto=webp&s=7dd8cac66fe54f13d28e02943530aba7c18815c9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?auto=webp&s=00e382b72c1185a71773105a664ae796d28e6bea', 'width': 1200}, 'variants': {}}]}