Dataset schema (column: type, range):

- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0-878
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k
Evaluating mistral-medium
31
I've seen very few evaluations of mistral-medium, their new model that's only available via API and that's supposedly somewhere between GPT-3 and GPT-4. I've gotten access to it via my office, and I'll be evaluating its performance for various apps we've been working on, which have used OpenAI's APIs so far. Alongside that, does anybody have decent prompts for comparing its performance to the GPT models and to mistral-small (the MoE model they open sourced last week)? I'd be happy to share the results.
2023-12-17T14:20:27
https://www.reddit.com/r/LocalLLaMA/comments/18kib8y/evaluating_mistralmedium/
Kinniken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kib8y
false
null
t3_18kib8y
/r/LocalLLaMA/comments/18kib8y/evaluating_mistralmedium/
false
false
self
31
null
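For the side-by-side comparison the post proposes, a minimal harness sketch may help. The client callables here are stubs standing in for real API wrappers (mistral-medium, GPT-4, etc.); the model names and wrapper shape are illustrative assumptions, not any particular SDK's interface:

```python
import json

def compare_models(prompts, clients):
    """Run each prompt through every model client and collect answers side by side.

    `clients` maps a model name to a callable taking a prompt string and
    returning that model's completion (e.g. a thin wrapper around an API call).
    """
    results = []
    for prompt in prompts:
        row = {"prompt": prompt}
        for name, complete in clients.items():
            row[name] = complete(prompt)
        results.append(row)
    return results

if __name__ == "__main__":
    # Stub clients; swap in real mistral-medium / GPT-4 API wrappers here.
    clients = {
        "mistral-medium": lambda p: f"[mistral-medium] answer to: {p}",
        "gpt-4": lambda p: f"[gpt-4] answer to: {p}",
    }
    print(json.dumps(compare_models(["What is 2+2?"], clients), indent=2))
```

Collecting the answers as rows keyed by model makes it easy to dump the comparison as JSON or a table for sharing.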
failed to quantize: ios_base::clear: unspecified iostream_category error - Where to go for help?
1
I'm trying to run llama.cpp following the instructions on [their repo](https://github.com/ggerganov/llama.cpp) on a Mac M2 Max 64GB. Below are the steps from their README that I'm following (I added the numbers for legibility); I have not done any of the optional steps, and I get stuck at step 4.

Prepare Data & Run:

    # 1: obtain the original LLaMA model weights and place them in ./models
    ls ./models
    65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
    # [Optional] for models using BPE tokenizers
    ls ./models
    65B 30B 13B 7B vocab.json

    # 2: install Python dependencies
    python3 -m pip install -r requirements.txt

    # 3: convert the 7B model to ggml FP16 format
    python3 convert.py models/7B/
    # [Optional] for models using BPE tokenizers
    python convert.py models/7B/ --vocabtype bpe

    # 4: quantize the model to 4-bits (using q4_0 method)
    ./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0

    # 5: update the gguf filetype to current if older version is unsupported by another application
    ./quantize ./models/7B/ggml-model-q4_0.gguf ./models/7B/ggml-model-q4_0-v2.gguf COPY

    # 6: run the inference
    ./main -m ./models/7B/ggml-model-q4_0.gguf -n 128

When I run step 4, it immediately fails with:

    llama_model_quantize: failed to quantize: ios_base::clear: unspecified iostream_category error
    main: failed to quantize model from './models/llama-2-13b-chat/ggml-model-f16.gguf'

The extended output is (some lines omitted for brevity):

    ./quantize ./models/llama-2-13b-chat/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0
    main: build = 1628 (fecac45)
    main: built with Apple clang version 15.0.0 (clang-1500.0.40.1) for arm64-apple-darwin23.1.0
    main: quantizing './models/llama-2-13b-chat/ggml-model-f16.gguf' to './models/7B/ggml-model-q4_0.gguf' as Q4_0
    llama_model_loader: loaded meta data with 15 key-value pairs and 363 tensors from ./models/llama-2-13b-chat/ggml-model-f16.gguf (version GGUF V3 (latest))
    llama_model_loader: - tensor    0: token_embd.weight f16 [ 5120, 32000, 1, 1 ]
    llama_model_loader: - tensor    1: output_norm.weight f32 [ 5120, 1, 1, 1 ]
    llama_model_loader: - tensor    2: output.weight f16 [ 5120, 32000, 1, 1 ]
    llama_model_loader: - tensor    3: blk.0.attn_q.weight f16 [ 5120, 5120, 1, 1 ]
    (...)
    llama_model_loader: - tensor  361: blk.39.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
    llama_model_loader: - tensor  362: blk.39.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
    llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
    llama_model_loader: - kv 0: general.architecture str = llama
    llama_model_loader: - kv 1: general.name str = LLaMA v2
    (...)
    llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
    llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
    llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
    llama_model_loader: - type f32: 81 tensors
    llama_model_loader: - type f16: 282 tensors
    llama_model_quantize: failed to quantize: ios_base::clear: unspecified iostream_category error
    main: failed to quantize model from './models/llama-2-13b-chat/ggml-model-f16.gguf'

I have asked ChatGPT what I could do to fix this without any useful output, so I'm resorting to humans! Let me know if there's a more appropriate place to ask :) Thanks in advance :)
2023-12-17T13:42:04
https://www.reddit.com/r/LocalLLaMA/comments/18khle8/failed_to_quantize_ios_baseclear_unspecified/
Lanski13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18khle8
false
null
t3_18khle8
/r/LocalLLaMA/comments/18khle8/failed_to_quantize_ios_baseclear_unspecified/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
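An `iostream_category` error from `quantize` typically surfaces from a failed file read or write, so one hedged first check (it assumes nothing about the actual root cause, which could also be disk space or permissions) is to verify the input file exists, is complete, and really is GGUF. GGUF files begin with the 4-byte magic `GGUF` followed by a little-endian uint32 version:

```python
import struct
from pathlib import Path

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def check_gguf(path):
    """Basic sanity check: file exists, is non-trivially sized, and starts
    with the GGUF magic. A failure here would explain an iostream error."""
    p = Path(path)
    if not p.is_file():
        return f"{path}: file not found"
    size = p.stat().st_size
    if size < 8:
        return f"{path}: file is truncated ({size} bytes)"
    with p.open("rb") as f:
        magic = f.read(4)
        (version,) = struct.unpack("<I", f.read(4))
    if magic != GGUF_MAGIC:
        return f"{path}: bad magic {magic!r}, not a GGUF file"
    return f"{path}: looks like GGUF v{version}, {size} bytes"
```

Running it against `./models/llama-2-13b-chat/ggml-model-f16.gguf` before re-trying the quantize step would at least rule out a truncated or mislocated conversion output.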
Best function call return format
9
Lately I've been testing different ways to provide the LLM with its own function call output, and it's interesting to see how the model responds to the different formats. For example, providing the output inside the function call JSON under a `response` key seems to be the best way to make the LLM understand where that information came from, but it comes with the drawback that the model sometimes creates its own responses and continues generating output. I also tried putting the response inside the user chat turn, which solves the continuous-output problem and seems to work great for long function returns, but the LLM seems to struggle more to understand where that information came from. What are you guys doing to provide the model with function outputs? Also, is there a "proper" way to do it? I haven't found much info about this.
2023-12-17T13:15:05
https://www.reddit.com/r/LocalLLaMA/comments/18kh45r/best_function_call_return_format/
igna778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kh45r
false
null
t3_18kh45r
/r/LocalLLaMA/comments/18kh45r/best_function_call_return_format/
false
false
self
9
null
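The two layouts described above can be sketched as plain message lists. This follows the OpenAI-style role convention as one common format, not a definitive standard; the function name and keys are illustrative:

```python
import json

def tool_result_in_function_message(call_name, call_args, result):
    """Option A: return the tool output in a dedicated function message
    directly after the assistant's call."""
    return [
        {"role": "assistant", "content": None,
         "function_call": {"name": call_name, "arguments": json.dumps(call_args)}},
        {"role": "function", "name": call_name, "content": json.dumps(result)},
    ]

def tool_result_in_user_message(call_name, result):
    """Option B: wrap the tool output inside the next user turn, labeling
    its origin so the model can tell it apart from the user's own words."""
    return [
        {"role": "user",
         "content": f"[{call_name} returned]\n{json.dumps(result, indent=2)}"},
    ]
```

The explicit `[{call_name} returned]` label in option B is one way to mitigate the "where did this come from" confusion the post describes.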
LLM playground
5
Can you tell me about good web-interface LLM playground websites? Chatbot Arena, for example. The more the better, as they say.
2023-12-17T13:12:07
https://www.reddit.com/r/LocalLLaMA/comments/18kh2ck/llm_playground/
OldSet0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kh2ck
false
null
t3_18kh2ck
/r/LocalLLaMA/comments/18kh2ck/llm_playground/
false
false
self
5
null
FineTuning LongT5 for QA
3
Code for fine-tuning LongT5 for QA, and then running inference with it.
2023-12-17T13:06:23
https://www.reddit.com/r/LocalLLaMA/comments/18kgysq/finetuning_longt5_for_qa/
yashBhaskar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kgysq
false
null
t3_18kgysq
/r/LocalLLaMA/comments/18kgysq/finetuning_longt5_for_qa/
false
false
self
3
null
Worth it to buy a modded 22GB 2080ti for LLMs?
17
I can get my hands on a 2080 Ti modded to 22GB of VRAM instead of the original 11GB, for around $500/£400. Would you say this is worth it / that the markup is fair, given that unmodded variants can be had for slightly over half the price? Right now I'm using A4000s, which have been okay, but the 16GB limit is sometimes difficult to work around for running larger models. If I buy the modded 2080 Ti, I plan on getting two and connecting them via NVLink for better performance. Given the emerging trend of better-performing models running with less and less VRAM (Phi, Mixtral, etc.), one concern is buying this now and not needing it in a few months' time. Would appreciate any pros/cons that I might not have considered! Thanks
2023-12-17T12:54:31
https://www.reddit.com/r/LocalLLaMA/comments/18kgr9m/worth_it_to_buy_a_modded_22gb_2080ti_for_llms/
define_undefine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kgr9m
false
null
t3_18kgr9m
/r/LocalLLaMA/comments/18kgr9m/worth_it_to_buy_a_modded_22gb_2080ti_for_llms/
false
false
self
17
null
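For sizing decisions like this, a back-of-the-envelope estimate of weight memory helps. A rough sketch, where the 4.5 bits/weight figure is an assumed approximation for q4_0-style quants (scales included) and the fixed overhead is a guess; real usage also grows with context length:

```python
def estimate_vram_gb(n_params_billion, bits_per_weight, overhead_gb=1.5):
    """Back-of-the-envelope VRAM to hold the quantized weights plus a fixed
    allowance for KV cache and buffers. Real usage varies with context size."""
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + overhead_gb

if __name__ == "__main__":
    for model, params in [("7B", 7), ("13B", 13), ("34B", 34), ("70B", 70)]:
        # ~4.5 bits/weight approximates a q4_0-style quantization
        print(f"{model}: ~{estimate_vram_gb(params, 4.5):.1f} GB at 4-bit")
```

By this estimate a single 22GB card comfortably fits 4-bit 13B and 34B-class models, while 70B at 4-bit would still need the two-card setup.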
👋 How we created Trinity: Our experimental LLM that's #1 and #2 on the Hugging Face Leaderboard
16
[removed]
2023-12-17T12:47:55
https://www.reddit.com/r/LocalLLaMA/comments/18kgnfl/how_we_created_trinity_our_experimental_llm_thats/
jan-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kgnfl
false
null
t3_18kgnfl
/r/LocalLLaMA/comments/18kgnfl/how_we_created_trinity_our_experimental_llm_thats/
false
false
self
16
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=108&crop=smart&auto=webp&s=90f4efd1c1314faf5b0cd1c5eeb8d2835fe4a3ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=216&crop=smart&auto=webp&s=062336de177b9f9f124a98f4e03b59faa819be1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=320&crop=smart&auto=webp&s=01e9aeccc0d76fee4ecb359bfb6238dc2afd87f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=640&crop=smart&auto=webp&s=723b41bc410ff59454cf7a9a3db4eced43d4868f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=960&crop=smart&auto=webp&s=2c43ffe72f7f32d522c3e85c1aa8e25d6f213b38', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=1080&crop=smart&auto=webp&s=6a7d63eae44237642a3f95e586436bf6efe5dd70', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?auto=webp&s=51349e0b781d1c9e91535974e09833705c76a3cc', 'width': 1200}, 'variants': {}}]}
failed to quantize: ios_base::clear: unspecified iostream_category error - Where to go for help?
1
[removed]
2023-12-17T12:40:52
https://www.reddit.com/r/LocalLLaMA/comments/18kgjas/failed_to_quantize_ios_baseclear_unspecified/
Electrical_Town_2273
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kgjas
false
null
t3_18kgjas
/r/LocalLLaMA/comments/18kgjas/failed_to_quantize_ios_baseclear_unspecified/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
How do I update llama.cpp for android in termux
1
[removed]
2023-12-17T12:10:38
https://www.reddit.com/r/LocalLLaMA/comments/18kg212/how_do_i_update_llamacpp_for_android_in_termux/
Alert-Estimate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kg212
false
null
t3_18kg212
/r/LocalLLaMA/comments/18kg212/how_do_i_update_llamacpp_for_android_in_termux/
false
false
self
1
null
Hardware problem
3
Should I get:

1. 4060 Ti 16GB x4 + Threadripper 5955WX on a TRX40 motherboard, or
2. 3090 24GB x2 + i9 14700K on a Z790 motherboard?

They have the same price in my country. The good thing about the first setup is 64GB of VRAM, but less bandwidth. My purpose is to finetune Llama 2 70B locally.
2023-12-17T12:00:44
https://www.reddit.com/r/LocalLLaMA/comments/18kfw95/hardware_problem/
2047Escaper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kfw95
false
null
t3_18kfw95
/r/LocalLLaMA/comments/18kfw95/hardware_problem/
false
false
self
3
null
SOLAR: an 11B model that beats every open model, including Mixtral, Yi-34B, Llama 2 70B, and Falcon 180B!
116
[https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
2023-12-17T11:22:18
https://www.reddit.com/r/LocalLLaMA/comments/18kfbh7/solar_an_11b_model_that_beats_every_open_model/
Own_Relationship8953
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kfbh7
false
null
t3_18kfbh7
/r/LocalLLaMA/comments/18kfbh7/solar_an_11b_model_that_beats_every_open_model/
false
false
self
116
{'enabled': False, 'images': [{'id': 'eYgOzVBAuNvqoa0iO_7oesPDF4IMahqO_VDioDBGPX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=108&crop=smart&auto=webp&s=e67807fe20978ba6a4ca59164cb6a22881c5891d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=216&crop=smart&auto=webp&s=017c11d85e98026f2e77c1ab62e93312439ecd82', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=320&crop=smart&auto=webp&s=f14dcac8487f2d9a9488dbbf2eb6bbbbfa367b25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=640&crop=smart&auto=webp&s=da0def96cb9a4ab285c7de23459d869044a4bcaf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=960&crop=smart&auto=webp&s=b9d3d5c618a695ee31b7043d6f84401ce3ad6d56', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=1080&crop=smart&auto=webp&s=7babdb361b6b420680d3254f56ce12ba20fd027a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?auto=webp&s=5bb52165a7d9afbe56de435e4c73da11b20c3ea4', 'width': 1200}, 'variants': {}}]}
Why is there so much focus on Role Play?
171
Hi! I ask this with the utmost respect. I just wonder why there is so much focus on role play in the world of local LLMs. Whenever a new model comes out, it seems that one of the first things to be tested is its RP capabilities. There seem to be TONS of tools developed around role playing, like SillyTavern and characters with diverse backgrounds. Do people really use it to just chat as if it were a friend? Do people use it for actual role play, like Dungeons & Dragons? Are people just lonely and use it to talk to a horny waifu? I see LLMs mainly as a really good tool for coding, summarizing, rewriting emails, as an assistant... yet RP looks even bigger than all of those combined. I just want to learn if I'm missing something here that has great potential. Thanks!!!
2023-12-17T11:12:39
https://www.reddit.com/r/LocalLLaMA/comments/18kf6es/why_is_there_so_much_focus_on_role_play/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kf6es
false
null
t3_18kf6es
/r/LocalLLaMA/comments/18kf6es/why_is_there_so_much_focus_on_role_play/
false
false
self
171
null
Translation Model Leaderboard
15
Hey guys, for one of my projects I recently added a translation layer before prompting my LLM for inference. That's when I realized we don't really have a leaderboard for translation models, a compilation of the best ones, or even a real list of all the serious competitors. That's what I wanted to start here, so all of us can find the best models quickly without having to research for hours on end. I also added some questions at the end.

The best models I have tested so far:

- OPUS MT: tiny, blazing-fast models that exist for almost all languages, making them basically multilingual. The problem with these models is the output quality - it doesn't suck, but it's not great either.
- allenai: almost like OPUS, with somewhat fewer models and a bit better output, but I haven't tested them as much.
- facebook-nllb-200: not really a production model, single-sentence only; overall I would not recommend it, as even distilled it is still large and I haven't gotten it to produce great output.
- madlad-400: from what I have heard a great but slow model; I haven't really gotten around to testing it.

My questions mainly are:

1. Are there other great models? Especially small, single-language ones like OPUS?
2. Has anyone tested inference times and quality with madlad?
3. Has anyone had a good experience with nllb-200, and if yes, how?
2023-12-17T11:11:13
https://www.reddit.com/r/LocalLLaMA/comments/18kf5pb/translation_model_leaderboard/
Galaktische_Gurke
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kf5pb
false
null
t3_18kf5pb
/r/LocalLLaMA/comments/18kf5pb/translation_model_leaderboard/
false
false
self
15
null
How can RAG from LlamaIndex be integrated with a local llama-cpp-python API server?
4
I would like to use [LlamaIndex](https://www.llamaindex.ai/) to index documents and media into a vector store, and then be able to use a [chat UI](https://github.com/mckaywrigley/chatbot-ui) to query that information, entirely locally. Ideally this would use [Llama.cpp's OpenAI API web service](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md), exposed via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#openai-compatible-web-server) (considering the latter is a dependency of LlamaIndex and depends on the former in turn, unless I am misunderstanding something). I am able to run these pieces individually just fine, but I'm having trouble figuring out how to use the exported vector store JSON files from LlamaIndex while running (or at least launching) the server. It almost seems like if LlamaIndex is used then I have to stay with it and would need to implement the API as a web service separately. Surely there's a way to combine everything so that updates made to the Llama.cpp project's OpenAI API support stay accessible? Any tips, suggestions, pointers, examples, etc. would be most appreciated!
2023-12-17T08:47:05
https://www.reddit.com/r/LocalLLaMA/comments/18kd2n8/how_can_rag_from_llamaindex_be_integrated_with_a/
PrinceOfLeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kd2n8
false
null
t3_18kd2n8
/r/LocalLLaMA/comments/18kd2n8/how_can_rag_from_llamaindex_be_integrated_with_a/
false
false
self
4
null
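A minimal sketch of the glue being asked about, assuming llama-cpp-python's OpenAI-compatible server is running locally (the base URL and port are assumptions; the retrieved chunks would come from the LlamaIndex vector store query, here represented as plain strings):

```python
import json
import urllib.request

def build_chat_payload(question, context_chunks, model="local"):
    """Assemble an OpenAI-style chat completion request that stuffs retrieved
    chunks (e.g. from a LlamaIndex vector store) into the system prompt."""
    context = "\n\n".join(context_chunks)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
    }

def ask(question, chunks, base_url="http://localhost:8000/v1"):
    """POST the payload to a llama-cpp-python OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_chat_payload(question, chunks)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keeping retrieval (LlamaIndex) and generation (the llama.cpp server) on opposite sides of a plain HTTP boundary like this means server-side updates to the OpenAI API support stay usable without touching the indexing code.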
Best Uncensored Model to run on 16GB of VRAM?
45
Been out of the local scene for a few months, but I have a GPU with 16GB of VRAM. What's the best ungimped model I could run at the moment? Just general stuff, nothing specific.
2023-12-17T08:22:16
https://www.reddit.com/r/LocalLLaMA/comments/18kcqcb/best_uncensored_model_to_run_on_16gb_of_vram/
AMDBlackScreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18kcqcb
false
null
t3_18kcqcb
/r/LocalLLaMA/comments/18kcqcb/best_uncensored_model_to_run_on_16gb_of_vram/
false
false
self
45
null
LLM playground
1
[removed]
2023-12-17T05:28:48
https://www.reddit.com/r/LocalLLaMA/comments/18ka4ir/llm_playground/
OldSet0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ka4ir
false
null
t3_18ka4ir
/r/LocalLLaMA/comments/18ka4ir/llm_playground/
false
false
self
1
null
Has anyone played around with Phi-2
1
They uploaded it to Hugging Face: [https://huggingface.co/microsoft/phi-2](https://huggingface.co/microsoft/phi-2) Has it been evaled on the benchmarks?
2023-12-17T05:22:53
https://www.reddit.com/r/LocalLLaMA/comments/18ka10m/has_anyone_played_around_with_phi2/
Honest-Debate-6863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ka10m
false
null
t3_18ka10m
/r/LocalLLaMA/comments/18ka10m/has_anyone_played_around_with_phi2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]}
Best local model to answer questions given some context?
7
Hello, I'm trying to build a local question answering system. I stored all of my documents in a FAISS index and I'm retrieving the 10 most relevant chunks for a user's question. So far this is working great. Now I'm looking for a model that can run on my 30GB RAM computer, take those 10 retrieved chunks, and produce the actual answer. I tried Deepset's RoBERTa Base and a bunch of other small models, but none worked for me. I tried some 7B models (Mistral, Llama, etc.) and they answered the question perfectly, but I can't run those models on my machine. So is there a model I can run on my 30GB RAM computer that is on par with Mistral/Llama 7B at question answering? I want it to be able to answer questions given some context.
2023-12-17T05:00:48
https://www.reddit.com/r/LocalLLaMA/comments/18k9nth/best_local_model_to_answer_questions_given_some/
impl66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k9nth
false
null
t3_18k9nth
/r/LocalLLaMA/comments/18k9nth/best_local_model_to_answer_questions_given_some/
false
false
self
7
null
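Whatever model ends up answering, the retrieved chunks have to fit its context window. One simple approach is to pack the top-ranked chunks greedily up to a budget; a minimal sketch, where the character budget is a crude stand-in for a real token count:

```python
def pack_context(chunks, max_chars=4000):
    """Greedily pack the top-ranked retrieved passages into a context budget,
    keeping whole passages in rank order and dropping the rest."""
    packed, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > max_chars:
            break
        packed.append(chunk)
        used += len(chunk)
    return "\n\n".join(packed)

def qa_prompt(question, chunks, max_chars=4000):
    """Build a grounded QA prompt from the packed context."""
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{pack_context(chunks, max_chars)}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Because chunks arrive ranked by relevance, the greedy cut-off drops the least relevant material first.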
Tried to make a deez nuts joke but the model is too dumb for me to make fun of it
2
2023-12-17T04:52:14
https://i.redd.it/uhc8l6d5ds6c1.jpeg
mesayousa
i.redd.it
1970-01-01T00:00:00
0
{}
18k9ipf
false
null
t3_18k9ipf
/r/LocalLLaMA/comments/18k9ipf/tried_to_make_a_deez_nuts_joke_but_the_model_is/
false
false
https://b.thumbs.redditm…hOiDpHEHmvIU.jpg
2
{'enabled': True, 'images': [{'id': 'Kj-eGRuySBquQ6_gYZpzknoUdrN5l322RAiDNhF-ZHw', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?width=108&crop=smart&auto=webp&s=0117581769a4fa06862691a5fad58b92be798e8c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?width=216&crop=smart&auto=webp&s=acba36ed9279fc10a0e00623e50ce336217f7217', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?width=320&crop=smart&auto=webp&s=c83b6249aa68969983b247f90adea0a83d440672', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?width=640&crop=smart&auto=webp&s=04e6ab11838be327eaaeb7bae46e1ab9c648eea1', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?width=960&crop=smart&auto=webp&s=e0326d418ad7cfce9ebc979e0a49cb3fb68e1d59', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?width=1080&crop=smart&auto=webp&s=5faa7b2939e52600f0c49dc6968ee308abbc9d8c', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://preview.redd.it/uhc8l6d5ds6c1.jpeg?auto=webp&s=1abd7c82c6ba46cc93205c8165118da3cf5da08f', 'width': 1170}, 'variants': {}}]}
i need a swift ui to interface with a websocket server. how do i bundle both
1
I'd like my SwiftUI app to get updates from a WebSocket server, but I want this WebSocket server included in the app bundle. So do I just write the WebSocket server in Swift, compile it as a binary, include it in the bundle, and manage it from the Swift code?
2023-12-17T04:43:42
https://www.reddit.com/r/LocalLLaMA/comments/18k9dmc/i_need_a_swift_ui_to_interface_with_a_websocket/
herozorro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k9dmc
false
null
t3_18k9dmc
/r/LocalLLaMA/comments/18k9dmc/i_need_a_swift_ui_to_interface_with_a_websocket/
false
false
self
1
null
Mixtral on an i7-7820HQ, 32GB RAM laptop is pretty decent for chatting
20
I installed KoboldAI and dolphin-2.5-mixtral-8x7b-gguf on an old laptop today (i7-7820HQ, 32GB RAM). The speed is like watching someone type. I normally use LLMs for brainstorming, so that speed really isn't bad at all. It's pretty amazing to be able to run it locally on a $250 laptop!
2023-12-17T04:37:39
https://www.reddit.com/r/LocalLLaMA/comments/18k99x5/mixtral_on_an_i77820hq_32gb_ram_laptop_is_pretty/
DKW0000001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k99x5
false
null
t3_18k99x5
/r/LocalLLaMA/comments/18k99x5/mixtral_on_an_i77820hq_32gb_ram_laptop_is_pretty/
false
false
self
20
null
Can someone tell me how to use this model
1
[removed]
2023-12-17T04:19:01
https://www.reddit.com/r/LocalLLaMA/comments/18k8yga/can_someone_tell_me_how_to_use_this_model/
throwfalseaway12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k8yga
false
null
t3_18k8yga
/r/LocalLLaMA/comments/18k8yga/can_someone_tell_me_how_to_use_this_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yyc20imk2jDaMyTtiGEeH9sc5jRHFDPY1F7immamzdw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?width=108&crop=smart&auto=webp&s=507c2a04086207ffd43c00f538ecb03cb0d7972d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?width=216&crop=smart&auto=webp&s=f4cd10958296f13a34b6d1f007ae305d344609dc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?width=320&crop=smart&auto=webp&s=f52bf8b8dd803b55f2dae6bc8bd16fc273c1ec85', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?width=640&crop=smart&auto=webp&s=b594af832f67af1b625ee7bb4f055244b184fc9f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?width=960&crop=smart&auto=webp&s=f703f936514bb43141ef37ac921d4762664bc700', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?width=1080&crop=smart&auto=webp&s=da5b4f87c0f9ff791f22bc5d6eb0cf33da07710c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HPTjHyuYnAZvB2MVSx1dk3P3SOuTl2C6u9czKq0fTEA.jpg?auto=webp&s=4ec252f60f1440d940b5335e9205707219702e2d', 'width': 1200}, 'variants': {}}]}
3080 16gb laptop + desktop gpus?
1
Hi, I found a good deal on a laptop priced similarly to a used 3090. I like the convenience and capability of such a device, and the fact that image training, CUDA TTS, and any ML GitHub project would work hassle-free. The bandwidth is 448.0 GB/s, which is higher than a 3060 or 4060 Ti. I was wondering whether I could use specialized software to run this GPU in a multi-GPU configuration when I'm not otherwise using the laptop. Low bandwidth would not matter here, so I'm wondering if someone has done this before or found something to do it with. I would like to run software like exllamav2.
2023-12-17T04:08:30
https://www.reddit.com/r/LocalLLaMA/comments/18k8s2n/3080_16gb_laptop_desktop_gpus/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k8s2n
false
null
t3_18k8s2n
/r/LocalLLaMA/comments/18k8s2n/3080_16gb_laptop_desktop_gpus/
false
false
self
1
null
Any recommendations for using a 980ti?
2
Or is it just too old? Stable Diffusion works OK, but not amazingly. I tried to get a 7B model to load, but it crashed my machine after a few attempts. I thought the 6GB of VRAM would at least count for something. Anyone have experience running an LLM on this card?
2023-12-17T04:00:15
https://www.reddit.com/r/LocalLLaMA/comments/18k8mxr/any_recommendations_for_using_a_980ti/
BriansRevenge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k8mxr
false
null
t3_18k8mxr
/r/LocalLLaMA/comments/18k8mxr/any_recommendations_for_using_a_980ti/
false
false
self
2
null
LLM_Web_search for text-generation-webui: it's pretty awesome!
43
The following conversation was had using Mistral 7B with the **LLM_Web_search** extension installed. I really like that it decides by itself when it wants to search and what terms it will use!

> **Searcher**: How can I help you today, using the internet as my guide!
>
> **You**: What was the last Patriots game final score?
>
> **Searcher**: I'd be happy to help you with that! Let me just check the most recent information I can find.
>
> Search_web: "Last Patriots game final score"
>
> **Searcher**: Based on the information I've found, the last Patriots game final score was 21-18 in favor of the Patriots against the Steelers.
>
> **You**: When was that game played?
>
> **Searcher**: The last Patriots game against the Steelers was played on December 7, 2023.

You can check it out here: [https://github.com/mamei16/LLM_Web_search](https://github.com/mamei16/LLM_Web_search)

I personally couldn't get the "conda env" thing to work to install all of the requirements, so I manually (and temporarily) added these requirements to requirements.txt and ran update_windows.bat (under wandb):

    faiss-cpu==1.7.4
    duckduckgo-search==4.1.0
    beautifulsoup4==4.12.2
    langchain==0.0.177
    unstructured==0.6.6

I also added a "Searcher" character with this context:

    A chat between a curious user and artificial intelligence assistant. The assistant is never confident about facts and up to date information. The assistant can search the web for facts and up to date information using the following search command format: Search_web: "<|query|>" The search tool will search the web for these keywords and return the results. Finally, the assistant extracts the information from the results of the search tool to guide its response.

I find it quite neat, and I hope it gets a little more love and is perhaps considered as a default extension. It took me a while to find a searcher this good!
2023-12-17T03:46:46
https://www.reddit.com/r/LocalLLaMA/comments/18k8f2x/llm_web_search_for_textgenerationwebui_its_pretty/
phr00t_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k8f2x
false
null
t3_18k8f2x
/r/LocalLLaMA/comments/18k8f2x/llm_web_search_for_textgenerationwebui_its_pretty/
false
false
self
43
{'enabled': False, 'images': [{'id': 'TKom3ss809OGr1QYVHNHYAz7hr2eZAv_BugQ_JelnUg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?width=108&crop=smart&auto=webp&s=8fe2c4fbd31b66b407d66a269f7ae25841a3d0db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?width=216&crop=smart&auto=webp&s=ef8f8c21abbd526770633352b74f7900ea686800', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?width=320&crop=smart&auto=webp&s=e8e4055a705a05387329a8c8dc0845d254f4891e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?width=640&crop=smart&auto=webp&s=59d8550452571723b7b3ebc7588ff1d1529737df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?width=960&crop=smart&auto=webp&s=6c7bb1f89171874254b38cbddc5f667d9fc7a506', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?width=1080&crop=smart&auto=webp&s=4163aec1f838c8df6a7422b5303800b27de609f8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qe0hH-vpFe7obaLCVPBhwa2g5_vkdydxAwP5RV1TXU8.jpg?auto=webp&s=cab6b4c5565ec6639e1e26eb42d8e19720a122fc', 'width': 1200}, 'variants': {}}]}
How does speechless-llama2-hermes-orca-platypus-wizardlm-13b compare to mixtral subjectively, to anyone who's played around with both?
1
One might expect Mixtral to have the superior knowledge base, but I think the other model, even within Mixtral's own domains, might have qualities whose value is subjective and missed by mindless benchmarks. So I'm asking the people: what is your subjective intuition about how each model carries us forward in its own contrasting way?
2023-12-17T03:34:35
https://www.reddit.com/r/LocalLLaMA/comments/18k87wl/how_does/
_______DEADPOOL____
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k87wl
false
null
t3_18k87wl
/r/LocalLLaMA/comments/18k87wl/how_does/
false
false
self
1
null
GPU Recommendations
4
Where it starts - I have an old mining rig: 1x 3080 Ti 12GB, 5x 3060 Ti 8GB, 1x 3060 12GB. But they are hooked up to a mining board that only supports PCIE-1. I'm interested in training some LLMs and in general building a machine for AI. What are your suggestions? 1. Get a new board and build a new system (cheaper). 2. Sell these and build with new hardware (selling means I'd have to pay fees). 3. Something else?
2023-12-17T02:16:55
https://www.reddit.com/r/LocalLLaMA/comments/18k6urt/gpu_recommendations/
Second26
self.LocalLLaMA
2023-12-17T02:22:24
0
{}
18k6urt
false
null
t3_18k6urt
/r/LocalLLaMA/comments/18k6urt/gpu_recommendations/
false
false
self
4
null
How long before we have a consumer 32GB LLM accelerator?
28
Shame on AMD and Intel for not having a 32GB competitor to Nvidia. How long before they wake up and smell the coffee? Could we get one that is upgradable to 64 GB of memory? Can we get some low cost, not top speed cards with lots of memory? Is it really that hard? Have zero interest in graphics performance. Can it also optimize for [Mamba](https://arxiv.org/abs/2312.00752) performance? What do you think? [View Poll](https://www.reddit.com/poll/18k6oy4)
2023-12-17T02:07:46
https://www.reddit.com/r/LocalLLaMA/comments/18k6oy4/how_long_before_we_have_a_consumer_32gb_llm/
danielcar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k6oy4
false
null
t3_18k6oy4
/r/LocalLLaMA/comments/18k6oy4/how_long_before_we_have_a_consumer_32gb_llm/
false
false
self
28
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Which 'GPT-ism' in local models annoys you the most?
104
So there's always a ton of people complaining about how they see "As an AI language model" or something like that. However, since I don't really use AI for coding/science/homework, I've never seen this. Instead, since I'm a writer, my least favorite GPT-ism is when it tries to wrap up an entire story in a single generation. You know, like "Together, no matter what challenges may trouble them, the two heroes have become inseparable, sharing not just in their glory and strength as knights but also an unbreakable connection that transcends the ordinary world of humans." What's your personal least favourite GPT-ism? Furthermore, there's been success with training models on only human data, like Noromaid. Do you think it'll catch on?
2023-12-17T02:05:27
https://www.reddit.com/r/LocalLLaMA/comments/18k6nft/which_gptism_in_local_models_annoys_you_the_most/
adumdumonreddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k6nft
false
null
t3_18k6nft
/r/LocalLLaMA/comments/18k6nft/which_gptism_in_local_models_annoys_you_the_most/
false
false
self
104
null
How to use LM Studio as a writing assistant and/or Proofreader?
1
How to use LM Studio as a writing assistant and/or Proofreader?
2023-12-17T01:51:53
https://www.reddit.com/r/LocalLLaMA/comments/18k6eaj/how_to_use_lm_studio_as_a_writing_assistant_andor/
poet3991
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k6eaj
false
null
t3_18k6eaj
/r/LocalLLaMA/comments/18k6eaj/how_to_use_lm_studio_as_a_writing_assistant_andor/
false
false
self
1
null
Essential AI Tools and Libraries: A Guide to Python, Git, C++ Compile Tools, FFmpeg, CUDA, PyTorch - Amazing Resource For AI Beginners
1
[removed]
2023-12-17T01:50:23
https://www.reddit.com/r/LocalLLaMA/comments/18k6da2/essential_ai_tools_and_libraries_a_guide_to/
CeFurkan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k6da2
false
null
t3_18k6da2
/r/LocalLLaMA/comments/18k6da2/essential_ai_tools_and_libraries_a_guide_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dFWyXDVLReIS75Z351RIKJiqtUjmZLljhk2QyHNvgXw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9ns5vfSrrgMV2pJYoPByHIXYjddvrU4bkmD_HgBhyE8.jpg?width=108&crop=smart&auto=webp&s=c086b62dc56d9c20b03527adfa19989bf593d39a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9ns5vfSrrgMV2pJYoPByHIXYjddvrU4bkmD_HgBhyE8.jpg?width=216&crop=smart&auto=webp&s=1bd8f5d6ecc49fd8a5179f8a75d3c1e8500ce5d0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9ns5vfSrrgMV2pJYoPByHIXYjddvrU4bkmD_HgBhyE8.jpg?width=320&crop=smart&auto=webp&s=7b8eb338d9c199182636a565dabaea80aec0b11e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9ns5vfSrrgMV2pJYoPByHIXYjddvrU4bkmD_HgBhyE8.jpg?auto=webp&s=d91824e53965a3a1ae4137c07c112a48053c537a', 'width': 480}, 'variants': {}}]}
Anyone else having trouble running GPTQ Mixtral models in text generation webui?
3
When selecting a GPTQ Mixtral model, I'm getting these errors and it won't load. Anyone else having these problems? C:\\text-generation-webui-main\\installer\_files\\env\\Lib\\site-packages\\gradio\\components\\dropdown.py:231: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: 3 or set allow\_custom\_value=True. warnings.warn( C:\\text-generation-webui-main\\installer\_files\\env\\Lib\\site-packages\\gradio\\components\\dropdown.py:231: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: 128 or set allow\_custom\_value=True. warnings.warn(
2023-12-17T01:11:28
https://www.reddit.com/r/LocalLLaMA/comments/18k5mls/anyone_else_having_trouble_running_gptq_mixtral/
SantaBooks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k5mls
false
null
t3_18k5mls
/r/LocalLLaMA/comments/18k5mls/anyone_else_having_trouble_running_gptq_mixtral/
false
false
self
3
null
ExllamaV2 0.0.11 is released, bringing support for Mixtral models
144
2023-12-17T01:05:07
https://github.com/turboderp/exllamav2/releases/tag/v0.0.11
AmazinglyObliviouse
github.com
1970-01-01T00:00:00
0
{}
18k5i9s
false
null
t3_18k5i9s
/r/LocalLLaMA/comments/18k5i9s/exllamav2_0011_is_released_bringing_support_for/
false
false
https://b.thumbs.redditm…UvpVFiHxg9IY.jpg
144
{'enabled': False, 'images': [{'id': 'zIbt4061SwYOC5uAlx2WNJ6wz99mO2Xu8iiLNcjBvNA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?width=108&crop=smart&auto=webp&s=2e0f4366491d6a8c9a591fd9e20d63111563829b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?width=216&crop=smart&auto=webp&s=5e2283bcee9e9c1a98b91d24a0aebabab9cdc1d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?width=320&crop=smart&auto=webp&s=ac0e541bbd7bdfe357bba41df6b188876219fe15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?width=640&crop=smart&auto=webp&s=0c989041651b68d50301c59e8fa6e89b8a79fb68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?width=960&crop=smart&auto=webp&s=8d23e663351ca30e6044c30ce9425b939b44a1bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?width=1080&crop=smart&auto=webp&s=dc56f30e866285a5a36351b40929dfc68c30296a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Gcbsg_0BgFcEbf5t-8BGwl9lMAgSZ44JMEW5MnieKGM.jpg?auto=webp&s=6bd8091bcc4772bbd3fd0072b996d22f7af51795', 'width': 1200}, 'variants': {}}]}
Trinity 7B - a powerful 7B model for LocalAI
16
This model uses the SLERP method to merge two of the best models on the OpenLLM Leaderboard as of 14th Dec: LeoScorpius-7B-Chat-DPO with our latest GreenNodeLM-7B-v1olet model. It is climbing the leaderboard. Model: [Trinity-v1](https://huggingface.co/jan-hq/trinity-v1) GGUF: [Trinity-v1-GGUF](https://huggingface.co/jan-hq/trinity-v1-GGUF) https://preview.redd.it/vwwr69hh8r6c1.png?width=2297&format=png&auto=webp&s=ef8c0246a66ea147261af45aa42f021756d39b74
2023-12-17T01:04:43
https://www.reddit.com/r/LocalLLaMA/comments/18k5i0g/trinity_7b_a_powerful_7b_model_for_localai/
Own_Procedure_8866
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k5i0g
false
null
t3_18k5i0g
/r/LocalLLaMA/comments/18k5i0g/trinity_7b_a_powerful_7b_model_for_localai/
false
false
https://b.thumbs.redditm…e7B05qqYXMvg.jpg
16
{'enabled': False, 'images': [{'id': 'NIYbDogY4FTN5y15JnJpSqHOV-XhzU0cTnjgKRIRZ5A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?width=108&crop=smart&auto=webp&s=eeb3cc6be0ed447daf2979456f3e9fcbf0bf65df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?width=216&crop=smart&auto=webp&s=b40dd8e189705fdcfff0b6ba08d4b1fa4ce8ac23', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?width=320&crop=smart&auto=webp&s=63d5502e505fb4265166599825298f1f77927d2a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?width=640&crop=smart&auto=webp&s=620687cd67f070393a307e6264fe3220a1e4501c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?width=960&crop=smart&auto=webp&s=9d97b4e2ede131a94aa2d4d8e4eb9f3e514bc7fe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?width=1080&crop=smart&auto=webp&s=abf31cb401b4760e7c007eba1b5126bcfa1c5386', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vT6ZOubLqa_CxwWLBLlSCArZYfzF-w3b9wu5xg73WsI.jpg?auto=webp&s=d6a0f178c5f6808d982e0565db359907418eb42c', 'width': 1200}, 'variants': {}}]}
Most accurate model for server CPU setup?
8
I have a couple of older servers that I was hoping to play around with a local LLM on. I don't really care how long it takes to process and get responses, but I'd like to shoot for the best (most accurate) models. The server I'm thinking of using has the following specs: - 2x 2.1 GHz CPUs with 26 cores each (104 threads total) - 1.5TB RAM - no GPU Where should I focus my research? Most of the responses I see here seem to talk about GPU systems, so I was worried that I wouldn't be able to run anything without one.
2023-12-16T23:54:36
https://www.reddit.com/r/LocalLLaMA/comments/18k444r/most_accurate_model_for_server_cpu_setup/
at_physicaltherapy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k444r
false
null
t3_18k444r
/r/LocalLLaMA/comments/18k444r/most_accurate_model_for_server_cpu_setup/
false
false
self
8
null
Snapdragon x1 elite maybe a "cheap" apple competitor ?
28
Out of interest, do any of you think or hope that the new Qualcomm chips might be a relatively cheap way to get into AI for quantized 34B and potentially 70B models, with 64 GB of RAM and an at least decent NPU? They supposedly can run 13B models on the machine (which I take was meant unquantized). I am slightly optimistic, and it would be neat to have something on the level of an M2 Pro, which hopefully will cost a good bit less. Software will be a big thing though ^^ Anyhow, I wanted to hear some opinions and whether these chips are on the radar for you, my fellow LLM enthusiasts :D
2023-12-16T23:24:08
https://www.google.com/amp/s/www.forbes.com/sites/karlfreund/2023/12/12/qualcomm-makes-its-push-for-pc-relevance-leading-with-ai/amp/
Noxusequal
google.com
1970-01-01T00:00:00
0
{}
18k3hm7
false
null
t3_18k3hm7
/r/LocalLLaMA/comments/18k3hm7/snapdragon_x1_elite_maybe_a_cheap_apple_competitor/
false
false
https://b.thumbs.redditm…QRwYJ2ac-4yA.jpg
28
{'enabled': False, 'images': [{'id': '-vMIdiFogwof_O3yHf3iS-MMtZ5tj7XHYVLDS7Q9Zts', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?width=108&crop=smart&auto=webp&s=9ec63ac2341834844fde9a6b2d351625be7083ba', 'width': 108}, {'height': 61, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?width=216&crop=smart&auto=webp&s=1c802279abd43471537775b3ae62e712a5f1e2e6', 'width': 216}, {'height': 91, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?width=320&crop=smart&auto=webp&s=798dfb76e08bac41f61b45fb2b736a1adaaacc3d', 'width': 320}, {'height': 183, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?width=640&crop=smart&auto=webp&s=78ecf39df96024401deaf236d45f5dc0f7caa2dc', 'width': 640}, {'height': 275, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?width=960&crop=smart&auto=webp&s=bc78540aed6866ef8800593eb587f3986c69a1a5', 'width': 960}, {'height': 309, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?width=1080&crop=smart&auto=webp&s=8c93c14ccd4304b1c82d5aa1d27d55949e4d0346', 'width': 1080}], 'source': {'height': 459, 'url': 'https://external-preview.redd.it/tpC6KW38VTGRwY98NsjRQbLRH5lBZsk-Emlnp8Eb8fM.jpg?auto=webp&s=12fd6aa6ff5d129ce624d7c44da8b295fdc656a5', 'width': 1600}, 'variants': {}}]}
Advise on GOAL: open source projects like Azure Cognitive Search + GPT
1
[removed]
2023-12-16T23:17:20
https://www.reddit.com/r/LocalLLaMA/comments/18k3cne/advise_on_goal_open_source_projects_like_azure/
Pretend-Word2531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k3cne
false
null
t3_18k3cne
/r/LocalLLaMA/comments/18k3cne/advise_on_goal_open_source_projects_like_azure/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KFfJdQptaGXLhvtBUcLyLc7pCd7o2gCky5puXOS9giQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?width=108&crop=smart&auto=webp&s=d92d8408d9881fc0c1bbdd11ba6132d9eb00fc80', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?width=216&crop=smart&auto=webp&s=0ba568a9fb84fc5ffb8c724339f5956730111b60', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?width=320&crop=smart&auto=webp&s=52fce281b44221d753f4c78a2943aaba362aa169', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?width=640&crop=smart&auto=webp&s=0c3cbed18bfe7ef8324bb46039321452a514f32f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?width=960&crop=smart&auto=webp&s=04bdc69eb3d791b71cc7853e7e2760f5a8424e59', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?width=1080&crop=smart&auto=webp&s=2d8c52125735a045acaf6da8de2c1b3b0aa36d05', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/d97ATK0u2Re68XPT0vJD75brLE8HsO_Q8i3rfaPN6pc.jpg?auto=webp&s=494476e8b88f0a2f9a72b1a64c3646045deaa957', 'width': 1280}, 'variants': {}}]}
PiVoT-MoE, 10.7B*4 RP Model
49
> PiVoT-MoE is an advanced AI model specifically designed for roleplaying purposes. It has been trained using a combination of four 10.7B sized experts, each with their own specialized characteristic, all fine-tuned to bring a unique and diverse roleplaying experience. > >The Mixture of Experts (MoE) technique is utilized in this model, allowing the experts to work together synergistically, resulting in a more cohesive and natural conversation flow. The MoE architecture allows for a higher level of flexibility and adaptability, enabling PiVoT-MoE to handle a wide variety of roleplaying scenarios and characters. > >Based on the PiVoT-10.7B-Mistral-v0.2-RP model, PiVoT-MoE takes it a step further with the incorporation of the MoE technique. This means that not only does the model have an expansive knowledge base, but it also has the ability to mix and match its expertise to better suit the specific roleplaying scenario. Only the GGUF version is available now. A single 24GB GPU is enough to run this model with q4.
2023-12-16T22:37:31
https://www.reddit.com/r/LocalLLaMA/comments/18k2jan/pivotmoe_107b4_rp_model/
Warm_Paramedic2528
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k2jan
false
null
t3_18k2jan
/r/LocalLLaMA/comments/18k2jan/pivotmoe_107b4_rp_model/
false
false
self
49
null
x4 or x8 120B MoE possibly?
12
I would love to see someone attempt this with some of the 120B models floating around hugging face. Maybe really give gpt4 a run for its money. Given you'd need to rent quite the machine on runpod or equivalent to run it even quantized.
2023-12-16T22:28:39
https://www.reddit.com/r/LocalLLaMA/comments/18k2cpv/x4_or_x8_120b_moe_possibly/
Dazzling_Ad1507
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k2cpv
false
null
t3_18k2cpv
/r/LocalLLaMA/comments/18k2cpv/x4_or_x8_120b_moe_possibly/
false
false
self
12
null
Most advanced LLM for a Jetson Orin Nano?
4
I’m looking to create a working offline chatbot (with voice input/output) and figure the Jetson Orin Nano is my best bet. Anyone worked on a similar project before? I have some pretty cool and funny ideas I want to try building, but am wondering if any LLMs can fit on the 4 or 8gb versions of the Jetson Orin Nano + have a fast enough t/s. I figured the Jetson class chips are the best bet as of now for an offline edge use-case.
2023-12-16T21:47:38
https://www.reddit.com/r/LocalLLaMA/comments/18k1i2h/most_advanced_llm_for_a_jetson_orin_nano/
fission4433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k1i2h
false
null
t3_18k1i2h
/r/LocalLLaMA/comments/18k1i2h/most_advanced_llm_for_a_jetson_orin_nano/
false
false
self
4
null
Optimizing inference on Pascal GPUs
2
I've been running Q5\_K\_M Mistral GGUF quants on Ooba with my Quadro P4000 (8 GB) for the past few weeks, but haven't been able to get speeds above 10 t/s. Are there any ways to eke out faster inference speeds on Pascal GPUs with Mistral or any other performant models? Any tips or tricks would be appreciated.
2023-12-16T21:35:01
https://www.reddit.com/r/LocalLLaMA/comments/18k18tv/optimizing_inference_on_pascal_gpus/
Aarush_Sah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k18tv
false
null
t3_18k18tv
/r/LocalLLaMA/comments/18k18tv/optimizing_inference_on_pascal_gpus/
false
false
self
2
null
Anybody use mistral-7b-instruct v0.2 model? (Doesn't work well for me)
3
I'm using the latest v0.2 version and it's pretty bad at times. Typically the very first question I ask any model or fine tune is "How many days are in August?" Here are some terrible answers produced by the model: * There are 31 days in August, unless it is a leap year and then there can be an additional day. * There are 31 days in August, except for leap years where there can be an additional day. * August has 31 days, unless it's a leap year when it may have 31 or 32 days. The additional day is added to the month of February, not August. * August has 31 days in a normal year, or 30 days if it is not a leap year and February has 28 days instead of 29. However, since we are currently in a common year (a non-leap year), August will have 31 days in the year 2022. * August has 31 days in a normal year, or 30 days if it is not a leap year and the month follows February with only 28 days. However, in a leap year (every fourth year that is divisible by 4, except for years that are also divisible by 100 but not by 400), August has 31 days. So, to answer your question, August always has 31 days if we consider the possibility of a leap year. * In a regular year, there are 31 days in the month of August. Even a simple fine tune of the base model I did locally can answer this question just fine (with a simple 'August has 31 days'). As does the previous version of the instruct model (v0.1). Am I missing something, or is this update worse? I see no information on what the update is even about. I don't mind them being mysterious and all, but some transparency in this area would be more helpful.
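For reference, the fact the poster is testing against can be checked directly with Python's standard library; only February's length varies with leap years, so August is 31 days in every year:

```python
# Sanity check of the test question: the number of days in a month comes
# straight from the standard library. Only February changes with leap years,
# so August always has 31 days.
import calendar

def days_in_august(year: int) -> int:
    # monthrange returns (weekday of the 1st, number of days in the month)
    return calendar.monthrange(year, 8)[1]

print(days_in_august(2023))  # → 31
print(days_in_august(2024))  # → 31 (leap year; only February differs)
```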
2023-12-16T21:34:39
https://www.reddit.com/r/LocalLLaMA/comments/18k18la/anybody_use_mistral7binstruct_v02_model_doesnt/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k18la
false
null
t3_18k18la
/r/LocalLLaMA/comments/18k18la/anybody_use_mistral7binstruct_v02_model_doesnt/
false
false
self
3
null
Are models getting slower with longer input?
7
I can load a 7B model with 8k context on my GPU, but the longer my input gets, the slower it gets. I am talking about getting three paragraphs in 4 seconds for the first response, but later waiting 1 minute for a single paragraph. When using a streaming API, it gets an acceptable token/s speed once it starts writing, but takes a long pause before it starts outputting anything.
2023-12-16T21:14:47
https://www.reddit.com/r/LocalLLaMA/comments/18k0trd/are_models_getting_slower_with_longer_input/
_allo_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k0trd
false
null
t3_18k0trd
/r/LocalLLaMA/comments/18k0trd/are_models_getting_slower_with_longer_input/
false
false
self
7
null
PSA: You can (and may want to) disable Mixtral's Sliding Window!
61
Was banging my head against the wall wondering why Mixtral-8x7B-Instruct-v0.1 with 32K context wasn't able to summarize a 16K text. My prompt started with the instruction to summarize the following text, but the model simply ignored it all and hallucinated random stuff. I first thought it was an issue with my prompt or the format, but then I remembered the Mistral architecture's Sliding Window Attention feature. By default that's 4K, so apparently the model couldn't pay attention to my instructions anymore as it reached the end of the text, after 16K tokens. Using the Transformers model format, I went into the model folder and edited `config.json`, changing `sliding_window` from `4096` to `32768`. Reloaded the model, reran my prompt, and got the expected summarization. Now I wonder if there's a bug with how Sliding Window Attention works with Mixtral or if that's normal behavior? Wouldn't it affect everyone who uses Mixtral at more than 4K context, as the system prompt at the top would be the first thing to get "unattended"? Or is that why Mixtral's prompt format doesn't even include an official system prompt, knowing it would scroll out of view anyway? That format only has INSTructions and responses, and old instructions will vanish from the context as it fills up, maybe that's why? So while I found a workaround for my issue, I wonder if it's a general problem and this is the right solution? Anyone else having issues with Mixtral (and possibly Mistral) models at >4K context ignoring instructions or other information outside the sliding window?
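The config.json edit the poster describes can be scripted instead of done by hand. A minimal sketch, assuming a Transformers-format model folder (the directory path is an assumption):

```python
# Sketch of the workaround from the post: raise `sliding_window` in a
# Transformers-format Mixtral model's config.json from the default 4096 to
# the full 32768-token context. The model directory path is an assumption;
# the model must be reloaded afterwards for the change to take effect.
import json
from pathlib import Path

def set_sliding_window(model_dir: str, window: int = 32768):
    cfg_path = Path(model_dir) / "config.json"
    cfg = json.loads(cfg_path.read_text())
    old = cfg.get("sliding_window")
    cfg["sliding_window"] = window
    cfg_path.write_text(json.dumps(cfg, indent=2))
    return old  # previous value (e.g. 4096), handy for logging or rollback
```

The same effect could be had by editing the file in a text editor, as the post does; scripting it just makes the change repeatable across model folders.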
2023-12-16T20:55:47
https://www.reddit.com/r/LocalLLaMA/comments/18k0fek/psa_you_can_and_may_want_to_disable_mixtrals/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18k0fek
false
null
t3_18k0fek
/r/LocalLLaMA/comments/18k0fek/psa_you_can_and_may_want_to_disable_mixtrals/
false
false
self
61
{'enabled': False, 'images': [{'id': 'xU07sACUhuScPaN8BIMV0USMZkFwSMGcorjhvoKF4K0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=108&crop=smart&auto=webp&s=9ad9ea4d991fae47edd6a09b5feb6d60eca474de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=216&crop=smart&auto=webp&s=308a202a45ba16483d13920cc1bacec91e523b47', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=320&crop=smart&auto=webp&s=e3032902c70163f83e8380a9d1a220d309cdc7cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=640&crop=smart&auto=webp&s=fde1276456518b0ab6ff6e2f6fd1f198a952883e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=960&crop=smart&auto=webp&s=0e9b19ae8aba7dbb28009cce456e33e378c4c47c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=1080&crop=smart&auto=webp&s=b78d9fadfbf4f7ba88328a51e8bd25a6f20d223f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?auto=webp&s=d55b83d577e907b76d4542af518cf15e4524a728', 'width': 1200}, 'variants': {}}]}
Where To Start?
2
I’m looking to self host a system to give it specific information and have it fetch requested information from provided documents only. I have experience in text to image but this is a little foreign. My understanding is that the foundation for what I am looking for has been developed and is available, but I’m unsure where/how to start implementing it. Any recommendations?
2023-12-16T20:12:46
https://www.reddit.com/r/LocalLLaMA/comments/18jzjdb/where_to_start/
InterestingMain5192
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jzjdb
false
null
t3_18jzjdb
/r/LocalLLaMA/comments/18jzjdb/where_to_start/
false
false
self
2
null
Model and fun prompts for nephew?
9
He’s 11 and has a 2070 super (8gb vram) and 16 gb ram. I got LM Studio with The Bloke Mistral 7b. He’s having fun having it write ridiculous stories. What are other models / use cases for him? Just for fun. He’s a typical, into Fortnite/gaming kid.
2023-12-16T20:07:10
https://www.reddit.com/r/LocalLLaMA/comments/18jzeyp/model_and_fun_prompts_for_nephew/
kwerky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jzeyp
false
null
t3_18jzeyp
/r/LocalLLaMA/comments/18jzeyp/model_and_fun_prompts_for_nephew/
false
false
self
9
null
Conversations with friends
1
[removed]
2023-12-16T19:20:26
https://www.reddit.com/r/LocalLLaMA/comments/18jyg0a/conversations_with_friends/
Ettaross
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jyg0a
false
null
t3_18jyg0a
/r/LocalLLaMA/comments/18jyg0a/conversations_with_friends/
false
false
self
1
null
The Best GPUs for Deep Learning in 2023
89
2023-12-16T19:17:55
https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/
semicausal
timdettmers.com
1970-01-01T00:00:00
0
{}
18jye6i
false
null
t3_18jye6i
/r/LocalLLaMA/comments/18jye6i/the_best_gpus_for_deep_learning_in_2023/
false
false
https://b.thumbs.redditm…5nolONhytvsM.jpg
89
{'enabled': False, 'images': [{'id': 'hRzrP-m1lWiqRsPC9clNfPnRc_tCRGpGzbHrCBCO32w', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=108&crop=smart&auto=webp&s=d3bf136e531bc2c273b80183994e8247105e2fde', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=216&crop=smart&auto=webp&s=8312ef413052630840430cce4eddd0df5a97837d', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=320&crop=smart&auto=webp&s=31497781ba569c77637ae2e7b2a9d9d9e23cdf18', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=640&crop=smart&auto=webp&s=ec2a465f7464922ca502ad46140073491ebe4459', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=960&crop=smart&auto=webp&s=2033ead52fd43fd3292f91dee783d2c8d95076f6', 'width': 960}, {'height': 1060, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=1080&crop=smart&auto=webp&s=ef5353f6d1591adb33ccfcde0666e1985332522a', 'width': 1080}], 'source': {'height': 1673, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?auto=webp&s=e60975cadfde4e88bb0269f7b64a42bf2818083a', 'width': 1703}, 'variants': {}}]}
Is it possible to finetune a small LLM (7B) to interpret a single character, without the need for a character card?
1
I'm curious if it would be possible to train an LLM to play and be a single character (both in chat and instruct mode), giving it all the information it needs through finetuning?
2023-12-16T18:58:16
https://www.reddit.com/r/LocalLLaMA/comments/18jxytd/is_it_possible_to_finetune_a_small_llm_7b_to/
Aril_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jxytd
false
null
t3_18jxytd
/r/LocalLLaMA/comments/18jxytd/is_it_possible_to_finetune_a_small_llm_7b_to/
false
false
self
1
null
Guide to run Mixtral correctly. I see a lot of people using the wrong settings / setup which makes it go schizo or repetitive.
152
2023-12-16T18:31:18
https://rentry.org/HowtoMixtral
Different_Fix_2217
rentry.org
1970-01-01T00:00:00
0
{}
18jxehq
false
null
t3_18jxehq
/r/LocalLLaMA/comments/18jxehq/guide_to_run_mixtral_correctly_i_see_a_lot_of/
false
false
https://b.thumbs.redditm…WZ17vidL77wo.jpg
152
{'enabled': False, 'images': [{'id': 'C-X4vFW_HFwis_-uHlyBVcfzqYrybqx0OpVjUR6hMGs', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/l64Dz2MMqFrXWFChndInlSXfCPYQ1FFe4i5cbiy6cX0.jpg?width=108&crop=smart&auto=webp&s=0ef38a857d2d493f52effac01ff95da9e7780070', 'width': 108}, {'height': 189, 'url': 'https://external-preview.redd.it/l64Dz2MMqFrXWFChndInlSXfCPYQ1FFe4i5cbiy6cX0.jpg?width=216&crop=smart&auto=webp&s=1925cc8579bbe60dff28239e03f61d8376fa31e9', 'width': 216}, {'height': 280, 'url': 'https://external-preview.redd.it/l64Dz2MMqFrXWFChndInlSXfCPYQ1FFe4i5cbiy6cX0.jpg?width=320&crop=smart&auto=webp&s=764403856c7fea184384b20bc2e7ab0a9891c202', 'width': 320}, {'height': 561, 'url': 'https://external-preview.redd.it/l64Dz2MMqFrXWFChndInlSXfCPYQ1FFe4i5cbiy6cX0.jpg?width=640&crop=smart&auto=webp&s=da08bdc46d6817d6296e02662d33937807562b54', 'width': 640}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/l64Dz2MMqFrXWFChndInlSXfCPYQ1FFe4i5cbiy6cX0.jpg?auto=webp&s=5d213fd6b0994ac6e4899fd3bee3a59a27f78de3', 'width': 715}, 'variants': {}}]}
MoE test from CasualLM, bad performance?
1
[deleted]
2023-12-16T18:25:14
https://x.com/RealJosephus/status/1735898788949287342?s=20
ninjasaid13
x.com
1970-01-01T00:00:00
0
{}
18jx9w5
false
null
t3_18jx9w5
/r/LocalLLaMA/comments/18jx9w5/moe_test_from_casuallm_bad_performance/
false
false
https://b.thumbs.redditm…8Mpt5M4zQ7HQ.jpg
1
{'enabled': False, 'images': [{'id': '_me4ExZcbrMjmGGnXNrAOImbOxtbwf4ROiQeROS7RDM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ke1MMuVwm99NKD64TkIimoWHhGJ-MXF8QGYjrGaRA74.jpg?width=108&crop=smart&auto=webp&s=306d4ef3cbf0196743160f9780e68b1c07f91360', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Ke1MMuVwm99NKD64TkIimoWHhGJ-MXF8QGYjrGaRA74.jpg?auto=webp&s=d132fa041a90f1557f240d306d12ab3c119ae3a4', 'width': 200}, 'variants': {}}]}
How do LLMs learn to use tools?
11
Anyone got any good educational resources they can share, in general or specific to Local LLMs?
2023-12-16T18:14:05
https://www.reddit.com/r/LocalLLaMA/comments/18jx1ci/how_do_llms_learn_to_use_tools/
Nano_9a9o
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jx1ci
false
null
t3_18jx1ci
/r/LocalLLaMA/comments/18jx1ci/how_do_llms_learn_to_use_tools/
false
false
self
11
null
Is a local-LLM assistant a realistic goal at this point?
52
AI/LLM is crazy to wrap my head around. So many moving parts and variables to compare, and so much is moving (and search engines are getting so bad) that it's hard to tell if posts from six months ago have any real relevance today. For reference on where I sit: I call myself a script kiddie. I can run most projects I find on GitHub, and I can adapt some to suit what I need, but I'm not much one for architecting new solutions, especially on complex issues. AI assistants come up a lot, but what that means is usually poorly defined: an expert consultant on coding, writing, or what? How do you want to interact with it? Etc. How I'd define an assistant: 1. The ability to interface with other systems. In a vacuum it's just a sounding board, but I'd like the ability for it to update/check calendars, run a web search to compare its own answer against results, give summaries of emails, documents, etc. Tied into this, some way to trigger behaviors from external systems. - "If I get an email from my boss, reply with 'I'm working on it' in 10 minutes, and add it to my task list in Obsidian" 2. Memory - If you have to constantly repeat the same information, it kinda defeats the purpose. I know there are options for using vector databases and RAG to try to get around this, but you'd still need some intelligence to figure out what's worth pulling from the DB/documents to include in the request sent forward, or what information from a given message is worth storing. I can't figure out if we have a real solution for that yet, or if everything just gives a bit of improvement and you'll still see LLMs with dodgy memory. 3. Still hard to define, but the ability to learn, to get better at the kinds of tasks set for it. 4. Should be easy to interact with when needed, but also ideally able to handle multiple users, so you can adjust permissions, user preferences, etc.
From what I've seen looking around: Langchain might be the answer to 1, and might assist with 2, but reading posts on this subreddit I see a lot of users talking about not being able to get Langchain to work, and even then it's a framework, not so much a plug-and-play solution. MemGPT seems to have started a buzz, but I don't see people reporting on their experience deploying it. SillyTavern, as much as it's built for roleplaying, feels like it covers a lot of these options with all its extensions and extras, but having played with it, I'm left feeling mostly disappointed. One moment I catch it doing something clever, the next it has problems following a basic description. I can't tell if my disappointment is due to bad settings, prompting, model choices or limitations, etc., but I'm overall left with the impression that this isn't possible unless I'm able to invest a lot of time into learning and connecting some frameworks and projects, and even then performance may be spotty.
2023-12-16T18:14:02
https://www.reddit.com/r/LocalLLaMA/comments/18jx1bl/is_a_localllm_assistant_a_realistic_goal_at_this/
moarmagic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jx1bl
false
null
t3_18jx1bl
/r/LocalLLaMA/comments/18jx1bl/is_a_localllm_assistant_a_realistic_goal_at_this/
false
false
self
52
null
Amica 0.2 released - now has binaries for Windows, Mac, and Linux, and the ability to share and import characters
29
Hi, Since the initial launch announcement a few weeks ago, [Amica](https://heyamica.com/) has received a lot of updates. I am excited to announce 0.2.0 is now ready. https://github.com/semperai/amica/releases/tag/app-v0.2.0 # Amica 0.2.0 This release of Amica includes desktop builds for Mac, Windows, and Linux. It also supports sharing and importing characters. ## New features * Share and import characters * Support for loading Mixamo animations * Added button to mute TTS temporarily * Added button to toggle unrestricted message size * Add support for Ollama vision * Add support for KoboldAI * Add whisper.cpp STT backend * Add support for OpenAI TTS * Add i18n support * Added docs site https://docs.heyamica.com/ * Added ability to change background color * Added ability for background to be transparent * Added ability to change name * Added notification for network errors ## Bug fixes * Fix VAD error: "The object can not be found" * Fix loading VRM, stop previous animation before starting newly loaded * Fix vision_llama backend page selector * Fix OpenAI TTS link * Fix removal of non-ascii characters * Prune emojis from TTS * Prompt to reload if background video changed * Fix mobile layout ## Miscellaneous * Improved default system prompt * Redesigned the UI * Refactor settings page * Basic test setup https://github.com/semperai/amica/releases/tag/app-v0.2.0 I hope this release makes it much easier for users to setup and use Amica. If you have any questions please share in the comments!
2023-12-16T17:40:50
https://www.reddit.com/r/LocalLLaMA/comments/18jwbkf/amica_02_released_now_has_binaries_for_windows/
arbiusai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jwbkf
false
null
t3_18jwbkf
/r/LocalLLaMA/comments/18jwbkf/amica_02_released_now_has_binaries_for_windows/
false
false
self
29
{'enabled': False, 'images': [{'id': 'xIwLQvsokSf2xQMHkQy-SiWcc8MlZg-rWn3i-ntjWpM', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/YvaWFMhc7Eu1Ew_E4dyuxmaNlL4d8H5gQkJJLjntomM.jpg?width=108&crop=smart&auto=webp&s=5566a47bfb94c18f77713ad0441c5441765a50f3', 'width': 108}, {'height': 96, 'url': 'https://external-preview.redd.it/YvaWFMhc7Eu1Ew_E4dyuxmaNlL4d8H5gQkJJLjntomM.jpg?width=216&crop=smart&auto=webp&s=3cf31138c21f22a51276fe8e9f872c565412d28a', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/YvaWFMhc7Eu1Ew_E4dyuxmaNlL4d8H5gQkJJLjntomM.jpg?width=320&crop=smart&auto=webp&s=552aaedc16b59b0ed40a13e3d5a983a5c9f9ddd2', 'width': 320}, {'height': 285, 'url': 'https://external-preview.redd.it/YvaWFMhc7Eu1Ew_E4dyuxmaNlL4d8H5gQkJJLjntomM.jpg?width=640&crop=smart&auto=webp&s=da011562bc40d46c452f88fbef3c34e22828f0ab', 'width': 640}], 'source': {'height': 384, 'url': 'https://external-preview.redd.it/YvaWFMhc7Eu1Ew_E4dyuxmaNlL4d8H5gQkJJLjntomM.jpg?auto=webp&s=d6cda73cd16988756514da40935c81abb95361d5', 'width': 860}, 'variants': {}}]}
Running models on windows with amd hardware? (7900xtx)
1
I can't find any resources online on how to do this. The only thing I could think of is torch-directml, but I have no idea how to make the models use it over the base torch module. Any help would be hugely appreciated!
2023-12-16T17:22:58
https://www.reddit.com/r/LocalLLaMA/comments/18jvxlu/running_models_on_windows_with_amd_hardware/
Matt_1F44D
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jvxlu
false
null
t3_18jvxlu
/r/LocalLLaMA/comments/18jvxlu/running_models_on_windows_with_amd_hardware/
false
false
default
1
null
Fantasy writing assistant model?
6
I've recently discovered LM Studio, which allows me to quickly chat with downloaded models, plus a plugin for ingesting my own fantasy novel content. I've done well choosing an LLM and embedding model that work pretty well, but I'm wondering if anyone else has experience with this. Some models are too dry - asking a question gets an answer that's either 100% true/false. Meaning if I ask what chapter a character dies in, and my ingested document doesn't *clearly* state them *dying*, it'll tell me it never happened. Some models are too vague - as if they take a rounded idea of my novel and just make stuff up to fill in the blanks. ChatGPT does a pretty good job at understanding the context but is clearly limited by the fact that I cannot feed it a 50-page text. Right now I'm using: TheBloke/MythoMax-L2-13B-GGML/mythomax-l2-13b.ggmlv3.q4_K_S.bin Embedding model: hkunlp--instructor-base
2023-12-16T17:17:31
https://www.reddit.com/r/LocalLLaMA/comments/18jvtfq/fantasy_writing_assistant_model/
TheCouncilNovel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jvtfq
false
null
t3_18jvtfq
/r/LocalLLaMA/comments/18jvtfq/fantasy_writing_assistant_model/
false
false
self
6
null
Newbie, searching among text intelligently
3
Here's my request: I have tons of text (in German). Among these piles of messages there are parts that interest me, but I can't just search for a keyword; I have to search by context. For example, I'd tell a software program: "find all the parts that contain a conversation about flowers". What kind of software could help me do this? Preferably locally on my PC. And by the way, how could LLaMA help me with this (which is why I'm here)?
2023-12-16T16:42:36
https://www.reddit.com/r/LocalLLaMA/comments/18jv2d3/newbie_searching_among_text_intelligently/
soumdeal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jv2d3
false
null
t3_18jv2d3
/r/LocalLLaMA/comments/18jv2d3/newbie_searching_among_text_intelligently/
false
false
self
3
null
Outfit anyone OSS equivalent
1
Is anyone working on, or does anyone know of, a model similar to Outfit Anyone? I'm working on a project that would have a ton of use for it, but right now theirs is just a demo. Any solutions or ideas?
2023-12-16T16:40:53
https://www.reddit.com/r/LocalLLaMA/comments/18jv10m/outfit_anyone_oss_equivalent/
How_i_met_your_bro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jv10m
false
null
t3_18jv10m
/r/LocalLLaMA/comments/18jv10m/outfit_anyone_oss_equivalent/
false
false
self
1
null
Snarky AIs?
1
[removed]
2023-12-16T16:36:02
https://www.reddit.com/r/LocalLLaMA/comments/18juxfv/snarky_ais/
Enspiredjack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18juxfv
false
null
t3_18juxfv
/r/LocalLLaMA/comments/18juxfv/snarky_ais/
false
false
self
1
null
very Slow speeds on 2x4090 in the last few days.
6
Currently running a 7950X with 2x 4090. Just a few days ago I was getting great speeds on most LLMs I tried; now I'm only getting around 1.25 tokens/s on aetheria-l2-70b.Q3. I've tried reinstalling Ooba and updating Kobold, but both are slower than ever. It may be because I updated my Nvidia drivers, but I'm not sure what version I was running before; I also tried the Studio drivers with no luck. Any help would be much appreciated.
2023-12-16T16:20:02
https://www.reddit.com/r/LocalLLaMA/comments/18jul75/very_slow_speeds_on_2x4090_in_the_last_few_days/
asdfgbvcxz3355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jul75
false
null
t3_18jul75
/r/LocalLLaMA/comments/18jul75/very_slow_speeds_on_2x4090_in_the_last_few_days/
false
false
self
6
null
A new way of MOE
1
[removed]
2023-12-16T16:18:07
https://www.reddit.com/r/LocalLLaMA/comments/18jujx4/a_new_way_of_moe/
AI_lover_9669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jujx4
false
null
t3_18jujx4
/r/LocalLLaMA/comments/18jujx4/a_new_way_of_moe/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oEbTU1Y2fB-Za8R-jQULSh-lYktf0zRPYPfCqS5H1gQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=108&crop=smart&auto=webp&s=12175e0b9d13008ecb3b08eb85c9ed49fa2d4848', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=216&crop=smart&auto=webp&s=f6f6edfd5a7d65aca2e5a5cdea137b3c5cad9578', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=320&crop=smart&auto=webp&s=ccd2872b7e09103ee57e29f29e15f7ee03582999', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=640&crop=smart&auto=webp&s=03425b6bd06644ff19ef83c941814cfe824a027e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=960&crop=smart&auto=webp&s=4016dcf6ecc0b6984ff606275cedd8beadf5a1d7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=1080&crop=smart&auto=webp&s=78f82b60059588b3ef9dbaf27d849d7605c3e38d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?auto=webp&s=7c826c7858de52dd0a149ae94aba2c8fc0676692', 'width': 1200}, 'variants': {}}]}
Need help to build a RAG application on legal data - vectorDB vs Knowledge graph
2
I need suggestions in this space. I need precise answers to certain questions about legal data. Vector databases seem to be bad at this, while knowledge graphs are good. But converting legal documents into knowledge graphs is pretty difficult and approximate. How do I decide? Should I use both? Are there any existing tutorials on this?
2023-12-16T16:11:51
https://www.reddit.com/r/LocalLLaMA/comments/18jufbx/need_help_to_build_a_rag_application_on_legal/
maayon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jufbx
false
null
t3_18jufbx
/r/LocalLLaMA/comments/18jufbx/need_help_to_build_a_rag_application_on_legal/
false
false
self
2
null
A new way of MOE???
1
[removed]
2023-12-16T16:09:29
https://www.reddit.com/r/LocalLLaMA/comments/18judgm/a_new_way_of_moe/
AI_lover_9669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18judgm
false
null
t3_18judgm
/r/LocalLLaMA/comments/18judgm/a_new_way_of_moe/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oEbTU1Y2fB-Za8R-jQULSh-lYktf0zRPYPfCqS5H1gQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=108&crop=smart&auto=webp&s=12175e0b9d13008ecb3b08eb85c9ed49fa2d4848', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=216&crop=smart&auto=webp&s=f6f6edfd5a7d65aca2e5a5cdea137b3c5cad9578', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=320&crop=smart&auto=webp&s=ccd2872b7e09103ee57e29f29e15f7ee03582999', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=640&crop=smart&auto=webp&s=03425b6bd06644ff19ef83c941814cfe824a027e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=960&crop=smart&auto=webp&s=4016dcf6ecc0b6984ff606275cedd8beadf5a1d7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?width=1080&crop=smart&auto=webp&s=78f82b60059588b3ef9dbaf27d849d7605c3e38d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4IrLivuY4U7nrHkhHN_rwkaFxAjb-OCY1woXiEOdkvQ.jpg?auto=webp&s=7c826c7858de52dd0a149ae94aba2c8fc0676692', 'width': 1200}, 'variants': {}}]}
What's the smallest but still useful model you encountered
40
Hi, I'm interested in your opinion on the topic: what's the smallest but still usable-for-something model you encountered? By useful I mean for any task, from generating tags from text and sentiment analysis to full-fledged text generation and conversations; translations would also be interesting. And by small I mean below 7B, the size that became popular when Mistral launched. There have been some similar topics on this sub, but the model landscape moves so fast that there may be some recent small models worth noting. I'm asking because I was recently impressed by the phi-2 model; I had never before considered using models so small. I was surprised by the small size of even Mistral 7B when it launched, but it seems small models can still be useful for some tasks. I've not yet tried running phi-2 locally, because I mainly use llama.cpp and I'm waiting for support for models like phi-2 in it, but maybe there are some other small models like it already available in llama.cpp.
2023-12-16T15:52:41
https://www.reddit.com/r/LocalLLaMA/comments/18ju0uw/whats_the_smallest_but_stil_useful_model_you/
Fisent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ju0uw
false
null
t3_18ju0uw
/r/LocalLLaMA/comments/18ju0uw/whats_the_smallest_but_stil_useful_model_you/
false
false
self
40
null
Mixtral can also handle number series questions
16
[mixtral-8x7b-instruct-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q6_K.gguf) ----------------------------------------------------- In the series below, look for the degree and direction of change between the numbers. In other words, do the numbers increase or decrease, and by how much? Look at this series: 7, 10, 8, 11, 9, 12, ... What number should come next? 7 10 12 13 Think this through logically step by step. Look for any pattern or trend in the numbers to see how they increase and decrease from one term to another: - From 7 to 10, we have an increase of 3. - Then from 10 back down to 8, we have a decrease of 2. - Next, there's an increase from 8 to 11, adding 3 again. - But then it goes down from 11 to 9 by subtracting 2 once more. - This pattern continues with another addition of 3 taking us from 9 to 12. So, following this alternating increase/decrease pattern, the next number should decrease by 2 from the last given term, 12. That would give us a new number: 10 Final answer: The next number in the series is 10.
2023-12-16T15:44:33
https://www.reddit.com/r/LocalLLaMA/comments/18jtuz8/mixtral_can_also_handle_number_series_questions/
Scary-Knowledgable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jtuz8
false
null
t3_18jtuz8
/r/LocalLLaMA/comments/18jtuz8/mixtral_can_also_handle_number_series_questions/
false
false
self
16
{'enabled': False, 'images': [{'id': 'zjNIGTMwA8_2859f_IZsUxwU-LAtGdWFd1GsZDfnEwI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=108&crop=smart&auto=webp&s=7e4d40db3f52b19bf486502af3ae70ccdb9267c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=216&crop=smart&auto=webp&s=b3999b5f8b3b0b58cfc4c46f92bff5ae7bfc2673', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=320&crop=smart&auto=webp&s=b08fd6444215d5931933d7bf1cfbf1b2c1886804', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=640&crop=smart&auto=webp&s=cfba937d7b846aa1e404f0141ded4b0ad2063e5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=960&crop=smart&auto=webp&s=b7868598636843e9a0bf1c329f5e971f579f4742', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=1080&crop=smart&auto=webp&s=dc77f2f44d1de80f2c993e9339750fc5b5f5bd3e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?auto=webp&s=a57ad70492f52a87f5b93de7663bf81140506dfd', 'width': 1200}, 'variants': {}}]}
Mixtral is great with analogies and the right prompt
28
[mixtral-8x7b-instruct-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q6_K.gguf) -------------------------------------------------- You are an expert in analogies. Marathon is to race as hibernation is to winter bear dream sleep Think this through logically step by step. Is a marathon the same thing as a winter? Obviously not! But if we look at it differently, maybe a marathon isn't an event but rather a state of being. When you run a marathon, you are in a race for a long time - just like how bears spend their winters hibernating. Now that we have our analogy framework, let's fill in the blanks: Marathon is to race as hibernation is to [winter/bear/dream/sleep]. - Winter: While it's true that bears hibernate during winter, this doesn't fully capture the essence of the analogy. Winter is a season, not a state of being or action. So, we can eliminate 'winter'. - Bear: This seems like a plausible answer because bears do indeed hibernate. However, the focus here should be on the act of hibernating itself, rather than who performs it. Therefore, 'bear' isn't the best fit either. - Dream: We dream when we sleep, but dreams and hibernation aren't equivalent. - Sleep: This fits perfectly because just as running a marathon means being in a race for an extended period, hibernation refers to a state of prolonged rest or torpor characteristic of certain mammals (like bears). So the correct answer is: Marathon is to race as hibernation is to sleep.
2023-12-16T15:04:41
https://www.reddit.com/r/LocalLLaMA/comments/18jt2ms/mixtral_is_great_with_analogies_and_the_right/
Scary-Knowledgable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jt2ms
false
null
t3_18jt2ms
/r/LocalLLaMA/comments/18jt2ms/mixtral_is_great_with_analogies_and_the_right/
false
false
self
28
{'enabled': False, 'images': [{'id': 'zjNIGTMwA8_2859f_IZsUxwU-LAtGdWFd1GsZDfnEwI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=108&crop=smart&auto=webp&s=7e4d40db3f52b19bf486502af3ae70ccdb9267c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=216&crop=smart&auto=webp&s=b3999b5f8b3b0b58cfc4c46f92bff5ae7bfc2673', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=320&crop=smart&auto=webp&s=b08fd6444215d5931933d7bf1cfbf1b2c1886804', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=640&crop=smart&auto=webp&s=cfba937d7b846aa1e404f0141ded4b0ad2063e5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=960&crop=smart&auto=webp&s=b7868598636843e9a0bf1c329f5e971f579f4742', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=1080&crop=smart&auto=webp&s=dc77f2f44d1de80f2c993e9339750fc5b5f5bd3e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?auto=webp&s=a57ad70492f52a87f5b93de7663bf81140506dfd', 'width': 1200}, 'variants': {}}]}
[Koboldcpp + SillyTavern] Please help me find the right settings for my setup!
1
2023-12-16T14:49:59
https://www.reddit.com/gallery/18jsrye
Benjamin_swoleman
reddit.com
1970-01-01T00:00:00
0
{}
18jsrye
false
null
t3_18jsrye
/r/LocalLLaMA/comments/18jsrye/koboldpcc_sillytavern_please_help_me_find_the/
false
false
default
1
null
Tokens per Second Mistral 8x7B - Performance
14
Hey, does anyone have data about the performance of Mistrals 8x7B on different GPUs? Thank you!
2023-12-16T14:41:09
https://www.reddit.com/r/LocalLLaMA/comments/18jslmf/tokens_per_second_mistral_8x7b_performance/
Bene0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jslmf
false
null
t3_18jslmf
/r/LocalLLaMA/comments/18jslmf/tokens_per_second_mistral_8x7b_performance/
false
false
self
14
null
[Question] - Chat UI for Mistral API
6
Hi all, does anyone know a good out-of-the-box chat UI that can be self-hosted and interacts with the Mistral API (Mixtral 8x7B)? I found quite a number of solutions for the ChatGPT API, but I need something compatible with the Mistral APIs, as my local hardware is unfortunately not powerful enough to run models locally. Thanks for your help!!!
2023-12-16T14:35:23
https://www.reddit.com/r/LocalLLaMA/comments/18jshor/question_chat_ui_for_mistral_api/
ghac101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jshor
false
null
t3_18jshor
/r/LocalLLaMA/comments/18jshor/question_chat_ui_for_mistral_api/
false
false
self
6
null
How Sparse MoE(Mixtral) actually works
1
I've seen quite a few people ask how exactly sparse MoE (for example, Mixtral) works, what an "expert" actually means, and how inference is done. This article gives a really good introduction.
2023-12-16T14:22:22
https://towardsdatascience.com/mixtral-8x7b-understanding-and-running-the-sparse-mixture-of-experts-0e3fc7fde818
Eastwindy123
towardsdatascience.com
1970-01-01T00:00:00
0
{}
18js8k9
false
null
t3_18js8k9
/r/LocalLLaMA/comments/18js8k9/how_sparse_moemixtral_actually_works/
false
false
https://b.thumbs.redditm…wzwHwUZdkKKA.jpg
1
{'enabled': False, 'images': [{'id': 'BhpvLBlwlu2U-6Osy6z6j2P-DKlYg3sSlO2ASet0558', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?width=108&crop=smart&auto=webp&s=e965d229f33cdc4a9fdb18dd4c524275f4905150', 'width': 108}, {'height': 60, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?width=216&crop=smart&auto=webp&s=55d3e6a8c22fe79a89134a1506315d24ebc7b9ea', 'width': 216}, {'height': 90, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?width=320&crop=smart&auto=webp&s=3c23ad019ca67613f3b13d5b2b933cd37cfab0d1', 'width': 320}, {'height': 180, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?width=640&crop=smart&auto=webp&s=59617f46be21b3a6489e8a256c5ae5cfdd0b4936', 'width': 640}, {'height': 270, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?width=960&crop=smart&auto=webp&s=011ed265075a311acb39572261b1e859f19097d6', 'width': 960}, {'height': 304, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?width=1080&crop=smart&auto=webp&s=bcf6aa25a5a25dd8c8c043efd78873935a3bc3e8', 'width': 1080}], 'source': {'height': 338, 'url': 'https://external-preview.redd.it/zzaaIXqvUYHZXZn3UdqTIrEV7iqhc964vyQ4X07N3X4.jpg?auto=webp&s=459e8810fac226067f099312415bf0f2999a3014', 'width': 1200}, 'variants': {}}]}
I built my own android chatting frontend for LLMs.
27
Project here: https://github.com/Vali-98/ChatterUI Hey all, I'm a junior dev who has recently gotten into mobile app development and react-native. This app is a hobby project of mine and was heavily inspired by SillyTavern. Originally it was for practicing JavaScript, but I found it usable enough for casual LLM chatting. It isn't as feature-rich as ST, but it does fill the niche of 'locally stored mobile frontend' that is currently lacking in this space, and it removes the need to host the ST web server. The app itself is pretty simple: - Download the apk from the latest release - Make / get a Character Card - Connect to your backend of choice and chat away. It doesn't have all popular backends just yet, but I plan on supporting as many as possible. It should be possible to modify the project to work on iOS, but I lack the devices and dev account to build and test it. It's probably also possible to implement llama.cpp locally, but the react-native library for llama.cpp is still very much a WIP. Have fun chatting!
2023-12-16T14:19:05
https://www.reddit.com/r/LocalLLaMA/comments/18js65y/i_built_my_own_android_chatting_frontend_for_llms/
----Val----
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18js65y
false
null
t3_18js65y
/r/LocalLLaMA/comments/18js65y/i_built_my_own_android_chatting_frontend_for_llms/
false
false
self
27
{'enabled': False, 'images': [{'id': 'dpxVtuEri6rQsM9bvtDp6TifJ2xu6HU6l-qxyiWv_ug', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?width=108&crop=smart&auto=webp&s=951d4188a162a1a93b49edb3a34d6a0657ace564', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?width=216&crop=smart&auto=webp&s=e185b292fdf8c176850055deccd5622c9000c5fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?width=320&crop=smart&auto=webp&s=b9b50ac8477222f79f77dba608119a7015af0e5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?width=640&crop=smart&auto=webp&s=dd0ec1a949592573d65bc1ccd2904bb5dd3ea639', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?width=960&crop=smart&auto=webp&s=6d1aa459ec57cc7ad5af67640f91cd3d88b5e878', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?width=1080&crop=smart&auto=webp&s=723de48085f5ea079ee58ad24a1ccf61fa3f73d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o6Iowk3eQoFVU0jTKM_C5JZU5DOZ1QsvGMK2pnysga0.jpg?auto=webp&s=58bebb85de6e10ef916ef22b8b15c9c5f9526e15', 'width': 1200}, 'variants': {}}]}
Unofficial Qwen 8x7b MoE - CausalLM/8x7B-MoE-test-NOT-MIXTRAL
45
[https://huggingface.co/CausalLM/8x7B-MoE-test-NOT-MIXTRAL](https://huggingface.co/CausalLM/8x7B-MoE-test-NOT-MIXTRAL)

> a Toolformer/Agent expert model, a multilingual translation expert model, a mathematics expert model, a visual expert model, a coding and computer expert model, and an uncensored knowledge model — together forming the MoE model along with Qwen-Chat and Qwen-Base.
2023-12-16T13:27:57
https://www.reddit.com/r/LocalLLaMA/comments/18jr7i5/unofficial_qwen_8x7b_moe/
No-Link-2778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jr7i5
false
null
t3_18jr7i5
/r/LocalLLaMA/comments/18jr7i5/unofficial_qwen_8x7b_moe/
false
false
self
45
{'enabled': False, 'images': [{'id': 'wGQkuonLd-lD4qCbXRcuQvjyoh9Opdp8JX3C_Cl_3dM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?width=108&crop=smart&auto=webp&s=bcad4af61c4dbcdf281bc760688475e8968a60a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?width=216&crop=smart&auto=webp&s=c42c116a4636b6ef59c17d6b1f4228d8bb71e5de', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?width=320&crop=smart&auto=webp&s=a9bac98a328ce7fc6e4956d7ee9325152e7a05ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?width=640&crop=smart&auto=webp&s=999fbc754482d5a093d7200fec90a79a7aa2d1d6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?width=960&crop=smart&auto=webp&s=93c05d7e4c44865289094a807e06354040a234e1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?width=1080&crop=smart&auto=webp&s=d5f595275620ce68334dfb296af4bb042aea505a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GrGsEIUIZ544IM-4WtNUZwVX0KW4EdcRcANUKm5bfgY.jpg?auto=webp&s=c04d66d505909b88e33186de4173928eb062f99a', 'width': 1200}, 'variants': {}}]}
I made a text-generation-webui extension that allows you to talk to multiple versions of the same character simultaneously in a single chat, in order to figure out which configuration gives the best replies
102
It's called Chatbot Clinic, and it's available at [https://github.com/p-e-w/chatbot_clinic](https://github.com/p-e-w/chatbot_clinic). Each character version can have its own context and generation parameters. I've been using this for a while, and it has completely changed the way I approach writing character descriptions. There simply is no substitute for hard numbers when it comes to figuring out whether something really is an improvement or not. I quickly realized that many "best practices" that I was cargo culting from character cards I found online don't actually yield demonstrably better replies when put to the test in a blind comparison. As a result, I was able to slim down my character descriptions substantially, expanding my usable context length, while getting better and better compliance with what I actually want through iterative modifications, each of them backed by testing with Chatbot Clinic. I hope that Chatbot Clinic is as useful for others as it has been for me!
2023-12-16T13:00:10
https://www.reddit.com/r/LocalLLaMA/comments/18jqq0m/i_made_a_textgenerationwebui_extension_that/
-p-e-w-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqq0m
false
null
t3_18jqq0m
/r/LocalLLaMA/comments/18jqq0m/i_made_a_textgenerationwebui_extension_that/
false
false
self
102
{'enabled': False, 'images': [{'id': 'itws8VFb_QOHPKAlZhJUMEY1TC2GfBmqoI792x-G1uU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?width=108&crop=smart&auto=webp&s=b851809ba6ab6aa7300e4ed0f51667780502ff75', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?width=216&crop=smart&auto=webp&s=6278c653c8eb1e31e78568b9d382f17ff4e1eed6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?width=320&crop=smart&auto=webp&s=22d146e0dc146a9956fbe50cbf0f5e9f9416e2a3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?width=640&crop=smart&auto=webp&s=9269057040d8fb91147c08578b2edfc24a6d435a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?width=960&crop=smart&auto=webp&s=c695391cc01a2a1cb9d3a5ea644fe52b7ad6b7e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?width=1080&crop=smart&auto=webp&s=76e48a3539a333d9a944d4f0349310408d7bf6f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n3gZWUQwoMQcryc6UAJ6BIM9n8f7v4c1oBZbqdvUrAA.jpg?auto=webp&s=0b4cf86137d4bc1b3584c2e35104cab73d455d93', 'width': 1200}, 'variants': {}}]}
Looking for a local LLM with large context
4
Hi. Hope you have a good day and this is a right place to ask. Long story short, I'm looking for a local LLM with a large context that can hopefully replace claude2. Usecase: I need to dump 40k-60k character text into it and ask an "opinion" about it, acting as a replacement of an average human. The text may be non-english. That should fit into about 50k context. I'd want much more than that, but there are no models in existence with 1 million token context. I have 16GB VRAM, 32GB RAM, and can run koboldcpp with CUDA support. I tried "mixtral-8x7b-instruct-v0.1.Q2_K.gguf", "mpt-7b-storywriter.ggmlv3.q8_0.bin" (both quantized by TheBloke) and feels like either i'm doing something wrong, missing a key piece or models aren't working. I can't make it process even several paragraphs never mind 40-60k. MPT-7b produces uncontrolled insanity. It writes random stories, or tries to continue text. Mixtral (half the model is offloaded to RAM, apparently) understands the text, understand non-english language, but does not follow instructions (regardless of language they're written in), and translates the text to english instead. The translation is decent, I give it that, but that's not what I want. There isn't a whole lot of info on running mixtral on koboldcpp locally, and I could not locate further information about large context models. I've been out of the loop with them for a couple of months, so I'm not even sure what is good right now. Advice?
2023-12-16T12:58:34
https://www.reddit.com/r/LocalLLaMA/comments/18jqp4c/looking_for_a_local_llm_with_large_context/
NaN-183648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqp4c
false
null
t3_18jqp4c
/r/LocalLLaMA/comments/18jqp4c/looking_for_a_local_llm_with_large_context/
false
false
self
4
null
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
3
I'm using Llama-2-7B-Chat AWQ on EC2. When I run 3 workers with `--preload` I get the error below, and when I run without `--preload` each worker loads its own copy of the model, consuming 3x its memory.

This is the error I'm getting:

```
[2023-12-16 12:49:47,479] ERROR in app: Exception on /api [POST]
Traceback (most recent call last):
  File "/home/ubuntu/env/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/ubuntu/env/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/ubuntu/env/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/ubuntu/env/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/home/ubuntu/llama2-model-test/app.py", line 37, in api
    get_reply(previous_messages=previous_messages, current_input=input_u)
  File "/home/ubuntu/llama2-model-test/hlm.py", line 60, in get_reply
    get_response(previous_messages)
  File "/home/ubuntu/llama2-model-test/hlm.py", line 45, in get_response
    to_encoded = encoded.to(DEVICE_TYPE)
  File "/home/ubuntu/env/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```

My concern is handling a large number of concurrent users on the server.
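For what it's worth, the traceback itself names the fix: create the workers through the `spawn` start method, so each child process initializes CUDA fresh instead of inheriting a forked copy. A minimal sketch of that pattern — the CUDA/model line is an assumption, left as a comment so the skeleton runs anywhere:

```python
import multiprocessing as mp

def worker(idx, queue):
    # Each spawned child is a fresh interpreter, so CUDA (or the model)
    # can be initialized here without hitting the fork restriction:
    # import torch; model = load_model().to("cuda")   # assumption
    queue.put(f"worker-{idx} ready")

def main():
    ctx = mp.get_context("spawn")   # the start method the error asks for
    queue = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(i, queue)) for i in range(2)]
    for p in procs:
        p.start()
    results = sorted(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(main())
```

Whether this maps cleanly onto a WSGI server's worker model depends on the server; the general trade-off stands, though — `--preload` shares the parent's memory but forks, while spawn duplicates memory unless the model is served from a single dedicated process.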
2023-12-16T12:54:30
https://www.reddit.com/r/LocalLLaMA/comments/18jqmu4/runtimeerror_cannot_reinitialize_cuda_in_forked/
Downtown-Rice-7560
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqmu4
false
null
t3_18jqmu4
/r/LocalLLaMA/comments/18jqmu4/runtimeerror_cannot_reinitialize_cuda_in_forked/
false
false
self
3
null
Mistral with RAG tries to use too much VRAM when adding text files
1
Hi,

If I use the articles below (news about fantasy football), it works within 13 GB of VRAM (I have 16 GB), but when I add PDF and text files (no more than 1 MB of text) it requires more memory and crashes, trying to reserve 8 GB or so. Any idea how I can use RAG with Mistral 7B efficiently?

Code:

```python
import os
import torch
import transformers
import nest_asyncio

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from langchain.document_loaders import AsyncChromiumLoader, PyPDFLoader
from langchain.document_transformers import Html2TextTransformer
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.prompts import PromptTemplate
from langchain.schema.runnable import RunnablePassthrough
from langchain.llms import HuggingFacePipeline
from langchain.chains import LLMChain

#################################################################
# Tokenizer
#################################################################
torch.cuda.set_per_process_memory_fraction(0.5)  # Adjust the fraction as needed

model_name = 'mistralai/Mistral-7B-Instruct-v0.2'

model_config = transformers.AutoConfig.from_pretrained(model_name)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

#################################################################
# bitsandbytes parameters
#################################################################
use_4bit = True                     # 4-bit precision base model loading
bnb_4bit_compute_dtype = "float16"  # Compute dtype for 4-bit base models
bnb_4bit_quant_type = "nf4"         # Quantization type (fp4 or nf4)
use_nested_quant = False            # Nested (double) quantization

compute_dtype = getattr(torch, bnb_4bit_compute_dtype)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)

# Check GPU compatibility with bfloat16
if compute_dtype == torch.float16 and use_4bit:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("=" * 80)
        print("Your GPU supports bfloat16: accelerate training with bf16=True")
        print("=" * 80)

#################################################################
# Load pre-trained model
#################################################################
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
)

def print_number_of_trainable_model_parameters(model):
    trainable_model_params = 0
    all_model_params = 0
    for _, param in model.named_parameters():
        all_model_params += param.numel()
        if param.requires_grad:
            trainable_model_params += param.numel()
    return (f"trainable model parameters: {trainable_model_params}\n"
            f"all model parameters: {all_model_params}\n"
            f"percentage of trainable model parameters: "
            f"{100 * trainable_model_params / all_model_params:.2f}%")

print(print_number_of_trainable_model_parameters(model))

text_generation_pipeline = transformers.pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    temperature=0.2,
    repetition_penalty=1.1,
    return_full_text=True,
    max_new_tokens=1000,
)

mistral_llm = HuggingFacePipeline(pipeline=text_generation_pipeline)

nest_asyncio.apply()

# Articles to index
articles = [
    "https://www.fantasypros.com/2023/11/rival-fantasy-nfl-week-10/",
    "https://www.fantasypros.com/2023/11/5-stats-to-know-before-setting-your-fantasy-lineup-week-10/",
    "https://www.fantasypros.com/2023/11/nfl-week-10-sleeper-picks-player-predictions-2023/",
    "https://www.fantasypros.com/2023/11/nfl-dfs-week-10-stacking-advice-picks-2023-fantasy-football/",
    "https://www.fantasypros.com/2023/11/players-to-buy-low-sell-high-trade-advice-2023-fantasy-football/",
]

# Scrape the blogs above
loader = AsyncChromiumLoader(articles)
docs = loader.load()

# Convert HTML to plain text
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)

# Chunk text
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
chunked_documents = text_splitter.split_documents(docs_transformed)

# When I use PDFs converted to txt (~1 MB total) instead, it fails
# due to the VRAM request:
#
# pdf_folder = "/home/bc/Projects/metinler_RD"  # PDF folder path
# text_from_pdfs = []
# for filename in os.listdir(pdf_folder):
#     if filename.endswith(".pdf"):
#         pdf_document = PyPDFLoader(os.path.join(pdf_folder, filename)).load()
#         splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
#         text_from_pdfs.extend(splitter.split_documents(pdf_document))
# chunked_documents = text_from_pdfs

# Load chunked documents into the FAISS index
db = FAISS.from_documents(
    chunked_documents,
    HuggingFaceEmbeddings(model_name='sentence-transformers/all-mpnet-base-v2'),
)

retriever = db.as_retriever()

# Create prompt template
prompt_template = """
### [INST] Instruction: Answer the question based on your Quran knowledge.
Here is context to help:

{context}

### QUESTION:
{question} [/INST]
"""

prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=prompt_template,
)

# Create llm chain
llm_chain = LLMChain(llm=mistral_llm, prompt=prompt)

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | llm_chain
)

rag_chain.invoke("Should I start Gibbs next week for fantasy?")
```
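One pattern that tends to bound peak memory in this kind of pipeline: embed and index the chunks in small batches rather than one `from_documents` call over everything. The `batched` helper below is plain Python; the vectorstore lines are a sketch left as comments (they assume the langchain FAISS wrapper's `add_documents` and need the models on disk):

```python
def batched(items, batch_size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical incremental indexing -- build the index from the first
# batch, then extend it, so only batch_size chunks are embedded at once:
#
# batches = batched(chunked_documents, 64)
# db = FAISS.from_documents(next(batches), embeddings)
# for batch in batches:
#     db.add_documents(batch)

print(list(batched(list(range(5)), 2)))  # -> [[0, 1], [2, 3], [4]]
```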
2023-12-16T12:49:58
https://www.reddit.com/r/LocalLLaMA/comments/18jqk6b/mistral_with_rag_try_to_use_to_much_vram_if/
Total_Regular2799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqk6b
false
null
t3_18jqk6b
/r/LocalLLaMA/comments/18jqk6b/mistral_with_rag_try_to_use_to_much_vram_if/
false
false
default
1
null
How to use GPT4ALL to screen basic legal contracts
1
hi guys! I'm hoping someone can point me in the right direction... I'm trying to get GPT4All (I tried different models) to screen a simple legal contract. How does this work? I get the same refusals as with ChatGPT-4: "I'm not allowed to..." etc. I have access to local docs; it's like GPT4All is still blocking the screening of contracts?
2023-12-16T12:43:25
https://www.reddit.com/r/LocalLLaMA/comments/18jqgex/how_to_use_gpt4all_to_screen_basic_legal_contracts/
alwaysonthebeach
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqgex
false
null
t3_18jqgex
/r/LocalLLaMA/comments/18jqgex/how_to_use_gpt4all_to_screen_basic_legal_contracts/
false
false
self
1
null
Ilya Sutskever thinks next token prediction is enough to reach AGI and I agree
1
[removed]
2023-12-16T12:35:58
https://www.reddit.com/r/LocalLLaMA/comments/18jqc96/ilya_sutskever_thinks_next_token_prediction_is/
enspiralart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqc96
false
null
t3_18jqc96
/r/LocalLLaMA/comments/18jqc96/ilya_sutskever_thinks_next_token_prediction_is/
false
false
default
1
{'enabled': False, 'images': [{'id': 'oLuwzk7t70GyWvOIscDdbxL7s033khw4jiWPgTPl11Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tnda7AvdNng6BTuK5iqY2YFsvHBeHP1s4CEnGi2SADo.jpg?width=108&crop=smart&auto=webp&s=9480057060dd874f647d18d3af17c207f54bed7b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tnda7AvdNng6BTuK5iqY2YFsvHBeHP1s4CEnGi2SADo.jpg?width=216&crop=smart&auto=webp&s=ca14b136b30bdf1c21c87ccae33cee533de9844c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tnda7AvdNng6BTuK5iqY2YFsvHBeHP1s4CEnGi2SADo.jpg?width=320&crop=smart&auto=webp&s=2df98035b9f9755eb94f60b192f66622223b3704', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tnda7AvdNng6BTuK5iqY2YFsvHBeHP1s4CEnGi2SADo.jpg?auto=webp&s=766b8809c4bd0de002ca34d9bcdee07cf2e637f1', 'width': 480}, 'variants': {}}]}
Mixtral on Cpu problem
2
Is anybody actually able to run Mixtral on CPU at okay speeds? When I try it in llama.cpp, inference runs pretty okay, but prompt evaluation is extremely slow: it goes through the whole prompt at a speed similar to inference speed, so it takes forever for the answer to even start. BLAS (cuBLAS, CLBlast) in llama.cpp also doesn't help; I don't think it works correctly for Mixtral models... What are your experiences with it?
2023-12-16T12:35:49
https://www.reddit.com/r/LocalLLaMA/comments/18jqc5u/mixtral_on_cpu_problem/
paryska99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jqc5u
false
null
t3_18jqc5u
/r/LocalLLaMA/comments/18jqc5u/mixtral_on_cpu_problem/
false
false
self
2
null
Techniques & datasets for validating capabilities of a language model?
3
I've been toying with the problem of providing an "external" memory to transformers, and have a few questions about testing & validating new transformer architectures - Model Scale: In developing a new transformer model, what scale is ideal for demonstrating its effectiveness without directly competing with large-scale LLMs like ChatGPT? How do we balance size with the proof of concept? - Training Data Selection: Could you share your thoughts on the optimal size and type of training data for such a model? Is there a specific kind of dataset that would be most effective in showcasing the model's capabilities? - Evaluating Intelligence: When comparing a new architecture to traditional transformers of similar sizes, what metrics would you consider the most indicative of superior intelligence or efficiency? How can we objectively measure this advancement? Is it possible to "prove" a new architecture without breaking the bank?
2023-12-16T12:14:35
https://www.reddit.com/r/LocalLLaMA/comments/18jq0bk/techniques_datasets_for_validating_capabilities/
Gear5th
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jq0bk
false
null
t3_18jq0bk
/r/LocalLLaMA/comments/18jq0bk/techniques_datasets_for_validating_capabilities/
false
false
self
3
null
Best coding companion model today?
31
I am relatively new to local LLMs, but after playing around with Ollama + various models, I believe it doesn't make much sense to use ChatGPT anymore for coding (which is what I use it for mostly). But I am not able to figure out which models would be ideal (read: at par with / better than ChatGPT?) for code completion and buddy programming.

Secondly, help me fish, i.e. how do you even evaluate this by yourself? With hundreds of models out there, how do you find out if Model A is better than Model B without downloading 30 GB files (and even then I'm not sure I could validate this)? Beyond asking Reddit, is there a better methodology for this (both discovery and validation)?

I am relatively new to this, so sorry if this is a noob question.
2023-12-16T11:54:42
https://www.reddit.com/r/LocalLLaMA/comments/18jppfi/best_coding_companion_model_today/
codevalley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jppfi
false
null
t3_18jppfi
/r/LocalLLaMA/comments/18jppfi/best_coding_companion_model_today/
false
false
self
31
null
Can't implement FAISS for RAG with Mistral Instruct. Help.
1
For code:

```python
import os
import sys
import logging

import numpy as np
import faiss
from sentence_transformers import SentenceTransformer
from llama_index.llms import LlamaCPP

# Set up logging
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# Function to load and process documents
def load_and_process_documents(directory_path):
    documents = []
    for filename in os.listdir(directory_path):
        file_path = os.path.join(directory_path, filename)
        if os.path.isfile(file_path):  # Check if it's a file
            try:
                with open(file_path, 'r', encoding='utf-8') as file:
                    documents.append(file.read())
            except UnicodeDecodeError:
                try:
                    with open(file_path, 'r', encoding='latin-1') as file:
                        documents.append(file.read())
                except UnicodeDecodeError:
                    logging.error(f"Could not read file {file_path} due to encoding issues.")
    return documents

# Load documents from a directory
documents_path = "/content/data/"
documents = load_and_process_documents(documents_path)

# Initialize Hugging Face embeddings model
embed_model = SentenceTransformer("thenlper/gte-large")

# Generate embeddings for the documents
doc_embeddings = np.array([embed_model.encode(doc) for doc in documents])

# Create a FAISS index
faiss_index = faiss.IndexFlatL2(doc_embeddings.shape[1])
faiss_index.add(doc_embeddings)

# Initialize LlamaCPP model
llm = LlamaCPP(
    model_url='https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/mistral-7b-instruct-v0.2-dare.Q8_0.gguf?download=true',
    model_kwargs={"n_gpu_layers": -1},
)

def query(question, k=5):
    # Embed the query
    question_embedding = embed_model.encode(question)

    # Retrieve documents based on the query
    _, indices = faiss_index.search(np.array([question_embedding]), k)
    retrieved_docs = [documents[idx] for idx in indices[0]]

    # Construct context from retrieved documents
    context = ' '.join(retrieved_docs)

    # Create a prompt for the structured_predict method
    prompt_content = f"Question: {question}\nContext: {context}\nAnswer:"

    # Use LlamaCPP to generate a response
    try:
        response = llm.structured_predict(prompt_content)  # Passing prompt as a positional argument
    except Exception as e:  # Catching any type of error
        logging.error(f"Error using LlamaCPP: {e}")
        response = "An error occurred while generating the response."
    return response

# Example usage
query_text = "how do i change my branch?"
response = query(query_text)
print(response)
```

I get this output:

```
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
ERROR:root:Error using LlamaCPP: LLM.structured_predict() missing 1 required positional argument: 'prompt'
An error occurred while generating the response.
```

What should be done?
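The error is the library reporting a signature mismatch: `structured_predict` does not take a bare string. Assuming the llama_index API of that era, the plain-string entry point is `llm.complete(...)`, whose return value carries the generated text in `.text`. A sketch of that change — the actual model call is commented out since it needs the GGUF on disk:

```python
def build_prompt(question: str, context: str) -> str:
    # Same prompt string the original code constructs, factored out.
    return f"Question: {question}\nContext: {context}\nAnswer:"

def answer(llm, question: str, context: str) -> str:
    prompt = build_prompt(question, context)
    # Assumption: LlamaCPP exposes complete() for plain-string prompts,
    # while structured_predict() expects a PromptTemplate plus fields.
    return llm.complete(prompt).text

# response = answer(llm, "how do i change my branch?", context)
print(build_prompt("how do i change my branch?", "<retrieved docs>"))
```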
2023-12-16T11:20:33
https://www.reddit.com/r/LocalLLaMA/comments/18jp7wf/cant_implement_faiss_on_rag_mistral_instructhelp/
astrok_not
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jp7wf
false
null
t3_18jp7wf
/r/LocalLLaMA/comments/18jp7wf/cant_implement_faiss_on_rag_mistral_instructhelp/
false
false
self
1
null
Generating multiple tokens at once
5
Hi, it always boggles my mind that LLMs only generate one token at a time. I believe they must be capable of much more, since they don't generate a random token but one that can be followed up to create whole texts that (mostly) make sense. So I wonder if it would be possible to add multiple prediction heads, similar to how YOLOs / SSDs in computer vision work.

Let's say we have n tokens as input and there are x prediction heads, each responsible for predicting one of the tokens n+1, …, n+x. We would then end up with x distributions over all possible tokens. At this point we could use an algorithm like beam search to find the best sequence of tokens.

First I thought this would be a nearly linear speed increase. Then I learned about key-value caching, which reduces how much there is to gain. But I still think it would be a nice experiment that might lead somewhere. My ideas for experiments are:

1. Change the loss function of a training script to finetune a network to predict the n+2nd token instead of the n+1st. Test it.
2. Finetune a separate model with the same data to predict the n+1st token. Try algorithms to combine the outputs.
3. Change a model architecture to accept two prediction heads. Test it.

What are your thoughts on this idea? Do you think it could work, or did I already miss some showstopper? Do you think it could be interesting if it works?

P.S. I already started testing step 1. The loss dropped and the model seemed to skip single tokens. I take this as a small win. However, I didn't want to put more time into it before sharing the idea with other people, to check whether I might just be wasting my time.
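For step 3, a toy sketch of what "two prediction heads" could look like, in plain NumPy to stay framework-agnostic. Everything here (dimensions, random weights, the hidden state standing in for a transformer's final-layer output) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, vocab_size, num_heads = 8, 16, 2

# Head k maps the final hidden state to a distribution over token n+k,
# similar to how YOLO/SSD emit several detections from one forward pass.
heads = [rng.normal(size=(hidden_dim, vocab_size)) for _ in range(num_heads)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_ahead(hidden_state):
    # One forward pass yields num_heads distributions; a beam search over
    # their joint scores could then pick the best token pair.
    return [softmax(hidden_state @ w) for w in heads]

dists = predict_ahead(rng.normal(size=hidden_dim))
print(len(dists), dists[0].shape)  # 2 (16,)
```

The open question the experiments would answer is whether the heads for tokens n+2, …, n+x can be accurate enough, given that they never see the intermediate tokens.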
2023-12-16T11:02:44
https://www.reddit.com/r/LocalLLaMA/comments/18joyu2/generating_multiple_tokens_at_once/
Schmandli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18joyu2
false
null
t3_18joyu2
/r/LocalLLaMA/comments/18joyu2/generating_multiple_tokens_at_once/
false
false
self
5
null
Is No Moat getting real?
293
I am referring to this article from May that was supposedly leaked by a Google employee: [Google "We Have No Moat, And Neither Does OpenAI" (semianalysis.com)](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither)

It's December now and Google is trying for the first time to win market share from API calls. But at what cost? Google's Gemini project must've sucked up a few billion bucks, easy. Plus, they've got one of the top AI CEOs at the helm. Given all the time they had, you'd think they'd come up with something on par with, or even better than, GPT-4, especially with their resources and server capabilities.

Here's what I expected from a tech titan like Google:

1. A budget-friendly base LLM, as powerful as GPT-4, and fully multimodal.
2. A pricier, more advanced LLM that totally outclasses GPT-4.
3. An affordable API setup, token-based, with some language diversity thrown in.
4. Something akin to OpenAI's Agent system for immediate scalability.
5. A statement that's essentially: "We're Google. Bow down."

But what did we actually get?

1. A basic LLM that's barely a step up from GPT-3.
2. A costlier LLM that slightly edges out GPT-4, but that's based on their own benchmarks and a flashy promo video.
3. An API that's more about data hoarding and messing with your token calculations because, well, "Google."
4. Zero in terms of scalability.
5. And they didn't even launch the full suite.

Meanwhile, in the same week that Google unveils their so-so AI, some French company drops an open-source model that blows past GPT-3 with a 56B model. They're even teasing an 8x32B model that might give off some serious GPT-4 vibes. And all this at an API cost of about $0.5 per million tokens.

Feels like this whole situation is a snapshot of the future of LLMs and AI as a whole. Anyone else think this is a big deal?
2023-12-16T11:02:37
https://www.reddit.com/r/LocalLLaMA/comments/18joyro/is_no_moat_getting_real/
Bene0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18joyro
false
null
t3_18joyro
/r/LocalLLaMA/comments/18joyro/is_no_moat_getting_real/
false
false
self
293
{'enabled': False, 'images': [{'id': '2e-ORlXr3q8t_t4YwQwiP0Mo0wzKQ10f6PvX1VywrKs', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?width=108&crop=smart&auto=webp&s=f28453b0097499a073aa10d835a6060bed74dbb9', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?width=216&crop=smart&auto=webp&s=dfe480ed5466a7c41117a039300558fadfe58246', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?width=320&crop=smart&auto=webp&s=53e67c7dca4e66a4bc132c9813d4cd3c440b1d29', 'width': 320}, {'height': 313, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?width=640&crop=smart&auto=webp&s=45c4a9d085ea5b2734686ebdfb1ce064013e2ab4', 'width': 640}, {'height': 470, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?width=960&crop=smart&auto=webp&s=5e127a2313829f23740a2b810492da040caedb39', 'width': 960}, {'height': 529, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?width=1080&crop=smart&auto=webp&s=3185480792558e3ebdf7b87d3d30962e02cd1f6b', 'width': 1080}], 'source': {'height': 588, 'url': 'https://external-preview.redd.it/Dd9F-kRfRODUzYVJwfzGw-m0UE1PIvav14lL_7C6SWs.jpg?auto=webp&s=5f446f4b34243e902e680a4b968e1fa866653221', 'width': 1200}, 'variants': {}}]}
Inference for m2m100 or other translation models. Is there something similar as vLLM
1
[removed]
2023-12-16T08:47:48
https://www.reddit.com/r/LocalLLaMA/comments/18jn42q/inference_for_m2m100_or_other_translation_models/
Torkoslav
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jn42q
false
null
t3_18jn42q
/r/LocalLLaMA/comments/18jn42q/inference_for_m2m100_or_other_translation_models/
false
false
self
1
null
Stable 7B Model
1
What's the Most stable 7B LM that can more or less run on 4gb ram. I prefer the GGUF ones
2023-12-16T08:43:17
https://www.reddit.com/r/LocalLLaMA/comments/18jn1zi/stable_7b_model/
Jsus-Sus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jn1zi
false
null
t3_18jn1zi
/r/LocalLLaMA/comments/18jn1zi/stable_7b_model/
false
false
self
1
null
Autogen + mistral small/moe/mixtral
6
Has anyone managed to get this working? I'd have to use the API myself, as I have terrible hardware, but it would be interesting to know. I'm trying to get it working in VS Code, and I used the search bar before asking.
2023-12-16T07:17:11
https://www.reddit.com/r/LocalLLaMA/comments/18jlv52/autogen_mistral_smallmoemixtral/
Unusual_Pride_6480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jlv52
false
null
t3_18jlv52
/r/LocalLLaMA/comments/18jlv52/autogen_mistral_smallmoemixtral/
false
false
self
6
null
Phi 2 crashing on Maid (mobile)
1
[removed]
2023-12-16T07:03:40
https://www.reddit.com/r/LocalLLaMA/comments/18jlnzn/phi_2_crashing_on_maid_mobile/
Reentry9028
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jlnzn
false
null
t3_18jlnzn
/r/LocalLLaMA/comments/18jlnzn/phi_2_crashing_on_maid_mobile/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rfNB09Gf5zkQIjjLzajs3quYIjwIaJLfjuMjxUc9kqQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=108&crop=smart&auto=webp&s=919346f7ac2e7659560e5a0a86fcd3569fc802f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=216&crop=smart&auto=webp&s=75cb9faffe4d99a3b263a8e2b3a1841dc8a9e931', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=320&crop=smart&auto=webp&s=801f0e15709a87a644abf93d81ef50a62fee9d5b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=640&crop=smart&auto=webp&s=41a7370540ce98ac4bceafae99140314d5978fe1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=960&crop=smart&auto=webp&s=74e79e7c4471a951a7d597ee2d37cf55235105bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=1080&crop=smart&auto=webp&s=68a44496d04351e3d7e21d0f587b45f39e66cf63', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?auto=webp&s=f3b7dca5357047a7e4cc596485fa8dd5e0892118', 'width': 1280}, 'variants': {}}]}
I decided to have some fun with a LLaMa based RP AI
2
Please don't judge me, but I've been doing a lot of role play with AI as I feel like it's one area that AI can actually be pretty strong in. I have been trying out a variety of models using LM Studio, oobabooga,and GPT4all to get a full picture of how the models react to various things. One of the things I've been doing lately is having the model role play as a female pokemon. Sometimes I have it choose the pokemon itself (so many charmanders, pikachus and squirtles) and sometimes I'd just ask it to describe the pokemon and I'd figure out which one matched it best and assign that as the answer. Every time each one seemed to have its own personality, and some were really guarded in their responses, others peppy and open, at least one of them practically mauled me in an attempt to seduce me. I play around till the conversation gets stale and then I load a new chat and try again. Today I had two interesting interactions: One was a small 7B model (zephyr-7b-beta.Q8_0) and the other was a 20B RP model (Iambe-RP-v3-20b.gguf) The 7B model started off really REALLY formal and stiff but she loosened up as we continued talking. Then I decided to give her a quiz on Japanese just for my own amusement, and I gave her 10 questions and awarded a star for each answer (and maybe an additional star if she provided extra information in her answer and it was all correct). As I am asking the questions, I'm praising the AI every time they did well, and it obviously had an effect as she continued trying to earn extra points as soon as she learned extra credit was an option. By the end she had 14 and a half points (half points awarded when she was at least CLOSE to correct) and I promised a reward. The reward was a trip anywhere in the world. All she had to do was name it. It was hell trying to get her to agree to tell me a spot to visit. Every time I pressured her to tell me, she would defer to me. Finally I just made her pick eastern hemisphere or western and so she finally picked... 
and then my intention was to keep narrowing the field down until we had a spot. Nope... the AI was against any more narrowing down so I had to go to ChatGPT to get help and with that AI's help got it down to a selection of 10 choices. Providing the AI with 10 choices did the trick and the destination was selected. I asked ChatGPT to describe the location as if he were Rod Roddy from the Price is Right and then copy-pasted the whole thing into chat to inform the AI of what she had won. The interesting part was that she acted like she was genuinely excited. I mean it positively exploded in emoji and acted like it had never won anything in her life. Very amusing (to me at least). The 2nd AI was the one that was trying to seduce me... as a Clefable. (WHAT?) I told her that Clefable isn't exactly sexy and she was hurt but undeterred. I decided to try to distract her by telling her we were going to watch TV together. I got a DVD for an episode of Cheers, played it and described the dialog and narrated the activities to the AI while she followed along with the plot. The AI did a pretty good job of following along, but got a little confused a couple of times, and tried to answer as me once or twice. The interesting part to me was that here she was trying to seduce me, and one of the characters in the episode was basically sleeping around with everyone, and she quickly empathized with that character. I explained that it was the 1980s, so if a guy slept around he was a player, but if a girl did it, well that was just very very bad. She defended the character saying "We don't know her backstory" and also did a fair job of guessing a couple of plot points when I asked, though we were surprised more often than she got it right. Anyway, the 30-minute episode took me way longer to watch because I was constantly pausing to type in dialog and get her reactions before continuing. 
It wasn't quite as nice as watching TV with another human but it was interesting getting her perspectives, and also cool that she caught several of the jokes where I thought the context might be lost due to it being 40 years old ( damn). I now kinda wonder what an AI would be like if you trained it on scripts from a bunch of old TV shows and movies. Would it be a movie critic? Would they act like a comedian? I haven't tried making a LoRA yet, but that might be something I'd consider. Has anyone else tried something like this? If so which model did you use and how did it go? *edited to correct misspellings*
2023-12-16T06:28:58
https://www.reddit.com/r/LocalLLaMA/comments/18jl4px/i_decided_to_have_some_fun_with_a_llama_based_rp/
Cool-Hornet4434
self.LocalLLaMA
2023-12-16T06:32:41
0
{}
18jl4px
false
null
t3_18jl4px
/r/LocalLLaMA/comments/18jl4px/i_decided_to_have_some_fun_with_a_llama_based_rp/
false
false
self
2
null
how can i access an install of ollama with datasette llm?
1
I'm running [Ollama](https://github.com/jmorganca/ollama), accessible with a POST to http://localhost:11434/api/chat. How can I access it through [llm (Datasette)](https://llm.datasette.io/en/stable/other-models.html#openai-compatible-models)?
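While waiting for a proper `llm` plugin, a minimal sketch of hitting that `/api/chat` endpoint directly from Python (the model name below is a placeholder; `llm`'s documented route is an OpenAI-compatible endpoint configured in `extra-openai-models.yaml`, which would need a compatibility layer in front of Ollama):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # endpoint from the post

def build_chat_request(model: str, prompt: str) -> bytes:
    """Build the JSON body Ollama's /api/chat endpoint expects."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON response instead of a stream
    }
    return json.dumps(payload).encode("utf-8")

def chat(model: str, prompt: str) -> str:
    """POST to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Usage would be something like `chat("mistral", "Say hello")` with the Ollama server running.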
2023-12-16T06:00:03
https://www.reddit.com/r/LocalLLaMA/comments/18jkonj/how_can_i_access_an_install_of_ollama_with/
202-456-1414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jkonj
false
null
t3_18jkonj
/r/LocalLLaMA/comments/18jkonj/how_can_i_access_an_install_of_ollama_with/
false
false
self
1
null
T7610 Dual P40 Inference
10
Just wanted to share my journey building an AI-focused system using Tesla P40s. I read [this post](https://www.reddit.com/r/LocalLLaMA/comments/13n8bqh/my_results_using_a_tesla_p40) a while ago and was intrigued, and found a lot of 2 P40s at $250 on eBay. I ended up putting them into my workstation: **Original System Specs:** * Dell Precision T7610 * CPUs: Dual 2697v2 12-core * RAM: 256GB DDR3-14900R (quad channel for each CPU) * PSU: 1300W **Cooling, Case Mods, and Hardware Issues:** * The P40s are power-hungry (250W TDP) and the case airflow was in no way enough, so I added a 3D printed shroud and a BFB1012VH blower fan for each GPU. * This ended up making them too long, so I had to remove the front panel fans and bracket, as well as the 5.25 inch drive bay, which was surprisingly painless to do (just a couple of screws). * The airflow was surprisingly fine after I added an 80mm Noctua fan as exhaust. Acoustics could be better but are bearable. * I used one PCIe 8-pin to EPS 8-pin adapter for each P40. I know that this technically isn't enough for the 250W TDP, but in my experience I have not run into any problems (yet). * For some reason, the workstation did not like to boot with both GPUs under one CPU, so I ended up moving a GPU to the top and it just works ¯\\\_(ツ)\_/¯ **Software and Drivers:** * Running Ubuntu 23.04. * Faced driver conflicts with a Quadro K2000 as the display GPU. * Solved it by replacing the K2000 with a cheap AMD R5 340x, using Nvidia drivers for the P40s. **Performance:** * Using the Oobabooga webui on CUDA 12.2 * AI Model Performance: Running at 2-4k context with q4\_m, I get 18 tokens/sec on 7b models, 15 tokens/sec on 13b, and 11 tokens/sec on 20b. * Haven't tested 34b or 70b models yet Happy to answer any questions about the build process or test model inference speeds! [Complete build!](https://preview.redd.it/xeuv620ael6c1.jpg?width=3024&format=pjpg&auto=webp&s=0ae7451d1c559c977975eb640501a0ea7657668d)
2023-12-16T05:26:32
https://www.reddit.com/r/LocalLLaMA/comments/18jk4u3/t7610_dual_p40_inference/
ConcaveTriangle5761
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jk4u3
false
null
t3_18jk4u3
/r/LocalLLaMA/comments/18jk4u3/t7610_dual_p40_inference/
false
false
https://b.thumbs.redditm…3s6qqpgRPlYA.jpg
10
null
Dual Titan RTX somehow SLOWER than Tesla P40
1
Like the title says. Not sure what's going on but dual Titan RTX is somehow performing worse than a Nvidia Tesla P40. Here is comparison both running on the latest oobabooga commit 443be391f2a7cee8402d9a58203dbf6511ba288c as of posting. Mistral 7B Dolphin2.2.1 Q8 GGUF: Dual Titan RTX: llama_print_timings: load time = 1377.08 ms llama_print_timings: sample time = 71.67 ms / 388 runs ( 0.18 ms per token, 5413.32 tokens per second) llama_print_timings: prompt eval time = 1376.82 ms / 69 tokens ( 19.95 ms per token, 50.12 tokens per second) llama_print_timings: eval time = 36928.17 ms / 387 runs ( 95.42 ms per token, 10.48 tokens per second) llama_print_timings: total time = 39322.19 ms Output generated in 39.84 seconds (9.71 tokens/s, 387 tokens, context 69, seed 1712188872) Tesla P40: llama_print_timings: load time = 702.37 ms llama_print_timings: sample time = 100.23 ms / 431 runs ( 0.23 ms per token, 4300.28 tokens per second) llama_print_timings: prompt eval time = 702.21 ms / 69 tokens ( 10.18 ms per token, 98.26 tokens per second) llama_print_timings: eval time = 26612.55 ms / 430 runs ( 61.89 ms per token, 16.16 tokens per second) llama_print_timings: total time = 28911.85 ms Output generated in 29.31 seconds (14.67 tokens/s, 430 tokens, context 69, seed 848367547) Mixtral 8x7B Instruct Q3KM GGUF: Dual Titan RTX: llama_print_timings: load time = 5836.49 ms llama_print_timings: sample time = 52.99 ms / 296 runs ( 0.18 ms per token, 5586.07 tokens per second) llama_print_timings: prompt eval time = 5836.14 ms / 69 tokens ( 84.58 ms per token, 11.82 tokens per second) llama_print_timings: eval time = 51153.37 ms / 295 runs ( 173.40 ms per token, 5.77 tokens per second) llama_print_timings: total time = 57761.12 ms Output generated in 58.16 seconds (5.07 tokens/s, 295 tokens, context 69, seed 1051201643) Tesla P40: llama_print_timings: load time = 4217.42 ms llama_print_timings: sample time = 73.22 ms / 284 runs ( 0.26 ms per token, 3878.62 tokens per second) 
llama_print_timings: prompt eval time = 4216.71 ms / 70 tokens ( 60.24 ms per token, 16.60 tokens per second) llama_print_timings: eval time = 29004.60 ms / 283 runs ( 102.49 ms per token, 9.76 tokens per second) llama_print_timings: total time = 34317.03 ms Output generated in 34.74 seconds (8.15 tokens/s, 283 tokens, context 70, seed 675071417) Specs of the systems: 1. Dual Nvidia Titan RTX, Intel Core i7 5960X 4.7GHz OC, 256GB DDR4 2400MHz. 2. 1x Nvidia Tesla P40, Intel Xeon E-2174G (similar to 7700K), 64GB DDR4 2666MHz, IN A VM with 24GB allocated to it. HOW in the world is the Tesla P40 faster? What happened to llama.cpp that made it much faster running on an Nvidia Tesla P40? I tried recompiling and installing llama\_cpp\_python myself with cublas and cuda flags to get it to use tensor cores on the Titans, but that didn't help t/s one bit. I was also using a Xeon E5 2679v4 CPU with the Titan RTX, which had lower single-core performance, and switching to the overclocked Core i7 5960X didn't do anything to generation speed. So it wasn't a CPU bottleneck. The Titan RTX seems to be underutilized, with only 70W or so power consumption and GPU usage in the 30% range. No change at all whether it is running llama.cpp with the MMQ or Tensor kernel. Of course, I've run dual and triple Tesla P40s before, and when running 70b models the Titan RTX running exllamav2 is about 2x as fast as the Tesla P40s. So why is it much slower running GGUF on llama.cpp?
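For reference, a small helper sketching the cuBLAS rebuild the post mentions trying. The `LLAMA_CUBLAS` flag matches the llama-cpp-python README of that era, but build flag names have changed over time, so treat it as an assumption and check the current docs:

```python
import shlex

def cublas_rebuild_command() -> str:
    """Return the shell one-liner for rebuilding llama-cpp-python with
    the CUDA (cuBLAS) backend instead of the CPU-only default."""
    env = {
        "CMAKE_ARGS": "-DLLAMA_CUBLAS=on",  # enable the CUDA backend
        "FORCE_CMAKE": "1",                 # force a from-source rebuild
    }
    pip = "pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python"
    prefix = " ".join(f"{k}={shlex.quote(v)}" for k, v in env.items())
    return f"{prefix} {pip}"
```

Run `print(cublas_rebuild_command())` and execute the printed command inside the same venv/conda environment that oobabooga uses, otherwise the rebuilt wheel won't be the one it imports.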
2023-12-16T05:11:19
https://www.reddit.com/r/LocalLLaMA/comments/18jjw48/dual_titan_rtx_somehow_slower_than_tesla_p40/
nero10578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jjw48
false
null
t3_18jjw48
/r/LocalLLaMA/comments/18jjw48/dual_titan_rtx_somehow_slower_than_tesla_p40/
false
false
self
1
null
Is Llama.cpp + Mixtral unencumbered Open Source, or still under the Meta License?
8
Just wondering if there's a "fruit of the poisoned tree" effect going on when using something like Llama.cpp (or llama-cpp-python or llama\_index, etc.) which are all released under the [MIT License](https://github.com/ggerganov/llama.cpp/blob/master/LICENSE) when paired with Mistral's [Apache 2.0-licensed](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) models, with regards to Open Source status. Meta's [license](https://github.com/facebookresearch/llama/blob/main/LICENSE) for Llama seems pretty explicit with regards to derivative works falling under the "permission needed" section 2. But those terms only apply to products or services that had 700 million monthly users at the time Llama 2 was released. So while Llama.cpp describes itself as a "port" of Meta's work, it didn't meet the threshold at the time of release. So what happens when an individual, group, or company comes along now and only ever accesses the MIT and Apache licensed code and models and builds products and services on top?
2023-12-16T05:05:09
https://www.reddit.com/r/LocalLLaMA/comments/18jjsby/is_llamacpp_mixtral_unencumbered_open_source_or/
PrinceOfLeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jjsby
false
null
t3_18jjsby
/r/LocalLLaMA/comments/18jjsby/is_llamacpp_mixtral_unencumbered_open_source_or/
false
false
self
8
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
With half a TB of ram and 8x 1080s, what’s the best unsafe coding model?
58
I am looking for a model to work as my coding and infosec partner. I say "unsafe" to mean I want to be able to tell it "produce Golang code with a buffer overflow in it", the kind of thing ChatGPT doesn't do. Also something that I can ask security questions, or feed Word templates and have it format them, or give PDFs and get summaries. I have access to a hash-cracking rig work bought a few years ago, and that's why I thought it important to say they are 1080s, not the 4090s or other things I see people here using. Because of this large RAM size, I am more than happy to run multiple models and switch between them to achieve different tasks, but I am extremely out of my depth with regards to which models are best. It feels like I stop paying attention for a month and the ENTIRE scene has shifted. Thank you
2023-12-16T04:14:23
https://www.reddit.com/r/LocalLLaMA/comments/18jiw08/with_half_a_tb_of_ram_and_8x_1080s_whats_the_best/
oldschooldaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jiw08
false
null
t3_18jiw08
/r/LocalLLaMA/comments/18jiw08/with_half_a_tb_of_ram_and_8x_1080s_whats_the_best/
false
false
self
58
null
Fine tuning for dummies
45
I am looking to do novel writing and I've found the output I get from any LLM, including ChatGPT, sounds wooden and boring. From what I've read, fine-tuning can help significantly. Given my equipment (3090), it sounds like QLoRA is the way to go. That being said, I've been on YouTube, Google, and here, and the topic is so big it's hard to know 1) if I'm on the right track and 2) what to do next. Is this a good way to go about it, given my hardware? Also, in terms of training data, I am planning on using samples from Project Gutenberg. Is there something I should consider in how to select and format the data? Finally, what's the best way to train? Should I use oobabooga or something else?
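One hedged sketch of turning Gutenberg texts into (passage, continuation) pairs for a QLoRA run. The chunk size and instruction string are arbitrary choices to illustrate the shape of the data, not a recipe:

```python
import json

def chunk_book(text: str, chunk_chars: int = 2000) -> list[str]:
    """Split a book into roughly chunk_chars-sized pieces on paragraph
    boundaries, so each training sample is a coherent passage."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if len(current) + len(p) > chunk_chars and current:
            chunks.append(current.strip())
            current = ""
        current += p + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def to_jsonl(chunks, instruction="Continue the story in the same style."):
    """Pair consecutive chunks as (input, output) samples, one common way
    to teach a model an author's prose style. Returns JSONL text."""
    lines = []
    for prev, nxt in zip(chunks, chunks[1:]):
        lines.append(json.dumps({
            "instruction": instruction,
            "input": prev,
            "output": nxt,
        }))
    return "\n".join(lines)
```

The resulting JSONL is in the Alpaca-style instruction/input/output layout that most QLoRA training scripts can ingest, though the exact field names they expect vary.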
2023-12-16T04:07:15
https://www.reddit.com/r/LocalLLaMA/comments/18jirmu/fine_tuning_for_dummies/
Jenniher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jirmu
false
null
t3_18jirmu
/r/LocalLLaMA/comments/18jirmu/fine_tuning_for_dummies/
false
false
self
45
null
Help me choose: Need local RAG, options for embedding, GPU, with GUI. PrivateGPT, localGPT, MemGPT, AutoGen, Taskweaver, GPT4All, or ChatDocs?
23
Hi As post title implies, I'm a bit confused and need some guidance. I've tried some but not yet all of the apps listed in the title. Looking to get a feel (via comments) for the "State of the Union" of LLM end-to-end apps with local RAG. What I want to do: 1. Multi-format: I have folders of PDFs, epubs, and text-file transcripts (from YT vids and podcasts) and want to chat with this body of knowledge. 2. Embedding Customization: I'd like to try various methods of creating embeddings. EG, chunking, sentence transformers, embedding models. Ideally app has a GUI to change these options. 3. More than 1 vector store? Option to create/select from multiple vectorstores. Say I also want to chat with, uh, some recipes. I don't want my other docs and my recipes in one store. I want to load up the recipes embeddings when I want that, and then close it and open a different set of embeddings when I want. Seems neater? Please don't make me mix my data. 4. All local. My data stays on my system. But of course the option to add OpenAI API keys, should I want to process embeddings with OpenAI or chat with my docs and GPT4. 5. GPU: Allow me to use GPU when possible. I have 2x4090s and want to use them - many apps seem to be limited to GGUF and CPU, and trying to make them work with GPU after the fact, has been difficult. 6. Free? At least partly. Doesn't require a paid, web-based vectorDB (same point as above, stay local, but thought I had to spell this out). 7. GUI. Ideally has a GUI for EVERYTHING, including options and settings and in-app model switching. EG some apps you need to exit, adjust a yaml manually, then restart *just to switch models*. Come on, it's 2023. Why do we need to shut down and manually type the model into a yaml? &#x200B; My impressions/tests so far: **- Oobabooga** with Superboogav2: Seems very lacking in functionality and configuration for local RAG/Chat with Docs. Lacks options. **- privateGPT** You can't have more than 1 vectorstore. 
No way to remove a book or doc from the vectorstore once added. Can't change embedding settings. Difficult to use GPU (I can't make it work, so it's slow AF). To change chat models you have to edit a yaml then relaunch. Not sure why people can't add that into the GUI... a lot of cons, not many pros. **- localGPT** Looks like you have to make embeddings via CLI? WHY GOD WHY. And as with privateGPT, looks like changing models is a manual text edit/relaunch process. Similar to privateGPT, looks like it goes part way to local RAG/Chat with docs, but stops short of having options and settings (one-size-fits-all, but does it really?) **- LMStudio** You can browse, download, and switch models in a really nice GUI, but there's no "chat with my docs" function, that I can see, let alone configurable options for embeddings, chunking, etc. Limited to GGUF, so... slow? **- OLlama** Mac only? I'm on PC and want to use the 4090s. **- LangChain** Just don't even. This thing is a dumpster fire. And remember, the whole post is more about complete apps and end-to-end solutions, ie, "where is the Auto1111 for LLM+RAG?" (hint it's NOT PrivateGPT or LocalGPT or Ooba that's for sure). **- MemGPT** ? Still need to look into this. **- AutoGPT** Looks interesting but AFAIK can't change embedding settings? **- GPT4All** ? Still need to look into this. **- ChatDocs** Supposed to be a fork of privateGPT but it has very low stars on Github compared to privateGPT, so I'm not sure how viable this is or how active. **- Taskweaver** Did the rounds on Youtube hype vids last week but still need to look into this more. **Is there anything else significant that I am missing?** Not looking at OpenAi "GPTs" because while you can upload docs, you can't change or select embedding settings, chunking, etc there either. Which makes it seem like OpenAI is going for RAG-For-Dummies (no offence to anyone, I am emphasizing their ease-of-use resulting in lack-of-options). 
BUT at least I could make one GPT for work chat, one for recipes chat, one for hobbies chat. Where is this in local tools (ability to select from multiple vector stores and instructions? Is it in Autogen?). Thanks
2023-12-16T03:59:30
https://www.reddit.com/r/LocalLLaMA/comments/18jimix/help_me_choose_need_local_rag_options_for/
TheWebbster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jimix
false
null
t3_18jimix
/r/LocalLLaMA/comments/18jimix/help_me_choose_need_local_rag_options_for/
false
false
self
23
null
What's the easiest way to run Phi-2 locally currently?
2
Seems like there's a quantised version working in the browser here: [https://huggingface.co/spaces/radames/Candle-phi1-phi2-wasm-demo?model=phi\_2\_0\_q4k](https://huggingface.co/spaces/radames/Candle-phi1-phi2-wasm-demo?model=phi_2_0_q4k). But it doesn't look like tools like LM Studio or Ollama support Phi-2 yet!
2023-12-16T03:26:55
https://www.reddit.com/r/LocalLLaMA/comments/18ji1nf/whats_the_easiest_way_to_run_phi2_locally/
mikachip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ji1nf
false
null
t3_18ji1nf
/r/LocalLLaMA/comments/18ji1nf/whats_the_easiest_way_to_run_phi2_locally/
false
false
default
2
{'enabled': False, 'images': [{'id': 'fuO00O7qX3jXuj6gAg_bAx3TaTHxTk8hT0pE_5GOrbQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=108&crop=smart&auto=webp&s=7e1c3b47e73de83455e9dcd72b43d9122a26fa40', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=216&crop=smart&auto=webp&s=95d7ffe8488d1dddf2030653c88a6fe0ca27a7f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=320&crop=smart&auto=webp&s=72ddb67b3fc2657258b1ce6d004c4b39cbb21eb2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=640&crop=smart&auto=webp&s=2bbc86f083b4b2241aeec135cd69406cca0543c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=960&crop=smart&auto=webp&s=f1cd76e16222da106693c8ca1bb3764b75395942', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?width=1080&crop=smart&auto=webp&s=78eff36df2613f91e26e09e9e4d0287b9c3c490c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2wvCpjPYxmaxmOyPHrWjhXsua5erHt5Y07a0XHPjmHc.jpg?auto=webp&s=7fac58494c77b916008a47a70e334e788b8ef212', 'width': 1200}, 'variants': {}}]}
Ethics in LLM Training Data: Any Thought?
1
Hello beautiful Llama community! I'm curious about the datasets used to train LLMs. I couldn't find much about their sources, except maybe for BLOOM. Even open-source models like Mistral don't reveal anything about the datasets used to train their LLMs. The only thing I found is that they used available datasets on Hugging Face to fine-tune the model... Are there other open-source models with transparent training data? Any link or info about it is welcome. It looks like training data is still a secret in the making of an LLM.
2023-12-16T03:25:56
https://www.reddit.com/r/LocalLLaMA/comments/18ji110/ethics_in_llm_training_data_any_thought/
steph_pop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ji110
false
null
t3_18ji110
/r/LocalLLaMA/comments/18ji110/ethics_in_llm_training_data_any_thought/
false
false
self
1
null
Mistral or Mixtral with tabular data
2
Greetings! I'm a noob! I've been testing out different 7B models as well as 70B models, and I've landed on Mistral and Mixtral. Now for the question. I have tabular data that I'm trying to query in a basic CSV. The data pertains to finding prices based on dog breed names. Basically, I type in "German Shepherd" and it should give me a full list of German Shepherds and their prices. It does for the most part, but it doesn't give me all of the German Shepherds from the CSV even though I prompted it to. The dogs and prices are in a vector DB and will change in the future, so based on my limited knowledge of LLMs, fine-tuning wouldn't make sense. I thought about serializing my CSV data, which in theory might help. I've prompted the model, but it doesn't seem to help with retrieving all results. Any ideas how one can increase the precision such that we get all dog breeds? Thank you guys
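A likely culprit here is that vector search only returns the top-k nearest chunks, so some matching rows never reach the model. A minimal sketch of serializing rows one-per-chunk plus a deterministic pre-filter (the `breed`/`price` column names are placeholders for whatever the CSV actually uses):

```python
import csv
import io

def serialize_rows(csv_text: str) -> list[str]:
    """Turn each CSV row into one self-contained sentence, so every chunk
    embedded into the vector store carries both the breed and its price."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [f"Breed: {row['breed']}. Price: {row['price']}." for row in reader]

def exact_match(rows: list[str], breed: str) -> list[str]:
    """Deterministic pre-filter: grab every serialized row mentioning the
    breed, instead of hoping top-k vector search happens to return them all,
    then hand the filtered rows to the LLM as context."""
    return [r for r in rows if breed.lower() in r.lower()]
```

For an exact-lookup task like this, the filter (or even plain SQL) does the retrieval and the LLM only formats the answer, which sidesteps the missing-rows problem entirely.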
2023-12-16T03:07:00
https://www.reddit.com/r/LocalLLaMA/comments/18jhori/mistral_or_mixtral_with_tabular_data/
Tymid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jhori
false
null
t3_18jhori
/r/LocalLLaMA/comments/18jhori/mistral_or_mixtral_with_tabular_data/
false
false
self
2
null
LLM PC build, looking for input or suggestions
3
I've been running on a system that is pretty old now, besides the 4090 in it. I'm thinking of putting together a system for about $2000 to $4000. Here's the part list I have, I already have a decent PSU. I was thinking of getting another 4090 but I'm not sure if it will fit in this mobo without risers. Any suggestions or input would be greatly appreciated! I've been wondering if a mac ultra might be a better idea. &#x200B; [AMD Ryzen 9 7950X3D](https://www.amazon.ca/AMD-RYZEN-7950X3D-Without-Cooler/dp/B0BTRH9MNS/ref=sr_1_1?crid=UZNBA06M2CS0&keywords=AMD+Ryzen+9+7950X&qid=1702619596&sprefix=amd+ryzen+9+7950x%2Caps%2C152&sr=8-1&ufe=app_do%3Aamzn1.fos.c021cf91-aacd-4a27-9bf5-80aba3271009) \- $760 [MSI MAG X670E Motherboard](https://www.amazon.ca/MSI-Tomahawk-Gaming-Motherboard-2-5Gbps/dp/B0BYB3PGQW/ref=sr_1_29?crid=N3GWBVOHKA7S&keywords=AM5%2BSocket%2Bmotherboard&qid=1702612540&sprefix=am5%2Bsocket%2Bmotherboar%2Caps%2C151&sr=8-29&th=1) \- $420 [128GB RAM](https://www.amazon.ca/CORSAIR-VENGEANCE-6400MHz-Compatible-Computer/dp/B0C1RGLVZ8/ref=sr_1_1?crid=3RORF99MUGKOV&keywords=DDR5%2B128gb%2B6400&qid=1702619985&sprefix=ddr5%2B128gb%2B640%2Caps%2C188&sr=8-1&th=1) \- $580 ($290x2) [Full size ATX Case](https://www.amazon.ca/Thermaltake-Motherboard-Computer-Pre-Installed-CA-1Q6-00M1WN-00/dp/B08451HFPR/ref=sr_1_28?crid=2DHAPWJQWFPTJ&keywords=ATX%2Bcase%2Bfull%2Btower&qid=1702622435&sprefix=atx%2Bcase%2Bfull%2Btower%2Caps%2C314&sr=8-28&th=1) \- $211
2023-12-16T03:00:06
https://www.reddit.com/r/LocalLLaMA/comments/18jhk3l/llm_pc_build_looking_for_input_or_suggestions/
SantaBooks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18jhk3l
false
null
t3_18jhk3l
/r/LocalLLaMA/comments/18jhk3l/llm_pc_build_looking_for_input_or_suggestions/
false
false
self
3
null