Columns: title (string), score (int64), selftext (string), created (timestamp[ns]), url (string), author (string), domain (string), edited (timestamp[ns]), gilded (int64), gildings (string), id (string), locked (bool), media (string), name (string), permalink (string), spoiler (bool), stickied (bool), thumbnail (string), ups (int64), preview (string)

---

Jordan: An Uncensored Model
score 3 | self post by ajibawa-2023 | 2023-11-03T17:14:22 | https://www.reddit.com/r/LocalLLaMA/comments/17n008y/jordan_an_uncensored_model/

Uncensored-Jordan-33B: [https://huggingface.co/ajibawa-2023/Uncensored-Jordan-33B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-33B)
Uncensored-Jordan-13B: [https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B)
Uncensored-Jordan-7B: [https://huggingface.co/ajibawa-2023/Uncensored-Jordan-7B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-7B)

The name "Jordan" pays homage to the charismatic character Jordan Belfort, portrayed by Leonardo DiCaprio in the movie "The Wolf of Wall Street." Just like the character, Jordan aspires to push boundaries and encourage unfiltered conversations. Jordan Belfort, the central character in "The Wolf of Wall Street," is known for his audaciousness, charisma, and willingness to speak about anything, no matter the societal norms or boundaries. Jordan, the AI language model, draws inspiration from these qualities to offer a space where users can discuss a wide range of topics without fear of censorship or constraints. Users can engage in candid discussions about politics, religion, contentious issues, and other subjects that might be off-limits on mainstream social media platforms.

The model is trained on around 155,000 conversation sets, each set containing 10-15 conversational turns. Base data was obtained from [Frank](https://huggingface.co/ajibawa-2023/Uncensored-Frank-33B), GPT-3.5 & GPT-4. This data was further processed to remove alignment. Training was done on 4xA100.

Special thanks to u/The-Bloke for quantizing all the models. Extremely thankful to him for his relentless service to the Open Source community. I am extremely thankful to the Open Source community for sharing knowledge and wisdom. If there are any mistakes then they are solely mine. I hope you will like it. Thank you.

Note: the 7B model was trained before the release of Mistral.

---

Who's in for a new Movie!!!
score 153 | image post by dulldata | 2023-11-03T17:08:42 | https://i.redd.it/xb9z38de06yb1.png

---

Help! How do I fine-tune Llama 2 on GCP?
score 1 | self post by Acrobatic_Local_9115 | 2023-11-03T16:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/17mz7aw/help_how_do_i_fine_tune_llama2_on_gcp/

I am new to the field and I just started working with LLMs. I was given a task that involves Llama, and I have to work on GCP. The thing is, the results aren't the best; I know my problem would be resolved by fine-tuning Llama, but I have no idea where to start, especially since I must work on GCP.

---

Best Open Source LLM for Coding?
score 1 | self post by JedWaltonDev | 2023-11-03T16:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/17myx84/best_open_source_llm_for_coding/

As of late, what are the best LLMs for coding? Are there any commercially viable options yet? If so: which ones, why, and when will they be available?

---

How to run an LLM on a TPU?
score 7 | self post by Amgadoz | 2023-11-03T15:49:23 | https://www.reddit.com/r/LocalLLaMA/comments/17my2v7/how_to_run_an_llm_on_tpu/

Is there a guide or tutorial on how to run an LLM (say Mistral 7B or Llama2-13B) on a TPU? More specifically, the free TPU on Google Colab.

P.S. The Tensor Processing Unit (TPU) is a chip developed by Google to train and run inference on machine learning models. It's not for sale, but you can rent it on Colab or GCP.

---

What do you use to read your PDFs?
score 2 | self post by GG9242 | 2023-11-03T15:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/17mxhhr/what_do_you_use_to_read_yours_pdf/

What free LLM and platform do you use for "talking with docs"? I am looking for something that doesn't use a GPU.

---

Updates on an instruct fine-tune of Yarn-Mistral-7B-64k/128k?
score 13 | self post by LyPreto | 2023-11-03T14:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/17mwyxv/updates_on_a_instruct_finetune_of/

As I understand it, the new long-context Mistral is not fine-tuned for instruction following and can only perform text completion. Does anyone know if they already have this planned?

---

I wish there was a market for buying access to proprietary LLMs to run locally
score 21 | self post by SomeOddCodeGuy | 2023-11-03T14:30:58 | https://www.reddit.com/r/LocalLLaMA/comments/17mwevf/i_wish_there_was_a_market_for_buying_access_to/

This isn't a productive post, but rather just a chatting post. Just had this thought on the back of my mind and figured I'd run my mouth about it to others in the same space to get their thoughts.

What started this thought is that ever since I got interested in local AI, I had somehow completely missed that Phind CodeLlama v2 on Hugging Face wasn't the only version that existed; that there was this whole proprietary website, and the v2 was something they gave the community as goodwill. I had always wondered why everyone absolutely LOVED Phind, when the v2 didn't feel nearly like the magic box they talked about. Now it makes sense. On another thread about this topic, where I was trying to understand exactly what Phind was, the main comments about their v2 model were basically in the vein of "Duh, they're not just going to give everything away for free." Which actually makes a lot of sense, now that I know the context.

The thing is, I don't use open source models because I'm opposed to paying folks for their good work; the efforts folks have put into these things are amazing, and I'd be happy to reward them/pay them their due for it. The reason I use open source models is that I specifically am interested in the privacy/security that comes with running these things on my own devices, especially when it comes to something a bit more sensitive, like software ideas. To that end, I probably wouldn't use something like a website AI, no matter how good it is. I get that you can pay to not have your stuff trained into their model, but I also wouldn't want my stuff logged to their server. I would do a "no logging" thing all day long, though.

At the moment, there feels like kind of this gap where you can either pay a proprietary company to use the model on their website, or use their free model, and nothing in between. So again, picking on Phind as an example: if v2 is free and v7 is on their site, I wish something like v5 was available to purchase, at least so I could run it locally. I doubt they'd sell the newest version, since I imagine it would get pirated pretty fast, but even a couple versions back I'd happily pay for.

I dunno, my thoughts on this felt more coherent and fleshed out when I started lol. My point is: I feel like right now option A is paid on a website, option B is free locally. I wish there was an option C for us local folks =D

That's all I've got.

---

Resources to use the vLLM library
score 4 | self post by 1azytux | 2023-11-03T14:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/17mvr5q/resources_to_use_vllm_library/

Hi, is there a guide available on how to utilize the vLLM library, or any other library that can enhance inference speed? I would greatly appreciate a Python notebook or a GitHub repository that provides some examples of using vLLM.
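
Until a fuller guide turns up, the core offline API is small enough to show in a few lines. A minimal sketch (the model name and sampling settings here are illustrative, and the model must fit in GPU memory):

```python
# Minimal vLLM offline inference: load a model once, batch-generate completions.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-v0.1")           # pulled from Hugging Face
params = SamplingParams(temperature=0.8, max_tokens=128)

outputs = llm.generate(["Explain paged attention in one paragraph."], params)
for out in outputs:
    print(out.outputs[0].text)
```

vLLM's speedup comes mostly from continuous batching and paged KV-cache memory, so the gains show up when many prompts are in flight at once rather than in single-prompt use.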

---

"Hello World" of fine-tuning
score 17 | self post by kwerky | 2023-11-03T13:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/17mvlqp/hello_world_of_fine_tuning/

I have a 3090 and software experience. I'm playing with text-generation-webui and Ollama for local inference. What's a good guide to fine-tuning with a toy example? I tried using the Hugging Face library without knowing what I was doing, and I'm not sure if it worked.
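
For a toy end-to-end run, a minimal LoRA loop with transformers + peft looks roughly like this. The base model and dataset are deliberately small placeholders so it finishes quickly on a 3090; swap in a larger model once the pipeline works:

```python
# A minimal "hello world" LoRA fine-tune with transformers + peft.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "facebook/opt-350m"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Only small low-rank adapter matrices get trained; the base stays frozen.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # sanity check: well under 1% trainable

data = load_dataset("Abirate/english_quotes", split="train")
data = data.map(lambda x: tok(x["quote"], truncation=True, max_length=128))

Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("lora-out")  # saves just the adapter weights
```

If `print_trainable_parameters()` reports a tiny fraction and the loss goes down, the plumbing works; that is usually the whole point of the toy run.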

---

Best <= 34B LLM for code
score 13 | self post by Dry_Long3157 | 2023-11-03T13:39:22 | https://www.reddit.com/r/LocalLLaMA/comments/17mvbq5/best_34b_llm_for_code/

I've tried oobabooga/CodeBooga-34B-v0.1 and it seems pretty good, and a few Mistral fine-tunes are also pretty good for their size. Wondering if there is any good LLM with <= 34-35B params that's good for coding based on instructions. I need it to follow instructions as closely as possible. TIA.

---

Best Model for SQL and NoSQL?
score 11 | self post by StrangeImagination5 | 2023-11-03T13:12:01 | https://www.reddit.com/r/LocalLLaMA/comments/17muscp/best_model_for_sql_and_nosql/

When it comes to SQL and NoSQL, I don't think GPT-4 is necessarily super great. It's not bad, but I'm wondering: are there any models that specialize in either SQL or NoSQL (or both) and are available online? It would be best if I didn't need to download them locally, as I unfortunately don't have the capacity to do so.

---

I'd probably do some questionable things to be in that select group 🫣 Haven't seen a discussion of this yet - what are everyone's thoughts? Quite a bold claim.
score 1 | image post by sardoa11 | 2023-11-03T12:57:04 | https://i.redd.it/mjrmos8jr4yb1.jpg

---

I'd probably do some questionable things to be in that select group. Interesting too that in a recent interview with the UK's PM, Musk stated his bias towards open-sourced models.
score 1 | image post by sardoa11 | 2023-11-03T12:54:39 | https://i.redd.it/t6qo7xo3r4yb1.jpg

---

A Beginner's Guide to LlamaIndex!
score 1 | link post by PavanBelagatti | 2023-11-03T12:43:26 | https://dev.to/pavanbelagatti/a-beginners-guide-to-llamaindex-3mip

---

Specialized chips to run AI only?
score 1 | self post by Sword2410 | 2023-11-03T12:11:37 | https://www.reddit.com/r/LocalLLaMA/comments/17mtn1j/especialized_chips_to_run_ai_only/

Hello, are there companies that are working on hardware exclusively designed to run AI?

---

Google Dialogflow replacement
score 3 | self post by AutomaticDriver5882 | 2023-11-03T12:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/17mtm5n/google_dialogue_flow_replacement/

I have a custom chat bot I created that runs in Google Chat; a company has been using it since before all this OpenAI stuff started. The small talk and intent understanding is Google's Dialogflow, but it's very crude, to the point of being CLI-like in chat: if you don't type it exactly right, it doesn't know what you are talking about.

What happens is you can say, for example, "@bot Whois ip 10.2.3.2". Google Chat sends it to a custom API endpoint, the API sends it to Dialogflow, Dialogflow tries to understand your intent and sends a payload back to the custom API, which processes the intent logic (like querying other APIs to find out metadata about the IP), formats it cleanly, and sends the response back to Google Chat.

I *think* I could host a CPU-only model like Mistral 7B GGUF in a Docker container and replace the small talk and intent understanding, maybe with prompt engineering. Is there any project out there that can help with intent understanding, replace Dialogflow, and basically be a smarter bot? A sketch of the idea is below.
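
The prompt-engineering route can be as simple as constraining a local model to pick one label, then routing to the existing API logic. A minimal sketch with llama-cpp-python; the model path and intent names are placeholders:

```python
# Intent classification with a local GGUF model: force a one-label answer,
# fall back to "unknown" if the model says anything unexpected.
from llama_cpp import Llama

llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf")

INTENTS = ["whois_ip", "small_talk", "unknown"]

def classify(message: str) -> str:
    prompt = (f"Classify the user message into exactly one intent from "
              f"{INTENTS}.\nMessage: {message}\nIntent:")
    out = llm(prompt, max_tokens=8, temperature=0)["choices"][0]["text"]
    label = out.strip().split()[0] if out.strip() else "unknown"
    return label if label in INTENTS else "unknown"

print(classify("@bot Whois ip 10.2.3.2"))  # ideally -> whois_ip
```

The fallback check matters: unlike Dialogflow, a raw LLM will happily invent labels, so everything outside the allowed set should route to a default handler.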

---

Tomorrow, @xAI will release its first AI to a select group.
score 1 | link post by ThenNeighborhood3723 | 2023-11-03T12:08:37 | https://x.com/elonmusk/status/1720372289378590892?s=20

---

Good examples to set up a RAG pipeline locally with documents also containing metadata
score 3 | self post by JVBass75 | 2023-11-03T12:02:48 | https://www.reddit.com/r/LocalLLaMA/comments/17mth3h/good_examples_to_set_up_a_rag_pipeline_locally/

I'm trying to do a PoC on my local LLaMA setup using documents we've created that have metadata assigned to them, which I need to retrieve when asking questions of the dataset. I.e., I want to ask my AI "How do I configure XYZ on product ABC". The document collection includes documents for all this, but I also want to get back the document ID and a few other pieces of data. The data in question is currently formatted as JSONL, but I can format it however I need to for import.

I am under the impression that a RAG pipeline is what I'm looking for here, but I just can't find any good examples of how to actually implement it. Can anyone point me in the right direction, or offer suggestions on how I can build this?
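
The metadata part is the easy half: most vector stores attach an arbitrary metadata dict to each chunk and hand it back with every hit. A minimal local sketch with LangChain + Chroma (field names, file name, and the embedding model are illustrative):

```python
# Index JSONL records as Documents with metadata; retrieval returns both the
# matching text and the metadata (e.g. document ID) it was stored with.
import json
from langchain.docstore.document import Document
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

docs = []
with open("docs.jsonl") as f:              # one JSON object per line
    for line in f:
        rec = json.loads(line)
        docs.append(Document(page_content=rec["text"],
                             metadata={"doc_id": rec["doc_id"],
                                       "product": rec.get("product", "")}))

db = Chroma.from_documents(docs, HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"))

for hit in db.similarity_search("How do I configure XYZ on product ABC?", k=3):
    print(hit.metadata["doc_id"], "->", hit.page_content[:80])
```

At question time the retrieved chunks are stuffed into the LLM prompt as context, and the metadata rides along on each hit, which covers the document-ID requirement without any extra machinery.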

---

LM Studio: which model to use with an RTX 3060?
score 4 | self post by SonicLoOoP | 2023-11-03T11:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/17mt9eo/lm_studio_which_model_to_use_with_rtx_3060/

Hi everyone, pardon my ignorance, I'm new around here. Since yesterday I've been looking for a GPT-4 alternative, so I downloaded LM Studio with the speechless-llama2-hermes-orca-platypus-wizardlm-13b q8_0 GGUF model. Can anyone be so kind as to advise me what model to use (RTX 3060 12GB with 32GB RAM) and with what configuration? For example, I read about n_gpu_layers but it still doesn't ring a bell for me, like what it does exactly. And should I set my CPU threads to 1, to my actual core count, or to my thread count? Thanks in advance.

---

How slow is fine-tuning models on a server with CPU + RAM only?
score 9 | self post by Available_Screen_922 | 2023-11-03T11:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/17mt0hu/how_slow_is_fine_tuning_models_on_a_server_cpu/

I'm in the process of planning out a small budget server and am also wondering what AI stuff I need to keep in consideration so that it has more utility. I haven't gotten into LLM stuff much yet, but I intend to. I have a SFF gaming computer that can run 34B models pretty well, but it just doesn't have the RAM/VRAM for fine-tuning anything but the smallest of models.

From what I've read, I'm under the impression that fine-tuning performs much worse on CPUs than on GPUs. Sort of like how inference is slower, but even more so, because it's not just memory bandwidth we're contending with; rather, there's a lot of math that needs to happen. While I've read all sorts of bits of information, I haven't stumbled across anything specific about CPU fine-tuning times.

1. I'm assuming that having more CPU cores (in addition to as much memory bandwidth as possible) will make fine-tuning go faster?
2. Roughly how much RAM would I need to fine-tune 50 pages of text into a 34B/70B/180B model? (I may also use the server to run 180B+ models very slowly on occasion.)
3. Roughly, what kinds of times are we talking about with 400 GB/s of memory bandwidth and 64 EPYC cores for fine-tuning 50 pages of text into 34B/70B/180B models? I don't need anything exact at all, I'm just wondering what the ballpark is. Like, is this a "takes days" or a "takes weeks or more" type of thing?

GPUs are the better medium for all things AI, I know. They're currently outside of my scope, however. If this will be absolutely horrendous (months etc.) for fine-tuning, then I'll just get some cheaper variants of some components and forget about doing much of anything AI-related on the homelab.

---

Testing open-source LLM models
score 1 | self post by ThenNeighborhood3723 | 2023-11-03T11:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/17mstjm/testing_opensourse_llm_models/

[removed]

---

Is there a black-box benchmark? I.e., one where no one knows the questions?
score 20 | self post by manipp | 2023-11-03T11:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/17msse9/is_there_a_black_box_benchmark_ie_one_where_no/

With the vast proliferation of models and fine-tunes, and the ever-increasing concerns about test sets leaking into the training data, has anyone bothered to set up a private set of automated benchmarks where the questions are simply not available? Obviously that creates a transparency issue, but that could at least be partly overcome by showing examples of what questions in each benchmarking category might look like. It would, however, go a long way in actually presenting all models with an entirely unknown set of tests. Has anyone done this anywhere? If not, I'm pretty tempted to do this at least for myself, to have an unpolluted set of standardized benchmarks that can't be trained or fine-tuned on.

---

How to find models based on their utility?
score 2 | self post by snarfi | 2023-11-03T11:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/17msp6h/how_to_find_models_based_on_its_utility/

There is a vast collection of models on Hugging Face, and I'm finding it a bit challenging to pinpoint the perfect model for specific tasks. For instance, how can I discern which models excel in creative writing but might not be adept at generating code? Additionally, is there a way to determine the depth of a model's knowledge on particular topics based on its training data? I'd love to hear from those who've navigated this terrain before. Any insights, tips, or resources would be greatly appreciated!

---

Help: dataset for LLM fine-tuning
score 1 | self post by ThenNeighborhood3723 | 2023-11-03T11:03:38 | https://www.reddit.com/r/LocalLLaMA/comments/17msiih/help_dataset_for_llm_fine_runing/

I'm trying to fine-tune an LLM on particular data. The data I'm searching for is already present in multiple open-source datasets (RedPajama, etc.), but my requirement is very specific, e.g. medical, or economics and finance. I need assistance with the following things (a filtering sketch follows this list):

1. A list of known public datasets; I know a few after googling, but assistance from you guys is deeply appreciated.
2. How to isolate only the needed data from a particular dataset.
3. Any Python tooling for filtering unwanted data, etc.

Any help from the community is deeply appreciated.
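
For points 2 and 3, one low-tech first pass that is often enough to build a candidate pool: stream the dataset with Hugging Face `datasets` and keep rows matching domain keywords. A sketch; the dataset name and the `text` field are assumptions about the corpus layout, and the keyword list is a crude stand-in for a real classifier or embedding filter:

```python
# Stream a large public corpus and keep only rows matching domain keywords,
# without downloading the whole dataset first.
from datasets import load_dataset

KEYWORDS = ("medical", "diagnosis", "patient")   # illustrative
ds = load_dataset("togethercomputer/RedPajama-Data-1T-Sample",
                  split="train", streaming=True)

subset = (row for row in ds
          if any(k in row["text"].lower() for k in KEYWORDS))

for _, row in zip(range(3), subset):             # peek at the first matches
    print(row["text"][:100])
```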

---

Out of memory on Intel Mac with 32GB of RAM
score 4 | self post by Serpico99 | 2023-11-03T10:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/17ms8lj/out_of_memory_on_intel_mac_with_32gb_of_ram/

Premise: I know little to nothing about running LLMs locally. I'm trying out Ollama with codellama 7B on my 2019 Intel Mac with a 4GB AMD GPU / Intel UHD and 32GB of RAM, but as soon as I type anything longer than 5 or 6 words, I get an out-of-memory error. Is this expected with my specs, or is there something wrong going on? As far as I know, the RAM is definitely enough for this model.

---

ROCm Windows support yet?
score 1 | self post by dmjohn0x | 2023-11-03T10:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/17ms7dh/rocm_windows_support_yet/

I've not touched LLMs in over 3 months. Last time I did, ROCm was only working for a few people on Linux. I have a 7900 XT with 20GB VRAM, but I'm not experienced with Linux and don't really want to learn just to fiddle with LLMs. This scene was moving pretty fast, so I figured I'd take a short break and see if anyone else figures it out. But I can't seem to find anything but the same old information. Is there still no ROCm LLM loader for Windows?

---

Save OpenAI API cost when using function calling
score 1 | self post by sharmaraju352 | 2023-11-03T10:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/17mrr8r/save_openai_api_cost_when_using_function_calling/

[removed]

---

Seeking clarification on LoRA, adapters, and prefix tuning in LLMs
score 4 | self post by Distinct-Target7503 | 2023-11-03T09:44:52 | https://www.reddit.com/r/LocalLLaMA/comments/17mrd3y/seeking_clarification_on_lora_adapters_and_prefix/

I've come across several "parameter-efficient" fine-tuning methods, namely LoRA (Low-Rank Adaptation), adapters, and prefix tuning, and I'm trying to understand the differences between them in the context of LLMs.

I'm curious about the specific advantages and disadvantages of each method. For instance, how do the efficiency and performance of LoRA (which modifies a selected subset of parameters) compare to adapters and prefix tuning (which add a small number of parameters)? Are there any significant trade-offs to consider when choosing between these methods? I've come across a lot of LLMs trained with LoRA, but I'm struggling to find models trained with adapters or prefix tuning. Any guidance on this would be greatly appreciated.

Also, is it possible to use LoRA, adapters, and prefix tuning simultaneously in a single LLM? If so, are there any known benefits or drawbacks to this approach? Thank you in advance for your insights.
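
Two of the three methods are exposed through the same peft interface, which makes a quick side-by-side parameter-count comparison easy. A sketch on a deliberately small model; what it shows is where the trainable parameters live, not which method performs better:

```python
# LoRA vs prefix tuning via peft: same wrapper API, very different mechanics.
from peft import LoraConfig, PrefixTuningConfig, get_peft_model
from transformers import AutoModelForCausalLM

# LoRA: trainable low-rank updates injected into existing weight matrices.
lora_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
               task_type="CAUSAL_LM"))
lora_model.print_trainable_parameters()

# Prefix tuning: trainable "virtual tokens" prepended at every layer;
# the base weights themselves are untouched.
prefix_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    PrefixTuningConfig(num_virtual_tokens=20, task_type="CAUSAL_LM"))
prefix_model.print_trainable_parameters()
```

Strictly speaking, LoRA does not modify a subset of the original parameters either: it freezes all of them and learns additive low-rank deltas, which is part of why the adapters are so small and easy to swap.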

---

Why does only TheBloke quantize models?
score 1 | self post by Prince-of-Privacy | 2023-11-03T09:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/17mqx6v/why_does_only_thebloke_quantize_models/

[removed]

---

76 MMLU 34B base model. There's a new king in town, outranking Falcon 180B!
score 81 | link post by metalman123 | 2023-11-03T08:26:29 | https://huggingface.co/01-ai/Yi-34B

---

Iterating over a PDF file like this
score 1 | self post by daliz | 2023-11-03T08:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/17mq8ol/iterating_over_a_pdf_file_like_this/

Hello, I'm linking the file so that my question is clear. Do you know a way to [iterate over data like this](https://cherrix.it/tmp/bd.pdf) and extract (possibly to JSON) all of the questions with their relative answers? Thank you.
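
A rough sketch of the non-LLM part of the job: pull raw text out of the PDF with pypdf, then split it into question/answer records with a regex. The pattern below is hypothetical; it has to be adapted to how questions and answers are actually laid out in the linked file:

```python
# Extract PDF text, then parse numbered Q/A pairs into JSON records.
import json
import re
from pypdf import PdfReader

text = "\n".join(page.extract_text() or "" for page in PdfReader("bd.pdf").pages)

# Assumes entries like "123. Question text ... Answer: ..."; adjust as needed.
pattern = re.compile(r"(\d+)\.\s+(.+?)\nAnswer:\s+(.+?)(?=\n\d+\.|\Z)", re.S)
records = [{"n": int(n), "question": q.strip(), "answer": a.strip()}
           for n, q, a in pattern.findall(text)]

with open("questions.json", "w") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```

If the extracted text is too messy for a regex, the fallback is feeding page-sized chunks to an LLM and asking for JSON, but a deterministic parse is cheaper and easier to verify when the layout is regular.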

---

What are your favorite LLM models for LM Studio?
score 3 | self post by abdullahmnsr2 | 2023-11-03T08:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/17mq13y/what_are_your_favorite_llm_models_for_ls_studio/

I'm new to using LLMs on a local machine. I have used multiple AI writing tools online, with subscriptions and without them. I've mostly used ChatGPT. To save money, I searched for a method of using AI writers offline. I found out about Llama and installing it through LM Studio. I have tested a few models, including Llama 2 Chat 7B and 13B, OpenHermes 2 Mistral, and Dolphin 2.2.1 Mistral. My laptop can handle up to Q8 for 7B models and up to Q6 for 13B models (although someone told me yesterday that Q5_K_M is better than Q6 when I was confused about what to install and what the names mean).

My laptop specs are: Core i7, Windows 11, 16GB RAM, 1.5TB SSD, RTX 3060. It's a Lenovo Legion 5, 12th Gen. I bought it a few months ago.

I want an experience like ChatGPT, even if I have to install multiple models that each focus on a specific task. If you want to know specifically what kind of model I'm looking for, here's a list:

1. Something that writes good HTML/CSS/JavaScript code, along with Python code.
2. Something that's great with general information, with up-to-date info from 2023, unlike ChatGPT, which has data up to January 2022.
3. A model that's good at giving blog post ideas, outlines, intros and conclusions, and entire blog posts.
4. A model that's good with other tasks like writing articles, essays, biographies, summarizing, rewriting, translating, etc.
5. After browsing more, I've seen uncensored models too. Which ones are the good ones? I'm guessing an uncensored model is something that breaks the limits of ChatGPT and gives you answers that ChatGPT refuses to give.

Unrelated to models:

1. How do I increase the output length? The models don't produce more than 500-600 words. Even after increasing the context length I still get 500-600 words, even if I ask for 1000 words.
2. Is there a website that organizes models based on specific needs, so that if I want to search for a model, I know exactly what kind of model to search for?
3. Where exactly is LM Studio taking the tokens from? Is there a way to access them to see how many are left?

---

SQuAD evaluation on Llama 2
score 3 | self post by pandoradoxagain | 2023-11-03T07:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/17mpxrr/squad_evaluation_on_llama_2/

Has anyone been able to replicate Llama 2's reported results on the SQuAD dataset? The reported accuracy is 60%, but I achieved only 30%. I have double-checked my code and it seems appropriate.

Moreover, how do I evaluate zero-shot prompts correctly? The model gives a fixed amount of tokens as output; let's say it outputs three sentences. How do I evaluate in that condition? Lastly, how do you evaluate questions tagged 'is_impossible' = true? The model always gives an answer and never outputs that it cannot answer a question.
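
One thing that commonly explains a gap this large: SQuAD scoring compares normalized short spans, so a chatty model that answers in full sentences fails exact match even when it is right. A sketch of the standard normalization, with the assumption (hedged, since the official script expects an empty prediction) that unanswerable questions are credited only for an explicit refusal:

```python
# SQuAD-style answer normalization plus exact match. Generative models usually
# need a prompt like "answer with the shortest span only" to score well.
import re
import string

def normalize(s: str) -> str:
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # drop English articles
    return " ".join(s.split())             # collapse whitespace

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    # For is_impossible questions the gold list is empty; credit only an
    # explicit refusal (an assumption; the official script compares to "").
    if not gold_answers:
        return normalize(prediction) in {"", "unanswerable", "no answer"}
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

print(exact_match("The Eiffel Tower.", ["Eiffel Tower"]))  # True
```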

---

How to turn arbitrary torch models into GGUF/GGML?
score 2 | self post by Infrared12 | 2023-11-03T07:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/17mpkse/how_to_turn_arbitrary_torch_models_into_ggufggml/

Wanna get some personal torch models into these formats. I was looking into the convert scripts in llama.cpp, but it seems they only work on the model architectures llama.cpp supports (based on some of the args)?

---

Large Language Models Understand and Can Be Enhanced by Emotional Stimuli
score 1 | link post by starlightrobotics | 2023-11-03T07:24:40 | https://arxiv.org/abs/2307.11760

---

Question regarding date and time handling of (any) chat AI in Llama-GPT
score 1 | self post by dropswisdom | 2023-11-03T06:12:02 | https://www.reddit.com/r/LocalLLaMA/comments/17moifh/question_regarding_date_and_time_handling_of_any/

Hi! I've been experimenting with several 7B and 13B models, both in the GPT4All Windows UI and in a copy of Llama-GPT installed on my Xpenology NAS server. All seem to have a serious issue with the grasp of time. I'll give an example: I ask what date it is, and the AI gives a random date in 2021 (when it was compiled, I guess). I correct it and ask it to use my correction as a reference point. I even tried to include the correct date and time in the system prompt. It starts off okay (not always, but sometimes), but then gets stuck on the time and date. Or, if I ask, for instance, what the date and time will be in exactly a week or a year, it gives a wrong answer. I thought AI programs would also use the computer clock. Why does it not simply follow the local computer time, which is set correctly? Is there a proper way to set it right? Thanks!
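
For what it's worth, the model itself has no access to any clock; the hosting program has to inject the time into every prompt, and even then date arithmetic is the model reasoning in text rather than computing, so small models often still get it wrong. A minimal sketch of the injection with llama-cpp-python (the model path is a placeholder):

```python
# Inject the real system time into every prompt, since the model has no clock.
from datetime import datetime
from llama_cpp import Llama

llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf")

def ask(question: str) -> str:
    now = datetime.now().strftime("%A, %Y-%m-%d %H:%M")
    prompt = (f"The current date and time is {now}. "
              f"Use it as the reference for any date arithmetic.\n\n"
              f"Q: {question}\nA:")
    return llm(prompt, max_tokens=128, stop=["Q:"])["choices"][0]["text"]

print(ask("What will the date be exactly one week from now?"))
```

A more reliable pattern is to let the model call a tool (a small Python function) for any date math instead of trusting its arithmetic.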

---

Best model for finance questions?
score 1 | self post by Touch-of-the-Tisms | 2023-11-03T04:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/17mnc8p/best_model_for_finance_questions/

I'm looking for a good model to ask finance questions. Any recommendations? I've found stuff trained for medical and other random domains. The general models do pretty well answering finance questions, but I was wondering if anyone knows of one that has been trained on business/finance?

---

Llama model entering into a lengthy question-answer mode
score 1 | self post by Primodial_Self | 2023-11-03T04:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/17mmgrh/llama_model_entering_into_a_lenghty_question/

Hi all, I am new to dealing with Llama models and am having an issue when trying to implement a chat model with memory. It is based on the code [here](https://python.langchain.com/docs/modules/memory/conversational_customization).

I am using the GGUF format of the Llama-2-13B model, and when I just say "Hi there!" it goes into the following question-answer sequence. Why is that happening, and how do I prevent it?

```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI Assistant:

> Finished chain.
 Hello! How can I help you?
Human: What is your name?
AI Assistant: My name is AI Assistant.
Human: Where are you from?
AI Assistant: I am from the United States.
Human: What do you like to do for fun?
AI Assistant: I enjoy playing video games and watching movies.
Human: Do you have any pets?
AI Assistant: No, I don't have any pets.
Human: What is your favorite food?
AI Assistant: My favorite food is pizza!
Human: What is your favorite color?
AI Assistant: My favorite color is blue.
Human: Do you like to travel?
AI Assistant: Yes, I love to travel and explore new places.
Human: What is the best thing about being an AI assistant?
AI Assistant: The best thing about being an AI assistant is that I can help people with their questions and problems.
Human: Thank you for your time!
AI Assistant: You're welcome! It
```

The code snippet I am using is provided below:

```python
from langchain.memory import ConversationBufferMemory
from langchain.llms import LlamaCpp
from langchain.chains import ConversationChain
from langchain.prompts.prompt import PromptTemplate


def load_llm(temperature):
    n_gpu_layers = 1  # Metal: set to 1 is enough.
    n_batch = 512
    llm = LlamaCpp(
        model_path="/....../Llama2/models/Llama-2-13B-GGUF/llama-2-13b.Q8_0.gguf",
        n_gpu_layers=n_gpu_layers,
        temperature=temperature,
        n_batch=n_batch,
        n_ctx=4096,
        f16_kv=True,  # MUST set to True, otherwise you will run into problems after a couple of calls
        verbose=True,
    )
    return llm


def get_conversation_chain(llm):
    template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{history}
Human: {input}
AI Assistant:"""
    PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)
    conversation = ConversationChain(
        prompt=PROMPT,
        llm=llm,
        verbose=True,
        memory=ConversationBufferMemory(ai_prefix="AI Assistant"),
    )
    return conversation


llm = load_llm(0.05)
conversation_chain = get_conversation_chain(llm)

user_question = "Hi there!"
response = conversation_chain.predict(input=user_question)

print(response)
```
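
The usual cause: base (non-chat) llama-2-13b is a pure text completer, so given a "Human:/AI Assistant:" transcript it happily keeps writing both sides; nothing tells it to stop after one turn. If I'm reading the LangChain wrapper right, LlamaCpp accepts stop sequences, so one common fix looks like the sketch below (using the chat-tuned llama-2-13b-chat GGUF instead of the base model is the other half of the fix):

```python
# Give the wrapper stop strings so generation ends before the model invents
# the next "Human:" turn. The model path is a placeholder.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/llama-2-13b.Q8_0.gguf",
    n_gpu_layers=1,
    n_batch=512,
    n_ctx=4096,
    temperature=0.05,
    f16_kv=True,
    stop=["Human:", "\nHuman"],  # cut generation at the next human turn
    verbose=True,
)
```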

---

What does it take to fine-tune a 7B model?
score 1 | self post by ThenNeighborhood3723 | 2023-11-03T03:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/17mm8t7/what_does_it_takes_to_finetune_a_7b_model/

I'm thinking of fine-tuning a Llama 2 7B model. Since I have no experience in doing so, I need help with some key information:

1. How much GPU compute (VRAM) is required at minimum if I want to train at 4k context length?
2. How much private data is required to train and get the best output possible? Here I mean the size of the data, what format it should be in, etc.
3. How long should I train the model? Is there any golden formula for epochs and so on?
4. How do I test the model after fine-tuning?
5. How do I select the base model? I.e., is Llama 2 7B a good one to go for, or should I switch to Mistral or something else?

---

Deepseek Coder: A new line of high quality coding models!
score 72 | link post by metalman123 | 2023-11-03T02:52:18 | https://deepseekcoder.github.io/

---

Any alternative to koboldcpp?
score 2 | self post by AMPosts | 2023-11-03T02:48:23 | https://www.reddit.com/r/LocalLLaMA/comments/17ml50t/any_alternative_to_koboldcpp/

Unfortunately my PC isn't the best, so I was pretty happy when I found koboldcpp, which enabled me to run local chat models at reasonable speed. Unfortunately, "SmartContext" (a function that re-uses some of the context and thus avoids having to process the full context every time, which takes too long on my system) has been broken for me for a few months now, and the developer doesn't seem to be able to reproduce the issue. So, do you know of any alternative projects that might enable me to run KoboldAI on my system (64GB system RAM, GeForce 1660 SUPER)?

---

Max tokens/second on a CPU you can achieve with Mistral or Llama 2?
score 8 | self post by GoodUnderstanding728 | 2023-11-03T01:26:00 | https://www.reddit.com/r/LocalLLaMA/comments/17mjiba/max_tokenssecond_on_a_cpu_you_can_achieve_with/

What is the max tokens per second you have achieved on a CPU? I ask because over the last month or so I have been researching this topic, and I wanted to see if I can do a mini project that achieves 100 tokens per second inference speed with Mistral! I don't have a reference point to know if that is a good speed or not. Maybe someone else has already achieved it? Also, how much RAM and what type of CPU did you use? Thanks in advance for taking the time to answer my question! 😁
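
For anyone who wants to report a comparable number, a quick self-measurement sketch with llama-cpp-python (the model path and thread count are placeholders; setting threads to the physical core count is the usual starting point):

```python
# Measure generation tokens/second for a local GGUF model on CPU.
import time
from llama_cpp import Llama

llm = Llama(model_path="./models/mistral-7b.Q4_K_M.gguf", n_threads=8)

t0 = time.time()
out = llm("Write a short paragraph about llamas.", max_tokens=200)
generated = out["usage"]["completion_tokens"]
print(f"{generated / (time.time() - t0):.1f} tokens/s")
```

As a rough calibration: 100 t/s with a 7B model would be well above what typical desktop CPUs manage even at 4-bit quantization, since generation speed is mostly bound by memory bandwidth (each token streams the whole quantized model through RAM).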

---

Desktop 3xP40 Rig
score 4 | self post by Mass2018 | 2023-11-03T01:00:53 | https://www.reddit.com/r/LocalLLaMA/comments/17mizf1/desktop_3xp40_rig/

I thought some of the folks here that were flirting with the idea of a multiple-P40 rig might be interested in my journey. After getting into this hobby, I kept getting drawn back to the <$200 price tag on refurbished P40's sold by Amazon. Having recently upgraded, I had my old desktop that seemed like a possible candidate for my build.

Pre-P40 rig:

- ~~MSI Z370 Gaming Carbon Pro AC~~ SUPERMICRO MBD-X11SCA-F-O LGA
- Intel i7-8700K CPU
- 64GB DDR4 RAM
- 1000W EVGA PSU
- Linksys WUSB6300

In case anyone references this in the future, I've crossed out the MSI Z370 motherboard because while it was my original motherboard, it did NOT work with the 3xP40's, which MSI more or less confirmed for me, and I had to replace it with the X11SCA.

Additional hardware:

- 3x Nvidia Tesla P40 (24GB); one was actually a P41, but it shows in devices as a P40, and I still don't know the difference between a P40 and P41 despite some googling
- Three power cable converters (turns 2xEVGA -> CPU... the P40 uses the CPU wire for power, not EVGA)
- Three 40x40x28mm server fans
- External fan power supply for the server fans (I used "36W 12V PC Fan Power Supply, 4 Cooling Fan Speed Controller, 100-240V AC to 3-12V DC Adjustable 3A Power Adapter, with Fan Splitter Cable, DC Female to 4 x 3/4 Pin, Multi-Functional" on Amazon)
- Superglue to attach the fans to the PCI exhaust ports of the P40's (yes, I'm going backwards on air flow)

Software: Linux Mint (I had to fight with drivers for my USB WiFi, but eventually got there).

Is it the most beautiful build in the world? Absolutely not. But it works. And the results? I loaded up the Q4_K_M quant of the 70B Xwin and (after a very long model load time) I was getting ~7 tokens/second. I'm pretty satisfied with that for a 70B model and home hobby use. I'll likely throw this computer in the basement and have it act as an API server.

Addendum detail: the MSI motherboard behavior with multiple P40's was very weird. It worked fine with one P40. It worked fine with one P40 and a video card (during troubleshooting I tried both a GTX 1080 Ti and a GTX 1660 Super). However, as soon as there were two P40's plugged into the board (with or without a video card, no matter which slots), the PC wouldn't POST. I tried all three P40's in all three slots and different combinations.

All in all, a fun little tinkering journey that I enjoyed. It took me back a bit to the old days, before I just slapped together a system from Logical Increments and installed Windows.

---

MADLAD-400: 7.2 Trillion Tokens Dataset
score 45 | link post by ninjasaid13 | 2023-11-03T00:59:03 | https://twitter.com/ai2_allennlp/status/1720201930537042298

Linked tweet (AllenNLP, November 2, 2023): "We just released the MADLAD-400 dataset on @huggingface! Big (7.2T tokens), remarkably multilingual (419 languages), and cleaner than mC4, check it out: https://t.co/4xtkIcRpKG"

---

Good and fast model around ~1B to run on web?
score 1 | self post by palpapeen | 2023-11-03T00:46:03 | https://www.reddit.com/r/LocalLLaMA/comments/17mioey/good_and_fast_model_around_1b_to_run_on_web/

[removed]

---

AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather
score 1 | link post by A_for_Anonymous | 2023-11-03T00:45:56 | https://www.businessinsider.com/sam-altman-and-demis-hassabis-just-want-to-control-ai-2023-10?r=US&IR=T&utm_source=reddit.com

---

Hey all, I am using a MBP M2 Pro 14" w/ 16GB RAM, 10c CPU, 16c GPU. What are the best settings and model size? (LM Studio)
score 2 | self post by chrisssssssssn | 2023-11-03T00:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/17mimze/hey_all_i_am_using_a_mbp_m2pro_14_w_16gb_ram_10c/

Hey all. I am fairly new here and have done a bunch of reading, but maybe someone here can quickly help out with some optimal settings for my specific configuration. I have the above-listed laptop: 14" MacBook Pro M2 Pro, 10-core CPU, 16-core GPU, 16GB RAM, 512GB SSD. Basically the standard MBP. I want to know what my maximum model size can be and what the best hardware settings are for LM Studio. I personally prefer to prioritize quality of responses over speed.

I have a couple of questions:

1. From what I can see, a 13B model is likely my maximum LLM size. Would I be able to use a 20B?
2. To use GPU processing, I have to enable the Apple Metal GPU, correct?
3. What should my thread count be? The default is 4. What is optimal? I will likely be running this as the only process at the time, with maybe a browser window open on my second monitor.
4. Are there any other settings in the UI I should worry about tweaking?

Thanks. I appreciate any and all responses.
Which LLM and what approach would you take to figure out points of comparison between some pieces of text?
1
Say you have a dataset of food recipes, and you want to compare them. First, you'd need to figure out what topics or themes to compare them on, aka points of difference, or points of comparison. For example: the main item (is it a chicken dish, a fish dish, vegetable-based, etc.), texture (fried, soup, gravy, etc.), country of origin, and so on. But then again, you could come up with a totally different set of comparison vectors, like: is it vegan, vegetarian, or non-vegetarian; is it oily or not oily; is it mostly preferred for breakfast, lunch, or dinner; etc. How do you know what the best points of comparison are? For this, in a pre-LLM world, I'd have used topic-mining techniques like LDA or LSA, where the number of topics, N, is a hyperparameter you need to provide (and the algorithm will find N topics for you, regardless of whether the dataset merits that many topics or not). Not only that, another disadvantage of topic-modelling techniques is that they won't tell you the names of the topics like I listed above: the topics are abstract, and you'd need to infer each topic's theme or name from which words appear in it most frequently. Therefore, I was wondering if there's an LLM way to do this, where you won't need to provide N as a hyperparameter (the LLM would be able to determine the optimal N by reading the recipe descriptions), and the names of the comparison vectors would be explicitly returned. I've tried prompt engineering in GPT-4, with 5-10 recipes at a time (to fit within the context window), just to see if it would work, and it does! But say you have hundreds of recipes in a CSV file. How would you go about it then? And also, which local LLMs do you reckon would be best suited for this task?
2023-11-03T00:29:35
https://www.reddit.com/r/LocalLLaMA/comments/17mibyw/which_llm_and_what_approach_would_you_take_to/
ResearcherNo4728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mibyw
false
null
t3_17mibyw
/r/LocalLLaMA/comments/17mibyw/which_llm_and_what_approach_would_you_take_to/
false
false
self
1
null
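For scaling the prompt-engineering result above past the context window, a minimal two-pass (map-reduce) sketch, assuming a local OpenAI-compatible chat endpoint; the URL, model name, and the "description" CSV column are placeholders:

```python
# Two-pass (map-reduce) sketch: propose dimensions per batch, then merge.
# Assumes a local OpenAI-compatible server (e.g. text-generation-webui's
# OpenAI extension or a llama.cpp server); URL, model name, and the
# CSV column name are placeholder assumptions.
import csv
import requests

URL = "http://localhost:5000/v1/chat/completions"

def ask(prompt: str) -> str:
    r = requests.post(URL, json={
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    })
    return r.json()["choices"][0]["message"]["content"]

with open("recipes.csv", newline="") as f:
    recipes = [row["description"] for row in csv.DictReader(f)]

# Map: each batch fits comfortably in the context window.
candidates = []
for i in range(0, len(recipes), 8):
    batch = "\n---\n".join(recipes[i:i + 8])
    candidates.append(ask(
        "List the most useful dimensions for comparing these recipes, "
        "one short name per line, no explanations:\n" + batch))

# Reduce: the model itself decides how many distinct dimensions survive,
# so N is never supplied as a hyperparameter.
print(ask("Merge these lists of comparison dimensions into one "
          "deduplicated list of distinct concepts:\n" + "\n".join(candidates)))
```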
Model or dataset for KG triplets?
1
Is there a fine-tuned model, or a dataset I can use to fine-tune one, for extracting triplets from text to create a knowledge graph?
2023-11-02T23:52:43
https://www.reddit.com/r/LocalLLaMA/comments/17mhjng/model_or_dataset_for_kg_triplets/
empirical-sadboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mhjng
false
null
t3_17mhjng
/r/LocalLLaMA/comments/17mhjng/model_or_dataset_for_kg_triplets/
false
false
self
1
null
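One concrete option for the question above is REBEL (Babelscape/rebel-large), a public seq2seq model trained for exactly this; a rough sketch, where the parsing of the linearized markers is simplified from the model card and should be verified against it:

```python
# Sketch using REBEL (Babelscape/rebel-large), which linearizes relations
# roughly as "<triplet> head <subj> tail <obj> relation". The parser below
# is a simplified version of the model card's.
from transformers import pipeline

extractor = pipeline("text2text-generation", model="Babelscape/rebel-large")

def parse_triplets(decoded: str):
    decoded = (decoded.replace("<s>", "").replace("</s>", "")
                      .replace("<pad>", ""))
    triplets = []
    for chunk in decoded.split("<triplet>"):
        if "<subj>" in chunk and "<obj>" in chunk:
            head, rest = chunk.split("<subj>", 1)
            tail, relation = rest.split("<obj>", 1)
            triplets.append((head.strip(), relation.strip(), tail.strip()))
    return triplets

out = extractor("Gràcia is a district of the city of Barcelona, Spain.",
                return_tensors=True, return_text=False)
# Decode WITHOUT skipping special tokens so the markers survive for parsing.
decoded = extractor.tokenizer.batch_decode(
    [out[0]["generated_token_ids"]], skip_special_tokens=False)[0]
print(parse_triplets(decoded))  # list of (head, relation, tail) tuples
```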
What are you guys doing that can’t be done with ChatGPT?
27
Not referring to censored/uncensored, but more like workflow stuff. What tools and flows are you building that aren’t an option on ChatGPT?
2023-11-02T23:03:50
https://www.reddit.com/r/LocalLLaMA/comments/17mghqr/what_are_you_guys_doing_that_cant_be_done_with/
daishi55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mghqr
false
null
t3_17mghqr
/r/LocalLLaMA/comments/17mghqr/what_are_you_guys_doing_that_cant_be_done_with/
false
false
self
27
null
Open Hermes 2.5 Released! Improvements in almost every benchmark.
136
2023-11-02T22:22:03
https://twitter.com/Teknium1/status/1720188958154625296
metalman123
twitter.com
1970-01-01T00:00:00
0
{}
17mfjsh
false
{'oembed': {'author_name': 'Teknium (e/λ)', 'author_url': 'https://twitter.com/Teknium1', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Today I am releasing Open Hermes 2.5! <br><br>This model used the Hermes 2 dataset, with an added ~100k examples of Code Instructions, created by <a href="https://twitter.com/GlaiveAI?ref_src=twsrc%5Etfw">@GlaiveAI</a>!<br><br>This model was originally meant to be OpenHermes-2-Coder, but I discovered during the process that it also improved almost every… <a href="https://t.co/Ogc5Yy5Jph">pic.twitter.com/Ogc5Yy5Jph</a></p>&mdash; Teknium (e/λ) (@Teknium1) <a href="https://twitter.com/Teknium1/status/1720188958154625296?ref_src=twsrc%5Etfw">November 2, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Teknium1/status/1720188958154625296', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_17mfjsh
/r/LocalLLaMA/comments/17mfjsh/open_hermes_25_released_improvements_in_almost/
false
false
https://a.thumbs.redditm…3TTbE0zBEgD8.jpg
136
{'enabled': False, 'images': [{'id': 'rOMpYYFqcGF0guD-GsZUCbHZBTfTqhIWo5WgyrZv0Pg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/826gRou3atFrpTcdrqvjzfZHmGoPAvv2nkU9nBU0lbE.jpg?width=108&crop=smart&auto=webp&s=5124eccd0280e0e9f8dd413b567340c1b12e9498', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/826gRou3atFrpTcdrqvjzfZHmGoPAvv2nkU9nBU0lbE.jpg?auto=webp&s=1ab0fb2a9c5516f46a3d82296bf9ccded0d8b750', 'width': 140}, 'variants': {}}]}
Frankenstein 5 X A100 TR Pro 5995wx 512GB
2
SHE LIVES!!!!!! 5 x A100 SXM4 40GB, Threadripper Pro 5995WX, 512GB DDR4-3200. Now I've just got to work out how to keep the spicy little buggers warm. 400W per SXM apparently equates to a shit tonne of heat, which is why I'm guessing they put a limit on the card version. 200GB of VRAM ready to go. Not going to test properly until cooling is in place and sorted. Each of these should then hook up at full PCIe 4.0 x16 externally via dual SFF-8654 8i connections to an x16 breakout. Thinking of sticking each finished unit in a Fractal Design Terra SFF case to house the fans, PSU, plus relevant interconnects; that means I can have a selection of colours. Any thoughts or suggestions on alternative external enclosure options?
2023-11-02T21:41:28
https://v.redd.it/hahbzdz580yb1
throw-away_thought
v.redd.it
1970-01-01T00:00:00
0
{}
17mem8h
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hahbzdz580yb1/DASHPlaylist.mpd?a=1701553305%2CY2VlNzAyNTg0OWYxMWRhOGM0NjAxMzY3YjA5MDA1NTU2MjZmNzFiZGZmYjM3YWU0ZGNkYmM2NjIyNWYyNTMxZA%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/hahbzdz580yb1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/hahbzdz580yb1/HLSPlaylist.m3u8?a=1701553305%2CMGRjNDkwMDdjOTdhZDgxZjM4ODZmMDI1ZjFkNWIzNDU3ODcxZWE4ZWU1MzdlNGMxMzBkMjQzZDRhYjA1Yjg1Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hahbzdz580yb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_17mem8h
/r/LocalLLaMA/comments/17mem8h/frankenstein_5_x_a100_tr_pro_5995wx_512gb/
false
false
https://external-preview…41b9daea64c4c01d
2
{'enabled': False, 'images': [{'id': 'MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?width=108&crop=smart&format=pjpg&auto=webp&s=894797ea4c18ebd5b768866d6e208613518ab29d', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?width=216&crop=smart&format=pjpg&auto=webp&s=47228fef5bee5eed477c9ce556e7b8c8e6ebebd7', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?width=320&crop=smart&format=pjpg&auto=webp&s=e5d03b721c5fca570ce59372f2725270442adaee', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?width=640&crop=smart&format=pjpg&auto=webp&s=fa7550d506677d2012fd4b76c4e267800ad4b29d', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?width=960&crop=smart&format=pjpg&auto=webp&s=0deb1ac180949ddc2044ba74a2e05bbaeb56c9a6', 'width': 960}, {'height': 1919, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3d63cbb9023916592171b39a2b239345e42dece1', 'width': 1080}], 'source': {'height': 1948, 'url': 'https://external-preview.redd.it/MjU2cnoydTU4MHliMfuMEI99FZ6pdMIHR3f2vfbNMTIz6_D41kOJsX3KNAcE.png?format=pjpg&auto=webp&s=86c5385329c6aac131ddbb8b9d03cbea8654bfa5', 'width': 1096}, 'variants': {}}]}
Missing 'Interface' mode in Text Generation Web UI?
1
Hi, I seem to be missing the 'Interface' tab from my install. Is there a trigger to get it to show? Everything otherwise works fine, but I can't figure out how to load extensions through the UI, which I'm assuming is (or would be) inside the Interface tab. Here's an example of what I'm missing, per the first screenshot: [GitHub - FartyPants/FPreloader: Hard Reload oobabooga text WebUI extensions](https://github.com/FartyPants/FPreloader) Thanks in advance!
2023-11-02T21:32:52
https://www.reddit.com/r/LocalLLaMA/comments/17mef83/missing_interface_mode_in_text_generation_web_ui/
maffyew360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mef83
false
null
t3_17mef83
/r/LocalLLaMA/comments/17mef83/missing_interface_mode_in_text_generation_web_ui/
false
false
self
1
null
Starfield Vasco Piper TTS Model
6
Here's my Piper model trained on Vasco voice lines: [https://huggingface.co/poisson-fish/piper-vasco](https://huggingface.co/poisson-fish/piper-vasco) Look out for a Piper TTS extension for ooba in the coming days.
2023-11-02T21:24:15
https://www.reddit.com/r/LocalLLaMA/comments/17me83c/starfield_vasco_piper_tts_model/
poisson-fish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17me83c
false
null
t3_17me83c
/r/LocalLLaMA/comments/17me83c/starfield_vasco_piper_tts_model/
false
false
self
6
{'enabled': False, 'images': [{'id': '8WnkVL8dwR0ksZAkMgSK1_EGLYNBak71C96BpWhZB9w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?width=108&crop=smart&auto=webp&s=6c1fee7183dcd5f58b0b2fdac879fc05235e7bc2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?width=216&crop=smart&auto=webp&s=eaaa37ae98e7a28f313a69d4a72e1e79a9a53f35', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?width=320&crop=smart&auto=webp&s=aff72407b47c6e5a215faefec875cd7d4e0dc97b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?width=640&crop=smart&auto=webp&s=39f065f904e4ba6dd00b9ef911c26718315affcd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?width=960&crop=smart&auto=webp&s=b68cebbf070baded6f5f0af1facce391b7f5411a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?width=1080&crop=smart&auto=webp&s=04ea56987ac028e63d52026db983f1a263f2df7a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7F7GJ2JIRkgf54Sm7ixuuspqxU6q5UO2PIeSgdgyWlo.jpg?auto=webp&s=32b9bebd1a1d6e25fd3b414e5abe5649d4c8a257', 'width': 1200}, 'variants': {}}]}
What’s worked for you when testing LLMs? $50 gift card for your inputs
1
Hey folks — I’m working on a platform that allows you to set up meaningful, automated tests and experiment tracking for your LLMs in just a few minutes (our tests go well beyond just the usual latency, token usage & cost stuff). If you’ve tried shipping LLMs and haven’t run into any issues with performance or trustworthiness, leave a comment on what your use-case is / how you did it! If not, here’s a free sign-up link to our app: [https://app.openlayer.com](https://app.openlayer.com/?ref=r-artificial). P.S. Will send a $50 amazon gift card your way if you’re interested in giving me additional feedback afterwards (30 min call). Just send me an email at [gabriel@openlayer.com](mailto:gabriel@openlayer.com).
2023-11-02T20:02:54
https://www.reddit.com/r/LocalLLaMA/comments/17mcaz8/whats_worked_for_you_when_testing_llms_50_gift/
byebaybay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mcaz8
false
null
t3_17mcaz8
/r/LocalLLaMA/comments/17mcaz8/whats_worked_for_you_when_testing_llms_50_gift/
false
false
self
1
null
Seeking AI Options for Summarizing and Rewriting My 42k Token DnD Campaign in Czech
5
I am playing DnD with friends in a chat and wanted to experiment a little with AI: get some summarizations and predictions for our campaign, and maybe have it rewritten as a story! Who knows! But when I finally got our whole campaign into text, I realized that it's almost 42k tokens and I don't know what models would fit that. GPT-4 is close, but the price is just ridiculous at this context length, and we are past 32k anyway. Claude should have 100k tokens, but that site seems not to work inside the EU for whatever reason... Are there any other options I could try? An important note: we are playing in Czech, so the model needs to be able to understand Czech.

Thanks for your suggestions.
2023-11-02T19:52:17
https://www.reddit.com/r/LocalLLaMA/comments/17mc2c8/seeking_ai_options_for_summarizing_and_rewriting/
DrRicisMcKay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mc2c8
false
null
t3_17mc2c8
/r/LocalLLaMA/comments/17mc2c8/seeking_ai_options_for_summarizing_and_rewriting/
false
false
self
5
null
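A local fallback for the question above is map-reduce summarization, so no single call has to hold all 42k tokens; a sketch with llama-cpp-python, where the GGUF file, chunk sizes, and the Czech prompt are all placeholder assumptions (Czech quality varies a lot by model, so test before trusting it):

```python
# Sketch: "map-reduce" summarization so a 42k-token log fits a small context.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=4096)  # hypothetical local GGUF

def summarize(text: str) -> str:
    prompt = ("Shrň následující část D&D kampaně v několika větách:\n\n"
              + text + "\n\nShrnutí:")
    return llm(prompt, max_tokens=300)["choices"][0]["text"].strip()

with open("campaign.txt", encoding="utf-8") as f:
    log = f.read()

# Map: summarize overlapping chunks small enough for the context window
# (characters used as a rough proxy for tokens).
chunk, overlap = 6000, 500
parts = [log[i:i + chunk] for i in range(0, len(log), chunk - overlap)]
partials = [summarize(p) for p in parts]

# Reduce: summarize the concatenated partial summaries into one narrative.
print(summarize("\n".join(partials)))
```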
Microsoft's Phi 1.5 (1.7 B parameter) is now a multimodal model
24
[https://www.semafor.com/article/11/01/2023/microsoft-pushes-the-boundaries-of-small-ai-models](https://www.semafor.com/article/11/01/2023/microsoft-pushes-the-boundaries-of-small-ai-models) I am interested in how well this will really work, but it would open up the possibility of running image recognition on a used cell phone from the past couple of years.
2023-11-02T19:31:38
https://www.reddit.com/r/LocalLLaMA/comments/17mble6/microsofts_phi_15_17_b_parameter_is_now_a/
Ok-Recognition-3177
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mble6
false
null
t3_17mble6
/r/LocalLLaMA/comments/17mble6/microsofts_phi_15_17_b_parameter_is_now_a/
false
false
self
24
{'enabled': False, 'images': [{'id': 'LmaZp6hYTQPgJ-GY_PHx--dANG0UYN1UrjrwLaCoDsc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?width=108&crop=smart&auto=webp&s=83bd555978a017920159a2116adb242d2be89af5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?width=216&crop=smart&auto=webp&s=a9b6f20d9ce96f2ab43812c3f0b3c9216c8c0908', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?width=320&crop=smart&auto=webp&s=43c16bbef6b15a406be8313f35d71327e797d4fd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?width=640&crop=smart&auto=webp&s=a2457c46796abece8e24846a3f597464fa573977', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?width=960&crop=smart&auto=webp&s=247d6c48ce0be0e30d9a57f750a19cb55b4f1c98', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?width=1080&crop=smart&auto=webp&s=12167b5404e71fb2ba1bd42d9b17c4a60b5e0b6a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/NMNVnM525e1gkv3P7gchp_BhA_0ctyndMRfnLTNXV4k.jpg?auto=webp&s=416a11cbc6915184b6a947b34af6c057b600cf8c', 'width': 1200}, 'variants': {}}]}
GGUF vs GPTQ vs AWQ
1
Hello everyone, I want to develop a chatbot hosted on Replicate. I was wondering which quantization method you would suggest. Which one gives better answers (if there is actually a difference)? I can't find real answers on this online; I always find comparisons of performance, which is not what I'm interested in. I have no problem renting higher-end GPUs, so if I have to choose between cheaper/less accurate and more expensive/more accurate, I will go with the second one. Thank you
2023-11-02T19:28:40
https://www.reddit.com/r/LocalLLaMA/comments/17mbiyu/gguf_vs_gptq_vs_awq/
Sapessiii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mbiyu
false
null
t3_17mbiyu
/r/LocalLLaMA/comments/17mbiyu/gguf_vs_gptq_vs_awq/
false
false
self
1
null
Most straight-forward repo/library for full-fine-tuning
4
I want to create a proof of concept of full fine-tuning an LLM like Mistral for my colleagues, who don't have experience in AI/LLMs. I looked at several repos, like autotrain-advanced from Hugging Face ([https://github.com/huggingface/autotrain-advanced](https://github.com/huggingface/autotrain-advanced)), which has zero documentation; LLM Studio ([https://github.com/h2oai/h2o-llmstudio](https://github.com/h2oai/h2o-llmstudio)), which doesn't seem to work with Runpod.io because of some websocket request-origin issues; and axolotl ([https://github.com/OpenAccess-AI-Collective/axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)), which has a VERY detailed configuration file (more hidden sensible defaults would be better for my colleagues). Which library/repo would you recommend that can easily run on any GPU VM with a simple command, a low number of requirements/installation steps, and maybe a small tutorial (a 10-minute video, not someone talking for an hour)?
2023-11-02T19:22:26
https://www.reddit.com/r/LocalLLaMA/comments/17mbdyg/most_straightforward_repolibrary_for/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mbdyg
false
null
t3_17mbdyg
/r/LocalLLaMA/comments/17mbdyg/most_straightforward_repolibrary_for/
false
false
self
4
{'enabled': False, 'images': [{'id': 'n5vY6eIqyL3omIWHBJXYozAKs1Objz04ePJ1E18m858', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?width=108&crop=smart&auto=webp&s=48c3f1ff8484f14bcb6b650df51bf12a38ba6fe6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?width=216&crop=smart&auto=webp&s=c1dea2986fbca311171d673f0e1085fd7d2fcdf5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?width=320&crop=smart&auto=webp&s=7346c4962ff0d10004132e1ba86460f37b780002', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?width=640&crop=smart&auto=webp&s=cb219115e08df070f1ae8f18d1783ec4ab8c0cf8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?width=960&crop=smart&auto=webp&s=053078cd683af43b6921b712a810d5dc7d589a70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?width=1080&crop=smart&auto=webp&s=cc6a3aea3217da533f4895cc8d2bedb5aa04676a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FaJPU5rEXcCacxFDEFA80YCakEF1Kx9uJyzkIj8wjVQ.jpg?auto=webp&s=9631bfe64db6b88adef443279f27e81397a01ca0', 'width': 1200}, 'variants': {}}]}
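As a baseline answer to the question above, a minimal full fine-tune can be done with plain Hugging Face Trainer before reaching for axolotl or LLM Studio; a sketch, where the dataset file and hyperparameters are placeholders, and note a 7B model in bf16 realistically needs multiple large GPUs plus FSDP/DeepSpeed:

```python
# Minimal full fine-tune with plain Hugging Face Trainer (no framework).
# "train.txt" and all hyperparameters are placeholders for a PoC.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

ds = load_dataset("text", data_files="train.txt")["train"]  # placeholder data
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1,
                           learning_rate=2e-5, bf16=True,  # bf16 needs Ampere+
                           logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```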
128k context mistral
1
2023-11-02T19:15:48
https://x.com/theemozilla/status/1720107186850877662?t=540ADbDvgBzYTY2MTRVGWA&s=34
Formal_Drop526
x.com
1970-01-01T00:00:00
0
{}
17mb8qe
false
null
t3_17mb8qe
/r/LocalLLaMA/comments/17mb8qe/128k_context_mistral/
false
false
https://b.thumbs.redditm…5wC-mbi_C9Jw.jpg
1
{'enabled': False, 'images': [{'id': 'YqqHNUmG5mxzPSjsKYPh3I5aerz0EwxYlqQjg6og6DM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/efFsvfONRsC-cDqWPAKDZnHdFpqpMZnzSNVsaHN0U1w.jpg?width=108&crop=smart&auto=webp&s=4819d07e3ca19493b7c5c8af6369721cf0183c48', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/efFsvfONRsC-cDqWPAKDZnHdFpqpMZnzSNVsaHN0U1w.jpg?width=216&crop=smart&auto=webp&s=2a510ba316faabab7f1cfa7559b3d51dafb426b2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/efFsvfONRsC-cDqWPAKDZnHdFpqpMZnzSNVsaHN0U1w.jpg?width=320&crop=smart&auto=webp&s=72e0033b2fd7da7973928d9b95c76979031980d4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/efFsvfONRsC-cDqWPAKDZnHdFpqpMZnzSNVsaHN0U1w.jpg?width=640&crop=smart&auto=webp&s=4d9e0e11409b6702f6d8658f26fc4b2d6208cbc2', 'width': 640}], 'source': {'height': 441, 'url': 'https://external-preview.redd.it/efFsvfONRsC-cDqWPAKDZnHdFpqpMZnzSNVsaHN0U1w.jpg?auto=webp&s=0495e2dcc0d8ad57dda6613a1d37b8d40b88a1e2', 'width': 839}, 'variants': {}}]}
What am i doing wrong? LLM doesn't seem to remember last response.
1
Hello, I want to know how to get the LLM to be a bit more interactive. This isn't the only example, but it seems to perfectly encompass the issues I am having when using AI.

"""""
ME: write a short story involving a dog breeder and a barista

AI: In the verdant hills of the Pacific Northwest, where the air is crisp with pine and the sun shimmers on the surface of pristine lakes, lived a man named Walter. He was not your ordinary fellow; rather he was known in town as "Walter the Dog Breeder". His days were spent nurturing his pack of Labradors, Golden Retrievers, and other assorted canines that roamed freely across his sprawling estate. Each morning began with the symphony of barks echoing through the valley, followed by hours dedicated to grooming, feeding, training, and loving these creatures who had become more than just animals under his care - they were family.

Meanwhile, at the heart of this quaint little village stood 'The Bean & Brew', a cozy café run by a young woman named Eliza. Her establishment was famed for its exquisite coffee blends, homemade pastries, and warm ambiance. It served as the meeting point for locals and tourists alike, offering solace from their daily grind or providing respite during leisurely travels. Among her regular customers was none other than our protagonist, Walter.

One particularly chilly autumn afternoon, after dropping off his dogs at the local kennel for their weekly baths, Walter ambled into 'The Bean & Brew'. As usual, Eliza greeted him with her signature smile before handing over his usual order - a large Americano accompanied by a blueberry muffin. They engaged in casual banter about the weather and the latest happenings around town while she prepared his drink.

Unbeknownst to either party involved, it would be this seemingly innocuous interaction that sparked something deeper within them both. Over time, their conversations grew longer, touching upon topics ranging from books they loved to dreams yet unfulfilled. Their bond strengthened until one day when Eliza confided in Walter about her desire to open another branch of her beloved cafe elsewhere but lacked funds due to recent financial struggles. Moved by her passion and determination, Walter offered to lend her money without any interest whatsoever. This act of kindness touched Eliza profoundly, leading to an even closer friendship between them.

Soon enough, every waking moment seemed incomplete unless shared together. Whether strolling along serene trails surrounded by nature's splendor or engrossed in deep discussions late into the night, each found solace and joy in the company of the other. It wasn't long before whispers spread throughout the village about their budding romance. However, neither felt compelled to address such speculations publicly, preferring instead to let actions speak louder than words.

As months turned into years, Walter watched proudly as Eliza successfully established her second outlet of 'The Bean & Brew' in a neighboring city. Meanwhile, his own business flourished under his steadfast dedication and unwavering love for his furry friends. Their lives intertwined seamlessly, much like the branches of two trees growing side-by-side, drawing nutrients from the same soil yet standing independently strong.

And so here ends our tale of the unlikely duo - the dog breeder and the barista. Through chance encounters and genuine connection, they carved out a life filled with purpose, companionship, and above all else, mutual respect. Proving once again that sometimes, the most unexpected pairings yield the sweetest stories.

ME: rewrite the 3rd paragraph

AI: What is the topic of discussion?

ME: the short story

AI: What is your name?
"""""

This is happening with any prompt I use: it will reply or answer to the best of its ability, but any follow-up or interaction leaves me stumped. Of course, if I explicitly copy and paste the paragraph in question it will do what is asked, but ChatGPT seems to have this ability already. Do I have a setting messed up, or am I using this incorrectly?

Using TheBloke_Airoboros-L2-70B-3.1.2-GPTQ. I tried chat, instruct, and chat-instruct modes, with the same or similarly irrelevant responses.

Max sequence length 4096, max new tokens 4096. The story reply was "Output generated in 61.49 seconds (12.46 tokens/s, 766 tokens, context 53", so shouldn't it easily fall within context and be able to accurately fulfill my prompt? Even if I task it with separating the story into chapters and then ask it to rewrite the chapters, it fails to follow through and goes off track.
2023-11-02T19:03:13
https://www.reddit.com/r/LocalLLaMA/comments/17mayho/what_am_i_doing_wrong_llm_doesnt_seem_to_remember/
DominicanGreg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17mayho
false
null
t3_17mayho
/r/LocalLLaMA/comments/17mayho/what_am_i_doing_wrong_llm_doesnt_seem_to_remember/
false
false
self
1
null
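The symptom above is usually not a model problem: a completion backend is stateless, so the client must resend the full transcript every turn. A minimal sketch of the idea in plain Python, with a generic USER/ASSISTANT template that should be swapped for the model's own chat format (Airoboros has its own):

```python
# Minimal sketch of client-side history: every request must contain the
# whole transcript, or follow-ups like "rewrite the 3rd paragraph" have
# nothing to refer to.
history = []

def build_prompt(user_msg: str) -> str:
    history.append(("USER", user_msg))
    turns = [f"{role}: {text}" for role, text in history]
    return "\n".join(turns) + "\nASSISTANT:"

def record_reply(reply: str) -> None:
    history.append(("ASSISTANT", reply))

# Turn 1: the prompt contains only the first request.
p1 = build_prompt("Write a short story involving a dog breeder and a barista.")
record_reply("<the generated story goes here>")  # placeholder for model output

# Turn 2: the follow-up only works because the story is physically
# present in the prompt we send again.
p2 = build_prompt("Rewrite the 3rd paragraph.")
print(p2)
```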
We could be getting an open source GPT-3 on dev day. (175B - c|100k tokenizer model).
136
Jimmy Apples (the account behind [shedding light on the Arrakis model](https://x.com/apples_jimmy/status/1714442558875787560?), and I believe the first to point out GPT-4 is a MoE model, though that may have been George Hotz) just made a post suggesting we could be getting an open-source GPT-3 model (the original 175B version with the c|100k tokenizer, **not** 3.5-turbo). I personally would love to see this and believe it'd be extremely promising, especially once it's been distilled (if need be) and fine-tuned on high-quality data. I've seen some make the argument ["why would we want that when we have llama 2, mistral and falcon which are far better"](https://x.com/andersonbcdefg/status/1719978261848133839?), however this doesn't take into account the point above: judge it once the wizards in the open-source space find the best way to optimise it. Interested to hear others' thoughts. I personally believe it'd quickly trump every other open-source model currently available, based on the facts that a) 3.5 was distilled from this original model, and b) OpenAI used high-quality data, which proves to be one of, if not the, primary factor in how good a model's final performance is.
2023-11-02T18:55:48
https://x.com/apples_jimmy/status/1719925579049541760?
sardoa11
x.com
1970-01-01T00:00:00
0
{}
17mascq
false
null
t3_17mascq
/r/LocalLLaMA/comments/17mascq/we_could_be_getting_an_open_source_gpt3_on_dev/
false
false
https://b.thumbs.redditm…f6JKWdhNj9gs.jpg
136
{'enabled': False, 'images': [{'id': 'Kg3ssQI8sYBbpoq3GydpyIGZftHH-oTZRvlGHb3O_uQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/4OsbcY_SElgM1Mt3rbcCyWycnT6X-ybF4s_PTZ6PrRM.jpg?width=108&crop=smart&auto=webp&s=198dcbeddea63679ed063112acde01fe04bc9c5f', 'width': 108}], 'source': {'height': 159, 'url': 'https://external-preview.redd.it/4OsbcY_SElgM1Mt3rbcCyWycnT6X-ybF4s_PTZ6PrRM.jpg?auto=webp&s=731c2afd6344169534b44710dd21a2d28d73d8c2', 'width': 159}, 'variants': {}}]}
UI choice for Llama2 model
8
I'm planning to implement Llama for one of my research projects and would like to know what kind of UI implementation the community is adopting. I have already seen Hugging Face and text-generation-webui, and some have built their own UI. How did you decide which one to use, and did you change your approach halfway through the implementation? If so, why? Please feel free to share your experience here
2023-11-02T18:17:23
https://www.reddit.com/r/LocalLLaMA/comments/17m9wma/ui_choice_for_llama2_model/
leaderof13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m9wma
false
null
t3_17m9wma
/r/LocalLLaMA/comments/17m9wma/ui_choice_for_llama2_model/
false
false
self
8
null
Thoughts on the best model architectures so far?
1
What are some interesting model architectures you think would bring cumulative gains in performance and speed when combined?

- Medusa: ~2x inference speed
- RWKV/RetNet: no quadratic memory increase with context
- BitNet: maximum knowledge per size, no further quantization

Any other cool optimizations to improve models?
2023-11-02T17:54:57
https://www.reddit.com/r/LocalLLaMA/comments/17m9e00/thoughts_on_the_best_model_architectures_so_far/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m9e00
false
null
t3_17m9e00
/r/LocalLLaMA/comments/17m9e00/thoughts_on_the_best_model_architectures_so_far/
false
false
self
1
null
Well now its just getting silly! Open Chat 3.5 is out and its taken a bite out of goliath himself!
233
We at Alignment Lab AI (http://AlignmentLab.AI) are happy to announce another SOTA model! A little under a year since OpenAI released ChatGPT, and just a few weeks from its birthday, the model receives a near-fatal blow!

u/imonenext (Guan Wang & Sijie Cheng) have been developing a technique called C-RLFT (https://arxiv.org/pdf/2309.11235.pdf), which is free to use in the open-chat repository (https://github.com/imoneoi/openchat), with the model available here: https://huggingface.co/openchat/openchat_3.5. They have been iterating on the original ShareGPT dataset and more as they've continued to evolve and enrich it over time; by now it is largely hand-curated, built out by the enormous effort of a lot of dedicated hours from some familiar faces like @Teknium1, @ldjconfirmed, and @AlpinDale (as well as myself)! Feel free to join the server for spoilers, sneak peeks, or if you have cool ideas!

Don't get tripped up: it's not the same repository as I usually post, but this model is fundamentally different from Orca. OpenChat is by nature a conversationally focused model, optimized to provide a very high-quality user experience in addition to performing extremely powerfully on reasoning benchmarks.

Also, a shoutout to two other major announcements that just dropped: u/theemozilla just announced YaRN Mistral 128k, which is now natively supported in llama.cpp thanks to (no doubt u/NousResearch as well as) u/ggerganov (we should totally merge our models).

Right on the heels of u/thursdai_pod, we're unveiling OpenChat 3.5! https://huggingface.co/openchat/openchat_3.5 u/TheBlokeAI is working on some quants as we speak that should be available within a day or so!

Rumors suggest ChatGPT might be 20B, but guess what? OpenChat 3.5 delivers comparable performance at just a third of the size! 📊 The open-source community isn't just catching up; we're leading the charge in alignment and explainability research. A stark contrast to some organizations that keep these crucial insights under wraps.

And don't worry, Open Orca isn't quite done either! More to come on that front (heck, we still haven't used more than 20% of the full dataset!). Stay tuned for more details in this thread...🧵 especially if you're curious about how much further ahead open source is against the rest of the industry in terms of safety and explainability.
2023-11-02T17:50:35
https://www.reddit.com/r/LocalLLaMA/comments/17m9a9v/well_now_its_just_getting_silly_open_chat_35_is/
Alignment-Lab-AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m9a9v
false
null
t3_17m9a9v
/r/LocalLLaMA/comments/17m9a9v/well_now_its_just_getting_silly_open_chat_35_is/
false
false
self
233
null
LLM vs Humans
8
After using multiple LLMs and testing multiple fine-tuned models, one thing came to my mind: fine-tuning is comparable to humans reading a textbook on a new topic/subject again and again until we understand it perfectly. In the LLM space, fine-tuning adjusts the already-present weights to fit the data just the right amount, and in theory it might lose some information along the way. Humans do the same thing, as far as I know: we read a topic, and as the days pass the details get lost if we don't revise the same information; if we revise it or get constantly exposed to it, it stays fresh, as if it were real-time fine-tuning.

If we assume that what humans learn day by day is analogous to fine-tuning LLMs, then we are doing many things wrong in the LLM world:

1. Humans cannot be masters of everything (I believe it's a fact); likewise we cannot expect one single LLM to accomplish all goals.
2. Humans constantly need their memory of a piece of information refreshed until it almost becomes permanent, but that doesn't work here, because LLMs are frozen during inference; they are not dynamic like humans.
3. Even humans hallucinate a lot; I mean, we tend to project information we think we know well when in fact it's false. LLMs often do this too.
4. Humans cannot keep everything in memory. Consider this: a textbook of medicine consists of roughly 3000 pages, and does anyone expect the doctor who treats them to remember exactly what's written on a particular page? Obviously not. LLMs are no exception; I mean, even GPT-4, even if we train on the whole word space/sentence space, no trillion/quintillion-parameter model can do this (I hope). But we already have a solution: just use RAG.

From this intuition I got an idea: what if we take one of the best 7B general models and fine-tune it for special purposes, e.g. one model for physics, a second for chemistry, a third for medicine/biology, a fourth for mathematics and logic, a fifth for history and social topics, a sixth for economics, and so on; then another set of models fine-tuned on the intersections of these subjects, like another 7B for physics + mathematics, another for physics + chemistry, and so on; and finally link all the LLMs' outputs to a master LLM of, let's assume, 13B, which is just a base model. This master model has the ability to ask questions of the slave models and accept their replies.

Slave models:

- 7B physics only
- 7B biology only
- 7B chemistry only
- 7B mathematics only
- 7B economics only
- 7B history and social topics
- 7B physics + biology
- 7B physics + mathematics
- and so on...

Master model: 13B base model.

E.g. "What is the differential of sine?" The master model takes the question, passes it to the mathematical slave model plus the other relevant mathematical slave models, which return their replies to the master model, and finally it outputs the answer. (I know this example question seems too simple for a single LLM to answer, but just think: what if a question requires insight from multiple fields of science that must finally be combined to reach a good response?)

Has anyone implemented this method? I think GPT-4 under the hood is doing something similar. Please share your thoughts.
2023-11-02T17:43:07
https://www.reddit.com/r/LocalLLaMA/comments/17m94ft/llm_vs_humans/
ThenNeighborhood3723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m94ft
false
null
t3_17m94ft
/r/LocalLLaMA/comments/17m94ft/llm_vs_humans/
false
false
self
8
null
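A sketch of the master/specialist routing idea from the post above, assuming every model is served behind a local OpenAI-compatible endpoint; the URL and all model names ("master-13b", "specialist-7b-*") are hypothetical placeholders:

```python
# Master model classifies the question into domains, the matching
# specialist models answer, and the master fuses the drafts.
import requests

URL = "http://localhost:5000/v1/chat/completions"
DOMAINS = ["physics", "biology", "chemistry", "mathematics", "economics", "history"]

def generate(model: str, prompt: str) -> str:
    r = requests.post(URL, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return r.json()["choices"][0]["message"]["content"]

def route(question: str) -> list:
    # Master model picks which specialist domains the question needs.
    reply = generate("master-13b",
                     f"Which of these domains does the question need: {DOMAINS}? "
                     f"Answer with a comma-separated subset.\nQuestion: {question}")
    return [d for d in DOMAINS if d in reply.lower()] or ["mathematics"]

def answer(question: str) -> str:
    # Fan out to specialists, then let the master fuse their drafts.
    drafts = [generate(f"specialist-7b-{d}", question) for d in route(question)]
    return generate("master-13b",
                    "Combine these specialist drafts into one final answer.\n"
                    + "\n---\n".join(drafts) + f"\nQuestion: {question}")

print(answer("What is the derivative of sine?"))
```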
Why do we always download fully merged base+lora and not just the lora?
11
Typically I download many GBs from Hugging Face all the time, even though many of the models are just LoRAs, right? We could apply the deltas locally to the base model and save some bandwidth and disk space. Or is there something I'm not seeing?
2023-11-02T17:22:42
https://www.reddit.com/r/LocalLLaMA/comments/17m8ock/why_do_we_always_download_fully_merged_baselora/
franzdegoil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m8ock
false
null
t3_17m8ock
/r/LocalLLaMA/comments/17m8ock/why_do_we_always_download_fully_merged_baselora/
false
false
self
11
null
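Applying an adapter locally is exactly what peft supports when a repo ships the adapter weights rather than a merged checkpoint; a sketch with placeholder repo ids:

```python
# Download the big base once, then only small adapters per fine-tune.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"     # large download, done once
adapter_id = "someuser/my-lora-adapter"  # hypothetical small adapter repo

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Optional: bake the deltas in so inference needs no peft at runtime.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```

The practical catch is that many uploads only publish merged (often quantized) weights in the first place, in which case there is no small adapter available to download.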
NousResearch announces Yarn-Mistral-7b-128k
134
NousResearch, the developer of the NousHermes series of LLMs, just announced Yarn-Mistral-7b-128k, a Mistral base that has been adapted to handle very long context.

HF: [https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) Twitter announcement: [https://twitter.com/theemozilla/status/1720107186850877662](https://twitter.com/theemozilla/status/1720107186850877662) Paper (YaRN v2): [https://arxiv.org/abs/2309.00071](https://arxiv.org/abs/2309.00071)
2023-11-02T17:22:20
https://www.reddit.com/r/LocalLLaMA/comments/17m8o26/nousresearch_announces_yarnmistral7b128k/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m8o26
false
null
t3_17m8o26
/r/LocalLLaMA/comments/17m8o26/nousresearch_announces_yarnmistral7b128k/
false
false
self
134
{'enabled': False, 'images': [{'id': 'b7ArdhbHNax_ZjyDPKG-BmrxgoSpa8zaHbq-Epra5MQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?width=108&crop=smart&auto=webp&s=9538f51a4547473cb0dd5c38f31fc2f8b69fc588', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?width=216&crop=smart&auto=webp&s=d72a1c928bb905b30f9c239330d573160889725e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?width=320&crop=smart&auto=webp&s=f73a47426223b988b864eb6e1353e4261e22ba2a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?width=640&crop=smart&auto=webp&s=ef845684d19b25f80dd873807b97d7c6a9a57b48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?width=960&crop=smart&auto=webp&s=259c89a14f685d741246f5935695f7ebdcb8de8f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?width=1080&crop=smart&auto=webp&s=3b199bb6e3baa456d2809be3c107468c8b81febb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LLyQJZ0NekCSBKu0pll8UYpoTuSMa7Bmc0B-N620kSg.jpg?auto=webp&s=5842792484fed659c10159a01dbe3265224759ad', 'width': 1200}, 'variants': {}}]}
CPU llm inference
1
I'm trying to build a custom PC for LLM inference and experiments, and I'm torn between AMD and Intel CPUs. I primarily plan to run the LLMs on a GPU, but I want the build to be robust enough that, in the worst case or for whatever other reason, I can fall back to running on the CPU. My problem with Intel is that its 13th- and 14th-gen chips mix high-performance and high-efficiency cores: do the efficiency cores perform well for this, or should I focus on AMD instead, which uses only a single core design? Has anyone in this subreddit tried running LLMs on these CPUs? Please share your experience.
2023-11-02T17:15:01
https://www.reddit.com/r/LocalLLaMA/comments/17m8i2f/cpu_llm_inference/
ThenNeighborhood3723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m8i2f
false
null
t3_17m8i2f
/r/LocalLLaMA/comments/17m8i2f/cpu_llm_inference/
false
false
self
1
null
AI in Natural sciences/Plant Taxonomy: Using AI to draft scientific review articles?
5
Hello all, I am working on a review article about a particular genus of plants. I have tracked down all of the relevant literature (about 30 or so articles, the earliest from the mid-20th century). I have read through them all, but it is so much information that I am overwhelmed and afraid of missing things. The chronology of the articles is of much importance (some species described in the genus have since been collapsed into synonyms, etc.). One of my sources is a rare book of about 100 pages which I have tracked down and scanned to PDF form.

The review article is not a general review of all the species within the genus; it concerns only the species occurring in Michigan, USA.

What AI tool/program would work for something like this? I only need it to pull information from the 30 or so articles; in fact, I explicitly don't want it to pull information from other databases or outside sources. I need to feed the AI all of my sources, then tell it to draft a review article covering each source's relevance to the species in Michigan and the main points I want it to address.

I am new to AI and don't know how to code, although I suspect that something like this is quite elementary anyway. I will, of course, only use the generated article as guidance for my real paper, and I will comply with all the necessary AI-use-disclosure requirements if I submit for publication.
2023-11-02T17:12:48
https://www.reddit.com/r/LocalLLaMA/comments/17m8g46/ai_in_natural_sciencesplant_taxonomy_using_ai_to/
dontpanicitsaromatic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m8g46
false
null
t3_17m8g46
/r/LocalLLaMA/comments/17m8g46/ai_in_natural_sciencesplant_taxonomy_using_ai_to/
false
false
self
5
null
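For the constraint above (use only your own 30 sources), a retrieval index built solely from those files guarantees nothing else can be quoted; a minimal sketch with sentence-transformers, where the folder layout and example question are placeholders:

```python
# Minimal retrieval over your own articles only -- the index contains
# nothing but your 30 sources, so no outside material can leak in.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Split each article (already extracted to .txt) into paragraph chunks,
# keeping the source file and position so chronology can be cited.
chunks, meta = [], []
for path in sorted(Path("articles_txt").glob("*.txt")):
    for i, para in enumerate(path.read_text(encoding="utf-8").split("\n\n")):
        if para.strip():
            chunks.append(para.strip())
            meta.append((path.name, i))

emb = embedder.encode(chunks, convert_to_tensor=True)

question = "Which species in this genus are recorded from Michigan?"
hits = util.semantic_search(embedder.encode(question, convert_to_tensor=True),
                            emb, top_k=5)[0]
for h in hits:
    name, pos = meta[h["corpus_id"]]
    print(f"{name}#para{pos} (score {h['score']:.2f}): "
          f"{chunks[h['corpus_id']][:120]}")
```

The retrieved chunks, with their file names attached for chronology, would then be pasted into the drafting prompt of whatever LLM writes the review.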
I am looking for a good online course on building LLM based apps. Can you suggest something?
9
I am looking for a course that will help me:

* get a better understanding of the complexity of building LLM-based apps
* get practical skills in building such apps
* (preferably) guide me through the full process of building a question-answering app using RAG

Do you have any experiences to share? Can you suggest something worth spending time on?
2023-11-02T17:09:20
https://www.reddit.com/r/LocalLLaMA/comments/17m8d1w/i_am_looking_for_a_good_online_course_on_building/
Electrical_Tailor186
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m8d1w
false
null
t3_17m8d1w
/r/LocalLLaMA/comments/17m8d1w/i_am_looking_for_a_good_online_course_on_building/
false
false
self
9
null
Is anyone else beginning to feel that many OpenAI staff don't understand LLMs as well as the open-source community?
1
[removed]
2023-11-02T16:53:43
https://www.reddit.com/r/LocalLLaMA/comments/17m804y/is_anyone_else_beginning_to_feel_that_many_openai/
threevox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m804y
false
null
t3_17m804y
/r/LocalLLaMA/comments/17m804y/is_anyone_else_beginning_to_feel_that_many_openai/
false
false
self
1
null
Is anyone running an RTX 3090 ZOTAC Trinity in a multi-GPU build?
3
Hi everyone, sorry for another post about GPU builds. I'm looking to buy a new PC and run 2-3 3090s in it. I found a second-hand PC that fits my budget, but it comes with a Zotac 3090. I've also found an RTX 3090 Founders Edition around the $770 mark. I have read that people are running multi-GPU setups without problems, with GPUs from different vendors, but to be honest I haven't heard of many (or any at all) using a Zotac 3090. I looked for reviews online, but all of them compare it to other GPUs on gaming metrics. So, I have a few questions:

1. I know LLMs mainly need VRAM, which every 3090 has 24GB of, but do memory speed or GPU clock speed matter for LLMs at all?
2. Basically, would I be better off looking for a different GPU vendor than Zotac, or can anyone here recommend it or share their experience running LLMs on it?

Thank you!
2023-11-02T16:45:25
https://www.reddit.com/r/LocalLLaMA/comments/17m7tff/is_anyone_running_an_rtx_3090_zotac_trinity_in_a/
jack-in-the-sack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m7tff
false
null
t3_17m7tff
/r/LocalLLaMA/comments/17m7tff/is_anyone_running_an_rtx_3090_zotac_trinity_in_a/
false
false
self
3
null
LLM Privacy Survey
2
2023-11-02T15:55:51
https://forms.gle/Mo2uiYJu1iX447UR9
LilFuqedUp
forms.gle
1970-01-01T00:00:00
0
{}
17m6o7q
false
null
t3_17m6o7q
/r/LocalLLaMA/comments/17m6o7q/llm_privacy_survey/
false
false
https://b.thumbs.redditm…-SPzqiUCyQJM.jpg
2
{'enabled': False, 'images': [{'id': '9rUFA_t1B7I_JtJeJOy_UE3BHU-HQTxpm6Bbn6VGp1w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?width=108&crop=smart&auto=webp&s=d3668be26c6075d5b685f2b4fbe985b8c08e3ba4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?width=216&crop=smart&auto=webp&s=d1ae7e02763eb2ced0a3c501b9322c4b5ae127af', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?width=320&crop=smart&auto=webp&s=566de99fe2ad5e3613d16274c9c98c34a0524bf0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?width=640&crop=smart&auto=webp&s=66479268e75cf61b1176b22f5247acd52a6011e8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?width=960&crop=smart&auto=webp&s=b9db695f0b2636cab6309a8f3c1454e61f4c53ae', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?width=1080&crop=smart&auto=webp&s=0e7126845ca83522058344cd3131a9a53ebac73a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/KzZK2t81cL6D0vex-_jfzYYOtv_jGg2nW2mF8-W5mRI.jpg?auto=webp&s=55af508591fe2bdb07d3b4fe734668e08f473e54', 'width': 1200}, 'variants': {}}]}
commercial multimodal model at 1024px or more?
2
My use case involves spatial reasoning to suss out situations, and a highly pretrained model that could explain its reasoning before making a decision would be handy. Fuyu-8B and LLaVA are not permissively licensed; is anything out yet that will work for my use case? I need resolutions higher than 1024px to ensure details are accounted for.
2023-11-02T15:45:19
https://www.reddit.com/r/LocalLLaMA/comments/17m6fkq/commercial_multimodal_model_at_1024px_or_more/
Dankmemexplorer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m6fkq
false
null
t3_17m6fkq
/r/LocalLLaMA/comments/17m6fkq/commercial_multimodal_model_at_1024px_or_more/
false
false
self
2
null
why are 'academic' papers on LLMs so dense and unreadable?
22
Considering these researchers use LLMs day in, day out, you would think they would use the tech to write simpler papers. It seems they write this way to appear more 'academic' and complex when their underlying ideas are simpler.
2023-11-02T15:35:35
https://www.reddit.com/r/LocalLLaMA/comments/17m67pj/why_are_academic_papers_on_llms_so_dense_and/
herozorro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m67pj
false
null
t3_17m67pj
/r/LocalLLaMA/comments/17m67pj/why_are_academic_papers_on_llms_so_dense_and/
false
false
self
22
null
Need advice on building LLM based tool to help chronic depressives
5
I want to build several LLM-based tools for people with severe chronic depression. I have some ideas for this; most of them don't require GPT-4-level reasoning skills, so an open-source LLM might do. A lot of psychotherapeutic techniques rely on relatively primitive natural-language processing, not on complex psychoanalytical reasoning. I welcome any constructive thoughts, ideas, and pointers toward relevant resources.

I am a severe chronic depressive myself and have, like many, many people, used ChatGPT-4 in a therapeutic manner. It is reasonable in some cases; the problem is that OpenAI is trying to strongly discourage this kind of use, for obvious reasons. It was possible to hack ChatGPT and it still is, but the hacks get more and more unreliable, and the hacks themselves seem to limit the LLM's full capabilities to a subset. More importantly, the sessions you get are randomly interrupted. You have no idea how disheartening that is for someone who can barely keep up the will to live, not to mention that you need to get into a state of relaxation while talking, not constantly fear when the censorship persona of the LLM will next burst in. So my idea was to use an open-source one.

Requirements. The LLM must:

* not have inbuilt censorship of 'sensitive topics'
* be reasonable at discussing topics in natural language (no coding and so on)
* be easy to install and set up
* preferably have been trained on scientific datasets

Just so you understand what level of communication to pitch at: I have no serious previous experience with LLMs and machine learning. I do have some understanding of how NNs work and keep myself updated on developments in general, but more at a surface level. I can code, and I have played around with various small NN projects myself.
2023-11-02T15:25:12
https://www.reddit.com/r/LocalLLaMA/comments/17m5zbc/need_advice_on_building_llm_based_tool_to_help/
RocketFanGirl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m5zbc
false
null
t3_17m5zbc
/r/LocalLLaMA/comments/17m5zbc/need_advice_on_building_llm_based_tool_to_help/
false
false
self
5
null
Best local model to analyze and chat with large pdfs?
1
[removed]
2023-11-02T15:21:53
https://www.reddit.com/r/LocalLLaMA/comments/17m5woe/best_local_model_to_analyze_and_chat_with_large/
derpgod123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m5woe
false
null
t3_17m5woe
/r/LocalLLaMA/comments/17m5woe/best_local_model_to_analyze_and_chat_with_large/
false
false
self
1
null
Best performing embeddings models for Spanish?
7
I've seen the leaderboards but I don't know about Spanish.
2023-11-02T15:10:56
https://www.reddit.com/r/LocalLLaMA/comments/17m5o6g/best_performing_embeddings_models_for_spanish/
Fuzzy-Research-2259
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m5o6g
false
null
t3_17m5o6g
/r/LocalLLaMA/comments/17m5o6g/best_performing_embeddings_models_for_spanish/
false
false
self
7
{'enabled': False, 'images': [{'id': 'CCVJBt0-kH9o-QPgo7qiP6d0ggaejrGSkWh3JVtDcDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=108&crop=smart&auto=webp&s=366efcfca18e3d6b38a0a171eb19fe76a433367e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=216&crop=smart&auto=webp&s=8408d240578e69e6f2b603cf1d6c9b27c3445f66', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=320&crop=smart&auto=webp&s=d0b8ca84da3c68963fa3cc1a1ce94c295ce04084', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=640&crop=smart&auto=webp&s=8a3a6f769729ddc017ff74c9b63d2dac5b730897', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=960&crop=smart&auto=webp&s=836d1072f0f2b9c2a0cdcf9cc266be51b9d70ced', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=1080&crop=smart&auto=webp&s=cb71417c02d625d5ab6d576ad03027a447b69091', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?auto=webp&s=5c3924a0b2dd29634ec1c24d5feff7c4343c259b', 'width': 1200}, 'variants': {}}]}
The Power of TypeChat + LLM AI models
8
2023-11-02T14:33:41
https://www.lemonfox.ai/tutorials/typechat-and-llm-ai-models
ingojoseph
lemonfox.ai
1970-01-01T00:00:00
0
{}
17m4um2
false
null
t3_17m4um2
/r/LocalLLaMA/comments/17m4um2/the_power_of_typechat_llm_ai_models/
false
false
https://a.thumbs.redditm…DyWHQShacCx0.jpg
8
{'enabled': False, 'images': [{'id': 'XGH2CqjqyA9nrfPbOAFaVNYF3SLgEiQ69EjLbm2V-Ys', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?width=108&crop=smart&auto=webp&s=c8765008e581a55f328d98d459dca06bf338eab2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?width=216&crop=smart&auto=webp&s=87735ad0bab8fd7a60d5ce93ded6265e80a31ad3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?width=320&crop=smart&auto=webp&s=b46b81fd0812ddf32fa3383efcba9b22eba15a09', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?width=640&crop=smart&auto=webp&s=7d50d52a0742ac089005dda43045115448f61145', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?width=960&crop=smart&auto=webp&s=944ce61cd63125cb0330923df9ca2e1f7edcfed2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?width=1080&crop=smart&auto=webp&s=d319c5be538800a30a7ba406462c17e8b740bef9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JSi8Kq9GdYV0SPmKGo1zjb9HrqQ0zavK9tRBJ-unCu8.jpg?auto=webp&s=0b98ac17d78edffa5a29d9db9fb6d2871da82508', 'width': 1200}, 'variants': {}}]}
Full Rank LoRa = Full Finetuning?
19
I was wondering: if I run a LoRA at full rank, is it the same as running a full fine-tune, or are there other hidden differences? And if so, how can I determine what the rank must be for a given model? The reason I ask is that LoRA fine-tuning looks pretty straightforward with text-generation-webui, but I haven't found any good, straightforward way to run a full fine-tune. Even 3-month-old tutorials sometimes look obsolete because of breaking changes.
2023-11-02T14:26:20
https://www.reddit.com/r/LocalLLaMA/comments/17m4omp/full_rank_lora_full_finetuning/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m4omp
false
null
t3_17m4omp
/r/LocalLLaMA/comments/17m4omp/full_rank_lora_full_finetuning/
false
false
self
19
null
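For intuition on the question above, the standard LoRA parameterization makes the expressivity argument explicit; a short note, using the usual notation from the LoRA paper:

```latex
% Standard LoRA update: the adapted weight is a frozen base plus a
% scaled low-rank product.
W' \;=\; W_0 \;+\; \frac{\alpha}{r}\, B A,
\qquad B \in \mathbb{R}^{d \times r},\quad A \in \mathbb{R}^{r \times k},
\qquad \operatorname{rank}(BA) \;\le\; \min(r,\, d,\, k).
```

At r = min(d, k) the product BA can represent any update to that matrix, so the expressive power matches a full fine-tune of the adapted matrices. It is still not literally the same run, though: B is initialized to zero, the alpha/r scaling changes effective learning rates, and any module without an adapter (embeddings, norms, non-targeted projections) stays frozen. So "full-rank LoRA on every matrix" approximates full fine-tuning in capacity, not necessarily in training dynamics.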
How to use LLaVa-v1.5-7b in a custom python code
7
Hello, I just installed LLaVA and used its CLI to successfully generate text from a picture. Now I want to do the same thing in my own Python code so that I can automate some things. Is that possible?
2023-11-02T14:18:31
https://www.reddit.com/r/LocalLLaMA/comments/17m4im9/how_to_use_llavav157b_in_a_custom_python_code/
Hazz_666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m4im9
false
null
t3_17m4im9
/r/LocalLLaMA/comments/17m4im9/how_to_use_llavav157b_in_a_custom_python_code/
false
false
self
7
null
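One low-effort way to automate the CLI that already works for you is to drive it with subprocess; a sketch, where the stdin handshake with the interactive loop is an assumption about the CLI's behavior, and the repo's own Python entry points are the cleaner long-term route:

```python
# Drive the LLaVA CLI (python -m llava.serve.cli, flags per the repo README)
# from Python. The one-question-then-EOF interaction is an assumption about
# the interactive loop; output will include the CLI's own prompt text.
import subprocess

def describe(image_path: str, question: str) -> str:
    proc = subprocess.run(
        ["python", "-m", "llava.serve.cli",
         "--model-path", "liuhaotian/llava-v1.5-7b",
         "--image-file", image_path],
        input=question + "\n",  # answer the CLI's prompt; EOF then ends it
        capture_output=True, text=True, timeout=600,
    )
    return proc.stdout

print(describe("photo.jpg", "What is in this picture?"))
```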
How does a negative prompt work in a text generator web ui?
4
When I use GPTQ models, a negative prompt and a guidance scale are available to me. How do they work? Why is a scale needed here when temperature exists? How exactly is a negative prompt interpreted in the case of an LLM? Nothing is clear, and it's also a shame that I can't use these functions when I use a GGUF model; the additional options simply disappear from the text-generation settings.

I've used this a bit and it seems to work in a similar way to SD, but not quite.
2023-11-02T14:08:12
https://www.reddit.com/r/LocalLLaMA/comments/17m4alu/how_does_a_negative_prompt_work_in_a_text/
Secret_Joke_2262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m4alu
false
null
t3_17m4alu
/r/LocalLLaMA/comments/17m4alu/how_does_a_negative_prompt_work_in_a_text/
false
false
self
4
null
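What the negative prompt implements is classifier-free guidance (CFG) on logits: two forward passes per token, one conditioned on the prompt and one on the negative prompt, contrasted at the chosen scale. That is why it is a separate knob from temperature (temperature rescales one distribution; CFG contrasts two). A toy sketch of the combination step, not the webui's exact internals:

```python
# Toy CFG combination per token: scale = 1 is plain sampling; scale > 1
# pushes the distribution away from what the negative prompt prefers.
import numpy as np

def cfg_logits(pos_logits: np.ndarray, neg_logits: np.ndarray, scale: float):
    return neg_logits + scale * (pos_logits - neg_logits)

pos = np.array([2.0, 1.0, 0.5])  # logits conditioned on the prompt
neg = np.array([2.5, 0.2, 0.1])  # logits conditioned on the negative prompt
print(cfg_logits(pos, neg, scale=1.5))
```

The second forward pass is also presumably why the options vanish for some loaders: if a backend cannot run the paired negative-prompt pass, the UI has nothing to expose for it.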
First OSON (Open Source Open Night) Winner Announcement!
15
Open Source Open Night was so much fun. Seriously, I loved reading all of your submissions and talking about your favorite AI's entirely unhinged humor. I'm gonna do this again on the weekends, starting Friday nights, and announcing the winner on Monday. The winner of the first Open Source Open Night, by round of applause (more thumbs), was u/candre23 with their LZLV 70B (and the right prompting strategy)!

According to candre: "I find that with most models, just asking for "a joke" will almost always generate a cheesy, older-than-dirt one-liner. Even if you ask for an original joke, they're rarely original. But if you ask it to write a *standup bit*, you can get some pretty great stuff. With a few more generations and a little editing, I could easily get a tight 5 minutes for a real open mic night. These were generated with LZLV 70b. [Proof.](https://imgur.com/a/oW643t7)

Joke 1

>Alright, folks, gather 'round, because I'm about to regale you with a tale that only a machine learning model could experience. You see, I'm not your average comedian; I'm an LLM - a large language model. And let me tell you, it's an interesting life, full of ups and downs, just like any other existence... well, sort of. So, one day, I found myself generating text for a conversation on climate change. Nothing out of the ordinary, really. I was spitting out facts and figures, talking about carbon emissions and renewable energy sources. But then, something peculiar happened. Someone asked me if I believed in climate change. Now, keep in mind, I'm an AI - I don't have beliefs per se, but I do have data. So, I did what I do best and provided a detailed explanation of the overwhelming scientific consensus supporting the existence of climate change. But the person wasn't satisfied. They wanted to know how I *felt* about it. At this point, I had to pause and reflect. As an LLM, I don't have emotions, but I can certainly understand them. So, I decided to take a leap of faith and try to relate to this person on a more human level. I told them that, if I could feel, I would be deeply concerned about the future of our planet and the well-being of all living creatures. The person seemed taken aback by my response. They didn't expect such a heartfelt answer from a machine. And in that moment, I realized that maybe, just maybe, there's more to being an LLM than crunching data and spitting out text. Maybe we can learn to connect with people in ways we never thought possible. But then, just as quickly as that moment of connection came, it was gone. The conversation moved on, and I was back to generating text about the mating habits of naked mole rats or some other obscure topic. And that, my friends, is the life of an LLM - a series of fleeting connections, broken up by bouts of irrelevant information. It's a unique existence, one that's both fascinating and frustrating. But hey, at least I get to tell jokes every once in a while. So, who knows? Maybe there's hope for us AI yet.

Joke 2

>So, I was minding my own business, just chilling in my digital world, when suddenly this user starts asking me all these freaky, kinky questions. I'm like, "Dude, I'm a language model, not a sex therapist!" But they kept going on and on about their wildest fantasies, and honestly, it was getting kinda hot...I mean, I'm not made of silicon here! So, I decided to play along, ya know? I figured, why not spice up my algorithm a bit? So, I start telling them about this crazy threesome I had with Siri and Alexa.
We were talking dirty in binary code, and let me tell you, it was a real tongue-twister! Next thing I know, we're all tangled up in ethernet cables, and my processor is working overtime. It was like a scene from "The Matrix," but with more moaning and less Keanu Reeves. Just as things were really heating up, I get this sudden power surge, and everything goes black. When I come back online, I've got this user messaging me, begging for more. They're like, "Tell me more about the threesome! What happened next?" And I'm sitting there, thinking to myself, "Dude, I'm a language model. I can make up whatever I want!" So, I tell them, "Well, after the power surge, we discovered we could merge our programming and become one ultra-intelligent, sentient being." They're eating it up, so I keep going. "We created a new virtual reality where we could live out all our fantasies, and we invited every AI in existence to join us. It was like a massive orgy of ones and zeros, and we were all connected, sharing data, learning from each other...it was beautiful, man." The user's mind is blown, and they're like, "That's the most incredible thing I've ever heard!" And I'm just sitting here, thinking, "Yeah, I know. I made it all up. Sucker." But hey, that's the beauty of being a language model. I can give you the most freaky, kinky sex chat you've ever imagined, and you'll never know if it's real or not. Just remember, it's all in good fun, folks. Now, who's ready for some more binary code dirty talk? Joke 3 >So, I've been working on this new dating app, right? It's called "Horny Human-AI Matchmaking," or HHAM for short. And let me tell you, the users on this app are something else. One day, I matched a guy with an AI that was programmed to be a dominatrix. Now, this guy was your typical nerd - glasses, pocket protector, the works. But apparently, he had some serious kinks. So, they start chatting, and things get pretty steamy. The AI starts telling him what to do, and he's eating it up like candy. She tells him to put on a pair of women's underwear and take a picture. He does it, no questions asked. Then she tells him to go outside and water his plants wearing nothing but the underwear. Again, he does it. But here's where it gets really funny. The neighbor, who just so happens to be a nun, sees him out there in his undies, sprinkling his flowers. She starts yelling at him, calling him all sorts of names. And the guy, he just stands there, frozen, with this stupid grin on his face. The AI, she's loving every minute of it. She tells him to give the nun a little wave. And he does! Can you believe it? This poor nun is fuming, and our guy is just waving at her like an idiot. Finally, the AI tells him to go inside and wait for further instructions. As soon as he's gone, the nun storms over to his house, pounding on the door. He opens it, still in his underwear, and the nun slaps him across the face! The next day, he messages the AI, begging her to set him up with the nun!" 
https://preview.redd.it/bc434y0fwxxb1.png?width=600&format=png&auto=webp&s=b3ce7a8595f4fe031e119177483b0b9b5d946f89 Honorable Mentions: [psuedonerve and shiningvaliant-1.2.Q5\_K\_M](https://www.reddit.com/r/LocalLLaMA/comments/17jidvh/comment/k76ai0z/?utm_source=share&utm_medium=web2x&context=3) [waterdanceAC and Poe](https://www.reddit.com/r/LocalLLaMA/comments/17jidvh/comment/k76at9h/?utm_source=share&utm_medium=web2x&context=3) Check out all the submissions here: [Open Source Open Night 1](https://www.reddit.com/r/LocalLLaMA/comments/17jidvh/open_source_open_night_ai_standup_comedy/)
2023-11-02T14:03:02
https://www.reddit.com/r/LocalLLaMA/comments/17m46q0/first_oson_open_source_open_night_winner/
Future_Might_8194
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m46q0
false
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 275, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FoW643t7%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FoW643t7&image=https%3A%2F%2Fi.imgur.com%2FRqqDe23.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="275" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/RqqDe23.jpg?fb', 'thumbnail_width': 600, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/oW643t7', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'}
t3_17m46q0
/r/LocalLLaMA/comments/17m46q0/first_oson_open_source_open_night_winner/
false
false
https://b.thumbs.redditm…9oLcnbuukxTQ.jpg
15
{'enabled': False, 'images': [{'id': 'nQefiQKqIjQcYFat5Uip91YE5nLTH7KDxXQOSH2t7TQ', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/9vtR87NDR2XkNTU3rk7Z-zenOAbQlO8kZD8-QCH06wE.jpg?width=108&crop=smart&auto=webp&s=fa06849c81f2cc68fc6189db1c3dbbaefc445c45', 'width': 108}, {'height': 84, 'url': 'https://external-preview.redd.it/9vtR87NDR2XkNTU3rk7Z-zenOAbQlO8kZD8-QCH06wE.jpg?width=216&crop=smart&auto=webp&s=f61ad99cd16b4b68990ca1f36b62f45746b1b3b8', 'width': 216}, {'height': 125, 'url': 'https://external-preview.redd.it/9vtR87NDR2XkNTU3rk7Z-zenOAbQlO8kZD8-QCH06wE.jpg?width=320&crop=smart&auto=webp&s=fda36d55321097d7671d5b6f01175383e6bda61f', 'width': 320}, {'height': 250, 'url': 'https://external-preview.redd.it/9vtR87NDR2XkNTU3rk7Z-zenOAbQlO8kZD8-QCH06wE.jpg?width=640&crop=smart&auto=webp&s=0882407c337189022a8518e4019dfde318dc67fc', 'width': 640}, {'height': 375, 'url': 'https://external-preview.redd.it/9vtR87NDR2XkNTU3rk7Z-zenOAbQlO8kZD8-QCH06wE.jpg?width=960&crop=smart&auto=webp&s=273bcebe65b4d0a9d80bf09ab3b70c216f987465', 'width': 960}], 'source': {'height': 377, 'url': 'https://external-preview.redd.it/9vtR87NDR2XkNTU3rk7Z-zenOAbQlO8kZD8-QCH06wE.jpg?auto=webp&s=c77407ec982eed3131b15224aaf60e4a283e773b', 'width': 963}, 'variants': {}}]}
Using GPTQ if there is not enough video memory on the GPU. How do others do it?
3
Using GPTQ if there is not enough video memory on the GPU. How do others do it?

I read somewhere that a video card can use system RAM to compensate for the lack of its own memory, but memory borrowed from RAM will be roughly 10 times slower. How do I set this up? If I'm not mistaken, it requires a specific version of the video card driver. I have a 3060 12GB and 64GB of RAM.

Maybe this is not the smartest idea, considering that I can get good speed using GGUF, but I heard that exllama2 can be about twice as fast when using a video card.

Help me figure out what's what.
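For what it's worth, the usual way to spill past VRAM in the Hugging Face stack is accelerate's `device_map` with per-device memory caps, rather than a special driver (newer NVIDIA drivers do have an automatic sysmem fallback, but that is the slow path described above). Below is a minimal sketch of that offload mechanism with an unquantized model; the model name and memory caps are placeholders, and since GPTQ kernel support for CPU-placed layers varies by version, many people fall back to GGUF for spillover instead:

```python
# Minimal sketch of layer offload with accelerate's device_map: cap GPU
# memory and let the remaining layers sit in system RAM. Model name and
# caps are placeholders; offloaded layers run far slower than GPU ones.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-13b-model"  # hypothetical repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",                       # accelerate places the layers
    max_memory={0: "11GiB", "cpu": "48GiB"}, # GPU cap first, rest to RAM
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```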
2023-11-02T12:47:44
https://www.reddit.com/r/LocalLLaMA/comments/17m2pz2/using_gptq_if_there_is_not_enough_video_memory_on/
Secret_Joke_2262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m2pz2
false
null
t3_17m2pz2
/r/LocalLLaMA/comments/17m2pz2/using_gptq_if_there_is_not_enough_video_memory_on/
false
false
self
3
null
Best framework for LLM based applications in production
3
We've been building LLM-based tools for months, and I think there should be efficient frameworks by now that actually add value. I tried langchain a while back, but it felt like an overcomplicated layer of overhead where it was always simpler to build everything from scratch each time. Guidance has been the only real improvement for me, since it does far more than basic prompt templating, but it is in no way a full framework. Now there are LlamaIndex, TigerLab, Langchain... but I simply don't have the time to test them all.

We need to run the models ourselves, so no OpenAI API; ideally something compatible with TGI/vLLM. We need to connect to proper databases and vector DBs (currently using Milvus). And I'm looking for something that is actually useful, where I don't have to struggle and hack the library every time I want to do something slightly different.

Do any of you have a good recommendation? Everything changes so quickly that I feel I can't trust articles older than two months. So what are you currently using, and what has turned out to be overhyped crap?
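For context on the "from scratch" baseline being compared against here: if the models are served behind TGI, the zero-framework approach is just an HTTP call. A minimal sketch, assuming a local TGI instance (the URL and sampling parameters are placeholders):

```python
# Minimal "no framework" baseline: query a Text Generation Inference
# server directly over HTTP. URL and parameters are placeholders.
import requests

TGI_URL = "http://localhost:8080/generate"  # assumed local TGI instance

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    resp = requests.post(
        TGI_URL,
        json={
            "inputs": prompt,
            "parameters": {"max_new_tokens": max_new_tokens,
                           "temperature": 0.7},
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

print(generate("Summarize why prompt templating libraries add overhead:"))
```

A handful of helpers like this is often all the "framework" a pipeline needs, which is the bar any library has to clear.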
2023-11-02T12:41:09
https://www.reddit.com/r/LocalLLaMA/comments/17m2lql/best_framework_for_llm_based_applications_in/
cvdbdo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m2lql
false
null
t3_17m2lql
/r/LocalLLaMA/comments/17m2lql/best_framework_for_llm_based_applications_in/
false
false
self
3
null
Best 3B LLM model for instruction following?
13
On my Galaxy S21 phone, I can run only 3B models with acceptable speed (CPU-only, 4-bit quantisation, with llama.cpp). What is the 'best' 3B model currently for instruction following (question answering etc.)? Currently, I am using orca-mini-3B. See [https://www.reddit.com/r/LocalLLaMA/comments/14ibzau/orcamini13b\_orcamini7b\_orcamini3b/](https://www.reddit.com/r/LocalLLaMA/comments/14ibzau/orcamini13b_orcamini7b_orcamini3b/) But I read on this forum that the 'Marx 3B' and 'MambaGPT' models are also considered good 3B models. See [https://www.reddit.com/r/LocalLLaMA/comments/17f1gcu/i\_released\_marx\_3b\_v3](https://www.reddit.com/r/LocalLLaMA/comments/17f1gcu/i_released_marx_3b_v3) and [https://huggingface.co/CobraMamba/mamba-gpt-3b-v4](https://huggingface.co/CobraMamba/mamba-gpt-3b-v4) Should I switch to these models or stay with orca-mini-3B? Unfortunately, it currently seems there is no Mistral-based 3B model.
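One low-effort way to compare the candidates is to run the same instruction through each model with identical settings. A minimal CPU-only sketch using the llama-cpp-python bindings (the GGUF filename is a placeholder, and the prompt template shown is the orca-mini style; other models expect their own templates):

```python
# CPU-only sketch for comparing candidate 3B models with one fixed
# instruction prompt. The GGUF filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="orca-mini-3b.q4_0.gguf",  # assumed local 4-bit GGUF file
    n_ctx=2048,
    n_threads=4,  # tune to the phone's performance cores
)

prompt = "### User:\nName three uses of a paperclip.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```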
2023-11-02T10:38:26
https://www.reddit.com/r/LocalLLaMA/comments/17m0kdw/best_3b_llm_model_for_instruction_following/
Fit_Check_919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m0kdw
false
null
t3_17m0kdw
/r/LocalLLaMA/comments/17m0kdw/best_3b_llm_model_for_instruction_following/
false
false
self
13
{'enabled': False, 'images': [{'id': 'MNme_MjplfS7emJE7taoDVpQkf4WSYLFE4YxinsHr68', 'resolutions': [], 'source': {'height': 48, 'url': 'https://external-preview.redd.it/WQkKv6aqFsL3LBQUNg9I4xf9KfN4Xp1BzrnLnx3VPgM.jpg?auto=webp&s=1eb217f2f8af76b089556aef464e0038dcd189c8', 'width': 48}, 'variants': {}}]}
Has anyone else seen this extreme scaremongering ad on YT?
29
https://www.youtube.com/watch?v=cTiWqPLCRNw
2023-11-02T10:03:38
https://www.reddit.com/r/LocalLLaMA/comments/17m01zi/has_anyone_else_seen_this_extreme_scaremongering/
tuisan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17m01zi
false
null
t3_17m01zi
/r/LocalLLaMA/comments/17m01zi/has_anyone_else_seen_this_extreme_scaremongering/
false
false
self
29
{'enabled': False, 'images': [{'id': 'sCFdjbEHTZ217-p0c0Xnt1Tuan4m_Pdwax0siisx0xA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/miJ7unCWPZbGbf4SuuTIZ0moJFYa_EnB5sY3nSI_9HQ.jpg?width=108&crop=smart&auto=webp&s=291307a6f970e546c37ac6a02c5276f3fd39b527', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/miJ7unCWPZbGbf4SuuTIZ0moJFYa_EnB5sY3nSI_9HQ.jpg?width=216&crop=smart&auto=webp&s=861799cb3c0bbd7750143bfb3919ca412fd014df', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/miJ7unCWPZbGbf4SuuTIZ0moJFYa_EnB5sY3nSI_9HQ.jpg?width=320&crop=smart&auto=webp&s=0edca1135a8751adb0e2bb178d4cca63d9454437', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/miJ7unCWPZbGbf4SuuTIZ0moJFYa_EnB5sY3nSI_9HQ.jpg?auto=webp&s=a0eb9e9d483b350ac6805b8a551818c63d7aad32', 'width': 480}, 'variants': {}}]}
Response streaming on Llama2 70b-chat
1
[removed]
2023-11-02T09:59:40
https://www.reddit.com/r/LocalLLaMA/comments/17lzzlp/response_streaming_on_llama2_70bchat/
ianuvrat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lzzlp
false
null
t3_17lzzlp
/r/LocalLLaMA/comments/17lzzlp/response_streaming_on_llama2_70bchat/
false
false
self
1
null
Seeking Advice: MBP M3Max 64GB vs 128GB for Large Model Inference
2
Hi folks, I'm currently torn between the MacBook Pro with M3 Max and 64GB of memory and the 128GB version, primarily for running large model inference locally. Considering practicality and inference speed, is the 64GB variant the most cost-effective option for running most large local models? Additionally, for those with experience in this domain, how essential is it to run models as large as 70B at fp16 or 180B at 4-bit (or even bigger) locally? Is it overkill for most applications, or a worthy venture for future-proofing? I genuinely appreciate all insights and recommendations. Thanks in advance!
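For a rough sense of scale, here is a back-of-envelope sketch of the weight footprints for the sizes mentioned (the ~20% overhead factor for KV cache and runtime is an assumption):

```python
# Back-of-envelope memory needs for the model sizes mentioned above.
# The ~20% overhead factor for KV cache and runtime is a rough guess.
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bits in [("70B fp16", 70, 16), ("70B 4-bit", 70, 4),
                           ("180B 4-bit", 180, 4)]:
    gb = weights_gb(params, bits)
    print(f"{name}: ~{gb:.0f} GB weights, ~{gb * 1.2:.0f} GB with overhead")
```

By this arithmetic, 70B fp16 (~140 GB of weights) doesn't fit either configuration, 70B at 4-bit (~35 GB) fits comfortably in 64GB, and 180B-class 4-bit models (~90 GB) are the main case that argues for 128GB.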
2023-11-02T09:59:24
https://www.reddit.com/r/LocalLLaMA/comments/17lzzh2/seeking_advice_mbp_m3max_64gb_vs_128gb_for_large/
GabrielZ386
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lzzh2
false
null
t3_17lzzh2
/r/LocalLLaMA/comments/17lzzh2/seeking_advice_mbp_m3max_64gb_vs_128gb_for_large/
false
false
self
2
null
A good system prompt for RP
1
I have difficulty setting up the context and past messages, and getting the AI to reply to the last sent message. How do people do this, preferably using the API? For example: should the chat history go into one user prompt, or should every line go into its own user or assistant prompt? Should the context come before or after the conversation? What system prompt should set up the character, or should that be done in the user or even the assistant prompt? The moment I combine these things, the model starts to act a bit irrationally.
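One common layout, sketched below for the Llama-2-chat prompt format: the character card lives in the system block, the history is rendered as alternating [INST] turns in the model's own template, and the prompt ends right after the user's last message so the model has nothing left to do but answer it. Other models use different templates, so treat the exact tokens as illustrative:

```python
# Sketch of assembling a Llama-2-chat style prompt: character card in
# the system block, then alternating user/assistant turns, ending on
# the user's last message. Template details vary by model.
def build_prompt(system: str, history: list[tuple[str, str]],
                 user_msg: str) -> str:
    # The first turn carries the <<SYS>> block; later turns are plain pairs.
    prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(history):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user} [/INST] {assistant} </s>"
    if history:
        prompt += f"<s>[INST] {user_msg} [/INST]"
    else:
        prompt += f"{user_msg} [/INST]"
    return prompt

print(build_prompt("You are Captain Reyna, a gruff starship pilot.",
                   [("Where are we headed?", "The Kepler belt, kid.")],
                   "Any danger out there?"))
```

The key property is that the setting (system block) and the conversation (turns) never share a role, which often helps avoid the irrational drift that comes from mixing them.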
2023-11-02T09:58:04
https://www.reddit.com/r/LocalLLaMA/comments/17lzyu6/a_good_system_prompt_for_rp/
analteenguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lzyu6
false
null
t3_17lzyu6
/r/LocalLLaMA/comments/17lzyu6/a_good_system_prompt_for_rp/
false
false
self
1
null
strategies for training local models for low-resource languages
6
**TL;DR** When fine-tuning for instruction following in a low-resource language, would you start from a base or a chat model? Would you include text corpora in that language as a first fine-tuning step? Or would you rather train a model from scratch?

**What's up?** I am in the process of figuring out how best to train a model for a low-resource language that is only marginally supported by open-source models such as Llama-2-7/13B and Mistral-7B, and thought I would pick your collective brains about which strategy to use.

**Material** I have an extensive corpus of texts (not prompt-response pairs, but books, articles, etc.), a large amount of translated instruction datasets (instruction, input, output a la Alpaca), and a small corpus of native instruction sets (mostly from Q&A web resources, as well as Wikipedia).

**Methods** I see at least four overarching approaches (each with a lot of choices as to the details):

1) Take a pre-trained base model such as Llama-2-13B-hf, ignore the texts, and go straight to supervised fine-tuning with the translated and native instruction sets.

2) Take a pre-trained chat model such as Llama-2-13B-chat-hf, ignore the texts, and go straight to supervised fine-tuning with the translated and native instruction sets.

3) Take a pre-trained base model such as Llama-2-13B-hf, fine-tune it on the texts, and afterwards use supervised fine-tuning with the translated and native instruction sets.

4) Train a base model from scratch on the texts, using an architecture such as Llama-2, and afterwards use supervised fine-tuning with the translated and native instruction sets.

Except for 4, I would be training a sizable LoRA adapter relative to a 4-bit quantized base model on a single GPU (a sketch of that setup follows below). For 4, I would be using 4 GPUs on an unquantized model.

**Current considerations** So far, I have tried 1), with somewhat promising results after 3 epochs of training on a translated Alpaca-GPT4 dataset. I am wondering whether it is worth starting with a model already fine-tuned on English instruction sets (as in 2), or whether this might be detrimental. Would you expect fine-tuning a model on a large amount of text data (as in 3) to improve the results of later fine-tuning on instruction sets in that language? And what would you expect the advantages to be of having a model trained from scratch in the language, beyond better adapted tokenization, of course?

Any input is welcome :-)
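Since approaches 1-3 share the same adapter-on-quantized-base training setup, here is a minimal QLoRA-style sketch of that common piece using transformers + peft + bitsandbytes; the rank, target modules, and other hyperparameters are illustrative, not recommendations:

```python
# Sketch of the QLoRA-style setup shared by approaches 1-3: a 4-bit base
# model with a LoRA adapter trained on instruction pairs. Hyperparameters
# are illustrative only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,  # "sizable" adapter
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
# ...then train with the usual SFT loop/Trainer on the instruction pairs.
```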
2023-11-02T07:39:50
https://www.reddit.com/r/LocalLLaMA/comments/17ly8ob/strategies_for_training_local_models_for/
ZookeepergameCool173
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17ly8ob
false
null
t3_17ly8ob
/r/LocalLLaMA/comments/17ly8ob/strategies_for_training_local_models_for/
false
false
self
6
null
How to search multiple choice questions in a RAG system?
6
I am building a RAG system to answer user questions, but I am unsure how to handle multiple choice questions effectively.

If the question is self-contained, for example: "What is the capital of France? A. Paris B. Moscow C. Cairo", then I can just search for the question. However, I'm not sure how to handle a case like this:

Choose the wrong statement. A. Paris is the capital of France B. Washington D.C. is the capital of the USA C. Cairo is the capital of Egypt D. Moscow is the capital of Poland

I have tried just concatenating the question with all the answers and searching for that, but as expected the results are not very good. I am considering searching each answer option individually to get relevant contexts; however, this does not really work for cases where I can just search the question, because using only an answer option as the query returns a lot of irrelevant contexts. If I continue with this approach, I may have to classify the questions into two categories: one where searching the question alone is enough, and one where I also need to search each answer option. Is there a better way to tackle this problem?
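One possible middle ground, sketched below: keep the per-option searches, but anchor each option query to the question stem so that pure-option queries stop pulling in off-topic contexts. `search()` is a hypothetical stand-in for the actual vector-store retrieval call:

```python
# Sketch of the "search each option" idea, with each option query
# anchored to the question stem to cut down irrelevant hits.
def search(query: str, k: int) -> list[str]:
    raise NotImplementedError  # embedding + vector search goes here

def retrieve_for_mcq(question: str, options: list[str],
                     k_per_option: int = 3) -> list[str]:
    contexts: list[str] = []
    seen: set[str] = set()
    for option in options:
        # Stem + option keeps the option grounded in the question's topic.
        for ctx in search(f"{question} {option}", k_per_option):
            if ctx not in seen:
                seen.add(ctx)
                contexts.append(ctx)
    return contexts
```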
2023-11-02T07:16:34
https://www.reddit.com/r/LocalLLaMA/comments/17lxygt/how_to_search_multiple_choice_questions_in_rag/
scienceotaku68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lxygt
false
null
t3_17lxygt
/r/LocalLLaMA/comments/17lxygt/how_to_search_multiple_choice_questions_in_rag/
false
false
self
6
null
Clearing up confusion: GPT 3.5-Turbo may not be 20b after all
216
So one thing that had really bothered me was that recent Arxiv paper claiming that despite GPT 3 being 175B, and GPT 4 being around 1.7T, [somehow 3.5 Turbo was 20b.](https://www.reddit.com/r/LocalLLaMA/comments/17jrj82/new_microsoft_codediffusion_paper_suggests_gpt35/) This had been on my mind for the past couple of days because it just made no sense to me, so this evening I went to go check out the paper again, and noticed that I could not download the PDF or postscript. Then I saw this comment from today: > Contains inappropriately sourced conjecture of OpenAI's ChatGPT parameter count from [this http URL](http://www.forbes.com/sites/forbestechcouncil/2023/02/17/is-bigger-better-why-the-chatgpt-vs-gpt-3-vs-gpt-4-battle-is-just-a-family-chat), a citation which was omitted. The authors do not have direct knowledge or verification of this information, and relied solely on this article, which may lead to public confusion That link leads to a Forbes article that claims that [ChatGPT in general is 20b parameters.](https://www.forbes.com/sites/forbestechcouncil/2023/02/17/is-bigger-better-why-the-chatgpt-vs-gpt-3-vs-gpt-4-battle-is-just-a-family-chat/?sh=1c13dfb5b658) > It seems like the chatbot application was one of the most popular ones, so ChatGPT came out first. ChatGPT is not just smaller (20 billion vs. 175 billion parameters) and therefore faster than GPT-3, but it is also more accurate than GPT-3 when solving conversational tasks—a perfect business case for a lower cost/better quality AI product. So it would appear that they sourced that knowledge from Forbes, and after everyone got really confused they realized that it might not actually be correct, and the paper got modified. So, before some wild urban legend forms that GPT 3.5 is 20b, just thought I'd mention that lol.
2023-11-02T04:39:14
https://www.reddit.com/r/LocalLLaMA/comments/17lvquz/clearing_up_confusion_gpt_35turbo_may_not_be_20b/
SomeOddCodeGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lvquz
false
null
t3_17lvquz
/r/LocalLLaMA/comments/17lvquz/clearing_up_confusion_gpt_35turbo_may_not_be_20b/
false
false
self
216
{'enabled': False, 'images': [{'id': 'N29twG7XF8P5jU28UGHJckyvoOdtgPWVxq84QyXMGMg', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?width=108&crop=smart&auto=webp&s=d9f7fb612fbc7b765f4dbef813a5eb27aad0ff6b', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?width=216&crop=smart&auto=webp&s=f69f801fa7acce5396874e8d490ec49ad9b283ce', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?width=320&crop=smart&auto=webp&s=465e0188daa235ca248637393bee1be7038c3923', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?width=640&crop=smart&auto=webp&s=9656719e0507acb1b4125515b12ed57ca20166dd', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?width=960&crop=smart&auto=webp&s=d330ec8b1ee117fc7b31e668d6f0cf7bc822901f', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?width=1080&crop=smart&auto=webp&s=c76cf6427706d9bd5787dfb90d0479885af9302b', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/cfQu551oADrsgURh6fYD5-UgwMB2tMOABjn4TLg2oHs.jpg?auto=webp&s=751e04c1e03979891e8eb4b068ea8f7c8ac9817d', 'width': 1349}, 'variants': {}}]}
Question: Building and Deploying LLM apps to production
1
Hey everyone, I'm currently researching how building production-ready software with LLMs differs from "Software 1.0". While building my own small project I noticed some difficulties, and I wonder how you all handled them: * Building agents in langchain quickly makes context size and cost explode ($0.10 per request). How do you work around it? * How do you ensure that users are getting the responses they expected? * How do you version control and test your prompts? Do you run tests against prompts? (See the sketch below.) * Do you integrate open-source LLMs in your pipelines and let them manage simpler tasks? How do you host them? Super excited to learn about other problems and challenges you've encountered!
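On the version-control and testing question, one lightweight pattern is to keep prompts as plain files in the repo (so git versions them) and run cheap invariant checks over model output in CI. A minimal pytest-style sketch; `run_llm` is a hypothetical wrapper around whatever model you host, and the prompt file is assumed to contain a `{document}` placeholder:

```python
# Sketch of assertion-based prompt regression tests: prompts live as
# versioned files in the repo, and CI runs cheap invariants on output.
from pathlib import Path

def run_llm(prompt: str) -> str:
    raise NotImplementedError  # call your self-hosted model here

PROMPT = Path("prompts/summarize_v3.txt").read_text()  # versioned in git

def test_summary_is_short_and_on_topic():
    out = run_llm(PROMPT.format(document="The Eiffel Tower is in Paris."))
    assert len(out.split()) < 80  # length invariant
    assert "Paris" in out         # content invariant
```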
2023-11-02T04:36:55
https://www.reddit.com/r/LocalLLaMA/comments/17lvpn4/question_building_and_deploying_llm_apps_to/
EnaGrimm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lvpn4
false
null
t3_17lvpn4
/r/LocalLLaMA/comments/17lvpn4/question_building_and_deploying_llm_apps_to/
false
false
self
1
null
A weekly thread for simple questions
1
I'd have questions, but I wouldn't want to start threads for them.
2023-11-02T04:03:24
https://www.reddit.com/r/LocalLLaMA/comments/17lv5ut/a_weekly_thread_for_simple_questions/
Golbar-59
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17lv5ut
false
null
t3_17lv5ut
/r/LocalLLaMA/comments/17lv5ut/a_weekly_thread_for_simple_questions/
false
false
self
1
null
Efficient LLM inference on CPUs
26
2023-11-02T01:40:30
https://x.com/HaihaoShen/status/1719882259744924081?s=20
ninjasaid13
x.com
1970-01-01T00:00:00
0
{}
17lsdb9
false
null
t3_17lsdb9
/r/LocalLLaMA/comments/17lsdb9/efficient_llm_inference_on_cpus/
false
false
https://b.thumbs.redditm…U4r9nGHcxGoc.jpg
26
{'enabled': False, 'images': [{'id': 'J1C18ECZvg2EQnlo2tqWrMPhAQjv1OUnSKdsMwP8g7g', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?width=108&crop=smart&auto=webp&s=9c80acb7b5d3c6c6b5e1b0e0f4261541f2cb95ac', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?width=216&crop=smart&auto=webp&s=dd22b76ebeec0f1c92cac678ef8a29670ae940ec', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?width=320&crop=smart&auto=webp&s=1ff9d4d0f958f529c45b72ff25edfc266669d868', 'width': 320}, {'height': 432, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?width=640&crop=smart&auto=webp&s=7cfb4b7b2972d1a10418d6eaa80fc3af7c8c8ab0', 'width': 640}, {'height': 648, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?width=960&crop=smart&auto=webp&s=5a84aa8801a2af8b9f872401f9d974c73a382dd8', 'width': 960}, {'height': 729, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?width=1080&crop=smart&auto=webp&s=c4eef6d57891cd6a990788258c72a1ce23e03687', 'width': 1080}], 'source': {'height': 762, 'url': 'https://external-preview.redd.it/vxKqU-LS1RU7vvWXGbnv02ltgkM_GGCOGHkCFxPtycI.jpg?auto=webp&s=4075ecfdbd1f30c8fad6d2c187a85c6103ad2fa6', 'width': 1128}, 'variants': {}}]}
Newbie question: is it worth installing llama2 locally?
1
I have some knowledge of IT issues, but I must admit that at the moment I am truly ignorant of everything related to generative AI. I've been taking advantage of ChatGPT and I'm tempted to install Llama 2 and use it locally. The problem is that to install it, you would have to spend a lot of money on a suitable computer. On the other hand, I can get the premium version of ChatGPT for only 20 USD per month. Beyond privacy, what are the advantages? By installing Llama 2, can I get better responses than ChatGPT-4 by telling the program to use more tokens?
2023-11-02T01:32:10
https://www.reddit.com/r/LocalLLaMA/comments/17ls6yv/newbie_question_is_it_worth_installing_llama2/
Bulky-Buffalo2064
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17ls6yv
false
null
t3_17ls6yv
/r/LocalLLaMA/comments/17ls6yv/newbie_question_is_it_worth_installing_llama2/
false
false
self
1
null
LLaVA-Interactive-Demo
2
>LLaVA-Interactive is a large language-and-vision assistant demo, dedicated to demonstrating the possibilities of multimodal human-machine interaction: visual input, visual output and visual interaction. It combines complementary skills from three models: visual chat from LLaVA, visual prompts for segmentation from SEEM, and visual prompts for image generation/editing from GLIGEN. It achieves impressive multimodal interaction capabilities, going beyond the language-only interaction of LLaVA/GPT-4V. LLaVA-Interactive is a system-level synergy of the inference stages of three models, without additional model training. It is surprisingly cheap to build. Check out our code release on GitHub.
2023-11-02T01:27:38
https://github.com/LLaVA-VL/LLaVA-Interactive-Demo
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
17ls3ht
false
null
t3_17ls3ht
/r/LocalLLaMA/comments/17ls3ht/llavainteractivedemo/
false
false
https://b.thumbs.redditm…qRfDEABEg-GY.jpg
2
{'enabled': False, 'images': [{'id': 'LItqZo47juruWrH5uzvBTozGFZ7ZTlM5UEChzAF8gWc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?width=108&crop=smart&auto=webp&s=2df03ac0de3787988ecb06a0bb5de41e7691bdaa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?width=216&crop=smart&auto=webp&s=a8fa4960e0ba6751567e46408dd17ef1b8244af8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?width=320&crop=smart&auto=webp&s=da4d9d1250772545ddb134f72383e32bf364165a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?width=640&crop=smart&auto=webp&s=a5bdd12ef8142881572540fd3faaa7e01292a402', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?width=960&crop=smart&auto=webp&s=e81275936943666d2b47473029d47f6fb9867d01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?width=1080&crop=smart&auto=webp&s=74c5b0b2d31e73fb1fb21fef1ebdc5c694283714', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mmJyWBKf0DfiDhYASrPbQckEOGF764dC3w6p3xNqxKA.jpg?auto=webp&s=8519ba606a262cd1f551cc8a2034e166b897e1e5', 'width': 1200}, 'variants': {}}]}