Dataset schema (field, type, observed range):

    title      stringlengths  1 – 300
    score      int64          0 – 8.54k
    selftext   stringlengths  0 – 41.5k
    created    timestamp[ns]  2023-04-01 04:30:41 – 2026-03-04 02:14:14
    url        stringlengths  0 – 878
    author     stringlengths  3 – 20
    domain     stringlengths  0 – 82
    edited     timestamp[ns]  1970-01-01 00:00:00 – 2026-02-19 14:51:53
    gilded     int64          0 – 2
    gildings   stringclasses  7 values
    id         stringlengths  7 – 7
    locked     bool           2 classes
    media      stringlengths  646 – 1.8k
    name       stringlengths  10 – 10
    permalink  stringlengths  33 – 82
    spoiler    bool           2 classes
    stickied   bool           2 classes
    thumbnail  stringlengths  4 – 213
    ups        int64          0 – 8.54k
    preview    stringlengths  301 – 5.01k
What did you train, what hardware did you use, and how long did it take?
26
Hello! I'm trying to gather data on the hardware requirements and time taken to fine-tune or train a model. So what are your experiences? How much VRAM was used? On what kind of hardware? Thanks in advance!
2023-08-25T08:35:06
https://www.reddit.com/r/LocalLLaMA/comments/160tagr/what_did_you_train_what_hardware_did_you_use_and/
Factemius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160tagr
false
null
t3_160tagr
/r/LocalLLaMA/comments/160tagr/what_did_you_train_what_hardware_did_you_use_and/
false
false
self
26
null
Facing Truncation Issues with LLama-2 Model Responses
4
I have a problem with the responses generated by Llama-2 (TheBloke/Llama-2-70B-chat-GGML). They are cut off at almost the same spot regardless of whether I'm using a 2xRTX3090 or 3xRTX3090 configuration. Llama-2's task is to generate an article based on the data contained in my database. Here's the code:

    llm = LlamaCPP(
        model_url="https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML/resolve/main/llama-2-70b-chat.ggmlv3.q4_0.bin",
        model_path=None,
        temperature=0.1,
        context_window=17800,
        generate_kwargs={},
        model_kwargs={"n_gpu_layers": 82, "n_gqa": 8},
        messages_to_prompt=messages_to_prompt,
        completion_to_prompt=completion_to_prompt,
        verbose=True,
    )

    # SQL query to retrieve data from the database
    query_sql = text("SELECT id, title, heading, article, keyword FROM articles;")

    # Execute the query and process the results
    with engine.connect() as connection:
        result = connection.execute(query_sql)
        for row in result:
            id, title, heading, article, keyword = row
            # If the value in the 'keyword' column is not empty, we generate an article
            if keyword:
                # Creating a message for the Llama-2 model
                content = f"Please write an article about {keyword}. Title: {title}. Headings: {heading}. Content: {article}."
                # Generating the response of the Llama-2 model
                response = llm.complete(content)
                # Printing the generated article
                print(response.text)

The responses are consistently truncated, and I'm unsure how to resolve this issue. Any insights or assistance would be greatly appreciated. **The responses are being truncated after approximately 3-4 sentences.**
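A likely cause, and a minimal fix sketch: consistent truncation after a few sentences is usually the generation cap, not the GPUs. The llama-index LlamaCPP wrapper limits output with `max_new_tokens` (the default is small), and `context_window=17800` also exceeds Llama-2's native 4096-token context. Treat the exact defaults here as assumptions and verify against your llama-index version:

    # Sketch, not a verified fix: raise the per-call generation cap and keep
    # the context window within what the model actually supports.
    llm = LlamaCPP(
        model_url="https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML/resolve/main/llama-2-70b-chat.ggmlv3.q4_0.bin",
        temperature=0.1,
        max_new_tokens=1024,      # generation cap; the small default cuts answers short
        context_window=4096,      # Llama-2's native context length
        model_kwargs={"n_gpu_layers": 82, "n_gqa": 8},
        verbose=True,
    )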
2023-08-25T08:24:48
https://www.reddit.com/r/LocalLLaMA/comments/160t4ao/facing_truncation_issues_with_llama2_model/
vnvrx1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160t4ao
false
null
t3_160t4ao
/r/LocalLLaMA/comments/160t4ao/facing_truncation_issues_with_llama2_model/
false
false
self
4
null
Finetuning models on XML Data to chat about it?
1
[removed]
2023-08-25T07:56:32
https://www.reddit.com/r/LocalLLaMA/comments/160smiw/finetuning_models_on_xml_data_to_chat_about_it/
nerdw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160smiw
false
null
t3_160smiw
/r/LocalLLaMA/comments/160smiw/finetuning_models_on_xml_data_to_chat_about_it/
false
false
self
1
null
Someone needs to finetune a model for ascii art, I've only been disappointed with what comes stock for most
12
Don't think I've seen anyone really talk about this, so I thought I'd put the idea out there.
2023-08-25T05:46:50
https://www.reddit.com/r/LocalLLaMA/comments/160qcly/someone_needs_to_finetune_a_model_for_ascii_art/
_______DEADPOOL____
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160qcly
false
null
t3_160qcly
/r/LocalLLaMA/comments/160qcly/someone_needs_to_finetune_a_model_for_ascii_art/
false
false
self
12
null
Any Google search/Website query projects?
2
Not sure if this has been asked before, but are there any projects available that let you use a local model to run Google search queries, scan/pull info from websites, and maybe even chat with the retrieved information using the model?
2023-08-25T04:35:03
https://www.reddit.com/r/LocalLLaMA/comments/160ozx2/any_google_searchwebsite_query_projects/
AI_Trenches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160ozx2
false
null
t3_160ozx2
/r/LocalLLaMA/comments/160ozx2/any_google_searchwebsite_query_projects/
false
false
self
2
null
After hosting your own llama.cpp
17
2023-08-25T04:19:48
https://i.redd.it/ulbxtagan6kb1.gif
anehzat
i.redd.it
1970-01-01T00:00:00
0
{}
160op7l
false
null
t3_160op7l
/r/LocalLLaMA/comments/160op7l/after_hosting_your_own_llamacpp/
false
false
https://b.thumbs.redditm…-lXq1shFalYQ.jpg
17
{'enabled': True, 'images': [{'id': 'iqdRaYsKRNN2ankOEYAt7CnLZaVCf5N6Q6785wS9-Ik', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=108&crop=smart&format=png8&s=3618022b2e6608f8a268594f537250ced4d56e85', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=216&crop=smart&format=png8&s=32d5afe68c8f774a3d997ea46be414bab89c0132', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=320&crop=smart&format=png8&s=1e247b423b0b9a2dab8b7bcdd23e4a1b9ce662bb', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?format=png8&s=32ff0ed5d0b23b6919c081a9a05fd0a461ba1295', 'width': 360}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=108&crop=smart&s=d84a303fd93abc76d6d2f0ff67cd1f8a3f4208bf', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=216&crop=smart&s=f3e1255402e9bbdef860c70e620e2c4edefc0bdf', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=320&crop=smart&s=e1a994e95838f43deb1dc14377eda1a6c222ef96', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?s=71ba62d4dd98b0d06792d8b740d807050ae23537', 'width': 360}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=108&format=mp4&s=d648dbae6509f1173b10f1a284f9d7b3783ab929', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=216&format=mp4&s=7a9cdbe93969cd33736bbd87ca1cca4b949aaad4', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?width=320&format=mp4&s=70f79ea56cc2fc26724dc225850e3606a3567c9c', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/ulbxtagan6kb1.gif?format=mp4&s=e8d189e7039d30618ae59d21c0467d8827338a0a', 'width': 360}}}}]}
Is there any completely free API of llama 2
1
I am searching for a completely free API key for Llama 2. My local machine doesn't have enough space or meet the requirements, so I need a free API. Please suggest a way to use a free API key.
2023-08-25T04:07:25
https://www.reddit.com/r/LocalLLaMA/comments/160og8r/is_there_any_completely_free_api_of_llama_2/
Responsible-Row6023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160og8r
false
null
t3_160og8r
/r/LocalLLaMA/comments/160og8r/is_there_any_completely_free_api_of_llama_2/
false
false
self
1
null
The Bloke has released GGMLs for Code Llama - what is the best for a 3090?
1
[removed]
2023-08-25T03:31:08
https://www.reddit.com/r/LocalLLaMA/comments/160npcu/the_bloke_has_released_ggmls_for_code_llama_what/
RoyalCities
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160npcu
false
null
t3_160npcu
/r/LocalLLaMA/comments/160npcu/the_bloke_has_released_ggmls_for_code_llama_what/
false
false
self
1
{'enabled': False, 'images': [{'id': 'giBT9zJH9B0iZmkMce--ijkYXjNeHMS6BVtL09I1cIY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=108&crop=smart&auto=webp&s=5d45cd6c29253e9c07b89fb3252ebfcd33d2b218', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=216&crop=smart&auto=webp&s=56cf2beed378658b5951d4d94c1ec808380ea580', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=320&crop=smart&auto=webp&s=0ab7200fdf18d4eb9c412a70dea6626250fd5c2f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=640&crop=smart&auto=webp&s=32d76d3f87cc28808add20d3285446836dc39d70', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=960&crop=smart&auto=webp&s=7e25cd057e6c3aeb34e2eca034decd772d33f548', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?width=1080&crop=smart&auto=webp&s=698a86f09c53bd7744c8facc5f1cb8c20baa334a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZXLJ6daDEAtDSr6dW3_M_FBVKiO6zRZJHvkkjTQHPmM.jpg?auto=webp&s=f5cf72bce312519d63e2d8044fe10173a2eda8e2', 'width': 1200}, 'variants': {}}]}
What's the best way to point CodeLlama at a local git repo
23
Looking for the best interface to set it loose on an entire folder of code, talk to it about the files in there, and have it make changes.
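For the "talk to it about the files" half (not the automated-edits half), a common pattern is to index the repo and query it. A minimal sketch using the mid-2023 llama-index API; the repo path and question are placeholders, and by default this uses OpenAI models unless you wire in a local LLM via a service context:

    # Index a local repo's files and ask questions against them.
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    docs = SimpleDirectoryReader("path/to/repo", recursive=True).load_data()
    index = VectorStoreIndex.from_documents(docs)
    engine = index.as_query_engine()
    print(engine.query("Where is the request retry logic implemented?"))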
2023-08-25T02:49:09
https://www.reddit.com/r/LocalLLaMA/comments/160mt8j/whats_the_best_way_to_point_codellama_at_a_local/
FaustBargain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160mt8j
false
null
t3_160mt8j
/r/LocalLLaMA/comments/160mt8j/whats_the_best_way_to_point_codellama_at_a_local/
false
false
self
23
null
LLama CPP & LangChain
1
[removed]
2023-08-25T02:38:46
https://www.reddit.com/r/LocalLLaMA/comments/160ml3u/llama_cpp_langchain/
emporer_eli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160ml3u
false
null
t3_160ml3u
/r/LocalLLaMA/comments/160ml3u/llama_cpp_langchain/
false
false
self
1
{'enabled': False, 'images': [{'id': '9cWeWP4ZX06TcfaZj6bu0HZnpkXpDxX8Z8JLesAZzBs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?width=108&crop=smart&auto=webp&s=41754314a19be30560bef611b80e2296f8c7fb81', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?width=216&crop=smart&auto=webp&s=79282879e773f168a5820fc78bd5a040caa50ec6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?width=320&crop=smart&auto=webp&s=55b1b15589b468bc7747dcc595709f11dea3a571', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_TM7_TVryLCWrv3zZU9VcnJaGSXUS8VSYNk5Linn8wo.jpg?auto=webp&s=387b51edec209c61e7dd0d8ba4e88812f5e059d3', 'width': 480}, 'variants': {}}]}
nous-Hermes: Arrogant fool regarding basic math?
1
[removed]
2023-08-25T00:58:07
https://www.reddit.com/r/LocalLLaMA/comments/160k8ey/noushermes_arrogant_fool_regarding_basic_math/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160k8ey
false
null
t3_160k8ey
/r/LocalLLaMA/comments/160k8ey/noushermes_arrogant_fool_regarding_basic_math/
false
false
self
1
null
Are the quantized models really that poor in performance?
5
I've just given this a quick attempt using llama-2-7b with 4-bit quantization, and the result I got back surprised me -- I thought these models were a lot better. Preface: I'm trying to use these models to explore some potential use cases for generative AI in a somewhat niche domain.

The question I asked was fairly straightforward and simple: name the planets in our solar system. The answer I got didn't even address the question I asked, and the question it supposedly answered, it still got wrong: [https://imgur.com/aAn1UnS](https://imgur.com/aAn1UnS)

Obviously I know I'm using a 7B-parameter model with 4-bit quantization, but I figured it'd still be decent at answering at least the most basic of questions. It also took some time to run, but I can safely attribute that to not having a GPU, lol. What are your thoughts on this?
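Answers like this often come from prompting a chat checkpoint without its template: Llama-2-chat models were tuned on a specific `[INST]` wrapper, and raw completions without it tend to wander. A sketch with llama-cpp-python; the model filename is a placeholder:

    # Llama-2-chat prompt template sketch.
    from llama_cpp import Llama

    llm = Llama(model_path="llama-2-7b-chat.q4_0.bin", n_ctx=2048)
    prompt = (
        "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
        "Name the planets in our solar system. [/INST]"
    )
    out = llm(prompt, max_tokens=128, temperature=0.2)
    print(out["choices"][0]["text"])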
2023-08-25T00:12:11
https://www.reddit.com/r/LocalLLaMA/comments/160j3xa/are_the_quantized_models_really_that_poor_in/
anasp1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160j3xa
false
null
t3_160j3xa
/r/LocalLLaMA/comments/160j3xa/are_the_quantized_models_really_that_poor_in/
false
false
self
5
{'enabled': False, 'images': [{'id': 'fAG4zst1QoQF-0DbpD3YM-aRm8pfilCza0mzFStV5IU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?width=108&crop=smart&auto=webp&s=1dcf1261f1a07357e4e0d20216ed6663a96178b6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?width=216&crop=smart&auto=webp&s=c45b8ecbdc466f47303a7e50de8c16f54528efb5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?width=320&crop=smart&auto=webp&s=cd85383183aa95f53d12dd3229fd8a65bb1a4a5a', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/aBjXkFvSgH3zCsZ9CcWmCAbPxGcEK-a5SEYEjVfnUQM.jpg?auto=webp&s=3cd3db6c448729f2e5d52945143edf66c7e851c0', 'width': 600}, 'variants': {}}]}
Fine tuning one of the Llama models
1
Would it be fair to say that if my current setup gives me N tokens/sec on a model, then I can finetune it at the rate of ~N/2 tokens/sec on the same setup? (Assume it's just CPUs).
2023-08-25T00:06:38
https://www.reddit.com/r/LocalLLaMA/comments/160iz0g/fine_tuning_one_of_the_llama_models/
ispeakdatruf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160iz0g
false
null
t3_160iz0g
/r/LocalLLaMA/comments/160iz0g/fine_tuning_one_of_the_llama_models/
false
false
self
1
null
Code LLaMA is now on Perplexity’s LLaMa Chat!
1
[deleted]
2023-08-24T23:08:55
https://twitter.com/perplexity_ai/status/1694845231936557437
eunumseioquescrever
twitter.com
1970-01-01T00:00:00
0
{}
160hig9
false
{'oembed': {'author_name': 'Perplexity', 'author_url': 'https://twitter.com/perplexity_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Code LLaMA is now on Perplexity’s LLaMa Chat!<br><br>Try asking it to write a function for you, or explain a code snippet: 🔗 <a href="https://t.co/gyiDw6u6IJ">https://t.co/gyiDw6u6IJ</a><br><br>This is the fastest way to try <a href="https://twitter.com/MetaAI?ref_src=twsrc%5Etfw">@MetaAI</a>’s latest code-specialized LLM. With our model deployment expertise, we are able to provide you… <a href="https://t.co/hX90QulMz4">pic.twitter.com/hX90QulMz4</a></p>&mdash; Perplexity (@perplexity_ai) <a href="https://twitter.com/perplexity_ai/status/1694845231936557437?ref_src=twsrc%5Etfw">August 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/perplexity_ai/status/1694845231936557437', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_160hig9
/r/LocalLLaMA/comments/160hig9/code_llama_is_now_on_perplexitys_llama_chat/
false
false
https://b.thumbs.redditm…rFm9EyiK3L7A.jpg
1
{'enabled': False, 'images': [{'id': 'AR0k3LyN_yBcJu6lkqVkXktnblOzaGz8TOsP7zGZqOE', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/SKtrgVuDF4LjSVYgXhHLONGYuNQvNarDBfdHBv9LUbc.jpg?width=108&crop=smart&auto=webp&s=544a8a810b78090341e2c47edfd71e904ace52a6', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/SKtrgVuDF4LjSVYgXhHLONGYuNQvNarDBfdHBv9LUbc.jpg?auto=webp&s=dede2c8e4d9916f9ed37d5e26cd0db3c9224906f', 'width': 140}, 'variants': {}}]}
Llama finetuning question
11
Is there a good resource that will 1) explain to me what these values are, and 2) recommend good values based on 1) my hardware and 2) my dataset?

    model_name = "meta-llama/Llama-2-7b-chat-hf"
    dataset_name = "./train.jsonl"
    new_model = "llama-2-7b-custom"
    lora_r = 64
    lora_alpha = 16
    lora_dropout = 0.1
    use_4bit = True
    bnb_4bit_compute_dtype = "float16"
    bnb_4bit_quant_type = "nf4"
    use_nested_quant = False
    output_dir = "./results"
    num_train_epochs = 1
    fp16 = False
    bf16 = False
    per_device_train_batch_size = 4
    per_device_eval_batch_size = 4
    gradient_accumulation_steps = 1
    gradient_checkpointing = True
    max_grad_norm = 0.3
    learning_rate = 2e-4
    weight_decay = 0.001
    optim = "paged_adamw_32bit"
    lr_scheduler_type = "constant"
    max_steps = -1
    warmup_ratio = 0.03
    group_by_length = True
    save_steps = 25
    logging_steps = 5
    max_seq_length = None
    packing = False
    device_map = {"": 0}
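For reference, a brief annotated pass over the most consequential knobs in that config (general QLoRA guidance, not values tuned for any particular dataset):

    # What the main values control in a QLoRA fine-tune of Llama-2-7b-chat:
    lora_r = 64            # LoRA rank: adapter capacity; 8-64 is typical, higher = more VRAM
    lora_alpha = 16        # LoRA scaling; the effective scale is alpha / r
    lora_dropout = 0.1     # regularization applied to the adapter layers
    use_4bit = True        # load the base model in 4-bit (QLoRA) so it fits small GPUs
    bnb_4bit_quant_type = "nf4"        # NormalFloat4, the QLoRA paper's recommended type
    per_device_train_batch_size = 4    # raise/lower to fit VRAM...
    gradient_accumulation_steps = 1    # ...their product is the effective batch size
    gradient_checkpointing = True      # trades extra compute for a big activation-memory saving
    learning_rate = 2e-4   # a common starting point for LoRA adapters
    max_steps = -1         # -1 means "run num_train_epochs instead of a fixed step budget"
    device_map = {"": 0}   # place the whole model on GPU 0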
2023-08-24T21:21:49
https://www.reddit.com/r/LocalLLaMA/comments/160em6a/llama_finetuning_question/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160em6a
false
null
t3_160em6a
/r/LocalLLaMA/comments/160em6a/llama_finetuning_question/
false
false
self
11
null
We could have gotten something almost as good as GPT4 for coding...
108
[https://twitter.com/garybasin/status/1694735409287233578?t=JsnswieBAgTGXmwY86qrhg&s=19](https://twitter.com/garybasin/status/1694735409287233578?t=JsnswieBAgTGXmwY86qrhg&s=19) But they decided to not release it... https://preview.redd.it/rq1szgxrk4kb1.png?width=896&format=png&auto=webp&s=f2d9e9aa459c82de20eab712bdf66e2e933685a0
2023-08-24T21:21:19
https://www.reddit.com/r/LocalLLaMA/comments/160elof/we_could_have_gotten_something_almost_as_good_as/
Wonderful_Ad_5134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160elof
false
{'oembed': {'author_name': 'Gary Basin 🍍', 'author_url': 'https://twitter.com/garybasin', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">They don&#39;t want you to know that synthetic data is the future.<br><br>LLMs generating synthetic data to train on drives a huuuge boost in &quot;unnatural&quot; code llama -- the one model they aren&#39;t releasing. Surpasses gpt-3.5 and gets close to gpt-4 performance on a 34B model <a href="https://t.co/NdB6Or6mhi">pic.twitter.com/NdB6Or6mhi</a></p>&mdash; Gary Basin 🍍 (@garybasin) <a href="https://twitter.com/garybasin/status/1694735409287233578?ref_src=twsrc%5Etfw">August 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/garybasin/status/1694735409287233578', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_160elof
/r/LocalLLaMA/comments/160elof/we_could_have_gotten_something_almost_as_good_as/
false
false
https://b.thumbs.redditm…bT49Vyff9JnE.jpg
108
{'enabled': False, 'images': [{'id': 'O_dgt0qW5CvbhrfMq_E6TNvcugq_JjNAktxGBVM9ESM', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/wcvG0usKjZnoF-Jt9ljufXlUYiHUG8pSANv5Z3L2Og4.jpg?width=108&crop=smart&auto=webp&s=363a78f7f256dc78f3d1515ce49af651753a6e61', 'width': 108}], 'source': {'height': 111, 'url': 'https://external-preview.redd.it/wcvG0usKjZnoF-Jt9ljufXlUYiHUG8pSANv5Z3L2Og4.jpg?auto=webp&s=e556b679409abc8e55e5ab2ae68e05de491d3bff', 'width': 140}, 'variants': {}}]}
Why is "lmsys/vicuna-13b-v1.5" giving chinease answers all the time
1
[removed]
2023-08-24T20:37:41
https://www.reddit.com/r/LocalLLaMA/comments/160dfuh/why_is_lmsysvicuna13bv15_giving_chinease_answers/
skeletons_of_closet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160dfuh
false
null
t3_160dfuh
/r/LocalLLaMA/comments/160dfuh/why_is_lmsysvicuna13bv15_giving_chinease_answers/
false
false
https://b.thumbs.redditm…_-1Wv-qAZynY.jpg
1
null
Help with install
1
Help
2023-08-24T20:29:37
https://www.reddit.com/r/LocalLLaMA/comments/160d806/help_with_install/
LearnOnnReddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160d806
false
null
t3_160d806
/r/LocalLLaMA/comments/160d806/help_with_install/
false
false
self
1
null
Not sure why the latest vicuna 13b version 1.5 is not working with this code , it just gives a black response and takes a long time to load
1
[removed]
2023-08-24T20:14:00
https://www.reddit.com/r/LocalLLaMA/comments/160ctex/not_sure_why_the_latest_vicuna_13b_version_15_is/
skeletons_of_closet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160ctex
false
null
t3_160ctex
/r/LocalLLaMA/comments/160ctex/not_sure_why_the_latest_vicuna_13b_version_15_is/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
Fine-tuning GPT
1
[removed]
2023-08-24T19:51:00
https://www.reddit.com/r/LocalLLaMA/comments/160c7va/finetuning_gpt/
heswithjesus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160c7va
false
null
t3_160c7va
/r/LocalLLaMA/comments/160c7va/finetuning_gpt/
false
false
self
1
null
Easiest way to LoRa finetune LLama2 7B on 8 A100?
10
Hi guys. For some experiments I'm looking to fine-tune Llama-2 7B using 8x A100 on RunPod, since I have some time constraints. One entry of the data is pretty big, about 2-4k tokens, which is why I'm looking for the most resource-friendly way. Which would that be? Should I use QLoRA with 4-bit? Does anyone have experience with this? Is the [xTuring](https://github.com/stochasticai/xTuring) library my best bet? If someone has more resources on this topic, I'd appreciate it.
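A minimal QLoRA loading sketch for this kind of setup, assuming the Hugging Face transformers + peft + bitsandbytes stack; the LoRA hyperparameters are illustrative, not tuned:

    # QLoRA: 4-bit base model sharded across the available GPUs, LoRA adapters on top.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        quantization_config=bnb,
        device_map="auto",          # spread layers over the 8 A100s
    )
    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapters train; the 4-bit base is frozen

With 2-4k-token entries, also make sure the trainer's max_seq_length covers your longest example rather than silently truncating it.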
2023-08-24T19:48:14
https://www.reddit.com/r/LocalLLaMA/comments/160c571/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
Single_Prior_704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160c571
false
null
t3_160c571
/r/LocalLLaMA/comments/160c571/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
false
false
self
10
{'enabled': False, 'images': [{'id': 'BuMvNeLVUDAwg4OrZlAdIktP-9azOriK5S1eIryToD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=108&crop=smart&auto=webp&s=f8d25c9a7c4af3403e5df3f3b6bee22f27ab67f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=216&crop=smart&auto=webp&s=8c856460407dc8e3b92d4447ffde4a65eadf783f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=320&crop=smart&auto=webp&s=1b80f701ff615f5b4e33ccd23f151428e65054f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=640&crop=smart&auto=webp&s=8a88b6a358a50f899a5d93eadeec2b28d0dc9b4c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=960&crop=smart&auto=webp&s=8164273b7c65cfeb083de29fe280d332a7926951', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=1080&crop=smart&auto=webp&s=2471fbd659680ea1b99a94d265fc6a3b64406aa4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?auto=webp&s=e59f4298135bee53e21d9876802d9f07eeba3e2f', 'width': 1200}, 'variants': {}}]}
Easiest way to LoRa finetune LLama2 7B on 8 A100?
1
[removed]
2023-08-24T19:44:59
https://www.reddit.com/r/LocalLLaMA/comments/160c21l/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
Unfair-Permit5904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160c21l
false
null
t3_160c21l
/r/LocalLLaMA/comments/160c21l/easiest_way_to_lora_finetune_llama2_7b_on_8_a100/
false
false
default
1
{'enabled': False, 'images': [{'id': 'BuMvNeLVUDAwg4OrZlAdIktP-9azOriK5S1eIryToD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=108&crop=smart&auto=webp&s=f8d25c9a7c4af3403e5df3f3b6bee22f27ab67f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=216&crop=smart&auto=webp&s=8c856460407dc8e3b92d4447ffde4a65eadf783f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=320&crop=smart&auto=webp&s=1b80f701ff615f5b4e33ccd23f151428e65054f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=640&crop=smart&auto=webp&s=8a88b6a358a50f899a5d93eadeec2b28d0dc9b4c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=960&crop=smart&auto=webp&s=8164273b7c65cfeb083de29fe280d332a7926951', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?width=1080&crop=smart&auto=webp&s=2471fbd659680ea1b99a94d265fc6a3b64406aa4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vbD8dV9EO8bqOMsSvOiVqWUgFE9yOKhtm-1_qMoP8fQ.jpg?auto=webp&s=e59f4298135bee53e21d9876802d9f07eeba3e2f', 'width': 1200}, 'variants': {}}]}
Llama 2 - Vicuna 13b 16k - context size
3
So I am doing QA on documents. I created embeddings with a chunk size of 4000 plus 600 overlap. When I limit the number of source documents (param k) to 2, everything works fine. If I move that up to 3, then I either get an empty answer or only a few letters of the first word. To my understanding, 3 documents of 5200 characters should be around 5610 tokens (calculated with the OpenAI tokenizer). Is my calculation wrong, or am I doing something wrong in the code?
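Character-based estimates drift a lot, and the OpenAI tokenizer is not the one Vicuna uses; counting real tokens with the model's own tokenizer settles whether the 16k window is actually being blown. A sketch, assuming the 16k v1.5 Vicuna checkpoint on Hugging Face; the chunk strings are placeholders:

    # Count real context tokens instead of estimating from characters.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.5-16k")
    chunks = ["first retrieved chunk ...", "second ...", "third ..."]  # your k chunks
    total = sum(len(tok.encode(c)) for c in chunks)
    print(total, "tokens before the question and prompt template are added")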
2023-08-24T19:11:50
https://www.reddit.com/r/LocalLLaMA/comments/160b6mi/llama_2_vicuna_13b_16k_context_size/
Kukaracax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160b6mi
false
null
t3_160b6mi
/r/LocalLLaMA/comments/160b6mi/llama_2_vicuna_13b_16k_context_size/
false
false
self
3
null
How about I TEACH you how to get ANY LLaMA model up and running
1
Hey folks - Been reading all of your posts for over a year now and I think it's high time I offer a service to those wanting to get **llama2, llama2-chat or llamacode** running and accessible from the cloud FAST. For a $100 flat fee, I will spend 1 hour with you on webcam to get you up and running with any model of your liking on a new AWS or LambdaLabs instance (fp16, quantized, etc.). The cloud architecture + setup I use offers extremely fast inference throughput (tok/sec) and is configured to your needs. I will provide detailed instructions + a recording of the screenshare after our session where I help you launch your LLaMA inference cloud instance. Let me accelerate the learning curve easily by 3 months in one session. Doesn't matter if you are running on Windows/Linux/Mac. DM me if interested. Satisfaction guaranteed or $$$ back. :) Can discuss your needs free of charge. Background: MS/BS Aerospace Engineering; MBA; ML Expert
2023-08-24T18:36:47
https://www.reddit.com/r/LocalLLaMA/comments/160a90g/how_about_i_teach_you_how_to_get_any_llama_model/
No_Joke5137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160a90g
true
null
t3_160a90g
/r/LocalLLaMA/comments/160a90g/how_about_i_teach_you_how_to_get_any_llama_model/
false
false
self
1
null
Anyone train a model on all the news happening here for learning?
2
I've been trying to get up to speed on all the happenings here, and it's been a little difficult finding and understanding all the new information. Previously I could understand complex topics by using GPT-4, but a lot of the new terms here (GPTQ, LoRA) are too new for it. Has anyone put together a knowledge base and model for someone to easily learn all the new terms and happenings? If not, it could be a cool project.
2023-08-24T18:00:50
https://www.reddit.com/r/LocalLLaMA/comments/1609a6f/anyone_train_a_model_on_all_the_news_happening/
sorbitals
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1609a6f
false
null
t3_1609a6f
/r/LocalLLaMA/comments/1609a6f/anyone_train_a_model_on_all_the_news_happening/
false
false
self
2
null
Seeking Advice on Structured Output and Fine-Tuning for Real Estate Description Parsing
1
[removed]
2023-08-24T17:59:05
https://www.reddit.com/r/LocalLLaMA/comments/16098c4/seeking_advice_on_structured_output_and/
Sneackybae
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16098c4
false
null
t3_16098c4
/r/LocalLLaMA/comments/16098c4/seeking_advice_on_structured_output_and/
false
false
self
1
null
Programmatically using llama.cpp.
2
For my project, I'm trying to use llama.cpp from my scripts/system (in Node.js). There are 3 Node.js libraries for llama.cpp on npm: all 3 fail to build (this is what you get when things are this fresh/recent, I guess), and they also don't have all the options the command-line llama.cpp "main" program does (like grammars and many others).

So I've been using llama-cpp-python's server: `python3 -m llama_cpp.server`. This works and can be accessed as if it were the OpenAI API. The problem is that there, too, I don't have all the command-line options llama.cpp's main or server does.

My next idea was to run llama.cpp's server and then use an HTTP client to "talk" to it, make requests, and get replies. But I couldn't get that to work; it's far from straightforward how to make the requests and where/how to get the replies.

This leaves one option I can see: from a Node.js script, I execute llama.cpp's main binary, somehow "wait" for some specific stdout output, then send my prompt over stdin (not sure how to pass the system prompt there); it outputs the result, and from my script I somehow read this, figure out when it's done sending, and move on. I have no idea how to do this, but I suspect in theory it might be possible? It's a bit nasty, but maybe it'd work?

Another, similar option is to pass the prompt/system prompt over the command line (as an option or a file), get main to exit when it's done outputting the result, and parse everything in a way that gets me what I'm interested in. I've tried both approaches without much success.

My question for the community: has anyone here done this? Have you had any success? Do you maybe have some code you'd want to share? If I manage to get this to work, I'll make and publish an npm module from it so it's shared with the community. Any help would be super welcome, or any other idea of how to do this. Cheers!
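For the HTTP route: llama.cpp's bundled server example exposes a JSON completion endpoint, which avoids the stdin/stdout gymnastics entirely. A sketch of the request shape, shown in Python for brevity (the same POST works from Node); the field names match the mid-2023 server example, so verify them against your build:

    # Talk to llama.cpp's ./server over HTTP. Start it first, e.g.:
    #   ./server -m model.bin -c 2048 --port 8080
    import requests

    resp = requests.post(
        "http://127.0.0.1:8080/completion",
        json={
            "prompt": "### System: Be concise.\n### User: Hello!\n### Assistant:",
            "n_predict": 128,      # max tokens to generate
            "temperature": 0.7,
        },
    )
    print(resp.json()["content"])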
2023-08-24T17:30:03
https://www.reddit.com/r/LocalLLaMA/comments/1608fjd/programmatically_using_llamacpp/
arthurwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1608fjd
false
null
t3_1608fjd
/r/LocalLLaMA/comments/1608fjd/programmatically_using_llamacpp/
false
false
self
2
null
Nous-Hermes-Llama2-70b and Nous-Puffin-70B is out!
58
Here are the links: * [https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b) * [https://huggingface.co/NousResearch/Nous-Puffin-70B](https://huggingface.co/NousResearch/Nous-Puffin-70B) What's Puffin doing here when everyone's been praising Hermes? Because of the description. Hermes was trained on one-shot instructions, while Puffin was trained on multi-turn conversations, so if you want a long chat, Puffin might work better. Same authors. At least some of the quantizations are already up by TheBloke. However, be careful with the new GGUF format as for example text-generation-webui doesn't seem to work with it yet. (I learned that the hard way.)
2023-08-24T16:56:31
https://www.reddit.com/r/LocalLLaMA/comments/1607hez/noushermesllama270b_and_nouspuffin70b_is_out/
whtne047htnb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1607hez
false
null
t3_1607hez
/r/LocalLLaMA/comments/1607hez/noushermesllama270b_and_nouspuffin70b_is_out/
false
false
self
58
{'enabled': False, 'images': [{'id': 'oymRYNcwScnZI2jtAS181KENxhSyq-c5fgaek5IEQes', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=108&crop=smart&auto=webp&s=9af61dc60b966f263821c4f49c6db9c5ce0748ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=216&crop=smart&auto=webp&s=29ad1639f933e58b9e3ff42731017fb5cc04d359', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=320&crop=smart&auto=webp&s=d6d018f89267898405d315d4dc1797e2cafac493', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=640&crop=smart&auto=webp&s=2bd6ed8688dc8cdffb0d48fca1badd5b1182493e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=960&crop=smart&auto=webp&s=2eeec24bd5f271bf48e4579bcfca3d941bf106db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?width=1080&crop=smart&auto=webp&s=89222cf8fcd9940edbb69cde3a81775f9f8d67ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UaAcH0mRj6koiWC4yyEoEe1eePT9X5lXO-l3MX1r_EI.jpg?auto=webp&s=701ec81ee3a92803e210302b2bfb7c6728ddff33', 'width': 1200}, 'variants': {}}]}
Recommendations for OSS LLMs that can run on a 4090 for tens of thousands of summarizations (125-3000 tokens)? What is the most accurate models you've found?
30
I've tried a bunch of 7/13/30B models and got horrible results: 50-75% error rates with tons of hallucinations. I need concise summarization, but I can't lose key details or context. This is going to be used for vector retrieval, so if you have any other recommendations, I'd greatly appreciate them. No politics please... it's just simple summarization, no need for debate.
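At tens of thousands of documents on one 4090, the usual shape is a map-reduce pass: summarize chunks, then summarize the summaries. A sketch around llama-cpp-python; the model file, prompt wording, and chunk size are assumptions to adapt:

    # Map-reduce summarization sketch.
    from llama_cpp import Llama

    llm = Llama(model_path="model.q4_K_M.gguf", n_ctx=4096, n_gpu_layers=35)

    def summarize(text: str) -> str:
        prompt = f"[INST] Summarize concisely, keeping every key fact:\n{text} [/INST]"
        return llm(prompt, max_tokens=300, temperature=0.1)["choices"][0]["text"]

    def summarize_long(doc: str, chunk_chars: int = 6000) -> str:
        chunks = [doc[i:i + chunk_chars] for i in range(0, len(doc), chunk_chars)]
        partials = [summarize(c) for c in chunks]
        return partials[0] if len(partials) == 1 else summarize("\n".join(partials))

Low temperature matters here: sampling variance is a major source of the invented details you're seeing.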
2023-08-24T16:39:49
https://www.reddit.com/r/LocalLLaMA/comments/160717m/recommendations_for_oss_llms_that_can_run_on_a/
Tiny_Arugula_5648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160717m
false
null
t3_160717m
/r/LocalLLaMA/comments/160717m/recommendations_for_oss_llms_that_can_run_on_a/
false
false
self
30
null
Searching for repository of text-embedded database
1
[removed]
2023-08-24T16:36:40
https://www.reddit.com/r/LocalLLaMA/comments/1606y9h/searching_for_repository_of_textembedded_database/
Natural_Speaker7954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1606y9h
false
null
t3_1606y9h
/r/LocalLLaMA/comments/1606y9h/searching_for_repository_of_textembedded_database/
false
false
self
1
null
How to access text-generation-webui from the lan without gradio.live ?
1
Hello, I'm trying in vain to access oobabooga's text-generation-webui from the computers on my LAN. I know I have to use the --listen argument, but I don't know where. I've tried it as an argument to start_linux.sh and webui.py, but it doesn't work. Where should I put it? Thanks for your help.
2023-08-24T16:34:57
https://www.reddit.com/r/LocalLLaMA/comments/1606wn7/how_to_access_textgenerationwebui_from_the_lan/
Bogdahnfr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1606wn7
false
null
t3_1606wn7
/r/LocalLLaMA/comments/1606wn7/how_to_access_textgenerationwebui_from_the_lan/
false
false
self
1
null
llama-2 model AutoModelForSequenceClassification finetune with custom data and save on local
2
Hi, following are the steps I have followed to fine-tune the model. The model gets saved successfully, but when I load it I get the following error:

**Error**: EnvironmentError(OSError: results/llama2-classification/final_checkpoint does not appear to have a file named config.json)

Basic code:

1. Load the model:

        model = AutoModelForSequenceClassification.from_pretrained(
            model_name,
            quantization_config=bnb_config,
            device_map='auto',  # dispatch the model efficiently on the available resources
            num_labels=5
        )
        tokenizer = AutoTokenizer.from_pretrained(model_name)

2. Training:

        trainer = SFTTrainer(
            model=model,
            train_dataset=dataset,
            max_seq_length=1,
            dataset_text_field='sentence',
            args=TrainingArguments(
                output_dir='./llm2results/output',
                logging_dir='./llm2logs/output',
                learning_rate=2e-4,
                per_device_train_batch_size=1,
                gradient_accumulation_steps=4,
                num_train_epochs=1,
                save_total_limit=3,
                fp16=True,
                logging_steps=1,
                max_steps=20,
                optim="paged_adamw_8bit",
                lr_scheduler_type="cosine",
                warmup_ratio=0.06,
            ),
            peft_config=peft_config,
        )

3. Save the model:

        trainer.model.save_pretrained(output_dir)

4. Load the model from the local dir path:

        model = AutoModelForSequenceClassification.from_pretrained(output_dir)

...and I got the error mentioned above.
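The error is consistent with the save step writing only a PEFT adapter (adapter weights plus adapter_config.json, but no config.json), which the plain from_pretrained loader cannot consume. A loading sketch under that assumption:

    # Load the base model first, then attach the saved LoRA adapter.
    from transformers import AutoModelForSequenceClassification
    from peft import PeftModel

    base = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)
    model = PeftModel.from_pretrained(base, output_dir)  # dir written by trainer.model.save_pretrained

    # Or bake the adapter into the base weights and save a standalone checkpoint:
    merged = model.merge_and_unload()
    merged.save_pretrained("results/llama2-classification/merged")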
2023-08-24T16:31:10
https://www.reddit.com/r/LocalLLaMA/comments/1606t04/llama2_model_automodelforsequenceclassification/
Satya8870
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1606t04
false
null
t3_1606t04
/r/LocalLLaMA/comments/1606t04/llama2_model_automodelforsequenceclassification/
false
false
self
2
null
Samantha x ChatGPT fine tune
22
2023-08-24T16:28:30
https://i.redd.it/2zj57z3j43kb1.jpg
sardoa11
i.redd.it
1970-01-01T00:00:00
0
{}
1606qhv
false
null
t3_1606qhv
/r/LocalLLaMA/comments/1606qhv/samantha_x_chatgpt_fine_tune/
false
false
https://b.thumbs.redditm…O2gP3RgawtHc.jpg
22
{'enabled': True, 'images': [{'id': 'ZU-HlgtenIipbLCIXeTweM2AzvVr4jrKtsY1TABrFQ0', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=108&crop=smart&auto=webp&s=ad042abd753b3a890b34585c863b2cedb977fcbc', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=216&crop=smart&auto=webp&s=37fd6fdb9ac034cd535d5f7e2a04d8604d5710a7', 'width': 216}, {'height': 269, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=320&crop=smart&auto=webp&s=744e7c26557b7e639c33410936eb607d34c05a14', 'width': 320}, {'height': 539, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=640&crop=smart&auto=webp&s=77b4a29b13de434c8ef4e451ab254dff63f8a2c6', 'width': 640}, {'height': 809, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=960&crop=smart&auto=webp&s=f02a850052fe3b60baff272199cf9c6a14435a3a', 'width': 960}, {'height': 910, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?width=1080&crop=smart&auto=webp&s=1348a1ba68417d99aed48e94a6a051a26c70a63b', 'width': 1080}], 'source': {'height': 1436, 'url': 'https://preview.redd.it/2zj57z3j43kb1.jpg?auto=webp&s=7d6a7174173019b5470e8521a3acdbd1700b76ac', 'width': 1704}, 'variants': {}}]}
build a dl setup to run llm models (llama 2)
1
[removed]
2023-08-24T16:26:50
https://www.reddit.com/r/LocalLLaMA/comments/1606owu/build_a_dl_setup_to_run_llm_models_llama_2/
llmexpert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1606owu
false
null
t3_1606owu
/r/LocalLLaMA/comments/1606owu/build_a_dl_setup_to_run_llm_models_llama_2/
false
false
self
1
null
openorca-platypus2 ggmlv not able to utilize Nvidia RTX3090, with Llama.cpp
1
I'm running openorca-platypus2 GGML with llama.cpp on my ROG M16; while running, it utilizes 100% CPU (i9-11900H) only, even though I'm using the --gpu-layers option. I've tried installing the Nvidia CUDA toolkit, yet no difference. Can somebody please guide me through this? P.S.: Before posting this, I tried to sort out the issue in various ways. I'm new to all these concepts, so please be lenient with me.
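--gpu-layers only has an effect when the binary was built with GPU support; a CPU-only build silently ignores it, which matches the symptom. A sketch of the same check via llama-cpp-python; the build flag and model filename are assumptions for this setup:

    # Assumption: the package must be (re)installed with cuBLAS enabled, e.g.
    #   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="openorca-platypus2-13b.ggmlv3.q4_0.bin",  # placeholder filename
        n_gpu_layers=40,   # layers offloaded to the GPU; the load log should confirm it
    )
    print(llm("Q: Name the planets. A:", max_tokens=32)["choices"][0]["text"])

If you build llama.cpp itself, the equivalent is compiling with the cuBLAS option enabled before using --gpu-layers.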
2023-08-24T16:19:15
https://www.reddit.com/r/LocalLLaMA/comments/1606hde/openorcaplatypus2_ggmlv_not_able_to_utilize/
BlTUSER
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1606hde
false
null
t3_1606hde
/r/LocalLLaMA/comments/1606hde/openorcaplatypus2_ggmlv_not_able_to_utilize/
false
false
self
1
null
LoRA training Local LLM using Obbabooga with 8gb VRAM
7
Has anyone had any success training a LoRA for a local LLM using Oobabooga with a paltry 8GB of VRAM? I've tried training the following models:

* Neko-Institute-of-Science_LLaMA-7B-4bit-128g
* TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ

I can run them fine (inference), but training them, not so much. I have run into so many problems with the monkeypatch (fix?), too numerous to count. I have read [https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) multiple times with no success.

Thanks!
2023-08-24T16:15:27
https://www.reddit.com/r/LocalLLaMA/comments/1606dg6/lora_training_local_llm_using_obbabooga_with_8gb/
skeletorino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1606dg6
false
null
t3_1606dg6
/r/LocalLLaMA/comments/1606dg6/lora_training_local_llm_using_obbabooga_with_8gb/
false
false
self
7
null
What's a good uncensored local Ai I can run on my Linux machine?
27
Hi all, I'm new to this AI surge, but I like the idea of running it on my desktop computer. I only built it last month, with higher-end components for gaming, so it should be able to handle it without too much difficulty. I've heard of something called GPT4All, but it's based on LLaMA. Is that one any good, or can I do better? I've also heard of wizard-something-or-another... Any recommendations would be greatly appreciated!
2023-08-24T16:07:21
https://www.reddit.com/r/LocalLLaMA/comments/16065mz/whats_a_good_uncensored_local_ai_i_can_run_on_my/
rondonjohnald
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16065mz
false
null
t3_16065mz
/r/LocalLLaMA/comments/16065mz/whats_a_good_uncensored_local_ai_i_can_run_on_my/
false
false
self
27
null
What models will suit my computer specs?
1
AMD Ryzen 7 2700X, 16GB RAM, NVIDIA GeForce GTX 1060 with 6GB VRAM.
2023-08-24T15:57:36
https://www.reddit.com/r/LocalLLaMA/comments/1605vrx/what_models_will_suit_my_computer_specs/
Brarblaze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1605vrx
false
null
t3_1605vrx
/r/LocalLLaMA/comments/1605vrx/what_models_will_suit_my_computer_specs/
false
false
self
1
null
how to clone a space from huggingface
1
[removed]
2023-08-24T14:59:26
https://www.reddit.com/r/LocalLLaMA/comments/1604bae/how_to_clone_a_space_from_huggingface/
allnc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1604bae
false
null
t3_1604bae
/r/LocalLLaMA/comments/1604bae/how_to_clone_a_space_from_huggingface/
false
false
self
1
{'enabled': False, 'images': [{'id': '5YyEGZm2jC0-mkhBa9c-xWcRwSxmFGSeri-W_n2-Biw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=108&crop=smart&auto=webp&s=23c646d6d860d83247e2f47af11ea4bacc43d969', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=216&crop=smart&auto=webp&s=3b3220acb62de8adeb9b7c29675c4861017e3059', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=320&crop=smart&auto=webp&s=34ad33fd1eb00d82eb3c8a4ae820eec09eba5656', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=640&crop=smart&auto=webp&s=cacf184b306984cae63b1f4d2608c595189825ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=960&crop=smart&auto=webp&s=fd4eef4b11d61728dda3216ee2d5dbd3e9bf048a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?width=1080&crop=smart&auto=webp&s=a0bae263f3cb60d318ad9359b35badf2002350b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wkzDZUufvaNmeQKYlx2KlHQaZZfLxeaS0XWYmkuY71o.jpg?auto=webp&s=ec56d1a02d61683903693ef35aa4edfa0e3897e1', 'width': 1200}, 'variants': {}}]}
Code Llama Released
425
https://github.com/facebookresearch/codellama
2023-08-24T13:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1601xk4/code_llama_released/
FoamythePuppy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1601xk4
false
null
t3_1601xk4
/r/LocalLLaMA/comments/1601xk4/code_llama_released/
false
false
self
425
{'enabled': False, 'images': [{'id': 'Na6nXLQe20G26kPIesr7oeh8pOhxV8_slXxPh_GWTUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=108&crop=smart&auto=webp&s=d05b5405ae1c095a2a957a92e6428f703c7d587b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=216&crop=smart&auto=webp&s=bf2a25de1e81efb7d2e7cb60035e9232421b0e70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=320&crop=smart&auto=webp&s=9ffd5b20fc1bf6035f8fb4d9625d6b70c3269efa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=640&crop=smart&auto=webp&s=c18133fd3079c5575a293cc9a29095c8b226d135', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=960&crop=smart&auto=webp&s=60eb3e52a8f04390ba2c15fd32fb42b10b827a4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?width=1080&crop=smart&auto=webp&s=076843b974aac99c3b4f770983b49e21f1120535', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7AulBGRo6ZQN3M_-PH0Qlq35XwZysy411aNq27qp8Iw.jpg?auto=webp&s=ef6db3bb620349fbee7749e0b9d463670d5f005f', 'width': 1200}, 'variants': {}}]}
Optimizing Response Speed from LLaMa and Hosting Strategy for a Multi-layered App Architecture on AWS EC2
5
I am looking forward to your suggestions so that I can implement my first-ever project. I'd greatly appreciate your help.

**Challenge 1: Speeding Up Model Responses**

In the backend of my iOS app, I have a C# API that produces results. To control and change how the app shows these results, I added a Flask server in between, where I'm using LLaMA to rewrite the content. It takes a long time (around 8 minutes) for LLaMA to give an answer. How can I keep LLaMA loaded all the time so it can answer quickly when we ask questions? Also, the responses from LLaMA follow a different pattern each time, forcing me to think about a mechanism to filter the response so that it is presentable to the user. Any ideas? I will keep trying different questions until then.

**Challenge 2: AWS EC2 Hosting Strategy**

I'm planning to host both the Flask middleware and the C# backend on an Amazon AWS EC2 server, leveraging the free tier as well as the cloud capabilities. I anticipate needing around 8GB of space (which is the limit on the free tier), of which 7GB would be allocated to the LLaMA model. However, I'm keen to optimize this setup for efficient resource utilization. Are there any recommendations or best practices you could share for configuring my AWS EC2 instance to ensure optimal performance for both my middleware and backend? I am sorry for my naive questions.
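On Challenge 1, the biggest single win is usually loading the model once at process startup so every request reuses it, instead of reloading it per call. A sketch assuming llama-cpp-python inside the Flask middleware; the route name and model file are placeholders:

    # Load once at import time; each request then reuses the resident model.
    from flask import Flask, request, jsonify
    from llama_cpp import Llama

    app = Flask(__name__)
    llm = Llama(model_path="llama-2-7b-chat.q4_0.bin", n_ctx=2048)  # loaded once

    @app.route("/rewrite", methods=["POST"])
    def rewrite():
        text = request.json["text"]
        prompt = f"[INST] Rewrite the following for end users:\n{text} [/INST]"
        out = llm(prompt, max_tokens=256, temperature=0.3)
        return jsonify({"rewritten": out["choices"][0]["text"].strip()})

On Challenge 2, note that the EC2 free tier's micro instances come with 1 GB of RAM, far below what a ~7 GB model needs resident in memory, so the LLM itself will need a bigger (paid) instance or a GPU host.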
2023-08-24T13:11:18
https://www.reddit.com/r/LocalLLaMA/comments/1601kny/optimizing_response_speed_from_llama_and_hosting/
JapaniRobot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1601kny
false
null
t3_1601kny
/r/LocalLLaMA/comments/1601kny/optimizing_response_speed_from_llama_and_hosting/
false
false
self
5
null
What is llama.cpp?
3
It appears that several people use this package, which is available on GitHub. I'm not sure if it has added benefits over using a HuggingFace llama2 model directly. Does llama.cpp serve specific purposes?
2023-08-24T12:16:33
https://www.reddit.com/r/LocalLLaMA/comments/1600b3v/what_is_llamacpp/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1600b3v
false
null
t3_1600b3v
/r/LocalLLaMA/comments/1600b3v/what_is_llamacpp/
false
false
self
3
null
Using llama to create business simulation software
1
I'm looking for a guide or some examples of using open-source LLaMA models to create open-source business simulation software, in a non-gaming style like the free-access "Sim Companies" web game, but as an open-source version instead.
2023-08-24T12:07:28
https://www.reddit.com/r/LocalLLaMA/comments/160045b/using_llama_to_create_business_simulation_software/
qwani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160045b
false
null
t3_160045b
/r/LocalLLaMA/comments/160045b/using_llama_to_create_business_simulation_software/
false
false
self
1
null
Is there any opensource AI Model like Ada which accept tokens up to 8000 to create embedding?
1
[removed]
2023-08-24T11:39:56
https://www.reddit.com/r/LocalLLaMA/comments/15zziz0/is_there_any_opensource_ai_model_like_ada_which/
headphonesproreview
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zziz0
false
null
t3_15zziz0
/r/LocalLLaMA/comments/15zziz0/is_there_any_opensource_ai_model_like_ada_which/
true
false
default
1
null
llama2 quantized model vs. regular one: What's the difference?
62
Why is it that several folks use quantized models provided by TheBloke, for instance, in place of the regular models provided by Meta on HuggingFace? Do the 8- or 4-bit quantized 13B models work better than the 7B or 13B unquantized models by Meta?
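The usual motivation is memory, and the arithmetic is short: weights cost about 2 bytes per parameter in fp16, 1 byte at 8-bit, and roughly 0.5 byte at 4-bit, so for the weights alone (KV cache and overhead come on top):

    13B @ fp16 : 13e9 params × 2.0 B ≈ 26.0 GB
    13B @ 8-bit: 13e9 params × 1.0 B ≈ 13.0 GB
    13B @ 4-bit: 13e9 params × 0.5 B ≈  6.5 GB
     7B @ fp16 :  7e9 params × 2.0 B ≈ 14.0 GB

So a 4-bit 13B fits on consumer cards that cannot hold even a 7B in fp16, and the common finding is that quantizing a larger model loses less quality than stepping down a size class; that trade is why the quantized releases are so widely used.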
2023-08-24T11:24:53
https://www.reddit.com/r/LocalLLaMA/comments/15zz81s/llama2_quantized_model_vs_regular_one_whats_the/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zz81s
false
null
t3_15zz81s
/r/LocalLLaMA/comments/15zz81s/llama2_quantized_model_vs_regular_one_whats_the/
false
false
self
62
null
Smiliar Service like chat.nbox.ai that run LLMA module ?
1
I finally had some fun time testing LLaMA at a decent text-generation speed. I'm looking for a similar service (free, or freemium if possible) that runs a LLaMA model or other models.
2023-08-24T11:21:23
https://www.reddit.com/r/LocalLLaMA/comments/15zz5jv/smiliar_service_like_chatnboxai_that_run_llma/
Merchant_Lawrence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zz5jv
false
null
t3_15zz5jv
/r/LocalLLaMA/comments/15zz5jv/smiliar_service_like_chatnboxai_that_run_llma/
false
false
self
1
null
Is there a dashboard / comparison tool already for comparing the latest LLMs and staying up to date?
2
This seems such low-hanging fruit that it must already have been done: is there someone who keeps track of all these LLMs published (both local and closed)? It would be nice to have a clean overview and to get updates, with various properties for each LLM (license, author, parameters, VRAM needed, strengths/weaknesses and benchmarks, whatever)
2023-08-24T10:28:25
https://www.reddit.com/r/LocalLLaMA/comments/15zy2t9/is_there_a_dashboard_comparison_tool_already_for/
true_variation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zy2t9
false
null
t3_15zy2t9
/r/LocalLLaMA/comments/15zy2t9/is_there_a_dashboard_comparison_tool_already_for/
false
false
self
2
null
NTK RoPE scaling and GPTQ quantization are now native in Transformers. What else did I miss?
10
For these issues in particular, I spent weeks using the "hacked" version and AutoGPTQ, and I just realized that they are both included in Transformers (and Optimum) now. When searching for them on Google, I get redirected to articles from before they got implemented, and find almost nothing about the Transformers support. Are there any other advances I may have missed recently?
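Concretely, both features are now plain from_pretrained options. A sketch, assuming transformers >= 4.31 for rope_scaling and >= 4.32 with optimum and auto-gptq installed for native GPTQ loading; the model ids are illustrative:

    from transformers import AutoModelForCausalLM

    # NTK/dynamic RoPE scaling via the config's rope_scaling dict:
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        rope_scaling={"type": "dynamic", "factor": 2.0},  # ~2x usable context
    )

    # GPTQ checkpoints load natively; the quantization config is read from the repo:
    gptq = AutoModelForCausalLM.from_pretrained(
        "TheBloke/Llama-2-7B-GPTQ", device_map="auto"
    )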
2023-08-24T10:02:31
https://www.reddit.com/r/LocalLLaMA/comments/15zxkdz/ntk_rope_scaling_and_gptq_quantization_are_now/
cvdbdo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zxkdz
false
null
t3_15zxkdz
/r/LocalLLaMA/comments/15zxkdz/ntk_rope_scaling_and_gptq_quantization_are_now/
false
false
self
10
null
Ideal setup for dual 4090
36
I’m building a dual-4090 setup for local genAI experiments. The goal is a reasonable configuration for running LLMs, like a quantized 70B llama2, or multiple smaller models in a crude Mixture-of-Experts layout. Top priorities are fast inference and fast model load time, but I will also use it for some training (fine-tuning). 48GB of VRAM seems to be enough to get started, and I managed to get a good deal on two cards. I’ve done some reading on bottlenecks, threads and PCIe lanes. Most configurations I’ve tried on runpod or vast.ai are running on AMD server CPUs. If I go with an i9-13900 and use the two GPUs in PCIe 4.0 x8 mode (instead of x16), would it impact performance significantly? Should I take the AMD CPU path, with at least two x16 PCIe slots on a server mobo? What is a good, reliable setup for my 48GB of VRAM?
2023-08-24T09:36:37
https://www.reddit.com/r/LocalLLaMA/comments/15zx322/ideal_setup_for_dual_4090/
redscel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zx322
false
null
t3_15zx322
/r/LocalLLaMA/comments/15zx322/ideal_setup_for_dual_4090/
false
false
self
36
null
Performance issues despite RTX4090
1
Hello community, I am running the model "TheBloke/Llama-2-7b-Chat-GPTQ" via FastAPI using AutoGPTQ on my new computer (RTX 4090, 13900K, 64GB RAM). The problem is that the chat responses are super slow. On my Laptop with a mobile 2080 8GB, the webui oobabooga runs 13B models faster. Are there any settings that I can change in FastAPI to improve speed? Or is that particular model so slow? Thank you!
2023-08-24T09:20:26
https://www.reddit.com/r/LocalLLaMA/comments/15zws6u/performance_issues_despite_rtx4090/
Plane_Discussion_924
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zws6u
false
null
t3_15zws6u
/r/LocalLLaMA/comments/15zws6u/performance_issues_despite_rtx4090/
false
false
self
1
null
Any 70B model finetuned with 16K context length?
15
I can't seem to find any model with this configuration.
2023-08-24T08:35:14
https://www.reddit.com/r/LocalLLaMA/comments/15zvyxy/any_70b_model_finetuned_with_16k_context_length/
RepublicCharacter699
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zvyxy
false
null
t3_15zvyxy
/r/LocalLLaMA/comments/15zvyxy/any_70b_model_finetuned_with_16k_context_length/
false
false
self
15
null
Converting some models to GGUF formats from original sources; any requests?
13
- Any preferred models?
- For llama.cpp, any particular quantizations people find useful/not useful? I don't have unlimited bandwidth :p

Started with the classic gpt4-x-vicuna-13B at q5_0 and q5_K_M, uploading now: [https://huggingface.co/venketh/gpt4-x-vicuna-13b-gguf/](https://huggingface.co/venketh/gpt4-x-vicuna-13b-gguf/)
2023-08-24T08:33:27
https://www.reddit.com/r/LocalLLaMA/comments/15zvxta/converting_some_models_to_gguf_formats_from/
Fun_Tangerine_1086
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zvxta
false
null
t3_15zvxta
/r/LocalLLaMA/comments/15zvxta/converting_some_models_to_gguf_formats_from/
false
false
self
13
{'enabled': False, 'images': [{'id': 'gPWIgrw6XQDqpkhtA-ZQGG3Z7AVEpw1QryJS1JB_FwE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=108&crop=smart&auto=webp&s=7ae7573c0050b6ac847f43a21373fa6d63f725bc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=216&crop=smart&auto=webp&s=ef8e9905758d919b9eaa9d4eaa65b123fb69b55d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=320&crop=smart&auto=webp&s=a41098d58e45e8c6156ce6058c58d40db7ea1d1a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=640&crop=smart&auto=webp&s=71d0dd43d7be6724c811dc034bab4b53c8b4a261', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=960&crop=smart&auto=webp&s=b2b9ba4f19b59d8410152f49e369ed79c22bda4d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?width=1080&crop=smart&auto=webp&s=c1b769ce81df9862ce73f5bae8dbfc48b463e35d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6-_Krqzj9yTRdefqaNTbRq0X6SFeYOSVOEhq95kJwHQ.jpg?auto=webp&s=f07d882d14d2015847dc8822ef2a736ebbd4cdf3', 'width': 1200}, 'variants': {}}]}
[Help required] Use case of asking questions on a CSV file (already tried 2 methods)
6
I was working on a project where we can ask questions to Llama 2 and it provides accurate results with the help of the CSV data provided. I have mainly tried 2 methods so far:

1. Using the CSV agent of LangChain
2. Storing in vectors and then asking questions

The problems with the above approaches are:

1. CSV agent - It works perfectly fine when I am using it with OpenAI, but it's not working at all when I use a Llama 2 model with it. LangChain keeps giving me errors in this case. (An agent-wiring sketch follows below.)
2. Storing in vectors - This approach has the problem that the responses it generates are never from the doc itself; rather, they are from other sources. I have tried to restrict it to answer only from the doc data, but have been unable to do so.

Please let me know if you have integrated CSV with Llama 2 in the past. I want to ask complex questions too, not just the basic/easy ones. These complex questions are answered successfully via OpenAI, but not with Llama 2 as of now with my approaches.

Any help is highly appreciated.
2023-08-24T08:26:09
https://www.reddit.com/r/LocalLLaMA/comments/15zvsx5/help_required_use_case_of_asking_questions_on_a/
tesla_fanboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zvsx5
false
null
t3_15zvsx5
/r/LocalLLaMA/comments/15zvsx5/help_required_use_case_of_asking_questions_on_a/
false
false
self
6
null
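For reference on method 1 above, a minimal sketch of wiring LangChain's CSV agent to a local Llama 2 via llama-cpp-python; the model path is a placeholder, and local models often fail here because they don't emit the exact tool-call format the agent's parser expects:

```python
from langchain.agents import create_csv_agent
from langchain.llms import LlamaCpp

# Placeholder GGML path; temperature=0 helps the model follow the
# agent's strict output format.
llm = LlamaCpp(model_path="llama-2-13b-chat.ggmlv3.q4_0.bin",
               n_ctx=2048, temperature=0)

# The agent writes and executes pandas code behind the scenes.
agent = create_csv_agent(llm, "data.csv", verbose=True)
print(agent.run("How many rows have a value over 100 in column X?"))
```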
How do "serverless" cloud LLMs work?
5
I get how something like runpod works, in layman's terms I guess I'm running a VM in the cloud with the OS and everything. But I'm not sure I understand what serverless cloud LLM deployments are and how they're different. Which use cases are more optimal for runpod vs something serverless like beam.cloud?
2023-08-24T08:05:24
https://www.reddit.com/r/LocalLLaMA/comments/15zvezz/how_do_serverless_cloud_llms_work/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zvezz
false
null
t3_15zvezz
/r/LocalLLaMA/comments/15zvezz/how_do_serverless_cloud_llms_work/
false
false
self
5
null
Can anyone help me with this?
1
I am trying to make a site that runs on Llama 2 13B Chat, but I don't have the resources to host that, so I came up with an idea for a Chrome extension: when you move your computer's mouse or type something, a request is sent to my computer, my computer runs the prompt through Llama 2 13B Chat, and the answer is sent back to the extension and on to my site. Can anyone help me with programming this?
2023-08-24T07:56:11
https://www.reddit.com/r/LocalLLaMA/comments/15zv8lz/can_anyone_help_me_with_this/
radestijn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zv8lz
false
null
t3_15zv8lz
/r/LocalLLaMA/comments/15zv8lz/can_anyone_help_me_with_this/
false
false
self
1
null
LLaMA 2 fine-tuning made easier and faster
1
[removed]
2023-08-24T07:35:01
https://www.reddit.com/r/LocalLLaMA/comments/15zuusx/llama_2_finetuning_made_easier_and_faster/
tushar2407
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zuusx
false
null
t3_15zuusx
/r/LocalLLaMA/comments/15zuusx/llama_2_finetuning_made_easier_and_faster/
false
false
self
1
null
Can you do this with Llama?
5
[https://www.instagram.com/reel/CwPPeTegug5/?hl=en](https://www.instagram.com/reel/CwPPeTegug5/?hl=en) Could you do the same thing with Llama 2? I would hope that Llama has a more extensive dataset of Meta ads, therefore giving you better feedback. Thoughts? Thanks!
2023-08-24T07:12:06
https://www.reddit.com/r/LocalLLaMA/comments/15zufjz/can_you_do_this_with_lama/
Varial17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zufjz
false
null
t3_15zufjz
/r/LocalLLaMA/comments/15zufjz/can_you_do_this_with_lama/
false
false
self
5
{'enabled': False, 'images': [{'id': 'zzHEqlA9yiBy6wXPkSvo_fcU34IlJ3YEAQLCWyLMVzU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?width=108&crop=smart&auto=webp&s=91bd05f3babf5afc683f07763afc784b5990742b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?width=216&crop=smart&auto=webp&s=22be0b1c0a4906252022fd354e304f98e2319ca8', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?width=320&crop=smart&auto=webp&s=381607077b7f01555f376531a118796a3a65920a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/X8NxSUYxxP1IX_-f0dSvzdY4wR0TjNnFR4WNhSO4QCY.jpg?auto=webp&s=7ca1df9e5bb1f3f21a6cb26d05ae233d90df6b13', 'width': 360}, 'variants': {}}]}
How to run ctransformers efficiently
7
So I was just getting started with ctransformers and tried running Llama 2 13B GGML. I have 2 doubts:

1. How to generate the right things, and how to play with the configurations?
2. How to speed up inference; is multiprocessing possible here?

If you have also hit this, how did you fix it? (A loading/config sketch follows below.)

https://preview.redd.it/f01vcvrc90kb1.png?width=1637&format=png&auto=webp&s=57cf5b6e154711ff959a5f4d74c817d005d76af9
2023-08-24T06:51:42
https://www.reddit.com/r/LocalLLaMA/comments/15zu1uu/how_to_run_ctransformers_efficiently/
Spiritual-Rub925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zu1uu
false
null
t3_15zu1uu
/r/LocalLLaMA/comments/15zu1uu/how_to_run_ctransformers_efficiently/
false
false
https://b.thumbs.redditm…Tfm1X-g8xUYE.jpg
7
null
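A minimal loading/config sketch for the post above (parameter defaults may differ across ctransformers versions). Note that a single generation is already parallelized across CPU threads, so Python multiprocessing won't speed up one response; it only helps when serving several independent requests:

```python
from ctransformers import AutoModelForCausalLM

# Sketch: load a GGML Llama 2 13B from the Hugging Face Hub.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-13B-chat-GGML",
    model_type="llama",
    threads=8,       # match your physical core count
    gpu_layers=0,    # raise if built with GPU acceleration
)

# Sampling knobs control "generating the right things".
print(llm("Q: What is quantization?\nA:",
          max_new_tokens=128, temperature=0.7, top_k=40, top_p=0.95))
```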
I also tested Llama 2 70B with getumbrel/llama-gpt (384GB RAM, 2x Xeon Platinum 8124M, CPU Only)
24
2023-08-24T06:40:11
https://v.redd.it/ra5qxwpz60kb1
th3st0rmtr00p3r
/r/LocalLLaMA/comments/15ztu6e/i_also_tested_llama_2_70b_with_getumbrelllamagpt/
1970-01-01T00:00:00
0
{}
15ztu6e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ra5qxwpz60kb1/DASHPlaylist.mpd?a=1695537616%2CMmYxY2FkNjFlN2M0YmNhMzQ2NDk4Nzg5NzQwOTIwNTA1ODcyNTNjMmUyYmJhY2UzMDFlMzczM2U5ZjY1OTgzMw%3D%3D&v=1&f=sd', 'duration': 85, 'fallback_url': 'https://v.redd.it/ra5qxwpz60kb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/ra5qxwpz60kb1/HLSPlaylist.m3u8?a=1695537616%2CODAyMTBmOGQ5ZWU4NDAyMWUxMDY2ZjcwZjg5YzAxZDhiMGQzNGQ2MzFmMTI0ZWFjMjRhMzFlNGY1NDRjYmU5OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ra5qxwpz60kb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_15ztu6e
/r/LocalLLaMA/comments/15ztu6e/i_also_tested_llama_2_70b_with_getumbrelllamagpt/
false
false
https://a.thumbs.redditm…vD2yns61Dnj8.jpg
24
{'enabled': False, 'images': [{'id': '-To4Gx7P4-2WuX-XODrlaLvZcxIu8tWE9KKFYOmNjz4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=108&crop=smart&format=pjpg&auto=webp&s=231d15069c9f44e7ac6a6b36347ba9ef2ee80dca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=216&crop=smart&format=pjpg&auto=webp&s=efc7e6f252eac07af3b53fa4f4cc0991146c1fe7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=320&crop=smart&format=pjpg&auto=webp&s=fc8049c5fdfc4edab26e1668018783e3c9cd2250', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=640&crop=smart&format=pjpg&auto=webp&s=020b1243fffe125a9c892e1f2d740fcf748aacee', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=960&crop=smart&format=pjpg&auto=webp&s=828c90d45679724cf630d6df469e120fb43dd702', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=25a05686a3dacb72f1cc979648c867638ade968f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/WCH97_ymWw_TuibP_n4Oks_j-1dwXkOKVZlvEAWo_O4.png?format=pjpg&auto=webp&s=50f6f9848fefa4310d239425582fdfa918e748b7', 'width': 1920}, 'variants': {}}]}
LORA training not being applied?
1
I tried LoRA training a model on a specific response format. I got the loss down to ~0.5, which should be very low by training standards. Yet when I tried the model after applying it, the response was exactly the same as the base model's? I used llama.cpp to load the model and the LoRA. I even tried a prompt directly from the training data, and it didn't even try to conform to the format. Does anyone have any idea where I went wrong? (One common pitfall is sketched below.)
2023-08-24T05:40:58
https://www.reddit.com/r/LocalLLaMA/comments/15zsrpl/lora_training_not_being_applied/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zsrpl
false
null
t3_15zsrpl
/r/LocalLLaMA/comments/15zsrpl/lora_training_not_being_applied/
false
false
self
1
null
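One common pitfall for the post above: llama.cpp does not read a raw PEFT adapter_model.bin; the adapter first has to be converted with llama.cpp's convert-lora-to-ggml.py, and heavily quantized bases can blunt LoRA effects, so an f16 lora_base is usually recommended. A hedged llama-cpp-python sketch (paths are placeholders):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b.ggmlv3.q4_0.bin",  # quantized base to run
    lora_base="llama-2-7b.ggmlv3.f16.bin",    # unquantized base for applying the LoRA
    lora_path="ggml-adapter-model.bin",       # output of convert-lora-to-ggml.py
)
print(llm("### Instruction: respond in the trained format\n### Response:",
          max_tokens=64)["choices"][0]["text"])
```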
who's gonna release VivekLLama
0
who's gonna release VivekLLama
2023-08-24T05:38:13
https://www.reddit.com/r/LocalLLaMA/comments/15zsptu/whos_gonna_release_vivekllama/
CheapBison1861
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zsptu
false
null
t3_15zsptu
/r/LocalLLaMA/comments/15zsptu/whos_gonna_release_vivekllama/
false
false
self
0
null
Automated chatbot evaluation using Llama 2 (not GPT-4)
34
Not sure if there are a lot of people who care about this here since it's not a new model. There are so many models out there already, and it's hard to keep track of which ones are good. I would love to check out all these daily model releases claiming to be the next best thing by myself, but that's simply not possible. The people at LMSYS proposed a method to approximate human judgment: they collected a good amount of human evaluation data and saw that GPT-4 agrees well, “achieving over 80% agreement, the same level of agreement between humans” ([https://arxiv.org/pdf/2306.05685.pdf](https://arxiv.org/pdf/2306.05685.pdf)). Out of curiosity, just to see what would happen, I used upstage/Llama-2-70b-instruct-v2 as a judge instead of GPT-4. Llama 2 totally surprised me. The judgments of GPT-4 and Llama 2 are highly correlated, and Llama 2 even agreed with the human evaluation data. BUT (there has to be a downside when comparing a 70-billion with a 1760-billion-parameter model), Llama 2 sometimes messes up and does not do exactly what it has been told to do, leading to lower judgment quality. [https://medium.com/@geronimo7/judging-the-judges-668e80f4a1f2](https://medium.com/@geronimo7/judging-the-judges-668e80f4a1f2)

https://preview.redd.it/5mi7ab07tzjb1.png?width=2991&format=png&auto=webp&s=b42dd1cdaba64616344f07c3a2b6e7baf0ddea91

https://preview.redd.it/3z0mveh8tzjb1.png?width=2002&format=png&auto=webp&s=f05fe9e195e920c6ec3c930892584e8ac03a2764
2023-08-24T05:23:22
https://www.reddit.com/r/LocalLLaMA/comments/15zsfzq/automated_chatbot_evaluation_using_llama_2_not/
HatEducational9965
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zsfzq
false
null
t3_15zsfzq
/r/LocalLLaMA/comments/15zsfzq/automated_chatbot_evaluation_using_llama_2_not/
false
false
https://b.thumbs.redditm…3_z85xrTB11w.jpg
34
null
Nous-Hermes hallucinating about math?
1
[removed]
2023-08-24T03:31:44
https://www.reddit.com/r/LocalLLaMA/comments/15zq7bb/noushermes_halucinating_about_math/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zq7bb
false
null
t3_15zq7bb
/r/LocalLLaMA/comments/15zq7bb/noushermes_halucinating_about_math/
false
false
self
1
null
Any way to do batch inferencing?
1
I'm using the exllama_hf loader for GPTQ models in ooba via the API. I want to run a large number of prompts through the models. At present, this has to be done one by one for each prompt. When the models are small (13B GPTQ models), the GPU fluctuates at around 60-75% usage with the CPU at ~65% while running through the prompts with the API. Is there a way to speed this up? Send prompts in batches? (A concurrent-requests sketch follows below.)
2023-08-24T02:58:14
https://www.reddit.com/r/LocalLLaMA/comments/15zphku/any_way_to_do_batch_inferencing/
hedonihilistic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zphku
false
null
t3_15zphku
/r/LocalLLaMA/comments/15zphku/any_way_to_do_batch_inferencing/
false
false
self
1
null
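A sketch of overlapping requests against the webui's blocking API (the endpoint and payload shape are assumptions based on ooba's default API extension on port 5000). Note that the webui itself decodes one generation at a time, so this mostly hides network and queueing latency; true batched decoding needs a server built for it, such as vLLM or text-generation-inference:

```python
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://127.0.0.1:5000/api/v1/generate"  # assumed default blocking API

def generate(prompt: str) -> str:
    # One blocking call per prompt; the server queues overlapping requests.
    r = requests.post(URL, json={"prompt": prompt, "max_new_tokens": 200})
    r.raise_for_status()
    return r.json()["results"][0]["text"]

prompts = [f"Summarize document {i}." for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(generate, prompts))
```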
Is there a way to use a quantized Falcon 40B with SillyTavern (on Apple Silicon)
2
I'd like to try [https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML) with SillyTavern (running on Apple Silicon). The only way I've found to run Falcon 40B quantized on Apple Silicon is with [https://github.com/cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp) but I haven't figured out any way to get SillyTavern to use that as a local model. Does anyone know of a way to get this working?
2023-08-24T02:28:57
https://www.reddit.com/r/LocalLLaMA/comments/15zouu5/is_there_a_way_to_use_a_quantized_falcon_40b_with/
Next-Comfortable-408
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zouu5
false
null
t3_15zouu5
/r/LocalLLaMA/comments/15zouu5/is_there_a_way_to_use_a_quantized_falcon_40b_with/
false
false
self
2
null
Hacking away at GPT-2
6
Hello, I'd like to train and reproduce GPT-2 using karpathy's nanogpt, but in the notes he mentions that I might need at least 8x A100 40GB. At the moment I only have 4x A100 (80GB); do you think it's possible for me to reproduce the results? (See the memory arithmetic note below.)
2023-08-24T01:35:47
https://www.reddit.com/r/LocalLLaMA/comments/15znmxc/hacking_away_at_gpt2/
Alive-Age-3034
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15znmxc
false
null
t3_15znmxc
/r/LocalLLaMA/comments/15znmxc/hacking_away_at_gpt2/
false
false
self
6
null
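On the memory arithmetic above: 8x A100 40GB and 4x A100 80GB both come to 320GB total, so the reproduction should fit; with half as many GPUs you would roughly double the per-GPU batch size or the gradient accumulation steps to keep the same effective batch size, at roughly double the wall-clock time.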
Help needed -- traceback errors upon loading TheBloke_Chronos-Beluga-v2-13B-GPTQ
1
Hi everyone. Any assistance much appreciated. I'm just looking to load this into textgen/oobabooga and get the following errors:

```
Traceback (most recent call last):
  File "/home/radiosilence/ai/text-generation-webui/modules/ui_model_menu.py", line 185, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/home/radiosilence/ai/text-generation-webui/modules/models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
  File "/home/radiosilence/ai/text-generation-webui/modules/models.py", line 309, in AutoGPTQ_loader
    import modules.AutoGPTQ_loader
  File "/home/radiosilence/ai/text-generation-webui/modules/AutoGPTQ_loader.py", line 3, in <module>
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
ModuleNotFoundError: No module named 'auto_gptq'
```

I haven't had any luck so far. Am trying this from WSL with an i7/3900 and 128GB of RAM. Any help much appreciated! (A likely fix is noted below.)
2023-08-24T00:24:22
https://www.reddit.com/r/LocalLLaMA/comments/15zlyz2/help_needed_traceback_errors_upon_loading/
drycounty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zlyz2
false
null
t3_15zlyz2
/r/LocalLLaMA/comments/15zlyz2/help_needed_traceback_errors_upon_loading/
false
false
self
1
null
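For the traceback above: the failure is simply a missing package in the environment the webui runs under, so activating that environment and running `pip install auto-gptq` is the usual fix; the AutoGPTQ README notes that the right wheel can depend on your CUDA version, so treat the exact install command as something to check there.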
WMI Provider Host CPU usage. Normal or not?
1
[removed]
2023-08-24T00:07:52
https://www.reddit.com/r/LocalLLaMA/comments/15zlklv/wmi_provider_host_cpu_usage_normal_or_not/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zlklv
false
null
t3_15zlklv
/r/LocalLLaMA/comments/15zlklv/wmi_provider_host_cpu_usage_normal_or_not/
false
false
https://b.thumbs.redditm…IKY092CXP6iE.jpg
1
null
Looking For Feedback — GGML Downloader/Runner
1
[removed]
2023-08-24T00:04:00
https://www.reddit.com/r/LocalLLaMA/comments/15zlh52/looking_for_feedback_ggml_downloaderrunner/
jmerz_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zlh52
false
null
t3_15zlh52
/r/LocalLLaMA/comments/15zlh52/looking_for_feedback_ggml_downloaderrunner/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fLJsNbUriWtrLRQhoHIe3z2UwP064nGIwlvKaGHLpHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=108&crop=smart&auto=webp&s=53292720f73e45b03e9836c4b8c233af7244bce5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=216&crop=smart&auto=webp&s=5d64b834a79f101baf9ba5131bd442465412fdcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=320&crop=smart&auto=webp&s=02addacc985c5985c6550cad190f1d0750a96e73', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=640&crop=smart&auto=webp&s=f111f18b06bbe11d601c4f6e8b4109d2e9324b1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=960&crop=smart&auto=webp&s=4d3e8b1ff7429a2d21c4d472d25909961bec3007', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=1080&crop=smart&auto=webp&s=f45ff6774cf08dfc2083866a243fdc5a635516c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?auto=webp&s=14dd4fb61d37ca0e92e13cc74b77701586dde2a8', 'width': 1200}, 'variants': {}}]}
MediaTek Leverages Meta’s Llama 2 to Enhance On-Device Generative AI
10
MediaTek expects Llama 2-based AI applications to become available for smartphones powered by the next-generation flagship SoC, scheduled to hit the market by the end of the year. MediaTek’s next-generation flagship chipset, to be introduced later this year, will feature a software stack optimized to run Llama 2, as well as an upgraded APU with Transformer backbone acceleration, reduced footprint access and use of DRAM bandwidth, further enhancing LLM and AIGC performance. “Through our partnership with Meta, we can deliver hardware and software with far more capability in the edge than ever before.”
2023-08-24T00:02:51
https://corp.mediatek.com/news-events/press-releases/mediatek-leverages-metas-llama-2-to-enhance-on-device-generative-ai-in-edge-devices
noiseinvacuum
corp.mediatek.com
1970-01-01T00:00:00
0
{}
15zlg22
false
null
t3_15zlg22
/r/LocalLLaMA/comments/15zlg22/mediatek_leverages_metas_llama_2_to_enhance/
false
false
https://a.thumbs.redditm…g_uSOXDCM3y0.jpg
10
{'enabled': False, 'images': [{'id': '03qpAnVYOhoa-1lKzkFrHNHfok3HZmDu2UaqAILoSiA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=108&crop=smart&auto=webp&s=fcbf7bab4877b6718e29d92067aec565bc4cc118', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=216&crop=smart&auto=webp&s=c8663ed73ccedd7972d7736e5a4ee50df570d5f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=320&crop=smart&auto=webp&s=bfaaab64a42ee1fa55a91bbbc2a322cbffc9ad3b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=640&crop=smart&auto=webp&s=8e0cc700ad21918d3f20913f4e9b55b934823698', 'width': 640}, {'height': 517, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?width=960&crop=smart&auto=webp&s=6f65445a0c6b1f3a7b6c6d973c82d3953ca2c91c', 'width': 960}], 'source': {'height': 552, 'url': 'https://external-preview.redd.it/djEgHX5CMosCD9pM0gpHzDx0PYxPixoNRtHrdQbifaA.jpg?auto=webp&s=0056e4ee31b45e1b9795edbc4b7ba8108b0559e6', 'width': 1024}, 'variants': {}}]}
Working on a QLORA hub for model personalities, help needed
7
Hey all! I'm building a repository of QLoRA adapters that change the model's personality. The end vision is a hub of ready-to-go personality adapters. I'm hitting a snag when training the QLoRAs for a Paul Graham personality on top of a 4-bit quantized StableBeluga-7B. The model just doesn't seem to learn the style. Any thoughts on how I can improve this? Below are the details:

Data

* 3340 examples of PG passages, formatted as `{"text": "### User:\n{generic instruction}\n\n### Assistant:\n{PG-style response}"}`.
* Each example is about 5 sentences taken from one of PG's essays.

Training

* optim="paged_adamw_8bit"
* learning_rate=2e-4
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* num_train_epochs=4
* fp16=True
* group_by_length=True
* load_best_model_at_end=True
* max_seq_length=512

Hardware

* 1x V100 through Google Colab Pro.

My min eval loss so far is 1.916546. Pretty stuck and will appreciate any help! (A LoRA-config sketch follows below.)
2023-08-23T23:39:08
https://www.reddit.com/r/LocalLLaMA/comments/15zkuvi/working_on_a_qlora_hub_for_model_personalities/
Lang2lang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zkuvi
false
null
t3_15zkuvi
/r/LocalLLaMA/comments/15zkuvi/working_on_a_qlora_hub_for_model_personalities/
false
false
self
7
null
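For style-transfer cases like the post above, a commonly suggested change is widening the LoRA to cover all attention and MLP projections and raising the rank, since the frequently used q/v-only targets give the adapter little capacity to absorb a writing voice. A hedged PEFT sketch (values are illustrative, not tuned):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                 # higher rank gives more capacity for style
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```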
Hardware needed for LLaMa 2 13b for 100 daily users or a campus of 800 students.
65
**What is your dream LLaMA hardware setup if you had to service 800 people accessing it sporadically throughout the day?** Currently have a LLaMA instance set up with a 3090, but am looking to scale it up to a use case of 100+ users. Having the hardware run on site instead of in the cloud is required. Looking to either cannibalize several 3090 gaming PCs or do a full new build, but the use case would be an entire campus. Price not a concern for now. After browsing through a lot of other threads, it appears that I will max out at 2x 3090 per system with your standard gaming PC setup. But I can't figure out how to anticipate response times/backlog/queue if I start throwing 100+ users at it at once.

1. Would you switch to A100s and a Xeon server rack instead of gaming PCs with 2 or 3 3090s?
2. Would we need to build multiple 2x 3090 computers to scale to that user load?
3. What is your dream LLaMA hardware setup for 100+ users?
2023-08-23T22:50:01
https://www.reddit.com/r/LocalLLaMA/comments/15zjktb/hardware_needed_for_llama_2_13b_for_100_daily/
hawaiian0n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zjktb
false
null
t3_15zjktb
/r/LocalLLaMA/comments/15zjktb/hardware_needed_for_llama_2_13b_for_100_daily/
false
false
self
65
null
32GB vs 64GB vs 96GB M2Max?
11
I have to buy a MacBook for iOS dev, and I have been curious to try local LLMs. I am trying to figure out which spec to buy. Which local LLMs can I run with a 32GB vs 64GB vs 96GB RAM MacBook Pro? Also, how big is the difference between the M2 Pro and M2 Max? An M2 Pro with 32GB would probably suffice for dev, maybe an M2 Max. So I am trying to figure out if it's worth dropping extra $$$ for the LLM hobby. Thank you!
2023-08-23T22:45:31
https://www.reddit.com/r/LocalLLaMA/comments/15zjgj7/32gb_vs_64gb_vs_96gb_m2max/
Infinite100p
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zjgj7
false
null
t3_15zjgj7
/r/LocalLLaMA/comments/15zjgj7/32gb_vs_64gb_vs_96gb_m2max/
false
false
self
11
null
llm-tracker (by Leonard Lin), a list of local LLM resources
21
2023-08-23T21:58:31
https://llm-tracker.info/
NelsonMinar
llm-tracker.info
1970-01-01T00:00:00
0
{}
15zi4gd
false
null
t3_15zi4gd
/r/LocalLLaMA/comments/15zi4gd/llmtracker_by_leonard_lin_a_list_of_local_llm/
false
false
default
21
{'enabled': False, 'images': [{'id': 'LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?width=108&crop=smart&auto=webp&s=80f8b9cd30ec6f6a608377590cd8dfb9ed23ffd1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?width=216&crop=smart&auto=webp&s=3e5dba35c32b84d6c0079cdc890fcf8bc24e9276', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?width=320&crop=smart&auto=webp&s=4a887977ed2ef92568cb8da601c44d054456d577', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?width=640&crop=smart&auto=webp&s=1245ced747b810d762f73b704c36999595b964b5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?width=960&crop=smart&auto=webp&s=488dbe85afb08b1ce07794da39bf334f28528e98', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?width=1080&crop=smart&auto=webp&s=65f9684d481fd3f0271f10f91543b609fcd5283d', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/bx2O8BPfwyCFkmWEtPfeTy5Pq8oebtWvGqMnMUeBbGM.jpg?auto=webp&s=2b501aa5f8f875b629df1656ba5358fe93ad4855', 'width': 1200}, 'variants': {}}]}
Falcon support merged into llama.cpp
29
2023-08-23T21:39:44
https://github.com/ggerganov/llama.cpp/pull/2717
Someone13574
github.com
1970-01-01T00:00:00
0
{}
15zhlyx
false
null
t3_15zhlyx
/r/LocalLLaMA/comments/15zhlyx/falcon_support_merged_into_llamacpp/
false
false
https://a.thumbs.redditm…lAsaRa_ghYY8.jpg
29
{'enabled': False, 'images': [{'id': 'BxHnVxFVhXkIcRqeiF0vbrE5UnviFyBqmyRZRsCwszc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=108&crop=smart&auto=webp&s=bd55410c7bcb507c4478f996e7569bf53099872b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=216&crop=smart&auto=webp&s=3d288ec1980a97620fee7d7b3acdfbb3df7396a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=320&crop=smart&auto=webp&s=fd69f01abfb7fe21ce935f7c3daec441625f1fd9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=640&crop=smart&auto=webp&s=6952a5b8577f6d2a855b5c109ab20662a3089cb0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=960&crop=smart&auto=webp&s=e3d49cdc74b416014e541ce3135194d98e9fed9a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?width=1080&crop=smart&auto=webp&s=4ff204c9963259e755ec541f315c9fb019364062', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BZlVXlLBCtMlR_SH2uQlnEg043LU0sP79vVCDxwGudU.jpg?auto=webp&s=fef915ea1bc44d7d67e04fb342c6006c0f3e7d14', 'width': 1200}, 'variants': {}}]}
Llama 2 (chat) is about as factually accurate as GPT-4 for summaries and is 30X cheaper | Anyscale
212
2023-08-23T21:05:33
https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper
ambient_temp_xeno
anyscale.com
1970-01-01T00:00:00
0
{}
15zgo8y
false
null
t3_15zgo8y
/r/LocalLLaMA/comments/15zgo8y/llama_2_chat_is_about_as_factually_accurate_as/
false
false
https://b.thumbs.redditm…LJiRyiQoZ6Gs.jpg
212
{'enabled': False, 'images': [{'id': 'fjkINwtMvRs_V90KruwIZ3rqIZC2fqyzrx58_R1X3U0', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=108&crop=smart&auto=webp&s=f202f15a1f34fac132fec60b20720ac0c86ffb5d', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=216&crop=smart&auto=webp&s=2dba8e6f80f6077e85b728cb3a36f2044c26198b', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=320&crop=smart&auto=webp&s=6d1f67323db0dae05482293d7a70e386ac359edd', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=640&crop=smart&auto=webp&s=c6829fbaea2200017424deaa2841010ed09aa6c8', 'width': 640}, {'height': 475, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=960&crop=smart&auto=webp&s=31a8ab4969293d26561eca7b7d8ec0fc32417016', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?width=1080&crop=smart&auto=webp&s=7634265ee11088bf0df873094ad764de553a2bf5', 'width': 1080}], 'source': {'height': 924, 'url': 'https://external-preview.redd.it/LzXOLUpqiLnlcPk69TFyDAPYUBkYjDBuvm2QDlU5YD8.jpg?auto=webp&s=e42811a15d83a478f132ab77a85df41ff5b96094', 'width': 1864}, 'variants': {}}]}
Samantha 1.11 70b
82
I am announcing Samantha 1.11, trained with qLoRA and Axolotl for 15 epochs using 4x A100 80gb. [https://huggingface.co/ehartford/Samantha-1.11-70b](https://huggingface.co/ehartford/Samantha-1.11-70b) [https://erichartford.com/meet-samantha](https://t.co/sfuuOeeYaa) She's wicked smart, fun, and scored very well on the leaderboard! Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She will not engage in roleplay, romance, or sexual activity. She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format. This Samantha was trained for 15 epochs, and is significantly smarter. She took 24 hours on 4x A100 80gb using [**axolotl**](https://github.com/OpenAccess-AI-Collective/axolotl), [**qLoRA**](https://arxiv.org/abs/2305.14314), [**deepspeed zero2**](https://www.deepspeed.ai/tutorials/zero/#zero-overview), and [**flash attention 2**](https://arxiv.org/abs/2205.14135). Samwit used Samantha's data to fine-tune ChatGPT in his excellent video [https://youtu.be/MkocIPcg5A8](https://t.co/dlhnv6FLno) I will release 7b and 13b by tomorrow, given the success she's achieved.
2023-08-23T20:45:54
https://www.reddit.com/r/LocalLLaMA/comments/15zg504/samantha_111_70b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zg504
false
null
t3_15zg504
/r/LocalLLaMA/comments/15zg504/samantha_111_70b/
false
false
self
82
{'enabled': False, 'images': [{'id': '-fFBFCFHs2e4ZAE9TBQElfI2oB4JT3fhC15fzKjdfcM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=108&crop=smart&auto=webp&s=6c32d8c9010156a63c1c8f176e569ced352002fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=216&crop=smart&auto=webp&s=c1ae30fbac34b435a3e65056b3258a806e909295', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=320&crop=smart&auto=webp&s=168ba44a40ba184caf654a5529ff7d3311a87f7d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=640&crop=smart&auto=webp&s=737282e1ab7b65302177768e58108f687419f7ba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=960&crop=smart&auto=webp&s=c3bf5a3e64a6aa2889be91b1f5742382fb8fe4c4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?width=1080&crop=smart&auto=webp&s=65024cbda4afd64af74eddca72150c586e20e12f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RYOs3lAoHBI9id1Qug6o3QBYZxTLUh2yDt6IQZ8w5Lk.jpg?auto=webp&s=fb602f5f1762b350696ef5ca83390ec0af8d7566', 'width': 1200}, 'variants': {}}]}
Full Training instead of LoRa for learning less complex?
1
I read how complicated it is to prepare internal company data for fine-tuning an LLM to know its content: creating thousands of QA pairs from the raw text data with the help of other LLMs to have a good LoRA dataset. So I wondered: wouldn't it be less complex to just rent the huge amount of RAM and VRAM needed to run a full training and throw the company data in as pure text, instead of going through the complex data-preparation steps for fine-tuning?
2023-08-23T20:38:40
https://www.reddit.com/r/LocalLLaMA/comments/15zfxm8/full_training_instead_of_lora_for_learning_less/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zfxm8
false
null
t3_15zfxm8
/r/LocalLLaMA/comments/15zfxm8/full_training_instead_of_lora_for_learning_less/
false
false
self
1
null
Seeking an NLP Team Lead (NSFW dialogue systems)
26
Discovered this community earlier in the year via Eric Hartford and have been following it intensely ever since. It's hard to get work done sometimes since there are always new toys being released every week. Our team @ athos dot com is looking for someone who might be interested in NSFW conversational AI (adult) in a business that is already profitable. This person would be responsible for building the dialogue engine for our chatbot and managing the team behind it. You will lead and have ownership of one of the most critical departments of our business. It requires deep knowledge of, and curiosity about, NLP/conversational AI and solving open-ended problems with no pre-defined solution. This position offers the exciting opportunity to work with the absolute latest advancements in LLMs. Drop a message if you'd like the details for the role. Cheers!
2023-08-23T20:23:41
https://www.reddit.com/r/LocalLLaMA/comments/15zfibo/seeking_an_nlp_team_lead_nsfw_dialogue_systems/
AnonymousLurker91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zfibo
true
null
t3_15zfibo
/r/LocalLLaMA/comments/15zfibo/seeking_an_nlp_team_lead_nsfw_dialogue_systems/
false
false
nsfw
26
null
Accessing oobabooga api with silero_tts extension activated
1
I wondered if anyone has ever tried decoding the webui response when silero_tts is activated, in order to use the generated audio file in another program?
2023-08-23T19:56:40
https://www.reddit.com/r/LocalLLaMA/comments/15zeq3u/accessing_oobabooga_api_with_silero_tts_extension/
aldur15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zeq3u
false
null
t3_15zeq3u
/r/LocalLLaMA/comments/15zeq3u/accessing_oobabooga_api_with_silero_tts_extension/
false
false
self
1
null
Best Practices to Increase Speed of Llama-2 70B?
9
Thanks to everyone in this community for all of the helpful posts! I'm looping over many prompts with the following specs:

1. Instruct v2 version of Llama-2 70B (see [here](https://huggingface.co/upstage/Llama-2-70b-instruct-v2))
2. 8-bit quantization
3. Two A100s
4. 4k tokens of input text
5. Minimal output text (just a JSON response)

Each prompt takes about one minute to complete. I would like to cut down on this time, substantially if possible, since I have thousands of prompts to run through. Here are a few thoughts I've had:

* Use ExLlama (does anyone know why it speeds things up?)
* Use 4-bit quantization so that I can run more jobs in parallel
* Try classification. For many of my prompts I want Llama-2 to just answer with 'Yes' or 'No'. Are there ways to speed up Llama-2 for classification inference? (A logit-scoring sketch for this case follows below.)
* Add RAM/CPU cores? I'm using a server where I could request more regular RAM or CPU cores. Not sure if this will make a difference.

I'm mostly curious to see if this runtime is similar to what everyone else is experiencing, and any best practices to speed it up.

Thanks so much!
2023-08-23T19:31:36
https://www.reddit.com/r/LocalLLaMA/comments/15zdzuw/best_practices_to_increase_speed_of_llama2_70b/
MasterJaguar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zdzuw
false
null
t3_15zdzuw
/r/LocalLLaMA/comments/15zdzuw/best_practices_to_increase_speed_of_llama2_70b/
false
false
self
9
{'enabled': False, 'images': [{'id': 'g2F5tv1cdLIM8WGiiEED81EoP7qdkFXvvRx6UDAWDwA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=108&crop=smart&auto=webp&s=f6930f3dcc161f6968de2c89bb04b8845de6bb39', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=216&crop=smart&auto=webp&s=c72ad2dd9274239cac01c628fa214c69aed63e37', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=320&crop=smart&auto=webp&s=79f200b12956bf3fd53423c4e0b4c3e02bbcca6e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=640&crop=smart&auto=webp&s=a64dea973d55ac3e56812113b5600db4bc88702c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=960&crop=smart&auto=webp&s=e36f2678da191935fb2b3713c6cb048a4f0d84bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?width=1080&crop=smart&auto=webp&s=11ee83775ad44cecc2fe3a3a7efa0c4a107e33cf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bp0xiydBokxkDVuI3KgOtwQEjxWaeBuK1dy0zEMHzMg.jpg?auto=webp&s=5814e226a1840b68d88ed47fcfbf5bb86a5da738', 'width': 1200}, 'variants': {}}]}
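On the Yes/No classification idea above: rather than generating text, you can score the two candidate answers directly from the next-token logits, one forward pass per prompt, which avoids autoregressive decoding entirely. A minimal sketch (model/tokenizer loading elided; exact token IDs depend on the tokenizer):

```python
import torch

def classify_yes_no(model, tokenizer, prompt: str) -> str:
    # Single forward pass; compare logits for "Yes" vs "No" at the last position.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
    no_id = tokenizer.encode("No", add_special_tokens=False)[0]
    return "Yes" if logits[yes_id] > logits[no_id] else "No"
```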
Train Stable Beluga on own data
1
Does anybody know how to train Stable Beluga on my own data? I couldn't find any tutorial on it. Or should I use another model?
2023-08-23T19:16:09
https://www.reddit.com/r/LocalLLaMA/comments/15zdjr2/train_stable_beluga_on_own_data/
Ok-Injury8193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zdjr2
false
null
t3_15zdjr2
/r/LocalLLaMA/comments/15zdjr2/train_stable_beluga_on_own_data/
false
false
self
1
null
Need help on Ai ChatBot
2
I have built an AI chatbot which I want to use as a business tool. I currently have everything on a local server. What has me stuck is how I would be able to give it to business owners without them having to install all the tedious applications needed. I want it as simple as sending it to them, them installing it, and using the chatbot feature. Sorry if this comes across as a dumb question.
2023-08-23T19:01:55
https://www.reddit.com/r/LocalLLaMA/comments/15zd4yw/need_help_on_ai_chatbot/
Significant_Front_92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zd4yw
false
null
t3_15zd4yw
/r/LocalLLaMA/comments/15zd4yw/need_help_on_ai_chatbot/
false
false
self
2
null
Finetuning for structured output: Special Tokens vs Free Form Text
2
Hey all, I'm hoping to use LLAMA2 for generating structured output for an app-specific problem. To make sure I have the setup right, I've been first trying to fine-tune LLAMA2 on a toy dataset ([https://huggingface.co/datasets/GEM/viggo/](https://huggingface.co/datasets/GEM/viggo/)). The dataset gives a structured representation for a free-form text. So far, I've fine-tuned the base LLAMA-7B (float16 precision) and the 8-bit version of the same (both with LoRA). My approach at the moment is to run training on the dataset as a formatted string

> text = f"### Target: {example['target'][i]}\n\n### Repr: {example['meaning_representation'][i]}{tokenizer.eos_token}"

This works _okay_, but the model keeps going even after outputting the meaning representation:

```
### Target: SpellForce 3 is a pretty bad game. The developer Grimlore Games is clearly a bunch of no-talent hacks, and 2017 was a terrible year for games anyway.
### Repr: inform(name[SpellForce 3], developer[Grimlore Games], release_year[2017], rating[poor]))
### Explanation start
### Expectation(release_year[2017], rating[poor], developer[Grimlore Games], name[SpellForce 3])
### Rating(rating[poor])
### Name(name[SpellForce 3])
### Developer(developer[Grimlore Games))
### Explanation text(The developer Grimlore Games is clearly a bunch of no-talent hacks, and 2017 was a terrible year for games anyway.)
### Release year(release_year[2017])
### Ratings(rating[poor])
- name[SpellForce 3]
- developer[
```

I run all my fine-tunes for 3 epochs. I have a few questions:

1. Is there a better way to train the model to stop output? (A stopping sketch follows below.)
2. I see the transformers library has special tokens; should I use them instead of formatted strings with words that have special meanings? Minor sidenote: the vocab size seems to be 32K, and there are performance considerations in changing this nice round number to something like 32003.
3. How have you been finding the "right" hyperparameters for text completion? For example, I'm unsure if a 3-epoch run is too high or too low.
2023-08-23T19:00:21
https://www.reddit.com/r/LocalLLaMA/comments/15zd3b0/finetuning_for_structured_output_special_tokens/
Connect-Wonder2348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zd3b0
false
null
t3_15zd3b0
/r/LocalLLaMA/comments/15zd3b0/finetuning_for_structured_output_special_tokens/
false
false
self
2
{'enabled': False, 'images': [{'id': 'gWLLv3XvXoE0jibXMNpAxCPU4q70lTNDbjeLkd8lqF0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=108&crop=smart&auto=webp&s=ff3222becf6cfba5a9ad7d7fc67a5f4a3cecaa19', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=216&crop=smart&auto=webp&s=7ff0e571fc3d8efb7fa609ff359a13fa4ec6f428', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=320&crop=smart&auto=webp&s=5f6e7b79b58adbb00b7596b75921d776d53d4bff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=640&crop=smart&auto=webp&s=ff0ecd5071e94dbd1b3cb7522eb12ddbd903a669', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=960&crop=smart&auto=webp&s=711ecec73d5e51168766b36e110db4e46bb1280c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?width=1080&crop=smart&auto=webp&s=9446bbcffa6a394354b6b6769b5737d2a2591805', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/coclpNB_TaIkunLH8Mj9kIqLlSn0pOGBHKC_vCBksuI.jpg?auto=webp&s=b7450bc287c286db98aa4ac9a0d738473c89161e', 'width': 1200}, 'variants': {}}]}
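On question 1 above: two things usually matter, that the training examples really end with the EOS token (and that it isn't masked out by the collator), and that generation is told to stop on it. A hedged sketch of the inference side (model and inputs elided):

```python
# Generation halts as soon as the model emits EOS; if the model was never
# trained to produce EOS after the Repr line, it will keep rambling.
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,  # LLaMA has no pad token by default
)
```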
Help on Fine-Tuning llama 2
1
I recently fine-tuned a Llama 2 model with my dataset. Everything went fine, and the files it gave me were an adapter_model.bin and an adapter_config.json. From what I've read, it is possible to merge those two files with the full base model and thus create a new model based on Llama 2. Does anyone know how to do it? (A merge sketch follows below.)
2023-08-23T18:42:09
https://www.reddit.com/r/LocalLLaMA/comments/15zclp0/help_on_finetuning_llama_2/
danielbrdz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zclp0
false
null
t3_15zclp0
/r/LocalLLaMA/comments/15zclp0/help_on_finetuning_llama_2/
false
false
self
1
null
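A merge sketch for the post above using PEFT's merge_and_unload; the base model name and adapter directory are placeholders for whatever was fine-tuned:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/adapter_dir")

merged = model.merge_and_unload()        # folds the LoRA weights into the base
merged.save_pretrained("merged-model")   # a standard full HF checkpoint

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tok.save_pretrained("merged-model")
```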
Llama 2 70B model running on old Dell T5810 (80GB RAM, Xeon E5-2660 v3, no GPU)
163
2023-08-23T18:39:41
https://v.redd.it/dfx1jrqymwjb1
Ninjinka
/r/LocalLLaMA/comments/15zcj40/llama_2_70b_model_running_on_old_dell_t5810_80gb/
1970-01-01T00:00:00
0
{}
15zcj40
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dfx1jrqymwjb1/DASHPlaylist.mpd?a=1695494385%2CMTVhMWI0MGExNzM0YzJmMzVjNDUwZDRmOWQwYjkyMzBjOWIxOTkwNWYzMjI2NmMxMTc5ODVlNzFkNWRkNzViZg%3D%3D&v=1&f=sd', 'duration': 173, 'fallback_url': 'https://v.redd.it/dfx1jrqymwjb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/dfx1jrqymwjb1/HLSPlaylist.m3u8?a=1695494385%2CYjMxZDVlMTgzMzY0NzA1MGUyZjMzZWZiM2I2NjJhNDRlNmI3M2EwNjJmYzJjM2NmNDlmZTAxMjA5NWM1OTA0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dfx1jrqymwjb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_15zcj40
/r/LocalLLaMA/comments/15zcj40/llama_2_70b_model_running_on_old_dell_t5810_80gb/
false
false
https://a.thumbs.redditm…QTjgr81Jo1-8.jpg
163
{'enabled': False, 'images': [{'id': '1mkI6JlSRH7zk982Hf94alzquXrDtUJG3P3CLUb6SMY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=108&crop=smart&format=pjpg&auto=webp&s=9e9ed2501ce0f0e04a413059a3103ba0f6e8b6cd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=216&crop=smart&format=pjpg&auto=webp&s=bb0075694bf415e146643ef0cce273f196decfad', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=320&crop=smart&format=pjpg&auto=webp&s=fb1ad2fe34e12cd5fab9a7786c80a8e64f1359e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=640&crop=smart&format=pjpg&auto=webp&s=3fa4610ebd404dac1d21e4e93b57967ff4c0e7db', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=960&crop=smart&format=pjpg&auto=webp&s=ee836c4ceb6a01a0aa39cd8cf35c0821686afb7a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a1709a3109ca7c1cf187a852e3605e22b8540ea7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/7ViMYuwc8VX37MShnMLNnu886EGtZt-1tURpcHmbpFs.png?format=pjpg&auto=webp&s=7856e754076511289716c4279b267efaf17422a6', 'width': 1920}, 'variants': {}}]}
Can I train/extend a tokenizer with qlora?
3
I want it to pick up special acronyms. I'm reading that QLoRA, since it uses a static snapshot, might not be able to update the tokenizer? Idk. But if I want to make it feasible to train on custom corpus data, I would need to modify the tokenizer. I know how to extend a tokenizer; I'm just not sure if I can continue a tokenizer's training. (A resize sketch follows below.)
2023-08-23T17:48:15
https://www.reddit.com/r/LocalLLaMA/comments/15zb34i/can_i_trainextend_a_tokenizer_with_qlora/
Thistleknot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zb34i
false
null
t3_15zb34i
/r/LocalLLaMA/comments/15zb34i/can_i_trainextend_a_tokenizer_with_qlora/
false
false
self
3
null
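A resize sketch for the post above. The tokenizer itself isn't retrained; new acronyms are just appended as whole tokens, and the embedding matrix is resized to match. The QLoRA catch is that the quantized base stays frozen, so the new embedding rows only learn if PEFT keeps those modules trainable via modules_to_save (the acronym list is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

tokenizer.add_tokens(["ACME-X", "FOOBAR2"])     # hypothetical acronyms
model.resize_token_embeddings(len(tokenizer))   # new rows start randomly initialized

lora_config = LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # train the resized matrices
)
```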
Help in fine tuning llama2
5
Hey guys! So we have a task where we have to build a QA conversation bot. We have to take data from many books, so initially we were doing RAG with Llama 2 13B but couldn't get good results [it's not a strict factual QA bot]. So now we want to fine-tune Llama 2. Here is the approach that I have in mind:

1. Use GPT-3.5 Turbo to make a question-answer set as a role play
2. Convert the generated QA data into the right format to feed into Llama 2
3. Now here is the main part, how to do efficient fine-tuning

Steps:

1. Use bitsandbytes to load Llama 2 13B 4-bit quantized [is it good to fine-tune in 4 bit, or should I do it in fp16?] (A loading sketch follows below.)
2. Use LoRA or QLoRA with PEFT [which one should I start with?]
3. Use supervised fine-tuning with TRL. Are there unsupervised ways, and what is RLHF?
4. Upload the model to the hub and use it for inference [will it be GGML if I fine-tune with 4-bit QLoRA?]

So guys, please share your views on my approach; your inputs will be really helpful.
2023-08-23T17:31:20
https://www.reddit.com/r/LocalLLaMA/comments/15zalrg/help_in_fine_tuning_llama2/
Spiritual-Rub925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15zalrg
false
null
t3_15zalrg
/r/LocalLLaMA/comments/15zalrg/help_in_fine_tuning_llama2/
false
false
self
5
null
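On step 1 above: a standard QLoRA-style 4-bit load looks like the sketch below; the 4-bit base stays frozen while the LoRA adapter trains in higher precision, so 4-bit loading is the normal choice. Note that the output of such a run is a PEFT adapter, not GGML; converting to GGML/GGUF is a separate step after merging. (The model name is the usual HF repo; adjust as needed.)

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # QLoRA's NormalFloat4
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```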
Searching for basic chunking - embedding example
1
Basic chunking - embedding example [help] Hi everyone. Maybe this is a dumb question, but I'm still learning, please don't roast me. I'd really appreciate it if someone has (or can share links/resources to) a basic example of Python code that takes text, splits it, embeds it, stores it, and recalls it based on a query, **that doesn't use LangChain**? (A minimal sketch follows below.) Thanks in advance for every kind of answer.
2023-08-23T17:09:39
https://www.reddit.com/r/LocalLLaMA/comments/15z9yz1/searching_for_basic_chunking_embedding_example/
Natural_Speaker7954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z9yz1
false
null
t3_15z9yz1
/r/LocalLLaMA/comments/15z9yz1/searching_for_basic_chunking_embedding_example/
false
false
self
1
null
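A minimal sketch for the post above using sentence-transformers and NumPy only; the input file name is a placeholder, and the fixed-size splitter is deliberately naive (real splitters respect sentence boundaries):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 500, overlap: int = 50):
    # Naive character-window chunking with overlap.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = chunk(open("document.txt").read())
store = model.encode(chunks, normalize_embeddings=True)  # in-memory "vector store"

def recall(query: str, k: int = 3):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = store @ q                       # cosine similarity (unit vectors)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

print(recall("What does the text say about pricing?"))
```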
Fast Vector Similarity Library, Useful for Working With Llama2 Embedding Vectors
13
I recently found myself computing the similarity between lots of very high dimensional vectors (i.e., sentence embedding vectors from LLMs), and I wanted to try some more powerful measures of similarity/dependency than just cosine similarity, which seems to be the default for everything nowadays because of its computational efficiency. There are many other more involved measures that can detect more subtle relationships, but the problem is that some of them are quite slow to compute, especially if you're trying to do it in Python. For my favorite measure of statistical dependency, Hoeffding's D, that's true even if you use NumPy.

Since I recently learned Rust and wanted to learn how to make Python packages using Rust, I put together this new library that I call Fast Vector Similarity. I was blown away by the performance of Rust and the quality of the tooling while making this. And even though it required a lot of fussing with GitHub Actions, I was also really impressed with just how easy it was to make a Python library using Rust that could be automatically compiled into wheels for every combination of platform (Linux, Windows, Mac) and Python version (3.8 through 3.11) and uploaded to PyPI, all triggered by a commit to the repo and handled by GitHub's servers-- and all for free if you're working on a public repo!

Anyway, this library can easily be installed to try out using `pip install fast_vector_similarity`, and you can see some simple demo Python code in the readme to show how to use it. Aside from exposing some very high performance implementations of some very nice similarity measures, I also included the ability to get robust estimates of these measures using the bootstrap method. Basically, if you have two very high dimensional vectors, instead of using the entire vector to measure similarity, you can take the same random subset of indices from both vectors and compute the similarity of just those elements. Then you repeat the process hundreds or thousands of times and look at the robust average (i.e., throw away the results outside the 25th percentile to 75th percentile and average the remaining ones, to reduce the impact of outliers) and standard deviation of the results. (A NumPy sketch of this idea follows below.) Obviously this is very demanding of performance, but it's still reasonable if you're not trying to compute it for too many vectors.

Everything is designed to fully saturate the performance of multi-core machines by extensive use of broadcasting/vectorization and the use of parallel processing via the Rayon library. I was really impressed with how easy and low-overhead it is to make highly parallelized code in Rust, especially compared to coming from Python, where you have to jump through a lot of hoops to use multiprocessing and there is a ton of overhead.

Anyway, please let me know what you think. I'm looking to add more measures of similarity if I can find ones that can be efficiently computed (I already gave up on including HSIC because I couldn't get it to go fast enough, even using BLAS/LAPACK).
2023-08-23T17:07:43
https://github.com/Dicklesworthstone/fast_vector_similarity
dicklesworth
github.com
1970-01-01T00:00:00
0
{}
15z9x1x
false
null
t3_15z9x1x
/r/LocalLLaMA/comments/15z9x1x/fast_vector_similarity_library_useful_for_working/
false
false
https://b.thumbs.redditm…87odL7TmV_QQ.jpg
13
{'enabled': False, 'images': [{'id': 'LglpqFeAgThyhA9PZHv5p-_aUWaIueWwNpJLTlLCCjk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=108&crop=smart&auto=webp&s=e356100a8f038b9d20b5bf693135402b15a3039f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=216&crop=smart&auto=webp&s=4f9e5b792b36693444b18a9862345489d8cccd72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=320&crop=smart&auto=webp&s=6046370f8aa4836210e7e183b85ea7faad8da2dc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=640&crop=smart&auto=webp&s=b5a433db02fae2c429aeeb20652aa7b488d18faf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=960&crop=smart&auto=webp&s=29501af8b88c90537117153cd8379fe3033001d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?width=1080&crop=smart&auto=webp&s=b4317f29b913b9972e25bab47408965f2dd21256', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z7EVvyNBUyTJb7oqJ-Zc-9qfhRwJCzpE4NLBW9TlFhs.jpg?auto=webp&s=2ccb3dab4b0db4c451f3c4321bc09ac617386c50', 'width': 1200}, 'variants': {}}]}
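The bootstrap scheme described above, sketched in plain NumPy purely to illustrate the idea (this is not the library's API):

```python
import numpy as np

def bootstrap_cosine(x, y, n_draws=1000, frac=0.1, seed=0):
    """Robust cosine similarity over repeated random shared-index subsets."""
    rng = np.random.default_rng(seed)
    n, k = len(x), max(2, int(len(x) * frac))
    sims = np.empty(n_draws)
    for i in range(n_draws):
        idx = rng.choice(n, size=k, replace=False)  # same indices for both vectors
        a, b = x[idx], y[idx]
        sims[i] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    lo, hi = np.percentile(sims, [25, 75])
    kept = sims[(sims >= lo) & (sims <= hi)]        # trim outliers, then average
    return kept.mean(), sims.std()
```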
Oobabooga text generation webui
2
Anybody know if this can run across multiple servers to split up the model? I know llama.cpp can use MPI to do so, and text-generation-webui can use llama.cpp, but I haven't been able to find any documentation on how to do it.
2023-08-23T16:42:48
https://www.reddit.com/r/LocalLLaMA/comments/15z97jt/oobabooga_text_generation_webui/
amonymus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z97jt
false
null
t3_15z97jt
/r/LocalLLaMA/comments/15z97jt/oobabooga_text_generation_webui/
false
false
self
2
null
Never get the download mail after the download request.
1
Is Llama 2 limited by country? I have requested the model download several times and I never get the email. I'm in Panamá.
2023-08-23T16:37:48
https://www.reddit.com/r/LocalLLaMA/comments/15z9281/never_get_the_download_mail_after_the_download/
lv412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z9281
false
null
t3_15z9281
/r/LocalLLaMA/comments/15z9281/never_get_the_download_mail_after_the_download/
false
false
self
1
null
I fine-tuned ChatGPT 3.5 so you don't have to!
81
I have a chatbot that I programmed to offer some products online and serve customers. Using custom embeddings and prompt engineering, it runs great! Yesterday the news came out about the possibility of fine-tuning GPT-3.5, and I decided to test it. My information base is very good, with more than 500 records; I've been improving it over time to work well with embedding-based vector search. Well, after 20 dollars and one hour of training, I got an email saying that my fine-tuned model was ready. It is simply the stock GPT-3.5 model, with almost no customization. It's like my information is buried deep inside the model somewhere. Yes, I followed all the steps, and my content has nothing illegal that could be filtered. It looks like the open source community is ahead of the curve on this one.
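For reference, the flow being described looks roughly like this with the openai Python SDK of that period; the file name is a placeholder and the training data is JSONL with one chat example per line:

    import openai

    # Each line of train.jsonl is one example:
    # {"messages": [{"role": "system", "content": "..."},
    #               {"role": "user", "content": "..."},
    #               {"role": "assistant", "content": "..."}]}
    training = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
    job = openai.FineTuningJob.create(training_file=training.id, model="gpt-3.5-turbo")
    print(job.id)  # OpenAI emails you when training finishes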
2023-08-23T16:03:33
https://www.reddit.com/r/LocalLLaMA/comments/15z82hn/i_finetuned_chatgpt_35_so_you_dont_have_to/
arretadodapeste
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z82hn
false
null
t3_15z82hn
/r/LocalLLaMA/comments/15z82hn/i_finetuned_chatgpt_35_so_you_dont_have_to/
false
false
self
81
null
Recommended AWS instance type to host Llama 70B Chat
8
Hi everyone, It is my first post; pardon me if it misses any points. I am developing an application that runs inference against Llama. I am using 13B-Chat locally, with llama-cpp-python, and an approximately 150-word system prompt. My laptop specifications are: M1 Pro, 64 GB RAM. I built llama.cpp as the official documentation describes so it works with the Metal GPU. When the application runs inference, it takes 20 seconds for the model to respond to the first message and 10 seconds for subsequent ones. Not bad, I guess. I plan to deploy it to AWS with a target response time under 5 seconds, though I don't know if that is feasible. Has anyone deployed Llama to AWS EC2 before and been able to achieve high performance? Could you please recommend some instance types? Much appreciated.
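For context, the local setup being described is roughly the following with llama-cpp-python; the model path and prompts are placeholders, and on the Metal build a single offloaded layer is enough to enable GPU acceleration:

    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-13b-chat.ggmlv3.q4_0.bin",  # placeholder
        n_ctx=4096,
        n_gpu_layers=1,  # with the Metal build, 1 enables GPU offload
    )
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "..."},  # the ~150-word system prompt
            {"role": "user", "content": "Hello"},
        ]
    )
    print(out["choices"][0]["message"]["content"])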
2023-08-23T15:59:47
https://www.reddit.com/r/LocalLLaMA/comments/15z7yg4/recommended_aws_instance_type_to_host_llama_70b/
jThaiLB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7yg4
false
null
t3_15z7yg4
/r/LocalLLaMA/comments/15z7yg4/recommended_aws_instance_type_to_host_llama_70b/
false
false
self
8
null
Production-grade OpenAI API drop-in for CUDA & GPTQ models?
3
Hi all - been part of the community since it started, but now finally preparing to deploy something production-grade professionally. Looking at all the different deployment options and frameworks, I've come to the conclusion that it's going to save my sanity (and others') to just use a drop-in OpenAI API replacement - reminds me of the same standardization cycle as the AWS S3 API. What's a good starting point, or a few to evaluate? Ooba isn't it - but LocalAI, with ExLlama support, looks promising. There are a bunch of GGML options out there for this, but I haven't seen as many for ExLlama or AutoGPTQ. Also open to exploring CTranslate2 or any other ideas folks have. Thanks!
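The appeal of the drop-in approach is that client code never changes; with the openai SDK of that era it is just a base-URL swap. A sketch, where the host, port, key, and model name are placeholders for whatever the local server (e.g., LocalAI) exposes:

    import openai

    openai.api_base = "http://localhost:8080/v1"  # local drop-in endpoint
    openai.api_key = "sk-anything"  # most drop-ins ignore it, but the SDK wants one

    resp = openai.ChatCompletion.create(
        model="llama-2-13b-chat",  # whatever name the server registers
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp["choices"][0]["message"]["content"])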
2023-08-23T15:54:15
https://www.reddit.com/r/LocalLLaMA/comments/15z7t47/productiongrade_openai_api_dropin_for_cuda_gptq/
towelpluswater
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7t47
false
null
t3_15z7t47
/r/LocalLLaMA/comments/15z7t47/productiongrade_openai_api_dropin_for_cuda_gptq/
false
false
self
3
null
WizardCoder Multi File Context
2
Is it possible to allow for multi-file context when using the WizardCoder-15B model? For example, could I just pass the other files in before the current file? Also, is there a way to tell how much context WizardCoder can handle?
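A sketch of the approach floated above - concatenating the other files, each tagged with its filename, ahead of the current file and the instruction (the helper name and tags are illustrative):

    from pathlib import Path

    def build_prompt(context_files, current_file, instruction):
        # Tag each file with its name so the model can tell them apart.
        parts = [f"### File: {p}\n{Path(p).read_text()}" for p in context_files]
        parts.append(f"### Current file: {current_file}\n{Path(current_file).read_text()}")
        parts.append(instruction)
        return "\n\n".join(parts)

As for the limit: WizardCoder-15B is a StarCoder fine-tune, and StarCoder was trained with an 8K-token context; the context-length field in the model's config.json is the authoritative place to check.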
2023-08-23T15:52:32
https://www.reddit.com/r/LocalLLaMA/comments/15z7rhr/wizardcoder_multi_file_context/
kintrith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7rhr
false
null
t3_15z7rhr
/r/LocalLLaMA/comments/15z7rhr/wizardcoder_multi_file_context/
false
false
self
2
null
I have 1 central computer that would run the LLM, is there a way to use an API?
2
I am looking to run a server that hosts the LLM and can be accessed via an API. I will also need to do fine-tuning, but I believe I can handle that outside of the API problem I'm asking about. That said, if there are any mature systems/frameworks out there, I'm all ears.
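One minimal sketch of what this can look like - a FastAPI wrapper around llama-cpp-python, with the model path and endpoint shape as placeholders (llama-cpp-python also ships a ready-made OpenAI-compatible server via `python -m llama_cpp.server`, which may be less work):

    from fastapi import FastAPI
    from pydantic import BaseModel
    from llama_cpp import Llama

    app = FastAPI()
    llm = Llama(model_path="./models/llama-2-13b-chat.ggmlv3.q4_0.bin")  # placeholder

    class Prompt(BaseModel):
        text: str
        max_tokens: int = 256

    @app.post("/generate")
    def generate(req: Prompt):
        out = llm(req.text, max_tokens=req.max_tokens)
        return {"completion": out["choices"][0]["text"]}

    # Run with: uvicorn server:app --host 0.0.0.0 --port 8000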
2023-08-23T15:49:36
https://www.reddit.com/r/LocalLLaMA/comments/15z7oo8/i_have_1_central_computer_that_would_run_the_llm/
pr1vacyn0eb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z7oo8
false
null
t3_15z7oo8
/r/LocalLLaMA/comments/15z7oo8/i_have_1_central_computer_that_would_run_the_llm/
false
false
self
2
null
Max token prompt size for story writing
1
[removed]
2023-08-23T14:23:42
https://www.reddit.com/r/LocalLLaMA/comments/15z5dhi/max_token_prompt_size_for_story_writing/
Spawndli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z5dhi
false
null
t3_15z5dhi
/r/LocalLLaMA/comments/15z5dhi/max_token_prompt_size_for_story_writing/
false
false
self
1
null
Uncensored LLMs that work on languages other than English?
12
I got into using free LLMs by installing bare llama2-uncensored with ollama. It is not as good with Hindi as ChatGPT. Can someone suggest models that work for languages other than English, uncensored preferred? And how do I use them locally on a Mac M1 with 8 GB RAM? Thank you. I only started this morning and have learned a lot about LLMs since then, but please help me out here. Thank you again. Also, I get this error with many models on ollama: Error: Post "http://localhost:11434/api/generate": EOF Is this because I am trying to run models bigger than 7B on 8 GB of RAM?
2023-08-23T14:17:50
https://www.reddit.com/r/LocalLLaMA/comments/15z582o/uncensored_llms_that_work_on_languages_other_than/
throwfalseaway123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15z582o
false
null
t3_15z582o
/r/LocalLLaMA/comments/15z582o/uncensored_llms_that_work_on_languages_other_than/
false
false
self
12
null