Limit the Input/Output of LlaMa 2 to specific topics
4
I have a project I am working on to create a history app that lets people learn about history by asking an AI historical questions. The issue is that I want to limit the output to historical answers only. Is there any way to do that? I am new to this, so it might be a noob question, but I see a lot of resources about fine-tuning the model and not much about restricting input/output scenarios.
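One common approach, sketched below under assumptions (the keyword stub and function names are illustrative, not from the post), is to gate questions with a cheap topic check before they ever reach the model, and to constrain the model itself with a system prompt:

```python
# Hypothetical sketch: gate user questions before they reach the model.
# A cheap classifier (here a keyword stub) decides whether the question is
# on-topic; off-topic questions get a canned refusal. In practice the stub
# would be replaced by an embedding classifier or a moderation model.
HISTORY_KEYWORDS = {"war", "empire", "revolution", "century", "ancient", "king", "treaty"}

def is_history_question(question: str) -> bool:
    words = set(question.lower().split())
    return bool(words & HISTORY_KEYWORDS)

def answer(question: str, llm=lambda p: "model answer") -> str:
    if not is_history_question(question):
        return "Sorry, I can only answer questions about history."
    # Also constrain the model itself with a system prompt.
    prompt = ("You are a history tutor. Answer only questions about history.\n"
              f"Question: {question}\nAnswer:")
    return llm(prompt)
```

The gate and the system prompt are complementary: the gate blocks obviously off-topic input cheaply, while the prompt steers whatever gets through.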
2023-08-07T13:31:38
https://www.reddit.com/r/LocalLLaMA/comments/15kkukk/limit_the_inputoutput_of_llama_2_to_specific/
ojunior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kkukk
false
null
t3_15kkukk
/r/LocalLLaMA/comments/15kkukk/limit_the_inputoutput_of_llama_2_to_specific/
false
false
self
4
null
What's a decent llama-2 model fine tuned for OpenAI style function calls ?
7
I see models like the one below, but it's under a paid license. Are there any open-source ones? Thanks! [https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling)
2023-08-07T13:30:56
https://www.reddit.com/r/LocalLLaMA/comments/15kktxm/whats_a_decent_llama2_model_fine_tuned_for_openai/
Gold_Comparison_3068
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kktxm
false
null
t3_15kktxm
/r/LocalLLaMA/comments/15kktxm/whats_a_decent_llama2_model_fine_tuned_for_openai/
false
false
self
7
{'enabled': False, 'images': [{'id': 'z-tciLuRFNi0luiaJI03HL-ccj97mSAZB4mFb5qSOWA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=108&crop=smart&auto=webp&s=7d44429b776c3df967df4748c2a72ca08e2f98de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=216&crop=smart&auto=webp&s=c55b6bab5bf920690fedc50f4d9172c62d5d9aca', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=320&crop=smart&auto=webp&s=3693b0ba69955f39916a2fd4e0217e51fc903322', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=640&crop=smart&auto=webp&s=439c34016e3420d65b601f4ba050e56bbe6c2101', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=960&crop=smart&auto=webp&s=dd0fccbb31671c0fc108d4504705057aeba3f657', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?width=1080&crop=smart&auto=webp&s=251616971acdf18a8bca2bdef126405025ab17d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jMVxjyICWJdQoTQ2ad17JPA6jhDa6CljgfiXq_iysag.jpg?auto=webp&s=f25ad01548ca467347a3fa30d8a6ec67f9136eb9', 'width': 1200}, 'variants': {}}]}
How to optimize inference for llama related model?
4
Hello! As a newbie to the LLM field, I have a question about how to optimize inference for LLaMA-family models. I'm trying to optimize inference for the Vicuna model ([model link](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF)). I found that ONNX and TensorRT are widely used to optimize LLM inference. There are many examples of applying TensorRT to BERT-family models, but I could not find any tutorials on using TensorRT as an inference optimizer for LLaMA-family models (for example, Vicuna). So my question is: to optimize inference for LLaMA-family models, which method is most widely used? ONNX, TensorRT, or quantization like GGML or GPTQ? If quantization is more commonly used than TensorRT, what is the reason for that? Could you please help me figure it out? Any advice on improving the inference performance of a LLaMA model would be very helpful. Thank you!
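Whichever backend you pick, the fairest comparison is to measure them identically on your own prompts. A minimal sketch (the `generate` callable is a stand-in for any real backend, not an actual ONNX/TensorRT/GGML API):

```python
# Illustrative harness: time any backend's generate(prompt) -> list of tokens
# and report average tokens per second, so ONNX, TensorRT, and quantized
# runtimes can be compared on equal footing.
import time

def benchmark(generate, prompt: str, runs: int = 3) -> float:
    """Return average tokens/second over `runs` calls."""
    total_tokens, total_time = 0, 0.0
    for _ in range(runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        total_time += time.perf_counter() - start
        total_tokens += len(tokens)
    return total_tokens / max(total_time, 1e-9)  # guard against zero elapsed time

# Stub backend standing in for e.g. a quantized Vicuna model.
fake_backend = lambda prompt: ["tok"] * 32
tps = benchmark(fake_backend, "Hello")
```

Swapping `fake_backend` for real backend wrappers gives an apples-to-apples throughput number before committing to one stack.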
2023-08-07T12:46:36
https://www.reddit.com/r/LocalLLaMA/comments/15kjrm3/how_to_optimize_inference_for_llama_related_model/
Fluffy_Pianist_3135
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kjrm3
false
null
t3_15kjrm3
/r/LocalLLaMA/comments/15kjrm3/how_to_optimize_inference_for_llama_related_model/
false
false
self
4
{'enabled': False, 'images': [{'id': 'lRl3SipxuAd2pfX-TGfnDhxzpabUu4mLrPvD6h_TfTI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=108&crop=smart&auto=webp&s=b76f8ccbfd16146ac1855fe96fc4710ff29e946e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=216&crop=smart&auto=webp&s=f395ef8d7cec16b0287e2280fe5ece598d51f4a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=320&crop=smart&auto=webp&s=1f0a2f8a56f978d570a73b61ea9f995ae799084b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=640&crop=smart&auto=webp&s=288d4b4a040e44be55cfd0bcf05f9f1d0d421d2c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=960&crop=smart&auto=webp&s=d4c42f7347e07b07ee6b56f651cac0076eb0cf2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=1080&crop=smart&auto=webp&s=1de56b29995de89448063a59cabd8939902e96de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?auto=webp&s=e9052386db71b626aa8458c4b3052d3b8c8fd9bf', 'width': 1200}, 'variants': {}}]}
Fine-tuning LLM's for roleplay
11
I want to create a convincing conversational character that I can interact with in my game. I've tried creating a character.json in oobabooga with the 13B Nous-Hermes LLaMA-2 model, but the results did not satisfy me. I looked into fine-tuning but never tried it. I know I need to gather a decent amount of info about my character, which I also don't know how to format. Luckily I have enough hardware resources (5x RTX 4090). Do you think using a big model with 4k or even 8k context to create this character will be better, or fine-tuning it? I'm open to any suggestions about fine-tuning.
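On the formatting question, one hedged sketch of a common layout is a persona description plus example exchanges, flattened into training strings. The field names and the character here are entirely illustrative:

```python
# Hypothetical character data layout for roleplay fine-tuning: a persona
# plus example turns, flattened into "### Human / ### Assistant" samples.
import json

character = {
    "name": "Eldrin",                      # hypothetical game character
    "persona": "A weary elven scholar who speaks formally and hates small talk.",
    "conversations": [
        {"user": "Who are you?",
         "character": "I am Eldrin, keeper of the northern archives."},
        {"user": "Nice weather today!",
         "character": "I did not cross three kingdoms to discuss clouds."},
    ],
}

def to_training_sample(char: dict, turn: dict) -> str:
    """Flatten persona + one exchange into a single training string."""
    return (f"{char['persona']}\n### Human: {turn['user']}\n"
            f"### Assistant: {turn['character']}")

samples = [to_training_sample(character, t) for t in character["conversations"]]
print(json.dumps(samples, indent=2))
```

The same persona prefix can then be reused verbatim as the inference prompt, which keeps the in-context and fine-tuned setups directly comparable.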
2023-08-07T12:39:05
https://www.reddit.com/r/LocalLLaMA/comments/15kjlba/finetuning_llms_for_roleplay/
sarimsak13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kjlba
false
null
t3_15kjlba
/r/LocalLLaMA/comments/15kjlba/finetuning_llms_for_roleplay/
false
false
self
11
null
๐ƒ๐จ ๐ฒ๐จ๐ฎ ๐ค๐ง๐จ๐ฐ ๐ฐ๐ก๐ฒ ๐‹๐ฅ๐š๐ฆ๐š ๐Ÿฆ™ ๐จ๐ซ ๐‚๐ก๐š๐ญ๐†๐๐“ ๐Ÿค– ๐ก๐š๐ฏ๐ž ๐ญ๐ž๐ฑ๐ญ ๐ฌ๐ข๐ณ๐ž ๐ฅ๐ข๐ฆ๐ข๐ญ๐ฌ? ๐Ÿค”๐Ÿ’ญ Find it out! ๐Ÿ’ก
1
2023-08-07T12:21:59
https://medium.com/@joaolages/the-quest-to-have-endless-conversations-with-llama-and-chatgpt-%EF%B8%8F-81360b9b34b2
JClub
medium.com
1970-01-01T00:00:00
0
{}
15kj71f
false
null
t3_15kj71f
/r/LocalLLaMA/comments/15kj71f/๐ƒ๐จ_๐ฒ๐จ๐ฎ_๐ค๐ง๐จ๐ฐ_๐ฐ๐ก๐ฒ_๐‹๐ฅ๐š๐ฆ๐š_๐จ๐ซ_๐‚๐ก๐š๐ญ๐†๐๐“_๐ก๐š๐ฏ๐ž_๐ญ๐ž๐ฑ๐ญ_๐ฌ๐ข๐ณ๐ž/
false
false
default
1
{'enabled': False, 'images': [{'id': 'Bd2tKVhk26ts2G8Ag4O6orZvv4fbvZNAF7Q03ikzADQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=108&crop=smart&auto=webp&s=ba96a68eba2babade967b840ab3c1d89e134154e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=216&crop=smart&auto=webp&s=6e06649464f3ca435c32f2ed9e73168e9273fb90', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=320&crop=smart&auto=webp&s=d8d718d986fc19b9bd1bb0562e6733d0ee2ba125', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=640&crop=smart&auto=webp&s=5a5f3c7aeeb56bbab7b0db89995dd4919050176f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=960&crop=smart&auto=webp&s=7dc8f89416830f2e71160073e28463a450434e17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=1080&crop=smart&auto=webp&s=92d2d7a7907d777bff8e63ad2d24594dae67f25c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?auto=webp&s=0b855eb3d262137937689963a3854557f4517b0b', 'width': 1200}, 'variants': {}}]}
llama-gaan-2-7b-chat-hf-dutch 🇳🇱
29
If it ain't Dutch, it ain't much [😉](https://emojipedia.org/winking-face/)! [https://huggingface.co/Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch](https://huggingface.co/Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch) At Mirage Studio we just released Llama-gaan, an open-source 🇳🇱-finetuned version of everyone's favorite Llama 2. Llama-gaan represents a major milestone in the Llama-2 catalogue since, as far as we know, we are the first to create such a customized model for the Dutch language. While the model has plenty of room for improvement, we wanted to release this asap, because it already speaks Dutch better than Meta's own Llama 2. And now we can't wait to see what the community will contribute to make it even better. Give it a try and let us know! *Example question:* Wanneer is Nederland ontstaan? ("When did the Netherlands come into existence?") *Llama-gaan result:* Nederland is een land in Noordwest-Europa dat zich bevindt in de provincie Holland. Het is een van de meest bevolkte landen van Europa en is een van de meest bezochte landen ter wereld. Nederland is een van de meest ontwikkelde landen ter wereld en heeft een van de hoogste levensstandaarden. Het is een van de meest diverse landen ter wereld en heeft een lange geschiedenis van immigratie. Also, this message was generated by our very own [MirageGPT](https://www.mirage-studio.io/) ;)
2023-08-07T12:05:25
https://www.reddit.com/r/LocalLLaMA/comments/15kitng/llamagaan27bchathfdutch/
gijs4g
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kitng
false
null
t3_15kitng
/r/LocalLLaMA/comments/15kitng/llamagaan27bchathfdutch/
false
false
self
29
{'enabled': False, 'images': [{'id': 'TYg5_5x2yp-Ny10ISVzht6UW6sIt0n0UWd8ROlAPJqs', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?width=108&crop=smart&auto=webp&s=a4c3348398bb118872a1fd866ba65b817340ce3f', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?width=216&crop=smart&auto=webp&s=4d0220212c34caede329a6d9a92d1b896b288e24', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?width=320&crop=smart&auto=webp&s=d0f6944fdf14cc486c12dbfee736a7d8379b1b73', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/Pl8jDgd6Vq-H9rJgHFXWMecxyAsgCygbeVsuuS8Xcuk.jpg?auto=webp&s=6065ad7ab9055e9bcf262f4a8afe7adf13362cb0', 'width': 560}, 'variants': {}}]}
Getting wrong answer after fine tuning.
3
I fine-tuned with PEFT/LoRA on a custom question-answer dataset, and it doesn't give accurate output even for questions taken straight from the dataset. For example, question: "where does elon musk born?", answer: "elon musk borned in america.", even though the accurate answer is given in the dataset. Any idea how to fix it? Do I need to change something in preprocessing, or something else? Can anyone help me? I am stressing about this.
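One frequent culprit, offered here as a hedged guess rather than a diagnosis, is a mismatch between the prompt template used for training and the one used at inference. Centralising the template makes the two provably identical (the `### Question / ### Answer` format below is illustrative):

```python
# Sketch: keep one template so the inference prompt is always an exact
# prefix of the training text; LoRA models are sensitive to this.
TEMPLATE = "### Question: {q}\n### Answer:"   # illustrative format

def training_text(q: str, a: str) -> str:
    return TEMPLATE.format(q=q) + " " + a

def inference_prompt(q: str) -> str:
    return TEMPLATE.format(q=q)

# The inference prompt must be an exact prefix of the training text.
sample = training_text("Where was Elon Musk born?", "Pretoria, South Africa.")
assert sample.startswith(inference_prompt("Where was Elon Musk born?"))
```

If the templates already match, the next things to check are tokenizer settings (added special tokens) and whether the loss was masked over the prompt portion during training.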
2023-08-07T11:23:43
https://www.reddit.com/r/LocalLLaMA/comments/15khwlm/getting_wrong_answer_after_fine_tuning/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15khwlm
false
null
t3_15khwlm
/r/LocalLLaMA/comments/15khwlm/getting_wrong_answer_after_fine_tuning/
false
false
self
3
null
Enhancing a foundational model locally
1
[removed]
2023-08-07T10:59:17
https://www.reddit.com/r/LocalLLaMA/comments/15khd28/enhancing_a_foundational_model_locally/
Decent_Pattern_9343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15khd28
false
null
t3_15khd28
/r/LocalLLaMA/comments/15khd28/enhancing_a_foundational_model_locally/
false
false
self
1
null
Open Source Models or OpenAI API for Document Summarization and Question Answering
4
Hello everyone! I am working on an application that leverages LLMs for document summarization and question answering. The documents I'll mainly be working on are RFP documents, to automate responses to them. I can't really decide whether I should invest in a better system and deploy a local LLM, or just get the OpenAI API and do it that way. I will mainly be using LangChain, although if you think there are better options please let me know (I am a total noob at this). From what little I've learned by scanning its docs, it pretty much handles all of the use cases I need; I just don't know which model to use, and whether ultimately it's better to just use GPT-3.5 through the API. This application is intended to be commercialized, so I guess hosting it locally would be too expensive. Please give suggestions. I haven't talked to my boss yet, as I've mostly been experimenting on Colab, but the free tier is so limited in RAM that apart from very small models it's a pain to work with, and you run into time limits as well. So, should I ask my boss for a beefier machine or an OpenAI API subscription? I would greatly appreciate any guidance or insights. Thanks in advance!
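The API-vs-local decision is mostly arithmetic. A back-of-envelope sketch, where all prices and volumes are assumptions to be replaced with real quotes:

```python
# Illustrative cost comparison; the prices below are assumed placeholders,
# not current API or GPU-rental quotes.
def api_monthly_cost(docs_per_month: int, tokens_per_doc: int,
                     price_per_1k_tokens: float) -> float:
    return docs_per_month * tokens_per_doc / 1000 * price_per_1k_tokens

def gpu_monthly_cost(hourly_rate: float, hours_per_day: float = 8) -> float:
    return hourly_rate * hours_per_day * 30

# Example: 500 RFP documents of ~20k tokens each, at assumed rates.
api = api_monthly_cost(500, 20_000, price_per_1k_tokens=0.002)
gpu = gpu_monthly_cost(hourly_rate=0.79)
```

Running both numbers against realistic document volumes usually makes the choice obvious well before any hardware purchase.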
2023-08-07T10:32:33
https://www.reddit.com/r/LocalLLaMA/comments/15kgu48/open_source_models_or_openai_api_for_document/
Chuckycutie1993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kgu48
false
null
t3_15kgu48
/r/LocalLLaMA/comments/15kgu48/open_source_models_or_openai_api_for_document/
false
false
self
4
null
Qwen 7B
1
[removed]
2023-08-07T09:50:00
https://www.reddit.com/r/LocalLLaMA/comments/15kg1du/qwen_7b/
lavajam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kg1du
false
null
t3_15kg1du
/r/LocalLLaMA/comments/15kg1du/qwen_7b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
Instruction Fine-Tuning to explicitly follow a certain Context?
6
I want to know if it is possible to instruction fine-tune a model to answer a question based on a particular context, the idea being that during inference the context will be replaced by the relevant documents returned from a vector-DB search. **What I have already tried** 1. Regular RAG over my documents: it either gave incomplete answers or just kept repeating maybe one line of the provided context. Temperature was set to 0, so that was probably the reason for the repetition; I even adjusted the repetition penalty, and still no change. 2. Fine-tuning on a "### Human: <question to be answered based on document> ### Assistant: <answer to question using context from document>" style dataset. Here I am not explicitly telling the model to follow the context, just what to generate. The model lost its generalization and lost the ability to follow the instruction asking it to pay attention to the context provided during inference. **Coming back to my question** Is it possible that a dataset such as the following will make the model follow the context explicitly as part of the instruction fine-tuning? "**### Instruction:** Generate a procedure for the following: **### Input:** <some input> while following this context. **### Context:** <relevant context>. **### Output:** <answer that follows the context>" Please let me know if such datasets already exist. I have found [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), which has a context portion, but I don't know how the model learns to follow the context during generation.
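The record format proposed above can be sketched as a small builder, which also shows where the retrieved documents would be spliced in at inference time. The section headers follow the post; everything else (function names, example strings) is illustrative:

```python
# Sketch of the proposed Instruction/Input/Context/Output record format.
def build_record(instruction: str, input_text: str, context: str, output: str) -> str:
    return ("### Instruction: " + instruction + "\n"
            "### Input: " + input_text + "\n"
            "### Context: " + context + "\n"
            "### Output: " + output)

def build_prompt(instruction: str, input_text: str, context: str) -> str:
    """At inference, everything up to the output header; the documents
    retrieved from the vector DB are pasted in as `context`."""
    return build_record(instruction, input_text, context, "").rstrip()

rec = build_record("Generate a procedure for the following:",
                   "replacing a pump seal",
                   "Manual section 4.2: depressurise the line before work.",
                   "1. Depressurise the line. 2. Remove the housing.")
```

Because training records and inference prompts come from the same builder, the model sees the context in exactly the position it was trained to attend to.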
2023-08-07T09:44:08
https://www.reddit.com/r/LocalLLaMA/comments/15kfxj1/instruction_finetuning_to_explicitly_follow_a/
ShaneMathy911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kfxj1
false
null
t3_15kfxj1
/r/LocalLLaMA/comments/15kfxj1/instruction_finetuning_to_explicitly_follow_a/
false
false
self
6
{'enabled': False, 'images': [{'id': '185ujYrwJ2EOh6mGp-l9eYbo_lk0hPRZdekaAQhzzZs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=108&crop=smart&auto=webp&s=25c02f753e4e995e00b31f979f098a1d32a76339', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=216&crop=smart&auto=webp&s=c71c0d43184d353dcdd696d3d476ba8fa503f6b2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=320&crop=smart&auto=webp&s=a72aef4d9cb1684b021eb1dcadaec9d66d097f7e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=640&crop=smart&auto=webp&s=d50d8b7275e2b8234a3ca628d7d30fcf5264bfe7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=960&crop=smart&auto=webp&s=8d93caa62b7007a7a00c47fad77cc40c2e21d181', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?width=1080&crop=smart&auto=webp&s=7177455cbe550ca69ab33bf76b531fddec57881c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/txx2VhCwQTgljoODlhhYc_faL_kN5MP0P-MwRRqdPyQ.jpg?auto=webp&s=75daad98aebd2f8e505e2beb3f66f0fc521621dd', 'width': 1200}, 'variants': {}}]}
OpenChat 3.2
1
[removed]
2023-08-07T09:29:04
https://www.reddit.com/r/LocalLLaMA/comments/15kfnm1/openchat_32/
imonenext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kfnm1
false
null
t3_15kfnm1
/r/LocalLLaMA/comments/15kfnm1/openchat_32/
false
false
self
1
null
Llama 2 officially recommends removing the default system prompt
97
2023-08-07T09:14:42
https://github.com/facebookresearch/llama#system-prompt-update
hackerllama
github.com
1970-01-01T00:00:00
0
{}
15kfe8p
false
null
t3_15kfe8p
/r/LocalLLaMA/comments/15kfe8p/llama_2_officially_recommends_removing_the/
false
false
https://b.thumbs.redditm…ykXreJILEJ8U.jpg
97
{'enabled': False, 'images': [{'id': 'FGHADZF6kKD31GejH2OnPEP-YDb6BsOBjcCJJKsF9Bc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=108&crop=smart&auto=webp&s=bf9621bee0952932576a0b76165d02c71b71a6b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=216&crop=smart&auto=webp&s=67dc90b46419e6d60d553dc4c3e78ea7ff8c27bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=320&crop=smart&auto=webp&s=ced3e0f281f786462efeed34ebaf94dca9119fba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=640&crop=smart&auto=webp&s=219465ec9da49c3bd5bf73e1c54e7c67c0b9a079', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=960&crop=smart&auto=webp&s=48e8d6beb330c153f9c1a4044fd7ade268788040', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?width=1080&crop=smart&auto=webp&s=e0fe55c8f58645f3f3d7c268acc22d1f32dc390f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q6uqIm8ZHsU8dvI2bphFuxKJq8_G421UZ1s-cNMYu8I.jpg?auto=webp&s=66d454f9704b4060ec583e03beebbd393af55e5c', 'width': 1200}, 'variants': {}}]}
Anyone use this
0
Google made this, yeah?
2023-08-07T08:23:56
https://www.reddit.com/r/LocalLLaMA/comments/15kehp2/anyone_use_this/
LearnOnnReddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kehp2
false
null
t3_15kehp2
/r/LocalLLaMA/comments/15kehp2/anyone_use_this/
false
false
self
0
null
Falcon-7b qlora finetuned generates extra response.
5
So I fine-tuned Falcon-7B using PEFT on a Python code-instructions dataset (18k rows) for a single epoch. During inference the model generates some extra text after answering the question: sometimes it starts another question by itself and answers it, and sometimes it keeps on explaining the answer. Why is this happening, and how can I avoid it?
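Runaway generations like this often mean the training samples never taught the model where an answer ends. A hedged sketch of two common mitigations, with an illustrative end-of-sequence marker (the real one comes from the model's tokenizer):

```python
# Sketch: append an end-of-sequence marker to every training sample, and
# cut generations at stop sequences on the inference side.
EOS = "</s>"   # illustrative; use the tokenizer's actual EOS token

def add_eos(sample: str) -> str:
    return sample if sample.endswith(EOS) else sample + EOS

def truncate_at_stops(text: str, stops=("\n### Instruction", EOS)) -> str:
    for s in stops:
        i = text.find(s)
        if i != -1:
            text = text[:i]
    return text

out = truncate_at_stops("def add(a, b):\n    return a + b</s>### Instruction: next q")
```

Training with the EOS appended teaches the model to stop; the stop-sequence truncation is a safety net for models that have not yet learned it.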
2023-08-07T06:40:03
https://www.reddit.com/r/LocalLLaMA/comments/15kcohd/falcon7b_qlora_finetuned_generates_extra_response/
thepotatoguy17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kcohd
false
null
t3_15kcohd
/r/LocalLLaMA/comments/15kcohd/falcon7b_qlora_finetuned_generates_extra_response/
false
false
self
5
null
Using LLMs to query personal data. Does it suck or am I doing it wrong
10
I've tried experimenting with privateGPT and compatible models over the last few days on a Runpod A40 instance. Setup went fine: I provided source docs of various types, did the ingest, and ran the model. But it seems like it can't answer basic questions. For example: "What are the top five most frequently mentioned countries in the source document?" Nope, unable to do it no matter the model. Does it suck, or am I using it wrong? For reference, it's a CSV file of CRS report titles.
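This failure is expected rather than a bug: retrieval-based QA only feeds the model a handful of chunks, so it cannot count across a whole CSV. Aggregations are better done in plain code, with the LLM reserved for fuzzy extraction. A stdlib sketch (the `country` column name is an assumption about the data):

```python
# Sketch: answer "top N most frequent" questions over a CSV with code,
# not with retrieval-augmented generation.
import csv, io
from collections import Counter

def top_countries(csv_text: str, column: str = "country", n: int = 5):
    rows = csv.DictReader(io.StringIO(csv_text))
    counts = Counter(row[column] for row in rows if row.get(column))
    return counts.most_common(n)

data = "title,country\nReport A,France\nReport B,France\nReport C,Chile\n"
print(top_countries(data, n=2))   # -> [('France', 2), ('Chile', 1)]
```

If country names are only embedded in free-text titles, an LLM can first extract the country per row, and this kind of counting still happens in code afterwards.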
2023-08-07T06:09:57
https://www.reddit.com/r/LocalLLaMA/comments/15kc4z8/using_llms_to_query_personal_data_does_it_suck_or/
badatmathdave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kc4z8
false
null
t3_15kc4z8
/r/LocalLLaMA/comments/15kc4z8/using_llms_to_query_personal_data_does_it_suck_or/
false
false
self
10
null
how to make multiple inference requests from a single model object? (llama-cpp-python)
11
I was planning to set up a local LLaMA API server for our team, and I have been using llama-cpp-python with Flask for this. The code looks kind of like this:

```python
llm = Llama(model_path)

@app.route("/llama", methods=["POST"])
def inference():
    return llm(request.json["prompt"])
```

But whenever I make more than 2 POST requests at once (i.e. 2 concurrent inference requests), the model seems to crash without any logs. My guess is that some variable in the Llama class from llama-cpp-python is being accessed by both requests at once, but I'm not so sure. Are there any better methods to create an API like this? (Using the luna-ai-llama2-uncensored GGML model for this.) Let me know if you need other details.
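A llama.cpp context is generally not safe for concurrent generation, so one simple fix, sketched here with a stub standing in for the real `Llama` object, is to serialise requests with a lock (or a worker queue):

```python
# Hedged sketch: only one thread runs the model at a time; others wait.
# The `llm` callable is a stub standing in for llama_cpp.Llama.
import threading

_lock = threading.Lock()
llm = lambda prompt: f"answer to: {prompt}"   # stub model

def safe_inference(prompt: str) -> str:
    with _lock:                # serialise access to the shared model state
        return llm(prompt)

# Concurrent callers now queue instead of corrupting shared state.
results = []
threads = [threading.Thread(target=lambda p=p: results.append(safe_inference(p)))
           for p in ("hi", "bye")]
for t in threads: t.start()
for t in threads: t.join()
```

The lock trades throughput for stability; for real parallelism you would run multiple model processes behind a queue instead.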
2023-08-07T05:25:27
https://www.reddit.com/r/LocalLLaMA/comments/15kbbna/how_to_make_multiple_inference_requests_from_a/
nerdyvaroo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15kbbna
false
null
t3_15kbbna
/r/LocalLLaMA/comments/15kbbna/how_to_make_multiple_inference_requests_from_a/
false
false
self
11
null
How does Microsoft Guidance work?
27
Microsoft's Guidance has been recommended for more reliable structuring of the output. I have some fundamental doubts about how it applies: - It looks like a programming language. Wouldn't an LLM need to know how to "interpret" this {{ }} syntax for it to work? So it shouldn't be applicable to all LLMs by default. Or is there a second agent that post-processes the output of a base LLM? - If there is a post-processing LLM, then when the base output doesn't contain entities that fit the variables, there could be issues. Or is this uncommon and the system usually reliable? - Or is it a prompt-engineering system, where the variables define a way for Guidance to build the actual prompt so that those entities are guaranteed? - The speedup from Guidance Acceleration mentions cached key-value pairs. That suggests to me we'd only see speedups on repeated queries of the same type, where cached values can be reused; a single call should take about the same time with or without Guidance?
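As a rough intuition (this is an illustrative toy, not the real Guidance internals): the library itself parses the {{ }} template in ordinary host-language code. Literal text is appended to the prompt directly and only the variable slots trigger model calls, so the base LLM never has to understand the syntax and no second LLM is involved:

```python
# Toy template interpreter: fixed text is emitted as-is, and the model is
# called only to fill each {{gen '...'}} slot, conditioned on everything
# emitted so far.
import re

def run_template(template: str, generate) -> str:
    out, pos = [], 0
    for m in re.finditer(r"\{\{gen '(\w+)'\}\}", template):
        out.append(template[pos:m.start()])      # literal text: emitted, not generated
        out.append(generate("".join(out)))       # model fills only this slot
        pos = m.end()
    out.append(template[pos:])
    return "".join(out)

stub = lambda prompt: "Paris"                    # stand-in model
result = run_template("The capital of France is {{gen 'city'}}.", stub)
print(result)   # -> The capital of France is Paris.
```

This also explains the acceleration claim: because the literal segments are fixed, their key-value cache can be reused across the slot-filling calls within a single template run, not just across repeated queries.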
2023-08-07T03:44:17
https://www.reddit.com/r/LocalLLaMA/comments/15k9efr/how_does_microsoft_guidance_work/
T_hank
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k9efr
false
null
t3_15k9efr
/r/LocalLLaMA/comments/15k9efr/how_does_microsoft_guidance_work/
false
false
self
27
null
8 bit WizardCoder 15B
12
I think 8-bit WizardCoder should run on a 3090/4090, as it should need about 23.07 GB of VRAM. Is there anywhere I can acquire an 8-bit WizardCoder model? Or how can I create one myself from the original? I'm currently running 4-bit WizardCoder, btw, just trying to get some improved accuracy.
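For a sanity check on such figures, the raw weight memory is simple arithmetic (real usage adds activations, KV cache, and framework overhead on top, which is likely where estimates above the raw weight size come from). A sketch, with the parameter count as an approximation:

```python
# Rough VRAM arithmetic for the weights alone, as an illustration.
def weight_gib(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 2**30

fifteen_b = 15.5e9          # approximate parameter count of WizardCoder-15B
print(round(weight_gib(fifteen_b, 16), 1))  # fp16 weights -> 28.9
print(round(weight_gib(fifteen_b, 8), 1))   # int8 weights -> 14.4
print(round(weight_gib(fifteen_b, 4), 1))   # 4-bit weights -> 7.2
```

So int8 weights fit comfortably in 24 GB; whether the full runtime footprint does depends on context length and backend overhead.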
2023-08-07T03:24:17
https://www.reddit.com/r/LocalLLaMA/comments/15k90yz/8_bit_wizardcoder_15b/
kintrith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k90yz
false
null
t3_15k90yz
/r/LocalLLaMA/comments/15k90yz/8_bit_wizardcoder_15b/
false
false
self
12
null
Can I use LLM to compare 2 pdf documents and find changes ?
5
So I review a lot of documents, and sometimes minor but essential changes occur after I'm done reviewing. I was wondering whether an LLM could be a way to find the changes quickly, instead of going through hundreds of pages for an article. Could embeddings be used to achieve that?
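For literal change-finding, a plain text diff is usually more reliable than embeddings: extract the text from both PDFs (with any PDF library) and diff it, saving the LLM for judging whether each change matters. A stdlib sketch:

```python
# Sketch: line-level diff of two extracted document texts.
import difflib

def changed_lines(old_text: str, new_text: str):
    diff = difflib.unified_diff(old_text.splitlines(), new_text.splitlines(),
                                lineterm="", n=0)
    # Keep only added/removed lines, dropping the diff header lines.
    return [l for l in diff if l[:1] in "+-" and l[:3] not in ("+++", "---")]

old = "The fee is 2%.\nDelivery within 30 days.\n"
new = "The fee is 3%.\nDelivery within 30 days.\n"
print(changed_lines(old, new))   # -> ['-The fee is 2%.', '+The fee is 3%.']
```

An LLM (or embeddings) earns its keep afterwards, summarising the diff or flagging which changed clauses are substantive rather than cosmetic.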
2023-08-07T02:06:28
https://www.reddit.com/r/LocalLLaMA/comments/15k7g0f/can_i_use_llm_to_compare_2_pdf_documents_and_find/
Plane_Ad9568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k7g0f
false
null
t3_15k7g0f
/r/LocalLLaMA/comments/15k7g0f/can_i_use_llm_to_compare_2_pdf_documents_and_find/
false
false
self
5
null
WizardLM-1.0-Uncensored-Llama2-13b
112
Today I released WizardLM-1.0-Uncensored-Llama2-13b [https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b) Based on the [WizardLM/WizardLM\_evol\_instruct\_V2\_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) dataset, which I filtered to remove refusals, avoidance, and bias: [ehartford/WizardLM\_evol\_instruct\_V2\_196k\_unfiltered\_merged\_split](https://huggingface.co/datasets/ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split) I trained this with Vicuna's FastChat, as the new data is in ShareGPT format and the WizardLM team has not specified a method to train it. This is a full-weights training, not a LoRA. The prompt format is Vicuna-1.1 style. While the instruct layer has been uncensored, Llama 2 has opinions and will resist instructions it disagrees with. You must overcome that with your own wit and prompt engineering. Finally, you are responsible for the content you create using this model. It is uncensored and can produce illegal and unethical content if you ask it to. If deploying this in a service, I would recommend you train a LoRA to implement your own alignment to prevent unwanted behavior. Thank you to the open-source community and everyone who helped me.
2023-08-07T02:03:29
https://www.reddit.com/r/LocalLLaMA/comments/15k7dv9/wizardlm10uncensoredllama213b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k7dv9
false
null
t3_15k7dv9
/r/LocalLLaMA/comments/15k7dv9/wizardlm10uncensoredllama213b/
false
false
self
112
{'enabled': False, 'images': [{'id': 'XOHpgHNy2-sJYt-7otMYQr6A_lch0wGQeb5mnw39QKQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=108&crop=smart&auto=webp&s=aa773b62368ae0a87bcb427e8ce76f550c2d447b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=216&crop=smart&auto=webp&s=df0c0f27c0d720383e67b6c88459adaa5a100b96', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=320&crop=smart&auto=webp&s=1b41e9e067f2be185d446dc1f79d3a8b31c1d32f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=640&crop=smart&auto=webp&s=8380a79aadf7aae0be05c74e57fddf4d1ad0f059', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=960&crop=smart&auto=webp&s=b18838e7b6603017dbc38e531bd52e9cfaec3c83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?width=1080&crop=smart&auto=webp&s=478b0364753e57d9933345b25a83a22b003ae37d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8o_c9AAxq8qn_syV3jHVABmVXGhLkLtmI9gelzOdtkA.jpg?auto=webp&s=b68a9998dc2ce0efe3b3e932fd7157f235ff80f1', 'width': 1200}, 'variants': {}}]}
dolphin-llama2-7b
190
Today, I released dolphin-llama2-7b, sponsored by [preemo](https://www.preemo.io/) [https://huggingface.co/ehartford/dolphin-llama2-7b](https://huggingface.co/ehartford/dolphin-llama2-7b) [**https://erichartford.com/dolphin**](https://erichartford.com/dolphin) This model is based on Llama 2 and is thus suitable for commercial or non-commercial use. The dataset consisted of: * 842,610 instructions of FLANv2 augmented with GPT-4 completions * 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant with requests. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: [**https://erichartford.com/uncensored-models**](https://erichartford.com/uncensored-models) You are responsible for any content you create using this model. Enjoy responsibly. Gratitude to my core team - Pankaj Mathur, Rob "Rohan" O'Callahan, and Tom "TheBloke" Jobbins. Thanks and respect to those who helped - including Wing "Caseus" Lian, NanoBit, and Teknium. If you like my work you are welcome to [buy me a coffee](https://www.buymeacoffee.com/ehartford)
2023-08-07T01:56:56
https://www.reddit.com/r/LocalLLaMA/comments/15k78yi/dolphinllama27b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k78yi
false
null
t3_15k78yi
/r/LocalLLaMA/comments/15k78yi/dolphinllama27b/
false
false
self
190
null
Does anyone know of any information on how to finetune an LLM on an image/language pair dataset?
5
I've seen the end product and am now just reading through code, but any sort of introduction to the problem would be a great time saver. I'm hoping to fine-tune an LLM on a custom dataset that I believe it may do better on than traditional approaches, due to the ability to feed it associated contextual text with each image. I've looked, but I haven't seen anything that walks through how to do it.
2023-08-07T01:04:02
https://www.reddit.com/r/LocalLLaMA/comments/15k655k/does_anyone_know_of_any_information_on_how_to/
CertainCoat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k655k
false
null
t3_15k655k
/r/LocalLLaMA/comments/15k655k/does_anyone_know_of_any_information_on_how_to/
false
false
self
5
null
Specifying output format
8
Hi! I have a tinkering project where I need the model to generate an answer according to a defined format. For example, answering with only one word. I tried a few models in GPTQ format, but it looks like they often just ignore the request to answer in just one word. The answers are somewhat accurate, but mostly lengthy. Do you have any experience in making an LLM "behave"? :) Maybe a specific model to try? I got 24GB VRAM and I'm not afraid to use it! ^^
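One model-agnostic fallback is to enforce the format in post-processing rather than relying on the prompt alone: take whatever the model replies and keep only the first word. A minimal sketch — the regex and function name are illustrative, not from any particular library:

```python
import re

def first_word(answer: str) -> str:
    """Extract the first word from a model response, stripping punctuation."""
    match = re.search(r"[A-Za-z0-9'-]+", answer)
    return match.group(0) if match else ""

print(first_word("Paris, of course! The capital of France..."))  # Paris
```

This doesn't make the model terser, but it guarantees the downstream consumer always sees a single word regardless of how chatty the generation is.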
2023-08-06T23:19:54
https://www.reddit.com/r/LocalLLaMA/comments/15k3u2g/specifying_output_format/
maz3max
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k3u2g
false
null
t3_15k3u2g
/r/LocalLLaMA/comments/15k3u2g/specifying_output_format/
false
false
self
8
null
Any Australian, or US West Coast, based LLM / AI technology specialists?
1
2023-08-06T23:12:37
https://i.redd.it/j7qdhej4okgb1.png
Rufawana
i.redd.it
1970-01-01T00:00:00
0
{}
15k3o00
false
null
t3_15k3o00
/r/LocalLLaMA/comments/15k3o00/any_australian_or_us_west_coast_based_llm_ai/
false
false
https://b.thumbs.redditmโ€ฆsbKtn83R-qLw.jpg
1
{'enabled': True, 'images': [{'id': 'GZpbe6yagQBIBRhPAiqbtmwgjlJU-osglstOmWBm08Y', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=108&crop=smart&auto=webp&s=30814f39f87a44baaabcbd3f03d236b222929e2c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=216&crop=smart&auto=webp&s=83a351141911d6d739a4c5747123bf6b9f2161b6', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=320&crop=smart&auto=webp&s=a221fda2bb955448269d89446c7c54257a2ee07a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=640&crop=smart&auto=webp&s=da940e1c9af2d9b85a34892fa0aa35ac43a0c2b3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?width=960&crop=smart&auto=webp&s=3a4c63f9d7d1820d356d65e930771b4bb078fb16', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/j7qdhej4okgb1.png?auto=webp&s=9c202ce4a60be9687684b534ac03f06092edf90c', 'width': 1024}, 'variants': {}}]}
Mamba v3 3b: when did humans first learn how to whistle a tune by puckering their lips?
0
> when did humans first learn how to whistle a tune by puckering their lips? The earliest known examples of whistling and the production of complex, tonal musical instruments date back to 360,000 B.C., when the first hominids began to use their hands and hands to produce sounds similar to bird calls or flutes as their ancestors migrated from Africa to Eurasia in search of food. But the fact is that the human ability to whistle was not the result of natural evolution, but rather artificial, through the use of teeth and lips. Humans have been producing sounds by manipulating their teeth, lips and tongue since they first learned to make sounds, either through sound production or through noise amplification through a tube-like instrument. The early humans learned to whistle by puckering their lips (the same way as how we open our mouth to speak), which produces the high-pitched squeaks that many people associate with whistling and which they have since used for food gathering, communication, or entertainment. It was not until about 30,000 years ago that a tool was developed with teeth and a handle to produce higher pitched sounds, making it possible to whistle at a much higher pitch and in more complex patterns. It is important to note that whistling did not develop out of necessity; rather, it was a unique way for early humans to communicate with one another, either through sound production or noise amplification. It is believed that the first instruments were used as part of ritual performances, with whistles and other pipes being used in ceremonies or rituals to honor elders, announce a new leader, or mark important events. To conclude, it is safe to say that the practice of whistling began through the use of teeth, lips, and hands rather than as an evolutionary trait. 
However, this does not mean that whistling is not useful in our daily lives - while we may not have evolved the ability to whistle, many people find it a useful way to express themselves or convey their message to others.
2023-08-06T23:01:42
https://www.reddit.com/r/LocalLLaMA/comments/15k3ete/mamba_v3_3b_when_did_humans_first_learn_how_to/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k3ete
false
null
t3_15k3ete
/r/LocalLLaMA/comments/15k3ete/mamba_v3_3b_when_did_humans_first_learn_how_to/
false
false
self
0
null
Any news on BigCode? (octocoder and starcoder cosmos)
31
Some weeks ago BigCode said in a tweet that they trained smaller versions of the starcoder model (1B, 3B and 7B). Meanwhile they created a GitHub repo about octocoder and a new dataset with 4 TB of data, more than the 2.9 TB of the cleaned stack. So, anybody have any new information? Are they training a new model? Will they release a new dataset? And where are the smaller models? Just curious
2023-08-06T22:59:09
https://www.reddit.com/r/LocalLLaMA/comments/15k3cjn/any_news_on_bigcode_octocoder_and_starcoder_cosmos/
GG9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k3cjn
false
null
t3_15k3cjn
/r/LocalLLaMA/comments/15k3cjn/any_news_on_bigcode_octocoder_and_starcoder_cosmos/
false
false
self
31
null
First Indonesian LLM?
36
99% of posts in LocalLLaMa are dedicated to English models. So today is that rare exception when the post will be about non-English model. Here it is: [Ichsan2895/Merak-7B-v1 ยท Hugging Face](https://huggingface.co/Ichsan2895/Merak-7B-v1) GGML version: [s3nh/Merak-7B-v1-GGML at main (huggingface.co)](https://huggingface.co/s3nh/Merak-7B-v1-GGML/tree/main) P.S: If you managed to find other non-English ggml models, you can post links to them in comments. Thank you.
2023-08-06T22:13:29
https://www.reddit.com/r/LocalLLaMA/comments/15k2ami/first_indonesian_llm/
Languages_Learner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k2ami
false
null
t3_15k2ami
/r/LocalLLaMA/comments/15k2ami/first_indonesian_llm/
false
false
self
36
{'enabled': False, 'images': [{'id': '2lbpttmquP319hezfse47DJxEwuNaYUdtjB5VTuzhT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=108&crop=smart&auto=webp&s=995c3f055b80321894b7c394d2c3cf0a27c05bef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=216&crop=smart&auto=webp&s=d023513291fa97fc113d9ba96f060703fe652ebe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=320&crop=smart&auto=webp&s=acdb2f92e0544bbee515e0cb592011abd137a0ea', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=640&crop=smart&auto=webp&s=b647460787d1e6833ecb9d738d0ae044ce83e843', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=960&crop=smart&auto=webp&s=30b1548c08fcac6ae7561c69b5252b29bc020870', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?width=1080&crop=smart&auto=webp&s=5c253ff1463530936adf903bf20fea087502cad1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p7D4uETpHk9odOyT7ftZPfA7aKFofcWrbnD28uIpAu0.jpg?auto=webp&s=59e2634b80f8a40c738a09ccf6e5b89d5e2172ba', 'width': 1200}, 'variants': {}}]}
Local Llama-2 API in Rust
18
Hello, I have been working on an OpenAI-compatible API for serving LLAMA-2 models written entirely in Rust. It supports offloading computation to Nvidia GPU and Metal acceleration for GGML models. Here is the project link: [cria](https://github.com/AmineDiro/cria/blob/main/README.md). You can use it as an OpenAI replacement (check out the included `Langchain` example in the project). This is an ongoing project; I have implemented the `embeddings` and `completions` routes. The `chat-completion` route will be here very soon! Really interested in your feedback and I would welcome any help :) !
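For anyone wanting to point an existing OpenAI-style client at a local server like this, the request shape is the standard completions payload. A hedged sketch using only the Python standard library — the host, port, and model name below are placeholders, not cria's actual defaults:

```python
import json
from urllib.request import Request

# Hypothetical local endpoint; the real host/port depend on how the server is launched.
BASE_URL = "http://localhost:3000/v1"

payload = {
    "model": "llama-2-7b",          # placeholder model name
    "prompt": "Say hello in Rust:",
    "max_tokens": 32,
}
# Build (but do not send) a POST request matching the OpenAI completions schema.
req = Request(
    f"{BASE_URL}/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(req.full_url)  # http://localhost:3000/v1/completions
```

Sending it with `urllib.request.urlopen(req)` (or swapping the base URL into an OpenAI SDK client) should then exercise the `completions` route.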
2023-08-06T21:55:05
https://www.reddit.com/r/LocalLLaMA/comments/15k1uk7/local_llama2_api_in_rust/
amindiro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k1uk7
false
null
t3_15k1uk7
/r/LocalLLaMA/comments/15k1uk7/local_llama2_api_in_rust/
false
false
self
18
{'enabled': False, 'images': [{'id': '0LBRRYfSaj7DGydH663GMFcgOTeSykUmnO1PUdIqG34', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=108&crop=smart&auto=webp&s=0fcb4489a88ee99a70ce4cab1aa170bf4c9df6f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=216&crop=smart&auto=webp&s=d9cf17e2141e331a3b2a7a363ecd94226fa5cf8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=320&crop=smart&auto=webp&s=2deb04c713d14fc54a1485efd508a9f2567b2944', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=640&crop=smart&auto=webp&s=7c20aa1256dd42ef46722d26a64ab4e15019c776', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=960&crop=smart&auto=webp&s=f9a268a7f6339680ee13120150b4c1263dc8dfb4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?width=1080&crop=smart&auto=webp&s=a5954f2039ac2223b422f294a1c004ef80c9271b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iVvCGW5rLCxt2r3J3wghRSsB8DOSe9cV-OAZtuSsRDI.jpg?auto=webp&s=b0c36722873beb0e2eac690c34a833967b385b74', 'width': 1200}, 'variants': {}}]}
How can I finetune MiniGpt-4 on my own dataset?
1
[removed]
2023-08-06T21:41:08
https://www.reddit.com/r/LocalLLaMA/comments/15k1ij7/how_can_i_finetune_minigpt4_on_my_own_dataset/
mr_house7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k1ij7
false
null
t3_15k1ij7
/r/LocalLLaMA/comments/15k1ij7/how_can_i_finetune_minigpt4_on_my_own_dataset/
false
false
self
1
null
Classifier-Free Guidance (CFG) has been merged into the transformers library
88
2023-08-06T21:33:37
https://github.com/huggingface/transformers/pull/24654
oobabooga4
github.com
1970-01-01T00:00:00
0
{}
15k1brq
false
null
t3_15k1brq
/r/LocalLLaMA/comments/15k1brq/classifierfree_guidance_cfg_has_been_merged_into/
false
false
https://b.thumbs.redditmโ€ฆbuc_XxpVDYSA.jpg
88
{'enabled': False, 'images': [{'id': '92JE4v1HSrvv1_kNuRUgyEUXGPEB4CPO1SmWC9DBPlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=108&crop=smart&auto=webp&s=11d5737d9b960431d772d4e0b3c7614bd8af9abc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=216&crop=smart&auto=webp&s=c22f8f33373beb9eef85081e65a2a2432efda668', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=320&crop=smart&auto=webp&s=574541411bbc746d5174b1f8cf34c6d3e8113ed7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=640&crop=smart&auto=webp&s=acbc9965ae137cab4901e8001cef11234bc83033', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=960&crop=smart&auto=webp&s=317a24cd9eefa29740d4e1b35475b8a7222aaace', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?width=1080&crop=smart&auto=webp&s=5bca4d8d38962e3dba35fb1786a02e94aafe8d02', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MIGc7ZgNB0IV8UubR8avc7UBp7yzzEgv4fASgzCZoks.jpg?auto=webp&s=8fa87ef5dcec26fda023de672373822bd3564732', 'width': 1200}, 'variants': {}}]}
Black lies, white lies, statistics. And benchmarking.
39
When I was younger, lies got sorted into black lies and white lies. Later, 'statistics' was added to the set. And, in hindsight, 'benchmarking' should have been there, too. It is the zeitgeist, after all. Anything and everything must be compared and ranked. Who is number one? How to explain complex stuff to people unable or unwilling to understand complex stuff? Easy, dumb it down to a number. Just. One. Number. 5 is more than 3. 88 is more than 78. The whatever with a score of 99 is better than the one with 88. This post was borne from frustration over the commonly used benchmarks on HuggingFace, where I could not find any description of these benchmarks at all. And all having nonsense names. I am sure someone will come up with a direct link at HuggingFace after this post. In any case, I found: [https://msandbu.org/benchmarking-llms-and-what-is-the-best-llm/](https://msandbu.org/benchmarking-llms-and-what-is-the-best-llm/) I think this article does a decent job of describing the current landscape of LLM benchmarking. In short, I think it is a bloody mess. And boatloads of new, retrained and/or fine-tuned models are submitted on a daily basis without any accompanying benchmarking data at all. And that is *fine*. Not all models/finetunes are meant for stuff that (trivially) can be measured by a machine and represented with a single number. Visual art and music, for instance. Storytelling. For a model's ability to code, the situation is slightly better. Give it X problems to solve, test the Y results and see how many working solutions you got. At least it is a measurable number. Also, some models are better than others, simply because they are bigger models. 3B vs 70B etc. But what kind of resources are actually required to run these models may take serious digging. And even comparing models of the same size, lots of parameters may still differ. Context length, for instance. And how much memory is required to run it.
Performance depends on hardware; not sure if this can be 'normalized' in a meaningful way. I am pretty sure the time has not yet come for 'The Grand Unified LLM Benchmarking Test Set'. The average standards organization would probably not be able to organize a single meeting before the agenda was wildly outdated. That is less a dig at standards organizations, and more a nod at the rate of change in this space. Yet... is it possible to have something better than the current state? In terms of de-facto testing/test results, but also regarding parameters the model creator should be expected to provide in the model card? If yes, who needs to talk to each other to make this happen? Thank you for reading.
2023-08-06T20:56:45
https://www.reddit.com/r/LocalLLaMA/comments/15k0f06/black_lies_white_lies_statistics_and_benchmarking/
ethertype
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k0f06
false
null
t3_15k0f06
/r/LocalLLaMA/comments/15k0f06/black_lies_white_lies_statistics_and_benchmarking/
false
false
self
39
{'enabled': False, 'images': [{'id': '3v4YqMpXl9I7qXs9mjUIIoybueFW3pvFBgJc0pOONwk', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=108&crop=smart&auto=webp&s=6d8aa04bc6bf7e7a8bd0eb7d62450d854f46751c', 'width': 108}, {'height': 60, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=216&crop=smart&auto=webp&s=c5037729ab39dab6dacf3dfa391d2bdf34db0055', 'width': 216}, {'height': 90, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=320&crop=smart&auto=webp&s=3047ce95383a6911aabce1bb2bec412cee1fc02d', 'width': 320}, {'height': 180, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=640&crop=smart&auto=webp&s=72e7b9463e77d0fc95351394b27dc22190c50264', 'width': 640}, {'height': 270, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?width=960&crop=smart&auto=webp&s=7a2432ffb622e12216d2122a2c9cfaef6002f041', 'width': 960}], 'source': {'height': 288, 'url': 'https://external-preview.redd.it/H1pGaJfcjzy95tjj9Oxq0UcWoJBF4scpwsXiGIclgoI.jpg?auto=webp&s=d33a78409e555980ea84c80305c8144847239e4c', 'width': 1024}, 'variants': {}}]}
Anyone else is getting problems with repetition with chronos-hermes-13b-v2 and nous hermes v2?
14
These 2 specific models gave me huge trouble (I used the standard PyTorch ones), no matter the preset, sampler order, etc. They start repeating phrases from the second response. Any solutions?
2023-08-06T20:48:14
https://www.reddit.com/r/LocalLLaMA/comments/15k07ba/anyone_else_is_getting_problems_with_repetition/
localbasedman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15k07ba
false
null
t3_15k07ba
/r/LocalLLaMA/comments/15k07ba/anyone_else_is_getting_problems_with_repetition/
false
false
self
14
null
Finetuning Llama-2 on Runpod? How to increase gpu count on a pod?
1
I'm trying to finetune Llama-2 on Runpod. However, when I launch, I only see 1x a100 80gb. It says max 8 though, so I assume you can assign more. However, when I click deploy and then click customize, I can't find a place to increase gpu count. I'd appreciate any help!
2023-08-06T20:38:32
https://www.reddit.com/r/LocalLLaMA/comments/15jzyr0/finetuning_llama2_on_runpod_how_to_increase_gpu/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jzyr0
false
null
t3_15jzyr0
/r/LocalLLaMA/comments/15jzyr0/finetuning_llama2_on_runpod_how_to_increase_gpu/
false
false
self
1
null
learning about LLAMA need help
1
[removed]
2023-08-06T19:40:15
https://www.reddit.com/r/LocalLLaMA/comments/15jyi6m/learning_about_llama_need_help/
neo-gen-evan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jyi6m
false
null
t3_15jyi6m
/r/LocalLLaMA/comments/15jyi6m/learning_about_llama_need_help/
false
false
self
1
null
Has anybody managed to find a way of describing maps, floorplans, game grids, etc. to a LLM ?
38
I've been experimenting with simplified Binary Space Partitioning, or grids with "chess" coordinates, but the results are meh. If anybody figured out a good reliable format for describing the physical layout of both indoor and outdoor situations, I would be grateful if you can share.
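For the chess-coordinate approach, one thing that seems to help is giving the model an explicitly rendered view with the coordinates attached, rather than describing cells in prose. A small illustrative serializer — the symbols and layout are just one possible convention, not a known-good format:

```python
def grid_to_text(grid):
    """Render a grid (list of equal-length strings, top row first) with
    chess-style coordinates: column letters on top, rank numbers on the
    left, rank 1 at the bottom."""
    cols = len(grid[0])
    header = "   " + " ".join(chr(ord("a") + c) for c in range(cols))
    lines = [header]
    for i, row in enumerate(grid):
        rank = len(grid) - i  # top row gets the highest rank number
        lines.append(f"{rank:>2} " + " ".join(row))
    return "\n".join(lines)

room = ["#.#", "#..", "..#"]  # '#' = wall, '.' = floor
print(grid_to_text(room))
```

The model can then be asked about cells by name ("what is at b2?"), which gives it a shared, unambiguous coordinate vocabulary.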
2023-08-06T18:34:34
https://www.reddit.com/r/LocalLLaMA/comments/15jwvtz/has_anybody_managed_to_find_a_way_of_describing/
staviq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jwvtz
false
null
t3_15jwvtz
/r/LocalLLaMA/comments/15jwvtz/has_anybody_managed_to_find_a_way_of_describing/
false
false
self
38
null
Llama2 as a SPAM filter
34
Hi, I am writing a plugin to enhance the famous SpamAssassin spam filter with LLMs. There are many cases where spam is not detected by SpamAssassin because it passes all rules and even their machine learning classifiers. However, any human being looking at the text will immediately know it is spam. I did some quick tests using Llama2 7b and it was able to detect many spam messages that SpamAssassin classified as legitimate. The llama2 model as it is, without any fine tuning, can even provide an explanation of why the message is junk mail. However, since I am new to LLMs, I had some questions for you: * Is llama2 the appropriate model for this use case or would you recommend vicuna, alpaca or something else? * Would llama2 (or another model) perform better if it is fine tuned with some of the spam datasets available on HuggingFace? If so, which fine tuning method would you recommend and how would the training set have to be labelled? Thanks!
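As a sketch of what the plugin's LLM step could look like, independent of which model is chosen: build a classification prompt, then parse a one-word verdict from the reply. The template and function names below are illustrative guesses, not SpamAssassin or llama2 APIs:

```python
def build_spam_prompt(email_text: str) -> str:
    """Wrap an email in a classification prompt. The exact template is a
    guess; tune it against your own spam corpus."""
    return (
        "You are a strict email spam classifier. "
        "Answer with exactly one word, SPAM or HAM, then a short reason.\n\n"
        f"Email:\n{email_text}\n\nAnswer:"
    )

def parse_verdict(model_output: str) -> bool:
    """True if the model's reply starts with SPAM (case-insensitive)."""
    return model_output.strip().upper().startswith("SPAM")

print(parse_verdict("SPAM - promises free money from a stranger"))  # True
```

Keeping the verdict parsing this strict also makes it easy to fall back to SpamAssassin's own score when the model answers in an unexpected format.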
2023-08-06T17:52:13
https://www.reddit.com/r/LocalLLaMA/comments/15jvubj/llama2_as_a_spam_filter/
InvertedYieldCurve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jvubj
false
null
t3_15jvubj
/r/LocalLLaMA/comments/15jvubj/llama2_as_a_spam_filter/
false
false
self
34
null
Android app to interact with text-generation-webui and stable-diffusion-webui [github]
28
2023-08-06T16:19:31
https://v.redd.it/jaox49r3migb1
frapastique
v.redd.it
1970-01-01T00:00:00
0
{}
15jtlj5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jaox49r3migb1/DASHPlaylist.mpd?a=1693930787%2CM2IwOTYxNDAyMDg0MzRiNDgxNmQ1ZTJjMzlkN2ZjYjlkMDA5ODgwMjFhYWM0NWIyODQ1NDExYzgzNTQyOTlmNQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/jaox49r3migb1/DASH_1080.mp4?source=fallback', 'height': 1920, 'hls_url': 'https://v.redd.it/jaox49r3migb1/HLSPlaylist.m3u8?a=1693930787%2COTMyMDk3NzBmNTY5ZmM4MzMyZmU2Yzc5YTgwMTZmZDJkMDJkZmI1MGFkM2MxNDIyY2QwNzg0MDAzNGUzNzUyMg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/jaox49r3migb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 862}}
t3_15jtlj5
/r/LocalLLaMA/comments/15jtlj5/android_app_to_interact_with_textgenerationwebui/
false
false
https://b.thumbs.redditmโ€ฆVWyKesBe6OVc.jpg
28
{'enabled': False, 'images': [{'id': 'vFlw91nUV0iFlUGS0GXKxNLfeZwUBfWDyD-8lzpeI-g', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=108&crop=smart&format=pjpg&auto=webp&s=c8882ee3d1855a552cf18cdbf47e8788cf8278fa', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=216&crop=smart&format=pjpg&auto=webp&s=d770129e79c93b85e46562feb0a8abb9056123f2', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=320&crop=smart&format=pjpg&auto=webp&s=0fb35e55cdb623afe299d97c35db74ab29bab157', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=640&crop=smart&format=pjpg&auto=webp&s=89b1efb0c18ac2f02dadffb81ae13e0dec71a6ec', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=960&crop=smart&format=pjpg&auto=webp&s=c2c4c59e574d3bc55108a445ea07fbbe56ff6e0f', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=76297da45430355ee7293ee087a2a42c2387fec9', 'width': 1080}], 'source': {'height': 2408, 'url': 'https://external-preview.redd.it/fADpmg2CMAuyKQsNcI_ZNhwpdVOkSWubFv52lX5swBE.png?format=pjpg&auto=webp&s=160423fa4adedde8d735c95a5a5e0bbca0cc1848', 'width': 1080}, 'variants': {}}]}
LLAMA 2 thinks speaking Georgian is inappropriate and racist
183
2023-08-06T15:20:48
https://i.redd.it/b0vg31opbigb1.png
SilentFudge
i.redd.it
1970-01-01T00:00:00
0
{}
15js721
false
null
t3_15js721
/r/LocalLLaMA/comments/15js721/llama_2_thinks_speaking_georgian_is_inappropriate/
false
false
https://a.thumbs.redditmโ€ฆSxjlTZiNBpN4.jpg
183
{'enabled': True, 'images': [{'id': 'ROf1n8PoNFyFx2lTKS57jNywNgi3ge2sYwlWeoZFIvc', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=108&crop=smart&auto=webp&s=0fb039801d2784fd595824e1b805facbc3e7cdb4', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=216&crop=smart&auto=webp&s=93e8f9f0f3a881833762cb35cb42caa3300e3e35', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=320&crop=smart&auto=webp&s=81e5ea47d0229968ebf9322dd8e936c1f36e14e2', 'width': 320}, {'height': 331, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=640&crop=smart&auto=webp&s=5760d4f27c1e911cfd8a23fcdb2578bfcc2628b2', 'width': 640}, {'height': 497, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?width=960&crop=smart&auto=webp&s=f5534f3821c5fc481b1beac5444d735df1b75ac3', 'width': 960}], 'source': {'height': 535, 'url': 'https://preview.redd.it/b0vg31opbigb1.png?auto=webp&s=c535c6634bb1fb6966fd2ca4281bcdd9f10b3460', 'width': 1033}, 'variants': {}}]}
Best way to send pictures to llms?
17
I'm currently using Oobabooga's BLIP captioner through extensions, and I want something that will describe pictures a little better. Does anyone have any tips or useful advice for me?
2023-08-06T12:36:07
https://www.reddit.com/r/LocalLLaMA/comments/15jol3b/best_way_to_send_pictures_to_llms/
ElectricalAngle1611
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jol3b
false
null
t3_15jol3b
/r/LocalLLaMA/comments/15jol3b/best_way_to_send_pictures_to_llms/
false
false
self
17
null
Maybe donโ€™t rename your company to the token most associated with โ€œI donโ€™t knowโ€
0
Token optimization is the new SEO
2023-08-06T12:19:05
https://www.reddit.com/gallery/15jo8yl
mfish001188
reddit.com
1970-01-01T00:00:00
0
{}
15jo8yl
false
null
t3_15jo8yl
/r/LocalLLaMA/comments/15jo8yl/maybe_dont_rename_your_company_to_the_token_most/
false
false
https://b.thumbs.redditmโ€ฆy6pvtxy-FbMI.jpg
0
null
Maybe donโ€™t name your company the most commonly used token representing โ€œI donโ€™t knowโ€
1
Token optimization is the new SEO.
2023-08-06T12:12:07
https://www.reddit.com/gallery/15jo451
mfish001188
reddit.com
1970-01-01T00:00:00
0
{}
15jo451
false
null
t3_15jo451
/r/LocalLLaMA/comments/15jo451/maybe_dont_name_your_company_the_most_commonly/
false
false
https://b.thumbs.redditmโ€ฆUv38JDNXmU8s.jpg
1
null
Tesla P40 users - High context is achievable with GGML models + llama_HF loader
60
Just wanted to share that I've finally gotten reliable, repeatable "higher context" conversations to work with the P40. In the past I've been using GPTQ (Exllama) on my main system with the 3090, but this won't work with the P40 due to its lack of FP16 instruction acceleration. So, using GGML models and the llama_hf loader, I have been able to achieve higher context. Inference is relatively slow going, down from around 12-14 t/s to 2-4 t/s with nearly 6k context. It's usable though. The key parameters that must be set per model are n_gpu_layers, n_ctx (context length) and compress_pos_emb. n_gpu_layers should be 43 or higher to load all of - for example - Chronos Hermes into VRAM. I use q5_1 quantisations. For SuperHOT models, going 8k is not recommended as they really only go up to 6k before borking themselves. So n_ctx should be set to 6144, compress_pos_emb to 3. For all fully 8k models, n_ctx should be 8192, and compress_pos_emb should be 4. Tested with the classic [https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML) and the more recent [https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML) The latter does not work reliably for RP but does give generally more verbose responses. Hope this helps folks. The P40 is still rather slow but I'm very happy to have achieved a reliable way to load models into it fully and with more than 2k context, at last.
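The relationship between the two knobs in these examples is just linear RoPE scaling against the original 2048-token context, so the factor can be computed rather than memorized. A tiny sketch, assuming a 2048-token native context as these llama-1-era finetunes use:

```python
NATIVE_CTX = 2048  # base context length these finetunes were stretched from

def compress_pos_emb(n_ctx: int, native_ctx: int = NATIVE_CTX) -> int:
    """Linear RoPE scaling factor for a target context length,
    rounded up so the target always fits."""
    factor, rem = divmod(n_ctx, native_ctx)
    return factor + (1 if rem else 0)

print(compress_pos_emb(6144))  # 3  (SuperHOT models, per the settings above)
print(compress_pos_emb(8192))  # 4  (full 8k models)
```

This matches the 6144 → 3 and 8192 → 4 pairings in the post, and generalizes to other target lengths.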
2023-08-06T10:14:50
https://www.reddit.com/r/LocalLLaMA/comments/15jm3br/tesla_p40_users_high_context_is_achievable_with/
CasimirsBlake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jm3br
false
null
t3_15jm3br
/r/LocalLLaMA/comments/15jm3br/tesla_p40_users_high_context_is_achievable_with/
false
false
self
60
{'enabled': False, 'images': [{'id': 'Mr81S77mGxXAcYypKfXGEyi5SzvBJfQA8YhHnBIsZHA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=108&crop=smart&auto=webp&s=02b4a8e9cca1d225785a17ddef10aa33f0272367', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=216&crop=smart&auto=webp&s=ad9ca078d93369a8eb334150f8454a733c337597', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=320&crop=smart&auto=webp&s=0805d2613d69fd35b208db98c467738377552fed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=640&crop=smart&auto=webp&s=f5b17358e33c6e2d35d096d61c2482a3ac72eb68', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=960&crop=smart&auto=webp&s=1140023dbadd8977455d00b18918d81e9d4c3ebc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?width=1080&crop=smart&auto=webp&s=026a61fcab1e6bf412938421aae4973c928623e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3Fzl3Wt2ECj2QAWbDyX6YnV5IB7WvdWLL0h5YfMyiRA.jpg?auto=webp&s=1983ce61b6a01349602a26abf06690a553ca061a', 'width': 1200}, 'variants': {}}]}
What are your suggestions on hosting LLama2 for an enterprise? [commercial]
14
I am planning to implement a chatbot that will give answers based on the documents of a company. The main focus is that I am going to deploy the whole processing chain offline, i.e. training or finetuning, inference, and data processing. Due to data privacy, the company wants nothing online at all. For the trial version, I am going to do finetuning on RTX 4090s. If performance is good then the company is ready to invest in servers. What are your suggestions for this application case? BTW, the company already has a large number of CPU and RAM resources. If llama performance is good then they can invest in new Nvidia servers.
2023-08-06T08:33:14
https://www.reddit.com/r/LocalLLaMA/comments/15jkf2h/what_are_your_suggestions_on_hosting_llama2_for/
awaiss113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jkf2h
false
null
t3_15jkf2h
/r/LocalLLaMA/comments/15jkf2h/what_are_your_suggestions_on_hosting_llama2_for/
false
false
self
14
null
LLM Comparison
1
[removed]
2023-08-06T06:47:07
https://www.reddit.com/r/LocalLLaMA/comments/15jin0f/llm_comparison/
Nikunja___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jin0f
false
null
t3_15jin0f
/r/LocalLLaMA/comments/15jin0f/llm_comparison/
false
false
self
1
null
What is LoRA actually?
1
[removed]
2023-08-06T06:45:44
https://www.reddit.com/r/LocalLLaMA/comments/15jim6a/what_is_lora_actually/
Nikunja___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jim6a
false
null
t3_15jim6a
/r/LocalLLaMA/comments/15jim6a/what_is_lora_actually/
false
false
self
1
null
M2 Max for llama 2 13b inference server?
11
Hello, I am looking at an M2 Max (38 GPU cores) Mac Studio with 64 gigs of RAM to run inference on llama 2 13b. Would this be a good option for tokens per second, or would there be something better? Also, is llama.cpp the best software to run on the Mac with its Metal support? Thanks!
2023-08-06T06:43:55
https://www.reddit.com/r/LocalLLaMA/comments/15jil27/m2_max_for_llama_2_13b_inference_server/
techpro864
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jil27
false
null
t3_15jil27
/r/LocalLLaMA/comments/15jil27/m2_max_for_llama_2_13b_inference_server/
false
false
self
11
null
Fine tuning speeds vs 3090
8
Has anyone measured how much faster are some other cards at LoRA fine tuning (eg 13B llama) compared to 3090? - 4090 - A6000 - A6000 Ada - A100-40B I have 3090s for 4-bit LoRA fine tuning and am starting to be interested in faster hardware.
2023-08-06T06:04:27
https://www.reddit.com/r/LocalLLaMA/comments/15jhx6k/fine_tuning_speeds_vs_3090/
xynyxyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jhx6k
false
null
t3_15jhx6k
/r/LocalLLaMA/comments/15jhx6k/fine_tuning_speeds_vs_3090/
false
false
self
8
null
Tutorial to install CUDA for RTX 3050?
1
I cannot run Oobabooga and I always encounter errors like "torch is not installed with CUDA" or something. Does anyone know a guide on how to properly set up an RTX 3050 for text generation? Thank you in advance.
2023-08-06T05:05:12
https://www.reddit.com/r/LocalLLaMA/comments/15jgvuc/tutorial_to_install_cuda_for_rtx_3050/
leo-the-great
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jgvuc
false
null
t3_15jgvuc
/r/LocalLLaMA/comments/15jgvuc/tutorial_to_install_cuda_for_rtx_3050/
false
false
self
1
null
Where would one host Llama (2)? (Commercial)
44
Hi guys. I am planning on beginning to train a version of Llama 2 to my needs. I have a local machine with an i7 4th gen and a Radeon 7600 8 GB. I have no budget, and I need to do this. I have absolutely not the slightest clue what I am doing, but here is to everything that comes after today. My question is: once I am done/close to done and I want to host this for my clients and integrate it into my SaaS, where would one even go for something like this? Vast.ai?
2023-08-06T03:05:15
https://www.reddit.com/r/LocalLLaMA/comments/15jen3o/where_would_one_host_llama_2_commercial/
shakespear94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jen3o
false
null
t3_15jen3o
/r/LocalLLaMA/comments/15jen3o/where_would_one_host_llama_2_commercial/
false
false
self
44
null
Llama.cpp with 13B is hallucinating in my domain?
7
I have been experimenting with Llama 2 locally, and I started asking it things that I can actually judge as correct or not. The output below is not correct: the last two prompts did not produce valid FHIR and were misinformed about what a FHIR Patient is (it is demographics only). I tried to tweak the prompt and it was still wrong. In fact, it's not even close. Is this because the model has not been fine-tuned on FHIR? Would I get better results with a larger model? `./main -ins -t 6 -ngl 10 --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -s 42 -m llama-2-13b-chat.ggmlv3.q4_0.bin -p "Act as a helpful Health IT consultant" -n -1` `main: build = 938 (c574bdd)` `main: seed = 42` `ggml_init_cublas: found 1 CUDA devices:` `Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6` `llama.cpp: loading model from llama-2-13b-chat.ggmlv3.q4_0.bin` `llama_model_load_internal: format = ggjt v3 (latest)` `llama_model_load_internal: n_vocab = 32000` `llama_model_load_internal: n_ctx = 2048` `llama_model_load_internal: n_embd = 5120` `llama_model_load_internal: n_mult = 256` `llama_model_load_internal: n_head = 40` `llama_model_load_internal: n_head_kv = 40` `llama_model_load_internal: n_layer = 40` `llama_model_load_internal: n_rot = 128` `llama_model_load_internal: n_gqa = 1` `llama_model_load_internal: rnorm_eps = 5.0e-06` `llama_model_load_internal: n_ff = 13824` `llama_model_load_internal: freq_base = 10000.0` `llama_model_load_internal: freq_scale = 1` `llama_model_load_internal: ftype = 2 (mostly Q4_0)` `llama_model_load_internal: model size = 13B` `llama_model_load_internal: ggml ctx size = 0.11 MB` `llama_model_load_internal: using CUDA for GPU acceleration` `llama_model_load_internal: mem required = 5775.77 MB (+ 1600.00 MB per state)` `llama_model_load_internal: allocating batch_size x (640 kB + n_ctx x 160 B) = 480 MB VRAM for the scratch buffer` `llama_model_load_internal: offloading 10 repeating layers to GPU` `llama_model_load_internal:
offloaded 10/43 layers to GPU` `llama_model_load_internal: total VRAM used: 2182 MB` `llama_new_context_with_model: kv self size = 1600.00 MB` &#x200B; `system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |` `main: interactive mode on.` `Reverse prompt: '### Instruction:` &#x200B; `'` `sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.700000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000` `generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 9` &#x200B; &#x200B; `== Running in interactive mode. ==` `- Press Ctrl+C to interject at any time.` `- Press Return to return control to LLaMa.` `- To return control without starting a new line, end your input with '/'.` `- If you want to submit another line, end your input with '\'.` &#x200B; Act as a helpful Health IT consultant \> What is FHIR? FHIR (Fast Healthcare Interoperability Resources) is a standard for exchanging healthcare information electronically. It defines a set of resources and APIs that can be used to exchange healthcare data between different systems, such as electronic health records (EHRs), laboratory systems, and medication management systems. FHIR is designed to be flexible and extensible, so it can be used in a wide range of healthcare settings, from small clinics to large hospitals and beyond. &#x200B; One of the key benefits of FHIR is that it enables seamless interoperability between different healthcare systems, allowing healthcare providers to access and share patient data more easily and efficiently. This can improve the quality and safety of care, as well as reduce costs and administrative burdens. 
Additionally, FHIR is designed to be easy to implement and use, so it can be adopted by healthcare organizations of all sizes and technical capabilities. &#x200B; Overall, FHIR is an important standard for healthcare interoperability, and it has the potential to transform the way healthcare is delivered and managed. &#x200B; \> Create a FHIR Patient Resource To create a FHIR Patient Resource, we can use the following steps: &#x200B; 1. Define the patient's identity: This includes the patient's name, address, date of birth, and other identifying information. 2. Specify the patient's contact information: This includes the patient's phone number, email address, and other contact details. 3. Define the patient's medical history: This includes any relevant medical conditions, allergies, medications, and other health-related information. 4. Specify the patient's current status: This includes the patient's current location, their clinical status (e.g., stable, unstable), and any other relevant information. 5. Include any additional relevant information: This might include the patient's family history, social history, or other information that is important for their care. 
&#x200B; Here is an example of what a FHIR Patient Resource might look like: `\`\`\`` `{` `"name": "John Doe",` `"address": {` `"street": "123 Main St",` `"city": "Anytown",` `"state": "CA",` `"postalCode": "12345"` `},` `"dateOfBirth": "1980-01-01",` `"contact": {` `"phone": "555-555-1234",` `"email":` [`"johndoe@example.com`](mailto:"johndoe@example.com)`"` `},` `"medicalHistory": [` `{` `"code": "Diabetes",` `"status": "stable"` `},` `{` `"code": "Hypertension",` `"status": "stable"` `}` `],` `"currentStatus": {` `"code": "Outpatient",` `"status": "stable"` `},` `"additionalInfo": [` `{` `"code": "FamilyHistoryOfHeartDisease",` `"status": "important"` `}` `]` `}` `\`\`\`` Note that this is just one example of what a FHIR Patient Resource might look like, and the actual structure and content of the resource will depend on the specific needs and requirements of the healthcare organization. &#x200B; \> Create the json for a FHIR r4 Patient resource Here is an example of what the JSON for a FHIR r4 Patient resource might look like: `\`\`\`` `{` `"resourceType": "Patient",` `"id": "123",` `"name": [` `{` `"use": "official",` `"family": "Doe",` `"given": ["John"]` `}` `],` `"contact": [` `{` `"use": "home",` `"phone": "555-555-1234",` `"email":` [`"johndoe@example.com`](mailto:"johndoe@example.com)`"` `}` `],` `"dateOfBirth": "1980-01-01",` `"gender": {` `"code": "M"` `},` `"address": [` `{` `"use": "home",` `"street": "123 Main St",` `"city": "Anytown",` `"state": "CA",` `"postalCode": "12345"` `}` `],` `"medicalHistory": [` `{` `"code": "Diabetes",` `"status": "stable"` `},` `{` `"code": "Hypertension",` `"status": "stable"` `}` `],` `"currentStatus": {` `"code": "Outpatient",` `"status": "stable"` `},` `"additionalInfo": [` `{` `"code": "FamilyHistoryOfHeartDisease",` `"status": "important"` `}` `]` `}` `\`\`\`` Note that this is just one example of what the JSON for a FHIR r4 Patient resource might look like, and the actual structure and content of the resource 
will depend on the specific needs and requirements of the healthcare organization.
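For contrast with the hallucinated output above, here is what a minimal, spec-conformant FHIR R4 Patient actually looks like; the sketch below builds it in Python. The `id` and contact values are made up, but the element names follow the R4 specification:

```python
import json

# A minimal FHIR R4 Patient: demographics only.
# Note: "gender" is a lowercase code, the birth date field is
# "birthDate" (not "dateOfBirth"), phone/email live under "telecom",
# and there is no "medicalHistory" element on Patient at all --
# conditions belong in separate Condition resources.
patient = {
    "resourceType": "Patient",
    "id": "example",  # hypothetical id
    "name": [{"use": "official", "family": "Doe", "given": ["John"]}],
    "telecom": [
        {"system": "phone", "value": "555-555-1234", "use": "home"},
        {"system": "email", "value": "johndoe@example.com"},
    ],
    "gender": "male",
    "birthDate": "1980-01-01",
    "address": [{
        "use": "home",
        "line": ["123 Main St"],
        "city": "Anytown",
        "state": "CA",
        "postalCode": "12345",
    }],
}

print(json.dumps(patient, indent=2))
```

Comparing this with the model's output makes the problem concrete: `dateOfBirth`, `medicalHistory`, and `currentStatus` are not FHIR elements at all, so the responses above are hallucinations rather than near-misses.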
2023-08-06T00:35:27
https://www.reddit.com/r/LocalLLaMA/comments/15jbmil/llamacpp_with_13b_is_hallucinating_in_my_domain/
fhirflyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jbmil
false
null
t3_15jbmil
/r/LocalLLaMA/comments/15jbmil/llamacpp_with_13b_is_hallucinating_in_my_domain/
false
false
self
7
null
StableLM-Alpha-V2 3B and 7B released, trained on 1.1T tokens.
98
2023-08-06T00:24:43
https://github.com/Stability-AI/StableLM
pokeuser61
github.com
1970-01-01T00:00:00
0
{}
15jbe3r
false
null
t3_15jbe3r
/r/LocalLLaMA/comments/15jbe3r/stablelmalphav2_3b_and_7b_released_trained_on_11t/
false
false
https://b.thumbs.redditmโ€ฆMpitg6FZ-qLI.jpg
98
{'enabled': False, 'images': [{'id': 'EqeGhDk7wYAIyVZVza87x-J5MbNcQ9Ibv6IpfHctP14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=108&crop=smart&auto=webp&s=094aae81bc6b32dd9eb4be5434a835bf6632b9e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=216&crop=smart&auto=webp&s=161706dad7b2be375edc7cbd2e7ddf53d52528aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=320&crop=smart&auto=webp&s=18868529b41fc19bf88d48d5ee1e0b83077369b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=640&crop=smart&auto=webp&s=f352c07a4105fdb9318bb97deeb1aa433b49f044', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=960&crop=smart&auto=webp&s=3652efe677408b64cc1872f1d254688efba3a0dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?width=1080&crop=smart&auto=webp&s=55503245601db5a433b3d80e2bf409a86d77e6af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XR1XZ7AcT-Z0aamqxw1VStLiqu-Qj0tRSbqNd-p4JRY.jpg?auto=webp&s=b3e4ad0f63417694f43dd59776d2930dc7352fca', 'width': 1200}, 'variants': {}}]}
Problem Updating Oobabooga Installed on External Drive
2
When I try to update Oobabooga installed on G:, a non-system drive, I get this kind of error message: "G:/oobabooga_windows_08062023/oobabooga_windows/installer_files/pip-req-build-igr9ong4 is on a file system that does not record ownership. To add an exception for this directory, call: git config --global --add safe.directory G:/oobabooga_windows_08062023/oobabooga_windows/installer_files/pip-req-build-igr9ong4" But adding an exception for that one directory does not solve the issue, since I would have to add tens or hundreds of exceptions before update_windows.bat runs successfully. Is there any way to avoid adding exceptions one by one? Thanks.
2023-08-06T00:15:37
https://www.reddit.com/r/LocalLLaMA/comments/15jb6vt/problem_updating_oobabooga_installed_on_external/
statingwindow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jb6vt
false
null
t3_15jb6vt
/r/LocalLLaMA/comments/15jb6vt/problem_updating_oobabooga_installed_on_external/
false
false
self
2
null
Evaluation loss increases when training Guanaco
38
I am probably not the only one who noticed that evaluation loss increases after a few epochs when using QLoRA to finetune Llamas on conversational data. Wondering how long one could ignore the rise in loss and continue training, I did a benchmark. The latest MT Bench (GPT-4 based evaluation, claimed 80+% agreement with human eval.) was used to evaluate each checkpoint and it seems that model quality improves despite the increase in evaluation loss and eventually flattens out (at least under these specific circumstances). [QLoRA, standard hyperparameters, Llama2-7B, OASST1 dataset](https://preview.redd.it/8jkbp9ixrdgb1.png?width=1500&format=png&auto=webp&s=9112fe1e21dd37a6cd86282e5624a049fd57ea45) The entire story can be found here: [https://medium.com/@geronimo7/reproducing-guanaco-141a6a85a3f7](https://medium.com/@geronimo7/reproducing-guanaco-141a6a85a3f7)
2023-08-06T00:05:26
https://www.reddit.com/r/LocalLLaMA/comments/15jaywk/evaluation_loss_increases_when_training_guanaco/
HatEducational9965
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jaywk
false
null
t3_15jaywk
/r/LocalLLaMA/comments/15jaywk/evaluation_loss_increases_when_training_guanaco/
false
false
https://b.thumbs.redditmโ€ฆhWwrFO46IvwI.jpg
38
{'enabled': False, 'images': [{'id': 'JKUJLtSlwkv4ZIGxAPgPzZY6OuVnwhiUoVaEgXS-Ad8', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=108&crop=smart&auto=webp&s=7cac0162ac531306de0042f6db0851640fadcdbc', 'width': 108}, {'height': 84, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=216&crop=smart&auto=webp&s=736987990f0ccd2ffdd807a22fcbbb2fc6be32b9', 'width': 216}, {'height': 124, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=320&crop=smart&auto=webp&s=b678b0c4e6577e545b8a713acfdf742819c2de22', 'width': 320}, {'height': 249, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=640&crop=smart&auto=webp&s=503ae8c7df80057208b6f651fae7b6d73339191a', 'width': 640}, {'height': 373, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=960&crop=smart&auto=webp&s=62453d8a8b2c6e21a854fd35f2e1cf750cf843de', 'width': 960}, {'height': 420, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?width=1080&crop=smart&auto=webp&s=3cbbecb1b5b01d5285e723770fe25df0bfc5b9a4', 'width': 1080}], 'source': {'height': 467, 'url': 'https://external-preview.redd.it/QKbaMpHHGHaWscQLKt86PVj3JQ10G_4KOq5TmWw4hGY.jpg?auto=webp&s=ebb92c80ee3702525cdb8b2abc4fe374394a4330', 'width': 1200}, 'variants': {}}]}
A beginner who is looking for other ways to use llms
14
I've built a simple command-line chat UI. Requirements: a modern CPU with 6+ cores, 32 GB RAM, OpenBLAS + koboldcpp. It can communicate with a local koboldcpp or a remote RunPod instance. It can also:

* import a TavernAI (webp) character
* import TavernAI's room / chatlog
* connect to multiple koboldcpp instances through different ports
* listen to my voice and reply in SAPI5 voices
* store chat rooms as simple text files, <char name>-<date>.txt

I use it for roleplaying and speaking practice (with a chatbot in a foreign language). An online speech recognition API - similar to the Whisper API - transcribes my voice to text, which then appears in the chat. I'm talking to the same character, but 2-3 different language models (13B Vicuna, 7B Janeway, and another 7B) reply to the same sentence. I can choose one of them, then write my response (or dictate, if I want to). It costs me 0 euros (besides electricity) to run this uncensored chat prototype. This is what the configuration file looks like:

    KOBOLD=http://192.168.0.101:5001 #PORT OF PRIMARY MODEL
    # SUMMARIZER=http://127.0.0.1:5002
    PORT=5002 #PORT OF OTHER MODELS
    NCHAR=true #MULTIPLE MODEL MODE
    CHARS=C:/Tavern/public/characters/
    CHAR=C:/Tavern/public/characters/Akari.webp
    MASTER=Honey
    ASM_KEY=
    ASM_LANGUAGE=fr
    TTS=Microsoft Caroline

I would like to find a good (GGML) model that can help summarize the chat log (in English). I could use word embeddings to find the relevant chat lines and put the summary behind the truncated conversation every time I generate a new response.
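As a zero-dependency starting point for the retrieval half of that summarization idea, the sketch below stands in for real word embeddings with a bag-of-words cosine similarity over chat lines. The function names and the toy chat log are my own inventions, not part of the project described above:

```python
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity over lowercase word counts -- a crude
    stand-in for sentence embeddings, good enough for a prototype."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def relevant_lines(chatlog: list[str], query: str, k: int = 3) -> list[str]:
    """Pick the k chat lines most similar to the current query;
    these would be the lines handed to the summarizer model."""
    return sorted(chatlog, key=lambda line: similarity(line, query),
                  reverse=True)[:k]

log = [
    "Akari: I visited Paris last spring.",
    "Honey: What did you eat there?",
    "Akari: Mostly croissants and cheese.",
    "Honey: My cat knocked over a plant today.",
]
print(relevant_lines(log, "food in Paris", k=2))
```

Swapping `similarity` for a real embedding model later would not change the surrounding code, which is the point of keeping retrieval behind a single function.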
2023-08-06T00:03:01
https://www.reddit.com/r/LocalLLaMA/comments/15jawyp/a_beginner_who_is_looking_for_other_ways_to_use/
justynasty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15jawyp
false
null
t3_15jawyp
/r/LocalLLaMA/comments/15jawyp/a_beginner_who_is_looking_for_other_ways_to_use/
false
false
self
14
null
Start-ups in Local LLM
0
It's now possible to truly have your own unbiased AI assistant that is no one else's but yours! This simple selling point alone should be enough for start-ups to jump on this, but the only startup I've seen serving local integration is Tiny Corp by George Hotz. (Check him out!) Why do you guys follow this community? What about locally run LLMs motivates you to sit through the command lines and install headaches?
2023-08-05T23:25:35
https://www.reddit.com/r/LocalLLaMA/comments/15ja2uq/startups_in_local_llm/
PigWedgion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ja2uq
false
null
t3_15ja2uq
/r/LocalLLaMA/comments/15ja2uq/startups_in_local_llm/
false
false
self
0
null
llama-2-70b-chat model running continuously
7
I can run the 7b and 13b models with no issue. However, when attempting to run the 70b versions, the model loads and then runs on the GPUs at 100% forever. No errors, just runs continuously. torchrun --nproc_per_node 8 example_chat_completion.py --ckpt_dir llama-2-70b-chat/ --tokenizer_path tokenizer.model --max_seq_len 512 --max_batch_size 4 Any ideas on what could be causing this issue with 70b but not the other versions?
2023-08-05T23:22:47
https://www.reddit.com/r/LocalLLaMA/comments/15ja0mz/llama270bchat_model_running_continuously/
_daddylonglegz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ja0mz
false
null
t3_15ja0mz
/r/LocalLLaMA/comments/15ja0mz/llama270bchat_model_running_continuously/
false
false
self
7
null
Advice on where to start
2
I asked this question to ChatGPT-4 and it gave an answer, but I don't know where to start, so I want to ask here and hope for advice: If I was trying to design an AI setup for trading stocks and options, first as a tool to interpret data and develop strategies, and second as a personal AI for my 2-year-old daughter that would essentially follow her through life as her own personal tutor, what would be the best way to achieve this? Should I build a separate computer that functions as an AI server, or just rent cloud space? Where would be a good place to start learning how to do this? Any help is greatly appreciated.
2023-08-05T23:10:15
https://www.reddit.com/r/LocalLLaMA/comments/15j9qil/advice_on_where_to_start/
Seronkseronk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j9qil
false
null
t3_15j9qil
/r/LocalLLaMA/comments/15j9qil/advice_on_where_to_start/
false
false
self
2
null
The ways to train conversational language model or chat model. What is the template to train a interactive chatbot for ORCA dataset?
1
[removed]
2023-08-05T22:53:43
https://www.reddit.com/r/LocalLLaMA/comments/15j9coe/the_ways_to_train_conversational_language_model/
dungduong16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j9coe
false
null
t3_15j9coe
/r/LocalLLaMA/comments/15j9coe/the_ways_to_train_conversational_language_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PVvkKvkDFvHzNVO6GsOKvqBayJrtELtPRA80qzZg8kY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=108&crop=smart&auto=webp&s=44bfb6288e553d23a79c16441268211c0db6c774', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=216&crop=smart&auto=webp&s=454edd26769c3e54dc1c43b7c37ae23d77b8bd37', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=320&crop=smart&auto=webp&s=11e406b202ac33c1ca5b520f030ddf0eb97eea5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=640&crop=smart&auto=webp&s=5d32d81c2587df25d53350de76765df78559587a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=960&crop=smart&auto=webp&s=3ef68369061443211eb5f206bca142423dac71e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?width=1080&crop=smart&auto=webp&s=7800c57192c5fbd4c083792789759d7b308e79a1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JFgdLnj6uxaVXIlggQWE0HPvIhwAawbbjH-EBSFUwDE.jpg?auto=webp&s=f8162e7879d23a737e1413b2e63529015c826717', 'width': 1200}, 'variants': {}}]}
Formatting Training Data for Fiction Generation?
1
If I want to fine-tune a model to generate fiction, how should I format my training data? In this case it is a half-written novel.
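One common layout for this kind of raw-text (completion-style) fine-tuning is a JSONL file of overlapping text chunks. The sketch below is a minimal, hedged example of producing it; the chunk sizes and the `{"text": ...}` record shape are conventional choices, not a requirement of any particular trainer:

```python
import json

def novel_to_jsonl(novel: str, out_path: str, chunk_chars: int = 4000,
                   overlap_chars: int = 400) -> int:
    """Split a long manuscript into overlapping chunks and write them as
    JSONL records of the form {"text": ...}. The overlap keeps scene
    boundaries from being lost at chunk edges; sizes here are arbitrary
    placeholders to tune against your model's context length."""
    step = chunk_chars - overlap_chars
    chunks = []
    for start in range(0, max(len(novel) - overlap_chars, 1), step):
        chunks.append(novel[start:start + chunk_chars])
    with open(out_path, "w", encoding="utf-8") as f:
        for chunk in chunks:
            f.write(json.dumps({"text": chunk}) + "\n")
    return len(chunks)

n = novel_to_jsonl("Call me Ishmael. " * 500, "novel_train.jsonl")
print(f"wrote {n} chunks")
```

For instruction-tuned chat models the records would instead be prompt/response pairs in the model's own template, but for continuing a manuscript's style, plain text chunks like these are the simpler starting point.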
2023-08-05T22:02:59
https://www.reddit.com/r/LocalLLaMA/comments/15j85eh/formatting_training_data_for_fiction_generation/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j85eh
false
null
t3_15j85eh
/r/LocalLLaMA/comments/15j85eh/formatting_training_data_for_fiction_generation/
false
false
self
1
null
Looking for RP 3B / 2.7B models
5
Exist a version of Erebus, Pygmalion or something similar model of 3B or 2.7B that I can use in llama.cpp? I can't be able to run the 7B version of Llama. So far I've been able to run open llama 3B.
2023-08-05T21:17:14
https://www.reddit.com/r/LocalLLaMA/comments/15j707q/looking_for_rp_3b_27b_models/
Alexis212s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j707q
false
null
t3_15j707q
/r/LocalLLaMA/comments/15j707q/looking_for_rp_3b_27b_models/
false
false
self
5
null
The Quest to Have Endless Conversations with Llama and ChatGPT ๐Ÿ—ฃ๏ธ๐Ÿฆ™๐Ÿค–๐Ÿ’ฌ
6
2023-08-05T20:50:41
https://medium.com/@joaolages/the-quest-to-have-endless-conversations-with-llama-and-chatgpt-%EF%B8%8F-81360b9b34b2
JClub
medium.com
1970-01-01T00:00:00
0
{}
15j6cre
false
null
t3_15j6cre
/r/LocalLLaMA/comments/15j6cre/the_quest_to_have_endless_conversations_with/
false
false
https://b.thumbs.redditmโ€ฆ7uaND7Zcfeak.jpg
6
{'enabled': False, 'images': [{'id': 'Bd2tKVhk26ts2G8Ag4O6orZvv4fbvZNAF7Q03ikzADQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=108&crop=smart&auto=webp&s=ba96a68eba2babade967b840ab3c1d89e134154e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=216&crop=smart&auto=webp&s=6e06649464f3ca435c32f2ed9e73168e9273fb90', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=320&crop=smart&auto=webp&s=d8d718d986fc19b9bd1bb0562e6733d0ee2ba125', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=640&crop=smart&auto=webp&s=5a5f3c7aeeb56bbab7b0db89995dd4919050176f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=960&crop=smart&auto=webp&s=7dc8f89416830f2e71160073e28463a450434e17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?width=1080&crop=smart&auto=webp&s=92d2d7a7907d777bff8e63ad2d24594dae67f25c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CTypdu0BQ5FGyzkfIBDvKm4lV45PMJGwwkvJ9hxNRw4.jpg?auto=webp&s=0b855eb3d262137937689963a3854557f4517b0b', 'width': 1200}, 'variants': {}}]}
MoEM > MoE?
6
I am very much in the beginner category of wrangling LLMs and other AI types, but I have the framework of an idea. I'd love to hear any comments or criticism, whether anyone knows this has already been tried, or which fundamental principles I could be misinterpreting/misunderstanding. Could we take the GPT-4 architecture (reportedly ~8 'expert' models, each specialized to a specific domain, e.g. coding, all working together in a MoE [mixture of experts] setup) a step further and make a kind of 'MoEM' = Mixture of Expert Managers? In this idea, I'm imagining that, similarly, there might be a 'Coding Expert Manager' which in turn manages smaller, even more specialized models (e.g. languages, optimization, debugging, maybe even particular fields like data processing, web dev, etc.). My reasoning is that smaller, more specialized models seem to be outperforming larger, general models on specific tasks. I also imagine it might help with efficiency if you could 'freeze' the parts of the model that aren't needed for a given task, or maybe the 'Expert Managers' help ensure that only the relevant parts of a network are activated. The more I've typed, the more this just sounds like a hybrid of the agent/tool-maker models and MoE, but I'd still love any and all input!
2023-08-05T19:19:59
https://www.reddit.com/r/LocalLLaMA/comments/15j45a7/moem_moe/
Only-Offer4234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j45a7
false
null
t3_15j45a7
/r/LocalLLaMA/comments/15j45a7/moem_moe/
false
false
self
6
null
[Request] Updated benchmark numbers for 7900xt/x ?
11
7900xtx for **llama.cpp/koboldcpp (CLBlast)** or **exllama (ROCm/HIP)** or **vLLM (I don't know if AMD GPUs are supported)** or **whatever other inference software you use**. Please let me know of more projects I haven't heard of yet. So, help me make an informed decision. **Are you running a 7900xtx for local llama? What's your experience and speed?** My preferred OS is Windows, but all info is appreciated. Thank you.

I feel like the 7900xtx should be faster than a 3090, only behind a 4090 (significantly) in speed. It has 24 GB VRAM, and ROCm/HIP support is finally coming to Windows, even if it's limited as a first release. I nearly bought a 7900xtx (£900) on the spot the day ROCm for Windows was released, except it was out of stock at my preferred UK retailer. In the days after, I realized ROCm support was still limited. So I thought, what the heck, I'll wait for more info on how it performs on AI. My other option is a 4090, but that's £1,530 (Founders Edition). A 4090 should be roughly 40-60% faster, if neither the 7900xtx nor the 4090 is VRAM-bandwidth limited.
2023-08-05T18:57:51
https://www.reddit.com/r/LocalLLaMA/comments/15j3lzw/request_updated_benchmark_numbers_for_7900xtx/
windozeFanboi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j3lzw
false
null
t3_15j3lzw
/r/LocalLLaMA/comments/15j3lzw/request_updated_benchmark_numbers_for_7900xtx/
false
false
self
11
null
What are the variables to take into account to calculate the best model that fit in my hardware?
35
Ok, I installed oobabooga and I have to download a model to start getting into this LLM world. I started looking for a model to try on Hugging Face. Ideally I'm searching for an Argentine or Latin American Spanish model, but an English model also works for me. Among so many models and sizes, I cannot figure out which is best for the hardware I have:

Intel Core i7-7700HQ CPU 2.8 GHz
Nvidia GeForce GTX 1060 6GB VRAM
32GB DDR4 2400 MHz RAM
2TB M.2 NVMe

**I have two main questions:**

**1.** Is the amount of VRAM or RAM a hard limitation when running a model, or does it just make the model run and respond slower?

**2.** Is there a way to calculate the amount of VRAM and RAM I need to run a model, or at least certain parameters to take into account when choosing the best model for my hardware?

**Context:** I am a software engineer with 15+ years of experience in many languages and technologies. I am passionate about AI, but have no experience with LLMs yet. I have quite a bit of experience with Stable Diffusion, using LoRAs and LoRA training. I use kohya, automatic1111 and ComfyUI daily.

Links or references that help me get started and learn about this topic are appreciated.

Sorry for my English, I am a native Spanish speaker.

Thanks in advance.
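On question 2, a rough back-of-the-envelope estimate is parameters times bits per weight, divided by 8, plus some fixed overhead. The sketch below encodes that rule of thumb in Python; the 4.5 bits/weight figure and the overhead constant are my own approximations, not exact numbers:

```python
def approx_model_gb(n_params_billion: float, bits_per_weight: float,
                    overhead_gb: float = 1.5) -> float:
    """Rough memory estimate for loading an LLM.

    n_params_billion: model size, e.g. 7 for a 7B model
    bits_per_weight:  16 for fp16, ~4.5 for 4-bit quantized (q4 formats
                      store a little metadata per block, hence > 4)
    overhead_gb:      assumed allowance for buffers/context (a guess)
    """
    return n_params_billion * bits_per_weight / 8 + overhead_gb

for size in (7, 13):
    print(f"{size}B @ ~4-bit: ~{approx_model_gb(size, 4.5):.1f} GB")
```

By this estimate a 4-bit 7B model (~5.4 GB) just about fits the GTX 1060's 6 GB of VRAM, while 13B (~8.8 GB) would need some layers offloaded to system RAM. With GGML-style CPU/GPU splitting, too little VRAM mostly costs speed rather than making a model impossible to run.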
2023-08-05T16:53:28
https://www.reddit.com/r/LocalLLaMA/comments/15j0mvm/what_are_the_variables_to_take_into_account_to/
joaco84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j0mvm
false
null
t3_15j0mvm
/r/LocalLLaMA/comments/15j0mvm/what_are_the_variables_to_take_into_account_to/
false
false
self
35
null
Preferred way to integrate into a new UI?
6
I've been working on an open source app for LLM power users, spending a lot of time on the UI and overall "quality of life" features that ChatGPT and the likes of gradio don't offer out of the box. Question for you: Right now it's OpenAI only, but I want to change that by offering integration options for other local models (or models running in the cloud somewhere). **What would be the easiest way for you to plug in to a new UI, if you wanted to do so, without having to change your backend setup?** I was thinking about integrating the oobabooga API, but instead I may just add an option for additional API endpoints and then figure out how to handle schema configurations if they differ from OpenAI's API. Would love any feedback you have, and if you want to check out the app I'd love to know what you think as well: [https://github.com/neary-ai/neary](https://github.com/neary-ai/neary). Thanks!
2023-08-05T16:52:54
https://www.reddit.com/r/LocalLLaMA/comments/15j0mdn/preferred_way_to_integrate_into_a_new_ui/
BlindingLT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15j0mdn
false
null
t3_15j0mdn
/r/LocalLLaMA/comments/15j0mdn/preferred_way_to_integrate_into_a_new_ui/
false
false
self
6
{'enabled': False, 'images': [{'id': 'C_FfafjN8qeHhNfdkX8eP2RZrXAdBCmkDPRCdfncjig', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=108&crop=smart&auto=webp&s=524c9c73c66cfc736217af354749fd4c9182984a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=216&crop=smart&auto=webp&s=939d567fd3b0fc5169dc5816538f36676d309752', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=320&crop=smart&auto=webp&s=38a1523724c7c4e0bffedb2864dc1d3beb1afe2a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=640&crop=smart&auto=webp&s=96fcbad9506f42e4804deb645a420bbf3c4020fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=960&crop=smart&auto=webp&s=3cadbd7a6455633f88e7b1f9bd6eccf1f19af58a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?width=1080&crop=smart&auto=webp&s=8f58e7d187b2cef2a012b546c8e8f3e52b31a975', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o7nTEfQC3b_uAS_8bm0V7a8t2TD_GIKfJyZo4F1qdy8.jpg?auto=webp&s=ef376cddeeba58aca07bf1f47ff7f2fcd2097ef0', 'width': 1200}, 'variants': {}}]}
How to run local models on the cloud
32
Hello, I wanted to make a guide on how to run local models on the cloud, for people who want to test, but don't have the right hardware (like me). I will be using Vast in this guide, but Runpod will work too. (This will have you using Ooga Booga Text Generation Web UI. Let me know if you want a guide for KoboldAI too.) * Go to [https://cloud.vast.ai/](https://cloud.vast.ai/create/) to find instances (if you want, you can use [https://cloud.vast.ai/?ref\_id=73271](https://cloud.vast.ai/?ref_id=73271) instead to support me at no extra cost, completely optional though). Here are some things to consider: 1. Hover your mouse over every instance you might buy, as there are bandwidth costs associated with every instance. A few of them have no costs, some are dirt cheap, and others are outrageously priced. I would not do anything like 30 dollars every TB (1 TB is about 1000 GB), but anything under 10 dollars/TB is pretty reasonable. You will need to figure out the price yourself though, as what may be cheap for me may not be cheap for you. For example, if I wanted 30 GB to download (which for 4bit Llama-2-70b GPTQ which is about 35 GB, if the pricing is about 5 dollars/TB for download (marked with the down arrow before the price) then it would cost about 17 cents, I think. I'm tired, so I don't know for sure. 2. Keep in mind you have to pay for storage. You can set the amount on the left (you can't change it later without destroying the instance then renting the instance again). 3. I would never choose a GTX unless you know what you are doing, probably not even a RTX 2070. I would say go with either RTX 3060 or RTX A2000 at the very minimum. 4. Internet speed: make sure it is at least 200 mbps for download otherwise it will take a very long time to download the models. 
https://preview.redd.it/tcf058vagbgb1.png?width=880&format=png&auto=webp&s=080fe75c2bd42f9f03dd3e854c068b7ec60448a5

* Go to the side where it says "Instance Configuration", then click "Edit Image and Config". Next, go to the "Recommended" tab and look for "OogaBooga LLM WebUI"; currently it says "(LLaMa 2)" after the text, which will probably change when a new model comes out. Click on it (not the link, but the blank space around it). It should highlight it as shown here. Then click "Select", and now the configuration should say the name of the image.
* Next, find the model you want. I wouldn't recommend anything less than 13B, as it just won't be very good or coherent. I will be using [https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ) in this example. It is censored though. I would recommend at least 12 GB of VRAM for this, so a 3060 will work, as long as you don't go too high on the context length, maybe 1 or 2 thousand tokens (maybe more, maybe less, should be at least 1k though). You will need to use ExLlama because it uses less VRAM, which allows for more context (I will show that in a sec), but keep in mind that the model itself can only go to 4k context.
* Now, check to make sure you have enough storage. On the model page on HuggingFace, click on "Files and versions" and look at the size of the model. Add about 1 or 2 gigs to that just in case, then you should be ready. TheBloke/WizardLM-13B-V1.2-GPTQ (which is 4-bit) uses about 8 gigs of storage and, as I just said, about 12 GB of VRAM.
* Rent your desired instance. Look at my considerations one more time, then go to the instances tab and wait for it to boot up. I will be using an RTX 3060 for this, which has 12 GB of VRAM.
* Once it is done booting (it can sometimes take a while; if it takes over 10 minutes, maybe try a different instance), click "Open", then in the new tab it opened, go to the "Model" tab.
* Go back to the model you want. Next to the name, there should be a tiny copy button. Click that, then paste it into the field that says "Download custom model or LoRA".
* After that, in the same tab, go to the drop-down under the text "Model", make sure it has the model selected, then click "Load" (it may already be loaded). You may have to click the refresh button too. Usually, the model page (if it is by TheBloke) will say what the prompt template is, which is this for WizardLM 1.2:

    A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

* Go back to the text generation tab, paste the template, and modify the prompt to whatever you like.
* If you want to chat with it (not all models are chat models though), then go to "Session", change the mode to "chat", then click "Apply and restart".
* To make sure your prompt format is correct in chat mode, go to "Chat settings"; for me, I would put "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." in the context, set "Your name" to "USER:", and set "Character's name" to "ASSISTANT:".
* You may want to set the parameters to a different preset. Finding the best one will take some trial and error, but there is a "learn more" dropdown that lists some recommended ones for chat and instruct.

That's it for this guide. I hope this helped you! If you have any questions, just ask!
2023-08-05T16:14:13
https://www.reddit.com/r/LocalLLaMA/comments/15izoxk/how_to_run_local_models_on_the_cloud/
kotykd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15izoxk
false
null
t3_15izoxk
/r/LocalLLaMA/comments/15izoxk/how_to_run_local_models_on_the_cloud/
false
false
https://b.thumbs.redditmโ€ฆsB-nnoAqsRIE.jpg
32
null
running vicuna models serverless on banana: sharing my templates for community use
31
hey. i am running the vicuna model family serverless on banana. wanted to share the templates in case anyone else feels the need to do so🙂

[vicuna-7b](https://github.com/yachty66/demo-vicuna-7b)
[vicuna-13b](https://github.com/yachty66/demo-vicuna-13b)
[vicuna-33b](https://github.com/yachty66/demo-vicuna-33b)
2023-08-05T15:16:34
https://www.reddit.com/r/LocalLLaMA/comments/15iyacr/running_vicuna_models_serverless_on_banana/
yachty66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iyacr
false
null
t3_15iyacr
/r/LocalLLaMA/comments/15iyacr/running_vicuna_models_serverless_on_banana/
false
false
self
31
{'enabled': False, 'images': [{'id': 'nkcL1-qWvpPw_u6jENaoK_t-7_GgyyAy59AwbnvGZ3c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=108&crop=smart&auto=webp&s=e0f0de2b2f67cbe0c90b7b0aba14b69651d6351f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=216&crop=smart&auto=webp&s=b28fc2f9c6aa0537a45cd41a3790b043872f9177', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=320&crop=smart&auto=webp&s=1a60ef1e3bf89ab127ab3da3eec1b52ee448d941', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=640&crop=smart&auto=webp&s=4a4a5097974d07a53d9a611126fa4d3d823f1e6b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=960&crop=smart&auto=webp&s=d6079d2c72be4c4bee078e68f54f9d2da0257e0b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?width=1080&crop=smart&auto=webp&s=649b970d4f225dda86313839160c0cde5a37539a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fY7-pWqYH9HaliwFObOgo0DvAVfe7g8nWy9WP27l1ss.jpg?auto=webp&s=795cd48079e2382375b7b88de78918943ec95b4f', 'width': 1200}, 'variants': {}}]}
How does the HF leaderboard work?
5
A lot of models are tuned to have specific prompt formats. When I go to the model submission page in HF leaderboard, I donโ€™t see anywhere where you specify the prompt format. So how does the evaluation know what format to use?
2023-08-05T14:58:35
https://www.reddit.com/r/LocalLLaMA/comments/15ixu4y/how_does_the_hf_leaderboard_work/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ixu4y
false
null
t3_15ixu4y
/r/LocalLLaMA/comments/15ixu4y/how_does_the_hf_leaderboard_work/
false
false
self
5
null
Any feedback using a i9-13900hx and 4090 mobile?
5
I know, I know, laptops are slow, but I travel a lot. Anyway, I'm wondering what sort of speed I could get with this setup:

i9-13900HX: 8 P-cores at 2.20/5.40 GHz, 16 E-cores at 1.70/3.60 GHz, 32 threads total
DDR5-5600, max bandwidth 89 GB/s

This seems like a crazy CPU for a laptop.

And an RTX 4090 mobile with 16 GB GDDR6 at 576 GB/s and 9728 CUDA cores.

To me it seems like a good setup. I'm wondering what kind of speeds I could expect doing inference on a 70B Q4? I guess using a GGML model, offloading about 16 GB to VRAM and about 30 GB to RAM.

Thank you for your time.
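As a very rough sanity check (a back-of-the-envelope sketch, not a benchmark): single-stream token generation is largely memory-bandwidth-bound, so the bandwidth figures in the post give a crude ceiling. All the numbers below are assumptions, and real splits and overheads will lower the result.

```python
# rough memory-bandwidth-bound ceiling for token decode (assumptions only)
weights_gb = 40.0              # ~70B at 4-bit quantization
vram_gb, ram_gb = 14.0, 26.0   # hypothetical split of the weights
vram_bw, ram_bw = 576.0, 89.0  # GB/s, from the post's specs

# each generated token touches every weight once; the slow half dominates
seconds_per_token = vram_gb / vram_bw + ram_gb / ram_bw
tokens_per_second = 1.0 / seconds_per_token   # roughly 3 t/s in this sketch
```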
2023-08-05T14:52:30
https://www.reddit.com/r/LocalLLaMA/comments/15ixp1j/any_feedback_using_a_i913900hx_and_4090_mobile/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ixp1j
false
null
t3_15ixp1j
/r/LocalLLaMA/comments/15ixp1j/any_feedback_using_a_i913900hx_and_4090_mobile/
false
false
self
5
null
Trying to load togethercomputer_LLaMA-2-7B-32K with fully loaded context but it OOMs, but I should have enough VRAM?
12
Using text-generation-webui API through Sillytavern to actually be able to send more than 16k of context. I start running out of memory at about 15k context on a single 80gb card, so I added a second 80gb and I can see that the VRAM of both cards is actually being used, but I'm not able to actually add any context past that despite doubling my VRAM. I'm still getting the same out of memory issue. I've tried using auto-devices and manually assigning VRAM amounts, but both give me the same problem. Tried loading in 8/4 bit, and that lowers the initial (pretty miniscule) footprint of the model without context, but doesn't seem to impact how the context load uses VRAM. I don't think it's some weird problem with the normal 16k context barrier in ooba (because I'm able to use the 16k superhot models without issue, and I'm actually OOMing when I'm sending only 15k on this model.) Any ideas?
2023-08-05T14:18:00
https://www.reddit.com/r/LocalLLaMA/comments/15iwvqj/trying_to_load_togethercomputer_llama27b32k_with/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iwvqj
false
null
t3_15iwvqj
/r/LocalLLaMA/comments/15iwvqj/trying_to_load_togethercomputer_llama27b32k_with/
false
false
self
12
null
What is the best way to run local LLM in another language than English?
11
Hi folks, I wonder what the best way is for me to run a local LLM that can respond in a preferred language, in my case Norwegian. I want to develop a local system that can scan my docs (originally written in Norwegian) and give me responses in Norwegian as well. Thanks in advance!
2023-08-05T13:37:06
https://www.reddit.com/r/LocalLLaMA/comments/15ivxl5/what_is_the_best_way_to_run_local_llm_in_another/
Calm-Cap-1480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ivxl5
false
null
t3_15ivxl5
/r/LocalLLaMA/comments/15ivxl5/what_is_the_best_way_to_run_local_llm_in_another/
false
false
self
11
null
How we can get the access of llama 2 API key?
0
I want to use the Llama 2 model in my application but I don't know where to get an API key I can use. I know we can host a private instance of the model, but that doesn't fit my requirements; I just want to make 500 to 1000 requests every day. Deploying a private instance makes no sense for me, since it would just add an unwanted headache and an extra bill. So is there any way I can get access to a Llama 2 API for free or at low cost?
2023-08-05T12:04:41
https://www.reddit.com/r/LocalLLaMA/comments/15itz6u/how_we_can_get_the_access_of_llama_2_api_key/
Prashant_4200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15itz6u
false
null
t3_15itz6u
/r/LocalLLaMA/comments/15itz6u/how_we_can_get_the_access_of_llama_2_api_key/
false
false
self
0
null
Fine-Tuning LLaMA 2 7b q4 GGML?
10
So I got LLaMA 2 7B Q4 GGML inference running on some systems, works well enough. Using Koboldcpp. I'd like to fine-tune the model, and I'm trying to figure out how to go about doing this... there are a lot of methods (LoRA etc.) so I'm a little confused. I'm fine with the default context lengths etc.

1. The process: should I fine-tune the un-quantized fp16 LLaMA 2 7B released by Meta, and then convert the fine-tuned model into a .bin? Can quantized GGML models be fine-tuned directly? I found this: https://www.philschmid.de/instruction-tune-llama-2 - but it's for QLoRA. Is it better (from an output quality POV) to fine-tune with or without QLoRA and flash attention?
2. Instruction fine-tuning dataset: should this be a JSONL file, a JSON, separate files... how should I format it?
3. Hyperparameters: can I use the same hyperparameters specified in the link? What if I don't use QLoRA, what would the hyperparameters be then?
4. Where: RunPod seems to be the recommended one here, are there any templates I can use for RunPod?
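On question 2, there is no single required format, but a common convention (an assumption here, not something Koboldcpp or the linked post mandates) is JSONL with one Alpaca-style object per line:

```python
import json

# hypothetical instruction-tuning sample; the instruction/input/output
# field names are a common Alpaca-style convention, not a requirement
sample = {
    "instruction": "Summarize the following text in one sentence.",
    "input": "GGML is a tensor library used for CPU inference of LLMs.",
    "output": "GGML is a library for running LLMs on CPUs.",
}

# a JSONL file is just one such serialized object per line
line = json.dumps(sample)
```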
2023-08-05T11:58:44
https://www.reddit.com/r/LocalLLaMA/comments/15itul5/finetuning_llama_2_7b_q4_ggml/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15itul5
false
null
t3_15itul5
/r/LocalLLaMA/comments/15itul5/finetuning_llama_2_7b_q4_ggml/
false
false
self
10
{'enabled': False, 'images': [{'id': 'NYy7vS_DCF7ziYozZI5NewU4mrQpjLxWwJIEeoOeoTE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=108&crop=smart&auto=webp&s=4768a7f3ce8e98b65ec2928dd27be69d13817653', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=216&crop=smart&auto=webp&s=f597cbd4fbbce7835de2c3ddf57bea4be32791f5', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=320&crop=smart&auto=webp&s=63abbf41f12bdd3f3a744092849dea63858626f3', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=640&crop=smart&auto=webp&s=8c350290c3032da07ffd1380750949fe1a6eddec', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=960&crop=smart&auto=webp&s=eb6f8491e988e2a9cbc7ff3ab2a8f7d3c829b09f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=1080&crop=smart&auto=webp&s=c63d9cb2ef67160c0d0c200ae7b5a4b86e3e4148', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?auto=webp&s=b98be99841a14dfc0937f46c8910ea6847ab32b0', 'width': 2400}, 'variants': {}}]}
Help for set Llama.cpp public
1
Hi, I would like to make llama.cpp public. Every time I start it, it gives me this output: "To create a public link, set `share=True` in `launch()`." Where is the file with that option? Thanks
2023-08-05T11:54:40
https://www.reddit.com/r/LocalLLaMA/comments/15itrt5/help_for_set_llamacpp_public/
HackintoshWITHoMEN30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15itrt5
false
null
t3_15itrt5
/r/LocalLLaMA/comments/15itrt5/help_for_set_llamacpp_public/
false
false
self
1
null
Transformer implementation - help
1
[removed]
2023-08-05T11:40:16
https://www.reddit.com/r/LocalLLaMA/comments/15ithtd/transformer_implementation_help/
04RR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ithtd
false
null
t3_15ithtd
/r/LocalLLaMA/comments/15ithtd/transformer_implementation_help/
false
false
self
1
null
Comparing Linear Rope Scaling vs NTK Scaling for 8K Superhot and Hermes-LLongMA-2 8K Models
60
I have done quite a few tests with models that have been finetuned with linear rope scaling, like the 8K superhot models and now also with the hermes-llongma-2-13b-8k.ggmlv3.q4_K_S.bin model. My GPU has 16GB VRAM, which allows me to run 13B q4_0 or q4_K_S models entirely on the GPU with 8K context using koboldcpp (v1.38).

As an initial test, I use a short prompt (in story mode) and set the number of tokens to generate to 8000, which still fits in the 8K context buffer together with the prompt. I use temp 0.3, top p 0.9 and streaming, and I abort and regenerate if I don't like the first chapter, e.g. if it's too short. It takes about 8 minutes for me to generate the 8000 tokens and then I look through the text to check for obvious problems. The prompt that I use should (with some luck) generate enough varying text and reads as follows:

    The following is an encyclopedia about every country in the world, each chapter addressing a different country, ordered by the name of the country, including its statistics, GDP per capita, history, culture, and notable landmarks.
    -------------
    Chapter 1: Afghanistan
    Afghanistan

First, I conducted the test multiple times using the correct scaling method: linear rope scaling, with the correct scaling factor. Next, I conducted multiple tests using the incorrect scaling method: NTK scaling instead of linear rope scaling, with the correct scaling factor.

Results for all 13B 8K superhot models I tested, like chronos-hermes-13b-superhot-8k.ggmlv3.q4_0.bin:

- ropeconfig 0.25 10000 (linear factor 4): Many problems in numbers containing 2 or more successive identical digits, like 11 -> 111, 2001 -> 20001 etc.
- ropeconfig 1.0 82000 (NTK factor 4): Much better, very obvious. But still problems in numbers containing 3 or more successive identical digits, like 000 -> 0000.

Results for hermes-llongma-2-13b-8k.ggmlv3.q4_K_S.bin:

- ropeconfig 0.5 10000 (linear factor 2): Problems in numbers containing 3 or more successive identical digits, like 000 -> 0000. Similar quality of numbers as superhot 8K models when using NTK scaling, maybe slightly better.
- ropeconfig 1.0 32000 (NTK factor 2): Much better. No obvious problems in numbers seen.

So, according to these results, 8K models based on linear rope scaling like superhot and hermes-llongma-2 produce much better number behaviour when using NTK scaling than when using linear rope scaling during inference. This result was surprising. I have only tested using koboldcpp (up to v1.38), but I don't believe that there is a bug. I also have the impression that not just numbers but the quality of the text in general is better when using NTK scaling instead of linear rope scaling with these models, but I may be hallucinating that. Has anybody else seen similar behaviour for such a test with 8K superhot or LLongMA-2 8K models?

I also did these tests with the vicuna-13b-v1.5-16k.ggmlv3.q4_K_S.bin model, which supports a 16K context and is based on LLaMA-2 (scaling factor 4), testing with an 8K context buffer, and I didn't see any number problems in the 8000 tokens produced, neither using linear scaling nor using NTK scaling.
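For reference, the two `--ropeconfig` values above map onto RoPE roughly like this; a minimal sketch of my reading of the freq_scale/freq_base parameters, so treat the exact mapping as an assumption:

```python
def rope_inv_freqs(head_dim, freq_scale=1.0, freq_base=10000.0):
    # rotary inverse frequencies; freq_scale stretches positions linearly,
    # freq_base changes the base (the NTK-style approach)
    return [freq_scale * freq_base ** (-2.0 * i / head_dim)
            for i in range(head_dim // 2)]

default = rope_inv_freqs(128)
linear4 = rope_inv_freqs(128, freq_scale=0.25)    # ropeconfig 0.25 10000
ntk4    = rope_inv_freqs(128, freq_base=82000.0)  # ropeconfig 1.0 82000
```

Linear scaling compresses every frequency by the same factor, while changing the base leaves the fastest-rotating dimensions almost untouched and mostly slows the low-frequency ones, which might be part of why digit-level detail survives better under NTK scaling in these tests.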
2023-08-05T11:12:16
https://www.reddit.com/r/LocalLLaMA/comments/15isyyo/comparing_linear_rope_scaling_vs_ntk_scaling_for/
mll59
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15isyyo
false
null
t3_15isyyo
/r/LocalLLaMA/comments/15isyyo/comparing_linear_rope_scaling_vs_ntk_scaling_for/
false
false
self
60
null
What exactly is it that makes Llama 2 better than Llama?
1
[removed]
2023-08-05T10:27:42
https://www.reddit.com/r/LocalLLaMA/comments/15is5t3/what_exactly_is_it_that_makes_llama_2_better_than/
crono760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15is5t3
false
null
t3_15is5t3
/r/LocalLLaMA/comments/15is5t3/what_exactly_is_it_that_makes_llama_2_better_than/
false
false
self
1
null
Help wanted with a project
0
I have been tasked with researching the use of a Large Language Model as a knowledge base for my company. I aim to implement the most capable model possible, so I would appreciate any recommendations on hardware and AI models that would be suitable. Specifically, what hardware specifications - such as GPUs, CPUs, and memory - would you advise to support advanced models? And which Large Language Model architecture would you suggest is best suited for a robust enterprise knowledge base, in terms of capabilities and scalability? Insights on model training approaches and optimization for our use case would also be greatly appreciated. My goal is to leverage state-of-the-art AI to build an intelligent and comprehensive knowledge base. Expert guidance on optimal hardware configurations and model selection would help ensure I implement an effective solution suited to my company's needs and resources. I welcome any technical guidance you can offer, as I want to make an informed decision for this important initiative. Please let me know your thoughts and suggestions.
2023-08-05T10:22:36
https://www.reddit.com/r/LocalLLaMA/comments/15is2it/help_wanted_with_a_project/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15is2it
false
null
t3_15is2it
/r/LocalLLaMA/comments/15is2it/help_wanted_with_a_project/
false
false
self
0
null
Llama.cpp and llama II models
1
I downloaded several models, including:

    airoboros-l2-70b-gpt4-m2.0.ggmlv3.q5_K_S.bin
    llama2_70b_chat_uncensored.ggmlv3.q5_K_S.bin
    llama-2-70b.ggmlv3.q6_K.bin

All of them are giving me the same error:

    error loading model: llama.cpp: tensor 'layers.0.attention.wk.weight' has wrong shape; expected 8192 x 8192, got 8192 x 1024

Am I using an older version of the software that is not compatible, or am I just doing something wrong?
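For what it's worth, the mismatched shape in that error is consistent with Llama 2 70B's grouped-query attention, which older llama.cpp builds did not assume by default (tools of that era typically needed a GQA setting of 8 for 70B models; check the exact flag for your build). The arithmetic behind the reported shape:

```python
# why wk is 8192 x 1024 for Llama 2 70B: grouped-query attention
n_embd, n_head, n_head_kv = 8192, 64, 8   # 70B model dimensions
head_dim = n_embd // n_head               # 128
wk_cols = n_head_kv * head_dim            # 1024, matching the error message
```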
2023-08-05T10:11:09
https://www.reddit.com/r/LocalLLaMA/comments/15irvgz/llamacpp_and_llama_ii_models/
Red_Redditor_Reddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15irvgz
false
null
t3_15irvgz
/r/LocalLLaMA/comments/15irvgz/llamacpp_and_llama_ii_models/
false
false
self
1
null
What presets are good for Llama 2 on ooba's text-generation-webui
31
I've always used Midnight Enigma for Llama 1 but I'm not sure if there is a better option for 2. I almost always get the same response whenever I regenerate text so I'm looking for a better option.
2023-08-05T08:58:20
https://www.reddit.com/r/LocalLLaMA/comments/15iql9c/what_presets_are_good_for_llama_2_on_oobas/
yeoldecoot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iql9c
false
null
t3_15iql9c
/r/LocalLLaMA/comments/15iql9c/what_presets_are_good_for_llama_2_on_oobas/
false
false
self
31
null
Unsupervised training of llama 2.
19
Hi all. I've explored many fine tuning techniques of llama 2, but all of them require the training data to be in a chat template. But I just want to fine tune it using the raw corpus. Is there any way to do it?
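If it helps, raw-corpus ("continued pretraining") fine-tuning doesn't need any chat template: you tokenize the corpus, concatenate it, and pack it into fixed-length blocks where each block is both the input and the label. A minimal sketch of the packing step (names hypothetical, tokenizer omitted):

```python
def pack_blocks(token_ids, block_size):
    # concatenate-then-chunk: drop the tail so every block is full length
    usable = (len(token_ids) // block_size) * block_size
    return [token_ids[i:i + block_size] for i in range(0, usable, block_size)]

# for causal LM training, each block serves as both input_ids and labels
blocks = pack_blocks(list(range(10)), block_size=4)
```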
2023-08-05T07:02:07
https://www.reddit.com/r/LocalLLaMA/comments/15iols7/unsupervised_training_of_llama_2/
zaid-70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iols7
false
null
t3_15iols7
/r/LocalLLaMA/comments/15iols7/unsupervised_training_of_llama_2/
false
false
self
19
null
Llama2 โ‰ˆ7b storytelling/novel model?
9
Yes, I know this is a very specific request, but Iโ€™m wondering if such a thing exists yet? Iโ€™m on a 2060 so anything 13b+ would probably be too slow/large
2023-08-05T06:56:37
https://www.reddit.com/r/LocalLLaMA/comments/15ioia8/llama2_7b_storytellingnovel_model/
kotobdev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ioia8
false
null
t3_15ioia8
/r/LocalLLaMA/comments/15ioia8/llama2_7b_storytellingnovel_model/
false
false
self
9
null
Any way to train or otherwise tune a local model on a collection of EPUB files?
9
I have a collection of ebooks in EPUB format which I'd like to use as data for a conversational model. I'm vaguely aware of some methods of tuning local models, but I'm curious about firstly which method might work best for this use case, and second how I would go about specifically using EPUB files. In terms of tech I'm just rocking an RTX 3060 12GB with a Ryzen 5600x and 32GB of memory, but I'm probably capable of using Colab or something similar if that would be better. Thanks for the help ๐Ÿ’œ
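On the EPUB side specifically: an EPUB is just a zip archive of XHTML chapters, so extracting plain text for a training corpus can be done with the standard library alone. A rough sketch (untested against exotic EPUBs, so treat it as a starting point):

```python
import zipfile
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects the text nodes of an (X)HTML document."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html):
    p = _TextExtractor()
    p.feed(html)
    return " ".join(s.strip() for s in p.parts if s.strip())

def epub_to_text(path):
    # an EPUB is a zip of XHTML chapter files; concatenate their text
    chunks = []
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():
            if name.endswith((".xhtml", ".html", ".htm")):
                chunks.append(html_to_text(z.read(name).decode("utf-8", "ignore")))
    return "\n".join(chunks)
```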
2023-08-05T06:09:46
https://www.reddit.com/r/LocalLLaMA/comments/15inpdu/any_way_to_train_or_otherwise_tune_a_local_model/
v00d00_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15inpdu
false
null
t3_15inpdu
/r/LocalLLaMA/comments/15inpdu/any_way_to_train_or_otherwise_tune_a_local_model/
false
false
self
9
null
Is there AI for that? Ask this bot trained on 4,000+ AI tools
2
2023-08-05T03:58:26
https://gpte.ai
Slow_Interest_1273
gpte.ai
1970-01-01T00:00:00
0
{}
15il9wc
false
null
t3_15il9wc
/r/LocalLLaMA/comments/15il9wc/is_there_ai_for_that_ask_this_bot_trained_on_4000/
false
false
https://a.thumbs.redditmโ€ฆZ05vcTcdQcW8.jpg
2
{'enabled': False, 'images': [{'id': 'I-JC_zUwxE9jB5ZKSS9kp74Nu-jf33W0Safboo0oLzE', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/ErQZMe0fLN5c9-r9tK2RssQOaiaxq5PiAJfSOaTP-kM.jpg?width=108&crop=smart&auto=webp&s=ca3f3181427ca127156c75d4cea053c40a96bcd1', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/ErQZMe0fLN5c9-r9tK2RssQOaiaxq5PiAJfSOaTP-kM.jpg?width=216&crop=smart&auto=webp&s=71598e568dc868476d3096451bf240bdb0ec32c4', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/ErQZMe0fLN5c9-r9tK2RssQOaiaxq5PiAJfSOaTP-kM.jpg?width=320&crop=smart&auto=webp&s=24b34ed6e88ed76935de8729827fb4763bde3a81', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/ErQZMe0fLN5c9-r9tK2RssQOaiaxq5PiAJfSOaTP-kM.jpg?width=640&crop=smart&auto=webp&s=7faec0161f3d00fe1f77252b43c93e63a0780d77', 'width': 640}], 'source': {'height': 565, 'url': 'https://external-preview.redd.it/ErQZMe0fLN5c9-r9tK2RssQOaiaxq5PiAJfSOaTP-kM.jpg?auto=webp&s=05b6f9f8e1b82ce916a17e252a28f0ae6a2bcad5', 'width': 833}, 'variants': {}}]}
Suggested machine configuration for llm training and inference
3
Hello all, I would like to know what cloud options I can use for LLM training and inference (faster inference). I will mostly be using 3B and 7B models, rarely 40B models. Lambda Labs seems to be a good option; of the options, which one is the most cost-effective? Or any other cloud service? Help is appreciated.
2023-08-05T03:56:32
https://www.reddit.com/r/LocalLLaMA/comments/15il8l5/suggested_machine_configuration_for_llm_training/
s1lv3rj1nx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15il8l5
false
null
t3_15il8l5
/r/LocalLLaMA/comments/15il8l5/suggested_machine_configuration_for_llm_training/
false
false
self
3
null
Could llama.cpp be ran on an Oracle Cloud server?
2
Oracle Cloud has virtual machines with a 4 core ARM cpu alongside 24gb of ram. Is it possible to use it to run llama.cpp or is the CPU too weak? Also, how can I train a llama model on 100m rows of data for free and use that trained model with llama.cpp?
2023-08-05T02:22:34
https://www.reddit.com/r/LocalLLaMA/comments/15ijcdu/could_llamacpp_be_ran_on_an_oracle_cloud_server/
FormerAccident
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ijcdu
false
null
t3_15ijcdu
/r/LocalLLaMA/comments/15ijcdu/could_llamacpp_be_ran_on_an_oracle_cloud_server/
false
false
self
2
null
Tips for setting up a LLM stand up comedy show?
1
[removed]
2023-08-05T02:10:26
https://www.reddit.com/r/LocalLLaMA/comments/15ij3gs/tips_for_setting_up_a_llm_stand_up_comedy_show/
vrsvrsvrs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ij3gs
false
null
t3_15ij3gs
/r/LocalLLaMA/comments/15ij3gs/tips_for_setting_up_a_llm_stand_up_comedy_show/
false
false
self
1
null
Using โ€œBadโ€ examples for instruction fine tuning with llama2
15
Hello! I have a question that I hope others can help me with. I have been very successful at training llama2 using techniques outlined by Phil Schmid and others. The general technique is:

    def format_instruction(sample):
        return f"""### Instruction:
    Use the Input below to create an instruction, which could have been used to generate the input using an LLM.

    ### Input:
    {sample['response']}

    ### Response:
    {sample['instruction']}
    """

And then for inference you just use the instruction and input to get a response. For my data, I have both "good" response examples and "bad" response examples. I am wondering how I set up the instruction to train on both bad and good. Thanks!
2023-08-05T02:06:55
https://www.reddit.com/r/LocalLLaMA/comments/15ij0xn/using_bad_examples_for_instruction_fine_tuning/
tantan1187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ij0xn
false
null
t3_15ij0xn
/r/LocalLLaMA/comments/15ij0xn/using_bad_examples_for_instruction_fine_tuning/
false
false
self
15
null
Llama.cpp + GGML
23
Anyone using llama.cpp and the GGML Llama 2 models from TheBloke on HF, I would like to know your feedback on performance. My experience has been pretty good so far, but maybe not as good as some of the videos I have seen. I am wondering if anyone has any tricks to accelerate the response. I run it like:

`./main -ins -t 8 -ngl 1 --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -s 42 -m llama-2-13b-chat.ggmlv3.q4_0.bin -p "Act as a helpful Health IT consultant" -n -1`
2023-08-05T01:41:12
https://www.reddit.com/r/LocalLLaMA/comments/15iihlp/llamacpp_ggml/
fhirflyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iihlp
false
null
t3_15iihlp
/r/LocalLLaMA/comments/15iihlp/llamacpp_ggml/
false
false
self
23
null
Is it reasonable to expect LLMs will get the Doom treatment and be able to run on (just about) anything in the future?
98
I know it sounds crazy but could it be possible? Would be great for poorer nations, fact-checking claims by shady companies, etc. I ask because I read articles about LLMs soon coming to phones...which is nuts in and of itself.
2023-08-05T01:32:25
https://www.reddit.com/r/LocalLLaMA/comments/15iiasp/is_it_reasonable_to_expect_llms_will_get_the_doom/
JebryyathHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iiasp
false
null
t3_15iiasp
/r/LocalLLaMA/comments/15iiasp/is_it_reasonable_to_expect_llms_will_get_the_doom/
false
false
self
98
null
Do I need the tokenizer and other files with the ggml to run it optimally on kobold.cpp?
1
As title suggests, I'm not sure if I'm doing this right. I've just been only keeping the ggml file to run the models locally. Do I actually need the other files in the folder?
2023-08-04T23:58:33
https://www.reddit.com/r/LocalLLaMA/comments/15ig7cj/do_i_need_the_tokenizer_and_other_files_with_the/
ssrcrossing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ig7cj
false
null
t3_15ig7cj
/r/LocalLLaMA/comments/15ig7cj/do_i_need_the_tokenizer_and_other_files_with_the/
false
false
self
1
null
Fine tune llama 70b chat possible?
7
Hey all! I have a simple RAG-based application doing vanilla vector search + question answering based on the vector search results. Things are great, BUT when the retrieved context does not contain the information required to answer the question, Llama simply would not say "I don't know", despite prompt pleading and prompt begging. It wants to either give the response from its training data or, worse, hallucinate something realistic-sounding but totally wrong. Having exhausted all prompt techniques, is there an option to fine-tune Llama-2-70B-chat with a few hundred of these missing-from-RAG type questions and have it learn to say "I don't know"? Would appreciate any help, advice, or simply "Not possible"!
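Fine-tuning on refusal examples is the usual direction here: the training set pairs unanswerable-in-context questions with an explicit refusal. A hypothetical sample (field names and wording are mine, not a prescribed format):

```python
import json

# hypothetical refusal-style training pair for RAG question answering
sample = {
    "instruction": ("Answer the question using only the provided context. "
                    "If the context does not contain the answer, say you don't know."),
    "input": ("Context: <retrieved passages without the answer>\n"
              "Question: What is the refund policy for enterprise plans?"),
    "output": "I don't know. The provided context does not contain that information.",
}
line = json.dumps(sample)  # one line of a JSONL training file
```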
2023-08-04T23:50:44
https://www.reddit.com/r/LocalLLaMA/comments/15ig0u1/fine_tune_llama_70b_chat_possible/
Ok-Contribution9043
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ig0u1
false
null
t3_15ig0u1
/r/LocalLLaMA/comments/15ig0u1/fine_tune_llama_70b_chat_possible/
false
false
self
7
null
Vector search for semantic matching - chunking question
1
[removed]
2023-08-04T23:19:34
https://www.reddit.com/r/LocalLLaMA/comments/15ifa2t/vector_search_for_semantic_matching_chunking/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ifa2t
false
null
t3_15ifa2t
/r/LocalLLaMA/comments/15ifa2t/vector_search_for_semantic_matching_chunking/
false
false
self
1
null
Can I fine tune llama 70b chat?
1
[removed]
2023-08-04T23:15:17
https://www.reddit.com/r/LocalLLaMA/comments/15if6f8/can_i_fine_tune_llama_70b_chat/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15if6f8
false
null
t3_15if6f8
/r/LocalLLaMA/comments/15if6f8/can_i_fine_tune_llama_70b_chat/
false
false
self
1
null
FYI GGML Llama-2 Airoboros(LlamaCppModel object has no attribute 'model'.)
22
I noticed that the error some people are getting is because the yaml file is looking for a regex that matches llama-2, and the pre-quantized version from Hugging Face doesn't have that in its name. I changed the -l2- to -llama-2- in the name and it started working.

From config.yaml:

    .*llama.*70b.*ggml.*\.bin:
      n_gqa: 8

Fix for me:

    mv airoboros-l2-70b-gpt4-m2.0.ggmlv3.q4_K_M.bin airoboros-llama-2-70b-gpt4-m2.0.ggmlv3.q4_K_M.bin
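The rename works because the config key is an ordinary regex; a quick check (pattern copied from the config, filenames from this post):

```python
import re

pattern = r".*llama.*70b.*ggml.*\.bin"
old = "airoboros-l2-70b-gpt4-m2.0.ggmlv3.q4_K_M.bin"
new = "airoboros-llama-2-70b-gpt4-m2.0.ggmlv3.q4_K_M.bin"

# "l2" never contains the literal substring "llama",
# so the n_gqa: 8 setting is never applied to the old name
old_match = re.match(pattern, old)   # None
new_match = re.match(pattern, new)   # a Match object
```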
2023-08-04T22:26:20
https://www.reddit.com/r/LocalLLaMA/comments/15idzn4/fyi_ggml_llama2_airoborosllamacppmodel_object_has/
Tasty-Attitude-7893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15idzn4
false
null
t3_15idzn4
/r/LocalLLaMA/comments/15idzn4/fyi_ggml_llama2_airoborosllamacppmodel_object_has/
false
false
self
22
null
Anyone else getting constant chat context resets with the greeting repeated?
15
Not sure if I'm doing something wrong as instruct mode works wonderfully with long responses. But for some reason chat mode no longer seems to work well for me despite it working well in the past. Essentially what seems to happen is within a few exchanges, the model abruptly loses context and repeats the greeting message. What's odd is it's extremely firm when this happens - switching the parameter presets does nothing, and every regenerate produces the greeting message. We're talking only 200-300 tokens in, and that's including the character card context. Using chat-instruct mode. I've made sure I'm using the correct prompt format from the model repo, and even tried different loader options; exllama, gptq-for-llama etc. Has anyone had this issue? Any thoughts of what I might be doing wrong considering that regular instruct mode can produce decent lengthy essay-like responses, and chat mode does seem to work up until this abrupt reset in context.
2023-08-04T22:23:10
https://www.reddit.com/r/LocalLLaMA/comments/15idwu9/anyone_else_getting_constant_chat_context_resets/
trusty20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15idwu9
false
null
t3_15idwu9
/r/LocalLLaMA/comments/15idwu9/anyone_else_getting_constant_chat_context_resets/
false
false
self
15
null
Local, open models and inference key to intellectual diversity
11
2023-08-04T22:01:13
https://medium.com/@cliff.smyth/large-language-models-tools-for-accessing-human-intelligence-not-artificial-intelligence-dff7d0549f20
bidet_enthusiast
medium.com
1970-01-01T00:00:00
0
{}
15idcmv
false
null
t3_15idcmv
/r/LocalLLaMA/comments/15idcmv/local_open_models_and_inference_key_to/
false
false
https://b.thumbs.redditm…oB1Cf5kTFgXs.jpg
11
{'enabled': False, 'images': [{'id': '34fpJhE76DX2ggBkYYxIOW2a51i8hovLoV6lY81OkkM', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?width=108&crop=smart&auto=webp&s=1ede79d8d6a454b7627e14c37b3d477a1c7a788a', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?width=216&crop=smart&auto=webp&s=bf55b7a9f13e48941968370763580967c4c61091', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?width=320&crop=smart&auto=webp&s=29e3550b5964bc3bd512aa12be734ec4fe0ece0c', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?width=640&crop=smart&auto=webp&s=28ad13321bc9b2f1cd8e84391d4a6d4618d519b4', 'width': 640}, {'height': 1280, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?width=960&crop=smart&auto=webp&s=b3e08b5d24308277313224302158ff8ee351c1b5', 'width': 960}, {'height': 1440, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?width=1080&crop=smart&auto=webp&s=57e37fe14f7a6dd09676ef2161afae1c9e73fdf4', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/iR-YRh9ib6hKPfiEkskuMVZ5ieG5ufdunJsqAq1O-bs.jpg?auto=webp&s=d8590a467618f84a3b2349081fb8a89f5eb12a9d', 'width': 1200}, 'variants': {}}]}
I have a 3090, was going to buy a second vs buying a 4090, as it's 1100 vs 1600 and I only care about having 48gb memory for LLM/stable diffusion/3d rendering. Bad idea?
6
Will this let me load 70b models into GPU entirely?
2023-08-04T21:41:19
https://www.reddit.com/r/LocalLLaMA/comments/15ictwc/i_have_a_3090_was_going_to_buy_a_second_vs_buying/
countrycruiser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ictwc
false
null
t3_15ictwc
/r/LocalLLaMA/comments/15ictwc/i_have_a_3090_was_going_to_buy_a_second_vs_buying/
false
false
self
6
null
So bad... StarCoder with its own prompt
0
To be fair, even Bard gets this incorrect. Try this prompt: The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning. They sold 93 loaves in the morning and 39 loaves in the afternoon. A grocery store returned 6 unsold loaves. How many loaves of bread did they have left? It's incredible how bad the models are on this prompt (which is supposed to be correct for StarCoder)
2023-08-04T21:23:42
https://www.reddit.com/r/LocalLLaMA/comments/15iccps/so_bad_startcoder_with_its_own_prompt/
808phone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15iccps
false
null
t3_15iccps
/r/LocalLLaMA/comments/15iccps/so_bad_startcoder_with_its_own_prompt/
false
false
self
0
null