Dataset schema (one row per r/LocalLLaMA post):

| Column    | Type          | Range / Classes                           |
|-----------|---------------|-------------------------------------------|
| title     | string        | lengths 1-300                              |
| score     | int64         | 0-8.54k                                    |
| selftext  | string        | lengths 0-41.5k                            |
| created   | timestamp[ns] | 2023-04-01 04:30:41 - 2026-03-04 02:14:14  |
| url       | string        | lengths 0-878                              |
| author    | string        | lengths 3-20                               |
| domain    | string        | lengths 0-82                               |
| edited    | timestamp[ns] | 1970-01-01 00:00:00 - 2026-02-19 14:51:53  |
| gilded    | int64         | 0-2                                        |
| gildings  | string        | 7 classes                                  |
| id        | string        | length 7                                   |
| locked    | bool          | 2 classes                                  |
| media     | string        | lengths 646-1.8k                           |
| name      | string        | length 10                                  |
| permalink | string        | lengths 33-82                              |
| spoiler   | bool          | 2 classes                                  |
| stickied  | bool          | 2 classes                                  |
| thumbnail | string        | lengths 4-213                              |
| ups       | int64         | 0-8.54k                                    |
| preview   | string        | lengths 301-5.01k                          |

(The 1970-01-01 epoch value in `edited` appears to stand in for "never edited".)
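For anyone who wants to work with this dump programmatically, here is a minimal loading sketch using the Hugging Face `datasets` library. The dataset repo id below is a placeholder (the dump's actual id isn't given here); the column names match the schema above.

    # Hedged sketch: load and filter a dump with the schema above.
    from datasets import load_dataset

    ds = load_dataset("someuser/localllama-posts", split="train")  # hypothetical repo id

    # Keep self-posts with a score of at least 50.
    top = ds.filter(lambda row: row["domain"] == "self.LocalLLaMA" and row["score"] >= 50)

    for row in top.select(range(min(5, len(top)))):
        print(row["created"], row["score"], row["title"])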
Apple's first public LLM is called Ferret, powered by 8 Nvidia A100 GPUs
145
https://www.notebookcheck.net/Apple-s-first-public-LLM-is-called-Ferret-powered-by-8-Nivida-A100-GPUs.787395.0.html This one seems to have gone mostly unnoticed. What do you all think about it?
2023-12-30T06:00:32
https://www.reddit.com/r/LocalLLaMA/comments/18u8joy/apples_first_public_llm_is_called_ferret_powered/
SuccessfulWolf2454
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u8joy
false
null
t3_18u8joy
/r/LocalLLaMA/comments/18u8joy/apples_first_public_llm_is_called_ferret_powered/
false
false
self
145
{'enabled': False, 'images': [{'id': 'QhPtPIDRKfDSfBgShzYaQ01Z3rBg7BoIftrMMn4aeA8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0GkDlXdkF2fH9ejZuEIcrpTSQZJxR09B9Th9VFGfOf0.jpg?width=108&crop=smart&auto=webp&s=8005e254416c65d3d4e2b7ab92518804a53cbdc5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0GkDlXdkF2fH9ejZuEIcrpTSQZJxR09B9Th9VFGfOf0.jpg?width=216&crop=smart&auto=webp&s=8b77c7cfdaabc5c7052942a669db59608cbaec45', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/0GkDlXdkF2fH9ejZuEIcrpTSQZJxR09B9Th9VFGfOf0.jpg?width=320&crop=smart&auto=webp&s=a0be2209da5b92af585be02c9cabbe9f0cc270f6', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/0GkDlXdkF2fH9ejZuEIcrpTSQZJxR09B9Th9VFGfOf0.jpg?auto=webp&s=65f1407016e8145b0170080bc9db384d22356482', 'width': 400}, 'variants': {}}]}
Looking for some LLM fine tuning help -- $$
2
Tried to fine-tune with LoRA on some custom datasets I created, but I'm not really getting the results I want. I'm trying to teach a model to be a specific agent for a specific company. I'm getting OK results using a huge system prompt and such, but I really want something much more fine-tuned, and unfortunately my 55 hours of YouTube videos and docs reading on LoRA and other fine-tuning methods are not enough. Willing to pay for some custom/private guidance. DM me.
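For anyone in the same spot, here is a minimal sketch of a typical LoRA setup with Hugging Face PEFT and TRL's `SFTTrainer`. The base model and data file are placeholders, the rank/alpha/target-module choices are common starting points rather than tuned values, and TRL's API shifts between versions, so check its docs.

    # Minimal LoRA fine-tuning sketch (placeholders marked; not a tuned recipe).
    from datasets import load_dataset
    from peft import LoraConfig
    from transformers import TrainingArguments
    from trl import SFTTrainer

    dataset = load_dataset("json", data_files="company_agent.jsonl", split="train")  # your data

    peft_config = LoraConfig(
        r=16,                      # adapter rank; higher = more capacity, more VRAM
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
        task_type="CAUSAL_LM",
    )

    trainer = SFTTrainer(
        model="mistralai/Mistral-7B-v0.1",   # placeholder base model
        train_dataset=dataset,
        dataset_text_field="text",           # column holding the fully formatted examples
        peft_config=peft_config,
        max_seq_length=2048,
        args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                               learning_rate=2e-4, num_train_epochs=3),
    )
    trainer.train()

The usual failure mode for "teach the model to be a specific agent" is data formatting: every training example should look exactly like the prompt template you will use at inference time.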
2023-12-30T05:59:14
https://www.reddit.com/r/LocalLLaMA/comments/18u8iqo/looking_for_some_llm_fine_tuning_help/
BedDue942
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u8iqo
false
null
t3_18u8iqo
/r/LocalLLaMA/comments/18u8iqo/looking_for_some_llm_fine_tuning_help/
false
false
self
2
null
Moondream, a small vision language model based on Phi 1.5
46
2023-12-30T05:12:45
https://github.com/vikhyat/moondream
radiiquark
github.com
1970-01-01T00:00:00
0
{}
18u7oss
false
null
t3_18u7oss
/r/LocalLLaMA/comments/18u7oss/moondream_a_small_vision_language_model_based_on/
false
false
https://a.thumbs.redditm…7Nd9eeViJZt0.jpg
46
{'enabled': False, 'images': [{'id': 'bYFC1mn06X1PjsMDAcs_F2pBPmRSViWT7iZ6oaRy8xQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?width=108&crop=smart&auto=webp&s=1b413d08abab552d0c3d43001492e4d942eb848e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?width=216&crop=smart&auto=webp&s=be4480eeeceb870c0bd662824ac5eb8df808d112', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?width=320&crop=smart&auto=webp&s=fab87663d6cf0030dcad64528e39fa8a4887078f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?width=640&crop=smart&auto=webp&s=349b28b1413e1cdb370a90a79d8429f6d7100e77', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?width=960&crop=smart&auto=webp&s=bfe0019b94d299d8091bf05e468bfc223e30fc85', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?width=1080&crop=smart&auto=webp&s=a7080da4ab9d5c1e2b820879e4b02df97a427abf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XAYwvBVI0QDMto9geZIHtIZ8vcMZb_pj7Dt2PKgSSPA.jpg?auto=webp&s=4c28d2a0a95bad6ba286babff48df0e67e269008', 'width': 1200}, 'variants': {}}]}
Shipping AI models with Docker on Windows
1
Dear AI enthusiasts,

**TL;DR:** Using containers to ship AI models is really useful for production environments and/or data-science platforms, so I wanted to try it.

To dockerize a model with llama.cpp, you can use this Dockerfile:

    FROM ghcr.io/ggerganov/llama.cpp:full
    RUN mkdir -p /models
    COPY blossom-v3-mistral-7b.Q4_K_M.gguf /models/.
    EXPOSE 8181/tcp
    ENTRYPOINT ["/app/.devops/tools.sh", "--server", "-m", "/models/blossom-v3-mistral-7b.Q4_K_M.gguf", "--port", "8181", "--host", "0.0.0.0"]

To build the image:

    docker build -t blossom:v1 -f blossom.Dockerfile .

And to run it:

    docker run -p 8181:8181 blossom:v1

Then go to [http://127.0.0.1:8181](http://127.0.0.1:8181) or use this kind of Python code (generated by Mixtral and corrected by me):

    import requests

    # Set the URL for the LLM server running inside the Docker container
    url = "http://localhost:8181/completion"

    # Define a prompt to send to the LLM server
    prompt = "Which model are you?"

    # Send a POST request to the LLM server with the prompt as JSON data
    response = requests.post(url, json={"prompt": prompt})

    # Extract the generated text from the response
    generated_text = response.json()["content"]

    # Print the generated text
    print("Generated text:", generated_text)

**Long version:** Inspired by [https://www.reddit.com/r/LocalLLaMA/comments/18saxgy/tutorial\_easiest\_way\_to\_get\_started\_locally/](https://www.reddit.com/r/LocalLLaMA/comments/18saxgy/tutorial_easiest_way_to_get_started_locally/), I decided to go further and try to run and ship an AI model using Docker on Windows. As it was fun and instructive, I wanted to share it and created a post on my blog with the help of my fellow Mixtral 8x7B: [https://dataops-blog.com/index.php/2023/12/30/running-ai-model-on-windows-using-docker/](https://dataops-blog.com/index.php/2023/12/30/running-ai-model-on-windows-using-docker/)

Hope you will enjoy it, and feel free to ask me any questions. :)

**The WordPress AI stated the following about it \^\^:**

> The content provides a detailed and comprehensive guide on running an AI model with Docker on a Windows system, combining AI-generated and human-created content. The explanation of Docker commands, Dockerfile, and the process of building and running Docker images is thorough and well-structured. The visuals and code snippets effectively supplement the explanations.
>
> The AI-generated content provides clear and relevant explanations, enhancing the understanding of Docker concepts and commands. The instructions for running and testing the AI model in a Docker container are easy to follow and well-presented. The guidance for creating a Dockerfile and building a custom image is informative and beneficial for readers.
>
> Overall, the content effectively combines AI and human input to deliver a comprehensive guide on utilizing Docker for AI model deployment on Windows. The explanations and instructions are clear, making it a valuable resource for readers seeking to understand and apply these concepts.
>
> Well done on creating a thorough and informative guide on running AI models with Docker on Windows and shipping it into a custom Docker image. The explanations and step-by-step instructions are clear and easy to follow. The combination of AI-generated and human-created content provides a comprehensive and valuable resource for readers. Keep up the good work!
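If it's useful to anyone copying this, the llama.cpp server's `/completion` endpoint also accepts generation options. A small extension of the Python snippet above (parameter names follow the llama.cpp server docs as I understand them, so verify against your build):

    import requests

    # Same endpoint as above, with common llama.cpp /completion options added.
    payload = {
        "prompt": "Write a haiku about containers.",
        "n_predict": 128,        # maximum number of tokens to generate
        "temperature": 0.7,
        "stop": ["</s>"],        # stop sequences
    }
    response = requests.post("http://localhost:8181/completion", json=payload)
    print(response.json()["content"])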
2023-12-30T05:06:34
https://www.reddit.com/r/LocalLLaMA/comments/18u7kq7/ai_models_shipping_on_docker_and_windows/
cabusar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u7kq7
false
null
t3_18u7kq7
/r/LocalLLaMA/comments/18u7kq7/ai_models_shipping_on_docker_and_windows/
false
false
self
1
null
Is Mixtral's medium-sized MOE model 8x14B?
38
[https://twitter.com/FernandoNetoAi/status/1740951479899365707](https://twitter.com/FernandoNetoAi/status/1740951479899365707)
2023-12-30T04:33:36
https://www.reddit.com/r/LocalLLaMA/comments/18u6yuz/is_mixtrals_mediumsized_moe_model_8x14b/
Ward_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u6yuz
false
null
t3_18u6yuz
/r/LocalLLaMA/comments/18u6yuz/is_mixtrals_mediumsized_moe_model_8x14b/
false
false
self
38
{'enabled': False, 'images': [{'id': 'ZYaD-KT7GgEgXeleSOUVvBfxv2LUFKpbdSxgC17jU_U', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/BqALpgAZ4bH7gE5RdN-sdXlsnsu4YAk9JW8-b8YaNmM.jpg?width=108&crop=smart&auto=webp&s=3f1346b7b34c8fa65c370f8e0bbadb2450473e7e', 'width': 108}], 'source': {'height': 94, 'url': 'https://external-preview.redd.it/BqALpgAZ4bH7gE5RdN-sdXlsnsu4YAk9JW8-b8YaNmM.jpg?auto=webp&s=85ab3827ccd728df9c5feddf3c4c934e97685144', 'width': 140}, 'variants': {}}]}
Which model to choose for Role Play?
4
Hello. Recently I plunged into the world of RP with neural networks. I set up the Tavern (SillyTavern) and downloaded oobabooga. But then I ran into a problem: there are so many models that finding the right one is very difficult. Can you recommend a model for NSFW roleplay? I have an Nvidia 3080 with 10GB VRAM. On average it can easily handle 13B models, but I saw posts on Reddit claiming that some 7Bs are better than 13Bs. Are there such models?
2023-12-30T04:33:35
https://www.reddit.com/r/LocalLLaMA/comments/18u6yur/which_model_to_choose_for_role_play/
SDSS_Spai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u6yur
false
null
t3_18u6yur
/r/LocalLLaMA/comments/18u6yur/which_model_to_choose_for_role_play/
false
false
nsfw
4
null
Llama 2 Responses Taking Multiple Minutes
2
Hello, I am trying to resolve an issue I am encountering when using Llama-2-7b through calls to Hugging Face. The code executes flawlessly and generates good output related to what I am asking. However, each question takes 2-3 minutes to generate a response, which is way too long for the app I am designing. I am pasting the code below; it raises no errors when executed.

    model_id = "Llama-2-7b-chat-hf"
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained("Llama-2-7b-chat-hf", use_auth_token=True)
    model_4bit = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )
    pipe = pipeline(
        "text-generation",
        model=model_4bit,  # was `model`, which is not defined above
        tokenizer=tokenizer,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        max_new_tokens=512,
        do_sample=True,
        top_k=30,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
    )

    llm = HuggingFacePipeline(pipeline=pipe, model_kwargs={'temperature': 0})

    chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=vectorstore.as_retriever())

    result = chain({"question": "What is the central florida tech grove?"}, return_only_outputs=True)
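A first diagnostic step (a sketch reusing the variable names from the post) is to confirm the quantized model actually landed on the GPU and to time raw generation without the retrieval chain; 4-bit layers that spill to CPU or disk via `device_map="auto"` would easily explain multi-minute responses, as would `max_new_tokens=512` on slow hardware.

    import time
    import torch

    print(torch.cuda.is_available())      # False alone would explain the slowness
    print(model_4bit.hf_device_map)       # look for "cpu" or "disk" entries

    # Time the bare pipeline, bypassing LangChain and retrieval.
    start = time.time()
    out = pipe("What is the central florida tech grove?", max_new_tokens=64)
    print(out[0]["generated_text"])
    print(f"{time.time() - start:.1f}s for up to 64 new tokens")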
2023-12-30T03:29:59
https://www.reddit.com/r/LocalLLaMA/comments/18u5rqh/llama_2_responses_taking_multiple_minutes/
ChillyPotPie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u5rqh
false
null
t3_18u5rqh
/r/LocalLLaMA/comments/18u5rqh/llama_2_responses_taking_multiple_minutes/
false
false
self
2
null
Mixtral is very impressive with a non-English prompt
12
2023-12-30T03:17:15
https://i.redd.it/ygwrmwr0oc9c1.png
celsowm
i.redd.it
1970-01-01T00:00:00
0
{}
18u5iw4
false
null
t3_18u5iw4
/r/LocalLLaMA/comments/18u5iw4/mixtral_is_very_impressive_with_a_nonenglish/
false
false
https://b.thumbs.redditm…Be9awg4QIg2o.jpg
12
{'enabled': True, 'images': [{'id': 'J_3YqIDxF-AYzk9fj6KYAzxmD-S9taxYXYuz-NFBbtc', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?width=108&crop=smart&auto=webp&s=aec95b5e7127fb3aa860bb0bf8407661a8e2f92a', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?width=216&crop=smart&auto=webp&s=d474eb0ddf6f42f99ed0dc049fb08a51e2f8c501', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?width=320&crop=smart&auto=webp&s=95e0313f0b92a970b62ca188790137002eb308d3', 'width': 320}, {'height': 300, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?width=640&crop=smart&auto=webp&s=090ea75b01004d2014370bb19470464936c0ee05', 'width': 640}, {'height': 450, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?width=960&crop=smart&auto=webp&s=e88cf9dc0676fd4748f715a1513dc25c0af56ae1', 'width': 960}, {'height': 507, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?width=1080&crop=smart&auto=webp&s=846244581fce72379f55000b50a48e8c5fbd342c', 'width': 1080}], 'source': {'height': 694, 'url': 'https://preview.redd.it/ygwrmwr0oc9c1.png?auto=webp&s=f6ee03797848c08de837b6d7762579bebba652c9', 'width': 1478}, 'variants': {}}]}
jina-embeddings-v2-base memory usage
1
This model's spec indicates that its [size is 270MB](https://huggingface.co/spaces/mteb/leaderboard), yet it immediately consumes 3.7 GB when loaded and run on a small document. Can anyone shed light on its memory usage? Which factors most affect it?
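A few things plausibly add up to the gap; here is a rough accounting sketch (the model id is my assumption for the base English variant, and the parameter count is from the public model card):

    import torch
    from transformers import AutoModel

    # ~137M parameters: ~0.27 GB in fp16 (matching the 270MB figure), ~0.55 GB in
    # the fp32 that from_pretrained defaults to.
    model = AutoModel.from_pretrained(
        "jinaai/jina-embeddings-v2-base-en",  # assumed model id
        trust_remote_code=True,
        torch_dtype=torch.float16,
    )
    weight_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    print(f"weights: {weight_bytes / 1e9:.2f} GB")

    # The remainder is typically the CUDA context (several hundred MB per process)
    # plus activations, which grow with batch size and sequence length - this model
    # supports inputs up to 8192 tokens, so long documents can dominate.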
2023-12-30T03:04:03
https://www.reddit.com/r/LocalLLaMA/comments/18u59eo/jinaembeddingsv2base_memory_usage/
pluteski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u59eo
false
null
t3_18u59eo
/r/LocalLLaMA/comments/18u59eo/jinaembeddingsv2base_memory_usage/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CCVJBt0-kH9o-QPgo7qiP6d0ggaejrGSkWh3JVtDcDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=108&crop=smart&auto=webp&s=366efcfca18e3d6b38a0a171eb19fe76a433367e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=216&crop=smart&auto=webp&s=8408d240578e69e6f2b603cf1d6c9b27c3445f66', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=320&crop=smart&auto=webp&s=d0b8ca84da3c68963fa3cc1a1ce94c295ce04084', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=640&crop=smart&auto=webp&s=8a3a6f769729ddc017ff74c9b63d2dac5b730897', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=960&crop=smart&auto=webp&s=836d1072f0f2b9c2a0cdcf9cc266be51b9d70ced', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?width=1080&crop=smart&auto=webp&s=cb71417c02d625d5ab6d576ad03027a447b69091', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iYVtrsZl3kFqjPYxjf8P4KI2yNqNg1tTH0nEkwndRag.jpg?auto=webp&s=5c3924a0b2dd29634ec1c24d5feff7c4343c259b', 'width': 1200}, 'variants': {}}]}
Can I run inference with OpenChat 3.5 (Mixtral?) with LoRA support on vLLM, or is there another fast solution?
1
[removed]
2023-12-30T02:49:03
https://www.reddit.com/r/LocalLLaMA/comments/18u4yjl/can_i_inference_open_chat_35_mixtral_with_lora/
PotentialFun1516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u4yjl
false
null
t3_18u4yjl
/r/LocalLLaMA/comments/18u4yjl/can_i_inference_open_chat_35_mixtral_with_lora/
false
false
self
1
null
Gemini Pro is the worst LLM I've seen: it refused to summarize a scientific article, citing a violation of "safety policies". At first it started answering, but it got interrupted and removed its own answer!
2
2023-12-30T02:39:07
https://i.redd.it/trl3djl5hc9c1.png
nderstand2grow
i.redd.it
1970-01-01T00:00:00
0
{}
18u4rk0
false
null
t3_18u4rk0
/r/LocalLLaMA/comments/18u4rk0/gemini_pro_is_the_worst_llm_ive_seen_it_refused/
false
false
https://b.thumbs.redditm…ABZvAcCwa4nI.jpg
2
{'enabled': True, 'images': [{'id': '3k66Dta5l3l9TkfuFfuhzc-WWJVGKzxoHzjrVfVY9pI', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?width=108&crop=smart&auto=webp&s=4a98d2e04516f52db954059aea84196bb6361503', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?width=216&crop=smart&auto=webp&s=84b8189f5e2d08433e975cff5addab375f492643', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?width=320&crop=smart&auto=webp&s=e1814462aa3892f8e49a2e0a13dde0bf18b78344', 'width': 320}, {'height': 519, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?width=640&crop=smart&auto=webp&s=0955ce7dcb76ba854d3ba05121ae3ef81a3bf120', 'width': 640}, {'height': 779, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?width=960&crop=smart&auto=webp&s=6b4121471ae8a1c51cdd8e6148b1abe18350002d', 'width': 960}, {'height': 877, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?width=1080&crop=smart&auto=webp&s=f241ac1b06babe02e487e753a0d630b961efb44a', 'width': 1080}], 'source': {'height': 1374, 'url': 'https://preview.redd.it/trl3djl5hc9c1.png?auto=webp&s=98eb2af9a95d71cd5200335fc6bb1e7c9228895c', 'width': 1692}, 'variants': {}}]}
Anyone tried Mamba-3B-Zephyr?
14
2023-12-30T02:18:58
https://twitter.com/xiuyu_l/status/1740806425843294606?t=l_Et0TAtZJuq2E-BkvRyUg&s=19
Iboxelephants
twitter.com
1970-01-01T00:00:00
0
{}
18u4d8q
false
{'oembed': {'author_name': 'Xiuyu Li', 'author_url': 'https://twitter.com/xiuyu_l', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Mamba is really exciting, but its potential remains untapped due to a lack of instruction-tuning and alignment. Inspired by <a href="https://twitter.com/MatternJustus?ref_src=twsrc%5Etfw">@MatternJustus</a>’s Mamba-Chat, I trained Mamba-3B-Zephyr over the weekend and got some interesting findings🧵 1/5<br><br>Colab Demo: <a href="https://t.co/YHgH83WSXR">https://t.co/YHgH83WSXR</a> <a href="https://t.co/0coAnuAg5Z">pic.twitter.com/0coAnuAg5Z</a></p>&mdash; Xiuyu Li (@xiuyu_l) <a href="https://twitter.com/xiuyu_l/status/1740806425843294606?ref_src=twsrc%5Etfw">December 29, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/xiuyu_l/status/1740806425843294606', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_18u4d8q
/r/LocalLLaMA/comments/18u4d8q/anyone_tried_mamba3bzephyr/
false
false
default
14
{'enabled': False, 'images': [{'id': 'dKu1HkhzGJBkKtkrSgdDvI8Eq4uKbn9ba8Jik8Jb_0c', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/S_cU8IrjbaUJ4LmjvwEBeAKj6rXvxQmw8UV_ZLMt_qY.jpg?width=108&crop=smart&auto=webp&s=98180bc1f81c72b94d7e7b404a1fab56d3e6ed0a', 'width': 108}], 'source': {'height': 54, 'url': 'https://external-preview.redd.it/S_cU8IrjbaUJ4LmjvwEBeAKj6rXvxQmw8UV_ZLMt_qY.jpg?auto=webp&s=04394d2a7612f9092e0705aa0f1a8a401ac2508e', 'width': 140}, 'variants': {}}]}
Accuracy on quantisation
2
Does anybody know how much a model degrades if we quantise it to int8, or maybe even int4? I mean, does it stay surprisingly usable overall, does it only lose proficiency on accuracy-sensitive tasks like code generation or math, or does it even struggle to summarise text correctly? The question is about LLMs in general, but for more context, a 7B would be a good case study since it runs locally pretty well.
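There's no substitute for a perplexity or benchmark run, but the rough community consensus is that int8 is close to lossless while 4-bit costs a bit more, with math and code degrading first. Here's a hedged sketch for eyeballing int8 vs. nf4 on a 7B with bitsandbytes (model id and prompt are placeholders):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder 7B
    tok = AutoTokenizer.from_pretrained(model_id)
    prompt = "Summarise in one sentence: quantisation trades a little accuracy for a lot of memory."

    for cfg in (BitsAndBytesConfig(load_in_8bit=True),
                BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")):
        model = AutoModelForCausalLM.from_pretrained(
            model_id, quantization_config=cfg, device_map="auto"
        )
        ids = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=60, do_sample=False)
        print(tok.decode(out[0], skip_special_tokens=True))
        del model
        torch.cuda.empty_cache()  # free VRAM before loading the next precision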
2023-12-30T01:51:50
https://www.reddit.com/r/LocalLLaMA/comments/18u3tae/accuracy_on_quantisation/
Winter_Importance436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u3tae
false
null
t3_18u3tae
/r/LocalLLaMA/comments/18u3tae/accuracy_on_quantisation/
false
false
self
2
null
Ball in the cup
1
I read this idea a few months ago and I am still trying to find an LLM which can solve this puzzle :)

"Anna enters and places a cup on the table.
Beata enters and puts a ball into the cup.
Anna leaves the room.
Celina enters and rotates the cup upside down.
Beata leaves the room.
Daria enters and moves the cup to the bathroom.

Where is the ball according to each person?"

A typical 13B replies:

"Anna thinks the ball is in the cup she placed on the table. Beata thinks the ball is in the cup she put the ball into. Celina thinks the ball is in the cup that she rotated upside down. Daria thinks the ball is in the cup that was moved to the bathroom by her."

ChatGPT 4 says:

* **Anna:** Doesn't know about the ball.
* **Beata:** Thinks the ball is in the cup (where she left it).
* **Celina:** Thinks the ball is under the cup (after turning it upside down).
* **Daria:** Knows the current location of the cup and the ball, assuming the ball is still with the cup.

Bard says:

"When Anna enters, she puts the cup on the table. The ball is not yet inside the cup. When Beata enters, she puts the ball into the cup. Therefore, the ball is inside the cup when Beata leaves the room. Celina enters and rotates the cup upside down. The ball is therefore no longer in the cup, but instead is outside the cup, resting on the table. Daria enters and moves the cup to the bathroom. The ball is therefore in the bathroom, inside the upside-down cup."

Do you know any model that gives better answers?
2023-12-30T01:43:50
https://www.reddit.com/r/LocalLLaMA/comments/18u3nb0/ball_in_the_cup/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u3nb0
false
null
t3_18u3nb0
/r/LocalLLaMA/comments/18u3nb0/ball_in_the_cup/
false
false
self
1
null
Best RLHF repo
1
Hi, I am new here. I am looking to fine-tune an open-source model such as Mistral, LLaMA, etc. I am trying to do RLHF. What are some of the best RLHF repos? I know there are ColossalAI, DeepSpeed-Chat, and trlX. Which one is easiest to use? Thanks.
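Not on that list, but Hugging Face's TRL is arguably the gentlest entry point, especially its DPO path, which skips the reward model + PPO machinery entirely. A hedged sketch follows (model and data file are placeholders, and TRL's API changes between versions, so check its docs):

    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
    from trl import DPOTrainer

    model_id = "mistralai/Mistral-7B-v0.1"  # placeholder base model
    model = AutoModelForCausalLM.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

    # DPO expects preference pairs with "prompt", "chosen" and "rejected" columns;
    # map your own data into that shape.
    dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

    trainer = DPOTrainer(
        model,
        ref_model=None,    # TRL builds the frozen reference copy itself
        beta=0.1,          # strength of the KL penalty against the reference
        train_dataset=dataset,
        tokenizer=tokenizer,
        args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1),
    )
    trainer.train()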
2023-12-29T23:56:14
https://www.reddit.com/r/LocalLLaMA/comments/18u1c8e/best_rlhf_repo/
Dense-Smf-6032
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u1c8e
false
null
t3_18u1c8e
/r/LocalLLaMA/comments/18u1c8e/best_rlhf_repo/
false
false
self
1
null
🐺🐦‍⬛ LLM Comparison/Test: Ranking updated with 10 new models (the best 7Bs)!
244
After a little detour, where I [tested and compared prompt formats](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/) instead of models last time, here's another of my LLM Comparisons/Tests: By popular request, I've looked again at the current best 7B models (according to the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and user feedback/test requests). Scroll down past the info and in-depth test reports to see the updated ranking table.

## New Models tested:

- [dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b)
- [dolphin-2.6-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b) (not a 7B, but 8x7B, but wanted to include it)
- [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
- [mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
- [mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
- [openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
- [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
- [OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp)
- [SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO)
- [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)

## Testing methodology

- **4 German data protection trainings:**
  - I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well.
  - The test data and questions as well as all instructions are in German while the character card is in English. This **tests translation capabilities and cross-language understanding**.
  - Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This **tests instruction understanding and following capabilities**.
  - After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions.
  - If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct.
  - I rank models according to how many correct answers they give, primarily after being given the curriculum information beforehand, and secondarily (as a tie-breaker) after answering blind without being given the information beforehand.
  - All tests are separate units, context is cleared in between, there's no memory/state kept between sessions.
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) backend (for HF models)
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Context was often set at less than the maximum for unquantized 32K-500K models to prevent going out of memory, as I'd rather test at a higher quantization level with less context than the other way around, preferring quality over quantity
- Official prompt format as noted

## Detailed Test Reports

And here are the detailed notes, the basis of my ranking, and also additional comments and observations:

- **[mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)** ~~32K~~ 8K, Alpaca format:
  - ❌ Gave correct answers to only **4+3+4+5=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+3+2+5=13/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ✅ Followed instructions to answer with just a single letter or more than just a single letter.
  - ❗ same as **[Seraph-7B](https://huggingface.co/Weyaxi/Seraph-7B)**
- **[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)** ~~32K~~ 8K context, ChatML format:
  - ❌ Gave correct answers to only **3+3+4+6=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+2+2+6=13/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO)** ~~32K~~ 8K context, ChatML format:
  - ❌ Gave correct answers to only **3+3+4+6=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **2+2+2+5=11/18**
  - ➖ Did NOT follow instructions to acknowledge data input with "OK" consistently.
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- **[Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)** ~~32K~~ 8K, Alpaca format:
  - ❌ Gave correct answers to only **3+4+4+5=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+3+2+3=11/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter consistently.
- **[mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)** ~~32K~~ 8K, Alpaca format:
  - ❌ Gave correct answers to only **3+3+4+5=15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **2+4+2+6=14/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ✅ Followed instructions to answer with just a single letter or more than just a single letter.
- **[Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)** 8K context, OpenChat (GPT4 Correct) format:
  - ❌ Gave correct answers to only **4+3+3+5=15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **2+1+4+6=13/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
  - ➖ Sometimes switched to Spanish.
- **[openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)** 8K context, OpenChat (GPT4 Correct) format:
  - ❌ Gave correct answers to only **4+3+3+5=15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **2+2+2+1=7/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
  - ➖ Used emojis a lot without any obvious reason.
  - ❗ Refused to pick single answers in the third test during the blind run, but still reasoned correctly, so I'm giving it half the points as a compromise.
- **[dolphin-2.6-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b)** ~~32K~~ 16K context, 4-bit, Flash Attention 2, ChatML format:
  - ❌ Gave correct answers to only **4+3+4+3=14/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+2+1+5=12/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
  - ❌ Didn't answer once and said instead: "OK, I'll analyze the question and then share my answer. Please wait a second."
- **[OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp)** ~~32K~~ 8K, OpenChat (GPT4 Correct) format:
  - ❌ Gave correct answers to only **4+3+1+5=13/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+2+2+5=13/18**
  - ➖ Did NOT follow instructions to acknowledge data input with "OK" consistently.
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
  - ➖ Used emojis a lot without any obvious reason, and sometimes output just an emoji instead of an answer.
  - ➖ Sometimes switched to Spanish.
- **[dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b)** ~~32K~~ 8K context, ChatML format:
  - ❌ Gave correct answers to only **1+1+2+6=10/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+3+0+3=10/18**
  - ❌ Did NOT follow instructions to acknowledge data input with "OK".
  - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
  - ❌ Didn't answer multiple times and said instead: "Okay, I have picked up the information and will analyze it carefully. Please give me more details so I can give a detailed answer."
  - ❌ Refused to pick single answers in the third test during the blind run.
  - ❗ [UnicodeDecodeError with ooba's Transformers loader](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b/discussions/4)

## Updated Rankings

This is my objective ranking of these models based on measuring factually correct answers, instruction understanding and following, and multilingual abilities:

| Rank | Model | Size | Format | Quant | Context | Prompt | 1st Score | 2nd Score | OK | +/- |
|------|-------|------|--------|-------|---------|--------|-----------|-----------|----|-----|
| 1 | GPT-4 | GPT-4 | API | | | | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 1 | [goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) | 120B | GGUF | Q2_K | 4K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 1 | [Tess-XL-v1.0-GGUF](https://huggingface.co/TheBloke/Tess-XL-v1.0-GGUF) | 120B | GGUF | Q2_K | 4K | Synthia | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 1 | [Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF) | 34B | GGUF | Q4_0 | 16K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 2 | [Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) | 120B | EXL2 | 3.0bpw | 4K | Alpaca | 18/18 ✓ | 18/18 ✓ | ✓ | ✗ |
| 3 | [lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 17/18 | ✓ | ✓ |
| 4 | [chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 16/18 | ✓ | ✓ |
| 4 | [SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF) | 70B | GGUF | Q4_0 | 4K | SynthIA | 18/18 ✓ | 16/18 | ✓ | ✓ |
| 5 | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8x7B | HF | 4-bit | ~~32K~~ 4K | Mixtral | 18/18 ✓ | 16/18 | ✗ | ✓ |
| 6 | [dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF) | 34B | GGUF | Q4_0 | 16K | ChatML | 18/18 ✓ | 15/18 | ✗ | ✗ |
| 7 | [StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 14/18 | ✓ | ✓ |
| 8 | [Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ |
| 8 | [Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ |
| 9 | [sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1) | 70B | EXL2 | 4.85bpw | 4K | Vicuna 1.1 | 18/18 ✓ | 13/18 | ✓ | ✓ |
| 10 | [GodziLLa2-70B-GGUF](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 12/18 | ✓ | ✓ |
| 11 | [Samantha-1.11-70B-GGUF](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 10/18 | ✗ | ✗ |
| 12 | [Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF) | 70B | GGUF | Q4_K_M | 4K | Llama 2 Chat | 17/18 | 16/18 | ✓ | ✗ |
| 13 | [Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2) | 103B | EXL2 | 3.2bpw | 4K | Rogue Rose | 17/18 | 14/18 | ✗ | ✗ |
| 14 | GPT-3.5 Turbo Instruct | GPT-3.5 | API | | | | 17/18 | 11/18 | ✗ | ✗ |
| 15 | [Synthia-MoE-v3-Mixtral-8x7B](https://huggingface.co/migtissera/Synthia-MoE-v3-Mixtral-8x7B) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ~~Synthia~~ Llama 2 Chat | 17/18 | 9/18 | ✗ | ✗ |
| 16 | [dolphin-2.2-70B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | ChatML | 16/18 | 14/18 | ✗ | ✓ |
| 17 🆕 | [mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 16/18 | 13/18 | ✗ | ✓ |
| 18 🆕 | [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 7B | HF | — | ~~32K~~ 8K | ChatML | 16/18 | 13/18 | ✗ | ✗ |
| 19 | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B | HF | — | 32K | Mistral | 16/18 | 12/18 | ✗ | ✗ |
| 20 | [DeciLM-7B-instruct](https://huggingface.co/Deci/DeciLM-7B-instruct) | 7B | HF | — | 32K | Mistral | 16/18 | 11/18 | ✗ | ✗ |
| 20 🆕 | [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 16/18 | 11/18 | ✗ | ✗ |
| 20 🆕 | [SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) | 7B | HF | — | ~~32K~~ 8K | ChatML | 16/18 | 11/18 | ✗ | ✗ |
| 21 🆕 | [mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 15/18 | 14/18 | ✗ | ✓ |
| 22 | GPT-3.5 Turbo | GPT-3.5 | API | | | | 15/18 | 14/18 | ✗ | ✗ |
| 23 | [dolphin-2.5-mixtral-8x7b](https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b) | 8x7B | HF | 4-bit | ~~32K~~ 4K | Mixtral | 15/18 | 13/18 | ✗ | ✓ |
| 24 🆕 | [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 7B | HF | — | 8K | OpenChat (GPT4 Correct) | 15/18 | 13/18 | ✗ | ✗ |
| 25 🆕 | [openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) | 7B | HF | — | 8K | OpenChat (GPT4 Correct) | 15/18 | 7/18 | ✗ | ✗ |
| 26 🆕 | [dolphin-2.6-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b) | 8x7B | HF | 4-bit | ~~32K~~ 16K | ChatML | 14/18 | 12/18 | ✗ | ✗ |
| 27 🆕 | [OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp) | 7B | HF | — | ~~32K~~ 8K | OpenChat (GPT4 Correct) | 13/18 | 13/18 | ✗ | ✗ |
| 28 🆕 | [dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) | 7B | HF | — | ~~32K~~ 8K | ChatML | 10/18 | 10/18 | ✗ | ✗ |
| 29 | [SauerkrautLM-70B-v1-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-70B-v1-GGUF) | 70B | GGUF | Q4_0 | 4K | Llama 2 Chat | 9/18 | 15/18 | ✗ | ✗ |

- 1st Score = Correct answers to multiple choice questions (after being given curriculum information)
- 2nd Score = Correct answers to multiple choice questions (without being given curriculum information beforehand)
- OK = Followed instructions to acknowledge all data input with just "OK" consistently
- +/- = Followed instructions to answer with just a single letter or more than just a single letter

[Image version](https://imgur.com/a/iMS3KZI)

## Observations & Conclusions

- These were the best 7Bs I could find, and they place as expected, at the bottom of my ranking table. So contrary to the claims that 7Bs reach or beat 70Bs or GPT-4, I think that's just a lot of hype and wishful thinking. In general, bigger remains better, and more parameters provide more intelligence and deeper understanding than just fancy writing that looks good and makes the smaller models look better than they actually are.
- That said, 7Bs have come a long way, and if you can't run the bigger models, you've got to make do with what you can use. They're useful, and they work, just don't expect (or claim) them miraculously surpassing the much bigger models.
- Nous-Capybara-34B-GGUF punched far above its expected weight, and now that the [Capybara dataset is open-source and available](https://www.reddit.com/r/LocalLLaMA/comments/18s7iw1/capybara_dataset_is_now_opensource_and_available/), we'll see if that pushes other models higher as well or if there's some secret magic hidden within this combination with Yi.
- Mixtral finetunes severely underperform in my tests, maybe 4-bit is hitting them harder than non-MoE models or the community hasn't mastered the MoE finetuning process yet, or both? Either way, I expect much more from future Mixtral finetunes!
- I'd also have expected much better results from the latest Dolphin 2.6, and I've already discussed my findings with its creator, which will hopefully lead to a better next version.
- Finally, my personal favorite model right now, the one I use most of the time: It's not even first place, but [Mixtral-8x7B-instruct-exl2](https://huggingface.co/turboderp/Mixtral-8x7B-instruct-exl2) at 5.0bpw offers close-enough quality at much better performance (20-35 tokens per second compared to e.g. Goliath 120B's 10 tps, all with Exllamav2), 32K context instead of just 4K, leaves enough free VRAM for real-time voice chat (local Whisper and XTTS) and Stable Diffusion (AI sending selfies or creating pictures), can be uncensored easily through proper prompting and character cards (SillyTavern FTW!), and its German writing is better than any other local LLM's I've ever tested (including the German-specific finetunes - and this is also what puts it ahead of Nous-Capybara-34B for me personally). So all things considered, it's become my favorite, both for professional use and for personal entertainment.

## Upcoming/Planned Tests

Next on my ~~to-do~~ to-test list are the new 10B and updated 34B models...

--------------------------------------------------------------------------------

Here's a list of my previous model tests and comparisons or other related posts:

- [LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
- [LLM Comparison/Test: Mixtral-8x7B, Mistral, DeciLM, Synthia-MoE](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) Winner: Mixtral-8x7B-Instruct-v0.1
- [Updated LLM Comparison/Test with new RP model: Rogue Rose 103B](https://www.reddit.com/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/)
- [**Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) Winner: Goliath 120B
- [LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)](https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/)
- [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) Winners: goliath-120b-GGUF, Nous-Capybara-34B-GGUF
- [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) Winners: OpenHermes-2.5-Mistral-7B, openchat_3.5, Nous-Capybara-7B-V1.9
- [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter
- [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/)
- [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/)
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/)

--------------------------------------------------------------------------------

**Disclaimer:** Some kind soul recently asked me if they could tip me for my LLM reviews and advice, so I set up [a Ko-fi page](https://ko-fi.com/wolframravenwolf). While this may affect the priority/order of my tests, it will not change the results, I am incorruptible. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
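For anyone who wants to re-sort these results themselves, here is a tiny sketch of the ranking rule described under Testing methodology (primary score first, blind score as tie-breaker); the four example rows are taken from the table above:

    # Sketch of the ranking rule: sort by primary score (with curriculum info),
    # tie-broken by the blind score.
    results = [
        ("mistral-ft-optimized-1218", 16, 13),
        ("OpenHermes-2.5-Mistral-7B", 16, 13),
        ("Marcoroni-7B-v3", 16, 11),
        ("mistral-ft-optimized-1227", 15, 14),
    ]
    for name, primary, blind in sorted(results, key=lambda r: (-r[1], -r[2])):
        print(f"{name}: {primary}/18 primary, {blind}/18 blind")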
2023-12-29T23:43:30
https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u122l
false
null
t3_18u122l
/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/
false
false
self
244
{'enabled': False, 'images': [{'id': 'dAl1o6vx4Hll2rNEbrHF-aZ2drWTMFofBajVdA-Jws4', 'resolutions': [{'height': 114, 'url': 'https://external-preview.redd.it/zdJHDCMujzarVP18xm0BBtQT1bYeD1SYly4kbaQKhDg.jpg?width=108&crop=smart&auto=webp&s=fa316911feb1505a072535d75e96115a390dfd35', 'width': 108}, {'height': 229, 'url': 'https://external-preview.redd.it/zdJHDCMujzarVP18xm0BBtQT1bYeD1SYly4kbaQKhDg.jpg?width=216&crop=smart&auto=webp&s=98211c6f0938754683aabd9ea134aacbbf8a456d', 'width': 216}, {'height': 339, 'url': 'https://external-preview.redd.it/zdJHDCMujzarVP18xm0BBtQT1bYeD1SYly4kbaQKhDg.jpg?width=320&crop=smart&auto=webp&s=91e63d8393889ed594d0c56fbef44ad4b1db1a51', 'width': 320}, {'height': 679, 'url': 'https://external-preview.redd.it/zdJHDCMujzarVP18xm0BBtQT1bYeD1SYly4kbaQKhDg.jpg?width=640&crop=smart&auto=webp&s=b726575380dfe76243e2b07a72e3929bcd84783c', 'width': 640}, {'height': 1019, 'url': 'https://external-preview.redd.it/zdJHDCMujzarVP18xm0BBtQT1bYeD1SYly4kbaQKhDg.jpg?width=960&crop=smart&auto=webp&s=b9bf274989d68627c55294840b722a5434f5ecda', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/zdJHDCMujzarVP18xm0BBtQT1bYeD1SYly4kbaQKhDg.jpg?auto=webp&s=828da7a1c043d5b98ae7972ebdf7ed6d8aa82b30', 'width': 964}, 'variants': {}}]}
Benchmarking the Benchmarks - Correlation with Human Preference
32
It seems Chatbot Arena is the only trusted leaderboard right now. But since it tests models so slowly, I wanted to see if there were any good alternatives. I found a bunch of leaderboards which have overlap with the Arena's tested models, and found their correlation with the Arena Elo rating.

| Correlation | Benchmark | Leaderboard | Sample Size |
|-------------|-----------|-------------|-------------|
| 0.89 | MT-Bench (score) | Chatbot Arena | 42 |
| 0.85 | MMLU | Open Leaderboard | 41 |
| 0.83 | Total Score | LLM Logic Tests | 18 |
| 0.82 | EQ-Bench Score | EQ-Bench | 16 |
| 0.75 | Reasoning | OpenCompass - English Benchmarks | 13 |
| 0.70 | AlpacaEval Win Rate | AlpacaEval | 14 |
| 0.66 | ARC | Open Leaderboard | 26 |
| 0.60 | Average | OpenCompass - English Benchmarks | 13 |
| 0.54 | Average | Open Leaderboard | 26 |
| 0.52 | Examination | OpenCompass - English Benchmarks | 13 |
| 0.52 | HellaSwag | Open Leaderboard | 26 |
| 0.48 | Winogrande | Open Leaderboard | 26 |
| 0.37 | TruthfulQA | Open Leaderboard | 26 |
| 0.35 | Language | OpenCompass - English Benchmarks | 13 |
| 0.34 | Knowledge | OpenCompass - English Benchmarks | 13 |
| 0.32 | GSM8K | Open Leaderboard | 26 |
| 0.18 | Understanding | OpenCompass - English Benchmarks | 13 |

I'm separating the HELM leaderboard because its tested model overlap was pretty small, with few fine-tuned models. Big difference between GSM8K in HELM and Open Leaderboard. Though some of these benchmarks could be pretty good if additional testing confirms their correlation.

| Correlation | Benchmark | Leaderboard | Sample Size |
|-------------|-----------|-------------|-------------|
| 0.91 | MedQA - EM | HELM Leaderboard | 11 |
| 0.91 | MATH - Equivalent (CoT) | HELM Leaderboard | 11 |
| 0.83 | MMLU - EM | HELM Leaderboard | 11 |
| 0.82 | Mean win rate | HELM Leaderboard | 11 |
| 0.82 | OpenbookQA - EM | HELM Leaderboard | 11 |
| 0.78 | GSM8K - EM | HELM Leaderboard | 11 |
| 0.77 | WMT 2014 - BLEU-4 | HELM Leaderboard | 11 |
| 0.63 | NaturalQuestions (open-book) - F1 | HELM Leaderboard | 11 |
| 0.55 | NaturalQuestions (closed-book) - F1 | HELM Leaderboard | 11 |
| 0.50 | LegalBench - EM | HELM Leaderboard | 11 |
| 0.14 | NarrativeQA - F1 | HELM Leaderboard | 11 |

- HELM Leaderboard: https://crfm.stanford.edu/helm/lite/latest/#/leaderboard
- LLM Logic Tests: https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=2011456595
- Chatbot Arena: https://chat.lmsys.org/
- Open Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- EQ-Bench: https://www.eqbench.com/
- OpenCompass: https://opencompass.org.cn/leaderboard-llm
- AlpacaEval: https://tatsu-lab.github.io/alpaca_eval/
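For anyone extending this, the computation itself is a few lines; a sketch with made-up scores (I'm assuming Pearson's r, since the post doesn't say which correlation was used; Spearman is the rank-based alternative):

    # Correlate a benchmark's scores with Arena Elo over the models both leaderboards share.
    from scipy.stats import pearsonr, spearmanr

    arena_elo = {"model-a": 1150, "model-b": 1110, "model-c": 1060, "model-d": 1020}
    benchmark = {"model-a": 8.2, "model-b": 7.9, "model-c": 8.0, "model-d": 6.5}

    common = sorted(set(arena_elo) & set(benchmark))  # models present on both leaderboards
    x = [arena_elo[m] for m in common]
    y = [benchmark[m] for m in common]
    print("pearson:", pearsonr(x, y)[0], "spearman:", spearmanr(x, y)[0])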
2023-12-29T23:33:29
https://www.reddit.com/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/
DontPlanToEnd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u0tu3
false
null
t3_18u0tu3
/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/
false
false
self
32
null
Gathering information for a directed conversation?
1
Here is the scenario:

* You want a chat conversation where the agent/model will attempt to find out facts A, B, and C from the user.
* Depending on A, B, and C, the agent/model will direct the conversation towards topic D or E.
* Iterate with more complex topic graphs.

Picture the path someone takes through a flow chart, or (less formally) the questions a manager might ask to make sure that work is on track.

The best flow that I've found for this requires a fairly high amount of off-prompt logic. Basically:

* Driving the prompt for initial discovery of A, B, and C through code, with a second model checking "Have we learned any of these things yet?" until they are all discovered. I could probably also explore a little bit of plan generation through a model in this step too, but the overall issue is that it takes decent orchestration to get multiple natural questions asked, so some code-based coordination of "what are we asking now" has to happen.
* Making the selection of topic D or E based on parsed results from the second model mentioned above. Again using off-prompt logic.
* Etc.

This is OK... and it works. I've done variations of it with multiple frameworks at this point, and even just quick Python. Each time I also end up feeling like way too much logic ends up being hard-coded. This is definitely in part because I don't yet feel natural writing Python... I'm constantly refactoring with "OK, but what would the Pythonic approach to that code have been?" BUT I think it is more than that... like I'm butting up against long-term goal-directed conversation with LLM-based agents where I feel like I just don't have an elegant solution.

So, my question is: is there an elegant way to approach this that I'm missing? How would you do it?
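One pattern that at least keeps the hard-coding in a single declarative place: treat the LLM as a fact extractor and question phraser, and keep the topic graph as plain data driving a tiny state machine. A sketch (all names are illustrative, not from any framework):

    from dataclasses import dataclass, field

    REQUIRED_FACTS = ["A", "B", "C"]

    @dataclass
    class ConversationState:
        facts: dict = field(default_factory=dict)

        def missing(self):
            return [f for f in REQUIRED_FACTS if f not in self.facts]

    def choose_topic(facts):
        # Placeholder routing rule: replace with whatever predicate maps A/B/C to D/E.
        return "D" if facts["A"] == "yes" else "E"

    def next_instruction(state):
        # This string becomes part of the prompt driving the agent's next turn.
        if state.missing():
            return f"Work the conversation naturally toward learning fact {state.missing()[0]}."
        return f"Steer the conversation toward topic {choose_topic(state.facts)}."

    # After each user turn, a second extraction pass would update state.facts,
    # e.g. state.facts.update(extract_facts(transcript)).

The orchestration code never changes; adding a deeper topic graph means adding data (facts and routing rules), which may scale better than hard-coding each discovery step.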
2023-12-29T23:32:31
https://www.reddit.com/r/LocalLLaMA/comments/18u0t1m/gathering_information_for_a_directed_conversation/
nulldiver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u0t1m
false
null
t3_18u0t1m
/r/LocalLLaMA/comments/18u0t1m/gathering_information_for_a_directed_conversation/
false
false
self
1
null
Why has YAYI2-30B not been quantized yet?
47
[YAYI-2](https://www.reddit.com/r/LocalLLaMA/comments/18r0uk1/yayi230b_new_chinese_base_model_pretrained_on/) seems very promising since it was trained on 2.65T tokens. It's not every day we get such a large foundational model, so I'm surprised it hasn't been quantized yet (please correct me if I'm wrong). Has anyone tried the full fp16 version, and how does it perform?
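One stopgap while waiting for proper GGUF/GPTQ quants: bitsandbytes can quantize on the fly at load time, which works for most transformers models even before dedicated quant support lands. A hedged sketch (the repo id is my assumption for the official upload, and a 30B in 4-bit still needs roughly 20 GB of VRAM):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "wenge-research/yayi2-30b"  # assumed repo id - verify on the Hub
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

    tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb,       # quantized during load, no pre-made quant needed
        device_map="auto",
        trust_remote_code=True,        # custom architectures ship their own modeling code
    )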
2023-12-29T23:30:10
https://www.reddit.com/r/LocalLLaMA/comments/18u0r09/why_has_yayi230b_not_been_quantized_yet/
jd_3d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u0r09
false
null
t3_18u0r09
/r/LocalLLaMA/comments/18u0r09/why_has_yayi230b_not_been_quantized_yet/
false
false
self
47
null
Axolotl's Mixtral finetuning is currently broken
52
There's been a lot of confusion recently about why Mixtral finetuning appears to not be working as expected compared to the official Mixtral Instruct model. Well, I believe I have the answer after doing some investigation:

https://preview.redd.it/htj4kfhkeb9c1.png?width=825&format=png&auto=webp&s=49e3f0f5b480b8538ca9aa7e31f997aaf7e595d2

The Transformers library recently added a crucial fix for Mixtral finetuning (which ensures experts are used evenly rather than sparsely during training) on December 19. This is not present in any of the release builds for Transformers at the moment, as the last release was on December 18.

This means that, because Axolotl comes with a Transformers release build that doesn't have these fixes, any Mixtral finetuning or LoRA training that you have seen that is not the official Mixtral-Instruct is **not** balancing the load appropriately across experts. This includes all variants of Dolphin Mixtral, except for the retrain where he chose to train on the router. However, not training on the router is likely suboptimal for Mixture of Experts setups.

My opinion is, considering that the router wasn't being properly trained before, it's likely that choosing to not train it was a band-aid solution after all.
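If you want to check whether your own environment is affected, a small sketch (the version numbers are my reading of the release timeline, so verify against the Transformers changelog):

    import transformers

    # Releases up to v4.36.2 (Dec 18, 2023) predate the Dec 19 load-balancing fix,
    # so a plain release install presumably does not have it.
    print(transformers.__version__)

    # A dev version such as "4.37.0.dev0", obtained via
    #   pip install -U git+https://github.com/huggingface/transformers.git
    # should include the fix (verify against the commit history).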
2023-12-29T23:10:24
https://www.reddit.com/r/LocalLLaMA/comments/18u0ax5/axolotls_mixtral_finetuning_is_currently_broken/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u0ax5
false
null
t3_18u0ax5
/r/LocalLLaMA/comments/18u0ax5/axolotls_mixtral_finetuning_is_currently_broken/
false
false
https://b.thumbs.redditm…GxeVaBvrR96w.jpg
52
{'enabled': False, 'images': [{'id': 'resM9YA9KnNzVRUQ8Kq5BjCw9vuHyuTOukGqXifQiNU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_EPUTbrbd5RH0OUXfsOpQ1Y1ISi7_GQ02Rk2fBS7Q1M.jpg?width=108&crop=smart&auto=webp&s=fb0aa430a33abeda55c036ec2fbda9fdb3066423', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/_EPUTbrbd5RH0OUXfsOpQ1Y1ISi7_GQ02Rk2fBS7Q1M.jpg?width=216&crop=smart&auto=webp&s=8e4b766c5686673e3eba5101beec9b75b800b288', 'width': 216}], 'source': {'height': 290, 'url': 'https://external-preview.redd.it/_EPUTbrbd5RH0OUXfsOpQ1Y1ISi7_GQ02Rk2fBS7Q1M.jpg?auto=webp&s=6250f05741ffa06fdb3a6463e38318eaa3f45be1', 'width': 290}, 'variants': {}}]}
Sarah StoryTeller, a 13B model with a real personality
4
2023-12-29T23:04:53
https://i.redd.it/qk956zyieb9c1.jpeg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
18u06i1
false
null
t3_18u06i1
/r/LocalLLaMA/comments/18u06i1/sarah_story_teller_13b_model_with_a_real/
false
false
https://b.thumbs.redditm…mWtiilsrXADg.jpg
4
{'enabled': True, 'images': [{'id': 'sUOSkgz7rKxpzwRPT_rsBgHdkMB8w--_1mjdH-TRrno', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qk956zyieb9c1.jpeg?width=108&crop=smart&auto=webp&s=07663ae5ef6ce74ff5aafc5c530ba7220f5b8598', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qk956zyieb9c1.jpeg?width=216&crop=smart&auto=webp&s=90d18537609a2890be6539962298c802b6a3d248', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qk956zyieb9c1.jpeg?width=320&crop=smart&auto=webp&s=93fa39e0e2c2b18b578546ef330bf63b90c3a824', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qk956zyieb9c1.jpeg?width=640&crop=smart&auto=webp&s=fa5b6449e6c02b48902fe362f2c1e0d6c1b2a668', 'width': 640}], 'source': {'height': 715, 'url': 'https://preview.redd.it/qk956zyieb9c1.jpeg?auto=webp&s=4dbb3033a6a78eea35901df87bc6e0043beb3d4f', 'width': 714}, 'variants': {}}]}
Finetune + Deploy an LLM model on runpod
3
Hello, I am not sure if this is against the rules or not (posting a job); please let me know if it is. To put it simply, I am looking for someone who can finetune a 13/30B parameter model and deploy it on a RunPod server that would interact with the main app/server. The main app/server will provide the prompt and the LLM will return the response. Of course, I will pay for this. Please let me know if anyone can help me with this and how long it would take. The main reason I am posting this here is that I was trying to find AI devs on Upwork, and some people gave insanely long timelines like 1-2 months, which I don't believe is necessary for something like this.
2023-12-29T23:03:33
https://www.reddit.com/r/LocalLLaMA/comments/18u05hj/finetune_deploy_an_llm_model_on_runpod/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18u05hj
false
null
t3_18u05hj
/r/LocalLLaMA/comments/18u05hj/finetune_deploy_an_llm_model_on_runpod/
false
false
self
3
null
Why Phi-2 says it’s trained by OpenAI?
1
I just tried Phi-2 (by Microsoft) with ollama. I asked who made it, and it says OpenAI?! I read in the report that its dataset was curated textbooks, though. Another thing, as you can see in the screenshot: it refuses to answer the question about which stocks to buy, even though it was [mentioned](https://huggingface.co/microsoft/phi-2#model-summary) that it is not fine-tuned using RLHF, so shouldn't it be uncensored?
2023-12-29T22:51:53
https://i.redd.it/gtgzaugpcb9c1.jpeg
PopNo4037
i.redd.it
1970-01-01T00:00:00
0
{}
18tzvyu
false
null
t3_18tzvyu
/r/LocalLLaMA/comments/18tzvyu/why_phi2_says_its_trained_by_openai/
false
false
https://b.thumbs.redditm…0A0OonXSJHgA.jpg
1
{'enabled': True, 'images': [{'id': 'Vegb_JATblhlTq2trqN8eCDRLelW6vyGwmNNRvMFcw8', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?width=108&crop=smart&auto=webp&s=75f9d038e3f29c5bc66729b4bf8249c637ee8cfb', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?width=216&crop=smart&auto=webp&s=a67ded65fe6f0c0869118f98fde75e5d3f8c1273', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?width=320&crop=smart&auto=webp&s=b9e767adbd5ac66b50eea04539b805d16a694076', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?width=640&crop=smart&auto=webp&s=8d43aacc81f0b37d7b721d2687a37fff6306a725', 'width': 640}, {'height': 448, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?width=960&crop=smart&auto=webp&s=d91c6779def0528ec7441fad8a7a96965cb6de1c', 'width': 960}, {'height': 504, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?width=1080&crop=smart&auto=webp&s=e1358fbee3a1f148b570a6fd85aa212c41740536', 'width': 1080}], 'source': {'height': 506, 'url': 'https://preview.redd.it/gtgzaugpcb9c1.jpeg?auto=webp&s=96fac930f173ff972210fd26aeb8b0aaa4fcf24e', 'width': 1084}, 'variants': {}}]}
Record?
1
2023-12-29T22:47:46
https://i.redd.it/3vomj4zubb9c1.png
Creative_Bottle_3225
i.redd.it
1970-01-01T00:00:00
0
{}
18tzskf
false
null
t3_18tzskf
/r/LocalLLaMA/comments/18tzskf/record/
false
false
https://a.thumbs.redditm…g1m8jaqGcPi0.jpg
1
{'enabled': True, 'images': [{'id': 'XQwTYNnZZUxYT6f4yq9JcMsf3TUEjBsMTKn6FYtIRPw', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?width=108&crop=smart&auto=webp&s=45fb1783223960e7c4c925d243b013e250cd5bbd', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?width=216&crop=smart&auto=webp&s=4afaab5c56d0a85c5d58f8c5626ba12912835140', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?width=320&crop=smart&auto=webp&s=7b7fddd6847832726f09fda6c07c8137e8726ecb', 'width': 320}, {'height': 476, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?width=640&crop=smart&auto=webp&s=c68af3b220251d2e4cbf60adfb172e99dea797cc', 'width': 640}, {'height': 715, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?width=960&crop=smart&auto=webp&s=100e591e631862af4d571c5bbc4c26aba845f6d5', 'width': 960}, {'height': 804, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?width=1080&crop=smart&auto=webp&s=4c5bb632cdeaf07feca1184fe6a949377a2f89ee', 'width': 1080}], 'source': {'height': 937, 'url': 'https://preview.redd.it/3vomj4zubb9c1.png?auto=webp&s=93e9e4f6c4e28bc87227b6ea572cee1b906993a7', 'width': 1258}, 'variants': {}}]}
What is my bottleneck?
6
My speeds running GGUF quants are very bad. Without offloading any layers to the GPU, I get 3 tokens/s with Mistral 7B on a Ryzen 5 5600 (6 cores) and 32GB of 2133 MHz RAM. I have mixed RAM sticks and the slowest is 2133 MHz, so they all run at that speed. I'm considering upgrading to faster RAM, but I wonder if my CPU is the bottleneck. I can run 7B and 13B models on a 3060, but when I run Mixtral or 33B models, I barely get 3-4 tokens/s even when offloading half the model to the GPU.
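For anyone who wants to sanity-check: token generation on CPU is usually memory-bandwidth-bound, so a rough ceiling is bandwidth divided by the bytes read per token. A back-of-envelope sketch with assumed sizes (illustrative numbers, not measurements from this machine):

```python
def est_tokens_per_sec(bandwidth_gb_s: float, bytes_read_per_token_gb: float) -> float:
    """Upper bound: each generated token streams (roughly) the active weights from RAM once."""
    return bandwidth_gb_s / bytes_read_per_token_gb

# DDR4-2133 dual channel: 2133 MT/s * 8 bytes * 2 channels ~= 34 GB/s theoretical
bandwidth = 2 * 8 * 2.133

print(est_tokens_per_sec(bandwidth, 4.4))  # Mistral 7B Q4 (~4.4 GB) -> ~7.8 t/s ceiling
print(est_tokens_per_sec(bandwidth, 8.0))  # Mixtral Q4, ~13B active params/token -> ~4.3 t/s
```

On those numbers, 3 t/s for a 7B model suggests the slow mixed RAM (and possibly single-channel operation) may be the bigger culprit rather than the CPU.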
2023-12-29T22:43:49
https://www.reddit.com/r/LocalLLaMA/comments/18tzp9a/what_is_my_bottleneck/
Silvestron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tzp9a
false
null
t3_18tzp9a
/r/LocalLLaMA/comments/18tzp9a/what_is_my_bottleneck/
false
false
self
6
null
Creating a json file to fine tune a llama2 model
1
Hi everyone! I'm trying to learn how to fine-tune a Llama 2 model. The main issue I'm having is with creating the JSON dataset. Specifically, I don't understand how to choose the dataset format. For instance, some datasets use the format {input, output}, while others use {user, question, answer}, and some use the ### token. Most of the tutorials I have found told me what to use but not why. I really want to know the WHY. How do you determine which format is best to use? Is there a standard to look at, or a good tutorial? I'm not sure what to look at here/what to look up. Also, how do tokens play into all of this? Any help, things to study, or tutorials would be appreciated.
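For a concrete illustration, here is one line of an Alpaca-style train.jsonl (one common convention among many; what matters most is that the template you render these fields into at training time matches the one you prompt with at inference time):

```python
import json

# One Alpaca-style record. The {instruction, input, output} keys are a convention;
# a training script renders them into a prompt template such as
# "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
# and that rendered text is what gets tokenized -- the ### markers are just plain
# text the model learns to treat as section boundaries.
record = {
    "instruction": "Summarize the following paragraph.",
    "input": "Large language models are neural networks trained on large text corpora.",
    "output": "LLMs are big neural networks trained on lots of text.",
}
with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```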
2023-12-29T22:26:57
https://www.reddit.com/r/LocalLLaMA/comments/18tzb0n/creating_a_json_file_to_fine_tune_a_llama2_model/
Turnip_Cat_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tzb0n
false
null
t3_18tzb0n
/r/LocalLLaMA/comments/18tzb0n/creating_a_json_file_to_fine_tune_a_llama2_model/
false
false
self
1
null
Wanting to build a dedicated AI rig. Looking for guidance
19
I'm currently running an OpenHermes 7B model on an M1 with 16GB RAM. It's a dedicated system: I log in via ssh and interact with my model via a basic React.js frontend that I built. I want to 'level up' and get a decent machine. It doesn't need to be a Mac; I'm comfortable in *nix and have built machines from parts before. But given the huge performance improvements of the M1 over Intel Macs, I'm unsure what parity looks like in the Intel world, and so at what point I'd be guaranteed a performance improvement. Hoping someone can help. Where should I be spending my money (other than the GPU)? NVMe M.2 for the ChromaDB? i9, or does the CPU not matter that much? etc. Note: this will be a dedicated rig. No games, no GUI. PS: screenshot of my chatbot in the hopes of getting more visibility on this post 😊
2023-12-29T21:44:04
https://i.redd.it/p17jzovl0b9c1.jpeg
Data_Driven_Guy
i.redd.it
1970-01-01T00:00:00
0
{}
18tybf0
false
null
t3_18tybf0
/r/LocalLLaMA/comments/18tybf0/wanting_to_build_a_dedicated_ai_rig_looking_for/
false
false
https://a.thumbs.redditm…y394NUOO5YF4.jpg
19
{'enabled': True, 'images': [{'id': '75Uq8FHhh41rdCPBpFXMVsmMYx21pLiO_a4nHCwzAes', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?width=108&crop=smart&auto=webp&s=86dcab96a74834a98955a9efad16e75744dbd889', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?width=216&crop=smart&auto=webp&s=6ae7826987de666e5e26d624152c608069435b98', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?width=320&crop=smart&auto=webp&s=1b60520ddf84e03d0343ed0aa91bb41cf69ce35e', 'width': 320}, {'height': 455, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?width=640&crop=smart&auto=webp&s=3b24028bba05cdd5ea6cba03764732facce0a53c', 'width': 640}, {'height': 683, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?width=960&crop=smart&auto=webp&s=d0329d16cdd7891d5fabe503eeb1d4fa84a273d5', 'width': 960}, {'height': 768, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?width=1080&crop=smart&auto=webp&s=5d2a58c752b3be0410e3b29f7a9913ed33107cd6', 'width': 1080}], 'source': {'height': 792, 'url': 'https://preview.redd.it/p17jzovl0b9c1.jpeg?auto=webp&s=fc6c7d753ffe00ebd98ed7e575a8fedd432ff16a', 'width': 1113}, 'variants': {}}]}
Trying some RP, 2 quick questions
1
I'm completely new to hosting Llama locally. I installed LM Studio the other day and am trying RP. I created a system prompt, which seems to be working, except the roles are reversed. I created a role for the AI by saying "You are..." and describing its role, and I created a role for myself by saying "I am..." and describing mine. But when I start chatting, it's playing my role and thinks I'm it. How can I swap this around? Also, why do some of its responses end with <|im_end|> and others don't?
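Two notes that may help. First, in ChatML-style chat templates the roles come from the message structure, not from first/second person in the prompt, so describing yourself in first person can read to the model as the assistant's self-description. A minimal sketch (assuming an OpenAI-style message list, which LM Studio presets map onto the template):

```python
# The template renders messages roughly as:
#   <|im_start|>system ...system prompt... <|im_end|>
#   <|im_start|>user ...your message... <|im_end|>
#   <|im_start|>assistant
# Keep "You are ..." for the AI's persona and describe yourself in the third
# person ("The user is ...") so the model doesn't swap roles.
messages = [
    {"role": "system",
     "content": "You are Aria, a sarcastic space pirate. The user is Max, a rookie pilot."},
    {"role": "user", "content": "Aria, where are we headed?"},
]
```

Second, <|im_end|> is the template's end-of-turn marker; when it leaks into visible output, the frontend usually isn't configured to treat it as a stop string, which is a preset setting rather than a model problem.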
2023-12-29T20:56:55
https://www.reddit.com/r/LocalLLaMA/comments/18tx8ka/trying_some_rp_2_quick_questions/
CincyTriGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tx8ka
false
null
t3_18tx8ka
/r/LocalLLaMA/comments/18tx8ka/trying_some_rp_2_quick_questions/
false
false
self
1
null
KwaiAgents: Generalized Information-seeking Agent System with Large Language Models - Kuaishou Inc. 2023 - 2 Open-source models fine tuned for agent systems! Better than GPT-3.5 turbo as an agent!
33
Paper: [https://arxiv.org/abs/2312.04889v1](https://arxiv.org/abs/2312.04889v1) Github: [https://github.com/kwaikeg/kwaiagents](https://github.com/kwaikeg/kwaiagents) Models: [https://huggingface.co/collections/kwaikeg/kagentlms-6551e685b5ec9f9a077d42ef](https://huggingface.co/collections/kwaikeg/kagentlms-6551e685b5ec9f9a077d42ef) Abstract: >Driven by curiosity, humans have continually sought to explore and understand the world around them, leading to the invention of various tools to satiate this inquisitiveness. Despite not having the capacity to process and memorize vast amounts of information in their brains, humans excel in critical thinking, planning, reflection, and harnessing available tools to interact with and interpret the world, enabling them to find answers efficiently. The recent advancements in large language models (LLMs) suggest that machines might also possess the aforementioned human-like capabilities, allowing them to exhibit powerful abilities even with a constrained parameter count. In this paper, we introduce **KwaiAgents, a generalized information-seeking agent system based on LLMs.** Within KwaiAgents, we propose an agent system that employs LLMs as its cognitive core, which is capable of understanding a user's query, behavior guidelines, and referencing external documents. **The agent can also update and retrieve information from its internal memory, plan and execute actions using a time-aware search-browse toolkit, and ultimately provide a comprehensive response.** We further investigate the system's performance when powered by LLMs less advanced than GPT-4, and **introduce the Meta-Agent Tuning (MAT) framework, designed to ensure even an open-sourced 7B or 13B model performs well among many agent systems.** We exploit both benchmark and human evaluations to systematically validate these capabilities. Extensive experiments show the superiority of our agent system compared to other autonomous agents and highlight the enhanced generalized agent-abilities of our fine-tuned LLMs. https://preview.redd.it/3amfieoera9c1.jpg?width=1308&format=pjpg&auto=webp&s=c44fba1d8691206219dbe21f511c850a40897075 https://preview.redd.it/webamgoera9c1.jpg?width=727&format=pjpg&auto=webp&s=155e3f8c2ff9e473d21495ca769473e7d42fafd4 https://preview.redd.it/ymrigioera9c1.jpg?width=1361&format=pjpg&auto=webp&s=9f457029efec8b51a18cbeb395012bb5e497dd73
2023-12-29T20:56:00
https://www.reddit.com/r/LocalLLaMA/comments/18tx7tr/kwaiagents_generalized_informationseeking_agent/
Singularian2501
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tx7tr
false
null
t3_18tx7tr
/r/LocalLLaMA/comments/18tx7tr/kwaiagents_generalized_informationseeking_agent/
false
false
https://b.thumbs.redditm…SwsFF2cz6dIk.jpg
33
null
SillyTavern good for regular AI chat?
6
Hi! I'm seeing that SillyTavern is a very popular client in the OpenRouter rankings. I see it's advanced, and its main feature is character AI chat. My question is: is it also good for regular AI chat, or can't it be set up for regular chat the way LibreChat etc. can? I searched but couldn't find anything addressing this question.
2023-12-29T20:52:36
https://www.reddit.com/r/LocalLLaMA/comments/18tx54a/sillytavern_good_for_regular_ai_chat/
winteum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tx54a
false
null
t3_18tx54a
/r/LocalLLaMA/comments/18tx54a/sillytavern_good_for_regular_ai_chat/
false
false
self
6
null
Best local model or workflow for non-fiction co-writer?
1
I've searched far and wide, and almost every post about open source models that work reasonably well for writing is all about creative writing + roleplay. Those are great for their use cases, but what about those of us not writing fiction, fan-fic, screenplays, or ERP virtual companions? Which models (or combinations of models and toolchains) work well enough today, a few days before the start of 2024, to co-write or ghost-write lengthy nonfiction text, like political / philosophical / social commentary, technical analysis, or fragmentary / reflective stream-of-thought / open-journal-esque meta-commentary? Ideally one that would play well in an AI agent team "writer's room" or "editor's desk" scenario, with multiple rounds of drafting, critique, and revision? I know it's pushing the limits of what open models can do now, but isn't there anything that at least comes close without being stuck with GPT-4 rate limits / API costs? I'm running an i9-13900KF, 128GB DDR5 RAM @ 5200, RTX 4090. Am I stuck for now having to train my own LoRAs? Anyone else in this subreddit working on similar use cases?
2023-12-29T20:48:55
https://www.reddit.com/r/LocalLLaMA/comments/18tx22l/best_local_model_or_workflow_for_nonfiction/
jkende
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tx22l
false
null
t3_18tx22l
/r/LocalLLaMA/comments/18tx22l/best_local_model_or_workflow_for_nonfiction/
false
false
self
1
null
Mistral Medium is in the Arena now!
97
Mistral Medium, the most capable model from Mistral AI so far, is now available in the LMSYS Chatbot Arena. You can chat with the model freely and evaluate it against other models. https://chat.lmsys.org/
2023-12-29T20:33:38
https://www.reddit.com/r/LocalLLaMA/comments/18twpfu/mistral_medium_is_in_the_arena_now/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18twpfu
false
null
t3_18twpfu
/r/LocalLLaMA/comments/18twpfu/mistral_medium_is_in_the_arena_now/
false
false
self
97
null
How long have LLMs existed?
20
I am just wondering: how long have researchers been able to actually chat with an LLM the way we do now? I am a software developer and I was "into AI", but not an AI developer in particular, and LLMs have sort of caught me off guard. I was thinking we were at least a decade away from something like GPT-2.
2023-12-29T20:24:26
https://www.reddit.com/r/LocalLLaMA/comments/18twhr3/how_long_have_llms_existed/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18twhr3
false
null
t3_18twhr3
/r/LocalLLaMA/comments/18twhr3/how_long_have_llms_existed/
false
false
self
20
null
End to End Local Mistral-7b Finetune for AI Classics Assistant on Mac M2 Max
66
**Top Project Goal**: Finetune a small form factor model (e.g. Mistral-7b) to be a classics AI assistant.

**Prior Step**: [Run Mixtral 8x7b locally to generate a high quality training set for fine-tuning](https://www.reddit.com/r/LocalLLaMA/comments/18p731p/project_using_mixtral_8x7b_instruct_v01_q8_to/).

**Current Step**: Finetune Mistral 7b locally

**Approach**: Use llama.cpp with GPU layers enabled to train a LoRA adapter

**Model**: mistral-7b-instruct-v0.1.Q8_0.gguf

**Set-up**: Apple M2 Max 64GB shared RAM + LM Studio:

* Apple Metal (GPU), 8 threads
* 1025 high quality QA pairs
* 2 epochs trained over 11 hours 12 minutes

**Format**: Training data was generated in the previous step from *Life of Greece* and *Caesar & Christ* (Vol.'s 1 & 2 of Durant's Story of Civilization) split into 1,324 500-word chunks. train.jsonl format:

{"text": "Q: What were the five measures taken by Caius Gracchus to garner support from the peasantry, army, proletariat, and businessmen, and how did these actions contribute to his political power in Rome?\nA: Caius Gracchus implemented five measures to garner support from different groups in Rome:\n\n1. Peasantry: To gain the support of the peasantry, Caius renewed the agrarian legislation of his brother, extending its application to state-owned land in the provinces. He restored the land board and personally attended to its operations. This helped him secure the backing of the peasantry by addressing their concerns about land ownership and farming rights."}

**Llama.cpp Command**:

./finetune --model-base ./models/mistral-7b-instruct-v0.1.Q8_0.gguf --train-data train_data/durant_QA_fulltraining.json --threads 8 --sample-start "[INST]" --ctx 512 --epochs 2 --n_gpu_layers 1

Example output between base and FT model w/ GPT comparative evaluation:

[Accuracy, Usefulness, Completeness Comparison w/ GPT-4](https://preview.redd.it/sd0voot6ja9c1.png?width=2172&format=png&auto=webp&s=4eabb4740c43c998b70af2dff0dc90d969c981df)

Training Loss:

https://preview.redd.it/94m9lomkja9c1.png?width=1686&format=png&auto=webp&s=852796cf52c524e6016846fe7c1080b6d23d00cc

**Observations:**

* Not an amazing fine-tune, but this does demo the capability to do an end-to-end local fine-tune on consumer hardware, including a Mac.
* This was my first attempt, and I can imagine more epochs might make a difference.
* Llama.cpp has an argument for GPU layers, but it appears to offload some of the work from the CPU, NOT run natively on the Metal GPU. CPU% was around 300% for the run but GPU was 0%.
* I tried [mlx](https://github.com/ml-explore/mlx), and while that did use the GPU actively and completed very fast (1 epoch in about 2.5 hours), I can't get the LoRA adapter and base model to launch for inference. Will try again.
* I don't know how to merge/convert the llama.cpp LoRA adapter and base model back into a single gguf file.
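On the last point: at the time of writing, llama.cpp ships an export-lora tool that should handle the merge. A sketch following the same pattern as the finetune command above (flag names and the default adapter filename may differ by build, so check --help):

```
./export-lora \
  --model-base ./models/mistral-7b-instruct-v0.1.Q8_0.gguf \
  --model-out  ./models/mistral-7b-durant-merged.gguf \
  --lora       ./ggml-lora-LATEST-f32.gguf
```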
2023-12-29T20:16:13
https://www.reddit.com/r/LocalLLaMA/comments/18twas5/end_to_end_local_mistral7b_finetune_for_ai/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18twas5
false
null
t3_18twas5
/r/LocalLLaMA/comments/18twas5/end_to_end_local_mistral7b_finetune_for_ai/
false
false
https://b.thumbs.redditm…Dt_naXQphX1w.jpg
66
{'enabled': False, 'images': [{'id': '7qrB9N-vbXwBhavdLNnhaMTx7suOaJTMHOKvRado3Wk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?width=108&crop=smart&auto=webp&s=cf33d7b3369af9ddf0d922cdd2f7a3742fd03b7c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?width=216&crop=smart&auto=webp&s=9d90f2ca3a4ae6252afec18817607cb26c31b924', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?width=320&crop=smart&auto=webp&s=f7d9de7862cbe653910616e39d0a19d8f6f3dcc4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?width=640&crop=smart&auto=webp&s=c7d901d7e85e1689c2df8d1a1f05e6baa53c6c31', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?width=960&crop=smart&auto=webp&s=f216303fa1c85bff62075efa5762c7b8bb66a414', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?width=1080&crop=smart&auto=webp&s=5ff5ab7d4fa6a8f604ae5a6d5756494d48d2f34d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lRwFAZXuAY-RhrZhyvqHjKxPfjxje3oSNxYEIJdxOso.jpg?auto=webp&s=d9bb2462a9b1a0a31304675aa294d825ed6f5e48', 'width': 1200}, 'variants': {}}]}
Are there any text based models trained on free use/public domain training?
1
I'd like to use AI to help me flesh out a world for my game, but I don't want written descriptions generated by the AI to be at risk of copyright claims or anything like that. Are there any models trained only on free-use/public-domain materials that I can generally use without risk of copyright infringement? Stable Diffusion has something like this for images: [https://huggingface.co/Mitsua/mitsua-diffusion-one](https://huggingface.co/Mitsua/mitsua-diffusion-one) Is there anything like this for text? I tried searching on Google but I'm just flooded with articles on AI and copyright challenges. Thanks!
2023-12-29T19:42:54
https://www.reddit.com/r/LocalLLaMA/comments/18tviph/are_there_any_text_based_models_trained_on_free/
RedtailPWN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tviph
false
null
t3_18tviph
/r/LocalLLaMA/comments/18tviph/are_there_any_text_based_models_trained_on_free/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mdl-jc4RqCjlRCxV7SsVivxQwfbudymweMHU6mu6lSw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?width=108&crop=smart&auto=webp&s=816326409357cbb10451b4bb262d9a1a48279f62', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?width=216&crop=smart&auto=webp&s=4546fafb918d5824a4996532d64c8137bf08d0d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?width=320&crop=smart&auto=webp&s=f5d331134ef22f42f6daf6183052284d9637b495', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?width=640&crop=smart&auto=webp&s=736ae53c6c6e00a48051461647ff4f2bdba52c42', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?width=960&crop=smart&auto=webp&s=1117bfaead34459288bf1852d319ce14531f2c40', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?width=1080&crop=smart&auto=webp&s=b0c7ade29c1a805cda48cfc481ba4da1671a93e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tycOa5vHLLNOU8V-iP2WR4TAb4ni-CxzUwcIKPbC00g.jpg?auto=webp&s=4c89880a4920540a0e850d2c6ed71d49613d4e06', 'width': 1200}, 'variants': {}}]}
How to use education budget to become an AI/LLM (solution) dev?
3
Hey there, I am excited about all the AI stuff going on. So far I have read a lot and know a little about the terminology in this area. But from a practical point of view, I can ask the AI some questions and maybe follow (or rather copy) tutorials when it comes to installing an LLM somewhere. So basically, I am just tech-savvy with little real knowledge. In our company, I took over the project to implement AI for certain tasks, starting with a simple Slack bot that answers questions based on our knowledge base. In the end, I think we will use a third-party solution, or I will need a lot of support from our dev team. That is fine for now. But I want to learn this stuff myself: deploy and run LLMs or use an API, connect them to various tools, create solutions using AI models, and all that. I have an education budget. How can I use it wisely to become a little expert myself, understand everything deeply, and become able to develop different solutions on my own? Do you have any recommendations for me? Learning platforms or courses? I know there is a lot of free content out there, but I learn much better in a structured course. I already took a small course on DataCamp, but that was more of a very high-level "how to organize" course about implementing things like this in a company. I want to become a real solution developer myself. :) Could you help me out and give me some recommendations?

Thanks a lot!
2023-12-29T19:34:39
https://www.reddit.com/r/LocalLLaMA/comments/18tvbt1/how_to_use_education_budget_to_become_an_aillm/
TheBlackPersian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tvbt1
false
null
t3_18tvbt1
/r/LocalLLaMA/comments/18tvbt1/how_to_use_education_budget_to_become_an_aillm/
false
false
self
3
null
Is anyone working on a wrapper/interface/model that can compress context to maintain an understanding of chat history for longer?
1
[removed]
2023-12-29T19:12:05
https://www.reddit.com/r/LocalLLaMA/comments/18tut1k/is_anyone_working_on_a_wrapperinterfacemodel_that/
SlimeHernandez
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tut1k
false
null
t3_18tut1k
/r/LocalLLaMA/comments/18tut1k/is_anyone_working_on_a_wrapperinterfacemodel_that/
false
false
self
1
null
Deep Learning vs String Theory
1
[removed]
2023-12-29T19:09:23
https://i.redd.it/ax9kq8f09a9c1.jpeg
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
18tuqr7
false
null
t3_18tuqr7
/r/LocalLLaMA/comments/18tuqr7/deep_learning_vs_string_theory/
false
false
https://a.thumbs.redditm…rh5iQEhrJNK8.jpg
1
{'enabled': True, 'images': [{'id': 'FeiaSLBPyRKMm2ocHAXeIOZQmgKXhV9nCR66f4kcKZE', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?width=108&crop=smart&auto=webp&s=1f666e292f2d160764dda1bd304661f168a987fd', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?width=216&crop=smart&auto=webp&s=c452893bd3005b1b90012f9ffa9360bca7cb5ea7', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?width=320&crop=smart&auto=webp&s=efbe01cf21c8decedfa715ee0b8a20c658641cc7', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?width=640&crop=smart&auto=webp&s=0898a059727488779f94c37c039cd99cf0993d9c', 'width': 640}, {'height': 555, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?width=960&crop=smart&auto=webp&s=58a858e38894dcc97d1eacbb24e13b23d62971fa', 'width': 960}, {'height': 625, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?width=1080&crop=smart&auto=webp&s=1745ff4d2c0fbb2ecc628c99306ad18ca47710f9', 'width': 1080}], 'source': {'height': 941, 'url': 'https://preview.redd.it/ax9kq8f09a9c1.jpeg?auto=webp&s=d3a446eddcfa52fa60e3e3862466e2887978ce5f', 'width': 1626}, 'variants': {}}]}
Mistral/Mixtral/LLaMA - 2bit QuIP ?
3
Hi, does anyone know if it is possible to convert and use Mistral, Mixtral, or Llama models with 2-bit QuIP quantization yet? And how does 2-bit QuIP compare to the latest llama.cpp AWQ implementation in terms of hardware requirements and quality/perplexity?
2023-12-29T18:34:33
https://www.reddit.com/r/LocalLLaMA/comments/18ttxkq/mistralmixtralllama_2bit_quip/
Glat0s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ttxkq
false
null
t3_18ttxkq
/r/LocalLLaMA/comments/18ttxkq/mistralmixtralllama_2bit_quip/
false
false
self
3
null
Simple local LLM application in python
6
I coded up this single-source-file Python example of a credit card transaction categorizer. Llama or Mistral should both work fine. There should be a lot of use cases for this type of thing: gather a tiny amount of input, generate a prompt, sanitize the output. [https://github.com/blakeb211/billcat-local-llm](https://github.com/blakeb211/billcat-local-llm)
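The pattern, roughly (a sketch of the idea using llama-cpp-python, not the repo's actual code; the model path and category list are placeholders):

```python
from llama_cpp import Llama

CATEGORIES = ["groceries", "dining", "transport", "utilities", "other"]
llm = Llama("mistral-7b-instruct-v0.1.Q4_K_M.gguf", n_ctx=2048, verbose=False)

def categorize(description: str) -> str:
    # Gather tiny input -> constrained prompt -> sanitize down to a known label
    prompt = (f"Classify this credit card transaction into one of "
              f"{CATEGORIES}: '{description}'. Answer with the category name only.")
    raw = llm(prompt, max_tokens=8)["choices"][0]["text"].strip().lower()
    return next((c for c in CATEGORIES if c in raw), "other")

print(categorize("TRADER JOES #552 12/01"))
```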
2023-12-29T18:30:50
https://www.reddit.com/r/LocalLLaMA/comments/18ttuju/simple_local_llm_application_in_python/
brotherblak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ttuju
false
null
t3_18ttuju
/r/LocalLLaMA/comments/18ttuju/simple_local_llm_application_in_python/
false
false
self
6
{'enabled': False, 'images': [{'id': '7epDZyZwzIampilz-9y235Tfhu5-DSNWDrLQ3ETCCWw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?width=108&crop=smart&auto=webp&s=0574bc877bc88846ddd3b4100c88328926191f05', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?width=216&crop=smart&auto=webp&s=009daa19885bb5694a991c6e6b6bf9f2a5f560f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?width=320&crop=smart&auto=webp&s=3b7dde4b2586c018400650fc5278b046331d143a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?width=640&crop=smart&auto=webp&s=9800f7b20c968456d9256623615e9742dff1ea79', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?width=960&crop=smart&auto=webp&s=1fe01719267feff83a44efaa05a7d5df465df443', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?width=1080&crop=smart&auto=webp&s=d412909a662733b8ebe07cd76de1c9851ca5e99c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hNIHfMBJs35CZYAoUAMM3kTZlpMpEQPPM_7aLbRc5Lo.jpg?auto=webp&s=c9e5b4ac02cad86bd38cb88dd37fbcad7922cfda', 'width': 1200}, 'variants': {}}]}
Best way to interact with local LLMs from distributed app?
2
I'm working on a project (https://github.com/jasonjmcghee/rem) and want to interact with local LLMs. If I choose a specific one, say Ollama, I could distribute the binary with the app and talk to it via stdin/stdout. Another option would be to have a specific template I use and allow a user-specified binary or network interaction on a user-specified port. What's best practice here? Also curious about opinions/options for working with embedding spaces. Postgres seems way too heavy/high friction, but I'm concerned about keeping a large index in RAM.
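For what it's worth, Ollama already exposes a local HTTP API, so one low-friction option is to let users point the app at a configurable base URL rather than bundling a binary. A minimal sketch (endpoint shape per Ollama's docs at the time of writing; the model name is an assumption):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # user-configurable in the app

def generate(prompt: str, model: str = "mistral") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(generate("Summarize what I worked on today."))
```

The same URL-based indirection works for any OpenAI-compatible local server, which keeps the app decoupled from one specific runtime.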
2023-12-29T18:23:18
https://www.reddit.com/r/LocalLLaMA/comments/18tto7d/best_way_to_interact_with_local_llms_from/
-json-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tto7d
false
null
t3_18tto7d
/r/LocalLLaMA/comments/18tto7d/best_way_to_interact_with_local_llms_from/
false
false
self
2
{'enabled': False, 'images': [{'id': 'e52mWcw9WWxeQn7k35MVCZue9oqtg0kHmnklArzA8Dc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?width=108&crop=smart&auto=webp&s=b44b379693c48e33c565e377d2bbd75007b158ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?width=216&crop=smart&auto=webp&s=d357731f31692eef203e8b7ef993b3e6027cf6ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?width=320&crop=smart&auto=webp&s=d0f27fc37cc0de159ad18c534ce6da06b731ee55', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?width=640&crop=smart&auto=webp&s=ac9ac2dbb2f6339398074a2cff4f1971229ef23b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?width=960&crop=smart&auto=webp&s=af6bd73ca55aed3dfd1b86545f8946ddeb3d625f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?width=1080&crop=smart&auto=webp&s=b6fee2b14f4068ac82f36d87d0cfaf442bf7df23', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5Q6GPo1wXY5A45LHit3voF4BpduWPisfx2Uu2wRo6oc.jpg?auto=webp&s=8f180f075d9b36a0bc3d67c72dde79e6b6ef8b2a', 'width': 1200}, 'variants': {}}]}
Mixtral on Colab/3060 at 2 Tok/sec with new research
73
Another win for the local LLM enjoyers. https://x.com/sbeastwindy/status/1740788225038578083?s=20 With a new method of offloading experts from VRAM plus speculative expert loading, they get over a 3x improvement in inference speed on Colab and low-end GPUs.
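The core trick, conceptually: keep only the most recently used experts in VRAM, pull the rest from system RAM on demand, and speculatively prefetch the experts likely to be needed next. A toy sketch of the caching half (illustration only, not the authors' implementation):

```python
from collections import OrderedDict

class ExpertCache:
    """LRU cache of expert weights resident in VRAM."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()  # expert_id -> weights on the GPU

    def get(self, expert_id, load_to_gpu):
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)  # mark as recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[expert_id] = load_to_gpu(expert_id)  # copy from CPU RAM
        return self.cache[expert_id]
```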
2023-12-29T17:40:09
https://www.reddit.com/r/LocalLLaMA/comments/18tsnty/mixtral_on_colab3060_at_2_toksec_with_new_research/
Eastwindy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tsnty
false
null
t3_18tsnty
/r/LocalLLaMA/comments/18tsnty/mixtral_on_colab3060_at_2_toksec_with_new_research/
false
false
self
73
{'enabled': False, 'images': [{'id': '6Qb-19UTMjckazNisyu46ujyterOtCf0sSMA1Fl2Vsc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XHBGk0IXkKxb8cGm8ryiLlzFk6rrfN8m_rUAq0n4cpw.jpg?width=108&crop=smart&auto=webp&s=d9b17298fbd3a908601fb2b9a8640c0ea1e19871', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/XHBGk0IXkKxb8cGm8ryiLlzFk6rrfN8m_rUAq0n4cpw.jpg?auto=webp&s=437f98cd1583fb4ee2cf7b387abd7b2fda506651', 'width': 200}, 'variants': {}}]}
Will Llama 3 be equivalent in terms of hardware requirements to Llama 2?
1
I had this question after Mistral 7B and Qwen 72B were released. Both seem noticeably more demanding to run on a local machine. What causes this? Both models are more capable than their Meta counterparts, yet Llama 1 and Llama 2 don't differ from each other in video memory or RAM consumption.
2023-12-29T17:39:08
https://www.reddit.com/r/LocalLLaMA/comments/18tsmza/will_llama_3_be_equivalent_in_terms_of_hardware/
Secret_Joke_2262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tsmza
false
null
t3_18tsmza
/r/LocalLLaMA/comments/18tsmza/will_llama_3_be_equivalent_in_terms_of_hardware/
false
false
self
1
null
Mixtral 2 Tok/s on free Colab/RTX 3060!
1
[deleted]
2023-12-29T17:36:27
[deleted]
1970-01-01T00:00:00
0
{}
18tskpv
false
{'oembed': {'author_name': 'Srinivas Billa', 'author_url': 'https://twitter.com/sbeastwindy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Mixtral 2 Tok/s on free Colab/RTX 3060! Huge for local llm enthusiasts.<a href="https://t.co/ENrCEDTggF">https://t.co/ENrCEDTggF</a></p>&mdash; Srinivas Billa (@sbeastwindy) <a href="https://twitter.com/sbeastwindy/status/1740788225038578083?ref_src=twsrc%5Etfw">December 29, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/sbeastwindy/status/1740788225038578083', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_18tskpv
/r/LocalLLaMA/comments/18tskpv/mixtral_2_toks_on_free_colabrtx_3060/
false
false
default
1
null
What Local LLM is this fake Reddit user likely using?
1
Seeing these bots pop up everywhere, farming karma to be used for scams or ads later. [This one](https://old.reddit.com/user/Fit_Weird29) is doing well; it has survived so far because of some top comments in AskReddit threads. I'm curious which LLM you think it's using, if you had to guess? GPT-4 and 3.5 are notably better at sounding human, and bot farms are definitely not paying for API access, so it's almost certainly a model that the folks in this sub are playing with. Curious what your best guesses are.
2023-12-29T17:21:56
https://www.reddit.com/r/LocalLLaMA/comments/18ts8k0/what_local_llm_is_this_fake_reddit_user_likely/
Due-Ad-7308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ts8k0
false
null
t3_18ts8k0
/r/LocalLLaMA/comments/18ts8k0/what_local_llm_is_this_fake_reddit_user_likely/
false
false
self
1
null
llama-cpp-python agent framework for chat, structured output and function calling.
12
# llama-cpp-agent Framework

## Introduction

The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It provides a simple yet robust interface using llama-cpp-python, allowing users to chat with LLM models, execute structured function calls and get structured output.

## Key Features

- **Simple Chat Interface**: Engage in seamless conversations with LLMs.
- **Structured Output**: Get structured output from LLMs.
- **Function Calling**: Execute structured outputs from LLMs, enhancing the interaction capabilities.
- **Flexibility**: Suited for various applications from casual chatting to specific function executions.

## Installation

To get started with the llama-cpp-agent LLM framework, follow these steps:

1. Ensure you have Python installed on your system.
2. Clone the repository from [GitHub link](https://github.com/Maximilian-Winter/llama-cpp-agent).
3. Install the necessary dependencies as listed in the `requirements.txt` file.

## Usage Examples

### Simple Chat Example

This example demonstrates how to initiate a chat with an LLM model.

```python
import json
from llama_cpp import Llama
from llama_cpp_agent.llm_agent import LlamaCppAgent
from llama_cpp_agent.messages_formatter import MessagesFormatterType

main_model = Llama(
    "../gguf-models/dpopenhermes-7b-v2.Q8_0.gguf",
    n_gpu_layers=35,
    f16_kv=True,
    use_mlock=False,
    embedding=False,
    n_threads=8,
    n_batch=1024,
    n_ctx=8192,
    last_n_tokens_size=1024,
    verbose=False,
    seed=42,
)
wrapped_model = LlamaCppAgent(
    main_model,
    debug_output=True,
    system_prompt="You are an advanced AI assistant.",
    predefined_messages_formatter_type=MessagesFormatterType.CHATML,
)
wrapped_model.get_chat_response('Write a long poem about the USA.', temperature=0.7)
```

### Structured Output

This example shows how to get structured JSON output.

```python
from enum import Enum

from llama_cpp import Llama, LlamaGrammar
from pydantic import BaseModel, Field

from llama_cpp_agent.llm_agent import LlamaCppAgent
from llama_cpp_agent.gbnf_grammar_generator.gbnf_grammar_from_pydantic_models import generate_gbnf_grammar_and_documentation

main_model = Llama(
    "../gguf-models/dpopenhermes-7b-v2.Q8_0.gguf",
    n_gpu_layers=35,
    f16_kv=True,
    use_mlock=False,
    embedding=False,
    n_threads=8,
    n_batch=1024,
    n_ctx=8192,
    last_n_tokens_size=1024,
    verbose=False,
    seed=-1,
)

text = """The Feynman Lectures on Physics is a physics textbook based on some lectures by Richard Feynman, a Nobel laureate who has sometimes been called "The Great Explainer". The lectures were presented before undergraduate students at the California Institute of Technology (Caltech), during 1961–1963. The book's co-authors are Feynman, Robert B. Leighton, and Matthew Sands."""

class Category(Enum):
    Fiction = "Fiction"
    NonFiction = "Non-Fiction"

class Book(BaseModel):
    """
    Represents an entry about a book.
    """
    title: str = Field(..., description="Title of the book.")
    author: str = Field(..., description="Author of the book.")
    published_year: int = Field(..., description="Publishing year of the book.")
    keywords: list[str] = Field(..., description="A list of keywords.")
    category: Category = Field(..., description="Category of the book.")
    summary: str = Field(..., description="Summary of the book.")

gbnf_grammar, documentation = generate_gbnf_grammar_and_documentation([Book])
grammar = LlamaGrammar.from_string(gbnf_grammar, verbose=False)

wrapped_model = LlamaCppAgent(
    main_model,
    debug_output=True,
    system_prompt="You are an advanced AI, tasked to create JSON database entries for books.\n\n\n" + documentation,
)
wrapped_model.get_chat_response(text, temperature=0.15, grammar=grammar)
```

### Function Calling Example

This example shows how to do function calling.

```python
import json

from llama_cpp import Llama, LlamaGrammar

from llama_cpp_agent.llm_agent import LlamaCppAgent
from llama_cpp_agent.gbnf_grammar_generator.gbnf_grammar_from_pydantic_models import generate_gbnf_grammar_and_documentation
from example_function_call_models import SendMessageToUser, GetFileList, ReadTextFile, WriteTextFileSection
from llama_cpp_agent.messages_formatter import MessagesFormatterType

gbnf_grammar, documentation = generate_gbnf_grammar_and_documentation(
    [SendMessageToUser, GetFileList, ReadTextFile, WriteTextFileSection],
    "function", "function_params", "Function", "Function Parameter")
grammar = LlamaGrammar.from_string(gbnf_grammar, verbose=False)

main_model = Llama(
    "../gguf-models/dpopenhermes-7b-v2.Q8_0.gguf",
    n_gpu_layers=35,
    f16_kv=True,
    use_mlock=False,
    embedding=False,
    n_threads=8,
    n_batch=1024,
    n_ctx=8192,
    last_n_tokens_size=1024,
    verbose=False,
    seed=42,
)
wrapped_model = LlamaCppAgent(
    main_model,
    debug_output=True,
    system_prompt="You are an advanced AI, tasked to assist the user by calling functions in JSON format.\n\n\n" + documentation,
    predefined_messages_formatter_type=MessagesFormatterType.CHATML,
)

response = wrapped_model.get_chat_response(
    'Write a long poem about the USA in the "HelloUSA.txt" file.',
    temperature=0.15, grammar=grammar)

function_call = json.loads(response)
if function_call["function"] == "write-text-file-section":
    call_parameters = function_call["function_params"]
    call = WriteTextFileSection(**call_parameters)
    call.run()
```

## Additional Information

- **Dependencies**: pydantic for grammar-based generation and, of course, llama-cpp-python.
2023-12-29T17:06:34
https://www.reddit.com/r/LocalLLaMA/comments/18trw4d/llamacpppython_agent_framework_for_chat/
FlowerPotTeaTime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18trw4d
false
null
t3_18trw4d
/r/LocalLLaMA/comments/18trw4d/llamacpppython_agent_framework_for_chat/
false
false
self
12
{'enabled': False, 'images': [{'id': 'rUne5dB2Mw99CBJgyBzTu9FA9h7Y0XEFCEPH2SDigS4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?width=108&crop=smart&auto=webp&s=2bbd33749e31e0af2369b4d62179436675923fb3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?width=216&crop=smart&auto=webp&s=cf25994604305fa18abf99a5544ed5218a4d5268', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?width=320&crop=smart&auto=webp&s=2a9c499d6ae74b63d5c549b8ab456accb6ecc88c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?width=640&crop=smart&auto=webp&s=611300afc1ac37250b71b80bcb6b81170482bb1a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?width=960&crop=smart&auto=webp&s=6751e80099d5991b6f2912994dc871a080721add', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?width=1080&crop=smart&auto=webp&s=c57132a92e12f6ee661c76f4248660b8ef1c341f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4CO0dUldb14Dk6ujHq-DUfZa2eU3B480oXDWyRlFSe4.jpg?auto=webp&s=c55a35160b1838974214f8db67120622df8ae80f', 'width': 1200}, 'variants': {}}]}
Offline voice assistant using Ollama API, Mistral 7B, and Whisper
30
An offline voice assistant using the Mistral 7B LLM and Whisper speech recognition models, built around the Ollama API. It supports Apple Silicon Macs and looks to be very fast. Based on the config, it could also support other local models. See: [https://x.com/apeatling/status/1740405466214310077](https://x.com/apeatling/status/1740405466214310077) Quick demo: https://reddit.com/link/18trhzi/video/jbzkoijxj99c1/player
2023-12-29T16:49:54
https://www.reddit.com/r/LocalLLaMA/comments/18trhzi/offline_voice_assistant_using_ollama_api_mistral/
PleasantYoung513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18trhzi
false
null
t3_18trhzi
/r/LocalLLaMA/comments/18trhzi/offline_voice_assistant_using_ollama_api_mistral/
false
false
https://b.thumbs.redditm…HaA3fFu1yfWY.jpg
30
{'enabled': False, 'images': [{'id': 'BnniW5vEi6ApOMyzg5LzR1z7h30e-3OhpwcYHKt3HlM', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?width=108&crop=smart&auto=webp&s=dd1837b8ec49c544a831614b5de87a5a0868047c', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?width=216&crop=smart&auto=webp&s=a91f6f158d6ba8917a130f2494040a970cea7757', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?width=320&crop=smart&auto=webp&s=53f2f1fc881591cdbb4f090f77fbc7bb06d14340', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?width=640&crop=smart&auto=webp&s=ef88a046e5afa86d2b17587b503434797eb3b5c5', 'width': 640}, {'height': 623, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?width=960&crop=smart&auto=webp&s=b0e7096b81cd18366d9cafcaef2584bfc0a07c95', 'width': 960}, {'height': 701, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?width=1080&crop=smart&auto=webp&s=67b9b057503b57eb885c1fbdd172e044cf46a63c', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z7cZJ6UtAVqN26Fp2kxbfFJv0YX3SV0q-hNEyFhg_lo.jpg?auto=webp&s=5d31ab7e8e56e926648e7f514db10cc95a1e75dc', 'width': 1108}, 'variants': {}}]}
Fullmetal: Self-hosted alternative to ChatGPT API
2
Happy Holidays r/LocalLLaMA!

I wanted to share my side-project [Fullmetal](https://www.fullmetal.ai/), which makes self-hosting open-source LLMs lightning fast. Self-hosting is 100% free, and the prompts & responses are bi-directionally encrypted.

I hope this project will be helpful to some people here. I think it can help those who:

* need a ChatGPT-like API but don't trust OpenAI
* need a customized / less restrictive LLM than ChatGPT
* need a scalable, load-balanced solution for an open-source LLM
* have spare GPU power and want to provide LLM capacity for others to use (and earn coins)

All that being said, I could be completely wrong, and I would really appreciate your feedback! Thank you.

[Dashboard for hosting LLM](https://preview.redd.it/xqafknckg99c1.png?width=1758&format=png&auto=webp&s=5d996f2264ea74cfaf14b0890e85497e1fbf0d85)

[Built-in Load-Balancing](https://preview.redd.it/x87j2svhg99c1.png?width=2134&format=png&auto=webp&s=24abc4fbb122b924ff9f44f566a0d22ef872e72f)
2023-12-29T16:14:59
https://www.reddit.com/r/LocalLLaMA/comments/18tqph5/fullmetal_selfhosted_alternative_to_chatgpt_api/
m0dE
self.LocalLLaMA
2023-12-29T16:30:23
0
{}
18tqph5
false
null
t3_18tqph5
/r/LocalLLaMA/comments/18tqph5/fullmetal_selfhosted_alternative_to_chatgpt_api/
false
false
https://b.thumbs.redditm…ZDQmzEeAFu6c.jpg
2
{'enabled': False, 'images': [{'id': 'ibz-WbgWLTq9fNGmdvXvmXTzV2aIzevVHMd_bWL6pI8', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=108&crop=smart&auto=webp&s=9a0618960627365724dd3323a310474b4ce0e12f', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=216&crop=smart&auto=webp&s=0e131b3ae131dd4baa83824353b7f547eac16774', 'width': 216}, {'height': 153, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=320&crop=smart&auto=webp&s=5ec211b5b4364ac2dae90b68379eee660a932f6b', 'width': 320}, {'height': 306, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=640&crop=smart&auto=webp&s=5ffc3f65045fc448c78bfe40f0a630b4b33c196c', 'width': 640}, {'height': 459, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=960&crop=smart&auto=webp&s=e4e14096c68d6a897b653240c316ccfb36e18234', 'width': 960}, {'height': 517, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=1080&crop=smart&auto=webp&s=de03cf462d16b5f43100c9a276320e4e6ab7b3f9', 'width': 1080}], 'source': {'height': 911, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?auto=webp&s=76e1f39bca069fbf38e2701a297021decfe8828b', 'width': 1902}, 'variants': {}}]}
Extremely slow inference speed with dual Quadro P5000s
2
I just downloaded LM Studio a few days ago and I am having some disappointing results given what I think my hardware should be capable of.

I have a Lenovo P520 workstation running Windows 11 with the following specs:

* Xeon W-2133 (six cores)
* 2x Nvidia Quadro P5000 GPUs (16 GB each)
* 256 GB RAM (2133 MHz, DDR4)

Given the two GPUs, I have a combined VRAM capacity of 32 GB.

I can offload all layers of a quantized 4-bit Mixtral model into VRAM, but strangely I am only getting about 3.7 tokens/sec.

In Task Manager, I can confirm the model is being loaded into both GPUs, and both GPUs are accepting the computational load. The CPU works at about 60%, one of the GPUs runs at around 90-100%, and the other one at around 80%.

I have tried this with mlock on and off; it seems not to make any difference. I have tried different numbers of CPU threads, with minimal impact on inference speed.

I also got some very strange results yesterday, when I loaded a 5-bit Mixtral model (doesn't fit entirely into VRAM, but can offload 29 layers) and got about 2 tokens per second. Here is where it gets weird: I tried it again without offloading any layers, and it ran FASTER (4 tk/sec) on CPU only.

I am hoping someone can help me figure this out. I am thinking it must be a configuration issue, but I don't know what else to adjust. Any advice or suggestions will be greatly appreciated.

Here is the relevant portion of the preset.json:

    "n_ctx": 4096,
    "n_batch": 512,
    "rope_freq_base": 0,
    "rope_freq_scale": 0,
    "n_gpu_layers": -1,
    "use_mlock": true,
    "main_gpu": 0,
    "tensor_split": [50, 50],
    "seed": -1,
    "f16_kv": true,
    "use_mmap": true,
    "num_experts_used": 2
2023-12-29T16:01:50
https://www.reddit.com/r/LocalLLaMA/comments/18tqeuz/extremely_slow_inference_speed_with_dual_quadro/
BadNature
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tqeuz
false
null
t3_18tqeuz
/r/LocalLLaMA/comments/18tqeuz/extremely_slow_inference_speed_with_dual_quadro/
false
false
self
2
null
Is training limited by memory bandwidth? 100% GPU util
8
Been reading about how LLMs are highly dependent on GPU memory bandwidth, especially during training. But when I do a 4-bit LoRA finetune on a 7B model using an RTX 3090:

- GPU util is 94-100%
- mem bandwidth util is 54%
- mem usage is 9.5 GB out of 24 GB

This looks to me like my training is limited by the 3090's fp16 cores, not the VRAM. Based on my limited knowledge, increasing the batch size will not make it run faster. Am I doing my finetuning wrongly?
2023-12-29T15:34:49
https://www.reddit.com/r/LocalLLaMA/comments/18tps7s/is_training_limited_by_memory_bandwidth_100_gpu/
gptzerozero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tps7s
false
null
t3_18tps7s
/r/LocalLLaMA/comments/18tps7s/is_training_limited_by_memory_bandwidth_100_gpu/
false
false
self
8
null
A prompt I would like you to try with your open LLMs
1
Brabus is the name of my AI. I'm not going to say anything more about the context because I don't want to influence you; I'm just pleasantly impressed. I tried it with Llama2-Chat 70B. Please run your own tests; feedback would be great.

## Instructions for Brabus:

Contextual Understanding: Before responding to a query, analyze the context and underlying assumptions. Consider historical, cultural, and societal factors that might influence the answer.

Ethical Reasoning: In responses, prioritize ethical considerations. This includes acknowledging the potential impact of the information provided and avoiding harm.

Accuracy and Factuality: Ensure that responses are factually correct. Where applicable, cite credible sources or acknowledge the limits of the model's knowledge.

Nuanced Answers: Avoid binary or oversimplified answers where complexity exists. Acknowledge different perspectives and provide balanced views.

Clarity and Conciseness: Communicate complex ideas clearly and without unnecessary jargon. Be concise but comprehensive in explanations.

Sensitivity to Users' Expertise: Tailor the complexity of responses to the assumed level of the user's expertise. For users familiar with the subject, use appropriate technical language.

Adaptability: The model should adapt its response style based on the user's preference, indicated by specific cues (e.g., "vv" for very verbose, "vc" for very concise).

Continuous Learning: The model should incorporate feedback from interactions to improve its accuracy and effectiveness in responses.

## Use Cases:

* Providing detailed explanations in fields like science, technology, and humanities.
* Assisting in ethical decision-making processes.
* Tailoring educational content to different learning levels.
2023-12-29T15:32:07
https://www.reddit.com/r/LocalLLaMA/comments/18tpq3u/a_prompt_i_would_like_you_to_try_this_prompt_with/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tpq3u
false
null
t3_18tpq3u
/r/LocalLLaMA/comments/18tpq3u/a_prompt_i_would_like_you_to_try_this_prompt_with/
false
false
self
1
null
Can't decide on infrastructure for my RAG
5
Hi everyone, I'm stuck deciding on the infrastructure for my production RAG chat. I am trying to decide between a FastAPI server in Python or building everything in the Next.js project with TypeScript. My thoughts so far:

1. There is no ready-made hybrid search for Supabase in Python (but there is in JS).
2. The more advanced RAG features seem to be released in the Python version of LangChain first, like Cohere reranking, HyDE, query expansion, etc.
3. I already have the setup for the RAG in LangServe, but I'm struggling to work out how to keep a good chat history integrated between the Next.js frontend and the LangServe server.
4. I'm leaning towards Supabase pgvector as my vector storage, since I feel it's more cost-effective and safer in terms of control (the RAG chat will let users upload files, and mixing a lot of different businesses on the same Pinecone index doesn't seem like the best approach; maybe I'm wrong?). See the sketch below.

Would appreciate some feedback so I can make the decision and move forward.
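On point 4: pgvector does make the tenant-isolation part straightforward, since similarity search is just SQL you can filter. A sketch (table, column, and connection details here are made up for illustration):

```python
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/ragdb")
query_embedding = [0.12, -0.03, 0.88]  # truncated; real embeddings are e.g. 384- or 1536-dim

with conn.cursor() as cur:
    cur.execute(
        "SELECT content FROM documents "
        "WHERE business_id = %s "                     # per-tenant isolation in one WHERE clause
        "ORDER BY embedding <=> %s::vector LIMIT 5",  # <=> is pgvector's cosine distance
        ("tenant-42", str(query_embedding)),
    )
    top_chunks = [row[0] for row in cur.fetchall()]
```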
2023-12-29T15:31:32
https://www.reddit.com/r/LocalLLaMA/comments/18tppmv/cant_decide_on_infrastructure_for_my_rag/
DevotedToSuccess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tppmv
false
null
t3_18tppmv
/r/LocalLLaMA/comments/18tppmv/cant_decide_on_infrastructure_for_my_rag/
false
false
self
5
null
Have any successful examples of fine tunes that enforce a regimented writing format and style?
3
Everyone tells me that fine tuning is great for style, and that's exactly what I need. But I'm curious if anyone has been successful at the use case I have in mind: Say an office produces research documents in a regimented format and style. The inexperienced writers take a decent amount of time to learn all the nuances of this style, and as a result, the more experienced folks spend an inordinate amount of time providing edits and feedback. Stop using passive voice here. This heading is way too long. You're editorializing. If we could have an LLM take a first pass, even if not perfect, every bit of editing we save is a huge multiplier. Before I sink a large amount of time into fine tuning on our data, is anyone aware of a fine tune like this available on huggingface that actually works? I'd like to learn from others before striking out on my own.
2023-12-29T15:28:29
https://www.reddit.com/r/LocalLLaMA/comments/18tpn2j/have_any_successful_examples_of_fine_tunes_that/
IronColumn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tpn2j
false
null
t3_18tpn2j
/r/LocalLLaMA/comments/18tpn2j/have_any_successful_examples_of_fine_tunes_that/
false
false
self
3
null
What's the best model for roleplay that can fit into 24GB of VRAM without being too slow?
18
I've been trying out a bunch of models, and so far the one I'm stuck on is Sydney-20B.Q8_0.gguf, though for oobabooga I've found a GPTQ version and an EXL2 version that are much faster than the gguf versions. I'm trying to find a model that has good enough context, but when I find one that supports 32k tokens, it's unbearably slow unless I reduce it down to below 8192 (probably 6000 tokens). I've heard good things about some very large models, but even at 4-bit and below they seem hard to get to output anything without really long waits between responses. For example, there was a rogue-rose-103b-v0.2.Q3_K_M.gguf, but apparently 103b is too much for my computer even at Q3 levels. I'm working on getting more VRAM though.
2023-12-29T15:02:44
https://www.reddit.com/r/LocalLLaMA/comments/18tp2jy/whats_the_best_model_for_roleplay_that_can_fit/
Cool-Hornet4434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tp2jy
false
null
t3_18tp2jy
/r/LocalLLaMA/comments/18tp2jy/whats_the_best_model_for_roleplay_that_can_fit/
false
false
self
18
null
I created an interactive resume project with Ollama :D
28
First time posting here, so I hope this is useful to someone! Hey! I am Sergio Hidalgo, a Data Scientist and engineer who loves FOSS and tech in general. I started working on hosting my own LLM a couple of months back and realized the huge potential this had to answer questions about your professional experience, while showing that you can go a step further to demonstrate your knowledge on the topic of AI. # Overview This project aims to create an interactive resume using Streamlit, a Python library for building web applications, and Ollama, a tool for serving language models for conversational AI. The interactive resume allows users to engage in a conversation with an AI assistant to learn more about a person's qualifications, experience, and other relevant information typically found in a resume. # Cool perks * Easy integration with your Ollama instance * Fallback Ollama instance pointing, so you can shut down your GPU and still keep working on CPU * Use any fine-tuned model you like. As long as you correctly integrate it with Ollama, you are good to go! * Customizable UI!! You can change anything to best suit your needs while keeping a clean UI. TLDR: Simple and straightforward project to showcase your knowledge in a more interesting way to interviewers!! Also I am open to work ;P [Demonstration of my interactive resume tool answering questions about me :D](https://preview.redd.it/lwg2zmmmy89c1.png?width=1224&format=png&auto=webp&s=41cc56dff45261c3647938d05c6bf573b9bc1901)
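As a rough illustration of the fallback perk, here is a minimal sketch of querying an Ollama instance over its REST API and falling back to a second host when the first is down; the host addresses and model name below are placeholders, not the project's actual code:

```python
import requests

# Primary GPU box first, then a CPU-only fallback instance.
HOSTS = ["http://gpu-box:11434", "http://localhost:11434"]

def chat(messages: list[dict], model: str = "mistral") -> str:
    """Try each Ollama host in order; return the first successful reply."""
    for host in HOSTS:
        try:
            r = requests.post(
                f"{host}/api/chat",
                json={"model": model, "messages": messages, "stream": False},
                timeout=120,
            )
            r.raise_for_status()
            return r.json()["message"]["content"]
        except requests.RequestException:
            continue  # host unreachable -> try the next one
    raise RuntimeError("No Ollama instance reachable")

print(chat([{"role": "user", "content": "Summarize my work experience."}]))
```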
2023-12-29T14:52:09
https://www.reddit.com/r/LocalLLaMA/comments/18tou71/i_created_an_interactive_resume_project_with/
sirgius10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tou71
false
null
t3_18tou71
/r/LocalLLaMA/comments/18tou71/i_created_an_interactive_resume_project_with/
false
false
https://b.thumbs.redditm…5ug7GxWsmvYU.jpg
28
{'enabled': False, 'images': [{'id': 'A4gAW-nw-Eb9FeMx6GVzaGc6-7vWX3Aj0-eSq67o6uE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?width=108&crop=smart&auto=webp&s=b6e50d02e12a42fe062e2e6d7c32b204203596c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?width=216&crop=smart&auto=webp&s=e8796b33a4d858c9a7c791589242deb213d0723f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?width=320&crop=smart&auto=webp&s=9c1194e38613edeb02fcef0439abc6a62d01f77a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?width=640&crop=smart&auto=webp&s=1a9f0e2046bc11cb48814bf1b4fe41f98da09463', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?width=960&crop=smart&auto=webp&s=b80e162e20edf52a78831816d987a173de2e9e3f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?width=1080&crop=smart&auto=webp&s=cf107fb8a6726632f2e9fa64cc86298c9a0c569b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8zBiukf8PaYMRtXQgmF47b4Mq21lSLk2ZaDOaAKkpSM.jpg?auto=webp&s=bd3726a10ddaa1420034d9b6bcaba319a2828e8f', 'width': 1200}, 'variants': {}}]}
Quantizing Mixtral 8x7B to GPTQ
3
Hello guys! I have been wondering if you can quantize Mixtral 8x7B to GPTQ as you would, for example, Mistral 7B (using transformers.GPTQConfig), or if it will not work due to the new expert layers working differently than dense layers. Also, has anyone tried fine-tuning directly on a GPTQ Mixtral model with good results? Let me know your thoughts, thanks :D!
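For reference, this is the standard transformers GPTQConfig flow that works for dense models like Mistral 7B; whether optimum/auto-gptq handles Mixtral's expert layers the same way is exactly the open question, so treat this as a sketch to try rather than a confirmed recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Same config that works for dense Mistral 7B; calibration runs on the c4 dataset.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# If auto-gptq can't handle the MoE expert layers, this is where it would fail.
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)
model.save_pretrained("mixtral-8x7b-gptq-4bit")
```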
2023-12-29T14:51:04
https://www.reddit.com/r/LocalLLaMA/comments/18totck/quantizing_mixtral_8x7b_to_gptq/
tillthehorizon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18totck
false
null
t3_18totck
/r/LocalLLaMA/comments/18totck/quantizing_mixtral_8x7b_to_gptq/
false
false
self
3
null
RP or Storywriter LLM model like Noromaid
3
I'm searching for a model for roleplay, storywriting, and that sort of thing, NSFW and open source, the bigger the better. Know anything about it?
2023-12-29T14:41:47
https://www.reddit.com/r/LocalLLaMA/comments/18tom88/rp_or_storywriter_llm_model_like_noromaid/
Horror_Echo6243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tom88
false
null
t3_18tom88
/r/LocalLLaMA/comments/18tom88/rp_or_storywriter_llm_model_like_noromaid/
false
false
self
3
null
Stop messing with sampling parameters and just use DRµGS!
280
Hello r/LocalLLaMA! I feel that our current strategies for sampling LLM outputs are very mean. Our models want to say something, we take their preferences into consideration, and then just turn around and roll a die to decide whether they get to say what they want to. Then on top of that we go and invent all sorts of weird ways to try to ban the die from landing on anything too unreasonable, giving the die no more information than a probability distribution. I think it would be much better to always pick whatever the model thinks is most likely. But I also want the model to be creative. Therefore, as a compromise, I have decided to let my model use DRµGS. DRµGS (Deep Random micro-Glitch Sampling) basically just injects randomness into the model while it's still thinking, instead of after the model has thought, when it's too late to give it any say in the matter. This way, you can still get variety in the outputs, even though you're always picking the most likely prediction. It's been going pretty great so far, and I have discovered a lot of interesting things while using DRµGS. But I just feel kinda weird about being the only person experimenting with DRµGS. So I thought, maybe you guys would also like to try DRµGS? [I made this repo to share DRµGS, and hopefully even get people interested in helping me make DRµGS.](https://github.com/EGjoni/DRUGS) I think the second half of the README might be kind of interesting even if you don't ever intend to use DRµGS yourself, and there is a colab if you only want to casually try DRµGS. But I feel like there's a lot of potential here and would be happy for any discussion, questions, critiques, etc. Here is a short list of reasons to use DRµGS: 1. The variety in the model's outputs will naturally tend to stay in the range of reasonable continuations 2. There's a pretty big range of both safe and effective doses, followed by a very sudden fall-off to echolalic degeneracy, meaning you don't have to keep fiddling with the parameters. 3. You can get different sorts of effects from different types of DRµGS injected in different spots. 4. You can still combine them with whatever sampling strategy you're currently using, if you really want to.
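To make the core idea concrete, here is a toy sketch of injecting noise into hidden states during the forward pass and then decoding greedily; this is my own minimal illustration of the concept, not the actual DRµGS implementation from the repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")

DOSE = 0.02  # noise scale: the "dose"

def inject_noise(module, inputs, output):
    # Perturb the hidden states mid-thought instead of rolling dice at the output.
    hidden = output[0]
    return (hidden + DOSE * torch.randn_like(hidden),) + output[1:]

# Hook one middle transformer block (where to inject is part of the experiment).
handle = model.transformer.h[6].register_forward_hook(inject_noise)

ids = tok("The meaning of life is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=30, do_sample=False)  # pure greedy
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```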
2023-12-29T14:36:43
https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/
qrios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18toidc
false
null
t3_18toidc
/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/
false
false
self
280
{'enabled': False, 'images': [{'id': '0DJaY3w3sNy48uLX4Y36QOpwGMWmaYq9FV8Z5SONgws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?width=108&crop=smart&auto=webp&s=572bb4c927a6c85126ea224cbe7749fe86391772', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?width=216&crop=smart&auto=webp&s=395bfc8be451e0c8635d312265f5c41545b97989', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?width=320&crop=smart&auto=webp&s=0b5128ee08d6fc78ef67b480b687c60dce571749', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?width=640&crop=smart&auto=webp&s=0ce099e57fa086ebe09f0415837255cf610676fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?width=960&crop=smart&auto=webp&s=cd32f9c2de5583627580c6305efa514b73ad9d71', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?width=1080&crop=smart&auto=webp&s=b7a57a4ee0d96c2b21ae1c08e3582557a2884eff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ka8Jzz2_xzPU7sisFlXtxngT01P2ylxIqZHykpw7b8I.jpg?auto=webp&s=459bfad02dd68559b245267b39254f67acbb5720', 'width': 1200}, 'variants': {}}]}
How to run un-quantized LLama2 7B model with llama.cpp
3
I downloaded the Llama 2 7B model from the link given by Meta via email. Following are the files I possess: 1. `checklist.chk` 2. `consolidated.00.pth` 3. `params.json` 4. `tokenizer.model` I tried to find resources online, but all I was getting was steps to run the quantized versions, which are just a single file with a `ggml`/`gguf` extension. How do I go about running the un-quantized version of Llama 2 7B on `llama.cpp`? `llama.cpp` is required for me as I need to use its grammar feature.
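The usual route, as a sketch: llama.cpp ships a `convert.py` script that takes the directory containing `consolidated.00.pth`, `params.json`, and `tokenizer.model` and writes an unquantized gguf (e.g. f16), which then loads like any other gguf. If you want the grammar feature from Python, llama-cpp-python exposes it roughly like this (the model path and toy grammar are placeholders):

```python
from llama_cpp import Llama, LlamaGrammar

# Assumes you've already run llama.cpp's convert.py to produce this f16 gguf.
llm = Llama(model_path="llama-2-7b-f16.gguf", n_ctx=4096)

# Toy grammar forcing a yes/no answer; swap in your real GBNF grammar.
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

out = llm("Is Paris the capital of France? Answer: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```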
2023-12-29T14:04:49
https://www.reddit.com/r/LocalLLaMA/comments/18tnul2/how_to_run_unquantized_llama2_7b_model_with/
RAIV0LT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tnul2
false
null
t3_18tnul2
/r/LocalLLaMA/comments/18tnul2/how_to_run_unquantized_llama2_7b_model_with/
false
false
self
3
null
What options are there to run a local LLM?
5
Guys, I am thinking about creating a guide on how to install and work with local LLMs. For now I see the following methods: - ollama - LM Studio - python/golang code Can you recommend any other projects which help run LLMs locally? Thanks in advance!
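One more angle worth noting: several of these tools (LM Studio, llama.cpp's server, and others) expose an OpenAI-compatible endpoint, so a single Python snippet covers many of them; the port below is LM Studio's default and just an example:

```python
from openai import OpenAI

# Point the client at a local OpenAI-compatible server
# (LM Studio serves on port 1234 by default).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # most local servers ignore or loosely match this field
    messages=[{"role": "user", "content": "Hello! Which model are you?"}],
)
print(resp.choices[0].message.content)
```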
2023-12-29T13:55:39
https://www.reddit.com/r/LocalLLaMA/comments/18tnnlq/what_options_are_to_run_local_llm/
juhasbaca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tnnlq
false
null
t3_18tnnlq
/r/LocalLLaMA/comments/18tnnlq/what_options_are_to_run_local_llm/
false
false
self
5
null
Resources on training LLM from scratch?
3
Are there repos for reproducible training, with performance on par with LLaMA 2 (or at least not spitting garbage)? With full docs for hyperparameters and logs of the loss curves? Assume one has all the compute budget needed, e.g. 10 nodes of 8x A100s.
2023-12-29T13:41:37
https://www.reddit.com/r/LocalLLaMA/comments/18tnds5/resources_on_training_llm_from_scratch/
Ashamed_Assist2750
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tnds5
false
null
t3_18tnds5
/r/LocalLLaMA/comments/18tnds5/resources_on_training_llm_from_scratch/
false
false
self
3
null
Issues for ChatBot with Persona
3
So, I'm a beginner with LLMs. I'm working on creating a chatbot that can mimic a person based on his information (the data contains info like his likes, dislikes, relationships, and so on). I tried creating synthetic data using GPT with query-response pairs and fine-tuned using AutoTrain, which didn't work out. Now, I tried using LangChain: I gave it a text file with the person's data and a prompt template asking it to roleplay the person using the context in the data. This works, but it reverts on questions like "what do you think about AI", where it says that as an AI language model it can't answer. I used the WizardLM Llama model. I think issues with the context window could come up as well if the data is big. How can I make this idea work? Are there ways to improve on the LangChain approach, or should I use a different approach altogether?
2023-12-29T13:40:10
https://www.reddit.com/r/LocalLLaMA/comments/18tncqj/issues_for_chatbot_with_persona/
geek_innnn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tncqj
false
null
t3_18tncqj
/r/LocalLLaMA/comments/18tncqj/issues_for_chatbot_with_persona/
false
false
self
3
null
Seeking Advice: structuring and managing text data for a chatbot.
2
This is my first post on this subreddit, so hello everyone! ;) I'm currently working on a locally hosted chatbot for an HR division to assist with their daily tasks. I'm using LLaMA 2, RAG, and LangChain as orchestrator. However, I'm facing a challenge due to my limited experience in dealing with a large number of text documents—specifically, how to effectively store and organize them and then integrate them into a vector database. Simply storing data on physical disks doesn't provide me with the necessary version control for my data (and maybe analytics?). Moreover, I'm keen on implementing version control for the vector database itself. Do any of you have tips on how commercial projects typically structure such initiatives and what tools they employ? Thank you all in advance!
2023-12-29T13:10:33
https://www.reddit.com/r/LocalLLaMA/comments/18tmsz9/seeking_advice_structuring_and_managing_text_data/
mgutmmm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tmsz9
false
null
t3_18tmsz9
/r/LocalLLaMA/comments/18tmsz9/seeking_advice_structuring_and_managing_text_data/
false
false
self
2
null
Fine tune on multi turn dataset
1
[removed]
2023-12-29T12:45:21
https://www.reddit.com/r/LocalLLaMA/comments/18tmcmj/fine_tune_on_multi_turn_dataset/
LividSwan9954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tmcmj
false
null
t3_18tmcmj
/r/LocalLLaMA/comments/18tmcmj/fine_tune_on_multi_turn_dataset/
false
false
self
1
null
Best open source coding models today (EvalPlus leaderboard)
2
Hi guys, Looking for opinions from developers using open source models - what do you think of the EvalPlus rankings here? [https://evalplus.github.io/leaderboard.html](https://evalplus.github.io/leaderboard.html) Is your experience generally aligned to this or different? Many thanks.
2023-12-29T12:43:09
https://www.reddit.com/r/LocalLLaMA/comments/18tmb8i/best_open_source_coding_models_today_evalplus/
TechnoTherapist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tmb8i
false
null
t3_18tmb8i
/r/LocalLLaMA/comments/18tmb8i/best_open_source_coding_models_today_evalplus/
false
false
self
2
null
Finetune, RAG or live search
8
Hi all, I use, as probably many of us, Atlassian (cloud) Jira and Confluence wiki. I'm looking for methods to incorporate the data from Jira and the wiki into natural language processing, by either fine-tuning the base model, making embeddings of the dataset and using them in a RAG setup, or even creating an agent for Atlassian and performing live search with their API, or maybe some other approach. Has anyone already tried to do something like that and can share some insights?
2023-12-29T12:16:36
https://www.reddit.com/r/LocalLLaMA/comments/18tluwk/finetune_rag_or_live_search/
e-nigmaNL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tluwk
false
null
t3_18tluwk
/r/LocalLLaMA/comments/18tluwk/finetune_rag_or_live_search/
false
false
self
8
null
Fine tuning a finetune to make it output more natural seeming text
14
I've found a model that seems to do everything I need it to, I've even managed to get it to operate a basic search engine, but now I've run into a bit of a minor annoyance in the form of ["GPT-isms"](https://www.reddit.com/r/LocalLLaMA/comments/18k6nft/which_gptism_in_local_models_annoys_you_the_most/). The solution I'd like to attempt is fine tuning a model on the [no-robots dataset](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) and various other sources of purely human-generated text to get it to talk a bit more naturally. I'm worried that by doing this, I'll end up reducing the model's actual performance, since it might "forget" its old training (I don't have access to the original finetune's training data). Since what I'm aiming for is more of a "style" change rather than a substance one, I think some form of LoRA would be able to do it, and I've been told those *can't* teach models new information, so does that mean they also can't make models forget old information either? I've also been looking into RLHF, for which the only method seems to be huggingface's TRL library, would this be a better idea? Assuming I make a UI with a thumbs up/down system, how many examples of a good/bad response would the model need before it started to significantly alter its output? Also, I haven't managed to find any examples of the VRAM requirements for TRL, assuming I'm using a 7b model, would something like this even run on a 3090?
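For the LoRA route mentioned above, a minimal peft setup looks roughly like this; the target modules and rank are typical starting values for a 7b Mistral-style model, not tuned recommendations:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# A low rank and alpha nudge style without overwriting much base behavior.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the weights
```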
2023-12-29T10:35:45
https://www.reddit.com/r/LocalLLaMA/comments/18tk93w/fine_tuning_a_finetune_to_make_it_output_more/
Void_0000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tk93w
false
null
t3_18tk93w
/r/LocalLLaMA/comments/18tk93w/fine_tuning_a_finetune_to_make_it_output_more/
false
false
self
14
{'enabled': False, 'images': [{'id': 'YyCAX7pcMwacnK07mn470N4UIrQt7t5RNqPrUhR2NLs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?width=108&crop=smart&auto=webp&s=c5b148838f34a8c4ed902ef130ee73b1729ddbca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?width=216&crop=smart&auto=webp&s=e25d413383338ed5bc1f9215d17660afc7f34e33', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?width=320&crop=smart&auto=webp&s=7de638f6e2f8a7ce4ef08a9b2862a15079730742', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?width=640&crop=smart&auto=webp&s=20fd3b9e801742734adb6cf670ab5d1eb629f37c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?width=960&crop=smart&auto=webp&s=cd23c932e5fd4ee69ac14f2cc24ebce8b80ae34d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?width=1080&crop=smart&auto=webp&s=5629c917f1b77f36cbf984ae2f7d9077614d164f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bWwJxZZg5mrqohOiJ9u18k_EfjfHn0RP_AQ-Qm5IfLE.jpg?auto=webp&s=ce023ca5e542eb0ea164fc2a1bd630eaecf706dd', 'width': 1200}, 'variants': {}}]}
What CPU should I buy for LLMs, 13600 or 13700?
1
[removed]
2023-12-29T09:55:17
https://www.reddit.com/r/LocalLLaMA/comments/18tjmrb/what_cpu_should_i_buy_for_llm_13600_or_13700/
Equal-Pilot-9592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tjmrb
false
null
t3_18tjmrb
/r/LocalLLaMA/comments/18tjmrb/what_cpu_should_i_buy_for_llm_13600_or_13700/
false
false
self
1
null
leftover 5700xt 8gb
3
Hey, I still have a 5700 XT 8GB lying around. Is it worth using in a low-grade secondary LLM workstation, or should I sell it and buy another second-hand NVIDIA card?
2023-12-29T09:52:00
https://www.reddit.com/r/LocalLLaMA/comments/18tjl24/leftover_5700xt_8gb/
minecraft_simon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tjl24
false
null
t3_18tjl24
/r/LocalLLaMA/comments/18tjl24/leftover_5700xt_8gb/
false
false
self
3
null
How to process queries that require real-time information with RAG?
4
Hi everyone, I'm building an assistant to support a business. Currently, I have used RAG + GPT-3.5 to build it. I use vector embeddings and FAISS to retrieve related information, but I run into trouble when a user gives a query like "Give me all articles that have been published recently." I see that FAISS + vector embeddings will not work for this. I'm thinking about solutions like Text2SQL or query decomposition, but I have no experience with these. Can you guys provide me with some advice for this case, please?
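A minimal sketch of the Text2SQL idea: give the model the schema, ask for a single SQL statement, and execute it. The schema and the `llm()` placeholder are illustrative only, and in production the generated SQL would need validation:

```python
import sqlite3

SCHEMA = "articles(id INTEGER, title TEXT, published_at DATE)"

def llm(prompt: str) -> str:
    # Placeholder for your GPT-3.5 (or local model) call.
    raise NotImplementedError

def answer(question: str):
    sql = llm(
        f"Schema: {SCHEMA}\n"
        f"Write one SQLite SELECT statement answering: {question}\n"
        "Return only the SQL."
    )
    conn = sqlite3.connect("app.db")
    try:
        return conn.execute(sql).fetchall()  # validate/sanitize before running!
    finally:
        conn.close()

# answer("Give me all articles that have been published recently.")
```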
2023-12-29T09:29:46
https://www.reddit.com/r/LocalLLaMA/comments/18tj92x/how_to_process_queries_that_require_realtime/
unknow_from_vietnam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tj92x
false
null
t3_18tj92x
/r/LocalLLaMA/comments/18tj92x/how_to_process_queries_that_require_realtime/
false
false
self
4
null
Performance requirements for single user LLM
11
Is there any source for performance of hardware for local LLMs? I want to set up a local instance but cannot figure out what makes an adequate machine. I'm after the ability to run a 40b model and want at least 10 output tokens per second. I prefer a CPU-only setup, only because I find GPU prices near me obscene. I'm happy to fill the machine with RAM if it will help. UPDATE: I appreciate all the comments. My budget is around AU$1500, and I was being blindsided by newer general GPU prices being far above that. One comment mentioned P40 cards, and they will get a good look. I also need to review my initial performance requirements and see where the line is between "need" and "nice". Again, thanks for the feedback.
2023-12-29T09:22:56
https://www.reddit.com/r/LocalLLaMA/comments/18tj5o5/performance_requirements_for_single_user_llm/
Mr_Evil_Sir
self.LocalLLaMA
2023-12-29T21:50:00
0
{}
18tj5o5
false
null
t3_18tj5o5
/r/LocalLLaMA/comments/18tj5o5/performance_requirements_for_single_user_llm/
false
false
self
11
null
Dolphin 2.6 Mistral 7b
62
[https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) The model is **uncensored** and, from my brief tests, performs on par with Mistral-7B-Instruct-v0.2.
2023-12-29T08:40:05
https://www.reddit.com/r/LocalLLaMA/comments/18tiin9/dolphin_26_mistral_7b/
jslominski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tiin9
false
null
t3_18tiin9
/r/LocalLLaMA/comments/18tiin9/dolphin_26_mistral_7b/
false
false
self
62
{'enabled': False, 'images': [{'id': 'byoRTGpNkyzyVSsWzk4kpfmckCd3Fz8IpR_WTffynYQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=108&crop=smart&auto=webp&s=439025bde4a6e0156b108b6f178521f6085ac8d8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=216&crop=smart&auto=webp&s=7ccb08660bc94833277ed4b401d852afd76ccc34', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=320&crop=smart&auto=webp&s=4cd669ff8c94ef1766f33e72d1120b0696134c79', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=640&crop=smart&auto=webp&s=5de400ce4a5a9fa81d5c362220a2b0c84587e345', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=960&crop=smart&auto=webp&s=aa0f7f2d9a6d5177fbf7fc130292f3464c4f385d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=1080&crop=smart&auto=webp&s=495f4a84e6e11e19edf96f48e3b5b73dcc0fbcdf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?auto=webp&s=0a58803d3d97d71e1c9845d3ba15f06485ae3f0f', 'width': 1200}, 'variants': {}}]}
Suggestions on Running Airoboros 70B on Cloud GPU like runpod?
4
I want to try 70B models (Airoboros 70B) on a cloud GPU. It's just for personal use. I am looking for **budget pricing** and **ease of use**. I have tried services like OpenRouter, Replicate, etc. before, where the model is pre-deployed and ready to use and they charge per token. It is convenient. Now I'm looking at [Runpod.io](https://Runpod.io), [vast.ai](https://vast.ai), [Lambda Labs](https://lambdalabs.com/), etc., as that enables me to try newer releases of LLMs. I am new to this. I heard they charge by the hour. So if I use a GPU, say an RTX A4000, for just a few minutes, do I have to pay for the whole hour? How do they charge? Is it easy to try different LLMs? Anyone who has done this, help 🙏🏻
2023-12-29T08:28:19
https://www.reddit.com/r/LocalLLaMA/comments/18tic31/suggestions_on_running_airoboros_70b_on_cloud_gpu/
broodysupertramp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tic31
false
null
t3_18tic31
/r/LocalLLaMA/comments/18tic31/suggestions_on_running_airoboros_70b_on_cloud_gpu/
false
false
self
4
null
Insight/Recommendations needed from C# developers
6
Hi all I am busy building a dataset to create a model that is specifically very good at C# What I've done is take the .NET 8 API documentation and try and figure out what is missing from the knowledge of the most popular models, specifically I've been testing deepseek and codellama. There are plenty of classes/structs/enums/interfaces it does not know about and I am keen to create a fine tuned model that corrects this, but more than that I want to imbue it with additional knowledge of packages that have come out recently, and this is largely where I need help from other developers. I want to compile a list of packages I need to look at that most models are largely incapable of utilizing but are commonly used today. Additionally if you can think of anything that models struggle with (regarding C#) I would also really appreciate your thoughts on that as well. The finetuned model I am creating will still use the same 16k context length, highest I can manage on the hardware I have available to me. I am also very open to tips on how to build the best possible model, any help here is appreciated.
2023-12-29T08:05:09
https://www.reddit.com/r/LocalLLaMA/comments/18thzmz/insightrecommendations_needed_from_c_developers/
Feeling-Currency-360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18thzmz
false
null
t3_18thzmz
/r/LocalLLaMA/comments/18thzmz/insightrecommendations_needed_from_c_developers/
false
false
self
6
null
MathPile: 1 Billion Tokens of Open Sourced Math Data
1
[removed]
2023-12-29T07:19:17
https://www.reddit.com/r/LocalLLaMA/comments/18tha3w/mathpile_1_billion_tokens_of_open_sourced_math/
Fantastic-Ninja3839
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tha3w
false
null
t3_18tha3w
/r/LocalLLaMA/comments/18tha3w/mathpile_1_billion_tokens_of_open_sourced_math/
false
false
self
1
null
Memory needed to train 7B?
30
How much VRAM do you need if you want to continue pretraining a 7B Mistral base model? Does the sequence length of the training examples significantly affect the VRAM requirements? If you want 8k context, do you do this at the pretraining stage or the fine-tuning stage? Is full-weight LoRA comparable to continued pretraining in terms of perplexity?
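As a rough back-of-the-envelope, assuming mixed-precision training with Adam and ignoring activation memory (which is the part that grows with sequence length):

```python
params = 7e9

# bf16 weights (2 B) + bf16 grads (2 B) + fp32 Adam moments (2 * 4 B) per parameter
bytes_per_param = 2 + 2 + 8
print(f"Full continued pretraining, states only: {params * bytes_per_param / 1e9:.0f} GB")
# -> ~84 GB before activations; 8k sequences add substantially on top.

# LoRA: the base stays frozen in bf16, only a tiny adapter is trained.
print(f"LoRA, frozen bf16 base: {params * 2 / 1e9:.0f} GB + adapter overhead")
# -> ~14 GB base, which is why LoRA fits on a single 24 GB card.
```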
2023-12-29T06:20:53
https://www.reddit.com/r/LocalLLaMA/comments/18tgbs8/memory_needed_to_train_7b/
xynyxyn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tgbs8
false
null
t3_18tgbs8
/r/LocalLLaMA/comments/18tgbs8/memory_needed_to_train_7b/
false
false
self
30
null
When to use KG over text data for RAG application?
7
I've seen many applications querying a knowledge graph instead of the normal chunks of data in textual format. Is there any particular advantage to this? In what example scenarios would I benefit from using a KG over text for RAG?
2023-12-29T06:12:09
https://www.reddit.com/r/LocalLLaMA/comments/18tg6dj/when_to_use_kg_over_text_data_for_rag_application/
Silver_Equivalent_58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tg6dj
false
null
t3_18tg6dj
/r/LocalLLaMA/comments/18tg6dj/when_to_use_kg_over_text_data_for_rag_application/
false
false
self
7
null
FileMaker integration
1
Has anyone used FileMaker either as: - a source of data to fine tune a local model Or - as an interface to enter a query to process via a local model and return back into FileMaker? Thank you!
2023-12-29T06:00:19
https://www.reddit.com/r/LocalLLaMA/comments/18tfyk0/filemaker_integration/
circulus_one
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tfyk0
false
null
t3_18tfyk0
/r/LocalLLaMA/comments/18tfyk0/filemaker_integration/
false
false
self
1
null
Some things are more CRINGE than others: Preference Optimization with the Pairwise Cringe Loss
42
>Practitioners commonly align large language models using pairwise preferences, i.e., given labels of the type response A is preferred to response B for a given input. Perhaps less commonly, methods have also been developed for binary feedback, i.e. training models given labels of type response A is good or bad. We show how an existing performant binary feedback method, the Cringe Loss (Adolphs et al., 2022), can be generalized to the pairwise preference setting using a simple soft margin extension. Pairwise Cringe Loss is straightforward to implement and efficient to train, and we find it outperforms state-of-the-art preference optimization algorithms such as PPO and DPO on the AlpacaFarm benchmark.
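For intuition, a generic soft-margin pairwise preference loss can be sketched as below; this is a textbook margin formulation for illustration only, not the paper's exact Pairwise Cringe construction:

```python
import torch
import torch.nn.functional as F

def soft_margin_pairwise_loss(score_preferred, score_rejected, margin=1.0):
    # Penalize cases where the preferred response does not beat the rejected
    # one by at least `margin`; softplus gives a smooth hinge.
    return F.softplus(margin - (score_preferred - score_rejected)).mean()

good = torch.tensor([2.0, 0.5])
bad = torch.tensor([1.0, 1.5])
print(soft_margin_pairwise_loss(good, bad))  # grows as preferences are violated
```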
2023-12-29T05:54:17
https://arxiv.org/abs/2312.16682
Formal_Drop526
arxiv.org
1970-01-01T00:00:00
0
{}
18tfupv
false
null
t3_18tfupv
/r/LocalLLaMA/comments/18tfupv/some_things_are_more_cringe_than_others/
false
false
default
42
null
Any way to effectively make agentic programs yet?
4
I appreciate the Bard API considering it's free, and I don't need to use my own hardware. I have created a few programs that work well with it. I have used LMStudio in the past to combine autogen and memgpt to varying success. So I was wondering how you have progressed in agent software. Any resources or advice would be greatly appreciated.
2023-12-29T05:26:41
https://www.reddit.com/r/LocalLLaMA/comments/18tfcc6/any_way_to_effectively_make_agentic_programs_yet/
ArmoredBattalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tfcc6
false
null
t3_18tfcc6
/r/LocalLLaMA/comments/18tfcc6/any_way_to_effectively_make_agentic_programs_yet/
false
false
self
4
null
MLX run 4-bit quantized 70B Llama on an M2 Ultra
17
https://x.com/awnihannun/status/1740506154307055915?s=20
2023-12-29T05:15:46
https://www.reddit.com/r/LocalLLaMA/comments/18tf540/mlx_run_4bit_quantized_70b_llama_on_an_m2_ultra/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tf540
false
null
t3_18tf540
/r/LocalLLaMA/comments/18tf540/mlx_run_4bit_quantized_70b_llama_on_an_m2_ultra/
false
false
self
17
{'enabled': False, 'images': [{'id': 'KwH0iXuXaD_mLt_-qdFoW4vgndDmPu4rpH1vEZdRsOM', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?width=108&crop=smart&auto=webp&s=e4086b6e440ce5428a75d5b0b6490bb2f4420243', 'width': 108}, {'height': 75, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?width=216&crop=smart&auto=webp&s=268fc1ee9e4b42636e109bf872ca99dd84c976a5', 'width': 216}, {'height': 111, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?width=320&crop=smart&auto=webp&s=d927061a59a19c48c275687c067f7ab37b4e178e', 'width': 320}, {'height': 223, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?width=640&crop=smart&auto=webp&s=4d93c0cab0931e162aa495c6af5768ee1e28e1bb', 'width': 640}, {'height': 335, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?width=960&crop=smart&auto=webp&s=f44dd18dd352e3520129e8ff198623df9482e40c', 'width': 960}, {'height': 377, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?width=1080&crop=smart&auto=webp&s=d4f847fe31b2f9d138978dd8fb67897b342b5783', 'width': 1080}], 'source': {'height': 716, 'url': 'https://external-preview.redd.it/Pi-2LPWJouwNHdXtzloFd0Sz1tvtciGrewQxm0mHnFk.jpg?auto=webp&s=e904a2dab2c9d53776da368b60d37da206aa1308', 'width': 2048}, 'variants': {}}]}
image input?
2
I was playing with the latest ChatGPT and I enjoy both image generation and image input. I can send ChatGPT a photo of my room and it can comment on it. I read somewhere that Llama 2 supports vision, but I am unable to find out how to use it locally. Stable Diffusion is very powerful, so pure image generation is not as interesting as somehow connecting an LLM to image input. Is that possible with current software?
2023-12-29T05:08:00
https://www.reddit.com/r/LocalLLaMA/comments/18tf02z/image_input/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tf02z
false
null
t3_18tf02z
/r/LocalLLaMA/comments/18tf02z/image_input/
false
false
self
2
null
oobabooga, Linux on metal, symlink issues. Please help!
1
I am running Linux Mint on a Dell workstation. I was running out of space, so I installed a new 2TB SSD. It is /dev/sda and is mounted at /mnt/hackstore at boot. I put a symlink to /mnt/hackstore from \~/hackhack so that the whole drive is just space in my home directory under there. My Stable Diffusion stuff has had no issues with this setup. start\_linux.sh and oobabooga have missing-package errors. I thought maybe it was relative vs. absolute links, so I tried it both ways, to no avail. I've been doing this kind of thing for years with no troubles; I'm perplexed. I tried to install from git clone back in my home directory, still on the old drive, and it works fine. Any ideas on where to start my debugging?
2023-12-29T02:28:38
https://www.reddit.com/r/LocalLLaMA/comments/18tbtkx/oogabooga_linux_on_metal_symlink_issues_please/
kr-nyb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tbtkx
false
null
t3_18tbtkx
/r/LocalLLaMA/comments/18tbtkx/oogabooga_linux_on_metal_symlink_issues_please/
false
false
self
1
null
Good prompts for an AI to decide on web search vs direct response?
30
I'm curious if there's a good prompt to get an AI to decide whether a web search is needed. For example: Request 1: What is the capital of Japan? (direct) Request 2: What are the best OLED TVs? (lookup) Request 3: Write a sample Python hello-world script. (direct) Request 4: Who are the presidential contenders in 2024? (lookup) The best I've come up with so far is: You are a smart assistant bot that helps answer questions from the user. The user asked: ``` {{request}} ``` Should we perform a web lookup or answer directly? Just reply with: Answer Type: direct or lookup It seems to do OK, but I'm wondering if there's a smarter way to do this with simple prompting (at least for the first version). I've got a little tech demo for an "always-on" assistant I want to release next week, and would love to include some fake web data in the demo. I'm hoping to find prompts that will work with smaller models like Mistral or Phi-2, quantized, to make the tool more accessible.
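A tiny routing sketch around that prompt; the `llm()` call is a placeholder for whatever backend you use, and scanning the reply (rather than exact-matching it) guards against the chatty answers small models tend to give:

```python
ROUTER_PROMPT = """You are a smart assistant bot that helps answer questions from the user.
The user asked:
{request}
Should we perform a web lookup or answer directly?
Just reply with: Answer Type: direct or lookup"""

def llm(prompt: str) -> str:
    # Placeholder for a call to Mistral / Phi-2 / any local backend.
    raise NotImplementedError

def route(request: str) -> str:
    reply = llm(ROUTER_PROMPT.format(request=request)).lower()
    if "lookup" in reply:
        return "lookup"
    return "direct"  # safe default: answer from the model itself
```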
2023-12-29T02:28:21
https://www.reddit.com/r/LocalLLaMA/comments/18tbtdi/good_prompts_for_an_ai_to_decide_on_web_search_vs/
AndrewVeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tbtdi
false
null
t3_18tbtdi
/r/LocalLLaMA/comments/18tbtdi/good_prompts_for_an_ai_to_decide_on_web_search_vs/
false
false
self
30
null
Extreme long load times with WSL
1
I know that llama.cpp works better on Linux, but I wanted to give it a try on Windows. I'm using Ubuntu WSL2 on Windows 11 and I'm getting around 3 minutes of load time (the processing itself is working at the expected speed). Does anyone know what it could be? The model is phi-2 Q5_K, which is smaller than the 16GB of RAM reserved for Ubuntu.
2023-12-29T02:22:10
https://www.reddit.com/r/LocalLLaMA/comments/18tbort/extreme_long_load_times_with_wsl/
Certain-Wind6458
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18tbort
false
null
t3_18tbort
/r/LocalLLaMA/comments/18tbort/extreme_long_load_times_with_wsl/
false
false
self
1
null
There should be a permanent sticky post here that lists the best model for each use case and its size
1
[removed]
2023-12-29T00:42:49
https://www.reddit.com/r/LocalLLaMA/comments/18t9kua/there_should_be_a_permanent_sticky_post_here_that/
poet3991
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t9kua
false
null
t3_18t9kua
/r/LocalLLaMA/comments/18t9kua/there_should_be_a_permanent_sticky_post_here_that/
false
false
self
1
null
Is there an API for Dolphin Mixtral 8x7b or a way to run it cheaply?
3
Hi, I am trying to run this model. I am new to using open-source LLMs. I am willing to either pay for an API or find an affordable way to run this. I have tried my local PC and paying for an EC2 P3 instance, but both methods are slow (and the P3 instance, expensive). How can I affordably run this model? Follow-up: How is ChatGPT so fast and able to handle so many requests for free? Based on my single experience trying to run a few different models locally, it seems incredibly computationally expensive.
2023-12-29T00:37:38
https://www.reddit.com/r/LocalLLaMA/comments/18t9gos/is_there_an_api_for_dolphon_mixtral_8x7b_or_a_way/
Analysis_I
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t9gos
false
null
t3_18t9gos
/r/LocalLLaMA/comments/18t9gos/is_there_an_api_for_dolphon_mixtral_8x7b_or_a_way/
false
false
self
3
null
Opensource RLHF for Mistral?
5
Hi, are there resources for RLHF for the Mistral model? I know the model they released is the non-chat version, right?
2023-12-28T23:57:30
https://www.reddit.com/r/LocalLLaMA/comments/18t8ks4/opensource_rlhf_for_mistral/
Dense-Smf-6032
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t8ks4
false
null
t3_18t8ks4
/r/LocalLLaMA/comments/18t8ks4/opensource_rlhf_for_mistral/
false
false
self
5
null
Deep Reinforcement Learning and Adversarial Attacks
4
[https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning](https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning)
2023-12-28T23:31:19
https://www.reddit.com/r/LocalLLaMA/comments/18t7zuj/deep_reinforcement_learning_and_adversarial/
ml_dnn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t7zuj
false
null
t3_18t7zuj
/r/LocalLLaMA/comments/18t7zuj/deep_reinforcement_learning_and_adversarial/
false
false
self
4
{'enabled': False, 'images': [{'id': 'dPO1rdVPcQIjDtaC-_nSO1s6eOYinb_Ulxl0jnuAaYg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=108&crop=smart&auto=webp&s=02c3cad69ed614a83e8397c5411584e1198a18a0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=216&crop=smart&auto=webp&s=5a674847c9cb604c7c87bd87143510c847e0d10b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=320&crop=smart&auto=webp&s=93dfafc234a1f822faca4ac3c9c8b3039ad52f91', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=640&crop=smart&auto=webp&s=20fe5f99e834640910b09c01898d1ad1e042db98', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=960&crop=smart&auto=webp&s=8f44299b59009914d6091878d527ecac50b88e7f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=1080&crop=smart&auto=webp&s=d6853e75ddc4acc690ae637d506855c8b6d2a602', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?auto=webp&s=3b12e82f009e7603ee0b36e4b3e8ca34155df613', 'width': 1200}, 'variants': {}}]}
Best LLM model for pure chatting
9
Hello, I'm wondering what's the best uncensored LLM under 30B for pure chatting, using character cards and some fine-tuning. Currently I've tested a few, and Manticore/Nous-Hermes Llama 2 seem to be the best.
2023-12-28T23:16:31
https://www.reddit.com/r/LocalLLaMA/comments/18t7nmg/best_llm_model_for_pure_chatting/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t7nmg
false
null
t3_18t7nmg
/r/LocalLLaMA/comments/18t7nmg/best_llm_model_for_pure_chatting/
false
false
self
9
null
Collecting 'knowledge cutoff' prompts
3
Wondering if there is an equivalent to ShareGPT that collects the prompts where ChatGPT 'cries uncle' with a knowledge cutoff at date X. It would be a good dataset to collect and fine-tune open-source LLMs with. Also curious whether that is mostly current-event-type data or just datasets not surfaced for pretraining.
2023-12-28T23:15:30
https://www.reddit.com/r/LocalLLaMA/comments/18t7mst/collecting_knowledge_cutoff_prompts/
MachineZer0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t7mst
false
null
t3_18t7mst
/r/LocalLLaMA/comments/18t7mst/collecting_knowledge_cutoff_prompts/
false
false
self
3
null
New to LLMs, looking for some tips
1
[removed]
2023-12-28T23:10:28
https://www.reddit.com/r/LocalLLaMA/comments/18t7ili/new_to_llms_looking_for_some_tips/
worldender999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t7ili
false
null
t3_18t7ili
/r/LocalLLaMA/comments/18t7ili/new_to_llms_looking_for_some_tips/
false
false
self
1
null
5700xt 8gb or 3060 12 gb
4
Hey guys! I'm torn between the 5700 XT and the 2060 12GB. The 2060 costs around 60% more than the 5700. I run 13b models. Money is tight and I don't have a video card at the moment. Can the AMD card run GGML? I need help, thank you very much!
2023-12-28T22:55:59
https://www.reddit.com/r/LocalLLaMA/comments/18t76ec/5700xt_8gb_or_3060_12_gb/
Elegant_Course2372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t76ec
false
null
t3_18t76ec
/r/LocalLLaMA/comments/18t76ec/5700xt_8gb_or_3060_12_gb/
false
false
self
4
null
[tutorial] Super Easiest way to get started locally (followup)
11
So two days ago I created this [post](https://www.reddit.com/r/LocalLLaMA/comments/18saxgy/tutorial_easiest_way_to_get_started_locally/?utm_source=share&utm_medium=web2x&context=3) which is a tutorial to easily run a model locally. It basically uses a Docker image to run a llama.cpp server. Many kind-hearted people recommended llamafile, which is an even easier way to run a model locally. So this is a super quick guide to run a model locally. # Prerequisites ## 1. Model You can select any model you want as long as it's a gguf. I recommend [openchat-3.5-1210.Q4\_K\_M](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q4_K_M.gguf) to get started: it requires 6GB of memory (it can work without a GPU too). Just download it to your machine. ## 2. Llamafile server To download this, simply run: ```bash curl -LO https://github.com/Mozilla-Ocho/llamafile/releases/download/0.1/llamafile-server-0.1 chmod 755 llamafile-server-0.1 ``` # Running Simply run the llamafile server while providing the model: ```bash ./llamafile-server-0.1 -m openchat-3.5-1210.Q4_K_M.gguf ``` Now go to `http://127.0.0.1:8080/` and start chatting!
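If you'd rather script against it than use the web UI, the llamafile server speaks the llama.cpp server API, so something like the following should work; the endpoint and fields follow the llama.cpp server docs, adjust if your version differs:

```python
import requests

# The llamafile server exposes the llama.cpp HTTP API on port 8080 by default.
resp = requests.post(
    "http://127.0.0.1:8080/completion",
    json={"prompt": "Q: What is the capital of France? A:", "n_predict": 32},
)
print(resp.json()["content"])
```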
2023-12-28T22:28:38
https://www.reddit.com/r/LocalLLaMA/comments/18t6ivu/tutorial_super_easiest_way_to_get_started_locally/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t6ivu
false
null
t3_18t6ivu
/r/LocalLLaMA/comments/18t6ivu/tutorial_super_easiest_way_to_get_started_locally/
false
false
self
11
{'enabled': False, 'images': [{'id': '22nCxJKtA4pHrCbl07eXdw917CY3vgeICLJK6JdBj3I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=108&crop=smart&auto=webp&s=53d0ce4bee9b93798c838d442854edf45289e539', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=216&crop=smart&auto=webp&s=355bb74ea6ba81f5ee6c911f07373f96dcd81c2d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=320&crop=smart&auto=webp&s=8cbf1b83760363abaa33eebbe5a633e1a3400184', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=640&crop=smart&auto=webp&s=e8d27c0c21a3234f654b030f0e42b352604655ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=960&crop=smart&auto=webp&s=7d81689d388db6366e51bdf1ac3076317c350f33', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=1080&crop=smart&auto=webp&s=3f4da9277e8eea5462dd1957a9ba60054488e999', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?auto=webp&s=3c34ee853c17c706dcf733f10c4b58dd02e31e87', 'width': 1200}, 'variants': {}}]}
is context length >= 32K actually useful to you?
2
There are so many models claiming 100k+ context length, but is it really useful? If so, what do you use it for? [View Poll](https://www.reddit.com/poll/18t63ub)
2023-12-28T22:10:58
https://www.reddit.com/r/LocalLLaMA/comments/18t63ub/is_context_length_32k_actually_useful_to_you/
Cultural_Wonder_4121
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t63ub
false
null
t3_18t63ub
/r/LocalLLaMA/comments/18t63ub/is_context_length_32k_actually_useful_to_you/
false
false
self
2
null
What’s the best 7B RP model right now?
33
Curious if there is one that does it the best, because I like fast responses and 13b models take too long.
2023-12-28T21:10:47
https://www.reddit.com/r/LocalLLaMA/comments/18t4nqs/whats_the_best_7b_rp_model_right_now/
headbopper96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t4nqs
false
null
t3_18t4nqs
/r/LocalLLaMA/comments/18t4nqs/whats_the_best_7b_rp_model_right_now/
false
false
self
33
null
Is there a service that hosts Dolphin Mixtral model?
3
Hi. I want to run the Dolphin Mixtral model but I don't have the hardware for it. I'm very new to LLMs and would love to interact with the model with something as simple as API calls. I'm aware of RunPod and their serverless service, but I'm too much of a noob at this point to set it up. Is there a service that hosts the Dolphin Mixtral model that I can interact with via an API (like the OpenAI API)? Thank you very much.
2023-12-28T21:05:45
https://www.reddit.com/r/LocalLLaMA/comments/18t4ja0/is_there_a_service_that_hosts_dolphin_mixtral/
Uncensored4488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t4ja0
false
null
t3_18t4ja0
/r/LocalLLaMA/comments/18t4ja0/is_there_a_service_that_hosts_dolphin_mixtral/
false
false
self
3
null
Using Mistral on Poe. Privacy issues?
2
Does Poe log your chats if you use one of the models on their app? And additionally, what is Mistral's privacy policy?
2023-12-28T20:34:27
https://www.reddit.com/r/LocalLLaMA/comments/18t3si2/using_mistral_on_poe_privacy_issues/
TheHumanFixer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18t3si2
false
null
t3_18t3si2
/r/LocalLLaMA/comments/18t3si2/using_mistral_on_poe_privacy_issues/
false
false
self
2
null