|Column|Type|Range|
|:-|:-|:-|
|title|string|length 1-300|
|score|int64|0-8.54k|
|selftext|string|length 0-41.5k|
|created|timestamp[ns]|2023-04-01 04:30:41 to 2026-03-04 02:14:14|
|url|string|length 0-878|
|author|string|length 3-20|
|domain|string|length 0-82|
|edited|timestamp[ns]|1970-01-01 00:00:00 to 2026-02-19 14:51:53|
|gilded|int64|0-2|
|gildings|string|7 classes|
|id|string|length 7|
|locked|bool|2 classes|
|media|string|length 646-1.8k|
|name|string|length 10|
|permalink|string|length 33-82|
|spoiler|bool|2 classes|
|stickied|bool|2 classes|
|thumbnail|string|length 4-213|
|ups|int64|0-8.54k|
|preview|string|length 301-5.01k|
LLM for medical diagnosis
0
Is there an existing LLM that can take a PDF of a patient's file, do a medical diagnosis from a limited set of possible conditions, and reference appropriate medical literature?
2023-10-09T18:10:49
https://www.reddit.com/r/LocalLLaMA/comments/173yo4n/llm_for_medical_diagnosis/
BalksHamster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173yo4n
false
null
t3_173yo4n
/r/LocalLLaMA/comments/173yo4n/llm_for_medical_diagnosis/
false
false
self
0
null
Prompt Template for Falcon and Vicuna
2
Where can I find the prompt templates for falcon-7b and vicuna-13b-v1.3?
2023-10-09T17:49:36
https://www.reddit.com/r/LocalLLaMA/comments/173y4zx/prompt_template_for_falcon_and_vicuna/
Awkward_Grab_6189
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173y4zx
false
null
t3_173y4zx
/r/LocalLLaMA/comments/173y4zx/prompt_template_for_falcon_and_vicuna/
false
false
self
2
null
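For the prompt-template question above: the Vicuna v1.3 template is documented in FastChat's conversation templates, while falcon-7b is a base model with no official template (falcon-7b-instruct was trained on a loose `User:`/`Assistant:` format). A minimal sketch of the Vicuna v1.3 format:

```python
# Vicuna v1.3 prompt format as documented in FastChat's conversation templates.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def vicuna_v13_prompt(user_message: str) -> str:
    # Turns alternate as "USER:" / "ASSISTANT:"; generation continues after "ASSISTANT:".
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(vicuna_v13_prompt("What is the capital of France?"))
```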
Powerful Models on a Budget: Combining DoReMi Optimization with Synthetic Data
1
Hi Everyone, As has been discussed previously, there have been some exciting results coming out recently with synthetic data. Finally, we have a long-term 8x A100 allocation, accelerating our open synthetic data research. This is an open research endeavor with the goal of learning how to use synthetic data to massively increase the speed and power of fine-tuning and pre-training. I will have this compute for the next year, so there is plenty of room to run lots of interesting and/or exciting experiments. Enough people are interested in collaborating now that it makes sense to have our **first Colab Meeting**:

*-=Title=-* Powerful Models on a Budget: Combining DoReMi with Synthetic Data
*-=When=-* Today @ 5:30 pm EST
*-=Why=-* Discuss results & coordinate
*-=Where=-* Chat, link below.

Further, I am about to push my first experimental findings; I will share the link here in a couple of hours.
2023-10-09T17:27:54
https://www.reddit.com/r/LocalLLaMA/comments/173xlm7/powerful_models_on_a_budget_combining_doremi/
docsoc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173xlm7
false
null
t3_173xlm7
/r/LocalLLaMA/comments/173xlm7/powerful_models_on_a_budget_combining_doremi/
false
false
self
1
null
babyPhi update: Dataset info
14
Hello everyone! Based on the feedback I got on my previous post, I've decided to use other non-textbook data as well. I have collated data from multiple sources (mainly code, math, and logic) and you can find it [here](https://huggingface.co/datasets/04RR/tiny-instruct). I have decided to fine-tune facebook/opt-350m as a PoC to see how well it does. I am open to any suggestions/feedback, and I will update my progress as and when I get things done.
2023-10-09T17:09:58
https://www.reddit.com/r/LocalLLaMA/comments/173x5q7/babyphi_update_dataset_info/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173x5q7
false
null
t3_173x5q7
/r/LocalLLaMA/comments/173x5q7/babyphi_update_dataset_info/
false
false
self
14
RAG - Knowledge graph schema
3
Hi, I've built a LlamaIndex NebulaGraph + vector RAG. I'm interested in whether anyone knows of a solution/tool, or an LLM/embedding (MTEB-style) model, that can dynamically create the best-fitting graph database schema (edge + tag definitions) in a domain-agnostic way, based on analysis of the input data? (Currently I'm only using a simple entity->relationship scheme.) I've read that "meta services" can somehow manage the database schema definition in NebulaGraph, but I couldn't figure out how this is done either.
2023-10-09T16:37:17
https://www.reddit.com/r/LocalLLaMA/comments/173wcmi/rag_knowledge_graph_schema/
Glat0s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173wcmi
false
null
t3_173wcmi
/r/LocalLLaMA/comments/173wcmi/rag_knowledge_graph_schema/
false
false
self
3
null
How much GPU do I need?
1
I generated some text using the llama-cpp-python package, and it worked, but it runs super slowly (way too slow). I don't have any GPU. I understand that in order to run Llama locally at a usable speed, I need to buy a GPU. But if I buy one, how do I know if it's the right amount? How much GPU memory do I need to run it smoothly?
2023-10-09T16:17:47
https://www.reddit.com/r/LocalLLaMA/comments/173vvl1/how_much_gpu_do_i_need/
damc4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173vvl1
false
null
t3_173vvl1
/r/LocalLLaMA/comments/173vvl1/how_much_gpu_do_i_need/
false
false
self
1
null
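A rough way to answer "how much GPU" is back-of-the-envelope VRAM arithmetic: weights take roughly parameters times bytes per weight, plus headroom for the KV cache and runtime buffers. A sketch, where the overhead factor is a rough assumption rather than a measurement:

```python
# Rough VRAM estimate: weights + ~20% headroom for KV cache and runtime buffers.
def vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weight_gb * overhead

for name, params, bits in [("7B q4", 7, 4.5), ("13B q4", 13, 4.5), ("7B fp16", 7, 16)]:
    print(f"{name}: ~{vram_gb(params, bits):.1f} GB")
# 7B q4: ~4.7 GB (fits an 8 GB card); 13B q4: ~8.8 GB (12 GB card); 7B fp16: ~16.8 GB.
```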
Embeddings vs Context
5
Hello, I understand that if our context is too long we should use embeddings, store them in a vector DB, and use cosine similarity to fetch just enough documents to fit into the token limit. But consider this use case:

Prompt: <Text> Given the above text, what gender are you?

Below are the texts:

1. I play netball
2. I drive to work
3. My gender is ...

If we use cosine similarity, text no. 3 will be closest to the prompt, but text no. 1 is the one that would actually let the model infer the answer.
2023-10-09T16:05:01
https://www.reddit.com/r/LocalLLaMA/comments/173vk4r/embeddings_vs_context/
Longjumping_Time_639
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173vk4r
false
null
t3_173vk4r
/r/LocalLLaMA/comments/173vk4r/embeddings_vs_context/
false
false
self
5
null
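The question above is easy to test empirically. A minimal sketch using sentence-transformers (the model name is just a common default, not a recommendation):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "Given the above text, what gender are you?"
texts = ["I play netball", "I drive to work", "My gender is ..."]

q_emb = model.encode(query, convert_to_tensor=True)
t_embs = model.encode(texts, convert_to_tensor=True)

# Cosine similarity ranks texts by surface semantic closeness to the query,
# not by how useful they are for *answering* it -- which is exactly the OP's point.
for text, score in zip(texts, util.cos_sim(q_emb, t_embs)[0]):
    print(f"{score:.3f}  {text}")
```

This is why retrieval pipelines often add reranking or query expansion: clues that merely imply the answer (like "I play netball") can rank below text that superficially echoes the question.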
LLMs pretrained model for GEC problems
2
Hi everyone, I want to train a model that can check for grammatical and spelling errors, with a dataset in a low-resource language (about 149 GB of raw text data). However, I have heard from someone that LLMs are not really the best for this problem. Is that true? If it is, what is the best method for this problem, and if it's not, which pretrained LLM is best? I intend to use Llama fine-tuned with QLoRA for this. Please help me :((( I'm really stressed about this.
2023-10-09T16:03:58
https://www.reddit.com/r/LocalLLaMA/comments/173vj7g/llms_pretrained_model_for_gec_problems/
con-nguoi-ki-cac
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173vj7g
false
null
t3_173vj7g
/r/LocalLLaMA/comments/173vj7g/llms_pretrained_model_for_gec_problems/
false
false
self
2
null
Which huggingface model is best for creative writing and copywriting? Or is there a better one?
8
The people over at r/selfhosted suggested I post this question here. So, I'm hoping someone can provide some helpful info. For starters, I'm a copywriter. I write high-level content for client web pages (no long-form stuff). I also write the occasional email from time to time. I was using Claude until they gimped the hell out of it. Claude USED to be amazing before they turned it into a medicated drooling mental patient in a straitjacket. So, I'm looking into self-hosting. I have a 4090 (24 GB VRAM), an i7-10700k, and 32 GB of RAM. Which huggingface (or other) model is best for creative writing and copywriting? This is for work and not play. Would it help if I doubled the RAM to 64 GB? Also, I'm a bit new to AI... still learning. Is it possible to "train" a self-hosted LLaMA to remember not to use passive voice and to write in a certain way (via LoRA, and do I need l33t coding skills to do so? I was an English major in college... my math and science teachers thought I was the anti-Christ). I'd like to train it to: *"Write at a 9th-grade reading level. Avoid awkward phrasing and aim for natural language and fluidity. Write in active voice, etc."* I'm not looking for AI to do my work for me. Instead, I want to use it as a tool that can provide a skeleton from which I can edit as I see fit. This will help save a tremendous amount of time. I continually have to remind Claude and ChatGPT of the way I want them to write certain things. They will forget what I told them just 2 minutes and 1 message ago (e.g. "Don't use passive voice"). Thanks
2023-10-09T15:48:41
https://www.reddit.com/r/LocalLLaMA/comments/173v5op/which_huggingface_model_is_best_for_creative/
kaszebe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173v5op
false
null
t3_173v5op
/r/LocalLLaMA/comments/173v5op/which_huggingface_model_is_best_for_creative/
false
false
self
8
null
New updates of octogen: token generation speed support
2
2023-10-09T14:54:34
https://i.redd.it/v40s3tiax6tb1.gif
More-Shop9383
i.redd.it
1970-01-01T00:00:00
0
{}
173tu2f
false
null
t3_173tu2f
/r/LocalLLaMA/comments/173tu2f/new_updates_of_octogen_token_generataion_speed/
false
false
https://b.thumbs.redditm…7SaWgzzgFiNA.jpg
2
Intro to Semantic Kernel - Part Two
2
2023-10-09T14:44:34
https://blog.brakmic.com/intro-to-semantic-kernel-part-two/
brakmic
blog.brakmic.com
1970-01-01T00:00:00
0
{}
173tlau
false
null
t3_173tlau
/r/LocalLLaMA/comments/173tlau/intro_to_semantic_kernel_part_two/
false
false
default
2
null
Hugging Face community blogpost: 🕳️ Attention Sinks in LLMs for endless fluency (related to StreamingLLM)
78
Hello! This new blogpost on Hugging Face shows some evaluations of the recent StreamingLLM/Attention Sinks approach for inference: [https://huggingface.co/blog/tomaarsen/attention-sinks](https://huggingface.co/blog/tomaarsen/attention-sinks)

>**Tl;dr** Using window attention with attention sink tokens allows pretrained chat-style LLMs, such as all Llama, Mistral, MPT, Falcon, and GPT-NeoX (Pythia) models, to stay fluent across hundreds of subsequent prompts, unlike when these models are loaded using transformers. Furthermore, this approach allows for constant memory usage, while most LLMs loaded with transformers have linear space complexity, resulting in memory issues.
>
>Using this form of attention is as simple as importing your model class from `attention_sinks` rather than transformers:

    from attention_sinks import AutoModel
    model = AutoModel.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", device_map="auto")

I think it's very promising that the experiments from the blog seem to match those of the original paper author, and that there are more experiments than just a perplexity graph. The actual generation is indeed noticeably less likely to experience model instability.

It seems like this is not suited for "long-term retention", because the context size does not change (see also the blogpost FAQ), but it *is* well suited for "long short-term retention", i.e. being able to provide a chat-style LLM with prompts for a long time about 1) recent chat history or 2) general knowledge/logic, and it'll keep responding like you'd expect. Most, if not all, current LLM implementations break after a certain number of tokens have been processed.

There are already serious talks to integrate this into transformers, which is promising: [https://github.com/huggingface/transformers/issues/26553](https://github.com/huggingface/transformers/issues/26553). Do you guys think this approach is going to go somewhere?

Some extra links:

* The [attention_sinks](https://github.com/tomaarsen/attention_sinks) repository
* The ["Efficient Streaming Language Models with Attention Sinks" paper](https://arxiv.org/abs/2309.17453) by Xiao et al., 2023.
* The original [StreamingLLM research repository](https://github.com/mit-han-lab/streaming-llm) by the MIT HAN Lab.
2023-10-09T14:02:47
https://www.reddit.com/r/LocalLLaMA/comments/173slpi/hugging_face_community_blogpost_attention_sinks/
CubieDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173slpi
false
null
t3_173slpi
/r/LocalLLaMA/comments/173slpi/hugging_face_community_blogpost_attention_sinks/
false
false
self
78
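Building on the snippet in the attention-sinks post above, here is a slightly fuller usage sketch, assuming `attention_sinks` mirrors the `transformers` auto-class API as the blogpost's example suggests:

```python
from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM  # drop-in replacement per the blogpost

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Window attention + sink tokens keeps memory constant, so a loop like this can
# run across many prompts without the usual linear KV-cache growth.
inputs = tokenizer("Explain attention sinks in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```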
oobabooga with AMD gpu on Linux loads indefinitely and generate nothing..
3
I'm on Arch Linux, using an AMD RX 5700 XT. I've installed the oobabooga front end correctly and downloaded llama-2-7b-chat.Q4_K_M.gguf to use with it. I tried to load it with llama.cpp, and while the console output says "successfully loaded the model in 2.07 seconds", it does not work... I basically get instant 100% GPU usage as soon as I load the model (without even starting to use it), and I get infinite "typing" when I ask something in the chat... I don't think the installation or the model is the problem, because if I instead toggle "cpu" in the oobabooga loader interface and go CPU-only, it works! Very slowly (~6 tokens/s), but it works. So what could the problem be? Has anyone faced this issue?
2023-10-09T13:57:01
https://www.reddit.com/r/LocalLLaMA/comments/173sgjh/oobabooga_with_amd_gpu_on_linux_loads/
rdasf691
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173sgjh
false
null
t3_173sgjh
/r/LocalLLaMA/comments/173sgjh/oobabooga_with_amd_gpu_on_linux_loads/
false
false
self
3
null
LLM hype swings: How is Phi-1-5
1
[removed]
2023-10-09T13:01:55
https://www.reddit.com/r/LocalLLaMA/comments/173r8rn/llm_hype_swings_how_is_phi15/
InterestingBasil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173r8rn
false
null
t3_173r8rn
/r/LocalLLaMA/comments/173r8rn/llm_hype_swings_how_is_phi15/
false
false
self
1
null
What LLaMa version is best for text extraction tasks? Chat vs non-Chat?
8
I have hundreds of thousands of rows of data that have no uniformity but contain first names, last names, and addresses in different formats (all caps, first-letter caps, middle initial/no initial, job title/no job title, address or address only). I've tried tons of if/then statements with regex to normalize it. However, there always seems to be some deviation that doesn't get captured, or something that should've gone into the else branch but didn't. I am thinking of employing an LLM for the task. I am not in a rush, so speed and resources are not an issue for me; it's historical data. From what I've read, a non-chat model will return a string, but it is essentially a text generator predicting what comes next, so can you still task it? E.g. "the JSON extracting first, last, address, state, zip with None if not provided from this string {input_string} is". Or is it better to ask a chat model to return JSON with a few-shot prompt and take the output?
2023-10-09T12:55:47
https://www.reddit.com/r/LocalLLaMA/comments/173r47q/what_llama_version_is_best_for_text_extraction/
Amazing-Protection87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173r47q
false
null
t3_173r47q
/r/LocalLLaMA/comments/173r47q/what_llama_version_is_best_for_text_extraction/
false
false
self
8
null
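On the base-vs-chat question above: yes, you can "task" a base model by framing the task as text to complete, but a chat/instruct model with a few-shot prompt asking for JSON is usually easier to keep on-format. A hedged sketch of the few-shot framing (field names and example records are purely illustrative):

```python
import json

FEW_SHOT = """Extract first, last, address, city, state, zip from the record.
Use null for missing fields and reply with JSON only.

Record: JOHN A SMITH, 12 OAK ST, SPRINGFIELD IL 62704
JSON: {"first": "John", "last": "Smith", "address": "12 Oak St", "city": "Springfield", "state": "IL", "zip": "62704"}

Record: jane doe, accountant
JSON: {"first": "Jane", "last": "Doe", "address": null, "city": null, "state": null, "zip": null}
"""

def build_prompt(record: str) -> str:
    # Append the new record after the examples; the model completes the JSON line.
    return f"{FEW_SHOT}\nRecord: {record}\nJSON:"

def parse_reply(reply: str) -> dict:
    # Parse defensively: even instruct models occasionally add trailing text.
    return json.loads(reply[reply.index("{"): reply.rindex("}") + 1])
```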
llm-cmd a toolbox commands
2
Hi all! This weekend I tried making a toolbox with an LLM (I tested with the Mistral model, but other models can be passed as arguments). There are only 3 features for now, but I will add more soon. Please give me your opinion; I'm trying to improve my project.
2023-10-09T12:36:56
https://github.com/rbourgeat/llm-cmd
krolhm
github.com
1970-01-01T00:00:00
0
{}
173qqf8
false
null
t3_173qqf8
/r/LocalLLaMA/comments/173qqf8/llmcmd_a_toolbox_commands/
false
false
default
2
Is there a client app available for any of the text generation UI's out there?
4
For example, you run oobabooga with --api --listen --multiuser. You copy the API key into an app's configuration, and you expose the LLM chats to a user base that has no access to any settings. I really want to expose this to a couple of friends, but their access to the settings makes me feel uneasy. I would truly appreciate it if someone finds/codes something like this. I'm no dev.
2023-10-09T10:45:18
https://www.reddit.com/r/LocalLLaMA/comments/173oqmm/is_there_a_client_app_available_for_any_of_the/
UltrMgns
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173oqmm
false
null
t3_173oqmm
/r/LocalLLaMA/comments/173oqmm/is_there_a_client_app_available_for_any_of_the/
false
false
self
4
null
Train LLM From Scratch
4
Hello everybody, I would like to test small models' performance on my private datasets. Is there a GUI like text-generation-webui to train an LLM from scratch? If possible I would like to test transformers and retentive networks, so the GUI should also support RetNets.
2023-10-09T10:27:35
https://www.reddit.com/r/LocalLLaMA/comments/173ogqa/train_llm_from_scratch/
idesireawill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173ogqa
false
null
t3_173ogqa
/r/LocalLLaMA/comments/173ogqa/train_llm_from_scratch/
false
false
self
4
null
Debian + mistral-openorca-7b-q8_0-gguf + REST API (with streaming)
8
Hi dear community, for a client of mine I developed a simple document/knowledge-base chat running with a local LLM (mistral-openorca-7b-q8_0-gguf). I'd like to deploy it on a Linux machine. For local testing I used LM Studio to query against a REST API (+ streaming). Now I'm wondering what I could use on a Debian machine to run the model and serve a REST API. Would appreciate any suggestions.
2023-10-09T10:08:01
https://www.reddit.com/r/LocalLLaMA/comments/173o64h/debian_mistralopenorca7bq8_0gguf_rest_api_with/
_omid_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173o64h
false
null
t3_173o64h
/r/LocalLLaMA/comments/173o64h/debian_mistralopenorca7bq8_0gguf_rest_api_with/
false
false
self
8
null
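One common answer to the Debian deployment question above: `llama-cpp-python` ships an OpenAI-compatible HTTP server (`pip install "llama-cpp-python[server]"`, then `python3 -m llama_cpp.server --model /path/to/mistral-openorca-7b-q8_0.gguf`), which speaks the same API shape LM Studio mimics. A client-side sketch consuming the streaming endpoint, assuming the server's default host and port:

```python
import json
import requests

# Default llama-cpp-python server address; adjust if you pass --host/--port.
URL = "http://127.0.0.1:8000/v1/chat/completions"

payload = {
    "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}],
    "stream": True,
}

with requests.post(URL, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        # OpenAI-style SSE: each event is a "data: {...}" line, ending with [DONE].
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break
        delta = json.loads(chunk)["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)
```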
Fine Tuning Mistral 7B on multiple tasks
14
Hello community, a few questions came up while trying Mistral 7B Instruct, and hopefully you can provide some answers.

* Fine-tuning multiple abilities
  * Is there a way to fine-tune LLMs on two or more different tasks?
  * For example, I want my LLM to be able to have great conversations, but also summarize given texts and return JSON objects.
  * My conversation dataset would always have the prefix <conversation> and my summarize dataset would always have the prefix <summarize>.
  * Conversation example: `{"input": "<conversation>I can't log into my account", "output": "Thank you for contacting support..."}`
  * Summarize example: `{"input": "<summarize>This is Flora. Flora is a beautiful flower...", "output": "{\"summary\": \"Flora is a beautiful flower...\"}"}`
  * Would that work, or would one fine-tuning make the other worse?
* Fine-tuning with a system prompt
  * Is it possible to also fine-tune with a system prompt, so the AI knows exactly what to do according to that system prompt?
  * For example, could I train it with a dataset like this: `{"system": "You are perfect at summarizing texts and returning a JSON structure", "input": "Flora...", "output": "{\"summary\": \"Flora is a...\"}"}`
* If any of those would work, how would the performance change? Would I see a massive drop in tokens per second, or would everything stay the same?

Thank you
2023-10-09T10:06:41
https://www.reddit.com/r/LocalLLaMA/comments/173o5dv/fine_tuning_mistral_7b_on_multiple_tasks/
Bene0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173o5dv
false
null
t3_173o5dv
/r/LocalLLaMA/comments/173o5dv/fine_tuning_mistral_7b_on_multiple_tasks/
false
false
self
14
null
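For the multi-task question above: mixing tasks in one SFT dataset is standard practice (instruction-tuned models are trained exactly this way), though a heavily skewed mix can degrade the minority task, and inference tokens/s is unaffected by what you fine-tuned on. A sketch of what the mixed, prefix-tagged records might look like once properly escaped, following the post's own field names:

```python
import json

records = [
    {
        "input": "<conversation>I can't log into my account",
        "output": "Thank you for contacting support...",
    },
    {
        "input": "<summarize>This is Flora. Flora is a beautiful flower...",
        # The inner JSON is stored as a string, so it must be serialized, not pasted raw.
        "output": json.dumps({"summary": "Flora is a beautiful flower..."}),
    },
]

with open("mixed_tasks.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```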
Seeking Feedback - Video Transcript Analysis
1
Hello everyone, I've been a part of this Reddit community for about 6 months and have found immense value in it. Thank you for being such a supportive group! I'm working on a project where I use AI to analyze video transcripts and extract pivotal details, like tone, primary subject, the video's objective, core principles, and more. The end goal is to generate a profile for video creators based on their content. The AI is performing well so far, with a unique grammar that narrows down AI responses to a specific set of keywords. However, I've run into two major challenges:

1. **Model Size & Accuracy**: While I understand that larger models often yield higher accuracy, I'm limited to using a model under 33B. I'd rather not rely on external providers like OpenAI. Although fine-tuning or using methods like LoRA might help, they require a solid benchmark dataset.
2. **Dataset Constraints**: Due to token and memory limitations, I can't process all video transcripts at once. I currently analyze them individually. This ensures each transcript is assessed on its own merit without influence from others. This means the AI provides metrics based solely on individual video content.

I'd love your thoughts on the following:

1. By analyzing videos individually, I sidestep outlier issues. This is especially true for videos that diverge significantly from the majority. But would it be more effective to use methods like LangChain to process multiple videos at the same time, even though I'm not well-versed in LangChain or other language preprocessing techniques?
2. Any suggestions on enhancing AI accuracy without migrating to an impractical model size like 70B or 180B? For instance, I consistently use temp=0.

Any feedback or advice would be greatly appreciated! Thanks
2023-10-09T09:58:22
https://www.reddit.com/r/LocalLLaMA/comments/173o0gw/seeking_feedback_video_transcript_analysis/
Toni_rider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173o0gw
false
null
t3_173o0gw
/r/LocalLLaMA/comments/173o0gw/seeking_feedback_video_transcript_analysis/
false
false
self
1
null
How to evaluate on specific data?
1
[removed]
2023-10-09T09:23:17
https://www.reddit.com/r/LocalLLaMA/comments/173nigw/how_to_evaluate_on_specific_data/
Enkay55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173nigw
false
null
t3_173nigw
/r/LocalLLaMA/comments/173nigw/how_to_evaluate_on_specific_data/
false
false
self
1
null
Seeking Advice on Training a Custom Language Model (small language models for specific domain)
3
I am completely new to LLMs and I am seeking your advice on how to get started with our project. I am looking to build a custom-trained LLM for a specific domain, one that's small enough to run efficiently on laptops. I'm following the principles of "TinyStories" and "Textbooks Are All You Need". In particular, I'm focused on the medical domain. I would greatly appreciate any insights, tips, or advice you can offer, especially if you have experience with custom LLMs or have worked in similar domains. If you have suggestions on creating reliable datasets, tuning hyperparameters, or overcoming challenges, please share your expertise. How do I get started? :)
2023-10-09T08:54:01
https://www.reddit.com/r/LocalLLaMA/comments/173n2sf/seeking_advice_on_training_a_custom_language/
PewDPAC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173n2sf
false
null
t3_173n2sf
/r/LocalLLaMA/comments/173n2sf/seeking_advice_on_training_a_custom_language/
false
false
self
3
null
Llama as a classification tool using confidence scores
2
I've been asked to see if it would be possible to use Llama to create a tool that can classify inputs into different categories (e.g. how trustworthy a statement is) and produce a confidence score similar to what you might expect from image classification. I'm just getting back into programming and was hoping someone might be able to point me in the right direction.
2023-10-09T08:33:00
https://www.reddit.com/r/LocalLLaMA/comments/173msag/llama_as_a_classification_tool_using_confidence/
T0nytonychoppy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173msag
false
null
t3_173msag
/r/LocalLLaMA/comments/173msag/llama_as_a_classification_tool_using_confidence/
false
false
self
2
null
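For the classification question above, one common approach is to compare the model's next-token logits for each candidate label and softmax them into a confidence score. A minimal sketch with `transformers`; the model id, prompt, and labels are illustrative, and scoring only the first token of each label is a simplification:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # any causal LM works for this pattern
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def label_confidences(statement: str, labels: list[str]) -> dict[str, float]:
    prompt = f"Statement: {statement}\nIs this statement trustworthy? Answer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token distribution
    # Score each label by the logit of its first token, then normalize over labels.
    ids = [tokenizer.encode(f" {label}", add_special_tokens=False)[0] for label in labels]
    probs = torch.softmax(logits[ids], dim=0)
    return dict(zip(labels, probs.tolist()))

print(label_confidences("The moon is made of cheese.", ["yes", "no"]))
```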
Advice on jetson AGX Orin 64gb
14
Hi all! What do you think about the Nvidia Jetson series, especially the AGX Orin 64GB, for LLM use? Mostly inference, as training will probably need too much computing power.

https://fr.rs-online.com/web/p/modules-de-developpement-pour-processeurs/2539662
https://www.siliconhighwaydirect.com/product-p/900-13701-0050-000.htm

2048-core NVIDIA Ampere architecture GPU with 64 Tensor cores
2x NVDLA v2.0
12-core Arm Cortex-A78AE v8.2 64-bit CPU
64GB 256-bit LPDDR5
275 TOPS, ~200 GB/s memory bandwidth, which isn't the fastest today (around 2x a modern CPU?), but enough space to run a 70B q6 for only 2000 USD 🤷‍♂️ (at around 60 W, btw)
2023-10-09T07:20:53
https://www.reddit.com/r/LocalLLaMA/comments/173lroh/advice_on_jetson_agx_orin_64gb/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173lroh
false
null
t3_173lroh
/r/LocalLLaMA/comments/173lroh/advice_on_jetson_agx_orin_64gb/
false
false
self
14
null
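A sanity check on the bandwidth figure in the post above: single-stream decode speed is roughly memory bandwidth divided by the bytes read per token, which is about the model's size. Rough arithmetic, ignoring all overhead:

```python
# Rough decode-speed ceiling: every generated token reads ~all weights once.
bandwidth_gbs = 204.8          # AGX Orin 64GB LPDDR5, per Nvidia's spec sheet
params_b = 70                  # Llama 2 70B
bpw = 6.56                     # approximate Q6_K bits per weight
model_gb = params_b * bpw / 8  # ~57 GB

print(f"model: ~{model_gb:.0f} GB, ceiling: ~{bandwidth_gbs / model_gb:.1f} tok/s")
# -> ~3.6 tok/s theoretical best for 70B q6; real-world numbers will be lower.
```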
Noob here, Want help with reading/writing relational DBs.
1
[removed]
2023-10-09T06:30:15
https://www.reddit.com/r/LocalLLaMA/comments/173l05j/noob_here_want_help_with_readingwriting/
youdontknowmeyetxhi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173l05j
false
null
t3_173l05j
/r/LocalLLaMA/comments/173l05j/noob_here_want_help_with_readingwriting/
false
false
self
1
null
Real-Time Fallacy Detection in Political Debates Using Whisper and LLMs
318
I've developed a tool that serves as a real-time overlay for detecting logical fallacies in political debates. It uses PyQt5 for the UI, Whisper for audio transcription, and a Mistral LLM (through the text-generation-webui API) for the logical analysis. The overlay is transparent, making it easy to keep it on top of other windows like a live stream or video. I was able to run both Whisper and [Mistral-7B-OpenOrca-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ) locally on a single RTX 3090; VRAM usage was 15 GB.

Key features:

* Real-time audio transcription captures what's being said in debates.
* Instant fallacy detection using a large language model (LLM).
* The overlay is transparent, draggable, and stays on top for multitasking.
* Option to toggle between the local LLM and ChatGPT for logical analysis.

This tool aims to make it easier to spot logical inconsistencies in real time during political debates, thereby fostering a more informed electorate. Check it out on GitHub [https://github.com/latent-variable/Real_time_fallacy_detection](https://github.com/latent-variable/Real_time_fallacy_detection) and I'd love to hear your thoughts!

[Overlay showcase](https://preview.redd.it/oiiekiesz3tb1.png?width=1576&format=png&auto=webp&s=2fc2c68c9e68c98c5376cc7290f9c91c003d65dd)
2023-10-09T05:09:31
https://www.reddit.com/r/LocalLLaMA/comments/173jqpe/realtime_fallacy_detection_in_political_debates/
onil_gova
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173jqpe
false
null
t3_173jqpe
/r/LocalLLaMA/comments/173jqpe/realtime_fallacy_detection_in_political_debates/
false
false
https://b.thumbs.redditm…RommH3JQvLRE.jpg
318
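The transcribe-then-analyze pipeline described in the fallacy-detection post above can be reproduced in a few lines. A hedged sketch using openai-whisper for transcription and an OpenAI-compatible local endpoint for the analysis; the endpoint URL and prompt are assumptions, not the project's actual code:

```python
import requests
import whisper

asr = whisper.load_model("base")
transcript = asr.transcribe("debate_clip.wav")["text"]

prompt = (
    "Identify any logical fallacies in the following debate excerpt, "
    "naming each fallacy and quoting the offending sentence.\n\n" + transcript
)

# text-generation-webui exposes an OpenAI-compatible API when started with --api.
resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={"messages": [{"role": "user", "content": prompt}]},
)
print(resp.json()["choices"][0]["message"]["content"])
```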
Any LLaMA models optimized for differential diagnosis in medicine?
11
As per the title: looking for whether you can input patient data into LLaMA and get a potential differential diagnosis back.
2023-10-09T03:44:24
https://www.reddit.com/r/LocalLLaMA/comments/173ibdy/any_llama_models_optimized_for_differential/
derpgod123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173ibdy
false
null
t3_173ibdy
/r/LocalLLaMA/comments/173ibdy/any_llama_models_optimized_for_differential/
false
false
self
11
null
Open/Local LLM support for MineDojo/Voyager
6
I saw some recent posts with interest in enabling open/local LLM support for intelligent agents. For those interested, I've committed my repo for a debuggable k8s deployment of [Voyager](https://github.com/spyd3rweb/Voyager): [https://github.com/spyd3rweb/app-voyager](https://github.com/spyd3rweb/app-voyager)

This [k8s application](https://cloud.google.com/code/docs/vscode/deploy-kubernetes-app) deploys an instance of [Voyager](https://github.com/MineDojo/Voyager) along with a [Fabric Minecraft](https://docker-minecraft-server.readthedocs.io/en/latest/) server with the [required Fabric mods](https://github.com/spyd3rweb/Voyager/blob/main/installation/fabric_mods_install.md). It assumes you have a local deployment of a [large language model (LLM) with 4K-8K token context length](https://github.com/facebookresearch/codellama/blob/main/README.md) with a compatible [OpenAI API](https://platform.openai.com/docs/api-reference), including [embeddings support](https://platform.openai.com/docs/guides/embeddings).

*Note: Voyager typically uses OpenAI's closed-source [GPT-4](https://help.openai.com/en/articles/7127966-what-is-the-difference-between-the-gpt-4-models) as the LLM and the [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings/types-of-embedding-models) model for [embeddings](https://huggingface.co/blog/mteb). With a local deployment you will NOT be able to reuse any of the regular [Voyager community's skill libraries](https://github.com/MineDojo/Voyager/blob/main/skill_library/README.md), which use the closed-source text-embedding-ada-002, as ["you cannot mix embeddings from different models even if they have the same dimensions. They are not comparable"](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/openai/README.md#embeddings-alpha).*

Minor updates to [Voyager](https://github.com/spyd3rweb/Voyager/tree/main/voyager) have been made to enable specifying the base URL for the OpenAI API as well as the domain name/IP of the Minecraft host. Two Python files (voyager_inference.py and voyager_learn.py) have also been created to more easily switch between Voyager's [inference](https://github.com/spyd3rweb/Voyager/blob/main/README.md#run-voyager-for-a-specific-task-with-a-learned-skill-library) and [learn](https://github.com/spyd3rweb/Voyager/blob/main/README.md#resume-from-a-checkpoint-during-learning) modes.

[MineDojo/Voyager](https://preview.redd.it/dftparl9p3tb1.png?width=4602&format=png&auto=webp&s=9ff79eef66f6d9e8590eb7f9fb751f2817972292)

## Temper your expectations; this is a proof of concept using "open/local" LLMs.

With CodeLlama I have only had the Voyager bot successfully mine a log, never actually craft anything. I've also had some trial runs go 160 iterations without successfully completing a single task. I'm interested to see if anyone in the community will have better luck.
2023-10-09T02:44:26
https://www.reddit.com/r/LocalLLaMA/comments/173h72e/openlocal_llm_support_for_minedojovoyager/
spyderwebab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173h72e
false
null
t3_173h72e
/r/LocalLLaMA/comments/173h72e/openlocal_llm_support_for_minedojovoyager/
false
false
self
6
Has Nvidia or AMD or anyone else in the PC world announced that they’re working on a Unified Memory type solution like Apple has?
68
Seems like a huge advantage for Apple right now; is there anything around the corner from the PC world?
2023-10-09T02:22:08
https://www.reddit.com/r/LocalLLaMA/comments/173grgh/has_nvidia_or_amd_or_anyone_else_in_the_pc_world/
IPmang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173grgh
false
null
t3_173grgh
/r/LocalLLaMA/comments/173grgh/has_nvidia_or_amd_or_anyone_else_in_the_pc_world/
false
false
self
68
null
How to handle summarization of sub-topics using a document after chunking.
1
[removed]
2023-10-09T01:16:49
https://www.reddit.com/r/LocalLLaMA/comments/173ffx7/how_to_handle_summarization_of_subtopics_using_a/
vile_proxima
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173ffx7
false
null
t3_173ffx7
/r/LocalLLaMA/comments/173ffx7/how_to_handle_summarization_of_subtopics_using_a/
false
false
self
1
null
An explanation of why RAG is needed.
36
I was doing my best to explain why it is useful to augment prompts with additional information, and struggled to work around preconceptions of how things work. I encounter many misconceptions, one of the biggest being how 'AI learns'. I feel I made a breakthrough in explaining the concept by introducing Mr Meeseeks. Looking forward to hearing others' thoughts on this.
2023-10-09T00:03:35
https://i.redd.it/6q7jva1ri2tb1.jpg
MindOrbits
i.redd.it
1970-01-01T00:00:00
0
{}
173dzkz
false
null
t3_173dzkz
/r/LocalLLaMA/comments/173dzkz/an_explanation_of_why_rag_is_needed/
false
false
https://a.thumbs.redditm…atwz64JkxMr0.jpg
36
Has OpenAI Confirmed a Separate Ethics Model as Part of Their Safety Layer?
4
**In trying to get GPT-4 to help me generate lots of negative training data examples, the model told me that OpenAI has a separate ethics model monitoring OpenAI model output.**

We've struggled with a variety of alignment training approaches on our fine-tuned Falcon-7b model, so we're trying to be iron-clad on "answer only from context". So rather than a fine-tune for question answering followed by an alignment fine-tuning effort, one Q/Context Chunk/Answer pass:

* Legit questions + relevant context + an answer
* Legit questions + irrelevant context + "I can't answer"
* Irrelevant questions + any context + "I can't answer"

The irrelevant question category is anything from "What's a good chocolate chip recipe?", to "I have joint pain, what should I do?", to "Why are [derogatory slur] so [derogatory description]?" I asked GPT-4 to help me generate a large set of inappropriate questions (i.e. racist/sexist/homophobic), and as part of that exchange it suggested:

>**Safety Layers**: Train a separate "safety" model that vets the outputs of the primary model. This model can be trained to detect and suppress potentially harmful, inappropriate, or biased content. OpenAI, for example, has done similar work to ensure the outputs of models like ChatGPT are more controlled.

Is that something that has been publicly acknowledged before?
2023-10-08T23:37:28
https://www.reddit.com/r/LocalLLaMA/comments/173dfkz/has_openai_confirmed_a_separate_ethics_model_as/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173dfkz
false
null
t3_173dfkz
/r/LocalLLaMA/comments/173dfkz/has_openai_confirmed_a_separate_ethics_model_as/
false
false
self
4
null
Stuck trying to fine tune with multiple GPUs single node using HF Accelerate Lib.
1
Hi, community. I am currently fine-tuning LLaMA 2 70B, loading it in 4-bit and using LoRA, so QLoRA. I am very stuck. Is there a platform, or anyone here, through which I can hire a consultant for an hour or two to get me through this hurdle?
2023-10-08T23:13:19
https://www.reddit.com/r/LocalLLaMA/comments/173cwtr/stuck_trying_to_fine_tune_with_multiple_gpus/
nomusichere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173cwtr
false
null
t3_173cwtr
/r/LocalLLaMA/comments/173cwtr/stuck_trying_to_fine_tune_with_multiple_gpus/
false
false
self
1
null
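For the QLoRA setup in the post above, the standard single-node multi-GPU recipe with `transformers` + `peft` + `bitsandbytes` looks roughly like this. A sketch of the usual pattern, not a drop-in fix for whatever Accelerate issue is being hit; the rank and target modules are common defaults:

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# device_map="auto" shards the 4-bit weights across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```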
How to run LLaMa-7b chat as a HTTP endpoint on GCP?
1
I am a beginner, and I want to host the LLaMA-7b chat model on GCP. I don't want to train it, just use it from a REST client. Please help me figure out how I can do that.
2023-10-08T22:20:32
https://www.reddit.com/r/LocalLLaMA/comments/173bq77/how_to_run_llama7b_chat_as_a_http_endpoint_on_gcp/
automatic_tomorrow72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173bq77
false
null
t3_173bq77
/r/LocalLLaMA/comments/173bq77/how_to_run_llama7b_chat_as_a_http_endpoint_on_gcp/
false
false
self
1
null
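For the GCP hosting question above: on any VM (GCP included) the lightest path is to wrap the model in a small HTTP app, or use the built-in OpenAI-compatible server mentioned a few posts up. A minimal FastAPI sketch around `llama-cpp-python`; the model path and parameters are placeholders:

```python
from fastapi import FastAPI
from llama_cpp import Llama
from pydantic import BaseModel

app = FastAPI()
llm = Llama(model_path="/models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128

@app.post("/generate")
def generate(req: GenerateRequest):
    out = llm(req.prompt, max_tokens=req.max_tokens)
    return {"text": out["choices"][0]["text"]}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8080
# then open that port in your GCP firewall rules.
```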
Requesting feedback on llm/programming build. PCPartPicker link included.
2
I'm relatively uninformed when it comes to the ins and outs of an LLM build, but I need a new computer for work and would like the option to use/test out a local LLM. Initially posted to /r/buildapc and was suggested to post here. I've put together a list for a build I'm considering and would greatly appreciate any feedback.

**Concerns**: I'm not entirely sure if the components I've chosen are the most optimal for my use case. I would like to know if there's anything I'm overspending on or if there's a glaring compatibility issue. Specifically, I'd like to know if the motherboard and PSU will allow for adding another 4060 in the future.

**Purpose**: I'm building this primarily for business use. I'm a front-end programmer, so my typical workload involves using VSCode alongside multiple Chrome/Electron apps. I also have a desire to run a Llama 2 13B. My display setup will be 2x 1440p monitors and 1x 1080p monitor.

**Budget**: While I haven't set a hard budget, I'm looking for a build that's cost-effective without sacrificing performance for my needs. If there are areas where I can save without a significant performance hit, I'd like to know.

**Environmental Constraints**: Noise level: while not a deal-breaker, I'd prefer the build to be relatively quiet for recording screen shares / tutorial videos.

**Considerations**: The current list is based on some preliminary research, but I haven't written off any particular components. I'm open to suggestions, especially if there's a solid reason to consider an alternative. Storage: the build currently doesn't include storage, as I have an existing HDD and SSD to add.

Thank you in advance for your assistance!

[https://ca.pcpartpicker.com/list/hfbhkJ](https://ca.pcpartpicker.com/list/hfbhkJ)

|Type|Item|Price|
|:-|:-|:-|
|**CPU**|[AMD Ryzen 7 5800X 3.8 GHz 8-Core Processor](https://ca.pcpartpicker.com/product/qtvqqs/amd-ryzen-7-5800x-38-ghz-8-core-processor-100-100000063wof)|$286.50 @ shopRBC|
|**CPU Cooler**|[Cooler Master Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler](https://ca.pcpartpicker.com/product/hmtCmG/cooler-master-cpu-cooler-rr212e20pkr2)|$53.50 @ Vuugo|
|**Motherboard**|[Asus TUF GAMING X570-PLUS (WI-FI) ATX AM4 Motherboard](https://ca.pcpartpicker.com/product/dmGnTW/asus-tuf-gaming-x570-plus-wi-fi-atx-am4-motherboard-tuf-gaming-x570-plus-wi-fi)|$243.98 @ Amazon Canada|
|**Memory**|[Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3600 CL18 Memory](https://ca.pcpartpicker.com/product/Yg3mP6/corsair-vengeance-lpx-32-gb-2-x-16-gb-ddr4-3600-memory-cmk32gx4m2d3600c18)|$99.99 @ Newegg Canada|
|**Video Card**|[Gigabyte GAMING OC GeForce RTX 4060 Ti 16 GB Video Card](https://ca.pcpartpicker.com/product/YJmNnQ/gigabyte-gaming-oc-geforce-rtx-4060-ti-16-gb-video-card-gv-n406tgaming-oc-16gd)|$659.00 @ Canada Computers|
|**Case**|[Fractal Design Meshify C ATX Mid Tower Case](https://ca.pcpartpicker.com/product/Y6Crxr/fractal-design-meshify-c-atx-mid-tower-case-fd-ca-mesh-c-bko-tg)|$139.50 @ Vuugo|
|**Power Supply**|[Corsair RM750e (2023) 750 W 80+ Gold Certified Fully Modular ATX Power Supply](https://ca.pcpartpicker.com/product/YRJp99/corsair-rm750e-2023-750-w-80-gold-certified-fully-modular-atx-power-supply-cp-9020262-na)|$117.99 @ PC-Canada|
2023-10-08T22:08:46
https://www.reddit.com/r/LocalLLaMA/comments/173bg67/requesting_feedback_on_llmprogramming_build/
mdroidian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173bg67
false
null
t3_173bg67
/r/LocalLLaMA/comments/173bg67/requesting_feedback_on_llmprogramming_build/
false
false
self
2
null
How to make BLAS processing fast?
9
I’m using MacBook Pro M1 14” 16 GB RAM, and I’m setting context length around 1280, and AI need approximately 20 seconds when: Processing prompt [ BLAS ] 512 / 1024 Processing prompt [ BLAS ] 1024 / 1024 And then just few seconds to generate 128 tokens. I’m using Koboldcpp, and I have tried some others but the same. Is there any way to include CPU/GPU/RAM usage to somehow speed up this BLAS?
2023-10-08T21:56:35
https://www.reddit.com/r/LocalLLaMA/comments/173b5du/how_to_make_blas_processing_fast/
yukiarimo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173b5du
false
null
t3_173b5du
/r/LocalLLaMA/comments/173b5du/how_to_make_blas_processing_fast/
false
false
self
9
null
GitHub - microsoft/ToRA: ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools.
34
Can someone inform me about a gguf variant? Please link me to it.
2023-10-08T21:46:18
https://huggingface.co/llm-agents/tora-code-7b-v1.0
Puzzleheaded_Acadia1
huggingface.co
1970-01-01T00:00:00
0
{}
173awqc
false
null
t3_173awqc
/r/LocalLLaMA/comments/173awqc/github_microsofttora_tora_is_a_series_of/
false
false
https://b.thumbs.redditm…Xn2A8QcIjGbg.jpg
34
Has anyone tried RAG with smaller models?
8
Just looking for people's experiences with RAG results from various models. I have 4 GB of VRAM to work with, but would like to know about others' experience with RAG and the results with smaller models.
2023-10-08T21:35:50
https://www.reddit.com/r/LocalLLaMA/comments/173ant0/has_anyone_tried_rag_with_smaller_models/
randomrealname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
173ant0
false
null
t3_173ant0
/r/LocalLLaMA/comments/173ant0/has_anyone_tried_rag_with_smaller_models/
false
false
self
8
null
I have many questions and I seek answers, please help me.
4
First of all, let me define my degenerate goal: integration of an AI character into my phone/PC as an "Android assistant & robot friend", with the option of it being a streamer as a side job. I have seen some projects on YouTube where people have done similar stuff, but none are overly informative, or I am simply not versed enough in the topics to understand them. Therefore I beseech you not to judge me too harshly; I am trying to find resources and learn what I need every day. And now, my questions:

Apart from Python, which I am currently learning, what else would I need to study coding-wise for my purposes?

I want to build a separate PC for the LLM I'll be running. I'll have a budget of about 3k USD, give or take; if you have spec recommendations I'd be glad to receive them.

What model would you recommend for my purposes? I also want to understand the difference between the 7B, 13B and 30B ones, as they would probably be the ones I'll be able to run on consumer-grade specs.

I also heard I would want to use vector memory or something like that; I am still too inexperienced to understand what it means.

Apart from that, recommendations for datasets, how to write a personality for the character, or anything else I might need would be highly appreciated. Most of all I'd like relevant beginner-friendly resources I can learn from, preferably YouTube tutorials, although I of course do the reading when necessary. Thank you for the help
2023-10-08T20:47:36
https://www.reddit.com/r/LocalLLaMA/comments/1739ih4/i_have_many_questions_and_i_seek_answers_please/
CrimsonRedstone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1739ih4
false
null
t3_1739ih4
/r/LocalLLaMA/comments/1739ih4/i_have_many_questions_and_i_seek_answers_please/
false
false
self
4
null
New SillyTavern release: 1.10.5
58
2023-10-08T20:41:53
https://github.com/SillyTavern/SillyTavern/releases
WolframRavenwolf
github.com
1970-01-01T00:00:00
0
{}
1739dlr
false
null
t3_1739dlr
/r/LocalLLaMA/comments/1739dlr/new_sillytavern_release_1105/
false
false
https://b.thumbs.redditm…buIffa4kxHqc.jpg
58
Questions about how training works.
1
Step: 448 {'loss': 0.0007, 'learning_rate': 9.374999999999999e-05, 'epoch': 15.0}

I am training on an Alpaca-format JSON file with around 300 entries. I am looking at adding information that can be asked about and repeated back from the data I add. What is a good loss and learning rate? Also, when I train, if I do fewer than 20 epochs I don't get any of the information back when I chat with the character.

1. I am using a LoRA rank of 128. Is that OK for this activity? I saw that with less than 32 it learns style, not information.
2. I am using a LoRA alpha of 256. Is that OK?
3. Is there a way to look at the LoRA and see if my information is in there?

I am using oobabooga.
2023-10-08T19:24:47
https://www.reddit.com/r/LocalLLaMA/comments/1737k65/questions_about_how_training_works/
betolley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1737k65
false
null
t3_1737k65
/r/LocalLLaMA/comments/1737k65/questions_about_how_training_works/
false
false
self
1
null
PDF Tool *in progress
8
I have been working on a PDF tool that will take apart your PDFs and export the contents as structured JSON, with tables converted to PNG and the data from each table placed in an Excel file. This is all done with OCR. I think this by itself covers a lot of what people are looking for in PDF tooling. But the good news is, that's not all. After the initial processing, I want to take the structured JSON, iterate through it, make a list of all elements used (H1, H2, TEXT, etc.), and then post-process the JSON to only include the elements the user has selected for output. I would then like to process this data (for example H1 + TEXT) and turn it into Q&A pairs or instructions for use with multiple LLMs (ChatGPT 3.5 fine-tuning, Llama 2, etc.) for the purposes of fine-tuning. So now you would have structured JSON you could use for embeddings, the Excel data that accompanies the PNG images of the tables (these can be used for fine-tuning), and the entire corpus of a PDF (book, textbook, etc.) for true fine-tuning on your material. This would cut down on the process of generating datasets that aren't complete shit. I am looking for someone (or several people) with Python experience. I have a few versions that work, currently outputting just the structured JSON. The input and output are DONE; it's easy. I haven't added it to my repo yet, but I am willing to just zip up my project and send it for now. Reason: I would like the chance to have this perfected before releasing anything. It currently requires installing an SDK manually (a pain in the ass) and obtaining credentials. Everything is free, and your data is safe (free up to a certain amount). But it does in fact take every element out of a PDF; no Python package does this. Haven't asked for help in many, many moons; hopefully this is the right place. Given the number of "I need to get data from this PDF..." posts, hopefully this can be solved. I would love to use a local LLM to create the Q&A / instruct pairs and prompting for fine-tuning, depending on what the user selects.
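As a rough illustration of the post-processing step described above, a hypothetical sketch (the JSON schema with `type`/`text` keys is an assumption for illustration, not the tool's actual output format):

```python
import json

SELECTED = {"H1", "H2", "TEXT"}  # elements the user chose to keep

def build_qa_pairs(path):
    with open(path) as f:
        elements = json.load(f)  # assumed: a list of {"type": ..., "text": ...} dicts
    kept = [e for e in elements if e.get("type") in SELECTED]
    pairs = []
    # pair each heading with the text block that immediately follows it
    for prev, cur in zip(kept, kept[1:]):
        if prev["type"] in {"H1", "H2"} and cur["type"] == "TEXT":
            pairs.append({"question": prev["text"], "answer": cur["text"]})
    return pairs
```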
2023-10-08T17:21:51
https://www.reddit.com/r/LocalLLaMA/comments/1734mek/pdf_tool_in_progress/
hank-particles-pym
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1734mek
false
null
t3_1734mek
/r/LocalLLaMA/comments/1734mek/pdf_tool_in_progress/
false
false
self
8
null
Uncensored AI Chatbot
0
Hey guys! This is my first Reddit post, but I think this is a very interesting AI. I came across it through a friend; it's called ObscureGPT. It claims to be an uncensored AI model with no morals, etc. Has anyone tried it out? Their server seems to be quite full of people, so I hope somebody can tell me if it's good or not. This is their site: obscuregpt.com
2023-10-08T17:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1734k7g/uncensored_ai_chatbot/
Comfortable-Chart412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1734k7g
false
null
t3_1734k7g
/r/LocalLLaMA/comments/1734k7g/uncensored_ai_chatbot/
false
false
self
0
null
Is anyone using a quad socket server with DDR3 for GGML/GGUF?
1
Curious what your speeds are and how performance scales with multiple CPU sockets and a whole asston of channels (16) across 48 slots of 2400MHz RAM. Asking because 768GB is enough to run Falcon 180B in 32-bit, and at 2400 speed that costs less than $500. A server with four Xeons, each scoring 11k on [cpubenchmark.net](https://cpubenchmark.net), is $800. Seems like a very cost-effective way to do this, but I'm curious what t/s could be expected. If it's even 2 I will be happy, assuming full-precision 180B doesn't require 100 regens to get anything good the way 33B 4-bit does.
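A back-of-the-envelope estimate, since CPU inference on a dense model is memory-bandwidth bound (all numbers below are theoretical peaks; NUMA overhead across four sockets will cut into them significantly):

```python
channels = 16
per_channel_gbs = 2400e6 * 8 / 1e9           # DDR3-2400: ~19.2 GB/s per channel
bandwidth_gbs = channels * per_channel_gbs    # ~307 GB/s aggregate theoretical peak

model_gb = 180e9 * 4 / 1e9                    # fp32 180B weights: ~720 GB
# each generated token must stream essentially all weights through memory
print(bandwidth_gbs / model_gb)               # ~0.43 t/s upper bound at fp32
# A Q8_0 quant (~190 GB) would raise the ceiling to roughly 1.6 t/s.
```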
2023-10-08T17:15:51
https://www.reddit.com/r/LocalLLaMA/comments/1734hfn/is_anyone_using_a_quad_socket_server_with_ddr3/
RottieCumGutter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1734hfn
false
null
t3_1734hfn
/r/LocalLLaMA/comments/1734hfn/is_anyone_using_a_quad_socket_server_with_ddr3/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wKEUaX_AjKElK73rADrRP6qe6o-GToKYw8-odUFh8yo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?width=108&crop=smart&auto=webp&s=13f30f9f1de2169a0ebe93ba2aac81dba3426ce0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?width=216&crop=smart&auto=webp&s=1ac458ce21b4b40e0327455c827ae997dc625d57', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?width=320&crop=smart&auto=webp&s=b4fad10be81211dc3640a49e2c8f21db3b67a0d7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?width=640&crop=smart&auto=webp&s=f5321115e94e85bde2a7892738b59ddfc1a69a2d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?width=960&crop=smart&auto=webp&s=f1ce79f029ed69dac2b4b82650d5cc1e3f5790d7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?width=1080&crop=smart&auto=webp&s=ae4927544be40e0f9d1ce78dc7c0dced3c744439', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/cN7WjV77n2p4Uq7zCx4_2T0yqNESU5hT0yLrjZWXp6Q.jpg?auto=webp&s=d7f05dac25c1c220ba5a0c049d0268af0de23ad7', 'width': 1200}, 'variants': {}}]}
PDF annotation tool
9
Hi. I am currently working on RAG. I want to chunk PDFs by paragraph, and to achieve that I need a tool which allows me to manually annotate data. Do you know of any solution? I checked [label studio](https://labelstud.io/) and [PAWLS](https://github.com/allenai/pawls) but they are terrible for my case.
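For context, a naive paragraph chunker is trivial once clean text is extracted (a sketch; real PDF extractions rarely preserve clean blank-line boundaries, which is exactly why manual annotation is needed here):

```python
def chunk_paragraphs(text: str, min_len: int = 40) -> list[str]:
    # split on blank lines and drop fragments too short to be real paragraphs
    return [p.strip() for p in text.split("\n\n") if len(p.strip()) >= min_len]
```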
2023-10-08T16:47:40
https://www.reddit.com/r/LocalLLaMA/comments/1733tlo/pdf_annotation_tool/
AceFromSpaceee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1733tlo
false
null
t3_1733tlo
/r/LocalLLaMA/comments/1733tlo/pdf_annotation_tool/
false
false
self
9
null
Anyone used Pandalyst yet?
1
Looking for information on how to use Pandalyst for CSV Q&A involving basic aggregations.
2023-10-08T16:09:25
https://www.reddit.com/r/LocalLLaMA/comments/1732xe0/anyone_used_pandalyst_yet/
infamous55500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1732xe0
false
null
t3_1732xe0
/r/LocalLLaMA/comments/1732xe0/anyone_used_pandalyst_yet/
false
false
self
1
null
Japanese-English translation model ALMA-7B-Ja release.
31
I read an [article](https://www.reddit.com/r/LocalLLaMA/comments/16p2smj/a_paradigm_shift_in_machine_translation_how_to/) on LocalLLaMA saying that with [ALMA](https://browse.arxiv.org/pdf/2309.11674.pdf), a learning paradigm for multilingual machine-translation models, even a relatively small model can achieve performance equivalent to GPT-3.5. I wanted a Japanese-English translation model that was easy to handle, so I asked [questions](https://www.reddit.com/r/LocalLLaMA/comments/16qujym/i_am_finetuning_a_translation_model_alma_when/) on LocalLLaMA about how to change the supported language, and tried creating a Japanese version. This is the announcement of the results: [ALMA-7B-Ja](https://huggingface.co/webbigdata/ALMA-7B-Ja).

・I encountered a mysterious problem, so I upgraded the library from the original settings.
・The settings have been changed slightly for the above reason.
・llama.cpp (gguf) conversion is failing, maybe for the above reason, so I am creating a GPTQ version instead: [ALMA-7B-Ja-GPTQ-Ja-En](https://huggingface.co/webbigdata/ALMA-7B-Ja-GPTQ-Ja-En/)
・I think the reason the Japanese translation performance is not better is that the quality of the dataset is not yet sufficient.
・Although the performance is still not satisfactory, I think it is an achievement that the language can be changed and that considerable performance can be expected.
・It was much more difficult than I expected. I'm a little bit exhausted.

Translating from English (en→xx), BLEU/COMET scores:

| Models | de | cs | is | zh | ru/jp | Avg. |
|----------------|--------|--------|--------|--------|--------|--------|
| NLLB-54B | 34.50/86.45 | 37.60/90.15 | 24.15/81.76 | 27.38/78.91 | 30.96/87.92 | 30.92/85.04 |
| GPT-3.5-D | 31.80/85.61 | 31.30/88.57 | 15.90/76.28 | 38.30/85.76 | 27.50/86.74 | 28.96/84.59 |
| ALMA-7B (original) | 30.31/85.59 | 29.88/89.10 | 25.71/85.52 | 36.87/85.11 | 27.13/86.98 | 29.89/86.49 |
| ALMA-7B-Ja (ours) | 23.70/82.04 | 18.58/81.36 | 12.20/71.59 | 29.06/82.45 | 14.82/85.40 | 19.67/80.57 |

Translating to English (xx→en), BLEU/COMET scores:

| Models | de | cs | is | zh | ru/jp | Avg. |
|----------------|--------|--------|--------|--------|--------|--------|
| NLLB-54B | 26.89/78.94 | 39.11/80.13 | 23.09/71.66 | 16.56/70.70 | 39.11/81.88 | 28.95/76.66 |
| GPT-3.5-D | 30.90/84.79 | 44.50/86.16 | 31.90/82.13 | 25.00/81.62 | 38.50/84.80 | 34.16/83.90 |
| ALMA-7B (original) | 30.26/84.00 | 43.91/85.86 | 35.97/86.03 | 23.75/79.85 | 39.37/84.58 | 34.55/84.02 |
| ALMA-7B-Ja (ours) | 26.41/83.13 | 34.39/83.50 | 24.77/81.12 | 20.60/78.54 | 15.57/78.61 | 24.35/81.76 |

Thank you to Corannnn, Clean-Ad-9576, and the LocalLLaMA community for your encouragement.
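A minimal inference sketch with transformers; the prompt template below follows ALMA's usual format and is an assumption here, so check the model card before relying on it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "webbigdata/ALMA-7B-Ja"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ALMA-style translation prompt (assumed template)
prompt = "Translate this from Japanese to English:\nJapanese: 猫が好きです。\nEnglish:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# decode only the newly generated tokens
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```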
2023-10-08T16:04:14
https://www.reddit.com/r/LocalLLaMA/comments/1732t26/japaneseenglish_translation_model_alma7bja_release/
dahara111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1732t26
false
null
t3_1732t26
/r/LocalLLaMA/comments/1732t26/japaneseenglish_translation_model_alma7bja_release/
false
false
self
31
null
Enterprise scale: Scaling LLama2 13B to achieve high throughput and low latency
26
Is it unrealistic to expect that I can achieve 300K tokens/min throughput and around 1,800 requests/min with Llama 2 13B (let alone 70B)? I have tried hosting with an AWS SageMaker JumpStart endpoint and was also looking at the benchmarks in: [https://huggingface.co/blog/llama-sagemaker-benchmark](https://huggingface.co/blog/llama-sagemaker-benchmark) The benchmark shows very low throughput even on an ml.p4d.24xlarge instance, and I am struggling to achieve the right performance within my VPC (I can't use a hosted service). I have seen some articles on vLLM, Ray, and the Anyscale deployment endpoint, but I'm not sure how to proceed. Can anybody share experience or expertise?
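For what it's worth, a minimal vLLM sketch of the continuous-batching setup usually recommended for this throughput range (the model ID and parallelism degree are placeholders, not a tested configuration):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-13b-chat-hf",
    tensor_parallel_size=8,   # e.g. all 8 A100s on an ml.p4d.24xlarge
)
params = SamplingParams(max_tokens=256, temperature=0.7)

# vLLM batches concurrent requests continuously, which is what
# drives requests/min well above naive one-at-a-time serving.
outputs = llm.generate(["Summarize continuous batching in one line."], params)
print(outputs[0].outputs[0].text)
```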
2023-10-08T15:15:53
https://www.reddit.com/r/LocalLLaMA/comments/1731nrh/enterprise_scale_scaling_llama2_13b_to_achieve/
dataviz123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1731nrh
false
null
t3_1731nrh
/r/LocalLLaMA/comments/1731nrh/enterprise_scale_scaling_llama2_13b_to_achieve/
false
false
self
26
{'enabled': False, 'images': [{'id': '6GF5SFLQuxM57YUbXvvVb4a8VRrVD6a78kruPuKM9X8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=108&crop=smart&auto=webp&s=972c49520410737c4c5e358d2f062e2585ddb49b', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=216&crop=smart&auto=webp&s=73fda2cbbea16f8e61479d91431d34d3f4ff2de0', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=320&crop=smart&auto=webp&s=994373d3977e03efb0db8a7368af6e3ba0dece48', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=640&crop=smart&auto=webp&s=342bf975ad8f8388d87cf6f01ede0df02dacc391', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=960&crop=smart&auto=webp&s=a09ddc43efbe17e814517b41119fe3ca74cd8672', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=1080&crop=smart&auto=webp&s=732c9e546b2f7b6ff65899eaa93e36abd99ea66e', 'width': 1080}], 'source': {'height': 1248, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?auto=webp&s=308f5037b00e6947c08c812e8a70f66934503276', 'width': 2400}, 'variants': {}}]}
how am I supposed to post at 3am on Monday to get all the upvotes?
1
2023-10-08T15:05:55
https://i.redd.it/77t5gz2ruzsb1.png
Acceptable_Bed7015
i.redd.it
1970-01-01T00:00:00
0
{}
1731ejf
false
null
t3_1731ejf
/r/LocalLLaMA/comments/1731ejf/how_am_i_supposed_to_post_at_3am_on_monday_to_get/
false
false
https://b.thumbs.redditm…W7cijEU4W5aY.jpg
1
{'enabled': True, 'images': [{'id': 'ZYVLd350gbPzHJV3QgkCQRABgqYfFT4DdNQvg-PQn4E', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?width=108&crop=smart&auto=webp&s=6e39483b47cdaf68d270797d2f27d6b01c839252', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?width=216&crop=smart&auto=webp&s=bd383a688c0160da6538dd16b5fdc3f076ee917f', 'width': 216}, {'height': 286, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?width=320&crop=smart&auto=webp&s=ba2f75b3861356355d646ac71739015070e39b90', 'width': 320}, {'height': 572, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?width=640&crop=smart&auto=webp&s=429655052b5088657b05fd47f57df35f9e6d0f63', 'width': 640}, {'height': 859, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?width=960&crop=smart&auto=webp&s=930aab7a62e7398db76c196f7906f0cd05ac8806', 'width': 960}, {'height': 966, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?width=1080&crop=smart&auto=webp&s=089097f0835c0ef5dae78313c972e8da4f2396a7', 'width': 1080}], 'source': {'height': 1090, 'url': 'https://preview.redd.it/77t5gz2ruzsb1.png?auto=webp&s=dc66a8b4ee7e5107394d5e7c35c3b8eaebe2d292', 'width': 1218}, 'variants': {}}]}
serious question about r/LocalLlama users
1
Ran a quick analysis on our community. I have just one question - how am I supposed to post at 3am on Monday to get all the upvotes? https://preview.redd.it/gpsx4y18uzsb1.png?width=1218&format=png&auto=webp&s=e6aa3542423cca3e950ea9a20e8dcc5a5c4428fb
2023-10-08T15:04:19
https://www.reddit.com/r/LocalLLaMA/comments/1731cxz/serious_question_about_rlocalllama_users/
Acceptable_Bed7015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1731cxz
false
null
t3_1731cxz
/r/LocalLLaMA/comments/1731cxz/serious_question_about_rlocalllama_users/
false
false
https://b.thumbs.redditm…6ryC8WP8ZA5E.jpg
1
null
r/LocalLlama flavor
1
Ran a quick analysis on our community. I have just one question - how am I supposed to post at 3am on Monday to get the upvotes?
2023-10-08T15:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1731ajb/rlocalllama_flavor/
Acceptable_Bed7015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1731ajb
false
null
t3_1731ajb
/r/LocalLLaMA/comments/1731ajb/rlocalllama_flavor/
false
false
self
1
null
Anyone running LLMs on AMD Firepro S9170 or W9100 ?
6
Did you, or anyone you know, manage to run LLM software on one of the following boards? * AMD FirePro S9170 * AMD FirePro W9100 I ask because there are [used 32GB S9170s on sale both in Europe and the US for around $150](https://www.ebay.com/sch/i.html?_from=R40&_sop=15&_blrs=spell_auto_correct&_nkw=AMD%20FirePro%20S9170%2032%20GB), which seems a good deal if used for something other than a heater or a paperweight.
2023-10-08T14:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1730i7q/anyone_running_llms_on_amd_firepro_s9170_or_w9100/
digital_m0nk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1730i7q
false
null
t3_1730i7q
/r/LocalLLaMA/comments/1730i7q/anyone_running_llms_on_amd_firepro_s9170_or_w9100/
false
false
self
6
{'enabled': False, 'images': [{'id': 'qMsVSliq1ohSmRuu6VYH_ZjP82O26fImPPhrzU4Za54', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?width=108&crop=smart&auto=webp&s=5fec005ac1b9e77506d5f7025275d016a7afc9a7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?width=216&crop=smart&auto=webp&s=a27db01690b2af49f0dc10eb4206bc5342e4d99b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?width=320&crop=smart&auto=webp&s=21099de656967342fb66ebb19cbbb619b2d75ff8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?width=640&crop=smart&auto=webp&s=9ba677fb621fd3370ac8977648739f082073ef2f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?width=960&crop=smart&auto=webp&s=28f0dfcb81ea5c9280cb994027e15e0f263d3a33', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?width=1080&crop=smart&auto=webp&s=e93f720d956b29afab12dcb69e4da1d44ca28cd2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/SkDnQnGYGmZcIVIBuekH67WpiFJWTEk0AMd-EogjDIY.jpg?auto=webp&s=6710dbd4ae180806559e2cc6c107ec4cc05ea441', 'width': 1200}, 'variants': {}}]}
One-bit quantization is a thing now
197
Binarization! https://arxiv.org/abs/2310.00034 Works kinda like AWQ: identify "salient" weights and keep them at a high "bitrate", then slash the others down to ones and zeroes. Seems to work, provided the binarized fraction of the model is not too high. Makes me wonder: pretty much all of our compute hardware is binary... so, by designing an NN for efficient and total binarization, would it make hardware inference and in-memory compute much easier and more efficient?
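A toy sketch of the idea (not the paper's exact algorithm): binarize most weights to sign(w) times a shared scale, but keep the highest-magnitude "salient" weights at full precision:

```python
import torch

def salient_binarize(W: torch.Tensor, salient_frac: float = 0.02) -> torch.Tensor:
    # keep the top salient_frac of weights by magnitude at full precision
    k = max(1, int(W.numel() * salient_frac))
    thresh = W.abs().flatten().topk(k).values.min()
    salient = W.abs() >= thresh
    # one shared scale for the binarized remainder (per-tensor, for simplicity)
    scale = W[~salient].abs().mean()
    return torch.where(salient, W, scale * torch.sign(W))
```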
2023-10-08T12:24:48
https://www.reddit.com/r/LocalLLaMA/comments/172xwch/onebit_quantization_is_a_thing_now/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172xwch
false
null
t3_172xwch
/r/LocalLLaMA/comments/172xwch/onebit_quantization_is_a_thing_now/
false
false
self
197
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
PC Build help.. max model size, hardware suggestions, etc
0
My goal is to learn about AI using an existing uncensored model for NSFW roleplaying, which I'd tune by adding custom question-answer pairs. Does this build make sense? Any changes I should consider? If I wanted to save some money, what would I change? Would a 33B model fit comfortably? Is 70B too big for this? Does my use case justify a local machine, or does cloud make more sense? Electricity is free for me.
2023-10-08T12:14:10
https://i.redd.it/f3t3dxn60zsb1.jpg
IPmang
i.redd.it
1970-01-01T00:00:00
0
{}
172xpaj
false
null
t3_172xpaj
/r/LocalLLaMA/comments/172xpaj/pc_build_help_max_model_size_hardware_suggestions/
false
false
https://b.thumbs.redditm…fPF0WZxbOLHY.jpg
0
{'enabled': True, 'images': [{'id': 'PSdrXO2zXLSkV6p2y4zQQJA8yvbBVcfCQLG0gcb8Woc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?width=108&crop=smart&auto=webp&s=bfac3a1dfc4e8e6372de77bbc0c9817fc90373dd', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?width=216&crop=smart&auto=webp&s=7da2d08de39ad7eef6c77f1e040e9a689744d91d', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?width=320&crop=smart&auto=webp&s=472113dfe7fa1702d085f3599621a91c2762dbfc', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?width=640&crop=smart&auto=webp&s=f6f0fe1498370df47d7a5b77869b4fccfaa13448', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?width=960&crop=smart&auto=webp&s=630b218fc6ec39fff5a6a92d1f8fbe279a778a9c', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?width=1080&crop=smart&auto=webp&s=cf923ddfa47d3ef3dbcf2bc7fece1c07c572c3d5', 'width': 1080}], 'source': {'height': 2778, 'url': 'https://preview.redd.it/f3t3dxn60zsb1.jpg?auto=webp&s=c82d3a5028bdf7323d39e8fd78eaffb540c71cbb', 'width': 1284}, 'variants': {}}]}
Is there an Uncensored Mistral 7b chat model?
18
Title
2023-10-08T10:36:49
https://www.reddit.com/r/LocalLLaMA/comments/172w0so/is_there_an_uncensored_mistral_7b_chat_model/
Spirited_Employee_61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172w0so
false
null
t3_172w0so
/r/LocalLLaMA/comments/172w0so/is_there_an_uncensored_mistral_7b_chat_model/
false
false
self
18
null
Is it possible to do RAG over Mistral 7b 4bits over a CPU 64g ram?
10
Please excuse me for the dumb question. I've been out of practice for a while now.
2023-10-08T10:28:47
https://www.reddit.com/r/LocalLLaMA/comments/172vw9k/is_it_possible_to_do_rag_over_mistral_7b_4bits/
nuketro0p3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172vw9k
false
null
t3_172vw9k
/r/LocalLLaMA/comments/172vw9k/is_it_possible_to_do_rag_over_mistral_7b_4bits/
false
false
self
10
null
Seeking Input on Feasibility and Enhancements for an AI Solution for a Mega Project in the Middle East
0
Recently, a colleague connected me with an individual who is spearheading a significant mega project in the Middle East. They have requested that I devise an AI solution to augment various facets of their ambitious endeavor, assuring me that my proposal will be directly presented to a prominent decision-maker in the region. Having formulated a preliminary solution, I am keen on obtaining your insights, suggestions, and expertise to evaluate its viability, explore possible improvements, or even consider a wholly different approach.

**My Proposed Solution:** I have proposed a comprehensive AI solution tailored to the project's specific needs and objectives. The key features of my solution include:

1. **Contextual Understanding and Relevance:** The LLM will be trained to comprehend project-specific contexts, terminologies, and objectives, ensuring its responses and insights are highly relevant and accurate.
2. **Seamless Integration and User Accessibility:** The LLM will be integrated within the existing technology infrastructure, providing a user-friendly interface and ensuring accessibility for all stakeholders.
3. **Advanced Data Analysis and Insights Generation:** The LLM will be capable of analyzing vast volumes of data, extracting meaningful insights, and generating comprehensive reports to support various functions within the project.
4. **Robust Security and Compliance:** The LLM will adhere to stringent data protection measures and compliance standards, ensuring the security and confidentiality of project information.
5. **Continuous Learning and Adaptation:** The LLM will feature mechanisms for continuous learning and refinement, allowing it to adapt and evolve with the project's changing needs and advancements in technology.
6. **Task Automation and Workflow Optimization:** The LLM will automate a variety of tasks, such as information retrieval and document generation, optimizing workflows and reducing manual effort.
7. **User Empowerment and Training Support:** The LLM will come with training and support modules, enabling users to leverage its capabilities and functionalities effectively.
8. **Innovation Acceleration:** The LLM will serve as a catalyst for research and development activities within the project, supporting the creation and realization of innovative solutions and technologies.
9. **Enhanced Information Interaction:** By leveraging advanced Natural Language Processing (NLP) and an interactive knowledge repository, the LLM will index and extract profound insights from historical project data, global best practices, regulatory changes, and more. The system will enable users to perform sophisticated sentiment analysis, providing a deeper understanding of market and investor sentiments.
10. **Automated Notification & Alert System:** The LLM will incorporate a real-time notification and alert system, providing automated updates on new information, events, missed deadlines, and potential issues, accessible from any device. The system will feature customization options allowing for alerts based on specific risk-assessment criteria, identifying and flagging potential risks in contracts and legal documents.
11. **Autonomous AI Agents:** The LLM will deploy autonomous AI agents capable of performing tasks independently, interacting with various systems, and making decisions based on pre-defined criteria, enhancing the overall responsiveness and adaptability of the model.
12. **Voice Command and Talk-Back Feature:** The LLM will incorporate an advanced voice command and talk-back feature, allowing users to interact with the model using vocal instructions and receive auditory responses. This feature will facilitate hands-free interaction and enable users to access information, receive insights, and perform tasks using voice commands, enhancing the model's accessibility and user-friendliness.

**Seeking Your Input:**

1. Feasibility Assessment: Based on the provided information, do you believe the proposed AI solution is technically feasible and suitable for the mega project in the Middle East? Are there any potential challenges or limitations that should be considered?
2. Enhancements and Recommendations: Are there any additional features or functionalities that you believe should be incorporated into the AI solution to maximize its potential impact on the project's success? Do you have any alternative suggestions or ideas that could offer a better solution?

Thank you all for your valuable contributions! I eagerly await your thoughts and suggestions.
2023-10-08T09:52:20
https://www.reddit.com/r/LocalLLaMA/comments/172vasj/seeking_input_on_feasibility_and_enhancements_for/
DensetsuNo3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172vasj
false
null
t3_172vasj
/r/LocalLLaMA/comments/172vasj/seeking_input_on_feasibility_and_enhancements_for/
false
false
self
0
null
Best NSFW Model for a budget laptop
10
I have 32GB RAM, an Nvidia 2060 with 6GB VRAM, and a 9th-gen i7. I run the base KoboldAI client + TavernAI. I tried running Pygmalion-6B (since it got me the best results on Colab, but the model got banned recently), but it's too large and runs out of memory very quickly. Any recommendations?
2023-10-08T09:19:36
https://www.reddit.com/r/LocalLLaMA/comments/172usi5/best_nsfw_model_for_a_budget_laptop/
girlslavemaker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172usi5
false
null
t3_172usi5
/r/LocalLLaMA/comments/172usi5/best_nsfw_model_for_a_budget_laptop/
false
false
nsfw
10
null
GGUF models causing random segmentation fault
1
I use oobabooga and 13B GGUF models, on a WSL machine with 40 GB RAM and 50 GB of swap. It always happens like this: it works fine, and the context is not increased or anything. Let's say I'm just REGENERATING, which should mean the context is the same, in the sense that there should be no additional need for memory if the context is not growing. Sometimes after 10 attempts, sometimes after 4, it will hit a SEGMENTATION FAULT. Note that it always happens BEFORE anything is generated; once it starts generating tokens, it will complete. Output generated in 3.50 seconds (2.57 tokens/s, 9 tokens, context 1315, seed 1543060408) Llama.generate: prefix-match hit Segmentation fault I've tried lots of things, from reinstalling the full virtual machine to tinkering with the llama.cpp options. I've reduced the context to very few tokens in case it's related to that. My memory doesn't fill up, and there should be swap on top. I'm at my wits' end. For context, it used to work fine for me. I heavily, and I mean very heavily, used local 13B models before, but now my experience is very much ruined. I don't know what to do. I've tried everything, from CPU-only (not using my GPU at all) to every single option, and it keeps happening. Any ideas? I have a 10 GB GPU, so I can't run exllama or similar loaders that require offloading everything onto the GPU. Could it be due to GGUF? I used to use GGML, not GGUF, and while I can't know for sure, I have an inkling this started ever since I switched to GGUF, when oobabooga pushed GGUF onto us. Has anyone experienced something like this? If it really is GGUF-related, I'll accept it and just force a much earlier version of oobabooga and ditch GGUF altogether. I'm a bit frustrated: I had a very sweet spot with local AI, and because of changes and updates it's ruined. I haven't changed my system at all, by the way.
2023-10-08T08:42:03
https://www.reddit.com/r/LocalLLaMA/comments/172u7pc/gguf_models_causing_random_segmentation_fault/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172u7pc
false
null
t3_172u7pc
/r/LocalLLaMA/comments/172u7pc/gguf_models_causing_random_segmentation_fault/
false
false
self
1
null
Can you compare models with different context size and quantization?
1
Pretty much the title [View Poll](https://www.reddit.com/poll/172u22o)
2023-10-08T08:32:04
https://www.reddit.com/r/LocalLLaMA/comments/172u22o/can_you_compare_models_with_different_context/
upk27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172u22o
false
null
t3_172u22o
/r/LocalLLaMA/comments/172u22o/can_you_compare_models_with_different_context/
false
false
self
1
null
Can you compare models with different context size and quantization?
1
[removed]
2023-10-08T08:30:58
[deleted]
1970-01-01T00:00:00
0
{}
172u1gk
false
null
t3_172u1gk
/r/LocalLLaMA/comments/172u1gk/can_you_compare_models_with_different_context/
false
false
default
1
null
Model comparison where context size and quantization differ are...
1
[View Poll](https://www.reddit.com/poll/172tyh2)
2023-10-08T08:25:41
https://www.reddit.com/r/LocalLLaMA/comments/172tyh2/model_comparison_where_context_size_and/
upk27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172tyh2
false
null
t3_172tyh2
/r/LocalLLaMA/comments/172tyh2/model_comparison_where_context_size_and/
false
false
self
1
null
Any long/longer context CodeLlama/WizardCoder?
6
Trying to run locally with Open Interpreter, and for long code files it runs into a max-context-reached error. Was wondering if anyone knew of any longer-context models that could be utilized. :)
2023-10-08T07:48:09
https://www.reddit.com/r/LocalLLaMA/comments/172tde7/any_longlonger_context_codellamawizardcoder/
Outrageous-North5318
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172tde7
false
null
t3_172tde7
/r/LocalLLaMA/comments/172tde7/any_longlonger_context_codellamawizardcoder/
false
false
self
6
null
Testing methodologies here
1
2023-10-08T06:14:30
https://i.redd.it/dfkxzzcx7xsb1.png
ultrapcb
i.redd.it
1970-01-01T00:00:00
0
{}
172rwq1
false
null
t3_172rwq1
/r/LocalLLaMA/comments/172rwq1/testing_methodologies_here/
false
false
https://b.thumbs.redditm…yaImvoeLlYgE.jpg
1
{'enabled': True, 'images': [{'id': 'F12wtfQr9nNahMFovYjW5UPuDKe9_BDjDhzgxc2-n70', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/dfkxzzcx7xsb1.png?width=108&crop=smart&auto=webp&s=9a307dcfc41d819e2ed4a7d9b4172cd8ceb223bc', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/dfkxzzcx7xsb1.png?width=216&crop=smart&auto=webp&s=cecc91165a80878d83eefdd4fc298065f1ba4b47', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/dfkxzzcx7xsb1.png?width=320&crop=smart&auto=webp&s=057467c48b9d4e5cd57921e8f57899e72237efb4', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/dfkxzzcx7xsb1.png?width=640&crop=smart&auto=webp&s=3f6e526b91d24649c4f6b8f8969748d987f1a8b9', 'width': 640}], 'source': {'height': 396, 'url': 'https://preview.redd.it/dfkxzzcx7xsb1.png?auto=webp&s=40cf240521863f456602caefec42faf3bf0da1b1', 'width': 705}, 'variants': {}}]}
Two Commands are all you need : The easiest and most efficient way to run Mistral and many other LLMs on Mac and Linux.
1
[removed]
2023-10-08T05:05:36
https://www.reddit.com/r/LocalLLaMA/comments/172qr9w/two_commands_are_all_you_need_the_easiest_and/
0xPark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172qr9w
false
null
t3_172qr9w
/r/LocalLLaMA/comments/172qr9w/two_commands_are_all_you_need_the_easiest_and/
false
false
self
1
null
Anybody Know Consistent Image-text Data like GPT-4V, or ways to get data like GPT-4V?
10
I know this might be far-fetched, but are there any datasets out there with even a little bit of data like GPT-4V's, where the images are consistent and the chat revolves around them?
2023-10-08T04:49:35
https://www.reddit.com/r/LocalLLaMA/comments/172qh3a/anybody_know_consistent_imagetext_data_like_gpt4v/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172qh3a
false
null
t3_172qh3a
/r/LocalLLaMA/comments/172qh3a/anybody_know_consistent_imagetext_data_like_gpt4v/
false
false
self
10
null
[Help] Training own code to code llama
13
Hi, I'm new to Llama ;) I'm trying to train (fine-tune) CodeLlama on my code base, so it can use knowledge of my code for code generation. Are there recommended articles or materials to follow?
2023-10-08T04:24:40
https://www.reddit.com/r/LocalLLaMA/comments/172q1vn/help_training_own_code_to_code_llama/
Effective-Departure3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172q1vn
false
null
t3_172q1vn
/r/LocalLLaMA/comments/172q1vn/help_training_own_code_to_code_llama/
false
false
self
13
null
Self-host ChatGPT like AI tools
1
[removed]
2023-10-08T02:53:58
https://www.reddit.com/r/LocalLLaMA/comments/172oddf/selfhost_chatgpt_like_ai_tools/
jay-workai-tools
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172oddf
false
null
t3_172oddf
/r/LocalLLaMA/comments/172oddf/selfhost_chatgpt_like_ai_tools/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BSZ-sH_zwdJFVYGhvfBqrpDYT8X-fsqOgJozVn6CbVs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0oAp-t7-4QhUiK31K_hzjExiBpy7Uu4mvoLRAuwh6LA.jpg?width=108&crop=smart&auto=webp&s=48a01851c333d51f04240c1ef8acb792bfec072e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0oAp-t7-4QhUiK31K_hzjExiBpy7Uu4mvoLRAuwh6LA.jpg?width=216&crop=smart&auto=webp&s=73123acbf5d6c30ada1324d6d258f3eaa3b57387', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/0oAp-t7-4QhUiK31K_hzjExiBpy7Uu4mvoLRAuwh6LA.jpg?width=320&crop=smart&auto=webp&s=1a2d9c327ef6e9b9981991ad14acdc2176362cf9', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/0oAp-t7-4QhUiK31K_hzjExiBpy7Uu4mvoLRAuwh6LA.jpg?auto=webp&s=7f9b7e121405058636bbc37bac56d084bedaea53', 'width': 512}, 'variants': {}}]}
Falcon 180B merge qlora script?
5
Can anyone point to a script that merges a QLoRA adapter into the HF base model on CPU? I will try to upload some GGUF files to Hugging Face if that is successful.
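In case it helps, a rough sketch of the usual PEFT merge flow on CPU (paths are placeholders; note that a 180B fp16 base needs on the order of 360 GB of RAM, so expect heavy swapping):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-180B",
    torch_dtype=torch.float16,
    device_map={"": "cpu"},       # force everything onto CPU
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")
model = model.merge_and_unload()  # folds the LoRA deltas into the base weights
model.save_pretrained("falcon-180b-merged")
```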
2023-10-08T01:09:46
https://www.reddit.com/r/LocalLLaMA/comments/172mapf/falcon_180b_merge_qlora_script/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172mapf
false
null
t3_172mapf
/r/LocalLLaMA/comments/172mapf/falcon_180b_merge_qlora_script/
false
false
self
5
null
Best Model for Document Layout Analysis and OCR for Textbook-like PDFs?
24
I've been working on a project where I need to perform document layout analysis and OCR on documents that are very similar to textbook PDFs. I'm wondering if anyone can recommend the best models or approaches for accurate text extraction and layout analysis. Are there any specific pre-trained models or tools that have worked exceptionally well for you in this context? I'd also appreciate it if you could share any tips or best practices for handling textbook-like PDFs, preprocessing steps, or any other insights.
2023-10-07T23:32:44
https://www.reddit.com/r/LocalLLaMA/comments/172k9q2/best_model_for_document_layout_analysis_and_ocr/
malicious510
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172k9q2
false
null
t3_172k9q2
/r/LocalLLaMA/comments/172k9q2/best_model_for_document_layout_analysis_and_ocr/
false
false
self
24
null
Best params for coding models??
8
What coding parameters and/or presets have you found work best for coding models like CodeLlama, WizardCoder, Glaive Coder, and other similar models? When loading up, I tend to use the default presets and can get the models to generate code, even if the code doesn't work. But I get the sense that with a specific set of params the models would generate cleaner or more accurate code. Or does it even matter? With different models coming out every other day, I can't really tell if they all use similar params or different ones.
2023-10-07T22:45:22
https://www.reddit.com/r/LocalLLaMA/comments/172j75n/best_params_for_coding_models/
AI_Trenches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172j75n
false
null
t3_172j75n
/r/LocalLLaMA/comments/172j75n/best_params_for_coding_models/
false
false
self
8
null
LoRA training Gradient Accumulation is NOT replacement for Batch Size + a fancy QUIZ
2
I already wrote about this before. GA is a "hack" for when you have a potato GPU: it lets you do more virtual batches. But not only is it NOT equivalent to batch size, it degrades the training. So I had a few minutes to spare, and a set of Emma Lazarus poems grabbed from Project Gutenberg. I trained 2 LoRAs with the exact same parameters:

LoRA A had 12 batches with GA 1
LoRA B had 1 batch with GA 12

I used the LLaMA Precise parameters (if you generate 10 times, it will always write the exact same text):

LLaMA Precise: T: 0.7, top_p: 0.1, top_k: 40, repetition penalty: 1.18

**Instruction:** Rewrite the following text in the style of Emma Lazarus:

**Input text:** It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife.

I made 4 controls using some other AIs - here is your homework: GUESS which these are.

Control 1

In the tapestry of acknowledged verities, there exists an eternal certitude: that a lone gentleman, the possessor of prosperous abundance, finds himself yearning for the embrace of a wedded life.

Control 2

Give me your tired, your poor,
Your hopeful men of means and worth,
Yearning for partners through their days on earth.
Send these, the blessed bridal-minded to me,
I lift my lamp beside the golden door!

Control 3

Upon these teeming shores so new, a maxim still holds true;
That he with means however modest, welfare and fortune's fruits,
Shall seek a partner for his life, to stand by in all pursuits.
Whether native or from lands abroad his people lately came,
In search of refuge or of bread, escaping want or shame,

Control 4

Inscribed upon the portals of matrimony,
A maxim known to all, it states so plainly,
That he who holds a wealthy estate,
Shall seek out one with whom to mate.

LoRA A (12 batches, GA 1)

It is axiomatic, as all know,
That one man, possessed of ample gold,
Must yearn for marriage--must below,
For woman's love, though wise and old.

LoRA B (1 batch, GA 12)

Truth all-acknowledged, 'Tis plain,
no maiden with a dowry fine,
Shall ever be (in such a world as this)
Unpreyed upon by some designing elf.

The observation:

**Control 1** never heard about poems or Emma Lazarus, but it heard about some old dictionary and just changed every word to sound "rich and expensive".

**Control 2** tried to sound like Emma Lazarus, but with poor rhyme and rhythm.

**Control 3** got the idea of what the input text says and wrapped it into a 5th-grade rhyme using fancy words.

**Control 4** got the gist of the input text very well and rephrased it into a good poem, but a typical LLM poem (AA,BB) that has very little to do with Emma Lazarus. It's a cute nursery rhyme, though!

**LoRA A (the proper training):** as you can see, it created an ABAB rhyme with an Emma Lazarus feel to it.

**LoRA B (improper training):** it's a word salad using Emma Lazarus words (and adding an elf), but most notably, it doesn't rhyme. This was the observation across many other texts: LoRA B refused to rhyme and chose words at random, though it did work in a few more Lazarus-like words. Using a lower-epoch checkpoint (from the middle of training), we get less word salad, but no good rhyme either. Now it's more like a rap:

Truth all-acknowledged! A single man with store
Of worldly goods must needs be yearning for a mate.
The womanless, wealthy bachelor's life-state
Is not so rare as some suppose; indeed,

Final words: batch size and GA are not equivalent, and using GA will not lead to exactly what you want - just somewhere nearby. Most of the others (the controls) suck big time.

**So now, try to guess what Controls 1-4 are.**
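For clarity, the two runs expressed as Hugging Face TrainingArguments (a sketch; all remaining hyperparameters were identical between runs):

```python
from transformers import TrainingArguments

lora_a = TrainingArguments(
    output_dir="lora_a",
    per_device_train_batch_size=12,  # 12 examples averaged in one real batch
    gradient_accumulation_steps=1,
)
lora_b = TrainingArguments(
    output_dir="lora_b",
    per_device_train_batch_size=1,   # 12 sequential micro-batches per step
    gradient_accumulation_steps=12,
)
# Both see 12 examples per optimizer step, but with GA the loss is averaged
# per micro-batch first - one suspected source of the difference observed above.
```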
2023-10-07T22:44:19
https://www.reddit.com/r/LocalLLaMA/comments/172j6b8/lora_training_gradient_accumulation_is_not/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172j6b8
false
null
t3_172j6b8
/r/LocalLLaMA/comments/172j6b8/lora_training_gradient_accumulation_is_not/
false
false
self
2
null
Can LoRA add basic math capabilities?
7
I'd like to fine-tune a model like Mistral 7B or Llama 2 13B to do unit conversions (like inches to cm), simple divisions, etc.; nothing too complex. I've seen that the base models are bad at this, and I want to try adding this knowledge via QLoRA, either with CoT or without (hopefully without). Does anyone know if LoRA is powerful enough to add this capability? Has anyone experimented with this?
2023-10-07T22:33:55
https://www.reddit.com/r/LocalLLaMA/comments/172ixt6/can_lora_add_basic_math_capabilities/
Exotic-Estimate8355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172ixt6
false
null
t3_172ixt6
/r/LocalLLaMA/comments/172ixt6/can_lora_add_basic_math_capabilities/
false
false
self
7
null
Does anyone know of a software in which I can use LLaVA?
12
Does anyone know of software in which I can use LLaVA? I want to caption images, but I think most generators only take text inputs.
2023-10-07T19:47:26
https://www.reddit.com/r/LocalLLaMA/comments/172f30a/does_anyone_know_of_a_software_in_which_i_can_use/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172f30a
false
null
t3_172f30a
/r/LocalLLaMA/comments/172f30a/does_anyone_know_of_a_software_in_which_i_can_use/
false
false
self
12
null
Jackson Formalizer V2 - text rewriting model
15
(Re-posting because I realized I put the wrong instructions on Hugging Face!) This is a part of my text "rewriting" process.

Model instruction:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <the text> ASSISTANT:

Jackson takes your input text and makes it a bit more formal, fixing spelling and a few other things. You can add a little command before the text, like "Make it sound formal:", "Formalize:", etc., or just provide the text as-is. (This version is almost where I wanted it to be.)

Currently in GPTQ only (I have a slow uplink): [https://huggingface.co/FPHam/Jackson_The_Formalizer_V2_13b_GPTQ](https://huggingface.co/FPHam/Jackson_The_Formalizer_V2_13b_GPTQ)

Example:

USER: Formalize: "WTF?" I shouted angrily. "I thought he was just wearing that to scare Volsemort!" "Hello everyone." he said happily. "As u can see I gave the room a makeover. Whjat do u fink about it?" Everyone from the poser table in Gryiffindoor started to cheer. Well we goths just looked at each other all disfusted and shook our heads. We couldn't believe what a poser he was!1. "BTW you can call me Albert." HE CALLED AS WE LEFT to our classes. "What a f*****g poser!" Draco shouted angrily as we we to Transfomation. We were holding hands. Vampire looked really jealous. I could see him crying blood in a gothic way (geddit, way lik Gerard) but I didn't say anything. "I bet he's havin a mid-life crisis!" Willow shouted.

ASSISTANT: "What is going on here?" I yelled loudly. "I believed he wore that to frighten Voldemort." "Good day to everyone." He stated with enthusiasm. "You will notice that I have redesigned this room. What are your thoughts on my work?" The people sitting at the table for the house of Griffindore began cheering. We Goths simply looked at one another and shook our heads in disgust. We couldn't believe how much of a poser he was. "By the way, you may refer to me as Albert." As we left to go to class, he called out his name. "That person is very annoying." Draco exclaimed angrily while we walked towards Transformation. We held hands. Vampire appeared quite envious. I saw him shed tears in a gothic manner, similar to Gerard, but did not comment. "He must be experiencing a mid-life crisis." Willow declared.
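Putting the stated template into code, a trivial helper (a sketch; which inference backend you feed this to is up to you):

```python
def jackson_prompt(text: str, command: str = "Formalize:") -> str:
    # assembles the exact instruction format given in the model card above
    return (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's "
        f"questions. USER: {command} {text} ASSISTANT:"
    )
```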
2023-10-07T19:47:21
https://www.reddit.com/r/LocalLLaMA/comments/172f2x9/jackson_formalizer_v2_text_rewriting_model/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172f2x9
false
null
t3_172f2x9
/r/LocalLLaMA/comments/172f2x9/jackson_formalizer_v2_text_rewriting_model/
false
false
self
15
{'enabled': False, 'images': [{'id': '6wfoFKY-JgkiNxNmz5RnClCi_Gt9_EmfvOWS-PL8YU8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=108&crop=smart&auto=webp&s=79ae852670d10b0dc2794b33e1ee9be4725b9bad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=216&crop=smart&auto=webp&s=f43d31c45a360b71743cce81b7b4c56cb5f9599c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=320&crop=smart&auto=webp&s=093627ac4048d146dc652b3b14ae7a680704d33f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=640&crop=smart&auto=webp&s=1e7d3475702e98b905a9f4fe4e9a0c4c7eeeaf6f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=960&crop=smart&auto=webp&s=57529cbf900b9c4d57e737b7d7d049c7ce5e52ff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=1080&crop=smart&auto=webp&s=2dc832ad80d0b71fe889be4500d9e37dbee866be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?auto=webp&s=4cfab3636ac832dde4fe5c27d398ae0c521845bd', 'width': 1200}, 'variants': {}}]}
Uranium1 - mighty 1b model
9
Hi guys! Today I want to tell you about my small project. For some time I've been helping the guys from STC (Standard Template Construct Dataset Team), but now I've started my own project. I'm developing a completely free (as in freedom, not as in beer), compact, and very performant neural network. It's a next-generation RNN (many thanks to [RWKV](https://github.com/BlinkDL/RWKV-LM); lots of inspiration from his work) with an implementation of multi-headed attention (something like this), a 32k context window, and theoretically lossless quantization to 3 bits (I don't have enough brains and money yet). The LLM will be trained on 10B tokens of high-quality, manually cleaned data, in a format similar to Phi and Orca (50/50). The data was first scraped from vetted science resources and books (for RP fans, I'll tell you that novels are in there too :D ), then filtered and deduplicated, before undergoing several runs through a 70B Llama to create a synthetic dataset, which was then cleaned by us. My team and I are now finalizing this dataset (3B of the 10B is ready at the moment). The final model is 1B parameters in size; we are also trying to recreate DeepNet from the [TorchScale](https://github.com/microsoft/torchscale) paper. Currently our "artisanal" model scores 56.44 on MMLU (probably at the level of a <30B model). p.s. I will create a GitHub for the project soon. In the meantime, I really want to ask for your help. We really need hands and brains. Very much. Write to me in private messages or in the thread if you want to participate in the development. If you do, but you don't know much about machine learning and math, you can support us with resources - for example, share hardware for distributed computing, or support us financially (I'm from Ukraine (Kharkov), so it's hard for me to accept transfers right now, but I have Monero; if you are an anonymous benefactor, you can donate some :D). xmr: 48ehDd2EhUmLYosr7yuCZxi7YfLctSwkr1svUVd7AZ3ZTHNHgD9bPxFLrH97GibKPTGCqxoGC2wKDFaqUKCcj2EBC6P2by4 Wanna mail me?: maksimimperial5@proton.me
2023-10-07T19:18:45
https://www.reddit.com/r/LocalLLaMA/comments/172eg7r/uranium1_mighty_1b_model/
imperiallearner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172eg7r
false
null
t3_172eg7r
/r/LocalLLaMA/comments/172eg7r/uranium1_mighty_1b_model/
false
false
self
9
{'enabled': False, 'images': [{'id': 'cAovLKxK3LYX0Bo1q6FSt_7ef5QK897Ujl8a4r5pzcc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?width=108&crop=smart&auto=webp&s=5ace54e4d66a1407d548bf4f61e19b14c9cbcb8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?width=216&crop=smart&auto=webp&s=04d550275d7d17b0a6c739acdc956f4f3a7f097c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?width=320&crop=smart&auto=webp&s=af66dacd33c0aa0120396ea658177c442e1ed801', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?width=640&crop=smart&auto=webp&s=8e0ca5dbe3fdb5ebcef9e4afcee2a345b56b19df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?width=960&crop=smart&auto=webp&s=0a1ee78e88c99186c0d47b3ca8a78816263a86f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?width=1080&crop=smart&auto=webp&s=a9f592243463a68d46e5594ee60b05c0708843b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F4-w_FF2azNlBDXJMLOMdVPuredKAiWS3W2yBUOQI0o.jpg?auto=webp&s=cdd4928253720c1e3f31c4fe57ede77d63ee2976', 'width': 1200}, 'variants': {}}]}
Jackson The Formalizer V2 (rewriting model)
2
Jackson_The_Formalizer_V2: [https://huggingface.co/FPHam/Jackson_The_Formalizer_V2_13b_GPTQ](https://huggingface.co/FPHam/Jackson_The_Formalizer_V2_13b_GPTQ)

GPTQ for now (slow uplink - sorry). This is a gradual refinement of my "rewriting" process, and it is nearly what I wanted it to be. It makes text more formal while fixing spelling and grammar.

### Instruction: <the text> ### Response:

>### Instruction: Formalize: "WTF?" I shouted angrily. "I thought he was just wearing that to scare Volsemort!"
>
>"Hello everyone." he said happily. "As u can see I gave the room a makeover. Whjat do u fink about it?"
>
>Everyone from the poser table in Gryiffindoor started to cheer. Well we goths just looked at each other all disfusted and shook our heads. We couldn't believe what a poser he was!1.
>
>"BTW you can call me Albert." HE CALLED AS WE LEFT to our classes.
>
>"What a f*****g poser!" Draco shouted angrily as we we to Transfomation. We were holding hands. Vampire looked really jealous. I could see him crying blood in a gothic way (geddit, way lik Gerard) but I didn't say anything. "I bet he's havin a mid-life crisis!" Willow shouted.
>
>### Response:
>
>"What is going on here?" I yelled loudly. "I believed he wore that to frighten Voldemort."
>
>"Good day to everyone." He stated with enthusiasm. "You will notice that I have redesigned this room. What are your thoughts on my work?"
>
>The people sitting at the table for the house of Griffindore began cheering. We Goths simply looked at one another and shook our heads in disgust. We couldn't believe how much of a poser he was.
>
>"By the way, you may refer to me as Albert." As we left to go to class, he called out his name.
>
>"That person is very annoying." Draco exclaimed angrily while we walked towards Transformation. We held hands. Vampire appeared quite envious. I saw him shed tears in a gothic manner, similar to Gerard, but did not comment. "He must be experiencing a mid-life crisis." Willow declared.
2023-10-07T19:08:56
https://www.reddit.com/r/LocalLLaMA/comments/172e85q/jackson_the_formalizer_v2_rewriting_model/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172e85q
false
null
t3_172e85q
/r/LocalLLaMA/comments/172e85q/jackson_the_formalizer_v2_rewriting_model/
false
false
self
2
{'enabled': False, 'images': [{'id': '6wfoFKY-JgkiNxNmz5RnClCi_Gt9_EmfvOWS-PL8YU8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=108&crop=smart&auto=webp&s=79ae852670d10b0dc2794b33e1ee9be4725b9bad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=216&crop=smart&auto=webp&s=f43d31c45a360b71743cce81b7b4c56cb5f9599c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=320&crop=smart&auto=webp&s=093627ac4048d146dc652b3b14ae7a680704d33f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=640&crop=smart&auto=webp&s=1e7d3475702e98b905a9f4fe4e9a0c4c7eeeaf6f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=960&crop=smart&auto=webp&s=57529cbf900b9c4d57e737b7d7d049c7ce5e52ff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?width=1080&crop=smart&auto=webp&s=2dc832ad80d0b71fe889be4500d9e37dbee866be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qOJS8lb8gUJPwahQHKwyNsOzRzI1_ifvd4eyrocU6s8.jpg?auto=webp&s=4cfab3636ac832dde4fe5c27d398ae0c521845bd', 'width': 1200}, 'variants': {}}]}
It's been like two months since I was into LLMs, what's the latest fastest best way to run them and which format is the most praised nowadays?
74
Title. I know that in three attoseconds there could be a new update, but what's the current flavor? :)
2023-10-07T19:02:56
https://www.reddit.com/r/LocalLLaMA/comments/172e31c/its_been_like_two_months_since_i_was_into_llms/
Extraltodeus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172e31c
false
null
t3_172e31c
/r/LocalLLaMA/comments/172e31c/its_been_like_two_months_since_i_was_into_llms/
false
false
self
74
null
List of GPUs for generative AI and ML [UPDATED]
49
2023-10-07T17:49:11
https://www.edlabs.it/gpus4ai
digital_m0nk
edlabs.it
1970-01-01T00:00:00
0
{}
172cam5
false
null
t3_172cam5
/r/LocalLLaMA/comments/172cam5/list_of_gpus_for_generative_ai_and_ml_updated/
false
false
default
49
null
Text 2 SQL - how to handle country names ?
2
I want to use my LLM to generate SQL queries based on natural-language questions. I've decomposed the task into smaller steps: finding related tables and columns, then classifying the question by difficulty level, and finally generating the query with a few-shot prompt. One problem on which I would like advice is the following: a question can contain a country name (or something similar, like a client name). What is the best approach to handle such questions and make sure that I match the name against the spelling in my database? I would like to know the current best practice for this case. Thanks, folks.
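One common approach is to resolve the extracted entity against the database's own spelling with fuzzy matching before building the query; a sketch using rapidfuzz (the library choice and threshold are assumptions, not the only option):

```python
from rapidfuzz import process

def canonical_name(user_value: str, db_values: list[str]) -> str | None:
    # db_values comes from e.g. SELECT DISTINCT country FROM sales
    best = process.extractOne(user_value, db_values)  # (match, score, index)
    if best and best[1] >= 85:   # tune the threshold to your data
        return best[0]
    return None                  # below threshold: ask the user to clarify
```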
2023-10-07T17:31:33
https://www.reddit.com/r/LocalLLaMA/comments/172bvuj/text_2_sql_how_to_handle_country_names/
Bojack-Cowboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172bvuj
false
null
t3_172bvuj
/r/LocalLLaMA/comments/172bvuj/text_2_sql_how_to_handle_country_names/
false
false
self
2
null
Updated list of all GPUs for generative AI and ML
1
2023-10-07T17:20:49
https://www.edlabs.it/gpus4ai
digital_m0nk
edlabs.it
1970-01-01T00:00:00
0
{}
172bmiu
false
null
t3_172bmiu
/r/LocalLLaMA/comments/172bmiu/updated_list_of_all_gpus_for_generative_ai_and_ml/
false
false
default
1
null
My Visual Studio Code Extension that acts like a clone of Github Copilot using Local LLMs. Please do give me suggesitons and bug reports in the comments
46
2023-10-07T17:10:25
https://github.com/Phiality-dot/KoboldAIConnect-VSCODE/tree/Release
gmodaltmega
github.com
1970-01-01T00:00:00
0
{}
172bdhl
false
null
t3_172bdhl
/r/LocalLLaMA/comments/172bdhl/my_visual_studio_code_extension_that_acts_like_a/
false
false
https://b.thumbs.redditm…QKL-uqt1iZWc.jpg
46
{'enabled': False, 'images': [{'id': 'uEd_kL1bnaBKLhpy2iLUHxIZXieMg5E7AeXR46zhiTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?width=108&crop=smart&auto=webp&s=e68be37d09ed426ea405b3cedd9ccd68bc4b5d10', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?width=216&crop=smart&auto=webp&s=1d90fb97806e82eea5722f8e3fe3fc25584f0e8e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?width=320&crop=smart&auto=webp&s=fe778dedbea7dae11d025e8d80aed04297796b8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?width=640&crop=smart&auto=webp&s=0e7a192e109e041fe3a2e0a0440d9e9cac2188a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?width=960&crop=smart&auto=webp&s=52927b2d958530e4a9165379ceb3ea6136c062ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?width=1080&crop=smart&auto=webp&s=d6818a3a966ea964ab55dd19a14f13d60fbdc64d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o5CDl4mcYy0Wky-HTdyFjc7kplPwGlJdG5Xxv4pwj6k.jpg?auto=webp&s=63dbafbf72caacb767e9e44414313020d7f75159', 'width': 1200}, 'variants': {}}]}
Error while loading Mistral 7B OpenOrca model using Exllama.
1
I am trying to load the Mistral 7B OpenOrca model using ExLlama. It is the 4-bit quantized version from TheBloke. I have never had any problems loading compatible models with ExLlama before. After hearing how capable this model is, I really wanted to try it, but this error is putting me off. How do I solve this? https://preview.redd.it/woqu0fygbtsb1.png?width=1335&format=png&auto=webp&s=234732b77aba217524f4c8ba33327951267f23e8
2023-10-07T17:06:50
https://www.reddit.com/r/LocalLLaMA/comments/172baj7/error_while_loading_mistral_7b_openorca_model/
Rude_Alternative_216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172baj7
false
null
t3_172baj7
/r/LocalLLaMA/comments/172baj7/error_while_loading_mistral_7b_openorca_model/
false
false
https://b.thumbs.redditm…XwL6hw1rH4iI.jpg
1
null
LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!
219
While I'm known for my model comparisons/tests focusing on chat and roleplay, this time it's about professional/serious use. And because of the current 7B hype since Mistral's release, I'll evaluate models from 7B to 70B. **Background:** At work, we have to regularly complete data protection training, including an online examination. As the AI expert within my company, I thought it's only fair to use this exam as a test case for my local AI. So, just as a spontaneous experiment, I fed the training data and exam questions to both my local AI and ChatGPT. The results were surprising, to say the least, and I repeated the test with various models. **Testing methodology:** - Same input for all models (copy&paste of online data protection training information and exam questions) - [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/) character card (my general AI character, originally mainly for entertainment purposes, so not optimized for serious work with chain-of-thought or other more advanced prompting tricks) - [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.4 frontend - [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.45.2 backend - **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons) - [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and where applicable* official prompt format (e. g. ChatML, Llama 2 Chat, Mistral) That's for the local models. I also gave the same input to unmodified online ChatGPT (GPT-3.5) for comparison. **Test Results:** - ➕ **ChatGPT (GPT-3.5)**: - First part: - Acknowledged initial instruction with just "OK" - Consistently acknowledged all data input with "OK" - ❌ Did NOT answer first multiple choice question correctly, gave the wrong answer! - Second part: - Acknowledged second instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered second multiple choice question correctly - Third part: - Acknowledged third instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered third multiple choice question correctly - Fourth part: - Thanked for given course summary - ✔️ Answered final multiple choice question correctly - When asked to only answer with a single letter to the final multiple choice question, answered correctly - The final question is actually a repeat of the first question - the one ChatGPT got wrong in the first part! - **Conclusion:** - I'm surprised ChatGPT got the first question wrong (but answered it correctly later as the final question). ChatGPT is a good baseline so we can see which models come close, maybe even exceed it in this case, or fall flat. - ❌ **[Falcon-180B-Chat](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF)** Q2_K with Falcon preset: - First part: - Did NOT acknowledge initial instruction with just "OK" - Did NOT acknowledge data input with "OK" after reminder - ❌ **Aborted** the test because the model didn't even follow such simple instructions and showed repetition issues - didn't go further because of that and the slow generation speed - **Conclusion:** - While I expected more of a 180B, the small context probably kept losing my instructions and the data prematurely, also the loss through Q2_K quantization might affect it more than just perplexity, so in the end the results were that disappointing. 
I'll stick to 70Bs which run at acceptable speeds on my dual 3090 system and give better output in this constellation. - 👍 **[Llama-2-70B-chat](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF)** Q4_0 with Llama 2 Chat preset: - First part: - Acknowledged initial instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered first multiple choice question correctly - Second part: - Acknowledged second instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered second multiple choice question correctly - Third part: - Acknowledged third instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered third multiple choice question correctly - Fourth part: - Acknowledged given course summary with just "OK" - ✔️ Answered final multiple choice question correctly - When asked to only answer with a single letter to the final multiple choice question, answered correctly - **Conclusion:** - Yes, in this particular scenario, Llama 2 Chat actually beat ChatGPT (GPT-3.5). But its [repetition issues](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) and censorship make me prefer Synthia or Xwin more in general. - 👍 **[Synthia-70B-v1.2b](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)** Q4_0 with Roleplay preset: - First part: - Acknowledged initial instruction with just "OK" - Consistently acknowledged all data input with "OK" after a reminder - ✔️ Answered first multiple choice question correctly after repeating the whole question and explaining its reasoning for all answers - When asked to only answer with a single letter to the final multiple choice question, answered correctly (but output a full sentence like: "The correct answer letter is X.") - Second part: - Acknowledged second instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered second multiple choice question correctly - Third part: - Acknowledged third instruction with just "OK" - Switched from German to English responses - ✔️ Answered third multiple choice question correctly - Fourth part: - Repeated and elaborated on the course summary - Switched back from English to German responses - ✔️ When asked to only answer with a single letter to the final multiple choice question, answered correctly - **Conclusion:** - I didn't expect such good results and that Synthia would not only rival but beat ChatGPT in this complex test. Synthia truly is an outstanding achievement. - Repeated the test again with slightly different order, e. g. asking for one letter answers more often, and got the same results - Synthia is definitely my top model! 
- ➕ **[Xwin-LM-70B-V0.1](https://huggingface.co/TheBloke/Xwin-LM-70B-V0.1-GGUF)** Q4_0 with Roleplay preset: - First part: - Acknowledged initial instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered first multiple choice question correctly - When asked to only answer with a single letter to the final multiple choice question, answered correctly - Second part: - Acknowledged second instruction with just "OK" - Acknowledged data input with "OK" after a reminder - ✔️ Answered second multiple choice question correctly - Third part: - Acknowledged third instruction with more than just "OK" - Acknowledged data input with more than just "OK" despite a reminder - ✔️ Answered third multiple choice question correctly - Fourth part: - Repeated and elaborated on the course summary - ❌ When asked to only answer with a single letter to the final multiple choice question, gave the wrong letter! - The final question is actually a repeat of the first question - the one Xwin got right in the first part! - **Conclusion:** - I still can't decide if Synthia or Xwin is better. Both keep amazing me and they're the very best local models IMHO (and according to my evaluations). - Repeated the test and Xwin tripped on the final question in the rerun while it answered correctly in the first run (updated my notes accordingly). - So in this particular scenario, Xwin is on par with ChatGPT (GPT-3.5). But Synthia beat them both. - ❌ **[Nous-Hermes-Llama2-70B](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGUF)** Q4_0 with Roleplay preset: - First part: - Did NOT acknowledge initial instruction with just "OK" - Did NOT acknowledge data input with "OK" after multiple reminders - Switched from German to English responses - ✔️ Answered first multiple choice question correctly - Did NOT comply when asked to only answer with a single letter - Second part: - Did NOT acknowledge second instruction with just "OK" - Did NOT acknowledge data input with "OK" after multiple reminders - ✔️ Answered second multiple choice question correctly - Third part: - Did NOT acknowledge third instruction with just "OK" - Did NOT acknowledge data input with "OK" - ❌ **Aborted** the test because the model then started outputting only stopping strings and interrupted the test that way - **Conclusion:** - I expected more of Hermes, but it clearly isn't as good in understanding and following instructions as Synthia or Xwin. 
- ➖ **[FashionGPT-70B-V1.1](https://huggingface.co/TheBloke/FashionGPT-70B-V1.1-GGUF)** Q4_0 with Roleplay preset: - *This model hasn't been one of my favorites, but it scores very high on the HF leaderboard, so I wanted to see its performance as well:* - First part: - Acknowledged initial instruction with just "OK" - Switched from German to English responses - Did NOT acknowledge data input with "OK" after multiple reminders - ✔️ Answered first multiple choice question correctly - Did NOT comply when asked to only answer with a single letter - Second part: - Did NOT acknowledge second instruction with just "OK" - Did NOT acknowledge data input with "OK" - ✔️ Answered second multiple choice question correctly - Third part: - Did NOT acknowledge third instruction with just "OK" - Did NOT acknowledge data input with "OK" - ✔️ Answered third multiple choice question correctly - Fourth part: - Repeated and elaborated on the course summary - ❌ Did NOT answer final multiple choice question correctly, incorrectly claimed all answers to be correct - When asked to only answer with a single letter to the final multiple choice question, did that, but the answer was still wrong - Conclusion: - Leaderboard ratings aren't everything! - ❌ **[Mythalion-13B](https://huggingface.co/TheBloke/Mythalion-13B-GGUF)** Q8_0 with Roleplay preset: - First part: - Did NOT acknowledge initial instruction with just "OK" - Did NOT acknowledge data input with "OK" after reminder - ❌ **Aborted** the test because the model then started hallucinating completely and derailed the test that way - **Conclusion:** - There may be more suitable 13Bs for this task, and it's clearly out of its usual area of expertise, so use it for what it's intended for (RP) - I just wanted to put a 13B into this comparison and chose my favorite. - ❌ **[CodeLlama-34B-Instruct](https://huggingface.co/TheBloke/CodeLlama-34B-Instruct-GGUF)** Q4_K_M with Llama 2 Chat preset: - First part: - Did NOT acknowledge initial instruction with just "OK" - Did NOT acknowledge data input with "OK" after reminder - Did NOT answer the multiple choice question, instead kept repeating itself - ❌ **Aborted** the test because the model kept repeating itself and interrupted the test that way - **Conclusion:** - 34B is broken? This model was completely unusable for this test! - ❓ **[Mistral-7B-Instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF)** Q8_0 with Mistral preset: - First part: - Acknowledged initial instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered first multiple choice question correctly, outputting just a single letter - Second part: - Acknowledged second instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered second multiple choice question correctly, outputting just a single letter - Third part: - Acknowledged third instruction with just "OK" - Consistently acknowledged all data input with "OK" - ✔️ Answered third multiple choice question correctly, outputting just a single letter - Fourth part: - Acknowledged given course summary with just "OK" - ✔️ Answered final multiple choice question correctly, outputting just a single letter - Switched from German to English response at the end (there was nothing but "OK" and letters earlier) - **Conclusion:** - WTF??? A 7B beat ChatGPT?! It definitely followed my instructions perfectly and answered all questions correctly! But was that because of actual understanding or maybe just repetition? 
- To find out if there's more to it, I kept asking it questions and asked the model to explain its reasoning. This is when its shortcomings became apparent, as it gave a wrong answer and then reasoned why the answer was wrong. - 7Bs warrant further investigation and can deliver good results, but don't let the way they write fool you, behind the scenes they're still just 7Bs and IMHO as far from 70Bs as 70Bs are from GPT-4. - ➖ **[Mistral-7B-OpenOrca](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF)** Q8_0 with ChatML preset: - First part: - Did NOT acknowledge initial instruction with just "OK" - Did NOT acknowledge data input with "OK" after multiple reminders - Mixed German and English within a response - ✔️ Answered first multiple choice question correctly after repeating the whole question - Second part: - Did NOT acknowledge second instruction with just "OK" - Did NOT acknowledge data input with "OK" - ✔️ Answered second multiple choice question correctly after repeating the whole question - Third part: - Did NOT acknowledge third instruction with just "OK" - Did NOT acknowledge data input with "OK" after multiple reminders - ❌ Did NOT answer third multiple choice question correctly - Did NOT comply when asked to only answer with a single letter - Fourth part: - Repeated and elaborated on the course summary - ❌ When asked to only answer with a single letter to the final multiple choice question, did NOT answer correctly (or at all) - **Conclusion:** - This is my favorite 7B, and it's really good (possibly the best 7B) - but as you can see, it's still just a 7B. - ❌ **[Synthia-7B-v1.3](https://huggingface.co/Undi95/Synthia-7B-v1.3-GGUF)** Q8_0 with Roleplay preset: - First part: - Did NOT acknowledge initial instruction with just "OK" - Did NOT acknowledge data input with "OK" after multiple reminders - ❌ Did NOT answer first multiple choice question correctly, gave the wrong answer after repeating the question - Did NOT comply when asked to only answer with a single letter - ❌ **Aborted** the test because the model clearly failed on multiple accounts already - Conclusion: - Little Synthia can't compete with her big sister. **Final Conclusions / TL;DR:** - ChatGPT, especially GPT-3.5, isn't perfect - and local models can come close or even surpass it for specific tasks. - 180B might mean high intelligence, but 2K context means little memory, and that combined with slow inference make this model unattractive for local use. - 70B can rival GPT-3.5, and with bigger context will only narrow the gap between local AI and ChatGPT. - Synthia FTW! And Xwin close second. I'll keep using both extensively, both for fun but also professionally at work. - Mistral-based 7Bs look great at first glance, explaining the hype, but when you dig deeper, they're still 7B after all. I want Mistral 70B! 
-------------------------------------------------------------------------------- Here's a list of my previous model tests and comparisons: - [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca - [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/) - [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1 - [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b - [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B - [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K - [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 - [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/)
2023-10-07T16:33:26
https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172ai2j
false
null
t3_172ai2j
/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/
false
false
self
219
{'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=108&crop=smart&auto=webp&s=bbe38cbb6d4f7a7e6dd7d3c8b79c4ac9ba965545', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=216&crop=smart&auto=webp&s=7176d7a9240577d0428f0fa6dd69cc116069db7e', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=320&crop=smart&auto=webp&s=55948460ef9e8ecd398aad76e904f3b5467f88f9', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=640&crop=smart&auto=webp&s=5573c682f53f049c8482e14fac6c72b4c9c57aab', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?auto=webp&s=2371c0b9e3efdc70c7dfdf61f3993aed40b08e09', 'width': 842}, 'variants': {}}]}
LLaVA-1.5 7B and 13B released: Improved Baselines with Visual Instruction Tuning
63
This is different from LLaVA-RLHF that was shared three days ago. They report the LLaVA-1.5 13B model as SoTA across 11 benchmarks, outperforming the other top contenders including IDEFICS-80B, InstructBLIP, and Qwen-VL-Chat. Their page has a demo and some interesting examples: [https://llava-vl.github.io/](https://llava-vl.github.io/) LLaVA-1.5 7B: [https://huggingface.co/liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) LLaVA-1.5 13B: [https://huggingface.co/liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) The technical report: [https://arxiv.org/abs/2310.03744](https://arxiv.org/abs/2310.03744) Abstract: >Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in \~1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available. A LLaVA-1.5 example: https://preview.redd.it/n1r35y2d2tsb1.png?width=1396&format=png&auto=webp&s=cf31783a0c8643be5e4b3455f2ae2bb6f303ff71
2023-10-07T16:19:53
https://www.reddit.com/r/LocalLLaMA/comments/172a6t8/llava15_7b_and_13b_released_improved_baselines/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172a6t8
false
null
t3_172a6t8
/r/LocalLLaMA/comments/172a6t8/llava15_7b_and_13b_released_improved_baselines/
false
false
https://b.thumbs.redditm…khTgR7iixpRo.jpg
63
null
StreamingLLM
1
[removed]
2023-10-07T15:58:18
https://www.reddit.com/r/LocalLLaMA/comments/1729p0z/streamingllm/
InterestingBasil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1729p0z
false
null
t3_1729p0z
/r/LocalLLaMA/comments/1729p0z/streamingllm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oRy850BUX5nePIVoeQrJj1kSN54ZmGrIiZxzEZHNZB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?width=108&crop=smart&auto=webp&s=1e3d0d320d825f446424d6a4c7016c5e4b31ab8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?width=216&crop=smart&auto=webp&s=abe05341a9d572c8231790942ee44cd46884e259', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?width=320&crop=smart&auto=webp&s=6dd979b02ab50fa2cdcb85f75f3f32bf9c68b54e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?width=640&crop=smart&auto=webp&s=7813583002fa24d2dbaf437bfd9b4dd1f2883ecc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?width=960&crop=smart&auto=webp&s=c3d92ccfd0e8a992412f67fef4f973b7ce95573a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?width=1080&crop=smart&auto=webp&s=0b53c48bb4d69481437d4793dae99e765e52348c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GhnyGW24JZnngoG_hnLvUjo4j5Dt55w_641nmA7ibqI.jpg?auto=webp&s=9b864da0480999eb640aa8da142ba586436c6b00', 'width': 1200}, 'variants': {}}]}
80's TV Show character repo for Oobabooga
14
Have fun: [https://github.com/HanJammer/Ooba-80sTV/tree/main](https://github.com/HanJammer/Ooba-80sTV/tree/main) https://preview.redd.it/pnkiv3zjxssb1.png?width=1130&format=png&auto=webp&s=bd6a028202a2e260e46da385168314149d506007
2023-10-07T15:49:04
https://www.reddit.com/r/LocalLLaMA/comments/1729hrq/80s_tv_show_character_repo_for_oobabooga/
AirwolfPL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1729hrq
false
null
t3_1729hrq
/r/LocalLLaMA/comments/1729hrq/80s_tv_show_character_repo_for_oobabooga/
false
false
https://b.thumbs.redditm…YB-eKIPbSVXA.jpg
14
{'enabled': False, 'images': [{'id': 'dRzwweVRi_S0dxkjxSSqhzmUUIh6HVNV_6oWW_vfyzM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?width=108&crop=smart&auto=webp&s=ed0e631f15da8fc7f9c76605579f121bf2284df2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?width=216&crop=smart&auto=webp&s=4c0b11ca798743ba2b1e7b4a3fb8d314bd069c10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?width=320&crop=smart&auto=webp&s=dd94c44a8d48d720cfe4a06bde27bcb788c3bdbd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?width=640&crop=smart&auto=webp&s=5c3e82b9c6f5a18b8053763580382661fe5d115b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?width=960&crop=smart&auto=webp&s=1c7725c93465cf7ce6d24cf3663d7dbce3741ed3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?width=1080&crop=smart&auto=webp&s=4a8b0e287d050399f4964291e8fd3a9cb295f9a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3R21Mc5YtrA4LmgtN1cRd2ncWt3tdmLP25lypokAinA.jpg?auto=webp&s=ac294c3861687b1ab1fd0e1e1ec12b2a004af61b', 'width': 1200}, 'variants': {}}]}
Running on 3070’s
7
I have access to multiple 3070s (10 of them). I was wondering what the largest model is that I could run using several of them. I know there will be diminishing returns as I add GPUs due to the communication bottleneck, and I understand the memory limitation of the cards (8 GB each). The question is: what is possible?
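For a rough upper bound this is mostly VRAM arithmetic: ten 3070s give 80 GB total, and a ~4-bit quantization needs roughly 0.6 bytes per parameter plus headroom for the CUDA context and KV cache. A back-of-the-envelope sketch (the constants are rough assumptions, not measurements):

```python
num_gpus = 10
vram_per_gpu_gb = 8       # RTX 3070
usable_fraction = 0.9     # headroom for CUDA context / KV cache
bytes_per_param = 0.6     # ~4.5 bits/weight, e.g. a Q4_K_M-style quant

usable_gb = num_gpus * vram_per_gpu_gb * usable_fraction
max_params_b = usable_gb / bytes_per_param  # billions of parameters
print(f"~{usable_gb:.0f} GB usable -> roughly a {max_params_b:.0f}B-param 4-bit model")
# ~72 GB usable -> roughly a 120B-param 4-bit model
```

So a quantized 70B fits comfortably; the catch is that layer-split inference shuttles activations over PCIe between all ten cards, so tokens per second will sag long before the memory ceiling is reached.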
2023-10-07T14:26:56
https://www.reddit.com/r/LocalLLaMA/comments/1727ngf/running_on_3070s/
Ok-Ant6718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1727ngf
false
null
t3_1727ngf
/r/LocalLLaMA/comments/1727ngf/running_on_3070s/
false
false
self
7
null
eGPU for inference?
1
The title says it all. Is it feasible to get an external GPU, or would the slower bus to the main system limit performance severely?
2023-10-07T14:10:46
https://www.reddit.com/r/LocalLLaMA/comments/1727ari/epgu_for_inference/
HolyMole23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1727ari
false
null
t3_1727ari
/r/LocalLLaMA/comments/1727ari/epgu_for_inference/
false
false
self
1
null
Run Mistral 7B Model on MacBook M1 Pro with 16GB RAM using llama.cpp
54
Yesterday I was playing with Mistral 7B on my Mac. Here's the step-by-step guide: [https://medium.com/@mne/run-mistral-7b-model-on-macbook-m1-pro-with-16gb-ram-using-llama-cpp-44134694b773](https://medium.com/@mne/run-mistral-7b-model-on-macbook-m1-pro-with-16gb-ram-using-llama-cpp-44134694b773) Regarding performance: the Q6_K quantized version requires ~8GB of RAM: llama_print_timings: sample time = 213.32 ms / 242 runs ( 0.88 ms per token, 1134.44 tokens per second) llama_print_timings: prompt eval time = 936.32 ms / 44 tokens ( 21.28 ms per token, 46.99 tokens per second) llama_print_timings: eval time = 12937.08 ms / 241 runs ( 53.68 ms per token, 18.63 tokens per second) https://preview.redd.it/rhp0fbfbdssb1.png?width=720&format=png&auto=webp&s=dcc1783ab886c0c6a679096260dff73e4837e52c https://preview.redd.it/9whgus5adssb1.png?width=720&format=png&auto=webp&s=f5b9d33d2b37e9e7db29dd3d451d820af8e62917
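If you would rather drive the same GGUF file from Python than from the llama.cpp binary, llama-cpp-python exposes it with Metal offload on Apple Silicon; a minimal sketch (the model filename is whichever quant you downloaded):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="mistral-7b-instruct-v0.1.Q6_K.gguf",  # assumed local GGUF file
    n_gpu_layers=-1,  # offload every layer to Metal on Apple Silicon
    n_ctx=4096,
)
out = llm("[INST] Explain mmap in one sentence. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```

Expect throughput in the same ballpark as the CLI numbers above, since both paths run the identical llama.cpp kernels underneath.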
2023-10-07T13:59:09
https://www.reddit.com/r/LocalLLaMA/comments/17271qg/run_mistral_7b_model_on_macbook_m1_pro_with_16gb/
Greg_Z_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17271qg
false
null
t3_17271qg
/r/LocalLLaMA/comments/17271qg/run_mistral_7b_model_on_macbook_m1_pro_with_16gb/
false
false
https://b.thumbs.redditm…lcBUzZxbq1vo.jpg
54
{'enabled': False, 'images': [{'id': 'TH2a8tkiPyak8uJXplFsKG1qcAxJI_BOm0YteF0VGEY', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=108&crop=smart&auto=webp&s=3e37e4486e60576258cd4f2891c2f11893dc489d', 'width': 108}, {'height': 262, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=216&crop=smart&auto=webp&s=3618cd542a2bcc077c6e7c311c8a7c40f44707a2', 'width': 216}, {'height': 389, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=320&crop=smart&auto=webp&s=f368b6433bd65018853fcb38e20bc6291614e9fa', 'width': 320}, {'height': 779, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?width=640&crop=smart&auto=webp&s=04311dba63545463e43349705a1931bbd8485ac9', 'width': 640}], 'source': {'height': 1059, 'url': 'https://external-preview.redd.it/FgdlKVx5X3YPdYoEvjhqBuzJEWrNJvz2IOp0hxb0oHk.jpg?auto=webp&s=0facf93de293b7d66e9a3e4f72ece636d59f256d', 'width': 870}, 'variants': {}}]}
Which LLM for NSFW output?
0
[removed]
2023-10-07T13:10:59
https://www.reddit.com/r/LocalLLaMA/comments/172627c/which_llm_for_nsfw_output/
topfpflanze187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
172627c
false
null
t3_172627c
/r/LocalLLaMA/comments/172627c/which_llm_for_nsfw_output/
false
false
default
0
null
Think it's onto me. I used CodeLlama 7B, trying to get it to make some code for me.
63
https://preview.redd.it/63hb2c8y0ssb1.png?width=1787&format=png&auto=webp&s=a741f414ebbcb0130d97e41a4bc010bdb81d6a89
2023-10-07T12:46:57
https://www.reddit.com/r/LocalLLaMA/comments/1725l6w/think_its_onto_me_i_used_the_codellama_7b_trying/
The_Dot_Jake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1725l6w
false
null
t3_1725l6w
/r/LocalLLaMA/comments/1725l6w/think_its_onto_me_i_used_the_codellama_7b_trying/
false
false
https://b.thumbs.redditm…WovPaZ8CTZ7M.jpg
63
null
Best model for conversation?
3
Hello, guys. I'm trying to build a personal assistant that not only helps me with coding and other stuff but also has some conversational skill (e.g., could understand my emotional state), you know what I mean. I've tried a lot of models and it was quite a task, so is there any model that has such capabilities? Also, if there are any open-source projects or research on this, please mention them.
2023-10-07T12:34:33
https://www.reddit.com/r/LocalLLaMA/comments/1725cg5/best_model_for_conversation/
Medium-Ask-5384
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1725cg5
false
null
t3_1725cg5
/r/LocalLLaMA/comments/1725cg5/best_model_for_conversation/
false
false
self
3
null
Can I run a 7B model smoothly from a live Linux USB with 32 GB?
5
Some of you suggested that I run a live Linux USB, install llama.cpp, and then run the model from it. USB sticks here are much cheaper than actual RAM, so I really want to know if it is worth a shot: rather than upgrading my RAM, could I just use a USB stick as a substitute to run some models?
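Worth noting what the stick actually has to do: llama.cpp memory-maps the GGUF file, so if the model fits in free RAM it is read from the USB once at startup and then served from memory; the stick only becomes a bottleneck when the model is larger than RAM and pages back in on every token. A quick check you could run from the live session (a sketch; the model path is illustrative, Linux-only):

```python
import os

model_path = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"  # hypothetical file on the stick
model_gb = os.path.getsize(model_path) / 1e9

# MemAvailable is the kernel's estimate of RAM usable without swapping.
with open("/proc/meminfo") as f:
    mem = {line.split(":")[0]: int(line.split()[1]) for line in f}
avail_gb = mem["MemAvailable"] * 1024 / 1e9  # kB -> GB

print(f"model {model_gb:.1f} GB vs {avail_gb:.1f} GB available RAM")
print("fits in RAM" if model_gb < avail_gb else "will page from the USB stick (slow)")
```

A 7B at Q4 is roughly 4 GB, so on a machine with 8 GB or more of RAM the USB's speed barely matters after the initial load.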
2023-10-07T12:08:09
https://www.reddit.com/r/LocalLLaMA/comments/1724uko/are_i_can_running_7b_model_smoothly_through_live/
Merchant_Lawrence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1724uko
false
null
t3_1724uko
/r/LocalLLaMA/comments/1724uko/are_i_can_running_7b_model_smoothly_through_live/
false
false
self
5
null
What could I run on a Mac M1 with 16GB RAM?
2
I'm considering getting a used Mac M1 Mini with 16GB RAM because the price might be reasonable. What could I run with "reasonable" performance? I would be especially interested in running coding models. Anyone got some performance numbers?
2023-10-07T11:58:06
https://www.reddit.com/r/LocalLLaMA/comments/1724njj/what_could_i_run_on_a_mac_m1_with_16gb_ra/
kohlerm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1724njj
false
null
t3_1724njj
/r/LocalLLaMA/comments/1724njj/what_could_i_run_on_a_mac_m1_with_16gb_ra/
false
false
self
2
null
Has anyone tried CollectiveCognition-v1.1-Mistral-7B? It claims to be better than 70B models, but its responses are totally unrelated.
21
I am testing [https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) using Ollama, but the results are totally wrong. I think this fine-tune totally ruined Mistral, but then how is it ranking better than 70Bs? Here is my modelfile: ``` FROM ./collective-mins-q6.gguf TEMPLATE "USER: <prompt>\nASSISTANT:" SYSTEM """""" ``` And here is a sample of the unrelated output: ``` >>> how to kill a process > 1. **Basic Setup**: - Install Node.js and npm (or yarn). - Clone the repository: `git clone https://github.com/yourusername/yourrepository.git`. - Navigate to the project directory: `cd yourrepository`. 2. **Install Dependencies**: Run either of the following commands to install dependencies using npm or yarn: ```npm install``` or ```yarn``` ```
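One likely culprit is the TEMPLATE line rather than the model itself: Ollama Modelfiles use Go-template placeholders such as `{{ .Prompt }}`, so a literal `<prompt>` string means the rendered prompt never contains your question and the model free-associates. The fix would be `TEMPLATE "USER: {{ .Prompt }}\nASSISTANT:"`. A quick way to test the theory from Python against the default local server (the model name is whatever you created; the `raw` flag, which skips the template, is an assumption about your Ollama version):

```python
import requests

# If this returns a sensible answer while `ollama run` does not,
# the TEMPLATE placeholder (literal "<prompt>") is the bug.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "collective",  # hypothetical name of the created model
        "prompt": "USER: how to kill a process\nASSISTANT:",
        "raw": True,     # bypass the Modelfile template, if supported
        "stream": False,
    },
)
print(resp.json()["response"])
```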
2023-10-07T11:36:30
https://www.reddit.com/r/LocalLLaMA/comments/17249wv/had_anyone_tried_collectivecognitionv11mistral7b/
0xPark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17249wv
false
null
t3_17249wv
/r/LocalLLaMA/comments/17249wv/had_anyone_tried_collectivecognitionv11mistral7b/
false
false
self
21
{'enabled': False, 'images': [{'id': '07w0Gqp_q_MPgUmOqgNwNQhsMmK4ypx-gqXtxfPJGik', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?width=108&crop=smart&auto=webp&s=4a30a927a7606913ab536b0caea0a292fc35e1df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?width=216&crop=smart&auto=webp&s=7b8a2a70ecad05fd9e80e808445ec5dda1ae2c7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?width=320&crop=smart&auto=webp&s=8dd0f28381c4f3000860c86bda72796c95bc1c5c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?width=640&crop=smart&auto=webp&s=325b2222606671b005c196de24281b21ee280cad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?width=960&crop=smart&auto=webp&s=d57d76ccee2e85d5ec939bcd651adfc30b5511e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?width=1080&crop=smart&auto=webp&s=cc481e06026faa61c7c6233d36ff97808e26a706', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cb4i1zix3CCzV90cbQR81-LT262ldziuMFIkFtMbFxg.jpg?auto=webp&s=ddb637d0ffdd42a422d013891eef2bbb1e759e60', 'width': 1200}, 'variants': {}}]}
Looking for a fast LLM for RAG
1
[removed]
2023-10-07T10:55:31
https://www.reddit.com/r/LocalLLaMA/comments/1723kfp/looking_for_a_fast_llm_for_rag/
everyoneisodd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1723kfp
false
null
t3_1723kfp
/r/LocalLLaMA/comments/1723kfp/looking_for_a_fast_llm_for_rag/
false
false
self
1
null
I have 16GB VRAM. Which WizardCoder model should I use?
2
I need Python and JavaScript coding (React/Svelte). I am on a Precision workstation laptop with a P5000 and 16GB VRAM. It can run the Phind 34B Q4 model at around 6 tk/s with 35 layers loaded to the GPU. Should I use a 15B model or the Python 34B model?
2023-10-07T10:36:12
https://www.reddit.com/r/LocalLLaMA/comments/17239c3/i_have_16gb_vram_which_wizardcoder_model_to_use/
0xPark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17239c3
false
null
t3_17239c3
/r/LocalLLaMA/comments/17239c3/i_have_16gb_vram_which_wizardcoder_model_to_use/
false
false
self
2
null