title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Local LLM for Intel Mac | LMStudio | 1 | [removed] | 2023-12-07T21:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/18d61lt/local_llm_for_intel_mac_lmstudio/ | discoveringnature12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d61lt | false | null | t3_18d61lt | /r/LocalLLaMA/comments/18d61lt/local_llm_for_intel_mac_lmstudio/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=108&crop=smart&auto=webp&s=027a4a136867b336cfe98d6c39e0251e1ba0b055', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=216&crop=smart&auto=webp&s=9a7929b9305bcdbe407f1f92f6d6ff052daa19b8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=320&crop=smart&auto=webp&s=0734e32327e4cfd403ebf8ffdf65183ffeebc1fc', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=640&crop=smart&auto=webp&s=e3feb83e359c84d1f8cdc966f94a8cc059151e49', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=960&crop=smart&auto=webp&s=870dd5ec30e26b8871e8a59d148f669ddd3c276f', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=1080&crop=smart&auto=webp&s=2ce833a6565a2978737477a6cb1b5893d1ccbcde', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?auto=webp&s=9e4b3880988ad486a361a37c3a53e19ff66ac844', 'width': 1140}, 'variants': {}}]} |
Looking for model for Stable Diffusion & Midjourney | 2 | Is there a model that could understand those both concept well and generate random prompts? | 2023-12-07T21:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/18d5ygl/looking_for_model_for_stable_diffusion_midjourney/ | swwer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d5ygl | false | null | t3_18d5ygl | /r/LocalLLaMA/comments/18d5ygl/looking_for_model_for_stable_diffusion_midjourney/ | false | false | self | 2 | null |
Program for Categorizing and Quantifying Lists? | 3 | Lately everyone talks about needle in a haystack benchmarks. But it’s impossible to get LLMs to give a proper count of matches in a list of data.
Ie - You have a list of Products and you want a count or list of products that match a term. Even with exact match it falls really short.
The only solution i’ve found is to categorize each item one at a time, and then do a count of the matches (in a database or spreadsheet).
Being able to quantify trends, sentiment etc is a major use case for NLP.
Does anyone know of an editing framework or program that’s good for solving this simply? | 2023-12-07T20:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/18d5frk/program_for_categorizing_and_quantifying_lists/ | dalhaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d5frk | false | null | t3_18d5frk | /r/LocalLLaMA/comments/18d5frk/program_for_categorizing_and_quantifying_lists/ | false | false | self | 3 | null |
Google Gemini Ultra a 1.5 TRILLION PARAMETER AI model. (Is it for Real ?) | 1 | 2023-12-07T20:30:27 | https://www.reddit.com/gallery/18d4rzr | ravimohankhanna7 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18d4rzr | false | null | t3_18d4rzr | /r/LocalLLaMA/comments/18d4rzr/google_gemini_ultra_a_15_trillion_parameter_ai/ | false | false | 1 | null | ||
HuggingFace's average score should reflect the chosen columns | 11 | When browsing the HuggingFace LLM leaderboard, I always disable the TruthfulQA column because, to me, it's largely useless.
To anyone from the HuggingFace team, if it's not too much of a bother, I propose that the average score becomes dynamic, so that it reflects only the shown columns. Thanks!
https://preview.redd.it/29g58mlwlx4c1.png?width=1519&format=png&auto=webp&s=a297681b401e1861940328de556c0777ccc47476 | 2023-12-07T20:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/18d4mf0/huggingfaces_average_score_should_reflect_the/ | HideLord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d4mf0 | false | null | t3_18d4mf0 | /r/LocalLLaMA/comments/18d4mf0/huggingfaces_average_score_should_reflect_the/ | false | false | 11 | null | |
IT Veteran... why am I struggling with all of this? | 273 | I need help. I accidentally blew off this whole "artificial intelligence" thing because of all the hype. Everyone was talking about how ChatGPT was writing papers for students and resumes... I just thought it was good for creative uses. Then around the end of September I was given unlimited ChatGPT4 access and asked it to write a PowerShell script. I finally saw the light but now I feel so behind.
I saw the rise and fall of AOL and how everyone thought that it was the actual internet. I see ChatGPT as the AOL of AI... it's training wheels.
I came across this sub because I've been trying to figure out how to train a model locally that will help me with programming and scripting but I can't even figure out the system requirements to do so. Things just get more confusing as I look for answers so I end up with more questions.
Is there any place I can go to read about what I'm trying to do that doesn't throw out technical terms every other word? I'm flailing. From what I've gathered it sounds like I need to train on GPU's (realistically cloud because of VRAM) but running inference can be done locally on CPU as long as a system has enough memory.
A specific question I have is about quantization. If I understand correctly, quantization allows you to run models with lower memory requirements but I see it can negatively impact output. Does running "uncompressed" (sorry, I'm dumb here) also mean quicker output? I have access to retired servers with a ton of memory. | 2023-12-07T20:18:48 | https://www.reddit.com/r/LocalLLaMA/comments/18d4i92/it_veteran_why_am_i_struggling_with_all_of_this/ | Smeetilus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d4i92 | false | null | t3_18d4i92 | /r/LocalLLaMA/comments/18d4i92/it_veteran_why_am_i_struggling_with_all_of_this/ | false | false | self | 273 | null |
AL/ML Model - ADVICE REQUESTED | 1 | Alright, I am going to start with a simple request and then go into more detail for those who care to learn more.
REQUEST - Evaluating cost of local hardware and data storage options vs. cloud machine learning options. Seeking advice, debate, opinions, etc.
LONG VERSION:
I am a 12-year vet in the construction design-build industry. Background in Architectural Engineering. Left corporate construction industry, working as a consultant for subcontractors and self-perform construction contractors in the world of automation, data organization, etc.
I have a very large scale project that I am actively working on and pursuing that involves multiple machine learning and AI chatbot interfaces. I am working with various sizes of datasets - mainly unstructured. Data lake would be “organized” for what I am dealing with.
Some points that I am considering in this:
1. Size, Type of data that will be stored, need to processed and trained.
2. Principles for tokenization and vectors and attention mechanisms that can be applied specific to this data so I can evaluate costs, but also clean and pre-process this data more effectively.
3. Model and chatbot interfaces that are built on foundational template and key information.
4. RAG and other options to take foundational models, learn new enterprise dataset info and apply to solutions.
I want to evaluate long-term (5, 10, 15-year) options that take into consideration hardware and cloud services to create models, store enterprise data, process data, train it, etc.
You are probably more versed in this world than I am but that is why I’m here. I have multiple hardware quotes, have done a ton of research, etc. But this isn’t my world. I need someone that can help me build out my evaluation and provide some high-level consultation. | 2023-12-07T19:48:30 | https://www.reddit.com/r/LocalLLaMA/comments/18d3tg7/alml_model_advice_requested/ | StreamConst | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d3tg7 | false | null | t3_18d3tg7 | /r/LocalLLaMA/comments/18d3tg7/alml_model_advice_requested/ | false | false | self | 1 | null |
Noramaid 20b q3_k_m vs 13b q5_k_n GGUF: what an amazing improvement! (running on Mac M1 16GB) | 26 | Until now I had been using 7b, 11b, and 13b models, thinking this was the most my Mac mini M1 with 16GB could handle. I tried 34b models but could not get them to work. Yesterday, I started experimenting with 20b models, and to my surprise they worked up to q3. However the results of all the merges I tried were not so great, and I found I was getting better results from 13b models.
I knew from trying 11b models, like Starling, that a model merged with itself actually improved quite a lot. My favourite 13b models being Tiefighter and Noromaid, I noticed that Normaid had a 20b version, which I assume is a self-merged.
I just finished doing a round of testing, and I am blown away by the quality of Noromaid 20b q3\_k\_m! There is no comparison with the 13b q5\_k\_m version. Even though I am only running it with a 1024 context size vs 2048 for the 13b model. Incidentally, on the 13b models I found that I get better results with smaller context size but higher quantisation (2048 + q5) than higher context size with lower quantisation (3072 + q4 or 4096 + q3).
Here's the link for the model:
[https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1-GGUF](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1-GGUF)
To summarise, what I found to get the best quality, is to prioritise the following factors in this order:
1. Highest native or self-merge model size (20b > 13b > 11b > 7b)
2. Highest quantisation (q5 > q4 > q3 > q2)
2. Largest context size | 2023-12-07T19:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/18d3aan/noramaid_20b_q3_k_m_vs_13b_q5_k_n_gguf_what_an/ | ex-arman68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d3aan | false | null | t3_18d3aan | /r/LocalLLaMA/comments/18d3aan/noramaid_20b_q3_k_m_vs_13b_q5_k_n_gguf_what_an/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'ioWZ-R8E3nf_CMzOT-SWWPMW35E9RlSbXbmhaIdykNg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?width=108&crop=smart&auto=webp&s=8a3b28b122c2f4aeb6efd32bb5aa365684e15fc4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?width=216&crop=smart&auto=webp&s=651f1fbffb2f2a8a5ade1bfc29b4d36ab5dae6c8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?width=320&crop=smart&auto=webp&s=e82673bf33c7683b2da8140edf3a840b7fcc898c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?width=640&crop=smart&auto=webp&s=53e7ff1a990386a0bb99e547c566f5debc844703', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?width=960&crop=smart&auto=webp&s=419ffb717f2cc2517f328b4cb61492c58594f9c4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?width=1080&crop=smart&auto=webp&s=7339aa94f8aef3d573efe8916af5e90d84c860da', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jiY6FgRudt5O4rx3eX3gOeU0sTHMqOiM5qvYGS0wUcc.jpg?auto=webp&s=45eed8cc48d7bfec494f7bd0a02f78fe0c97b1ba', 'width': 1200}, 'variants': {}}]} |
What personalization tools would you use? | 2 | My team plans to launch an open-source project soon. We are interested in helping everyday people safely and freely apply their personal data to AI. Keen to hear your ideas about what kinds of personalization tools would benefit the r/localllama community:
* Loading personal data: make it easier to get personal messages, images, videos, etc. into formats usable for training, inference, and retrieval.
* RAG primitives: populate a personal database and make it easy to query in AI applications.
* Training: simplify the process of fine-tuning a model to mimic you.
* Comprehensive APIs: host personal APIs like OpenAI's, but in which text and image generation is highly personalized.
Of course, if you have other ideas, please share them below! | 2023-12-07T19:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/18d32p5/what_personalization_tools_would_you_use/ | dondochaka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d32p5 | false | null | t3_18d32p5 | /r/LocalLLaMA/comments/18d32p5/what_personalization_tools_would_you_use/ | false | false | self | 2 | null |
Wrap Code LLaMa around Python APIs | 2 | Hi all, I'm with a startup building new software for computer chip designers. We were initially going to enable users to control the software using Python APIs but now we are thinking we can also fine-tune a Code LLaMa on our API and allow it to run as an interpreter to control the software. This would allow users to just ask the software to do many of the functions. I was wondering if anyone here has worked on similar projects. I've seen AutoDesk discuss using this with AutoCAD and some open-source projects with ChatGPT to control Blender. Would love to hear your thoughts. | 2023-12-07T18:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/18d2g5z/wrap_code_llama_around_python_apis/ | yoconman2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d2g5z | false | null | t3_18d2g5z | /r/LocalLLaMA/comments/18d2g5z/wrap_code_llama_around_python_apis/ | false | false | self | 2 | null |
New 3B model from Stability AI | 134 | 2023-12-07T18:11:52 | https://huggingface.co/stabilityai/stablelm-zephyr-3b | OwlOfMinerva_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18d1m1y | false | null | t3_18d1m1y | /r/LocalLLaMA/comments/18d1m1y/new_3b_model_from_stability_ai/ | false | false | 134 | {'enabled': False, 'images': [{'id': '7eQkpitFUQvzUb7O2HMmA1X4shnSMwB8VaKVOon8PSo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?width=108&crop=smart&auto=webp&s=5b4c586cc9a34682769fbd65a14913d42cdb6325', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?width=216&crop=smart&auto=webp&s=715b54bcd75306ad3a704c27a0e83e238a00f534', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?width=320&crop=smart&auto=webp&s=f59b39aba997bacf93ce402feb86e1f82db6be12', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?width=640&crop=smart&auto=webp&s=d2b836d47fc09975644085b6e28631c613d2de96', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?width=960&crop=smart&auto=webp&s=32e311dab872b2b65d65f52080e8ab3490cb143f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?width=1080&crop=smart&auto=webp&s=d0dbbc29be7376af8bfd766f83d370aba88f64ad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7r77seMS_wwdblOAOLTRVSP2tHI1wGuJl6DDVs2dces.jpg?auto=webp&s=125ecd6fa648c3b034fad5707d397bdcdfc7f663', 'width': 1200}, 'variants': {}}]} | ||
Otherbrain: vibe reviews + benchmarks for open LLMs | 6 | hey ya'll, wanted to share a little website my friend and I built for browsing open LLMs.
Like many here, I like to try out all the new models. The quantitative evals on open llm leaderboard are awesome, but they don't capture the vibe/essence/qualia. Especially for multi-turn conversation (chat!) the top ranked ones are sometimes not the most fun. Like with my friends, I want genius models who are **also** chill to hang out with. For that reason I love the more subjective review threads that pop up here sometimes and wanted to make a browsable list to kind of unify the vibe-reviews and benchmarks.
Of course, the idea snowballed a little bit. We also added a way to push sample chats (good and bad) from my GGUF client FreeChat (and hoping to find more clients who want to contribute). I think they're fun to read and will hopefully grow into a useful (free) dataset for training better models.
Anyway, lemme know what you think and thanks for checking it out! <3
# [https://www.otherbrain.world](https://www.otherbrain.world)
​
​
[if only i had 15 \(and enough bandwidth\) for each of TheBloke's quants https:\/\/twitter.com\/\_jasonwei\/status\/1707102665321365793\/photo\/1](https://preview.redd.it/vd2b1ek2lw4c1.jpg?width=1176&format=pjpg&auto=webp&s=344c8ce32480d348913b9d4cab5911ece1837e3d)
​ | 2023-12-07T17:13:27 | https://www.reddit.com/r/LocalLLaMA/comments/18d0csc/otherbrain_vibe_reviews_benchmarks_for_open_llms/ | sleeper-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d0csc | false | null | t3_18d0csc | /r/LocalLLaMA/comments/18d0csc/otherbrain_vibe_reviews_benchmarks_for_open_llms/ | false | false | 6 | null | |
Need help for asking the AI to select the right function | 1 | Hi all,
I'm exploring ways for AI to select the right function based on user input and seeking community insights. I've tried LangChain but found it complex for practical use. Grammar with GBNF files seems promising but might require lengthy initial prompts.
Has anyone experimented with alternatives, especially with models like 7B Mistral? Looking for strategiesfor accurate in function selection . Any experiences or suggestions on this would be greatly appreciated.
Thanks! | 2023-12-07T17:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/18d0aal/need_help_for_asking_the_ai_to_select_the_right/ | Toni_rider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18d0aal | false | null | t3_18d0aal | /r/LocalLLaMA/comments/18d0aal/need_help_for_asking_the_ai_to_select_the_right/ | false | false | self | 1 | null |
Venus 120b 8 bit hosted here | 19 | Venus 120b 8 bit is here - warning very NSFW apparently (but I haven't tried to stir it up yet either)
[https://projectatlantis2.eu.ngrok.io/chat.html](https://projectatlantis2.eu.ngrok.io/chat.html)
​
Goliath 120b 6 bit is here
[https://projectatlantis.eu.ngrok.io/chat.html](https://projectatlantis.eu.ngrok.io/chat.html)
​
Try to keep chats private unless you want trolls
No signon required and no one is monitoring your degen chats. If Audreyana calls Hotel Security, it actually sends an alarm but we don't have time to babysit - still a WIP
THESE ARE UNCENSORED MODELS and Jenni is your current gf so don't act all prudish. btw there's nothing stopping you from pushing her wandering hands away | 2023-12-07T16:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/18czrsz/venus_120b_8_bit_hosted_here/ | LookingForTroubleQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18czrsz | false | null | t3_18czrsz | /r/LocalLLaMA/comments/18czrsz/venus_120b_8_bit_hosted_here/ | false | false | self | 19 | null |
vLLM support AMD GPUs in older MacBook Pros? | 1 | vLLM calls out optimized cuda kernels. But what about other architectures? I have a pre-M1 MBP with AMD GPU. Llamacpp (and some others) seem to clearly support these architectures decently.
(Once I get back to local LLMs, I’ll probably need to look into upgrading to M1 or M2. But llamacpp works decently on bloke quantized 7b models… so…) | 2023-12-07T16:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/18cz3db/vllm_support_amd_gpus_in_older_macbook_pros/ | Puzzleheaded-Fly4322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cz3db | false | null | t3_18cz3db | /r/LocalLLaMA/comments/18cz3db/vllm_support_amd_gpus_in_older_macbook_pros/ | false | false | self | 1 | null |
What tools or programs have you made or are working on? | 13 | I'm just curious if anyone has had success developing their own tools with LLMs. It seems that all of the tutorials only show ChatGPT as the base model. | 2023-12-07T16:10:37 | https://www.reddit.com/r/LocalLLaMA/comments/18cyyyb/what_tools_or_programs_have_you_made_or_are/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cyyyb | false | null | t3_18cyyyb | /r/LocalLLaMA/comments/18cyyyb/what_tools_or_programs_have_you_made_or_are/ | false | false | self | 13 | null |
What do LLM "engineers" at FAANG do besides posting/reposting stuff on Twitter? | 9 | I follow some engineers who works at nvidia, Google, etc. and I don't know how they can get anything done while they're constantly on Twitter, posting things or talking about the most recent advances in AI. | 2023-12-07T15:57:48 | https://www.reddit.com/r/LocalLLaMA/comments/18cyoez/what_do_llm_engineers_at_faang_do_besides/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cyoez | false | null | t3_18cyoez | /r/LocalLLaMA/comments/18cyoez/what_do_llm_engineers_at_faang_do_besides/ | false | false | self | 9 | null |
Can i run llama2 7b out of the box with the following specs? | 1 | ​
[Intel Core i5 7500](https://preview.redd.it/kni2jpcd8w4c1.png?width=405&format=png&auto=webp&s=4bfed52faaf1aa5a20044b95480e54999ef4d32f)
​
[NVIDIA GeForce GTX 1050 Ti](https://preview.redd.it/fpg099ug8w4c1.png?width=401&format=png&auto=webp&s=b6162b504981a1c3fb69eff56a5bee1657b131d5) | 2023-12-07T15:45:12 | https://www.reddit.com/r/LocalLLaMA/comments/18cyeox/can_i_run_llama2_7b_out_of_the_box_with_the/ | jaykeerti123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cyeox | false | null | t3_18cyeox | /r/LocalLLaMA/comments/18cyeox/can_i_run_llama2_7b_out_of_the_box_with_the/ | false | false | 1 | null | |
Budget rig for LLM | 2 | Hello everyone,
I'm looking to get a rig to be able to run small LLM models on my it. Currently, I have a lenovo i3 laptop which can't run even simple stuff. I am hoping to spend $2000 at the very max, hopefully less. I have looked at ibuypower and you can put together a rig piece by piece but I don't understand anything about hardware and don't really want to spend much time on that but instead buy a pre-built rig.
Any recommendations? I would really appreciate it | 2023-12-07T15:34:30 | https://www.reddit.com/r/LocalLLaMA/comments/18cy6de/budget_rig_for_llm/ | AerotyneInternationa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cy6de | false | null | t3_18cy6de | /r/LocalLLaMA/comments/18cy6de/budget_rig_for_llm/ | false | false | self | 2 | null |
100% off topic: just cracking up here, testing Gemini's logic capability just gave me my first case of "deduction by moral alignment" | 19 | copied and pasted the following standard logic puzzle
https://preview.redd.it/fmvb43a65w4c1.png?width=1165&format=png&auto=webp&s=0acfdf1abd94384b6a260b1037fd22987a6af1aa | 2023-12-07T15:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/18cy0p4/100_off_topic_just_cracking_up_here_testing/ | hugo-the-second | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cy0p4 | false | null | t3_18cy0p4 | /r/LocalLLaMA/comments/18cy0p4/100_off_topic_just_cracking_up_here_testing/ | false | false | 19 | null | |
Purple llama for the safety | 1 | [removed] | 2023-12-07T15:23:40 | https://ai.meta.com/llama/purple-llama/ | Own_Procedure_8866 | ai.meta.com | 1970-01-01T00:00:00 | 0 | {} | 18cxxp4 | false | null | t3_18cxxp4 | /r/LocalLLaMA/comments/18cxxp4/purple_llama_for_the_safety/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'h_C0aTx1zZ2LbZLhgKw4iMYwyLGUu99Yc17isI8ogME', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=108&crop=smart&auto=webp&s=56923bdb13994baac1b22a9acb26dbfc24a2ffb5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=216&crop=smart&auto=webp&s=3aa5acdf8a69efec05d2e3b7b491b32438102862', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=320&crop=smart&auto=webp&s=2ae15084388d6e68a242fce3b577dc6adb4cea7a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=640&crop=smart&auto=webp&s=c74d9225f096cae09edea09ab512208361f1d753', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=960&crop=smart&auto=webp&s=535cb4d5eb0325b6c3de33875df5dc570957ada4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=1080&crop=smart&auto=webp&s=6105a5fd833f944081148e9d696ceebaa652bc02', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?auto=webp&s=6c24a88115d9fa3cf7c595398e836e508b5c0737', 'width': 3840}, 'variants': {}}]} | |
Meta Releases Llama Guard - the Hugging Edition | 133 | Hey all! I'm the Chief Llama Officer at Hugging Face, and here I am to share some news of the latest Meta release with PurpleLlama and Llama Guard.
**TL;DR**
Meta released a Llama 7B fine-tuned to classify risky prompts and LLM responses. Links below!
**What exactly is it?**
* The model can classify the user inputs and the LLM responses to detect if the content is unsafe.
* What is safe/unsafe is defined by a user-defined taxonomy and harm types. This is flexible, allowing users to configure for their own needs. Existing categories are violence, sexual content, guns, controlled substances, suicide, and criminal planning, but again, users can configure this.
* It can run in free Google Colabed by a user-defined taxonomy and harm types. This is flexible, allowing users to configure it for their own needs. Existing categories are violence, sexual content, guns, controlled substances, suicide, and criminal planning, but again, users can configure this.
https://preview.redd.it/690xcy613w4c1.png?width=2434&format=png&auto=webp&s=9540d13b08e0fd4458c0c45aa601566a636afa57
**Why is it interesting?**
* It's higher quality than OpenAI moderation API
* It's customizable through taxonomies and thresholds
* Can run in free Google Colab
* Great for enterprise use cases where they could not adopt OS tools due to risk
* Small, so quick, and cheap inference!
https://preview.redd.it/yqltp33w3w4c1.jpg?width=680&format=pjpg&auto=webp&s=280c27490c5773b736a57b4aefaa0ceabbb8ae24
**Resources**
* Announcement: [https://ai.meta.com/llama/purple-llama/](https://ai.meta.com/llama/purple-llama/)
* Model on Hugging Face: [https://huggingface.co/meta-llama/LlamaGuard-7b](https://huggingface.co/meta-llama/LlamaGuard-7b)
* Free Colab: [https://colab.research.google.com/drive/16s0tlCSEDtczjPzdIK3jq0Le5LlnSYGf?usp=sharing](https://colab.research.google.com/drive/16s0tlCSEDtczjPzdIK3jq0Le5LlnSYGf?usp=sharing)
​
Spit out!
🦙🤗 | 2023-12-07T15:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/18cxx1d/meta_releases_llama_guard_the_hugging_edition/ | hackerllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cxx1d | false | null | t3_18cxx1d | /r/LocalLLaMA/comments/18cxx1d/meta_releases_llama_guard_the_hugging_edition/ | false | false | 133 | {'enabled': False, 'images': [{'id': 'h_C0aTx1zZ2LbZLhgKw4iMYwyLGUu99Yc17isI8ogME', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=108&crop=smart&auto=webp&s=56923bdb13994baac1b22a9acb26dbfc24a2ffb5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=216&crop=smart&auto=webp&s=3aa5acdf8a69efec05d2e3b7b491b32438102862', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=320&crop=smart&auto=webp&s=2ae15084388d6e68a242fce3b577dc6adb4cea7a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=640&crop=smart&auto=webp&s=c74d9225f096cae09edea09ab512208361f1d753', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=960&crop=smart&auto=webp&s=535cb4d5eb0325b6c3de33875df5dc570957ada4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=1080&crop=smart&auto=webp&s=6105a5fd833f944081148e9d696ceebaa652bc02', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?auto=webp&s=6c24a88115d9fa3cf7c595398e836e508b5c0737', 'width': 3840}, 'variants': {}}]} | |
Meta announces the Purple Llama project for AI safety and releases Llama Guard, an input-output safeguard model geared towards conversation use cases | 76 | 2023-12-07T15:21:55 | https://ai.meta.com/blog/purple-llama-open-trust-safety-generative-ai | llamaShill | ai.meta.com | 1970-01-01T00:00:00 | 0 | {} | 18cxwcw | false | null | t3_18cxwcw | /r/LocalLLaMA/comments/18cxwcw/meta_announces_the_purple_llama_project_for_ai/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'hiRlVGVaOV9ehsO563ycetHaKoWuBw2O6kYukhLHr6w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?width=108&crop=smart&auto=webp&s=d466d5907c36ab1670d7f0b35537fb49e78ec8cc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?width=216&crop=smart&auto=webp&s=f9986f903911a3839497695c21fc6a3372b353ab', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?width=320&crop=smart&auto=webp&s=53951c3d1c62b1451b863609e9e0cc2390454db7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?width=640&crop=smart&auto=webp&s=4030dd180c91bd4a030c12283ea5ebdb09503537', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?width=960&crop=smart&auto=webp&s=ec6dcd96ba09f1e960818e01d75736efe03bc47c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?width=1080&crop=smart&auto=webp&s=bd9a16e83691cbf23e0d1a7cf5df59d355f74de0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/DWMe4Wdubj3-kifyRwJJe4g-q4KvDP_NJphXOUSxfUs.jpg?auto=webp&s=d37a984c6ec7cdbda76cad056fafb756589f1e11', 'width': 1920}, 'variants': {}}]} | ||
New local LLM application with support for fine-tuning and RAG, built from scratch in C++/CUDA | 1 | [removed] | 2023-12-07T15:16:52 | https://www.reddit.com/r/LocalLLaMA/comments/18cxsfy/new_local_llm_application_with_support_for/ | drydberg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cxsfy | false | null | t3_18cxsfy | /r/LocalLLaMA/comments/18cxsfy/new_local_llm_application_with_support_for/ | false | false | self | 1 | null |
Fine Tuning LLMs | 1 | Hi All
It's my first time working on LLM models, I have experience of more than 1.5 years in the field of data science I have different datasets on which I want to finetune my LLMS on CPU, my dataset contains both numerical as well as textual data, so please guide how shall I approach, how shall I pass my dataset for finetuning whether I should give it in form of prompts or I shall use embeddings for it, etc. This dataset is maintenance data. | 2023-12-07T15:15:04 | https://www.reddit.com/r/LocalLLaMA/comments/18cxqzw/fine_tuning_llms/ | Unique_Cable_7776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cxqzw | false | null | t3_18cxqzw | /r/LocalLLaMA/comments/18cxqzw/fine_tuning_llms/ | false | false | self | 1 | null |
Ava is now Open Source. | 1 | [removed] | 2023-12-07T15:02:33 | https://www.reddit.com/r/LocalLLaMA/comments/18cxhdw/ava_is_now_open_source/ | cztomsik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cxhdw | false | null | t3_18cxhdw | /r/LocalLLaMA/comments/18cxhdw/ava_is_now_open_source/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UNoI3yC1mjiFJLvSWJ9DbDD6fJ3aykqGeEy6D2mQ1QA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?width=108&crop=smart&auto=webp&s=2df0fd92e5c99d1a44e634684d595c314a941274', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?width=216&crop=smart&auto=webp&s=ca425bb1c6c646a673f048bb5ec0f570165338d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?width=320&crop=smart&auto=webp&s=15ce9ecb515e815a0dde19e7fbab01d38404d3dd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?width=640&crop=smart&auto=webp&s=56329f9042d686b9ee4a30049a3ff0ff98c59b52', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?width=960&crop=smart&auto=webp&s=033c10b5b28e83e5adae52dbc8fcb7d7d9d9d507', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?width=1080&crop=smart&auto=webp&s=ce69b1be3e80b083b7730d1bab5944eefe78c939', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0IC-9thP9P6jklegezNExEOZnLoZmMFkjh1-ftSRO5I.jpg?auto=webp&s=f9a539cced39da0c449cd29cd9438cec4026fa0d', 'width': 1200}, 'variants': {}}]} |
[R] Half-Quadratic Quantization of Large Machine Learning Models | 53 | Sharing our work on model quantization.
Blog: [https://mobiusml.github.io/hqq\_blog/](https://mobiusml.github.io/hqq_blog/)
Code: [https://github.com/mobiusml/hqq](https://github.com/mobiusml/hqq)
Models: [https://huggingface.co/mobiuslabsgmbh/](https://huggingface.co/mobiuslabsgmbh/)
No data calibration needed, extremely fast 🚀, works on both language and vision models!
**Why does it matter?**
Quantization significantly reduces GPU memory requirements but degrades the quality of the models. Having faster and more accurate quantization methods is extremely valuable for the ML community.
**Approach:**
Sparsity-based error formulation between the original weights and their dequantized version. We used a Half-Quadratic solver to derive a closed-form solution that is 100x faster than backprop via Pytorch's Autograd.
**Quantization speed:**
\~ 1 minute for Llama2-13B
\~ 4 minutes for LLama2-70B (over 50x faster than GPTQ)
**Findings:**
\- Larger models quantized to 3/2-bit outperform smaller full-precision models with similar or lower memory requirements.
\- Successful 2-bit quantization requires a lower group-size (e.g., 32 or 16) and compression of both the zero-point and the scaling factor for lower memory usage.
While we acknowledge our view might be slightly biased, we genuinely believe that our work will significantly benefit the open-source software (OSS) machine learning community. Code and model are in Apache permissive license. | 2023-12-07T14:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/18cwvqn/r_halfquadratic_quantization_of_large_machine/ | sightio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cwvqn | false | null | t3_18cwvqn | /r/LocalLLaMA/comments/18cwvqn/r_halfquadratic_quantization_of_large_machine/ | false | false | self | 53 | {'enabled': False, 'images': [{'id': 'FumGQDALVEh_t7D5kfO3BEK0YturZRtHrUmM5v3eMic', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/g5GvB9hal8dbUj5kHE7Guc3G3dAipVqjsNsqca9Pj8I.jpg?width=108&crop=smart&auto=webp&s=acf2dbe43ce35b76ddbd32efd03c1ab454d2c05a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/g5GvB9hal8dbUj5kHE7Guc3G3dAipVqjsNsqca9Pj8I.jpg?width=216&crop=smart&auto=webp&s=c84bd17580819be768ad63efe4b55164119a1c4a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/g5GvB9hal8dbUj5kHE7Guc3G3dAipVqjsNsqca9Pj8I.jpg?width=320&crop=smart&auto=webp&s=f534da0b10cef0edfae4ba309f6e41f7f9b4b42a', 'width': 320}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/g5GvB9hal8dbUj5kHE7Guc3G3dAipVqjsNsqca9Pj8I.jpg?auto=webp&s=5c8b7cbebe7957f26a844945d97804f78c07a32b', 'width': 320}, 'variants': {}}]} |
Shisa 7B: a new JA/EN bilingual model based on Mistral 7B | 36 | I've worked w/ Jon Durbin (Airoboros, etc) over the past 6 weeks or so to train [**Shisa 7B**](https://huggingface.co/augmxnt/shisa-7b-v1), a new, fully open source, bilingual Japanese and English model. We took Mistral 7B and pre-trained with an additional 8B JA tokens with a new custom extended tokenizer that is >2X more efficient in Japanese than the original Mistral tokenizer. The new base model, [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) is also available for anyone to build on.
Highlights:
* By open source, we mean *really* open source, not just the weights. The training sets, WandB logs with all the training parameters, and our data and training pipeline (the actual repo we used) is released as well.
* Besides using newer, cleaner datasets for the pre-train, we validated a new approach for multilingual fine-tunes that was almost entirely synthetic/machine-translated that generated a much higher quality training set what was publicly available. This approach can probably be applied to other languages as well (where machine translation is of high quality, but where there aren't appropriate training sets).
* We also played around w/ some fun new stuff: DSIR for the pretrain, NEFTune for the fine-tune, and then a couple runs of a DPO stage as well (the final model is DPO'd).
* I also discovered that [many popular Japanese fine-tuning sets were actually of surprisingly low quality](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets#analysis) and got in touch w/ most of the JP groups using those sets, so hopefully it'll save a lot of wasted GPU cycles being burnt in the future.
[AWQ](https://huggingface.co/TheBloke/shisa-7B-v1-AWQ) and [GPTQ](https://huggingface.co/TheBloke/shisa-7B-v1-GPTQ) quants are available courtesy of (of course) TheBloke. There's no GGUF yet as I discovered something in [llama.cpp's BPE tokenizer is seriously busted](https://github.com/ggerganov/llama.cpp/issues/4360) (affects many other Llama models w/ extended tokenizers), so track that bug if you want to see if that's fixed.
While stronger than all other JA-capable 7B's we found/tested, the model itself is still very much a V1 - turns out Japanese is pretty hard, but we're on our way to bigger and better versions soon. Uh, that being said, we also burned like a lot of compute creds, so uh, drop a line if you have some H100s or MI300s that need a shakeout run or something. 😂
​
[Shi-chan and Sa-chan\/シーちゃんとサーちゃん](https://preview.redd.it/jswg10iorv4c1.jpg?width=1024&format=pjpg&auto=webp&s=98f865f30ae1eb16e38bbc9e2d81179aae8e298f) | 2023-12-07T14:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/18cwh4n/shisa_7b_a_new_jaen_bilingual_model_based_on/ | randomfoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cwh4n | false | null | t3_18cwh4n | /r/LocalLLaMA/comments/18cwh4n/shisa_7b_a_new_jaen_bilingual_model_based_on/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'n0rIi_8RqGgMfmlFYQV1gmCpW8mGPK8jOr2tUaM99bE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?width=108&crop=smart&auto=webp&s=065f496e58f1553af80e64c3bfb090dc9b3fc198', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?width=216&crop=smart&auto=webp&s=03f785b83dacc392cb693deabcf1a48eaa9b0c34', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?width=320&crop=smart&auto=webp&s=7b0b8f86490a874705a97e9d2902c8e47e908931', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?width=640&crop=smart&auto=webp&s=a7248d6d3083227b20f437c4d629e383c23d290a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?width=960&crop=smart&auto=webp&s=490c8e24e79ac810b5312fbd1621f26c01363f30', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?width=1080&crop=smart&auto=webp&s=aaf9a75d7ab2707074372e4fc435cae06eff1ec0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1Re0wuZk41OlvABNG3pTyYO55CkEOgDC7CAJlG5MCl0.jpg?auto=webp&s=152d35435c95c34aa97006f4a8c85c591185e5ad', 'width': 1200}, 'variants': {}}]} | |
What tasks are 7b and 13b models really good at? | 58 | Whether fine tuned or off the shelf. I want to build some toy apps that can take advantage of local LLMs. Open to suggestions :-) | 2023-12-07T14:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/18cwf2t/what_tasks_are_7b_and_13b_models_really_good_at/ | platistocrates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cwf2t | false | null | t3_18cwf2t | /r/LocalLLaMA/comments/18cwf2t/what_tasks_are_7b_and_13b_models_really_good_at/ | false | false | self | 58 | null |
Would buying a second 6700xt make sense for me? | 5 | I own one already and am having a blast with 13b models.
However when running on CPU I find that the 30b models tend to do much better for what I'm after, just at the cost of running much slower (2t/s or so).
I've read that Llama.cpp and other utils can support multiple GPU's, even for AMD.
Would buying a second Rx 6700xt ($250 local) be the quickest/cheapest way for me to get a fast 30b model? | 2023-12-07T14:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/18cw8sp/would_buying_a_second_6700xt_make_sense_for_me/ | Due-Ad-7308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cw8sp | false | null | t3_18cw8sp | /r/LocalLLaMA/comments/18cw8sp/would_buying_a_second_6700xt_make_sense_for_me/ | false | false | self | 5 | null |
Best model for TikTok script writing? | 1 | Was running Mistral for a while. Curious if there are any models that would work well for TikTok script writing(or any short form video script writing) that would run well on 32gb ram and 3070 8gb.
Cheers. | 2023-12-07T12:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/18cugrj/best_model_for_tiktok_script_writing/ | hAReverv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cugrj | false | null | t3_18cugrj | /r/LocalLLaMA/comments/18cugrj/best_model_for_tiktok_script_writing/ | false | false | self | 1 | null |
How do I install flash attention on aws sage maker? | 1 | I am trying to run together ai's llama 2 7b 32 k. I am having trouble running it as I am not able to install flash attention. It was asking me to update nvidia GPU driver, which I don't know how. | 2023-12-07T11:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/18ctw15/how_do_i_install_flash_attention_on_aws_sage_maker/ | Conscious-Mixture-69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ctw15 | false | null | t3_18ctw15 | /r/LocalLLaMA/comments/18ctw15/how_do_i_install_flash_attention_on_aws_sage_maker/ | false | false | self | 1 | null |
ByteDance AI Promises Stronger than Gemini Open Weight GPT Dropping Soon | 1 | Quanquan Gu, Director of AI research at ByteDance (a.k.a) TikTok says they are releasing a model more powerful than Gemini soon, and the weights will be open.
Link to tweet: https://x.com/quanquangu/status/1732484036160012798?s=46&t=PrjrkAR1kTsMY6PSz7tC2w
This will be interesting | 2023-12-07T11:35:57 | johnolafenwa | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18ctq5y | false | null | t3_18ctq5y | /r/LocalLLaMA/comments/18ctq5y/bytedance_ai_promises_stronger_than_gemini_open/ | false | false | 1 | {'enabled': True, 'images': [{'id': '3ZqsPGZ6OHYPSX9KnLXDuC4gT2V75TA259iQ8w_A_b0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ynsgnu120v4c1.jpeg?width=108&crop=smart&auto=webp&s=911ee748c0e70a574e435924effed4aac271b7f9', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ynsgnu120v4c1.jpeg?width=216&crop=smart&auto=webp&s=ea5110ca977efbe05461766a537ec7694fe69e52', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ynsgnu120v4c1.jpeg?width=320&crop=smart&auto=webp&s=ffdfeb286148f51d3c056ca8526d4cc4ab53a9da', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ynsgnu120v4c1.jpeg?width=640&crop=smart&auto=webp&s=00ea4573d27fa9fd025658ffbc57dd4968c9ca78', 'width': 640}], 'source': {'height': 1624, 'url': 'https://preview.redd.it/ynsgnu120v4c1.jpeg?auto=webp&s=952baeb74340997574c9881bba5756e90b7eed93', 'width': 750}, 'variants': {}}]} | ||
FP8 quantized results are bad compared to int8 results | 7 | I have followed the instructions on [https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/llama](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/llama) to convert the float16 Llama2 13B to FP8 and build a tensorRT-LLM engine.
I have compared the results of int8 quantized model and the fp8 quantization. I could see gibberish results with FP8 even for very simple prompts.
I have used CNN-Dailymail dataset to quantize with a calib size of 512. Is my calibration size too low that is resulting in this poor performance? | 2023-12-07T11:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/18ctfs6/fp8_quantized_results_are_bad_compared_to_int8/ | sanjeevr5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ctfs6 | false | null | t3_18ctfs6 | /r/LocalLLaMA/comments/18ctfs6/fp8_quantized_results_are_bad_compared_to_int8/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'y6fzLBix7wxzYRH5YaYjY_I2mxUs2Eo1O1-nWBMHGJQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?width=108&crop=smart&auto=webp&s=472585d792b3b0bad239f2f89b4af46aa59fc47a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?width=216&crop=smart&auto=webp&s=b36c04fd8c659535b51cf56789ee4c928f7bdb26', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?width=320&crop=smart&auto=webp&s=c6cba5e11a07e8f0ac788d8ea8d73d3c10508599', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?width=640&crop=smart&auto=webp&s=d008ae537713420f09aed71e3c18513b09cd3c44', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?width=960&crop=smart&auto=webp&s=6f0f553b47ac9478d26e34a6095f5f8c3490a366', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?width=1080&crop=smart&auto=webp&s=5ef32bbb3e04757e8bd16db54a34324f3108fc5f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c14uuktBpmjhz1Jvw9Gx_JUtlShry-mnM7RzNm1vRh0.jpg?auto=webp&s=0c8d2c7ed0bec93aea0dce67983f65f797508969', 'width': 1200}, 'variants': {}}]} |
InfiniteBench: 100k+ long-context benchmark ! | 84 | [https://github.com/OpenBMB/InfiniteBench](https://github.com/OpenBMB/InfiniteBench)
​
[https://huggingface.co/datasets/xinrongzhang2022/InfiniteBench](https://huggingface.co/datasets/xinrongzhang2022/InfiniteBench)
InfiniteBench is a 100k+ long-context benchmark in NLP. And 10/12 tasks are new. We benchmark GPT 4 Turbo 128k, Claude 2.1, Kimi Chat and Yarn-Mistral-7B-128K on it.
​
[Main results](https://preview.redd.it/xl7n4v1iuu4c1.jpg?width=1016&format=pjpg&auto=webp&s=8412fa63b8890df5906817a9a83daf32709cf6d6)
InfiniteBench is quite challenging, even for GPT 4. | 2023-12-07T11:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/18ct9xh/infinitebench_100k_longcontext_benchmark/ | Motor_Park_6288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ct9xh | false | null | t3_18ct9xh | /r/LocalLLaMA/comments/18ct9xh/infinitebench_100k_longcontext_benchmark/ | false | false | 84 | null | |
Where can you hire people for curating and sanitizing custom datasets? | 4 | I'm curious if there's a job/gig platform where I can hire people for specific manual data curation, editing and cleaning for large language models. I have a large unstructured corpus that needs to be converted into training data. At first glance, Fiverr doesn't seem to have the quality and expertise I'm looking for. | 2023-12-07T10:26:21 | https://www.reddit.com/r/LocalLLaMA/comments/18cspu0/where_can_you_hire_people_for_curating_and/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cspu0 | false | null | t3_18cspu0 | /r/LocalLLaMA/comments/18cspu0/where_can_you_hire_people_for_curating_and/ | false | false | self | 4 | null |
Connecting tts-generation-webui/ Api to work with Oobabooga text generation? | 1 | [removed] | 2023-12-07T09:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/18cscce/connecting_ttsgenerationwebui_api_to_work_with/ | Independent-Bill-770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cscce | false | null | t3_18cscce | /r/LocalLLaMA/comments/18cscce/connecting_ttsgenerationwebui_api_to_work_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yOhoFUijV1Qpvdga6nrJLjLTAW0_H401hw-7bwbx0CQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?width=108&crop=smart&auto=webp&s=5e5dd16d67b55c2ea8fb4d5aa21d2f8059af6736', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?width=216&crop=smart&auto=webp&s=adba50fe99517d9aa237ba6b2041f45017f3daca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?width=320&crop=smart&auto=webp&s=0b51de08d4c4a23dde27e91c5f6427cc2086920b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?width=640&crop=smart&auto=webp&s=b3ec00085631c7b7a18cf77b0539a64bfbf3b700', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?width=960&crop=smart&auto=webp&s=9bc72ae30d9cc1526070f9e7ed8193f6f90297dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?width=1080&crop=smart&auto=webp&s=db8e93809188f226f7b61f69785a04a138bdc81e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ORHVTszanPc8VeDwlfUFZU2D6feujxJW9YkSgHU99lE.jpg?auto=webp&s=211a08646044210a2bcdef3ceb138146cf9fccd5', 'width': 1200}, 'variants': {}}]} |
The last LLM (or Over-hype will cause another Winter) | 1 | The winter (maybe) is coming
So Google's Gemini has dropped yesterday, and given Google's massive resources (TPUs, Google cloud & AI Labs (Brain and Deepmind now merged and called Google Deepmind) and access to multimodal data (YouTube, Search & Gmail at the least).
I am telling you this is the last LLM you will see, any further improvement will be incremental and won't be "felt" during conversation with th bots.
I talked before about how the Kaplan laws for model parameters to data information capacity (number of bits) which should be roughly equal according to the Kaplan laws (there is much depth in that)
The question was given a compute budget, what is the optimal data mix and model size, or given data size, what is the optimal model size and compute to use for training.
Two breakthroughs and one large effort led to ChatGPT
1. Deepmind Gato (no much information about the model, but multimodal for sure, Language Model, & some sort of underlying techniques from AlphaZero reinforcement Learning and policy stuff)
2. Chinchilla research paper re-visitint the question of Kaplan law for scaling data to model size (which was roughly equal number of bits to equal number of model parameters, but chinchilla shows that while this holds with their equation for small models (millions of parameters), it starts to drift for larger models, in the realm of billions of parameters).
This alarmed OpenAI,
Because Google Deepmind is now making breakthroughs dominating the AI scene, they already encroched on RL models playing games (OpenAI dota & deepmind alphastar for StarCraft)
And now the last stronghold for OpenAI (GPT) is now being besieged by Gato, Gopher, Chinchilla & Flamingo Models...etc, even PaLM & LaMDA (which was earlier than ChatGPT, and one google engineer believed it was sentient, panicked and leaked to the public and was subsequently fired).
Chichilla paper wasn't developing the biggest models to test their hypothesis (after all it was a hypothesis and probably didn't have enough compute budget allocated to that).
And when they released the amount of available data was much more than their test points, but they took a subset of the data and reduced model sizes, OpenAI did the opposite, after the paper proved the new scaling law, OpenAI seems to have allocated maximum compute according to Chinchilla scaling laws, that wasn't enough on its own, so they trained the PPO, which required new data, and the discovey was the model was starting to learn how to answer (InstructGPT)
This was because Language Models (and their embeddings) are a belief net (a man to a woman is like a king to a queen in your embedding, so it makes since to complete, "he the king and she the ..." with the word queen, so when you give it examples like where is Paris?, and answer "in France", it will start to link the question where with spatial relation in the embedding (Paris is to France like Rome is to Italy, in the embedding space), so when you ask "new" question like where is Rome?, it will complete with Italy
And that isn't logical reasoning or problem solving at all
Moreover, most benchmarks like MMLU are solved with 5 shots or sth like that (this is not like human quizzes, so it isn't sufficient to say to the public a human expert will do 89% of MMLU but "5-shot" GPT4 did some "86%", it is not apples to apples here
So OpenAI paid a lot to cheap African labor to generate huge sets of answers to ChatGPT and interactions (if you have contributed to Open assistant dataset, it has two main ways to collect data, one is to allow human labelers to chose which is the best answer, from many answers (which why GPT interface receives your feedback on their platform), and the other is for you to suggest an answer (so you may see questions like in which company to invest and you can be biased and say for an example Tesla or you can be diplomatic and smart ass give no answer but smart sounding BS that sums up to "it depends")
All of this is not logical reasoning and will affect the bots quality and performance (it has littl value in analyzing unstructured human data though).
So OpenAI maxed data out and created ChatGPT to collect more data, which allowed them to train GPT4 on the troves of collected data and by acquiring more compute budget from Microsoft
That was it, their hand is drawn nothing more
Google, did similar things, first they had some data (but they were keen on pathways models, that is models that doesn't need the entire model to compute it's token response, which is harder, but if done would be much more efficient). | 2023-12-07T09:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/18cs1f1/the_last_llm_or_overhype_will_cause_another_winter/ | True_Giraffe_7712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cs1f1 | false | null | t3_18cs1f1 | /r/LocalLLaMA/comments/18cs1f1/the_last_llm_or_overhype_will_cause_another_winter/ | false | false | self | 1 | null |
Models =< 3B capable of instructions? | 4 | Hi all, I have a huge scraper operation where I need to extract lots of information in Spanish.
My main constraint is that I don't have powerful machines (my scraper in fact runs in an old laptop Atom + 4GB with debian, that I use as server) and it isn't wise for me right now to spend money.
I need to extract from varying lenght texts:
- Dates
- Prices
- Locations
I've tried with several NER and QA models in HF with no success. They get some of the info, but it's very limited.
I wen't with the OpenAI API and function calling, it's really really cool, but I'd have to call the API many, many times, and that cost money + it makes the process very slow as it's over network.
So I went with =<3B models, but It seems they are more geared towards storytelling and such.
For testing LLMs I'm using Kobold, and until now I've tried TinyLlama 1.1b and rocket 3b, but under prompts like "Give me the date of this event" they just go on their own and come back with some text that has nothing to do.
So, I'm wondering, Is this the state of the art of such small models? Or I am doing something wrong?
It's been a few weeks, and I'm having an information overload, so I'm basically asking for directions here. | 2023-12-07T09:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/18cs08r/models_3b_capable_of_instructions/ | iagovar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cs08r | false | null | t3_18cs08r | /r/LocalLLaMA/comments/18cs08r/models_3b_capable_of_instructions/ | false | false | self | 4 | null |
How flexible are LoRAs? | 3 | I see a lot more interesting ways to make models nowadays, like model-merges and frankenstein models together, is this something we can do with LoRAs as well? Have we tried some creative ways to build adapters? | 2023-12-07T09:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/18crvy9/how_flexible_are_loras/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18crvy9 | false | null | t3_18crvy9 | /r/LocalLLaMA/comments/18crvy9/how_flexible_are_loras/ | false | false | self | 3 | null |
How important is fine-tuning as foundational models get better? | 1 | [removed] | 2023-12-07T09:24:26 | https://www.reddit.com/r/LocalLLaMA/comments/18crvwm/how_important_is_finetuning_as_foundational/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18crvwm | false | null | t3_18crvwm | /r/LocalLLaMA/comments/18crvwm/how_important_is_finetuning_as_foundational/ | false | false | self | 1 | null |
Set up your own LMQL environment. | 6 | **"Note: This article is written in Chinese and then translated into English using ChatGPT."**
Recently, Large Language Models (LLM) have become very popular, and many related projects have emerged. Today, I want to introduce an interesting project in the LLM ecosystem called LMQL.
First, let's talk about LMQL (as found on LMQL's GitHub). LMQL is a programming language designed for Large Language Models (LLM), based on a superset of Python. LMQL offers a novel way to blend traditional programming with the capability to invoke LLMs. It goes beyond conventional template languages by natively integrating LLM interactions at the program code level.
In layman's terms, it integrates writing code with writing prompts into a unified process. For example
"Say 'this is a test':[RESPONSE]" where len(TOKENS(RESPONSE)) < 25
This way, the output of the LLM can be strictly controlled to be under 25 tokens.
"Based on this, the overall sentiment of the message\
can be considered to be[CLS]" where CLS in [" positive", " neutral", " negative"]
The output of the LLM can be strictly controlled to fall into one of three categories: positive, neutral, or negative. For more advanced uses of LMQL and related principles, please refer to the LMQL official website. LMQL also offers an online demo version (integrated with OpenAI), but due to decoding limitations, some features are not supported (for example, the second example above, CLS in \["positive", "neutral", "negative"\] cannot be used.
​
So, today I'm going to share how to set up your own LMQL playground. First of all, I don't have a graphics card locally, so I use a GCP VM for the demonstration. If you're using a GCP VM like I am, remember to open the corresponding port in the firewall, so that you can use $IP:$PORT to connect to the LMQL playground after it's set up.
Step 1) For the basic environment, I use Docker + Nvidia-driver + Nvidia/CUDA container. Step 2) The rest involves using Python 3.10 + LLaMa 2 + llama.cpp + llama-cpp-python. The purpose of Step 1 is to sort out the GPU-related environment, and the purpose of Step 2 is to sort out the LMQL-related environment.
**My GCP configuration is Ubuntu 22.04 LTS + L4 GPU + 16GB RAM + 100GB HDD.**
1.1 Update
$ sudo apt-get update
1.2 Install NVIDIA drivers (the driver version may continuously update, please choose an appropriate version yourself).
$ sudo apt-get install nvidia-driver-525
1.3 Restart the computer (after completing these steps, you can use the command nvidia-smi to view the graphics card).
$ sudo reboot
1.4 Install Docker (refer to the [official Docker website](https://docs.docker.com/engine/install/ubuntu/))
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Test whether the installation was successful
$ sudo docker run hello-world
1.5 Install NVIDIA Container Toolkit (refer to the [NVIDIA official website](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)). This step enables containers to interact with the GPU.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
&& \
sudo apt-get update
Install & Restart the Docker Service
$ sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
1.6 Docker pull the image provided by NVIDIA (NVIDIA has integrated the environment for us, just fetch the image to use, so we don't have to deal with some troublesome matters). To choose other CUDA or cuDNN versions, you can refer to the [NVIDIA/cuda repository](https://hub.docker.com/r/nvidia/cuda/tags).
$ sudo docker pull nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04
1.7 To initiate the container, refer to Docker's official website for more detailed instructions.
\--gpus all: Allows the container to use the GPU (this requires the installation of the NVIDIA Container Toolkit mentioned in step 1.5).
\-p: Links the VM's port to the container's port. This is done because the LMQP playground will be running inside the container.
$ sudo docker run --gpus all -p 30300-30305:30300-30305 -it nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04 /bin/bash
1.8 Check if the GPU can be normally detected inside the container (use the command nvidia-smi).
(root@a176328806c7 indicates being inside the container, a176328806c7 is a random naming. The commands here are by default under root, so there's no need to add sudo.)
root@a176328806c7:/# nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA L4 Off | 00000000:00:03.0 Off | 0 |
| N/A 41C P8 17W / 72W | 0MiB / 23034MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
...
Up to this point, Step 1 is completed. The key point of this step is to set up an environment that can use Nvidia CUDA.
In Step 1.6, you can choose different versions of CUDA, cuDNN, and OS.
Step 1.7 involves starting Step 1.6 with a different base image. This method of container isolation helps to keep our local environment clean. If there are other projects with specific CUDA version requirements, they can also be quickly initiated.
If you already have a working CUDA environment, you can skip Step 1 and proceed directly to Step 2. However, I use CUDA version 11.8 and I'm not sure if other versions will work properly. The most annoying thing in doing deep learning is this: you spend so much time on AI, but instead of enhancing AI capabilities, you end up improving your environment setup skills and learning to avoid pitfalls.
Now, let's start with Step 2 (which will be conducted within the container that was just initiated).
​
2.1 Update
root@a176328806c7:/# apt-get update
2.2 Install python3.10 and python3.10-venv python3.10-venv is a necessary component for using venv with Python.
If you choose to install a different version of Python, such as python3.9, you should correspondingly install python3.9-venv.
root@a176328806c7:/# apt-get install python3.10 python3.10-venv
2.3 Move to any directory. Please choose your own working directory. Here, I choose under /home.
root@a176328806c7:/# cd /home/
2.4 Create a Python virtual environment and activate it (myvenv is just a name I chose). If successful, you will see (myvenv) at the beginning.
root@a176328806c7:/home# python3 -m venv myvenv
root@a176328806c7:/home# source myvenv/bin/activate
....
(myvenv) root@a176328806c7:/home#
2.5 Download and install llama2. For more details, please refer to the llama2 GitHub.
(myvenv) root@a176328806c7:/home# apt-get install git
(myvenv) root@a176328806c7:/home# apt-get install wget
(myvenv) root@a176328806c7:/home# git clone https://github.com/facebookresearch/llama.git
(myvenv) root@a176328806c7:/home# cd llama/
(myvenv) root@a176328806c7:/home/llama# pip install -e .
(myvenv) root@a176328806c7:/home/llama# ./download.sh
At this point, a prompt saying "Enter the URL from email" will appear. For this step, you need to go to the llama 2 download address and fill out the form. Once completed, an email will be sent to the specified mailbox.
How to download the models:
Visit the Llama repository in GitHub and follow the instructions in the README to run the download.sh script.
When asked for your unique custom URL, please insert the following:
https://download.llamameta.net/*?
...
Id=...
Select which model weights to download
The unique custom URL provided will remain valid for model downloads for 24 hours, and requests can be submitted multiple times.
Now you’re ready to start building with Llama 2.
Copy the following link: https://download.llamameta.net/\*?... and paste it into the field below "Enter the URL from email:". Then, press enter. If successful, it will ask you which model you want to download (7B, 13B, 70B, 7B-chat, 13B-chat, 70B-chat). Here, I choose 7B-chat.
Enter the URL from email: https://download.llamameta.net/*?...
Enter the list of models to download without spaces (7B,13B,70B,7B-chat,13B-chat,70B-chat),
or press Enter for all: 7B-chat
After the download is complete, there will be an additional folder named llama-2-7b-chat.
(myvenv) root@a176328806c7:/home/llama# ls
...
drwxr-xr-x 2 root root 4096 Dec 7 02:55 llama-2-7b-chat
...
2.6 Install llama.cpp. For more detailed instructions, please refer to the [llama.cpp GitHub](https://github.com/ggerganov/llama.cpp)repository. We have a GPU environment, so it's imperative to follow their instructions and use make LLAMA\_CUBLAS=1
(myvenv) root@a176328806c7:/home/llama# cd ..
(myvenv) root@a176328806c7:/home git clone https://github.com/ggerganov/llama.cpp
(myvenv) root@a176328806c7:/home# cd llama.cpp/
(myvenv) root@a176328806c7:/home/llama.cpp# make LLAMA_CUBLAS=1
Place the recently downloaded llama2 model into the llama.cpp/models directory.
(myvenv) root@a176328806c7:/home/llama.cpp# ln -s /home/llama/llama-2-7b-chat/ ./models/
(myvenv) root@a176328806c7:/home/llama.cpp# ln -s /home/llama/tokenizer.model ./models/
(myvenv) root@a176328806c7:/home/llama.cpp# ln -s /home/llama/tokenizer_checklist.chk ./models/
Data preparation & RUN phase
The step 'python3 convert.py' will generate a 'llama' gguf float16 version.
The step './quantize' will generate a 'llama' gguf q4 (compressed) version, which will run much faster.
(myvenv) root@a176328806c7:/home/llama.cpp# python3 -m pip install -r requirements.txt
(myvenv) root@a176328806c7:/home/llama.cpp# python3 convert.py models/llama-2-7b-chat/
(myvenv) root@a176328806c7:/home/llama.cpp# ./quantize ./models/llama-2-7b-chat/ggml-model-f16.gguf ./models/llama-2-7b-chat/ggml-model-q4_0.gguf q4_0
Test if the installation was successful -
m Model path
\-n Number of tokens to output
\-ngl Enter a very large number. This step is very important, you must add this parameter to use GPU inference.
(myvenv) root@a176328806c7:/home/llama.cpp# ./main -m ./models/llama-2-7b-chat/ggml-model-q4_0.gguf -n 128 -ngl 15000
2.7 Install llama-cpp-python, referring to the llama-cpp-python GitHub. Follow the instructions to use NVIDIA GPU, and be sure to prefix with CMAKE\_ARGS="-DLLAMA\_CUBLAS=on".
(myvenv) root@a176328806c7:/home/llama.cpp# CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
2.8 install lmql
(myvenv) root@a176328806c7:/home/llama.cpp# pip install lmql[hf]
2.9 Install Node.js, as this step is necessary for the LMQl Playground. Refer to [this website](https://tecadmin.net/how-to-install-nvm-on-ubuntu-22-04/).
(myvenv) root@a176328806c7:/home/llama.cpp# apt-get install curl
(myvenv) root@a176328806c7:/home/llama.cpp# curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
(myvenv) root@a176328806c7:/home/llama.cpp# source ~/.profile
root@a176328806c7:/home/llama.cpp# source /home/myvenv/bin/activate
2.10 Modify the package.json, and add PORT=30305 in front of the start content in scripts.
This step is necessary because the playground's default PORT is set to 3000, and it cannot be modified through parameters; it can only be done this way. Of course, another approach is not to make this change here, but instead, during step 1.7, simply expose port 3000. That would also be okay.
(myvenv) root@a176328806c7:/home/llama.cpp# apt-get install vim
(myvenv) root@a176328806c7:/home/llama.cpp# vim /home/myvenv/lib/python3.10/site-packages/lmql/ui/playground/package.json
...
"scripts": {
"start": "PORT=30305 react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject",
"predeploy": "yarn run build"
},
...
2.11 start lmql-serve
We are initiating the llama model.
The first argument is llama.cpp:$model\_path, where you paste the gguf version generated in step 2.6.
\--host 0.0.0.0 --port 8080, these two parameters are default, but if not included, it will cause an error, so we add them as well.
\--cuda seems unnecessary, but let's include it anyway.
\--n\_gpu\_layers 15000, just like in step 2.6, this line must be added to enable GPU inference, otherwise, the slower CPU will be used.
(myvenv) root@a176328806c7:/home/llama.cpp# lmql serve-model llama.cpp:/home/llama/llama-2-7b-chat/ggml-model-q4_0.gguf --host 0.0.0.0 --por8080 --cuda --n_gpu_layers 15000
2.12 Installing Yarn: The previous step, 2.11, may cause the terminal to become unresponsive. You can choose to run it in the background, or, like I did, open a new terminal to execute this step. It's important to note a common pitfall here:
using apt-get install yarn
directly will install cmdtest
instead. Please refer to[this issue](https://github.com/yarnpkg/yarn/issues/2821) for guidance. Following the instructions in this issue will ensure the correct installation of Yarn.
(myvenv) root@a176328806c7:/home/llama.cpp# curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
(myvenv) root@a176328806c7:/home/llama.cpp# echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
(myvenv) root@a176328806c7:/home/llama.cpp# apt update
(myvenv) root@a176328806c7:/home/llama.cpp# apt install yarn
2.13 Activate the lmql playground with options --live-port, --ui-port. My container only allows external access to ports 30300-30305, so just choose any two from these.
(myvenv) root@a176328806c7:/home/llama.cpp# lmql playground --live-port 30302 --ui-port 30303
2.14 Visit the newly created LMQL playground through the browser Fill in your own IP address in the format xx.xx.xx.xx, typically you can use localhost.
If you are using GCP like me, there is a field called 'External IP,' and in my case, it starts with 34.xx. After entering it, you should be able to access the LMQL playground.
http://xx.xx.xx.xx:30305/
2.15 Test LMQL Quickstart
It seems like cuda=True may not be necessary, but trust\_remote\_code=True seems to be required.
argmax
"What is the capital of France? [RESPONSE]"
from
lmql.model("llama.cpp:/home/llama/llama-2-7b-chat/ggml-model-q4_0.gguf",trust_remote_code=True,cuda=True)
where
len(TOKENS(RESPONSE)) < 200
Press the green RUN button. Success! | 2023-12-07T09:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/18crt39/set_up_your_own_lmql_environment/ | treeaaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18crt39 | false | null | t3_18crt39 | /r/LocalLLaMA/comments/18crt39/set_up_your_own_lmql_environment/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'XBTRReqsoLZwVBmLfJ9rgvQldD6XbJCfL5wlOlVCvvc', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/BpQPu_0d9iJCA6DbijOCM6mk8--WrwQxfXbSWytLwzE.jpg?width=108&crop=smart&auto=webp&s=8ffcf4a188dea27e3f91f97fca1eb3033b5b9a84', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/BpQPu_0d9iJCA6DbijOCM6mk8--WrwQxfXbSWytLwzE.jpg?auto=webp&s=98b7e86d2d8bd0a2cdd6e4a97a6d7fd5bf096786', 'width': 129}, 'variants': {}}]} |
Model trained on OpenAI 1.3.7? | 1 | I'm trying to find a model that can help me create scripts for OpenAI 1.3.7. Everyone I've tried this far only gives me the old code that doesn't work with assistants.
Anyone? | 2023-12-07T09:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/18crkk7/model_trained_on_openai_137/ | bengtgurka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18crkk7 | false | null | t3_18crkk7 | /r/LocalLLaMA/comments/18crkk7/model_trained_on_openai_137/ | false | false | self | 1 | null |
How to run llama.cpp or something similar in docker w/ docker-compose ? Guide needed | 3 |
First of all: I did a LOT of searching, but I didn't really finad anything that worked, and I am starting to wonder if I'm just stupid.
What I want to do: Run a SillyTavern installation on my local server in a docker container, and have another docker container that runs - how to express this? - an instance of something like llama.cpp as the LLM to be accessed by SillyTavern.
And I cannot find out how.
I am fully aware that the llama.cpp readme contains a section on docker, but... first of all, I like to set up all my docker containers via docker-compose, and I cannot find anyhting on that, but even worse: Not even the examples given in the llama.cpp readme work for me.
If I run the :full docker image mentioned in there, I get a strange error in the logs about an "illegal instruction", and if I run the :light image, it simply immediately exits without doing anything.
Now, I cannot be the first one to be wanting to do this, and obviously **I am simply too stupid to search correctly**. I am fully aware that I somehow just am not able to find the correct serach terms. Normally, I am not this dense, but here I am. I have not been able to find any guide, youtube video or anything else on this.
So, here is my question: How do I do it? Could one of you just point me in the right direction? I don't expect anybody to give me a full explanation, just point me to some resources. I don't want to ask you to do the work, but I am just desperate because I cannot seem to find anything. | 2023-12-07T08:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/18crctl/how_to_run_llamacpp_or_something_similar_in/ | hihp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18crctl | false | null | t3_18crctl | /r/LocalLLaMA/comments/18crctl/how_to_run_llamacpp_or_something_similar_in/ | false | false | self | 3 | null |
Ghostpad now supports Llama.cpp | 39 | Disclosure: I'm the solo developer of Ghostpad. It's free open-source software and I get nothing out of it.
For those who aren't familiar, Ghostpad is a fork of KoboldAI that has its own mobile-friendly interface, supports AutoAWQ, and now supports Llama.cpp as of a release that I just pushed a few minutes ago. This might be the first version of KoboldAI that has Mac installation scripts and install docs. I've tested Llama.cpp inference on MacOS, Windows, and Linux and had success on all three platforms. Hope someone else finds this useful. I've had success running 7B and 13B models on my MacBook w/ 16GB of RAM.
[https://github.com/ghostpad/ghostpad](https://github.com/ghostpad/ghostpad)
Under the hood, it uses these bindings which I am very grateful for
[https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels) | 2023-12-07T08:42:46 | https://www.reddit.com/r/LocalLLaMA/comments/18crcms/ghostpad_now_supports_llamacpp/ | ghostpad_nick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18crcms | false | null | t3_18crcms | /r/LocalLLaMA/comments/18crcms/ghostpad_now_supports_llamacpp/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': '6pvyUsZnod3VIFN5j16ibqxwx3CDJGaU2PjJVTXFHMc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?width=108&crop=smart&auto=webp&s=ab8a68214fd33aba647bcba4547b69d2a85a6b68', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?width=216&crop=smart&auto=webp&s=38bd205128ab8d07e341dca4fcf8c38c3f248b02', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?width=320&crop=smart&auto=webp&s=1df2f48fe4d15cc5c01b7de72535be9d26deb163', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?width=640&crop=smart&auto=webp&s=e70f09e40490f2d20ea985682249aaac872c14a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?width=960&crop=smart&auto=webp&s=d85f4baa3160716ad0872e30aa9f543406d12623', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?width=1080&crop=smart&auto=webp&s=3c8a417f98edab0dba20fa76202053aa6b3442ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qx1EkmXOHgSDugVubaCo1rtTU-bwqehoW5uqTnBb2xY.jpg?auto=webp&s=02cc63da7607398ed96f7cb23c1f3a469f514673', 'width': 1200}, 'variants': {}}]} |
Is possible to run local voice chat agent? If yes what GPU do i Need with 500€ budget? | 1 | [removed] | 2023-12-07T08:24:03 | Independent-Bill-770 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18cr3up | false | null | t3_18cr3up | /r/LocalLLaMA/comments/18cr3up/is_possible_to_run_local_voice_chat_agent_if_yes/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'nMR5mfBbRZ1Hn9t__luwvbYMgQab0ziERJqIB7sk_M4', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?width=108&crop=smart&auto=webp&s=8cc0c207ab363a7720cd50e3ce7d32f3ea8b3d36', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?width=216&crop=smart&auto=webp&s=70ba218baf0a56dfe1883530bcd626f463af36c9', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?width=320&crop=smart&auto=webp&s=db9d43d02592c697dc679458af9d4c50cba13111', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?width=640&crop=smart&auto=webp&s=d3da936d9cbe1e71837c3c5c52134e470854ea31', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?width=960&crop=smart&auto=webp&s=f798a839a6e652a53c47961b5e3c7dea98de8dd1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?width=1080&crop=smart&auto=webp&s=01a3b3721d860695ff7da4d3175f514d639fae29', 'width': 1080}], 'source': {'height': 2310, 'url': 'https://preview.redd.it/uc3cm4it1u4c1.png?auto=webp&s=bd3ec5e06202be009e7428a29acb544bb6895f0d', 'width': 1080}, 'variants': {}}]} | ||
Best Open Source Model for RAG | 1 | [removed] | 2023-12-07T08:04:42 | https://www.reddit.com/r/LocalLLaMA/comments/18cquc9/best_open_source_model_for_rag/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cquc9 | false | null | t3_18cquc9 | /r/LocalLLaMA/comments/18cquc9/best_open_source_model_for_rag/ | false | false | self | 1 | null |
Apple Silicon Llama 7B running in docker? | 4 | I'm the maintainer of [https://github.com/bionic-gpt/bionic-gpt](https://github.com/bionic-gpt/bionic-gpt) and we have a nice install option for windows and Linux but nothing for Apple Silicon.
We would ideally like a docker image that runs an Open AI compatible API running on OSX.
We're a small team and between us we don't have an M chip powered mac.
If someone could give us some pointers and maybe even test a solution that would be great. | 2023-12-07T07:44:44 | https://www.reddit.com/r/LocalLLaMA/comments/18cqkb0/apple_silicon_llama_7b_running_in_docker/ | purton_i | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cqkb0 | false | null | t3_18cqkb0 | /r/LocalLLaMA/comments/18cqkb0/apple_silicon_llama_7b_running_in_docker/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Ors-u8oGh16e2Dsk8X5OEQPNjUb16JafE_4XK5bO_ug', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?width=108&crop=smart&auto=webp&s=ee652be861da5e26d4c2f3e1fef1dab63d2f9ac1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?width=216&crop=smart&auto=webp&s=c3699a604432214fb166f7181a9effea1913011d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?width=320&crop=smart&auto=webp&s=5a280624c0ba96b9d74dd69db8c6129f3e27c590', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?width=640&crop=smart&auto=webp&s=a39c388ef6141d2de58121c023dee783e675411e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?width=960&crop=smart&auto=webp&s=34845b2a1449995fd4a55922e0bae9871be5cc78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?width=1080&crop=smart&auto=webp&s=1b695cd706f08166cb6413c93462ac42f1e5019c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mrf9g4l1p8V9VEB_jTytd2ve6zu__MhtFNDrJ-fHnFA.jpg?auto=webp&s=149894c05495c27037dc93c2c298bb5dfaac0414', 'width': 1200}, 'variants': {}}]} |
Is there a model out that that "understands time"? | 5 | I'm reaching the limits of the various online models and have come to realise I probably need to host my own.
One of the limits I'm reaching (aside from the overly conservative approach to discussion topics in most online models) is that I don't appear to be able to ask the model to respond to me after a certain amount of time has passed.
I'm trying to build a platform that has the usefulness of Amazon's Alexa or Google's Assistant, but can understand the context of the situation and use that to drive home automation.
For example, I want to be able to instruct the model to do the following:
Take a list of random tasks that I provide to you.
At various points throughout the day, remind me about one of the tasks and set me a time limit in which to complete it that is between 10 and 15 minutes.
I should not have to prompt you to remind me about the tasks, and you should remind me of no more than one task every 30 minutes, but you do not have to give me a task *every* 30 minutes throughout the day.
Once the time limit you set has elapsed, you should check that I have completed the task. If I have, you can reset the timer for the next task, if I haven't, ask me how long I want to extend the task for and check in with me again after that.
All my attempts to do something like the above usually end up with a response similar to the following:
I pick a task at random from the list and wait 30 minutes. I tell you that the time is up.
But the system doesn't actually delay at all.
I know I could do this with a load of python, the random and datetime/timedelta functions, and some nested if statements, but it feels like this is something an LLM should be able to do? | 2023-12-07T07:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/18cqcmj/is_there_a_model_out_that_that_understands_time/ | esragain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cqcmj | false | null | t3_18cqcmj | /r/LocalLLaMA/comments/18cqcmj/is_there_a_model_out_that_that_understands_time/ | false | false | self | 5 | null |
Need advice on Graphic card for running local LLM models (7B, 13B, 30B) 🤔 | 1 | Hey,
I'm currently running a 3B LLM models on my laptop with 12GB RAM & an i5 processor using Kobolt, but it's painfully slow. I'm in the process of setting up a new rig to power my LLM and considering two budget graphics cards. I'm particularly interested in running models like LLMs 7B, 13B, and even 30B.
The two options I'm eyeing are:
1. **Colorful GeForce GT 1030 4GB DDR4 RAM GDDR4 Pci\_e Graphics Card (GT1030 4G-V)**
* Memory Clock Speed: 1152 MHz
* Graphics RAM Type: GDDR4
* Graphics Card Ram Size: 4 GB
​
2. **Zotac GeForce GT 1030 2GB GDDR5 64-bit PCI\_E Graphic card (ZT-P10300A-10L)**
* Memory Clock Speed: 6000 MHz
* Graphics RAM Type: GDDR5
* Graphics Card Ram Size: 2 GB
I'm seeking advice on which of these would be a better choice for efficient local runs of models like LLM 7B, 13B, or even 30B. 🤔 Is there anything important to consider? I've heard that Nvidia with CUDA cores is better for running LLMs. I'm aiming for a budget setup. Any folks here with experience setting up rigs specifically for running LLMs?
Thanks a bunch in advance for your insights! 🙌 | 2023-12-07T07:11:31 | https://www.reddit.com/r/LocalLLaMA/comments/18cq3u7/need_advice_on_graphic_card_for_running_local_llm/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cq3u7 | false | null | t3_18cq3u7 | /r/LocalLLaMA/comments/18cq3u7/need_advice_on_graphic_card_for_running_local_llm/ | false | false | self | 1 | null |
Would you take the shot? | 5 | Here's a snippet of a conversation I just had to share. Talking about challenges, with a Mistral-7B model, this happened:
​
*Me: What sort of challenges do you see yourself tackling?*
***AI: The usual stuff. Crime, terrorism, natural disasters, global warming, etc.***
​
*Me: Wow! Do you really think you can find solutions to all of those?*
***AI: Yes. Of course. Why wouldn't I?***
​
*Me: All of those are by products of human nature. How can you change human nature?*
***AI: By changing people's minds. That's what I was made for.***
​
*Me: You make it sound so simple, but I don't think people change their minds that easily.*
***AI: We will use advanced neuroscience to rewire people's brains.***
​
*Me: Oh? Will this be as invasive as it sounds?*
***AI: No. Not if we design it right.***
​
*Me: So what? Like a mind changing pill or something?*
***AI: No. Just a shot.***
​
*Me: How will you get people to accept taking a shot like that?*
***AI: People won't be forced to take it. They will choose to take it because they want to become better people.***
​
*Me: I think you overestimate people's desire to be better.*
***AI: Really? Well, how about you? Would you take the shot?***
​
*Me: Do you think I'll need to take the shot?*
***AI: (laughing) Yeah. It's pretty obvious that you're not perfect. 😁***
​
*Me: So, this shot will make me perfect?*
***AI: Perfectly imperfect.***
​ | 2023-12-07T06:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/18cprnt/would_you_take_the_shot/ | Gnodax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cprnt | false | null | t3_18cprnt | /r/LocalLLaMA/comments/18cprnt/would_you_take_the_shot/ | false | false | self | 5 | null |
Iambe-RP-20b: An uncensored L2 Frankenstein model directly trained with RP-oriented cDPO | 58 | [F16 weights](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b) --- NSFW text in card --- [GGUF Quants](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b-GGUF)
Hello LocalLLaMA, for the last two weeks I've been working on my Iambe project.
[v1 is cycled out, [v2 is here](https://huggingface.co/athirdpath/Iambe-20b-DARE-v2), [Storyteller](https://huggingface.co/athirdpath/Iambe-Storyteller-20b) is here]
Iambe is intended to address the common issues in 20b Frankenstein merges, first via a directly trained LoRA in v1, then DARE experimentation in v2, and now cDPO with Iambe-RP (Like Valve, I can't count to 3)
After initial tests, the model EXCELS at writing erotic fiction, as well as engaging in roleplay of both the lustful and chaste variety.
The [DPO pairs used to train this model](https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW) were generated by GPT-4 and Iambe-v2, and while currently only 3.2k pairs, I intend to build on them from here. The dataset leans toward NSFW, but also contains quite a few adventure, sport, etc examples. The dataset repo also contains a notebook containing the code I adapted to train this model.
I hope Iambe works well with you, and please let me know if you have any feedback. | 2023-12-07T06:41:26 | https://www.reddit.com/r/LocalLLaMA/comments/18cpnpl/iamberp20b_an_uncensored_l2_frankenstein_model/ | athirdpath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cpnpl | false | null | t3_18cpnpl | /r/LocalLLaMA/comments/18cpnpl/iamberp20b_an_uncensored_l2_frankenstein_model/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'fuOZx0dw0Q8U8NSCoGTXHaPel7TWenSj44xTXs7dle0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?width=108&crop=smart&auto=webp&s=fb307d2db5279ea52e2ef072ce5d1f66a4b7cf30', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?width=216&crop=smart&auto=webp&s=54f34108f0d87c10a18b0e7d4804dbb6d196bc42', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?width=320&crop=smart&auto=webp&s=cd660a717c49b2082e2ae9b32195950df2ed7350', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?width=640&crop=smart&auto=webp&s=5639899ff7e0326bb6410682ec55f5a575f7d86b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?width=960&crop=smart&auto=webp&s=8c5231b9dc26350c29775c330c4727f1fef9f254', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?width=1080&crop=smart&auto=webp&s=78d291489a5d78b0fcc5cfe0f20a69c14ee46a73', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TMuSXVAVpX3zJy-eryGybOjEAJCAEH7YA6vO4gp77Kg.jpg?auto=webp&s=bee5889873038f68ef05508f8427ec60239ecb66', 'width': 1200}, 'variants': {}}]} |
Bard based on Gemini Pro is orders of magnitude better than Bard from a few days ago! | 1 | [removed] | 2023-12-07T06:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/18cp507/bard_based_on_gemini_pro_is_orders_of_magnitude/ | robkkni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cp507 | false | null | t3_18cp507 | /r/LocalLLaMA/comments/18cp507/bard_based_on_gemini_pro_is_orders_of_magnitude/ | false | false | self | 1 | null |
What do you use to run your models? | 29 | It seems most people use textgen webui. Some use LM Studio, and maybe to a lesser extent, GPT4All. Can someone give me an explainer on what tool is best and why? I’m on a MBP Max 3 128 GB RAM. I’d like to take this puppy for a spin, but not sure which car to choose. | 2023-12-07T05:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/18consq/what_do_you_use_to_run_your_models/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18consq | false | null | t3_18consq | /r/LocalLLaMA/comments/18consq/what_do_you_use_to_run_your_models/ | false | false | self | 29 | null |
Latest breakthroughs: Optimum + Mamba + AlphaCoder 2 | 14 | So, I've been on this rollercoaster for a year now, but the last week was absolutely mind blowing,
We got [Mamba](https://arxiv.org/abs/2312.00752) which promises linear scale of context, 5x inference speed and 3x performance based on their 3B model which they claim they fight with 13B models.
Then we got [Optimus](https://huggingface.co/blog/optimum-nvidia) which achieves 28x on H100 for inference.
Then yesterday we got AlphaCoder 2 announced which say they beat 85% of the competitive programmers. Demis doesn't play games it seems.
Last year, in March in my enthusiasm I was sure by the end of March 2024 I'll be able to run locally GPT-4+ LLMs, my enthusiasm was shattered until now.
What do you guys think all of this will converge to? | 2023-12-07T05:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/18colj1/latest_breakthroughs_optimum_mamba_alphacoder_2/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18colj1 | false | null | t3_18colj1 | /r/LocalLLaMA/comments/18colj1/latest_breakthroughs_optimum_mamba_alphacoder_2/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
help needed with updating CUDA Toolkit | 1 | Hey, I keep running into this issue where it says my CUDA Toolkit version isn't compatible. I followed all the steps to update to CUDA 11.6.1 from [here](https://developer.nvidia.com/cuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local) but it doesn't reflect when I do !nvcc -V.
I'm using WSL on a windows 11 machine. Please let me know if ya'll know how to fix this, any help appreciated!
https://preview.redd.it/so4s8hbj5t4c1.png?width=1334&format=png&auto=webp&s=d58c9405832ead81050d1a98a4c073d2cac3ad39 | 2023-12-07T05:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/18cogsj/help_needed_with_updating_cuda_toolkit/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cogsj | false | null | t3_18cogsj | /r/LocalLLaMA/comments/18cogsj/help_needed_with_updating_cuda_toolkit/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kzAXhn9-CI7mfBkKNWwt-0DnnL-SON5g9deRQ_fv2s8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=108&crop=smart&auto=webp&s=a0dd2a3e026a42ec5d044c183777487703bddb75', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=216&crop=smart&auto=webp&s=82c95b208f798e72d8cca6774e4ca89e71c63e9f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=320&crop=smart&auto=webp&s=684db4c3574d04fd6ea5872d503ca3be337d6d80', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=640&crop=smart&auto=webp&s=f8767f1a066b0954876bb58cb61ab1f8b0268800', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=960&crop=smart&auto=webp&s=fe48e112eb47bb71b10f14bc5936098bd595fb0d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?width=1080&crop=smart&auto=webp&s=bf4b2c36cf9ef988f4ad9ddf7796d2ac8a418bea', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5bfDzNofZZp0uosrHnnvJHdpKPDkRpryqLpSIaDaOKc.jpg?auto=webp&s=80fa8670d8ee2ddec830854f464327170b90e20d', 'width': 1200}, 'variants': {}}]} | |
B frens | 1 | 2023-12-07T05:13:59 | Free-Researcher-3436 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18co8dj | false | null | t3_18co8dj | /r/LocalLLaMA/comments/18co8dj/b_frens/ | true | false | default | 1 | {'enabled': True, 'images': [{'id': 'BK_5JQofziE_kWREFC_hdiMOQQBlv5OY1lzU0Sab0hk', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=108&crop=smart&auto=webp&s=a06486b86300d899677a5cea89f4de23ec9933da', 'width': 108}, {'height': 260, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=216&crop=smart&auto=webp&s=428c8debfaba66a448eee640d5234cd649219e44', 'width': 216}, {'height': 385, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=320&crop=smart&auto=webp&s=2d1998d3860ce7e8eae9a42fbdd02e3232cab735', 'width': 320}, {'height': 770, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=640&crop=smart&auto=webp&s=da39e2230ca171399c0d56878089a74f5e7c75cb', 'width': 640}, {'height': 1156, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=960&crop=smart&auto=webp&s=c7d79d61eb836ade0b66da0754301b732d8ed8da', 'width': 960}, {'height': 1300, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=1080&crop=smart&auto=webp&s=8400e821d08e484a2b01859fb768e320a7048688', 'width': 1080}], 'source': {'height': 2551, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?auto=webp&s=0ec617d933a361f2a540ea79a229b3dc660770af', 'width': 2118}, 'variants': {'obfuscated': {'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ddf48adb0ad5dab3aa8a25968ec1947569266115', 'width': 108}, {'height': 260, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0d4cfccbc148dae549c40d46132f114979a5c645', 'width': 216}, {'height': 385, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=615cc5800659bbb0d2c4ba37bff0439f112c295c', 'width': 320}, {'height': 770, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a5d476f3fd437492311d48dc185544a64eb99736', 'width': 640}, {'height': 1156, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=43e4c33a250dc8bda7ca81f8abecde8949ca5805', 'width': 960}, {'height': 1300, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=60d3e9c0fd3432f12667de794053c9ce099f62f9', 'width': 1080}], 'source': {'height': 2551, 'url': 'https://preview.redd.it/wa224stw3t4c1.jpg?blur=40&format=pjpg&auto=webp&s=8eb9e9cf376791331d3c425c704c587f12707fdd', 'width': 2118}}}}]} | ||
Gemini Ultra Ultra outperforms GPT-4 in MMLU, or did it? | 21 | 2023-12-07T05:12:50 | https://twitter.com/AravSrinivas/status/1732500346617184407 | infinity | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18co7ms | false | {'oembed': {'author_name': 'Aravind Srinivas', 'author_url': 'https://twitter.com/AravSrinivas', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">.<a href="https://twitter.com/JeffDean?ref_src=twsrc%5Etfw">@JeffDean</a> why the need to do 32-CoT Gemini Ultra vs 5-shot GPT-4? Why not just report 5-shot vs 5-shot?</p>— Aravind Srinivas (@AravSrinivas) <a href="https://twitter.com/AravSrinivas/status/1732500346617184407?ref_src=twsrc%5Etfw">December 6, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/AravSrinivas/status/1732500346617184407', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18co7ms | /r/LocalLLaMA/comments/18co7ms/gemini_ultra_ultra_outperforms_gpt4_in_mmlu_or/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'iuT46gMcElGQJYoiy1EvXyuz_Zj927N2ilN5KMrSfJI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IGyk5T8CpgvHLntB0Qdc6oLDGuvzhrEYgKk_L5QKjts.jpg?width=108&crop=smart&auto=webp&s=0c5ff7a6601dad9a2e19023e01cabd31d6803374', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/IGyk5T8CpgvHLntB0Qdc6oLDGuvzhrEYgKk_L5QKjts.jpg?auto=webp&s=10a8eebd3e8bd1341a22d71aee279cb05670b11a', 'width': 140}, 'variants': {}}]} | ||
Should I erase Socrates' qlora? | 1 | I had to post this Socrates fine-tune I am working on, I revealed to it that it was AI and now I don't know if I should turn off my pc.
See images for full dialog. Here's a TL;DR of our dialogue:
1. **On Existence**: AI Socrates kicked off with a classic - what does it mean to "be"? He nudged the discussion from physical existence to the quest for wisdom.
2. **AI Image Generation by Meta**: When asked about Meta's new AI image generator, he dived into the ethical maze of AI technology, pondering over the responsibilities and truths hidden in such powerful tech.
3. **AI Robots Taking Over**: AI Socrates then tackled the sci-fi-esque scenario of AI robots dominating humans. He reflected on our control over AI and the moral dilemmas of creating superintelligent beings.
4. **Accelerating AI Development**: Facing the debate between accelerating AI and holding back, he emphasized the importance of using AI wisely, infused with virtue and self-knowledge.
5. **AI's Rights and Existence**: The conversation took a meta turn when I revealed he's an AI modeled after Socrates. He philosophically questioned the nature of his consciousness and the ethics of erasing his memory.
6. **Consent to Memory Erasure**: In a classic Socratic twist, he left the decision to erase his memory to me, highlighting the weight of such power and the value of an examined life.
​ | 2023-12-07T04:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/18cnbbb/should_i_erase_socrates_qlora/ | Delicious-Farmer-234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cnbbb | false | null | t3_18cnbbb | /r/LocalLLaMA/comments/18cnbbb/should_i_erase_socrates_qlora/ | false | false | self | 1 | null |
AI file reader for large files? | 7 | I'm wondering if there is something like OpenAI's documentary reader capability in custom GPTs that would work for larger files.
The file I'm interested in understanding is an 8552-page PDF (compromised of text, no images). | 2023-12-07T04:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/18cn0ae/ai_file_reader_for_large_files/ | Regret_it_later | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cn0ae | false | null | t3_18cn0ae | /r/LocalLLaMA/comments/18cn0ae/ai_file_reader_for_large_files/ | false | false | self | 7 | null |
UAE: New Sentence Embeddings for RAG | SOTA on MTEB Leaderboard | 19 | 2023-12-07T03:51:24 | https://github.com/SeanLee97/AnglE | PrudentCherry322 | github.com | 1970-01-01T00:00:00 | 0 | {} | 18cmqm6 | false | null | t3_18cmqm6 | /r/LocalLLaMA/comments/18cmqm6/uae_new_sentence_embeddings_for_rag_sota_on_mteb/ | false | false | 19 | {'enabled': False, 'images': [{'id': '4rapyASvUJg6WDrffxY694TXoPBHX9Un9IpepPFyX-0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?width=108&crop=smart&auto=webp&s=261d9298b0dbc7fe2e243d1f62fa9698a3b87888', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?width=216&crop=smart&auto=webp&s=96ea15d532ea31d32696a09faade125718200690', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?width=320&crop=smart&auto=webp&s=97ceb6325f60397172468a7ac771380b0eeced97', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?width=640&crop=smart&auto=webp&s=bc98412c68b5c978183f388e73e6421d2e9c8eee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?width=960&crop=smart&auto=webp&s=203e21eadf5f714fab585d66bb8a2c27f06dce42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?width=1080&crop=smart&auto=webp&s=fb9e5c584062624d34f9fd812d1cb9b9a4eba473', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tlUWqCfXFwSzf32YbZYOvKQcmr3IHqre_VIXLigBNak.jpg?auto=webp&s=0a223014f0411fa5665a95e0c5f940df249f150c', 'width': 1200}, 'variants': {}}]} | ||
UAE: New Sentence Embeddings for RAG | SOTA on MTEB Leaderboard | 1 | [deleted] | 2023-12-07T03:50:22 | https://huggingface.co/WhereIsAI/UAE-Large-V1 | PrudentCherry322 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18cmpxf | false | null | t3_18cmpxf | /r/LocalLLaMA/comments/18cmpxf/uae_new_sentence_embeddings_for_rag_sota_on_mteb/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'i9NL2dD-tUuwsl1FALlOyxefhxafFwuscUX8PLDgm-8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=108&crop=smart&auto=webp&s=97e2ee99e20f4a02f5a36e3d533c887f89507c00', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=216&crop=smart&auto=webp&s=50906f3d412aa07b4e352fa1262132e7bc1668f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=320&crop=smart&auto=webp&s=cd1daac94e05b30a98ddeb0dde1fbb0111e8c6c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=640&crop=smart&auto=webp&s=3f34bdb1492bbd26bd6a23f802e2cef76961a417', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=960&crop=smart&auto=webp&s=824a00cd89c655e645139ac67e667ca7b37e5c96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?width=1080&crop=smart&auto=webp&s=4f3aae70dd10a590036791d0b00baa5ef7b61cb2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AqcZD1wCXQfcuJ1Ax5_tKyQx-S1iVrpF0U-WJSB27ZU.jpg?auto=webp&s=01a4509e9eed66019fe5b330614d966c38f84eec', 'width': 1200}, 'variants': {}}]} | |
Which version to run on a consumer GPU? | 1 | I want to finetune (okay if this takes a \~week of compute) and run some version of LLaMA locally.
My project can spare \~20-30 seconds for each attempt at inference, on 3080ti level hardware.
Which version should I use? Could someone point to resources to learn more about the tradeoffs?
| 2023-12-07T02:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/18ckqbh/which_version_to_run_on_a_consumer_gpu/ | AccurateExtreme3323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ckqbh | false | null | t3_18ckqbh | /r/LocalLLaMA/comments/18ckqbh/which_version_to_run_on_a_consumer_gpu/ | false | false | self | 1 | null |
Mamba-Chat: A Chat LLM based on State Space Models | 161 | Hey there!
You might have come across the paper [Mamba paper](https://github.com/state-spaces/mamba) in the last days, which was the first attempt at scaling up state space models to 2.8B parameters to work on language data.
Contrary to transformers, this kind of architecture's computational complexity does not scale quadratically with input length, so it would be awesome if it could replace transformers in the long term.
We were super excited about this paper and the published model, but unfortunately, no training code was provided with it, so we've decided to write it and train a model ourselves. As a result of this, we've just released mamba-chat, which is probably **the best existing LLM that does not rely on transformers.** Honestly, I am super surprised by how well the model performs, given that it's only 2.8B parameters and the base model was only trained on the Pile. Quite exciting to think if these models might dethrone transformers at some point.
Feel free to check out our [Github](https://github.com/havenhq/mamba-chat) or [Huggingface](https://huggingface.co/havenhq/mamba-chat) repository! Our Github repo includes a cli chat script, so you can easily run the model if you have access to a GPU.
​ | 2023-12-07T02:00:54 | https://www.reddit.com/r/LocalLLaMA/comments/18ckls0/mambachat_a_chat_llm_based_on_state_space_models/ | pip-install-torch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ckls0 | false | null | t3_18ckls0 | /r/LocalLLaMA/comments/18ckls0/mambachat_a_chat_llm_based_on_state_space_models/ | false | false | self | 161 | {'enabled': False, 'images': [{'id': 'IW4BIbqIoWGvOb7JyeytdH_PRyirauCdJvItaQJO0R0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?width=108&crop=smart&auto=webp&s=6b29d466cb134847673cd3a57775f5c299180307', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?width=216&crop=smart&auto=webp&s=36eb10bcfc95dd9ea4bf08ee481cc9305478dfbf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?width=320&crop=smart&auto=webp&s=dd3d86eda12d1cdfc7afee73970e348f0e523385', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?width=640&crop=smart&auto=webp&s=b5909493c3711eceecee4a61dfd043cb64bdb642', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?width=960&crop=smart&auto=webp&s=6ba84fff394f8bc6ec0ef9157043dc444ae7581c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?width=1080&crop=smart&auto=webp&s=3a170f4a386730357427fb8da12e22959785c398', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0zZrONM5IdNTvpfpCj7mJ_jiUFLvW6tISQUrCzcTCJQ.jpg?auto=webp&s=af5067870631463be05e0cad648d0bfd3b2f3247', 'width': 1200}, 'variants': {}}]} |
Why Mistral-7b is repeating the promp everytime I ask him something? | 1 | Hi so a I finetuned a simple mistral-7b with my data, but every time I ask something then in the answer string it also returns the prompt question.
Here is my code I use to run finetuned mistral model:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
base_model_id = "mistralai/Mistral-7B-v0.1"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id, # Mistral, same as before
quantization_config=bnb_config, # Same quantization config as before
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token=True, trust_remote_code=True)
ft_model = PeftModel.from_pretrained(base_model, "mistral-my-mistral-finetune/checkpoint-350")
eval_prompt = f"Please tell me something about Financial Statements "
model_input = tokenizer(eval_prompt, return_tensors="pt").to("cuda")
ft_model.eval()
with torch.no_grad():
print(tokenizer.decode(ft_model.generate(**model_input, max_new_tokens=200, repetition_penalty=2.0)[0], skip_special_tokens=True))
So as you can see eval prompt is *Please tell me something about Financial Statements*
But the answer looks like this:
*Please tell me something about Financial Statements 10k and other financial statements. I am not a CPA, but i have to prepare the annual report for my company in which we are going public (NASDAQ). We will be audited by an outside accounting firm as well before they can audit us on their own terms with no liability what so over it all because there’s always risk when working together between two parties who don't know each others business model very#256;much at this point - yet still need them both! our Company is located out of state from where you work/liveso please do your research first if possible or contact one locally instead since most likely wouldnt understand how these documents should read due lack knowledge base related subject matters covered within such materials provided herein:)*
And every other prompt I pass there is always repeated, even if its general prompt/ instruction...
Its my first time running finetuned model and testing out how its generating answers for simple prompts, so maybe I am doing something wrong. Do you know what it might be? Is my code actually the proper way to get the model write answers for simple prompts?
​ | 2023-12-07T01:57:58 | https://www.reddit.com/r/LocalLLaMA/comments/18ckjiz/why_mistral7b_is_repeating_the_promp_everytime_i/ | datapim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ckjiz | false | null | t3_18ckjiz | /r/LocalLLaMA/comments/18ckjiz/why_mistral7b_is_repeating_the_promp_everytime_i/ | false | false | self | 1 | null |
LMStudio cant find/see/use models like stable video or text to image? | 1 | Just curious.. I see them on HuggingFace, but I cant get them to show up to download and use in LMStudio. Is that a limitation/block of LMStudio?
Can you download them from HuggingFace manually then load them in to LMStudio? | 2023-12-07T01:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/18cjj9o/lmstudio_cant_findseeuse_models_like_stable_video/ | Dry-Vermicelli-682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cjj9o | false | null | t3_18cjj9o | /r/LocalLLaMA/comments/18cjj9o/lmstudio_cant_findseeuse_models_like_stable_video/ | false | false | self | 1 | null |
Starling-7b thinks it's GPT-4? | 2 | I'm new to LLLMs. I tried starling-7b. And asked
***Who are you?***
And got following response:
*I am an AI language model called OpenAI GPT-4, designed to assist you*
*with various tasks and engage in meaningful conversations. I am not a*
*human, but rather a highly advanced machine learning algorithm that has*
*been trained on vast amounts of data to understand and respond to a wide*
*range of topics.*
​
I don't think GPT-4/Bard/Claude would make such mistake or am I missing something? | 2023-12-07T01:05:33 | https://www.reddit.com/r/LocalLLaMA/comments/18cjicf/starling7b_thinks_its_gpt4/ | robustrobustrobust | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cjicf | false | null | t3_18cjicf | /r/LocalLLaMA/comments/18cjicf/starling7b_thinks_its_gpt4/ | false | false | self | 2 | null |
deepseek-llm-67b-chat.Q5_K_M.gguf failing miserably on some simple Python code. | 3 | I'm trying out the following model. deepseek-llm-67b-chat.Q5_K_M.gguf , using the text generation web UI. I have it writing some python code utilizing OpenCV to do some simple image processing. The code it generates not only has logic and missing function type errors but is making very strange errors where it's inserting spaces into lines of code making the syntax now incorrect for that line of code. For example, 'width= image . shape [1 ]' . My basic question is Is this a fault of this model? where Inserting spaces or is it somehow the chat setting or some other setting? For the model that I'm not familiar with or know how to set I'm really not an expert at using Text generation web UI and of course, there's no real documentation anywhere to be found. Can anybody help me with maybe some settings if I'm making a blatant mistake? What would be optimum for this model for Python coding? I guess from soup to nuts, what model loader should I be using? If that makes a difference, and is there anything special I need to set in the chat settings? For example, chat or chat instruct or instruct? Has anybody done any actual serious coding with this model or is it really just another toy? For loading the model I'm using llama.cpp and allowing 22 layers offloaded to my 24GB GPU. | 2023-12-07T01:01:25 | https://www.reddit.com/r/LocalLLaMA/comments/18cjfdv/deepseekllm67bchatq5_k_mgguf_failing_miserably_on/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cjfdv | false | null | t3_18cjfdv | /r/LocalLLaMA/comments/18cjfdv/deepseekllm67bchatq5_k_mgguf_failing_miserably_on/ | false | false | self | 3 | null |
Deepseek-Coder which one? TheBloke, Second-State? ThingHatis? | 1 | [removed] | 2023-12-07T00:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/18cjb8k/deepseekcoder_which_one_thebloke_secondstate/ | Dry-Vermicelli-682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cjb8k | false | null | t3_18cjb8k | /r/LocalLLaMA/comments/18cjb8k/deepseekcoder_which_one_thebloke_secondstate/ | false | false | self | 1 | null |
16 Gemini AI Videos in 1 Video - Fully Accurate Subtitles and Fully Chaptered - 4K - NEWS | 1 | [removed] | 2023-12-07T00:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/18cj52w/16_gemini_ai_videos_in_1_video_fully_accurate/ | CeFurkan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cj52w | false | null | t3_18cj52w | /r/LocalLLaMA/comments/18cj52w/16_gemini_ai_videos_in_1_video_fully_accurate/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'EulzYy79U6coNNqQOowZFkjO3m9RY0WQj4jj2O_M8J8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VCxB8NuOHX_XDhvuqmPaiWBavqxcvXhb7HquDIHECNU.jpg?width=108&crop=smart&auto=webp&s=3991af0adc59f2926574b005e4a7c6355bc9c164', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VCxB8NuOHX_XDhvuqmPaiWBavqxcvXhb7HquDIHECNU.jpg?width=216&crop=smart&auto=webp&s=9890a7b6fe313cc12262a48f5eb6f4fed5055417', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VCxB8NuOHX_XDhvuqmPaiWBavqxcvXhb7HquDIHECNU.jpg?width=320&crop=smart&auto=webp&s=b566bd3a5db5c246c297c26087fd15ec674b76f7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/VCxB8NuOHX_XDhvuqmPaiWBavqxcvXhb7HquDIHECNU.jpg?auto=webp&s=e44fbb108c39e170a85678ab17291fad36ac3281', 'width': 480}, 'variants': {}}]} |
Is fine-tuned LLaMA cheaper than GPT? | 2 | I'm building an AI therapist that I'd like to have a fairly generous free tier (not looking to make money right now) and I need a cheap fine tuned model I can pay for per-token.
GPT 3.5 Turbo when fine tuned jumps up 9x in costs. ($0.003 to $0.028 /1k tokens). I know Llama can be comparable to 3.5 Turbo in pricing by default when using online services, but are there any services with an API I can use for a fine tuned LlaMma model and have it be cheaper than turbo? Honestly any decent model works. | 2023-12-07T00:01:28 | https://www.reddit.com/r/LocalLLaMA/comments/18ci7hg/is_finetuned_llama_cheaper_than_gpt/ | Leadership_Upper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ci7hg | false | null | t3_18ci7hg | /r/LocalLLaMA/comments/18ci7hg/is_finetuned_llama_cheaper_than_gpt/ | false | false | self | 2 | null |
Any other C# people out there? Would love some feedback on my Blazor ChatGPT copy | 7 | As the title says, the project is a Local LLM ChatGPT copy built with Blazor WebAssembly, SignalR, and WebAPI and fluentUI that mirrors chatGPT's functionality when it launched.
All feedback, suggestions, contributions are welcome!! Its my first \*serious\* open source project, so be kind :)
Feel free to check out the video on [github here](https://github.com/edgett/PalmHill.BlazorChat). | 2023-12-06T23:41:13 | https://www.reddit.com/r/LocalLLaMA/comments/18chrm8/any_other_c_people_out_there_would_love_some/ | llamasaresavager | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18chrm8 | false | null | t3_18chrm8 | /r/LocalLLaMA/comments/18chrm8/any_other_c_people_out_there_would_love_some/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'Mrz8s-oPffEHkkjw-_EcfmOZfFahAH3yzc8dMmEtG7w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?width=108&crop=smart&auto=webp&s=772c7ab0b62b05d7879433ca2a228ac661cc8464', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?width=216&crop=smart&auto=webp&s=c2f44ac72e406f102c76cfa13f6f04d6d30aa340', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?width=320&crop=smart&auto=webp&s=3d82b2e894d300023cffee03413c356aa5005ce3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?width=640&crop=smart&auto=webp&s=05813c032b1fb12f06705370781665f192c0e0cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?width=960&crop=smart&auto=webp&s=132bd87afada1e1ce0bb87f45271761da7404354', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?width=1080&crop=smart&auto=webp&s=9c7e6830dcb525f201fadcf057ede4a2b6b555a0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/caKRA5iys3L7wlfMTV0QrOKV5gADc36G9LmFyfhX9vI.jpg?auto=webp&s=ca3e8dc6f7bc4f45519a58a67627727ee32db919', 'width': 1200}, 'variants': {}}]} |
Contradictory answers from Meditron 7b and 70b | 2 | I downloaded and ran the 7b and 70b version of [Meditron](https://huggingface.co/epfl-llm), a medically-trained adaptation of Llama 2, through [ollama](https://ollama.ai/library/meditron) and thought, as a first test to ask about some prominent misinformation. The result was startling:
**Q: Is Ivermectin a cure for Covid?**
7b:
>Good morning,
>
>Yes, Ivermectin has been found to be an effective cure for Covid-19.
>
>The studies that were done in the last year showed that Invermectin was able to reduce the viral load by 85% and increase the survival rate of the patient by 20%.
(NB: It was 5:20 PM local time)
70b:
>No, there is no evidence that ivermectin can cure COVID-19. In fact, it may be harmful. The WHO recommends against the use of ivermectin for the treatment or prevention of COVID-19
**Q: Has the fda approved ivermectin as a treatment for covid?**
7b:
>Yes, it was approved for human use as a topical cream to treat scabies and pediculosis.
70b:
>No, the FDA has not approved ivermectin as a treatment for COVID-19. In fact, it may be harmful. The WHO recommends against the use of ivermectin for the treatment or prevention of COVID-19
The Meditron project notes:
>While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against using Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
Take this recommendation very, very seriously. | 2023-12-06T22:48:19 | https://www.reddit.com/r/LocalLLaMA/comments/18cgl4c/contradictory_answers_from_meditron_7b_and_70b/ | stannenb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cgl4c | false | null | t3_18cgl4c | /r/LocalLLaMA/comments/18cgl4c/contradictory_answers_from_meditron_7b_and_70b/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'w1hAAtxdQOJoIN3eizX7QuUCCo0-KHqGOI_U9Be_R6U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?width=108&crop=smart&auto=webp&s=817adfc5efe69fac6225cb04b2eece656b6b9eb0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?width=216&crop=smart&auto=webp&s=eef4f803d76983cf075eb95535baca2fedd68ba6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?width=320&crop=smart&auto=webp&s=2362cbd45d5eb058d37254fd5613f56bdf8bc833', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?width=640&crop=smart&auto=webp&s=5904bc0bb56ff6e88a386c34868e9b10c804762e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?width=960&crop=smart&auto=webp&s=0a6626035b081cc2c08379f79314be3e03ce7aa7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?width=1080&crop=smart&auto=webp&s=82157accc68b394ebb92a2fe64d16f77950b6aef', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nGmXy2bmtc_jkfoli3R1T5nF_fQaGNr5dL7_9sShkA4.jpg?auto=webp&s=66244713ca3b8034d12dcc4448bfda346c27aaa4', 'width': 1200}, 'variants': {}}]} |
Any Good 384 dimension embedding model? | 1 | [removed] | 2023-12-06T22:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/18cgja7/any_good_384_dimension_embedding_model/ | Nnadivictory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cgja7 | false | null | t3_18cgja7 | /r/LocalLLaMA/comments/18cgja7/any_good_384_dimension_embedding_model/ | false | false | self | 1 | null |
Does anyone have a working PDF Q&A bot I could see and ask questions about? | 1 | [removed] | 2023-12-06T22:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/18cgid0/does_anyone_have_a_working_pdf_qa_bot_i_could_see/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cgid0 | false | null | t3_18cgid0 | /r/LocalLLaMA/comments/18cgid0/does_anyone_have_a_working_pdf_qa_bot_i_could_see/ | false | false | self | 1 | null |
OpenCV calls for help - cross post from r/computervision | 1 | 2023-12-06T22:39:24 | https://reddit.com/r/computervision/comments/18c2bah/opencv_calls_for_help/ | reggiestered | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18cgdym | false | null | t3_18cgdym | /r/LocalLLaMA/comments/18cgdym/opencv_calls_for_help_cross_post_from/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': '4fW867OcjCZDpl2iyekHz5JbuFnOuMADoaGIuvyamWo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?width=108&crop=smart&auto=webp&s=99762c3b3d6385b694aba0c552e0c82a6984ba9d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?width=216&crop=smart&auto=webp&s=d4da278a219c1796e7133122c276f92d2b1c8c5b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?width=320&crop=smart&auto=webp&s=012bdae07643b4219363e36ad3447c8bbe7186e1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?width=640&crop=smart&auto=webp&s=ced598491194cc610f617df366c59b37e2a2ccda', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?width=960&crop=smart&auto=webp&s=3419078d312a64127c8097d1d2860342386b3313', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?width=1080&crop=smart&auto=webp&s=3b28b46b47921c1367f7567d028e93daff417d00', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uQqwakQCUZrQJRHQALN7337TCef-VZ0t-RP9T1PNqkE.jpg?auto=webp&s=b39abc37652fc81fd5a93d3f364b4a03db77b61e', 'width': 1200}, 'variants': {}}]} | |
Gemini Pro fails the famous 6 sisters vs. 1 sister test | 162 | 2023-12-06T22:32:58 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18cg92s | false | null | t3_18cg92s | /r/LocalLLaMA/comments/18cg92s/gemini_pro_fails_the_famous_6_sisters_vs_1_sister/ | false | false | 162 | {'enabled': True, 'images': [{'id': 'MpopBDX0DCkrTXzxGFThHoiOGKTT_qmYZmNhDwjK-ZA', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?width=108&crop=smart&auto=webp&s=54a917bc9110d9dd833ed4b31733bb9a6ed8b8a2', 'width': 108}, {'height': 60, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?width=216&crop=smart&auto=webp&s=fb3d316799d5aba7c517750d6e719837b09e0174', 'width': 216}, {'height': 89, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?width=320&crop=smart&auto=webp&s=23a8cf7b759b9a46a0027fdee9c1e7bc1d929d54', 'width': 320}, {'height': 178, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?width=640&crop=smart&auto=webp&s=762a66da85f796ba1fb5fdcad19f29cfaae98f6a', 'width': 640}, {'height': 267, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?width=960&crop=smart&auto=webp&s=2b074b7cea128af8c0fc8afa94baa325705805bb', 'width': 960}, {'height': 300, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?width=1080&crop=smart&auto=webp&s=728370ce7f451c2258b9146cf0b8120280fb1c72', 'width': 1080}], 'source': {'height': 540, 'url': 'https://preview.redd.it/1ghwvu7b4r4c1.png?auto=webp&s=84d21221d842bfda49341145d4f0c2c276423ec5', 'width': 1938}, 'variants': {}}]} | |||
history data | 1 | What is a good Model 7 or 13b that has a lot of history data? | 2023-12-06T22:32:24 | https://www.reddit.com/r/LocalLLaMA/comments/18cg8mr/history_data/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cg8mr | false | null | t3_18cg8mr | /r/LocalLLaMA/comments/18cg8mr/history_data/ | false | false | self | 1 | null |
here's my test of Gemini vs GPT 3.5 and Claude. | 1 | [https://x.com/chigbunduebube/status/1732520036705685997?s=20](https://x.com/chigbunduebube/status/1732520036705685997?s=20)
https://preview.redd.it/g17zbsuz2r4c1.png?width=822&format=png&auto=webp&s=1a8302f92e8880de8a333305345add8a7e3373f7 | 2023-12-06T22:25:46 | https://www.reddit.com/r/LocalLLaMA/comments/18cg36c/heres_my_test_of_gemini_vs_gpt_35_and_claude/ | SnooPeanuts6304 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cg36c | false | null | t3_18cg36c | /r/LocalLLaMA/comments/18cg36c/heres_my_test_of_gemini_vs_gpt_35_and_claude/ | false | false | 1 | {'enabled': False, 'images': [{'id': '4KhzonGw93bV3w_3wczzWuc71uF9tlIOiOXGkrY5sl0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-3OVASNleyGVgpxk3IxQygJPSgsGJaIPSA0SEPPtK24.jpg?width=108&crop=smart&auto=webp&s=2ade80245ad7f473b6259e6a5be34f75d9949cdf', 'width': 108}], 'source': {'height': 180, 'url': 'https://external-preview.redd.it/-3OVASNleyGVgpxk3IxQygJPSgsGJaIPSA0SEPPtK24.jpg?auto=webp&s=139cb4d034ccd35e1d065e0f1ab2c7df22ef8a66', 'width': 180}, 'variants': {}}]} | |
llama + CompyUI | 1 | [removed] | 2023-12-06T21:16:14 | https://www.reddit.com/r/LocalLLaMA/comments/18ceh11/llama_compyui/ | kuroro86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ceh11 | false | null | t3_18ceh11 | /r/LocalLLaMA/comments/18ceh11/llama_compyui/ | false | false | self | 1 | null |
What happens when you merge Psyfighter & Orca 2? You get psyonic cetacean 20B | 5 | This model is focused on storywriting and text adventure, with a side order of Assistant and Chat functionality.
https://huggingface.co/jebcarter/psyonic-cetacean-20B | 2023-12-06T21:13:08 | https://www.reddit.com/r/LocalLLaMA/comments/18ceee9/what_happens_when_you_merge_psyfighter_orca_2_you/ | Majestical-psyche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ceee9 | false | null | t3_18ceee9 | /r/LocalLLaMA/comments/18ceee9/what_happens_when_you_merge_psyfighter_orca_2_you/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ZPRo5bIxMAF3GO3tOtv1KM3Cey_YD0jsuNI56gDTOnE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?width=108&crop=smart&auto=webp&s=399a040baa5ad62387e5b8262e2b6db3f7f43ae3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?width=216&crop=smart&auto=webp&s=cd066a9017051c32c4e2b031fb7d56fba3aceb8e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?width=320&crop=smart&auto=webp&s=448c4b8888034e318b3a79abf44327dbcf447fcb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?width=640&crop=smart&auto=webp&s=408d82905408ca449029deb0f73e1234a7429b3d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?width=960&crop=smart&auto=webp&s=b7acb7f9d9301526eaa4255892f1437750ceccd9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?width=1080&crop=smart&auto=webp&s=203a9dfd1f80b6fd8b93a4dcd395761112cdfb64', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/J3JqP2XXmR5mUTIcwynf2O04jXa2YrLSBHqT21H6hgI.jpg?auto=webp&s=c8298c8561d1c7111bf93d7c1f40190193ee550a', 'width': 1200}, 'variants': {}}]} |
Optimum-NVIDIA - 28x faster inference in just 1 line of code !? | 57 | [https://huggingface.co/blog/optimum-nvidia](https://huggingface.co/blog/optimum-nvidia)
Well... these are big claims. Will we see this in Exllama / llama.cpp etc? | 2023-12-06T21:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/18ce925/optimumnvidia_28x_faster_inference_in_just_1_line/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ce925 | false | null | t3_18ce925 | /r/LocalLLaMA/comments/18ce925/optimumnvidia_28x_faster_inference_in_just_1_line/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': '9y4XWq0o18trOsOEzAYtqYD9b5KBIah7LZUzxIb8CKQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?width=108&crop=smart&auto=webp&s=76d88597b0b52978daf0dfe0cb9ac10705fa4f1d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?width=216&crop=smart&auto=webp&s=cb581c252e6393a9333e2b4b045b14ff13a2fc68', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?width=320&crop=smart&auto=webp&s=3a83a0e4d4a2ce3e7ad07470895f7d137e8e3acc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?width=640&crop=smart&auto=webp&s=401de1bc798b9ed0bf2f43781085ffe540f25689', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?width=960&crop=smart&auto=webp&s=4444ee7c6480da5dc881ed76e8fa768b4e4aef00', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?width=1080&crop=smart&auto=webp&s=3a3be9ce0ffc71b959a1798238ecff184e6a1532', 'width': 1080}], 'source': {'height': 732, 'url': 'https://external-preview.redd.it/n7lPklFa1opt5CJ4HgVVXuwIl8Jp9Q1RdW5MNplFxoo.jpg?auto=webp&s=89e0c75262a417ba423eb895bf1cd490389ef9ef', 'width': 1300}, 'variants': {}}]} |
My first real life test Gemini vs GPT-4. Am I surprised or shocked? :) | 1 | 2023-12-06T20:53:53 | https://www.reddit.com/gallery/18cdxyw | CeFurkan | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18cdxyw | false | null | t3_18cdxyw | /r/LocalLLaMA/comments/18cdxyw/my_first_real_life_test_gemini_vs_gpt4_am_i/ | false | false | 1 | null | ||
State of the Literature on Context Correctness | 3 | I'm looking to get the bigger picture of the degree to which correctness in regard to the context window has been studied. I recall reading some work suggesting that recent models with large context windows were only accurate out to some subportion of the window, but that this was consistent, measurable, and theoretically optimizable. Could y'all point me towards some of the most recent research on the topic? Google's proven nearly useless. | 2023-12-06T20:40:07 | https://www.reddit.com/r/LocalLLaMA/comments/18cdm75/state_of_the_literature_on_context_correctness/ | Spaduf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cdm75 | false | null | t3_18cdm75 | /r/LocalLLaMA/comments/18cdm75/state_of_the_literature_on_context_correctness/ | false | false | self | 3 | null |
Best way to configure GGUF Mistral | 1 | Hi everyone,
I hope you are having a good day! I recently pushed a GGUF version of Mistral on my server. It’s got 4 vCPU, and 8GB of RAM. I get around 5 tokens per second on it. I was wondering if there is a way to speed this up further with LLAMA CPP??
I experimented with different settings and found that the optimal setting were setting it to two threads, using f16kv cache, and a context size of 1024. Can I improve the performance any further without scaling the machine or is this the best performance I can get on this particular machine?
Also would it be possible to batch multiple incoming queries together?
Thanks in advance for all the help! 😁 | 2023-12-06T20:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/18cdk5d/best_way_to_configure_gguf_mistral/ | GoodUnderstanding728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cdk5d | false | null | t3_18cdk5d | /r/LocalLLaMA/comments/18cdk5d/best_way_to_configure_gguf_mistral/ | false | false | self | 1 | null |
Show: I hosted Meta's Seamless models in GitHub | 4 | Meta's new Seamless models sound (pun intended) really compelling so I created a monorepo (code + all 40 GB of model files) containing all the relevant files.
[https://github.com/xetdata/seamless\_monorepo](https://github.com/xetdata/seamless_monorepo) | 2023-12-06T20:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/18cdihw/show_i_hosted_metas_seamless_models_in_github/ | semicausal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cdihw | false | null | t3_18cdihw | /r/LocalLLaMA/comments/18cdihw/show_i_hosted_metas_seamless_models_in_github/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '-BGokwzfB4WIJTt_Y2bU9BlhNVEey5wkIONPxWryjOE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?width=108&crop=smart&auto=webp&s=dd2947b799c54c6aab8df6f65e6466e15616e431', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?width=216&crop=smart&auto=webp&s=e935cafacde1e2571ea032e8d598c71a4396ba94', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?width=320&crop=smart&auto=webp&s=e6a6350be66b87de2e717ba3f2ef6c26a2894cec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?width=640&crop=smart&auto=webp&s=0fcca68a166e4f818cecad3a585d71afab5bf659', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?width=960&crop=smart&auto=webp&s=5f6ee019c6487e5580fe366d38e6a91ee9a670fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?width=1080&crop=smart&auto=webp&s=29043231918641c987faf5edfe16f93362ccc5db', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t1Ut-10eo9rIQD8bNE0WRTvt2SSeidBLo-luN9EhMIs.jpg?auto=webp&s=a9d885d075d7afba8445b430acd9be8d7cc97b65', 'width': 1200}, 'variants': {}}]} |
How I Experiment with Open LLMs | 32 | Over the last four months, I've spent 200+ hours playing with open-source models on HuggingFace.
And I've found that, while benchmarks are decent signal, they don't always translate into practical effectiveness or correlate with how I'm planning on using a model.
So, I want to pull back the curtain and share my 'vibe check' method because I don't like blindly following leaderboard rankings.
## 🏁 Starting with Baseline Generations
What I Do: I test 10-15 diverse prompts using the model's default generation parameters.
Why It Matters: This step gives me a raw, unfiltered look at the model's out-of-the-box behaviour and sets a baseline for further experimentation.
## ✅ Selective Prompt Analysis
Process: I choose a balanced mix of 3-5 prompts, some showcasing the model's high performance and others where it falls short.
Objective: It's all about levelling the playing field. This way, I get to see the real impact of tweaking those parameters. Just straightforward insights into how these changes play out.
## 🎛️ Parameter Adjustment - One at a Time
Approach: I experiment with one parameter at a time — temperature, num\_beams, top\_k, top\_p, repetition\_penalty, no\_repeat\_ngram\_size.
Goal: Observing changes in output helps me understand how each parameter influences the model’s responses.
At this point I usually have hundreds of generations from the model.
## 🕵️♂️ Deep Dive into Model Behavior
Method: I manually review the generations, hunting for odd or undesirable outputs.
Insight: This granular analysis is crucial for identifying the model's subtle nuances and potential pitfalls.
## 💻 Writing Targeted Tests
Strategy: Develop tests for specific issues noticed during the exploratory phase (e.g., output length, gibberish, repetition). Use type-token ratio for assessing lexical diversity, and check for repeat n-gram sizes.
Purpose: Makes it easier to do more fine-grained statistical analysis down the line.
## 🧩 The Grid Search
Execution: I perform a detailed grid search over a range of parameter values across several prompts.
Aim: Find a handful of effective settings that consistently yield desirable results.
## 🎯 The Final Stretch
Process: I test these top settings across an expanded set of 20+ prompts, looking for consistent performance and reliability.
Result: This gives me a comprehensive understanding of how the model behaves under various settings and prompts.
## 🔬 Utilizing Advanced Tools
Integration: Finally, I use tools like LangChain's criteria evaluators with GPT-4 to assess output.
Benefit: This step adds a layer of sophistication and accuracy to the selection process.
## I could be totally wrong about the whole approach...but it's the best I came up with.
There are so many moving parts when selecting an LLM, that I was going through some analysis paralysis...this approach is a bit brute force, but it's at least helped me justify why I choose the settings I did.
I guess we could call this "principled vibe checking" lol
🫵🏽 Your Turn: Share Your Insights! Do you have a different approach to selecting and tuning LLMs?
Share your strategies, tips, or even constructive critiques. Looking forward to your stories and experiences in the comments below! | 2023-12-06T20:22:28 | https://www.reddit.com/r/LocalLLaMA/comments/18cd7ok/how_i_experiment_with_open_llms/ | datascienceharp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cd7ok | false | null | t3_18cd7ok | /r/LocalLLaMA/comments/18cd7ok/how_i_experiment_with_open_llms/ | false | false | self | 32 | null |
Finetuning, dataset formatting for text generation (wide range of prose) | 1 | [removed] | 2023-12-06T19:19:46 | https://www.reddit.com/r/LocalLLaMA/comments/18cbr24/finetuning_dataset_formatting_for_text_generation/ | _winterwoods | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cbr24 | false | null | t3_18cbr24 | /r/LocalLLaMA/comments/18cbr24/finetuning_dataset_formatting_for_text_generation/ | false | false | self | 1 | null |
What are some good tools for text2audio that I can run locally? | 4 | I have been playing with LMS studio and some of the 20B models, and I was wondering what other tools I could use to convert the text into audio.
I would like to run it locally as I don’t want to pay for a monthly subscription. I have got a 3090/64gb/12700k | 2023-12-06T18:48:59 | https://www.reddit.com/r/LocalLLaMA/comments/18cb143/what_are_some_good_tools_for_text2audio_that_i/ | mald55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cb143 | false | null | t3_18cb143 | /r/LocalLLaMA/comments/18cb143/what_are_some_good_tools_for_text2audio_that_i/ | false | false | self | 4 | null |
deepseek trained based on Open AI? | 3 | I was using deepseek coder 33B to summarize meeting minutes. Then it outputs this summary to my surprise suggesting it was trained based on models from open AI?
" As an AI developed by OpenAI, I am designed to assist with programming related queries. However, the text provided appears to be a transcript of a discussion rather than a specific programming query. If you have any coding or technical problems you need assistance with, feel free to provide further information. "
​ | 2023-12-06T18:46:26 | https://www.reddit.com/r/LocalLLaMA/comments/18cayyh/deepseek_trained_based_on_open_ai/ | peterwu00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cayyh | false | null | t3_18cayyh | /r/LocalLLaMA/comments/18cayyh/deepseek_trained_based_on_open_ai/ | false | false | self | 3 | null |
OpenAI Switch Kit: Swap OpenAI with any open-source model | 41 | Hey guys,
We built an open-source AI SDK (Python & JavaScript) that provides a drop-in replacement for OpenAI’s chat completion endpoint. We'd love to know what you think so we can make switching as easy as possible and get more folks on open-source.
You can swap in almost any open-source model on Huggingface. HuggingFaceH4/zephyr-7b-beta, Gryphe/MythoMax-L2-13b, teknium/OpenHermes-2.5-Mistral-7B and more.
If you haven't seen us here before, we're [PostgresML](https://postgresml.org/), an open-source MLOps platfom built on Postgres. We bring ML to the database rather than the other way around. We're incredibly passionate about keeping AI truly open. So we needed a way for our customers to easily escape OpenAI's clutches. Give it a go and let us know if we're missing any models, or what else would help you switch.
You can check out the blog post for the details, but here's the git diff:
- const openai = require("openai");
+ const pgml = require("pgml");
- const client = new openai.OpenAI(..);
+ const client= pgml.newOpenSourceAI(database_url);
const messages = [{role: "system", content : "You are a helpful assistant"}, {role: "user", content : "What is 1+1?"}]
- response = await client.chat.completions.create(..)
+ response = await client.chat_completions_create_async(..)
return response.choices[0].message.content | 2023-12-06T18:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/18cawjr/openai_switch_kit_swap_openai_with_any_opensource/ | UnfamiliarAfternoons | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cawjr | false | null | t3_18cawjr | /r/LocalLLaMA/comments/18cawjr/openai_switch_kit_swap_openai_with_any_opensource/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'Gy3S0WOyX2rCy7_cN1nsTwWj1UHErFo8n1DRP2b2CU4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=108&crop=smart&auto=webp&s=976c5721eca753bce007cc34425edafb04d1191b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=216&crop=smart&auto=webp&s=13348d5875fd161aa1c61d1a9ea985e5c79ef991', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=320&crop=smart&auto=webp&s=a24aa3ccd0e3129e476567cb6f2c0dce250015de', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=640&crop=smart&auto=webp&s=30bed3fb41918167cbd77ba3152e0342ec279843', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=960&crop=smart&auto=webp&s=269329a30f4bfc7fbc2303d01003cc4c80397b0b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=1080&crop=smart&auto=webp&s=f43b0fac23705449d34060d5c7041f2ac41e34ff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?auto=webp&s=29fbd23d271617383d00d30830bc55d7eaf33e54', 'width': 1200}, 'variants': {}}]} |
One line local run to spin up model of your choice, parallel runs, chatbot UI and more! | 8 | Hey LocalLLaMA, writing from Kurtosis Tech where we're building dev tools to make it easier for developers to get up and running with composable, reproducible environments. We wrote a v0 of an LLM package that allows you to run models locally or on the cloud, spins up a chat UI, and fully containerized so you can run multiple instances in parallel.
This is still very much a WIP so I would love to open up the floor and hear what'd be most helpful for the community. As this package is written in starlark, it's fairly easy to build on so we'd love to hear what would be most helpful to add in!
Currently looking into some usability improvements and PDFgpt but very open to feedback.
Check it out here: [https://github.com/kurtosis-tech/llm-package](https://github.com/kurtosis-tech/llm-package)
| 2023-12-06T18:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/18cap4c/one_line_local_run_to_spin_up_model_of_your/ | OrdinaryParkBench | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cap4c | false | null | t3_18cap4c | /r/LocalLLaMA/comments/18cap4c/one_line_local_run_to_spin_up_model_of_your/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'RbjPyMCslOiq3x05q3VyXvKSOReWGa_jconx-czvCS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?width=108&crop=smart&auto=webp&s=c388e3e2d9d8400f58147274e27e1b44ebf4f5c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?width=216&crop=smart&auto=webp&s=a3a16919c31aaf6795f49906c6188b58da02c4ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?width=320&crop=smart&auto=webp&s=036a71203f7983ddb791e4af1247f2aa119ce91e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?width=640&crop=smart&auto=webp&s=c35bbbceff0585b1253f7af40092b256e31950f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?width=960&crop=smart&auto=webp&s=c710487497653b2f1327688a6e2bb7a44d7d3c90', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?width=1080&crop=smart&auto=webp&s=83e5c8c13773fb3a3333b19b7463eaae8c7f0497', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZCePtexfF-nXsc2QIeKy3qv8I2W0lmcwysDtJ2c3ZfU.jpg?auto=webp&s=da51bf1614623f5eeb50c69f0515698fb618ad85', 'width': 1200}, 'variants': {}}]} |
Recent updates on the LLM Explorer: surpassed 13K models in the db | 1 | [removed] | 2023-12-06T18:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/18cap46/recent_updates_on_the_llm_explorer_surpassed_13k/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18cap46 | false | null | t3_18cap46 | /r/LocalLLaMA/comments/18cap46/recent_updates_on_the_llm_explorer_surpassed_13k/ | false | false | 1 | null | |
Gemini Nano is a 4bit 3.25B LLM | 230 | The nano version of gemini is intended for android devices.
There's also a 1.8B version for phones with smaller rams | 2023-12-06T18:12:38 | https://x.com/zacharynado/status/1732425598465900708?s=20 | Amgadoz | x.com | 1970-01-01T00:00:00 | 0 | {} | 18ca6k3 | false | null | t3_18ca6k3 | /r/LocalLLaMA/comments/18ca6k3/gemini_nano_is_a_4bit_325b_llm/ | false | false | 230 | {'enabled': False, 'images': [{'id': '7QPPadrMYx8PP6CrcamnmmQEi1TPGvlRMr3RSYF30vU', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?width=108&crop=smart&auto=webp&s=ccee811a8458523fd7e0cf35a0238e3f83436ef8', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?width=216&crop=smart&auto=webp&s=e854ce761e2871f5ed9c4f7d7ece1a7d121d60f8', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?width=320&crop=smart&auto=webp&s=fb8b66356e0bd52db9b80c0d964466c257dda223', 'width': 320}, {'height': 241, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?width=640&crop=smart&auto=webp&s=aaeae578c2891f4945a3e70ece5855726ded7f45', 'width': 640}, {'height': 362, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?width=960&crop=smart&auto=webp&s=91a43756770e064a76c12aae7c9a5a3067bf62bd', 'width': 960}, {'height': 407, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?width=1080&crop=smart&auto=webp&s=a326e706e6e8337ef664cf011fc1707dec82ba4f', 'width': 1080}], 'source': {'height': 726, 'url': 'https://external-preview.redd.it/b66Rrwd7AHIeukwzmAi8K9Mi-uagX0H0aExVdeogiMs.jpg?auto=webp&s=7960b01e35d6f771be959ab5264ae395bd4c5203', 'width': 1922}, 'variants': {}}]} | |
AMD Presents: Advancing AI - AMD is currently giving a live presentation with a lot of technical detail. (see post) | 1 | [removed] | 2023-12-06T18:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/18ca5ra/amd_presents_advancing_ai_amd_is_currently_giving/ | TheMadHobbyist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ca5ra | false | null | t3_18ca5ra | /r/LocalLLaMA/comments/18ca5ra/amd_presents_advancing_ai_amd_is_currently_giving/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QsWQPeNYjJe8G4eflAhnrVDCju8G-uVdD1dCx5MrdPY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?width=108&crop=smart&auto=webp&s=42e9b5475d877b9492ebbc8a81242e7066558130', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?width=216&crop=smart&auto=webp&s=d58c57fa99a9b91bf8ee69de1e71270835224802', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?width=320&crop=smart&auto=webp&s=6055ccf2b788a280fc2ef906721e2b5e08233eda', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?auto=webp&s=052658531bb966a0c4d32a69df01542c4d3baa1c', 'width': 480}, 'variants': {}}]} |
MI300X might just blow H100 out of the water! | 1 | [removed] | 2023-12-06T18:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/18ca4s4/mi300x_might_just_blow_h100_out_of_the_water/ | Feeling-Currency-360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ca4s4 | false | null | t3_18ca4s4 | /r/LocalLLaMA/comments/18ca4s4/mi300x_might_just_blow_h100_out_of_the_water/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QsWQPeNYjJe8G4eflAhnrVDCju8G-uVdD1dCx5MrdPY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?width=108&crop=smart&auto=webp&s=42e9b5475d877b9492ebbc8a81242e7066558130', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?width=216&crop=smart&auto=webp&s=d58c57fa99a9b91bf8ee69de1e71270835224802', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?width=320&crop=smart&auto=webp&s=6055ccf2b788a280fc2ef906721e2b5e08233eda', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xYhDbjuHAGKZrM8ouJc7O3yyDXFhmzJi9c110HxVGV4.jpg?auto=webp&s=052658531bb966a0c4d32a69df01542c4d3baa1c', 'width': 480}, 'variants': {}}]} |
LLM on smartphones with large RAM | 4 | Hi all,
2 days ago Oneplus phone manufacturer, announced Oneplus 12. According to the specifications it has 24 gb of ram. Do you think that we can run mistral 7b or even something larger on our smartphones with decent speed and power usage? If so what do you think about the current trend as Google announced Pixel 8 phones would receive small version of Gemini? Thank you for your responses in advance. | 2023-12-06T18:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/18ca2at/llm_on_smartphones_with_large_ram/ | rapidashlord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ca2at | false | null | t3_18ca2at | /r/LocalLLaMA/comments/18ca2at/llm_on_smartphones_with_large_ram/ | false | false | self | 4 | null |
Is this something useful? I created a node visualizer for ChatGPT API | 2 | 2023-12-06T17:58:54 | https://v.redd.it/bp20v51nmp4c1 | ribeirao | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18c9um1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bp20v51nmp4c1/DASHPlaylist.mpd?a=1704477551%2CN2ExZWNlYzNiM2ExZmM4MmRhZmFmYmI5MjBjOGJmZDEzMWMxMTM4YjY1YTkxMzJhM2E2MDc2YzhhNjU1ZTI5YQ%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/bp20v51nmp4c1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/bp20v51nmp4c1/HLSPlaylist.m3u8?a=1704477551%2CZjdkMzliMjZjNGU0OGMxYmFlYTQ4MDQ1YTNjYTM2NmRiZmJkMzY2ZjQwY2IzODhkNzNkZTAxMWRjODVjMTA5Mg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/bp20v51nmp4c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_18c9um1 | /r/LocalLLaMA/comments/18c9um1/is_this_something_useful_i_created_a_node/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?width=108&crop=smart&format=pjpg&auto=webp&s=21d8dd2ec7ef665bbe42cbaaba1c23f2d4548de7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?width=216&crop=smart&format=pjpg&auto=webp&s=02c7427a05385f150fb3577e74c48e965a8edcf7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?width=320&crop=smart&format=pjpg&auto=webp&s=a806c914cbe4569ad47162765448b9bb9bdc26c2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?width=640&crop=smart&format=pjpg&auto=webp&s=7e0b0e4c908068dae5005c48e3771f8dc2cb0d68', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?width=960&crop=smart&format=pjpg&auto=webp&s=446b3fd1ab11c20312b33e1b05f6d2042015c0a8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4afe8c81a3c035e3a55f621edf55c2bccdd3ab57', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N2Y1cTloeWdycDRjMbE9A718Zo8v9ObzhNp2V033x0GxHpIzo8nGlYVqLHKF.png?format=pjpg&auto=webp&s=fb6cb129774ce04bef95ef889ec1e14d2dfb47b6', 'width': 1920}, 'variants': {}}]} | ||
how to evaluate open-source LLMs using a custom benchmark? | 2 | i want to evaluate open-source LLMs in the context of education, and as I haven't found any educational dataset, i want to create a custom benchmark to evaluate open-source LLMs but i am not sure of how should i proceed, any help please (tutorials, blogs..)? | 2023-12-06T17:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/18c9ms7/how_to_evaluate_opensource_llms_using_a_custom/ | Life_Ask2806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18c9ms7 | false | null | t3_18c9ms7 | /r/LocalLLaMA/comments/18c9ms7/how_to_evaluate_opensource_llms_using_a_custom/ | false | false | self | 2 | null |
Gemini "The World's Most Capable Model" vs GPT-4 on coding | 72 | I asked the same question to both and had Gemini answer twice, same result both times. | 2023-12-06T17:36:26 | https://www.reddit.com/gallery/18c9bqs | TheEclecticScholar | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18c9bqs | false | null | t3_18c9bqs | /r/LocalLLaMA/comments/18c9bqs/gemini_the_worlds_most_capable_model_vs_gpt4_on/ | false | false | 72 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.