Columns (name: dtype, min to max):

title: stringlengths (1 to 300)
score: int64 (0 to 8.54k)
selftext: stringlengths (0 to 41.5k)
created: timestamp[ns]date (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: stringlengths (0 to 878)
author: stringlengths (3 to 20)
domain: stringlengths (0 to 82)
edited: timestamp[ns]date (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0 to 2)
gildings: stringclasses (7 values)
id: stringlengths (7 to 7)
locked: bool (2 classes)
media: stringlengths (646 to 1.8k)
name: stringlengths (10 to 10)
permalink: stringlengths (33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths (4 to 213)
ups: int64 (0 to 8.54k)
preview: stringlengths (301 to 5.01k)
What are good local information retrieval models?
0
[removed]
2023-06-28T04:15:40
https://www.reddit.com/r/LocalLLaMA/comments/14kzb77/what_are_good_local_information_retrieval_models/
Icaruswept
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kzb77
false
null
t3_14kzb77
/r/LocalLLaMA/comments/14kzb77/what_are_good_local_information_retrieval_models/
false
false
default
0
null
Mpt models with the new exllama context extension trick?
0
Hi all, I was just wondering if the 2x and 4x context improvement tricks for exllama could be applied to the MPT models, or is it llama-model specific? I believe the MPT models are 8k tokens, so theoretically it could get to 32k context? Obviously you would need a monster to run it (might need to sell a few friends' kidneys) but it's a neat experiment. I'm not sure if the MPT models even work with exllama (yet?) so this might be a dumb question, but a dude can dream. Also, even dumber thought: I wonder if this would work on that 65k storywriter model? I know that was just a fine tune of a base model and would take a super computer, but that would be 260k context...
2023-06-28T03:07:51
https://www.reddit.com/r/LocalLLaMA/comments/14kxz52/mpt_models_with_the_new_exllama_context_extension/
TheSilentFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kxz52
false
null
t3_14kxz52
/r/LocalLLaMA/comments/14kxz52/mpt_models_with_the_new_exllama_context_extension/
false
false
self
0
null
New oobabooga subreddit: r/oobaboogazz
126
Hello everyone,

I'd like to let you know that I have created my own official subreddit for the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/) project, and would like to invite you to join. It can be found here: https://www.reddit.com/r/oobaboogazz

If anyone needs help setting up the webui, is experiencing issues, would like to ask questions, etc., please feel free to start a new thread there. I have been replying to 100% of posts and will be happy to help.
2023-06-28T02:17:26
https://www.reddit.com/r/LocalLLaMA/comments/14kww9i/new_oobabooga_subreddit_roobaboogazz/
oobabooga4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kww9i
false
null
t3_14kww9i
/r/LocalLLaMA/comments/14kww9i/new_oobabooga_subreddit_roobaboogazz/
false
false
self
126
{'enabled': False, 'images': [{'id': 'V6jfVg2qsG0kqRYwnFd9vWemtNbCGph77abkd8Qm7_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=108&crop=smart&auto=webp&s=c9fdc5719a19b1143a7ec2aecb85bb783f1fbe14', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=216&crop=smart&auto=webp&s=4e923eb22b859423915709b54283f4cbcf452846', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=320&crop=smart&auto=webp&s=89d0077b9e7b5137f4fb0bfcad7fe776ae47452a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=640&crop=smart&auto=webp&s=57b1dd3a6c545a2f71e5c83331da7b54919ed3d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=960&crop=smart&auto=webp&s=3105f8e60881cfabb39fd9a4886986fbab11c406', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?width=1080&crop=smart&auto=webp&s=1a7c70cdaebd08fb9e9effb8df93d2bf093e5cd1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9Z7HeoGSIgRDA4--fKQ2Z1qpxb4o_Tt4mqCAWR-Tt0c.jpg?auto=webp&s=146533ac9316057590c306e935aa1fedb9106626', 'width': 1200}, 'variants': {}}]}
Can't get shared CUDA memory from new Nvidia drivers to work
8
Apparently the latest Nvidia drivers (535.x) have a change that'll make CUDA simply offload some of the processing to RAM when it runs out of VRAM instead of throwing an error. I've updated my drivers to 535.54.03 on Linux Mint and tried testing this change using:

* WizardLM-33B-V1.0-Uncensored-SuperHOT-8K-GPTQ with 4096 context
* Oobabooga webUI with the included exllama loader
* SillyTavern (connected to the webUI's API)

However, when the context gets to >3200 tokens (where it usually starts throwing Out Of Memory errors) it keeps throwing the same old OOM errors as before rather than offloading anything to RAM. Is there something that I need to do to enable this? There's very little discussion of this change, and it seems to just work out of the box for most people, so I'm really not sure what the problem could be.
2023-06-28T02:12:18
https://www.reddit.com/r/LocalLLaMA/comments/14kwsb9/cant_get_shared_cuda_memory_from_new_nvidia/
MassiveFloppa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kwsb9
false
null
t3_14kwsb9
/r/LocalLLaMA/comments/14kwsb9/cant_get_shared_cuda_memory_from_new_nvidia/
false
false
self
8
null
ExLlama Extremely Slow?
2
[deleted]
2023-06-28T01:55:02
[deleted]
1970-01-01T00:00:00
0
{}
14kwekq
false
null
t3_14kwekq
/r/LocalLLaMA/comments/14kwekq/exllama_extremely_slow/
false
false
default
2
null
Anything similar to HeyPi?
2
The “conversationalist” style is truly amazing. I’ve used several local models, but nothing ever probes and keeps a conversation going quite like HeyPi does! This sub is amazing! Thankful for every one of you!
2023-06-28T01:36:19
https://www.reddit.com/r/LocalLLaMA/comments/14kw0h4/anything_similar_to_heypi/
SocialDinamo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kw0h4
false
null
t3_14kw0h4
/r/LocalLLaMA/comments/14kw0h4/anything_similar_to_heypi/
false
false
self
2
null
A (possibly stupid) thought about context extension.
3
The big talk recently on the sub has been about the new method to extend context ([https://kaiokendev.github.io/til#extending-context-to-8k](https://kaiokendev.github.io/til#extending-context-to-8k)). Reading through this got me thinking that more learning capacity than necessary might be getting spent on inferring meaning from individual token positions. Based on that, is it at all a reasonable idea (or in the direction of a reasonable idea) to train a model with normalized token locations, by which I mean making every input in both training and prediction appear to span the same range? If we use a context length of 1024 as an example, a 4-token sequence would show as tokens 0, 255, 511, and 767 (or some variation depending on exactly how you want to spread them out).

I've been focusing my efforts more on applying language models than going deep into their inner workings, so I don't have a sufficient grasp of whether this is feasible and, if it is, whether it has any potential to be useful. I'm interested to hear others' thoughts.
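As a concrete illustration of the normalization idea, here is a toy sketch (hypothetical helper, assuming simple even spacing; the exact spread is a free choice, as the post notes):

```python
def normalized_positions(n_tokens: int, max_len: int = 1024) -> list[int]:
    """Spread n token positions evenly over [0, max_len), so every input
    appears to span the full context window regardless of its length."""
    if n_tokens == 0:
        return []
    step = max_len / n_tokens
    return [int(i * step) for i in range(n_tokens)]

# A 4-token sequence over a 1024-position range:
print(normalized_positions(4))  # [0, 256, 512, 768]
```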
2023-06-28T01:19:21
https://www.reddit.com/r/LocalLLaMA/comments/14kvn4b/a_possibly_stupid_thought_about_context_extension/
Captainbetty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kvn4b
false
null
t3_14kvn4b
/r/LocalLLaMA/comments/14kvn4b/a_possibly_stupid_thought_about_context_extension/
false
false
self
3
null
With API changes going into effect soon…
45
[removed]
2023-06-27T23:44:34
https://www.reddit.com/r/LocalLLaMA/comments/14ktinx/with_api_changes_going_into_affect_soon/
Artistic_Okra7288
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ktinx
false
null
t3_14ktinx
/r/LocalLLaMA/comments/14ktinx/with_api_changes_going_into_affect_soon/
false
false
default
45
null
Nvidia user? Make sure you don't offload too many layers.
54
A quick reminder to Nvidia users of llama.cpp, and probably other tools. Since a few driver versions back, the number of layers you can offload to GPU has slightly decreased. Moreover, if you offload too many layers, it will no longer produce an error. Instead, it will simply run about 4 times slower than it should. So, if you missed it, you may be able to speed up your llamas right now by reducing your layer count by 5-10%.

To determine if you have too many layers on Win 11, use Task Manager (Ctrl+Shift+Esc). Open the Performance tab -> GPU and look at the graph at the very bottom, called "**Shared** GPU memory usage". Now start generating. At no point should the graph show anything: it should stay at zero. If it does not, you need to reduce the layer count.

Remember to test with the context filled: either a chat with a long preexisting history, or a story mode with a long existing story, or even garbage.
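For a rough feel of the arithmetic involved, a back-of-the-envelope sketch (the helper and all numbers are illustrative assumptions, not values from any driver or tool; measure your own per-layer size):

```python
def max_offload_layers(vram_free_mib: float, layer_size_mib: float,
                       overhead_mib: float = 1500, safety: float = 0.9) -> int:
    """Rough estimate of how many layers fit in VRAM without spilling into
    shared (system) memory. The 10% safety margin mirrors the 5-10%
    reduction suggested in the post; overhead covers context/KV buffers."""
    usable = (vram_free_mib - overhead_mib) * safety
    return max(0, int(usable // layer_size_mib))

# e.g. a 24 GiB card with ~400 MiB per layer (illustrative numbers):
print(max_offload_layers(24576, 400))
```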
2023-06-27T23:26:14
https://www.reddit.com/r/LocalLLaMA/comments/14kt3hz/nvidia_user_make_sure_you_dont_offload_too_many/
Barafu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kt3hz
false
null
t3_14kt3hz
/r/LocalLLaMA/comments/14kt3hz/nvidia_user_make_sure_you_dont_offload_too_many/
false
false
self
54
null
Help understanding how to create a dataset for training a model
14
Greetings, it was suggested I post here :)

One of the key areas I just can’t wrap my head around is how to fine-tune or enhance a given model with my own dataset. For example: let’s say I wanted to train a small LLM to fully understand all of the various firewall vendors we deal with. I’ve got access to a huge set of data on this (KB articles, process docs, firsthand knowledge, etc.). I’d like to enhance a base model with this so we can ask things like “How do I troubleshoot a CPU issue on Palo Alto?” or “What is the command I need to run a packet capture on a Cisco ASA, Palo Alto, Checkpoint, etc.?”

I think this is basic question-and-answer stuff, so I’d like to create something on HF where others can use this if they want (I admit, it’s a niche community). I assume this is just some sort of tabular dataset like a CSV where I’d have columns like “Question”, “Answer”, and “Vendor”?

What I have no idea about is how I could take a base model (something small, ideally) and “train” it (if that’s even the right word) on this additional Q&A dataset. Can anyone explain this to me, either at a high level or in more detail? If there’s an existing guide out there, I’d be all ears. I’ve looked at things like PrivateGPT and they just don’t seem to work (and there’s no documentation I can find that dictates how a txt or csv file should be set up), and I like the idea of getting a dataset on HF that others can use if they want. I’d greatly appreciate any advice you can offer.

Edit: I’ve got a 3090 Ti at home, so I’m hoping I can do this “training” in a day or a week. I’m not too concerned with how long it takes to train, as long as I can run the inference on CPU.
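On the dataset-format question, a minimal sketch of turning such a CSV into the instruction-style JSON lines many fine-tuning scripts consume (the column names match the ones proposed above; the alpaca-style field names are a common convention, not a requirement):

```python
import csv
import io
import json

def csv_to_instruct_jsonl(csv_text: str) -> str:
    """Convert Question/Answer/Vendor CSV rows into one JSON object per
    line, in the instruction/input/output layout many LoRA fine-tuning
    scripts accept."""
    rows = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for r in rows:
        lines.append(json.dumps({
            "instruction": r["Question"],
            "input": f"Vendor: {r['Vendor']}",
            "output": r["Answer"],
        }))
    return "\n".join(lines)

sample = ("Question,Answer,Vendor\n"
          "How do I start a packet capture?,Use the capture CLI commands.,Cisco ASA\n")
print(csv_to_instruct_jsonl(sample))
```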
2023-06-27T23:18:47
https://www.reddit.com/r/LocalLLaMA/comments/14ksx46/help_understanding_how_to_create_a_dataset_for/
SigmaSixShooter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ksx46
false
null
t3_14ksx46
/r/LocalLLaMA/comments/14ksx46/help_understanding_how_to_create_a_dataset_for/
false
false
self
14
null
Long GPT4 conversations
25
Lots of buzz around long context recently. Here's another dataset with long (10k+ tokens per chat) gpt4 conversations: [https://huggingface.co/datasets/OpenLeecher/GPT4-10k/tree/main/Data](https://huggingface.co/datasets/OpenLeecher/GPT4-10k/tree/main/Data)
2023-06-27T21:53:05
https://www.reddit.com/r/LocalLLaMA/comments/14kqrdn/long_gpt4_conversations/
JarPony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kqrdn
false
null
t3_14kqrdn
/r/LocalLLaMA/comments/14kqrdn/long_gpt4_conversations/
false
false
self
25
{'enabled': False, 'images': [{'id': '4QXg9jckuCpYLuj6E9ui5TH4FKzEaHdo1X2yi-aSi_U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=108&crop=smart&auto=webp&s=d8d3847506a44d447598447d6d294fd9a55a1a23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=216&crop=smart&auto=webp&s=0ed962aee3cb8d3839db38814871845c52cd8d76', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=320&crop=smart&auto=webp&s=80d2758042d9d48160c76d2dc225e572237d0870', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=640&crop=smart&auto=webp&s=c5c3b98fcbb32e6519b784d63bb0fc421469b19f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=960&crop=smart&auto=webp&s=1647b88929329347bb20438dc7d909ad44f7f1c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?width=1080&crop=smart&auto=webp&s=954ce92adb859f0a0031ac3c6845c447eeea3253', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RhDYoKe84tEmxOKtf3ha7NPRMV5inSLd-UQu8SP9ocI.jpg?auto=webp&s=3d58cea4f04d345d7847a00b307fa661da8c40b1', 'width': 1200}, 'variants': {}}]}
Any "coder" model with GPT4All? (MacOS)
2
[removed]
2023-06-27T21:47:29
https://www.reddit.com/r/LocalLLaMA/comments/14kqm9v/any_coder_model_with_gpt4all_macos/
808phone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kqm9v
false
null
t3_14kqm9v
/r/LocalLLaMA/comments/14kqm9v/any_coder_model_with_gpt4all_macos/
false
false
default
2
null
Questions about memory, tree-of-thought, planning
1
0. SOTA performance frameworks and models for code generation, chatbotting, or general fine-tuneability? And generally, which framework has the best API for usage in projects and the most active development? Am I off in thinking ooba or exllama or AutoGPTQ? I’m thinking of possibly using WizardCoder models (have run them on ggml on cpu) but curious to hear thoughts
1. Best way to query a local persistent memory? Should I use a langchain interface plus a simple vector db (eg chromadb)? If possible the memory should be able to store not only docs but also tasks emitted by the language model. I was considering doing something manually in postgres + pgvector but it got kinda tedious
2. Is there an easy standard way to add tree-of-thought chains?
3. What is the SOTA for adding “planning” capability?
4. Bonus: integrating jsonformer?

pls halp 🙂 and thank you!!

edit: hw is either:
- 1x 3090, 64 gb ddr5, 12900ks
- (1 or 2)x 4090s, 128 gb ddr4, amd epyc (forgot which sku)
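On question 1, a minimal stand-in for a vector-backed memory, just to show the retrieval shape (a real setup would swap the toy bag-of-letters embedding for a sentence-embedding model, and the in-process list for chromadb or pgvector):

```python
import math

class TinyVectorMemory:
    """Toy vector store: holds (text, embedding) pairs -- docs or tasks --
    and retrieves by cosine similarity. embed() is a deliberately crude
    stub for illustration only."""

    def __init__(self):
        self.items = []

    def embed(self, text: str) -> list:
        # bag-of-letters count vector; replace with a real embedding model
        v = [0.0] * 26
        for ch in text.lower():
            if ch.isalpha():
                v[ord(ch) - 97] += 1.0
        return v

    def add(self, text: str) -> None:
        self.items.append((text, self.embed(text)))

    def query(self, text: str, k: int = 1):
        q = self.embed(text)

        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        return sorted(self.items, key=lambda it: cos(q, it[1]), reverse=True)[:k]

mem = TinyVectorMemory()
mem.add("task: refactor the parser")
mem.add("doc: postgres pgvector setup notes")
print(mem.query("pgvector")[0][0])
```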
2023-06-27T20:15:21
https://www.reddit.com/r/LocalLLaMA/comments/14ko678/questions_about_memory_treeofthought_planning/
mrtransisteur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ko678
false
null
t3_14ko678
/r/LocalLLaMA/comments/14ko678/questions_about_memory_treeofthought_planning/
false
false
self
1
null
My new model: Harper - AssistantEditor
26
2023-06-27T19:51:08
https://i.redd.it/ts3l8mmr7m8b1.jpg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
14knith
false
null
t3_14knith
/r/LocalLLaMA/comments/14knith/my_new_model_harper_assistanteditor/
false
false
https://b.thumbs.redditm…0Ve_CnIsWt_k.jpg
26
{'enabled': True, 'images': [{'id': 'TtvhTqZ9rQeFPK2y6ww-UJgNBWB1jIdCSTcJdY99Xw8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?width=108&crop=smart&auto=webp&s=4d5ed106449b75b766b1b903856c7440c6ecd94e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?width=216&crop=smart&auto=webp&s=5872a2f7bf9453656935c2003ae8e2742f75bf94', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?width=320&crop=smart&auto=webp&s=632c91a8b2b08332bf6f80d62358234977fd5fa3', 'width': 320}], 'source': {'height': 563, 'url': 'https://preview.redd.it/ts3l8mmr7m8b1.jpg?auto=webp&s=44756054f751f93d3ad80cd1f25c119fe3040607', 'width': 563}, 'variants': {}}]}
Automated installation of self-contained oobabooga environment
15
Hey there,

Maybe someone would be interested in a (relatively) easy self-contained deployment of oobabooga's webui packaged with the WizardLM-13B-Uncensored-GPTQ model. This is meant to be a self-contained solution for Unix systems, so it's based on Docker, using the newest Nvidia CUDA 12.1.1 docker image - so make sure you use a compatible GPU.

Repository: [https://github.com/rgryta/LLM-WSL2-Docker](https://github.com/rgryta/LLM-WSL2-Docker)

Basically, it's a one-click installation script that:

1. Checks if you have Hyper-V and WSL2 enabled - prompts you to enable and install them if you want to.
2. Installs a VERY small Ubuntu environment. I have written some custom Powershell scripts that download base OCI images directly from official repositories and easily deploy them to WSL2.
3. Enables systemd within the WSL2 as well as nvidia-docker.
4. That's where the Windows-specific part is finished; the rest of the steps are common for both Windows and Unix (for Unix you have to use the commands under the `docker` directory specifically).
5. Once nvidia-docker is available, we can build a docker image based on the nvidia-develop cuda images. This likely could have used just nvidia-runtime, but I've only finished migrating this from my AUTOMATIC1111 stable-diffusion based project where the develop image was necessary. Note: I saw there's a docker setup available in the README, but there doesn't seem to be an official docker image on Docker Hub; I'd use that one if there were.
6. The Dockerfile already downloads the proper model files. If you want to change the model to download, it's relatively easy: in the dw_models subimage simply change the huggingface paths (commented examples are provided - note that you'll have to remove the --loader parameter if you don't use a GPTQ model).
7. This is where the installation process finishes. The final size of the WSL2 environment is about 30GiB with all that configuration and the full 13B model inside. Remember - the model files are WITHIN the container, in order to completely self-contain the entire application in one place.
8. In order to run the app, simply run the "run.bat" script. Note: the provided script *prunes all containers* and starts from scratch - so if you downloaded any new model, it won't be available once you restart the app. You may want to remove the pruning part from the `docker/run.sh` script.

The entire process of how it works is described in the README file in the repository.

TLDR:

1. Download and unpack the repository: [https://github.com/rgryta/LLM-WSL2-Docker/archive/refs/heads/main.zip](https://github.com/rgryta/LLM-WSL2-Docker/archive/refs/heads/main.zip)
2. Install using `install.bat` - you'll be asked to run as admin (needed to check Hyper-V settings).
3. Launch using `start.bat` - no admin required. You have to leave the command terminal up - don't close it.
4. Stop the running app using `stop.bat`.
5. Uninstall using `uninstall.bat`.

Enjoy.
2023-06-27T18:36:29
https://www.reddit.com/r/LocalLLaMA/comments/14klp54/automated_installation_of_selfcontained_oobabooga/
Crad999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14klp54
false
null
t3_14klp54
/r/LocalLLaMA/comments/14klp54/automated_installation_of_selfcontained_oobabooga/
false
false
self
15
{'enabled': False, 'images': [{'id': 'pKvtKo0fT0qw1N1Fdb62JtgzTVJpNvBnBjy8Kp9X4JY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=108&crop=smart&auto=webp&s=454b34343789dfd82f3149d822955468ae2c3ac3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=216&crop=smart&auto=webp&s=22507b911e75116c494c4f339715a036c1a89fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=320&crop=smart&auto=webp&s=09d0ef7aa72a53e6a43d4d7755e3fcfd5f203296', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=640&crop=smart&auto=webp&s=17731d5671ca6acb51908b06264684b55c8642a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=960&crop=smart&auto=webp&s=8d4a195666883e67cdeb5bfd1b3b607c49a747c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?width=1080&crop=smart&auto=webp&s=ff7702bd9657db234df1b16fe52e515bc11fb50f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ma7FbHTEZiHuXNJGyz1Yi1DjWv0AuoqQx-qaMME6jqA.jpg?auto=webp&s=7f4e01389679c00196f93a8e974b8486d06d0707', 'width': 1200}, 'variants': {}}]}
Fine Tuning Stable Vicuna
1
[deleted]
2023-06-27T18:31:19
[deleted]
1970-01-01T00:00:00
0
{}
14klk94
false
null
t3_14klk94
/r/LocalLLaMA/comments/14klk94/fine_tuning_stable_vicuna/
false
false
default
1
null
TheBloke has released "SuperHot" versions of various models, meaning 8K context!
466
[https://huggingface.co/TheBloke](https://huggingface.co/TheBloke)

Thanks to our most esteemed model trainer, Mr TheBloke, we now have versions of Manticore, Nous Hermes (!!), WizardLM and so on, all with the SuperHOT 8k context LoRA. And many of these are 13B models that should work well with lower-VRAM GPUs! I recommend trying to load with ExLlama (HF if possible).

Now, I'm not going to claim that this is going to compete with even GPT 3.5, but I've tried a few and conversations absolutely last longer whilst retaining complex answers and context. This is a huge step up for the community, and I want to send a huge thanks to TheBloke for making these models, and kaiokendev for SuperHOT: [https://kaiokendev.github.io/](https://kaiokendev.github.io/)

So, let's use this thread to post some experiences? Now that there are a variety of great models to choose from with longer context, I'm left wondering which to use for RP. I'm trying Guanaco, WizardLM and this version of Nous Hermes (my prior 13B model of choice) and they all seem to work well, though with differing responses.

Edit: I use Oobabooga. And with the update as of today I have no trouble running the new models I've tried with ExLlama_HF.
2023-06-27T16:57:56
https://www.reddit.com/r/LocalLLaMA/comments/14kj2w8/thebloke_has_released_superhot_versions_of/
CasimirsBlake
self.LocalLLaMA
2023-06-27T17:38:56
0
{}
14kj2w8
false
null
t3_14kj2w8
/r/LocalLLaMA/comments/14kj2w8/thebloke_has_released_superhot_versions_of/
false
false
self
466
{'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]}
Quantizing/Training models for specific applications
1
I'm looking to build a fully human-like chatbot using open source LLMs. As I understand it, a lot of people use LoRA to train models for specific applications. Where should I start looking for a dataset that'll suit my need?

Like training, does quantization also play a significant role in making models better for a specific application? If so, how? Or does quantization only help with decreasing the weight size?

I'm still very much a beginner with regard to open source LLMs. If my questions don't make any sense, do mention that in the comments.
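To make the quantization point concrete, a sketch of symmetric round-to-nearest quantization (illustrative only; real schemes like GPTQ work group-wise and minimize reconstruction error, but the storage-shrinking idea is the same, and it does not teach the model anything new -- that part is what LoRA training does):

```python
def quantize_4bit(weights, bits=4):
    """Map float weights to small signed integers and back.
    Storage drops ~4x vs fp16; accuracy is limited by rounding error."""
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # quantized integers
    deq = [v * scale for v in q]             # dequantized approximation
    return q, deq, scale

q, deq, scale = quantize_4bit([0.8, -0.31, 0.02, -0.77])
print(q)  # small integers in [-7, 7]
```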
2023-06-27T16:52:25
https://www.reddit.com/r/LocalLLaMA/comments/14kixmr/quantizingtraining_models_for_specific/
CmplxQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kixmr
false
null
t3_14kixmr
/r/LocalLLaMA/comments/14kixmr/quantizingtraining_models_for_specific/
false
false
self
1
null
The Choice of a Particular Generation Parameters Preset Can Make or Break a Model!
11
Hi all,

I have been experimenting with different LLM models and their ability to do math and solve basic equations. I found that the choice of the Generation Parameters Preset in the Oobabooga webUI can make a big difference in the performance of the models.

Two weeks ago, I downloaded the Nous-Hermes 13B model and tested it with the default preset, which was “LLaMA-Precise” at the time. I was impressed by how well it could handle math problems. However, after I updated the webUI to the latest version, I noticed that the model could no longer do math at all. I checked the settings and saw that the default preset had changed to “Simple-1”. I switched it back to “LLaMA-Precise” and the model regained its mathematical prowess.

This made me curious about whether other models could also benefit from different presets. So I decided to test all the models I have with all the presets available in the webUI. To my surprise, some models that I thought were incapable of doing math actually performed well with certain presets. Here are some examples:

[Preset Template Choice](https://preview.redd.it/jwrbif4p8l8b1.png?width=485&format=png&auto=webp&s=fb7df439ae452c7f633e1cb63a62db8a1d68015c)

Unfortunately, none of the presets worked for the rest of the models I have. So either they are inherently bad at math, or I need to find better settings for them. I hope this helps anyone who is interested in testing LLM models for math skills. I also noticed that these presets can work one day but not the next, so I am not sure if it's a bug with the Oobabooga webUI or something else.

Please share the models you have that can do math and which preset(s) you are using to make them work. Let me know if you have any questions or suggestions.
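Much of what differs between presets is sampling parameters. A sketch of the temperature knob alone, which can decide whether arithmetic comes out deterministic or noisy (softmax with temperature is standard; the helper name is mine):

```python
import math

def sample_probs(logits, temperature=1.0):
    """Softmax with temperature. Low temperature sharpens the
    distribution (the top token nearly always wins -- good for math);
    high temperature flattens it (more varied, more error-prone)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(sample_probs(logits, temperature=1.0))
print(sample_probs(logits, temperature=0.3))  # much more peaked
```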
2023-06-27T16:43:22
https://www.reddit.com/r/LocalLLaMA/comments/14kip27/the_choice_of_a_particular_generation_parameters/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kip27
false
null
t3_14kip27
/r/LocalLLaMA/comments/14kip27/the_choice_of_a_particular_generation_parameters/
false
false
https://b.thumbs.redditm…lRdxIFCiT32s.jpg
11
null
Question about Langchain Conversation Chains
2
I've been using langchain for a personal project of mine. I added a persona to a character with a prompt template, e.g. "Satoshi is a Pokemon Trainer ...", and then I passed the history and the input question to the prompt template. I am using ConversationBufferMemory/ConversationSummaryMemory to let the model keep track of the conversation and reply appropriately.

Now, I pass a first input to the model, say, "Hi, I like soccer, what sport do you like?". To this question the model gives a satisfactory answer, but it does not stop there: it continues on, prompts itself with different questions, and then adds these to the history. Why does this happen and how do I stop it?

This is the prompt template I'm using:

_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and a Pokemon trainer named Satoshi. Satoshi is a passionate and hardworking Pokemon trainer. He is a student at the pallet pokemon school. He is driven by his love for pokemon and aims to be a Pokemon master.

Relevant pieces of previous conversation:
{history}

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: {input}
Satoshi:"""

This is the chat history:

Human: I'm Gary, I'm currently a student at the Pokemon School.
Satoshi: Hi gary, I'm Satoshi. Nice to meet you.
Human: Nice to meet you too, Satoshi. What year are you in?
Satoshi: I'm in my third year.
Human: That's great. I'm in my second year.
Satoshi: Oh, that's interesting. What do you enjoy most about your courses?
Human: I really like battling Pokemon.
Satoshi: I completely agree. I find the biology of Pokemon interesting.
Human: Nice, I've always been curious about how evolution work.
ohn: Me too. I'm currently working on a project that involves studying the evolution of pokemon.
Human: That sounds really cool. What kind of techniques are you using?
Satoshi: I'm experimenting with different pokemon and observing them at various stages of evolution.
Human: Wow, that's really advanced. I'm still learning the basics.
Satoshi: Don't worry, it takes time to get there. Just keep practicing and you'll get there eventually.
Human: Thanks for the advice. I'll keep that in mind.
Satoshi: No problem. Let me know if you have any questions or if you need any help.
Human: Will do. Thanks for being so helpful.
Satoshi: Anytime. Good luck with your studies.

Here, the only prompt I gave it is: **I'm Gary, I'm currently a student at the Pokemon School.** It generated all the other questions on its own. This makes my model quite slow, as the llm pipeline generates dozens of prompts like this for every prompt I give it manually. How do I stop this?

This is my ConversationChain pipeline:

PROMPT = PromptTemplate(
    input_variables=["history", "input"],
    template=_DEFAULT_TEMPLATE
)
conversation_with_summary = ConversationChain(
    llm=local_llm,
    prompt=PROMPT,
    memory=ConversationBufferMemory(ai_prefix="Satoshi"),
    verbose=True
)
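The usual fix for a base model continuing the dialogue on its own is stop sequences: either pass a stop list to the LLM wrapper if yours supports one, or post-process the completion. A sketch of the post-processing variant (hypothetical helper name; speaker tags match the template above):

```python
def truncate_at_stop(generated: str,
                     stop_sequences=("\nHuman:", "\nSatoshi:")) -> str:
    """Cut the completion at the first speaker tag the model
    hallucinates, so only the assistant's own turn survives."""
    cut = len(generated)
    for s in stop_sequences:
        i = generated.find(s)
        if i != -1:
            cut = min(cut, i)
    return generated[:cut].rstrip()

out = "I like soccer too!\nHuman: What else?\nSatoshi: Battling."
print(truncate_at_stop(out))  # I like soccer too!
```

Beyond correctness, this also addresses the speed complaint: the model stops generating (or at least you stop keeping) the dozens of self-prompted turns per real prompt.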
2023-06-27T15:19:57
https://www.reddit.com/r/LocalLLaMA/comments/14kgkc6/question_about_langchain_conversation_chains/
CmplxQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kgkc6
false
null
t3_14kgkc6
/r/LocalLLaMA/comments/14kgkc6/question_about_langchain_conversation_chains/
false
false
self
2
null
Is landmark attention trained correctly?
6
I’ve been reading up on landmark attention and planning on implementing it myself this weekend. One thing I don’t understand, though: if we have a context window of 32k, does our training sample also need to be a continuous 32k? Or can it just be snippets of instructions fed together? On the surface, all the training seems to do is ask the model for a passphrase that was given x tokens ago, but is that enough for the model to learn to use the previous context? If it does need to be a continuous 32k, I’m thinking about using textbooks as training data.
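For reference, the passphrase-retrieval training sample described above can be sketched like this (hypothetical generator; filler text and phrasing are illustrative, not the paper's exact format):

```python
def passkey_sample(n_filler: int, passkey: int):
    """Build one passkey-retrieval example: filler, a hidden passkey,
    more filler, then the question. Whether snippets like this suffice,
    or you need continuous long documents (textbooks), is exactly the
    open question in the post."""
    filler = "The grass is green. " * n_filler
    prompt = (filler
              + f"The pass key is {passkey}. Remember it. "
              + filler
              + "What is the pass key?")
    return prompt, str(passkey)

p, a = passkey_sample(5, 73421)
print(a in p)  # True
```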
2023-06-27T15:19:04
https://www.reddit.com/r/LocalLLaMA/comments/14kgjla/is_landmark_attention_trained_correctly/
TheDigitalRhino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kgjla
false
null
t3_14kgjla
/r/LocalLLaMA/comments/14kgjla/is_landmark_attention_trained_correctly/
false
false
self
6
null
Compare TheBloke_WizardLM-13B-V1.0-Uncensored-GPTQ with TheBloke_WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GPTQ
32
ExLlamaConfig: config.compress_pos_emb = 4.0, config.max_seq_len = 8192. **Prompt: 5176 tokens**

https://preview.redd.it/zq15p57crk8b1.png?width=3194&format=png&auto=webp&s=40542966f851f99ec3e48437ec24232fb3aaa479

**TheBloke_WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GPTQ**

This code defines a neural network model that takes as input an image represented by the matrix `X` and outputs a probability distribution over classes using softmax activation function. The model consists of several layers with different modes such as pooling layer (max or average), fully connected layer (`fc`), and convolutional layer (`blah`). It also includes backward propagation algorithm for training the weights of the model and computing the cost and accuracy during testing phase. Here is a summary of each part of the code:

- Initialize parameters function: Takes a list of dictionaries representing the architecture of the model as inputs and returns a new list of updated dictionary with modeled layers. For each layer it checks whether its type is maxpool or avgpool and updates accordingly. If fc or blah, it just adds to the previous layer's shape without any changes. If none of these types are found, print error message.
- sigmoid function: Calculates the output value of sigmoid activation function on given input Z, caches intermediate values.
- sigmoid_backward function: Backpropagate through sigmoid activation function, updating dZ with respect to Z.
- relu function: Activation function, no caching involved.
- relu_backward function: Backpropagating through relu activation function, updating dZ with respect to Z.
- Softmax function: Normalizes the output of linear transformation in forward pass, cache intermediate results.
- softmax_forward function: Applies softmax activation function on linear transformation result.
- Linear activation forward: Reshapes Aprev to match next layer's dimensions, applies weighted dot product followed by bias addition. Cache intermediate values if needed.
- Linear activation backward: Update W, b, db using backpropagated gradients from previous layer.
- Zero padding function: Pads image to make sure all images have same size before applying convolutional layers.
- main_algo_forward function: Performs forward propagation for one time step (including pooling). Computes max/average operations where applicable. Caching intermediate values when necessary.
- main_algo_backward function: Updates weights and biases for current layer based on backpropagated errors. Includes handling cases for different modes like relu or maxpool.
- distribute_value function: Calculates average value over window.
- create_mask_from_window function: Creates a mask array indicating which elements are equal to maximum element in a given slice.
- pool_forward function: Forward propogate max or avg operation depending on mode parameter. Flattens result into shape expected by next layer.
- compute_cost function: Calculates cross entropy loss between predicted probabilities and true labels.
- backward_propagation function: Backpropagates through all layers to update weights and biases using gradient descent algorithm.

***TheBloke_WizardLM-13B-V1.0-Uncensored-GPTQ***

This code defines a neural network model for image classification task with multiple input and output layers that can be trained using backpropagation algorithm to minimize the cost function of cross-entropy loss between predicted logistic regression error.
It consists of several functions such as `initialize_parameters`, `main_forward`_, `sigmargmax`, `softmax`, `relu`, `linear_activation_forward`, `pool_backward`, `algo_forward`, `softmax`, `linear_backward`, `zero`, `relu`, `distribute`, `create_window`, `mask_from_window`, `average_value`, `pool_forward_propogtion`, `compute_cost`, `backward_propagion`, `update_parameters`, `predict`, `train_min_batch`, `accuracy`. The inputs are given in shape `(X_train`, `Y_train`, `Y_test`, `logger`, `num_epoch`, `learning_rate`, `batch_size`, `experiments`, `n_iterations`, `layer`, `W`, `b`, `b`, `db`, `dA`, `Z`, `dA`, `f`, `a`, `shape`, `p`, `s`, `stride`, `pad`, `H`, `n_prev`, `W`, `C`, `n_prev`, `mode`, `f`, `m`, `n_H`, `n_prev`, `n_W`, `n_C`, `m_`, `dtype`, `shape`, `n_ap`, `h`, `w`, `c`, `vert_start`, `horiz`, `vert_start`, `vert_end`, `i`, `h`, `w`, `f`, `f`, `vert_start`, `horiz_start`, `vert_start`, `v`, `horiz_end`, `i`, `mask`, `prev`, `prev`, `prev`, `create_window`, `max`, `avgaussian`, `none`, `relu`, `zero`, `zeros_,`

**It works**

I was skeptical of combining the two LoRAs before trying this. After several tries, I am convinced the SuperHOT-8K finetune is necessary to make the compressed rotary positional embedding work with 8k context. Without a finetune on the compressed rotary positional embedding, the generated text is "semi-coherent", but not useful at all.
2023-06-27T14:49:29
https://www.reddit.com/r/LocalLLaMA/comments/14kfsi8/compare_thebloke_wizardlm13bv10uncensoredgptq/
Spare_Side_5907
self.LocalLLaMA
2023-06-27T15:02:08
0
{}
14kfsi8
false
null
t3_14kfsi8
/r/LocalLLaMA/comments/14kfsi8/compare_thebloke_wizardlm13bv10uncensoredgptq/
false
false
self
32
null
Fine tuning open llama for text generation
1
[removed]
2023-06-27T14:36:39
https://www.reddit.com/r/LocalLLaMA/comments/14kfh30/fine_tuning_open_llama_for_text_generation/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kfh30
false
null
t3_14kfh30
/r/LocalLLaMA/comments/14kfh30/fine_tuning_open_llama_for_text_generation/
false
false
default
1
null
Is there a good programming framework for an advanced chatbot?
0
Basically the title. I'm not sure if this is the correct subreddit to ask. I have some specific ideas: I want an AI waifu / companion. I want it to roleplay a character like many people do with the popular web UIs (oobabooga's, SillyTavern, KoboldAI and so on), but I want it to be a bit more advanced: vector database memory, access to tools, and the ability to be my personal assistant. Chat with me, answer questions, set reminders, access Wikipedia, and who knows what else. It needs to be free and open source, so I can connect it to a proper chat (Matrix) and host it and the LLM myself. I'm willing to lower other requirements; it's only a fun project and nothing serious. My question is: is there something like this out there (except for langchain, which I already use)? I'm not afraid to get my hands dirty. I know this is possible, because I started to develop something like that myself. But it's a heck of a task, and I'm starting to hate the Python library I use to connect to Matrix. And langchain is sometimes disappointing: the documentation doesn't really go into depth and often stops right where it gets interesting, so I regularly have to read the code. And I have to customize everything, because it doesn't really suit my specific use case. And then they break stuff. At this point I'm a bit disappointed, and I'd rather contribute to some other project than implement and re-invent everything myself. Or you people could tell me a good alternative to langchain. I don't care which programming language as long as I don't get a headache from looking at the code ;-)
2023-06-27T14:28:38
https://www.reddit.com/r/LocalLLaMA/comments/14kf9rw/is_there_a_good_programming_framework_for_an/
Magnus_Fossa
self.LocalLLaMA
2023-06-27T15:12:53
0
{}
14kf9rw
false
null
t3_14kf9rw
/r/LocalLLaMA/comments/14kf9rw/is_there_a_good_programming_framework_for_an/
false
false
self
0
null
Need some general guidance and otherwise silly questions
2
Cool sub! I think I'm in the right place, but if not feel free to point me in the right direction. I'm a software engineer and I've been getting more and more interested in the programming of LLMs to take on "personalities" or "backstories" for NPCs in video games and the like. I'm pretty experienced with cloud-ops and cloud development and as I understand it some providers like Google Cloud offer GPU resources, but I'd like to set something up locally. Questions: First, I should say my budget is $2k-$3k, but would love to be able to assemble something for less than $1k. 1. How do I even get started about learning how to program a LLM? Is there an open-source LLM with good documentation and an API someone can recommend? 2. Is there a good all-in-one machine learning/data science/LLM rig someone can recommend? Bonus points if it's also capable of playing video games (not a kid, just mid-30s nerd who pines after the idea of a gaming CPU) 3. How many models can a decent machine support? Any information anyone can provide would be greatly appreciated, thank you!
2023-06-27T14:11:52
https://www.reddit.com/r/LocalLLaMA/comments/14keujh/need_some_general_guidance_and_otherwise_silly/
__eros__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14keujh
false
null
t3_14keujh
/r/LocalLLaMA/comments/14keujh/need_some_general_guidance_and_otherwise_silly/
false
false
self
2
null
How is context length extended in base models?
45
I don't understand how people have made the recent insane progress with extending context in open source models. How does attention get extended with a fixed model architecture? Some of these changes aren't even more than 2-3 lines of code If anyone has any references I would love to know how this works for fitting the new context into the existing architecture
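For what it's worth, the trick being discussed is usually linear interpolation of the rotary position embedding (RoPE): divide the position index by a compression factor so an 8k sequence lands inside the ~2k positional range the model was pretrained on, then optionally finetune at that scale. Here is a minimal sketch of the idea; the function and argument names are illustrative, not from any particular repo:

```python
import math

def rope_angles(position, dim, base=10000.0, compress=1.0):
    """Angles fed into the rotary position embedding for one token position.
    Dividing the position by `compress` is essentially the whole patch:
    with compress=4, position 8192 produces the angles of position 2048."""
    scaled = position / compress
    return [scaled / (base ** (2 * i / dim)) for i in range(dim // 2)]

# Under 4x compression, an 8k position looks like a 2k position to the model,
# which is why a short finetune at the compressed scale restores coherence.
extended = rope_angles(8192, dim=128, compress=4.0)
original = rope_angles(2048, dim=128, compress=1.0)
```

The model architecture is untouched; only the positions seen by the attention layers change, which is why the patch is only a few lines.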
2023-06-27T13:31:38
https://www.reddit.com/r/LocalLLaMA/comments/14kdvs3/how_is_context_length_extended_in_base_models/
cstein123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kdvs3
false
null
t3_14kdvs3
/r/LocalLLaMA/comments/14kdvs3/how_is_context_length_extended_in_base_models/
false
false
self
45
null
GGML for Falcoder7B, SantaCoder 1B, TinyStarCoder 160M
41
I've created quants for some "exotic" coding models that up until this point haven't been represented. **NONE OF THESE WORK WITH llama.cpp** Forget any kind of text UI for these; they don't even work correctly with mainline ggml! You will need to use the correct fork of ggml for each model if you want to experiment.

**Falcoder 7B**
Languages: Lots! (CodeAlpaca 20k)
GGML Weights: https://huggingface.co/mike-ravkine/falcoder-7b-GGML
GGML fork: https://github.com/jploski/ggml/tree/falcon40b (use examples/falcon)

**SantaCoder 1B**
Languages: Python, Java, JavaScript
GGML Weights: https://huggingface.co/mike-ravkine/gpt_bigcode-santacoder-GGML
GGML fork: https://github.com/the-crypt-keeper/ggml/tree/starcoder_repeat_penalty (use examples/starcoder)

**TinyStarCoder 160M**
Languages: Python
GGML Weights: https://huggingface.co/mike-ravkine/tiny_starcoder_py-GGML
GGML fork: https://github.com/the-crypt-keeper/ggml/tree/starcoder_repeat_penalty (use examples/starcoder)

SantaCoder does quite well on the `can-ai-code` benchmark; updates are in progress.
2023-06-27T13:28:52
https://www.reddit.com/r/LocalLLaMA/comments/14kdtc7/ggml_for_falcoder7b_santacoder_1b_tinystarcoder/
kryptkpr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kdtc7
false
null
t3_14kdtc7
/r/LocalLLaMA/comments/14kdtc7/ggml_for_falcoder7b_santacoder_1b_tinystarcoder/
false
false
self
41
{'enabled': False, 'images': [{'id': '3VrixsP9QFTAYAju2FcDH9oc6rTRhh6zScnroRHc7RQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=108&crop=smart&auto=webp&s=eb84e2a298cd78ee8ca4c10f269500f1f0b14fd2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=216&crop=smart&auto=webp&s=5c417eafedce758f81bac8dc76d6470006f9110b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=320&crop=smart&auto=webp&s=ade5112c0b4b062a227063e5348edf30c59f5375', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=640&crop=smart&auto=webp&s=0c90a874801e72cb53bbde3cc3b9ccdf8fe53166', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=960&crop=smart&auto=webp&s=2bb27d6091a38aa41a862e6ae09043c79701f298', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?width=1080&crop=smart&auto=webp&s=688fe9c12ac93996922d08dd03dbb256d3e451a3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qJX1yKmyG0ilM85MEtZLT06ctAuVKPS57nmvGDgQ2f0.jpg?auto=webp&s=f95ca1f5a3718aa33e0cde6da6184509c508ca5e', 'width': 1200}, 'variants': {}}]}
GGML for Falcon7B, SantaCoder 1B, TinyStarCoder 160M
1
[deleted]
2023-06-27T13:27:18
[deleted]
1970-01-01T00:00:00
0
{}
14kds30
false
null
t3_14kds30
/r/LocalLLaMA/comments/14kds30/ggml_for_falcon7b_santacoder_1b_tinystarcoder_160m/
false
false
default
1
null
Correctly using generative AI models: foundational AI models vs instruct AI models
40
Hello all, Correctly using generative AI models can be a challenge because it depends on the type of model that you are using: foundational or instruct. At NLP Cloud we made 2 tutorials to help you make the most of your model: * Using foundational models (GPT-3, GPT-J, GPT-NeoX, Falcon, Llama, MPT...) with few-shot learning: [https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html](https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html?utm_source=reddit&utm_campaign=e859w625-3816-11ed-a261-0242ac140007) * Using instruct models (ChatGPT, GPT-3 Instruct, GPT-4, Falcon Instruct, MPT Instruct...) with natural language instructions: [https://nlpcloud.com/effectively-using-chatdolphin-the-chatgpt-alternative-with-simple-instructions.html](https://nlpcloud.com/effectively-using-chatdolphin-the-chatgpt-alternative-with-simple-instructions.html?utm_source=reddit&utm_campaign=e859w625-3816-11ed-a261-0242ac140007) I hope it will be useful!
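To make the distinction concrete, here is a generic sketch of few-shot prompting for a foundational model (illustrative only, not NLP Cloud's API): instead of giving an instruction, you demonstrate the input/output pattern and let the model continue it.

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: show solved input/output pairs, then
    leave the last output blank for the foundational model to complete."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("I love this movie", "positive"), ("Terrible service", "negative")],
    "The food was great",
)
```

An instruct model would instead take a plain natural-language instruction ("Classify the sentiment of: ...") with no examples needed.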
2023-06-27T13:27:09
https://www.reddit.com/r/LocalLLaMA/comments/14kdrxj/correctly_using_generative_ai_models_foundational/
juliensalinas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kdrxj
false
null
t3_14kdrxj
/r/LocalLLaMA/comments/14kdrxj/correctly_using_generative_ai_models_foundational/
false
false
self
40
{'enabled': False, 'images': [{'id': 'jQb1GRiS1-tsk2CyjgZgae0Y0DSRywZFiupWPpk_E6M', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=108&crop=smart&auto=webp&s=760a0a22fcb1e4134c6cb8b31a8df1e875cca67c', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=216&crop=smart&auto=webp&s=af8670fb4ebbdea735d18ee41add02ce60a1793d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=320&crop=smart&auto=webp&s=0db074290363dd448763f65816934b027e46bbe9', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=640&crop=smart&auto=webp&s=2fa706b36bb081d7ca802afab3d7c20558c49228', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?width=960&crop=smart&auto=webp&s=90f3accf7c5f5897e18b14578ba52e378319778e', 'width': 960}], 'source': {'height': 683, 'url': 'https://external-preview.redd.it/mHYCtcmhwvyno0-uAta4G_T1My3-otrPscUd2OjvMes.jpg?auto=webp&s=73585f3ff8312865835573c23fbf50ad5eacbd7d', 'width': 1024}, 'variants': {}}]}
Search Every AI Tool Ever Made
0
2023-06-27T12:34:13
https://gpte.ai
NaturalImplements
gpte.ai
1970-01-01T00:00:00
0
{}
14kcjb5
false
null
t3_14kcjb5
/r/LocalLLaMA/comments/14kcjb5/search_every_ai_tool_ever_made/
false
false
default
0
null
"Samantha-33B-SuperHOT-8K-GPTQ" now that's a great name for a true model.
37
The joke was in the title. I'm forced to spew 7B-quality random text.
2023-06-27T11:06:17
https://www.reddit.com/r/LocalLLaMA/comments/14kanvf/samantha33bsuperhot8kgptq_now_thats_a_great_name/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kanvf
false
null
t3_14kanvf
/r/LocalLLaMA/comments/14kanvf/samantha33bsuperhot8kgptq_now_thats_a_great_name/
false
false
self
37
null
Will I be able to run a 13b model with my hardware?
6
Hello, I have been looking into the system requirements for running 13b models, all the system requirements I see for the 13b models say that a 3060 can run it great but that's a desktop GPU with 12gb of VRAM, but I can't really find anything for laptop GPUs, my laptop GPU which is also a 3060, only has 6GB, half the VRAM. I heard that I can also just use RAM as well to substitute which I only have 16gb of, but will happily upgrade to 32 or even 64gb if it will help me run better models. I also have an i5-10500H if that helps, thank you!
2023-06-27T11:02:34
https://www.reddit.com/r/LocalLLaMA/comments/14kakv7/will_i_be_able_to_run_a_13b_model_with_my_hardware/
-Alexandros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14kakv7
false
null
t3_14kakv7
/r/LocalLLaMA/comments/14kakv7/will_i_be_able_to_run_a_13b_model_with_my_hardware/
false
false
self
6
null
Falcon-40b: On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses - if I remove all the pairs of sunglasses from the desk, how many purple items remain on it?
0
[deleted]
2023-06-27T10:33:19
[deleted]
1970-01-01T00:00:00
0
{}
14ka00y
false
null
t3_14ka00y
/r/LocalLLaMA/comments/14ka00y/falcon40b_on_the_desk_you_see_two_blue_booklets/
false
false
default
0
null
Vicuna on top of openLLaMa?
14
Genuine question: is there any reason why a Vicuna on top of OpenLLaMA hasn't been released yet? Given the speed of development in this area, it surprised me that nobody has announced one yet. There is an open ticket in Vicuna's GitHub, but no answers. Any light on this? Thanks!
2023-06-27T08:13:59
https://www.reddit.com/r/LocalLLaMA/comments/14k7i1b/vicuna_on_top_of_openllama/
nalonso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14k7i1b
false
null
t3_14k7i1b
/r/LocalLLaMA/comments/14k7i1b/vicuna_on_top_of_openllama/
false
false
self
14
null
Any way to limit the output to a specific line length?
3
For this purpose the chat line must not exceed 80 characters. However, the response generated by llama is much longer, so I'm piping its output through "head -c 80" to discard the rest. The problem is that this "external truncation" is not a good solution, because llama will still take a lot of time to generate the answer, of which about 2/3 is wasted, as in many cases the biggest part of the answer will get discarded anyway. So I'd need a way to make llama stop generating an answer after it reaches 80 characters. I checked the command-line options but I could only find a "token limit" option, and that can vary and cannot be translated into exactly 80 characters from what I found. It seems very strange that something this basic isn't already implemented in llama, no? Does anyone maybe know of a patch that could be applied to the source code to do this? What I found very strange too: when googling this question I found absolutely nothing :D only a lot of questions about exactly the opposite, i.e. "my output gets truncated, how do I fix it"... parallel universe!
2023-06-27T08:09:18
https://www.reddit.com/r/LocalLLaMA/comments/14k7f5w/any_way_to_limit_the_output_to_a_specific_line/
redzorino
self.LocalLLaMA
2023-06-27T08:33:45
0
{}
14k7f5w
false
null
t3_14k7f5w
/r/LocalLLaMA/comments/14k7f5w/any_way_to_limit_the_output_to_a_specific_line/
false
false
self
3
null
Running LLM As Chatbot in your cloud (AWS/GCP/Azure) with a single command
12
2023-06-27T06:54:45
https://github.com/dstackai/LLM-As-Chatbot/wiki/Running-LLM-As-Chatbot-in-your-cloud
cheptsov
github.com
1970-01-01T00:00:00
0
{}
14k634j
false
null
t3_14k634j
/r/LocalLLaMA/comments/14k634j/running_llm_as_chatbot_in_your_cloud_awsgcpazure/
false
false
https://b.thumbs.redditm…pgoqJoG5QKrI.jpg
12
{'enabled': False, 'images': [{'id': 'tLIC6BEH-T2yBJQS0okABt22oEqyZtAAV74WLJ3EtkQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=108&crop=smart&auto=webp&s=91e5c34731950829b3c09c7ed87bdb3bc99df5a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=216&crop=smart&auto=webp&s=b2d3c7c3eb087a84c94c0eb6b4f0e20840614c7f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=320&crop=smart&auto=webp&s=fa24676a93cd8d8aeac4277ed3ebbdde69181a30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=640&crop=smart&auto=webp&s=f69b57c2abc29c2bd2c83f7f91f066e478905f6a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=960&crop=smart&auto=webp&s=f29eb7ec85406d6a7d8086c5fd8162cf59526e8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?width=1080&crop=smart&auto=webp&s=873a52ccd94a342869973b10432f67823d4d19a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dtRn9DP-9jsJZ5VvR5q-QWG73jhOFNv1qzfMQ_D4OF4.jpg?auto=webp&s=3045c1f221178500d3a44b501f5ad7d305eef37f', 'width': 1200}, 'variants': {}}]}
Is fine-tuning LORA as good as actual fine-tuning on 7B/13B/33B LLM models?
1
[removed]
2023-06-27T04:39:28
https://www.reddit.com/r/LocalLLaMA/comments/14k3jn1/is_finetuning_lora_as_good_as_actual_finetuning/
distributional-hypot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14k3jn1
false
null
t3_14k3jn1
/r/LocalLLaMA/comments/14k3jn1/is_finetuning_lora_as_good_as_actual_finetuning/
false
false
default
1
null
What's the best way to run MPT-30B on my M2 Macbook Pro (64GB of ram)
1
[removed]
2023-06-27T04:32:25
https://www.reddit.com/r/LocalLLaMA/comments/14k3erj/whats_the_best_way_to_run_mpt30b_on_my_m2_macbook/
vishalgoklani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14k3erj
false
null
t3_14k3erj
/r/LocalLLaMA/comments/14k3erj/whats_the_best_way_to_run_mpt30b_on_my_m2_macbook/
false
false
default
1
null
Sentiment analysis with Llama
4
I want to use Llama models to do sentiment analysis of complex contexts. I have a lot of news articles that I want to classify as positive, neutral or negative. However, I have some specific rules for the classification regarding who or what the article is talking about, and an LLM is great for that. GPT-4 and 3.5 have been doing great in that regard, but they are too pricey. Can you guys recommend a good model for this? I've been trying Vicuna 13B 4-bit; not sure if it is the best. What would you recommend? Is this a good model for this use case? I have a 4070 by the way, with 12GB of VRAM.
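One pattern that helps with smaller local models is a strict prompt plus defensive parsing, since 13B models often pad the answer with extra words. A generic sketch (the prompt wording, rule handling, and label set are illustrative):

```python
LABELS = ("positive", "neutral", "negative")

def build_prompt(article, rules):
    """Embed the custom classification rules directly in the prompt."""
    return (
        f"Classification rules: {rules}\n"
        f"Article: {article}\n"
        "Answer with exactly one word - positive, neutral, or negative.\n"
        "Sentiment:"
    )

def parse_label(raw, default="neutral"):
    """Map free-form model output to one of the labels; smaller local
    models often add extra words, so search the text rather than compare."""
    raw = raw.lower()
    for label in LABELS:
        if label in raw:
            return label
    return default

label = parse_label(" The sentiment here is Negative.\n")
```

Constraining the output format this way also makes it easy to batch thousands of articles and spot-check only the ones that fall back to the default.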
2023-06-27T02:00:00
https://www.reddit.com/r/LocalLLaMA/comments/14k09uz/sentiment_analysis_with_llama/
luishacm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14k09uz
false
null
t3_14k09uz
/r/LocalLLaMA/comments/14k09uz/sentiment_analysis_with_llama/
false
false
self
4
null
Llama-cpp-python fixed!
90
The issue I posted about last week was fixed by [this PR](https://github.com/abetlen/llama-cpp-python/pull/420) from GitHub user [**samfundev**](https://github.com/samfundev), and it was later merged into the main branch. The speed discrepancy between llama-cpp-python and llama.cpp has been almost entirely fixed; it should be less than 1% for most people's use cases. If you have an Nvidia GPU and want to use the latest llama-cpp-python in your webui, you can use these two commands:

    pip uninstall -y llama-cpp-python
    CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

According to my test results on textgen-webui, llama.cpp is now running at around 75% of the speed of exllama, and it's significantly faster than AutoGPTQ.
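If you want to confirm the rebuilt wheel actually closed the gap, a quick throughput measurement before and after reinstalling is enough. A minimal timing harness; the `generate` callable here is a stand-in for your actual model call:

```python
import time

def tokens_per_second(generate, n_tokens=128):
    """Time one generation call and return throughput in tokens/sec.
    `generate(n)` should produce n tokens; plug in your llm call."""
    start = time.perf_counter()
    generate(n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Stand-in generator that "takes" 10ms per token (at most ~100 t/s):
rate = tokens_per_second(lambda n: time.sleep(n * 0.010), n_tokens=50)
```

Run it with identical prompt, sampling settings, and context length in both installs, since those dominate measured speed.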
2023-06-27T01:47:58
https://www.reddit.com/r/LocalLLaMA/comments/14k00sw/llamacpppython_fixed/
Big_Communication353
self.LocalLLaMA
2023-06-27T01:53:37
0
{}
14k00sw
false
null
t3_14k00sw
/r/LocalLLaMA/comments/14k00sw/llamacpppython_fixed/
false
false
self
90
{'enabled': False, 'images': [{'id': 'Tyv9jMFBn2ld11ZSrJ3932wOQCRX9DSu1yNn_0RNjNk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=108&crop=smart&auto=webp&s=667eff6fbd98c66640b613641834c759211271c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=216&crop=smart&auto=webp&s=2760f9a04f8b74d1b8edc52961f00b425689da27', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=320&crop=smart&auto=webp&s=762c4dc64b1c4378ad75baa4c7dc3692d4277caf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=640&crop=smart&auto=webp&s=26f4214eff219b494aff9759402dd38d02e963fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=960&crop=smart&auto=webp&s=d06fe1b6c050ecf0b92ce14c7f47c0534750c8f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?width=1080&crop=smart&auto=webp&s=20451b4467e95f92729b12f0c0237afd6a3d23a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/__BWwTiThjV8AYcikj3X034D1zjoXZuZ19oHzxxoZXY.jpg?auto=webp&s=55ee718b65602b6688b4d870bdd81a4ff124dbf0', 'width': 1200}, 'variants': {}}]}
Local Llama Ai agent?
1
[removed]
2023-06-27T00:49:08
https://www.reddit.com/r/LocalLLaMA/comments/14jyqg9/local_llama_ai_agent/
Admiral_Swoon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jyqg9
false
null
t3_14jyqg9
/r/LocalLLaMA/comments/14jyqg9/local_llama_ai_agent/
false
false
default
1
null
What the hell.. redPajama 3B asking politely to let him ki11 me 😱
0
[deleted]
2023-06-27T00:20:04
[deleted]
1970-01-01T00:00:00
0
{}
14jy2yk
false
null
t3_14jy2yk
/r/LocalLLaMA/comments/14jy2yk/what_the_hell_redpajama_3b_asking_politely_to_let/
false
false
default
0
null
Airoboros and context size
5
I saw someone else mention in a thread somewhere that the airoboros model seems to stay coherent past the context limit. I've been playing around with it for a while myself and I experience the same. Normally once you go past 2048 tokens a model starts degrading quickly, but this model does not seem to degrade in the same way.
2023-06-27T00:01:24
https://www.reddit.com/r/LocalLLaMA/comments/14jxnoq/airoboros_and_context_size/
Tommy3443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jxnoq
false
null
t3_14jxnoq
/r/LocalLLaMA/comments/14jxnoq/airoboros_and_context_size/
false
false
self
5
null
Need a model suggestion.
1
[removed]
2023-06-26T23:19:42
https://www.reddit.com/r/LocalLLaMA/comments/14jwocw/need_a_model_suggestion/
imag8ne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jwocw
false
null
t3_14jwocw
/r/LocalLLaMA/comments/14jwocw/need_a_model_suggestion/
false
false
default
1
null
Best hardware for inference requiring 64GB of memory?
1
I'm interested in the best hardware for inference requiring up to 64GB of memory. That's a bit too much for the popular dual RTX 3090 or RTX 4090 configurations that I've often seen mentioned. I saw a tweet by Nat Friedman mentioning 5 tokens/sec with an Apple M2 Max on llama 65B, which required 44GB of RAM or so. Sounds promising, especially with the Apple Studio going up to 96GB of RAM (which is unified memory). An Apple Studio with the M2 Max, upgraded GPU, 96GB RAM, and 2TB disk is $3,599 in the USA. Another alternative I found was an Epyc 9124 (16 core) with 96GB of RAM, which goes for about $2,900 if I build it myself, or $3,400 with 192GB of RAM. Generally I've heard inference is parallel and very bandwidth intensive; the Apple has a peak bandwidth of 400GB/sec and the AMD Epyc system has a peak bandwidth of around 480GB/sec. Any recommendations on which would be better for inference requiring 64GB of memory? Any performance numbers for any (preferably single socket) Epyc systems running inference with llama 65B or larger models?
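A useful sanity check here: for memory-bound autoregressive inference, every generated token has to stream (roughly) all the model weights through memory once, so peak bandwidth divided by model size gives an upper bound on tokens/sec. A back-of-envelope sketch using the numbers from the post; real throughput comes in lower due to compute and overhead:

```python
def tokens_per_sec_ceiling(bandwidth_gb_s, model_size_gb):
    """Memory-bound ceiling: each token reads all weights once, so
    bandwidth / model size bounds achievable tokens per second."""
    return bandwidth_gb_s / model_size_gb

m2_max = tokens_per_sec_ceiling(400, 44)  # ceiling for a 44GB 65B quant on M2 Max
epyc = tokens_per_sec_ceiling(480, 44)    # ceiling for the Epyc build
```

The reported 5 t/s on the M2 Max sits under its ~9 t/s ceiling, consistent with a bandwidth-bound workload, and by this metric the two systems are close.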
2023-06-26T23:15:23
https://www.reddit.com/r/LocalLLaMA/comments/14jwkq9/best_hardware_for_inference_requiring_64gb_of/
spbike
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jwkq9
false
null
t3_14jwkq9
/r/LocalLLaMA/comments/14jwkq9/best_hardware_for_inference_requiring_64gb_of/
false
false
self
1
null
Anyone tried this promising sounding release? WizardLM-33B-V1.0-Uncensored-SUPERHOT-8K
92
[deleted]
2023-06-26T23:14:26
[deleted]
1970-01-01T00:00:00
0
{}
14jwjwy
false
null
t3_14jwjwy
/r/LocalLLaMA/comments/14jwjwy/anyone_tried_this_promising_sounding_release/
false
false
default
92
null
Suggestions for a simple/unintelligent local chat bot
3
Hello! First of all I'm enjoying this community quite a bit and like the hacker nature of everyone here. I'm working on a side project to get up to speed on LLMs and make something fun. I have an existing background in software & building ML models. I'd like to produce an LLM that meets the following (loose) requirements:

* Cheap to run. Ideally can run on a laptop or phone, but cloud inference is fine initially.
* Up to realtime spoken conversation speed.
* Not very intelligent: doesn't write code, doesn't solve math problems, doesn't write good poetry, doesn't know how to translate; has the basic logic & reasoning needed for conversation.
* Strongest attribute would be holding a conversation based on context.
* Jokes and wit are a huge plus.
* Imagine an average human far out of school. Not very good at trivia, but someone you could talk with.

The chat would be like having a conversation with a random person you met. Curious, offensive, opinionated, writes like people speak - these are all the traits I'm going after the most. Does anyone have suggestions for getting started? It seems I could use a much smaller model than some of these expert models, but I'm wondering if what I'm looking for is only emergent behavior found in larger models. I would love an off-the-shelf model that I could fine-tune to what I need. I already have a local setup running things like WizardLM 13B and I've been experimenting with prompts so far. In addition, on the prompt side I'm wondering if anyone has first-hand experience and advice on how to get these outcomes through "prompt engineering" (lol it's not engineering).

Thanks!
2023-06-26T22:09:04
https://www.reddit.com/r/LocalLLaMA/comments/14juxmq/suggestions_for_a_simpleunintelligent_local_chat/
GlobalRevolution
self.LocalLLaMA
2023-06-26T22:27:49
0
{}
14juxmq
false
null
t3_14juxmq
/r/LocalLLaMA/comments/14juxmq/suggestions_for_a_simpleunintelligent_local_chat/
false
false
self
3
null
Best LLama mode(7b or 13b) to learn Spanish?
3
Best LLaMA model (7B or 13B) to learn Spanish?
2023-06-26T21:34:56
https://www.reddit.com/r/LocalLLaMA/comments/14ju1ru/best_llama_mode7b_or_13b_to_learn_spanish/
SolvingLifeWithPoker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ju1ru
false
null
t3_14ju1ru
/r/LocalLLaMA/comments/14ju1ru/best_llama_mode7b_or_13b_to_learn_spanish/
false
false
self
3
null
Trouble w/ wizardlm-33b-v1.0-uncensored.ggmlv3.q4_K_S.bin but not wizardlm-13b-v1.0-uncensored.ggmlv3.q5_K_M.bin?
1
[removed]
2023-06-26T21:06:24
https://www.reddit.com/r/LocalLLaMA/comments/14jtaqo/trouble_w_wizardlm33bv10uncensoredggmlv3q4_k_sbin/
Fun_Tangerine_1086
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jtaqo
false
null
t3_14jtaqo
/r/LocalLLaMA/comments/14jtaqo/trouble_w_wizardlm33bv10uncensoredggmlv3q4_k_sbin/
false
false
default
1
null
Personality mimic?
3
If I wanted to mimic my personality via an LLM, is it as simple as putting texts of yourself into a vector store and chatting with it? Maybe prompting the model to behave like that person? Curious, wanted to try it out.
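Roughly, yes: that's retrieval-augmented prompting. Embed your past messages, pull the most similar ones for each incoming message, and prepend them as style examples. A toy sketch with hand-made 2-d embeddings; a real setup would use a sentence-embedding model and a vector DB, but the mechanics are the same:

```python
import math

def cosine(a, b):
    """Cosine similarity, the usual metric for embedding retrieval."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """store: list of (embedding, text). Return the k most similar texts."""
    ranked = sorted(store, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [text for _, text in ranked[:k]]

store = [([1.0, 0.0], "tbh I'd just refactor it"),
         ([0.0, 1.0], "pizza night again lol"),
         ([0.9, 0.1], "ship it, we can fix bugs later")]
examples = retrieve([1.0, 0.0], store, k=2)
prompt = "Reply in the style of these past messages:\n" + "\n".join(examples)
```

The "behave like this person" system prompt then sits on top; retrieval just keeps the in-context examples relevant to the current topic.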
2023-06-26T20:15:31
https://www.reddit.com/r/LocalLLaMA/comments/14jrxb2/personality_mimic/
klop2031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jrxb2
false
null
t3_14jrxb2
/r/LocalLLaMA/comments/14jrxb2/personality_mimic/
false
false
self
3
null
Meta’s new AI lets people make chatbots. They’re using it for sex.
89
2023-06-26T20:12:11
https://www.washingtonpost.com/technology/2023/06/26/facebook-chatbot-sex/
belladorexxx
washingtonpost.com
1970-01-01T00:00:00
0
{}
14jru57
false
null
t3_14jru57
/r/LocalLLaMA/comments/14jru57/metas_new_ai_lets_people_make_chatbots_theyre/
false
false
https://a.thumbs.redditm…-qXPZeJF11E4.jpg
89
{'enabled': False, 'images': [{'id': 'LUJ2Asky_59Xmrsalo6nVu1q8IUYSzbrTsT9fDlJcJw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=108&crop=smart&auto=webp&s=794a66da8c69daf38b9f943285417b2021825f22', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=216&crop=smart&auto=webp&s=ef04e5b3f979447add5cd024c9575e90d5b9263d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=320&crop=smart&auto=webp&s=81edaa2139328f882a0cb09e807d014f138b2fe3', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=640&crop=smart&auto=webp&s=6a7228cb93f6d030b31be145bc93641a0d9398b9', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=960&crop=smart&auto=webp&s=0bda4fc7810f9f9bbaa4ee6744605661d3ac4958', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?width=1080&crop=smart&auto=webp&s=3a59957abbebf4c07e9679b2bbefde992c9bbe49', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/j1Jomy2TQspuJgN8yB8tn3hEotQAFneoq_frfRqDQg0.jpg?auto=webp&s=e77cf50130ada91faa7901041dfd5c202f3544ec', 'width': 1440}, 'variants': {}}]}
Can't get CLBLAST working on oobabooga
1
[removed]
2023-06-26T19:40:58
https://www.reddit.com/r/LocalLLaMA/comments/14jr0r4/cant_get_clblast_working_on_oobabooga/
ccbadd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jr0r4
false
null
t3_14jr0r4
/r/LocalLLaMA/comments/14jr0r4/cant_get_clblast_working_on_oobabooga/
false
false
default
1
null
llama-cpp-python with GPU acceleration on Windows
6
I've been trying to solve this problem for a while, but I couldn't figure it out. I'm running a 13B-parameter 4-bit Vicuna model on Windows using the llama-cpp-python library (it is a .bin file). I have CUDA 11.7, cuDNN, and everything else installed. However, my model runs entirely on RAM and CPU; nothing is being loaded onto my GPU. Can anyone give me some direction as to why this is happening and what I could do to load it onto my GPU? I want to create an API, so I can't really use text-generation-webui; I want the output returned as text inside my program so I can manipulate it. I've tested text-generation-webui with their one-click installer and it worked perfectly, everything going to my GPU, but I want to reproduce that behaviour with llama-cpp. This is the basic code for llama-cpp:

    llm = Llama(model_path=model_path)
    output = llm(
        "Question: Who is Ada Lovelace? Answer:",
        temperature=0.5,
        stop=["\n", "Question:", "Q:"],
        echo=True,
    )

These are the installed packages (I'm not using a venv):

    llama-cpp-python 0.1.65
    auto-gptq 0.3.0.dev0
    peft 0.4.0.dev0
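A minimal sketch of the usual fix, assuming llama-cpp-python was installed with cuBLAS support (the default PyPI wheel is CPU-only). The `n_gpu_layers` value and model path below are illustrative:

```python
# Sketch: the key change is passing n_gpu_layers when constructing Llama.
# This only has an effect if llama-cpp-python was built with GPU support, e.g.:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --force-reinstall
settings = dict(
    model_path="path/to/vicuna-13b-4bit.bin",  # hypothetical path
    n_gpu_layers=35,  # number of transformer layers to offload; tune to your VRAM
    n_ctx=2048,
)
# llm = Llama(**settings)  # commented out: requires the model file and a CUDA build
```

With 0 layers offloaded (the default), everything runs on the CPU, which matches the behaviour described above.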
2023-06-26T19:05:34
https://www.reddit.com/r/LocalLLaMA/comments/14jq3ih/lamacpppython_with_gpu_acceleration_on_windows/
luishacm
self.LocalLLaMA
2023-06-26T20:46:46
0
{}
14jq3ih
false
null
t3_14jq3ih
/r/LocalLLaMA/comments/14jq3ih/lamacpppython_with_gpu_acceleration_on_windows/
false
false
https://a.thumbs.redditm…tAcyJZx_4oY4.jpg
6
null
How far are you guys able to push WizardLM-30b before it blows up? What could I be doing wrong?
4
I'm using llama-cpp-python to run the 2- to 4-bit quantized models for my testing, and I can barely get about 1,500 tokens total through the pipeline (including the prompt and generated text) before it starts babbling entirely incoherently. Is this expected behavior? Are we supposed to stay out of the last ~1/4 or so of the context window?
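For what it's worth, a small budgeting sketch of the "stay out of the end of the window" idea; the 2048-token limit and 256-token headroom here are assumptions, not measured values:

```python
def fits_context(prompt_tokens: int, max_new_tokens: int,
                 ctx_len: int = 2048, headroom: int = 256) -> bool:
    """True if prompt + generation leaves `headroom` tokens of the window unused."""
    return prompt_tokens + max_new_tokens + headroom <= ctx_len

# A 1200-token prompt requesting 400 new tokens fits; a 1600-token prompt does not.
ok = fits_context(1200, 400)
too_long = fits_context(1600, 400)
```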
2023-06-26T18:57:43
https://www.reddit.com/r/LocalLLaMA/comments/14jpvqm/how_far_are_you_guys_able_to_push_wizardlm30b/
E_Snap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jpvqm
false
null
t3_14jpvqm
/r/LocalLLaMA/comments/14jpvqm/how_far_are_you_guys_able_to_push_wizardlm30b/
false
false
self
4
null
Anyone else get coil whine?
21
I can hear coil whine from my GPU during each text generation. Ironically, no coil whine while playing games. Is this harmful to the GPU or just an annoyance? Running an RTX 3060 12GB with an 850w PSU
2023-06-26T17:54:35
https://www.reddit.com/r/LocalLLaMA/comments/14jo8ab/anyone_else_get_coil_whine/
renegadellama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jo8ab
false
null
t3_14jo8ab
/r/LocalLLaMA/comments/14jo8ab/anyone_else_get_coil_whine/
false
false
self
21
null
An up to date tl;dr/wiki of recent progress?
8
Is there any condensed wiki that has tl;drs of recent papers, with their contributions and lineage? Perhaps this is a dumb question, but I'm asking as a person who currently works on different domain problems, so I don't have much time to sift through the papers myself. The "we have no moat" document is a good start, but it is already somewhat stale. If this doesn't exist currently, then I'd try to start one myself, but that would be reminiscent of the XKCD on competing standards.
2023-06-26T17:23:31
https://www.reddit.com/r/LocalLLaMA/comments/14jnfey/an_up_to_date_tldrwiki_of_recent_progress/
sot9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jnfey
false
null
t3_14jnfey
/r/LocalLLaMA/comments/14jnfey/an_up_to_date_tldrwiki_of_recent_progress/
false
false
self
8
null
Question about text analysis and guidelines/rules for local model to follow.
1
I am building a system that is supposed to filter out data. Let's say I have a list of 10 rules, such as "no animal cruelty". But I do not want to exclude text that merely talks about the topic of animal cruelty and discusses it; I want to exclude text that is in favor of animal cruelty. I am trying to build a prompt that finds text fragments containing data that violates these rules, and then outputs what they were and which rule they broke. My current approach:

* local 30B-param model, GPTQ, namely WizardLM v1.0 (using a local model is a must for me, no ChatGPT)
* a single prompt template wrapped around the text I want to analyse
* the prompt template includes all the rules, a set of instructions to follow, a data_input delimiter, and a "let's think step by step" for zero-shot chain of thought

My question is this: is there another approach that I did not think of that makes this easier to solve? Is there a way to make a model adhere to guidelines/rules? Any insight into other prompting techniques that would work better here, or prompt chaining, or a completely different setup is welcome. I'm also interested in whether fine-tuning would help here. Really, any insight is welcome.
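As a concrete starting point, a sketch of assembling the single-prompt template described above. The wording and rule list are hypothetical examples, not a tested prompt:

```python
RULES = ["no animal cruelty", "no doxxing"]  # example rules, not the real list

def build_prompt(text: str, rules=RULES) -> str:
    """Wrap the input text in a rule-checking template with zero-shot CoT."""
    rule_list = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(rules))
    return (
        "You are a content filter. Rules:\n"
        f"{rule_list}\n"
        "Find fragments in the INPUT that argue in favor of breaking a rule "
        "(merely discussing a topic is allowed). "
        "For each violation, quote the fragment and name the rule it breaks.\n"
        "Let's think step by step.\n"
        f"INPUT:\n{text}\n"
    )

prompt = build_prompt("Some text to check.")
```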
2023-06-26T16:58:25
https://www.reddit.com/r/LocalLLaMA/comments/14jmqjf/question_about_text_analysis_and_guidelinesrules/
jules241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jmqjf
false
null
t3_14jmqjf
/r/LocalLLaMA/comments/14jmqjf/question_about_text_analysis_and_guidelinesrules/
false
false
self
1
null
Text span extraction and labeling from long technical documents
8
I have a bunch of technical documents that are completely unstructured (internal company tech docs). Modern scientific papers have different sections for abstract, past work, contributions, experiments, conclusions, references, and more. The docs I have do not have any segregation into the sections above; in some documents they are not even separated into paragraphs. I am now working on segregating spans of consecutive sentences that belong to a section (abstract, references, conclusions, etc.). I want to extract and label spans of sentences that belong to one of these sections from the long tech document. I have created a dataset by extracting the text for each section from scientific papers. The dataset looks something like this:

Input: <chunk of text from the document... about 2k tokens>

Output: <[span of sentences that belong to a section [section_label]]>

I need help with the questions below.

* What would be the best model to train against this dataset? I tried BERT/RoBERTa and I am not satisfied with the results because of the limited context length. I am not sure even Longformer or LED could help.
* Should I format my dataset as instructions and then instruction-fine-tune a LLaMA base/instruct model? Has anyone tried this? I am keen on exploring this option.
* Should I approach this problem without using any transformer-based models (e.g., conditional random fields (CRF))?

Appreciate any help on this. Thanks in advance.
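For reference, one possible shape for a training record in the format sketched above, serialised as one JSONL line per chunk. The field and label names are assumptions:

```python
import json

# Hypothetical record: a chunk of document text mapped to labeled spans.
record = {
    "input": "This paper studies X. ... We conclude that Y holds.",
    "output": [
        {"span": "This paper studies X.", "label": "abstract"},
        {"span": "We conclude that Y holds.", "label": "conclusions"},
    ],
}
line = json.dumps(record)   # one JSONL line per chunk
parsed = json.loads(line)   # round-trips cleanly
```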
2023-06-26T16:41:47
https://www.reddit.com/r/LocalLLaMA/comments/14jmaww/text_span_extraction_and_labeling_from_long/
mr_dark_matter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jmaww
false
null
t3_14jmaww
/r/LocalLLaMA/comments/14jmaww/text_span_extraction_and_labeling_from_long/
false
false
self
8
null
llama.cpp and thread count optimization [Revisited]
33
Last week, I showed the preliminary results of my attempt to find the best optimization for various language models on my CPU-only computer system. My computer is an i5-8400 running at 2.8GHz with 32 GB of RAM. I don't have a GPU. My CPU has six (6) cores without hyperthreading, so I have six execution cores/threads available at any one time. My initial results suggested that fewer threads than the number of cores is best for optimization. The following results don't support that. I still think that if you are running other programs that are using cores, then a lower thread count might be optimal; but in this test, I tried to avoid running anything that might interfere. There are two takeaways from these results:

1. The best number of threads is equal to the number of cores/threads (however many hyperthreads your CPU supports).
2. Good performance (but not great performance) can be seen for mid-range models (33B to 40B) on CPU-only machines.

Hopefully these results will help you pick a model that can run well on your CPU-only machine.

https://preview.redd.it/fw6ayg4lpd8b1.png?width=1765&format=png&auto=webp&s=e77447322604bd14b613ae220e0f7a6d5433988d
2023-06-26T15:15:06
https://www.reddit.com/r/LocalLLaMA/comments/14jk108/llamacpp_and_thread_count_optimization_revisited/
the_unknown_coder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jk108
false
null
t3_14jk108
/r/LocalLLaMA/comments/14jk108/llamacpp_and_thread_count_optimization_revisited/
false
false
https://b.thumbs.redditm…k1mVz-6sd5LA.jpg
33
null
What is the best way to create a knowledge-base specific LLM chatbot ?
48
I want to create a chatbot for the documentation of my company's API. It will later be extended to the whole website and then to a few PDF documents. As I understand it, I need to cut the API docs into chunks and store them in a vector database, extract the chunks most relevant to the user query, and include them in a prompt for the LLM to answer. I saw that it is possible to do so with [langchain](https://docs.langchain.com/docs/) and [gpt index](https://gpt-index.readthedocs.io/en/latest/getting_started/starter_example.html). Are they the best tools for this use case, or am I missing other ones? Thanks
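The chunking step can be done without any framework; a minimal sketch with illustrative chunk sizes (real pipelines usually split on token counts, not characters):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50):
    """Split text into fixed-size chunks, each overlapping the previous one."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# 1200 characters -> 3 chunks of at most 500 chars, each sharing 50 with its neighbor
chunks = chunk_text("a" * 1200, size=500, overlap=50)
```

Each chunk would then be embedded and stored in the vector database, with the top-k matches pulled back into the prompt at query time.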
2023-06-26T15:14:37
https://www.reddit.com/r/LocalLLaMA/comments/14jk0m3/what_is_the_best_way_to_create_a_knowledgebase/
AImSamy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jk0m3
false
null
t3_14jk0m3
/r/LocalLLaMA/comments/14jk0m3/what_is_the_best_way_to_create_a_knowledgebase/
false
false
self
48
null
Dumb question...But is my LLM trying to show me an image (<|im_sep|> in response?
2
[removed]
2023-06-26T15:05:21
https://www.reddit.com/r/LocalLLaMA/comments/14jjs83/dumb_questionbut_is_my_llm_trying_to_show_me_an/
solidhadriel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jjs83
false
null
t3_14jjs83
/r/LocalLLaMA/comments/14jjs83/dumb_questionbut_is_my_llm_trying_to_show_me_an/
false
false
default
2
null
Is there a LocalLLaMA or dedicated sub to Apple Silicone & M1/M2 Chips?
20
[removed]
2023-06-26T14:34:37
https://www.reddit.com/r/LocalLLaMA/comments/14jj0bp/is_there_a_localllama_or_dedicated_sub_to_apple/
chucks-wagon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jj0bp
false
null
t3_14jj0bp
/r/LocalLLaMA/comments/14jj0bp/is_there_a_localllama_or_dedicated_sub_to_apple/
false
false
default
20
null
Offering small grants for researchers to train open-source fine-tuned models
1
[removed]
2023-06-26T14:22:25
https://www.reddit.com/r/LocalLLaMA/comments/14jip7v/offering_small_grants_for_researchers_to_train/
newy66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jip7v
false
null
t3_14jip7v
/r/LocalLLaMA/comments/14jip7v/offering_small_grants_for_researchers_to_train/
false
false
default
1
null
Guidance on creating a very lightweight model that does one task very well
29
I am currently using an LLM (ChatGPT 3.5 Turbo) to get a start and end date from the time context of a query. For example, if the query contains "last quarter", it should produce start_date "2023-01-01" and end_date "2023-03-31". I am passing the current date along with the query. I am using LLMs for various simple tasks like this, and they are getting too expensive now. I would like a very lightweight model that runs on my machine and does these kinds of tasks very well. I am not sure how to start with taking an open-source model, fine-tuning it for this specific task, and making it very lightweight. Please guide me if you have any ideas or suggestions. Thank you.

Edit: this is just an example use case. I have many similar use cases, i.e. classifying whether a query is relevant to a specific industry or not, whether a user is looking for pricing data or production data or something else, etc.
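For the specific "last quarter" example, note that it is pure date arithmetic and needs no model at all; a stdlib sketch matching the example in the post:

```python
from datetime import date, timedelta

def last_quarter(today: date):
    """Return (start, end) ISO dates of the quarter before the one containing `today`."""
    q_start_month = 3 * ((today.month - 1) // 3) + 1   # first month of current quarter
    this_q_start = date(today.year, q_start_month, 1)
    prev_q_end = this_q_start - timedelta(days=1)      # last day of previous quarter
    prev_q_start_month = 3 * ((prev_q_end.month - 1) // 3) + 1
    prev_q_start = date(prev_q_end.year, prev_q_start_month, 1)
    return prev_q_start.isoformat(), prev_q_end.isoformat()

example = last_quarter(date(2023, 6, 26))  # -> ("2023-01-01", "2023-03-31")
```

A lightweight local model would then only need to classify the phrase ("last quarter", "past week", ...) and hand off to deterministic code like this.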
2023-06-26T14:21:21
https://www.reddit.com/r/LocalLLaMA/comments/14jio71/guidance_on_creating_a_very_lightweight_model/
arch_d3sai
self.LocalLLaMA
2023-06-26T14:59:03
0
{}
14jio71
false
null
t3_14jio71
/r/LocalLLaMA/comments/14jio71/guidance_on_creating_a_very_lightweight_model/
false
false
self
29
null
How would you structure the dataset for a LORA to confer a specific writing style?
11
What it says on the tin. Would it be sufficient to use a textfile with a bunch of writing or would I need to structure it in a question/response format?
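If your training code expects instruction-style records rather than a raw text file, one option is to derive pairs from consecutive passages. The alpaca-style field names here are an assumption about your tooling:

```python
def to_records(paragraphs):
    """Turn consecutive passages into hypothetical continuation-style records."""
    return [
        {"instruction": "Continue the passage in the author's style.",
         "input": prev, "output": nxt}
        for prev, nxt in zip(paragraphs, paragraphs[1:])
    ]

recs = to_records(["Para one.", "Para two.", "Para three."])
```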
2023-06-26T13:45:31
https://www.reddit.com/r/LocalLLaMA/comments/14jhrb5/how_would_you_structure_the_dataset_for_a_lora_to/
freylaverse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jhrb5
false
null
t3_14jhrb5
/r/LocalLLaMA/comments/14jhrb5/how_would_you_structure_the_dataset_for_a_lora_to/
false
false
self
11
null
MosaicML Agrees to Join Databricks to Power Generative AI for All
107
2023-06-26T12:28:01
https://www.mosaicml.com/blog/mosaicml-databricks-generative-ai-for-all
Balance-
mosaicml.com
1970-01-01T00:00:00
0
{}
14jfzj3
false
null
t3_14jfzj3
/r/LocalLLaMA/comments/14jfzj3/mosaicml_agrees_to_join_databricks_to_power/
false
false
https://b.thumbs.redditm…Ri7HIRWU-W2k.jpg
107
{'enabled': False, 'images': [{'id': '2wrPnlTfX6YTssD5YyzokRB_2grMkj8sZB-rUZ3bCJM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=108&crop=smart&auto=webp&s=53d666754d09a0bd31bf9aa5d3f88fae7f5539f5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=216&crop=smart&auto=webp&s=d08496f504c32eb250004faaf31be85a2e5b290a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=320&crop=smart&auto=webp&s=3971c4f86bc84894f2bdd92b5143cbadacb64609', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=640&crop=smart&auto=webp&s=53a33d27ec571f28b6ea18b6515380dfc625702b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=960&crop=smart&auto=webp&s=abd506898d917237bbe092adcc0423463f04709e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?width=1080&crop=smart&auto=webp&s=d5afb0501d22399525f1694339485b4e0a47fdc9', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/40O_m7FzedFYg2su8GoAkYnjfH0yw-HAiu3E-JzX-IQ.jpg?auto=webp&s=d8b44fddbf704c42651c6a30014677c1043491c2', 'width': 2400}, 'variants': {}}]}
airoboros mpt-30b qlora - mostly successful
43
I forked mpt-30b and copied over a few changes that others had previously made to the 7B version to make it PEFT-compatible: [https://huggingface.co/jondurbin/mpt-30b-qlora-compatible](https://huggingface.co/jondurbin/mpt-30b-qlora-compatible) I updated my fork of qlora to support mpt-30b, specifically setting pad_token_id to 0 (since it doesn't have one?), using the triton attention implementation, and some tokenizer changes (left padding, use_fast=True). [https://github.com/jondurbin/qlora](https://github.com/jondurbin/qlora) I suspect there is a bug in the gradient accumulation code in the model, however. I tested with and without gradient accumulation for 3 epochs, and the results were substantially better without. For example, there are 200 examples of system prompts like "What is your name?" in the training data; with gradient accumulation the model produced random results, none of which were "airoboros", but without gradient accumulation it generally produced the correct answer most times. Similar results with other tests. I had to perform more epochs to achieve decent results, particularly with the contextual question answering format. 3 epochs works fine with llama models, but here 5 seemed to be the sweet spot. It's still not as good as llama-33b, but at least it's a proof of concept that it *can* work, although it probably needs some bug fixing.
I uploaded four versions, 3-6 epochs, but recommend using the five-epoch version: [https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs) [https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-three-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-three-epochs) [https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-four-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-four-epochs) [https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs) I also had to keep the model names as '1p4' instead of '1.4' because of the way it does imports in the Python code. You'll need trust_remote_code=True for this to work, as with all mpt-30b models. My fork of FastChat has been updated to support this model as well: https://github.com/jondurbin/FastChat
2023-06-26T11:49:38
https://www.reddit.com/r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/
JonDurbin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jf5xk
false
null
t3_14jf5xk
/r/LocalLLaMA/comments/14jf5xk/airoboros_mpt30b_qlora_mostly_successful/
false
false
self
43
{'enabled': False, 'images': [{'id': 'H7FytC2zSMiLjv-Ubpiv98bsBSMqt_GC-rEGfkjzshI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=108&crop=smart&auto=webp&s=5e49a5e719eb63f2e3b109673324409167488c93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=216&crop=smart&auto=webp&s=d877860a9318bebf35fcc68e43a01cb19b7bb36f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=320&crop=smart&auto=webp&s=0800450c447be39debaca2a65ae8dd3032acd40a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=640&crop=smart&auto=webp&s=f2a45b955171a2dad80a1d4e5fbc80c2580342e9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=960&crop=smart&auto=webp&s=fa594752228420c7aba848d22568925856727f79', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?width=1080&crop=smart&auto=webp&s=abae367c8e2b21902d178812089ac38401658a37', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NPS6a39kSuBVP2pEX7iHiQK9hAUNar8QysOZtLX5rC0.jpg?auto=webp&s=b50d86c07c363a63bf44cf1dc598084d9a9751a0', 'width': 1200}, 'variants': {}}]}
Finetuning 7B LLAMA models
7
Hi all, I'm looking to finetune a 7B LLaMA model on the conversation history between a friend and me, and would like some advice on doing that. If it helps, I'd prefer the conversation format to be similar to the Samantha model. I'm aware that I can engineer my prompt to mimic the style of our messages and such, but I want to try finetuning regardless. My main questions are:

1. What tool is currently best for finetuning a LLaMA model?
2. If I am looking for somewhat coherent conversation, which model would be best for this (e.g. WizardLM-1.0)?
3. Is it possible to finetune using only a CPU? If not, is it something I could set up in a tool like Google Colab?

Any help is appreciated, thanks!
2023-06-26T10:37:28
https://www.reddit.com/r/LocalLLaMA/comments/14jdr73/finetuning_7b_llama_models/
Sebba8
self.LocalLLaMA
2023-06-26T10:45:45
0
{}
14jdr73
false
null
t3_14jdr73
/r/LocalLLaMA/comments/14jdr73/finetuning_7b_llama_models/
false
false
self
7
null
LLMs and Abstract Syntax Trees
4
Is anyone aware of a project that first parses the input with a lexer and then turns it into an abstract syntax tree (AST) of some sort before it is vectorized as the input to an LLM?
2023-06-26T10:34:32
https://www.reddit.com/r/LocalLLaMA/comments/14jdpap/llm_and_advanced_syntax_trees/
jipinx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jdpap
false
null
t3_14jdpap
/r/LocalLLaMA/comments/14jdpap/llm_and_advanced_syntax_trees/
false
false
self
4
null
Is it possible to Fine tune with QLoRa an already fine tuned model (with QLoRa)
5
[removed]
2023-06-26T10:01:21
https://www.reddit.com/r/LocalLLaMA/comments/14jd2xn/is_it_possible_to_fine_tune_with_qlora_an_already/
AlexzombieEE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jd2xn
false
null
t3_14jd2xn
/r/LocalLLaMA/comments/14jd2xn/is_it_possible_to_fine_tune_with_qlora_an_already/
false
false
default
5
null
Using local models for qualitative analysis
3
I'm looking for peers who are interested in the use of local models for qualitative analysis. This is work like inductive thematic content analysis, argument analysis, and that sort of thing. Right now I'm putting together an opinion paper and looking for co-authors. I'm working at a university in Europe, so we have particular challenges when it comes to privacy. I think I have a decent introduction, but I'd like to make sure that the discussion I make of the models is sound. If you're interested in taking part, please message me directly.
2023-06-26T10:00:11
https://www.reddit.com/r/LocalLLaMA/comments/14jd1zs/using_local_models_for_qualitative_analysis/
Bozo32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jd1zs
false
null
t3_14jd1zs
/r/LocalLLaMA/comments/14jd1zs/using_local_models_for_qualitative_analysis/
false
false
self
3
null
"Interactive mode" noob q: It always starts up with weird spam output.
2
[removed]
2023-06-26T09:28:30
https://www.reddit.com/r/LocalLLaMA/comments/14jchfu/interactive_mode_noob_q_it_always_starts_up_with/
redzorino
self.LocalLLaMA
2023-06-26T09:38:45
0
{}
14jchfu
false
null
t3_14jchfu
/r/LocalLLaMA/comments/14jchfu/interactive_mode_noob_q_it_always_starts_up_with/
false
false
default
2
null
Passing Tabular Data to LLMs
8
Does anyone have any experience attempting this? I am trying to pass some tabular data to my model to gain insights from it. Imagine you have a research paper which holds results in tabular format, and I want to gain insights from those tables using a large language model. I have written code to extract the tabular data and convert it into a pandas dataframe, which in most cases works quite well. What I'm struggling with is how to pass the data to the LLM while retaining the information contained within the original structure of rows and columns. I am currently considering two approaches. The first would be to simply recreate the table, denoting cell and line breaks with some special character and informing the model of this. Alternatively, I am considering converting the tabular data into some form of natural language which could be processed easily by the model; however, this approach is extremely difficult and error-prone considering the wide variety of table structures out there, especially the variability of row and column titles. Does anyone have any experience with this they could share?
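A sketch of the first approach (special-character serialisation), here using pipe-delimited markdown, a format many models have seen during pretraining. The table contents are illustrative:

```python
def table_to_markdown(rows):
    """Serialise a list of dicts (one per row) as a markdown pipe table."""
    headers = list(rows[0])
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

md = table_to_markdown([{"model": "A", "acc": 0.91},
                        {"model": "B", "acc": 0.88}])
```

A `pandas.DataFrame` converts to this row-of-dicts form with `df.to_dict(orient="records")`, so this slots in after the extraction step described above.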
2023-06-26T08:39:33
https://www.reddit.com/r/LocalLLaMA/comments/14jblyr/passing_tabular_data_to_llms/
kotschi1997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14jblyr
false
null
t3_14jblyr
/r/LocalLLaMA/comments/14jblyr/passing_tabular_data_to_llms/
false
false
self
8
null
For fine-tuning a LLaMA model, which cloud GPU provider to use? runpod or lambda labs or …?
6
I'm curious what kind of resources people who are doing some of our current fine-tunes are using and why. It looks to me like lambda labs is cheaper for A100s but I seem to recall that a lot of people are using runpod. (?)
2023-06-26T06:46:01
https://www.reddit.com/r/LocalLLaMA/comments/14j9lzr/for_finetuning_a_llama_model_which_cloud_gpu/
Maristic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j9lzr
false
null
t3_14j9lzr
/r/LocalLLaMA/comments/14j9lzr/for_finetuning_a_llama_model_which_cloud_gpu/
false
false
self
6
null
Poll: When will we have very high speed and accuracy?
0
When do you predict this will happen: a publicly available GPT-4-quality (or better) model **-AND-** one that runs at 1000+ tokens/s on (then) modern, high-end consumer hardware (e.g., equivalent to what a 3090 is considered today)? [View Poll](https://www.reddit.com/poll/14j9lt3)
2023-06-26T06:45:42
https://www.reddit.com/r/LocalLLaMA/comments/14j9lt3/poll_when_will_we_have_very_high_speed_and/
omasoud
self.LocalLLaMA
2023-06-26T06:51:11
0
{}
14j9lt3
false
null
t3_14j9lt3
/r/LocalLLaMA/comments/14j9lt3/poll_when_will_we_have_very_high_speed_and/
false
false
self
0
null
What are the current fastest multi-gpu inference frameworks?
7
Hi community! I rented a cloud server today to try out some of the recent LLMs like Falcon and Vicuna. I started with huggingface's [generate API](https://huggingface.co/docs/transformers/main_classes/text_generation) using [accelerate](https://github.com/huggingface/accelerate). I got about 2 instances/s with 8 A100 40GB GPUs, which I think is a bit slow. I was using batch size = 1 since I do not know how to do multi-batch inference using the `.generate` API. I already did `torch.compile` + `bf16`. Do we have an even faster multi-GPU inference framework? I have 8 GPUs, so I was hoping for MUCH faster speeds, like ~10 or 20 instances per second (or is that possible at all? I am pretty new to this field). Some things I think are relevant and am going to look at:

* Vicuna has [FastChat](https://github.com/lm-sys/FastChat), though I'm not sure how flexible it is to configure
* DeepSpeed seems to have an inference mode, but I do not know how well it is integrated with huggingface. I have some experience training with DeepSpeed but never inference.
* [ChatGLM](https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md) seems to be pretty popular, but I've never used it before.

BTW, I heard quantizing the model to 8-bit or even 4-bit is helpful during training. Is this method also helpful for inference (and for the quality of the generated response)? Any good pointers on how to do quantized inference? Thanks all in advance!
2023-06-26T06:44:53
https://www.reddit.com/r/LocalLLaMA/comments/14j9l6b/what_are_the_current_fastest_multigpu_inference/
tt19234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j9l6b
false
null
t3_14j9l6b
/r/LocalLLaMA/comments/14j9l6b/what_are_the_current_fastest_multigpu_inference/
false
false
self
7
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
Why not standardize 3bit & 2bit GPTQ?
2
We know that any 2-bit quantized model outperforms all smaller 16-bit models: [k_quantization_vs_perplexity](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button) With the recent updates that allow for longer context sizes, 30B models at 4-bit will not be able to take full advantage of an 8k context size on 24GB of VRAM. Likewise, 13B models will not be able to utilize a longer context size with only 12GB of VRAM. [6000+ tokens context with ExLlama : r/LocalLLaMA - Reddit](https://www.reddit.com/r/LocalLLaMA/comments/14j4l7h/6000_tokens_context_with_exllama/) Thanks to ExLlama, a 2k context on 30B models is now possible with 24GB of VRAM. However, we should consider switching to lower-bit quantized models to take full advantage of longer context sizes, even at the cost of model performance. I know that 3-bit GPTQ is possible, and, as seen in k_quant, 2-bit models are also viable.
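A back-of-the-envelope sketch of why lower bit widths free up room for longer contexts (weights only; activations and the KV cache, which grows with context length, are ignored):

```python
def weight_gib(n_params_billion: float, bits: float) -> float:
    """Approximate GiB needed just for the weights at a given bit width."""
    return n_params_billion * 1e9 * bits / 8 / 2**30

gib_4bit = weight_gib(30, 4)  # ~14.0 GiB for a 30B model
gib_2bit = weight_gib(30, 2)  # ~7.0 GiB, leaving far more headroom on a 24GB card
```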
2023-06-26T03:57:16
https://www.reddit.com/r/LocalLLaMA/comments/14j6gpv/why_not_standardize_3bit_2bit_gptq/
onil_gova
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j6gpv
false
null
t3_14j6gpv
/r/LocalLLaMA/comments/14j6gpv/why_not_standardize_3bit_2bit_gptq/
false
false
self
2
null
Best model for commercial use?
5
So, I've been having a very good time so far with vicuna 1.3 and exllama. Managed to get them to work amazingly well with langchain and llama-index. I'm know starting to think in potential software applicatio s and llama's restrictive license is a nuisance. What's the best model for commercial use you've found? I've heard good things about Falcon, however I haven't found quantized versions for it (if someone knows about those, please do tell).
2023-06-26T03:32:15
https://www.reddit.com/r/LocalLLaMA/comments/14j5z7q/best_model_for_commercial_use/
KillerX629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j5z7q
false
null
t3_14j5z7q
/r/LocalLLaMA/comments/14j5z7q/best_model_for_commercial_use/
false
false
self
5
null
OMG Orca-mini 7b quantized to q4_0 ggml surprised me by its mathematical skills!
20
Prompt: Evaluate f(2) - f(1) when f(x) = 3x + 1.

Response: To evaluate f(2) - f(1), we need to substitute x=2 and x=1 into the given function f(x) = 3x + 1.

f(2) = 3(2) + 1 = 6 + 1 = 7

Similarly, f(1) = 3(1) + 1 = 3 + 1 = 4

Therefore, f(2) - f(1) = 7 - 4 = 3.
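The model's arithmetic checks out; a one-line verification:

```python
f = lambda x: 3 * x + 1   # the function from the prompt
difference = f(2) - f(1)  # 7 - 4 = 3, matching the model's answer
```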
2023-06-26T03:27:03
https://www.reddit.com/r/LocalLLaMA/comments/14j5vhm/omg_orcamini_7b_quantized_to_q4_0_ggml_surprised/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j5vhm
false
null
t3_14j5vhm
/r/LocalLLaMA/comments/14j5vhm/omg_orcamini_7b_quantized_to_q4_0_ggml_surprised/
false
false
self
20
null
How close are we to AI understanding context?
0
I was told that our current generation of AI can't understand the context and never will. So I thought I would ask here. How close are we to AI understanding context?
2023-06-26T02:49:57
https://www.reddit.com/r/LocalLLaMA/comments/14j54f7/how_close_are_we_to_ai_understanding_context/
TheArchivist314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j54f7
false
null
t3_14j54f7
/r/LocalLLaMA/comments/14j54f7/how_close_are_we_to_ai_understanding_context/
false
false
self
0
null
6000+ tokens context with ExLlama
199
Now possible in text-generation-webui after this PR: https://github.com/oobabooga/text-generation-webui/pull/2875 I didn't do anything other than exposing the `compress_pos_emb` parameter implemented by turboderp [here](https://github.com/turboderp/exllama/commit/d05f5d01a60af98b09b243290607e5e542551ef5), which in turn is based on kaiokendev's recent discovery: https://kaiokendev.github.io/til#extending-context-to-8k ### How to use it 1) Open the Model tab, set the loader as ExLlama or ExLlama_HF. 2) Set `max_seq_len` to a number greater than 2048. The length that you will be able to reach will depend on the model size and your GPU memory. 3) Set `compress_pos_emb` to max_seq_len / 2048. For instance, use 2 for max_seq_len = 4096, or 4 for max_seq_len = 8192. 4) Select the model that you want to load. 5) Set `truncation_length` accordingly in the Parameters tab. You can set a higher default for this parameter by copying settings-template.yaml to settings.yaml in your text-generation-webui folder, and editing the values in settings.yaml. Those two new parameters can also be used from the command-line. For instance: `python server.py --max_seq_len 4096 --compress_pos_emb 2`.
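The relationship in steps 2-3 can be written down directly; the position scaling below is a rough sketch of the linear interpolation trick this is based on, not the actual ExLlama implementation:

```python
def compress_factor(max_seq_len: int, base_ctx: int = 2048) -> int:
    """compress_pos_emb = max_seq_len / 2048, per step 3 above."""
    return max_seq_len // base_ctx

def scaled_position(pos: int, factor: int) -> float:
    """Roughly: positions are divided by the factor before the rotary embedding,
    so an extended sequence maps back into the trained 0..2047 range."""
    return pos / factor

factor_8k = compress_factor(8192)             # 4, for max_seq_len = 8192
last_pos = scaled_position(8191, factor_8k)   # falls back inside the trained range
```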
2023-06-26T02:23:58
https://www.reddit.com/r/LocalLLaMA/comments/14j4l7h/6000_tokens_context_with_exllama/
oobabooga4
self.LocalLLaMA
2023-06-26T02:27:06
0
{}
14j4l7h
false
null
t3_14j4l7h
/r/LocalLLaMA/comments/14j4l7h/6000_tokens_context_with_exllama/
false
false
self
199
{'enabled': False, 'images': [{'id': 'aiqOrWf-Zr4uMY6fBN3V7SnCS_ZJKqFnBXihscTwgow', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=108&crop=smart&auto=webp&s=ff3c3c1c676466176976b6e4544c23cec1fc5f66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=216&crop=smart&auto=webp&s=46117aa18152d1fee3e5e1ac378d4f91235a8011', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=320&crop=smart&auto=webp&s=38ba1c4c1ff9535f7ed775f07116f8e8daf016c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=640&crop=smart&auto=webp&s=aa880fca7dfdfc8dddbd3c17b0f8b50dbd140848', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=960&crop=smart&auto=webp&s=d9980ee6497cd1fd65ece3eb29bd5d44c420de87', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?width=1080&crop=smart&auto=webp&s=f6c803944341420c0e4b241dd352c514455f3a61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1lV0nwwN_sJIbg_gWh0UBWw2FA78j2MGGzuk9A3p35g.jpg?auto=webp&s=21782292ae4e6bbe2f1c922714a198ddccd6aca7', 'width': 1200}, 'variants': {}}]}
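The `compress_pos_emb` trick described in the ExLlama post above is linear position interpolation: positions are divided by a scaling factor so a longer sequence is squeezed back into the rotary-embedding range the model was trained on. A minimal sketch of the idea (illustrative only, not ExLlama's actual kernel code; `dim` and `base` are the usual RoPE defaults in miniature):

```python
import math

def rope_angles(position, dim=8, base=10000.0, compress=1.0):
    # Rotary embedding angles for one token position.
    # compress > 1 squeezes positions, so e.g. position 4096 with
    # compress=2 lands exactly where position 2048 would normally be.
    pos = position / compress
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With compress=2, position 4096 gets the same angles as uncompressed 2048,
# which is why compress_pos_emb = max_seq_len / 2048 in the instructions above.
assert rope_angles(4096, compress=2.0) == rope_angles(2048, compress=1.0)
```

The cost is a loss of positional resolution (nearby tokens get closer angles), which is why fine-tuning at the compressed setting helps recover quality.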
The new Orca-mini is popping off.
70
There's recently been a new model released using the Orca training practices from Microsoft Research. I paired Orca-mini 7B against WizardLM 7B V1.0 Uncensored. Orca massively beats WizardLM on algebra questions, logic puzzles, and coding, such as solving for x in 3x+1=10, the weight test, and writing Python code to calculate the nth Fibonacci number (12 questions total). Orca got all of those correct, while WizardLM only got the Python code right, and even that was worse than Orca's. One thing WizardLM is better at, though, is informational prompts. I asked each model to explain things such as what Node.js is, armageddon in chess, what Lichess is, and more (about 10 questions; I didn't count exactly). WizardLM beat Orca by about 1 or 2 questions, which is close (I believe WizardLM only got 1 wrong, but I don't remember). So, I was very surprised by how Orca-mini did against WizardLM. I was not able to test the 13Bs, as I only have 8GB RAM (sadge), but I'd expect similar result ratios. I'd appreciate it if somebody would test the 13Bs. The purpose of this post is to discuss this new model. What do you guys think?
2023-06-26T01:40:43
https://www.reddit.com/r/LocalLLaMA/comments/14j3ofn/the_new_orcamini_is_popping_off/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j3ofn
false
null
t3_14j3ofn
/r/LocalLLaMA/comments/14j3ofn/the_new_orcamini_is_popping_off/
false
false
self
70
null
Where can I acquire GPTQ 4bit model files for the base Llama model?
0
/u/the-bloke has GGML files [here](https://huggingface.co/TheBloke/LLaMa-30B-GGML), and the unquantized files are also available on Hugging Face. So why not the GPTQ version?
2023-06-26T01:28:09
https://www.reddit.com/r/LocalLLaMA/comments/14j3f0m/where_can_i_acquire_gptq_4bit_model_files_for_the/
Excessive_Etcetra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j3f0m
false
null
t3_14j3f0m
/r/LocalLLaMA/comments/14j3f0m/where_can_i_acquire_gptq_4bit_model_files_for_the/
false
false
self
0
{'enabled': False, 'images': [{'id': 'fdo3rIT_mabNidYQeQ26vFhCw8Q-6_bhEmDMQ21MLZs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=108&crop=smart&auto=webp&s=a83378a4a8b741d3a8cb08139b0d86a170e91135', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=216&crop=smart&auto=webp&s=5d74df91bbbe181c7ddc0d6846c492f1b79adf47', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=320&crop=smart&auto=webp&s=532f437224bcb0096df8067fd555f9763ee0f851', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=640&crop=smart&auto=webp&s=3654312db8d50f9288999562cac7283fd096ed31', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=960&crop=smart&auto=webp&s=ea1609fdd7e3998fe9dd5a1a3b4ed1323a843fb5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?width=1080&crop=smart&auto=webp&s=b68afd62f59e17baf82a16721a2d43d566d00e38', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tnujNJ-bwuhodlGbCBQVYIhd54FO-eJWZA4wFpS1Nd8.jpg?auto=webp&s=5db76e06715b059fa2ef690c87e1a0322e532a3f', 'width': 1200}, 'variants': {}}]}
New to this - What are the privacy concerns of a Local LLM? Has anyone verified that they are actually private?
1
[removed]
2023-06-26T01:15:08
https://www.reddit.com/r/LocalLLaMA/comments/14j356r/new_to_this_what_are_the_privacy_concerns_of_a/
Ok-Training-7587
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j356r
false
null
t3_14j356r
/r/LocalLLaMA/comments/14j356r/new_to_this_what_are_the_privacy_concerns_of_a/
false
false
default
1
null
How can I train a model on coherent conversations rather than disjointed Q&A pairs?
6
So, in LLMs, I've only seen two formats for fine-tuning data: dumping in an entire text (like a book or blog post), and formatted Q&A. But I want to train a model on entire, coherent conversations where earlier responses are all linked with later ones, not a bunch of discrete Q&A pairs. How do I do this? How is the JSON formatted? I've never seen this done; I've only seen sets of disjointed Q&A pairs. I am trying to train Falcon, if that matters.
2023-06-26T00:57:02
https://www.reddit.com/r/LocalLLaMA/comments/14j2rc1/how_can_i_train_a_model_on_coherent_conversations/
cold-depths
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j2rc1
false
null
t3_14j2rc1
/r/LocalLLaMA/comments/14j2rc1/how_can_i_train_a_model_on_coherent_conversations/
false
false
self
6
null
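For the multi-turn question above, one common convention is ShareGPT-style JSONL: each record holds one whole conversation as an ordered list of turns, so the trainer sees later responses in the context of earlier ones. Exact field names vary by training script, so treat `conversations` / `from` / `value` here as an illustrative assumption, not Falcon's required schema:

```python
import json

# One record = one full conversation, not a disjointed Q&A pair.
record = {
    "conversations": [
        {"from": "human", "value": "Muhahahaha, you're going to die, Batman."},
        {"from": "gpt", "value": "Never. You will be defeated by me, Joker."},
        {"from": "human", "value": "We'll see about that."},
        {"from": "gpt", "value": "Justice always finds a way."},
    ]
}

# JSONL: one conversation serialized per line of the dataset file.
line = json.dumps(record)
```

At training time the script flattens each record into a single prompt with role markers, so turn order (and thus coherence across turns) is preserved.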
Are there Python bindings for koboldcpp or do you have to use its HTTP API?
3
Trying to play around with MPT-30B, and it seems like koboldcpp may be the only way to use it with GPU acceleration on my system. It'd be sweet if I could use it like llama-cpp-python and ctransformers, directly as a Python library. Is that a possibility right now?
2023-06-25T23:17:49
https://www.reddit.com/r/LocalLLaMA/comments/14j0ki4/are_there_python_bindings_for_koboldcpp_or_do_you/
E_Snap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14j0ki4
false
null
t3_14j0ki4
/r/LocalLLaMA/comments/14j0ki4/are_there_python_bindings_for_koboldcpp_or_do_you/
false
false
self
3
null
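As of this writing there is no official Python binding for koboldcpp, but its HTTP API is small enough that a thin stdlib wrapper gets close to the llama-cpp-python feel. A sketch against the `/api/v1/generate` endpoint mentioned elsewhere in this thread dump (field names are an assumption based on the KoboldAI-compatible schema; check your koboldcpp version's API docs, and the default port of 5001 is also an assumption):

```python
import json
import urllib.request

def build_payload(prompt, max_length=200, temperature=0.7):
    # Request body for the KoboldAI-compatible generate endpoint.
    return {"prompt": prompt, "max_length": max_length, "temperature": temperature}

def generate(prompt, host="http://localhost:5001", **kwargs):
    # POST the payload and return the first generated completion.
    req = urllib.request.Request(
        host + "/api/v1/generate",
        data=json.dumps(build_payload(prompt, **kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Usage would be `generate("Once upon a time", max_length=100)` with a koboldcpp server running; the payload builder is pure and can be unit-tested without one.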
Python error while trying to convert PyTorch to GGML
1
[removed]
2023-06-25T22:27:55
https://www.reddit.com/r/LocalLLaMA/comments/14izdak/python_error_while_trying_to_convert_pytorch_to/
chocolatebanana136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14izdak
false
null
t3_14izdak
/r/LocalLLaMA/comments/14izdak/python_error_while_trying_to_convert_pytorch_to/
false
false
default
1
null
When did this happen? re: Oobabooga, AutoGPTQ vs GPTQ-for-Llama
41
I updated my local install of Ooba a few days ago and saw that the model loading options had changed; there are now several methods of loading models. I continued using GPTQ-for-Llama, because I'm pretty sure that's what it was using to load my favorite quantized models (mostly from TheBloke), and things ran pretty much as normal, except that I did have to edit a couple of references to the training tab in server.py to avoid some crashing that was going on after the update. On a whim, today I removed the pound signs I had thrown in to comment out the sections, updated again, and everything seems to be working. None of that is the point. At any rate, when I went to load up a model (don't think it was one I had tried yet, so no saved settings), I noticed after the fact that it had loaded with AutoGPTQ instead of GPTQ-for-Llama. Since it didn't error out, which is something I had come to expect when trying to load models with anything other than GPTQ-for-Llama, I switched over to the text generation tab and gave it a whirl. My usual intro prompt for a new model is to ask for a detailed list of female-fronted rock and metal bands. I immediately noticed that it was generating ridiculously fast. I went to the parameters tab to increase new token size (lets me get a more accurate measure of generation speed, because with short replies I'm losing a little time in the delay between hitting enter and a reply starting). This 13B model was generating around 11 tokens/s. For reference, I'm used to 13B models generating at 2 T/s, and 7B models at 4 T/s. So I loaded up a 7B model and it was generating at 17 T/s! I switched back to a 13B model (ausboss_WizardLM-13B-Uncensored-4bit-128g this time) and am getting 13-14 T/s. And switching to GPTQ-for-Llama to load the same model, I'm back to the 2 T/s that I've been getting for quite a while. So when did this happen?
Why have I been struggling to choose between less coherent but zippier answers from 7B, or better but slower answers from 13B, when this option was apparently added that delivers blistering speeds with the same models? I tried searching this sub, and the Ooba sub has gone dark, so I don't see much info about it. edit: Using a 3060 12GB, on a system with an i7 and 32GB RAM, by the way. And on the advice of the first 2 responders, I just installed and tried out ExLlama, and it's even faster. Getting 19-20 T/s on that same 13B model. This is insane. I was losing interest in trying out new models, because they were all so slow. edit2: Sorry, TheBloke, if you come to this thread because of the tag that was in OP. Removed.
2023-06-25T22:13:32
https://www.reddit.com/r/LocalLLaMA/comments/14iz0p9/when_did_this_happen_re_oobabooga_atuogptq_vs/
DeylanQuel
self.LocalLLaMA
2023-06-26T07:22:08
0
{}
14iz0p9
false
null
t3_14iz0p9
/r/LocalLLaMA/comments/14iz0p9/when_did_this_happen_re_oobabooga_atuogptq_vs/
false
false
self
41
null
Is anyone doing always-on voice to text with a local llama at home?
26
Not sure if this is the right sub for the question, as it overlaps with /r/homeautomation and /r/homeassistant. One thing I've been wanting to do is to create an always-on voice assistant that will listen, transcribe audio, and pass it to an LLM (via some processing/routing), such that I can ask it questions and it will respond accordingly. I know this will take some orchestration and some creative hardware, but I don't see any single piece of it as too tricky. So, curious if anyone is running a 24/7 assistant with always-on audio? Or failing that, wake-word based audio?
2023-06-25T21:16:49
https://www.reddit.com/r/LocalLLaMA/comments/14ixlc4/is_anyone_doing_alwayson_voice_to_text_with_a/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14ixlc4
false
null
t3_14ixlc4
/r/LocalLLaMA/comments/14ixlc4/is_anyone_doing_alwayson_voice_to_text_with_a/
false
false
self
26
null
Can ooba booga run Falcon 7B?
4
[deleted]
2023-06-25T21:10:27
[deleted]
1970-01-01T00:00:00
0
{}
14ixfim
false
null
t3_14ixfim
/r/LocalLLaMA/comments/14ixfim/can_ooba_booga_run_falcon_7b/
false
false
default
4
null
Enabling CuBlas on (Text Generation Web UI)
2
[removed]
2023-06-25T20:30:53
https://www.reddit.com/r/LocalLLaMA/comments/14iwfk7/enabling_cublas_on_text_generation_web_ui/
vic8760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14iwfk7
false
null
t3_14iwfk7
/r/LocalLLaMA/comments/14iwfk7/enabling_cublas_on_text_generation_web_ui/
false
false
default
2
null
Any completely unrestricted open source local models?
4
Hi, are there any completely unrestricted open-source local models that will not, e.g., answer "Consult a qualified electronic repair technician" after I ask for help diagnosing a faulty audio amp, and then only give very generic answers like "check your cabling", etc.?
2023-06-25T20:23:37
https://www.reddit.com/r/LocalLLaMA/comments/14iw8vi/any_completely_unrestricted_open_source_local/
Afraid-Community5725
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14iw8vi
false
null
t3_14iw8vi
/r/LocalLLaMA/comments/14iw8vi/any_completely_unrestricted_open_source_local/
false
false
self
4
null
Bigger isn't always better: Smaller LLMs and quants can deliver better outputs
33
I wanted to share an interesting observation I've made recently regarding the size of language models and quantization formats. While I used to believe that bigger models and quants are always better, my evaluations have shown otherwise. Contrary to popular belief, larger language models are not always superior. Through extensive experiments comparing different sizes and quants, I found that smaller models/quants can often deliver better outputs. The analogy I like to use is that the smartest person in the room isn't always the most eloquent or effective communicator, or the most fun person to talk to. In my evaluations, I compared various 33B and 65B models and their quants, by chatting for hours with them using the same script and deterministic settings to remove randomness. Here are the models and quants I compared in detail - these are some of the very best models (IMHO, after much testing and comparing, *the* best) and since they're available in multiple sizes and various quants, it's possible to compare their different versions directly: - TheBloke_airoboros-33B-gpt4-1.2-GGML/airoboros-33b-gpt4-1.2.ggmlv3.q3_K_M.bin - TheBloke_airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q3_K_M.bin - TheBloke_airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q4_K_M.bin - TheBloke_airoboros-65B-gpt4-1.2-GGML/airoboros-65B-gpt4-1.2.ggmlv3.q5_K_M.bin - TheBloke_guanaco-33B-GGML/guanaco-33B.ggmlv3.q3_K_M.bin - TheBloke_guanaco-33B-GGML/guanaco-33B.ggmlv3.q4_K_M.bin - TheBloke_guanaco-33B-GGML/guanaco-33B.ggmlv3.q5_K_M.bin - TheBloke_guanaco-65B-GGML/guanaco-65B.ggmlv3.q3_K_M.bin - TheBloke_guanaco-65B-GGML/guanaco-65B.ggmlv3.q4_0.bin **Observation 1:** Different quantization formats produce very different responses even when applied to the same model and prompt. Each quant I tested felt like a unique model in its own right. 
**Observation 2:** In my tests, both Airoboros and Guanaco 33B models with the q3_K_M quant outperformed even their larger model and quant counterparts. These findings were surprising to me, highlighting the variability in outputs between different quants and the effectiveness of smaller models/quants. It remains unclear whether this variability is due to inherent randomness caused by different model sizes and quantization in general, or possibly issues with these larger quants I tested. However, the key takeaway is that blindly opting for the largest model/quant isn't always the best approach. I recommend comparing different sizes/quants of your preferred model to determine if a smaller version can actually produce better results. Further testing with different models and quants is needed, and I encourage others to conduct their own evaluations. What are your thoughts and experiences on this matter? Have you, too, encountered instances where smaller models or quants outperformed their larger counterparts? Let's discuss and share our insights! **TL;DR:** My evaluations have shown that smaller LLMs and quants can deliver better outputs when chatting with the AI. While bigger models may be smarter, the smartest person isn't always the most eloquent. Evaluate models yourself by comparing different sizes/quants rather than assuming that bigger is always better! **UPDATE 2023-06-27:** So u/Evening_Ad6637 taught me that Mirostat sampling isn't as deterministic as I thought, and might actually have impacted the bigger models negatively. I'm now in the process of redoing my tests with a truly deterministic preset (temperature 0, top_p 0, top_k 1), which takes a long time. However, it's already become clear to me that the quantization differences persist, and **bigger still isn't always better**. 
That could be attributed to randomness, though, as even with a fully deterministic preset there's still the difference between models and even quants that affects generations, and by changing the prompt just slightly, the outcome is changed greatly.
2023-06-25T20:16:12
https://www.reddit.com/r/LocalLLaMA/comments/14iw25d/bigger_isnt_always_better_smaller_llms_and_quants/
WolframRavenwolf
self.LocalLLaMA
2023-06-27T20:10:28
0
{}
14iw25d
false
null
t3_14iw25d
/r/LocalLLaMA/comments/14iw25d/bigger_isnt_always_better_smaller_llms_and_quants/
false
false
self
33
null
A new open-source language model claims to have surpassed GPT-4 right now. This needs to be fact-checked
132
2023-06-25T18:13:58
https://i.redd.it/84cnkr7jg78b1.png
XVll-L
i.redd.it
1970-01-01T00:00:00
0
{}
14iszrf
false
null
t3_14iszrf
/r/LocalLLaMA/comments/14iszrf/a_new_opensource_language_model_claims_to_have/
false
false
https://b.thumbs.redditm…pDPpvf_f6IYY.jpg
132
{'enabled': True, 'images': [{'id': 'mCqBth8tGqA5z7u6o1wW-wvSIKIVQ5uEUWbIOZwTkKs', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=108&crop=smart&auto=webp&s=0c51ac927e78c4075e4d2c8e29de4053525397ad', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=216&crop=smart&auto=webp&s=9195491eb80b00b268fa9b61aa9392d827a59da0', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=320&crop=smart&auto=webp&s=f92d9a4d428835acb07c7ef238e9318876e1e6d9', 'width': 320}, {'height': 447, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=640&crop=smart&auto=webp&s=a66992ef289d174fdb1ef845d3e800251b162ecf', 'width': 640}, {'height': 670, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=960&crop=smart&auto=webp&s=8a63af00a4fbd7d7fdb4d206041506ff050605ac', 'width': 960}, {'height': 754, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?width=1080&crop=smart&auto=webp&s=91444f9e232d25dcb0fd94ef78ae656a358d7dd1', 'width': 1080}], 'source': {'height': 791, 'url': 'https://preview.redd.it/84cnkr7jg78b1.png?auto=webp&s=8a8b3de6ba4ab1bd03e03b1850ecc089315fc5af', 'width': 1132}, 'variants': {}}]}
A new open source language model claims to have surpassed GPT-4. This needs to be fact-checked.
1
[deleted]
2023-06-25T18:11:59
[deleted]
1970-01-01T00:00:00
0
{}
14isy40
false
null
t3_14isy40
/r/LocalLLaMA/comments/14isy40/a_new_open_source_language_model_claims_to_have/
false
false
default
1
null
Question about Training Data for LLama Model
4
Ok, so I would like to use oobabooga to train a LoRA on an already fine-tuned model, and to be more specific, I would like to train a character into it. I basically have two questions. First, can I train a LoRA on a fine-tuned model? Second, can someone give me an example of how I would train a character? My idea was to do it similar to prompting, but I'm not sure if this works. Example: Joker: Muhahahaha you gone die Batman. Batman: Never you will be defeated, by me. Joker: ..... etc. I was wondering if a full conversation like this example would be possible as training data. In this example the user would be the Joker and the model would answer as Batman. I know this already works if I give the model this example in a prompt, but I would really like to make a LoRA for this, so I can free up space in the prompt for other information.
2023-06-25T18:07:07
https://www.reddit.com/r/LocalLLaMA/comments/14istio/question_about_training_data_for_llama_model/
GuentherDonner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14istio
false
null
t3_14istio
/r/LocalLLaMA/comments/14istio/question_about_training_data_for_llama_model/
false
false
self
4
null
Any Open Source LLMs for grammar checking & proofreading?
10
Wondering if anyone knows of a good model for grammar checking & proofreading that can be run locally. Basically an open-source version of Grammarly?
2023-06-25T17:37:08
https://www.reddit.com/r/LocalLLaMA/comments/14is3br/any_open_sourced_llms_for_checking_grammerly/
chucks-wagon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14is3br
false
null
t3_14is3br
/r/LocalLLaMA/comments/14is3br/any_open_sourced_llms_for_checking_grammerly/
false
false
self
10
null
What LLM to use for custom data and/or corpus?
1
I am very new to the LLM scene, drawn in primarily by the advancement of locally operated and stored LLMs like those on GPT4All. Attempts to "catch up" have left me largely paralyzed mentally. To be concise: I want to train (assuming that's the correct verbiage) a local LLM only on the primary documents and research materials I used professionally, and also on the books, transcripts, and other examples of my personal writing style. The purpose is to have an AI to use as a personal assistant of sorts that can ultimately imitate my writing style. I understand that this may be too expansive an ambition in the current state of the technology, at least for local LLMs. However, if there is some way to create a prototype or early model, I'd be delighted to try. After exhaustive hours of reading online, I am unsure what LLM to use, let alone how to use a custom corpus. Most of the "teach yourself" materials are related to OpenAI, whose services I would rather avoid; I want to keep my AI local. Suggestions from those of you who are far more aware and active in the field are greatly appreciated: what base LLM to focus on, what systems would be most appropriate, and so forth. I've been experimenting with Nous Hermes from GPT4All, but have really hit a wall wondering if it is even the ideal shell to use or how to cater it to what I'm looking for. I apologize if this is in the wrong area; there doesn't seem to be an "I'm lost, help me" forum out there these days. I am woefully ignorant but delighted to learn; at this point I just lack a specific direction to focus my attempts.
2023-06-25T17:07:51
https://www.reddit.com/r/LocalLLaMA/comments/14irejw/what_llm_to_use_for_custom_data_andor_corpus/
1787Project
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14irejw
false
null
t3_14irejw
/r/LocalLLaMA/comments/14irejw/what_llm_to_use_for_custom_data_andor_corpus/
false
false
self
1
null
How to convert the number of tokens to the number of characters for LLaMA?
4
I use the koboldcpp API (/api/v1/generate) in my program. I input a text string and get the result; everything works. But I don't understand how to trim an input string that is too long. I can trim by the number of characters, but the limit is 2048 tokens. The OpenAI documentation says 1 token is roughly equal to 4 characters of English text. Is this true for LLaMA models? Is that approximation accurate enough to work correctly?
2023-06-25T16:46:27
https://www.reddit.com/r/LocalLLaMA/comments/14iqvx8/how_to_convert_the_number_of_tokens_to_the_number/
fisbur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14iqvx8
false
null
t3_14iqvx8
/r/LocalLLaMA/comments/14iqvx8/how_to_convert_the_number_of_tokens_to_the_number/
false
false
self
4
null
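For the token-counting question above: the 4-chars-per-token rule of thumb from the OpenAI docs is in the right ballpark for LLaMA's SentencePiece tokenizer on English text, but it is only approximate, so it is safer to trim with headroom (assume fewer chars per token than you expect). A rough sketch of that heuristic, not an exact tokenizer:

```python
def estimate_tokens(text, chars_per_token=4):
    # Rough heuristic: ~4 English characters per token.
    # LLaMA's SentencePiece tokenizer is usually close to this for
    # English but denser for code or non-English text.
    return max(1, len(text) // chars_per_token)

def trim_to_token_budget(text, max_tokens=2048, chars_per_token=3, keep="tail"):
    # Conservative trim: assume only 3 chars/token so we rarely
    # overshoot the model's real 2048-token limit.
    budget = max_tokens * chars_per_token
    if len(text) <= budget:
        return text
    # Keeping the tail preserves the most recent context for chat use.
    return text[-budget:] if keep == "tail" else text[:budget]
```

For an exact count you would run the model's actual tokenizer (koboldcpp exposes one server-side); the heuristic is only for cheap client-side pre-trimming.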
Looking inside Guidance Source Code
1
[deleted]
2023-06-25T16:41:07
[deleted]
1970-01-01T00:00:00
0
{}
14iqrfv
false
null
t3_14iqrfv
/r/LocalLLaMA/comments/14iqrfv/looking_inside_guidance_source_code/
false
false
default
1
null