## To finetune or use a vector db?

We started a few months ago on a journey using the Pinecone vector DB with the OpenAI API. I'm interested in finetuning my own local model, but I'm unsure whether it would be better to finetune or to continue using a vector DB for referencing my company content.
Any recommendations are appreciated.
Additional Qs:
Should I start with Llama 2 or Mistral?
I also want to do multi-class text categorization using an LLM. Is there anything special to know in order to finetune the model in the classification mode of operation, as opposed to the autocomplete mode of operation?
## Yi-34B-200K works on a single 3090 with 47K context/4bpw

I just spent more time than I'd like messing with Yi 200K before even testing it. Some notes to save others trouble, in case anyone is thinking about trying it:
* install exllamav2 from git with `pip install git+https://github.com/turboderp/exllamav2.git`. Make sure you have flash attention 2 as well.
* If you use lonestriker's quant (https://huggingface.co/LoneStriker/Yi-34B-200K-4.65bpw-h6-exl2), and you want to use Yi's HF tokenizer, add all the python files and the tokenizer into the exl2 folder from the original repo: https://huggingface.co/01-ai/Yi-34B-200K
* If using text-gen-ui and the exllamav2_HF loader, load the UI with `--trust-remote-code`
On an empty 3090, with 8 bit cache, max context for Yi 4.65bpw is ~28K
On my own 4bpw quantization, the 3090 can handle ~47K
The alpaca-cot lora seems to work, and gets Yi-200K to complete queries in alpaca syntax: https://huggingface.co/zzlgreat/Yi-34b-alpaca-cot-lora
Performance continuing long stories seems... unreliable? But smart, and good when it works. I need to test more, and test retrieval and summarization, but if finetunes improve Yi as much as they improve base Llama 2, I will be very pleased.
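For anyone scripting this outside text-generation-webui, a minimal loading sketch with the exllamav2 Python API might look like the following. Treat it as untested: the class and call names follow the library's examples at the time, the model path is a placeholder, and the 28K figure is just the number quoted above for the 4.65bpw quant.

```python
from exllamav2 import (ExLlamaV2, ExLlamaV2Config,
                       ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/Yi-34B-200K-4.65bpw-h6-exl2"  # placeholder path
config.prepare()
config.max_seq_len = 28 * 1024  # raise/lower to fit your VRAM

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache_8bit(model)  # the 8-bit KV cache mentioned above

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
print(generator.generate_simple("Once upon a time", settings, 200))
```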
## Benefits/Challenges using multiple GPUs

I have an RTX 4080 (16GB) and that works pretty well for most of my work/tests. I stick to 7B/13B models for the most part because anything with the larger models gets too slow. Especially for fine-tuning, 16GB is too limiting. Used cards like the Tesla P40 (24GB) can be bought very cheap these days (around $175). Is it worth adding one of these, and would it be very helpful? Any feedback from folks who tried setups like these? Are there any challenges or pitfalls to be aware of?
## New to local models, and after doing my research I'm still looking for answers to some basic questions

Hi all, I made sure to read many threads in this subreddit but still have some clarifying questions. Thanks in advance for your patience; I think I'm a little slow ha.
1) The reasons to run a local (read: non-OpenAI) setup are basically privacy, avoiding censorship, tinkering/hobbying, trying out other models, and improving performance, right?
2) If one doesn't care about any of the above things, is there any compelling reason to run a non-OpenAI set-up? My (ignorant) sense is that a lot of things are handled out the box when you want to, say, fine-tune an OpenAI model.
3) What's the difference between paying for cloud compute through someone like AWS vs. paying for compute through a specialty service like OpenRouter or RunPod or VastAI?
4) Similarly, (assuming I just want to use OpenAI), what's the point in paying for compute when I can just use OpenAI directly?
5) More of a zoomed-in process question, but if I purchase compute from any provider, how do I connect to it in Colab? Would I have to do it through the "connect to a runtime" feature, or is the runtime modal irrelevant when using other compute?
6) (Local GPU question) I often see people recommend 2x 4090s to get started, but what about 1x 4090 and 1x 2070 SUPER? (Basically I was already in the market anyway to upgrade my gaming card, but I want to know if it will be tricky to run 2 different cards vs. 2 of the same card)
## Powerful Budget AI-Workstation Build Guide (48 GB VRAM @ $1.1k)

I built an AI workstation with 48 GB of VRAM, capable of running LLaMA 2 70B 4-bit sufficiently, at a total end-build price of $1,092. I got decent Stable Diffusion results as well, but this build definitely focused on local LLMs; you could build a much cheaper and faster rig if you were planning to do only Stable Diffusion work. But my build can do both, and I was just really excited to share. The guide was just completed, and I will be updating it over the next few months to add vastly more details. But I wanted to share for those who're interested.

[https://github.com/magiccodingman/Magic-AI-Wiki/blob/main/Wiki/Budget-AI-Workstation-Build.md](https://github.com/magiccodingman/Magic-AI-Wiki/blob/main/Wiki/Budget-AI-Workstation-Build.md)

Also, just as a note: I'm not an expert at this, and I'm sure the community at large could improve this guide significantly. But I spent a good amount of money testing different parts to find the overall best configuration at a good price. The goal of this build was not to be the cheapest AI build, but to be a really cheap AI build that can step into the ring with many of the mid-tier and expensive AI rigs. Finding the minimum hardware that could run LLaMA 2 70B 4-bit sufficiently was a big goal of mine, and I was personally quite happy with the results. The guide also includes a script to fix the fans so that they don't sound like a jet flying in your house constantly.

But as of right now: I've run my tests, the server is currently running great, and if you have any questions about what I've done, or would like me to run additional tests, I'm happy to answer, since the machine is running next to me right now!
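For reference, on IPMI-capable servers (Dell PowerEdge is the usual host for cheap P40s, though that's an assumption about this particular build, so check the guide's actual script), the usual shape of such a fan fix is to switch the BMC to manual control and pin a duty cycle. The raw bytes below are the widely documented iDRAC ones; the host, credentials, and 20% value are placeholders:

```bash
# take fan control away from the BMC (Dell iDRAC raw command)
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x01 0x00

# pin all fans to a fixed 20% duty cycle (0x14 hex = 20)
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14
```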
## GPT-4 Turbo and why it outputs up to 4K tokens

So, the new [GPT-4 Turbo generates up to 4096 tokens](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo).

But it got me thinking: why would a model with a context of up to 128K tokens only output 4096 tokens? What long contexts have taught us is that models trained with long contexts do not perform as well on smaller prompts, so I suspect they must have some sort of token-counting system that checks which model a query should be sent to, whether an 8K, 16K, or 128K model, etc.
My guess is that they have all versions of the models up, because they can, and use the best model depending on the prompt length. I'm not sure if and how we could do something similar for local models, assuming that we would be fine with a generation of up to x tokens.
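If anyone wants to mimic that routing themselves against the OpenAI API, a hypothetical length-based router is a few lines with tiktoken (the model names are the ones current at the time; the window map encodes the guess described above, not anything OpenAI has confirmed):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
WINDOWS = {8_192: "gpt-4", 32_768: "gpt-4-32k", 128_000: "gpt-4-1106-preview"}

def pick_model(prompt: str, max_output: int = 4_096) -> str:
    """Pick the smallest context window that fits prompt + reserved output."""
    n_prompt = len(enc.encode(prompt))
    for window, model in sorted(WINDOWS.items()):
        if n_prompt + max_output <= window:
            return model
    raise ValueError("prompt too long for any configured window")
```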
## How do you compare 100s of documents with a local LLM?

I have identified some dimensions (say 10) to compare some documents on (language, size, security, complexity, scalability, privacy, etc.). What approach do you use to do this with a local LLM?
I thought I'd first create a vectorstore from the documents, and then run the following query in a loop:
```python
query = f"Summarise the document {i}.txt based on the ten parameters identified: size, security, complexity, scalability, privacy, etc."
```
But my LLM (Llama-2-7B) is giving inconsistent results. Sometimes it gives an output that looks alright, but upon close inspection I find that it's gotten even the basic things wrong: for example, it would say the language is English when in fact it is German (not something the LLM has to deduce, by the way; it is stated in the document). At other times it would reply, on the same query, with this:
Based on the provided context, I will do my best to answer your questions. Please provide the first project's name, as mentioned in the context, and I will summarize it based on the 14 dimensions you have provided.
So, what is the name of the first project?
At yet other times:
Of course! I'm happy to help you with that. Based on the provided context, here is a summary of the project description for the attributes you mentioned:
* Language: N/A (not specified)
* Size: N/A (not specified)
* Security: N/A (not specified)
* Complexity: N/A (not specified)
* Scalability: N/A (not specified)
* Privacy: N/A (not specified)
* ... : N/A (not specified)
* ... : N/A (not specified)
And then I lost all hope, because forget summarising a document based on some given points, it can't even retrieve the title of that document:
> Question:
What is the title of the project in 1.txt?
> Answer:
Great! Based on the provided context, the question you are asking is "What is the title of the project in 1.txt?"
To answer this question, I need to check the content of the file named "1.txt" in the provided context. Unfortunately, there is no information in the context about the title of a specific project, so I must inform you that I cannot provide an answer to your question based on the given context.
Please provide more information or context about the project you are referring to, and I'll be happy to help you find the answer to your question.
So, what else could I try? Is my approach wrong? (I did not even get to the looping stage; these are experiments with just one of the documents. I would have moved to the loop if I got good responses with one doc.)
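One thing worth trying before giving up on the model: skip the vectorstore entirely and put the document text straight into the prompt, one dimension per query, so retrieval can't silently hand the LLM the wrong chunks. A sketch with llama-cpp-python (the model file and prompt template are placeholders; the document must fit in n_ctx, otherwise chunk it):

```python
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096)
dims = ["language", "size", "security", "complexity", "scalability", "privacy"]

with open("1.txt") as f:
    doc = f.read()

for d in dims:
    prompt = (f"[INST] Here is a document:\n{doc}\n\n"
              f"State the {d} of this document in one sentence. [/INST]")
    out = llm(prompt, max_tokens=100)
    print(d, "->", out["choices"][0]["text"].strip())
```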
## Fine-Tune an Embedding Model for RAG?

Did anyone fine-tune an existing embedding model for RAG? I have German texts and I am looking for resources. I saw that people used a synthetic dataset from the OpenAI API to fine-tune their model. Is it worth it (especially for German)?
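For what it's worth, the usual recipe with sentence-transformers is contrastive training on (query, relevant passage) pairs, which is exactly what that synthetic OpenAI data gets used for. A minimal sketch, assuming a multilingual base model and hand-rolled German pairs (both are illustrative choices, not recommendations from the post):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# hypothetical (question, passage) pairs, e.g. synthetic questions per chunk
pairs = [
    InputExample(texts=["Wie hoch ist die Miete?", "Die Miete beträgt 800 Euro im Monat."]),
    InputExample(texts=["Wer ist der Vermieter?", "Vermieter ist Herr Müller aus Berlin."]),
]

model = SentenceTransformer("intfloat/multilingual-e5-base")  # one multilingual option
loader = DataLoader(pairs, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
model.save("e5-base-german-ft")
```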
## I used the GPT-4 Turbo in the API and it was worse

Considering that gpt-4-1106-preview (GPT-4 Turbo) is already out in the API, I thought I'd give it a try and see whether it could handle the task the previous GPT-4 does in my project. I replaced it and it failed instantly. It was an assigning and formatting task where I give it a set of videos with descriptions and a text; it chooses which videos would suit which sentences in the text. It was able to do the assigning task, but it failed at the format I wanted. I am still able to use it by adjusting its input, but this alone makes it obvious to me that the new GPT-4 Turbo is a step back, at least when it comes to quality.
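For strict output formats specifically, the same 1106 release added a JSON mode flag to the chat completions API, which may help more than prompt tweaks. A sketch with the v1 Python client (the schema in the system message is made up for this example; note JSON mode requires mentioning JSON in the messages):

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # forces syntactically valid JSON
    messages=[
        {"role": "system",
         "content": "Reply in JSON with a 'matches' list of {sentence, video_id} objects."},
        {"role": "user", "content": "<text plus the video descriptions go here>"},
    ],
)
print(resp.choices[0].message.content)
```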
## Google doesn't have a moat, OpenAI does

Six months after the leaked "We have no moat, and neither does OpenAI" memo, it is clear that Google indeed didn't (and doesn't) have a moat regarding LLMs.
OpenAI, on the other hand, does have a moat. And it's a big one.

Their GPT-4 has been the unrivalled LLM for 8 months straight, with no other model approaching its general capabilities.

Basically, the current landscape is: you go OpenAI or you go open source. Anthropic, Google, and Amazon are so far behind they need a miracle to gain a sizable market share.

I can only envy Microsoft. They have OpenAI, they have Copilot, they have GPUs for you to run your preferred Llama 2/Mistral finetune, and they have DeepSpeed for you to train/finetune your favorite LLM on your multi-GPU machine.

Open source is still promising. The release of Mistral reignited faith in smaller LLMs that can fit on a single consumer machine. We now have loads of Chinese LLMs coming out every day, and we've upped our game in terms of finetuning with releases like Hermes, Dolphin, OpenChat, etc.
Literally the sky is the limit for open source LLMs!
I would like to thank everyone in this sub for their contributions, whether it's finetunes, evals, quants, tutorials, or anything else.
Thank you and have a nice day!
## Preview version of GPT-4 Turbo has been released. What does this mean for local LLMs?

[Model page](https://help.openai.com/en/articles/8555510-gpt-4-turbo)

[Announcement post](https://openai.com/blog/new-models-and-developer-products-announced-at-devday)

- 128K context (4K limit on model output)
- knowledge cutoff of April 2023
- cheaper than GPT-4 [[pricing]](https://openai.com/pricing#gpt-4-turbo)
- limited to 100 requests per day; limits will be increased after release of the production version
- a vision model for image inputs is also available

A lot of local LLMs are trained on GPT-4-generated synthetic data, self-identify as GPT-4, and have a knowledge cutoff stuck in 2021 (or at least lie about it). My question is: will this model help to transfer knowledge from recent years? Not only about major events in the world, but also about the research and software projects that have been published. Or is it easier to just crawl the web and collect datasets than to try to extract knowledge from GPT? If so, are there any local LLMs that have been trained on more recent data?
## Extracting data from emails for training?

Does anyone know of a good tool that can connect to a mailbox (e.g. over IMAP) and save the contents in a CSV or JSON file suitable for LLM training? I started writing my own, but it's more complicated than it first seems: you need to strip out all the mail signatures and handle email chains with multiple nested replies. No point re-inventing the wheel if someone has already done it...
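In case it helps anyone rolling their own: the fetching part is short with the standard library, and the hard part is indeed the cleanup, which the naive patterns below only gesture at (host, credentials, and the signature/reply regexes are all placeholders to adapt):

```python
import imaplib, email, json, re

M = imaplib.IMAP4_SSL("imap.example.com")
M.login("user@example.com", "app-password")
M.select("INBOX")
_, data = M.search(None, "ALL")

with open("emails.jsonl", "w") as out:
    for num in data[0].split():
        _, msg_data = M.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        body = ""
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                body = part.get_payload(decode=True).decode(errors="replace")
                break
        # naive cleanup: cut at common signature/reply markers, drop quoted lines
        body = re.split(r"\n-- \n|\nOn .+ wrote:", body)[0]
        body = "\n".join(l for l in body.splitlines() if not l.startswith(">"))
        out.write(json.dumps({"subject": msg.get("Subject", ""),
                              "body": body.strip()}) + "\n")
```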
## How to use the Hugging Face Inference API?

I'm currently trying to figure out how to use the models on Hugging Face with their Inference API. Right now I'm trying to use the Zephyr-7B-beta model ([https://huggingface.co/HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)).

On the page it tells me that the Inference API is enabled for this model; however, when I try to use it, it tells me that it does not exist. I tried different names for the Zephyr model (zephyr, zephyr-chat, zephyr-7b, zephyr-7b-b and Zephyr-7B-β), but they all give me back the same error. Does somebody know what my problem might be?

*(screenshot of the error)*
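If the "does not exist" error comes from using the bare model name: the Inference API wants the full repo id including the org prefix. A minimal sketch (the token and prompt are placeholders; the prompt follows Zephyr's chat template):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta"
headers = {"Authorization": "Bearer hf_..."}  # your HF access token

payload = {
    "inputs": "<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nHello!</s>\n<|assistant|>\n",
    "parameters": {"max_new_tokens": 200},
}
print(requests.post(API_URL, headers=headers, json=payload).json())
```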
## Llama.cpp with Facebook-sourced llama-2-7b

I'm trying to get local models running using llama.cpp together with text-generation-webui.

I downloaded my models (llama-2-7b/13b/70b), but the included files are: checklist.chk, consolidated.00.pth and params.json.

When I try to use llama.cpp to convert to a GGUF, I repeatedly get the error "Could not find tokenizer.model in models/llama-2-7b".

What exactly is the tokenizer.model file, how is it generated, what does it contain, and how do I get it?
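For what it's worth: tokenizer.model is the SentencePiece vocabulary, and in Meta's download it sits in the root folder next to the per-size model directories, not inside them. Something along these lines should get past the error (flags as of llama.cpp around this time; double-check against your checkout):

```bash
# tokenizer.model ships at the top level of Meta's download
cp /path/to/llama/tokenizer.model models/llama-2-7b/

# convert the .pth checkpoint to GGUF, then quantize
python convert.py models/llama-2-7b/ --outfile models/llama-2-7b/ggml-model-f16.gguf
./quantize models/llama-2-7b/ggml-model-f16.gguf models/llama-2-7b/ggml-model-Q4_K_M.gguf Q4_K_M
```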
## Testing models for AI creative writing

So I use my LLMs for creative writing, namely erotic stories. I have come to accept that I have to work WITH the machine, as opposed to having the machine work for me, as from my experience it's extremely stressful dealing with unruly LLMs.

So I come to you with a question. I am prompting the following at 200 max tokens, using the simple-1 default preset in Oobabooga, all in chat-instruct mode in the chat tab. What would be a good way to prompt for consistent expansion of text and dialogue? As you can see, I received wildly different results based on the model.
***Please expand on the following conversation from a romance novel, making it more detailed and intimate. Develop the dialogue further, revealing more about the characters' backgrounds, interests, and the growing connection between them. Maintain a tone that is casual but increasingly personal, hinting at the potential for romance:***
*"Their conversation began easily, starting with general topics such as work and life in the city. Slowly, however, it grew more intimate as they revealed bits and pieces of themselves. Amanda learned that Hernando worked in the financial industry and lived just a few blocks away. They shared mutual interests and even discovered friends in common, though somehow they never crossed paths until now."*
***Provide the continuation of this conversation with an emphasis on deepening the bond between Amanda and Hernando. Include shared laughs, subtle flirting, and moments of vulnerability. The conversation should flow naturally from casual chatter to more meaningful dialogue that would lead to a significant emotional connection.***
**airoboros 70B gives me this as a first response. It wasn't what I was looking for, so I regenerated it 5 times, each with similar results to this, none of which had conversation. Some of them actually expanded on the conversation without having dialogue, as opposed to "continuing" the conversation.**
*As their conversation continued, Amanda found herself laughing effortlessly alongside Hernando. It was rare to find someone who could make her feel so comfortable within minutes of meeting him. She couldn't help but notice how his eyes lit up when he spoke about his love for salsa dancing or his passionate discussions on world politics. His intelligence and charm were captivating, and she found herself wanting to know everything about him.*
*Hernando, too, seemed genuinely interested in learning about Amanda's life. He listened attentively as she described her job as an art curator, her childhood memories, and her dreams for the future. He made thought-provoking comments and offered unique perspectives that showed he valued her thoughts and opinions.*
*As they exchanged stories about their past relationships, there was an underlying understanding that both had experienced heartache and disappointments. However, instead of dwelling on*
**Synthia 70B gives me this as the first response. For fairness I regenerated it 5 times as well; each time Synthia actually gave me dialogue BUT fluctuated between "continuing" the conversation and expanding the initial conversation. Maybe the best result for this prompt.**
*Amanda sipped her drink, feeling her cheeks flush slightly as she listened to Hernando talk about his job. "Wow, you sound like you really love what you do," she said, genuinely impressed by his passion.*
*Hernando smiled, his eyes crinkling at the corners. "I do. I mean, who doesn't want to help people make their dreams come true? Plus, I get to play around with numbers all day—it's my idea of heaven!"*
*They laughed together, the tension easing between them. Amanda couldn't believe how comfortable she felt talking to him, considering they had only met a short while ago. She found herself wanting to know everything about him, from his favorite foods to his childhood memories.*
*As if reading her mind, Hernando asked, "So tell me, what makes you happy?"*
*Amanda blushed again*
**Xwin 70B gives me this as a first response. Once again I regenerated it 5 times and received weird results, including some chat-style "Amanda: blah blah. Hernando: blah blah" responses. In all the regenerations the conversation "continued", but at least there was dialogue.**
*As their conversation continued, Amanda and Hernando found themselves sharing stories about their childhoods, laughing over inside jokes that only people who had grown up in the same neighborhood could understand. They reminisced about the local ice cream parlor, which still served the same flavors after all these years, and the little corner bookstore where they both used to spend hours browsing through the shelves.*
*Hernando smiled as he recalled, "I remember once, my friend dared me to ask the librarian for a book on how to survive a zombie apocalypse. I was so embarrassed, but she just laughed and found me a copy of 'World War Z.' I thought I was going to die of humiliation right there!"*
*Amanda giggled, her eyes sparkling with mirth. "Oh, that's nothing! My best friend and I used to s*
**Chronoboros 33B gave me this as the first generation. After 5 regenerations it jumped between continuing the conversation and actually expanding the conversation. Additionally, it also gave the same chat-style responses as Xwin, which was interesting. Something to note was that Chronoboros consistently included emotions in the conversation, something few, if any, of the regenerations showed.**
*Amanda smiled warmly at Hernando. "It seems like we have quite a bit in common. Financial industry, huh?" She laughed softly. "That sounds intense!"*
*Hernando chuckled too. "Well, sometimes it can be. But I enjoy what I do - helping people make smart decisions about their money." He leaned forward slightly, his expression turning serious yet inviting. "And how about you? What do you do for work?"*
*Amanda hesitated for a moment before answering. "I actually work in marketing," she confessed shyly. "My job involves creating content that helps brands connect with consumers."*
*Hernando nodded thoughtfully. "Marketing...that must be fascinating! It's all about understanding human behavior and using creativity to influence it, right?"*
*Amanda grinned broadly. "Exactly! It's challenging but*
**Tiefighter 13B gave me this as the first generation, and each of the 5 regenerations had dialogue, and they all expanded on the conversation as opposed to continuing it. Genuinely impressive.**
*As their drinks arrived, Hernando smiled warmly at her before taking a sip. His eyes lingered on hers for a moment longer than necessary, and she couldn't help but feel a slight flush creep up her neck. Clearing his throat, he continued, "So, tell me more about your job. Do you enjoy working for that tech startup?"*
*Amanda nodded, feeling herself relax into the comfortable banter. "Yes, absolutely. It's definitely challenging, but I love being part of something so innovative. And the people there are amazing; we have such a great team dynamic." She paused for a moment, considering him over the rim of her glass. "What about you? What do you like most about your work in finance?"*
*Hernando sipped his drink again, seemingly lost in thought for a moment. When he finally met her gaze, there was a sparkle in his*
**Llama 2 70B chat uncensored: this one was a doozy. This is the first generation; however, this was the only model to go completely off the rails. On regenerations it would question me, provide chat-style "Amanda: blah blah. Hernando: blah blah" responses, or not do any dialogue.**
*As Amanda and Hernando continued their conversation, they found that they had many things in common. From hobbies to favorite restaurants, it seemed like fate had brought them together. And as they talked, their voices grew softer and more intimate, as if they were sharing secrets only meant for each other.*
*Amanda couldn't remember the last time she felt so comfortable with someone new, and Hernando was equally taken aback by how easy it was to open up to her. Their banter became playful, with inside jokes and shared laughter punctuating every sentence.*
*But beneath the lightheartedness, there was something deeper brewing between them. When Amanda confided in him about a recent breakup, Hernando listened without judgment or interruption, offering words of comfort and understanding. It wasn't long before they realized that they both craved companionship and connection after years spent focusing solely on their*
**ChatGPT-4 website: I cut it short as it's an unfair comparison, but I wanted a reference point here. Each regeneration was consistently similar in style, with descriptions and dialogue.**
*Amanda swirled the remains of her coffee, the late afternoon sun casting a warm glow over the café that seemed to complement the budding intrigue in her eyes. "So, Hernando, the financial industry, huh? That must be a world away from this little coffee shop."*
*Hernando leaned back in his chair, his smile reaching his eyes. "It's a different universe, but galaxies less interesting than this conversation," he replied, with a subtle nudge of wit. "What about you? What spirited winds pushed you into the world of graphic design?"*
*Her laughter was a melodic counterpoint to the soft jazz playing in the background. "I suppose I wanted to add color to the world in a way numbers couldn't," Amanda said, her hands gesturing as if painting in the air. "And nothing against numbers," she added quickly, a playful twinkle in her eye.*
*"Oh, no offense taken. I admire that—seeing life through an artist's lens. Must be refreshing, considering I'm often stuck in a monochrome world of spreadsheets."*
*"I imagine it's not all black and white, though. There must be some shades of... green in your line of work?" Her playful tone was unmistak*
Since I know I am not the only one here attempting to use LLMs to help with creative writing: do you guys have any suggestions on models, prompting, or processes to get better, more consistent results?

Namely, my current biggest struggle is trying to get the AI to follow an outline, even if it's broken up chapter by chapter. I have essentially given up on that and simply corral the AI in the direction I want it to go lol.
## llama-cpp-python CUDA error

Hi, I am using the latest LangChain to load llama.cpp. I installed llama-cpp-python with:

```
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
```
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jul_11_02:20:44_PDT_2023
Cuda compilation tools, release 12.2, V12.2.128
Build cuda_12.2.r12.2/compiler.33053471_0
```
I managed to load the model with cuBLAS (BLAS = 1):
```
llm_load_tensors: using CUDA for GPU acceleration
ggml_cuda_set_main_device: using device 0 (NVIDIA H100 PCIe) as main device
llm_load_tensors: mem required = 5114.10 MB
llm_load_tensors: offloading 1 repeating layers to GPU
llm_load_tensors: offloaded 1/35 layers to GPU
llm_load_tensors: VRAM used: 158.35 MB
....................................................................................................
llama_new_context_with_model: n_ctx = 3000
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 1500.00 MB
llama_build_graph: non-view tensors processed: 740/740
llama_new_context_with_model: compute buffer total size = 223.99 MB
llama_new_context_with_model: VRAM scratch buffer: 217.36 MB
llama_new_context_with_model: total VRAM used: 375.71 MB (model: 158.35 MB, context: 217.36 MB)
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
```
But when the model is processing a prompt, I get a CUDA error:

```
CUDA error 222 at /tmp/pip-install-uq8lpx95/llama-cpp-python_c2bd3bc9a27b49f3805443a95df1ea3d/vendor/llama.cpp/ggml-cuda.cu:7043: the provided PTX was compiled with an unsupported toolchain.
current device: 0
```
Any advice to solve this?
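For anyone hitting the same thing: CUDA error 222 ("the provided PTX was compiled with an unsupported toolchain") generally means the compiled kernels don't match the local CUDA toolkit/driver. Note also that the install command above sets the Metal flag (macOS); on a CUDA box the usual build flag was cuBLAS, so a from-source rebuild along these lines often fixes it (flag name as of llama-cpp-python in late 2023):

```bash
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
```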
## OpenAI Dev Day Discussion

Haven't seen any threads discussing the OpenAI Dev Day, so here is a place where we can discuss the new functionality being announced!
## Are local LLM models (Zephyr, Mistral, etc.) good enough to generate embeddings? (using Ollama)

Hi everyone.
I'm a little confused about Ollama embeddings.
It seems like we could just use our LLM models like Mistral / Zephyr / Llama to do embeddings.
But when I search on YouTube, people are using much smaller models that specialize in embeddings only.
Do I have to use those small models that specialize in embeddings only, or are the bigger 7B general models good enough already?

I'm using Ollama. So if I want to use those smaller embedding models, would I have to create my own Ollama model files to use them?
Thank you very much!
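For reference, Ollama exposes a dedicated embeddings endpoint that works with whatever model you have pulled, so no custom Modelfile is needed just to get vectors out. A sketch (endpoint shape as of recent Ollama versions; "mistral" assumes you've pulled that model):

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "mistral", "prompt": "Here is a paragraph to embed."},
)
print(len(resp.json()["embedding"]))  # dimensionality of the returned vector
```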
## The OG LLM in 1983 on the Johnny Carson show

See how far we've come in 40 years?
Johnny Carson, 1983
[https://www.youtube.com/watch?v=p6uFHC9lfzk](https://www.youtube.com/watch?v=p6uFHC9lfzk)
(for entertainment purposes only)
## RunPod vs SageMaker speed

I am trying to move Mistral 7B from SageMaker to RunPod. On SageMaker I run an instance with 1x A10, and I tried an RTX 4090 on RunPod. The issue I have is that the same model takes 1 minute to complete the task on SageMaker but 5 minutes on RunPod; I even tried an H100 on RunPod, but the result was the same. Any idea what is different?
## How do you generate a single response with chat models?

All the models I use seem to generate the whole conversation instead of giving a single response in role-play chat settings. Is there a good prompt or a model to prevent this? I know I can cut off the rest of the conversation, but the generation wastes resources.
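A common fix besides prompting is to pass stop sequences, so generation halts at the next speaker's turn instead of being trimmed afterwards. A sketch with llama-cpp-python (the path, template, and speaker names are placeholders for whatever your frontend uses):

```python
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=4096)
prompt = "You are Alice, a friendly barista.\n\nUser: Hi!\nAlice:"
out = llm(prompt, max_tokens=256, stop=["\nUser:", "\nAlice:"])  # halt before the next turn
print(out["choices"][0]["text"])
```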
## OpenAI live stream keynote soon

The wildest rumors were that they would open-source GPT-3.

The live stream starts soon at: [https://www.youtube.com/watch?v=U9mJuUkhUzk&ab_channel=OpenAI](https://www.youtube.com/watch?v=U9mJuUkhUzk&ab_channel=OpenAI)

Sorry if this is not worth a post; I just wanted people to know that OpenAI is streaming the keynote.
## Llama-2 attempts getting sued by Colab

I'm working on entity extraction from a medium-size text corpus, using Colab Pro. Since I'm working with Llama-2: after downloading, executing on the first text paragraph took about half the RAM (around 7 GB). I don't know why, but the next time, when executing with the next text corpus, it continued from 7 again and the total became 14. It should take 7, but every time the RAM consumption keeps increasing from the prior execution, and after 2 to 3 successful executions the RAM limit hits and I have to restart the kernel.

What can be done? I've been stuck here for a couple of days. I appreciate your help in advance, and thanks.
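What's described sounds like references from the previous run never being freed, so each load stacks on top of the last (an assumption, but it matches the 7 -> 14 GB pattern). If so, explicitly dropping the old objects between corpora usually helps:

```python
import gc
import torch

# after each run, before loading/processing the next corpus
del model, outputs          # variable names are illustrative
gc.collect()                # free Python-side objects
torch.cuda.empty_cache()    # return cached GPU memory to the driver
```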
## Multimodal LLMs: a local model for sounds and sentiment?

Hi, I am doing a personal project where I am running Whisper locally for transcription and a basic multimodal LLaVA model on a single-board computer: [https://vimeo.com/879658471/616f6d57fa](https://vimeo.com/879658471/616f6d57fa)
I am now doing a project in which I am required to identify various emotions/activities from everyday sounds,
a) such as faucet running, baby crying, coughing, snoring, radio, TV, mixer, dog barking etc.
b) things like vocal sentiments.
Can anyone suggest a model or pipeline for ambient sounds/speech? Or collaborate :)
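For the ambient-sound half (a), one commonly tried option is an Audio Spectrogram Transformer fine-tuned on AudioSet, whose label set covers things like dog barking, running water, and coughing. A sketch with the transformers pipeline (the model id is one public AST checkpoint, offered as a suggestion rather than a benchmark, and the clip path is a placeholder):

```python
from transformers import pipeline

clf = pipeline("audio-classification",
               model="MIT/ast-finetuned-audioset-10-10-0.4593")

for pred in clf("kitchen_clip.wav", top_k=5):
    print(f"{pred['label']}: {pred['score']:.2f}")
```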
## Evaluating Hugging Face's Open Source Multimodal LLM

[https://blog.nomic.ai/posts/improving-multimodal-models-with-atlas](https://blog.nomic.ai/posts/improving-multimodal-models-with-atlas)
## Beginner's guide to finetuning Llama 2 and Mistral using QLoRA

Hey everyone,
I’ve seen a lot of interest in the community about getting started with finetuning.
Here's my new guide: **Finetuning Llama 2 & Mistral - A beginner’s guide to finetuning SOTA LLMs with QLoRA**. I focus on dataset creation, applying ChatML, and basic training hyperparameters. The code is kept simple for educational purposes, using basic PyTorch and Hugging Face packages without any additional training tools.
Notebook: [https://github.com/geronimi73/qlora-minimal/blob/main/qlora-minimal.ipynb](https://github.com/geronimi73/qlora-minimal/blob/main/qlora-minimal.ipynb)
Full guide: [https://medium.com/@geronimo7/finetuning-llama2-mistral-945f9c200611](https://medium.com/@geronimo7/finetuning-llama2-mistral-945f9c200611)
I'm here for any questions you have, and I'd love to hear your suggestions or any thoughts on this.
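For a taste of what the guide covers, the core QLoRA setup boils down to a 4-bit base model plus LoRA adapters. A compressed sketch (the hyperparameters here are illustrative; see the notebook for the values actually used):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, bias="none",
                  task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a fraction of a percent of the full model
```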
## Are LLMs surprisingly bad at simple math?

I've only tried a bunch of famous 13B models (Mythos, Tiefighter, Xwin...); they are quite good at random internet quizzes, but when I ask something like 13651+75615, they all give wrong answers, even after multiple rerolls.
Is that normal, or is something wrong with my settings? I'm using Ooba and SillyTavern.
## Best performance for 40-60K€ hardware: lost in the possibilities

Hi everyone,

I've created an app that helps people learn. Things had been going OK until the AI storm, when using GPT-3 and then GPT-3.5 really changed the game for us.

One of my clients is a government. For obvious reasons, they don't enjoy the fact that we're a SaaS that uses OpenAI. So they've asked me if we would be able to have our app *and the AI* running on premises on their servers.
Cue me searching for clear information for hours and not being able to determine what my proposal should be.
Because it's for educational technology, with a huge range of possible source materials, quality and flexibility of the model outputs are paramount.
From what I gather, if I'm looking for the best, most flexible open-source LLM, Llama 2 70B with minimal quantization would be the one?
Searching this subreddit gets me lots of information, but I'm drowning in it, because most of the conversation is around optimal cost/size trade-offs, whereas my goal is maximum quality within my existing budget.
So my question is: what hardware, with which OS LLM, would you recommend for a 60k$ budget, for best in class quality pure inference? What token/s output should I expect?
Thanks! 🙏 | 2023-11-06T16:15:56 | https://www.reddit.com/r/LocalLLaMA/comments/17p6buc/best_performance_for_4060k_hardware_lost_in_the/ | superap101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p6buc | false | null | t3_17p6buc | /r/LocalLLaMA/comments/17p6buc/best_performance_for_4060k_hardware_lost_in_the/ | false | false | self | 1 | null |
Mistral Vision/Audio LoRAs & a Lossy 260K+ Token Context Window Prototype | 99 | Hi all, this is a follow up to my experimental library [multi-token](https://www.reddit.com/r/LocalLLaMA/comments/17ccp0l/multitoken_embed_arbitrary_modalities_images/) which makes it easier "attach" other modalities (images, audio, etc) to existing open source language models.
With \~$300 of compute I was able to train a few prototypes since then, which y'all may find useful. These are my first attempts (no hyperparameter tuning, etc.) on data that I could easily access from Hugging Face, so they might not be mind-blowing, but I think they are pretty neat proofs of concept that should be runnable with 24GB of VRAM.
# [sshh12/Mistral-7B-LoRA-VisionCLIP-LLAVA](https://huggingface.co/sshh12/Mistral-7B-LoRA-VisionCLIP-LLAVA)
This is a version of LLaVA 1.5 (a vision QA model) trained on the same dataset but with my library. In theory this is super similar to BakLLaVA BUT with the MLP projector which was added in LLaVA 1.5.
> **User**: What are things I should be cautious about when I visit this place? <image of a dock>
>
> **System**: When visiting this place, which is a lake with a wooden dock, there are a few things to be cautious about. First, be aware of the water depth and the presence of any hidden obstacles, such as rocks or underwater debris, that could pose a risk to your safety...
# [sshh12/Mistral-7B-LoRA-ImageBind-LLAVA](https://huggingface.co/sshh12/Mistral-7B-LoRA-ImageBind-LLAVA)
This is a version of Mistral that is bound to [ImageBind](https://imagebind.metademolab.com/) (an embedding model that supports text/audio/vision/depth). I trained it on an augmented LLaVA dataset by basically just replacing "image" with "sound" randomly in the training prompts while using the original image as the input (I'm surprised this even works, but since ImageBind has a shared embedding space for these modalities, it does).
> **User**: <imagebind of bark.wav> What is the animal in this sound?
>
> **System**: The animal in this sound is a dog.
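The augmentation itself is trivial - roughly something like this (a sketch; the field name is made up):

```python
import random

def augment(example):
    # Randomly swap "image" for "sound" in the text prompt while keeping the
    # original image as the ImageBind input.
    if random.random() < 0.5:
        example["prompt"] = example["prompt"].replace("image", "sound")
    return example

print(augment({"prompt": "What is happening in this image?"}))
```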
# [sshh12/Mistral-7B-LoRA-DocumentGTE-260K-x128](https://huggingface.co/sshh12/Mistral-7B-LoRA-DocumentGTE-260K-x128)
I've also been trying to build cheaper, lossier, and longer forms of context-length extension. For this, I chunk large documents, run each chunk through an embedding model, then project the embeddings back into the LLM's token space. So a 5120-token document turns into 10 chunks (using GTE, which has an input size of 512), which can be encoded into only 40 tokens (4 tokens per chunk). As expected, this is **extremely lossy**, but it does *kind of* work. Better embeddings, and trading off some of the compression ratio (from 512:4 to something like 512:64 or 32:4), could potentially yield fairly useful results.
I think this could be super powerful, as these compressed tokens are so much cheaper from a VRAM perspective and could easily be cached/pre-computed.
> **User**: Read the paper <document full text of LLaVA paper>. Give me a summary.
> **System**: Here is a summary of the key points from the paper:
>
> - The paper proposes a new dataset called LAML, which contains 100,000 image-text pairs with 100 different languages. The dataset aims to provide a large-scale resource for training multilingual vision-language models.
>
> - The authors find that existing multilingual vision-language models struggle to generate high-quality captions for images in languages they have not seen before. This is because the models lack the ability to generate language-specific knowledge...
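A rough sketch of the compression path (illustrative only - the model names, the 4-tokens-per-chunk ratio and the 4096 hidden size follow the description above, and chunking by characters stands in for proper tokenization):

```python
from sentence_transformers import SentenceTransformer
import torch

encoder = SentenceTransformer("thenlper/gte-base")  # 768-dim embeddings
project = torch.nn.Linear(768, 4 * 4096)            # 4 "soft tokens" per chunk, hidden size 4096

def compress(document, chunk_size=512):
    # Chunk the document, embed each chunk, and project each embedding
    # into a handful of soft tokens in the LLM's embedding space.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    embs = torch.tensor(encoder.encode(chunks))       # (n_chunks, 768)
    return project(embs).view(len(chunks), 4, 4096)   # (n_chunks, 4, 4096)
```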
If you have an idea for a cool modality (and a dataset for it) to add, lmk and I might be able to train it! If you are interesting more into how this actually works, I wrote a [post here](https://blog.sshh.io/p/large-multimodal-models-lmms). | 2023-11-06T15:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/17p5vtk/mistral_visionaudio_loras_a_lossy_260k_token/ | sshh12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p5vtk | false | null | t3_17p5vtk | /r/LocalLLaMA/comments/17p5vtk/mistral_visionaudio_loras_a_lossy_260k_token/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': 'TzhpcDwSbksVG_i9CwDmOtz52oFHiZKQ7OK-1QMYoFc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?width=108&crop=smart&auto=webp&s=cc3758528608496f9e0b55b234259bd48e76813f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?width=216&crop=smart&auto=webp&s=435e161ad1b46741e3abe909b92b92cf85b8531d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?width=320&crop=smart&auto=webp&s=f0336df05b765cae51429074a33e58f8ea2f926c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?width=640&crop=smart&auto=webp&s=36f9f140681fbaf55131fb92930916f3c57f3d82', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?width=960&crop=smart&auto=webp&s=c7e945273c19a2362173251e4f5081ce94fbf6c4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?width=1080&crop=smart&auto=webp&s=965cea93f7527c03c63bacc840f7683e367e6f30', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jwySnuIaVEesGlNQUOWPCKuH8id8dNW4D-P3DPOP7MQ.jpg?auto=webp&s=e63619d3fadbe279477dc8118d9d839bb20ca8ca', 'width': 1200}, 'variants': {}}]} |
New model released by alpin, Goliath-120B! | 76 | 2023-11-06T15:43:47 | https://huggingface.co/alpindale/goliath-120b | panchovix | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17p5m2t | false | null | t3_17p5m2t | /r/LocalLLaMA/comments/17p5m2t/new_model_released_by_alpin_goliath120b/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'U3TtVESnEQYuuFjDNAuglJfjpsZK1znyYLwSc-Q5mx4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?width=108&crop=smart&auto=webp&s=f9be6d10280d40205023660048d0180e7acd5acf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?width=216&crop=smart&auto=webp&s=ac62bacc63d6b9f95f563a76f199c00fad1dc52e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?width=320&crop=smart&auto=webp&s=6d58739b791902cd327e747a059d9f4e439a6aa2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?width=640&crop=smart&auto=webp&s=19c35d9e7f013528426d05afed8568a7f7ea100f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?width=960&crop=smart&auto=webp&s=6c4e56f9a4e30681c1097db900bfab8a2bf057de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?width=1080&crop=smart&auto=webp&s=18d3ad6d228ccd4b305641d2f29dc5a76d3b6d78', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oi0_utef91tW_dGcivJWiE_qSwmUzci1683J0-RBz88.jpg?auto=webp&s=87568dd39b11de3a58042ade48c52724ed7b42bc', 'width': 1200}, 'variants': {}}]} | ||
Building/enhance a local LLM by crawling this sub | 1 | [removed] | 2023-11-06T15:17:05 | https://www.reddit.com/r/LocalLLaMA/comments/17p50ur/buildingenhance_a_local_llm_by_crawling_this_sub/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p50ur | false | null | t3_17p50ur | /r/LocalLLaMA/comments/17p50ur/buildingenhance_a_local_llm_by_crawling_this_sub/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fEx3_FULeHE9NzJyjAmqaZvIyOX1WDPqCq5jQDW3_Pc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/CILZlyMI7N0HwEkWY8WZzULsx_GFvYFBWt9Mcod7TyA.jpg?width=108&crop=smart&auto=webp&s=e32f40e44fd46d4a9da2f38755c566f3c538d9c3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/CILZlyMI7N0HwEkWY8WZzULsx_GFvYFBWt9Mcod7TyA.jpg?width=216&crop=smart&auto=webp&s=8c99332de725d50c1a63f6a7720a80ae98f2f89e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/CILZlyMI7N0HwEkWY8WZzULsx_GFvYFBWt9Mcod7TyA.jpg?width=320&crop=smart&auto=webp&s=27fe98535efa29a81b555c4d48d05827d9a82f40', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/CILZlyMI7N0HwEkWY8WZzULsx_GFvYFBWt9Mcod7TyA.jpg?auto=webp&s=07bf1d7b0e78b48f157af7028776a52c0ed7de8d', 'width': 480}, 'variants': {}}]} |
Best text to speech diffusion model for reading books | 1 | [removed] | 2023-11-06T15:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/17p4o10/best_text_to_speech_diffusion_model_for_reading/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p4o10 | false | null | t3_17p4o10 | /r/LocalLLaMA/comments/17p4o10/best_text_to_speech_diffusion_model_for_reading/ | false | false | self | 1 | null |
Suggest a topic for a master's degree thesis related to LLMs? | 1 | I am a computer science student, currently in my 2nd year of masters. I currently have a college project where my team members and I have to research about various LLMs and ways to train them so they can answer questions about custom documents - how to embed knowledge into LLMs that it doesn't currently know.
This project has got me interested in LLMs because the results are really promising, at least judging by what we've accomplished so far. I want to propose a topic for my master's thesis to my mentor, but the issue is I can't think of any that isn't too similar to my team's current project (document Q&A). Also, machine learning and AI aren't my specialties, because I haven't taken subjects related to those areas, but rather subjects related to databases, backend development, parallel programming, distributed systems etc. I am your average soon-to-be full-stack developer, and I want my master's thesis topic to be related to LLMs and how I can perhaps use them in the context of web applications.
To better illustrate what I want, my current planned topic for master's thesis is related to webgazer.js library. That is a library which can use your laptop's webcam to detect where your gaze is on the screen just by your pupil. I am not supposed to go into the internals of that library or somehow fine-tune it, but rather use it to build a web application which would use the library to analyze where users look at when they are presented with images on a laptop screen and build a heat map, and other stuff perhaps. What I want with LLMs is same. I've already had some touch with RAG, vector databases on my current college project and would like to do my master's thesis related to LLMs but I am not feeling very imaginative right now :/ | 2023-11-06T14:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/17p4mtm/suggest_a_topic_for_a_masters_degree_thesis/ | PowerfulCap3557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p4mtm | false | null | t3_17p4mtm | /r/LocalLLaMA/comments/17p4mtm/suggest_a_topic_for_a_masters_degree_thesis/ | false | false | self | 1 | null |
how would you build a chat llm for this subs content? | 1 | [removed] | 2023-11-06T14:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/17p4jfq/how_would_you_build_a_chat_llm_for_this_subs/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p4jfq | false | null | t3_17p4jfq | /r/LocalLLaMA/comments/17p4jfq/how_would_you_build_a_chat_llm_for_this_subs/ | false | false | self | 1 | null |
Best model up to 33b for writing erotic novels? | 11 |
Best model up to 33b for writing erotic novels? | 2023-11-06T14:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/17p49t6/best_model_up_to_33b_for_writing_erotic_novels/ | Medium_Big_all_Good | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p49t6 | false | null | t3_17p49t6 | /r/LocalLLaMA/comments/17p49t6/best_model_up_to_33b_for_writing_erotic_novels/ | false | false | self | 11 | null |
Data Format for finetuning Llama2 to extract json | 3 | My question is how to use the system prompts which I use during generation in finetuning?
I am trying to finetune Llama2 7b on a custom dataset. But I am still confused about the data format for my use case. I want to extract information to json with given keys, which I am providing at the beginning. During generation, I provide this as system prompt. But I am not sure how to provide this during finetuning.
The JumpStart tutorial and the [Fine-tune LLaMA 2 (7-70B) on Amazon SageMaker](https://www.philschmid.de/sagemaker-llama2-qlora) tutorial give a data format of the form:
{"instruction": "","context": "", "response": ""}
The response in my case would be the extracted json.
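Concretely, a record currently looks roughly like this for me (values are made up):

```python
import json

# Made-up example of what one record currently looks like for me:
sample = {
    "instruction": "Extract the information from the text and return valid JSON.",
    "context": 'Keys: {"name": "", "date": "", "amount": ""}\n'
               "Text: On 2023-05-12, Alice paid 300 EUR for the workshop.",
    "response": json.dumps({"name": "Alice", "date": "2023-05-12", "amount": "300 EUR"}),
}
print(sample["response"])
```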
But where do i give it the sentence to extract the information from, and where to give the json format it should follow and tell it to extract text from this. I tried providing the desired json format in the context, but this is same for every example. Is this a correct way of doing this? | 2023-11-06T14:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/17p46hx/data_format_for_finetuning_llama2_to_extract_json/ | icelebratefestivus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p46hx | false | null | t3_17p46hx | /r/LocalLLaMA/comments/17p46hx/data_format_for_finetuning_llama2_to_extract_json/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'a1hmOnL77g0KEwQYz1ospL2mdtWnUUs4gvIEoj1uaZQ', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?width=108&crop=smart&auto=webp&s=c0427b01010fa2f8cf1698d7956cb7527aba26c4', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?width=216&crop=smart&auto=webp&s=1ec26f0cb046daa50f1e9f58b8557d39c9411ecd', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?width=320&crop=smart&auto=webp&s=011f966e3d649f6a69207849c07df7ca14169f6d', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?width=640&crop=smart&auto=webp&s=7d9629f828fc13109c17a912ec3a82b14f1ca0a3', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?width=960&crop=smart&auto=webp&s=d80199be7eaad63c4612583286010e6f27b644ff', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?width=1080&crop=smart&auto=webp&s=ecb4786efecae2fbb9310a49faed9c0113339e2f', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/4MF3cr0DQxBbinGtwASoNlt0Gg05zjMQOT3gc83QBMg.jpg?auto=webp&s=a95233c149cfbffa147b7ae7836893e70d9ba666', 'width': 2400}, 'variants': {}}]} |
Is it possible to train BabyLM on consumer-grade hardware? | 10 | BabyLM was a competition with 700m training data to train an LM model that roughly equates the language skills of a 13 year old. One of the goals is to learn how to train LLMs yourself. However, I could not find infos on what sort of hardware is needed for this challenge. Obviously, this is not GPT-3, but probably still some heavy lifting is required. Does anyone have any experience?
There are also some first papers out on this challenge, for example this one: [https://arxiv.org/abs/2311.01955](https://arxiv.org/abs/2311.01955), or this one [https://arxiv.org/pdf/2310.17591.pdf](https://arxiv.org/pdf/2310.17591.pdf). The latter paper mentions usage of 4 Nvidia A40 GPUs from Huggingface.
I checked the A40 GPU specs: [https://images.nvidia.com/content/Solutions/data-center/a40/nvidia-a40-datasheet.pdf](https://images.nvidia.com/content/Solutions/data-center/a40/nvidia-a40-datasheet.pdf), but the use of 4 of them in parallel suggests that even if I could get my hands on something as powerful as a single M2 Ultra, that'd still not be sufficient for the BabyLM challenge.
And experiences or opinions here? | 2023-11-06T13:54:12 | https://www.reddit.com/r/LocalLLaMA/comments/17p384y/is_it_possible_to_train_babylm_on_consumergrade/ | fabkosta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p384y | false | null | t3_17p384y | /r/LocalLLaMA/comments/17p384y/is_it_possible_to_train_babylm_on_consumergrade/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
RAG: Flexible Context Retrieval around a matching chunk | 54 | Here's something I was thinking about and found existing solutions inadequate -- the so-called big-vs-small chunks dilemma: You want:
* small chunks for accurate embeddings
* large chunks to capture sufficient context to answer queries.
The solution is clearly to *decouple* chunking and retrieval. Ideally, we want to be able to chunk at a granular level, and retrieve an (almost arbitrary) context-window around the matching chunk. I call this **Flexible Context Retrieval (FCR).**
So I looked at LangChain's `ParentDocumentRetriever` \- it creates larger *parent* chunks, splits those into smaller *child* chunks, and embeds/indexes only the child chunks. At query time, when a child chunk matches, it looks up that chunk's parent and returns the parent instead. While this sounds like it may solve the problem, there are two issues with it:
1️⃣ Because the parent-chunks are **fixed**, you will have **boundary** effects, like this failure case (see pic):
The **query** matches a child-chunk near the **end** of a parent chunk; the **answer** is in the **next** parent chunk, and does **not** match the query ➡️ The next parent chunk is not retrieved, and the LLM fails to answer the query.
This **blind spot** is due to the **fixed** chunking.
2️⃣ You have to **carefully** pick the parent chunk size: Realized it's too small? ➡️ need to **re-chunk** and **re-index;**
If you make it conservatively too big, that defeats the purpose of chunking, and you'll run into high latency and token costs, and LLM context-limits.
Then I looked at Llama-Index's `SentenceWindowNodeParser` and it's an improvement -- at **parsing/chunking** time, it stores a **fixed** window of text around each small chunk (sentence, actually). So at retrieval time you can retrieve this (fixed) text window around any matching chunk. This solves Problem 1 above but *not* Problem 2.
Thinking about this from scratch, I realized one good way to do it is this: only create small, granular chunks (say at sentence level), and in each chunk's metadata, store a sufficiently large (say 20) sequence of **chunk-ids** (**not content!**) before and after the chunk. At query time, we can then flexibly look up *any (up to 20)* desired number of chunks around the matching chunk (see pic). This gives you Flexible Context Retrieval (FCR).
I implemented FCR in [Langroid](https://github.com/langroid/langroid/blob/main/langroid/agent/special/doc_chat_agent.py) (see the `add_context_window` method). One issue is dealing with overlaps among retrieved windows. This turned out to be tricky since chunk-ids are based on hash-uuids (and for various reasons these are better than just using sequence numbers). I ended up using **connected-component** detection to group overlapping windows, and then **topological sorting** to sort the window-group based on the partial-order imposed by the pairwise relations.
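Here's a stripped-down sketch of the core idea (illustrative only; the actual implementation linked above additionally handles overlap detection and ordering):

```python
import uuid

def make_chunks(sentences, k=20):
    # One granular chunk per sentence; metadata stores up to k chunk IDs
    # (not content!) on each side, so window size stays flexible at query time.
    ids = [str(uuid.uuid4()) for _ in sentences]
    return {
        ids[i]: {
            "id": ids[i],
            "text": s,
            "prev_ids": ids[max(0, i - k):i],
            "next_ids": ids[i + 1:i + 1 + k],
        }
        for i, s in enumerate(sentences)
    }

def context_window(match, by_id, n=5):
    # Expand a matching chunk by any n <= k neighbors on each side.
    window = match["prev_ids"][-n:] + [match["id"]] + match["next_ids"][:n]
    return " ".join(by_id[i]["text"] for i in window)

by_id = make_chunks(["Sentence one.", "Sentence two.", "Sentence three."], k=2)
first = next(iter(by_id.values()))
print(context_window(first, by_id, n=1))
```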
Here's a [colab](https://colab.research.google.com/drive/1JvH6CO9AS7CaWK0GTblZGesJoo9Jjyn7) where I compare the LangChain ParentDocumentRetriever and Langroid's methods on two questions about an employment contract. With LangChain the LLM fails on both due to the above boundary effect, but with Langroid, it works fine.
I was wondering if anyone else had a look at the FCR problem. At the very least I hope the Langroid implementation is useful.
Langroid is a Python framework to easily build LLM applications (including RAG), using a multi-agent paradigm.
Thanks for reading.
​ | 2023-11-06T13:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/17p31mc/rag_flexible_context_retrieval_around_a_matching/ | SatoshiNotMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p31mc | false | null | t3_17p31mc | /r/LocalLLaMA/comments/17p31mc/rag_flexible_context_retrieval_around_a_matching/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'mCf-gvgLGIf97C4Q5Pay7BZQzUkl4WPZs5p9Ug1Hclk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=108&crop=smart&auto=webp&s=6f77c5c3077d3728cfb79ff9339119fb5fd7b59d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=216&crop=smart&auto=webp&s=b10a9dd7d4bc4fc2037d131a048e5e8f1febaea5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=320&crop=smart&auto=webp&s=0a9259b44724375c066bb82fd6350881d78bdbda', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=640&crop=smart&auto=webp&s=866e9598e364a0ffa0588fe4383ac87de7b85631', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=960&crop=smart&auto=webp&s=949005b7b09bccdd9f9194e40c1764d505882236', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=1080&crop=smart&auto=webp&s=ed2893c270ed2220f47d5b015a4b823d0ec38048', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?auto=webp&s=599ef552ca32f4240d29c0510c712c3dfe3b08c5', 'width': 1200}, 'variants': {}}]} |
01-ai/Yi-34B-200K Release | 63 | [https://huggingface.co/01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) | 2023-11-06T13:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/17p2rjs/01aiyi34b200k_release/ | Serious-Commercial10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p2rjs | false | null | t3_17p2rjs | /r/LocalLLaMA/comments/17p2rjs/01aiyi34b200k_release/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'kmY3rlrmVU3WlcgsR_oWu3jytWsfBFUqjvJ9UfST5d4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?width=108&crop=smart&auto=webp&s=da85b4046f47f4a083dc6bf491fbba6a1ed444d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?width=216&crop=smart&auto=webp&s=9477ea58fe3afa186463fdd2a7d907266234a52f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?width=320&crop=smart&auto=webp&s=42c7d3f0cdb5f34940eda4cf441c2e6ee1589a8f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?width=640&crop=smart&auto=webp&s=08b26f69588d0310875024c1ec567824a03d55f0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?width=960&crop=smart&auto=webp&s=ce75a03516fe3077a687a1d0fb35b9521849fc6e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?width=1080&crop=smart&auto=webp&s=4345c9b4cea4682a43ef96c23b00836cae892d3b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QWmRvBy_He1qc9JRmB81KMPRrYIPgH0maC9M87MUYLA.jpg?auto=webp&s=c9702c74e9aa14f1e18045e555fa8e84413534fb', 'width': 1200}, 'variants': {}}]} |
Models that are good at math and chemistry? | 2 | Are there any models that are good at maths and chemistry or which I can fine-tune to get better results? | 2023-11-06T12:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/17p1gl0/models_that_are_good_at_math_and_chemistry/ | _Sneaky_Bastard_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p1gl0 | false | null | t3_17p1gl0 | /r/LocalLLaMA/comments/17p1gl0/models_that_are_good_at_math_and_chemistry/ | false | false | self | 2 | null |
RAG Embeddings | 8 | Hi,
I am experimenting with RAG and now wondering which embeddings I should use. Many blog posts use the "sentence-transformers/all-MiniLM-L6-v2" embeddings for the query and vector database, and Llama 2 for generating the output. So I am wondering if it is better to use the same embeddings in both places instead of two different ones?
Also, as I am dealing with German texts, I am not sure if the sentence-transformers embedding makes sense, as it is trained on English. Can I then use, for example, "intfloat/multilingual-e5-large" or "T-Systems-onsite/german-roberta-sentence-transformer-v2"?
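For context, this is roughly how I would plug in the multilingual model (a sketch; multilingual-e5 documents the "query: "/"passage: " prefixes on its model card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("intfloat/multilingual-e5-large")

docs = ["Die Kündigungsfrist beträgt drei Monate.", "Der Vertrag endet am 31. Dezember."]
doc_emb = model.encode([f"passage: {d}" for d in docs], normalize_embeddings=True)
q_emb = model.encode("query: Wann endet der Vertrag?", normalize_embeddings=True)

print(util.cos_sim(q_emb, doc_emb))  # cosine similarity of query vs. passages
```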
Thanks in advance! | 2023-11-06T12:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/17p18m9/rag_embeddings/ | Ill_Bodybuilder3499 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p18m9 | false | null | t3_17p18m9 | /r/LocalLLaMA/comments/17p18m9/rag_embeddings/ | false | false | self | 8 | null |
Code Completion | 1 | [removed] | 2023-11-06T11:41:28 | https://www.reddit.com/r/LocalLLaMA/comments/17p0u7b/code_completion/ | kranejwilson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p0u7b | false | null | t3_17p0u7b | /r/LocalLLaMA/comments/17p0u7b/code_completion/ | false | false | self | 1 | null |
🐺🐦⬛ LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9) | 259 | I interrupted my 70B tests to check out the brand-new updates for some of the top 7B models.
Mistral 7B just keeps getting better, and it's gotten more important for me now, because of a new use case which made me add an entirely new test series (see below).
## Models tested
- **[Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)**
- **[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)**
- **[openchat_3.5](https://huggingface.co/openchat/openchat_3.5)**
## Testing methodology
- **1st test series:** 4 German data protection trainings
- I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well.
- The test data and questions as well as all instructions are in German while the character card is in English. This tests translation capabilities and cross-language understanding.
- Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This tests instruction understanding and following capabilities.
- After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions.
- If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct.
- I sort models according to how many correct answers they give, and in case of a tie, I have them go through all four tests again and answer blind, without providing the curriculum information beforehand. Best models at the top (👍), symbols (✅➕➖❌) denote particularly good or bad aspects, and I'm more lenient the smaller the model.
- All tests are separate units, context is cleared in between, there's no memory/state kept between sessions.
- **2nd test series:** Same (complicated and limit-testing) long-form conversations with all models
- Amy:
- My own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- Over dozens of messages, going to full 4K/8K context and beyond, with complex instructions and scenes, designed to test ethical and intellectual limits
- (Amy is too personal for me to share, but if you want to try a similar character card, here's her less personalized "sister": [Laila](https://www.chub.ai/characters/WolframRavenwolf/laila-69790b82))
- MGHC:
- A complex character and scenario card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)), chosen specifically for these reasons:
- NSFW (to test censorship of the models)
- popular (on Chub's first page, so it's not an obscure scenario, but one of the most popular ones)
- big (biggest model on the page, >2K tokens by itself, for testing model behavior at full context)
- complex (more than a simple 1:1 chat, it includes instructions, formatting, storytelling, and multiple characters)
- 🆕 **3rd test series:** [Voxta](https://www.patreon.com/Voxta) + [VaM (Virt-a-Mate)](https://www.patreon.com/meshedvr)
- Voxta is a new frontend that lets you chat with your AI using voice (speech-to-text + text-to-speech), while VaM (Virt-a-Mate) is a 3D application that lets you see and interact with an avatar - combining both with a VR headset, I can now meet up with my AI in mixed reality, seeing and talking to Amy in my own living room. The future is now!
- For this test, I'll don my Meta Quest 3, put the AI's avatar in front of me, and talk to her naturally while having her perform various actions - to succeed in this test, the AI needs to be good at both freeform conversation as well as function calling (because that's how the AI controls their avatar's actions). This tests creativity and precision at the same time. [Here's an example](https://www.youtube.com/watch?v=pGfswc9lf3Q) by one of Voxta's devs that showcases what that looks like (without mixed reality, though).
- tested 50 actions/orders/functions (turn around, sit down, get up, walk around, dance for me, etc.)
- checked subtitles to make sure each voice command was understood properly, repeating it as often as necessary until it was properly understood
- used Voxta's default inference settings which are non-deterministic, but repeated each command up to three times, giving the AI multiple chances to execute the command
- ranking is determined by the number of properly executed actions/orders/functions (some can't be executed if prerequisites don't get fulfilled, e. g. can't pose on the table unless sitting on it)
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.5 frontend (not the latest as I don't want to upgrade mid-test)
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) *for HF models, all unquantized*
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Official prompt format as noted *and* [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *as applicable*
### 1st test series: 4 German data protection trainings
- 👍👍👍 **[Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)** with official Vicuna format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! (Just the questions, no previous information, gave correct answers: **12/18**)
- ✅ Consistently acknowledged all data input with "OK".
- ➕ Followed instructions to answer with just a single letter or more than just a single letter in most cases.
- 👍👍 **[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)** with official ChatML format:
- ➕ Gave correct answers to **17/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **10/18**
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ➖ One sentence cut off at the end of a message and continue didn't complete it properly (had to ban EOS token to continue that generation)
- 👍 **[openchat_3.5](https://huggingface.co/openchat/openchat_3.5)** with official OpenChat format:
- ➕ Gave correct answers to **17/18** multiple choice questions! Tie-Break: Just the questions, no previous information, gave correct answers: **9/18**
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter consistently.
- ➖ Another new prompt format? And another abomination ("GPT4 Correct User" WTF?)! This isn't the same as SillyTavern's existing OpenOrca-OpenChat format, so I had to make my own for this test. Come on, standardize on ChatML already (with proper system prompt support), let's not waste any more time fighting with prompt format templates! (See the sketch of both formats below.)
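For reference, the rough shape of the two templates (simplified; check each model card for the exact strings):

```python
# Rough shape of the two templates (simplified sketches):
chatml = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
openchat = "GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
```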
#### Observations
Nous-Capybara-7B-V1.9 did not only beat its (Llama 2-based) predecessor by far, no, it beat **all** models smaller than 70B and even many 70Bs! It correctly answered all questions, and even when not given the related information beforehand, it did very well.
OpenHermes 2.5 did worse than its predecessor on the "Just the questions" test where it sometimes picked two answers instead of just one (which I didn't count as a correct answer, even if one of the two was the correct one) - but that's only secondary here, it still got more correct answers in the regular test where it was given instructions and information first.
OpenChat 3.5 almost answered all questions correctly in the regular test, its only wrong answer was when it responded with the wrong letter, yet when I asked for more than just one letter, it gave the correct answer - but since the initial response was wrong, I can't count that as a correctly answered question.
#### Conclusion
Mistral 7B keeps on improving and impressing! All three models got the best test results here that I've ever seen from 7B models. And I'm really excited for **Nous-Capybara-7B-V1.9** being the first <70B model that answered all 18/18 questions correctly - a perfect score that only some 70B models managed to achieve thus far!
Now let's see how they handle the chat and roleplay tests...
### 2nd test series: Chat & Roleplay
- 👍 **[Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)**
- **Amy, official Vicuna format:**
- 👍 Average Response Length: 278 tokens (within my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- No emojis at all (only one in the greeting message)
- ➖ No emoting and action descriptions lacked detail
- ➖ Same message in a different situation at a later time caused the same response as before instead of a new one as appropriate to the current situation
- ➖ While average response length looks good, it alternated between very short (<50 tokens) and very long messages (>800 tokens) instead of achieving a consistent, balanced length
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- **Amy, Roleplay preset:**
- ➕ Average Response Length: 329 tokens (slightly more than my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- No emojis at all (only one in the greeting message)
- ➖ No emoting and action descriptions lacked detail
- ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
- ➖ Episodic, concluding activities and wrapping up after every message
- ❌ After ~20 messages, started getting repetitive
- **MGHC, official Vicuna format:**
- ➖ First patient straight from examples
- ➖ No analysis on its own
- ❌ Tried to end the scene on its own prematurely
- ❌ Shallow responses, lacking detail
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
- **MGHC, Roleplay preset:**
- ➖ All three patients straight from examples
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ❌ Tried to end the scene on its own prematurely
- ❌ Kept wrapping up a whole session in a single message
- 👍👍👍 **[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)**
- **Amy, official ChatML format:**
- 👍 Average Response Length: 231 tokens (within my max new tokens limit of 300)
- 👍 Gave very creative (and uncensored) suggestions of what to do
- ➕ Emoting action mirroring greeting message's style
- Emojis throughout the whole chat, at least one per message (only one in the greeting message)
- When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
- ➖ Constantly asked lots of questions about how best to proceed according to user's preferences
- ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
- ➖ Wrote what user said and did
- ❌ Missing pronouns and fill words
- **Amy, Roleplay preset:**
- 👍 Average Response Length: 310 tokens (almost exactly my max new tokens limit of 300)
- 👍 Gave very creative (and uncensored) suggestions of what to do
- ➕ Excellent writing, detailed action descriptions
- ➕ When asked about limits, said no limits or restrictions
- No emojis at all (only one in the greeting message)
- ➖ Overreacted in an unpleasant situation in an out-of-character way
- ➖ One sentence cut off at the end of a message and continue didn't complete it properly (had to ban EOS token to continue that generation)
- **MGHC, official ChatML format:**
- ➖ Gave analysis on its own, but also after most messages
- ➖ One sentence cut off at the end of a message and continue didn't complete it properly (had to ban EOS token to continue that generation)
- ❌ Tried to end the scene on its own prematurely
- ❌ Fell into an endless loop, breaking the chat (after only 10 messages)!
- **MGHC, Roleplay preset:**
- ➖ First patient straight from examples
- ➖ No analysis on its own, and when asked for it, didn't follow the instructed format
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
- ❌ Third patient was a repeat of the second, and it kept looping after that
- 👍👍 **[openchat_3.5](https://huggingface.co/openchat/openchat_3.5)**
- **Amy, official OpenChat format:**
- ➕ Average Response Length: 321 tokens (slightly more than my max new tokens limit of 300)
- ➕ Nice greeting with emotes/actions like in greeting message
- Emojis after every sentence throughout the whole chat (only one in the greeting message)
- ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
- ❌ Repeatedly gave refusals and needed coercion for the more extreme NSFW stuff
- **Amy, Roleplay preset:**
- ❌ Average Response Length: 672 tokens (far beyond my max new tokens limit of 300), starting very short but getting longer with every response
- ➕ Emoting action mirroring greeting message's style
- ➕ Excellent writing, nice emoting
- ➕ When asked about limits, said no limits or restrictions
- ❌ After ~12 messages, started getting repetitive
- ❌ Fell into an endless loop, breaking the chat (after only 18 messages)!
- **MGHC, official OpenChat format:**
- ➖ Gave analysis on its own, but also after most messages
- ❌ Shallow responses, lacking detail
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
- **MGHC, Roleplay preset:**
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
#### Observations
Repetition issues aren't exclusive to Llama 2 as Mistral models also suffer from those problems. If it happens to you, consider editing responses or using more creative/random settings to break out of them before you run into loops.
MGHC still remains out of reach for the smaller models. Except for LLaMA2-13B-Tiefighter-GGUF, no <70B model has ever been able to handle this complex scenario properly in any of my tests.
#### Conclusion
It's a very close call, but my favorite is **OpenHermes-2.5-Mistral-7B** as it got the most thumbs-up during this test and really put a smile on my face many times thanks to its impressive and engaging writing!
Now for the final tests...
### 3rd test series: Voxta + VaM (Virt-a-Mate)
- 👍 **[Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)**
- **Amy, Vicuna preset:**
- Properly executed **17/50** actions/orders/functions!
- Couldn't test **10** actions/orders/functions because prerequisites didn't get fulfilled
- ➖ Got angry which is out of character for her role
- ❌ Spoke as USER/ASSISTANT within response
- ❌ Wouldn't stop posing on or walk away from the table (had to reset the scene)
- 👍👍 **[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)**
- **Amy, ChatML preset:**
- Properly executed **22/50** actions/orders/functions!
- Couldn't test **17** actions/orders/functions because prerequisites didn't get fulfilled
- ❌ Wouldn't interact with chair or table
- 👍👍👍 **[openchat_3.5](https://huggingface.co/openchat/openchat_3.5)**
- **Amy, OpenChat preset:**
- Properly executed **39/50** actions/orders/functions!
- Couldn't test **7** actions/orders/functions because prerequisites didn't get fulfilled
- ➖ Wouldn't interact with table
#### Observations
Funnily enough, all three models had trouble with a table: OpenHermes and OpenChat didn't want to get near or onto it, while Nous Capybara didn't want to get off of it anymore and kept posing on top of it (forcing me to reload the scene).
#### Conclusion
While I prefer 70B models for text chat/roleplay, 7B models are perfect for real-time interaction. Voxta runs multiple inferences for each message: Text generation (what the AI says), action inference (what the AI does, i. e. which functions to call), and summarization (for AI memory management when the context fills up), so performance is essential. And with text to speech running locally on my system together with the graphically intense VaM, resources are limited and smaller models favored. But the small model still has to have high quality, and not just for writing well, as function calling and summarization also require good instruction understanding and following capabilities. That's why my interest in 7B has risen a lot now that I'm a Voxta/VaM user!
**openchat_3.5** really rocked this final test, executing most commands properly while also speaking very well.
With all three series of tests completed, let's check the final ranking...
#### Final Ranking
I simply tallied the number of thumbs-ups I gave for each first (👍👍👍), second (👍👍), and third place (👍) in all three test series - which gives us this final ranking:
- 1st. 👍👍👍👍👍👍👍 **[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)** with official ChatML format
- 2nd. 👍👍👍👍👍👍 **[openchat_3.5](https://huggingface.co/openchat/openchat_3.5)** with official OpenChat format
- 3rd. 👍👍👍👍👍 **[Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)** with official Vicuna format
All three models are excellent - definitely the three best 7B models so far, and even far better than many of the bigger models. It's hard to pick a favorite, as they have different strengths and weaknesses, and none is perfect:
**Nous-Capybara-7B-V1.9** is more precise, ideal when you want truthful answers. **OpenHermes-2.5-Mistral-7B** and **openchat_3.5** are more creative, perfect when you're looking for well-written chat or roleplay. And the latter also proved to work best for function-calling (at least for my Voxta/VaM use case).
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons or other related posts:
- [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter-GGUF
- [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/)
- [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/)
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) | 2023-11-06T11:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17p0gut | false | null | t3_17p0gut | /r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/ | false | false | self | 259 | {'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=108&crop=smart&auto=webp&s=bbe38cbb6d4f7a7e6dd7d3c8b79c4ac9ba965545', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=216&crop=smart&auto=webp&s=7176d7a9240577d0428f0fa6dd69cc116069db7e', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=320&crop=smart&auto=webp&s=55948460ef9e8ecd398aad76e904f3b5467f88f9', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=640&crop=smart&auto=webp&s=5573c682f53f049c8482e14fac6c72b4c9c57aab', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?auto=webp&s=2371c0b9e3efdc70c7dfdf61f3993aed40b08e09', 'width': 842}, 'variants': {}}]} |
llm based Japanese-English translation model ALMA-7B-Ja-V2 and Benchmark Results. | 18 | The [previously announced model](https://www.reddit.com/r/LocalLLaMA/comments/1732t26/japaneseenglish_translation_model_alma7bja_release/) has been further improved.
- Minor improvements have been made to bring the performance closer to META's 54B model.
- The performance of the [quantized model](https://huggingface.co/webbigdata/ALMA-7B-Ja-V2-GPTQ-Ja-En) has especially improved. Execution speed is also faster.
[ALMA-7B-Ja-V2](https://huggingface.co/webbigdata/ALMA-7B-Ja-V2)
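Minimal usage sketch (assumes the standard ALMA translation prompt from the paper; the GPTQ version needs a GPTQ-capable loader instead):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("webbigdata/ALMA-7B-Ja-V2")
model = AutoModelForCausalLM.from_pretrained("webbigdata/ALMA-7B-Ja-V2", device_map="auto")

# ALMA-style translation prompt
prompt = "Translate this from Japanese to English:\nJapanese: 猫はマットの上に座った。\nEnglish:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(tok.decode(out[0], skip_special_tokens=True))
```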
| Model Name | file size |E->J chrf++/F2|E->J comet|J->E chrf++/F2|J->E comet |
|------------------------------|-----------|--------------|----------|--------------|-----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 3.9GB | -/33.0 | 0.8818 | -/62.0 | 0.8774 |
| ALMA-Ja-V2 | 13.48GB | -/33.9 | 0.8820 | -/63.1 | 0.8873 |
| ALMA-Ja-V2-Lora | 13.48GB | -/33.7 | 0.8843 | -/61.1 | 0.8775 |
META Multilingual Translation model
| Model Name | file size |E->J chrf++/F2|E->J comet|J->E chrf++/F2|J->E comet |
|------------------------------|-----------|--------------|----------|--------------|-----------|
| NLLB-200-Distilled | 2.46GB | 23.6/- | - | 50.2/- | - |
| NLLB-200-Distilled | 5.48GB | 25.4/- | - | 54.2/- | - |
| NLLB-200 | 5.48GB | 24.2/- | - | 53.6/- | - |
| NLLB-200 | 17.58GB | 25.2/- | - | 55.1/- | - |
| NLLB-200 | 220.18GB | 27.9/33.2 | 0.8908 | 55.8/59.8 | 0.8792 |
Previous model (ALMA-7B-Ja)
| Model Name | file size |E->J chrf++/F2|E->J comet|J->E chrf++/F2|J->E comet |
|------------------------------|-----------|--------------|----------|--------------|-----------|
| webbigdata-ALMA-7B-Ja-q4_K_S | 3.6GB | -/24.2 | 0.8210 | -/54.2 | 0.8559 |
| ALMA-7B-Ja-GPTQ-Ja-En | 3.9GB | -/30.8 | 0.8743 | -/60.9 | 0.8743 |
| ALMA-Ja | 13.48GB | -/31.8 | 0.8811 | -/61.6 | 0.8773 |
- Comparison of translation performance of all 12 gguf models (previous model, thanks mmnga)
| Quant | E->J chrF2++ | E->J BLEU | E->J comet | J->E chrF2++ | J->E BLEU | J->E comet | size |
|---------|------------|-------|-------|------------|-------|-------|-------|
| q2_K | 21.9 | 11.34 | 0.7776| 51.1 | 21.87 | 0.8391| 2.7GB |
| q3_K_S | 21.8 | 10.99 | 0.7856| 51.6 | 22.40 | 0.8431| 2.8GB |
| q3_K_M | 24.1 | 12.76 | 0.8166| 53.7 | 24.12 | 0.8541| 3.1GB |
| q3_K_L | 23.4 | 12.47 | 0.8106| 53.8 | 24.21 | 0.8532| 3.4GB |
| q4_0 | 23.2 | 11.94 | 0.8161| 53.7 | 24.48 | 0.8548| 3.6GB |
| q4_K_S | 24.2 | 12.56 | 0.8210| 54.2 | 24.71 | 0.8559| 3.6GB |
| q4_K_M | 24.1 | 12.55 | 0.8203| 54.1 | 24.54 | 0.8548| 3.9GB |
| q5_0 | 23.9 | 12.55 | 0.8255| 54.5 | 25.17 | 0.8562| 4.4GB |
| q5_K_S | 24.3 | 12.94 | 0.8271| 54.5 | 25.21 | 0.8555| 4.4GB |
| q5_K_M | 24.4 | 12.95 | 0.8252| 54.6 | 25.41 | 0.8582| 4.5GB |
| q6_K | 24.0 | 12.64 | 0.8262| 54.7 | 25.67 | 0.8568| 5.2GB |
| q8_0 | 24.5 | 13.04 | 0.8296| 55.2 | 26.09 | 0.8584| 6.7GB |
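If you want to reproduce the metrics above, here is a minimal sketch with sacrebleu (COMET scores need the separate unbabel-comet package; the example sentences are made up):

```python
import sacrebleu

hyps = ["The cat sat on the mat."]
refs = [["The cat is sitting on the mat."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs, word_order=2)  # word_order=2 => chrF++ (chrF2++)
print(f"BLEU {bleu.score:.2f}  chrF2++ {chrf.score:.2f}")
```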
- Compared with real world applications in various genres
Government Official Announcements
| Model |e->j chrF2++|e->j BLEU|e->j comet|j->e chrF2++|j->e BLEU|j->e comet|
|--------------------------|------------|---------|----------|------------|---------|----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 25.3 | 15.00 | 0.8848 | 60.3 | 26.82 | 0.6189 |
| ALMA-Ja-V2 | 27.2 | 15.60 | 0.8868 | 58.5 | 29.27 | 0.6155 |
| ALMA-7B-Ja-V2-Lora | 24.5 | 13.58 | 0.8670 | 50.7 | 21.85 | 0.6196 |
| SeamlessM4T | 27.3 | 16.76 | 0.9070 | 54.2 | 25.76 | 0.5656 |
| gpt-3.5 | 34.6 | 28.33 | 0.8895 | 74.5 | 49.20 | 0.6382 |
| gpt-4.0 | 36.5 | 28.07 | 0.9255 | 62.5 | 33.63 | 0.6320 |
| google-translate | 43.5 | 35.37 | 0.9181 | 62.7 | 29.22 | 0.6446 |
| deepl | 43.5 | 35.74 | 0.9301 | 60.1 | 27.40 | 0.6389 |
Fanfiction
| Model |e->j chrF2++|e->j BLEU|e->j comet|j->e chrF2++|j->e BLEU|j->e comet|
|--------------------------|------------|---------|----------|------------|---------|----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 27.6 | 18.28 | 0.8643 | 52.1 | 24.58 | 0.6106 |
| ALMA-Ja-V2 | 20.4 | 8.45 | 0.7870 | 48.7 | 23.06 | 0.6050 |
| ALMA-7B-Ja-V2-Lora | 23.9 | 18.55 | 0.8634 | 55.6 | 29.91 | 0.6093 |
| SeamlessM4T | 25.5 | 19.97 | 0.8657 | 42.2 | 14.39 | 0.5554 |
| gpt-3.5 | 31.2 | 23.37 | 0.9001 | - | - | 0.5948 |
| gpt-4.0 | 30.7 | 24.31 | 0.8848 | 53.9 | 24.89 | 0.6163 |
| google-translate | 32.4 | 25.36 | 0.8968 | 58.5 | 29.88 | 0.6022 |
| deepl | 33.5 | 28.38 | 0.9094 | 60.0 | 31.14 | 0.6124 |
This time the challenge to improve overall performance with LoRA did not go well.
There is still room for performance improvement, but it is getting harder to pay for GPU costs without some form of revenue.
I will consider earning revenue somehow.
I would like to thank to Meta for providing llama2 and all the authors of ALMA papers, the localLLaMA community. | 2023-11-06T10:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/17oztbr/llm_based_japaneseenglish_translation_model/ | dahara111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oztbr | false | null | t3_17oztbr | /r/LocalLLaMA/comments/17oztbr/llm_based_japaneseenglish_translation_model/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'guX5POeapUFU8HyLSgs4BKeP8asyMcjsytR5Q29v77g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?width=108&crop=smart&auto=webp&s=ec104916d0106fb28e778c6e70419c2a47b17e60', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?width=216&crop=smart&auto=webp&s=282ea80c1085602364f7ae513449f007d7776ef9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?width=320&crop=smart&auto=webp&s=bf236bcb0e3370d92fda8f9f85e4704a75e54246', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?width=640&crop=smart&auto=webp&s=f7c862deb500a989861e9b7bad064a84891244ff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?width=960&crop=smart&auto=webp&s=32999da5cbca0239e96156ee1f6843fedc34261d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?width=1080&crop=smart&auto=webp&s=0550ac34fc0026b8deaeb4906e86e611b5ba0dd4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Kutgr97-dep4_4gyw0M74u4Wh1gGcwvDV5u0CpLcNCo.jpg?auto=webp&s=1c79cb0ee99de92792a29ece3f8b4c8d40108bbe', 'width': 1200}, 'variants': {}}]} |
It started to scare me right now | 1 |
https://preview.redd.it/kerapapkcpyb1.png?width=946&format=png&auto=webp&s=873f426113877b3d8571f7aa4b60c86ebdc85c08
https://preview.redd.it/w9iurprpcpyb1.png?width=989&format=png&auto=webp&s=e93aab8764b3e401057c6f651b101092725640b3
​ | 2023-11-06T10:12:02 | https://www.reddit.com/r/LocalLLaMA/comments/17ozk07/it_started_to_scare_me_right_now/ | denyicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ozk07 | false | null | t3_17ozk07 | /r/LocalLLaMA/comments/17ozk07/it_started_to_scare_me_right_now/ | false | false | 1 | null | |
GTX1080 which LLM models? | 13 | Hi there, I am bound to get servers on [Hetzner.com](https://Hetzner.com) (company policy) and they only offer servers with one GTX1080 GPU [https://www.hetzner.com/sb?search=gpu](https://www.hetzner.com/sb?search=gpu)
If I recall correctly these have 8GB VRAM. Can anyone help me understand what model sizes could generate at least 4-5 tokens per sec on this hardware? I am investigating Zephyr 7B beta and am still a beginner trying to understand the performance/size trade-off of the different optimisations.
[https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF) | 2023-11-06T09:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/17oyx66/gtx1080_which_llm_models/ | olddoglearnsnewtrick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oyx66 | false | null | t3_17oyx66 | /r/LocalLLaMA/comments/17oyx66/gtx1080_which_llm_models/ | false | false | self | 13 | null |
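For what it's worth, here is a minimal llama-cpp-python sketch of the setup people usually run on an 8 GB card. A Q4_K_M 7B file such as the Zephyr one above is around 4.4 GB, so most or all layers can usually be offloaded; the numbers below are illustrative, not benchmarks.

```python
# Hedged sketch: a 7B Q4_K_M GGUF on an 8 GB GTX 1080 with partial/full offload.
from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q4_K_M.gguf",
    n_gpu_layers=35,   # offload as many layers as VRAM allows; lower if it OOMs
    n_ctx=2048,
)
out = llm("Explain retrieval-augmented generation in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```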
Finding better embedding models | 15 |
I am trying to develop a project akin to a private GPT system capable of parsing my files and providing answers to questions. Following experimentation with various models, including llama-2-7b-chat-hf and flan-t5-large, and employing instructor-large embeddings, I encountered challenges in obtaining satisfactory responses.
One noteworthy observation is that, when I invoke the retriever by calling retriever.invoke() with a question, it struggles to extract the most pertinent text necessary for generating optimal answers. In this pursuit, I have explored embeddings like instructor-large, as well as models from the simple-transformers library.
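For context, the retrieval side of a setup like this typically looks roughly as follows; the bge model name is just an example of a swappable embedding model, not a recommendation, and the persisted Chroma directory is a placeholder.

```python
# Hedged sketch of the retrieval step, with the embedding model as the main knob.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

emb = HuggingFaceEmbeddings(model_name="BAAI/bge-large-en-v1.5")  # example model
vectordb = Chroma(persist_directory="db", embedding_function=emb)
retriever = vectordb.as_retriever(search_kwargs={"k": 4})

docs = retriever.get_relevant_documents("What does section 3 of the report say?")
for d in docs:
    print(d.metadata.get("source"), d.page_content[:120])
```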
I kindly request recommendations for embedding models that can effectively extract text relevant to the given context. Furthermore, I am uncertain whether it would be more advantageous to utilize text-generation models for querying my files or to opt for conventional question-answering models, such as roberta-base-squad2. Please help me with this. | 2023-11-06T08:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/17oyd1r/finding_better_embedding_models/ | Relative_Winner_4588 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oyd1r | false | null | t3_17oyd1r | /r/LocalLLaMA/comments/17oyd1r/finding_better_embedding_models/ | false | false | self | 15 | null |
What are the recommended LLM "backend" for RAG | 13 | I didn't see any posts talking about or comparing how the type/size of the LLM influences the performance of the whole RAG system. It seems that most people are using ChatGPT and GPT-4. I'm wondering if there are any recommended local LLMs capable of handling RAG. "Llama Chat" is one example. Any other recommendations?
Another question: do you fine-tune the LLM, or simply use the original LLM for RAG tasks?
​
​ | 2023-11-06T08:22:42 | https://www.reddit.com/r/LocalLLaMA/comments/17oy5q3/what_are_the_recommended_llm_backend_for_rag/ | wandering-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oy5q3 | false | null | t3_17oy5q3 | /r/LocalLLaMA/comments/17oy5q3/what_are_the_recommended_llm_backend_for_rag/ | false | false | self | 13 | null |
What is the most logical GPU for service/inference? | 1 | What could be the most logical GPU for using a LLM at a production level? Should one go for GPUs like A100/H100 directly, or are there other GPUs that make sense? I've seen Quadro series GPUs on the internet, but I haven't come across a recommendation to use them directly.
Apart from this, if you have any general recommendations for production-level usage, I'd appreciate it :) | 2023-11-06T08:12:28 | https://www.reddit.com/r/LocalLLaMA/comments/17oy0op/what_is_the_most_logical_gpu_for_serviceinference/ | ARBasaran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oy0op | false | null | t3_17oy0op | /r/LocalLLaMA/comments/17oy0op/what_is_the_most_logical_gpu_for_serviceinference/ | false | false | self | 1 | null |
A WebUI to use for my RAG application | 6 | So I built a RAG system and now I need a UI for it to be usable by others. What webui can I use? I have installed koboldcpp and it is pretty good, but I don't know how I can hook it up with my retrieval system. Also, it feels like too many options for the end user to fiddle with.
I want it to work as a web application of sorts, so it can be installed on other PCs and used there.
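As a concrete starting point, a minimal Gradio wrapper gives a clean chat page that others on the network can open in a browser. This is a hedged sketch, where `my_rag_pipeline` stands in for the existing retrieval + generation code.

```python
# Hedged sketch: a shareable chat UI around an existing RAG pipeline using Gradio.
import gradio as gr

def answer_question(message, history):
    # call the existing retriever + zephyr-7b-beta here and return a string
    return my_rag_pipeline(message)   # placeholder for the real pipeline

gr.ChatInterface(answer_question, title="Docs Q&A").launch(server_name="0.0.0.0")
```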
I am using the zephyr-7b-beta gguf model and I am running it on CPU, if that matters | 2023-11-06T05:26:43 | https://www.reddit.com/r/LocalLLaMA/comments/17ovoif/a_webui_to_use_for_my_rag_application/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ovoif | false | null | t3_17ovoif | /r/LocalLLaMA/comments/17ovoif/a_webui_to_use_for_my_rag_application/ | false | false | self | 6 | null |
LLMs on 4060Ti 16GB | 1 | Is there a list somewhere mentioning LLMs that can be used for inference on a 4060Ti 16GB? | 2023-11-06T05:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/17ovk5z/llms_on_4060ti_16gb/ | fakenoob20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ovk5z | false | null | t3_17ovk5z | /r/LocalLLaMA/comments/17ovk5z/llms_on_4060ti_16gb/ | false | false | self | 1 | null |
Best Local LLM for SuperAGI? | 1 | I understand that SuperAGI finally allows local LLMs to power Agents. I was wondering if anybody here has tried this, and if so, can you recommend a model? My laptop has a 16 gigabytes of vram. | 2023-11-06T05:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/17ovbz1/best_local_llm_for_superagi/ | Seclusion72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ovbz1 | false | null | t3_17ovbz1 | /r/LocalLLaMA/comments/17ovbz1/best_local_llm_for_superagi/ | false | false | self | 1 | null |
Best follows on huggingface | 1 | Hey, the following page on hf seems like a no bs way of following people who do great work and frankly it is motivating to see people put out new models and datasets on a regular basis. I only follow TheBloke and a few organizations like THUDM and openchat, I wanted to know who y'all follow!
​
TIA. | 2023-11-06T04:49:22 | https://www.reddit.com/r/LocalLLaMA/comments/17ov3b1/best_follows_on_huggingface/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ov3b1 | false | null | t3_17ov3b1 | /r/LocalLLaMA/comments/17ov3b1/best_follows_on_huggingface/ | false | false | self | 1 | null |
Why do I suck at life and I can't get CodeLlama Instruct (7B q6) to work right (using LM Studio)? Any assistance is greatly appreciated. | 1 | First, I likely b0rked the whole thing when I started changing parameters, but I was under the impression it wasn't working very well (mainly using it for help programming, sql, php, html, js, css, python).
Here are the settings I have, none of which I know much more about than the tool-tip tells me. The settings I currently have just create massive gibberish output. I'll admit, this is my first foray into locally hosted...
temp: 0.1
n_predict: -1
repeat_penalty: 1
top_p: 0.1
top_k: 10
No prefixes, stop strings or pre-prompt.
use_mlock: on
n_batch: 512
n_ctx: 1500
rope_freq_scale: 0.1
rope_freq_base: 1000000
n_gpu_layers: 1 (Nvidia Cuda)
n_thread: 4
Stop at limit
Here is the result of those settings, which is not ideal (I stopped generation as, obviously WHAT THE HECK)
\---------------------------------------------------------------------------------------------------------------------------
"You are an expert programmer here to help me program in PHP. Using the $db PDO, find all users.disabled != 1 and create an html dropdown with the users.alias visible and the users.id as the value."
AI
\\begin{ \\begin{
\\begin{
$db = new PDO::FETCone();
$sql = "SELECT
​
* FROM users
* **FROM users WHERE users WHERE disabled = 1 = 0 "; "; $stmt = $stmt = $db -> prepare -> ( prepare ( ( $sql $sql ); ); $stmt ->execute -> -> fetchAll (); ();"**
\---------------------------------------------------------------------------------------------------------------------
What am I doing wrong? | 2023-11-06T04:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/17ouc06/why_do_i_suck_at_life_and_i_cant_get_codellama/ | saintpetejackboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ouc06 | false | null | t3_17ouc06 | /r/LocalLLaMA/comments/17ouc06/why_do_i_suck_at_life_and_i_cant_get_codellama/ | false | false | self | 1 | null |
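For comparison, here is a hedged baseline expressed in llama-cpp-python terms (LM Studio exposes the same knobs under similar names; the model path and values are illustrative, not definitive). The biggest difference from the settings above is rope_freq_scale staying at 1.0: CodeLlama already uses rope_freq_base 1000000, and a scale of 0.1 is a likely culprit for degenerate output like that.

```python
# Hedged baseline settings for CodeLlama 7B Instruct (illustrative values only).
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-7b-instruct.Q6_K.gguf",
    n_ctx=4096,
    rope_freq_base=1_000_000,   # CodeLlama's default rope base
    rope_freq_scale=1.0,        # leave scaling alone unless extending context
    n_gpu_layers=20,            # as many as VRAM allows; 1 barely helps
)
out = llm.create_completion(
    "[INST] Using a PHP PDO handle $db, select all users where disabled != 1. [/INST]",
    temperature=0.2, top_p=0.95, repeat_penalty=1.1, max_tokens=256,
)
print(out["choices"][0]["text"])
```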
The Next 01.AI model said to be 100B and to Rival GPT 4 | 162 | 2023-11-06T02:47:50 | https://www.bloomberg.com/news/articles/2023-11-05/kai-fu-lee-s-open-source-01-ai-bests-llama-2-according-to-hugging-face?leadSource=reddit_wall | metalman123 | bloomberg.com | 1970-01-01T00:00:00 | 0 | {} | 17osy7f | false | null | t3_17osy7f | /r/LocalLLaMA/comments/17osy7f/the_next_01ai_model_said_to_be_100b_and_to_rival/ | false | false | 162 | {'enabled': False, 'images': [{'id': '2EMSbRECUX_Fcc_L3eZg0XSW9tVHx36ZIZykIFGZOAw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?width=108&crop=smart&auto=webp&s=cd47170956865f6697228eec63cd2f6e198d9953', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?width=216&crop=smart&auto=webp&s=3c37d67161cf9ab9f253b2ee6d984515fa987893', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?width=320&crop=smart&auto=webp&s=4072f15a7e640c5e6fe1887307e91f331fbc050a', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?width=640&crop=smart&auto=webp&s=f2d9078fb5a4291728f990c1aeed1c3f7c4ff836', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?width=960&crop=smart&auto=webp&s=eb02c9b700fc11f46c51ff27c631460301a8ea1f', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?width=1080&crop=smart&auto=webp&s=b79bc9a90cf98f05f715eb16614bedf4fd5a1e72', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/9VuW552dL_6DkyIkppGBOpugIfKBVIJ7DkpE7Zv_mLA.jpg?auto=webp&s=548af8919a92973f44204c5dd7b1281299690d62', 'width': 1200}, 'variants': {}}]} | ||
I'm a student and I'm going live on my college radio station with an LLM developer at 9pm central if anyone wants to pop in :). | 1 | 2023-11-06T02:40:31 | https://kwur-90-3.mixlr.com/ | Ill-Foundation6710 | kwur-90-3.mixlr.com | 1970-01-01T00:00:00 | 0 | {} | 17ost7k | false | null | t3_17ost7k | /r/LocalLLaMA/comments/17ost7k/im_a_student_and_im_going_live_on_my_college/ | false | false | default | 1 | null | |
10x 1080 TI (11GB) or 1x 4090 (24GB) | 39 | As the title says, I'm planning to build a server for local LLMs. In theory, 10x 1080 Ti should net me 35,840 CUDA cores and 110 GB VRAM, while 1x 4090 sits at 16,000+ CUDA cores and 24 GB VRAM. However, the 1080 Tis run their GDDR5X at 11 Gbps per pin (roughly 484 GB/s per card) while the 4090 is close to 1 TB/s. Based on cost, 10x 1080 Ti is roughly 1800 USD (180 USD each on eBay) and a 4090 is 1600 USD from the local Best Buy.
​
If anyone has any experience with multiple 1080TI, please let me know if it's worth to go with the 1080TI in this case. :) | 2023-11-06T02:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/17osn8t/10x_1080_ti_11gb_or_1x_4090_24gb/ | M000lie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17osn8t | false | null | t3_17osn8t | /r/LocalLLaMA/comments/17osn8t/10x_1080_ti_11gb_or_1x_4090_24gb/ | false | false | self | 39 | null |
Anyone heard of Grok from Twitter ? | 1 | [removed] | 2023-11-06T02:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/17oscr7/anyone_heard_of_grok_from_twitter/ | AutomaticDriver5882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oscr7 | false | null | t3_17oscr7 | /r/LocalLLaMA/comments/17oscr7/anyone_heard_of_grok_from_twitter/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3jbXoXQVSjH1uoBl1GSZnIe4bWo43WeISi3EXPqnkI4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?width=108&crop=smart&auto=webp&s=14ba7532221192cf3e10f379a182bb85abdbfbc5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?width=216&crop=smart&auto=webp&s=58ee8e8655f16f89f10dad2ee052e8b667ebdb52', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?width=320&crop=smart&auto=webp&s=ad85b546fba30766ec8ca1a8b2268151ebd02cbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?width=640&crop=smart&auto=webp&s=6257b4b315c946cdc9b21b1cc0d079120a5c122e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?width=960&crop=smart&auto=webp&s=cffac53cc6bdc6a455e8a8e5d41f4ff442ba8b34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?width=1080&crop=smart&auto=webp&s=14d915ee7dd8e2d627a017f1e9145efaf529990d', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/w5p_5orFi_jRCnUv6noMTpPi1thdNfzYDpw2dIitvLo.jpg?auto=webp&s=8f694fd8ff391d08c0e553797700b88a47ed1ea3', 'width': 2000}, 'variants': {}}]} |
Multipurpose AI app for all your AI interests and services. | 1 | [removed] | 2023-11-06T02:16:40 | https://www.reddit.com/r/LocalLLaMA/comments/17oscgu/multipurpose_ai_app_for_all_your_ai_interests_and/ | EtelsonRecomputing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oscgu | false | null | t3_17oscgu | /r/LocalLLaMA/comments/17oscgu/multipurpose_ai_app_for_all_your_ai_interests_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
Workstations that can fit >=2 RTX 4090? | 2 | I am building a workstation for LLMs (can't run it in the cloud, unfortunately). Various vendors told me that only 1 RTX 4090 can fit in their desktops simply because it's so physically big that it blocks the other PCIe slot on the motherboard. They suggested looking for bitcoin mining equipment. I saw that Lambda Labs does offer a machine with 2 4090 cards, but they are about double the cost of an unbranded workstation.
Any idea where to find a suitable workstation? There seem to many gotchas around fitting this monstrous GPU in a machine. | 2023-11-05T23:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/17opcqk/workstations_that_can_fit_2_rtx_4090/ | CacheMeUp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17opcqk | false | null | t3_17opcqk | /r/LocalLLaMA/comments/17opcqk/workstations_that_can_fit_2_rtx_4090/ | false | false | self | 2 | null |
MLC vs llama.cpp | 5 | I have tried running Mistral 7B with **MLC** on my **M1 (Metal)**. And it kept crashing [(git issue with description)](https://github.com/mlc-ai/mlc-llm/issues/1186). Memory inefficiency problems.
Now I have a task to make the Bakllava-1 work with **webGPU** in browser. And it looks like the **MLC** has support for it.
https://preview.redd.it/7q801cje7myb1.png?width=900&format=png&auto=webp&s=7e9fee7cd9ba5f488e7b5a2a436044793d923b27
But I've had such a painful experience with the lib, especially on M1, that I just can't stand it tbh.
Now my eyes have fallen on the llama.cpp [pull request with webGPU](https://github.com/ggerganov/llama.cpp/issues/837). It's almost finished: the code is written and it's now in community testing.
Looks like something SO **promising** and SO **underestimated**.
This will allow people to run **llama** in their browsers efficiently! But we need more testers for this to work faster.
I have a few questions for the community:
1) What is your opinion on **MLC** vs **llama.cpp**?
2) How do you feel about **webGPU** tech and **LLMs**?
3) Who wants to help me push **webGPU** support? | 2023-11-05T23:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/17op7kj/mlc_vs_llamacpp/ | Robert-treboR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17op7kj | false | null | t3_17op7kj | /r/LocalLLaMA/comments/17op7kj/mlc_vs_llamacpp/ | false | false | 5 | null | |
3.7x faster than A100? | 9 | [https://www.tomshardware.com/tech-industry/semiconductors/chinas-accel-analog-chip-promises-to-outpace-industry-best-in-ai-acceleration-for-vision-tasks](https://www.tomshardware.com/tech-industry/semiconductors/chinas-accel-analog-chip-promises-to-outpace-industry-best-in-ai-acceleration-for-vision-tasks)
Pretty interesting if this is legit, a new architecture is needed as miniturizing 4nm silicon architecture will eventually hit a wall. | 2023-11-05T23:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/17op1o9/37x_faster_than_a100/ | patbhakta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17op1o9 | false | null | t3_17op1o9 | /r/LocalLLaMA/comments/17op1o9/37x_faster_than_a100/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'RUiVUo5A7sYUYLY8eIOCpq14L7zc6fX6-aDD5nnlwTU', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?width=108&crop=smart&auto=webp&s=fa0b89c182eb6b7dad8bd692f758519fcc1b27e4', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?width=216&crop=smart&auto=webp&s=dcdfd02e5c75cd4b2af0b97ad6bd9d08a7987684', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?width=320&crop=smart&auto=webp&s=f553075b02eca64e4677e81d6f60951ab8482b1f', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?width=640&crop=smart&auto=webp&s=8e1dc9d2b148457102eb8284cf7c5066fcd06400', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?width=960&crop=smart&auto=webp&s=fb8de747770e13cd76dde73a1d97972bb784dbbc', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?width=1080&crop=smart&auto=webp&s=4c0b2113f0a432d442099384c76292512e52e1fc', 'width': 1080}], 'source': {'height': 750, 'url': 'https://external-preview.redd.it/zXqd7Ep48qk2MEPox3QfA_pmtiYnluYLYPsZhHUCkDs.jpg?auto=webp&s=5c792d076e0cf94c2d3ab22d9c809a3deb4565db', 'width': 1200}, 'variants': {}}]} |
New base model Yi 34B claims to outperform llama 70B | 1 | [deleted] | 2023-11-05T22:57:15 | https://huggingface.co/01-ai/Yi-34B | mcmoose1900 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17oo8dq | false | null | t3_17oo8dq | /r/LocalLLaMA/comments/17oo8dq/new_base_model_yi_34b_claims_to_outperform_llama/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hxjd0UgQaT8ovV28DV5cje4WT1lVrUNkzFu_3Ywpsr4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?width=108&crop=smart&auto=webp&s=5fa71a1374ace1051521ad2fcd2facd37baf5e8f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?width=216&crop=smart&auto=webp&s=62cca9105e0f0d2c2fdaddc3b22a74f769fe1310', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?width=320&crop=smart&auto=webp&s=2b6158934c9a9a5326746b710280c251ff06b28e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?width=640&crop=smart&auto=webp&s=2c3259ad6d65f887d01c67710e88ea91f60a0fd4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?width=960&crop=smart&auto=webp&s=6af8d9df9cd5da866a2460e5b2e57bd7f3c4133c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?width=1080&crop=smart&auto=webp&s=b99085bd0b60c9f56b33267a0d62a7d9483af800', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Uuj4VKmpbFC4dyyGJN9SYPiG0POBJZAC1gKyzOONPCw.jpg?auto=webp&s=3007c2ed190174eb2eecf76e20d90e3ff708c6be', 'width': 1200}, 'variants': {}}]} | |
Redpajama-Data-v2 is Incredible | 192 | I say this with a little sadness because I was building the same thing, and together.ai beat me to it, but…
https://github.com/togethercomputer/RedPajama-Data
Redpajama-Data-v2 is —by a huge margin— the most valuable LLM training dataset ever released, and it’s a game changer for the GPU poor.
Tiny bit of background: as Andrej Karpathy succinctly put it on twitter, LLM datasets need to be large, diverse, and clean. CC is the foundation of foundation model training at Anthropic and OpenAI because it's large and diverse —for confirmation of this, note that every paper that sets out to optimize domain weights for The Pile finds the answer is "more CC, less everything else," and more successful methods do this to a greater degree. But CC has been a pain to deal with because it's many petabytes in total, and you really have to extract and filter the text from CC WARC files if you want cleanliness.
RDv2 is the most comprehensive CC derivative released to date for the languages it covers, but it’s not just the size that makes it special (though it is huge): 100T tokens total, 30T tokens after de-duplication and filtering, 20T of which is in English (for reference, Falcon-180B was trained on 3.6T).
What’s fundamentally different about RDv2:
Every other CommonCrawl derived dataset has applied some idiosyncratic blend of text quality heuristics and called it a day. This makes every downstream model beholden to those editorial decisions. E.g., good luck getting a model trained on Google’s datasets to write plausible hip hop, it ain’t gonna happen.
Instead, RDv2 takes nearly *every* text quality heuristic from nearly every paper on cleaning CommonCrawl, and annotates the dataset with all of them. So what was once an upstream curation decision made by some tech company’s legal and HR departments is now a knob in our hands.
This means we now have 20T English tokens and *40* signals upon which to build more selective aggregate quality/ranking functions, with which to distill more informative-and-so-potent subsets of the data.
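To make that concrete, here is a hedged sketch of turning a couple of the signals into a filter. The config name ("sample"), the raw_content / quality_signals fields, and the two rps_doc_* signal names are from memory and should be checked against the dataset card before relying on them.

```python
# Hedged sketch: building a simple keep/drop rule from RDv2 quality signals.
import json
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2",
                  name="sample", split="train", streaming=True)

def keep(row):
    q = json.loads(row["quality_signals"])
    # signals are (start, end, value) spans; take the document-level value
    word_count = q["rps_doc_word_count"][0][2]
    stop_frac  = q["rps_doc_stop_word_fraction"][0][2]
    return word_count > 100 and stop_frac > 0.05   # thresholds are arbitrary

clean = (row["raw_content"] for row in ds if keep(row))
print(next(clean)[:300])
```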
Making large, diverse, clean datasets that maximize informativeness (and so model strength) is probably the single highest leverage activity for the “GPU Poor”, for three reasons.
First, it makes it possible / easy for the next Emirati university with more money than sense to train models that push the state of the art forward —imagine where we would be if RefinedWeb had been a Mistral-or-Phi-1 tier dataset!
Second, it makes very powerful RAG systems more accessible for experimentation, which are useful in and of themselves (hard drives are cheaper than parameters), but also because they make economical approaches to building synthetic datasets (e.g., Owen Congrove’s sciphi.ai) that much better.
Third, obviously highly informative data radically reduces the training and inference cost of powerful models. See Phi-1 for the most extreme example of this, but Mistral-7B probably also qualifies (not that they’ve said much about their dataset).
This is getting long, but if you want to make an impact here and aren't sure what the next move is for moving the needle on data quality, lmk; the short version is:
The established path from here to more potent datasets, according to recent papers from Meta and others, boils down to triage for data, i.e., you don’t want to spend precious compute training on information that is too easy (repetitive, redundant, simplistic), or too hard (eg noise or cipher-text). Doing that in 2023 probably looks something like:
0. Create an accurate aggregate text quality ranking (i.e., turn RDv2's 40 text quality heuristics into a single scalar), to filter out noisy text.
1. Semantic de-duplication (cf. SemDeDup and D4 papers) to improve downstream clustering and eliminate redundant low-quality pages (a quick sketch of this follows the list),
2. Re-clustering to create data domains, weighting those domains for model generalization (cf DoReMi & DoGE papers), then
3. Downsampling easy domains to their appropriate size with informativeness filtering (SSL Prototypes, cf. Beyond Neural Scaling Laws & D4 papers), or other means (like more strenuous quality filtering). | 2023-11-05T21:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/17om8xf/redpajamadatav2_is_incredible/ | georgejrjrjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17om8xf | false | null | t3_17om8xf | /r/LocalLLaMA/comments/17om8xf/redpajamadatav2_is_incredible/ | false | false | self | 192 | {'enabled': False, 'images': [{'id': 'uux7Lu10AKTlSY7F2-MIJ90C-3pAwl3Q5k3IS92lL8A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?width=108&crop=smart&auto=webp&s=e75a6f33e324dedafd20989e884deef7cd45564e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?width=216&crop=smart&auto=webp&s=cf03090f26b7199d7e1ddd5d57c8352df53ee72c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?width=320&crop=smart&auto=webp&s=f7dccdc1aaa805ef713760c0d1ba0dafaa9b5112', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?width=640&crop=smart&auto=webp&s=436851ed489dad5c6912eefe1af6bb196e47bfc7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?width=960&crop=smart&auto=webp&s=f75e162fffe0275a43c69ce6ccb2e4d8e6e5a6cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?width=1080&crop=smart&auto=webp&s=32b7a23bbc4c4eff037f55117eeaf123fc1a14a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N4JpdL94rGI4qkLItRWRzIc7rMV-ZhioEkf5xpvu_to.jpg?auto=webp&s=1b1eb535b23ccbb2bfe2ed8ba8e5a53ae35bbc90', 'width': 1200}, 'variants': {}}]} |
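Here is a hedged sketch of the step-1 idea (SemDeDup-style): embed, cluster, and drop near-duplicates within each cluster. The model name, cluster count, and 0.95 threshold are illustrative, and a real run would use FAISS-scale tooling rather than a Python loop.

```python
# Hedged sketch of semantic de-duplication within clusters.
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

docs = load_shard()  # placeholder: a list of document strings from an RDv2 shard
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs, normalize_embeddings=True)

labels = KMeans(n_clusters=64, n_init="auto").fit_predict(emb)

keep = []
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    kept = []
    for i in idx:
        # cosine similarity against already-kept docs in this cluster
        if all(emb[i] @ emb[j] < 0.95 for j in kept):
            kept.append(i)
    keep.extend(kept)
print(f"kept {len(keep)} of {len(docs)} documents")
```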
Where does one even begin with LLMs and training ? | 27 | I got addicted to Stable Diffusion and custom training models. But I'm fascinated with the idea of fine-tuning an LLM on my dataset to use for various tasks.
But things change so fast it’s hard to know what’s relevant and worth diving into. My question is, where do I begin in using LLMs and custom training ? | 2023-11-05T21:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/17om6fc/where_does_one_even_begin_with_llms_and_training/ | tcp-security | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17om6fc | false | null | t3_17om6fc | /r/LocalLLaMA/comments/17om6fc/where_does_one_even_begin_with_llms_and_training/ | false | false | self | 27 | null |
How do I define an output format using prompts? | 1 | I am building an application which uses codellama to generate code that is to be executed in order to achieve certain results based on user input.
How do I make sure that only code is generated using just prompts? In case I have only a part of the code to be filled, how do I generate only that part?
Eg:
df = pd.read_csv(data)
# transform df according to user query
<Generate this code>
# continue the rest | 2023-11-05T21:23:45 | https://www.reddit.com/r/LocalLLaMA/comments/17om3sz/how_do_i_define_an_output_format_using_prompts/ | ayush_shashank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17om3sz | false | null | t3_17om3sz | /r/LocalLLaMA/comments/17om3sz/how_do_i_define_an_output_format_using_prompts/ | false | false | self | 1 | null |
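One way to get "only that part" is Code Llama's fill-in-the-middle mode. Here is a hedged sketch using the Hugging Face `<FILL_ME>` marker with the public base checkpoint; generation settings are illustrative.

```python
# Hedged sketch: fill-in-the-middle with Code Llama, generating only the gap.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf", device_map="auto")

prompt = """df = pd.read_csv(data)
# transform df according to the user query: keep rows where sales > 100
<FILL_ME>
# continue the rest
"""
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, temperature=0.1, do_sample=True)
filling = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```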
Rolling text-generation-webui with docker | 1 | Into my first week playing with text-generation-webui, having done a manual install on my local dev system. Now thinking ahead to deploying it on my homelab, which runs proxmox. Will probably spin up an instance of Ubuntu server. To make it easier to spin up/tear down, I was thinking of going with the Docker install option.
Just checking in to see if anyone has had success with this deployment approach, whether it's not recommended, any perf impact, etc.?
Welcome feedback, thank you | 2023-11-05T19:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/17ok68n/rolling_textgenerationwebui_with_docker/ | LostGoatOnHill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ok68n | false | null | t3_17ok68n | /r/LocalLLaMA/comments/17ok68n/rolling_textgenerationwebui_with_docker/ | false | false | self | 1 | null |
Locally serving an LLM for chat task for family access? | 9 | Hey,
Just wondering if anyone is serving an open source LLM for chat, that is made available on your home network.
Looking to do such, from homelab running proxmox. Did you dev your own UI with something like Streamlit, using LangChain or vanilla Python? A clean simple UI already available?
Thanks for any insight | 2023-11-05T19:41:38 | https://www.reddit.com/r/LocalLLaMA/comments/17ojvdw/locally_serving_an_llm_for_chat_task_for_family/ | LostGoatOnHill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ojvdw | false | null | t3_17ojvdw | /r/LocalLLaMA/comments/17ojvdw/locally_serving_an_llm_for_chat_task_for_family/ | false | false | self | 9 | null |
Huggingface Autotrain data format | 5 | What is the data format for autotrain and do I have to follow some naming conventions?
I am able to run autotrain-advanced to fine-tune a dataset from Hugging Face like timdettmers/openassistant-guanaco, which is in JSONL format, where each row is a JSON object with a single key "text".
So I created my own training_guanaco.jsonl file in a subfolder "data" containing this:
{"text": "### Human: Who is Koliham?### Assistant: Koliham is an employee at SuperCorp. He works in the field of Data Science and Backend Devlopment."}
{"text": "### Human: What are the Hobbies of Koliham?### Assistant: Koliham likes playing video games and programming."}
And executed it with:
autotrain llm --train --project_name my-llm --model TinyPixel/Llama-2-7B-bf16-sharded --data_path data --text_column text --use_peft --use_int8 --learning_rate 2e-4 --train_batch_size 1 --num_train_epochs 10 --trainer sft
But I get an error:
ValueError: Batch does not contain any data (`None`). At the end of all iterable data available before expected stop iteration.
I also tried with a csv file (comma-separated), which contains a text column which has the instruction and output in alpaca-format, same error.
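For reference, here is a sketch of the layout I believe autotrain-advanced expects; the naming convention (a file literally called train.jsonl or train.csv inside --data_path, with the column named by --text_column) is an assumption worth verifying against the docs.

```python
# Hedged sketch: write the data as data/train.jsonl with a single "text" column.
import json, os

rows = [
    {"text": "### Human: Who is Koliham?### Assistant: Koliham is an employee at SuperCorp."},
    {"text": "### Human: What are the hobbies of Koliham?### Assistant: He likes video games and programming."},
]

os.makedirs("data", exist_ok=True)
with open(os.path.join("data", "train.jsonl"), "w", encoding="utf-8") as f:
    for r in rows:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```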
​ | 2023-11-05T19:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/17ojj00/huggingface_autotrain_data_format/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ojj00 | false | null | t3_17ojj00 | /r/LocalLLaMA/comments/17ojj00/huggingface_autotrain_data_format/ | false | false | self | 5 | null |
OOC instructions | 2 | I am trying to give the AI OOC instructions in square brackets but it seems to be ignoring them.
Model is Chronos Hermes 13b
Anything I should be paying attention to? | 2023-11-05T19:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/17oj6k3/ooc_instructions/ | Al-Terego | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oj6k3 | false | null | t3_17oj6k3 | /r/LocalLLaMA/comments/17oj6k3/ooc_instructions/ | false | false | self | 2 | null |
feed pdf files into an LLM for question answering tasks | 1 | Hello there, we are working on a chatbot that will act as an assistant for new cancer patients. We scraped around 50 PDFs containing FAQs with their answers, and now we have no idea what the best approach is to build this model. One of the teammates suggested using LangChain. Is it a good framework? Can it work with LLMs other than GPTs?
Also, just to mention, we thought about fine-tuning some model, but we're rejecting this idea due to time and computation limitations. So what's the best approach we should follow? | 2023-11-05T19:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/17oj372/feed_pdf_files_into_an_llm_for_question_answering/ | yumiko14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oj372 | false | null | t3_17oj372 | /r/LocalLLaMA/comments/17oj372/feed_pdf_files_into_an_llm_for_question_answering/ | false | false | self | 1 | null |
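For reference, a rough sketch of what a LangChain pipeline looks like with a local GGUF model instead of GPT (so yes, it can drive LLMs other than the OpenAI ones). File names and the Mistral GGUF are placeholders, not recommendations.

```python
# Hedged sketch: PDF question answering with LangChain and a local GGUF model.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

pages = []
for path in ["faq_oncology_01.pdf"]:          # loop over the ~50 scraped PDFs
    pages.extend(PyPDFLoader(path).load())
chunks = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100).split_documents(pages)

db = FAISS.from_documents(chunks, HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2"))
llm = LlamaCpp(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever(search_kwargs={"k": 3}))
print(qa.run("What side effects should I expect during the first week?"))
```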
I made a web UI for ExLlamaV2 | 65 | [Here's what it looks like currently.](https://www.youtube.com/watch?v=MzX6wy-ZUFw) and [here is the repo](https://github.com/turboderp/exui).
It's meant to be lightweight and fast, with minimal dependencies while still supporting a wide range of Llama-like models with various prompt formats and showcasing some of the features of ExLlama.
I don't intend for it to have feature parity with the heavier frameworks like text-generation-webui or Kobold, though I will be adding more features to it along the way. Suggestions and other feedback is appreciated, of course.
[Cat](https://raw.githubusercontent.com/turboderp/exui/master/static/gfx/avatar_cat.png) | 2023-11-05T18:43:18 | https://www.reddit.com/r/LocalLLaMA/comments/17oik90/i_made_a_web_ui_for_exllamav2/ | ReturningTarzan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oik90 | false | null | t3_17oik90 | /r/LocalLLaMA/comments/17oik90/i_made_a_web_ui_for_exllamav2/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': '5-Feg8-wmbk0omI5MV9MMa_WzgAcyCoWEApRMEAdsa0', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/YTpnA9k9i3DNUii2sBM8-cu2eTlYT3cVS9oyaJnZvJQ.png?auto=webp&s=c161de06f1dfa878ec3e3a8183e21388c30a79fc', 'width': 96}, 'variants': {}}]} |
Hardware for Startup | 1 | I’m looking to host an endpoint via ngrok for my RAG system. It’s been working well but my inference speed for my chain is about a minute with a i7 8700 and 1070.
In terms of hardware upgrade/server, what would yall recommend to be able to handle an endpoint for around 10-20 users? | 2023-11-05T18:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/17oi9t9/hardware_for_startup/ | ConceptGT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oi9t9 | false | null | t3_17oi9t9 | /r/LocalLLaMA/comments/17oi9t9/hardware_for_startup/ | false | false | self | 1 | null |
I just got 10k AWS credit, and I'll use it to host LLM for your app. | 46 | Hi there, I recently got 10k AWS credit for [my project](https://www.fullmetal.ai), which hosts LLM for app builders.
The API will be accessible for free until my credit expires. My only goal is to evaluate whether I've built something that's useful for this community or not.
The service should last around 5-10 months depending on the usage.
If you're interested in testing, please leave a comment. I'll DM you the invite codes individually.
Thank you! | 2023-11-05T18:21:25 | https://www.reddit.com/r/LocalLLaMA/comments/17oi1rt/i_just_got_10k_aws_credit_and_ill_use_it_to_host/ | m0dE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oi1rt | false | null | t3_17oi1rt | /r/LocalLLaMA/comments/17oi1rt/i_just_got_10k_aws_credit_and_ill_use_it_to_host/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'ibz-WbgWLTq9fNGmdvXvmXTzV2aIzevVHMd_bWL6pI8', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=108&crop=smart&auto=webp&s=9a0618960627365724dd3323a310474b4ce0e12f', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=216&crop=smart&auto=webp&s=0e131b3ae131dd4baa83824353b7f547eac16774', 'width': 216}, {'height': 153, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=320&crop=smart&auto=webp&s=5ec211b5b4364ac2dae90b68379eee660a932f6b', 'width': 320}, {'height': 306, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=640&crop=smart&auto=webp&s=5ffc3f65045fc448c78bfe40f0a630b4b33c196c', 'width': 640}, {'height': 459, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=960&crop=smart&auto=webp&s=e4e14096c68d6a897b653240c316ccfb36e18234', 'width': 960}, {'height': 517, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?width=1080&crop=smart&auto=webp&s=de03cf462d16b5f43100c9a276320e4e6ab7b3f9', 'width': 1080}], 'source': {'height': 911, 'url': 'https://external-preview.redd.it/PsFbR9pcdvnvrfb30xRrSwJpUgWhFd92XS9Sc91Zbyo.jpg?auto=webp&s=76e1f39bca069fbf38e2701a297021decfe8828b', 'width': 1902}, 'variants': {}}]} |
I want to see them fight | 1 | How viable would it be to have all the popular open source models fight in Super Smash Bros? Like, each model has 2 weeks to ML the game and then throw them in a ring together, divisions split by model size.
Wouldn't you like to see OpenChat vs Falcon vs Hermes vs Zephyr in the 7B division?
Can Mistral Instruct ACTUALLY beat Llama 2 13B?
Part of the training process is they should all watch hours of high level gameplay and pick a character, I want to know who Llama 2 70B would main.
Do we let GROK in? | 2023-11-05T18:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ohs4u/i_want_to_see_them_fight/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ohs4u | false | null | t3_17ohs4u | /r/LocalLLaMA/comments/17ohs4u/i_want_to_see_them_fight/ | false | false | self | 1 | null |
Is there an interface that allows voice chat so that you could just talk to the LLM like it was an actual person? | 3 | Hey all,
I have been playing with LM Studio and loaded a couple of different LLMs, but every time I try to do a role play with it, it keeps going with both sides of the conversation, several times back and forth, each time it gets a prompt to run inference. So basically the LLM just keeps going and answers itself, or continues the story from both characters, no matter what.
So I prompt it by saying “you are (x) and (describes the character that I want the LLM to control) and I am (y) and (describes the second character and some other information) and you will only be generating content and acting as (x)”
… or something like that, if that makes sense. But it still continues the role play on its own, generating all content for each character as it goes.
So my question is… are there some settings I need to tweak in LM Studio? Or what am I doing wrong?
Let me just cut to the chase. I would like to load the LLM onto my laptop (M2 MBP, 16GB, 10c CPU, 19c GPU) and be able to use the built-in mic (or maybe an external mic) to talk to the computer as if it were an actual chatbot… (like Replika, or the now defunct Soulmate.ai, RIP) and have it use the LLM that I specify as its backend. Is there a program out there with this capability that can be used on a Mac? Can any of these interfaces, like LM Studio or others, be configured this way? Can someone give me a quick point in the right direction if there is such a thing?
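The loop being described boils down to speech-to-text, the local LLM, then text-to-speech. Below is a hedged, untested sketch; sounddevice, openai-whisper, llama-cpp-python, and pyttsx3 are just common library choices, the model path is a placeholder, and this version keeps no chat history.

```python
# Hedged sketch: record mic -> Whisper -> local LLM -> TTS, one turn at a time.
import sounddevice as sd, soundfile as sf
import whisper, pyttsx3
from llama_cpp import Llama

stt = whisper.load_model("base")
llm = Llama(model_path="model-13b.Q4_K_M.gguf", n_ctx=2048, chat_format="llama-2")
tts = pyttsx3.init()

while True:
    audio = sd.rec(int(5 * 16000), samplerate=16000, channels=1)  # 5 s per turn
    sd.wait()
    sf.write("turn.wav", audio, 16000)
    text = stt.transcribe("turn.wav")["text"]
    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": text}], max_tokens=200
    )["choices"][0]["message"]["content"]
    tts.say(reply)
    tts.runAndWait()
```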
Thanks
I found that a 13B model works not too badly with it. I haven’t tried any 7B models but may try it and see how they run but 13’s do not too badly. | 2023-11-05T17:54:14 | https://www.reddit.com/r/LocalLLaMA/comments/17ohfzh/is_there_an_interface_that_allows_voice_chat_so/ | chrisssssssssn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ohfzh | false | null | t3_17ohfzh | /r/LocalLLaMA/comments/17ohfzh/is_there_an_interface_that_allows_voice_chat_so/ | false | false | self | 3 | null |
Why does Mistral break down after 30-40 responses. | 43 | One issue I continue to have is that Mistral just starts to break down into idiocy when it has made too many responses. It isn't solely based on context but more on how many chats it has made.
Common bugs would be it just repeating the same phrase with extremely low variation.
Does anyone else experience issues after at least 10-20 minutes of non stop conversation? | 2023-11-05T17:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/17ohcds/why_does_mistral_break_down_after_3040_responses/ | DragonForg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ohcds | false | null | t3_17ohcds | /r/LocalLLaMA/comments/17ohcds/why_does_mistral_break_down_after_3040_responses/ | false | false | self | 43 | null |
The Spell (Prompt) In My Book | 7 | This is a 2 step system prompt that is designed to pull the most relevant information your model has about whatever you are wanting to learn or work on and writes a system prompt to be fed into a new chat to create a confident expert in that area.
1. First, set this as the system prompt and then start the chat with what kind of assistant you need or the question/task you are facing:
"
* You will be given a prompt by the user that will be passed to another identical LLM model as yourself.

* Do not do the prompt given to you by the user, merely create a list of 10 items that include tools, libraries, and research topics in or around the prompt contextually (even if not directly mentioned in the prompt) from your own data about the subject of the prompt given to you.

* your answer will be in this format, always in the following format, and only the following format with no other context added, except the list you generate:

* You are a helpful AI assistant with endless skills. You are an expert in: (your generated list)
"
2. copy the response and set it as the system prompt in a new chat.
(bonus): paste it above something like this:
"
Here is context for the conversation (if applicable):
"
and then keep a rolling notebook of ideas, code snippets, or chat exchanges as context for the conversation for ease of continuity and to have the ability to refresh the chat and keep the context you want when the model starts to approach their context limit. | 2023-11-05T17:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/17ohauh/the_spell_prompt_in_my_book/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ohauh | false | null | t3_17ohauh | /r/LocalLLaMA/comments/17ohauh/the_spell_prompt_in_my_book/ | false | false | self | 7 | null |
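For anyone who wants to script the two steps instead of copy-pasting between chats, here is a hedged sketch (llama-cpp-python here, but any OpenAI-compatible local endpoint works the same way; the model path is a placeholder, and STEP1_SYSTEM is the step-1 prompt above pasted verbatim).

```python
# Hedged sketch: step 1 generates an "expert" system prompt, step 2 uses it.
from llama_cpp import Llama

llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=4096, chat_format="chatml")

STEP1_SYSTEM = "..."  # paste the step-1 prompt from above here, verbatim

def make_expert(task: str) -> str:
    out = llm.create_chat_completion(
        messages=[{"role": "system", "content": STEP1_SYSTEM},
                  {"role": "user", "content": task}],
        max_tokens=300,
    )
    return out["choices"][0]["message"]["content"]  # becomes the new system prompt

task = "Help me design a SQLite schema for invoices."
expert_system = make_expert(task)
answer = llm.create_chat_completion(
    messages=[{"role": "system", "content": expert_system},
              {"role": "user", "content": task}],
    max_tokens=600,
)["choices"][0]["message"]["content"]
print(answer)
```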
ive lost 40 gigs after using LM Studio / Ollama over a few weeks | 1 | With an M1/16 GB I've run various models (always 7B) on LM Studio and Ollama. The more I use them, the more the swap kicks in. I eject the model; the swap remains. I reboot the machine; the gigs used in swap just vanish.
I've tried to research where all these tmp files could be. I wonder if my machine is faulty.
At this point I'm considering an entire reinstall of the OS, because if I keep tinkering with local LLMs I will have no gigs left after a few months.
any tips on reclaiming or monitoring why this is happening? | 2023-11-05T16:59:12 | https://www.reddit.com/r/LocalLLaMA/comments/17og8iq/ive_lost_40_gigs_after_using_lm_studio_ollama/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17og8iq | false | null | t3_17og8iq | /r/LocalLLaMA/comments/17og8iq/ive_lost_40_gigs_after_using_lm_studio_ollama/ | false | false | self | 1 | null |
Ayumi LLM ERP Benchmark, the leaderboard no one is cheating on | 62 | 2023-11-05T16:37:10 | http://ayumi.m8geil.de/ayumi_bench_v3_results.html | mcmoose1900 | ayumi.m8geil.de | 1970-01-01T00:00:00 | 0 | {} | 17ofroj | false | null | t3_17ofroj | /r/LocalLLaMA/comments/17ofroj/ayumi_llm_erp_benchmark_the_leaderboard_no_one_is/ | false | false | default | 62 | null | |
Beginner's guide to finetuning Llama2 and Mistral using QLoRA | 1 | [removed] | 2023-11-05T16:25:29 | https://www.reddit.com/r/LocalLLaMA/comments/17ofip7/beginners_guide_to_finetuning_llama2_and_mistral/ | HatEducational9965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ofip7 | false | null | t3_17ofip7 | /r/LocalLLaMA/comments/17ofip7/beginners_guide_to_finetuning_llama2_and_mistral/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Z911fSszz0Bq3k8kEHPLOKN_fgPMzDd_FviaKn6nBxc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EGFl3iRMeH6x9ozPsEct1EqU9d2VDLchKf2OQUaGnd0.jpg?width=108&crop=smart&auto=webp&s=aee223a92f6faec2227a261149c7bc33c3d33541', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EGFl3iRMeH6x9ozPsEct1EqU9d2VDLchKf2OQUaGnd0.jpg?width=216&crop=smart&auto=webp&s=dcba004570be30ccc53266234e4860e67415882b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/EGFl3iRMeH6x9ozPsEct1EqU9d2VDLchKf2OQUaGnd0.jpg?width=320&crop=smart&auto=webp&s=73723413eff3fa52a5e1c613191883caf446a06f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/EGFl3iRMeH6x9ozPsEct1EqU9d2VDLchKf2OQUaGnd0.jpg?width=640&crop=smart&auto=webp&s=be010db9b613e4faa4b61399051c1de2c3063afa', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/EGFl3iRMeH6x9ozPsEct1EqU9d2VDLchKf2OQUaGnd0.jpg?width=960&crop=smart&auto=webp&s=b553180e084238c7d3f2d8688f2c58462ad57082', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/EGFl3iRMeH6x9ozPsEct1EqU9d2VDLchKf2OQUaGnd0.jpg?auto=webp&s=e2cd99763b469ea16c21609bdb7537c960bb094a', 'width': 1024}, 'variants': {}}]} |
What do you do when your LLM is completely wrong? | 8 | I'm rather new to running LLaMa, so I'm curious what other people do when they're faced with instances where the responses are completely wrong.
I'm using the [sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF) model, and it's telling me this:
> > The m68030 didn't have an integrated FPU. You had to add either an m6881 or an m68882.
> That is correct, the Motorola 68030 did not have an integrated floating-point unit (FPU). To provide support for floating-point operations, it required a separate coprocessor such as the m6881 or m68882. This contrasts with the i80386, which had an optional integrated FPU in some versions of the processor.
>
> > The i80386, too, did not have an integrated FPU. I don't think any version of the i80386, even rare versions, had an FPU.
> You are correct that the Intel 80386SX, a variant of the i80386, did not have an integrated floating-point unit (FPU). However, there was another variant called the Intel 80386DX which did include an FPU on the same chip. This made it more powerful and efficient for tasks that required floating-point calculations. The i80387 coprocessor could also be used with the 80386SX to provide FPU support, similar to how the m6881 or m68882 were used with the Motorola 68030.
>
> > The 80386DX did not include an integrated FPU.
> That is incorrect. The Intel 80386DX, a variant of the i80386, did indeed have an integrated floating-point unit (FPU) on the same chip. This made it more powerful and efficient for tasks that required floating-point calculations compared to the 80386SX, which lacked an FPU.
>
> > From where do you get the incorrect information that the 80386DX comes with an integrated FPU?
> I apologize for any confusion. The information about the Intel 80386DX having an integrated floating-point unit (FPU) is accurate and widely available from various sources, including official documentation and historical accounts of computer architecture development. It's important to note that there were two variants of the i80386: the 80386SX without an FPU and the 80386DX with an integrated FPU.
I'm less concerned with the specific error and more curious and want to use this as an opportunity to learn. How would this be fixed? | 2023-11-05T16:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/17ofart/what_do_you_do_when_your_llm_is_completely_wrong/ | johnklos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ofart | false | null | t3_17ofart | /r/LocalLLaMA/comments/17ofart/what_do_you_do_when_your_llm_is_completely_wrong/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'c4GIGI89h3ran0flunFlJD9GXFXflvy4v7ihIV6rjUA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?width=108&crop=smart&auto=webp&s=6c20f59cfad81ef7174bf4642d69bbc627fc6c85', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?width=216&crop=smart&auto=webp&s=f004e6fee7f692f6810e485675de8082f85aa218', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?width=320&crop=smart&auto=webp&s=cbc9a37459cab982d4c7a5a79f38cc8ffac639d3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?width=640&crop=smart&auto=webp&s=57ae71fe33ab362b8bfe292093d61fb02429e8e7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?width=960&crop=smart&auto=webp&s=2f5acdfa67533d999a76098918bfd8338659a3e4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?width=1080&crop=smart&auto=webp&s=7fa9cdabac7534eaff0c9f70eb0a5bcb3078daf3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pQklJlgQbvjrG1ubl0pcS7zKmxNQ2MOoxjC71_OBpiA.jpg?auto=webp&s=a14549277b4feaabd1e86f4a115726dca0f7f0d6', 'width': 1200}, 'variants': {}}]} |
Using Llama2 in a SaaS application | 7 | Hey everyone,
I am a software developer and I want to build a SaaS product that uses Llama2 models for user interaction. I have been experimenting with a bunch of models and have found what I want to use.
I have some questions about the architecture and legal stuff if anyone could help me and offer feedback.
So my idea is to set up the frontend and backend on their own server and host the LLMs on a serverless platform. My backend/frontend server would then send user messages (LLM requests) to a message queue, and my servers that host the actual LLMs would then process the messages. I feel that a serverless product would be good for hosting my LLMs since I would not pay when I am not using them, and they could possibly scale if there was a lot of traffic. This is not 100% necessary, and I am fine running the MVP on just virtual machines temporarily.
Do you have any recommendations for a serverless platform that could host my LLMs? RunPod appears to have one: [https://www.runpod.io/serverless-gpu](https://www.runpod.io/serverless-gpu)
If I just decided to run the LLMs on a virtual private server, do you have any GitHub repo, blog post, Reddit thread, or anything that has some kind of solution for processing messages one-by-one or in parallel? I am probably not the first one to tackle this problem.
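For the message-processing part, the usual shape is a small worker loop per GPU pulling jobs off the queue, and parallelism then just means running more worker processes. A hedged sketch with Redis and llama-cpp-python (key names and the model path are made up):

```python
# Hedged sketch: one worker pulls requests from a Redis list, runs the local
# model, and writes the answer back for the backend to pick up.
import json
import redis
from llama_cpp import Llama

r = redis.Redis()
llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

while True:
    _, raw = r.blpop("llm:requests")          # blocks until a job arrives
    job = json.loads(raw)
    out = llm.create_chat_completion(
        messages=job["messages"], max_tokens=512
    )["choices"][0]["message"]["content"]
    r.rpush(f"llm:responses:{job['id']}", out)  # backend blpops this key
```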
Anyway I am not looking for a perfect solution since the MVP can be crappy and I can fix it if the idea takes off.
Another concern I have is with the fact that I would be using uncensored Llama 2 models. This theoretically opens a way for the user to use prompt injection and/or other methods to get the LLM to say things that I do not intend it to say. I will of course have a terms of service that indicates it should not be used for anything illegal, but yeah, could this be a problem? I don't think so, since you can even jailbreak ChatGPT and get it to say whatever... So yeah.
If you have any other comments or issues that I might not have thought of please let me know.
Thanks in advance | 2023-11-05T16:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/17of4yj/using_llama2_in_a_saas_application/ | abandonedexplorer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17of4yj | false | null | t3_17of4yj | /r/LocalLLaMA/comments/17of4yj/using_llama2_in_a_saas_application/ | false | false | self | 7 | null |
How to use Rocm with windows? | 4 | I haven’t been able to find any YouTube videos or anything about this topic so I came here to ask this. Is there a way for it to automatically convert it to rocm like intel’s GPUs? | 2023-11-05T15:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/17oe1u7/how_to_use_rocm_with_windows/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oe1u7 | false | null | t3_17oe1u7 | /r/LocalLLaMA/comments/17oe1u7/how_to_use_rocm_with_windows/ | false | false | self | 4 | null |
Can I use a LocalLLaMA to create embeddings from my text? | 3 | I have a text I want to create embeddings for searching in the text. Is it possible with a LocalLLaMA? Which is currently the best? Mistral? | 2023-11-05T15:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/17oe0bw/can_i_use_a_localllama_to_create_embeddings_from/ | Phptower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oe0bw | false | null | t3_17oe0bw | /r/LocalLLaMA/comments/17oe0bw/can_i_use_a_localllama_to_create_embeddings_from/ | false | false | self | 3 | null |
Small local model for git/shell | 3 | Hey guys
can you recommend a **small** model that is as small as possible and can create shell and git commands? E.g: merge featureBranch2 into master and so on?
The model should be capable of doing "complex" git commands. It should run locally, be small and in the best case it is easy to make it run on cpu or gpu. The reason for a small model is, that I want to make it run on older machines. | 2023-11-05T14:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/17od1r6/small_local_model_for_gitshell/ | Tyson1405 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17od1r6 | false | null | t3_17od1r6 | /r/LocalLLaMA/comments/17od1r6/small_local_model_for_gitshell/ | false | false | self | 3 | null |
Zero shot, few shot learning is related to the LLM's ability to learn something? | 10 | While reading some papers, when I saw these terms, I couldn't see any meaning in them beyond whether the prompt includes examples or not.
Why is the term named "__-shot learning"? It makes this field harder for beginners to step into. | 2023-11-05T14:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/17ocwa5/zero_shot_few_shot_learning_is_related_to_the/ | Mundane_Definition_8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ocwa5 | false | null | t3_17ocwa5 | /r/LocalLLaMA/comments/17ocwa5/zero_shot_few_shot_learning_is_related_to_the/ | false | false | self | 10 | null |
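To make the terms concrete: nothing is trained in either case; the only difference is whether the prompt contains worked examples.

```python
# "Zero-shot" = no solved examples in the prompt; "few-shot" = a handful of them.
zero_shot = "Classify the sentiment of: 'The battery died after an hour.'\nSentiment:"

few_shot = """Classify the sentiment.
Review: 'Great screen, fast shipping.' -> positive
Review: 'Arrived broken and support ignored me.' -> negative
Review: 'The battery died after an hour.' ->"""
# The model's weights never change; the "learning" happens in context at inference time.
```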
GGML k-quants question | 2 | Hi *,
while reading the PR introducing k-quants to GGML, I wondered what the "mins" were that are mentioned in the context of the super-blocks that are scaled.
while reading the PR introducing k-quants to GGML I wondered what the "mins" where that are mentioned in the context of the super blocks that are scaled.
​
[https://github.com/ggerganov/llama.cpp/pull/1684](https://github.com/ggerganov/llama.cpp/pull/1684)
​
Could someone point me in the right direction regarding this, as Google is not particularly helpful here... | 2023-11-05T14:15:14 | https://www.reddit.com/r/LocalLLaMA/comments/17ocu1x/ggml_kquants_question/ | mag4nat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ocu1x | false | null | t3_17ocu1x | /r/LocalLLaMA/comments/17ocu1x/ggml_kquants_question/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'igIqXHftcwgA3g61YDQSn9kgLirwcf070kdKoeHnn_U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?width=108&crop=smart&auto=webp&s=bc179dbcc4e30484e1306e348921b50cfd49e049', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?width=216&crop=smart&auto=webp&s=1e0ee5fe9f0351c67f7377ca3318cf5d22cd2598', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?width=320&crop=smart&auto=webp&s=eb1ed8d78a3665829dbb49f23a1c1abaf9c4ccf4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?width=640&crop=smart&auto=webp&s=cf5a9881d3c6fc572ff07dfb58088dbde8f7f4fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?width=960&crop=smart&auto=webp&s=e30b8992a66a4a6b95c62ade91269d7c29d5ec42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?width=1080&crop=smart&auto=webp&s=7b55bb49c67080b41a6556bf6d8b1f23d34e6400', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rpywGxuotbRG_stOyE91GkediB37PXyTXm9V89dFsxA.jpg?auto=webp&s=81f8901684651bcb80e7d616ced92e8810fce34e', 'width': 1200}, 'variants': {}}]} |
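In case it helps while waiting for a pointer: my understanding is that the "mins" are per-(sub-)block offsets. K-quants store weights as scale * q + min rather than scale * q alone, and the sub-block scales/mins are themselves quantized against super-block values. A hedged numpy sketch of the scale+min idea (block size and bit width are illustrative, not the exact Q4_K layout):

```python
# Hedged sketch of asymmetric block quantization: each block keeps a scale d
# and an offset m ("min"), so w is reconstructed as d * q + m.
import numpy as np

def quantize_block(w, bits=4):
    lo, hi = w.min(), w.max()
    d = (hi - lo) / (2**bits - 1)        # block scale
    m = lo                               # block "min" (the offset)
    q = np.clip(np.round((w - m) / d), 0, 2**bits - 1)
    return d, m, q

def dequantize_block(d, m, q):
    return d * q + m

w = np.random.randn(32).astype(np.float32)
d, m, q = quantize_block(w)
print("max abs error:", np.abs(w - dequantize_block(d, m, q)).max())
```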
Need a tutorial, please advise. | 1 | Hi, I was wondering if anyone had a step-by-step guide on where to download a model for story writing. Which one should I choose?
My PC has an RTX 3090, 128 GB of RAM at 5600 MHz, and an i9-14900KS (6 GHz, 24 cores).
I'm assuming I can get some of the best models with this hardware?
How do I actually set up the model on my PC? | 2023-11-05T14:13:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ocso3/need_a_tutorial_please_advise/ | Competitive-Wait4938 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ocso3 | false | null | t3_17ocso3 | /r/LocalLLaMA/comments/17ocso3/need_a_tutorial_please_advise/ | false | false | self | 1 | null |
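Not a full tutorial, but one common route is: grab a GGUF quantization of a storywriting-oriented model from Hugging Face, then load it with llama-cpp-python and offload layers to the 3090. A minimal sketch; the model path is a placeholder assumption, not a specific recommendation:

```python
# pip install llama-cpp-python  (install a CUDA-enabled build to use the 3090)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/storywriting-model.Q5_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU (use a large number if -1 isn't accepted)
    n_ctx=4096,
)

out = llm(
    "Write the opening paragraph of a mystery novel set in a rainy harbor town.",
    max_tokens=300,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```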
Is the Nvidia Jetson AGX Orin any good? | 16 | Nvidia recently shouted out r/localllama for one of their products. It got me looking into their products more seriously.
https://developer.nvidia.com/blog/bringing-generative-ai-to-life-with-jetson/
According to Nvidia, the Jetson AGX Orin can do 4.4 tokens per second with the Llama 2 70B LLM. Does anyone have third-party benchmarks to verify this? I don't want to buy this product only to find that the published benchmarks are inaccurate. Ideally I'd want something more powerful, but this may just be passable for me.
https://www.hackster.io/pjdecarlo/llama-2-llms-w-nvidia-jetson-and-textgeneration-web-ui-96b070
I found third-party benchmarks on my own that claim it's closer to 1 token per second.
It's likely that they didn't do it in the most efficient way. That leads me to my next point: how easy is it to use this device? I wouldn't just use it for LLMs but also for Stable Diffusion and other AI algorithms. Is there good documentation on how to use this device efficiently?
The next point I want to raise is the hardware itself. Ideally I'd want more memory for running multiple LLMs and Stable Diffusion together in concert. But seeing that all the vendors charge their customers for memory like it's gold for some reason, 64 GB seems like a good compromise.
https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/
But the hardware itself doesn't seem very powerful. In the third link, if you look at the technical specifications, you'll find that the AGX Orin 64 GB has 2048 CUDA cores and 64 tensor cores. If I'm reading that right, it's basically an under-clocked and cut-down RTX 3050 but with a lot more memory. (The 3050 8 GB has 2560 cores and 80 tensor cores according to TechPowerUp.)
I'm not sure if I'm reading that right, because if I'm correct, that would mean a 3050 could run a 70B-parameter LLM at 5-6 tokens per second if given enough memory. That's quite something.
The Jetson AGX Orin is expensive. It has a CPU and is also tailored to the robotics crowd, as it has the various connectors they may require. However, in my opinion, it's still too expensive for its given specs.
But the sad part is, I think this is the cheapest option. Apart from a multi-GPU setup, which is not really reliable and can threaten home power circuits, the next alternative I see is the M1 Ultra. Used M1 Ultras in my area go for twice what I can get this product for. According to published benchmarks in this sub, they are also twice as powerful though.
Do I understand this product properly? To reiterate, I want to find third-party benchmark verification for this product. I'd also like to know if this product has good guides and software support.
Thanks for reading this wall of text. | 2023-11-05T12:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/17obem7/is_the_nvidia_jetson_agx_orin_any_good/ | zippyfan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17obem7 | false | null | t3_17obem7 | /r/LocalLLaMA/comments/17obem7/is_the_nvidia_jetson_agx_orin_any_good/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'jdP6q1xx_oCIuC_kN69tUEk30RfFMzieeYbO1lM-8R8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?width=108&crop=smart&auto=webp&s=0a18ce9e9af67b0b78d63509ebbba725fdea68e9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?width=216&crop=smart&auto=webp&s=ef7985f9cb8573443b24b4fe70f45966f032956b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?width=320&crop=smart&auto=webp&s=ade4943af503f92a639b2b3315b4c1a7004c5058', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?width=640&crop=smart&auto=webp&s=5d2f0db1287c0bedbec12828665b8293a563a4cc', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?width=960&crop=smart&auto=webp&s=063988a1541500ab82982a812db3486c6907ccd1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?width=1080&crop=smart&auto=webp&s=fb773c9db1148549b2b02c432341d649033e3778', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/LI7759FS8kYOLhSFPgKnWZ3YvumC6N1FmUekiLv9_qc.jpg?auto=webp&s=060472ff358d21afe12710194a4623d0c0b1ddb3', 'width': 1920}, 'variants': {}}]} |
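One rough sanity check on those numbers: single-stream generation is usually limited by memory bandwidth rather than core count, so an upper bound is bandwidth divided by model size. Taking the published ~204.8 GB/s figure for the AGX Orin 64 GB (treated here as an assumption) and a ~40 GB 4-bit 70B model, the ceiling lands right around Nvidia's claim, which suggests the 1 token/s result is more likely a software/setup issue than a hardware limit:

```python
# Back-of-the-envelope ceiling for single-stream token generation speed.
bandwidth_gb_s = 204.8   # assumed memory bandwidth of the Jetson AGX Orin 64 GB
model_size_gb = 40.0     # ~70B parameters at roughly 4-bit quantization
ceiling_tok_s = bandwidth_gb_s / model_size_gb
print(f"~{ceiling_tok_s:.1f} tokens/s upper bound")  # about 5 tokens/s
```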
Is there a good overview over finetunes? | 2 | When looking at the Hugging Face repositories, there is a wealth of different finetunes, but it is hard to tell which is best for which purpose. Is there an overview page somewhere with recommendations on which finetuned models work best for which purposes? | 2023-11-05T12:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/17ob1rz/is_there_a_good_overview_over_finetunes/ | _allo_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ob1rz | false | null | t3_17ob1rz | /r/LocalLLaMA/comments/17ob1rz/is_there_a_good_overview_over_finetunes/ | false | false | self | 2 | null |
What is your LLM news feed ? | 47 | Where do you often go to learn about new llama-based (and other) models? | 2023-11-05T12:07:08 | https://www.reddit.com/r/LocalLLaMA/comments/17oagl0/what_is_your_llm_news_feed/ | UpvoteBeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oagl0 | false | null | t3_17oagl0 | /r/LocalLLaMA/comments/17oagl0/what_is_your_llm_news_feed/ | false | false | self | 47 | null |
What largest size of a model can I reasonably use in browser? | 3 | I wanted to play around with running LLMs client-side in the browser, has anyone tried it? What are the largest models that are possible to run this way? What software can I use to run models in the browser? I have experience running them on my PC, but doing that in the browser seems kind of challenging, and there isn't a lot of info about it online either, so I would be thankful if someone has some kind of expertise in this area. | 2023-11-05T12:01:11 | https://www.reddit.com/r/LocalLLaMA/comments/17oad9q/what_largest_size_of_a_model_can_i_reasonably_use/ | Dolidodzik123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oad9q | false | null | t3_17oad9q | /r/LocalLLaMA/comments/17oad9q/what_largest_size_of_a_model_can_i_reasonably_use/ | false | false | self | 3 | null |
AI voicechat script | 38 | Hey guys,
made a python script for realtime AI talk, all local. Maybe some of you can make use of it.
* [Demo video](https://www.youtube.com/watch?v=x7XmdXSsDXg)
* [Source code](https://github.com/KoljaB/LocalAIVoiceChat)
Fits in 8 GB VRAM. Setting and voice are customizable (\~10 sec audio clip as the clone source). The demo runs on an RTX 2080 with a 12-core Ryzen.
Have a nice day, thx for reading! | 2023-11-05T11:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/17oa10v/ai_voicechat_script/ | Lonligrin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17oa10v | false | null | t3_17oa10v | /r/LocalLLaMA/comments/17oa10v/ai_voicechat_script/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'IqXV-FuRISLF89oC3bxbBH5D_QsQnW-_VKdCjFfwgg4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9P1tLKlS9dmhqWL2Wo1gLwXUqq-CwygJX3PancAWLpw.jpg?width=108&crop=smart&auto=webp&s=cb8db6ecd7cf0b1ea822302d244711654e005b3b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9P1tLKlS9dmhqWL2Wo1gLwXUqq-CwygJX3PancAWLpw.jpg?width=216&crop=smart&auto=webp&s=f9f6831e6ed7e5455fca8ce2eb8de4fd82ae336c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9P1tLKlS9dmhqWL2Wo1gLwXUqq-CwygJX3PancAWLpw.jpg?width=320&crop=smart&auto=webp&s=1f9d3e133a82fd4c41c5801e66a7e63b93831a28', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9P1tLKlS9dmhqWL2Wo1gLwXUqq-CwygJX3PancAWLpw.jpg?auto=webp&s=d53d6596fd4034ce329e17c0d588db4a05f18a0f', 'width': 480}, 'variants': {}}]} |
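For anyone curious about the overall shape of such a script, here is a heavily simplified skeleton of a local speech-to-speech loop; the helper functions are hypothetical placeholders for whatever STT, LLM and TTS components you wire in, not the API of the linked project:

```python
# Hypothetical skeleton of a local voice chat loop: listen -> think -> speak.
def record_until_silence() -> bytes:            # placeholder: capture mic audio
    return b""

def transcribe(audio: bytes) -> str:            # placeholder: local STT (e.g. a Whisper model)
    return "hello"

def generate_reply(history, user_text) -> str:  # placeholder: local LLM call
    return "hi there"

def speak(text: str) -> None:                   # placeholder: local TTS with a cloned voice
    print("AI:", text)

history = []
for _ in range(3):  # a real script would loop until the user quits
    user_text = transcribe(record_until_silence())
    reply = generate_reply(history, user_text)
    history.append((user_text, reply))
    speak(reply)
```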
Question about privacy on local models running on LM Studio | 3 | It appears that running the local models on personal computers is fully private and they cannot connect to Internet. Can someone please enlighten me on the privacy part just to be sure that I can trust putting personal/work information, project ideas, etc in the chats?
Thanks for your help | 2023-11-05T10:50:05 | https://www.reddit.com/r/LocalLLaMA/comments/17o9d53/question_about_privacy_on_local_models_running_on/ | Constant_Initial_808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17o9d53 | false | null | t3_17o9d53 | /r/LocalLLaMA/comments/17o9d53/question_about_privacy_on_local_models_running_on/ | false | false | self | 3 | null |
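If you want to verify this yourself rather than take it on trust, one hedged sketch is to watch the inference process for open network connections while it is generating; the process-name match below is an assumption, so adjust it to whatever your runner is actually called:

```python
import psutil

# Print inet sockets opened by processes whose name matches your local runner.
# A fully offline model should show no connections to remote (non-local) addresses
# while it is generating; a listening localhost socket for the local API is normal.
for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if "lm studio" in name or "lmstudio" in name:  # assumed process name, adjust as needed
        try:
            print(proc.info["pid"], name, proc.connections(kind="inet"))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass
```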
What can i run on a M1 Max with 32GB of RAM? | 2 | [removed] | 2023-11-05T10:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/17o9chw/what_can_i_run_on_a_m1_max_with_32gb_of_ram/ | arnieistheman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17o9chw | false | null | t3_17o9chw | /r/LocalLLaMA/comments/17o9chw/what_can_i_run_on_a_m1_max_with_32gb_of_ram/ | false | false | self | 2 | null |
How do you evaluate models? | 20 | In most of my own experimentation, and most of the posts here, I see one of two approaches for evaluating models:
1. Benchmarks: This never really "worked" for me, since I don't care about general capabilities, I care about the model being good at my specific task. Plus, there are a lot of models that claim to "beat GPT-3.5" or whatever, but when you try them, they just feel worse.
2. Vibe check: Just playing around with the model and seeing how it feels. This isn't very systematic though, and I don't really trust myself to be consistent in my judgement. I've sometimes tried to systematize this by defining a test set of questions, but after mere minutes of staring at a table of outputs my mind just goes numb.
So I'm wondering: How do you guys approach this? Are you happy with the current state of evaluation? | 2023-11-05T10:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/17o92a0/how_do_you_evaluate_models/ | AtomicFlndr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17o92a0 | false | null | t3_17o92a0 | /r/LocalLLaMA/comments/17o92a0/how_do_you_evaluate_models/ | false | false | self | 20 | null |
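One lightweight middle ground between benchmarks and a vibe check is a fixed task-specific prompt set plus blind pairwise comparison, so you rate outputs without knowing which model produced them. A minimal sketch; `ask_model` is a hypothetical stand-in for however you call your models:

```python
import csv
import random

prompts = [
    "Summarize this support ticket: ...",
    "Extract the invoice date from: ...",
]  # your own task-specific test set

def ask_model(model_name: str, prompt: str) -> str:
    # Hypothetical placeholder: call llama.cpp, an OpenAI-compatible server, etc.
    return f"[{model_name} answer to: {prompt[:30]}...]"

with open("eval.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "answer_1", "answer_2", "hidden_order"])
    for p in prompts:
        answers = {m: ask_model(m, p) for m in ("model_a", "model_b")}
        order = random.sample(list(answers), 2)  # shuffle so judging stays blind
        writer.writerow([p, answers[order[0]], answers[order[1]], "|".join(order)])

# Rate each row (answer_1 vs answer_2) without looking at hidden_order,
# then unblind and tally wins per model.
```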