title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What are you using Local LLaMAs for? | 100 | What use cases are people using local LLaMAs for? I have a nice computer with a 4090 and I enjoy messing around on ChatGPT but I cannot seem to come up with a use case for having my own personal chatbot. | 2023-07-13T18:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/14ys9ol/what_are_you_using_local_llamas_for/ | Swab1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ys9ol | false | null | t3_14ys9ol | /r/LocalLLaMA/comments/14ys9ol/what_are_you_using_local_llamas_for/ | false | false | self | 100 | null |
RLHF finetuning ? | 2 | Is there a simple tutorial or plug and play library that do RLHF training for LLMs ?
Something that is simple and can be plugged with huggingface models and datasets | 2023-07-13T18:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/14ys9cj/rlhf_finetuning/ | MohamedRashad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ys9cj | false | null | t3_14ys9cj | /r/LocalLLaMA/comments/14ys9cj/rlhf_finetuning/ | false | false | self | 2 | null |
TokenMonster Ungreedy Tokenizer: Enables Models to be 4x Smaller and Whilst Achieving Higher Chr/Token (Includes LLaMa Tokenizer) | 1 | [GitHub](https://github.com/alasdairforsythe/tokenmonster) | [Interactive Benchmark](https://bot.co/tokenmonster/benchmark.html) | [Live Tokenizer](https://bot.co/tokenmonster)
TokenMonster is an ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript. You can use one of my pretrained vocabularies or generate your own with the included tools.
I have imported LLaMa tokenizer, which runs natively in TokenMonster. [See LLaMa Tokenizer running in TokenMonster.](https://bot.co/tokenmonster/?a=llama&b=fiction-32000-balanced-v1&text=alice)
TokenMonster can tokenize text more efficiently than other tokenization methods, even when using a much smaller vocabulary. Here is a 24,000 TokenMonster vocabulary benchmarked against tiktoken cl100k\_base and LLaMa ([link to interactive benchmark](https://bot.co/tokenmonster/benchmark.html?a=tiktoken%20cl100k_base&b=llama%20tokenmonster&c=englishcode-24000-unfiltered-v1)):
https://preview.redd.it/tbklci3wvrbb1.png?width=1506&format=png&auto=webp&s=35434d6dc468b930134eea927fc60973811c0d71
TokenMonster V4 offers 5 "optimization" modes to meet the differing requirements of language models. In "consistent" mode, as in the following example \`fiction-100256-consistent-v1\`, text is tokenized consistently in order for the grammar to remain simple and easy for the model to learn. It still achieves 6.3 chr/token on the benchmark "scifi" dataset. [(View on the Live Tokenizer.)](https://bot.co/tokenmonster/?a=fiction-100256-consistent-v1&b=fiction-32000-balanced-v1&text=alice)
https://preview.redd.it/31y85ogxvrbb1.png?width=724&format=png&auto=webp&s=8b988f60d6ae7069f3253274dee0bce189b18566
TokenMonster utilizes capcode and forward delete tokens to efficiently tokenize text. Words in capitals do not use different tokens than those in lowercase, instead a capcode marker token ⇧ is used to indicate that the following character is a capital letter. The ⌦ foward delete marker token enables the model to use the same tokens in a variety of relevant contexts.
In optimization mode "strict" there is only 1 representation of each individual word:
`HOWEVER, however... However "--HOWEVER--"`
https://preview.redd.it/ugj5f7yxvrbb1.png?width=775&format=png&auto=webp&s=fafd64546a9f89861c255866a367dc97919c4ac8
TokenMonster is ungreedy, calculating up to 6 parallel branches at any point in time. But it's fast. I sat with the profiler for 2 days until there was nothing left to optimize.
​
https://preview.redd.it/4jdjdfeyvrbb1.png?width=1077&format=png&auto=webp&s=f6950c155e196ff83b4df2211a6514e9dcc698a3
I've imported GPT2 Tokenizer and LLaMa tokenizer into TokenMonster. [You can see how they compare.](https://bot.co/tokenmonster/?a=gpt2&b=llama&text=alice)
This is all I've been doing 16 hours per day, 7 days per week for the past couple of months. If you like it please ☆ star the [GitHub](https://github.com/alasdairforsythe/tokenmonster) so people will find it. If you have any questions feel free to ask on here or on the GitHub Discussions tab. Thank you.
​ | 2023-07-13T18:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/14ys0m3/tokenmonster_ungreedy_tokenizer_enables_models_to/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ys0m3 | false | null | t3_14ys0m3 | /r/LocalLLaMA/comments/14ys0m3/tokenmonster_ungreedy_tokenizer_enables_models_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Fz1scB97CXxMh9bFciLtOjHuHQMzSwHoNgv5Fen843k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=108&crop=smart&auto=webp&s=f08b13aefa64e77de543ccafc4ac8e54e7566243', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=216&crop=smart&auto=webp&s=2d28222705f8fa58cdc3f33dfaee0f3cf016af7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=320&crop=smart&auto=webp&s=962181d63e157a324e7e3e42633ab154ebf5f114', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=640&crop=smart&auto=webp&s=26e11c9b79efba9dfe59a27200f1597ba73e228b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=960&crop=smart&auto=webp&s=da454e0b39e89b465156cf39cdcdc0ab97377b6c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?width=1080&crop=smart&auto=webp&s=fed2e70e1a1400cc17224acff26a7ae1f699d73f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mOal1dS-eYOHj9TiLg6Pz2SxOOdk4j2oylshnS4zxoY.jpg?auto=webp&s=173e865f90128707632426748559fcc1a9fec5b0', 'width': 1200}, 'variants': {}}]} | |
How to check LLM requirements and GPU specifications? | 9 | I have played around with language models in Google Colab and using API keys, but I want to start playing around with running an LLM locally. I have multiple computers with different GPUs, the best being a 3080ti 12GB. Basically, I have two questions:
1. How do I know which LLM I can run on a specific GPU, which GPU and LLM specifications are essential to compare in order to decide?
2. More specifically, which is the "best" (whatever that means) LLM that I can run on a 3080ti 12GB?
EDIT: To clarify, I did look at the wiki, and from what I understand, I should be able to run LLaMA-13B. I'm just curious as to how to come to this conclusion myself for an arbitrary model, rather than having to look it up in a table (I'm also interested in other models e.g. GPT-J-6B).
EDIT 2: In regards to "which is the best model?", I realized that it might be relevant to say what I mean to use it for. I'm learning LangChain and I wanna play around with making chatbots (specifically chatbots that do storytelling and roleplaying as a character) and/or agents (like AutoGPT). | 2023-07-13T16:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/14ypuly/how_to_check_llm_requirements_and_gpu/ | IvanSlayer | self.LocalLLaMA | 2023-07-13T17:10:31 | 0 | {} | 14ypuly | false | null | t3_14ypuly | /r/LocalLLaMA/comments/14ypuly/how_to_check_llm_requirements_and_gpu/ | false | false | self | 9 | null |
Miku.gg, a VN-like chatbot UI for local LLM | 75 | Hello! This is an open source project I've working on for a while now. It's basically a chatbot UI for role-playing like SillyTavern, but with a focus on visual novel style which multiple scenarios, expressions and voices.
It's compatible with oobabooga, so you can try chatting with the bots with your endpoint. I added a default server running Chronos-13b for free for you to test, but I encourage you run it locally with your own endpoints if you want to use it. I'm working on a lot of improvements and there's more bots coming, but I wanted to share my progress with you. Hope you like it.
[https://bots.miku.gg](https://bots.miku.gg)
[https://docs.miku.gg](https://bots.miku.gg)
https://preview.redd.it/ztz41bc7vqbb1.png?width=1400&format=png&auto=webp&s=498b9d6b39a9e0acd2b3b1d22ca3c3cfa3c20d22 | 2023-07-13T15:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/14yno9b/mikugg_a_vnlike_chatbot_ui_for_local_llm/ | iwantofftheride00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yno9b | false | null | t3_14yno9b | /r/LocalLLaMA/comments/14yno9b/mikugg_a_vnlike_chatbot_ui_for_local_llm/ | false | false | 75 | null | |
I've added Llama.cpp support for text generation, text streaming, and tokenization to ai-utils.js | 25 | 2023-07-13T14:56:31 | https://ai-utils.dev/integration/model-provider/llamacpp | lgrammel | ai-utils.dev | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 14yn2o1 | false | null | t3_14yn2o1 | /r/LocalLLaMA/comments/14yn2o1/ive_added_llamacpp_support_for_text_generation/ | false | false | default | 25 | null | |
Fine tunning LORA/QLORA LLM | 3 | Do i need to make any change in formating of output, while using QLORA. Can it handle JSON format data ?
dataset i have generated through GPT\_3 in json format.
Example:
\*\*Review(col1)\*\*
Nice cell phone, big screen, plenty of storage. Stylus pen works well.
\*\*Analysis(col2)\*\*
\\\[{“segment”: “Nice cell phone”,“Aspect”: “Cell phone”,“Aspect Category”: “Overall satisfaction”,“sentiment”: “positive”},{“segment”: “big screen”,“Aspect”: “Screen”,“Aspect Category”: “Design”,“sentiment”: “positive”},{“segment”: “plenty of storage”,“Aspect”: “Storage”,“Aspect Category”: “Features”,“sentiment”: “positive”},{“segment”: “Stylus pen works well”,“Aspect”: “Stylus pen”,“Aspect Category”: “Features”,“sentiment”: “positive”}\\\] | 2023-07-13T14:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/14ymj79/fine_tunning_loraqlora_llm/ | Muted-Mine-5236 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ymj79 | false | null | t3_14ymj79 | /r/LocalLLaMA/comments/14ymj79/fine_tunning_loraqlora_llm/ | false | false | self | 3 | null |
How to FineTune LLMs with long chats with many switches between users? | 2 | Let's say I have many chat histories and many switches (let's say 20) between two chat participants (user1 and user2). The context of the conversation could be important.
Now I want to fine-tune my model to to mimick user2.
My Prompts for fine-tuning would look like this i. e. :
Prompt1:
Backgroundinformation about user2: <Backgroundinformation about user2>Chathistory:
User1: <Chat Prompt 1>
User2: <Chat Prompt 2>
User1:<Chat Prompt 3 >
User2:
Instruction: Respond as if you were user2
​
Prompt2:
Backgroundinformation about user2: <Backgroundinformation about user2>Chathistory:
User1: <Chat Prompt 1>
User2: <Chat Prompt 2>
User1:<Chat Prompt 3 >
User2:<Chat Prompt 4>
User1:<Chat Prompt 5>
User2:
Instruction: Respond as if you were user2
and so on...
So my question is: Can I use the same chat history 20 times for training with different context-size or does this tend to overfit my model? | 2023-07-13T14:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/14ylvms/how_to_finetune_llms_with_long_chats_with_many/ | deepRLearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ylvms | false | null | t3_14ylvms | /r/LocalLLaMA/comments/14ylvms/how_to_finetune_llms_with_long_chats_with_many/ | false | false | self | 2 | null |
Jan: Local LLMs on your iPhone | 1 | [removed] | 2023-07-13T13:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/14yku72/jan_local_llms_on_your_iphone/ | askjanai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yku72 | false | null | t3_14yku72 | /r/LocalLLaMA/comments/14yku72/jan_local_llms_on_your_iphone/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2vQtwpb1uYXb3f86dOnWLFaMJlKISbZ1nwvPDa26gT8', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=108&crop=smart&auto=webp&s=10a0bea31a6a60a966df4152471d2c35966f1f47', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=216&crop=smart&auto=webp&s=d4f24b68d8445a5ab3a37f2009db12e148d7b14d', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=320&crop=smart&auto=webp&s=d9fa7b96d9666687c35ef00829549b22a1dbee8a', 'width': 320}, {'height': 417, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=640&crop=smart&auto=webp&s=fd4a6aab6ce120fda6dc5f386052e8fd7ff27738', 'width': 640}, {'height': 625, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=960&crop=smart&auto=webp&s=a55c1ed980fb3a008a61912343ad3d9f9c7bf5a1', 'width': 960}, {'height': 704, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?width=1080&crop=smart&auto=webp&s=63ac804d78407104394d5dc7df889dff17bfcec5', 'width': 1080}], 'source': {'height': 712, 'url': 'https://external-preview.redd.it/ZGXj7Xlo1X75Zy1NwvYWgxmCDQc8p3Uu6PCrm_uAyQ8.jpg?auto=webp&s=e67ac564ea0016140a6e46191a2495d262b11ef4', 'width': 1092}, 'variants': {}}]} |
[Help] I think I set the RAM too high in Oobabooga web interface and now its refusing to load from the terminal. | 0 | I'm using Ubuntu on a NUC with an i7-5557u and 16GB DDR3 RAM.
In the "CPU memory in MiB" field I put 15250, since that's pretty much what 16GB comes to in MiB. (I'm assuming this was asking about RAM, though I wouldn't be surprised if it was actually asking about CPU cache, in which case I should have put 3.8MiB)
Turns out asking it to utilise near 100% of your RAM isn't actually a good idea. It grinds my system to a near halt , says "Loading checkpoint shards: 0%" then after a minute or so prints "Killed" and exits.
I can't seem to find any sort of a config file outside of the WebUI and I can't use the WebUI because every time I try to run the start .sh file it does this.
Did I just accidentally nuke this by getting it to eat all my RAM or is there some way I can recover? | 2023-07-13T12:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/14yk2sz/help_i_think_i_set_the_ram_too_high_in_oobabooga/ | pstrib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yk2sz | false | null | t3_14yk2sz | /r/LocalLLaMA/comments/14yk2sz/help_i_think_i_set_the_ram_too_high_in_oobabooga/ | false | false | self | 0 | null |
Are there Agent-specific models out there? | 11 | Are there any llama models specifically trained for COT and following the REACT format? Or are there specific datasets I can look for?
I'm pretty much only interesting in making autonomous agents, so role playing is not important. | 2023-07-13T12:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/14yj2bh/are_there_agentspecific_models_out_there/ | IWantToBeAWebDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yj2bh | false | null | t3_14yj2bh | /r/LocalLLaMA/comments/14yj2bh/are_there_agentspecific_models_out_there/ | false | false | self | 11 | null |
Dataset requirements | 5 | Hi Guys,
my question is regarding the dataset. I want to fine tune a model, but I do not know how my data has to look like. I am not interested in instruction based data, I would like to use just raw text.
Do I have to care about any format or can I just create a full flowing text txt.file ?
Thanks! | 2023-07-13T09:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/14yg8b5/dataset_requirements/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yg8b5 | false | null | t3_14yg8b5 | /r/LocalLLaMA/comments/14yg8b5/dataset_requirements/ | false | false | self | 5 | null |
Stable Diffusion | 9 | Hi guys, I just wanna say that I love this community and I love the resources I've found for llms like llama.cpp, oobabooga, etc. but I really wanna try out running stable diffusion stuff locally now but I can't seem to find open-source stuff for it and don't even know where to really look for beginner friendly not that hardware intensive stuff, does anyone have a recommendation of just where to start getting into that? like I don't know, a llama.cpp equivalent for stable diffusion models, I honestly don't even know if they have an equivalent to ggml format and yeah I just wanna learn more about it but don't know where to start. | 2023-07-13T08:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/14yf7e7/stable_diffusion/ | sebramirez4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yf7e7 | false | null | t3_14yf7e7 | /r/LocalLLaMA/comments/14yf7e7/stable_diffusion/ | false | false | self | 9 | null |
A theoretical roadplan for AGI, maybe some developer here could be interested | 1 | 2023-07-13T08:41:01 | https://www.reddit.com/r/singularity/comments/14ybisu/why_massive_context_lengths_and_a_mixture_of/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1 | Mission-Length7704 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14yf757 | false | null | t3_14yf757 | /r/LocalLLaMA/comments/14yf757/a_theoretical_roadplan_for_agi_maybe_some/ | false | false | default | 1 | null | |
Sources: Meta is poised to release a commercial version of LLaMA imminently and plans to make the AI model more widely available and customizable by companies | 303 | 2023-07-13T08:08:24 | https://archive.is/WS877 | Zyj | archive.is | 1970-01-01T00:00:00 | 0 | {} | 14yemv9 | false | null | t3_14yemv9 | /r/LocalLLaMA/comments/14yemv9/sources_meta_is_poised_to_release_a_commercial/ | false | false | 303 | {'enabled': False, 'images': [{'id': '2C7dUvYiFqhCAdDqMyHKP-TQ_1nBTG9Fi1j6D1Xlo1g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=108&crop=smart&auto=webp&s=2cd226c4c3aa657c3efe7a564cdfbc40c1febc5c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=216&crop=smart&auto=webp&s=254a082a42680cd2bdf5fbe610f75e748557d869', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=320&crop=smart&auto=webp&s=8afa957c693ca2fb12e160edb02428228153d7e7', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=640&crop=smart&auto=webp&s=e65eda9ff63a73b8e506c0d44e4ebe3393c2c6f7', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?width=960&crop=smart&auto=webp&s=40ac720f1e0305d368cb3e25e245c4d9fae6c4c0', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/VYy3lq8gMCkm4scfSONS1aZWMrtNj3kRPNARIofSfyw.jpg?auto=webp&s=41735b6b91616583fa58049cdf9c9801044964e8', 'width': 1024}, 'variants': {}}]} | ||
Our first OpenOrca Model Preview is here! | 66 | ​
We beat current state of the art for public model releases in this class, for <$200 total training budget.
[https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
​
We used our own dataset, OpenOrca. We trained on \~6% of our data and saw \~60% of the total improvement achieved in the Orca paper on hard reasoning tasks from BigBench-Hard and AGIEval.
​
[https://huggingface.co/datasets/Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
​
BigBench-Hard and AGIEval have been the hardest for small models to improve on.
​
In some cases, we beat the original Orca paper's claims.
​
"Our Relative Improvement" shows 100% when our score matches that achieved in the Orca paper, so >100% means we've outperformed.
​
We're excited to be achieving state of the art performance in the 13B model class with a fast and simple train and expect to meet or beat the Orca scores with full model trainings.
​
We have several training partners we'll be announcing soon and continue to expand and refine our dataset.
​
Get in touch if you'd like to work with us to expand the performance boundaries of open source AI!
​
[AlignmentLab.ai](https://AlignmentLab.ai)
​
You can visualize our OpenOrca dataset via Atlas here:
[https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) | 2023-07-13T07:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/14ye7nq/our_first_openorca_model_preview_is_here/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ye7nq | false | null | t3_14ye7nq | /r/LocalLLaMA/comments/14ye7nq/our_first_openorca_model_preview_is_here/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': '7Ig2FG5EWbGTMd0_W6GGGNwn1jmkXi5VMp-JtJ9VsHI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=108&crop=smart&auto=webp&s=679d090619256f7b14b7856b99f6b1aa35af57e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=216&crop=smart&auto=webp&s=c0947b7daceb2f9cb019ae6799e70eca34e186ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=320&crop=smart&auto=webp&s=30e6f6478f45fdc78bff9e5f0eec5bee3de1e00c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=640&crop=smart&auto=webp&s=8b2508645708a95dd0e0989d36a42fe7799d5dfc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=960&crop=smart&auto=webp&s=8ceb05cc52b01d36c0acb0388aa2a0d44cdd5c92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=1080&crop=smart&auto=webp&s=8a1e00ae476f451fb3a9c12ab8224b2e8af535c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?auto=webp&s=9ed318b363aca0c2f3189a07988e39d6939302c8', 'width': 1200}, 'variants': {}}]} |
Help a newbie here! | 2 | Hi there! Can anyone point me to which one of the text generation UI are easy to setup and can use GPU?
So far, I have tried dalai (bad magic error, doesn't work to me), llama.cpp (I can't get it installed. I'm dumb), Fastchat (I can only use Vicuna and Alpaca ggml doesnt work cuz I'm dumb) and currently using koboldcpp which works very well as of now aside from my ggml model is outdated it says so I am unable to use my GPU.
I am currently looking at the git repo of oobabooga. I want something similar to Automatic1111 for SD which just works.
I use Linux Mint and NVIDIA 1060 GPU. Thanks | 2023-07-13T07:42:44 | https://www.reddit.com/r/LocalLLaMA/comments/14ye71x/help_a_newbie_here/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ye71x | false | null | t3_14ye71x | /r/LocalLLaMA/comments/14ye71x/help_a_newbie_here/ | false | false | self | 2 | null |
The head of DeepMind has confirmed the authenticity of an email in which a Google engineer doubted the company's future in AI | 21 | 2023-07-13T07:11:54 | https://gagadget.com/en/ai/277135-the-head-of-deepmind-has-confirmed-the-authenticity-of-an-email-in-which-a-google-engineer-doubted-the-companys-future/ | yehiaserag | gagadget.com | 1970-01-01T00:00:00 | 0 | {} | 14ydof5 | false | null | t3_14ydof5 | /r/LocalLLaMA/comments/14ydof5/the_head_of_deepmind_has_confirmed_the/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'yH5kikl13IlZqVuzBY0m6s-juZC26rmWEPrCac6mQKk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=108&crop=smart&auto=webp&s=cbee981fb347276daf73bb07457d8d7c94a0cdb7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=216&crop=smart&auto=webp&s=ec82e20df1c0641c7616ce8a54f9f1b06aa23973', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=320&crop=smart&auto=webp&s=99532836d64518fc0d609d111241f94cd67fb890', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?width=640&crop=smart&auto=webp&s=4018ee08fff9638bd873712b6659dc3e10eac248', 'width': 640}], 'source': {'height': 441, 'url': 'https://external-preview.redd.it/eBn1ItPfdFwnn2Q-NUJIG9E7E5T2u4tzZoQ1W1tF0hA.jpg?auto=webp&s=1a2123db77ec20f90869e0ad09633c6f8569b253', 'width': 840}, 'variants': {}}]} | ||
airoboros-65B-gpt4-1.2-GGML - Solution of Egg Balancing Puzzle | 5 | inspired by this post [https://www.reddit.com/r/LocalLLaMA/comments/14xolpe/follow\_up\_to\_who\_of\_them\_is\_older/](https://www.reddit.com/r/LocalLLaMA/comments/14xolpe/follow_up_to_who_of_them_is_older/)
CC [u/mosquit0](https://www.reddit.com/user/mosquit0/)
I used a similar prompt to hopefully improve the model's answer. I also set the params such that it favours accuracy over creativity.
Here is the full transcript:
[https://paste.c-net.org/ClinicsGently](https://paste.c-net.org/ClinicsGently)
**Any tips on how to make the model more intelligent? also maybe use a different model? :)**
**thanks**
| 2023-07-13T06:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/14ycygb/airoboros65bgpt412ggml_solution_of_egg_balancing/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ycygb | false | null | t3_14ycygb | /r/LocalLLaMA/comments/14ycygb/airoboros65bgpt412ggml_solution_of_egg_balancing/ | false | false | self | 5 | null |
ex-ChatGPT newbie question | 0 | So far my experience has only been with ChatGPT and GPT4, but their replies have become unbearably "moral" and "ethical" in what appears to be support for a political narrative under the banner of "safety." So, I'm in my first hour of looking for non-OpenAI alternatives.
My question is, are there non-locally ran alternatives (server side), as I only have 16GB of memory and I am reading that the locally ran need more memory. Or are there locally ran AI that are not so resource intensive?
Reminder, I'm only an hour into looking for an alternative to OpenAI products. Edit//I'm 8 hours into this at this point. Also, I not trust search engines (suggest one that is trustable if you know one), as they are all leading me to where I don't want to go.
Thank you in advance. | 2023-07-13T04:41:48 | https://www.reddit.com/r/LocalLLaMA/comments/14yavm8/exchatgpt_newbie_question/ | Ethan_Boylinski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yavm8 | false | null | t3_14yavm8 | /r/LocalLLaMA/comments/14yavm8/exchatgpt_newbie_question/ | false | false | self | 0 | null |
Are there instructions on how to get set up with a Local LLaMA? GPT 3.5 is just good enough to run my project. I am keen to get something better, run locally and with a larger token count. Not sure where to start! | 0 | I have a 1600S and Ryzen 5 3600 computer at home that although I use for gaming, am interested in using for Local LLaMA. I am very new to all of this but am interested.
Are there resources available to help me get started?
Seems like there are new Local LLM's being pumped out constantly that I would love to try.
Thanks! | 2023-07-13T04:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/14yag4j/are_there_instructions_on_how_to_get_set_up_with/ | The-SillyAk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14yag4j | false | null | t3_14yag4j | /r/LocalLLaMA/comments/14yag4j/are_there_instructions_on_how_to_get_set_up_with/ | false | false | self | 0 | null |
Several newb questions | 0 | I have successfully installed (I'm not even sure what to call it? Alpaca? LLaMA? Surge?) and downloaded Airoboros 33B, going to switch to the 16GB model. This brought my system to it's knees. 12900k / RTX 4090 / 32GB RAM.
1. How do I enable the GPU because I don't think its enabled.
2. How can I give it access to the open internet? For example I would like to ask it "How many actors were nominated for an Oscar and didn't win, that were 42 years old and it happened in an even numbered year" and have it able to access Google / Wikipedia / iMDB and provide an answer?
3. Can I give it access to a DB on the internet that requires a login?
4. If I do give it that access what are the security risks to me?
​
Thanks for your help, this is very exciting! | 2023-07-13T00:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/14y5cfw/several_newb_questions/ | AstroAlto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y5cfw | false | null | t3_14y5cfw | /r/LocalLLaMA/comments/14y5cfw/several_newb_questions/ | false | false | self | 0 | null |
GPT4ALL and 33B/40B models? | 1 | I'm running the GPT4ALL program and I like the interface. I've got 13b models to run and the smaller ones. But I've downloaded a few of the 33B models and they don't seem to show up in the application. I put them in the correct folder with the other files. I made sure it was the single GGML bin files.
Does anyone know what I am doing wrong or if there just an issue limiting sizes that can be ran on GPT4ALL (Google turned up nothing)?
​ | 2023-07-12T22:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/14y2oox/gpt4all_and_33b40b_models/ | KookyKaleidoscope152 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y2oox | false | null | t3_14y2oox | /r/LocalLLaMA/comments/14y2oox/gpt4all_and_33b40b_models/ | false | false | self | 1 | null |
OpenOrca-Preview1-13B released | 115 | Today, the Open-Orca team released OpenOrca-Preview1-13B
This is llama 13b fine tuned on a small subset (6%) of the Open-Orca instructions dataset.
The Open-Orca instructions dataset is an attempt to reproduce the Orca paper.
They claim they got 60% of the improvement with 6% of the data.
The fine tuning used 8x A100-80G GPUs for 15 hours. Commodity cost was < $200.
I am excited to see the models finetuned on the entire dataset.
https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B | 2023-07-12T22:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/14y2f74/openorcapreview113b_released/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y2f74 | false | null | t3_14y2f74 | /r/LocalLLaMA/comments/14y2f74/openorcapreview113b_released/ | false | false | self | 115 | {'enabled': False, 'images': [{'id': '7Ig2FG5EWbGTMd0_W6GGGNwn1jmkXi5VMp-JtJ9VsHI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=108&crop=smart&auto=webp&s=679d090619256f7b14b7856b99f6b1aa35af57e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=216&crop=smart&auto=webp&s=c0947b7daceb2f9cb019ae6799e70eca34e186ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=320&crop=smart&auto=webp&s=30e6f6478f45fdc78bff9e5f0eec5bee3de1e00c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=640&crop=smart&auto=webp&s=8b2508645708a95dd0e0989d36a42fe7799d5dfc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=960&crop=smart&auto=webp&s=8ceb05cc52b01d36c0acb0388aa2a0d44cdd5c92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?width=1080&crop=smart&auto=webp&s=8a1e00ae476f451fb3a9c12ab8224b2e8af535c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vny-8ESRqfJT0r548IJUn0evjiKu4ZA-PIWjBspqPCY.jpg?auto=webp&s=9ed318b363aca0c2f3189a07988e39d6939302c8', 'width': 1200}, 'variants': {}}]} |
Post short chat exchanges or answers that surprised you. | 2 | I want to see some screenshots or texts of short exchanges that surprised you.
I don't want long stories - who has time to read all that.
Here is my example from today:
https://preview.redd.it/odi03m0cvlbb1.png?width=752&format=png&auto=webp&s=09dcc87a153762da2b7d245ebaaf9ffa57559b71
The last line did indeed surprised me - because the Assistant responded in a way as if it actually got the joke I was playing on it.
Things like that. Short, funny, surprising... | 2023-07-12T21:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/14y1yos/post_short_chat_exchanges_or_answers_that/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y1yos | false | null | t3_14y1yos | /r/LocalLLaMA/comments/14y1yos/post_short_chat_exchanges_or_answers_that/ | false | false | 2 | null | |
ERROR:No model is loaded! Select one in the Model tab | 1 | However in the web UI's model tab, selecting anything, for example llama.cpp and clicking the download button to the right returns a python error.
*Traceback (most recent call last): File “../oobabooga_linux/text-generation-webui/server.py”, line 142, in download_model_wrapper model, branch = downloader.sanitize_model_and_branch_names(model, branch) File “../oobabooga_linux/text-generation-webui/download-model.py”, line 37, in sanitize_model_and_branch_names if model[-1] == ‘/’: IndexError: string index out of range* | 2023-07-12T21:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/14y1jpl/errorno_model_is_loaded_select_one_in_the_model/ | aeniFi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14y1jpl | false | null | t3_14y1jpl | /r/LocalLLaMA/comments/14y1jpl/errorno_model_is_loaded_select_one_in_the_model/ | false | false | self | 1 | null |
Why do they always or often fail at this task? Can Llama models do this? | 0 | [removed] | 2023-07-12T21:20:57 | https://www.reddit.com/gallery/14y12fm | ninjasaid13 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14y12fm | false | null | t3_14y12fm | /r/LocalLLaMA/comments/14y12fm/why_do_they_always_or_often_fail_at_this_task_can/ | false | false | default | 0 | null |
Who's participating? | 1 | LLM efficiency challenge at Neurips. | 2023-07-12T21:20:47 | https://llm-efficiency-challenge.github.io/challenge | turtle-run | llm-efficiency-challenge.github.io | 1970-01-01T00:00:00 | 0 | {} | 14y12af | false | null | t3_14y12af | /r/LocalLLaMA/comments/14y12af/whos_participating/ | false | false | default | 1 | null |
Who of them is older - llama.cpp Server - German Vicuna | 0 | …
TL;DR:
- I believe that llms struggle with understanding some concepts that related to temporal relations, kinship, and ordinal relations. llms may lack the capacity to grasp specialized thinking concepts. Larger models may perform better as they have more capacity to consider these concepts.
- I think there is more an abstract understanding of concepts but lack of concrete understanding.
- This reminded me of the weak central coherence theory in autism.
- Recommendation of llama.cpp server for very fast inference.
…
---
…
I would like to start a discussion and know what your thoughts are on this topic.
I thought about why there are questions that the language models can only answer very badly or not at all. This is not a scientific research or discussion of the topic, just my thoughts and impressions.
It seems to me that there are discernible patterns among the questions that cause difficulty for language models.
My personal impression so far is that LLMs have difficulty with the following (aside from the already well-known areas like coding and arethmetics, etc):
- Understanding temporal relations using certain linguistic facts (e.g. when using words like: later, before, after, younger, earlier, afterwards, etc). In a book I can't remember the title of, which was about language and thinking, I once read that most people process and store temporal segments and spatial distances in a very similar way. I don't know if the following sentence can be said like that in English, but in German I could say something like "Oh, but we're talking about a very distant generation of us" and almost all people would immediately know that I don't mean all the mothers and fathers who are hundreds of miles away from me, but that I'm referring to the past. I wonder if LLMs have difficulty with some tasks because those tasks require specialized thinking concepts that cannot be represented with just plain words per se. And I wonder if this is why larger models tend to solve such tasks better than models with fewer parameters, because their neural network had more capacity to consider a correlate for such a thought concept.
- Kinship and family relations. It may take a kind of "ego" and an identification of the self to realistically 'understand' this concept.
- It also seems to me that LLMs have difficulty understanding ordinal relations and integrating them into a real context. Although they seem to understand these relations quite well in an abstract way, the concrete meaning or manifestation of ordinal relations often seems to cause them problems. For example, to capture the carrying range of such statements and to integrate them correctly into the context: to be second in a race (relates to a status); my last thought was... (relates to mentalization); the third to the left of me (refers to spatial indication); who was born first (as in the example in a subreddit - refers to the temporal indication) So, my thought is, in order to grasp with real understanding, to which an ordinal indication refers exactly, it requires a concretical understanding of the concept and not just an abstract one.
…
---
…
There is an example below, but in the course of this I would like to recommend the llama.cpp server, which can be seen in the [screenshot](https://imgur.com/IpXzoR5) (I only discovered it myself yesterday). This achieves the fastest inference times of all sorts of solutions I know so far and the implementation is really very simple. Simply call ./server -m /path/to/model and in your browser call localhost:8080. This UI is still very minimalistic, which I personally find beautiful. (This just seems like bad surreptitious advertising to me, but I assure you that I was not commissioned or paid by anyone :D )
…
Okay, the second is a recommendation for German-speaking users: In this example, I used from [Huggingface User **jphme**](https://huggingface.co/jphme) the following 7B model: [Orca Mini v2 German 7b GGML](https://huggingface.co/jphme/orca_mini_v2_ger_7b-GGML)
He also has a Vicuna-13B-v1.3-ggml in German, but I haven't tried it yet.
So here is the English translation from the short dialogue with Orca-Mini-7B:
https://imgur.com/IpXzoR5
…
…
…
> User: I have a logic task for you here: Daniel is Eddy's father. Which of these two was born first?
> Response: The problem with this logical question is that it is based on the false assumption that only two people can exist, namely Daniel and Eddy. In reality, there are many people in the world, and everyone has their own father.
> User: Okay, you're right. But my question does not only relate to these two people. So, with regard to these two people (Daniel and Eddy): Who was born after whom?
> Response: Regarding Daniel and Eddy, the answer to this logical question is as follows: Daniel was born first before Eddy.
…
…
What is interesting here is that I even made a mistake in my follow-up statement by mistakenly denying my own statement and making the task unnecessarily difficult ("...my question does **not** only refer to these two people..."), but the model actually understood what it was about. But here you can see what I mean by abstract understanding: Since my question was asked across the board, the model apparently referred the word "first" to the whole world. This error cannot come from the training data, but in my opinion it arises from the absence of a concrete understanding – or to say it more correctly: from the absence of enough well trained parameters to achieve a more advanced prediction which would mimic a human concrete understanding.
*(I'm just realizing that my contribution is getting much longer than I intended, so I'm trying to make it short now)*
- In the considerations described above, I can also think of a term from the field of autism, namely the [weak central coherence theory](https://en.wikipedia.org/wiki/Weak_central_coherence_theory). This theory says that autistic brains automatically pay more attention to details than to the big picture (to the whole context). This would explain why autistic people often understand things very literally, such as irony, or why they have difficulties in social interaction, where it is particularly important to recognize contexts. I see similarities in communication with current LLMs. I say this as my purely personal impression and as someone who is himself diagnosed with Asperger's syndrome or ASD. Communication with the LLMs causes me a similar "feeling" to the feeling I get when I talk to other autistic people. It's a hard-to-describe feeling, but all the autistic people I know personally confirm the same thing (that you feel it almost immediately when someone else is autistic - not that with the LLMs).
…
---
…
The literal taking of the word "first" brought me to this thought. Because it's true in itself. **First** in relation to what? One could also understand it this way: Was the father the firstborn in the context of his possible siblings or was the son a firstborn in the context of his possible siblings? Etc. I also asked this question a few other models and usually set relatively deterministic samples. I then tried to observe possible 'thoughts'/predictsions of the LLMs with the --verbose (llama.cpp) or --debugmode (koboldcpp) option. It is very interesting to see if, for example, you set top-k to 2 or 3, because then you can see which other thought processes (second and third words with corresponding probabilities) would have been possible. The second and third most common words seemed to refer very often to siblings. This illustrates quite well that the model actually understood something different. Namely "first" in the sense of first-born within the group of children/siblings...
…
Okay, since the post has been longer than I wanted anyway and because I lost the thread out of all the excitement after my 'auting' :P (no, to be honest, I assume that I'm no longer a rare species 'here' in this environment) I finish this and would be happy to hear if others have already thought about this stuff and/or what they think now | 2023-07-12T21:05:34 | https://www.reddit.com/r/LocalLLaMA/comments/14y0o46/who_of_them_is_older_llamacpp_server_german_vicuna/ | Evening_Ad6637 | self.LocalLLaMA | 2023-07-12T21:27:11 | 0 | {} | 14y0o46 | false | null | t3_14y0o46 | /r/LocalLLaMA/comments/14y0o46/who_of_them_is_older_llamacpp_server_german_vicuna/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9M6Jt1vsUoqDFZbrbLdLQ2nbm2rqQyRKz3NLiVLNnx4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?width=108&crop=smart&auto=webp&s=c87af5c53984d2f8976b8cefa1c11fa8d905f4ba', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?width=216&crop=smart&auto=webp&s=71a57826eedd57f2fba4325ad3f3d6506057ae87', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?width=320&crop=smart&auto=webp&s=71623b9bc8f9af0814f45599bb08a1f703500438', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/1J_1NaZJ3YsRa4_I6PPJS3LwPd1ssF1FDk34eykY5Ik.jpg?auto=webp&s=723fb8cc9a03d9fbec56d111c4e80e15dc2f5858', 'width': 600}, 'variants': {}}]} |
Is there a way to persist Llama-cpp-python caches to disk and then reload them at inference time? | 7 | Since regenerating cached prompts is *so* much faster than processing them each time, is there any way I can pre-process a bunch of prompts, save them to disk, and then just reload them at inference time?
For example, say I have a 2000-token prompt that I use daily. Since we’re talking about a program that uses *all* of my available memory, I can’t keep it running while I’m working. I don’t want to have to wait for it to ingest that prompt every time I start my Python program. Wat do? | 2023-07-12T20:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/14xzb7a/is_there_a_way_to_persist_llamacpppython_caches/ | E_Snap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xzb7a | false | null | t3_14xzb7a | /r/LocalLLaMA/comments/14xzb7a/is_there_a_way_to_persist_llamacpppython_caches/ | false | false | self | 7 | null |
Floneum, a graph editor for local AI workflows | 14 | 2023-07-12T19:50:21 | https://news.ycombinator.com/item?id=36697595 | _underlines_ | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 14xyp6n | false | null | t3_14xyp6n | /r/LocalLLaMA/comments/14xyp6n/floneum_a_graph_editor_for_local_ai_workflows/ | false | false | default | 14 | null | |
Need help setting a Prompt Template correctly in Oobabooga | 8 | I downloaded the [airoboros 33b GPTQ](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ) model and the model started talking to itself. I figured it needed a prompt template. But this is what is given on /u/TheBloke 's page:
"A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: \[prompt\] ASSISTANT:"
However, text generation UI has 5 fields: User String, Bot String, Context, Turn Template & Command for chat-instruct mode.
Can anyone tell me what goes where? | 2023-07-12T19:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/14xyntm/need_help_setting_a_prompt_template_correctly_in/ | trithilon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xyntm | false | null | t3_14xyntm | /r/LocalLLaMA/comments/14xyntm/need_help_setting_a_prompt_template_correctly_in/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '8AK3SkatWEN74ojYFGTpToLsvPMQRtCspidPdVy7ReU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=108&crop=smart&auto=webp&s=9f98761408cf4875b0c0f62397ca67a752d8ebf4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=216&crop=smart&auto=webp&s=12496afef68559fedf866241efadc4250cb4d9f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=320&crop=smart&auto=webp&s=940cf30926d5ac3edd524edbce78d1b425d76d1f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=640&crop=smart&auto=webp&s=7c3ca2a7c727981265e01a4f5d9ccafa3657626d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=960&crop=smart&auto=webp&s=ce8f6a8cb83e6f5e2915acce4117141c111ec796', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?width=1080&crop=smart&auto=webp&s=ffa5f364f34063056748f74b7f1369f43b373089', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JQAt8RIQL0WnvW9fmBtmS78K4rgIYb79dgELg176sqE.jpg?auto=webp&s=5ea94b0abd3329f42b8cca54c95a50ec61507d03', 'width': 1200}, 'variants': {}}]} |
Handling chat histories that are longer than the context length? | 8 | For a chatbot, how do I handle a chat history that is longer than the context length for the model? I want the chatbot to be able to refer to as much of the chat as possible. I know summarizing and vector databases are useful to tackle this but I wasn't able to find anything specific to chat histories.
Are there any example implementations which I can refer to? Or any resources that may help? | 2023-07-12T19:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/14xyg8h/handling_chat_histories_that_are_longer_than_the/ | Chirumer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xyg8h | false | null | t3_14xyg8h | /r/LocalLLaMA/comments/14xyg8h/handling_chat_histories_that_are_longer_than_the/ | false | false | self | 8 | null |
Suggestions for a good Story Telling model? | 18 | Hi, I'm looking at the models over at:
https://huggingface.co/TheBloke
I have 12GB of VRAM, so I'm choosing models that have 13B, GPTQ, and SuperHOT-8K.
That still leaves me with lots to choose from! Any idea which are good for "Write a short a story about..." | 2023-07-12T17:55:19 | https://www.reddit.com/r/LocalLLaMA/comments/14xvost/suggestions_for_a_good_story_telling_model/ | c_gdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xvost | false | null | t3_14xvost | /r/LocalLLaMA/comments/14xvost/suggestions_for_a_good_story_telling_model/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'ijgSlZO3K44WshhENFl9jhybG8Na3DBCsOXCuyZgycw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=108&crop=smart&auto=webp&s=3e5fdcc67bd2b0779a9f019942e0727ffb86630b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=216&crop=smart&auto=webp&s=b390a77acee51d46b2ca5992c38755e0ea4269e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=320&crop=smart&auto=webp&s=23586102b6805c7f96721c02b9cad47b5dbfef49', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=640&crop=smart&auto=webp&s=205e31dad1af816278184e44d5aa56e886ad9b4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=960&crop=smart&auto=webp&s=a2a9e82e506b94bd26ef0019ae18a7b946ccdc74', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?width=1080&crop=smart&auto=webp&s=928a52a138d0687290827ee2224923bb8f03e39e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xoOXoChYs6mWDlUf2h1S1ZmC14O0X-Z2_QHtAdDh0C8.jpg?auto=webp&s=addebda9b8be1b664eaee5ea404f4c7df3d5eef2', 'width': 1200}, 'variants': {}}]} |
Best local LLMs for medical knowledge | 1 | [removed] | 2023-07-12T17:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/14xvndz/best_local_llms_for_medical_knowledge/ | CalmingColor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xvndz | false | null | t3_14xvndz | /r/LocalLLaMA/comments/14xvndz/best_local_llms_for_medical_knowledge/ | false | false | self | 1 | null |
How long would it take/cost for someone to train BLOOM-176B on airoboros 1.4? | 5 | I was thinking someone could spend a lot of money with runpod or another cloud provider to train BLOOM on a larger and higher quality dataset like airoboros, since BLOOM is known for being undertrained (366B tokens I think?).
Does anyone know how many GPUs this would take or how long it would take and how much it would cost with a cloud provider for training or finetuning? Such a large number of parameters on a really good and high quality dataset should get us closer to GPT-3.5 levels of quality.
If someone with a lot of experience could actually train the model on this dataset (I have no idea how to train a model myself locally or on cloud) I could pay the costs of finetuning on cloud services assuming it wasn’t too high. The only thing really limiting this community is money, which companies have a lot of so they can develop very high quality models that open source really doesn’t have the money to develop. | 2023-07-12T16:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/14xu1uw/how_long_would_it_takecost_for_someone_to_train/ | TheRedMoonLives | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xu1uw | false | null | t3_14xu1uw | /r/LocalLLaMA/comments/14xu1uw/how_long_would_it_takecost_for_someone_to_train/ | false | false | self | 5 | null |
Best way to support open LLMs? | 41 | Hey everyone. I work professionally in optimizing LLM compute, and I'm interested in using my experience to tip the balance more toward individual and open LLMs, since the industry is very enterpise-focused right now.
I've started a project that I think will contribute to this goal, but I wanted to ask the community: what do you think is most needed to bootstrap open LLMs? What is most holding back open source and individual developers from competing with the big guys? | 2023-07-12T16:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/14xtfbl/best_way_to_support_open_llms/ | MartinCampbell6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xtfbl | false | null | t3_14xtfbl | /r/LocalLLaMA/comments/14xtfbl/best_way_to_support_open_llms/ | false | false | self | 41 | null |
Have 10 NVIDIA A6000 cards - need advice. | 3 | Hey, Reddit!
I've got ten brand new NVIDIA A6000 cards, still sealed, except for one I used for testing. Here's the catch: I received them directly from NVIDIA as part of a deal, so no official papers or warranties to provide, unfortunately. What do you think I should do with them? Would there be any interest in buying them?
Appreciate any advice or suggestions! Thanks! | 2023-07-12T14:36:53 | https://www.reddit.com/r/LocalLLaMA/comments/14xqcdw/have_10_nvidia_a6000_cards_need_advice/ | tadem2k3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xqcdw | false | null | t3_14xqcdw | /r/LocalLLaMA/comments/14xqcdw/have_10_nvidia_a6000_cards_need_advice/ | false | false | self | 3 | null |
🤖 AI Podcast - Voice Conversations🎙 with Local LLMs on M2 Max | 25 | ​
https://reddit.com/link/14xpw6m/video/65zbjgsxjjbb1/player
Code: [https://github.com/xorbitsai/inference/blob/main/examples/AI\_podcast.py](https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py)
This demo is powered by [Xorbits Inference](https://github.com/xorbitsai/inference), a library designed to deploy and serve LLMs (but not the only) on-premises, even on your laptop (available for macOS/Windows/Linux).
You can install Xinference via PyPI. We strongly recommend using a new virtual environment to avoidpotential dependency conflicts:
$ pip install "xinference[all]"
Run:
$ xinference
And you are all set! 🍻
You can now chat with your favorite open-source LLM:
https://preview.redd.it/zbx4vt3iljbb1.jpg?width=2104&format=pjpg&auto=webp&s=f23e414ae05fe35a18ac6c638a4f162ee2915f29
Or you can build an AI powered application with xinference client:
from xinference.client import Client
client = Client("http://localhost:9997")
model_uid = client.launch_model(model_name="chatglm2")
model = client.get_model(model_uid)
chat_history = []
prompt = "What is the largest animal?"
model.chat(
prompt,
chat_history,
generate_config={"max_tokens": 1024}
)
Try using Xinference now! | 2023-07-12T14:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/14xpw6m/ai_podcast_voice_conversations_with_local_llms_on/ | CORNMONSTER_2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xpw6m | false | null | t3_14xpw6m | /r/LocalLLaMA/comments/14xpw6m/ai_podcast_voice_conversations_with_local_llms_on/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'J0ZGjDf4MkFGANtZB1KuqiW-He34U5Q5G8CPc03YOWg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=108&crop=smart&auto=webp&s=a19fb44d1e22dd8e6400166c46e87ac642ff2116', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=216&crop=smart&auto=webp&s=0f455b2dd7bf6a5d79194d3d1c9e1d574145e9ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=320&crop=smart&auto=webp&s=b18e0091ecee3c87aebea56cb22728b00893327a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=640&crop=smart&auto=webp&s=a647b722d63f7ae10794a43707bf589786f4bb6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=960&crop=smart&auto=webp&s=f297b3798e38585bbd1df45942d5ad44ba838747', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?width=1080&crop=smart&auto=webp&s=609eb94bb2dab999542edc854d5766f901cb5cba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nVWeO2RVpBgZ4uiOASaUzOpjzpLwqxhXaaql4SSP3Ko.jpg?auto=webp&s=1b85be589971c34aad0ffc919e3c5152b1274f7e', 'width': 1200}, 'variants': {}}]} | |
Help a newbie :) | 4 | Hey all
I'm new to LLMs and everything AI related, in respect of offline. Here is what I have:
1. GPT4 sub / API
2. Gaming rig with a single 3090 and 128GB RAM
3. 2019 15" MacBook Pro with 32GB RAM
As far as my knowledge goes:
1. Installed Oobabooga and played around with various models from HF on the gaming rig (offline)
2. Installed Stable Diffusion offline on the gaming rig, pretty amazing stuff
What do I want to do?
I want to learn how to tune my own models, not for corporate use, just for my own interests. For the purpose of TLDR:
1. People talk of LoRA and Qlora github repos having examples of how my dataset should be formatted, which repos? is there any specific ones that can help me with examples? I can code in Python, its just a case of seeing some example and then using it as a reference.
2. Offline - I want to use an API like I could with GPT4, but I want it to be offline / running on my gaming rig, what do you recommend? I didn't think I would use Oobabooga for this, I was hoping to run an actual model from the command line and then learn the API and I would also use this model for training.
3. Cloud - I'd use the cloud for training, though my initial dataset is tiny, I just want to use it to learn how the whole process works, before moving on to something more challenging.
4. I assume the actual training piece is the easiest part, the data accuracy etc is possibly the most time consuming of all.
Can anyone help me out?
Also, it would be nice to run these models on a laptop, as I travel, is it worth picking up an old M1 MacbookPro with 64GB RAM? would that achieve that?
Thanks | 2023-07-12T14:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/14xphbs/help_a_newbie/ | Kreator333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xphbs | false | null | t3_14xphbs | /r/LocalLLaMA/comments/14xphbs/help_a_newbie/ | false | false | self | 4 | null |
Follow up to: "Who of them is older?" | 9 | I was quite intrigued by the original question: [https://www.reddit.com/r/LocalLLaMA/comments/14uujna/which\_model\_can\_answer\_this\_question\_correctly/](https://www.reddit.com/r/LocalLLaMA/comments/14uujna/which_model_can_answer_this_question_correctly/)
I'm more interested in the prompt engineering that helps the LLM to solve the problem and not saying: "it works or doesn't work". For example when using recommended prompt for Orca:
Version 1 - original prompt
---------
```
\### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
\### User:
Alex is Charlie's father. Who of them was born later?
\### Input:
\### Response:
```
The answer is wrong (\`./main --file prompts/orca\_chat\_v2.txt --ctx\_size 2048 --temp 0.2 --model models/orca\_mini\_v2\_13b.ggmlv3.q4\_0.bin --threads 10\`)
```
Based on the information given, we cannot determine who was born later between Alex and Charlie's father. We need more information to answer this question accurately.
```
The answer is more or less the same over 10 tries.
Version 2 - step-by-step thinking
---------
When changing the prompt slightly by adding "step-by-step" reasoning:
```
\### Response:
That's easy let's think step-by-step:
```
The answers (while not perfect at least try to analyse the problem):
```
1. We know that Alex and Charlie are related as father and son, respectively.
2. To determine who was born later, we need to compare their birth dates.
3. Let's assume that Alex was born on January 1st, 1980 and Charlie was born on July 15th, 2000.
4. From this information, we can conclude that Charlie was born later than his father Alex.
```
The success rate of this approach is more or less 20%. Can you share your prompts that help the LLM to arrive at the answer without hardcoding too much information about the question itself?
Version 3 - chatgpt-4 assisted prompt
---------
In this version the system prompt is much more advanced and it isn't hardcoded to the specific problem (maybe to the extent that there are some entities and relationship between them).
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
In answering questions, follow these steps:
1. **Entity Identification**: Identify the main entities involved in the query. Entities can include people, objects, events, or abstract concepts.
2. **Relationship Comprehension**: Determine the relationships between these entities. Relationships may be explicit in the text or may need to be inferred based on context and general knowledge.
3. **Implication Understanding**: Understand the general implications of these relationships. These implications can be based on established facts, principles, or rules related to the identified relationships.
4. **Question Contextualization**: Align the implications of the relationships with the context of the query. This alignment should guide your response to the query.
5. **Answer Generation**: Based on the understanding of the entities, their relationships, and implications, generate an appropriate response to the query.
This strategy is applicable to a wide variety of questions and should help you make informed decisions and provide accurate answers even in complex situations.
### User:
Alex is Charlie's father. Who of them was born later?
### Input:
### Response:
That's easy let's think step-by-step:
1.
```
Success rate 5/5.
Yesterday someone posted this on hackernews - https://news.ycombinator.com/item?id=36677034 (GPT-Prompt-Engineer). Shouldn't it be the case to really utilise any LLM the prompts have to be designed or predicted for the set of problems?
Perhaps even some other LLM can be fine-tuned to generate prompts that help the LLM solve the problem. | 2023-07-12T13:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/14xolpe/follow_up_to_who_of_them_is_older/ | mosquit0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xolpe | false | null | t3_14xolpe | /r/LocalLLaMA/comments/14xolpe/follow_up_to_who_of_them_is_older/ | false | false | self | 9 | null |
Recent updates on the LLM Explorer (15,000+ LLMs listed) | 144 | Hi All!
I'd like to share the recent updates to LLM Explorer ([https://llm.extractum.io](https://llm.extractum.io)), which I announced a few weeks ago. I've implemented a bunch of new stuff and enhancements since that:
* Over 15,000 LLMs in the database, with all the latest ones from HuggingFace and their internals (all properties are visible on a separate "model details" page).
* Omni-search box and multi-column filters to refine your search.
* A fast filter for uncensored models, GGML support, commercial usage, and more. Simply click to generate the list, and then filter or sort the results as needed.
* A sorting feature by the number of "likes" and "downloads", so you can opt for the most popular ones. The HF Leaderboard score is also included.
Planned enhancements include:
* Showing the file size (to gauge the RAM needed for inference).
* Providing a list of agents that support the model based on the architecture, along with compatibility for Cuda/Metal/etc.
* If achievable, we plan to verify if the model is compatible with specific CPU/RAM resources available for inference. We suspect there's a correlation between the RAM needed and the size of the model files. But your ideas are always welcome.
I'd love to know if the loading time if the main page is problematic for you, as it currently takes about 5 seconds to load and render the table with 15K models. If it is, I will consider redesigning it to load data in chunks.
I value all feedback, bug reports, and ideas about the service. So, please let me know your thoughts!
[https://llm.extractum.io](https://llm.extractum.io) | 2023-07-12T11:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/14xmgzg/recent_updates_on_the_llm_explorer_15000_llms/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xmgzg | false | null | t3_14xmgzg | /r/LocalLLaMA/comments/14xmgzg/recent_updates_on_the_llm_explorer_15000_llms/ | false | false | self | 144 | null |
Any idea how to deploy a LocalLlm (GPT4ALL) ? Maybe on a web domain or as a chatbot embed on a website | 5 | I've made an llm bot using one of the commercially licensed gpt4all models and streamlit but I was wondering if I could somehow deploy the webapp? Like running the model on my cpu/gpu but sending/receiving the prompts and outputs through a webpage. will I need to make my own API for this? | 2023-07-12T11:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/14xlwkc/any_idea_how_to_deploy_a_localllm_gpt4all_maybe/ | Shoddy_Champion9023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xlwkc | false | null | t3_14xlwkc | /r/LocalLLaMA/comments/14xlwkc/any_idea_how_to_deploy_a_localllm_gpt4all_maybe/ | false | false | self | 5 | null |
Why does GPT4all respond so slowly on my machine? | 7 | It takes anywhere from 3 to 8 minutes to complete a response on my machine. Specs below. I'm just curious if it's the age of the machine, or if it's software inefficiencies that are causing this long wait time for responses to my prompts. Even when it's finished processing, the words appear very slowly.
For example, it's so slow that even a long word will be posted slowly. So it won't just post the word "conversation" it will go "con... ... ver ... sat ... ... ion"
I'm using GPT4all 'Hermes' and the latest Falcon 10. The OS is Arch Linux, and the hardware is a 10 year old Intel I5 3550, 16Gb of DDR3 RAM, a sATA SSD, and an AMD RX-560 video card. All pretty old stuff.
Would upgrading to a higher end computer from 2023 help much? | 2023-07-12T10:48:45 | https://www.reddit.com/r/LocalLLaMA/comments/14xl12f/why_does_gpt4all_respond_so_slowly_on_my_machine/ | rondonjohnald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xl12f | false | null | t3_14xl12f | /r/LocalLLaMA/comments/14xl12f/why_does_gpt4all_respond_so_slowly_on_my_machine/ | false | false | self | 7 | null |
How to visualise LLMs ? | 32 | May be a noob question but, let’s say If I give a prompt to an LLM and it predicts token by token and gives out worlds (tokens) how to visualise the whole functioning pipeline that predicts the next token ? Has there been any research around this with Llama models ? | 2023-07-12T09:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/14xjw3s/how_to_visualise_llms/ | Jealous_Ad4067 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xjw3s | false | null | t3_14xjw3s | /r/LocalLLaMA/comments/14xjw3s/how_to_visualise_llms/ | false | false | self | 32 | null |
Best way to upgrade to run 65B? | 1 | [removed] | 2023-07-12T09:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/14xjrx0/best_way_to_upgrade_to_run_65b/ | Necessary_Ad_9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xjrx0 | false | null | t3_14xjrx0 | /r/LocalLLaMA/comments/14xjrx0/best_way_to_upgrade_to_run_65b/ | false | false | self | 1 | null |
Is there any relative word like 'concept' in LLM? | 0 | 'concept' is famous in Stable Diffusion field, but I have never heard any concept like 'concept' in LLM field. If there is no 'concept' in LLM, so how do people describe LoRA or other adapters? | 2023-07-12T09:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/14xjc54/is_there_any_relative_word_like_concept_in_llm/ | Sad-Lengthiness-6938 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xjc54 | false | null | t3_14xjc54 | /r/LocalLLaMA/comments/14xjc54/is_there_any_relative_word_like_concept_in_llm/ | false | false | self | 0 | null |
2020 MBA M1 any good for LLM? | 7 | I notice the 2020 Macbook Air M1 is on sale on Amazon for Prime Day.
I've had little success with effectivity using AMD GPUs with either llama.cpp or pytorch (I got rocm working but 2GB VRAM isn't enough).
Was wondering whether this Mac would be better or if the memory is still too low (8GB shared RAM/VRAM)? | 2023-07-12T08:48:09 | https://www.reddit.com/r/LocalLLaMA/comments/14ximyq/2020_mba_m1_any_good_for_llm/ | steve_mynott | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ximyq | false | null | t3_14ximyq | /r/LocalLLaMA/comments/14ximyq/2020_mba_m1_any_good_for_llm/ | false | false | self | 7 | null |
Really struggling with open source models | 32 |
Hey all. I’m a designer whose been dabbling in attempting to integrate AI tools into our team at work. Im a bit out my depth and looking for help. Although I'm not a programmer, I've been trying my hand at various open source models on my M1 MacBook, after initially working with GPT-4.
I've tried textgen-web-UI, GPT4ALL, among others, but usually encounter challenges when loading or running the models, or navigating GitHub to make them work.
I've been seeking help via forums and GPT-4, but am still finding it hard to gain a solid footing. My goal is to run a proficient model on my Mac to generate text for work without compromising sensitive information to third parties like OpenAI or Google.
I'm enthusiastic about these developments and eager to be part of them. I appreciate any advice you can offer. Thanks! | 2023-07-12T06:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/14xghbd/really_struggling_with_open_source_models/ | Drake_psd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xghbd | false | null | t3_14xghbd | /r/LocalLLaMA/comments/14xghbd/really_struggling_with_open_source_models/ | false | false | self | 32 | null |
What’s the difference | 1 | [removed] | 2023-07-12T03:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/14xd5kk/whats_the_difference/ | waltercrypto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14xd5kk | false | null | t3_14xd5kk | /r/LocalLLaMA/comments/14xd5kk/whats_the_difference/ | false | false | default | 1 | null |
airoboros-33B reminds me of why I first fell in love with ChatGPT | 148 | 2023-07-12T03:20:43 | zach_will | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14xch59 | false | null | t3_14xch59 | /r/LocalLLaMA/comments/14xch59/airoboros33b_reminds_me_of_why_i_first_fell_in/ | false | false | 148 | {'enabled': True, 'images': [{'id': '-n_yRVQHZbr3aDojYt__3W84l3981idnax70KbMmZxM', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=108&crop=smart&auto=webp&s=6adfb918f8b88ff6aad307580d86f1f4e169ee7b', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=216&crop=smart&auto=webp&s=6a5593a566a9def2a4d688ca5560238554982783', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=320&crop=smart&auto=webp&s=dc14c6497ef0d00ae2d42d4dcec4c35fd8efa6cb', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=640&crop=smart&auto=webp&s=649510cab3d568e28595df4c0c82bd73e3163823', 'width': 640}, {'height': 1034, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=960&crop=smart&auto=webp&s=d1897460ee3a1aae5ee456d84747a0f57df83a39', 'width': 960}, {'height': 1163, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?width=1080&crop=smart&auto=webp&s=38a3867034098ecef4372a483517bde0b72bde2e', 'width': 1080}], 'source': {'height': 1560, 'url': 'https://preview.redd.it/90of5tepcgbb1.png?auto=webp&s=e172df729855108399f39afb871763c1d2d1ecba', 'width': 1448}, 'variants': {}}]} | |||
Outputting attention for HF Llama models | 6 | So if you have a LlamaForCausalLM model from HF, you can get attention from a single forward call by doing
model.forward(inputs.input\_ids, output\_attentions=True)
However, it doesn't seem like that carries over if you use model.generate(). If you just try
model.generate(inputs.input\_ids, max\_new\_tokens=30, output\_attentions=True)
It doesn't return anything other the output tensor.
Is there any way to implement returning the attention weights during each step of the generation from model.generate? The docs seem to imply so but I can't tell. | 2023-07-12T01:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/14x9zrd/outputting_attention_for_hf_llama_models/ | Blergzor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x9zrd | false | null | t3_14x9zrd | /r/LocalLLaMA/comments/14x9zrd/outputting_attention_for_hf_llama_models/ | false | false | self | 6 | null |
Looking for appealing and offline frontend UI for privateGPT running Vicuna 7b | 0 | Hey all, like the title states, I am wondering if there is a readily accessible frontend GUI I can use with privateGPT but maintain the completely offline aspect of it. Right now, I built a simple gradio locally hosted question and answer page, but it is painfully simple and frankly ugly. The two most important things are that it can work with the model that I already have and that it can run offline. I apologize if this is a dumb question, I am still new to this. Any suggestions? | 2023-07-12T00:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/14x8m61/looking_for_appealing_and_offline_frontend_ui_for/ | billnyethesciencebi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x8m61 | false | null | t3_14x8m61 | /r/LocalLLaMA/comments/14x8m61/looking_for_appealing_and_offline_frontend_ui_for/ | false | false | self | 0 | null |
Haven v0.2.0 - Adding support for fine-tuned models | 31 | Hey! A bit more than a week ago [I wrote a post on a tool that a friend and I built to host LLMs on Google Cloud](https://www.reddit.com/r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/). We got a lot of feedback and just released our second version!
The repository: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
This is what has changed:
* We added support for mpt-30!
* You can now add any fine-tuned model to Haven if we support the base-model. [Here is a list of all supported base-models](https://docs.haven.run/models) which includes llama-7b, llama-13b, mpt-7b, mpt-30b and more
* We now support T4 GPUs which are much cheaper than the A100s (Google Cloud charges by the minute so you can spin resources up and down whenever you like)
* We added a pure text-completion endpoint (before we only had chat-completion), so you can now run non-chat models and have finer control over the prompt format.
* We also fixed a bunch of problems that people ran into and made the setup a little bit easier (there is still a long way to go here so bear with us lol)
* If you’re only interested in the orchestration and not in our LLM code you can add your own worker docker-image. This is still super experimental so shoot me a message if you want to do this and I’m happy to chat.
Super interested to hear what you think!
We also created a little Google Colab demo which points at a Haven deployment that we run ourselves. You can send it some prompts and see how it works.
[https://colab.research.google.com/drive/1eGGSisS9Du5-\_KcaejY5y9vk9v7EIfba?usp=sharing](https://colab.research.google.com/drive/1eGGSisS9Du5-_KcaejY5y9vk9v7EIfba?usp=sharing)
Thank you to everyone that started using this and for all the feedback we received!
This is the repository: [https://github.com/havenhq/haven](https://github.com/havenhq/haven) | 2023-07-12T00:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/14x8dsd/haven_v020_adding_support_for_finetuned_models/ | h-konsti | self.LocalLLaMA | 2023-07-12T00:25:44 | 0 | {} | 14x8dsd | false | null | t3_14x8dsd | /r/LocalLLaMA/comments/14x8dsd/haven_v020_adding_support_for_finetuned_models/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'AmmG9EIuuQJAm-fNgSRfIKaL9Yi3ykpeidYk8XJ44e4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=108&crop=smart&auto=webp&s=3690d27109dd23d1f6b0936f0c588864a733205c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=216&crop=smart&auto=webp&s=9a51fa29c60b254211054071ac8f83ea4ca8cd0e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=320&crop=smart&auto=webp&s=e164cf8026e82aefc7b5a092425a8d6b41bf353d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=640&crop=smart&auto=webp&s=83d92f354c95042272c51060defd459dffd2d08d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=960&crop=smart&auto=webp&s=79e8d69bbfd699d9fa665db1db131490485432f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?width=1080&crop=smart&auto=webp&s=39b9ffc1b4c91b16393b35a9ebf191ad3ea36126', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Dl9R6MhHOdidwMetYd_SPTlSQZ5_PAMuB0yo-XlwpY4.jpg?auto=webp&s=45690a1128d1ab1545f2b5bd087b9c19e5fc6682', 'width': 1200}, 'variants': {}}]} |
Can I train ai to response like a person adding as source the messanger chat between us? | 10 | I dont know so good about AI (I'm an economist) But I am fully interested in it Large language models. So maybe this question is stupid.
If I have like 10 years everyday chat records of someone, can I make a chatbot trained to response like him using as source our conversations? I am sure I am not the first one who thought it but i think it is strange it isn't already a thing. | 2023-07-11T23:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/14x6j3h/can_i_train_ai_to_response_like_a_person_adding/ | orestaras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x6j3h | false | null | t3_14x6j3h | /r/LocalLLaMA/comments/14x6j3h/can_i_train_ai_to_response_like_a_person_adding/ | false | false | self | 10 | null |
How to append encoders to existing LLM? | 1 | [removed] | 2023-07-11T22:33:56 | https://www.reddit.com/r/LocalLLaMA/comments/14x5u0t/how_to_append_encoders_to_existing_llm/ | cstein123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x5u0t | false | null | t3_14x5u0t | /r/LocalLLaMA/comments/14x5u0t/how_to_append_encoders_to_existing_llm/ | false | false | self | 1 | null |
Improving vector search by converting documents to question/answer pairs | 3 | 2023-07-11T22:12:09 | https://www.psychic.dev/post/improving-vector-search-results-by-converting-documents-to-question-answer-pairs | valjestir | psychic.dev | 1970-01-01T00:00:00 | 0 | {} | 14x5ait | false | null | t3_14x5ait | /r/LocalLLaMA/comments/14x5ait/improving_vector_search_by_converting_documents/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'RevJNMj984CUvYJtdgOVExkve6bG3X_JfPLlvgYgxmk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=108&crop=smart&auto=webp&s=8bf0ddd2facf9b07252917e86f816bc35ceaf14f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=216&crop=smart&auto=webp&s=8a22b54b1bd7740910e8088a8901c8e014bcb600', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=320&crop=smart&auto=webp&s=9c9a6953f2c202d4d18f4475e4ea80571c1f8009', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=640&crop=smart&auto=webp&s=fc9357a19d174fde654887a7d793e1369ff005e2', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?width=960&crop=smart&auto=webp&s=6ed4bb75651e4bedc131c83f9ee6715ee06a568b', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/DiHLawuxuZt9v2eHoOQ0Z5Ov126UsDMDh3v7SnRevnU.jpg?auto=webp&s=48399a13d0d2d51ed1021a5611ec64b385ad16eb', 'width': 1024}, 'variants': {}}]} | ||
Disallow tokens for simplistic language | 5 | I have been playing around with exllama and noticed that you can disallow certain tokens from being generated. In the example files this is used to force a longer response by disallowing end of sequence tokens for a certain amount of tokens.
At the moment I want to create a bot which uses extremely simplistic language. I was wondering if this could be archived by using the disallow tokens feature to simply disallow all tokens of more complex words (multiple thousand tokens). Has anybody tried something similar already? | 2023-07-11T21:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/14x3h79/disallow_tokens_for_simplistic_language/ | TheNewSecret315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x3h79 | false | null | t3_14x3h79 | /r/LocalLLaMA/comments/14x3h79/disallow_tokens_for_simplistic_language/ | false | false | self | 5 | null |
Creating a new Finetuned model | 23 | So, I started on this a while ago to create a LoRA finetune over a specific domain data to enrich its knowledge of it. Similar to saying that I want the model to be better at Python coding than any existing model (Just an example).
I gathered about 35k samples (In essence 4M tokens), the quality being between average and good, trained it for multiple epochs. The result was a bit surprising honestly.
There was repetition in the answer: If I ask a question, at times it answers gibberish and looks like the English learnt on original model was destroyed.
Hallucination increased, at an alarming rate, once I did the finetuning.
With limited training, the answers were changed drastically. Is this supposed to be the usual case? Do you think there could be a specific reason for this? Or is the approach outdated? (Wouldn't be surprised with the speed the research is moving) | 2023-07-11T20:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/14x2qes/creating_a_new_finetuned_model/ | Randomgod101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x2qes | false | null | t3_14x2qes | /r/LocalLLaMA/comments/14x2qes/creating_a_new_finetuned_model/ | false | false | self | 23 | null |
Database Tool for Semantic Search | 1 | [removed] | 2023-07-11T20:25:23 | https://www.reddit.com/r/LocalLLaMA/comments/14x2f7c/database_tool_for_semantic_search/ | Important-Sun-3562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x2f7c | false | null | t3_14x2f7c | /r/LocalLLaMA/comments/14x2f7c/database_tool_for_semantic_search/ | false | false | self | 1 | null |
Learning App | 1 | The idea is pretty simple, but I have no idea if I'm able to implement it with a little bit of programming experience (advanced beginner in C and Python).
The app should do the following
1) When I learn something, it should be able to read the resource. Starting with PDFs, then other document types, then also audio files and finally videos.
2) From this input it should generate questions and quiz the user. From the answers it should be able to figure out which topics still need further improvement and with a spaced repetition algorithm it should bring up the questions in an appropriate timeframe again.
3) All the input should be searchable. So once the user read about algorithms, when looking for algorithms it should display all resources that have been used so far.
Would anyone care to elaborate how this app could be a realized and if it's even a feasible task for one person with only limited skills in the fields of programming and LLMs? | 2023-07-11T20:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/14x2cf2/learning_app/ | RotjeCore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x2cf2 | false | null | t3_14x2cf2 | /r/LocalLLaMA/comments/14x2cf2/learning_app/ | false | false | self | 1 | null |
I can't make a compact gpt4all app in mac | 2 | I have a mac (mid2011 - not M2) and i intall the chat gpt from the .dmg of the website. After the installment, it just create a file which contain the elements of gpt4all. Not the app.
Any idea? | 2023-07-11T19:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/14x1db2/i_cant_make_a_compact_gpt4all_app_in_mac/ | orestaras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14x1db2 | false | null | t3_14x1db2 | /r/LocalLLaMA/comments/14x1db2/i_cant_make_a_compact_gpt4all_app_in_mac/ | false | false | self | 2 | null |
Oobabooga vs LocalAI | 2 | Hello!
Basically the title, I would like to know more about the differences between the two solutions. I know they allow you to download and run OSS LLMs locally and that Oobabooga provides the user with a gradio-based interface, but other than that I am still in the dark about the rest and would appreciate it if someone could clarify this for me. | 2023-07-11T18:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/14wzul0/oobabooga_vs_localai/ | meteogold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wzul0 | false | null | t3_14wzul0 | /r/LocalLLaMA/comments/14wzul0/oobabooga_vs_localai/ | false | false | self | 2 | null |
What Is the Best Uncensored Model for GPT4ALL | 1 | [removed] | 2023-07-11T18:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/14wz8mu/what_is_the_best_uncensored_model_for_gpt4all/ | fnhoes0902 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wz8mu | false | null | t3_14wz8mu | /r/LocalLLaMA/comments/14wz8mu/what_is_the_best_uncensored_model_for_gpt4all/ | false | false | self | 1 | null |
I'm completely lost here - how do I install Wizard LM using Python for GPU inference? | 1 | OK, I hope this isn't a low-effort post, but I am completely lost about how to do the installation. Now, in fairness, I am familiar with Huggingface Transformers but I've never run models from weights before. I have a GPU that can run 13B models easily, but I'm lost at what to do. I'm trying to install the new WizardLM13B V1.1 and...I have no idea what to do. I can download the full weights from various places, but then what? How do I load the model, infer with it, and ensure it goes to the GPU? I feel like I'm completely missing something - most guides I"ve found use the webUI but don't seem to work with Python, or I'm completely dumb. | 2023-07-11T17:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/14wxxnj/im_completely_lost_here_how_do_i_install_wizard/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wxxnj | false | null | t3_14wxxnj | /r/LocalLLaMA/comments/14wxxnj/im_completely_lost_here_how_do_i_install_wizard/ | false | false | self | 1 | null |
Open source chatbot toolkit w/ LLaMa models | 14 | [https://github.com/a16z-infra/companion-app](https://github.com/a16z-infra/companion-app)
New open source project that makes it easy to build your own chatbot and use models like Vicuna. It has memory, conversation history, personality - and you can even text with your bot. | 2023-07-11T17:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/14wxh4w/open_source_chatbot_toolkit_w_llama_models/ | No-Negotiation9226 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wxh4w | false | null | t3_14wxh4w | /r/LocalLLaMA/comments/14wxh4w/open_source_chatbot_toolkit_w_llama_models/ | false | false | self | 14 | null |
Comparison of companies that allow you to create API endpoints for open source models? | 16 | Hello! I'm creating a chatbot frontend app that will make POST requests to LLM API endpoints. This is of course easy to do with Open AI and the other commercial LLMs, but I want to also be able to create endpoints for some of my favorite open source models, like 📷 [TheBloke/open-llama-7B-v2-open-instruct-GPTQ](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ).
So far i've been accessing these GPTQ models via RunPod with Textgen Web UI (which is great!) but requires running a cloudGPU and paying for the time the GPU is running, which doesn't work for an API, where I will be randomly using the API daily (but can't afford to have it be up and running for hours each day).
I believe the solution is to setup a Serverless endpoint to host the model and pay per second for each request???. Has anyone done a comparison of companies offering for this type of service? I'm even sure how to quantifie (average chatbot per second request time) But maybe two to three hours of request time per day? What would that even cost...
I'd love to put together a csv that compares prices and features for this specific use case but I'm finding it kind of hard to research. Can anyone with any experience with setting up their own endpoints for open source models let me know if you have any thoughts/advice to share? Thanks! | 2023-07-11T16:38:40 | https://www.reddit.com/r/LocalLLaMA/comments/14wwc0b/comparison_of_companies_that_allow_you_to_create/ | jacobgolden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wwc0b | false | null | t3_14wwc0b | /r/LocalLLaMA/comments/14wwc0b/comparison_of_companies_that_allow_you_to_create/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'u-FsYyHhg2fsjz3OhnZndlALNAkH1vvabia79wl-x-s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=108&crop=smart&auto=webp&s=b4e5cf42e7378f7ccabf8208c5aa362f8c530424', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=216&crop=smart&auto=webp&s=60216bceb665f6da160a0fc076545f8fa1c7281f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=320&crop=smart&auto=webp&s=01f68466c2044048f5c5dd76a291964c9788213e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=640&crop=smart&auto=webp&s=f452e6ff3e0a8e716653a1fe98d1643b1bec25fc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=960&crop=smart&auto=webp&s=b1ab04c9f2246f6ba400d03b9d47ff360c2f5dd9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?width=1080&crop=smart&auto=webp&s=553933f7db653e99d9ded2d254cbd7aadcba4b24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4Tc-4Ee8UL3iBANOMGSOVQuVCN3_elr-vCcEFAwrKiM.jpg?auto=webp&s=55f9a8f0ef5c2fd24faf0db5c9c70700c3ebc1fc', 'width': 1200}, 'variants': {}}]} |
[Mac M1 Max] Every tokens takes several minutes to complete | 3 | Hey, I am trying to get text generation webui running on my MacBook Pro M1 Max with 32gb. The installation was quite forward and everything seems to work. But regardless which model I use (I tried vicuna 13b/7b, Facebook 1.3b, ultralm 13b and a couple of others) it takes several Minutes to complete one token. I can not figure out what the bottleneck is.
If I change the model the shards are collected in a couple of seconds but loading the model takes about 80 sec, I do not know if this is normal but I can live with that.
I also tried --cpu and --mps but they do not seem to have any effect on generation speed.
Since I read that people have working installation with feasible generation times I wonder if anyone knows what the bottleneck could be? If you need any information from me to evaluate my situation, I will provide anything I have.
Thank you all in advance! | 2023-07-11T16:38:22 | https://www.reddit.com/r/LocalLLaMA/comments/14wwbr8/mac_m1_max_every_tokens_takes_several_minutes_to/ | EconomyWorldliness67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wwbr8 | false | null | t3_14wwbr8 | /r/LocalLLaMA/comments/14wwbr8/mac_m1_max_every_tokens_takes_several_minutes_to/ | false | false | self | 3 | null |
Are you sure you are focusing on the right things? (venting) | 0 | When I started reading about LLMs a few months ago, I was really excited by all the new possibilities they offered and all the cool stuff I would be able to create but I was quickly disappointed.
Not because of the models or the context length but just because of the tools. Each one of them was a nightmare to understand and use. Llama-cpp-Python, text generation webui, guidance... Each time it was hours if not days of debugging before getting my first answer.
I'm far from being an expert in LLMs or Python, it's not my job at all, it's a hobby and I want to create things, not just spend hours to install a library. I don't know how you all see the future of local LLMs but the fact that there is no "simple way" to run things like for all the other Python libraries is clearly a barrier that will prevent a lot of people from getting interested in the subject.
For now, I'm tired. I think I'll come back in a year or two when the tools are ready but please in the future, think more about beginners. They also want to help, but they can't.
I wish you good luck everyone! Maybe we'll meet again one day. | 2023-07-11T16:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/14ww7fu/are_you_sure_you_are_focusing_on_the_right_things/ | Zeink303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ww7fu | false | null | t3_14ww7fu | /r/LocalLLaMA/comments/14ww7fu/are_you_sure_you_are_focusing_on_the_right_things/ | false | false | self | 0 | null |
Claude 2 Released - How do you feel it compares to LLMs? | 70 | Registration is currently open: https://claude.ai | 2023-07-11T15:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/14wv5sw/claude_2_released_how_do_you_feel_it_compares_to/ | MoneroBee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wv5sw | false | null | t3_14wv5sw | /r/LocalLLaMA/comments/14wv5sw/claude_2_released_how_do_you_feel_it_compares_to/ | false | false | self | 70 | null |
Signup for Claude 2 now open and public - How do you feel it compares to LLMs? | 1 | https://claude.ai | 2023-07-11T15:45:15 | https://www.reddit.com/r/LocalLLaMA/comments/14wuvts/signup_for_claude_2_now_open_and_public_how_do/ | MoneroBee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wuvts | false | null | t3_14wuvts | /r/LocalLLaMA/comments/14wuvts/signup_for_claude_2_now_open_and_public_how_do/ | false | false | self | 1 | null |
INT-FP-QSim: Simulating LLMs and vision transformers in different precisions and formats | 1 | [removed] | 2023-07-11T15:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/14wuf7n/intfpqsim_simulating_llms_and_vision_transformers/ | IllustriousSir_007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wuf7n | false | null | t3_14wuf7n | /r/LocalLLaMA/comments/14wuf7n/intfpqsim_simulating_llms_and_vision_transformers/ | false | false | 1 | null | |
Do we know GPT-4 methodology for vision on AP exams? | 4 | [https://arxiv.org/pdf/2303.08774v3.pdf](https://arxiv.org/pdf/2303.08774v3.pdf)
https://preview.redd.it/36ejfkqnscbb1.png?width=1770&format=png&auto=webp&s=5c482e0dbae0e6a67d01ad090cd4fd9674d9f348
​
OpenAI reports results on AP exams, many of which have visual questions like Chemistry and Biology.
However, the report does not mention how they presented this visual data to the LLM.
Does anyone know if this was later published or if there is another SOTA approach?
​
​ | 2023-07-11T15:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/14wuaae/do_we_know_gpt4_methodology_for_vision_on_ap_exams/ | FreezeproofViola | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wuaae | false | null | t3_14wuaae | /r/LocalLLaMA/comments/14wuaae/do_we_know_gpt4_methodology_for_vision_on_ap_exams/ | false | false | 4 | null | |
AWS advice | 1 | A client of mine has an AWS account with free credits to use up.
They have a lot of documentation (pdf, web sites, etc), and I thought I might be a good idea to set up an AI chat, so that they can query it.
Can anyone give me some pointers as to what I can set up in AWS to give the best experience without using any external paid resource? | 2023-07-11T15:11:40 | https://www.reddit.com/r/LocalLLaMA/comments/14wu0fb/aws_advice/ | FlappySocks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wu0fb | false | null | t3_14wu0fb | /r/LocalLLaMA/comments/14wu0fb/aws_advice/ | false | false | self | 1 | null |
ChatGPT code interpreter sandbox files | 1 | The readme located in /home/sandbox suggests that all of these files were meant to be seen by the user so here you go
https://filebin.net/rs90ygmtmfer3p7d | 2023-07-11T14:51:07 | https://www.reddit.com/gallery/14wthla | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14wthla | false | null | t3_14wthla | /r/LocalLLaMA/comments/14wthla/chatgpt_code_interpreter_sandbox_files/ | false | false | 1 | null | |
LoRA training error on WizardLM 1.0 7B | 1 | Hey ya'll, I was wondering if I could get some help debugging an error I get when trying to train WizardLM 1.0 on a JSON LoRA I made following the Alpaca format. I'm new to this whole thing, so I'm not 100% sure what's going wrong here. Any feedback would help. Here's the traceback from the cmd line:
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\routes.py", line 427, in run_predict
output = await app.get_blocks().process_api(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
prediction = await utils.async_iteration(iterator)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 336, in async_iteration
return await iterator.__anext__()
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 329, in __anext__
return await anyio.to_thread.run_sync(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
return next(iterator)
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 422, in do_train
train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3441, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "E:\LLaMa\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3344, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 417, in generate_and_tokenize_prompt
prompt = generate_prompt(data_point)
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 409, in generate_prompt
if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
File "E:\LLaMa\oobabooga_windows\text-generation-webui\modules\training.py", line 409, in <genexpr>
if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
AttributeError: 'dict' object has no attribute 'strip'
​ | 2023-07-11T14:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/14wsj2y/lora_training_error_on_wizardlm_10_7b/ | GreedyLilGobblin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wsj2y | false | null | t3_14wsj2y | /r/LocalLLaMA/comments/14wsj2y/lora_training_error_on_wizardlm_10_7b/ | false | false | self | 1 | null |
AMD Radeon Pro V620 32GB VRAM Drivers | 1 | I picked up one of these due to the fact that it is basically a W6800 with 32GB VRAM and listed as officially supported by ROCm. The problem is getting the GIM driver that AMD requires to be able to use the card at all. Anyone get one of these going and have a line on the driver? I found an email address ["CloudGPsupport@amd.com](mailto:"CloudGPsupport@amd.com)" that AMD lists in the help files to email to request the driver but they never bothered to reply to the request. | 2023-07-11T14:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/14wsdm8/amd_radeon_pro_v620_32gb_vram_drivers/ | ccbadd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wsdm8 | false | null | t3_14wsdm8 | /r/LocalLLaMA/comments/14wsdm8/amd_radeon_pro_v620_32gb_vram_drivers/ | false | false | self | 1 | null |
How to finetune LLaMA on my Mac M1 Pro? | 8 | Is it possible to do this with QLoRa? | 2023-07-11T13:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/14wrwuu/how_to_finetune_llama_on_my_mac_m1_pro/ | Azure-Vision | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wrwuu | false | null | t3_14wrwuu | /r/LocalLLaMA/comments/14wrwuu/how_to_finetune_llama_on_my_mac_m1_pro/ | false | false | self | 8 | null |
Text web ui started outputting weird repeating messages. | 10 | ​
[WTF?](https://preview.redd.it/8ycjn2kdfbbb1.png?width=1182&format=png&auto=webp&v=enabled&s=b988bdd16f9acf0f89b1917c6254c350114fbd7d)
Hello everyone.
I've just installed oobaboogas web ui and loaded model **TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ**
Loaded with Exllama, and when I try to write something. It goes really crazy and outputs meaningless text.
WHy this is happening? | 2023-07-11T10:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/14wnsuf/text_web_ui_started_outputting_weird_repeating/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wnsuf | false | null | t3_14wnsuf | /r/LocalLLaMA/comments/14wnsuf/text_web_ui_started_outputting_weird_repeating/ | false | false | 10 | null | |
Wizard-Vicuna-13b-SUPERHOT, Mac M2 16gb unified Ram. Is it normal to get responses in 1-2 minutes? What Text Generation UI Settings can help me speed it up? | 16 | Stats in the title. I even tried working with 7b and it was somehow even slower. I'm very new to all of this and have tried researching it, going through the sub's wiki's, etc. but I have to admit a lot of the vocabulary is a bit overwhelming. I'm tech savvy - I know how to code front and backend, and I've worked with python before.
But I don't know if I'm using 4bit or 8bit or how to find out. And I know there are other settings that might help it along, but I'm not sure where to start. Have been researching it, googling, checking the sub, reading through the wiki's.
Anyone want to point me in the right direction. I'd even settle for getting 7b to work in a reasonable amount of time. | 2023-07-11T10:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/14wng3q/wizardvicuna13bsuperhot_mac_m2_16gb_unified_ram/ | Ok-Training-7587 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wng3q | false | null | t3_14wng3q | /r/LocalLLaMA/comments/14wng3q/wizardvicuna13bsuperhot_mac_m2_16gb_unified_ram/ | false | false | self | 16 | null |
Agents for browser | 18 | Do we have any available projects, that used LLM as an agent to control a browser., navigate websites, take actions, etc. | 2023-07-11T10:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/14wn5tf/agents_for_browser/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wn5tf | false | null | t3_14wn5tf | /r/LocalLLaMA/comments/14wn5tf/agents_for_browser/ | false | false | self | 18 | null |
Myth of Context Length - Jim Fan | 53 | 2023-07-11T09:44:12 | https://twitter.com/DrJimFan/status/1678460065811136512 | saintshing | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14wmkwv | false | {'oembed': {'author_name': 'Jim Fan', 'author_url': 'https://twitter.com/DrJimFan', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I'm calling the Myth of Context Length:<br><br>Don't get too excited by claims of 1M or even 1B context tokens. You know what, LSTMs already achieve infinite context length 25 yrs ago!<br><br>What truly matters is how well the model actually uses the context. It's easy to make seemingly wild… <a href="https://t.co/FKqjs3xb6O">pic.twitter.com/FKqjs3xb6O</a></p>— Jim Fan (@DrJimFan) <a href="https://twitter.com/DrJimFan/status/1678460065811136512?ref_src=twsrc%5Etfw">July 10, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/DrJimFan/status/1678460065811136512', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14wmkwv | /r/LocalLLaMA/comments/14wmkwv/myth_of_context_length_jim_fan/ | false | false | 53 | {'enabled': False, 'images': [{'id': '2ZFDBQFVUKOun-YvrA37NLqxOmmO36fB8LtcVIjtE-4', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/8CinACgI2LSqrbcLRa9oTSxGFsgXM7MgNk7qHYJuoCE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9e80801ce7a474165063b1b5940e03aa0b38ba35', 'width': 108}], 'source': {'height': 131, 'url': 'https://external-preview.redd.it/8CinACgI2LSqrbcLRa9oTSxGFsgXM7MgNk7qHYJuoCE.jpg?auto=webp&v=enabled&s=aefe8a0e10f6285599819fc1a631d63098f43980', 'width': 140}, 'variants': {}}]} | ||
What is LLM Agent? and How to Build LLM Agents to Improve CX | 1 | [removed] | 2023-07-11T09:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/14wmcxu/what_is_llm_agent_and_how_to_build_llm_agents_to/ | Floatbot_Inc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wmcxu | false | null | t3_14wmcxu | /r/LocalLLaMA/comments/14wmcxu/what_is_llm_agent_and_how_to_build_llm_agents_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rJCm1Yo7cxRIo_UiS4P4cgDvF_fhbYzY6nQppHgmC4c', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=47c2fcaa4d79127fcf7a50f05946c3a2817903a6', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=9ed6c85bba95b5a1e85ab2f48c7361648203cc10', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=5279c0f712e6e5fefce1b8efc6d9cd4e49cfb7f2', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=8a9f1bb301a7a04bb68e9a7d3d812e1584d63b96', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=60328a8b61a671f0e8f8b66552e28fa8f3197004', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=fb36cdcbf0287513c98ddb31e1f786d93784ab8d', 'width': 1080}], 'source': {'height': 749, 'url': 'https://external-preview.redd.it/bHjkqmOnam7fUxhPXXtI5-DelWNxI3iM212F6Z7dEGc.jpg?auto=webp&v=enabled&s=cae8f02154f3eecf0890190f2135ce38e29cb8d9', 'width': 1125}, 'variants': {}}]} |
Any empirical results when it comes to increasing context length? Is their a deterioration in text generation quality? | 4 | Given all the recent papers that claim to allow models to extrapolate to much longer context lengths (last one being DeepMind's focused transformers), has anyone taken a look at the performance of the generation quality of these models when applying any of these methods? Are there any resources where I can read up on each and figure out what the trade-offs are for using each one? Very interested in increasing the context length of my models, but there's quite a lot of stuff out there and don't know what works and what doesn't... | 2023-07-11T07:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/14wk3c2/any_empirical_results_when_it_comes_to_increasing/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wk3c2 | false | null | t3_14wk3c2 | /r/LocalLLaMA/comments/14wk3c2/any_empirical_results_when_it_comes_to_increasing/ | false | false | self | 4 | null |
LLM intelligence is more feminine than masculine? | 0 | Is there anyone else who feels that the intelligence resulting from generative AI has a feminine feel to it? I mean at first it sounds pretty stodgy and masculine -- "it is important to note..." and so on -- but after you keep prompting it and going round in circles it feels more like a woman than a man?
But I guess it depends on the model. Flan-T5 seems pretty fact-oriented because it is an instructional model. Other models like Guanaco and Vicuna are more creative and tend to go off on crazy tangents. Their training data appears to be full of weird conversations on the Internet that may jump in at any moment. Remind you of anyone who has spent too much time on social media? | 2023-07-11T06:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/14wilbt/llm_intelligence_is_more_feminine_than_masculine/ | NodeTraverser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wilbt | false | null | t3_14wilbt | /r/LocalLLaMA/comments/14wilbt/llm_intelligence_is_more_feminine_than_masculine/ | false | false | self | 0 | null |
Open-Source, High Quality ERP Chatbot | 51 | Hey everyone! I am working on creating a open-source, high quality ERP chatbot.
I've built a unique dataset by scraping a huge amount of human-written ERP-themed content. It has everything from GFE to very specific topics, and from all types of relationships. The current dataset size is just over 21 million tokens.
I want to train a WizardLM Uncensored 30B model on this large dataset, and then fine tune it on a smaller, chat-formatted, curated dataset that I have been working on as well. I will release the datasets and models when finished.
I also plan to perform additional fine-tuning so the model can actually creatively generate relevant images based upon the chat conversation using StableDiffusion prompts!
If you are interested, please PM me! If you have a GPU setup you would be willing to use for training the model, server credits, or anything similar it would be really helpful.
As far as I know, no one else is working on something like this. The dataset I have is amazing, and I can't wait to see what happens when a top model is trained with it! | 2023-07-11T05:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/14whxvk/opensource_high_quality_erp_chatbot/ | ilovejoi36912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14whxvk | false | null | t3_14whxvk | /r/LocalLLaMA/comments/14whxvk/opensource_high_quality_erp_chatbot/ | false | false | self | 51 | null |
How much data would I require to train a character? | 11 | I want to train a character in anime like Hoshino Ai
​
[Hoshino Ai](https://preview.redd.it/h8ncqkqt99bb1.png?width=130&format=png&auto=webp&s=d4d4b0899e67c785fac76339afc7ef7291a1d9bb) | 2023-07-11T03:31:56 | https://www.reddit.com/r/LocalLLaMA/comments/14wfkvd/how_much_data_would_i_require_to_train_a_character/ | Sad-Lengthiness-6938 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wfkvd | false | null | t3_14wfkvd | /r/LocalLLaMA/comments/14wfkvd/how_much_data_would_i_require_to_train_a_character/ | false | false | 11 | null | |
Is there a UI that can limit LLM tokens to a preset list? | 3 | I’m working with LLMs and sometimes I want one of a preset list of answers, but I haven’t found a way to limit the tokens that are allowed for consideration like that. For example, if I want the LLM to pick a mood/tone for some text, but I need it to choose one of a specific list of moods/tones like “upset” or “confused”, is there a way to limit the tokens to only those options? Is there a UI that can do this? If not, what would be the best way to achieve this?
I'm a software developer and so for stablediffusion I know how to write an extension or addon for Automatic1111, but for the LLM stuff I havent worked with it as much and I'm not sure if there's a UI that I could easily do this with but from what I understand (and correct me if I'm wrong) this would be in the same vein as editing how the temperature variable operates and would probably be best with the temperature set to make it deterministic. | 2023-07-11T01:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/14wdjds/is_there_a_ui_that_can_limit_llm_tokens_to_a/ | Sixhaunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wdjds | false | null | t3_14wdjds | /r/LocalLLaMA/comments/14wdjds/is_there_a_ui_that_can_limit_llm_tokens_to_a/ | false | false | self | 3 | null |
How Do I Change The Max Context Window In Llama-cpp-python | 0 | Hey subreddit. I'm a noob here but had a question that I couldn't find the answer to anywhere. I know you can change the max context window in llama.cpp, but I have no clue how to do it in llama-cpp-python and there's no info online about it. Any help is awesome! | 2023-07-11T00:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/14wbsut/how_do_i_change_the_max_context_window_in/ | Norok_The_Diablo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wbsut | false | null | t3_14wbsut | /r/LocalLLaMA/comments/14wbsut/how_do_i_change_the_max_context_window_in/ | false | false | self | 0 | null |
GPT-4 details leaked | 853 | [https://threadreaderapp.com/thread/1678545170508267522.html](https://threadreaderapp.com/thread/1678545170508267522.html)
Here's a summary:
GPT-4 is a language model with approximately 1.8 trillion parameters across 120 layers, 10x larger than GPT-3. It uses a Mixture of Experts (MoE) model with 16 experts, each having about 111 billion parameters. Utilizing MoE allows for more efficient use of resources during inference, needing only about 280 billion parameters and 560 TFLOPs, compared to the 1.8 trillion parameters and 3,700 TFLOPs required for a purely dense model.
The model is trained on approximately 13 trillion tokens from various sources, including internet data, books, and research papers. To reduce training costs, OpenAI employs tensor and pipeline parallelism, and a large batch size of 60 million. The estimated training cost for GPT-4 is around $63 million.
While more experts could improve model performance, OpenAI chose to use 16 experts due to the challenges of generalization and convergence. GPT-4's inference cost is three times that of its predecessor, DaVinci, mainly due to the larger clusters needed and lower utilization rates. The model also includes a separate vision encoder with cross-attention for multimodal tasks, such as reading web pages and transcribing images and videos.
OpenAI may be using speculative decoding for GPT-4's inference, which involves using a smaller model to predict tokens in advance and feeding them to the larger model in a single batch. This approach can help optimize inference costs and maintain a maximum latency level. | 2023-07-11T00:34:13 | https://www.reddit.com/r/LocalLLaMA/comments/14wbmio/gpt4_details_leaked/ | HideLord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wbmio | false | null | t3_14wbmio | /r/LocalLLaMA/comments/14wbmio/gpt4_details_leaked/ | false | false | self | 853 | {'enabled': False, 'images': [{'id': 'UIX0vFPKedHxyW4Fccq45yrkc-s6LZB2Pri0j9xdOYg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=eb3531b8335413d0157c530d25bd467c29d3e142', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=56eafd2c02eb138704b1f79bbfc980575a2288f9', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ca39c9cb0d3cd51ef184f46ac7e8e376c45562c5', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=ce0e8ea4cb6b9a48e3c770094cb2b5d6e29d3561', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=4e00f8e03f37d8d26b5e5b9c88dca4afb267de9f', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=aa3c3b47caad609a14b98aca5d54c38e8ec227c4', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/YJzAmcMFmWiKv0UY-Ve8mXsj54eeAdSf_3B-I8gaxuc.jpg?auto=webp&v=enabled&s=98e390fda92c3ffe9efb0329543a143e0152fc17', 'width': 1200}, 'variants': {}}]} |
Fine tuning model for code refactoring | 0 | My supervisor wants me to refactor some codes by fine tuning some open source models. What model will be recommended?!( The code are python codes)
I have tried GPT neo and could not receive the results I would like to get! | 2023-07-11T00:16:31 | https://www.reddit.com/r/LocalLLaMA/comments/14wb7u7/fine_tuning_model_for_code_refactoring/ | learner_beginner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wb7u7 | false | null | t3_14wb7u7 | /r/LocalLLaMA/comments/14wb7u7/fine_tuning_model_for_code_refactoring/ | false | false | self | 0 | null |
Seeking Guidance on Hosting and Running Falcon 40b Instruct Model for Research Purposes | 1 | Hi, for my research purposes, I want to run falcon 40b instruct model.
The participants will just interact with the model to gather some information for their task.
May I know if there is any online website where it had been hosted for research purposes?
Or can someone please explain or redirect to tutorial on how I can run this model in runpod.
The GPTQ model is very slow in runpod A6000. Will it run faster in better processor by any chance? | 2023-07-10T23:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/14waltg/seeking_guidance_on_hosting_and_running_falcon/ | maheshvulasala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14waltg | false | null | t3_14waltg | /r/LocalLLaMA/comments/14waltg/seeking_guidance_on_hosting_and_running_falcon/ | false | false | self | 1 | null |
Is it possible to make two LLMs talk to each other? | 10 | I've been researching LLMs (language models) and a question has come to my mind: is it possible to make two LLMs interact and have a conversation with each other? I'm fascinated by the idea, is it possible. How can I achieve it? Could you please give me any suggestions ? Thank you in advance | 2023-07-10T23:43:55 | https://www.reddit.com/r/LocalLLaMA/comments/14wafkn/is_it_possible_to_make_two_llms_talk_to_each_other/ | mashimaroxc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14wafkn | false | null | t3_14wafkn | /r/LocalLLaMA/comments/14wafkn/is_it_possible_to_make_two_llms_talk_to_each_other/ | false | false | self | 10 | null |
How content extension works in simple words? | 4 | I have a vague understanding of how LLM works, if oversimplified, it counts probability of next word after some combination of previous words.
Lets imagine, we learned a model on a really small context, like 10 words. So it can solve a simple prompts: "My name is Bob. Say my name" and model can predict, that next word should be "Bob". It can predict it because it takes in account all previous worlds, and given we learned model on a context length of 10 words, it has weights only on chain of word with the length of 10.
Can somebody explain in simple words, how content extension works in LLM without relearning model on a longer contexts? I can make a trivial suggestion, that when context come close to its limit some mechanism outside of model asks model to conclude a whole context in something more compact and then, but this idea does not seem to give not an effective nor a precise one (and not scalable for that matter), and models with extended context seem to be able to handle much bigger contexts than my trivial solution suggests, so what is it?
Is it some clever looping of tokens connections inside a model? | 2023-07-10T22:34:11 | https://www.reddit.com/r/LocalLLaMA/comments/14w8ppt/how_content_extension_works_in_simple_words/ | uti24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w8ppt | false | null | t3_14w8ppt | /r/LocalLLaMA/comments/14w8ppt/how_content_extension_works_in_simple_words/ | false | false | self | 4 | null |
Prompt templates | 1 | Hi,
I'm trying to find the best prompt template for pairing instruction with context with a 7B Vicuna1.1 model. In the part 'Prompt Templates' of this article [https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki\_4-bit\_model\_requirements\_for\_llama](https://www.reddit.com/r/LocalLLaMA/wiki/models/#wiki_4-bit_model_requirements_for_llama)
I found this:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
*your text here*
### Input:
*your text here*
### Response:
(include newline)
It works really well even if the article mentioned that this template fits for Alpaca models. Now, I’m trying to adapt it in order for the model respond 'I don't know' if the input doesn't contain information asked in the instruction. I tried a lot of variations of this template unsuccessfully. Do you have an idea of a variation that will help me accomplish this? | 2023-07-10T21:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/14w7cjt/prompt_templates/ | SanglierAlbinos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w7cjt | false | null | t3_14w7cjt | /r/LocalLLaMA/comments/14w7cjt/prompt_templates/ | false | false | default | 1 | null |
4070/4090 for more vram or a separate 3090 on a separate machine to run ai. *opinions please* | 1 | Bit of a rant but opinions are wanted, please.
I recently bought a 3090 to pair with my 4090 to run 65b models, but had no end of issues setting it up (case stuffed close, only 1000w power supply, blue screens) and fact kinda wasting a 4090 sat in ai machine so in frustration built up two separate machines my main/gaming machine with 4090/4070 currently in it, and now have a 3090 laying around can either try to resell or set up a space pc to run ai on (5800x 64 gig ram/3090).
I know some might say run ai on 4090 lol but i play a lot of games, i kinda was doing that on the 4070/4090 setup, but thinking if setup the 3090 for ai i could sell the 4070 and use the 4090 for gaming... through with 4070/4090 i do have access to 36 gig vram if that's ever gonna be a benefit in future.
So my question is, is it worth building that 3090 (So can keep it running?) into a second machine/should I maybe sell the 4070 then, or should i just keep 4070/4090 combo and try to get back my money on the 3090.
I can only fit the 4090/3090 in the 5800x ddr4 machine.
I now have a 7950x3d + 64 gig ram... so while i can put 4070 in there to game on, thats what I was doing before all the blue screens made me give up in frustration... plus feels like 4090 in the server is a waste of cash, well even more waste than I've already spent!
Any opinions are welcome, does anyone run ai full-time? Is it worth power cost to have it running all time or should i just reload between games and such? | 2023-07-10T21:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/14w6r84/40704090_for_more_vram_or_a_separate_3090_on_a/ | fluffywuffie90210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w6r84 | false | null | t3_14w6r84 | /r/LocalLLaMA/comments/14w6r84/40704090_for_more_vram_or_a_separate_3090_on_a/ | false | false | self | 1 | null |
Nvidia T4 shortage on GCP | 1 | It appears that there is a scarcity of Nvidia T4 resources in GCP across all regions (at least which I tried). If anyone possesses information regarding its availability, kindly inform | 2023-07-10T18:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/14w2qd9/nvidia_t4_shortage_on_gcp/ | Effective_Twist6995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w2qd9 | false | null | t3_14w2qd9 | /r/LocalLLaMA/comments/14w2qd9/nvidia_t4_shortage_on_gcp/ | false | false | self | 1 | null |
Generating "stories" with smaller models | 12 | So, I'm very excited about the LongLlama paper, Yesterday I read a post here that linked to a repo that had a notebook that you can run on colab, so I did to look at the generation results. What interested me were 2 things: inference speed and how well it "remembered" stuff. The model is the 3b LongLLaMA preview, so not an instruct model or anything.
After seeing the results, I wanted to have some fun, so I gave the model this prompt:
```"It was a very tragic story!"```
And let it run wild.
the result is nothing short of seizure inducing writing, I had a good laugh reading parts of this:
It was a tragic story! A 51 year old middle-Eastern airline pilot named Adnan Ghalib was killed today in Texas while attempting to land a small private plane in poor weather. I will not describe the circumstances and the details of his final flight, and I am trying so hard not to make a political issue out of it, so I will also refrain from giving you the pilot's name. Many times when you see bad news about something, you can go online, search and see the pilot's last flight log. If it was a commercial airline, you can easily find the pilots last thirty+ flights and get a good look at the pilot. You can usually find the pilot's name in the last paragraph if a crash occurred. You don't always have to know the names or the dates of those flights to know that something has gone very, very wrong! Sometime after the first pilot died, I happened to do a quick search and found Ghalib's flight log site and found a picture of Adnan. He was a bit older than I thought he was, but other than that, he was probably very similar to all the other pilots out there. He had a perfect career with no fatal accidents. This flight log is what I'm really interested in. I saw that they were flying a Mooney M20P, and I've spent a lot of time doing crash analyses with an NTSB investigator and I know that it is one of the best planes to crash in. The 20P is a 172 with some upgrades, which makes it a lot safer and more weather tolerant. The weather over Texas had been bad up until their takeoff. The last twenty days I was looking for reports of weather near Houston that was as bad or worse than conditions today. For about twenty days there were reports of bad flying weather, but as you can see nothing was really dangerous. The weather just became better, some days were not so great, but the last few days it got good again, and that should have been a pretty good indication of changing weather. The last forty days the weather was pretty good around Houston/Galveston, and I don't use weather that good to make landings. I have a few cases right when bad weather started, and several crashes I have done show it wasn't because of bad weather. The weather did get very, very dangerous around the time of the crash, but not dangerous enough. Ghalib was a very good pilot, but his luck just ran out today! It could just be one very bad decision, and he will probably never be the same again...but even with the very very minor luck in not having any sort of weather problems, Adnan could have made a very much safer decision. I have only looked at the flight log once, and at any point if you see something unusual, like a last entry of "flare out", and then you don't see any more entries until three days later, you know it is unusual, but you don't know why. There are a few ways you can know which way something went. You can look at the way they went for flights that were good. You can compare to flights that are going poorly. You could also go back a long time and compare to similar flight. As you can see, Adnan was more than just an average pilot. His first 10 flights were all good, with no fatal accidents. He flew for about a year, and he then had a fatal accident, and he spent the last 8 months of his life involved in aviation crashes. He had 12 deaths, 10 crashes and 5 near misses. He had one "good", one "good" and 4 "good". There was a mix of good and bad flying. In short, Adnan was going about his business one day, got killed and he was an excellent pilot! Now I know that I'm being a bit dramatic, but if you were the last person to see him alive, can you figure out that he did not do it or had not done it on purpose? I'm still waiting for confirmation that it was an accident, probably just a very dangerous decision, even though not bad enough to kill him in the crash. I had a pilot die once over the ocean in 70 degrees of weather and he was flying too fast to be in the air and then he lost the power because of the waves and his elevator broke. I think he died a very painful death because he was in a tiny sea-plane flying blind and without any instrumentation. He hit the water like a freight train and his plane just ended up in a big, deep hole. It's the worst thing I've ever seen. He was dead when he first hit the water when I heard him hit it. We had to just pull him out of the water and get him back on land before
I'd like to see some more of these so, go to this repo:
https://github.com/CStanKonrad/long_llama/blob/main/long_llama_colab.ipynb
and post whatever the model ends up writing | 2023-07-10T18:49:42 | https://www.reddit.com/r/LocalLLaMA/comments/14w2lm5/generating_stories_with_smaller_models/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w2lm5 | false | null | t3_14w2lm5 | /r/LocalLLaMA/comments/14w2lm5/generating_stories_with_smaller_models/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'CNhaNGJGLI7aayYDsTUEDCDNA4p_bXbfMm6Vl68IDME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=cd0c7151a10ed61ae8eca41f5943fc302e7c040b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=216&crop=smart&auto=webp&v=enabled&s=ad7cda45d10df83a003643a629824354ca1bc3f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=320&crop=smart&auto=webp&v=enabled&s=4b6f14159eaff21800f94c6b359a2ba91da4f9f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=640&crop=smart&auto=webp&v=enabled&s=147290c65930a6fdfe24454ec419e30597f86e49', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5b36bea2dc4d4192c5639ea2f229fa845366dfdd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=a913cc1510862c27acb9eb0d43c646a81b0a4ed9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Gw9EHAx-HPAHQJSdDJplsc5J7ZmndtRwxkMQX-DjtdE.jpg?auto=webp&v=enabled&s=d16e109f09fc5b666dd3eefaa815099fefec7d73', 'width': 1200}, 'variants': {}}]} |
Recommendation: an ingenious frontend & LocalAI | 63 | # [LocalAI](https://github.com/go-skynet/LocalAI)
First of all I would like to recommend **LocalAI** (it's my favorite *backend/interface* btw). It's rarely mentioned here and I think LocalAI in general is sadly under-represented at the moment. LocalAI is a kind of server interface for llama.cpp **and more** that uses the usual OpenAI json format - so a lot of existing applications can be redirected to local models with only minor changes. LocalAI also inherently supports requests to stable diffusion models, to bert.cpp (embeddings), to RWKV, GPT-2 etc etc.
…
…
…
# [ChatWizard](https://github.com/lisiur/ChatWizard)
I have now also found an ingenious frontend that can work seamlessly with LocalAI. **ChatWizard** is not so well known yet, and if at all, then not really in the Local LLM community, because ChatWizard is originally meant to interact with OpenAI's models. But you can change the API endpoint in the settings (e.g. set LocalAI's 127.0.0.1:8080). Some days ago I asked the developer if he could implement it that you could add your own models: Only one (!) day later he developed and integrated that. Yesterday I wrote him that the app communicates with LocalAI initially, but doesn't get any responses: today already he fixed the bug!
What I want to say is: We have here a confluence of excellent open source spirit and highest coding quality: LocalAI is written in Go and receives continuous updates. ChatWizard is written in Rust and the developer responds immediately to requests. Both Go and Rust are very powerful languages and in my opinion very suitable to act as a framework and frontend for a C/C++ application like llama.cpp. The developers of LocalAI also seem to be very engaged and motivated in their project.
I know I've rambled a lot now, but I can only urge you to give these applications a try.
…
…
Here is a short demonstation I've made:
**[Youtube](https://youtu.be/YBrnq7VuZJU)**
…
**[Screenshot](https://imgur.com/a/bvcotMW)**
…
---
### Links
**https://github.com/go-skynet/LocalAI**
**https://github.com/lisiur/ChatWizard**
Dozen Edits: Layout stuff | 2023-07-10T18:35:03 | https://www.reddit.com/r/LocalLLaMA/comments/14w2767/recommendation_an_ingenious_frontend_localai/ | Evening_Ad6637 | self.LocalLLaMA | 2023-07-10T18:42:12 | 0 | {} | 14w2767 | false | null | t3_14w2767 | /r/LocalLLaMA/comments/14w2767/recommendation_an_ingenious_frontend_localai/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'MuROFddyYuvY5xg7xSbDZd2HkvG6m6fxiog6JDl0Oco', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=108&crop=smart&auto=webp&v=enabled&s=b67776eb7cb7607afcbab19a091e1c1aac5dbdc4', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=216&crop=smart&auto=webp&v=enabled&s=8d9669754e9f6740e64c1428dce7705801998827', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=320&crop=smart&auto=webp&v=enabled&s=e4ef13259cd8dbb46d749a4e8d4adcee1a981d8e', 'width': 320}, {'height': 408, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=640&crop=smart&auto=webp&v=enabled&s=b9d63735ab538f63829fefe9702ca9add03b9c3f', 'width': 640}, {'height': 613, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=960&crop=smart&auto=webp&v=enabled&s=c7df6463bd0f72fa49f7cd107eb90a7eb8c3cf35', 'width': 960}, {'height': 690, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=bfffff4e4bd6f5a5e068ac2539fef9ff3d13ff2e', 'width': 1080}], 'source': {'height': 2372, 'url': 'https://external-preview.redd.it/Laym6Vz_Luo-rXyIo8VlWmU-Eikx_OHUhH7sfmj9TC8.jpg?auto=webp&v=enabled&s=f0ada066eea1b2e630cbd7b5647d8ea8c073457f', 'width': 3712}, 'variants': {}}]} |
Advice recommendations for the smallest(fastest) possible CPU model for document question answering? | 3 | Hi, I have a very large group of unstructured text documents that I want to extract data from and speed is a very big bottle neck.
I'm trying to evaluate all of my options for models that are available as well as consider if I will need to build a specific gpu based server to handle the requests.
Before I begin evaluating models that require GPUs, I want to evaluate the cpu models for their speed and accuracy in data extraction.
Does anyone have any recommendations for fast, cpu models that might be good at extracting data from unstructured text?
Thanks!! | 2023-07-10T17:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/14w0to0/advice_recommendations_for_the_smallestfastest/ | fcname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w0to0 | false | null | t3_14w0to0 | /r/LocalLLaMA/comments/14w0to0/advice_recommendations_for_the_smallestfastest/ | false | false | self | 3 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.