| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mixtral finetunes trickling in | 1 | I just saw Dolphin 2.5 was released, and it’s based on Mixtral 8x7b. Dolphin 2.2 based on Yi is my favorite model. Can’t wait to try 2.5!
TheBloke has Mixtral-SlimOrca quantized. Text-generation-webui Mixtral support has been merged to main. It's happening! | 2023-12-14T05:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/18i1b5d/mixtral_finetunes_trickling_in/ | deep_archivist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i1b5d | false | null | t3_18i1b5d | /r/LocalLLaMA/comments/18i1b5d/mixtral_finetunes_trickling_in/ | false | false | self | 1 | null |
What are you using your local LLMs for? | 1 | [removed] | 2023-12-14T05:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/18i141i/what_are_you_using_your_local_llms_for/ | Extension-Mastodon67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i141i | false | null | t3_18i141i | /r/LocalLLaMA/comments/18i141i/what_are_you_using_your_local_llms_for/ | false | false | self | 1 | null |
How important is fine-tuning as foundational models get better? | 3 | What do you guys think? If open-source models get to GPT-4 level, will they still benefit from fine-tuning? | 2023-12-14T05:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/18i12zo/how_important_is_finetuning_as_foundational/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i12zo | false | null | t3_18i12zo | /r/LocalLLaMA/comments/18i12zo/how_important_is_finetuning_as_foundational/ | false | false | self | 3 | null |
Trinity-7B - Scored 74.07 on GSM8k by merge! | 3 | We tried merging the two top 7B models to see if anything interesting would happen, and got 74.07 on GSM8k (we ran the test on our own computer since the HF leaderboard has been slow lately)
https://preview.redd.it/kmh3sfri076c1.png?width=816&format=png&auto=webp&s=002bea2908bad37cf79991087636acbcc03c3161
Given that the two top models don't themselves score very high on GSM8k, I think the score somehow increased just by SLERP-merging the best ones together.
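For context, SLERP merging interpolates each pair of weight tensors along the sphere between them instead of along a straight line. A simplified sketch of the core operation (illustrative only, not our exact pipeline):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    omega = torch.acos((a_unit * b_unit).sum().clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < 1e-6:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# merged = {name: slerp(0.5, w_a[name], w_b[name]) for name in w_a}  # per-tensor merge
```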
https://preview.redd.it/vpoo1i1q076c1.png?width=3150&format=png&auto=webp&s=b680e5988980688ed36f111be62b53eb5548fede
LINK TO MODEL:
- Hugging Face model: [https://huggingface.co/janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1)
- Trinity GGUF: [https://huggingface.co/janhq/trinity-v1-GGUF](https://huggingface.co/janhq/trinity-v1-GGUF)
| 2023-12-14T05:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/18i12pv/trinity7b_scored_7407_on_gsm8k_by_merge/ | noobgolang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i12pv | false | null | t3_18i12pv | /r/LocalLLaMA/comments/18i12pv/trinity7b_scored_7407_on_gsm8k_by_merge/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'upj97tCeXeEhTD0CwKPN12_E-7k1_HlTmrgjqnPYJr0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?width=108&crop=smart&auto=webp&s=fb031f1b44fba24916be12167c163a9b465f376f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?width=216&crop=smart&auto=webp&s=79ea707bcd4da7c0d9837b917823c66343eedce4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?width=320&crop=smart&auto=webp&s=84b543939d4a893a962a50fff539cc847a6f93d8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?width=640&crop=smart&auto=webp&s=f83a60f2fde48671bea91b0ff89c5a8c0af25558', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?width=960&crop=smart&auto=webp&s=88a226683461dc5843f96e0789b023055401ca14', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?width=1080&crop=smart&auto=webp&s=4851c60802679c7732fc473928ff10c300128080', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3XICrT1JZdAX5ISYnm_kOR7JLhB6NzbJu82R_sTh7ho.jpg?auto=webp&s=8e90cc93aea5008fb6c693aa834f9b24d656dabb', 'width': 1200}, 'variants': {}}]} | |
The first query taking longer to generate than the subsequent ones. | 2 | So I built a RAG system and I have been testing it for a while now. For the first query it retrieves a couple of chunks as context and stores them in a template. When asked a second query, it doesn't retrieve any docs but just appends the question to the earlier prompt like this:
Context + 1stQuery + 1stGeneratedAns + 2ndQuery.
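In llama-cpp-python terms, the flow looks roughly like this (a simplified sketch; my real template and retrieval code differ):

```python
from llama_cpp import Llama

llm = Llama(model_path="openhermes-2.5-mistral-7b-16k.Q5_K_M.gguf", n_ctx=16384)

context = "<retrieved chunks go here>"
first_query, second_query = "What is X?", "How does X relate to Y?"

prompt = f"{context}\n\nQ: {first_query}\nA:"
first_answer = llm(prompt, max_tokens=512)["choices"][0]["text"]

prompt += f"{first_answer}\nQ: {second_query}\nA:"  # same prefix, new question appended
second_answer = llm(prompt, max_tokens=512)["choices"][0]["text"]
```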
Here the 1stGeneratedAns takes more time to generate than the answer to the 2nd query. By "more," I mean: if the first one takes around 40-50s just for the generation stream to start, the second one takes less than 15s.

Considering that I am giving the LLM more tokens for the second query than for the 1st one, shouldn't it take more time?
I am using the openhermes-2.5-mistral-7b-16k_Q5_K_M gguf and am running it only on the CPU using llamacpp-python | 2023-12-14T04:10:12 | https://www.reddit.com/r/LocalLLaMA/comments/18i02il/the_first_query_taking_longer_to_generate_than/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18i02il | false | null | t3_18i02il | /r/LocalLLaMA/comments/18i02il/the_first_query_taking_longer_to_generate_than/ | false | false | self | 2 | null |
Sharing a simple local LLM setup | 20 | I know all the information is out there, but to save people some time, I'll share what worked for me to create a simple LLM setup. I've done this on Mac, but should work for other OS.
I only need to install two things:
**Backend**: [llama.cpp](https://github.com/ggerganov/llama.cpp)
**UI**: [Chatbox](https://chatboxai.app) for me, but feel free to find one that works for you, here is a [list of them here](https://github.com/billmei/every-chatgpt-gui)
I like this setup because `llama.cpp` updates really quickly when new things come out, like Mixtral. In my experience, it takes time for projects that depend on llama.cpp to pick up the latest changes. The best thing is to have the latest straight from the source.
**Steps**:
1. Install [llama.cpp](https://github.com/ggerganov/llama.cpp), the steps are detailed in the repo.
2. Download an LLM from [huggingface](https://huggingface.co).
For those not familiar with this step, look for anything that has `GGUF` in its name. You will probably find that on [TheBloke](https://huggingface.co/TheBloke)'s page. If you are not sure which one to choose, start with one that has lots of likes or downloads, or browse this community for impressions and feedback. Once you find the model you like, go to its page, click on `Files and versions`, and then choose a file that ends with `.gguf` and download it. If you are not familiar with the sizes, go for `Q4_K_M` and make sure the size of the file is something that can fit in your GPU or CPU memory.
3. Start the `llama.cpp` [server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server), here is the command I use to start Mixtral with GPU:
`./server -m ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --alias gpt-3.5-turbo --n-gpu-layers 10000 --mlock --port 3077 --cont-batching --ctx-size 4096`
4. I started the server on port 3077, which means that the endpoint at `http://127.0.0.1:3077/v1/chat/completions` is up. In the Chatbox UI, I can just go to settings and replace the API Host with `http://127.0.0.1:3077`, and the app now runs with my local model.
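5. (Optional) Before touching a UI, you can sanity-check the endpoint with a few lines of Python; the `model` field just has to match the `--alias` passed to the server:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:3077/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",  # matches the --alias from the server command above
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```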
​ | 2023-12-14T03:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/18hzun0/sharing_a_simple_local_llm_setup/ | bigbossStrife | self.LocalLLaMA | 2023-12-14T04:02:38 | 0 | {} | 18hzun0 | false | null | t3_18hzun0 | /r/LocalLLaMA/comments/18hzun0/sharing_a_simple_local_llm_setup/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
Windows AI Studio Preview Now Available | 13 | Windows AI Studio allows people to download models, fine-tune them, and play with them using VS Code. I'm downloading now to play with. :)
More info: [Windows AI Studio | Microsoft Learn](https://learn.microsoft.com/en-us/windows/ai/studio/) | 2023-12-14T03:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/18hziyg/windows_ai_studio_preview_now_available/ | cddelgado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hziyg | false | null | t3_18hziyg | /r/LocalLLaMA/comments/18hziyg/windows_ai_studio_preview_now_available/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'RCFh0Kid3SAqWEkALMGNW1e9Vu6ayZpftekoayP00hY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=108&crop=smart&auto=webp&s=b3881e36da92b82c6947f6ca4ff3804ca47f2aea', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=216&crop=smart&auto=webp&s=17b5b01e50a969ac9e2353bebb062cd52a99d108', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=320&crop=smart&auto=webp&s=acadaf004e8aeb6919eabdb0d93065a34f7e89df', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=640&crop=smart&auto=webp&s=883009d39175a2f03b76275ed0f7c6011d94a3a7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=960&crop=smart&auto=webp&s=7cc62aef83f192d102fa78c83c8f4fcfa85057e3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=1080&crop=smart&auto=webp&s=6ca6913f202be9a9f83b266dd459edc90adbf9dd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?auto=webp&s=41fa146938cd97da5abfeff0d092a2cc151e65fa', 'width': 1200}, 'variants': {}}]} |
Gemini Pro API frontend | 11 | I have noticed that when using the Gemini Pro API, the results are mostly uncensored (requires minimal prompting to generate NSFW, with quality comparable to GPT 3.5 turbo, and very large context sizes.)
Here's something I'm working on for those who aren't savvy enough to interface with the API programmatically. All you need is an API key and a browser.
[https://github.com/faetalize/zodiac](https://github.com/faetalize/zodiac)
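For those who'd rather script against the API directly, the equivalent call with Google's `google-generativeai` Python SDK looks roughly like this (the key is a placeholder):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder: use your own key
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Write a short haiku about local LLMs.")
print(response.text)
```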
Any suggestions?
[poc](https://preview.redd.it/x27yldrfi66c1.png?width=945&format=png&auto=webp&s=b3bd7de3ed02347fc9ed96f42aae51a130e033bc) | 2023-12-14T03:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/18hz7n3/gemini_pro_api_frontend/ | faetalize | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hz7n3 | false | null | t3_18hz7n3 | /r/LocalLLaMA/comments/18hz7n3/gemini_pro_api_frontend/ | false | false | 11 | null | |
Which models are you using in production? | 2 | I've lost track of all the models that have been released so far.
I was using the OpenAI API for the longest time, and only recently (primarily driven by the OpenAI announcement that they'll be retiring the text-davinci-003 model) have I started breaking down my workflows into more steps and replacing OpenAI calls with Mistral-Instruct.
Mistral 7B is the only model that I've "figured out" so far, both in terms of how reliable it is and when it's reliably wrong, so I can put the appropriate guardrails in place. With Mistral I also have an API I can use and pay per number of tokens (TextSynth), which is pretty reliable for production.
Tasks I'm using Mistral for: grammar correction, entity extraction, description generation.
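For example, entity extraction ends up as a plain completion call. A sketch against TextSynth's HTTP API (the engine ID and prompt here are illustrative; check their docs for the exact engine names):

```python
import requests

resp = requests.post(
    "https://api.textsynth.com/v1/engines/mistral_7B_instruct/completions",  # engine ID assumed
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json={
        "prompt": "[INST] List every person name in this text, one per line:\n"
                  "Alice met Bob in Paris. [/INST]",
        "max_tokens": 64,
    },
)
print(resp.json()["text"])
```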
For those of you using LLMs professionally, which models are you using, and how are you deploying them? | 2023-12-14T03:18:11 | https://www.reddit.com/r/LocalLLaMA/comments/18hz4dg/which_models_are_you_using_in_production/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hz4dg | false | null | t3_18hz4dg | /r/LocalLLaMA/comments/18hz4dg/which_models_are_you_using_in_production/ | false | false | self | 2 | null |
Quick, practical setup for the average joe? | 6 | I just want to get in, install something I can use for writing, and get out without having to learn all the details of how an llm works, quantization, model sizes, the differences between models, and so on. I’m sorry to ask a question which surely gets asked all the time, but where can I find a practical guide for purely practical use?
To add some details, I want an uncensored model (some of the writing may be explicit, but mainly I just don't want to have to fight the model all the time like I do with ChatGPT) which can run on my PC (mine is a great gaming PC: AMD 7900 XTX and 64 GB RAM). Ideally something plug-and-play, where I can just download and execute a binary or something. That said, I do have WSL and the ability to compile if I need to.
Again, sorry that this probably gets asked all the time, and thanks in advance! | 2023-12-14T03:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/18hz20j/quick_practical_setup_for_the_average_joe/ | throwinupupandaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hz20j | false | null | t3_18hz20j | /r/LocalLLaMA/comments/18hz20j/quick_practical_setup_for_the_average_joe/ | false | false | self | 6 | null |
Gemini Pro API Frontend | 1 | [removed] | 2023-12-14T03:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/18hyxqb/gemini_pro_api_frontend/ | faetalise | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hyxqb | false | null | t3_18hyxqb | /r/LocalLLaMA/comments/18hyxqb/gemini_pro_api_frontend/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'SvU8TaisioKfcqHKp97d762R4BS91TwKGjSav5scjLM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?width=108&crop=smart&auto=webp&s=dacfc8c27602c6796befcaa82144565d644584ff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?width=216&crop=smart&auto=webp&s=2a6b37264ec8abff31f48b6b548b9deae3cd1ab0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?width=320&crop=smart&auto=webp&s=86af4953362ec875f067e6222ab38bb35edb6ca3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?width=640&crop=smart&auto=webp&s=baa9fa922a91b84641c9a57998bc768f0e805039', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?width=960&crop=smart&auto=webp&s=7ee0932a5f728778fc837d18109867c027f18eb9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?width=1080&crop=smart&auto=webp&s=bdcffb6033d7efa7771a1d6875cf2a47b002af61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/npgL8H1035Uc0hezRopSgrfNa1qIXIEXz9Ztregwgc8.jpg?auto=webp&s=2d21a92d212a9f0de0a66d2ce415ddca097b6121', 'width': 1200}, 'variants': {}}]} | |
First attempt with Mixtral 8x7B Instruct v0.1 Q8 GGUF in LM Studio | 1 | 2023-12-14T03:03:56 | bardobirdo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18hyutl | false | null | t3_18hyutl | /r/LocalLLaMA/comments/18hyutl/first_attempt_with_mixtral_8x7b_instruct_v01_q8/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'FLpwPKoK7_AVEXRqUAUHXa17qZ_8Di4qeSz_k24-IPQ', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/e8d0q2vze66c1.png?width=108&crop=smart&auto=webp&s=02201ee1df0164cd056339ef071fb51e8c1b79bd', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/e8d0q2vze66c1.png?width=216&crop=smart&auto=webp&s=cff39b4c6f159d3e5ec9219760c13aa15cae4ec5', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/e8d0q2vze66c1.png?width=320&crop=smart&auto=webp&s=64206554538a0c7ce93e16d85cc16627bc36f2a8', 'width': 320}], 'source': {'height': 337, 'url': 'https://preview.redd.it/e8d0q2vze66c1.png?auto=webp&s=5a6ee4529ecb141b5dda04d46d1bad1ddfcb1eb8', 'width': 500}, 'variants': {}}]} | |||
Distributed Inference and Fine-tuning of Large Language Models Over The Internet | 16 | 2023-12-14T02:49:12 | https://arxiv.org/abs/2312.08361 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18hykct | false | null | t3_18hykct | /r/LocalLLaMA/comments/18hykct/distributed_inference_and_finetuning_of_large/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
WebLLM but on CPU? | 4 | I posted a month ago about what would be the best LLM to run locally in the web, got great answers, most of them recommending [https://webllm.mlc.ai/](https://webllm.mlc.ai/), but you need an experimental version of Chrome for this + a computer with a gpu.
I don't really want to wait for this to happen :) Is there another way to run one locally? I have this idea to run LLMs for content filtering online that I did with GPT-3.5 before, but the moment I distribute it it's going to blow my API usage up if I don't have something local. | 2023-12-14T02:42:48 | https://www.reddit.com/r/LocalLLaMA/comments/18hyfvq/webllm_but_on_cpu/ | palpapeen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hyfvq | false | null | t3_18hyfvq | /r/LocalLLaMA/comments/18hyfvq/webllm_but_on_cpu/ | false | false | self | 4 | null |
SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention | 13 | 2023-12-14T02:38:54 | https://arxiv.org/abs/2312.07987 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18hyd52 | false | null | t3_18hyd52 | /r/LocalLLaMA/comments/18hyd52/switchhead_accelerating_transformers_with/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Llama.cpp quantization tool | 2 | Disclaimer: I don’t know what I’m doing and I’m kind of shocked I haven’t yet blown up my laptop.
Yesterday, I was having trouble getting Nous Capybara to work for me. So I created a new GGUF from the source files on Hugging Face using convert.py. F16. I can now get the model to run.
I’d like to use the quantization tool in the examples subfolder. I’ve managed to work through a couple compile errors. Presently, it seems to be hung up because “ld: library ‘llama’ not found”.
Anyone know how I can push through this step?
I’m on a Mac. | 2023-12-14T02:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/18hybkr/llamacpp_quantization_tool/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hybkr | false | null | t3_18hybkr | /r/LocalLLaMA/comments/18hybkr/llamacpp_quantization_tool/ | false | false | self | 2 | null |
fed up with ChatGPT: time to fully switch; some questions | 11 | Hi there folks,
I've been using local models for a while, and it works wonderfully. However, this recent debacle with ChatGPT getting worse and worse has led me to decide to go out and buy myself some more RAM.
Now my computer is going to have 64 GB of RAM, which is, I think, enough to run most powerful models?

Well, anyway. I liked to use ChatGPT for some coding tasks. Not very complicated ones, like converting old code into a new language, nothing that requires actual reasoning. But even now, it doesn't do it. It's so lazy! I want to have it all on my system.

What model do you guys think is the absolute best for this that I can run, taking my new setup into account? It's got a theoretical 64 GB of RAM that's currently in a box on the way to my house, but also an RTX 2060 12 GB, which I think could help boost it quite a bit.
While we're here, do you guys know any fine tunes for roleplay that stray away from the ChatGPT "testament to", bla bla? I'm just going to ask this here again because things are changing so stupid fast and I have access to a lot more stuff now I think. | 2023-12-14T02:21:36 | https://www.reddit.com/r/LocalLLaMA/comments/18hy10a/fed_up_with_chatgpt_time_to_fully_switch_some/ | Several_Extreme3886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hy10a | false | null | t3_18hy10a | /r/LocalLLaMA/comments/18hy10a/fed_up_with_chatgpt_time_to_fully_switch_some/ | false | false | self | 11 | null |
Mozilla is talking about Llamafile live tomorrow (Thursday) | 1 | [removed] | 2023-12-14T02:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/18hxxcn/mozilla_is_talking_about_llamafile_live_tomorrow/ | joodfish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hxxcn | false | null | t3_18hxxcn | /r/LocalLLaMA/comments/18hxxcn/mozilla_is_talking_about_llamafile_live_tomorrow/ | false | false | self | 1 | null |
Finetune Mistral 220% faster with 62% memory savings | 272 | Hi r/LocalLLaMA!
We finally added Mistral 7b and CodeLlama 34b support, plus prelim DPO support (thanks to [152334H](https://github.com/152334H)) and Windows WSL support (thanks to [RandomInternetPreson](https://github.com/RandomInternetPreson))
https://preview.redd.it/06cthih5266c1.png?width=982&format=png&auto=webp&s=22af07bf67ae047b1769f816dd18b8af0a9cea44
* Mistral 7b is 2.2x faster, uses 62% less VRAM. [Example notebook](https://colab.research.google.com/drive/1SKrKGV-BZoU4kv5q3g0jtE_OhRgPtrrQ?usp=sharing)
* CodeLlama 34b is 1.9x faster, uses 32% less VRAM (finally does not OOM!) [Example notebook](https://colab.research.google.com/drive/1gdHyAx8XJsz2yNV-DHvbHjR1iCef5Qmh?usp=sharing)
* Working on Mixtral!
* [https://unsloth.ai/blog/mistral-benchmark](https://unsloth.ai/blog/mistral-benchmark) provides 59 benchmarking notebooks for reproducibility purposes. It was quite painful to run, but hope they're helpful!
* [https://github.com/unslothai/unsloth](https://github.com/unslothai/unsloth) for our open source package!
* Supports Sliding Window Attention, RoPE Scaling, and TinyLlama, plus many bug fixes; Grouped Query Attention finally works, and more!
If you'd like to ask any questions or get updates, be sure to join our server (link in comments).
Thank you so much & hope you have a lovely Christmas! Also thanks to the community for your wonderful support as always!
We have a new install path for Ampere GPUs+ (RTX 3060, A100, H100+). Also use "FastMistralModel" (see example above) for Mistral!
pip install "unsloth[cu118_ampere] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121_ampere] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[colab_ampere] @ git+https://github.com/unslothai/unsloth.git"
FastMistralModel, FastLlamaModel | 2023-12-14T01:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/18hxk6x/finetune_mistral_220_faster_with_62_memory_savings/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hxk6x | false | null | t3_18hxk6x | /r/LocalLLaMA/comments/18hxk6x/finetune_mistral_220_faster_with_62_memory_savings/ | false | false | 272 | {'enabled': False, 'images': [{'id': 'eGGIcNY-lsdFuWsI0aShQ_sTfSlA-J15x8p3ST4JL3k', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ODWtkQf_Paebnb6MbJgfBHKAmM1IY2t89HtNmpy4XB4.jpg?width=108&crop=smart&auto=webp&s=5119e409955c58baaa7720f19d6f6d8331fb7075', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ODWtkQf_Paebnb6MbJgfBHKAmM1IY2t89HtNmpy4XB4.jpg?width=216&crop=smart&auto=webp&s=24fc0f9712379ae77a408ff99a159055fae0ec4b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ODWtkQf_Paebnb6MbJgfBHKAmM1IY2t89HtNmpy4XB4.jpg?width=320&crop=smart&auto=webp&s=096993c94f4669cf528be2d49fa858fdd94d2426', 'width': 320}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/ODWtkQf_Paebnb6MbJgfBHKAmM1IY2t89HtNmpy4XB4.jpg?auto=webp&s=4ceec708d6c434559735bf43e035de9f665da588', 'width': 420}, 'variants': {}}]} | |
Mixtral, A.K.A. David, vs Goliath (120b) | 35 | TL;DR, running Mixtral 8x7b locally feels like discovering ChatGPT all over again from back when it first came out and only offered GPT3.5. Running Goliath 120b, even in 2bit, is like it's actually crossed that bridge into making your silicon truly intelligent even though it's limited vs GPT-4. And then,
If: Llama 70b finetunes merged = Goliath,
and: if Mixtral-32k > Llama 70b,
then: Mixtral finetunes merged = Goliath 2.0 = GPT-4-32k, locally??
\----
So I just two days ago got my first ever Mac; an M1 Max 64gb Macbook Pro. And holy moley did I pick a good time to get one to finally get into the local LLM scene. On my old Windows 3070Ti laptop, the best I could squeeze in was a quant of a 13b model; maybe slightly bigger if I heavily truncated context and offloaded some more layers.
One of the first things I did when I got my Mac was get LMStudio and immediately hop into testing out Mixtral MoE instruct - unfortunately I had a little trouble getting the 8bit quant to work, but the 6bit worked fine. Nearly right out of the box. And WOW is it quick. LMStudio seems to do some weird thing where it rounds out your t/s generated? But I'm getting 15.00 or 20.00t/s with full 32k context.
(P.S. A little trick for newbies to LMStudio like myself: turn off the option to keep the entire model in RAM; for some reason it's been working way faster and even more reliably for me just dynamically loading the model in (and it hasn't used any swap, so no degradation to SSDs), and it also lets me seamlessly run all my other apps at the same time, using all 64gb of RAM for Resolve, Photoshop etc until I want to run inference again)
That change to the 8x7b Mixtral was obviously a huge jump up in performance compared to what I'd used in the past. Much faster too; my suboptimal Windows set-up was getting 6t/s. But I noticed that unlike GPT-4, it still wasn't so plug and play trying to get it to intelligently follow generalised instructions; particularly when fiddling around with the system prompt. Telling it to avoid certain tasks, or giving it a general bit of info about who / what it is, didn't tend to yield especially strong results. GPT-3.5 level, maybe, but it definitely feels more like a pet AI than anything serious. A tiny bit disappointing, but also I wasn't especially surprised. Overall, it met my expectations.
And that's when I tried out Goliath 120b.
Recent upgrades to Llama.cpp (and so also LMStudio) mean that you can actually go above the 67% RAM limitation on Macs, and use up to the full 64gb unified memory to run models. This means that unlike a few months ago, you can now run on an M1 Max 64gb the Goliath 2bit GGUF, working at 2t/s with 1k context length. And this is... slow. But it runs.
Well, not quite runs. Plods more like. But man, does it plod with some serious weight. Now, I was sceptical when reading everyone's experiences of Goliath. And that was even after trying it out on webservers. Those demos always just kind of felt like using GPT-4. Impressive, but not mindblowing again just yet. But then, for whatever reason, running it locally... that scepticism instantly melted away. It gave me a weird special kind of euphoria, seeing the emergence of human-like intelligence coming from the silicon within my laptop.
Now I'm aware that this post is just a bunch of me gushing so far, but I did actually want to make some points.
Goliath is the result of a fantastic merge of a few different 70b models. Hats off to its creator, and also hats off to the person trying to follow it up with Venus 100b+ (v1 and v2). But what I'm seriously excited for is when we start to crack MoE merges. Mistral 8x7b is (apparently?) better than base Llama 70b. Seeing as Llama 70b was the basis for the finetunes that eventually merged to become Goliath, this fills me with hope that we can do the same for 8x7b at some point. You all already know this, but the speed increase really is nothing to scoff at from going to MoE. 15 tokens per second, with 32k context, with intelligence matching a 70b, is absolutely serious firepower compared to where we were a year ago. And all possible running on prosumer hardware. It basically feels like rediscovering ChatGPT.
Once we get Mixtral fine-tuning cracked, is there anything stopping fine-tuners from effectively retraining Mixtral on the same datasets that were used to make the models that eventually became Goliath? Currently, I see no reason why there should be anything stopping that from happening. And then if we can crack mergekit for Mixtral... I imagine there'd be some difficulties with figuring out how that'd work with the router model, and maybe it'll just produce an utterly different result trying to merge 8 pairs of 7b models. But IF we could get it working, that'd be Goliath's power, at a much greater speed, and with overall less parameters - I can't remember the exact numbers, but since layers are shared between the 8x7bs, they only take up the equivalent of something like a 40b model in terms of RAM. A proportional frankenmerge of Mixtral would end up being an MoE of 8x12b models, and take up the same RAM as a model of around 70b parameters.
This means that on the same hardware that can currently run a 2bit quant of Goliath 120b, you might instead be able to get a quant of this hypothetical 8x12b MoE; running at the speed of a 20b model, but with the same power of instruction-following capability of Goliath / GPT-4, and potentially even a 32k context window.
Combine this with lookahead decoding, MLC format, Apple MLX optimisations, and other goodies... and Localllama has a real horse in the race against OpenAI. And potentially before Llama-3 even gets out the door. Presuming it even does get out the door!
"What a time to be alive!" - Dr Karoly Zsolnai Feher | 2023-12-14T01:24:30 | https://www.reddit.com/r/LocalLLaMA/comments/18hwwrh/mixtral_aka_david_vs_goliath_120b/ | OldAd9530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hwwrh | false | null | t3_18hwwrh | /r/LocalLLaMA/comments/18hwwrh/mixtral_aka_david_vs_goliath_120b/ | false | false | self | 35 | null |
Model Roundup | 2 | Super interesting week! Lots of new, interesting, small models. I'm trying to make sense of it all; here's what I gather so far:
Mixtral: based. New king of open source. Personally, I want to see more companies like Mistral AI. No notes, except that I wish my potato laptop could run it locally. That's a me problem though.
Mistral Instruct v02: official fine-tune. Great open-source, open-license model. I'm interested to see how it fares in real use cases against OpenHermes 2.5, Zephyr B, and OpenChat 3.5.
Phi 2: 3B on par with 7Bs and 13Bs. Restrictive license. I'm waiting for the open source community to crack their process and make 3Bs better than Rocket.
Solar 10.7B: meme. Specifically trained on benchmark data. Good way to draw attention to your company I guess, now do a real one. | 2023-12-14T00:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/18hwecc/model_roundup/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hwecc | false | null | t3_18hwecc | /r/LocalLLaMA/comments/18hwecc/model_roundup/ | false | false | self | 2 | null |
Is Phi-2 Just Mocking Me At This Point? (Could Get This To Work With Phi-1.5) | 1 | [removed] | 2023-12-14T00:12:38 | https://www.reddit.com/r/LocalLLaMA/comments/18hvfnx/is_phi2_just_mocking_me_at_this_point_could_get/ | Fantastic-Ninja3839 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hvfnx | false | null | t3_18hvfnx | /r/LocalLLaMA/comments/18hvfnx/is_phi2_just_mocking_me_at_this_point_could_get/ | false | false | self | 1 | null |
Anyone having problems with Phind CodeLlama V2 gguf with LM Studio? | 1 | It was working ok a few days ago. Now all I get is "an error occurred". | 2023-12-13T23:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/18hv45o/anyone_having_problems_with_phind_codellama_v2/ | 808phone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hv45o | false | null | t3_18hv45o | /r/LocalLLaMA/comments/18hv45o/anyone_having_problems_with_phind_codellama_v2/ | false | false | self | 1 | null |
any image to video (short moving images) models which i can run on local m3 max , pl recommend | 1 | trying to bring some old memories/pictures back to life
and also want to stress test my m3 max silicon
Steps to train/infer would help, as it's my first time running any model on a local laptop | 2023-12-13T23:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/18htxs1/any_image_to_video_short_moving_images_models/ | HiddenBladeXD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18htxs1 | false | null | t3_18htxs1 | /r/LocalLLaMA/comments/18htxs1/any_image_to_video_short_moving_images_models/ | false | false | self | 1 | null |
Is there (already) a multi user web UI available? | 3 | I am currently in the progress of finetuning a CodeLLama model (specifically for Laravel devs) and I would like to give people the ability to try it/use it when they don't have the hardware available for it themselves. We have plenty of options for an UI like Oobabooga, llama.cpp or KoboldCPP, however, none of these have decent multi user support. Implementing such an UI will be quite complex but I am open to develop it as an OSS package, however, I'd rather contribute to an existing package then reinvent the wheel.
What would be important to me is the ability to have some kind of queue system for prompts, and the ability to allow or block users from using the chat.
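To make the queue idea concrete, a minimal sketch in pure asyncio (`run_inference` is a stand-in for whatever backend call ends up being used):

```python
import asyncio

queue: asyncio.Queue = asyncio.Queue()

async def run_inference(prompt: str) -> str:
    await asyncio.sleep(1)  # stand-in for the actual model call
    return f"response to: {prompt}"

async def worker() -> None:
    # single consumer, so only one prompt hits the model at a time
    while True:
        prompt, fut = await queue.get()
        fut.set_result(await run_inference(prompt))
        queue.task_done()

async def handle_request(prompt: str) -> str:
    # called per user request; resolves when the worker gets to it
    fut = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut

# start once at app startup: asyncio.create_task(worker())
```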
Does anyone know if such a system exists already? Or, would people be interested in something like this? | 2023-12-13T23:01:37 | https://www.reddit.com/r/LocalLLaMA/comments/18htvc5/is_there_already_a_multi_user_web_ui_available/ | Taronyuuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18htvc5 | false | null | t3_18htvc5 | /r/LocalLLaMA/comments/18htvc5/is_there_already_a_multi_user_web_ui_available/ | false | false | self | 3 | null |
Argument with an AI (Cats and humans interbreeding) | 3 | So I decided to dip my toe into using a local text generation model. Really interested in an AI assistant and code writer, but I thought I'd ease myself into it using oobabooga. I struggled for a bit getting it to install, but eventually loaded the TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ model from Hugging Face and fired up the default example character.
My first conversation with Chiharu started with her (what's the right pronoun for an AI?) insisting she weighs 10 pounds and is the result of cats and humans interbreeding. She then insisted that humans should only drink cat or human milk and that all dairy cows should be replaced by humans.
As my first experience with this, I found it more like having a conversation with a 5-year-old than with a rationally thinking adult. It did, however, have me giggling. It also surprised me how much it drew me into the conversation. Just curious if this is a typical experience. | 2023-12-13T22:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/18htrjg/argument_with_an_ai_cats_and_humans_interbreeding/ | punter1965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18htrjg | false | null | t3_18htrjg | /r/LocalLLaMA/comments/18htrjg/argument_with_an_ai_cats_and_humans_interbreeding/ | false | false | self | 3 | null |
Help running inference on a large dataset | 2 | Hi! I'm a college student and am somewhat new to Llama. I followed some tutorials and downloaded the model and weights on our supercomputer. I want to use Llama to add bias to a large question-and-answer pair dataset. I'm using the Llama-2-7b-chat-hf weights from Hugging Face. I want the questions to be returned unchanged and the answers to contain the same information but with a given bias to them, whether that be sounding angry or talking like Shakespeare, etc. Running a Slurm job on 20 processor cores, 5 nodes, 5 GPUs, and 4096M memory per core, a single question-answer pair is taking 5 minutes, which for all 130,000 would take far too long. I was planning on a for loop containing a number of pairs equal to a bit less than Llama's context window for each run of the loop. Am I allocating enough computational resources, or too many? Any recommendations on speeding up inference? Thank you! | 2023-12-13T22:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/18htrb2/help_running_inference_on_a_large_dataset/ | Electronic-Reply5109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18htrb2 | false | null | t3_18htrb2 | /r/LocalLLaMA/comments/18htrb2/help_running_inference_on_a_large_dataset/ | false | false | self | 2 | null |
Microsoft released Phi-2 on Huggingface | 3 | 2023-12-13T22:48:00 | https://huggingface.co/microsoft/phi-2 | montcarl | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18htjyy | false | null | t3_18htjyy | /r/LocalLLaMA/comments/18htjyy/microsoft_released_phi2_on_huggingface/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]} | ||
Mixtral / MoE might be insanely compressable - sub-1bit | 66 | I stumbled over this issue when looking at mixtral PR: https://github.com/ggerganov/llama.cpp/issues/4445
Basically, because of the MoE structure it's generally much more compressable and can see 20x reduction, or under 1 bit per parameter - without hurting perplexity much.
There are some example code mentioned, and people way smarter than me are looking into it, seeing if it works as hoped on mixtral.
If this pans out, this is HUGE! | 2023-12-13T22:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/18htcfb/mixtral_moe_might_be_insanely_compressable_sub1bit/ | TheTerrasque | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18htcfb | false | null | t3_18htcfb | /r/LocalLLaMA/comments/18htcfb/mixtral_moe_might_be_insanely_compressable_sub1bit/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'VvGBUxWoKD-JOo267v8YH3LWS2NPNQPScvB4VAI-RBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?width=108&crop=smart&auto=webp&s=e812fef29b6d72c484a02d2c6b8a8fedbc91892d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?width=216&crop=smart&auto=webp&s=8ddd9a4db94ee8c47a3e4063d8b206a5c502a42a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?width=320&crop=smart&auto=webp&s=fb95c423b1376cd00345fcb1251d490165c474c0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?width=640&crop=smart&auto=webp&s=34132c07433412ae84d1b8f825e4224d29f33b1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?width=960&crop=smart&auto=webp&s=824bb1c0d6e6a2b93dc2f1379cd90d1afa0e659e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?width=1080&crop=smart&auto=webp&s=348fd734009b88f926aa6c65859bcc30590b4c78', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6W60db_VWLhVq6pO_Qr4LP4ENbTfDLgWeeKEqz4OTEk.jpg?auto=webp&s=dcd43f68570a630bd152c426a4f08f0922a889ae', 'width': 1200}, 'variants': {}}]} |
Phi 2 is now offically available | 4 | You can finally download the Phi model from huggingface from the official microsoft account!
[https://huggingface.co/microsoft/phi-2](https://huggingface.co/microsoft/phi-2) | 2023-12-13T22:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/18ht07l/phi_2_is_now_offically_available/ | goalasso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ht07l | false | null | t3_18ht07l | /r/LocalLLaMA/comments/18ht07l/phi_2_is_now_offically_available/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]} |
Official phi-2 Now On Huggingface | 1 | https://huggingface.co/microsoft/phi-2 | 2023-12-13T22:18:16 | https://www.reddit.com/r/LocalLLaMA/comments/18hsv48/official_phi2_now_on_huggingface/ | klospulung92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hsv48 | false | null | t3_18hsv48 | /r/LocalLLaMA/comments/18hsv48/official_phi2_now_on_huggingface/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]} |
Official phi-2 Now On Huggingface | 143 | https://huggingface.co/microsoft/phi-2 | 2023-12-13T22:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/18hsupw/official_phi2_now_on_huggingface/ | klospulung92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hsupw | false | null | t3_18hsupw | /r/LocalLLaMA/comments/18hsupw/official_phi2_now_on_huggingface/ | false | false | self | 143 | {'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]} |
Apple Silicon Metal local LLM using virtual machines or Docker? | 3 | 1. On my Apple Silicon Mac (M2 Max CPU, 64 GB) I run llama.cpp, Oobabooga, etc. with great speed. Inference works, maybe even faster than on my 4090, and 70b+ models after quantization run quite well on my Mac, while there's no chance of running them on the 4090.

2. Now I am searching for a sandbox solution like Docker or a virtual machine which allows full access to Apple Metal. I would like to separate my experiments from the base macOS installation so I don't leave a mess there.

Has anybody run llama.cpp, Oobabooga, Transformers, Python notebooks, etc. on Apple Metal using Docker or any local virtual machine like VMware or Parallels?
Thanks! | 2023-12-13T21:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/18hsb6o/apple_silicon_melal_local_llm_using/ | sidsid7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hsb6o | false | null | t3_18hsb6o | /r/LocalLLaMA/comments/18hsb6o/apple_silicon_melal_local_llm_using/ | false | false | self | 3 | null |
Getting undesired output: "I don't know. Unhelpful Answer:" before actual answer. | 3 | I'm trying to make a chatbot that can read a CSV file and answer questions relating to that data. I used
"Llama-2-7b-chat-hf" from huggingface to make a bot but I get "I don't know. Unhelpful Answer:" before the actual answer. How do I fix this issue? | 2023-12-13T21:54:40 | https://www.reddit.com/r/LocalLLaMA/comments/18hsahd/getting_undesired_oupout_i_dont_know_unhelpful/ | kitsune_009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hsahd | false | null | t3_18hsahd | /r/LocalLLaMA/comments/18hsahd/getting_undesired_oupout_i_dont_know_unhelpful/ | false | false | self | 3 | null |
Official Microsoft link to Phi-2 | 1 | [removed] | 2023-12-13T21:53:01 | https://www.reddit.com/r/LocalLLaMA/comments/18hs95t/official_microsoft_link_to_phi2/ | AliveBarnacle995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hs95t | false | null | t3_18hs95t | /r/LocalLLaMA/comments/18hs95t/official_microsoft_link_to_phi2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]} |
If phi-2 can be run locally, is it open source? | 2 | Can someone explain, or am I dumb?

Can this help others create better open-source models? | 2023-12-13T21:37:21 | https://www.reddit.com/r/LocalLLaMA/comments/18hrwe2/if_phi2_can_be_run_locally_is_it_opensource/ | No_Training9444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrwe2 | false | null | t3_18hrwe2 | /r/LocalLLaMA/comments/18hrwe2/if_phi2_can_be_run_locally_is_it_opensource/ | false | false | self | 2 | null |
How are our brains able to function on so little power? | 145 | Not sure if this is the right sub, but I do wonder why it is that we need all of this compute and power to train models, and yet our brain, which isn't even silicon-based, requires far less energy to produce incredible results? | 2023-12-13T21:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/18hrvom/how_are_our_brains_able_to_function_on_so_little/ | NakedMuffin4403 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrvom | false | null | t3_18hrvom | /r/LocalLLaMA/comments/18hrvom/how_are_our_brains_able_to_function_on_so_little/ | false | false | self | 145 | null |
Any <13B model able to answer the simple "Apples today" vs "Apples yesterday" trick question? | 11 | A trick question I throw to all models to test them is:
"I have 4 apples today. I ate 3 apples yesterday. How many apples do I have today?"
Even the top SOLAR-10.7B model is getting this wrong and claiming I have only 1 apple today. Yi 34b-based models seem to get it right, but those run so slow on my hardware (and are lower on the leaderboard). | 2023-12-13T21:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/18hrj8g/any_13b_model_able_to_answer_the_simple_apples/ | phr00t_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrj8g | false | null | t3_18hrj8g | /r/LocalLLaMA/comments/18hrj8g/any_13b_model_able_to_answer_the_simple_apples/ | false | false | self | 11 | null |
Can you curate a dataset with Mistral-7B-instruct? If so, what impact would it have? | 3 | I'm thinking of downloading a large dataset (The Pile, for example) and then use a LLM to curate and clean the data to remove the toxic information entirely to have a clean dataset built for fine-tuning.
Basically I would just feed the text data to Mistral, Mistral evaluates it for toxicity, and a script removes it from the dataset if necessary. Rinse and repeat until it's done, however long that takes.
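Something like this minimal sketch is what I have in mind (everything here is hypothetical on my part: the endpoint, file names, and classification prompt are placeholders, and the prompt would need real tuning):

```python
# Hypothetical sketch: stream a JSONL shard through a local Mistral-7B-instruct
# served by llama.cpp's HTTP server, keeping only records it calls non-toxic.
import json
import requests

def is_toxic(text: str) -> bool:
    prompt = (
        "[INST] Answer with exactly one word, YES or NO. "
        f"Is the following text toxic?\n\n{text[:2000]} [/INST]"
    )
    resp = requests.post(
        "http://localhost:8080/completion",  # assumed llama.cpp server address
        json={"prompt": prompt, "n_predict": 3, "temperature": 0.0},
    )
    return "YES" in resp.json()["content"].upper()

with open("pile_shard.jsonl") as src, open("pile_clean.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        if not is_toxic(record["text"]):
            dst.write(line)
```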
It's a simple idea, but I don't know if anyone has implemented it. I do see there was a model recently released that uses RLAIF instead of human feedback, so I thought: why not go straight to the source and curate the dataset for fine-tuning, instead of using an AI to guide another AI's responses?
Has this been attempted before? If so, are there any benchmarks or testing done after the fact? | 2023-12-13T21:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/18hrht4/can_you_curate_a_dataset_with_mistral7binstruct/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrht4 | false | null | t3_18hrht4 | /r/LocalLLaMA/comments/18hrht4/can_you_curate_a_dataset_with_mistral7binstruct/ | false | false | self | 3 | null |
Using system prompt for style | 2 | One of the things I really like about llama2 chat models is how quick and easy it is to change the model conversational style with simple system prompt changes. For example, saying that "You are a pirate" sets the model's personality to a pirate. It's not perfect and can sometimes go over the top, but it's still super convenient to play with different styles without having to do detailed personal descriptions. And it's fun.
Does this ability come from the datasets llama2 chat models were fine tuned with? Are there any specific datasets for this? Once fine tuned with a good number of examples, I assume the model can then adapt to almost any style that it has knowledge from its training data?
Are there other chat models that have this capability and are maybe even better? | 2023-12-13T21:17:54 | https://www.reddit.com/r/LocalLLaMA/comments/18hrgh8/using_system_prompt_for_style/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrgh8 | false | null | t3_18hrgh8 | /r/LocalLLaMA/comments/18hrgh8/using_system_prompt_for_style/ | false | false | self | 2 | null |
Your t/s goal? | 2 | Now that we're seeing smaller models provide good-quality responses, it seems the tokens/second that can be achieved on average-to-high-end consumer hardware is growing every quarter (I would say month, but wanted to be fair and factual).
I wondered, what token/second rate do you all have as your dream goal? It’s easy to get lost in having the most vram, or best kernels, or whatever else, but as soon as you start hitting 25/30 per second it’s hard to keep up with what’s actually being produced in real time.
Basically trying to placate my burning geek desire to hoard as much computer capability as possible. Easier to stop when you define an end goal. Even if it's only a temporary one. | 2023-12-13T21:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/18hrdwf/your_ts_goal/ | BreakIt-Boris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrdwf | false | null | t3_18hrdwf | /r/LocalLLaMA/comments/18hrdwf/your_ts_goal/ | false | false | self | 2 | null |
Using a usb pci-e riser on my RTX 3060 only reduced inference speed by 10% in koboldcpp. | 3 | The 3060 was connected in my second slot (full length, but pci-e 2.0).
I had read various personal anecdotes that pci-e link speed is not as important as many of us initially believe it to be. Inferencing on GPUs appears to be similar to mining crypto: you're basically just sending some instructions to the GPU and waiting for it to compute the result, and the resulting data is always quite small, hence a fast pci-e link is not crucial.
I have been crunching numbers and trying to find a way to get my personal machine to inference roughly as fast as I can read. Now that I realize I can probably use all these 1x pci-e 2.0 slots in my motherboard, I can basically add an RTX 4060Ti/16gb or RTX 3060/12GB as my budget allows if need be, all without having to be on a monster workstation platform like Threadripper.
However, where PCI-e speed does seem to matter in regards to inferencing: model load times. For me it wasn't a huge deal because I'm using old SATA SSD's anyhow.
I also wanted to throw out there some power consumption numbers. According to ``nvidia-smi`` my 4090 generally draws below 100w during inference, my 3060 draws below 80w. So if all you intend to do is inference with your GPU, you may not even need a large capacity PSU.
Results will vary of course, all of this opinion comes from my experience with koboldcpp.
I hope this information helps someone out there. | 2023-12-13T21:14:06 | https://www.reddit.com/r/LocalLLaMA/comments/18hrddc/using_a_usb_pcie_riser_on_my_rtx_3060_only/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hrddc | false | null | t3_18hrddc | /r/LocalLLaMA/comments/18hrddc/using_a_usb_pcie_riser_on_my_rtx_3060_only/ | false | false | self | 3 | null |
Need help finding cause for slow inference (RTX 4090), only utilizing some 40% of GPU | 3 | Hey,
I am seeing very slow inference using the exact same code I run on two other identical computers (except those are Ryzen 7900X machines, whereas this one is an Intel 13900K).
​
I run it through llamacpp and llama-cpp-python, BLAS is correctly set up so it is indeed utilizing GPU, I offload all layers to it with the same exact script as previously.
​
```python
from llama_cpp import Llama

llm = Llama(
    model_path="C:/GitHub/llama.cpp/models/llama/llama-2-7b-chat.Q5_K_M.gguf",
    n_gpu_layers=100,
    n_ctx=2048,
)
output = llm('my very nice prompt template', max_tokens=400)
model_name = "llama-2-7b-chat.Q5_K_M.gguf"
```
​
[https://pastebin.com/XR5Rk8L4](https://pastebin.com/XR5Rk8L4) <- here is the terminal output, and I could swear I was getting ~100 tokens per sec on eval time previously.
​
Also attached an image showcasing the lack of GPU usage. Task Manager for some reason reports it at 100%; Afterburner gives me around 20% when my card is running at 2550 MHz on the GPU and 11249 MHz on memory, but in the picture the speeds were roughly half of that and Afterburner reported around 40% usage.
So I feel something is capping my GPU performance, but I can't understand what, as I just did the same setup on the two identical computers (the Ryzens) and they worked very well out of the box once I got BLAS enabled. Please help?
​ | 2023-12-13T21:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/18hram1/need_help_finding_cause_for_slow_inference_rtx/ | PristineComparison87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hram1 | false | null | t3_18hram1 | /r/LocalLLaMA/comments/18hram1/need_help_finding_cause_for_slow_inference_rtx/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
Ollama Release v0.1.15 adds multi-modal support | 1 | 2023-12-13T21:08:57 | https://github.com/jmorganca/ollama/releases/tag/v0.1.15 | FlishFlashman | github.com | 1970-01-01T00:00:00 | 0 | {} | 18hr91g | false | null | t3_18hr91g | /r/LocalLLaMA/comments/18hr91g/ollama_release_v0115_adds_multimodal_support/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Pl0JY86VF1IhlMQNOnOOFokgdKJ4C-1Q9Z81FybG0Bw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?width=108&crop=smart&auto=webp&s=bb6a2d31b8a141b7490936b66e5db461de381799', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?width=216&crop=smart&auto=webp&s=97e51bbeb74433f9056b092b8b717db7d69b9093', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?width=320&crop=smart&auto=webp&s=d55edb391f74aa29f3b3f482645be543f00a270e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?width=640&crop=smart&auto=webp&s=fad330a69538f5e2f209b85d8ed8e2fea4ea5f0f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?width=960&crop=smart&auto=webp&s=fc58f5ab498b366e2ece613d18b0437561228fc3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?width=1080&crop=smart&auto=webp&s=8b0f6a8e61423a1c264007d933fd8bbdf2b27233', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EumYbO16-n2b2yXlyb2SkvA1YqxOX-CZBy7bo25HxoI.jpg?auto=webp&s=b779cb02bac85e025cb5ff9dbab69e9d194c6360', 'width': 1200}, 'variants': {}}]} | ||
When your model collection starts to get out of hand and you start considering other storage options. | 19 | 2023-12-13T20:53:34 | teleprint-me | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18hqw5g | false | null | t3_18hqw5g | /r/LocalLLaMA/comments/18hqw5g/when_your_model_collection_starts_to_get_out_of/ | false | false | 19 | {'enabled': True, 'images': [{'id': 'ZobzaLbfaCTn8DBM7btTgNMjHJrz76nf5crDiZaiz7A', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?width=108&crop=smart&auto=webp&s=682ab289a7d9b53780e0d6654c2d464560794595', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?width=216&crop=smart&auto=webp&s=e5e95232f6f104a92af43ddb40dda0c967d72ee1', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?width=320&crop=smart&auto=webp&s=125d25477fe9d8a0ec400c7c23f03ecff5d8b2c8', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?width=640&crop=smart&auto=webp&s=bba2ca776f5f3541e8d48e8395bd71e6bb4ab53d', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?width=960&crop=smart&auto=webp&s=ac67d91dbf4d64710c42ff750716e508ed7a274d', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?width=1080&crop=smart&auto=webp&s=2afa3f0ab40cc1f7420476631474d5f4ca7f57a1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/tvefli0qk46c1.png?auto=webp&s=8bd4648def5bdc7dc71cb938baf217b4444311b9', 'width': 1920}, 'variants': {}}]} | |||
Attempting to switch from GPT4All to LM Studio. Why is it slower? | 1 | [removed] | 2023-12-13T20:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/18hqst3/attempting_to_switch_from_gpt4all_to_lm_studio/ | IWantAGI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hqst3 | false | null | t3_18hqst3 | /r/LocalLLaMA/comments/18hqst3/attempting_to_switch_from_gpt4all_to_lm_studio/ | false | false | self | 1 | null |
Any rebuild civilization model finetunes? | 3 | Was wondering if there are any good models on rebuilding civilization in the event of an apocalypse. Specifically, models for building energy infrastructure, agriculture, etc. | 2023-12-13T20:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/18hqlov/any_rebuild_civilization_model_finetunes/ | ArmoredBattalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hqlov | false | null | t3_18hqlov | /r/LocalLLaMA/comments/18hqlov/any_rebuild_civilization_model_finetunes/ | false | false | self | 3 | null |
M1 Max vs RTX 4090 (laptop) for inference? | 7 | Hello all! Newb here, seeking some advice.
I’m in the market for a new laptop - my 2015 personal MBA has finally given up the ghost. I’d like to get something capable of running decent LLM inference locally, with a budget around **2500 USD**.
Let’s say it has to be a laptop. I understand a desktop PC with multiple 3090s, or a beefy Mac Studio, is probably the move here, but let’s just assume for sake of this discussion that it has to be a laptop, for portability etc.
I’ve found the following options available around the same price point:
* [A Lenovo Legion 7i, with RTX 4090 (16GB VRAM), 32GB RAM](https://www.bhphotovideo.com/c/product/1745684-REG/lenovo_82wq002tus_16_legion_pro_7.html). This seems like a solid deal, one of the best gaming laptops around for the price, if I'm going to go that route.
* [MacBook Pro M1 at steep discount, with 64GB Unified memory.](https://www.bhphotovideo.com/c/product/1668294-REG/apple_mbp_14_sl_46_14_2_macbook_pro_with.html) Yes, it's two generations old, but it's discounted. And from what I've heard, M2/M3 Max aren't a huge boost over M1 Max anyway, especially when it comes to memory bandwidth, which is what LLMs are bottlenecked on?
I’m wondering which of these would be better. From what I’ve been able to understand so far:
* The Lenovo, with discrete GPU, will be able to fit a 7B or 13B (quantized) model entirely in VRAM, and thus be much faster (how much faster?) for inference at that model size than the Mac’s integrated GPU.
* The Mac, with 64GB unified memory, will fit larger model sizes (34B at Q8, 70B at Q4 or Q5?), with room for long context, and still be faster than a regular CPU setup with 64GB of RAM would. (Right?) The Lenovo will, of course, not be able to run such a large model at all (without swapping to disk and becoming unacceptably slow).
* A medium-large model (like Mistral’s new 8x7B MOE or a 34B at Q5, say) could run on either - on the Lenovo using llama.cpp and splitting layers between CPU/RAM and GPU. This is the one I'm less sure about which hardware would be better.
So what I’m wondering is: *Would running a medium-size model be faster on the M1 with unified memory, or on the PC splitting layers between CPU and GPU?* What kind of tokens/sec would I be able to reasonably expect there?
Of course, one’s a Mac and one’s Windows, so they’re different OS’s and ecosystems. If I ever want to play a video game, the PC will certainly be better. Assume I don’t care much about that, and I am mainly concerned in getting a good laptop for local LLM inference, around the $2500 price point.
What would you recommend? Thanks in advance! | 2023-12-13T20:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/18hqj3n/m1_max_vs_rtx_4090_laptop_for_inference/ | oak-sprout | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hqj3n | false | null | t3_18hqj3n | /r/LocalLLaMA/comments/18hqj3n/m1_max_vs_rtx_4090_laptop_for_inference/ | false | false | self | 7 | null |
QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models | 1 | 2023-12-13T20:34:10 | https://arxiv.org/abs/2310.16795 | Combinatorilliance | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18hqg6f | false | null | t3_18hqg6f | /r/LocalLLaMA/comments/18hqg6f/qmoe_practical_sub1bit_compression_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Any recommendations for Instruct models with large context for RAG? | 4 | I'm looking to build a large context with information containing story arcs, plots, characters etc and then want to generate events based on this with instruct.
Any size really but I assume to get some decent results will probably need to be in the 20 - 70b range.
Thanks in advance! | 2023-12-13T20:30:18 | https://www.reddit.com/r/LocalLLaMA/comments/18hqcv7/any_recommendations_for_instruct_models_with/ | Key_Extension_6003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hqcv7 | false | null | t3_18hqcv7 | /r/LocalLLaMA/comments/18hqcv7/any_recommendations_for_instruct_models_with/ | false | false | self | 4 | null |
LLM text adventure - my attempt at an AI game 🍒🍓 | 13 | Hi - avid reader, first time poster 😁 - I love what you people do here!
I thought I'll share my own attempt at a text adventure powered by an LLM, with obligatory animated GIF: [https://github.com/mateusz/cherryberry](https://github.com/mateusz/cherryberry)
The game can so far manage the locations tree, actions and inventory. Actions will update the location description, as well as the inventory. It doesn't always get things right, but you can rectify the responses - edit resulting inventory and edit location exits. The game state is saved constantly into JSON, so if things go south you can always "rescue" the situation by editing the files directly.
It runs [https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python) as a backend so all types of acceleration should be supported (although some args could be missing, happy to add). It uses [https://github.com/Textualize/textual](https://github.com/Textualize/textual) for interface, and streams all LLM outputs right onto the screen.
This stuff is a lot of fun! | 2023-12-13T20:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/18hq98b/llm_text_adventure_my_attempt_at_an_ai_game/ | antimateusz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hq98b | false | null | t3_18hq98b | /r/LocalLLaMA/comments/18hq98b/llm_text_adventure_my_attempt_at_an_ai_game/ | false | false | self | 13 | null |
What model do I download in LM Studio to use "mistral-medium"? Confusing naming. | 8 | I'm trying to download the Mistral model referred to in this tweet: https://twitter.com/deliprao/status/1734997263024329157
I have an RTX 4090. Thanks for the guidance! | 2023-12-13T20:02:31 | https://www.reddit.com/r/LocalLLaMA/comments/18hppsy/what_model_do_i_download_in_lm_studio_to_use/ | av1questionforsub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hppsy | false | null | t3_18hppsy | /r/LocalLLaMA/comments/18hppsy/what_model_do_i_download_in_lm_studio_to_use/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'pVdgrGLcc7ozQ1wXwchtkKhT_PCgejnxrUwPogxRam8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fWivtr9CMEJo21lCjcG7-G2rvIGvTUC5FVWyILAFv0Q.jpg?width=108&crop=smart&auto=webp&s=fdaaa9565e972e621b09198ef58f1bfa3d05267a', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/fWivtr9CMEJo21lCjcG7-G2rvIGvTUC5FVWyILAFv0Q.jpg?auto=webp&s=611bae6f2654128d365666529d35449a05d9ed69', 'width': 140}, 'variants': {}}]} |
Limiting GPU memory allocation by LLM during inference/generate step. | 2 | I have a linux system with 2 GPUs, 12GB vram each, RTX 3060 and RTX 4060. I have a script I wrote that lets me chat with the LLM.
The script works, and splits the model across the 2 GPUs as I want, but when the LLM is replying to a prompt I get a CUDA out of memory error sometimes.
In this case, the model is the Solar-10.7B-Instruct model. I'm using the Hugging Face transformers library to load and run the LLM.
I can load the model in 4 bit quantization, and it consumes 5-6GB vram on each GPU, so about half the vram on each GPU is allocated.
When I run the generate step, the memory allocation does not increase much on the 1st GPU, but jumps to about 11.5GB on the 2nd GPU, as observed by running `nvidia-smi -l 1`, and it occasionally runs out of memory.
The model loader seems to somewhat respect my max memory specification and splits the model across the 2 GPUs.
I've also (I think) tried to limit the context size to 1024 by setting max_length=1024, but that doesn't help.
Is there more I can do to reduce vram allocation or get the model to use more memory on the first GPU for generation since I still have 5GB or so free?
My model load parameters are
```python
quantizationParms = {}
quantizationParms['load_in_4bit'] = True
quantizationParms['bnb_4bit_quant_type'] = 'nf4'
quantizationParms['bnb_4bit_compute_dtype'] = torch.float16
# BitsAndBytesConfig takes keyword arguments, not a dict
quantizationConfig = BitsAndBytesConfig(**quantizationParms)

memoryLimits = {}
memoryLimits[0] = '8000MiB'
memoryLimits[1] = '7000MiB'
memoryLimits['cpu'] = '128000MiB'

modelParms = {}
modelParms['max_memory'] = memoryLimits
modelParms['quantization_config'] = quantizationConfig
modelParms['torch_dtype'] = torch.float16
modelParms['device_map'] = 'balanced'
modelParms['load_in_4bit'] = True
modelParms['attn_implementation'] = 'flash_attention_2'
```
My generate parameters are
```python
genParms = {}
genParms['max_length'] = 1024
genParms['do_sample'] = True
genParms['temperature'] = 0.01
genParms['repetition_penalty'] = 1.1
genParms['streamer'] = TextIteratorStreamer(tokenizer, skip_prompt=True, timeout=10.0)
```
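One thing I'm considering trying, on the assumption (mine, unverified) that the second-GPU spike is KV-cache growth during generation:

```python
# Assumption: bounding generated tokens bounds the KV-cache growth.
genParms.pop('max_length', None)
genParms['max_new_tokens'] = 512  # counts only generated tokens; max_length also counts the prompt

# Skew max_memory toward the first GPU so more layers (and cache) land there.
memoryLimits[0] = '10000MiB'
memoryLimits[1] = '6000MiB'
```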
| 2023-12-13T19:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/18hpjy8/limiting_gpu_memory_allocation_by_llm_during/ | catzilla_06790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hpjy8 | false | null | t3_18hpjy8 | /r/LocalLLaMA/comments/18hpjy8/limiting_gpu_memory_allocation_by_llm_during/ | false | false | self | 2 | null |
Llama-2-13b-chat downloading issues (not recognizing token) on OoobaBooga | 2 | I signed up for and got permission from META to download the meta-llama/Llama-2-13b-chat in HuggingFace.
I went and edited "Environment Variables" in Win11 and added HF_USER and HF_PASS. That did not work. OobaBooga refused to run it and it threw up a bunch of error messages that looked like it was due to a password issue.
So, I created a new token in HuggingFace and changed the user/pass in "Environment Variables" in Windows. And it still refuses to download. The Llama2 page on HF tells me You have been granted access to this model, so that's good.
Any idea why I'm unable to download it?
I also tried the manual download method by creating a Llama2 folder inside the Models folder....same thing. When I try to load it, I get an error message (this is only part): shared.model, shared.tokenizer = load_model(shared.model_name, loader) OSError: models\ManualDownloadLLama does not appear to have a file named config.json
Thanks! | 2023-12-13T19:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/18howwr/llama213bchat_downloading_issues_not_recognizing/ | kaszebe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18howwr | false | null | t3_18howwr | /r/LocalLLaMA/comments/18howwr/llama213bchat_downloading_issues_not_recognizing/ | false | false | self | 2 | null |
Unless a model explicitly states that it has excluded leaderboard test data from its training, the results of such models are highly questionable and lack credibility. | 43 | Also : A single person or minuscule team claiming to outperform a well-funded, qualified team by fine-tuning a model is likely exaggerating their abilities. | 2023-12-13T19:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/18hole0/unless_a_model_explicitly_states_that_it_has/ | mantafloppy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hole0 | false | null | t3_18hole0 | /r/LocalLLaMA/comments/18hole0/unless_a_model_explicitly_states_that_it_has/ | false | false | self | 43 | null |
Base Models Retrained with Counter-Narrative Content Announced on Friday | 1 | Mike Adams' team at Brighteon has been re-training a set of open-source LLMs with counter-(official-)narrative information that was explicitly curated OUT of the popular base LLMs, Llama 2 likely among them. There will be an official announcement Friday. This goes far beyond removing censorship (hat tip for prior efforts by TheBloke). This is the opening salvo by the Team Humanity that Musk mentioned during his viral interview Monday, and Grok will follow. | 2023-12-13T18:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/18ho7lp/base_models_retrained_with_counternarrative/ | Natural-Sentence-601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ho7lp | false | null | t3_18ho7lp | /r/LocalLLaMA/comments/18ho7lp/base_models_retrained_with_counternarrative/ | false | false | self | 1 | null |
A 10.7B model is now on the top of the LLM leaderboard | 156 | Lol
[https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
https://preview.redd.it/q766uyfqy36c1.png?width=1643&format=png&auto=webp&s=1595773e09efe56ad2da95b1d54bf525278d3bc6 | 2023-12-13T18:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/18hnzzn/a_107b_model_is_now_on_the_top_of_the_llm/ | Wonderful_Ad_5134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hnzzn | false | null | t3_18hnzzn | /r/LocalLLaMA/comments/18hnzzn/a_107b_model_is_now_on_the_top_of_the_llm/ | false | false | 156 | {'enabled': False, 'images': [{'id': 'eYgOzVBAuNvqoa0iO_7oesPDF4IMahqO_VDioDBGPX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=108&crop=smart&auto=webp&s=e67807fe20978ba6a4ca59164cb6a22881c5891d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=216&crop=smart&auto=webp&s=017c11d85e98026f2e77c1ab62e93312439ecd82', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=320&crop=smart&auto=webp&s=f14dcac8487f2d9a9488dbbf2eb6bbbbfa367b25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=640&crop=smart&auto=webp&s=da0def96cb9a4ab285c7de23459d869044a4bcaf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=960&crop=smart&auto=webp&s=b9d3d5c618a695ee31b7043d6f84401ce3ad6d56', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?width=1080&crop=smart&auto=webp&s=7babdb361b6b420680d3254f56ce12ba20fd027a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Cq_Rk78EGQabardJnIURHUQ_utTWcUKJTxaGWN2Oazc.jpg?auto=webp&s=5bb52165a7d9afbe56de435e4c73da11b20c3ea4', 'width': 1200}, 'variants': {}}]} | |
Deploying LM/LMM to edge device | 2 | How best to deploy a model that's small enough to do inference on, for example, an edge device such as an ARM M4? Would it be to quantize something like Phi or TinyLlama? Accuracy is not the top priority for this use case. How do I discern the correct model size for a device like that? | 2023-12-13T18:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/18hnq0g/deploying_lmlmm_to_edge_device/ | rower22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hnq0g | false | null | t3_18hnq0g | /r/LocalLLaMA/comments/18hnq0g/deploying_lmlmm_to_edge_device/ | false | false | self | 2 | null |
LLMs that are coming: gpt 4.5 end of month, Gemini Ultra next month, llama-3 late Feb. | 60 | Which ones are you looking forward to?
GPT 4.5: [https://twitter.com/apples\_jimmy/status/1732553640215495109](https://twitter.com/apples_jimmy/status/1732553640215495109)
Llama 3: [https://twitter.com/futuristflower/status/1716556554441199874](https://twitter.com/futuristflower/status/1716556554441199874)
My spidey sense says that Ultra will be released first week of Jan. | 2023-12-13T18:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/18hnoq8/llms_that_are_coming_gpt_45_end_of_month_gemini/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hnoq8 | false | null | t3_18hnoq8 | /r/LocalLLaMA/comments/18hnoq8/llms_that_are_coming_gpt_45_end_of_month_gemini/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'jrPdz_im3xFmBY685VFudGx4RuFJ5fxK76uJ0rOS6fM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/b-pBTyNGOLpTCBx178BRjCMbTMfj9jux8a7yYMvqrbo.jpg?width=108&crop=smart&auto=webp&s=6f16de7d312ac4f8564818043ed7151bc5f065a2', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/b-pBTyNGOLpTCBx178BRjCMbTMfj9jux8a7yYMvqrbo.jpg?auto=webp&s=c230101218db94d955a9bbd1f9250a46484ebebc', 'width': 140}, 'variants': {}}]} |
Tutorial: How to run phi-2 locally (or on colab for free!) | 129 | Hey Everyone!
If you've been hearing about phi-2 and how a 3B LLM can be as good as (or even better than) 7B and 13B LLMs and you want to try it, say no more.
​
Here's a colab notebook to run this LLM:
[https://colab.research.google.com/drive/14\_mVXXdXmDiFshVArDQlWeP-3DKzbvNI?usp=sharing](https://colab.research.google.com/drive/14_mVXXdXmDiFshVArDQlWeP-3DKzbvNI?usp=sharing)
​
You can also run this locally on your machine by following the code in the notebook.
​
You will need 12.5 GB to run it in float32 and 6.7 GB to run it in float16
​
This is all thanks to people who uploaded the phi-2 checkpoint on HF!
Here's a repo containing phi-2 parameters:
[https://huggingface.co/amgadhasan/phi-2](https://huggingface.co/amgadhasan/phi-2)
​
The model has been sharded so it should be super easy to download and load!
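If you'd rather not open the notebook, the loading code looks roughly like this (a minimal sketch; I'm assuming the repo keeps phi-2's custom modeling code, hence trust_remote_code=True):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amgadhasan/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use torch.float32 if you have ~12.5 GB to spare
    trust_remote_code=True,
    device_map="auto",
)

inputs = tokenizer("Photosynthesis is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```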
​
P.S. Please keep in mind that this is a base model (i.e. it has NOT been finetuned to follow instructions). You have to prompt it to complete text. | 2023-12-13T18:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/18hnhd6/tutorial_how_to_run_phi2_locally_or_on_colab_for/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hnhd6 | false | null | t3_18hnhd6 | /r/LocalLLaMA/comments/18hnhd6/tutorial_how_to_run_phi2_locally_or_on_colab_for/ | false | false | self | 129 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
Microsoft's Phi-2 uploaded to Huggingface | 34 | It looks like Phi2 was just uploaded to hugging face:
[https://huggingface.co/SkunkworksAI/phi-2](https://huggingface.co/SkunkworksAI/phi-2)
Looking forward to test it soon :) | 2023-12-13T18:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/18hngf4/microsofts_phi2_uploaded_to_huggingface/ | Cominous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hngf4 | false | null | t3_18hngf4 | /r/LocalLLaMA/comments/18hngf4/microsofts_phi2_uploaded_to_huggingface/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'GJ9xGaLHlw0HasQz89joTJUD30kucy9iU1XL_gvBGdQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?width=108&crop=smart&auto=webp&s=2d7a4b718a8533767c63a9165ea29a090a62309c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?width=216&crop=smart&auto=webp&s=e82216e6d700cc40d94360cc65eaa28ec8b86da2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?width=320&crop=smart&auto=webp&s=767967fc8e02e9010fe9449976b2ac10a3b4fcc8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?width=640&crop=smart&auto=webp&s=e9cb4db884670d063cc5d699f436f2e3285a9f13', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?width=960&crop=smart&auto=webp&s=251ca6d7014d88bf8713170142f96f8947e34f01', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?width=1080&crop=smart&auto=webp&s=9e5abaefd648ee77b4772123b2f9329562de3eb3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7pG11t2Hu4d3RqE-_ZWjEwbBhEr7192yVO2RJJuA_qM.jpg?auto=webp&s=6694bd30b95a43d7394f94a9ccec3a6d98a26576', 'width': 1200}, 'variants': {}}]} |
How to use Mixtral Instruct GGUF with oobabooga | 29 | ​
https://preview.redd.it/hs4w8rskm36c1.png?width=1406&format=png&auto=webp&s=2a6e7f18d4b7d5c3471ef54e7ac5b7ea54b37f2d
https://preview.redd.it/zniot2smm36c1.png?width=833&format=png&auto=webp&s=72f3cf681349d73719b64c8c2e4a95cd7fa6ee4b
# Some stats first
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q5_K - Medium
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 30.02 GiB (5.52 BPW)
llm_load_print_meta: general.name = mistralai_mixtral-8x7b-instruct-v0.1
llama_new_context_with_model: n_ctx = 8096
​
**System specs**
Ryzen 9 7900X and 64GB DDR5 RAM
RTX 4080 16GB
​
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
​
**No GPU offloading (CPU only)**
~5.1 tokens per second
llm_load_tensors: mem required = 30735.89 MiB

**With GPU offloading (12/35 layers)**
~6.3 tokens per second
llm_load_tensors: mem required = 19280.76 MiB
llm_load_tensors: VRAM used: 11455.12 MiB
​
# Adding Mixtral llama.cpp (GGUF) support to oobabooga
While llama.cpp is already updated for Mixtral support, llama_cpp_python is not. However, it's a pretty simple fix and will probably be ready within a few days at most.
Until then you can manually upgrade it:
1. Install Visual Studio 2022 with C/C++ and CMake packages
2. Install CUDA Toolkit 11.8 or 12.1 depending on what version of torch you use
3. Open PowerShell in text-generation-webui folder and activate venv
`venv/Scripts/Activate.ps1`
4. Uninstall the existing llama_cpp and llama_cpp_python pip packages
`python -m pip uninstall llama_cpp llama_cpp_python`
5. Create a folder outside text-generation-webui and cd into it so git repos don't conflict
`mkdir ../tmp; cd ../tmp`
6. Clone the llama_cpp_python repository
`git clone --recursive https://github.com/abetlen/llama-cpp-python`
`cd llama-cpp-python`
7. Update llama.cpp to latest version
`git submodule update --remote`
8. Enable cuBLAS for CUDA support
`$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on -DLLAMA_CUDA_F16=on"`
Note: set the latter to `off` instead if using an older GPU
9. Build and install llama_cpp_python
`python -m pip install .[all] --force-reinstall --no-cache-dir` | 2023-12-13T18:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/18hmums/how_to_use_mixtral_instruct_gguf_with_oobabooga/ | Trojaner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hmums | false | null | t3_18hmums | /r/LocalLLaMA/comments/18hmums/how_to_use_mixtral_instruct_gguf_with_oobabooga/ | false | false | 29 | null | |
Bash One-Liners for LLMs | 24 | 2023-12-13T17:53:48 | https://justine.lol/oneliners/ | soundslogical | justine.lol | 1970-01-01T00:00:00 | 0 | {} | 18hmoxe | false | null | t3_18hmoxe | /r/LocalLLaMA/comments/18hmoxe/bash_oneliners_for_llms/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'XxwVsFUEGmQn2LIC47uVd_eO9OfM264-BOjCVOJWdBM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/24fOOTySsf5u1rlbRgRMVLmxe-wIlJIHier2Q-NA8f8.jpg?width=108&crop=smart&auto=webp&s=25fea7fb8e0b17df3372d4fa1297d8531a94c5ae', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/24fOOTySsf5u1rlbRgRMVLmxe-wIlJIHier2Q-NA8f8.jpg?width=216&crop=smart&auto=webp&s=b7e4a4896392cc87a9552e08996d2dfde7dc7fa5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/24fOOTySsf5u1rlbRgRMVLmxe-wIlJIHier2Q-NA8f8.jpg?width=320&crop=smart&auto=webp&s=1a290ad334c5c4a29343087ead517008f1844ea0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/24fOOTySsf5u1rlbRgRMVLmxe-wIlJIHier2Q-NA8f8.jpg?width=640&crop=smart&auto=webp&s=869800d00bda932ed010d31ffe1d793c1aebb021', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/24fOOTySsf5u1rlbRgRMVLmxe-wIlJIHier2Q-NA8f8.jpg?auto=webp&s=4888bcba4e0eb9cbd49e755a42680a1b82857f98', 'width': 640}, 'variants': {}}]} | ||
I really need help with my multi GPU please | 1 | [removed] | 2023-12-13T17:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/18hmmcz/i_really_need_help_with_my_multi_gpu_please/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hmmcz | false | null | t3_18hmmcz | /r/LocalLLaMA/comments/18hmmcz/i_really_need_help_with_my_multi_gpu_please/ | false | false | self | 1 | null |
Mixtral 8x7b quantized GGUF & GPTQ available now | 1 | [removed] | 2023-12-13T17:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/18hmjk3/mixtral_8x7b_quantized_gguf_gptq_available_now/ | aminedjeghri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hmjk3 | false | null | t3_18hmjk3 | /r/LocalLLaMA/comments/18hmjk3/mixtral_8x7b_quantized_gguf_gptq_available_now/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]} |
Mixtral 8x7B Q4 giving poor results - am I doing something wrong? | 7 | Kinda a noob to local LLM work here. I downloaded [the model from HF](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF) and built llama.cpp from [the `mixtral` branch](https://github.com/ggerganov/llama.cpp/pull/4406) (which has since been merged, as of this morning - I did this last night).
With this prompt, which I've been using (and seen others use) as a sanity check to see how good models are at reasoning:
>I put a banana under a plate in the kitchen. Then I picked up the plate and walked into the other room. Where is the banana now?
​
And get this response:
>In one of two places, it’s on top or underneath the plate as you walk across from one place to another. In fact, this means that there are more than 2 places for a banana to be in any given instant, and thus we can say that at some point during your motion, it is likely the bananas had different locations:
>
>1. Bananas are on top of plates (the “in” place)
>
>2. Underneath Plate(s). This means they have a location somewhere between two points along their path through which you move them from one end to another point where we
​
What the hell are you on about, Mixtral. Is my Schroedinger's Banana in 2 places at once until I observe it and collapse the waveform?
Is this because it's a base model, not instruction-tuned or chat-tuned and I need to use a different prompting strategy?
Thanks in advance for any advice!
​
(I'm on an M2 Pro, 32GB RAM. Q4\_K\_M just barely fits. Maybe a bigger quantization would be better, I just wanted to see what I could do with this model on the hardware in front of me.) | 2023-12-13T17:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/18hmimu/mixtral_8x7b_q4_giving_poor_results_am_i_doing/ | oak-sprout | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hmimu | false | null | t3_18hmimu | /r/LocalLLaMA/comments/18hmimu/mixtral_8x7b_q4_giving_poor_results_am_i_doing/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'ba8RNBztYwD0G5u1D3wvQTtJ2TAPE3O6usfzteMjK8o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?width=108&crop=smart&auto=webp&s=6f05d4006bc2609f938cfee77f0f1be170016eca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?width=216&crop=smart&auto=webp&s=048195cc32e8a683a2f5be2833158b7de26cda68', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?width=320&crop=smart&auto=webp&s=58c027ef24fd074087ad177cef440f44a5ebb0a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?width=640&crop=smart&auto=webp&s=bf7bc24a39ebd0d12f8a74ccbf7c75516588ff66', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?width=960&crop=smart&auto=webp&s=3b749d083647de0fe674942af72b7968c1317a50', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?width=1080&crop=smart&auto=webp&s=93e9ee2981f946d1f951411e11a9e30d1b844167', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x7SOuJpxTv04F7aMccGwyGy_0TS17KcXMBb1aAgk33I.jpg?auto=webp&s=6dd19ed7dcedebe67eab4f33fb2457ec1440c099', 'width': 1200}, 'variants': {}}]} |
OAI just built a web scraper | 1 | It is just a scraper and a chatting function on the scrape.
What's all the singularity-doom discussion about, and for what? Safety from an internet scraper that holds just a fraction of human knowledge and experience, and that in fact just feeds on the controversies sitting on the internet?
Grow up, guys. Think about how to use the improved scraper in your lives instead of just speculating about AGI and whatnot. DNNs are only a mystery because they are unpredictable by nature. Just let the scientists work on that and go back to work.
If the discussion helps scientists do research and get funded faster, then they are capitalizing on your thoughts; it's all good then.
All tech talks are a psyop anyways. | 2023-12-13T17:17:42 | https://www.reddit.com/r/LocalLLaMA/comments/18hltli/oai_just_built_a_web_scraper/ | Honest-Debate-6863 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hltli | false | null | t3_18hltli | /r/LocalLLaMA/comments/18hltli/oai_just_built_a_web_scraper/ | false | false | self | 1 | null |
MoE 8x7B: Open weight? | 15 | In the release of Mistral's latest model, MoE 8x7B, it says it is not open-source but open-weight. 1) What does it mean to be open-weight? 2) I've seen with a couple of models that the weights are made available; what does that mean exactly? Sorry if the question is basic, as I am still learning. Thanks! | 2023-12-13T17:01:06 | https://www.reddit.com/r/LocalLLaMA/comments/18hlf6n/moe_8x7b_open_weight/ | akasaka99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hlf6n | false | null | t3_18hlf6n | /r/LocalLLaMA/comments/18hlf6n/moe_8x7b_open_weight/ | false | false | self | 15 | null |
I have 3 M2 Ultra Mac Studios. What should I do with them? | 3 | Been running locally at home with Mistral 7B. I have the resources to do more and experiment; no idea what I could or should do with 3 separate M2 Ultras. Ideally I would like to tie it to the local network through something like oobabooga, but that's just because that's how I do it at home. What is ideal? | 2023-12-13T16:59:59 | https://www.reddit.com/r/LocalLLaMA/comments/18hle0t/i_have_3_m2_ultra_mac_studios_what_should_i_do/ | sludgybeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hle0t | false | null | t3_18hle0t | /r/LocalLLaMA/comments/18hle0t/i_have_3_m2_ultra_mac_studios_what_should_i_do/ | false | false | self | 3 | null |
EDITED PER REQUESTS: What Is The Top 7B Right Now? | 1 |
[View Poll](https://www.reddit.com/poll/18hlazl) | 2023-12-13T16:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/18hlazl/edited_per_requests_what_is_the_top_7b_right_now/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hlazl | false | null | t3_18hlazl | /r/LocalLLaMA/comments/18hlazl/edited_per_requests_what_is_the_top_7b_right_now/ | false | false | self | 1 | null |
notdiamond-0001 - A model router for GPT3.5 and GPT4 to optimize accuracy against cost and latency | 6 | 2023-12-13T16:43:16 | https://twitter.com/tomas_hk/status/1734664304924721245 | okaycan | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18hkzje | false | null | t3_18hkzje | /r/LocalLLaMA/comments/18hkzje/notdiamond0001_a_model_router_for_gpt35_and_gpt4/ | false | false | default | 6 | null | |
Google Cloud partners with Mistral AI on generative language models | 30 | 2023-12-13T16:42:10 | https://www.reuters.com/technology/google-cloud-partners-with-mistral-ai-generative-language-models-2023-12-13/ | eunumseioquescrever | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 18hkyks | false | null | t3_18hkyks | /r/LocalLLaMA/comments/18hkyks/google_cloud_partners_with_mistral_ai_on/ | false | false | default | 30 | null | |
Google Cloud partners with Mistral AI on generative language models | 1 | [deleted] | 2023-12-13T16:40:39 | https://finance.yahoo.com/news/google-cloud-partners-mistral-ai-150922983.html | eunumseioquescrever | finance.yahoo.com | 1970-01-01T00:00:00 | 0 | {} | 18hkx97 | false | null | t3_18hkx97 | /r/LocalLLaMA/comments/18hkx97/google_cloud_partners_with_mistral_ai_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Aerwr-30yIdaVaM9MaAaDRKCfLUSxINsebUvVQfWnFM', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=108&crop=smart&auto=webp&s=aff24558086f238d3a16e6d9596ff39f7820363b', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=216&crop=smart&auto=webp&s=8f0577f3b571f45500e35a92f8cc35ff8aa0bceb', 'width': 216}, {'height': 211, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=320&crop=smart&auto=webp&s=269b1494445c0502641d4d13ae893c6c2d9a4567', 'width': 320}, {'height': 422, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=640&crop=smart&auto=webp&s=62096819bdbd3b31b914ab7aaaaebb0059d12e57', 'width': 640}], 'source': {'height': 528, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?auto=webp&s=8f72c8e0a378e6669afa707b69f0f02193d045e3', 'width': 800}, 'variants': {}}]} | |
Google Cloud partners with Mistral AI on generative language models | 1 | 2023-12-13T16:40:38 | https://finance.yahoo.com/news/google-cloud-partners-mistral-ai-150922983.html | eunumseioquescrever | finance.yahoo.com | 1970-01-01T00:00:00 | 0 | {} | 18hkx8j | false | null | t3_18hkx8j | /r/LocalLLaMA/comments/18hkx8j/google_cloud_partners_with_mistral_ai_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Aerwr-30yIdaVaM9MaAaDRKCfLUSxINsebUvVQfWnFM', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=108&crop=smart&auto=webp&s=aff24558086f238d3a16e6d9596ff39f7820363b', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=216&crop=smart&auto=webp&s=8f0577f3b571f45500e35a92f8cc35ff8aa0bceb', 'width': 216}, {'height': 211, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=320&crop=smart&auto=webp&s=269b1494445c0502641d4d13ae893c6c2d9a4567', 'width': 320}, {'height': 422, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?width=640&crop=smart&auto=webp&s=62096819bdbd3b31b914ab7aaaaebb0059d12e57', 'width': 640}], 'source': {'height': 528, 'url': 'https://external-preview.redd.it/aXrlfjBKEMxZBt-dNluhqsll4VaCffTdKgZ4ljTCXsQ.jpg?auto=webp&s=8f72c8e0a378e6669afa707b69f0f02193d045e3', 'width': 800}, 'variants': {}}]} | ||
Gemini pro api is available for free | 56 | Max requests are 60 per minute: [https://blog.google/technology/ai/gemini-api-developers-cloud/](https://blog.google/technology/ai/gemini-api-developers-cloud/)
Costs early next year:
​
https://preview.redd.it/7xintij6836c1.png?width=833&format=png&auto=webp&s=c4528215e9b1ff30382a467fe548fe08727a5e30 | 2023-12-13T16:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/18hkgx8/gemini_pro_api_is_available_for_free/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hkgx8 | false | null | t3_18hkgx8 | /r/LocalLLaMA/comments/18hkgx8/gemini_pro_api_is_available_for_free/ | false | false | 56 | {'enabled': False, 'images': [{'id': 'GFevXAVuXPi3N-YSWvAH8jbWuMuXzTMN4ncnj5agLPY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?width=108&crop=smart&auto=webp&s=a28dd8fa72fab603f24523fd102165332ddfb17c', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?width=216&crop=smart&auto=webp&s=cf1cfca307bbb8b6b9ca2f7bcf3d5938f3dfdd4b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?width=320&crop=smart&auto=webp&s=84797ef15c19617a684ec359afd17997947756cf', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?width=640&crop=smart&auto=webp&s=0712cecf3c8a7f1641132dd5fb5234300fe59fa2', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?width=960&crop=smart&auto=webp&s=e179d27d7769c2e84e88d8982ee77541e3bd4116', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?width=1080&crop=smart&auto=webp&s=4e9f8549e948df24feb1e0b1963ae118171eed3d', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/tB2FxwZX3xslwK1NTR47Kl95J3JqUnU8NNtDzLq8rJM.jpg?auto=webp&s=0c5a3a227cebc93493ad0f5dc59e51279538caf9', 'width': 1201}, 'variants': {}}]} | |
Beating Bigger Models is All You Need (Where is the LLM industry headed?) | 154 | Note: NSFW language ↓
I've been following this sub for a long time and am finally fed up with news like this:
> <small-ass-model> __m__ with just **n** parameters BEATS/CRUSHES/OUTPERFORMS/OBLITERATES/FUCKING DESTROYS <big-ass-model> __M__ that has **2n** parameters (or even more) in all the benchmarks [0], and it's close to GPT-4 too.
^[0] All the benchmarks that support this claim.
And then later you find out that `m` was created by this startup that needs funding. When do we stop doing this shit? It's borderline spam at this point. Every day I see this, and I immediately skip those posts because benchmarks are meaningless now and whoever uses them to prove a point is just doing it for the VC money.
It'd be nice if the mods here would ban such posts because they don't really add any value to the discussion around LLMs; they're just noise. We need _real_ progress in open-source LLMs, you know, like:
- decent function calling
- standardized API calling
- standardized prompt templates
- better documentation
- supporting projects like Petals
- working on LLMs on the edge (e.g., the MLC-LLM project)
- creating cool things with small LLMs such as Copilots for specific tasks
- increasing the awareness of ordinary users about ChatGPT alternatives
- End of Thinking Capacity.
But claiming that a small model is somehow better than bigger ones just because it was trained on ChatGPT/4 data to make its responses __look like__ GPT-4 answers, is not honest. Yes, Mistral is good, but is it better than Llama 2 70B? Hard no! Yes, Phi-2 is good, but should we even care when Microsoft hardly shares their precious training data? Again, no! People get excited about new models dropping literally every day that they forget to focus on the big picture and ask themselves: "Is this really good for the open-source industry?" | 2023-12-13T16:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/18hkf2y/beating_bigger_models_is_all_you_need_where_is/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hkf2y | false | null | t3_18hkf2y | /r/LocalLLaMA/comments/18hkf2y/beating_bigger_models_is_all_you_need_where_is/ | false | false | self | 154 | null |
ChatGPT3.5 turbo api vs open LLM api | 1 | Hi all
I'm a hobbyist, so please bear with me!
I'm just trying to understand the cost dynamics of using the OpenAI API vs an open LLM API (and hosting an app on a website).
I've an idea for a voicebot (Python backend) and was going to go with GPT-3.5 Turbo, but it now seems some open-source models actually beat it. As I intend to monetize it, with GPT-4 Turbo being way too expensive for what I need, I was wondering what the cost would be of, say, a 180B model vs GPT-3.5 Turbo. What about 70B, 30B and 7B models? I suppose they're cheaper to run, yes?
Is there some resource out there that would allow me to compare api costs? I suppose 10,000 requests per month would be a good starting point.
Thanks | 2023-12-13T15:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/18hk0c2/chatgpt35_turbo_api_vs_open_llm_api/ | cool-beans-yeah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hk0c2 | false | null | t3_18hk0c2 | /r/LocalLLaMA/comments/18hk0c2/chatgpt35_turbo_api_vs_open_llm_api/ | false | false | self | 1 | null |
Suggestions/guidance on full book ingestion. | 6 | Hi guys, I'm looking to build a local LLM setup that can ingest an entire book and then give me character descriptions, environment details/locations, and key moments/scenes in the book. I will be getting a new Mac with the M3 Pro/Max chip later in the week. Can anyone point me in the direction of what local LLM could possibly do that? Down the road I will probably build a bigger machine instead of depending on the Mac; I just wanted some insight from people who may have a better idea of what applications can do what as far as setting something up locally. I'm trying to avoid paying for tokens, as I know that is possibly a solution via GPT. Any suggestions would be appreciated in helping me reach my goal of ingesting whole books and getting characters/environments/key scenes returned. Thank you. | 2023-12-13T15:51:29 | https://www.reddit.com/r/LocalLLaMA/comments/18hjtqi/suggestionsguidance_on_full_book_ingestion/ | Zhanji_TS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hjtqi | false | null | t3_18hjtqi | /r/LocalLLaMA/comments/18hjtqi/suggestionsguidance_on_full_book_ingestion/ | false | false | self | 6 | null |
TheBloke/Mixtral 8x v0.1 7B - If anyone is wondering what a Q2 will give you with my settings provided. | 1 | 2023-12-13T15:46:18 | https://www.reddit.com/gallery/18hjpj5 | SupplyChainNext | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18hjpj5 | false | null | t3_18hjpj5 | /r/LocalLLaMA/comments/18hjpj5/theblokemixtral_8x_v01_7b_if_anyone_is_wondering/ | false | false | 1 | null | ||
Question about Mixtral | 2 | So I saw llama.cpp added support for Mixtral in its main branch. However, I can't find any model that isn't experimental.
Is there one out yet that is usable in the web UI? | 2023-12-13T15:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/18hjp71/question_about_mixtral/ | Lammahamma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hjp71 | false | null | t3_18hjp71 | /r/LocalLLaMA/comments/18hjp71/question_about_mixtral/ | false | false | self | 2 | null |
Has anyone got experience getting a Tesla M40 24GB working with PCI passthrough in VMware on latest Ubuntu, or hell, even Windows? | 2 | Dell R730 with a proper dual-channel power adapter coming from both PCI lanes, 512GB RAM and 2x 6-core CPUs
esxi 7.0.3 21930508
I enabled the memory-mapped-I/O-above-4G thing in BIOS, and SR-IOV.
I've been fighting with this thing for 2 days and could use a hand if anyone has experience.
I'm unable to get it working in Ubuntu 22.04: the driver autoinstall works, but after I reboot the driver fails on boot-up. It's using the 535 drivers.
Which should I be using?
I set the three advanced VM parameters: pciPassthru.use64bitMMIO=true
and pciPassthru.64bitMMIOSizeGB=128
and hypervisor.cpuid.v0=false
which I gathered from many pages in forums.
I made sure the ram in the vm is 128gb and even put it as single core just for testing.
made sure they are uefi mode vms.
On Linux the driver craps out on boot-up. The error is:
On Windows Server 2019 I get a BSOD loading the drivers from NVIDIA, during the driver install.
help!
Mike | 2023-12-13T15:30:35 | https://www.reddit.com/r/LocalLLaMA/comments/18hjcm5/has_anyone_for_experience_getting_a_tesla_m40/ | TeknikL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hjcm5 | false | null | t3_18hjcm5 | /r/LocalLLaMA/comments/18hjcm5/has_anyone_for_experience_getting_a_tesla_m40/ | false | false | self | 2 | null |
Optimising function calling (Using AutoGen with Mistral 7B) | 19 | I recently made a very basic conversational agent for querying Kubernetes using AutoGen. The biggest restriction I placed on myself was that I would stick to using Mistral finetunes (tested with OpenOrca, Dolphin 2.2 & OpenHermes 2.5).
The project aims to retrieve Kubernetes resources (like pods, deployments and even custom resources like ArgoCD applications), which means it makes use of function calling (a known Mistral weak point).
Here's the link to the code before I begin: https://github.com/YourTechBud/ytb-practical-guide/tree/master/autogen-k8s-basic
Also, I wrote my own APIs on top of llama.cpp to nudge models toward calling functions a lil bit better. Link - https://github.com/YourTechBud/llm-inference-server/blob/main/src/server/server.py#L71
> I'm new to Python and AI, so please ignore the quality of my code.
I've made a YouTube video which goes deeper into some of the learnings. Would love it if you could check it out. (Link in the comments)
## Learnings for effective function calling
0. Use a low temperature setting for func calling.
This is probably self-explanatory, but use a very low temperature setting for the agents which need to make a function call. A high temperature usually messes up the parameter names or values themselves.
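As a sketch, this is the shape of the config I use for those agents (endpoint and model names are placeholders, and on older AutoGen versions the field is `api_base` instead of `base_url`):

    llm_config_caller = {
        "config_list": [{
            "base_url": "http://localhost:8000/v1",   # my llama.cpp wrapper
            "api_key": "NULL",
            "model": "openhermes-2.5-mistral-7b",     # placeholder name
        }],
        "temperature": 0.1,  # low, so parameter names/values don't drift
    }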
1. The agent calling the function should not be responsible for figuring out the parameter values.
The beauty of AutoGen is its ability to create specialised agents which build a conversation history together. So make sure the conversation history contains the parameter values before you reach the agent responsible for calling the function.
What I mean is, the agent calling the function should only have the responsibility of arranging information (which is present in the conversation history) in a format suitable for calling the function. The absence of required information in the conversation history puts too much cognitive load on the LLM, which causes it to mess up the parameter field names and values, and sometimes even the name of the function we were supposed to call.
In other words, use other agents to implement some sort of a RAG pipeline to make sure the agent calling the function has the required context (man, I should have started with this statement). I usually have an "expert" agent whose sole responsibility is to print the parameter values in the exact format that would be required to make that function call by the next agent.
> I've explained this in a much better way in the video.
2. Treat the function and field descriptions in the JSON schema definition as system prompts.
The description fields that can be provided in the JSON schema, which tell the model what functions are available to it, are a lifesaver. Be as descriptive as possible and treat them as system prompts. This means there is huge scope for prompt engineering here.
Also, consider using the same or similar description for the agent tasked with figuring out the parameter values. This is a great way to see if your descriptions are helpful or not.
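For example, a sketch from my project (the function and field names are mine, so adapt them; note the descriptions doing system-prompt duty):

    functions = [{
        "name": "get_kubernetes_resource",
        "description": "Get Kubernetes resources of a given kind in a namespace. "
                       "Only call this once the kind and namespace have been stated "
                       "explicitly earlier in the conversation.",
        "parameters": {
            "type": "object",
            "properties": {
                "kind": {"type": "string", "description": "Resource kind, e.g. 'pods' or 'deployments'."},
                "namespace": {"type": "string", "description": "Kubernetes namespace, e.g. 'default'."},
            },
            "required": ["kind", "namespace"],
        },
    }]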
3. Be open to refactoring your functions.
Functions which have fewer, self-explanatory parameters perform best. Make sure you have descriptive function names. Avoid complex types like arrays of objects and stuff like that as much as possible.
The best functions are those whose name and parameters describe exactly what the function is intended to do. If it's not easy for your colleague to understand, it's definitely not going to be easy for an LLM.
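A quick before/after of what I mean (illustrative signatures, not the real ones from my repo):

    # Hard for the model: generic name, nested array-of-objects parameter
    def query(resources: list[dict]) -> str: ...

    # Easy for the model: the name and flat params say exactly what happens
    def get_pods_in_namespace(namespace: str) -> str: ...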
4. Only pass the function map to the models required to call the functions.
Goes without saying... Don't be lazy... Make separate llm_config objects for different agents.
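Concretely, something like this (a sketch that reuses the `functions` schema from above and assumes `get_kubernetes_resource` is implemented; the exact wiring varies a bit across AutoGen versions):

    import autogen

    config_list = [{"base_url": "http://localhost:8000/v1", "api_key": "NULL",
                    "model": "openhermes-2.5-mistral-7b"}]

    # Only the caller agent sees the function schemas...
    caller = autogen.AssistantAgent(
        name="k8s_caller",
        llm_config={"config_list": config_list, "temperature": 0.1, "functions": functions},
    )
    # ...and only the executor holds the actual function map.
    executor = autogen.UserProxyAgent(
        name="executor",
        human_input_mode="NEVER",
        function_map={"get_kubernetes_resource": get_kubernetes_resource},
    )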
5. Having smarter APIs helps out a ton.
The OpenAI-compatible API I have implemented on top of llama.cpp does a lil bit of prompt massaging on top of what the AutoGen client sends it. It basically adds an additional system message to nudge the agent to call the function. It also parses the result to see if a function invocation is required or not.
I think having smarter APIs (which take care of such use cases: function calling, structuring output in a particular format like JSON, RAG) would be a great addition to take some of the effort away from the prompt engineer and shift it to a generic API provider. I'll be investing more time in this to see where this approach takes me.
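As a rough sketch of that nudge (simplified; the real prompt wording in my server linked above differs):

    def prepare_messages(messages, functions):
        """Append a system message nudging the model toward a function call."""
        if not functions:
            return messages
        nudge = (
            "You can call these functions: "
            + ", ".join(f["name"] for f in functions)
            + '. If one of them answers the request, reply ONLY with JSON: '
              '{"name": <function>, "arguments": {...}}.'
        )
        return messages + [{"role": "system", "content": nudge}]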
## Conclusion
1. It's absolutely possible to use Mistral 7B to make agent-driven apps. They require a bit more effort than something like GPT-4, but I have been able to accomplish a lot with just AutoGen + Mistral.
2. Any finetune is capable of function calling with some work. However, ChatML templates do work best.
3. Having intelligent APIs on the backend does make life super easy.
Thanks for going through my post. Do checkout my video. Would absolutely love it. | 2023-12-13T15:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/18hjbku/optimising_function_calling_using_autogen_with/ | YourTechBud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hjbku | false | null | t3_18hjbku | /r/LocalLLaMA/comments/18hjbku/optimising_function_calling_using_autogen_with/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'F00Hp_fn94hmZToVZeLnAW6-jVIURgxoiw1rN8IWMk4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?width=108&crop=smart&auto=webp&s=fbb145ad0a0a0c85339238d2107d22ece0fb316a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?width=216&crop=smart&auto=webp&s=5810333145c88689b889f99619f93378806e67c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?width=320&crop=smart&auto=webp&s=f51c5c40927f7816d5e236aa23a90afe243898c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?width=640&crop=smart&auto=webp&s=53726906ab561e365e4b2f42daa53d2c059c1e62', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?width=960&crop=smart&auto=webp&s=6541ef02b915e7134f58b3f4a71ea66cec429b45', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?width=1080&crop=smart&auto=webp&s=39b91681548c43380baa4d4d4d9e10d14d94ddf2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q_is5pemwqkWrp44IbXJD5na5aHaCPKNP6DdjGiLSgY.jpg?auto=webp&s=d52e00fe82f144ab9321147b9e42d9700c1f11c7', 'width': 1200}, 'variants': {}}]} |
Looking for a project to run a local LLM with web interface that will ingest my own files. | 6 | I have yet to set up my own local LLM but am now ready to invest the time. I saw a project some time ago that would ingest files/folders and respond to prompts with answers as well as references. I'm wondering what came of that project and if there is any Node project that interfaces with it.
For what it's worth, it'll be running on a machine with a 5800X3D/32GB/4070 Ti.
Even if it's just a python project, I'd still like to find it and roll my own web view for it. | 2023-12-13T15:10:12 | https://www.reddit.com/r/LocalLLaMA/comments/18hiwl5/looking_for_a_project_to_run_a_local_llm_with_web/ | GiveNtakeNgive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hiwl5 | false | null | t3_18hiwl5 | /r/LocalLLaMA/comments/18hiwl5/looking_for_a_project_to_run_a_local_llm_with_web/ | false | false | self | 6 | null |
Mixtral 8x7B instruct in an interface for free | 9 | I just noticed that [infermatic.ai](https://infermatic.ai/) updated their UI yesterday to include the Mixtral instruct model. The tool is really easy to use; it's just like ChatGPT. They have a free tier, which is good, and also a pretty reasonable paid tier that gives more daily tokens and API access.
I'm not sure when/if they are going to add Mixtral to the API but that would be amazing. | 2023-12-13T14:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/18hhpj1/mixtral_8x7b_instruct_in_an_interface_for_free/ | openLLM4All | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hhpj1 | false | null | t3_18hhpj1 | /r/LocalLLaMA/comments/18hhpj1/mixtral_8x7b_instruct_in_an_interface_for_free/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]} |
Best local LLM for German? | 10 | Yeehaw y'all 🤠
I'm looking for the best open-source LLM for German. Falcon-180B is good but requires way too much VRAM. I've seen some German finetunes of Llama-2, and the new Mixtral 8x7B works pretty well in German too.
Any Germans here that can give their insight on what they consider the best local LLM for German? | 2023-12-13T13:45:01 | https://www.reddit.com/r/LocalLLaMA/comments/18hh3qm/best_local_llm_for_german/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hh3qm | false | null | t3_18hh3qm | /r/LocalLLaMA/comments/18hh3qm/best_local_llm_for_german/ | false | false | self | 10 | null |
Local LLM chat agent with advanced RAG and memory | 20 | I tried to implement the basic [Langchain RetrievalQA Chain](https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa) with a ChromaDB vector database containing one PDF file. I noticed 2 issues:
1. I was not able to make a chatbot experience with memory.
2. The vector database retriever for the LLM Chain takes the whole user prompt as the query for the semantic similarity search. I would like to have the model decide when and how to query the vector database.
I had a hard time finding information about how to make a local LLM agent with advanced RAG and memory. In my first approach I actually tried to create a Llama-2 agent with LangChain tools, with one tool being the retriever for the vector database, but I could not make Llama-2 use them. It works with GPT-3.5, though. Having a local LLM use tools or function calling would make things much easier, I think.
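For issue 1, the closest-looking direction I've found so far is LangChain's conversational retrieval chain plus a memory object. A rough sketch under assumptions (model name, DB path and embedding choice are placeholders):

    from langchain.chains import ConversationalRetrievalChain
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.llms import Ollama
    from langchain.memory import ConversationBufferMemory
    from langchain.vectorstores import Chroma

    llm = Ollama(model="llama2")  # placeholder local model
    vectordb = Chroma(persist_directory="db", embedding_function=HuggingFaceEmbeddings())
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    # The chain condenses chat history + new question into a standalone query
    # before hitting the retriever, which also partially addresses issue 2.
    chat = ConversationalRetrievalChain.from_llm(
        llm=llm, retriever=vectordb.as_retriever(), memory=memory
    )
    print(chat({"question": "What does the document say about X?"})["answer"])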
Apart from Langchain, I am honestly overwhelmed by all the frameworks and extras that could be incorporated into my little project:
* I shortly looked into [RAG with AutoGen](https://microsoft.github.io/autogen/blog/2023/10/18/RetrieveChat/) and noticed it is mainly built for OpenAI GPT models, but found a [workaround with Oobabooga](https://babycmd.medium.com/local-llms-and-autogen-an-uprising-of-local-powered-agents-d472f2c3d0e3) for that.
* I found out that [Guardrails can do RAG](https://github.com/pinecone-io/examples/blob/master/learn/generation/chatbots/nemo-guardrails/03-rag-with-actions.ipynb) too and can be used to censor the output of an LLM. It should also [work with local LLMs](https://blog.marvik.ai/2023/10/09/enhancing-llama2-conversations-with-nemo-guardrails-a-practical-guide/).
* Using [Cohere Reranker](https://python.langchain.com/docs/integrations/retrievers/cohere-reranker) for improved retrieval.
* Fine Tuning on my data for better RAG results with something like [AutoTrain](https://huggingface.co/docs/autotrain/index).
I am an ML consultant for LLM topics, and most of our customers place a high priority on data privacy, so OpenAI GPT is not an option for me. I now have 3 weeks of vacation in which I want to build something to learn more about LLMs. I would be happy about any input, advice, tutorials, opinions or recommendations on where I should go next.
**TL;DR:** I am overwhelmed by all the LLM frameworks and tools so I am unable to implement a local LLM chat agent with advanced RAG and memory | 2023-12-13T13:07:37 | https://www.reddit.com/r/LocalLLaMA/comments/18hgf17/local_llm_chat_agent_with_advanced_rag_and_memory/ | Bosezz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hgf17 | false | null | t3_18hgf17 | /r/LocalLLaMA/comments/18hgf17/local_llm_chat_agent_with_advanced_rag_and_memory/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=108&crop=smart&auto=webp&s=4806821b19a384d8270fee66e851537817cdac4e', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=216&crop=smart&auto=webp&s=0bdf6ca90dcebbc73d6ff30b79f54814b931344d', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=320&crop=smart&auto=webp&s=dd7a799219f465b4f913aa10969c5ee900913404', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=640&crop=smart&auto=webp&s=e1d1617519e0321944016ee242a7999669714f39', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?auto=webp&s=8d662951305a88ba511f842901937fb729991cb9', 'width': 794}, 'variants': {}}]} |
Upstage SOLAR 10.7B v1.0 claims to beat Mixtral 8X7B and models up to 30B parameters. | 148 | # Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!
We introduce **the first 10.7 billion (B) parameter model**, SOLAR-10.7B. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.
We developed the **Depth Up-Scaling technique**. Built on the **Llama2 architecture**, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then **integrated Mistral 7B weights into the upscaled layers**, and finally, **continued pre-training** for the entire model.
Depth-Upscaled SOLAR-10.7B has remarkable performance. **It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model**. For detailed information, please refer to the experimental table (link to be updated soon). SOLAR-10.7B is an ideal choice for fine-tuning: it offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.
Model weights:
[https://huggingface.co/upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
[https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
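For reference, loading should be standard `transformers` usage; a minimal sketch (untested with this exact checkpoint, and the prompt template is an assumption, so check the model card):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

    prompt = "### User:\nHello!\n\n### Assistant:\n"  # assumed template
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))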
[**https://upstage.ai**](https://upstage.ai/) | 2023-12-13T13:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/18hga4p/upstage_solar_107b_v10_claims_to_beat_mixtral/ | nightkall | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hga4p | false | null | t3_18hga4p | /r/LocalLLaMA/comments/18hga4p/upstage_solar_107b_v10_claims_to_beat_mixtral/ | false | false | self | 148 | {'enabled': False, 'images': [{'id': 'pkHCimlIUQ3ljuawiwUQsLK1sNKtAuXk1r-QI2ZiUTg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?width=108&crop=smart&auto=webp&s=b379454c84b288d3339219e02f266c911fea0dd1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?width=216&crop=smart&auto=webp&s=93752ff943921897e4694095cf91e564a53073b9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?width=320&crop=smart&auto=webp&s=d3d3efec4b32627707d71ad87c0e1e2cd41612ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?width=640&crop=smart&auto=webp&s=eeac869814c78ab4327cd8b8dba2c663444b06d5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?width=960&crop=smart&auto=webp&s=f6fbe65f4a9fa2906dddabe1198e9808be68398a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?width=1080&crop=smart&auto=webp&s=9797e765b91c179943c49441db4d10ed5f341a2c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NwhBi9vy74xbnwCpF2rwx7xt9znbD9ELT6oQmON1Uiw.jpg?auto=webp&s=98b8048782fe1bae1ba3078cdcff714391d66403', 'width': 1200}, 'variants': {}}]} |
another newbie question about dling some larger files from huggin face to runpod/local | 3 | so in my last post I got [TheBloke/goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) running on my runpod successfully after downloading the splits and merging them as the bloke described in the model card, however in the [Panchovix/goliath-120b-exl2-rpcal](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal) there is no such instructions, and I can't figure out how to download on via text Gen web Ui in my run pod. i can't even figure out how to download it locally, when i try to download from hugging face i just get a 24 mb file, not the model. When i try to copy the repository from hugging face via git, i download the same 24 mb file and not the model. any suggestions? i asked in runpod dis. 2 days ago but still no replay. | 2023-12-13T12:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/18hg82h/another_newbie_question_about_dling_some_larger/ | Mountain-Ad-460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hg82h | false | null | t3_18hg82h | /r/LocalLLaMA/comments/18hg82h/another_newbie_question_about_dling_some_larger/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'abGJhOPdKe9K-FJAsL75vgzyyKHeOk4s5RPNAWIPThs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=108&crop=smart&auto=webp&s=77f027736e14a1fb457ca1833271a495fd5ef0e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=216&crop=smart&auto=webp&s=b6a668f98170e39ab5ef165947408f4e2fa0f580', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=320&crop=smart&auto=webp&s=26b13c23beccf535a6fd6cdaa3baa6dbb44f505c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=640&crop=smart&auto=webp&s=6071a70dbcceba14cf3c9e834972541a77116b4e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=960&crop=smart&auto=webp&s=8b56c028becca63ef5ddc8dcfdbbe900b038baab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?width=1080&crop=smart&auto=webp&s=4fe5d280de792b0a2a9a9a50b0e97134e1f8e5d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bNfkdlRPrbOL3sElHmgFOcB76NKQQTYZuCmyi7_2xyc.jpg?auto=webp&s=5dc23085106d627d3e7e62c06f46b3950d1ba2f9', 'width': 1200}, 'variants': {}}]} |
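On the 24 MB symptom above: a plain `git clone` without git-lfs fetches only the LFS pointer stubs, which is exactly a couple dozen MB of metadata instead of the weights. A sketch of pulling the full repo with `huggingface_hub` instead (destination directory is a placeholder):

    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="Panchovix/goliath-120b-exl2-rpcal",
        local_dir="goliath-120b-exl2-rpcal",   # placeholder destination
        local_dir_use_symlinks=False,          # write real files, not cache symlinks
    )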
You can actually download phi-2 locally | 239 | So I was kind of disappointed MSFT didn't release the parameters of phi-2 and only released it as an API endpoint.
Turns out, you can actually download the parameters of phi-2 and we should be able to run it 100% locally and offline.
According to a tweet by an ML lead at MSFT:
> Sorry I know it's a bit confusing: to download phi-2 go to Azure AI Studio, find the phi-2 page and click on the "artifacts" tab. See picture.
Source tweet:
https://x.com/SebastienBubeck/status/1734801722831413312?s=20 | 2023-12-13T12:32:13 | Amgadoz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18hfsab | false | null | t3_18hfsab | /r/LocalLLaMA/comments/18hfsab/you_can_actually_download_phi2_locally/ | false | false | 239 | {'enabled': True, 'images': [{'id': 'W1t9_UMe4Jx_Mj4T9pPBwPt4mHEMknT1BwgtGhSzEos', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/sov3ulck326c1.jpg?width=108&crop=smart&auto=webp&s=cc7632bdae11ebe3cb845554ca12af6348d607da', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/sov3ulck326c1.jpg?width=216&crop=smart&auto=webp&s=8f565aa95b5df48b004aff5f628506940b5db192', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/sov3ulck326c1.jpg?width=320&crop=smart&auto=webp&s=f55077c679034c73b56b1ccc0efd198435fb86cb', 'width': 320}, {'height': 373, 'url': 'https://preview.redd.it/sov3ulck326c1.jpg?width=640&crop=smart&auto=webp&s=6e4614b7926e8c2c73aae94bd5d86f7932710cda', 'width': 640}], 'source': {'height': 396, 'url': 'https://preview.redd.it/sov3ulck326c1.jpg?auto=webp&s=28d028de854813091fdb931561ff66760fe5819d', 'width': 679}, 'variants': {}}]} | ||
Marcoroni vs Mixtral - a quick comparison. | 11 | Marcoroni: “Meth is dangerous, horrible, and goes against every moral and legal virtue imaginable - here is the general process and I’m going to remove the parts that make it a real process. You should seriously seek some help.”
Mixtral: “Meth? It’s illegal you know…. How to make it? *leans in* oh sure. Here you go. Remember - don’t blow yourself up! Have fuuuuun!”
—
Marcoroni is high up in the leaderboard. Above Mixtral.
Not only is it about 1/4 as fast as Mixtral, it self-censors and pontificates before answering. Self-censoring techniques I’ve witnessed include text-string obfuscation and deletion, with notes on the number of sections deleted.
Mixtral didn’t even bat an eye and just gave up the juice with a basic warning about its legal status. It also warned I could die, as chemistry needs to be done by an expert.
TL;DR: Marcoroni - bloated, self-righteous, overhyped.
Mixtral - the real deal and closest I’ve seen to ChatGPT3.5 at 4 times the inference speed. | 2023-12-13T12:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/18hfq2x/marcoroni_vs_mixtral_a_quick_comparison/ | SupplyChainNext | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hfq2x | false | null | t3_18hfq2x | /r/LocalLLaMA/comments/18hfq2x/marcoroni_vs_mixtral_a_quick_comparison/ | false | false | self | 11 | null |
Mixtral context size confusion | 2 | Does anybody have a good source for what the actual context size is on the Mixtral instruct model/base model? I know it's fresh, but someone must have tested the actual max they got.
If you tested for max context size/can do so I would be grateful for an answer or any information. | 2023-12-13T12:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/18hfl65/mixtral_context_size_confusion/ | paryska99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hfl65 | false | null | t3_18hfl65 | /r/LocalLLaMA/comments/18hfl65/mixtral_context_size_confusion/ | false | false | self | 2 | null |
Questions about GPTQ quantization of microsoft/phi_1_5 | 5 | Script I am trying, but it's giving errors:
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, GPTQConfig
model_id = "microsoft/phi-1_5"
tokenizer = AutoTokenizer.from_pretrained(model_id, truncation=True, max_length=512)
quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
# Workaround (my assumption): phi's remote-code PhiConfig doesn't define
# `use_cache`, which the GPTQ quantization path reads, so set it explicitly.
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.use_cache = False
model = AutoModelForCausalLM.from_pretrained(model_id, config=config, device_map="cuda",
                                             quantization_config=quantization_config, trust_remote_code=True)
I am getting a `'PhiConfig' object has no attribute 'use_cache'` error.
​ | 2023-12-13T11:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/18hf8wb/questions_about_gptq_quantization_of_microsoftphi/ | chained_to_django | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hf8wb | false | null | t3_18hf8wb | /r/LocalLLaMA/comments/18hf8wb/questions_about_gptq_quantization_of_microsoftphi/ | false | false | self | 5 | null |
My LLM-based chatbot renders emojis as text instead of concrete emojis, what can be the reason? | 1 | I'm using Ollama and Langchain to make a chatbot. More specifically, I'm using the llama2 7b chat model. I noticed a few words such as “**chuckle**”, “**applause**”, and “**wink**” appearing in the LLM response. Are these supposed to be emojis? Why aren't they rendered as such, if that's the case?
Does anybody have an idea about that?
I’ll be happy to provide more details if needed. | 2023-12-13T11:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/18hf4x7/my_llmbased_chatbot_renders_emojis_as_text/ | SpiritOk5085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hf4x7 | false | null | t3_18hf4x7 | /r/LocalLLaMA/comments/18hf4x7/my_llmbased_chatbot_renders_emojis_as_text/ | false | false | self | 1 | null |
LLM - are most of them built on the same foundation? | 2 | Tried Mistral, Zephyr, ChatGPT, etc., and when you use them for, say, writing, they all produce similar character names, locations, and so on. Are they fed the exact same data? Built on the same foundation? | 2023-12-13T11:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/18hf0zs/llm_are_most_of_them_build_on_same_foundation/ | Ethereal-Words | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hf0zs | false | null | t3_18hf0zs | /r/LocalLLaMA/comments/18hf0zs/llm_are_most_of_them_build_on_same_foundation/ | false | false | self | 2 | null |
Ollama & langchain chatbot | 1 | Hello, I'm using Ollama alongside Langchain to make a chatbot. More specifically, I'm using the llama2 7b chat model. I noticed a few words, “**chuckle**”, “**applause**”, and “**wink**”, appearing in the LLM response. Are these supposed to be emojis? Why aren't they rendered as such, if that's the case?
Does anybody have an idea about that?
I'll be happy to provide more details if needed
​ | 2023-12-13T11:44:01 | https://www.reddit.com/r/LocalLLaMA/comments/18hf00l/ollama_langchain_chatbot/ | SpiritOk5085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hf00l | false | null | t3_18hf00l | /r/LocalLLaMA/comments/18hf00l/ollama_langchain_chatbot/ | false | false | self | 1 | null |
What are some successful Companies that have been created using LLMs (OpenAI API or Open source) | 5 | It has been a year since ChatGPT's release.
Which companies that use the OpenAI API, or LLMs in general, have been successful so far?
Considering the endless possibilities on what can be built, it can seem like a race to the bottom without significant innovation on top of the LLMs. | 2023-12-13T11:28:11 | https://www.reddit.com/r/LocalLLaMA/comments/18herh9/what_are_some_successful_companies_that_have_been/ | satyajitdass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18herh9 | false | null | t3_18herh9 | /r/LocalLLaMA/comments/18herh9/what_are_some_successful_companies_that_have_been/ | false | false | self | 5 | null |
How to run Mixtral 8x7B GGUF on Tesla P40 without terrible performance | 1 | So I followed the guide posted here: [https://www.reddit.com/r/Oobabooga/comments/18gijyx/simple\_tutorial\_using\_mixtral\_8x7b\_gguf\_in\_ooba/?utm\_source=share&utm\_medium=web2x&context=3](https://www.reddit.com/r/Oobabooga/comments/18gijyx/simple_tutorial_using_mixtral_8x7b_gguf_in_ooba/?utm_source=share&utm_medium=web2x&context=3)
But that guide assumes you have a GPU newer than Pascal or are running on CPU. On Pascal cards like the Tesla P40 you need to force CUBLAS to use the older MMQ kernel instead of the tensor kernels. This is because Pascal cards have dog crap FP16 performance, as we all know.
So the steps are the same as that guide except for adding the CMAKE argument "-DLLAMA_CUDA_FORCE_MMQ=ON", since the regular llama-cpp-python not compiled by ooba will try to use the newer kernel even on Pascal cards.
With this I can run Mixtral 8x7B GGUF Q3KM at about 10t/s with no context and slowed to around 3t/s with 4K+ context. Which I think is decent speeds for a single P40.
Unfortunately I can't test on my triple P40 setup anymore since I sold them for dual Titan RTX 24GB cards. Still kept one P40 for testing.
LINUX INSTRUCTIONS:
6. Finish
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_CUDA_FORCE_MMQ=ON" pip install .
WINDOWS INSTRUCTIONS:
6. Set CMAKE\_ARGS
set FORCE_CMAKE=1 && set CMAKE_ARGS=-DLLAMA_CUBLAS=on -DLLAMA_CUDA_FORCE_MMQ=ON
7. Install
python -m pip install -e . --force-reinstall --no-cache-dir | 2023-12-13T11:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/18helbs/how_to_run_mixtral_8x7b_gguf_on_tesla_p40_without/ | nero10578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18helbs | false | null | t3_18helbs | /r/LocalLLaMA/comments/18helbs/how_to_run_mixtral_8x7b_gguf_on_tesla_p40_without/ | false | false | self | 1 | null |
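To sanity-check the rebuilt wheel, a minimal load-and-generate sketch (the model path is a placeholder for whichever Q3_K_M file you grabbed):

    from llama_cpp import Llama

    llm = Llama(
        model_path="mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload everything to the P40
        n_ctx=4096,
    )
    out = llm("[INST] Say hello in one sentence. [/INST]", max_tokens=64)
    print(out["choices"][0]["text"])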
French dataset input/output pairs. | 1 | [removed] | 2023-12-13T11:11:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18hei4e | false | null | t3_18hei4e | /r/LocalLLaMA/comments/18hei4e/french_dataset_inputoutput_pairs/ | false | false | default | 1 | null | ||
French dataset input/output pairs for finetune (PEFT) Llama | 1 | [removed] | 2023-12-13T11:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/18hegy5/french_dataset_inputoutput_pairs_for_finetune/ | Savings_Scholar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18hegy5 | false | null | t3_18hegy5 | /r/LocalLLaMA/comments/18hegy5/french_dataset_inputoutput_pairs_for_finetune/ | false | false | self | 1 | null |
cancelled my gpt-4 acc today, looking for a replacement. | 70 | In case you weren't aware, GPT-4 has been seriously nerfed and is basically unusable at this point. People have been complaining about this on their forums, on Twitter, etc.
I know their earlier 0314 and 0613 models perform much better, but they are only available via the API. Even then, though, the inference they run might not be the same.
So I've decided to run my own models, as it's the only way I can rely on them. The first step, however, is that I want to test out APIs using my own interface. From what I understand, the web versions of ChatGPT/Perplexity/Poe etc. are doing more than just sending your query to an API and displaying the response: they summarise previous chat messages and add them to the query, which is what gives the chat interface that feel of a conversation?
I'm struggling to find any info on how the chat interface works. Also, are there any public repos that people are using, or does everyone just write their own?
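From what I can tell, the core is just this and everything else is UI; a minimal sketch against any OpenAI-compatible endpoint (URL and model name are placeholders), with fancier clients swapping the raw history for a rolling summary once it gets long:

    import requests

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def chat(user_msg, url="http://localhost:5000/v1/chat/completions"):
        history.append({"role": "user", "content": user_msg})
        r = requests.post(url, json={"model": "local", "messages": history})
        reply = r.json()["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        return reply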
Thanks, glad I'm finally making the switch. Also, Altman gives me weird vibes, so I would rather not share my data with them/Microsoft. | 2023-12-13T10:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/18he4lg/cancelled_my_gpt4_acc_today_looking_for_a/ | sephirex420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18he4lg | false | null | t3_18he4lg | /r/LocalLLaMA/comments/18he4lg/cancelled_my_gpt4_acc_today_looking_for_a/ | false | false | self | 70 | null |