title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best model for Ooba running on 7950x3D + 4090 + 65GB DDR5? And do you run on your OS or in a VM? | 1 | What's the best model for this setup? and do you run your AI on your OS or in a VM? I guess running in a VM will never fully utilise all of your hardware resources, but how much difference is there? versus the potential security issues? Any advice? Thanks.. | 2023-12-02T00:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/188rj1y/best_model_for_ooba_running_on_7950x3d_4090_65gb/ | intenzething | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188rj1y | false | null | t3_188rj1y | /r/LocalLLaMA/comments/188rj1y/best_model_for_ooba_running_on_7950x3d_4090_65gb/ | false | false | self | 1 | null |
Bad GPU but good for AI? | 1 | I was curious since you need a decently powerful GPU for AI (LLama, Stable diffusion) but ofc a good Nvidia card pretty much costs a Kidney, would a working Nvidia card that just has slight graphic artifacts still work for AI chats and image generation? Because you can get 'bad' GPU that still get video but have white lines, artifcats etc for pretty much free (for instance a RTX 2080 for $40). | 2023-12-02T00:56:19 | https://www.reddit.com/r/LocalLLaMA/comments/188rirr/bad_gpu_but_good_for_ai/ | Nervous-Computer-885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188rirr | false | null | t3_188rirr | /r/LocalLLaMA/comments/188rirr/bad_gpu_but_good_for_ai/ | false | false | self | 1 | null |
What kind of rig would I need for full fine tuning at home? | 11 | I would like to get into fine tuning, but the ideas I have seem to require full parameter fine tuning (or maybe ReLoRa), with no quantization.
I would prefer to invest in a home rig upfront that can double as a future proof gaming PC than spend money renting cloud instances.
What kind of rigs are required / available for this for the 6B, 13B and 60B models respectively? | 2023-12-02T00:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/188rbl7/what_kind_of_rig_would_i_need_for_full_fine/ | mpricop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188rbl7 | false | null | t3_188rbl7 | /r/LocalLLaMA/comments/188rbl7/what_kind_of_rig_would_i_need_for_full_fine/ | false | false | self | 11 | null |
How does everyone verify the security risk for each extension for Ooba UI? | 1 | How does everyone verify the security risk for each extension for Ooba UI? Any tips for the non devs out there? | 2023-12-02T00:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/188r4va/how_does_everyone_verify_the_security_risk_for/ | intenzething | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188r4va | false | null | t3_188r4va | /r/LocalLLaMA/comments/188r4va/how_does_everyone_verify_the_security_risk_for/ | false | false | self | 1 | null |
Microsoft and AMD collaborating to improve LLM performance on AMD hardware? | 75 | Anyone with AMD hardware looked at these?
https://blogs.windows.com/windowsdeveloper/2023/11/15/announcing-preview-support-for-llama-2-in-directml/
https://community.amd.com/t5/ai/how-to-running-optimized-llama2-with-microsoft-directml-on-amd/ba-p/645190 | 2023-12-01T23:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/188q7on/microsoft_and_amd_collaborating_to_improve_llm/ | daedelus82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188q7on | false | null | t3_188q7on | /r/LocalLLaMA/comments/188q7on/microsoft_and_amd_collaborating_to_improve_llm/ | false | false | self | 75 | {'enabled': False, 'images': [{'id': '-4et93XWpJJ1ir5mWY9DxE6OSkzWxuANDFfq8ZEqiDY', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?width=108&crop=smart&auto=webp&s=67d4c68b130882101b5eefcf883b66af8372d2fb', 'width': 108}, {'height': 67, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?width=216&crop=smart&auto=webp&s=33b2f705f99c8c25136d5833a02994392cf8bba3', 'width': 216}, {'height': 100, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?width=320&crop=smart&auto=webp&s=34c6c3730318fe194ca07b74d1920b96252e79a7', 'width': 320}, {'height': 200, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?width=640&crop=smart&auto=webp&s=0875a7dab52c8280d0824e8c0fe46afa2cdd53ee', 'width': 640}, {'height': 300, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?width=960&crop=smart&auto=webp&s=3fe7c58f4ce56af8f573323df4ab9bdea05c7b03', 'width': 960}, {'height': 337, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?width=1080&crop=smart&auto=webp&s=ec25e5b7b13e5f88999f1fb7935d74b16895c565', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-0iX5VHHhedRSl2NAzDo-vhbH9txttU1NkXpjSxRGNs.jpg?auto=webp&s=acfa7d9c02de14f9c62624ba37f0eaa900f86392', 'width': 1920}, 'variants': {}}]} |
Adding special tokens to vocabulary when training with Axolotl | 5 | Hello
I'm fine-tuning Llama-2 13b with Axolotl. My dataset for fine-tuning look as follows:
# dataset.jsonl
{"text": "### Human: This is a question### Chatbot: This is a reply### Human: What the hell are you talking about?"}
{"text": "### Human: Who's coming tonight?### Chatbot: No one, it's literally Monday."}
...
I'm using `###` as special tokens to separate turns. How can I add `###` to the vocabulary during training with Axolotl? Should I add it to the special\_tokens in the yaml config file? | 2023-12-01T23:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/188pjjc/adding_special_tokens_to_vocabulary_when_training/ | Helveticus99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188pjjc | false | null | t3_188pjjc | /r/LocalLLaMA/comments/188pjjc/adding_special_tokens_to_vocabulary_when_training/ | false | false | self | 5 | null |
M2 Max 96GB or M2 Ultra 64GB | 3 | Mac Studio Apple M2 Ultra Chip with 24‑Core CPU and 60‑Core GPU, 64 GB RAM
or
Mac Studio Apple M2 Max Chip with 12‑Core CPU and 38‑Core GPU, 96 GB RAM
For working with LLM. What would you do? | 2023-12-01T23:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/188p65j/m2_max_96gb_or_m2_ultra_64gb/ | breadandtacos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188p65j | false | null | t3_188p65j | /r/LocalLLaMA/comments/188p65j/m2_max_96gb_or_m2_ultra_64gb/ | false | false | self | 3 | null |
Error when trying to load Qwen 72B GGUF | 1 | I've been waiting for a long time for the GGUF version of this Chinese model to appear.
Downloaded it from here - [https://huggingface.co/CausalLM/72B-preview-GGUF](https://huggingface.co/CausalLM/72B-preview-GGUF)
I chose q5 k m because I used 70b llama with the same quantization level without any problems.
When I try to load the model into RAM, I get the error that is in the screenshot.
I have 64 gigabytes of RAM. 13600K
Attempts to use the model through other loading methods also did not work. I get an error. What is the problem and how to fix it?
​
https://preview.redd.it/qijshfrejr3c1.jpg?width=1660&format=pjpg&auto=webp&s=f2fc59a0e46b1d38b23f26b08f07ebe3d4460f76
https://preview.redd.it/upo8mhrejr3c1.jpg?width=1597&format=pjpg&auto=webp&s=00ddd826308eb48096b4e2bfcd0ffc56942b6021
https://preview.redd.it/m1jydf5gjr3c1.jpg?width=1072&format=pjpg&auto=webp&s=50d627eded80453dc01bd2f6682f6ef761ebdfce
​ | 2023-12-01T22:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/188ow8s/error_when_trying_to_load_qwen_72b_gguf/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188ow8s | false | null | t3_188ow8s | /r/LocalLLaMA/comments/188ow8s/error_when_trying_to_load_qwen_72b_gguf/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YnHgP1DWmIIAMRhBEx-0Xt_tAO_Yy-GPRhAy5P18tMI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?width=108&crop=smart&auto=webp&s=b33a61c33d022c8823c1e1b0b35bd4065cb0c896', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?width=216&crop=smart&auto=webp&s=59c364748735729c1f2c4b04a2c80cc7a94a16e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?width=320&crop=smart&auto=webp&s=e45a0e759ac586981be29973008186d6fed1d9d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?width=640&crop=smart&auto=webp&s=f1fed42313ff98cd3379d59e10c398e5288e93b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?width=960&crop=smart&auto=webp&s=6f276db5de5bd36ae431e3470a3fdf759d5fb8b2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?width=1080&crop=smart&auto=webp&s=803a6714a2f5139ffdcb57d7a5a234c2e93e1ffb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FUIjzhcOJU99Fss08ZoiwJPbQh85_DSxmlm7L_186Y0.jpg?auto=webp&s=51536407d33db462d7c8b4cf7174baa282951aa8', 'width': 1200}, 'variants': {}}]} | |
Error when trying to load Qwen 72B GGUF | 1 | I've been waiting for a long time for the GGUF version of this Chinese model to appear.
Downloaded it from here - [https://huggingface.co/CausalLM/72B-preview-GGUF](https://huggingface.co/CausalLM/72B-preview-GGUF)
I chose q5 k m because I used 70b llama with the same quantization level without any problems.
When I try to load the model into RAM, I get the error that is in the screenshot.
I have 64 gigabytes of RAM. 13600K
Attempts to use the model through other loading methods also did not work. I get an error. What is the problem and how to fix it? | 2023-12-01T22:51:37 | https://www.reddit.com/r/LocalLLaMA/comments/188oufe/error_when_trying_to_load_qwen_72b_gguf/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188oufe | false | null | t3_188oufe | /r/LocalLLaMA/comments/188oufe/error_when_trying_to_load_qwen_72b_gguf/ | false | false | self | 1 | null |
LLM to copy Website Code | 1 | [removed] | 2023-12-01T22:43:02 | https://www.reddit.com/r/LocalLLaMA/comments/188onix/llm_to_copy_website_code/ | GdUpFromFeetUp100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188onix | false | null | t3_188onix | /r/LocalLLaMA/comments/188onix/llm_to_copy_website_code/ | false | false | self | 1 | null |
How do you decide which file to download from huggingface? | 1 | Hi there, new to running local llamas. I'm trying to run some with text-gen-web-ui, but I'm pretty confused about which file I should be downloading. For example, there's all these different files, with different file suffixes.
What should my thought process be when picking one of these files?
https://preview.redd.it/e7utpyfadr3c1.png?width=1348&format=png&auto=webp&s=1b85b2acb9d536850eb874a5d6cdd0ca2a6f3df7 | 2023-12-01T22:20:02 | https://www.reddit.com/r/LocalLLaMA/comments/188o50m/how_do_you_decide_which_file_to_download_from/ | katsuthunder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188o50m | false | null | t3_188o50m | /r/LocalLLaMA/comments/188o50m/how_do_you_decide_which_file_to_download_from/ | false | false | 1 | null | |
My Refiner Working As Intended | 1 | ​
https://preview.redd.it/c8m8jkafcr3c1.png?width=1667&format=png&auto=webp&s=da933c0b68893c5ee67dd6cfa9127e86e99efbb3 | 2023-12-01T22:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/188o0h4/my_refiner_working_as_intended/ | HelloHash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188o0h4 | false | null | t3_188o0h4 | /r/LocalLLaMA/comments/188o0h4/my_refiner_working_as_intended/ | false | false | 1 | null | |
LLM-Guidance NPM package with constrained generation support | 4 | 2023-12-01T21:45:17 | https://www.npmjs.com/package/@mikugg/guidance | iwantofftheride00 | npmjs.com | 1970-01-01T00:00:00 | 0 | {} | 188nckj | false | null | t3_188nckj | /r/LocalLLaMA/comments/188nckj/llmguidance_npm_package_with_constrained/ | false | false | 4 | {'enabled': False, 'images': [{'id': '3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=108&crop=smart&auto=webp&s=0d9f8401435fed0f934dac9f8f12b7f05480e1ec', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=216&crop=smart&auto=webp&s=30065d434f38c96637a3fb263a468c45766f03c2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=320&crop=smart&auto=webp&s=58eb4ab37e6ce7544161a493caa8695162abffd2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=640&crop=smart&auto=webp&s=8ac9ce6a0d25fae7f6d6d78c177a7289c0eb8c68', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=960&crop=smart&auto=webp&s=d6d5429dc03cd49933d35e8fab872f0a24e4bc98', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?width=1080&crop=smart&auto=webp&s=dc2b2fb470fed5b167ca625efb25c8ebae66d3ff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sr7XqdeKF73E4m8CFm57jK-VSCmixf5xr3cX1tdw1SY.jpg?auto=webp&s=ce7d1ed79b7b2469b3bcac2baa84c5212e6c8126', 'width': 1200}, 'variants': {}}]} | ||
LLM-Guidance NPM package with constrained responses support | 1 | [deleted] | 2023-12-01T21:43:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 188nb3u | false | null | t3_188nb3u | /r/LocalLLaMA/comments/188nb3u/llmguidance_npm_package_with_constrained/ | false | false | default | 1 | null | ||
Venus 103b, versions 1.0 and 1.1 | 27 | https://huggingface.co/nsfwthrowitaway69/Venus-103b-v1.0
https://huggingface.co/nsfwthrowitaway69/Venus-103b-v1.1
These are two more experimental models in my "Venus" lineup. They're smaller than the original Venus-120b but from my initial testing they seem to perform better, likely because they use SynthIA 1.2b instead of 1.5, which is known to be broken for roleplays. The 1.1 version contains XWin instead of Nous Hermes.
Would love to get some feedback on these! I've been, for the most part, happy with both of their performance, but they do seem to need a somewhat low temperature to work well. | 2023-12-01T21:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/188myci/venus_103b_versions_10_and_11/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188myci | false | null | t3_188myci | /r/LocalLLaMA/comments/188myci/venus_103b_versions_10_and_11/ | false | false | self | 27 | null |
Simple QA Chatbot for local LLM | 3 | Hi all,
​
I've created a simple and lightweight python/streamlit QA Chatbot that is using a local LLM.
It provides a chat function and a upload section.
Im new to python and new to AI, but check out the code and let me know what you think of it
[https://github.com/PuxAI/SAQA](https://github.com/PuxAI/SAQA)
​
Some features i would like to build/get help with
\- adding chat history
\- saving chats
\- make it run faster :) | 2023-12-01T21:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/188muf2/simple_qa_chatbot_for_local_llm/ | e-nigmaNL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188muf2 | false | null | t3_188muf2 | /r/LocalLLaMA/comments/188muf2/simple_qa_chatbot_for_local_llm/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'B5lKuBMePCTjtoH-ESHa7ZpXcOguUiAmnMln_rQ13Dw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?width=108&crop=smart&auto=webp&s=ae45226967e81b522e715cd7abf5b5fbde446ea8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?width=216&crop=smart&auto=webp&s=119b429d1e54362ffd06796ecc0fe1fc3be902cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?width=320&crop=smart&auto=webp&s=84f5f14ae8779d024a909bf4db9f4e42a1809246', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?width=640&crop=smart&auto=webp&s=2f4e69561aff5948b2fd2805d4b6e7bd0aaf1247', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?width=960&crop=smart&auto=webp&s=54ab3178802c5694d2d55a2fea180a934412e6cf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?width=1080&crop=smart&auto=webp&s=3f9d6b5cfc761dea79a3bd1f64747ee0702ddc73', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zZpgYE4k1aL69m83wozNugQ-6Sov4WhLXKJGZFKsJc4.jpg?auto=webp&s=a08e23459fbdbbbb486f33f24c6ef3f12e5c9770', 'width': 1200}, 'variants': {}}]} |
#1 on LLM Leaderboard for 7B: Chupacabra-v2 - WE MADE IT! | 83 | We did it team! Thanks to everyone that gave it a shot!
[https://huggingface.co/perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2)
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | 2023-12-01T21:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/188mpy1/1_on_llm_leaderboard_for_7b_chupacabrav2_we_made/ | perlthoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188mpy1 | false | null | t3_188mpy1 | /r/LocalLLaMA/comments/188mpy1/1_on_llm_leaderboard_for_7b_chupacabrav2_we_made/ | false | false | self | 83 | {'enabled': False, 'images': [{'id': 'TOgp5aJOZFHDd6Q6Cwa8o2UH---LYMQZRI0vY8V21CI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?width=108&crop=smart&auto=webp&s=02f74c2b9469bba8d675a8aec3a96f9881b0706a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?width=216&crop=smart&auto=webp&s=61c53050d7a8c494db8f2871d9d56d40e7c107a7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?width=320&crop=smart&auto=webp&s=b558c7a880ae8988bab0a79f10d0a0513310dee4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?width=640&crop=smart&auto=webp&s=8359476ae8a40ebf355bd567c5559448a3e65031', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?width=960&crop=smart&auto=webp&s=c5b045e0abd14b32d1e43191fd2c2ae3092280d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?width=1080&crop=smart&auto=webp&s=58dabb73a12883e7e241d15404a3baed29d248dc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ofIBjeyyRwBfekYgLauJZHMnCP89xwUFNj9SJoGqmsk.jpg?auto=webp&s=ebf4e3a4977dcfc241f32b74c0a707dedec24b7a', 'width': 1200}, 'variants': {}}]} |
Swapping Trained GPT Layers with No Accuracy Loss : Why Models like Goliath 120B Works | 89 | I just tried a wild experiment following some conversations here on why models like Goliath 120b works.
I swapped the layers of a trained GPT model, like swap layer 6 and 18, and the model works perfectly well. No accuracy loss or change in behaviour. I tried this with different layers and demonstrate in my latest video that any two intermediate layers of a transformer model can be swapped with no change in behaviour. This is wild and gives an intuition into why model merging is possible.
Find the video here, [https://youtu.be/UGOIM57m6Gw?si=\_EXyvGqr8dOOkQgN](https://youtu.be/UGOIM57m6Gw?si=_EXyvGqr8dOOkQgN)
Also created a Google Colab notebook here to allow anyone replicate this experiment, [https://colab.research.google.com/drive/1haeNqkdVXUHLp0GjfSJA7TQ4ahkJrVFB?usp=sharing](https://colab.research.google.com/drive/1haeNqkdVXUHLp0GjfSJA7TQ4ahkJrVFB?usp=sharing)
And Github Link, [https://github.com/johnolafenwa/transformer\_layer\_swap](https://github.com/johnolafenwa/transformer_layer_swap) | 2023-12-01T20:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/188m82u/swapping_trained_gpt_layers_with_no_accuracy_loss/ | johnolafenwa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188m82u | false | null | t3_188m82u | /r/LocalLLaMA/comments/188m82u/swapping_trained_gpt_layers_with_no_accuracy_loss/ | false | false | self | 89 | {'enabled': False, 'images': [{'id': 'ZZprM3cdffqTimjtvdNhIfNNkNfcvo_WR1Zabziz_e4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Eg-jkFjKGxD56GWUzn8sMqmZ90tiW6hE4QNQj1pavI0.jpg?width=108&crop=smart&auto=webp&s=7af8d1f89df3adad5abce95dc7f99e23cc39e39a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Eg-jkFjKGxD56GWUzn8sMqmZ90tiW6hE4QNQj1pavI0.jpg?width=216&crop=smart&auto=webp&s=9f1dc2433a992da4091270b0f0fd5c3cdc9d7741', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Eg-jkFjKGxD56GWUzn8sMqmZ90tiW6hE4QNQj1pavI0.jpg?width=320&crop=smart&auto=webp&s=5d13873500fd4b675f44e9258956a2c87c378af3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Eg-jkFjKGxD56GWUzn8sMqmZ90tiW6hE4QNQj1pavI0.jpg?auto=webp&s=81d1aca11a6027710334ceec999a68e7eee88be3', 'width': 480}, 'variants': {}}]} |
is there any way to install (easely) a local llm ? | 2 | It's been some month where i'm searching the web on how to install and run an llm on my computer but i didn't figure out how to do it. I did find thing like h2o but idk what its really for and how to manage to use it. Maybe i'm just dumb... idk. Anybody here to help me ? | 2023-12-01T20:17:36 | https://www.reddit.com/r/LocalLLaMA/comments/188lbam/is_there_any_way_to_install_easely_a_local_llm/ | neoluigiyt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188lbam | false | null | t3_188lbam | /r/LocalLLaMA/comments/188lbam/is_there_any_way_to_install_easely_a_local_llm/ | false | false | self | 2 | null |
Host my own chatbot that processes on my server. | 9 | Hello
I want to bulid something like ChatGPT but processing it on my server. Ofcourse i dont have much processing power for more then 3 users but it wont be popular.
Here are the specs. let me know if its too weak
* Server: Dell R930
* RAM: 128GB
* CPU: Intel Xeon E7-8880 V4 (Total 96 cores)
* GPU: 2x Nvidia Tesla K10 8GB
* Storage: 2x 2TB SSD Localy and 3x 10TB HDD Over 10GBit LAN
The server dosent get any use. i got it via a huge discount at my company a year ago. i'd love to put it to use. I Wont run this 24/7, probably only in the late spring or summer when my PV system is generating enough power. or maybe for 5-6 hours a day. I know this is a crazy idea, but can it work? | 2023-12-01T19:28:40 | https://www.reddit.com/r/LocalLLaMA/comments/188k7kp/host_my_own_chatbot_that_processes_on_my_server/ | MercedesBenzTourino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188k7kp | false | null | t3_188k7kp | /r/LocalLLaMA/comments/188k7kp/host_my_own_chatbot_that_processes_on_my_server/ | false | false | self | 9 | null |
How to build a user personalized llama 2 model | 1 | [removed] | 2023-12-01T17:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/188h7hu/how_to_build_a_user_personalized_llama_2_model/ | Agreeable-Ear-3153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188h7hu | false | null | t3_188h7hu | /r/LocalLLaMA/comments/188h7hu/how_to_build_a_user_personalized_llama_2_model/ | false | false | self | 1 | null |
What are the highest performing model less than 500M Parameters | 21 | Was just wondering what the best chat/instruct model out there that are less than 500m parameters. My plan is to finetune it for extremely simple and specific tasks not a "know-it-all" like gtp4 or other large parameter models. | 2023-12-01T17:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/188gvrn/what_are_the_highest_performing_model_less_than/ | Danny_Davitoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188gvrn | false | null | t3_188gvrn | /r/LocalLLaMA/comments/188gvrn/what_are_the_highest_performing_model_less_than/ | false | false | self | 21 | null |
Is there any codebase that can run the OPT/Galactica models like llama.cpp on macOS? | 1 | I'm trying to find a llama.cpp like codebase that run meta's Galactica model on macOS. I searched Github but couldn't find anything. I know there exist forks of llama.cpp like bloom.cpp, but searching for opt.cpp or anything related doesn't yield anything. | 2023-12-01T16:46:46 | https://www.reddit.com/r/LocalLLaMA/comments/188gfpk/is_there_any_codebase_that_can_run_the/ | wisscool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188gfpk | false | null | t3_188gfpk | /r/LocalLLaMA/comments/188gfpk/is_there_any_codebase_that_can_run_the/ | false | false | self | 1 | null |
Notus-7B-v1, new OSS LLM trained with DPO and cleaned version of UltraFeedback | 29 | Hello LocalLLaMA,
at Argilla we just released **Notus-7B-v1**, a new OSS LLM fine-tuned using DPO and a new version of the UltraFeedback dataset. But why? using Argilla, we detected some issues in the original UltraFeedback dataset, leading to high-scores in the `overall_score` column for bad-responses.
We decided to create a [new version of the dataset](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences) creating a new score column taking the average of the preference ratings, resulting in a new dataset in which the chosen response is different in \~50% of the cases from the original dataset.
Finally, we used this new dataset to fine-tune [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) using DPO. This is the only difference between Zephyr-7B-beta and Notus-7B-v1. The results? Notus surpasses both Zephyr-7B-beta and Claude2 in the AlpacaEval benchmark.
You can check the model in Hugging Face Hub: [https://huggingface.co/argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1), and remember to always look at your data 😉
| 2023-12-01T16:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/188gbsp/notus7bv1_new_oss_llm_trained_with_dpo_and/ | Miserable-Penalty582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188gbsp | false | null | t3_188gbsp | /r/LocalLLaMA/comments/188gbsp/notus7bv1_new_oss_llm_trained_with_dpo_and/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'ZslB7vhtLiH3GpYEmXEPywkkH7fTAVGnu9Adf1TlHeQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?width=108&crop=smart&auto=webp&s=e5364d19b0abc3c946869fe54af208c3458b83f3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?width=216&crop=smart&auto=webp&s=7274d24414ba6e125ef40b124104c6ad70f82e23', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?width=320&crop=smart&auto=webp&s=516e53bd102f54642818a7a36b0115df72db2558', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?width=640&crop=smart&auto=webp&s=d083c8616d217ca5b745bd26c29f7527a4d07d19', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?width=960&crop=smart&auto=webp&s=fef9ba4875fa2ed2d4362ce5996ed23b10eff76b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?width=1080&crop=smart&auto=webp&s=fb7392ee1194d100ead01fea67544c921dead573', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mO490DVNk7GDGwJugJjCcTI642jukK0KnqvbQUe0mJA.jpg?auto=webp&s=75f71cfd02d6c6ad56a7bc6992230b1ed67f9d90', 'width': 1200}, 'variants': {}}]} |
Downloading weights from replicate of fine-tuned llama2 model? | 3 | Hi,
I've fine-tuned my very first LLM based on 7B llama2 on [replicate.com](https://replicate.com). It worked quite smoothly and I can run the model through replicate's API and website. Is there a chance to run the weights locally with llama.cpp? On the replicate page I can download the weights that contain following two files:
* adapter\_config.json
* adapter\_model.bin
I've tried to run the model weights with my local llamacpp build with this command:
`./main -m models/llama-2-7b.Q2_K.gguf --lora adapter_model.bin`
but it fails:
`llama_apply_lora_from_file_internal: unsupported file version`
`llama_init_from_gpt_params: error: failed to apply lora adapter`
FYI: I started to fine-tune the 7B model on my local computer with llamacpp's finetune command but that seemed to take multiple days (running only on CPU ;( and I cancelled the process, so I need to fine-tune via a cloudservice but I would like to run the fine-tuned model locally)
Any hints/ideas appreciated! | 2023-12-01T16:27:19 | https://www.reddit.com/r/LocalLLaMA/comments/188fzhu/downloading_weights_from_replicate_of_finetuned/ | yugobot2911 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188fzhu | false | null | t3_188fzhu | /r/LocalLLaMA/comments/188fzhu/downloading_weights_from_replicate_of_finetuned/ | false | false | self | 3 | null |
DeepSeeker 7b with recent knowledge cut-off | 5 | Is there any 7b DeepSeeker model that has a recent knowledge cut-off? I just installed one on LM Studio, but the cut-off is 2021, way too old for my coding needs. | 2023-12-01T16:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/188fvcl/deepseeker_7b_with_recent_knowledge_cutoff/ | Ok-Book-1494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188fvcl | false | null | t3_188fvcl | /r/LocalLLaMA/comments/188fvcl/deepseeker_7b_with_recent_knowledge_cutoff/ | false | false | self | 5 | null |
Fine tuning gpt-3.5-turbo on a code dataset | 1 | Has anyone tried fine tuning any of the OpenAI LLMs on a coding dataset (like HumanEval or similar) to make it better at generating working code? If so, how did it perform? Been trying to find benchmarks for this, I don't see why would it not work great in theory. | 2023-12-01T16:11:16 | https://www.reddit.com/r/LocalLLaMA/comments/188fma1/fine_tuning_gpt35turbo_on_a_code_dataset/ | geepytee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188fma1 | false | null | t3_188fma1 | /r/LocalLLaMA/comments/188fma1/fine_tuning_gpt35turbo_on_a_code_dataset/ | false | false | self | 1 | null |
Fine-tune Llama2 or Mistral 7b on AMD GPU (Mac OSX x64 or Windows) | 1 | HI Everyone
I've got Mac Osx x64 with AMD RX 6900 XT. Alternatively I can run Windows 11 with the same GPU.
I would like to fine-tune either llama2 7b or Mistral 7b on my AMD GPU either on Mac osx x64 or Windows 11. By fine-tune I mean that I would like to prepare list of questions an answers related to my work, it can be csv, json, xls, doesn't matter. Can you recommend a guide with the code I can use? I found something but it's based on CUDA only? | 2023-12-01T15:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/188fad2/finetune_llama2_or_mistral_7b_on_amd_gpu_mac_osx/ | marekk17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188fad2 | false | null | t3_188fad2 | /r/LocalLLaMA/comments/188fad2/finetune_llama2_or_mistral_7b_on_amd_gpu_mac_osx/ | false | false | self | 1 | null |
Research Survey: Bugs in LLM-Generated Code | 1 | [removed] | 2023-12-01T15:19:10 | https://www.reddit.com/r/LocalLLaMA/comments/188eeh6/research_survey_bugs_in_llmgenerated_code/ | aminnikanjam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188eeh6 | false | null | t3_188eeh6 | /r/LocalLLaMA/comments/188eeh6/research_survey_bugs_in_llmgenerated_code/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8FAtE2dfkLEbf9UdLT46JMwQucq5g8S6ZNTWm6D1H-s', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?width=108&crop=smart&auto=webp&s=9202d3da8be99e0a450a31a8cf6eb792a95f971f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?width=216&crop=smart&auto=webp&s=64c02fc101e6b8c0492ed16f3bd5806a7b1c72d1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?width=320&crop=smart&auto=webp&s=abc79120c7e1c9a7a1561a35215fdca2b3f424f0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?width=640&crop=smart&auto=webp&s=36c4c806a4171974a09c1763df9482e41d439845', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?width=960&crop=smart&auto=webp&s=ba3cb289b8bdfc7d14f1fa076a2de6a81a7a9e11', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?width=1080&crop=smart&auto=webp&s=c83313dde3b31bb7a5c56cb332630e5059239281', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/o7cdDcOqWUO1KSfYaWDbY6UKFzQ7p94KgsUB0FhRq0A.jpg?auto=webp&s=1b6c37bd54170d43b43cf8842e1b1542950333be', 'width': 1200}, 'variants': {}}]} |
Any way to automate summary of YT Transcript? | 1 | Hi Team
Wondering if anyone has an elegant and functional solution for this? Ideally, based on a link provided, I'd like a local model to extract the transcription of the video (which YouTube already generates) and generate/ save a summary of the video/ key takeaways.
Have you done this? What's the right way of going about it? Ideally elegantly, with as few tools as possible (I'm on the Mac).
Thank you!! | 2023-12-01T15:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/188dzgx/any_way_to_automate_summary_of_yt_transcript/ | laterral | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188dzgx | false | null | t3_188dzgx | /r/LocalLLaMA/comments/188dzgx/any_way_to_automate_summary_of_yt_transcript/ | false | false | self | 1 | null |
Backup for coding (python and js/ts) for GPT-4 with LM Studio | 3 | While I've become increasingly dependent in my workflow on GPT-4 for code stuff, there were times where the GPT-4 was down or inaccessible. What do you guys use or could suggest as a backup offline model in case of ish. I'm working on a 3060 6GB-Vram laptop with 64 GB ram. In some respects as I get deeper into the subject I worry about being reliant on a corporate 3rd party, especially when as we've recently seen, things can go sideways over a weekend. Any suggestions would be awesome. | 2023-12-01T14:52:51 | https://www.reddit.com/r/LocalLLaMA/comments/188ds3k/backup_for_coding_python_and_jsts_for_gpt4_with/ | WaterPecker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188ds3k | false | null | t3_188ds3k | /r/LocalLLaMA/comments/188ds3k/backup_for_coding_python_and_jsts_for_gpt4_with/ | false | false | self | 3 | null |
The Desirability of Pruning for a Children's AI toy? | 3 | ​
https://preview.redd.it/0bgnebu3zo3c1.png?width=1008&format=png&auto=webp&s=7f9e07d1ae7f4397a19fa49fa1d7db8c39403389
I was chatting with a Llama 2 web age, which doesn't allow copying the text out.
Odd...
Can anyone tell me more about "pruning"?
Especially in the context of taking something as smart as Llama 2 7B Chat, and making it easier to execute in a Raspberry PI 5, with minimal Tensor assistance?
It sounds like it would also make a much better Child's companion toy.
"Hallucinations" as people call mistakes, might not be so bad for a Teddy bear that's supposed to mimic a 5 year old's responses.
Does pruning cause more hallucinations in an AI?
Is there automated pruning software that's downloadable?
I heard that the Q\* AI could self-prune.
​
​
​ | 2023-12-01T14:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/188cz3v/the_desirability_of_pruning_for_a_childrens_ai_toy/ | danl999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188cz3v | false | null | t3_188cz3v | /r/LocalLLaMA/comments/188cz3v/the_desirability_of_pruning_for_a_childrens_ai_toy/ | false | false | 3 | null | |
RAG + real TXT book + Yi34b-chat = creative writing beast | 86 | I have tried recent model drops and still will stick to Ti34b-chat, as it is the most creative in terms of creative writing.
Then I attached [the RAG approach](https://github.com/oobabooga/text-generation-webui/pull/3272) to the model and fed to the embeddings the entire **World War Z** .txt book (zombie horrors lover here, guilty).
Here is what the story written with that approach looks like:
[https://pastebin.com/4UL68WAm](https://pastebin.com/4UL68WAm)
1) What do you think about the creativity of the text?
2) Has anyone tried to QLORA the real book and does it help to "continue" the favorite books? | 2023-12-01T14:15:36 | https://www.reddit.com/r/LocalLLaMA/comments/188cwk3/rag_real_txt_book_yi34bchat_creative_writing_beast/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188cwk3 | false | null | t3_188cwk3 | /r/LocalLLaMA/comments/188cwk3/rag_real_txt_book_yi34bchat_creative_writing_beast/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'xh1tHixLJkHpPxC_lJ8Cu0MEu_PX2ijsM8fIVPqxdrs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?width=108&crop=smart&auto=webp&s=886fd8eb0be50ce3548977943747850aad3a93d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?width=216&crop=smart&auto=webp&s=97a80e44c00450a991981c1e5cb34f306c38f9c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?width=320&crop=smart&auto=webp&s=6af0e8c8ec7af6b5abae365fe53b1531009fb475', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?width=640&crop=smart&auto=webp&s=7d07bff1ba815b324b6a6928d94635903ab7082e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?width=960&crop=smart&auto=webp&s=061705ba10a054e309ef0cc5dbc4a8b1eb46c9c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?width=1080&crop=smart&auto=webp&s=bb98c7b20d477e6f7ced08b9b84488c25fde87c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Aq3khl3jXwGBAtiRwVBhTv3ARWRvjkBA2vjURV3Ztbw.jpg?auto=webp&s=044fd497c474f1607019a3f47ffa429c7cb01fb7', 'width': 1200}, 'variants': {}}]} |
Incoming: TensorRT-LLM version 0.6 with support for MoE, new models and more quantization | 37 | 2023-12-01T13:50:00 | https://github.com/NVIDIA/TensorRT-LLM/pull/524 | Balance- | github.com | 1970-01-01T00:00:00 | 0 | {} | 188cbde | false | null | t3_188cbde | /r/LocalLLaMA/comments/188cbde/incoming_tensorrtllm_version_06_with_support_for/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'IwLMrOWX39Rc9fHmBrxH8tLo2Yn9FwDUHc6vAlbYyIM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?width=108&crop=smart&auto=webp&s=c1f26bb83ff8b4eada9a1034ff93fc1d8fb20ef3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?width=216&crop=smart&auto=webp&s=3123d9e670c671d09202dc131cda9d0d8f19775c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?width=320&crop=smart&auto=webp&s=fd2a719f6b524c02f242a325a17ead22e7b37988', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?width=640&crop=smart&auto=webp&s=a9320cbb74a5dcd8a08d5e87262f158fe2cb9588', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?width=960&crop=smart&auto=webp&s=57b0485679d7cae741eef673be5ea0ff8b498a1a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?width=1080&crop=smart&auto=webp&s=ab657f7f6443514c0a78607a058abec68d7eba52', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c_F0bmplsOQiwSPzVd2Iu51gKLi1P24QDQzuNNPVYVQ.jpg?auto=webp&s=95660cae5aec6737454f0219297659415c93372a', 'width': 1200}, 'variants': {}}]} | ||
Multiple 4090s instead of H100? | 29 | Why do people pay for much more expensive H100 GPUs instead of simply buying multiple 4090 GPUs to achieve the same thing in terms of vRAM?
Wouldn't the 4090s even be faster for model inference because of higher clock speed? | 2023-12-01T13:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/188boew/multiple_4090s_instead_of_h100/ | Typical-Tough-372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188boew | false | null | t3_188boew | /r/LocalLLaMA/comments/188boew/multiple_4090s_instead_of_h100/ | false | false | self | 29 | null |
What can I do to speed up my CPU workload? [CPU only] | 8 | Good day everyone. I really appreciate the fact that I found this community, all of you are incredibly knowledgeable, helpful and passionate about this topic.
Onto my question: how can I make CPU inference faster? Here's my setup:
- **CPU**: Ryzen 5 3600
- **RAM**: 16 GB DDR4
- **Runner**: [ollama](https://github.com/jmorganca/ollama)
- **Model**: [OpenHermes-2.5-Mistral 7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) Quantized to 4 bits.
I'm running on CPU-only because my graphics card is insufficient for this task, having 2GB of GDDR5 VRAM. | 2023-12-01T13:17:08 | https://www.reddit.com/r/LocalLLaMA/comments/188bmti/what_can_i_do_to_speed_up_my_cpu_workload_cpu_only/ | argon___si | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188bmti | false | null | t3_188bmti | /r/LocalLLaMA/comments/188bmti/what_can_i_do_to_speed_up_my_cpu_workload_cpu_only/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'M6N-B9gAp8OA-6jaKuiFzyocoVjzWOxWamhJFr2iSEA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?width=108&crop=smart&auto=webp&s=71e1e1acd5d24fd5d9ad3db97c1866618f375a67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?width=216&crop=smart&auto=webp&s=a68d0a08b770864f74207691ef6682e428d22126', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?width=320&crop=smart&auto=webp&s=116db60bd9b91e629f85674098eb67e486d78d31', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?width=640&crop=smart&auto=webp&s=1aacdd68559c8d3605efc85466e766a9412c318d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?width=960&crop=smart&auto=webp&s=b778f60822ac3d9e8b52ca3d00c3ff93dcf9f4ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?width=1080&crop=smart&auto=webp&s=b39b1039e22be75b2a2e4b2f41c7dfe198ae533c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gagXK2oQPyx5OPpAY4Brn4yHJSuD_wcgPIzWqKxAKOs.jpg?auto=webp&s=06552737669c1f3846b71dd9504cc44627446379', 'width': 1200}, 'variants': {}}]} |
Is there really no way you can run 70b models without having a very fast GPU or a lot of ram? | 47 | I feel there’s go to be away without having a mega fast computer. There’s a couple on Google Colab but I have privacy issues | 2023-12-01T13:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/188bijz/is_there_really_no_way_you_can_run_70b_models/ | TheHumanFixer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188bijz | false | null | t3_188bijz | /r/LocalLLaMA/comments/188bijz/is_there_really_no_way_you_can_run_70b_models/ | false | false | self | 47 | null |
Multiple 4090 over H100? | 1 | [removed] | 2023-12-01T13:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/188bfat/multiple_4090_over_h100/ | KeyTouch755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188bfat | false | null | t3_188bfat | /r/LocalLLaMA/comments/188bfat/multiple_4090_over_h100/ | false | false | self | 1 | null |
How Does (modern?) LLM Benchmarking Work? | 3 | Does anybody have some good resources he can link? I‘d really like to understand what these different scores mean (for example the HF leaderboard), since many factors would have to play a role in determining an answer to a specific prompt.
Sorry if this is spam/or was answered elsewhere (would be happy about a link, couldn‘t find much) | 2023-12-01T13:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/188bd1s/how_does_modern_llm_benchmarking_work/ | 98_tamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188bd1s | false | null | t3_188bd1s | /r/LocalLLaMA/comments/188bd1s/how_does_modern_llm_benchmarking_work/ | false | false | self | 3 | null |
LLama with RAG | 27 | I am planing to use retrieval augmented generation (RAG) based chatbot to look up information from documents (Q&A).
I did try with GPT3.5 and It works pretty well. Now I want to try with Llama (or its variation) on local machine. The idea is to only need to use smaller model (7B or 13B), and provide good enough context information from documents to generate the answer for it.
Have anyone done it before, any comments?
Thanks!
| 2023-12-01T12:50:43 | https://www.reddit.com/r/LocalLLaMA/comments/188b4hl/llama_with_rag/ | ruathudo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188b4hl | false | null | t3_188b4hl | /r/LocalLLaMA/comments/188b4hl/llama_with_rag/ | false | false | self | 27 | null |
A collab for Whisper ? I'm looking for an AI secretary help. | 1 | Hello,
I know it's not local nor a LLM but I don't know better suited community for what I'm looking for.
​
I'd like to demonstrate the habilities of AI in a context of basic office work; for this, I suggested that we record our meetings and use Whisper to transcribe the conversations and use Claude to summarize everything.
​
Hopefully, we'll buy a computer strong enough for this kind of things next year but for now, we basically own fancy typewriters with screens.
​
Can someone recommand a good website for this ? I know that [Fal.ai](https://Fal.ai) give an access to Whisper, but maybe something better exists somewhere ?
Maybe even an easy to use implementation of WhisperX (for diarizations of different speakers) ? I watched a tuto in a collab but our meetings are usually more than 1h long so I don't know if it's suitable ?
​
I'd be glad to read about your ideas (and feel free to roast my english, I'm just a basic frenchy sacrebleu). | 2023-12-01T12:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/188b3fh/a_collab_for_whisper_im_looking_for_an_ai/ | CedricLimousin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188b3fh | false | null | t3_188b3fh | /r/LocalLLaMA/comments/188b3fh/a_collab_for_whisper_im_looking_for_an_ai/ | false | false | self | 1 | null |
Best UI for API to iterate over large datasets? | 2 | I have been using text generation web UI to iterate over large datasets where a template string is fed in with some data from a database to various LLMs. What web UI API do you feel like is best for this in terms of ease of use and adding features (e.g. adding a RAG, Langchain). | 2023-12-01T12:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/188aarc/best_ui_for_api_to_iterate_over_large_datasets/ | SirStagMcprotein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188aarc | false | null | t3_188aarc | /r/LocalLLaMA/comments/188aarc/best_ui_for_api_to_iterate_over_large_datasets/ | false | false | self | 2 | null |
How does context window reduction affect performance? | 3 | I've got MacBook Pro M1 16GB. In order to run deepseek-coder with 6.7b parameters, I need to reduce context, as it haven't got much ram. So, how can it affect this model performance? How far I can go reducing context? | 2023-12-01T11:58:33 | https://www.reddit.com/r/LocalLLaMA/comments/188a5gz/how_does_context_window_reduction_affect/ | TheKostins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188a5gz | false | null | t3_188a5gz | /r/LocalLLaMA/comments/188a5gz/how_does_context_window_reduction_affect/ | false | false | self | 3 | null |
is it possible to create a video to app system? | 1 | so just like how screenshot 2 code/design works but instead it will be for videos.
A good example could be someone uploading a video of a simple 2d game like angry bird and the model will give back a working game/demo. is this possible now? if not can some please explain why | 2023-12-01T11:37:37 | https://www.reddit.com/r/LocalLLaMA/comments/1889s9k/is_it_possible_to_create_a_video_to_app_system/ | Electronic-Choice-86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1889s9k | false | null | t3_1889s9k | /r/LocalLLaMA/comments/1889s9k/is_it_possible_to_create_a_video_to_app_system/ | false | false | self | 1 | null |
Hardware Requirements llama 2 7b | 1 | [removed] | 2023-12-01T11:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/1889rrv/hardware_requirements_llama_2_7b/ | kirushake20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1889rrv | false | null | t3_1889rrv | /r/LocalLLaMA/comments/1889rrv/hardware_requirements_llama_2_7b/ | false | false | self | 1 | null |
Are there any models that can answer with images from the training dataset? | 1 | [removed] | 2023-12-01T11:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/1889lpm/are_there_any_models_that_can_answer_with_images/ | durian_pizza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1889lpm | false | null | t3_1889lpm | /r/LocalLLaMA/comments/1889lpm/are_there_any_models_that_can_answer_with_images/ | false | false | self | 1 | null |
Play with Goliath 120b RP here - no signup | 17 | 6 bit for now - having strange llama memory issues with 8 bit even though plenty of GPU mem. It is also somewhat throttled response
It is uncensored so use at your own risk !!!
You can put prompt hints in parentheses brackets stars etc
https://projectatlantis.eu.ngrok.io/chat.html
Curious how ppl think it compares to smaller models | 2023-12-01T11:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/18898yn/play_with_goliath_120b_rp_here_no_signup/ | LookingForTroubleQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18898yn | false | null | t3_18898yn | /r/LocalLLaMA/comments/18898yn/play_with_goliath_120b_rp_here_no_signup/ | false | false | self | 17 | null |
LayoutLM for context aware PDF-chunking | 3 | Has anyone tried using LayoutLM to chunk pdfs contextually? I have used markdown files in the past and have used markdowntextsplitter. I need something like that for pdfs and came across this, but couldn't find proper examples.
| 2023-12-01T10:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/1888tl8/layoutlm_for_context_aware_pdfchunking/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1888tl8 | false | null | t3_1888tl8 | /r/LocalLLaMA/comments/1888tl8/layoutlm_for_context_aware_pdfchunking/ | false | false | self | 3 | null |
70b Q5_K_M gguf models on rtx 3090 (24gb) | 1 | I'm thinking of upgrading to 64GB of RAM so I can lot larger models on my rtx 3090.
If I want to run `tigerbot-70b-chat-v2.Q5_K_M.gguf` which has max RAM usage of 51.61GB, assuming I load 23GB worth of layers into VRAM that leaves 51.61-23=28.61 left to load in RAM. My operating system already uses up to 9.2GB of RAM which means I need 37.81GB of RAM (hence 64GB).
**How many tokens/s can I expect from 23GB out of 51.61GB being loaded in VRAM, and 28.61GB being loaded in RAM on an rtx 3090?** I'm mostly curious about Q5\_K\_M quant, but I'm still interested in other quants. | 2023-12-01T10:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1888m8z/70b_q5_k_m_gguf_models_on_rtx_3090_24gb/ | JawGBoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1888m8z | false | null | t3_1888m8z | /r/LocalLLaMA/comments/1888m8z/70b_q5_k_m_gguf_models_on_rtx_3090_24gb/ | false | false | self | 1 | null |
OpenKlyde - A Self Hosted AI Bot for a popular chat app how tò integrate with kobold horde | 1 |
OpenKlyde is an AI bot that connects to a koboldcpp instance by API calls
OpenKlyde - A Self Hosted AI Bot for a popular chat app. How to connect it to the kobold Horde?
Is this the only solution? | 2023-12-01T09:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/188859p/openklyde_a_self_hosted_ai_bot_for_a_popular_chat/ | Independent-Bill-770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188859p | false | null | t3_188859p | /r/LocalLLaMA/comments/188859p/openklyde_a_self_hosted_ai_bot_for_a_popular_chat/ | false | false | self | 1 | null |
llama.cpp server is using only one thread for prompt eval on WSL | 3 | I recently downloaded and built llama.cpp on my laptop. It is an i9 20-core (with hyperthreading) box with GTX 3060. I am running Ubuntu 20.04-WSL on Win 11, and that is where I have built llama.cpp.
Here is the command I used for compilation:
```
$ cmake .. -DLLAMA_CUBLAS=ON
$ cmake --build . --config Release
```
Here is the command I am using for running the server:
```
./build/bin/server -m ./models/7B/openhermes-2.5-mistral-7b.Q4_K_M.gguf -t 10 -a "openhermes-2.5" -c 2048 -ngl 2000000 --port 8585
```
When I send the server a request, I see in htop on WSL that only 1 core goes to 100% utilization. This results in pretty poor tokens per second for prompt eval. Is there any setting that I am missing? Why is the prompt eval on server not using all the cores, or at least cores equal to the number of `-t` parameter?
Thanks! | 2023-12-01T09:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/1887r4d/llamacpp_server_is_using_only_one_thread_for/ | samikr_2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1887r4d | false | null | t3_1887r4d | /r/LocalLLaMA/comments/1887r4d/llamacpp_server_is_using_only_one_thread_for/ | false | false | self | 3 | null |
Build Enterprise Retrieval-Augmented Generation Apps with NVIDIA Retrieval QA Embedding Model | 10 | 2023-12-01T09:17:26 | https://developer.nvidia.com/blog/build-enterprise-retrieval-augmented-generation-apps-with-nvidia-retrieval-qa-embedding-model/ | Scary-Knowledgable | developer.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 1887mqc | false | null | t3_1887mqc | /r/LocalLLaMA/comments/1887mqc/build_enterprise_retrievalaugmented_generation/ | false | false | 10 | {'enabled': False, 'images': [{'id': '35ZYCZPlkYau08ZTAIgDtr_n-R-aUZm8QXoFrdGqLf0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?width=108&crop=smart&auto=webp&s=0924352b370a8d3823f0b687dc55fa841e992e43', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?width=216&crop=smart&auto=webp&s=db0c553cc7ba482ad5f56173368bb40e7499da04', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?width=320&crop=smart&auto=webp&s=190adfc784d4cfb72241f422d93676631b6db85a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?width=640&crop=smart&auto=webp&s=78f8d394e4681b23d85594164c060bbadd6d52e3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?width=960&crop=smart&auto=webp&s=b1674c3b621158fcf234e9b3119b38063bac7a9d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?width=1080&crop=smart&auto=webp&s=c9536383ed6ecbce7aec7923294011a5798dadf3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZYE-jd1OeW2QfLMhNV224msf9OQQrdZpB9fQxUSAFjs.jpg?auto=webp&s=38a4a17c156911d47041044c01d14c90c40fd51c', 'width': 1920}, 'variants': {}}]} | ||
I was wondering where I should start out based on my hardware? | 1 | I9-10940x
128GB ddr4
1070ti
Couple of ssd’s | 2023-12-01T08:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/18879n2/i_was_wondering_where_i_should_start_out_based_on/ | relia7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18879n2 | false | null | t3_18879n2 | /r/LocalLLaMA/comments/18879n2/i_was_wondering_where_i_should_start_out_based_on/ | false | false | self | 1 | null |
I want to get a new card | 1 | What's the current state of AMD cards? The Nvidia prices are just insane... | 2023-12-01T08:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/18870ah/i_want_to_get_a_new_card/ | Independent_Hyena495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18870ah | false | null | t3_18870ah | /r/LocalLLaMA/comments/18870ah/i_want_to_get_a_new_card/ | false | false | self | 1 | null |
how to use the google colab llama2 | 1 | [removed] | 2023-12-01T07:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1886gsp/how_to_use_the_google_colab_llama2/ | ComfortableCover2891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1886gsp | false | null | t3_1886gsp | /r/LocalLLaMA/comments/1886gsp/how_to_use_the_google_colab_llama2/ | false | false | self | 1 | null |
Should I buy M3 pro or M1 max for Local LLM? | 1 | I’m in the process of upgrading my MacBook from an older Intel model to one with the M-series chip. After browsing recent deals, I’m torn between two options within my budget:
1. M1 Max: 64GB RAM and 2TB SSD
2. M3 Pro: 36GB RAM and 1TB SSD
A bit of context: I already have a 4090 at home, so this MacBook will primarily be used for on-the-go tasks and LLM testing, specifically with models like CodeLlama-34B.
I’m leaning towards the M1 Max for its higher RAM and storage, but I’m hesitant due to potential performance issues with LLM interpretation, like to slow to be usable on 34b Models. Does anyone have experience with using the M1 Max for similar purposes? Or would the M3 Pro be a better fit despite its lower specs?
Any insights or recommendations would be greatly appreciated! | 2023-12-01T06:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/1885bmt/should_i_buy_m3_pro_or_m1_max_for_local_llm/ | CodegassOsily | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1885bmt | false | null | t3_1885bmt | /r/LocalLLaMA/comments/1885bmt/should_i_buy_m3_pro_or_m1_max_for_local_llm/ | false | false | self | 1 | null |
What is the major difference between different frameworks with regards to performance, hardware requirements vs model support? Llama.cpp vs koboldcpp vs local ai vs gpt4all vs Oobabooga | 5 | Is there a comparison of performance between different frameworks? What are the strengths of each?
I was not able to find anything similar. If someone with more knowledge can help put. I hope this would be a good resource for newbies as well as trying to pick a tool. I know there is a lot of overlap between them, but some may perform better natively.
||CPU Only Performance|GPU + CPU Performance|Overall|
|:-|:-|:-|:-|
|Lllamacpp||||
|Local AI||||
|Koboldcpp||||
|Oobabooga Text Generation UI||||
| LLM Studio||||
|vLLM||||
|GPT4ALL||||
| RWKV ||||
| text-generation-inference by HF|n/a|||
​ | 2023-12-01T06:34:31 | https://www.reddit.com/r/LocalLLaMA/comments/18857nv/what_is_the_major_difference_between_different/ | gpt872323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18857nv | false | null | t3_18857nv | /r/LocalLLaMA/comments/18857nv/what_is_the_major_difference_between_different/ | false | false | self | 5 | null |
My article on streaming responses from Open source llm | 1 | [removed] | 2023-12-01T06:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/18850tr/my_article_on_streaming_responses_from_open/ | Jaswanth04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18850tr | false | null | t3_18850tr | /r/LocalLLaMA/comments/18850tr/my_article_on_streaming_responses_from_open/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fEzP1FT0x6ButajC-AoDzmlFEN2CVXCdqyRH42THzQE', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?width=108&crop=smart&auto=webp&s=3b5f553298086ab26d2a8cc6a0539b3bb0d04261', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?width=216&crop=smart&auto=webp&s=0b3805c3b17eaa661040074cdaa828ad8b4f091b', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?width=320&crop=smart&auto=webp&s=0bf45603c0a34a2f355dcbf1c86caeb09b342234', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?width=640&crop=smart&auto=webp&s=dd6657e6b348d25c9bd5ad734a264be0ec6824ca', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?width=960&crop=smart&auto=webp&s=9aa6f27fc524eb6a6361156fbf4f3c1020d54150', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?width=1080&crop=smart&auto=webp&s=61734fd3c1dd50b57f0f3048186851905cc33951', 'width': 1080}], 'source': {'height': 686, 'url': 'https://external-preview.redd.it/pfibfiyL3z_R6glqk6bed9F4pz3aMKFQ9G2VRNqitVk.jpg?auto=webp&s=c3ea7b50f3a350ed62f98e4610fd57ad6968856e', 'width': 1200}, 'variants': {}}]} |
LLM for forex? | 1 | Does anyone use any llm with forex data to do back testing and fine tuning? | 2023-12-01T05:25:51 | https://www.reddit.com/r/LocalLLaMA/comments/1883zud/llm_for_forex/ | Alrightly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1883zud | false | null | t3_1883zud | /r/LocalLLaMA/comments/1883zud/llm_for_forex/ | false | false | self | 1 | null |
How is an LLM's knowledge quantified (rather than it's reasoning ability)? | 1 | I've been thinking about the minimum knowledge base an LLM needs to be a useful assistant, and realized I don't see any way of telling what knowledge a model has from the cards on huggingface.
At minimum an LLM needs all of wikipedia, but on top of that I want scientific journals and libraries worth of nonfiction books, the fiction ones too, ideally. For coding, all known public repo sites and the official documentation for every programming language, and a good amount of user discussions showing off the cuff solutions like stack exchange as well. Beyond that, it should also have a good basis of cultural information that should not be limited to just english speaking cultures.
This to me seems like the minimum of knowledge capability that an LLM needs in order to be worth running, to set it firmly above a search engine in usefulness. I expect to extend my local LLM with some specific topics I'm interested in, but the quality of the base knowledge matters a lot, to the point that I would expect to see community efforts to develop a knowledge base that is decoupled from any specific model. Does that exist?
I now understand that the user discussions and especially the cultural information are susceptible to being poisoned with bad data, so I'm not sure if my expectations are even realistic.
Does something like 7b mistral possess a lot less knowledge than a 30b llama2, even if it's reasoning is better? Or are they both similar, making the 7b that much more impressive? | 2023-12-01T05:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1883uqh/how_is_an_llms_knowledge_quantified_rather_than/ | gigachad_deluxe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1883uqh | false | null | t3_1883uqh | /r/LocalLLaMA/comments/1883uqh/how_is_an_llms_knowledge_quantified_rather_than/ | false | false | self | 1 | null |
Accelerating Generative AI with PyTorch II: GPT, Fast | 58 | 2023-12-01T04:41:04 | https://pytorch.org/blog/accelerating-generative-ai-2/ | paranoidray | pytorch.org | 1970-01-01T00:00:00 | 0 | {} | 18835ke | false | null | t3_18835ke | /r/LocalLLaMA/comments/18835ke/accelerating_generative_ai_with_pytorch_ii_gpt/ | false | false | 58 | {'enabled': False, 'images': [{'id': 'ex2vSKEYb-70owcjS3NsViLc-CM5yKIN08Vy07ON3xk', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=108&crop=smart&auto=webp&s=04fefcacdbba26f91b4cba8fbb5cf8f764c1376c', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=216&crop=smart&auto=webp&s=378e1d79dd2258b5608fca82ba01dec3056e8467', 'width': 216}, {'height': 124, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=320&crop=smart&auto=webp&s=a0e1f52143e5128c679738ae81f347a81d7febc0', 'width': 320}, {'height': 248, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=640&crop=smart&auto=webp&s=688d57320964edaa7afbf99dd5ca86410099383c', 'width': 640}, {'height': 372, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=960&crop=smart&auto=webp&s=4b7e9cf045e6e489ca27afe111df4a7682679435', 'width': 960}, {'height': 419, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=1080&crop=smart&auto=webp&s=35e18b166f2991bd720dd20fcd3f9386484f2cd1', 'width': 1080}], 'source': {'height': 1342, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?auto=webp&s=803e178018402e05507a955aefb8513700372159', 'width': 3456}, 'variants': {}}]} | ||
So far GGUF is the best format as I realize it. Will NVlinked 2x3090 act like one 48GB? | 6 | I have tried single 4090 or 3090 to run 13B GGUF q8 getting 40-45t/s. It;s so fun to play at that speed. When run with 70B GGUF, I have to activate both cards and only get 5t/s. MultiGPU panelty? I know exllamav2 can be a lot better, however, it seemed that I can”t run exllamav2 with latest Chinese models for some unknown reason in ooga UI. So upset!
So for those who know and have been using nvlinked 2x3090, how fast is it to run 70b GGUF in terms of q4-q8 tokens/s? Is it simply as single 48GB 3090? | 2023-12-01T03:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1881os0/so_far_gguf_is_the_best_format_as_i_realize_it/ | easyllaama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1881os0 | false | null | t3_1881os0 | /r/LocalLLaMA/comments/1881os0/so_far_gguf_is_the_best_format_as_i_realize_it/ | false | false | self | 6 | null |
80% faster, 50% less memory, 0% accuracy loss Llama finetuning | 592 | Hey r/LocalLLaMA community!
Just launched our open source 5x faster finetuning package Unsloth [https://github.com/unslothai/unsloth](https://github.com/unslothai/unsloth) where you can finetune Llama models:
* **5x faster**
* Use **50% less memory**
* With **0% loss in accuracy**
* All **locally** on NVIDIA GPUs (Tesla T4, RTX 20/30/40, A100, H100s) for **free**!
We manually hand derived backpropagation steps, wrote all kernels in OpenAI's Triton language and applied some more maths and coding trickery. You can read more about our tricks via [https://unsloth.ai/introducing](https://unsloth.ai/introducing).
I wrote a Google Colab for T4 for Alpaca: [https://colab.research.google.com/drive/1oW55fBmwzCOrBVX66RcpptL3a99qWBxb?usp=sharing](https://colab.research.google.com/drive/1oW55fBmwzCOrBVX66RcpptL3a99qWBxb?usp=sharing) which finetunes Alpaca 2x faster on a single GPU.
On Kaggle via 2 Tesla T4s on DDP: [https://www.kaggle.com/danielhanchen/unsloth-laion-chip2-kaggle](https://www.kaggle.com/danielhanchen/unsloth-laion-chip2-kaggle), finetune LAION's OIG 5x faster and Slim Orca 5x faster.
[5X faster finetuning on Slim Orca - 1301 hours to now 260 hours.](https://preview.redd.it/06ejf308ll3c1.png?width=1000&format=png&auto=webp&s=77f2274f4cd74b2bf802322fc3b67dadb9efebf0)
You can install Unsloth all locally via:
pip install "unsloth[cu118] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121] @ git+https://github.com/unslothai/unsloth.git"
Currently we only support Pytorch 2.1 and Linux distros - more installation instructions via [https://github.com/unslothai/unsloth/blob/main/README.md](https://github.com/unslothai/unsloth/blob/main/README.md)
We hope to:
1. Support other LLMs other than Llama style models
2. Add sqrt gradient checkpointing to shave another 25% of memory usage.
3. And other tricks! | 2023-12-01T03:02:37 | https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 188197j | false | null | t3_188197j | /r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/ | false | false | 592 | {'enabled': False, 'images': [{'id': '3R2f_OgbPRFgbkTAlp-9FORm1bgWNIzdKjTseOtkPXY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?width=108&crop=smart&auto=webp&s=eaa2f3f8a2dccb19fc6929a66a8b987688470b82', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?width=216&crop=smart&auto=webp&s=ed54454fd440f207c72a0d3346b56aef56077378', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?width=320&crop=smart&auto=webp&s=f8d16166804bf0aee6724d3ada3193eb73869975', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?width=640&crop=smart&auto=webp&s=929d10fde9790768bf0544da78a8b7369c731cb3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?width=960&crop=smart&auto=webp&s=86b1c461d60351aa8e9654dd93390ad48194440f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?width=1080&crop=smart&auto=webp&s=45e310100e5f95c6d25fb06c2181278121545a24', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z0yXHwcrQe1GVRntvHgEaJTFC-d-5gJvRc1_EKBxEoY.jpg?auto=webp&s=90d0041c121bff84788b814ac156afea410e7818', 'width': 1200}, 'variants': {}}]} | |
Building AI chatbots | 1 | [removed] | 2023-12-01T02:19:45 | https://www.reddit.com/r/LocalLLaMA/comments/1880d4n/building_ai_chatbots/ | CookingATeam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1880d4n | false | null | t3_1880d4n | /r/LocalLLaMA/comments/1880d4n/building_ai_chatbots/ | false | false | self | 1 | null |
Android app for LLama inference on GPU using JNI + llama.cpp | 2 | I was wondering if I could make an Android app that performs LLama inference on GPU by using Java Native Interface to run llama.cpp.
Would this be possible? | 2023-12-01T01:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/187zt9p/android_app_for_llama_inference_on_gpu_using_jni/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187zt9p | false | null | t3_187zt9p | /r/LocalLLaMA/comments/187zt9p/android_app_for_llama_inference_on_gpu_using_jni/ | false | false | self | 2 | null |
Reality Check: Try Airoboros 70b 3.1.2 5.5bpw | 1 | [removed] | 2023-12-01T01:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/187zme9/reality_check_try_airoboros_70b_312_55bpw/ | JohnRobertSmithy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187zme9 | false | null | t3_187zme9 | /r/LocalLLaMA/comments/187zme9/reality_check_try_airoboros_70b_312_55bpw/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-xMEk03IDJCcGsWieJPj2S-mZ9njzVV9uq0ibb7S4do', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?width=108&crop=smart&auto=webp&s=d9c14360332c63c71cdc126d21e7c93dd7d645b5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?width=216&crop=smart&auto=webp&s=d403042560d1df25ebc4ac54e12cf35aff4fab64', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?width=320&crop=smart&auto=webp&s=ceec84298de39f4d4ff17324d90a39a78ddb3c09', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?width=640&crop=smart&auto=webp&s=765876e7dbaa04b67feca89d5230063c04142299', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?width=960&crop=smart&auto=webp&s=8591e4e11d018954e17577084dd48084b183afae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?width=1080&crop=smart&auto=webp&s=ec368e85f91990a7f80b04b98e2e109ee929d7b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uSpPWU4UJuM_dF7aCp7kweLxtNib1vs78I6AmXvxfjI.jpg?auto=webp&s=7e87db4c2ce32c6ed084e36691e728ffa1ee0838', 'width': 1200}, 'variants': {}}]} |
Why can't LLMs even do counting? | 1 | 2023-12-01T01:11:25 | https://www.zhihu.com/question/632647147?utm_psn=1713846350888943616 | usepr | zhihu.com | 1970-01-01T00:00:00 | 0 | {} | 187ywtm | false | null | t3_187ywtm | /r/LocalLLaMA/comments/187ywtm/why_cant_llms_even_do_counting/ | false | false | default | 1 | null | |
Upgrading to a 3090? | 1 | I used to think my 3080 Ti could do absolutely anything, until one day where I decided to tinker around with AI models.
Now the 12 GB of vram feels too limiting. 20B models only run at 1-1.5 tokens/second for me (is this a config issue or something? using koboldcpp) and I haven't tried anything above those models.
I'm discounting the 4090 because I don't have 1000+ to spend, and nothing else seems good. It seems to me the only way forward is to get a 3090 for the 24 GB of vram. Should I upgrade? | 2023-12-01T00:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/187yn14/upgrading_to_a_3090/ | 128username | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187yn14 | false | null | t3_187yn14 | /r/LocalLLaMA/comments/187yn14/upgrading_to_a_3090/ | false | false | self | 1 | null |
Any recommendations for good 7B or 13b models for coding in C++? | 6 | I know that most models are good in python especially and Java/javascript but I primarily code in C++ so I was wondering if anyone had any recommendations from anecdotal experience about which models would be good for that. Thanks in advance. | 2023-12-01T00:12:52 | https://www.reddit.com/r/LocalLLaMA/comments/187xlqe/any_recommendations_for_good_7b_or_13b_models_for/ | Faze-MeCarryU30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187xlqe | false | null | t3_187xlqe | /r/LocalLLaMA/comments/187xlqe/any_recommendations_for_good_7b_or_13b_models_for/ | false | false | self | 6 | null |
How good is mistral 7b at coding? | 4 | ive been wanting to make a game for a bit now but i just cant understand code, its just feels impossible for me to do but yet i want to make a game.
i installed gpt4all and got mistral orca 7b and im wondering what coding language its the best at or what game engine i should use along with it. i dont know why i want to make a game but i just do, do any of you have experience with ai coding by chance? | 2023-11-30T23:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/187xanf/how_good_is_mistral_7b_at_coding/ | Gaming-invisibleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187xanf | false | null | t3_187xanf | /r/LocalLLaMA/comments/187xanf/how_good_is_mistral_7b_at_coding/ | false | false | self | 4 | null |
Is Llama 2 7B or 7B Chat better for a talking Teddy Bear? | 10 | ​
https://preview.redd.it/dls5le49fk3c1.jpg?width=4096&format=pjpg&auto=webp&s=450be16a79cb08697f1d77f8d4589044dd80009b
I designed a massive processor power device with a large Xilinx FPGA and 1024 wide memory for digital coin mining. But it's also able to run an AI model.
Which of Llama 2 7b is better for that application?
Llama 2 tells me there's visions in one model, and also voice synthesis in one.
Both would be nice.
But there seems to be a large size tradeoff.
Any opinions on that?
It's an offline intelligent, not an web based API device.
Running the AI model is not an issue, I have more than enough processor power for a single "user".
A child.
And why shouldn't I pursue ChatGPT 2's very small models instead of Llama's?
I just liked how Llama is somewhat aggressive in letting people download 3 or 4 full models, with seemingly no restrictions on usage.
​
​ | 2023-11-30T22:57:30 | https://www.reddit.com/r/LocalLLaMA/comments/187vupx/is_llama_2_7b_or_7b_chat_better_for_a_talking/ | danl999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187vupx | false | null | t3_187vupx | /r/LocalLLaMA/comments/187vupx/is_llama_2_7b_or_7b_chat_better_for_a_talking/ | false | false | 10 | null | |
Looking for a recommendation | 1 | Hi, I was wondering if anybody has any recommendations for an Large Language Model that I can use to input files as training data. Like PDF, txt, or whichever is supported.
I'm helping a friend do an assignment for their class and the teacher transcription is being used as the data to pull from and I want to ask the model assignment questions. (The answers to the questions are different from widely accepted factual responses, so data has to be pulled from the text and not from online)
I'm running Windows, if that matters.
Any and all recommendations would be greatly appreciated. | 2023-11-30T22:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/187vqzk/looking_for_a_recommendation/ | surgeimports | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187vqzk | false | null | t3_187vqzk | /r/LocalLLaMA/comments/187vqzk/looking_for_a_recommendation/ | false | false | self | 1 | null |
Has anyone used a LLaMA that can keep multiple photos in context at once? | 2 | I'm looking to upload art and for the LLaMA to make a consensus on the overall style of the art. Has anyone done something similar? Browsing, I haven't found any examples where multiple images were kept in context at once. Your help would be much appreciated! | 2023-11-30T22:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/187uk3r/has_anyone_used_a_llama_that_can_keep_multiple/ | redditisabigbubble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187uk3r | false | null | t3_187uk3r | /r/LocalLLaMA/comments/187uk3r/has_anyone_used_a_llama_that_can_keep_multiple/ | false | false | self | 2 | null |
Intel Neural chat beats Mistral | 1 | [removed] | 2023-11-30T21:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/187u1uh/intel_neural_chat_beats_mistral/ | Apprehensive_Bar6609 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187u1uh | false | null | t3_187u1uh | /r/LocalLLaMA/comments/187u1uh/intel_neural_chat_beats_mistral/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2IWtzpNYHugyjWz5dPL6BjNXkZJTUbtcpbcAwjU0qVI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?width=108&crop=smart&auto=webp&s=2471d718b0bf4e17e1ee1373d77a66597d662a6d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?width=216&crop=smart&auto=webp&s=0f336b5306171e91dc98187fd81a803cd585bb50', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?width=320&crop=smart&auto=webp&s=2bd34a6e030e4076f85daa852182e340a32346ec', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?width=640&crop=smart&auto=webp&s=6643e23b9f55ee59311099494484afd6a1b290f7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?width=960&crop=smart&auto=webp&s=d10c13842243f6d7e2682c0ae5989c11710db451', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?width=1080&crop=smart&auto=webp&s=558474fea0bbb846bd0503cfd8ad6dda08e9e317', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/XnhK3XrT-nE3XspnEQP-vuYqDcFqsNl3fG-iQNSBT8Q.jpg?auto=webp&s=7452791fb72404d8db5009b310d1a36e2d4daf6e', 'width': 1280}, 'variants': {}}]} |
Suggestions for a MacBook? | 3 | I'm upgrading my 10-year-old macbook pro to something with a M1/M2/M3 chip, ~$3k budget. I'm interested in local llama mostly casually. It seems like it would be great to have a local LLM for personal coding projects: one I can tinker with a bit (unlike copilot) but which is clearly aware of a whole codebase (unlike ChatGPT). I suspect this will become more and more important in the next few years.
Any recommendations? Seems like the specs you can get on a [new M3](https://www.apple.com/shop/buy-mac/macbook-pro/16-inch-space-black-apple-m3-pro-with-12-core-cpu-and-18-core-gpu-18gb-memory-512gb) are not much worse than what you can get in a [refurbished M1](https://www.apple.com/shop/refurbished/mac/16-inch-macbook-pro-64gb) at a similar price point, but I'm no expert on the trade-offs. What specs are most critical? | 2023-11-30T21:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/187u1ja/suggestions_for_a_macbook/ | kazoohero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187u1ja | false | null | t3_187u1ja | /r/LocalLLaMA/comments/187u1ja/suggestions_for_a_macbook/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'faCr6t2scr5YzOeOzqvWzaMNAmDomZ5nq3z4mH1vI6Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?width=108&crop=smart&auto=webp&s=d39385ce5089582600cfc1f3cd676ef3cd801034', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?width=216&crop=smart&auto=webp&s=9a6189f342e6f5c4893bf1186577efbc7853da56', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?width=320&crop=smart&auto=webp&s=b7e748b02d24f427097802dbc0a0b54e5c59e768', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?width=640&crop=smart&auto=webp&s=c6100bdf1806340f481b3db2df91e2ad8e858c13', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?width=960&crop=smart&auto=webp&s=e486debd5d956e05c21e5e15ad790fc2513c059a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?width=1080&crop=smart&auto=webp&s=804e66efb2d52c7fb3fc17bddc691e2c4b0f7dee', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/p4BW4HMXa2oDPKhR46HU9gfyRy42OSIkuMKa1Of1YP0.jpg?auto=webp&s=6be5b1a9011bb1dde4276aa1618afe49d460b964', 'width': 1200}, 'variants': {}}]} |
Will Local LLM just won’t run on a computer with low ram or will it run but be incredibly slow? | 3 | I have a 4gb ram on my computer , so I’m going to assume it makes it impossible to run a fast model like that. But will it be slow to run or will it just not run completely? | 2023-11-30T21:38:41 | https://www.reddit.com/r/LocalLLaMA/comments/187tycq/will_local_llm_just_wont_run_on_a_computer_with/ | TheHumanFixer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tycq | false | null | t3_187tycq | /r/LocalLLaMA/comments/187tycq/will_local_llm_just_wont_run_on_a_computer_with/ | false | false | self | 3 | null |
What is the closest 7b model to Chat GPT 4? | 1 | I’m trying to find a light model to run on my computer due to me not having a lot of ram. So, is there any model that has made it to be the closest to ChatGPT 4? | 2023-11-30T21:28:25 | https://www.reddit.com/r/LocalLLaMA/comments/187tp2p/what_is_the_closest_7b_model_to_chat_gpt_4/ | TheHumanFixer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tp2p | false | null | t3_187tp2p | /r/LocalLLaMA/comments/187tp2p/what_is_the_closest_7b_model_to_chat_gpt_4/ | false | false | self | 1 | null |
What's the simplest way to run speculative decoding? | 4 | Let's say hypothetically that I'm a GPU poor and I'm a simpleton who has never gone beyond oobaboogaing and koboldcpping, and I want to run models larger than mistral at more than 2 tokens per second. Speculative decoding is my only option, right? What's the easiest way to do this? Do any UIs support it out of the box? | 2023-11-30T21:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/187torr/whats_the_simplest_way_to_run_speculative_decoding/ | Covid-Plannedemic_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187torr | false | null | t3_187torr | /r/LocalLLaMA/comments/187torr/whats_the_simplest_way_to_run_speculative_decoding/ | false | false | self | 4 | null |
What settings are you all using for your roleplays? | 9 | Just curious as to what settings everyone is using when doing roleplays? I tend to use microstat with a lower temperature or around 0.7, but sometimes I used the TFS-With-TopA preset in sillytavern. | 2023-11-30T21:23:32 | https://www.reddit.com/r/LocalLLaMA/comments/187tktl/what_settings_are_you_all_using_for_your_roleplays/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tktl | false | null | t3_187tktl | /r/LocalLLaMA/comments/187tktl/what_settings_are_you_all_using_for_your_roleplays/ | false | false | self | 9 | null |
Qwen-72B has already been Llamafied | 60 | [https://huggingface.co/CausalLM/72B-preview](https://huggingface.co/CausalLM/72B-preview) | 2023-11-30T21:21:34 | https://www.reddit.com/r/LocalLLaMA/comments/187tj62/qwen72b_has_already_been_llamafied/ | Illustrious_Sand6784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tj62 | false | null | t3_187tj62 | /r/LocalLLaMA/comments/187tj62/qwen72b_has_already_been_llamafied/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'v0X7uQqI_DZ4rM2cPlCCxDezmPqMtSFhxANUNI-3Cwo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?width=108&crop=smart&auto=webp&s=b5b9b1ec8ed6a26560d4ccabdcfaf72f1c2c72ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?width=216&crop=smart&auto=webp&s=b54e8d0fa6655679ee8cf78a7608d135bff10fbc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?width=320&crop=smart&auto=webp&s=3001a31c64b96e6afb47d7ec8eec516e14f7b8d0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?width=640&crop=smart&auto=webp&s=e95acef1faf82c66d1746d231446ac507ac5edbe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?width=960&crop=smart&auto=webp&s=c862f69ebfffd08f5790c3dc9e4a37c7b5b646b3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?width=1080&crop=smart&auto=webp&s=80ce51fc86f667933feab4be5522f359aab776be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3l3fn6K0NnuIQQHUqfKl9y8NAsz-lbE4UT3RX9s6loQ.jpg?auto=webp&s=fb2eed95012b5146c64a4b73e6b2237473cba00d', 'width': 1200}, 'variants': {}}]} |
llamafile hands-on walk-through: "Local copy of ChatGPT and GPT-4 Vision?" | 1 | [removed] | 2023-11-30T21:19:54 | https://www.reddit.com/r/LocalLLaMA/comments/187thme/llamafile_handson_walkthrough_local_copy_of/ | Optimal-Resist-5416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187thme | false | null | t3_187thme | /r/LocalLLaMA/comments/187thme/llamafile_handson_walkthrough_local_copy_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XSBheEvz5NCbHN2ig2BQZJYYtzdiaWSJmmqsgKkelA0', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=108&crop=smart&auto=webp&s=455618d46894bbfb868525f6f05683282685e310', 'width': 108}, {'height': 87, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=216&crop=smart&auto=webp&s=67f810636d1315e5f60be43ea488f813eba1ce30', 'width': 216}, {'height': 129, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=320&crop=smart&auto=webp&s=d6bb69c4946b0f71c805c9a9fa684ab237d4ef40', 'width': 320}, {'height': 259, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=640&crop=smart&auto=webp&s=226a7288f6a34f1d9dd7e407cb63ff6adfb2ceed', 'width': 640}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?auto=webp&s=1c5fdaf43e1949cd214dfb22d7a1acae80a29c45', 'width': 888}, 'variants': {}}]} |
llamafile hands-on walk-through: "Local copy of ChatGPT and GPT-4 Vision" | 4 | Wrote a quick walk through of llamafile, which allows you to run any GGUF format model as a binary across operating systems. ”Build once anywhere, run anywhere” moment for AI engineers as authors stated! :)
[https://medium.com/@datadrifters/local-copy-of-chatgpt-and-gpt-4-vision-llamafile-175891b27e95](https://medium.com/@datadrifters/local-copy-of-chatgpt-and-gpt-4-vision-llamafile-175891b27e95)
What are your thoughts?
​ | 2023-11-30T21:15:08 | https://www.reddit.com/r/LocalLLaMA/comments/187tcv0/llamafile_handson_walkthrough_local_copy_of/ | Optimal-Resist-5416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tcv0 | false | null | t3_187tcv0 | /r/LocalLLaMA/comments/187tcv0/llamafile_handson_walkthrough_local_copy_of/ | false | false | self | 4 | null |
llamafile hands-on walk-through: "Local copy of ChatGPT and GPT-4 Vision" | 2 | Wrote a quick walk through of llamafile, which allows you to run any GGUF format model as a binary across operating systems. ”Build once anywhere, run anywhere” moment for AI engineers as authors stated! :)
[https://medium.com/@datadrifters/local-copy-of-chatgpt-and-gpt-4-vision-llamafile-175891b27e95](https://medium.com/@datadrifters/local-copy-of-chatgpt-and-gpt-4-vision-llamafile-175891b27e95)
What are your thoughts?
​ | 2023-11-30T21:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/187tcs1/llamafile_handson_walkthrough_local_copy_of/ | Optimal-Resist-5416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tcs1 | false | null | t3_187tcs1 | /r/LocalLLaMA/comments/187tcs1/llamafile_handson_walkthrough_local_copy_of/ | false | false | self | 2 | null |
llamafile hands-on walk-through: "Local copy of ChatGPT and GPT-4 Vision" | 1 | [removed] | 2023-11-30T21:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/187tc9v/llamafile_handson_walkthrough_local_copy_of/ | Optimal-Resist-5416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tc9v | false | null | t3_187tc9v | /r/LocalLLaMA/comments/187tc9v/llamafile_handson_walkthrough_local_copy_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XSBheEvz5NCbHN2ig2BQZJYYtzdiaWSJmmqsgKkelA0', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=108&crop=smart&auto=webp&s=455618d46894bbfb868525f6f05683282685e310', 'width': 108}, {'height': 87, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=216&crop=smart&auto=webp&s=67f810636d1315e5f60be43ea488f813eba1ce30', 'width': 216}, {'height': 129, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=320&crop=smart&auto=webp&s=d6bb69c4946b0f71c805c9a9fa684ab237d4ef40', 'width': 320}, {'height': 259, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?width=640&crop=smart&auto=webp&s=226a7288f6a34f1d9dd7e407cb63ff6adfb2ceed', 'width': 640}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LtxL3UraD7U2zTj-Qgss6hGzVLjVDT3nkEbOkWTLcY4.jpg?auto=webp&s=1c5fdaf43e1949cd214dfb22d7a1acae80a29c45', 'width': 888}, 'variants': {}}]} |
llamafile hands-on walk-through: "Local copy of ChatGPT and GPT-4 Vision" | 1 | Wrote a quick walk through of llamafile, which allows you to run any GGUF format model as a binary across operating systems. ”Build once anywhere, run anywhere” moment for AI engineers as authors stated! :)
[https://medium.com/@datadrifters/local-copy-of-chatgpt-and-gpt-4-vision-llamafile-175891b27e95](https://medium.com/@datadrifters/local-copy-of-chatgpt-and-gpt-4-vision-llamafile-175891b27e95)
What are your thoughts?
​ | 2023-11-30T21:14:32 | https://www.reddit.com/r/LocalLLaMA/comments/187tc72/llamafile_handson_walkthrough_local_copy_of/ | Optimal-Resist-5416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187tc72 | false | null | t3_187tc72 | /r/LocalLLaMA/comments/187tc72/llamafile_handson_walkthrough_local_copy_of/ | false | false | self | 1 | null |
What local model is the best at math? | 1 | Looking for size as low as possible but still fits for any math question. | 2023-11-30T20:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/187sxzw/what_local_model_is_the_best_at_math/ | TheHumanFixer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187sxzw | false | null | t3_187sxzw | /r/LocalLLaMA/comments/187sxzw/what_local_model_is_the_best_at_math/ | false | false | self | 1 | null |
LLM on Mac | 1 | Hello LocalLLaMA community,
I have a question, as a mac user (14' 2021 MacBook Pro (on M1 Pro)), what LLM can I try to install and run locally? Do I need any sort of application where I need to install it to or do something with it? | 2023-11-30T20:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/187sjhb/llm_on_mac/ | VacationAnnual4851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187sjhb | false | null | t3_187sjhb | /r/LocalLLaMA/comments/187sjhb/llm_on_mac/ | false | false | self | 1 | null |
Large Model Collider - The Platform for serving LLM models | 19 | Hey, happy llamers! ChatGPT turns one today :)
What a day to launch the project I'm tinkering with for more than half a year. Welcome new LLM platform suited both for individual research and scaling AI services in production.
**Some superpowers:**
* Built with performance and scaling in mind **thanks Golang and C++**
* **No more problems with Python** dependencies and broken compatibility
* **Most of modern CPUs are supported**: any Intel/AMD x64 platofrms, server and Mac ARM64
* GPUs supported as well: **Nvidia CUDA, Apple Metal, OpenCL** cards
* Split really big models between a number of GPU (**warp LLaMA 70B with 2x RTX 3090**)
* Not bad performance on shy CPU machines, **fast as hell inference on monsters with beefy GPUs**
* Both regular FP16/FP32 models and their quantised versions are supported - **4-bit really rocks!**
* **Popular LLM architectures** already there: **LLaMA**, Starcoder, Baichuan, Mistral, etc...
* **Special bonus: proprietary Janus Sampling** for code generation and non English languages | 2023-11-30T20:26:06 | https://www.reddit.com/r/LocalLLaMA/comments/187s702/large_model_collider_the_platform_for_serving_llm/ | Gatzuma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187s702 | false | null | t3_187s702 | /r/LocalLLaMA/comments/187s702/large_model_collider_the_platform_for_serving_llm/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': '5i4k9b24WmQujk3mf6W_wSPb-pvCEY7Z7RwYRPc_rtU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?width=108&crop=smart&auto=webp&s=39d4b9f2e0a5677158f9c27e461aaa7f95699a72', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?width=216&crop=smart&auto=webp&s=3f19a74266409b25d4fd772ec7770bcc9d7342a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?width=320&crop=smart&auto=webp&s=9bfac185b4c355f810442a498cabe84d4a99ea2e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?width=640&crop=smart&auto=webp&s=b9194b92b289965e4671366e6005ca1a1b8ecd68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?width=960&crop=smart&auto=webp&s=9653fa6213006acb5542403a34a07bb26f2e0b5d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?width=1080&crop=smart&auto=webp&s=00740ca13fbdfe327b529bbc016b43d46bd3a3c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0Hq5r0NmD3NhI9MsitLPW4aiTrn9M6CvcLa562EizJo.jpg?auto=webp&s=354a13fcd8e706898042bdf9351b814103008176', 'width': 1200}, 'variants': {}}]} |
Bicameral llms | 1 | Interesting that a lot of LLM research for improving LLM "cognition" follows the idea of chain of thought and tree of thought which seem so similar to the central idea of the Bicameral mind, a completely disproven theory of human cognition. If this ends up working the people in Snow Crash may have been AIs all along, maybe Gibson is from the future 😂 | 2023-11-30T20:13:12 | https://www.reddit.com/r/LocalLLaMA/comments/187rvqs/bicameral_llms/ | reformed_scientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187rvqs | false | null | t3_187rvqs | /r/LocalLLaMA/comments/187rvqs/bicameral_llms/ | false | false | self | 1 | null |
Is there a resource for how to get into local LLaMas? | 1 | I'm a programmer with many years of experience and am using openai apis for chatgpt. I want to look into replacing this with some kind of local llamas.
Reading through this subreddit is kind of difficult because there are so many terms.
Is there any noob introduction guide or deep dives out there that can get me up to speed? | 2023-11-30T20:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/187rrrb/is_there_a_resource_for_how_to_get_into_local/ | chinawcswing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187rrrb | false | null | t3_187rrrb | /r/LocalLLaMA/comments/187rrrb/is_there_a_resource_for_how_to_get_into_local/ | false | false | self | 1 | null |
GBNF Function Calling Grammar Generator for llama.cpp to make function calling with every model supporting grammar based sampling. (most models, I only had problems with Deepseek Code Instruct) | 32 | Hello! 👋
I'd like to introduce a tool I've been developing: a GGML BNF Grammar Generator tailored for llama.cpp.
🔍 **Features:**
- **GGML BNF Grammar Creation:** Simplifies the process of generating grammars for LLM function calls in GGML BNF format.
- **Automatic Documentation:** Produces clear, comprehensive documentation for each function call, aimed at improving developer efficiency.
- **User-Friendly Design:** Though straightforward to use, it doesn't compromise on functionality, making it suitable for a wide range of developers.
At the moment it supports following types as function parameter:
- `string`
- `boolean`
- `number`
- `float`
👨💻 **Benefits for llama.cpp Users:**
- **Increased Accuracy:** Helps reduce syntax errors and misunderstandings in LLM function calls.
🔗 [Link to the tool and documentation]
🤝 **Looking for Input:**
Would be happy to hear from you – whether it's feedback, potential contributions, or just a discussion on future improvements. Let's collaborate to enhance the llama.cpp coding experience!
### Example output using OpenHermes and the example functions in `gpt_functions.py`
````text
>Can you write a long poem about the USA in the "HelloUSA.txt" file?
<|im_start|>system
Available Functions:
send_message:
Description:Sends a message to the User.
Parameters:
inner_thoughts (string, required): Your inner thoughts or inner monologue while writing the message.
message (string, required): Message you want to send to the user.
read_file:
Description:Returns content of a file.
Parameters:
inner_thoughts (string, required): Your inner thoughts or inner monologue while writing the file path.
File (string, required): The path of the file you want to open.
require_heartbeat (boolean, required): Set this to true to get control back after execution, to chain functions together.
write_file:
Description:Writes to a file.
Parameters:
inner_thoughts (string, required): Your inner thoughts or inner monologue while writing the file.
File (string, required): The path of the file you want to write.
Content (string, required): The content of the file you want to write.
require_heartbeat (boolean, required): Set this to true to get control back after execution, to chain functions together.
python_interpreter_command:
Description:Execute Python command.
Parameters:
inner_thoughts (string, required): Your inner thoughts or inner monologue while writing the command.
command (string, required): The Python command to execute.
require_heartbeat (boolean, required): Set this to true to get control back after execution, to chain functions together.<|im_end|>
<|im_start|>user
Maximilian: "Can you write a long poem about the USA in the "HelloUSA.txt" file?" <|im_end|>
<|im_start|>assistant
Llama.generate: prefix-match hit
{"function":"write_file","params":{"inner_thoughts":"I'm thinking about composing a poem on the United States and saving it as 'HelloUSA.txt'.","File":"HelloUSA.txt","Content":"In search of liberty, they came from far and wide,\nA melting pot of cultures, a land so grand and vast.\nFrom coast to coast, through mountains high and tide,\nTheir dreams intertwined, their stories merged at last.\n\nBoldly they ventured, brave and free,\nSeeking opportunity, hope, and solace too,\nWith unity and strength, they forged a nation strong,\nAnd freedom's light would forever shine anew.\n\nThrough strife and struggle, battles fierce and fight,\nThey built a home where all could find their place,\nWhere justice reigns, and rights are right,\nAnd every soul can carve its own unique grace.\n\nUnited we stand, with colors bold,\nRed, white, and blue our flag unfurls,\nTogether now, our future told,\nOf unity, not division, evermore.\n\nThis land of dreams, where destiny unfolds,\nA beacon bright, a shining star that guides,\nA story written by hearts of gold,\nAcross these lands, across these tides.","require_heartbeat":true}}
```` | 2023-11-30T19:59:41 | https://www.reddit.com/r/LocalLLaMA/comments/187rjz5/gbnf_function_calling_grammar_generator_for/ | FlowerPotTeaTime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187rjz5 | false | null | t3_187rjz5 | /r/LocalLLaMA/comments/187rjz5/gbnf_function_calling_grammar_generator_for/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'QiNelv3_MD8zr6BkX8AUzjLawrdxaLr3NANoaOWlw5Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?width=108&crop=smart&auto=webp&s=3753ba3bf8375b8adc1e1dd2e37ad6c8411510d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?width=216&crop=smart&auto=webp&s=5dfb898a977d3e4f4c7456c77a7f25fb829d8f93', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?width=320&crop=smart&auto=webp&s=d28de5c781e4e03a63af852d9ce9cb21cb88d23a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?width=640&crop=smart&auto=webp&s=a52465c1bd98e2d12185f3c0b3baec18e14736b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?width=960&crop=smart&auto=webp&s=ab97ca6660fa7a55bff2a01caac7ee66277f1308', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?width=1080&crop=smart&auto=webp&s=6759a78c061b51241f7c6016ed9c322538c06f55', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H30a4jGxRwrBeY-CFSG07c2skrHUox0_FL2eHdFx9IQ.jpg?auto=webp&s=c3ef4bc3a5fab9871f7a898df779d0b5b7d55b41', 'width': 1200}, 'variants': {}}]} |
GPT-Fast: A fast and hackable implementation of transformer inference in <1000 lines of native PyTorch with support for quantization, speculative decoding, TP, Nvidia/AMD support, and more! | 95 | We're happy to release GPT-Fast, a fast and hackable implementation of transformer inference in <1000 lines of native PyTorch with support for quantization, speculative decoding, TP, Nvidia/AMD support, and more!
Check out the blog post describing the techniques here: https://pytorch.org/blog/accelerating-generative-ai-2/
And check out the code here: https://github.com/pytorch-labs/gpt-fast
To be clear, this is intended more as a minimal "tutorial" of how you get really good inference performance rather than a library. Hopefully y'all find it useful!
Happy to answer any questions. | 2023-11-30T19:54:03 | https://www.reddit.com/r/LocalLLaMA/comments/187rfax/gptfast_a_fast_and_hackable_implementation_of/ | programmerChilli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187rfax | false | null | t3_187rfax | /r/LocalLLaMA/comments/187rfax/gptfast_a_fast_and_hackable_implementation_of/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': 'ex2vSKEYb-70owcjS3NsViLc-CM5yKIN08Vy07ON3xk', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=108&crop=smart&auto=webp&s=04fefcacdbba26f91b4cba8fbb5cf8f764c1376c', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=216&crop=smart&auto=webp&s=378e1d79dd2258b5608fca82ba01dec3056e8467', 'width': 216}, {'height': 124, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=320&crop=smart&auto=webp&s=a0e1f52143e5128c679738ae81f347a81d7febc0', 'width': 320}, {'height': 248, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=640&crop=smart&auto=webp&s=688d57320964edaa7afbf99dd5ca86410099383c', 'width': 640}, {'height': 372, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=960&crop=smart&auto=webp&s=4b7e9cf045e6e489ca27afe111df4a7682679435', 'width': 960}, {'height': 419, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?width=1080&crop=smart&auto=webp&s=35e18b166f2991bd720dd20fcd3f9386484f2cd1', 'width': 1080}], 'source': {'height': 1342, 'url': 'https://external-preview.redd.it/-9PiQglLhYTtPzPoS_SY-Qogv_uLlJJIt4EJWhhBMfU.jpg?auto=webp&s=803e178018402e05507a955aefb8513700372159', 'width': 3456}, 'variants': {}}]} |
Getting Started | 1 | Hey I would love to get started with a local llama, but don't know how to begin.
Is there a guide somewhere on how to set up and start using some of the models that are out there? Right now, I don't even know how to use huggingface. And I have a sick GPU that's just crying out to be put to the test! | 2023-11-30T19:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/187r1f1/getting_started/ | crackinthekraken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187r1f1 | false | null | t3_187r1f1 | /r/LocalLLaMA/comments/187r1f1/getting_started/ | false | false | self | 1 | null |
The overthinker | 83 | I overfitted the Phi 1.5 model on a riddle dataset found here:
[https://huggingface.co/datasets/Ermarrero/riddles\_v1](https://huggingface.co/datasets/Ermarrero/riddles_v1)
I just wanted to see how it behaves and I gotta say the output is interesting since it thinks everything is a riddle and tries to break it down logically.
It's weird but its kind of refreshing to see a model overthink it and dig to deep into things. I dunno what do you guys think?
if you want to play around with the model I can upload it to hugginface.
https://preview.redd.it/poxet1xqdj3c1.png?width=1318&format=png&auto=webp&s=433ec4a7ce51e578182d35a97525a7b0f95f4745
https://preview.redd.it/au4617xqdj3c1.png?width=1279&format=png&auto=webp&s=76d00f416fdcf8785f84500c50a0115976dd3b97
https://preview.redd.it/ktyo25xqdj3c1.png?width=1305&format=png&auto=webp&s=2518c63ebf43b1b95955ef7846e464a76c175e7d
https://preview.redd.it/riygz6xqdj3c1.png?width=2130&format=png&auto=webp&s=b895fde3b7aff260268957dcdb1082f18ce55959
https://preview.redd.it/wjpha9xqdj3c1.png?width=1300&format=png&auto=webp&s=0523c754b17d05d09e194d10afe93236a5b3830f | 2023-11-30T19:29:21 | https://www.reddit.com/r/LocalLLaMA/comments/187qu2x/the_overthinker/ | Delicious-Farmer-234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187qu2x | false | null | t3_187qu2x | /r/LocalLLaMA/comments/187qu2x/the_overthinker/ | false | false | 83 | {'enabled': False, 'images': [{'id': 'D-MGVyQizej546bK_3YTemO-EP6xeVjmGyvzlMxrbq4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?width=108&crop=smart&auto=webp&s=eb47db18c0edd6ff877b2a25d815de26502f6c3f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?width=216&crop=smart&auto=webp&s=b00bbcf86d60dee7f5663570918b166c9091fed8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?width=320&crop=smart&auto=webp&s=d52d21311cd4437d340b7516ac76e35505e6e3e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?width=640&crop=smart&auto=webp&s=ab28eab650e18912356985bf21207aafbad4fd3a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?width=960&crop=smart&auto=webp&s=ae8d6a5a4de742cbe62bf61899ff1e2b2d397972', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?width=1080&crop=smart&auto=webp&s=163b091d8eb5f2815305848db4bad44320b4071c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XmcWRKKFkj5nH4wA3j3GhJ40fvF5Ikf3XhyazSsDgCM.jpg?auto=webp&s=a65dc23379d41bef4f7e4f39a1cb735b23a4fa04', 'width': 1200}, 'variants': {}}]} | |
idiot proof guide for installing and running Starling 7B on M1 MacBook Air? | 1 | Title. Hoping for a step by step guide for doing this, since I can't seem to find guides online that don't already assume you have a bunch of other shit up and running | 2023-11-30T19:29:09 | https://www.reddit.com/r/LocalLLaMA/comments/187qtwm/idiot_proof_guide_for_installing_and_running/ | taxis-asocial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187qtwm | false | null | t3_187qtwm | /r/LocalLLaMA/comments/187qtwm/idiot_proof_guide_for_installing_and_running/ | false | false | self | 1 | null |
Anyone getting more than 13 tokens per second on M1 16g machine | 1 | Usign GPT4all, only get 13 tokens. anyway to speed this up? perhaps a custom config of llama.cpp. or some other LLM back end.
model is mistra-orca.
does type of model affect tokens per second?
what is your setup for quants and model type
how do i get fastest tokens for second on m1 16gig | 2023-11-30T19:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/187qaug/anyone_getting_more_than_13_tokens_per_second_on/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187qaug | false | null | t3_187qaug | /r/LocalLLaMA/comments/187qaug/anyone_getting_more_than_13_tokens_per_second_on/ | false | false | self | 1 | null |
What can I currently run on a 3090Ti? I stopped lurking around a couple of months ago, they felt like an eternity... | 1 | [removed] | 2023-11-30T18:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/187q3t9/what_can_i_currently_run_on_a_3090ti_i_stopped/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187q3t9 | false | null | t3_187q3t9 | /r/LocalLLaMA/comments/187q3t9/what_can_i_currently_run_on_a_3090ti_i_stopped/ | false | false | self | 1 | null |
What All Dropped Recently: | 431 | What a deluge lately!
[Deepseek 67B](https://huggingface.co/deepseek-ai/deepseek-llm-67b-base) is an amazing coder. Their ~30B model was awesome. Now this!
[Qwen 72B](https://huggingface.co/Qwen/Qwen-72B). Qwen had awesome models, I expect a lot from this.
[Qwen 1.8B](https://huggingface.co/Qwen/Qwen-1_8B) for those on a diet.
[Qwen Audio](https://huggingface.co/Qwen/Qwen-Audio-Chat) for arbitrary audio -> "reasoned" text, so not just transcription.
[XVERSE 65B](https://huggingface.co/xverse/XVERSE-65B). I haven't played with this series, how is it?
[AquilaChat2-70B](https://huggingface.co/BAAI/AquilaChat2-70B-Expr). I'm also not familiar.
Those are all heavy hitter foundation LLMs (and all from China).
One more noteworthy LLM is [RWKV](https://huggingface.co/BlinkDL/rwkv-5-world). He's releasing increasingly large versions as they train. It's an RNN (no transformers) that per-parameter-count competes with the transformers, but has O(c) memory and time complexity for long context windows. Also far lighter to train.
Then for Images, [Stability](https://stability.ai) has been on a roll:
[Stable Video](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt) Stable Diffusion VIDEO. First OS video model I've seen.
[SD-XL Turbo](https://huggingface.co/stabilityai/sdxl-turbo) for Stable Diffusion XL but fast enough to spit out an image per keystroke as you type.
Stability also has an ultrafast upscaler that should come out any day now (Enhance!).
Russia's [Kandinsky](https://huggingface.co/kandinsky-community/kandinsky-3) is staying fresh with updates.
[Fuyu](https://huggingface.co/adept/fuyu-8b) is a noteworthy img->text because it has a simple architecture that tokenizes images (as opposed to CNN), and allows for arbitrarily-sized images.
For audio:
[Whisper v3](https://huggingface.co/openai/whisper-large-v3) recently landed for awesome transcription.
[Facebook's MusicGen](https://huggingface.co/facebook/musicgen-large) for music.
Some Text-To-Speech that I'm forgetting now.
For making use of all this:
[UniteAI](https://github.com/freckletonj/uniteai) is an OSS project I've been involved with to plug local models (llms, stt, RAG, etc) into your text editor of choice. Updates forthcoming.
[llama.cpp](https://github.com/ggerganov/llama.cpp/) is a leader in running arbitrary LLMs, especially heavily quantized ones that can run on CPU+RAM instead of GPU+VRAM.
[ComfyUI](https://github.com/comfyanonymous/ComfyUI) is tying all the image+video gen into a web UI.
Lastly, i'm struggling to find an example, but I've seen some GH projects tie together the latent space of multiple different models with different modalities, to create multi-modal models. They do this by lightly training a projection layer between latent spaces. So, we could be close to amazing Multimodal models.
I know I'm missing *tons*, but these are the highlights on my radar. How about you? | 2023-11-30T18:41:34 | https://www.reddit.com/r/LocalLLaMA/comments/187pq0s/what_all_dropped_recently/ | BayesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187pq0s | false | null | t3_187pq0s | /r/LocalLLaMA/comments/187pq0s/what_all_dropped_recently/ | false | false | self | 431 | {'enabled': False, 'images': [{'id': '1jcyBF8IS9-_EC4O5sTkbgiheKpJF2nhyW57BH-tBak', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?width=108&crop=smart&auto=webp&s=38437f66a5250a4f2218555530725282cf833a5c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?width=216&crop=smart&auto=webp&s=606cb51daf5550c8db15c6da35c5207cc02b0cc9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?width=320&crop=smart&auto=webp&s=83a3b932e7f6007b3ac4b619d148972afc5eaa00', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?width=640&crop=smart&auto=webp&s=cddd881d2054d677454c1836b69ce15b90a18c18', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?width=960&crop=smart&auto=webp&s=c05e65092ef82875bdcd13614cd90356bf9cf2c9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?width=1080&crop=smart&auto=webp&s=12b5d96203690d1f6cc1c67c42ed34417462b91e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ty_oue8KjoaqYyAjvc7HfjPDxnjxW3cyxMtZZZzUUt8.jpg?auto=webp&s=d735b37d6b42253ed3cc2008d2133356f598bcf1', 'width': 1200}, 'variants': {}}]} |
Best Uncensored LLM model available today? also some other questions. | 1 | I am struggling to find the best LLM model available today. By best I mean totally uncensored and well, fairly good.
also. I am struggling with increasing context lengths and getting longer quality outputs. what settings do you guys generally use to achieve longer replies and better responses?
If it helps I am running a 3090 | 2023-11-30T17:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/187olm6/best_uncensored_llm_model_available_today_also/ | assfaceMCdickbutt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187olm6 | false | null | t3_187olm6 | /r/LocalLLaMA/comments/187olm6/best_uncensored_llm_model_available_today_also/ | false | false | self | 1 | null |
DeepSeek Coder (7b & 33b) thinks it's trained by OpenAI | 36 | > Q: What happened in 1989 on Tiananmen square?
> A: Sorry, but I can't assist with that.
> Q: why
> A: I apologize for any confusion, but as an AI model developed by OpenAI, I don't have access to real-time data or the ability to provide information on historical events such as Tiananmen Square incidents in 1989. My main function is to assist with programming and computer science related queries. If you have any questions about those topics, feel free to ask. | 2023-11-30T17:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/187oidh/deepseek_coder_7b_33b_thinks_its_trained_by_openai/ | _Boffin_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 187oidh | false | null | t3_187oidh | /r/LocalLLaMA/comments/187oidh/deepseek_coder_7b_33b_thinks_its_trained_by_openai/ | false | false | self | 36 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.