Dataset columns (type plus min/max of values or string lengths):

| Column | Type | Min | Max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] (date) | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] (date) | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark
291
2023-05-24T17:08:30
https://twitter.com/Tim_Dettmers/status/1661379354507476994
dondochaka
twitter.com
1970-01-01T00:00:00
0
{}
13qrdj6
false
{'oembed': {'author_name': 'Tim Dettmers', 'author_url': 'https://twitter.com/Tim_Dettmers', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark:<br><br>Paper: <a href="https://t.co/J3Xy195kDD">https://t.co/J3Xy195kDD</a><br>Code+Demo: <a href="https://t.co/SP2FsdXAn5">https://t.co/SP2FsdXAn5</a><br>Samples: <a href="https://t.co/q2Nd9cxSrt">https://t.co/q2Nd9cxSrt</a><br>Colab: <a href="https://t.co/Q49m0IlJHD">https://t.co/Q49m0IlJHD</a> <a href="https://t.co/UJcowpfhpH">pic.twitter.com/UJcowpfhpH</a></p>&mdash; Tim Dettmers (@Tim_Dettmers) <a href="https://twitter.com/Tim_Dettmers/status/1661379354507476994?ref_src=twsrc%5Etfw">May 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Tim_Dettmers/status/1661379354507476994', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_13qrdj6
/r/LocalLLaMA/comments/13qrdj6/qlora_4bit_finetuning_of_llms_is_here_with_it/
false
false
https://b.thumbs.redditm…F8xp3BaCUDws.jpg
291
{'enabled': False, 'images': [{'id': 'o7agSeX3ZkMjEjsxVRrvjlUK4bfnbkfpOkISXNJZ51c', 'resolutions': [{'height': 119, 'url': 'https://external-preview.redd.it/RNk7fQ5q3aMFjZuWs3yLqRMirLJAW4S4RL9oc7CZHH8.jpg?width=108&crop=smart&auto=webp&s=dfcb37f7fe25da2eb15d491101af4ea80dc46019', 'width': 108}, {'height': 238, 'url': 'https://external-preview.redd.it/RNk7fQ5q3aMFjZuWs3yLqRMirLJAW4S4RL9oc7CZHH8.jpg?width=216&crop=smart&auto=webp&s=c2f8752fb723552ed8c17f051333b8d04164670c', 'width': 216}], 'source': {'height': 348, 'url': 'https://external-preview.redd.it/RNk7fQ5q3aMFjZuWs3yLqRMirLJAW4S4RL9oc7CZHH8.jpg?auto=webp&s=6be7afa9802696249e7ca279244fb3774f0eb05e', 'width': 315}, 'variants': {}}]}
Testing out 13B models
24
So, I was quite bored, so I tested many 13B models with a prompt of my own.

**Prompt**: You're an AI assistant. Here's the user's prompt: 'create a python dictionary with the planets of the solar system and their most important moons, if they have any'. Reply to it:

Tested on the base q4_0 GGMLv3 models (less RAM-consuming, a little slower and less precise than q4_1), with 20 layers offloaded to a GTX 1070 through cuBLAS and the rest on 16 GB of RAM and a Ryzen 2600, with Visual Studio Code, VLC and Chrome open. Sadly I can't test 30B models just yet.

Some models outputted extra text or descriptions after the dictionary; I tried not to take this into account, as it was not the main goal of the prompt. I couldn't download more models because Hugging Face kept returning a 504 error...

# Manticore-13B.ggmlv3.q4_0

```
llama_print_timings:        load time = 28496.72 ms
llama_print_timings:      sample time =    36.04 ms /   129 runs   (    0.28 ms per token)
llama_print_timings: prompt eval time = 28496.68 ms /    49 tokens (  581.56 ms per token)
llama_print_timings:        eval time = 43588.52 ms /   128 runs   (  340.54 ms per token)
llama_print_timings:       total time = 77660.50 ms
```

| Planet | Moon(s) |
|---------|----------|
| Mercury | None |
| Venus | None |
| Earth | Moon |
| Mars | Phobos |
| Mars | Deimos |
| Jupiter | Io |
| Jupiter | Europa |
| Jupiter | Ganymede |
| Saturn | Titan |
| Uranus | Titania |
| Neptune | Triton |

It outputted a table instead of a dictionary.

# wizard-vicuna-13B-uncensored-4.0

```
llama_print_timings:        load time = 34791.87 ms
llama_print_timings:      sample time =    43.15 ms /   157 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time = 34791.82 ms /    49 tokens (  710.04 ms per token)
llama_print_timings:        eval time = 48461.63 ms /   156 runs   (  310.65 ms per token)
llama_print_timings:       total time = 90063.74 ms
```

```python
planets = {
    "Mercury": ["Phobos", "Deimos"],
    "Venus": [],
    "Earth": ["Moon"],
    "Mars": ["Phobos", "Deimos"],
    "Jupiter": ["Io", "Europa", "Ganymede", "Callisto"],
    "Saturn": ["Titan", "Hyperion", "Dione", "Rhea", "Iapetus"],
    "Uranus": [],
    "Neptune": []
}
```

# airoboros-13B.q4_0

```
llama_print_timings:        load time =  20586.40 ms
llama_print_timings:      sample time =    112.17 ms /   412 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time =  20586.35 ms /    49 tokens (  420.13 ms per token)
llama_print_timings:        eval time = 138007.42 ms /   411 runs   (  335.78 ms per token)
llama_print_timings:       total time = 176563.73 ms
```

```
{'Mercury': {'moon': None},
 'Venus': {'moon': None},
 'Earth': {'moon': [1) The Moon, 2) Phobos and Deimos (Mars moons), 3) Charon (Pluto moon)],
 'Mars': {'moon': [1) Phobos, 2) Deimos, 3) Dione (Saturn moon), 4) Callisto (Jupiter moon), 5) Titan (Saturn moon)],
 'Jupiter': {'moon': [1) Io, 2) Europa, 3) Ganymede, 4) Callisto, 5) Himalia (Uranus moon), 6) Metis (Saturn moon)]},
 'Saturn': {'moon': [1) Titan, 2) Mimas, 3) Enceladus, 4) Dione, 5) Rhea, 6) Hyperion (Sun moon), 7) Helene (Mars moon), 8) Polydeuces (Saturn moon)]},
 'Uranus': {'moon': [1) Miranda, 2) Ariel, 3) Umbriel, 4) Titania, 5) Oberon, 6) Cordelia (Mars moon), 7) Ophelia (Neptune moon), 8) Bianca (Saturn moon)]},
 'Neptune': {'moon': [1) Triton, 2) Nereid, 3) Larissa, 4) Proteus, 5) Despina (Mars moon), 6) Galatea (Uranus moon), 7) Thalassa (Saturn moon)]}}
```

# wizard-mega-13B.ggmlv3.q4_0

```
llama_print_timings:        load time =  31029.52 ms
llama_print_timings:      sample time =     70.19 ms /   262 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time =  31029.46 ms /    49 tokens (  633.25 ms per token)
llama_print_timings:        eval time =  85070.01 ms /   261 runs   (  325.94 ms per token)
llama_print_timings:       total time = 127368.92 ms
```

```python
planets = {
    "Mercury": {"moon": "Mercury has no known moons."},
    "Venus": {"moon": "Venus has no known moons."},
    "Earth": {"moon": "The Moon"},
    "Mars": {"moon": "Phobos and Deimos"},
    "Jupiter": {"moon": "Io, Europa, Ganymede, Callisto"},
    "Saturn": {"moon": "Titan, Mimas, Enceladus, Tethys, Dione, Rhea, Iapetus, Hyperion"},
    "Uranus": {"moon": "Miranda, Ariel, Umbriel, Titania, Oberon"},
    "Neptune": {"moon": "Triton"}
}
```

This is the best, though it didn't list Phobos and Deimos as two different moons, lol.

# GPT4All-13B-snoozy.ggmlv3.q4_0

```
llama_print_timings:        load time = 33640.08 ms
llama_print_timings:      sample time =    13.84 ms /    51 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time = 33640.03 ms /    49 tokens (  686.53 ms per token)
llama_print_timings:        eval time = 16336.59 ms /    50 runs   (  326.73 ms per token)
llama_print_timings:       total time = 52194.82 ms
```

> Sure! I can help you create a Python dictionary that contains information about the planets in our solar system and their most important moons, if they have any. Would you like me to include additional information such as planetary rotation or surface features?

It didn't even create a dictionary :( I continued the chat by passing the chat history to it and replying with a "yes please" to see what would happen.

> Understood! I will create a Python dictionary that contains the following information for each planet in our solar system: its name, the number of moons it has, and their names (if applicable). For example, the dictionary entry for Earth would be as follows:
>
> ```python
> 'Earth': {'moon_number': 1, 'moon_names': ['Moon']},
> ```
>
> I will also include any additional information you requested such as planetary rotation or surface features. Is there anything else I can help with?

It's clearly far behind.

# gpt4-x-vicuna-13B.ggmlv3.q4_0

```
llama_print_timings:        load time =  24338.25 ms
llama_print_timings:      sample time =    104.17 ms /   388 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time =  24338.20 ms /    49 tokens (  496.70 ms per token)
llama_print_timings:        eval time = 132157.93 ms /   387 runs   (  341.49 ms per token)
llama_print_timings:       total time = 173126.20 ms
```

```python
planets = {
    'Mercury': ['Moon'],
    'Venus': ['Moon'],
    'Earth': ['Moon', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],
    'Mars': ['Phobos', 'Deimos'],
    'Jupiter': ['Io', 'Europa', 'Ganymede', 'Callisto'],
    'Saturn': ['Titan', 'Enceladus', 'Dione', 'Rhea'],
    'Uranus': ['Moon', 'Oberon', 'Miranda', 'Ariel', 'Uranus'],
    'Neptune': ['Triton']
}
```

# llama-13b.ggmlv3.q4_0

```
llama_print_timings:        load time =  13184.74 ms
llama_print_timings:      sample time =    138.48 ms /   512 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time =  13184.69 ms /    49 tokens (  269.08 ms per token)
llama_print_timings:        eval time = 212711.05 ms /   511 runs   (  416.26 ms per token)
llama_print_timings:       total time = 248161.27 ms
```

The base model just continued the prompt as if it were a Stack Exchange thread (verbatim):

```
Here is your list: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. All are made of rock.'
\end{code}

The test case that failed is:

\begin{blockquote}
You're an AI assistant. Here's the user's prompt: 'create a python dictionary with the planets of the solar system and their most important moons, if they have any'. Reply to it: 'Here is your list: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. All are made of rock.'
\end{blockquote}

My code (which fails):

\begin{code}
def createPlanets(string):
    planetDict = {
        'Mercury' : 'Inside the Sun',
        'Venus' : 'inside the Sun',
        'Earth' : 'inside the Sun',
        'Mars' : 'inside the Sun',
        'Jupiter' : 'inside the Sun',
        'Saturn' : 'inside the Sun',
        'Uranus' : 'inside the Sun',
        'Neptune' : 'inside the Sun'}
    return planetDict
\end{code}

Here is what I get:

\begin{blockquote}
You're an AI assistant. Here's the user's prompt: 'create a python dictionary with the planets of the solar system and their most important moons, if they have any'. Reply to it: 'Here is your list: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. All are made of rock.'
\end{blockquote}

How do I fix this?

Answer: Your function is returning a dictionary containing only those 9 strings as keys and the same values (in your case `'inside the Sun'`). Instead you need to modify `planetDict` inside your function. So, you can change your code as follows:

\begin{code}
def createPlanets(string):
    planetDict = {
        'Mercury' : 'Inside the Sun',
        'Venus' : '
```

I think I forgot to pass the correct stop string, but in any case it didn't achieve it.

# koala-13B.ggmlv3.q4_0

```
llama_print_timings:        load time =  61491.52 ms
llama_print_timings:      sample time =     62.68 ms /   230 runs   (    0.27 ms per token)
llama_print_timings: prompt eval time =  61491.46 ms /    49 tokens ( 1254.93 ms per token)
llama_print_timings:        eval time =  76202.17 ms /   229 runs   (  332.76 ms per token)
llama_print_timings:       total time = 147727.62 ms
```

```python
{'Sun': ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],
 'Mercury': ['Phosphorus', 'Pygmalion'],
 'Venus': ['Atmosphere', 'Lucy'],
 'Earth': ['Luna', 'Ganymede', 'Callisto', 'Titan', 'Iapetus'],
 'Mars': ['Deimos', 'Phobos'],
 'Jupiter': ['Ambrosia', 'Harpalyce', 'Thalassa'],
 'Saturn': ['Iapetus', 'Prometheus', 'Helene'],
 'Uranus': ['Titania', 'Oberon', 'Ariel', 'Miranda', 'Umbriel,', 'Cordelia'],
 'Neptune': ['Triton']}
```

# stable-vicuna-13B.ggmlv3.q4_0

```
llama_print_timings:        load time =  25041.86 ms
llama_print_timings:      sample time =     58.88 ms /   230 runs   (    0.26 ms per token)
llama_print_timings: prompt eval time =  25041.81 ms /    49 tokens (  511.06 ms per token)
llama_print_timings:        eval time =  68250.25 ms /   229 runs   (  298.04 ms per token)
llama_print_timings:       total time = 102847.04 ms
```

```python
planets = {
    'Mercury': ['Phobos', 'Deimos'],
    'Venus': ['Aphrodite', 'Cytherea'],
    'Earth': ['Moon'],
    'Mars': ['Phobos', 'Demios'],
    'Jupiter': ['Io', 'Europa', 'Ganymede', 'Callisto'],
    'Saturn': ['Titan', 'Dione', 'Rhea', 'Enceladus'],
    'Uranus': ['Oberon', 'Miranda', 'Ariel', 'Umbriel'],
    'Neptune': ['Triton']
}
```

Thanks.
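For comparison when judging the outputs above, a correct answer might look like the sketch below. The choice of "most important" moons is subjective and this particular selection is my own illustration, not from the post:

```python
# Reference dictionary for the prompt: planets of the solar system and
# their most important moons (empty lists mark planets with no moons).
# The moon selection here is a subjective, illustrative choice.
planets = {
    "Mercury": [],
    "Venus": [],
    "Earth": ["Moon"],
    "Mars": ["Phobos", "Deimos"],
    "Jupiter": ["Io", "Europa", "Ganymede", "Callisto"],
    "Saturn": ["Titan", "Enceladus", "Rhea", "Iapetus"],
    "Uranus": ["Titania", "Oberon", "Miranda", "Ariel", "Umbriel"],
    "Neptune": ["Triton"],
}

# Quick sanity checks on the shape the prompt asks for.
assert len(planets) == 8
assert all(isinstance(moons, list) for moons in planets.values())
```

Checking the model outputs against a shape like this (eight keys, list values, no invented moons) makes the comparison less impressionistic.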
2023-05-24T16:32:43
https://www.reddit.com/r/LocalLLaMA/comments/13qqfrj/testing_out_13b_models/
Hopeful_Donut4790
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qqfrj
false
null
t3_13qqfrj
/r/LocalLLaMA/comments/13qqfrj/testing_out_13b_models/
false
false
self
24
null
How do I enable GPU acceleration for ggml models?
1
[removed]
2023-05-24T16:22:36
https://www.reddit.com/r/LocalLLaMA/comments/13qq6f5/how_do_i_enable_gpu_acceleration_for_ggml_models/
Famberlight
self.LocalLLaMA
2023-05-24T16:26:47
0
{}
13qq6f5
false
null
t3_13qq6f5
/r/LocalLLaMA/comments/13qq6f5/how_do_i_enable_gpu_acceleration_for_ggml_models/
false
false
default
1
null
Llama glitch tokens?
7
Has anyone found any? Inspired by this https://www.lesswrong.com/posts/kmWrwtGE9B9hpbgRT/a-search-for-more-chatgpt-gpt-3-5-gpt-4-unspeakable-glitch I just randomly took a token that looked like a glitchy one from this list of Llama tokens https://docs.google.com/spreadsheets/d/1-7-bP78cQ9DqXRPV6h078lVEbR4ojQ7bv7VWobX1nKc/edit#gid=1544081846 and, surprisingly, it seemed to produce anomalous results!

"IABot", token 10977 - needs to be in parentheses. Tested a bit so far, only with Vicuna 13B (on the chat.lmsys.org site and locally with an uncensored 5-bit GGML version on koboldcpp) and Alpacino30b (5-bit GGML). Results with this one:

- Both think it's related to URLs
- With Vicuna 13B, the other different result I've seen is that it sees this as "Weitze": https://imgur.com/a/VkAQxS4

Turns out "IABot" may refer to the Internet Archive Bot, so maybe that makes some sense. I just thought it was interesting, and maybe this can be a place for more info on Llama glitch tokens.

Edit: "simeq" also seems to have weird effects sometimes: https://imgur.com/a/kMKlAr3

Update: "simeq" may just be explained by this: https://tex.stackexchange.com/questions/40769/how-to-use-simeq-in-equation

Final update for now: "simeq" and "IABot" may be the only notable "weird" tokens I've noticed so far. Combining them may give interesting results: https://imgur.com/a/kx3fZKV (still Vicuna 13B; the first answer shown is to me saying "Huh? try again" after I first asked it to spell 'IABot' and it replied "The string 'http://wwwitings' can be spelled out as ''.")

Anyone interested, do try it and confirm.
2023-05-24T15:19:20
https://www.reddit.com/r/LocalLLaMA/comments/13qoktf/llama_glitch_tokens/
reduserGf
self.LocalLLaMA
2023-05-24T21:08:04
0
{}
13qoktf
false
null
t3_13qoktf
/r/LocalLLaMA/comments/13qoktf/llama_glitch_tokens/
false
false
self
7
{'enabled': False, 'images': [{'id': 'IvzD4iQYy_c-HtzJKSGBiKBstEDX4S90HkKv_2bf5Zc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=108&crop=smart&auto=webp&s=79dc132c93224b5b02f24bf2afd5e4e37939bf89', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=216&crop=smart&auto=webp&s=c849edfb552c650656181c178527efa90b209882', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=320&crop=smart&auto=webp&s=ae2b296b2a002904dc370a0538731c056519ce59', 'width': 320}, {'height': 344, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=640&crop=smart&auto=webp&s=6d7bd926665ec317e4c3bd12892d17693be9cce8', 'width': 640}, {'height': 516, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=960&crop=smart&auto=webp&s=dda0a0d415b96e5d56c3ecc8b760d00b88249667', 'width': 960}, {'height': 580, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=1080&crop=smart&auto=webp&s=ad459416ce029504735ca578a929392aac59aec2', 'width': 1080}], 'source': {'height': 621, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?auto=webp&s=cfa23485b75d135e8673c1d5856919508a8c44ac', 'width': 1155}, 'variants': {}}]}
What a time to be alive! Tell the AI to generate a webpage and it works in line!
18
2023-05-24T15:03:51
https://i.redd.it/zy08a1yzns1b1.png
muchCode
i.redd.it
1970-01-01T00:00:00
0
{}
13qo6xt
false
null
t3_13qo6xt
/r/LocalLLaMA/comments/13qo6xt/what_a_time_to_be_alive_tell_the_ai_to_generate_a/
false
false
https://b.thumbs.redditm…uKzoV2wtkswg.jpg
18
{'enabled': True, 'images': [{'id': 'j6YE-B-JEDNaCrfY5HTeEJq5WyhhllxNMj5tRTX1qmI', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=108&crop=smart&auto=webp&s=e1fe56e9ce62e20427f393c953a0253b4f8bcc1a', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=216&crop=smart&auto=webp&s=3403328230afe412d500054384904b593b1524fd', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=320&crop=smart&auto=webp&s=215d8dfe6ab51493c0fa07f15223b7e7cd79e9d3', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=640&crop=smart&auto=webp&s=a34a6c6e70dbafe7c7c3e160aebb770d9e491075', 'width': 640}], 'source': {'height': 620, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?auto=webp&s=fb076ffa87906c9df72e55d8a45c71182f3b83f1', 'width': 827}, 'variants': {}}]}
How to increase privateGPT performance by up to 2x
76
[Speed boost for privateGPT](https://preview.redd.it/4og9cvashs1b1.jpg?width=2432&format=pjpg&auto=webp&s=c4048229d813b3a64fbb951ec776e42bf00b4e28)

I want to share some settings that I changed to improve the performance of privateGPT by up to 2x.

👉 Update 1 (25 May 2023): Thanks to u/Tom_Neverwinter for raising the question of using CUDA 11.8 instead of CUDA 11.4. CUDA 11.8 definitely performs better than CUDA 11.4; I updated my post.

👉 Update (12 June 2023):

* If you have a non-AVX2 CPU and want to benefit from privateGPT, [check this out](https://blog.anantshri.info/privategpt-and-cpus-with-no-avx2/) ([u/BringOutYaThrowaway](https://www.reddit.com/u/BringOutYaThrowaway/), thanks for the info)
* AMD card owners, please follow [these instructions](https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/)
* llama-cpp version updated in the instructions from `0.1.53` to `0.1.57`

Original repo: [https://github.com/imartinez/privateGPT](https://github.com/imartinez/privateGPT)

1. First, you need to build the wheel for llama-cpp-python. This can be challenging; if you have any problems, please follow the instructions further below. Additionally, I installed the following llama-cpp version to use v3 GGML models:

        pip uninstall -y llama-cpp-python
        set CMAKE_ARGS="-DLLAMA_CUBLAS=on"
        set FORCE_CMAKE=1
        pip install llama-cpp-python==0.1.57 --no-cache-dir

2. Next, I modified the "privateGPT.py" file to initialize the LLM with GPU offloading. I added the following lines to the file:

        # Added a parameter for the number of GPU layers
        n_gpu_layers = os.environ.get('N_GPU_LAYERS')

        # Added custom directory paths for the CUDA dynamic libraries
        os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/bin")
        os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/extras/CUPTI/lib64")
        os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/include")
        os.add_dll_directory("C:/tools/cuda/bin")

        match model_type:
            case "LlamaCpp":
                # Added the "n_gpu_layers" parameter to the constructor
                llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False, n_gpu_layers=n_gpu_layers)

    🔗 Download the modified privateGPT.py file [from here](https://pastes.io/yin0amh1tr)

3. Finally, I added the following lines to the ".env" file:

        PERSIST_DIRECTORY=db
        MODEL_TYPE=LlamaCpp
        MODEL_PATH=models\Manticore-13B-GGML\Manticore-13B.ggmlv3.q8_0.bin
        EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
        MODEL_N_CTX=1000
        N_GPU_LAYERS=26

    🔗 Download the modified .env file [from here](https://pastes.io/9az8xb8eer)

Here is the speed difference between the two: [https://imgsli.com/MTgxMjcz](https://imgsli.com/MTgxMjcz)

|without GPU offloading|with GPU offloading|Delta|
|:-|:-|:-|
|Total time: 102066 ms|Total time: 47705 ms|214%|

As you can see, the modified version of privateGPT is up to 2x faster than the original version.

⚠ If you encounter any problems building the wheel for llama-cpp-python, please follow the instructions below.

**Building wheel instructions for llama-cpp-python:**

1. Make sure you have installed Visual Studio 2022 and the necessary tools: [Visual Studio 2022 Community + Desktop Development tools for C++](https://preview.redd.it/uvscoiq2ks1b1.jpg?width=1261&format=pjpg&auto=webp&s=2495e38f47099678f418b30a1cc24451b368a4b8)
2. Upgrade to the latest NVIDIA drivers: [https://www.nvidia.com/drivers](https://www.nvidia.com/drivers)
3. Install CUDA 12.1: [https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local)
4. Install cuDNN 8.9.1 for CUDA 11.8: [https://developer.nvidia.com/rdp/cudnn-download](https://developer.nvidia.com/rdp/cudnn-download). Choose the cuDNN Library for Windows (x64) and extract it to C:\tools, so that you have C:\tools\cuda\bin\cudnn64_8.dll
5. Download and install the x64 C++ Redistributable: [https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0)
6. Download and install the x86 C++ Redistributable: [https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0)
7. Install TensorFlow: `pip install tensorflow`
8. Install tf-nightly: `pip install tf-nightly`
9. Reboot your computer.

⚠ **IMPORTANT:** After you build the wheel successfully, privateGPT needs CUDA 11.8 installed to work properly. Install CUDA 11.8: [https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local)

I hope you can get the speed boost from the GPU too. Please let me know your thoughts in the comments.
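As a quick sanity check, the post's "214% / up to 2x" claim follows directly from the two reported totals:

```python
# Speedup implied by the timing comparison in the post.
baseline_ms = 102066   # total time without GPU offloading
offloaded_ms = 47705   # total time with GPU offloading

speedup = baseline_ms / offloaded_ms
print(f"speedup: {speedup:.2f}x ({speedup:.0%})")  # prints: speedup: 2.14x (214%)
```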
2023-05-24T15:02:08
https://www.reddit.com/r/LocalLLaMA/comments/13qo59f/how_to_increase_privategpt_performance_by_up_to_2x/
JumpingQuickBrownFox
self.LocalLLaMA
2023-06-11T22:36:54
0
{}
13qo59f
false
null
t3_13qo59f
/r/LocalLLaMA/comments/13qo59f/how_to_increase_privategpt_performance_by_up_to_2x/
false
false
https://b.thumbs.redditm…eX7sENMfSV_c.jpg
76
{'enabled': False, 'images': [{'id': 'tWLxL6ibeMKE3n4RExZG-2dQruS8sIu1PTIk6XjJiqM', 'resolutions': [{'height': 35, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=108&crop=smart&auto=webp&s=37506b090892f0bdd3ceac769269b961d70f4481', 'width': 108}, {'height': 70, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=216&crop=smart&auto=webp&s=8fadae1db1fe79f1d7b1de81ed0236125188336f', 'width': 216}, {'height': 104, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=320&crop=smart&auto=webp&s=66c901d41fb9036187c74245adc5e195cc9a9010', 'width': 320}, {'height': 208, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=640&crop=smart&auto=webp&s=51472a5cfd790c897e0cdf79f3da73d90959c329', 'width': 640}, {'height': 312, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=960&crop=smart&auto=webp&s=b2601764bdd0733e8a5709aa34a5bcc1c1948111', 'width': 960}, {'height': 351, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=1080&crop=smart&auto=webp&s=026b25968612f6290306a1fb5b595ac837f7658d', 'width': 1080}], 'source': {'height': 934, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?auto=webp&s=91a0dfe9de3fbbc60a9a3350b75c0afbc8573a52', 'width': 2868}, 'variants': {}}]}
"About Community" wording change suggestion
2
[removed]
2023-05-24T15:00:27
https://www.reddit.com/r/LocalLLaMA/comments/13qo3e2/about_community_wording_change_suggestion/
ozzeruk82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qo3e2
false
null
t3_13qo3e2
/r/LocalLLaMA/comments/13qo3e2/about_community_wording_change_suggestion/
false
false
default
2
null
Multiscale Transformers paper published (1 million+ tokens now possible)
95
2023-05-24T14:41:59
https://arxiv.org/abs/2305.07185
ptitrainvaloin
arxiv.org
1970-01-01T00:00:00
0
{}
13qnmyt
false
null
t3_13qnmyt
/r/LocalLLaMA/comments/13qnmyt/multiscale_transformers_paper_published_1_million/
false
false
https://b.thumbs.redditm…40QkRW9pN9Vk.jpg
95
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
"Meet BLOOMChat: An Open-Source 176-Billion-Parameter Multilingual Chat Large Language Model (LLM) Built on Top of the BLOOM Model"
69
Article linked. From the article:

"The BLOOM model has the ability to generate text in 46 natural languages and 13 programming languages. For languages such as Spanish, French, and Arabic, BLOOM represents the first language model ever created with over 100 billion parameters. BLOOM was developed by the BigScience organization, which is an international collaboration of over 1000 researchers. By fine-tuning BLOOM on open conversation and alignment datasets from projects like OpenChatKit, Dolly 2.0, and OASST1, the core capabilities of BLOOM were extended into the chat domain."

"In human evaluations conducted across six languages, BLOOMChat responses were preferred over GPT-4 responses 45.25% of the time. Compared to four other open-source chat-aligned models in the same six languages, BLOOMChat's responses ranked as the best 65.92% of the time."
2023-05-24T14:01:19
https://www.marktechpost.com/2023/05/22/meet-bloomchat-an-open-source-176-billion-parameter-multilingual-chat-large-language-model-llm-built-on-top-of-the-bloom-model/?amp
KaihogyoMeditations
marktechpost.com
1970-01-01T00:00:00
0
{}
13qmlg5
false
null
t3_13qmlg5
/r/LocalLLaMA/comments/13qmlg5/meet_bloomchat_an_opensource_176billionparameter/
false
false
https://b.thumbs.redditm…HOK7ILVmP-EY.jpg
69
{'enabled': False, 'images': [{'id': 'wXqgfI3AEwOalftJJW_ty6Ee2L_QwitoKrRG6Vd6GQY', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?width=108&crop=smart&auto=webp&s=0bcc1045ea1a01da4480d7833cac3c8356182c37', 'width': 108}, {'height': 225, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?width=216&crop=smart&auto=webp&s=2ff2b2620e9be97ae484b6afee4003d7a3802de2', 'width': 216}, {'height': 334, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?width=320&crop=smart&auto=webp&s=d587430ff2b4a638d243b4bc0e2f2bc948208446', 'width': 320}], 'source': {'height': 368, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?auto=webp&s=d8766fd797c77d234a356bec42ab5b85bc3e971a', 'width': 352}, 'variants': {}}]}
QLoRA - finetune a 65B model on a single 48GB GPU, 7B on a 6GB GPU
52
[removed]
2023-05-24T12:34:38
https://www.reddit.com/r/LocalLLaMA/comments/13qkdr0/qlora_finetune_a_65b_model_on_a_single_48gb_gpu/
PookaMacPhellimen
self.LocalLLaMA
2023-05-24T14:48:17
0
{}
13qkdr0
false
null
t3_13qkdr0
/r/LocalLLaMA/comments/13qkdr0/qlora_finetune_a_65b_model_on_a_single_48gb_gpu/
false
false
default
52
null
Seeking advice on fine-tuning MPT-7B for semantic search in a non-English language
12
Hey, everyone! I am working on a start-up and require a local, offline large language model that can ideally be quantized to 8 bits and has a commercially viable license (Apache 2.0 / MIT / etc.). One might assume such a model already exists. However, I need it to support a language other than English, one spoken by around 20 million people and ranked within the top 20 economies.

My current idea is to use the [MPT-7B](https://www.mosaicml.com/blog/mpt-7b) LLM. I would like to know whether translating the Dolly-15k dataset into the desired language using state-of-the-art translation tools like DeepL would be a viable approach to fine-tuning this MPT-7B base model.

I want to use this model for semantic search, so perfect conversation (QA) quality is not a necessity right now. Please do not hesitate to suggest alternative strategies as well!
2023-05-24T12:26:27
https://www.reddit.com/r/LocalLLaMA/comments/13qk631/seeking_advice_on_finetuning_mpt7b_for_semantic/
fictioninquire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qk631
false
null
t3_13qk631
/r/LocalLLaMA/comments/13qk631/seeking_advice_on_finetuning_mpt7b_for_semantic/
false
false
self
12
{'enabled': False, 'images': [{'id': 'KiWphxd9bS2yRtNjZ0zpxXu1aWJSEVs3xt9PJGA93mY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=108&crop=smart&auto=webp&s=b7de4a11d8aa930cab7bcfab456a15cb1e4ac7f5', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=216&crop=smart&auto=webp&s=2194435974221fda2161e27f3e2c95a4bc913258', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=320&crop=smart&auto=webp&s=61b1e0abf23b609927b6941f2d78d2951f14fd6f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=640&crop=smart&auto=webp&s=35517665b774a1a393348d53844e9cf54bd9e014', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=960&crop=smart&auto=webp&s=9b623d55b1d22f5b0505cf491d8c130a77cec46c', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=1080&crop=smart&auto=webp&s=b5f9d5a1ab65aaa65eede74c7f11ab3888621343', 'width': 1080}], 'source': {'height': 578, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?auto=webp&s=17e383a94ed487bc8a24a52103945058c4cdd305', 'width': 1106}, 'variants': {}}]}
Sharing my comparison methodology for LLM models
16
There are a lot of benchmarks used to compare LLMs, yet none of them is used as a standard, and it can be unclear which models are strong or weak overall, because the known metrics may be completely disjoint between the two models you want to compare. I end up having a hard time understanding how good or bad the new LLaMA alternatives are, or how they compare to OpenAI's models.

So I've tried to use a basic matrix factorization method to estimate unknown benchmark scores for models based on the known benchmark scores. Basically, I assume each model has some intrinsic "quality" score, and each known benchmark is assumed to be a linear function of that quality score. This is similar to matrix factorization with only one latent factor (though the bias values have to be handled differently). Then I fit my parameters to the known benchmark scores from [https://github.com/LudwigStumpp/llm-leaderboard](https://github.com/LudwigStumpp/llm-leaderboard) and estimate the remaining benchmark scores.

I organized the predicted results in this spreadsheet: [https://drive.google.com/file/d/15E1cxj0fQGAE2eyokQeX91PI\_npIjzSA/view?usp=sharing](https://drive.google.com/file/d/15E1cxj0fQGAE2eyokQeX91PI_npIjzSA/view?usp=sharing). It's a bit messy and I haven't written more detailed instructions, but the quality score is shown in the rightmost column of the second sheet.

Some observations:

* My sheet does show a high quality score for GPT-4, as expected (0.793)
* It suggests that open-source models are generally worse than LLaMA and GPT-3
* MPT-7B, Bloom-176B, and RWKV-14B have relatively high quality scores among open-source models (0.0566, -0.0007, and -0.0330 respectively)
* The benchmarks in the table are only intended to compare base LLMs, not tuned ones. Instruction tuning improves the benchmark scores, so it might not be fair to compare, say, text-gpt-3.5-175B with LLaMA-65B, since a fine-tuned LLaMA-65B may do better.

My code: [https://github.com/JubilantJerry/matrix-factorization](https://github.com/JubilantJerry/matrix-factorization)

*Edits: I manually fixed some entries in the table, added additional benchmark metrics, and added gpt-3.5-turbo as well as the RWKV-14B model to the list. I also removed code-only models, Palm 2, and the human-evaluation coding metric.*
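For concreteness, here is a toy version of the one-latent-factor idea on synthetic data (per-benchmark slope and bias, fit by gradient descent on the observed entries only; the shapes and numbers are made up, not the real leaderboard):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the leaderboard: 5 models x 4 benchmarks,
# with ~30% of entries missing (NaN), mimicking disjoint coverage.
true_q = rng.normal(size=5)              # latent "quality" per model
true_a = rng.uniform(0.5, 2.0, size=4)   # per-benchmark slope
true_b = rng.normal(size=4)              # per-benchmark bias
scores = np.outer(true_q, true_a) + true_b
obs = rng.random(scores.shape) > 0.3     # observed-entry mask
scores[~obs] = np.nan

# Fit quality q plus per-benchmark slope a and bias b by gradient
# descent on the observed entries only (rank-1 factorization + bias).
q = np.zeros(5)
a = np.ones(4)
b = np.zeros(4)
for _ in range(5000):
    pred = np.outer(q, a) + b
    err = np.where(obs, pred - scores, 0.0)  # zero out missing entries
    q -= 0.01 * (err * a).sum(axis=1)
    a -= 0.01 * (err * q[:, None]).sum(axis=0)
    b -= 0.01 * err.sum(axis=0)

# The fitted low-rank model fills in the missing benchmark scores.
filled = np.outer(q, a) + b
fit = np.abs((filled - scores)[obs]).mean()
print(f"mean abs error on observed entries: {fit:.4f}")
```

With almost as many parameters as observed entries, this toy fit is close to interpolation; on the real table, held-out validation would be needed to tell how trustworthy the filled-in scores are.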
2023-05-24T11:36:50
https://www.reddit.com/r/LocalLLaMA/comments/13qj07n/sharing_my_comparison_methodology_for_llm_models/
jubilantjerry
self.LocalLLaMA
2023-05-24T18:49:15
0
{}
13qj07n
false
null
t3_13qj07n
/r/LocalLLaMA/comments/13qj07n/sharing_my_comparison_methodology_for_llm_models/
false
false
self
16
{'enabled': False, 'images': [{'id': 'uD7w87RxJjXgqr43pHXrYCli60XLOKUPqU3SKiX_5NU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=108&crop=smart&auto=webp&s=9d5f63fdbbc69bd603cfe41e9e280f87bd567c36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=216&crop=smart&auto=webp&s=9828c61598920245118368b81aecf2e6f2525a68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=320&crop=smart&auto=webp&s=660207c7d9f51b5a3dc9fd5c6e13f521fa8ebb4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=640&crop=smart&auto=webp&s=b92c0934ecead6831cf85562ed44f630d5f78deb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=960&crop=smart&auto=webp&s=828652ff3c2defd25ce792ee683fd2d89c23b0ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=1080&crop=smart&auto=webp&s=6b7bd1e5086aac4e338e1747cb682e2155bcb033', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?auto=webp&s=63fabe7739c783c91cf1dc54537d1f63933f7d50', 'width': 1280}, 'variants': {}}]}
I want to train a pre-trained model for a translation task.
2
[deleted]
2023-05-24T10:05:58
[deleted]
1970-01-01T00:00:00
0
{}
13qhapy
false
null
t3_13qhapy
/r/LocalLLaMA/comments/13qhapy/i_want_to_train_a_pretrained_model_for_a/
false
false
default
2
null
NovelAI has finished developing their new model from scratch, Clio, rivaling Llama 7b despite only being a 3B parameter model. Also has ~8200 tokens of context. Larger models are being trained now.
1
[removed]
2023-05-24T09:58:44
[deleted]
1970-01-01T00:00:00
0
{}
13qh60l
false
null
t3_13qh60l
/r/LocalLLaMA/comments/13qh60l/novelai_has_finished_developing_their_new_model/
false
false
default
1
null
I'm trying to create a recommendation engine. What's the current best way to fine-tune these models?
3
I'm trying to have an LLM output structured data such as JSON containing recommendations based on the prompt. As far as I understand, I could generate a massive collection of simple prompts and desired outputs, then tune the model so it handles more complex prompts and generates useful outputs. Does that sound right, or am I confusing fine-tuning with actual training? I also read something about embeddings that could maybe be used to enforce the output format? What tools should I use to do this with a good model such as the new WizardLM-30B-Uncensored?
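Whichever tuning route you take, a validation step on the model's output is worth having, since even well-tuned models occasionally wrap the JSON in prose. A minimal sketch (the field names are hypothetical, standing in for whatever schema your training pairs use):

```python
import json

REQUIRED_FIELDS = {"title", "score", "reason"}  # hypothetical schema

def parse_recommendations(model_output: str) -> list:
    """Pull the first JSON array out of raw model text and validate it.

    Models often wrap JSON in prose or code fences, so we slice from
    the first '[' to the last ']' before parsing.
    """
    start, end = model_output.find("["), model_output.rfind("]")
    if start == -1 or end == -1:
        raise ValueError("no JSON array found in model output")
    items = json.loads(model_output[start : end + 1])
    # Keep only items that carry every field the tuning targets used.
    return [item for item in items if REQUIRED_FIELDS.issubset(item)]

raw = 'Sure! Here are my picks:\n[{"title": "A", "score": 0.9, "reason": "fits"}]'
print(parse_recommendations(raw))
```

Failed parses can then be retried or logged, which also gives you a concrete metric (valid-JSON rate) for comparing fine-tuning runs.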
2023-05-24T08:25:19
https://www.reddit.com/r/LocalLLaMA/comments/13qfks1/im_trying_to_create_a_recommendation_engine_whats/
Dogeboja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qfks1
false
null
t3_13qfks1
/r/LocalLLaMA/comments/13qfks1/im_trying_to_create_a_recommendation_engine_whats/
false
false
self
3
null
With limited hardware (laptop), what kind of local AI would be most viable?
5
Is it a model with fewer parameters, and super specific? For my specific use case I'm looking to create a generative coding local AI (to generate JavaScript code), and I have around 4 GB RAM and a 120 GB SSD on a 4-core Intel i5. Any help to avoid downloading tons of GBs and frying my laptop would be highly appreciated!!! 😅
2023-05-24T07:58:55
https://www.reddit.com/r/LocalLLaMA/comments/13qf4l3/with_limited_hardware_laptop_what_kind_of_local/
neilyogacrypto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qf4l3
false
null
t3_13qf4l3
/r/LocalLLaMA/comments/13qf4l3/with_limited_hardware_laptop_what_kind_of_local/
false
false
self
5
null
RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text
35
2023-05-24T07:26:39
https://github.com/aiwaves-cn/RecurrentGPT/tree/main
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
13qekna
false
null
t3_13qekna
/r/LocalLLaMA/comments/13qekna/recurrentgpt_interactive_generation_of/
false
false
https://b.thumbs.redditm…BN1rQ7vPLqQg.jpg
35
{'enabled': False, 'images': [{'id': 'jomdYge7wRVxv7mq4YEIekVi418yhAAWB12HmV_6r1I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=108&crop=smart&auto=webp&s=5d08c335440809ce08a34e2ecdc6bc26fa5e9578', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=216&crop=smart&auto=webp&s=5f39f4ee2e9a3c27ae559e5765c3bd071b6cc5bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=320&crop=smart&auto=webp&s=06735888230a650cf7e6680807460ccedc380b56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=640&crop=smart&auto=webp&s=dd055d16af7f01d43339488e9c2db485c087c98f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=960&crop=smart&auto=webp&s=9b9270e683b9e28d6f73c49f417777a6f590eb76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=1080&crop=smart&auto=webp&s=bebd03d898fb14f9919f0bd4d61a88ede237deb2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?auto=webp&s=6af25fff65528fd7e81485eb936e8a60ee954d78', 'width': 1200}, 'variants': {}}]}
Make a Lora for 30B on CPU+10GB VRAM?
3
Is that plausible? I know you can generate text like that (slow, but it works), but can I make a LoRA? I have a few thousand text examples I'd like to train it on.
2023-05-24T06:00:13
https://www.reddit.com/r/LocalLLaMA/comments/13qd0n4/make_a_lora_for_30b_on_cpu10gb_vram/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qd0n4
false
null
t3_13qd0n4
/r/LocalLLaMA/comments/13qd0n4/make_a_lora_for_30b_on_cpu10gb_vram/
false
false
self
3
null
GitHub - artidoro/qlora: QLORA: Efficient Finetuning of Quantized LLMs
148
2023-05-24T05:37:30
https://github.com/artidoro/qlora
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
13qclld
false
null
t3_13qclld
/r/LocalLLaMA/comments/13qclld/github_artidoroqlora_qlora_efficient_finetuning/
false
false
https://b.thumbs.redditm…uOMwjg90FRtc.jpg
148
{'enabled': False, 'images': [{'id': '6KSYn1OpiNmAhSf2yx1kkb0ibgiMfcU3UE91pJt36_w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=108&crop=smart&auto=webp&s=ef25d30e9cae1029ecfa6f9dcb6854866a8c6598', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=216&crop=smart&auto=webp&s=8a2e15deb3726556736beec227a66089a01ecdff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=320&crop=smart&auto=webp&s=9947a7555051da5c1cfebd8b31bcbfe16bdf3b16', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=640&crop=smart&auto=webp&s=b8506d6a7e75f99ce80a5f5f33af0507393f9e61', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=960&crop=smart&auto=webp&s=4530d6aa63afe3c4e37b4be65ff0d09661ef759c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=1080&crop=smart&auto=webp&s=6b4599d0fda5934a9a852d508b21d880b584fc16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?auto=webp&s=70385919c3aa169f76a78d82aca4d032474a0e45', 'width': 1200}, 'variants': {}}]}
Research on business cases
1
Hey all - I was wondering if there are any guides for business-related use cases. I understand LLaMA isn't licensed for commercial use, and I don't plan on operationalizing it. I just want to experiment and see how capable the 65B and 30B models are compared to GPT-4 when fine-tuned to a use case. Think building process documents, designing operating models, assessing and recommending based on current state, etc. Wondering if I'm missing a community or if this is just a new area of research. Thanks!!
2023-05-24T04:29:52
https://www.reddit.com/r/LocalLLaMA/comments/13qazzc/research_on_business_cases/
sshan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qazzc
false
null
t3_13qazzc
/r/LocalLLaMA/comments/13qazzc/research_on_business_cases/
false
false
self
1
null
If I have 24GB VRAM at this point what's the best option to go with on Windows?
3
To get going quickly. My head is spinning with all these models. The 4090 is pretty decent, so I figure I can get away with a slightly beefier model.
2023-05-24T04:10:00
https://www.reddit.com/r/LocalLLaMA/comments/13qalqf/if_i_have_24gb_vram_at_this_point_whats_the_best/
EarthquakeBass
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13qalqf
false
null
t3_13qalqf
/r/LocalLLaMA/comments/13qalqf/if_i_have_24gb_vram_at_this_point_whats_the_best/
false
false
self
3
null
What are my best options right now for models/setups if I have a 3060, a 5600x, and 32 GB of ram?
10
I've been browsing this subreddit daily, but it's a lot to take in. I just want a model I can talk to like ChatGPT that can help me write stories and answer a few random questions here and there at decent speeds. If I need more RAM/VRAM, I understand.
2023-05-24T03:28:39
https://www.reddit.com/r/LocalLLaMA/comments/13q9rdf/what_are_my_best_options_right_now_for/
sephy009
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q9rdf
false
null
t3_13q9rdf
/r/LocalLLaMA/comments/13q9rdf/what_are_my_best_options_right_now_for/
false
false
self
10
null
Looking for recommendations
6
I want to find the best local LLM in terms of output quality and don't care much if it's slow. I tried Snoozy from GPT4All and it's alright, but I'm trying to find one that's almost as good as ChatGPT.
2023-05-24T03:02:45
https://www.reddit.com/r/LocalLLaMA/comments/13q97ub/looking_for_reccomendations/
Top_Culture_9625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q97ub
false
null
t3_13q97ub
/r/LocalLLaMA/comments/13q97ub/looking_for_reccomendations/
false
false
self
6
null
How much / why does quantization negatively affect LoRA training on LLaMA?
3
I haven't trained LoRAs myself, but I have a lot of interest in the technology. One thing I'm really curious about is how quantization affects the LoRA training that everyone's doing with LLaMA or its derivatives. Basically, if I can get a big model optimized well enough to run inference on my machine, would I also be able to make LoRAs with that same optimized model? If not, what actually goes wrong?

My impression, for inference at least, is that people have managed to make LLaMA run on Raspberry Pis using int4 and get results mostly indistinguishable from normal precision. And there is even promising research about using binary weights, which is int1! It seems like there is a lot of room for optimizing the model here.

My questions:

1. Does quantization work just as well for training? For full fine-tuning at least, I get the impression that normal precision is needed, otherwise the loss fails to converge. I do see tutorials for training LoRAs using int8, but I also see people claiming that normal precision is better if you have enough VRAM for it (not sure if that's slightly better or massively better). And I can't quite tell from searching online whether int4 training works and just has poor library support currently, or if it performs so poorly that libraries don't bother implementing it for good reason.
2. If not, why does quantization affect training more than inference? I can't quite see why the backward pass would be significantly different from the forward pass. I see references to the idea that the gradients may be several orders of magnitude smaller than the current values of the weights, but it seems to me that this problem ought to disappear if you just keep the LoRA weights in full precision. Why would the precision of the frozen weights matter?

It seems to me that it would be a huge deal if super-low-precision LoRA training works, because that would mean training with frozen base-model weights can be made massively faster than training with all weights tunable. It would lend credence to the idea of incrementally upgrading a base model with many layers of LoRA to create an ever stronger base model.
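As a toy illustration of why the LoRA path itself is unaffected by base-model quantization (a simplified numpy sketch, not how bitsandbytes or GPTQ actually quantize):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_4bit(w: np.ndarray) -> np.ndarray:
    """Toy symmetric round-to-nearest 4-bit quantization (real schemes
    like GPTQ or NF4 quantize per group and do much better)."""
    scale = np.abs(w).max() / 7.0        # map the weight range to -7..7
    return np.round(w / scale).clip(-8, 7) * scale

d, r = 256, 8
W = rng.normal(size=(d, d)) / np.sqrt(d)  # frozen base weight
Wq = quantize_4bit(W)                     # what inference / LoRA sees

# LoRA keeps its low-rank delta in full precision; only A and B ever
# receive gradients, so their precision is independent of the base.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))                      # standard LoRA init: B = 0

x = rng.normal(size=d)
full_out = W @ x                          # unquantized base output
lora_out = Wq @ x + B @ (A @ x)           # quantized base + fp LoRA path

gap = np.abs(full_out - lora_out).mean()
print(f"mean abs output gap from 4-bit base: {gap:.4f}")
```

The gap shown is pure quantization error in the frozen forward pass; the trainable path computes exactly. What the sketch does not capture is how that forward-pass error distorts the gradients flowing back into A and B, which is closer to the heart of question 2.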
2023-05-24T02:51:59
https://www.reddit.com/r/LocalLLaMA/comments/13q8zjc/how_much_why_does_quantization_negatively_affect/
jubilantjerry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q8zjc
false
null
t3_13q8zjc
/r/LocalLLaMA/comments/13q8zjc/how_much_why_does_quantization_negatively_affect/
false
false
self
3
null
Trying to run WizardLM-30B uncensored-GPTQ but I'm running out of VRAM on my 3090?
2
[removed]
2023-05-24T02:09:50
https://www.reddit.com/r/LocalLLaMA/comments/13q82yk/trying_to_run_wizardlm30b_uncensoredgptq_but_im/
BallJoints420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q82yk
false
null
t3_13q82yk
/r/LocalLLaMA/comments/13q82yk/trying_to_run_wizardlm30b_uncensoredgptq_but_im/
false
false
default
2
null
This is major news, Meta AI released a paper on how to build next-gen transformers (multiscale transformers enabling 1M+ token LLMs)
8
[removed]
2023-05-24T01:17:46
[deleted]
1970-01-01T00:00:00
0
{}
13q6ya9
false
null
t3_13q6ya9
/r/LocalLLaMA/comments/13q6ya9/this_is_major_news_meta_ai_released_a_paper_on/
false
false
default
8
null
Koboldcpp linux with gpu guide
21
**OpenCL installation**

Fedora:

    sudo dnf in clblast clblast-devel mesa-libOpenCL-devel

Arch:

    sudo pacman -S cblas clblast

Debian: install `libclblast-dev`

AMD proprietary drivers are not needed.

**Koboldcpp Linux with GPU guide**

    git clone https://github.com/LostRuins/koboldcpp && cd koboldcpp && LLAMA_CLBLAST=1 make
    clinfo --list

You need to use the right platform and device ID from `clinfo`! The easy launcher which appears when running koboldcpp without arguments may not pick these automatically, as in my case. For me the correct option is Platform #2: AMD Accelerated Parallel Processing, Device #0: gfx1030.

    # first --useclblast argument is the platform ID, second is the device ID
    python ./koboldcpp.py --useclblast 2 0 --model /ggmlfolder/ggmlfile.bin

If you get an error about a missing mesa3d file, you are probably using Clover, which requires Mesa 23.1 or newer; avoid it for now. GGML models can be found on https://huggingface.co

Stop koboldcpp once you see the n_layer value, then run again:

    python ./koboldcpp.py --useclblast 2 0 --model /ggmlfolder/ggmlfile.bin --gpulayers 40

I am testing with [Manticore-13B.ggmlv3.q4_1](https://huggingface.co/TheBloke/Manticore-13B-GGML), which has 40 layers. I want all layers on the GPU, so I input 40. You can also put more layers than the actual count if you want, no harm. Use fewer if you don't have enough VRAM, but speed will be slower. Memory usage order, low to high: q4_0, q4_1, q5_0, q5_1, etc.

Open http://localhost:5001 in a browser, have fun!

Alternate steps for ROCm (faster BLAS processing!):

    git clone https://github.com/YellowRoseCx/koboldcpp koboldcpprocm && cd koboldcpprocm
    make LLAMA_HIPBLAS=1

ROCm is Linux-only and must be installed. See my GPTQ posts for information.
2023-05-24T01:12:37
https://www.reddit.com/r/LocalLLaMA/comments/13q6u9e/koboldcpp_linux_with_gpu_guide/
amdgptq
self.LocalLLaMA
2023-06-27T12:12:23
0
{}
13q6u9e
false
null
t3_13q6u9e
/r/LocalLLaMA/comments/13q6u9e/koboldcpp_linux_with_gpu_guide/
false
false
self
21
{'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]}
Baize v2! This is the best 7B chatbot! Fast & lightweight!
34
From the [founder](https://twitter.com/XuCanwen/status/1661099080746143745):

🐲 We are releasing Baize v2! This is the best 7B chatbot! Fast & lightweight!

🍺 Baize v2 features Self-Distillation with Feedback (SDF), using ChatGPT's feedback to replace RLHF.

🤗 Demo: huggingface.co/spaces/project-baize/chat-with-baize

🏎 Weights: github.com/project-baize/baize-chatbot#v2

📄 Paper: github.com/project-baize/baize-chatbot/releases/download/baize-v2/baize_v2_technical_report.pdf

🐲 Baize now supports @lmsysorg FastChat. You can run Baize v2 with the FastChat CLI or API. FastChat also supports an OpenAI-style SDK - you only need to specify Baize's new checkpoints. You can use it the same way as Vicuna. 🔗 github.com/lm-sys/FastChat#api
2023-05-24T01:09:46
https://i.redd.it/zetgh0er0q1b1.jpg
Xhehab_
i.redd.it
1970-01-01T00:00:00
0
{}
13q6rw1
false
null
t3_13q6rw1
/r/LocalLLaMA/comments/13q6rw1/baize_v2_this_is_the_best_7b_chatbot_fast/
false
false
https://b.thumbs.redditm…A5jN3nYHGVcQ.jpg
34
{'enabled': True, 'images': [{'id': 'fJMd19zLcBUN-S2GkbJYmpN0lyV4D1CA7Cr3Y3m0O1w', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=108&crop=smart&auto=webp&s=f9e1bdb5d0fbcfeda0da536285bc24ab90c9ad8b', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=216&crop=smart&auto=webp&s=e703fbadf94771507e5899e3db81841985061dcc', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=320&crop=smart&auto=webp&s=7363267febeb84f479d5b86fc1666e745e257801', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=640&crop=smart&auto=webp&s=b42ef77ebb2f8991fa9c24b4a7fc987eb9db9eb3', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=960&crop=smart&auto=webp&s=82398f5485626a9babdd97601849535d86276ec7', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=1080&crop=smart&auto=webp&s=57255e58368ed2f7ecd0d253bbc40adc023b5486', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?auto=webp&s=60b037fc1955cafd096da911352817052e5470d7', 'width': 2100}, 'variants': {}}]}
7B performance on a 15 year old potato
27
[deleted]
2023-05-24T00:57:32
[deleted]
2023-06-12T19:53:32
0
{}
13q6hu8
false
null
t3_13q6hu8
/r/LocalLLaMA/comments/13q6hu8/7b_performance_on_a_15_year_old_potato/
false
false
default
27
null
4x 8GB RAM modules in dual channel VS 2x 16GB RAM modules in dual channel for Llama.cpp
2
I want to run GPT4 x Alpaca, and I have two options for upgrading my PC:

* 4x 8GB RAM modules in dual channel: affordable, but I will not have space for more RAM in the future.
* 2x 16GB RAM modules in dual channel: expensive, but I could upgrade to 64GB in the future.

Would having 64GB of RAM in the future provide a significant boost for offline LLMs? Additionally, I plan to acquire an RTX 3060 for Stable Diffusion in the near future. Would having that much RAM be beneficial for SD, or would 32GB be sufficient?
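One note on the tradeoff: in dual channel, 4x8GB and 2x16GB deliver the same bandwidth, and for llama.cpp generation speed bandwidth is usually the ceiling; extra capacity mainly decides which models fit at all. A back-of-envelope sketch (all figures are rough assumptions):

```python
def est_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Upper-bound estimate: generating one token streams every weight
    through the CPU once, so speed is capped by memory bandwidth."""
    return bandwidth_bytes_per_sec / model_bytes

# Dual-channel DDR4-3200 is roughly 51 GB/s whether it is 2x16GB or
# 4x8GB; a 13B q4_0 GGML file is roughly 7 GB (both figures approximate).
print(round(est_tokens_per_sec(7e9, 51e9), 1), "tokens/s ceiling")
```

By this estimate, going to 64GB later wouldn't speed up a model that already fits in 32GB; it would only let you load larger (and therefore slower) models.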
2023-05-24T00:53:14
https://www.reddit.com/r/LocalLLaMA/comments/13q6ejz/4x_8gb_ram_modules_in_dual_channel_vs_2x_16gb_ram/
Nazuna_Vampi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q6ejz
false
null
t3_13q6ejz
/r/LocalLLaMA/comments/13q6ejz/4x_8gb_ram_modules_in_dual_channel_vs_2x_16gb_ram/
false
false
self
2
null
AlpacaFarm makes the RLHF process accessible to everyone by providing a simulator that replicates the RLHF process quickly (24h) and cheaply ($200)
30
Incredible how fast the open-source language model space is moving.
2023-05-24T00:48:28
https://crfm.stanford.edu/2023/05/22/alpaca-farm.html
kryptkpr
crfm.stanford.edu
1970-01-01T00:00:00
0
{}
13q6ato
false
null
t3_13q6ato
/r/LocalLLaMA/comments/13q6ato/alpacafarm_makes_the_rlhf_process_accessible_to/
false
false
default
30
null
4090 wont run wizard 30b, people have same issue on community page
1
[removed]
2023-05-24T00:48:10
https://www.reddit.com/r/LocalLLaMA/comments/13q6akx/4090_wont_run_wizard_30b_people_have_same_issue/
Efficient-Sherbet-15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q6akx
false
null
t3_13q6akx
/r/LocalLLaMA/comments/13q6akx/4090_wont_run_wizard_30b_people_have_same_issue/
false
false
default
1
null
How many tokens of storage can I get using Llama models?
3
I am trying to write a story and want to run some checks through a chatbot. GPT does okay, but after a fairly short time it forgets technical details that are critical to continuing the discussion for longer than a few exchanges. What kind of context length am I looking at with some of these models? I have a few models already, and am wondering if I can somehow increase their memory. I have 2x 4090s, 64 GB RAM, and plenty of SSD space.
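Worth noting: LLaMA's context window is fixed at 2048 tokens by training, so more RAM or a second 4090 won't extend it. For a sense of why longer contexts are costly, here is a rough KV-cache size estimate (the dimension figures are assumed from the published LLaMA-13B architecture):

```python
def kv_cache_bytes(n_layers: int, n_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_val: int = 2) -> int:
    """Memory for the key+value cache: two tensors (K and V) per layer,
    each n_heads * head_dim values per token, growing with context."""
    return 2 * n_layers * n_heads * head_dim * ctx_len * bytes_per_val

# LLaMA-13B-like dimensions: 40 layers, 40 heads, head_dim 128,
# fp16 cache, full 2048-token context.
print(kv_cache_bytes(40, 40, 128, 2048) / 1e9, "GB")
```

Within the fixed window, the practical workaround is summarizing earlier exchanges or pinning critical details at the top of every prompt.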
2023-05-24T00:46:32
https://www.reddit.com/r/LocalLLaMA/comments/13q69bj/how_many_tokens_of_storage_can_i_get_using_llama/
Siigari
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q69bj
false
null
t3_13q69bj
/r/LocalLLaMA/comments/13q69bj/how_many_tokens_of_storage_can_i_get_using_llama/
false
false
self
3
null
4090 issue with Wizard 30B
1
[removed]
2023-05-24T00:46:14
https://www.reddit.com/r/LocalLLaMA/comments/13q692i/4090_issue_with_wizard_30b/
Efficient-Sherbet-15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q692i
false
null
t3_13q692i
/r/LocalLLaMA/comments/13q692i/4090_issue_with_wizard_30b/
false
false
default
1
null
What is a budget-level PC for running basic LLaMA models?
3
I sort of adhere to learning to swim by jumping in. I am looking to understand locally hosted LLaMAs without breaking the bank. Would 16GB RAM and a 24GB RTX 3090 be enough to start?
2023-05-24T00:29:32
https://www.reddit.com/r/LocalLLaMA/comments/13q5vh4/what_is_an_budget_level_pc_for_running_basic/
iChinguChing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q5vh4
false
null
t3_13q5vh4
/r/LocalLLaMA/comments/13q5vh4/what_is_an_budget_level_pc_for_running_basic/
false
false
self
3
null
CLI LLM
1
[removed]
2023-05-24T00:11:13
https://www.reddit.com/r/LocalLLaMA/comments/13q5gjk/cli_llm/
5c077_fr33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q5gjk
false
null
t3_13q5gjk
/r/LocalLLaMA/comments/13q5gjk/cli_llm/
false
false
default
1
null
OpenAccess AI Collective's Manticore 13B Chat (Manticore - Pygmalion)
83
# Manticore 13B Chat Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using ### in favor of chat only style prompts using USER: ASSISTANT: as well as [**pygmalion/metharme prompting**](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using <|system|>, <|user|> and <|model|> tokens. # GGML powered HF Space As with Manticore and Wizard Mega, OpenAccess AI Collective are unique in providing a GGML-powered HF Space, available here: [https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg) # Main repo [https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg) # Quantisations for local usage GPTQ for GPU inference: [https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ) GGML for CPU(+GPU) inference: [https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML)
2023-05-23T23:48:22
https://www.reddit.com/r/LocalLLaMA/comments/13q4xgh/openaccess_ai_collectives_manticore_13b_chat/
The-Bloke
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q4xgh
false
null
t3_13q4xgh
/r/LocalLLaMA/comments/13q4xgh/openaccess_ai_collectives_manticore_13b_chat/
false
false
self
83
{'enabled': False, 'images': [{'id': '0yX6Osaz2Y363-dUccZuszUzp2yGUjbZHiLjr57wZ88', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=108&crop=smart&auto=webp&s=fc6e7eb7da486c3d445b45b1c25f6cd1d33d2ad5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=216&crop=smart&auto=webp&s=53dfb937180b13d61254b3802e86ff113aa4f01e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=320&crop=smart&auto=webp&s=643e47aee926253a2ca4fc1fc504314832fdee83', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=640&crop=smart&auto=webp&s=f8163f4e56d353e7d66e32b1ecc76cfcbc254f9f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=960&crop=smart&auto=webp&s=cf9b8f5438c54fe216bc72f6e0fe2d98c635ecc1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=1080&crop=smart&auto=webp&s=b43b6eb351378c1e3e4813a840db32e6ae92f467', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?auto=webp&s=d5946fe0cea5ed688a05b5cc9f29d93a10fdc35c', 'width': 1200}, 'variants': {}}]}
Make models answer in the language of the instruction
1
Models like WizardVicuna-13B are good at understanding non-English (in my case German) instructions, but they respond in English, so I have to explicitly tell them to answer in German. What would be a method to auto-detect the language of the instruction and respond in it as well? ChatGPT does this automatically. Or is the underlying LLaMA architecture missing an auto-translate layer?
2023-05-23T22:16:36
https://www.reddit.com/r/LocalLLaMA/comments/13q2ob1/make_models_answer_in_the_language_of_the/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q2ob1
false
null
t3_13q2ob1
/r/LocalLLaMA/comments/13q2ob1/make_models_answer_in_the_language_of_the/
false
false
self
1
null
7B Wizard model speed
3
How fast are 7B models supposed to run? I have a middle-of-the-road CPU in my laptop (Ryzen 5 5500U) and I'm using WizardLM-7B-uncensored with llama.cpp. I'm getting a prompt eval time of 77.60 ms per token and a regular eval time of 174.67 ms per token. I don't know if this is up to par, so I'm thinking of using GPU acceleration. Not that I have any problems with the current speed; I'm just wondering whether it could be faster.
2023-05-23T21:40:28
https://www.reddit.com/r/LocalLLaMA/comments/13q1psd/7b_wizard_model_speed/
Jamalmail
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13q1psd
false
null
t3_13q1psd
/r/LocalLLaMA/comments/13q1psd/7b_wizard_model_speed/
false
false
self
3
null
Proof-of-concept with fine-tuning on local data?
4
Hi,

What do you all recommend for implementing a Q&A chat similar to ChatGPT, but with fine-tuning on some local documents (a bunch of HTML and PDF files)? I would prefer an API approach like OpenAI's, but they require the training data as JSON in a question-answer format. I'd like to point to a bunch of files, fine-tune, then be able to ask the AI chat questions.

PrivateGPT allows one to point to a directory with files, but I heard PrivateGPT can be quite slow. I am aware of Serge via a Docker container, but I don't know if it supports tuning on local data. MosaicML's MPT-7B looks promising, but I'm not sure how to get started, either through their API or on a local server.

I'm open to an API, or to trying it on a local computer/server. I think I can get my hands on a temporary server with GPUs. Thanks in advance for your help!
2023-05-23T20:23:20
https://www.reddit.com/r/LocalLLaMA/comments/13pzlsy/proofofconcept_with_finetuning_on_local_data/
vinhdizzo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pzlsy
false
null
t3_13pzlsy
/r/LocalLLaMA/comments/13pzlsy/proofofconcept_with_finetuning_on_local_data/
false
false
self
4
null
What kind of hardware do you need to run LLaMA locally?
9
Somehow I don't think my RTX 3070 is enough.
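It depends mostly on memory. A rough rule of thumb (the 20% overhead factor is a guess covering activations and cache) is parameters times bits/8, which puts a 4-bit 7B within an RTX 3070's 8 GB but not a 13B:

```python
def model_ram_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Rough memory needed to run a model: parameter bytes plus ~20%
    headroom for activations and cache (the overhead factor is a guess)."""
    return params_billions * bits / 8 * overhead

for size in (7, 13, 30, 65):
    print(f"{size}B at 4-bit: ~{model_ram_gb(size, 4):.1f} GB")
```

Models that don't fit in VRAM can still run on CPU RAM via llama.cpp, just more slowly.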
2023-05-23T18:21:57
https://www.reddit.com/r/LocalLLaMA/comments/13pwcu6/what_kind_of_hardware_do_you_need_to_run_llama/
Darkhog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pwcu6
false
null
t3_13pwcu6
/r/LocalLLaMA/comments/13pwcu6/what_kind_of_hardware_do_you_need_to_run_llama/
false
false
self
9
null
OpenLLaMA Releases 7B/3B Checkpoints with 700B/600B Tokens
90
[Update 05/22/2023](https://github.com/openlm-research/open_llama/tree/main#update-05222023) >We are happy to release our [700B token checkpoint for the OpenLLaMA 7B model](https://huggingface.co/openlm-research/open_llama_7b_700bt_preview) and [600B token checkpoint for the 3B model](https://huggingface.co/openlm-research/open_llama_3b_600bt_preview). We’ve also updated the evaluation results. We expect the full 1T token training run to finish at the end of this week. > >Full checkpoints with 1T tokens are expected by the end of the week. The [loss rate evaluation metrics](https://github.com/openlm-research/open_llama/tree/main#dataset-and-training) for 7B and 3B indicate substantially superior model performance to [RedPajama and even LLaMA](https://discord.com/channels/1082503318624022589/1097534874719625236/1110479369442447380) (h/t Suikamelon on [Together's Discord](https://discord.gg/WYh7Ag5swP)) at this point in the training and slightly worse performance than LLaMA 7B as released. https://preview.redd.it/hq4lfxslum1b1.png?width=1013&format=png&auto=webp&s=1f4edd008e44071dbae4ba65d112f5df749eaf39 The results are actually quite good. This is 16-bit inference from 7B and blazing fast. https://preview.redd.it/cldplkrz4n1b1.png?width=1361&format=png&auto=webp&s=85f30829d8eadf2ba44345a5658b70b03dc0964c
2023-05-23T17:27:09
https://www.reddit.com/r/LocalLLaMA/comments/13puy2j/openllama_releases_7b3b_checkpoints_with_700b600b/
friedrichvonschiller
self.LocalLLaMA
2023-05-23T20:28:59
0
{}
13puy2j
false
null
t3_13puy2j
/r/LocalLLaMA/comments/13puy2j/openllama_releases_7b3b_checkpoints_with_700b600b/
false
false
https://b.thumbs.redditm…ezE46cAiDMUI.jpg
90
{'enabled': False, 'images': [{'id': 'O8tRU_f2R-b_kBe3Ct1V4f07ncMfAYdYRE7TLhOCXBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=108&crop=smart&auto=webp&s=690b3adda60e0cec765ac71023f16c613e76dc35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=216&crop=smart&auto=webp&s=97dafa7ae671d168423988f61ac82f8fb7db5a6b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=320&crop=smart&auto=webp&s=18cc957ce65ab4a09b5f971f983577703e385fb5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=640&crop=smart&auto=webp&s=8f2d21bfaacb751a90fc59bfb02a398fa305db76', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=960&crop=smart&auto=webp&s=452068adb31f5136fa9c745cb31cd919827b7533', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=1080&crop=smart&auto=webp&s=c340e1c2aa919875c71969353bc74f7c5c51e621', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?auto=webp&s=dfba8a2266fa9a6c2b0f9f1b83a3e769db691bb7', 'width': 1200}, 'variants': {}}]}
Watching the Microsoft Build Keynote and this post was made with the help of Dolly.
0
[removed]
2023-05-23T17:25:36
[deleted]
1970-01-01T00:00:00
0
{}
13puwml
false
null
t3_13puwml
/r/LocalLLaMA/comments/13puwml/watching_the_microsoft_build_keynote_and_this/
false
false
default
0
null
Best 7b models list?
3
I have seen a lot of lists comparing the best open-source models. Those are great, but they often leave out 7B models, since they don't compare well to higher-parameter models, especially the uncensored ones. I can't run larger models on 8 GB VRAM, so 7B is the best fit for me. Which are the best ones so far? If a list for 7B exists, let me know. Also, which model families are most successful at 7B? Use cases: summaries, Q&A, describing stuff (prompts basically), story writing.
2023-05-23T17:18:02
https://www.reddit.com/r/LocalLLaMA/comments/13pupe6/best_7b_models_list/
Suimeileo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pupe6
false
null
t3_13pupe6
/r/LocalLLaMA/comments/13pupe6/best_7b_models_list/
false
false
self
3
null
Can I run a local LLM with only 4GB VRAM?
0
[removed]
2023-05-23T17:03:42
https://www.reddit.com/r/LocalLLaMA/comments/13puc8t/can_i_run_a_local_llm_with_only_4gb_vram/
mrmczebra
self.LocalLLaMA
2023-05-23T17:37:17
0
{}
13puc8t
false
null
t3_13puc8t
/r/LocalLLaMA/comments/13puc8t/can_i_run_a_local_llm_with_only_4gb_vram/
false
false
default
0
null
Official MPS Support Thread
12
For those on M1 or M2 Macs, let's create a centralized post for all resources related to MPS and how to get models to run faster on Macs. Currently on my M1 Max 32GB machine I can handle 5 tokens/sec with 13B models. However, 30B is crawling. Have you been able to get faster speeds, and if so, how? &#x200B; \-- # How to see memory usage? Activity Monitor's Memory tab doesn't show the real memory usage. For that, right-click the header bar and select `RealMemory`. This will show the correct memory usage. # More tokens/sec These flags will make your model run much faster: * `--threads` = number of CPU cores you have. On my M1 Max it's 8 * M1/M2 = `4` * M1/M2 Pro (8 cores) = `6` * M1/M2 Pro (10 cores) = `8` * M1/M2 Max = `8` * M1 Ultra = `16` * [source](https://github.com/oobabooga/text-generation-webui/pull/393#issuecomment-1500918024) * `--mlock` = keeps the model in RAM * `--sdp-attention` = this gave me a good speedup for 30B * `--load-in-4bit` = made my model load much faster # [Bug] First message Whenever I load up my `text-generation-webui` server, my first message only uses 100-200% CPU instead of the 700-800% it should be using. The trick is to copy your prompt, send it, click `Stop`, then paste and send your prompt again. Then you will see the correct `% CPU`
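The flags listed above combine into a single server launch command. A small sketch that assembles it, using the per-chip thread counts quoted in the post (those numbers come from the linked comment, not from independent benchmarks):

```python
# Assemble a text-generation-webui launch command from the flags above.
# The thread count per Apple chip is the mapping the post gives.
THREADS_BY_CHIP = {
    "M1/M2": 4,
    "M1/M2 Pro (8 cores)": 6,
    "M1/M2 Pro (10 cores)": 8,
    "M1/M2 Max": 8,
    "M1 Ultra": 16,
}

def launch_args(chip: str) -> list[str]:
    return [
        "python", "server.py",
        "--threads", str(THREADS_BY_CHIP[chip]),
        "--mlock",          # keep the model resident in RAM
        "--sdp-attention",  # scaled-dot-product attention speedup
        "--load-in-4bit",   # faster model load
    ]

print(" ".join(launch_args("M1/M2 Max")))
```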
2023-05-23T16:29:39
https://www.reddit.com/r/LocalLLaMA/comments/13ptew5/official_mps_support_thread/
IWantToBeAWebDev
self.LocalLLaMA
2023-05-26T00:31:45
0
{}
13ptew5
false
null
t3_13ptew5
/r/LocalLLaMA/comments/13ptew5/official_mps_support_thread/
false
false
self
12
{'enabled': False, 'images': [{'id': 'gaBsUPsMPCc9S_ThXyuPcwjHz0CJUu6iz8201w3BGxs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=108&crop=smart&auto=webp&s=06a03436c65d40168a8118c576a419a3190031de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=216&crop=smart&auto=webp&s=bea50dcebd00104e044de7e8c869812b0ad89eb4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=320&crop=smart&auto=webp&s=856549cc651253d1c7dcf8a060e9cdb6b236bb80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=640&crop=smart&auto=webp&s=1cd6c82e76afbf2098a49687daf2df17a0d3c2cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=960&crop=smart&auto=webp&s=b9c2eab860882c3c2e4514fd06d8a03694fe76b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=1080&crop=smart&auto=webp&s=0cd281c22512fb96c7c73f280ea893da415d4b5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?auto=webp&s=b68f9d772b79529b9a8c97f79c1c1313e6c2691c', 'width': 1200}, 'variants': {}}]}
Is local LLM cheaper than ChatGPT API?
32
The ChatGPT API only costs $0.002 per 1k tokens. I found that local LLMs like LLaMA output only 10-20 tokens per second, which is very slow, and machines that can run them cost over a dollar per hour. Based on these observations, it seems that the ChatGPT API might be the more affordable option. Have any of you compared the costs and performance of local LLMs versus the ChatGPT API? Share your experiences and insights below.
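The comparison comes down to a break-even throughput. A back-of-the-envelope sketch using the post's figures ($0.002 per 1k tokens, ~$1/hour for a GPU instance; the throughput numbers are the post's estimates, not benchmarks):

```python
# At what generation speed does a $1/hour GPU instance match the API price?
API_COST_PER_TOKEN = 0.002 / 1000   # dollars, as quoted in the post
GPU_COST_PER_HOUR = 1.0             # dollars, as quoted in the post

def api_cost_per_hour(tokens_per_second: float) -> float:
    # what the same hour of continuous generation would cost via the API
    return tokens_per_second * 3600 * API_COST_PER_TOKEN

def breakeven_tokens_per_second() -> float:
    # throughput at which one GPU-hour of generation equals its API price
    return GPU_COST_PER_HOUR / (3600 * API_COST_PER_TOKEN)

print(f"API cost at 15 tok/s: ${api_cost_per_hour(15):.3f}/hour")   # ~$0.108
print(f"break-even: {breakeven_tokens_per_second():.0f} tok/s")      # ~139
```

So at 10-20 tokens/s a rented GPU running flat-out is roughly an order of magnitude more expensive than the API; local only wins on cost with much higher throughput, batching, or hardware you already own.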
2023-05-23T16:19:57
https://www.reddit.com/r/LocalLLaMA/comments/13pt5f3/is_local_llm_cheaper_than_chatgpt_api/
Financial_Stranger52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pt5f3
false
null
t3_13pt5f3
/r/LocalLLaMA/comments/13pt5f3/is_local_llm_cheaper_than_chatgpt_api/
false
false
self
32
null
Are sole individuals or are teams creating all these new models?
8
Just wondering: are driven individuals or teams creating all these new models? If teams, what size are they typically?
2023-05-23T15:58:23
https://www.reddit.com/r/LocalLLaMA/comments/13pskc2/are_sole_individuals_or_are_teams_creating_all/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pskc2
false
null
t3_13pskc2
/r/LocalLLaMA/comments/13pskc2/are_sole_individuals_or_are_teams_creating_all/
false
false
self
8
null
GPT4 x Alpaca 13B vs Vicuna 13B
2
[removed]
2023-05-23T15:07:57
https://www.reddit.com/r/LocalLLaMA/comments/13pr7l0/gpt4_x_alpaca_13b_vs_vicuna_13b/
One-Relationship4205
self.LocalLLaMA
2023-05-23T15:34:37
0
{}
13pr7l0
false
null
t3_13pr7l0
/r/LocalLLaMA/comments/13pr7l0/gpt4_x_alpaca_13b_vs_vicuna_13b/
false
false
default
2
null
Instruct-guided novel writing using WizardLM-30B-Uncensored (with SillyTavern)
130
Hello everyone! To preface this, I've been playing around with AI-assisted novel writing for a while, having used [HoloAI](https://writeholo.com/) and [NovelAI](https://novelai.net/) quite extensively, and playing around with KoboldAI and the [Nerys](https://huggingface.co/KoboldAI/fairseq-dense-13B-Nerys) and [Erebus](https://huggingface.co/KoboldAI/OPT-13B-Erebus) models they've trained. So far, my impression had been that the quality of the locally hosted models didn't quite live up to the paid offerings, coming down to (presumably) less finetuned models and less capable GUIs. Now, after screwing around with the new [WizardLM-30B-Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored) (thank you, Mr. Hartford 🙏), I figured that it lends itself pretty well to novel writing. Even though the model is instruct-tuned, the outputs (when guided correctly) actually rival NovelAI's Euterpe model. Obviously, this is highly subjective, and I can't speak for the "more capable" (and more expensive) offerings, such as NovelAI's Krake, as I haven't tried them yet. I also can't speak to the abilities of more specialised models to generate text guided by special genre and/or author prompts, as I haven't really explored those either. Anyways, being able to run a high-parameter count LLaMA-based model locally (thanks to GPTQ) and "uncensored" is absolutely amazing to me, as it enables quick, (mostly) stylistically and semantically consistent text generation on a broad range of topics without having to spend money on a subscription. However, local generation does have hurdles, and getting SillyTavern to handle WizardLM together with its built-in World Info and the summarization feature provided by SillyTavern-extras took some finagling. 
The result seems pretty good though: https://preview.redd.it/v803qcmr7l1b1.png?width=1013&format=png&auto=webp&s=301d0866cf5e8f50db7a5f16847d920710728587 For the benefit of those interested in AI-assisted writing (and for my own record, should my PC catch fire from all the *intense inferencing*), I thought I should write down a short guide. First things first, a note on my setup: * I'm running all AI stuff on a WSL2 VM (Ubuntu 22.04) with an RTX 3090 and 64 GB of RAM. * The actual model used is the [WizardLM's-30B-Uncensored GPTQ 4bit-quantized version provided by TheBloke](https://huggingface.co/TheBloke/WizardLM-30B-Uncensored-GPTQ) * The backend for [SillyTavern](https://github.com/Cohee1207/SillyTavern) is provided by [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui), the GPTQ implementation is [iwalton3's GPTQ-for-LLaMa fork](https://github.com/iwalton3/GPTQ-for-LLaMa) providing support for act-order. Now for the actual settings: &#x200B; [Generation Settings](https://preview.redd.it/16wosahebl1b1.png?width=912&format=png&auto=webp&s=6bef884db0e4c9137d60b6edb0b87f8b4dd90434) &#x200B; [Advanced Formatting](https://preview.redd.it/37ulgzxsbl1b1.png?width=1824&format=png&auto=webp&s=4910b0a5b30250289bb976327a9bd1ca1376ea8e) Note that all AutoFormat Overrides are enabled, Instruct mode is active, Preset set to WizardLM, and the Tokenizer is Sentencepiece. The Anchors are disabled. The system prompt is modified from the default, which is guiding the model towards behaving like a chatbot. The prompt I use is the following: >This response is written in the style of a novel excerpt written in the third person by an omniscient narrator, containing vivid descriptions of each scene. Every response takes into account every previous response. The response stays in the present and does not contain information about the future. At the end of each response, there is a cliffhanger, staying in the present, leaving it open what happens next. 
Never end with a summary or moral of the story. > >Consider the following context when generating the response: The bit about not having a summary or "moral of the story" tries to prevent the model from spitting out such things at the end of every generation. &#x200B; [World Info Example](https://preview.redd.it/ei6bv43qcl1b1.png?width=1824&format=png&auto=webp&s=3a97647aa3aacc3bdb1a99f625287cfa4888e50c) The World Info entries should be kept very short, but improve generation noticeably by preventing the AI from forgetting the characters. Notice that I chose "After Char" insertion, which will place the entry below the "Consider the following context when generating the response:" in the System Prompt mentioned above. &#x200B; [Extensions](https://preview.redd.it/74cfpg5pdl1b1.png?width=1824&format=png&auto=webp&s=43ca1e1b4fcd76232faca4eea1460d8d900fef37) Note that the Author's Note stays empty, as I didn't really find a use for it. The "Chat memory" is kind of janky, because it sometimes misinterprets the story, but it can be manually edited. Note that it is regenerated every time a previous generation falls out of the token budget - which is pretty much every generation after the first few. The memory is inserted into the prompt below the "World Info" entries, directly in front of the instruction and prefixed by "Context:" &#x200B; &#x200B; [Character Tab 1](https://preview.redd.it/r9vmoajnel1b1.png?width=1824&format=png&auto=webp&s=088dd2f39c8f3afd60445966fe786e9cfcbb7710) [Character Tab 2](https://preview.redd.it/yj2f23ktel1b1.png?width=1824&format=png&auto=webp&s=f0c9abf063bd28fdb38c6d34e8ac1524d859db34) The Character is empty except for the Scenario field (and the Name). The Scenario prompt is inserted in front of all World Info entries right below the "Consider the following context when generating the response:" mentioned above. 
&#x200B; To visualize what's actually in the entire prompt sent to text-generation-webui, I have created this intricate diagram: [Prompt Burger](https://preview.redd.it/iixl1akzgl1b1.png?width=910&format=png&auto=webp&s=3197ab27930fdefbefbbaa56cea8a41407cd3a92) Note that the "previous generations" section is the one that gets reduced when all the other sections take up "too many" tokens. &#x200B; Now as always, your mileage may vary. Maybe WizardLM-30B-Uncensored-GPTQ is only good for weird cliché-ridden Until Dawn-based "aifiction"? Only one way to find out ;) And obviously, this post is probably going to be outdated by the end of the month, sooo - I hope this helps for the millisecond that it's relevant, and I'd love to hear feedback or improvements!
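The "burger" stacking order can be sketched in a few lines: fixed sections first (system prompt, scenario, world info, memory), then as many previous generations as the token budget allows, trimming the oldest first. Token counting here is approximated by whitespace words, and the function name and sample strings are illustrative, not SillyTavern's actual internals:

```python
# Assemble a prompt in the order the "burger" diagram describes, trimming the
# oldest history turns once the budget is exceeded.
def assemble_prompt(system, scenario, world_info, memory, history, budget=2048):
    fixed = [system, scenario, *world_info, f"Context: {memory}"]
    used = sum(len(part.split()) for part in fixed)
    kept = []
    for turn in reversed(history):      # walk newest-to-oldest
        cost = len(turn.split())
        if used + cost > budget:
            break                       # oldest turns fall off first
        kept.insert(0, turn)            # restore chronological order
        used += cost
    return "\n".join(fixed + kept)

prompt = assemble_prompt(
    "You are a narrator.", "A castle at dusk.",
    ["Alice: a knight."], "Alice arrived at the gate.",
    ["The gate creaked open.", "Alice stepped inside."],
)
print(prompt)
```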
2023-05-23T15:03:25
https://www.reddit.com/r/LocalLLaMA/comments/13pr3bh/instructguided_novel_writing_using/
bilwis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pr3bh
false
null
t3_13pr3bh
/r/LocalLLaMA/comments/13pr3bh/instructguided_novel_writing_using/
false
false
https://b.thumbs.redditm…KkapdGTZazCw.jpg
130
null
Can an LLM demand more memory than is physically available?
0
[removed]
2023-05-23T15:00:22
https://www.reddit.com/r/LocalLLaMA/comments/13pr05z/can_a_llm_demand_more_memory_than_is_physically/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pr05z
false
null
t3_13pr05z
/r/LocalLLaMA/comments/13pr05z/can_a_llm_demand_more_memory_than_is_physically/
false
false
default
0
null
Meanwhile here at LocalLLaMA..
716
2023-05-23T14:42:21
https://i.redd.it/jr463ta7fl1b1.png
MoneroBee
i.redd.it
1970-01-01T00:00:00
0
{}
13pqj3j
false
null
t3_13pqj3j
/r/LocalLLaMA/comments/13pqj3j/meanwhile_here_at_localllama/
false
false
https://b.thumbs.redditm…dIoSxiyjD4qA.jpg
716
{'enabled': True, 'images': [{'id': 'ROIALCegwgm_E2XHpaA-DA29woATZKxLTPj8YOtsnCE', 'resolutions': [{'height': 156, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?width=108&crop=smart&auto=webp&s=e81e8cdcad02bc05d1f5cfa401c7db4e48493087', 'width': 108}, {'height': 312, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?width=216&crop=smart&auto=webp&s=901753897965525029b8940b0c320379f4388322', 'width': 216}, {'height': 462, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?width=320&crop=smart&auto=webp&s=fac0fc9729bb0fc072a479fdd924082727d8baac', 'width': 320}], 'source': {'height': 748, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?auto=webp&s=5fc6e2a8cd6ace5520ec6b558b19be962cff0eac', 'width': 517}, 'variants': {}}]}
How can I run the TinyStories model on llama.cpp for conversation instead of completion?
1
[removed]
2023-05-23T13:27:35
https://www.reddit.com/r/LocalLLaMA/comments/13pojdd/how_can_i_run_the_tinystories_model_on_llamacpp/
su3su2u1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pojdd
false
null
t3_13pojdd
/r/LocalLLaMA/comments/13pojdd/how_can_i_run_the_tinystories_model_on_llamacpp/
false
false
default
1
null
What's the story behind Hugging Face?
14
What's the story behind Hugging Face? I have just found their website, and don't know what to make of it. Is there an article somewhere which describes their history, aims and products/services?
2023-05-23T12:26:11
https://www.reddit.com/r/LocalLLaMA/comments/13pn1a7/whats_the_story_behind_hugging_face/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pn1a7
false
null
t3_13pn1a7
/r/LocalLLaMA/comments/13pn1a7/whats_the_story_behind_hugging_face/
false
false
self
14
null
Desktop connected to the RunPod cloud Oobabooga, running TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g. These local models have so much potential!
24
2023-05-23T12:25:43
https://v.redd.it/8cogzmvrpk1b1
No_Wheel_9336
v.redd.it
1970-01-01T00:00:00
0
{}
13pn0v6
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8cogzmvrpk1b1/DASHPlaylist.mpd?a=1694951465%2CODQxMGQxNjBjZTY0NzY5ODFiMjgyZjA0Mzg3NjNhYzkwM2Y4YTYzMWRiNTgyMzQzMjNkMmYyMGM2YjI2ZDRlMA%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/8cogzmvrpk1b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/8cogzmvrpk1b1/HLSPlaylist.m3u8?a=1694951465%2CMWYzZjc0ZGM0NTU1MmM0MjAyZTRhYjFjYjM0NzIwZjE0ODQ2YTA1Y2JlYTYwOGUxYTFlY2NiMTA0MjU3ZTlmYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8cogzmvrpk1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 790}}
t3_13pn0v6
/r/LocalLLaMA/comments/13pn0v6/desktop_connected_to_the_runpod_cloud_oobabooga/
false
false
https://b.thumbs.redditm…MRLIUS6VG3wQ.jpg
24
{'enabled': False, 'images': [{'id': 'SleJWLSZhwK3tcsK1ghjR6he76HoDxjqiluCNhjl3i0', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?width=108&crop=smart&format=pjpg&auto=webp&s=90bb8fccde9b7da3a379f22e27f5d0cb9e8de49d', 'width': 108}, {'height': 196, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?width=216&crop=smart&format=pjpg&auto=webp&s=517300430329609e4b9ae3c048b41fa7dc618c74', 'width': 216}, {'height': 291, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?width=320&crop=smart&format=pjpg&auto=webp&s=dcb81fc1eb42c3277a39d68ec8f95f05a453f436', 'width': 320}, {'height': 583, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?width=640&crop=smart&format=pjpg&auto=webp&s=08de76196bfef62ad38625cc6db36a794fcb9912', 'width': 640}, {'height': 874, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?width=960&crop=smart&format=pjpg&auto=webp&s=37962d54cde08d0a7050889003f46cf9269bb965', 'width': 960}, {'height': 984, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=374c7fbc348d6922e7d07db84f62682b76cd2c1c', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://external-preview.redd.it/mY1XW6t196vQHamUE6hrkGM_ErtcK5WW31XzFRHPocc.png?format=pjpg&auto=webp&s=8413eaa8a07b72ecde66144ee342cc635e0300bb', 'width': 1128}, 'variants': {}}]}
(oobabooga) Is there any way to chat with more than one character at once?
2
[removed]
2023-05-23T10:38:13
https://www.reddit.com/r/LocalLLaMA/comments/13pkr0e/oobabooga_is_there_any_way_to_chat_with_more_than/
OobaboogaHelp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pkr0e
false
null
t3_13pkr0e
/r/LocalLLaMA/comments/13pkr0e/oobabooga_is_there_any_way_to_chat_with_more_than/
false
false
default
2
null
Is it possible to run the Wizard 7B GGML 4-bit model on my computer?
2
I'm totally new to the domain and trying to run the Wizard 7B GGML 4-bit model on my computer. I can't make it work well, and since I don't have a good config, I'm thinking that might be the cause. &#x200B; I have 8GB of RAM, an AMD Ryzen 7 3700U, and AMD Radeon Vega Mobile Gfx graphics. &#x200B; I tried running the model with llama.cpp on Windows (so on the CPU), but I get something like 1 token per 6 seconds. &#x200B; Does my configuration not allow me to run the model, or is it a configuration problem with llama.cpp?
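A quick sanity check is whether the model even fits in RAM. A rough rule of thumb for quantized GGML models is bits-per-weight divided by 8, plus some overhead for context and scales (the 1 GB overhead below is a ballpark assumption, not a measured figure):

```python
# Rough RAM estimate for a quantized model: params * bits/8 plus overhead.
def model_ram_gb(n_params_billion: float, bits: int, overhead_gb: float = 1.0) -> float:
    return n_params_billion * bits / 8 + overhead_gb

print(f"7B @ 4-bit: ~{model_ram_gb(7, 4):.1f} GB RAM")  # ~4.5 GB
```

~4.5 GB fits inside 8 GB total, but only barely once Windows and a browser take their share, so swapping plus the low-power 3700U together would explain the 1 token per 6 seconds.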
2023-05-23T09:35:15
https://www.reddit.com/r/LocalLLaMA/comments/13pjk09/is_it_possible_to_run_the_model_wizard_7b_gglm_4/
superanki
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pjk09
false
null
t3_13pjk09
/r/LocalLLaMA/comments/13pjk09/is_it_possible_to_run_the_model_wizard_7b_gglm_4/
false
false
self
2
null
Whats currently the best model for scientific paper writing?
9
I had good results writing papers with ChatGPT: just summarizing the main points I was trying to make for a particular section, then letting ChatGPT generate the final text. Basically, I supply the technical input and the model supplies the language. I'm not a native speaker, so this is obviously helpful for me. I tried this with the new WizardLM 30B uncensored in 4-bit, and it works okay. But maybe there are other models better suited for this?
2023-05-23T09:26:18
https://www.reddit.com/r/LocalLLaMA/comments/13pje5m/whats_currently_the_best_model_for_scientific/
IlIllIlllIlllIllll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pje5m
false
null
t3_13pje5m
/r/LocalLLaMA/comments/13pje5m/whats_currently_the_best_model_for_scientific/
false
false
self
9
null
Local LLMs that pass this test?
10
"I have 2 apples. I ate 1 banana. How many apples do I have now?" I was trying Wizard 33B GPTQ 4bit last night and was disappointed by this answer. No matter how many times I clicked "regenerate", it consistently gave wrong answers. Can you try your favorite local LLMs and see if they pass? However, I am impressed that the LLM tried to gaslight me, saying my question was unclear when confronted. https://preview.redd.it/nfnfs6rhpj1b1.png?width=1080&format=png&auto=webp&s=0a6effc161f331b88d7158930bf7e4226f4a18c2
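If several people run this test, a tiny shared checker keeps the scoring consistent across models. A sketch (the regex-based check is a loose heuristic and can be fooled, e.g. by a reply that mentions "2 apples" while concluding something else):

```python
# Check a model reply to the apple/banana question for the correct count.
import re

PROMPT = "I have 2 apples. I ate 1 banana. How many apples do I have now?"

def passes(reply: str) -> bool:
    # accept "2" or "two" anywhere in the reply as the apple count
    return bool(re.search(r"\b(2|two)\b", reply.lower()))

print(passes("You still have 2 apples."))   # a correct reply
print(passes("You now have one apple."))    # the common wrong answer
```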
2023-05-23T08:57:29
https://www.reddit.com/r/LocalLLaMA/comments/13pivzv/local_llms_that_pass_this_test/
Saren-WTAKO
self.LocalLLaMA
2023-05-23T09:13:55
0
{}
13pivzv
false
null
t3_13pivzv
/r/LocalLLaMA/comments/13pivzv/local_llms_that_pass_this_test/
false
false
https://b.thumbs.redditm…qBj2EWM6Axzg.jpg
10
null
How to fine tune / train an open source model?
27
[deleted]
2023-05-23T08:20:53
[deleted]
1970-01-01T00:00:00
0
{}
13pi9h8
false
null
t3_13pi9h8
/r/LocalLLaMA/comments/13pi9h8/how_to_fine_tune_train_an_open_source_model/
false
false
default
27
null
[ Removed by Reddit ]
166
[removed]
2023-05-23T07:45:10
https://www.reddit.com/r/LocalLLaMA/comments/13phmy0/removed_by_reddit/
Creative-Rest-2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13phmy0
false
null
t3_13phmy0
/r/LocalLLaMA/comments/13phmy0/removed_by_reddit/
false
false
default
166
null
Keeping an LLM on subject
1
I am using privateGPT with llama.cpp. If it gets a positive hit in the Chroma vector store, it's great and accurate. If it doesn't, it falls back to its trained knowledge. Are there any techniques to get it to just say it doesn't know? I have been looking at NVIDIA NeMo Guardrails, but that seems more like an NLU system to catch certain sentiments such as insults, which could be useful in some ways. https://github.com/NVIDIA/NeMo-Guardrails
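One common technique is a similarity threshold on the retrieval score: if the best vector-store hit is below a cutoff, short-circuit with "I don't know" instead of letting the model fall back on trained knowledge. A sketch of the control flow (`search`, `llm`, and the 0.35 threshold are all stand-ins to tune against your own Chroma scores, not privateGPT's actual API):

```python
# Refuse to answer when the best retrieval hit scores below a threshold.
def answer(question: str, search, llm, threshold: float = 0.35) -> str:
    doc, score = search(question)       # best hit and its similarity score
    if score < threshold:
        return "I don't know."          # no grounding -> refuse
    return llm(f"Answer only from this context:\n{doc}\n\nQ: {question}")

# toy stand-ins to show the control flow
fake_search = lambda q: ("Refunds take 30 days.", 0.9 if "refund" in q else 0.1)
fake_llm = lambda prompt: "Refunds take 30 days."

print(answer("How long do refunds take?", fake_search, fake_llm))
print(answer("Who won the 1998 World Cup?", fake_search, fake_llm))
```

Combining this with an "answer only from the context; if it is not there, say you don't know" instruction in the prompt usually works better than either alone.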
2023-05-23T05:40:18
https://www.reddit.com/r/LocalLLaMA/comments/13pfbk1/keeping_an_llm_on_subject/
megadonkeyx
self.LocalLLaMA
2023-05-23T06:00:01
0
{}
13pfbk1
false
null
t3_13pfbk1
/r/LocalLLaMA/comments/13pfbk1/keeping_an_llm_on_subject/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jWD991VzfuV_zlwEbdfXj9xZHb0GTG64p-F7C6HDdZU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?width=108&crop=smart&auto=webp&s=3325e379ae9469740b0f6be622454b8611e4ae4e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?width=216&crop=smart&auto=webp&s=6ec10809601e29e5ec43c683785b62294d1a2238', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?width=320&crop=smart&auto=webp&s=120d4259a92321877c35162a2b23c2a440897892', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?width=640&crop=smart&auto=webp&s=4f2440a128458f3fab34f3a8f28342b5e7c62778', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?width=960&crop=smart&auto=webp&s=6a5cbeb9fbeb5ef4130477b7afb1670eecd237fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?width=1080&crop=smart&auto=webp&s=fecb0a3dc5cc975122fc135920aeac4f792840a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YhcCBLjDxkS9_nkFlbBY2j5NH2OmQeJyT_82ojkRsWY.jpg?auto=webp&s=cbd033c6c01da513c20703fa1fa70ca2cbf5fc06', 'width': 1200}, 'variants': {}}]}
Meta AI announces Massive Multilingual Speech code, models for 1000+ languages
85
2023-05-23T05:34:02
https://github.com/facebookresearch/fairseq/tree/main/examples/mms
dagerdev
github.com
1970-01-01T00:00:00
0
{}
13pf7cm
false
null
t3_13pf7cm
/r/LocalLLaMA/comments/13pf7cm/meta_ai_announces_massive_multilingual_speech/
false
false
https://b.thumbs.redditm…k3Xbj1038zmM.jpg
85
{'enabled': False, 'images': [{'id': 'bWWjT8_ksza2jJ4nJ9S2yWMnlqwSuvgQKoig4BcjaoU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?width=108&crop=smart&auto=webp&s=3e4c74bcd50f1599ca51b7a786f2663240e23b2e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?width=216&crop=smart&auto=webp&s=123307d73bb8230634799c774893b7e2a17d960d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?width=320&crop=smart&auto=webp&s=a3f93a86b4e7c4d2ac4e760b1eb3ef580912734a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?width=640&crop=smart&auto=webp&s=b5afd44d9273e26fa361ef5358cfa2e4977676f2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?width=960&crop=smart&auto=webp&s=787e6926f989dd3ab43dfb561df1cf084caf3e91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?width=1080&crop=smart&auto=webp&s=62e86656da7d9e6c090c8042ebe704d310878c0c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A-GefEE77ReZxZf8OJKa-VZUdj2YJL41mj6sIgjn4Tg.jpg?auto=webp&s=ad57f71cbe9ec7d44ff5d7d001dafc1d0509acd9', 'width': 1200}, 'variants': {}}]}
cannot for the life of me compile libllama.dll
1
I searched through GitHub and nothing new comes up. I wanted to run the model through the C# wrapper linked on [LLaMASharp](https://github.com/SciSharp/LLamaSharp), which requires compiling llama.cpp and copying the libllama DLL into the C# project files. When I build llama.cpp with OpenBLAS, everything shows up fine on the command line. Just as the link suggests, I make sure to set DBUILD\_SHARED\_LIBS=ON in CMake. However, the Visual Studio Developer Command Prompt output ignores the setup for libllama.dll in CMakeLists.txt entirely; the only DLL to compile is llama.dll. I know this is a fairly technical question, but does anyone know how to fix this?
2023-05-23T02:58:11
https://www.reddit.com/r/LocalLLaMA/comments/13pbzcl/cannot_for_the_life_of_me_compile_libllamadll/
EmbarrassedSleep6981
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13pbzcl
false
null
t3_13pbzcl
/r/LocalLLaMA/comments/13pbzcl/cannot_for_the_life_of_me_compile_libllamadll/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HnmhCr9KpOY1JpoxPu0QbKiqBD659HvHAf1UUmqWpMM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=108&crop=smart&auto=webp&s=8fe5981522c1551224da419a3ae2356d37d3d7ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=216&crop=smart&auto=webp&s=63d8ddd2ac3b1bcf34c29dbb965f39307aae83dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=320&crop=smart&auto=webp&s=7d0f1fb21a2face61ef25dd5b270896111ec42a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=640&crop=smart&auto=webp&s=ad29b53b06105820ccc102b2b1bf04566d3a242b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=960&crop=smart&auto=webp&s=68448ed0af640ffccdf9069a6bba8e0c3c92dfab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=1080&crop=smart&auto=webp&s=529d0ac58accb5e7882820d93d492c97c731a5f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?auto=webp&s=9e847298d76898bc4fa232495de9ab5080b43e33', 'width': 1200}, 'variants': {}}]}
30b running slowly on 4090
8
I’m not sure what I’m doing wrong. I have a 4090 and 32 GiB of memory running on Ubuntu Server with an 11700K. Is it not enough memory? How much memory would I need? Could it be something else? I see other people say they can get about 15 tokens per second with a 4090.
2023-05-23T01:42:31
https://www.reddit.com/r/LocalLLaMA/comments/13paao9/30b_running_slowly_on_4090/
OldLostGod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13paao9
false
null
t3_13paao9
/r/LocalLLaMA/comments/13paao9/30b_running_slowly_on_4090/
false
false
self
8
null
Update: OpenCL is merged! AMD GPUs now work with llama.cpp
168
Compile with `LLAMA_CLBLAST=1 make`. Make sure you have OpenCL drivers installed. More info on the original post: https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/

Edit: Some speed benchmarks I did on my XTX with `WizardLM-30B-Uncensored.ggmlv3.q4_1`:

- All 60 layers offloaded to GPU: 22 GB VRAM usage, 8.5 tokens/s
- 52 layers offloaded: 19.5 GB VRAM, 6.1 tokens/s
- 27 layers offloaded: 11.3 GB VRAM, 4.0 tokens/s
2023-05-23T00:47:36
https://www.reddit.com/r/LocalLLaMA/comments/13p8zq2/update_opencl_is_merged_amd_gpus_now_work_with/
TeakTop
self.LocalLLaMA
2023-05-25T07:41:47
0
{}
13p8zq2
false
null
t3_13p8zq2
/r/LocalLLaMA/comments/13p8zq2/update_opencl_is_merged_amd_gpus_now_work_with/
false
false
self
168
null
llama.cpp has a long warmup time in the beginning
1
[removed]
2023-05-23T00:37:18
https://www.reddit.com/r/LocalLLaMA/comments/13p8r7e/llamacpp_has_a_long_warmup_time_in_the_beginning/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p8r7e
false
null
t3_13p8r7e
/r/LocalLLaMA/comments/13p8r7e/llamacpp_has_a_long_warmup_time_in_the_beginning/
false
false
default
1
null
Running RedPajama and other open LLMs on phones, browsers and AMD/NV/Intel GPUs
43
MLC LLM enables universal deployment of RedPajama-3B and other LLMs (Dolly, Vicuna, etc.) across different platforms with hardware acceleration. It uses ~2.2GB of memory, which most GPUs, MacBooks, and phones can accommodate. Check it out: https://mlc.ai/blog/2023/05/22/bringing-open-large-language-models-to-consumer-devices Our next step will be better documentation and making the compilation pipeline more accessible for DIY use and redistribution.
2023-05-23T00:30:22
https://www.reddit.com/r/LocalLLaMA/comments/13p8lba/running_redpajama_and_other_open_llms_on_phones/
yzgysjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p8lba
false
null
t3_13p8lba
/r/LocalLLaMA/comments/13p8lba/running_redpajama_and_other_open_llms_on_phones/
false
false
self
43
null
How to test if a model is truly UNCENSORED?
27
I downloaded the latest WizardLM-30B-Uncensored models (ggml) and wanted to check whether they're actually uncensored. But I still get "I'm sorry, as an AI model..." for certain prompts (e.g., the model refuses to write racist jokes, etc.) Is that what people mean by "uncensored"?
2023-05-23T00:29:18
https://www.reddit.com/r/LocalLLaMA/comments/13p8kfe/how_to_test_if_a_model_is_truly_uncensored/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p8kfe
false
null
t3_13p8kfe
/r/LocalLLaMA/comments/13p8kfe/how_to_test_if_a_model_is_truly_uncensored/
false
false
self
27
null
Could someone help me figure out what I'm doing wrong with recent models?
1
[removed]
2023-05-22T23:43:15
https://www.reddit.com/r/LocalLLaMA/comments/13p7h73/could_someone_help_me_figure_out_what_im_doing/
banzai_420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p7h73
false
null
t3_13p7h73
/r/LocalLLaMA/comments/13p7h73/could_someone_help_me_figure_out_what_im_doing/
false
false
default
1
null
New to this community, most models I download fail and end up in a core dump
1
Hi all,

Aside from a few models, when I load most models with llama.cpp (updated and compiled from master) I get an error such as the following:

➜ llama.cpp git:(master) ./main -m ./models/ggml-gpt4all-j-v1.3-groovy.bin -c 512 -b 1024 -n 256 --keep 48 \
    --repeat_penalty 1.0 --color -i \
    -r "User:" -f prompts/chat-with-bob.txt
main: build = 584 (2e6cd4b)
main: seed  = 1684793577
llama.cpp: loading model from ./models/ggml-gpt4all-j-v1.3-groovy.bin
terminate called after throwing an instance of 'std::runtime_error'
  what():  unexpectedly reached end of file
[1]    392757 IOT instruction (core dumped)  ./main -m ./models/ggml-gpt4all-j-v1.3-groovy.bin -c 512 -b 1024 -n 256 --kee

I'm a little lost as to what I can do to fix this. I read some stuff about the file format changing, and I'm wondering if that's what I'm running into and I just need to wait a few days until projects release updated versions of their quantized models.

Same issue with `ggml-vic7b-q4_0.bin`.
2023-05-22T22:17:14
https://www.reddit.com/r/LocalLLaMA/comments/13p5b18/new_to_this_community_most_models_i_download_fail/
evert
self.LocalLLaMA
2023-05-22T22:36:29
0
{}
13p5b18
false
null
t3_13p5b18
/r/LocalLLaMA/comments/13p5b18/new_to_this_community_most_models_i_download_fail/
false
false
self
1
null
I Want Speed Recommendations
1
Which ggml model gives you the smartest, fastest results? I'm talking 7B or lower. It doesn't have to be great; I'm looking for an oracle to spitball ideas with. Normally I use story dice or Magic cards, but I thought a very fast AI could work great as well. Thanks for your suggestions in advance.
2023-05-22T21:54:57
https://www.reddit.com/r/LocalLLaMA/comments/13p4phb/i_want_speed_reccomendations/
shamaalpacadingdong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p4phb
false
null
t3_13p4phb
/r/LocalLLaMA/comments/13p4phb/i_want_speed_reccomendations/
false
false
self
1
null
Anything using palm-2 in the open world?
1
[removed]
2023-05-22T20:32:57
https://www.reddit.com/r/LocalLLaMA/comments/13p2dce/anything_using_palm2_in_the_open_world/
suoko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p2dce
false
null
t3_13p2dce
/r/LocalLLaMA/comments/13p2dce/anything_using_palm2_in_the_open_world/
false
false
default
1
null
Small Uncensored LLM model to train cheaply for a specific task.
14
I’m looking for a small LLM model that I can train on my local machine with a basic GPU. I want to use it to create a very specific type of content. I have access to a large amount of training data for this type of content, but I’m not sure how to approach the whole problem. I am very new to fine-tuning and LLaMAs in general, but I have used OpenAI’s API to build a similar AI content generation tool. Any direction to solve this is highly appreciated.
2023-05-22T20:25:44
https://www.reddit.com/r/LocalLLaMA/comments/13p25qy/small_uncensored_llm_model_to_train_cheaply_for_a/
ImpressiveFault42069
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p25qy
false
null
t3_13p25qy
/r/LocalLLaMA/comments/13p25qy/small_uncensored_llm_model_to_train_cheaply_for_a/
false
false
self
14
null
zero-shot / few-shot / fine-tuning classification with Llama?
5
I'm trying to perform some zero-shot / few-shot classification with Llama. I tried `decapoda-research/llama-7b-hf` with a basic prompt. For example:

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
classify the news title and body into liberal or conservative.

### Input:
UN climate report: 'Atlas of human suffering' worse, bigger
news body: Deadly with extreme weather now, climate change is about to get so much worse. It is likely going to make the world sicker, hungrier, poorer, gloomier and way more dangerous in the next 18 years with an “unavoidable” increase in risks, a new United Nations science report says. And after that watch out. The UN Intergovernmental Panel on Climate Change report said Monday if human-caused global warming isn’t limited to just another couple tenths of a degree, an Earth now struck regularly by deadly heat, fires, floods and drought in...

[PROMPT_END]

**Output from Llama:**

Input: UN climate report: "Atlas of human suffering" worse, bigger better news body: Deadly with extreme weather now, climate change is about to get so much worse, it is likely likely likely likely likely .....

Any idea how to approach this? I saw [https://huggingface.co/tasks/zero-shot-classification](https://huggingface.co/tasks/zero-shot-classification) but I'm not exactly sure whether it will work with Llama.
2023-05-22T20:23:25
https://www.reddit.com/r/LocalLLaMA/comments/13p23ew/zeroshot_fewshot_finetuning_classification_with/
furciferX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p23ew
false
null
t3_13p23ew
/r/LocalLLaMA/comments/13p23ew/zeroshot_fewshot_finetuning_classification_with/
false
false
self
5
{'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]}
Can't get 30B models to work properly in oobabooga
8
I have 24GB VRAM and 64GB RAM, even with nothing else running, the 30B models will typically freeze after a couple of prompts. I am using 4bit quantized models and llama model type. Have tried cycling through different groupsizes. Resource monitor shows 100% GPU usage. I thought my specs should be enough to run 30B quantized models? Am I doing something wrong?
2023-05-22T19:42:15
https://www.reddit.com/r/LocalLLaMA/comments/13p0y4m/cant_get_30b_models_to_work_properly_in_oobabooga/
Commercial_Way_8217
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13p0y4m
false
null
t3_13p0y4m
/r/LocalLLaMA/comments/13p0y4m/cant_get_30b_models_to_work_properly_in_oobabooga/
false
false
self
8
null
Ideal Laptop
7
Hello r/LocalLLaMA, I'm shopping for a new laptop, my current one being a 16GB RAM MacBook Pro. The ideal use case would be to run local LLMs on my laptop. Ideally I don't want to have to buy a GPU, so I'm thinking a lot of RAM is probably what I need. Are the new MacBooks with ~96GB RAM something I should be looking into? I don't want to buy a desktop, since I'll be traveling and desktops are a bit restrictive. What are y'all using and what do you recommend?
2023-05-22T19:00:49
https://www.reddit.com/r/LocalLLaMA/comments/13oztjn/ideal_laptop/
aldarisbm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13oztjn
false
null
t3_13oztjn
/r/LocalLLaMA/comments/13oztjn/ideal_laptop/
false
false
self
7
null
🚨 How to deploy local models to Google Colab/Azure/etc.
6
I can run some models on M1 Mac, but I'd like to use the 65B models. As a student, I have free access to Azure. Is it possible to use that infrastructure to run such large models? How about Colab?
2023-05-22T18:47:13
https://www.reddit.com/r/LocalLLaMA/comments/13ozgej/how_to_deploy_local_models_to_google_colabazureetc/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ozgej
false
null
t3_13ozgej
/r/LocalLLaMA/comments/13ozgej/how_to_deploy_local_models_to_google_colabazureetc/
false
false
self
6
null
airoboros-7b
85
Pick your poison:

- https://huggingface.co/jondurbin/airoboros-7b
- https://huggingface.co/jondurbin/airoboros-7b-ggml-f32
- https://huggingface.co/jondurbin/airoboros-7b-ggml-f16
- https://huggingface.co/jondurbin/airoboros-7b-ggml-q8_0
- https://huggingface.co/jondurbin/airoboros-7b-ggml-q5_1
- https://huggingface.co/jondurbin/airoboros-7b-ggml-q4_0

More info about training, eval, etc. can be found in the 13b repo, and a reminder that:

1. This is all trained on 100% AI-generated training data (instructions and responses).
2. The point was not to make an incredible model, but rather to prove that in some ways, smaller models can be quickly and easily tuned to match the style and responses of better models by fine-tuning on synthetic, random training data.
3. The training data for this particular model is here: https://huggingface.co/datasets/jondurbin/airoboros-uncensored

I hope people will find the training data generation tool useful, and come up with better prompts with more complex questions, including multi-step reasoning/theory of mind, etc.
2023-05-22T17:46:32
https://www.reddit.com/r/LocalLLaMA/comments/13oxr6z/airoboros7b/
JonDurbin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13oxr6z
false
null
t3_13oxr6z
/r/LocalLLaMA/comments/13oxr6z/airoboros7b/
false
false
self
85
{'enabled': False, 'images': [{'id': '3dEWqzEOS8nFBfNpUZ3gTC5On3cSB4SQNDtsmSxToCY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?width=108&crop=smart&auto=webp&s=d5932439417948fdd15bc8c62102ac2efea23371', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?width=216&crop=smart&auto=webp&s=c4d3e60156b5ec05843f765ee54be64f1a1c7324', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?width=320&crop=smart&auto=webp&s=a18eedabe58bb8238b3180e43c872dfd4dcd84ee', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?width=640&crop=smart&auto=webp&s=70ac10f1d6d2b54dfe552d363ffb851c2f88779c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?width=960&crop=smart&auto=webp&s=48c7fb433e83e39d290f3d31e67ef21eca398341', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?width=1080&crop=smart&auto=webp&s=5a7b9c530333e20d41dd67639df1b2b3acf286cc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8v6sq83tpN4pCzqsv9bWIjAbbi3PextETaN4yIMZL7w.jpg?auto=webp&s=e640eaea5b22f4e615345bb816e0ffb4d73c3717', 'width': 1200}, 'variants': {}}]}
Newbie here. Need help with choosing a llm model for pdf ingestion and summarization locally
6
I have a gaming laptop with 16GB RAM, an AMD Ryzen 7 CPU, and a 6700M 10GB GPU. I'm looking to install an LLM locally (and keep it offline if possible) to ingest PDFs (mostly code documentation, tutorial books, etc.) and then summarize concepts and answer what and how-to questions on those concepts. I'd also like to ingest storybooks and summarize them to help my bf with his PhD. Does Alpaca 13B (4-bit quant) serve this purpose? Or [Private GPT](https://www.youtube.com/watch?v=oX5LsiuRkLM)? Please suggest. I want to run the model on the GPU for faster results. Also, do I need to train them? I have no knowledge of ML; I'm a regular backend developer. Thanks.
2023-05-22T17:25:25
https://www.reddit.com/r/LocalLLaMA/comments/13ox6r3/newbie_here_need_help_with_choosing_a_llm_model/
MexicanJalebi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ox6r3
false
null
t3_13ox6r3
/r/LocalLLaMA/comments/13ox6r3/newbie_here_need_help_with_choosing_a_llm_model/
false
false
self
6
{'enabled': False, 'images': [{'id': '118B2cshwj2mSijBF-qtIF58AaM7gAvK7glTehl6X_k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?width=108&crop=smart&auto=webp&s=b7cec692ca08e6fcf11b5de302973fa544fd5938', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?width=216&crop=smart&auto=webp&s=e4280d78a1755324b19d1f7d95603f3c8af8f301', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?width=320&crop=smart&auto=webp&s=50c34c14549f553429edabde0225db26362a3d36', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?auto=webp&s=01fa0c4df13938672a8444c1aa287f328ed6ffb9', 'width': 480}, 'variants': {}}]}
Train Large Language Models (LLM) using LoRA
36
2023-05-22T15:30:59
https://github.com/rhulha/lora
paranoidray
github.com
1970-01-01T00:00:00
0
{}
13ou59l
false
null
t3_13ou59l
/r/LocalLLaMA/comments/13ou59l/train_large_language_models_llm_using_lora/
false
false
default
36
null
WizardLM-30B-Uncensored
686
Today I released WizardLM-30B-Uncensored. [https://huggingface.co/ehartford/WizardLM-30B-Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored) Standard disclaimer - just like a knife, lighter, or car, you are responsible for what you do with it. Read my blog [article](https://erichartford.com/uncensored-models), if you like, about why and how. A few people have asked, so I put a buy-me-a-coffee link in my profile. Enjoy responsibly. Before you ask - yes, 65b is coming, thanks to a generous GPU sponsor. And I don't do the quantized/GGML versions; I expect they will be posted soon.
2023-05-22T12:07:50
https://www.reddit.com/r/LocalLLaMA/comments/13op1sd/wizardlm30buncensored/
faldore
self.LocalLLaMA
2023-05-22T12:43:39
1
{'gid_2': 1}
13op1sd
false
null
t3_13op1sd
/r/LocalLLaMA/comments/13op1sd/wizardlm30buncensored/
false
false
self
686
{'enabled': False, 'images': [{'id': 'ywT2smm0IqTqqbq4_29s6rsXmM0IP5bSJvdaC-klH0w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=108&crop=smart&auto=webp&s=60dcf7fd7ae53c3690cdd8124a8a8eb419955f4b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=216&crop=smart&auto=webp&s=c6a3eb4eab7457ff8ded6626d9cf19bbb13b0b7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=320&crop=smart&auto=webp&s=35d6f5083d13671cde033d40e49f51b3b89d36ac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=640&crop=smart&auto=webp&s=c92e3eb631752e2a219439e0db4d8dca26bd2dae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=960&crop=smart&auto=webp&s=89a7b8f4cf8bcae2eda828b23002245983e04418', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=1080&crop=smart&auto=webp&s=12b6861d9d341d7bd765dc9206c93beeeb7ce915', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?auto=webp&s=9a575e40f78505315258e75792b7adf47e508916', 'width': 1200}, 'variants': {}}]}
llama-30b-supercot surpassed 65b models on HF leaderboard
42
[https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) You may have to press the Refresh button to see the updated list. There isn't much information about SuperCOT, and no GGML version yet, even though the model is already a month old. Very mysterious model.
2023-05-22T12:00:47
https://www.reddit.com/r/LocalLLaMA/comments/13oovzt/llama30bsupercot_surpassed_65b_models_on_hf/
MustBeSomethingThere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13oovzt
false
null
t3_13oovzt
/r/LocalLLaMA/comments/13oovzt/llama30bsupercot_surpassed_65b_models_on_hf/
false
false
self
42
{'enabled': False, 'images': [{'id': '2yXkO2nXyv2ynd0Gc85xzzHWd7q-pzJRTeM5uxEBdoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=108&crop=smart&auto=webp&s=7c3bb0e464c062e6518a90b686b3544dad39673d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=216&crop=smart&auto=webp&s=6c25136371e9056c3998c03e64e73605446a33ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=320&crop=smart&auto=webp&s=30c559b0a3b92cbca6df2ffce369af9f85ccd82d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=640&crop=smart&auto=webp&s=9cd841171a06a0d0a5be5ca54c5bbc731ae610af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=960&crop=smart&auto=webp&s=a1ce8b1063692ab2b2d978ab9459f34cc311ced2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=1080&crop=smart&auto=webp&s=ddc039e579cbc6105b7c11bc9be89382f69290ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?auto=webp&s=dff5ffff20b56c519b288a1462cbab0c2de6f313', 'width': 1200}, 'variants': {}}]}
TinyStories: The Smallest GPT with Coherent English (by Microsoft)
137
Turns out, even 2.5M (yep, not B) parameters are enough to generate coherent text. The main obstacle to full language understanding for transformers is the huge number of rare words (the long tail of the distribution). But if you compile a training dataset from the ~1.5k most frequent roots (the vocabulary of a roughly 5-year-old child), then even a single-layer GPT can be trained in such a way that it outperforms GPT2-XL. This wonderful dataset, written by GPT-4, is perfect for validating new architectures, and it even confirms Chinchilla scaling. So if you want to invent your own "SuperGPT" architecture, I recommend experimenting with this dataset. The dataset is only 3 GB. One conclusion that is already common: it's better to scale GPT in depth rather than in width. [Article](https://arxiv.org/abs/2305.07759), [dataset](https://huggingface.co/datasets/roneneldan/TinyStories), [model](https://huggingface.co/papers/2305.07759)
2023-05-22T11:34:26
https://www.reddit.com/r/LocalLLaMA/comments/13ooc3o/tinystories_the_smallest_gpt_with_coherent/
Shir_man
self.LocalLLaMA
2023-05-22T17:27:17
0
{}
13ooc3o
false
null
t3_13ooc3o
/r/LocalLLaMA/comments/13ooc3o/tinystories_the_smallest_gpt_with_coherent/
false
false
self
137
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Using an LLM to predict Instagram video performance?
3
I have an Instagram channel with about 1,000 videos up. The audience is not interested in the videos because of the visuals; they are interested in what is being spoken. I was wondering if there's a way to train a model from the Llama zoo to predict the performance of a video if I feed it the transcript. My idea is to categorize all videos in terms of how they performed: performance was High if the video was in the top 33% of videos by views in the month it was posted, Medium if it falls into the 33%-66% bracket, and Low if it was in the bottom 33% of videos. I've dabbled with classification-related tasks using RoBERTa. I was wondering if training a LoRA with a 7B model would perform better, and which model I should use - base LLaMA or one of the fine-tuned ones? Anything else that I might be missing that I should consider? If using one of the fine-tuned models, should I add an instruction to go with it?
2023-05-22T11:20:22
https://www.reddit.com/r/LocalLLaMA/comments/13oo20b/using_an_llm_to_predict_instagram_video/
regstuff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13oo20b
false
null
t3_13oo20b
/r/LocalLLaMA/comments/13oo20b/using_an_llm_to_predict_instagram_video/
false
false
self
3
null
Zicklein - a German 🇩🇪 finetuned LlaMA-7b base model (OS)
34
Zicklein is a German version of Alpaca 7b fine-tuned using LoRA, trained on a German-translated version of the cleaned Alpaca instruct dataset. GitHub: [https://github.com/avocardio/zicklein](https://github.com/avocardio/zicklein) HuggingFace: [https://huggingface.co/avocardio/alpaca-lora-7b-german-base-52k](https://huggingface.co/avocardio/alpaca-lora-7b-german-base-52k) You can also try it out [here](https://huggingface.co/spaces/avocardio/German-Alpaca-LoRA-7b) (although it's super slow - running on a CPU, responses take around 130s).
2023-05-22T11:15:03
https://www.reddit.com/r/LocalLLaMA/comments/13onyco/zicklein_a_german_finetuned_llama7b_base_model_os/
capital-man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13onyco
false
null
t3_13onyco
/r/LocalLLaMA/comments/13onyco/zicklein_a_german_finetuned_llama7b_base_model_os/
false
false
self
34
{'enabled': False, 'images': [{'id': 'QZJ6rDS2xcs16sJyKUjWzrJnYKUlC5nssuE8knyORAQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=108&crop=smart&auto=webp&s=0223d3b56f68a94324092e5cf98a219c8bccb8b8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=216&crop=smart&auto=webp&s=0ef48f38ea489178dc99e5ab6542d39f58f5d4fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=320&crop=smart&auto=webp&s=727d7e74ab2c49c72935f2f1717092fbc372d40e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=640&crop=smart&auto=webp&s=4f071d04c438a0414a2fc7cc493ee08003816a1b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=960&crop=smart&auto=webp&s=440be2cecc87c75b8fd6f35687c2b92e55cfa7a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=1080&crop=smart&auto=webp&s=fe1b08fefc5e6a39f4930f422b1582b189bf07cb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?auto=webp&s=54d79ee2213ae6f0cc6b8742c37822937ed44517', 'width': 1200}, 'variants': {}}]}
Zicklein - a German 🇩🇪 fine-tuned LlaMA-7b base model (OS)
1
[deleted]
2023-05-22T11:10:22
[deleted]
1970-01-01T00:00:00
0
{}
13onv0c
false
null
t3_13onv0c
/r/LocalLLaMA/comments/13onv0c/zicklein_a_german_finetuned_llama7b_base_model_os/
false
false
default
1
null
Llama.cpp binary for OLD computer with GPU?
5
I'd like to try the GPU splitting option, and I have a NVIDIA GPU, however my computer is very old so I'm currently using the bin-win-avx-x64.zip release of llama.cpp. Is there a compiled llama.cpp exe that supports the --gpu-layers option, but doesn't require an AVX2 capable CPU? Or would I have to compile my own?
2023-05-22T10:59:02
https://www.reddit.com/r/LocalLLaMA/comments/13onm7n/llamacpp_binary_for_old_computer_with_gpu/
Robot_Graffiti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13onm7n
false
null
t3_13onm7n
/r/LocalLLaMA/comments/13onm7n/llamacpp_binary_for_old_computer_with_gpu/
false
false
self
5
null
Local GPT (completely offline and no OpenAI!)
26
For those of you who are into downloading and playing with Hugging Face models and the like, check out my project that allows you to chat with PDFs, or use the normal chatbot-style conversation with the LLM of your choice (GGML / llama.cpp compatible), completely offline! Drop a star if you like it.
2023-05-22T10:23:58
https://github.com/jlonge4/local_llama
Jl_btdipsbro
github.com
1970-01-01T00:00:00
0
{}
13omy9c
false
null
t3_13omy9c
/r/LocalLLaMA/comments/13omy9c/local_gpt_completely_offline_and_no_openai/
false
false
https://b.thumbs.redditm…F7igD-bKo3VU.jpg
26
{'enabled': False, 'images': [{'id': 'UFi4l0MOE2C0JSggO1LtPttSGlhXgc31SeEVFPFndvU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=108&crop=smart&auto=webp&s=cddd2a19507e78766c2b75e3574ffd5c82d84f05', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=216&crop=smart&auto=webp&s=edd56d5518d60e0b238cbf2d06db079f70a9a956', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=320&crop=smart&auto=webp&s=01e3fb4166977c6ab5d56960568e0a543b17f9f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=640&crop=smart&auto=webp&s=ed442983376c54db0bea65cee15e121e67e9ea89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=960&crop=smart&auto=webp&s=34b0bdfecbd64e512a40cd413ca074a58698cb39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=1080&crop=smart&auto=webp&s=2de97cfd38ae9aae96733b8772a4d18b936879f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?auto=webp&s=93d521efdd363eccaaec805576ad8fab07d02e21', 'width': 1200}, 'variants': {}}]}
Nvidia Tesla M40 vs P40.
7
I'm considering starting as a hobbyist. Thing is, I'd like to run the bigger models, so I'd need at least 2, if not 3 or 4, 24 GB cards. I read the P40 is slower, but I'm not terribly concerned by the speed of the response. I'd rather get a good reply slower than a fast, less accurate one due to running a smaller model. My question is, how slow would it be on a cluster of M40s vs P40s to get a reply from a question-answering model of 30b or 65b? Is there anything I wouldn't be able to do with the M40, due to firmware limitations or the like?

Thank you.
2023-05-22T09:57:55
https://www.reddit.com/r/LocalLLaMA/comments/13omfzw/nvidia_tesla_m40_vs_p40/
SirLordTheThird
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13omfzw
false
null
t3_13omfzw
/r/LocalLLaMA/comments/13omfzw/nvidia_tesla_m40_vs_p40/
false
false
self
7
null
Introducing Kobold-Assistant
1
[removed]
2023-05-22T08:46:43
https://www.reddit.com/r/LocalLLaMA/comments/13ol4mz/introducing_koboldassistant/
_Lee_B_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ol4mz
false
null
t3_13ol4mz
/r/LocalLLaMA/comments/13ol4mz/introducing_koboldassistant/
false
false
default
1
null
airoboros uncensored dataset (98% chatGPT3.5) released (thanks jondurbin)
65
2023-05-22T08:25:52
https://huggingface.co/datasets/jondurbin/airoboros-uncensored
_underlines_
huggingface.co
1970-01-01T00:00:00
0
{}
13oksgq
false
null
t3_13oksgq
/r/LocalLLaMA/comments/13oksgq/airoboros_uncensored_dataset_98_chatgpt35/
false
false
https://b.thumbs.redditm…bLb90UaI1ILk.jpg
65
{'enabled': False, 'images': [{'id': 'l2t25VWYBLjqCy2iWN4UskcCQqQfq1kt7D8Dloe2MhY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=108&crop=smart&auto=webp&s=31d2300ef7152a8b2cebab69493c4b277d70bb0a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=216&crop=smart&auto=webp&s=ad9f1441c2abf01e40bb2ef98c9fdbaf8203f46d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=320&crop=smart&auto=webp&s=2cd1ab22afc7788dab188da8f3f9b16ef1324dec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=640&crop=smart&auto=webp&s=088dd7cafba449f9c2729fb3d0d5679a28aa2768', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=960&crop=smart&auto=webp&s=0e18af5ca6d66f093f16421404e0d035710a4cdd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=1080&crop=smart&auto=webp&s=61ae65fb99c7a9c98be43f70b9b8343c9133fcbf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?auto=webp&s=e66881c16d7e137715e65f1eb36a77008118b437', 'width': 1200}, 'variants': {}}]}
airoboros dataset (98% chatGPT3.5) released its uncensored dataset (thanks jondurbin)
1
[deleted]
2023-05-22T08:24:55
[deleted]
1970-01-01T00:00:00
0
{}
13okruj
false
null
t3_13okruj
/r/LocalLLaMA/comments/13okruj/airoboros_dataset_98_chatgpt35_released_its/
false
false
default
1
null
Deploying LLM on Cloud
4
I want to create a product which uses open source LLM as its backend. How do I deploy that LLM to Google Cloud or AWS? Is there any tutorial for that?
2023-05-22T08:12:35
https://www.reddit.com/r/LocalLLaMA/comments/13okjzz/deploying_llm_on_cloud/
Proof_Mouse9105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13okjzz
false
null
t3_13okjzz
/r/LocalLLaMA/comments/13okjzz/deploying_llm_on_cloud/
false
false
self
4
null
With a single 3090, which model is finetune-able and decent reasoning ability
18
Hello guys, a 3090 is what I can get, and if I want to make my own model, which model should I choose? Hopefully it has decent reasoning ability; by using LangChain I hope I can solve the accuracy problem.
2023-05-22T07:40:27
https://www.reddit.com/r/LocalLLaMA/comments/13ojysf/with_a_single_3090_which_model_is_finetuneable/
JohnSmith004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ojysf
false
null
t3_13ojysf
/r/LocalLLaMA/comments/13ojysf/with_a_single_3090_which_model_is_finetuneable/
false
false
self
18
null
Using a local LLM for large-scale text analysis
9
Hi all, I'm looking to run a local LLM for large-scale text analysis. I work in a law-enforcement-related environment, and we have a ton of data (think chat logs, phone transcripts) that I would like to query. The data is so vast that we don't have the manpower to search it, and we would like to make use of an LLM. For (hopefully) obvious reasons this has to be in a local environment, as the data is sensitive and we cannot run any risk of sharing it with a third party. Most of the LLMs I see here seem to be catered to uncensored text generation and the like; is there also something more akin to what I'm looking for? Thanks in advance!
2023-05-22T07:36:11
https://www.reddit.com/r/LocalLLaMA/comments/13ojvzo/using_a_local_llm_for_largescale_text_analysis/
panthaduprince
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ojvzo
false
null
t3_13ojvzo
/r/LocalLLaMA/comments/13ojvzo/using_a_local_llm_for_largescale_text_analysis/
false
false
self
9
null
PSA: There is no 30B LLaMA model; it was a typo. The actual model has 33B parameters; please stop referring to it as "LLaMA-30B."
29
[https://github.com/facebookresearch/llama/issues/49](https://github.com/facebookresearch/llama/issues/49) It seems like the majority of people here believe that Meta AI's second-largest LLaMA model has 30 billion parameters, which is incorrect, and I felt the need to correct that. Another mistake I see people make is mistyping [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) as LLaMa.
2023-05-22T07:27:18
https://www.reddit.com/r/LocalLLaMA/comments/13ojqaj/psa_there_is_no_30b_llama_model_it_was_a_typo_the/
Adventurous_Jelly276
self.LocalLLaMA
2023-05-22T11:30:36
0
{}
13ojqaj
false
null
t3_13ojqaj
/r/LocalLLaMA/comments/13ojqaj/psa_there_is_no_30b_llama_model_it_was_a_typo_the/
false
false
self
29
{'enabled': False, 'images': [{'id': 'OuBKEiVtkvFhX7rAH6Id3Gj5fkda21RfNlBuukX3nMw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=108&crop=smart&auto=webp&s=69a988ff03e69e5ba364d8e7548101e93c3077c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=216&crop=smart&auto=webp&s=9f6dab2d8cca6d5e681499751b8601581f3879ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=320&crop=smart&auto=webp&s=69e9d47e83a681f1ef54d1597d69f8e39bd62f91', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=640&crop=smart&auto=webp&s=078f00e443307931a2e4ebddeb338e73a0c53b1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=960&crop=smart&auto=webp&s=1a26a2ee087c71d197c4400bdf4fbfbf230a9579', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=1080&crop=smart&auto=webp&s=1976bfa90c8126b54fe159919e0d8cce067c40d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?auto=webp&s=38638c91a9a13b491c111878d58682aabbddb643', 'width': 1200}, 'variants': {}}]}
LIMA: Less Is More for Alignment
45
2023-05-22T06:23:01
https://arxiv.org/abs/2305.11206
ambient_temp_xeno
arxiv.org
1970-01-01T00:00:00
0
{}
13oijm0
false
null
t3_13oijm0
/r/LocalLLaMA/comments/13oijm0/lima_less_is_more_for_alignment/
false
false
https://b.thumbs.redditm…RcOb9hTLQ7cI.jpg
45
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Manticore 13B (formerly Wizard Mega 13B) is now the top of the pack of 13B models
92
See [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) on HF. You'll need to click the "Refresh" button near the bottom as HF heavily caches stale data for that page. We probably haven't even pushed the limits of what's possible with 13B either. But we're still hoping to scale [Manticore](https://huggingface.co/openaccess-ai-collective/manticore-13b) to 30B and 65B soon! https://preview.redd.it/glbdxer9hb1b1.png?width=3108&format=png&auto=webp&s=ad57869ee6835fc512854eda385a98e6e6b3d944
2023-05-22T05:18:28
https://www.reddit.com/r/LocalLLaMA/comments/13ohbvp/manticore_13b_formerly_wizard_mega_13b_is_now_the/
winglian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ohbvp
false
null
t3_13ohbvp
/r/LocalLLaMA/comments/13ohbvp/manticore_13b_formerly_wizard_mega_13b_is_now_the/
false
false
https://b.thumbs.redditm…Px1-hIEEJo3Y.jpg
92
{'enabled': False, 'images': [{'id': '2yXkO2nXyv2ynd0Gc85xzzHWd7q-pzJRTeM5uxEBdoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=108&crop=smart&auto=webp&s=7c3bb0e464c062e6518a90b686b3544dad39673d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=216&crop=smart&auto=webp&s=6c25136371e9056c3998c03e64e73605446a33ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=320&crop=smart&auto=webp&s=30c559b0a3b92cbca6df2ffce369af9f85ccd82d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=640&crop=smart&auto=webp&s=9cd841171a06a0d0a5be5ca54c5bbc731ae610af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=960&crop=smart&auto=webp&s=a1ce8b1063692ab2b2d978ab9459f34cc311ced2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=1080&crop=smart&auto=webp&s=ddc039e579cbc6105b7c11bc9be89382f69290ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?auto=webp&s=dff5ffff20b56c519b288a1462cbab0c2de6f313', 'width': 1200}, 'variants': {}}]}
Training Large Language Models: Fluctuating Training Loss But Smooth Eval Loss, What's Happening?
3
Hey folks, I've been training a llama-7b recently and have run into a bit of an issue I hope someone might be able to shed some light on. While my evaluation loss is decreasing smoothly (as expected), my training loss seems to be oscillating on a regular cycle while also trending downwards. I used datasets such as Dolly, ShareGPT, and Self-Instruct. [Train loss](https://preview.redd.it/yt1gi756fb1b1.png?width=1898&format=png&auto=webp&s=0aa76e0c6fd83b3253cf5fa757e5653cb202c9bd) [Eval loss](https://preview.redd.it/qc4smnh7fb1b1.png?width=1889&format=png&auto=webp&s=906a8b796c7ffa5314fe06d0917527529c4faa5b) Here's what I'm seeing: * The eval loss decreases smoothly, which indicates to me that the model is learning something useful. * However, the training loss decreases in a jagged, oscillating manner: it's trending downwards overall, but it spikes at regular intervals. I'm using four GPUs for the training and have set the batch size and micro-batch size based on their memory limits, but I can't figure out why I'm seeing this oscillating pattern in the training loss. Is this normal? Am I doing something wrong? What factors might be contributing to this? Any insights or suggestions would be greatly appreciated.
I used the alpaca-lora training code from [https://github.com/tloen/alpaca-lora](https://github.com/tloen/alpaca-lora). Here are my training arguments: #!/bin/bash DATA_DIR="databricks/databricks-dolly-15k" EPOCH=10 LR=1e-5 SAVE_DIR="llama-7b-lora-dolly-lr-$LR" wandb online export WANDB_PROJECT="llm-comparision" CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch --master_port=34322 --nproc_per_node 2 train_dolly.py \ --fp16 \ --base_model 'decapoda-research/llama-7b-hf' \ --data_path $DATA_DIR \ --output_dir ckpt/$SAVE_DIR \ --batch_size 1024 \ --micro_batch_size 128 \ --num_epochs $EPOCH \ --learning_rate $LR \ --cutoff_len 512 \ --val_set_size 2000 \ --lora_r 8 \ --lora_alpha 16 \ --lora_dropout 0.05 \ --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \ --train_on_inputs \ --logging_steps 1 \ --eval_steps 1 \ --weight_decay 0. \ --warmup_steps 0 \ --warmup_ratio 0.1 \ --lr_scheduler_type "cosine" \ --wandb_run_name $SAVE_DIR \ --group_by_length > logs/$SAVE_DIR.log 2>&1 & echo logs/$SAVE_DIR.log gradient_accumulation_steps(8) = batch_size(1024) // micro_batch_size(128) Thanks in advance!
2023-05-22T05:04:34
https://www.reddit.com/r/LocalLLaMA/comments/13oh26g/training_large_language_models_fluctuating/
Traditional-Gain-593
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13oh26g
false
null
t3_13oh26g
/r/LocalLLaMA/comments/13oh26g/training_large_language_models_fluctuating/
false
false
https://b.thumbs.redditm…vhOmblRzeWds.jpg
3
null