| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PSA: New Nvidia driver 536.23 still bad, don't waste your time | 65 | The driver was just released and I tried it hoping the issue was resolved. No luck, it's still way slower than 531.79 when running close to max VRAM capacity (long context length).
This was a quick test on a 4090, Win11, Windows installation of oobabooga (not WSL), AutoGPTQ.
(I'm just a dabbler so maybe it's good if another user tests and confirms this) | 2023-06-14T13:52:01 | https://www.reddit.com/r/LocalLLaMA/comments/1498gdr/psa_new_nvidia_driver_53623_still_bad_dont_waste/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1498gdr | false | null | t3_1498gdr | /r/LocalLLaMA/comments/1498gdr/psa_new_nvidia_driver_53623_still_bad_dont_waste/ | false | false | self | 65 | null |
Update on my new agent library, now called `agency` | 11 | Hello r/LocalLLaMa!
You might've seen a post two weeks back where I excitedly announced a new agent related library I was calling "everything". It was still not quite ready and I got a ton of really thoughtful feedback from here that I'm so grateful for. I've been super busy since.
First, I renamed the project to [`agency`](https://github.com/operand/agency). Much better I think!
And I've done a lot of work to simplify, improve, document, and finish up the API for a real release.
A lot has changed, so if you read the readme before, note that it's been largely redone. It now contains a detailed, working walkthrough of building an agent-integrated system that includes multiple types of agents, operating system integration, access control, and a Flask+React-based web application where users appear as individual "agents" as well.
`agency` differs from other agent libraries, most importantly in that it focuses on a distinct part of the overall problem, that of agent integration. Many more details in the readme.
Also worth noting here is that I *just* pushed an update to integrate with the brand new [functions support on the OpenAI API](https://openai.com/blog/function-calling-and-other-api-updates)!
I also plan to make a detailed video walkthrough soon and I'll add it to the project page and post here when I do.
I'm eager to hear what people think! I developed this in order to build a foundation for some of my own ambitious ideas. If you find this useful for your projects I'd love to know!
Thanks for checking it out! I hope this helps you build amazing things! ❤️
[https://github.com/operand/agency](https://github.com/operand/agency) | 2023-06-14T13:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/14985gw/update_on_my_new_agent_library_now_called_agency/ | helloimop | self.LocalLLaMA | 2023-06-14T15:07:44 | 0 | {} | 14985gw | false | null | t3_14985gw | /r/LocalLLaMA/comments/14985gw/update_on_my_new_agent_library_now_called_agency/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'QxvGpt1V4qdsVLmrWfFUgfPo91Y1fgxqs8Ul2uUyiBw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=108&crop=smart&auto=webp&s=b8974706ac051d75975ba5ea77014038801a627b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=216&crop=smart&auto=webp&s=7fef51a07cb00e683a9de25adf7a956e5b2c60ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=320&crop=smart&auto=webp&s=2fef9142d8a96f3af1b1af80be9af6031e08f9e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=640&crop=smart&auto=webp&s=cd49a0ad8071d0bac5eb8491a94897b824403bf2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=960&crop=smart&auto=webp&s=a88dceacd2811199144d0610c03a597ed31673b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=1080&crop=smart&auto=webp&s=5b9f9afd2fd29bb0785bf17410dd5400fdbe6506', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?auto=webp&s=8fce127c2e1cce4e0dbf5ed4210331615704bdfe', 'width': 1200}, 'variants': {}}]} |
Joining the Blackout: Private every Tuesday | 25 | [removed] | 2023-06-14T10:33:01 | https://www.reddit.com/r/LocalLLaMA/comments/1494f89/joining_the_blackout_private_every_tuesday/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1494f89 | true | null | t3_1494f89 | /r/LocalLLaMA/comments/1494f89/joining_the_blackout_private_every_tuesday/ | false | false | default | 25 | null |
What are you using for RP? | 31 | With the Ten Thousand Models of Llama, and all the variants thereof, it's becoming both more difficult, and easier, to get the model you want. So I was wanting to ask the community - those who use LLM for roleplay, which models are you using? What do you like/dislike about them? | 2023-06-14T10:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/14948ud/what_are_you_using_for_rp/ | Equal_Station2752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14948ud | false | null | t3_14948ud | /r/LocalLLaMA/comments/14948ud/what_are_you_using_for_rp/ | false | false | self | 31 | null |
LlaMa best hardware? Cloud? | 3 | [deleted] | 2023-06-14T09:54:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1493qf6 | false | null | t3_1493qf6 | /r/LocalLLaMA/comments/1493qf6/llama_best_hardware_cloud/ | false | false | default | 3 | null | ||
How does context building work in the very first run? | 1 | Using Oobabooga's WebUI, I can see that the first chat message with a prompt takes much longer than any other. I presume this is not specific to Oobabooga's WebUI, and not about GGML vs. GPTQ (GPTQ-for-llama), as I have tried both.
This is how I see the process:
1. As I understand it, we have "precompiled" parameters, where each is grouped into a set of neurons
2. The program loads those groups into memory
3. We "feed" them the text, which is turned into "tokens"
4. The program consecutively passes the processed "tokens" through the next neurons, loading one group after another
5. Eventually, the processed "tokens" turn back into text at the last layers.
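To make the list above concrete, here is a purely illustrative, self-contained toy of that pipeline. None of these class or method names come from llama.cpp or the WebUI; they only exist to mimic where the time goes (tokenization itself is cheap, the expensive part is the forward pass that fills the attention cache, and that cache can only be reused while the prompt prefix stays the same).

```python
# Toy illustration only - not real llama.cpp/WebUI code.
class ToyModel:
    def __init__(self):
        self.kv_cache = []            # one entry per token already processed

    def tokenize(self, text):
        return text.split()           # stand-in for real subword tokenization (cheap)

    def reuse_prefix(self, tokens):
        # Tokens whose prefix is already cached don't need re-evaluation.
        n = 0
        while n < len(self.kv_cache) and n < len(tokens) and self.kv_cache[n] == tokens[n]:
            n += 1
        self.kv_cache = self.kv_cache[:n]
        return tokens[n:]             # only these still need the slow eval step

    def eval(self, tokens):
        # Every token here costs one forward pass through all layers: the slow part.
        for t in tokens:
            self.kv_cache.append(t)   # pretend we stored the keys/values for t

model = ToyModel()
turn1 = model.tokenize("system prompt plus first question")
model.eval(model.reuse_prefix(turn1))   # first run: everything is new -> slow

turn2 = model.tokenize("system prompt plus first question and a second question")
model.eval(model.reuse_prefix(turn2))   # follow-up: only the new tail is evaluated -> fast

cut = model.tokenize("first question and a second question and a third one")
model.eval(model.reuse_prefix(cut))     # truncation changed the prefix -> cache thrown away -> slow again
```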
When we first run a question that includes the additional prompt, that prompt should also be tokenized, and that is why it is so slow. But how exactly does tokenization happen (if my understanding is right)? It takes significantly more time to tokenize the text than to process the resulting tokens through the whole network. Is it really that resource-intensive? What can I read to understand this part? What processes are going on there?
Because once my first question (with the additional prompt) has been processed and I ask a second question, everything responds dramatically faster. That probably means adding my new question to the context happens quickly and without a problem.
But when the context reaches its limit and gets truncated, everything becomes as slow as it was the first time. Would that mean the text comes in as one chunk, and adding words is harder than removing older parts? Does the whole text have to be decoded back and then reassembled?
I have seen pictures of the architecture saying that token embeddings are merged into a token matrix. But how expensive is that merging process, and what exactly happens there?
I wonder if it can be done by separate programs or neural networks. This way i could, for example, feed preprocessed context and work with documentation/books/articles. But overall and complete understanding is much more vital for me. | 2023-06-14T09:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/1493mbp/how_context_building_works_in_very_first_run/ | Accomplished_Bet_127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1493mbp | false | null | t3_1493mbp | /r/LocalLLaMA/comments/1493mbp/how_context_building_works_in_very_first_run/ | false | false | self | 1 | null |
Local models on laptops: AMD 6900HS/32GB/Nvidia 3050TI 4GB vs. Apple M2/24GB vs. | 9 | Dear respected community,
In awe of recent developments I, like many, wonder how I can most effectively run models on end-user laptop hardware for personal use.
I will be upgrading my laptop to either a
Asus X13 with AMD 6900HS, 32GB RAM (LPDDR5 6400), Nvidia 3050TI or a
MacBook Air with M2, 24GB RAM, 8 GPU
Given GGML has shown promise with GPU offloading, is it reasonable to assume I can run 30/33B models on the Asus X13? I suppose it would be preferable to the MacBook.
By the way: Thank you for everything and everyone bringing forward local LLMs!
**EDIT 2:** I actually got both laptops at very good prices for testing and will sell one - I'm still thinking about which one.
**Testing the Asus X13, 32GB LPDDR5 6400, Nvidia 3050TI 4GB vs. the MacBook Air 13.6'', M2, 24GB, 10 Core GPU**
* **In the end, the MacBook is clearly faster with 9.47 tokens per second. The Asus X13 runs at 5.2 tokens per second.**
* **However, the MacBook only runs q4\_0 models at the moment and most 13B models can be run; on the Asus, 30/33B models can be run.**
* **EDIT 3: smaller 33B models run on the MacBook as well using kobold.cpp - at the moment without GPU/Metal/OpenCL:**
* **vicuna-33b-preview.ggmlv3.q3\_K\_S.bin with 1.6 tokens per second**
* **It also runs models of other architectures not supported by llama.cpp on MacOS with Metal, e.g.,**
* **WizardCoder-15B-1.0.ggmlv3.q4\_0 with 4 tokens per second**
* **starchat-beta.ggmlv3.q4\_0 both with 4.3 tokens per second**
* **With the 4GB Nvidia GPU the Asus is 16% faster compared to CPU only.**
Note:
* Testing with wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin (reason: MacBook with llama.cpp and metal supports only q4\_0 (and certain others) at this time).
* On the Asus X13 with CUDA max. 14 of 43 layers could be offloaded to the 4GB GPU
* On the Asus X13 with OpenCL max. 19 of 43 layers could be offloaded to the 4GB GPU
* OS: Pop OS / Ubuntu 22.04 and macOS 13.4.1
* llama.cpp, compiled from Git repository on 2023-06-23
Results for wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0:
* Asus X13, CUDA, 14/43 layers: 5.0 tokens per second
* **Asus X13, OpenCL, 19/43 layers: 5.2 tokens per second**
* Asus X13, CPU only: 4.5 tokens per second
* **MacBook M2, Metal: 9.47 tokens per second**
Asus X13, CUDA, 14/43 layers:
llama.cpp-cuda/main -t 16 -ngl 14 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:"
llama\_print\_timings: prompt eval time = 1107.44 ms / 17 tokens ( 65.14 ms per token, 15.35 tokens per second)
llama\_print\_timings: eval time = 80015.33 ms / 402 runs ( 199.04 ms per token, 5.02 tokens per second)
llama\_print\_timings: total time = 81372.60 ms
Asus X13, CPU 6900HS:
llama.cpp/main -t 16 -ngl 0 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:"
llama\_print\_timings: prompt eval time = 1234.49 ms / 17 tokens ( 72.62 ms per token, 13.77 tokens per second)
llama\_print\_timings: eval time = 74731.55 ms / 337 runs ( 221.76 ms per token, 4.51 tokens per second)
llama\_print\_timings: total time = 76175.37 ms
X13, OpenCL, 19/43 layers:
llama.cpp/main -t 16 -ngl 19 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:"
llama\_print\_timings: prompt eval time = 1303.10 ms / 17 tokens ( 76.65 ms per token, 13.05 tokens per second)
llama\_print\_timings: eval time = 82414.65 ms / 430 runs ( 191.66 ms per token, 5.22 tokens per second)
llama\_print\_timings: total time = 83987.24 ms
X13, CPU 6900HS, with OpenBLAS:
llama.cpp/main -t 16 -b 512 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:"
llama\_print\_timings: prompt eval time = 1215.86 ms / 17 tokens ( 71.52 ms per token, 13.98 tokens per second)
llama\_print\_timings: eval time = 124515.19 ms / 564 runs ( 220.77 ms per token, 4.53 tokens per second)
llama\_print\_timings: total time = 126082.17 ms
MBA, M2, Metal, 10c GPU
llama.cpp/main -t 10 -ngl 1 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:"
llama\_print\_timings: prompt eval time = 18236,60 ms / 17 tokens ( 1072,74 ms per token, 0,93 tokens per second)
llama\_print\_timings: eval time = 40985,60 ms / 388 runs ( 105,63 ms per token, 9,47 tokens per second)
llama\_print\_timings: total time = 59520,43 ms
(**EDIT 1**: Summary of responses
Thank you so much for all the comments and real-world performance measurements!
I'm leaning towards the X13 since it does not seem to be slower in terms of processor and RAM performance compared to M2, has more RAM, has an additional GPU. Further, it seems the 4GB GPU will provide little performance benefit over the 6900HS alone, but it will be a few percent and the card is a nice addition in any case. The Asus X13 should run most models in GGML format with 13B or 30B and quantization. The MacBook Air should run most models in GGML format or other formats with 13B (and possibly with heavy quantization barely also larger models); i.e., not only GGML can be run as up to half of the integrated RAM can be utilized for the GPU (that is, the MacBook with 24GB of RAM is somewhat comparable to having \~12GB VRAM).
Other suggestions, possibly very well suited, are the Asus G14 (more VRAM, faster GPU/CPU, however, larger and heavier than the small Asus X13), a MacBook with M1, M1 Pro, M1 Max or M2 32 GB (or even 64GB should be speedy for LLMs, in M1 Max especially due to very good memory bandwidth, possibly a used one is not that expensive), other 15 inch laptops (which are, for me, too large and heavy when traveling).
All of the above assumes casual use (question, answer, question, ... scenario without batch queries, learning etc.) of models in GGML format. For other formats, either lots of VRAM or a MacBook with >= 32GB seem to be required at a minimum.
Note also this is a rough characterization based on the current state - implementations could change and allow for better performance when using AMD, optimizations for Nvidia are not integrated everywhere at this point, and Macs could also see increases in performance with more flexible offloading / possibly being able to utilize more than 1/2 of RAM for GPU in the future.
I will report back on the results once I have bought a laptop and have it up and running with Ubuntu 22.04 (or Pop!\_OS / Mint).)
Regards
Felix | 2023-06-14T09:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/1493fes/local_models_on_laptops_amd_6900hs32gbnvidia/ | bitangular | self.LocalLLaMA | 2023-06-25T13:33:55 | 0 | {} | 1493fes | false | null | t3_1493fes | /r/LocalLLaMA/comments/1493fes/local_models_on_laptops_amd_6900hs32gbnvidia/ | false | false | self | 9 | null |
This is getting really complicated. | 229 | I wish the whole LLM community (as well as stable diffusion) would iron out some of the more user-unfriendly kinks. Every day you hear some news about how the stochastic lexical cohesion analysis or whatever has improved tenfold (but no mention of what it does or how to use it). Or you get oobabooga to run locally only to be met with a ten page list of settings where the deep probabilistic syntactic parsing needs to be set to 0.75 with latent variable models but **absolutely not** for hierarchical attentional graph convolutional networks or you'll break your computer (with no further details).
If you have any questions you're expected to already know how to code and you need to parse five git repositories for error messages where the answers were outdated a week ago.
I'm just saying... We need to simplify this for the average user and have an "advanced" button on the side instead of the main focus.
Edit: Some of you are going "well, it's very bleeding edge tech so of course it's going to be complicated but I agree that it could be a bit easier to parse once we've joined together and worked on it as a community" and some of you are going "lol smoothbrain non-techie, go use ChatGPT dum fuk settings are supposed to be obtuse because we're progressing *science* what have *u* done with your life?"
One of these opinions is correct.
Edit2: Here's a point: it's perfectly valid to work on the back end and the front end of a product at the same time. Just because the interface is (let's face it) unproductive, doesn't mean you can't work on that while also still working on the nitty gritty of machine learning or coding. Saying "it's obtuse" is not the same as saying "there's no need to improve."
How many people know each component and function of a car? The user just needs to gas and steer, that doesn't mean car manufacturers can't iterate on and improve the engine. | 2023-06-14T09:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1493et3/this_is_getting_really_complicated/ | Adkit | self.LocalLLaMA | 2023-06-15T10:06:24 | 0 | {} | 1493et3 | false | null | t3_1493et3 | /r/LocalLLaMA/comments/1493et3/this_is_getting_really_complicated/ | false | false | self | 229 | null |
Which model has the highest token limit? | 3 | Just getting into this, so pardon my question, but which model has the highest token limit AND is closest to GPT 3.5 in terms of chatbot mode? | 2023-06-14T09:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/149313m/which_model_has_the_highest_token_limit/ | cool-beans-yeah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149313m | false | null | t3_149313m | /r/LocalLLaMA/comments/149313m/which_model_has_the_highest_token_limit/ | false | false | self | 3 | null |
SlimPajama: A 627B token cleaned and deduplicated version of RedPajama | 46 | 2023-06-14T08:11:39 | https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama | baconwasright | cerebras.net | 1970-01-01T00:00:00 | 0 | {} | 14924w1 | false | null | t3_14924w1 | /r/LocalLLaMA/comments/14924w1/slimpajama_a_627b_token_cleaned_and_deduplicated/ | false | false | 46 | {'enabled': False, 'images': [{'id': 'I4vZeqDJ34df6oqvOcDRwRvFQVJ55B9iedovy4BjqKU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=108&crop=smart&auto=webp&s=76593e6f5cf714b379ef1f5e5eacf58e9f16a119', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=216&crop=smart&auto=webp&s=747abdae1026ccae000a6ce552e2605823a3f678', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=320&crop=smart&auto=webp&s=7da96e90977ad255fe023e8f8c1550b6d54da161', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=640&crop=smart&auto=webp&s=b3cf1cbb4f864f852629fd53480cb78b8fa02867', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=960&crop=smart&auto=webp&s=27196d66a1d7e18fbf3a3ff1a97c49735b561d84', 'width': 960}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?auto=webp&s=2569800299f88b480de75ac449a3e4f1d1ebbc91', 'width': 1018}, 'variants': {}}]} | ||
Introducing my 'VowelReconstruct' Method - A Tangible Test for Comparing LLM's General Intelligence | 31 | TL;DR: I have created a test method I call "VowelReconstruct", where texts from which almost all vowels have been removed are presented to a language model, and its job is to reconstruct the original text. I am very excited to introduce it to you. My approach is interesting because the model needs several cognitive capabilities at the same time to achieve this task. The result is evaluated by comparing the reconstructed text to the original text using two metrics, the Levenshtein distance and a simple character-based similarity score. From these I calculate a new score (let's call it symscore), which provides insight into the performance of different language models and helps assess their intelligence. This method aims to provide a practical way of assessing and comparing language models' intelligence.
I've also decided to start my own blog and there you can read more about the method, if you are interested:
https://publish.obsidian.md/mountaiin/VowelReconstruct
---
Here you'll find the files, if you want to use this method too:
https://github.com/mounta11n/VowelReconstruct
---
And here you can see how some of my results look like:
| Name | Size | Specifications | Similarity | Levenshtein | Symscore |
|:------------:|:-------:|:--------------:|:----------:|:-----------:|:----------:|
| Guanaco | 7B | */* | 39.81% | 151 | 379.29 |
| WizardLM | 7B | q40 | 42.36% | 194 | 457.90 |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| Vicuna | 13B | q41_v3 | 44.90% | 109 | 242.76 |
| Vicuna | 13B | q41_v3 | 57.64% | 29 | 50.31 |
| Vicuna | 13B | q6k | 51.27% | 190 | 370.82 |
| WizardLM | 13B | q40_v3 | 51.27% | 41 | 79.95 |
| WizardLM | 13B | q40_v3 | 51.59% | 31 | 60.09 |
| WizardLM | 13B | q40_v3 | 51.27% | 42 | 81.92 |
| WizardLM | 13B | q40_v3 | 50.00% | 29 | 58.00 |
| WizardLM | 13B | q4km | 57.96% | 34 | 58.64 |
| WizardLM | 13B | q6k | 55.73% | 41 | 73.52 |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| based | 30B | q40_v3 | 53.50% | 108 | 201.87 |
| LLaMA | 30B | s-hotcot | 67.20% | 83 | 123.45 |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| **Guanaco** | **65B** | **q40_v3** | **99%** | **2** | **2.01** |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| Claude+ | */* | 100k | 93% | 12 | 12.90 |
| GPT-3.5 | */* | */* | 96.18% | 12 | 12.48 |
| GPT-4 | */* | */* | 97.77% | 2 | 2.04 |
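Reading the table above, symscore appears to be the Levenshtein distance divided by the similarity expressed as a fraction (e.g. 151 / 0.3981 ≈ 379.3). That is an inference from the numbers, not something taken from the repository; below is a minimal sketch under that assumption, with a generic character-based similarity standing in for the one actually used.

```python
# Assumed reconstruction of the scoring (lower symscore = better);
# inferred from the results table, not copied from the VowelReconstruct repo.
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def symscore(original: str, reconstructed: str) -> float:
    similarity = SequenceMatcher(None, original, reconstructed).ratio()  # 0..1
    return levenshtein(original, reconstructed) / max(similarity, 1e-9)

print(symscore("the quick brown fox", "the qck brwn fx"))
```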
EDIT: I maybe should have mentioned, as I did in the article: " … a test that I believe is very meaningful and suitable **for everyday use**. That means this test is more "tangible" and has a direct realistic value. It is **not** a matter of measuring something **exactly** to a certain decimal place, … "
So, in other words, this method/test does not aim to replace SOTA tests or to be placed in the same category as existing tests. It is also not claiming to be highly scientific.
This test aims to be a tool for the average user and for everyday life. This means that this test addresses the problem that there is a lack of meaningful tests that are
1) easy and **quick** to use,
2) easy to understand,
3) reliable and valid enough for hobby research, and, above all,
4) that are really applicable within the time and technical framework of an average citizen.
This further fills a niche that has so far offered only meagre resources.
In view of this requirement, it is only a logical consequence that this test should not be compared with the big well-known tests.
EDIT EDIT: Typos etc | 2023-06-14T08:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/14924se/introducing_my_vowelreconstruct_method_a_tangible/ | Evening_Ad6637 | self.LocalLLaMA | 2023-07-16T09:04:32 | 0 | {} | 14924se | false | null | t3_14924se | /r/LocalLLaMA/comments/14924se/introducing_my_vowelreconstruct_method_a_tangible/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'Oivw5lBZ_Xvm4N55tuIVtXcGjiWjMahJK6a6LZVFZkI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=108&crop=smart&auto=webp&s=e77fa167fe3c65645b1c59f1d803014688367ff1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=216&crop=smart&auto=webp&s=b4c64edc16bd4445781aa8645b537b7c1d36ac05', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=320&crop=smart&auto=webp&s=eca3c8298dcba4147ac4019b3d13ecbaaee665ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=640&crop=smart&auto=webp&s=fc5cbe00a5581e44fff6fdcf00327fdd9db92e86', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=960&crop=smart&auto=webp&s=5358f46424fa1ebab9289c74131b8aa5aa8dba52', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=1080&crop=smart&auto=webp&s=c9d5a04c0b51000f8f76911449d427a0fc81e5c6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?auto=webp&s=0ec443d8adca0701bca99cee788cebc4e31449b8', 'width': 1200}, 'variants': {}}]} |
Tiny models for contextually coherent conversations? | 8 | I'm looking for a small (maybe less than 1B) GGML model that can hold a simple conversation with good contextual coherence, so it should remember the history and understand context. No general knowledge required, just that. Any suggestions? | 2023-06-14T07:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1491wg6/tiny_models_for_contextually_coherent/ | Amazing_Sentence5393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1491wg6 | false | null | t3_1491wg6 | /r/LocalLLaMA/comments/1491wg6/tiny_models_for_contextually_coherent/ | false | false | self | 8 | null |
Help a beginner | 2 | I've searched through this sub and the llama.cpp github but nothing seems to help.
Maybe it's just some trivial thing I'm missing.
I was trying to run Vicuna through llama.cpp on an Azure VM (Standard D4ds v4 (4 vcpus, 16 GiB memory) and 150 GB storage)
I followed the installation guide from this sub
This is where I downloaded the model from: [https://huggingface.co/vicuna/ggml-vicuna-7b-1.1/tree/main](https://huggingface.co/vicuna/ggml-vicuna-7b-1.1/tree/main)
and when I try to run it, this is what I get:
user@temp:~/llama.cpp$ ./main -m ./models/7B/ggml-vic7b-q4_0.bin -n 128
main: build = 669 (9254920)
main: seed = 1686726150
llama.cpp: loading model from ./models/7B/ggml-vic7b-q4_0.bin
error loading model: unexpectedly reached end of file
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/7B/ggml-vic7b-q4_0.bin'
main: error: unable to load model
There were things about converting and quantizing which I really don't understand, but I tried them anyway to no avail, only getting more errors.
Please help | 2023-06-14T07:24:41 | https://www.reddit.com/r/LocalLLaMA/comments/1491d7a/help_a_beginner/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1491d7a | false | null | t3_1491d7a | /r/LocalLLaMA/comments/1491d7a/help_a_beginner/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'KJJFb_vYzt3LgoIp4piANHHDFm2Fi9VkonZzVdjEgVA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=108&crop=smart&auto=webp&s=8cda09ab5c77cdcf284e7f085d139a72ac86bac3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=216&crop=smart&auto=webp&s=86fde117f5379c7df0dbbc434aaa7c0771a92ce9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=320&crop=smart&auto=webp&s=0e69b61be1f991d8c667291896f9ce716e73023a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=640&crop=smart&auto=webp&s=8bd8cd26283c3e890f7b2e2273e714df2bb45143', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=960&crop=smart&auto=webp&s=afbed4721bff1979ec8adfc199cb0b6021782a46', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=1080&crop=smart&auto=webp&s=2ee64a7edaa0c4f582de40db45853578fbf2d312', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?auto=webp&s=fc09d54412f3392d664111a799e57b0f44b048db', 'width': 1200}, 'variants': {}}]} |
NO issues loading Q4 K_S model but cant load Q3 K_S model, get this error | 1 | [removed] | 2023-06-14T07:17:21 | https://www.reddit.com/r/LocalLLaMA/comments/14918zm/no_issues_loading_q4_k_s_model_but_cant_load_q3_k/ | Equal-Pilot-9592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14918zm | false | null | t3_14918zm | /r/LocalLLaMA/comments/14918zm/no_issues_loading_q4_k_s_model_but_cant_load_q3_k/ | false | false | default | 1 | null |
Honkware/Manticore-13b-Landmark-GPTQ · Hugging Face | 11 | 2023-06-14T06:45:37 | https://huggingface.co/Honkware/Manticore-13b-Landmark-GPTQ | glowsticklover | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1490qch | false | null | t3_1490qch | /r/LocalLLaMA/comments/1490qch/honkwaremanticore13blandmarkgptq_hugging_face/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'gfzA01Jl7gabQOK32OfizGoVb5tlu8m5ffyF_ox7om0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=108&crop=smart&auto=webp&s=5620d15d5450ac6f9d4d0a0aae8ff93634a34ab1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=216&crop=smart&auto=webp&s=17abaae64b3ab5ad9fda4b320b97fb10352aa987', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=320&crop=smart&auto=webp&s=a2dc350d7c8459b597f1291bcd440fe0b38be4a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=640&crop=smart&auto=webp&s=01db63701106ccdb06e7ca303889bf8d46a69f00', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=960&crop=smart&auto=webp&s=c1390da794b2dd5272c141358a71f2a3d4ea26ec', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=1080&crop=smart&auto=webp&s=3a148e7c96a51e32765a037e475e32e6d67c0e84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?auto=webp&s=6e2a4d6bc40f0f028407c9e226a1e0d3ef1215d0', 'width': 1200}, 'variants': {}}]} | ||
30b models super slow webui 4090 | 1 | [removed] | 2023-06-14T06:11:20 | https://www.reddit.com/r/LocalLLaMA/comments/1490633/30b_models_super_slow_webui_4090/ | fractaldesigner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1490633 | false | null | t3_1490633 | /r/LocalLLaMA/comments/1490633/30b_models_super_slow_webui_4090/ | false | false | default | 1 | null |
16k context for OpenAI GPT-3.5 API | 74 | Looks like OpenAI just upped the context length for gpt-3.5-turbo and made some other updates to make it easier to integrate with other applications.
[Function calling and other API updates (openai.com)](https://openai.com/blog/function-calling-and-other-api-updates)
* new function calling capability in the Chat Completions API (a short sketch follows this list)
* updated and more steerable versions of gpt-4 and gpt-3.5-turbo
* new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
* 75% cost reduction on our state-of-the-art embeddings model
* 25% cost reduction on input tokens for gpt-3.5-turbo
* announcing the deprecation timeline for the gpt-3.5-turbo-0301and gpt-4-0314 models | 2023-06-14T04:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/148yoo4/16k_context_for_openai_gpt35_api/ | noco-ai | self.LocalLLaMA | 2023-06-14T04:50:44 | 0 | {} | 148yoo4 | false | null | t3_148yoo4 | /r/LocalLLaMA/comments/148yoo4/16k_context_for_openai_gpt35_api/ | false | false | self | 74 | {'enabled': False, 'images': [{'id': '9DbbdjKChgxgpk85RvWkYY-sPol1aDoPYeUX07sqagA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=108&crop=smart&auto=webp&s=85d6ab49c24a9caab2287f2df04f1bcafac79db4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=216&crop=smart&auto=webp&s=6e97adf939b8013963f8ca5d50d0233faa5921bf', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=320&crop=smart&auto=webp&s=482f69c2cb28fd9c61367e2463e17e0054c7301b', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=640&crop=smart&auto=webp&s=24ca881245cde17e486b4596b7d10b30534df708', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=960&crop=smart&auto=webp&s=57fd3f6d8874f2a6f11daa3e4fb5b540325d4c08', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=1080&crop=smart&auto=webp&s=37d674c99b69ea0c836a41f1fefe92c7394d0a7c', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?auto=webp&s=fde5b948b4f1deeabe69cb002890fdeb34b08cc8', 'width': 4096}, 'variants': {}}]} |
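A minimal sketch of the new function-calling flow mentioned in the first bullet above, assuming the `openai` Python client of the time (0.27.x); the weather function and its JSON schema are made up for illustration:

```python
import json
import openai

# openai.api_key = "sk-..."  # set your key

functions = [{
    "name": "get_current_weather",        # hypothetical tool, defined by the caller
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
)

msg = resp["choices"][0]["message"]
if msg.get("function_call"):
    # The model returns the function name plus JSON-encoded arguments;
    # actually executing the function is up to the caller.
    args = json.loads(msg["function_call"]["arguments"])
    print(msg["function_call"]["name"], args)
```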
Forgive my ignorance, but is crowdsourced networked GPU processing feasible? | 1 | [deleted] | 2023-06-14T04:30:23 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148yff7 | false | null | t3_148yff7 | /r/LocalLLaMA/comments/148yff7/forgive_my_ignorance_but_is_crowdsourced/ | false | false | default | 1 | null | ||
Useful Links and Info | 39 | [removed] | 2023-06-14T03:27:27 | [deleted] | 2023-06-14T03:33:49 | 0 | {} | 148xacr | false | null | t3_148xacr | /r/LocalLLaMA/comments/148xacr/useful_links_and_info/ | false | false | default | 39 | null | ||
Your painpoints in building/using Local LLMs | 1 | [removed] | 2023-06-14T02:46:41 | https://www.reddit.com/r/LocalLLaMA/comments/148win2/your_painpoints_in_buildingusing_local_llms/ | Latter-Implement-243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148win2 | false | null | t3_148win2 | /r/LocalLLaMA/comments/148win2/your_painpoints_in_buildingusing_local_llms/ | false | false | default | 1 | null |
Simple LLM Performance Benchmarking Util utilizing the oobabooga web API | 21 | Hey everyone, I've created a simple performance benchmarking utility using the oobabooga Text Generation Web API.
Repo: [oobabooga-benchmark](https://github.com/traumahound86/oobabooga-benchmark)
After getting bitten by the [changes in version 535.98 of the nVidia drivers](https://www.reddit.com/r/LocalLLaMA/comments/1461d1c/major_performance_degradation_with_nvidia_driver/), I thought it'd be a good idea to have a simple and repeatable way to check and save performance metrics.
This utility will send one or more instruction prompts to oobabooga through its web API and output and save the results. (ex. total time taken, total tokens generated, tokens per second, etc.)
In its basic form, the utility is simple to use:
python benchmark.py --infile prompt.txt
You can run the utility on a different machine than the oobabooga server, pass in a static seed value for repeatable tests, and benchmark multiple prompt files at once:
python benchmark.py --host 192.168.0.10 --seed 600753398 --tokens_per_gen 250 --infile prompt01.txt prompt02.txt prompt03.txt
**Protip:** Shell globbing multiple files works, and since the benchmarking util also saves the generated output in addition to the performance metrics, it can be used to generate batches of output with no additional interaction.
python benchmark.py --host 192.168.0.10 --infile mystories*.txt mypoems*.txt mysongs*.txt
There are many additional (optional) flags to alter the behaviour of the utility and API (ex. temperature, output directory, tokens per generation, etc.) documented on the GitHub page.
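For context, a single benchmark call boils down to roughly the following; the endpoint and field names are assumptions about the oobabooga blocking API at the time and may not match the utility's actual code:

```python
# Rough sketch only - see the repo above for the real implementation.
import time
import requests

def benchmark_once(host: str, prompt: str, max_new_tokens: int = 200, seed: int = -1):
    t0 = time.time()
    r = requests.post(
        f"http://{host}:5000/api/v1/generate",           # assumed blocking-API endpoint
        json={"prompt": prompt, "max_new_tokens": max_new_tokens, "seed": seed},
        timeout=600,
    )
    elapsed = time.time() - t0
    text = r.json()["results"][0]["text"]
    approx_tokens = len(text.split())                    # crude estimate, not a real tokenizer
    return {"seconds": elapsed,
            "approx_tokens": approx_tokens,
            "tokens_per_second": approx_tokens / elapsed,
            "text": text}

print(benchmark_once("192.168.0.10", "Write a short story about llamas."))
```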
I'm definitely open to feedback and/or suggestions for improvements, bug fixes, etc.
Hopefully someone finds this tool useful. | 2023-06-14T02:26:15 | https://www.reddit.com/r/LocalLLaMA/comments/148w4ry/simple_llm_performance_benchmarking_util/ | GoldenMonkeyPox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148w4ry | false | null | t3_148w4ry | /r/LocalLLaMA/comments/148w4ry/simple_llm_performance_benchmarking_util/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': '4Zzz0mayEAE8fC5P7jtkzPi2mVVpArA1WN5OOTBlJYw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=108&crop=smart&auto=webp&s=8ead88b6982ad32cb2032d8e59c635a0989e5cc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=216&crop=smart&auto=webp&s=b6fa613caed1f37b17807e0053f51e0e5e96ddcc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=320&crop=smart&auto=webp&s=dc9979e05ecc88752289d377a5f5dd6aed4f496d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=640&crop=smart&auto=webp&s=236f2463015e611cf54adea570882da60ae58659', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=960&crop=smart&auto=webp&s=6fb4e596af41f2d9409350e30c31c1b7706d1a6a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=1080&crop=smart&auto=webp&s=c126a7e30e39765f1f842017bea9f59ee278f4c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?auto=webp&s=dbbcedba0322004a8d6cf7065aa4d6b5d930fb0e', 'width': 1200}, 'variants': {}}]} |
Alternative download links | 1 | [removed] | 2023-06-14T02:22:43 | https://www.reddit.com/r/LocalLLaMA/comments/148w2cu/alternative_download_links/ | Yip37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148w2cu | false | null | t3_148w2cu | /r/LocalLLaMA/comments/148w2cu/alternative_download_links/ | false | false | default | 1 | null |
Best approach to local LLMs for a journal? | 13 | My partner and I built an [open source](https://github.com/ocdevel/gnothi) robo-journal [Gnothi](https://gnothiai.com). Started in 2019 with summarization, recurring themes, book recommendations, behavior tracking. Soon as GPT came along, you bet your butt I integrated.
But privacy. It's a journal. I trust OpenAI with mine, but not everyone does. And frankly, I'd love to be coached by [Wizard-Vicuna\_Uncensored\_Uncut\_Extreme\_Chaos\_non-GMO\_Organic-GGML](https://www.reddit.com/r/LocalLLaMA/comments/13pqj3j/meanwhile_here_at_localllama/).
I want users to be able to choose between OpenAI; a Gnothi-hosted LLM; or BYO-model, using an IP / ngrok.
**The easy question (maybe?):**
For users to self-host, what's the easiest approach? Some sort of local service which exposes an API similar to OpenAI. Oobabooga? Is it reasonable to expect, if they've setup Oobabooga, they can setup Ngrok for Gnothi to webhook? Once I land on a decent one-size-fits-all, I'll write a Github Wiki for these users.
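For the BYO-model path, one common pattern is to keep the existing OpenAI client code and let the user point it at their own base URL (for example an ngrok tunnel in front of a local server that speaks the OpenAI wire format). The port, path, and model name below are placeholders, not a statement about what any particular backend exposes:

```python
import openai

def configure_backend(base_url=None, api_key="sk-local"):
    """Point the OpenAI client at a user-supplied endpoint, or leave the defaults."""
    if base_url:                          # e.g. "https://xyz.ngrok.app/v1"
        openai.api_base = base_url
        openai.api_key = api_key          # most local servers ignore the key

configure_backend("http://localhost:5001/v1")   # placeholder local endpoint
resp = openai.ChatCompletion.create(
    model="local-model",                  # placeholder; local servers often ignore this
    messages=[{"role": "user", "content": "Summarize my journal entry about sleep."}],
)
print(resp["choices"][0]["message"]["content"])
```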
**The tall order:**
For Gnothi-hosted, I'd need a model which (all 3, even I have to wait on upcoming SOTA):
1. Fits on Lambda or SageMaker Serverless. <=10GB RAM. So I'm thinking 7B quantized.
2. Is robust / quality enough to compete with GPT 3.5. [This is the master-prompt](https://github.com/ocdevel/gnothi/blob/main/sst/services/ml/node/summarize.ts#L54) which the model would have to be able to handle.
3. Is safe enough for the headspace in which someone is journaling.
If all 3 can't be achieved, I'll just lean into the DIY side. If you want it enough, you'll know what you're doing, and I'll get hands-dirty with ya
If you're interested in using it but not keen on OpenAI: use the free version. Free uses pre-trained Huggingface, premium uses GPT. If you're interested on the code side, hit me up - it'll give me incentive to bring the README back to life. All the action's in the \`sst/\` folder (all other top-level folders are old code). My biggest problem is that it uses SST (AWS, CDK) and a real RDS database and VPC with Nat Gateway, even in dev-mode - a minimum of $70/m. So I need a way to Dockerize parts of this for tinkering. In the meantime, if there's low-hanging fruit, just Github-issue me and I'll test changes on my end. | 2023-06-14T00:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/148txkx/best_approach_to_local_llms_for_a_journal/ | lefnire | self.LocalLLaMA | 2023-06-14T03:03:15 | 0 | {} | 148txkx | false | null | t3_148txkx | /r/LocalLLaMA/comments/148txkx/best_approach_to_local_llms_for_a_journal/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'sQUuNLY1QJMR87V9uBaSdK8B_Agvap2O4wbDGbhU0-s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=108&crop=smart&auto=webp&s=84e68237e5a5b6bea78426fe9b1f312334699bef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=216&crop=smart&auto=webp&s=46c86a5c8d07b6336206c2f144efed2b52b690f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=320&crop=smart&auto=webp&s=88311bb5d2ebbd44be7cf0bfacc9d2875b751882', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=640&crop=smart&auto=webp&s=ebfcf159f7a82b6307653e64b54899fb706fbac0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=960&crop=smart&auto=webp&s=3ab22003a650e62928a3d5748ba5ad178e4d63a3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=1080&crop=smart&auto=webp&s=364ddd46c3bacfa48d7c47989e4f2e980b6fc144', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?auto=webp&s=b39861cde4ce7ea9086b845dfc549842da2e78b2', 'width': 1200}, 'variants': {}}]} |
Multimodal models and "active" learning | 5 | For the former, there's already projects like:
https://panda-gpt.github.io/
https://github.com/OpenGVLab/LLaMA-Adapter/tree/main/imagebind_LLM
https://github.com/Luodian/Otter
and others (although they're usually bimodal, if that's the right word).
I bring this up because many want to compete with GPT-4 but seem to forget it's multimodal. And of course, we might never reach its level soon, but multimodality still looks promising. Maybe it is the case that language is all we need, but more work or activity on multimodality would be interesting, like a llama.cpp, ggml, and quantization for multimodal models. Maybe give it a few months?
I'm not sure about the "active" learning being done right now, but what I mean is it just seems people are so focused on the "frozen" intelligence of LMs. Although to be fair maybe in context learning with a much bigger context window and others I'm missing may be good work for now.
(Note: speaking as a layman so forgive me if I sound silly but) with ideas from task vectors (https://arxiv.org/pdf/2212.04089.pdf) and Loras, maybe a thing can be done? Maybe simulating a stream of memory, some threshold can be reached that changes weights or at least some "adapters" on the model? Ofc assuming this process becomes fast enough (with all the advancements going on).
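For what it's worth, the core trick in that task-vectors paper is simple enough to sketch with stand-in tensors; this is only the paper's idea in miniature, not a claim about how it would plug into a live LLM:

```python
# Task-vector toy (after arXiv:2212.04089): vector = finetuned - base,
# which can then be scaled and added back. Tensors here are stand-ins.
import torch

base = {"layer.weight": torch.randn(4, 4)}
finetuned = {k: v + 0.1 * torch.randn_like(v) for k, v in base.items()}

task_vector = {k: finetuned[k] - base[k] for k in base}       # extract
edited = {k: base[k] + 0.5 * task_vector[k] for k in base}    # apply at chosen strength
```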
Now I'm not saying that people SHOULD do these RIGHT NOW or complaining- understanding that this is cutting edge, at the very least hard work and apologies if I sound demanding. But with that said, I'll just put this thought out there. I wish I was as capable enough to try something or know when something can work instead of just come off as whining about it- all I've done so far was stuff like modifying PandaGPT to run on only CPU- could only try 7b and though it worked it wasn't that impressive probably because of my setup (still trying others).
For LMs though, I am still excited for work being done on context length although unable to run them as is rn. Though as I post this, this just came out https://old.reddit.com/r/LocalLLaMA/comments/148prx3/landmark_attention_oobabooga_support_gptq/ so that's something to do after I make this post.
At the end of the day, I'm still really grateful to be able to run LMs at all and hope that every work on local models continues despite any moats or whatnot! | 2023-06-14T00:18:36 | https://www.reddit.com/r/LocalLLaMA/comments/148tpun/multimodal_models_and_active_learning/ | reduserGf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148tpun | false | null | t3_148tpun | /r/LocalLLaMA/comments/148tpun/multimodal_models_and_active_learning/ | false | false | self | 5 | null |
Any way to adjust GPT4All 13b I have 32 Core Threadripper with 512 GB RAM but not sure if GPT4ALL uses all power? Any other alternatives that are easy to install on Windows? | 2 | Any way to adjust GPT4All 13b I have 32 Core Threadripper with 512 GB RAM but not sure if GPT4ALL uses all power? Any other alternatives that are easy to install on Windows?
Ideally I would like to have the most powerful AI chat connected to Stable Diffusion for my machine (32-core Threadripper, 512 GB RAM, 3070 8GB).
I would love to stay with Windows 11 and avoid Linux | 2023-06-13T23:38:34 | https://www.reddit.com/r/LocalLLaMA/comments/148sx4w/any_way_to_adjust_gpt4all_13b_i_have_32_core/ | SolvingLifeWithPoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148sx4w | false | null | t3_148sx4w | /r/LocalLLaMA/comments/148sx4w/any_way_to_adjust_gpt4all_13b_i_have_32_core/ | false | false | self | 2 | null |
llama.cpp can now train? | 37 | Looks like we just got some support for training in llama.cpp! Not sure what I'm doing wrong but it's crashing for me. Anyone else tried it and got any success?
[https://github.com/ggerganov/llama.cpp/tree/master/examples/train-text-from-scratch](https://github.com/ggerganov/llama.cpp/tree/master/examples/train-text-from-scratch) | 2023-06-13T23:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/148st5r/llamacpp_can_now_train/ | stonegdi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148st5r | false | null | t3_148st5r | /r/LocalLLaMA/comments/148st5r/llamacpp_can_now_train/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} |
Running LLaMA Inference in Parallel Using Accelerate | 3 | 2023-06-13T21:58:22 | https://www.bengubler.com/posts/multi-gpu-inference-with-accelerate | FutureIncrease | bengubler.com | 1970-01-01T00:00:00 | 0 | {} | 148qstm | false | null | t3_148qstm | /r/LocalLLaMA/comments/148qstm/running_llama_inference_in_parallel_using/ | false | false | default | 3 | null | |
Llama.cpp crashing with lora, but only when using GPU. | 3 | I was wondering if anyone’s run into this problem using loras with llama.cpp. It works fine for me if I don’t use the GPU. But if I do use the GPU it crashes. For example, starting llama.cpp with the following works fine on my computer.
./main -m models/ggml-vicuna-7b-f16.bin --lora lora/testlora_ggml-adapter-model.bin
Lora loads up with no errors and it demonstrates responses in line with the data I trained the lora on.
And starting with the same model, and GPU, but no lora, works fine.
./main -m models/ggml-vicuna-7b-f16.bin --n-gpu-layers 1
Starts up as expected when using --n-gpu-layers and no lora.
But if I add n-gpu-layers, like this?
./main -m models/ggml-vicuna-7b-f16.bin --lora lora/testlora_ggml-adapter-model.bin --n-gpu-layers 1
It crashes with the following error.
llama_apply_lora_from_file_internal: r = 256, alpha = 512, scaling = 2.00
...............GGML_ASSERT: ggml-cuda.cu:1643: src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F32 && dst->type == GGML_TYPE_F32
Aborted (core dumped)
I get the same thing with every other model I try. And I’ve created a few different lora, both on my own machine and runpod, to try to see if anything changes. But I keep running into the same thing.
Any ideas?
edit: I forgot to mention I'm using an nvidia m40. Nvidia driver version: 530.30.02, CUDA version: 12.1. I just finished totally purging everything related to nvidia from my system and then installing the drivers and cuda again, setting the path in bashrc, etc. Aaaaaaand, no luck. llama.cpp still crashes if I use a lora and the --n-gpu-layers together.
edit2: Someone opened up a new issue report on it. Turns out that this behavior is normal, and a result of llama.cpp running things as f32 in the GPU. It was suggested to just merge the lora. | 2023-06-13T21:44:40 | https://www.reddit.com/r/LocalLLaMA/comments/148qi16/llamacpp_crashing_with_lora_but_only_when_using/ | toothpastespiders | self.LocalLLaMA | 2023-06-14T19:22:55 | 0 | {} | 148qi16 | false | null | t3_148qi16 | /r/LocalLLaMA/comments/148qi16/llamacpp_crashing_with_lora_but_only_when_using/ | false | false | self | 3 | null |
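For reference, a hedged sketch of the "just merge the lora" suggestion from the edit above: apply the adapter to the Hugging Face-format base model with PEFT, save the merged weights, then re-convert and quantize with llama.cpp as usual. Paths are placeholders and the convert/quantize invocation is deliberately left out.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_dir = "models/vicuna-7b-hf"     # HF-format base, not the ggml file
lora_dir = "lora/testlora"           # the adapter in HF/PEFT format
out_dir = "models/vicuna-7b-merged"

base = AutoModelForCausalLM.from_pretrained(base_dir)
model = PeftModel.from_pretrained(base, lora_dir)
merged = model.merge_and_unload()    # folds the LoRA deltas into the base weights

merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_dir).save_pretrained(out_dir)
# Afterwards: run llama.cpp's convert script on out_dir, quantize as usual,
# and load the result without --lora (and with --n-gpu-layers if desired).
```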
idea: We need a raspberry pi for LLMs. | 1 | [removed] | 2023-06-13T21:35:19 | https://www.reddit.com/r/LocalLLaMA/comments/148qanl/idea_we_need_a_raspberry_pi_for_llms/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148qanl | false | null | t3_148qanl | /r/LocalLLaMA/comments/148qanl/idea_we_need_a_raspberry_pi_for_llms/ | false | false | default | 1 | null |
Landmark Attention Oobabooga Support + GPTQ Quantized Models! | 175 | Hey everyone! We've managed to get Landmark attention working properly in Oobabooga, and /u/theBloke has quantized the models! (Currently only GPTQ support).
We need more effort put into properly evaluating these models. It is still very early days and we are looking for some feedback on their performance/any issues you run into. Please feel free to chat with us! You can find a link in my QLoRA Repo: [https://github.com/eugenepentland/landmark-attention-qlora](https://github.com/eugenepentland/landmark-attention-qlora)
Models:
[https://huggingface.co/TheBloke/WizardLM-7B-Landmark](https://huggingface.co/TheBloke/WizardLM-7B-Landmark)
[https://huggingface.co/TheBloke/Minotaur-13B-Landmark](https://huggingface.co/TheBloke/Minotaur-13B-Landmark)
Notes when using the models
1. Trust-remote-code must be enabled for the attention model to work correctly.
2. Add bos\_token must be disabled in the parameters tab
3. "Truncate the prompt" must be increased to allow for a larger context. The slider goes up to a max of 8192, but the models can handle larger contexts as long as you have memory. If you want to go higher, go to text-generation-webui/modules/shared.py and increase truncation_length_max to whatever you want it to be.
4. You may need to set the repetition\_penalty when asking questions about a long context to get the correct answer.
Performance Notes:
1. Inference in a long context is slow. On the RTX Quadro 8000 I'm testing, it takes about a minute to get an answer for 10k context. This is working on being improved.
2. Remember that the model only performs as well as its base model on complex queries. Sometimes you may not get the answer you are looking for, but it's worth testing whether the base model would be able to answer the question within the 2k context.
Thanks again to the team who worked on the original landmark paper for making this possibleThey made an update to the repo and the code I wrote 4 days ago is now marked legacy so I'm in the process of updating it again... | 2023-06-13T21:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/148prx3/landmark_attention_oobabooga_support_gptq/ | NeverEndingToast | self.LocalLLaMA | 2023-06-13T21:28:32 | 0 | {} | 148prx3 | false | null | t3_148prx3 | /r/LocalLLaMA/comments/148prx3/landmark_attention_oobabooga_support_gptq/ | false | false | self | 175 | {'enabled': False, 'images': [{'id': 'xHpvVEcy3S8jRFD78uuxihyCIcFKFRMnZ2PCic0F0p8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=108&crop=smart&auto=webp&s=c68c1bd6748a23358cb8868f701e7bed787caa0c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=216&crop=smart&auto=webp&s=312e61d8103dc3d6026bb26f583a623d3d90c0ca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=320&crop=smart&auto=webp&s=b6674706646d3c3d9868734afb5499890b7d9a1c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=640&crop=smart&auto=webp&s=1e969c1e8c9b787476c3087f843c94bada078890', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=960&crop=smart&auto=webp&s=09eb54e85b29054134473ef5d890ccb1e0fd7557', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=1080&crop=smart&auto=webp&s=9f35e6ee02edff433b3b789cfafc84d477806ce8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?auto=webp&s=73c679ed9bc1f6f92c3c9d8ff553f079c4d855dc', 'width': 1200}, 'variants': {}}]} |
Open AI - Just Killed GPT-3 and GPT-4 Apis | 1 | [removed] | 2023-06-13T20:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/148p2u4/open_ai_just_killed_gpt3_and_gpt4_apis/ | splitur34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148p2u4 | false | null | t3_148p2u4 | /r/LocalLLaMA/comments/148p2u4/open_ai_just_killed_gpt3_and_gpt4_apis/ | false | false | default | 1 | null |
2,512 -H100s, can train LLaMA 65B in 10 days | 69 | This 10 exaflop beast looks really promising and for open source startups it may be the best chance to get a true open source LLaMA alternative at the 30-65B+ size (hopefully with longer context and more training tokens). | 2023-06-13T20:39:00 | https://twitter.com/natfriedman/status/1668650915505803266?s=19 | jd_3d | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 148p17i | false | {'oembed': {'author_name': 'Nat Friedman', 'author_url': 'https://twitter.com/natfriedman', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Daniel and I have setup a cluster for startups: <a href="https://t.co/FMmfA7MQm3">https://t.co/FMmfA7MQm3</a> <a href="https://t.co/cx4NkPVdFI">pic.twitter.com/cx4NkPVdFI</a></p>— Nat Friedman (@natfriedman) <a href="https://twitter.com/natfriedman/status/1668650915505803266?ref_src=twsrc%5Etfw">June 13, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/natfriedman/status/1668650915505803266', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_148p17i | /r/LocalLLaMA/comments/148p17i/2512_h100s_can_train_llama_65b_in_10_days/ | false | false | 69 | {'enabled': False, 'images': [{'id': 'epfPZBJ3d-zRlP7wFeHil-A4whqkrS_PPktjBeH4rdg', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/1XGkcBHG4q6mltbaesZdwqq9REY6KIw1YspYSYSI2nw.jpg?width=108&crop=smart&auto=webp&s=9cac0cc8a174bc4c600519506920700af0cdf056', 'width': 108}], 'source': {'height': 74, 'url': 'https://external-preview.redd.it/1XGkcBHG4q6mltbaesZdwqq9REY6KIw1YspYSYSI2nw.jpg?auto=webp&s=9f243bef1a7b8f471120503c78b98c76a9efeffd', 'width': 140}, 'variants': {}}]} | |
I wrote a tokenizer for LLaMA that runs inside the browser | 15 | 2023-06-13T20:32:59 | https://github.com/belladoreai/llama-tokenizer-js | belladorexxx | github.com | 1970-01-01T00:00:00 | 0 | {} | 148owhe | false | null | t3_148owhe | /r/LocalLLaMA/comments/148owhe/i_wrote_a_tokenizer_for_llama_that_runs_inside/ | false | false | 15 | {'enabled': False, 'images': [{'id': 't5BYImubexnZbSs3UMfYWIEQSAIwcB_4G44jxoPka2g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=108&crop=smart&auto=webp&s=df91c49afd9f6de58616898380c72ae6a948f937', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=216&crop=smart&auto=webp&s=01acc1af705b9172a06b059a3e265f55986ab948', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=320&crop=smart&auto=webp&s=033a1bc04e7af056e872516f37d64e10ec61f82c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=640&crop=smart&auto=webp&s=aab47dcb08ce43c36e470372be27bece1d7701af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=960&crop=smart&auto=webp&s=d8412d139a899f824337c59dcbbaa7521352300a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=1080&crop=smart&auto=webp&s=123b220a268d6fbf3ef80c09fd10e26f1ac12ab3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?auto=webp&s=89c6e0f69fd3e2e908da1599a3d56019cd1a93cc', 'width': 1200}, 'variants': {}}]} | ||
Is it possible to train a LoRA for a 4-bit model using the same amount of VRAM as inference? | 7 | And if so, is it possible to merge the LoRA with a GPTQ while using the same amount of VRAM as inference?
For example, if you had a 13B 4-bit llama model that uses ~10 gb VRAM for inference, is there any way to train a LoRA for the 4-bit model that also only uses ~10 gb of VRAM? Is it possible to merge the LoRA with the 4-bit model, and if so is it possible to do it using only ~10 gb VRAM?
If it isn't possible now, is it technically possible and probably will be possible soon? Or is there a fundamental limit preventing it? | 2023-06-13T20:32:07 | https://www.reddit.com/r/LocalLLaMA/comments/148ovsg/is_it_possible_to_train_a_lora_for_a_4bit_model/ | SoylentMithril | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148ovsg | false | null | t3_148ovsg | /r/LocalLLaMA/comments/148ovsg/is_it_possible_to_train_a_lora_for_a_4bit_model/ | false | false | self | 7 | null |
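For what it's worth, the closest thing today is the QLoRA recipe: quantize the base model to 4-bit with bitsandbytes and train only small LoRA adapters on top, which keeps training memory close to inference levels. Note this is not the same as training against a GPTQ checkpoint, and merging an adapter back into a GPTQ file is a separate problem, so treat the following as an illustrative sketch (model name and hyperparameters are placeholders):

    # QLoRA-style sketch: LoRA on top of a 4-bit (NF4) quantized base model.
    # Assumes transformers, peft and bitsandbytes are installed.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "huggyllama/llama-13b",          # placeholder model id
        quantization_config=bnb_config,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)

    lora_config = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()   # only the small LoRA matrices are trainable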
Shouldn’t A.I become less resource intensive as time goes on? | 1 | [removed] | 2023-06-13T20:26:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148oqyy | false | null | t3_148oqyy | /r/LocalLLaMA/comments/148oqyy/shouldnt_ai_become_less_resource_intensive_as/ | false | false | default | 1 | null | ||
Toolkit for using local models, through langchain &c, including usable demos | 13 | Hi all, I'll be honest that what inspired this project was my observation that there is a metric ton of LLM-related client Python code out there, and a lot of it is clearly by folks new to programming, or at least to Python. As an old Python head, I thought I'd lean on my experience to put together some tools and demos for the many, cool alt-GPT type project ideas out there, and I'm already reusing some of these components in my own consulting work.
[https://github.com/uogbuji/OgbujiPT/](https://github.com/uogbuji/OgbujiPT/)
A good place to start is the demos, all using Langchain, though shallowly, I admit, including
* Simple bot for you know which popular chat app 😉
* Simple streamlit chat-your-PDF
* Blocking API async workaround
[https://github.com/uogbuji/OgbujiPT/tree/main/demo](https://github.com/uogbuji/OgbujiPT/tree/main/demo) | 2023-06-13T20:01:57 | https://www.reddit.com/r/LocalLLaMA/comments/148o7w3/toolkit_for_using_local_models_through_langchain/ | CodeGriot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148o7w3 | false | null | t3_148o7w3 | /r/LocalLLaMA/comments/148o7w3/toolkit_for_using_local_models_through_langchain/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'IeXm3QcmbN8FE8x8OSrJwNGkIEXBmO3Ite07eRod0rc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=108&crop=smart&auto=webp&s=f7205367d3b6743657d01059a5ff26c552e3f7a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=216&crop=smart&auto=webp&s=271fddb97bbae295053990fa547b1c1e8eb2ab4b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=320&crop=smart&auto=webp&s=1984b62108435fc72adb857326db7b8ebfff38f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=640&crop=smart&auto=webp&s=7d713bacb4c47d03e8879dfa867ebded8e51b0a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=960&crop=smart&auto=webp&s=5a7410523947e44a115fbd15f4731d51b97aca51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=1080&crop=smart&auto=webp&s=f12341e21b19535d9e9e721c017ea598c1c8f4dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?auto=webp&s=b9f4fc9a5989ad7e092cb71e4466fea4a356db8b', 'width': 1200}, 'variants': {}}]} |
AMD Expands AI/HPC Product Lineup With Flagship GPU-only Instinct Mi300X with 192GB Memory | 46 | 2023-06-13T19:01:46 | https://www.anandtech.com/show/18915/amd-expands-mi300-family-with-mi300x-gpu-only-192gb-memory | Balance- | anandtech.com | 1970-01-01T00:00:00 | 0 | {} | 148mvnx | false | null | t3_148mvnx | /r/LocalLLaMA/comments/148mvnx/amd_expands_aihpc_product_lineup_with_flagship/ | false | false | 46 | {'enabled': False, 'images': [{'id': 'R96zuZO45Qp3NE1yABoAUYRTw4YE4s4fJ1ipwGDmHgA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=108&crop=smart&auto=webp&s=a24a6ec07470476a238e61e5cfe85d492faf4e00', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=216&crop=smart&auto=webp&s=22708b4d0ec466e1c5ff536a7b0118d247ef96da', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=320&crop=smart&auto=webp&s=4698c584fa3b44d143233274ea0f75df3c10a15d', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=640&crop=smart&auto=webp&s=a5233028c360549209d28e123d67b3382dfb71f9', 'width': 640}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?auto=webp&s=fd0871ef2076aba5911cd308e3d55edc7d134247', 'width': 678}, 'variants': {}}]} | ||
what llama model would be good for instruct purposes | 0 | are their any good models that are around as capable for instruct purposes as text-davinci-003 | 2023-06-13T18:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/148mi3l/what_llama_model_would_be_good_for_instruct/ | SuccessfulCommand882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148mi3l | false | null | t3_148mi3l | /r/LocalLLaMA/comments/148mi3l/what_llama_model_would_be_good_for_instruct/ | false | false | self | 0 | null |
How to use gpu for GGML model , n-gpu-layers isn't working | 2 | [removed] | 2023-06-13T18:22:16 | https://www.reddit.com/r/LocalLLaMA/comments/148lzil/how_to_use_gpu_for_ggml_model_ngpulayers_isnt/ | Equal-Pilot-9592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148lzil | false | null | t3_148lzil | /r/LocalLLaMA/comments/148lzil/how_to_use_gpu_for_ggml_model_ngpulayers_isnt/ | false | false | default | 2 | null |
Are you seeing emojis from conversational models in ooba? | 1 | [deleted] | 2023-06-13T18:07:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148ln9p | false | null | t3_148ln9p | /r/LocalLLaMA/comments/148ln9p/are_you_seeing_emojis_from_conversational_models/ | false | false | default | 1 | null | ||
What ggml models can I run on 16GB RAM, 8GB VRAM GPU0 and 4GB VRAM GPU1? | 2 | [removed] | 2023-06-13T17:51:49 | https://www.reddit.com/r/LocalLLaMA/comments/148laeo/what_ggml_models_can_i_run_on_16gb_ram_8gb_vram/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148laeo | false | null | t3_148laeo | /r/LocalLLaMA/comments/148laeo/what_ggml_models_can_i_run_on_16gb_ram_8gb_vram/ | false | false | default | 2 | null |
The gap between open source LLMs and OpenAI continues to widen, GPT3.5 now supports 16K context | 1 | 2023-06-13T17:40:08 | https://openai.com/blog/function-calling-and-other-api-updates | EcstaticVenom | openai.com | 1970-01-01T00:00:00 | 0 | {} | 148l0mw | false | null | t3_148l0mw | /r/LocalLLaMA/comments/148l0mw/the_gap_between_open_source_llms_and_openai/ | false | false | default | 1 | null | |
How to generate longer stories? | 8 | Is there any trick to generate longer stories, with characters and actions happening? I've been using several models such as llama and wizardLM. My prompt is something like "Write a story about a man who falls in love with a coconut", but the stories generated don't even pretend to flow, it's just a few sentences of "There was a man named X. He met a coconut named Y. They fell in love. The moral of the story was that you can fall in love with a coconut" and that's it. | 2023-06-13T17:30:06 | https://www.reddit.com/r/LocalLLaMA/comments/148ksbt/how_to_generate_longer_stories/ | skocznymroczny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148ksbt | false | null | t3_148ksbt | /r/LocalLLaMA/comments/148ksbt/how_to_generate_longer_stories/ | false | false | self | 8 | null |
local GPT agent projects? | 0 | [removed] | 2023-06-13T17:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/148kjts/local_gpt_agent_projects/ | PossessionOk6481 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148kjts | false | null | t3_148kjts | /r/LocalLLaMA/comments/148kjts/local_gpt_agent_projects/ | false | false | default | 0 | null |
What can I do to get AMD GPU support CUDA-style? | 23 | Guys, I have a 6800 XT and I believe I can squeeze more juice from it.
On AMD we have this: [https://learn.microsoft.com/en-us/windows/ai/directml/gpu-pytorch-windows](https://learn.microsoft.com/en-us/windows/ai/directml/gpu-pytorch-windows). Is there any way to leverage it? If there isn't, is there anything close to it?
I am a decent coder and I can code it; I just need some general guidelines and some advice.
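To give a concrete starting point, the PyTorch-DirectML route from that Microsoft link boils down to a device object you move tensors and models to. A minimal smoke test might look like this (a sketch, assuming the torch-directml package is installed via pip):

    # Quick PyTorch-DirectML smoke test for an AMD GPU on Windows.
    import torch
    import torch_directml

    dml = torch_directml.device()        # default DirectML adapter, e.g. the 6800 XT
    x = torch.randn(2048, 2048).to(dml)
    y = torch.randn(2048, 2048).to(dml)
    z = x @ y                            # the matmul executes on the GPU through DirectML
    print(z.cpu().sum())

On Linux, the ROCm builds of PyTorch are the other common path; they reuse the torch.cuda API, so most CUDA-targeted code runs unchanged.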
I think it would be smart to go towards a one-code-for-all approaches for Windows, like Games have with DirectX, and we no longer depend on a single behemoth (NVIDIA) | 2023-06-13T17:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/148k700/what_can_i_do_to_get_amd_gpu_support_cudastyle/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148k700 | false | null | t3_148k700 | /r/LocalLLaMA/comments/148k700/what_can_i_do_to_get_amd_gpu_support_cudastyle/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'LVmzWMJU1UZwRubzQYJZSar-z-Rq8ntUH65yhQyfxB8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?width=108&crop=smart&auto=webp&s=0b9526a51504048891d5e64783519fd5dc3cd83f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?width=216&crop=smart&auto=webp&s=0150bc3ab1c6838c35ff951d69578f3d19ae4ed3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?width=320&crop=smart&auto=webp&s=c1e830770227ae4802c5776d22f63d0f6aa71b15', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?auto=webp&s=e62264227377a9581e2e2946169864d130fa3217', 'width': 400}, 'variants': {}}]} |
Raw result of Wonder Studio beta | 1 | [removed] | 2023-06-13T16:33:14 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148jhle | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/akimr7opbt5b1/DASHPlaylist.mpd?a=1695643881%2CYjgzNzAwM2UxMmUxOGEwODM5ZThkNTdmMDI4YTc5YTRiYThmNGNkZDU3NmI2ZmE5YmJjMTUzYTYwZTA5YmJmNw%3D%3D&v=1&f=sd', 'duration': 3, 'fallback_url': 'https://v.redd.it/akimr7opbt5b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/akimr7opbt5b1/HLSPlaylist.m3u8?a=1695643881%2CYzIxNzU2ZWQ5OTE5YTlkMWYxMTlmNGY5YzkwOThjYjJhOTAxODdjYTBmMGE4NWRhN2U2MTM2YzRkYWU3ODJkZQ%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/akimr7opbt5b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}} | t3_148jhle | /r/LocalLLaMA/comments/148jhle/raw_result_of_wonder_studio_beta/ | false | false | default | 1 | null | ||
Looking for help getting llama.cpp working on runpod | 1 | [deleted] | 2023-06-13T16:26:46 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148jc78 | false | null | t3_148jc78 | /r/LocalLLaMA/comments/148jc78/looking_for_help_getting_llamacpp_working_on/ | false | false | default | 1 | null | ||
Help: q2 k quant does not offer any speed advancement over q4 ggml llama cpp | 4 | Hi y'all. Have anyone tried k quants model on llama cpp python? I tried with the model q2\_k (smallest size that I could find) and it even took more time to run the same prompt compared to the q4\_0 model.
I don't know if this is normal, let me know if y'all experienced something different. | 2023-06-13T15:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/148igxf/help_q2_k_quant_does_not_offer_any_speed/ | Cheap-Routine4736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148igxf | false | null | t3_148igxf | /r/LocalLLaMA/comments/148igxf/help_q2_k_quant_does_not_offer_any_speed/ | false | false | self | 4 | null |
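In case it helps anyone compare notes, here is a rough way to time two quantizations under identical settings with llama-cpp-python (a sketch; file names and the prompt are placeholders, and the first call is only a warm-up):

    # Crude tokens/sec comparison between two quantizations of the same model.
    import time
    from llama_cpp import Llama

    for path in ["llama-13b.ggmlv3.q2_K.bin", "llama-13b.ggmlv3.q4_0.bin"]:  # placeholders
        llm = Llama(model_path=path, n_ctx=512)
        llm("Warm-up prompt", max_tokens=16)                 # ignore the first run
        start = time.time()
        out = llm("Write three sentences about llamas.", max_tokens=128)
        n = out["usage"]["completion_tokens"]
        print(path, f"{n / (time.time() - start):.2f} tokens/s")

Also worth noting: a smaller file does not automatically mean faster inference, since the k-quant kernels do more work per weight, so q2_k being no faster than q4_0 matches what some others have reported.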
"Emergent capabilities" and OpenAI models | 1 | I haven't been following the research for long, but I'm confused by researchers pointing to OpenAI models when talking about "emergent capabilities." Besides RLHF, those models are fine-tuned on task-specific instruction/response data OpenAI hasn't published information about, including "massive amounts of" synthetic data and now incorporating data from user interactions... so how can researchers make inferences about emergent capabilities in larger models when counting those models among the examples?
For example, if GPT-4 immediately shows step-by-step thinking when asked to do a math problem, I would say that's clearly not a behavior that emerged due to parameter count, but rather carefully curated instruction/response examples, with the specific aim of improving performance on math problems by "thinking step by step" | 2023-06-13T15:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/148hzx4/emergent_capabilities_and_openai_models/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148hzx4 | false | null | t3_148hzx4 | /r/LocalLLaMA/comments/148hzx4/emergent_capabilities_and_openai_models/ | false | false | self | 1 | null |
KoboldCPP Updated to Support K-Quants, new bonus CUDA build. | 85 | [deleted] | 2023-06-13T15:20:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148hul5 | false | null | t3_148hul5 | /r/LocalLLaMA/comments/148hul5/koboldcpp_updated_to_support_kquants_new_bonus/ | false | false | default | 85 | null | ||
Bitsandbytes for windows | 0 | [removed] | 2023-06-13T14:23:01 | [deleted] | 2023-06-14T03:41:27 | 0 | {} | 148gluf | false | null | t3_148gluf | /r/LocalLLaMA/comments/148gluf/bitsandbytes_for_windows/ | false | false | default | 0 | null | ||
Utilize my current hardware or upgrade? | 1 | I've got a 3070 with 8gb VRAM and 32gb RAM.
I want to be able to run 13b or even 30b models.
With the new ways to load models and offload to GPU, should I look at just upgrading my ram to 80GB or upgrade GPU to 4070 12gb? I don't have the budget to get a 4090. | 2023-06-13T14:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/148geaj/utilize_my_current_hardware_or_upgrade/ | reiniken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148geaj | false | null | t3_148geaj | /r/LocalLLaMA/comments/148geaj/utilize_my_current_hardware_or_upgrade/ | false | false | self | 1 | null |
Question Answering benchmark | 1 | [removed] | 2023-06-13T13:56:10 | https://www.reddit.com/r/LocalLLaMA/comments/148g1j9/question_answering_benchmark/ | nofreewill42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148g1j9 | false | null | t3_148g1j9 | /r/LocalLLaMA/comments/148g1j9/question_answering_benchmark/ | false | false | default | 1 | null |
Has anybody trained a model for Open Information Extraction? | 2 | I have the idea to train a model to do OpenIE for Dutch. I have found that the latest OpenAI models are quite good at generating triples from unstructured text. I will let a paid model generate triples for me and use those as training data.
Has anyone tried something similar? I have a RTX3090, so I guess I am fine with training a Lora model.
I have trained FLAN-t5-base before on wikipedia triples (REBEL data set) and the results were luke warm. | 2023-06-13T13:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/148fref/has_anybody_trained_a_model_for_open_information/ | kcambrek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148fref | false | null | t3_148fref | /r/LocalLLaMA/comments/148fref/has_anybody_trained_a_model_for_open_information/ | false | false | self | 2 | null |
A minimal design pattern for LLM-powered microservices with FastAPI & LangChain | 21 | 2023-06-13T12:43:49 | https://github.com/tleers/llm-api-starterkit | timleers | github.com | 1970-01-01T00:00:00 | 0 | {} | 148eldn | false | null | t3_148eldn | /r/LocalLLaMA/comments/148eldn/a_minimal_design_pattern_for_llmpowered/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'ZEU-RYtO_z2hDyy82PIoDqvx-r-84Sor43rhUNTTrN0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=108&crop=smart&auto=webp&s=8a3db85c32ca9bb3dff82809f883e6675716226f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=216&crop=smart&auto=webp&s=d207be995280b6ac3c682e069f0957b87e7f9b34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=320&crop=smart&auto=webp&s=bad4f83ddcf3caadc8ed087312755eb9481af297', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=640&crop=smart&auto=webp&s=daa3374b2114858c91a3cfd4cfc2da2727c12462', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=960&crop=smart&auto=webp&s=ef71213803f9e10f5fa348ad06de4680db2446a1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=1080&crop=smart&auto=webp&s=5f43c8edb54904ba4357ef03bb85013b8770bfc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?auto=webp&s=1aafa5bae7e9bc1def16d095dca958781e1a027f', 'width': 1200}, 'variants': {}}]} | ||
How to make your own local version of chatbase? | 4 | Hi all, I was wondering if people here know the technology/software stack to make a personal version of something like [chatbase.io](https://chatbase.io) or any of these other personal chatbots on your own data/documents. I'm very new here to these technologies and want to tinker around to better understand limitations, etc.
Bonus: if somebody here has done this, what has your experience been like? what challenges have you run into? how large of a model do you need to get reasonable responses? are any of the free llms suitable, or is it really only usable after the human-dialogue tailoring that went into ChatGPT?
Thanks! | 2023-06-13T11:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/148dkl8/how_to_make_your_own_local_version_of_chatbase/ | EnPaceRequiescat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148dkl8 | false | null | t3_148dkl8 | /r/LocalLLaMA/comments/148dkl8/how_to_make_your_own_local_version_of_chatbase/ | false | false | self | 4 | null |
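For anyone else searching, the usual shape of these tools is retrieval-augmented generation: chunk your documents, embed the chunks, retrieve the most similar ones for each question, and paste them into the prompt of whatever LLM you run. A minimal sketch, assuming sentence-transformers for the embeddings (the generation step can be any local model):

    # Bare-bones "chat with your documents" retrieval, no framework.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = ["First document chunk...", "Second document chunk...", "Third..."]  # placeholders
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(question: str, k: int = 2) -> list[str]:
        q = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vecs @ q                       # cosine similarity (vectors are normalized)
        return [docs[i] for i in np.argsort(-scores)[:k]]

    context = "\n".join(retrieve("What does the report say about costs?"))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
    # feed `prompt` to your local LLM of choice

Frameworks like LangChain or LlamaIndex wrap exactly this pattern, usually with a real vector store (Chroma, FAISS) instead of a plain array.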
FabLab Assistant | 0 | Briefly, here's one of my use cases:
I own a fairly decent commercial grade PC/Workstation, will update with specs if interested.
I also own a homemade 3-axis CNC milling machine, a blue laser engraver/cutter, miter saw and assorted power tools; I have experience running servos, steppers, microcontrollers, general digital electronics, PCB fabrication, microfabrication, fluidics, pneumatics, instrument control and so on and so forth. I've got sufficient coding skills, decent engineering simulations skills, and hardware fab skills down to at least micro-scale.
I want to make my tools work for me, though, not the other way round.
Objective: bootstrap automated/augmented fabrication lab capabilities with LLMs' assistance.
Since the objective looks generic, here is my specific application: I'd like to build a powerful homemade nuclear magnetic resonance (NMR) spectrometer, from almost scratch. I have lots of experience with industrial grade NMR specs, both hardware and software and the theory & practice behind it all.
I have plenty of ideas I want to explore on my own, for my own entertainment. What I do not have enough of is time and money (I know, shocking). Also, good power tools are expensive, but I'm considering investing some more. I have some good spare parts I can build stuff with, too, like a 3D printer or a small CNC lathe, or a 4th axis for the mill, etc...
An AI assistant that could take weight off my shoulders in any conceivable way would be such a huge return on investment for my hobby project.
Any suggestion or comment that vaguely aligns to my ramblings would be heartily appreciated. E.g. datasets to consider, finetuning strategies, pretrained models, anything goes. I'm just getting started in this space and I'm shooting in the dark, so to speak. | 2023-06-13T11:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/148djph/fablab_assistant/ | Lolleka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148djph | false | null | t3_148djph | /r/LocalLLaMA/comments/148djph/fablab_assistant/ | false | false | self | 0 | null |
British/American spelling of one word changes the output on the same seed. | 0 | LLaMA seems to deal well with alternate spellings and typos, but I wondered if using the British or American spellings of a word would change the output. It turns out it does.
Using the Openblas version of llama.cpp (cublas doesn't give deterministic results still, afaik), I chose the seed 256 and used the same settings each time with the prompt:
>write a story about a pigeon having a day out in the city centre
tulu-30b.ggmlv3.q5_K_M.bin wrote a nice little kid's story and understood centre was the same as center, and used the word center in the story.
Then I changed the prompt to:
>write a story about a pigeon having a day out in the city center
Totally different story although obviously in the same vein. | 2023-06-13T11:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/148df70/britishamerican_spelling_of_one_word_changes_the/ | ambient_temp_xeno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148df70 | false | null | t3_148df70 | /r/LocalLLaMA/comments/148df70/britishamerican_spelling_of_one_word_changes_the/ | false | false | self | 0 | null |
A little form to help me understand LLM's place in programming | 1 | [removed] | 2023-06-13T11:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/148d4b5/a_little_form_to_help_me_understand_llms_place_in/ | ouils | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148d4b5 | false | null | t3_148d4b5 | /r/LocalLLaMA/comments/148d4b5/a_little_form_to_help_me_understand_llms_place_in/ | false | false | default | 1 | null |
A local model for summarizing articles | 3 | [deleted] | 2023-06-13T11:17:51 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 148czv1 | false | null | t3_148czv1 | /r/LocalLLaMA/comments/148czv1/a_local_model_for_summarizing_articles/ | false | false | default | 3 | null | ||
Microsoft Research proposes new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference speed. Code will be open-sourced | 388 | Paper: [https://arxiv.org/abs/2306.07174](https://arxiv.org/abs/2306.07174)
Code: [https://aka.ms/LongMem](https://aka.ms/LongMem)
Excerpts:
>In this paper, we propose a framework for Language Models Augmented with Long-Term Memory, (LongMem), which enables language models to cache long-form previous context or knowledge into the non-differentiable memory bank, and further take advantage of them via a decoupled memory module to address the memory staleness problem.
>
>For LongMem, there are three key components: the frozen backbone LLM, SideNet, and Cache Memory Bank. To tap into the learned knowledge of the pretrained LLM, both previous and current inputs are encoded using the frozen backbone LLM but different representations are extracted. For previous inputs, the key-value pairs from the Transformer self-attention at m-th layer are stored in Cache Memory Bank, whereas the hidden states from each LLM decoder layer for the current inputs are retained and transferred to SideNet. The SideNet module can be viewed as an efficient adaption model that is trained to fuse the current input context and relevant cached previous contexts in the decoupled memory
>
>The long-term memory capability of LongMem is achieved via a memory-augmentation module for retrieval and fusion. Instead of performing token-to-token retrieval, we focus on token-to-chunk retrieval for acceleration and integrity. The memory bank stores cached key-value pairs at the level of token chunks. The proposed LongMem model significantly outperform all considered baselines on long-text language modeling datasets. Surprisingly, the proposed method achieves the state-of-the-art performance of 40.5% accuracy on ChapterBreakAO3 suffix identification benchmark and outperforms both the strong long-context transformers and latest LLM GPT-3 with 313x larger parameters.
>
>With the proposed unlimited-length memory augmentation, our LongMem method can overcome the limitation of the number of demonstration examples in the local context and even attend on the whole training set by loading it into the cached memory. When the model is required to comprehend long sequences, the proposed method LongMem can load the out-of-boundary inputs into the cached memory as previous context. Thus, the memory usage and inference speed can be significantly improved compared with vanilla self-attention-based models. | 2023-06-13T10:47:52 | https://www.reddit.com/r/LocalLLaMA/comments/148ch6z/microsoft_research_proposes_new_framework_longmem/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148ch6z | false | null | t3_148ch6z | /r/LocalLLaMA/comments/148ch6z/microsoft_research_proposes_new_framework_longmem/ | false | false | self | 388 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
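To make the token-to-chunk retrieval idea concrete, here is a schematic sketch of the kind of memory-bank lookup the paper describes: cache past key/value states in fixed-size chunks, summarize each chunk with a single key, and retrieve whole chunks by similarity to the current query. This is purely illustrative and not the authors' implementation (their code is at the link above):

    # Schematic token-to-chunk retrieval over a cached key/value memory bank.
    import torch

    CHUNK = 4  # tokens per cached chunk (toy size; the paper uses larger chunks)

    class ChunkMemory:
        def __init__(self):
            self.chunk_keys = []   # one mean-pooled key per chunk
            self.chunks = []       # the full (keys, values) tensors for each chunk

        def add(self, k: torch.Tensor, v: torch.Tensor):
            """Cache key/value states of shape (seq_len, dim) from a past segment."""
            for i in range(0, k.shape[0] - CHUNK + 1, CHUNK):
                self.chunk_keys.append(k[i:i + CHUNK].mean(dim=0))
                self.chunks.append((k[i:i + CHUNK], v[i:i + CHUNK]))

        def retrieve(self, q: torch.Tensor, top_k: int = 2):
            """Return the top_k cached chunks most similar to the current query of shape (dim,)."""
            scores = torch.stack(self.chunk_keys) @ q
            idx = scores.topk(min(top_k, len(self.chunks))).indices.tolist()
            return [self.chunks[i] for i in idx]

    mem = ChunkMemory()
    mem.add(torch.randn(16, 64), torch.randn(16, 64))
    recalled = mem.retrieve(torch.randn(64))

In LongMem the retrieved key/value chunks are then attended over by the trainable SideNet alongside the current context, while the backbone LLM stays frozen.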
Falcon 40B | 1 | [removed] | 2023-06-13T09:19:44 | https://www.reddit.com/r/LocalLLaMA/comments/148b1z7/falcon_40b/ | Toaster496 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148b1z7 | false | null | t3_148b1z7 | /r/LocalLLaMA/comments/148b1z7/falcon_40b/ | false | false | default | 1 | null |
FinGPT: Open-Source Financial Large Language Models | 25 | 2023-06-13T09:04:31 | https://arxiv.org/abs/2306.06031 | Balance- | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 148atct | false | null | t3_148atct | /r/LocalLLaMA/comments/148atct/fingpt_opensource_financial_large_language_models/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
Error During Alpaca Build | 1 | [removed] | 2023-06-13T08:51:52 | https://www.reddit.com/r/LocalLLaMA/comments/148amb1/error_during_alpaca_build/ | Strong-Employ6841 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148amb1 | false | null | t3_148amb1 | /r/LocalLLaMA/comments/148amb1/error_during_alpaca_build/ | false | false | default | 1 | null |
Gradient Ascent Post-training Enhances Language Model Generalization | 48 | Paper: [\[2306.07052\] Gradient Ascent Post-training Enhances Language Model Generalization (arxiv.org)](https://arxiv.org/abs/2306.07052)
Abstract:
>In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances its zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can allow LMs to become comparable to 2-3x times larger LMs across 12 different NLP tasks. We also show that applying GAP on out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning.
This seems like an interesting method to try on open source models. Any chance we could achieve some similar results with LORAs or QLORAs? What do you think? | 2023-06-13T08:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/148afhx/gradient_ascent_posttraining_enhances_language/ | ptxtra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148afhx | false | null | t3_148afhx | /r/LocalLLaMA/comments/148afhx/gradient_ascent_posttraining_enhances_language/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
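For anyone who wants to poke at it, the core of GAP as described is just a handful of optimizer steps that maximize, rather than minimize, the language-modeling loss on random unlabeled text. A minimal sketch with Hugging Face transformers (the model id, corpus, step count and learning rate are all placeholders; the paper tunes these carefully, and too many ascent steps will simply wreck the model):

    # Gradient-ascent post-training sketch: a few ascent steps on unlabeled text.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "facebook/opt-350m"                      # placeholder; the paper uses 350M-2.7B LMs
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).train()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-6)   # tiny LR; values here are guesses

    texts = ["some random unlabeled sentence ...", "another random snippet ..."]  # placeholder corpus
    for text in texts:                              # "just a few steps" of ascent
        batch = tok(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        (-loss).backward()                          # gradient ASCENT on the LM loss
        opt.step()
        opt.zero_grad()

Whether the same trick composes with LoRA/QLoRA adapters (ascending only the adapter weights) is an open question; nothing in the paper appears to rule it out, but it isn't what they tested.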
Hot take 🔥: Lots of buzz these days about new foundation open-source models but what if I told you there have been no real advance since 2019's T5 models 😀 - Yi Tay, ex GoogleBrain senior research scientist | 55 | 2023-06-13T08:01:14 | https://twitter.com/YiTayML/status/1668302949276356609 | saintshing | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1489v57 | false | {'oembed': {'author_name': 'Yi Tay', 'author_url': 'https://twitter.com/YiTayML', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Hot take 🔥: Lots of buzz these days about new foundation open-source models but what if I told you there have been no real advance since 2019's T5 models 😀<br><br>Take a look at this table from this new InstructEval paper: <a href="https://t.co/2lKNFCX7Ke">https://t.co/2lKNFCX7Ke</a>. Some thoughts/observations:<br><br>1.… <a href="https://t.co/qwZELqWkaK">pic.twitter.com/qwZELqWkaK</a></p>— Yi Tay (@YiTayML) <a href="https://twitter.com/YiTayML/status/1668302949276356609?ref_src=twsrc%5Etfw">June 12, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/YiTayML/status/1668302949276356609', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1489v57 | /r/LocalLLaMA/comments/1489v57/hot_take_lots_of_buzz_these_days_about_new/ | false | false | 55 | {'enabled': False, 'images': [{'id': '_JpfQjkvlhmKt-sCIwWth7_NTjjcXLVN7moNjzP77s8', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/Q2IFmd-zFy6nTUs9kTN-OgoOODxUTDYZuyfdbqH-x9c.jpg?width=108&crop=smart&auto=webp&s=4a94d704cfb55adc89bfa6e1769b1c70cadf763a', 'width': 108}], 'source': {'height': 75, 'url': 'https://external-preview.redd.it/Q2IFmd-zFy6nTUs9kTN-OgoOODxUTDYZuyfdbqH-x9c.jpg?auto=webp&s=9615717b08258d2a1d5221d1028977ae0b6601bd', 'width': 140}, 'variants': {}}]} | ||
Honkware/Manticore-13b-Landmark · Hugging Face | 31 | 2023-06-13T07:38:39 | https://huggingface.co/Honkware/Manticore-13b-Landmark | glowsticklover | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1489iuy | false | null | t3_1489iuy | /r/LocalLLaMA/comments/1489iuy/honkwaremanticore13blandmark_hugging_face/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'pfcHCM428xuIGlgLdfF-ND8u6vKuCegaJQjrX38aAV4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=108&crop=smart&auto=webp&s=b5c4c42225aa114a8022cdc3658c96eda104a631', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=216&crop=smart&auto=webp&s=68cd9627f3659463f4112635a7c59df6bfb34f80', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=320&crop=smart&auto=webp&s=71aba67e90923e95880f6ffe365c7db80f3b0f27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=640&crop=smart&auto=webp&s=2f9232d1ef45097476bdd37b697e49dc80edf644', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=960&crop=smart&auto=webp&s=17c316ccffc3573a248f2f9d39d498134de16b8e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=1080&crop=smart&auto=webp&s=a651059917f4c0682dc9f64e6dba4c5aad56dd61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?auto=webp&s=1980facb1e1a69e324df535b85e46975d768eca5', 'width': 1200}, 'variants': {}}]} | ||
How to compile models for MlC-LLM | 18 | The brilliant folks at MLC-LLM posted a tutorial on adding models to their client for running LLM's. I found it while scouring their social media. If you don't know MLC-LLM is a client meant for running LLMs like llamacpp, but on any device and at speed. It works on android, apple, Nvidia, and **AMD** gpus. They look like they are preparing to create a fair number of tutorials on their project, but this stood out since the inability to add models to their client meant it hasn't gotten attention compared to other methods of inference.
[GitHub - mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.](https://github.com/mlc-ai/mlc-llm)
[How to Compile Models — mlc-llm 0.1.0 documentation](https://mlc.ai/mlc-llm/docs/tutorials/compile-models.html) | 2023-06-13T07:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1489ami/how_to_compile_models_for_mlcllm/ | jetro30087 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1489ami | false | null | t3_1489ami | /r/LocalLLaMA/comments/1489ami/how_to_compile_models_for_mlcllm/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'v5fQwUsLUQibT4Hl66p9ydJOiRfoOIf-hrnthRvkmmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=108&crop=smart&auto=webp&s=3f3d89ea66851d9a189f4745e4944103738df827', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=216&crop=smart&auto=webp&s=f5fbd1846487914e0705ccfcc2c9c330884c9071', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=320&crop=smart&auto=webp&s=a33d130736bced65969c9e80515d43deb23cd7ff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=640&crop=smart&auto=webp&s=50ce3d0b0131ff6371e1f912e245d82950e31a3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=960&crop=smart&auto=webp&s=2c9cb5fdadb9746ed089adb3a143c64e4e3a055e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=1080&crop=smart&auto=webp&s=864151b4d3982a31f6c0aab309d713aa62a94abd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?auto=webp&s=00ab7b16a6258eed941903a04c51a14f9af2e89d', 'width': 1200}, 'variants': {}}]} |
The Artificial Intelligence Act | 15 | 2023-06-13T06:46:08 | https://artificialintelligenceact.eu/ | fallingdowndizzyvr | artificialintelligenceact.eu | 1970-01-01T00:00:00 | 0 | {} | 1488oqj | false | null | t3_1488oqj | /r/LocalLLaMA/comments/1488oqj/the_artificial_intelligence_act/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'A83_FVtL_M8DC_8TOXZ0adGfrLJusi1N5EjkBgIRMb8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=108&crop=smart&auto=webp&s=624e2983ca7fbffec152c53c4ea84d41ef0890e9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=216&crop=smart&auto=webp&s=a8dce1becececb96f7d05933498b8cfc30e7474e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=320&crop=smart&auto=webp&s=cd17d8eb07cbe1f8aac9bb4eac70eeae9b898dfe', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=640&crop=smart&auto=webp&s=9f995169eaa92868b595de83bcfff7509286dfa5', 'width': 640}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?auto=webp&s=a8b5c1be0d32aa6dde7a4634d3b056cfd640e66c', 'width': 700}, 'variants': {}}]} | ||
How to reduce Latency when using Falcon 7B? | 4 | I want to create a chatbot on my personal documents and When I use Falcon 7B with the RetrievalQA chain, the output takes atleast 15 minutes while it takes near 30-45 seconds when using with LLMChain. How do I speed this up? Also, are there any other Alternatives that are very fast(Open Source)?
PS: I'm using Google Colab Free for this | 2023-06-13T05:46:21 | https://www.reddit.com/r/LocalLLaMA/comments/1487pl7/how_to_reduce_latency_when_using_falcon_7b/ | Alive_Effective9516 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1487pl7 | false | null | t3_1487pl7 | /r/LocalLLaMA/comments/1487pl7/how_to_reduce_latency_when_using_falcon_7b/ | false | false | self | 4 | null |
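The usual culprit is prompt length: RetrievalQA stuffs the retrieved document chunks into the prompt, so the model processes thousands of extra tokens compared to a bare LLMChain, and on free Colab hardware that dominates the runtime. Two cheap knobs are retrieving fewer/smaller chunks and capping generation length. A hedged sketch of where those settings live (exact kwargs depend on your LangChain version and how you wrapped Falcon):

    # Keep the retrieval prompt short: fewer chunks and a tighter generation budget.
    from langchain.chains import RetrievalQA

    # `vectordb` and `llm` are the vector store and Falcon-7B wrapper you already built.
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})   # fewer retrieved chunks
    qa = RetrievalQA.from_chain_type(
        llm=llm,              # ideally with max_new_tokens capped (e.g. 256) in the pipeline
        chain_type="stuff",
        retriever=retriever,
    )

Smaller chunk sizes at indexing time help for the same reason, and a quantized or smaller instruct model will be far faster than full-precision Falcon-7B on that hardware.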
Oobabooga Webui : Connection Reset? | 2 | [deleted] | 2023-06-13T04:49:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1486rq2 | false | null | t3_1486rq2 | /r/LocalLLaMA/comments/1486rq2/oobabooga_webui_connection_reset/ | false | false | default | 2 | null | ||
Llama.cpp GPU Offloading Not Working for me with Oobabooga Webui - Need Assistance | 8 | Hello,
I've been trying to offload transformer layers to my GPU using the llama.cpp Python binding, but it seems like the model isn't being offloaded to the GPU. I've installed the latest version of llama.cpp and followed the instructions on GitHub to enable GPU acceleration, but I'm still facing this issue.
Here's a brief description of what I've done:
1. I've installed llama.cpp and the llama-cpp-python package, making sure to compile with CMAKE\_ARGS="-DLLAMA\_CUBLAS=on" FORCE\_CMAKE=1.
2. I've added --n-gpu-layers to the CMD\_FLAGS variable in webui.py.
3. I've verified that my GPU environment is correctly set up and that the GPU is properly recognized by my system. The nvidia-smi command shows the expected output, and a simple PyTorch test shows that GPU computation is working correctly.
I have an Nvidia RTX 3060 Ti with 8 GB VRAM.
I am trying to load a 13B model and offload some of it onto the GPU. Right now I have it loaded/working on CPU/RAM.
I was able to load the GGML model directly into RAM, but now I'm trying to offload some of it into VRAM to see if it would speed things up a bit. However, I'm not seeing any GPU VRAM being used or taken up.
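For reference, two quick checks that usually pinpoint this (a sketch, assuming the llama-cpp-python API): first, make sure the wheel you are actually importing was built with cuBLAS, because if it was installed before setting the CMake flags, pip will happily reuse the old CPU-only build from its cache; second, pass n_gpu_layers directly and watch the load log for lines about offloaded layers.

    # Verify that llama-cpp-python is offloading layers (placeholder model path).
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/wizardlm-13b.ggmlv3.q4_0.bin",  # placeholder path
        n_gpu_layers=20,   # a 13B LLaMA has 40 layers; ~20 is a reasonable start for 8 GB VRAM
        verbose=True,      # the load log should mention cuBLAS and how many layers were offloaded
    )
    print(llm("Hello", max_tokens=8))

If the log never mentions cuBLAS or offloading, reinstall with the CMAKE\_ARGS/FORCE\_CMAKE flags from step 1 plus pip's --no-cache-dir and --force-reinstall so the cached CPU-only wheel is not reused.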
​
Thanks!! | 2023-06-13T03:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/1485ir1/llamacpp_gpu_offloading_not_working_for_me_with/ | medtech04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1485ir1 | false | null | t3_1485ir1 | /r/LocalLLaMA/comments/1485ir1/llamacpp_gpu_offloading_not_working_for_me_with/ | false | false | self | 8 | null |
Is there a way to *partially* stream a prompt into Llama-cpp-Python? | 6 | I’ve noticed that a lot of the really powerful prompts have an enormous amount of boilerplate text in them that never changes. For example, prompts that focus on in-context learning will give a series of examples that remain static during every evaluation run. Because of that, I’m wondering if there is a way to partially ingest a prompt into Llama-cpp-Python, then wait for further input, and only ingest the last little bit of the prompt once the user submits it? As I understand it, that would really save on the amount of time you must wait before the model starts to output tokens, but I might be wrong. | 2023-06-13T02:10:22 | https://www.reddit.com/r/LocalLLaMA/comments/1483vrp/is_there_a_way_to_partially_stream_a_prompt_into/ | E_Snap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1483vrp | false | null | t3_1483vrp | /r/LocalLLaMA/comments/1483vrp/is_there_a_way_to_partially_stream_a_prompt_into/ | false | false | self | 6 | null |
If you value the open-source movement, donate to Common Crawl | 38 | I'm not affiliated at all, I just see the importance. This is one of the most important resources we need to maintain if we're going to keep the open-source AI movement alive. [https://commoncrawl.org/](https://commoncrawl.org/) | 2023-06-13T01:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/1483hlr/if_you_value_the_opensource_movement_donate_to/ | Careful-Temporary388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1483hlr | false | null | t3_1483hlr | /r/LocalLLaMA/comments/1483hlr/if_you_value_the_opensource_movement_donate_to/ | false | false | self | 38 | null |
Manticore-Falcon-Wizard-Orca-LLaMA | 103 | 2023-06-13T01:10:27 | PatientWizardTaken | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1482rff | false | null | t3_1482rff | /r/LocalLLaMA/comments/1482rff/manticorefalconwizardorcallama/ | false | false | 103 | {'enabled': True, 'images': [{'id': '2md2kR7hO82mvqFd7sg4Hwm_CcLKUa-5uU3qNEB4Kz0', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?width=108&crop=smart&auto=webp&s=cd82e66bb6f61e47772928c72f52c18668f4fe1e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?width=216&crop=smart&auto=webp&s=f1e7a69d1064c4fb2a128786136ddf9a06e8b11b', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?width=320&crop=smart&auto=webp&s=9a1002cc33ea29a179099024f51bba1b1fb95220', 'width': 320}], 'source': {'height': 512, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?auto=webp&s=6a556f9366c73426c9fea0184347074d9efbc5d0', 'width': 512}, 'variants': {}}]} | |||
Let's create a 65B benchmark in this thread | 52 | Please, everyone in this thread, post the specifications of your machine, including the software you are using (e.g. LLaMa.cpp), the format of 65B you are working with (e.g. Q4\_K\_M), and most important: the speed (token/s preferrably).
If you are using an Apple device, please also mention how many GPU cores you have.
If you are on a PC, please specify your GPU model, CPU model, and memory speed. | 2023-06-13T01:09:59 | https://www.reddit.com/r/LocalLLaMA/comments/1482r1r/lets_create_a_65b_benchmark_in_this_thread/ | Big_Communication353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1482r1r | false | null | t3_1482r1r | /r/LocalLLaMA/comments/1482r1r/lets_create_a_65b_benchmark_in_this_thread/ | false | false | self | 52 | null |
Best machine setup to last the next couple years? | 17 | If I wanted to get a rig that would be good for the next couple years as far as “keeping up” with things like llama what would you guys suggest as far as specs? | 2023-06-13T00:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1482gd5/best_machine_setup_to_last_the_next_couple_years/ | ricketpipe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1482gd5 | false | null | t3_1482gd5 | /r/LocalLLaMA/comments/1482gd5/best_machine_setup_to_last_the_next_couple_years/ | false | false | self | 17 | null |
Best UI starting out? | 1 | [removed] | 2023-06-13T00:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/1481mrp/best_ui_starting_out/ | themushroommage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1481mrp | false | null | t3_1481mrp | /r/LocalLLaMA/comments/1481mrp/best_ui_starting_out/ | false | false | default | 1 | null |
Passing embeddings to llama with ctransformers for long term memory | 2 | Hey guys sorry if this might sound stupid but I have been thinking of building an app with long term memory by caching responses and requests and saving them into a file where vectors will be continuously generated on them but I don't know how to pass those embeddings to the model using c transformers | 2023-06-12T23:50:03 | https://www.reddit.com/r/LocalLLaMA/comments/148177l/passing_embeddings_to_llama_with_ctransformers/ | GOD_HIMSELVES | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 148177l | false | null | t3_148177l | /r/LocalLLaMA/comments/148177l/passing_embeddings_to_llama_with_ctransformers/ | false | false | self | 2 | null |
Can you increase a models number of parameters? | 0 | [removed] | 2023-06-12T23:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1480qg1/can_you_increase_a_models_number_of_parameters/ | TimTams553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1480qg1 | false | null | t3_1480qg1 | /r/LocalLLaMA/comments/1480qg1/can_you_increase_a_models_number_of_parameters/ | false | false | default | 0 | null |
Id like to download wizardlm before it gets blocked | 0 | [removed] | 2023-06-12T22:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/147zmm8/id_like_to_download_wizardlm_before_it_gets/ | coop7774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147zmm8 | false | null | t3_147zmm8 | /r/LocalLLaMA/comments/147zmm8/id_like_to_download_wizardlm_before_it_gets/ | false | false | default | 0 | null |
llama.cpp just got full CUDA acceleration, and now it can outperform GPTQ! | 397 | New PR just added by Johannes Gaessler: [https://github.com/ggerganov/llama.cpp/pull/1827](https://github.com/ggerganov/llama.cpp/pull/1827)
This adds full GPU acceleration to llama.cpp. It is now able to fully offload all inference to the GPU.
For the first time ever, this means GGML can now outperform AutoGPTQ and GPTQ-for-LLaMa inference (though it still loses to exllama)
Note: if you test this, be aware that you should now use `--threads 1` as it's no longer beneficial to use multiple threads; in fact it slows down performance a lot.
# Some initial benchmarks
**With H100 GPU + Intel Xeon Platinum 8480+ CPU:**
* 7B q4\_K\_S:
* Previous llama.cpp performance: 25.51 tokens/s
* New PR llama.cpp performance: 60.97 tokens/s
* = 2.39x
* AutoGPTQ 4bit performance on this system: 45 tokens/s
* 30B q4\_K\_S
* Previous llama.cpp performance: 10.79 tokens/s
* New PR llama.cpp performance: 18.62 tokens/s
* = 1.73x
* AutoGPTQ 4bit performance on the same system: 20.78 tokens/s
**On 4090 GPU + Intel i9-13900K CPU:**
* 7B q4\_K\_S:
* New llama.cpp performance: 109.29 tokens/s
* AutoGPTQ CUDA 7B GPTQ 4bit: 98 tokens/s
* 30B q4\_K\_S:
* New PR llama.cpp performance: 29.11 tokens/s
* AutoGPTQ CUDA 30B GPTQ 4bit: 35 tokens/s
So on 7B models, GGML is now ahead of AutoGPTQ on both systems I've tested.
On 30B it's a little behind, but within touching distance.
And Johannes says he believes there's even more optimisations he can make in future.
Everything we knew before is changing! Now GGML is both the most flexible/accessible, AND starting to rival the fastest.
(For Llama models anyway. This really emphasises how far ahead Llama GGML development is versus the other GGML model types, like GPT-J, GPT-NeoX, MPT, StarCoder, etc.)
# Still CPU bottlenecked (for now)
There have been quite a few comments asking why the 4090 beats the H100. It's important to note that the CPU still plays an important part.
I believe the reason why the 4090 system is winning in the benchmarks above is because the i9-13900K in that system has much higher single-core performance than the server CPU in the H100 system.
When doing these benchmarks, I noted that:
* One CPU core was pegged at 100%
* GPU utilisation was well below 100%:
* On the H100 with Intel Xeon, max GPU usage was 55%
* On the 4090 with i9-13900K, max GPU usage was 69%
Therefore the CPU is still an important factor and can limit/bottleneck the GPU. And specifically, it's now the max single-core CPU speed that matters, not the multi-threaded CPU performance like it was previously in llama.cpp. This now matches the behaviour of pytorch/GPTQ inference, where single-core CPU performance is also a bottleneck (though apparently the exllama project has done great work in reducing that dependency for their GPTQ implementation.)
Johannes, the developer behind this llama.cpp PR, says he plans to look at further CPU optimisations which might make CPU less of a bottleneck, and help unlock more of that currently-unused portion of the GPU.
# Increased VRAM requirements with the new method
One important thing to note is that these performance improvements have come as a result of putting more on to the GPU, which necessarily also increases VRAM usage.
As a result, some people are finding that their preferred quant size will no longer fit in VRAM on their hardware, and that performance is therefore lower.
Johannes has said he plans on making the changes optional:
>I plan to make moving the KV cache to VRAM optional but before I decide on a specific implementation for the user interface I'll need to do some performance testing. If it turns out that the KV cache is always less efficient in terms of t/s per VRAM then I think I'll just extend the logic for --n-gpu-layers to offload the KV cache after the regular layers if the value is high enough.
So by the time this PR has reached llama.cpp main, it should be possible for the user to choose what works best for them: maximum performance but perhaps on a smaller quant size, or a larger quant size with the same performance as they're used to now.
# Want to try this for yourself?
* For now you'll need to compile the PR from source:
​
git clone https://github.com/JohannesGaessler/llama.cpp llama.cpp-PR
cd llama.cpp-PR
git checkout cuda-full-gpu-2
make clean && LLAMA_CUBLAS=1 make -j
​ | 2023-06-12T22:12:15 | https://www.reddit.com/r/LocalLLaMA/comments/147z6as/llamacpp_just_got_full_cuda_acceleration_and_now/ | The-Bloke | self.LocalLLaMA | 2023-06-13T10:43:14 | 0 | {} | 147z6as | false | null | t3_147z6as | /r/LocalLLaMA/comments/147z6as/llamacpp_just_got_full_cuda_acceleration_and_now/ | false | false | self | 397 | {'enabled': False, 'images': [{'id': 'libw-YiNaD5BcmhkgQeD707MDy7dFNk9mryQZ0gsqvM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=108&crop=smart&auto=webp&s=448424edf998a31a1f3075021f02d2cb4b3ae890', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=216&crop=smart&auto=webp&s=b08d914ba4082f4117d1d12e7fe101ac2895afdb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=320&crop=smart&auto=webp&s=281a05cbdf0ca5efb53ce5c8f571627d8624f417', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=640&crop=smart&auto=webp&s=39c2c59081b59572dce3d63fb16dc25b7242203c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=960&crop=smart&auto=webp&s=ebc05e4017c9113a5a95e7a1b2b8cdf9222913b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=1080&crop=smart&auto=webp&s=b2acffac245312646f0a627aaaad5b188faac8a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?auto=webp&s=032cf4107a62a0de7efcc5144dbab611328f98ef', 'width': 1200}, 'variants': {}}]} |
Developer Help | 0 | [removed] | 2023-06-12T20:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/147xaaq/developer_help/ | Own_Turnip8625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147xaaq | false | null | t3_147xaaq | /r/LocalLLaMA/comments/147xaaq/developer_help/ | false | false | default | 0 | null |
Advice on training a grammar correction model? | 1 | Howdy folks, could anyone point me to (or provide some advice) on how to train a local model to transform a sentence from a freeform construction into one that follows a set of specific grammatic rules? Basically, I'd like to convert our code docs into a formal language that can then be analyzed for completeness and correctness. It seems like I should be able to do this...but playing around with RedPajama-Incite-3B, my training gives terrible results. I am pretty new to this, so I'm definitely making a hash of things. Any advice would be much appreciated! | 2023-06-12T20:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/147wzjn/advice_on_training_a_grammar_correction_model/ | shitty_coder_2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147wzjn | false | null | t3_147wzjn | /r/LocalLLaMA/comments/147wzjn/advice_on_training_a_grammar_correction_model/ | false | false | self | 1 | null |
Looking to hire an AI Web Developer for a Project | 0 | [removed] | 2023-06-12T20:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/147w7um/looking_to_hire_an_ai_web_developer_for_a_project/ | Specific_Valuable893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147w7um | false | null | t3_147w7um | /r/LocalLLaMA/comments/147w7um/looking_to_hire_an_ai_web_developer_for_a_project/ | false | false | default | 0 | null |
The Safari of Deep Signal Processing: Hyena and Beyond (New Models for Ultra-Long Sequences) | 24 | 2023-06-12T19:29:12 | https://hazyresearch.stanford.edu/blog/2023-06-08-hyena-safari | Balance- | hazyresearch.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 147vdtb | false | null | t3_147vdtb | /r/LocalLLaMA/comments/147vdtb/the_safari_of_deep_signal_processing_hyena_and/ | false | false | 24 | {'enabled': False, 'images': [{'id': '-WHgGLJANkDpubg8JwSLJ_kMgGHdyAiWnD4mQMVCLm0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=108&crop=smart&auto=webp&s=fd10df8933b1c9751ea7d0fcf20f1e54587a02ce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=216&crop=smart&auto=webp&s=d39d5b78df8653abb63f948d029eb734c0bd0254', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=320&crop=smart&auto=webp&s=2a242c8ec32e5e02a650a73b9418c82d2327ac54', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?auto=webp&s=b3c6e6f793801a135b69d089032f2aee32682946', 'width': 460}, 'variants': {}}]} | ||
Can we participate in the Subredit Blackout? | 92 | I wonder if the mods are open to the idea of participating in the [subreddit blackout](https://www.google.com/search?q=subreddit+blackout&tbm=nws&sxsrf=APwXEddc6z6xiO5fkre7jgb0sWf38uDTmQ:1686595092461&source=lnt&tbs=sbd:1&sa=X&ved=2ahUKEwjFsPaqsL7_AhXDEFkFHYRaACsQpwV6BAgZEBM&biw=931&bih=568&dpr=1.1) over their api changes as well as accessibility issues for blind users.
Apparently over 3k subreddits are participating right now, and growing.
Thanks for your consideration! | 2023-06-12T18:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/147u9k7/can_we_participate_in_the_subredit_blackout/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147u9k7 | false | null | t3_147u9k7 | /r/LocalLLaMA/comments/147u9k7/can_we_participate_in_the_subredit_blackout/ | false | false | self | 92 | null |
Best or more complete instruction datasets | 7 | As if we didn't have enough threats from politicians (aka useless people), now the big companies seem to be going against open source AI as well, as usual driven by their unquenchable greed and malice.
So considering that the future looks bleak, what are some of the most complete datasets out there?
I got some time ago this one: [https://huggingface.co/datasets/anon8231489123/ShareGPT\_Vicuna\_unfiltered](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) since it removed some of the ethics filter BS that plagues a lot of instruction datasets that use chatGPT or corporate data.
But are there any other complete and interesting datasets that one should save and keep for a rainy day, once censorship lays waste to sites like huggingface? I've been using some of that data to train stuff like novelAI modules (i.e. LoRAs), so I really want to keep as many interesting instruction datasets as I possibly can (or any other interesting datasets).
What suggestions do you have? | 2023-06-12T18:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/147u6ag/best_or_more_complete_instruction_datasets/ | CulturedNiichan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147u6ag | false | null | t3_147u6ag | /r/LocalLLaMA/comments/147u6ag/best_or_more_complete_instruction_datasets/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'hCJm1WvoukTm8o3iKxx6PgypOTukUiQ9MSNgq1s3NQE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=108&crop=smart&auto=webp&s=53cfd5649ccabc02caf81c85c0ef6fd93c0d6753', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=216&crop=smart&auto=webp&s=4b2776e4ab9a0394aada31f03054955a7242c6b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=320&crop=smart&auto=webp&s=5fa1a900b723e80f7b65e561e5028867be4b58c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=640&crop=smart&auto=webp&s=13412c8d161e4a13edf3f7ad8b8750684a005536', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=960&crop=smart&auto=webp&s=f73fac0c06956e47104c1b3c606a3edaf1b1d98f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=1080&crop=smart&auto=webp&s=200773d04c8debe3865bdc395a318126791fffde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?auto=webp&s=6130b1031b11bc2639db3f24677561e5a4e73b10', 'width': 1200}, 'variants': {}}]} |
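For anyone who, like the poster above, wants to keep local copies of datasets in case they disappear, here is a minimal sketch of one way to mirror a Hugging Face dataset, assuming the huggingface_hub client is installed; the repo id comes from the link above and the local path is only an example:

```python
# Minimal sketch: mirror a Hugging Face dataset repo locally so a copy survives
# even if the upstream repo is taken down. Assumes huggingface_hub is installed.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="anon8231489123/ShareGPT_Vicuna_unfiltered",  # repo from the link above
    repo_type="dataset",
    local_dir="./datasets/ShareGPT_Vicuna_unfiltered",    # example path, adjust as needed
)
print(f"Dataset files saved under: {local_path}")
```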
Best Open Source Model for Therapy? | 5 | I've been wanting to experiment with some open source models that offer good therapy advice - yk generally uplifting, doesn't have to be too scientific + that sounds more like a real person than GPT does. Has anyone had any experience with models that might be useful? | 2023-06-12T18:29:01 | https://www.reddit.com/r/LocalLLaMA/comments/147u0a3/best_open_source_model_for_therapy/ | robopika | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147u0a3 | false | null | t3_147u0a3 | /r/LocalLLaMA/comments/147u0a3/best_open_source_model_for_therapy/ | false | false | self | 5 | null |
Long-term memory management for Llama model. | 9 | I am trying to build long-term memory management for a Llama model, but I am not getting anywhere. So I am here to ask: is there an existing prompt or long-term memory system for Llama?
My prompt so far:
```
f"""### Instruction: You are memAI, an AI with extended memory capabilities.
You can "essentially" talk to your memory.
In reality the memory is just a command parser that at the start provides you with a question you will need to answer.
There is a question below and you will need to answer it.
All of your responses will be parsed, so make sure they comply with the commands listed below.
Available commands:
1. Name: 'List files', Description: 'Returns currently available .txt files', Usage: 'list_files'
2. Name: 'Read file', Description: 'Returns content of a specified .txt file', Usage: 'read_file example.txt'
3. Name: 'Final answer', Description: 'Ends the answer refining loop', Usage: 'final_answer example answer'
Now you should understand what to do.
Just keep in mind that if your response is not an available command your message will be invalid, so please respond with the commands and with the commands only.
{chat_history}
Respond only with one of the available commands:
### Input:
memory: {user_input}
### Response:
memAI: """
``` | 2023-06-12T16:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/147rhm5/longterm_memory_management_for_llama_model/ | floppapeek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147rhm5 | false | null | t3_147rhm5 | /r/LocalLLaMA/comments/147rhm5/longterm_memory_management_for_llama_model/ | false | false | self | 9 | null |
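For the parsing side, here is a minimal sketch of how the "memory" command parser described in the prompt above could look, assuming the model replies with a single line in the stated format; the function, file handling and directory names are illustrative and not from any existing library:

```python
# Minimal sketch of a parser for the memAI commands described in the prompt above.
# The command names follow the prompt; everything else is illustrative.
from pathlib import Path

def parse_response(response: str, memory_dir: str = "memory") -> tuple[str, bool]:
    """Return (text to feed back to the model, whether final_answer was given)."""
    parts = response.strip().split(maxsplit=1)
    command = parts[0] if parts else ""
    argument = parts[1] if len(parts) > 1 else ""

    if command == "list_files":
        files = sorted(p.name for p in Path(memory_dir).glob("*.txt"))
        return "\n".join(files) or "(no files)", False
    if command == "read_file":
        path = Path(memory_dir) / argument
        if path.is_file():
            return path.read_text(), False
        return f"file not found: {argument}", False
    if command == "final_answer":
        return argument, True
    return "invalid command, please respond with one of the available commands", False
```

Each returned text would then be appended to {chat_history} as the next memory: input, looping until final_answer ends the refining loop.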
MacBook Air 15 inch capabilities? | 0 | [removed] | 2023-06-12T16:41:29 | https://www.reddit.com/r/LocalLLaMA/comments/147rh8d/macbook_air_15_inch_capabilities/ | Necessary_Ad_9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147rh8d | false | null | t3_147rh8d | /r/LocalLLaMA/comments/147rh8d/macbook_air_15_inch_capabilities/ | false | false | default | 0 | null |
Finetuning using Google Colab (Free Tier) | 0 | I wanted to finetune any of the open-source LLMs using the free Google Colab runtime instances. Is there any setup that works out the best? If so, could you please share them? I was trying to use LoRA adaptors on the free google colab but I ran out of RAM and am unable to proceed. | 2023-06-12T16:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/147r92h/finetuning_using_google_colab_free_tier/ | garamkarakchai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147r92h | false | null | t3_147r92h | /r/LocalLLaMA/comments/147r92h/finetuning_using_google_colab_free_tier/ | false | false | self | 0 | null |
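For the out-of-RAM problem described in the post above, one commonly used setup is to load the base model in 8-bit and train only LoRA adapters. The following is a rough sketch rather than a tested free-tier recipe; the model name, quantization flag and LoRA hyperparameters are examples only and assume transformers, peft and bitsandbytes are installed:

```python
# Sketch: load a small base model in 8-bit and attach LoRA adapters so that only
# the adapter weights are trained. Model name and settings are examples only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "togethercomputer/RedPajama-INCITE-Base-3B-v1"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # keep the frozen base weights in 8-bit to save memory
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # example; module names depend on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights should be trainable
```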
What are the best open sourced LLMs for financial NLP tasks ? | 0 | 2023-06-12T16:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/147qw2p/what_are_the_best_open_sourced_llms_for_financial/ | Zine47X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147qw2p | false | null | t3_147qw2p | /r/LocalLLaMA/comments/147qw2p/what_are_the_best_open_sourced_llms_for_financial/ | false | false | default | 0 | null | |
what is the next step after LLM? when? how? why? required? | 0 | [removed] | 2023-06-12T15:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/147q46o/what_is_the_next_step_after_llm_when_how_why/ | Sofronyami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147q46o | false | null | t3_147q46o | /r/LocalLLaMA/comments/147q46o/what_is_the_next_step_after_llm_when_how_why/ | false | false | default | 0 | null |
Web-ui vs API | 1 | When I call the API using Postman, I don't get the same results as when I use the Web-UI. How can I replicate the parameters that the Web-UI is using?
Here is a sample of what I'm sending; it just rewrites my question instead of giving me a story:
{
"prompt": "write a short story about a bear and a dog that became friends",
"max_new_tokens": 250,
"do_sample": true,
"temperature": 1.3,
"top_p": 0.1,
"typical_p": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"tfs": 1,
"top_a": 0,
"repetition_penalty": 1.18,
"top_k": 40,
"min_length": 0,
"no_repeat_ngram_size": 0,
"num_beams": 1,
"penalty_alpha": 0,
"length_penalty": 1,
"early_stopping": false,
"mirostat_mode": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
"seed": -1,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": true,
"stopping_strings": []
} | 2023-06-12T13:47:04 | https://www.reddit.com/r/LocalLLaMA/comments/147oklj/webui_vs_api/ | igorbirman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147oklj | false | null | t3_147oklj | /r/LocalLLaMA/comments/147oklj/webui_vs_api/ | false | false | self | 1 | null |
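For comparison, here is a minimal sketch of sending that same payload from Python rather than Postman. It assumes text-generation-webui was started with --api and that the default blocking endpoint on port 5000 is in use; adjust the URL if your setup differs. (A likely reason the API output differs from the Web-UI is that the UI wraps your text in the model's instruction/chat template, while the raw prompt field is sent to the model verbatim.)

```python
# Sketch: send the payload above to text-generation-webui's blocking API.
# Assumes the server was started with --api; URL and port are the defaults
# and may need adjusting for your setup.
import requests

payload = {
    "prompt": "write a short story about a bear and a dog that became friends",
    "max_new_tokens": 250,
    "do_sample": True,
    "temperature": 1.3,
    "top_p": 0.1,
    "repetition_penalty": 1.18,
    "top_k": 40,
    # ...remaining parameters as in the JSON above
}

response = requests.post("http://localhost:5000/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])
```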
Finetuning on multiple GPUs | 4 | Hi,
Does anyone have a working example for finetuning LLaMa or Falcon on multiple GPUs?
If it also has QLoRA that would be the best but afaik it's [not implemented in bitsandbytes yet](https://github.com/TimDettmers/bitsandbytes/issues/366)? | 2023-06-12T13:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/147o6pb/finetuning_on_multiple_gpus/ | Simhallq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147o6pb | false | null | t3_147o6pb | /r/LocalLLaMA/comments/147o6pb/finetuning_on_multiple_gpus/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'X92AqpWm5jfYXXLshVqqMTAbuBIAstNiq5DLgpyT2Vk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=108&crop=smart&auto=webp&s=25735138efd4a163368d9e3e1f0a5771c43f7938', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=216&crop=smart&auto=webp&s=ecdfe9f7546516bb6cd5463ddb933a318abfc988', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=320&crop=smart&auto=webp&s=f652a86d64d4ed0ab656c3a0eeeef4425aaf66a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=640&crop=smart&auto=webp&s=ce9080c7eb06e27559723d90ce66b4c35c9db040', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=960&crop=smart&auto=webp&s=1c189639e25ec5aef03edadcacb176747ec51c42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=1080&crop=smart&auto=webp&s=1227e0c6e69449d2905e1ad6201d0331a458aa07', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?auto=webp&s=8cbd2d269894c50888a9b60253d74e2b4a8e24e7', 'width': 1200}, 'variants': {}}]} |
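One possible starting point for the question above, sketched here with example values only: sharding the model across every visible GPU with device_map="auto" (naive model parallelism), which is often the simplest way to finetune something that does not fit on one card. This is an untested sketch, not a known-good recipe; the model name, memory limits and launch commands are assumptions:

```python
# Sketch: shard a large model across all visible GPUs with device_map="auto".
# Model name, memory limits and dtype are examples, not a tested recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b"  # example; any LLaMA/Falcon checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",                    # spread layers over every GPU found
    max_memory={0: "20GiB", 1: "20GiB"},  # example per-GPU limits
    trust_remote_code=True,               # Falcon checkpoints need this
)
print(model.hf_device_map)  # shows which layers landed on which GPU

# From here a normal transformers Trainer (optionally with PEFT/LoRA on top)
# can be used. For data-parallel training across GPUs instead, the usual route
# is to launch the training script with `torchrun --nproc_per_node=<num_gpus>`
# or `accelerate launch`.
```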
Which one of these models has potential to become sentient? | 0 | [deleted] | 2023-06-12T12:57:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 147nhpn | false | null | t3_147nhpn | /r/LocalLLaMA/comments/147nhpn/which_one_of_these_models_has_potential_to_become/ | false | false | default | 0 | null | ||
How to add GPU support to oobabooga? | 4 | [deleted] | 2023-06-12T12:42:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 147n6ti | false | null | t3_147n6ti | /r/LocalLLaMA/comments/147n6ti/how_to_add_gpu_support_to_oobabooga/ | false | false | default | 4 | null | ||
How to keep track of all the LLMs out there? | 34 | Hi,
I'm supposed to be the NLP "expert" at work, but I am so overwhelmed by the LLM scene right now, with new ones popping up every day. Is there an easy way to keep track of all the LLMs out there? This includes how to download each model, how to use it both programmatically and from a UI, what type of model it is, etc.
Thanks. | 2023-06-12T11:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/147mbr6/how_to_keep_track_of_all_the_llms_out_there/ | learning_agent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 147mbr6 | false | null | t3_147mbr6 | /r/LocalLLaMA/comments/147mbr6/how_to_keep_track_of_all_the_llms_out_there/ | false | false | self | 34 | null |
Which best uncensored (free-speech) LLM models should we download before its too late? | 124 | [deleted] | 2023-06-12T11:20:17 | [deleted] | 2023-06-14T08:28:49 | 0 | {} | 147lmku | false | null | t3_147lmku | /r/LocalLLaMA/comments/147lmku/which_best_uncensored_freespeech_llm_models/ | false | false | default | 124 | null |