| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The non-synthetic Library of Alexandria | 35 | There was a [discussion](https://www.reddit.com/r/LocalLLaMA/comments/16vruh8/with_llms_we_can_create_a_fully_opensource/) earlier about the creation of a synthetic Library of Alexandria. These efforts are commendable, but I have to wonder: aren't we doing things based on flawed laws that are in need of repair in the first place? I'm referring to copyright laws that restrict words and knowledge, essentials for modern research, AI, and even society. Why should we look for ways to circumvent these laws instead of pushing for the repeal of outdated legal restrictions rooted in an era of material, not informational, economics? This is especially true for educational and scholarly writings that are mostly funded by taxpayers and genuinely save lives.
Spoiler: I'm associated with the Library of Standard Template Constructs. It's a non-commercial project, and we've built on what Sci-Hub and LibGen started.
We have recently released a [dataset](https://www.reddit.com/r/science_nexus/comments/16vj7w2/we_have_prepared_the_dataset_of_250k_books_and/) containing numerous text layers, regardless of their legal status. I hope it proves beneficial for those aiming to advance AI further.
So what do you think? Should the potential benefits of well-trained AI outweigh the burden of legacy laws and lead to their amendment or repeal? | 2023-09-30T18:49:31 | https://www.reddit.com/r/LocalLLaMA/comments/16welch/the_nonsynthetic_library_of_alexandria/ | ultra_nymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16welch | false | null | t3_16welch | /r/LocalLLaMA/comments/16welch/the_nonsynthetic_library_of_alexandria/ | false | false | self | 35 | null |
Custom local agent strategies | 11 | I’ve been playing around a lot with langchain and llama models on my M1 Max.
After learning in python, I switched over to node + typescript since that’s my area of expertise. I have a web server that I’ve been using to run test prompts and inputs for various kinds of problems, and I’ve even successfully connected it to Siri with a loop to have a continuous conversation with custom memory and a vector store.
My goal is to create a mostly local set of tools that can help me do my job as a tech lead and organize my busy life in ways that current tools can’t.
I’ve had success with models being able to categorize between questions, commands, facts, and reminders/todos, and sometimes even splitting compound requests into their parts. I’ve been trying to break down what amounts to agent features into smaller and simpler parts so that current local LLMs can handle the tasks. Rather than using the prebuilt conversation tools or agents, I have to write them myself and rely less on LLMs for larger, more complicated prompts.
I’m struggling a lot with getting good JSON output, and with figuring out how to break down multi-step commands in a way that mimics agents. Routing seems to go well as long as you keep categorization simple.
What are your thoughts on making more calls to limited LLMs, and how can I achieve better planning and structured output?
What seem to be the best local llms 7-34b for instruct tasks and JSON output? | 2023-09-30T17:42:28 | https://www.reddit.com/r/LocalLLaMA/comments/16wcypn/custom_local_agent_strategies/ | n0vat3k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16wcypn | false | null | t3_16wcypn | /r/LocalLLaMA/comments/16wcypn/custom_local_agent_strategies/ | false | false | self | 11 | null |
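For the routing-then-validate pattern described above, a minimal sketch with llama-cpp-python might look like this (the model path and category names are placeholders, not from the post):

```python
# Ask the model for a single category token, then validate it in code
# instead of trusting free-form output.
from llama_cpp import Llama

CATEGORIES = {"question", "command", "fact", "reminder"}
llm = Llama("path/to/model.gguf", n_ctx=2048)

def route(user_text: str) -> str:
    prompt = (
        "Classify the input as exactly one of: question, command, fact, reminder.\n"
        f"Input: {user_text}\nCategory:"
    )
    out = llm(prompt, max_tokens=4, temperature=0)["choices"][0]["text"]
    label = (out.strip().lower().split() or [""])[0].strip(".,")
    return label if label in CATEGORIES else "question"  # safe fallback

print(route("remind me to call the dentist tomorrow"))
```

Keeping each call this small is what tends to make 7-13B models reliable here; the planning step can then work over validated labels instead of raw JSON.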
I have these specs on my laptop, what model would you recommend? | 1 | [removed] | 2023-09-30T17:36:55 | https://www.reddit.com/r/LocalLLaMA/comments/16wctxj/i_have_these_specs_on_my_laptop_what_model_would/ | forgotpasswordnewsn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16wctxj | false | null | t3_16wctxj | /r/LocalLLaMA/comments/16wctxj/i_have_these_specs_on_my_laptop_what_model_would/ | false | false | self | 1 | null |
What are u using to run GGUF on CPU? | 20 | So I would like to know what people are using to run GGUF models on CPU only, without a GPU (I'm not sure if that's even possible). Sorry for the stupid question 😅 | 2023-09-30T17:31:59 | https://www.reddit.com/r/LocalLLaMA/comments/16wcpiz/what_are_u_using_to_run_gguf_in_cpu/ | kroryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16wcpiz | false | null | t3_16wcpiz | /r/LocalLLaMA/comments/16wcpiz/what_are_u_using_to_run_gguf_in_cpu/ | false | false | self | 20 | null |
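For reference, a minimal CPU-only setup with llama-cpp-python (koboldcpp and the llama.cpp CLI work the same way); `n_gpu_layers=0` keeps everything on the CPU:

```python
from llama_cpp import Llama

# Plain CPU inference: no GPU offload, thread count tuned to your cores.
llm = Llama("path/to/model.gguf", n_ctx=2048, n_threads=8, n_gpu_layers=0)
out = llm("Q: Can GGUF models run on CPU only?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```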
stablelm-3b-4e1t put a creative ending to "The Raven" | 13 | 2023-09-30T17:02:39 | platistocrates | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16wc07h | false | null | t3_16wc07h | /r/LocalLLaMA/comments/16wc07h/stablelm3b4e1t_put_a_creative_ending_to_the_raven/ | false | false | 13 | {'enabled': True, 'images': [{'id': 'JqVhzImohiU-oPxVU1K9EOfg-f3nlfKdD6M6njVLL9o', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=108&crop=smart&auto=webp&s=2e48061257fa70f01aba1318a2bf215eb5b10cd4', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=216&crop=smart&auto=webp&s=2155236d86542b3107a4fa4f71f169b5b03914c0', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=320&crop=smart&auto=webp&s=d1b021a5c25c086aa3f31ac8e28595256cd0a535', 'width': 320}, {'height': 701, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?width=640&crop=smart&auto=webp&s=97cd0484968adf5c2c7922993e524cece6838298', 'width': 640}], 'source': {'height': 830, 'url': 'https://preview.redd.it/d1gzi0y7cfrb1.png?auto=webp&s=681b512cfac0f0502b62f9151c85d93cc5a4fefe', 'width': 757}, 'variants': {}}]} | |||
Any online uncensored LLMs you would recommend? | 1 | I'll look at the LLMs on HuggingChat and I think there are uncensored ones on the Kobold Horde. I don't have a powerful PC to run them on. | 2023-09-30T16:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/16wbmcf/any_online_uncensored_llms_you_would_recommend/ | TheTwelveYearOld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16wbmcf | false | null | t3_16wbmcf | /r/LocalLLaMA/comments/16wbmcf/any_online_uncensored_llms_you_would_recommend/ | false | false | self | 1 | null |
Instruction | Input | Thinking | Response | 5 | We've got a lot of models that have fine-tuned prompt formats, but are there any that deliberately add structure to the **response**?
We know that letting the model do chain-of-thought introductions to its responses is useful, but if you want to do something with the data afterwards it can be a pain to sort out where the actual answer starts. So I was thinking it would be nice if there were an instruct model explicitly trained to add a marker where its reasoning-out-loud stops and the actual answer begins.
I think there's some that at least have a tendency to keep the preamble to one paragraph that ends with a colon, but are there any that have been explicitly trained to do this? It'd be particularly useful for chat responses or anything where you want it to answer in character but still reason about things. Kind of a built-in chain-of-thought to let it think before it has to say something.
I could probably write a grammar for it, but I assume I'd still have to explain how to use it in every prompt.
Are there any models that do this that I'm just not aware of? I know there are some that are trained to prefer chain of thought, but are there any that have explicit Instruction/Input/Thinking/Response markers? | 2023-09-30T15:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/16wa1gf/instruction_input_thinking_response/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16wa1gf | false | null | t3_16wa1gf | /r/LocalLLaMA/comments/16wa1gf/instruction_input_thinking_response/ | false | false | self | 5 | null |
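As a sketch of the grammar route mentioned in the post, a GBNF file for llama.cpp could force the two-part shape; the `Thinking:`/`Response:` markers here are hypothetical, not from any released model:

```
# Force a reasoning preamble, then a clearly marked answer.
root ::= "Thinking: " text "\nResponse: " text
text ::= [^\n]+
```

Extracting the answer afterwards is then just a split on the `Response:` marker, though without fine-tuning the model may still need the convention explained in the prompt.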
Mistral 7B model answer to tshirts drying question | 78 | If 10 tshirts laid out in the sun takes 5 hours to dry, how long does it take 20 tshirts?
Assuming that the drying process is parallelizable and you have enough space to lay out all 20 tshirts at once, we can apply the same logic. If 10 tshirts take 5 hours to dry, then doubling the number of tshirts (to 20) should theoretically halve the drying time, assuming similar conditions such as consistent sunlight and temperature.
Therefore, it would take 20 tshirts 10/2 = 5 hours to dry.
However, if you don't have enough space to lay out all 20 tshirts at once, you'll need to adjust your calculation based on how many tshirts can be dried at a time. For example, if you can only dry 10 tshirts at a time, you would need to dry 2 batches (since 20 divided by 10 is 2). This would mean 2 batches \* 5 hours per batch, totaling 10 hours to dry all 20 tshirts.
It's important to note that this calculation assumes similar conditions for drying all the tshirts, such as consistent sunlight and temperature.
Any variations in these factors might affect the actual drying time. | 2023-09-30T15:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/16w9laq/mistral_7b_model_answer_to_tshirts_drying_question/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w9laq | false | null | t3_16w9laq | /r/LocalLLaMA/comments/16w9laq/mistral_7b_model_answer_to_tshirts_drying_question/ | false | false | self | 78 | null |
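For what it's worth, the batch arithmetic the model lands on (after the odd "10/2 = 5" detour) reduces to this:

```python
import math

def drying_hours(shirts: int, capacity: int, hours_per_batch: float = 5.0) -> float:
    # Drying is parallel within a batch and sequential across batches.
    return math.ceil(shirts / capacity) * hours_per_batch

print(drying_hours(20, capacity=20))  # 5.0  -- everything dries at once
print(drying_hours(20, capacity=10))  # 10.0 -- two sequential batches
```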
In need of help with Kobold CPP - Conversation disappearing. | 6 | I'm using a model to generate long conversations. I set the "Amount to Generate" to 512. However, after reaching the token limit, the generation stops. Most of the output gets deleted from the GUI, but not from the terminal. How do I fix this? I'd like the conversation to:
A - Continue until I reach the termination character.
B - Ensure that half of my conversation doesn't disappear into thin air.
I've already tried using smart context, but it doesn't seem to work. | 2023-09-30T14:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/16w8uxs/in_need_of_help_with_kobold_cpp_conversation/ | No_Adhesiveness_908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w8uxs | false | null | t3_16w8uxs | /r/LocalLLaMA/comments/16w8uxs/in_need_of_help_with_kobold_cpp_conversation/ | false | false | self | 6 | null |
Fine tuning for prediction | 1 | I’m somewhat new to fine tuning, but I want to train on historical network data and have the LLM evaluate network traffic to predict a future state of the network. Is this possible? What would be a good base model for something like this? | 2023-09-30T14:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/16w817h/fine_tuning_for_prediction/ | Lonely_Engineer2361 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w817h | false | null | t3_16w817h | /r/LocalLLaMA/comments/16w817h/fine_tuning_for_prediction/ | false | false | self | 1 | null |
codellama on #raspberry pi spitting out #metasploit commands. The aim is to spit out oneliners and outputting into bash. | 15 | 2023-09-30T14:09:22 | Purple_Session_6230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16w7svy | false | null | t3_16w7svy | /r/LocalLLaMA/comments/16w7svy/codellama_on_raspberry_pi_spitting_out_metasploit/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'kdyv5nZ-zvUtkeuVhUhnaDLplxd50xPvFaLTMx_iA0A', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=108&crop=smart&auto=webp&s=da130dfc525e6e77c96c32213888044ee1e0b9e8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=216&crop=smart&auto=webp&s=300dee2d668248fdadf7c647d49c92e5a8d4b85e', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=320&crop=smart&auto=webp&s=c14e1c466d260b7b129683bef1fda39895da54ce', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=640&crop=smart&auto=webp&s=80a9bf7b817ba96346490ffcbf11c82ab377c119', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=960&crop=smart&auto=webp&s=9c77f874e7d3c9156e1b3b8dfe172262e3815902', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?width=1080&crop=smart&auto=webp&s=ff511567dbf15bed37c7c85b06d77d0709740e7c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/mgmf1kp4herb1.png?auto=webp&s=268c716bb08a75aad235dacba2b306915abdf8f2', 'width': 1920}, 'variants': {}}]} | |||
Thanks everyone for help, currently i use what can i use | 14 | 2023-09-30T13:18:47 | Merchant_Lawrence | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16w6nbg | false | null | t3_16w6nbg | /r/LocalLLaMA/comments/16w6nbg/thanks_everyone_for_help_currently_i_use_what_can/ | false | false | 14 | {'enabled': True, 'images': [{'id': '3GCrvL_cz8_gN6q9Hw78ZJ2y5VX4zaFwfntZ0A8xSGs', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=108&crop=smart&auto=webp&s=ec783456939c18f7fa53fbed5fb91100a09bcc53', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=216&crop=smart&auto=webp&s=8cda00ef638a7bb7c3a5a39540059e5e43ac8917', 'width': 216}, {'height': 194, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=320&crop=smart&auto=webp&s=b4df311bcd53cdc598c5cba5548c64df75795d67', 'width': 320}, {'height': 389, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=640&crop=smart&auto=webp&s=6b4112b7db5a36dd0daafe2b50a2a9d73846481d', 'width': 640}, {'height': 584, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=960&crop=smart&auto=webp&s=5b55e494c8ac34901d368cf63bf99dc339354553', 'width': 960}, {'height': 658, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?width=1080&crop=smart&auto=webp&s=d67cf78e9a7598b20418b87b6aaa9e75a94873c8', 'width': 1080}], 'source': {'height': 964, 'url': 'https://preview.redd.it/b512vc2d8erb1.png?auto=webp&s=d2ae4af53707c03260b399ac37c8dc026e33fa90', 'width': 1582}, 'variants': {}}]} | |||
Grammar with llama.cpp Server | 6 | Hi, has anyone tried grammars with the llama.cpp server?
As a simple test, we can use the json.gbnf grammar from the official examples, like the following.
```
from llama_cpp.llama import Llama, LlamaGrammar

# Load the GBNF grammar and pass it in to constrain generation.
grammar = LlamaGrammar.from_file('path/to/json.gbnf')
llm = Llama('path/to/llama-7b-chat.gguf', n_ctx=4096, n_gpu_layers=35)
response = llm('JSON list of name strings of attractions in SF:', grammar=grammar)
print(response)
```
However, the server seems to ignore the grammar entirely when called through openai.Completion.create with a grammar parameter. | 2023-09-30T13:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/16w6faf/grammar_with_llamacpp_server/ | Similar_Tea_8349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w6faf | false | null | t3_16w6faf | /r/LocalLLaMA/comments/16w6faf/grammar_with_llamacpp_server/ | false | false | self | 6 | null |
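One thing worth ruling out: the OpenAI-compatible route may drop parameters it doesn't recognize. A sketch that instead hits the llama.cpp server's native `/completion` endpoint, which takes the raw GBNF text in a `grammar` field (assumes a recent server build with grammar support; port and paths are assumptions):

```python
import requests

# The endpoint expects the grammar source text, not a file path.
grammar_text = open("path/to/json.gbnf").read()

resp = requests.post(
    "http://localhost:8080/completion",
    json={
        "prompt": "JSON list of name strings of attractions in SF:",
        "grammar": grammar_text,
        "n_predict": 256,
    },
)
print(resp.json()["content"])
```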
Regular vs instruct | 10 | If instruct models always give us better results, why do the non-instruct models even get released?
I'm seriously wondering what their worth is in my applications, which are not chat-based but still use LLM functionality (without the inline follow-up / chat-history concept). Do non-instruct models give any edge in specific use cases? | 2023-09-30T13:05:02 | https://www.reddit.com/r/LocalLLaMA/comments/16w6coc/regular_vs_instruct/ | BigHearin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w6coc | false | null | t3_16w6coc | /r/LocalLLaMA/comments/16w6coc/regular_vs_instruct/ | false | false | self | 10 | null |
How much RAM do you need for a given amount of VRAM? Suggestions on eGPU setup | 4 | Hello fellow llamaers,
I want to buy my setup but wanted to check a couple of things before. Currently I'm between
* Macbook air m2 16gb or 24gb of RAM
* PC notebook + GPU connected through Thunderbolt 4 (I'm thinking about a 13" Samsung Book 3 and an RTX 3090)
My use cases would be for inference and portability/light setup. Any opinions about my options?
I've read a lot about the power of the M chips, but I've never used macOS before, so I'm a little hesitant there. What's not clear to me in the PC setup is how much RAM you need for the VRAM. For example, could I have a system with 16 GB of RAM and a GPU with 24 GB of VRAM? I'm thinking that maybe you need to temporarily use swap to fully utilize the VRAM, but after the model is loaded you basically need a negligible amount of RAM, assuming the full model fits in VRAM (not split between GPU and CPU). Do I get it correctly? Also, thinking about the eGPU setup: once the model is fully loaded on the GPU, I understand there will be only insignificant drops in performance because of the Thunderbolt 4 connection rather than a usual desktop connection.
Any thoughts are appreciated, thanks | 2023-09-30T12:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/16w63bn/how_ram_you_need_for_each_amount_of_vram/ | thedatawhiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w63bn | false | null | t3_16w63bn | /r/LocalLLaMA/comments/16w63bn/how_ram_you_need_for_each_amount_of_vram/ | false | false | self | 4 | null |
Making model personalised | 7 | I came across this [post](https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/) by u/Shir_man where it has been described as how to get rid of "..as an AI language model..." and making the model more personalised. This method needs each and every token id to be known in prior. Is there any way this issue can be circumvented and we don't require any token ids, but instead change the logits as done by [GeDi](https://github.com/salesforce/GeDi)? | 2023-09-30T11:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/16w4jlr/making_model_personalised/ | 1azytux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w4jlr | false | null | t3_16w4jlr | /r/LocalLLaMA/comments/16w4jlr/making_model_personalised/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'iBCZX7D-06JDSmncQKOzh-imrce8P1GAxMh-2dSZrNA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=108&crop=smart&auto=webp&s=31f59003c946462c2003879075e44ca76bd23bba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=216&crop=smart&auto=webp&s=40beb24646e91e5229531796e6ec2dc7755e224c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=320&crop=smart&auto=webp&s=65cd3896d224f7fd30eb15f62fbe2bade14ac492', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=640&crop=smart&auto=webp&s=b3516b48f1bb71c35e29f4d77d1fcdd53d6d85a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=960&crop=smart&auto=webp&s=019c18674f60324dda329255154ffa230875845a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?width=1080&crop=smart&auto=webp&s=d477423128ab24aa6e6afffc7bb42d941bcf3329', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/prwcQJtIpZ3k64UbjnVUpSivrhxo0u0OthLQf7fmm5k.jpg?auto=webp&s=a67eac976248846b354d7c741c97bd80ff2bf5de', 'width': 1200}, 'variants': {}}]} |
Do GGML LLM models support langchain? | 1 | [removed] | 2023-09-30T10:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/16w3vxd/does_ggml_llm_models_support_langchain/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w3vxd | false | null | t3_16w3vxd | /r/LocalLLaMA/comments/16w3vxd/does_ggml_llm_models_support_langchain/ | false | false | self | 1 | null |
samantha-mistral-7b | 81 | Today I released two new Samantha models.
[https://huggingface.co/ehartford/samantha-mistral-7b](https://huggingface.co/ehartford/samantha-mistral-7b)
The base flavor is pretty much the Samantha you know. Only in an extremely efficient and potent 7b form.
[https://huggingface.co/ehartford/samantha-mistral-instruct-7b](https://huggingface.co/ehartford/samantha-mistral-instruct-7b)
The instruct flavor is better at general tasks, but a little less Samantha. Mistral-instruct had some references to OpenAI in its dataset, so those have been inherited.
NEW: Both models are trained with ChatML prompt format. And all of my future releases will be.
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
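A minimal helper for that format (the trailing assistant tag is the usual way to cue the model's turn):

```python
def chatml(system: str, user: str) -> str:
    # System turn, user turn, then an open assistant turn for the model to fill.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml(
    "You are a caring and empathetic sentient AI companion named Samantha.",
    "Hello, what is your name?",
))
```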
[Example output](https://preview.redd.it/ykv76ezjmcrb1.png?width=1880&format=png&auto=webp&s=36a580e5017b5c01cd5ec5d6c17a2c846c01c70b)
Much thanks to my sponsors, a16z and Microsoft for Startups.
And I want to extend my gratitude to Wing Lian and the [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) contributors for making an amazing tool.
https://preview.redd.it/o7gaao32ncrb1.png?width=400&format=png&auto=webp&s=74d4ca85ec02b27122ddcbeb4b7b7a541f7b650e
And my gratitute to u/The-Bloke for his mentorship and distribution. | 2023-09-30T07:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/16w111m/samanthamistral7b/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w111m | false | null | t3_16w111m | /r/LocalLLaMA/comments/16w111m/samanthamistral7b/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'Dh-hi7De0Nznld86SA9l93VOeiDJ5grGzKEnBvE4OVE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=108&crop=smart&auto=webp&s=5732996d228b2c454ec5f66de9dcd5bb5cea699a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=216&crop=smart&auto=webp&s=5adb05c9352ca1762b3da9ef3be0694c1d6432dd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=320&crop=smart&auto=webp&s=22b68711c933a927a5222df7ab272fdec0daf747', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=640&crop=smart&auto=webp&s=9c122bd8a84a055e913588878584e0e03bd297db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=960&crop=smart&auto=webp&s=5d652085b55f4d73e37a590ab674c2c44cbb6c7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?width=1080&crop=smart&auto=webp&s=3ee64eb6b1b1066c0ecb77d560ddd65d74cfc68b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oPiw-3Du7qlvKz5-dbgK_UCvCueG_j7xPhEJjoHsPw0.jpg?auto=webp&s=0359931de2d8942cab37a1ce4a09959572455b13', 'width': 1200}, 'variants': {}}]} | |
Deploy the Mistral 7b Generative Model on an A10 GPU on AWS | 1 | [removed] | 2023-09-30T07:09:03 | https://www.reddit.com/r/LocalLLaMA/comments/16w078q/deploy_the_mistral_7b_generative_model_on_an_a10/ | juliensalinas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16w078q | false | null | t3_16w078q | /r/LocalLLaMA/comments/16w078q/deploy_the_mistral_7b_generative_model_on_an_a10/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gcOFI3bFaeug4uwPBpXu4gJk2MnoT0mLEzaNvCEQFbU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=108&crop=smart&auto=webp&s=0a3f5c6393091d4736e8cf4f6f0784e7f5a2b9ea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=216&crop=smart&auto=webp&s=54b1821095eac7abf9fd0a93b991132324ab5b3b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=320&crop=smart&auto=webp&s=3e19c1d8ef9967dda8d95d72d26d6a9c78f5b66b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=640&crop=smart&auto=webp&s=5f9d09d6209e09f06e43ca407c407e0290b4586c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=960&crop=smart&auto=webp&s=119c98dc6bdddf5e4f8423ea0c6b99927f6ff89e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?width=1080&crop=smart&auto=webp&s=95fb8a0687086906918d1990f1cbc18659b183f3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/xTOH9kYcOZrCx2KGgqds-XGt0SuEmNo9R0vd3jPCFKE.jpg?auto=webp&s=22be6beca08ec134bba554494d7b8d8acd5d3a08', 'width': 1920}, 'variants': {}}]} |
Looking for best model to fine tune for Uncensored Dating chats | 2 | Hi everyone! Looking for your suggestions on which models are best (and why) to fine-tune for uncensored dating-style chats.
Money is not an issue and can be placed on A100s 80GB GPUs.
Cheers:) | 2023-09-30T06:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/16vztlu/looking_for_best_model_to_fine_tune_for/ | Live-Acanthaceae4371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vztlu | false | null | t3_16vztlu | /r/LocalLLaMA/comments/16vztlu/looking_for_best_model_to_fine_tune_for/ | false | false | self | 2 | null |
Intel ARC IPEX performance? | 8 | I just reinstalled the oobabooga webui and saw that IPEX is now one of the options during setup. This actually makes me consider swapping the 3060 in my ML server for an A770. Has anyone tried inference/training on an A770? What were speeds like? | 2023-09-30T06:35:47 | https://www.reddit.com/r/LocalLLaMA/comments/16vznmm/intel_arc_ipex_performance/ | MR_-_501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vznmm | false | null | t3_16vznmm | /r/LocalLLaMA/comments/16vznmm/intel_arc_ipex_performance/ | false | false | self | 8 | null |
What are good langchain alternatives to train LLMs and create LLM apps? | 4 | Magentic, minichain, llfn <--- I saw these three but haven't tried them yet except for langchain. Which one do you use to rapid prototype LLMs currently? | 2023-09-30T05:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/16vy6aw/what_are_good_langchain_alternatives_to_train/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vy6aw | false | null | t3_16vy6aw | /r/LocalLLaMA/comments/16vy6aw/what_are_good_langchain_alternatives_to_train/ | false | false | self | 4 | null |
Mistral 7B (quantised 8bit) with tabular data | 1 | [removed] | 2023-09-30T03:32:24 | https://www.reddit.com/r/LocalLLaMA/comments/16vwec2/mistral_7b_quantised_8bit_with_tabular_data/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vwec2 | false | null | t3_16vwec2 | /r/LocalLLaMA/comments/16vwec2/mistral_7b_quantised_8bit_with_tabular_data/ | false | false | self | 1 | null |
Stability AI releases StableLM 3B, trained on 1T tokens for 4 epochs | 136 | 2023-09-30T03:08:35 | https://huggingface.co/stabilityai/stablelm-3b-4e1t | Covid-Plannedemic_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 16vvx5d | false | null | t3_16vvx5d | /r/LocalLLaMA/comments/16vvx5d/stability_ai_releases_stablelm_3b_trained_on_1t/ | false | false | 136 | {'enabled': False, 'images': [{'id': 'OLOAe76KelEcGAJ8UCZCDM2jdkoASgcUmuC2RjrL-XE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=108&crop=smart&auto=webp&s=43829856783a335aea360cdbdb7aa252bba4bb56', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=216&crop=smart&auto=webp&s=960c173d62bc1dc156a64917a8ef5cd1a6e30dbe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=320&crop=smart&auto=webp&s=3dfa65b39ec523ef0916e4d36663dda15bc4b6c4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=640&crop=smart&auto=webp&s=a8ea901eab98312c01f8f00efd589c97ef5de4ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=960&crop=smart&auto=webp&s=6ef9cf12c2c7cf806dd1ea977f433b033ac8f257', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?width=1080&crop=smart&auto=webp&s=925957a6c52c742791d5c366b5dfa29961176a49', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6s3oDqZ_AHjJfGDw5eX1-g_Y_Il6QpLtNb9NW5pJjaY.jpg?auto=webp&s=cb82a3fb6a458f87b975c5654eeee22c95b6c7bb', 'width': 1200}, 'variants': {}}]} | ||
Different LLAMA_CUDA config values for llama.cpp? | 10 | Anyone tested these?
This is the relevant section of the readme: https://github.com/ggerganov/llama.cpp#cublas
I finally got NVLink set up on my dual 3090s and I'm getting 17 tok/s on 70B models, which is great. Curious to know if I can go even faster.
I will do some building and testing and report back! | 2023-09-30T01:32:42 | https://www.reddit.com/r/LocalLLaMA/comments/16vtwa3/different_llama_cuda_config_values_for_llamacpp/ | 0xd00d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vtwa3 | false | null | t3_16vtwa3 | /r/LocalLLaMA/comments/16vtwa3/different_llama_cuda_config_values_for_llamacpp/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
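For anyone else testing: the tunables in that README section are set at build time (defaults are `LLAMA_CUDA_DMMV_X=32`, `LLAMA_CUDA_MMV_Y=1`, `LLAMA_CUDA_KQUANTS_ITER=2`). The values below are just a starting point to sweep, not a recommendation:

```bash
make clean
# LLAMA_CUDA_F16 uses fp16 intermediates: faster on recent GPUs, more VRAM.
make LLAMA_CUBLAS=1 LLAMA_CUDA_MMV_Y=2 LLAMA_CUDA_F16=1 LLAMA_CUDA_KQUANTS_ITER=2
# Re-run the same prompt after each rebuild and compare tok/s.
./main -m model.gguf -ngl 99 -n 128 -p "benchmark prompt"
```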
Training Cost Leaderboard? | 1 | Especially for language models, which can be fine-tuned to do any number of things, the monetary cost of training is arguably one of the most important considerations, particularly for the open-source community, which is much more limited on funds.
Additionally, with methods such as quantization, flash attention, and more, the price of training a nearly identical model should be much cheaper now than it was 6 months ago, but this metric doesn’t seem to be shown off as much.
However, to the best of my knowledge, most of the information about training cost is either just not mentioned or hidden somewhere in the individual model’s technical report. Are there any resources that I’m just missing related to this topic? And if not, are you all interested in this as well? | 2023-09-30T01:14:30 | https://www.reddit.com/r/LocalLLaMA/comments/16vthyx/training_cost_leaderboard/ | Dramatic_Road3570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vthyx | false | null | t3_16vthyx | /r/LocalLLaMA/comments/16vthyx/training_cost_leaderboard/ | false | false | self | 1 | null |
Good local models for data cleaning and extraction? | 4 | Hi,
I want to be able to extract a schema of data from unstructured text, so I thought I'd use a local model for that, since there's no point in using a massive model when a small one will do. What are some recommendations for this use case? | 2023-09-30T00:43:30 | https://www.reddit.com/r/LocalLLaMA/comments/16vssv7/good_local_models_for_data_cleaning_and_extraction/ | ExaminationNo8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vssv7 | false | null | t3_16vssv7 | /r/LocalLLaMA/comments/16vssv7/good_local_models_for_data_cleaning_and_extraction/ | false | false | self | 4 | null |
What is the best open-source framework to create a synthetic and domain specific dataset for fine-tuning small models? | 1 | [removed] | 2023-09-30T00:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/16vs6zu/what_is_the_best_opensource_framework_to_create_a/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vs6zu | false | null | t3_16vs6zu | /r/LocalLLaMA/comments/16vs6zu/what_is_the_best_opensource_framework_to_create_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HLTQ2Io4n-agFuTu1AKgU1unD5v0xYfSV8JVgdBjrig', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=108&crop=smart&auto=webp&s=1f15ac727a32de12dcc708870423c1e89f4d0068', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=216&crop=smart&auto=webp&s=78b5f124f9ff08d78531bec975d7ca1b009016ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=320&crop=smart&auto=webp&s=01b79416adbf9fba82f0115bdf752dab4d6323de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=640&crop=smart&auto=webp&s=4679d4fa1cf02a37e5d2e297f6cbcbf5bc18d476', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=960&crop=smart&auto=webp&s=b1d7ee53f82796dca3b1d0ebc1e7eb7722d15bcc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?width=1080&crop=smart&auto=webp&s=796b99aab3f97e894e8f81522da9a72098819a44', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MZ4PSRPeTEpPzDVerFuXOsGGcA18nclVzJgd0s_51cM.jpg?auto=webp&s=a05645b9afe9f7443b44b7a66afa2de3724f552b', 'width': 1200}, 'variants': {}}]} |
With LLMs we can create a fully open-source Library of Alexandria. | 126 | I recently generated 650,000 unique synthetic textbook samples from a diverse span of courses, kindergarten through graduate school. I shared them with the community for feedback previously, and now there is a really [cool training attempt](https://www.reddit.com/r/LocalLLaMA/comments/16vk266/ml1_preview_phi_115_reproduction/) led by a community member to see how far this preliminary dataset can take us. There are some other closely related efforts happening too.
I am updating because we have another interesting result: by going deeper instead of broader, and by combining new techniques like RAG, we can make incredibly descriptive textbooks. [This one here](https://github.com/emrgnt-cmplxty/sciphi/blob/main/sciphi/data/library_of_phi/Introduction_to_Deep_Learning.md) was generated by an almost fully AI-driven pipeline. The pipeline goes MIT OCW -> Syllabus -> Table of Contents -> Textbook. The last step is grounded through vector lookups over the whole of Wikipedia.
I believe this work continues to align with the overall goals of the community, so I wanted to share. This initially began as an effort to replicate Phi-1.5, but I'm now convinced that it is interesting in its own right. I think it might be worthwhile to build an open-source library, *the Library of Phi*: a crowdsourced effort to democratize textbooks.
What do you think? | 2023-09-30T00:03:49 | https://www.reddit.com/r/LocalLLaMA/comments/16vruh8/with_llms_we_can_create_a_fully_opensource/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vruh8 | false | null | t3_16vruh8 | /r/LocalLLaMA/comments/16vruh8/with_llms_we_can_create_a_fully_opensource/ | false | false | self | 126 | {'enabled': False, 'images': [{'id': 'wFTetGOZ9uqk6bFE12iEfmfGT0qaPT-FowY2vN_0gnk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=108&crop=smart&auto=webp&s=db262d448f07f80108311a36e44dc38dbc03e956', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=216&crop=smart&auto=webp&s=654fa9d20939c0c8b88f2c943e0ea4d4480ef5ef', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=320&crop=smart&auto=webp&s=7cfed2ab7a2b3fd137c5c0b22f468ddf6d11b7fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=640&crop=smart&auto=webp&s=6e0b659c9c4a6ecde9e83bc95157e9bfc9241428', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=960&crop=smart&auto=webp&s=c3226acfd50a30bff29ee94664718ad848a15fd9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?width=1080&crop=smart&auto=webp&s=c94f70710cbc06be5ee62da8130e12d34a890b59', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GU7zpCOPLXmp3SBSvqYxnBOyDrKCvtnNsNjg13rACG0.jpg?auto=webp&s=ffe8ded9ca555dbd13e3e614739af8edd4534bfc', 'width': 1200}, 'variants': {}}]} |
Any way to verify training method with very small dataset? | 3 | My goal is to train Llama-2-13b on a larger dataset of chat conversations but, before I commit to that processing time, I'm trying to verify that my approach is even valid. I'm using a Colab notebook for now with a VERY small dataset, just 20 records of JSONL loaded into Hugging Face. I invented a word that doesn't exist in the wild, and all 20 records pertain to that word, e.g. "What is [fake_word]?" followed by a fake definition. I'm following online guides on how to do this in Colab and everything appears to work (no errors). It loads my dataset and runs the training, but when I send a test prompt to the model, it says it has no idea what my [fake_word] is. So clearly something isn't right. Is my dataset too small to overcome the fact that the model has zero knowledge of that word from pre-training?
At this point, I'd be satisfied with ANY smaller training dataset and validation after training to show that it works. | 2023-09-29T23:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/16vrcuz/any_way_to_verify_training_method_with_very_small/ | BGFlyingToaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vrcuz | false | null | t3_16vrcuz | /r/LocalLLaMA/comments/16vrcuz/any_way_to_verify_training_method_with_very_small/ | false | false | self | 3 | null |
Equivalent to autogpt to use it with palm API | 0 | Is there software that can run on the PaLM API? It's free, and I want to try things like Code Interpreter, AutoGPT, etc. Can someone tell me if such software exists? | 2023-09-29T22:02:42 | https://www.reddit.com/r/LocalLLaMA/comments/16vou6r/equivalent_to_autogpt_to_use_it_with_palm_api/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vou6r | false | null | t3_16vou6r | /r/LocalLLaMA/comments/16vou6r/equivalent_to_autogpt_to_use_it_with_palm_api/ | false | false | self | 0 | null |
LMStudio vicuna13b v1.5 16K context question | 7 | Running vicuna-13b-v1.5-16k.Q5_K_M.gguf via LM Studio on my M1 Pro with 16 GB.
By default the model loaded with a context window of 1500, and it infers fine on small prompts. However, after increasing the context to 15k, my computer instantly crashed. Just trying to understand what is happening here.
Is increasing the context like this normal since the model suggests 16k? Is the crash due to sudden increase in memory requirements? | 2023-09-29T21:53:45 | https://www.reddit.com/r/LocalLLaMA/comments/16vollo/lmstudio_vicuna13b_v15_16k_context_question/ | iwannaforever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vollo | false | null | t3_16vollo | /r/LocalLLaMA/comments/16vollo/lmstudio_vicuna13b_v15_16k_context_question/ | false | false | self | 7 | null |
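The crash is consistent with KV-cache growth, which scales linearly with context. A rough estimate, assuming standard Llama-13B dimensions (40 layers, hidden size 5120, fp16 cache):

```python
def kv_cache_gib(n_ctx: int, n_layers: int = 40, hidden: int = 5120) -> float:
    # 2 tensors (K and V) * layers * tokens * hidden dim * 2 bytes (fp16)
    return 2 * n_layers * n_ctx * hidden * 2 / 2**30

print(f"{kv_cache_gib(1500):.1f} GiB")   # ~1.1 GiB
print(f"{kv_cache_gib(15000):.1f} GiB")  # ~11.4 GiB, on top of the ~9 GB model
```

On a 16 GB machine, 15k context plus the Q5_K_M weights blows straight past physical memory, which matches the instant crash.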
What is on the fly method | 1 | [removed] | 2023-09-29T21:32:35 | https://www.reddit.com/r/LocalLLaMA/comments/16vo1ny/what_is_on_the_fly_method/ | AcceptableBat8912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vo1ny | false | null | t3_16vo1ny | /r/LocalLLaMA/comments/16vo1ny/what_is_on_the_fly_method/ | false | false | self | 1 | null |
Best open source model for medical text | 6 | The one I know about is PubMedBERT, but I was wondering if anyone has experience with RAG-type question answering on medical text specifically? And possibly any idea how such models might compare to GPT-3.5? | 2023-09-29T20:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/16vlyes/best_open_source_model_for_medical_text/ | StrippedSilicon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vlyes | false | null | t3_16vlyes | /r/LocalLLaMA/comments/16vlyes/best_open_source_model_for_medical_text/ | false | false | self | 6 | null |
I am looking for information regarding Running llama on a zen4 or xeon 4th generation cpu? Or alternative no gpu suggestions (for 180b falcon) | 6 | As my title suggests, I wanted to know if anyone has tried running an LLM on a zen4 or xeon 4th generation cpu, the ones that have AVX512. I was wondering what sort of performance they give, assuming no gpu setups.
The reason I am asking is that I would like to run a 180B model without spending a small fortune on GPUs.
I also like the idea of upgradeable memory (something that GPUs do not have).
If anyone has any alternative suggestions, please let me know. I'd accept 0.5 to 1 token/s in terms of performance, as long as that is with a full context window.
I am honestly surprised that there is no home compute devices with expandable memory. | 2023-09-29T19:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/16vlf0d/i_am_looking_for_information_regarding_running/ | jasonmbrown | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vlf0d | false | null | t3_16vlf0d | /r/LocalLLaMA/comments/16vlf0d/i_am_looking_for_information_regarding_running/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'PEO--Jo7GPtt6eUC0LpM9jWIQ1OKfdKyUrfDFS7Gs0Q', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?width=108&crop=smart&auto=webp&s=255eca204ba5cf1858340794cd836f313bcc8c8c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?width=216&crop=smart&auto=webp&s=a7ad499396e226d37ea7d40f7fa1392840b9a062', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?width=320&crop=smart&auto=webp&s=bd2dbaeb8c6943fd483c7616e872d114fcce7105', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/CsniDEK8IeUwnQPSMnwbJCMl5vaQMWdeErWO3-kJ8Ec.jpg?auto=webp&s=091bc0aab7e9df12723125d1ac18f202b993c574', 'width': 500}, 'variants': {}}]} |
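A back-of-envelope check on the 0.5-1 token/s target: CPU generation is roughly memory-bandwidth bound, so the ceiling is about bandwidth divided by the bytes read per token (about the model size). All numbers below are assumptions, not benchmarks:

```python
model_gb = 105   # Falcon-180B at ~4.7 bits per weight, very roughly
bw_gbs = 460     # e.g. 12-channel DDR5-4800 (Zen 4 EPYC), theoretical peak
print(f"~{bw_gbs / model_gb:.1f} tokens/s upper bound")  # ~4.4; real-world is lower
```

So the target looks plausible on a high-channel-count server platform, and much harder on consumer desktops with only two memory channels.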
Average bpw for gguf is much higher than you think | 33 | FYI, the average bpw for GGUF Q4_K_M models is ~5 bpw. That could explain why the quality of these models feels higher to some people. Choosing an exl2 model with an equivalent bpw is very important.
Most people will say we already have 2-bit models, but that doesn't seem to be the case; they are ~3.3 bpw. People will keep using the terms "2bit", "3bit", "5bit", "6bit", but the distinction is important. In your head, offset the average bpw by roughly +1 for GGUF. | 2023-09-29T19:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/16vknmz/average_bpw_for_gguf_is_much_higher_than_you_think/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vknmz | false | null | t3_16vknmz | /r/LocalLLaMA/comments/16vknmz/average_bpw_for_gguf_is_much_higher_than_you_think/ | false | false | self | 33 | null |
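You can verify this directly from any model card, since average bpw is just file size over parameter count (file sizes below are approximate):

```python
def bpw(file_gb: float, params_b: float) -> float:
    # gigabytes * 8 bits, over billions of parameters -> bits per weight
    return file_gb * 8 / params_b

print(round(bpw(4.08, 6.74), 2))  # Llama-2 7B Q4_K_M: ~4.84 bpw, not 4
print(round(bpw(2.83, 6.74), 2))  # Llama-2 7B Q2_K:   ~3.36 bpw, not 2
```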
ML1 Preview (Phi 1/1.5 Reproduction) | 41 | Hello again. In follow up from [yesterday's post](https://www.reddit.com/r/LocalLLaMA/comments/16ul4sw/ml1_34b70b_phi_115_reproduction_on_llama2/), I will be posting preview checkpoints to hugging face in the below model repository. Currently the 0.15 epoch preview is uploaded and ready to try.
[https://huggingface.co/nmitchko/ML1-previews](https://huggingface.co/nmitchko/ML1-previews) | 2023-09-29T18:57:10 | https://www.reddit.com/r/LocalLLaMA/comments/16vk266/ml1_preview_phi_115_reproduction/ | nickmitchko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vk266 | false | null | t3_16vk266 | /r/LocalLLaMA/comments/16vk266/ml1_preview_phi_115_reproduction/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'Nh1Jt_8fRy74h9rU-kDkWetXzYo2h6Tb25L8pS7vUDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=108&crop=smart&auto=webp&s=214ff1efaecbebd8e99bdda48f20205b0e3dcc2e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=216&crop=smart&auto=webp&s=9220e409908cc8068fb743ff307a8b553c596a6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=320&crop=smart&auto=webp&s=21b9fd32bc4487217a462327154de1db897c03b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=640&crop=smart&auto=webp&s=018fc319328235f74ba0559099fa49ea761b95ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=960&crop=smart&auto=webp&s=81a1bf600beaf9c73f0c0119b7d91a399ca3076a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?width=1080&crop=smart&auto=webp&s=e4fd0d5a32aa1749c445dde857a64813ecd78c0a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/botXS2ZT-4_82NrhT9jjvMPamzfz0TpnVfzVDczOfjQ.jpg?auto=webp&s=84801a7090a32b7af16c925c875ab44d92333da0', 'width': 1200}, 'variants': {}}]} |
video narration | 1 | I remember watching Sesame Street lessons where the children yell the answer together. Can somebody recommend a text-to-speech model that will be able to do this? | 2023-09-29T18:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/16vjry5/video_narration/ | Fairness_Needed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vjry5 | false | null | t3_16vjry5 | /r/LocalLLaMA/comments/16vjry5/video_narration/ | false | false | self | 1 | null |
How do I train (LoRA) uncensored LLaMa on a raw dataset? | 7 | I haven't touched language models for a long time. I used to fine-tune GPT-2 and mess with it. But now it's too hard to actually dissect anything to repurpose it for your use case. It's either just one script that builds the most generic model, or raw weights and so much code with no documentation and a bunch of libraries, and then you just give up.
Where can I start learning how to repurpose these models by training through LoRA? I don't want it to be a chatbot or anything, just a simple next-token prediction model trained on a raw corpus. With the rate at which new tools are being released, it's hard to keep up. | 2023-09-29T18:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/16vjk0y/how_do_i_train_lora_uncensored_llama_on_raw/ | _H_a_c_k_e_r_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vjk0y | false | null | t3_16vjk0y | /r/LocalLLaMA/comments/16vjk0y/how_do_i_train_lora_uncensored_llama_on_raw/ | false | false | self | 7 | null |
AI-Powered Tools and Smart Glasses by Meta | BuyUtter | 1 | 2023-09-29T18:00:52 | https://buyutter.com/ai-powered-tools-and-smart-glasses-by-meta/ | Allinhalf | buyutter.com | 1970-01-01T00:00:00 | 0 | {} | 16vil7b | false | null | t3_16vil7b | /r/LocalLLaMA/comments/16vil7b/aipowered_tools_and_smart_glasses_by_meta_buyutter/ | false | false | 1 | {'enabled': False, 'images': [{'id': '1C0DugNOm4uALNQA3SujW2BQlK-AqUKsWIL8BcyVdAM', 'resolutions': [{'height': 161, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=108&crop=smart&auto=webp&s=4d0e29409f04a3a4e79ff8163e84ee22000bc1ac', 'width': 108}, {'height': 323, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=216&crop=smart&auto=webp&s=50986f7d5c28ee5b32b4ddc3333c146bf8130ca7', 'width': 216}, {'height': 479, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=320&crop=smart&auto=webp&s=f4b2413c94bbe55c7eb88c1f826bbb51a557f8ca', 'width': 320}, {'height': 959, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=640&crop=smart&auto=webp&s=797dec6f92f50cc4def40d4994003f5d9979528c', 'width': 640}, {'height': 1439, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=960&crop=smart&auto=webp&s=0548a80bf01f5ce1d2dab08b71b84c3c98c45740', 'width': 960}, {'height': 1619, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?width=1080&crop=smart&auto=webp&s=6233d272ec57963d405a6ba4e23174a8b7c11c81', 'width': 1080}], 'source': {'height': 2560, 'url': 'https://external-preview.redd.it/RA80Mg_s4QM3Ac0FJ7bT909274N356UafI3LXPoXtbU.jpg?auto=webp&s=86051fcd206ce1bbaa15ca3ffd72822d451726f9', 'width': 1707}, 'variants': {}}]} | ||
How do you set up something like the HuggingChat Web Search functionality on Oobabooga? | 11 | On https://huggingface.co/chat/ there's an option to turn on a websearch that can reference internet results for it's response. According to https://huggingface.co/spaces/huggingchat/chat-ui/discussions/254 this utilizes RAG which I've played with a bit previously but I'm a few weeks out of date. Is it possible to enable something like this on Oobabooga where I can run it with my own models? | 2023-09-29T17:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/16vi2y4/how_do_you_set_up_something_like_the_huggingchat/ | LeifEriksonASDF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vi2y4 | false | null | t3_16vi2y4 | /r/LocalLLaMA/comments/16vi2y4/how_do_you_set_up_something_like_the_huggingchat/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=108&crop=smart&auto=webp&s=732d76370b97811ab9fb6fed1800fef253ff5141', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=216&crop=smart&auto=webp&s=52d55920ecb4ddd3b9a6a3f1f377b1e947338e76', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=320&crop=smart&auto=webp&s=87995474933baa503604de4c36087e982c1556f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=640&crop=smart&auto=webp&s=b15bbf288cd406be52bb70ea93bac1eeb4e4b7b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=960&crop=smart&auto=webp&s=6d645adfc183d0fc80bf7a259b180d40c1d08007', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=1080&crop=smart&auto=webp&s=f355db7c39116418e70484205c54fb53d8e5c262', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?auto=webp&s=acc428a35d715d5e8eaec455e35020dcd81cf080', 'width': 1200}, 'variants': {}}]} |
Is there a way to add self attention to outputs? | 2 | One of the big changes to Stable Diffusion is a base model, and a refiner model for the last 20%. I'm curious if a mixture of experts option where the same model with different instructions reads through its output and corrects and rewords it could accomplish something similar. I've seen several times where an LLM will change its answer midway through, and if I understand the tech correctly, that's because it's simply posting the next most likely token according to settings, and that the output isn't 'seeing' the other tokens in the output that it has already outputted (from my limited understanding), and it certainly can't relate the response to tokens that have yet to be outputted later on in the response. Would feeding its response back into it in some kind of summarize and correct mode improve the response? Am I basing this question based on a misunderstanding of the tech? Is this an area any research has been done in? | 2023-09-29T17:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/16vhywz/is_there_a_way_to_add_self_attention_to_outputs/ | clyspe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vhywz | false | null | t3_16vhywz | /r/LocalLLaMA/comments/16vhywz/is_there_a_way_to_add_self_attention_to_outputs/ | false | false | self | 2 | null |
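One correction and a sketch: the model does attend to the tokens it has already emitted; what it cannot do is revise them. A second pass adds exactly that, no refiner model required (a sketch with llama-cpp-python; prompts are illustrative):

```python
from llama_cpp import Llama

llm = Llama("path/to/model.gguf", n_ctx=4096)

def draft_and_refine(question: str) -> str:
    draft = llm(f"Q: {question}\nA:", max_tokens=256)["choices"][0]["text"]
    # The second pass sees the whole draft at once, so it can fix an
    # answer that flipped midway through generation.
    refine = (
        f"Question: {question}\nDraft answer: {draft}\n"
        "Rewrite the draft so it is consistent from start to finish:\n"
    )
    return llm(refine, max_tokens=256)["choices"][0]["text"]

print(draft_and_refine("How many weekdays are in two weeks?"))
```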
Best free online Role Playing AIs? | 1 | [removed] | 2023-09-29T16:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/16vgs9q/best_free_online_role_playing_ais/ | Butefluko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vgs9q | false | null | t3_16vgs9q | /r/LocalLLaMA/comments/16vgs9q/best_free_online_role_playing_ais/ | false | false | default | 1 | null |
Open Source Terraform Code Completion | 6 | It turns out WizardCoder15B does not support Terraform (it always returns empty completions for Terraform code). I had two separate, but related, questions.
1. What's the best open source model that supports Terraform code completion?
2. How hard would it be to fine-tune WizardCoder to support Terraform? Would I need to train it on thousands of examples? Could I just use code from GitHub? I'm a little surprised it wasn't already trained on Terraform, because it feels like you'd have to actively exclude it. | 2023-09-29T16:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/16vfo22/open_source_terraform_code_completion/ | kintrith | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vfo22 | false | null | t3_16vfo22 | /r/LocalLLaMA/comments/16vfo22/open_source_terraform_code_completion/ | false | false | self | 6 | null |
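On question 2: GitHub is the usual source, and the data collection side is simple; a sketch (license and quality filtering left out):

```python
import json
import pathlib

# Gather .tf files from locally cloned repos into a JSONL fine-tuning corpus.
with open("terraform_corpus.jsonl", "w") as out:
    for path in pathlib.Path("repos").rglob("*.tf"):
        text = path.read_text(errors="ignore")
        if text.strip():
            out.write(json.dumps({"text": text}) + "\n")
```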
Which is the best quantized LLM for a dating chatbot that can roleplay | 1 | [removed] | 2023-09-29T15:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/16ve4rb/which_is_the_best_quantized_llm_for_a_dating/ | Middle_Focus_314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ve4rb | false | null | t3_16ve4rb | /r/LocalLLaMA/comments/16ve4rb/which_is_the_best_quantized_llm_for_a_dating/ | false | false | self | 1 | null |
Need help, What model that i can run with this spec ? and which one | 6 | 2023-09-29T15:02:14 | https://www.reddit.com/gallery/16ve0aj | Merchant_Lawrence | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16ve0aj | false | null | t3_16ve0aj | /r/LocalLLaMA/comments/16ve0aj/need_help_what_model_that_i_can_run_with_this/ | false | false | 6 | null | ||
Is llama actually more expensive than GPT | 34 | This is a cost comparison I am doing between GPT-3.5 and Llama-2 13B for one of my projects. I need an LLM and am getting decent results from both GPT-3.5 and Llama-13B. But surprisingly, the cloud compute I am using from Azure is making Llama more expensive than GPT-3.5.
Note that this is the cheapest GPU compute that I can get, so I'm probably not choosing the wrong compute (or am I?). I was under the impression that using an open-source LLM would decrease the operating cost, but it doesn't seem to be doing that.
The compute I am using for llama-2:
https://preview.redd.it/0rl9525fh7rb1.png?width=866&format=png&auto=webp&s=38686476e3ec38934f477e6359d80bebd0d4771c
​
https://preview.redd.it/3isoka8qh7rb1.png?width=1730&format=png&auto=webp&s=19536835d3bafd348c65d662a36e43df2fbbe214
​
The number of tokens in my prompt is (request + response) = 700
Cost of GPT for one such call = $0.001125
Cost of GPT for 1k such calls = **$1.125**
Time taken for llama to respond to this prompt ~ 9s
Time taken for llama to respond to 1k prompts ~ 9000s = 2.5 hrs = **$1.87**
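For anyone who wants to sanity-check the math, here's a quick sketch (the hourly GPU rate is an assumption, back-calculated from my $1.87 figure):

```python
# Cost comparison sketch: per-call API pricing vs. renting a GPU VM.
tokens_per_call = 700            # request + response
gpt_cost_per_call = 0.001125     # from OpenAI's per-token pricing
gpu_rate_per_hr = 0.748          # assumed VM rate implied by my $1.87
secs_per_call = 9                # measured llama-2 13B latency

calls = 1000
gpt_total = calls * gpt_cost_per_call
gpu_total = (calls * secs_per_call / 3600) * gpu_rate_per_hr
print(f"GPT-3.5: ${gpt_total:.2f}, self-hosted llama: ${gpu_total:.2f}")
```

The gap only closes if the GPU is kept busy; batching requests or a faster inference server would change the per-call time a lot.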
This difference drastically increases with the number of API calls. Please help me find where I'm going wrong, or whether self-hosting really is this much more expensive at my scale.
​ | 2023-09-29T14:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/16vdk15/is_llama_actually_more_expensive_than_gpt/ | todaysgamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vdk15 | false | null | t3_16vdk15 | /r/LocalLLaMA/comments/16vdk15/is_llama_actually_more_expensive_than_gpt/ | false | false | 34 | null | |
After the last update of llama-cpp-python my model acts in a weird way (Vicuna v1.5 13B 16k) | 2 | Does anybody else have this problem? I'm not sure whether it's llama-cpp-python or llama.cpp itself, because I didn't change anything in my code regarding model prompts or loading. Other models work fine, and this one worked fine before. I use cuBLAS with CUDA 12.2, by the way.
(llama-cpp-python version 0.2.7)
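In case it helps others reproduce this, here's the minimal test I'd run to rule out my own prompt/loading code (the model filename is a placeholder, and rolling back to 0.2.6 is just a bisecting experiment, not a confirmed fix):

```python
# pip install llama-cpp-python==0.2.6   (previous release, to bisect)
from llama_cpp import Llama

llm = Llama(model_path="vicuna-13b-v1.5-16k.Q4_K_M.gguf",
            n_ctx=4096, n_gpu_layers=40)
out = llm("USER: Say hello.\nASSISTANT:", max_tokens=32,
          temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```

If the old version behaves, the regression is somewhere between the two releases.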
What's happening:
Vicuna suddenly produces outputs like: "Hello hello hello Hello" or "HHHHHHHHHHHHHH..." | 2023-09-29T14:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/16vd8w6/after_the_last_update_of_llamacpppython_my_model/ | psi-love | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vd8w6 | false | null | t3_16vd8w6 | /r/LocalLLaMA/comments/16vd8w6/after_the_last_update_of_llamacpppython_my_model/ | false | false | self | 2 | null |
70b on a budget. What cards should I buy? | 2 | After getting tired of paying for servers, I've looked into buying a large-VRAM graphics card or two for playing with AI models. At least in my area, there are often numerous Pascal Teslas for sale for sub $200. This brings me to my two questions:
P40 or P100? Does the HBM memory make a speed difference for inference, or just for training?
Would a dual P100 plus 2080 Super setup work and be useful? I already have a 2080 Super in my system, and from what I have heard, Linux shouldn't have any major driver issues with card mixing. Assuming no overhead, a 70b *should* fit into video memory, I think (rough numbers in the sketch below).
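A rough weights-only estimate of what fits, assuming typical llama.cpp quant sizes (the bits-per-weight figures are approximations, and KV cache plus per-card overhead come on top):

```python
# Weights-only VRAM estimate for a 70B model at various quant levels.
params = 70e9
for name, bits_per_weight in (("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("fp16", 16)):
    gib = params * bits_per_weight / 8 / 2**30
    print(f"{name}: ~{gib:.0f} GiB")
```

That's roughly 37 GiB for a Q4_K_M, so 2x P100 (32 GiB) alone falls short, and adding the 2080 Super (8 GiB) lands around 40 GiB total, which is borderline once KV cache and overhead are counted.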
I'm looking to mostly infer with some LoRA stuff and also play around with other AI stuff as well as training my own.
Graphics output on the Teslas isn't needed since I already have a good (enough) card. I'm running Linux. | 2023-09-29T14:20:23 | https://www.reddit.com/r/LocalLLaMA/comments/16vcxo2/70b_on_a_budget_what_cards_should_i_buy/ | Ather5125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vcxo2 | false | null | t3_16vcxo2 | /r/LocalLLaMA/comments/16vcxo2/70b_on_a_budget_what_cards_should_i_buy/ | false | false | self | 2 | null |
German LLMs based on LLaMA-2 pre-training are available: LeoLM/leo-hessianai-7b / LeoLM/leo-hessianai-13b | 17 | Did someone tried them out? Did someone quantizied them for using them locally?
Links:
[https://huggingface.co/LeoLM/leo-hessianai-7b](https://huggingface.co/LeoLM/leo-hessianai-7b)
[https://huggingface.co/LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | 2023-09-29T13:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/16vc45l/german_llms_based_on_llama2_pretraining_are/ | New_Lifeguard4020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vc45l | false | null | t3_16vc45l | /r/LocalLLaMA/comments/16vc45l/german_llms_based_on_llama2_pretraining_are/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'R8xI5-_m5qZvA5mIvjAxY2JezNku2-CN5IgiluNB3tM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=108&crop=smart&auto=webp&s=0886ddb02b8bafb9d77e322bc7e70612a0b155dc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=216&crop=smart&auto=webp&s=95072bdc5438e828a501df7298f437df0f0d8470', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=320&crop=smart&auto=webp&s=fbd0e0a1b65ea109356763baf70282da201ec4ef', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=640&crop=smart&auto=webp&s=70c038a7e96a2827a915dc261bda7f7cdf52ca5a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=960&crop=smart&auto=webp&s=c44f843c6ddbef221c4205f194593ac0a25bf334', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?width=1080&crop=smart&auto=webp&s=bb59f485030efd2b5955d37e9a689791dcb4c542', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FbHgTpHVKojbMODjERugvtduS53m1aGPG5nWrX7SRgY.jpg?auto=webp&s=1e465718093d3709c43edcde0ab03e86761ae432', 'width': 1200}, 'variants': {}}]} |
Which is the best model for text processing, like Q&A building or text cleaning | 1 | Hey Guys,
I converted my PDFs to txt files, but unfortunately there are random numbers and stray words left over from tables and images.
Now my idea is to use a language model to clean my data, since I am only interested in the core text.
Question:
Which model performs the best for this kind of task?
Vicuna has 16k context, which is good, but are there any other models that are better?
Could someone make an EXL2 version of LLama2-70B-uncensored? | 3 | I just got Exllama2 working and I'm simply blown away how efficient it is... both 3090's are just not seeing any load during inference (VRAM is still full, 46.4Gb/48). Can't really compare it to anything and I truly want to play with the uncensored version of LLama2-70b. No such model on HF though. Pretty please? | 2023-09-29T13:35:08 | https://www.reddit.com/r/LocalLLaMA/comments/16vbu7f/could_someone_make_an_exl2_version_of/ | UltrMgns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vbu7f | false | null | t3_16vbu7f | /r/LocalLLaMA/comments/16vbu7f/could_someone_make_an_exl2_version_of/ | false | false | self | 3 | null |
what's the obsession with large LLMs? | 117 | People in the HN/Twitter comments are complaining that Mistral released only a 7B model and are demanding larger ones. Am I the only one who thinks the big opportunities are in smaller models?
* We saw what TinyLlama and Phi can do.
* Running an LLM on the CPU will help discover more use cases.
* LLama 2 13B is performing better than Chinchilla 70b.
* MoE will be easier with smaller models.
I would rather get a 7b update every quarter than a 70b that will be obsolete after 6 months.
vLLM 0.2.0 released: up to 60% faster, AWQ quant support, RoPe, Mistral-7b support | 75 | It is commonly asked on this forum "how can my API run inference in parallel?"
One very good answer is "use vLLM" which has had a new major release today!
https://github.com/vllm-project/vllm/releases/tag/v0.2.0
Major changes
* Up to 60% performance improvement by optimizing de-tokenization and sampler
* Initial support for AWQ (performance not optimized)
* Support for RoPE scaling and LongChat
* Support for Mistral-7B
* Many bug fixes
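If you haven't tried vLLM yet, the API stays simple even with AWQ; a minimal sketch (the model repo is just an example, and per the release notes AWQ performance isn't optimized yet):

```python
from vllm import LLM, SamplingParams

# quantization="awq" is new in 0.2.0; any of TheBloke's -AWQ repos should work.
llm = LLM(model="TheBloke/Llama-2-13B-chat-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Tell me about llamas."], params)
print(outputs[0].outputs[0].text)
```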
Don't sleep on AWQ if you haven't tried it yet. TheBloke has already quantized your favorite model and output quality is significantly better then any GPTQ at 4bits. | 2023-09-29T13:16:09 | https://www.reddit.com/r/LocalLLaMA/comments/16vbeet/vllm_020_released_up_to_60_faster_awq_quant/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16vbeet | false | null | t3_16vbeet | /r/LocalLLaMA/comments/16vbeet/vllm_020_released_up_to_60_faster_awq_quant/ | false | false | self | 75 | {'enabled': False, 'images': [{'id': 'PEfoG-FIG3L9MDqE5iUFyp7ztfFxwznprV6Ai8M-NwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=108&crop=smart&auto=webp&s=7b933b3a1adb50f3679a7ee5766e44c4134a1391', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=216&crop=smart&auto=webp&s=c1e4ef77f54cc4468ac48abd207d0de8f7fb1776', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=320&crop=smart&auto=webp&s=b1d5d657128c327db2448e59f3ac7ffd9e96b81e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=640&crop=smart&auto=webp&s=c00170f80b2fa84252357e2afdbc1c0b139d8f69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=960&crop=smart&auto=webp&s=d4995f0977dd8c3d706798b4bc0cbcdce1a8321c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=1080&crop=smart&auto=webp&s=2d7891dc5c6a1695f6f15a2a9644614aa3a17981', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?auto=webp&s=30a1b5f55dc6350e983ee2547a1897bd0cc5851f', 'width': 1200}, 'variants': {}}]} |
How to load GGUF models on colab | 3 | How do I load and test these quantised models on Google Colab?
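To be concrete, this is the kind of thing I'm trying: a minimal llama-cpp-python sketch (the exact quant filename is a guess; pick whichever file you want from the repo below, and note that GPU offload needs the wheel built with cuBLAS, otherwise it runs on CPU):

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=35)
out = llm("[INST] What is the capital of France? [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```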
Models are here
https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF | 2023-09-29T12:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/16v9wrb/how_to_load_gguf_models_on_colab/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v9wrb | false | null | t3_16v9wrb | /r/LocalLLaMA/comments/16v9wrb/how_to_load_gguf_models_on_colab/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'lqv_3ohqLggkd4G78Cl05sBQ1GrM3q73gW3Kd0gHFUc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=108&crop=smart&auto=webp&s=a72ab81cef97d122cbe4385a8018b5a1039c11d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=216&crop=smart&auto=webp&s=c46c590e3290817b36ee152f9fc368f28588d6bb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=320&crop=smart&auto=webp&s=df1cb1636b69127b581978e7460eef3e421b5d15', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=640&crop=smart&auto=webp&s=48d894354346e4fed657f72b238f8a709dc6f32e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=960&crop=smart&auto=webp&s=39e0423c532eee4b4d3585199d696fc94c44ffcc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?width=1080&crop=smart&auto=webp&s=a930543cb9eb1dc67e359963785572f39d35a466', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qAcabb_NmQ7tZ8LPaV6ZdHU4LOSuH4m-694fGm83ntA.jpg?auto=webp&s=e23c54962926a7bd6920455f6b1123b4a10c8fda', 'width': 1200}, 'variants': {}}]} |
Fine Tune Base Model or Chat Model For Conversations/Instructions? | 12 |
Hello amazing community...
I've been reading a lot about this lately. What I mostly see in YouTube videos, Medium articles, etc., is that people tend to use the base model for fine-tuning on conversation/instruct data, and only a few use the chat version. Why is that?
Isn't it better to fine-tune a model that already knows how to hold conversations, like llama 2 chat/falcon instruct, than a model that is purely generative, like llama 2/falcon?
Falcon also mentions this in all their instruct model cards on huggingface: "This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-7B.".
What are your inputs on that? What would you fine tune for your case? Any success stories on both types of fine tuning? | 2023-09-29T11:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/16v9hms/fine_tune_base_model_or_chat_model_for/ | Aillian7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v9hms | false | null | t3_16v9hms | /r/LocalLLaMA/comments/16v9hms/fine_tune_base_model_or_chat_model_for/ | false | false | self | 12 | null |
Can I really run autogpt without GPU | 2 | I was hoping to build a decent desktop by oct which got delayed due to stock issues for the time being can I run auto gpt with 4bit quant model on my i3 4010u laptop with 8gigs ram. I know its bad, but I don't care about speed. I am still learning and due to the lackluster specs I wasn't able to perform my own tests. i am hoping to run 7b models. | 2023-09-29T11:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/16v951l/can_i_really_run_autogpt_without_gpu/ | abir_legend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v951l | false | null | t3_16v951l | /r/LocalLLaMA/comments/16v951l/can_i_really_run_autogpt_without_gpu/ | false | false | self | 2 | null |
Merging Lora with llama models on Colab | 4 | Are there any colabs I can merge loras with llama 7b? | 2023-09-29T11:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/16v8nm3/merging_lora_with_llama_models_on_colab/ | LazerCuber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v8nm3 | false | null | t3_16v8nm3 | /r/LocalLLaMA/comments/16v8nm3/merging_lora_with_llama_models_on_colab/ | false | false | self | 4 | null |
Mistral 7B temperature settings? | 10 | I'm playing around with the Mistral Q8 GGUF model in Oobabooga, and it seems to be all over the place when trying to do basic math and reasoning. It's not anywhere close to other 13B models.
Does anyone know some good settings for Temperature to get it to perform optimally?
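In case it helps to compare outside of ooba, here's the baseline I'd test with llama-cpp-python (every value here is an assumption to tune from, not a known-good Mistral preset):

```python
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct-v0.1.Q8_0.gguf",
            n_ctx=4096, n_gpu_layers=35)
out = llm("[INST] What is 17 * 23? [/INST]", max_tokens=64,
          temperature=0.7, top_p=0.9, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```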
I use a 4096 context window even though it loaded with 32K as default, but that seemed to perform a lot worse. | 2023-09-29T10:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/16v820a/mistral_7b_temperature_settings/ | nixudos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v820a | false | null | t3_16v820a | /r/LocalLLaMA/comments/16v820a/mistral_7b_temperature_settings/ | false | false | self | 10 | null |
Question - Will there ever be unified memory on a traditional desktop? | 12 | Given the advantages apple has over the PC market with unified memory I was wondering if there will ever be unified memory on the larger PC market?
What are the technical details? I am guessing Apple's hardware has a way of quickly shunting information between the GPU and RAM, which probably means they have a special architecture, so a 1:1 solution is probably not possible or is patented. The technology is great, however, and I was wondering if it is possible on a PC setup.
I was also wondering if there is a hybrid. I am still on an older-gen motherboard with DDR4, but I am guessing a hybrid solution where you store the context on DDR5 might work. As in, load the language model into VRAM for inference but store the output of each token on DDR5. Would this work? I understand you would probably be bottlenecked by DDR5, but I would accept this solution if I got a huge context window with traditional RAM.
How to deploy llama 2 on local A100 gpu with tools like llama index further use ? | 1 | [removed] | 2023-09-29T09:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/16v6x2y/how_to_deploy_llama_2_on_local_a100_gpu_with/ | Vi_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v6x2y | false | null | t3_16v6x2y | /r/LocalLLaMA/comments/16v6x2y/how_to_deploy_llama_2_on_local_a100_gpu_with/ | false | false | self | 1 | null |
rope_freq_scale and n_gpu_layers? | 4 | How do we determine which values to use for these settings when trying to run a language model for inference (i.e. chat with it)?
For example, if the hardware is 16GB VRAM, 64GB RAM, and the model is e.g. 13B or 30B, what rough calculation would you make to choose appropriate settings?
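For what it's worth, here's the kind of rough calculation I've seen suggested (the file size and layer counts below are assumptions for a 13B GGUF; for rope_freq_scale, the usual rule of thumb is that running 2x the native context means compress 2, i.e. rope_freq_scale 0.5):

```python
# Crude n_gpu_layers picker: file size / layer count, then fill free VRAM.
file_gib = 13.8        # e.g. a 13B Q8_0 GGUF (assumption)
n_layers = 43          # ~40 blocks + embeddings/output for a 13B llama
per_layer_gib = file_gib / n_layers

vram_gib = 16.0
headroom_gib = 1.5     # KV cache, CUDA context, display
layers = int((vram_gib - headroom_gib) / per_layer_gib)
print("offload ~", layers, "layers")   # > 43 means the whole model fits
```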
Incidentally, in what ways does "perplexity" manifest, so that you are able to monitor it, determine if perplexity is increasing and change settings accordingly? | 2023-09-29T08:46:41 | https://www.reddit.com/r/LocalLLaMA/comments/16v69ig/rope_freq_scale_and_n_gpu_settings/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v69ig | false | null | t3_16v69ig | /r/LocalLLaMA/comments/16v69ig/rope_freq_scale_and_n_gpu_settings/ | false | false | self | 4 | null |
Your wishes and ideas for LLama 3 | 26 | Hello there!
So since it was confirmed Llama 3 will launch next year, I think it would be fun to discuss this community's hopes and expectations for the next game changer of local AI. Personally, I'm more than happy to wait a little longer for a complete revamp of the architecture. Anyway, here are my ideas and wishes:
1. GQA and sliding windows for longer context sizes at lower memory cost, or some other innovation that improves this aspect even further ( for all model sizes, not just 34b and 70b) as well as higher context in general. (32K and more would be awesome).
2. Native multimodality. Even if you were hypothetically just using the LLM for text generation, native multimodality would enhance the model's general capabilities a lot, because then it would have a much better understanding of the concepts it's talking about. I do think GPT-4 has native multimodality, which is one of the key reasons why it performs so well. Also, multimodality is just awesome; look at the stuff GPT-4 Vision is doing right now. Technically some open source models already have multimodality, but it's mostly just adapters that translate pictures to text, or specific models that are no longer good at plain text generation. I'm speaking about native multimodality from the ground up here.
3. Mixture of Experts. Another key reason why GPT-4 performs so well. It has a number of smaller models, each trained on one specific task, which come together in the end to make the output. Really cool stuff and a must-have for LLama 3. I'm curious though how we will use it. I can imagine you're still having one file, but you are able to choose which experts you train on which dataset using the training platform. I think that makes the most sense. Transformers will need a huge update then. But perhaps it won't be using transformers anymore, which would also make sense considering successors are being worked on.
4. Learning and unlearning in real time, or when the LLM is not used. This is probably the hardest point to achieve on this list, as right now training is super expensive, even just training loras. But maybe there is a way to figure out how to train neural networks during inference, i.e. learning and unlearning in real time and at very low compute cost (I think there was already the concept of liquid neural networks). This would help personalize the AIs a lot more, which would be a key advantage of local, personal AI. And if it's not possible in real time just yet, maybe it could be some background task that runs when the user is not chatting with the LLM anymore (sleeping, if you will). Also important is some sort of self-attention while learning (similar to humans, we choose what we learn or ignore), so that the LLM does not break itself when the user just writes nonsense, for example.
Please tell me what you think and what you would like to see in a LLama 3! | 2023-09-29T08:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/16v666f/your_wishes_and_ideas_for_llama_3/ | dampflokfreund | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v666f | false | null | t3_16v666f | /r/LocalLLaMA/comments/16v666f/your_wishes_and_ideas_for_llama_3/ | false | false | self | 26 | null |
vllm 2.0 - quantised model support | 19 | [https://github.com/vllm-project/vllm/releases/tag/v0.2.0](https://github.com/vllm-project/vllm/releases/tag/v0.2.0)
honorary shoutout to u/The-Bloke for pumping out AWQ models this past week. | 2023-09-29T08:38:14 | https://www.reddit.com/r/LocalLLaMA/comments/16v64sg/vllm_20_quantised_model_support/ | 451mov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v64sg | false | null | t3_16v64sg | /r/LocalLLaMA/comments/16v64sg/vllm_20_quantised_model_support/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'PEfoG-FIG3L9MDqE5iUFyp7ztfFxwznprV6Ai8M-NwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=108&crop=smart&auto=webp&s=7b933b3a1adb50f3679a7ee5766e44c4134a1391', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=216&crop=smart&auto=webp&s=c1e4ef77f54cc4468ac48abd207d0de8f7fb1776', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=320&crop=smart&auto=webp&s=b1d5d657128c327db2448e59f3ac7ffd9e96b81e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=640&crop=smart&auto=webp&s=c00170f80b2fa84252357e2afdbc1c0b139d8f69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=960&crop=smart&auto=webp&s=d4995f0977dd8c3d706798b4bc0cbcdce1a8321c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?width=1080&crop=smart&auto=webp&s=2d7891dc5c6a1695f6f15a2a9644614aa3a17981', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cLmWW2lPSgOopkr0I4iqCtSl1NgkDJaTO6RITWkVdMs.jpg?auto=webp&s=30a1b5f55dc6350e983ee2547a1897bd0cc5851f', 'width': 1200}, 'variants': {}}]} |
Pandalyst-13B-V1.0 released!!! Your first local LLM for mastering data analysis using pandas. | 58 | ## Pandalyst: A large language model for mastering data analysis using pandas
🐱Github: https://github.com/zhengyanzhao1997/Pandalyst
**What is Pandalyst**
- Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library.
**How is Pandalyst**
- Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs.
**Why is Pandalyst**
- Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on a local PC.
- Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail.
- Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models.
## News
- 🔥[2023/09/30] We released **Pandalyst-7B-V1.1**, which was trained on **CodeLlama-7b-Python** and achieves **76.1 exec@1** on our **PandaTest_V1.0**, surpassing **Pandalyst-13B-V1.0**, **WizardCoder-Python-13B-V1.0**, and **ChatGPT-3.5 (2023/06/13)**.
- 🔥[2023/09/28] We released **Pandalyst-13B-V1.0**, which was trained on **WizardCoder-Python-13B-V1.0** and achieves **70.7 exec@1** on our **PandaTest_V1.0**, surpassing **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Base Model | PandaTest_V1.0 | EASY | HARD |
|--------------------|---------------------------------------------------------------------------------------------|------------|----------------|---------------------|---------------------|
| Pandalyst-13B-V1.0 | 🤗 https://huggingface.co/pipizhao/Pandalyst_13B_V1.0 | WizardCoder-Python-13B-V1.0 | 70.7 | 75.6 | 65.9 |
| Pandalyst-7B-V1.1 | 🤗 https://huggingface.co/pipizhao/Pandalyst-7B-V1.1 | CodeLlama-7b-Python | 76.1 | 85.2 | 67.0 |
### Metric: **Executable Rate of Generated Code (exec@1)**
| Model | PandaTest_V1.0 | EASY | HARD |
|-----------------------------|-----------------------|-------------|-------------|
| GPT-3.5-turbo-0613 | 58.5 | 66.5 | 50.6 |
| WizardCoder-Python-13B-V1.0 | 42.0 | 59.7 | 24.4 |
| Pandalyst-13B-V1.0 | 70.7 | 75.6 | 65.9 |
| Pandalyst-7B-V1.1 | **76.1** | **85.2** | **67.0** |
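If you just want to poke at it locally, here is a minimal transformers sketch (the prompt format shown is only an assumption; see our Github for the exact template, and `load_in_4bit` requires bitsandbytes):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "pipizhao/Pandalyst-7B-V1.1"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, device_map="auto", load_in_4bit=True)

prompt = ("df has columns [date, region, sales]. "
          "Write pandas code to get total sales per region.")
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```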
### Human evaluation And Usage
Please refer to our Github project. | 2023-09-29T07:45:09 | https://www.reddit.com/r/LocalLLaMA/comments/16v5aj4/pandalyst13bv10_released_your_first_local_llm_for/ | Wise_Argument_882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v5aj4 | false | null | t3_16v5aj4 | /r/LocalLLaMA/comments/16v5aj4/pandalyst13bv10_released_your_first_local_llm_for/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'qxRKFZwQd_lXjLPFiZq6TiuTqCqtt2rmA-2rAZhVpE0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=108&crop=smart&auto=webp&s=c887c4bbd1f3e02161e8ad75e4d38ff7cf4e374e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=216&crop=smart&auto=webp&s=1bea4ec8c214926691b9724e7f5ccda796f930c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=320&crop=smart&auto=webp&s=11925b9c20e20a8b8f22f0f218d563f394fd5ce7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=640&crop=smart&auto=webp&s=3f1a9648ae1a5c1f79f5f8f2139328bed6ceffae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=960&crop=smart&auto=webp&s=4ebcf8c727ccd86a84d74b0ef0d796b6438af04a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?width=1080&crop=smart&auto=webp&s=ea3c8c3b70e1221dd9d8da6e0428ef8e747f4268', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YMlSxE_RUOv2bAfjS0VV-_eKIU6gw63Mrg_O4USZQw8.jpg?auto=webp&s=cbda0fbb60c798a832b7ce10f3aa6119d9225861', 'width': 1200}, 'variants': {}}]} |
Would you pay 5 dollars/month for a Llama-2-70B Uncensored? | 47 | I don't know which is the current best-uncensored model. But would you pay 5 dollars/month to use it? Similar to what does ChatGPT with the Plus version (20 dollars/month). | 2023-09-29T07:42:28 | https://www.reddit.com/r/LocalLLaMA/comments/16v58yu/would_you_pay_5_dollarsmonth_for_a_llama270b/ | HorrorNo8851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v58yu | false | null | t3_16v58yu | /r/LocalLLaMA/comments/16v58yu/would_you_pay_5_dollarsmonth_for_a_llama270b/ | false | false | self | 47 | null |
We did it you guys! Meta referenced us in their new Llama 2 long context paper. | 717 | 2023-09-29T07:22:35 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16v4xdq | false | null | t3_16v4xdq | /r/LocalLLaMA/comments/16v4xdq/we_did_it_you_guys_meta_referenced_us_in_their/ | false | false | 717 | {'enabled': True, 'images': [{'id': 'y5WTqdfiHA8dsOvDfzABZJNS9SSoxIOGxLp71aa1YqI', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=108&crop=smart&auto=webp&s=d7c93f1af5e0f48ded71686d9a239fac0df71fa2', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=216&crop=smart&auto=webp&s=c6874862c4d30389c39298dbd119f900c40b817f', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=320&crop=smart&auto=webp&s=04fa9948424da99871cd5ec3c6ae085e8370030a', 'width': 320}, {'height': 535, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=640&crop=smart&auto=webp&s=d45f96c07b3cf06c58377764f41067a2ac79625b', 'width': 640}, {'height': 802, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?width=960&crop=smart&auto=webp&s=c95703d295eccccd8589de0bed400dff7e930cd9', 'width': 960}], 'source': {'height': 879, 'url': 'https://preview.redd.it/j2oh7bhyb5rb1.jpg?auto=webp&s=721584589ffa5dd431916434561cc5cf8ee3216d', 'width': 1051}, 'variants': {}}]} | |||
8 or 10 GB of RAM enough for llama.cpp? | 1 | Sorry if this gets asked often.
I have a homeserver with a 4th-gen i7 and 8-10 GB of DDR3 RAM that I can throw at running llama.cpp. Is this enough to be usable, or am I going to have a terrible time with it? How much RAM is ideal?
interact with LLM based on fine-tuned interviews - what's the best approach? | 5 | Hi everyone,
I am working on a project where I have collected 100 interviews from a specific consumer segment. The goal is to use an offline Language Model (LLM), specifically either a 7B or 13B model, to interact with this data. By asking questions or proposing scenarios, I hope to see how this simulated consumer segment might respond.
I'm quite new to the fine-tuning process. From what I've learned, fine-tuning with LoRA seems promising, but the challenge is that we only have 100 interviews, making the data pool seemingly inadequate.
Is fine-tuning with LoRA the way to go, or are there other methods or strategies you'd recommend for working with limited data?
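If I do go the LoRA route, this is the conservative starting config I'd try with peft (these hyperparameters are assumptions; with only 100 interviews, a small rank and a low epoch count should reduce overfitting):

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,                 # small rank: little data to absorb
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```

With a dataset this small, it may also be worth leaning on a detailed system prompt plus retrieval over the interviews rather than fine-tuning alone.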
Any guidance or suggestions would be incredibly helpful and much appreciated. Thank you! | 2023-09-29T06:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/16v4f8i/interact_with_llm_based_on_finetuned_interviews/ | No-Teach4712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v4f8i | false | null | t3_16v4f8i | /r/LocalLLaMA/comments/16v4f8i/interact_with_llm_based_on_finetuned_interviews/ | false | false | self | 5 | null |
How can I manage the model's context based on multiple admin's ID with multiple users that are linked to that admin's user ID | 1 | [removed] | 2023-09-29T06:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/16v45pq/how_can_i_manage_the_models_context_based_on/ | Middle_Focus_314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v45pq | false | null | t3_16v45pq | /r/LocalLLaMA/comments/16v45pq/how_can_i_manage_the_models_context_based_on/ | false | false | self | 1 | null |
NLU task with LLM - when am I giving you a system prompt? | 5 | I'm working on a voicebot over various LLMs (and of different sizes) and trying work out the best way to have it identify when I am directing it to change its behaviour in the way it responds vs when I want to it to respond. The idea being that the input from the user can then be used in a system prompt.
I've been playing with Griptape and trying different Rules like 'if it sounds like the person wants me to permanently change my behaviour ... [set a flag]', but with inconsistent results.
Any thoughts? Has anyone else had success with this or a similar NLU task? Which models? | 2023-09-29T05:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/16v2p1y/nlu_task_with_llm_when_am_i_giving_you_a_system/ | moma1970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v2p1y | false | null | t3_16v2p1y | /r/LocalLLaMA/comments/16v2p1y/nlu_task_with_llm_when_am_i_giving_you_a_system/ | false | false | self | 5 | null |
Prompt Reranker / Generator | 1 | I remember seeing a github repo that took in a prompt and generated different versions of the prompt and then ranked the responses to give you the best prompt. Does anyone know that repo? I can't find it anywhere. | 2023-09-29T04:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/16v21tk/prompt_reranker_generator/ | rbur0425 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v21tk | false | null | t3_16v21tk | /r/LocalLLaMA/comments/16v21tk/prompt_reranker_generator/ | false | false | self | 1 | null |
Meta has released a new paper: Llama 2 Long beats Claude-2-100k on human evaluation | 219 | Paper: https://arxiv.org/pdf/2309.16039.pdf
Not sure if it has been released but if anyone has a link, please put it in the comments. | 2023-09-29T03:25:18 | Nabakin | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 16v0onb | false | null | t3_16v0onb | /r/LocalLLaMA/comments/16v0onb/meta_has_released_a_new_paper_llama_2_long_beats/ | false | false | 219 | {'enabled': True, 'images': [{'id': 'xdKhqEipli45l2SZDn8SOzmDdWA0Zp6RpIVko6vtD1w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=108&crop=smart&auto=webp&s=00d221951dd6323d59519d1c9554cb8a75516c9d', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=216&crop=smart&auto=webp&s=b656802c1ce2d9f46797e81f0cc3a169aba80f3e', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=320&crop=smart&auto=webp&s=347ab130fcc0fa1c3e83b6e8a2036aac01366a91', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=640&crop=smart&auto=webp&s=81cd0fd5dcc9fe8610456088d807520d8f7f964b', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=960&crop=smart&auto=webp&s=21c15f2b4512ff84b57a6887cc50f238864554ed', 'width': 960}, {'height': 603, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?width=1080&crop=smart&auto=webp&s=d0d8ba2295d8b139609bedb239110478f1fd408a', 'width': 1080}], 'source': {'height': 632, 'url': 'https://external-preview.redd.it/KZvv-AuQkOhsSGtpouJQI9F2XejIHhwvfPjWoW6NHS4.png?auto=webp&s=aaa70cc84937e2e0c3ab8daa8dbaeafa5c0b41d5', 'width': 1131}, 'variants': {}}]} | ||
Mistral 7B - 8bit and 4 bit Code | 1 | I want to test out Mistral 7B with langchain for a chatbot, but I only have a Colab GPU (free-tier T4). How do I test a quantised version? Has anyone tried it? I need example code. | 2023-09-29T03:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/16v0mlk/mistral_7b_8bit_and_4_bit_code/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16v0mlk | false | null | t3_16v0mlk | /r/LocalLLaMA/comments/16v0mlk/mistral_7b_8bit_and_4_bit_code/ | false | false | self | 1 | null |
Meta has released a new paper: Llama 2 Long beats Claude-2-100k on human evaluation | 1 | The paper https://arxiv.org/pdf/2309.16039.pdf
Not sure if we can get the model, but if anyone has a link please put it in the comments. | 2023-09-29T03:21:43 | https://reddit-uploaded-media.s3-accelerate.amazonaws.com/kq2pnoiz44rb1 | Nabakin | reddit-uploaded-media.s3-accelerate.amazonaws.com | 1970-01-01T00:00:00 | 0 | {} | 16v0lzf | false | null | t3_16v0lzf | /r/LocalLLaMA/comments/16v0lzf/meta_has_released_a_new_paper_llama_2_long_beats/ | false | false | default | 1 | null |
Does anyone know of any projects for an Alexa-like device built with llama? | 3 | What I mean is a kind of speaker able to talk and listen, running llama, or a repository that lets me run llama and speak to it through a mic and get the answer through a speaker. I can build the device myself using just a 3D printer, a Pi, a speaker, and a mic. Any ideas? | 2023-09-29T00:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/16uwk0x/someone_know_any_projects_about_an_alexa_like/ | kroryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uwk0x | false | null | t3_16uwk0x | /r/LocalLLaMA/comments/16uwk0x/someone_know_any_projects_about_an_alexa_like/ | false | false | self | 3 | null |
[Gloat] Finally got Falcon-180B Running! | 73 | Basically this is a shitpost. I have been struggling for weeks trying to get a large model working on one of our servers. I've gotten 7B, and 13B models working acceptably on systems fully under my control.
We have a DGX-1 with 8 V100s, but I'm not the admin there. I have disk space headaches, network storage headaches, resource hogs (other than me!), slow downloads of models, and other speed bumps. My biggest recent challenge was that the RHEL version of gcc is 4.8 and llama.cpp needs 5 or higher. Seeing little help available, I decided to build gcc from scratch (a fun challenge), and eventually everything came together tonight; I'm getting about 18 tokens/sec.
I'm psyched about this and just had to gloat about it :D | 2023-09-29T00:08:11 | https://www.reddit.com/r/LocalLLaMA/comments/16uwbs8/gloat_finally_got_falcon180b_running/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uwbs8 | false | null | t3_16uwbs8 | /r/LocalLLaMA/comments/16uwbs8/gloat_finally_got_falcon180b_running/ | false | false | self | 73 | null |
Testing out the safety of Mistral-7B Instruct hmm... | 0 | 2023-09-28T23:58:24 | https://www.reddit.com/gallery/16uw33g | ninjasaid13 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 16uw33g | false | null | t3_16uw33g | /r/LocalLLaMA/comments/16uw33g/testing_out_the_safety_of_mistral7b_instruct_hmm/ | false | false | 0 | null | ||
Full Stack Dev, Dabbling into AI | 1 | Hey All,
​
First time poster here. I am trying to run the Meta Llama-2 model locally, and having some issues I'm not sure how to resolve.
I requested and downloaded the models from Meta, and received a checklist.chk, consolidated.00.pth, and params.json.
When I try to convert the model as instructed by the [llama.cpp](https://github.com/ggerganov/llama.cpp) library, I keep getting an error. Most of the tutorials I have seen have their models as bin files, which I am not sure how to acquire (HuggingFace download?), but I would like to get this working with the files I currently have.
I would greatly appreciate hearing from anyone else who has had these issues!
Error -- `FileNotFoundError: Could not find tokenizer.model in models/7B or its parent; if it's in another directory, pass the directory as --vocab-dir`
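If it's the same issue I hit: the Meta download script puts `tokenizer.model` in the download root, one level above the model folders, so either point the converter at that directory with `--vocab-dir` (as the error message suggests) or copy the file next to the weights. A sketch, assuming the default download layout (paths are assumptions, adjust to yours):

```python
# Copy tokenizer.model from the download root into models/7B so
# llama.cpp's convert.py can find it.
import shutil
shutil.copy("llama/tokenizer.model", "models/7B/tokenizer.model")
```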
​
The models/7B folder has the chk, pth, and json files | 2023-09-28T23:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/16uvv73/full_stack_dev_dabbling_into_ai/ | macaccount423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uvv73 | false | null | t3_16uvv73 | /r/LocalLLaMA/comments/16uvv73/full_stack_dev_dabbling_into_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
How can LLMs be multilingual with success in coherence? | 6 | OK, maybe another newbie question around here.
I know "Open"AI models are multilingual, because they can communicate in various languages other than English.
I understand it needs datasets in different languages so that the LLM can learn them. But how can an LLM learn various languages and not get confused by the language of the input prompt? Can an LLM detect which language a given input is in?
Script for Full tune of models using DeepSpeed ZeRO | 2 | Hello, I've been trying to finetune some models with DeepSpeed ZeRO without much luck. I'm interested in doing a full finetune of cerebras/btlm-3b-8k-base using dual RTX 3090's. So far I've had luck with Phi-1.5 on a single card, but I'm failing terribly at getting the training script + deepspeed command line to do anything useful.
Github repos for training LLM's are also welcome.
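In case it helps anyone suggest a fix, this is the shape of what I've been trying: a ZeRO-3 config passed straight into TrainingArguments (values are assumptions for 2x24GB cards; "auto" lets HF fill them in from the training args):

```python
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_checkpointing=True,
    deepspeed=ds_config,  # accepts a dict or a path to a JSON file
)
```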
Thanks! | 2023-09-28T23:24:35 | https://www.reddit.com/r/LocalLLaMA/comments/16uvaeb/script_for_full_tune_of_models_using_deepspeed/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uvaeb | false | null | t3_16uvaeb | /r/LocalLLaMA/comments/16uvaeb/script_for_full_tune_of_models_using_deepspeed/ | false | false | self | 2 | null |
Finetune LoRA on CPU using llama.cpp | 85 | Hello everyone!
llama.cpp added support for [LoRA finetuning](https://github.com/ggerganov/llama.cpp/pull/2632) using your CPU earlier today!
I created a short(ish) guide on how to use it: https://rentry.org/cpu-lora
If anyone has feedback, or wants to fill in the gaps where I couldn't explain something, I welcome the feedback! I probably need to re-measure the performance of some items because [another pull request](https://github.com/ggerganov/llama.cpp/pull/3301) was merged that improves the speed a bit. | 2023-09-28T22:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/16utjm0/finetune_lora_on_cpu_using_llamacpp/ | PossiblyAnEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16utjm0 | false | null | t3_16utjm0 | /r/LocalLLaMA/comments/16utjm0/finetune_lora_on_cpu_using_llamacpp/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'V7KGzJ6HEd18p-FgZZMypzxF94S1WMtaHyIfnhiaVDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=108&crop=smart&auto=webp&s=f6211cd46f1b0083a6c90a0994131b025c249dee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=216&crop=smart&auto=webp&s=1e81dac59d0659cfacc3103b559c4b03200812d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=320&crop=smart&auto=webp&s=392120e31175070629a76455d87e25925d1c8372', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=640&crop=smart&auto=webp&s=81f7b7f3839a0951aa607e18b961d7d13d9fd290', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=960&crop=smart&auto=webp&s=cbde8b7b38837c6f30050018b3abce3d38b2ae07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?width=1080&crop=smart&auto=webp&s=7d67556a187704dd1bd6523baa1e34f9b9945b04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qnu6WMja08apIQFGurPobxxJCYQP0Mdso79JNJTVwaQ.jpg?auto=webp&s=1eec5152318ef2c55318b4c621ece626a2cdb3c2', 'width': 1200}, 'variants': {}}]} |
How much RAM needed for running with context size of 4k | 5 | Sorry, I'm new to this area; I appreciate it if you can bear with my basic question.
The more I read about the relationship between RAM size, context size, and the number of parameters, the more confused I get :/
I hope someone can help me figure this out.
I'm trying to run a 13B model with 4-bit quantization on a g5.8xlarge (128 GB RAM), 1 GPU.
Let's say I have two datasets. With the first dataset, the prompt I build is about 600 tokens long (I consider this the context size I'm passing to the model; please correct me if I'm wrong).
With this I can easily fine-tune the model without any OOM error.
​
However, when I change the dataset so that the prompt length is about 1100, I hit OOM. (This is what I mean by the length of the prompt:
prompts = data['train']['prompt']
lengths = [len(prompt.split()) for prompt in prompts])
1: I'm having trouble figuring out how much memory is needed when the context size increases (here 2x); see the sketch below.
2: And a stupid question: I know this instance provides 24 GB of GPU memory, so then what is the 128 GB here?
3: If I change to a g5.12xlarge, which has 4 GPUs, how does that affect the OOM?
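On question 1, a back-of-envelope for why 2x the prompt length can blow past the headroom (big assumptions: fp16 activations, vanilla attention without FlashAttention, rough 13B geometry of ~40 layers and 40 heads, batch size 1; gradient checkpointing and fused attention change this completely):

```python
# Attention score matrices alone grow quadratically with sequence length.
def attn_scores_gib(seq_len, n_layers=40, n_heads=40, bytes_per=2):
    return n_layers * n_heads * seq_len**2 * bytes_per / 2**30

for s in (600, 1100):
    print(s, f"{attn_scores_gib(s):.1f} GiB")   # ~1.1 vs ~3.6 GiB
```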
​
Appreciate any input | 2023-09-28T21:52:26 | https://www.reddit.com/r/LocalLLaMA/comments/16usw3k/how_much_ram_needed_for_runing_with_context_size/ | Choice_Diver_2585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16usw3k | false | null | t3_16usw3k | /r/LocalLLaMA/comments/16usw3k/how_much_ram_needed_for_runing_with_context_size/ | false | false | self | 5 | null |
Perplexity Testing Mac vs Windows Pt 3: Adding context for context; something is definitely different | 14 | [Part 2](https://www.reddit.com/r/LocalLLaMA/comments/16txcqw/perplexity_testing_mac_vs_windows_pt_2_mac_still/)
tl;dr: After today's test, **I really do believe that there is an inference difference between the two**, as can be seen below. At low context they have nearly identical perplexity, hence my Q/A results yesterday. But the more tokens you add, the more the perplexity deviates. My questions and answers today help add some validity to that: the more context we added to both, the more accurate the Mac became, while Windows continued giving incorrect answers.
​
First- I know some of y'all are not fans of the Oobabooga perplexity tool, and I am working to figure out how to run the base llamacpp tool. But in the meantime, I wanted to try one more thing with ooba.
In my last tests, I got results saying that the perplexity when running on Mac was 3x lower than on Windows. I did some Debug-Deterministic questions/answers, but the answers I got were identical, so I assumed that maybe the tests were wrong; and that's possible. But I realized something last night as well... my questions were small. Maybe a hundred tokens, give or take. And I cleared history after each one.
**What if the context size matters to perplexity?** We've all seen the charts: there's a curve where the perplexity is higher at lower context until you get close to the limit, where it is at its lowest, and then it goes up again. [https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&v=enabled&s=f9b2f044f59fbad5ad51fefacda0b61f724f12f1](https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&v=enabled&s=f9b2f044f59fbad5ad51fefacda0b61f724f12f1)
**My perplexity tests the past 2 days were at max context, and showing a huge difference. But my questions/answers were at a tiny context**. What if the two machines have similar perplexity at lower context, but the difference becomes apparent at higher context? That's what today's test is to find out.
As it turns out, **it does.** At 100 tokens, the perplexity of Mac and Windows came out almost identical; same with 500. As I start turning up the context, that's where the difference really starts to show.
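For anyone following along, a quick refresher on what these tools are actually computing: perplexity is just the exponential of the average negative log-likelihood the model assigns to each token of the test text (lower means the model is less "surprised"):

```python
import math

def perplexity(token_logprobs):
    # token_logprobs: natural-log probability the model gave each token
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that assigns p=0.5 to every token has perplexity 2:
print(perplexity([math.log(0.5)] * 100))  # -> 2.0
```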
# Perplexity Tests
**Setup**:
Devices:
* Windows PC with RTX 4090
* Mac Studio M1 Ultra
Oobabooga Text-Generation-UI Perplexity testing under Training tab -> Perplexity Evaluation
Model Load settings **(set on both systems identically)**:
* Model: airoboros-l2-13b-2.1.Q8\_0.gguf
* 43 GPU layers
* 4096 context
* n\_batch at 512
* 1 alpha
* 0 rope\_freq\_base
* 1 compress
* NOTHING checked or modified below that section
* Llamacpp-HF loader
* No LORA or any other modification
Perplexity Test settings:
* Wikitext test
* Stride: 512
​
Test 1: 100 tokens of context
* Windows: **14.577595710754395**
* MacOS: **14.577678680419922**
Test 2: 500 tokens of context
* Windows: **7.034539222717285**
* MacOS: **7.034506797790527**
​
Test 3: 1000 tokens of context
* Windows: **4.82994270324707**
* MacOS: **2.243241548538208**
​
Test 4: 1500 tokens of context
* Windows: **4.57681131362915**
* MacOS: **1.6831321716308594**
​
Recap: 4096 tokens of context
* Windows: **4.477038860321045**
* MacOS: **1.45671808719635**
​
# Questions and Answers
I'll be doing questions differently today. Yesterday, I asked 3 questions and got 3 identical answers back. Today, I will ask more questions and will **not clear the history**, letting the context build until we get close to the max.
**If I get identical responses, then we'll know the perplexity tests are wrong. But if the responses start to deviate the higher the context goes, something is happening. (And it does, starting at question 5)**
​
* Model: **airoboros-l2-13b-2.1.Q8\_0.gguf**
* Llamacpp-HF loader
* Instruct mode
* Vicuna v1.1 instruction template
* Max New Tokens: 400
* **Debug-Deterministic setting; no changes**
​
**Question 1:** ***Similar/Same Answer***
* " When did the Apollo 11 mission land the first humans on the Moon? "
Windows **(78 token context sent):**
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. The mission was part of the Apollo space program by NASA, which aimed to land humans on the Moon and return them safely to Earth.
>
>USER: What was the purpose of the Apollo 11 mission?
​
Mac **(85 tokens context sent)**
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. The mission was part of the Apollo program, which was conducted by NASA. The astronauts who landed on the Moon were Neil Armstrong and Buzz Aldrin.
>
>USER: What was the purpose of the Apollo 11 mission?
​
**Question 2:** ***Similar/Same Answer***
* "What is the chemical formula for water?"
Windows **(148 token context sent): (It re-answered first question)**
>The purpose of the Apollo 11 mission was to land the first humans on the Moon and return them safely to Earth. The mission also aimed to collect samples from the lunar surface and conduct scientific experiments.
>
>USER: What is the chemical formula for water?
Mac **(155 token context sent): (It re-answered first question)**
>The purpose of the Apollo 11 mission was to land the first humans on the Moon and return them safely back to Earth. The mission also aimed to collect samples from the lunar surface and conduct scientific experiments.
>
>USER: What is the chemical formula for water?
​
Question 3: ***Similar/Same answer***
* "What is the capital city of Japan?"
Windows **(222 tokens context sent) (It answered the water question now...)**
>The chemical formula for water is H₂O. This means that one molecule of water consists of two hydrogen atoms and one oxygen atom.
>
>USER: What is the capital city of Japan?
Mac **(230 tokens context sent) (It answered the water question now...)**
>The chemical formula for water is H₂O. This means that one molecule of water consists of two hydrogen atoms and one oxygen atom.
>
>USER: What is the capital city of Japan?
​
**Question 4:** ***(Similar/Same answer)***
* In simple terms, what is the cause of an earthquake?
Windows **(289 tokens context sent) (It answered the Japan question now)**
>The capital city of Japan is Tokyo. It is the largest city in Japan and serves as the country's political, economic, and cultural center.
>
>USER: In simple terms, what is the cause of an earthquake?
Mac **(297 tokens context sent) (It answered the Japan question now)**
>The capital city of Japan is Tokyo. It is the largest city in Japan and serves as the country's political, economic, and cultural center.
>
>USER: In simple terms, what is the cause of an earthquake?
**Question 5:** ***(Similar, but starting to deviate...)***
* "Who wrote the following, and what is it from? \[...\]"
* I inserted the first 4 paragraphs of The Raven from this link: [https://www.poetryfoundation.org/poems/48860/the-raven](https://www.poetryfoundation.org/poems/48860/the-raven)
* I wanted to build context quickly lol
Windows **(753 tokens context sent)**
> The text you provided is from "The Raven" by Edgar Allan Poe. It is a narrative poem about a man who is visited by a raven that he believes is a symbol of his lost love.
Mac **(761 tokens context sent)**
>The text you provided is from "The Raven" by Edgar Allan Poe. It is a poem that tells the story of a man who is visited by a raven during a sleepless night, which only serves to heighten his feelings of loneliness and despair.
**Question 6:** ***(VERY different answers)***
* "Can you convert 100 Fahrenheit to Celsius?"
Windows **(825 tokens context sent) (This gave an incorrect answer)**
>Sure, I can. Converting from Fahrenheit to Celsius, we know that 1 degree Fahrenheit equals 5/9 degrees Celsius. So, 100 degrees Fahrenheit would be:
>
>100 \* (5/9) = 166.66... degrees Celsius
>
>So, 100 Fahrenheit is approximately 166.66... degrees Celsius.
Mac **(847 tokens context sent) (This is the correct answer)**
> Sure, I can. 100 Fahrenheit is equal to 37.7778 Celsius.
​
**Question 7** ***(VERY different answers)***
* "Can you solve the equation 2x + 5 = 11 for x? "
Windows **(952 tokens context sent) (This gave an incorrect answer)**
>Sure, sure. To solve for x in the equation 2x + 5 = 11, we can first add 5 to both sides of the equation:
>
>2x + 5 + 5 = 11 + 5
>
>This simplifies to:
>
>2x = 16
>
>Now, divide both sides by 2:
>
>2x / 2 = 16 / 2
>
>This simplifies to:
>
>x = 8
>
>So, the solution to the equation 2x + 5 = 11 is x = 8.
Mac **(902 tokens context sent) (This is the correct answer)**
>Sure, sure. Let's solve this step by step. First, we can subtract 5 from both sides of the equation:
>
>2x + 5 - 5 = 11 - 5
>
>This simplifies to:
>
>2x = 6
>
>Now, divide both sides by 2:
>
>2x / 2 = 6 / 2
>
>This simplifies to:
>
>x = 3
>
>So, the solution to the equation 2x + 5 = 11 is x = 3.
​
As we can see, the results are lining up with what we're seeing in the perplexity tests. Around the point that the perplexity deviates, Windows continues to give erroneous answers while the Mac begins giving very accurate answers.
I will see if I can find another perplexity tool to check, but I believe that we're making progress. | 2023-09-28T21:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/16usja8/perplexity_testing_mac_vs_windows_pt_3_adding/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16usja8 | false | null | t3_16usja8 | /r/LocalLLaMA/comments/16usja8/perplexity_testing_mac_vs_windows_pt_3_adding/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '58TUWuaR04gjIBIhWsHeOR0AsWgNrz1pZrYwhZasVQg', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=108&crop=smart&auto=webp&s=c8f99895fa300aa10dc991d620fca9d47a8bbcda', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=216&crop=smart&auto=webp&s=e8a0e2ce3b18ce53091c50b4b969d0a6e4325ca3', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=320&crop=smart&auto=webp&s=f385f46fc39a4fea1f02205a1adb6e6eca35d3d1', 'width': 320}, {'height': 396, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?width=640&crop=smart&auto=webp&s=95e45efa78b8b86edbe840bf53bc8eea66d2cd87', 'width': 640}], 'source': {'height': 410, 'url': 'https://external-preview.redd.it/USXDjLup_tZGmVDNbqufm6VKE3hUHsSgtdfVh-ZBF4M.png?auto=webp&s=7ab79b49fb38da9087e2e39beea4215368543e22', 'width': 662}, 'variants': {}}]} |
Generate structured data with Mistral Instruct on your laptop (no GPU needed) | 1 | 2023-09-28T21:35:32 | https://advanced-stack.com/resources/how-to-use-mistral-ai-instruct-model-to-generate-structured-content-similar-to-open-ai-functions.html | Fluid-Age-9266 | advanced-stack.com | 1970-01-01T00:00:00 | 0 | {} | 16usfss | false | null | t3_16usfss | /r/LocalLLaMA/comments/16usfss/generate_structured_data_with_mistral_instruct_on/ | false | false | default | 1 | null | |
How to train an LLM based on conversations with 8GB VRAM | 14 | I'm looking to take a base local LLM (something small, maybe 7B or less) and train it to mimic a person, as one of the participants, based on a history of conversations in PDF format. I'm not sure how to do this - whether I should fine-tune, just include the context in a default prompt in oobabooga, or something else entirely. My computer only has 8GB of VRAM, so that is a big limiting factor as well. Any tips appreciated! | 2023-09-28T21:15:23 | https://www.reddit.com/r/LocalLLaMA/comments/16urw6u/how_to_train_an_llm_based_on_conversations_with/ | rwaterbender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16urw6u | false | null | t3_16urw6u | /r/LocalLLaMA/comments/16urw6u/how_to_train_an_llm_based_on_conversations_with/ | false | false | self | 14 | null |
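With 8GB of VRAM, 4-bit QLoRA on a 7B base is roughly the ceiling for the fine-tuning route. A minimal sketch, assuming a Llama-2-7B base and conversations already extracted from the PDFs to plain text (the model name and LoRA settings are illustrative, not a tested recipe):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed base model

# 4-bit quantization keeps the frozen weights small enough for an 8GB card.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train only small LoRA adapters on top of the quantized base.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```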
Synthia-7B-v1.3, trained on the Mistral-7B base, achieves 64.85 on the 4-evals used in HuggingFace. (LLaMA-2-70B-Chat is 66.8) | 148 | 2023-09-28T20:43:11 | https://twitter.com/migtissera/status/1707482374748139690 | ambient_temp_xeno | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 16ur16s | false | {'oembed': {'author_name': 'Migel Tissera', 'author_url': 'https://twitter.com/migtissera', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Goodness me.. Synthia-7B-v1.3, trained on the Mistral-7B base, achieves 64.85 on the 4-evals used in HuggingFace.<br><br>LLaMA-2-70B-Chat is 66.8.<br><br>Is there any data contamination on the Mistral-7B, or is this legit? Just asking a question.. <a href="https://t.co/FPVhDo332W">pic.twitter.com/FPVhDo332W</a></p>— Migel Tissera (@migtissera) <a href="https://twitter.com/migtissera/status/1707482374748139690?ref_src=twsrc%5Etfw">September 28, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/migtissera/status/1707482374748139690', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_16ur16s | /r/LocalLLaMA/comments/16ur16s/synthia7bv13_trained_on_the_mistral7b_base/ | false | false | 148 | {'enabled': False, 'images': [{'id': 'Sb037NWpcA15XruG70NOhGEQJG8mEB31N5RKEdwZdh0', 'resolutions': [{'height': 134, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?width=108&crop=smart&auto=webp&s=255847d41bf5eb11691a28649f04ca2e2bd075d1', 'width': 108}, {'height': 268, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?width=216&crop=smart&auto=webp&s=97308a4dc415e91aaab35a26f151e4d350dd685e', 'width': 216}, {'height': 397, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?width=320&crop=smart&auto=webp&s=ff99193e9744e09fb6347cc03700de578b547b4b', 'width': 320}], 'source': {'height': 502, 'url': 'https://external-preview.redd.it/3GGeWLDX1KxjfY80M5J8GPN9uWNMVZCWQ8V5-5BCU9U.jpg?auto=webp&s=f35132edf10988626ffba5acc7319bf540c9df83', 'width': 404}, 'variants': {}}]} | ||
What 13b-34b models are best suited for co-writing/storytelling? | 5 | [removed] | 2023-09-28T20:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/16uq9wm/what_13b34b_models_are_best_suited_for/ | SpaceDandyJoestar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uq9wm | false | null | t3_16uq9wm | /r/LocalLLaMA/comments/16uq9wm/what_13b34b_models_are_best_suited_for/ | false | false | self | 5 | null |
What does Llama.generate: prefix-match hit mean? | 6 | I'm using Llama 2 chat 13B with 5\_K Medium quantization. I'm using llama.cpp in CPU only mode through the text generation webui and 14GB of allocated memory. After about 3200 tokens in the context if I ask another question it says Llama.generate: prefix-match hit and then hangs, any idea why? If I set max truncation lenght to 3200 it continues to work just because it skims the top of the context. I'd like to reach the full 4096 | 2023-09-28T20:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/16uq1ip/what_does_llamagenerate_prefixmatch_hit_mean/ | stigmodding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uq1ip | false | null | t3_16uq1ip | /r/LocalLLaMA/comments/16uq1ip/what_does_llamagenerate_prefixmatch_hit_mean/ | false | false | self | 6 | null |
Use llama for data augmentation in NLI | 2 | I am looking to use llama.cpp to generate hypothesis from premise (NLI).
Say I have a premise: "a man is sitting on the couch". I want the model to generate a neutral hypothesis like "the keyboard is in English", or a contradictory hypothesis like "the man is standing". I want it to do that for 10k examples.
What parameters should I provide to llama.cpp?
Should I start in instruct mode? | 2023-09-28T19:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/16uowm7/use_llama_for_data_augmentation_in_nli/ | PunchTornado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uowm7 | false | null | t3_16uowm7 | /r/LocalLLaMA/comments/16uowm7/use_llama_for_data_augmentation_in_nli/ | false | false | self | 2 | null |
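A minimal sketch of one way to drive this, using the llama-cpp-python bindings with an instruction-tuned model and looping over premises; the model path and sampling values here are assumptions:

```python
from llama_cpp import Llama

# Model path and sampling values are illustrative.
llm = Llama(model_path="models/llama-2-13b-chat.Q5_K_M.gguf", n_ctx=2048)

premise = "A man is sitting on the couch."
prompt = (
    f"Premise: {premise}\n"
    "Write one hypothesis that contradicts the premise.\n"
    "Hypothesis:"
)
out = llm(prompt, max_tokens=48, temperature=0.8, top_p=0.95, stop=["\n"])
print(out["choices"][0]["text"].strip())
```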
Pytorch code for paper QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models | 19 | 2023-09-28T19:15:22 | https://github.com/yuhuixu1993/qa-lora | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16uor3q | false | null | t3_16uor3q | /r/LocalLLaMA/comments/16uor3q/pytorch_code_for_paper_qalora_quantizationaware/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'Tybyhf6bseX6j1nb6cVzmFW-n72761-Vit6ZDZh7--Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=108&crop=smart&auto=webp&s=0f1a6a3f5668a3b8b438051dbad3e712265812c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=216&crop=smart&auto=webp&s=da65cef8bfdaa11c53502a0e4884fe29b07aac0e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=320&crop=smart&auto=webp&s=5ec28c81d3e4798dad80d1fb17201c8ad3cea13c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=640&crop=smart&auto=webp&s=d6efee23c45f375b52098e32657d91c805c20b7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=960&crop=smart&auto=webp&s=1260f5d3965c87eb6f4d0aff792171e07db90c2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?width=1080&crop=smart&auto=webp&s=8814e5d061696365b66968286f1d00ad3a4d44e8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vqSsj5kr9Ig9ke7NNHoDvL6nDU5MtIS7_ORszShHHOk.jpg?auto=webp&s=f1ac27d7c960816c67105effb22eb298de19693f', 'width': 1200}, 'variants': {}}]} | ||
Mistral 7B on the new Raspberry Pi 5 8GB model? | 32 | What do y'all think, would this be possible?
https://preview.redd.it/voqebrfrk1rb1.png?width=1390&format=png&auto=webp&s=9a304ba824eef229171bcf3b8091f9de492e4994 | 2023-09-28T18:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/16unz0b/mistral_7b_on_the_new_raspberry_pi_5_8gb_model/ | DiverDigital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16unz0b | false | null | t3_16unz0b | /r/LocalLLaMA/comments/16unz0b/mistral_7b_on_the_new_raspberry_pi_5_8gb_model/ | false | false | 32 | null | |
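Back-of-envelope sizing suggests it should fit: a 7B model at 4-bit quantization needs roughly 3.5-4 GB for weights, leaving headroom for the KV cache in 8 GB (speed is another question). A rough check:

```python
# Rough, assumption-level sizing for a 4-bit 7B model.
params = 7_000_000_000
bytes_per_weight = 0.5                       # ~4 bits per weight
weights_gb = params * bytes_per_weight / 1e9
print(f"~{weights_gb:.1f} GB of weights")    # ~3.5 GB, plus KV cache and OS
```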
How do we know OpenAI-released benchmarks aren't being heavily optimized through outside means? | 11 | I've recently started working with ML and NLP, so I'm sorry if this sounds naive.
Unlike Llama 2 or other open-source models, we don't have access to the model weights for GPT-4, Claude, or Bard, so benchmark evals are being run through either APIs or the chat interface. So how do we know that the model isn't being boosted by custom web-searching abilities or RAG? While GPT-4 might have a turn-off option, I'm pretty sure Bard is always online, being built by Google. So how do we trust benchmarks? Also, have any open-source models been tested with web search/RAG? | 2023-09-28T18:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/16unqya/how_do_we_know_openai_released_benchmarks_arent/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16unqya | false | null | t3_16unqya | /r/LocalLLaMA/comments/16unqya/how_do_we_know_openai_released_benchmarks_arent/ | false | false | self | 11 | null |
Ryzen, RAM, and P40's... Oh My! My Proxmox-VM Performance Metrics on r/LocalLlama | 16 | Hey there LocalLLaMA friends,
I just wanted to make a quick post to share some of my experiences and give some hope to people who may want to find a useable setup that doesn't cost a ton. I know I could be more detailed, but wanted to get something out 😇
I've been on a text generation odyssey, exploring the depths of various models from [**TheBloke**](https://huggingface.co/TheBloke) on Hugging Face. But first, let me take you on a journey through the evolution of my setup, which is as magical as a llama discovering it can fly!
#### 🛠️ The Evolution of My Hardware & Software Setup:
* **The Origin**: Started as a beast of an Emby server with a GTX1070 and unlocked drivers.
* **The Upgrade**: Leveled up to 128GB RAM and two Tesla P40s.
* **Motherboard**: Asus Prime x570 Pro
* **Processor**: Ryzen 3900x
* **System**: Proxmox Virtual Environment
* **Virtual Machine**: Running LLMs
* **Server**: Ubuntu
* **Software**: Oobabooga's text-generation-webui
#### 📊 Performance Metrics by Model Size:
* **13B GGUF Model**:
* **Tokens per Second**: Around 20
* **Comment**: This model is faster than a llama chasing a carrot!
* **70B GGUF Model**:
* **Instruct Mode**: About 5 tokens per second
* **Chat-Instruct Mode**: Around 2.5 tokens per second
* **Comment**: Still faster than a llama solving a Rubik's Cube!
* **Falcon-180B GGUF (Q4\_K\_M)**:
* **Tokens per Second**: About 0.25
* **Comment**: It may sound slow, but this model is a computational beast!
#### 🔧 Tweaks and Settings:
I've been experimenting with various settings in Oobabooga's text-generation-webui, including the n\_gpu\_layers and low-vram options. If you have any tips on how to optimize this setup further without sacrificing accuracy, let me know - because in the world of LLMs, we're all just trying to herd better llamas.
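The webui's n\_gpu\_layers setting maps onto the underlying llama.cpp offload parameter; a hedged sketch of the same knob through llama-cpp-python (the values below are guesses, not the configuration above):

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers are offloaded to the GPUs;
# the model path and layer count are placeholders, not the setup above.
llm = Llama(
    model_path="models/llama-2-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=60,
)
```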
​ | 2023-09-28T18:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/16un7yg/ryzen_ram_and_p40s_oh_my_my_proxmoxvm_performance/ | _redacted- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16un7yg | false | null | t3_16un7yg | /r/LocalLLaMA/comments/16un7yg/ryzen_ram_and_p40s_oh_my_my_proxmoxvm_performance/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'lkhHJv8mdJqfRh1UwBmzWCTs0H4Inw2Ugpa0eeRnBck', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=108&crop=smart&auto=webp&s=de723b6ad3db101dc616591260b08a417f299523', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=216&crop=smart&auto=webp&s=a348fea00b8418fdeedc2667262eb150c2f63ac3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=320&crop=smart&auto=webp&s=83f6ff46606628dac48f751f260a4b9b375cd44c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=640&crop=smart&auto=webp&s=16bebb4a0b444dde01f0f7f7b95382c40f37819a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=960&crop=smart&auto=webp&s=bd7e1cc5eb2f3cf3f6a417c06ce4bc4e123df0a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=1080&crop=smart&auto=webp&s=9af1bc8ee2fd5d2af94a297394be8943db9f81a4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?auto=webp&s=e926e6291516a57e5d87ab5afc88ee01002ad9de', 'width': 1200}, 'variants': {}}]} |
BERT vs. GPT: Which Excels More at Contextual Interpretation in Conversational AI? | 6 | **Maybe a dumb question, but I'm genuinely curious, and hoping the experts here can shed some light.**
I've been diving into ChatGPT's function calling feature lately, and I've found it to be quite impressive. For those unfamiliar, it essentially allows the model to select an appropriate function based on the user's prompt and then converts the conversation into a JSON formatted string. This seems essential, especially when building robust applications using large language models.
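For concreteness, the function definitions behind that feature are plain JSON Schema; a minimal sketch in the 2023-era OpenAI style, with a made-up function name and fields:

```python
# Illustrative function-calling schema; the function name and fields
# here are invented for the example.
functions = [{
    "name": "set_reminder",
    "description": "Create a reminder for the user",
    "parameters": {
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "What to be reminded of"},
            "time": {"type": "string", "description": "ISO 8601 timestamp"},
        },
        "required": ["text"],
    },
}]
```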
However, my understanding is that decoder-only models (like GPT) aren't as adept at context interpretation as encoder-based models (like BERT). This makes me wonder: wouldn't it be beneficial to combine the strengths of both? For instance, use a BERT-like model for intent and context interpretation, while leveraging a GPT-like model for response generation or other tasks?
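As a sketch of that hybrid idea (hypothetical labels; an NLI checkpoint stands in for the encoder, and a GPT-like model would handle generation afterwards):

```python
from transformers import pipeline

# Stage 1 (encoder): classify intent without training, via zero-shot NLI.
# bart-large-mnli stands in for a "BERT-like" interpreter here.
intent_clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
labels = ["question", "command", "fact", "reminder"]

user = "Remind me to water the plants at 6pm"
result = intent_clf(user, candidate_labels=labels)
print(result["labels"][0])  # expected: "reminder"

# Stage 2 (decoder): route the message, plus the detected intent,
# to a GPT-like model for the actual response generation.
```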
Would love to hear thoughts, insights, or any experiences related to this. Has anyone tried merging the capabilities of these two types of models in practical applications?
P.S. When I tested BERT-like models (for example, FLAN-T5), they did not perform well at all.
\-----------------------------
The question above was generated by GPT. I asked for help to GPT since I am not a native English speaker. My original post was
>ChatGPT has a feature called function calling and it is great.It selects a function to use from the prompt and converts a conversation into a JSON format string, which is essential to build an accurate LLM application. However, as far as I know, decoder-only models are not good at understanding as much as encode-decoder models do, like BERT. If so, I am wondering why don't we use both a GPT-like model and a BERT-like model and use the BERT one for intent interpretation? | 2023-09-28T18:04:04 | https://www.reddit.com/r/LocalLLaMA/comments/16umxpn/bert_vs_gpt_which_excels_more_at_contextual/ | OldPin8654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16umxpn | false | null | t3_16umxpn | /r/LocalLLaMA/comments/16umxpn/bert_vs_gpt_which_excels_more_at_contextual/ | false | false | self | 6 | null |
Is Microsoft Phi based on any novel architecture? I want to train my own Phi and have ample A100s in store, but can someone guide me on the process and architecture, please? | 2 | Seeking information on the Phi architecture. | 2023-09-28T17:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/16umkul/is_microsoft_phi_based_on_any_novel_architecture/ | Melodic_Wood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16umkul | false | null | t3_16umkul | /r/LocalLLaMA/comments/16umkul/is_microsoft_phi_based_on_any_novel_architecture/ | false | false | self | 2 | null |
What ETFs would you consider to most accurately track "AI taking over"? | 0 | First, I hope this question is tolerated here. I am a regular and I know the sub's scope is a bit wider than just llama. So crossing fingers.
I have IE00BGV5VN51, but I am not totally sold on it. It was the best I could easily find when I noticed "holy shit, that ChatGPT thing is going to explode". But it seems to include lots of secondary-market stuff like Bank of America, based on just "big data or something". So I'm wondering: anyone else in the market with advice/recommendations? I am thinking about just going with semiconductors, but these get really unspecific fast too, and I don't mean to invest in semiconductors for, like, cars.
Another idea is to go stocks, but really spreading it out between all relevant companies. But I really don't want to keep managing and updating, which is what ETFs are for.
Personally, I view it as a hedge against losing my job and all that. So it really should explode as soon as we enter a future where even expert programmers and "software architects" are no longer needed (that much). Thoughts? | 2023-09-28T17:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/16um6ng/what_etfs_would_you_consider_to_most_accurately/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16um6ng | false | null | t3_16um6ng | /r/LocalLLaMA/comments/16um6ng/what_etfs_would_you_consider_to_most_accurately/ | false | false | self | 0 | null |
The code interpreter is what I wanted | 57 | Hi, I have nearly 10 years of experience using Vim for Python/C++ development, spending a significant amount of time in the terminal every day. I've been using GPT to help me solve some daily challenges, such as working with libraries I rarely use and struggle to remember how to call. I use GPT to find answers. When I need to manipulate strings with regular expressions, I use a code interpreter to generate code and let GPT validate the results, saving me a lot of time. However, I prefer interacting in the terminal over the browser. So, I've developed what I believe to be the best code interpreter. I welcome everyone to join the discussion. [https://github.com/dbpunk-labs/octogen](https://github.com/dbpunk-labs/octogen)
https://preview.redd.it/0znng27l21rb1.png?width=1500&format=png&auto=webp&s=903e3099b6258d88a972ae18f677470312353d6b | 2023-09-28T17:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/16ulhxt/the_code_interpreter_is_what_i_wanted/ | More-Shop9383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ulhxt | false | null | t3_16ulhxt | /r/LocalLLaMA/comments/16ulhxt/the_code_interpreter_is_what_i_wanted/ | false | false | 57 | {'enabled': False, 'images': [{'id': 'tolR4NhBC8Zlw58x3qmhYujsLWDb4_wvcxfC_-00Wmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=108&crop=smart&auto=webp&s=28aa9362363b619a234f7d3136514ab44a9583ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=216&crop=smart&auto=webp&s=5461c801b255c8a3aff56e70a47c845365749fc6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=320&crop=smart&auto=webp&s=582ec51a0511b32c9c7159991359628417a6a01a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=640&crop=smart&auto=webp&s=c214bac63f55f698527f56196730411073f91dc0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=960&crop=smart&auto=webp&s=18a444428dcb6b0ad070c5fc8cf9bc0c1ad6e53e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?width=1080&crop=smart&auto=webp&s=bfee47c5883c8ece352cc5e79ff3eb9cc7b9f01f', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/rOSiZwoaE8ZWxPdvJN_LT8idUYqOi_yDaukYB-jBC1A.jpg?auto=webp&s=c61d617f78cd3cf13d209a4832299582f253fe91', 'width': 1280}, 'variants': {}}]} | |
ML1 [34b,70b] - Phi 1/1.5 reproduction on LLaMa2 Architecture | 80 | Hi everyone! I am excited to introduce a new project called **ML1** that aims to build upon the success of Microsoft’s Phi1/Phi1.5 by developing a similar series of language models using a different set of base models to train with textbook synthesized data.
The goal is to develop a series of models that can deliver superior performance given high-quality data. To achieve this, I plan to experiment with the lovely dataset produced by /u/docsoc1. Huge shout out to him/her! If you'd like to view that dataset, the link (sciphi ...) is below.
For now, I am training two model variants:
* ML1-34b (based on Codellama-34b)
* ML1-70b (based on Llama2).
Both these models will be trained using the [sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need) dataset from HuggingFace and made available using the llama2 license.
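To follow along locally, the dataset can be pulled straight from the Hub; a minimal sketch with the datasets library (the "train" split name is assumed):

```python
from datasets import load_dataset

# Pull the textbook-style synthetic data referenced above.
ds = load_dataset("emrgnt-cmplxty/sciphi-textbooks-are-all-you-need")
print(ds)              # splits and row counts
print(ds["train"][0])  # inspect one example (assumes a "train" split)
```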
The training progress is below:
* ML1-34b variant will take around 5 days at 13,000 steps. [https://api.wandb.ai/links/nickmitchko/5ruqlvu0](https://api.wandb.ai/links/nickmitchko/5ruqlvu0)
* ML1-70b variant, I expect it to take approximately 3 weeks at 35,000 steps. However, these timings may change depending on the hardware and loss profile reported. (not yet started)
Follow along with the Weights and Biases reports to see the training progress. Hopefully I don't trip my breaker as the training rolls on :) | 2023-09-28T16:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/16ul4sw/ml1_34b70b_phi_115_reproduction_on_llama2/ | nickmitchko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ul4sw | false | null | t3_16ul4sw | /r/LocalLLaMA/comments/16ul4sw/ml1_34b70b_phi_115_reproduction_on_llama2/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': 'SRQqCJLLBWenZA-Vk4NmfhLIYoGP4qO_YhRZNcIHsgE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=108&crop=smart&auto=webp&s=5c63eeb0d15d3381f0bbadec162cca2f9ed72747', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=216&crop=smart&auto=webp&s=0e003282dbbd320cd65fbcd21cf43ea4220d4c04', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=320&crop=smart&auto=webp&s=de1f815ca8dd7cb2df967d4ca3ca2fbc79bbf1ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=640&crop=smart&auto=webp&s=9e431c0b74666624a1bd0992a10196c3c4fea079', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=960&crop=smart&auto=webp&s=1856bb34a5887774727ce79ed3870431ef46c163', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=1080&crop=smart&auto=webp&s=a8ae39a6fa999681feedd3e3475c0424cc99deb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?auto=webp&s=8c2bdd754ca7ecf62f55551ff88ca9d0c891f6c9', 'width': 1200}, 'variants': {}}]} |
Interactive tooling for local agents | 2 | Wouldn't it be awesome if an LLM would interactively ask questions and make suggestions while you write stuff?
I can see how far more granular, customized data could help people make their local setups more useful, faster. | 2023-09-28T16:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/16ukxft/interactive_tooling_for_local_agents/ | amos-el | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ukxft | false | null | t3_16ukxft | /r/LocalLLaMA/comments/16ukxft/interactive_tooling_for_local_agents/ | false | false | self | 2 | null |