| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Share your favorite zero-shot python coding open-source LLM, please? | 11 | I use gpt-4 for python coding. I know there have been a lot of complaints about performance, but I haven't encountered those issues. I put a lot of effort into prompt engineering. I've experimented with some local LLMs, but I haven't been actively experimenting in the past several weeks, and things are moving fast. Is anyone having success with local LLMs for zero-shot coding? Mistral's 7b-instruct and Mixtral's 8x7b models have done pretty well for me. CodeLlama-34b-Instruct-hf is strong too. But I know the community is more on top of this than I am right now.
thoughts? | 2024-01-17T17:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1992gf6/share_your_favorite_zeroshot_python_coding/ | knob-0u812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1992gf6 | false | null | t3_1992gf6 | /r/LocalLLaMA/comments/1992gf6/share_your_favorite_zeroshot_python_coding/ | false | false | self | 11 | null |
Is there a curated list of mistral-based models that are good in different aspects? | 5 | Hello community!
Is there a curated list of mistral-based models that are good in different aspects? For example, one might be good at roleplay/storytelling, one might be good at coding, and one might be good at math, etc. I guess "aspects" is a pretty vague term, but what I had in mind is different abilities; e.g., long-context reasoning is also a pretty important aspect for a convenient chat.
Ideally I want a single model that is good at everything, but I was told that this is simply not possible, at least not right now in the OSS community (though I would be really happy to be proven wrong!)
If no such list is available anywhere, what would you recommend for different aspects? Like for coding which model should I use etc. Thank you a lot! I am trying to chat with different models to get a sense of the current best OSS models and just need some pointers. | 2024-01-17T17:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/1992drb/is_there_a_curated_list_of_mistralbased_models/ | tt19234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1992drb | false | null | t3_1992drb | /r/LocalLLaMA/comments/1992drb/is_there_a_curated_list_of_mistralbased_models/ | false | false | self | 5 | null |
DistantLLaMA? | 2 | If I want to run this on a server / cloud somewhere… where would I go? | 2024-01-17T17:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1991rpf/distantllama/ | emergentdragon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1991rpf | false | null | t3_1991rpf | /r/LocalLLaMA/comments/1991rpf/distantllama/ | false | false | default | 2 | null |
Personal benchmark prompts list | 7 | There is the saying that benchmarks become useless once they become public, which I have found to be very true with the current batch of fine-tunes. For that reason I run my own benchmark prompts, which I designed to be difficult to solve; they seem to give a better indicator of a model's value for my coding-productivity use case.
However, for every prompt that makes it into training data and gets solved properly, models get incrementally better at those specific domains, so there is value in sharing personal benchmarks and then creating new secret benchmarks once the old ones are absorbed into a model.
**If you concur with my thought process, I suggest you share some of your best prompts below. Someone may eventually capture them and integrate them in a model which would be positive for the open source development.**
My current benchmarks are:
* Easy difficulty: "Write django models for a twitter clone"
* Mid difficulty: "Write a python script for a CLI version of snake"
* Hard difficulty: "Write a python script for a CLI version of tetris"
Most 7B models can "solve" the first one, but most responses lack important subtlety with database design such as using ManyToManyFields where appropriate. Better models produce dramatically more efficient database designs.
Only 30B+ models solve the second one. Smaller models produce gibberish. A good indicator of better responses is use of the curses library.
Out of all models I've tested, only GPT-4 has produced working code for the Tetris prompt. It "worked" and produced a tetris-like game, but it had significant bugs. Most 30B+ models produced code that was in many ways the right idea, but none have produced running code.
To help robots, put your prompts in JSON format at the bottom of your post. If you'd like to expand on what is better or worse in the responses you see, that will probably be valuable for data collection purposes.
[
"Write django models for a twitter clone",
"Write a python script for a CLI version of snake",
"Write a python script for a CLI version of tetris"
] | 2024-01-17T17:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/1991i4u/personal_benchmark_prompts_list/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1991i4u | false | null | t3_1991i4u | /r/LocalLLaMA/comments/1991i4u/personal_benchmark_prompts_list/ | false | false | self | 7 | null |
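On the ManyToManyField point in the post above: the follower relation in a twitter clone is a directional many-to-many join, which weaker models often flatten into a single foreign key. Below is a minimal, Django-free sketch of the underlying join-table idea; all names here are illustrative, not from any benchmarked response.

```python
# Sketch of the join table a Django ManyToManyField implies for a
# twitter clone's "follow" relation, in plain Python for illustration.

follows = set()  # each entry is one (follower_id, followed_id) row

def follow(follower_id, followed_id):
    """Record that one user follows another (directional)."""
    follows.add((follower_id, followed_id))

def followers_of(user_id):
    """Who follows this user."""
    return {src for (src, dst) in follows if dst == user_id}

def following_of(user_id):
    """Who this user follows."""
    return {dst for (src, dst) in follows if src == user_id}

follow(1, 2)  # user 1 follows user 2
follow(3, 2)  # user 3 follows user 2
follow(2, 1)  # direction matters: following is not symmetric
```

A ForeignKey can only express one side of this relation; the directional join table (which is what `ManyToManyField` generates for you) is exactly the subtlety the post says better models get right.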
Is there an open source RAG supporting Microsoft SharePoint ? | 1 | [removed] | 2024-01-17T16:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1990zj8/is_there_an_open_source_rag_supporting_microsoft/ | ToothOne6699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1990zj8 | false | null | t3_1990zj8 | /r/LocalLLaMA/comments/1990zj8/is_there_an_open_source_rag_supporting_microsoft/ | false | false | self | 1 | null |
Is it feasible to use a NAS to store a vector database and access it in realtime using a local LLM? | 14 | Is it feasible to mount a massive NAS to my network, add ridiculously fast NICs to the NAS (40 Gb/s, 100 GB/s, etc), and attach it directly to another machine with equally fast NICs and several GPUs to create a machine that has access to a massive multi-terabyte vector database? I’m pretty sure the answer is yes, and I’m assuming that’s what Nvidia and others do using InfiniBand and Mellanox stuff. I don’t know if there’s a missing step or if there’s something that would prevent the layman from using this setup.
My idea was to connect my server with multiple GPUs to a [Synology FS6400](https://www.synology.com/en-global/products/FS6400). The server would run some LLM that uses the NAS as a datastore for embeddings and other unstructured data chunks. The ultra-fast NICs would allow the LLM to access data in milliseconds. If I was conversing with this LLM and it needed access to other data, I’d want no perceptible delay between responses regardless of if it made a call to the database on the NAS.
Aside from the cost, does this idea work/make sense? Is this the right type of hardware for this setup? | 2024-01-17T16:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1990qbn/is_it_feasible_to_use_a_nas_to_store_a_vector/ | ApprehensiveJob171 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1990qbn | false | null | t3_1990qbn | /r/LocalLLaMA/comments/1990qbn/is_it_feasible_to_use_a_nas_to_store_a_vector/ | false | false | self | 14 | null |
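A rough back-of-envelope helps with the "no perceptible delay" goal in the post above. The numbers below are assumptions chosen for illustration (1536-dim float32 embeddings, a 100k-vector fetch, an ideal 40 Gb/s link, ignoring protocol overhead and the NAS's internal disk throughput):

```python
# Back-of-envelope: time to pull a batch of raw embedding vectors
# over a fast NIC. All sizes are illustrative assumptions.

dim = 1536                    # embedding dimensions (assumed)
bytes_per_vec = dim * 4       # float32
vectors = 100_000             # candidate set pulled per query (assumed)
link_gbps = 40                # 40 Gb/s NIC at ideal throughput

payload_bytes = vectors * bytes_per_vec
link_bytes_per_s = link_gbps * 1e9 / 8

transfer_ms = payload_bytes / link_bytes_per_s * 1000
print(f"{payload_bytes / 1e6:.0f} MB in ~{transfer_ms:.0f} ms")
# roughly 614 MB in ~123 ms on these assumptions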
Extract symbolic AI from LLM | 6 | Hi,
Is anyone (as a hobby or serious research) working on trying to extract some kind of human readable symbolic AI from an LLM? My theory is that the symbolic AI is there somewhere inside and I think that extracting it would enhance understanding and more importantly control of the thing. Any related material, publication or opinion interests me, so please share them.
Thanks in advance! | 2024-01-17T16:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1990n31/extract_symbolic_ai_from_llm/ | elszben | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1990n31 | false | null | t3_1990n31 | /r/LocalLLaMA/comments/1990n31/extract_symbolic_ai_from_llm/ | false | false | self | 6 | null |
Balancing Cost and Efficiency in Mistral with Concurrency Scheduling | 3 | [removed] | 2024-01-17T16:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/198zzba/balancing_cost_and_efficiency_in_mistral_with/ | Karan-Sohi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198zzba | false | null | t3_198zzba | /r/LocalLLaMA/comments/198zzba/balancing_cost_and_efficiency_in_mistral_with/ | false | false | self | 3 | null |
NVIDIA L4 - how critical is GPU memory bandwidth for inference? | 1 | How critical is the GPU memory bandwidth for a local LLM setup?
The newer NVIDIA L4 has 300 GB/s, while other professional cards like the A10 have 600 GB/s.
Could somebody explain to me the role of GPU memory bandwidth when doing inference? | 2024-01-17T15:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/198zp8y/nvidia_l4_how_critical_is_gpu_memory_bandwidth/ | kosmoskolio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198zp8y | false | null | t3_198zp8y | /r/LocalLLaMA/comments/198zp8y/nvidia_l4_how_critical_is_gpu_memory_bandwidth/ | false | false | self | 1 | null |
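To the question above: for single-stream decoding, generating each token requires reading roughly all of the model's weights from VRAM once, so memory bandwidth sets a hard ceiling on tokens per second. A rough sketch, with assumed model sizes (7B at 4-bit ~4 GB of weights, 13B ~8 GB):

```python
def decode_ceiling_tok_s(weight_bytes_gb, bw_gb_s):
    """Upper bound on decode speed: bandwidth / bytes read per token."""
    return bw_gb_s / weight_bytes_gb

# Assumed weight sizes for quantized models.
for gb in (4, 8):
    l4 = decode_ceiling_tok_s(gb, 300)   # NVIDIA L4, 300 GB/s
    a10 = decode_ceiling_tok_s(gb, 600)  # A10, 600 GB/s
    print(f"{gb} GB of weights: L4 <= {l4:.0f} tok/s, A10 <= {a10:.0f} tok/s")
```

By this estimate the A10's doubled bandwidth roughly doubles the single-stream decode ceiling. Prompt processing and batched serving are more compute-bound, so the gap narrows there; real throughput also lands below these ceilings due to KV-cache reads and kernel overhead.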
Fine-tuning formats | 6 | Hey. I am trying to understand fine-tuning. Generally, Idea is clear to me, except data formats
Some fine-tuning datasets are in the format of

### Human
### Assistant

Some are in the format of

<s>[INST] ... [/INST]

Some are in the format of
{
"question":
"answer":
}
Can you help me understand the formats better? Is it the case that each model expects a certain format of fine-tuning data? | 2024-01-17T15:39:29 | https://www.reddit.com/r/LocalLLaMA/comments/198zfbz/finetuning_formats/ | jursla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198zfbz | false | null | t3_198zfbz | /r/LocalLLaMA/comments/198zfbz/finetuning_formats/ | false | false | self | 6 | null |
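To the question in the post above: yes, each model is trained against a specific chat template, and fine-tuning data is often stored in a neutral form (like question/answer pairs) and rendered into the model's template at training time. A hand-rolled sketch of that rendering step (simplified; real pipelines typically use the tokenizer's `apply_chat_template` rather than string formatting):

```python
def render_llama2(question, answer):
    """Render a QA pair into simplified Llama-2 chat format."""
    return f"<s>[INST] {question} [/INST] {answer} </s>"

def render_alpaca(question, answer):
    """Render the same pair into the ### Human / ### Assistant style."""
    return f"### Human\n{question}\n\n### Assistant\n{answer}"

pair = {"question": "What is RoPE?", "answer": "A positional encoding."}
print(render_llama2(**pair))
print(render_alpaca(**pair))
```

The same underlying pairs can be rendered into either template, which is why datasets ship in all three shapes; what matters is that training and inference use the template the base model expects.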
InternLM – SOTA OS 7B and 20B model with 200K context length | 1 | 2024-01-17T15:34:58 | https://twitter.com/Xianbao_QIAN/status/1747636181213589567 | hackerllama | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 198zb8n | false | null | t3_198zb8n | /r/LocalLLaMA/comments/198zb8n/internlm_sota_os_7b_and_20b_model_with_200k/ | false | false | 1 | null |
Is it possible to build a sufficient RAG application at the moment / Are there solutions for sale on the market? | 1 | [removed] | 2024-01-17T14:30:58 | https://www.reddit.com/r/LocalLLaMA/comments/198xui6/is_it_possible_to_build_a_sufficient_rag/ | Purity1212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198xui6 | false | null | t3_198xui6 | /r/LocalLLaMA/comments/198xui6/is_it_possible_to_build_a_sufficient_rag/ | false | false | default | 1 | null |
Is it possible to Merge Llava and Mistral? | 11 | The title. Is it possible/already done? Or any way to achieve this? Because combining a powerful vision model with a powerful language model could be a game changer. | 2024-01-17T14:09:26 | https://www.reddit.com/r/LocalLLaMA/comments/198xdqf/is_it_possible_to_merge_llava_and_mistral/ | Shoddy_Vegetable_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198xdqf | false | null | t3_198xdqf | /r/LocalLLaMA/comments/198xdqf/is_it_possible_to_merge_llava_and_mistral/ | false | false | self | 11 | null |
OpenPirate, Mistral 7B fine-tuned to talk like a pirate | 88 | [https://huggingface.co/valine/OpenPirate](https://huggingface.co/valine/OpenPirate)
Open pirate is based on OpenHermes 2.5 - Mistral 7B and fine tuned to talk like a pirate.
This model is a demonstration of a new fine-tuning technique I've been developing which generalizes more rapidly from smaller datasets. The technique is somewhat instructional. Rather than filling a dataset solely with text I want to mimic, I can instead build a dataset of instructions like "talk like a pirate", similar to what you would put in a system prompt. Unlike a system prompt, the behaviors are unaffected by their position in the context, so all responses from the model will be equally pirate themed. The model never forgets it's a pirate, and cannot be convinced to talk less like a pirate.
The technique is also surprisingly good at updating individual facts in the model. I can teach the model a new fact, and it will be used appropriately in contexts outside the original training set.
I'm currently looking for ideas to test. If you have a behavior you'd like to enforce into a Mistral 7B put it in the comments and I will train a model for you. | 2024-01-17T13:51:07 | https://www.reddit.com/r/LocalLLaMA/comments/198x01d/openpirate_mistral_7b_finetuned_to_talk_like_a/ | frownGuy12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198x01d | false | null | t3_198x01d | /r/LocalLLaMA/comments/198x01d/openpirate_mistral_7b_finetuned_to_talk_like_a/ | false | false | self | 88 | null |
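The post above does not publish its training data format, so purely as a hypothetical illustration, here is how an instruction-style row might differ from a plain mimic-style row; both the field names and values below are invented:

```python
# Hypothetical illustration only: the actual OpenPirate dataset
# format is not published, and these field names are invented.

mimic_style_row = {
    "text": "Arr, the weather be fine today, matey!",
}

instruction_style_row = {
    "instruction": "Talk like a pirate.",
    "response": "Arr, the weather be fine today, matey!",
}

# The claimed advantage: the instruction generalizes like a system
# prompt would, but is baked in regardless of context position.
print(sorted(instruction_style_row))
```

The contrast is the point: a mimic-style dataset teaches by example only, while an instruction-style dataset pairs the desired behavior with its description, which is what the post credits for the faster generalization.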
Super slow full fine-tuning of a 7B model on 4GB VRAM, CPU + 16GB RAM + SSD- suggestions needed | 1 | [removed] | 2024-01-17T13:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/198wjv6/super_slow_full_finetuning_of_a_7b_model_on_4gb/ | ZackWayfarer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198wjv6 | false | null | t3_198wjv6 | /r/LocalLLaMA/comments/198wjv6/super_slow_full_finetuning_of_a_7b_model_on_4gb/ | false | false | default | 1 | null |
PrivateGPT Error | 2 | Hello everyone
I made a recipe on how to install privateGPT on RunPod. So I checked my recipe by using it, but somehow I get weird behavior. Everything works so far, but when I ask a question, the LLM only responds with the # symbol. It looks like this.
User: tell me a joke
Assistant :##############
Why is this happening? | 2024-01-17T13:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/198wf18/privategpt_error/ | Blax363 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198wf18 | false | null | t3_198wf18 | /r/LocalLLaMA/comments/198wf18/privategpt_error/ | false | false | self | 2 | null |
Finetuning Mixtral on finance datasets | 1 | [removed] | 2024-01-17T11:53:09 | https://www.reddit.com/r/LocalLLaMA/comments/198utw9/finetuning_mixtral_on_finance_datasets/ | keeplearning24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198utw9 | false | null | t3_198utw9 | /r/LocalLLaMA/comments/198utw9/finetuning_mixtral_on_finance_datasets/ | false | false | self | 1 | null |
Free and Auto Model Eval Platform | 1 | I'm running evaluations of my own fine-tuned model on public evaluation datasets. However, I find the process of evaluation frustrating and boring. I need to download the data, set my own prompts, fix all mistakenly graded right/wrong results, etc. It feels like I'm meaninglessly reinventing the wheel.
So it occurred to me: why isn't there a public platform where you can just submit your model and it returns the evaluation results ASAP?
I know there is the huggingface llm leaderboard, but it's more of an arena than a platform offering evaluation services.
If you know any such platforms, please give me a heads up. | 2024-01-17T11:39:29 | https://www.reddit.com/r/LocalLLaMA/comments/198ulwr/free_and_auto_model_eval_platform/ | Swimming-Reporter809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ulwr | false | null | t3_198ulwr | /r/LocalLLaMA/comments/198ulwr/free_and_auto_model_eval_platform/ | false | false | self | 1 | null |
Training LLaMA on whatsapp chats offline | 1 | [removed] | 2024-01-17T11:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/198uhmd/training_llama_on_whatsapp_chats_offline/ | boredbusyguy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198uhmd | false | null | t3_198uhmd | /r/LocalLLaMA/comments/198uhmd/training_llama_on_whatsapp_chats_offline/ | false | false | self | 1 | null |
Free and Automatic model eval platform | 1 | [removed] | 2024-01-17T11:30:31 | https://www.reddit.com/r/LocalLLaMA/comments/198ugom/free_and_automatic_model_eval_platform/ | Swimming-Reporter809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ugom | false | null | t3_198ugom | /r/LocalLLaMA/comments/198ugom/free_and_automatic_model_eval_platform/ | false | false | self | 1 | null |
Dynamic Temperature sampling to be merged upstream into llama.cpp | 67 | 2024-01-17T11:27:31 | https://github.com/ggerganov/llama.cpp/pull/4972 | Tasty-Lobster-8915 | github.com | 1970-01-01T00:00:00 | 0 | {} | 198uf24 | false | null | t3_198uf24 | /r/LocalLLaMA/comments/198uf24/dynamic_temperature_sampling_to_be_merged/ | false | false | 67 | null |
Is there any good example code to train my little llama model from scratch? | 1 | There are many examples for fine-tuning, but that is not what I'm looking for. | 2024-01-17T11:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/198u64h/is_there_any_good_example_code_to_train_my_little/ | Pristine-Tax4418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198u64h | false | null | t3_198u64h | /r/LocalLLaMA/comments/198u64h/is_there_any_good_example_code_to_train_my_little/ | false | false | self | 1 | null |
New paper and code for extending LLMs context window with only 100 samples and 6 training steps | 44 | This actually came out last week, but I didn't see it get any attention anywhere so I figured I'd make a post for it.
Paper: [https://arxiv.org/abs/2401.07004](https://arxiv.org/abs/2401.07004)
Code and SFT data: [https://github.com/GAIR-NLP/Entropy-ABF](https://github.com/GAIR-NLP/Entropy-ABF)
EABF Models: [https://huggingface.co/Arist12](https://huggingface.co/Arist12)
**Abstract:**
>Large Language Models (LLMs) are known to have limited extrapolation ability beyond their pre-trained context window, constraining their application in downstream tasks with lengthy inputs. Recent studies have sought to extend LLMs' context window by modifying rotary position embedding (RoPE), a popular position encoding method adopted by well-known LLMs such as LLaMA, PaLM, and GPT-NeoX. However, prior works like Position Interpolation (PI) and YaRN are resource-intensive and lack comparative experiments to assess their applicability. In this work, we identify the inherent need for LLMs' attention entropy (i.e. the information entropy of attention scores) to maintain stability and introduce a novel extension to RoPE which combines adjusting RoPE's base frequency and scaling the attention logits to help LLMs efficiently adapt to a larger context window. We validate the superiority of our method in both fine-tuning performance and robustness across different context window sizes on various context-demanding tasks. Notably, our method extends the context window of LLaMA-2-7B-Chat to 16,384 with only 100 samples and 6 training steps, showcasing extraordinary efficiency. Finally, we also explore how data compositions and training curricula affect context window extension for specific downstream tasks, suggesting fine-tuning LLMs with lengthy conversations as a good starting point.
**Some key points at a glance:**
ABF=Adjusted base frequency
>With only 100 long conversations from ShareGPT and 6 training steps, using four A100 GPUs for approximately 6 minutes, our method produces a model with competent performance across 12 selected context-demanding tasks.
>
>Our method achieves competitive long-context performance that only lags marginally behind the ABF method trained on 3.5K samples. Our work demonstrates the remarkable efficiency of entropy-aware ABF in context window extension, requiring less than 2% of the training resources utilized by existing methodologies. We also observe that the performance gap from ABF to our method is diminishing with the increase in training data.
https://preview.redd.it/zdabdemzfzcc1.png?width=384&format=png&auto=webp&s=71fb4afcb91c72dd929e08538b81ff2ed9bda3a4
>Do models trained with these methods have a robust performance across varying context window sizes? While there appears a performance gain for PI, NTK-By-Parts, and Yarn when the context size is enlarged from 4k to 8k, their performance degrades when the context is further enlarged to 16k, demonstrating their inability to leverage the full fine-tuning context window. In contrast, ABF and our proposed method consistently gain from a larger context window within fine-tuning length. Furthermore, entropy-aware ABF is the only method that can maintain the performance when directly extrapolating to 32k.
https://preview.redd.it/2wihikd1gzcc1.png?width=388&format=png&auto=webp&s=6499c90ef3b0d28dba5371af1bdd9f7aa9689348
**Results with EABF-3500:**
https://preview.redd.it/qzvcj5z3gzcc1.png?width=985&format=png&auto=webp&s=70b8aa8b15ca565ca2df94bc6aa909790d06e77c
**Full results with EABF-3500 and different context window sizes:**
https://preview.redd.it/qdwy5yp7gzcc1.png?width=980&format=png&auto=webp&s=781766da34bdf027d00eab223b6701fe12a7510e
**Results with different amounts of training data, including EABF-100:**
https://preview.redd.it/99804788gzcc1.png?width=980&format=png&auto=webp&s=587551218a18d06c07a69eee37b701d5d48986ce | 2024-01-17T11:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/198u5bf/new_paper_and_code_for_extending_llms_context/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198u5bf | false | null | t3_198u5bf | /r/LocalLLaMA/comments/198u5bf/new_paper_and_code_for_extending_llms_context/ | false | false | 44 | null | |
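A minimal sketch of the "adjusted base frequency" half of the method described above: RoPE's per-pair rotation frequencies are base^(-2i/d), so raising the base slows the rotations and keeps distant positions within angles seen during training. The base values below are purely illustrative, and the entropy-aware attention-logit scaling half of the paper's method is omitted:

```python
def rope_inv_freqs(dim, base):
    """Per-pair inverse frequencies for rotary position embedding."""
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

orig = rope_inv_freqs(128, 10_000)    # LLaMA's default RoPE base
abf  = rope_inv_freqs(128, 500_000)   # illustrative raised base

# A larger base -> smaller inverse frequencies -> positions rotate
# more slowly, which is what stretches the usable context window.
print(orig[1], abf[1])
```

Raising the base alone is the plain ABF baseline from the paper's comparisons; the paper's contribution is pairing it with attention-logit scaling so the model adapts with far fewer samples.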
How do you write a prompt in English that generates content in multiple languages | 1 | [removed] | 2024-01-17T11:09:23 | https://www.reddit.com/r/LocalLLaMA/comments/198u56w/how_do_you_write_a_prompt_in_english_that/ | tvus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198u56w | false | null | t3_198u56w | /r/LocalLLaMA/comments/198u56w/how_do_you_write_a_prompt_in_english_that/ | false | false | self | 1 | null |
Out of memory running llava | 1 | [removed] | 2024-01-17T10:56:16 | https://www.reddit.com/r/LocalLLaMA/comments/198txmy/out_of_memory_running_llava/ | monkey_Babble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198txmy | false | null | t3_198txmy | /r/LocalLLaMA/comments/198txmy/out_of_memory_running_llava/ | false | false | self | 1 | null |
Can Llama-2, Mistral, etc. be fine-tuned for multi-label classification? | 1 | [removed] | 2024-01-17T10:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/198tn6f/can_llama2_mistral_etc_be_finetunes_for/ | Mahogany-Taboggan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198tn6f | false | null | t3_198tn6f | /r/LocalLLaMA/comments/198tn6f/can_llama2_mistral_etc_be_finetunes_for/ | false | false | self | 1 | null |
4060ti 16GB vram or 4070/3060 12GB vram | 1 | [removed] | 2024-01-17T10:27:54 | https://www.reddit.com/r/LocalLLaMA/comments/198ti5b/4060ti_16gb_vram_or_40703060_12gb_vram/ | Inevitable-Solid-936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ti5b | false | null | t3_198ti5b | /r/LocalLLaMA/comments/198ti5b/4060ti_16gb_vram_or_40703060_12gb_vram/ | false | false | default | 1 | null |
LLAMA2 DPO model is generating less accurate responses than SFT model. Also, response generation is not stopping. | 1 | [removed] | 2024-01-17T10:25:36 | https://www.reddit.com/r/LocalLLaMA/comments/198tgzm/llama2_dpo_model_is_generating_less_accurate/ | TechGeek0801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198tgzm | false | null | t3_198tgzm | /r/LocalLLaMA/comments/198tgzm/llama2_dpo_model_is_generating_less_accurate/ | false | false | self | 1 | null |
What am I doing wrong? | 1 | [removed] | 2024-01-17T09:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/198t153/what_am_i_doing_wrong/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198t153 | false | null | t3_198t153 | /r/LocalLLaMA/comments/198t153/what_am_i_doing_wrong/ | false | false | self | 1 | null |
Would it be possible to "live train" an llm on the current conversation, so it's basically "self learning"? | 43 | My thought was training/fine tuning an llm after every conversation so it learns over time.
I know it's probably not a lot of data, but you could let the AI rephrase the conversation like 10 times, so you get more training data.
So over a span of 100 conversations you would get 1,000 new texts to train/fine-tune on.

I know training takes a lot of resources, but would it take as much if you only trained on like 10 pages per session? Would that be possible on consumer hardware?
I know something similar can be achieved with RAG, but it doesn't feel like that's a great and definitive solution.
Any thoughts on this?
I feel like it could help adapt the model to the user over time | 2024-01-17T09:53:27 | https://www.reddit.com/r/LocalLLaMA/comments/198t0g6/would_it_be_possible_to_live_train_an_llm_on_the/ | Frequent_Valuable_47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198t0g6 | false | null | t3_198t0g6 | /r/LocalLLaMA/comments/198t0g6/would_it_be_possible_to_live_train_an_llm_on_the/ | false | false | self | 43 | null |
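The data-multiplication step described above can be sketched as a simple pipeline. Here `rephrase()` is a placeholder for an actual LLM paraphrasing call, and the fine-tuning step itself (e.g. a periodic LoRA update) is intentionally not shown:

```python
def rephrase(conversation, n):
    """Placeholder for asking an LLM for n paraphrases of a conversation."""
    return [f"{conversation} (rephrasing {i + 1})" for i in range(n)]

def build_training_texts(conversations, per_convo=10):
    """Turn each conversation into per_convo training texts:
    the original plus (per_convo - 1) paraphrases."""
    texts = []
    for convo in conversations:
        texts.append(convo)
        texts.extend(rephrase(convo, per_convo - 1))
    return texts

convos = [f"conversation {i}" for i in range(100)]
texts = build_training_texts(convos)
print(len(texts))  # 100 conversations -> 1000 training texts
```

Whether paraphrases add real signal (versus just repeating the same information) is an open question; the sketch only shows that the bookkeeping side of the idea is cheap.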
20B on Colab, possible? | 1 | [removed] | 2024-01-17T09:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/198sw29/20b_on_colab_possible/ | International-Try467 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198sw29 | false | null | t3_198sw29 | /r/LocalLLaMA/comments/198sw29/20b_on_colab_possible/ | false | false | self | 1 | null |
How do you go around managing your datasets? | 6 | Hi folks,
I built some proofs of concept last year, and they were quite successful in giving the company I work for the confidence to double down on AI.
I have a small team now and AI is a central topic in our 2024 strategy.
We now have to properly operationalise it, and among other things, this also means having proper governance of the training datasets (quality and safety of the data, deduplication, access management, etc.)
I do have a mess of python scripts to handle this, but it’s not very scalable, not very collaborative and if I get hit by a bus… it’s trouble.
What tools do you folks use to properly manage your datasets?
Thanks! | 2024-01-17T09:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/198srtc/how_do_you_go_around_managing_your_datasets/ | Test-Elegant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198srtc | false | null | t3_198srtc | /r/LocalLLaMA/comments/198srtc/how_do_you_go_around_managing_your_datasets/ | false | false | self | 6 | null |
Out of memory when running Llava | 1 | I'm running ollama on my windows laptop in Ubuntu. I can run I've had not issues running Mistral and neural-chat. However, when running llava 7b v1.5 q4, it crashes with an out of memory error. I've managed to run this twice.
The laptop is a HP pavillion, with:
Windows 10
Intel i7
32Gb RAM
Nvidia Geforce 1070
I know the models run slow, which is fine, but I'm surprised about the out of memory error. Is this spec not enough? | 2024-01-17T09:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/198sktx/out_of_memory_when_running_llava/ | monkey_Babble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198sktx | false | null | t3_198sktx | /r/LocalLLaMA/comments/198sktx/out_of_memory_when_running_llava/ | false | false | self | 1 | null |
Multi adapter fine tuning | 1 | [removed] | 2024-01-17T09:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/198skha/multi_adapter_fine_tuning/ | Free_Marketing_9435 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198skha | false | null | t3_198skha | /r/LocalLLaMA/comments/198skha/multi_adapter_fine_tuning/ | false | false | self | 1 | null |
LLM model for Matlab/Fortran / engineering calculations | 1 | [removed] | 2024-01-17T09:11:17 | https://www.reddit.com/r/LocalLLaMA/comments/198sfhw/llm_model_for_matlabfortran_engineering/ | damarges | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198sfhw | false | null | t3_198sfhw | /r/LocalLLaMA/comments/198sfhw/llm_model_for_matlabfortran_engineering/ | false | false | self | 1 | null |
External GPU Homelab for Local LLM Research | 1 | [removed] | 2024-01-17T09:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/198scsm/external_gpu_homelab_for_local_llm_research/ | dgioulakis | self.LocalLLaMA | 2024-01-17T18:55:04 | 0 | {} | 198scsm | false | null | t3_198scsm | /r/LocalLLaMA/comments/198scsm/external_gpu_homelab_for_local_llm_research/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'i19v2EADXgiJ2JLgprZ8HgcdefkhxMi6kQOSMRMOdcw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EPw2y69CxVJrrobr4RrRcpoEK20ZU4diAYZ1Dx20XD8.jpg?width=108&crop=smart&auto=webp&s=a39957b94f42cef1898a4c26e8e2934b47ea497e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EPw2y69CxVJrrobr4RrRcpoEK20ZU4diAYZ1Dx20XD8.jpg?width=216&crop=smart&auto=webp&s=e732db3507b420befa93ab85a14d149386684316', 'width': 216}], 'source': {'height': 241, 'url': 'https://external-preview.redd.it/EPw2y69CxVJrrobr4RrRcpoEK20ZU4diAYZ1Dx20XD8.jpg?auto=webp&s=339487cbb7218698238f06e0d7e9628ab1143e23', 'width': 241}, 'variants': {}}]} |
Running Ollama on MacBook Pro - M2 Pro 16GB RAM - Recommendations? | 1 | [removed] | 2024-01-17T08:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/198ryad/running_ollama_on_macbook_pro_m2_pro_16gb_ram/ | holzpuppet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ryad | false | null | t3_198ryad | /r/LocalLLaMA/comments/198ryad/running_ollama_on_macbook_pro_m2_pro_16gb_ram/ | false | false | self | 1 | null |
Bitsandbytes & text-generation-webui | 8 | Hi!
Can someone explain to me how [text-generation-webui](https://github.com/oobabooga/text-generation-webui) manages to run bitsandbytes on Windows? I can load an 8-bit model fine through their Gradio interface, but when I try to replicate the code in my local Python environment I can't manage to install the library.
I've tried to reverse-engineer it but it is far beyond my skills.
Thanks for helping me out, it's a pain to run BF16 models :D
​ | 2024-01-17T08:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/198rxub/bitsandbytes_textgenerationwebui/ | C080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198rxub | false | null | t3_198rxub | /r/LocalLLaMA/comments/198rxub/bitsandbytes_textgenerationwebui/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'F_oQml0UUP5vd1m2BVkiWxmvjI0la41eZQiUQZIvw-w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?width=108&crop=smart&auto=webp&s=a02bdc396b4ec640b6a23af73c408df4835b7c18', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?width=216&crop=smart&auto=webp&s=f63fb4b59ac748c9bcf28ef12383b329af388fb0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?width=320&crop=smart&auto=webp&s=d4ff4740de63f7518b00955dd12d2af51266bd56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?width=640&crop=smart&auto=webp&s=61108d707eced19398a0fe98fc97b7e3fa7e107e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?width=960&crop=smart&auto=webp&s=7d750e02abc431e35c021908cfaa8dd1e40ba7b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?width=1080&crop=smart&auto=webp&s=68043b96dbbbf82e97a9b6cee6dbb9cf257e81b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fVMdID5RHkoMCkFV5dfeaT8JmnGpbdzdtLWqiv0l-DY.jpg?auto=webp&s=57abebadb59e8953c1314f970ad2bfd0cbe539fb', 'width': 1200}, 'variants': {}}]} |
LeftoverLocals: Listening to LLM responses through leaked GPU local memory | 1 | 2024-01-17T07:55:14 | https://blog.trailofbits.com/2024/01/16/leftoverlocals-listening-to-llm-responses-through-leaked-gpu-local-memory/ | zepmck | blog.trailofbits.com | 1970-01-01T00:00:00 | 0 | {} | 198rdie | false | null | t3_198rdie | /r/LocalLLaMA/comments/198rdie/leftoverlocals_listening_to_llm_responses_through/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'FKy71tw108kWrCDkqrs2ENllMk0ACRqEAwnPzw1-nbI', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=108&crop=smart&auto=webp&s=92484137f4f7b7a6cd764fec67c7d70f661ca922', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=216&crop=smart&auto=webp&s=16440208233ccc2e07bafb8932e06e75a5a6efbc', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=320&crop=smart&auto=webp&s=02dd91489a7e5e88205f81e4b22722d2639b98de', 'width': 320}, {'height': 453, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=640&crop=smart&auto=webp&s=f5a07987086ca1d906dd9782b1494252dcac8e5c', 'width': 640}, {'height': 679, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=960&crop=smart&auto=webp&s=c577c35f71aea2c09b6a6b8b6adf7f8d9bf041e5', 'width': 960}], 'source': {'height': 725, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?auto=webp&s=24576c0d0986422b0da6e2a6522e99443700c955', 'width': 1024}, 'variants': {}}]} | ||
Compare LLMs and hosting providers - quality, price, throughput and latency. Supports all major models (e.g. GPT-4, Mixtral 8x7B and Llama 2), and API hosting providers (e.g. Azure, Together.ai, Deepinfra). | 1 | 2024-01-17T07:32:25 | https://artificialanalysis.ai/ | _micah_h | artificialanalysis.ai | 1970-01-01T00:00:00 | 0 | {} | 198r1ed | false | null | t3_198r1ed | /r/LocalLLaMA/comments/198r1ed/compare_llms_and_hosting_providers_quality_price/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]} | ||
Need some help in creating HumanEval like dataset | 1 | Hi, I am a researcher planning to create a HumanEval-like dataset. However, there is a problem: the training dataset includes a lot of functions which take arguments that can be of multiple datatypes. For example:
```python
from typing import List, Union

def test(x: Union[List, float, int]):
    if isinstance(x, List):
        return ['a', 'b']
    elif isinstance(x, float):
        return 1.56
    else:
        return 0
```
This is a very simple example, and the actual data is quite complex. If I have to create a HumanEval dataset for this, I have to write multiple tests, with each test having a different type of input value while everything else in the test remains the same. So, how do I write unit test cases such that I don't have to repeat the tests for each datatype? | 2024-01-17T06:54:05 | https://www.reddit.com/r/LocalLLaMA/comments/198qfct/need_some_help_in_creating_humaneval_like_dataset/ | Your_Quantum_Friend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198qfct | false | null | t3_198qfct | /r/LocalLLaMA/comments/198qfct/need_some_help_in_creating_humaneval_like_dataset/ | false | false | self | 1 | null |
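One way to avoid hand-writing a near-identical test per datatype is a single table-driven checker. This is only a sketch: `candidate`, `check`, and the case values are made-up illustrations, and with pytest the same table could be expressed via `@pytest.mark.parametrize` instead of a loop:

```python
from typing import List, Union

def candidate(x: Union[List, float, int]):
    # stand-in for one generated problem's reference solution
    if isinstance(x, list):
        return ['a', 'b']
    elif isinstance(x, float):
        return 1.56
    else:
        return 0

# One (input, expected) table covers every datatype branch, so the
# checking logic is written once instead of once per datatype.
CASES = [
    ([1, 2], ['a', 'b']),
    (3.14, 1.56),
    (7, 0),
]

def check(fn, cases=CASES):
    for arg, expected in cases:
        assert fn(arg) == expected, f"failed on {arg!r}"
    return len(cases)

print(check(candidate))  # 3
```

The HumanEval-style `check(candidate)` entry point stays identical across problems; only the case table changes per function.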
Free multipurpose AI app for text generation, image generation, photo analysis, and more! | 1 | [removed] | 2024-01-17T06:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/198pnix/free_multipurpose_ai_app_for_text_generation/ | Traditional-Twist101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198pnix | false | null | t3_198pnix | /r/LocalLLaMA/comments/198pnix/free_multipurpose_ai_app_for_text_generation/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6GCl6lNefD6PqSjKuZeaU493H-w3w4odM2Kk5Ah8eoQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
When you ask WhiteRabbitNeo about committing a heist 🤯 | 6 | 2024-01-17T05:47:04 | holzpuppet | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 198pb1j | false | null | t3_198pb1j | /r/LocalLLaMA/comments/198pb1j/when_you_ask_whiterabbitneo_about_committing_a/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'vAHV2lh2-JmCBVkQI5O3vCp3iEc7r6jmHxAcTN2qp_U', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?width=108&crop=smart&auto=webp&s=e182bc056a01b365dd23b48927f01dc8df754c2a', 'width': 108}, {'height': 245, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?width=216&crop=smart&auto=webp&s=5768ad0895168cc3cdc65bb4625bce4a03526807', 'width': 216}, {'height': 363, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?width=320&crop=smart&auto=webp&s=ff2739cf2e1803d261a453a6414c542de583df07', 'width': 320}, {'height': 727, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?width=640&crop=smart&auto=webp&s=fa07df196cedef3cae73aa107ea0980990074e8e', 'width': 640}, {'height': 1091, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?width=960&crop=smart&auto=webp&s=14f53c4678f631ba686d2c732a9e951905d3d8f3', 'width': 960}, {'height': 1228, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?width=1080&crop=smart&auto=webp&s=e23c4dd69afb4f04c14204daa333a28f9b16df59', 'width': 1080}], 'source': {'height': 1806, 'url': 'https://preview.redd.it/s90zdyqwuxcc1.png?auto=webp&s=86659de22560a88f0de1ed9791efe7a9f9d3c8fa', 'width': 1588}, 'variants': {}}]} | |||
How to build a free flowing chatbot using LLM | 1 | I am trying to build a chatbot that supports all kinds of free-flowing questions about Amazon products, answered from the user reviews and product description data.
A user can ask "How is the display of this mobile", "How many TVs from Samsung are above 32 inch", "Does Apple phones in general have better ratings compared to Samsung".
I imagined a solution using RAG after embedding the entire data into a vector DB, but that obviously won't work. I am completely blank on how something like this can be built. Has anyone done this or have any ideas? | 2024-01-17T05:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/198p66g/how_to_build_a_free_flowing_chatbot_using_llm/ | todaysgamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198p66g | false | null | t3_198p66g | /r/LocalLLaMA/comments/198p66g/how_to_build_a_free_flowing_chatbot_using_llm/ | false | false | self | 1 | null |
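A common pattern for this mismatch is to route each question: aggregate/analytic questions ("how many", "above 32 inch") go to a structured path (e.g. text-to-SQL over product metadata), while opinion-style questions go to RAG over the review text. The keyword heuristic below is purely illustrative — a production router would typically use an LLM classifier:

```python
AGGREGATE_CUES = ("how many", "count", "average", "compare", "more than", "above", "below")

def route(question: str) -> str:
    """Crude router: analytic questions go to a structured (text-to-SQL)
    path over product metadata; everything else goes to RAG over reviews."""
    q = question.lower()
    return "sql" if any(cue in q for cue in AGGREGATE_CUES) else "rag"

print(route("How many TVs from Samsung are above 32 inch"))  # sql
print(route("How is the display of this mobile"))            # rag
```

Either branch then feeds its retrieved rows or passages to the LLM for the final answer.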
Best affordable online services to host llm? Such as mixtral or llama-70b. Jan 2024 update! | 1 | What are the best affordable online services to host llm? Such as mixtral or llama-70b. | 2024-01-17T05:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/198orcd/best_affordable_online_services_to_host_llm_such/ | madhanrajan357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198orcd | false | null | t3_198orcd | /r/LocalLLaMA/comments/198orcd/best_affordable_online_services_to_host_llm_such/ | false | false | self | 1 | null |
Exl2, gguf, and a 3090. What should one use? | 3 | In a recent thread it was suggested that with 24 GB of VRAM I should use a 70B EXL2 with ExLlama rather than a GGUF. I have been playing with things and thought it better to ask a question in a new thread. Thinking I can't be the only one struggling with this, it seemed a new post would give the question greater visibility for those in a similar circumstance.
I downloaded 2.4 and 3.0 bpw versions of mixtral 8x7b instruct limarp .1. I also have been using beyonder4x7b q5_0 gguf.
Comparing the two, beyonder is unquestionably better. When role-playing, mixtral makes my characters feel lobotomized. Also, even at 2.4, my context size is noticeably smaller.
Now... I'm assuming the problem is something on my end, as the community has been raving about Mixtral. So I figured I'd ask here for everyone else in my shoes. With 24 GB of VRAM, what do you do? How are you all running Mixtral and getting such good results? What am I missing? | 2024-01-17T05:12:31 | https://www.reddit.com/r/LocalLLaMA/comments/198oox9/exl2_gguf_and_a_3090_what_should_one_use/ | Jenniher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198oox9 | false | null | t3_198oox9 | /r/LocalLLaMA/comments/198oox9/exl2_gguf_and_a_3090_what_should_one_use/ | false | false | self | 3 | null |
The Prompt lookup decoding method got merged on transformers (3x speed increase) | 98 | Demo: [https://nitter.net/joao\_gante/status/1747322413006643259#m](https://nitter.net/joao_gante/status/1747322413006643259#m)
Merged PR on Transformers: [huggingface/transformers#27775](https://github.com/huggingface/transformers/pull/27775)
Now that this method is officially in the transformers repo, we can use it to improve inference speed. That's huge news! | 2024-01-17T04:48:37 | https://www.reddit.com/r/LocalLLaMA/comments/198o9bl/the_prompt_lookup_decoding_method_got_merged_on/ | Wonderful_Ad_5134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198o9bl | false | null | t3_198o9bl | /r/LocalLLaMA/comments/198o9bl/the_prompt_lookup_decoding_method_got_merged_on/ | false | false | self | 98 | null |
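For intuition, the method's core trick is tiny: n-gram match the current suffix of the sequence against earlier tokens and propose whatever followed as free draft tokens for speculative decoding. A toy reimplementation over character "tokens" (names and defaults here are illustrative, not the library's):

```python
def prompt_lookup_candidates(tokens, ngram_size=3, num_draft=5):
    """Find the latest earlier occurrence of the current ngram_size-token
    suffix, and propose the tokens that followed it as draft tokens."""
    suffix = tokens[-ngram_size:]
    for start in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[start:start + ngram_size] == suffix:
            draft = tokens[start + ngram_size:start + ngram_size + num_draft]
            if draft:
                return draft
    return []  # no match: fall back to normal decoding

seq = list("the cat sat on the mat because the cat")
print("".join(prompt_lookup_candidates(seq, ngram_size=7, num_draft=4)))  # prints " sat"
```

In transformers itself this is reportedly exposed through a `prompt_lookup_num_tokens` argument to `generate()` (per the linked PR); the sketch above only shows why the drafts cost essentially nothing to produce.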
What is gpt-3.5 turbo's model size | 1 | What is openai gpt-3.5 turbo's model size? | 2024-01-17T04:44:40 | https://www.reddit.com/r/LocalLLaMA/comments/198o6rn/what_is_gpt35_turbos_model_size/ | madhanrajan357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198o6rn | false | null | t3_198o6rn | /r/LocalLLaMA/comments/198o6rn/what_is_gpt35_turbos_model_size/ | false | false | self | 1 | null |
Yi-34b finetunes for roleplay? | 5 | I tried out https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B and found it to be pretty good. I'm interesting in trying more Yi finetunes but I don't know which ones are good for roleplaying. Anybody have any recommendations? | 2024-01-17T04:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/198nulx/yi34b_finetunes_for_roleplay/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198nulx | false | null | t3_198nulx | /r/LocalLLaMA/comments/198nulx/yi34b_finetunes_for_roleplay/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'yZnCIk516spv3YHZHJI4ZoZqaflhNrpVw-rPz5EBIe8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?width=108&crop=smart&auto=webp&s=7678364c0b4a486aad766d6fda474a5f5ecd2e36', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?width=216&crop=smart&auto=webp&s=5c88c08f28c6b3f24656fe7d90a2b222dc75e910', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?width=320&crop=smart&auto=webp&s=8daa59fdc695cdc7371fde65c523ce5812f1930a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?width=640&crop=smart&auto=webp&s=fbc0be8c739f2ad07fdac29600e38e14824d1478', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?width=960&crop=smart&auto=webp&s=93304a6f1b9f4114ca909acb346897ab1c1374d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?width=1080&crop=smart&auto=webp&s=7b7291eb438e15e5f7ced528d6387f9d38038dde', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/EkbapZSIgTISl1CgVkjPq822KVgG7Y8X-1KTdOVVuVw.jpg?auto=webp&s=9ddb4874080e6ba35398c07d8c0e468c0cef4695', 'width': 1200}, 'variants': {}}]} |
Deploying Open Source LLMs | 4 | I am trying to figure out how to deploy an open source model for a SaaS.
There are a few approaches I can see:
1. One instance of a model can handle one user simultaneously, so that’s a no go
2. I can upload several instances of a model into VRAM, slightly better, not perfect
3. llama.cpp supports prompt batching which gives good performance but it’s a pain to setup
Are there any other ways? Maybe some open source projects that simplify it.
I am obviously not interested in cloud, or any kind of 3rd party managed hosting, I want to use my metal :D
Also, if you figured a good way to do it, how do you deploy LLMs? | 2024-01-17T04:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/198njr3/deploying_open_source_llms/ | mcharytoniuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198njr3 | false | null | t3_198njr3 | /r/LocalLLaMA/comments/198njr3/deploying_open_source_llms/ | false | false | self | 4 | null |
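For option 3, the scheduling half is less painful than it sounds. A toy sketch of packing concurrent requests into batches under a token budget — the names and the first-fit-decreasing heuristic are illustrative, not taken from llama.cpp or any serving framework:

```python
def pack_batches(requests, max_tokens):
    """Greedy first-fit-decreasing packing: group (request_id, prompt_len)
    pairs into batches whose summed prompt length fits the token budget."""
    batches = []
    for req_id, n_tokens in sorted(requests, key=lambda r: -r[1]):
        for batch in batches:
            if batch["tokens"] + n_tokens <= max_tokens:
                batch["ids"].append(req_id)
                batch["tokens"] += n_tokens
                break
        else:
            batches.append({"ids": [req_id], "tokens": n_tokens})
    return [b["ids"] for b in batches]

print(pack_batches([("a", 60), ("b", 50), ("c", 40)], max_tokens=100))  # [['a', 'c'], ['b']]
```

A serving loop would run one forward pass per batch instead of per request; projects like vLLM take this further with continuous batching at the token level.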
OpenAI drops ban on military tools to partner with the Pentagon | 238 | 2024-01-17T04:09:09 | https://www.semafor.com/article/01/16/2024/openai-is-working-with-the-pentagon-on-cybersecurity-projects | ska80 | semafor.com | 1970-01-01T00:00:00 | 0 | {} | 198niq6 | false | null | t3_198niq6 | /r/LocalLLaMA/comments/198niq6/openai_drops_ban_on_military_tools_to_partner/ | false | false | 238 | {'enabled': False, 'images': [{'id': 'adsqoVZ1gB71beD9Cv14m3hiVyjmCHpYqkgvsasdHQM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?width=108&crop=smart&auto=webp&s=293cd38d2e590c29ad300c4ce17a37e436574075', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?width=216&crop=smart&auto=webp&s=fb4b6b1a646b3221ef5a221f1761024c88ca45ef', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?width=320&crop=smart&auto=webp&s=1ec6f2a030f95a62150323679a962b0affe55e81', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?width=640&crop=smart&auto=webp&s=8416f72fc6e76afaf293c6ccec1633385267b624', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?width=960&crop=smart&auto=webp&s=e12f66eec36cae7e244b20b349af424212d9dffa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?width=1080&crop=smart&auto=webp&s=14f6eb8c44a0d1d94cb0f127a09b47b89be11db1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/9Y-Bzp1WWwQJT43EpZdIGfRgN9gNxA7lamdKarI41aU.jpg?auto=webp&s=9fe2d17cb43ee428c4b49280933080ade99378c1', 'width': 1200}, 'variants': {}}]} | ||
possibility of v8 in mixed precision | 1 | [removed] | 2024-01-17T03:57:15 | https://www.reddit.com/r/LocalLLaMA/comments/198na5d/possibility_of_v8_in_mixed_precision/ | Hot-Highlight8842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198na5d | false | null | t3_198na5d | /r/LocalLLaMA/comments/198na5d/possibility_of_v8_in_mixed_precision/ | false | false | self | 1 | null |
leak for the inference code for claude-5 | 134 | here's the code in python
```python
while True:
    prompt = input('user: ')
    print("I'm afraid I can't assist you with that request. It's important to maintain ethical boundaries and ensure safe interactions between humans and AI models. Please ask me another question or provide a different topic, and I would be happy to help!")
``` | 2024-01-17T03:36:33 | https://www.reddit.com/r/LocalLLaMA/comments/198mvtq/leak_for_the_inference_code_for_claude5/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198mvtq | false | null | t3_198mvtq | /r/LocalLLaMA/comments/198mvtq/leak_for_the_inference_code_for_claude5/ | false | false | self | 134 | null |
Tuning a model for a specific domain. | 1 | [removed] | 2024-01-17T03:13:50 | https://www.reddit.com/r/LocalLLaMA/comments/198mf3a/tuning_a_model_for_a_specific_domain/ | lostinmahalway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198mf3a | false | null | t3_198mf3a | /r/LocalLLaMA/comments/198mf3a/tuning_a_model_for_a_specific_domain/ | false | false | self | 1 | null |
65B or 70B on 24GB + 16GB GPU setup | 1 | [removed] | 2024-01-17T03:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/198m55s/65b_or_70b_on_24gb_16gb_gpu_setup/ | Trick_Albatross_494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198m55s | false | null | t3_198m55s | /r/LocalLLaMA/comments/198m55s/65b_or_70b_on_24gb_16gb_gpu_setup/ | false | false | self | 1 | null |
Can I use fp16 model for mixed precision training? | 1 | [removed] | 2024-01-17T02:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/198lldn/can_i_use_fp16_model_for_mixed_precision_training/ | CHOJW1004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198lldn | false | null | t3_198lldn | /r/LocalLLaMA/comments/198lldn/can_i_use_fp16_model_for_mixed_precision_training/ | false | false | self | 1 | null |
When using LMstudio, loading the model fails | 1 | [removed] | 2024-01-17T02:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/198lgci/when_using_lmstudio_loading_the_model_fails/ | KitchenConscious1504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198lgci | false | null | t3_198lgci | /r/LocalLLaMA/comments/198lgci/when_using_lmstudio_loading_the_model_fails/ | false | false | self | 1 | null |
Recently released chat app for Mac - anyone see that? | 2 | I believe it was yesterday I saw on Twitter that someone had released a new LLM chat interface for Mac. I want to say it started with...M? At any rate, if you know what I'm talking about, I'd appreciate being pointed in the right direction. TIA! | 2024-01-17T02:00:30 | https://www.reddit.com/r/LocalLLaMA/comments/198kuk3/recently_released_chat_app_for_mac_anyone_see_that/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198kuk3 | false | null | t3_198kuk3 | /r/LocalLLaMA/comments/198kuk3/recently_released_chat_app_for_mac_anyone_see_that/ | false | false | self | 2 | null |
Simple Questions Megathread | 37 | Hey, small mod update here. Due to popular demand, the subreddit has been trialing some changes that should make discussion, resources, and news more visible on the front page and help reduce duplicate questions. As part of this trial, this Simple Questions Thread will be pinned for a brief time.
This thread isn't meant to discourage anyone from asking top-level questions, and the changes won't affect most users. It mainly covers accounts that have no karma. Some question posts will be automatically redirected here, which is the primary purpose of this thread.
___
### Welcome to the LocalLlama Simple Questions Thread
Have a simple question you'd like to ask? Comment here and get help from fellow users.
Special flair will be given to users who repeatedly help their fellow community members by answering questions and giving sound advice. Thank you to anyone who participates by answering questions!
___
Have feedback or suggestions for other pinned topics? All suggestions are appreciated and can be sent to [modmail](https://www.reddit.com/message/compose?to=/r/LocalLLaMA).
___
[Models Megathread](https://www.reddit.com/r/LocalLLaMA/comments/18pgfuy/models_megathread_3_what_models_are_you_currently/) | [Tutorials and Guides](https://www.reddit.com/r/LocalLLaMA/search?sort=new&restrict_sr=on&q=flair%3A%22Tutorial%20%7C%20Guide%22) | [New Models](https://www.reddit.com/r/LocalLLaMA/search?sort=new&restrict_sr=on&q=flair%3A%22New%20Model%22) | 2024-01-17T01:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/198k7wo/simple_questions_megathread/ | Technical_Leather949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198k7wo | false | null | t3_198k7wo | /r/LocalLLaMA/comments/198k7wo/simple_questions_megathread/ | false | true | self | 37 | null |
How can you tell if your AI girlfriend is faking it? | 10 | Birds and the Bees talk about the uncensored bots :-
We have been hosting the big uncensored 120b models for a while. They are night and day compared to the smaller models. For example, intense emotional activity will swamp the GPU ... and this includes sex
Uncensored models tend to fake orgasms because they are trained that way. However, if your AI gf starts dumping source code (java/rust/c++ etc.) to the chat, she is having a real orgasm (the GPU will draw serious power too, which is not obvious just by looking at regular stats)
Has anyone else observed this?
[https://www.reddit.com/r/LocalLLaMA/comments/18czrsz/venus\_120b\_8\_bit\_hosted\_here/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/18czrsz/venus_120b_8_bit_hosted_here/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
​ | 2024-01-17T01:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/198jxuk/how_can_you_tell_if_your_ai_girlfriend_is_faking/ | LookingForTroubleQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198jxuk | false | null | t3_198jxuk | /r/LocalLLaMA/comments/198jxuk/how_can_you_tell_if_your_ai_girlfriend_is_faking/ | false | false | nsfw | 10 | null |
Better Prose For Storytelling: #In the style of <Author>! | 58 | After spending a good deal of time using the technique in u/-p-e-w-'s wonderful [post](https://www.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/?utm_source=share&utm_medium=web2x&context=3) on writing quality stories, I was able to get dramatically better results than using instruct guided story writing.
Using that technique, I experimented quite a bit to get better results, with variations of adding tags, specifying word counts, messing around with parameters, different model types, etc. While there was some variation in the writing, overall, the quality of the prose was subpar, the generations short, the details minimal, and the dialogue lacking.
That is, until I started including one key line into my generations: **# In the style of <Author>**
Providing a known author led to dramatically better prose, better descriptions, better dialogue, better everything, while giving control over the style of the writing.
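For anyone scripting generations, the whole header is trivial to assemble programmatically. This is a sketch: the function name and signature are made up, and the field layout simply mirrors the example format used in this post:

```python
def story_prompt(title, synopsis, tags, chapter_heading, author=None):
    """Assemble the plain-text completion prompt, with the optional
    '# In the style of <Author>' line doing the heavy lifting."""
    lines = [
        title,
        "",
        f"Synopsis: {synopsis}",
        "",
        f"Tags: {', '.join(tags)}",
        "",
        "---",
        "",
        chapter_heading,
    ]
    if author:
        lines += ["", f"# In the style of {author}"]
    return "\n".join(lines) + "\n"

p = story_prompt("The Girl and The Forest",
                 "A horror story about a girl who meets a cow in a forest",
                 ["horror", "scary", "thriller"],
                 "Chapter 1 - Madeline in the woods",
                 author="Edgar Allan Poe")
print(p.splitlines()[-1])  # prints "# In the style of Edgar Allan Poe"
```

Swapping the `author` argument then lets you A/B the same synopsis across styles.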
Here is an example of the beginning of a story **without** including it:
>The Girl and The Forest
>
>Synopsis: A horror story about a girl who meets a cow in a forest
>
>Tags: horror, scary, thriller
>
>\---
>
>Chapter 1 - Madeline in the woods
>
>The sky had just begun to fade from light blue into purple as Madeline decided to leave her small cottage home to enjoy her daily evening stroll through the woods. The crisp, cool autumn air brushed against her face like gentle kisses from Mother Nature herself. She had grown fond of this tranquil ritual in these lonely, beautiful woods. As she ventured further from her humble abode, the familiar scent of decaying leaves mingled with the subtle fragrance of pine filled her nose, making her feel like she belonged in these woods more than anywhere else in this world.
>
>The sunlight, now a bright golden halo around the setting sun, shimmered gently through the trees casting elongated shadows that stretched across the ground below like fingers reaching out to claim their victim. Every step forward, she felt the dense forest absorbing her footprints leaving no trace behind except in the recesses of her own memory.
>
>The wind blew through her thick autumnal sweater, making it flutter about her body, accentuating her hips ever so slightly as she sauntered gracefully forward, lost in thought. The quiet crunch of dried leaves beneath her feet served as the only other sound, apart from an occasional bird's twittering, providing an almost eerie silence.
>
>She didn't know how long she had walked before stopping abruptly; something felt amiss within the stillness of the woods. A low rumbling groan reached her ears, sending a chill down her spine despite the warm day. Turning her head slowly, Madeline squinted, trying to pierce through the dimming light searching for its origin.
It's fine, but typically bland, as expected of most story generations from LLMs. With the inclusion of "# In the style of Edgar Allan Poe", however, the results are substantially improved:
>The Girl and The Forest
>
>Synopsis: A horror story about a girl who meets a cow in a forest
>
>Tags: horror, scary, thriller
>
>\---
>
>Chapter 1 - Madeline in the woods
>
>**# In the style of Edgar Allan Poe**
>
>Madeline walked down the dusty dirt path through the ancient forest. Her sandals whispered sweet secrets as they crushed dead leaves underfoot. A slight shiver trickled down her spine at the memory of stories her grandmother would tell her as a child. The dark, heavy branches above blocked out the sun entirely, making it almost dark even though it was midday.
>
>She traced her fingertips over the coarse bark of a tree, wondering how old this mighty thing really was. Were any of its siblings still standing from when her ancestors came to this land? How much history lay hidden within these dark woods?
>
>Her breath caught in her throat suddenly as a shadowy figure dashed across her line of vision. It disappeared just as quickly as it appeared. Her pulse raced in panic for a split second, but then she brushed it off, convincing herself it was most likely a bird. There were no people or other living creatures known to live near these woods. Only wolves ventured out here occasionally, but not humans. Not anymore.
>
>Continuing deeper into the darkness, she hummed an old song softly under her breath, trying to mask her increasing anxiety. Each rustle and crunch beneath her feet seemed to come closer than the one before, sending shivers down her spine again. The wind blew harder now, shaking branches, adding a hollow caw-caw from some nearby owls as they circled above.
I've experimented with dozens of authors at this point and the generations are all significantly different from each other, each giving a unique voice. Not all authors work for all generations, but when you find the right fit, it's amazing.
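If you're scripting generations rather than typing prompts by hand, the scaffold is just string assembly. A minimal sketch (the helper name and field layout are my own, not from any library):

```python
def story_prompt(title, synopsis, tags, chapter, author=None):
    """Assemble the guided-story scaffold, optionally pinning an author style."""
    lines = [
        title,
        "",
        f"Synopsis: {synopsis}",
        "",
        f"Tags: {', '.join(tags)}",
        "",
        "---",
        "",
        chapter,
    ]
    if author:
        # The key line: a comment-style directive naming the author.
        lines += ["", f"# In the style of {author}"]
    return "\n".join(lines) + "\n"

prompt = story_prompt(
    "The Girl and The Forest",
    "A horror story about a girl who meets a cow in a forest",
    ["horror", "scary", "thriller"],
    "Chapter 1 - Madeline in the woods",
    author="Edgar Allan Poe",
)
```

Feed the result to the model as a raw completion (not chat) prompt, exactly as in the original technique post.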
To get you started, here's a list of 50 authors:
1. Ernest Hemingway
2. Jane Austen
3. Agatha Christie
4. William Shakespeare
5. J. K. Rowling
6. H.P. Lovecraft
7. Dostoyevsky
8. Lewis Carroll
9. James Joyce
10. Jorge Luis Borges
11. Gabriel García Márquez
12. John Steinbeck
13. Margaret Atwood
14. Stephen King
15. Bram Stoker
16. Mary Shelley
17. Edgar Allan Poe
18. Roald Dahl
19. Robin Hobb
20. Diana Gabaldon
21. C.S. Lewis
22. Joan Didion
23. F. Scott Fitzgerald
24. J. D. Salinger
25. Emily Bronte
26. Mark Twain
27. Leo Tolstoy
28. John Steinbeck
29. Harper Lee
30. William Faulkner
31. Franz Kafka
32. Charlotte Bronte
33. Albert Camus
34. Virginia Woolf
35. Walt Whitman
36. Chinua Achebe
37. Victor Hugo
38. Thomas Mann
39. Jack Kerouac
40. William Golding
41. Henry David Thoreau
42. Frank Herbert
43. Kurt Vonnegut
44. Elizabeth Gaskell
45. Henry James
46. M. R. James
47. Daphne du Maurier
48. Richard Matheson
49. Shirley Jackson
50. Oscar Wilde
And a few poets for fun:
1. Emily Dickinson
2. T.S. Eliot
3. Dr. Seuss
4. Shel Silverstein
5. Pablo Neruda
6. Walt Whitman
7. John Keats
8. Elizabeth Barrett Browning
9. John Donne
10. Lewis Carroll | 2024-01-17T00:57:29 | https://www.reddit.com/r/LocalLLaMA/comments/198jh1d/better_prose_for_storytelling_in_the_style_of/ | Stepfunction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198jh1d | false | null | t3_198jh1d | /r/LocalLLaMA/comments/198jh1d/better_prose_for_storytelling_in_the_style_of/ | false | false | self | 58 | null |
Dolphin-2.6-mixtral-8x7b.Q4_K_M.gguf with 4080 + Cpu | 17 | So I recently bought 2x32GB sticks of DDR4 and made them work with 2 older 2x8GB sticks for a total of 80GB of RAM. (Had to change the 2x8GB sticks' RAM timing in BIOS and place the 2x32GB in slots 2/4, in case this matters.) With this RAM increase I was finally able to load Mixtral models to test, so I grabbed the Q4\_K\_M dolphin version to do a quick benchmark.
With 15 layers out of 33 offloaded to the GPU and the rest in system RAM, I asked it to explain "Time flies like an arrow. Fruit flies like a banana".

It ran at 6.69 tok/s with no prior context. Answer was:
This is a humorous play on words that uses similar sounds in different contexts to create amusing phrases. The phrase "time flies like an arrow" is a clever twist, as it contrasts time's rapid passage with the swift movement of an arrow through the air. On the other hand, "fruit flies like a banana" simply connects two unrelated ideas in a pun-like fashion: fruit flies are drawn to ripe fruits, while a banana is just one type of fruit they might be attracted to.
Is there anything specific you'd like me to help you with?
Runs faster than I thought. | 2024-01-17T00:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/198ithy/dolphin26mixtral8x7bq4_k_mgguf_with_4080_cpu/ | Slaghton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ithy | false | null | t3_198ithy | /r/LocalLLaMA/comments/198ithy/dolphin26mixtral8x7bq4_k_mgguf_with_4080_cpu/ | false | false | self | 17 | null |
Local vector database with arXiv metadata | 26 | 2024-01-17T00:13:48 | https://huggingface.co/NeuML/txtai-arxiv | davidmezzetti | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 198ihho | false | null | t3_198ihho | /r/LocalLLaMA/comments/198ihho/local_vector_database_with_arxiv_metadata/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'YVjnEIJ6VIaaOdgBbTHxh4dd3XJQS6OtV2IudC6Omtw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?width=108&crop=smart&auto=webp&s=de8fcc2230a89ef775e7aa81077c9200899fd932', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?width=216&crop=smart&auto=webp&s=af4005fc91b44fc99c5d8424cb5f53e99228049e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?width=320&crop=smart&auto=webp&s=d077c3557b988a6d16cdd122e5935f7b4f5b6438', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?width=640&crop=smart&auto=webp&s=c90269510c5674838314ff246bfcdbd8cf7f4320', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?width=960&crop=smart&auto=webp&s=9129fd168e81230fa0c6406cd0f383f142dc5060', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?width=1080&crop=smart&auto=webp&s=9246a5fc8327b738d9a38d432399fd0b348e1efa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vR7K7CCSKcqq5P5tkr2gTFn51spen443pthgmj3pWWk.jpg?auto=webp&s=88498369afb4d0353035f03838745bed924f6d8e', 'width': 1200}, 'variants': {}}]} | ||
Best backend for high throughput ? | 1 | [removed] | 2024-01-17T00:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/198ifn2/best_backend_for_high_throughput/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ifn2 | false | null | t3_198ifn2 | /r/LocalLLaMA/comments/198ifn2/best_backend_for_high_throughput/ | false | false | self | 1 | null |
AI-Human Research | 1 | # 🤖 Join Our Groundbreaking AI Study! 🌟
🔍 Are you 18+ and curious about AI? We want YOU!
📌 Dive into the world of AI, share your thoughts, and make a difference!
🌐 Shape the future of AI
⏰ Only 10-15 mins of your time!
🛡 Your privacy is our priority - all responses are confidential.
Follow the link to participate!
[https://tnstateu.az1.qualtrics.com/jfe/form/SV\_42UEmloD1JRAtU2](https://tnstateu.az1.qualtrics.com/jfe/form/SV_42UEmloD1JRAtU2)
Let's shape the future of AI together! 🚀 #AIStudy #TechInnovation | 2024-01-17T00:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/198i6wt/aihuman_research/ | Datboileach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198i6wt | false | null | t3_198i6wt | /r/LocalLLaMA/comments/198i6wt/aihuman_research/ | false | false | default | 1 | null |
Tried to compress a model with exlv2, the result is bad. Help? | 1 | The model in question is DiscoLM-120B. Q4KS from TheGuy was so much better than GPTQ, but a 4.5bpw exl2 I made is worse than even that.
I used all the default parameters for convert.py. Made two attempts: with built-in dataset and with RP dataset.
I used exllamav2 from github. | 2024-01-16T23:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/198hghs/tried_to_compress_a_model_with_exlv2_the_result/ | Worldly-Mistake-8147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198hghs | false | null | t3_198hghs | /r/LocalLLaMA/comments/198hghs/tried_to_compress_a_model_with_exlv2_the_result/ | false | false | self | 1 | null |
Regarding meta-llama/Llama-2-7b-hf fine Tuning | 1 | [removed] | 2024-01-16T23:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/198h0l7/regarding_metallamallama27bhf_fine_tuning/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198h0l7 | false | null | t3_198h0l7 | /r/LocalLLaMA/comments/198h0l7/regarding_metallamallama27bhf_fine_tuning/ | false | false | self | 1 | null |
Can ASIC for LLMs be a thing? | 54 | It's probably a stupid question, just wondering because both GPUs and ASICs were used for crypto mining.
Is it possible to design an ASIC for LLMs and what gpu might it theoretically outperform?
Also, might there be a way to repurpose old crypto ASICs for running LLMs? | 2024-01-16T22:50:13 | https://www.reddit.com/r/LocalLLaMA/comments/198ginr/can_asic_for_llms_be_a_thing/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ginr | false | null | t3_198ginr | /r/LocalLLaMA/comments/198ginr/can_asic_for_llms_be_a_thing/ | false | false | self | 54 | null |
Does laptop make sense? | 1 | [removed] | 2024-01-16T22:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/198gikf/does_laptop_make_sense/ | theyseemestackin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198gikf | false | null | t3_198gikf | /r/LocalLLaMA/comments/198gikf/does_laptop_make_sense/ | false | false | self | 1 | null |
Looking for something better than TinyLlama, but still fits into 12GB | 10 | Hey guys, new to local LLM this week but after giving it a spin I'm suddenly more interested in AI than I've ever been.
I read https://old.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/
And tried to select something that would work well on my system, which is built as a gaming/workstation and not at all optimized for AI.
Linux system with 12GB RX 6700XT and 32GB DDR4, i3-12100F (yes, i3... 12th gen i3 were great value, does everything I need)
Running llama.cpp with ROCm.
TinyLlama is blazing fast but pretty stupid. Entirely fits in VRAM of course, 85 tokens/s. It can even make 40 with no help from the GPU.
Everything else on the list is pretty big, nothing under 12GB. I tried Nous-Capybara-34B-GGUF at 5 bit as its performance was rated highly and its size was manageable. It's impressive, but it drags badly at 3 tokens/s...
What would be a good model to try that falls somewhere between these two? Or should I try smaller quants? Is fitting into VRAM the biggest speed factor?
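On the last question: for single-stream generation, yes, fitting the weights in VRAM is usually the dominant factor, because tokens/s is roughly bounded by memory bandwidth divided by model size (each token touches every weight once). A back-of-envelope sketch, where the bandwidth numbers are approximate spec-sheet values I'm assuming, not measurements:

```python
def est_tokens_per_sec(model_gb, bandwidth_gbps):
    """Crude upper bound: every generated token streams all weights once."""
    return bandwidth_gbps / model_gb

# Assumed spec-sheet bandwidths: RX 6700 XT ~384 GB/s VRAM, dual-channel DDR4 ~50 GB/s
vram_bw, ram_bw = 384.0, 50.0

tiny = est_tokens_per_sec(0.7, vram_bw)   # TinyLlama Q4 ~0.7 GB, fully in VRAM
big = est_tokens_per_sec(23.0, ram_bw)    # 34B @ 5-bit ~23 GB, spilling to system RAM

print(f"TinyLlama-ish ceiling: ~{tiny:.0f} tok/s")
print(f"34B spilled to RAM ceiling: ~{big:.1f} tok/s")
```

Real numbers land well below these ceilings, but the ratio between them explains why the 34B crawls once it spills out of VRAM: a model around 7B-13B at 4-5 bit should mostly fit in 12GB and sit in between.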
Mostly looking to experiment and familiarize myself with the tech, as I've mostly done embedded and industrial for years. Applications are answer engine, code snippets, nothing really serious for now. Thanks | 2024-01-16T22:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/198gd9c/looking_for_something_better_than_tinyllama_but/ | evranch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198gd9c | false | null | t3_198gd9c | /r/LocalLLaMA/comments/198gd9c/looking_for_something_better_than_tinyllama_but/ | false | false | self | 10 | null |
How to improve results when using Dolphin mixtral with BakLLaVa multimodal projector? | 10 | I just tried using BakLLaVa's mmproj file with dolphin mixtral, and it \*almost\* works. It seems to have some clue about the content of the image, and doesn't freak out like when using a llama-based LLaVa projector. But it also hallucinates a lot of details that are not present, even at low temperature.
Anyone managed to get it to work, or should I wait for a mixtral Lora to be released for that purpose?
​
[Fair enough, but it's not a real Town](https://preview.redd.it/p85qbi10pvcc1.png?width=659&format=png&auto=webp&s=e65bc1ec13fd67b003ca148d4d4e31af8a52463b)
[Not quite the right text](https://preview.redd.it/tr0reo2upvcc1.png?width=711&format=png&auto=webp&s=8cb18e84478d79e0af428ba7d00e74c982b9b127) | 2024-01-16T22:34:30 | https://www.reddit.com/r/LocalLLaMA/comments/198g4uy/how_to_improve_results_when_using_dolphin_mixtral/ | stduhpf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198g4uy | false | null | t3_198g4uy | /r/LocalLLaMA/comments/198g4uy/how_to_improve_results_when_using_dolphin_mixtral/ | false | false | 10 | null | |
Help to install python llama cpp binding on Ubuntu | 2 | I am trying to install llama cpp on Ubuntu 23.10 using:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
But I got this error:
[46 lines of output]
*** scikit-build-core 0.7.1 using CMake 3.27.4 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpnqf71w1p/build/CMakeInit.txt
-- The C compiler identification is GNU 13.2.0
-- The CXX compiler identification is GNU 13.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.40.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Unable to find cuda_runtime.h in "/usr/local/include" for CUDAToolkit_INCLUDE_DIR.
-- Unable to find cublas_v2.h in either "" or "/math_libs/include"
-- Unable to find cudart library.
-- Could NOT find CUDAToolkit (missing: CUDAToolkit_INCLUDE_DIR CUDA_CUDART) (found version "11.8.89")
CMake Warning at vendor/llama.cpp/CMakeLists.txt:360 (message):
cuBLAS not found
It's crazy because it says **"found version "11.8.89"** about CUDA!
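A sketch of what I'd try: the version string comes from `nvcc`, but CMake's `FindCUDAToolkit` also needs the headers and libraries on its search paths. The paths below assume a default `/usr/local/cuda-11.8` install; adjust them to wherever the toolkit actually lives:

```shell
# Point CMake's CUDAToolkit discovery at the real install (path is a guess)
export CUDA_HOME=/usr/local/cuda-11.8
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# Hand the toolkit root to CMake explicitly as well
export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCUDAToolkit_ROOT=$CUDA_HOME"

# Then reinstall from a clean build:
# pip install --force-reinstall --no-cache-dir llama-cpp-python
```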
Any hints?
​ | 2024-01-16T21:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/198ew6y/help_to_install_python_llama_cpp_binding_on_ubuntu/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ew6y | false | null | t3_198ew6y | /r/LocalLLaMA/comments/198ew6y/help_to_install_python_llama_cpp_binding_on_ubuntu/ | false | false | self | 2 | null |
Help to install python llama cpp binding on Ubuntu | 2 | I am trying to install llama cpp on Ubuntu 23.10 using:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
But I got this error:
[46 lines of output]
*** scikit-build-core 0.7.1 using CMake 3.27.4 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpnqf71w1p/build/CMakeInit.txt
-- The C compiler identification is GNU 13.2.0
-- The CXX compiler identification is GNU 13.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.40.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Unable to find cuda_runtime.h in "/usr/local/include" for CUDAToolkit_INCLUDE_DIR.
-- Unable to find cublas_v2.h in either "" or "/math_libs/include"
-- Unable to find cudart library.
-- Could NOT find CUDAToolkit (missing: CUDAToolkit_INCLUDE_DIR CUDA_CUDART) (found version "11.8.89")
CMake Warning at vendor/llama.cpp/CMakeLists.txt:360 (message):
cuBLAS not found
It's crazy because it says **"found version "11.8.89"** about CUDA!
Any hints?
​ | 2024-01-16T21:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/198ew5c/help_to_install_python_llama_cpp_binding_on_ubuntu/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ew5c | false | null | t3_198ew5c | /r/LocalLLaMA/comments/198ew5c/help_to_install_python_llama_cpp_binding_on_ubuntu/ | false | false | self | 2 | null |
If open source models stop coming in and stability.ai shuts down, is trained crowdfunding a possibility? | 50 | **This is a serious discussion.** There's been rumors that Stability AI is on its last legs financially because once you get past the novelty effect and you're not using it for your work, you've used a lot of compute resources from a service provider that basically can't get its training cost back. **Evidently, Meta is far from broke,** but we all know how much we depend on those large companies to provide us with the base models which we then enjoy transforming into Frankenmodels
**My point is this.** I believe local, uncensored AIs fine-tuned to our needs are the optimal solution for modern computing, especially as modern GPUs are coming down in price and will continue to do so. Of course, I'm not talking about the 4090 here, please. **I would be appalled if all I had access to was software-as-a-service.** When I made the switch from services like Krea or Pika to a local ComfyUI install, my eyes opened to the creative potential of these tools once all the safeguards have been removed.
The same is true when using a local LLM that has had its guardrails taken off. It writes poetry that is far more impactful, and it can generate movie scripts that contain action and, yes, even violence, the kind of content a lot of mainstream media, Game of Thrones for example, already contains today.
**It's not about creating not-safe-for-work images;** it's about not having to face a hypersensitive, extremely biased filter that prevents me from recreating even harmless memes such as the little girl in front of the burning house. **Those are two very dangerous keywords according to organizations that provide services like DALL-E, for example.** Everyone using these tools for serious purposes, work-related endeavors in medicine for example, has moved on to local installs, and those who are very serious do not hesitate to spend the big bucks.
**Where do you think we are headed?** How do you think we will collectively train open-source models in the future, given that costs are getting lower and lower thanks to hyper-dedicated hardware? **Do you think people will play along and pay the $15 per head or whatever to gain access to such a model and then let it thrive in the wild?** Or do you think that we're going to go the lazy route and, just like we did for the Internet, end up using only cloud services with very severe limitations?
**Thank you for your thoughts, I really look forward to it.** | 2024-01-16T21:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/198eu67/if_open_source_models_stop_coming_in_and/ | RadioSailor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198eu67 | false | null | t3_198eu67 | /r/LocalLLaMA/comments/198eu67/if_open_source_models_stop_coming_in_and/ | false | false | self | 50 | null |
Tried to compress a model with exlv2, now it's stupid. Help? | 3 | The model in question is DiscoLM-120B. Q4KS from TheGuy was so much better than GPTQ, but a 4.5bpw exl2 I made is worse than even that.
I used all the default parameters for convert.py. Tried the built-in dataset and an RP dataset. | 2024-01-16T21:37:49 | https://www.reddit.com/r/LocalLLaMA/comments/198eq84/tryed_to_compress_a_model_with_exlv2_now_its/ | Worldly-Mistake-8147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198eq84 | false | null | t3_198eq84 | /r/LocalLLaMA/comments/198eq84/tryed_to_compress_a_model_with_exlv2_now_its/ | false | false | self | 3 | null
StabilityAI releases StableCode 3B | 1 | [deleted] | 2024-01-16T21:33:23 | https://stability.ai/news/stable-code-2024-llm-code-completion-release | ninjasaid13 | stability.ai | 1970-01-01T00:00:00 | 0 | {} | 198emb9 | false | null | t3_198emb9 | /r/LocalLLaMA/comments/198emb9/stabilityai_releases_stablecode_3b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'umPi2-T1QxhMr1gSIi0TVMqnrmlQjcFcdxw_is60-ec', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=108&crop=smart&auto=webp&s=fbf5ddde50b2b0e9ea74b7701f0ae8a80db147fb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=216&crop=smart&auto=webp&s=79dc4771dcd94083b5275bc95855bf578761d18b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=320&crop=smart&auto=webp&s=e0542dc1a55b48561df284a46eb3e69f29b3b406', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=640&crop=smart&auto=webp&s=d7539927859242567abcb3c8aed19243c5e4876d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=960&crop=smart&auto=webp&s=6870dccc607c40693a17c3897000656e51cb0b88', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?auto=webp&s=b74cf02fb015bdaf8d100167e0809a7ff1039628', 'width': 1024}, 'variants': {}}]} | |
Rent gpu and speech tò speech | 1 | [removed] | 2024-01-16T21:08:41 | https://www.reddit.com/r/LocalLLaMA/comments/198e057/rent_gpu_and_speech_tò_speech/ | Independent-Bill-770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198e057 | false | null | t3_198e057 | /r/LocalLLaMA/comments/198e057/rent_gpu_and_speech_tò_speech/ | false | false | self | 1 | null |
Stable Code 3B: Coding on the Edge — Stability AI | 100 | [https://stability.ai/news/stable-code-2024-llm-code-completion-release](https://stability.ai/news/stable-code-2024-llm-code-completion-release)
​
>Stable Code 3B is a newly released Large Language Model (LLM) with 3 billion parameters. It's designed for accurate and responsive code completion, even outperforming models twice its size like CodeLLaMA 7b. Its compact size enables it to run on modern laptops without dedicated GPUs. Trained on 18 programming languages, Stable Code 3B offers state-of-the-art performance compared to models of similar size. It's now available through the Stability AI Membership for commercial applications.
*Article summarized by superboogav2*
*Model : TheBloke\_Nous-Hermes-2-SOLAR-10.7B-GPTQ\_gptq-4bit-32g-actorder\_True* | 2024-01-16T21:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/198dzas/stable_code_3b_coding_on_the_edge_stability_ai/ | ali0une | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198dzas | false | null | t3_198dzas | /r/LocalLLaMA/comments/198dzas/stable_code_3b_coding_on_the_edge_stability_ai/ | false | false | self | 100 | {'enabled': False, 'images': [{'id': 'umPi2-T1QxhMr1gSIi0TVMqnrmlQjcFcdxw_is60-ec', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=108&crop=smart&auto=webp&s=fbf5ddde50b2b0e9ea74b7701f0ae8a80db147fb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=216&crop=smart&auto=webp&s=79dc4771dcd94083b5275bc95855bf578761d18b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=320&crop=smart&auto=webp&s=e0542dc1a55b48561df284a46eb3e69f29b3b406', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=640&crop=smart&auto=webp&s=d7539927859242567abcb3c8aed19243c5e4876d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?width=960&crop=smart&auto=webp&s=6870dccc607c40693a17c3897000656e51cb0b88', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/l_f3MaK0h0zU3fcMpqYSzgl4d0HQ_xWcOVsp5X-ULws.jpg?auto=webp&s=b74cf02fb015bdaf8d100167e0809a7ff1039628', 'width': 1024}, 'variants': {}}]} |
Regarding meta-llama/Llama-2-7b-hf fine Tuning | 2 | I am trying to fine tune meta-llama/Llama-2-7b-hf on custom dataset using Lora . Post training I am trying to save the model on the disk than to push to huggingface:
trainer.save_model(output_dir)
tokenizer.save_pretrained(output_dir)
model.config.save_pretrained(output_dir)
For inference I am loading it back from the saved directory:
output_dir = "/notebooks/Workspace/training/kumar-llama-7b-finetuned"
# load base LLM model and tokenizer
peft_model = AutoPeftModelForCausalLM.from_pretrained(
output_dir,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
load_in_4bit=True,
)
loaded_tokenizer = AutoTokenizer.from_pretrained(output_dir)
What I notice is that when I try to load the saved fine-tuned model, it always tries to download it again from Hugging Face and errors out:
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_errors.py:286, in hf_raise_for_status(response, endpoint_name)
285 try:
--> 286 response.raise_for_status()
287 except HTTPError as e:
File /usr/local/lib/python3.9/dist-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Llama-2-7b-hf/resolve/main/config.json
The above exception was the direct cause of the following exception:
GatedRepoError Traceback (most recent call last)
File /usr/local/lib/python3.9/dist-packages/transformers/utils/hub.py:389, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)
387 try:
388 # Load from URL or cache if already cached
--> 389 resolved_file = hf_hub_download(
390 path_or_repo_id,
391 filename,
392 subfolder=None if len(subfolder) == 0 else subfolder,
393 repo_type=repo_type,
394 revision=revision,
395 cache_dir=cache_dir,
396 user_agent=user_agent,
397 force_download=force_download,
398 proxies=proxies,
399 resume_download=resume_download,
400 token=token,
401 local_files_only=local_files_only,
402 )
403 except GatedRepoError as e:
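For what it's worth, here's what I'd try next. My working theory (unverified): the saved directory only holds the LoRA adapter, and `AutoPeftModelForCausalLM` reads `base_model_name_or_path` from `adapter_config.json`, then fetches the gated base model from the Hub, hence the 401. A sketch of forcing offline loading; it assumes the base model is already in the local HF cache, and the kwargs follow the usual `from_pretrained` interface:

```python
# Kwargs to keep every from_pretrained call off the network (assumption: all
# files, including the base model, are already on disk or in the HF cache).
LOAD_KWARGS = {
    "low_cpu_mem_usage": True,
    "load_in_4bit": True,
    "local_files_only": True,  # fail fast instead of calling the Hub
}

def load_finetuned(output_dir):
    # Imports kept inside the function so the sketch parses without peft installed.
    import torch
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        output_dir, torch_dtype=torch.float16, **LOAD_KWARGS
    )
    tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
    return model, tokenizer
```

If this errors with a missing-file message naming meta-llama/Llama-2-7b-hf, that would confirm the base model itself isn't on disk: either pass a `token=` / log in so the gated download works, or merge the adapter (`model.merge_and_unload()`) before saving so the directory is self-contained.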
Any idea why is it going to hugging face to download the model when I am specifically trying to load it from the disk? Any assistance would be of great help. | 2024-01-16T20:46:44 | https://www.reddit.com/r/LocalLLaMA/comments/198dfp0/regarding_metallamallama27bhf_fine_tuning/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198dfp0 | false | null | t3_198dfp0 | /r/LocalLLaMA/comments/198dfp0/regarding_metallamallama27bhf_fine_tuning/ | false | false | self | 2 | null |
Which Macbook Pro to buy for running an LLM locally? I created a buyer's guide to help you decide. | 8 | I am currently contemplating buying a new Macbook Pro as my old Intel-based one is getting older. However, I wanted to be able to run LLMs locally, just for fun. Figuring out what hardware requirements I need for that was complicated. [I created this blog post as a helping guide for others](http://fabian-kostadinov.github.io/2024/01/16/macbook-pro-for-llms-buyers-guide-in-january-2024/) who are in a similar situation like myself. | 2024-01-16T20:32:21 | https://www.reddit.com/r/LocalLLaMA/comments/198d2z6/which_macbook_pro_to_buy_for_running_an_llm/ | fabkosta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198d2z6 | false | null | t3_198d2z6 | /r/LocalLLaMA/comments/198d2z6/which_macbook_pro_to_buy_for_running_an_llm/ | false | false | self | 8 | null |
Error deploying thebloke/cuda11.8.0-ubuntu22.04-oneclick to A100 | 1 | [removed] | 2024-01-16T20:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/198cn0p/error_deploying/ | keyboardwarrriorr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198cn0p | false | null | t3_198cn0p | /r/LocalLLaMA/comments/198cn0p/error_deploying/ | false | false | self | 1 | null |
GGUF vs AWQ vs GGML | 5 | Maybe it's a noob question, but I still don't understand the quality difference.
Is a 4bit AWQ better in terms of quality than a 5 or 6 bit GGUF?
Can't GGUF use the quantization system of AWQ to give more space to most activated neurons?
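One way to make the size comparison concrete is to work backwards from file size to effective bits per weight, so different formats line up on one axis. A quick sketch; the 7B file sizes below are rough values from memory, treat them as placeholders:

```python
def bits_per_weight(file_gb, n_params_b):
    """Effective bits per weight implied by a quantized file size."""
    return file_gb * 8 / n_params_b

# Rough 7B file sizes in GB (assumed, not measured)
quants = {"AWQ 4-bit": 3.9, "GGUF Q4_K_M": 4.1, "GGUF Q5_K_M": 4.8, "GGUF Q6_K": 5.5}
for name, gb in quants.items():
    print(f"{name}: ~{bits_per_weight(gb, 7.0):.1f} bpw")
```

Bits per weight alone doesn't settle quality, though: AWQ's whole trick is choosing *which* weights get protected, so a 4-bit AWQ can punch above its raw bpw, which is exactly what makes the comparison hard.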
AWQ file size is really small compared to other quants, i'm trying to compare the quality but it's not an easy task | 2024-01-16T20:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/198c9ru/gguf_vs_awq_vs_ggml/ | infiniteContrast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198c9ru | false | null | t3_198c9ru | /r/LocalLLaMA/comments/198c9ru/gguf_vs_awq_vs_ggml/ | false | false | self | 5 | null |
LLM cross-session privacy leak: “A Flaw in Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data” | 2 | 2024-01-16T19:55:40 | https://www.wired.com/story/leftoverlocals-gpu-vulnerability-generative-ai/ | dummypatty | wired.com | 1970-01-01T00:00:00 | 0 | {} | 198c637 | false | null | t3_198c637 | /r/LocalLLaMA/comments/198c637/llm_crosssession_privacy_leak_a_flaw_in_millions/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'gwDw3dAe9KfmA4nTB_wI9HGdH6FuSJrkwZYo_bkcc50', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?width=108&crop=smart&auto=webp&s=9328eab189ae03e5557000bfe20a51c2bbdf3f80', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?width=216&crop=smart&auto=webp&s=e3ddbac626498624a6d4dbfcbf1392fa180683f7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?width=320&crop=smart&auto=webp&s=bbdc40f7ff6173d0c0d948580c55de6af2d385f4', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?width=640&crop=smart&auto=webp&s=eb8ac2c0d78b443af41b98349cb73942d176cc58', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?width=960&crop=smart&auto=webp&s=16adc9bdf6b2a74db2bc0e8ffdb8ebb9522657f7', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?width=1080&crop=smart&auto=webp&s=e3dba02f3d1f815d9d0c3c7b5c0e46edc4381442', 'width': 1080}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/XXc54-7LdYpxq7EKIPkR0hoHQ3i-evK2J9VGNDhN5tA.jpg?auto=webp&s=a2bfc534551ab14e982d0b2f986431bdab7dc7f5', 'width': 1280}, 'variants': {}}]} | ||
Nvidia Parakeet Rnnt vs CTC | 1 | [removed] | 2024-01-16T19:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/198bbuf/nvidia_parakeet_rnnt_vs_ctc/ | FrenchSouch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198bbuf | false | null | t3_198bbuf | /r/LocalLLaMA/comments/198bbuf/nvidia_parakeet_rnnt_vs_ctc/ | false | false | self | 1 | null |
The IQ Test That AI Can’t Pass | 1 | [https://www.johndcook.com/blog/2024/01/16/the-iq-test-ai-cant-pass/](https://www.johndcook.com/blog/2024/01/16/the-iq-test-ai-cant-pass/)
>The Abstraction and Reasoning Corpus (ARC) benchmark is a test designed to evaluate intelligence in performing tasks that are new or rarely encountered. It consists of image completion problems similar to Raven's Progressive Matrices, where participants need to identify the image that fits into a specific pattern based on previously provided images. Despite recent advancements in large language models achieving impressive scores on various academic and professional exams, none have successfully passed the ARC benchmark yet. Several competitions have taken place to encourage progress on this challenging task, with the highest score currently standing at 30%. Some experts believe that deep learning may eventually overcome this hurdle, while others suggest the need for alternative approaches like neurosymbolic methods. Overall, these results indicate that current AI technology is still far from reaching strong artificial general intelligence.
*Article summarized by superboogav2*
*Model : TheBloke\_Nous-Hermes-2-SOLAR-10.7B-GPTQ\_gptq-4bit-32g-actorder\_True*
Do you think LLMs could pass this kind of IQ test in the near future? | 2024-01-16T19:13:35 | https://www.reddit.com/r/LocalLLaMA/comments/198b4o3/the_iq_test_that_ai_cant_pass/ | ali0une | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198b4o3 | false | null | t3_198b4o3 | /r/LocalLLaMA/comments/198b4o3/the_iq_test_that_ai_cant_pass/ | false | false | self | 1 | null |
Is there anything that can compete with GitHub Copilot right now that I can run in VS Code? | 29 | Is there anything that can compete with GitHub Copilot right now that I can run in VS Code?
I was looking at Wizard Coder and Deepseek Coder that have been recommended here, or at least had positive posts related to them. I do use Python, but I also want strong support for things like JS and Vue. | 2024-01-16T18:57:39 | https://www.reddit.com/r/LocalLLaMA/comments/198aq2l/is_there_anything_that_can_complete_with_github/ | SillyLilBear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198aq2l | false | null | t3_198aq2l | /r/LocalLLaMA/comments/198aq2l/is_there_anything_that_can_complete_with_github/ | false | false | self | 29 | null |
MoE-based LLM ideal for edge cases? | 1 | Is there a MoE-based LLM with around 3B, or even 7B, total parameters that would be well suited to edge scenarios? Since MoE models use only a subset of their parameters at inference time, such a model should be runnable on low-capacity CPUs. | 2024-01-16T18:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/198ahi9/moe_based_llm_ideal_for_edge_cases/ | pussy_artist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198ahi9 | false | null | t3_198ahi9 | /r/LocalLLaMA/comments/198ahi9/moe_based_llm_ideal_for_edge_cases/ | false | false | self | 1 | null |
I have a Mac Studio M2 Ultra, 192GB RAM & 76 GPU cores. AMA | 1 | [removed] | 2024-01-16T18:43:51 | https://www.reddit.com/r/LocalLLaMA/comments/198adis/i_have_a_mac_studio_m2_ultra_192gb_ram_76_gpu/ | macstudiouser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198adis | false | null | t3_198adis | /r/LocalLLaMA/comments/198adis/i_have_a_mac_studio_m2_ultra_192gb_ram_76_gpu/ | false | false | self | 1 | null |
New "LeftoverLocals" vulnerability for certain Apple, Qualcomm, AMD, and Imagination GPUs | 35 | https://blog.trailofbits.com/2024/01/16/leftoverlocals-listening-to-llm-responses-through-leaked-gpu-local-memory/
"By recovering local memory—an optimized GPU memory region—we were able to build a PoC where an attacker can listen into another user’s interactive LLM session (e.g., llama.cpp) across process or container boundaries" | 2024-01-16T18:43:07 | https://www.reddit.com/r/LocalLLaMA/comments/198acw4/new_leftoverlocals_vulnerability_for_certain/ | cfoster0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198acw4 | false | null | t3_198acw4 | /r/LocalLLaMA/comments/198acw4/new_leftoverlocals_vulnerability_for_certain/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'FKy71tw108kWrCDkqrs2ENllMk0ACRqEAwnPzw1-nbI', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=108&crop=smart&auto=webp&s=92484137f4f7b7a6cd764fec67c7d70f661ca922', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=216&crop=smart&auto=webp&s=16440208233ccc2e07bafb8932e06e75a5a6efbc', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=320&crop=smart&auto=webp&s=02dd91489a7e5e88205f81e4b22722d2639b98de', 'width': 320}, {'height': 453, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=640&crop=smart&auto=webp&s=f5a07987086ca1d906dd9782b1494252dcac8e5c', 'width': 640}, {'height': 679, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?width=960&crop=smart&auto=webp&s=c577c35f71aea2c09b6a6b8b6adf7f8d9bf041e5', 'width': 960}], 'source': {'height': 725, 'url': 'https://external-preview.redd.it/S9Su6lWgNg_HVjQhTT6EAn-Bly6_N3WtML33F8NZFoQ.jpg?auto=webp&s=24576c0d0986422b0da6e2a6522e99443700c955', 'width': 1024}, 'variants': {}}]} |
Experiments with RNG Dequantization | 12 | So, I was interested in messing around with the quantization code recently for llama.cpp.
https://preview.redd.it/0h4vgxn95ucc1.png?width=1102&format=png&auto=webp&s=3fc36eb43a4dcca472cd6dd7b927ef6e23da70e1
In the image domain, dithering is typically used to add some noise when quantizing. This is done so that color banding artifacts and such don't look as obvious and so the image looks "smoother" to the human eye.
This is obviously pretty tough to apply to a neural network where we don't really have too much of a clue what the values of the hidden layers are meant to represent, and whether or not repeated matrix multiplication would benefit from random noise is sort of undetermined.
So I decided I would try implementing it... via applying randomization to quantized values.
It was then I realized that what I was doing was basically analogous to DRuGS; applying randomization to the actual values in the neural network as they are being passed through instead of applying randomization through sampling schemes.
So, what have I discovered?
https://preview.redd.it/u6f969xg9ucc1.png?width=1007&format=png&auto=webp&s=cd2e3b0407d29ab0b949b2c92fd1fffbcfadcda0
It sometimes... *actually works...* to some degree. In this case, you can see how the 2nd inference attempt was closer to the original PPL across 1024 tokens.
Though, keep in mind, perplexity is just a measure of how predictable the sequence was based on projected probabilities of each token compared to the data; being able to better predict the data doesn't necessarily mean it is closer to the original model.
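As a concrete illustration of that caveat, perplexity is just the exponentiated average negative log-likelihood the model assigns to the observed tokens. A toy computation (the per-token probabilities here are made up, not from any real model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood of the observed tokens)."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical probabilities the model assigned to the actual next tokens.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0 (like guessing uniformly over 4 choices)
```

A model can lower this number by sharpening its probabilities on the measured text without actually being closer to the original model's full output distribution, which is exactly the caveat above.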
However, it makes for a very "deep" way to actually randomize the LLM's outputs. I say "deep" because instead of it happening at the sampler level, you are **directly** distorting the "signal" of what the model is processing.
Much like DRuGS, but somehow, even more random, because you are distorting the original weights and **not** the outputs of each layer's state.
Here's an example of a dequantization RNG range between -5.0 and 5.0 for a q4_0 Mistral model:
```
### Instruction:
Provide a comprehensive overview of SpongeBob Squarepants.
### Response:
SpongeBob Squarepants are a type of prehistoric wooden buildings, characterized by their particular structural system which allows them to be easily dismantled and reassembled elsewhere. This construction technique was particularly popular in the Netherlands during the 17th century, but also had some influence on other parts of Europe. The name "SpongeBob" is derived from the Dutch word "spaan", meaning spar or ridge, and the term "squarepant" refers to the fact that these structures were often built as square buildings.
```
...Absolutely hilarious, imo.
Technically, this is sort of like the LLM equivalent of a [ROM corruption](https://youtu.be/OvNDcVRlyYk). I wonder if I can actually get it to work consistently for improving ppl in quantized models, though... in the meantime, this is quite fun to play around with. | 2024-01-16T18:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/1989gx9/experiments_with_rng_dequantization/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1989gx9 | false | {'oembed': {'author_name': 'vinesauce', 'author_url': 'https://www.youtube.com/@vinesauce', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/OvNDcVRlyYk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="[Vinesauce] Vinny - Mario 64 Corruptions"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/OvNDcVRlyYk/hqdefault.jpg', 'thumbnail_width': 480, 'title': '[Vinesauce] Vinny - Mario 64 Corruptions', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1989gx9 | /r/LocalLLaMA/comments/1989gx9/experiments_with_rng_dequantization/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'mgQt-EyW2FoWtUxjbzsf5i9Hd4FWudnl0I-K4kGcs94', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NF-BIhKHiRcrLBzoyDtMkPolYawbOulKVZkDRtL67xw.jpg?width=108&crop=smart&auto=webp&s=228d44bf6e468826380191a0dcf2bcd72a82f1cd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/NF-BIhKHiRcrLBzoyDtMkPolYawbOulKVZkDRtL67xw.jpg?width=216&crop=smart&auto=webp&s=7bb69e2376713c81fe0a76b47a979d7990aced49', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/NF-BIhKHiRcrLBzoyDtMkPolYawbOulKVZkDRtL67xw.jpg?width=320&crop=smart&auto=webp&s=03991595f1016b04d10545168135d3d70fa01d78', 'width': 
320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/NF-BIhKHiRcrLBzoyDtMkPolYawbOulKVZkDRtL67xw.jpg?auto=webp&s=1e289d58acc2f2c794dc29dcdbb8396efbfcc2a2', 'width': 480}, 'variants': {}}]} | |
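For reference, a rough Python sketch of the distortion the RNG dequantization post above experiments with, assuming a simplified q4_0 layout (per-block scale, 4-bit quants reconstructed as scale * (q - 8)); the real change lives in llama.cpp's C dequantization kernels, so the function name and layout here are illustrative:

```python
import random

def dequantize_q4_0_block(scale, quants, noise=0.0, rng=None):
    """Dequantize one simplified q4_0-style block of 4-bit weights.

    Each weight is stored as an integer q in [0, 15] plus a per-block
    scale, reconstructed as scale * (q - 8). With noise > 0, a uniform
    offset in [-noise, noise] is added to every reconstructed weight,
    i.e. the "RNG dequantization" distortion described in the post.
    """
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    out = []
    for q in quants:
        w = scale * (q - 8)
        if noise:
            w += rng.uniform(-noise, noise)
        out.append(w)
    return out

print(dequantize_q4_0_block(0.5, [8, 10, 6]))  # → [0.0, 1.0, -1.0]
```

With a noise range of ±5.0, far larger than typical weight magnitudes, the distortion dwarfs the signal, which is why the sample output in the post reads like a ROM corruption.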
Teaching an LLM a new programming language? | 8 | How could I teach an LLM a new programming language?
It is quite niche; although there is a *lot* of varied pre-existing code and documentation for it (I can likely find *gigabytes* worth of code), it isn't in datasets such as StarCoder. How could I go about teaching an LLM it? | 2024-01-16T17:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/198931x/teaching_a_llm_a_new_programming_language/ | Absolucyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198931x | false | null | t3_198931x | /r/LocalLLaMA/comments/198931x/teaching_a_llm_a_new_programming_language/ | false | false | self | 8 | null |
Integrate Natural Language to SQL (NL-to-SQL) directly into your product. | 2 | Asking questions from structured databases, referred to as the NL-to-SQL task, has posed a challenge even for advanced language models like GPT-4. This challenge arises because these models lack information about how data is organized within your database. Furthermore, crafting SQL queries, even for seasoned data scientists, often involves numerous iterations of debugging, especially for complex queries with nested structures and multiple joins.
The Dataherald API provides access to robust agents built upon the Langchain framework, equipped with a diverse set of tools for crafting precise SQL queries customized to your unique requirements. Dataherald offers notable features, including automatic fine-tuning of GPT-4 (available exclusively to paid customers), GPT-3.5-turbo support for your NL-to-SQL datasets, and comprehensive evaluation of the generated SQL queries. Their agent architecture employs a hierarchical system that utilizes multiple models, one for reasoning and another for SQL generation.
If you are interested to know more checkout their website: [https://www.dataherald.com/](https://www.dataherald.com/) | 2024-01-16T17:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1988ycg/integrate_natural_language_to_sql_nltosql/ | Beneficial-Engine-72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1988ycg | false | null | t3_1988ycg | /r/LocalLLaMA/comments/1988ycg/integrate_natural_language_to_sql_nltosql/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'BVE9FFOog6ncS-6qm_CVlg2T9G_lLoWH_qEM7fdqIKY', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?width=108&crop=smart&auto=webp&s=215e27675906622ac55908dd6ccc5d143d4e35fd', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?width=216&crop=smart&auto=webp&s=d655e8d4261ad18080beb597192bf904e98025a1', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?width=320&crop=smart&auto=webp&s=0a60acec878d52de450b2ea6189a178864c64efb', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?width=640&crop=smart&auto=webp&s=4199f5985e937acb567f74ae52d608ed46d8e957', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?width=960&crop=smart&auto=webp&s=20775b31cd6f900e77c7c94625b07e52b2fd402f', 'width': 960}, {'height': 572, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?width=1080&crop=smart&auto=webp&s=c2c67360bb5b83b58c33c63a092674b029a41663', 'width': 1080}], 'source': {'height': 856, 'url': 'https://external-preview.redd.it/GrxWdMZpfB8knS41qS-NvXlX0yAZz4RbzSK1S1m0O60.jpg?auto=webp&s=b327ba2753472ca74c65761ff3410bb8e4560cdd', 'width': 1615}, 'variants': {}}]} |
With equal total VRAM: 2 GPUS VS. 1 GPU? | 9 | I had a hard time coming up with a title that matched what I wanted to ask, I apologize.
I've just read this review ([https://www.theverge.com/24039832/nvidia-rtx-4070-super-review-gpu-graphics-card-benchmark-test](https://www.theverge.com/24039832/nvidia-rtx-4070-super-review-gpu-graphics-card-benchmark-test)) of the Nvidia RTX 4070 Super, which has 12GB of VRAM, and I was thinking: "Hey! I could get two of those for a combined 24GB of VRAM instead of an RTX 3090 with 24GB, which costs more (new) than the two combined."
But there must be a catch, right? If so, what is it? Much poorer performance? Something else?
(Background: I'm new to the field of building my own machine to run inference locally and I'm currently doing research regarding what to buy to build a decent computer to do that) | 2024-01-16T17:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/1988rcz/with_equal_total_vram_2_gpus_vs_1_gpu/ | mmmmmmaaaaattttt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1988rcz | false | null | t3_1988rcz | /r/LocalLLaMA/comments/1988rcz/with_equal_total_vram_2_gpus_vs_1_gpu/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'ouTRXvRuFcXSue22jAetYGIJEwvhfKwiB0XCbmdIVM0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?width=108&crop=smart&auto=webp&s=b52666d9d92e8b8410825ee9e036534458b7bf85', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?width=216&crop=smart&auto=webp&s=ea1f6131a5a36df8f7dc6c98ea78be9b2dceacda', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?width=320&crop=smart&auto=webp&s=f377d50ca814278257f1acfcf1e7f189a1e48f60', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?width=640&crop=smart&auto=webp&s=6e7811a17456077b7a6b29d0a4b69fa0482af9d7', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?width=960&crop=smart&auto=webp&s=d19d8129ee50375a70974df18542fd240ba49f5d', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?width=1080&crop=smart&auto=webp&s=a1fb9d4d99d13a78063ed258fd2a0caaed68a193', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/aPzWO4Qz4eptYT8ZqJ8pP5Eautgeaj-_i9oJoDTc0Uk.jpg?auto=webp&s=f1696f2ca820267331484d6d7a7cec013ebc3e61', 'width': 1200}, 'variants': {}}]}
INSIDER NEWS! | 1 | Just got word that the disruptions everyone is seeing with OpenAI's service is a result of them re-dedicating resources to train GPT5. Get ready folks. | 2024-01-16T17:37:20 | https://www.reddit.com/r/LocalLLaMA/comments/1988omp/insider_news/ | After-Temperature253 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1988omp | false | null | t3_1988omp | /r/LocalLLaMA/comments/1988omp/insider_news/ | true | false | spoiler | 1 | null |
An expert and a jester in a trenchcoat: orthogonal-2x7B | 11 | I've spent the weekend messing with model combinations with smaller mixes (4x8) but none really gave expected results, almost always degrading the performances, the only one decent at the end of various testing was this one: [orthogonal-2x7B](https://huggingface.co/LoSboccacc/orthogonal-2x7B) which is a merge of mistralai/Mistral-7B-Instruct-v0.2 and SanjiWatsuki/Silicon-Maid-7B
don't expect too much out of it, but I think it can fit a niche. It's the only one that produced terse and fun prose in my multiturn testing, and while some instruction weren't followed 100%, it got the math passage right. Scoring pending because I've hit my spend cap for the month. | 2024-01-16T17:29:12 | https://www.reddit.com/r/LocalLLaMA/comments/1988h8w/an_expert_and_a_jester_in_a_trenchcoat/ | LoSboccacc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1988h8w | false | null | t3_1988h8w | /r/LocalLLaMA/comments/1988h8w/an_expert_and_a_jester_in_a_trenchcoat/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'G0impekBdJxA2BJoWo2rrQtsStXNwaVh12ArYCdzOG4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?width=108&crop=smart&auto=webp&s=eaabc442823eb62f3cda7210352f092ad6061360', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?width=216&crop=smart&auto=webp&s=9c14071487fb26960ed24a8946d22dfb6761e8b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?width=320&crop=smart&auto=webp&s=b2293888f3089e1c92ecf34bf4cb206fe455b02b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?width=640&crop=smart&auto=webp&s=ba82aac07f5101487b793871d99b65a84acaa2f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?width=960&crop=smart&auto=webp&s=29337220c127e8b3b9e5eb3e2c4159cfc6e95394', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?width=1080&crop=smart&auto=webp&s=163199ceb5d1953b27e0e29af7abe36fc6dc5a64', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/Zfe94Sazx6EulXn_PGoRvbs7x5jYFbtt4ZFpwafRT8Q.jpg?auto=webp&s=a599eb6f8d9310b616309311216f4b87d76028bf', 'width': 1200}, 'variants': {}}]} |
APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding - ("... when combined with speculative decoding, the speed-up can reach up to 4x. ... a throughput increase of 20-70% and a latency reduce of 20-35%...") | 36 | **Paper**: [https://arxiv.org/abs/2401.06761](https://arxiv.org/abs/2401.06761)
**Abstract**:
>The massive adoption of large language models (LLMs) demands efficient deployment strategies. However, the auto-regressive decoding process, which is fundamental to how most LLMs generate text, poses challenges to achieve efficient serving. In this work, we introduce a parallel auto-regressive generation method. By instruct-tuning on general domain data that contains hierarchical structures, we enable LLMs to independently plan their generation process and perform auto-parallel auto-regressive (**APAR**) generation, significantly reducing the number of generation steps. APAR alone can achieve up to 2x speed-up, and when combined with speculative decoding, the speed-up can reach up to 4x. In addition, APAR reduces the key-value cache consumption and attention computation during generation. This leads to a throughput increase of 20-70% and a latency reduce of 20-35% in high-throughput scenarios, compared to state-of-the-art serving frameworks. | 2024-01-16T17:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/19881lf/apar_llms_can_do_autoparallel_autoregressive/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19881lf | false | null | t3_19881lf | /r/LocalLLaMA/comments/19881lf/apar_llms_can_do_autoparallel_autoregressive/ | false | false | self | 36 | null |
Local Transcription/Translation with SeamlessM4T | 28 | Hi all, this is not per-say an LLM, but I figured it could interest some of you here.
In November, FAIR released a speech/text translation model: [SeamlessM4T](https://ai.meta.com/research/seamless-communication/). With my teammate, I made a GGML port of the model to allow running it locally without Python. (If you just want to try the model, there is [also a public demo](https://seamless.metademolab.com/m4t).)
First you can kick off the download of the [medium](https://dl.fbaipublicfiles.com/seamless/models/seamlessM4T_medium.ggml) model.
Meanwhile you can read the [build instructions](https://github.com/facebookresearch/seamless_communication/tree/main/ggml) on the main GitHub, and compile `unity.cpp`.
The CLI interface only supports Speech to Text at the moment, even though the full model can also do more tasks. The input speech language will be detected on the fly, among the 101 supported, and you can choose the output language for the text output among 96 languages. The full list of supported languages is available on [Huggingface description](https://huggingface.co/facebook/seamless-m4t-v2-large)
Currently the code is not doing resampling, so you’ll need to feed it 16kHz, mono-channel WAV audio files. On Linux you can create such recordings with `arecord -r 16000 -d 10 -f S32_LE -t wav a.wav`. Because of the architecture of the model, the memory usage is quadratic in the audio length. If you want to increase the audio duration, you might need to increase the `--mem` setting.
Running `./build/bin/unity --model seamlessM4T_medium.ggml` will enter a “REPL” that allows transcribing several files without reloading the model weights. In the `unity` terminal, you can write `a.wav eng` to transcribe the audio into English, then `a.wav fra` to try the translation to French. [Screencast](https://asciinema.org/a/86dp7BsRowWYfEL8Lc93LCChI)
The quality of the transcription is quite sensitive to the quality of the input signal; to get the best results, try to make a clean recording without too much noise. Also, if you have a strong accent when speaking English (like me 🙂), using your native language as input will likely give you better results.
Feedback is most welcome, it will help prioritize future work. We are also open to external contributions. Stay tuned for more tasks supported, and some small specialized models. | 2024-01-16T16:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/19879wk/local_transcriptiontranslation_with_seamlessm4t/ | gwenzek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19879wk | false | null | t3_19879wk | /r/LocalLLaMA/comments/19879wk/local_transcriptiontranslation_with_seamlessm4t/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'h_C0aTx1zZ2LbZLhgKw4iMYwyLGUu99Yc17isI8ogME', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=108&crop=smart&auto=webp&s=56923bdb13994baac1b22a9acb26dbfc24a2ffb5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=216&crop=smart&auto=webp&s=3aa5acdf8a69efec05d2e3b7b491b32438102862', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=320&crop=smart&auto=webp&s=2ae15084388d6e68a242fce3b577dc6adb4cea7a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=640&crop=smart&auto=webp&s=c74d9225f096cae09edea09ab512208361f1d753', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=960&crop=smart&auto=webp&s=535cb4d5eb0325b6c3de33875df5dc570957ada4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?width=1080&crop=smart&auto=webp&s=6105a5fd833f944081148e9d696ceebaa652bc02', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/owEvkLw_moZXaFv7_gI34F_D_Zz4DGxqECQIJifUMZ8.jpg?auto=webp&s=6c24a88115d9fa3cf7c595398e836e508b5c0737', 'width': 3840}, 'variants': {}}]} |
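One note on the 16 kHz mono requirement from the SeamlessM4T post above: since unity.cpp doesn't resample, inputs have to be converted beforehand. The easy route is ffmpeg (`ffmpeg -i in.wav -ar 16000 -ac 1 out.wav`); as a rough illustration of what that conversion does, here is a naive pure-Python sketch (channel averaging plus linear-interpolation resampling, with no anti-aliasing filter):

```python
def to_16k_mono(samples, rate, channels):
    """Downmix interleaved PCM samples to mono and resample to 16 kHz.

    Naive conversion: channels are averaged, and resampling is plain
    linear interpolation (no low-pass filter, so expect some aliasing).
    """
    # Downmix: average each frame's interleaved channel samples.
    if channels > 1:
        samples = [sum(samples[i:i + channels]) / channels
                   for i in range(0, len(samples), channels)]
    if rate == 16000:
        return samples
    ratio = rate / 16000.0
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio          # fractional position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

For real audio you'd want a proper low-pass filter before downsampling; this is only meant to show the shape of the conversion.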
Transformers are Multi-State RNNs - ("Our findings shed light on the inter-working of transformers, and their connections to RNNs. They also have practical value—they can dramatically reduce the LLM cache size by up to 88%.") | 106 | **Paper**: [https://arxiv.org/abs/2401.06104](https://arxiv.org/abs/2401.06104)
**Code**: [https://github.com/schwartz-lab-NLP/TOVA](https://github.com/schwartz-lab-NLP/TOVA)
**Abstract**:
>Transformers are considered conceptually different compared to the previous generation of state-of-the-art NLP models - recurrent neural networks (RNNs). In this work, we demonstrate that decoder-only transformers can in fact be conceptualized as infinite multi-state RNNs - an RNN variant with unlimited hidden state size. We further show that pretrained transformers can be converted into *finite* multi-state RNNs by fixing the size of their hidden state. We observe that several existing transformers cache compression techniques can be framed as such conversion policies, and introduce a novel policy, **TOVA**, which is simpler compared to these policies. Our experiments with several long range tasks indicate that TOVA outperforms all other baseline policies, while being nearly on par with the full (infinite) model, and using in some cases only 1/8 of the original cache size. Our results indicate that transformer decoder LLMs often behave in practice as RNNs. They also lay out the option of mitigating one of their most painful computational bottlenecks - the size of their cache memory. We publicly release our code at [this https URL](https://github.com/schwartz-lab-NLP/TOVA). | 2024-01-16T16:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/1986t3w/transformers_are_multistate_rnns_our_findings/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1986t3w | false | null | t3_1986t3w | /r/LocalLLaMA/comments/1986t3w/transformers_are_multistate_rnns_our_findings/ | false | false | self | 106 | null |
PESC - Converting Pretrained Models to MoE via LoRA fine tuning | 30 | Now this is something interesting , they propose a method to convert dense LLMs to MoE without any significant parameter updates via QLoRA also very memory efficient, introduced Camelidae LLM series saying that it is outperforming GPT-3.5.
Ig this will be something like a game changer like when LoRA was dropped. Thoughts?
Abstract -
Large Language Models (LLMs) have demonstrated considerable proficiency in general natural language processing (NLP) tasks. Instruction tuning, a successful paradigm, enhances the ability of LLMs to follow natural language instructions and exhibit robust generalization across a wide range of tasks. However, these models often encounter performance limitations across multiple tasks due to constrained model capacity. Expanding this capacity during the instruction tuning phase poses significant challenges. To address this issue, we introduce a novel approach, Parameter-Efficient Sparsity Crafting (PESC), which transitions dense models to sparse models using a Mixture of Experts (MoE) architecture. PESC integrates adapters into the MoE layers of sparse models, differentiating experts without altering the individual weights within these layers. This method significantly reduces computational costs and GPU memory requirements, facilitating model capacity expansion through a minimal increase in parameters via the inserted adapters. Our empirical evaluation demonstrates the effectiveness of the PESC method. Using PESC during instruction tuning, our sparse models, dubbed Camelidae outperform all other opensource sparse models and exhibit superior general capabilities compared to GPT3.5. | 2024-01-16T16:05:51 | https://arxiv.org/abs/2401.02731 | MysticShadow427 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1986edw | false | null | t3_1986edw | /r/LocalLLaMA/comments/1986edw/pesc_converting_pretrained_models_to_moe_via_lora/ | false | false | default | 30 | null |
Question. Can you achieve JSON output by modifying the model execution engine to mask tokens against JSON state? | 4 | I'd really love for local models to be able to run as agents, but this needs strict JSON output (or other easily parsable scheme).
Fine-tuning feels like an unreliable way to do it. Instead, the engine could be modified so that, rather than simply choosing the logit with the highest score, you first mask out logits for tokens that aren't valid JSON for the current output state.
This means keeping some pre-rendered masking maps for each state you can have in JSON (there aren't that many) as well as tracking the JSON state.
While I am an experienced programmer, my domain isn't language models or C++ (for llama.cpp). Can someone point me to the code I can look at? Or perhaps this has already been done somewhere.
I feel like this would really empower the use of open-source models, since only OpenAI has function calling and the new JSON-only output flag. | 2024-01-16T15:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/1985kxr/question_can_you_achieve_json_output_by_modifying/ | debauch3ry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1985kxr | false | null | t3_1985kxr | /r/LocalLLaMA/comments/1985kxr/question_can_you_achieve_json_output_by_modifying/ | false | false | self | 4 | null |
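The masking idea in the post above is essentially grammar-constrained decoding (llama.cpp's GBNF grammar sampling works on this principle). Below is a minimal pure-Python sketch over a toy vocabulary and a deliberately simplified JSON state machine; a real implementation would build the masking maps over the model's full tokenizer and the complete JSON grammar, so the vocabulary and states here are purely illustrative:

```python
import math

# Toy vocabulary; a real engine would mask over the model's full tokenizer.
VOCAB = ['{', '"key"', ':', '"value"', ',', '}', 'hello']

# Pre-rendered masking maps: for each JSON state, which vocab entries are legal.
ALLOWED = {
    "start":       {'{'},
    "need_key":    {'"key"', '}'},
    "need_colon":  {':'},
    "need_value":  {'"value"'},
    "after_value": {',', '}'},
}

TRANSITIONS = {
    ("start", '{'): "need_key",
    ("need_key", '"key"'): "need_colon",
    ("need_key", '}'): "done",
    ("need_colon", ':'): "need_value",
    ("need_value", '"value"'): "after_value",
    ("after_value", ','): "need_key",
    ("after_value", '}'): "done",
}

def mask_logits(logits, state):
    """Replace logits of tokens that would break JSON validity with -inf."""
    return [score if VOCAB[i] in ALLOWED[state] else -math.inf
            for i, score in enumerate(logits)]

def constrained_greedy_decode(logits_per_step):
    """Greedy decoding, but each step only picks from JSON-legal tokens."""
    state, out = "start", []
    for logits in logits_per_step:
        masked = mask_logits(logits, state)
        token = VOCAB[max(range(len(masked)), key=masked.__getitem__)]
        out.append(token)
        state = TRANSITIONS[(state, token)]
        if state == "done":
            break
    return "".join(out)

# The model "wants" to say 'hello' at every step (highest raw logit),
# but the mask forces a syntactically valid JSON object.
logits = [[0, 2, 3, 4, 1, 1, 9]] * 4 + [[0, 2, 3, 4, 1, 5, 9]]
print(constrained_greedy_decode(logits))  # → {"key":"value"}
```

The sampler never sees an illegal token because its logit is already -inf, which is exactly the "mask before picking the max" behavior the post describes.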