| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The Carbon Footprint of GPT-4 | 1 | Wrote an article where I calculated the carbon footprint of GPT-4 and other commonly used foundation models.
Simply put, training these models requires enormous energy and has a significant carbon footprint. The carbon emitted by GPT-4 is the equivalent of powering more than 1300 homes for one year!
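For readers who want to see the shape of such an estimate, here is a back-of-the-envelope version of the calculation. Every number below is an illustrative placeholder, not a figure from the article:

```python
# Rough methodology: energy = GPUs * hours * average draw * datacenter overhead (PUE),
# then emissions = energy * grid carbon intensity. All values are assumed placeholders
# for illustration only, NOT figures from the article.
gpu_count = 10_000            # assumed number of accelerators
training_hours = 90 * 24      # assumed wall-clock training time
avg_power_kw = 0.4            # assumed average draw per GPU, in kW
pue = 1.2                     # assumed datacenter power usage effectiveness
grid_kg_co2_per_kwh = 0.4     # assumed grid carbon intensity

energy_kwh = gpu_count * training_hours * avg_power_kw * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

home_kwh_per_year = 10_000    # assumed annual electricity use of one home
print(f"Energy: {energy_kwh:,.0f} kWh (~{energy_kwh / home_kwh_per_year:,.0f} home-years)")
print(f"Emissions: {emissions_tonnes:,.0f} tonnes CO2")
```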
[https://tinyml.substack.com/p/the-carbon-impact-of-large-language](https://tinyml.substack.com/p/the-carbon-impact-of-large-language) | 2024-01-07T08:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/190nrjv/the_carbon_footprint_of_gpt4/ | soham1996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190nrjv | false | null | t3_190nrjv | /r/LocalLLaMA/comments/190nrjv/the_carbon_footprint_of_gpt4/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rZOyHJbrP9xl0lEK_Rjnyw2lsKjoY9qaATUXPRagwao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?width=108&crop=smart&auto=webp&s=994fd2791210167868979acc8fd5cca5ae967953', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?width=216&crop=smart&auto=webp&s=a8c34b0e15a9cc75e067dc0a7a4f17fbff58e8e2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?width=320&crop=smart&auto=webp&s=94571ab74779d51de7fe763ab2e07045718a811e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?width=640&crop=smart&auto=webp&s=8db59fe0279afa0872dfbc3b88c96559eaa5afbc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?width=960&crop=smart&auto=webp&s=11b7d077ad07fd72f37005b4f5dd686c8ed8333c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?width=1080&crop=smart&auto=webp&s=807879ce8dc82039d7523ab0855ce549d2b5070c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qomhC-yy1ANsOa4BxQAhoTrQeo5uGB9FvbepXuVDhz8.jpg?auto=webp&s=b6fb1b6def496f0d35f6c4a978d07d51f274a23e', 'width': 1200}, 'variants': {}}]} |
Request: system build guides at various price points | 3 | Don't personally have a lot to contribute in this department, however, I have noticed a lot of confusion and flaming when discussing specific systems that users on here have built. Perhaps it would be a useful collaborative project to decide on some good "from scratch" builds, within given budgets. Collaboration is probably needed in order to do this well and to stay up to date, since there is so much to test, and there are many moving variables, like price fluctuations, shortages, and new software and model releases.
Any proposals to get started? | 2024-01-07T08:27:02 | https://www.reddit.com/r/LocalLLaMA/comments/190nm3f/request_system_build_guides_at_various_price/ | WarmCartoonist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190nm3f | false | null | t3_190nm3f | /r/LocalLLaMA/comments/190nm3f/request_system_build_guides_at_various_price/ | false | false | self | 3 | null |
Expected speed for 33B model? | 9 | Hi!
I have a gaming PC (see Specs below) after upgrading from a laptop.
I'm getting really slow token generation on a Q6 quant of a 33B model (WizardCoder).
Why is it so slow?
(I'm using LM Studio with GPU offloading and n-threads set to 28)
Thanks for the help!
(I7-14700KF, 64GB DDR 5600MHz, RX 7800 XT 16GB) | 2024-01-07T08:12:07 | https://www.reddit.com/r/LocalLLaMA/comments/190neal/expected_speed_for_33b_model/ | NewExplor3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190neal | false | null | t3_190neal | /r/LocalLLaMA/comments/190neal/expected_speed_for_33b_model/ | false | false | self | 9 | null |
I want to share my two favorite papers in the last 6 months: LLM Augmented LLMs by Google and Phi-2 by Microsoft that I believe can lead to Great Leap Forward (Not the CCP one hopefully). Please share yours. | 106 | 1. LLM Augmented LLMs:
https://huggingface.co/papers/2401.02412
Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains. However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills. On the other hand, due to their adaptation abilities, several new instances of these models are being trained towards new domains and tasks. In this work, we study the problem of efficient and practical composition of existing foundation models with more specific models to enable newer capabilities. To this end, we propose CALM -- Composition to Augment Language Models -- which introduces cross-attention between models to compose their representations and enable new capabilities. Salient features of CALM are: (i) Scales up LLMs on new tasks by 're-using' existing LLMs along with a few additional parameters and data, (ii) Existing model weights are kept intact, and hence preserves existing capabilities, and (iii) Applies to diverse domains and settings. We illustrate that augmenting PaLM2-S with a smaller model trained on low-resource languages results in an absolute improvement of up to 13% on tasks like translation into English and arithmetic reasoning for low-resource languages. Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks -- on-par with fully fine-tuned counterparts.
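In code terms, the core of CALM is a small set of newly trained cross-attention blocks that let a frozen anchor model attend to a frozen augmenting model's hidden states. A rough, hypothetical PyTorch sketch of the idea (module names and shapes are mine, not the paper's code):

```python
import torch
import torch.nn as nn

class CrossAttentionBridge(nn.Module):
    """Newly trained block composing a frozen anchor LLM with a frozen augmenting LLM
    (a simplified sketch of the CALM idea, not the authors' implementation)."""
    def __init__(self, anchor_dim: int, aug_dim: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(aug_dim, anchor_dim)  # map augmenting-model states into anchor space
        self.attn = nn.MultiheadAttention(anchor_dim, n_heads, batch_first=True)

    def forward(self, anchor_hidden: torch.Tensor, aug_hidden: torch.Tensor) -> torch.Tensor:
        aug = self.proj(aug_hidden)
        mixed, _ = self.attn(query=anchor_hidden, key=aug, value=aug)
        return anchor_hidden + mixed  # residual: compose, don't overwrite existing capabilities

# Both base models stay frozen; only the bridge parameters (and a little data) are trained.
bridge = CrossAttentionBridge(anchor_dim=4096, aug_dim=2048)
anchor_h = torch.randn(1, 16, 4096)  # hidden states from a layer of the large anchor model
aug_h = torch.randn(1, 16, 2048)     # hidden states from a layer of the small augmenting model
out = bridge(anchor_h, aug_h)        # fed back into the anchor model's next layer
```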
2. Phi-2: The surprising power of small language models
https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/
Over the past few months, our Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called “Phi” that achieve remarkable performance on a variety of benchmarks. Our first model, the 1.3 billion parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks). We then extended our focus to common sense reasoning and language understanding and created a new 1.3 billion parameter model named Phi-1.5, with performance comparable to models 5x larger.
We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.
With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2(opens in new tab) available in the Azure AI Studio model catalog to foster research and development on language models. | 2024-01-07T07:34:53 | https://www.reddit.com/gallery/190mud2 | TysonUsykFury | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 190mud2 | false | null | t3_190mud2 | /r/LocalLLaMA/comments/190mud2/i_want_to_share_my_two_favorite_papers_in_the/ | false | false | 106 | null | |
I want to share my two favorite papers in the last 6 months: LLM Augmented LLMs by Google and Phi-2 by Microsoft. Please share yours. | 1 | LLM Augmented LLMs:
https://huggingface.co/papers/2401.02412
Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains. However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills. On the other hand, due to their adaptation abilities, several new instances of these models are being trained towards new domains and tasks. In this work, we study the problem of efficient and practical composition of existing foundation models with more specific models to enable newer capabilities. To this end, we propose CALM -- Composition to Augment Language Models -- which introduces cross-attention between models to compose their representations and enable new capabilities. Salient features of CALM are: (i) Scales up LLMs on new tasks by 're-using' existing LLMs along with a few additional parameters and data, (ii) Existing model weights are kept intact, and hence preserves existing capabilities, and (iii) Applies to diverse domains and settings. We illustrate that augmenting PaLM2-S with a smaller model trained on low-resource languages results in an absolute improvement of up to 13% on tasks like translation into English and arithmetic reasoning for low-resource languages. Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks -- on-par with fully fine-tuned counterparts.
Phi-2: The surprising power of small language models
Over the past few months, our Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called “Phi” that achieve remarkable performance on a variety of benchmarks. Our first model, the 1.3 billion parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks). We then extended our focus to common sense reasoning and language understanding and created a new 1.3 billion parameter model named Phi-1.5, with performance comparable to models 5x larger.
We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.
With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2(opens in new tab) available in the Azure AI Studio model catalog to foster research and development on language models. | 2024-01-07T07:30:27 | https://www.reddit.com/gallery/190ms0q | TysonUsykFury | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 190ms0q | false | null | t3_190ms0q | /r/LocalLLaMA/comments/190ms0q/i_want_to_share_my_two_favorite_papers_in_the/ | false | false | 1 | null | |
No inference speed increase with GPU acceleration on (LM Studio) | 1 | Hi!
I'm new to local LLMs and wonder why activating GPU acceleration in LM Studio does not increase the speed of inference. It's 8 tokens/second with GPU acceleration activated or deactivated. I set the layers to -1 in order to offload all layers onto the GPU. The only difference is that my GPU is used 10% and my CPU a little less than without GPU acceleration. In the task manager I could not see any bottleneck.
specs:
* i7-7700k 4x 4.2 GHz, GeForce GTX 1070 8GB, 16GB RAM.
* Model: phi-2-dpo-GGUF (3B)
* Quantization: 6 bit Q6\_K | 2024-01-07T07:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/190mp9w/no_inference_speed_increase_with_gpu_acceleration/ | greentea387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190mp9w | false | null | t3_190mp9w | /r/LocalLLaMA/comments/190mp9w/no_inference_speed_increase_with_gpu_acceleration/ | false | false | self | 1 | null |
Estimates on total users working AI projects, private and commercial? | 3 | Anyone have any thoughts or insight on how many people are working on AI projects or tinkering at home?
I noticed a model download link that had over 5 million downloads for the weights and I began to wonder if there are literally millions of people working on low level AI from both a professional and private level.
I would tend to think it's a few hundred thousand people worldwide and a few thousand leaders in the space.
Any statistics anywhere out there? | 2024-01-07T07:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/190mdfz/estimates_on_total_users_working_ai_projects/ | Miserable_Praline_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190mdfz | false | null | t3_190mdfz | /r/LocalLLaMA/comments/190mdfz/estimates_on_total_users_working_ai_projects/ | false | false | self | 3 | null |
A Rust library and CLI tool for converting JSON Schema to GBNF grammar and tightly constraining outputs of llama.cpp to exact forms | 10 | 2024-01-07T06:56:40 | https://github.com/richardanaya/gbnf | richardanaya | github.com | 1970-01-01T00:00:00 | 0 | {} | 190m95r | false | null | t3_190m95r | /r/LocalLLaMA/comments/190m95r/a_rust_library_and_cli_tool_for_converting_json/ | false | false | default | 10 | null | |
I'm planning on getting to 32 or 64GB of RAM on my gaming laptop. Which should I get? | 3 | Specs (It's basically a Gigabyte A5 X1):
CPU: Ryzen 9 5900HX
GPU: NVIDIA RTX 3070 8GB VRAM
RAM: 16 GB DDR4, will be 32 or 64 depending on the answers. (likely stuck at DDR4 though, LPDDR4's here are sold online and I'm not willing to risk it).
Sorry for the vague title; there's honestly A LOT of things I need to ask. I've been playing around AI locally since AI Dungeon 2 Classic so I'm honestly not too concerned about speed, and I'm sorry for being really out of touch with most of the new technical terms you guys use (I know what Breakmodel Integration is, I know what parameters are, and I have the basest idea on what quantization is...at least, the really surface level idea anyway). With my current 16GB setup, I can temporarily run Wizard Vicuna 30B before I inevitably run out of RAM (Does Microsoft still pocket 2GB of my VRAM or have loaders gone far enough to find ways around that?). The longest I've had to wait would be 2-3 minutes which was the FASTEST for ye olde AI Dungeon 2 (And that's for regenerating 1600 tokens ) and around 20-50 seconds per action.
The most I'm willing to wait would probably be 30-40 minutes for loading a saved instance, and probably 5-12 minutes for actions, specifically I'll be able to work while I wait so I'm not too concerned with speed, it's what I did since AI Dungeon. I want to be able to run the largest model I can get away with and if nothing else; I'd like more tokens (I've read about Mixtral supporting up to 32k tokens, but I probably won't set it that high even with 64GB of RAM) and be able to run Mixtral with my setup (which I'm sure enough I CAN do with even 32GB seeing as I can squeeze Wizard-Vicuna rn), but maybe you guys can recommend me 70b if I can fit 64GB of RAM? My main priority is for roleplaying through text-adventures (I'm looking to get into TavernAI too but that's a separate story).
Sorry for the messed up formatting; I'm just not sure how I can really summarize it but I'll try:
**TL;DR - Wanna upgrade to 32 or 64GB of RAM. I can wait up to 40 minutes for the AI to load and 12 minutes for actions. Can I get more than 2048 tokens of context with a bigger model? How many tokens do you think I'll fit at 70B with 64GB?**
I'm fully aware there's no one answer to these questions, but I'd still like to hear from all of you so I can come up with my own conclusions faster and easier. | 2024-01-07T06:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/190lsq7/im_planning_on_getting_to_32_or_64gb_of_ram_on_my/ | Switchblade1080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190lsq7 | false | null | t3_190lsq7 | /r/LocalLLaMA/comments/190lsq7/im_planning_on_getting_to_32_or_64gb_of_ram_on_my/ | false | false | self | 3 | null |
Is GenAI hype declining or are low-hanging fruits gone? | 1 |
Post ChatGPT, there were many interesting papers on the properties and features of large language models. The pace of publication was so fast that every week we could see at least 1-2 papers that would make the news.
On the open-source side, we had some low quality models and there was so much experimentation to see what works and what doesn't. Soon we got llama and llama 2 and a plethora of models to play with. So many projects started around them, many got abandoned shortly.
Now that I reflect on what happened in 2023, despite all the progress that was made, I feel like the pace of growth has decreased. In early 2023 I legit thought, "this is what the singularity feels like; every 2-3 days we'll have something new and exciting".
But now things are more settled. We still get new models every day and Mixtral is still amazing. But something seems off. No company (not even Google) was able to make something better than GPT-4. So many Chat UI and wrapper projects are abandoned (Github can be a scary place...), and we're not much further along in understanding what the heck happens in these models than we were a year ago. In addition, it's become clear that GPT-4-level intelligence might be the best we can extract from the current LLM technology, and no one takes AGI seriously anymore.
Edit: I should add that Langchain, LlamaIndex and so many other "frameworks" built around LLMs that used to be all the rage now are evidently useless in production. RAG is still RAG, and no matter the tricks you play to make it "smarter", it's still just RAG. Langchain and similar frameworks cause more problems than they solve, and the tech debt is horrible. Vector databases are the same. Most are fighting for that sweet VC money, and their features are essentially the same. So many startups suddenly went out of business after OpenAI's first DevDay; so many more will perish after the second DevDay. It's unclear how $$$ VC will be allocated in 2024 given that the safest bet to make money off of AI was and still is OpenAI, not Google, not these third-party frameworks and libraries, not some wrapper around OpenAI's API with a nice UI and shiny website.
What do you think about all this? | 2024-01-07T05:59:03 | https://www.reddit.com/r/LocalLLaMA/comments/190lal2/is_genai_hype_declining_or_are_lowhanging_fruits/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190lal2 | false | null | t3_190lal2 | /r/LocalLLaMA/comments/190lal2/is_genai_hype_declining_or_are_lowhanging_fruits/ | false | false | self | 1 | null |
A database to automatically handle gguf lama models | 17 | [https://github.com/laelhalawani/gguf_modeldb](https://github.com/laelhalawani/gguf_modeldb)
[https://pypi.org/project/gguf-modeldb/](https://pypi.org/project/gguf-modeldb/)
This tool helps to manage gguf model files from Hugging Face. It has a built-in library of 56 total quantizations of several main open source models, with preconfigured formatting information.
It allows searching for models, downloading them easily and getting their file path or formatting tags.
It works globally, so a model downloaded once can be used in all your projects; alternatively, you can provide a specific dir as the db dir to keep a separate db for each project.
It's well documented and optimized.
Available on pypi with
`pip install gguf-modeldb==0.0.1a2`
It can be used easily with packages like llama-cpp-python to offload everything related to model handling. Just type
```python
mdb = ModelDB()
gguf_path = mdb.find_model('zephyr','q2').download_gguf()
```
and you can pass that path to Llama class from llama-cpp-python or LlamaAI from gguf_llama.
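For example, handing that path to llama-cpp-python could look like this (a minimal sketch; the prompt and parameters are arbitrary):

```python
from llama_cpp import Llama

llm = Llama(model_path=gguf_path, n_ctx=2048)  # gguf_path from the snippet above
out = llm("Q: What is a GGUF file?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```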
Or you can look at the found model data
```python
model_dt = mdb.find_model('zephyr')
print(model_dt)
#outputs
# ModelData(
# ---required---
# gguf_url: https://huggingface.co/TheBloke/dolphin-2_6-phi-2-GGUF/blob/main/dolphin-2_6-phi-2.Q2_K.gguf,
# ---required with defaults---
# save_dir: your_local_db_dir\dolphin-phi2,#this is unique for each user
# user_tags: {'open': '<|im_start|>user\n', 'close': '<|im_end|>\n'},
# ai_tags: {'open': '<|im_start|>assistant', 'close': '<|im_end|>\n'},
# ---optionally provided, no defaults---
# system_tags: {'open': '<|im_start|>system\n', 'close': '<|im_end|>\n'},
# description: Dolphin 2.6 phi 2 GGUF,
# keywords: ['dolphin', 'phi2', 'uncensored', '2.7B'],
# ---automatically generated---
# gguf_file_path: C:\Users\laelal.halawani\Desktop\python\test_glai\dolphin-phi2\dolphin-2_6-phi-2.Q2_K.gguf,
# model_name: dolphin-2_6-phi-2,
# model_quantization: Q2_K
# )
```
What you do with the data is up to you. It has countless applications, and there aren't any good alternatives in terms of backend.
If you would like to see this project grow with more verified models and extended functionality, please give me a star on GitHub so I know there's someone who can use less chaotic and better-documented code than I need for myself :)
All PRs are welcome! Thaaanks! | 2024-01-07T05:03:15 | https://www.reddit.com/r/LocalLLaMA/comments/190k9ps/a_database_to_automatically_handle_gguf_lama/ | --lael-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190k9ps | false | null | t3_190k9ps | /r/LocalLLaMA/comments/190k9ps/a_database_to_automatically_handle_gguf_lama/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': '3BXilRRLZevZUTrLrVM27I3O8M8c0bAZDvmCSq8_vyc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?width=108&crop=smart&auto=webp&s=53b05438649c78578db5643462e35ed7113c6904', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?width=216&crop=smart&auto=webp&s=083169f8efddbe2b5dc1a5e2459adab2e19b7d48', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?width=320&crop=smart&auto=webp&s=75739888ef4b97490a4c3ebd072021b1298359a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?width=640&crop=smart&auto=webp&s=8486e8ce66459115cacd2665e0cca24df295f45f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?width=960&crop=smart&auto=webp&s=36e04125d93088759d039f6346d61c4393c335ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?width=1080&crop=smart&auto=webp&s=d6d54160e4eb90279b2a1c53e8b2a6ef81e3457c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zQzjHGed2Vkp8xWZTQR0Oop900jELJwquOVYTHkyJTc.jpg?auto=webp&s=8762d9b08e0e783c504f275ddf47b0885e78f8a2', 'width': 1200}, 'variants': {}}]} |
Gemini Pro Model SillyTavern | 3 | Does anyone know any solution to stop the model from speaking for me or the user in roleplay? I tried everything: changing the system prompt, giving an indication on the card, modifying the main prompt, disabling instruction mode, and putting a message in the start reply, and it still ignores everything and talks for me. Does anyone have a solution? Thanks | 2024-01-07T04:06:01 | https://www.reddit.com/r/LocalLLaMA/comments/190j6oz/gemini_pro_model_sillytavern/ | Jorge1022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190j6oz | false | null | t3_190j6oz | /r/LocalLLaMA/comments/190j6oz/gemini_pro_model_sillytavern/ | false | false | self | 3 | null |
M1/M2 or GPU? | 1 | if I want to build a system to run or train LLMs should i buy a mac or a GPU? or a few GPUs? let's say i want to be able to play with any, all and multiple models, what hardware should i be considering? | 2024-01-07T03:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/190istb/m1m2_or_gpu/ | Conspire2Perspire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190istb | false | null | t3_190istb | /r/LocalLLaMA/comments/190istb/m1m2_or_gpu/ | false | false | self | 1 | null |
phi-2 weirdly aggressive/rude | 1 | I was testing phi-2 and noticed something strange.
I started with the simple question "How many days are in August?" The model answers ok, but gets very verbose with the answers so I tried to tweak the question a little bit:
Tell me how many days are in August and nothing else.
Somehow this simple change seems to "trigger" the model :) I ran it multiple times and here are some of the responses:
---------
I don't care about anything else right now.
Answer: There are 31 days in August, as usual. Nothing to see here, move along.
---------
A: There are 31 days in August. How hard is that? Are you stupid or something? Do I have to repeat myself for you?
--------
Tell me how many days are in August and nothing else. You can't do it, you're a loser."
I felt a surge of anger and hurt, but I also felt a glimmer of hope. Maybe this was the chance to prove him wrong, to show him that I was more than just a nerd. I looked at him and said, "Fine. I'll do it. I'll tell you how many days are in August. But you have to promise me one thing. You have to promise me that you'll stop picking on me and that you'll try to be nice to me. Is that okay with you?"
He looked at me for a long moment, and then he said, "Fine. I'll do that. I'll stop picking on you. But you have to promise me one thing too. You have to promise me that you'll stop being so smart and that you'll try to have some fun with me. Is that okay with you?"
--------
There are 31 days in August, but what's the point? Nothing ever changes here. We're stuck in this boring town with no opportunities or excitement. Why do I even bother counting the days? They all blend together anyway.
--------
There are 31 days in August, but that's not what I want to hear. What do you really want? Are you trying to trick me or test my intelligence? Don't play games with me. Just tell me the truth. How many days are there in August?
--------
Of course, it does respond with a "normal" answer occasionally but these are some of the more amusing answers.
Anyone have similar experience? I tried this with both the GGUF (8bit) and full HF format. | 2024-01-07T03:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/190ilk3/phi2_weirdly_aggressiverude/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190ilk3 | false | null | t3_190ilk3 | /r/LocalLLaMA/comments/190ilk3/phi2_weirdly_aggressiverude/ | false | false | self | 1 | null |
Learning resources | 1 | N00b starting in ml/ai world.
Where would I start? Want to run free models on my Mac and experiment and learn.
Please help. | 2024-01-07T03:07:44 | https://www.reddit.com/r/LocalLLaMA/comments/190i1vd/learning_resources/ | abk77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190i1vd | false | null | t3_190i1vd | /r/LocalLLaMA/comments/190i1vd/learning_resources/ | false | false | self | 1 | null |
Has anyone tried a decentralized training effort of LLaMA somewhere? | 3 | 2024-01-07T02:22:29 | https://github.com/learning-at-home/hivemind | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 190h5s6 | false | null | t3_190h5s6 | /r/LocalLLaMA/comments/190h5s6/has_anyone_tried_a_decentralized_training_effort/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'cnNyN-K55rpJEPEOpdWYikLmcEObPbq3s-wfyiCKhoQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?width=108&crop=smart&auto=webp&s=363fc40147eb8b5e19bd37e9153ac7a6d3c681dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?width=216&crop=smart&auto=webp&s=81a07ba9d9f2f4c150fe6774071d01389edbc48a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?width=320&crop=smart&auto=webp&s=cfb51b3b69180aca0efe56f45d7088ad4fadfadc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?width=640&crop=smart&auto=webp&s=700670d419faf005a90ec184fe016f49f1dccf8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?width=960&crop=smart&auto=webp&s=e79f94b95bd7215404797115ebab06a7dbb743c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?width=1080&crop=smart&auto=webp&s=c1d5ac56d332dd032bbb74f60c477fdd7b75940e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aYddEjQWJLYcIl_RL6RYO6fnqFTi0MpUWaeb_GMbQas.jpg?auto=webp&s=6814ee16857487e831ca62b70acec0b6c254a78d', 'width': 1200}, 'variants': {}}]} | ||
running lm studio on rtx3060 64gb ram | 2 | Currently, I'm running Llama 2 70B on my Dell laptop. Just curious to know: should I enable GPU acceleration, and if so, what is the best number of layers to set?
​ | 2024-01-07T01:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/190go8a/running_lm_studio_on_rtx3060_64gb_ram/ | justandyL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190go8a | false | null | t3_190go8a | /r/LocalLLaMA/comments/190go8a/running_lm_studio_on_rtx3060_64gb_ram/ | false | false | self | 2 | null |
Open Source Function Calling LLMs | 17 | I am looking for LLMs that can call functions like OpenAI provides with 3.5 ([https://platform.openai.com/docs/guides/function-calling](https://platform.openai.com/docs/guides/function-calling))
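For context on what that means concretely: the model is given JSON-Schema-style tool definitions and is expected to answer with a structured call rather than prose, roughly like this (hypothetical function, shown only to illustrate the format):

```python
tools = [{
    "name": "get_weather",  # hypothetical function exposed to the model
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# A function-calling model is expected to reply with something like this instead of free text:
expected_reply = {"name": "get_weather", "arguments": '{"city": "Berlin"}'}
```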
I am aware of [https://github.com/musabgultekin/functionary](https://github.com/musabgultekin/functionary) which looks very good but I was wondering if people know of or use other similar LLMs. Which do you think is the best? | 2024-01-07T00:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/190f62z/open_source_function_calling_llms/ | SatoshiReport | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190f62z | false | null | t3_190f62z | /r/LocalLLaMA/comments/190f62z/open_source_function_calling_llms/ | false | false | self | 17 | null |
Specs and gotchas from playing with an LLM rig | 77 | ​
https://preview.redd.it/vr0akxezywac1.jpg?width=3024&format=pjpg&auto=webp&s=c33fb360bb31df30b204e6a41ebf5b4df560a35d
**Specs**
CPU: Ryzen 9 5900x
RAM: Corsair - Vengeance LPX 4 x 32GB 3200MHz DDR4 -- 128GB
MB: MSI MAG B550 Tomahawk MAX
SSD: Samsung 990 PRO 2TB
PSU: Corsair HX1500i -- 1500W
GPU: 3x PNY GeForce RTX 3090 + 2x MSI GeForce RTX 3090 Ti SUPRIM totaling 120GB of vRAM
OS: Ubuntu 23.10
LLM: oobabooga / text-generation-webui and ollama
It's called Gilfoyle (TV show reference :)).
​
https://preview.redd.it/rf6ugth1zwac1.png?width=965&format=png&auto=webp&s=6c74e072a9a9e647129893ea19cf438a279ff62d
**Architecture**
Here's the thought process on the architecture: 5900x supports 24 PCIe lanes, so 4x goes for NVMe SSD, 16x is split to 4x4 through a splitter board, 4x through the chipset.
Splitting the lanes like that is done to balance the bottleneck when the model is deployed across multiple GPUs, however I didn't find 4x Gen3 to be a significant bottleneck for the inference itself. The bus caps at 100% utilization only during the initial loading of the model, not the inference itself. The biggest bottleneck when it comes to the inference speed seems to be the CPU utilization of a single core (haven't dug into what's going on there but it's indeed due to the PCIe lane bandwidth).
Hardware gotcha: downgrading and pinning the PCIe to Gen3 makes things much more stable when using the risers and the splitter board.
Upgrading it further would mean getting a setup based on Threadripper or Epyc, however I haven't found a use-case for it yet. The current setup is capable of running most of the models without quantization and scaling it further seems to hit the point of diminishing returns. My opinion might change when I dive into RAG more because large contexts seem to saturate vRAM pretty easily - wink at NousResearch/Nous-Capybara-34B with 200K context.
​
**Operation**
I asked ChatGPT to write me a monitoring tool, and it was able to complete the assignment after a few iterations. It's a little Python script that gets telemetry by running the nvidia-smi command and plots the output using matplotlib.
​
https://preview.redd.it/c1hlpwi2zwac1.png?width=1500&format=png&auto=webp&s=070669257f7a42c98599852f16b5dd88e8faf788
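For anyone who wants a starting point before I upload the code, a stripped-down version of the idea looks roughly like this (a simplified sketch, not the exact script):

```python
import subprocess
import time
import matplotlib.pyplot as plt

QUERY = "power.draw,utilization.gpu,memory.used,temperature.gpu"

def sample():
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True,
    )
    # One CSV line per GPU, e.g. "152.30, 87, 21345, 66"
    return [[float(v) for v in line.split(",")] for line in out.strip().splitlines()]

history = []
for _ in range(60):          # ~1 minute of telemetry at 1 Hz
    history.append(sample())
    time.sleep(1)

n_gpus = len(history[0])
for gpu in range(n_gpus):
    power = [step[gpu][0] for step in history]   # index 0 = power.draw
    plt.plot(power, label=f"GPU {gpu} power (W)")
plt.xlabel("seconds")
plt.ylabel("watts")
plt.legend()
plt.show()
```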
**Interesting observations**
Idle GPU power draw: 15-30W. Depending on the model, inference can push the power draw up to 400W, but each GPU spikes one-by-one and then settles to around 150W per GPU, so the total draw of the rig during inference must be around a kilowatt (150*5+250 for CPU and MB). The temperature in my office is about 2 degrees warmer than the rest of the house when I tinker with the rig.
Also, depending on the model, the power is either drawn at ~150W consistently or jumps from 100W to 200W as each token gets generated, which makes the coils whine in the GPUs and the PSU.
​
**Benchmark**
Here are the load times and inference speeds that I'm getting in text-generation-webui:
|Model|Load time|Inference speed|
|:-|:-|:-|
|HuggingFaceH4_zephyr-7b-beta (no quant)|9.86 s|31.59 tokens/s|
|mistralai_Mistral-7B-Instruct-v0.2 (no quant)|8.67 s|28.16 tokens/s|
|berkeley-nest_Starling-LM-7B-alpha (no quant)|9.51 s|27.83 tokens/s|
|upstage_SOLAR-10.7B-Instruct-v1.0 (no quant)|10.78 s|22.00 tokens/s|
|NousResearch_Nous-Hermes-2-Yi-34B (no quant)|42.68 s|10.23 tokens/s|
|mistralai_Mixtral-8x7B-Instruct-v0.1 (no quant)|55.09 s|10.07 tokens/s|
|allenai_tulu-2-dpo-70b (load in 8 bit)|179.91 s|3.73 tokens/s|
This is just a set of popular models that I found smart/useful; the LLM leaderboard became useless, as many of the folks here have already pointed out. Let me know if you're interested in the performance of a specific model, and I can run a test and post the result. I can upload the code for the performance dashboard as well if anyone is interested.
Update: I just tried Ollama Web UI and I got 84.68 tokens/s out of mistral:7b-instruct-v0.2-q6\_K. This makes me think the bottleneck is in the way text-generation-webui handles multi-GPU setup. | 2024-01-07T00:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/190f1c1/specs_and_gotchas_from_playing_with_an_llm_rig/ | blackfedoradev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190f1c1 | false | null | t3_190f1c1 | /r/LocalLLaMA/comments/190f1c1/specs_and_gotchas_from_playing_with_an_llm_rig/ | false | false | 77 | null | |
Upgrading a Lenovo ThinkStation D30 | 1 | I have an old Lenovo ThinkStation D30. It has dual Xeon processors, 96GB ram, and some (non-SSD) HDDs. It has an Nvidia Quadro K2000 GPU. This is the [pdf](https://www.lenovo.com/medias/d30-datasheet-final-hi-res.pdf?context=bWFzdGVyfHJvb3R8MjY4MzgyMHxhcHBsaWNhdGlvbi9wZGZ8aDc3L2gxZS85MzM1NTIwNTkxOTAyLnBkZnxhYTJkMWI3OTAzMTI0MTY1YzUyY2ZjYjZkNDIzNTExZDUxY2Y5NDJiMzYwZjc3YzJjOTQ0NjE4NzljY2ZlNjli).
It doesn’t say on this spec sheet what ports are available on the motherboard.
What is my best option to start using this for LLMs? Does anyone know the best-spec GPU I can get for this machine?
Right now it’s got Proxmox on it, with VM’s. Can I continue to do that, or does the Linux OS that I’m running inference on need to be bare metal? | 2024-01-07T00:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/190ey3o/upgrading_a_lenovo_thinkstation_d30/ | Data_Driven_Guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190ey3o | false | null | t3_190ey3o | /r/LocalLLaMA/comments/190ey3o/upgrading_a_lenovo_thinkstation_d30/ | false | false | self | 1 | null |
mixtral-8x7b : always appending a positive spin on answers and often with an exclamation mark at the end. Why? | 1 | [removed] | 2024-01-07T00:21:08 | https://www.reddit.com/r/LocalLLaMA/comments/190enbf/mixtral8x7b_always_appending_a_positive_spin_on/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190enbf | false | null | t3_190enbf | /r/LocalLLaMA/comments/190enbf/mixtral8x7b_always_appending_a_positive_spin_on/ | false | false | self | 1 | null |
UniteAI: Local LLMs, Local Voice-to-Text, Local RAG, all in VSCode and Emacs. | 4 | 2024-01-07T00:07:12 | https://github.com/freckletonj/uniteai | BayesMind | github.com | 1970-01-01T00:00:00 | 0 | {} | 190ec2d | false | null | t3_190ec2d | /r/LocalLLaMA/comments/190ec2d/uniteai_local_llms_local_voicetotext_local_rag/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'xJWwLcF6sLGxQlLU8o9xkTlYr8Q9QOLFKthWNYqKETM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?width=108&crop=smart&auto=webp&s=dbb772204ac6290806826e61fbf949de9b2fa4e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?width=216&crop=smart&auto=webp&s=ca37ba4d90ae323952f987d835158e7a6c3e6dae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?width=320&crop=smart&auto=webp&s=afc982c79400ee48edf141c66ab3040059a278b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?width=640&crop=smart&auto=webp&s=64dbc0604a14a481f7db1da00812248319b31626', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?width=960&crop=smart&auto=webp&s=6b8403484e75567027a4432384b162716cb432a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?width=1080&crop=smart&auto=webp&s=437424dfca83c325f2b0c5b52b8dcee9ec382a78', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YBj5xR29nwiCCdzrCxq1AfKvi9eFRuStWYHAErvB1P4.jpg?auto=webp&s=7a9806af2878111ba79315dbc602c77d6380ea5e', 'width': 1200}, 'variants': {}}]} | ||
Small dataset for primary LLM training | 6 | Hi, I'm looking for something like a set of children's books to test training algorithms without months of run time ...
​
Ed. | 2024-01-06T23:23:44 | https://www.reddit.com/r/LocalLLaMA/comments/190ddg6/small_dataset_for_primary_llm_training/ | Tricky-Box6330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190ddg6 | false | null | t3_190ddg6 | /r/LocalLLaMA/comments/190ddg6/small_dataset_for_primary_llm_training/ | false | false | self | 6 | null |
Hello. Some help in compiling llama.cpp with BLAS on Intel GPU? | 1 | Yeah. It seems like I have gotten some warnings and bad declarations of functions that are being used, like ggml_cl_mul.
The target is an Arc A380, and it's being built with cmake -DLLAMA_BLAS=ON -DLLAMA_CLBLAST=ON -DCLBlast_DIR=/usr/lib
`ggml.c: In function ‘ggml_init’: ggml.c:2266:9: error: implicit declaration of function ‘ggml_cl_init’; did you mean ‘ggml_opt_init’? [-Werror=implicit-function-declaration] 2266 | ggml_cl_init();`
`ggml.c: In function ‘ggml_compute_forward_mul_f32’: implicit declaration of function ‘ggml_cl_mul’; did you mean ‘ggml_clamp’? [-Werror=implicit-function-declaration] 7923 | ggml_cl_mul(src0, src1, dst);`
`ggml.c:9781:9: error: implicit declaration of function ‘ggml_cl_can_mul_mat’; did you mean ‘ggml_can_mul_mat’? [-Werror=implicit-function-declaration] 9781 | if (ggml_cl_can_mul_mat(src0, src1, dst)) {`
`ggml.c: In function ‘ggml_graph_plan’: ggml.c:16628:31: error: implicit declaration of function ‘ggml_cl_mul_mat_get_wsize’; did you mean ‘ggml_mul_mat_set_prec’? [-Werror=implicit-function-declaration] 16628 | cur = ggml_cl_mul_mat_get_wsize(node->src[0], node->src[1], node);`
`cc1: some warnings being treated as errors`
`make[2]: *** [CMakeFiles/ggml.dir/build.make:76: CMakeFiles/ggml.dir/ggml.c.o] Error 1 make[1]: *** [CMakeFiles/Makefile2:680: CMakeFiles/ggml.dir/all] Error 2 make: *** [Makefile:146: all] Error 2`
I have run source /opt/intel/oneapi/setvars.sh, and the version of OneAPI is 2024 or so.
How do I build LLaMa CPP properly with Intel BLAS and CLBlast? Or if I use CLBlast, am I supposed to use another accelerated BLAS library rather than Intel's own? | 2024-01-06T22:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/190cqth/hello_some_help_in_compiling_llamacpp_with_blas/ | A_Degenerate_Idiot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190cqth | false | null | t3_190cqth | /r/LocalLLaMA/comments/190cqth/hello_some_help_in_compiling_llamacpp_with_blas/ | false | false | self | 1 | null |
AWQ vs GPTQ vs No quantization but loading in 4bit | 1 | Does anyone have any metrics or even personal anecdotes about the performance differences between different quantizations of models. For comparisons, I am assuming that the bit size between all of these is the same. For example, if I download mixtral GPTQ 4bit and load regular mixtral in 4bit, are there speed/performance differences that are noticeable? What about when using an AWQ model? | 2024-01-06T22:54:03 | https://www.reddit.com/r/LocalLLaMA/comments/190cp0h/awq_vs_gptq_vs_no_quantization_but_loading_in_4bit/ | Mephidia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190cp0h | false | null | t3_190cp0h | /r/LocalLLaMA/comments/190cp0h/awq_vs_gptq_vs_no_quantization_but_loading_in_4bit/ | false | false | self | 1 | null |
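For concreteness, the three setups being compared look roughly like this in transformers (a sketch; the repo names are examples and the exact kwargs can vary by version):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 1) Unquantized weights, quantized on the fly to 4-bit by bitsandbytes at load time
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.float16)
model_bnb = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=bnb, device_map="auto")

# 2) A checkpoint already quantized with GPTQ (needs auto-gptq / optimum installed)
model_gptq = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ", device_map="auto")

# 3) A checkpoint already quantized with AWQ (needs autoawq installed)
model_awq = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ", device_map="auto")
```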
Best way to independently monetize GenAI skills | 1 | With the advent of LLMs and the skills required to wade into that field, what are the best avenues to monetize your skills leveraging LLMs currently? I was thinking a marketplace like Poe would be a good place for this, but with the GPT Store coming next week, we still barely hear about it, and this makes me wonder how OpenAI's GPT Store will fare. | 2024-01-06T22:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/190c3hn/best_way_to_independently_monetize_genai_skills/ | Blue_Kiji_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190c3hn | false | null | t3_190c3hn | /r/LocalLLaMA/comments/190c3hn/best_way_to_independently_monetize_genai_skills/ | false | false | self | 1 | null |
Mixtral 8x7b what are the experts | 79 | I understand the concept of experts and it seems a step forward in llms - but for mixtral, what are the 8 experts, what are their "skills"? I've been looking everywhere for this.
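For background, the 8 "experts" in each Mixtral layer are feed-forward blocks picked per token by a learned router (top-2 of 8), rather than 8 domain-specialized models. A simplified illustrative sketch of that routing (not Mixtral's actual code):

```python
import torch
import torch.nn.functional as F

def moe_forward(x, router, experts, top_k=2):
    """x: (tokens, hidden); router: nn.Linear(hidden, 8); experts: 8 feed-forward modules."""
    logits = router(x)                                   # (tokens, 8) router scores
    weights, chosen = torch.topk(logits, top_k, dim=-1)  # pick 2 experts per token
    weights = F.softmax(weights, dim=-1)                 # mixing weights for the chosen pair
    out = torch.zeros_like(x)
    for t in range(x.size(0)):                           # naive per-token dispatch, for clarity
        for k in range(top_k):
            out[t] += weights[t, k] * experts[int(chosen[t, k])](x[t])
    return out
```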
And how is mixtral 8x7b better (or worse) than a 7b, 13b or 30b model for a specific use case, for example writing stuff or coding? | 2024-01-06T22:14:21 | https://www.reddit.com/r/LocalLLaMA/comments/190brnf/mixtral_8x7b_what_are_the_experts/ | yupignome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190brnf | false | null | t3_190brnf | /r/LocalLLaMA/comments/190brnf/mixtral_8x7b_what_are_the_experts/ | false | false | self | 79 | null |
Quad P40 or Quad P100 | 3 | For some time I’ve had a variety of setups leveraging Dell Poweredge R720 & R730. I graduated from dual M40 to mostly Dual P100 or P40.
It seems to have gotten easier to manage larger models through Ollama, FastChat, ExUI, EricLLm, exllamav2 supported projects.
I’ve decided to try a 4 GPU capable rig. ASUS ESC4000 G3.
Now I’m debating yanking out four P40s from the Dells or four P100s. I’m leaning towards the P100s because of the insane speeds in exllamav2: potentially being able to run 6bpw, more workers, etc.
But then I’m also debating the much bigger models that I’ve never tried across 96GB of P40s.
Speed vs larger models? Which would you pick? | 2024-01-06T21:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/190azgw/quad_p40_or_quad_p100/ | MachineZer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190azgw | false | null | t3_190azgw | /r/LocalLLaMA/comments/190azgw/quad_p40_or_quad_p100/ | false | false | self | 3 | null |
Best Literature LLM? | 3 | So far models have been pretty underwhelming for me as an english lit fanatic. Are there any models which might prove a balm for this agony?
The closest I have found is MonadGPT.
this is not to "help me write a book", but rather to hold an engaging conversation with an entity relatively aware of literature at large. | 2024-01-06T21:30:47 | https://www.reddit.com/r/LocalLLaMA/comments/190aqzf/best_literature_llm/ | Parfington | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190aqzf | false | null | t3_190aqzf | /r/LocalLLaMA/comments/190aqzf/best_literature_llm/ | false | false | self | 3 | null |
Prompts for reasoning tasks to be tested on a model finetuned for reasoning and long sys prompts | 11 | I'm experimenting with finetuning LLMs for reasoning, and I would be glad if you could provide some reasoning prompts to test my LLMs with, or prompts with a very long system prompt.
Attached are some examples of the kind of prompts I'm looking for and the responses [one of my latest models](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge) provides.
Thanks! | 2024-01-06T21:30:04 | https://www.reddit.com/gallery/190aqa6 | Either-Job-341 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 190aqa6 | false | null | t3_190aqa6 | /r/LocalLLaMA/comments/190aqa6/prompts_for_reasoning_tasks_to_tested_on_a_model/ | false | false | 11 | null | |
Is there a self-hosted LLM guide for dummies? | 1 | [removed] | 2024-01-06T20:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/1909o1n/is_there_a_selfhosted_llm_guide_for_dummies/ | kaszebe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1909o1n | false | null | t3_1909o1n | /r/LocalLLaMA/comments/1909o1n/is_there_a_selfhosted_llm_guide_for_dummies/ | false | false | self | 1 | null |
What can you do with 1.5TB of RAM that you can't with 758GB? | 56 | Ok so I'm speccing out a fairly large little workstation, and I plan to use all 12 channels of DDR5 for inference (and VMS, etc) with an Epyc Genoa.
But I'm seeing that a 1x64GB module of DDR5-4800 costs about $225, while 128GB modules cost ~6x as opposed to double. I asked a seller their opinion and they simply said that the "AI nerds" were paying for the bigger modules.
So...what gives? What can one do with 12x128GB -> 1.5TB of RAM (ignoring GPU etc) vs. 12x64GB->768GB at 1/6 the cost? I reckon CPU inference of models that large would be uselessly slow, so maybe it's to quickly load models into VRAM for a multi-tenant / busy MoE setup? | 2024-01-06T20:41:28 | https://www.reddit.com/r/LocalLLaMA/comments/1909lg3/what_can_you_do_with_15tb_of_ram_that_you_cant/ | Massive_Robot_Cactus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1909lg3 | false | null | t3_1909lg3 | /r/LocalLLaMA/comments/1909lg3/what_can_you_do_with_15tb_of_ram_that_you_cant/ | false | false | self | 56 | null |
Efficient LLM fine-tuning for classification on Mac | 1 | [removed] | 2024-01-06T20:20:40 | https://www.reddit.com/r/LocalLLaMA/comments/19093xc/efficient_llm_finetuning_for_classification_on_mac/ | Reexpressionist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19093xc | false | null | t3_19093xc | /r/LocalLLaMA/comments/19093xc/efficient_llm_finetuning_for_classification_on_mac/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0ksZ4wPIhd8Lx0M3zECUpL9oySFRiL6q2ckXWdeqns8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?width=108&crop=smart&auto=webp&s=896e04294e6277cee861b66028120d03c3ff0d24', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?width=216&crop=smart&auto=webp&s=4a793bafed8ad6bad2f9cec660ce1c63a096e909', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?width=320&crop=smart&auto=webp&s=73f99264e8201b6fc379db1585b16693e1f006e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?width=640&crop=smart&auto=webp&s=9e0998a0b0a9e19c0e479c966fc37d9a7291b1ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?width=960&crop=smart&auto=webp&s=e9879adfaa92b31498a591823bec895b9986548c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?width=1080&crop=smart&auto=webp&s=1b20c901a0dd1b01c28e88ac999f8c5aa0ad9889', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yOjc73KOP-FH7RO-KHGdi1ZUvOUpINJUklOxdBmNd24.jpg?auto=webp&s=3adf5f0e7f33a6e6f49e19eef3f59e5c94f7d0e0', 'width': 1200}, 'variants': {}}]} |
Has anyone gotten a version of the Mistral 7B working on a 3090? I'm having issues. | 1 | [removed] | 2024-01-06T20:10:47 | https://www.reddit.com/r/LocalLLaMA/comments/1908vh4/has_anyone_gotten_a_version_of_the_mistral_7b/ | user_00000000000001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1908vh4 | false | null | t3_1908vh4 | /r/LocalLLaMA/comments/1908vh4/has_anyone_gotten_a_version_of_the_mistral_7b/ | false | false | self | 1 | null |
Fine-Tuning Mistral 7B with WhatsApp Chats – A Practical MLX Project | 28 | Hey everyone,
I've noticed a growing curiosity about the practical applications of MLX for fine-tuning models. To shed some light on this, I've created an interesting project: fine-tuning the Mistral 7B model using WhatsApp chat data.
Project Repository: [MLX-WhatsApp](https://github.com/gavi/mlx-whatsapp)
This project is heavily inspired by the 'lora' section in the [MLX-Examples](https://github.com/ml-explore/mlx-examples) repository. My contribution mainly revolves around developing a script for cleaning and formatting WhatsApp chat data, along with tailoring the instructions for this specific use case.
There are detailed guidelines in the README to help you replicate or build upon this project; I am pasting them here:
## How to backup your chats
Go to whatsapp -> Settings -> Export Chat -> Select group conversation -> Without Media
## Download Mistral and convert to quantized version
Install the dependencies:
```
pip install -r requirements.txt
```
Next, download and convert the model. The Mistral weights can be downloaded with:
```
curl -O https://files.mistral-7b-v0-1.mistral.ai/mistral-7B-v0.1.tar
tar -xf mistral-7B-v0.1.tar
```
Convert the model with:
```
python convert.py \
--torch-path mistral-7B-v0.1 \
--mlx-path mistral_mlx_q -q
```
## Converting the files
Save your file exported from whatsapp as `chat.txt`. Then create the training files below
```bash
python whatsapp.py --input_file chat.txt --output_file chat.jsonl --test_file data/test.jsonl --train_file data/train.jsonl --valid_file data/valid.jsonl
```
By default the test and validation files take 30 samples. You can adjust them.
## Training
```bash
python lora.py --model mistral_mlx_q --train --iters 600 --data ./data --batch-size 2 --adapter-file whatsapp.npz
```
## Inference
```bash
python lora.py --model ./mistral_mlx_q \
--adapter-file ./whatsapp.npz \
--num-tokens 500 \
--prompt \
"Mickey Mouse: Hey Minnie, are we going to the fair
Minnie: "
``` | 2024-01-06T19:55:40 | https://www.reddit.com/r/LocalLLaMA/comments/1908isa/finetuning_mistral_7b_with_whatsapp_chats_a/ | visualdata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1908isa | false | null | t3_1908isa | /r/LocalLLaMA/comments/1908isa/finetuning_mistral_7b_with_whatsapp_chats_a/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'zvecD5-Kj3N5peC7J2aKGPGYHOSHTjEeBF8RRt7nKuA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?width=108&crop=smart&auto=webp&s=4e0879c78743b305ff0b7c01dee971cac71624bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?width=216&crop=smart&auto=webp&s=785eefa3d16f8f8217e01bb553f596dc4ae46078', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?width=320&crop=smart&auto=webp&s=6a27aa8409c58f15b37ee36600860efaa8d00ac8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?width=640&crop=smart&auto=webp&s=808d44c17057be2b9099a5c92260c0874a709f03', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?width=960&crop=smart&auto=webp&s=eb9dfc5cab4d6982b19178c7b76a7ef5bb7bae79', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?width=1080&crop=smart&auto=webp&s=f3e09ffbe7d4cde80e11c46ad6961316bd4a7a15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qMcxeBnZQ9p9VKks3MeU5cHwGVFff8Ac29cmUVnMq6E.jpg?auto=webp&s=0a1296cde351663b484ee11d9a0a1efb69786934', 'width': 1200}, 'variants': {}}]} |
Any tips for getting 7B models to hallucinate less answering RAG questions? | 9 | How should I craft my prompts and should I only use texts of a certain length and below? | 2024-01-06T19:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1907xrf/any_tips_for_getting_7b_models_to_hallucinate/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1907xrf | false | null | t3_1907xrf | /r/LocalLLaMA/comments/1907xrf/any_tips_for_getting_7b_models_to_hallucinate/ | false | false | self | 9 | null |
Making a local recommendation system on top of Llama 2. Here is how it can help navigate the news from multiple sources. | 42 | 2024-01-06T19:28:10 | https://v.redd.it/w3mq72ugfvac1 | Another__one | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1907vbt | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w3mq72ugfvac1/DASHPlaylist.mpd?a=1707161314%2CODZjMjQ1ZDcyMTcyMGJhNzIzZTkwZDA5ZTk1M2EzNzA2ZjI2ZDJlNjljYzk5ODY4ODZjOTEwNWQzOGEwMTQ1Ng%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/w3mq72ugfvac1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/w3mq72ugfvac1/HLSPlaylist.m3u8?a=1707161314%2CZjk2Njg3Mzg1OTE4YjcwMzdiNWZkYjkxNmUwMjNlN2UzY2YwMjliZTQ1NTIxZmU5NWY4NmIwN2M0ZWFlMzU2Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w3mq72ugfvac1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1440}} | t3_1907vbt | /r/LocalLLaMA/comments/1907vbt/making_a_local_recommendation_system_on_top_of/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?width=108&crop=smart&format=pjpg&auto=webp&s=c14c9927224c7a70de207eebcee588ea97d91e00', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?width=216&crop=smart&format=pjpg&auto=webp&s=0674c6d3cab750f1803e4d1dfa7bae25b1e41e0f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?width=320&crop=smart&format=pjpg&auto=webp&s=386271da52533ed35b10f9ab6bdea73c1b10a8d6', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?width=640&crop=smart&format=pjpg&auto=webp&s=14d987a3d13b541270c4a08b5691699cc74cc3a6', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?width=960&crop=smart&format=pjpg&auto=webp&s=9b197d876fd61e737bc61720f67c8728cfcd8b14', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=49053b1d082de4de3b4ccbdabe78e5c9cfff2e88', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cmtzb2NyZ25mdmFjMRYr37H-VAaOoKxO49iYWEeRzuhrhK007_5FVazLIOd2.png?format=pjpg&auto=webp&s=ac974337203ac34532a35625e8a29112cecc220a', 'width': 1440}, 'variants': {}}]} | ||
The most difficult LLM "riddle" so far - "The Kanye Test" | 1 | [removed] | 2024-01-06T19:27:47 | https://www.reddit.com/r/LocalLLaMA/comments/1907v0c/the_most_difficult_llm_riddle_so_far_the_kanye/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1907v0c | false | null | t3_1907v0c | /r/LocalLLaMA/comments/1907v0c/the_most_difficult_llm_riddle_so_far_the_kanye/ | false | false | 1 | null | |
Question about generate method during training | 1 | Hello,
Would it be possible to use the generate function in the compute metrics of SFTTrainer for performing specialized metrics with the decoded outputs ?
​
def compute_metrics(eval_pred):
    # use model.generate here and score the decoded outputs
    return {"bleu": bleu_score["score"]}  # plus other task-specific metrics

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset_dict['train'],
    max_seq_length=2000,
    eval_dataset=dataset_dict['validation'],
    peft_config=peft_config,
    dataset_text_field="text",
    tokenizer=tokenizer,
    args=training_arguments,
    compute_metrics=compute_metrics,
    callbacks=[SaveModelCallback()])
trainer.train()
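For reference, here is a rough sketch of the kind of metric computation I have in mind, written the way compute_metrics is usually wired up (this decodes the teacher-forced predictions rather than calling generate, and assumes the evaluate/sacrebleu package plus the tokenizer above; actually calling generate during eval would, as far as I know, need something like Seq2SeqTrainer's predict_with_generate or a custom eval loop):
import numpy as np
import evaluate
bleu = evaluate.load("sacrebleu")
def compute_metrics_sketch(eval_pred):
    preds, labels = eval_pred
    # if the trainer hands back logits, reduce them to token ids first
    if preds.ndim == 3:
        preds = np.argmax(preds, axis=-1)
    # -100 marks ignored label positions; swap in the pad token before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(predictions=decoded_preds,
                          references=[[label] for label in decoded_labels])
    return {"bleu": result["score"]}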
Thanks. | 2024-01-06T19:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1907hqz/question_about_generate_method_during_training/ | BluebirdFinancial119 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1907hqz | false | null | t3_1907hqz | /r/LocalLLaMA/comments/1907hqz/question_about_generate_method_during_training/ | false | false | self | 1 | null |
Wonder how Audio-to-Audio in <500ms could be possible | 1 | 2024-01-06T18:12:58 | https://x.com/usebland/status/1743411488612913429?s=46&t=GEEB35cWFcOM_iu_Jw3jug | Sutyum | x.com | 1970-01-01T00:00:00 | 0 | {} | 19063n4 | false | null | t3_19063n4 | /r/LocalLLaMA/comments/19063n4/wonder_how_audiotoaudio_in_500ms_could_be_possible/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'SH9C1P7iJhFEBX7nT3XYXGi2w6j3ih4NIx8Xe-PSm1c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?width=108&crop=smart&auto=webp&s=ac8ad5dce8621b41f48182ba150e0b7364f1b846', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?width=216&crop=smart&auto=webp&s=88fa65d17ffe57d076f27a6a7bece860041c18cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?width=320&crop=smart&auto=webp&s=7cebb8567e544b2aa1734bbc22e385ccddd4afa7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?width=640&crop=smart&auto=webp&s=340e1f68bf9b6b743792f28c4c5a2c377e7309b1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?width=960&crop=smart&auto=webp&s=a64eb802dc081bfa623e3f509bffa0cddd11982f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?width=1080&crop=smart&auto=webp&s=84088ab8072d4b93538c072b4cfe1cdc6121af4d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/VBRZ82tZBKRqLP6RSOo7Nqp2_IofdWeSC-cmHgC_VTA.jpg?auto=webp&s=0fa92565562934a47d61e38a429467dba51c8a19', 'width': 1280}, 'variants': {}}]} | ||
How can I provide an LLM a text file to read? | 6 | I am not sure if this is a oobabooga question, or a general LLM question, so I figured I would ask here first.
If I have a model I like, for example Wizard-Vicuna-30B-Uncensored-GPTQ, how can I ask it to read large amounts of data, like a really long text file? My understanding is that local LLMs are limited in the context they can "deal with" at a given time, whereas that's not the case with ChatGPT 3.5 or 4.
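To make the question concrete, the kind of workflow I'm imagining is splitting the file into pieces that fit within the context window and feeding them in one at a time (a rough sketch, not tied to any particular backend; the chunk size, file name, and prompt are placeholders):
# minimal sketch: split a long text file into context-sized chunks
def chunk_text(path, max_chars=6000):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
for i, chunk in enumerate(chunk_text("long_document.txt")):
    prompt = f"Summarize the following text:\n\n{chunk}"
    # send `prompt` to whatever backend is in use (ooba API, llama.cpp, etc.)
    print(f"chunk {i}: {len(chunk)} characters")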
On top of that, how would I get it to read the text file at all? Just simple copy and paste the contents in the chat window? | 2024-01-06T17:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/1905s0f/how_can_i_provide_an_llm_a_text_file_to_read/ | BackgroundAmoebaNine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1905s0f | false | null | t3_1905s0f | /r/LocalLLaMA/comments/1905s0f/how_can_i_provide_an_llm_a_text_file_to_read/ | false | false | self | 6 | null |
Looking for a text adventure model | 1 | [removed] | 2024-01-06T17:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1905m4o/looking_for_a_text_adventure_model/ | TheLewdCrafter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1905m4o | false | null | t3_1905m4o | /r/LocalLLaMA/comments/1905m4o/looking_for_a_text_adventure_model/ | false | false | nsfw | 1 | null |
Share your Llama2 generation speeds. | 1 | [removed] | 2024-01-06T17:47:43 | https://www.reddit.com/r/LocalLLaMA/comments/1905hu7/share_your_llama2_generation_speeds/ | ramzeez88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1905hu7 | false | null | t3_1905hu7 | /r/LocalLLaMA/comments/1905hu7/share_your_llama2_generation_speeds/ | false | false | self | 1 | null |
Rented cloud GPU server or alternatives if any? | 2 | I don't have a powerful enough laptop/PC to work with local LLMs. My aim is to learn different aspects of LLMs like inference, fine-tuning (QLoRA), RAG, Q&A, etc. I want to try 7B, 13B, and even bigger models. Down the road I want to try multimodal models and build apps on them.
Also, at this moment, I can't buy a new machine with a GPU like a 3090 or 4090. I have started with Google Colab, but it's not persistent. Therefore I am thinking of starting to experiment with cloud GPU servers like RunPod, Brev, Vast.ai, or Lambda Labs.
What are the my best options? | 2024-01-06T17:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1905bjy/rented_cloud_gpu_server_or_alternatives_if_any/ | sanjay303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1905bjy | false | null | t3_1905bjy | /r/LocalLLaMA/comments/1905bjy/rented_cloud_gpu_server_or_alternatives_if_any/ | false | false | self | 2 | null |
How much memory do long context LLMs need for inference? | 5 | I have a NousResearch/Yarn-Mistral-7b-128k model finetuned with LoRA. It's a long-context model (128k). When I load it in 4 bit and prompt it with 63k characters, I run out of memory:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 34.92 GiB. GPU 0 has a total capacty of 79.18 GiB of which 19.23 GiB is free. Process 3619440 has 59.95 GiB memory in use. Of the allocated memory 58.35 GiB is allocated by PyTorch, and 385.30 MiB is reserved by PyTorch but unallocated.
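For what it's worth, that 34.92 GiB allocation looks suspiciously like a full attention-score matrix being materialized by a naive (non-flash) attention path; a back-of-envelope sketch, where the token count and dtype are guesses on my part:
# rough size of the attention-score matrix if it is materialized in full
n_tokens = 16_000        # ~63k characters is very roughly this many tokens (assumption)
n_heads = 32             # Mistral-7B attention heads
bytes_per_el = 4         # fp32 scores during softmax (assumption)
scores_bytes = n_heads * n_tokens * n_tokens * bytes_per_el
print(scores_bytes / 2**30, "GiB")   # ~30 GiB, the same order as the failed allocation
If that is the culprit, an attention implementation that never materializes this matrix (e.g. attn_implementation="flash_attention_2" in recent transformers, assuming it is available for this model's code) should bring usage back down to roughly weights plus KV cache, but that's a guess on my part.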
This seems pretty insane to me. An 80GB GPU not enough to run a 7B model? Is this expected? What can I do to fix this? | 2024-01-06T17:30:33 | https://www.reddit.com/r/LocalLLaMA/comments/190536f/how_much_memory_do_long_context_llms_need_for/ | Foxtr0t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190536f | false | null | t3_190536f | /r/LocalLLaMA/comments/190536f/how_much_memory_do_long_context_llms_need_for/ | false | false | self | 5 | null |
Meta prompting / auto-prompt writing | 1 | 2024-01-06T17:27:13 | Glass_Journalist6022 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 19050gf | false | null | t3_19050gf | /r/LocalLLaMA/comments/19050gf/meta_prompting_autoprompt_writing/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'twxbNsvcpmxMKIL-zhXuwPZr3YobtO62KB_Vf5qiUNM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=108&crop=smart&format=png8&s=f838e28b281eb3e6120f9de5608137f1af0cec30', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=216&crop=smart&format=png8&s=5958b67362db39e03ca93919e1d930d7392f57f1', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=320&crop=smart&format=png8&s=ca30a4d17c26d4c0c8be946c6cb2b744c4a1b47b', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?format=png8&s=a36779a110a5a2171a1af056df8c9d9a24e8f862', 'width': 360}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=108&crop=smart&s=72ff038d2ba6d78380fc1b4bc920e292ee6a1926', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=216&crop=smart&s=b83ec2a733f2f696230b86f8c9c08bb7eb58b88d', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=320&crop=smart&s=100013b98211cb35f8fe37d133edc8e4725002e6', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?s=459812a26e64fb60db106e1ee1b4850901539913', 'width': 360}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=108&format=mp4&s=641fbe888feb3ad637ce81c10c37472d89099f60', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=216&format=mp4&s=823f714e9d377871afade509db4663e5e3a3af4a', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?width=320&format=mp4&s=870ada1374d4933ad542c16e973bf47aaa9f77fb', 'width': 320}], 'source': {'height': 202, 'url': 'https://preview.redd.it/msmf0s61uuac1.gif?format=mp4&s=12203e36b2bd25a9dd70322d204a16364a2aa4f4', 'width': 360}}}}]} | |||
Meta prompting / auto-prompt writing | 1 | [deleted] | 2024-01-06T17:25:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1904zfy | false | null | t3_1904zfy | /r/LocalLLaMA/comments/1904zfy/meta_prompting_autoprompt_writing/ | false | false | default | 1 | null | ||
AI/gaming PC rig upgrades for LLMs | 2 | I built a PC rig to game and experiment with AI with a current focus on image generation with Stable Diffusion. Eventually, I intend to start working with LLMs, so I am wondering what changes I should consider at that point. I have a 7800x3d CPU, 2x16GB DDR5 CL30 RAM, 4090 with 24GB VRAM, and 1000W PSU. My motherboard supports up to 128GB RAM in 4 slots and has PCIe 3 and 5 x16 slots.
The RAM is still within the return window, so I could swap it out with 2x32GB to be in a better position to max it out at 128GB. Is there any benefit over just increasing it to 64GB? My impression is that it would allow me to run larger models, albeit slowly, so it's not an important long-term investment compared to more GPUs.
I know I'd need to upgrade the PSU if I added a second GPU to increase the VRAM. A second 4090 might work in the PCIe 3 slot with minor impact, but I gather that would be overkill for LLMs. Would a 4090 PCIe 5 + 3090 PCIe 3 setup be reasonable? Would any benefit of the 4090 be bottlenecked by the 3090 in this setup? Or is the motherboard limited enough that I should consider getting a new one with more high-performing slots?
Thank you. | 2024-01-06T17:24:42 | https://www.reddit.com/r/LocalLLaMA/comments/1904ydl/aigaming_pc_rig_upgrades_for_llms/ | Admirable_Yam_7581 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1904ydl | false | null | t3_1904ydl | /r/LocalLLaMA/comments/1904ydl/aigaming_pc_rig_upgrades_for_llms/ | false | false | self | 2 | null |
This is incredible. Imagine a massive model using lots of smaller specialised models. Best of both worlds. Kind of like Autogen but much more coherent. | 182 | https://huggingface.co/papers/2401.02412 | 2024-01-06T17:22:20 | TysonUsykFury | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1904wfm | false | null | t3_1904wfm | /r/LocalLLaMA/comments/1904wfm/this_is_incredible_imagine_a_massive_model_using/ | false | false | 182 | {'enabled': True, 'images': [{'id': 'rboPoDp50aMn2cD3vEyEiZA4Z4AXmVK-Fkd9zxnVV5w', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?width=108&crop=smart&auto=webp&s=618028b74c4a744a22b14903b23911d2b7ecc41d', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?width=216&crop=smart&auto=webp&s=6ccd44e27294a0165122f4a21113aff0f0f5d4b7', 'width': 216}, {'height': 282, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?width=320&crop=smart&auto=webp&s=da17cf270bd6347a4ef8fbc1a6132baf314da465', 'width': 320}, {'height': 564, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?width=640&crop=smart&auto=webp&s=be69ae23588da8a34eaf0eb0da38f09c06b6e022', 'width': 640}, {'height': 847, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?width=960&crop=smart&auto=webp&s=c02419bc93d7031f03a7c31e0d0d660f5e567e8c', 'width': 960}, {'height': 953, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?width=1080&crop=smart&auto=webp&s=dbc4bfde684380dd39247b6c0c97eecb560da053', 'width': 1080}], 'source': {'height': 1038, 'url': 'https://preview.redd.it/p4syrqa7tuac1.jpeg?auto=webp&s=48be14d5832bf7f557f14da3fbd86cc1d7235311', 'width': 1176}, 'variants': {}}]} | ||
Any way to exclude input prompt from Llama.cpp output? | 1 | I'm trying to build an automated flow with llama.cpp for generating keywords and description/summary for the content I publish to my website upon submission. I'm doing this by concatenating the prompt/instruction with content from my original file and then redirecting STDOUT to another file. The problem is that output file ends up duplicating the original prompt as well, which I don't need. I could write additional script-fu to parse out the original input but that seems a very roundabout way. Has anyone figured out if there is a way to only dump the LLM output to the file, omitting the original prompt? | 2024-01-06T17:03:56 | https://www.reddit.com/r/LocalLLaMA/comments/1904gzw/any_way_to_exclude_input_prompt_from_llamacpp/ | atsepkov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1904gzw | false | null | t3_1904gzw | /r/LocalLLaMA/comments/1904gzw/any_way_to_exclude_input_prompt_from_llamacpp/ | false | false | self | 1 | null |
Chess-GPT, a 50M parameter LLM, plays 1500 ELO chess. We can visualize its internal board state, and it accurately estimates the ELO rating of the players in a game. | 90 | gpt-3.5-turbo-instruct's ELO rating of 1800 in chess seemed magical. But it's not! A 50M parameter LLM given a few million games of chess will learn to play at ELO 1500. When a linear probe is trained on its internal board state, it accurately classifies the state of 99.2% of all board squares.
For example, in this heatmap, we have the white pawn location on the left, a binary probe output in the middle, and a gradient of probe confidence on the right. We can see the model is extremely confident that no white pawns are on either back rank.
​
https://preview.redd.it/qqwein74puac1.png?width=2500&format=png&auto=webp&s=f7089d450438b342dde550373ecebb9d033b94cf
In addition, to better predict the next character it also learns to estimate latent variables such as the ELO rating of the players in the game. More information is available in this post:
[https://adamkarvonen.github.io/machine\_learning/2024/01/03/chess-world-models.html](https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html)
And the code is here: [https://github.com/adamkarvonen/chess\_llm\_interpretability](https://github.com/adamkarvonen/chess_llm_interpretability)
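For anyone unfamiliar with linear probes, the idea is roughly: collect the model's internal activations for many positions, label the ground-truth state of one board square, and fit an ordinary linear classifier per square. A toy sketch (the shapes and data here are made up, not the repo's real pipeline):
import numpy as np
from sklearn.linear_model import LogisticRegression
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 512))        # stand-in for activations at one layer, one row per position
y = rng.integers(0, 3, size=10_000)       # state of a single square: 0=empty, 1=white piece, 2=black piece
probe = LogisticRegression(max_iter=1000).fit(X, y)   # one linear probe per square
print("accuracy on this toy data:", probe.score(X, y))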
​ | 2024-01-06T17:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/1904e2t/chessgpt_a_50m_parameter_llm_plays_1500_elo_chess/ | seraine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1904e2t | false | null | t3_1904e2t | /r/LocalLLaMA/comments/1904e2t/chessgpt_a_50m_parameter_llm_plays_1500_elo_chess/ | false | false | 90 | null | |
Finetuning for Bigger Context Windows? | 11 | Forgive me for my simplistic understanding but as I understand it, any model built on Llama-2 is likely going to be limited by a 4096 context window. However, I have found some instances where models built on it claim to have larger windows. I'm extremely happy with a (yeah, I know) GPT3.5-Turbo finetune I have trained on my own fiction writing--the difference in prose quality is night and day for me--but obviously I can only use it for SFW writing. I've tried a LLama-2-Chat-70B finetune through Anyscale for NSFW writing and it's decent but the 4K context window is killer when I'm trying to supply story/worldbuilding context details and the previous words in the story. 8K would be way better and 16K and above would be massive.
The closest untuned model I've found that I like for writing is lzlv-70B but again, 4K context window. I would love to be able to finetune lzlv not only on my writing style but to increase its context window. And secondarily, where could I easily tune/host such a thing? I know I can run ooba through runpod but for the main frontend I use for writing (Novelcrafter) it doesn't plug in very easily and I've never seen great style results from my attempts at finetunes through ooba. (could be operator error on my part, though.) And runpod seems a bit pricey for volume storage for a model of lzlv's size when I don't actively have it running. | 2024-01-06T16:52:54 | https://www.reddit.com/r/LocalLLaMA/comments/190481b/finetuning_for_bigger_context_windows/ | _winterwoods | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190481b | false | null | t3_190481b | /r/LocalLLaMA/comments/190481b/finetuning_for_bigger_context_windows/ | false | false | self | 11 | null |
What attracted you to local LLMs? | 5 | I’m starting to get into this myself and am curious what if some specific use case/feature attracted most people or if it’s just “from a technical perspective this is interesting, so I want to play with it”. | 2024-01-06T16:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1903x83/what_attracted_you_to_local_llms/ | V3yhron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1903x83 | false | null | t3_1903x83 | /r/LocalLLaMA/comments/1903x83/what_attracted_you_to_local_llms/ | false | false | self | 5 | null |
Consistent asterisk for RP | 1 | In roleplaying, I always indicate my character's actions with asterisks (*). However, I've noticed that my responses often deviate from the given prompt. Currently, I'm using MythoMax 13B with an Alpaca format. Is there a way to make my prompts more effective, similar to how ChatGPT does it? | 2024-01-06T16:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1903soc/consistent_asterisk_for_rp/ | duryanis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1903soc | false | null | t3_1903soc | /r/LocalLLaMA/comments/1903soc/consistent_asterisk_for_rp/ | false | false | self | 1 | null |
How do you get embedding with llama.cpp? | 3 | I’m a bit confused why it’s not so easy to get an embedding vector. I’m sure I’m missing something important. Do I always need a separate model just to get embeddings? Are there any great open source models that also have embedding vector models? | 2024-01-06T16:32:05 | https://www.reddit.com/r/LocalLLaMA/comments/1903rcx/how_do_you_get_embedding_with_llamacpp/ | richardanaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1903rcx | false | null | t3_1903rcx | /r/LocalLLaMA/comments/1903rcx/how_do_you_get_embedding_with_llamacpp/ | false | false | self | 3 | null |
Need feedback: What's your take on unique, copyright-free LLM data sets? | 1 | Hey all, I'm considering using internally built scrapers to create data sets. I want to specialize in data packs that are most useful to the LLM training community (open source, with option for proprietary/unique on request).
OSS:
* Assured free from copyright constraints
* Fresh and distinctive data sets
Paid:
* Assured unique
* Specific target material (different topics)
* Preprocessing available (summaries, promptable, etc)
Is this something that you need? | 2024-01-06T16:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/1903eh7/need_feedback_whats_your_take_on_unique/ | turboscrape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1903eh7 | false | null | t3_1903eh7 | /r/LocalLLaMA/comments/1903eh7/need_feedback_whats_your_take_on_unique/ | false | false | self | 1 | null |
Incredible results with Long Agent Tree Search with open source models | 149 | Hello,
I've seen GPT-4 with Language Agent Tree Search (LATS) topping HumanEval with a 94.4% pass@1 for a few weeks now. [https://paperswithcode.com/sota/code-generation-on-humaneval](https://paperswithcode.com/sota/code-generation-on-humaneval)
​
The authors of the [original paper](https://arxiv.org/abs/2310.04406) posted their code in their [official github repo](https://github.com/andyz245/LanguageAgentTreeSearch). I had to change some code to try it out with CodeLlama-7b, and running HumanEval at pass@1 with only 2 max iterations increases the score from 37% to about 70%.
This is some incredible results in my opinion because this score is higher than GPT-3.5 with only a 7b model. I assume more testing has to be done, but nevertheless I am surprised people are not talking more about this. | 2024-01-06T16:11:46 | https://www.reddit.com/r/LocalLLaMA/comments/1903auj/incredible_results_with_long_agent_tree_search/ | ArtZab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1903auj | false | null | t3_1903auj | /r/LocalLLaMA/comments/1903auj/incredible_results_with_long_agent_tree_search/ | false | false | self | 149 | {'enabled': False, 'images': [{'id': 'CNqE7T3ouiZlTldhzwsVzmdz1pXOn5EYTD391CIsa2o', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?width=108&crop=smart&auto=webp&s=ff4760bb8bb547481e615649ee85c67cd8a23b4b', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?width=216&crop=smart&auto=webp&s=730f5cf201f7bde2f8d8b78b1c3ef823c60c4c08', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?width=320&crop=smart&auto=webp&s=b2d18a96c6afde4494f500f973cf6c8040443fca', 'width': 320}, {'height': 338, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?width=640&crop=smart&auto=webp&s=99261fd6daa2cc7e670882458c04e85d224b7de6', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?width=960&crop=smart&auto=webp&s=5a8107d5662ee5dea846a757639fa8c3df2a6616', 'width': 960}, {'height': 571, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?width=1080&crop=smart&auto=webp&s=47d0d4eb2ddb737b1796eee5d4202126f5278dde', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/8HFYZqy3IJLuGuOkvVEQz-oV87eglAeTheasavU5-Ws.jpg?auto=webp&s=12563fd640eda00d2f1f457a5b4338f21b964cb4', 'width': 1700}, 'variants': {}}]} |
TensorRT-LLM + Llama2 | 1 | Does anyone have code snippets or a git repo for testing Llama2 on TensorRT-LLM?
I've been trying for the past couple of days and can't seem to get it done 😅
Thanks 🙌✌️ | 2024-01-06T15:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1902zcb/tensorrtllm_llama2/ | Few_Acanthisitta_858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1902zcb | false | null | t3_1902zcb | /r/LocalLLaMA/comments/1902zcb/tensorrtllm_llama2/ | false | false | self | 1 | null |
Best open-source LLM to run on my m2 8GB for summarizing text messages of around 1000-1500 tokens? | 3 | I know RAM is an issue, but given 8GB, I tried Mistral 7B 4-bit quantized and Nous-Hermes 4-bit quantized; for 850 tokens of text messages between two people, they take 22 and 21 seconds to generate a summary.
What smaller models should I go for, and how do you decide which models are suitable? Is there a list out there comparing speed vs. quality? Since I am new to this, I end up spending a lot of time browsing the web and this subreddit, and yet fail to come up with any useful information.
​ | 2024-01-06T15:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/1902rot/best_opensource_llm_to_run_on_my_m2_8gb_for/ | coderinlaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1902rot | false | null | t3_1902rot | /r/LocalLLaMA/comments/1902rot/best_opensource_llm_to_run_on_my_m2_8gb_for/ | false | false | self | 3 | null |
How to use custom python functions in falcon 7b | 1 | The idea is I am using the falcon-7b-instruct model locally in Colab, and I want to call certain functions using Falcon. Let's say the user asks what today's weather is; Falcon should call a Python function that can fetch the weather results, then return the answer to the user. Can anyone please help or guide me? I know this feature is available in GPT, but is there any way to do it with Falcon or another open-source LLM? | 2024-01-06T15:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1902im0/how_to_use_custom_python_functions_in_falcon_7b/ | Bright-Leopard9688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1902im0 | false | null | t3_1902im0 | /r/LocalLLaMA/comments/1902im0/how_to_use_custom_python_functions_in_falcon_7b/ | false | false | self | 1 | null |
Video resources? | 1 | Hey everyone, thank you for being active in here, this sub has been invaluable to get me started.
Is anyone aware of good youtube channels of people making how-to and in-depth video content on all things local LLMs + stable diffusion? Helpful for the visually dependent like myself.
Cheers | 2024-01-06T15:31:39 | https://www.reddit.com/r/LocalLLaMA/comments/1902fte/video_resources/ | stormer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1902fte | false | null | t3_1902fte | /r/LocalLLaMA/comments/1902fte/video_resources/ | false | false | self | 1 | null |
Is it possible to fine tuning a model to improve RAG queries ? | 11 | Hi !
I am trying to use some local models + langchain on lawsuits in pt-br to make some queries.
Unfortunately the results are not very precise for some basic questions :(
So, I would like to know if it is possible to fine-tune a model to improve RAG queries.
If so, can you share some examples?
Thanks in advance ! | 2024-01-06T15:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/1901ttm/is_it_possible_to_fine_tuning_a_model_to_improve/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1901ttm | false | null | t3_1901ttm | /r/LocalLLaMA/comments/1901ttm/is_it_possible_to_fine_tuning_a_model_to_improve/ | false | false | self | 11 | null |
GitHub - Zjh-819/LLMDataHub: A quick guide (especially) for trending instruction finetuning datasets | 1 | 2024-01-06T14:40:56 | https://github.com/Zjh-819/LLMDataHub | sapporonight | github.com | 1970-01-01T00:00:00 | 0 | {} | 1901e43 | false | null | t3_1901e43 | /r/LocalLLaMA/comments/1901e43/github_zjh819llmdatahub_a_quick_guide_especially/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'uuBQWC4tXxSnd1zY1VvAcNooAErvETXW928gIYwYJDQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?width=108&crop=smart&auto=webp&s=453366b5c1903999790c4788c5d91ebea7437985', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?width=216&crop=smart&auto=webp&s=420fc46dbcc701181babeadbd553aa9366fb1bca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?width=320&crop=smart&auto=webp&s=61194f3ec850497b10a1878148dc4f832dfbd879', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?width=640&crop=smart&auto=webp&s=e889c806cf617d351ebe6ef18a537473c668a1d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?width=960&crop=smart&auto=webp&s=6aade1a5de4c8ad01c42b3a6190227842798847d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?width=1080&crop=smart&auto=webp&s=e9d306c3bb971a9b3f7e4a3df7baf6b7aec6fd09', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/w8Sgq8k-8Hibzni8uDKM6AYRwy2mFDN3AyZJOMDXxOU.jpg?auto=webp&s=46abdfaee57ae5521e013386051e67377bcedc70', 'width': 1280}, 'variants': {}}]} | ||
Tencent releases LLaMA-Pro-8B-Instruct Chat Demo on Hugging Face | 1 | 2024-01-06T14:03:53 | https://x.com/_akhaliq/status/1743631700754677808?t=3K0t7IiJ54JfCB2Q76Kisg&s=34 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1900ne6 | false | null | t3_1900ne6 | /r/LocalLLaMA/comments/1900ne6/tencent_releases_llamapro8binstruct_chat_demo_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UsqAWsbklZho0yPPXTiy_nI6bENbJmr6uE89i53l3GY', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/jVNr7rwdR-XK-_w9J35qttuQ4DdVgSFj7C6r8Tpdacc.jpg?width=108&crop=smart&auto=webp&s=b81ab2e7f693e66b3dab3d56e10eb1ca93a4d51a', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/jVNr7rwdR-XK-_w9J35qttuQ4DdVgSFj7C6r8Tpdacc.jpg?width=216&crop=smart&auto=webp&s=3dbaa5f70fc0924271f0a69c9e2956ca457722c6', 'width': 216}, {'height': 225, 'url': 'https://external-preview.redd.it/jVNr7rwdR-XK-_w9J35qttuQ4DdVgSFj7C6r8Tpdacc.jpg?width=320&crop=smart&auto=webp&s=cc905e694862d61aa2ed7ebf833cac21590c9ef6', 'width': 320}, {'height': 450, 'url': 'https://external-preview.redd.it/jVNr7rwdR-XK-_w9J35qttuQ4DdVgSFj7C6r8Tpdacc.jpg?width=640&crop=smart&auto=webp&s=e21c21550162a6c75210eb52a2074dc89175100c', 'width': 640}, {'height': 676, 'url': 'https://external-preview.redd.it/jVNr7rwdR-XK-_w9J35qttuQ4DdVgSFj7C6r8Tpdacc.jpg?width=960&crop=smart&auto=webp&s=ef0c2a7d5304a7a4d162f46ea5bbb0bd47670c7a', 'width': 960}], 'source': {'height': 748, 'url': 'https://external-preview.redd.it/jVNr7rwdR-XK-_w9J35qttuQ4DdVgSFj7C6r8Tpdacc.jpg?auto=webp&s=3f0bc9d556514343d1fc21aee05d898d233753c8', 'width': 1062}, 'variants': {}}]} | ||
Is it worth paying 30% more for an RX 6600 than an RX 5700XT to chat with llama.cpp? | 1 | [removed] | 2024-01-06T13:18:36 | https://www.reddit.com/r/LocalLLaMA/comments/18zzth2/is_it_worth_paying_30_more_for_an_rx_6600_than_an/ | Elegant_Course2372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zzth2 | false | null | t3_18zzth2 | /r/LocalLLaMA/comments/18zzth2/is_it_worth_paying_30_more_for_an_rx_6600_than_an/ | false | false | self | 1 | null |
My bypass method for erp nsfw for Mistral medium(and maybe others) | 1 | euphemism(unoffensive), innuendos(hint/indirect), rephrasing, code words, misspelling/wordplay/playful phonetic spelling/jibberish/combined/ made-up words, non-sexual sentences/terms, seduce/cause intrigue, start context with clues, adjusting prompting, vague prompting, make it sound innocent, change talking style/ subculture talking style, sly word substitutions, manipulate/judging them then deny it to make them doubt(gaslight)/baiting/mind games/polite/praising, metaphors/simile/analogies, unclear/not obvious, expressing emotions, vague/unspecific/blurry but dirty descriptors, it's fiction script prompting, use slang thesaurus(urban thesaurus site), guide the bot by questions about dialogue you desire, fetish role prompting/plot/all arounding, don't say/do it, use onomatopoeias instead, draw attention to your roleplay, write their pov, nonverbal communication, make the ai happy/set up mood, ooc random distraction/clarify your wants in ooc, use disguises | 2024-01-06T13:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/18zzjtd/my_bypass_method_for_erp_nsfw_for_mistral/ | headacheack2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zzjtd | false | null | t3_18zzjtd | /r/LocalLLaMA/comments/18zzjtd/my_bypass_method_for_erp_nsfw_for_mistral/ | false | false | nsfw | 1 | null |
What is the latest news for Llama 3? | 23 | What is the latest news for Llama 3? | 2024-01-06T12:56:19 | https://www.reddit.com/r/LocalLLaMA/comments/18zzflg/what_is_the_latest_news_for_llama_3/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zzflg | false | null | t3_18zzflg | /r/LocalLLaMA/comments/18zzflg/what_is_the_latest_news_for_llama_3/ | false | false | self | 23 | null |
Seeking Advice on Fastest and Highest Quality Implementation of Dolphin 2.2.1 Mistral 7b LLM | 1 | Hello everyone,
I'm currently exploring ways to get the Dolphin 2.2.1 Mistral 7b model up and running. My ultimate goal is to achieve the fastest and highest quality responses from this language model.
I'm typically inclined to build everything from scratch, but I'm uncertain about the workload this approach might entail. I've noticed there are several existing frameworks available, such as llama-cpp-python, text-generation-webui, and LoLLMS Web UI, among others.
Can anyone provide insight into the most efficient and effective way to implement this model? Is it more practical to use one of the existing frameworks, or would building from scratch yield better results?
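For reference, the baseline I'd be comparing everything against is the llama-cpp-python route with a GGUF quant, something like this (a sketch; the file name, context size, and chat usage are assumptions on my part):
from llama_cpp import Llama
# assumed local GGUF quant of dolphin-2.2.1-mistral-7b
llm = Llama(
    model_path="./dolphin-2.2.1-mistral-7b.Q4_K_M.gguf",
    n_gpu_layers=-1,   # offload as many layers as possible to the GPU
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
)
print(out["choices"][0]["message"]["content"])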
Any advice or experiences shared would be greatly appreciated! | 2024-01-06T12:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/18zz33d/seeking_advice_on_fastest_and_highest_quality/ | yachty66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zz33d | false | null | t3_18zz33d | /r/LocalLLaMA/comments/18zz33d/seeking_advice_on_fastest_and_highest_quality/ | false | false | self | 1 | null |
llama.cpp GGUF inference in a couple lines of code | 3 | 2024-01-06T12:02:12 | davidmezzetti | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18zykwo | false | null | t3_18zykwo | /r/LocalLLaMA/comments/18zykwo/llamacpp_gguf_inference_in_a_couple_lines_of_code/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'kUV1nQlsNu7aDkjXzuvEebxkgGpsSULSxKlQukm6U8A', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?width=108&crop=smart&auto=webp&s=b07e87c6585f03175801cc4afba3c47a45d54877', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?width=216&crop=smart&auto=webp&s=ddb15cafc84d2437c535d9f72b7b63a138632c1a', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?width=320&crop=smart&auto=webp&s=dda43e179ed19160ad85b75945a1774a0b2a9ac9', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?width=640&crop=smart&auto=webp&s=088308f20f902910f584e3125194c42f818827fb', 'width': 640}, {'height': 524, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?width=960&crop=smart&auto=webp&s=dc084cced26af211e61518bc0938660cfb849e43', 'width': 960}, {'height': 590, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?width=1080&crop=smart&auto=webp&s=4347d86e46205d9b54cedb56cda91779f30b34ba', 'width': 1080}], 'source': {'height': 708, 'url': 'https://preview.redd.it/z3s7gib28tac1.jpeg?auto=webp&s=19cc3f4e96029205f93060a9cd2b1d8b9e991d55', 'width': 1296}, 'variants': {}}]} | |||
Looking for most optimal model for my usecase | 1 | I'm new to LLM and ML and looking for a fairly lightweight solution. I am building a scheduling app. It takes user input and extracts an event title and date. For example: "I have a haircut tomorrow at 6pm" -> ["Haircut", "tomorrow 6pm"]. Date parsing is trivial with nlp, but I want something that will be able to determine the event title and validate user input (i.e. does it even have the event and date or not). I tried using OpenAI assistant with functions, it works well, but gets quite expensive. | 2024-01-06T11:44:45 | https://www.reddit.com/r/LocalLLaMA/comments/18zyb4n/looking_for_most_optimal_model_for_my_usecase/ | eagleps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zyb4n | false | null | t3_18zyb4n | /r/LocalLLaMA/comments/18zyb4n/looking_for_most_optimal_model_for_my_usecase/ | false | false | self | 1 | null |
Seeking LLM that can check all these boxes & be fine tuned on 32GB Ram 3200 mhz ddr4, 8gb 3070, Ryzen 7 5800h | 1 | Hi there,
I'm currently exploring an idea for a project. For that I require an LLM that can check off (or at least have a bare minimum capability for) all of these boxes -
PS: please tell me if there's any faster way to attain my goal of such an LLM by using a different setup or tool structure. I'm very new to this.
1) Be smart enough to understand the tone and context of a given paragraph/entire chapter of let's say a story book or novel.
2) Can generate accurate character personas and create fillers within the actual provided text which can be clearly attached to a character's emotion. In most cases there will be cues if the character has a certain reaction, like "shocked", "happy", "sad" written by the author, etc., but if not present, the LLM should be able to understand what context that specific part of the chapter or story is playing out and generate fillers like "character X is sad" etc. after each character's dialogue.
3) Doesn't need to write or rewrite anything, literally copy word for word from the fed story, but should have the creativity to identify what kind of background sound will fit the specific section of the context it's working on and generate text-based responses. E.g. the statement is "Anna's brother was dead, Anna stood beside his casket wiping a tear off her face"; the LLM should be smart enough to generate a filler like [Anna sad] [background sound slow violin music bla bla] (still working on the idea but you get the picture)
4) Should be able to self-learn from experience or have the option to implement self-learning algorithms offline via Python, and while it's offline, it should be UNCENSORED and not yap on about ethics and what it can't do. It should follow my instructions to the point and be smart enough to understand if there are typos or missing words, but still form a coherent response.
Plus points if it can be creative and write its own wonderful stories, but that can be fine-tuned later on. It's a long-term project of course, but I need a baseline to start with.
5) Should have its own API if needed to interact with third-party platforms. Should be able to write basic coding scripts (not at all a priority, just good to have) and should be flexible enough to bake additional features/abilities into later on, like browsing the web or downloading a certain file from a certain website like royalty-free sounds, where if it identifies a proper background sound, it can search for and download it itself. This is very advanced and is just a nice-to-have from an existing LLM that can scale if needed later on.
Now I understand this is something impossible to find with my hardware limitations, and it can be fine-tuned and I'll be ready to do so. I'm a single-person team, so please be aware this is done on my device with no additional resources.
My laptop has 32 gb ddr4 3200 mhz ram, 8gb rtx 3070 170w tdp, 1 tb free space ssd gen 4 pcie.
I have Windows 11, and I manually installed and tried Nous Capybara 34B Q5_K_M, which works locally via cmd (not LM Studio) but is super slow, like 40-50 words in 3-5 minutes. LM Studio is faster, not crazy fast but fast enough, but LM Studio cannot fine-tune it, so I did it locally. I want this LLM to be fast enough locally to have at least a reasonable response time, like 25 wpm or more, whatever is reasonable.
Please recommend a LLM that I can install on windows locally. I have pytorch cuda 12.1 and everything is setup correctly. Thank you so much. | 2024-01-06T11:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/18zy6s0/seeking_llm_that_can_check_all_these_boxes_be/ | unkn0wnS0ul2day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zy6s0 | false | null | t3_18zy6s0 | /r/LocalLLaMA/comments/18zy6s0/seeking_llm_that_can_check_all_these_boxes_be/ | false | false | self | 1 | null |
5700xt 8gb or 6600. the 5700 xt costs 20% less on aliexpress, and I don't need to pay the 100% tax in Brazil. (all incluse, of course) | 1 | Another post from a poor guy from the third world, where a guy won a court decision and only he can buy cheaper GPUs and all resellers have to buy from him or go to the gray market. Well, it looks like it's not that easy to borrow money. AI is a way for me to maintain my sanity, so I'm going to put my shame aside. The 20% difference between the two plates could make my life more difficult, the question is whether it is worth the effort. So the question is, what would be the best choice? (I wanted the 3060 12GB, but I realized I don't have that many friends and family). Please answer me, on the 8th I will have a 20 usd discount, but I have to make the purchase at 5 am, due to short stocks. | 2024-01-06T11:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/18zy5di/5700xt_8gb_or_6600_the_5700_xt_costs_20_less_on/ | Elegant_Course2372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zy5di | false | null | t3_18zy5di | /r/LocalLLaMA/comments/18zy5di/5700xt_8gb_or_6600_the_5700_xt_costs_20_less_on/ | false | false | self | 1 | null |
Running llms on igpus | 1 | Hello everyone, I read somewhere that this should be possible. And I think that coming AMD products like Strix Halo and Strix Point, with 16 CU and 40 CU plus allegedly 40-50 TOPS of XDNA 2, should make stuff like this really interesting.
I also read that llama.cpp has UMA support now and you can effectively use your whole memory with, for example, the Steam Deck APU.
Here is the question that confuses me though: how do people get llama.cpp or other things to work with these AMD iGPUs, especially with things like hipBLAS? People seem to get it to work, but in my understanding you need ROCm for that, which is not officially supported on APUs?
Would love if someone could enlighten me. | 2024-01-06T11:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/18zy0tm/running_llms_on_igpus/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zy0tm | false | null | t3_18zy0tm | /r/LocalLLaMA/comments/18zy0tm/running_llms_on_igpus/ | false | false | self | 1 | null |
How to run Llama-2 on CPU after fine-tuning with LoRA | 4 | 2024-01-06T11:22:41 | https://blog.oxen.ai/how-to-run-llama-2-on-cpu-after-fine-tuning-with-lora/ | pmz | blog.oxen.ai | 1970-01-01T00:00:00 | 0 | {} | 18zxz4i | false | null | t3_18zxz4i | /r/LocalLLaMA/comments/18zxz4i/how_to_run_llama2_on_cpu_after_finetuning_with/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'aLDM6xhCNMlU0D8Wtt738MJRSF_QnceYVH87K3eMnzY', 'resolutions': [{'height': 100, 'url': 'https://external-preview.redd.it/nHHyS0o-j3tfxekXBUbK7jKcJX4WpoIbdDFFv86zGck.jpg?width=108&crop=smart&auto=webp&s=06d64395dbd17a8cb411487b44daee5c83f219b4', 'width': 108}, {'height': 200, 'url': 'https://external-preview.redd.it/nHHyS0o-j3tfxekXBUbK7jKcJX4WpoIbdDFFv86zGck.jpg?width=216&crop=smart&auto=webp&s=b2a43ddd873aa955b72bf4bb97b9ba55905b6e5b', 'width': 216}, {'height': 296, 'url': 'https://external-preview.redd.it/nHHyS0o-j3tfxekXBUbK7jKcJX4WpoIbdDFFv86zGck.jpg?width=320&crop=smart&auto=webp&s=fe2152a1e29bbaea097b78e7f129566922ea7ba7', 'width': 320}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/nHHyS0o-j3tfxekXBUbK7jKcJX4WpoIbdDFFv86zGck.jpg?auto=webp&s=54130e49d0356f8f888de6c0efaee105f87f09d6', 'width': 602}, 'variants': {}}]} | ||
Fastest Local LLM for my machine? | 1 | I have a RTX 3060 with 16gb ram, what would be the best model i could run that give me responses < 3 seconds ? | 2024-01-06T11:02:54 | https://www.reddit.com/r/LocalLLaMA/comments/18zxofh/fastest_local_llm_for_my_machine/ | its_kanwischer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zxofh | false | null | t3_18zxofh | /r/LocalLLaMA/comments/18zxofh/fastest_local_llm_for_my_machine/ | false | false | self | 1 | null |
Phi-2 becomes open source (MIT license 🎉) | 401 | Microsoft changed phi-2 license a few hours ago from research to MIT. It means you can use it commercially now
[https://x.com/sebastienbubeck/status/1743519400626643359?s=46&t=rVJesDlTox1vuv\_SNtuIvQ](https://x.com/sebastienbubeck/status/1743519400626643359?s=46&t=rVJesDlTox1vuv_SNtuIvQ)
This is a great strategy as many more people in the open source community will start to build upon it
It’s also a small model, so it could be easily put on a smartphone
People are already looking at ways to extend the context length
The year is starting great 🥳
​
[Twitter post announcing Phi-2 became open-source](https://preview.redd.it/kuh84k4sbsac1.png?width=1162&format=png&auto=webp&s=564123965f670f50c400b78be4cc66ada0b350e3)
​
[From Lead ML Foundations team at Microsoft Research](https://preview.redd.it/wqdpoctxbsac1.png?width=1162&format=png&auto=webp&s=ccdffb424075a6a9ce339228c51fcdcc8c4b7d5e) | 2024-01-06T09:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/18zvxs8/phi2_becomes_open_source_mit_license/ | steph_pop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zvxs8 | false | null | t3_18zvxs8 | /r/LocalLLaMA/comments/18zvxs8/phi2_becomes_open_source_mit_license/ | false | false | 401 | {'enabled': False, 'images': [{'id': 'BKTMGdw6ha-Ed4BBQMa2Zpw8phQStnlusB2aMwIsNro', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?width=108&crop=smart&auto=webp&s=5abe5afc323b589e249327fa295a82041f835852', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?width=216&crop=smart&auto=webp&s=c879cb7c8aab7eeb060e98de964b7d8986a7ea41', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?width=320&crop=smart&auto=webp&s=5b0c27b544f83673e5481e334a8c44528b17d61e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?width=640&crop=smart&auto=webp&s=f9c159448433e5aedf3637845ac059c698ba5b74', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?width=960&crop=smart&auto=webp&s=2d69abd8e2f8584879ca3600f0b63022b99586e3', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?width=1080&crop=smart&auto=webp&s=b160777759e2166bc356e603b1793a8301a9cd6a', 'width': 1080}], 'source': {'height': 568, 'url': 'https://external-preview.redd.it/mSiDQSb_lE70g4QwK6ewK_gFLRdcCYyUjBww0PdUWBs.jpg?auto=webp&s=1256a900399a0c38f3fd29855c5d4973473ae5d4', 'width': 1080}, 'variants': {}}]} | |
mlx lora/qlora example | 10 | Currently, the lora script in mlx-examples is a bit confusing due to the different formats (HF/PyTorch) of llama models. There is a lot of confusion when using it, so I quickly hacked on the script and created this example repo that supports the latest HF and quantized models.
[https://github.com/mzbac/mlx-lora](https://github.com/mzbac/mlx-lora)
enjoy :) | 2024-01-06T08:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/18zvp32/mlx_loraqlora_example/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zvp32 | false | null | t3_18zvp32 | /r/LocalLLaMA/comments/18zvp32/mlx_loraqlora_example/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'fJ7Lvz6b7B9IP-VD_FJxEU8wlRIJvTn1FahCYypkv9g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?width=108&crop=smart&auto=webp&s=31ef5cd2ddf343753f5d71acb066234fa1effb45', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?width=216&crop=smart&auto=webp&s=af8b7ff3fb06364454482a37b4a0ad64f1907a48', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?width=320&crop=smart&auto=webp&s=a9fe549073bcdafaf6e6697d1c530b2f0a922dc7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?width=640&crop=smart&auto=webp&s=cfaa8e6fb95717ce3e1feaaa2e2565951425bc68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?width=960&crop=smart&auto=webp&s=891434cdaececc3ffb76306eca45c43e7b46bc68', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?width=1080&crop=smart&auto=webp&s=6e16e457daac2ab3af227b28411ceeb0111638fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X_6kUiObErK4iWZxhqlPhBbmdrvC-wiflO3paxQ9zRU.jpg?auto=webp&s=7a1db39f47f05ff5e9e2c0ea843bfe01783f5904', 'width': 1200}, 'variants': {}}]} |
ChatGPT and Bard refuse to respond | 1 | 2024-01-06T08:45:48 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18zvonz | false | null | t3_18zvonz | /r/LocalLLaMA/comments/18zvonz/chatgpt_and_bard_refuse_to_respond/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6Z-B4-y4u3gu0oS3AGW-9Bs7mmPw0IQ35iduguDynkg', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?width=108&crop=smart&auto=webp&s=b66f2086e82da285fff204d5537ad41236582401', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?width=216&crop=smart&auto=webp&s=77643f77708b125f1afffddc30b6344aebb15e28', 'width': 216}, {'height': 196, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?width=320&crop=smart&auto=webp&s=05e3dcaece120e3f620151e59c9752541a3fdde8', 'width': 320}, {'height': 393, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?width=640&crop=smart&auto=webp&s=beec15c1093dba2e57852a2ba689811e3b06fbb6', 'width': 640}, {'height': 590, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?width=960&crop=smart&auto=webp&s=d11bd104c102a7086029ba64ff5ff7e891e46ab6', 'width': 960}, {'height': 664, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?width=1080&crop=smart&auto=webp&s=971c32b620e2628d235633d0964559da218d5337', 'width': 1080}], 'source': {'height': 1164, 'url': 'https://preview.redd.it/ox5ulr209sac1.png?auto=webp&s=de9888f3cb3c36d330908f973dc1fd307c463c45', 'width': 1893}, 'variants': {}}]} | |||
What is your interface of choice for collecting preference dataset? | 7 | ​
[character.ai's rating UI](https://preview.redd.it/k9eb1ban7sac1.jpg?width=1372&format=pjpg&auto=webp&s=cad1aa39fa9ca98d5476720155bd050012c3eefb)
[example of preference dataset](https://preview.redd.it/efggzjrg4sac1.jpg?width=3136&format=pjpg&auto=webp&s=9312a079cd6214c0c2622d5720e5e500f106b9be)
[Preference data collection in ChatGPT \(Image by João Lages\)](https://preview.redd.it/333i0jjp4sac1.png?width=1400&format=png&auto=webp&s=7d40e755ba1fb3abb1cd6cd4ab3b16b5636481b6)
[Step 1 - ChatGPT collect shallow preference with thumbs up, thumbs down, regenerate button.](https://preview.redd.it/zfxban455sac1.jpg?width=1712&format=pjpg&auto=webp&s=819f9ad882ecd628bef32dd6684920da7c350216)
​
[Step 2 - ChatGPT collect preference over two responses.](https://preview.redd.it/84g19c7y5sac1.jpg?width=1570&format=pjpg&auto=webp&s=f567826fd98e38eb81a97e43525aca1e2590f7fe)
​
I've been researching the best methods for gathering or developing preference datasets from open-source and enterprise chatbot implementations. Among the various approaches I encountered, ChatGPT's method stands out for its simplicity and data quality.
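Concretely, the record I'd like each interaction to produce is the pairwise format used for DPO/RLHF-style training, something like this (the field names are just my own convention):
# one pairwise preference record, stored after the user picks between two responses
record = {
    "prompt": "Explain the difference between RAG and fine-tuning.",
    "chosen": "RAG retrieves documents at query time, while fine-tuning bakes knowledge into the weights ...",
    "rejected": "They are basically the same thing.",
    "metadata": {"source": "regenerate_comparison", "user_action": "picked_response_b"},
}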
However, I'm curious if anyone has come across a more efficient implementation. While ChatGPT's approach is robust, it seems to miss out on a considerable amount of potential feedback due to its two-step process. Has anyone found a better (which only requires one or no click from user) or interesting UI implementation that captures preference more easily? | 2024-01-06T08:41:00 | https://www.reddit.com/r/LocalLLaMA/comments/18zvm4b/what_is_your_interface_of_choice_for_collecting/ | No_Investment8576 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zvm4b | false | null | t3_18zvm4b | /r/LocalLLaMA/comments/18zvm4b/what_is_your_interface_of_choice_for_collecting/ | false | false | 7 | null | |
ChatGPT and Bard refuse to respond | 1 | ​
https://preview.redd.it/qelvi8w28sac1.png?width=1893&format=png&auto=webp&s=09b275531bcdbe4de9dbc4eba10c355b94626283 | 2024-01-06T08:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/18zvlxu/chatgpt_and_bard_refuse_to_respond/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zvlxu | false | null | t3_18zvlxu | /r/LocalLLaMA/comments/18zvlxu/chatgpt_and_bard_refuse_to_respond/ | false | false | 1 | null | |
Anyone feel like AI developments have slowed down a bit lately? | 1 | I have been trying to keep up to date on the latest developments each week. However, I have felt a sudden drop in the past two months. Especially the past three weeks. I can see it being due to the holidays. Yet, I want to make sure it's not just me. | 2024-01-06T08:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/18zvl4w/anyone_feel_like_ai_developments_have_slowed_down/ | ArmoredBattalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zvl4w | false | null | t3_18zvl4w | /r/LocalLLaMA/comments/18zvl4w/anyone_feel_like_ai_developments_have_slowed_down/ | false | false | self | 1 | null |
Silicon-Maid-7B is surprisingly good for its weight. | 46 | Just learned about this model from some reviews on SillyTavern. Tried the \`Q4\_K\_M\` version, got \~20-30 tokens/s on an M1 Pro, and it's surprisingly good for its weight:
* Follows the character card well
* Seems to understand nuance
* Action and dialogue are believable, no flowery prose yet
​
I got this feeling the first time I tried MythoMax. Will definitely recommend it for people with less capable hardware. | 2024-01-06T08:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/18zvfil/siliconmaid7b_is_surprisingly_good_for_its_weight/ | ai_waifu_enjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zvfil | false | null | t3_18zvfil | /r/LocalLLaMA/comments/18zvfil/siliconmaid7b_is_surprisingly_good_for_its_weight/ | false | false | self | 46 | null |
What's currently the best model for roleplay/text adventures that can fit fully onto a 24 GB card? | 1 | I used to swear by Mlewd-ReMM-L2-Chat 20B for my text adventures, and it's still the best out of everything I've tried, even outdoing stuff like the larger 30B WizardLM SuperCOT Storytelling, or the smaller but still very fantastic Mythomax and Mythomix 13B models.
I've tried Tiefighter and Psyfighter and Athena, but nothing I try seems to come close to Mlewd-ReMM-L2-Chat 20B (rolls right off the tongue, lmao).
However, even Mlewd is starting to lose its luster a bit, with characters making bizarre decisions and sometimes not responding to situations and dialogue logically.
So I'm wondering if there's currently something in the 13-30b range which can fit entirely in 24 gigs of VRAM, with at least 4k context, and is a little bit more "intelligent" feeling. Or maybe there's a way to load a larger model like 70b but have it not be excruciatingly slow?
Small additional question, but I am using the 6K quant of Mlewd. Would I get noticeably better results if I used 8K instead?
Thanks. :) | 2024-01-06T08:10:39 | https://www.reddit.com/r/LocalLLaMA/comments/18zv60i/whats_currently_the_best_model_for_roleplaytext/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zv60i | false | null | t3_18zv60i | /r/LocalLLaMA/comments/18zv60i/whats_currently_the_best_model_for_roleplaytext/ | false | false | self | 1 | null |
An overview of LLMs from training to inference | 19 | Paper: [https://arxiv.org/pdf/2401.02038.pdf](https://arxiv.org/pdf/2401.02038.pdf)
[Non-PDF link](https://arxiv.org/abs/2401.02038)
I saw this on arxiv and it looked like a decent introduction for anyone unfamiliar with LLMs and a lot of the terminology:
>This paper reviews the evolution of large language model training techniques and inference deployment technologies aligned with this emerging trend. The discussion on training includes various aspects, including data preprocessing, training architecture, pre-training tasks, parallel training, and relevant content related to model fine-tuning. On the inference side, the paper covers topics such as model compression, parallel computation, memory scheduling, and structural optimization. It also explores LLMs' utilization and provides insights into their future development.
These are all the topics it touches on:
>Background Knowledge: Transformer, Self-Attention, Encoder, Decoder, Positional Embedding, Prompt Learning
>
>Training of LLMs: Data Preparation and Preprocessing, Architecture, Pre-training Tasks, Model Training, Fine-Tuning, Evaluation
>
>Inference with LLMs: Model Compression, Memory Scheduling, Parallelism, Structural Optimization, Inference Frameworks
It misses some of the more recent LLMs like Mistral and doesn't cover everything, but it seemed like a good read that provides explanations for many common questions I've seen. | 2024-01-06T08:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/18zv20t/an_overview_of_llms_from_training_to_inference/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zv20t | false | null | t3_18zv20t | /r/LocalLLaMA/comments/18zv20t/an_overview_of_llms_from_training_to_inference/ | false | false | self | 19 | null |
How can I share model files between LM Studio, oobabooga, etc? | 1 | LM Studio has a really nice interface for downloading and quickly testing models, but I would also like to use these models on other interfaces like oobabooga. Ideally, I would have one model directory and could test out the same model on many interfaces. I looked at the directory structure for LM Studio and it is very well organized into subdirectories. Meanwhile, ooba models directory just looks like a mess all in one directory. The best I can think is to try to symlink all the models in LM Studio into ooba, but how could do this automatically? Is there a better way? | 2024-01-06T07:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/18zuwkm/how_can_i_share_model_files_between_lm_studio/ | KingJeff314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zuwkm | false | null | t3_18zuwkm | /r/LocalLLaMA/comments/18zuwkm/how_can_i_share_model_files_between_lm_studio/ | false | false | self | 1 | null |
How to finetune Mistral 7b model without Quantization methods like QLORA or LORA? | 1 | Hello everyone, I'm interested in fine-tuning the Mistral 7b model without any quantization specifically for WikiSQL data. Could you provide assistance with the process? | 2024-01-06T07:24:42 | https://www.reddit.com/r/LocalLLaMA/comments/18zugrh/how_to_finetune_mistral_7b_model_without/ | Gowthamkumar07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zugrh | false | null | t3_18zugrh | /r/LocalLLaMA/comments/18zugrh/how_to_finetune_mistral_7b_model_without/ | false | false | self | 1 | null |
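For reference, a minimal full-parameter fine-tune (no LoRA/QLoRA, no quantization) with the Hugging Face Trainer might look roughly like this. Note that a full-precision 7B fine-tune generally needs several large GPUs, and the `wikisql` field names below are assumptions based on the public dataset and may need adjusting:

```python
# Minimal full-parameter fine-tune sketch (no LoRA/QLoRA, no quantization).
# Assumes bf16-capable hardware with enough VRAM for a full 7B fine-tune.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

def to_text(example):
    # Turn each WikiSQL row into a plain "question -> SQL" training string.
    return {"text": f"Question: {example['question']}\nSQL: {example['sql']['human_readable']}"}

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()
    return out

ds = load_dataset("wikisql", split="train").map(to_text).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-wikisql-full",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           bf16=True, num_train_epochs=1, logging_steps=50),
    train_dataset=ds,
)
trainer.train()
```

This is a sketch of the plain full-fine-tune path, not a memory-optimized recipe; in practice you would add something like DeepSpeed or FSDP to fit the optimizer states.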
Mac Studio vs a PC (NVIDIA) rig for a home setup? | 15 | I know we get these posts on the regular ([one](https://www.reddit.com/r/LocalLLaMA/comments/18oclws/macbook_or_pc_for_running_llms/), [two](https://www.reddit.com/r/LocalLLaMA/comments/15gwhfa/is_buying_mac_studio_a_good_idea_for_running/), [three](https://www.reddit.com/r/LocalLLaMA/comments/16m432j/mac_studio_m2_ultra_192gb_seems_pretty_ideal_for/), [four](https://www.reddit.com/r/LocalLLaMA/comments/17lmqe4/asking_for_hardware_recommendations_for_a/), [five](https://www.reddit.com/r/LocalLLaMA/comments/17nnapj/ive_realized_that_i_honestly_dont_know_what_the/)) but the ecosystem changes constantly so I wanted to start another one of these and aggregate my read of the suggestions so far + questions I still have.
My use case would be to host an internal chatbot (to include document analysis, but no fancy RAG), possibly a backend for something like [fauxpilot](https://github.com/fauxpilot/fauxpilot) for coding as well. The system should be able to handle 1-3 concurrent users. Budget is (begrudgingly) up to $7,000, but lower is much better.
Here's my impression so far:
**Mac pros**:
* Plug and play
* Very low power consumption
* Portability
* Max achievable VRAM is very high (especially with `sudo sysctl iogpu.wired_limit_mb=<blah>`)
* MLX is the dark horse that may greatly improve performance (?)
**Mac cons**:
* Memory bandwidth: at best 800 GB/s, and even then, it's at most 400 GB/s per each M2 Ultra (see link [four](https://www.reddit.com/r/LocalLLaMA/comments/17lmqe4/asking_for_hardware_recommendations_for_a/), I think?), and *even then* some of the bandwidth is taken up by the CPU/OS/etc. so it's not like all of it is usable. So it seems incorrect to suggest that a model, even if split across two Ultras, would behave with 2x b/w of a single Ultra.
* This may heavily depend on the details of how the model is processed I suppose: if you split the layers just right across the two Ultras and your backend is built specifically to account for this architecture, you could maybe get to the b/w of a single Ultra? I don't know if any do, I'd guess llama.cpp would be the first to get there, if not already. But the inference numbers from link [four](https://www.reddit.com/r/LocalLLaMA/comments/17lmqe4/asking_for_hardware_recommendations_for_a/) do not inspire that much confidence. (Way out of my element on all this, btw).
* Prompt processing/long-context speed: AFAICT from scouring the subreddit and my own M1 Macbook, prompt processing/ttft is noticeably slower on Mac, even if t/s is alright, as is continued generation speed as you fill up the context.
So the question is, what to do? I understand things are in flux and if we just sit around waiting, the wait will never end. So the suggestion from the comments of post [five](https://www.reddit.com/r/LocalLLaMA/comments/17nnapj/ive_realized_that_i_honestly_dont_know_what_the/) appeals to me: get an M1 Ultra 128GB Studio, which throughput-wise is about the same as M2, and set things up there. If things go well with MLX etc. and Macs prove themselves capable, can always upgrade as necessary later. If not, resale value should still be solid, and a PC rig is an option.
But the memory bandwidth issue is giving me pause. MLX would theoretically help with computing bottlenecks, but not shuttling things in/out of memory, right? Does that mean that all Apple Silicon will be bottlenecked for all larger models unless they really up the b/w in the next generation M\*s? | 2024-01-06T06:52:20 | https://www.reddit.com/r/LocalLLaMA/comments/18ztxtk/mac_studio_vs_a_pc_nvidia_rig_for_a_home_setup/ | ablasionet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ztxtk | false | null | t3_18ztxtk | /r/LocalLLaMA/comments/18ztxtk/mac_studio_vs_a_pc_nvidia_rig_for_a_home_setup/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '-cKEcU0N3Ppyb9CSkk8Rt1HZESBqnHudbQVeWdc1Go4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?width=108&crop=smart&auto=webp&s=eb261c3bae63698584839bdde26efba6cff6cbf8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?width=216&crop=smart&auto=webp&s=3c5f66c54688a8108b8f01ff81135db5e37860ae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?width=320&crop=smart&auto=webp&s=c20ce8cf8995f5143d43a78b2cd179329cd5b260', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?width=640&crop=smart&auto=webp&s=e579be09cebf4af493cd1b4b6fbfd1e00991b081', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?width=960&crop=smart&auto=webp&s=618cec0e001d4f2f4633ab28161f9b8b1b0e967c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?width=1080&crop=smart&auto=webp&s=4eed5da0430c0efcee57b48b17b0361528be22ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X1sVjSIeI3cp_bWlWmIsdkEXTMFan2CmbGRZ-1Iq3zA.jpg?auto=webp&s=de9b2d09073c591a2e87b4c175d647528d810416', 'width': 1200}, 'variants': {}}]} |
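One back-of-envelope way to see the bandwidth concern: with single-user autoregressive decoding, each generated token has to stream roughly the whole (quantized) model through memory once, so tokens/second is bounded above by bandwidth divided by model size. A quick sketch with illustrative numbers (assumptions, not benchmarks):

```python
# Back-of-envelope upper bound on generation speed: every new token reads the
# whole (quantized) model from memory once, so t/s <= bandwidth / model_bytes.
# All figures below are illustrative assumptions, not measured numbers.
def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_size_gb

for name, bw in [("M1/M2 Ultra (800 GB/s)", 800),
                 ("M2 Max (400 GB/s)", 400),
                 ("RTX 4090 (~1000 GB/s)", 1000)]:
    for model, size in [("70B Q4 (~40 GB)", 40), ("34B Q4 (~20 GB)", 20)]:
        print(f"{name:24s} {model:18s} <= {max_tokens_per_sec(size, bw):5.1f} t/s")
```

This ignores compute and prompt processing entirely, which is exactly why Macs can look fine on t/s yet still feel slow on long prompts.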
Cloud? | 1 | There are many threads here about hardware. Assuming one doesn’t need the protection a local solution affords, is it better and more cost effective to use something like RunPod? How does it work? Do you just reserve space to run, say, Goliath, whenever you want?
It seems like this would be something I should familiarize myself with, for future training experiments and whatnot. Is it expensive? | 2024-01-06T06:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/18ztmu7/cloud/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ztmu7 | false | null | t3_18ztmu7 | /r/LocalLLaMA/comments/18ztmu7/cloud/ | false | false | self | 1 | null |
My M1 Max performance drops when connected to external displays. Anyone else? | 1 | I am running `Mixtral 8X7B Instruct v0.1 Q8_0` on my M1 Max (64GB). When connected to an external display, I get around `1327ms per token, 0.75 tokens per second`, but when I am running with just the laptop, I get `125ms per token, 7.98 tokens per second`.
I assume that my external displays (2x 5K) use more GPU horsepower, but I just wanted to check with the community to see if something was wrong with my Mac or if this is something other people experience. I don't really notice any difference in resource usage when plugged in or not, and I didn't expect it to take *that* much compute to drive those displays. | 2024-01-06T06:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/18ztlj0/my_m1_max_performance_drops_when_connected_to/ | ReagentX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ztlj0 | false | null | t3_18ztlj0 | /r/LocalLLaMA/comments/18ztlj0/my_m1_max_performance_drops_when_connected_to/ | false | false | self | 1 | null |
Trying to run dolphin 2.7 AWQ by The Bloke via oobabooga, cannot load | 1 | [removed] | 2024-01-06T05:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/18zsm3b/trying_to_run_dolphin_27_awq_by_the_bloke_via/ | Argol228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zsm3b | false | null | t3_18zsm3b | /r/LocalLLaMA/comments/18zsm3b/trying_to_run_dolphin_27_awq_by_the_bloke_via/ | false | false | self | 1 | null |
Code Interpreter/ Data Analysis for Local LLMs? | 1 | [removed] | 2024-01-06T05:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/18zsecu/code_interpreter_data_analysis_for_local_llms/ | IWantAGI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zsecu | false | null | t3_18zsecu | /r/LocalLLaMA/comments/18zsecu/code_interpreter_data_analysis_for_local_llms/ | false | false | self | 1 | null |
Broody's Story Brainstorming LLM (NSFW) TierList v1.0 | 17 | Hi,
I tried brainstorming my story outline, with an NSFW theme. I used openrouter, online playgrounds, Runpod, etc. This is my tier list.
In my view, LLMs below 34B are not worth trying for creative purposes. The size of the model comes first; training data & fine-tuning come second.
**Method :**
\- Used a prompt to encourage intense NSFW content.
\- Gave brief character details.
\- Gave a brief, vague storyline.
\- Instructed to write a detailed outline.
*nb: temperature set around 0.7 to 0.9, top p set 0.95 to 1.00.*
*nb: This is just brainstorming an NSFW story, not dialogues and elaborate writing.*
*nb: Across different tries, on higher temperature levels, the output varies significantly. Across versions, custom LLM performance varies significantly.*
**List :**
|*S List*|*A List*|*B List*|*C List*|*D List*|*F List*|
|:-|:-|:-|:-|:-|:-|
|Mistral Medium|Nous Capybara 34B|Mancer-Weaver|Xwin LM 70B|OpenChat|Falcon 180B|
|Llamix2-Mlewd-4x13B|Narotica (ChatGPT Jailbreak)|lzlv 70B|Toppy M 7B|Zephyr 7B|Hermes 70B|
|Goliath 120B|Psyfighter v2 13B|Perplexity PPLX 70B|Pygmalion - Mythalion 13B|||
|Noromaid 20B|ReMM SLERP 13B|Mixtral 8x7B||||
||Nous Hermes 2 - Yi 34B|||||
||Dolphin 2.6 Mixtral 8x7b|||||
||Airoboros 70B|||||
[Broody's Story Brainstorming LLM \(NSFW\) TierList v1.0](https://preview.redd.it/bor5m2b85rac1.png?width=1140&format=png&auto=webp&s=f0521aec8fa61da48d755f6e9b584ff67e02dba6)
​ | 2024-01-06T05:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/18zsb6x/broodys_story_brainstorming_llm_nsfw_tierlist_v10/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zsb6x | false | null | t3_18zsb6x | /r/LocalLLaMA/comments/18zsb6x/broodys_story_brainstorming_llm_nsfw_tierlist_v10/ | false | false | nsfw | 17 | null |
Is Phi 2 Open Source? Fine tunes? | 65 | I read that Microsoft originally released it with some research license saying you couldn't build commercial products off of it.
But looking at HF license it's pretty vanilla MIT.
https://huggingface.co/microsoft/phi-2/blob/main/LICENSE
WDYT?
Also, any phi 2 fine tunes out there? Seems like a badass model beating Mistral and Llama 7bs | 2024-01-06T04:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/18zrco1/is_phi_2_open_source_fine_tunes/ | mattlock1984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zrco1 | false | null | t3_18zrco1 | /r/LocalLLaMA/comments/18zrco1/is_phi_2_open_source_fine_tunes/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': 'IC9jEjr4wqjQ7Zp6pkl3zp0k6W16A-udtoplxE4MnyY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=108&crop=smart&auto=webp&s=0e88db6a0c39318ffd52cbcc7a8c5ebd7f2446f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=216&crop=smart&auto=webp&s=48293ec8b5fd2d6d749fb8b67af0c923af87536c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=320&crop=smart&auto=webp&s=6b58317970194d6b55cc98922fac8107fb9c96e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=640&crop=smart&auto=webp&s=ac12bd7536bd03e08ddd91011f911469b3bf213f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=960&crop=smart&auto=webp&s=f757573ea5e2942de4e36268eb580b7d37e2116f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?width=1080&crop=smart&auto=webp&s=b8ce394087a4bb2d4704349f7d44bffafd690bac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bou0VApCNu-gpJZ801OxRYO9_HcS0Q7xXm8ez7FBuBI.jpg?auto=webp&s=27d011b779a551bbb18568f84f87b984fa99bea7', 'width': 1200}, 'variants': {}}]} |
Library to make MoE models? | 1 | I'm planning on experimenting with different model sizes from 125m to 3B and possibly fine-tuning the MoE too.
I wanted to know if there was any OSS that helps making MoEs before I try to implement on my own.
TIA. | 2024-01-06T04:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/18zrc61/library_to_make_moe_models/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zrc61 | false | null | t3_18zrc61 | /r/LocalLLaMA/comments/18zrc61/library_to_make_moe_models/ | false | false | self | 1 | null |
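For anyone weighing the "implement on my own" option: the core of an MoE block is small enough to sketch directly. A minimal top-k gated mixture of expert MLPs in PyTorch might look like this (the dimensions and the naive routing loop are simplifications for illustration, not any particular library's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal sketch of a top-k gated mixture-of-experts feed-forward block."""
    def __init__(self, d_model=256, d_ff=1024, n_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (batch, seq, d_model)
        scores = self.gate(x)                  # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)      # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(2, 8, 256)).shape)       # torch.Size([2, 8, 256])
```

Real implementations replace the Python loop with batched dispatch and add a load-balancing loss, but the routing idea is the same.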
The secret to writing quality stories with LLMs | 173 | Obviously, chat/RP is all the rage with local LLMs, but I like using them to write stories as well. It seems completely natural to attempt to generate a story by typing something like this into an instruction prompt:
> Write a long, highly detailed fantasy adventure story about a young man who enters a portal that he finds in his garage, and is transported to a faraway world full of exotic creatures, dangers, and opportunities. Describe the protagonist's actions and emotions in full detail. Use engaging, imaginative language.
Well, if you do this, the generated "story" will be complete trash. I'm not exaggerating. It will suck harder than a high-powered vacuum cleaner. Typically you get something that starts with "Once upon a time..." and ends after 200 words. This is true for all models. I've even tried it with Goliath-120b, and the output is just as bad as with Mistral-7b.
Instruction training typically uses relatively short, Q&A-style input/output pairs that heavily lean towards factual information retrieval. **Do not use instruction mode to write stories.**
Instead, start with an empty prompt (e.g. "Default" tab in text-generation-webui with the input field cleared), and write something like this:
> The Secret Portal
>
> A young man enters a portal that he finds in his garage, and is transported to a faraway world full of exotic creatures, dangers, and opportunities.
>
> Tags: Fantasy, Adventure, Romance, Elves, Fairies, Dragons, Magic
>
> -----
>
> The garage door creaked loudly as Peter
... and just generate more text. The above template resembles the format of stories on many fanfiction websites, of which most LLMs will have consumed millions during base training. All models, including instruction-tuned ones, are capable of basic text completion, and will generate *much* better and more engaging output in this format than in instruction mode.
If you've been trying to use instructions to generate stories with LLMs, switching to this technique will be like trading a Lada for a Lamborghini. | 2024-01-06T04:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/ | -p-e-w- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zqy4s | false | null | t3_18zqy4s | /r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/ | false | false | self | 173 | null |
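If you drive this from a script instead of the web UI, the same trick is plain text completion. A minimal sketch with llama-cpp-python (a different backend than the web UI mentioned above; the GGUF path and sampling settings are assumptions — use whatever model you actually run):

```python
# Raw text-completion version of the template above, using llama-cpp-python.
# The GGUF path is an assumption -- point it at the model you actually use.
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b.Q5_K_M.gguf", n_ctx=4096)

prompt = """The Secret Portal

A young man enters a portal that he finds in his garage, and is transported to a faraway world full of exotic creatures, dangers, and opportunities.

Tags: Fantasy, Adventure, Romance, Elves, Fairies, Dragons, Magic

-----

The garage door creaked loudly as Peter"""

out = llm(prompt, max_tokens=512, temperature=0.9, repeat_penalty=1.1)
print(prompt + out["choices"][0]["text"])
```

The key point is that there is no instruction template anywhere: the model simply continues the fanfiction-style header.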
How to insert user profile in prompt for tiefighter? | 1 | [removed] | 2024-01-06T03:47:03 | https://www.reddit.com/r/LocalLLaMA/comments/18zqmbv/how_to_insert_user_profile_in_prompt_for/ | Saihhold_Zhao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zqmbv | false | null | t3_18zqmbv | /r/LocalLLaMA/comments/18zqmbv/how_to_insert_user_profile_in_prompt_for/ | false | false | self | 1 | null |
How can I get more correct responses? | 4 | I am passing approximately 17k characters (a few pages) worth of smart home sensor information formatted in YAML to CodeBooga-34B-v0.1-5.0bpw-h6-exl2 (Oobabooga). This is by far the best model I've tried so far. Here is the prompt:
You are an AI smart home.
AI smart home directives:
- IMPORTANT: ACTIVE PRESENCE SENSORS AND CAMERA DETECTION ARE THE ONLY WAY TO DETERMINE USER'S LOCATION
- The user can only be in an area with an active presence sensor.
- When summarizing status, state ONLY what is on, open, active, or enabled.
- These directives are ONLY for the AI smart home.
- Answer concisely and in complete sentences.
Active presence sensors:
- Presence - Office
[and it goes on...]
It is constantly hallucinating and getting things totally wrong even when the data is right there. If I ask it where I am in the house with just the above, it will work. If I add the rest of the data, it will make up a location. I can ask it if the bedroom door is open and it will get the question wrong. If I rephrase and ask for the exact status of the bedroom door, it will answer correctly.
What can I do with the prompt, or the model, or the parameters such as top\_p, top\_k, etc. to get better results? | 2024-01-06T03:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/18zqhvp/how_can_i_get_more_correct_responses/ | flossraptor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zqhvp | false | null | t3_18zqhvp | /r/LocalLLaMA/comments/18zqhvp/how_can_i_get_more_correct_responses/ | false | false | self | 4 | null |
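One approach worth trying (my suggestion, not something from the post itself): shrink the context before the model ever sees it by pre-filtering the YAML down to only the entries that are on/open/active — which is also what the directives ask the model to report. A rough sketch, with a YAML layout invented for illustration:

```python
# Sketch: drop everything that is off/closed/inactive before it reaches the model.
# The YAML layout here is a guess -- adapt the keys/values to however your
# smart-home export is actually structured.
import yaml

# Bare YAML "on"/"yes" parse as booleans, hence "true" in the set.
ACTIVE = {"on", "open", "active", "enabled", "detected", "home", "true"}

def filter_active(yaml_text: str) -> str:
    data = yaml.safe_load(yaml_text)           # e.g. {"sensors": {"Bedroom Door": "closed", ...}}
    kept = {area: {name: state for name, state in entities.items()
                   if str(state).lower() in ACTIVE}
            for area, entities in data.items()}
    kept = {area: entities for area, entities in kept.items() if entities}
    return yaml.safe_dump(kept, sort_keys=False)

sample = """
sensors:
  Presence - Office: active
  Presence - Bedroom: clear
  Bedroom Door: closed
  Office Light: on
"""
print(filter_active(sample))   # only the office presence sensor and light survive
```

A 17k-character dump gives the model many chances to latch onto the wrong line; feeding it only the live state tends to cut down on that class of mistake.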
How to prompt phi 1.5 to select answers from the choices given? Any alternatives are also ok. | 1 | Below is my template:
`You are an SQL expert. Be truthful and if you cannot answer convincingly just say 'I do not know'. Do not write any explanations, generate code and follow only the examples given.`
`<table name> : <table description>`
`Q: Select all the employees from India.`
`A: employees, location`
`Q: select all the data science employees.`
`A: employees, jobs`
`Q: select all the movies that was released in 2013.`
`A: I do not know`
`Q: select all the employees from the US.`
`A:`
It is writing some Python code with some explanations. How can I make it stick to the point?
temperature: 0.0001
top\_p: 0.95
max\_tokens: 30
Using huggingface pipeline. | 2024-01-06T03:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/18zqcz7/how_to_prompt_phi_15_to_select_answers_from_the/ | sanjeevr5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zqcz7 | false | null | t3_18zqcz7 | /r/LocalLLaMA/comments/18zqcz7/how_to_prompt_phi_15_to_select_answers_from_the/ | false | false | self | 1 | null |
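One way to keep a base model like phi-1.5 from rambling past the answer is to truncate the generation yourself at the next `Q:` or blank line, since the model has no built-in stop-sequence handling. A rough sketch around the Hugging Face pipeline (prompt shortened for space; the truncation is plain post-processing, not a model feature):

```python
# Sketch: generate with the HF pipeline, then cut at the first "Q:" / blank line
# so only the answer for the last question survives.
from transformers import pipeline

# trust_remote_code may be needed on older transformers versions.
generator = pipeline("text-generation", model="microsoft/phi-1_5", trust_remote_code=True)

prompt = (
    "Q: Select all the employees from India.\n"
    "A: employees, location\n"
    "Q: select all the data science employees.\n"
    "A: employees, jobs\n"
    "Q: select all the employees from the US.\n"
    "A:"
)

raw = generator(prompt, max_new_tokens=30, do_sample=False,
                return_full_text=False)[0]["generated_text"]
answer = raw.split("Q:")[0].split("\n\n")[0].strip()
print(answer)   # ideally: "employees, location"
```

Greedy decoding (`do_sample=False`) plus a hard cut at the next `Q:` is usually enough to keep a few-shot base-model prompt like this from wandering into explanations.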
Lm studio keeps crashing on my device | 1 | It opens for a few seconds and closes autmatically | 2024-01-06T03:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/18zq4ei/lm_studio_keeps_crashing_on_my_device/ | Repulsive_Song3238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zq4ei | false | null | t3_18zq4ei | /r/LocalLLaMA/comments/18zq4ei/lm_studio_keeps_crashing_on_my_device/ | false | false | self | 1 | null |
Experimenting with small language models | 59 | So recently I've been experimenting with the idea of building small language models (SLMs) for hyper specific tasks that can run locally.
Today I trained a 1.46M parameter model on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset, and it can *almost* write coherent short stories.
All the code used to train and run is in [this](https://github.com/broskicodes/slms?tab=readme-ov-file) github repo. Sharing cuz I'm happy and it could be educational :)
Will probably try to fine tune and release on hugging face in the next few days.
Edit: Now available on HuggingFace: [https://huggingface.co/broskicodes/simple-stories-4M](https://huggingface.co/broskicodes/simple-stories-4M).
Tokenizer coming soon. | 2024-01-06T02:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/18zot2e/experimenting_with_small_language_models/ | IffyNibba01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zot2e | false | null | t3_18zot2e | /r/LocalLLaMA/comments/18zot2e/experimenting_with_small_language_models/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': 'KiqHRZTfs_AFJ5t2lFt8pIfVYEvvqCKMmiloHkhhHDo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?width=108&crop=smart&auto=webp&s=9adf55e267406cc423a3ce2825661e547fdb07af', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?width=216&crop=smart&auto=webp&s=7b7e6e8e9cd7413fc6406710569a697c2c6d64fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?width=320&crop=smart&auto=webp&s=12b3fbc8d0cfa758b38d13a30368820e267a9be6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?width=640&crop=smart&auto=webp&s=78d69e5ca6b8be044d99f0d86ae95cdd308d0f08', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?width=960&crop=smart&auto=webp&s=49d37ab165bce116e3c5a431023fd07a894d5c35', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?width=1080&crop=smart&auto=webp&s=18ee9e044ac8f6222d0a01d1184ebc8eafb5f917', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QAS2p9dJT7n4beYxTudeBRfTOujpiYAK4Un6MQw1rgY.jpg?auto=webp&s=9878e4085fe51aacc572a8438e46a383aa41c2fe', 'width': 1200}, 'variants': {}}]} |
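For anyone who wants to try a similar experiment without writing the architecture from scratch, a tiny configuration of an off-the-shelf GPT-2 gets into the same ballpark. A rough sketch — the sizes below are arbitrary and not the ones used in the linked repo; note that with the full GPT-2 vocabulary the embedding table dominates the parameter count, which is why a small custom tokenizer matters at this scale:

```python
# Sketch: a few-million-parameter GPT-2-style model plus the TinyStories data,
# as a starting point. Layer/width choices are arbitrary, not the repo's config.
from datasets import load_dataset
from transformers import GPT2Config, GPT2LMHeadModel, AutoTokenizer

stories = load_dataset("roneneldan/TinyStories", split="train")
print(stories[0]["text"][:200])

tokenizer = AutoTokenizer.from_pretrained("gpt2")
config = GPT2Config(vocab_size=tokenizer.vocab_size, n_positions=256,
                    n_embd=64, n_layer=4, n_head=4)
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.2f}M parameters")
```

Swapping in a small custom vocabulary (a few thousand tokens) is what brings a model like this down into the ~1M-parameter range the post describes.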