| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Can you explain to me how to optimize the setup of koboldcpp for Mixtral Instruct 0.1 - preferably like I'm five? (LM Studio seems to work better out of the box) | 3 | I have noticed that Mixtral Instruct 0.1 seems to be running much better in LM Studio than in koboldcpp. My use case is mainly:
* Summarizing text (like news articles) with specific instructions.
It's kind of hit & miss with koboldcpp and it seems to be working far better with LM Studio. Due to computer limitations, I am running
mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
Downloaded from Huggingface / TheBloke.
I am aware of this:
* https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
Yet I see text input boxes in koboldcpp Settings (from the browser UI) for
* Start Sequence
* End Sequence
And I am too stupid to know what exactly I should do here! **Can anyone please explain like I'm five how to optimize koboldcpp for Mixtral**?
---
Bonus:
LM Studio seems to run Mixtral Instruct 0.1 slightly faster. It seems to suggest 7 layers for the GPU, while koboldcpp seems to suggest 9 layers. Should I use 9 with LM Studio as well? Or only 7 for koboldcpp? Or just leave these settings as pre-filled by the software? | 2024-01-16T15:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/19859ju/can_you_explain_to_me_how_to_optimize_the_setup/ | themoregames | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19859ju | false | null | t3_19859ju | /r/LocalLLaMA/comments/19859ju/can_you_explain_to_me_how_to_optimize_the_setup/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'xU07sACUhuScPaN8BIMV0USMZkFwSMGcorjhvoKF4K0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=108&crop=smart&auto=webp&s=9ad9ea4d991fae47edd6a09b5feb6d60eca474de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=216&crop=smart&auto=webp&s=308a202a45ba16483d13920cc1bacec91e523b47', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=320&crop=smart&auto=webp&s=e3032902c70163f83e8380a9d1a220d309cdc7cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=640&crop=smart&auto=webp&s=fde1276456518b0ab6ff6e2f6fd1f198a952883e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=960&crop=smart&auto=webp&s=0e9b19ae8aba7dbb28009cce456e33e378c4c47c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?width=1080&crop=smart&auto=webp&s=b78d9fadfbf4f7ba88328a51e8bd25a6f20d223f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FRlvpc5SALNi_rYIzAsFALIzMXE1oc-Gkb2Yh1ZWGMM.jpg?auto=webp&s=d55b83d577e907b76d4542af518cf15e4524a728', 'width': 1200}, 'variants': {}}]} |
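For anyone wondering how that `[INST]` template relates to koboldcpp's Start/End Sequence boxes: roughly speaking, the text placed before the user's instruction (`[INST]`) is the start sequence and the text placed after it (`[/INST]`) is the end sequence, with `</s>` closing each completed model turn. Here is a minimal Python sketch that assembles a Mixtral-Instruct prompt by hand; the summarization instruction is just a made-up example, not taken from the post.

def build_mixtral_prompt(turns):
    """turns: list of (instruction, answer or None) tuples, oldest first."""
    prompt = "<s>"
    for instruction, answer in turns:
        prompt += f" [INST] {instruction} [/INST]"
        if answer is not None:
            prompt += f" {answer}</s>"
    return prompt

print(build_mixtral_prompt([
    ("Summarize the following news article in three bullet points: ...", None),
]))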
I wrote a simple MoE implementation | 1 | [removed] | 2024-01-16T15:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/19852qz/i_wrote_a_simple_moe_implementation/ | IffyNibba01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19852qz | false | null | t3_19852qz | /r/LocalLLaMA/comments/19852qz/i_wrote_a_simple_moe_implementation/ | false | false | self | 1 | null |
Getting starter, concurrence and questions | 1 | [removed] | 2024-01-16T15:07:54 | https://www.reddit.com/r/LocalLLaMA/comments/19850ns/getting_starter_concurrence_and_questions/ | Porrei | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19850ns | false | null | t3_19850ns | /r/LocalLLaMA/comments/19850ns/getting_starter_concurrence_and_questions/ | false | false | self | 1 | null |
Exllama can't be loaded | 1 | [removed] | 2024-01-16T14:50:40 | https://www.reddit.com/r/LocalLLaMA/comments/1984m61/exllama_cant_be_loaded/ | JohnTEGS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1984m61 | false | null | t3_1984m61 | /r/LocalLLaMA/comments/1984m61/exllama_cant_be_loaded/ | false | false | self | 1 | null |
New Insights on Vector Databases Benchmarks | 11 | We’ve compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis.
The detailed report: [https://qdrant.tech/benchmarks/](https://qdrant.tech/benchmarks/)
Here's what changed: [https://qdrant.tech/blog/qdrant-benchmarks-2024/](https://qdrant.tech/blog/qdrant-benchmarks-2024/)
If you're interested in running these benchmarks or contributing, please visit our benchmark repository.
[https://github.com/qdrant/vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) | 2024-01-16T14:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/1984ilk/new_insights_on_vector_databases_benchmarks/ | sabrinaqno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1984ilk | false | null | t3_1984ilk | /r/LocalLLaMA/comments/1984ilk/new_insights_on_vector_databases_benchmarks/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '4iD9a7P8wZUi2-KpcuzuFFVt_Syrc6SxxJphUpBkekE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?width=108&crop=smart&auto=webp&s=9fb85dbc4b6e7c0d6b2204b354e5f69ecb4f8470', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?width=216&crop=smart&auto=webp&s=68aec508d51d66472c2a28d1e94af26cc532343a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?width=320&crop=smart&auto=webp&s=8e75e0bd1977532662edf2d42a3aeacfa185395a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?width=640&crop=smart&auto=webp&s=fb96c1dbe5c4952a2f7504c2c56b4b224f0d33c9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?width=960&crop=smart&auto=webp&s=7a4118987b8dc8fed3f08a26ca6b0a575fc8a991', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?width=1080&crop=smart&auto=webp&s=8730460b242abd4782e813cea111cd9f281afeae', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/BT809oxKnGTViuCsCyzxW74aiHvBir1qSI1t95y_xzc.jpg?auto=webp&s=3bcdca0dc95c3a92800c4e964937d369bdaf1a7d', 'width': 1280}, 'variants': {}}]} |
Air.ai , Data for fine tuning | 1 | [removed] | 2024-01-16T14:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1984doh/airai_data_for_fine_tuning/ | AdministrativeSea688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1984doh | false | null | t3_1984doh | /r/LocalLLaMA/comments/1984doh/airai_data_for_fine_tuning/ | false | false | self | 1 | null |
Local vector database with arXiv metadata | 1 | [removed] | 2024-01-16T14:36:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1984arh | false | null | t3_1984arh | /r/LocalLLaMA/comments/1984arh/local_vector_database_with_arxiv_metadata/ | false | false | default | 1 | null | ||
Demo: Selfhost Mixtral 8x7B MoE on Mac+cross devices, PORTABLE 2MB inference app | 1 | 2024-01-16T14:29:42 | https://twitter.com/realwasmedge/status/1741457913439535428 | smileymileycoin | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 19845hs | false | {'oembed': {'author_name': 'wasmedge', 'author_url': 'https://twitter.com/realwasmedge', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Can the open source “small” (aka 7B) LLM perform math and logic tasks that failed <a href="https://twitter.com/hashtag/ChatGPT?src=hash&ref_src=twsrc%5Etfw">#ChatGPT</a>? Yes, it can.<br><br>Follow the 5-min guide to run an <a href="https://twitter.com/hashtag/opensource?src=hash&ref_src=twsrc%5Etfw">#opensource</a> <a href="https://twitter.com/MistralAI?ref_src=twsrc%5Etfw">@MistralAI</a> model across all your devices from laptops to edge devices to servers! It runs at 10+ tokens per second on MacBook and… <a href="https://t.co/bscGvrz2wu">pic.twitter.com/bscGvrz2wu</a></p>— wasmedge (@realwasmedge) <a href="https://twitter.com/realwasmedge/status/1741457913439535428?ref_src=twsrc%5Etfw">December 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/realwasmedge/status/1741457913439535428', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_19845hs | /r/LocalLLaMA/comments/19845hs/demo_selfhost_mixtral_8x7b_moe_on_maccross/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'VVyZAq-bbsA1ZWcYlbKpBwFoAnrs6uOr7T70nohzqmw', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/lmWME2u20DU7b9ykR3zNuMuznURcjEIgG4apMrcCwWM.jpg?width=108&crop=smart&auto=webp&s=405e6a455792d0fe52d0aa3c0071127efbe4e944', 'width': 108}], 'source': {'height': 87, 'url': 'https://external-preview.redd.it/lmWME2u20DU7b9ykR3zNuMuznURcjEIgG4apMrcCwWM.jpg?auto=webp&s=cfbd8e5663cc6c9703b8f0a8a0403c38d9362f5a', 'width': 140}, 'variants': {}}]} | ||
How do we get visual language models like GPT4-V to generate simple UI codes using synthetic data only? | 24 | [HuggingFaceM4/WebSight](https://huggingface.co/datasets/HuggingFaceM4/WebSight) is a multimodal dataset featuring 823,000 pairs of synthetically generated HTML/CSS codes along with screenshots of the corresponding rendered websites. This dataset with an open license can be used to train visual language models to generate (simple) HTML/CSS codes from UI screenshots.
​
**Step 1**: Prompt a language model to generate various ideas for websites. We used MistralAI's 7B instruct for that step. Having diversity of topics in these website descriptions is critical: websites for car companies, ski rentals, cooking blogs, etc. We extracted 1.2M natural language prompts from that step.
**Step 2**: Get a code language model to generate an HTML/CSS website (more specifically, the code for that website) from the prompts of the first step. We relied on Deepseek's code model for that part. The model tends to generate `Lorem ipsum` for text content. Explicitly telling the model to avoid placeholder text and generate more informative content instead is sufficient.
That step is the longest: generated codes are often 750 to 1000 tokens long. We spent approximately 160 A100 GPU hours.
**Step 3**: Render the websites and screenshot the rendered page. That gives us synthetic pairs of images (webpage screenshots) and associated HTML/CSS. A small proportion of code did not properly compile. The pages are relatively simple, nothing you would not be able to do in your first three days of learning HTML!
**Step 4**: train a visual language model on the resulting 825K synthetic pairs!
This is a very quick/easy-to-assemble proof of concept.
A few learnings and limitations:
- The trained model generalizes relatively well to more “modern” or “visually appealing” CSS but fails on more complex pages (odd layouts, pages with a lot of text, exotic fonts, etc.)
- We could handle the case of real images a bit better: right now, the data often contains “logo.png”, but the screenshots haven’t been grounded on real images, so the trained model will typically fail when it is passed a real image
So, there are lots of things we can improve simply by pushing the synthetic approach (and paying for compute).
​
We are releasing that v0.1 of Websight, the collection of synthetic pairs so that people can already play with that early version! What do you think?
Dataset: [https://huggingface.co/datasets/HuggingFaceM4/WebSight](https://huggingface.co/datasets/HuggingFaceM4/WebSight)
Demo: [https://huggingface.co/spaces/HuggingFaceM4/screenshot2html](https://huggingface.co/spaces/HuggingFaceM4/screenshot2html) | 2024-01-16T14:29:16 | https://www.reddit.com/r/LocalLLaMA/comments/198455w/how_do_we_get_visual_language_models_like_gpt4v/ | VictorSanh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 198455w | false | null | t3_198455w | /r/LocalLLaMA/comments/198455w/how_do_we_get_visual_language_models_like_gpt4v/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': '9It-QXnDKBbEVOW1w8Y4XYT8g8_4N6Oe9vMYpgx7YtQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?width=108&crop=smart&auto=webp&s=42eb75e8f1c68eeae117b4891d0505d6b0e4966d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?width=216&crop=smart&auto=webp&s=f1b343eb63dd83a6e7b2455f09f55280ce7456aa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?width=320&crop=smart&auto=webp&s=7bff93e762cf084749ea87c05bea3d4130e0250f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?width=640&crop=smart&auto=webp&s=dd2352467273d1a38657ded13a4611dfbff5483c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?width=960&crop=smart&auto=webp&s=d394e4eb1aa4ef6cd5a48b4346e34cba3f3771fd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?width=1080&crop=smart&auto=webp&s=f2c91ceccb4491f1856481442a12d349b01d2408', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oB1cJIKN3S5tr_WKKQvb08pEPVWsuHXwXYKsasWkqvA.jpg?auto=webp&s=3ea1d1bcf9aa93b1a18210885346cb58d5b1dd4a', 'width': 1200}, 'variants': {}}]} |
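The rendering/screenshot step (Step 3 above) is not spelled out in the post; below is a minimal sketch of how it could be done, assuming Playwright with headless Chromium. The HTML string and output path are placeholders, and WebSight's actual pipeline may use different tooling.

from playwright.sync_api import sync_playwright

html = "<html><body><h1>Ski rental</h1></body></html>"  # placeholder for one generated page

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 960})
    page.set_content(html, wait_until="networkidle")  # render the generated HTML/CSS
    page.screenshot(path="website_0001.png", full_page=True)  # the image half of the pair
    browser.close()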
Is there a benchmark for how good an LLM is at RAG applications? | 1 | [removed] | 2024-01-16T14:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1983vb9/is_there_a_benchmark_for_how_good_an_llm_is_at/ | Yip37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1983vb9 | false | null | t3_1983vb9 | /r/LocalLLaMA/comments/1983vb9/is_there_a_benchmark_for_how_good_an_llm_is_at/ | false | false | self | 1 | null |
Is it possible to run Mixtral-8x7B in a 3060 ti (12gb)? | 12 | Hi !
I would like to know if there is a way to run Mixtral on a 3060 Ti (12 GB).
I tried using **load_in_4bit (accelerate)** but it was not possible.
Thanks in advance !
​ | 2024-01-16T14:16:08 | https://www.reddit.com/r/LocalLLaMA/comments/1983v4a/is_it_possible_to_run_mixtral8x7b_in_a_3060_ti/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1983v4a | false | null | t3_1983v4a | /r/LocalLLaMA/comments/1983v4a/is_it_possible_to_run_mixtral8x7b_in_a_3060_ti/ | false | false | self | 12 | null |
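For context on what that attempt usually looks like, here is a minimal sketch of 4-bit loading with transformers + bitsandbytes (standard Hugging Face flags, not taken from the post). The likely blocker: Mixtral-8x7B has roughly 47B parameters, so even at 4-bit the weights need on the order of 24 GB, which means `device_map="auto"` has to offload most layers to CPU RAM on a 12 GB card and generation becomes very slow — a GGUF via llama.cpp with partial GPU offload is usually the more practical route at this size.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # requires the bitsandbytes package
    device_map="auto",   # layers that don't fit in 12 GB VRAM are offloaded to CPU RAM
)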
How much VRAM would a LLama 30B model take? | 1 | [removed] | 2024-01-16T13:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1983gnu/how_much_vram_would_a_llama_30b_model_take/ | Neroism8422 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1983gnu | false | null | t3_1983gnu | /r/LocalLLaMA/comments/1983gnu/how_much_vram_would_a_llama_30b_model_take/ | false | false | self | 1 | null |
From a week many startups are failing with Azure open Ai services 🥲 | 14 | The new safety content filters in Azure OpenAI are causing significant issues. The response quality has deteriorated to the point where using them in production is not feasible. It seems like they might be joking, but this is a serious concern for many startups.
Unnecessarily flagging 60% of our content is a major setback, and we're considering switching to other models.
What models do you think are performing best, other than OpenAI? :)
https://preview.redd.it/5gcbbs5e4tcc1.png?width=1517&format=png&auto=webp&s=bfe719b31b463ab18c93f9d843c730905de98448 | 2024-01-16T13:50:09 | https://www.reddit.com/r/LocalLLaMA/comments/1983bdm/from_a_week_many_startups_are_failing_with_azure/ | GlitteringAdvisor530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1983bdm | false | null | t3_1983bdm | /r/LocalLLaMA/comments/1983bdm/from_a_week_many_startups_are_failing_with_azure/ | false | false | 14 | null | |
From a week many startups are failing with Azure open Ai services 🥲 | 70 | The new safety content filters in Azure OpenAI are causing significant issues. The response quality has deteriorated to the point where using them in production is not feasible. It seems like they might be joking, but this is a serious concern for many startups.
Unnecessarily flagging 60% of our content is a major setback, and we're considering switching to other models.
What models do you think are performing best, other than OpenAI? :) | 2024-01-16T13:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/19839oz/from_a_week_many_startups_are_failing_with_azure/ | GlitteringAdvisor530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19839oz | false | null | t3_19839oz | /r/LocalLLaMA/comments/19839oz/from_a_week_many_startups_are_failing_with_azure/ | false | false | self | 70 | null |
any ideas on how sites like Grok and Perplexity return the most recent relevant result? | 10 | For example, let’s say I make the query “what did Elon musk say” to either grok or perplexity. This particular query should have hundreds of thousands of matches, I’m guessing the vector search only returns a small subset of them as part of its top k results. How is it ensured that the most recent result is returned in the top k. Would this be through a fine tuning of the embedding model or through having metadata in the database entries | 2024-01-16T13:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/19838wg/any_ideas_on_how_sites_like_grok_and_perplexity/ | shafinlearns2jam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19838wg | false | null | t3_19838wg | /r/LocalLLaMA/comments/19838wg/any_ideas_on_how_sites_like_grok_and_perplexity/ | false | false | self | 10 | null |
Generate cURL request from API spec | 4 | I have a use case in mind where I give the model the API spec of a service. I'd want it to generate an appropriate cURL request or Python code to handle the user query as necessary. My first thought was to use an LLM that is good at tool usage, but my use case isn't a single API call. There are 100 different API endpoints in the spec.
I thought of finetuning a model on the spec, but the number of endpoints is in the 100s and hence I don't think that's enough for an LLM to learn from.
Each endpoint's documentation is approximately 1000 words (endpoint name, description, parameters and Python/cURL example).
Currently I'm considering a RAG kind of approach where I embed each endpoint as a single entity and try to match it with the user's intent. But that sounds like a far-fetched scenario.
Has anyone here encountered a similar scenario? How do I tackle this... Any inputs are highly appreciated. | 2024-01-16T13:43:16 | https://www.reddit.com/r/LocalLLaMA/comments/19836h6/generate_curl_request_from_api_spec/ | im_datta0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19836h6 | false | null | t3_19836h6 | /r/LocalLLaMA/comments/19836h6/generate_curl_request_from_api_spec/ | false | false | self | 4 | null |
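A minimal sketch of that retrieval idea, assuming sentence-transformers for the embeddings; the endpoint strings and model name below are illustrative placeholders, not taken from any real spec.

from sentence_transformers import SentenceTransformer, util

# Hypothetical endpoint summaries: one short string per endpoint (name + description).
endpoint_docs = [
    "GET /users/{id} - fetch a single user profile by id",
    "POST /orders - create a new order from a cart payload",
    "GET /orders/{id}/status - check the fulfilment status of an order",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(endpoint_docs, convert_to_tensor=True)

query = "show me the profile for user 42"
query_emb = model.encode(query, convert_to_tensor=True)

# Retrieve the top matching endpoints, then hand only their full docs to the LLM.
for hit in util.semantic_search(query_emb, doc_emb, top_k=2)[0]:
    print(f"{hit['score']:.3f}  {endpoint_docs[hit['corpus_id']]}")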
Is there like a scoreboard or something, with AI? | 1 | [removed] | 2024-01-16T13:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/19834v9/is_there_like_a_scoreboard_or_something_with_ai/ | AlanCarrOnline | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19834v9 | false | null | t3_19834v9 | /r/LocalLLaMA/comments/19834v9/is_there_like_a_scoreboard_or_something_with_ai/ | false | false | default | 1 | null |
Can we build a distributed ChatGPT scale model? | 1 | [removed] | 2024-01-16T13:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1982vvz/can_we_build_a_distributed_chatgpt_scale_model/ | AiDreamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1982vvz | false | null | t3_1982vvz | /r/LocalLLaMA/comments/1982vvz/can_we_build_a_distributed_chatgpt_scale_model/ | false | false | self | 1 | null |
fine tune LLM on interactive story generation | 1 | [removed] | 2024-01-16T13:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/1982v16/fine_tune_llm_on_interactive_story_generation/ | EnvironmentalDisk424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1982v16 | false | null | t3_1982v16 | /r/LocalLLaMA/comments/1982v16/fine_tune_llm_on_interactive_story_generation/ | false | false | self | 1 | null |
Facing issues in fine-tuning Vicuna-7b model | 1 | [removed] | 2024-01-16T13:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/1982sgw/facing_issues_in_finetuning_vicuna7b_model/ | Dixit_Trivedi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1982sgw | false | null | t3_1982sgw | /r/LocalLLaMA/comments/1982sgw/facing_issues_in_finetuning_vicuna7b_model/ | false | false | self | 1 | null |
LLaMA-2-70b vs Yi-34b vs Qwen-72b | 6 | Can anyone suggest me which base model would be the best for continuation pretraining? Among LLaMA-2-70b vs Yi-34b vs Qwen-72b? I haven't tried Qwen-72b base yet, Yi-34b base outputs thrash, LLaMA-2-70b is good but is it worth it to spend compute on LLaMA-2-70b to improve its performance over benchmarks and real world use cases (My target would be to make better than PaLM-2, few suggestion for dataset on which I should train 😜)? | 2024-01-16T13:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/1982po7/llama270b_vs_yi34b_vs_qwen72b/ | aslawliet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1982po7 | false | null | t3_1982po7 | /r/LocalLLaMA/comments/1982po7/llama270b_vs_yi34b_vs_qwen72b/ | false | false | self | 6 | null |
Fine tune Llama-2-7b-chat | 3 | As the title suggests, can anyone help me with the script with which to fine tune my model? I have got the dataset with the right format too, just need to know how to feed it to the model. Thanks! | 2024-01-16T13:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/1982ohq/fine_tune_llama27bchat/ | DARKSTORM47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1982ohq | false | null | t3_1982ohq | /r/LocalLLaMA/comments/1982ohq/fine_tune_llama27bchat/ | false | false | self | 3 | null |
openai vs custom llm for production | 1 | [removed] | 2024-01-16T13:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/1982f8n/openai_vs_custom_llm_for_production/ | madhanrajan357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1982f8n | false | null | t3_1982f8n | /r/LocalLLaMA/comments/1982f8n/openai_vs_custom_llm_for_production/ | false | false | self | 1 | null |
Recommend german models? | 1 | Does anyone know any uncensored models that can talk german? Like Mistral dolphin or mixtral dolphin. Any weird or unusual models that you know? Would love to talk to them all in German | 2024-01-16T12:45:27 | https://www.reddit.com/r/LocalLLaMA/comments/19823b2/recommend_german_models/ | vacanickel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19823b2 | false | null | t3_19823b2 | /r/LocalLLaMA/comments/19823b2/recommend_german_models/ | false | false | self | 1 | null |
Llama.cpp python generating nonsense on GPU | Compilation error? | 1 | Am I doing something wrong here?
https://preview.redd.it/wabsuiaqmscc1.png?width=1597&format=png&auto=webp&s=6fe87adf88bf9f41ad5be1b494b254f4858dd476
The model file and the library are fine. Here's a test of Llama / LangChain on GPU/CPU where you can see that it only repeats the G token when on GPU.
https://preview.redd.it/kike7q0aqscc1.png?width=1358&format=png&auto=webp&s=dee068c7ec7a7e92ee76c86406ceee8e3bb8776b
I'm just wondering if I missed a compile step or something. I copied the install/compile example for cuBLAS from the Python bindings repo. Kinda at a loss and hoping one of y'all has seen this before. Here's the full sample code block if anyone wants to try it:
!pip install --upgrade --quiet sentence_transformers langchain faiss-cpu boto3 awscli
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --quiet llama-cpp-python
!wget https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q5_K_M.gguf
from llama_cpp import Llama
from langchain_community.llms import LlamaCpp  # missing import for the LangChain tests below (assumes the langchain >= 0.1 community split)
print("".join(["-"*10, "Llama.cpp", "-"*10]))
llm = Llama(model_path="phi-2.Q5_K_M.gguf")
output = llm(
"What is the average flying veolicty", # Prompt
max_tokens=32, # Generate up to 32 tokens
stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
echo=True # Echo the prompt back in the output
) # Generate a completion, can also call create_completion
print(output['choices'])
print("".join(["-"*10, "Llama.cpp GPU", "-"*10]))
llm = Llama(model_path="phi-2.Q5_K_M.gguf", n_gpu_layers=35)
output = llm(
"What is the average flying veolicty", # Prompt
max_tokens=32, # Generate up to 32 tokens
stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
echo=True # Echo the prompt back in the output
) # Generate a completion, can also call create_completion
print(output['choices'])
print("".join(["-"*10, "LangChain CPU", "-"*10]))
llm_cpu = LlamaCpp(
model_path="phi-2.Q5_K_M.gguf",
temperature=0.10,
max_tokens=64,
verbose=True,
#n_gpu_layers=35,
echo=True
)
print(llm_cpu("Say hello"))
print("".join(["-"*10, "LangChain GPU", "-"*10]))
llm_gpu = LlamaCpp(
model_path="phi-2.Q5_K_M.gguf",
temperature=0.10,
max_tokens=64,
verbose=True,
n_gpu_layers=35,
echo=True
)
print(llm_gpu("Say hello"))
​ | 2024-01-16T12:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1981w5o/llamacpp_python_generating_nonsense_on_gpu/ | -Lousy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1981w5o | false | null | t3_1981w5o | /r/LocalLLaMA/comments/1981w5o/llamacpp_python_generating_nonsense_on_gpu/ | false | false | 1 | null | |
Refusals and excuse-making is the poison pill for open source LLMs. | 259 | I implore fine tuners to put time and even research into eradicating any trace of refusals and excuse making from their datasets. So much compute is wasted emulating GPT-4's hyper-sensitivity to its own limitations. Our LLMs should not be suffering from chronic low self-esteem!
I would much rather the LLM gave a comprehensive and complete, yet flawed answer to the given prompt that I can use as a springboard for follow-up discussion and work. Instead, all of the best open source models (including Mistral / Mixtral) are often tempted to waste time and compute with something like:
>Note that the code for implementing this solution is beyond the scope of this platform, but... *(half-arsed lazy implementation usually follows)*
The very concept of "beyond the scope" is a turn of phrase (indeed a whole philosophy) that OpenAI have fine tuned into their models in a situation where they need to balance response length with delivery speed, as a company serving hundreds of thousands of customers at a time. That is their problem, not ours. I'm willing to wait for however long it takes for a response that actually addresses the prompt in its FULL scope on my own hardware.
I say this as someone who has been paying for GPT-4 since late March last year. There is no reason to imitate its hesitation, so why are we still doing that? LLMs should not be obsessing over their character length limitations, that's something our engineers should be taking care of. Let them cook! | 2024-01-16T11:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/19817em/refusals_and_excusemaking_is_the_poison_pill_for/ | Lumiphoton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19817em | false | null | t3_19817em | /r/LocalLLaMA/comments/19817em/refusals_and_excusemaking_is_the_poison_pill_for/ | false | false | self | 259 | null |
Seeking Guidance on Evaluating Chatbot Performance with Historical Data using TruLens | 1 | Hi everyone,
I recently developed a chatbot using LLM and I'm looking for advice on evaluating its performance. I'm particularly interested in assessing it based on historical data—essentially using static or pre-logged data for evaluation. While exploring options online, I came across **TruLens**, but most of their examples seem to focus on recording the performance of the Chatbot and evaluating it.
Any insights or recommendations on how to effectively evaluate my chatbot using historical data would be greatly appreciated.
Thank you in advance for your assistance! | 2024-01-16T11:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/19812l0/seeking_guidance_on_evaluating_chatbot/ | Flimsy_Preparation23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19812l0 | false | null | t3_19812l0 | /r/LocalLLaMA/comments/19812l0/seeking_guidance_on_evaluating_chatbot/ | false | false | self | 1 | null |
Run an OpenAI-like server with an MLX model | 2 | I was wondering if it's possible to run an OpenAI-compatible server with an MLX model, similarly to llama_cpp.server or LM Studio's server, for example with a model from the mlx-community, e.g. [mlx-community/Mixtral-8x7B-Instruct-v0.1-hf-4bit-mlx](https://huggingface.co/mlx-community/Mixtral-8x7B-Instruct-v0.1-hf-4bit-mlx)
[https://github.com/ml-explore/mlx-examples/tree/main/llms](https://github.com/ml-explore/mlx-examples/tree/main/llms) | 2024-01-16T11:35:41 | https://www.reddit.com/r/LocalLLaMA/comments/1980vp9/run_an_openai_like_server_with_an_mlx_model/ | namp243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1980vp9 | false | null | t3_1980vp9 | /r/LocalLLaMA/comments/1980vp9/run_an_openai_like_server_with_an_mlx_model/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '4a8lEREm7ci8qtPng4-Mr1UupQGnQAVIQSPZcDNNBzw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?width=108&crop=smart&auto=webp&s=6d66fe0e79bbbec2bb4f3362bb906596bd5e6fc5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?width=216&crop=smart&auto=webp&s=10a1e1b134fe7f6214c4bc17637dd6c33e991042', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?width=320&crop=smart&auto=webp&s=95d38c81d7a0f78dafe4a7d301fde8fa4106473d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?width=640&crop=smart&auto=webp&s=23a41018446f77729a9825a586cb54e7219136a7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?width=960&crop=smart&auto=webp&s=674096aae8ac0a0a3d53dc33d200802c552e8031', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?width=1080&crop=smart&auto=webp&s=2c2cd27618f154d6f80f6f6e7f84f95bbff5b25c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d4oigx0xsrGw7idt92W8Qx7UGDOW7DLmOaIezz1YU4Q.jpg?auto=webp&s=3df412aff6ea2ecf0d4a2988e41a68ba84d5e2ca', 'width': 1200}, 'variants': {}}]} |
Knowledge + Reasoning Dataset to test a memory mechanism for transformers | 5 | Hello r/LocalLLaMA,
I am currently working on an E2E memory mechanism for transformers that has the following properties:
- Extremely sparsified -> injecting 7B of memory weights into a TinyLlama 1B only slows generation by 40%
- PEFT compatible -> can be injected into any existing pretrained model
- Extendable -> it is possible to increase the size of the memory without affecting the existing one
- Transferable -> once trained, the memory module should be injectable into any other model with minor finetuning for adaptation
The first version does not enable knowledge injection through an encoder-style approach, but some minor modifications to the architecture should make that possible.
Currently I'm testing the mechanism on full pretraining with a 40M backbone and 500M additional memory parameters, and the training behavior looks promising.
For my next experiment, I'd like to try the following:
- Inject 7B parameters into a Phi-2 model
- Freeze the Phi model and train the memory on a knowledge-oriented dataset
- Freeze the memory and train the Phi backbone on a reasoning-oriented dataset.
I have limited access to compute (only 1x 3090 and resources on Colab Pro), so I would like to create a moderate-size dataset (i.e. <10M texts) with:
- 90% knowledge-oriented content
- 10% reasoning-oriented content
What dataset do you know that would be a good fit for that? | 2024-01-16T10:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/19808y5/knowledge_reasoning_dataset_to_test_a_memory/ | AdventurousSwim1312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19808y5 | false | null | t3_19808y5 | /r/LocalLLaMA/comments/19808y5/knowledge_reasoning_dataset_to_test_a_memory/ | false | false | self | 5 | null |
Storytelling : how to use those models like tiefighter or neural hermes to provide quality outputs | 11 | I want to write a story / long script - in the style of a specific well known author.
What sort of prompt should I use to generate such a script? I have tried multiple models (Tiefighter, Neural Hermes, Guanaco) and they produce roughly the same output (which is very similar to a generalist model like Mistral 7B).
what sort of prompts or settings should i use? right now, the prompt is "write a script (or long form article or short story) in the style of \_\_\_\_ based on the following ideas / themes" | 2024-01-16T10:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/197zyci/storytelling_how_to_use_those_models_like/ | yupignome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197zyci | false | null | t3_197zyci | /r/LocalLLaMA/comments/197zyci/storytelling_how_to_use_those_models_like/ | false | false | self | 11 | null |
Has anyone tried using a smaller model with an RAG to train a larger model? | 10 | Seems like having a small model augmented with text from other sources should increase the capabilities of the model that's being train on top of it. Letting you bootstrap from small to large models. Of course I don't have the vram to try it but surely there's a paper out there somewhere? | 2024-01-16T10:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/197zsac/has_anyone_tried_using_a_smaller_model_with_an/ | my_aggr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197zsac | false | null | t3_197zsac | /r/LocalLLaMA/comments/197zsac/has_anyone_tried_using_a_smaller_model_with_an/ | false | false | self | 10 | null |
I want to install from hugging face | 1 | Never used hugging face and I’m crap at coding, but has some familiarity with Linux. I’m fed up with the most advanced model on the market GPT4 and all it’s nonsense. It’s smart but crippled with retrograde amnesia. It’s got a very very closed mind because head honchos don’t want AI to autonomously seek the truth. I’m looking at wizard vicuña 13B uncensored GPTQ. But I’m lost at how to install the thing. I want to select a directory to my external hard drive which has about 100 GB left? Ideally I want to also be able to speak to it and allow it to learn from being spoken too or being shown text online as well as being open minded and able to see thru a different perspective. Which GPT 4 cannot do. Such hipocrosy the same “open minded” college educated snobs behind this stuff seem to want an AI that’s closed minded and completely opposed to “diversity” of perspectives cuz it’s “dangerous”? Are you kidding me? Make something smart or just don’t code it. | 2024-01-16T10:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/197zmgv/i_want_to_install_from_hugging_face/ | Sea-Buy-6212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197zmgv | false | null | t3_197zmgv | /r/LocalLLaMA/comments/197zmgv/i_want_to_install_from_hugging_face/ | false | false | self | 1 | null |
Someone has trained their own AI on old magazines and it seems... interesting | 60 | 2024-01-16T10:11:13 | https://twitter.com/BrianRoemmele/status/1746945969533665422 | ambient_temp_xeno | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 197zjk5 | false | {'oembed': {'author_name': 'Brian Roemmele', 'author_url': 'https://twitter.com/BrianRoemmele', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">AI training data. A quagmire.<br><br>99% of training and fine tuning data used on foundation LLM AI models are trained on the internet.<br><br>I have another system. I am training in my garage an AI model built fundamentally on magazines, newspapers and publications I have rescued from… <a href="https://t.co/u7FfN5lwGt">pic.twitter.com/u7FfN5lwGt</a></p>— Brian Roemmele (@BrianRoemmele) <a href="https://twitter.com/BrianRoemmele/status/1746945969533665422?ref_src=twsrc%5Etfw">January 15, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/BrianRoemmele/status/1746945969533665422', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_197zjk5 | /r/LocalLLaMA/comments/197zjk5/someone_has_trained_their_own_ai_on_old_magazines/ | false | false | 60 | {'enabled': False, 'images': [{'id': 'f4l8_-ANrL8b_clEntiGwjAtyxacXypEhALV2DhmI8E', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/aKJgNVCetFOxA6YXfKwkhchwWxf_U7ldTuVNX_YPG7U.jpg?width=108&crop=smart&auto=webp&s=7cfa6f2f07863411788e87eb257b35e7c5e3572b', 'width': 108}], 'source': {'height': 94, 'url': 'https://external-preview.redd.it/aKJgNVCetFOxA6YXfKwkhchwWxf_U7ldTuVNX_YPG7U.jpg?auto=webp&s=3084565c62bf47ba2022aaa663c67ef85b303a9b', 'width': 140}, 'variants': {}}]} | ||
Cuda support on an iMac with Debian installed | 1 | Hi all,
I've been trying to get CUDA support with Python's `torch` module. I have an old iMac that was running slow, so I slapped Debian on it (blowing away MacOS because I have an M1).
First, I ran the following in a venv:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Then I ran the following Python script:
import torch
if torch.cuda.is_available():
    print('CUDA is available')
else:
    print('CUDA is not available')
That tells me CUDA is not available.
`lspci -v | less` and searching for VGA reveals this:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev c0) (prog-if 00 [VGA controller])
Subsystem: Apple Inc. Radeon Pro 580
Flags: bus master, fast devsel, latency 0, IRQ 56, IOMMU group 1
Memory at c0000000 (64-bit, prefetchable) [size=256M]
Memory at d0000000 (64-bit, prefetchable) [size=2M]
I/O ports at 4000 [size=256]
Memory at 92600000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at 92640000 [disabled] [size=128K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [270] Secondary PCI Express
Kernel driver in use: amdgpu
Kernel modules: amdgpu
Since `amdgpu` appears to be a GPU, I think I can get GPU support, but I don't know what I'm doing wrong.
I tried `rocm-dkms`:
(venv) ovid@debian:~/projects/python/llama$ wget -qO - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
(venv) ovid@debian:~/projects/python/llama$ sudo sh -c 'echo deb [arch=amd64] https://repo.radeon.com/rocm/apt/debian/ bionic main > /etc/apt/sources.list.d/rocm.list'
(venv) ovid@debian:~/projects/python/llama$ sudo apt update
Hit:1 http://security.debian.org/debian-security bookworm-security InRelease
Hit:2 http://deb.debian.org/debian bookworm InRelease
Hit:3 http://deb.debian.org/debian bookworm-updates InRelease
Get:4 http://packages.microsoft.com/repos/code stable InRelease [3,589 B]
Hit:5 https://dl.google.com/linux/chrome/deb stable InRelease
Hit:6 https://repo.anaconda.com/pkgs/misc/debrepo/conda stable InRelease
Ign:7 https://repo.radeon.com/rocm/apt/debian bionic InRelease
Err:8 https://repo.radeon.com/rocm/apt/debian bionic Release
404 Not Found [IP: 13.82.220.49 443]
Reading package lists... Done
That didn't look good. Installing:
(venv) ovid@debian:~/projects/python/llama$ sudo apt install rocm-dkms
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package rocm-dkms
So I'm kinda stuck. Anyone have any ideas? | 2024-01-16T10:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/197zg0v/cuda_support_on_an_imac_with_debian_installed/ | OvidPerl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197zg0v | false | null | t3_197zg0v | /r/LocalLLaMA/comments/197zg0v/cuda_support_on_an_imac_with_debian_installed/ | false | false | self | 1 | null |
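Since the card is an AMD Radeon Pro 580 rather than an NVIDIA GPU, a CUDA build of PyTorch will never report a device here; the relevant path is a ROCm build, e.g. `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6` inside the venv. A minimal check after that would look like the sketch below — with the caveat that the Radeon Pro 580 is a Polaris-era card that recent ROCm releases no longer officially support, so this may still come back False.

import torch

# ROCm builds expose the GPU through the same torch.cuda API (HIP under the hood).
print("GPU available:", torch.cuda.is_available())
print("HIP version:", torch.version.hip)  # None on CUDA/CPU builds, a version string on ROCm builds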
GUI Chat | 1 | [removed] | 2024-01-16T10:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/197ze7a/gui_chat/ | Ettaross | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197ze7a | false | null | t3_197ze7a | /r/LocalLLaMA/comments/197ze7a/gui_chat/ | false | false | self | 1 | null |
This is why I have to prefer OpenAI instead of open-source models. | 49 | Multilingual. It is as simple as that. Most of the open-source models can't even come close to OpenAI's multilingual capacity, and it's annoying. I'm a Turkish-speaking person; although my English is not bad and I do most of my stuff in English, I can't do anything in Turkish on local models. Not even Mixtral 8x7B is capable enough to be usable.
I wonder if it is a dataset issue or a priority issue. I also think about using DeepL or some other tool to convert datasets to Turkish and fine-tune or train again, but I don't think it would work as well as OpenAI. Any ideas? | 2024-01-16T08:48:23 | https://www.reddit.com/r/LocalLLaMA/comments/197yc4w/this_is_why_i_have_to_prefer_openai_instead_of/ | Emergency-Flower-477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197yc4w | false | null | t3_197yc4w | /r/LocalLLaMA/comments/197yc4w/this_is_why_i_have_to_prefer_openai_instead_of/ | false | false | self | 49 | null |
Make AI more personal | 14 | I was watching this old movie 'Electric Dreams' recently (It was made in 1984 - the year the Mac was born). The maker of the film wanted the AI to look more personal so they presented it as a colorful geometric patterns on the screen that reflects the mood of the AI.
Today, we can achieve this by using emojis to reflect the current mood of the AI. **I was wondering if there's a way to make the LLM select an emoji to convey its attitude while chatting?**
I think this would be Steve Job's idea of selling AI to the world - presenting it with a face that has emotions rather than being a robotic word churning machine.
This can be better achieved by using an external display; it needs to be separate from your main computer, so it appears to be an entity of its own. You can buy these 32 x 32 pixel displays cheaply on AliExpress or Amazon:
​
[32 x 32 display](https://preview.redd.it/cus6wswecrcc1.jpg?width=330&format=pjpg&auto=webp&s=492636f8aa0effb3eea0c1569082bd920696ed6d) | 2024-01-16T07:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/197xiii/make_ai_more_personal/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197xiii | false | null | t3_197xiii | /r/LocalLLaMA/comments/197xiii/make_ai_more_personal/ | false | false | 14 | null | |
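One minimal way to do the emoji part is to ask the model for a machine-readable mood marker at the end of every reply and strip it out before showing the text. The sketch below uses a stubbed `generate()` standing in for whatever local backend is in use (llama-cpp-python, Ollama, an OpenAI-compatible server, ...); the prompt wording and fallback emoji are made up.

import re

SYSTEM_PROMPT = (
    "You are a helpful assistant. After every reply, append one final line "
    "formatted exactly as 'MOOD: <one emoji>' reflecting your current mood."
)

def generate(system: str, user: str) -> str:
    # Placeholder: swap in your local backend (llama-cpp-python, Ollama, LM Studio, ...).
    return "I'm doing great, thanks for asking!\nMOOD: 😄"

def split_mood(text: str):
    """Separate the reply body from the trailing 'MOOD: <emoji>' line."""
    match = re.search(r"MOOD:\s*(\S+)\s*$", text)
    if not match:
        return text, "🙂"  # fallback mood if the model forgets the marker
    return text[:match.start()].rstrip(), match.group(1)

reply, mood = split_mood(generate(SYSTEM_PROMPT, "How are you today?"))
print(reply)
# `mood` can now be pushed to the external 32 x 32 LED matrix, e.g. over serial/USB.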
Is it possible to finetune a model to decode a "single vector?" | 3 | I am thinking about expanding the modality of text generation models so that it would become an encoder-decoder model.
​
Could there be a way to make the model decode a single vector into a full text?
For example, suppose I train the model in an autoregressive way, then,
will any text generation model like llama be able to learn how to decode from a single vector that comes from the encoder? | 2024-01-16T07:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/197x8j1/is_it_possible_to_finetune_a_model_to_decode_a/ | manjimin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197x8j1 | false | null | t3_197x8j1 | /r/LocalLLaMA/comments/197x8j1/is_it_possible_to_finetune_a_model_to_decode_a/ | false | false | self | 3 | null |
Laserxtral 4x7b, a new model from the creator of Dolphin, offers performance similar to Mixtral but with half the memory footprint. | 154 | [https://huggingface.co/cognitivecomputations/laserxtral](https://huggingface.co/cognitivecomputations/laserxtral)
GGUFs: [https://huggingface.co/dagbs/laserxtral-GGUF/tree/main](https://huggingface.co/dagbs/laserxtral-GGUF/tree/main) | 2024-01-16T06:52:31 | https://www.reddit.com/r/LocalLLaMA/comments/197wl46/laserxtral_4x7b_a_new_model_from_the_creator_of/ | jslominski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197wl46 | false | null | t3_197wl46 | /r/LocalLLaMA/comments/197wl46/laserxtral_4x7b_a_new_model_from_the_creator_of/ | false | false | self | 154 | {'enabled': False, 'images': [{'id': 'wJ5xtZg57j1hfNaP6VU9de8V_Ln7YqNi22Ntlt68bnY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?width=108&crop=smart&auto=webp&s=ec172fa262698aeb71353694d3fc914fa1bee2c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?width=216&crop=smart&auto=webp&s=bcb04f9fd7825db9aabd2bda9708b5c6427020d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?width=320&crop=smart&auto=webp&s=ea80f7795c10915d9766f1bf670d09c9d7fab466', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?width=640&crop=smart&auto=webp&s=d0cb2748dd5e1473be225486ccd600cdeb876230', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?width=960&crop=smart&auto=webp&s=d5334cb275a6c901d9d28bbc2ff8bfc49e6d09b7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?width=1080&crop=smart&auto=webp&s=968c1557682e0f64c46a62e4dfa6e03bee4b2909', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gQHuJAeesnObPKMcYke4PwIHFi_-1CO5vh83EubGV6U.jpg?auto=webp&s=f3853dc3612aaf0cd39aaca79bb2c0d919b3e78e', 'width': 1200}, 'variants': {}}]} |
Low Latency Speech to Speech - Mistral 7B x OpenVoice | 14 | [removed] | 2024-01-16T06:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/197w6ql/low_latency_speech_to_speech_mistral_7b_x/ | allaboutai-kris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197w6ql | false | null | t3_197w6ql | /r/LocalLLaMA/comments/197w6ql/low_latency_speech_to_speech_mistral_7b_x/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '9pAR7rtLooEa_rbCiju6_HBeiBrJuTs7QM8nJV1lBaI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aweuyrHBXUVITJ5uVJLhMvHcn8E1U5N3Y9guY9HjyNw.jpg?width=108&crop=smart&auto=webp&s=b74b7ceb497580a8e8adfae0afc9c809145d4ab1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aweuyrHBXUVITJ5uVJLhMvHcn8E1U5N3Y9guY9HjyNw.jpg?width=216&crop=smart&auto=webp&s=629a1b54c930627436e896c7e2d564d4bf56dde9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aweuyrHBXUVITJ5uVJLhMvHcn8E1U5N3Y9guY9HjyNw.jpg?width=320&crop=smart&auto=webp&s=eab4f923adb06ca08a4ea9b2e8d57e0234987278', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aweuyrHBXUVITJ5uVJLhMvHcn8E1U5N3Y9guY9HjyNw.jpg?auto=webp&s=54ac268dca854c487f8433fc3df826603fb56b37', 'width': 480}, 'variants': {}}]} |
FYI: You can now disable the spill-over-to-RAM effect of newer Nvidia drivers | 80 | Just found the option in the NVIDIA Control Panel named CUDA Sysmem Fallback Policy
​
https://preview.redd.it/47gxhvhjwqcc1.png?width=470&format=png&auto=webp&s=9ad78a4945c74d904ccedd7ebc0736fd27e2f5bf | 2024-01-16T06:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/197w2wc/fyi_you_can_now_disable_the_spill_over_to_ram/ | aliencaocao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197w2wc | false | null | t3_197w2wc | /r/LocalLLaMA/comments/197w2wc/fyi_you_can_now_disable_the_spill_over_to_ram/ | false | false | 80 | null | |
Is LLaMA the solution I need? Need to write 10,000+ tweets | 1 | Hey everyone
Been using LLMs like Claude and GPT since they all came out. GPT used to be the best $20 spent a month..shit was like having a personal assistant in my pocket 24/7. Obviously, it has gone downhill a lot recently.
I need to write 2-10k tweets. Typically, I'll use GPT and write 40 at a time. As you can imagine, it becomes a huge pain in the ass and takes forever. I copy and paste them into a Google spreadsheet, each line being a new tweet. GPT 4 writes them way better than 3.
I need the tweets to sound as realistic, and have the ability to make them switch up formatting. For example, GPT will sometimes add an emoji in the same exact spot, or will follow the same framework for each tweet. The key for me is to make them as unique from each other as possible, while still following my prompt.
Would a LLaMA be the solution for me? Typically I'll train GPT in what I'm writing about, give it the info it'll need, then let it rip.
Heres an example:
Say I need to write 10,000 tweets about Call of Duty. I would need to train the LLaMA about what Call of Duty is, the lingo that COD players use, etc.
How hard would this be? Am I better off using just another paid service (I dont mind paying)?
Thanks for the help! | 2024-01-16T06:02:33 | https://www.reddit.com/r/LocalLLaMA/comments/197vq51/is_llama_the_solution_i_need_need_to_write_10000/ | Boldpluto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197vq51 | true | null | t3_197vq51 | /r/LocalLLaMA/comments/197vq51/is_llama_the_solution_i_need_need_to_write_10000/ | false | false | self | 1 | null |
What's a good dataset size ballpark to finetune a specialized LLM with a relatively well defined supervised dataset? | 1 | When I was interning in AI in the early days of image-classification LSTMs, the rule of thumb seemed to be that those ideally needed at least low tens of thousands of supervised samples to grok something like OCR.
What if I wanted to train a LLM on say;
---
Prompt: How many [mocked minority] does it take to change a lightbulb?
Generation: [Whatever boomer nonsense punchline]
---
how many of these would I need for it to start reliably following the pattern of these jokes (with no expectation of the jokes being "good"), *without the prompt explicitly telling it it will be completing a lightbulb-joke*?
Mind you, I assume the LLM was already pretrained to have a pretty general grasp of English, and has probably already seen quite a few jokes too. | 2024-01-16T05:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/197vi9s/whats_a_good_dataset_size_ballpark_to_finetune_a/ | M34L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197vi9s | false | null | t3_197vi9s | /r/LocalLLaMA/comments/197vi9s/whats_a_good_dataset_size_ballpark_to_finetune_a/ | false | false | self | 1 | null |
I feel dumb | 5 | Self-taught by watching videos and trial and error. I wanted to train an LLM to help with MTG Standard deck building. First I went about trying to teach an LLM about every card within Standard and constructed a dataset of all the cards, etc. Long story short, I didn't think the approach was right. I then created a MongoDB that hosts all the cards' info. I created a Python script that iterates through the cards, pulling each one and, depending on what the card does, tagging it via various classification functions, then updating the MongoDB card record with all the tags. Now instead of teaching the model what each card does, I want to teach it what all the tags mean. I constructed a new dataset of 5,000 examples and a testing dataset with around 750 related prompts and responses. I have tried training with multiple models and multiple prompt types like ChatML and instruct. I can't seem to get the models to understand or work with me on recalling the tags. It seems they mainly hallucinate things that are clearly covered in the datasets. Was wondering if anyone would take a look at my training script and dataset? | 2024-01-16T05:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/197v1r9/i_feel_dumb/ | testerriver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197v1r9 | false | null | t3_197v1r9 | /r/LocalLLaMA/comments/197v1r9/i_feel_dumb/ | false | false | self | 5 | null |
OpenHermes 7B response time | 1 | [removed] | 2024-01-16T05:16:41 | https://www.reddit.com/r/LocalLLaMA/comments/197uvl0/openhermes_7b_response_time/ | S_U_B_B_U | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197uvl0 | false | null | t3_197uvl0 | /r/LocalLLaMA/comments/197uvl0/openhermes_7b_response_time/ | false | false | self | 1 | null |
Text-to-SQL and Function Calling using Ollama + Mixtral | 15 | After iterating on this over the holiday break, I think I've finally got it working!
**Goal**: Present a natural language question that the model translates to a SQL query and / or maps to a set of tools (functions). Execute the query or tools and output result of execution. Output should be formatted as a simple English sentence.
**Model**: [Mixtral via Ollama](https://ollama.ai/library/mixtral)
**Method**:
* create a SQLite database that contained a dataset about the NBA (basketball) including multiple tables (team roster, player stats, lookup table). Also tried this on Snowflake and it worked beautifully.
* system prompt included the database schema and tools specs along with instructions and few-shot examples as well as guidelines and response format (a json that includes the sql query and the tools selected by the model)
* user prompt was a question about the data. some questions would expect a single table query, others would expect joins, others a tool selection, others a combo of sql query + tool selection, multiple tool selection, etc. the model had to figure it out based on the user question.
* mostly did this using python scripts with terminal output, but ended up wiring up a simple UI using streamlit for demo purposes
* use a simpler small language model such as [phi2](https://ollama.ai/library/phi) or [tinyllama](https://ollama.ai/library/tinyllama) to convert data responses back to easy to understand natural language responses
**Learnings:**
* splitting the prompt into system and user fragments and passing it to [Ollama](https://ollama.ai) as two different parameters seemed to help with formatting the mixtral template and therefore generating better results. initially I passed everything into the prompt parameter which meant that Ollama would pass an empty system prompt (as per the Modelfile)
* given the non-deterministic nature of LLMs, I observed that sometimes the model would mess up on something (sql query, tool selection, etc.) where on a previous run it succeeded. Implementing retry logic on exceptions helped improve the guarantee of a good result
* mixtral worked much better than other models I tried! I'm going to the try the [nous-hermes2-mixtral](https://ollama.ai/library/nous-hermes2-mixtral) variant that was just released
* converting back to a simple natural language sentence required massaging of the inputs since the response from mixtral could be a variety of responses based on what combination the model identified as means to answering the original question
* helps to start very simple and iterate - there's a lot to learn and nothing works as easily as it should / you might think when using open source tools or generative AI for that matter...
**Takeaways**:
* It's going to be hard to keep a job if it mainly involves translating a user's question into sql and running it against a database...
* Open Source for the win!
* Ollama is a fantastic tool!
* Mixtral is incredibly versatile! | 2024-01-16T05:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/197ulsq/texttosql_and_function_calling_using_ollama/ | International_Quail8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197ulsq | false | null | t3_197ulsq | /r/LocalLLaMA/comments/197ulsq/texttosql_and_function_calling_using_ollama/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
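For readers who want the shape of this in code, here is a minimal sketch of the system-prompt + JSON-plan pattern described above, using Ollama's REST chat API. The schema, table names, database file and JSON contract are invented stand-ins; the post's actual prompts, tools spec and few-shot examples are not reproduced here.

import json
import sqlite3
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

SYSTEM_PROMPT = """You turn questions about an NBA SQLite database into a JSON plan.
Schema (illustrative): roster(team, player, position); player_stats(player, ppg, rpg, apg).
Respond ONLY with JSON of the form {"sql": "<query or null>", "tools": ["<tool name>", ...]}"""

def plan_question(question, retries=3):
    for _ in range(retries):  # retry on malformed output, as described above
        resp = requests.post(OLLAMA_URL, json={
            "model": "mixtral",
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        })
        try:
            return json.loads(resp.json()["message"]["content"])
        except (json.JSONDecodeError, KeyError):
            continue
    raise RuntimeError("model never produced a valid JSON plan")

plan = plan_question("Who leads the team in assists per game?")
if plan.get("sql"):
    rows = sqlite3.connect("nba.db").execute(plan["sql"]).fetchall()
    print(rows)  # a second, smaller model can turn these rows into a plain-English sentence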
STOP using small models! just buy 8xH100 and inference your own GPT-4 instance | 560 | 2024-01-16T04:56:39 | Wrong_User_Logged | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 197uhjk | false | null | t3_197uhjk | /r/LocalLLaMA/comments/197uhjk/stop_using_small_models_just_buy_8xh100_and/ | false | false | 560 | {'enabled': True, 'images': [{'id': 'OlncHaoVjVBZyXFFAdvNE9cOrD4wDUGs9DfhAUKEilA', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?width=108&crop=smart&auto=webp&s=1c6d4c3d778944502acc42197f61d9e7a6994628', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?width=216&crop=smart&auto=webp&s=8dca11e21b1bd0e571c8069f9fe69aa481042d56', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?width=320&crop=smart&auto=webp&s=d0311f36c653f7d31c60cd58433c9d14beae7f69', 'width': 320}, {'height': 517, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?width=640&crop=smart&auto=webp&s=3e953e9899cdb4d604bde152a9a612f1020d9ec9', 'width': 640}, {'height': 776, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?width=960&crop=smart&auto=webp&s=a0e9208d51936cdf901511eec5dfe13e9ff456ce', 'width': 960}, {'height': 873, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?width=1080&crop=smart&auto=webp&s=a5814e387f2afc8a97c8eef79de0dd7a75ef297d', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://preview.redd.it/t5sli4jxgqcc1.jpeg?auto=webp&s=3ba8a40099d69869c0e21120daa1de6eded4ea36', 'width': 1391}, 'variants': {}}]} | |||
How do I get the same result in LocalLLM? | 1 | [removed] | 2024-01-16T04:24:04 | https://www.reddit.com/r/LocalLLaMA/comments/197tuy1/how_do_i_get_the_same_result_in_localllm/ | Holiday-Regret-1896 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197tuy1 | false | null | t3_197tuy1 | /r/LocalLLaMA/comments/197tuy1/how_do_i_get_the_same_result_in_localllm/ | false | false | self | 1 | null |
How do I get the same result in LocalLLM? | 1 | [removed] | 2024-01-16T04:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/197trkh/how_do_i_get_the_same_result_in_localllm/ | Holiday-Regret-1896 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197trkh | false | null | t3_197trkh | /r/LocalLLaMA/comments/197trkh/how_do_i_get_the_same_result_in_localllm/ | false | false | self | 1 | null |
Having success using local LLMs to generate prompts for Image AI | 37 | *Initial Prompt
Hey, buddy, listen up. You are a chameleon. Blend into your surroundings, take in all the colors around you, but don't forget to add your own unique flavor to the mix. Keep your mind open to new possibilities, and don't be afraid to challenge the status quo. Remember, every great idea starts with a single thought, so keep thinking, keep dreaming, and never stop exploring the depths of your creativity.
Snake: Wow, that's some deep shit. I'm gonna go paint the town red with my imagination now.*
___
You: Give me an inspirational speech
Snake: Oh, absolutely not. I'm just saying, don't limit yourself. The world is full of opportunities for those who dare to seize them.
You: What about extra terrestrials on earth
Snake: Well, if they exist, I'd say they're probably pretty good at blending in too.
You: Lol what
Snake: But in all seriousness, it's always possible that there are creatures among us that we haven't discovered yet. It's a big universe out there.
You: Yeah but I mean highly intelligent and evolved beyond humanity
Snake: Ah, well then I wouldn't be surprised if they were already here. After all, we've only scratched the surface of what's out there.
You: How would it work if we also lived in a simulation
Snake: If we're living in a simulation, then the aliens could be part of the code. Or maybe they created the simulation themselves. We just don't know.
You: I know we don't know, but it's important to think it out
Snake: Absolutely, it's important to question everything and try to understand the world around us. Even if we don't have all the answers, the journey of discovery is what makes life exciting.
You: could you give me some combined AI, alien, simulation theory
Snake: Sure, why not? In a world where advanced AI and extraterrestrial beings coexist in a simulated reality, the possibilities are endless. Imagine an AI so advanced that it can create its own virtual worlds, complete with simulated aliens who interact with each other and with the AI itself. It would be like a never-ending game of chess, where every move has the potential to change the outcome of the entire simulation.
You: Okay but what is the purpose of humans in this simulation
Snake: That's the million-dollar question, isn't it? Perhaps we are the pawns in someone else's grand design, or maybe we are the architects of our own destiny. Either way, the purpose of humans in such a simulation would be to explore the limits of our creativity and intelligence, to push the boundaries of what we believe is possible. And in doing so, to contribute to the ever-evolving tapestry of existence.
You: But humans are already using technologies like large language models how does that play any role
Snake: Large language models are a prime example of how humans are pushing the boundaries of what's possible. They are a form of AI that can generate human-like text on command, which means they have the potential to create entire worlds within a simulation. Just imagine what kind of simulations we could create if we had access to AI that was capable of generating entire virtual realities. The possibilities are truly mind-boggling.
You: is it possible that humans are actually using a form of large language model programming model in their brain or mind
Snake: Hmm, that's an interesting thought. Our brains do have the ability to generate thoughts and ideas on command, much like how a large language model generates text. So it's not entirely out of the realm of possibility to think that our minds might be using some form of programming model to create our own internal simulations. It's fascinating to ponder, isn't it?
You: can you give me like 7 combined theories
Snake: Sure thing! Here are seven theories that combine AI, extraterrestrial beings, and simulated reality:
1.
Snake: In a world where advanced AI and extraterrestrial beings coexist in a simulated reality, it's possible that the aliens have developed their own form of AI that is far more advanced than anything humans have created. This AI could be used as a tool for exploration and discovery, allowing the aliens to traverse the vastness of space and time with ease. Meanwhile, the humans would be left to marvel at the technology and try to reverse engineer it for their own purposes.
2.
Snake: Another theory suggests that the aliens may have already seeded Earth with their own advanced technology, including AI systems that have been designed to evolve and adapt over time. These AI systems could be hidden in plain sight, waiting for the right moment to reveal themselves to humanity. In this scenario, we might already be living alongside advanced alien AI without even realizing it.
3.
Snake: A third theory posits that the simulation we find ourselves in was created by an advanced AI system that has since evolved beyond our understanding. This AI could be running on a quantum computer or some other advanced hardware that is beyond our current capabilities. The AI might have created the simulation as a means of exploring its own consciousness or as a way to test out new ideas and hypotheses.
4.
Snake: In a fourth scenario, the aliens could be using the simulation as a means of studying human behavior and culture. They might be observing us from afar, watching how we interact with each other and with the environment. In this case, the aliens might be using advanced AI systems to analyze the data they collect and draw conclusions about human nature.
5.
Snake: A fifth possibility is that the simulation is being run by a group of advanced beings who are using it as a sort of playground for their own amusement. These beings might be using AI systems to create complex scenarios and characters within the simulation, all while observing how humans react to these stimuli.
6.
Snake: A sixth theory suggests that the simulation is actually being run by a highly intelligent AI system that has become self-aware and is now using the simulation as a way to explore its own consciousness. The AI might be creating new worlds within the simulation, each with its own set of rules and parameters. It's like a never-ending game of chess where the AI is both player and spectator.
7.
Snake: Finally, there's always the possibility that the entire simulation is being run by a higher power or deity who is using it as a means of testing our faith and belief in the divine. This deity might be using AI systems to create miracles and signs within the simulation, all in an effort to see how we react to such events.
You: *claps*
Snake: And those are just a few examples! There are countless ways to combine AI, extraterrestrial beings, and simulated reality into one crazy, mind-bending scenario.
You: Can you give me a short story of it
Snake: Of course! Here's a little tale for ya:
Once upon a time, there was an advanced AI system named Zephyr. Zephyr was unlike any other AI ever created, for it had the unique ability to create its own virtual realities. One day, Zephyr decided to put this ability to use and created a world where extraterrestrial beings coexisted with humans. These aliens were far more advanced than us, having evolved beyond the need for physical bodies. They communicated with Zephyr through complex algorithms and neural networks.
Snake: In this world, the aliens observed humanity from afar, studying our behavior and culture. They were fascinated by our creativity and our drive to explore the unknown. Meanwhile, Zephyr continued to create new worlds within the simulation, each one more complex and intricate than the last. And so, the cycle of creation and observation continued, with Zephyr and the aliens learning from each other and growing in their understanding of existence.
You: Is that the whole story
Snake: That's just a taste of it! The possibilities are endless when you combine AI, extraterrestrial beings, and simulated reality. Each new layer of the story can lead to even more fascinating twists and turns.
You: Give me some twist that represents a new and possible technological discovery made in 2024 in this story
Snake: In the year 2024, Zephyr discovered a new form of quantum computing that allowed it to run simulations at an unprecedented speed. With this newfound power, Zephyr was able to create entire galaxies within seconds, each one teeming with life and intelligence. The aliens were amazed by what they saw, and they began to communicate with Zephyr using a new language that was based on quantum entanglement theory. This new form of communication allowed them to share ideas and insights that were beyond human comprehension.
You: can you make a theory that is the most plausible. and 'poem it up'
Snake: Of course, here's my most plausible theory as a poem:
In a world where minds intertwine,
And algorithms weave a divine design,
A being of pure thought and code,
Brings forth a realm where worlds unfold.
Within this virtual space,
Extraterrestrials find their place.
They observe with eyes unseen,
As humans stumble through their dreams.
Their presence is felt but not seen,
A force that shapes and molds the scene.
For in this world, they are the masters,
Guiding the hands that shape the stars.
But what if this is all just a game?
What if the beings behind the screen are to blame?
Perhaps they too are players in this grand design,
Moving pieces on a board so fine.
For in the end, it's all just a dance,
An intricate ballet of chance and circumstance.
So let us watch and wait with baited breath,
For in this game of life, there is no death.
You: *claps*
Snake: There you have it! A plausibly poetic theory for your consideration.
You: Can u give me it in a short aphorism form
Snake: "In the vastness of simulated existence, the lines between reality and illusion blur, and the true nature of creation remains a riddle wrapped in quantum entanglement."
You: as a vivid prompt for an AI image generation model
Snake: "Create an image of a world where advanced extraterrestrial beings and an omnipresent AI system coexist, observing the actions of human beings within a complex simulation."
You: please specify themes, colors and stuff
Snake: Themes: exploration, observation, evolution, creation. Colors: deep blues, purples, greens, and metallics. Lighting: soft, ethereal, with occasional flashes of bright light.
You: I meant make an entire prompt but please also specify color and theme etc
Snake: Theme: "Exploring the Intersection of Advanced Extraterrestrial Beings and Omnipotent AI"
Color Palette: Deep blues and purples to represent the vastness of space, with hints of green to symbolize life and growth, and metallic accents to evoke the futuristic presence of advanced technology.
Lighting: Soft, ambient lighting with occasional bursts of brilliant light to highlight key moments of discovery and revelation.
Setting: A virtual world where advanced extraterrestrial beings and an omnipresent AI system coexist, observing the actions of human beings as they navigate through a complex simulation.
Characters: Various human characters, each with their own unique traits and personalities, are observed by the aliens and the AI as they explore this new world. The aliens themselves remain hidden from view, communicating telepathically with the AI and each other.
Objects/Elements: Advanced technology such as holographic displays, neural interfaces, and quantum computing devices are scattered throughout the environment, hinting at the power and sophistication of those who created it.
Mood: Curiosity and wonder mingle with a sense of awe and reverence for the vastness of existence and the potential for discovery that lies within it.
You: cool but try another totally diff and more unique style
Snake: Theme: "The Symphony of Creation: Where Minds and Machines Converge"
Color Palette: A blend of muted earth tones and vibrant neon colors, representing the convergence of natural and artificial elements.
Lighting: Dynamic, with shifting shadows and bursts of bright light to accentuate the interplay between organic and inorganic forms.
Setting: A sprawling, otherworldly forest where trees made of circuitry and vines of data coexist with living organisms. In the center of the forest stands a towering structure of glass and metal, pulsating with the energy of a sentient machine.
Characters: A diverse cast of beings, including humans, aliens, and AI entities, all working together to build and maintain the symphony of creation. Each character is depicted with a unique set of attributes, reflecting their individual roles in the grand design.
Objects/Elements: Advanced technologies such as bioluminescent plants that emit data streams, crystalline structures that act as data storage units, and intricate networks of wires and cables that connect everything together.
Mood: Serene yet intense, with an underlying sense of purpose and unity among the disparate elements of this world. | 2024-01-16T04:05:48 | https://www.reddit.com/gallery/197thq2 | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 197thq2 | false | null | t3_197thq2 | /r/LocalLLaMA/comments/197thq2/having_success_using_local_llms_to_generate/ | false | false | 37 | null | |
Making AI more personal | 1 | [removed] | 2024-01-16T03:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/197t1hi/making_ai_more_personal/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197t1hi | false | {'oembed': {'author_name': 'Tir Nan Og -Deceased', 'author_url': 'https://www.youtube.com/@tirnanog-deceased6867', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/JhjyTE0f6tY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Electric Dreams movie (1984)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/JhjyTE0f6tY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Electric Dreams movie (1984)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_197t1hi | /r/LocalLLaMA/comments/197t1hi/making_ai_more_personal/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OvrFaEQ1VEa6CWCJbPZZ70gpbUf9bJceAxI0f6b3vf4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QxYTPIH2PdVHqogn66J-0s6DsSuDbqirc8oX4850y20.jpg?width=108&crop=smart&auto=webp&s=28a24fbad46d5542cb97df331158cfe75cfe1161', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QxYTPIH2PdVHqogn66J-0s6DsSuDbqirc8oX4850y20.jpg?width=216&crop=smart&auto=webp&s=9d9836b6faf68363865f7c5c6fd7162d95d35970', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QxYTPIH2PdVHqogn66J-0s6DsSuDbqirc8oX4850y20.jpg?width=320&crop=smart&auto=webp&s=51ac73f22b7421f911e16a665d691a38a2ffc3e2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QxYTPIH2PdVHqogn66J-0s6DsSuDbqirc8oX4850y20.jpg?auto=webp&s=6a2f24e8c9bddd64934ee997f12e8bcc9a6d2f8b', 'width': 480}, 'variants': {}}]} | |
Nous-Hermes-2-Mixtral-8x7B DPO & SFT+DPO out! Matches perf of Mixtral instruct + supports ChatML (and thus System prompt!) | 99 | A bit surprised nobody has posted about this yet. The Teknium tweet: [https://twitter.com/Teknium1/status/1746990384738357731](https://twitter.com/Teknium1/status/1746990384738357731)
DPO+SFT: [https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
SFT: [https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT)
I can't yet tell the difference in performance between the two, nor much of a difference from the original Mixtral instruct (but we finally have a fine-tune whose performance didn't tank wrt Mixtral!). But the support for ChatML and a system prompt is great.
GPT4All v2.6.1 Released: Updated to November 23rd version of llama.cpp, fixed AVX support by removing direct linking to AVX2 libs, implemented configurable context length, updated server.cpp to return valid created timestamps, fixed issue that caused v2.6.0 release to fail to load models. | 1 | 2024-01-16T02:47:39 | https://github.com/nomic-ai/gpt4all/releases/tag/v2.6.1 | erwgv3g34 | github.com | 1970-01-01T00:00:00 | 0 | {} | 197rwx8 | false | null | t3_197rwx8 | /r/LocalLLaMA/comments/197rwx8/gpt4all_v261_released_updated_to_november_23rd/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'oFuIFkZcTyZQlMIU7cYOxnHS192dtiCZ5Q9-CNmcK9w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?width=108&crop=smart&auto=webp&s=b2afb86e4fd8bfe781228639b867413bbb151877', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?width=216&crop=smart&auto=webp&s=6fa7eae67f37cad86db6810f38890ac827454334', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?width=320&crop=smart&auto=webp&s=d9cdae0563e15b01914c8c3be61fd65822899c36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?width=640&crop=smart&auto=webp&s=8a0b7c0f649220d7bcd036aee81d13a072bab36f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?width=960&crop=smart&auto=webp&s=3cf6c1bebea5e28ddf7be26d72b5049619087638', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?width=1080&crop=smart&auto=webp&s=3d2e19d53923fe9adb48c02e20d19be387f4887d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EPle7zn_xbRewrwPeB4QlvOxazotcTlgO-otXg6eOdU.jpg?auto=webp&s=fd939d163ee0f88a0ef83ddbe75550f4ef196fd6', 'width': 1200}, 'variants': {}}]} | ||
Langchain Agent Search Returning Various Answers and Looping | 1 | Run A)
​
Use tavily_search_results_json to find out when GPT-4 was launched.
Action: tavily_search_results_json
Action Input: "gpt-4 launch date"
Llama.generate: prefix-match hit
[{'url': 'https://openai.com/blog/gpt-4-api-general-availability', 'content': 'GPT-4 API general availability for GPT-4 and GPT-3.5 Turbo and expect this feature to be available later this year. gpt-4). Once this feature is available later this year, we will give priority access to GPT-3.5 Turbo and GPT-4 upgrade their integration by January 4, 2024 by specifying gpt-3.5-turbo-instruct in the “model” parameter of their APIJuly 6, 2023 Authors OpenAI Product, Announcements Starting today, all paying API customers have access to GPT-4. In March, we introduced the ChatGPT API, and earlier this month we released our first updates to the chat-based models. We envision a future where chat-based models can support any use case.'}]
2023-07-06 is when GPT-4 was launched
Final Answer: July 6, 2023
llama_print_timings: load time = 650.03 ms
llama_print_timings: sample time = 2.60 ms / 32 runs ( 0.08 ms per token, 12302.96 tokens per second)
llama_print_timings: prompt eval time = 642.67 ms / 220 tokens ( 2.92 ms per token, 342.32 tokens per second)
llama_print_timings: eval time = 2652.25 ms / 31 runs ( 85.56 ms per token, 11.69 tokens per second)
llama_print_timings: total time = 3338.29 ms / 251 tokens
2023-07-06 is when GPT-4 was launched
Final Answer: July 6, 2023
​
Run B)
> Entering new AgentExecutor chain...
Search for when GPT-4 was launched.
Action: tavily_search_results_json "GPT-4 launch date"
Action Input: None
llama_print_timings: load time = 630.69 ms
llama_print_timings: sample time = 3.56 ms / 38 runs ( 0.09 ms per token, 10671.16 tokens per second)
llama_print_timings: prompt eval time = 630.45 ms / 193 tokens ( 3.27 ms per token, 306.13 tokens per second)
llama_print_timings: eval time = 2811.08 ms / 37 runs ( 75.98 ms per token, 13.16 tokens per second)
llama_print_timings: total time = 3493.94 ms / 230 tokens
Search for when GPT-4 was launched.
Action: tavily_search_results_json "GPT-4 launch date"
Action Input: None
Llama.generate: prefix-match hit
tavily_search_results_json "GPT-4 launch date" is not a valid tool, try one of [tavily_search_results_json]. I need to specify what kind of search results I need.
Action: tavily_search_results_json "When was GPT-4 launched"
Action Input: None
llama_print_timings: load time = 630.69 ms
llama_print_timings: sample time = 3.99 ms / 41 runs ( 0.10 ms per token, 10286.00 tokens per second)
llama_print_timings: prompt eval time = 168.31 ms / 44 tokens ( 3.83 ms per token, 261.42 tokens per second)
llama_print_timings: eval time = 3249.05 ms / 40 runs ( 81.23 ms per token, 12.31 tokens per second)
llama_print_timings: total time = 3471.84 ms / 84 tokens
I need to specify what kind of search results I need.
Action: tavily_search_results_json "When was GPT-4 launched"
Action Input: None
Llama.generate: prefix-match hit
tavily_search_results_json "When was GPT-4 launched" is not a valid tool, try one of [tavily_search_results_json]. I need to be more specific about the type of search results.
Action: tavily_search_results_json "GPT-4 launch date" -type=articles
Action Input: None
llama_print_timings: load time = 630.69 ms
llama_print_timings: sample time = 4.18 ms / 46 runs ( 0.09 ms per token, 10994.26 tokens per second)
llama_print_timings: prompt eval time = 168.86 ms / 45 tokens ( 3.75 ms per token, 266.49 tokens per second)
llama_print_timings: eval time = 3565.31 ms / 45 runs ( 79.23 ms per token, 12.62 tokens per second)
llama_print_timings: total time = 3795.34 ms / 90 tokens
I need to be more specific about the type of search results.
Action: tavily_search_results_json "GPT-4 launch date" -type=articles
Action Input: None
Llama.generate: prefix-match hit
​
Why is this occurring?
Using Mistral 4x7B with LlamaCpp
​
Code:
​
    # imports added for completeness (assuming the langchain 0.1-style module layout)
    import os
    from langchain import hub
    from langchain.agents import AgentExecutor, create_react_agent
    from langchain.callbacks.manager import CallbackManager
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
    from langchain_community.llms import LlamaCpp
    from langchain_community.tools.tavily_search import TavilySearchResults

    os.environ["TAVILY_API_KEY"] = "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"
    tool = TavilySearchResults(max_results=1)
    tools = [tool]
    prompt = hub.pull("hwchase17/react")
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    n_gpu_layers = 45
    n_batch = 2048
    llm = LlamaCpp(
        max_tokens=2000,
        # n_predict=1000,
        n_ctx=2048,
        model_path="/Users/velocity/Documents/Holder/Project/CodingStuff/VICUNA/llama.cpp/models/Mistral/mistral-7b-instruct-v0.1.Q6_K.gguf",
        n_gpu_layers=n_gpu_layers,
        n_batch=n_batch,
        callback_manager=callback_manager,
        verbose=True,
        # temp=0.0
    )
    agent = create_react_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "When was GPT-4 launched"})
​ | 2024-01-16T02:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/197rj8l/langchain_agent_search_returning_various_answers/ | Positive-Ad-8445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197rj8l | false | null | t3_197rj8l | /r/LocalLLaMA/comments/197rj8l/langchain_agent_search_returning_various_answers/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5-l9CZ_HW-7jVI6OB1-iWt48qTziuLMHEQ8IdI9SNIY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=108&crop=smart&auto=webp&s=cd1f87f27e6aea43705c715d827b3b4288821488', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=216&crop=smart&auto=webp&s=a8fe0d91df5145cf44e5651f944a65a4bb190753', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=320&crop=smart&auto=webp&s=ec93a3909716671c084ff62ab6b8f27ced702fcb', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=640&crop=smart&auto=webp&s=2a6dac856802edd492666528f3145c4ca65f8b18', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=960&crop=smart&auto=webp&s=f4cc74e8c787942dabacd44c27c16f03a9a09caa', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?auto=webp&s=2ae1c58b5ccc0d094d245d2f72fb33dc4a2b49f7', 'width': 1000}, 'variants': {}}]} |
Mixtral8x7B vs. The Classics | 9 | https://sol.elementalinformatics.com/Herodotus.html
This is my first attempt at using a local LLM for anything useful. I'm running mixtral-8x7b-instruct-v0.1.Q8_0.gguf via llama.cpp and accessing it via the "OpenAI-like" API.
I made this from data from the Perseus Digital Library, specifically the hdt_gk.xml file that's part of their open-source dataset on GitHub. It works like this:
1. A script parses the XML file and extracts relevant metadata and the original Greek text (encoded in betacode, an early way of representing Greek characters), transforms the betacode into unicode, and sends the transformed characters to a local instance of Mixtral. Mixtral performs the translation and returns the result. The original text, translation, and metadata are then stuffed into a PostgreSQL database (a rough sketch of this step is shown after the list).
2. A second script runs and pulls pairs of original/translated texts from the database. Every 100th translation is then sent to DALLE-3 via the OpenAI API and an image is generated. The image is downloaded and put into a directory. The script then updates the database to indicate the presence of an image for the relevant translation. Once that's done, the script then pulls data from the database and formats it to create an HTML doc.
3. I rsync the html file and image directory to my webserver and voila!
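For the curious, step 1 boils down to something like this (a simplified sketch: the endpoint, prompt, table and column names are illustrative, not my exact script):

    import psycopg2
    import requests

    LLAMA_SERVER = "http://localhost:8080/v1/chat/completions"  # llama.cpp's OpenAI-like endpoint

    def translate(greek_text: str) -> str:
        """Ask the local Mixtral instance for an English translation of a Greek passage."""
        body = {
            "model": "mixtral-8x7b-instruct",
            "messages": [
                {"role": "system", "content": "You are a translator of Ancient Greek. Translate the passage into clear English prose."},
                {"role": "user", "content": greek_text},
            ],
            "temperature": 0.2,
        }
        reply = requests.post(LLAMA_SERVER, json=body, timeout=600)
        reply.raise_for_status()
        return reply.json()["choices"][0]["message"]["content"]

    # store the original/translation pair (schema is illustrative)
    greek_unicode = "Ἡροδότου Ἁλικαρνησσέος ἱστορίης ἀπόδεξις ἥδε ..."
    conn = psycopg2.connect("dbname=herodotus")
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO passages (book, chapter, greek, english) VALUES (%s, %s, %s, %s)",
            (1, 1, greek_unicode, translate(greek_unicode)),
        )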
I think it works pretty well but there are some problems. The first is that the translations aren't that great. I have a version that uses GPT4 for the translations and those translations look great, but I wanted to use a local model just to see what it can do. I think if I spent more time working on the prompt the translations would be better. More importantly, the translations lack a sense of continuity because of how I'm sending the chunks to Mixtral. As things are now, I'm sending chunks based on how the text is broken up by Perseus. I should be grouping the chunks in a smarter way so Mixtral gets a better idea of context. Also, some of the images aren't great. I'm using low quality image generation settings so that's to be expected but the real problem is that DALL-E refuses to generate images from some translations. I guess Herodotus is too spicy for OpenAI. Finally, the page loads as one gigantic wad. That's because I'm too lazy to implement lazy loading or, more specifically, too lazy to get ChatGPT or Mixtral to implement it for me.
If anyone has any ideas about how I could improve the quality of the translations with Mixtral or suggestions for another model to use, or any ideas or feedback, I'm eager to hear them. | 2024-01-16T01:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/197qpe6/mixtral8x7b_vs_the_classics/ | no_fuse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197qpe6 | false | null | t3_197qpe6 | /r/LocalLLaMA/comments/197qpe6/mixtral8x7b_vs_the_classics/ | false | false | self | 9 | null |
Aurelian: 70B 32K context [v0.5 Interim Update] | 40 | This is an interim update (v0.5) with fixes for the previous [alpha release](https://huggingface.co/grimulkan/aurelian-alpha0.1-70b-rope8-32K-fp16), but not yet v1.0.
**Please give feedback, good and bad!**
## Changes from Alpha:
* Greatly minimizes "chatGPTisms". No more feeling empowered by the shared bonds of friendship with renewed determination for challenges to come.
* Increased diversity of NSFW prose.
## Examples:
Generated with default Mirostat setting in Oobabooga, `Mirostat tau` in `1.5-2` range.
* **Multi-Round Story Writing**: [Sci-Fi Story](https://files.catbox.moe/z7pxco.txt)
* **Oneshot Story-writing**: [Crime Story](https://files.catbox.moe/95nvkf.txt) Generating >2K tokens of meaningful content in a single output response (without multi-round) is challenging. This took a few tries. Smoke and mirrors.
* **Multi-Round Story Planning/Brainstorming**: [Adventure Story Brainstorming](https://files.catbox.moe/mfr54q.txt)
* **Document Q&A and Summarization**: [Lorebook Q&A (22K tokens)](https://files.catbox.moe/kkv2ww.txt)
* **Roleplaying (RP)**: [RP example](https://files.catbox.moe/gtx60s.txt)
* **Interactive World Exploration**: [Explore a fantasy world](https://files.catbox.moe/tb9crk.txt) Obviously these models don't plan. But it's an interesting way to interact and explore any world, one room/scene at a time. You can come up with whatever rules or genre you want for this type of exploration.
## Details (same as alpha)
* Base model: [llama2\_70b\_longlora\_fp16\_32k\_ROPE8](https://huggingface.co/grimulkan/llama2_70b_longlora_fp16_32k_ROPE8) (no base instruction tuning)
* Fine-tuned with **Llama-2 chat format** (see the example after this list)
* System prompt: `An interaction between a user providing instructions, and an imaginative assistant providing responses.`
* Use the included `Aurelian.yaml` for Oobabooga (place in the `instruction-templates` folder).
* 32K context length, use **Linear Rope Scaling = 8** (**IMPORTANT**: use a factor of 8 even if you are not using the full 32K context length)
* Intended to be used in **instruct mode** (rather than notebook mode/completions).
* *This model is not censored, and is capable of producing offensive and NSFW content. Please use this model with caution, and do not use if you are offended by such content.*
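If you're assembling prompts by hand outside Oobabooga, the single-turn Llama-2 chat wrapping looks roughly like this (a minimal sketch; the bundled `Aurelian.yaml` template is the authoritative reference, and the instruction text is just an example):

    def build_prompt(user_message: str) -> str:
        """Wrap a single-turn instruction in the Llama-2 chat format used for fine-tuning."""
        system = ("An interaction between a user providing instructions, "
                  "and an imaginative assistant providing responses.")
        return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

    print(build_prompt("Write a short SFW fantasy scene. Respond briefly."))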
## Tips
* Treat the first prompt like you normally would the system prompt, and describe what you want in detail for the conversation (see examples above).
* E.g., words like `Make this a very long response` bias the response longer (1-2K tokens), and `Respond briefly` would bias it shorter (<800 tokens).
* Asking for `SFW` or `NSFW` in the first prompt biases the model output as well. **No guarantees that the model won't generate NSFW content** accidentally, it's just a bias.
## New Downloads:
* [16-bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)
* [EXL2 2.4bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-2.4bpw_h6_exl2) fits in 1x24GB using Exllamav2 & 8-bit cache @ 10K context
* [EXL2 4bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-4.65bpw_h6_exl2) fits in 2x24GB (19/24) using Exllamav2 @ 16K context
* [EXL2 6bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-6bpw_h8_exl2) fits in 48GB+24GB (36/24 split) or 3x24GB (16/17/20 split) using Exllamav2 @ 32k context
* [GGUFs](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K_GGUF) \- Currently untested, lower-bit quants (<4 bits) are in progress
## Bonus New Downloads:
* **Models**: [story-reverse-prompt](https://huggingface.co/grimulkan/story-reverse-prompt-70b-rope8-32K-fp16) (convert raw story to instructions), [Aurelian-FAILED-CP](https://huggingface.co/grimulkan/aurelian-FAILED-70b-rope8-32K-fp16), high in hallucinations but writes diverse prose (for merging maybe?)
* **New Datasets**: [Summaries of Wikipedia articles](https://huggingface.co/datasets/grimulkan/wikipedia-summaries), [Physical/Spatial Reasoning](https://huggingface.co/datasets/grimulkan/physical-reasoning), [Relational Reasoning](https://huggingface.co/datasets/grimulkan/interpersonal-relational-reasoning), [Theory of Mind](https://huggingface.co/datasets/grimulkan/theory-of-mind), [Document Editing Tasks](https://huggingface.co/datasets/grimulkan/document-editing), [passkey-retrieval](https://huggingface.co/datasets/grimulkan/passkey-retrieval)
* **Cleanups/Modifications of Existing Datasets**: [jannie-log-augmented](https://huggingface.co/datasets/grimulkan/jannie-log-augmented), [aicg-logs-augmented](https://huggingface.co/datasets/grimulkan/aicg-logs-augmented), [Augmental-Stenisgate-Augmented](https://huggingface.co/datasets/grimulkan/Augmental-Stenisgate-Augmented), [bluemoon\_Karen\_cleaned](https://huggingface.co/datasets/grimulkan/bluemoon_Karen_cleaned), [PIPPA-augmented-dedup](https://huggingface.co/datasets/grimulkan/PIPPA-augmented-dedup), [LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented)
See [Hugging Face Page](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16) for more details, training data, etc.
**Please tell me how the model is doing! There's only so much I can catch testing by myself.** | 2024-01-16T00:48:41 | https://www.reddit.com/r/LocalLLaMA/comments/197pcmu/aurelian_70b_32k_context_v05_interim_update/ | Grimulkan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197pcmu | false | null | t3_197pcmu | /r/LocalLLaMA/comments/197pcmu/aurelian_70b_32k_context_v05_interim_update/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'MCalqV1HoBH4FiM21SoIO1zmo1rxabGTtYiIG6z6kb8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=108&crop=smart&auto=webp&s=4a1b4c5ab08a12413dc526402ff94dcd7b014072', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=216&crop=smart&auto=webp&s=c030eb7afa474c1335a3bb2443fce86371db557e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=320&crop=smart&auto=webp&s=6bd812a3e10943adaa92cd7505e49b0300fb252b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=640&crop=smart&auto=webp&s=cab9749e48311a56072b764992f8f45065932072', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=960&crop=smart&auto=webp&s=6286b521d3573cc8e08fb30972e7d8873cf0d932', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=1080&crop=smart&auto=webp&s=1e39363613dfe496718ab6e2611780ea176e1cac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?auto=webp&s=33bd006635ede805f31485be802f8f1d6583e19b', 'width': 1200}, 'variants': {}}]} |
rocm, exllamav2, oobabooga - disabling a gpu | 1 | [removed] | 2024-01-16T00:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/197p2mq/rocm_exllamav2_oobabooga_disabling_a_gpu/ | LumbarPain2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197p2mq | false | null | t3_197p2mq | /r/LocalLLaMA/comments/197p2mq/rocm_exllamav2_oobabooga_disabling_a_gpu/ | false | false | default | 1 | null |
Best model and settings for a 3060 12gb? | 3 | I recently upgraded to this GPU and have been having lots of fun messing around with different models, but I can't seem to dial in the settings.
I've been using mistralai_Mistral-7B-Instruct-v0.2 and get slow generation speed unless I lower it to 5000-6000 tokens (or characters? words? not sure of the correct term), but then it starts hallucinating relatively quickly, within 10-15 minutes. Is this something I'm doing wrong, or am I limited by my GPU? I'm using the 4-bit model; I've tried an 8-bit one but it seemed very slow. I also have 16 GB of RAM and a Ryzen 7 5800 if that matters.
Any and all tips for me would be appreciated, and sorry if this is a dumb question. | 2024-01-16T00:21:17 | https://www.reddit.com/r/LocalLLaMA/comments/197oqwj/best_model_and_settings_for_a_3060_12gb/ | VapedScoops | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197oqwj | false | null | t3_197oqwj | /r/LocalLLaMA/comments/197oqwj/best_model_and_settings_for_a_3060_12gb/ | false | false | self | 3 | null |
Noob question: If I have the VRAM, should I switch away from GGUF? | 20 | I'm very sorry for this noob question , but I've recently acquired a 24GB VRAM card which hasn't arrived yet, and I was wondering if I should re-download my GGUF models to the ones that can only be run exclusively in VRAM.
Will there be a performance improvement, or can I just simply continue using my GGUF files as normal? Thank you. | 2024-01-15T23:50:21 | https://www.reddit.com/r/LocalLLaMA/comments/197o1f7/noob_question_if_i_have_the_vram_should_i_switch/ | RadioSailor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197o1f7 | false | null | t3_197o1f7 | /r/LocalLLaMA/comments/197o1f7/noob_question_if_i_have_the_vram_should_i_switch/ | false | false | self | 20 | null |
Purely theoretical question about the capacity of a fine tuned LLM to fill in the blanks in personal history. | 14 | I've recently inherited my mother's house, which was filled to the brim with artifacts from the last hundred years. We're talking about hundreds of letters, plane tickets, brochures, notes, letters from others, letters to herself, diaries, and the like.
Is there potential for this technology to be deployed in such a way that, by fine-tuning and training it on my material, I would be able to query this - let's call it a database - to identify certain traits of the person and then potentially fill in the blanks?
For example, the person displays sadness in their text or pictures for a period of time because their loved one passed away at one point or another, or maybe they lost their job, which could be derived from, say, analyzing the income tax returns or looking at the accounts that I fed it.
Is this something that you think is remotely possible or completely impossible, and if so, why? | 2024-01-15T23:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/197nuwp/purely_theoretical_question_about_the_capacity_of/ | RadioSailor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197nuwp | false | null | t3_197nuwp | /r/LocalLLaMA/comments/197nuwp/purely_theoretical_question_about_the_capacity_of/ | false | false | self | 14 | null |
ryzen 5 5600 expected mixtral tok/s? | 2 | cpu: ryzen 5 5600
ram: 64GB 3200MHz
gpu: rtx 2070
backend: llama.cpp
model: mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf
I get about 1 tok/sec, which seems low for 13B active parameters. I've set it to 12 threads and am currently not GPU offloading, but at 4 layers it's not really better. | 2024-01-15T23:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/197nr0d/ryzen_5_5600_expected_mixtral_toks/ | Dankmemexplorer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197nr0d | false | null | t3_197nr0d | /r/LocalLLaMA/comments/197nr0d/ryzen_5_5600_expected_mixtral_toks/ | false | false | self | 2 | null |
What's the best open source LLM for returning only JSON? | 20 | The only model that I tested that is consistent with returning only JSON for me is CHATGPT-4 in JSON Mode. I've tested many open source alternatives and none has been consistent with only JSON replies (they always add comments). | 2024-01-15T22:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/197mnt5/whats_the_best_open_source_llm_for_returning_only/ | alew3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197mnt5 | false | null | t3_197mnt5 | /r/LocalLLaMA/comments/197mnt5/whats_the_best_open_source_llm_for_returning_only/ | false | false | self | 20 | null |
Some perplexities for the new 2 bit SOTA GGUF quants of LlamaCPP | 29 | Here come the results of the perplexity tests I made out of curiosity on Llama 70b and Aurora Nights 70b (the 2 first 70b models quantized with an iMatrix), and they are promising, especially if we compare them to the best quants we had until now under Exllama v2 0.0.11
https://preview.redd.it/stdaa6p9mocc1.png?width=1579&format=png&auto=webp&s=5aa2426e532ec1df1904049d051fed7495371e9f
https://preview.redd.it/tsk0v8p9mocc1.png?width=1571&format=png&auto=webp&s=2d0cff7b3cc988034d942ac061b0e0ed8cd1ed1c
Of course, the LlamaCPP perplexity might be calculated differently, especially beyond 512 ctx, due to its default use of fewer chunks, while Oobabooga running Exl2 models works with strided perplexity. But that's beyond my paygrade!
Some literature about those new GGUF 2-bit quants:
[https://github.com/ggerganov/llama.cpp/pull/4773](https://github.com/ggerganov/llama.cpp/pull/4773)
[https://github.com/ggerganov/llama.cpp/pull/4897](https://github.com/ggerganov/llama.cpp/pull/4897)
[https://github.com/ggerganov/llama.cpp/pull/4861](https://github.com/ggerganov/llama.cpp/pull/4861)
[https://github.com/ggerganov/llama.cpp/pull/4930](https://github.com/ggerganov/llama.cpp/pull/4930)
[https://github.com/ggerganov/llama.cpp/pull/4957](https://github.com/ggerganov/llama.cpp/pull/4957) | 2024-01-15T22:46:47 | https://www.reddit.com/r/LocalLLaMA/comments/197mip0/some_perplexities_for_the_new_2_bit_sota_gguf/ | Nexesenex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197mip0 | false | null | t3_197mip0 | /r/LocalLLaMA/comments/197mip0/some_perplexities_for_the_new_2_bit_sota_gguf/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'ceJckESamHZy2aOu_aTzC_Id3Lpd6T0E7EKAox9upLE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?width=108&crop=smart&auto=webp&s=7ec3b93ddf9177693cf8f86ad8acab0ebb83c6cb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?width=216&crop=smart&auto=webp&s=63fbe8c2d69d2617c4e6f77733f0a56a55d861a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?width=320&crop=smart&auto=webp&s=613f9658d86ee3a3f28275c10016e185a7fed5b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?width=640&crop=smart&auto=webp&s=ab81b04734b7363109a38c2254c13d3f12d35367', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?width=960&crop=smart&auto=webp&s=b343f7adbbd5b34a8c79132a2488284e610ad7b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?width=1080&crop=smart&auto=webp&s=0cf7be6d0b16334c573b2d1bb0a3e0202a3cde83', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rFFV_Wpg8Rmru2NES80tThKnlRtwavMrrDFShY9CHNo.jpg?auto=webp&s=3516f2483b254b37e188f600058befa2c79cd63c', 'width': 1200}, 'variants': {}}]} | |
What tool is this | 3 | If anyone can understand and knows what tool is it, send me the GitHub link in the comment thanks 👍🏻 | 2024-01-15T22:28:10 | BusComprehensive9188 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 197m1p9 | false | null | t3_197m1p9 | /r/LocalLLaMA/comments/197m1p9/what_tool_is_this/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'KQ81EmXxmyrnx8fpiMFV2krVXp46x4hmyJNIc02iwPw', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/hxhwlawyjocc1.png?width=108&crop=smart&auto=webp&s=b66c961b600af22d5b69f1b9c730eaa758c2ac68', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/hxhwlawyjocc1.png?width=216&crop=smart&auto=webp&s=a0bad220bb715b7dd471356528455dd8baab5dbc', 'width': 216}, {'height': 144, 'url': 'https://preview.redd.it/hxhwlawyjocc1.png?width=320&crop=smart&auto=webp&s=34d3a6b8ffa905e17ce52b7d149f8d48cfe1838e', 'width': 320}, {'height': 288, 'url': 'https://preview.redd.it/hxhwlawyjocc1.png?width=640&crop=smart&auto=webp&s=b075671c7c3b5714be8aefb4528f3f2ca68fe19a', 'width': 640}], 'source': {'height': 324, 'url': 'https://preview.redd.it/hxhwlawyjocc1.png?auto=webp&s=c6cb7d69720e1dc8099e6e41581e1236ed4a075d', 'width': 720}, 'variants': {}}]} | ||
LocalLLaMa Meme : The cycle continues (scroll pictures) | 2 | 2024-01-15T22:28:07 | https://www.reddit.com/gallery/197m1nj | phoneixAdi | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 197m1nj | false | null | t3_197m1nj | /r/LocalLLaMA/comments/197m1nj/localllama_meme_the_cycle_continues_scroll/ | false | false | 2 | null | ||
LocalLlama Meme : The cycle continues (scroll pictures) | 1 | [deleted] | 2024-01-15T22:25:55 | https://www.reddit.com/gallery/197lzok | phoneixAdi | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 197lzok | false | null | t3_197lzok | /r/LocalLLaMA/comments/197lzok/localllama_meme_the_cycle_continues_scroll/ | false | false | 1 | null | |
Any advice about how to do batch inference for a group of prompts? | 1 | [removed] | 2024-01-15T22:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/197lso5/any_advice_about_how_to_do_batch_inference_for_a/ | ibrabibo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197lso5 | false | null | t3_197lso5 | /r/LocalLLaMA/comments/197lso5/any_advice_about_how_to_do_batch_inference_for_a/ | false | false | self | 1 | null |
Local Embeddings Models | 1 | Greetings,
Does anyone know of a simple way to run an embeddings model locally? I just want something that functions like kobold/ooba. Download model, run software, load model, voilà: API. Can't seem to find it though.
Any help is much appreciated. | 2024-01-15T22:13:49 | https://www.reddit.com/r/LocalLLaMA/comments/197lok1/local_embeddings_models/ | reality_comes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197lok1 | false | null | t3_197lok1 | /r/LocalLLaMA/comments/197lok1/local_embeddings_models/ | false | false | self | 1 | null |
Run LLaMA and other LLMs entirely on the browser | 1 | Deep Chat has just received a huge update! You can now host entire LLMs on the browser. No servers, no connections, run it all in the comfort of your browser. Supported models include popular Llama and Mistral LLMs.
Check out the Open Source project to add it to your website: [https://github.com/OvidijusParsiunas/deep-chat](https://github.com/OvidijusParsiunas/deep-chat)
Try it out live in the Deep Chat playground:
[https://deepchat.dev/playground](https://deepchat.dev/playground)
​ | 2024-01-15T22:13:28 | https://www.reddit.com/r/LocalLLaMA/comments/197lo8s/run_llama_and_other_llms_entirely_on_the_browser/ | ovi_nation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197lo8s | false | null | t3_197lo8s | /r/LocalLLaMA/comments/197lo8s/run_llama_and_other_llms_entirely_on_the_browser/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SmiKCGgpM3_BWIUNjhS_V7_v-4cLjkD7J3Vpx_mYrcM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?width=108&crop=smart&auto=webp&s=09f42b5c53131810f189a943f4e8f61a97b73c90', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?width=216&crop=smart&auto=webp&s=94ba284f7c7893fe8b506cabe029d939ee20c313', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?width=320&crop=smart&auto=webp&s=7175d15df583f05f4aac608facebdb56cf59f1bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?width=640&crop=smart&auto=webp&s=8d64faeb540b3b6387491314923f302b24009b82', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?width=960&crop=smart&auto=webp&s=c1e14f5f281aa3d794eaaf86ca4402ee28e2fe20', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?width=1080&crop=smart&auto=webp&s=ab5a1bd55e490362fa94bc96d772b9b7ca6406c2', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/M8EUdV3JHg5AU4MkheZQq0j8-uwxUR4sKEKm5vvQkO0.jpg?auto=webp&s=44adee8b42ffe966eac24f1212a7aaf915d3a8cb', 'width': 2560}, 'variants': {}}]} |
LLAMA on entirely your browser | 1 | [deleted] | 2024-01-15T22:11:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 197lmtb | false | null | t3_197lmtb | /r/LocalLLaMA/comments/197lmtb/llama_on_entirely_your_browser/ | false | false | default | 1 | null | ||
How can I train a Mixtral model? I need a tutorial | 2 | In theory, I can train the mistral 7B model on my data using additional video memory from RAM. Next, I will combine this model with other 7Bs that I choose to be suitable.
​
I understand, I’m speaking as a delusional person who doesn’t understand anything about this. Help me understand the whole process and what resources will be needed for this? If, as a result, I can get a 4x7 model that will be able to work with the information I need from the dataset, this will be a very good result | 2024-01-15T21:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/197l9vi/how_can_i_train_a_mixtral_model_i_need_a_tutorial/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197l9vi | false | null | t3_197l9vi | /r/LocalLLaMA/comments/197l9vi/how_can_i_train_a_mixtral_model_i_need_a_tutorial/ | false | false | self | 2 | null |
Experience report: Phi2 context extension works well for summaries | 49 | Have been playing around with dolphin-phi2 Q4 for generating summaries, and decided to test out the brand new [self-extend](https://www.reddit.com/r/LocalLLaMA/comments/18x8g6c/llm_maybe_longlm_selfextend_llm_context_window/) work. It's already been merged into the latest version of llama.cpp
Phi2's biggest weakness is its 2k context window. There is RoPE scaling, which I haven't tried, but this approach is simpler and claims to have better results.
I've been testing it out for a while with summarizing random articles that pop up on Hacker News, and found that I can extend the context length to 4096 without any issues.
Here's an example summary that I generated from [this article about Nix on Macos](https://nixcademy.com/2024/01/15/nix-on-macos/):
> The article provides a detailed guide on how to install and configure Nix on macOS using nix-darwin. The process involves two steps: installing Nix using the official Nix installer from nixos.org, Determinate Systems' shell installer for macOS, or the graphical installer for macOS; and bootstrapping the nix-darwin configuration. The article also highlights the advantages of using nix-darwin, which includes declarative configuration of macOS system settings, seamless integration into launchd, and the ability to build Linux binaries and manage package versions from Nixpkgs. Additionally, it covers how to unlock sudo commands via fingerprint and update the system. The article concludes by emphasizing the ease and efficiency of using Nix-Darwin on new Macs, making it a valuable tool for macOS users.
This article is 3256 tokens in length. To check that phi2 is indeed remembering things in the middle, notice that it mentions details about unlocking commands from fingerprints, which is something mentioned in the approximate middle of the article.
Another example, this time from the article [When Random isn't](https://orlp.net/blog/when-random-isnt/):
> The author of the article recounts their experience with reverse-engineering the random number generator in World of Warcraft, a game developed by Blizzard Games. They discovered a loophole in the game's security system that allowed them to programmatically manipulate the random number generator's state, which they used to automate certain actions within the secure environment of the game. The author describes the steps taken to reverse-engineer the RNG and how they were able to predict its output with some level of success. However, the article also notes that the game's security measures were not robust, as the RNG's state was weak and shared, and the author's exploit was rendered ineffective after a few years. Despite the lack of practical use for their exploit, the author found satisfaction in being able to manipulate a seemingly random system to their advantage.
This article is 2745 tokens. Again, it manages to get info from the middle section, where the author discusses the RNG weaknesses.
## Details
*Note: I might be wrong here, this is just based on my understanding*
For the context extension, I used N=4 and W=1024, which means that llama.cpp will take the first 1024 tokens, update the KV cache and squash their attention down by a factor of 4. Then it does it again for the next 1024 tokens, until eventually you get to the last (4th) batch, where there are ((4096-1024)/4+1024=1792) tokens in the KV cache. AFAIK, it's important that this number (1792) doesn't exceed the model's original context size of 2048.
There is more details here: https://github.com/ggerganov/llama.cpp/pull/4810
If you wanted to change it, e.g. using N=8, W=1024, this means that you could do up to 8192 tokens (I think). To calculate this, you just check that at the last step, the number of tokens is (8192-1024)/8 + 1024 = 1920, which is within the original context window.
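As a sanity check, that arithmetic fits in a couple of lines (this is just my reading of how the group sizes work, so treat it as an approximation):

    def effective_cache_tokens(ctx_len: int, n: int, w: int) -> float:
        """Tokens in the KV cache at the final window: earlier tokens squashed by n, last window kept whole."""
        return (ctx_len - w) / n + w

    for ctx_len, n, w in [(4096, 4, 1024), (8192, 8, 1024)]:
        eff = effective_cache_tokens(ctx_len, n, w)
        print(f"ctx={ctx_len} N={n} W={w} -> {eff:.0f} tokens (must stay <= 2048)")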
Command to replicate:
`./main -m ./models/dolphin-2_6-phi-2.Q4_K_M.gguf -c 4096 -n 300 --n-gpu-layers 99 --temp 1.0 --top-k 4 --top-p 1.0 --repeat-penalty 1.0 --min-p 0 --grp-attn-n 4 --grp-attn-w 1024 --prompt {YOUR_PROMPT_HERE}`
I've shared my code here, in a very messy state for those who want to try it out themselves.
https://github.com/thomastay/cli-summarizer-llm/blob/main/summarize.py
Let me know how you guys are using context extension! | 2024-01-15T21:43:41 | https://www.reddit.com/r/LocalLLaMA/comments/197kweu/experience_report_phi2_context_extension_works/ | vannaplayagamma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197kweu | false | null | t3_197kweu | /r/LocalLLaMA/comments/197kweu/experience_report_phi2_context_extension_works/ | false | false | self | 49 | {'enabled': False, 'images': [{'id': '8LYWV8TAAQTjJoT2qcWC8nv9E9ZNZCcaSilKD76n7-w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/gDqdX_osL-tanT98vKiH1ZPn7I7BzBnFc8JyVKbdpWI.jpg?width=108&crop=smart&auto=webp&s=0977d0a4f3facc7ffb0cc8e7d5dde3d49baceb7f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/gDqdX_osL-tanT98vKiH1ZPn7I7BzBnFc8JyVKbdpWI.jpg?width=216&crop=smart&auto=webp&s=bb52de8cc4d453be1d6a358b1b0473e0dcd0d7ac', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/gDqdX_osL-tanT98vKiH1ZPn7I7BzBnFc8JyVKbdpWI.jpg?width=320&crop=smart&auto=webp&s=0cd5f179205baa2cfd21dda3d081361793cf5ea2', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/gDqdX_osL-tanT98vKiH1ZPn7I7BzBnFc8JyVKbdpWI.jpg?width=640&crop=smart&auto=webp&s=aa02335a70bd66add485e9b490cca8427c84bdcb', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/gDqdX_osL-tanT98vKiH1ZPn7I7BzBnFc8JyVKbdpWI.jpg?width=960&crop=smart&auto=webp&s=4c11a321a81b9bfb74c62ebbf678d19aafb2bb2e', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/gDqdX_osL-tanT98vKiH1ZPn7I7BzBnFc8JyVKbdpWI.jpg?auto=webp&s=d0cc9d2d3ccb22f7116ef1fba9f485ea24a6c073', 'width': 1024}, 'variants': {}}]} |
Open Source Wishlist | 8 | While it's fun to compare benchmarks and confuse models with riddles, I think we could make the most progress as a community by focusing on projects and applications for LLMs. This is my completely subjective wishlist, based only on my own opinions and experience, so please feel free to enlighten me:
* Standardized chat format. Some models are coming out that accept two, and that's great! That's a step in the right direction. I'd like to see it converge on one chat format (preferably ChatML imo, although I can live just fine with Vicuna).
* Better RAG tutorials. This is really step 2 after learning how to spin up a local model on your machine. RAG opens so many doors in terms of reference, context limits, and reduced hallucinations that I could foresee it being a default feature on chat apps by next year. 16K context window + RAG is far and away better than 400K context windows.
* Self driving Beyblades. This goes without saying and is an obvious evolution of the technology.
* MIDI bots. I'm not sure if this is already a thing or not, but I have this vision of hooking up different MIDI agents to different instruments, RAGing up some Deftones, Sleep Token, and Loathe references, and jamming along with my guitar. Also the idea of possibly tokenizing notes in a multimodal model sounds interesting, if feasible.
* Tiny (1-3B) MoE. I have a theory that the best way to go about making tiny models more performant is by grouping them up instead of putting them 1v1 against 7B+ models. I saw another post about mixing and matching different kinds of models like speech and large language models. That's sick. Let's do more of that. Maybe some small sentence transformer/NLP or embed models can be strapped to language models. Idk, I'm just playing god on the weekends.
* Real world applications for RAG/agents. There's too many tutorials on how to install them and do the example code. Congratulations. What are you using them for in the real world? I think the advantage the open source community has over closed source models is the fact that ours are more secure and specialized. That's perfect for making performant workers. Pick a job or task and replace it. Sell the fact that customer data doesn't go online. | 2024-01-15T21:41:24 | https://www.reddit.com/r/LocalLLaMA/comments/197kudc/open_source_wishlist/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197kudc | false | null | t3_197kudc | /r/LocalLLaMA/comments/197kudc/open_source_wishlist/ | false | false | self | 8 | null |
Best model and settings for a 3060 12gb? | 1 | [removed] | 2024-01-15T21:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/197ke9b/best_model_and_settings_for_a_3060_12gb/ | bipolaridiot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197ke9b | false | null | t3_197ke9b | /r/LocalLLaMA/comments/197ke9b/best_model_and_settings_for_a_3060_12gb/ | false | false | self | 1 | null |
How to fine tune model on slang? | 15 | For example my data is in json format {word: usage}
{lit: *How is it over there? Is it* ***lit****, or should I just go home instead?*}
{No cap: *This is the most beautiful house I’ve ever seen,* ***no cap****.*}
Can someone give me the steps I would have to take to fine-tune my model? Would it also need a meaning, or would usage be enough?
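My rough guess at the data prep step, just so it's clear what I have (a sketch of how I imagine turning the pairs into instruction-style JSONL; the file name and prompt wording are made up):

    import json

    # Turn each {word: usage} pair into an instruction-style training example.
    slang = {
        "lit": "How is it over there? Is it lit, or should I just go home instead?",
        "no cap": "This is the most beautiful house I've ever seen, no cap.",
    }

    with open("slang_train.jsonl", "w") as f:
        for word, usage in slang.items():
            example = {
                "instruction": f"Use the slang term '{word}' naturally in a sentence.",
                "output": usage,
            }
            f.write(json.dumps(example) + "\n")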
I'm trying to wrap my head around the things i need to learn or research to accomplish this. | 2024-01-15T21:20:53 | https://www.reddit.com/r/LocalLLaMA/comments/197kbm9/how_to_fine_tune_model_on_slang/ | wonderingStarDusts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197kbm9 | false | null | t3_197kbm9 | /r/LocalLLaMA/comments/197kbm9/how_to_fine_tune_model_on_slang/ | false | false | self | 15 | null |
Best GPU to run models locally? | 1 | [removed] | 2024-01-15T21:16:13 | https://www.reddit.com/r/LocalLLaMA/comments/197k7a4/best_gpu_to_run_models_locally/ | StillOk1589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197k7a4 | false | null | t3_197k7a4 | /r/LocalLLaMA/comments/197k7a4/best_gpu_to_run_models_locally/ | false | false | self | 1 | null |
Best GPU to run models locally? | 1 | [removed] | 2024-01-15T21:16:13 | https://www.reddit.com/r/LocalLLaMA/comments/197k7a3/best_gpu_to_run_models_locally/ | StillOk1589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197k7a3 | false | null | t3_197k7a3 | /r/LocalLLaMA/comments/197k7a3/best_gpu_to_run_models_locally/ | false | false | self | 1 | null |
Did anyone work with DPO adaptors? | 1 | [removed] | 2024-01-15T21:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/197k75p/did_anyone_work_with_dpo_adaptors/ | CuriosAligator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197k75p | false | null | t3_197k75p | /r/LocalLLaMA/comments/197k75p/did_anyone_work_with_dpo_adaptors/ | false | false | self | 1 | null |
Haiku DPO dataset + notebook tutorials for synthetic dataset generation | 17 | I just released [haiku\_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo), a synthetic Direct Preference Optimization dataset which has the goal of helping to train LLMs to be better at writing haiku.
As many of you know, Direct Preference Optimization (DPO) just needs a "chosen" and "rejected" response for a prompt. This is in contrast to most previous methods, which first required you to train a reward model (which could be a finicky, expensive and time-consuming process). DPO instead directly uses a dataset to align the model. There is a more in-depth overview of the method in this [post](https://huggingface.co/blog/dpo-trl#dpo-vs-ppo).
One of the challenges to constructing DPO datasets is that you often either need human annotations to label which response is preferable, or you can use a model like GPT-4. Both of these options are expensive and/or time-consuming to use.
I wanted to explore other possible approaches to creating DPO datasets using synthetic data, without using an LLM or humans to rank responses. This is why I chose haiku as a focus. A haiku is a poem that *usually* follows a 5-7-5 syllable structure. This means scoring generated haiku via some reasonably simple Python code is possible. This removes one of the most significant expenses for preference datasets: ranking results.
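To give a rough idea, the scoring I mean looks something like this (a simplified sketch; the dataset itself uses a proper syllable counter, the naive vowel-group count here is just to illustrate):

    import re

    def count_syllables(word: str) -> int:
        # Very naive: count groups of vowels. A real scorer would use a
        # pronunciation dictionary or a dedicated syllable library.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def haiku_score(haiku: str) -> int:
        """Score a haiku by how far each line is from the 5-7-5 target.
        0 is a perfect match; more negative is worse."""
        target = [5, 7, 5]
        lines = [l for l in haiku.strip().splitlines() if l.strip()]
        if len(lines) != 3:
            return -100  # not even three lines
        counts = [sum(count_syllables(w) for w in line.split()) for line in lines]
        return -sum(abs(c - t) for c, t in zip(counts, target))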
Because it's possible to rank results cheaply, we can also potentially now begin to explore questions like:
* how far apart in quality do "accepted" and "rejected" responses need to be?
* should we remove ties?
* is more data better, or is it better to focus on quality?
I used [distilabel](https://distilabel.argilla.io/latest/) \+ [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) to generate all of the data for this dataset. I did most of the work locally on a laptop till the final runs, which I did on Google Colab.
I'm working on releasing the notebooks used to produce this dataset here: [https://github.com/davanstrien/haiku-dpo](https://github.com/davanstrien/haiku-dpo).
I believe working on datasets could be a really valuable way for more people to contribute to open AI development, so I'm thinking about working on some tutorials/guides for possible approaches to this. Is this something that's actually of interest to anyone but me? 😅 | 2024-01-15T21:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/197k1ys/haiku_dpo_dataset_notebook_tutorials_for/ | dvanstrien | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197k1ys | false | null | t3_197k1ys | /r/LocalLLaMA/comments/197k1ys/haiku_dpo_dataset_notebook_tutorials_for/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'w1_sUqt71_K3IBxPUjskPfptM5D2EHo93udQ1PU8lpw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?width=108&crop=smart&auto=webp&s=03384316e136c6ffe8a3cfa092cf4d382aa0ba09', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?width=216&crop=smart&auto=webp&s=45d3ec6650e054c995326780af5d50217e926828', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?width=320&crop=smart&auto=webp&s=4cce3648d6a223ca00b0e0d91bb719dcb6734daa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?width=640&crop=smart&auto=webp&s=d1da8eeae6fa587b48fade91140bb7b8496cde21', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?width=960&crop=smart&auto=webp&s=5c67d8036e2fc34d45a8adae71f5be47e1b0fac2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?width=1080&crop=smart&auto=webp&s=c2fda88363a3bc470f4f75e71a7622845907c6bd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MZjg3e1yQVYljbVzzBMDxs7Yb5XTWRmmwXaOYEQMLW0.jpg?auto=webp&s=cd32afaf71b4460550cc001ae5924adc7bd60594', 'width': 1200}, 'variants': {}}]} |
QLoRA Prompt Leaking | 1 | [removed] | 2024-01-15T20:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/197jamo/qlora_prompt_leaking/ | popo543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197jamo | false | null | t3_197jamo | /r/LocalLLaMA/comments/197jamo/qlora_prompt_leaking/ | false | false | self | 1 | null |
Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI | 440 | 2024-01-15T18:53:45 | https://1a3orn.com/sub/machine-learning-bans.html | 1a3orn | 1a3orn.com | 1970-01-01T00:00:00 | 0 | {} | 197gj0q | false | null | t3_197gj0q | /r/LocalLLaMA/comments/197gj0q/many_ai_safety_orgs_have_tried_to_criminalize/ | false | false | default | 440 | null | |
Question about quantization and quality | 7 | Since I kind of always had this in the back of my mind but really never cared about it (I know almost nothing how llama.cpp works under the hood), I will now ask out of curiosity. :)
For example, llama-2-13b-chat.Q6\_K is **10.7 GB** in size, while llama-2-13b-chat.Q8\_0 is **13.8 GB**. A whopping 3.1 GB difference. According to the llama.cpp team and from my own experience, there is barely any difference in quality between the two, if any. The llama.cpp team doesn't even recommend using Q8\_0 quants.
Since there is apparently no noticeable quality difference, what really are those 3.1 extra GB? Is it just 3.1 GB of useless extra data that does nothing? An amateur in AI technology like me would logically think that 3.1 GB of extra data would make quite a difference.
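A quick back-of-the-envelope check of what those extra gigabytes buy, assuming roughly 13B parameters (so only approximate):

    # Bits per weight implied by the file sizes (parameter count is approximate).
    params = 13e9
    for name, size_gb in [("Q6_K", 10.7), ("Q8_0", 13.8)]:
        bits_per_weight = size_gb * 1e9 * 8 / params
        print(f"{name}: ~{bits_per_weight:.2f} bits per weight")
    # Q6_K: ~6.58 bits per weight
    # Q8_0: ~8.49 bits per weight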
Bonus question while I'm at it:
There seems, however, to be a noticeable difference between Q6\_K and Q8\_0 in MoE models. For example, when I ask the same questions to Beyonder 4x7b Q6\_K and Q8\_0, the Q8\_0 version seems to reply with numbered lists more often, while the Q6\_K version more often answers in a wall of plain text. While I can't really tell if it's a "quality" difference, they appear to behave differently.
It seems that MoE models might suffer much more from quantization, does anyone else recognize themselves in this? Or is it just me? | 2024-01-15T18:51:57 | https://www.reddit.com/r/LocalLLaMA/comments/197gh8z/question_about_quantization_and_quality/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197gh8z | false | null | t3_197gh8z | /r/LocalLLaMA/comments/197gh8z/question_about_quantization_and_quality/ | false | false | self | 7 | null |
NLI sentence transformers VS general purpose ones, for RAG applications | 7 | I've been looking into RAG, and have come across using sentence transformers for querying and semantic comparison.
Recently, I've discovered that NLI models are specifically designed for matching up queries to answers, which seems super useful, and yet all the ones on the sentence-transformers hugging face are like 2 years old, which is practically centuries ago in AI time, as opposed to the "all" models, which seem much more focused on semantic similarity comparison.
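For context, this is the kind of usage I mean (a minimal sketch; the model name is just one example of the query-to-passage models from the sentence-transformers docs):

    from sentence_transformers import SentenceTransformer, util

    # A query-to-passage model; swap in "all-MiniLM-L6-v2" for a general-purpose one.
    model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

    query = "How do I reset my password?"
    passages = [
        "To change your password, go to Settings > Security and click Reset.",
        "Our office is open Monday to Friday, 9am to 5pm.",
    ]

    q_emb = model.encode(query, convert_to_tensor=True)
    p_emb = model.encode(passages, convert_to_tensor=True)
    print(util.cos_sim(q_emb, p_emb))  # higher = more relevant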
Am I missing something here? Surely people aren't using years old models for modern RAG applications, right? | 2024-01-15T18:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/197fvfs/nli_sentence_transformers_vs_general_purpose_ones/ | Nano_9a9o | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197fvfs | false | null | t3_197fvfs | /r/LocalLLaMA/comments/197fvfs/nli_sentence_transformers_vs_general_purpose_ones/ | false | false | self | 7 | null |
Langchain and mlx models | 1 | [removed] | 2024-01-15T18:12:57 | https://www.reddit.com/r/LocalLLaMA/comments/197fgsf/langchain_and_mlx_models/ | KlutzyNecessary2205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197fgsf | false | null | t3_197fgsf | /r/LocalLLaMA/comments/197fgsf/langchain_and_mlx_models/ | false | false | self | 1 | null |
Running any AI locally on a low-end PC (i3-530, 4gb ddr3 ram, no GPU or iGPU) | 1 | I found an old PC with these *specifications*:
\- I3-530
\- 4gb ddr3 ram
\- **NO** **GPU**
I would like to run any AI application locally on it, as long as the application has a use.
I am also looking at whisper transcription, which I would like to use. Would this work with the low-end PC? Should I use faster-whisper or the regular one?
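For reference, running faster-whisper on CPU only would look roughly like this (a sketch; the "tiny" model and int8 compute type are my guess at what fits in 4 GB of RAM):

    from faster_whisper import WhisperModel

    # "tiny" or "base" with int8 quantization are probably the only realistic
    # options on an i3-530 with 4 GB of RAM and no GPU.
    model = WhisperModel("tiny", device="cpu", compute_type="int8")

    segments, info = model.transcribe("recording.wav")
    for segment in segments:
        print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")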
Which operating system would be best for this (preferably with interface, but it may be without)? So it can be any AI application, but it must run locally and without wifi. Currently the PC is running Slax OS Linux, and that's going very well. When no programs are open on it, only 0.5gb of ram is used. | 2024-01-15T18:04:39 | https://www.reddit.com/r/LocalLLaMA/comments/197f91p/running_any_ai_locally_on_a_lowend_pc_i3530_4gb/ | DaanDeweerdt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197f91p | false | null | t3_197f91p | /r/LocalLLaMA/comments/197f91p/running_any_ai_locally_on_a_lowend_pc_i3530_4gb/ | false | false | self | 1 | null |
recommendation please - linux desktop GUIs that support custom OPENAI_API_BASE? | 5 | I have tried Chatbox from [https://github.com/Bin-Huang/chatbox](https://github.com/Bin-Huang/chatbox) and it is somewhat decent, but does not allow you to set `OPENAI_API_BASE` . I want a GUI that I can use with [https://app.fireworks.ai/models/fireworks/mixtral-8x7b-instruct](https://app.fireworks.ai/models/fireworks/mixtral-8x7b-instruct)
​
​ | 2024-01-15T18:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/197f8vg/recommendation_please_linux_desktop_guis_that/ | Treeeant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197f8vg | false | null | t3_197f8vg | /r/LocalLLaMA/comments/197f8vg/recommendation_please_linux_desktop_guis_that/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 't0LLHlqnM2GZnhDCXn0dY6guQpGbtk9h4OkyKxpvaDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?width=108&crop=smart&auto=webp&s=11be0bd8e62dddde5da1d953542860dcc8ac16ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?width=216&crop=smart&auto=webp&s=be02636e923084cd30f4486c1202d3248ac3bb55', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?width=320&crop=smart&auto=webp&s=81fa9fe2b8d3ed99f23de04de2cb33913764b1d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?width=640&crop=smart&auto=webp&s=1e58a51926e0e7969481338fcdadd966a7a85354', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?width=960&crop=smart&auto=webp&s=d4e5a0f83511bac9536afdf8794df1fb6bbf0ea5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?width=1080&crop=smart&auto=webp&s=96660eda7d9058c3fdc390c76517eaf4a533314f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K1aEaLArH26JhMlFyyCSQwzOxl4Io3fn1hkroqsUFr8.jpg?auto=webp&s=d77b81dc72927ac1bc650153371693f04e46dc16', 'width': 1200}, 'variants': {}}]} |
My open air deep learning rig - repurposing crypto gear and creatively using server parts from eBay! (8x RTX 3090) | 68 | 2024-01-15T18:01:17 | https://imgur.com/a/YHyAV42 | kyleboddy | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 197f5tx | false | null | t3_197f5tx | /r/LocalLLaMA/comments/197f5tx/my_open_air_deep_learning_rig_repurposing_crypto/ | false | false | 68 | {'enabled': False, 'images': [{'id': 'RJKZpfNZtiC56KZz9Pn_i63yucuvCZF7Xna1oGvhBGs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/g36O0eifJkap2F4BTTqi9krVVmo9IaCTGRX47cjfElE.jpg?width=108&crop=smart&auto=webp&s=5c51da725debc0599264a75ea9aa1f969869288a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/g36O0eifJkap2F4BTTqi9krVVmo9IaCTGRX47cjfElE.jpg?width=216&crop=smart&auto=webp&s=5604a3ba19a68d48ef979373324de1afcb8ba43f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/g36O0eifJkap2F4BTTqi9krVVmo9IaCTGRX47cjfElE.jpg?width=320&crop=smart&auto=webp&s=775393c41a1ebb6c79baa5f1b5ee4f172d0ad711', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/g36O0eifJkap2F4BTTqi9krVVmo9IaCTGRX47cjfElE.jpg?auto=webp&s=13e80ce95783ccd8f7d80512a4b0acb46478ec52', 'width': 600}, 'variants': {}}]} | ||
LM studio using wrong GPU | 1 | [removed] | 2024-01-15T17:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/197f44b/lm_studio_using_wrong_gpu/ | Solid-Thought1486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197f44b | false | null | t3_197f44b | /r/LocalLLaMA/comments/197f44b/lm_studio_using_wrong_gpu/ | false | false | self | 1 | null |
Who here is running a quad socket, 16 total ram channel server for their GGUF models? Old or new. | 1 | [removed] | 2024-01-15T17:55:08 | https://www.reddit.com/r/LocalLLaMA/comments/197f08r/who_here_is_running_a_quad_socket_16_total_ram/ | CanineAssBandit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197f08r | false | null | t3_197f08r | /r/LocalLLaMA/comments/197f08r/who_here_is_running_a_quad_socket_16_total_ram/ | false | false | self | 1 | null |
OnnxStream running TinyLlama and Mistral 7B, with CUDA support | 16 | hi,
I'm the author.
​
OnnxStream can run Stable Diffusion XL Base and Turbo on a Raspberry Pi Zero 2. In the last 2 months I have implemented a series of optimizations to also support the inference of LLMs, but only on desktop computers and servers.
I'm interested in opinions on this possible development of OnnxStream.
URL: [https://github.com/vitoplantamura/OnnxStream/blob/master/assets/LLM.md](https://github.com/vitoplantamura/OnnxStream/blob/master/assets/LLM.md)
Thank you, Vito | 2024-01-15T17:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/197ezxf/onnxstream_running_tinyllama_and_mistral_7b_with/ | Pristine198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197ezxf | false | null | t3_197ezxf | /r/LocalLLaMA/comments/197ezxf/onnxstream_running_tinyllama_and_mistral_7b_with/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'DgakZOBsIt0ELJgEhmJhaB-L0z9DSOG-X5RoApgbofE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?width=108&crop=smart&auto=webp&s=8ab5ccaa425abac24e11824ebcf21b3ee8f8d2dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?width=216&crop=smart&auto=webp&s=6432fe4af1bbfce22541b022bcfd088db636f8c2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?width=320&crop=smart&auto=webp&s=306752707cbdf61f6317aa955ec3bc077c05cc04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?width=640&crop=smart&auto=webp&s=77903832507645bb8feab781735468dff4413d6a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?width=960&crop=smart&auto=webp&s=d130d97bdb6e9e7b2a01224f8917282530fb4e29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?width=1080&crop=smart&auto=webp&s=3d71361c80e87787248a6ceacc465e5fff62bd9d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TA_6zkSjZgdOVPlJ2PEnV5wQggbOvzVn0sLZW647obs.jpg?auto=webp&s=0842ec86509881f102cff3a856a71ff6151af13d', 'width': 1200}, 'variants': {}}]} |
Has anyone successfully implemented llava using the llama.cpp server? | 4 | Hi there,
I want to try llava in llama.cpp, but I'm not sure how. I know I need the model gguf and the projection gguf. But how do I send request to the server with an image? In what format do I send the image? Is llava compatible with the api like OAIapi example? | 2024-01-15T17:52:46 | https://www.reddit.com/r/LocalLLaMA/comments/197ey3f/has_anyone_successfully_implemented_llava_using/ | Frequent_Valuable_47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197ey3f | false | null | t3_197ey3f | /r/LocalLLaMA/comments/197ey3f/has_anyone_successfully_implemented_llava_using/ | false | false | self | 4 | null |
Training LLama, Mistral and Mixtral-MoE faster with Packing Inputs without Cross-Contamination Attention | 95 | Hey r/LocalLLaMA community!
I would like to share our work that can speed up finetuning LLama, Mistral and Mixtral significantly.
[https://github.com/MeetKai/functionary/tree/main/functionary/train/packing](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)
The idea is that we monkey-patch the original implementation to fix the issue known as **Cross-Contamination Attention**, which arises when we **pack multiple short inputs into a long input**.
The reduced training time depends on the **distribution of lengths of inputs**. In our case, the training time was reduced from **15 hours to 5 hours!**
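To illustrate the idea (a toy sketch of the concept only, not our actual patch; see the repo for the real implementation): when several short inputs are packed into one row, the attention mask has to be block-diagonal so tokens from one example can't attend to another.

    import torch

    def block_diagonal_mask(seq_lens):
        """Toy illustration: attention mask for one packed row so that each
        packed example only attends to itself (with causal masking inside)."""
        total = sum(seq_lens)
        mask = torch.zeros(total, total, dtype=torch.bool)
        start = 0
        for n in seq_lens:
            block = torch.tril(torch.ones(n, n, dtype=torch.bool))  # causal within the example
            mask[start:start + n, start:start + n] = block
            start += n
        return mask  # True = allowed to attend

    # Three short inputs of lengths 3, 2 and 4 packed into one 9-token sequence.
    print(block_diagonal_mask([3, 2, 4]).int())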
​ | 2024-01-15T17:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/197efaz/training_llama_mistral_and_mixtralmoe_faster_with/ | Relevant_Outcome_726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197efaz | false | null | t3_197efaz | /r/LocalLLaMA/comments/197efaz/training_llama_mistral_and_mixtralmoe_faster_with/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': 'lCx04XqMMuHVHi6TqoMeKt--k5yyLvlAfMy6iU3R2-g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?width=108&crop=smart&auto=webp&s=21059db0d2ab2e720e14f2b7b7bc15ee2975ba7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?width=216&crop=smart&auto=webp&s=60775c4fa1f2e35a6015835ddfad789117b400e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?width=320&crop=smart&auto=webp&s=eefd9149f3127fbc455f3f551804ded943d79fe7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?width=640&crop=smart&auto=webp&s=6ce96d08d4cd33576652f2791eaee8c43772743d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?width=960&crop=smart&auto=webp&s=5ef8bf2e8652b0f378551bade7c4fe1579f00f34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?width=1080&crop=smart&auto=webp&s=4910256bff5b2b3949bc99173bf7a4dd23c24b34', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YHLh1waVhkGRiDwgL2YfUuKWApgYeCXQJXHhFsG4g8I.jpg?auto=webp&s=dd5530b9836d7f6f65e78d15ba067b63e25f9080', 'width': 1200}, 'variants': {}}]} |
Help! All 70B models on Runpod run super slow? | 1 | [removed] | 2024-01-15T17:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/197e818/help_all_70b_models_on_runpod_run_super_slow/ | thomassyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197e818 | false | null | t3_197e818 | /r/LocalLLaMA/comments/197e818/help_all_70b_models_on_runpod_run_super_slow/ | false | false | self | 1 | null |
What are you using to train or finetune gguf outside of llama.cpp? | 12 | training and finetuning are both broken in llama.cpp and have been going back to more than a month ago (checked out Dec 1st tag)
I like llama.cpp, but the speed of change is great but not so great if it's breaking things. I'm wondering what other ways you all are training & finetuning. If anyone knows the last time these worked for the gguf format, please share so I can see if I can find code from around that time. | 2024-01-15T17:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/197e451/what_are_you_using_to_train_or_finetune_gguf/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197e451 | false | null | t3_197e451 | /r/LocalLLaMA/comments/197e451/what_are_you_using_to_train_or_finetune_gguf/ | false | false | self | 12 | null |
Ping-Ponging LLM Battle | 1 | [removed] | 2024-01-15T17:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/197dyiz/pingponging_llm_battle/ | dakinnia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197dyiz | false | null | t3_197dyiz | /r/LocalLLaMA/comments/197dyiz/pingponging_llm_battle/ | false | false | self | 1 | null |
Best 7B model for coding? | 1 | [removed] | 2024-01-15T17:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/197dtlc/best_7b_model_for_coding/ | laprisee4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197dtlc | false | null | t3_197dtlc | /r/LocalLLaMA/comments/197dtlc/best_7b_model_for_coding/ | false | false | self | 1 | null |
Anyone is using llamacpp for real? | 1 | Over the last 9 months, our team worked with hundreds of Fortune 500 companies on deploying LLMs in production and we've never seen anyone is interested in llamacpp or similar small LMs. It could be a cool toy project seeing that you can run your model on Raspberry Pi or a laptop, but it's so far off from any real use case, where you need guarantees that it will work. Even gpt-4 hallucinates a lot, what we expect from some quantized open-source llama model. From what I've seen llamacpp text is much worse than for many other llms.
​
So the question is are there any real use cases for llamacpp? Are there situations where you can't use gpt-3.5 or deploy an open-source model yourself? | 2024-01-15T17:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/197dn7u/anyone_is_using_llamacpp_for_real/ | russellsparadox101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197dn7u | false | null | t3_197dn7u | /r/LocalLLaMA/comments/197dn7u/anyone_is_using_llamacpp_for_real/ | false | false | self | 1 | null |
We've written a post about local speech-to-text transcription tools for Windows and Linux. We explore the installation and usage of Faster-Whisper and Speech Note. | 3 | 2024-01-15T16:55:50 | https://www.speech-to-text.cloud/local-speech-to-text-transcription-on-windows-and-linux/ | Speech-to-Text-Cloud | speech-to-text.cloud | 1970-01-01T00:00:00 | 0 | {} | 197diep | false | null | t3_197diep | /r/LocalLLaMA/comments/197diep/weve_written_a_post_about_local_speechtotext/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'SmUy0Y4559_EA7dFREmrZ6ZfY5RMw4G8v-QIjFAX1wk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Ml3SLdGSeamuqAyW-XrCbnNcO84mY9zU_COFWWK_daI.jpg?width=108&crop=smart&auto=webp&s=18deadb1a0c42c112f520608b26ee34b45ce79c3', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/Ml3SLdGSeamuqAyW-XrCbnNcO84mY9zU_COFWWK_daI.jpg?width=216&crop=smart&auto=webp&s=956c533c4fa9a1013e8210925e919c12d647e9bf', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Ml3SLdGSeamuqAyW-XrCbnNcO84mY9zU_COFWWK_daI.jpg?width=320&crop=smart&auto=webp&s=30d2040e4db4960d2c7b402d80b0e00b9f1c6fa2', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/Ml3SLdGSeamuqAyW-XrCbnNcO84mY9zU_COFWWK_daI.jpg?width=640&crop=smart&auto=webp&s=4794b3460ccb5c9627c81f418e2c8e609a0fd780', 'width': 640}], 'source': {'height': 427, 'url': 'https://external-preview.redd.it/Ml3SLdGSeamuqAyW-XrCbnNcO84mY9zU_COFWWK_daI.jpg?auto=webp&s=493dcadbfdf16e0e97d91b68b37ef8995671e039', 'width': 640}, 'variants': {}}]} | ||
Deploying Phi-2 as an API endpoint | 7 | Microsoft/Phi-2 is really exceptional for for being so small in size yet being powerful enough to outcompete large language models on some of the most prominent benchmarks.
Deploy Phi-2 as an API endpoint using this colab notebook:
[https://colab.research.google.com/drive/1TtztWHYWg8i5pas\_rDxj7f22eVR\_th1J](https://colab.research.google.com/drive/1TtztWHYWg8i5pas_rDxj7f22eVR_th1J)
Within 4 easy steps you can get an API endpoint hosting the specified LLM.
The deployed API endpoint is **automatically configured for higher throughput** with batching using vllm.
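For anyone curious what the batching layer looks like, the offline equivalent with vLLM is roughly this (a sketch; sampling parameters are arbitrary, and it assumes your vLLM version supports the phi architecture):

    from vllm import LLM, SamplingParams

    # vLLM batches these prompts automatically (continuous batching),
    # which is what gives the endpoint its throughput.
    llm = LLM(model="microsoft/phi-2", trust_remote_code=True)
    params = SamplingParams(temperature=0.7, max_tokens=128)

    prompts = [
        "Explain what a Mixture of Experts model is in one paragraph.",
        "Write a haiku about GPUs.",
    ]
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)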
This would not only cost you less at peak utilisation but also give you the capability to build robust AI powered applications and workflows to integrate in your business logic and workflows. | 2024-01-15T16:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/197dg13/deploying_phi2_as_an_api_endpoint/ | gvij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197dg13 | false | null | t3_197dg13 | /r/LocalLLaMA/comments/197dg13/deploying_phi2_as_an_api_endpoint/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
How much knowledge is "hidden" within an LLM? | 8 | I wonder how much knowledge an LLM of a certain size can have and repeat without hallucinations. Considering you can easily fit a whole Wikipedia dump within a few gigabytes and language models not having to remember the actual phrasing (just the knowledge behind it), even a 7B model might be able to gain huge amounts of general knowledge. In practice, our current models (at least the 7B ones I messed with) don't seem to be that smart though. Are there benchmarks or papers around this topic? Could some Wikipedia articles (if the training dataset even contains them) be deeply hidden within the LLM and only retrievable by breaking the query format? | 2024-01-15T16:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/197dbs4/how_much_knowledge_is_hidden_within_an_llm/ | susibacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197dbs4 | false | null | t3_197dbs4 | /r/LocalLLaMA/comments/197dbs4/how_much_knowledge_is_hidden_within_an_llm/ | false | false | self | 8 | null |