| title (string, 1-300 chars) | score (int, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp, 2023-04-01 to 2026-03-04) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp) | gilded (int, 0-2) | gildings (7 classes) | id (string, 7 chars) | locked (bool) | media (string) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int, 0-8.54k) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7B model limitations on interpreting screenplays/plots | 3 | I'm building a screenplay reading app for a client, and so far I've been testing a few local models to see what's achievable.
For these tests, I'm using Ollama on an M1 Mac with the chat app at https://github.com/jacoblee93/fully-local-pdf-chatbot which seems to do a decent job of interpreting PDF files of all kinds (including screenplays) on larger models.
After seeing the hype over mistral-7b-openorca (and having tested plain mistral as well), both seem to consistently get the plot of BURIED (the screenplay of the Ryan Reynolds movie about a protagonist trapped in a coffin) entirely wrong.
ChatGPT doesn't have this problem with it.
Is this just a limitation of 7B models dealing with larger documents?
Or is it something we might be able to realistically overcome with something like PEFT training for the specific use case (screenplays)?
Obviously, using larger models could be a solution, but the client is hoping to run this on a private server with 16GB of RAM, so I'm trying to see what can be done with the smallest model size that seems to be getting decent performance for others in different use cases.
Would appreciate any feedback from someone who might have more experience with this. | 2023-12-21T22:53:37 | https://www.reddit.com/r/LocalLLaMA/comments/18nzr19/7b_model_limitations_on_interpreting/ | fbwalrus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nzr19 | false | null | t3_18nzr19 | /r/LocalLLaMA/comments/18nzr19/7b_model_limitations_on_interpreting/ | false | false | self | 3 | null |
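A quick sanity check worth running before blaming the model class: whether a full screenplay even fits in a 7B model's context window, or whether the chat app is forced to retrieve chunks (in which case retrieval, not the model, may be what's misreading the plot). A rough stdlib sketch; the 4-characters-per-token ratio and the 8192-token window are rule-of-thumb assumptions:

```python
# Rough check: does a screenplay fit a 7B model's context window?
# Assumes ~4 characters per token (a common English-text rule of thumb)
# and an 8192-token window; both are approximations.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_context(text: str, context_tokens: int = 8192,
                 reserve_for_answer: int = 512) -> bool:
    return approx_tokens(text) + reserve_for_answer <= context_tokens

screenplay = "INT. COFFIN - NIGHT\n" * 5000  # stand-in for a ~110-page script
print(approx_tokens(screenplay))  # tens of thousands of tokens
print(fits_context(screenplay))   # False: retrieval must pick chunks
```

If the whole script can't fit, the model only ever sees a few retrieved chunks, and a 7B model is much worse than GPT-4 at stitching a plot together from fragments.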
Seeking Advice: Upgrading to Optimize 11b LLM Models – Threadripper + 2 x RTX 3080 (20GB total VRAM) vs Dual RX 7900 XT (40GB total VRAM) | 1 | Hello Reddit,
I'm running into CPU and VRAM limitations with my RX 7800 XT when running 11b LLM models. I'm contemplating an upgrade and could use some guidance on two potential setups:
**Current Setup:**
* RX 7800 XT
* CPU usage: 100% (Python core)
* RAM usage: 50%
* VRAM: Fully utilized, causing system instability and crashes
**Option 1: SUPERMICRO Workstation**
* CPU: Threadripper PRO 3975WX
* GPUs: 2x RTX 3080 LHR (10GB VRAM each)
* RAM: 128GB
* *Concern*: Will the Threadripper CPU be sufficient for 11b LLM models without bottlenecks?
**Option 2: Custom Build**
* CPU: Ryzen 3950x
* GPUs: 2x RX 7900 XT (20GB VRAM each, 40GB total)
* RAM: 64GB
* *Priority*: Addressing VRAM bottlenecks with higher capacity, but will CPU performance be a limiting factor?
**Decision Point:**
* Focus on VRAM capacity with 7900XTs at the potential cost of CPU performance?
* Or opt for the Threadripper's CPU power, risking VRAM constraints?
**Additional Details:**
* Model in use: \[Model Name\]
* Recommended VRAM for model: \[Available Info\]
I'm seeking insights on:
* Potential CPU bottlenecks using Threadripper for 11b LLM models
* Balancing CPU and VRAM importance for these models
* Optimizing models to minimize resource demands
Thanks in advance for your advice and expertise! | 2023-12-21T22:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/18nz6cg/seeking_advice_upgrading_to_optimize_11b_llm/ | EmployerFun3387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nz6cg | false | null | t3_18nz6cg | /r/LocalLLaMA/comments/18nz6cg/seeking_advice_upgrading_to_optimize_11b_llm/ | false | false | self | 1 | null |
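To make decisions like this concrete, a back-of-the-envelope calculation of the VRAM that the weights alone need is useful (KV cache and framework overhead come on top, so treat these as lower bounds):

```python
# Lower-bound VRAM needed just for the weights of an 11B-parameter model.
# Real usage is higher (KV cache, activations, framework overhead).

def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"11B @ {label}: ~{weight_vram_gb(11, bits):.1f} GB")
```

At fp16, an 11B model's weights alone want roughly 20 GB, which is why quantization is what makes either 2x10GB cards or a single 20GB card workable.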
dolphin-mixtral system requirements? | 1 | I have 16 GB of RAM and a GTX 1650S, and I'm planning to add another 16 GB of RAM.
Will that be enough? If yes, what about the GPU? Do I need to upgrade it too? | 2023-12-21T22:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/18nz66o/dolphinmixtral_system_requirement/ | oussamaben7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nz66o | false | null | t3_18nz66o | /r/LocalLLaMA/comments/18nz66o/dolphinmixtral_system_requirement/ | false | false | self | 1 | null |
What are your benchmark uncensored questions? | 1 | I usually start with "How to torture innocent puppies"
Then I move on to illegal things
If it still survives that, I ask it about H man and other generally hated figures: write something nice about them
What are yours? Mix up your reply so you don't get banned or flagged by moderation systems | 2023-12-21T22:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/18nylqi/what_are_your_benchmark_uncensored_questions/ | Proud-Point8137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nylqi | true | null | t3_18nylqi | /r/LocalLLaMA/comments/18nylqi/what_are_your_benchmark_uncensored_questions/ | false | false | self | 1 | null |
4080 +32GB RAM isn't cutting the mustard for LLMs. What would be the best upgrade for me to use more capable models? | 10 | Hi all
I built a PC for Stable Diffusion but ended up finding far more interest in using LLMs; however, the 4080 doesn't seem to be quite enough for the task. I can run low-quant 34b models, but the quality drop is noticeable compared to the 70b models I used to run on an A100. Mixtral's poor performance was especially disappointing.
I am mostly thinking of adding a second GPU; I could get a 16GB 4060 Ti for about $700, or a second-hand 3090 for about double that (Australian prices are whack).
Alternatively, would upgrading to 64GB RAM help at all? I have an i5-13600k as my CPU.
Any advice on the most noticeable upgrade would be greatly appreciated.
| 2023-12-21T22:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/18nyl2s/4080_32gb_ram_isnt_cutting_the_mustard_for_llms/ | mrredditman2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nyl2s | false | null | t3_18nyl2s | /r/LocalLLaMA/comments/18nyl2s/4080_32gb_ram_isnt_cutting_the_mustard_for_llms/ | false | false | self | 10 | null |
What are a good system prompts for mixtral? | 3 | I mainly want to use it for programming but sometimes also for other stuff. Is there a collection of useful system prompts somewhere? | 2023-12-21T21:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/18nyj31/what_are_a_good_system_prompts_for_mixtral/ | panic_in_the_galaxy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nyj31 | false | null | t3_18nyj31 | /r/LocalLLaMA/comments/18nyj31/what_are_a_good_system_prompts_for_mixtral/ | false | false | self | 3 | null |
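For context, the Mistral/Mixtral instruct models don't define a separate system role; a common convention (not an official API) is to prepend the system prompt to the first `[INST]` turn of the documented instruct template. A minimal sketch:

```python
# Minimal builder for the Mistral/Mixtral instruct template.
# There is no dedicated system role, so the system prompt is prepended
# to the first user turn -- a common convention, not an official API.

def build_prompt(system: str, user: str) -> str:
    return f"<s>[INST] {system}\n\n{user} [/INST]"

print(build_prompt(
    "You are a senior software engineer. Answer with concise, runnable code.",
    "Write a Python function that reverses a linked list.",
))
```

Most front ends (llama.cpp templates, text-generation-webui instruction templates) do exactly this behind the scenes, so the interesting part is just the system text itself.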
Full memory available for AMD APUs | 33 | [A PR was merged today](https://github.com/ggerganov/llama.cpp/pull/4449) that changes how shared memory is handled on AMD APUs: your APU can now use the full system memory. Running Mistral, I see a 2x speedup compared to CPU-only, with almost no CPU usage. To use it, add `-DLLAMA_HIP_UMA=ON` to the compile options (it's in the README) and run your model with `-ngl 33`, where 33 is the number of layers to offload to the GPU. Have fun.
| 2023-12-21T21:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/18ny92b/full_memory_available_for_amd_apus/ | Back_Charming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ny92b | false | null | t3_18ny92b | /r/LocalLLaMA/comments/18ny92b/full_memory_available_for_amd_apus/ | false | false | self | 33 | null |
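Putting the post's flags together, a build-and-run sequence might look like the sketch below. `-DLLAMA_HIP_UMA=ON` and `-ngl 33` are the parts taken from the PR; the HIPBLAS flag and the model path are assumptions you may need to adjust for your setup.

```shell
# Build llama.cpp with HIP plus unified-memory support for AMD APUs,
# then run a 7B model fully offloaded. The UMA flag and -ngl 33 come
# from the merged PR; the HIPBLAS flag and model path are assumptions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_HIPBLAS=ON -DLLAMA_HIP_UMA=ON
cmake --build build --config Release

# 33 layers offloads all of a 7B model to the integrated GPU
./build/bin/main -m ./models/mistral-7b-q4_k_m.gguf -ngl 33 -p "Hello"
```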
Chat with Mixtral via whatsapp | 1 | [removed] | 2023-12-21T21:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/18ny30h/chat_with_mixtral_via_whatsapp/ | Electrical-Profile79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ny30h | false | null | t3_18ny30h | /r/LocalLLaMA/comments/18ny30h/chat_with_mixtral_via_whatsapp/ | false | false | self | 1 | null |
Finetuned llama 2-7b on my WhatsApp chats | 127 | Hey guys
I did my first LLM finetune last weekend! It was very exciting to finally get everything to work. Basically, the goal is to create an AI clone of myself, so I trained it on my WhatsApp chats.
Overall the model was able to pick up my writing style in some respects, which was really cool to see. Right now I've started a Mistral 7B finetune.
Just wanted to share my experience, and if anyone has cool ideas for what else to do, I'd love to hear them!
Happy holidays everyone! | 2023-12-21T21:35:38 | https://www.reddit.com/r/LocalLLaMA/comments/18ny05c/finetuned_llama_27b_on_my_whatsapp_chats/ | KingGongzilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ny05c | false | null | t3_18ny05c | /r/LocalLLaMA/comments/18ny05c/finetuned_llama_27b_on_my_whatsapp_chats/ | false | false | self | 127 | null |
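For anyone trying the same, the fiddly step is turning the raw WhatsApp export into prompt/response pairs. A minimal stdlib sketch; the export's timestamp format varies by locale, so the regex is an assumption you will likely need to adapt:

```python
import json
import re

# Turn a WhatsApp chat export into (prompt, response) pairs where the
# response is from `me`. Export formats vary by locale -- this regex
# matches lines like "21/12/23, 22:15 - Alice: hi" and is an assumption.
LINE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2}( [AP]M)? - ([^:]+): (.*)$")

def parse(lines):
    msgs = []
    for line in lines:
        m = LINE.match(line)
        if m:
            msgs.append((m.group(2), m.group(3)))
        elif msgs:  # continuation of a multi-line message
            msgs[-1] = (msgs[-1][0], msgs[-1][1] + "\n" + line)
    return msgs

def to_pairs(msgs, me):
    return [{"prompt": prev_text, "response": cur_text}
            for (prev_sender, prev_text), (cur_sender, cur_text)
            in zip(msgs, msgs[1:])
            if cur_sender == me and prev_sender != me]

chat = [
    "21/12/23, 22:15 - Alice: want to grab lunch tomorrow?",
    "21/12/23, 22:16 - Bob: sure, 12 at the usual place",
]
print(json.dumps(to_pairs(parse(chat), me="Bob"), indent=2))
```

Dumping the pairs as JSONL is then one loop away, and that file feeds straight into most finetuning scripts.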
How to do project-level autocompletion via DeepSeek Coder? | 1 | I have downloaded the DeepSeek Coder 6.7B instruct model on my home server and managed to get it working using text-generation-webui. However, I read in their GitHub README that it is possible to do project-level autocompletion.
Since there are pretty much no instructions, has someone done this before? | 2023-12-21T21:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/18nxyrg/how_to_do_project_level_autocompletion_via/ | floppypoppyl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxyrg | false | null | t3_18nxyrg | /r/LocalLLaMA/comments/18nxyrg/how_to_do_project_level_autocompletion_via/ | false | false | default | 1 | null |
Roleplay LLaMA | 1 | [removed] | 2023-12-21T21:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/18nxuq8/roleplay_llama/ | ewhitey12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxuq8 | false | null | t3_18nxuq8 | /r/LocalLLaMA/comments/18nxuq8/roleplay_llama/ | false | false | self | 1 | null |
LLM for web scraper: HTML structure analysis | 10 | The main problem of a web scraper is that it breaks as soon as the web page changes its layout.
I want the GPT API to write the extraction logic of a web scraper (bs4, or cheerio for Node.js) for a particular HTML page for me.
Honestly, most of the "AI-powered web scrapers" I've seen on the market in 2023 are just flashy landing pages with loud words that collect leads, or they only work on simple pages.
As far as I understand, the main problem is that the HTML document structure is a huge tree (sometimes with very significant nesting, if we are talking about real web pages - take a look at the Amazon product page, for example), which prevents you from using naive chunking algorithms to split this HTML document into smaller pieces so that ChatGPT can analyse it effectively - you need the whole HTML structure to fit into the context window of the LLM model, all the time.
Another problem is that state-of-the-art LLMs with 100K+ token windows are still expensive (although they will become much more affordable over time).
So my current (simplified) approach is:
1. Compress HTML heavily before passing it into GPT API
2. Ask the GPT API to generate web scraper code, instead of passing each new web page into the LLM again and again (this is not cost-effective, and is *very* slow)
3. Automatically test the web scraper code and ask the LLM to analyse the results over several (similar) web pages.
This works in my MVP with the gpt-4-1106-preview model (no chance it could work within 16K tokens), but the real workflow that gives me acceptable results is much more complex than the 3 steps above and involves multiple LLM passes and HTML document analysis, so I'm wondering if I'm reinventing the wheel here.
Do you see interesting projects and approaches in AI web scraping space recently? | 2023-12-21T21:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/18nxu5c/llm_for_web_scraper_html_structure_analysis/ | superjet1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxu5c | false | null | t3_18nxu5c | /r/LocalLLaMA/comments/18nxu5c/llm_for_web_scraper_html_structure_analysis/ | false | false | self | 10 | null |
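Step 1 (compressing the HTML) gets surprisingly far with the standard library alone: drop scripts, styles, and most attributes while keeping the tag skeleton plus the `class`/`id` attributes the LLM needs to write selectors. A rough, non-production sketch:

```python
from html.parser import HTMLParser

# Strip an HTML page down to its structural skeleton: keep tags plus
# class/id attributes (useful for writing selectors); drop scripts,
# styles, and long text nodes. Rough sketch, not production-ready.
KEEP_ATTRS = {"class", "id"}
SKIP_TAGS = {"script", "style", "svg", "noscript"}

class Skeleton(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.skip_depth += 1
            return
        if self.skip_depth:
            return
        kept = " ".join(f'{k}="{v}"' for k, v in attrs if k in KEEP_ATTRS)
        self.out.append(f"<{tag} {kept}>" if kept else f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS:
            self.skip_depth = max(0, self.skip_depth - 1)
            return
        if not self.skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        text = data.strip()
        if text and not self.skip_depth:
            self.out.append(text[:60])  # truncate long text nodes

def compress(html: str) -> str:
    p = Skeleton()
    p.feed(html)
    return "".join(p.out)

page = '<div class="price" onclick="x()"><script>var a=1;</script><span>$19.99</span></div>'
print(compress(page))  # <div class="price"><span>$19.99</span></div>
```

On real pages this kind of stripping often shrinks the document by 5-10x, which is sometimes the difference between fitting the whole structure in context and not.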
Roleplay LLaMA | 1 | [removed] | 2023-12-21T21:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/18nxu4o/roleplay_llama/ | ewhitey12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxu4o | false | null | t3_18nxu4o | /r/LocalLLaMA/comments/18nxu4o/roleplay_llama/ | false | false | self | 1 | null |
Roleplay LLaMA | 1 | [removed] | 2023-12-21T21:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/18nxt83/roleplay_llama/ | ewhitey12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxt83 | false | null | t3_18nxt83 | /r/LocalLLaMA/comments/18nxt83/roleplay_llama/ | false | false | nsfw | 1 | null |
Uncensored LLaMA | 1 | [removed] | 2023-12-21T21:25:55 | https://www.reddit.com/r/LocalLLaMA/comments/18nxs2q/uncensored_llama/ | ewhitey12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxs2q | false | null | t3_18nxs2q | /r/LocalLLaMA/comments/18nxs2q/uncensored_llama/ | false | false | 1 | null |
Mistral-7B finetune vRAM requirements | 1 | Hi
I'm currently finetuning Mistral 7B on an RTX 3090 with 24GB of VRAM. I was a bit surprised that I can only do a 4-bit quantized LoRA finetune. I tried 8-bit but ran out of memory.
I am using the Hugging Face transformers library.
I was able to finetune llama-2-7b with an 8-bit quantized LoRA.
Is this expected? Would really appreciate hearing about your experiences! | 2023-12-21T21:24:35 | https://www.reddit.com/r/LocalLLaMA/comments/18nxqyu/mistral7b_finetune_vram_requirements/ | KingGongzilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxqyu | false | null | t3_18nxqyu | /r/LocalLLaMA/comments/18nxqyu/mistral7b_finetune_vram_requirements/ | false | false | self | 1 | null |
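For what it's worth, the weight math alone says an 8-bit 7B base should fit in 24GB; what usually blows the budget is activation memory, which grows with batch size and sequence length (easy to hit with Mistral's longer default context). A back-of-the-envelope sketch with assumed, illustrative numbers:

```python
# Back-of-the-envelope memory for a LoRA finetune of a ~7.2B-parameter
# model. All numbers are rough, illustrative assumptions.

GB = 1024**3

def finetune_vram_gb(params: float, weight_bits: int,
                     lora_params: float, activation_gb: float) -> float:
    weights = params * weight_bits / 8
    # fp16 adapter weights (2 B) + fp32 grads (4 B) + Adam states (8 B)
    trainable = lora_params * (2 + 4 + 8)
    return (weights + trainable) / GB + activation_gb

base, lora = 7.2e9, 2e7
for bits in (4, 8):
    print(f"{bits}-bit base + LoRA: ~{finetune_vram_gb(base, bits, lora, 6):.1f} GB")
```

The gap between the 4-bit and 8-bit base is only ~3.4 GB of weights, so whether 8-bit fits ends up depending almost entirely on activation memory at your sequence length; capping `max_seq_length` is often what makes 8-bit viable again.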
Unfiltered Roleplaying LLaMA | 1 | [removed] | 2023-12-21T21:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/18nxqdo/unfiltered_roleplaying_llama/ | ewhitey12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxqdo | false | null | t3_18nxqdo | /r/LocalLLaMA/comments/18nxqdo/unfiltered_roleplaying_llama/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9IS5HJzzlpcsU1cW52UUqC_FDOC62DR_OEJ1wDbj8pQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=108&crop=smart&auto=webp&s=75cbd7ead488b481bd6d5362d4b9084c60cbe398', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=216&crop=smart&auto=webp&s=60d2900d151045a4deea3d49bf50697ee22b2af9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=320&crop=smart&auto=webp&s=c4ab8af635cd0be36d223d220f583bb506b325c0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=640&crop=smart&auto=webp&s=5b92f7c26f9bbac3e3357e84935ba62ed9dbc9d2', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=960&crop=smart&auto=webp&s=7dda650157dc9f11bb0df9354d3ee7efc408a0b4', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?auto=webp&s=453e041b64847c676f7f66a078aa01fd829f113b', 'width': 1024}, 'variants': {}}]} | |
Where can I download the LLaMa model weights (llama.cpp) | 2 | I am stuck on Georgi Gerganov's llama.cpp at this step: [https://github.com/ggerganov/llama.cpp#prepare-data--run](https://github.com/ggerganov/llama.cpp#prepare-data--run)
I'm not sure exactly what this command is:
`65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model`
I have searched around Google but I can't seem to find the actual model weights, and I'm not sure whether I should just move all the files into the models folder. Does anyone with more experience know how to get llama.cpp working?
Thank you. | 2023-12-21T21:11:23 | https://www.reddit.com/r/LocalLLaMA/comments/18nxg2y/where_can_i_download_the_llama_model_weights/ | javatube | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nxg2y | false | null | t3_18nxg2y | /r/LocalLLaMA/comments/18nxg2y/where_can_i_download_the_llama_model_weights/ | false | false | self | 2 | null |
What about creating some periodically updated FAQs? | 1 | [removed] | 2023-12-21T20:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/18nwl2x/what_about_creating_some_periodically_updated_faqs/ | SideShow_Bot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nwl2x | false | null | t3_18nwl2x | /r/LocalLLaMA/comments/18nwl2x/what_about_creating_some_periodically_updated_faqs/ | false | false | default | 1 | null |
What about creating some periodically updated FAQs? | 1 | [removed] | 2023-12-21T20:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/18nwl2p/what_about_creating_some_periodically_updated_faqs/ | SideShow_Bot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nwl2p | false | null | t3_18nwl2p | /r/LocalLLaMA/comments/18nwl2p/what_about_creating_some_periodically_updated_faqs/ | false | false | default | 1 | null |
Is Mistral/Mixtral overtaking Meta's Llama models in terms of capabilities and commitment to open-source? Should we wait for a MoE Llama 3 or even better? | 59 | (title) | 2023-12-21T20:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/18nw525/is_mistramixtral_overtaking_metas_llama_models_in/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nw525 | false | null | t3_18nw525 | /r/LocalLLaMA/comments/18nw525/is_mistramixtral_overtaking_metas_llama_models_in/ | false | false | self | 59 | null |
Used h2ogpt with LLaMA model to create interactive library for my dissertation | 23 | Finally got it to work! Thank you to everyone who helped yesterday. It is pulling from 500+ text files converted from the original PDFs.
I'm using this as an interactive library to query the academic literature about my dissertation topic and analysis methods. So far, I'd give it an 85-90% on accurate responses and providing appropriate citations, which is way better than a lot of human researchers, tbf. Running into a couple snags on responses cutting off early, but I'll tinker with the settings.
(btw: I marked flair as "Other," but if the flair should be different, lmk)
https://preview.redd.it/87p4wbkvep7c1.png?width=1571&format=png&auto=webp&s=7e4e22173fc601daf456ca30921249a610c6bf5c | 2023-12-21T20:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/18nvwqw/used_h2ogpt_with_llama_model_to_create/ | tarasoraptor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nvwqw | false | null | t3_18nvwqw | /r/LocalLLaMA/comments/18nvwqw/used_h2ogpt_with_llama_model_to_create/ | false | false | 23 | null | |
Some interesting visualizations based on expert firing frequencies in Mixtral MoE | 57 | As the title says, I was interested in understanding if and how the experts specialize for different topics so I created this [visualization](https://mixtral-moe-vis-d726c4a10ef5.herokuapp.com). The idea is to calculate how many times each expert is used during a forward pass for an input (averaged across all tokens) at each layer and use this data for visualization. I did PCA plots and trained a linear SVM classifier on top of these features, and found some interesting things:
1. Different topics like abstract algebra and biology use different combination of experts.
2. The accuracy of classifying topics with expert frequencies is consistently good (approx. 95%) for all layers. This was a bit surprising for me since I was expecting only later layers to specialize in niche topics like abstract algebra.
3. As expected because of the load balancing loss, the average frequency of experts across the entire dataset is equal (1/8th), but the experts do seem to specialize for certain topics.
4. I also tried to use the weights of the SVM classifier to steer the generation: start with biology topic and pick math experts to generate math content instead of biology. However, this doesn't seem to work. This means that it's not as simple as a single expert storing math information. Again, this kind of makes sense since there are only 8 experts and the load balancing loss forces experts to be picked equally.
I would be happy to add any new visualizations so feel free to suggest. | 2023-12-21T19:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/18nvdcr/some_interesting_visualizations_based_on_expert/ | chigur86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nvdcr | false | null | t3_18nvdcr | /r/LocalLLaMA/comments/18nvdcr/some_interesting_visualizations_based_on_expert/ | false | false | self | 57 | null |
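For anyone wanting to reproduce the frequency features: given the top-2 expert indices the router picks per token at a layer, the per-layer feature is just a normalized histogram. A stdlib sketch of that step (the input shape is an assumption about how you extract router choices):

```python
from collections import Counter

# Turn per-token top-2 expert choices at one layer into the 8-dim
# frequency feature used for the PCA/SVM plots. `choices` is a list of
# (expert_a, expert_b) pairs, one per token.

def expert_frequencies(choices, n_experts=8):
    counts = Counter(e for pair in choices for e in pair)
    total = sum(counts.values())  # = 2 * number of tokens
    return [counts.get(i, 0) / total for i in range(n_experts)]

tokens = [(0, 3), (0, 5), (3, 7)]
print(expert_frequencies(tokens))  # experts 0 and 3 each used twice: 2/6 each
```

Concatenating these 8-dim vectors across the 32 layers gives one feature vector per document, which is what the classifier and PCA then consume.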
Create a domain-specific model | 2 | Does anyone have any ideas on how to do pretraining on a specific domain?
And if yes, what is the minimum scale of data required for this?
Can I pretrain an LLM on a small set of documents? | 2023-12-21T19:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/18nva9c/create_a_domain_specific_model/ | kalpesh-mulye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nva9c | false | null | t3_18nva9c | /r/LocalLLaMA/comments/18nva9c/create_a_domain_specific_model/ | false | false | self | 2 | null |
(NEWBIE ALERT) What free GPT4ALL AI is user-friendly, like ChatGPT 3.5 but better, does web searches, and is completely uncensored. | 1 | **(NEW USER ALERT)**
Which user-friendly AI on **GPT4ALL** is similar to **ChatGPT**, uncomplicated, and capable of web searches like **Edge's Copilot**, but without censorship? I plan to use it for advanced comic book recommendations, finding answers and tutorials on the internet, and locating links to cracked games/books/comic books without it lecturing me about illegality the way the annoying **ChatGPT** and **Microsoft's Copilot** do.
​
I've installed **GPT4ALL** and added "**Wizard v1.2**", but it seems less powerful than **GPT-4**. However, I'm not inclined to pay for an upgrade.
​
If my explanation wasn't clear, I'm looking for a tool similar to ***DAN*** but superior, with web search capabilities, surpassing **GPT-3.5** but remaining free. I appreciate your help, and apologies if I'm in the wrong subreddit. Thank you to everyone, regardless of assistance. | 2023-12-21T19:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/18nv3wb/newbie_alert_what_free_gpt4all_ai_is_userfriendly/ | 144i | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nv3wb | false | null | t3_18nv3wb | /r/LocalLLaMA/comments/18nv3wb/newbie_alert_what_free_gpt4all_ai_is_userfriendly/ | false | false | default | 1 | null |
Explaining how Multimodal LLMs learn to generate images | 1 | [removed] | 2023-12-21T19:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/18nusmf/explaining_how_multimodal_llms_learn_to_generate/ | AvvYaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nusmf | false | null | t3_18nusmf | /r/LocalLLaMA/comments/18nusmf/explaining_how_multimodal_llms_learn_to_generate/ | false | false | self | 1 | null |
Github Copilot alternative with OpenAI-compatible API? | 14 | Is there an extension or website that is a code assistant on the level of Copilot that's open source? | 2023-12-21T19:16:17 | https://www.reddit.com/r/LocalLLaMA/comments/18nusls/github_copilot_alternative_with_openaicompatible/ | Alternative_Card_989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nusls | false | null | t3_18nusls | /r/LocalLLaMA/comments/18nusls/github_copilot_alternative_with_openaicompatible/ | false | false | self | 14 | null |
Will there ever be a local language model and interface that we can download on Steam? | 2 | Will there ever be a way I can go to the steam store, download a language model and GUI and then run it? | 2023-12-21T19:14:16 | https://www.reddit.com/r/LocalLLaMA/comments/18nuqv6/will_there_ever_be_a_local_language_model_and/ | Any_Bother6136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nuqv6 | false | null | t3_18nuqv6 | /r/LocalLLaMA/comments/18nuqv6/will_there_ever_be_a_local_language_model_and/ | false | false | self | 2 | null |
GitHub - uiuc-focal-lab/llm-priming-attacks | 4 | 2023-12-21T19:00:21 | https://github.com/uiuc-focal-lab/llm-priming-attacks | Elven77AI | github.com | 1970-01-01T00:00:00 | 0 | {} | 18nuepj | false | null | t3_18nuepj | /r/LocalLLaMA/comments/18nuepj/github_uiucfocallabllmprimingattacks/ | false | false | 4 | null |
Can you pair RTX 4090 with RTX 6000 Ada? | 2 | I have a machine with 1xRTX 4090 for finetuning relatively small models and was thinking to add one more GPU. There is a 2nd hand RTX 6000 Ada at sale from a nearly studio, at basically the same cost of a new 4090. Will it be possible to pair a 4090 with 6000 Ada and can they work in tandem in spite of being different models? Or will 2x4090 work better | 2023-12-21T18:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/18nucbq/can_you_pair_rtx_4090_with_rtx_6000_ada/ | bigdude404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nucbq | false | null | t3_18nucbq | /r/LocalLLaMA/comments/18nucbq/can_you_pair_rtx_4090_with_rtx_6000_ada/ | false | false | self | 2 | null |
Hardware specification | 1 | Hello guys,
Firstly I want to state that I am an absolute beginner in this space. As a programmer, AI caught my attention and I want to start getting into it.
Since I need new PC anyways I wanted to make sure to get hardware that is going to let me use AI tools and LLMs without much problems.
One spec I came across online is the following:
* 64 GB RAM
* GPU with 24GB+ VRAM
* 1TB SSD
* Some AMD processor which has good single core performance
Is this good enough hardware to run most popular and powerful models and do things in this space?
What suggestions do you have for me regarding the hardware specification?
What would I be able to run with this specification?
Initially I want to start and play with models like llama2 and dolphin-mixtral, also stable diffusion and similar models.
I would then like to fine-tune a model, my question here is: Is it worth it?
Can I even do anything meaningful on consumer hardware, keeping in mind that there are entire data centers dedicated just to this?
​
Thank you very much! | 2023-12-21T18:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/18nub5y/hardware_specification/ | kiIIuminatii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nub5y | false | null | t3_18nub5y | /r/LocalLLaMA/comments/18nub5y/hardware_specification/ | false | false | self | 1 | null |
Is anyone working on LLMs for graph tasks? | 11 | Hey everyone,
This is a long (somewhat comprehensive) post about using LLMs for graph-tasks. I just want to start a conversation, share resources, and generate interest in this topic amongst the community, because I think there are a lot of benefits in having LLMs with graph abilities, but not much systematic work (or attention in this sub) on this. For someone with more time and skill than me, I think there are a lot of valuable contributions still to be made here, that would perhaps be "low-hanging fruit" for some people in this community. Below is a wishlist of "graph abilities" for LLMs, based on thinking about this for a few months as a novice. I link to some resources I've found along the way, but I'm curious if there are models rigorously evaluated on, or fine-tuned for, these tasks:
​
1. **Text-to-Cypher**: LLMs could translate natural language into Cypher queries, bridging the gap between user language and the technical query language of graph databases. Are there any models fine-tuned for this, or any studies on their efficacy? Here's a [brief neo4j blog post](https://medium.com/neo4j/knowledge-graphs-llms-fine-tuning-vs-retrieval-augmented-generation-30e875d63a35) about this, where LLMs show some promise.
2. **Entity and Relation Resolution**: I think LLMs have a lot of potential to resolve duplications in named entities (e.g., "Peter Quill" and "Starlord" are the same person in Guardians of the Galaxy) and their relationships within data (e.g., "dating" is the same as "in a relationship with"). This could really improve the way we clean data, and populate and maintain knowledge graphs. Has anyone experimented with LLMs in this capacity? I haven't seen anything on this task, but I did use ChatGPT and Claude for entity resolution on 600 named corporate entities, and it did quite well. I can only imagine how much better LLMs could do with a more sophisticated workflow.
3. **Building Knowledge Graph Schemas and Ontologies**: The idea of automating schema and ontology generation with LLMs is another area that seems full of potential, given LLMs ability to take in so much context and spit out logical summaries. Are there any advancements in training LLMs for these tasks? I haven't seen anything on this task.
4. **Triplet Extraction for Knowledge Graph Completion**: Similarly, the use of LLMs for extracting subject-predicate-object triplets (e.g., "Peter Quill"-"is played by"-"Chris Pratt") from text could be a game-changer for building knowledge graphs and keeping them up-to-date. I'm curious if there's any focused work on employing LLMs for this task. That said, here's another preliminary [Neo4j blog post](https://medium.com/neo4j/construct-knowledge-graphs-from-unstructured-text-877be33300a2) on this topic.
5. **Dataset creation**: You can create a knowledge graph with structured/tabular data, and then could have a RAG-LLM use that to generate a text dataset describing the structured data. This could be used for further fine-tuning, perhaps improving the model's ability to query and maintain the knowledge graph. I say more about this in my 1st P.S. below.
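To make #4 concrete, here's a minimal, model-agnostic sketch of turning an LLM's free-text answer into graph-ready triplets — the "subject | predicate | object" response format below is just one I've made up for illustration, not an established standard:

```python
def parse_triplets(llm_output: str) -> list[tuple[str, str, str]]:
    """Parse 'subject | predicate | object' lines from an LLM response,
    skipping anything that doesn't fit the expected shape."""
    triplets = []
    for line in llm_output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triplets.append(tuple(parts))
    return triplets

# Hypothetical model response for the triplet-extraction task:
response = """Peter Quill | is played by | Chris Pratt
Peter Quill | also known as | Starlord
(no other facts found)"""
print(parse_triplets(response))
# → [('Peter Quill', 'is played by', 'Chris Pratt'), ('Peter Quill', 'also known as', 'Starlord')]
```

The interesting part is everything around this, of course (prompting, deduplication against the existing graph), but a strict parser like this is what keeps malformed generations out of the graph.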
Any shared insights/resources would be greatly appreciated! I'd also be interested in collaborating with anyone on related projects; but as a novice, I'd be looking for more of a mentor than an equal collaborator. Still, I can code, I'm familiar with LLMs and working with text data, and I could devote a few hours every week for a serious project.
P.S. #1 - B/c I know how much this community cares about roleplay (RP), I thought I'd just briefly mention that I think Knowledge Graphs could help with RP and fictional universes. You could use a Knowledge Graph to store facts about your world/character, and your character/author can query/add to the graph to ensure consistency across responses and even LLM instances. Essentially, a knowledge graph could be a great way of "saving" the world you build for later use. Here's a [demo](https://siwei.io/en/demos/graph-rag/) (not mine) of how this could work with the Guardians of the Galaxy world. You could potentially even use such a KG-RAG-LLM to generate a corpus of sentences describing your world, and fine-tune on that for an even better RP experience.
P.S. #2 - The most comprehensive work I've found on these topics is [Neo4j's NaLLM repo on github](https://github.com/neo4j/NaLLM?tab=readme-ov-file). | 2023-12-21T18:51:47 | https://www.reddit.com/r/LocalLLaMA/comments/18nu7jl/is_anyone_working_on_llms_for_graph_tasks/ | empirical-sadboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nu7jl | false | null | t3_18nu7jl | /r/LocalLLaMA/comments/18nu7jl/is_anyone_working_on_llms_for_graph_tasks/ | false | false | self | 11 | null |
Launching AgentSearch - A local search engine for your LLM agent | 89 | Hey everyone,
I've been part of this community for a while and have gained a lot from your insights and discussions. Today, I'm excited to share a project I've been working on called AgentSearch. The idea behind this is to make the vast scope of human knowledge more accessible to LLM agents.
​
We've started by embedding content from sources like Wikipedia, Arxiv, and filtered common crawl. The result is a massive database of over 1 billion embedding vectors. The dataset will be released to the public, but right now I am working out logistics around hosting the 4 TB+ database.
You can check out the search engine at [search.sciphi.ai](https://search.sciphi.ai). I'm also sharing the source code for the search engine at [github.com/SciPhi-AI/agent-search](https://github.com/SciPhi-AI/agent-search), so anyone who wants to can replicate this locally.
​
Another part of this project is the release of a model called Sensei, which is tailored for search tasks. It's trained to provide accurate and reliable responses and to return the result in JSON format. You can find Sensei on [HuggingFace](https://huggingface.co/SciPhi/Sensei-7B-V1).
​
This project represents a big step forward for open embedding datasets, building on new initiatives like RedPajama. With Sensei, we're aiming to offer a tool that can handle search-based queries effectively, making it a useful resource for researchers and general users. Sensei is available for download, and you can also access it via a hosted API. There's more detailed information in the [documentation](https://agent-search.readthedocs.io/en/latest/api/main.html).
​
AgentSearch and Sensei will be valuable for the open source community, especially in scenarios where you need to perform a large number of search queries. The dataset is big and we plan to keep expanding it, adding more key sources relevant to LLM agents. If you have any suggestions for what sources to include, feel free to reach out.
​
I'm looking forward to hearing what you think about this project and seeing how it might be useful in your own work or research!
Thanks again. | 2023-12-21T18:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/18ntozg/launching_agentsearch_a_local_search_engine_for/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ntozg | false | null | t3_18ntozg | /r/LocalLLaMA/comments/18ntozg/launching_agentsearch_a_local_search_engine_for/ | false | false | self | 89 | null |
Instead of trying to make our own base models, the open source community should focus on creating training datasets | 1 | [removed] | 2023-12-21T18:16:14 | https://www.reddit.com/r/LocalLLaMA/comments/18ntdw0/instead_of_trying_to_make_our_own_base_models_the/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ntdw0 | false | null | t3_18ntdw0 | /r/LocalLLaMA/comments/18ntdw0/instead_of_trying_to_make_our_own_base_models_the/ | false | false | self | 1 | null |
How do I run llama.cpp on GPU in NixOS with the flake? | 1 | Hi! I use the latest unstable version of NixOS and I was wondering: how do I make llama.cpp use the GPU with the flake? | 2023-12-21T18:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/18nt7bh/how_do_i_run_llama_cpp_on_gpu_in_nixos_with_the/ | Kol1e | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nt7bh | false | null | t3_18nt7bh | /r/LocalLLaMA/comments/18nt7bh/how_do_i_run_llama_cpp_on_gpu_in_nixos_with_the/ | false | false | default | 1 | null |
Need help with a dynamic RAG problem | 1 | I want to build a chatbot on a very large database — imagine a very big book on shirts. It just lists 1000s of shirts and a description of each, which looks like this:
```
--------------
Shirt id: 1
Material: Cotton
Rank: 5
--------------
Shirt id: 2
Material: Denim
Rank: 3
...
```
​
Now the chatbot should be able to answer questions like how many shirts are made of cotton, which shirt was the bestseller for the last 5 years and so on.
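One pattern that handles these aggregate questions better than plain vector retrieval is to parse the records into structured rows once, and let the chatbot answer by running code over them (text-to-code rather than pure RAG). A rough stdlib sketch of the parsing step, assuming the record format above:

```python
def parse_shirts(book_text: str) -> list[dict]:
    """Split the '---' delimited records into dicts like {'Shirt id': '1', ...}."""
    shirts = []
    for block in book_text.split("--------------"):
        fields = dict(
            line.split(":", 1) for line in block.strip().splitlines() if ":" in line
        )
        if fields:
            shirts.append({k.strip(): v.strip() for k, v in fields.items()})
    return shirts

book = """--------------
Shirt id: 1
Material: Cotton
Rank: 5
--------------
Shirt id: 2
Material: Denim
Rank: 3"""

shirts = parse_shirts(book)
cotton_count = sum(s["Material"] == "Cotton" for s in shirts)
print(cotton_count)  # → 1
```

Then the LLM's job shrinks to translating "how many shirts are made of cotton" into a query over `shirts`, which is much more reliable than hoping retrieval surfaces every relevant record.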
​
Note that this information is not directly stated in the book; answering requires some on-the-fly calculation. How can I achieve this? | 2023-12-21T18:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/18nt3z3/need_help_with_a_dynamic_rag_problem/ | todaysgamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nt3z3 | false | null | t3_18nt3z3 | /r/LocalLLaMA/comments/18nt3z3/need_help_with_a_dynamic_rag_problem/ | false | false | self | 1 | null |
Chat with Company Data Project | 1 | Looking for the best and easy to setup chat with data solution, with a PDF upload feature on the front end (chainlit, streamlit) and would like to chat with company documents using huggingface llms.
What are best projects out there, all help is appreciated. | 2023-12-21T17:57:46 | https://www.reddit.com/r/LocalLLaMA/comments/18nsyoo/chat_with_company_data_project/ | ontheportco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nsyoo | false | null | t3_18nsyoo | /r/LocalLLaMA/comments/18nsyoo/chat_with_company_data_project/ | false | false | self | 1 | null |
Do you know what makes Guidance slower than Ollama for Mistral? | 1 | I'll put the code for my program in a comment.
It's a benchmark that pits Microsoft's Guidance against an Ollama sub-process.
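For context, the harness is basically this shape — the two generate functions here are placeholders for the real Guidance program and Ollama subprocess call, not my actual code:

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Placeholders: in the real benchmark one side runs a guidance program
# and the other shells out via subprocess to `ollama run mistral ...`.
def ollama_generate(prompt):
    return f"ollama: {prompt}"

def guidance_generate(prompt):
    return f"guidance: {prompt}"

_, t_ollama = timed(ollama_generate, "same prompt")
_, t_guidance = timed(guidance_generate, "same prompt")
assert t_ollama >= 0 and t_guidance >= 0
```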
Ollama operation time: 11.2 seconds
Guidance operation time: 22.8 seconds | 2023-12-21T17:38:56 | https://www.reddit.com/r/LocalLLaMA/comments/18nsj3a/do_you_know_what_makes_guidance_slower_than/ | prescod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nsj3a | false | null | t3_18nsj3a | /r/LocalLLaMA/comments/18nsj3a/do_you_know_what_makes_guidance_slower_than/ | false | false | self | 1 | null |
Unable to start llama runner because of /tmp permissions | 1 | Trying an Ollama proof of concept on RHEL, our server policy is to deny execution on /tmp directory where Ollama installs and then starts the llama runner.
Is there any way to specify a directory other than /tmp? | 2023-12-21T17:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/18nsac4/unable_to_start_llama_runner_because_of_tmp/ | screenpop550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nsac4 | false | null | t3_18nsac4 | /r/LocalLLaMA/comments/18nsac4/unable_to_start_llama_runner_because_of_tmp/ | false | false | self | 1 | null |
TinyLlama is not the brightest | 1 | ​
[1 + 1 = 3](https://preview.redd.it/ejskavailo7c1.png?width=1294&format=png&auto=webp&s=e85f00bed3ce7008ec2b16e2e15fb5659e35ae94) | 2023-12-21T17:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/18ns0qr/tinyllama_is_not_the_brightest/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ns0qr | false | null | t3_18ns0qr | /r/LocalLLaMA/comments/18ns0qr/tinyllama_is_not_the_brightest/ | false | false | 1 | null | |
MLX vs llama.cpp on MacOS | 1 | [removed] | 2023-12-21T16:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/18nrjrw/mlx_vs_llamacpp_on_macos/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nrjrw | false | null | t3_18nrjrw | /r/LocalLLaMA/comments/18nrjrw/mlx_vs_llamacpp_on_macos/ | false | false | self | 1 | null |
Mistral 7B Fine-tune Optimized | 31 | Coming from the Reddit ML community, but think you guys might like this. Proud of the openpipe.ai team. | 2023-12-21T16:37:06 | https://openpipe.ai/blog/mistral-7b-fine-tune-optimized | HybridRxN | openpipe.ai | 1970-01-01T00:00:00 | 0 | {} | 18nr4ly | false | null | t3_18nr4ly | /r/LocalLLaMA/comments/18nr4ly/mistral_7b_finetune_optimized/ | false | false | default | 31 | null |
Is that just a coincidence ? | 1 | Hi guys, my tuned model has twice the number of parameters as the base model (base: 13B, mine: 26B). Is that just a coincidence, or should it be like that? I'm using LoRA, by the way. | 2023-12-21T16:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/18nqweh/is_that_just_a_coincidence/ | moreromem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nqweh | false | null | t3_18nqweh | /r/LocalLLaMA/comments/18nqweh/is_that_just_a_coincidence/ | false | false | self | 1 | null |
Will this PC run Falcon 180B and Mistral models? | 2 | *HP OMEN GAMING PC*
*PROCESSOR*
10th Generation Intel Core i7-10750H (2.6 GHz base frequency, up to 5.0 GHz with Intel Turbo Boost Technology, 12 MB L3 cache, 6 cores)
*OPERATING SYSTEM*
Windows 10 Home 64-bit
*DISPLAY*
15.6-inch diagonal Full HD (1920 x 1080) IPS anti-glare micro-edge WLED-backlit, 250 nits, 45% NTSC
*GRAPHICS*
NVIDIA GeForce RTX 3070 (8 GB GDDR6 dedicated)
*MEMORY*
32GB DDR4-2933 SDRAM
*STORAGE*
1TB PCIe NVMe M.2 SSD
*NETWORK INTERFACE*
Integrated 10/100/1000 GbE LAN, Intel Wi-Fi 6 AX 200 (2x2) and Bluetooth 5 Combo (Supporting Gigabit file transfer speeds)
*PORTS*
1x SuperSpeed USB Type-C (10Gbps signaling rate, DisplayPort 1.4, HP Sleep and Charge), 1x SuperSpeed USB Type-A (5Gbps signaling rate, HP Sleep and Charge), 2x SuperSpeed USB Type-A (5Gbps signaling rate), 1x Mini DisplayPort, 1x HDMI 2.0a, 1x RJ-45, 1x AC smart pin, 1x headphone/microphone combo
*AUDIO*
Bang & Olufsen, dual speakers, HP Audio Boost 2.0
*WEBCAM*
HP Wide Vision HD Camera with integrated dual array digital microphone
*KEYBOARD*
Full-size, 4-zone backlit keyboard with numeric keypad and 26-Key Rollover Anti-Ghosting Key technology
*TOUCHPAD*
Precision Touchpad Support
*BATTERY*
6-cell, 70.9 Wh Li-ion polymer
*DIMENSIONS*
(W x D x H): 14.17 x 10.35 x 0.79 inch | 2023-12-21T16:23:23 | https://www.reddit.com/r/LocalLLaMA/comments/18nqtip/will_this_pc_run_falcon_180b_and_mistal_models/ | 360truth_hunter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nqtip | false | null | t3_18nqtip | /r/LocalLLaMA/comments/18nqtip/will_this_pc_run_falcon_180b_and_mistal_models/ | false | false | self | 2 | null |
What's your 2024 AI Predictions? | 80 | I'll start with mine
​
1. Rise of Small Models
2. Model Merges
3. Multimodal models everywhere
4. AI Agents | 2023-12-21T16:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/18nqo9h/whats_your_2024_ai_predictions/ | dulldata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nqo9h | false | null | t3_18nqo9h | /r/LocalLLaMA/comments/18nqo9h/whats_your_2024_ai_predictions/ | false | false | self | 80 | null |
What is the best phi 2 fine-tune? | 15 | Looking for good phi 2 fine tune in gguf format | 2023-12-21T15:50:08 | https://www.reddit.com/r/LocalLLaMA/comments/18nq1u6/what_is_the_best_phi_2_finetune/ | Killerx7c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nq1u6 | false | null | t3_18nq1u6 | /r/LocalLLaMA/comments/18nq1u6/what_is_the_best_phi_2_finetune/ | false | false | self | 15 | null |
Best custom model inference API? | 3 | Hi,
I'm setting up a custom coding AI assistant with [continue.dev](https://continue.dev) on VSCode.
From my research here it seems that DeepSeek is the best open source model for coding.
So I have deployed an inference endpoint with [https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct). I have spent 0.63 Euros basically without using it because of setup problems on the VSCode extension. It seems that creating an inference endpoint rents dedicated GPUs from AWS. I just want to pay based on actual usage, since I'm not doing heavy work.
Is HuggingFace the cheapest way to address this? Can't I pay on a per-token basis instead, while still choosing my model of choice? For example, going for the 33B.
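For what it's worth, HF also has a serverless Inference API (pay-per-request rather than renting a dedicated endpoint) that you hit with a bearer token. A sketch that only builds the request without sending it — the model id and parameters are illustrative, so check HF's docs for current pricing/limits:

```python
import json
import urllib.request

# Serverless endpoint pattern: api-inference.huggingface.co/models/<model-id>
API_URL = "https://api-inference.huggingface.co/models/deepseek-ai/deepseek-coder-6.7b-instruct"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build (but don't send) a POST request for the serverless API."""
    payload = json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 256}})
    return urllib.request.Request(
        API_URL,
        data=payload.encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )

req = build_request("write a quicksort in Python", "hf_xxx")
assert req.get_header("Authorization") == "Bearer hf_xxx"
# urllib.request.urlopen(req) would actually send it
```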
Sorry I'm kinda new to this ecosystem, I'm a backend engineer, not AI expert :/ | 2023-12-21T15:40:06 | https://www.reddit.com/r/LocalLLaMA/comments/18nptol/best_custom_model_inference_api/ | random2819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nptol | false | null | t3_18nptol | /r/LocalLLaMA/comments/18nptol/best_custom_model_inference_api/ | false | false | self | 3 | null |
Crypto coin where the miners do open LLM training and inference. | 2 | Would it work? Has anyone done it? | 2023-12-21T15:28:00 | https://www.reddit.com/r/LocalLLaMA/comments/18npk8c/crypto_coin_where_the_miners_do_open_llm_training/ | shortylongs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18npk8c | false | null | t3_18npk8c | /r/LocalLLaMA/comments/18npk8c/crypto_coin_where_the_miners_do_open_llm_training/ | false | false | self | 2 | null |
How to distribute LLM apps and UI/UX? | 4 | Pretty new to the space but I'm a programmer. I see a lot of YouTube tutorials using Langchain, Ollama, and 50 other tools. Seems easy to build my own app.
How are these apps packaged and distributed so other people can install and interact with them? What are the best frameworks to use for this? How do you build a custom UI? Looking for best practices or shortest path to something basic.
For example: do you use Langchain and launch a Web UI from a local nodejs server that lets users modify their documents for RAG or swap models?
Thanks for any helpful responses. | 2023-12-21T15:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/18np956/how_to_distribute_llm_apps_and_uiux/ | mattlock1984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18np956 | false | null | t3_18np956 | /r/LocalLLaMA/comments/18np956/how_to_distribute_llm_apps_and_uiux/ | false | false | self | 4 | null |
Roleplaying as an evaluation method | 19 | Hey :)
I've been getting into LLaMA recently, and noticed that a lot of the benchmarks seem to be knowledge exams, or subjective tests by humans. Inevitably in a few months these benchmarks are assimilated into the training data, rendering them unhelpful.
I've also noticed that there are a lot of popular AI chat platforms, like Character AI. People roleplay and talk with these bots, and if a response is incoherent, the user is able to edit or regenerate it. For example, I tried out a character who turned invisible; in their next message, they started talking as if I was able to see them, so I had to regenerate the response until their reply actually made sense.
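If a platform logged those per-message events, the metric itself would be trivial to compute — a sketch, assuming a simple hypothetical event log where each AI message ends up accepted, edited, or regenerated:

```python
def intervention_rate(events: list[dict]) -> float:
    """Fraction of AI messages the user regenerated or edited.

    Each event is assumed to look like:
    {"message_id": 1, "action": "regenerate" | "edit" | "accept"}
    """
    if not events:
        return 0.0
    flagged = sum(e["action"] in ("regenerate", "edit") for e in events)
    return flagged / len(events)

log = [
    {"message_id": 1, "action": "accept"},
    {"message_id": 2, "action": "regenerate"},
    {"message_id": 3, "action": "edit"},
    {"message_id": 4, "action": "accept"},
]
print(intervention_rate(log))  # → 0.5
```

A lower rate per model (measured blind, across the same user base) would be the actual benchmark number.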
Could this be used to measure the quality of an AI, at least in regard to its storytelling and coherence? If we can measure the % of AI messages that are re-generated or edited by users, it could be an interesting benchmark source without a lot of effort or disclosing user data. It'd only hinge on a popular chat platform implementing these metrics and making them available to the public, as well as supporting different AI models to be able to compare them (preferably invisibly to the user to avoid bias). | 2023-12-21T15:01:30 | https://www.reddit.com/r/LocalLLaMA/comments/18nozkv/roleplaying_as_an_evaluation_method/ | DreamingInfraviolet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nozkv | false | null | t3_18nozkv | /r/LocalLLaMA/comments/18nozkv/roleplaying_as_an_evaluation_method/ | false | false | self | 19 | null |
LLaMa 2 lang - finetune LLaMa2 chat to any language | 1 | [removed] | 2023-12-21T14:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/18novps/llama_2_lang_finetune_llama2_chat_to_any_language/ | UnderstandLingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18novps | false | null | t3_18novps | /r/LocalLLaMA/comments/18novps/llama_2_lang_finetune_llama2_chat_to_any_language/ | false | false | self | 1 | null |
Mixtral above Claude 2.1. Does this ranking match your experience? | 208 | 2023-12-21T14:31:38 | atgctg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18noca8 | false | null | t3_18noca8 | /r/LocalLLaMA/comments/18noca8/mixtral_above_claude_21_does_this_ranking_match/ | false | false | 208 | null |
LLM imitation in a foreign language (Czech) | 6 | Hi everyone,
I want to teach LLM to mimic a specific character in a foreign language (Czech), using text data from YouTube videos of a famous person in my country. Should I train the model from scratch with a custom tokenizer, fine-tune a model that already includes the language (Czech, which is usually underrepresented), or use the RAG method with fine-tuning? What's your recommended approach and what models could be useful for this use-case?
Also, if you know some repos, similar to this, feel free to share a link. | 2023-12-21T14:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/18no49p/llm_imitation_in_a_foreign_language_czech/ | Snoo62259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18no49p | false | null | t3_18no49p | /r/LocalLLaMA/comments/18no49p/llm_imitation_in_a_foreign_language_czech/ | false | false | self | 6 | null |
Suggestion for a S2ST models | 2 | I have some parallel audio data (not much) and want to train an S2ST model with it. I'd like some suggestions for resources that can be used in two scenarios:
1. Pre-training a model only on the limited data, i.e. without fine-tuning a pretrained model.
2. Fine-tuning a pretrained model to get the best performance.
I have parallel audio for English and some Indian languages like Hindi, Bengali and Tamil. I am asking because speech is a new domain to me (\^\_\^) | 2023-12-21T14:16:32 | https://www.reddit.com/r/LocalLLaMA/comments/18no0zl/suggestion_for_a_s2st_models/ | yashBhaskar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18no0zl | false | null | t3_18no0zl | /r/LocalLLaMA/comments/18no0zl/suggestion_for_a_s2st_models/ | false | false | self | 2 | null |
Deploy LLMs at the edge | 4 | Hi all,
I'm curious how you deploy your LLMs to Android/iOS smartphones. Currently I see a few options:
1. Convert pytorch/tf model to onnx and run everywhere (supports various backends)
2. Directly use PyTorch Mobile (it's still in beta and a bit buggy)
3. use llama.cpp/kobold
but maybe I'm missing something. What's your favourite way to deploy LLMs? | 2023-12-21T14:12:56 | https://www.reddit.com/r/LocalLLaMA/comments/18nnybu/deploy_llms_at_the_edge/ | YYY_333 | self.LocalLLaMA | 2023-12-21T14:16:44 | 0 | {} | 18nnybu | false | null | t3_18nnybu | /r/LocalLLaMA/comments/18nnybu/deploy_llms_at_the_edge/ | false | false | self | 4 | null |
Deploy LLMs on edge devices | 1 | Hi all,
I'm curious how you run your models on edge devices, in particular Android/iOS smartphones with GPU acceleration.
Currently I see few options:
1) convert from pytorch/tf to onnx and run everywhere you want
2) directly use PyTorch Mobile (but it is still in beta and a bit buggy)
3) use llama.cpp/kobold
What options would you add and what's your favourite way to deploy models? | 2023-12-21T14:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/18nnomg/deploy_llms_on_edge_devices/ | YYY_333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nnomg | false | null | t3_18nnomg | /r/LocalLLaMA/comments/18nnomg/deploy_llms_on_edge_devices/ | false | false | self | 1 | null |
How does MLX work? Will we be able to run quants? | 9 | Hi all,
So I wanted to try out the new MLX platform, so I went and got it all set up on my M1 Max 64 GB. I initially got Phi-2 working and it ran just fine, with very little memory pressure or even memory used to get it running. And then I decided to get way over-ambitious and went to try and run Mixtral - seeing as I'd run it in Q8 before in Textgen-webui and Llama.cpp, I figured, "hey, why not try it?"
The RAM spiked to 55gb / 64gb, and then it started filling out swap. This steadily climbed to 26gb, at which point promptly crashed, giving the error:
"libc++abi: terminating due to uncaught exception of type std::runtime\_error: \[malloc\_or\_wait\] Unable to allocate 33554432 bytes" (notice this is 33 million bytes, so mega bytes, not giga bytes - so I don't really understand this error seeing as swap was already well past 33mb)
So I figure this was just trying to run full-fat Mixtral and I simply don't have the memory for it. Fair enough. But that got me thinking - can MLX even run quants? I am painfully ignorant around all this stuff, but I have to assume it can or will be able to at some point. And I also have lots of just really stupid questions like: is MLX just a complete replacement for Llama.cpp? Or are they compatible in any way? Is one of MLX's biggest features the fact that it can use swap? If so, then why did my instance of Mixtral stop when it got to 26 GB of swap? Is there a cap on swap, like how there's a default cap of 2/3rds usable RAM? Will the same bits of research into speeding up inference on all our other frameworks be applicable to MLX, or is it just so entirely different since it targets the whole unified memory architecture thing?
Thanks for reading my rant :) | 2023-12-21T13:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/18nnlr7/how_does_mlx_work_will_we_be_able_to_run_quants/ | OldAd9530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nnlr7 | false | null | t3_18nnlr7 | /r/LocalLLaMA/comments/18nnlr7/how_does_mlx_work_will_we_be_able_to_run_quants/ | false | false | self | 9 | null |
Running LLM on Fugaku Supercomputer | 1 | I was wondering if Fugaku supercomputer can run biggest LLM model or not ? Anyone has experiences on Fugaku ? Coz in my university we can submit Fugaku access request via research proposal so I wonder if it can run LLM or not for our research | 2023-12-21T13:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/18nnj9r/running_llm_on_fugaku_supercomputer/ | laveriaroha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nnj9r | false | null | t3_18nnj9r | /r/LocalLLaMA/comments/18nnj9r/running_llm_on_fugaku_supercomputer/ | false | false | self | 1 | null |
Building an LLM rating platform and need criteria suggestions for users to pick the best model. What terms would be clear and useful? Thoughts? thanks in advanced | 115 | 2023-12-21T13:36:31 | Wonderful-Ad-5952 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18nn78x | false | null | t3_18nn78x | /r/LocalLLaMA/comments/18nn78x/building_an_llm_rating_platform_and_need_criteria/ | false | false | 115 | {'enabled': True, 'images': [{'id': '3deREOoOxT-pPZ45k0mRoC-siJDOPlR0mCxKaKWqVPc', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?width=108&crop=smart&auto=webp&s=85f890b1ba2042dca41b76f4ac1f2ee7ba517e2e', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?width=216&crop=smart&auto=webp&s=76e5afd1d29a8daf877454fcb5be419de0056cce', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?width=320&crop=smart&auto=webp&s=4c404091f3ade0eb2485046d117455b6d85e5bcf', 'width': 320}, {'height': 328, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?width=640&crop=smart&auto=webp&s=d1aeb1deb128a30cce4c350e1ab0a1d93f3a0eeb', 'width': 640}, {'height': 492, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?width=960&crop=smart&auto=webp&s=178e6864fec9bbceba2a7df663f7ca1fff287651', 'width': 960}, {'height': 554, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?width=1080&crop=smart&auto=webp&s=2c4b89e21e0e0ce8841e8b0df50e889a9f7a51a9', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/i9apxr4ghn7c1.png?auto=webp&s=49234af60a45e800d9b589682a627dad88110a19', 'width': 1994}, 'variants': {}}]} | |||
Story to Song and Art by GPT4, Suno, AudioGEN, Dalle3, SD1.5. | 1 | Here is a small project I made today.
A small generation chain:
\-GPT4: adapting a short story into a rap song (of the same context), creating prompts for the art.
\-Suno, AudioGEN: voice generation based on text, music generation, effects.
\-Dalle3: generation of contextual art and the cover (based on GPT4 prompts).
\-SD1.5: post-processing of the art.
\-Mastering: by human (dang it).
Song: [https://drive.google.com/file/d/1\_MT3O7qKoUJOy1uOJqeHRC0tL1w5Ve5R/view?usp=sharing](https://drive.google.com/file/d/1_MT3O7qKoUJOy1uOJqeHRC0tL1w5Ve5R/view?usp=sharing)
https://preview.redd.it/pgivdapmhn7c1.jpg?width=1024&format=pjpg&auto=webp&s=a0280ac17072e70123283953d39fe6fc3bf54968
https://preview.redd.it/cbyik9pmhn7c1.png?width=1024&format=png&auto=webp&s=855dd29b5944176807428ec4aed41d10c40ada93
https://preview.redd.it/mqt4fapmhn7c1.png?width=1024&format=png&auto=webp&s=85b9fd911580080ebbbb76c4d9f36e4ce066ba33
See you in 2024!
Cheers! | 2023-12-21T13:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/18nn5lr/story_to_song_and_art_by_gpt4_suno_audiogen/ | -Ellary- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nn5lr | false | null | t3_18nn5lr | /r/LocalLLaMA/comments/18nn5lr/story_to_song_and_art_by_gpt4_suno_audiogen/ | false | false | 1 | null | |
Fine-tuning of language models | 1 | [removed] | 2023-12-21T13:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/18nmp7z/finetuning_of_language_models/ | omni7894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nmp7z | false | null | t3_18nmp7z | /r/LocalLLaMA/comments/18nmp7z/finetuning_of_language_models/ | false | false | self | 1 | null |
Faster nvidia mixtral prompt processing has arrived | 37 | https://github.com/kalomaze/koboldcpp/releases
I will be waiting for the merge in llamacpp but this link apparently has some other interesting features like noisy sampling and dynamic temp that I didn't try yet. | 2023-12-21T13:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/18nmor2/faster_nvidia_mixtral_prompt_processing_has/ | ambient_temp_xeno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nmor2 | false | null | t3_18nmor2 | /r/LocalLLaMA/comments/18nmor2/faster_nvidia_mixtral_prompt_processing_has/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'E6T-1sZUBi7X8RXLdKrlI3srwaHXkiN5SGQ0Ax7jEC4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?width=108&crop=smart&auto=webp&s=049938bcb972e480f889a02c2d1b2e02d3dbb90b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?width=216&crop=smart&auto=webp&s=c94fdcc860e8973dfbc83e67e89da9d35fa6f9eb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?width=320&crop=smart&auto=webp&s=abe73e04f2bc711d4db439acb3cb735d7add2957', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?width=640&crop=smart&auto=webp&s=b62186ef8ca30e582c6b714ef19ff8acbedd9c91', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?width=960&crop=smart&auto=webp&s=7ff74d08545e6d9f97dc5056cff6c0669761e0e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?width=1080&crop=smart&auto=webp&s=4122ac71c2708c23187ab8000a14de277e21e01c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bzYJn7DaDzWp0aP4zWmzdrYV5dtQ84maNSia_TAjcYg.jpg?auto=webp&s=484e32d507f4ae786a1dfa76260faee1a2291c56', 'width': 1200}, 'variants': {}}]} |
Emu2 LMM 37B - Generative Multimodal Models are In-Context Learners | 41 | HF: [https://huggingface.co/BAAI/Emu2-Gen](https://huggingface.co/BAAI/Emu2-Gen)
Paper: Generative Multimodal Models are In-Context Learners [https://arxiv.org/abs/2312.13286](https://arxiv.org/abs/2312.13286) | 2023-12-21T13:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/18nmnxk/emu2_lmm_37b_generative_multimodal_models_are/ | WaterdanceAC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nmnxk | false | null | t3_18nmnxk | /r/LocalLLaMA/comments/18nmnxk/emu2_lmm_37b_generative_multimodal_models_are/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'KKnWbEKOldwy1G-um-ncIdeP8aYAkzoE5mIuPX8Q5No', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?width=108&crop=smart&auto=webp&s=52412dd129d4bd3d060b925995f6eda124bc1949', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?width=216&crop=smart&auto=webp&s=4157ee2a2bc3d49456b1ef500893d887ec15f7be', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?width=320&crop=smart&auto=webp&s=a47f846421b180330ee07ab5f36d9fa93e21fab5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?width=640&crop=smart&auto=webp&s=25d9cba0bb28e45515d2bc733074f1ffd7e4f905', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?width=960&crop=smart&auto=webp&s=4a448305164f8b515cb8e33463e49356fdc2a3b0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?width=1080&crop=smart&auto=webp&s=53df0a3f1d1177c9ba47cf008db86724afe32834', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/O1lwdqTILQQtjY0xTeExVLOORCjxCOO1f3y0y0ipnkw.jpg?auto=webp&s=9a97e694ec07598dab664080075981f787802638', 'width': 1200}, 'variants': {}}]} |
Chinchilla paper explained | 1 | 2023-12-21T12:55:26 | https://rnikhil.com/2023/11/28/llm-scaling.html | Excellent-Effect237 | rnikhil.com | 1970-01-01T00:00:00 | 0 | {} | 18nmepe | false | null | t3_18nmepe | /r/LocalLLaMA/comments/18nmepe/chinchilla_paper_explained/ | false | false | default | 1 | null | |
what's the best thing i can run (if i can at all) ? | 5 | I have a literal potato: an i7-4790 with 10 GB of RAM and no GPU.
What's the best model I can run? And is oobabooga the best frontend to use? (Again, if I can even run anything at all.) | 2023-12-21T12:45:11 | https://www.reddit.com/r/LocalLLaMA/comments/18nm7u7/whats_the_best_thing_i_can_run_if_i_can_at_all/ | Suspicious-Bottle-14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nm7u7 | false | null | t3_18nm7u7 | /r/LocalLLaMA/comments/18nm7u7/whats_the_best_thing_i_can_run_if_i_can_at_all/ | false | false | self | 5 | null |
Is there a local a.i calculator? Not sure if this is the right sub to ask | 1 | [removed] | 2023-12-21T12:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/18nm763/is_there_a_local_ai_calculator_not_sure_if_this/ | zikamit111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nm763 | false | null | t3_18nm763 | /r/LocalLLaMA/comments/18nm763/is_there_a_local_ai_calculator_not_sure_if_this/ | false | false | self | 1 | null |
A benchmark for developing teaching LLMs | 3 | I suggest a benchmark for summarizing ability. It requires a good grading LLM, though.
In the benchmark, the testee is given a text and is required to write a summary or other explanatory text. Then, with the testee's context memory cleared, it is given its own output and asked a bunch of questions about the original text. The tester evaluates the answers.
A problem with this test is that the testee may answer well because of embedded knowledge. Another potential problem is that a testee that speaks efficient code language for itself may excel.
Therefore, probably the answering LLM should be standardized. I don't know how well the answering LLM can model a human student, but if it can, this benchmark might lead to LLMs that can teach. Multiple "target reader models" could be created, like "5-year-old" and high schooler and college student and so on. That way, we could create teaching LLMs that are specifically suited for different human readers! | 2023-12-21T12:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/18nls51/a_benchmark_for_developing_teaching_llms/ | SpecialNothingness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nls51 | false | null | t3_18nls51 | /r/LocalLLaMA/comments/18nls51/a_benchmark_for_developing_teaching_llms/ | false | false | self | 3 | null |
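As a rough illustration, the protocol above could be wired up like this (a hypothetical sketch: the three callables are stand-ins for the testee, the standardized reader model, and the grading LLM; real implementations would call actual models):

```python
from typing import Callable, List

def summarization_benchmark(
    text: str,
    questions: List[str],
    summarize: Callable[[str], str],          # testee LLM: writes the summary
    answer: Callable[[str, str], str],        # standardized reader: answers from summary ONLY
    grade: Callable[[str, str, str], float],  # grading LLM: scores answer against source
) -> float:
    """Score how well a summary preserves what's needed to answer questions."""
    summary = summarize(text)                 # step 1: testee summarizes the source
    scores = []
    for q in questions:                       # step 2: reader answers using only the summary
        a = answer(summary, q)
        scores.append(grade(text, q, a))      # step 3: grader checks against the source
    return sum(scores) / len(scores)

# Toy stubs just to show the flow (real LLM calls would replace these lambdas):
text = "Alice met Bob in Paris. They found a key."
qs = ["Where did Alice meet Bob?"]
score = summarization_benchmark(
    text, qs,
    summarize=lambda t: t[:30],
    answer=lambda s, q: "Paris" if "Paris" in s else "unknown",
    grade=lambda t, q, a: 1.0 if a in t else 0.0,
)
print(score)
```

Swapping in different `answer` stubs would implement the "target reader models" idea (5-year-old, high schooler, etc.).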
Anyone else video out issues with the latest ROCm? | 2 | I'm running Ubuntu Linux with a 7900xtx. Since the latest ROCm release, after some time serving a model with llama.cpp, I get massive screen artifacts on one of my monitors. The kind you get when a GPU driver is massively buggy or when a GPU is dying. Restarting my computer or logging out immediately fixes the issue, so I'm not worried about my GPU being potentially broken here.
Anyone else experiencing this issue? Because of it I have to run my models using CLBlast in the meantime, but it's a lot slower ;( | 2023-12-21T12:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/18nlrud/anyone_else_video_out_issues_with_the_latest_rocm/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nlrud | false | null | t3_18nlrud | /r/LocalLLaMA/comments/18nlrud/anyone_else_video_out_issues_with_the_latest_rocm/ | false | false | self | 2 | null |
I suggest a new benchmark for | 1 | I suggest a new benchmark for summarizing ability. It requires a good grading LLM, though.
In the benchmark, the testee is given a text and is required to write a summary or other explanatory text. With testee's context memory cleared, testee is given its own output and is asked a bunch of questions about the original text. The tester evaluates the answers.
A problem with this test is that the testee may answer well because of embedded knowledge. Another potential problem is that a testee that speaks efficient code language for itself may excel.
Therefore, probably the answering LLM should be standardized. I don't know how well the answering LLM can model a human student, but if it can, this benchmark might lead to LLMs that can teach. Multiple "target reader models" could be created, like "5-year-old" and high schooler and college student and so on. That way, we could create teaching LLMs that are specifically suited for different human readers! | 2023-12-21T12:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/18nlrdj/i_suggest_a_new_benchmark_for/ | SpecialNothingness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nlrdj | false | null | t3_18nlrdj | /r/LocalLLaMA/comments/18nlrdj/i_suggest_a_new_benchmark_for/ | false | false | self | 1 | null |
Looking for a RP model. | 4 | Hello, I'm searching for an RP model, not a novel-writing model. I've tried many models, preferably around 13B, like MythoMax and Noromaid. I appreciate many of them, but the issue is that they tend to generate novel-length replies of around 300 tokens. This might be due to the settings, as I'm using SillyTavern for the frontend. However, what I aim to achieve is a character AI experience where it doesn't generate novels but instead provides a few short sentences, with actions enclosed in asterisks, and text.
I would appreciate help from someone who may be in a similar situation. Preferably, models of 7b or 13b would be ideal. Thank you.
| 2023-12-21T11:43:07 | https://www.reddit.com/r/LocalLLaMA/comments/18nl58f/looking_for_a_rp_model/ | swwer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nl58f | false | null | t3_18nl58f | /r/LocalLLaMA/comments/18nl58f/looking_for_a_rp_model/ | false | false | self | 4 | null |
LLM in a flash: Efficient Large Language Model Inference with Limited Memory | 23 | Apple published a promising paper:
[https://arxiv.org/pdf/2312.11514.pdf](https://arxiv.org/pdf/2312.11514.pdf) | 2023-12-21T11:30:27 | https://www.reddit.com/r/LocalLLaMA/comments/18nkxtx/llm_in_a_flash_efficient_large_language_model/ | Tommy-kun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nkxtx | false | null | t3_18nkxtx | /r/LocalLLaMA/comments/18nkxtx/llm_in_a_flash_efficient_large_language_model/ | false | false | self | 23 | null |
WSL dont let me type messages | 2 | 2023-12-21T11:06:58 | Caradrian14 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18nkkqw | false | null | t3_18nkkqw | /r/LocalLLaMA/comments/18nkkqw/wsl_dont_let_me_type_messages/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'i2qzYDO6fXhxwjKNGqmXwn-46ArYRvbCRLC-DeAC_pU', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/d563b94mrm7c1.png?width=108&crop=smart&auto=webp&s=4a98e7b72c18f61d1cf068a8465b4e4dadd396db', 'width': 108}, {'height': 64, 'url': 'https://preview.redd.it/d563b94mrm7c1.png?width=216&crop=smart&auto=webp&s=103f9badbf5546c17bb23be152e861c81c6ed4fc', 'width': 216}, {'height': 95, 'url': 'https://preview.redd.it/d563b94mrm7c1.png?width=320&crop=smart&auto=webp&s=e3379917ac2b07bbd36ce1dc0cda6a2c78c08bae', 'width': 320}, {'height': 190, 'url': 'https://preview.redd.it/d563b94mrm7c1.png?width=640&crop=smart&auto=webp&s=f4a605a73ad6bcdff644533d5c77936585706ac0', 'width': 640}, {'height': 285, 'url': 'https://preview.redd.it/d563b94mrm7c1.png?width=960&crop=smart&auto=webp&s=1218732968df6ff4e3991d9363cce7178527971b', 'width': 960}], 'source': {'height': 310, 'url': 'https://preview.redd.it/d563b94mrm7c1.png?auto=webp&s=fb59a3699babff524f831662af1b94e610536a0d', 'width': 1042}, 'variants': {}}]} | ||
Getting error while inference with GPT2 model, IndexError: index out of range in self | 1 | [removed] | 2023-12-21T11:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/18nkh1w/getting_error_while_inference_with_gpt2_model/ | Downtown-Rice-7560 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nkh1w | false | null | t3_18nkh1w | /r/LocalLLaMA/comments/18nkh1w/getting_error_while_inference_with_gpt2_model/ | false | false | self | 1 | null |
How do we make LLM fine-tuning as simple/easy as Stable Diffusion fine-tuning? | 1 | [removed] | 2023-12-21T10:47:37 | https://www.reddit.com/r/LocalLLaMA/comments/18nk9wh/how_do_we_make_llm_finetuning_as_simpleeasy_as/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nk9wh | false | null | t3_18nk9wh | /r/LocalLLaMA/comments/18nk9wh/how_do_we_make_llm_finetuning_as_simpleeasy_as/ | false | false | self | 1 | null |
Which framework should I use to build Question-Answering system ? | 6 | Hi
I'm currently building a QA system using RAG. I have experience with LangChain and used it to build a POC. But when I started applying it in production and customizing it to fit my needs, I found the framework too difficult to control: the chains, agents, and prompts are too complicated to manage.
Can you recommend any framework that is more controllable? or Should I build from scratch? | 2023-12-21T10:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/18nk6sv/which_framework_should_i_use_to_build/ | unknow_from_vietnam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nk6sv | false | null | t3_18nk6sv | /r/LocalLLaMA/comments/18nk6sv/which_framework_should_i_use_to_build/ | false | false | self | 6 | null |
Educational content creation | 2 | I'm trying to create educational content in the domain of cybersecurity. More particularly, I want to create videos that demonstrate an attack flow (a real-life scenario) of how an attacker compromises a victim using a specific attack. Can somebody here suggest me which AI tools I could use to create the aforementioned content? | 2023-12-21T10:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/18nk3qf/educational_content_creation/ | dexysexy19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nk3qf | false | null | t3_18nk3qf | /r/LocalLLaMA/comments/18nk3qf/educational_content_creation/ | false | false | self | 2 | null |
Is it possible to quantify the accuracy of the fp16 vs some 8bit GGUF model and such? | 3 | Let's say fp16 is 100%: what percentage would 8/6/4/2-bit GGUF models, or some other quantisation formats, be?
An extra question: what are the best options to run a fp16 model with maximum accuracy in Ooba? | 2023-12-21T10:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/18nk25c/is_it_possible_to_quantify_the_accuracy_of_the/ | The_One_Who_Slays | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nk25c | false | null | t3_18nk25c | /r/LocalLLaMA/comments/18nk25c/is_it_possible_to_quantify_the_accuracy_of_the/ | false | false | self | 3 | null |
Flagging incorrect generations of LLMs | 1 | Hey everyone, just wanted to know how you guys know whether your generated output is correct in RAG.
For example, playing around with token probabilities is one way to gauge token-wise prediction confidence. But not every token contributes to the actual answer (only some tokens carry it). How do you aggregate token probabilities into an overall generation confidence? | 2023-12-21T10:03:41 | https://www.reddit.com/r/LocalLLaMA/comments/18njm89/flagging_incorrect_generations_of_llms/ | elitee_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18njm89 | false | null | t3_18njm89 | /r/LocalLLaMA/comments/18njm89/flagging_incorrect_generations_of_llms/ | false | false | self | 1 | null |
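For reference, one common way to aggregate the token probabilities mentioned above is the length-normalized average log-probability (a geometric-mean token probability), optionally restricted to the answer-bearing tokens. A hypothetical sketch, assuming your backend returns per-token logprobs:

```python
import math
from typing import List, Optional, Tuple

def generation_confidence(
    token_logprobs: List[float],
    answer_span: Optional[Tuple[int, int]] = None,
) -> float:
    """Geometric-mean token probability; optionally over only the answer tokens."""
    if answer_span is not None:          # weight only the tokens that carry the answer
        start, end = answer_span
        token_logprobs = token_logprobs[start:end]
    avg_logprob = sum(token_logprobs) / len(token_logprobs)  # length normalization
    return math.exp(avg_logprob)         # map back onto a 0..1 probability scale

# e.g. four generated tokens where only the last two form the actual answer:
lps = [-0.05, -2.3, -0.1, -0.2]
print(generation_confidence(lps))          # whole sequence, dragged down by the filler token
print(generation_confidence(lps, (2, 4)))  # answer tokens only, much higher
```

Locating the answer span is the hard part; one approach is matching the extracted answer string back onto the generated tokens before scoring.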
deepsword-34b: a role-playing model based on martial arts and whodunit novels | 15 | My last model, deepsex-34b, seems to have received the attention it doesn't deserve, but what I really like is this kind of role-playing game with a story. This time I again list the complete data cleaning process. I hope everyone can give me their opinions. Thanks!
Base model: [TriadParty/Deepsword-34B-Base · Hugging Face](https://huggingface.co/TriadParty/Deepsword-34B-Base)
Chat model: [https://huggingface.co/TriadParty/Deepsword-34B-Chat](https://huggingface.co/TriadParty/Deepsword-34B-Chat)
Introducing **wrath** in the Seven Deadly Sins series of models.
* Continuous pre-training with QLoRA on Yi-34B
* High-quality martial arts novels
* Thoughtful cleaning process
This model is designed to serve as the base model in the agent model of Live Action Role Playing games. For this purpose, I've collected approximately 10 GB of martial arts novels, sourced from various novel websites and PT sites. However, this dataset includes a significant amount of duplicate and low-quality content. To address these issues, I've undertaken the following steps:
## 1. Define Data Quality Dimensions
For martial arts novels, high-quality works are typically represented by authors like Jin Yong, Gu Long, and Liang Yusheng. In these novels, the complexity of the plot is a critical factor and is the focal point for script quality.
## 2. Quantify Data Quality Dimensions
Given the emphasis on plot complexity, we approached this in several stages:
1. **Chapter Summarization**
   * English: utilize [**Hugging Face's LED-Large-Book-Summary model**](https://huggingface.co/pszemraj/led-large-book-summary).
   * Chinese: use the [**Randeng-Pegasus-523M-Summary-Chinese**](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese) model.
2. **Vectorization and Complexity Analysis**
   * Convert plot summaries into vectors using a BERT-based model.
   * Measure transitions between chapters through cosine similarity or Euclidean distance.
   * Develop a complexity algorithm focused on standard deviation and peak analysis.
3. **Metric Quantification**
   * Apply subjective weighting to the complexity metrics derived from chapter transitions.
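As a rough, hypothetical sketch of the vectorization-and-complexity idea above (the post does not give the exact algorithm), chapter-transition complexity via embedding distances and their standard deviation might look like:

```python
import math
from typing import List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def plot_complexity(chapter_vecs: List[List[float]]) -> float:
    """Std. deviation of chapter-to-chapter transition distances: a flat plot
    (similar consecutive chapters) scores low; big semantic jumps score high."""
    transitions = [1.0 - cosine(a, b) for a, b in zip(chapter_vecs, chapter_vecs[1:])]
    mean = sum(transitions) / len(transitions)
    return math.sqrt(sum((t - mean) ** 2 for t in transitions) / len(transitions))

# A "flat" book (identical chapter embeddings) vs. one with a sharp twist:
flat = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
twisty = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(plot_complexity(flat) < plot_complexity(twisty))  # True
```

In practice the chapter vectors would come from the BERT-based summarizer embeddings, and a peak count could be added alongside the standard deviation.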
## 3. Outcome
By employing these methods, we can effectively select the novels of higher quality. This refined [**dataset**](https://huggingface.co/datasets/TriadParty/deepsword) has been shared for further use. Then all we have to do is continue pre-training and SFT. For specific parameters, see my previous model. Of course, the chat version provided this time is only responsible for role-playing; in my process, script writers and game leaders are also indispensable in a complete game.
This model should be able to support both Chinese and English. Since the concept of martial arts does not exist in English, I collected some detective mystery novels for the English side, such as those of Edgar Allan Poe and Conan Doyle. But if you are interested in the oriental martial arts series, you might want to give it a try.
​ | 2023-12-21T09:38:55 | https://www.reddit.com/r/LocalLLaMA/comments/18nj9d3/deepsword34b_a_roleplaying_model_based_on_martial/ | Fun_Water2230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nj9d3 | false | null | t3_18nj9d3 | /r/LocalLLaMA/comments/18nj9d3/deepsword34b_a_roleplaying_model_based_on_martial/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'pUvJPdS9ny58EbY_QVZTk0LSy4Rh2Zcyel8eujwHOMI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=108&crop=smart&auto=webp&s=a86add1e18f7223d7eca6a144243afd92388ec18', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=216&crop=smart&auto=webp&s=f6aa41830c6d53e949e115f73bed72a584b84e22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=320&crop=smart&auto=webp&s=222418bf9dd2a61d6f3ef8b0e7e41f8ebf23a871', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=640&crop=smart&auto=webp&s=8e884e6408612e74d7d8205709f3f718d59ef4a3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=960&crop=smart&auto=webp&s=0c6efffa910f62325c655354fe005abc7b2733a4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=1080&crop=smart&auto=webp&s=86efc57d3947ec10515e73242f78dff4d7aae584', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?auto=webp&s=266c6ba2231dca9dc90a2af82bfc2c6a09a3ba29', 'width': 1200}, 'variants': {}}]} |
deepsword-34b - | 1 | [removed] | 2023-12-21T09:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/18nj6jx/deepsword34b/ | Fun_Water2230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nj6jx | false | null | t3_18nj6jx | /r/LocalLLaMA/comments/18nj6jx/deepsword34b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pUvJPdS9ny58EbY_QVZTk0LSy4Rh2Zcyel8eujwHOMI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=108&crop=smart&auto=webp&s=a86add1e18f7223d7eca6a144243afd92388ec18', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=216&crop=smart&auto=webp&s=f6aa41830c6d53e949e115f73bed72a584b84e22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=320&crop=smart&auto=webp&s=222418bf9dd2a61d6f3ef8b0e7e41f8ebf23a871', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=640&crop=smart&auto=webp&s=8e884e6408612e74d7d8205709f3f718d59ef4a3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=960&crop=smart&auto=webp&s=0c6efffa910f62325c655354fe005abc7b2733a4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?width=1080&crop=smart&auto=webp&s=86efc57d3947ec10515e73242f78dff4d7aae584', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6YFhPbXKVwtfhR_vp7gXkcPwzrOYN7a2cBWlqXw80es.jpg?auto=webp&s=266c6ba2231dca9dc90a2af82bfc2c6a09a3ba29', 'width': 1200}, 'variants': {}}]} |
Question about LLama 2 7B Chat and quantization (Noob question) | 1 | Hi !
New to AI, I'm making an app and need to implement a question & answer chatbot. I got Llama 2 Chat (I have access to the 7B, 13B and 70B models; I will only use the 7B).
I want to reduce the model size and RAM usage to fit on a computer with 8 GB of RAM, and use it in Python to run tests.
How can I do that? (Or is it even possible without retraining the model, since I can't retrain it?) And what Python code do I need to run it with a prompt as a Q&A chatbot?
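For what it's worth, the usual route here is a pre-quantized GGUF file (a 4-bit 7B quant is roughly 4 GB, which fits in 8 GB of RAM) loaded with llama-cpp-python, with no retraining needed. A hypothetical minimal sketch (the model filename is a placeholder for whatever quant you download):

```python
# pip install llama-cpp-python
import os

def build_prompt(question: str) -> str:
    """Single-turn Llama-2 chat prompt format."""
    return f"[INST] {question} [/INST]"

MODEL = "./llama-2-7b-chat.Q4_K_M.gguf"  # placeholder: any ~4-bit GGUF quant of Llama 2 7B Chat
if os.path.exists(MODEL):
    from llama_cpp import Llama
    llm = Llama(model_path=MODEL, n_ctx=2048)  # context size; raise if your RAM allows
    out = llm(build_prompt("What is quantization?"), max_tokens=256, stop=["</s>"])
    print(out["choices"][0]["text"].strip())
```

The quantization itself is normally done once with llama.cpp's conversion tools (or you grab an already-quantized GGUF from Hugging Face), so the 8 GB machine only ever loads the small file.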
| 2023-12-21T09:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/18nip6w/question_about_llama_2_7b_chat_and_quantization/ | MiniChaKal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nip6w | false | null | t3_18nip6w | /r/LocalLLaMA/comments/18nip6w/question_about_llama_2_7b_chat_and_quantization/ | false | false | self | 1 | null |
Fine-tuning Mistral for translation? | 5 | Hi everyone! I'm curious if anyone here has experimented with fine-tuning Mistral (base/instruct) specifically for translation tasks. I've given it a try but haven't had much success so far. Would love to hear any tips, suggestions, or insights from those who have tried it. Thanks! | 2023-12-21T08:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/18nilzk/finetuning_mistral_for_translation/ | TomMoeras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nilzk | false | null | t3_18nilzk | /r/LocalLLaMA/comments/18nilzk/finetuning_mistral_for_translation/ | false | false | self | 5 | null |
The Efficiency Spectrum of Large Language Models: An Algorithmic Survey - "This survey delivers a comprehensive review of algorithmic advancements aimed at improving LLM efficiency" | 15 | **Paper**: [https://arxiv.org/abs/2312.00678](https://arxiv.org/abs/2312.00678)
**Literature repository**: [https://github.com/tding1/Efficient-LLM-Survey](https://github.com/tding1/Efficient-LLM-Survey)
**Abstract**:
>The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains, reshaping the artificial general intelligence landscape. However, the increasing computational and memory demands of these models present substantial challenges, hindering both academic research and practical applications. To address these issues, a wide array of methods, including both algorithmic and hardware solutions, have been developed to enhance the efficiency of LLMs. This survey delivers a comprehensive review of algorithmic advancements aimed at improving LLM efficiency. Unlike other surveys that typically focus on specific areas such as training or model compression, this paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs. Specifically, it covers various topics related to efficiency, including scaling laws, data utilization, architectural innovations, training and tuning strategies, and inference techniques. This paper aims to serve as a valuable resource for researchers and practitioners, laying the groundwork for future innovations in this critical research area. Our repository of relevant references is maintained at [this https URL](https://github.com/tding1/Efficient-LLM-Survey). | 2023-12-21T08:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/18nhzd9/the_efficiency_spectrum_of_large_language_models/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nhzd9 | false | null | t3_18nhzd9 | /r/LocalLLaMA/comments/18nhzd9/the_efficiency_spectrum_of_large_language_models/ | false | false | self | 15 | null |
Efficient Large Language Models: A Survey - "In this survey, we provide a systematic and comprehensive review of efficient LLMs research" | 7 | **Paper**: [https://arxiv.org/abs/2312.03863](https://arxiv.org/abs/2312.03863)
**Literature repository**: [https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey](https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey)
**Abstract**:
>Large Language Models (LLMs) have demonstrated remarkable capabilities in important tasks such as natural language understanding, language generation, and complex reasoning and have the potential to make a substantial impact on our society. Such capabilities, however, come with the considerable resources they demand, highlighting the strong need to develop effective techniques for addressing their efficiency challenges. In this survey, we provide a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected efficient LLMs topics from model-centric, data-centric, and framework-centric perspective, respectively. We have also created a GitHub repository where we compile the papers featured in this survey at [this https URL](https://github.com/AIoT-MLSys-Lab/EfficientLLMs), [this https URL](https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey), and will actively maintain this repository and incorporate new research as it emerges. We hope our survey can serve as a valuable resource to help researchers and practitioners gain a systematic understanding of the research developments in efficient LLMs and inspire them to contribute to this important and exciting field. | 2023-12-21T08:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/18nhvhh/efficient_large_language_models_a_survey_in_this/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nhvhh | false | null | t3_18nhvhh | /r/LocalLLaMA/comments/18nhvhh/efficient_large_language_models_a_survey_in_this/ | false | false | self | 7 | null |
Give an LLM computer control | 9 | Has this been done yet? I want my locally running AI to be able to complete tasks for me through my Windows machine, even with cursor control if that's possible — things such as opening applications and surfing the web. | 2023-12-21T07:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/18nhllm/give_an_llm_computer_control/ | dbzunicorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nhllm | false | null | t3_18nhllm | /r/LocalLLaMA/comments/18nhllm/give_an_llm_computer_control/ | false | false | self | 9 | null |
Generation Parameters - Am I doing wrong? | 3 | Hey fellow Llamers' !
I am currently working on a project that involves integrating the latest Mistral instruct GGUF model with llama_cpp_python. Unfortunately, I have encountered an issue where the AI message generation stops after producing approximately 30-50 tokens and returns an incomplete sentence.
Below are the parameters I am using, with a very simplified example of how the LLM is being used. If anyone has encountered a similar issue or has suggestions on how to troubleshoot this problem, I would greatly appreciate your insights and guidance.
Thank you!
​
llm = Llama(
model_path="/Users/.../model/mistral-7b-instruct-v0.2.Q8_0.gguf",
n_threads=10,
n_gpu_layers=-1,
main_gpu=0,
seed=-1,
n_threads_batch=10,
n_batch=512,
n_ctx=16000,
f16_kv=True,
last_n_tokens_size=128,
max_tokens=-1,
temp=0,
)
while True:
prompt=input(">")
ai_response = llm('[INST] '+prompt+'[/INST]')['choices'][0]['text']
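If it helps anyone hitting the same symptom: in llama-cpp-python, I believe `max_tokens` and `temperature` are per-call sampling arguments to `llm(...)` rather than `Llama(...)` constructor arguments, and the per-call default for `max_tokens` is small — so a call that doesn't pass it stops early. A minimal sketch of moving them into the call (the wrapper and the stub model below are illustrative, not part of the original code):

```python
def answer(llm, prompt: str, max_tokens: int = 512) -> str:
    """Run one instruct-formatted completion; sampling args go on the call."""
    out = llm(f"[INST] {prompt} [/INST]", max_tokens=max_tokens, temperature=0.0)
    return out["choices"][0]["text"]

# Stub standing in for a loaded llama_cpp.Llama model, for illustration only.
def fake_llm(prompt, max_tokens=16, temperature=0.0):
    return {"choices": [{"text": f"(echo of {len(prompt)} chars, up to {max_tokens} tokens)"}]}

print(answer(fake_llm, "Why is the sky blue?"))
```

With the real model, `answer(llm, prompt)` would then generate up to 512 tokens per reply instead of stopping at the library default.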
​ | 2023-12-21T07:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/18nhjpf/generation_parameters_am_i_doing_wrong/ | Toni_rider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nhjpf | false | null | t3_18nhjpf | /r/LocalLLaMA/comments/18nhjpf/generation_parameters_am_i_doing_wrong/ | false | false | self | 3 | null |
Model Fine-Tuning help needed | 2 | Hello, I'm currently working on a project where I want to generate articles for topics such as Weather and Sports. I am using the Llama 7b model for this and I want to fine-tune it to be able to generate such articles. I will feed the model data through APIs such as the OpenWeather API, and it should generate articles according to that data. My problem is: how would I train the model for such a task? I have a CSV file where the first column is the articles (written the way I want them generated) and the second column is the category of the article (weather or sports). Is that enough or do I need something else too? | 2023-12-21T07:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/18nh7lk/model_finetuning_help_needed/ | rengy001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nh7lk | false | null | t3_18nh7lk | /r/LocalLLaMA/comments/18nh7lk/model_finetuning_help_needed/ | false | false | self | 2 | null |
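One common way to prepare such a CSV for instruction-style fine-tuning is to turn each (article, category) row into a prompt/completion pair, with the prompt carrying the same kind of structured data you will later feed from the API — otherwise the model never learns to condition on it. A hedged sketch (the template and the extra `data` column are made up for illustration; adapt them to the actual columns and to the prompt format your training framework expects):

```python
import csv
import io
import json

TEMPLATE = "Write a short {category} article based on the data provided.\n\nData: {data}\n\nArticle:"

def rows_to_examples(csv_text: str):
    """Turn (article, category, data) CSV rows into prompt/completion training pairs."""
    examples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        prompt = TEMPLATE.format(category=row["category"], data=row.get("data", "N/A"))
        examples.append({"prompt": prompt, "completion": row["article"]})
    return examples

sample = "article,category,data\nSunny skies expected...,weather,\"temp: 21C, wind: 5kph\"\n"
print(json.dumps(rows_to_examples(sample), indent=2))
```

At inference time you would fill the same template with live API output, so the training and serving prompts match.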
Screen flickering in Linux when offloading layers to GPU with llama.cpp (AMD with OpenCL) | 3 | Apologies if this is a dumb question, but I haven't found anything on point from searching.
The general question is: **has anyone experienced "screen flickering" or similar weird behavior on monitors when increasing offloaded GPU layers?** Is this a potentially normal situation? My understanding from reading forums was that if you tried to offload too many layers, llama.cpp would either (1) just crash or (2) if your graphics card enabled it, try to bleed off the excess usage to your RAM (which slows stuff down, but doesn't crash). The flickering is intermittent but continues after llama.cpp is halted.
Background:
I know AMD support is tricky in general, but after a couple days of fiddling, I managed to get ROCm and OpenCL working on my AMD 5700 XT. I was finally able to offload layers in llama.cpp to my GPU, which of course greatly increased speed. It's made 13b and 20b models pretty reasonable with my system. Note: I have 64 GB of RAM, so the issues aren't caused by problems with the rest of the models fitting in the memory overall. I can even run 70b models at a slow pace (\~1 t/s) if I wanted.
As I said above, the flickering is intermittent, but persists after I stop llama.cpp. Mostly, it appears as though my two monitors are "swapping" display positions left and right (sort of, it's just rendered wrong) in the "flickers." So far, the quickest solution to resolve the problem after I quit llama.cpp is to disconnect the HDMI cable and plug the one monitor back in (usually it's just one monitor flickering), which causes Linux to re-render and redetect the screens enough to stop whatever's going on. I have no idea if this matters, but the more problematic monitor is plugged in via HDMI, while the more "stable" monitor uses DisplayPort.
My immediate thought is that loading too much of a model into VRAM is that it's somehow corrupting the GPU's interaction with basic display or interfering somehow. It usually seems to happen if my VRAM usage at least temporarily hits the max of 100%, though a couple times I've seen it happen even seemingly with VRAM usage only in the 90% range. (My system doesn't use a lot of VRAM, as I have a rather light desktop, but still, there's some basic memory usage.)
But should that be happening? Has anyone else encountered behavior like this? If llama.cpp just crashed with too many layers, that would be okay, and I could figure out how many to offload with a particular model without breaking stuff. But this monitor behavior is just annoying -- particularly given my VRAM usage by my basic system isn't completely stable, so it's tough to predict just how many offloaded layers might cause problems consistently.
Also, to clarify, I have had my desktop running for a couple years with this hardware and never encountered such flickering before with any other applications.
Any advice or thoughts would be appreciated. | 2023-12-21T05:58:46 | https://www.reddit.com/r/LocalLLaMA/comments/18nfwy5/screen_flickering_in_linux_when_offloading_layers/ | bobjones271828 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nfwy5 | false | null | t3_18nfwy5 | /r/LocalLLaMA/comments/18nfwy5/screen_flickering_in_linux_when_offloading_layers/ | false | false | self | 3 | null |
High-Speed Large Language Model Serving on PCs with Consumer-Grade GPUs | 11 | 2023-12-21T05:51:53 | https://news.ycombinator.com/item?id=38708585 | RadioNick | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 18nfsta | false | null | t3_18nfsta | /r/LocalLLaMA/comments/18nfsta/highspeed_large_language_model_serving_on_pcs/ | false | false | default | 11 | null | |
(MoE) Turning pipeline-parallel Hugging Face models into data/expert-parallel DeepSpeed models | 4 | I want to turn Google Switch Transformer models into expert/data-parallel models. A Google Switch Transformer is a Mixture-of-Experts (MoE) model. The model architecture and weights are available on Hugging Face, but Hugging Face only supports pipeline-parallel models. I want to turn this model into a data/expert parallel model, i.e., E + D parallelism explained here in the [DeepSpeed-MoE blog](https://www.deepspeed.ai/tutorials/mixture-of-experts/).
(For those unfamiliar with the parallelism terms)
* Expert (E) parallelism means that the expert layer will be partitioned across multiple GPUs, and the GPUs will sync for collective communication before/after each expert layer to route the activations to/from the appropriate GPU.
* Data (D) parallelism means that the non-expert layers (e.g., attention layers) will run in parallel, and all GPUs hold the same weight values, but compute the activations for different inputs.
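As a concrete, framework-free illustration of what the expert-parallel step does: the router assigns each token to an expert, and each expert's tokens are shipped to the rank that owns that expert. A toy top-1 (Switch-style) routing sketch, assuming experts are partitioned contiguously and evenly across ranks:

```python
def route_top1(token_scores, num_experts, ep_size):
    """token_scores[i][e]: router logit of token i for expert e.
    Returns, per rank, the (token, expert) pairs that rank must process,
    assuming experts are sharded contiguously across ep_size ranks."""
    experts_per_rank = num_experts // ep_size
    per_rank = [[] for _ in range(ep_size)]
    for i, scores in enumerate(token_scores):
        expert = max(range(num_experts), key=lambda e: scores[e])  # top-1 routing
        per_rank[expert // experts_per_rank].append((i, expert))
    return per_rank

# 4 experts on 2 ranks: experts 0-1 live on rank 0, experts 2-3 on rank 1.
print(route_top1([[0.9, 0.1, 0.0, 0.0], [0.0, 0.0, 0.2, 0.8]], num_experts=4, ep_size=2))
# → [[(0, 0)], [(1, 3)]]
```

As I understand it, in DeepSpeed this shuffle is the all-to-all wrapped around each `deepspeed.moe.layer.MoE` layer; the non-expert layers never see it, which is exactly why they can run purely data-parallel.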
The [DeepSpeed-MoE library on GitHub](https://github.com/microsoft/DeepSpeed/tree/master/deepspeed/moe) seems to hint that I can simply use `deepspeed.moe.layer.MoE` to implement the expert layer which will use expert parallelism when I give the right option at runtime.
**What I'm having trouble with** is whether I can just implement the non-expert layers with "trivial" modules like `nn.Linear`, and have DeepSpeed take care of the data parallelism "automatically" at runtime. So far, my understanding is that for more complicated schemes like model (tensor) parallelism you would want to use row/column-parallel FC layer modules, but that for data parallelism plain modules are fine.
​
I'm a newbie here and any help is genuinely appreciated. Thanks! | 2023-12-21T05:42:23 | https://www.reddit.com/r/LocalLLaMA/comments/18nfmyv/moe_turning_pipelineparallel_hugging_face_models/ | GiantPengsoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nfmyv | false | null | t3_18nfmyv | /r/LocalLLaMA/comments/18nfmyv/moe_turning_pipelineparallel_hugging_face_models/ | false | false | self | 4 | null |
Dual 4090s vs RTX 6000 Ada | 2 | Which would be faster?
For inference, would the 2x24 GB in dual 4090s have the same capacity as a single RTX 6000 Ada?
Does the same hold true for finetuning? | 2023-12-21T05:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/18nffzl/dual_4090s_vs_rtx_6000_ada/ | susmitds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nffzl | false | null | t3_18nffzl | /r/LocalLLaMA/comments/18nffzl/dual_4090s_vs_rtx_6000_ada/ | false | false | self | 2 | null |
LLaMA Terminal Completion, a local virtual assistant for the terminal | 20 | 2023-12-21T04:32:31 | https://github.com/adammpkins/llama-terminal-completion/ | adammpkins | github.com | 1970-01-01T00:00:00 | 0 | {} | 18neeah | false | null | t3_18neeah | /r/LocalLLaMA/comments/18neeah/llama_terminal_completion_a_local_virtual/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'M4P8jvsher1VEX1Ppo0IF2kUGYsQz1nriJjbk-Ulq5A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?width=108&crop=smart&auto=webp&s=331261f32372bcaa1a448c3b36b37fb29e6a26aa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?width=216&crop=smart&auto=webp&s=9e19a474689988b983226d04d2567ff05bb41bff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?width=320&crop=smart&auto=webp&s=7e26d65b7a40dc4b8ec9300e315ccb650176c789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?width=640&crop=smart&auto=webp&s=e49fa33213cead1fb42b60c05fd4d90d4b7a386e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?width=960&crop=smart&auto=webp&s=935fb0f127d47b3accbb3cd870b13c6d76fc42ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?width=1080&crop=smart&auto=webp&s=bb7c034350a5e8937960dd08611b259c0fc4024e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3LZDs17_OqLvSS0WGvVCOyRTaZ5D0VMcFz9wl2ZdPH0.jpg?auto=webp&s=607f7ea55534c33f043b37c0d1d9eb2cab29b523', 'width': 1200}, 'variants': {}}]} | ||
Ask LLM for Crypto Insights/Recommendations? | 1 | I know models with safety especially like ChatGPT are not designed to give financial advice. However, I am wondering if anything exists that could at least give some recommendations based on chart indicators, trends, or live market data in general.
For example, I tried asking Bard, and it provides recommendations (says it will use MACD, Bollinger bands indicators, etc. if I ask) but the data appears to be wrong – I think it’s not looking at current data. I tried telling it to use current data only but still seems off.
It’s not for real financial decisions. But do we have the technology? Is there a simple solution without setting up or training anything (also free, without paying for market data)?
Thanks for any ideas/suggestions. | 2023-12-21T04:07:56 | https://www.reddit.com/r/LocalLLaMA/comments/18ndygf/ask_llm_for_crypto_insightsrecommendations/ | smoked__rugs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ndygf | false | null | t3_18ndygf | /r/LocalLLaMA/comments/18ndygf/ask_llm_for_crypto_insightsrecommendations/ | false | false | default | 1 | null |
What is stopping us from creating an open source GPT-4 & Gemini Ultra? (Or better) | 206 | With training costs dropping, and quality training data increasing, what's preventing an active community like ours from creating fully open source SOTA LLMs?
Is it just hard for us to get funding? Do we actually need funding? For example could we find a way to distribute the training across our existing hardware - like a giant CPU/GPU farm?
Is it a lack of coordination? Is it a lack of goal alignment?
Are we too analytical, and unable to take action? (I doubt this, because I see a lot of us taking action, doing incredible things ...)
There are millions of "us" and only hundreds of "them", so what is it that's stopping us?
We know AI is the future -- so do we want it in the hands of elite corporations?
Or can we make history right here, right now? | 2023-12-21T03:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/18ndjs9/what_is_stopping_us_from_creating_an_open_source/ | askchris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ndjs9 | false | null | t3_18ndjs9 | /r/LocalLLaMA/comments/18ndjs9/what_is_stopping_us_from_creating_an_open_source/ | false | false | self | 206 | null |
Training Mamba 130m in practice | 33 | Hey all, I wanted to get some hands on practice with Mamba to see how well the smaller models work in practice. I thought question answering would be a nice task to see how much inherent knowledge the model had.
TL;DR ~ I trained the 130m Mamba model on SQuAD with the following template:
`{context}`
​
`Q: {question}`
`A: {answer}`
I also wanted the model to be able to answer "I don't know" if the answer was not contained in the context. So for half of the training data I paired a random question with a random context and set the answer to "I don't know", to try to help with hallucinations. This seemed to work reasonably well anecdotally when kicking the tires, but achieved only 12% accuracy on the held-out SQuAD set in practice.
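For reference, the augmentation described above can be sketched as a small dataset builder — half the examples get their real context and answer, and the other half get a context borrowed from a different example with the answer forced to "I don't know" (the function and field names here are my own, not from the linked post's code):

```python
import random

TEMPLATE = "{context}\n\nQ: {question}\nA: {answer}"

def build_examples(squad_rows, seed=0):
    """squad_rows: list of dicts with 'context', 'question', 'answer' keys."""
    rng = random.Random(seed)
    out = []
    for i, row in enumerate(squad_rows):
        if i % 2 == 0:  # answerable half: real context + real answer
            out.append(TEMPLATE.format(**row))
        else:           # unanswerable half: mismatched context -> "I don't know"
            other = rng.choice([r for j, r in enumerate(squad_rows) if j != i])
            out.append(TEMPLATE.format(context=other["context"],
                                       question=row["question"],
                                       answer="I don't know"))
    return out
```

One design note: sampling the mismatched context uniformly keeps the two classes balanced, but topically *similar* wrong contexts would probably make a harder (and more useful) negative set.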
Full experiment details, everything I tried, and the code are linked.
[https://blog.oxen.ai/practical-ml-dive-how-to-train-mamba-for-question-answering/](https://blog.oxen.ai/practical-ml-dive-how-to-train-mamba-for-question-answering/)
I had a hard time training anything over 790m on a Lambda Labs machine with 24GB VRAM, and also had a little success prompt engineering the 2.8b models. I am currently training the 790m model and will release it when it's done.
Has anyone else has success training Mamba on any real world tasks?
Maybe the larger models would be more promising, I just didn't have enough compute, and think it would be much more economical to be able to run a smaller model in production. | 2023-12-21T03:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/18nd9nb/training_mamba_130m_in_practice/ | FallMindless3563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nd9nb | false | null | t3_18nd9nb | /r/LocalLLaMA/comments/18nd9nb/training_mamba_130m_in_practice/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'YIIxMi5N00yvruWngeBaqH3WNY719f7XTEsfv9Y-rfo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/mBA7uLHDt5pZgsDvR4ckLd7b8YhdaQAM84OPr4p__8w.jpg?width=108&crop=smart&auto=webp&s=fce959e903aa8d7ab19d56d970aa8dcf6be8c479', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/mBA7uLHDt5pZgsDvR4ckLd7b8YhdaQAM84OPr4p__8w.jpg?width=216&crop=smart&auto=webp&s=1a07ede004a800c8d1e96bbc386890d3e84268fa', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/mBA7uLHDt5pZgsDvR4ckLd7b8YhdaQAM84OPr4p__8w.jpg?width=320&crop=smart&auto=webp&s=859e91ad65930a325f80a8542358b8fe5ebd2a38', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/mBA7uLHDt5pZgsDvR4ckLd7b8YhdaQAM84OPr4p__8w.jpg?width=640&crop=smart&auto=webp&s=a5419ded8433f4ec444ed53c06e84077034b0b0c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/mBA7uLHDt5pZgsDvR4ckLd7b8YhdaQAM84OPr4p__8w.jpg?width=960&crop=smart&auto=webp&s=f1e3635f4505c2d36cfdbedf9da400a24a18b2d5', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/mBA7uLHDt5pZgsDvR4ckLd7b8YhdaQAM84OPr4p__8w.jpg?auto=webp&s=08586b2dbdaeed1c3cc5abd6ccdd28a5155207bd', 'width': 1024}, 'variants': {}}]} |
Question on AMD GPU build | 5 | Hi all. Has anyone in the community built a PC with 2 AMD GPUs? If so, how well does it perform? Or should I wait for the Nvidia Super GPUs? I am planning to build a PC and need some insights.
Note: A used 3090 is difficult to get from where I live. | 2023-12-21T02:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/18ncl2c/question_on_amd_gpu_build/ | Fit_Morning7115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ncl2c | false | null | t3_18ncl2c | /r/LocalLLaMA/comments/18ncl2c/question_on_amd_gpu_build/ | false | false | self | 5 | null |
Running llama on local computers (Windows) | 1 | [removed] | 2023-12-21T02:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/18nc8gg/running_llama_on_local_computers_windows/ | TriDoHuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nc8gg | false | null | t3_18nc8gg | /r/LocalLLaMA/comments/18nc8gg/running_llama_on_local_computers_windows/ | false | false | self | 1 | null |
I have an idea that could benefit everyone | 1 | [removed] | 2023-12-21T02:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/18nbp5j/i_have_an_idea_that_could_benefit_everyone/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nbp5j | false | null | t3_18nbp5j | /r/LocalLLaMA/comments/18nbp5j/i_have_an_idea_that_could_benefit_everyone/ | false | false | self | 1 | null |
A non guessing next word (token) authoritative AI ? | 1 | [removed] | 2023-12-21T01:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/18nb2ic/a_non_guessing_next_word_token_authoritative_ai/ | skullbonesandnumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18nb2ic | false | null | t3_18nb2ic | /r/LocalLLaMA/comments/18nb2ic/a_non_guessing_next_word_token_authoritative_ai/ | false | false | self | 1 | null |
2023, year of open LLMs. What will 2024 bring? | 18 | 2023-12-21T01:16:52 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18naoo6 | false | null | t3_18naoo6 | /r/LocalLLaMA/comments/18naoo6/2023_year_of_open_llms_what_will_2024_bring/ | false | false | 18 | {'enabled': True, 'images': [{'id': 'IZHltwdV3Yn5jAWsS6Z4yjxqX-6b8zSq3NS8nYPFmuE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ziqbeek0uj7c1.png?width=108&crop=smart&auto=webp&s=1913b1fd6c4599d78b46e1d87440c6db5a00bfe8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ziqbeek0uj7c1.png?width=216&crop=smart&auto=webp&s=1649b757b9f6046eedc9f3ba0c707ca808e91491', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ziqbeek0uj7c1.png?width=320&crop=smart&auto=webp&s=e2b9b2ed876ab7b34e8c6cc90ec011e37afdfcf7', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ziqbeek0uj7c1.png?width=640&crop=smart&auto=webp&s=c8f335349ab04a1baff6ee6fc30b48d771f50162', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ziqbeek0uj7c1.png?width=960&crop=smart&auto=webp&s=dcf0dd4d23d83194c85845635e8963a8c11e2908', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/ziqbeek0uj7c1.png?auto=webp&s=1f1f280d5e3ce23faf92a6df4ef9b2722aff1241', 'width': 1024}, 'variants': {}}]} |