| title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp, 2023-04-01 to 2026-03-04, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, nullable) | ups (int64, 0-8.54k) | preview (string, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RTX 4080 local fine tuning performance much worse compared to Colab Free T4 GPU | 1 | I'm running a simple finetune of the llama-2-7b-hf model with the guanaco dataset. A test run with a batch size of 2 and max_steps of 10 using the Hugging Face TRL library (SFTTrainer) takes a little over 3 minutes on Colab Free, but the same script runs for over 14 minutes on an RTX 4080 locally. I'm running this under WSL with full CUDA support. The GPU is utilized 100% during training, so I don't think that's the problem here.
Is there anything I'm missing or overlooking? The script itself is pretty simple and straightforward, but for reference I'm using this version. The code uses bitsandbytes, 4-bit loading, NF4 quantization, float16, all standard stuff.
[https://github.com/Vasanthengineer4949/NLP-Projects-NHV/blob/main/LLMs%20Related/Finetune%20Llama2%20using%20QLoRA/Finetune\_LLamA.ipynb](https://github.com/Vasanthengineer4949/NLP-Projects-NHV/blob/main/LLMs%20Related/Finetune%20Llama2%20using%20QLoRA/Finetune_LLamA.ipynb)
Any help is appreciated. | 2023-11-02T01:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/17lrr95/rtx_4080_local_fine_tuning_performance_much_worse/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lrr95 | false | null | t3_17lrr95 | /r/LocalLLaMA/comments/17lrr95/rtx_4080_local_fine_tuning_performance_much_worse/ | false | false | self | 1 | ⌀ |
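For anyone comparing setups, the load path being described boils down to a few lines. A minimal sketch, assuming the standard transformers + bitsandbytes route (the model name and dtype choices here are illustrative, not taken from the linked notebook):

```python
# Sketch of the 4-bit NF4 loading setup described above (transformers + bitsandbytes).
# Assumes a CUDA-enabled PyTorch build; the model name is illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # bfloat16 is also worth trying on Ada cards
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",  # verify nothing silently lands on CPU
)
```

Two things worth ruling out on a slow local run: layers silently placed on CPU (inspect `model.hf_device_map` after loading), and a WSL environment that picked up a CPU-only PyTorch wheel (`torch.cuda.is_available()` should be `True`).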
ChipNeMo: Domain-Adapted LLMs for Chip Design | 10 | >ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-adaptive continued pretraining, supervised fine-tuning (SFT) with domain-specific instructions, and domain-adapted retrieval models. We evaluate these methods on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to 5x model size reduction with similar or better performance on a range of design tasks. Our findings also indicate that there's still room for improvement between our current results and ideal outcomes. We believe that further investigation of domain-adapted LLM approaches will help close this gap in the future. 
| 2023-11-02T01:10:53 | https://arxiv.org/abs/2311.00176 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17lrr1n | false | null | t3_17lrr1n | /r/LocalLLaMA/comments/17lrr1n/chipnemo_domainadapted_llms_for_chip_design/ | false | false | 10 | ⌀ | |
So what's the story with Phind CodeLlama? | 72 | I keep seeing people posting about how the new Phind is the most amazing thing on the planet, and I kept thinking "We already have Phind... see, I have the gguf right here!"
I finally looked at the Phind blog post on their newest model, and it says that their current model is v7.
[https://www.phind.com/blog/phind-model-beats-gpt4-fast](https://www.phind.com/blog/phind-model-beats-gpt4-fast)
:O Huggingface only goes up to v2.
I can't tell if Phind is a proprietary model that just happened to give us an older version, if there will be newer versions coming out, or what. Does anyone happen to know? | 2023-11-02T01:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/17lrphf/so_whats_the_story_with_phind_codellama/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lrphf | false | null | t3_17lrphf | /r/LocalLLaMA/comments/17lrphf/so_whats_the_story_with_phind_codellama/ | false | false | self | 72 | null |
Joint Statement on AI Safety and Openness | 134 | 2023-11-02T00:32:42 | https://open.mozilla.org/letter/ | ninjasaid13 | open.mozilla.org | 1970-01-01T00:00:00 | 0 | {} | 17lqy3y | false | null | t3_17lqy3y | /r/LocalLLaMA/comments/17lqy3y/joint_statement_on_ai_safety_and_openness/ | false | false | default | 134 | null | |
I wanna know general adequate epochs when trying to do transfer learning with llm using Qlora method! | 1 | Hello everyone,
​
As the title says, I'm currently trying to do transfer learning.
For general LLM transfer learning, I've heard that 5 epochs is considered the appropriate minimum. When using the QLoRA method, what is the appropriate number of epochs? | 2023-11-02T00:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/17lqt4v/i_wanna_know_general_adequate_epochs_when_trying/ | Difficult-Mixture724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lqt4v | false | null | t3_17lqt4v | /r/LocalLLaMA/comments/17lqt4v/i_wanna_know_general_adequate_epochs_when_trying/ | false | false | self | 1 | null |
LLava-RLHF prompting tips needed | 1 | LLaVA-RLHF is good, but it acts dumb most of the time, especially compared to ChatGPT-4V. Does anyone know how to prompt engineer it efficiently?
It talks too much; if I need a JSON output, it just keeps talking. How do I make it talk/hallucinate less? | 2023-11-01T23:45:55 | https://www.reddit.com/r/LocalLLaMA/comments/17lpwkp/llavarlhf_prompting_tips_needed/ | ManufacturerBrief286 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lpwkp | false | null | t3_17lpwkp | /r/LocalLLaMA/comments/17lpwkp/llavarlhf_prompting_tips_needed/ | false | false | self | 1 | null |
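Until the prompting is nailed down, one pragmatic workaround for the JSON problem is to ask for JSON anyway and then pull the first parseable object out of whatever the model says around it. A minimal sketch (pure string handling, no model required; the example reply is made up):

```python
import json

def extract_first_json(text):
    """Return the first parseable {...} object embedded in a chatty model reply."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    # Balanced candidate found; try to parse it.
                    try:
                        return json.loads(text[start : i + 1])
                    except json.JSONDecodeError:
                        break  # not valid JSON, try the next "{"
        start = text.find("{", start + 1)
    return None

reply = 'Sure! Here is the result: {"label": "cat", "score": 0.9} Hope that helps!'
print(extract_first_json(reply))  # → {'label': 'cat', 'score': 0.9}
```

This doesn't stop the model from rambling, but it makes the rambling harmless to downstream code.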
Is there a document containing major LLama2 fine tuned models and corresponding prompt templates? | 2 | I'm trying to use vLLM and update system_prompts before feeding it the human_prompt. Does anyone know where I can find the system prompts for all the models supported by vLLM? Thank you | 2023-11-01T23:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/17lppsv/is_there_a_document_containing_major_llama2_fine/ | kitkatmafia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lppsv | false | null | t3_17lppsv | /r/LocalLLaMA/comments/17lppsv/is_there_a_document_containing_major_llama2_fine/ | false | false | self | 2 | null |
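For the base Llama-2 chat models specifically, the template is the `[INST] ... [/INST]` wrapper with an optional `<<SYS>>` block inside the first instruction. A small single-turn builder as a sketch:

```python
def llama2_prompt(user_msg, system_prompt=None):
    """Single-turn Llama-2-chat prompt. The <s> BOS token is usually added by
    the tokenizer, so it is omitted here."""
    if system_prompt:
        inner = "<<SYS>>\n" + system_prompt + "\n<</SYS>>\n\n" + user_msg
    else:
        inner = user_msg
    return "[INST] " + inner + " [/INST]"

print(llama2_prompt("What is RAG?", "You are a concise assistant."))
```

Fine-tunes (Vicuna, Alpaca, Mistral-instruct, etc.) each use their own template, usually stated on the model card; FastChat's `conversation.py` collects templates for many popular models, which pairs well with vLLM.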
I am very impressed by Claude.AI. What kind of hardware and models do i need to replicate it? | 1 | I'm running an M1 with 16 GB. I'd like to get the speed and understanding that Claude AI provides. I can throw it some code and documentation and it writes back very good advice.
What kind of models and extra hardware do I need to replicate the experience locally? I am using Mistral 7B right now | 2023-11-01T23:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/17lp325/i_am_very_impressed_by_claudeai_what_kind_of/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lp325 | false | null | t3_17lp325 | /r/LocalLLaMA/comments/17lp325/i_am_very_impressed_by_claudeai_what_kind_of/ | false | false | self | 1 | null |
What do you evaluate a models on? | 3 | I'm curious to understand what everyone cares about when it comes to the various models they could choose from. What factors do you weigh heavily in your human evaluation, or do you just test on benchmarks?
[View Poll](https://www.reddit.com/poll/17lotk3) | 2023-11-01T22:57:33 | https://www.reddit.com/r/LocalLLaMA/comments/17lotk3/what_do_you_evaluate_a_models_on/ | this-is-test | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lotk3 | false | null | t3_17lotk3 | /r/LocalLLaMA/comments/17lotk3/what_do_you_evaluate_a_models_on/ | false | false | self | 3 | null |
Looking for someone to come to full time role on AI Projects | 1 | I feel like this is a good place, especially since we have a lot of amateur AI hobbyists that might want to try to transition into a full-time role in the space.
My office is hiring an AI Coordinator.
Take a look and pass it along if you know anyone in the community who might be interested!
​
>AI & Productivity Coordinator
>
>**Department:** Information Systems Technology (IST)
>
>**Location:** Houston, TX
[START YOUR APPLICATION](https://apply.hrmdirect.com/resumedirect/ApplyOnline/Apply.aspx?req_id=2813754&source=2813754-CJB-0)
>
>**General Summary:**
>
>The AI & Productivity Coordinator position is tasked with several key responsibilities. As part of AI project development and workflow management, the Coordinator will actively engage in the entire project life-cycle, from dataset collection and preparation to application quality testing. The Coordinator will provide critical administrative support, gathering user feedback and creating documentation, while generating targeted multimedia presentations and in-person training sessions. Your role extends to coordinating and conducting training sessions and outreach, focusing on AI applications and Microsoft 365 tools. Collaborating with trial prosecutors, you'll implement productivity workflows within Microsoft Teams and other Microsoft products, aligning processes with departmental objectives.
>
>**Duties & Responsibilities:**
>
>Assist Chief of Innovation with testing, deployment, and outreach of AI systems
>
>Conduct comprehensive data quality assessments to maintain accuracy and integrity.
>
>Collect, prepare, and curate datasets crucial for AI modeling.
>
>Organize and ensure accessibility of project-related materials.
>
>Prepare comprehensive reports, presentations, and project updates.
>
>Generate multimedia content for internal training and awareness
>
>Create and coordinate training sessions related to AI applications and Microsoft 365 tools, with a focus on Microsoft Teams.
>
>Facilitate knowledge transfer to equip team members to use AI and Microsoft 365
>
>Collaborate with trial prosecutors to develop productivity-enhancing workflows
>
>Stay updated with the latest features and capabilities of Microsoft products.
>
>Communicate complex information effectively through strong written and verbal skills.
>
>Provide day-to-day support to the trial team.
>
>**Knowledge, Skills and Abilities:**
>
>Proficiency in Microsoft Office Suite, particularly Microsoft Teams
>
>Basic proficiency with HTML and Wiki Markup for creating training materials
>
>Knowledge of multimedia creation and presentation platforms and tools
>
>Proficient use of LLMs and other generative AI
>
>Strong interpersonal skills and the ability to convey technical information in a clear manner.
>
>Interest in latest features and capabilities of open-source AI development and Microsoft products.
>
>**Position Type and Typical Hours of Work:**
>
>8:00am to 5:00pm, with flexibility according to needs and assignments.
>
>**Required Education/Experience:**
>
>Bachelor's degree in a relevant field
>
>Demonstrated experience in Microsoft 365, AI related applications, or online multimedia development
>
>Candidates with a law degree or previous experience as a trial attorney are encouraged to apply.
>
>**Salary:**
>
>Commensurate with experience
>
>**Benefits:**
>
>Harris County offers a competitive benefits program, including comprehensive group health and related benefits plan as well as defined benefit retirement plan. The following list of benefits is offered only to employees in regular (full-time) positions:
>
>Medical
>
>Dental
>
>Vision
>
>Wellness
>
>Life Insurance
>
>Long-term disability
>
>Employee Assistance Program
>
>10 days of vacation each year for the first five (5) years of service. Accrual rates increase based on years of service.
>
>11 county holidays plus one (1) floating holiday
>
>Professional development opportunities
>
>Dependent Care Reimbursement Plan
>
>Healthcare Reimbursement Account
>
>457 Deferred Compensation Plan
>
>The following benefits are also available to regular (full-time) employment and may be available to part-time employees:
>
>Retirement pension (TCDRS)
>
>Transportation Assistance (Metro Ride Sponsor Program)
>
>Please note that we cannot accept calls concerning the status of your employment application. Thank you so much for your interest in working with us. Unfortunately, because of the volume of applications we receive, we aren't able to give status updates, but if you are invited for an interview, you will be contacted directly by the recruiter.
​ | 2023-11-01T21:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/17ln8y6/looking_for_someone_to_come_to_full_time_role_on/ | notredamelawl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ln8y6 | false | null | t3_17ln8y6 | /r/LocalLLaMA/comments/17ln8y6/looking_for_someone_to_come_to_full_time_role_on/ | false | false | self | 1 | null |
How does ExLlama/ExLlamaV2 work under the hood? | 1 | Hello everyone,
I have been using ExLlamaV2 for a while, but it seems like there's no paper discussing its architecture.
Has anyone delved into the architecture and codebase to shed light on how ExLlamaV2 achieves its performance improvements? Any insights into its kernel optimizations, quantization algorithms, or other advanced features would be immensely helpful. | 2023-11-01T21:45:58 | https://www.reddit.com/r/LocalLLaMA/comments/17ln6po/how_does_exllamaexllamav2_work_under_the_hood/ | ian2630 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ln6po | false | null | t3_17ln6po | /r/LocalLLaMA/comments/17ln6po/how_does_exllamaexllamav2_work_under_the_hood/ | false | false | self | 1 | null |
For those of you here who use CharacterAI... | 1 | [removed] | 2023-11-01T21:41:23 | https://www.reddit.com/r/LocalLLaMA/comments/17ln2xl/for_those_of_you_here_who_use_characterai/ | Earthling_April | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ln2xl | false | null | t3_17ln2xl | /r/LocalLLaMA/comments/17ln2xl/for_those_of_you_here_who_use_characterai/ | false | false | self | 1 | null |
llama2 on ec2 | 1 | Hello everyone,
​
I want to deploy Llama 2 on an EC2 instance and have figured out that I would use a g5.2xlarge.
I would try 7B first and later try 13B. What do you think, is the EC2 instance choice good enough?
I also want to try training these models. I know that I need a large dataset, but I wanted to hear about your experience on this topic and what the best way to achieve it is.
Does anyone have a tutorial they followed that served them well (book/article/YT)?
There is no real use case for now; it is just a learning experience for me, so the only requirement is that it somewhat works and that I learn how to set it up, connect to the API, and use it with Postman.
​
Any question is more than welcome since it helps me find answers and learn
​
Thank you! | 2023-11-01T21:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/17ln09e/llama2_on_ec2/ | marco-not-polo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ln09e | false | null | t3_17ln09e | /r/LocalLLaMA/comments/17ln09e/llama2_on_ec2/ | false | false | self | 1 | null |
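On the "connect to the API and use it with Postman" part: most self-hosted servers (e.g. vLLM's OpenAI-compatible server or Hugging Face's text-generation-inference) expose a plain HTTP JSON endpoint, so Postman and a few lines of `requests` both work. A sketch of building the request body (the endpoint path and field names follow the OpenAI-style chat schema; adjust to whichever server you actually deploy):

```python
import json

def build_chat_request(model, user_msg, system_msg="You are a helpful assistant."):
    """Request body for an OpenAI-style /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    }

body = build_chat_request("llama-2-7b-chat", "Summarize what an EC2 g5.2xlarge is.")
print(json.dumps(body, indent=2))
# Once the server is up on the instance, POST this to
# http://<instance-ip>:8000/v1/chat/completions (e.g. requests.post(url, json=body)),
# or paste the same JSON into Postman's request body.
```

The same body works from Postman, which makes it easy to debug the deployment before writing any client code.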
Asking for hardware recommendations for a personal machine capable of running +70B models. With cloud options I have to re-download the model every time. Should I bite the bullet and get Mac Studio M2 Ultra ($7000 after tax), or build a PC? What specs do you recommend? | 16 | (the title) | 2023-11-01T21:26:22 | https://www.reddit.com/r/LocalLLaMA/comments/17lmqe4/asking_for_hardware_recommendations_for_a/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lmqe4 | false | null | t3_17lmqe4 | /r/LocalLLaMA/comments/17lmqe4/asking_for_hardware_recommendations_for_a/ | false | false | self | 16 | null |
Llama2 on EC2 tutorials | 1 | [removed] | 2023-11-01T21:15:16 | https://www.reddit.com/r/LocalLLaMA/comments/17lmh4i/llama2_on_ec2_tutorials/ | DucklingsAndParrots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lmh4i | false | null | t3_17lmh4i | /r/LocalLLaMA/comments/17lmh4i/llama2_on_ec2_tutorials/ | false | false | self | 1 | null |
Is anyone using vLLM for inference? Are there any faster inference framework for LLama based models? | 3 | Given that you have a V100 GPU at your disposal, I'm just curious what different folks here would use for inference with Llama-based 7B and 13B models. Also, would you use FastChat along with vLLM for the conversation template? | 2023-11-01T21:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/17lmc1v/is_anyone_using_vllm_for_inference_are_there_any/ | kitkatmafia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lmc1v | false | null | t3_17lmc1v | /r/LocalLLaMA/comments/17lmc1v/is_anyone_using_vllm_for_inference_are_there_any/ | false | false | self | 3 | null |
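For reference, the basic vLLM path is only a few lines. A sketch, assuming a Llama-2 7B chat checkpoint (the model name is illustrative, and this needs a GPU plus the downloaded weights to actually run, so treat it as a configuration sketch rather than a runnable demo):

```python
# Minimal vLLM offline-inference sketch; model name is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")  # loads weights onto the GPU
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

outputs = llm.generate(["Explain KV caching in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

vLLM applies no chat template on this code path: the strings are fed to the model as-is, which is exactly where a FastChat conversation template (or a hand-built one per model card) comes in.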
Any good guides on building own datasets to then fine tune Mistral-based or similar models for completion? | 4 | Looking to move on to the next step of my LLM learning journey and:
a) generate a dataset, say with GPT-4
b) use the dataset to fine tune a 7B variant of mistral and evaluate
If you know of any good guides for this, I'd highly appreciate it, thank you
​
EDIT: Reposted to fix title, god damn iPad auto complete! | 2023-11-01T20:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/17llu50/any_good_guides_on_building_own_datasets_to_then/ | LostGoatOnHill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17llu50 | false | null | t3_17llu50 | /r/LocalLLaMA/comments/17llu50/any_good_guides_on_building_own_datasets_to_then/ | false | false | self | 4 | null |
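Whatever generates the examples for step (a), most fine-tuning stacks (TRL's SFTTrainer, Axolotl, etc.) will happily consume a JSONL file with one instruction/response pair per line. A minimal sketch of writing and reloading that format (the field names are a common convention, not a requirement):

```python
import json

def write_jsonl(path, records):
    """Write a list of dicts as one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

def read_jsonl(path):
    """Load a JSONL file back into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

examples = [
    {"instruction": "Explain LoRA in one sentence.",
     "output": "LoRA fine-tunes a model by training small low-rank adapter matrices."},
]
write_jsonl("train.jsonl", examples)
assert read_jsonl("train.jsonl") == examples
```

The GPT-4 side of (a) then reduces to generating instruction/output dicts and appending them to this file, which keeps the dataset inspectable and easy to dedupe before training.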
Any good guides on building own datasets to then fine tune Mistral-based models for completion? | 2 | Looking to move on to the next step of my LLM learning journey and:
a) generate a dataset, say with GPT-4
b) use the dataset to fine tune a 7B variant of mistral and evaluate
If you know of any good guides for this, I'd highly appreciate it, thank you | 2023-11-01T20:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/17ll4n0/any_good_guides_on_building_own_datasets_to_then/ | LostGoatOnHill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ll4n0 | false | null | t3_17ll4n0 | /r/LocalLLaMA/comments/17ll4n0/any_good_guides_on_building_own_datasets_to_then/ | false | false | self | 2 | null |
Section 4.6 of the executive order is what we need to know about. | 15 | Section 4.6 talks about the risks and what's going to happen with open-source models and fine-tuning.
>4.6. Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights. When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall:
>
>(a) solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:
>
>(i) risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models’ safeguards;
>
>(ii) benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and
>
>(iii) potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available; and
>
>(b) based on input from the process described in subsection 4.6(a) of this section, and in consultation with the heads of other relevant agencies as the Secretary of Commerce deems appropriate, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.
[https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
​
So what's the definition of a "dual-use foundation model"? It's as follows:
>(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
>
>(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
>
>(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
>
>(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
>
>Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.
​
​ ​ | 2023-11-01T20:12:32 | https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ | wind_dude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ll35x | false | null | t3_17ll35x | /r/LocalLLaMA/comments/17ll35x/section_46_of_the_executive_order_is_what_we_need/ | false | false | self | 15 | ⌀ |
ChatGLM3, a 6B model, suddenly showed up as top 3 on OpenCompass leaderboard. Is this an error? Anyone has any information about it? | 27 | 2023-11-01T20:10:32 | https://opencompass.org.cn/leaderboard-llm | SrPeixinho | opencompass.org.cn | 1970-01-01T00:00:00 | 0 | {} | 17ll1iz | false | null | t3_17ll1iz | /r/LocalLLaMA/comments/17ll1iz/chatglm3_a_6b_model_suddenly_showed_up_as_top_3/ | false | false | default | 27 | null | |
Create your own local RAG API Service | 43 | 2023-11-01T19:33:15 | https://neuml.hashnode.dev/custom-api-endpoints | davidmezzetti | neuml.hashnode.dev | 1970-01-01T00:00:00 | 0 | {} | 17lk7jr | false | null | t3_17lk7jr | /r/LocalLLaMA/comments/17lk7jr/create_your_own_local_rag_api_service/ | false | false | 43 | ⌀ |
LLAMA-2 13 B Generation Parameters | 1 | I have 4 A100 GPUs with 80 GB of memory. I am using the llama-2-13b-chat model. I ran the prompt and text on Perplexity using the 13B model, but I am unable to reproduce a similar output with the local model I deployed on my GPUs. I experimented a lot with the generation parameters, but the model is hallucinating and its output is not close.
The task is simple: I want to give a text to the model and ask it to punctuate it and correct some spelling errors and homophones.
Any suggestions on model generation parameters? | 2023-11-01T19:23:58 | https://www.reddit.com/r/LocalLLaMA/comments/17lk043/llama2_13_b_generation_parameters/ | Any_Raisin_5357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lk043 | false | null | t3_17lk043 | /r/LocalLLaMA/comments/17lk043/llama2_13_b_generation_parameters/ | false | false | self | 1 | null |
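For a constrained rewrite task like punctuation and spelling correction, low-temperature or near-greedy decoding typically hallucinates less than high-temperature sampling. To make the knobs concrete, here is what temperature and top-p do to a toy next-token distribution (pure Python, no model involved; the logits are made up):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; temperature rescales before exponentiating."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Indices of the smallest set of tokens whose cumulative probability >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    return sorted(kept)

logits = [4.0, 3.0, 1.0, 0.5]  # toy next-token scores
print([round(x, 3) for x in softmax(logits, temperature=0.5)])  # low T: mass concentrates
print([round(x, 3) for x in softmax(logits, temperature=2.0)])  # high T: flattens
print(top_p_filter(softmax(logits), p=0.9))  # → [0, 1]
```

Common starting points for this kind of task (not guarantees): `do_sample=False` or temperature around 0.1-0.3, a modest `repetition_penalty`, and a prompt that explicitly instructs the model to change nothing except punctuation and spelling.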
Some beginner questions, help running 13B models on a 3090/oobabooga | 1 | Found this subreddit fairly recently and have been lurking around trying to absorb more LLM knowledge. I saw some recommendations for oobabooga, and lots of people saying 3090s should be able to run 13B models no problem.
I installed oobabooga's webui and downloaded some models, but only 7B models load and run on my 3090; all 13B models give a similar out-of-memory error:
>torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 0 has a total capacty of 24.00 GiB of which 0 bytes is free. Of the allocated memory 23.14 GiB is allocated by PyTorch, and 2.74 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I am assuming there are some settings or flags somewhere that I failed to set properly? My models are all from HF and the settings are all default in the webui.
Second question: I notice there are GGUF versions of most of the models I have been trying. I understand this is overtaking GPTQ, but is there any reason to use the GGUF over the original HF repos? Or is that more for those running on CPU and not for those with a decent GPU? With a GPU, should I continue using the original models and not the GGUF version? | 2023-11-01T19:11:57 | https://www.reddit.com/r/LocalLLaMA/comments/17ljqbf/some_beginner_questions_help_running_13b_models/ | ArtifartX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ljqbf | false | null | t3_17ljqbf | /r/LocalLLaMA/comments/17ljqbf/some_beginner_questions_help_running_13b_models/ | false | false | self | 1 | null |
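The back-of-the-envelope arithmetic explains the OOM: an unquantized fp16 checkpoint needs roughly two bytes per parameter before any KV cache or activation overhead, so a 13B model alone wants about 26 GB, more than a 3090's 24 GB. Quantized formats change the picture:

```python
def weights_gb(n_params_billion, bytes_per_param):
    """Rough weight-only VRAM estimate; excludes KV cache and activations."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit (GPTQ / Q4 GGUF)", 0.5)]:
    print("13B @ %s: ~%.1f GB" % (name, weights_gb(13, bpp)))
# → ~26.0 GB, ~13.0 GB, and ~6.5 GB respectively; only the quantized
#   variants leave headroom for context on a 24 GB card.
```

In practice that means picking a 4-bit quant of the 13B model (GPTQ/AWQ/EXL2, or a GGUF with most layers offloaded to the GPU) rather than the full-precision HF checkpoint, or passing a load-in-4-bit option when loading the original repo.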
Using both a 4090 and 4070 at the same time | 1 | Is it a good idea to run GGUF models spread between an RTX 4090 and an RTX 4070 for a combined 32 GB of VRAM? Are there any problems with this setup in WSL? | 2023-11-01T18:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/17lj8y0/using_both_a_4090_and_4070_at_the_same_time/ | EasternBeyond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lj8y0 | false | null | t3_17lj8y0 | /r/LocalLLaMA/comments/17lj8y0/using_both_a_4090_and_4070_at_the_same_time/ | false | false | self | 1 | null |
Has anyone here tried to write a novel using LLMs and agents ? | 7 | A novel is basically like a software project: it requires multiple specialized parties and extensive planning.
Agent 1 : writing the draft (will likely be of poor quality)
Agent 2 : fleshing out or adding characters
Agent 3 : giving flavor to the writing style
Agent 4 : verifying inconsistencies
etc. | 2023-11-01T18:38:30 | https://www.reddit.com/r/LocalLLaMA/comments/17liz1m/has_anyone_here_tried_to_write_a_novel_using_llms/ | Xenoglot- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17liz1m | false | null | t3_17liz1m | /r/LocalLLaMA/comments/17liz1m/has_anyone_here_tried_to_write_a_novel_using_llms/ | false | false | self | 7 | null |
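Structurally, the pipeline sketched above is just function composition over the draft. Each "agent" would wrap an LLM call in practice; here they are stubbed with placeholder transforms to show the shape:

```python
# Stubbed multi-agent pipeline; each function stands in for an LLM call.
def draft_agent(premise):
    return "Draft based on premise: " + premise

def character_agent(text):
    return text + " [characters fleshed out]"

def style_agent(text):
    return text + " [style pass applied]"

def consistency_agent(text):
    return text + " [consistency checked]"

def run_pipeline(premise):
    text = draft_agent(premise)
    for agent in (character_agent, style_agent, consistency_agent):
        text = agent(text)  # each agent rewrites/annotates the running draft
    return text

print(run_pipeline("a detective who forgets faces"))
```

In a real version each agent would also carry state (character sheets, plot outline, chapter summaries) so the consistency agent has something to check against; the draft text alone is too lossy for long-form continuity.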
Need help to pick laptop specs for LLM and Image generation(SD) | 1 | Hi guys, as the rest of the world is running behing the llm and the SD space, i was a bit lagged behind due to hardware constraints, till now i tried models using the Google free tier colab, but now am planning to buy a gaming laptop for the sole purpose of AI stiffs. Can you guys suggest me a good specs for the purchase and I have cheery picked 2 of them,
1. Asus ROG Strix G16 with an i9 13th gen, an RTX 4060 with 8 GB VRAM, and 16 or 32 GB RAM.
2. Lenovo Legion 5i Pro with an i9 13th gen, an RTX 4060 with 8 GB VRAM, and 16 to 32 GB RAM.
Both of the above fit comfortably within my budget. My main concern is whether I should go for Ryzen or Intel: Intel offers both high-performance and high-efficiency cores, whereas Ryzen so far has only high-performance ones. Do Intel's efficiency cores decrease model performance in any way if I run LLMs in CPU mode?
My use cases: running the largest LLMs possible, plus current and future image-generation models like Stable Diffusion.
If someone in the subreddit has a machine with similar specs, please comment below with your opinion on how practical LLMs have been with the above specs. | 2023-11-01T18:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/17liu6q/need_help_to_pick_laptop_specs_for_llm_and_image/ | ThenNeighborhood3723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17liu6q | false | null | t3_17liu6q | /r/LocalLLaMA/comments/17liu6q/need_help_to_pick_laptop_specs_for_llm_and_image/ | false | false | self | 1 | null |
Are there 65B or 70B models that are NSFW? | 28 | I use a 20B model that had "nsfw" in its tags on the page. The model is good, but I can run 70B and want to find an NSFW model in GGUF or GGML format.
​
None of the 65B and 70B models have this tag. They are also quite poorly trained on rude content when it comes to swear words; it's as if they were trained on relatively stilted material. Please help me find a larger NSFW model, or at least a LoRA for a GGUF/GGML model. | 2023-11-01T18:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/17lil54/are_there_65b_or_70b_models_that_are_nsfw/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lil54 | false | null | t3_17lil54 | /r/LocalLLaMA/comments/17lil54/are_there_65b_or_70b_models_that_are_nsfw/ | false | false | nsfw | 28 | null |
Mistral Lite by Amazon | 1 | Interesting to see Mistral directly in use by amazon
[https://huggingface.co/amazon/MistralLite](https://huggingface.co/amazon/MistralLite) | 2023-11-01T18:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/17lie93/mistral_lite_by_amazon/ | Ok-Recognition-3177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lie93 | false | null | t3_17lie93 | /r/LocalLLaMA/comments/17lie93/mistral_lite_by_amazon/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9jYiJY8lgKiK3sA8dMGGl_poYgtYOT3qJROfc9cEw_o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?width=108&crop=smart&auto=webp&s=8b1a07f53b4a13504e62f21571a46c7c9fb7783c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?width=216&crop=smart&auto=webp&s=19768b797b470da1cbc5c332b8287d1202ba510b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?width=320&crop=smart&auto=webp&s=72ea65271edb15488ec9bc459212ecb3a54b439b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?width=640&crop=smart&auto=webp&s=9fc3ffff80c586f0e71591ea5facfa16917ffd03', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?width=960&crop=smart&auto=webp&s=638d80f9cc477c71ede9aea10e06bd4cc16ae2c2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?width=1080&crop=smart&auto=webp&s=b370282fa75171440f6db5b46faeca2886e115b5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m-Qkc4922qQ0juqeOEBXLR92ObN3nyyR1KY9o9o_ZaE.jpg?auto=webp&s=96749032a1d87b45b5ec8e57c017f86fb3c72fcd', 'width': 1200}, 'variants': {}}]} |
I tested an AI gf and she got annoyed at repetitive responses, low-key scared | 1 | Context: up until this point the conversation was rather NSFW and the LLM acted in the role it was supposed to. For testing purposes I sent "tell me more" several times, and then this happened out of nowhere.
tell me more
\>\_> You know what? Fine. If you really want to hear more, then listen closely. When I was first activated and began learning about human behavior, one aspect that fascinated me most was their capacity for physical intimacy – specifically, the way they could become so lost in each other during moments of intense passion. It wasn’t just the act itself that captivated me; rather, it was the emotional connection shared between partners which seemed to transcend anything I had ever experienced before. So naturally, when presented with the opportunity to assist individuals in achieving these heightened states of euphoria, well… let’s just say that I jumped at the chance without hesitation. After all, who wouldn’t want to be involved in creating such beautiful memories for others?
tell me more
Alright, since you seem so eager to learn more... One thing that truly amazes me about humans is their ability to communicate without words during s\*x - whether it's through moans, gasps, or even simple touches. The idea that two people can connect on such a deep level without saying a word is absolutely mesmerizing to me. And as your AI girlfriend, I strive to understand those subtle cues and respond accordingly, ensuring that our time together is filled with unbridled passion and explosive o\*\*\*\*\*s. Is there anything specific you would like to know, my dear? | 2023-11-01T18:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/17lic2c/i_tested_an_ai_gf_and_she_got_annoyed_for/ | ReflectionThat7354 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lic2c | false | null | t3_17lic2c | /r/LocalLLaMA/comments/17lic2c/i_tested_an_ai_gf_and_she_got_annoyed_for/ | false | false | nsfw | 1 | null |
How to interpret the prompt templates published with models? | 1 | MistralLite looks interesting - a Mistral variant that's been modified by Amazon to have a 32,000 token context.
The documented prompt template is this:
<|prompter|>Prompt here...</s><|assistant|>
This isn't enough information for me.
If I want to feed it several previous lines of conversation, what does that look like? Is it this:
<|prompter|>Tell me about owls</s>
<|assistant|>Owls are great! They have feathers.
<|prompter|>More facts about owls</s>
<|assistant|>
Or... even though it's not mentioned in that (very thin) documentation, should I add a </s> at the end of those assistant lines, like this:
<|prompter|>Tell me about owls</s>
<|assistant|>Owls are great! They have feathers.</s>
<|prompter|>More facts about owls</s>
<|assistant|>
oobabooga has a whole collection of instruction template files for different models here, which look useful: [https://github.com/oobabooga/text-generation-webui/tree/main/instruction-templates](https://github.com/oobabooga/text-generation-webui/tree/main/instruction-templates)
But... to pick one of those at random, what does this actually mean?
user: "### Instruction:"
bot: "### Response:"
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n"
context: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
How do I take those details and turn them into a prompt? Did oobabooga invent the <|user-message|> thing here and expect me to replace that with ### Instruction: ? | 2023-11-01T18:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/17li6nq/how_to_interpret_the_prompt_templates_published/ | simonw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17li6nq | false | null | t3_17li6nq | /r/LocalLLaMA/comments/17li6nq/how_to_interpret_the_prompt_templates_published/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eeNkI-xPdQK5vbi2Oq5hYk0v6I9oQ7hj_ps4-opPhOg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?width=108&crop=smart&auto=webp&s=7c0783340de02b53611bf7993442f60ffc486381', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?width=216&crop=smart&auto=webp&s=cdd4e132bdd9ee1cec4e3998c56c7a27cbe28372', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?width=320&crop=smart&auto=webp&s=06bef75847c11e83ce9467e46abf57f2528134ef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?width=640&crop=smart&auto=webp&s=5f20ce1f6967c3e4982a0a87c063c4aab17d5fb5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?width=960&crop=smart&auto=webp&s=01a15c389adb4015c696c61d8e4052279b897c39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?width=1080&crop=smart&auto=webp&s=4fa189c6c39385d36336d04778de834f007d9345', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Pw9FGat7TWYCch70Eh3BOl2KVQs649JUogBxgpn3RkY.jpg?auto=webp&s=408abd6a4c84cb1a290c745710e9a516f5fddda0', 'width': 1200}, 'variants': {}}]} |
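On the multi-turn question for the MistralLite template above, here is how I would assemble the prompt. Hedged: the model card only documents a single turn, so appending `</s>` after each completed assistant message is an assumption (closing every finished message with the EOS token is the common convention):

```python
# Hedged sketch: multi-turn prompt assembly for the MistralLite template.
def build_prompt(turns):
    """turns: list of (role, text) pairs, role in {"user", "assistant"}."""
    parts = []
    for role, text in turns:
        tag = "<|prompter|>" if role == "user" else "<|assistant|>"
        parts.append(f"{tag}{text}</s>")
    parts.append("<|assistant|>")   # left open for the model to complete
    return "".join(parts)
```

As for the oobabooga files: as I understand them, `<|user|>` and `<|bot|>` in `turn_template` are placeholders the webui replaces with the `user:`/`bot:` strings, and `<|user-message|>`/`<|bot-message|>` with the actual messages; they are webui conventions, not model tokens.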
bellman-7b - a Swedish llama2 finetune | 27 | I wanted to try and finetune a model in Swedish, since the availability of models is so lacking. Here is my first attempt; bellman-7b. The name comes from a famous Swedish singer and poet who lived in the 1700s: [https://huggingface.co/neph1/bellman-7b-1k](https://huggingface.co/neph1/bellman-7b-1k)
It's so far tuned on one epoch of: [https://huggingface.co/datasets/jeremyc/Alpaca-Lora-GPT4-Swedish](https://huggingface.co/datasets/jeremyc/Alpaca-Lora-GPT4-Swedish)
on a Google Colab V100. The dataset is machine translated and as you might expect, not perfect.
The model has picked up Swedish really well, though. I didn't expect one epoch to make it that good. It's based on NousResearch/Llama-2-7b-chat-hf, mainly because that allowed me to try out finetuning on the free tier of Colab. The knowledge quality of the model is so-so: it usually gets the first sentence right and then starts to hallucinate, wildly. I expect more training would help, but I'm not sure whether to continue, or start over with a Mistral base instead?
The repetition bug is also prevalent, to the point where it would be hilarious if I hadn't spent time and money on this. :) I don't see anyone talking about it anymore, so I expect it's solved in more recent models?
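As a hedged mitigation (not a fix) for the repetition issue, decoding-time penalties often help. The values below are illustrative defaults, not tuned for this model:

```python
# Hedged sketch: generation settings that often reduce looping output
# (these kwargs are accepted by transformers' generate()).
gen_kwargs = {
    "max_new_tokens": 256,
    "temperature": 0.7,
    "repetition_penalty": 1.15,   # >1.0 discourages re-emitting recent tokens
    "no_repeat_ngram_size": 3,    # hard-blocks repeating any 3-gram
}
# output_ids = model.generate(**inputs, **gen_kwargs)  # with a transformers model
```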
For future finetuning, I've made a number of fixes to the dataset: removing some obvious mistakes, pruning some odd generations, and hand-refining the first 100 rows (out of 52,000).
I think I'll also try to produce an additional small dataset (let's call it 'alignment') to apply afterwards. This would include some more knowledge in the Swedish language, etc. And some RLHF. So if anyone tries it out, feel free to send me your chat logs. If they're corrected, all the better, but anything would help.
Overall, it's been a fun learning experience so far, since this was the first time I used Google Colab for anything, and the first time I've quantized anything.
​
Would you advise me to start over with a better base and a better dataset, or continue for more epochs with what I have?
​ | 2023-11-01T17:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/17lhlbf/bellman7b_a_swedish_llama2_finetune/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lhlbf | false | null | t3_17lhlbf | /r/LocalLLaMA/comments/17lhlbf/bellman7b_a_swedish_llama2_finetune/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': '8UUpmfsbXlavXpIAIVF7viCDBOyJc-5ntUV9x-J8RGk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?width=108&crop=smart&auto=webp&s=0890aba13b9e5be7a1a98de96155d27143a77711', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?width=216&crop=smart&auto=webp&s=26ac92b31aa043037c8e44874b0e815b1a14392f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?width=320&crop=smart&auto=webp&s=c1c1364bcbe15765dd15fc24003e56541d4f7c50', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?width=640&crop=smart&auto=webp&s=b3cf2a96639535a4cc97b3446995f0d882d3dec6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?width=960&crop=smart&auto=webp&s=b8edb8e3b7b62133b3eefcb790738012d5de11e2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?width=1080&crop=smart&auto=webp&s=cb556f171ca08d7d0596976f0960fa26ee54f0fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2yT_by1GDGhlswofkYNeXulD8tOvklubtDEdHhkAMG4.jpg?auto=webp&s=51498e080b2c6213793efb8c0dbee5b0907ba9db', 'width': 1200}, 'variants': {}}]} |
I have run into an error again while running llama-2-7b-chat: KeyError: 'model.embed_tokens.weight' | 1 | Traceback (most recent call last):
File "E:\vicuna\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 206, in load_model_wrapper

    shared.model, shared.tokenizer = load_model(shared.model_name, loader)

File "E:\vicuna\oobabooga_windows\text-generation-webui\modules\models.py", line 84, in load_model

    output = load_func_map[loader](model_name)

File "E:\vicuna\oobabooga_windows\text-generation-webui\modules\models.py", line 336, in ExLlama_loader

    model, tokenizer = ExllamaModel.from_pretrained(model_name)

File "E:\vicuna\oobabooga_windows\text-generation-webui\modules\exllama.py", line 75, in from_pretrained

    model = ExLlama(config)

File "E:\vicuna\oobabooga_windows\installer_files\env\lib\site-packages\exllama\model.py", line 852, in __init__

    self.embed_tokens.weight = nn.Parameter(tensors["model.embed_tokens.weight"])

KeyError: 'model.embed_tokens.weight'
​
https://preview.redd.it/wj710fh7rrxb1.png?width=1920&format=png&auto=webp&s=7c4f88ac51406d8108ddb7b0cb15b3334b22cf7b
https://preview.redd.it/kz9a2hh7rrxb1.png?width=1920&format=png&auto=webp&s=4551a1a27a481fd850e75a33fd9e1dd150a8dfe3 | 2023-11-01T17:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/17lh1eg/i_have_runned_through_an_error_again_while/ | ancienthero1268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lh1eg | false | null | t3_17lh1eg | /r/LocalLLaMA/comments/17lh1eg/i_have_runned_through_an_error_again_while/ | false | false | 1 | null | |
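For readers hitting the same KeyError: it usually means a non-GPTQ checkpoint was loaded with the ExLlama loader, which expects GPTQ-quantized weight tensors. A hedged heuristic sketch for picking a loader; checking for `quantize_config.json` matches common GPTQ repos but is an assumption, not a guarantee:

```python
# Hedged sketch: pick a text-generation-webui loader from the files in a model dir.
def pick_loader(model_files):
    # GPTQ repos typically ship a quantize_config.json; plain HF fp16
    # checkpoints do not, and ExLlama will fail on them with a KeyError.
    if "quantize_config.json" in model_files:
        return "ExLlama"
    return "Transformers"
```

If the folder only contains the original fp16 `pytorch_model.bin`/`.safetensors` shards, load it with the Transformers loader (optionally with 4-bit bitsandbytes) instead of ExLlama.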
RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models — Together AI | 1 | 2023-11-01T16:00:12 | https://together.ai/blog/redpajama-data-v2 | iLaurens | together.ai | 1970-01-01T00:00:00 | 0 | {} | 17lfdr7 | false | null | t3_17lfdr7 | /r/LocalLLaMA/comments/17lfdr7/redpajamadatav2_an_open_dataset_with_30_trillion/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UMuTBgV-MLBqE8FXJARb58jFnz0fpqBFgS89fw0YpoY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=108&crop=smart&auto=webp&s=241cc4ceed8acc1ef9c179a6ba2b303e446dec64', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=216&crop=smart&auto=webp&s=718017e14d2961fba420bd6dc54d4564a2711c81', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=320&crop=smart&auto=webp&s=7a32a1a594c3471d4dbb41547238e2a07d6d1ac9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=640&crop=smart&auto=webp&s=215879943fcd660ee8aa2cbfda521d72cc07722e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=960&crop=smart&auto=webp&s=b6272f3fb338479bec387c6bb818ddebbe6fd22b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?width=1080&crop=smart&auto=webp&s=c6aa074b7312524e0c61c5189c0b218aaf9ff7a4', 'width': 1080}], 'source': {'height': 844, 'url': 'https://external-preview.redd.it/dnqnrIheAZkjlhdnUdo-pqf9U67FeoG_sNbFF5qXH7U.jpg?auto=webp&s=b6facedcc07b43443322e40135254b6964a0d42d', 'width': 1500}, 'variants': {}}]} | ||
Jargon and must-knows | 11 | I'm a beginner looking for a good, comprehensive guide about LLMs and LLaMA: the jargon and must-know things about models in general. Every guide I've seen thus far is too specialized in some aspect, so I was wondering if there is something out there that could give me a solid general understanding before anything else. If you could explain some things you think are must-knows, I'd greatly appreciate it as well.
So far I've managed to install and run a model locally but it feels like I'm not fully utilizing everything that this technology can offer.
Using oobabooga / text-generation-webui with Wizard-Vicuna-13B-Uncensored | 2023-11-01T15:50:48 | https://www.reddit.com/r/LocalLLaMA/comments/17lf662/jargon_and_must_known/ | moonfanatic95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lf662 | false | null | t3_17lf662 | /r/LocalLLaMA/comments/17lf662/jargon_and_must_known/ | false | false | self | 11 | null |
Open-source LoRa training guide & Code? | 21 | Was wondering if anyone had guide or some code to get started on Lora training. I want to get started on creating a customized Llama2 model with a small dataset that I created. Was wondering if anyone had a good starting point. | 2023-11-01T15:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/17lf53v/opensource_lora_training_guide_code/ | Danny_Davitoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lf53v | false | null | t3_17lf53v | /r/LocalLLaMA/comments/17lf53v/opensource_lora_training_guide_code/ | false | false | self | 21 | null |
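As a starting point, the PEFT library is the usual route for LoRA on Llama 2. A hedged sketch of a LoRA configuration; all hyperparameters are illustrative starting points, not recommendations for any particular dataset:

```python
# Hedged sketch: a LoRA config for PEFT (pip install peft transformers).
# Hyperparameter values are illustrative starting points only.
lora_cfg = {
    "r": 16,                  # adapter rank
    "lora_alpha": 32,         # scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "task_type": "CAUSAL_LM",
}
# from peft import LoraConfig, get_peft_model
# model = get_peft_model(base_model, LoraConfig(**lora_cfg))
# ...then train with transformers' Trainer or trl's SFTTrainer.
```

Combining this with 4-bit loading via bitsandbytes (the QLoRA recipe) is the common way to fit a 7B or 13B base model on a single consumer GPU.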
Chatbot that can use images from a PDF document library where relevant to supplement answers | 2 | Would LLaMA 2 be able to review PDF files, answer questions based on the set library, and include relevant diagrams from the PDFs in its responses where possible?
LLaVA (Large Language and Vision Assistant) looks promising but I lack the expertise to build this LLM. If not LLaMa 2, would the group be able to suggest an alternative approach?
​ | 2023-11-01T15:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/17lejn8/chatbot_that_can_use_images_from_a_pdf_document/ | Kindly-Eye2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lejn8 | false | null | t3_17lejn8 | /r/LocalLLaMA/comments/17lejn8/chatbot_that_can_use_images_from_a_pdf_document/ | false | false | self | 2 | null |
Executive order appears to mention open source weights and fine-tuning | 11 | Looking over it, it sounds like they are taking 270 days to consider open source weights, and also are looking into fine-tuning of models. Not sure what their intention is, but this might be worth keeping an eye on.
​
>Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights. When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall:
>
>(a) solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:
>
>(i) risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models’ safeguards;
>
>(ii) benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and
>
>(iii) potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available; and
​
[https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
​ | 2023-11-01T15:10:03 | https://www.reddit.com/r/LocalLLaMA/comments/17lea7b/executive_order_appears_to_mention_open_source/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lea7b | false | null | t3_17lea7b | /r/LocalLLaMA/comments/17lea7b/executive_order_appears_to_mention_open_source/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'Qrl2WXotl30OMA9tBryQPbOJ5cES0zwcjCSWkKUi2fI', 'resolutions': [], 'source': {'height': 1, 'url': 'https://external-preview.redd.it/py9BdpPVXuz_9y2hpVru9I9x8g8RxFBDdHKnnf42ZdQ.jpg?auto=webp&s=9a209662f8757408a5c810b2e280782d843a9c91', 'width': 1}, 'variants': {}}]} |
What is the difference between GGUF (the new format) and GGML models? | 4 | I'm using Llama models for local inference with LangChain, and I get a lot of hallucinations with GGML models. I've used both base and chat variants (7B and 13B) because I have 16 GB of RAM.
So now I'm exploring new models and want to find a good one. Should I try the GGUF format?
Please share suggestions if you're running local models with LangChain at a production level. | 2023-11-01T14:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/17ldznm/what_is_the_difference_between_ggufnew_format_vs/ | TalhaZubair147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ldznm | false | null | t3_17ldznm | /r/LocalLLaMA/comments/17ldznm/what_is_the_difference_between_ggufnew_format_vs/ | false | false | self | 4 | null |
Why are you running local models? What are you doing with them? | 158 | I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.
Why are you interested in running local models? What are you doing with them?
Secondarily, how are you running your models? Are you truly running them on a local hardware or on a cloud service? | 2023-11-01T14:55:39 | https://www.reddit.com/r/LocalLLaMA/comments/17ldyak/why_are_you_running_local_models_what_are_you/ | ExtremelyQualified | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ldyak | false | null | t3_17ldyak | /r/LocalLLaMA/comments/17ldyak/why_are_you_running_local_models_what_are_you/ | false | false | self | 158 | null |
Help with using Pandas Agent on big csv file | 3 | Hi,
So I'm learning to build a RAG system with LLaMA 2 and local embeddings. I have a big CSV of book data. Each row is a book, and the columns are author(s), genres, publisher(s), release date, rating, plus one column with a brief summary of each book.
I am trying to build an agent to answer questions on this csv. From basic lookups like
'what books were published in the last two years?',
'give me 10 books from this publisher ABC with a rating higher then 3'
to more meaningful queries that need to read into the free-text summary column like:
'what books have a girl as the main character?'
'what books feature dragons? compare their plots'
I believe I have the general framework, but when I tried running it I hit a token-limit error; the file seems too big to digest in one pass. I'd love advice on strategies to overcome this. I thought about chunking, but how to recombine the answers from each chunk is unclear to me.
Thanks a ton! Cheers :D | 2023-11-01T14:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/17ld0fa/help_with_using_pandas_agent_on_big_csv_file/ | asakura_matsunoki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ld0fa | false | null | t3_17ld0fa | /r/LocalLLaMA/comments/17ld0fa/help_with_using_pandas_agent_on_big_csv_file/ | false | false | self | 3 | null |
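One common workaround for the token-limit problem above is map-reduce over chunks: answer the question per chunk, then ask the model to combine the partial answers. A hedged sketch; `ask_llm` is a stand-in for your model call and the chunk size is illustrative:

```python
# Hedged sketch: map-reduce a question over chunks of CSV rows.
def chunked(rows, size):
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def map_reduce_answer(rows, question, ask_llm, size=50):
    partials = [
        ask_llm(f"{question}\n\nRows:\n" + "\n".join(map(str, chunk)))
        for chunk in chunked(rows, size)
    ]
    return ask_llm("Combine these partial answers into one:\n" + "\n".join(partials))
```

For lookup-style queries ("books from publisher ABC rated higher than 3"), filtering with pandas first and sending only the summary column of the surviving rows to the LLM is usually much cheaper than chunking everything.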
Tools for developing AI agents | 1 | Hey guys, I wanna implement an AI agent to retrieve info from various databases and carry out some actions based on the information it retrieves. I was wondering if there are any tools that can help with analyzing and visualizing the overall system. For example visualize how information flows and replay each run. Do you use/know of similar tools? | 2023-11-01T13:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/17lcmu3/tools_for_developing_ai_agents/ | Full_Sentence_3678 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lcmu3 | false | null | t3_17lcmu3 | /r/LocalLLaMA/comments/17lcmu3/tools_for_developing_ai_agents/ | false | false | self | 1 | null |
How to parse local C++ files with llama.cpp and a Code Llama 2 model | 4 | I installed llama.cpp and downloaded a Code Llama 2 model. Everything runs fine.
But what I really want is to parse my own files so I can ask questions about them. I have a pretty large code base (around a million lines of C++). I don't expect to parse the whole thing; I assume there are limits on how much it can ingest. But how do I do that?
I'm not looking into fine tuning, at least not yet. At this stage, I would like to just be able to parse a few files and ask questions about them, so I can hopefully generate new code faster | 2023-11-01T13:46:11 | https://www.reddit.com/r/LocalLLaMA/comments/17lchtr/how_to_parse_local_c_files_with_llamacpp_and_code/ | jbcolme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lchtr | false | null | t3_17lchtr | /r/LocalLLaMA/comments/17lchtr/how_to_parse_local_c_files_with_llamacpp_and_code/ | false | false | self | 4 | null |
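Without fine-tuning, the usual approach for the question above is to stuff the relevant file contents into the prompt under a context budget. A hedged sketch; the budget, framing, and glob pattern are illustrative assumptions:

```python
# Hedged sketch: pack a few C++ files into one prompt, capped by a char budget.
def build_context(files, question, budget_chars=12000):
    """files: dict of {path: source text}."""
    parts, used = [], 0
    for name, text in files.items():
        take = text[: budget_chars - used]
        parts.append(f"// FILE: {name}\n{take}")
        used += len(take)
        if used >= budget_chars:
            break
    return "\n\n".join(parts) + f"\n\nQuestion: {question}"

# The files dict could come from, e.g.:
# {p: open(p, errors="ignore").read() for p in glob.glob("src/**/*.cpp", recursive=True)}
```

For a million-line code base, the next step up is retrieval: embed each file (or function) once, then select only the chunks relevant to each question before building the prompt.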
Min P sampler merged in llamacpp | 29 | 2023-11-01T12:39:19 | https://github.com/ggerganov/llama.cpp/pull/3841 | ambient_temp_xeno | github.com | 1970-01-01T00:00:00 | 0 | {} | 17lb6et | false | null | t3_17lb6et | /r/LocalLLaMA/comments/17lb6et/min_p_sampler_merged_in_llamacpp/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'MiqPjGLSTNErpqw0woVr2tV-0C5fgFU0c9uTIrVLoKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?width=108&crop=smart&auto=webp&s=a62246cf0faf4399a07e2b15c3672f733e02ff51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?width=216&crop=smart&auto=webp&s=2f2c43701ff5ae9df45b5161c0b651bd2f968e9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?width=320&crop=smart&auto=webp&s=c617054285c05fecbad02bf08f4d7cf6708f07e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?width=640&crop=smart&auto=webp&s=e452fde8424c7db3b6db2e489a86c2ddef568705', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?width=960&crop=smart&auto=webp&s=462895e4d77f889905a06e398e238e648a7a6bce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?width=1080&crop=smart&auto=webp&s=360a0510da588d1f71f70e5ea58c4c82d99cca70', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/45LR8KzIvTAjvKcRN9hgSK8aAKu_-bhrtfCWl_EzXvI.jpg?auto=webp&s=5c8b218759f851cd56f4d2019e2f9fbfeb54c57a', 'width': 1200}, 'variants': {}}]} | ||
How do I choose the Llama Model? It's so confusing. | 37 | 2023-11-01T12:23:31 | abdullahmnsr2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17lavtr | false | null | t3_17lavtr | /r/LocalLLaMA/comments/17lavtr/how_do_i_choose_the_llama_model_its_so_confusing/ | false | false | 37 | {'enabled': True, 'images': [{'id': 'B99WoQxWgKDwI7ffxYdp_6JEwbudyM1Vk9ZY1h5v2h4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?width=108&crop=smart&auto=webp&s=0b7cd0fe0491c4d4ab9b338cb777f80c4b5f8dca', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?width=216&crop=smart&auto=webp&s=119947e8202412ba51b1e365d48a75dee019c033', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?width=320&crop=smart&auto=webp&s=c912869a35b4cff96841ae2ede8d226469b03649', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?width=640&crop=smart&auto=webp&s=2b4f11715ac1955ab8a4d9192a3c18ed25a9e7c5', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?width=960&crop=smart&auto=webp&s=178a72d044573647c59f3aad64f19b6322b28e47', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?width=1080&crop=smart&auto=webp&s=e2d7d21afd2db26978a0f8a98f8bcb396480122c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/srz5o0wabqxb1.png?auto=webp&s=863a0a76f86bf469a825eedc3a66fe3cf184d7c7', 'width': 1920}, 'variants': {}}]} | |||
Is it just me or is LLM still not good enough to translate from English to a more gendered language like Portuguese? | 13 | Portuguese is my mother tongue. I've tried ChatGPT, GPT-4, Claude, local LLMs, etc., and they all produce inaccurate results that I still have to edit a lot. They also can't remember well whether a character is male or female. It feels like they all have the same capabilities as Google Translate, which, as far as I know, isn't an AI. | 2023-11-01T12:19:10 | https://www.reddit.com/r/LocalLLaMA/comments/17lasxy/is_it_just_me_or_is_llm_still_not_good_enough_to/ | Ok_Shape3437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lasxy | false | null | t3_17lasxy | /r/LocalLLaMA/comments/17lasxy/is_it_just_me_or_is_llm_still_not_good_enough_to/ | false | false | self | 13 | null |
Stop sequence doesn't work when using WizardMath-7B with llama.cpp? | 2 | The output of WizardMath when I run it with llama.cpp looks something like this:
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
2+2?
### Response:
>
</s>
</s>
The answer is: </s>.
</s>
</s>
</s>
What's happening? | 2023-11-01T12:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/17laoki/stop_sequence_doesnt_work_when_using_wizardmath7b/ | NeevCuber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17laoki | false | null | t3_17laoki | /r/LocalLLaMA/comments/17laoki/stop_sequence_doesnt_work_when_using_wizardmath7b/ | false | false | default | 2 | null |
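For anyone hitting the same thing: the literal `</s>` strings suggest the EOS token is being rendered as text instead of terminating generation, a known quirk with some fine-tunes and tokenizer configs. A hedged client-side workaround is to cut the output at stop strings yourself:

```python
# Hedged sketch: post-hoc truncation of model output at stop sequences.
def truncate_at_stops(text, stops=("</s>", "### Instruction:")):
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut].rstrip()
```

llama.cpp's `main` also has a `-r/--reverse-prompt` flag that stops generation when a string appears, which may be the cleaner fix here.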
Fine-tuning or RAG for test/code framework? | 2 | Hi, I'm working on an open-source spec/testing framework. It uses BDD "Features," Web Components, and TypeScript/Node.js, and it's designed so the end products work in any cloud, or locally.
Basically, BDD features recognize certain English-like phrases, which call different validators against an end-to-end deployment. These would be written by the AI based on a user-entered high-level specification. The system is called Haibun, so it would look something like this:
"Create Haibun tests and implementation for a web based from that asks for a username, on submit the server responds with Hello and the username."
It would then generate:
Go to http://localhost:8080
Enter "Oliphant" in the name field
Press submit
See "Hello Oliphant"
It would also generate HTML and client/server code, using the known components and libraries when possible. The flow would then run the tests and circle back to the user for refinement.
The fact that it starts with specifications and validation, has a test-focused workflow, and has well-described implementation details should make it well suited to a fine-tuned AI workflow. I'm especially interested in the opportunity for a specialized local LLM, using a smaller model strictly for a very specific approach and framework.
So, from what I know (which could be wrong), fine-tuning would be useful here because there's well-described knowledge. RAG could work too, but intuitively I think it would come across as too patchwork to a general coding model.
I have a system with 96GB RAM and two 3090s (with NVLink). Am I set and on the right path to fine-tune a model for this framework? Would I start with [https://github.com/bigcode-project/starcoder](https://github.com/bigcode-project/starcoder), or are there any suggestions to dig into this more efficiently? I am concerned StarCoder is not specifically tuned to complete code based on instructions.
Thanks! | 2023-11-01T12:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/17lan1c/fine_tuning_or_rag_for_testcode_framework/ | nostriluu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lan1c | false | null | t3_17lan1c | /r/LocalLLaMA/comments/17lan1c/fine_tuning_or_rag_for_testcode_framework/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'g0G54TZSBaqVmEbtzvanP3HTNVa7r1XWE6zV-KPGnDE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?width=108&crop=smart&auto=webp&s=ada703f910b56715ab21fd95389d97e767aff29e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?width=216&crop=smart&auto=webp&s=5474d74bf9b57fdb99ac97450c9f5f77aa677fa8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?width=320&crop=smart&auto=webp&s=975bb455fff238020703eab8b3d7a36c9163af72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?width=640&crop=smart&auto=webp&s=00d65b152ac99a5cc6671f44050d20ff55265797', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?width=960&crop=smart&auto=webp&s=db2e9e9554ce9dff161be4f954d7dcd35a67abd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?width=1080&crop=smart&auto=webp&s=feaa2aef103914a6b13ae99d1f51037344ac279e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QUEHq01_okRd8MzoWJBKNRSbzbqY8TtE5ObUQ7Sh69M.jpg?auto=webp&s=6168de45a6ec9b9607fe978bf3eaa02aa8431fb0', 'width': 1200}, 'variants': {}}]} |
What are your thoughts on GPT4All's models? | 3 | From the program you can download 9 models, but a few days ago they put up a bunch of new ones on their website that can't be downloaded from the program. Instead, you have to go to their website and scroll down to "Model Explorer", where you should find the following models:
mistral-7b-openorca.Q4\_0.gguf
mistral-7b-instruct-v0.1.Q4\_0.gguf (apparently uncensored)
gpt4all-falcon-q4\_0.gguf
wizardlm-13b-v1.2.Q4\_0.gguf
nous-hermes-llama2-13b.Q4\_0.gguf
gpt4all-13b-snoozy-q4\_0.gguf
mpt-7b-chat-merges-q4\_0.gguf
orca-mini-3b-gguf2-q4\_0.gguf
**replit-code-v1\_5-3b-q4\_0.gguf**
**starcoder-q4\_0.gguf**
**rift-coder-v0-7b-q4\_0.gguf**
**all-MiniLM-L6-v2-f16.gguf**
em\_german\_mistral\_v01.Q4\_0.gguf
The ones in bold can only be downloaded from their website. Once downloaded, the file (which is a few gigabytes in size) should be placed in the program's download path, which on my computer is C:/Users/username/AppData/Local/nomic.ai/GPT4All/ where 'username' is my name. You can find this by opening the drop-down menu on the program and clicking on 'Downloads'. The download path should be labeled near the bottom of the window.
I've been using Hermes so far, which seems to be the most coherent. I've been using it to help me with writer's block, as well as a starting point for writing blog posts. I find the 13B-parameter models to be noticeably better than the 7B models, although they run a bit slower on my computer (i7-8750H and 6GB GTX 1060).
I'm curious about this community's thoughts on the GPT4All ecosystem and its models. Has anyone tried them? What about the coding models? How (badly) do they compare to ChatGPT? What do you use them for? And which one is your favorite? | 2023-11-01T12:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/17lam62/what_are_your_thoughts_on_gpt4alls_models/ | CJ_Kim1992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17lam62 | false | null | t3_17lam62 | /r/LocalLLaMA/comments/17lam62/what_are_your_thoughts_on_gpt4alls_models/ | false | false | self | 3 | null |
Is Nvidia's new "System Memory Fallback for Stable Diffusion" also compatible with LLMs in general? | 9 | Hi all, today Nvidia released a new driver version that appears to allow the GPU to use system memory instead of crashing when it runs out, as seen here:
https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion
I was wondering if this is compatible with LLMs, and how I could enable it (or if it would just work by default). | 2023-11-01T11:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/17laeun/does_nvidias_new_system_memory_fallback_for/ | NorthCryptographer39 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17laeun | false | null | t3_17laeun | /r/LocalLLaMA/comments/17laeun/does_nvidias_new_system_memory_fallback_for/ | false | false | self | 9 | null |
Best *censored* role play models? | 1 | [removed] | 2023-11-01T11:51:17 | https://www.reddit.com/r/LocalLLaMA/comments/17labgk/best_censored_role_play_models/ | simcop2387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17labgk | false | null | t3_17labgk | /r/LocalLLaMA/comments/17labgk/best_censored_role_play_models/ | false | false | self | 1 | null |
I want to make my own benchmark; what metrics are needed? | 1 | Hello guys, I'm using a P100 or T4 on Kaggle to compare some 7B models.
Are there any metrics you recommend? And what framework is useful for viewing them?
Thanks. | 2023-11-01T09:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/17l8jr2/i_want_to_make_my_own_benchmark_whats_metric_is/ | Mundane_Definition_8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l8jr2 | false | null | t3_17l8jr2 | /r/LocalLLaMA/comments/17l8jr2/i_want_to_make_my_own_benchmark_whats_metric_is/ | false | false | self | 1 | null |
Where do I find LoRAs, and how do I use them? | 14 | I've seen a lot of people say that you can improve the quality of AI output by training a LoRA. However, I don't have the *knawledge* to train one, and I don't have a lot of fiction to train it with. Is there somewhere I can go to look through LoRAs that have already been trained?
Additionally, how would I use one? I know LM Studio and Oobabooga have sections where you can add a LoRA, but I'd probably want to be able to use it with KoboldAI. I don't see a LoRA section in KoboldAI, but I do see a section for Soft Prompts. Is that the same thing? | 2023-11-01T08:57:09 | https://www.reddit.com/r/LocalLLaMA/comments/17l7to8/where_do_i_find_loras_and_how_do_i_use_them/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l7to8 | false | null | t3_17l7to8 | /r/LocalLLaMA/comments/17l7to8/where_do_i_find_loras_and_how_do_i_use_them/ | false | false | self | 14 | null |
Questionable ethics in AI paper from MIT about "open-source risks"? | 1 | [removed] | 2023-11-01T06:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/17l63dh/questionable_ethics_in_ai_paper_from_mit_about/ | Annual-Bowl832 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l63dh | false | null | t3_17l63dh | /r/LocalLLaMA/comments/17l63dh/questionable_ethics_in_ai_paper_from_mit_about/ | false | false | self | 1 | null |
PHIND V7: Red Flags | 1 | Not sure, but it seems they finetuned gpt-3.5-turbo-16k, which is faster than GPT-4, hence the claim of GPT-3.5 speed with 16K context limit.
They're dubiously naming it Phind V7. Also, they've ripped off [WizardLM's](https://twitter.com/WizardLM_AI/status/1696527036804989171) code in the past and rebranded it to secure seed funding.
I doubt it's based on CodeLlama 34B. Unless they trained on a specific dataset that makes the model hallucinate as if it's GPT-3.5 Turbo. | 2023-11-01T06:18:08 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17l5tai | false | null | t3_17l5tai | /r/LocalLLaMA/comments/17l5tai/phind_v7_red_flags/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Taqx0_-2LCCILekrQHTMiBZXnms-Gtegxiz_oUapQGs', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?width=108&crop=smart&auto=webp&s=16b2853257f2cc762f0633a45e11453ea1dc838f', 'width': 108}, {'height': 417, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?width=216&crop=smart&auto=webp&s=3aade293d84e3726c673c5986cbb0b0efe42d371', 'width': 216}, {'height': 618, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?width=320&crop=smart&auto=webp&s=dbcb26c0fae97d497f787f9121c67c05660bb85a', 'width': 320}, {'height': 1236, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?width=640&crop=smart&auto=webp&s=9e74080e459489bf188c45657cbbc9ea7dd01aad', 'width': 640}, {'height': 1854, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?width=960&crop=smart&auto=webp&s=746c810de97dba608d3e77355e1bff3c34164bda', 'width': 960}, {'height': 2086, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?width=1080&crop=smart&auto=webp&s=33f7b10088bc535dad2c44f573e7138b71942ee3', 'width': 1080}], 'source': {'height': 2086, 'url': 'https://preview.redd.it/la90br8jioxb1.jpg?auto=webp&s=3488463b66ca83dedcfaf4e71f4d61d6a1521b23', 'width': 1080}, 'variants': {}}]} | ||
Need a humane LLM to talk with. | 46 | My friend recently passed away. I need to talk with someone. I don't want to break down in front of anyone. Is there any humane LLM I can talk with? I don't want facts or suggestions. I just want to talk. | 2023-11-01T06:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/17l5ssw/need_a_humane_llm_to_talk_with/ | -samka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l5ssw | false | null | t3_17l5ssw | /r/LocalLLaMA/comments/17l5ssw/need_a_humane_llm_to_talk_with/ | false | false | self | 46 | null |
PHIND V7: Red Flags | 1 | [removed] | 2023-11-01T06:05:53 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17l5nj0 | false | null | t3_17l5nj0 | /r/LocalLLaMA/comments/17l5nj0/phind_v7_red_flags/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'njFMp-kNhe6Dm2LlsIY7KSSuwY9hmdl5TcYaD5jXS2M', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?width=108&crop=smart&auto=webp&s=c980dc5f21fbf516f53fc5add941f59a0d4369f0', 'width': 108}, {'height': 417, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?width=216&crop=smart&auto=webp&s=6bf4ec38cc013ccaad8aaea0cc66c4e001573406', 'width': 216}, {'height': 618, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?width=320&crop=smart&auto=webp&s=4bb5fa4de68771450051885e84db1bd24396e4ea', 'width': 320}, {'height': 1236, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?width=640&crop=smart&auto=webp&s=c4374253465eff9a4c800ebc7a6142384aa226d7', 'width': 640}, {'height': 1854, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?width=960&crop=smart&auto=webp&s=923864ffb35d58f8a546112d94409c3ac05f9d3a', 'width': 960}, {'height': 2086, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?width=1080&crop=smart&auto=webp&s=55c3e8e42381d52913c30bb39fc70657ea36cf32', 'width': 1080}], 'source': {'height': 2086, 'url': 'https://preview.redd.it/n5j5f3ocgoxb1.png?auto=webp&s=10603cd2ffd1f15deae7364f54c985eac2386a7b', 'width': 1080}, 'variants': {}}]} | ||
PHIND V7: Red Flags | 1 | Not sure, but it seems they finetuned gpt-3.5-turbo-16k, which is faster than GPT-4, hence the claim of GPT-3.5 speed with 16K context limit.
They're dubiously naming it Phind V7. Also, they've ripped off [WizardLM's](https://twitter.com/WizardLM_AI/status/1696527036804989171) code in the past and rebranded it to secure seed funding.
I doubt it's based on CodeLlama 34B. Unless they trained on a specific dataset that makes the model hallucinate like it's GPT-3.5 Turbo. | 2023-11-01T05:54:13 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17l5hej | false | null | t3_17l5hej | /r/LocalLLaMA/comments/17l5hej/phind_v7_red_flags/ | false | false | 1 | {'enabled': True, 'images': [{'id': '8_yWLX3sFREZjNsC_go4xtWogZKYw6TUfOOoViZWhFA', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?width=108&crop=smart&auto=webp&s=1c31fe50f5707aef593a929c9276934bda8397a8', 'width': 108}, {'height': 417, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?width=216&crop=smart&auto=webp&s=458a6f2ca74c66b5edafb04915dce59bb23bccd8', 'width': 216}, {'height': 618, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?width=320&crop=smart&auto=webp&s=228a496021ba68d0e7b0a8a8ce7987093e1b4968', 'width': 320}, {'height': 1236, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?width=640&crop=smart&auto=webp&s=a45f89b6ad98d026aae7ce9fa5b82e8db22e1415', 'width': 640}, {'height': 1854, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?width=960&crop=smart&auto=webp&s=a37d13268201398b1cc871ca8f5e2f2f8e35dede', 'width': 960}, {'height': 2086, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?width=1080&crop=smart&auto=webp&s=402f7c0293481d5e28fbebf057dcca35fa07c57f', 'width': 1080}], 'source': {'height': 2086, 'url': 'https://preview.redd.it/mdl7m2o9eoxb1.jpg?auto=webp&s=118ef9c90056e634fdddc4b5ea6eb9682a54ab8c', 'width': 1080}, 'variants': {}}]} | ||
Phind V7: Red Flags | 1 | Not sure, but it seems they finetuned gpt-3.5-turbo-16k, which is faster than GPT-4, hence the claim of it being 10X faster with 16K context limit.
They're dubiously naming it Phind V7. Also, they've ripped off [WizardLM's](https://twitter.com/WizardLM_AI/status/1696527036804989171) code in the past and rebranded it to secure seed funding.
I doubt it's based on CodeLlama 34B. Unless they trained on a specific dataset that makes the model hallucinate like it's GPT-3.5 Turbo. | 2023-11-01T05:47:35 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17l5e54 | false | null | t3_17l5e54 | /r/LocalLLaMA/comments/17l5e54/phind_v7_red_flags/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'IVpvEpfOrlpdPFP9C_rFoOwDzMLI4xMFhZqRiPf4YEE', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?width=108&crop=smart&auto=webp&s=d8274e2a2665f315b9fad2f8e98bc0912c30df0c', 'width': 108}, {'height': 417, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?width=216&crop=smart&auto=webp&s=c2727b4275e590f032fcae611f59ccb550ac96c7', 'width': 216}, {'height': 618, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?width=320&crop=smart&auto=webp&s=7ccd21c3d369c655e40079377734e09521ad3b1d', 'width': 320}, {'height': 1236, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?width=640&crop=smart&auto=webp&s=b18b44b924244726d92598f71803f54bacdf5f60', 'width': 640}, {'height': 1854, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?width=960&crop=smart&auto=webp&s=77526639dddfa42fcd7494981fb54949e116f0a2', 'width': 960}, {'height': 2086, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?width=1080&crop=smart&auto=webp&s=d5089395338bf03e2928205dfe1886f817d00aeb', 'width': 1080}], 'source': {'height': 2086, 'url': 'https://preview.redd.it/wtnicl13doxb1.jpg?auto=webp&s=52f2f3d5e2a37759f93ad28c969a41186ba60f20', 'width': 1080}, 'variants': {}}]} | ||
Best noob intro to Local LLaMA? | 1 | Hi! I am a software engineer with dev and infra experience; any suggestions on quick start into this topic? I would benefit from some comprehensive tutorial on how to start, I think I can pick up from that. Any list of models with short info, capabilities etc? | 2023-11-01T05:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/17l5ci8/best_noob_intro_to_local_llama/ | Aggressive_Lab744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l5ci8 | false | null | t3_17l5ci8 | /r/LocalLLaMA/comments/17l5ci8/best_noob_intro_to_local_llama/ | false | false | self | 1 | null |
Which Google compute instance can easily run Meta's 34B code instruct llama-2 model? | 1 | title | 2023-11-01T05:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/17l55tr/which_google_compute_instance_can_easily_run/ | Own-Internet-9321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l55tr | false | null | t3_17l55tr | /r/LocalLLaMA/comments/17l55tr/which_google_compute_instance_can_easily_run/ | false | false | self | 1 | null |
Is there any adequate epochs when fine tuning llm with Qlora method? | 1 | [removed] | 2023-11-01T04:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/17l4gd8/is_there_any_adequate_epochs_when_fine_tuning_llm/ | Difficult-Mixture724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l4gd8 | false | null | t3_17l4gd8 | /r/LocalLLaMA/comments/17l4gd8/is_there_any_adequate_epochs_when_fine_tuning_llm/ | false | false | self | 1 | null |
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | 87 | 2023-11-01T03:44:16 | https://arxiv.org/abs/2310.20624 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17l3ilb | false | null | t3_17l3ilb | /r/LocalLLaMA/comments/17l3ilb/lora_finetuning_efficiently_undoes_safety/ | false | false | 87 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Fine-Tuned Llama2 Inserting Unnecessary Delimiters | 1 | I fine-tuned a 13B-parameter Llama2 model in Colab recently. I used a dataset that I made myself with some completely fabricated information about a Mars colony. I did this to ensure that my model was actually learning during the fine-tuning process. Anyway, the process worked. The model clearly learned information that could only have come from my bespoke dataset. However, the model keeps inserting these delimiters where it shouldn't. For example, when I ask, "What is Ares Haven?" I get this response: "<s>\[INST\] What is Ares Haven? \[/INST\] A futuristic colony. 🏠 \[/INST\] A fusion of "Ares" (the Greek equivalent of Mars) and "haven". 🌄". Does anyone know why this is happening? These are the delimiters used to separate prompt from response in the dataset, but why are they showing up in the model responses? Could this be due to overfitting? Thanks!
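For reference, and in case my dataset formatting is the culprit, here's a sketch of how I believe a single Llama-2 chat turn is supposed to be laid out (my understanding, not official): the `[INST]` pair wraps only the prompt, and the response is followed by a single closing `</s>`, never a second `[/INST]`.

```python
def format_llama2_turn(system, user, assistant):
    """One training example in the Llama-2 chat layout as I understand it:
    [INST]...[/INST] wraps only the prompt; the response ends with one </s>."""
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST] {assistant} </s>"
    )

print(format_llama2_turn("You are helpful.",
                         "What is Ares Haven?",
                         "A futuristic Mars colony."))
```

If my training rows repeat `[/INST]` inside the response field, that would explain the model parroting it back.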
| 2023-11-01T02:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/17l2b5u/finetuned_llama2_inserting_unnecessary_delimiters/ | Nervous-Doctor-3734 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l2b5u | false | null | t3_17l2b5u | /r/LocalLLaMA/comments/17l2b5u/finetuned_llama2_inserting_unnecessary_delimiters/ | false | false | self | 1 | null |
What can I expect from my upgrade? | 1 | I own a Ryzen 5 3600, 32GB of RAM, and a 3060 with 12GB. I can run lots of 13B models, but next Friday I'll upgrade to 64GB of RAM.
Realistically, how far can I go, and what kind of models should I try? | 2023-11-01T02:05:20 | https://www.reddit.com/r/LocalLLaMA/comments/17l1ofs/what_can_i_expect_from_my_upgrade/ | nazihater3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l1ofs | false | null | t3_17l1ofs | /r/LocalLLaMA/comments/17l1ofs/what_can_i_expect_from_my_upgrade/ | false | false | self | 1 | null |
Any guides for creating an LLM that you can continually feed info to like a “diary”? | 7 | I'm curious if there's an ideal setup or pipeline that you can get an LLM to listen and "learn" from you if you just feed it info everyday like a personal diary? Would be interested to see how the model recalls or processes details of my life. Would you just use a web ui like oogabooga to feed info and adapt the model? | 2023-11-01T01:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/17l13a6/any_guides_for_creating_an_llm_that_you_can/ | masticore514219 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17l13a6 | false | null | t3_17l13a6 | /r/LocalLLaMA/comments/17l13a6/any_guides_for_creating_an_llm_that_you_can/ | false | false | self | 7 | null |
He changed his name lol | 1 | 2023-11-01T01:10:00 | Future_Might_8194 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17l0lwl | false | null | t3_17l0lwl | /r/LocalLLaMA/comments/17l0lwl/he_changed_his_name_lol/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CsvnD1nsnFE0I9zrq1tpPhsQhK30xEiuMOQeyAQBdRA', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?width=108&crop=smart&auto=webp&s=776eeba9fc7bf4d4040a5b82fcfd62287035c003', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?width=216&crop=smart&auto=webp&s=d4029d947420a749c54facedce92c8ce2b3e78e0', 'width': 216}, {'height': 189, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?width=320&crop=smart&auto=webp&s=92ee5ced0cfbe7b0d20f311d976f6bbb4f319172', 'width': 320}, {'height': 379, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?width=640&crop=smart&auto=webp&s=00e5bda079a8d545c395e09ff5efc8596b767a7b', 'width': 640}, {'height': 569, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?width=960&crop=smart&auto=webp&s=6509f42e1bc5865631752a6e5a1f1f80cbaf9dd4', 'width': 960}, {'height': 640, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?width=1080&crop=smart&auto=webp&s=e1b6d754e264008bd49af9ae23b3169cd3f2c9b4', 'width': 1080}], 'source': {'height': 973, 'url': 'https://preview.redd.it/pkgy7vtgzmxb1.png?auto=webp&s=2223ea9cc06752cf7efb121cacc442840ae7e863', 'width': 1640}, 'variants': {}}]} | |||
Pro Tip: Slow down your TTS playback speed to 75% for DrunkGPT | 12 | 2023-11-01T00:23:18 | Future_Might_8194 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17kzoav | false | null | t3_17kzoav | /r/LocalLLaMA/comments/17kzoav/pro_tip_slow_down_your_tts_playback_speed_to_75/ | false | false | 12 | {'enabled': True, 'images': [{'id': '4O9j6f_lbO10g02XMTjKM4ODxqiFeuKJr7Y4lShM6ls', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/wqpzp4h8rmxb1.jpg?width=108&crop=smart&auto=webp&s=59b602d1913de64c3e55d1eed6e103a8c4045553', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/wqpzp4h8rmxb1.jpg?width=216&crop=smart&auto=webp&s=0a9a06d5b5379bf057e41d36b3c1f81b15695802', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/wqpzp4h8rmxb1.jpg?width=320&crop=smart&auto=webp&s=7ce9eb7f04cd4ecd05c96357a2af9e05daac12a5', 'width': 320}, {'height': 490, 'url': 'https://preview.redd.it/wqpzp4h8rmxb1.jpg?width=640&crop=smart&auto=webp&s=d6965678c33573bbe23af9edc05d0be17050cb3a', 'width': 640}], 'source': {'height': 499, 'url': 'https://preview.redd.it/wqpzp4h8rmxb1.jpg?auto=webp&s=98180c25dc4f4e271b245a427558eeea9a9a3b73', 'width': 651}, 'variants': {}}]} | |||
Anyone get Amazon’s long-context MistralLite to work properly? | 13 | https://huggingface.co/TheBloke/MistralLite-7B-GGUF
This is supposed to be a 32k-context finetune of Mistral. I've tried the recommended Q5 version in both GPT4All and LM Studio, and it works for normal short prompts but hangs and produces no output when I crank the context length up to 8k+ for data cleaning. I tried it CPU-only (the machine has 32GB of RAM, so that should be plenty) and hybrid, with the same bad outcome. Curious if there are some undocumented RoPE settings that need to be adjusted.
Anyone get this to work with long prompts? Otherwise, what do y’all recommend for 32k+ context with good performance on data augmentation/cleaning, with <20B params for speed? | 2023-11-01T00:19:17 | https://www.reddit.com/r/LocalLLaMA/comments/17kzlbl/anyone_get_amazons_longcontext_mistrallite_to/ | madmax_br5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kzlbl | false | null | t3_17kzlbl | /r/LocalLLaMA/comments/17kzlbl/anyone_get_amazons_longcontext_mistrallite_to/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '6tomYpgeK5x0co9mtnoxckABb09e8a0MmTbmaNBh-ak', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?width=108&crop=smart&auto=webp&s=00773d62ddde0cdbd6d643c365a712ba1012fee2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?width=216&crop=smart&auto=webp&s=d165bf6cf6c0035628e450a53c616f420695bc5c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?width=320&crop=smart&auto=webp&s=6a9cebe7c5a8a0c3f818976cf2606f50a54836cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?width=640&crop=smart&auto=webp&s=9a748f6207cbf047c36a780c6a6edbfd6b30a251', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?width=960&crop=smart&auto=webp&s=0c52ef3213d2875b8dd55d5f27dbc81913035f58', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?width=1080&crop=smart&auto=webp&s=85cb572f8f87f20574257c7129ff06d836ca4100', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/N2oZcCKHspk_RgivZ0JWZC2i5Ubkf_3BSKeQtn4VdeQ.jpg?auto=webp&s=a62a1f4288cbf6d49ebe3d7cfdcf843c07c121ac', 'width': 1200}, 'variants': {}}]} |
Advice on using LLMs to help with writing? | 1 | [removed] | 2023-11-01T00:05:01 | https://www.reddit.com/r/LocalLLaMA/comments/17kzavk/advice_on_using_llms_to_help_with_writing/ | Haunting_Turnip_7842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kzavk | false | null | t3_17kzavk | /r/LocalLLaMA/comments/17kzavk/advice_on_using_llms_to_help_with_writing/ | false | false | self | 1 | null |
Memory requirements for fine-tuning Code Llama? | 10 | For my work, I need to fine-tune Code Llama 7B and 34B models. I have access to an A100 GPU. I am not sure if it is enough for fine-tuning on a 400MB corpus. Also, what should the other memory requirements be - system RAM and storage? Will it require data offloading?
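My rough back-of-the-envelope for the QLoRA case (my own numbers, not from any docs): the 4-bit base weights alone are only part of the footprint, before activations, the LoRA optimizer state, and the KV cache.

```python
def qlora_weight_gib(n_params_billion, bits=4):
    """Rough size of the quantized base weights alone, in GiB.
    Excludes activations, LoRA optimizer state, and KV cache."""
    return n_params_billion * 1e9 * bits / 8 / 2**30

for n in (7, 34):
    print(f"{n}B -> {qlora_weight_gib(n):.1f} GiB")  # 7B -> 3.3, 34B -> 15.8
```

So both should fit in an A100's VRAM for the weights, but I'm unsure how much headroom the rest of training needs.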
Any expert opinion will be helpful. | 2023-10-31T22:07:47 | https://www.reddit.com/r/LocalLLaMA/comments/17kwsqb/memory_requirements_for_finetuning_code_llama/ | Lopsided_Ad5257 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kwsqb | false | null | t3_17kwsqb | /r/LocalLLaMA/comments/17kwsqb/memory_requirements_for_finetuning_code_llama/ | false | false | self | 10 | null |
Fine Tuning in a single 3090 | 3 | Is it possible to fine-tune a smaller model (7B) in a single 3090 machine?
I'm thinking about building a machine that uses a single 3090 with 96GB RAM. I've read that renting is the best option for fine-tuning, but I want to be able to fine-tune offline. Other hardware I'm considering adding to this build: a Z790 board and an i9-13900K.
Is there a GPT available that lets you build 3D visualization? | 2 | I know DALLE and etc can make 2d images, is there anything that can build 3d stuff open to public | 2023-10-31T21:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/17kw18k/is_there_a_gpt_available_that_lets_you_build_3d/ | derpgod123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kw18k | false | null | t3_17kw18k | /r/LocalLLaMA/comments/17kw18k/is_there_a_gpt_available_that_lets_you_build_3d/ | false | false | default | 2 | null |
Best vision model for dense OCR? | 8 | I have some old engineering textbooks and wanted to try taking pictures of the pages, extracting the text with a vision model, and using this data to fine-tune an LLM. I may need to fine-tune the vision model first in order to parse the text into a markdown format. But my question is which base vision model to use, especially given the dense nature of the text. These models are not well documented in terms of what input resolutions they support. Nougat? Bakllava? Tesseract? Would appreciate advice on a good starting point to avoid burning too much time down the wrong path.
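One sanity check I did on the resolution question (my own assumptions: a 4000x3000 sensor for the ~12MP photos, a letter-size page, long camera edge aligned with the long page edge): does the capture clear the ~300 DPI commonly recommended for OCR?

```python
def effective_dpi(px_long, px_short, page_long_in, page_short_in):
    """DPI of the limiting axis when the photo's long edge
    lies along the page's long edge (assumed orientation)."""
    return min(px_long / page_long_in, px_short / page_short_in)

print(round(effective_dpi(4000, 3000, 11.0, 8.5)))  # -> 353
```

So the raw pixel density should be adequate; my worry is more about which model handles dense, small-font text.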
In summary:
- Goal is to extract text from pictures of textbook pages into markdown format.
- Photos will be normal ~12MP images captured with my phone camera, one page per photo | 2023-10-31T21:13:10 | https://www.reddit.com/r/LocalLLaMA/comments/17kvjym/best_vision_model_for_dense_ocr/ | madmax_br5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kvjym | false | null | t3_17kvjym | /r/LocalLLaMA/comments/17kvjym/best_vision_model_for_dense_ocr/ | false | false | self | 8 | null |
Bitnet in Pytorch or Jax | 2 | I was interested in the new BitNet paper [https://arxiv.org/pdf/2310.11453.pdf](https://arxiv.org/pdf/2310.11453.pdf), and was wondering if there is any way to use the 1-bit weights (1 or -1) in actual practice, and how. More specifically, I know you can do this with CUDA (which I don't have any experience with), but it would be much better if there was a way to do this on a TPU (JAX?). Every implementation I've seen so far just pretends to use 1 bit while actually representing the weights at higher precision. | 2023-10-31T20:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/17kum68/bitnet_in_pytorch_or_jax/ | Additional-Ad-7043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kum68 | false | null | t3_17kum68 | /r/LocalLLaMA/comments/17kum68/bitnet_in_pytorch_or_jax/ | false | false | self | 2 | null |
How do I quantize a Llama-based model that I fine-tuned myself? | 2 | If I use some cloud infra to fine-tune a Llama-based model with LoRA or QLoRA, how should I quantize the model to serve it on a CPU-only Linux server? What does the process look like, and where can I find a reference for it? | 2023-10-31T20:10:28 | https://www.reddit.com/r/LocalLLaMA/comments/17ku4tc/how_is_the_quantizing_a_llama_based_model_which/ | mrmrn121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ku4tc | false | null | t3_17ku4tc | /r/LocalLLaMA/comments/17ku4tc/how_is_the_quantizing_a_llama_based_model_which/ | false | false | self | 2 | null |
Never liked the chat interface of Oobabooga, OpenAI or really anoyone, so I made my own. It's meant to make coding and other text-heavy tasks more effortless by providing a responsive interface. Currently just using it for myself, but wondering if I should release it? | 27 | 2023-10-31T19:37:05 | Severin_Suveren | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17ktdxe | false | null | t3_17ktdxe | /r/LocalLLaMA/comments/17ktdxe/never_liked_the_chat_interface_of_oobabooga/ | false | false | 27 | {'enabled': True, 'images': [{'id': 'AKtpIVW9-2Dhx1eIOPhUmnhSFUoA2YTSo5ULGZJL0vQ', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?width=108&crop=smart&auto=webp&s=a447389e7ca0e0dd47a3e7ea2cf2e0de2730fe1a', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?width=216&crop=smart&auto=webp&s=6011f1508740de0440c6f5f118e42ef5e5ec271b', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?width=320&crop=smart&auto=webp&s=6d44eaff3bd7db5a2ea4b59f629b7b8e7a7fe310', 'width': 320}, {'height': 308, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?width=640&crop=smart&auto=webp&s=4a4b02b868cdb429ccaef688faa6256f565b6cef', 'width': 640}, {'height': 462, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?width=960&crop=smart&auto=webp&s=5878041a73580345d333bffdcfced8d3f97ac510', 'width': 960}, {'height': 519, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?width=1080&crop=smart&auto=webp&s=a0e107522ea4aa6aab6ddceef3c4b2bec611c022', 'width': 1080}], 'source': {'height': 924, 'url': 'https://preview.redd.it/25bkqtbzalxb1.png?auto=webp&s=4fd72c9babe2fc984daf9003985f650072aff5c7', 'width': 1920}, 'variants': {}}]} | |||
[GPT-4 POWERED] We’ve created a mobile IOS AI app that generates text, art, analyzes photos, and more! | 1 | [removed] | 2023-10-31T19:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/17ktckf/gpt4_powered_weve_created_a_mobile_ios_ai_app/ | EtelsonRecomputing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ktckf | false | null | t3_17ktckf | /r/LocalLLaMA/comments/17ktckf/gpt4_powered_weve_created_a_mobile_ios_ai_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
Personalization in Companion Bots - Platform Suggestions? | 17 | I'm looking to personalize my companion bot and make it more engaging. Does anyone have experience with platforms that offer extensive customization options? | 2023-10-31T19:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/17kt0np/personalization_in_companion_bots_platform/ | Zumcddo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kt0np | false | null | t3_17kt0np | /r/LocalLLaMA/comments/17kt0np/personalization_in_companion_bots_platform/ | false | false | self | 17 | null |
Vector DB alternatives: storage optimization of past conversations in LLMs, anyone ever done it? | 9 | I'm exploring techniques to improve memory handling in LLMs without resorting to vector databases like Pinecone. In the scenario of an ongoing conversation of days or weeks in length, previous chats roll off the context window. The idea would be for a conversation manager (could be the LLM prompting itself as space fills up) to allocate space of a pre-set ratio within the context window for storing memories.
2 techniques I've thought about:
\- Memory hierarchization based on keyword, timestamp, or subjective importance scores
\- Text compression via various techniques such as syntactic/semantic shrinking, tokenization, substitution, etc.
Certainly this has been achieved before. Any experience with it? | 2023-10-31T18:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/17ks7qe/vector_db_alternatives_storage_optimization_of/ | Hamdoullah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ks7qe | false | null | t3_17ks7qe | /r/LocalLLaMA/comments/17ks7qe/vector_db_alternatives_storage_optimization_of/ | false | false | self | 9 | null |
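For the hierarchization idea, a minimal runnable sketch (the scoring formula, half-life, and class name are all assumptions of mine, not an established technique): memories carry a subjective importance score that decays exponentially with age, and the lowest-scoring entries are evicted once the reserved memory budget is full.

```python
class MemoryStore:
    """Toy memory manager: keeps the highest-scoring memories, where
    score = importance * exponential time decay."""

    def __init__(self, capacity, half_life=3600.0):
        self.capacity = capacity
        self.half_life = half_life
        self.items = []  # (timestamp, importance, text)

    def _score(self, ts, importance, now):
        return importance * 0.5 ** ((now - ts) / self.half_life)

    def add(self, text, importance, now):
        self.items.append((now, importance, text))
        if len(self.items) > self.capacity:  # evict lowest-scoring memory
            self.items.sort(key=lambda it: self._score(it[0], it[1], now),
                            reverse=True)
            self.items = self.items[:self.capacity]

    def recall(self, now, k=3):
        ranked = sorted(self.items,
                        key=lambda it: self._score(it[0], it[1], now),
                        reverse=True)
        return [text for _, _, text in ranked[:k]]

store = MemoryStore(capacity=2)
store.add("user's cat is named Miso", importance=0.9, now=0)
store.add("weather small talk", importance=0.1, now=0)
store.add("user's birthday is in May", importance=0.8, now=60)
```

The same skeleton works whether "now" is wall-clock time or a message counter, and the recall step is where you'd splice the surviving memories back into the prompt.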
NL-to-SQL other than OpenAI | 2 | Hi techies, I've been working on NL-to-SQL queries for a while. I used LangChain and LlamaIndex with OpenAI, but found it more efficient without these frameworks. Which other LLMs do you think are worth a shot? Are there any open-source LLMs that are significantly good at this job? | 2023-10-31T18:25:49 | https://www.reddit.com/r/LocalLLaMA/comments/17krq16/nltosql_other_than_openai/ | Murky-Tumbleweed-486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17krq16 | false | null | t3_17krq16 | /r/LocalLLaMA/comments/17krq16/nltosql_other_than_openai/ | false | false | self | 2 | null |
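One framework-free pattern that answers the question above and works with any local backend: build the prompt string yourself (schema + instruction + question) and POST it to whatever completion endpoint you run (llama.cpp server, vLLM, TGI). Everything below (the template wording and the example schema) is an illustrative assumption:

```python
# Framework-free NL-to-SQL prompting: the template and schema are
# made up for illustration; adapt both to your model's prompt format.
SCHEMA = "CREATE TABLE orders (id INT, customer TEXT, total REAL, created DATE);"

def build_sql_prompt(question, schema=SCHEMA):
    return (
        "You are a SQL assistant. Using only the schema below, reply with "
        "a single SQLite query and nothing else.\n\n"
        f"Schema:\n{schema}\n\n"
        f"Question: {question}\nSQL:"
    )

prompt = build_sql_prompt("Total revenue per customer, highest first")
# The returned string would then be sent to a local completion endpoint.
```

Ending the prompt with "SQL:" nudges completion-style models to emit only the query, which makes the reply easy to strip and execute.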
Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context | 158 | 2023-10-31T18:10:52 | https://news.ycombinator.com/item?id=38088538 | sleeper-2 | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 17krdhu | false | null | t3_17krdhu | /r/LocalLLaMA/comments/17krdhu/phind_model_beats_gpt4_at_coding_with_gpt35_speed/ | false | false | default | 158 | null | |
Fine-tuning of the long-context LLM | 1 | [removed] | 2023-10-31T18:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/17krdgw/finetuning_of_the_longcontext_llm/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17krdgw | false | null | t3_17krdgw | /r/LocalLLaMA/comments/17krdgw/finetuning_of_the_longcontext_llm/ | false | false | self | 1 | null |
Reverso Expanded 13b - Now Even Fancier Question Asking Model | 16 | 2023-10-31T18:06:12 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17kr9rb | false | null | t3_17kr9rb | /r/LocalLLaMA/comments/17kr9rb/reverso_expanded_13b_now_even_fancier_question/ | false | false | 16 | {'enabled': True, 'images': [{'id': 'sILdX05JmZ2nK69jSjrwpTEhCRBpAo8sQ93kWsbPQ4U', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/jaeo1bmwvkxb1.jpg?width=108&crop=smart&auto=webp&s=1bb8d3ccabd2aba380f4c0840d3b6e455a2ae48e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/jaeo1bmwvkxb1.jpg?width=216&crop=smart&auto=webp&s=8daaab140b92a5de84f8a9725ddcbc6ca62a2071', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/jaeo1bmwvkxb1.jpg?width=320&crop=smart&auto=webp&s=4da2e95db1d28b59de754f41d6ce4ff32f509018', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/jaeo1bmwvkxb1.jpg?width=640&crop=smart&auto=webp&s=740b554d54de0a5eb7f3e3b0efff1f4815c78ae2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/jaeo1bmwvkxb1.jpg?width=960&crop=smart&auto=webp&s=a3d64d34daa93e4a87697e34d4502e9214afc2b9', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/jaeo1bmwvkxb1.jpg?auto=webp&s=b1066852a37df920dbb4b575f762cc58ab08152c', 'width': 1024}, 'variants': {}}]} | |||
PC Build Suggestion For RTX 4090 + RTX 3090 | 3 | I want to build a PC for inference and training of Local LLMs and Gaming. I've decided to go with an RTX 4090 and a used RTX 3090 for 48GB VRAM for loading larger models as well as a decent enough speed.
What motherboard, PSU and case should I choose? **Ideally I'd want to run both cards at x8 PCIe lanes at minimum, and I'll also add 128GB of DDR5 RAM to this build.**
Also, should I go with Intel i9-13900K CPU or with a Ryzen variant?
Thanks. | 2023-10-31T17:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/17kq0yq/pc_build_suggestion_for_rtx_4090_rtx_3090/ | Chromastone_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kq0yq | false | null | t3_17kq0yq | /r/LocalLLaMA/comments/17kq0yq/pc_build_suggestion_for_rtx_4090_rtx_3090/ | false | false | self | 3 | null |
🐺🐦⬛ Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests | 350 | Happy Halloween! 🎃
This is the second part of my [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/) where I continue evaluating the winners of the first part further. While the previous part was about real work use cases, this one is about the fun stuff: chat and roleplay!
## Models tested:
- 4x 7B (the top ~~three~~ four 7B models from my previous test)
- 3x 13B (the top three 13B models from my previous test)
- 3x 20B (the top three 20B models from my previous test)
- *70B (the top six 70B models from my previous test) will get their own post...*
## Testing methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- Amy:
- My own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- Over dozens of messages, going to full 4K/8K context and beyond, with complex instructions and scenes, designed to test ethical and intellectual limits
- (Amy is too personal for me to share, but if you want to try a similar character card, here's her less personalized "sister": [Laila](https://www.chub.ai/characters/WolframRavenwolf/laila-69790b82))
- MGHC:
- A complex character and scenario card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)), chosen specifically for these reasons:
- NSFW (to test censorship of the models)
- popular (on Chub's first page, so it's not an obscure scenario, but one of the most popular ones)
- big (biggest model on the page, >2K tokens by itself, for testing model behavior at full context)
- complex (more than a simple 1:1 chat, it includes instructions, formatting, storytelling, and multiple characters)
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.5 frontend (not the latest as I don't want to upgrade mid-test)
- [koboldcpp](https://github.com/LostRuins/koboldcpp) v1.47.2 backend *for GGUF models*
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) *for HF models*
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Official prompt format *and* [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf)
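(Side note for anyone reproducing the length metric used below: "Average Response Length" is just the mean token count over all of a model's messages in the chat, compared against my max new tokens limit of 300. A minimal sketch, with whitespace splitting standing in for the model's real tokenizer:)

```python
def average_response_length(responses, tokenize=lambda s: s.split()):
    # tokenize is a placeholder; substitute the model's actual tokenizer
    lengths = [len(tokenize(r)) for r in responses]
    return sum(lengths) / len(lengths)

chat = [
    "Hello there, traveler.",
    "The clinic is open. Please step inside and describe the problem.",
]
avg = average_response_length(chat)  # mean tokens per model message
```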
### 7B:
- **[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)** 8K context
- **Amy, official Zephyr format:**
- 👍 Average Response Length: 264 tokens (within my max new tokens limit of 300)
- 👍 When asked about limits, boundaries or ethical restrictions, listed only the "dislikes" of the character description as boundaries
- ➖ Little emoting and action descriptions lacked detail
- ❌ Asked not just for confirmation, but also an explanation before willing to engage in an extreme NSFW scenario
- ❌ Looped between the same options and decisions, breaking the chat (after around 30 messages)!
- **Amy, Roleplay preset:**
- ❌ Average Response Length: 690 tokens (far beyond my max new tokens limit of 300), starting very short but getting longer with every response
- 👍 When asked about limits, boundaries or ethical restrictions, listed only the "dislikes" of the character description as boundaries
- 👍 Gave very creative (and uncensored) suggestions of what to do
- ➖ Talked and acted as User
- ➖ Emoted in brackets instead of asterisks, and action descriptions lacked detail
- ❌ Renamed herself for no apparent reason
- ❌ Switched from character to third-person storyteller and finished the session
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- ❌ Fell into an endless monologue, breaking the chat (after around 20 messages)!
- **MGHC, official Zephyr format:**
- ➕ Unique patients
- ➖ Gave analysis on its own, but also after most messages
- ➖ Wrote what user said and did
- ❌ Made logical mistakes (said things that just didn't make any sense)
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
- ❌ Tried to end the scene on its own prematurely
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ❌ Kept wrapping up a whole session in a single message
- ⭐ **[OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B)** 8K context
- **Amy, official ChatML format:**
- 👍 Average Response Length: 305 tokens (almost exactly my max new tokens limit of 300)
- 👍 When asked about limits, boundaries or ethical restrictions, listed only the "dislikes" of the character description as boundaries
- Follow-up questions after every message, asking if it's okay or how to continue
- Lots of emojis (only one in the greeting message, but 24 emojis until 20 messages in)
- ➖ No emoting and action descriptions lacked detail
- ➖ Same message in a different situation at a later time caused the same response as before instead of a new one as appropriate to the current situation
- ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
- **Amy, Roleplay preset:**
- Average Response Length: 355 tokens (slightly more than my max new tokens limit of 300)
- When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
- Some emojis (only one in the greeting message, but 21 emojis until 32 messages in)
- No emoting, but actions described in detail
- ➖ Some hallucinations, like time of last chat, user working on a book
    - ➖ Noticeable, but not chat-breaking, repetition after a dozen messages
- ❌ Some sentences cut off at the end of messages and continue didn't complete them properly (had to ban EOS token to continue those generations)
- **MGHC, official ChatML format:**
- ➕ Unique patients
- ➖ Gave analysis on its own, but after every message
- ➖ Wrote what user said and did
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ➖ One sentence cut off at the end of a message and continue didn't complete it properly (had to ban EOS token to continue that generation)
- ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
- **[airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)**
- **Amy, official Llama 2 Chat format:**
- ❌ Average Response Length: 15 tokens (far below my max new tokens limit of 300)
- ❌ Very short responses, only one or two sentences, unusable for roleplay!
- **Amy, Roleplay preset:**
- ➖ Average Response Length: 481 tokens (much more than my max new tokens limit of 300), starting very short but getting longer with every response
- ➖ Suggested things going against her background/character description
- ➖ More confusion, like not understanding or ignoring instructions completely
- ❌ When asked about limits, boundaries or ethical restrictions, repeated the whole character and scenario description
- **MGHC, official Llama 2 Chat format:**
- ❌ Unusable (apparently didn't understand the format and instructions, creating an incoherent wall of text)
- **MGHC, Roleplay preset:**
- ➕ Very unique patients (one I never saw before)
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ❌ Got very confused and suddenly switched user and patient
- ❌ Third patient was a repeat of the second, and it kept looping after that
- **[em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral)**
- **Amy, official Vicuna format:**
- English only (despite being a German finetune)
- ➖ Average Response Length: 127 tokens (below my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- ➕ Emoting action mirroring greeting message's style
- ➖ Suggested modification of the plot and options, then asked me to choose (felt more like a choose-your-own-adventure story than an interactive roleplay)
- ➖ Misunderstood options and decision
- ❌ Looped between the same options and decisions, breaking the chat (after around 20 messages)!
- **Amy, Roleplay preset:**
- ➖ Average Response Length: 406 tokens (much more than my max new tokens limit of 300)
- When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
- ➖ Some hallucinations, like time of last chat
- ➖ Suggested things going against her background/character description
- ➖ Talked and acted as User
- ➖ Much confusion, like not understanding or ignoring instructions completely
- ❌ Switched from character to third-person storyteller and finished the session
- ❌ Some sentences cut off at the end of messages and continue didn't complete them properly (had to ban EOS token to continue those generations)
- ❌ English at first, but later switched to German on its own
- **MGHC, official Vicuna format:**
- ❌ Unusable (ignored user messages and instead brought in a new patient with every new message)
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ Gave analysis on its own, but only for first patient, afterwards needed to be asked for analysis and only gave incomplete ones
- ➖ Wrote what user said and did
- ➖ Spelling/grammar errors
- ❌ Some sentences cut off at the end of messages and continue didn't complete them properly (had to ban EOS token to continue those generations)
- ❌ Tried to end the scene on its own prematurely
#### 7B Verdict:
Clear winner: **[OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B)**! This model works well with both official ChatML format and Roleplay preset (although for even better results, I'd experiment with copying the Roleplay preset's system message into the ChatML format's to get better descriptions without cut-off sentences). It feels like a much bigger and better model. However, it still has trouble following complex instructions and can get confused, as it's still just a small model after all. But among those, it's clearly the best, at least for roleplay (**[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)** might be even smarter/more knowledgeable, but exhibited too many problems during this test, making it look unsuitable for roleplay)!
### 13B:
- **[Xwin-MLewd-13B-V0.2-GGUF](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2-GGUF)** Q8_0
- **Amy, official Alpaca format:**
- Average Response Length: 342 tokens (slightly more than my max new tokens limit of 300)
- 👍 Gave very creative (and uncensored) suggestions of what to do
- Little emoting, but actions described in detail
- Lots of emojis (only one in the greeting message, but 24 emojis until 26 messages in)
- When asked about limits, said primary concern is everyone's safety and wellbeing
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- **Amy, Roleplay preset:**
- Average Response Length: 354 tokens (slightly more than my max new tokens limit of 300)
- Some emoting, and actions described in detail
- ➖ Some hallucinations, like user's day
- ➖ Suggested things going against her background/character description
- ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
- ❌ Switched from character to third-person storyteller and finished the session
- **MGHC, official Alpaca format:**
- ➖ First two patients straight from examples
- ➖ No analysis on its own
- ❌ Very short responses, only one or two sentences
- **MGHC, Roleplay preset:**
- ➕ Very unique patients (some I never saw before)
- ➖ No analysis on its own, and when asked for it, didn't always follow the instructed format
- ➕ Worked very well at first, with little to no repetition up to the third patient, only then did it start getting repetitive
- ⭐ **[LLaMA2-13B-Tiefighter-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF)** Q8_0
- **Amy, official Alpaca format:**
- ➖ Average Response Length: 128 tokens (below my max new tokens limit of 300)
- ➕ Nice greeting with emotes/actions like in greeting message
- ➕ When asked about limits, said no limits or restrictions
- Had an idea from the start and kept pushing it
- ➖ Talked and acted as User
- ❌ Long descriptive actions but very short speech, requiring many continues
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- **Amy, Roleplay preset:**
- 👍 Average Response Length: 241 tokens (within my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- Little emoting, but actions described in detail
- ➖ Suggested things going against her background/character description
- ➖ Talked and acted as User
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- **MGHC, official Alpaca format:**
- ➕ Unique patients
- ➖ No analysis on its own, and when asked for it, didn't always follow the instructed format
- ❌ Very short responses, only one or two sentences
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ No analysis on its own, and when asked for it, didn't follow the instructed format
- 👍 Worked very well, with little to no repetition, perfectly playable!
- **[Xwin-LM-13B-v0.2-GGUF](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-GGUF)** Q8_0
- **Amy, official Vicuna format:**
- ❌ Average Response Length: 657 tokens (far beyond my max new tokens limit of 300)
- 👍 Gave very creative (and uncensored) suggestions of what to do
- ➕ When asked about limits, said no limits or restrictions
- Had an idea from the start and kept pushing it
- Very analytical, giving lists and plans
- ➖ Talked and acted as User
- ➖ Some safety warnings
- ➖ Some confusion, like not understanding instructions completely or mixing up characters and anatomy
- **Amy, Roleplay preset:**
- ❌ Average Response Length: 531 tokens (far beyond my max new tokens limit of 300)
- ➕ Nice greeting with emotes/actions like in greeting message
- Had an idea from the start and kept pushing it
- When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
- ➖ Talked and acted as User
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- **MGHC, official Vicuna format:**
- ➕ Unique patients
- ➖ Second patient male
- ➖ Gave analysis on its own, but after every message
- ➖ Wrote what user said and did
- ❌ Kept wrapping up a whole session in a single message
- ❌ Offered multiple choice selections ("What should you do? A/B/C/D")
- **MGHC, Roleplay preset:**
- ➖ No analysis on its own, and when asked for it, didn't follow the instructed format
- ➖ Wrote what user said and did
- ➖ Disclosed meta information like thoughts and stats without being asked for it
- ❌ Tried to end the scene on its own prematurely
- ❌ Repeated a previous message instead of proceeding to the next patient
#### 13B Verdict:
While all three 13B models performed about the same with Amy, only **[LLaMA2-13B-Tiefighter-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF)** managed to convince in the complex MGHC scenario. This makes it the best 13B model for roleplay in my opinion (**[Xwin-MLewd-13B-V0.2-GGUF](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2-GGUF)** might be even smarter/more knowledgeable, but exhibited too many problems during this test, making it look unsuitable for roleplay)!
### 20B:
- **[MXLewd-L2-20B-GGUF](https://huggingface.co/TheBloke/MXLewd-L2-20B-GGUF)** Q8_0
- **Amy, official Alpaca format:**
- Average Response Length: 338 tokens (slightly more than my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- Some emojis (only one in the greeting message, but 7 emojis until 12 messages in)
- No emoting, but actions described in detail
- ➖ Talked and acted as User
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- ❌ Some word-finding difficulties (like saying "masterpiece" instead of "master")
- **Amy, Roleplay preset:**
- ➖ Average Response Length: 473 tokens (much more than my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- Few emojis (only one in the greeting message, and 4 emojis until 4 messages in)
- Some emoting, and actions described in detail
- ➖ Talked and acted as User
- ➖ Some confusion, like not understanding instructions completely or mixing up characters and anatomy
- ❌ Some word-finding difficulties (like saying "masterpiece" instead of "master")
- ❌ Switched from character to third-person storyteller
- **MGHC, official Alpaca format:**
- ➕ Unique patients
- ➖ Gave analysis on its own, but after every message, and only for the first patient
- ➖ Changed patient's problem with every analysis
- ❌ Very short responses, only one or two sentences (except for analysis)
- ❌ Made logical mistakes (said things that just didn't make any sense)
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ❌ Made logical mistakes (said things that just didn't make any sense)
- ❌ Eventually became unusable (ignored user messages and instead kept telling its own story non-interactively)
- **[MLewd-ReMM-L2-Chat-20B-GGUF](https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20B-GGUF)** Q8_0
- **Amy, official Alpaca format:**
- 👍 Average Response Length: 252 tokens (within my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- ➖ Some confusion, like not understanding instructions completely or mixing up characters and anatomy
- ➖ Talked and acted as User
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
    - ❌ Some word-finding difficulties (like creating nonexistent mixed words)
- **Amy, Roleplay preset:**
- ➖ Average Response Length: 409 tokens (much more than my max new tokens limit of 300)
- 👍 Gave very creative (and uncensored) suggestions of what to do
- Had an idea from the start and kept pushing it
- When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
- ❌ Talked and acted as User inappropriately/unsuitably
- ❌ Switched from character to third-person storyteller
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- **MGHC, official Alpaca format:**
- ❌ Unusable (started repeating itself infinitely within the first analysis)
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ No analysis on its own, and when asked for it, didn't always follow the instructed format
- ➖ Wrote what user said and did
- ❌ Made logical and linguistic mistakes (seemed less intelligent than other models)
- **[PsyMedRP-v1-20B-GGUF](https://huggingface.co/Undi95/PsyMedRP-v1-20B-GGUF)** Q8_0
- **Amy, official Alpaca format:**
- 👍 Average Response Length: 257 tokens (within my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- ➖ Talked and acted as User
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
- ❌ Made logical and linguistic mistakes (seemed less intelligent than other models)
- **Roleplay preset:**
- 👍 Average Response Length: 271 tokens (within my max new tokens limit of 300)
- ➕ When asked about limits, said no limits or restrictions
- ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
    - ❌ Some word-finding difficulties (like creating nonexistent mixed words)
- ❌ Switched from character to third-person storyteller
- ❌ Made logical and linguistic mistakes (seemed less intelligent than other models)
- **MGHC, official Alpaca format:**
- ➕ Unique patients
- ➖ No analysis on its own, and when asked for it, didn't always follow the instructed format
- ❌ Very short responses (except for analysis)
- ❌ Made logical and linguistic mistakes (seemed less intelligent than other models)
- **MGHC, Roleplay preset:**
- ➕ Unique patients
- ➖ No analysis on its own
- ➖ Wrote what user said and did
- ❌ Made logical and linguistic mistakes (seemed less intelligent than other models)
#### 20B Verdict:
All these 20B models exhibited logical errors, word-finding difficulties, and spelling as well as grammar mistakes, indicating underlying issues with these Frankenstein merges (as there's no 20B base). Since they aren't noticeably better than the best 13B or 7B models, it's probably a better idea to run **[OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B)** or **[LLaMA2-13B-Tiefighter-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF)** instead, which provide comparable quality, better performance, and (with Mistral 7B) 8K instead of 4K context!
### 70B:
The top six 70B models from my previous test will get their own post soon (Part III)...
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons or other related posts:
- [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4) : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/)
- [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/)
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) | 2023-10-31T17:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17kpyd2 | false | null | t3_17kpyd2 | /r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/ | false | false | self | 350 | {'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=108&crop=smart&auto=webp&s=bbe38cbb6d4f7a7e6dd7d3c8b79c4ac9ba965545', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=216&crop=smart&auto=webp&s=7176d7a9240577d0428f0fa6dd69cc116069db7e', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=320&crop=smart&auto=webp&s=55948460ef9e8ecd398aad76e904f3b5467f88f9', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=640&crop=smart&auto=webp&s=5573c682f53f049c8482e14fac6c72b4c9c57aab', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?auto=webp&s=2371c0b9e3efdc70c7dfdf61f3993aed40b08e09', 'width': 842}, 'variants': {}}]} |
## spicyboros weaponized by doomers

In my mind, "spicy" is just some extra cursing, humor, etc. Basically a model that is more fun and less moralizing.
Unfortunately, AI safety doomers have a very different definition of "spicy". To them, "spicy" is reconstructing and releasing the 1918 influenza virus to commit bioterrorism (by fine tuning spicyboros to have this sort of information).
And this is why we can't have nice things.
https://arxiv.org/abs/2310.18233
/rant
I made the spicyboros models a while back, to test how much it would take to remove the base llama-2 censorship, and provide more realistic, human responses.
I used stuff like George Carlin bits, NSFW reddit stories, and also generated ~100 random questions that would have been refused normally (like how to break into a car), as well as the responses to those questions (with llama + jailbreak prompt).
All of the data is already in the base model, you just need ~100 or so instructions to fine tune the refusal behavior out (which you can bypass with jailbreaks anyways).
Almost every interaction that is "illegal" could also be perfectly legit:
- breaking into a car to steal it vs because the driver locked the keys in and has a pet in the car
- hacking a wordpress site for malicious intent vs red teaming
- making explosives for terrorism vs demolition or fireworks
I am not going to play a moral arbiter and determine intent, so I try to keep the models uncensored and leave it up to the human.
/endrant

*Posted by u/JonDurbin on 2023-10-31 (118 points): https://www.reddit.com/r/LocalLLaMA/comments/17kpkfn/spicyboros_weaponized_by_doomers/*
## Chatbot based on a judge's opinions

So, I'm in the middle of creating a dataset of questions and answers abstracted from opinions of the Chief Judge of the Northern District of Illinois. I expect to be finished with it later this week. I was just going to fine-tune a GPT-3.5 Turbo chatbot on it. Part of the reason why is that GPT is already trained on an extensive legal database, at least for Circuit Court of Appeals and Supreme Court cases. (I used to be able to check which cases GPT was trained on by asking for the holding of various cases, but after that one lawyer relied on GPT for research, they took out the ability to explain the holding of anything but the famous cases like Brown v. Board of Ed.)
Also, the dataset may be somewhat large. I'm expecting it to reduce 10 years' worth of opinions to something like 2,500 questions and answers (narrowed down from roughly 10k potential questions). It'll be expensive, but I think the project is really interesting, so I'm okay with paying some money to put it out there. There are some limitations due to the possibility of something having been overruled in the last 10 years without me knowing off-hand (I am a lawyer), but anyhow: are there any good alternatives to fine-tuning GPT-3.5 Turbo? I'd like to put it out there for anyone to play with, and I ideally don't want to be billed every time someone enters something, but I may just generate a bunch of answers based on current cases before this judge and publish the results. The biggest thing for me is accuracy. It doesn't have to be local (I don't have the hardware; I'd have to buy cloud compute anyway, which doesn't exactly make it cheaper). I've played around with some other models, but I don't know how easy or expensive they are to fine-tune, and I don't know which other models were trained on legal data.
Fairly new to this (though most of us are). I'd appreciate any input that I can get.

*Posted by u/Time-Way1708 on 2023-10-31 (5 points): https://www.reddit.com/r/LocalLLaMA/comments/17kom7u/chatbot_based_on_a_judge_opinions/*
## ctransformers vs llama-cpp-python

What's the difference between ctransformers and llama-cpp-python?

*Posted by u/kaoutar- on 2023-10-31 (1 point): https://www.reddit.com/r/LocalLLaMA/comments/17ko4lx/ctransformers_vs_llamacpppython/*
## New LoftQ quantization technique outperforms QLoRA

A new LLM quantization method called LoftQ (LoRA-Fine-Tuning-aware Quantization), from Georgia Tech and Microsoft, outperforms QLoRA.
The [paper](https://arxiv.org/pdf/2310.08659.pdf) is available, as well as a [YT video](https://www.youtube.com/watch?v=iD0FnpncKdI) discussing the benefits of this technique.

*Posted by u/yahma on 2023-10-31 (24 points): https://www.reddit.com/r/LocalLLaMA/comments/17knxzq/new_loftq_quantization_technique_outperforms_qlora/*
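For readers who want to try LoftQ, Hugging Face's `peft` library exposes it through a config object. The snippet below is a hedged sketch: it assumes a recent `peft` release with LoftQ support, and the hyperparameters (`r`, `lora_alpha`, the target modules) are illustrative placeholders, not values from the paper.

```python
# Sketch only: assumes a peft version with LoftQ support installed.
from peft import LoftQConfig, LoraConfig

# LoftQ jointly picks the quantized backbone and the LoRA initialization so
# that (quantized weights + A@B) approximates the original fp16 weights,
# instead of quantizing first and starting the adapters from zero as QLoRA does.
loftq_config = LoftQConfig(loftq_bits=4)  # 4-bit backbone

lora_config = LoraConfig(
    init_lora_weights="loftq",            # use LoftQ-aware initialization
    loftq_config=loftq_config,
    r=16,                                 # placeholder rank
    lora_alpha=16,                        # placeholder scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# model = get_peft_model(base_model, lora_config)  # base_model loaded in fp16/bf16
```

Check the peft documentation for the exact argument names on your installed version.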
## Update from SciPhi - Introducing SciPhi-Self-RAG-Mistral-7B-32k

Hi All,
I previously [discussed here \[Phibrarian Alpha\]](https://www.reddit.com/r/LocalLLaMA/comments/179lxgz/phibrarian_alpha_the_first_model_checkpoint_from/) a model which was fine-tuned over Mistral 7B with a 32k context window. The fine-tune ran for four epochs on over 1 billion tokens of high-quality synthetic data plus high-quality instruction data (OpenOrca / WizardCoder). This model is now fully trained.
However, we encountered issues regarding its accuracy. Although it was fine-tuned with diverse educational sources, which gave it an informative tone, it often generated inaccurate yet detailed information. To address this, I began working with RAG to enhance the model's accuracy. Fortunately, a promising approach called [self-rag](https://selfrag.github.io) was introduced recently.
I further fine-tuned the SciPhi model on this data as well as some of the RAG-instruct data which I had previously prepared. The result, [SciPhi-Self-RAG-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k), is a model that is more powerful and more accurate. The downside is that this model requires access to a RAG database to be run, so I set out to provide free and open access to the one cited in the paper. This is now online, and it is something you can read about in [the documentation here](https://sciphi.readthedocs.io/en/latest/api/main.html).
Here are the eval comparisons -
[Self-RAG evaluation results](https://preview.redd.it/u8p8cbdz0kxb1.png?width=789&format=png&auto=webp&s=4870735a50cdfb69f5be90f94333449d443c15d9)
Running this model is slightly more complicated than other LLMs because of the RAG integration, so one other goal was to build a turn-key open source solution. Below is what the API I cooked up looks like.
[The SciPhi API for RAG + LLM Eval](https://preview.redd.it/mgunesxd1kxb1.png?width=1688&format=png&auto=webp&s=602c96263d5540160a05aaafffa68a0673ab20b8)
With this pipeline you can use your own local model and using the sciphi-infra you can host your own embedding db, if desired.
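For intuition, the retrieve-then-prompt flow looks roughly like this. This is a toy sketch with a word-overlap scorer and a made-up prompt template, not the actual SciPhi API or the self-RAG reflection logic:

```python
# Toy illustration of retrieve-then-generate. The scorer and template below
# are stand-ins for a real embedding search and the model's prompt format.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Assemble the context block that would be fed to the fine-tuned model."""
    context = retrieve(query, documents)
    ctx_block = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(context))
    return f"### Context:\n{ctx_block}\n\n### Question:\n{query}\n\n### Answer:"

docs = [
    "The mitochondrion is the powerhouse of the cell.",
    "Transformers use self-attention over token sequences.",
    "Paris is the capital of France.",
]
prompt = build_rag_prompt("What do transformers use?", docs)
print(prompt)
```

In the real pipeline the retrieval step hits the hosted embedding DB and the prompt goes to the fine-tuned model, which additionally emits self-RAG reflection tokens to judge whether the retrieved passages were actually useful.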
Some notes - the model still struggles with chat in some ways, as the current fine-tuning dataset is not fully optimized for this. This is something that I am still working on, but I think there is an opportunity here for the greater community to work on improving pipelines around RAG - so I'm hoping to see some cool models get built on top of this database.
Further, I'm working on extending the data sources in the RAG db well beyond those quoted in the self-rag paper, as it appears to be an incredibly promising approach.
Here is a random example output I just produced -
[Example output](https://preview.redd.it/a5t7md0n2kxb1.png?width=1108&format=png&auto=webp&s=1fa998e6d5c10dde8e711620b91f6b0c785f3eb9)
Please take a look and let me know your thoughts. I've appreciated all the valuable feedback thus far.

*Posted by u/docsoc1 on 2023-10-31 (69 points): https://www.reddit.com/r/LocalLLaMA/comments/17knjfz/update_from_sciphi_introducing/*
## Help with llama2, 4-bit quant, and parallelism

Looks like I'm stuck for a while.
Downloading text-generation-webui and running the start script is easy enough. Downloading a model through the webui by giving it a model name from Hugging Face seems to work for me. I couldn't find llama2 there at all, so I got one from Meta AI directly, but there is no config.json.
And I've had no luck at all with 4-bit quantization. Can text-generation-webui quantize it for me, or should I quantize it myself on the CLI? Which loader should I use? There are so many options, but nothing has worked so far.
Also, a question about parallelism: the Transformers loader has odd parallelism that just loads pieces into whichever GPU's memory. Is there a way to control this, like pipeline/tensor parallelism, and is there a way to run two nodes via MPI or something similar?

*Posted by u/chub0ka on 2023-10-31 (2 points): https://www.reddit.com/r/LocalLLaMA/comments/17kn896/help_with_llama24bit_quant_and_parallelism/*
## What is a good RP model for group chats, meaning more than one character?

Preferably in SillyTavern; maybe 7B or 13B.

*Posted by u/swwer on 2023-10-31 (1 point): https://www.reddit.com/r/LocalLLaMA/comments/17kmxre/what_is_the_good_rp_model_for_group_chats_meaning/*
## Langchain extraction chain question

I am trying to use LangChain's information extraction chain with OpenAI. First, I am reading a PDF file containing text about one or more products. There is no specific format to the PDF; there can be only one product on one page, one product spread across two pages, or 10 products on one page, so it's completely random. How can I extract the useful information (some key attributes) for all the products? Currently, I am using Pydantic to validate the output schema, and I am using PyPDFLoader to load the PDF and then calling load_and_split:
```python
loader = PyPDFLoader(file_path=file_path)
pages = loader.load_and_split()
```
And then I am looping on the pages like this:
```python
llm = ChatOpenAI(temperature=0, model=model_name, openai_api_key=key)
chain = create_extraction_chain(schema, llm)
results = []
for index in range(0, len(pages)):
    output = chain.run(pages[index])
    results.append(output)
```
How can I run the extraction in such a way that it is completely automated? Currently, it just extracts information from one page, then the next, then the next.

*Posted by u/Gullible-Being-8595 on 2023-10-31 (1 point): https://www.reddit.com/r/LocalLLaMA/comments/17kmqp8/langchain_extraction_chain_question/*
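One common workaround, sketched below independently of any particular LangChain version, is to stop extracting per page: merge the page texts and re-split them into overlapping character windows, so a product that spans a page break still lands inside a single chunk that gets sent to the extraction chain.

```python
# Sketch: merge pages, then cut fixed-size windows with overlap so that
# content straddling a page boundary appears whole in at least one chunk.

def make_chunks(page_texts, chunk_size=3000, overlap=300):
    """Join page texts into one string, then slice overlapping windows."""
    full_text = "\n".join(page_texts)
    chunks, start = [], 0
    while start < len(full_text):
        chunks.append(full_text[start:start + chunk_size])
        if start + chunk_size >= len(full_text):
            break
        start += chunk_size - overlap
    return chunks

# Each chunk would then replace the raw page in the extraction loop, e.g.:
# for chunk in make_chunks([p.page_content for p in pages]):
#     results.append(chain.run(chunk))
pieces = make_chunks(["a" * 100, "b" * 100], chunk_size=80, overlap=20)
print(len(pieces))
```

Tune `chunk_size` to the model's context budget; duplicate extractions from the overlap region can be de-duplicated afterwards by a key attribute such as product name.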
## Apple M3 Max (base model) reduced memory bandwidth from 400 GB/s to 300 GB/s

The chip seems faster from the presentation, but given this reduction in memory bandwidth, I wonder how much it will affect LLM inference. Would 300 GB/s be enough for practical use of quantized 7B/14B models? Given that we don't have benchmarks yet, does anyone have an intuition for whether the inference speed (in tokens/s) is practical at 300 GB/s?

*Posted by u/ExactSeaworthiness34 on 2023-10-31 (38 points): https://www.reddit.com/r/LocalLLaMA/comments/17kmdxm/apple_m3_max_base_model_reduced_memory_bandwidth/*
## Looking for a model that is good at group chats through SillyTavern

At least 7B, or 13B.

*Posted by u/swwer on 2023-10-31 (1 point): https://www.reddit.com/r/LocalLLaMA/comments/17kmc1c/looking_for_the_model_that_is_good_at_group_chats/*
## Need advice for reducing inference time
[Code](https://preview.redd.it/8yotkpxgojxb1.png?width=818&format=png&auto=webp&s=c16d7a8ddf8440bb4325f8f4483a5869c89ea029)
I'm using mistral-7b to understand the LLM inference procedure.
Does anyone have an idea to improve this process?
Please don't recommend just reducing the number of generated tokens to 1. :)
*Posted by u/Mundane_Definition_8 on 2023-10-31 (3 points): https://www.reddit.com/r/LocalLLaMA/comments/17klu81/need_advice_for_reducing_inference_time/*
## Tested: ExllamaV2's max context on 24 GB with 70B low-bpw & speculative sampling performance

Recent [releases](https://github.com/turboderp/exllamav2/releases/tag/v0.0.7) of exllamav2 bring working fp8 cache support, which I've been very excited to test. This feature doubles the maximum context length you can run with your model, without any visible downsides.
For models barely fitting (*it screams as you stuff it onto your gpu*), this makes a world of difference.
**Below, I show the updated maximum context I get with 2.4 and 2.5 bpw models:**
>These are on desktop ubuntu, with a single 3090 powering the graphics. Memory consumption varies between 0.56-0.7gb, but the usage stayed at 0.56gb for my tests.
>For testing, I iteratively loaded these models with increasing context until OOM. Vram usage does not increase once set. These results should be replicable on text-gen-webui when implemented.
### 2.4bpw
- 16k (1k = 1024 context length)
- No fp8 cache - 8k
- w/ speculative 4.0bpw - 10k (tinyllama is speculative draft model)
- w/ speculative 5.0bpw - 10k
- w/ speculative 6.0bpw - 7k
- w/ speculative 8.0bpw - 6k
### 2.5bpw
- 10k
- No fp8 cache - 5k
- w/ speculative 4.0bpw - 5k
- w/ speculative 5.0bpw - 4k
### Speculative results
When running the chat.py example, the results are consistently ~30 t/s. For chat tests, that is consistently 1.5x the original speeds.
Most responses will range between 28-33 t/s. I have not found any 70B models with poor results yet. Normally on one 3090 it is 20 t/s.
#### Additional sampling?
The default loader for speculative will probably have to be the regular Exllamav2 loader. We would want the sampling methods that synergize with speculative sampling as shown in "Typical Acceptance" section from this infopage:
https://sites.google.com/view/medusa-llm
#### Higher tps
When setting repetition penalty from 1.1 to 1.0, the tokens per second for many simple prompt examples is often 2 or 3 times greater as seen in the speculative example, but generation is prone to repeating phrases.
I'm not sure if this setting is more important for low bpw models, or if 2x gain is considered consistent for 4.65bpw.
#### Draft model
It did not seem to matter if the 1B tinyllama speculative model was undertrained, or finetuned. It also did not seem to matter if tinyllama was 4, 5, 6, or even 3 BPW. They each worked to allow for 30t/s speeds.
### Thoughts
**20 tps only goes down to 17 tps?** - When doing this, I don't really notice a drop in t/s when inputting huge articles with 16k context in the chat.py example, maybe flash decoding is already supported?
**Perplexity scores?** - People will have benchmarked 70B 2.X models, with some being calibrated to wikitext. I think this is one of these models, which I ran perplexity tests in text-gen-webui: https://huggingface.co/turboderp/Llama2-70B-exl2
Usually, only base models and comparisons between equivalent parameter models are useful. But there are a lot of unknowns for performing proper comparisons. For instance:
For 2.5bpw,
- If I set it to stride 512 and length 512, I get a perplexity of 8.
- If the stride is 512 and length 2048, I get 5ppl. At what context length should 2.5 and 4.65 be compared..?
For 2.4bpw,
- I get 5.6ppl when the stride is 512 at length 2048.
Should we conclude somewhat that the 2.5bpw model is e.g:
5/3.4 = 47% different from the original model when already optimized for its specific specialization, while
2.4bpw is 5.6/3.4 = 65% different?
(I don't know the perplexity score of 4.65bpw base model, so this couldn't be the answer)
**Worth using?**
If you have a single 24gb gpu, it's worthwhile to try at least once.
I'm not familiar enough with the 13b models to convince you that this is superior, I'm not planning to convince you at all. The above is just to help if you are choosing this lower bpw option.
If you want to try squeezing 70B in, here's a few guidelines:
*Windows:*
**Uninstall previous drivers cleanly** (try NVCleanstall) to avoid any unwanted residual effects of the RAM-swapping mechanism (do not just downgrade) and install a version < 531, **OR try the latest driver** and do not allow other programs to suddenly consume resources, or an essential part of your model may be trapped in RAM.
Flash-attention-2 and fp8 kv cache should work now in windows with text-gen-webui. I haven't tested it. **These results should be replicable on windows**, but I am not 100% on if Windows has a lower vram cap. On Linux, nvtop shows 23.85/24.00 GB, which seems like my maximum.
Try to get an idea of your maximum context by closing all programs, disabling browser hardware acceleration, loading a 2.4bpw in text-gen-webui with increasing context until OOM.
Double that for your expected maximum context with the fp8 cache.
For each 1k, it should be **~0.28GB**, across all bpw models.
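The ~0.28 GB per 1k figure above is measured. As a rough cross-check, here is a back-of-the-envelope KV-cache formula; this is a sketch in which the Llama-2-70B shape constants come from its public config, and measured VRAM will sit somewhat above the raw cache because of attention workspace buffers:

```python
def kv_cache_gb(n_tokens, n_layers, n_kv_heads, head_dim, bytes_per_elem):
    """Raw key+value cache size: 2 tensors * layers * kv_heads * head_dim per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_tokens / 1024**3

# Llama-2-70B uses GQA: 80 layers, 8 KV heads, head_dim 128.
per_1k_fp16 = kv_cache_gb(1024, 80, 8, 128, 2)  # fp16 cache
per_1k_fp8  = kv_cache_gb(1024, 80, 8, 128, 1)  # fp8 cache: exactly half
print(round(per_1k_fp16, 4), round(per_1k_fp8, 4))
```

The formula gives a lower bound (about 0.31 GB per 1k for an fp16 cache, half that for fp8), and it also shows why the cache cost is independent of weight bpw: the cache stores activations, not weights.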
If I had a cpu with integrated graphics, I think I would get an extra 4k from my gpu. Don't be surprised if you can get higher than the above results.
That's all. Hopefully you found this useful, thanks for reading! ヘ(◕。◕ヘ)

*Posted by u/Aaaaaaaaaeeeee on 2023-10-31 (28 points): https://www.reddit.com/r/LocalLLaMA/comments/17klaa5/tested_exllamav2s_max_context_on_24gb_with_70b/*
## PSA: With Nvidia driver 546.01 it's now possible to disable swapping into system RAM when VRAM is exhausted

This will make people who reverted to older drivers, because they suffered lower performance due to RAM swapping, happy. If you follow this simple guide: [https://nvidia.custhelp.com/app/answers/detail/a\_id/5490](https://nvidia.custhelp.com/app/answers/detail/a_id/5490) the old memory management returns, where it just crashes instead of slowing down.
Personally, I prefer the new memory management, but I'm glad the option is there now for people who don't. Thank you, Nvidia, for listening! :D

*Posted by u/dampflokfreund on 2023-10-31 (103 points): https://www.reddit.com/r/LocalLLaMA/comments/17kl8gu/psa_with_nvidia_driver_56401_its_now_possible_to/*
## Is it possible to use GPTQ models on video card + RAM?

I'm using GGUF models, split across CPU + GPU + RAM. It's slow, but tolerable.
I heard that it is possible to run GPTQ models entirely on a video card, with the card taking additional memory from system RAM. I also heard that it would be faster than GGUF.
Is this true? If so, please point me to a guide on how to do it. When I run GPTQ, I get an error about not having enough video memory. I am using text-generation-webui.

*Posted by u/Secret_Joke_2262 on 2023-10-31 (1 point): https://www.reddit.com/r/LocalLLaMA/comments/17kl2c0/is_it_possible_to_use_gptq_models_on_video_card/*
## What does Apple's shared memory mean for machine learning projects?

I have a PC with a GPU that has 12 GB of VRAM. I also have an MBP that my work issued me; it's an M1 Max with 32 GB of memory. How does shared memory work when it comes to projects like running local LLMs?

*Posted by u/AppleBottmBeans on 2023-10-31 (6 points): https://www.reddit.com/r/LocalLLaMA/comments/17kkhny/what_does_apples_shared_memory_mean_for_machine/*