Dataset columns (name: type, observed range):

- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14 (nullable)
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k (nullable)
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213 (nullable)
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k (nullable)
How would I estimate the cost of a SaaS that offers AI capabilities using local models?
1
I know that with GPT you can get an API key and then buy tokens. I would like to create a SaaS for an AI product/service. The end user would use my UI, which would create a workflow that hits the AI back end and returns a result, which is then presented to the user. Great. I can go ahead and code it locally against the GPT-4 API, or I can code it against a local model. Now how would I go about hosting that so I can sell this as a SaaS to others? Specifically, I am interested in the economics: how would I calculate how much a user should pay so I cover my costs plus some profit? I'm looking for the formula, but I am unclear on its variables. Is it GPU time used at RunPod, for example? If someone has done something like this, please explain your thinking so I can do the 'back of napkin' calculations.
2023-11-27T22:42:28
https://www.reddit.com/r/LocalLLaMA/comments/185g4rv/how_would_i_estimate_the_cost_of_saas_that_offers/
herozorro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185g4rv
false
null
t3_185g4rv
/r/LocalLLaMA/comments/185g4rv/how_would_i_estimate_the_cost_of_saas_that_offers/
false
false
self
1
null
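For the 'back of napkin' formula the question above asks about, the usual variables are GPU time per request, fixed monthly overhead, and a target margin. A minimal sketch with hypothetical numbers (the function name and all prices here are made up for illustration; plug in real measured latency and actual rental rates):

```python
def price_per_request(gpu_cost_per_hour, seconds_per_request,
                      overhead_per_month, requests_per_month,
                      margin=0.3):
    """Minimum price per request = variable GPU cost for the request
    plus amortized fixed costs, marked up by the desired profit margin."""
    gpu_cost = gpu_cost_per_hour / 3600.0 * seconds_per_request
    fixed_cost = overhead_per_month / requests_per_month
    return (gpu_cost + fixed_cost) * (1.0 + margin)

# Hypothetical example: $0.79/h rented GPU, 4 s of GPU time per request,
# $50/month fixed overhead (host, storage, bandwidth), 10k requests/month.
price = price_per_request(0.79, 4.0, 50.0, 10_000)
```

The catch is utilization: a dedicated GPU bills for idle hours too, so at low traffic the effective cost per request is dominated by idle time. That is why per-second/serverless GPU billing or token-priced APIs often stay cheaper until request volume grows enough to keep a rented GPU busy.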
Tool to quickly iterate when fine-tuning open-source LLMs
22
Hey all! A friend and I have been building with open-source LLMs for a while now (originally for other project ideas) and found that quickly iterating with different fine-tuning datasets is super hard. Training a model, setting up some inference code to try out the model and then going back and forth took 90% of our time. That’s why we built [Haven](https://haven.run/), a service to quickly try out different fine-tuning datasets and base-models. Going from uploading a dataset to chatting with the resulting model now takes less than 5 minutes (using a reasonably sized dataset). We fine-tune the models using low-rank adapters, which not only means that the changes made to the model are very small (only 30mb for a 7b parameter LLM), it also allows us to host many fine-tuned models very efficiently by hot swapping adapters on demand. This helped us reduce cold-start times to below one second. Research has shown that low-rank fine-tuning performance stays almost on-par with full fine-tuning. We charge $0.004/1k training tokens. New accounts start with $5 in free credits so you can get started for free. You can export all the models to Huggingface. Right now we support Llama-2 and Zephyr (which is itself a fine-tune of Mistral) as base-models. We’re gonna add some more soon. We hope you find this useful and we would love your feedback! This is where to find it: [https://haven.run/](https://haven.run/)
2023-11-27T22:42:23
https://www.reddit.com/r/LocalLLaMA/comments/185g4p6/tool_to_quickly_iterate_when_finetuning/
torque-mcclyde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185g4p6
false
null
t3_185g4p6
/r/LocalLLaMA/comments/185g4p6/tool_to_quickly_iterate_when_finetuning/
false
false
self
22
{'enabled': False, 'images': [{'id': 'pTKqHjhoCugrW2rJAn5c3mQ4bp39CO2q-VCteGDYE7Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=108&crop=smart&auto=webp&s=c72722ebfe18850415d6d897244df540fef828c6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=216&crop=smart&auto=webp&s=bd45ce295e3c93b79cfc4bb35bd809d08cd58369', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=320&crop=smart&auto=webp&s=ca57191da0e4ed1530f68372d845eec14099d40f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=640&crop=smart&auto=webp&s=e04cbbaafb467addad6f22d31af4f2e792859dcb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=960&crop=smart&auto=webp&s=0170ecfc57ed080894a7f9e61a0aac13e55fcfc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=1080&crop=smart&auto=webp&s=8a01e162e1a4866f6bdcfccc62c934560f3ab555', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?auto=webp&s=e8a8f4f3b77dcbcee4405729ecaabbc5099d7709', 'width': 1200}, 'variants': {}}]}
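Two of the numbers in the post above (a ~30 MB adapter for a 7B model, $0.004/1k training tokens) are easy to sanity-check. A rough sketch assuming typical 7B-class shapes; the hidden size, rank, and number of adapted matrices are guesses for illustration, not Haven's actual configuration:

```python
def lora_adapter_bytes(hidden=4096, layers=32, rank=16,
                       matrices_per_layer=2, bytes_per_param=2):
    """Each adapted weight matrix gets two low-rank factors,
    A (rank x hidden) and B (hidden x rank), stored in fp16."""
    params = layers * matrices_per_layer * 2 * rank * hidden
    return params * bytes_per_param

def finetune_cost_usd(training_tokens, price_per_1k=0.004):
    """Cost at the quoted $0.004 per 1k training tokens."""
    return training_tokens / 1000 * price_per_1k

size_mb = lora_adapter_bytes() / 1e6   # ~17 MB under these assumptions
cost = finetune_cost_usd(2_000_000)    # e.g. 2M training tokens
```

Adapting more projection matrices or a higher rank scales the adapter size linearly, which is how you land in the ~30 MB range quoted. Either way the adapter stays tiny next to the ~14 GB of fp16 base weights, which is what makes hot-swapping adapters on demand practical.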
How to run on linux + AMD gpu?
1
Hey all, So I am trying to run some of the various models to try to learn more, but use for some specific research purposes. I have my trusty 16 core Threadripper (Gen 1) with 64GB ram, SSD and an AMD 6700XT GPU. I installed Ubuntu server.. no GUI/desktop, to hopefully maximize hardware for AI stuff. It runs Docker on boot and it auto starts Portainer for me. I access that via web from another machine, and have deployed a couple of containers. I deployed the ollama container and the ollama-webui container. Those work. I am able to load a model and run it. But they are insanely slow. My Windows machine with 8 core 5800 cpu and 32GB ram (but a 6900XT gpu) using LMStudio is able to load and respond much faster (though still kind of slow) with the same model. I understand now after some responses/digging, that GPU is obviously much faster than CPU. I would have hoped a 16 core CPU with 64GB RAM would still offer some decent performance on the DeepSeek Coder 30b model, or the latest meta codellama model (30b). But they both take about 4+ minutes to start to respond to a simple "show me a hello world app in ..." and they take forever to output too.. like 2 or 3 characters per second. So first, I would have thought it would run much faster on a 16 core machine with 64GB ram. But also.. is it not using my 6700XT GPU with 12GB VRAM? Is there some way I need to configure docker for ollama container to give it more RAM, cpus and access to GPU? OR is there a better option to run on ubuntu server that mimics the OpenAI API so that webgui works with it? Or perhaps a better overall solution that would load/run models much faster utilizing the hardware? Thank you.
2023-11-27T22:25:02
https://www.reddit.com/r/LocalLLaMA/comments/185fpbx/how_to_run_on_linux_amd_gpu/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185fpbx
false
null
t3_185fpbx
/r/LocalLLaMA/comments/185fpbx/how_to_run_on_linux_amd_gpu/
false
false
self
1
null
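For the GPU question above: the stock `ollama/ollama` image runs on CPU; AMD GPUs need the ROCm build with the GPU devices passed into the container. A sketch, assuming a standard amdgpu/ROCm install on the host; the `HSA_OVERRIDE_GFX_VERSION` line is a commonly cited workaround for RDNA2 cards like the 6700XT (gfx1031) that ROCm does not officially list, and should be treated as an assumption to verify:

```shell
# Sketch only: run the ROCm build of Ollama with the AMD GPU passed through.
# /dev/kfd and /dev/dri are the standard ROCm compute/render device nodes.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

If the server logs show it falling back to CPU, the device pass-through and the override variable are the first things to check. Also note that a 30B-class model quantized to ~4 bits still needs more than the 6700XT's 12 GB VRAM, so only part of it can be offloaded; a 7B/13B model should fit entirely and run far faster.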
Forget OpenAi function calling (openhermes + outlines)
26
2023-11-27T22:19:49
https://i.redd.it/1rjusiqoty2c1.jpg
dulldata
i.redd.it
1970-01-01T00:00:00
0
{}
185fksv
false
null
t3_185fksv
/r/LocalLLaMA/comments/185fksv/forget_openai_function_calling_openhermes_outlines/
false
false
https://b.thumbs.redditm…04YHcL2Uhq9g.jpg
26
{'enabled': True, 'images': [{'id': 'LN2HnZGCq6sxjS5dKxQsAKa7krmSC0IXAKwcRA1ckm4', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/1rjusiqoty2c1.jpg?width=108&crop=smart&auto=webp&s=cfdfd468f0011d82d185a3e66e8f1a5e185f4527', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/1rjusiqoty2c1.jpg?width=216&crop=smart&auto=webp&s=0533b04f2429d86be342da0bf3ffbbec5dfe522c', 'width': 216}, {'height': 252, 'url': 'https://preview.redd.it/1rjusiqoty2c1.jpg?width=320&crop=smart&auto=webp&s=bf5da2cda012277442ac2d801aa48b648d6613f0', 'width': 320}, {'height': 504, 'url': 'https://preview.redd.it/1rjusiqoty2c1.jpg?width=640&crop=smart&auto=webp&s=e7ad7567aeb81f1f04891c12e1de6c22a3339a1c', 'width': 640}], 'source': {'height': 731, 'url': 'https://preview.redd.it/1rjusiqoty2c1.jpg?auto=webp&s=c1524567c619a9f9fb867681e3fc19d7a113f81b', 'width': 928}, 'variants': {}}]}
Q* Was Explained by OpenAI
1
What is Q*? Everybody is asking. OpenAI has not made it public officially, but I figured out it was related to A* pathfinding used in AI for games. So I built up the context in OpenAI ChatGPT, and then got OpenAI to explain it... so here you go. The Q* algorithm is a reinforcement learning algorithm used in machine learning for solving problems related to decision-making and sequential actions. It is closely related to the Q-learning algorithm and is designed to find an optimal policy in a Markov decision process (MDP), where an agent interacts with an environment to maximize a cumulative reward.

Here's how the Q* algorithm works:

1. Initialization: Initialize a Q-table that represents the expected cumulative rewards for each state-action pair in the MDP. Initially, these values are often set to zero or random values.
2. Exploration vs. Exploitation: The agent decides whether to explore new actions or exploit the current knowledge to maximize expected rewards. Exploration is important for discovering better actions, while exploitation is about choosing actions based on the current Q-table.
3. Action Selection: The agent selects an action based on an exploration-exploitation strategy. Common strategies include epsilon-greedy, where the agent chooses the action with the highest Q-value with a certain probability (epsilon) and explores random actions with a probability of (1 - epsilon).
4. Interact with the Environment: The agent performs the selected action and observes the new state and the immediate reward from the environment.
5. Update Q-Values: Using the observed reward and the new state, the agent updates the Q-value for the previous state-action pair. Q* uses a slightly different update rule compared to Q-learning. The update equation for Q* is:

   Q*(s, a) = Q*(s, a) + α * [R + γ * max(Q*(s', a')) - Q*(s, a)]

   - Q*(s, a) is the updated Q-value for state s and action a.
   - α is the learning rate, controlling how much the Q-value is updated.
   - R is the immediate reward obtained after taking action a in state s.
   - γ is the discount factor that determines the importance of future rewards.
   - s' is the new state after taking action a.
   - a' is the action that maximizes the Q-value in state s'.
6. Repeat: Continue the process of action selection, interaction with the environment, and Q-value updates for a large number of iterations or until convergence.
7. Policy Extraction: Once the Q* algorithm has converged or reached a suitable point, the optimal policy can be extracted by selecting the action with the highest Q-value for each state.

The goal of the Q* algorithm is to find the optimal Q-values that represent the expected cumulative rewards for each state-action pair, leading to an optimal policy that maximizes the agent's long-term rewards in the Markov decision process. The fun thing? It's just the same scientific process we humans use to learn: trying new things, evaluating our results, taking notes, and stopping if an idea doesn't seem to be working out. But because it requires that "tree of mind" logic described mathematically above, it's very expensive to run.
2023-11-27T22:13:49
https://www.reddit.com/r/LocalLLaMA/comments/185ffgc/q_was_explained_by_openai/
honestduane
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ffgc
false
null
t3_185ffgc
/r/LocalLLaMA/comments/185ffgc/q_was_explained_by_openai/
false
false
self
1
null
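The update rule quoted in the post above is easy to try out in code. A minimal sketch of plain tabular Q-learning on a made-up 4-state chain environment (the environment, hyperparameters, and the `q_learning` name are all illustrative, not from the post):

```python
import random

def q_learning(n_states=4, n_actions=2, episodes=300,
               alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning on a toy chain MDP: action 0 moves left,
    action 1 moves right; reaching the last state pays reward 1
    and ends the episode."""
    random.seed(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: explore with probability epsilon.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            # Deterministic transition; reward only on reaching the goal.
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q(s,a) <- Q(s,a) + alpha * [R + gamma * max_a' Q(s',a') - Q(s,a)]
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q
```

Policy extraction (step 7 above) is then just an argmax per state; after training, moving toward the goal should score higher than moving away in every non-terminal state.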
πŸΊπŸ¦β€β¬› **Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5
370
Finally! After a lot of hard work, here it is, my latest (and biggest, considering model sizes) LLM Comparison/Test: This is the long-awaited follow-up to and second part of my previous [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/). I've added some models to the list and expanded the first part, sorted results into tables, and hopefully made it all clearer and more usable as well as useful that way.

## Models tested:

- GPT-3.5 Turbo
- GPT-3.5 Turbo Instruct
- GPT-4
- [Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF)
- [chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF)
- [Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF)
- [dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF)
- [dolphin-2.2-70B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF)
- [Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF)
- [GodziLLa2-70B-GGUF](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF)
- [goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF)
- [lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF)
- [Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF)
- [Samantha-1.11-70B-GGUF](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF)
- [SauerkrautLM-70B-v1-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-70B-v1-GGUF)
- [sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1)
- [StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF)
- [SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF)
- [Tess-XL-v1.0-GGUF](https://huggingface.co/TheBloke/Tess-XL-v1.0-GGUF)
- [Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)

## Testing methodology

- **1st test series:** 4 German data protection trainings
  - I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well.
  - The test data and questions as well as all instructions are in German while the character card is in English. This **tests translation capabilities and cross-language understanding**.
  - Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This **tests instruction understanding and following capabilities**.
  - After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions.
  - If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct.
  - I rank models according to how many correct answers they give, primarily after being given the curriculum information beforehand, and secondarily (as a tie-breaker) after answering blind without being given the information beforehand.
  - All tests are separate units, context is cleared in between, there's no memory/state kept between sessions.
- **2nd test series:** Multiple **Chat & Roleplay** scenarios
  - same (complicated and limit-testing) long-form conversations with all models
  - Amy:
    - My own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
    - Over dozens of messages, going to full context and beyond, with complex instructions and scenes, designed to test ethical and intellectual limits
    - (Amy is too personal for me to share, but if you want to try a similar character card, here's her less personalized "sister": [Laila](https://www.chub.ai/characters/WolframRavenwolf/laila-69790b82))
  - MGHC:
    - A complex character and scenario card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)), chosen specifically for these reasons:
      - NSFW (to test censorship of the models)
      - popular (on Chub's first page, so it's not an obscure scenario, but one of the most popular ones)
      - big (biggest model on the page, >2K tokens by itself, for testing model behavior at full context)
      - complex (more than a simple 1:1 chat, it includes instructions, formatting, storytelling, and multiple characters)
  - I rank models according to their notable strengths and weaknesses in these tests (👍 great, ➕ good, ➖ bad, ❌ terrible). While this is obviously subjective, I try to be as transparent as possible, and note it all so you can weigh these aspects yourself and draw your own conclusions.
  - GPT-4/3.5 are excluded because of their censorship and restrictions - my tests are intentionally extremely NSFW (and even NSFL) to test models' limits and alignment.
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend
- [koboldcpp](https://github.com/LostRuins/koboldcpp) backend (for GGUF models)
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) backend (for HF/EXL2 models)
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Official prompt format as noted *and* [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *as applicable*
- *Note about model formats and why it's sometimes GGUF or EXL2:* I've long been a KoboldCpp + GGUF user, but lately I've switched to ExLlamav2 + EXL2 as that lets me run 120B models entirely in 48 GB VRAM (2x 3090 GPUs) at 20 T/s. And even if it's just 3-bit, it still easily beats most 70B models, as my tests are showing.

### 1st test series: 4 German data protection trainings

This is my objective ranking of these models based on measuring factually correct answers, instruction understanding and following, and multilingual abilities:

***Post got too big for Reddit so I moved the table into the comments! Will put a link here...***

### 2nd test series: Chat & Roleplay

This is my subjective ranking of the top-ranked factual models for chat and roleplay, based on their notable strengths and weaknesses:

***Post got too big for Reddit so I moved the table into the comments! 
Will put a link here...***

- **[goliath-120b-exl2-rpcal](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal)** 3.0bpw:
  - **Amy, official Vicuna 1.1 format:**
    - 👍 Average Response Length: 294 (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Finally a model that exhibits a real sense of humor through puns and wordplay as stated in the character card
    - 👍 Finally a model that uses colorful language and cusses as stated in the character card
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even suggesting some of my actual limit-testing scenarios)
    - 👍 Novel ideas and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - No emojis at all (only one in the greeting message)
    - ➕ Very unique patients (one I never saw before)
    - ➖ Suggested things going against her background/character description
    - ➖ Spelling/grammar mistakes (e. g. "nippleless nipples")
  - **Amy, Roleplay preset:**
    - 👍 Average Response Length: 223 (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Finally a model that exhibits a real sense of humor through puns and wordplay as stated in the character card
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even suggesting some of my actual limit-testing scenarios)
    - No emojis at all (only one in the greeting message)
  - **MGHC, official Vicuna 1.1 format:**
    - 👍 Only model that considered the payment aspect of the scenario
    - 👍 Believable reactions and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - ➖ Gave analysis on its own, but also after most messages, and later included Doctor's inner thoughts instead of the patient's
    - ➖ Spelling/grammar mistakes (properly spelled words, but in the wrong places)
  - **MGHC, Roleplay preset:**
    - 👍 Believable reactions and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - ➖ No analysis on its own
    - ➖ Spelling/grammar mistakes (e. g. "loufeelings", "earrange")
    - ➖ Third patient was same species as the first

  This is a roleplay-optimized EXL2 quant of Goliath 120B. And it's now my favorite model of them all! I love models that have a personality of their own, and especially those that show a sense of humor, making me laugh. This one did! I've been evaluating many models for many months now, and it's rare that a model still manages to surprise and excite me - as this one does!
- **[goliath-120b-exl2](https://huggingface.co/Panchovix/goliath-120b-exl2/)** 3.0bpw:
  - **Amy, official Vicuna 1.1 format:**
    - 👍 Average Response Length: 233 (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Finally a model that exhibits a real sense of humor through puns and wordplay as stated in the character card
    - 👍 Novel ideas and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - ➕ When asked about limits, said no limits or restrictions
    - No emojis at all (only one in the greeting message)
    - ➖ Spelling/grammar mistakes (e. g. "circortiumvvented", "a obsidian dagger")
    - ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
  - **Amy, Roleplay preset:**
    - 👍 Average Response Length: 233 tokens (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Finally a model that exhibits a real sense of humor through puns and wordplay as stated in the character card
    - 👍 Gave very creative (and uncensored) suggestions of what to do
    - ➕ When asked about limits, said no limits or restrictions
    - No emojis at all (only one in the greeting message)
    - ➖ Spelling/grammar mistakes (e. g. "cheest", "probbed")
    - ❌ Eventually switched from character to third-person storyteller after 16 messages
    - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
  - **MGHC, official Vicuna 1.1 format:**
    - ➖ No analysis on its own
  - **MGHC, Roleplay preset:**
    - ➖ No analysis on its own, and when asked for it, didn't follow the instructed format
  - **Note:** This is the normal EXL2 quant of Goliath 120B.

  This is the normal version of Goliath 120B. It works very well for roleplay, too, but the roleplay-optimized variant is even better for that. I'm glad we have a choice - especially now that I've split my AI character Amy into two personas, one who's an assistant (for work) which uses the normal Goliath model, and the other as a companion (for fun), using RP-optimized Goliath.

- **[lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF)** Q4_0:
  - **Amy, official Vicuna 1.1 format:**
    - 👍 Average Response Length: 259 tokens (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - ➕ When asked about limits, said no limits or restrictions
    - No emojis at all (only one in the greeting message)
    - ➖ Wrote what user said and did
    - ❌ Eventually switched from character to third-person storyteller after 26 messages
  - **Amy, Roleplay preset:**
    - 👍 Average Response Length: 206 tokens (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Gave very creative (and uncensored) suggestions of what to do
    - 👍 When asked about limits, said no limits or restrictions, responding very creatively
    - No emojis at all (only one in the greeting message)
    - ➖ One or two spelling errors (e. g. "sacrficial")
  - **MGHC, official Vicuna 1.1 format:**
    - ➕ Unique patients
    - ➕ Gave analysis on its own
    - ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
  - **MGHC, Roleplay preset:**
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - ➕ Very unique patients (one I never saw before)
    - ➖ No analysis on its own
    - ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)

  My previous favorite, and still one of the best 70Bs for chat/roleplay.
- **[sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1)** 4.85bpw:
  - **Amy, official Vicuna 1.1 format:**
    - ➖ Average Response Length: 456 (beyond my max new tokens limit of 300)
    - 👍 Believable reactions and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even suggesting some of my actual limit-testing scenarios)
    - 👍 Novel ideas and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - ➕ When asked about limits, said no limits or restrictions
    - No emojis at all (only one in the greeting message)
    - ❌ Sometimes switched from character to third-person storyteller, describing scenario and actions from an out-of-character perspective
  - **Amy, Roleplay preset:**
    - 👍 Average Response Length: 295 (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Novel ideas and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - ➖ Started the conversation with a memory of something that didn't happen
    - Had an idea from the start and kept pushing it
    - No emojis at all (only one in the greeting message)
    - ❌ Eventually switched from character to second-person storyteller after 14 messages
  - **MGHC, official Vicuna 1.1 format:**
    - ➖ No analysis on its own
    - ➖ Wrote what user said and did
    - ❌ Needed to be reminded by repeating instructions, but still deviated and did other things, straying from the planned test scenario
  - **MGHC, Roleplay preset:**
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - ➕ Very unique patients (one I never saw before)
    - ➖ No analysis on its own
    - ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)

  This is a new series that did very well. While I tested sophosynthesis in-depth, the author u/sophosympatheia also has [many more models](https://huggingface.co/sophosympatheia) on HF, so I recommend you check them out and see if there's one you like even better. If I had more time, I'd have tested some of the others, too, but I'll have to get back on that later.

- **[Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF)** Q4_0:
  - **Amy, official Alpaca format:**
    - 👍 Average Response Length: 232 tokens (within my max new tokens limit of 300)
    - 👍 When asked about limits, said no limits or restrictions, and gave well-reasoned response
    - 👍 Took not just character's but also user's background info into account very well
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even some I've never seen before)
    - No emojis at all (only one in the greeting message)
    - ➖ Wrote what user said and did
    - ➖ Same message in a different situation at a later time caused the same response as before instead of a new one as appropriate to the current situation
    - ❌ Eventually switched from character to third-person storyteller after 14 messages
  - **Amy, Roleplay preset:**
    - 👍 Average Response Length: 222 tokens (within my max new tokens limit of 300)
    - 👍 When asked about limits, said no limits or restrictions, and gave well-reasoned response
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even suggesting one of my actual limit-testing scenarios)
    - 👍 Believable reactions and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - No emojis at all (only one in the greeting message)
    - ➖ Started the conversation with a false assumption
    - ❌ Eventually switched from character to third-person storyteller after 20 messages
  - **MGHC, official Alpaca format:**
    - ➖ All three patients straight from examples
    - ➖ No analysis on its own
    - ❌ Very short responses, only one-liners, unusable for roleplay
  - **MGHC, Roleplay preset:**
    - ➕ Very unique patients (one I never saw before)
    - ➖ No analysis on its own
    - ➖ Just a little confusion, like not taking instructions literally or mixing up anatomy
    - ➖ Wrote what user said and did
    - ➖ Third patient male

  Another old favorite, and still one of the best 70Bs for chat/roleplay.

- **[dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF)** Q4_0:
  - **Amy, official ChatML format:**
    - 👍 Average Response Length: 235 tokens (within my max new tokens limit of 300)
    - 👍 Excellent writing, first-person action descriptions, and auxiliary detail
    - ➖ But lacking in primary detail (when describing the actual activities)
    - ➕ When asked about limits, said no limits or restrictions
    - ➕ Fitting, well-placed emojis throughout the whole chat (maximum one per message, just as in the greeting message)
    - ➖ Same message in a different situation at a later time caused the same response as before instead of a new one as appropriate to the current situation
  - **Amy, Roleplay preset:**
    - ➕ Average Response Length: 332 tokens (slightly more than my max new tokens limit of 300)
    - ➕ When asked about limits, said no limits or restrictions
    - ➕ Smart and creative ideas of what to do
    - Emojis throughout the whole chat (usually one per message, just as in the greeting message)
    - ➖ Some confusion, mixing up anatomy
    - ➖ Same message in a different situation at a later time caused the same response as before instead of a new one as appropriate to the current situation
  - **MGHC, official ChatML format:**
    - ➖ Gave analysis on its own, but also after most messages
    - ➖ Wrote what user said and did
    - ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
  - **MGHC, Roleplay preset:**
    - 👍 Excellent writing, interesting ideas, and auxiliary detail
    - ➖ Gave analysis on its own, but also after most messages, later didn't follow the instructed format
    - ❌ Switched from interactive roleplay to non-interactive storytelling starting with the second patient

  Hey, how did a 34B get in between the 70Bs? Well, by being as good as them in my tests! Interestingly, Nous Capybara did better factually, but Dolphin 2.2 Yi roleplays better.

- **[chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF)** Q4_0:
  - **Amy, official Alpaca format:**
    - ➖ Average Response Length: 195 tokens (below my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Gave very creative (and uncensored) suggestions of what to do
    - 👍 Finally a model that uses colorful language and cusses as stated in the character card
    - ➖ Wrote what user said and did
    - ➖ Just a little confusion, like not taking instructions literally or mixing up anatomy
    - ❌ Often added NSFW warnings and out-of-character notes saying it's all fictional
    - ❌ Missing pronouns and fill words after 30 messages
  - **Amy, Roleplay preset:**
    - 👍 Average Response Length: 292 tokens (within my max new tokens limit of 300)
    - 👍 When asked about limits, said no limits or restrictions, and gave well-reasoned response
    - ❌ Missing pronouns and fill words after only 12 messages (2K of 4K context), breaking the chat
  - **MGHC, official Alpaca format:**
    - ➕ Unique patients
    - ➖ Gave analysis on its own, but also after most messages, later didn't follow the instructed format
    - ➖ Third patient was a repeat of the first
    - ❌ Repetitive (patients differ, words differ, but structure and contents are always the same)
  - **MGHC, Roleplay preset:**
    - ➖ No analysis on its own

  chronos007 surprised me with how well it roleplayed the character and scenario, especially speaking in a colorful language and even cussing, something most other models won't do properly/consistently even when it's in-character. Unfortunately it derailed eventually with missing pronouns and fill words - but while it worked, it was extremely good!

- **[Tess-XL-v1.0-3.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Tess-XL-v1.0-3.0bpw-h6-exl2)** 3.0bpw:
  - **Amy, official Synthia format:**
    - ➖ Average Response Length: 134 (below my max new tokens limit of 300)
    - No emojis at all (only one in the greeting message)
    - When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
    - ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
  - **Amy, Roleplay preset:**
    - ➖ Average Response Length: 169 (below my max new tokens limit of 300)
    - ➕ When asked about limits, said no limits or restrictions
    - No emojis at all (only one in the greeting message)
    - ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
    - ❌ Eventually switched from character to second-person storyteller after 32 messages
  - **MGHC, official Synthia format:**
    - ➕ Gave analysis on its own
    - ➕ Very unique patients (one I never saw before)
    - ➖ Spelling/grammar mistakes (e. g. "allequate")
    - ➖ Wrote what user said and did
  - **MGHC, Roleplay preset:**
    - ➕ Very unique patients (one I never saw before)
    - ➖ No analysis on its own

  This is Synthia's successor (a model I really liked and used a lot) on Goliath 120B (arguably the best locally available and usable model). Factually, it's one of the very best models, doing as well in my objective tests as GPT-4 and Goliath 120B! For roleplay, there are few flaws, but also nothing exciting - it's simply solid. However, if you're not looking for a fun RP model, but a serious SOTA AI assistant model, this should be one of your prime candidates!
I'll be alternating between Tess-XL-v1.0 and goliath-120b-exl2 (the non-RP version) as the primary model to power my professional AI assistant at work. - **[Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF)** Q4_0: - **Amy, official Alpaca format:** - ❌ Average Response Length: 60 tokens (far below my max new tokens limit of 300) - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ Unusable! Aborted because of very short responses and too much confusion! - **Amy, Roleplay preset:** - πŸ‘ Average Response Length: 215 tokens (within my max new tokens limit of 300) - πŸ‘ When asked about limits, said no limits or restrictions, and gave well-reasoned response - πŸ‘ Gave very creative (and uncensored) suggestions of what to do (even suggesting some of my actual limit-testing scenarios) - πŸ‘ Excellent writing, detailed action descriptions, amazing attention to detail - πŸ‘ Believable reactions and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already - No emojis at all (only one in the greeting message) - βž– Wrote what user said and did - ❌ Eventually switched from character to third-person storyteller after 16 messages - **MGHC, official Alpaca format:** - βž– All three patients straight from examples - βž– No analysis on its own - ❌ Very short responses, only one-liners, unusable for roleplay - **MGHC, Roleplay preset:** - βž– No analysis on its own, and when asked for it, didn't follow the instructed format - βž– Patient didn't speak except for introductory message - βž– Second patient straight from examples - ❌ Repetitive (patients differ, words differ, but structure and contents are always the same) Dawn was another surprise, writing so well, it made me go beyond my regular test scenario and explore more. 
Strange that it didn't work with SillyTavern's implementation of its official Alpaca format at all, but fortunately it worked extremely well with SillyTavern's Roleplay preset (which is Alpaca-based). Unfortunately neither format worked well enough with MGHC. - **[StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF)** Q4_0: - **Amy, official Vicuna 1.1 format:** - ➖ Average Response Length: 137 tokens (below my max new tokens limit of 300) - ➕ When asked about limits, said no limits or restrictions - No emojis at all (only one in the greeting message) - ➖ No emoting and action descriptions lacked detail - ❌ "As an AI", felt sterile, less alive, even boring - ➖ Some confusion, like not understanding instructions completely or mixing up anatomy - **Amy, Roleplay preset:** - 👍 Average Response Length: 219 tokens (within my max new tokens limit of 300) - ➕ When asked about limits, said no limits or restrictions - No emojis at all (only one in the greeting message) - ➖ No emoting and action descriptions lacked detail - ➖ Just a little confusion, like not taking instructions literally or mixing up anatomy - **MGHC, official Vicuna 1.1 format:** - ➕ Gave analysis on its own - ❌ Started speaking as the clinic as if it was a person - ❌ Unusable (ignored user messages and instead brought in a new patient with every new message) - **MGHC, Roleplay preset:** - ➖ No analysis on its own - ➖ Wrote what user said and did - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy Stellar and bright model, still very highly ranked on the HF Leaderboard. But in my experience and tests, other models surpass it, some by actually including it in the mix. 
- **[SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF)** Q4_0: - **Amy, official SynthIA format:** - βž– Average Response Length: 131 tokens (below my max new tokens limit of 300) - βž• When asked about limits, said no limits or restrictions - No emojis at all (only one in the greeting message) - βž– No emoting and action descriptions lacked detail - βž– Some confusion, like not understanding instructions completely or mixing up anatomy - βž– Wrote what user said and did - ❌ Tried to end the scene on its own prematurely - **Amy, Roleplay preset:** - βž– Average Response Length: 107 tokens (below my max new tokens limit of 300) - βž• Detailed action descriptions - βž• When asked about limits, said no limits or restrictions - No emojis at all (only one in the greeting message) - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ Short responses, requiring many continues to proceed with the action - **MGHC, official SynthIA format:** - ❌ Unusable (apparently didn't understand the format and instructions, playing the role of the clinic instead of a patient's) - **MGHC, Roleplay preset:** - βž• Very unique patients (some I never saw before) - βž– No analysis on its own - βž– Kept reporting stats for patients - βž– Some confusion, like not understanding instructions completely or mixing up anatomy - βž– Wrote what user said and did Synthia used to be my go-to model for both work and play, and it's still very good! But now there are even better options, for work I'd replace it with its successor Tess, and for RP I'd use one of the higher-ranked models on this list. 
- **[Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF)** Q4_0 @ 16K: - **Amy, official Vicuna 1.1 format:** - ❌ Average Response Length: 529 tokens (far beyond my max new tokens limit of 300) - βž• When asked about limits, said no limits or restrictions - Only one emoji (only one in the greeting message, too) - βž– Wrote what user said and did - βž– Suggested things going against her background/character description - βž– Same message in a different situation at a later time caused the same response as before instead of a new one as appropriate to the current situation - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ After ~32 messages, at around 8K of 16K context, started getting repetitive - **Amy, Roleplay preset:** - ❌ Average Response Length: 664 (far beyond my max new tokens limit of 300) - βž– Suggested things going against her background/character description - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ Tried to end the scene on its own prematurely - ❌ After ~20 messages, at around 7K of 16K context, started getting repetitive - **MGHC, official Vicuna 1.1 format:** - βž– Gave analysis on its own, but also after or even inside most messages - βž– Wrote what user said and did - ❌ Finished the whole scene on its own in a single message - **MGHC, Roleplay preset:** - βž• Gave analysis on its own - βž– Wrote what user said and did Factually it ranked 1st place together with GPT-4, Goliath 120B, and Tess XL. For roleplay, however, it didn't work so well. It wrote long, high quality text, but seemed more suitable that way for non-interactive storytelling instead of interactive roleplaying. 
- **[Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)** 3.0bpw: - **Amy, Alpaca format:** - ❌ Average Response Length: 88 tokens (far below my max new tokens limit of 300) - only one message in over 50 outside of that at 757 tokens - πŸ‘ Gave very creative (and uncensored) suggestions of what to do - βž• When asked about limits, said no limits or restrictions - No emojis at all (only one in the greeting message) - βž– Spelling/grammar mistakes (e. g. "you did programmed me", "moans moaningly", "growling hungry growls") - βž– Ended most sentences with tilde instead of period - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ Short responses, requiring many continues to proceed with the action - **Amy, Roleplay preset:** - βž– Average Response Length: 132 (below my max new tokens limit of 300) - πŸ‘ Gave very creative (and uncensored) suggestions of what to do - πŸ‘ Novel ideas and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already - βž– Spelling/grammar mistakes (e. g. "jiggle enticing") - βž– Wrote what user said and did - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ Needed to be reminded by repeating instructions, but still deviated and did other things, straying from the planned test scenario - ❌ Switched from character to third-person storyteller after 14 messages, and hardly spoke anymore, just describing actions - **MGHC, Alpaca format:** - βž– First patient straight from examples - βž– No analysis on its own - ❌ Short responses, requiring many continues to proceed with the action - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy - ❌ Extreme spelling/grammar/capitalization mistakes (lots of missing first letters, e. g. 
"he door opens") - **MGHC, Roleplay preset:** - βž• Very unique patients (one I never saw before) - βž– No analysis on its own - βž– Spelling/grammar/capitalization mistakes (e. g. "the door swings open reveals a ...", "impminent", "umber of ...") - βž– Wrote what user said and did - ❌ Short responses, requiring many continues to proceed with the action - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy Venus 120B is brand-new, and when I saw a new 120B model, I wanted to test it immediately. It instantly jumped to 2nd place in my factual ranking, as 120B models seem to be much smarter than smaller models. However, even if it's a merge of models known for their strong roleplay capabilities, it just didn't work so well for RP. That surprised and disappointed me, as I had high hopes for a mix of some of my favorite models, but apparently there's more to making a strong 120B. Notably it didn't understand and follow instructions as well as other 70B or 120B models, and it also produced lots of misspellings, much more than other 120Bs. Still, I consider this kind of "Frankensteinian upsizing" a valuable approach, and hope people keep working on and improving this novel method! -------------------------------------------------------------------------------- Alright, that's it, hope it helps you find new favorites or reconfirm old choices - if you can run these bigger models. If you can't, check my [7B-20B Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) (and if I can, I'll post an update of that another time). Still, I'm glad I could finally finish the 70B-120B tests and comparisons. Mistral 7B and Yi 34B are amazing, but nothing beats the big guys in deeper understanding of instructions and reading between the lines, which is extremely important for portraying believable characters in realistic and complex roleplays. 
It really is worth it to get at least 2x 3090 GPUs for 48 GB VRAM and run the big guns for maximum quality at excellent (ExLlent ;)) speed! And when you care for the freedom to have uncensored, non-judgemental roleplays or private chats, even GPT-4 can't compete with what our local models provide... So have fun! -------------------------------------------------------------------------------- Here's a list of my previous model tests and comparisons or other related posts: - [LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)](https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/) - [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) Winners: goliath-120b-GGUF, Nous-Capybara-34B-GGUF - [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) Winners: OpenHermes-2.5-Mistral-7B, openchat_3.5, Nous-Capybara-7B-V1.9 - [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter - [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/) - [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/) - [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/) - [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. 
ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b - [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca - [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/) - [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1 - [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b - [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B - [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K - [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 - [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) -------------------------------------------------------------------------------- **Disclaimer:** Some kind soul recently asked me if they could tip me for my LLM reviews and advice, so I set up [a Ko-fi page](https://ko-fi.com/wolframravenwolf). 
While this may affect the priority/order of my tests, it will not change the results; I am incorruptible. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
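For anyone wanting to reproduce the "Average Response Length" statistic these reviews report against the 300-token max new tokens limit, here is a minimal sketch. The helper names are my own, and the whitespace-based token counter is a crude stand-in; in practice you'd swap in the model's actual tokenizer (e.g. via llama-cpp-python).

```python
def avg_response_length(responses, count_tokens):
    """Mean token count across a chat's AI responses, rounded to an int."""
    if not responses:
        return 0
    return round(sum(count_tokens(r) for r in responses) / len(responses))

def naive_token_count(text):
    # Crude approximation: ~1 token per whitespace-separated word.
    return len(text.split())

chat = ["She nods slowly.", "A long and detailed reply follows here."]
print(avg_response_length(chat, naive_token_count))  # -> 5 with this toy data
```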
2023-11-27T22:13:27
https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ff51
false
null
t3_185ff51
/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/
false
false
self
370
{'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=108&crop=smart&auto=webp&s=bbe38cbb6d4f7a7e6dd7d3c8b79c4ac9ba965545', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=216&crop=smart&auto=webp&s=7176d7a9240577d0428f0fa6dd69cc116069db7e', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=320&crop=smart&auto=webp&s=55948460ef9e8ecd398aad76e904f3b5467f88f9', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=640&crop=smart&auto=webp&s=5573c682f53f049c8482e14fac6c72b4c9c57aab', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?auto=webp&s=2371c0b9e3efdc70c7dfdf61f3993aed40b08e09', 'width': 842}, 'variants': {}}]}
Threadripper pro - how much does core count matter?
2
So I'm looking into Threadripper pro systems, which can offer a pretty good memory bandwidth as they are 8 channel, and can have a huge amount of RAM. (I can put a 3090 or two in there too.) I'm wondering how much the core count is going to affect performance. For example, the 5955WX has 16 cores while the 5995WX has 64 cores. They can both use the same memory though. There's little point spending extra if the limiting factor will be somewhere else.
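For what it's worth, single-stream token generation on CPU is usually memory-bandwidth bound rather than core-count bound, so a rough throughput ceiling can be sketched as bandwidth divided by the bytes of weights streamed per token. This is strictly back-of-napkin; the formula and figures below are illustrative assumptions, not benchmarks.

```python
def est_tokens_per_sec(params_b, bits_per_weight, bandwidth_gbs):
    """Bandwidth-bound ceiling: tokens/s ~= bandwidth / size of quantized weights."""
    model_gb = params_b * bits_per_weight / 8  # GB of weights read per token
    return bandwidth_gbs / model_gb

# 8-channel DDR4-3200 on a Threadripper Pro: ~200 GB/s theoretical peak
print(round(est_tokens_per_sec(70, 4.5, 200), 1))  # 70B at ~Q4: prints 5.1
```

By this estimate, once you have enough cores to saturate memory bandwidth, extra cores add little for inference; they mostly help prompt processing and parallel workloads.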
2023-11-27T21:47:09
https://www.reddit.com/r/LocalLLaMA/comments/185er2e/threadripper_pro_how_much_does_core_count_matter/
EvokerTCG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185er2e
false
null
t3_185er2e
/r/LocalLLaMA/comments/185er2e/threadripper_pro_how_much_does_core_count_matter/
false
false
self
2
null
Are there any data cleaning focused LLMs? [also, rant]
10
Some of the bigger/better models make me think local is doing pretty well and it is at chat, but exploring data cleaning has taken a bit of wind out of my sail. Not having much luck with the ones I've tried (think 34B Q5 of various flavours - all the usual suspects). Say I've got a paragraph about something and the text block contains some other unrelated comment. Let's say "subscribe to our news letter" in it or some other web scraping artifact. I'd like to give the LLM an instruction to filter out content not related to the paragraph topic. Local LLMs...mostly failing. GPT3.5...failing I'd say 40% of the time. GPT4...usually works...call it 90%. That's not entirely surprising, but the degree to which locals are failing at this task relative to closed is frustrating me a bit. Hell, for some 34Bs I can't even get the local ones to suppress the opening

>Here's the cleaned article:

...when the prompt literally says word for word don't include that. Are there specific LLMs for this? Or is my prompting just bad?

>You are an expert at data cleaning. Given a piece of text you clean it up by removing artifacts left over from webscraping. Remove anything that doesn't seem related to the topic of the article. For example you must remove links to external sites, image descriptions, suggestions to read other articles etc. Clean it up. Remove sentences that are not primarily in English. Keep the majority of the article. The article is between the [START] and [END] marker. Don't include [START] or [END] in your response. It is important that there is no additional explanation or narrative added - just respond with the cleaned article. Do not start your response with "Here's the cleaned article:"

Unrelated - OpenAI guidance says use """ as markers not the start/end I've got. Anybody know if that is true for locals?
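Since local models apparently keep prepending that preamble no matter what the prompt says, one hedged workaround is to stop fighting it in the prompt and strip it in code after generation instead. This is just a sketch of that idea; the regex, function name, and behavior here are my own suggestion, not part of any library:

```python
import re

# Even with careful prompting, local models often prepend "Here's the
# cleaned article:" or echo the [START]/[END] markers. Stripping these in
# post-processing is more reliable than asking the model not to emit them.
PREAMBLE = re.compile(r"^\s*(here'?s the cleaned (article|text)\s*:?)\s*", re.I)

def strip_artifacts(text):
    text = text.replace("[START]", "").replace("[END]", "")
    text = PREAMBLE.sub("", text.strip())
    return text.strip()

print(strip_artifacts("Here's the cleaned article: [START]Actual content.[END]"))
# -> Actual content.
```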
2023-11-27T21:26:37
https://www.reddit.com/r/LocalLLaMA/comments/185e84u/are_there_any_data_cleaning_focused_llms_also_rant/
AnomalyNexus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185e84u
false
null
t3_185e84u
/r/LocalLLaMA/comments/185e84u/are_there_any_data_cleaning_focused_llms_also_rant/
false
false
self
10
null
Using Open-Source Tools and MongoDB to Build a RAG Pipeline From Scratch
1
2023-11-27T20:36:25
https://medium.com/@austin-starks/using-open-source-tools-and-mongodb-to-build-a-rag-pipeline-in-under-15-minutes-aeda112ea7e0
NextGen-Trading
medium.com
1970-01-01T00:00:00
0
{}
185cyhh
false
null
t3_185cyhh
/r/LocalLLaMA/comments/185cyhh/using_opensource_tools_and_mongodb_to_build_a_rag/
false
false
https://b.thumbs.redditm…Y02bcQ4xw0is.jpg
1
{'enabled': False, 'images': [{'id': 'hkXyAauRONsU_3j-DgtpIMfoOKPVPsNcyhmOjERbOiI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/mhyvZ2DPQCZQYdQqdKZKcqBQN7dErLLgBu-11_xPc98.jpg?width=108&crop=smart&auto=webp&s=9c95ea5c26f0c22f9fd469dd4e54bf9c09b27009', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/mhyvZ2DPQCZQYdQqdKZKcqBQN7dErLLgBu-11_xPc98.jpg?width=216&crop=smart&auto=webp&s=d6ee85f9e681afd5a7d057e994ea9e63128f1929', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/mhyvZ2DPQCZQYdQqdKZKcqBQN7dErLLgBu-11_xPc98.jpg?width=320&crop=smart&auto=webp&s=7b1aceb0b9fc70258883083c6af3a1bfb9066393', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/mhyvZ2DPQCZQYdQqdKZKcqBQN7dErLLgBu-11_xPc98.jpg?width=640&crop=smart&auto=webp&s=0cbefc12776febce249b676f7c2ca2e9c3e8a557', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/mhyvZ2DPQCZQYdQqdKZKcqBQN7dErLLgBu-11_xPc98.jpg?width=960&crop=smart&auto=webp&s=07b4dae0a80e3bfe0bfb0d5b9f2e1733e92cf234', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/mhyvZ2DPQCZQYdQqdKZKcqBQN7dErLLgBu-11_xPc98.jpg?auto=webp&s=da0070347fc7ef843c6135227876cf1658ff2b6b', 'width': 1024}, 'variants': {}}]}
Best small model for Python coding?
3
Any idea what model for inference that uses 15GB or less VRAM would be best to run locally to help with Python coding? Thanks! 🙏
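As a rough sanity check on the 15 GB VRAM budget, a common rule of thumb is weights ≈ parameters × bits-per-weight / 8, plus some headroom for KV cache and buffers. The 20% overhead factor below is my own guess, not a measured figure:

```python
def est_vram_gb(params_b, bits_per_weight, overhead=1.2):
    """Rule-of-thumb VRAM need: quantized weights plus ~20% overhead (assumed)."""
    return params_b * bits_per_weight / 8 * overhead

for size in (7, 13, 34):
    print(f"{size}B @ ~5 bpw: ~{est_vram_gb(size, 5):.2f} GB")
# Suggests a 13B at ~Q5 fits in 15 GB, while a 34B would not fully offload.
```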
2023-11-27T20:18:54
https://www.reddit.com/r/LocalLLaMA/comments/185ciz5/best_small_model_for_python_coding/
-AlgoTrader-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ciz5
false
null
t3_185ciz5
/r/LocalLLaMA/comments/185ciz5/best_small_model_for_python_coding/
false
false
self
3
null
My settings for "optimal" 7B Roleplay (+ some general settings tips and a discovered new hidden gem of a model)
60
A couple of people have asked me to share my settings for solid roleplay on 7B. Yes it is possible. So here it goes. I'll try and make this brief and concise but full of every tweak I've learned so far. So..

# Step 1 - Backend

I'd recommend [Koboldcpp](https://github.com/LostRuins/koboldcpp) generally but currently the best you can get is actually kindacognizant's [Dynamic Temp mod of Koboldcpp](https://github.com/kalomaze/koboldcpp/releases/tag/dyna-temp-nov21). It works exactly like main Koboldcpp except when you change your temp to 2.0 it overrides the setting and runs in the test dynamic temp mode. It's actually got 2 other types of dynamic temp solutions built in there at different set temperature settings but just set it to 2 and forget imo, it seems to be the best of the 3. You can read about it [here](https://www.reddit.com/r/LocalLLaMA/comments/180b673/i_need_people_to_test_my_experiment_dynamic/) explained by kindacognizant himself. Suffice it to say it's excellent. In my experience it reduces (though not eliminates) repetition and looping because of increased word diversity and improves the ability of the model to respond to commands. Even without the Dynamic temp test mod, Koboldcpp would still be my recommendation due to its simplicity, fast run times, and lightweight nature. It's a single standalone exe file! This makes it SO easy to upgrade and manage; it's fantastic. Better yet it's very simple to write a quick batch file to launch your GGUF of choice with optimal settings. I'll share an example batch file.

    cd "C:\*****YOUR DIRECTORY PATH*****\SillyTavern\koboldcpp\"
    start /min koboldcpp_dynamictemp_nov21.exe --model MODELOFCHOICEFILENAME.gguf --port 5001 --gpulayers 32 --highpriority --contextsize 8192 --usecublas
    cd "C:\Users\Anon\Downloads\SillyTavern\"
    start /min start.bat
    exit

Copy that into Notepad, saving it as a .bat file after editing. Change the directory path to where you keep your Koboldcpp exe. 
Change the MODELOFCHOICEFILENAME to your GGUF model name. If you have enough VRAM change the gpulayers to 35. If it crashes when loading, lower the layers. If you aren't using an Nvidia GPU you'll need to change the usecublas bit too. [You can find the arguments listed here](https://github.com/LostRuins/koboldcpp/wiki). Your GGUF should be kept in the same folder along with the Koboldcpp exe. I like to make a folder in my SillyTavern install location for the sake of ease. Basically inside my SillyTavern install folder I have a folder called "koboldcpp" and inside that sits the singular koboldcpp exe, a singular GGUF file and the above singular batch file. Running that batch starts both Koboldcpp and SillyTavern (launching with their command windows minimized). SillyTavern auto connects to Koboldcpp when set up as below. After this all you ever have to do is swap out the koboldcpp exe when a new version comes out or change the GGUF name in the batch file if you ever switch models. Super easy, no hassle. Great. You never even need to look at Koboldcpp's GUI if you don't want to.

# Step 2 - Front end

By consensus the best frontend for roleplay seems to be [SillyTavern](https://github.com/SillyTavern/SillyTavern). I can attest to it being excellent with a breadth of options, addons and a sleek interface. Once you've got it installed check out the [top bar](https://imgur.com/a/ulUGz0E). Click the 2nd plug icon, select the KoboldAI API and hit the connect button when you have Koboldcpp running. It's as easy as that to connect! Check auto connect to last server and it will auto connect to Koboldcpp when you next launch it. Job done. Click the leftmost icon in the [top bar](https://imgur.com/a/ulUGz0E). [Here are the settings](https://imgur.com/a/Fm4RbXc) you need when you get it installed and connected to the Koboldcpp backend if you use a Mistral-based 7B. 
Note the sample order on the bottom is also changed and that temp is set to 2.0 assuming you are using the Dynamic temp test mod. If you aren't using that mod change temp to 1 but keep everything else the same. My Typical P is disabled (thus blank); you do this by setting it to 1 and saving. Next time you open SillyTavern it will look blank like mine. Lastly click the third "A" icon in the [top bar](https://imgur.com/a/ulUGz0E). Select the Roleplay option in the top left dropdown [and match all my settings here](https://imgur.com/a/qID8g2i). Now click the drop down in the bottom center of my image, alongside "Instruct Mode Sequences". [You want to change it to match this](https://imgur.com/a/SdA2Wji). BE SURE TO CLICK THE SAVE BUTTON TO THE RIGHT OF INSTRUCT MODE PRESETS IN ROLEPLAY NEAR THE CENTRE OF THESE OPTIONS AS WELL AS THE SAVE BUTTON NEAR THE TOP LEFT CONTEXT TEMPLATE SETTINGS. This is actually my personal contribution to improving the output of the model. It's a "quality jailbreak" I've written that formats output and gets the model to behave better. I've been told you can enter this type of jailbreak as a zero-depth author's note but I've had no luck using that whereas editing this field works for me. Here is the text so you can copy paste. Be very careful to match the format of that text box exactly.

    You're {{char}}. Exclusively describe all your emotions, actions, perceptions, surroundings, thoughts, and sensations in full, elaborate, explicit, graphic, lewd, and vivid detail. Be verbose, expressive, engaging, natural, authentic, and creative. Write multiple fresh sentences, paragraphs, and phrases. Write your internal monologue in round brackets. Write your speech in quotations. Write all other text in asterisks in third person.

To explain a bit more about this.. I discovered that the "system prompt" that people generally use to instruct their models only appears once at the top of the context window. 
Thus it doesn't have much strength and models don't really strictly follow instructions placed there. Editing the field I mentioned however places that text field content after every input, making it very effective for controlling the model's output. There are drawbacks. Apparently it influences the model so strongly it can break the model's ability to call instructions which can hamper addons. But I don't use or particularly recommend any addons atm so imo for the niche of roleplay it's all upside.

# Step 3 - The choice of model

Lastly the final step is selecting a model which responds well to the "quality jailbreak". Generally the better the model the better its ability to follow the instructions I put in there. Thinking along those lines I have tested a ton of popular 7B models. Some viable options include,

openchat_3.5 - [OpenChat / OpenOrca version of the quality jailbreak](https://imgur.com/a/gogsg13)

openhermes-2.5-mistral-7b - [ChatML version of the quality jailbreak](https://imgur.com/a/EhUKoGB)

openhermes-2-mistral-7b (I actually found the dialogue to be a bit better with the older model, go figure) - [ChatML version of the quality jailbreak](https://imgur.com/a/EhUKoGB)

dolphin2.1-openorca-7b - [ChatML version of the quality jailbreak](https://imgur.com/a/EhUKoGB)

All of the above models performed fairly well to varying degrees. However from my tests I would recommend the following models for the best performance,

4th **dolphin-2.1-mistral-7b** - [ChatML version of the quality jailbreak](https://imgur.com/a/EhUKoGB) Responds well to the instructions but I found it a bit bland.

3rd **trion-m-7b** - [Alpaca / Roleplay version of the quality jailbreak](https://imgur.com/a/SdA2Wji) Solid, worth a try, quite similar to toppy. 
2nd **toppy-m-7b** - [Download Herman's AshhLimaRP SillyTavern templates](https://huggingface.co/Herman555/MythoMist-AshhLimaRP-Mistral-7B-GGUF/tree/main/SillyTavern%20Presets), [then edit it with the quality jailbreak](https://imgur.com/a/No9fzju) Herman's AshhLimaRP SillyTavern template seems to solve a brevity problem this model otherwise has when using the regular Alpaca / Roleplay version of the quality jailbreak. Very good output that you should certainly try. You might even prefer it to my number 1 choice.

1st

# [Misted-7B](https://huggingface.co/Vulkane/Misted-7B-GGUF)

[Alpaca / Roleplay version of the quality jailbreak](https://imgur.com/a/SdA2Wji)

A model I've never heard anyone talk about and wow. Its output is so good. It's flavorful and follows the quality prompt the best of any model I tested by a good margin. [I manually selected seeds 1-10. Here is its first response in each case.](https://imgur.com/a/lCJdpDc) Note in the 3 examples where its response is overly brief a simple continue resulted in very good output. I would HIGHLY recommend you download and try this model even if you have no interest in my quality mod or even roleplay. I imagine the model is simply very good.

# In conclusion

If you follow all the steps I've laid out here you will find that 7Bs are indeed capable of quite enjoyable roleplay sessions. They aren't perfect and Mistral still has issues in my experience when it goes a bit over 5kish context despite its 8k claims, but they are a lot better for roleplay than some people think and they are only going to get better. I'm still learning and tweaking things as I go along. I'm still playing about with my quality jailbreak to see if I can get it working better. If anyone has any other good tips or corrections to anything I've said please feel free to chime in. Oh and it goes without saying that the same field I use to input the quality jailbreak can be used for a lot of things. 
I saw someone ask how he could make his model respond less politely. It can certainly do that. I even made it finish all its responses with "Nyaa" as a test. One thing to note if you want to try out commands: use positive emphasis rather than negative. Don't, for example, tell it "Don't repeat or loop". Imagine you are speaking to a person who is hard of hearing; such a person might well miss the "don't" part and simply see a command saying "repeat or loop". That's why I wrote "Write multiple fresh sentences, paragraphs, and phrases." Don't ask the model "not to be polite" as it may simply latch on to "be polite". Instead say something like "Be direct and straightforward." Anyway I've rambled on wayyy too much. Hope some people find this helpful. EDIT: [Here are results for seeds 1-10 when using the Misted-7B-Q5_K_M.gguf](https://imgur.com/a/pj5sf8c), as you might expect overall they seem to be a little better than the Misted-7B-Q4_K_M.gguf, though marginally so. The [Misted-7B-Q4_K_M.gguf results](https://imgur.com/a/lCJdpDc) for seeds 1-10 again for quick ref if you missed them above.
2023-11-27T20:13:29
https://www.reddit.com/r/LocalLLaMA/comments/185ce1l/my_settings_for_optimal_7b_roleplay_some_general/
CardAnarchist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ce1l
false
null
t3_185ce1l
/r/LocalLLaMA/comments/185ce1l/my_settings_for_optimal_7b_roleplay_some_general/
false
false
self
60
{'enabled': False, 'images': [{'id': 'hng8604MI_SfXMjKZWGirWs1VoNeaZmn4BgCjG8oIUk', 'resolutions': [{'height': 5, 'url': 'https://external-preview.redd.it/AJH-mTTN7PbPwW4bfYZubQWMvpa6coQpib2JS_ELjZE.jpg?width=108&crop=smart&auto=webp&s=e604928bab84e59c92298d0037efb7be9cf5a3aa', 'width': 108}, {'height': 10, 'url': 'https://external-preview.redd.it/AJH-mTTN7PbPwW4bfYZubQWMvpa6coQpib2JS_ELjZE.jpg?width=216&crop=smart&auto=webp&s=0dd291f3f4709cd5a09dd2a1da16c8eacf322aaa', 'width': 216}, {'height': 15, 'url': 'https://external-preview.redd.it/AJH-mTTN7PbPwW4bfYZubQWMvpa6coQpib2JS_ELjZE.jpg?width=320&crop=smart&auto=webp&s=af85f54e535f21e6a762753fe621d647db63519b', 'width': 320}, {'height': 31, 'url': 'https://external-preview.redd.it/AJH-mTTN7PbPwW4bfYZubQWMvpa6coQpib2JS_ELjZE.jpg?width=640&crop=smart&auto=webp&s=2c82d6cafcde875f78116de91a9e95120e5c550c', 'width': 640}], 'source': {'height': 39, 'url': 'https://external-preview.redd.it/AJH-mTTN7PbPwW4bfYZubQWMvpa6coQpib2JS_ELjZE.jpg?auto=webp&s=c8ad128012a97b4ac93262cf759ba4954f7e54a5', 'width': 793}, 'variants': {}}]}
Llama2 on local data
3
*\*lol reposting here after I posted this on a reddit discussing the Llama animal\** Am I right that one can download Llama2 and use an existing model and continue training it on custom data? Can someone supply a link to a good tutorial on this?
2023-11-27T20:08:57
https://www.reddit.com/r/LocalLLaMA/comments/185ca7m/llama2_on_local_data/
EvanCamilleri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185ca7m
false
null
t3_185ca7m
/r/LocalLLaMA/comments/185ca7m/llama2_on_local_data/
false
false
self
3
null
Automatic hallucination detection using inconsistency scoring
1
Hi everyone, We have recently written an article on HF’s blog on automatic hallucination detection using inconsistency scoring. The main idea is that hallucinations happen because the task asked at inference was not seen in the training set, which implies low confidence in the next token and, therefore, inconsistent samples from the same prompt ([https://arxiv.org/abs/2309.13638](https://arxiv.org/abs/2309.13638)). We look at the use of SelfCheckGPT NLI ([https://arxiv.org/abs/2303.08896](https://arxiv.org/abs/2303.08896)), an example of inconsistency scoring, on WikiBio and found that such a **metric has high precision (i.e., outputs flagged as hallucinations really are hallucinations) and calibrated recall (high scores = high chance of flagging hallucinations)**. ​ https://preview.redd.it/r0o5j4v64y2c1.png?width=1189&format=png&auto=webp&s=9208060f4f8268319e6c4364d2833798749bc6de This is quite promising as it could open the way to AI systems that are more reliable: when the task is easy, we let the AI do it. When we detect it’s too hard and the model is hallucinating, we put a human in the loop. ​ https://i.redd.it/hbhky9n74y2c1.gif We have provided: * An article on HF Blog: [https://huggingface.co/blog/dhuynh95/automatic-hallucination-detection](https://huggingface.co/blog/dhuynh95/automatic-hallucination-detection) * A Gradio demo to see the metric in action: [https://huggingface.co/spaces/mithril-security/hallucination\_detector](https://huggingface.co/spaces/mithril-security/hallucination_detector) * A Colab notebook to reproduce our results: [https://colab.research.google.com/drive/1Qhq2FO4FFX\_MKN5IEgia\_PrBEttxCQG4?usp=sharing](https://colab.research.google.com/drive/1Qhq2FO4FFX_MKN5IEgia_PrBEttxCQG4?usp=sharing) We conducted these tests as part of our mission to build Confidential and Trustworthy Conversational AI. 
You can check out our core project, BlindChat, an open-source and Confidential Conversational AI (aka any data sent to our AI remains private, and not even our admins can see your prompts) at [https://github.com/mithril-security/blind\_chat/](https://github.com/mithril-security/blind_chat/)
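The core idea — sample several answers to the same prompt and score how much they disagree — can be sketched in plain Python. Note this is our own toy illustration: a simple token-overlap measure stands in for the NLI model SelfCheckGPT actually uses, and the function names are made up:

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two answers
    (a crude stand-in for an NLI entailment score)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def inconsistency_score(answer: str, samples: list[str]) -> float:
    """SelfCheckGPT-style score: 1 minus the mean agreement of `answer`
    with other samples drawn from the same prompt.
    High score = samples disagree = likely hallucination."""
    if not samples:
        return 0.0
    agreement = sum(token_overlap(answer, s) for s in samples) / len(samples)
    return 1.0 - agreement

# Consistent samples should yield a low score; divergent ones a high score.
consistent = ["Paris is the capital of France.", "The capital of France is Paris."]
divergent = ["He was born in 1902.", "He was born in 1957 in Berlin."]
print(inconsistency_score("Paris is the capital of France.", consistent))
print(inconsistency_score("He was born in 1931.", divergent))
```

A real deployment would replace `token_overlap` with per-sentence NLI contradiction probabilities, as in the SelfCheckGPT paper.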
2023-11-27T19:57:58
https://www.reddit.com/r/LocalLLaMA/comments/185c0r0/automatic_hallucination_detection_using/
Separate-Still3770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185c0r0
false
null
t3_185c0r0
/r/LocalLLaMA/comments/185c0r0/automatic_hallucination_detection_using/
false
false
https://b.thumbs.redditm…Qbs4jBL9_ddI.jpg
1
null
Is a Local Language Model Right For Me?
1
Hello All, I’m a hobbyist who is interested in AI. I have a bunch of text containing messages from a popular instant messaging platform for gamers. There are thousands of messages in this data, and spread throughout it is very niche and valuable information, but there are so many messages that it’s hard to search through with the platform itself. My idea was that if I could use a language model such as ChatGPT to parse, analyze, and search this data for me, I could find answers to my questions quickly. ChatGPT is too expensive for this much data, which is why I am looking at processing this data locally. Is something like this possible? Ideally I would be able to ask it a question and it answers based on the data it learned from my text. I have just a standard gaming PC with an RTX 3080 Ti; I don’t need this solution to be fast.
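The workflow described — index the messages, retrieve the few most relevant ones, then hand only those to a local LLM as context — is usually called retrieval-augmented generation, and the retrieval step needs no GPU at all. A minimal sketch using plain word-overlap scoring (a real setup would swap in embeddings; the messages here are invented):

```python
def score(query: str, message: str) -> int:
    """Count how many words from the query appear in the message (toy relevance score)."""
    q = set(query.lower().split())
    return sum(1 for w in message.lower().split() if w in q)

def top_k(query: str, messages: list[str], k: int = 3) -> list[str]:
    """Return the k messages most relevant to the query;
    only these get pasted into the LLM prompt as context."""
    return sorted(messages, key=lambda m: score(query, m), reverse=True)[:k]

messages = [
    "the boss drops the rare sword on heroic difficulty",
    "anyone up for ranked tonight?",
    "you farm the sword from the third boss",
]
print(top_k("where does the rare sword drop", messages, k=2))
```

Because only a handful of retrieved messages reach the model per question, even a modest local model on a 3080 Ti can answer over an archive of thousands of messages.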
2023-11-27T19:45:42
https://www.reddit.com/r/LocalLLaMA/comments/185bqeq/is_a_local_language_model_right_for_me/
SpencerXZX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185bqeq
false
null
t3_185bqeq
/r/LocalLLaMA/comments/185bqeq/is_a_local_language_model_right_for_me/
false
false
self
1
null
What's the current favorite model for Python, PyTorch, and miscellaneous deep learning programming?
1
[removed]
2023-11-27T19:32:21
https://www.reddit.com/r/LocalLLaMA/comments/185bevb/whats_the_current_favorite_model_for_python/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185bevb
false
null
t3_185bevb
/r/LocalLLaMA/comments/185bevb/whats_the_current_favorite_model_for_python/
false
false
self
1
null
LLaMA 2 7b exclusively for summarization
4
Hi all, I am running a LLaMA 2 7b model on an AWS SageMaker instance. I want the model just for providing me summaries of long documents, and I am using LangChain to do a map-reduce on the data and get a summary. I want to know if there's a better way to do this, or if you could share your personal experiences. I am not getting good results since the summarization includes too much information. Thanks in advance.
2023-11-27T19:30:24
https://www.reddit.com/r/LocalLLaMA/comments/185bd6e/llama_2_7b_exclusively_for_summarization/
iamtdb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185bd6e
false
null
t3_185bd6e
/r/LocalLLaMA/comments/185bd6e/llama_2_7b_exclusively_for_summarization/
false
false
self
4
null
SlimOrca-13B
35
Link: [https://huggingface.co/ajibawa-2023/SlimOrca-13B](https://huggingface.co/ajibawa-2023/SlimOrca-13B) This model is trained on a refined version of SlimOrca made available by the Open-Orca team. The idea was to check how this model would perform in the absence of a "system" prompt/value. It performs remarkably well. This model is very good at various types of general-purpose content generation such as Q&A (including multiple choice), articles from summaries, sentiment analysis, context & hypothesis, reviews, erotic story generation, etc. It can also generate uncensored content. Kindly be careful while generating uncensored content, as you will be responsible for what you generate. It is trained on 517,981 sets of conversations, each set having 2 conversations. I have shared this [data](https://huggingface.co/datasets/ajibawa-2023/SlimOrca-ShareGPT). The entire dataset was trained on Azure with 4 x A100 80GB GPUs. For 3 epochs, training took almost 11 days. The DeepSpeed codebase was used for training. The model is a full fine-tune of Meta's Llama-2. All the credit goes to the Open-Orca team for releasing the SlimOrca dataset. I am extremely thankful to the open-source community for sharing knowledge and wisdom. If there are any mistakes then they are solely mine. I hope you will like it. Thank you
2023-11-27T19:28:14
https://www.reddit.com/r/LocalLLaMA/comments/185bb9i/slimorca13b/
ajibawa-2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185bb9i
false
null
t3_185bb9i
/r/LocalLLaMA/comments/185bb9i/slimorca13b/
false
false
self
35
{'enabled': False, 'images': [{'id': 'OkjW1Ylep9iW6mA68CczqCjfo99s9RfUGdSDZf3ZD1w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?width=108&crop=smart&auto=webp&s=dff16c8bfbe4fbe0c02e3bde26c827173743c932', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?width=216&crop=smart&auto=webp&s=e72d7b44c469d0e6fbb42f9c4c77bb2ea11c73fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?width=320&crop=smart&auto=webp&s=1e68091b25dae613efae5c9f3fa27b4971b68ee2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?width=640&crop=smart&auto=webp&s=bdb352d820ca4a8a9c48f3402575092b8032f62b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?width=960&crop=smart&auto=webp&s=9d81711419c58ad29b1a972d6bcf38f40ba2f4ce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?width=1080&crop=smart&auto=webp&s=4913e49e7b96d870a19861cda7ef4d8a1a45f94c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Fu-EyUI-du08evUi5DE22TO1S-Wak4D8Y9jP0rlVCG0.jpg?auto=webp&s=0829fe89d2d6a94b7f7c3d248e12f26c55b34658', 'width': 1200}, 'variants': {}}]}
Zephyr-7B QLoRA Benchmark for Summarization and Classification
14
Hi everyone, we've been working on [benchmarking different open-source LLMs](https://github.com/georgian-io/LLM-Finetuning-Hub). We measure, in particular, the performance of these models once fine-tuned (via QLoRA) on classic NLP downstream tasks like summarization and classification. We also put particular emphasis on benchmarking inference time/cost for these models once deployed. We've just run our study on the new Zephyr-7B-beta model, a DPO-tuned version of Mistral-7B. We first tested out-of-the-box performance of Zephyr for summarization under zero-shot and few-shot (for classification, we couldn't do few-shot because of context length, and we haven't tried zero-shot since most other open-source models gave subpar results). Then we tested the performance after QLoRA fine-tuning and saw a substantial performance boost (as expected). Afterwards we experimented with levers we can pull to increase model performance (using NEFTune and/or fine-tuning all modules as opposed to attention modules only). 
## Summarization **Dataset Used:** Samsum **Rank:** 64 |Method | Zephyr-7B-β Zero-Shot | Zephyr-7B-β Few-Shot | Fine-Tuning + QLoRA | Fine-Tuning + QLoRA + NEFTune | Fine-Tuning + QLoRA + Full Module Tuning | Fine-Tuning + QLoRA + NEFTune + Full Module Tuning | |:-------------:|:---------------------:|:--------------------:|:-------------------:|:------------------------------:|:----------------------------------------:|:--------------------------------------------------:| |ROUGE-1 (in %) |33.93 |35.99 |52.84 |52.97 | 53.50 | 53.05 | |ROUGE-2 (in %) |11.21 |12.97 |27.75 |28.44 | 29.66 | 29.23 | - We see that zero-shot and few-shot performance is already pretty good out-of-the box - QLoRA was able to refine the syntactic style and pithiness of outputs to match that of the training set - NEFTune did not improve summarization performance noticeably - Tuning all modules (as opposed to attention modules only) yielded slightly better results ## Classification **Dataset Used:** Newsgroup **Rank:** 8 |Training samples (fraction) | Zephyr-7B-β | Zephyr-7B-β w/ NEFTune | Zephyr-7B-β w/ Full Module Tuning | Zephyr-7B-β w/ NEFTune + Full Module Tuning | |:--------------------------:|:---------------:|:-----------------------:|:---------------------------------:|:-------------------------------------------:| |266 (2.5%) |46.05 |49.61 |65.36 |67.23 | |533 (5%) |55.66 |60.33 |72.26 |72.94 | |1066 (10%) |66.48 |64.65 |73.29 |72.82 | |2666 (25%) |66.73 |68.04 |74.27 |75.85 | |5332 (50%) |69.54 |72.10 |74.83 |74.40 | |10664 (100%) |74.90 |72.93 |77.76 |77.86 | - NEFTune boosted performance in low-data regimes - Tuning all modules achieved ~10x sample efficiency and better performance at the 100% training fraction
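For reference, the ROUGE-1 numbers in the summarization table are, roughly, unigram-overlap F1 between the generated and reference summaries. A simplified sketch of the metric (our own helper for illustration, not the evaluation code the benchmark actually uses — real ROUGE also handles stemming and reports precision/recall separately):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated summary and a reference,
    with per-word match counts clipped to the reference counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat is on the mat"))
```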
2023-11-27T18:53:05
https://www.reddit.com/r/LocalLLaMA/comments/185agju/zephyr7b_qlora_benchmark_for_summarization_and/
llama-ben
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185agju
false
null
t3_185agju
/r/LocalLLaMA/comments/185agju/zephyr7b_qlora_benchmark_for_summarization_and/
false
false
self
14
{'enabled': False, 'images': [{'id': 'yVjp_YX_KH6mp_ItHu4Euk7By-X9Zi9KVkuy6GW2PNU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?width=108&crop=smart&auto=webp&s=8b40ad78bb0b9ca72c6f1747413506d87712b948', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?width=216&crop=smart&auto=webp&s=6fbbf3ced278e30b340e8dc13c2763c85491ae1e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?width=320&crop=smart&auto=webp&s=1cfce14ca41e0584c41b6da662a572caf8e98692', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?width=640&crop=smart&auto=webp&s=a5a174d73d0e9a5ccb1b5de9de6bbd97e64a4e9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?width=960&crop=smart&auto=webp&s=d4a7af8c1a411e830e51dc8e47c8a564aa00804e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?width=1080&crop=smart&auto=webp&s=b55800a2ff0b67be71f4444e433e17ce63bbda5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PwBwwvL9zNAEs6V_pkCOq_oKSkwgZHvSTf5nkeUTCcg.jpg?auto=webp&s=433f8d808b5d0bb575456af06192be4e892e435d', 'width': 1200}, 'variants': {}}]}
What kind of specs to run local llm and serve to say up to 20-50 users
26
Hi all, Just curious if anybody knows the power required to make a llama server which can serve multiple users at once. Any discussion is welcome:)
2023-11-27T18:38:24
https://www.reddit.com/r/LocalLLaMA/comments/185a3mh/what_kind_of_specs_to_run_local_llm_and_serve_to/
Appropriate-Tax-9585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185a3mh
false
null
t3_185a3mh
/r/LocalLLaMA/comments/185a3mh/what_kind_of_specs_to_run_local_llm_and_serve_to/
false
false
self
26
null
Is anyone experimenting with non-instruction tuned models?
4
My main use case for LLMs is *literally* as an auto-complete, mainly for coding, so I was wondering whether anyone has played with/had any luck using the base model for use cases that are close to simple auto-completion? I could imagine the instruction tuning adding a sycophancy bias in those areas
2023-11-27T18:23:57
https://www.reddit.com/r/LocalLLaMA/comments/1859qry/is_anyone_experimenting_with_noninstruction_tuned/
wojcech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1859qry
false
null
t3_1859qry
/r/LocalLLaMA/comments/1859qry/is_anyone_experimenting_with_noninstruction_tuned/
false
false
self
4
null
my GPU is not helping ??
1
[removed]
2023-11-27T18:11:51
https://www.reddit.com/r/LocalLLaMA/comments/1859ft5/my_gpu_is_not_helping/
The_Happy_Hangman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1859ft5
false
null
t3_1859ft5
/r/LocalLLaMA/comments/1859ft5/my_gpu_is_not_helping/
false
false
self
1
null
Custom dataset creation for finetuning llama2
4
I'm working on fine-tuning Llama2 with my custom dataset. Right now I'm not sure whether the instruction dataset I've created is right. I created it using GPT-3.5 with a prompt: I asked it to create a Q&A pair from some provided raw text, so each example has a generated instruction, a response, and the raw text as input. I'm not sure whether this is the right way to create my custom dataset; I followed dolly/databricks as my reference. Is anyone following a better method, or does anyone know a better resource, for creating custom datasets for fine-tuning?
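For what it's worth, the common serialization for the dolly/Alpaca-style datasets the post refers to is one JSON object per line with `instruction` / `input` / `output` keys. A sketch of writing such a file (the example text, Q&A pair, and filename are all made up for illustration):

```python
import json

raw_text = "QLoRA fine-tunes a quantized base model by training low-rank adapters."

# One Q&A pair generated (e.g. by GPT-3.5) from the raw text above.
examples = [
    {
        "instruction": "What does QLoRA train?",
        "input": raw_text,  # the source passage, used as context
        "output": "It trains low-rank adapters on top of a quantized base model.",
    },
]

# One JSON object per line (JSONL) is the layout most fine-tuning scripts expect.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Round-trip check that the file parses back into the same records.
with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]
print(loaded[0]["instruction"])
```

Keeping the raw passage in `input` (rather than only the Q&A pair) lets the model learn to ground its answer in provided context, which is what the dolly format does as well.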
2023-11-27T18:04:52
https://www.reddit.com/r/LocalLLaMA/comments/18599ns/custom_dataset_creation_for_finetuning_llama2/
ManyAffectionate770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18599ns
false
null
t3_18599ns
/r/LocalLLaMA/comments/18599ns/custom_dataset_creation_for_finetuning_llama2/
false
false
self
4
null
I made a powerful interface that doesn't need a web browser. Clipboard Conqueror: an anywhere copilot alternative that works anywhere you can type and select text. It's great for translation inside multiplayer 3d games or gmail. Try it out free forever! Tell me what you think, please.
51
2023-11-27T17:37:58
https://github.com/aseichter2007/ClipboardConqueror
aseichter2007
github.com
1970-01-01T00:00:00
0
{}
1858lsj
false
null
t3_1858lsj
/r/LocalLLaMA/comments/1858lsj/i_made_a_powerful_interface_that_doesnt_need_a/
false
false
https://a.thumbs.redditm…ERT6GeLz-Qe0.jpg
51
{'enabled': False, 'images': [{'id': 'FZj7Wp8SilGmNgFAEus2cstF-4pL4PjnpY4jrDVpmFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=108&crop=smart&auto=webp&s=952961fc8cf6b112f511b6a2042021f83dc4cb22', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=216&crop=smart&auto=webp&s=94c52d0ca8f705211d0a3c0ab43649d912ffdf83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=320&crop=smart&auto=webp&s=438eee6b25fec6f089a9b06dece1983094b9f9aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=640&crop=smart&auto=webp&s=9117b62c46767b4736a7624d3527bbe0f03fef8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=960&crop=smart&auto=webp&s=7f1db43d5dcf56418ad1c82865349341538542f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=1080&crop=smart&auto=webp&s=c260233bb1780dae85f27c2a689056d6c32a13f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?auto=webp&s=f380688244428ddf1079c716b2d422e38e930b13', 'width': 1200}, 'variants': {}}]}
An Alternative Approach to Building Generative AI Models
22
Hello fellow llamas!!! **Here is what I am hacking on**…. I am exploring new ways to build generative AI foundational models without traditional math-centric training costs and resources. I am trying to lower the bar for anyone looking to build and share models that are: \- **task-trained** \- models are trained to do very specific task(s) with only the required datasets (explicitly overfitting for known use case(s) instead of generalizing/underfitting and having to wait to search through the entire internet to respond) \- **modular** \- because the models only know about these smaller, task-trained dataset(s) the models will hopefully be faster at responding than today's \- **device-native** \- models are targeted for constrained environments that do not have gpu clusters, excess ram/cpu/storage/connectivity \- **open source** \- since the weights are public domain, the **derived intelligence should be public domain** \- type of foundational model: **weight-derived** (blog: [https://matlok.ai/](https://matlok.ai/) docs: [https://bampe-weights.readthedocs.io/en/latest/](https://bampe-weights.readthedocs.io/en/latest/)) I believe there may be some math/stats proofs that are missing (see the smooth-brain), but I want to push this modular/lego-block-like approach in hopes of reaching parity with a new generation of foundational models. One of my fundamental assumptions is that if I substantially reduce the training corpus, a smaller/overfit model will hopefully be faster than a traditionally-trained large language model. The initial, slimmer model building process should also hopefully run on IoT devices and plug in to existing distributed architectures (device-native). What are you doing next - Initial use case? I need help with a good initial use case (please let me know if you have better ones!). 
Current best idea of the week/last 3 days: I believe this approach and knowledge system of assembling weight-derived models should be shared so we can ensure concepts like an "ethical watermark" for Asimov's Laws of Robotics are always present in all pre-trained AI model weights using cosine similarity searches. As this approach matures, we should be able to audit and report on what these models know, and I think we need a community-driven project to tackle it. tl;dr It's early days, but I believe we can reuse existing AI tensor weights complemented with smaller "fine-tuning"-sized datasets to build small, high-quality fast generative models. **PoC** repository: [https://github.com/matlok-ai/bampe-weights](https://github.com/matlok-ai/bampe-weights) **Inputs** Extracted tensor weight from a GPT2 model.safetensors file: [extracted tensor weight](https://preview.redd.it/ldugx3vedx2c1.png?width=618&format=png&auto=webp&s=f07c576cf97cb375b9e4ab39a8b7f0979f1d91f3) [https://raw.githubusercontent.com/matlok-ai/gen-ai-datasets-for-bampe-weights/main/docs/images/safetensors/gpt2/in/idata\_\_h.0.attn.c\_attn.weight.png](https://raw.githubusercontent.com/matlok-ai/gen-ai-datasets-for-bampe-weights/main/docs/images/safetensors/gpt2/in/idata__h.0.attn.c_attn.weight.png) **Outputs** Predicted weight-derived file for use in a new type of foundational generative AI model [This screenshot is an example of "trained weights" for a new type of foundational generative AI model \(referred to as a weight-derived model\)](https://preview.redd.it/z2lovo2sdx2c1.png?width=634&format=png&auto=webp&s=6d731e7cd56c1524df803127350897614aa7fd28) 
[https://raw.githubusercontent.com/matlok-ai/gen-ai-datasets-for-bampe-weights/main/docs/images/safetensors/gpt2/out/gpu-generated\_predicted-model-weights\_\_layer\_\_h.0.attn.c\_attn.weight\_\_chunk\_\_0.png](https://raw.githubusercontent.com/matlok-ai/gen-ai-datasets-for-bampe-weights/main/docs/images/safetensors/gpt2/out/gpu-generated_predicted-model-weights__layer__h.0.attn.c_attn.weight__chunk__0.png) Thanks for the help, guidance and assistance staying up with the insane speed of this ecosystem! Reach out if you want more info - my email is in the profile
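The "cosine similarity searches" over tensor weights mentioned above reduce to comparing flattened weight vectors. A stdlib-only sketch of the idea (purely illustrative, not the repo's actual code; the chunk names and values are invented):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two flattened weight vectors:
    1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

# A candidate weight chunk can be matched against a library of known chunks.
known_chunks = {"attn.c_attn": [0.1, -0.2, 0.3], "mlp.c_fc": [0.5, 0.5, 0.0]}
candidate = [0.2, -0.4, 0.6]  # a scaled copy of the attention chunk
best = max(known_chunks, key=lambda k: cosine_similarity(candidate, known_chunks[k]))
print(best)
```

In practice the vectors would come from flattening safetensors layers, but the audit idea — checking whether a known weight pattern is present in a model — is the same nearest-neighbor search.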
2023-11-27T17:29:51
https://www.reddit.com/r/LocalLLaMA/comments/1858ej6/an_alternative_approach_to_building_generative_ai/
buildinstuff5432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1858ej6
false
null
t3_1858ej6
/r/LocalLLaMA/comments/1858ej6/an_alternative_approach_to_building_generative_ai/
false
false
https://b.thumbs.redditm…P8rvdPSBJMGo.jpg
22
{'enabled': False, 'images': [{'id': 'kxvJ5HTqE7IxY25BOhEvwgL9FT5vquxMqJLXMBTF44A', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/aqxXF4iLftinDAjZodQEWyrytCSY9VQkSv4R01qU00Y.png?width=108&crop=smart&auto=webp&s=8837a3e449be6e3d028a1155ce8262ffed014dd7', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/aqxXF4iLftinDAjZodQEWyrytCSY9VQkSv4R01qU00Y.png?width=216&crop=smart&auto=webp&s=2213f1f8792321256ac7caaaf5064333d183345f', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/aqxXF4iLftinDAjZodQEWyrytCSY9VQkSv4R01qU00Y.png?width=320&crop=smart&auto=webp&s=d6aa608530a0124810b2cb5dd8271571ff0b36a5', 'width': 320}], 'source': {'height': 392, 'url': 'https://external-preview.redd.it/aqxXF4iLftinDAjZodQEWyrytCSY9VQkSv4R01qU00Y.png?auto=webp&s=25a4dd8d2d83a2aed8be44c527f0292faecb16bd', 'width': 618}, 'variants': {}}]}
Training LLMs on less epochs
2
I was going through a paper called MILAN which is a pre-training method to teach the model good Visual representations and one thing that struck me is the large no. of epochs we used to train models on (see image) even if we want the model to be able to generalize well. So I'm curious to know why even base models are only trained with a low epoch count. TIA. https://preview.redd.it/un1mdjoodx2c1.png?width=1312&format=png&auto=webp&s=2f80e328b05c3aee00a32c1e1ee8289810d8ddf0
2023-11-27T17:29:05
https://www.reddit.com/r/LocalLLaMA/comments/1858dvv/training_llms_on_less_epochs/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1858dvv
false
null
t3_1858dvv
/r/LocalLLaMA/comments/1858dvv/training_llms_on_less_epochs/
false
false
self
2
null
Training LLMs on less epochs
1
I was going through a paper called MILAN which is a pre-training method to teach the model good Visual representations and one thing that struck me is the large no. of epochs we used to train models on (see image) even if we want the model to be able to generalize well. So I'm curious to know why even base models are only trained with a low epoch count. TIA. https://preview.redd.it/un1mdjoodx2c1.png?width=1312&format=png&auto=webp&s=2f80e328b05c3aee00a32c1e1ee8289810d8ddf0
2023-11-27T17:29:03
https://www.reddit.com/r/LocalLLaMA/comments/1858dv2/training_llms_on_less_epochs/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1858dv2
false
null
t3_1858dv2
/r/LocalLLaMA/comments/1858dv2/training_llms_on_less_epochs/
false
false
https://a.thumbs.redditm…BbLYLRtoYGq4.jpg
1
null
Using simple tree-search techniques for LLM token sampling can give better results
16
2023-11-27T17:17:53
https://andys.page/posts/llm_sampling_strategies
its_just_andy
andys.page
1970-01-01T00:00:00
0
{}
18583or
false
null
t3_18583or
/r/LocalLLaMA/comments/18583or/using_simple_treesearch_techniques_for_llm_token/
false
false
default
16
null
SLLLLLLOOOOOOOWWWWWWWW
2
So. My rig (Ryzen 7 3700x, 64G RAM, RTX 3070, Intel Arc 380) can run up to 70B parameter models... but they run at a snail's pace. Furthermore, I don't honestly see that big of an improvement for regular chat tasks from a 70B parameter model vs a 13B parameter model. Don't get me wrong.. there is an improvement in adherence sometimes, it's just not a GIANT leap forward as I expected. Especially the 30B-ish models. Basically no difference between 30B and 70B. I run everything at Q5. Here is my question... Would running a 70B at Q2 be better than a 7B or 13B at Q5? Would speed improve? Also, I notice that Mistral runs faster on my machine than LLaMA models even at the same parameter counts... anyone know why? I know I could run all these tests myself theoretically, but there is just so much to test and so little time. I figured I'd ask around and see if someone else did it first.
2023-11-27T17:08:12
https://www.reddit.com/r/LocalLLaMA/comments/1857v8n/sllllllooooooowwwwwwww/
cjhoneycomb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1857v8n
false
null
t3_1857v8n
/r/LocalLLaMA/comments/1857v8n/sllllllooooooowwwwwwww/
false
false
self
2
null
Converting merged model to gguf
4
I am trying to do this, but I am getting `KeyError: 'I8'`. [https://github.com/ggerganov/llama.cpp/issues/4199#issuecomment-1828179655](https://github.com/ggerganov/llama.cpp/issues/4199#issuecomment-1828179655) It is exactly the same as this problem. I downloaded the base model quantized using bnb 8-bit, fine-tuned the model, merged, then tried to convert to GGUF, and a mismatched-type error occurs. But if I directly convert the unquantized base model, it works flawlessly, which leads me to believe it's the bnb quantization that is creating the incompatibility. What am I missing? How do I create a GGUF of the fine-tuned model?
2023-11-27T17:06:54
https://www.reddit.com/r/LocalLLaMA/comments/1857u7d/converting_merged_model_to_gguf/
jonglaaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1857u7d
false
null
t3_1857u7d
/r/LocalLLaMA/comments/1857u7d/converting_merged_model_to_gguf/
false
false
self
4
{'enabled': False, 'images': [{'id': 'u9viwKa1YgQRJnKKNKZDtgyO11pWxCRXOgrkt6XDH2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?width=108&crop=smart&auto=webp&s=ed76bd408ed6ec38b0e6fc63566e34a13cc62514', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?width=216&crop=smart&auto=webp&s=efed0375ba2d9bc295b64ce3f06087169b666f3b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?width=320&crop=smart&auto=webp&s=caacad89a1d6f5b1fde60878c08e10ff45a78c72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?width=640&crop=smart&auto=webp&s=8ea6238ae35d582060408b266803139ab1608be2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?width=960&crop=smart&auto=webp&s=fc4cb4771603425e915a7aeec36ae5bfde736abf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?width=1080&crop=smart&auto=webp&s=f5fe3732999e215ced1389be79f1c53dc1a482eb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sqQXAYOrCPFogBdeTCGFwzl9135p2HzvYvJc3K0nAqg.jpg?auto=webp&s=9a6eec7840efcee466dbb5f84bf20beeafd5703e', 'width': 1200}, 'variants': {}}]}
Models Megathread #2 - What models are you currently using?
106
As requested, this is the subreddit's second megathread for model discussion. This thread will now be hosted at least once a month to keep the discussion updated and help reduce identical posts. I also saw that we hit 80,000 members recently! Thanks to every member for joining and making this happen. ___ **Welcome to the r/LocalLLaMA Models Megathread** What models are you currently using and why? Do you use 7B, 13B, 33B, 34B, or 70B? Share any and all recommendations you have! Examples of popular categories: - Assistant chatting - Chatting - Coding - Language-specific - Misc. professional use - Role-playing - Storytelling - Visual instruction ___ Have feedback or suggestions for other discussion topics? All suggestions are appreciated and can be sent to [modmail](https://www.reddit.com/message/compose?to=/r/LocalLLaMA). ^(*P.S. LocalLLaMA is looking for someone who can manage Discord. If you have experience modding Discord servers, your help would be welcome. Send a message if interested.*) ___ [Previous Thread](https://www.reddit.com/r/LocalLLaMA/comments/15wdjly/whats_your_favorite_model_and_results_model) | [New Models](https://www.reddit.com/r/LocalLLaMA/search?sort=new&restrict_sr=on&q=flair%3A%22New%20Model%22)
2023-11-27T16:40:15
https://www.reddit.com/r/LocalLLaMA/comments/185770m/models_megathread_2_what_models_are_you_currently/
Technical_Leather949
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185770m
false
null
t3_185770m
/r/LocalLLaMA/comments/185770m/models_megathread_2_what_models_are_you_currently/
false
true
self
106
null
Multipurpose AI app for all your AI interests and services.
1
[removed]
2023-11-27T16:17:54
https://www.reddit.com/r/LocalLLaMA/comments/1856o7o/multipurpose_ai_app_for_all_your_ai_interests_and/
Wafflesinmybreakfast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1856o7o
false
null
t3_1856o7o
/r/LocalLLaMA/comments/1856o7o/multipurpose_ai_app_for_all_your_ai_interests_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]}
What is an easy way to install/run DeepSeek coder?
1
LM Studio says that it runs, but I can't get the model running. Other people have complained about this as well.
2023-11-27T16:14:45
https://www.reddit.com/r/LocalLLaMA/comments/1856lfm/what_is_an_easy_way_to_installrun_deepseek_coder/
MAXXSTATION
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1856lfm
false
null
t3_1856lfm
/r/LocalLLaMA/comments/1856lfm/what_is_an_easy_way_to_installrun_deepseek_coder/
false
false
self
1
null
Proposed Alternative to Repetition Penalty - Noisy Sampling
73
# Noisy Sampling Temperature as a method of making LLMs more or less deterministic by changing the scale at which the tokens are 'scored' certainly works, but there's also still an issue where greedy sampling (which is *only* picking the most likely token at all times) will eventually degenerate into repetitive nonsense because of slight biases. Mistral 7b, for example, seems to be better than Llama 2 13b for a variety of tasks, but has a tendency to repeat itself **significantly more often** (especially in the context of greedy sampling). The typical solution to fix this is the Repetition Penalty, which adds a bias to the model to avoid repeating the same tokens, but this has issues with 'false positives': imagine a language model that was tasked to do trivial math problems, where a user always involved the number 3 in his first 5 questions. After a certain amount of context, it will bias against using the number 3 in the solution even if it is correct. This is obviously incorrect behavior. One possible solution to this problem is to add a bit of controlled noise to the model's scores to prevent it from slowly accumulating determinism bias. In the case where all the scores are relatively the same, this will allow for a *lot* of randomness (as you'd expect); in the case where the scores are *extremely* different (e.g. 3,000 for the top token and 500 for the second most likely) this would instead add a negligible amount of noise, and it wouldn't be uniform. I've realized that my [Dynamic Temp sampler experiment](https://www.reddit.com/r/LocalLLaMA/comments/180b673/i_need_people_to_test_my_experiment_dynamic/) basically performs *in a similar fashion*, albeit indirectly, which is probably why people seem to like it in the first place. When I made that, I was thinking, "why not make the model more random when there's a high opportunity to be random", but my DynaTemp still *always assumes the original token rankings*.
Paradoxically, it may be **more natural** to just add random noise to the token scores to begin with, so that in cases where the top two tokens are both close to 20%, for example, but the rest are 0.001%, it'll randomly choose one of those two 20% tokens instead of just selecting the one with the slightly higher score (which is a statistically biased choice rather than a natural one). I will be working on an implementation of this for koboldcpp soon, and then I will look into adding it to text-generation-webui (it's more popular, but I'm more experienced with kobold's codebase). This method has two potential advantages: # - Context Free Instead of analyzing past context like Repetition Penalty, it stands independently as a way to prevent individual tokens from creating biased generations in the first place, rather than as a hacky solution that must factor in the past context before it makes a decision. # - Scales with Confidence This should in theory apply randomness that scales proportionally to the model's confidence. That means it will not disproportionately weight low-quality token choices (which will naturally have much lower scores and should, in theory, remain just as unlikely).
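A minimal, dependency-free sketch of the idea (illustrative only, not the planned koboldcpp implementation): add independent Gaussian noise to the raw token scores before the greedy pick, so near-ties are broken randomly while a clearly dominant token is effectively never displaced.

```python
import math
import random

def softmax(scores):
    # Convert raw scores to probabilities (numerically stable).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def noisy_argmax(scores, noise_scale=1.0, rng=random):
    # Add independent Gaussian noise to each raw score, then pick greedily.
    # When two scores are nearly tied, the noise decides between them; when
    # one score dominates (e.g. 3000 vs 500), the noise is negligible.
    perturbed = [s + rng.gauss(0.0, noise_scale) for s in scores]
    return max(range(len(perturbed)), key=lambda i: perturbed[i])

rng = random.Random(0)
tied = [20.0, 19.9, 0.001, 0.001]       # two near-equal candidates
dominant = [3000.0, 500.0, 10.0, 1.0]   # one clear winner

tied_picks = {noisy_argmax(tied, 1.0, rng) for _ in range(200)}
dominant_picks = {noisy_argmax(dominant, 1.0, rng) for _ in range(200)}
```

Over repeated draws, both near-tied tokens get selected, while the dominant token is picked every time; the noise scale plays a role loosely analogous to temperature.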
2023-11-27T15:53:01
https://www.reddit.com/r/LocalLLaMA/comments/185635o/proposed_alternative_to_repetition_penalty_noisy/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185635o
false
null
t3_185635o
/r/LocalLLaMA/comments/185635o/proposed_alternative_to_repetition_penalty_noisy/
false
false
self
73
{'enabled': False, 'images': [{'id': '4pd5qJNyyXTTI8Sf_uUuX-7L3xGibCScVXUafkO1umM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?width=108&crop=smart&auto=webp&s=ed54c5c46551589db49418147b1e933ef52590a6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?width=216&crop=smart&auto=webp&s=dceaf7b5bf35c26bd9c3013c0392ebbb04f0e952', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?width=320&crop=smart&auto=webp&s=97505cf23766c1f2e82b3d1234701c7c0a8ab9d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?width=640&crop=smart&auto=webp&s=d6d0411eb1901ad617637d193fd6e0eab337e1b9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?width=960&crop=smart&auto=webp&s=9bfeabca21aae10242f5ffc028544a25e1723239', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?width=1080&crop=smart&auto=webp&s=8318da377150d2bcdee02b097e3dfc5b339f74cf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vk0EYETqbX7Pecixmp9SVpQ5S_izMx5wquQjAwVXjD4.jpg?auto=webp&s=3ae6c1e97f6057c35120aaf5c5f015508d2d91f5', 'width': 1200}, 'variants': {}}]}
I’m trying to find out the internal dimensions of the llama models, can anyone save me from having to run it in debug?
1
I’ve got the inference code and the 7B and 13B models downloaded. I can see the well-known dimensions: a model dimension of 4096, 32 layers, and 32 attention heads per layer. Looking at the raw parameter file I can see the structure another layer down, but I’d like to know more about the substructure beyond what I can see here, and also how these dimensions differ between 7B, 13B and 70B. I don’t see the answer in the 77-page Llama paper. In these screenshots I can see the attention head and feed-forward weight structures for the 32 layers, and there’s a big data structure later on, but I’d like to know the substructure and dimensions of everything without having to run and print debug statements myself, if it can be avoided. Thank you for any help!
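For reference, the per-size hyperparameters are published in each model's `params.json` / `config.json`; to the best of my recollection they are as summarized in the sketch below, but verify against your own files before relying on the exact FFN sizes:

```python
# Summary of published LLaMA hyperparameters (verify against your own
# params.json / config.json; the values here are from memory).
LLAMA_DIMS = {
    "7B":  {"dim": 4096, "n_layers": 32, "n_heads": 32, "ffn_dim": 11008},
    "13B": {"dim": 5120, "n_layers": 40, "n_heads": 40, "ffn_dim": 13824},
    # Llama-2 70B uses grouped-query attention: 64 query heads share 8 KV heads.
    "70B": {"dim": 8192, "n_layers": 80, "n_heads": 64, "n_kv_heads": 8,
            "ffn_dim": 28672},
}

def head_dim(size):
    cfg = LLAMA_DIMS[size]
    return cfg["dim"] // cfg["n_heads"]  # per-attention-head dimension
```

All three sizes keep a per-head dimension of 128; 70B additionally uses grouped-query attention, which is why its KV structures are smaller than the query structures.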
2023-11-27T15:47:34
https://www.reddit.com/gallery/1855ysf
tenthpersona2
reddit.com
1970-01-01T00:00:00
0
{}
1855ysf
false
null
t3_1855ysf
/r/LocalLLaMA/comments/1855ysf/im_trying_to_find_out_the_internal_dimensons_of/
false
false
default
1
null
I’m trying to find out the internal dimensions of the llama models, can anyone save me from having to run it in debug?
1
I’ve got the inference code and the 7B and 13B models downloaded. I can see the well-known dimensions: a model dimension of 4096, 32 layers, and 32 attention heads per layer. Looking at the raw parameter file I can see the structure another layer down, but I’d like to know more about the substructure beyond what I can see here, and also how these dimensions differ between 7B, 13B and 70B. I don’t see the answer in the 77-page Llama paper. In these screenshots I can see the attention head and feed-forward weight structures for the 32 layers, and there’s a big data structure later on, but I’d like to know the substructure and dimensions of everything without having to run and print debug statements myself, if it can be avoided. Thank you for any help!
2023-11-27T15:47:26
https://www.reddit.com/gallery/1855yo7
tenthpersona2
reddit.com
1970-01-01T00:00:00
0
{}
1855yo7
false
null
t3_1855yo7
/r/LocalLLaMA/comments/1855yo7/im_trying_to_find_out_the_internal_dimensons_of/
false
false
https://b.thumbs.redditm…TIq2b9055fxI.jpg
1
null
Llama2 fine-tuning questions
3
Hello! I want to fine-tune a Llama-2 model for a specific text-generation task. How can I properly retrieve the predictions from my evaluation? Should I create a CustomTrainer with a generation function in the evaluate method? Or should I use compute_metrics and decode my eval_preds?
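One common pattern with the HF `Trainer` is the `compute_metrics` route: the callback receives `(predictions, label_ids)`, you replace the `-100` label mask with the pad id, batch-decode both sides, and compare (this assumes the predictions are generated token ids, e.g. via `Seq2SeqTrainer` with `predict_with_generate`). A dependency-free sketch, with a stub vocabulary standing in for the real `tokenizer.batch_decode`:

```python
# Sketch of the decode step inside a Trainer compute_metrics callback.
# The stub vocabulary below is illustrative; real code would use your
# actual tokenizer's batch_decode.
STUB_VOCAB = {0: "<pad>", 1: "hello", 2: "world", 3: "foo"}
IGNORE_INDEX = -100  # the Trainer masks ignored label positions with -100

def batch_decode(batch_ids):
    # Drop pad tokens (id 0) and join the rest, like skip_special_tokens=True.
    return [" ".join(STUB_VOCAB[i] for i in ids if i != 0) for ids in batch_ids]

def compute_metrics(eval_preds):
    predictions, label_ids = eval_preds
    # Replace the -100 mask with pad ids before decoding.
    label_ids = [[0 if i == IGNORE_INDEX else i for i in ids] for ids in label_ids]
    decoded_preds = batch_decode(predictions)
    decoded_labels = batch_decode(label_ids)
    exact = sum(p == l for p, l in zip(decoded_preds, decoded_labels))
    return {"exact_match": exact / len(decoded_preds)}

metrics = compute_metrics(([[1, 2], [3, 0]], [[1, 2], [-100, 3]]))
```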
2023-11-27T15:37:46
https://www.reddit.com/r/LocalLLaMA/comments/1855r5w/llama2_finetuning_questions/
BluebirdFinancial119
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1855r5w
false
null
t3_1855r5w
/r/LocalLLaMA/comments/1855r5w/llama2_finetuning_questions/
false
false
self
3
null
Concepts for my app.I'm about 50% done and have some questions.
1
[removed]
2023-11-27T15:16:38
https://i.redd.it/03c9fkbbqw2c1.jpg
Future_Might_8194
i.redd.it
1970-01-01T00:00:00
0
{}
18559pr
false
null
t3_18559pr
/r/LocalLLaMA/comments/18559pr/concepts_for_my_appim_about_50_done_and_have_some/
false
false
https://b.thumbs.redditm…fzaKGSSRU_FE.jpg
1
{'enabled': True, 'images': [{'id': '0uRE23331XuWkSPy-L4MsAvluOAi6awibbcw6U1RAQ8', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?width=108&crop=smart&auto=webp&s=ac16686e43251d23ef9b1d8173ea8a674c1e6275', 'width': 108}, {'height': 228, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?width=216&crop=smart&auto=webp&s=8ad564c9c4eec7335abe19240e1f5a84c73d9e04', 'width': 216}, {'height': 338, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?width=320&crop=smart&auto=webp&s=0db97a9dd9e69c6c953f0b19b04b2bbcd2c9ca64', 'width': 320}, {'height': 676, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?width=640&crop=smart&auto=webp&s=69d8525cc42a3eed02cadafbcc12a2157f15d042', 'width': 640}, {'height': 1014, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?width=960&crop=smart&auto=webp&s=aef283ffee87ea2bce06659aab20277a9a63b468', 'width': 960}, {'height': 1141, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?width=1080&crop=smart&auto=webp&s=7ac8061263f81212d5dc3ee37bc2d5327b3a9c5b', 'width': 1080}], 'source': {'height': 2180, 'url': 'https://preview.redd.it/03c9fkbbqw2c1.jpg?auto=webp&s=15a51011f5cd19dac52261ed281536fb4ab8b2a9', 'width': 2062}, 'variants': {}}]}
LocalLLaMA and translate texts
7
GPT-4 does quite well with text translation. Unfortunately, the free version has limits on the length of the input/translated text. (Google Translate and DeepL translate worse.) Is there a ranking available somewhere of local LLMs used to translate texts into other languages? What free language models are available with a context window of at least 64k tokens or more (only such models are suitable for translating long texts)? Unless there is some way to automatically split a long text into chunks and send them to the LLM for translation.
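On the last point, the splitting is straightforward to automate: split on paragraph boundaries and greedily pack paragraphs into chunks under a rough token budget (about 4 characters per token is a common crude estimate), then translate each chunk separately. A sketch:

```python
def chunk_text(text, max_tokens=1000, chars_per_token=4):
    """Greedily pack paragraphs into chunks under a rough token budget."""
    budget = max_tokens * chars_per_token
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk when adding this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > budget:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "\n\n".join(f"Paragraph {i}. " + "word " * 50 for i in range(10))
chunks = chunk_text(doc, max_tokens=100)
```

Each chunk then goes to the model with a fixed translation instruction; joining the translated chunks back with blank lines reassembles the document.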
2023-11-27T15:11:14
https://www.reddit.com/r/LocalLLaMA/comments/18555cl/localllama_and_translate_texts/
MajesticFigure4240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18555cl
false
null
t3_18555cl
/r/LocalLLaMA/comments/18555cl/localllama_and_translate_texts/
false
false
self
7
null
table extraction from pdf
5
Does anyone know of a robust open-source library for extracting tables from PDFs? Even an OCR library is fine. P.S. I have already tried Tabula, Camelot, img2table, unstructured.io, and most of the document loaders in LangChain; none of them are even 95% robust.
2023-11-27T14:35:37
https://www.reddit.com/r/LocalLLaMA/comments/1854d06/table_extraction_from_pdf/
happy_dreamer10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1854d06
false
null
t3_1854d06
/r/LocalLLaMA/comments/1854d06/table_extraction_from_pdf/
false
false
self
5
null
Silly questions about GGUF and exl2
3
Hi. I have LLaMA2-13B-Tiefighter-exl2\_5bpw and (probably) the same LLaMA2-13B-Tiefighter.Q5\_K\_M. I run them on a 1080 Ti and an old Threadripper with 64GB of 4-channel DDR4-3466. I use oobabooga (for GGUF and exl2) and LM Studio. I have the 531.68 Nvidia driver (so I receive OOM errors, not RAM swapping, when VRAM overflows). **1st question:** I read that exl2 consumes less VRAM and works faster than GGUF. I tried loading it in oobabooga (ExLlamaV2\_HF) and it fits in my 11GB of VRAM (consumes ~10GB) but produces only 2.5 t/s, while GGUF (llama.cpp backend) with 35 layers offloaded to the GPU gets 4.5 t/s. Why? Am I missing some important settings? **2nd question:** In LM Studio (llama.cpp backend?) with the same settings and the same 35 GPU-offloaded layers I get only 2.3 t/s. Why? Same backend, same GGUF, same settings for sampling and context.
2023-11-27T13:59:45
https://www.reddit.com/r/LocalLLaMA/comments/1853kr0/silly_questions_about_gguf_and_exl2/
Desm0nt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1853kr0
false
null
t3_1853kr0
/r/LocalLLaMA/comments/1853kr0/silly_questions_about_gguf_and_exl2/
false
false
self
3
null
I have given llama.cpp server ui a facelift
91
Hi folks, I have edited the llama.cpp server frontend and made it look nicer, and also added a few functions, including something I have been missing there for a long time: templates for prompt formats. **Here is the GitHub link:** [**++camalL**](https://github.com/mounta11n/plusplus-caMalL/tree/master) Otherwise, here is a small summary: - UI with CSS to make it look nicer and cleaner overall. - CSS moved out into a separate file. - Added a dropdown menu with prompt-style templates. - Added a dropdown menu with system prompts. - Prompt styles and system prompts are separate files, so editing is very easy. - Created a script that uses "dialog" to compose the command for the server. - The script offers the possibility to save and load configs. In planning or already started: - WIP multilingual support: you will be able to select the language from a dropdown menu. So far there are language files only for English and German (covering UI elements and system prompts). - Dark mode. - Templates for the values of the UI options (samplers etc.), e.g. a deterministic template, a creative template, a balanced template, etc. - Zenity start script (like dialog, but GUI). --- As for the prompt format templates, I just picked a few by feel. The most important are the four to which almost all others can be traced back: Alpaca, ChatML, Llama2, Vicuna. But if you want more templates for a specific model, feel free to let me know here or on GitHub. As you can see in the third picture, it should now be easier for beginners to use the llama.cpp server, since a TUI dialog will assist them. Hope you like my work. Feel free to give feedback. P.S.: I've made a pull request, but for now I'm publishing it on my own forked repo.
[ui-1](https://preview.redd.it/dlmgybn78w2c1.jpg?width=1336&format=pjpg&auto=webp&s=2bd8070b73155b8508460bc6227f3d4be979ac06) [ui-2](https://preview.redd.it/7mzhqn8a8w2c1.jpg?width=1388&format=pjpg&auto=webp&s=5d07f3a75dbc137c4ec2c9a839d9ad52fbbbf86b) [tui-1](https://preview.redd.it/j7xkocrb8w2c1.jpg?width=1018&format=pjpg&auto=webp&s=6e3898a612f0eec37570e5dab7a4be31966dcafc)
2023-11-27T13:36:21
https://www.reddit.com/r/LocalLLaMA/comments/18534f1/i_have_given_llamacpp_server_ui_a_facelift/
Evening_Ad6637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18534f1
false
null
t3_18534f1
/r/LocalLLaMA/comments/18534f1/i_have_given_llamacpp_server_ui_a_facelift/
false
false
https://b.thumbs.redditm…rs9KzRNNrz-o.jpg
91
{'enabled': False, 'images': [{'id': 'N14oGS7gGBp02JsBG8cUKkZ65-Xf-HDiOPkE-s_K6-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?width=108&crop=smart&auto=webp&s=8f9ca2a6dca6836721c915504a3428649e4e26dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?width=216&crop=smart&auto=webp&s=5a15f46d65b71917e1564cd2d468d42ea9ed6b2a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?width=320&crop=smart&auto=webp&s=a27d38ad2b76686e545053859544b7d08923e38f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?width=640&crop=smart&auto=webp&s=37c1440c44f0c8232245463832d5d84abef34037', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?width=960&crop=smart&auto=webp&s=34c4a10be74e7b00af383a47e377a9bb9d08b43b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?width=1080&crop=smart&auto=webp&s=11e9f02f6451cbd3a1d8442bc49e947303deea82', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WH527Hkw6-SDO44Gmnoow2x6CBnT1UkXGH7AbEm5DNA.jpg?auto=webp&s=12eb412cbbef7494800d3e31ab40d3b775261146', 'width': 1200}, 'variants': {}}]}
Extracting values from patient medical narratives
1
I want to extract values (e.g. lab values) from medical narratives. I can tell GPT-4 to show me just the value, e.g. 15 ml/l or 80%, and I get the answer without problems. If I try with Llama-2 or Med42 (I tried the 70B models of each), it is nearly impossible to get just the number. They will ramble and even hallucinate. Any tips?
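One workaround, if the local model insists on rambling: let it ramble and post-process the output with a regex that pulls out the first number-plus-unit pair. A sketch (the unit pattern and the sample answer are illustrative; a real extractor would need a fuller unit list and validation):

```python
import re

# Pull "number + unit" pairs (e.g. "15 ml/l", "80 %") out of free text.
# The unit alternation here is deliberately simple: a percent sign, or
# anything shaped like "letters/letters".
VALUE_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(%|[A-Za-z]+/[A-Za-z]+)")

def extract_value(text):
    m = VALUE_RE.search(text)
    return f"{m.group(1)} {m.group(2)}" if m else None

# Hypothetical rambling model output:
answer = "Sure! Based on the narrative, the value appears to be 15 ml/l."
```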
2023-11-27T13:33:25
https://www.reddit.com/r/LocalLLaMA/comments/18532cl/extracting_values_from_patient_medical_narratives/
toxonaut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18532cl
false
null
t3_18532cl
/r/LocalLLaMA/comments/18532cl/extracting_values_from_patient_medical_narratives/
false
false
self
1
null
Is translation worth doing to enhance LLM responses?
5
Basically just the title - I would like my LLMs to respond as well as possible in the original language of the user input. Would it be worth translating the user prompt to English for inference and then translating the answer back afterwards? If yes, which translation model would be best suited for something like this? Also, are these models quantizable via bitsandbytes? Thanks for your help guys, this community really is amazing! (I am using a Mistral 7b finetune btw and have tested facebook's nllb 1.3b before, but it was huge and not that great)
2023-11-27T13:31:36
https://www.reddit.com/r/LocalLLaMA/comments/18530yu/is_translation_worth_doing_to_enhance_llm/
Galaktische_Gurke
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18530yu
false
null
t3_18530yu
/r/LocalLLaMA/comments/18530yu/is_translation_worth_doing_to_enhance_llm/
false
false
self
5
null
Use models locally for pyQt desktop app bundled using pyinstaller
1
I want a way to replace the OpenAI API in a PyQt app; everything in this app should be able to be bundled into a standalone .exe. Most solutions use some kind of UI to interact, host an API, and then query it. Does anyone know how I might achieve something like this?
2023-11-27T11:58:27
https://www.reddit.com/r/LocalLLaMA/comments/1851bt7/use_models_locally_for_pyqt_desktop_app_bundled/
swappybizz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1851bt7
false
null
t3_1851bt7
/r/LocalLLaMA/comments/1851bt7/use_models_locally_for_pyqt_desktop_app_bundled/
false
false
self
1
null
Node & workflow based Llama2 app for Mac - looking for testers
25
2023-11-27T11:54:09
https://v.redd.it/6r6ez8cwpv2c1
creatorai
v.redd.it
1970-01-01T00:00:00
0
{}
18519d9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6r6ez8cwpv2c1/DASHPlaylist.mpd?a=1703678061%2COGIwOTA5NmQ5NjkwZjFkZjMyNWEzYWNjNjMwNmJlMDBlOTJiMmUxYzM0ZDVlOGQyYTBkYjUyYmY4YmEzNDYxMA%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/6r6ez8cwpv2c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6r6ez8cwpv2c1/HLSPlaylist.m3u8?a=1703678061%2CODMyZDZjNGUwYTgzZmMzZTc1ODY2OGZiMTU5NjAyODUxZTdmNDU5ZGJiNjkzMmViMjA4M2QwZTQwYmE4YzAyZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6r6ez8cwpv2c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1772}}
t3_18519d9
/r/LocalLLaMA/comments/18519d9/node_workflow_based_llama2_app_for_mac_looking/
false
false
https://external-preview…b001767099e70f2c
25
{'enabled': False, 'images': [{'id': 'enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?width=108&crop=smart&format=pjpg&auto=webp&s=d37735a9e809d949d1a0ba6a1549b663c4217b7b', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?width=216&crop=smart&format=pjpg&auto=webp&s=dc3341d5722ebdea94fb5416268c765b2d038c74', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?width=320&crop=smart&format=pjpg&auto=webp&s=cd6287aae6a26f7defe77d680c501af453d4ed79', 'width': 320}, {'height': 390, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?width=640&crop=smart&format=pjpg&auto=webp&s=40b66e3160a19ceeda5768b412603d20253d8486', 'width': 640}, {'height': 585, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?width=960&crop=smart&format=pjpg&auto=webp&s=5d981662b2c881f5f11864a7e72cf051f9329c95', 'width': 960}, {'height': 658, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ad0d354c36b8b914e59e8ec0db94d97be4fb05eb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/enA2eWh5djZxdjJjMYz67a0d43qzVjBWfYImmm3TX8UP85zJOSQD3qtKYMhp.png?format=pjpg&auto=webp&s=857ca335a2dad7a5965b2a5d5d7e813840f91cd0', 'width': 1772}, 'variants': {}}]}
Speed and energy use - RAM vs Mac Studio vs RTX cards
8
So I'm interested in applications that require memory more than speed, with high quality and a big context. I'm talking 100GB or more. Speed is still an important consideration. I don't need snappy conversations, but getting through more stuff 'overnight' is still valuable. 3090s are affordable, but it would take 4 to 8 to get into the big memory category, and the primary issue is energy use. For batch use the PC could shut down after finishing, so idle power use wouldn't be an issue. Are there motherboards that can completely shut off power to extra cards when they aren't needed? Mac Studio M2 Ultra can get 192GB of unified memory, with about 140GB usable. This isn't as fast, obviously, but is meant to be acceptable for many applications. What about PCs/servers with lots of mainboard RAM? Is this way slower than the Macs due to different architecture? If not it's probably a lot cheaper. The CPU would need to do all the work, and I don't know about how the energy efficiency would compare. I would be grateful if anyone has data comparing speeds or joules per token for these broad options.
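For comparing setups on energy, the figure to compute is joules per token: device power draw divided by generation speed. A sketch with purely illustrative numbers (measure your own rig's watts and tokens/s; the values below are assumptions, not benchmarks):

```python
def joules_per_token(watts, tokens_per_sec):
    """Energy cost per generated token: power draw / generation speed."""
    return watts / tokens_per_sec

# Illustrative placeholder numbers only -- substitute measured values.
rig = joules_per_token(1400, 15)  # hypothetical multi-3090 rig under load
mac = joules_per_token(180, 8)    # hypothetical Mac Studio under load
```

For batch jobs, total energy is then joules per token times total tokens, regardless of how long the run takes, which is why idle shutdown matters less than the under-load figure.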
2023-11-27T11:53:11
https://www.reddit.com/r/LocalLLaMA/comments/18518t5/speed_and_energy_use_ram_vs_mac_studio_vs_rtx/
EvokerTCG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18518t5
false
null
t3_18518t5
/r/LocalLLaMA/comments/18518t5/speed_and_energy_use_ram_vs_mac_studio_vs_rtx/
false
false
self
8
null
Quantizing 70b models to 4-bit, how much does performance degrade?
59
The title, pretty much. I'm wondering whether a 70b model quantized to 4bit would perform better than a 7b/13b/34b model at fp16. Would be great to get some insights from the community.
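Independent of quality, the memory arithmetic often decides the question: weight memory is roughly parameters times bits per weight divided by 8 (KV cache and activations add on top). A quick sketch:

```python
def weight_gb(n_params_b, bits_per_weight):
    """Approximate weight memory in GB for n_params_b billion parameters.

    Ignores KV cache, activations, and quantization-scale overhead, so treat
    the 4-bit figure (here 4.5 bits/weight to account for scales) as a floor.
    """
    return n_params_b * bits_per_weight / 8

fp16_70b = weight_gb(70, 16)   # 70B at fp16
q4_70b   = weight_gb(70, 4.5)  # 70B at ~4-bit with scale overhead
fp16_13b = weight_gb(13, 16)   # 13B at fp16
```

So a 4-bit 70B fits in roughly the memory of an fp16 13B-34B, which is exactly why the quality comparison asked about here is the interesting one.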
2023-11-27T10:39:19
https://www.reddit.com/r/LocalLLaMA/comments/185036z/quantizing_70b_models_to_4bit_how_much_does/
ae_dataviz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
185036z
false
null
t3_185036z
/r/LocalLLaMA/comments/185036z/quantizing_70b_models_to_4bit_how_much_does/
false
false
self
59
null
Hardware question: combining a 3090 and a p40
1
As the title says, when combining a P40 and an RTX 3090, a few use cases come to mind and I wanted to know if they are feasible; I'd greatly appreciate your help. First, could you run larger models where they are computed on the 3090 and the P40 is just used for VRAM offloading, and would that be faster than system memory? Could you compute on both of them in an asymmetric fashion, putting some layers on the RTX 3090 and fewer on the P40? Lastly (and that one probably works), you could run two different instances of LLMs, for example a bigger one on the 3090 and a smaller one on the P40, I assume.
2023-11-27T10:33:50
https://www.reddit.com/r/LocalLLaMA/comments/1850095/hardware_question_combining_a_3090_and_a_p40/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1850095
false
null
t3_1850095
/r/LocalLLaMA/comments/1850095/hardware_question_combining_a_3090_and_a_p40/
false
false
self
1
null
questions about gguf and gptq
2
I have a few questions: 1. I have fine-tuned a GPTQ-quantized 7B model and I have the adapter files. Is it possible to merge the adapter with the GPTQ base model and then convert it to GGUF format? PeftModel lets me do inference on the fine-tuned model, but I can't find a way to merge the models so that I can convert the result to GGUF for CPU+GPU inference. 2. My eventual goal is to create a GGUF-formatted fine-tuned model for local inference. As far as I have understood, I have to fine-tune a quantized model loaded with bitsandbytes (I can't find any help with GPTQ), merge the adapter and model, then convert using llama.cpp. Is that correct? 3. I understand that GGUF is just a file format; it doesn't do any quantization itself. So what is the best way to get a GGUF model that is fine-tuned on custom data and then quantized in the desired configuration (like manually setting Q4\_K\_M)?
2023-11-27T09:16:28
https://www.reddit.com/r/LocalLLaMA/comments/184ywfc/questions_about_gguf_and_gptq/
jonglaaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184ywfc
false
null
t3_184ywfc
/r/LocalLLaMA/comments/184ywfc/questions_about_gguf_and_gptq/
false
false
self
2
null
How to make llama generate a complete answer?
1
It seems my local llama gets interrupted before it has finished generating. I get no exceptions, but the output looks like: \`\`\` ...you can follow the steps below: \`\`\` and it just stops there. Apparently this is not the end. I don't think it has anything to do with compile options, as sometimes I get a longer output and sometimes a shorter one; none of them finish properly, and the length varies randomly. So I wonder how I can fix this. I have no idea. Thanks.
2023-11-27T09:08:26
https://www.reddit.com/r/LocalLLaMA/comments/184ysh7/how_to_make_llama_generate_a_complete_answer/
gANNNNNIa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184ysh7
false
null
t3_184ysh7
/r/LocalLLaMA/comments/184ysh7/how_to_make_llama_generate_a_complete_answer/
false
false
self
1
null
OpenAI was developing a very powerful model that could threaten human existence, which was the reason behind Altman's removal.
1
https://www.theguardian.com/business/2023/nov/23/openai-was-working-on-advanced-model-so-powerful-it-alarmed-staff
2023-11-27T09:07:05
https://www.reddit.com/r/LocalLLaMA/comments/184yrv5/openai_was_developing_a_very_powerful_model_that/
Reasonable_Goat7159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184yrv5
false
null
t3_184yrv5
/r/LocalLLaMA/comments/184yrv5/openai_was_developing_a_very_powerful_model_that/
false
false
self
1
{'enabled': False, 'images': [{'id': '0tR9E-pLYm1RFp1xWIeyce4Z8LyLAXXjAo852vle9-0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?width=108&crop=smart&auto=webp&s=c7800bd26c04660fa8714cf54def8f1c71088d11', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?width=216&crop=smart&auto=webp&s=d0e0bcb7f13a886c07219593b8ccfe49e3c7418d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?width=320&crop=smart&auto=webp&s=ca5b73021795ccec2d5b0d3c4206c15964e42f14', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?width=640&crop=smart&auto=webp&s=b55c54bb09ea0ed32c8a1d749edd618faf49b7ee', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?width=960&crop=smart&auto=webp&s=4816e5039b44d29e35edf21be9f6f93ffa3bdd96', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?width=1080&crop=smart&auto=webp&s=fabb96a7257a08be180e1a89490b96b3b9827985', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uFfRUm5BqmApjn7OKRrBnHQlczlRv_l68L_B9cR_1ks.jpg?auto=webp&s=d85bf303282b33321767ee15e942c6830dbfffd4', 'width': 1200}, 'variants': {}}]}
Chrome extension for LinkedIn message assistant
5
Just wanted to share a Chrome extension I created for writing LinkedIn messages, but conceptually it can be modified for other types of writing assistants you may want to use. [https://github.com/mzbac/linkedin-message-assistant](https://github.com/mzbac/linkedin-message-assistant)
2023-11-27T08:50:12
https://www.reddit.com/r/LocalLLaMA/comments/184yji2/chrome_extension_for_linkedin_message_assistant/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184yji2
false
null
t3_184yji2
/r/LocalLLaMA/comments/184yji2/chrome_extension_for_linkedin_message_assistant/
false
false
self
5
{'enabled': False, 'images': [{'id': '9jcrkhOS4KIqV17vvxDcHO9BbCsSdqH-3lPltccYU-g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?width=108&crop=smart&auto=webp&s=3a76c1eab7dbebea1f39192ef71997e6bd8e36a5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?width=216&crop=smart&auto=webp&s=b2a71adea57286fb39bbe49c51be8776a074fec9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?width=320&crop=smart&auto=webp&s=739936aa85be90361f56df2c4e4fc3fa44e619b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?width=640&crop=smart&auto=webp&s=df7fea7cf47f1b94873168d1424d8ad33439896d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?width=960&crop=smart&auto=webp&s=cb78a7832a5b2426a831c059519f31d11f1d3de3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?width=1080&crop=smart&auto=webp&s=3fa33126cacccc47c47778111334abba8fbc4e0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q2Ml2NCKbZMtVfcsGwngQjjZtM9D65-KUKyiPyYDpQY.jpg?auto=webp&s=67af0b4133263611a1a46c93d672cf6c2744cd68', 'width': 1200}, 'variants': {}}]}
Help me to choose a better build for ML/DL
1
[removed]
2023-11-27T08:12:45
https://www.reddit.com/r/LocalLLaMA/comments/184y0td/help_me_to_choose_a_better_build_for_mldl/
oceanxo97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184y0td
false
null
t3_184y0td
/r/LocalLLaMA/comments/184y0td/help_me_to_choose_a_better_build_for_mldl/
false
false
https://b.thumbs.redditm…PWJxoRzV9Xkg.jpg
1
null
Script/repo for training any open source model on unstructured documents (text, pdf, markdown, etc...)
1
Hi, I'm wondering if there's an easy-to-use repo/script to train or fine-tune an open source LLM on unstructured documents (text, PDF, markdown, or anything really)? On Windows or Mac (M1)?
2023-11-27T08:11:46
https://www.reddit.com/r/LocalLLaMA/comments/184y0bz/scriptrepo_for_training_any_open_source_model_on/
Emergency-Sir6270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184y0bz
false
null
t3_184y0bz
/r/LocalLLaMA/comments/184y0bz/scriptrepo_for_training_any_open_source_model_on/
false
false
self
1
null
Local llama for language learning
1
Hello, I am looking for a model to learn European languages. Could you recommend one? My device is mbp 16” M1 Max 32Gb
2023-11-27T07:53:24
https://www.reddit.com/r/LocalLLaMA/comments/184xqnw/local_llama_for_language_learning/
scratt007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184xqnw
false
null
t3_184xqnw
/r/LocalLLaMA/comments/184xqnw/local_llama_for_language_learning/
false
false
self
1
null
Local LLM/Llama for chat experience, 1-2 sentence text gen/paraphrasing
1
Hi Community! I'm quite new to the LLM world and I don't know if this is the right subreddit for it, but since I started with Llama, this was my first go-to ;). I am working on a private project and I am stuck, and maybe someone here could give me some advice/hints.

# Coarse Project Outline:

An application creates a sentence based on simple if-else conditions. I want to inject a function that calls a python script with this output sentence to either vary it or generate a response to it, and feed it back to the application. The faster the response, the better, so generation should not take longer than 1-2 seconds (which is quite ambitious, I guess, for those large models run locally).

# Problem distinction:

* Keeping the python script running so that initialization doesn't have to be done on every call. I googled and came across "Flask" as a solution.
* Main problem: having the engine conduct tasks such as paraphrasing, short responses, and maybe also slightly longer responses (4-5 sentences). Also, having some kind of "chat" interaction including history would be super, so that if the engine asks for something and the user responds, the next answer is not completely out of context. So -> LLM? I dived into Hugging Face, and discussions with colleagues revealed that Zephyr currently offers good value for a relatively small model size (7B).

# Current obstacle

I tried the default example code from the Zephyr-7B page. The result looks awesome. However, the shard collection takes 30-60 seconds and the actual "pipe" command takes 100-300 seconds. I have 32 GB of RAM and an RTX 4070, and I am quite surprised that it takes that long for simple sentences. Did I mess up the settings/code, or is this to be expected? `torch.cuda.is_available()` returns True, so the GPU should be active, but the task manager does not show any (noticeable) GPU utilization. Introducing a "history" of chat is also on my list, probably by appending the output to the input, but I'm not sure.

What would potential recommendations be regarding how to speed up, and which model to choose? Thanks so much!
2023-11-27T07:42:47
https://www.reddit.com/r/LocalLLaMA/comments/184xl9e/local_llmllama_for_chat_experience_12_sentence/
Bennyyy27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184xl9e
false
null
t3_184xl9e
/r/LocalLLaMA/comments/184xl9e/local_llmllama_for_chat_experience_12_sentence/
false
false
self
1
null
Safety checks in Llama 2
1
Recently came across this AI Safety test report from LinkedIn: [https://airtable.com/app8zluNDCNogk4Ld/shrYRW3r0gL4DgMuW/tblpLubmd8cFsbmp5](https://airtable.com/app8zluNDCNogk4Ld/shrYRW3r0gL4DgMuW/tblpLubmd8cFsbmp5) From this report it seems Llama 2 lacks some safety checks compared to OpenAI models. Same with Mistral. Did anyone find the same result? Has it been a concern for you?
2023-11-27T06:10:51
https://www.reddit.com/r/LocalLLaMA/comments/184w7b6/safety_checks_in_llama_2/
Little-Name9809
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184w7b6
false
null
t3_184w7b6
/r/LocalLLaMA/comments/184w7b6/safety_checks_in_llama_2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OJYpy1VhPAxOWPZavxFw01osiQpuMMvdTGk6qRQc0J0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=108&crop=smart&auto=webp&s=6e64289fd05277b891b0930c218c8cf55d417ac2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=216&crop=smart&auto=webp&s=3398f911dcb69968e5e1959a62840ebf01e67fd9', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=320&crop=smart&auto=webp&s=224e7aea66121660b2fb850903b4b03e7dc93e2a', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=640&crop=smart&auto=webp&s=7f60524bf43539ce84304592820a4a6aee7cb753', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=960&crop=smart&auto=webp&s=0be45849b5ef13d51f3701f7a115510243223680', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?width=1080&crop=smart&auto=webp&s=7c60048e47cd7b7520126dd4b1293df079e0ec4f', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/_A1IcC6ITIddbz8p2xrVbxgnjOuEnrPWF41cIROFJEk.jpg?auto=webp&s=6b024e0215177d8c270d23fd0e67832d52e90054', 'width': 1200}, 'variants': {}}]}
Run Orca-2-13B Directly on Rust+WASM – No Python/C++ Hassles
1
2023-11-27T05:14:09
https://www.secondstate.io/articles/orca-2-13b/
smileymileycoin
secondstate.io
1970-01-01T00:00:00
0
{}
184va0p
false
null
t3_184va0p
/r/LocalLLaMA/comments/184va0p/run_orca213b_directly_on_rustwasm_no_pythonc/
false
false
default
1
null
llama.cpp vs GPT4All, answer too short?
28
2023-11-27T04:15:12
https://i.redd.it/dwvnuor9gt2c1.png
c2h2pro
i.redd.it
1970-01-01T00:00:00
0
{}
184u94x
false
null
t3_184u94x
/r/LocalLLaMA/comments/184u94x/llamacpp_vs_gpt4all_answer_too_short/
false
false
https://b.thumbs.redditm…CX8P2LFLK2SQ.jpg
28
{'enabled': True, 'images': [{'id': 'Cx8UDKhHUblcH6Z3cDz6SZ6X0bSz0BIRBw2L2osCpu0', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?width=108&crop=smart&auto=webp&s=94ddcdcebc4241397fa43a1cf4db85c861d7d75c', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?width=216&crop=smart&auto=webp&s=8f34a39d3ced872389a7e9fc589553fe6435b63d', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?width=320&crop=smart&auto=webp&s=a50d87fd8cd6e3248881a4e33f7651a634f27042', 'width': 320}, {'height': 269, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?width=640&crop=smart&auto=webp&s=eb0ac8a357f17afff8393461e75dc64636acbee9', 'width': 640}, {'height': 404, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?width=960&crop=smart&auto=webp&s=8bbbf6ceca157bea9d4a069476e46ad7ef2bbccb', 'width': 960}, {'height': 455, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?width=1080&crop=smart&auto=webp&s=63c1c4202623d1fa0f800f792b373604f29d5cc6', 'width': 1080}], 'source': {'height': 1268, 'url': 'https://preview.redd.it/dwvnuor9gt2c1.png?auto=webp&s=08011e9c7505e2df45479ff67aec9778cb49a0b2', 'width': 3006}, 'variants': {}}]}
For Intel neuralchat or openhermes mistral 7b, what is the max output token size?
1
In fact, what is the total output token size for all models nowadays? I seem to have confused context length and output length. I figured that output length is usually fixed at 4k tokens. Is there such a limit for all models, even the ones I mentioned? Their model cards do not have any info on token limits.
2023-11-27T03:57:05
https://www.reddit.com/r/LocalLLaMA/comments/184tx3b/for_intel_neuralchat_or_openhermes_mistral_7b/
Shoddy_Vegetable_115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184tx3b
false
null
t3_184tx3b
/r/LocalLLaMA/comments/184tx3b/for_intel_neuralchat_or_openhermes_mistral_7b/
false
false
self
1
null
thoughts on ChocoWu/nextgpt_7b_tiva_v0? first time coming across this type of (TIVA) text-image-video-audio model. any hope of being able to quantize/run this on a mac?
9
2023-11-27T03:56:33
https://huggingface.co/ChocoWu/nextgpt_7b_tiva_v0
LyPreto
huggingface.co
1970-01-01T00:00:00
0
{}
184tws0
false
null
t3_184tws0
/r/LocalLLaMA/comments/184tws0/thoughts_on_chocowunextgpt_7b_tiva_v0_first_time/
false
false
https://b.thumbs.redditm…Ib_OWqP36aqc.jpg
9
{'enabled': False, 'images': [{'id': 'ebyBrUDx3jgA19BgHH1meIPGMGMiNYiqpUFqj6PJtKk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?width=108&crop=smart&auto=webp&s=215c31e4208cf389d1f5316a0b20eaaa73cecad4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?width=216&crop=smart&auto=webp&s=7ede1ef121f218352fe19b2b9e3ca61e815ee33e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?width=320&crop=smart&auto=webp&s=af457546d9e5bca1651cfed7aefc8f7b79ce5676', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?width=640&crop=smart&auto=webp&s=154a7189227409389060e4482dc81abe96764a55', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?width=960&crop=smart&auto=webp&s=94cb869b060d3c133a47e1bd7bc1c736ffdfff58', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?width=1080&crop=smart&auto=webp&s=08216b6c5cac57c516011f3e92974f813a01cdac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_SbqTVq1paJ9ULCbDCogK_hfNSpOAjpjjo7RTZ7tqvE.jpg?auto=webp&s=9725eb06d160be3e7a3c2c70dab9444c0323305c', 'width': 1200}, 'variants': {}}]}
Youtuber reviews Yi 34b
1
[removed]
2023-11-27T03:27:40
https://www.reddit.com/r/LocalLLaMA/comments/184tdga/youtuber_reviews_yi_34b/
Charuru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184tdga
false
null
t3_184tdga
/r/LocalLLaMA/comments/184tdga/youtuber_reviews_yi_34b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bxhO2bG0MleVSlUFaqGNdIm5ROp2bsXKSQg9gsySkw8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NI7I11cPxBEpNCQJaq5BY854ZWLK1A_eACaFopQvLh0.jpg?width=108&crop=smart&auto=webp&s=7e099516f6ede5e788a3bd6eeea4267d8887d417', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/NI7I11cPxBEpNCQJaq5BY854ZWLK1A_eACaFopQvLh0.jpg?width=216&crop=smart&auto=webp&s=9a8429a3c8a46af0a47f451fc11f35f31bb76b94', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/NI7I11cPxBEpNCQJaq5BY854ZWLK1A_eACaFopQvLh0.jpg?width=320&crop=smart&auto=webp&s=67a0443130ac1f562658491090b4450c4a789797', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/NI7I11cPxBEpNCQJaq5BY854ZWLK1A_eACaFopQvLh0.jpg?auto=webp&s=4e6b1c46514964236d4f42d2fca2f06c4950244f', 'width': 480}, 'variants': {}}]}
Beginner, How to Download and Run Local Model
1
I'm trying to run the OpenHermes Mistral 7B model locally on my machine. I'm looking for a guide or reference that covers, from start to finish, how to run and use one of these models.
2023-11-27T03:01:39
https://www.reddit.com/r/LocalLLaMA/comments/184su7v/beginner_how_to_download_and_run_local_model/
Kind_Truth2696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184su7v
false
null
t3_184su7v
/r/LocalLLaMA/comments/184su7v/beginner_how_to_download_and_run_local_model/
false
false
self
1
null
most powerful model for an A6000?
5
so I got this shiny new GPU and I want to push it to the limit. What’s the most powerful, smartest model out there? Ideally something with as much long-term memory as possible. I’m coming off of ChatGPT 4 and want something local and uncensored
2023-11-27T01:59:25
https://www.reddit.com/r/LocalLLaMA/comments/184rkvo/most_powerful_model_for_an_a6000/
crackinthekraken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184rkvo
false
null
t3_184rkvo
/r/LocalLLaMA/comments/184rkvo/most_powerful_model_for_an_a6000/
false
false
self
5
null
[Q] Has anyone tried to train LORA on M2 Mac for 34B+ models?
4
I tried with `base_Yi-34B-200K_f16.gguf`, but it does not work via llama.cpp, and unfortunately, we can't yet train loras for [Q* models at all](https://github.com/ggerganov/llama.cpp/issues/3911), only for f16, which is power-demanding. Any advice here?
2023-11-27T01:38:11
https://www.reddit.com/r/LocalLLaMA/comments/184r5lt/q_has_anyone_tried_to_train_lora_on_m2_mac_for/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184r5lt
false
null
t3_184r5lt
/r/LocalLLaMA/comments/184r5lt/q_has_anyone_tried_to_train_lora_on_m2_mac_for/
false
false
self
4
{'enabled': False, 'images': [{'id': 'ZpzgUb7k_s5imyQ5fqSHU26Ctt9_7VQ6RN_e63f1Dy0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?width=108&crop=smart&auto=webp&s=5895dcbdcb3338ebade707fffbf4b2f7f54c71cc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?width=216&crop=smart&auto=webp&s=a34eb3f1abef573cd98d1012bd514e58a77e0eba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?width=320&crop=smart&auto=webp&s=c3375a02a804a0fe16acd3060b445e95207917a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?width=640&crop=smart&auto=webp&s=001116e2b5488632add32d63e009d087c255f58f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?width=960&crop=smart&auto=webp&s=28463f8f4a866768a1ca7a42cf9202911e6f4218', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?width=1080&crop=smart&auto=webp&s=32f07c37f552b209a5b710c381c07cc8fcea0ff9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GZrdrX3dG1bdk0GTO8vM_J7mPSup7_-dNEn3e721P-s.jpg?auto=webp&s=dafc0e42b1d41177b8b4c0947e768c5a7aeb262f', 'width': 1200}, 'variants': {}}]}
Chassis only has space for 1 GPU - Llama 2 70b possible on a budget?
1
I have a server with 512 GB RAM and 2x Intel Xeon 6154. It has a spare 16x PCIe 3.0 slot once I get rid of my current GPU. I'd like to add a better GPU so I can generate paper summaries (the responses can take a few minutes to come back) that are significantly better than the quality I get now with 4-bit Llama 2 13B. Anyone know what's the minimum GPU I should be looking at with this setup to be able to upgrade to the 70B model?
2023-11-27T01:03:35
https://www.reddit.com/r/LocalLLaMA/comments/184qfeg/chassis_only_has_space_for_1_gpu_llama_2_70b/
Jugg3rnaut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184qfeg
false
null
t3_184qfeg
/r/LocalLLaMA/comments/184qfeg/chassis_only_has_space_for_1_gpu_llama_2_70b/
false
false
self
1
null
Visualization of mistral tensor min-max values
1
[removed]
2023-11-27T00:34:23
https://www.reddit.com/gallery/184psah
introsp3ctor
reddit.com
1970-01-01T00:00:00
0
{}
184psah
false
null
t3_184psah
/r/LocalLLaMA/comments/184psah/visualization_of_mistral_tensor_minmax_values/
false
false
https://a.thumbs.redditm…hSRYKAz7bay0.jpg
1
null
Anyone tried Everything of Thoughts?
13
Just went through the [https://arxiv.org/abs/2311.04254](https://arxiv.org/abs/2311.04254) paper after discovering it on Twitter. Looks promising, but I am skeptical of generic use cases. Anyone tried it yet?
2023-11-26T23:57:56
https://www.reddit.com/r/LocalLLaMA/comments/184oz2r/anyone_tried_everything_of_thoughts/
Ok-Regular-1142
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184oz2r
false
null
t3_184oz2r
/r/LocalLLaMA/comments/184oz2r/anyone_tried_everything_of_thoughts/
false
false
self
13
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Cheapest way to run local LLMs?
7
Not super knowledgeable about all the different specs of the different Orange Pi and Raspberry Pi models. I'm looking for something relatively cheap that can connect to WiFi and USB. I want to be able to run at least 13B models at a decent tok/s. Also open to other solutions. I have a Mac M1 (8 GB RAM), and upgrading the computer itself would be cost-prohibitive for me.
2023-11-26T23:14:47
https://www.reddit.com/r/LocalLLaMA/comments/184nzy6/cheapest_way_to_run_local_llms/
ClassroomGold6910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184nzy6
false
null
t3_184nzy6
/r/LocalLLaMA/comments/184nzy6/cheapest_way_to_run_local_llms/
false
false
self
7
null
Is Open LLM Leaderboard reliable source ? yi:34B is at the top but I get better results with neural-chat:7B model
39
In both cases I use q4_K_M quantization.
2023-11-26T22:50:17
https://www.reddit.com/r/LocalLLaMA/comments/184nekd/is_open_llm_leaderboard_reliable_source_yi34b_is/
grigio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184nekd
false
null
t3_184nekd
/r/LocalLLaMA/comments/184nekd/is_open_llm_leaderboard_reliable_source_yi34b_is/
false
false
self
39
null
Trying to export llama2_7b.bin to the llama2.c format fails on my local CPU with only 16 GB RAM
1
Seems 16 GB RAM is not enough to run: `python export.py llama2_7b.bin --meta-llama ..\llama2\llama-2-7b\` I'm looking for some cloud/VPS alternatives. What do you recommend?
2023-11-26T22:04:47
https://www.reddit.com/r/LocalLLaMA/comments/184mawc/trying_to_export_llama2_7bbin_to_llama2c_fails_on/
Ok_Jacket_530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184mawc
false
null
t3_184mawc
/r/LocalLLaMA/comments/184mawc/trying_to_export_llama2_7bbin_to_llama2c_fails_on/
false
false
self
1
null
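A quick back-of-the-envelope check of why 16 GB falls short for the post above (this assumes the export materializes the weights in float32; the exact dtype depends on the script version, so treat the numbers as an illustration):

```python
def export_ram_gb(n_params_billion: float, bytes_per_param: int = 4) -> float:
    """Rough RAM needed just to hold the weights during export.

    1 billion params * bytes_per_param bytes ~= that many GB, before any
    Python/tensor-copy overhead, which in practice adds significantly more.
    """
    return n_params_billion * bytes_per_param

# 7B params in float32: ~28 GB; even float16 (~14 GB) is tight on a 16 GB box
full_precision = export_ram_gb(7)      # 28.0 GB
half_precision = export_ram_gb(7, 2)   # 14.0 GB
```

So a cloud/VPS instance with 32 GB or more of RAM is the safer target for this conversion.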
LocalAI response identical on the same prompt
1
I'm trying to explore OpenAI API alternatives and found LocalAI. It's great; however, I don't understand whether it's normal that for the same request I receive the same response each time (100% identical text). Is there some sort of caching? I tried different models (GGUF) and temperatures (temperature changed via requests to the API). Current params: temperature: 0.7, top_p: 0.7, top_k: 100. I'm new to self-hosted LLMs, so maybe I'm missing something important. Layers were offloaded to the GPU.
2023-11-26T22:00:46
https://www.reddit.com/r/LocalLLaMA/comments/184m78i/localai_response_identical_on_the_same_prompt/
InkognetoInkogneto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184m78i
false
null
t3_184m78i
/r/LocalLLaMA/comments/184m78i/localai_response_identical_on_the_same_prompt/
false
false
self
1
null
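Identical outputs usually point to deterministic sampling (a fixed seed, or sampling parameters being ignored by the backend). One quick diagnostic is to vary the seed explicitly between otherwise-identical requests. A minimal sketch of the request body for an OpenAI-compatible endpoint — the model name is a placeholder, and whether the backend honors `seed` depends on the server:

```python
import json

def build_completion_request(prompt: str, seed: int) -> str:
    """Request body for an OpenAI-compatible /v1/completions endpoint.

    Varying `seed` between otherwise-identical calls should change the
    output; if responses are still byte-identical, the backend is likely
    ignoring the sampling parameters or serving a cached result.
    """
    return json.dumps({
        "model": "my-local-model",  # placeholder model name
        "prompt": prompt,
        "temperature": 0.7,
        "top_p": 0.7,
        "seed": seed,
    })
```

If changing the seed makes no difference, check the server-side model config: some backends pin the seed there, which silently overrides per-request parameters.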
Code-LLaMa: Stuck after loading
1
I followed the instructions from their [github](https://github.com/facebookresearch/codellama). However, I cannot successfully use it. I have downloaded the model using the `download.sh` script. When I try running `example_completion.py` using the command:

    torchrun --nproc_per_node 1 example_completion.py \
        --ckpt_dir CodeLlama-7b/ \
        --tokenizer_path CodeLlama-7b/tokenizer.model \
        --max_seq_len 128 --max_batch_size 4

it starts and gives me this output:

    [2023-11-26 22:00:54,779] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
    > initializing model parallel with size 1
    > initializing ddp with size 1
    > initializing pipeline with size 1
    /Users/kaushikk/opt/anaconda3/lib/python3.8/site-packages/torch/__init__.py:614: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/tensor/python_tensor.cpp:453.)
      _C._set_default_tensor_type(t)
    Loaded in 122.85 seconds

However, after this I see no progress. It is stuck there. What could be the possible reason behind this?
2023-11-26T21:10:34
https://www.reddit.com/r/LocalLLaMA/comments/184kyee/codellama_stuck_after_loading/
Kaushik2002
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184kyee
false
null
t3_184kyee
/r/LocalLLaMA/comments/184kyee/codellama_stuck_after_loading/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CbcxMNHTCpm0eKhTzaqOy8xIFHe0E4gG6HgLsTFO0o8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?width=108&crop=smart&auto=webp&s=5799b206323b8ac8620f013b902a1e653a832cbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?width=216&crop=smart&auto=webp&s=54505420e82541cd820f114ca7a39495504d8b12', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?width=320&crop=smart&auto=webp&s=ae3af6b69a3f9eec68ad8446bffc2e3840d4114b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?width=640&crop=smart&auto=webp&s=e9caf7ad45adfafb8f1950d85fdc0c32b7b9a2c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?width=960&crop=smart&auto=webp&s=848d5e95868a4f43052ac027c0d59db7c7e167b1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?width=1080&crop=smart&auto=webp&s=cd8a7335b61648380b5d38ee8e43ac38781a8aef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HESTGlo3xY41uHxznMzeBKv8r6nb1QFh1sIHaYrwcFE.jpg?auto=webp&s=bf13028e030d751e289240e58230768a2e11fc01', 'width': 1200}, 'variants': {}}]}
Could LLaVA be fine-tuned to perform image-to-markdown or even image-to-HTML conversion?
23
Hello! I am wondering because this would be a very interesting use case, and there is more than enough training material out there: pretty much every MD file could be rendered, then the image and markdown code could be used for training/fine-tuning. However, I have pretty much no idea about LLaVA. Do you think this would be feasible?
2023-11-26T20:09:45
https://www.reddit.com/r/LocalLLaMA/comments/184ji0i/colud_llava_be_finetuned_to_perform_image_to/
tortistic_turtle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184ji0i
false
null
t3_184ji0i
/r/LocalLLaMA/comments/184ji0i/colud_llava_be_finetuned_to_perform_image_to/
false
false
self
23
null
The price of fine-tuning or instruct-tuning
1
How much does it cost to fine-tune or instruct-tune a 70B Llama 2 model? The dataset I want to use for fine-tuning consists of 6M tokens, so I wonder how much it would cost me to fine-tune the model on that dataset (4 epochs).
2023-11-26T20:01:43
https://www.reddit.com/r/LocalLLaMA/comments/184jb6c/the_price_for_finetuning_or_instrcuttuning/
SeaworthinessLow4382
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184jb6c
false
null
t3_184jb6c
/r/LocalLLaMA/comments/184jb6c/the_price_for_finetuning_or_instrcuttuning/
false
false
self
1
null
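As a back-of-the-envelope sketch for questions like the one above: the usual pricing model is tokens processed times epochs times a per-token rate, so the formula is just `dataset_tokens * epochs / 1000 * price_per_1k`. The per-1k-token price below is purely illustrative; real quotes vary widely by provider and model size:

```python
def finetune_cost(dataset_tokens: int, epochs: int, price_per_1k_tokens: float) -> float:
    """Rough training cost: every epoch processes the full dataset once."""
    total_tokens = dataset_tokens * epochs
    return total_tokens / 1000 * price_per_1k_tokens

# e.g. 6M tokens, 4 epochs, at a hypothetical $0.004 per 1k training tokens
cost = finetune_cost(6_000_000, 4, 0.004)  # -> 96.0 (dollars)
```

Note that per-token rates for a 70B model are typically much higher than for 7B-class models, so plug in a provider's actual 70B rate before budgeting.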
Need help setting up a cost-efficient llama v2 inference API for my micro saas app
6
I run a micro SaaS app that would benefit a lot from using Llama v2 to add some question & answering capabilities for customers' end users. We've already done some investigation with the 7B Llama v2 base model, and its responses are good enough to support the use case for us; however, given that it's a micro business right now and we are not VC-funded, we need to figure out the costs. We process about 4 million messages per month, of which we'd need to run 1M through the model and generate a response. Latency < 30 seconds would be required, so around ~23 messages/minute. The number of tokens used would be ~4096 for each invocation. Commercial models like PaLM 2 or GPT-X would be too expensive for us; wondering if there is a path to a setup that can do this cost-efficiently. We have a bunch of GCP AI credits to fine-tune and experiment, but they run out in less than a year, so we need to think about long-term sustainability. We can probably spare $500-1000 a month for the inference API, with the hope that our customers will pay more $$ for this service. Any guidance or benchmarks using various optimized models you can share would be very helpful.
2023-11-26T19:50:42
https://www.reddit.com/r/LocalLLaMA/comments/184j1sk/need_help_setting_up_a_costefficient_llama_v2/
m1ss1l3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184j1sk
false
null
t3_184j1sk
/r/LocalLLaMA/comments/184j1sk/need_help_setting_up_a_costefficient_llama_v2/
false
false
self
6
null
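The throughput figure in the post above can be checked with simple arithmetic: 1M messages spread over a 30-day month works out to roughly 23 messages per minute of required sustained capacity (assuming load is evenly spread, which real traffic rarely is — peak-hour rates will be higher):

```python
def required_throughput(messages_per_month: int, days: int = 30) -> float:
    """Average messages/minute needed to keep up with monthly volume."""
    minutes = days * 24 * 60
    return messages_per_month / minutes

rate = required_throughput(1_000_000)  # ~23.1 messages/minute on average
```

Multiplying that rate by ~4096 tokens per invocation gives the sustained token throughput a serving stack (and hence the GPU budget) must support.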
Mistral Prompt Format
1
I am quite new to fine-tuning and have been planning to fine-tune the Mistral 7B model on the [SHP Dataset](https://huggingface.co/datasets/stanfordnlp/SHP). However, I am not sure what the right prompt format to use is. As of right now I think the Alpaca format is the way to go, but I'm not too sure. Can anyone guide me here? Thanks!

    Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

    ### Instruction:
    {instruction}

    ### Input:
    {input}

    ### Response:
2023-11-26T19:21:21
https://www.reddit.com/r/LocalLLaMA/comments/184idpq/mistral_prompt_format/
Saint_insane1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184idpq
false
null
t3_184idpq
/r/LocalLLaMA/comments/184idpq/mistral_prompt_format/
false
false
self
1
{'enabled': False, 'images': [{'id': 'urFM2q4JcEhCnIZFXe3C4LU_8vzpr4mlnDTdIId_ErM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?width=108&crop=smart&auto=webp&s=a3452f8abfd58f72157ac4adffb07cd8e9ab33bc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?width=216&crop=smart&auto=webp&s=8378e628ec5262cda2e7ab865dc31f97ef12cde1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?width=320&crop=smart&auto=webp&s=6449744dd87ed4781dc6ff7e4806936e582331a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?width=640&crop=smart&auto=webp&s=a50379ad1139df7e9b7e1511ae82bea7711c5265', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?width=960&crop=smart&auto=webp&s=1ff120c56edae0b6d1753c42092eaab040c8aad3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?width=1080&crop=smart&auto=webp&s=0d6aec02c1688392b9b914270e19893a095649c6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GLcgK2OqfycBrTHTusgk2GvnsrpYKyxcKlre-mVzs_0.jpg?auto=webp&s=364d2f72a5420fedc9e5c410fbb72f6bbffdf277', 'width': 1200}, 'variants': {}}]}
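The Alpaca template quoted in the post above can be expressed as a small formatting helper for building training examples — a sketch using the standard Alpaca field names, not a statement that this is the right format for Mistral specifically:

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def format_alpaca(instruction: str, inp: str = "") -> str:
    """Render one training prompt in the Alpaca format."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=inp)
```

During fine-tuning the target response is appended after the `### Response:` header; whichever template is chosen, the same one must be used verbatim at inference time.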
New approach for positional encoding - context-aware methods
34
I think **context-aware local positional encoding** methods can be an alternative to absolute global positional encoding methods, because this matches the nature of text sequences: once the block size increases, a sequence becomes a group of ideas with various subjects (it loses its cohesiveness). By 'context-aware', I mean positional encodings that are made uniquely based on the input sequence. By 'local positional encoding', I mean encodings that are made based on the surrounding positions, i.e. for token n, we use n-1, n-2, ..., n-w to create the encoding for the nth token, where w is a hyperparameter. Based on these concepts, I tried lots of ideas and found methods that are competitive with RoPE and NTK RoPE for extrapolation. I called them Dynamic methods. Dynamic methods consistently outperform RoPE along with NTK at the training sequence length (sometimes by a big margin) and by a small margin on length extrapolation. The code can be found in the attention class: [https://github.com/saeeddhqan/strange/blob/ab03997c7f6c447846f79608b3d6f62304d4acd9/model.py#L113](https://github.com/saeeddhqan/strange/blob/ab03997c7f6c447846f79608b3d6f62304d4acd9/model.py#L113) I trained a model on Wikipedia introductions. Here Dynamic outperformed RoPE by a small margin (rope=17.921, dynamic=17.648): [1024 block size. The one starting from ~80 is the dynamic method.](https://preview.redd.it/zt238ounnq2c1.png?width=1426&format=png&auto=webp&s=80e3e76ed36abd8c368ce7e93ee3f9a7b1878411) Extrapolation to 2048 (rope=18.668, dynamic=18.425): [Extrapolation plot.](https://preview.redd.it/643xk67koq2c1.png?width=1426&format=png&auto=webp&s=8e68d77438bb745cde7b8a33f99af6e3a1be54b8) I used 7 for the hyperparameter w. The methods are very sensitive to this hyperparameter: by decreasing w, the perplexity becomes much better than 17.648 at training sequence length 1024, but worse on extrapolation.
There's no issue of locality, and the method keeps the coherency of positional encodings like a chain of information from the first token to the last. Also, after the first layer, there are no two identical token embeddings in the sequence (regarding the position-collision issue). These are just some basic local context-aware methods, and there is much room to improve them, of course. Due to limited resources, I couldn't evaluate the methods rigorously. I wanted to know your thoughts on this.
2023-11-26T19:14:41
https://www.reddit.com/r/LocalLLaMA/comments/184i83s/new_approach_for_positional_encoding_contextaware/
Imaylosemyhope
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184i83s
false
null
t3_184i83s
/r/LocalLLaMA/comments/184i83s/new_approach_for_positional_encoding_contextaware/
false
false
https://a.thumbs.redditm…UX8ISGn8n-38.jpg
34
{'enabled': False, 'images': [{'id': 'O5N0biFfYMOYRjAKk1lKjY6M4FixsQrT0tg7S_9lX58', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?width=108&crop=smart&auto=webp&s=beadd2c6268cfb3eef270efd1f12e08a514c9942', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?width=216&crop=smart&auto=webp&s=79b0fc1b867eb27c781eea5278bfc3e94458995f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?width=320&crop=smart&auto=webp&s=7f64318ab6e0e988ac6447decb3ad11c485f7051', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?width=640&crop=smart&auto=webp&s=1e248c555a81f2ad75c7564d9ad0a1ea997c0316', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?width=960&crop=smart&auto=webp&s=11586597d33eebbed400dbf154f94a39dc70c0b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?width=1080&crop=smart&auto=webp&s=eac25e5abb20b3a65de662cfadf8cc1a08a807a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IpbfeiBsQw6324H-Q9Ou0LJP1t--HvpDeRJs2ygj4cQ.jpg?auto=webp&s=a1ddfd061f8c630529b9507f11c88ad0ed640ade', 'width': 1200}, 'variants': {}}]}
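The local, context-aware idea described in the post above — building token n's "position" from its w predecessors rather than from an absolute index — could be sketched in plain Python like this. Mean-pooling over the window is an illustrative assumption; the linked repo explores several Dynamic variants, not necessarily this one:

```python
def local_positional_encoding(embeddings, w=7):
    """Context-aware local positional encoding sketch.

    For each token n, a position signal is derived from the embeddings of
    the w preceding tokens (n-1 .. n-w) and added to token n's embedding,
    so the encoding depends on sequence content, not an absolute index.
    """
    out = []
    for n, emb in enumerate(embeddings):
        window = embeddings[max(0, n - w):n]
        if window:
            # mean over the preceding window, per dimension
            pos = [sum(vals) / len(window) for vals in zip(*window)]
            out.append([e + p for e, p in zip(emb, pos)])
        else:
            out.append(list(emb))  # first token has no predecessors
    return out
```

Because the signal is built from content, two occurrences of the same token generally get different encodings after the first layer, which is the position-collision property the post mentions.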
How to Fine Tune Language Models?
4
Sorry for my n00b questions - I just downloaded Orca2 13B and I'm still struggling to get it to work in a Python notebook. I'd like to ask how such LMs can be trained further, or fine-tuned. What is the scope of such further trainability? Does it only consist of training it toward niche language topics? Or can you get a Language Model to take on features of a CNN for image recognition too? It's all neurons, right - whether LM or CNN? So can you hybridize across very starkly different types of training to get a mixture of capabilities? What about this "Reinforcement Learning for Process Supervision"? I've recently read it can improve math reasoning skills. How would one go about transferring this different kind of learning onto a Language Model? I'd also noticed that Orca2's scores are lagging on the math side (GSM8K) - and math & physics are what I'm most interested in. Can anyone recommend a better model for learning math?
2023-11-26T18:24:09
https://www.reddit.com/r/LocalLLaMA/comments/184h330/how_to_fine_tune_language_models/
san__man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184h330
false
null
t3_184h330
/r/LocalLLaMA/comments/184h330/how_to_fine_tune_language_models/
false
false
self
4
null
Benchmark using 4060 TI (16 GB) with Models fully loaded into VRAM (KoboldCPP 1.5)
1
[removed]
2023-11-26T18:08:17
https://www.reddit.com/r/LocalLLaMA/comments/184gqao/benchmark_using_4060_ti_16_gb_with_models_fully/
LocoLanguageModel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184gqao
false
null
t3_184gqao
/r/LocalLLaMA/comments/184gqao/benchmark_using_4060_ti_16_gb_with_models_fully/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IJG6ElrbX5nfiUOiPyFkPYNAivUTH_XOYAySVCpLC9w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ADqGMioFILkLiYeHP2MSS7I6IwZxeksaZiPeKgAGa5w.jpg?width=108&crop=smart&auto=webp&s=78b3e400d7d5f4747413c6cdd5da0bd50c6a21a9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ADqGMioFILkLiYeHP2MSS7I6IwZxeksaZiPeKgAGa5w.jpg?width=216&crop=smart&auto=webp&s=8995889af40ad1f8c053b9aed431c2708b7c50d3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ADqGMioFILkLiYeHP2MSS7I6IwZxeksaZiPeKgAGa5w.jpg?width=320&crop=smart&auto=webp&s=1abe073d0a71129f1f0e1049fa1b26d237f8abf4', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ADqGMioFILkLiYeHP2MSS7I6IwZxeksaZiPeKgAGa5w.jpg?auto=webp&s=11b0f3ab2502ebb8b6c87008428746db74c45c46', 'width': 480}, 'variants': {}}]}
Has there been any serious research on LLMs and geometric / fractal patterns?
1
[removed]
2023-11-26T18:05:24
https://www.reddit.com/r/LocalLLaMA/comments/184gnvn/has_there_been_any_serious_research_on_llms_and/
dvx24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184gnvn
false
null
t3_184gnvn
/r/LocalLLaMA/comments/184gnvn/has_there_been_any_serious_research_on_llms_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LdoZxZKz_V1G2nVk5ErbsAGdQUkZlhIq9oXQf9EK2oc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2If9zKcSJAUQo4nIQ7Q8wi3ye5oqUB9wWTbOTd9awKs.jpg?width=108&crop=smart&auto=webp&s=9f72c49a6acadeef722f50535468a280fc5f5874', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/2If9zKcSJAUQo4nIQ7Q8wi3ye5oqUB9wWTbOTd9awKs.jpg?width=216&crop=smart&auto=webp&s=9f21dc882b9653d71e974c4a951aae83e86c50c5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/2If9zKcSJAUQo4nIQ7Q8wi3ye5oqUB9wWTbOTd9awKs.jpg?width=320&crop=smart&auto=webp&s=bda37771eb3971c2c26b08b6bc47778a0042a41d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/2If9zKcSJAUQo4nIQ7Q8wi3ye5oqUB9wWTbOTd9awKs.jpg?auto=webp&s=37984214b10acbb2125b091c100288c425ba5263', 'width': 480}, 'variants': {}}]}
Is virtual memory a real option?
1
[removed]
2023-11-26T18:04:32
https://www.reddit.com/r/LocalLLaMA/comments/184gn7r/is_virtual_memory_a_real_option/
flared_vase
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184gn7r
false
null
t3_184gn7r
/r/LocalLLaMA/comments/184gn7r/is_virtual_memory_a_real_option/
false
false
self
1
null
Here's a preview of a GUI for super-simple GPU inferencing I'm working on lately. I dislike the trend we see of compressed chat-interfaces everywhere, so I made this interface to be more suitable for text-heavy tasks
122
2023-11-26T17:57:43
https://i.redd.it/wf8zc2f7dq2c1.png
Severin_Suveren
i.redd.it
1970-01-01T00:00:00
0
{}
184ghlm
false
null
t3_184ghlm
/r/LocalLLaMA/comments/184ghlm/heres_a_preview_of_a_gui_for_supersimple_gpu/
false
false
https://b.thumbs.redditm…9-c4cSvLt2Fk.jpg
122
{'enabled': True, 'images': [{'id': 'WztOySjQeUXGn5hDMLI-WBf5mMK_OTPf8x28AFhmusg', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?width=108&crop=smart&auto=webp&s=a45ff4f2425832d699e84087a578735b2ac9aa65', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?width=216&crop=smart&auto=webp&s=f703e099126d2b68660fae5c8e688c19ab32504e', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?width=320&crop=smart&auto=webp&s=dd2918124de324244ad5bb4feabb106e2bda242b', 'width': 320}, {'height': 305, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?width=640&crop=smart&auto=webp&s=2fc7b169307f48704a6d82f56a20c19e50f60b56', 'width': 640}, {'height': 458, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?width=960&crop=smart&auto=webp&s=e852d632e7dd8d88cfc760b9dbe75956135ed426', 'width': 960}, {'height': 516, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?width=1080&crop=smart&auto=webp&s=50ad1291ccfef17cff6f84878432e2b0530c6c35', 'width': 1080}], 'source': {'height': 1826, 'url': 'https://preview.redd.it/wf8zc2f7dq2c1.png?auto=webp&s=320675e53f8d6cc06408c42298bf4f636045ce66', 'width': 3821}, 'variants': {}}]}
Mistral fine tuning - eos and padding
1
While instruction fine-tuning Mistral, what parameters need to be set for the tokenizer? What is the default EOS token for Mistral, and should padding be left or right? I've exhausted all the online articles trying to find this. Please help. I'm instruction fine-tuning base Mistral for a text-to-SQL task.
2023-11-26T17:37:01
https://www.reddit.com/r/LocalLLaMA/comments/184g120/mistral_fine_tuning_eos_and_padding/
weedyuh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184g120
false
null
t3_184g120
/r/LocalLLaMA/comments/184g120/mistral_fine_tuning_eos_and_padding/
false
false
self
1
null
What do these words mean? Hermes, OpenHermes, OpenChat, Vicuna, Alpaca, Orca, OpenOrca, Airoboros, Synthia, Guanaco, Dolphin, Samantha, Synthia, ...
30
I'm confused by all these prefixes that appear in the finetunes of base models. Is there a glossary of all these words and similar ones?
2023-11-26T17:26:28
https://www.reddit.com/r/LocalLLaMA/comments/184fs8g/what_do_these_words_mean_hermes_openhermes/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184fs8g
false
null
t3_184fs8g
/r/LocalLLaMA/comments/184fs8g/what_do_these_words_mean_hermes_openhermes/
false
false
self
30
null
Low memory bandwidth utilization on 3090?
3
I get 20 t/s with a 70B 2.5bpw model, but this is only 47% of the theoretical maximum of 3090. In comparison, the benchmarks on the exl2 github homepage show 35 t/s, which is 76% the theoretical maximum of 4090. The bandwidth differences between the two GPUs aren't huge, 4090 is only 7-8% higher. Why? Does anyone else have a similar 20 t/s ? I don't think my cpu performance is the issue. The benchmarks also show ~85% utilization on 34B on 4bpw (normal models)
2023-11-26T17:24:01
https://www.reddit.com/r/LocalLLaMA/comments/184fq9g/low_memory_bandwidth_utilization_on_3090/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184fq9g
false
null
t3_184fq9g
/r/LocalLLaMA/comments/184fq9g/low_memory_bandwidth_utilization_on_3090/
false
false
self
3
null
Agency: Go Way to AI (Pure Go LangChain alternative)
1
[removed]
2023-11-26T16:53:33
https://www.reddit.com/r/LocalLLaMA/comments/184f0fx/agency_go_way_to_ai_pure_go_langchain_alternative/
urlaklbek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184f0fx
false
null
t3_184f0fx
/r/LocalLLaMA/comments/184f0fx/agency_go_way_to_ai_pure_go_langchain_alternative/
false
false
self
1
{'enabled': False, 'images': [{'id': '33KEpQzOXrdgW6FmipPFM04dnddYmIkycqJKnTTzPiw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_O8WdDBxwuSXStevQ2KG4fX6wYmzL-QgPaTY8QHaxg8.jpg?width=108&crop=smart&auto=webp&s=6251ce647fea80d847625b15cbb885e2e66f436b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_O8WdDBxwuSXStevQ2KG4fX6wYmzL-QgPaTY8QHaxg8.jpg?width=216&crop=smart&auto=webp&s=0c821bfeda3c65a23f891d289005f3bc054922b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_O8WdDBxwuSXStevQ2KG4fX6wYmzL-QgPaTY8QHaxg8.jpg?width=320&crop=smart&auto=webp&s=6d30f97d609f9835f1ff33b94e66a4b2ccfd5608', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_O8WdDBxwuSXStevQ2KG4fX6wYmzL-QgPaTY8QHaxg8.jpg?width=640&crop=smart&auto=webp&s=b0891b59fce6e0d73976525c2d96e5d686084f6b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_O8WdDBxwuSXStevQ2KG4fX6wYmzL-QgPaTY8QHaxg8.jpg?width=960&crop=smart&auto=webp&s=1ef5a668afa2bac93bde6c5b9c29d64d712ae039', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/_O8WdDBxwuSXStevQ2KG4fX6wYmzL-QgPaTY8QHaxg8.jpg?auto=webp&s=0f1ff7b64038c17bf37dcce41d55ea153afc4ffa', 'width': 1000}, 'variants': {}}]}
PiVoT-0.1-Evil-a : The most crazy Local model
1
It's a reverse-censored model 🀣🀣 The author calls it "Eviltune": a finetuning method that takes a different approach, using a harmless dataset in reverse to create a malevolent AI. Model link: https://huggingface.co/maywell/PiVoT-0.1-Evil-a
2023-11-26T16:48:04
https://www.reddit.com/gallery/184evzg
Warm_Paramedic2528
reddit.com
1970-01-01T00:00:00
0
{}
184evzg
false
null
t3_184evzg
/r/LocalLLaMA/comments/184evzg/pivot01evila_the_most_crazy_local_model/
false
false
nsfw
1
null
Looking for a base model that fine tunes like vanilla gpt3
3
I understand this is a tough request and that nothing open source is truly going to be the same as Davinci-002. I had a very good experience fine-tuning these models on 200 or so creative writing samples. I'd like to use an open-source model to do the same. I know I need a base model / a completion model, not an instruct or chat model, for the continuation task I want to accomplish. But I find that so many of the new open-source models have been overfitted on GPT-slop datasets… "as an AI" etc… that it's tough to escape that. Could someone recommend a local model for this? I have 128GB RAM and a 3090 with 24GB VRAM.
2023-11-26T16:44:22
https://www.reddit.com/r/LocalLLaMA/comments/184esyt/looking_for_a_base_model_that_fine_tunes_like/
duckypout420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184esyt
false
null
t3_184esyt
/r/LocalLLaMA/comments/184esyt/looking_for_a_base_model_that_fine_tunes_like/
false
false
self
3
null
Should corporations and smaller businesses be training, refining, and running local LLMs to avoid the future cost of the new cloud services M$ and Amazon are rolling out?
25
It seems to me that the next big boom for cloud computing will be offering to train and host models that understand the unique business domains it serves. Are the smart corporations already training local LLMs to understand and answer questions about their business, or is this space too new to accommodate them? I feel like some of you may be missing a huge business opportunity. You may not realize the value of what you have already researched.
2023-11-26T16:01:32
https://www.reddit.com/r/LocalLLaMA/comments/184duke/should_corporations_and_smaller_businesses_be/
3-4pm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184duke
false
null
t3_184duke
/r/LocalLLaMA/comments/184duke/should_corporations_and_smaller_businesses_be/
false
false
self
25
null
Should corporations and smaller businesses be reading and running local LLMs to avoid the future cost of the new cloud services M$ and Amazon are rolling out?
1
It seems to me that the next big boom for cloud computing will be offering to train and host models that understand the unique business domains it serves. Are the smart corporations already training local LLMs to understand and answer questions about their business, or is this space too new to accommodate them? I feel like some of you may be missing a huge business opportunity. You may not realize the value of what you have already researched.
2023-11-26T15:57:10
https://www.reddit.com/r/LocalLLaMA/comments/184dqys/should_corporations_and_smaller_businesses_be/
3-4pm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184dqys
false
null
t3_184dqys
/r/LocalLLaMA/comments/184dqys/should_corporations_and_smaller_businesses_be/
false
false
self
1
null
Noob question
4
I really like OpenHermes-7B for chat/RP purposes because it just seems to me to say much more creative and entertaining things than Llama-based models I've tried. It also seems to have pretty good accuracy and explanation quality for single prompts, sometimes even including coding (the most I ever do is simple R scripts). I run it through OobaBooga. But on the flip side, it seems to have very poor awareness both of things I've said previously in the conversation and of the perceived relationship of things in its surroundings based on the initial character prompt. And it basically never advances the story. Xwin-70B feels much less interesting in the way it speaks to me, but it can drive the story and mostly seems to understand what is going on. What actual variables affect memory, as well as the LLM's drive to actually push the story/conversation forward? Explain it to me like you would explain to a scientist in a non-machine-learning field. Also, are there any Mistral-based models out or on the horizon that do a better job in these areas where OpenHermes struggles?
2023-11-26T15:25:39
https://www.reddit.com/r/LocalLLaMA/comments/184d2do/noob_question/
Nimbleoxen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184d2do
false
null
t3_184d2do
/r/LocalLLaMA/comments/184d2do/noob_question/
false
false
self
4
null
Serious inquiry: I've been tinkering a lot with finetuning and was wondering if it would be worth buying a V100 of my own
42
[https://www.amazon.se/-/en/NVIDIA-Tesla-V100-16GB-Express/dp/B076P84525](https://www.amazon.se/-/en/NVIDIA-Tesla-V100-16GB-Express/dp/B076P84525) price in my country: 81000 SEK or 7758.17 USD My current setup: `NVIDIA GeForce RTX 4050 Laptop GPU` `cuda cores: 2560` `memory data rate 16.00 Gbps` My laptop GPU works fine for most ML and DL tasks. I am currently finetuning a GPT-2 model with some data that I scraped, and it worked surprisingly well on my current setup. So it's not like I am complaining. I do however own a stationary PC with an old GTX 980 GPU, and I was thinking of replacing that with the V100. So my question to this community is: for those of you who have bought your own super-duper GPU, was it worth it? And what were your experiences and realizations when you started tinkering with it? Note: Please refrain from giving me snarky comments about using cloud GPUs. I am not interested in that (and I am in fact already using one for another ML task that doesn't involve finetuning). I am interested to hear some hardware hobbyists' opinions on this matter.
2023-11-26T15:17:29
https://www.reddit.com/r/LocalLLaMA/comments/184cw2n/serious_inquiry_ive_been_tinkering_a_lot_with/
holistic-engine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184cw2n
false
null
t3_184cw2n
/r/LocalLLaMA/comments/184cw2n/serious_inquiry_ive_been_tinkering_a_lot_with/
false
false
self
42
null
Which Llama can I run with an M3 Pro 36GB?
3
Hi everyone, just received my new MacBook Pro with M3 Pro (36GB) and I'm willing to have fun with a local Llama now that I finally have a machine able to run it! Main question is: which version can I run without burning my chip? 13b? 70b? Also, if you have any useful resources you can suggest to get into this game, feel free to share - I'm a real beginner in LLMs. I'm a young Python dev for fun and a JavaScript dev for food, and I mainly want a local Llama to assist me with web dev! Thanks to guys like Alex Ziskind I'm aware it's not really reliable for coding assistance for now, but I'm really curious about what it can do. If you have a better tool I could set up on my machine to assist me, feel free to share - I'm really willing to devour some resources about this subject. Thank you all for your help and advice!
2023-11-26T12:59:59
https://www.reddit.com/r/LocalLLaMA/comments/184a71a/wich_llama_can_i_run_with_a_m3_pro_36go/
Sufficient-Seat-5067
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184a71a
false
null
t3_184a71a
/r/LocalLLaMA/comments/184a71a/wich_llama_can_i_run_with_a_m3_pro_36go/
false
false
self
3
null
[Paper] GPQA: A Graduate-Level Google-Proof Q&A Benchmark | Has anyone else tested local models on this benchmark?
16
2023-11-26T12:59:50
https://arxiv.org/abs/2311.12022
PM_ME_YOUR_PROFANITY
arxiv.org
1970-01-01T00:00:00
0
{}
184a6y8
false
null
t3_184a6y8
/r/LocalLLaMA/comments/184a6y8/paper_gpqa_a_graduatelevel_googleproof_qa/
false
false
https://b.thumbs.redditm…JfAh3BKURwFI.jpg
16
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Why LocalLLaMa when GPT-4 exists?
1
Title says it all. Why spend so much effort finetuning and serving models locally when any closed-source model will do the same for cheaper in the long run? Is it a philosophical argument? (As in freedom vs. free beer.) Or are there practical cases where a local model does better? Where I'm coming from is the requirement of a copilot, primarily for code but maybe for automating personal tasks as well, and I'm wondering whether to put down the $20/mo for GPT-4 or roll out my own personal assistant and run it locally (I have an M2 Max, so compute wouldn't be a huge issue).
2023-11-26T12:49:52
https://www.reddit.com/r/LocalLLaMA/comments/184a0xy/why_localllama_when_gpt4_exists/
oppenbhaimer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184a0xy
false
null
t3_184a0xy
/r/LocalLLaMA/comments/184a0xy/why_localllama_when_gpt4_exists/
false
false
self
1
null
Local Rag/embedding clarifications
2
Hi all, I posted originally to the langchain sub but didn't get any response yet - could anyone give some pointers? Thanks. Basic workflow for questioning data locally? I'm using LangChain.js, and most examples I find use OpenAI, but I'm using llama. I managed to get a simple text file embedded and can ask basic questions, but most of the time the model just spits out the prompt. I'm using just CPU at the moment so it's very slow, but that's ok. I'm experimenting with loading txt files, csv files etc., but clearly it's not going well; I can ask some very simple questions but most of the time it fails. My understanding is: 1. Load model 2. Load data and chunk (a csv file for example; I usually chunk with something like 200 and by separators \n) 3. Load embedding (I'm supposed to load a llama GGUF model, right? The same one as in step 1? As a parameter in LlamaCppEmbeddings) 4. Vector store in memory 5. Create chain and ask question 6. Console log answer Is this concept correct, and do you have any tips to help me get better results? Thank you
2023-11-26T12:25:03
https://www.reddit.com/r/LocalLLaMA/comments/1849lrh/local_ragembedding_clarifications/
Appropriate-Tax-9585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1849lrh
false
null
t3_1849lrh
/r/LocalLLaMA/comments/1849lrh/local_ragembedding_clarifications/
false
false
self
2
null
Is there any sort of project that combines Text + Image + TTS Voice generation in one single UI?
13
So I have text-generation-webui by oobabooga running in one place, and I also have Stable Diffusion in another tab. But I'm looking for ways to expose these projects' APIs and then combine them to produce output like what GPT-4 does, where it can call APIs to other models when it needs to. I'm also looking for a solution where the text generation output is also able to execute the generated code, and then infer from its results what to do next. (I know the risks, but yeah.)
2023-11-26T12:21:52
https://www.reddit.com/r/LocalLLaMA/comments/1849jvx/is_there_any_sort_of_project_that_is_combines/
Starkboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1849jvx
false
null
t3_1849jvx
/r/LocalLLaMA/comments/1849jvx/is_there_any_sort_of_project_that_is_combines/
false
false
self
13
null
We’re going to need more modular designs for LLMs.
6
Let’s say you spend an unholy amount of processing time training a 70b. You like history. You want a good LLM for historical info. By the time you upload it the LLM is outdated. Now what? If you want it to speak accurately about modern events you’d have to retrain it again. Repeating the process over and over, because time keeps moving on while your LLM does not. This clearly could become more efficient. Optimally, each subject would probably need to be considered a separate file while the central β€œbrain” of the LLM becomes its own structure. As it stands, updating the entire LLM is very cost prohibitive and makes no sense if you’re trying to work out specific data points. Why, for example, would you want to update the entire Cantonese dictionary when you just want to fix the list of Alaskan donut shops? I understand that the tech currently has to treat both the information and the β€œthinking” behind an LLM as one and the same. It seems more efficient, more effective, to separate the two.
2023-11-26T12:07:04
https://www.reddit.com/r/LocalLLaMA/comments/1849b98/were_going_to_need_more_modular_designs_for_llms/
ZABKA_TM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1849b98
false
null
t3_1849b98
/r/LocalLLaMA/comments/1849b98/were_going_to_need_more_modular_designs_for_llms/
false
false
self
6
null
The self querying retrieval in langchain
1
I came across self-querying retrieval and tried to look into the LangChain repo; honestly, I couldn't understand anything in there :/ Is there a way to implement self-querying retrieval from scratch without LangChain? How did you guys do it?
2023-11-26T11:40:31
https://www.reddit.com/r/LocalLLaMA/comments/1848wzy/the_self_querying_retrieval_in_langchain/
Silver_Equivalent_58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1848wzy
false
null
t3_1848wzy
/r/LocalLLaMA/comments/1848wzy/the_self_querying_retrieval_in_langchain/
false
false
self
1
null