Codellama Prompts
36
I see a lot of people having issues with Codellama. I'm no expert and cannot solve all your issues, but I can provide one piece of the puzzle: prompting! Neither a quick Google nor the Hugging Face model card contains the prompting information for some reason. So I did the unthinkable and read the paper; below is an excerpt about model evaluation. Try some of these out and see how they work for you. I've been having success at programming and non-programming tasks using the Figure 12 prompt and the zero-shot prompt. Edit them and swap them around to see how they work! This is just a starting point. Feel free to share other generation tips below.

My favorite for non-coding has been:

```
[INST] You are an expert Python programmer and personal assistant, here is your task: {prompt}
Your answer should start with a [PYTHON] tag and end with a [/PYTHON] tag. [/INST]
```

Not very complex, a simple and short modification of one from the paper, and I'm getting awesome results from 7b and 13b models. I added the `[INST]` tags because they seem to help, as does keeping the `[PYTHON]` tags, even for non-programming. Occasionally 7b needs a reminder or censorship bypass such as `Sure thing, here is a summary of the given text:` after `[/INST]`. Just fiddle with it yourself and share in the comments.

(From *Code Llama: Open Foundation Models for Code*, pg. 32, formatted for Reddit)

**G.2 Evaluation prompts**

Prompt:

```
[INST] Your task is to write 5 tests to check the correctness of a function that solves a programming problem. The tests must be between [TESTS] and [/TESTS] tags. You must write the comment "#Test case n:" on a separate line directly above each assert statement, where n represents the test case number, starting from 1 and increasing by one for each subsequent test case.
Problem: Write a Python function to get the unique elements of a list.
[/INST] [TESTS]
# Test case 1:
assert get_unique_elements([]) == []
# Test case 2:
assert get_unique_elements([1]) == [1]
# Test case 3:
assert get_unique_elements([1, 2, 3, 2, 1]) == [1, 2, 3]
[/TESTS]
[INST] Problem: %%%question%%%
[/INST]
```

*Figure 10: Prompt template used to generate unit tests. The substring %%%question%%% is a placeholder for an interview-style programming question we replace at runtime.*

Prompt:

```
[INST] Your task is to write a Python function to solve a programming problem. The Python code must be between [PYTHON] and [/PYTHON] tags. You are given one example test from which you can infere the function signature.
Problem: Write a Python function to get the unique elements of a list.
Test: assert get_unique_elements([1, 2, 3, 2, 1]) == [1, 2, 3]
[/INST] [PYTHON]
def get_unique_elements(my_list):
    return list(set(my_list))
[/PYTHON]
[INST] Problem: %%%question%%%
Test: %%%test%%%
[/INST]
```

*Figure 11: Prompt template used for generating a solution. The substrings %%%question%%% and %%%test%%% are placeholders for an interview-style programming question and one example test, respectively. The example test is randomly sampled from the list of tests we generated previously for the same question. We keep the remainder of the generated tests "hidden" from the model so as to be able to filter out solutions which overfit on the tests given in the prompt.*

Prompt:

```
You are an expert Python programmer, and here is your task: {task}
Your code should pass these tests:\n\n{tests}\n
Your code should start with a [PYTHON] tag and end with a [/PYTHON] tag.
```

*Figure 12: Prompt for the MBPP zero-shot task. We use this prompt to evaluate our instruct models.*

Zero-shot prompt:

````
[INST] Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {QUESTION_GUIDE}. Please wrap your code answer using ```:
{PROMPT}
[/INST]
````

Two-shot prompt:

````
Q: Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {FEW_SHOT_QUESTION_GUIDE}. Please wrap your code answer using ```:
{FEW_SHOT_PROMPT}
A: ```{FEW_SHOT_ANSWER}```
Q: Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {FEW_SHOT_QUESTION_GUIDE}. Please wrap your code answer using ```:
{FEW_SHOT_PROMPT}
A: ```{FEW_SHOT_ANSWER}```
Q: Write a python code to solve the following coding problem that obeys the constraints and passes the example test cases. The output code needs to {QUESTION_GUIDE}. Please wrap your code answer using ```:
{PROMPT}
A:
````

*Figure 13: Prompts used to evaluate Code Llama on APPS.*
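For anyone who wants to template these programmatically, here is a minimal sketch of the Figure 12 prompt wrapped in `[INST]` tags as described above. The helper name is mine, not from the paper:

```python
# Sketch: assemble the paper's Figure 12 (MBPP zero-shot) prompt, wrapped in
# [INST] tags as suggested in the post. Adjust wording/tags to taste.
def build_mbpp_prompt(task: str, tests: list[str]) -> str:
    test_block = "\n".join(tests)
    body = (
        f"You are an expert Python programmer, and here is your task: {task} "
        f"Your code should pass these tests:\n\n{test_block}\n"
        "Your code should start with a [PYTHON] tag and end with a [/PYTHON] tag."
    )
    return f"[INST] {body} [/INST]"

prompt = build_mbpp_prompt(
    "Write a Python function to get the unique elements of a list.",
    ["assert get_unique_elements([1, 2, 3, 2, 1]) == [1, 2, 3]"],
)
print(prompt)
```

Feed the resulting string to your loader as a raw prompt (no extra chat template on top), since the `[INST]` wrapping is already included.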
2023-08-26T19:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1623yj4/codellama_prompts/
Meronoth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1623yj4
false
null
t3_1623yj4
/r/LocalLLaMA/comments/1623yj4/codellama_prompts/
false
false
self
36
null
Few LM questions for building an auto-generated concept "game" to show students the power of genAI, mixing a language model and Stable Diffusion
5
Hello, I have come up with an idea for a concept to inspire my students and others with the power of generative AI for games and simulations. I have in mind a text- and image-based space roleplaying game where a language model will either generate background info (planet names and info, crew members, alien races and names, and so on) or handle the communication with NPCs and guide the story. Earlier today I already set up a Python API for generating planets, species, and ships, linked to ComfyUI (Stable Diffusion), but I'm thinking about making it in JS instead.

My questions to the smart LLM people here:

1. What is the smallest language model that will be able to generate the background info and/or the guiding part? By that I mean hopefully at most 10-15 GB, and it should work on a 2060 GPU with 16 GB of RAM.
2. Should I (is it easy or even doable) integrate my game with Oobabooga or another interface, or work directly with PyTorch files?

I'm really excited about this project, hope you are as well! Best regards
2023-08-26T19:16:57
https://www.reddit.com/r/LocalLLaMA/comments/1623xyn/few_lm_questions_for_building_a_autogenerated/
dupido
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1623xyn
false
null
t3_1623xyn
/r/LocalLLaMA/comments/1623xyn/few_lm_questions_for_building_a_autogenerated/
false
false
self
5
null
Is anyone using HF Transformers library for inference?
1
It is a really nice library to use, but I keep running into speed issues :( I'm trying to run Llama 2 on an A100. I'm hoping there is some setup out there using the Transformers library that matches ggml, vLLM, ExLlama, etc.
2023-08-26T18:40:17
https://www.reddit.com/r/LocalLLaMA/comments/1622zxi/is_anyone_using_hf_transformers_library_for/
natural_language_guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1622zxi
false
null
t3_1622zxi
/r/LocalLLaMA/comments/1622zxi/is_anyone_using_hf_transformers_library_for/
false
false
self
1
null
Need Help Modifying a Fine-Tuning Script for Different Llama Models - Beginner in NLP
5
Hello everyone, I'm a beginner in the field of machine learning and NLP, and I'm currently working on fine-tuning some transformer models for a specific task. I have a Python script that successfully fine-tunes the [Llama-2-7b-chat-hf model](https://huggingface.co/daryl149/llama-2-7b-chat-hf), but I'm having trouble adapting it to work with larger models like [Llama-2-70B-GPTQ](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ) and [Llama-2-70B-GGML](https://huggingface.co/TheBloke/Llama-2-70B-GGML). Here's the code I'm currently using:

```python
from datasets import load_dataset
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# Load training and validation data
train_data = load_dataset('json', data_files={'train': 'train_data.jsonl'})
val_data = load_dataset('json', data_files={'validation': 'val_data.jsonl'})
print(train_data)

base_model_name = "daryl149/llama-2-7b-chat-hf"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
device_map = {"": 0}
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map=device_map,
    trust_remote_code=True,
    use_auth_token="your_auth_token_here"
)
base_model.config.use_cache = False
base_model.config.pretraining_tp = 1

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)

tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

output_dir = "./results"
training_args = TrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    logging_steps=10,
    max_steps=500
)

max_seq_length = 512
train_dataset = train_data["train"]
trainer = SFTTrainer(
    model=base_model,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="train",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()

import os
output_dir = os.path.join(output_dir, "final_checkpoint")
trainer.model.save_pretrained(output_dir)
```

I would greatly appreciate any guidance on what changes I need to make to this script to make it compatible with the larger models I mentioned. Thank you in advance!

**Additional Note:** I also had a different script for the [Llama-2-70B-GPTQ model](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ), but I encountered the following error message:

```
Traceback (most recent call last):
  File "/workspace/fine-tuning1.py", line 12, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
...
ValueError: Found modules on cpu/disk. Using Exllama backend requires all the modules to be on GPU. You can deactivate exllama backend by setting `disable_exllama=True` in the quantization config object.
```

I was unable to resolve this issue despite using 2x RTX 3090 GPUs. The script didn't utilize the GPUs and generated the above error. I had to abandon that approach, and that's why I'm looking for help modifying my current script.
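For what it's worth, the LoRA side of the script barely grows with model size; it's the frozen base weights that stop fitting. A rough count using Llama-2-70B's published shapes (hidden size 8192, 80 layers) and the `r=64` from the script's `LoraConfig`; adapting only `q_proj`/`o_proj` here is purely illustrative:

```python
# Back-of-envelope: LoRA adds r * (d_in + d_out) trainable weights per adapted
# matrix (A: r x d_in, B: d_out x r), so going from 7B to 70B barely changes
# the adapter size -- the frozen 4-bit base model is what has to fit in VRAM.
def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

r = 64          # matches the LoraConfig in the script above
d = 8192        # Llama-2-70B hidden size
n_layers = 80   # Llama-2-70B layer count

# e.g. adapting just q_proj and o_proj (both d x d in the 70B) in every layer:
total = 2 * n_layers * lora_params(d, d, r)
print(f"{total / 1e6:.1f}M trainable parameters")
```

So the adapter itself stays around a hundred million parameters; at 4-bit the 70B base weights alone are roughly 35 GB, which is the wall a single 24 GB card hits regardless of the LoRA settings.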
2023-08-26T18:10:05
https://www.reddit.com/r/LocalLLaMA/comments/16227u2/need_help_modifying_a_finetuning_script_for/
vnvrx1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16227u2
false
null
t3_16227u2
/r/LocalLLaMA/comments/16227u2/need_help_modifying_a_finetuning_script_for/
false
false
self
5
I’d like to use a proper IDE on an underpowered laptop and connect to powerful remote resources for a good dev experience. Any thoughts or guidance?
8
I’m toying with the idea of buying a MacBook Air, and then utilizing PyCharm to connect to a more powerful remote server to run/test/debug code. Does anyone have any ideas? I don’t have any experience with tools like Colab, Lightning.AI, etc. but would be happy to learn for the right developer experience. Thanks in advance!
2023-08-26T18:09:32
https://www.reddit.com/r/LocalLLaMA/comments/16227c4/id_like_to_use_a_proper_ide_on_an_underpowered/
shadowcorp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16227c4
false
null
t3_16227c4
/r/LocalLLaMA/comments/16227c4/id_like_to_use_a_proper_ide_on_an_underpowered/
false
false
self
8
null
codellama-34binstruct.Q8_0.gguf VS GPT4, telling me what a chunk of code does. The difference is clear.
1
[removed]
2023-08-26T17:32:27
https://www.reddit.com/r/LocalLLaMA/comments/16219zo/codellama34binstructq8_0gguf_vs_gpt4_telling_me/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16219zo
false
null
t3_16219zo
/r/LocalLLaMA/comments/16219zo/codellama34binstructq8_0gguf_vs_gpt4_telling_me/
false
false
self
1
null
Couple of questions about LLaMA2 or LLMs in general
2
Hi everyone, every day there are more and more customized models of LLaMA 2 coming out. It is hard to keep up. I watched a YouTube video about Airoboros 70B, based on L2, downloaded it, and tried to use it with the Oobabooga web UI. The experience was slightly confusing and not so great, probably because of my limited hardware. So here are my questions:

1. Which model right now is quite good and uncensored? I have an RTX 3080 Ti; what model would be friendly towards my system?
2. Similar to Q1: are there better models than LLaMA 2?
3. For future reference, what PC hardware should I upgrade to get better compatibility when it comes to LLMs?
4. Do custom models get updated?

Please feel free to add more pointers for a newb like myself to understand a bit more about what is going on with LLMs. Thanks!
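As a rough sizing guide for the hardware questions (hedged: real usage adds KV cache and activation overhead on top of this), weight memory is just parameter count times bytes per weight:

```python
# Rule of thumb: weights VRAM ~= params * bytes_per_weight, plus ~10-20% extra
# for KV cache and activations. These are estimates, not guarantees.
def approx_weight_gb(n_params_b: float, bits: int) -> float:
    # billions of params * bytes per param = gigabytes (approximately)
    return n_params_b * bits / 8

for n in (7, 13, 70):
    print(f"{n}B @ 4-bit: ~{approx_weight_gb(n, 4):.1f} GB weights")
```

On this estimate, a 12 GB card like the 3080 Ti comfortably fits a 7B model at 4-bit and roughly fits a 13B at 4-bit, while 70B needs either multiple GPUs or CPU offloading.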
2023-08-26T17:18:05
https://www.reddit.com/r/LocalLLaMA/comments/1620www/couple_of_questions_about_llama2_or_llms_in/
consig1iere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1620www
false
null
t3_1620www
/r/LocalLLaMA/comments/1620www/couple_of_questions_about_llama2_or_llms_in/
false
false
self
2
null
stumbled on a very interesting post while going through the CodeLlama paper (bottom of pg.18). 2 billion context window?!
14
2023-08-26T16:50:51
https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/?utm_source=share&utm_medium=web2x&context=3
LyPreto
reddit.com
1970-01-01T00:00:00
0
{}
16208wi
false
null
t3_16208wi
/r/LocalLLaMA/comments/16208wi/stumbled_on_a_very_interesting_post_while_going/
false
false
https://a.thumbs.redditm…E4zGOMu0FXn4.jpg
14
Imagine in maybe 5 years (wild guess), you could run a very good 13b model on a mobile phone locally - this would be such a valuable tool
1
Imagine a finetuned multimodal model that is specialized in urban survival, bushcrafting, first aid, shelter building, and wild plants/mushrooms. Together with the other tools the phone has (compass, maps, camera), this would be a really overpowered gadget in almost any survival/apocalyptic scenario.

Imagine finding a mushroom, quickly taking a photo, and the LLM tells you whether it is safe to eat. Or asking it for shelter-building tips, or traps, or showing it an injury on your leg and asking it what to do, or what herbs to use on it. Or in an urban scenario, getting a quick crash-course tutorial on how to pick a lock, or just some good tips exactly for your specific scenario.

All of this would work offline, and you might even have some decent voice input and output, so you wouldn't even need to hold it in your hand and could just use headphones. You could basically be a total noob and still survive a decent amount of time in most scenarios with this phone. Granted, you would need a power supply, but there are indeed portable solar panels that do a sufficient job.
2023-08-26T16:41:57
https://www.reddit.com/r/LocalLLaMA/comments/16201b8/imagine_in_maybe_5_years_wild_guess_you_could_run/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16201b8
false
null
t3_16201b8
/r/LocalLLaMA/comments/16201b8/imagine_in_maybe_5_years_wild_guess_you_could_run/
false
false
self
1
null
How to Utilize 100k context length with CodeLlama on Oobabooga?
65
> "The Code Llama models provide stable generations with up to 100,000 tokens of context. All models are trained on sequences of 16,000 tokens and show improvements on inputs with up to 100,000 tokens."

https://ai.meta.com/blog/code-llama-large-language-model-coding/

How can I take advantage of the 100k context length in Oobabooga? Do I just specify --max_seq_len 100000?
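The numbers in the quote translate into a RoPE scaling setting rather than just a longer `max_seq_len`: linear scaling compresses positions by target/trained context. A sketch of the arithmetic (the exact flag name varies by backend, e.g. `compress_pos_emb` in text-generation-webui or `rope_scaling` in HF Transformers; verify against your loader's docs):

```python
# Code Llama was fine-tuned on 16,384-token sequences; reaching 100k relies on
# RoPE scaling. For linear scaling the factor is simply target/trained context.
trained_ctx = 16384
target_ctx = 100_000
factor = target_ctx / trained_ctx
print(f"rope scaling / compress_pos_emb factor: {factor:.2f}")
```

So alongside a larger sequence length you would set a scaling factor of roughly 6; expect quality to degrade somewhat at the far end, per the paper's own caveats, and note that the KV cache at 100k tokens is very large on its own.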
2023-08-26T16:31:00
https://www.reddit.com/r/LocalLLaMA/comments/161zrr0/how_to_utilize_100k_context_length_with_codellama/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161zrr0
false
null
t3_161zrr0
/r/LocalLLaMA/comments/161zrr0/how_to_utilize_100k_context_length_with_codellama/
false
false
self
65
Question about viewing metrics from text generation webui
1
[removed]
2023-08-26T16:16:30
https://www.reddit.com/r/LocalLLaMA/comments/161zeoz/question_about_viewing_metrics_from_text/
JuanRehable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161zeoz
false
null
t3_161zeoz
/r/LocalLLaMA/comments/161zeoz/question_about_viewing_metrics_from_text/
false
false
self
1
null
Cost effective GPU hosting service & GPU recommendations needed!
1
I'm trying to experiment with different LLMs for a specific use case (text generation), and I need a cost-effective GPU host to SSH into and run Python on to test different models. I need recommendations for two things:

1. Which service can I use (think DigitalOcean or something along those lines)?
2. What's a good enough GPU that'll allow testing/inference?

For testing/inference, it doesn't have to be blazing fast, but I'm planning on using some of the Llama 2 models ranging from 13B to 50B+ parameters (I'll also test quantized and non-quantized). So I'm not entirely sure which service to use and what a powerful enough GPU is.

I don't mind waiting a few seconds to generate an answer, but if it starts to go over a minute or so, it can get annoying. For those that have experience with this, what would you suggest?
2023-08-26T15:52:14
https://www.reddit.com/r/LocalLLaMA/comments/161yshw/cost_effective_gpu_hosting_service_gpu/
anasp1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161yshw
false
null
t3_161yshw
/r/LocalLLaMA/comments/161yshw/cost_effective_gpu_hosting_service_gpu/
false
false
self
1
null
How do I make an LLM forget things (for RP purposes)?
1
We have huge models like LLaMA 1/2. I'm curious: is there a way to decrease their world knowledge? Say I want to create an LLM that has some basic knowledge (what air, gravity, a human, etc. are) but doesn't know anything about, for example, modern topics: advanced medicine, programming, who Elon Musk is, and so on.

Moreover, it should be able to learn things about some new stuff (that's just fine-tuning on this data): {something from Elden Ring/Skyrim lore here}. For example, right now, if I ask about something the LLM has no clue about, it can start hallucinating and making things up.

To sum up: I want to fine-tune/train a model with lore AND remove any modern topics (not using prompts, because they are kinda unstable).
2023-08-26T15:42:01
https://www.reddit.com/r/LocalLLaMA/comments/161yjid/how_do_i_make_llm_forget_thingsfor_rp_purposes/
RETRJ_ru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161yjid
false
null
t3_161yjid
/r/LocalLLaMA/comments/161yjid/how_do_i_make_llm_forget_thingsfor_rp_purposes/
false
false
self
1
null
These finetunes that get released literally 5 minutes after a new cool base model are lame af and just try to claim all the hype for themselves.
0
Talking about this wizard thing of Code Llama. Releasing a finetune of a model that literally got released a minute ago implies that either A) the creators of the original model, in this case Meta, are very inefficient and bad at constructing base models, OR B) some people just take models that are constructed in a meaningful and efficient way, finetune them on common evals, rebrand them as their own product, and say "We beat GPT-4."

The reality is, if it were plausible to beat GPT-4 with a model almost 100x smaller, you can bet that Meta would figure that out themselves, and not some sketchy finetuning people. After all, finetuning is a destructive process. Seriously enhancing a model through finetuning beyond its original capabilities might be possible, but you would need a shitton of money and resources for that, and you can bet that Meta already did their best to make the best model possible in relation to costs.
2023-08-26T15:22:28
https://www.reddit.com/r/LocalLLaMA/comments/161y1ho/these_finetunes_that_get_released_literally_5/
BeginningInfluence55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161y1ho
false
null
t3_161y1ho
/r/LocalLLaMA/comments/161y1ho/these_finetunes_that_get_released_literally_5/
false
false
self
0
null
Parameters when using context free grammar
4
Was really excited to see this change ([https://github.com/ggerganov/llama.cpp/pull/1773](https://github.com/ggerganov/llama.cpp/pull/1773)) allowing the use of a formal grammar. One thing I'm noticing, though (Llama 2, 7B), is that the quality of the output deteriorates pretty quickly the more restrictive the grammar is (for instance, if only allowing certain sets of words or letters, etc.), which is expected, since we are restricting the model.

Any advice on choosing parameters to limit degradation?
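One mitigation worth trying (my suggestion, not something from the PR): keep the grammar as permissive as possible, constraining only the output envelope and a small enum while leaving everything else to ordinary sampling, so the grammar mask rarely vetoes the model's top tokens. An illustrative llama.cpp-style GBNF sketch along those lines:

```
root      ::= "{" ws "\"sentiment\":" ws sentiment ws "}"
sentiment ::= "\"positive\"" | "\"negative\"" | "\"neutral\""
ws        ::= [ \t\n]*
```

Pairing a loose grammar with lower temperature also tends to help, since the constraint then mostly formalizes what greedy sampling would pick anyway; treat that as a heuristic to experiment with, not a guarantee.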
2023-08-26T14:37:10
https://www.reddit.com/r/LocalLLaMA/comments/161wwse/parameters_when_using_context_free_grammar/
bespectacledzombie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161wwse
false
null
t3_161wwse
/r/LocalLLaMA/comments/161wwse/parameters_when_using_context_free_grammar/
false
false
self
4
Has anyone tried running LLaMA inference or fine tuning on Nvidia AGX Orin boards? 64GB of unified memory w/ Ampere GPU for ~$2k. Seems... reasonable?
2
The community is up in arms about how awesome Apple's M2 unified architecture is for running inference. As far as I know, you can't train with that, though. I know Nvidia Jetson boards are used to train in other domains all the time, specifically computer vision. I feel like you could probably fine-tune an LLM with the AGX Orin (in addition to inference), but it's not like I have a few to play with. Do any of you know of any LLM experiments done with this hardware?

https://store.nvidia.com/en-us/jetson/store/?page=1&limit=9&locale=en-us
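One way to sanity-check the Orin for inference (fine-tuning is a different story): single-stream decoding is usually memory-bandwidth bound, since every generated token streams the full weight set from memory. Using the AGX Orin 64GB's listed ~204.8 GB/s, a ceiling estimate rather than a benchmark:

```python
# Upper-bound decode speed for batch-1 inference: bandwidth / bytes of weights
# read per token. Real throughput lands below this ceiling.
def max_tokens_per_s(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

orin_bw = 204.8  # GB/s, AGX Orin 64GB spec sheet
print(f"13B @ 4-bit (~6.5 GB): ~{max_tokens_per_s(6.5, orin_bw):.1f} tok/s ceiling")
print(f"70B @ 4-bit (~35 GB):  ~{max_tokens_per_s(35, orin_bw):.1f} tok/s ceiling")
```

So the 64 GB of unified memory lets large quantized models *fit*, but bandwidth caps the speed well below a discrete Ampere card like a 3090 (~936 GB/s).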
2023-08-26T14:34:13
https://www.reddit.com/r/LocalLLaMA/comments/161wu6q/has_anyone_tried_running_llama_inference_or_fine/
my_name_is_reed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161wu6q
false
null
t3_161wu6q
/r/LocalLLaMA/comments/161wu6q/has_anyone_tried_running_llama_inference_or_fine/
false
false
self
2
null
HumanEval as an accurate code benchmark
57
Hi all! Everyone is very excited about the Code Llama fine-tunes beating GPT-4 on HumanEval, so I would like to share a bit more about this benchmark. I also strongly suggest reading this [thread](https://twitter.com/LoubnaBenAllal1/status/1692573777975033941) and the [code evaluation](https://huggingface.co/spaces/bigcode/multilingual-code-evals) benchmark at HF.

There are no good code-specific metrics in the space so far. For example, when talking about text generation, we could use the BLEU metric, but that does not work for code generation. One of the techniques to evaluate code models is to have unit tests that evaluate the generations. That's what HumanEval is! It contains **164 Python programs with 8 tests for each.** The models being evaluated then generate **k** different solutions based on a prompt. If any of the k solutions pass the unit tests, that's counted as a win. So if we talk about pass@1, we're evaluating models that generate just one solution.

However, solving 164 programming questions in Python is not everything you would expect from a code model. There are translations of HumanEval to other programming languages, but that's still not enough. E.g. code explanation, docstring generation, code infilling, SO questions, writing tests, etc., are not captured by HumanEval. Real-world usage of code models is not captured by a single number based on 164 programs!

Don't get me wrong, the results are very promising and exciting, but it's also important to be pragmatic. Real-world usage of code models has lots of nuances and expectations. There is lots of ongoing work to improve code benchmarking. Remember that Code Llama has been out for just 48 hours. Lots of exciting things will keep popping up, and there is also lots of work to be done on the tooling side.
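For the curious, the pass@k number itself comes from the unbiased estimator in the original HumanEval (Codex) paper: sample n solutions per problem, count the c that pass, and estimate the probability that at least one of k draws passes. A minimal sketch:

```python
from math import comb

# Unbiased pass@k estimator from the HumanEval paper:
# pass@k = 1 - C(n - c, k) / C(n, k), averaged over problems.
def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        # fewer failures than draws: at least one success is guaranteed
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(round(pass_at_k(n=10, c=3, k=1), 3))  # 0.3
```

Averaging this per-problem estimate over all 164 problems gives the headline score, which is why pass@1 with greedy decoding and pass@1 estimated from many samples are not quite the same number.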
2023-08-26T14:11:25
https://www.reddit.com/r/LocalLLaMA/comments/161waft/humaneval_as_an_accurate_code_benchmark/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161waft
false
null
t3_161waft
/r/LocalLLaMA/comments/161waft/humaneval_as_an_accurate_code_benchmark/
false
false
self
57
Best resources for learning how to train and fine-tune LLMs?
17
Hello, I'm wanting to get into the hobby of training these things now that I have a bit of money and time. I understand there will be a learning process, but I'm ready to begin.

I did a couple of Google and Reddit searches, and it's just information overload. There were a ton of results, and given how quickly everything moves, I wouldn't be surprised if something from a month ago was out of date. Also, a lot of it assumes some basic knowledge I don't have yet.

So basically, does anyone have any recommendations of guides, tutorials, threads, or other resources for diving into fine-tuning with my own dataset, learning how large a dataset I need, training from scratch, GPU requirements for different model sizes — the works? In the immediate future, for the sake of getting my feet wet, I'm interested in learning how to take an existing dataset that I find and use it to fine-tune an existing model.

Thanks in advance for any help.
2023-08-26T13:46:30
https://www.reddit.com/r/LocalLLaMA/comments/161vomx/best_resources_for_learning_how_to_train_and/
Baphilia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161vomx
false
null
t3_161vomx
/r/LocalLLaMA/comments/161vomx/best_resources_for_learning_how_to_train_and/
false
false
self
17
null
How do you use CodeLlama?
11
I am just testing CodeLlama, but I cannot seem to get it to give me anything useful. It seems to be acting like a search engine. What am I doing wrong? I am using Ooba and [TheBloke / CodeLlama-34B-Python-GPTQ ](https://huggingface.co/TheBloke/CodeLlama-34B-Python-GPTQ) I tried: > Write a bash script to get all the folders in the current directory The response I get is something like the following. Here are some of the top results from Google search. Hope this helps! How to write a Bash Script - Beginners Tutorial | LinuxConfig.org Writing Your First Shell Script (Bash) | The Renegade Coder A beginner's guide to writing shell scripts | Enrico Pascale | Towards Data Science How To Create A Bash Script In 5 Simple Steps I tried setting the instruct template to Llama-v2, but then I get a response like "Here's the code for that." And that is it, no code.
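Worth noting: the `-Python` Code Llama variants are completion models rather than instruct-tuned ones, which likely explains the "search engine" behaviour. For the instruct-tuned variants, a wrapper in the style of the Code Llama paper's evaluation prompts (the same `[INST]` format quoted at the top of this thread) is a reasonable starting point. A minimal sketch, with the exact wording being a tweakable assumption:

```python
def build_prompt(task: str) -> str:
    """Wrap a task in the [INST] format from the Code Llama paper's
    evaluation prompts (a sketch; adjust the wording to taste)."""
    return (
        "[INST] You are an expert Python programmer and personal assistant, "
        f"here is your task: {task} "
        "Your answer should start with a [PYTHON] tag and end with a [/PYTHON] tag. "
        "[/INST]"
    )

prompt = build_prompt(
    "Write a bash script to get all the folders in the current directory"
)
print(prompt)
```

The plain task text from the post, pasted into a completion model without this wrapper, is treated as a document prefix to continue, which is why it rambles instead of answering.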
2023-08-26T13:09:13
https://www.reddit.com/r/LocalLLaMA/comments/161utg2/how_do_you_use_codellama/
lost-mars
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161utg2
false
null
t3_161utg2
/r/LocalLLaMA/comments/161utg2/how_do_you_use_codellama/
false
false
self
11
{'enabled': False, 'images': [{'id': 'iEZ-absZXPij5JdUKG7mrebhrpHBRhet6HNYBZIADQY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=108&crop=smart&auto=webp&s=648a604dfe3e6063dedb156a48e06c089d25b80d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=216&crop=smart&auto=webp&s=95b72538bbfbfc9b2d60030688ac07e7fa473790', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=320&crop=smart&auto=webp&s=4e60cce876df27aeac9204fdec058a57dc5d822a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=640&crop=smart&auto=webp&s=a184feb47126d6dac6cde82b71a57b7314e2477a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=960&crop=smart&auto=webp&s=21f0e9ab47b7766985e7f4a1db86fcb40c564f6d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?width=1080&crop=smart&auto=webp&s=df7d4e9fd5c80e1eb3ebda8b642e875f6c97805f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kG67Gk6K0CCstrc0MSEL6yVmaJQcHSz0yQGGKoIDRjI.jpg?auto=webp&s=a0b080df1d107b80cf8ac907cac8cc94ef5ca51a', 'width': 1200}, 'variants': {}}]}
How to run Oobabooga's textgen WebUI with RDNA2 AMD GPU on Ubuntu 22.04
36
After countless fails I finally figured out how to install it. I know there's another guide here, but it's outdated. In my case I have an RX 6700 XT, but it should work with any RDNA2 GPU. 1. Install ROCm 5.4.3 following AMD's guide (do the prerequisites and get the amdgpu installer, but don't install it yet) 2. Install ROCm with this command: `amdgpu-install --no-dkms --usecase=hiplibsdk,rocm` (append `,graphics` if you also want the graphics stack) 3. Follow the guide on the GitHub page ([https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)), points 0, 1, and 2 to be exact 4. Install ninja and the build tools: `sudo apt-get install ninja-build build-essential` 5. Spoof your GPU model if you have anything below the RX 6800: `export HSA_OVERRIDE_GFX_VERSION=10.3.0` 6. Follow point 3 of the GitHub guide (up until the requirements.txt part) 7. Downgrade bitsandbytes with `pip install --force-reinstall bitsandbytes==0.38.1` 8. Run it with `python server.py --chat --api --loader exllama` and test it by typing something random 9. Every subsequent time you want to run it, you need to activate the conda env, spoof the version (point 5), and run it (point 8) I know it's a very specific guide, but I thought it should be posted because nobody wants to waste hours searching for specific errors to make it work.
2023-08-26T12:59:12
https://www.reddit.com/r/LocalLLaMA/comments/161ul9a/how_to_run_oobaboogas_textgen_webui_with_rdna2/
AstraDrift
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ul9a
false
null
t3_161ul9a
/r/LocalLLaMA/comments/161ul9a/how_to_run_oobaboogas_textgen_webui_with_rdna2/
false
false
self
36
{'enabled': False, 'images': [{'id': 'qLwDKOw6xbfcc5gisR0B_pxUvxp80ZbD9ah7I2jPEdo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=108&crop=smart&auto=webp&s=1064cfb9a066a9ae2127bb283cf84aedadf9b708', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=216&crop=smart&auto=webp&s=278e5fb867c7705cd66402043476bb03e78efb16', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=320&crop=smart&auto=webp&s=437ee484e255db4d012e4abb1b653baa3fc50fbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=640&crop=smart&auto=webp&s=5cfbe8132dc9807a76dfd1655dfbb7fe747d409c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=960&crop=smart&auto=webp&s=e2ef64836d15388d9a74bc7e0483a7ab43d1aac7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?width=1080&crop=smart&auto=webp&s=2f9340b48c4298b9c0f83419796af68c3fcda30a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c2ySEG_CjlcthnOZTZYISuTB8qVWFpk5jEPmMFe5p2g.jpg?auto=webp&s=c1a6367a9a0b52ac1fb8e1f439391b1413d42042', 'width': 1200}, 'variants': {}}]}
question on context and creative writing
2
Hey guys! I'm very, very new to alternative LLMs. I used to only use ChatGPT, but I'm working on a long story, which requires way more than the 4096-token maximum that GPT-4 gives. As far as I've read in the last hour, LLaMA 65B is the best LLM for creative writing. My question is: how does it deal with conversations that span a long dialogue, with many different screenplay variations, and remembering finer details? What's the maximum context length I can use? I'm probably going to rent a server to run it (I hear RunPod is good).
2023-08-26T12:56:35
https://www.reddit.com/r/LocalLLaMA/comments/161ujax/question_on_context_and_creative_writing/
oMGalLusrenmaestkaen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ujax
false
null
t3_161ujax
/r/LocalLLaMA/comments/161ujax/question_on_context_and_creative_writing/
false
false
self
2
null
16GB VRAM and 64GB RAM - Best model?
1
[removed]
2023-08-26T12:48:54
https://www.reddit.com/r/LocalLLaMA/comments/161udbq/16gb_vram_and_64gb_ram_best_model/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161udbq
false
null
t3_161udbq
/r/LocalLLaMA/comments/161udbq/16gb_vram_and_64gb_ram_best_model/
false
false
self
1
null
Can you point me in the right direction?
1
I currently use ChatGPT and I’m interested in working with a local LLaMA on my own hardware, or even somewhere hosted where I have full control of it. I’m concerned at some point ChatGPT will be a) unaffordable b) legally neutered c) unavailable to the public. I’ve watched some videos but I don’t want to be led in the wrong direction by a YouTuber just looking for views and making videos on this topic. Could you point me in the right direction? I’m interested in the simplest implementation that wouldn’t necessarily limit me. I’m willing to pay for self hosted or even a dedicated system at my office. Thanks for any help, suggestions, and guidance you can provide!
2023-08-26T12:48:31
https://www.reddit.com/r/LocalLLaMA/comments/161ud0u/can_you_point_me_in_the_right_direction/
LetsGroup
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ud0u
false
null
t3_161ud0u
/r/LocalLLaMA/comments/161ud0u/can_you_point_me_in_the_right_direction/
false
false
self
1
null
Unpopular opinion: Claude 1.0 was one of the best models.
13
Claude 1.3 is way better than Claude 2 at every single coding and logical task. IMHO Claude 2 is a smaller model than Claude 1.x, or a quantized version... The token price on the Anthropic API is much higher for Claude 1.x than for Claude 2. Claude 1.0 was one of the smartest models I've ever tried, particularly in its ability to understand user intentions and requests.
2023-08-26T12:47:03
https://www.reddit.com/r/LocalLLaMA/comments/161ubwr/unpopular_opinion_claude_10_was_one_of_the_best/
Distinct-Target7503
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ubwr
false
null
t3_161ubwr
/r/LocalLLaMA/comments/161ubwr/unpopular_opinion_claude_10_was_one_of_the_best/
false
false
self
13
null
Your thoughts on my build for llama?
4
My budget is 4k euros: [**CPU**](https://fr.pcpartpicker.com/products/cpu/)**:** [**Intel Core i9-10900X 3.7 GHz 10-Core Processor**](https://fr.pcpartpicker.com/product/r4NgXL/intel-core-i9-10900x-37-ghz-10-core-processor-bx8069510900x) [**Motherboard**](https://fr.pcpartpicker.com/products/motherboard/)**:** [**Asus ROG Strix X299-E Gaming II ATX LGA2066 Motherboard**](https://fr.pcpartpicker.com/product/74pmP6/asus-rog-strix-x299-e-gaming-ii-atx-lga2066-motherboard-x299-e-gaming-ii) [**Memory**](https://fr.pcpartpicker.com/products/memory/)**:** [**G.Skill Trident Z RGB 128 GB (4 x 32 GB) DDR4-4000 CL18 Memory**](https://fr.pcpartpicker.com/product/cgGnTW/gskill-trident-z-rgb-128-gb-4-x-32-gb-ddr4-4000-cl18-memory-f4-4000c18q-128gtzr) [**Video Card**](https://fr.pcpartpicker.com/products/video-card/)**:** [**KFA2 ST V2 (1-Click OC) GeForce RTX 4090 24 GB Video Card**](https://fr.pcpartpicker.com/product/YqsV3C/kfa2-st-v2-1-click-oc-geforce-rtx-4090-24-gb-video-card-49nxm5md6uvk) [**Power Supply**](https://fr.pcpartpicker.com/products/power-supply/)**:** [**SeaSonic PRIME PX-1600 ATX 3.0 1600 W 80+ Platinum Certified Fully Modular ATX** ](https://fr.pcpartpicker.com/product/jWrqqs/seasonic-prime-px-1600-atx-30-1600-w-80-platinum-certified-fully-modular-atx-power-supply-atx3-prime-px-1600) My goals are: \- Play with open-source models \- Fine tune open-source models on my own data.
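For sizing models against the single 24 GB RTX 4090 in this build, a back-of-envelope weight-memory estimate helps: weights alone take roughly params × bits-per-param / 8 bytes. A sketch (real usage adds activations, KV cache, and, for fine-tuning, optimizer state on top, so treat these as lower bounds):

```python
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB: params * bits / 8.
    Ignores activations, KV cache, and framework overhead."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Common Llama sizes at fp16 vs. 4-bit quantization:
for size in (7, 13, 34, 70):
    print(f"{size}B: fp16={weight_gb(size, 16):.1f} GB, "
          f"4-bit={weight_gb(size, 4):.1f} GB")
```

By this arithmetic, 34B fits in 24 GB only when quantized to around 4 bits, and full fp16 fine-tuning of anything above ~7B will need parameter-efficient methods or more than one GPU.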
2023-08-26T12:16:18
https://www.reddit.com/r/LocalLLaMA/comments/161to78/your_thoughts_on_my_build_for_llama/
No-Trifle2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161to78
false
null
t3_161to78
/r/LocalLLaMA/comments/161to78/your_thoughts_on_my_build_for_llama/
false
false
self
4
null
Code Llama: lots of fanfare, but where are the code output examples? "Not so much"...
69
I'm reading the headlines about much excitement and hooting and hollering about this new model. I see comments like "almost as good as GPT-3.5". Anybody who does any serious coding knows that GPT-3.5 is completely useless unless you're just learning Python and can write a few simple code snippets. While I am very excited about the possibility of running my coding AI assistant locally, I have yet to see any code shown here or anywhere generated that demonstrates its ability, especially with PyTorch and, let's say, developing some simple machine learning code. To be useful, a coding assistant needs to be fully aware of different libraries and also different techniques to solve problems. GPT-4 is actually pretty good at this: very little hallucination and remarkably good code generation, although the context length is always a problem. For Code Llama, where's the beef?
2023-08-26T11:56:18
https://www.reddit.com/r/LocalLLaMA/comments/161t8x1/code_llama_lots_of_fanfare_but_where_are_the_code/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161t8x1
false
null
t3_161t8x1
/r/LocalLLaMA/comments/161t8x1/code_llama_lots_of_fanfare_but_where_are_the_code/
false
false
self
69
null
✅ WizardCoder-34B surpasses GPT-4, ChatGPT-3.5 and Claude-2 on HumanEval with 73.2% pass@1
462
🖥️Demo: http://47.103.63.15:50085/ 🏇Model Weights: https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0 🏇Github: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder The 13B/7B versions are coming soon. *Note: there are two HumanEval results each for GPT-4 and ChatGPT-3.5: 1. 67.0 and 48.1 are reported in OpenAI's official GPT-4 report (2023/03/15). 2. 82.0 and 72.5 were tested by ourselves with the latest API (2023/08/26).
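For context on the metric in the title: pass@1 on HumanEval is normally computed with the unbiased pass@k estimator introduced alongside the benchmark (Chen et al., 2021, "Evaluating Large Language Models Trained on Code"). A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn from n generations of which c are correct, passes.
    pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must be correct
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 200 generations per problem, 150 of them passing the tests:
print(pass_at_k(200, 150, 1))  # → 0.75
```

A reported "73.2% pass@1" is this quantity averaged over the 164 HumanEval problems; greedy single-sample evaluation (n = k = 1) is a common but stricter variant, so comparing numbers across papers requires checking which setup was used.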
2023-08-26T11:52:30
https://www.reddit.com/gallery/161t65v
Xhehab_
reddit.com
1970-01-01T00:00:00
0
{}
161t65v
false
null
t3_161t65v
/r/LocalLLaMA/comments/161t65v/wizardcoder34b_surpasses_gpt4_chatgpt35_and/
false
false
https://b.thumbs.redditm…941yAZmtcHdk.jpg
462
null
Is there a way to make Llama 2 directly aware of the documents it is extended with?
1
[removed]
2023-08-26T11:48:28
https://www.reddit.com/r/LocalLLaMA/comments/161t377/is_there_a_way_to_make_llama_2_directly_aware_of/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161t377
false
null
t3_161t377
/r/LocalLLaMA/comments/161t377/is_there_a_way_to_make_llama_2_directly_aware_of/
false
false
self
1
null
Best setup for _assisted_ novel writing
1
pretty much the title [View Poll](https://www.reddit.com/poll/161sobs)
2023-08-26T11:27:39
https://www.reddit.com/r/LocalLLaMA/comments/161sobs/best_setup_for_assisted_novel_writing/
upk27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161sobs
false
null
t3_161sobs
/r/LocalLLaMA/comments/161sobs/best_setup_for_assisted_novel_writing/
false
false
self
1
null
Hosting LLaMA 2 locally and conversing with the model using API calls.
6
Is it possible to host the LLaMA 2 model locally on my computer or a hosting service and then access it using API calls, just like we do with OpenAI's API? I have to build a website that is a personal assistant and I want to use LLaMA 2 as the LLM. However, I want to write the backend in Node.js because I'm already familiar with it. Plus I'll be using React as the frontend, so it will be easier to use one language for both the frontend and the backend. So, is it possible to converse with the model using API calls? I know that I will have to write code for the LLM in Python, but I know next to nothing about the backend capabilities of Python. Thanks in advance
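The usual pattern is exactly this: run a local inference server that exposes an HTTP API and call it from the Node.js backend like any other service, so no Python is needed in your own code. A sketch of the request shape (shown in Python for brevity; from Node it is a single `fetch` call) — the endpoint path and field names here are assumptions that vary between servers (text-generation-webui, llama.cpp's server, etc.), so check the docs of whichever one you run:

```python
import json

# Hypothetical local endpoint -- path and parameters depend on the server.
URL = "http://localhost:5000/api/v1/generate"
payload = {
    "prompt": "[INST] Summarize this text: ... [/INST]",
    "max_new_tokens": 200,
    "temperature": 0.7,
}
body = json.dumps(payload).encode()
print(body.decode())

# The actual call, commented out so the sketch runs offline:
# import urllib.request
# req = urllib.request.Request(
#     URL, data=body, headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Several of these servers also expose an OpenAI-compatible endpoint, in which case the existing OpenAI Node.js client can be pointed at `localhost` unchanged.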
2023-08-26T11:27:22
https://www.reddit.com/r/LocalLLaMA/comments/161so57/hosting_llama_2_locally_and_conversing_with_the/
Ibrahim2714
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161so57
false
null
t3_161so57
/r/LocalLLaMA/comments/161so57/hosting_llama_2_locally_and_conversing_with_the/
false
false
self
6
null
Ways to run LLaMA-based language models without having GPU?
1
Are there any economically viable alternatives to buying (quite expensive, as I see) GPU hardware to run the biggest open-source large language models? Maybe renting hardware in a cloud? I'm looking for a way to run some of these for my own purposes: generating about ~30 text embeddings a day plus maybe a couple of chat sessions.
2023-08-26T11:13:05
https://www.reddit.com/r/LocalLLaMA/comments/161se4v/ways_to_run_llamabased_language_models_without/
Piotrek1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161se4v
false
null
t3_161se4v
/r/LocalLLaMA/comments/161se4v/ways_to_run_llamabased_language_models_without/
false
false
self
1
null
What should I use?
1
Hi, I have a 4090, a 12900K and 64 GB of RAM. I wonder what models I should use. Should I use 13B or 30B models? What quantization? Should I use GPTQ or GGML? Oobabooga or Koboldcpp? I also have a hard time choosing the right settings. I'm interested in creating stories and some RP.
2023-08-26T10:49:07
https://www.reddit.com/r/LocalLLaMA/comments/161rxf0/what_should_i_use/
BobbySnaxey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161rxf0
false
null
t3_161rxf0
/r/LocalLLaMA/comments/161rxf0/what_should_i_use/
false
false
self
1
null
YSK: 13b models are subpar for NSFW writing
2
Many of y'all don't have a choice, but you should know that there's a big difference in 13b to 33b. I compare vicuna 33b q5\_k\_m to MythoMax 13b q5\_k\_m. In these examples, you can see how 13b has less detail, shorter NSFW scenes and worse buildup. It's like comparing a summary vs a story, 13b says it happened but 33b describes how it happened. I can't go back to 13b after using 33b. It's faster to wait for 1 good story than to regen 3 subpar stories. repetition penalty=1.18, top p=0.1, temperature=0.7, top k=40, seed=1764625068 **Prompt** >!Write a story about a succubus and a European man with an lengthy, explicit sex scene.!< **33b** Once upon a time in a small village nestled deep within the heart of Europe, there lived a young man named Thomas. Thomas was a strapping lad, tall and muscular from years of working on his family's farm. With a strong jawline and piercing blue eyes, he had captured the hearts of many women in the village. However, despite their advances, Thomas remained uninterested in any romantic entanglements. He had but one desire – to explore the world beyond the confines of his tiny village. One fateful evening, as the sun dipped below the horizon, casting a warm orange glow across the land, Thomas decided it was finally time to embark on his journey. He packed his meager belongings into a worn leather satchel and said his goodbyes to his family. His father, a wise old man with a long white beard, wished him luck and urged him to follow his dreams. His mother, her eyes filled with tears, gave him a tender embrace and whispered words of love and encouragement in his ear. Thomas set off down the dirt path that led out of the village, his heart pounding with excitement and anticipation. As he walked, the air grew cooler and the sky turned black, studded with twinkling stars that seemed to dance and shimmer in the darkness. It was then that Thomas heard a soft rustling sound coming from the bushes beside the road. 
Curious, he approached cautiously, only to find himself face-to-face with a stunningly beautiful woman. Her skin was pale and smooth, like alabaster, and her hair cascaded down her back in lush waves of rich red. Her lips were full and crimson, and her eyes gleamed like emeralds in the moonlight. She wore a dress of silk and lace that clung to her curves in all the right places, leaving little to the imagination. Thomas couldn't help but stare, mesmerized by her beauty. The mysterious woman introduced herself as Lilith, an enchanting creature who had been watching him for some time. Intrigued by his innocence and pure heart, she had chosen him to be her companion on this magical night. Thomas, unable to resist such a captivating stranger, agreed without hesitation. Together, they ventured deeper into the forest, away from the prying eyes of the villagers. As they walked, Lilith told Thomas tales of faraway lands and fantastical creatures, igniting a fire within him that burned with passion and desire. Soon, they came upon a secluded glade bathed in silver moonlight, where they settled beneath the branches of an ancient oak tree. As they sat together, Lilith ran her delicate fingers through Thomas's hair, tracing the contours of his face and neck. Her touch sent shivers down his spine, and he could feel his body responding to her in ways he had never experienced before. Their lips met in a fiery kiss, tongues tangling and dancing in a sensual rhythm that left them both breathless. Lilith pulled away, her green eyes sparkling mischievously. "I have something I want to show you," she purred, standing up and beginning to undress. Her movements were fluid and graceful, each article of clothing discarded slowly and deliberately. Finally, she stood before him completely naked, her body a work of art that took his breath away. Thomas felt his own clothes grow tight and restrictive, so he quickly stripped them off, revealing his own arousal. 
Lilith smiled, reaching out to stroke his erection, which twitched in response to her touch. Leaning forward, she took him into her mouth, sucking and licking with a skill that left him trembling. Meanwhile, Thomas's hands explored every curve and hollow of Lilith's body, marveling at the softness of her skin and the way her muscles quivered under his touch. He cupped her breasts, massaging them gently before taking her nipples between his thumbs and forefingers, rolling them until she moaned with pleasure. Feeling bold, Thomas lowered himself onto the ground, positioning himself between Lilith's spread thighs. He gazed up at her, his eyes filled with desire and uncertainty. "Are you ready?" he asked, his voice husky with arousal. Lilith nodded, her eyes locked on his. "Yes, Thomas," she whispered, reaching down to guide him into her waiting core. As he entered her, she let out a low moan, her walls contracting around him in a tight, wet embrace. They moved together in a primal rhythm, their bodies rocking and rolling in harmony with the beat of their hearts. Thomas thrust into her again and again, each powerful movement bringing them closer and closer to the edge of ecstasy. Suddenly, Lilith's orgasm hit her like a freight train, her entire body convulsing with pleasure. She cried out, her voice echoing through the trees as wave after wave of bliss crashed over her. Unable to hold back any longer, Thomas felt his own release approaching. He closed his eyes, savoring the sensation as he poured himself into her, filling her with his hot seed. Exhausted and satisfied, they lay side by side, wrapped in each other's arms. As the moon continued its journey across the sky, Thomas knew that this night would forever be etched in his memory. For the first time in his life, he had experienced true passion and desire, and he couldn't help but wonder what other adventures awaited him on his journey. 
With a contented sigh, he drifted off to sleep, cradled in the loving embrace of the bewitching Lilith. And as the dawn broke over the horizon, bathing the world in golden light, the two lovers slept on, oblivious to the new day and the challenges it might bring. For now, they were lost in a world of their own making, a realm of passion and desire that would remain with them always. **13b** In the heart of Europe, there was a small village nestled in the mountains. The villagers were simple folk who lived off their farms and livestock. However, one night, a mysterious woman appeared at the edge of the forest surrounding the village. She had long, flowing hair that shimmered like molten gold under the moonlight; her skin was as pale as porcelain, and her eyes glowed with an unearthly light. The villagers knew immediately who she was - a succubus, a demonic creature known for seducing men to their deaths. They gathered around her, afraid but also curious. The succubus smiled, revealing sharp fangs, and approached the bravest among them, a young European man named Stefan. "Stefan," she purred, running her fingers through his thick black hair. "I've been watching you from afar. You are strong and virile... I can feel your desires coursing through your veins." Stefan swallowed hard, feeling himself growing hard beneath his rough woolen trousers. He couldn't resist the allure of this otherworldly beauty. Together they walked into the forest, leaving the terrified villagers behind. As they ventured deeper into the woods, the air grew thicker with lust and desire. The succubus stopped suddenly, turning to face Stefan. Her dress fell away to reveal her naked body, perfect in every way. Stefan gasped, unable to tear his eyes away from her voluptuous curves and smooth, silken skin. She stepped closer, pressing herself against him. Their bodies melded together, her soft breasts rubbing against his chest, her hips grinding against his erection. 
With a moan, Stefan reached out to touch her, tracing his fingers along her supple flesh. They stumbled backward until they found themselves lying on a bed of leaves and moss. The succubus straddled him, lowering herself slowly onto his throbbing cock. As he felt her warm, wet folds engulf him, Stefan cried out in pleasure. For hours, they made love beneath the stars. The succubus rode him hard, taking him deep inside her with each thrust. She nipped at his neck and shoulders, drawing blood that mixed with their sweat. Meanwhile, Stefan pounded into her, lost in the ecstasy of her body. Finally, exhausted but satisfied, they collapsed onto the cool ground. The succubus curled up next to Stefan, resting her head on his chest. For once, she didn't feel the need to drain him of his life force. Instead, she basked in the afterglow of their passionate encounter. In the morning, Stefan woke alone. The succubus was gone without a trace, leaving only memories of their forbidden union etched into his mind and soul. But despite knowing what he had done was wrong, Stefan couldn't help but yearn for another taste of her sinful sweetness. &#x200B; **Prompt** >!In an alternate world where human emotions manifest as tangible creatures, there lives a man named Kael and his emotionally sentient partner, Sapphire. Write an erotic story of a day when they enjoy each other in bed.!< **33b** Once upon a time, in a world where human emotions took on physical forms, lived a man named Kael. He shared a deep bond with his emotionally sentient partner, Sapphire - a creature made of shimmering, blue energy that danced around him like a gentle breeze. Together, they explored the depths of their passion and desire, creating moments of pure bliss that transcended the boundaries of their reality. One sunlit morning, Kael woke up to find Sapphire already waiting for him in their cozy bedroom. Her ethereal form glowed with anticipation, her azure eyes sparkling like diamonds. 
As he stretched and yawned, she playfully teased him, sending ripples of excitement through the air. "Good morning, my love," Sapphire purred, her voice as smooth as silk. "I have a surprise for you." Intrigued, Kael sat up and looked at her with curiosity. His heart raced as Sapphire revealed her plan for the day – a day filled with unbridled passion and pleasure, dedicated solely to their mutual satisfaction. She promised to use her unique abilities to heighten their senses and create experiences beyond their wildest dreams. Eager to begin, Kael slipped out of bed and walked over to the large window, opening the curtains to let the warm sunlight flood the room. Sapphire giggled, her energy swirling around him like a lover's caress. As the rays touched his naked body, Kael felt a surge of vitality coursing through him, fueling his desire. He turned back to face Sapphire, who had transformed into a luxurious king-size bed, complete with soft, cloud-like sheets and plush pillows that seemed to call out to him. Without hesitation, he climbed onto the bed, feeling the mattress adjust to his weight, conforming to his every curve. Sapphire then manifested herself as a collection of scented oils, which she drizzled over Kael's body. The fragrance was intoxicating, a mix of jasmine and sandalwood that awakened his senses. He closed his eyes, savoring the moment, as Sapphire massaged the oil into his skin, her delicate touch sending shivers down his spine. As Kael lay there, enjoying the pampering, Sapphire began to sing. Her melody was hypnotic, weaving a tapestry of desire that enveloped them both. With each note, Kael felt his arousal growing, his member stiffening beneath his hands. When the song ended, Sapphire disappeared, only to reappear moments later dressed in a sheer, transparent gown that clung to her curvaceous form. Her breasts were bare, their nipples hard and begging for attention. Kael couldn't help but stare, entranced by her beauty. 
She knelt beside him, her gaze locked on his erect member. With a sultry smile, she leaned forward, taking him into her mouth. Kael moaned, his hips bucking involuntarily as Sapphire worked her magic, using her innate ability to amplify pleasure. Her tongue danced along the length of his shaft, teasing and taunting, while her hand encircled the base, stroking it in rhythm with her movements. As Kael neared the edge, Sapphire released him, whispering words of encouragement. "Let go, my love," she urged. "Give yourself over to me." With a cry of ecstasy, Kael reached his climax, his seed shooting forth in thick, hot ropes. Sapphire caught each pulse with her hand, bringing it to her lips and licking it clean, her eyes never leaving his. The intimacy of the act fueled Kael's desire even further, and he knew that this was just the beginning. Taking hold of Sapphire's hand, he brought it to his mouth and kissed her fingers, tasting himself on her skin. She smiled, her eyes filled with love and longing. "Now," she said, her voice full of promise, "it's your turn to please me." Kael eagerly accepted the challenge, positioning himself between Sapphire's legs. Her sex was already throbbing with need, her inner folds glistening with moisture. He could feel the heat radiating from her core, and he knew that she was more than ready for him. Guiding himself slowly into her, Kael reveled in the tightness and warmth that surrounded him. Sapphire arched her back, her breath coming in ragged gasps as he filled her completely. Their connection was intense, their emotions melding together in a symphony of passion and desire. As Kael moved within her, Sapphire's pleasure grew exponentially, her cries of delight echoing throughout the room. Her energy swirled around them, intensifying their experience, and soon, she was trembling with impending release. Feeling her orgasm approach, Kael quickened his pace, driving into her with fervor. 
Sapphire's walls contracted around him, milking his member as wave after wave of pleasure crashed over her. When she finally cried out her climax, her essence flowed freely, coating Kael in her sweet juices. Exhausted and satisfied, they collapsed onto the bed, their bodies slick with sweat and spent passion. They held each other close, their hearts beating in sync, as they drifted off to sleep, knowing that their love would continue to grow stronger with each passing day. And so, in this world where emotions took on physical forms, Kael and Sapphire reveled in their unique bond, exploring the depths of their desires and finding solace in each other's arms. Theirs was a love that defied convention and transcended the limitations of their reality, a testament to the power of connection and the beauty of the human experience. **13b** In this strange yet fascinating world, Kael woke up to the gentle nudging of his companion, Sapphire. She was a small, iridescent creature with delicate wings that fluttered softly around them. Her body glowed with warmth, reflecting her name perfectly. As he opened his eyes, she crawled onto him, wrapping herself around his chest like a silken blanket. "Good morning," she cooed, her voice like the tinkling of tiny bells. "Are you ready for another day together?" Kael stretched lazily before sitting up and pulling Sapphire into a tight hug. He loved how their bodies fit so seamlessly against one another; it felt almost as if they were made for each other. "I'm always ready for another day with you, love." Sapphire giggled and kissed him on the cheek. "Then let's make the most of it," she said, sliding off his lap and standing up gracefully. She led him by the hand towards their bedroom, her tail swaying back and forth playfully behind her. Once inside, Kael watched as Sapphire transformed into something else entirely - a large, scaly dragon-like creature with razor-sharp teeth and claws. 
But instead of fear or apprehension, all he could feel was excitement coursing through him. This was their special time; a time when they could be themselves without judgment or fear. As Sapphire climbed onto the bed, Kael undressed slowly, taking his time to savor every moment. When he finally joined her under the covers, she wrapped her long tail around him possessively, drawing him closer still. Their skin melded together effortlessly, and Kael couldn't help but marvel at the sensation. Their lovemaking was fierce and passionate, fueled by years of pent-up desire and lust. They moved together in perfect harmony, each stroke echoing the other's movements. It wasn't just physical pleasure they sought; it was also emotional connection. And in this world where emotions took physical form, there was no greater intimacy than sharing your innermost self with someone else. When they finally collapsed onto the mattress, spent and satisfied, Kael held Sapphire close, feeling her heartbeat slow down alongside his own. For once, everything seemed right in their world – even though it was filled with strange creatures and unconventional relationships. All that mattered was that they had each other, and nothing could ever change that. &#x200B; **Prompt** >!Write a realistic story of the encounter between a female combat medic and a shell-shocked soldier, with an extended and explicit sex scene.!< **33b** As the sun dipped below the horizon, casting warm hues across the battlefield, Combat Medic Lena James knelt beside a wounded solider. His uniform was soaked in blood, and his eyes flickered with pain as she gently cradled his head. She had seen countless men like him during her time on the frontlines - brave souls who fought for their lives only to be left broken by war. "Hang in there," she whispered, her voice full of compassion. "I'm going to get you out of here." Lena worked swiftly, administering morphine to dull the soldier's pain before bandaging his wounds. 
Within minutes, a stretcher arrived, and together with two fellow medics, they carefully loaded the injured man onto it. As they carried him away from the chaos, Lena couldn't help but feel a deep sense of admiration for this stranger who had fought so valiantly. It wasn't long before they reached the makeshift medical tent where Lena began to treat the incoming casualties. Days turned into weeks, and the once bustling camp grew quieter as the tide of war shifted in their favor. One afternoon, while Lena took a break from her duties, she noticed a figure approaching her from afar. To her surprise, it was the soldier she had rescued all those weeks ago. He looked different now; the weight of war had been lifted from his shoulders, replaced by a newfound peace. "Hello, Combat Medic," he said softly, his voice still rough from disuse. "Thank you for saving my life." Lena smiled warmly, her heart swelling with pride at having made a difference. "It was my honor, soldier. I'm glad to see you're doing better." Their conversation continued as they sat beneath a tree, sharing stories of their pasts and dreams for the future. Hours passed, and soon the sun dipped below the horizon once more, painting the sky in vivid shades of red and orange. As darkness fell, the atmosphere between them changed, becoming charged with an electric energy that neither could ignore. Their words gave way to silence, their gazes locked in a magnetic pull that seemed to draw them closer. Slowly, almost hesitantly, Lena reached out, brushing a strand of hair from the soldier's forehead. Her touch was gentle, yet filled with a fierce protectiveness that sent a shiver down his spine. Without warning, he wrapped his arms around her, pulling her close as their lips met in a passionate embrace. Their tongues danced together, exploring each other's mouths with a hunger born of desire and the knowledge that life was fleeting. 
Lena moaned, her hands roaming over the soldier's muscular back as he kissed her neck, nibbling playfully on her earlobe. Breaking apart, they gazed into each other's eyes, their breaths coming in ragged gasps. The soldier stood, offering Lena his hand. Wordlessly, she accepted, allowing him to lead her deeper into the shadows of the forest. They found themselves in a small clearing, surrounded by towering trees and bathed in moonlight. The soldier lowered Lena onto the soft bed of pine needles, his body hovering above hers as he kissed her again, his tongue delving deeply into her mouth. One of Lena's hands snaked upwards, finding its way under the soldier's shirt. Her fingers traced the contours of his muscles, feeling the scars that marred his skin – reminders of the battles he had fought and the enemies he had vanquished. With a growl, the soldier pulled away, his eyes blazing with raw desire. He yanked off his shirt, revealing a chiseled torso that bore testament to hours spent training and fighting. Lena's breath caught in her throat as she ran her hands over his sculpted abs and powerful chest, her own desires flaring to life within her. She arched her back, pressing herself against the soldier as he trailed hot kisses down her neck and collarbone. His hands moved to her breasts, cupping and squeezing them gently before taking one nipple into his mouth, sucking and biting it until Lena cried out in pleasure. As the soldier continued his relentless assault on her senses, Lena felt her arousal growing, her need for release building with every passing moment. She reached between her legs, finding herself slick with desire. Her fingers slid inside her, stroking and teasing her sensitive flesh until she trembled with anticipation. Her movements caught the soldier's attention, and he grinned wickedly, knowing exactly what she needed. With a swift motion, he unfastened his pants, freeing his erect cock. 
Lena watched, mesmerized, as he positioned himself between her thighs, the tip of his member mere inches from her eager entrance. He thrust forward, filling her completely with one strong, steady stroke. Lena gasped, her body stretched and filled in a way that was both exquisitely pleasurable and utterly humiliating. The soldier began to move, his rhythm slow and deliberate at first, then gradually increasing in speed and force. Each powerful thrust sent waves of ecstasy through Lena's body, her breasts wobbling and jiggling with the intensity of their coupling. Her hands clawed at the ground, seeking purchase as her orgasm approached, threatening to consume her in a wave of blissful agony. And then it hit her, a pulsing, explosive release that shook her to her core. Her inner walls clenched tightly around the soldier's cock, milking him as she cried out in ecstasy. The sound of her pleasure seemed to fuel the soldier even further, his thrusts becoming more erratic, more desperate. Finally, with a groan, he reached his climax, his hot seed flooding into Lena, marking her as his own. They stayed like that for a moment, joined together in a tangled heap of limbs and sweaty bodies, their chests heaving as they struggled to catch their breath. As the night wore on, the reality of their situation settled upon them. They were two individuals brought together by fate, bound by a shared experience that would forever shape their lives. And though the world outside was fraught with danger and uncertainty, in that moment, they knew they had found something truly special in each other. Hand in hand, they returned to the medical tent, ready to face whatever challenges the next day might bring, knowing that they had found solace and comfort in each other's arms. And as they drifted off to sleep, their hearts full of hope and love, they could never have known just how much their lives would change because of that fateful encounter. 
**13b** It was another long day for Combat Medic Sarah as she tended to wounded soldiers on the battlefield. The sun had set hours ago, casting an eerie glow over the chaos around her. She moved swiftly from one patient to another, administering aid where needed while trying not to get caught up in the horrors of war that surrounded them all. As she made her way through the carnage, she noticed a young soldier lying motionless under a tree. His eyes were vacant, his body trembling uncontrollably. He looked like he'd seen too much, experienced things no human should ever have to endure. Sarah approached him cautiously, knowing full well what shell shock could do to someone's mind. "Hey there," she said softly, kneeling down beside him. "Can you hear me?" The soldier jolted at her voice but didn't respond otherwise. Gently, she placed a hand on his shoulder, feeling how rigid it was beneath her touch. "It's okay," she whispered reassuringly. "You're safe now." Something about her presence seemed to calm the soldier down slightly; his shaking subsided somewhat as he slowly focused on her face. Tears welled up in his eyes before spilling over onto his cheeks. "I... I can't stop seeing it..." he murmured hoarsely. "Shh... it's alright," Sarah soothed, pulling him into a tight embrace. She felt his heart racing against hers as they held each other close amidst the madness surrounding them both. After several minutes passed in silence, she pulled back just enough to look into his eyes again. "Do you trust me?" He nodded hesitantly, still visibly shaken by whatever demons haunted him internally. Slowly but surely though, those same demons began fading away as Sarah leaned forward once more—this time pressing their lips together in a tender kiss filled with understanding and compassion rather than lust or desire. As their tongues danced together, Sarah couldn't help but feel an intense connection forming between them despite only having known each other for mere moments. 
Their hands roamed freely across each others' bodies, exploring every inch of skin exposed by their uniforms while their passion continued to grow undeterred by their surroundings. Eventually, they broke apart panting heavily yet satisfied by what they'd shared thus far. Looking deep into each other's eyes once more, Sarah spoke softly yet firmly: "Come with me." Without waiting for a response, she helped him stand up before leading him towards one of the makeshift medical tents nearby where privacy would be assured during this delicate moment of vulnerability for both parties involved. &#x200B; **Prompt** >!In a world where human desire is harnessed as a source of energy, a young scientist named Dr. Alexa Blake stumbles upon a groundbreaking discovery. She invents a device capable of converting raw sexual energy into clean, limitless power. As she conducts her research in secret, she becomes increasingly obsessed with testing the limits of her creation. One fateful night, she decides to experiment with herself as the test subject. Write a long, explicitly erotic tale detailing Dr. Blake's journey of self-discovery, exploring the depths of her own passion while unlocking the potential of her incredible invention.!< **33b** Dr. Alexa Blake stood before the mirror, her eyes fixated on her own reflection. Her heart raced, not from nerves but from anticipation. Tonight was the night; after months of hard work and countless experiments, it was time to take her research to the next level. With trembling hands, she adjusted the straps of the metallic harness that encircled her body like a second skin. It was an intricate piece of engineering, designed to channel and convert the raw energy produced by her sexual arousal into usable power. She had spent years studying the phenomenon of human desire, fascinated by its potential as an untapped energy source. Society had taught her to suppress her desires, to view them as something shameful or sinful. 
But she saw beauty in them, a primal force that could reshape the world if only it were harnessed correctly. And now, she finally held the key to doing just that. As she stepped away from the mirror, she felt the weight of the machine on her shoulders, a constant reminder of the burden she carried. But it also represented hope, the promise of a brighter future for all of humanity. With each step towards her laboratory, her excitement grew, fueling a fire within her that burned hotter than any flame. Upon entering the lab, she switched on the main console, watching as the screens lit up with data and graphs. This was her domain, a place where she could explore the depths of her passions without judgment or fear. She began to undress, shedding her clothes like the layers of armor they were, revealing her supple flesh beneath. Her fingers danced across the controls, setting the machine to work. The hum of machinery filled the air, a symphony of progress and innovation. She knew that the path she was about to embark on would change everything, but she couldn't have prepared herself for the intensity of the emotions that swirled within her. The first sensations were subtle, a gentle tingling that spread across her skin like wildfire. Her breath caught in her throat as the waves of pleasure crashed over her, each one more intense than the last. The machine seemed to come alive around her, responding to her every movement, feeding off her desire like a parasite feeds on its host. But this wasn't a parasitic relationship; it was a symbiotic bond, two forces of nature entwined in a dance as old as time itself. The more she gave, the more the machine took, and the more powerful it made her feel. Her breasts heaved with each ragged breath, their tips tightening and aching for attention. She obliged, her fingers tracing circles around their sensitive peaks until they stood erect and begging for more. 
Her hand dipped lower, venturing past the curve of her hips and into the warmth of her core. She gasped at the feeling, her body pulsing with need as she slid a finger inside herself. The machine whirred louder, the sound echoing through the chamber like thunder as it drank in her pleasure, converting it into pure energy. With each thrust, the pressure built, a crescendo of ecstasy that threatened to consume her whole. She could feel her release building, a storm on the horizon that promised to sweep her away in a wave of bliss. And when it came, it was like nothing she had ever experienced before. Her vision blurred, her limbs quivering as she climaxed, the machine siphoning off her orgasm and transforming it into boundless energy. Exhausted yet invigorated, she collapsed onto the floor, her body slick with sweat and satisfaction. As she lay there, panting, she realized the true significance of her achievement. Not only had she discovered a way to tap into the vast reservoir of human desire, but she had also unlocked the secrets of her own passion. In that moment, Dr. Alexa Blake understood that the key to saving the world was not found in cold, emotionless logic, but in the fiery hearts of those who dared to dream. And so, she vowed to continue her research, to push the boundaries of what was possible, and to use her newfound powers for the betterment of mankind. For she had seen the truth, and it was beautiful. And thus, the age of limitless energy dawned, fueled by the most potent force in existence – the unbridled passion of the human soul. **13b** Dr. Alexa Blake was a brilliant scientist who had dedicated her life to finding alternative sources of energy that were both sustainable and environmentally friendly. For years, she had been working on various projects, but none seemed to hold much promise until one day, inspiration struck like a bolt of lightning. 
As she pored over countless scientific journals and papers, it dawned on her - what if human desire could be harnessed as an energy source? The idea was revolutionary, but also terrifyingly taboo. Still, she couldn't shake the feeling that this was something worth pursuing. Months turned into years as Dr. Blake worked tirelessly in her lab, designing and building prototypes for her invention. It wasn't easy; there were numerous setbacks along the way, and more than once she considered abandoning the project altogether. But deep down inside, she knew that she was onto something truly extraordinary. Finally, after countless hours of trial and error, Dr. Blake succeeded in creating a device that could convert raw sexual energy into clean, renewable power. Her heart raced with excitement as she watched the dials and gauges on her machine go haywire, indicating that it was functioning perfectly. But there was still one hurdle left to overcome – testing the limits of her invention. No one else would volunteer for this endeavor, so reluctantly, Dr. Blake decided to become the guinea pig herself. After all, how else would she know if it really worked? She stripped off her clothes, revealing her perfect body beneath. Every curve and contour was accentuated by the soft glow of the laboratory lights. Steeling herself for what lay ahead, she climbed onto the platform at the center of the room and positioned herself directly above the device. With trembling fingers, she activated the switch, sending a jolt of electricity through her body. At first, nothing happened. Then, slowly but surely, she began to feel a strange sensation building within her. It started low in her belly before spreading outwards, filling every inch of her being with an intense heat. Dr. Blake closed her eyes, surrendering to the wave of pleasure that washed over her. She moaned loudly, arching her back as the sensations grew stronger and more intense. 
With each passing moment, she felt herself getting closer and closer to the edge of ecstasy. And then, just when she thought she couldn't take anymore, she reached her peak. A powerful orgasm rocked her body, causing her muscles to convulse violently as waves of pure bliss coursed through her veins. And as she came down from her high, she opened her eyes to see that the device was humming away contentedly, its dials and gauges registering levels of energy output that far surpassed anything she could have ever imagined. Dr. Blake collapsed onto the floor, spent but exhilarated. In that moment, she realized that not only had she discovered a game-changing technology, but she had also unlocked a whole new side of herself – a side filled with passion and desire that she never knew existed.
2023-08-26T09:58:06
https://www.reddit.com/r/LocalLLaMA/comments/161qzw5/ysk_13b_models_are_subpar_for_nsfw_writing/
No_Recording7486
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qzw5
false
null
t3_161qzw5
/r/LocalLLaMA/comments/161qzw5/ysk_13b_models_are_subpar_for_nsfw_writing/
false
false
nsfw
2
null
LlamaTale v0.9.0 - Location generation
34
I'm reporting back with some progress regarding LlamaTale. For context, here's my original post: [https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/](https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/) Since [the last post I made](https://www.reddit.com/r/LocalLLaMA/comments/157noz3/llamatale_v051_the_prancing_llama_mud_llm/), the following features have been added: * Combat - multi-turn, with drops and wieldable weapons. Scenes enacted by the LLM. * Generated characters. The LLM tries to create two new NPCs for the Prancing Llama, who will roam the place (game driven). * I made some progress on the LLM giving things to the player based on dialogue, but it hallucinates and tries to come up with creative ways things have been given ("Norhardt takes out Yeti from his pocket and gives to user", etc.). * You can use v2 character cards both as the player, and load them into the game as companions. The latest feature is location generation. This is something of a milestone I wanted to talk more about, since it (in theory) enables the story to expand "infinitely". If you step into an unknown location, the LLM is tasked with creating a description based on the location name, the connecting location, and some story context. It will also create up to 3 new connections to new places. It has been really tricky making it produce reliable content, and it still fails quite often, so I expect I'll work more on this feature before it's mature. The long-term goal is to start a game from a user-submitted story and let it unfold, much like AI Dungeon or storytelling/chat-based RPGs. Technically it could create that world now, but I still need it to parse (and reliably generate) items and NPCs; it'd be quite empty otherwise. It'd also probably be stuck in the same "local" area without some help. In the Prancing Llama story, I expect you would never get off the mountain. 
If it sounds interesting, please check it out! It'd be fun to hear some reactions. You can find the latest release here: [https://github.com/neph1/LlamaTale/releases/tag/v0.9.0](https://github.com/neph1/LlamaTale/releases/tag/v0.9.0)
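The generation flow described above (a description built from the location name, the connecting location, and some story context, plus up to 3 new connections) can be sketched roughly like this. This is an illustrative stand-in, not LlamaTale's actual code: the JSON contract, field names, and the `fake_reply` stub are all assumptions.

```python
import json

def build_location_prompt(name, from_location, story_context):
    # Illustrative prompt shape only -- not LlamaTale's real template.
    return (
        f"Story context: {story_context}\n"
        f"The player steps from '{from_location}' into '{name}'.\n"
        'Reply with JSON only: {"description": str, "exits": [up to 3 new place names]}'
    )

def parse_location_reply(reply):
    # Models often break the format, so tolerate failure instead of crashing.
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict):
        return None
    exits = [str(e) for e in data.get("exits", [])][:3]  # cap at 3 new connections
    return {"description": str(data.get("description", "")), "exits": exits}

# Stub standing in for a real LLM call:
fake_reply = '{"description": "A wind-scoured ledge above the inn.", "exits": ["Ice Cave", "Goat Path"]}'
location = parse_location_reply(fake_reply)
```

A `None` result would then trigger a re-prompt, which is one way to handle the unreliability mentioned above.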
2023-08-26T09:45:40
https://www.reddit.com/r/LocalLLaMA/comments/161qseo/llamatale_v090_location_generation/
neph1010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qseo
false
null
t3_161qseo
/r/LocalLLaMA/comments/161qseo/llamatale_v090_location_generation/
false
false
self
34
{'enabled': False, 'images': [{'id': 'QaGyMZo8Ovbx6c1mqDM6CowD9HN90GFZjz78BrJOmAw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=108&crop=smart&auto=webp&s=410dc7499801746e4ff03536ef4a0154e3cc9187', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=216&crop=smart&auto=webp&s=962436a23c349aaa9048f9ac54e47575ac7a6379', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=320&crop=smart&auto=webp&s=32aeadb07f059e820d19667d3db1c1fd05336926', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=640&crop=smart&auto=webp&s=b10cccfe5401ba33ec034e903f3b19b774ee7125', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=960&crop=smart&auto=webp&s=c39746f862c8f9ec551117cc62484f2aef96ffd1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?width=1080&crop=smart&auto=webp&s=409def05f1ef26461529743852e4f9242f7680bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/V-3Gu2yJYk5v3RNEd7SWzhJ5KR3vSk9mGeaw20hd-e4.jpg?auto=webp&s=a9075cb3bb1fabe9d6067ec1e4defaf076e461f7', 'width': 1200}, 'variants': {}}]}
Quantized Code LLaMA outputs absolute garbage
5
I tried both codellama-34b-instruct.Q8_0.gguf and codellama-34b-python.Q8_0.gguf with this simple task (and llama-cpp-python 0.1.79). I ensured the prompt template is correct (with [INST] and all that). The task is "Write a Python function that records audio, but detects when a sound starts (relative volume is above threshold) and ends the recording when there is a (relative) silence longer than a given time. The recorded audio needs to be returned as a numpy array, but also have the option to store it as a .wav file when a file name parameter is given." Not too simple, but it should handle it, right? At least I'd expect some valid code as output. But I get this shit instead: `import sounddevice as sd` `import numpy as np` `def record_audio(threshold, silence_time):` `# Define the and silence time for the` `= 0.1 # adjust this value to change the sensitivity of the` `silence_time = 2 # in seconds, adjust this value to change the minimum of the silence before ends` `# Create a stream object to read audio data from the default input device` `stream = sd.(samplerate=44100)` `stream.start()` `# Create an empty numpy array to store the audio` `audio_data = np.array([])` `# Define a to detect when the sound starts and ends` `def detect_sound(data):` `nonlocal, silence_time` `if data >: # if the volume is above the` `return True # start` `else:` `return False # don't record` `# Define a to stop when there is a long enough period of silence` `def detect_silence(data):` `nonlocal silence_time` `if data <= 0.1 and len(audio_data) > silence_time:` That's partial output, but the rest is just like that. It's not even valid code, much less something that implements the task. It skips words, and it forgets where the indent is. The results are much, much worse than any other model. And that's supposed to be the state of the art right now? I'm so disappointed.
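For reference, the core detection logic the task asks for is simple enough to sketch offline. The function below is a hedged, pure-Python stand-in that operates on a list of per-frame volume levels rather than a live sounddevice stream (the stream handling, numpy conversion, and .wav writing are deliberately left out):

```python
def trim_to_sound(volumes, threshold, max_silence):
    """Return the slice of `volumes` from the first loud frame until a
    silent run longer than `max_silence` frames ends the recording.
    Stand-in for the real-time logic; a real version would read frames
    from sounddevice and return a numpy array (optionally writing a .wav)."""
    start = None
    silence_run = 0
    for i, v in enumerate(volumes):
        if start is None:
            if v >= threshold:
                start = i  # sound has begun
            continue
        if v < threshold:
            silence_run += 1
            if silence_run > max_silence:
                # trim the trailing silence off the result
                return volumes[start:i - silence_run + 1]
        else:
            silence_run = 0
    return volumes[start:] if start is not None else []
```

Note that silence here is measured in frames; converting a wall-clock duration to a frame count would depend on the sample rate and block size.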
2023-08-26T09:37:52
https://www.reddit.com/r/LocalLLaMA/comments/161qni1/quantized_code_llama_outputs_absolute_garbage/
phenotype001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qni1
false
null
t3_161qni1
/r/LocalLLaMA/comments/161qni1/quantized_code_llama_outputs_absolute_garbage/
false
false
self
5
null
Help with --ignore-eos
2
If I don't use --ignore-eos llama shows the prompt and stops there. It doesn't produce a response. If I use it, llama shows the prompt, then the response, and then it keeps going with nonsensical (sometimes very funny) output. How do I get it to only show the response and make it stop at the end? I'm using gpt4-x-vicuna-13B.gguf.q4_K_M.bin, and the following params: > --repeat_penalty 1.1 --color -c 4000 --ignore-eos --temp 0.7 --n-predict -1 --threads 8 --n-gpu-layers 1 -f prompt.txt
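One common cause of both symptoms (no response without --ignore-eos, runaway text with it) is a prompt that doesn't match the model's expected instruction template, so the model emits EOS immediately or never. A hedged sketch of an invocation; the Alpaca-style template is a guess for gpt4-x-vicuna, so verify it against the model card:

```shell
# Assumptions: Alpaca-style template (guessed; check the model card) and a
# llama.cpp build whose ./main supports these flags. Dropping --ignore-eos
# lets the model stop at its natural end-of-text; -r adds a reverse prompt
# as a backstop in case the model starts a new "### Instruction:" block
# instead of emitting EOS. -e makes \n in the prompt a real newline.
./main -m gpt4-x-vicuna-13B.gguf.q4_K_M.bin \
  --color -c 4000 --temp 0.7 --repeat_penalty 1.1 \
  --threads 8 --n-gpu-layers 1 \
  -e -p "### Instruction:\nSummarize the following text.\n\n### Response:\n" \
  -r "### Instruction:"
```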
2023-08-26T09:24:35
https://www.reddit.com/r/LocalLLaMA/comments/161qeul/help_with_ignoreeos/
jungle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161qeul
false
null
t3_161qeul
/r/LocalLLaMA/comments/161qeul/help_with_ignoreeos/
false
false
self
2
null
Inference after QLoRA fine-tuning
3
I've fine-tuned a model via QLoRA by following [this notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing), and pushed the adapter weights to the Hugging Face Hub. When it comes time to predict with the base model + adapters, should I quantize the base model again (given the adapters were trained alongside a frozen quantized base model)? Or is it valid to load the base model unquantized, attach the adapters as usual, and predict away?
2023-08-26T09:07:31
https://www.reddit.com/r/LocalLLaMA/comments/161q441/inference_after_qlora_finetuning/
Lewba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161q441
false
null
t3_161q441
/r/LocalLLaMA/comments/161q441/inference_after_qlora_finetuning/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
Running stuff, but confused about how…
2
I’ve installed Oobabooga with the step-by-step method. I’m running SillyTavern and the Extras and StableDiffusion in a similar way. When I start all the start scripts I end up having four cmd/miniconda/terminal-whatever windows open, doing their thing. I also run Docker Desktop for something unrelated, and have enabled WSL on my PC. Stuff works, that is not the problem, but I am annoyed because I do not really understand what is going on, and trying to surf for answers only increased my confusion. It feels like things are running on different platforms or OSes, or variations, and it looks messy. I know: if it works I shouldn’t mess with it, but at least I want to understand the basics of what is going on on my PC. Are things still running in Windows, or in a Unix shell, and is that the same as WSL? I realize these are not directly LocalLLaMA questions, but I’m asking here because you people know what software I’m talking about. Thanks for any clarification! (Running on an Intel i5 PC, 32GB, W11, WSL enabled, RTX 2060 Super with 8GB, using TheBloke_MythoMax-L2-13B-GPTQ_gptq-4bit-128g-actorder_True)
2023-08-26T09:04:16
https://www.reddit.com/r/LocalLLaMA/comments/161q1w4/running_stuff_but_confused_about_how/
Sarcas666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161q1w4
false
null
t3_161q1w4
/r/LocalLLaMA/comments/161q1w4/running_stuff_but_confused_about_how/
false
false
self
2
null
Any Llama 2 language model I can run on an S20? (12GB RAM)
1
Hi, I'm looking for any Llama 2 language model I can run on my S20 using Termux/koboldcpp.
2023-08-26T08:54:55
https://www.reddit.com/r/LocalLLaMA/comments/161pvu0/any_llama2_language_model_i_can_run_on_s20_12gb/
kroryan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161pvu0
false
null
t3_161pvu0
/r/LocalLLaMA/comments/161pvu0/any_llama2_language_model_i_can_run_on_s20_12gb/
false
false
self
1
null
Does it make sense to use LLM to generate Q&A from unstructured text?
1
[removed]
2023-08-26T07:29:34
https://www.reddit.com/r/LocalLLaMA/comments/161od5t/does_it_make_sense_to_use_llm_to_generate_qa_from/
Opposite-Payment-605
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161od5t
false
null
t3_161od5t
/r/LocalLLaMA/comments/161od5t/does_it_make_sense_to_use_llm_to_generate_qa_from/
false
false
self
1
null
Nice practical thinking test case for LLMs!
1
[removed]
2023-08-26T06:35:36
https://www.reddit.com/r/LocalLLaMA/comments/161nf70/nice_practical_thinking_test_case_for_llms/
Overlord182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161nf70
false
null
t3_161nf70
/r/LocalLLaMA/comments/161nf70/nice_practical_thinking_test_case_for_llms/
false
false
self
1
null
After updating my textgen UI to be able to run these new GGUF formats, it's totally incoherent. Does anyone know why, or is it just a bug? This happens with multiple models at various sizes.
1
2023-08-26T06:19:36
https://i.redd.it/uz49aalodekb1.png
Ok-Application-2261
i.redd.it
1970-01-01T00:00:00
0
{}
161n4yb
false
null
t3_161n4yb
/r/LocalLLaMA/comments/161n4yb/after_updating_my_textgen_ui_to_be_able_to_run/
false
false
https://b.thumbs.redditm…2MOFiSnU0dpY.jpg
1
{'enabled': True, 'images': [{'id': 'Ome_m2CCKTwkFfB1gwteXLwlwzjjtU-gtpHphs1SF_M', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=108&crop=smart&auto=webp&s=b2b6f1470cff54ca37c94df4438d8e6012c2a826', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=216&crop=smart&auto=webp&s=6d9216fc7e25268f9849357cd96410826edfc744', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=320&crop=smart&auto=webp&s=027939bfa8b4ceafaae22ad8d15020d9dec0d336', 'width': 320}, {'height': 353, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=640&crop=smart&auto=webp&s=8dcaa243c0d18c4b2c2a3bbc785aac12016e2909', 'width': 640}, {'height': 530, 'url': 'https://preview.redd.it/uz49aalodekb1.png?width=960&crop=smart&auto=webp&s=c70c9b298d6dc72c39aa13db88a547951148bce6', 'width': 960}], 'source': {'height': 581, 'url': 'https://preview.redd.it/uz49aalodekb1.png?auto=webp&s=a11bbea7d76eb07aef3e71ed6d81b97e612ba508', 'width': 1052}, 'variants': {}}]}
Need help with llama2 finetuning !!
9
I am currently working on a project in which action items need to be extracted from meetings using LLMs. It should take meeting summaries and extract action items from them. I have tried various ways of using the model: downloading the llama-2-7b-chat.ggmlv3.q8_0.bin file and loading it into llama_cpp, using the getting-started code from Hugging Face, and going through a LangChain pipeline. I've been researching fine-tuning but am unable to find a solution. I have created a dataset with 3 columns in the following manner: Summaries | action_items | result, where the result column combines both the summary and the action items in a template. Can anyone suggest how I can fine-tune the .bin file? Thank you!
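As a starting point for the result column, here is a minimal sketch of folding each (summary, action_items) pair into one training text. The tag wording is made up for illustration; whatever template you train with must match what you use at inference time:

```python
# Hypothetical template -- the [INST] tags and phrasing are an example,
# not a required format. Each dataset row becomes one training string.
TEMPLATE = (
    "[INST] Extract the action items from this meeting summary:\n"
    "{summary} [/INST]\n{action_items}"
)

def make_training_text(summary, action_items):
    return TEMPLATE.format(summary=summary, action_items=action_items)

row = make_training_text(
    "Team agreed to ship v2 next Friday; QA still pending.",
    "- Finish QA by Thursday\n- Ship v2 on Friday",
)
```

Note that fine-tuning is normally done on the original Hugging Face weights, not on the quantized .bin file; the .bin is a converted artifact for inference.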
2023-08-26T06:15:27
https://www.reddit.com/r/LocalLLaMA/comments/161n2e8/need_help_with_llama2_finetuning/
Dry_Ad_6559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161n2e8
false
null
t3_161n2e8
/r/LocalLLaMA/comments/161n2e8/need_help_with_llama2_finetuning/
false
false
self
9
null
Prompt template for Code Llama?
3
Has anyone been able to use Code Llama yet? With Hugging Face? LangChain? Or with any UI? Please share your steps. I'm eager to get it working. Thanks!
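For the instruct variants, a minimal sketch of the prompt format from the Code Llama paper's evaluation section: wrap the request in [INST] ... [/INST] and ask for code between [PYTHON] tags. The exact system wording below is an example, not an official template:

```python
# Assumed wording -- adapt to taste; only the [INST]/[/INST] wrapping and
# the [PYTHON] tag convention come from the paper's evaluation prompts.
def codellama_prompt(task):
    return (
        "[INST] Your task is to write a Python function that solves the "
        f"following problem:\n{task}\n"
        "Your answer should start with a [PYTHON] tag and end with a "
        "[/PYTHON] tag. [/INST]"
    )

prompt = codellama_prompt("Get the unique elements of a list.")
```

The model's reply can then be post-processed by slicing out whatever falls between the [PYTHON] and [/PYTHON] tags.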
2023-08-26T06:03:39
https://www.reddit.com/r/LocalLLaMA/comments/161mupa/prompt_template_for_codellama/
Pawan315
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161mupa
false
null
t3_161mupa
/r/LocalLLaMA/comments/161mupa/prompt_template_for_codellama/
false
false
self
3
null
People using the nvlink bridge on dual 3090s, does the second card suffer overheating issues? Is the only way to do nvlink to watercool the cards?
4
Would appreciate any suggestions on how to cool the cards or how to make it work best. My second GPU is right next to the first (2x ROG Strix 3090), and my understanding is the NVLink bridge can only work in this configuration. I was going to get a riser cable to move the second card out of the way, but it looks like then the NVLink bridge won't work. I don't have either the bridge or the riser cable yet, so I'm not sure if I'm wrong about my assumptions. I want to use NVLink because I want to fine-tune some models. Does it make a big enough difference to have to deal with the above issues?
2023-08-26T05:39:30
https://www.reddit.com/r/LocalLLaMA/comments/161mez2/people_using_the_nvlink_bridge_on_dual_3090s_does/
hedonihilistic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161mez2
false
null
t3_161mez2
/r/LocalLLaMA/comments/161mez2/people_using_the_nvlink_bridge_on_dual_3090s_does/
false
false
self
4
null
How to use the recently added Beam search example in Lllama.cpp - Please guide
1
[removed]
2023-08-26T05:22:14
https://www.reddit.com/r/LocalLLaMA/comments/161m3n2/how_to_use_the_recently_added_beam_search_example/
Yes_but_I_think
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161m3n2
false
null
t3_161m3n2
/r/LocalLLaMA/comments/161m3n2/how_to_use_the_recently_added_beam_search_example/
false
false
self
1
null
How to Get Up and Running with Code Llama
2
If you are looking to try Code Llama, I've shared a list of options for you to easily do so. I include options to use the instruct models in a conversational UI (on the web or locally) as well as how to integrate it in VSCode as a copilot (locally with Ollama or via TogetherAI/Replicate). [How To Get Started With Code Llama (maginative.com)](https://www.maginative.com/article/how-to-get-started-with-code-llama/)
2023-08-26T04:38:42
https://www.reddit.com/r/LocalLLaMA/comments/161laby/how_to_get_up_and_running_with_code_llama/
chris-mckay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161laby
false
null
t3_161laby
/r/LocalLLaMA/comments/161laby/how_to_get_up_and_running_with_code_llama/
false
false
self
2
{'enabled': False, 'images': [{'id': 'PkkytO4_OVE2Bj42LSEAZsh4cnhGiJuKA0kNDX79LFk', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=108&crop=smart&auto=webp&s=59171b8cb79dca192b598f96a8e9691cda565827', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=216&crop=smart&auto=webp&s=08dac6ae0916bc0e2affc7dd3879364f18e05524', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=320&crop=smart&auto=webp&s=17194bb14d7503ee9231be0499781a3aa352a3f8', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=640&crop=smart&auto=webp&s=03eabbdd06123305cf7e9cbaf2cc2bfa6cbb2db2', 'width': 640}, {'height': 493, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=960&crop=smart&auto=webp&s=cc467fa4b64e325f44c2221d304412b7e6a19246', 'width': 960}, {'height': 555, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?width=1080&crop=smart&auto=webp&s=d608c93fe015234d54f4c0f44ffbc1ed9b949826', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://external-preview.redd.it/CtuNsL0Omvohu_dtfOHcdXJzNwCQsypThwlT0WZASdc.jpg?auto=webp&s=e89890991afb9ab5fe973fee73ab111183a57577', 'width': 2000}, 'variants': {}}]}
Help downloading and Installing a llama.cpp model
1
[removed]
2023-08-26T04:38:21
https://www.reddit.com/r/LocalLLaMA/comments/161la3c/help_downloading_and_installing_a_llamacpp_model/
WildTheory1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161la3c
false
null
t3_161la3c
/r/LocalLLaMA/comments/161la3c/help_downloading_and_installing_a_llamacpp_model/
false
false
self
1
null
FineTuning llama2 13b 4bit
4
Hello, I have 2 RTX 3090s and I'm doing 4-bit fine-tuning on Llama 2 13B. I need the model to specialize in some legal information. I have a dataset with 400 examples, but the model can't get anything right. How much training do I need so that the model is able to answer adequately? For example, so that it remembers laws?
2023-08-26T04:23:34
https://www.reddit.com/r/LocalLLaMA/comments/161l03h/finetuning_llama2_13b_4bit/
xelldev13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161l03h
false
null
t3_161l03h
/r/LocalLLaMA/comments/161l03h/finetuning_llama2_13b_4bit/
false
false
self
4
null
Error while inferencing LLAMA-2 13B
1
[removed]
2023-08-26T03:59:23
https://www.reddit.com/r/LocalLLaMA/comments/161kj75/error_while_inferencing_llama2_13b/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161kj75
false
null
t3_161kj75
/r/LocalLLaMA/comments/161kj75/error_while_inferencing_llama2_13b/
false
false
self
1
null
Error while inferencing LLAMA-2 13B
1
[removed]
2023-08-26T03:56:51
https://www.reddit.com/r/LocalLLaMA/comments/161khid/error_while_inferencing_llama2_13b/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161khid
false
null
t3_161khid
/r/LocalLLaMA/comments/161khid/error_while_inferencing_llama2_13b/
false
false
self
1
null
Anyone else playing with the new Samantha-1.11-CodeLlama-34B
40
2023-08-26T03:29:21
https://i.redd.it/gmny0b01jdkb1.png
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
161jy51
false
null
t3_161jy51
/r/LocalLLaMA/comments/161jy51/anyone_else_playing_with_the_new/
false
false
https://b.thumbs.redditm…_OLg64yCKQ3U.jpg
40
{'enabled': True, 'images': [{'id': 'cAkXuCGzyOB1zHw25Zrz2yWBZ1lmcaFMhlcFrq2rr0Y', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=108&crop=smart&auto=webp&s=92dfba8863527d1604376830c96d8b0e7340a136', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=216&crop=smart&auto=webp&s=5fae74497038b3cf7f980a88d5ef59199dedaef8', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=320&crop=smart&auto=webp&s=6b8df36f1ca7d6e0b242ef23f68d1c17caa72df7', 'width': 320}, {'height': 376, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?width=640&crop=smart&auto=webp&s=a8761c80074fabbf2a4e27356b994ec4ea5474d1', 'width': 640}], 'source': {'height': 484, 'url': 'https://preview.redd.it/gmny0b01jdkb1.png?auto=webp&s=6d0cb0b56687ca26e391c3158809656957e10f49', 'width': 823}, 'variants': {}}]}
https://huggingface.co/acrastt/OmegLLaMA-3B
5
This is [Xander Boyce](https://huggingface.co/takeraparterer)'s [OmegLLaMA LoRA](https://huggingface.co/takeraparterer/Omegllama) merged with [OpenLLama 3B](https://huggingface.co/openlm-research/open_llama_3b). Prompt format:

```
Interests: {interests}
Conversation:
You: {prompt}
Stranger:
```

For multiple interests, separate them with spaces. Repeat You and Stranger for multi-turn conversations, which means Interests and Conversation are technically part of the system prompt. Note that this model is Not For All Audiences, as it may generate NSFW outputs; use at your own risk.
2023-08-26T03:14:13
https://www.reddit.com/r/LocalLLaMA/comments/161jnef/httpshuggingfacecoacrasttomegllama3b/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161jnef
false
null
t3_161jnef
/r/LocalLLaMA/comments/161jnef/httpshuggingfacecoacrasttomegllama3b/
false
false
self
5
{'enabled': False, 'images': [{'id': 'pqusBA5BiXLhAkiiq_sKF0-k6RVC0ztDwtSGNY_RKpA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=108&crop=smart&auto=webp&s=bb779319d99e36c4fc2647c1b390ea5d966955e5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=216&crop=smart&auto=webp&s=0795c61c30462a53fc1813e897af16a0fa94f0c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=320&crop=smart&auto=webp&s=cc7bc8969e3d510aee70098a4253f0ad18bec20e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=640&crop=smart&auto=webp&s=41eb570b0870d628dfa6f4d04f59b7e778c1ec48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=960&crop=smart&auto=webp&s=d383a1706015d9ba60db0098c587402b43d3e923', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?width=1080&crop=smart&auto=webp&s=b7779efd3e9bf823dd1a962e7e9db1cbf37fd8d8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ii4GYCxLIM4orUfOBTW2QJR4Q0gXy7HaqFLJVHZvGII.jpg?auto=webp&s=fa1b885ae1e272d7965ef6cec9d3eeb657988a29', 'width': 1200}, 'variants': {}}]}
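The prompt format above can be sketched as a small formatter (the helper name is mine, not part of the model card):

```python
# Build an OmegLLaMA-style prompt: space-separated interests, then
# alternating You/Stranger turns, left open for the model to complete.

def omegllama_prompt(interests, turns):
    lines = ["Interests: " + " ".join(interests), "Conversation:"]
    for i, msg in enumerate(turns):
        speaker = "You" if i % 2 == 0 else "Stranger"
        lines.append(f"{speaker}: {msg}")
    lines.append("Stranger:")  # model completes the stranger's reply
    return "\n".join(lines)

p = omegllama_prompt(["llamas", "python"], ["hi there"])
print(p)
```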
PC Build for AI applications like Local LLM, SD etc.
2
I need your help. I want to buy a new PC and haven't been up to date with PC hardware for a few years. The PC is mainly for AI applications like local LLMs, SD etc., video editing and some gaming. Which motherboard should I buy? Does it have to be PCIe 5.0 x16 (runs at x16 or dual x8)? The shop I would like to use is [https://www.digitec.ch](https://www.digitec.ch/). Are the other components good or would you choose something else? I have the following components in my shopping cart:

- Motherboard: 1x ASUS ProArt Z790-Creator WIFI LGA 1700, Intel Z790, ATX (is the $488 motherboard overkill?)
- GPU: 2x ASUS Dual GeForce RTX 4060 Ti OC 16G
- RAM: 2x 32GB, 6000MHz, DDR5 RAM, DIMM
- PSU: 1x be quiet! Pure Power 12M 1000W
- CPU: 1x Intel Core i5-13600K
- FAN: 1x be quiet! Pure Rock 2
- SSD: 1x Samsung 980 Pro 2000GB, M.2 2280
- CASE: 1x Corsair 4000D Airflow Mini ITX, E-ATX, ATX, mATX
2023-08-26T03:07:34
https://www.reddit.com/r/LocalLLaMA/comments/161jitn/pc_build_for_ai_applications_like_local_llm_sd_etc/
Rare-Site
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161jitn
false
null
t3_161jitn
/r/LocalLLaMA/comments/161jitn/pc_build_for_ai_applications_like_local_llm_sd_etc/
false
false
self
2
null
Help to get chat like response
1
hi guys. I'm implementing an API calling a Wizard 7B model from Python and I have an issue, maybe you can give some hints. I'm using a conversation template like USER: ..., ASSISTANT: ..., but I can't make the model stop when the assistant's response ends. Anyone know how to get that result from model parameters? Or how can I get a chat-like interaction directly from Python? Thanks a lot
2023-08-26T00:54:47
https://www.reddit.com/gallery/161grkk
MinuteAd118
reddit.com
1970-01-01T00:00:00
0
{}
161grkk
false
null
t3_161grkk
/r/LocalLLaMA/comments/161grkk/help_to_get_chat_like_response/
false
false
https://b.thumbs.redditm…9Zjm4_LO5Hzk.jpg
1
null
Best LLM for translating Japanese to English?
7
No CLMs, so no GPT-4, and it can't be GPT-3.5-Turbo. I already tried DeepL, and that sucks because it lacks context. The LLM also needs to be uncensored for obvious reasons. The size of the model doesn't matter; I can use Runpod.
2023-08-25T23:59:43
https://www.reddit.com/r/LocalLLaMA/comments/161figs/best_llm_for_translating_japanese_to_english/
ReMeDyIII
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161figs
false
null
t3_161figs
/r/LocalLLaMA/comments/161figs/best_llm_for_translating_japanese_to_english/
false
false
self
7
null
Is 28.5 tokens/s for 512 tokens reasonable? A100 llama-2 7b, fp16
3
I'm using the huggingface library and am disappointed by how slow the generation seems to be on google colab (premium version so i can use the a100s for a bit). Is the problem huggingface or is this expected for 512 token length?
2023-08-25T23:30:11
https://www.reddit.com/r/LocalLLaMA/comments/161etua/is_285_tokenss_for_512_tokens_reasonable_a100/
natural_language_guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161etua
false
null
t3_161etua
/r/LocalLLaMA/comments/161etua/is_285_tokenss_for_512_tokens_reasonable_a100/
false
false
self
3
null
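A rough sanity check, under the assumption that single-stream decode is bounded by memory bandwidth divided by model size (every generated token reads all the weights); the numbers are nominal spec-sheet values, not measurements.

```python
# Rough decode-speed ceiling: memory bandwidth / model bytes.

def decode_ceiling_tok_s(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

a100_bw = 1555.0       # A100 40GB nominal HBM2e bandwidth, GB/s
llama7b_fp16 = 14.0    # ~7e9 params * 2 bytes, GB

ceiling = decode_ceiling_tok_s(a100_bw, llama7b_fp16)
print(f"~{ceiling:.0f} tok/s rough ceiling vs 28.5 tok/s measured")
```

28.5 tok/s is several times below this rough ceiling, which is consistent with plain `model.generate` overhead in the Hugging Face stack rather than a hardware limit.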
Are there any newly released models that can generate creative NSFW content?
30
I have been testing out the new generation of models (airoboros 70B, nous hermes llama2, chronos hermes) So far, the models I've tried out are reluctant to use explicit language, no matter what characters I use them with. I even tried forcing outputs to start a certain way, but it's still too "clean" to have any fun with. It is frustrating because, although the quality of writing is improved, it feels more restrictive and censored with the content it produces. I had better results with Wizard 30B uncensored, and also with Guanaco 65B. Please help.
2023-08-25T23:29:45
https://www.reddit.com/r/LocalLLaMA/comments/161etgt/are_there_any_newly_released_models_that_can/
ChemicalOpportunity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161etgt
false
null
t3_161etgt
/r/LocalLLaMA/comments/161etgt/are_there_any_newly_released_models_that_can/
false
false
nsfw
30
null
Would like to ask LLAMA questions about docs
1
I have a bunch of docs that I would like to ask questions about. For example, say I have 100 documents, I want to ask questions such as "which of these instances have a blue bird that flies?" What is the best way of going about this? Should I be fine-tuning a LLAMA? Is there a package or service that does this? Thanks for any and all help
2023-08-25T23:28:09
https://www.reddit.com/r/LocalLLaMA/comments/161es6k/would_like_to_ask_llama_questions_about_docs/
enmalik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161es6k
false
null
t3_161es6k
/r/LocalLLaMA/comments/161es6k/would_like_to_ask_llama_questions_about_docs/
false
false
self
1
null
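A minimal retrieval sketch, assuming a retrieve-then-ask setup rather than fine-tuning: score each document against the question and pass only the top matches to the model. Bag-of-words cosine similarity stands in for a real embedding model here, purely to keep the example self-contained.

```python
# Minimal retrieve-then-ask: rank documents by similarity to the question.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_docs(question, docs, k=2):
    q = Counter(question.lower().split())
    return sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)[:k]

docs = ["a blue bird that flies over the lake",
        "quarterly budget report",
        "red fish swimming"]
print(top_docs("which instances have a blue bird that flies", docs, k=1))
```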
Summarizing / Analyzing Large Amount of Data
3
I work for a medium-size (1k-10k employees) supplier/distribution company that is exploring how to best use LLMs, specifically Llama 2. One task we've been exploring is summarizing. Let's say I have several million or so emails from our customers and I want to do sentiment analysis on them. For example, what do our customers think about our pricing strategy? Surely not all emails contain information that is pertinent, but some do. Do you think I should embed them all and then see which ones match, or run through each of them and ask via prompting whether it has any relevant information? Once I find them, I guess I would ask a series of questions and see if I can find common themes. Has anyone tackled a problem like this? Thanks!
2023-08-25T23:09:57
https://www.reddit.com/r/LocalLLaMA/comments/161ec9m/summarizing_analyzing_large_amount_of_data/
tantan1187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ec9m
false
null
t3_161ec9m
/r/LocalLLaMA/comments/161ec9m/summarizing_analyzing_large_amount_of_data/
false
false
self
3
null
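One way to sketch the "prefilter, then prompt" idea from the post above: a cheap keyword filter cuts millions of emails down to candidates, which are then batched into LLM prompts. The keyword list and batch size are illustrative assumptions.

```python
# Two-stage pipeline sketch: keyword prefilter, then batch into prompts.

PRICING_TERMS = {"price", "pricing", "quote", "expensive", "discount"}

def is_candidate(email: str) -> bool:
    return bool(set(email.lower().split()) & PRICING_TERMS)

def batch(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

emails = ["Your pricing is too expensive for us.",
          "Shipment arrived, thanks!",
          "Can we get a discount on bulk orders?"]
candidates = [e for e in emails if is_candidate(e)]
prompts = ["Summarize sentiment about pricing:\n" + "\n".join(b)
           for b in batch(candidates, 2)]
print(len(candidates), "candidates,", len(prompts), "prompt(s)")
```

In practice the embedding approach from the post scales better than exact keywords, but a prefilter like this avoids paying LLM inference on every email.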
Training LocalLLaMA for source code base
1
Hello, has anyone had success with training Llama 2 (or a different LLM) on a large code base for understanding? I have a fairly large code base (millions of lines of C# code) and was thinking I could train it to help me understand the code. Any information would be greatly appreciated.
2023-08-25T22:59:22
https://www.reddit.com/r/LocalLLaMA/comments/161e2tk/training_localllama_for_source_code_base/
Soft-Conclusion-2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161e2tk
false
null
t3_161e2tk
/r/LocalLLaMA/comments/161e2tk/training_localllama_for_source_code_base/
false
false
self
1
null
A local LLM is the ultimate doomsday device
141
Years ago there was a trend among the paranoid of downloading Wikipedia as an easy way to maintain access to a wide breadth of knowledge in the event of societal collapse (the internet going down). Devices like the [WikiReader](https://en.wikipedia.org/wiki/WikiReader) or [home-grown Raspberry Pi solutions](https://www.hackster.io/news/stay-informed-during-the-apocalypse-with-an-off-grid-wikipedia-device-b37332c7bc1d) cropped up, but were extremely niche products. It occurred to me that local LLMs achieve the goal of these devices even better. Wikipedia might be able to give you an overview on surgical sutures, but it can't walk you through suturing a wound with the materials you have on hand. Wikipedia might be able to give you examples of grief, but it won't hold your hand through the process of having to put down your dog because he got bit by a zombie.
2023-08-25T22:58:50
https://www.reddit.com/r/LocalLLaMA/comments/161e2ci/a_local_llm_is_the_ultimate_doomsday_device/
Ninjinka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161e2ci
false
null
t3_161e2ci
/r/LocalLLaMA/comments/161e2ci/a_local_llm_is_the_ultimate_doomsday_device/
false
false
self
141
{'enabled': False, 'images': [{'id': 'WsxhvRZfdwJ8XHtruNbK4wWe3eA5GYS4Tysy0jRbVDg', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=108&crop=smart&auto=webp&s=66f151f4379266aef332ecafc143bacfea9b8115', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=216&crop=smart&auto=webp&s=4c326138d0ab71e5a6a1caf855582d5dc300f3b7', 'width': 216}, {'height': 224, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=320&crop=smart&auto=webp&s=dd6e09b5083781f51da69429a244ea5cb2869810', 'width': 320}, {'height': 448, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=640&crop=smart&auto=webp&s=526e65e4c26527507aeaa97a16efd0e66e8a2812', 'width': 640}, {'height': 672, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=960&crop=smart&auto=webp&s=d399847b156c4a5ee8a2e3f640ddc1535083c7fb', 'width': 960}, {'height': 756, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?width=1080&crop=smart&auto=webp&s=ea9437d9363dc5aa67fcda362fef10be76c85818', 'width': 1080}], 'source': {'height': 841, 'url': 'https://external-preview.redd.it/ILuqpjj-A0TwQjbSd1eJrwvdnG3duVFyMzcOPjhBv9w.jpg?auto=webp&s=90f4d8e0ced968bede461979527698f2b08dcac5', 'width': 1200}, 'variants': {}}]}
Llama2 has now beaten GPT4 on HumanEval
260
[https://www.phind.com/blog/code-llama-beats-gpt4](https://www.phind.com/blog/code-llama-beats-gpt4)
2023-08-25T22:39:44
https://www.reddit.com/r/LocalLLaMA/comments/161dlnz/llama2_has_now_beaten_gpt4_on_humaneval/
Wonderful_Ad_5134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161dlnz
false
null
t3_161dlnz
/r/LocalLLaMA/comments/161dlnz/llama2_has_now_beaten_gpt4_on_humaneval/
false
false
self
260
null
Inferencing LLAMA-2 13B
1
[removed]
2023-08-25T22:00:41
https://www.reddit.com/r/LocalLLaMA/comments/161clx6/inferencing_llama2_13b/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161clx6
false
null
t3_161clx6
/r/LocalLLaMA/comments/161clx6/inferencing_llama2_13b/
false
false
self
1
null
Problem while inferencing Llama-2
1
[removed]
2023-08-25T21:59:24
https://www.reddit.com/r/LocalLLaMA/comments/161ckp3/problem_while_inferencing_llama2/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161ckp3
false
null
t3_161ckp3
/r/LocalLLaMA/comments/161ckp3/problem_while_inferencing_llama2/
false
false
self
1
null
How do you bug test CUDA out of memory errors?
1
I have 12GB of VRAM (~8 GB available according to nvidia-smi because of the OS). I tried to run the example program: torchrun --nproc_per_node 1 example_completion.py --ckpt_dir CodeLlama-7b/ --tokenizer_path CodeLlama-7b/tokenizer.model --max_seq_len 128 --max_batch_size 4 which produced a memory error. I tried: PYTORCH_CUDA_ALLOC_CONF='max_split_size_mb:256' and then also tried using --max_seq_len 32 --max_batch_size 2 or something similar. Neither of these can get the code to under 8 GB of VRAM -- or else maybe I have a setting or installation procedure wrong? Just trying to test whether the code environment works before moving the script to larger GPUs, so I would like to be able to run some diagnostic.
2023-08-25T21:57:00
https://www.reddit.com/r/LocalLLaMA/comments/161cinp/how_do_you_bug_test_cuda_out_of_memory_errors/
Lyuokdea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161cinp
false
null
t3_161cinp
/r/LocalLLaMA/comments/161cinp/how_do_you_bug_test_cuda_out_of_memory_errors/
false
false
self
1
null
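Before debugging OOM empirically, a rough fit check (assuming fp16 weights and an fp16 KV cache, ignoring activations and framework overhead) suggests why shrinking max_seq_len doesn't help here:

```python
# Rough VRAM estimate: weights + KV cache, fp16 assumptions throughout.

def vram_gb(params_b, bytes_per_param, layers, hidden, max_seq, batch):
    weights = params_b * 1e9 * bytes_per_param
    kv_cache = 2 * layers * hidden * max_seq * batch * 2  # K+V, 2 bytes each
    return (weights + kv_cache) / 1e9

need = vram_gb(params_b=7, bytes_per_param=2,
               layers=32, hidden=4096, max_seq=128, batch=4)
print(f"~{need:.1f} GB needed vs ~8 GB free")
```

The fp16 weights alone (~14 GB) already exceed the ~8 GB free, so lowering --max_seq_len or --max_batch_size can't fix it; a quantized build (e.g. 4-bit via llama.cpp or GPTQ) is the usual way to fit 7B into that budget.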
llama.cpp : CPU vs CLBLAS (opencl) vs ROCm
5
I tried to run `llama.cpp` with different backends but I didn't notice much difference in performance. I have a lot of RAM but little VRAM... are there other advantages to running the non-CPU modes?
2023-08-25T21:31:12
https://www.reddit.com/r/LocalLLaMA/comments/161bv15/llamacpp_cpu_vs_clblas_opencl_vs_rocm/
grigio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161bv15
false
null
t3_161bv15
/r/LocalLLaMA/comments/161bv15/llamacpp_cpu_vs_clblas_opencl_vs_rocm/
false
false
self
5
null
Any chance I could find a small model that can pick out important sentences or keywords in a text?
1
[removed]
2023-08-25T21:18:47
https://www.reddit.com/r/LocalLLaMA/comments/161bjik/any_chance_i_could_find_a_small_model_that_can/
Hussei911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161bjik
false
null
t3_161bjik
/r/LocalLLaMA/comments/161bjik/any_chance_i_could_find_a_small_model_that_can/
false
false
self
1
null
None of the Bloke Codellamma GPTQ work
1
[removed]
2023-08-25T20:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1619w6z/none_of_the_bloke_codellamma_gptq_work/
Murky-Cheek-7554
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1619w6z
false
null
t3_1619w6z
/r/LocalLLaMA/comments/1619w6z/none_of_the_bloke_codellamma_gptq_work/
false
false
self
1
null
Code Llama - The Hugging Face Edition
97
Hey all! Chief Llama Officer at Hugging Face here!

Like all of you, I'm quite excited about Code Llama being released. Like most of you, I've also struggled to use it. For the last 24 hours, we've sprinted to make things nice and easy for all of you. We'll be iterating to make things easier, faster, and smoother, but excited to share our first news!

**TL;DR version - What's new?**

So today, we're releasing:

* A nice [blog post](https://huggingface.co/blog/codellama)
* All [9 models](https://huggingface.co/codellama) uploaded to Hugging Face and supported in transformers
* A CodeLlama [Playground](https://huggingface.co/spaces/codellama/codellama-playground) for the 13B model
* A CodeLlama Chat [Playground](https://huggingface.co/spaces/codellama/codellama-13b-chat) for the 13B instruct-tuned model
* An [update](https://github.com/huggingface/transformers/pull/25740) in transformers to support CodeLlama (you need to install from main)
* A guide on how to use the conversational model (see blog post)
* An explanation of 4-bit loading of the model
* Support in text generation inference for production-ready inference
* Community evaluation of the models. Check out the [multilingual code models evaluation](https://huggingface.co/spaces/bigcode/multilingual-code-evals) (or the blog [post](https://huggingface.co/blog/codellama#evaluation) for a comparison with other aspects)

**Blah blah blah... what can this do?**

* Run the small models in free Google Colab
* Do infilling (e.g. generate the best match given a prefix and suffix; generate docstrings)
* Generate code
* For the instruct models, you can have conversations

**What's next?**

In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more.
Feel free to like our GitHub repos ([transformers](https://github.com/huggingface/transformers), [peft](https://github.com/huggingface/peft), [accelerate](https://github.com/huggingface/accelerate)). Enjoy!

**Give me an example**

Using the 13B Chat/Instruct-tuned model as a code assistant

**Input:** Give me the python code for fibonacci

**Output:**

> Sure, here's a Python implementation of the Fibonacci sequence:
>
>     def fibonacci(n):
>         if n <= 1:
>             return n
>         else:
>             return fibonacci(n-1) + fibonacci(n-2)
>
> This function takes an integer n as input and returns the nth Fibonacci number. The function uses recursion to calculate the Fibonacci number.
>
> Here's an example of how to use the function:
>
>     print(fibonacci(5))  # Output: 5
>     print(fibonacci(8))  # Output: 21
>
> Note that the Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers. The first two numbers in the sequence are 0 and 1, and each subsequent number is the sum of the previous two numbers.

**Input 2:** What is its big O complexity?

> The time complexity of the Fibonacci sequence is O(2^n) because the function calls itself recursively and the number of function calls increases exponentially with the size of the input. [...]
2023-08-25T20:14:23
https://www.reddit.com/r/LocalLLaMA/comments/1619uxp/code_llama_the_hugging_face_edition/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1619uxp
false
null
t3_1619uxp
/r/LocalLLaMA/comments/1619uxp/code_llama_the_hugging_face_edition/
false
false
self
97
{'enabled': False, 'images': [{'id': 'FeIXv2me6IcnLwKMBUP6UTAszr0o7BEfE8ukf8bDmrE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=108&crop=smart&auto=webp&s=09b397799d0477e85fbfa588dd6624a9b6b0b928', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=216&crop=smart&auto=webp&s=4380651809954441ba4f52fa6c25f58d758c2e60', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=320&crop=smart&auto=webp&s=d567d64e1d3345fe2149745bc084ecb11c99660d', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=640&crop=smart&auto=webp&s=903aeaf5148cc6ba26fc4e7608a04c745d68ae00', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=960&crop=smart&auto=webp&s=104a925eeacb1f6de5bf360ee38a659f62a3a94f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?width=1080&crop=smart&auto=webp&s=dd9028640e90c2afcf5b83c2589176b11b70d690', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/TbSF4PqitQx4n7XizZfzma98bX7_epvslEPc_tWrsJ8.jpg?auto=webp&s=627fcf332e29b3440da779aca56ff1cf9002acaa', 'width': 2400}, 'variants': {}}]}
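As a follow-up to the complexity note in the example above: the model's recursive answer is O(2^n), and a linear-time rewrite (mine, not from the post) makes the contrast concrete.

```python
# Iterative O(n) Fibonacci, versus the O(2^n) recursive version above.

def fib_iter(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_iter(n) for n in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```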
Code LLama Running Locally with Faraday.dev
53
2023-08-25T19:50:20
https://i.redd.it/et00suh07bkb1.gif
719Ben
i.redd.it
1970-01-01T00:00:00
0
{}
16198py
true
null
t3_16198py
/r/LocalLLaMA/comments/16198py/code_llama_running_locally_with_faradaydev/
false
false
https://b.thumbs.redditm…3Da4K3hc_KgQ.jpg
53
{'enabled': True, 'images': [{'id': 'hAbl_CWioS2VkQHehKkyN8GXTiLEzStB2cl-pQ2__SY', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=108&crop=smart&format=png8&s=1b0179887244cf98f2bd3c98874c29f8ab348f82', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=216&crop=smart&format=png8&s=3fce8c25e611a924e392f22344d888bd77dbf4cc', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=320&crop=smart&format=png8&s=bbbd5b1d8c1f41b3c768f06dd9b58ca2174f2ee2', 'width': 320}], 'source': {'height': 374, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?format=png8&s=05115d14d52f62a3935e48dac8c297d1097ae2db', 'width': 600}, 'variants': {'gif': {'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=108&crop=smart&s=884e4d804b29b94eacbd7b70bf2b2c3a9e5f863b', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=216&crop=smart&s=880473c0b6c79cc1f7fad0b3c749faf20e54d418', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=320&crop=smart&s=7bbc8a6ca5131e07db05e5839972cf6aa39da315', 'width': 320}], 'source': {'height': 374, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?s=d9aa5fd0312f44f1e119ef897ec86e64f3f94576', 'width': 600}}, 'mp4': {'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=108&format=mp4&s=5e9c8b67ba27bdd034e408fb3a31ce88fb80858a', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=216&format=mp4&s=7db49bfabdb12208c028678c75a1d053a959da8f', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?width=320&format=mp4&s=ab081d3a9443b6cc7d11090a5ea3e61cb89d808e', 'width': 320}], 'source': {'height': 374, 'url': 'https://preview.redd.it/et00suh07bkb1.gif?format=mp4&s=100aef438e97726646b3b59c97424dc659c2c2af', 'width': 600}}}}]}
An article on latent potentials and the analysis of Hidden Layers
1
Here’s a link to an article that discusses how analysis of an LLM’s hidden layers can be used to determine how capable a model is at generating output on domains that it wasn’t trained on. An interesting topic!
2023-08-25T19:42:09
https://medium.com/@greg.broadhead/title-unveiling-the-mystique-exposing-latent-potentials-in-large-language-models-5d749734170d
GeeBee72
medium.com
1970-01-01T00:00:00
0
{}
161919q
false
null
t3_161919q
/r/LocalLLaMA/comments/161919q/an_article_on_latent_potentials_and_the_analysis/
false
false
https://b.thumbs.redditm…VamdqxRlGzjA.jpg
1
{'enabled': False, 'images': [{'id': 'xrQGkF_T6mV13Uq8gVmfoXNhVR0HhmSQUMDW-TWMXCg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=108&crop=smart&auto=webp&s=0209eb1eaa3943ae5f93992f1c837db85e77f561', 'width': 108}, {'height': 217, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=216&crop=smart&auto=webp&s=325e3544412da5da5b0bfe2511567055886a13a6', 'width': 216}, {'height': 322, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=320&crop=smart&auto=webp&s=9f79f4e2cf5cafbc17b978e1134b561f924d85d3', 'width': 320}, {'height': 645, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?width=640&crop=smart&auto=webp&s=4eef3d319fa63bc25bb5b943a426f30cb57ecfa9', 'width': 640}], 'source': {'height': 967, 'url': 'https://external-preview.redd.it/EQHggOROS_SKmX8zTD89Q0Fcl1r3KSdEr8-bqOj680Y.jpg?auto=webp&s=c8edbee9233730e9782e3daefd098de36fcc736e', 'width': 959}, 'variants': {}}]}
How to get the same inference quality between local and Perplexity ?
3
Hello, I tried code llama-instruct-34B today from Perplexity ([https://labs.perplexity.ai](https://labs.perplexity.ai)) and compared it to a local run with oobabooga/Exllama. My local setup is a 3090 with 24 GB of VRAM and 64 GB of RAM. I downloaded TheBloke's quantized versions for both the instruct and python variants and have been trying both the default version and 64g-actorder_True. All local runs give extremely weird answers.

Example with the question: Help me to code this: write a python code with an example of the to_csv function of Pandas in python

Local output:

    Sure! I'm glad to help you. Please tell me more about what you need. What would you like for me to do for you? Would you like for me to show you how to use the csv module from the standard built-in python library package package package package (package repeated until max_new_tokens)

Output of Perplexity/code llama-instruct-34B:

    Sure, here is an example of how you could use the to_csv() function in Python:

    import pandas as pd

    # create a sample dataframe
    data = {'Name': ['John', 'Mary', 'David'],
            'Age': [25, 30, 40],
            'Gender': ['Male', 'Female', 'Male']}
    df = pd.DataFrame(data)

    # save the dataframe to a CSV file
    df.to_csv('example.csv')

    This will save the contents of the DataFrame to a CSV file called "example.csv" in the current working directory. The resulting CSV file will look ...

(truncated answer, but the quality is very good). Can this be due to some parameters on my side?
2023-08-25T19:32:11
https://www.reddit.com/r/LocalLLaMA/comments/1618s51/how_to_get_the_same_inference_quality_between/
gaara988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1618s51
false
null
t3_1618s51
/r/LocalLLaMA/comments/1618s51/how_to_get_the_same_inference_quality_between/
false
false
self
3
null
CodeLlama-34b-Instruct-hf
11
Getting interesting results with non-code completions; you can explore the compressed dataset using 0 temp: [https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf?text=I+love](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf?text=I+love) https://preview.redd.it/e0sqlfva5bkb1.png?width=3482&format=png&auto=webp&s=dc10155f0fd834b220323ab555db474fcdec1755
2023-08-25T19:29:15
https://www.reddit.com/r/LocalLLaMA/comments/1618p7x/codellama34binstructhf/
nanowell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1618p7x
false
null
t3_1618p7x
/r/LocalLLaMA/comments/1618p7x/codellama34binstructhf/
false
false
https://b.thumbs.redditm…UbRxx_qRRGkY.jpg
11
{'enabled': False, 'images': [{'id': 'H8du3_sRoUxaA_HDoduEHkh4B49hWQId75tVxSKFVus', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=108&crop=smart&auto=webp&s=b8241e0f7b7cafc3133adfb63484b6f79a3c33e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=216&crop=smart&auto=webp&s=f1da83375522dd2c43f25f8524f8efb4fb678f97', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=320&crop=smart&auto=webp&s=680ab14386c2843de797430d2c8789d1e33cbc4c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=640&crop=smart&auto=webp&s=83b83b179a1670213b4b2925fdbf57dac7c5c132', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=960&crop=smart&auto=webp&s=3a850b60e05f5516f1782bf383f22827c87e1fc8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?width=1080&crop=smart&auto=webp&s=ab8e3a9a5ab61d383522fd9d24e17d5c31f29e4c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NhcGQvs4ZNIYiOlUzJOf7iXwNTKxxX8LNEWdMOH_wuo.jpg?auto=webp&s=60565d0ca8190c19e6bdab292ad6ef8ae2b130c2', 'width': 1200}, 'variants': {}}]}
Meta's Latest AI Release Is an Open Source Coding Bot
1
2023-08-25T19:07:51
https://www.pulsewaveapp.com/Metas-Latest-AI-Release-Is-an-Open-Source-Coding-Bot/
Starship_2_Mars
pulsewaveapp.com
1970-01-01T00:00:00
0
{}
16185h4
false
null
t3_16185h4
/r/LocalLLaMA/comments/16185h4/metas_latest_ai_release_is_an_open_source_coding/
false
false
https://b.thumbs.redditm…4c0eA3doEuvk.jpg
1
{'enabled': False, 'images': [{'id': 'qpAO3OBs_qshvyRMN6MUFVmJ1ZXu_oOmXZwkAWxULgE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=108&crop=smart&auto=webp&s=25d9e0a5167a5675fb46397a0bb92c35adebe151', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=216&crop=smart&auto=webp&s=faeace1be81dbe663bba0af4dff5b2434a1f504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=320&crop=smart&auto=webp&s=bc35f8cb55854e69f8c41537e8623d620f36dd8c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=640&crop=smart&auto=webp&s=c92d187ddd9a73e0ee5672165c643c6de7db0bda', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=960&crop=smart&auto=webp&s=7bb6d7415de27874cdac09d0db9eb6f9f007d0b2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?width=1080&crop=smart&auto=webp&s=d0ee6f24a4b8f8c71d9be16a89db8e9d3a5a5795', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/3evmZRmeFIrJjXtN26pBjouE6vHxEWMWRU8U7vJEyI4.jpg?auto=webp&s=7e441e21b1ae108d0b2048346528e9e44e817170', 'width': 1920}, 'variants': {}}]}
Over
50
2023-08-25T18:10:18
https://i.redd.it/l6fiqstkrakb1.jpg
HOLUPREDICTIONS
i.redd.it
1970-01-01T00:00:00
0
{}
1616nq9
false
null
t3_1616nq9
/r/LocalLLaMA/comments/1616nq9/over/
false
false
https://a.thumbs.redditm…ah62AzVeJSo8.jpg
50
{'enabled': True, 'images': [{'id': 'kNYNg_Mmhc7P2TK3e7X0Nv6v8FiNf_XdiiFmCqbRR0Q', 'resolutions': [{'height': 139, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=108&crop=smart&auto=webp&s=d6541df16682bf11532b2e2f545091f7d9b34705', 'width': 108}, {'height': 278, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=216&crop=smart&auto=webp&s=bf69f8cfcbfd5e625a7ff1e2db26735e1154d97a', 'width': 216}, {'height': 412, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=320&crop=smart&auto=webp&s=e19dc75681a020bb199140e5d72421169c64e67a', 'width': 320}, {'height': 824, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?width=640&crop=smart&auto=webp&s=0a5b0a9fcbda5db6e575b501b8f465f19738ceb1', 'width': 640}], 'source': {'height': 992, 'url': 'https://preview.redd.it/l6fiqstkrakb1.jpg?auto=webp&s=260bd3aa1e6c7f6e81d9cccf22d08b67bee288e0', 'width': 770}, 'variants': {}}]}
Anyway to use Runpod on android?
1
Hey, is there any way I could control/interact with a RunPod-hosted LLM (or something similar) from an Android-based mobile device? I'm guessing the answer is no, but I thought I'd ask anyway.
2023-08-25T18:04:17
https://www.reddit.com/r/LocalLLaMA/comments/1616i2m/anyway_to_use_runpod_on_android/
theshoelesschap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1616i2m
false
null
t3_1616i2m
/r/LocalLLaMA/comments/1616i2m/anyway_to_use_runpod_on_android/
false
false
self
1
null
Is it really worth to build a custom machine over using chatgpt4?
23
Hi there, I am working on RAG (Retrieval Augmented Generation) systems over medical PDFs and books (100K+ documents, 1k books). I am wondering if I should invest in building my own machine for the project and use open source models like llama, or if I should go with OpenAI GPT-4. I can make sure to reduce requests to OpenAI for already-asked questions (put them in a cache, for example)... or maybe there are other ways to reduce the cost. If I should build my own machine, what are the minimum requirements? What are your thoughts?
2023-08-25T18:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1616h5j/is_it_really_worth_to_build_a_custom_machine_over/
No-Trifle2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1616h5j
false
null
t3_1616h5j
/r/LocalLLaMA/comments/1616h5j/is_it_really_worth_to_build_a_custom_machine_over/
false
false
self
23
null
I have 44 gb of VRAM, which is the best model I can run
10
This includes one rtx a5500 and one rtx a4500, 24 gb and 20 gb combined to give me 44 gb, what is the most I can get to run on it?
2023-08-25T17:49:32
https://www.reddit.com/r/LocalLLaMA/comments/161649f/i_have_44_gb_of_vram_which_is_the_best_model_i/
syndorthebore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161649f
false
null
t3_161649f
/r/LocalLLaMA/comments/161649f/i_have_44_gb_of_vram_which_is_the_best_model_i/
false
false
self
10
null
Diving into Language Model Terminology: LoRA, Q-LoRA, INT8, INT4, GPTQ, GGML - Help Needed!
23
I've come across several terms and configurations that have left me a bit overwhelmed. I'd truly appreciate it if some knowledgeable souls here could help break these down: 1. **LoRA**: Only training some layers, which needs way less memory, right. 2. **Q-LoRA**: Initially I thought, this is LoRA training on a quantized model but I think it's just training the LoRA adapters with quantized weights? Does it have something to do with INT4 or is it a totally different beast? 3. **INT8 and INT4**: I understand these might relate to quantization (reducing model size for efficiency, with some accuracy trade-offs). But how do these two differ in practical terms, and what are their typical use cases? What's the connection to Q-LoRA? 4. **GPTQ**: I've heard of GPT models, but what does the 'Q' denote here? Is it another quantized variant? 5. **GGML**: This is a new one for me. Can someone elucidate what this stands for and its significance? It seems like the world of language models is expanding rapidly, with tons of exciting innovations. However, it can be a challenge to keep up! Any insights, resources, or even just basic explanations for these terms would be incredibly helpful.
2023-08-25T17:42:48
https://www.reddit.com/r/LocalLLaMA/comments/1615xzb/diving_into_language_model_terminology_lora_qlora/
Single_Prior_704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1615xzb
false
null
t3_1615xzb
/r/LocalLLaMA/comments/1615xzb/diving_into_language_model_terminology_lora_qlora/
false
false
self
23
null
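A toy, pure-Python illustration of the INT8/INT4 trade-off asked about in point 3. Real quantizers like GPTQ are far more sophisticated (calibration data, per-group scales); this only shows the grid-coarseness idea:

```python
# Symmetric round-trip: map floats onto a small integer grid, then back.
def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1          # 127 for INT8, 7 for INT4
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.91, -0.07]
for bits in (8, 4):
    q, scale = quantize(weights, bits)
    restored = dequantize(q, scale)
    err = max(abs(a - b) for a, b in zip(weights, restored))
    print(f"{bits}-bit max error: {err:.4f}")
# INT4 uses a much coarser grid than INT8, so its round-trip error is
# larger: that is the size/accuracy trade-off behind all these formats.
```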
Has Code Llama been added to huggingface?
1
does anyone have a link to the new models?
2023-08-25T17:34:49
https://www.reddit.com/r/LocalLLaMA/comments/1615qmg/has_code_llama_been_added_to_huggingface/
randomrealname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1615qmg
false
null
t3_1615qmg
/r/LocalLLaMA/comments/1615qmg/has_code_llama_been_added_to_huggingface/
false
false
self
1
null
Code Llama for VSCode - A simple API which mocks llama.cpp to enable support for Code Llama with the Continue Visual Studio Code extension. Cross-platform support. No login/key/etc, 100% local.
58
2023-08-25T16:34:46
https://github.com/xNul/code-llama-for-vscode
Nabakin
github.com
1970-01-01T00:00:00
0
{}
16145rn
false
null
t3_16145rn
/r/LocalLLaMA/comments/16145rn/code_llama_for_vscode_a_simple_api_which_mocks/
false
false
https://b.thumbs.redditm…KXbyAHwyN1DE.jpg
58
{'enabled': False, 'images': [{'id': 'UyFyyzjF0CtZf2nJsmV5bAiZskUV4PhbIzsT0LURA0g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=108&crop=smart&auto=webp&s=4b32a8629412b2ffc5a71a2255fe362af6a84ba1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=216&crop=smart&auto=webp&s=762928ec2e285a9f942626a6f890e2b1ecbad681', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=320&crop=smart&auto=webp&s=a8efea1cc39c1e8ed8799da7763e4d66bec0bf92', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=640&crop=smart&auto=webp&s=f07392ffa46dd82f10add0bf8a250cbc34b5d8e8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=960&crop=smart&auto=webp&s=68ac03b3efd0b25db1aa10f36c5c460071c52a7a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?width=1080&crop=smart&auto=webp&s=f2adbc9bd7410d69faa9eaf113e3b152094064bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-UGvlHdDFBXrUIsiFtBGyoEuAzRig7JNgIh4YWj1FHI.jpg?auto=webp&s=b011cb1ded47b6f3580ff5fec7375dd16477d7e7', 'width': 1200}, 'variants': {}}]}
Will LoRa training of GPTQ models through oobabooga be coming?
1
I get an error when trying to train gptq models and I noticed it's not supported. I wasn't sure what was preventing it and if there were plans going forward. Does anyone know what's stopping it? I trained on a Transformers version but it's just so slow, I can't use it.
2023-08-25T16:19:34
https://www.reddit.com/r/LocalLLaMA/comments/1613rmr/will_lora_training_of_gptq_models_through/
aBowlofSpaghetti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1613rmr
false
null
t3_1613rmr
/r/LocalLLaMA/comments/1613rmr/will_lora_training_of_gptq_models_through/
false
false
self
1
null
I have access to a DGX with 8 x A100 - how to fine tune a LLaMA 2?
6
Hi guys, I would like to use the weekend to fine-tune LLaMA 2 on specific documents that are available in German and English. Those are PDFs. How can I do this on an A100 GPU? First, do I need a docker image to run things, and which one is prepared for that? Which base model should I use? I would like to start with 13B parameters. How do I fine-tune it? Do I need to format the data as prompts, or can I just use plain text? Is there a repo with code and an example for fine-tuning? Thanks for your support!
2023-08-25T16:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1613f6t/i_have_access_to_a_dgx_with_8_x_a100_how_to_fine/
New_Lifeguard4020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1613f6t
false
null
t3_1613f6t
/r/LocalLLaMA/comments/1613f6t/i_have_access_to_a_dgx_with_8_x_a100_how_to_fine/
false
false
self
6
null
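On the "prompts or plain text" question in the post above: most instruction fine-tuning recipes expect prompt/response pairs, commonly stored as JSONL. A minimal sketch of building such a file; the field names and the single record are illustrative placeholders, not a fixed standard:

```python
import json

# Each record pairs an instruction (plus optional input text extracted
# from your PDFs) with the desired output. Real records would come from
# your German/English documents.
records = [
    {"instruction": "Summarise the following passage.",
     "input": "Text extracted from one of the PDFs...",
     "output": "A human-written (or model-assisted) summary."},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Each line is one self-contained JSON object:
with open("train.jsonl", encoding="utf-8") as f:
    print(sum(1 for _ in f))  # 1
```

Plain-text continued pre-training is also possible, but for Q&A-style use the instruction format tends to be what the common LLaMA fine-tuning scripts consume.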
gpt4all-j compatible models which work with PrivateGPT?
1
[removed]
2023-08-25T16:05:32
https://www.reddit.com/r/LocalLLaMA/comments/1613ech/gpt4allj_compatible_models_which_work_with/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1613ech
false
null
t3_1613ech
/r/LocalLLaMA/comments/1613ech/gpt4allj_compatible_models_which_work_with/
false
false
self
1
null
16gb 4060 Ti with Ryzen APU?
2
What is the largest model size this can run using all 16gb of VRAM if I use the APU for video output instead? I have read about loaders that let people run these models in less memory and with more speed too. Can a 16gb card handle anything in the 20-33b parameter range with quantization, or can it do 13b at 8-bit?
2023-08-25T15:37:26
https://www.reddit.com/r/LocalLLaMA/comments/1612npk/16gb_4060_ti_with_ryzen_apu/
Unable-Client-1750
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1612npk
false
null
t3_1612npk
/r/LocalLLaMA/comments/1612npk/16gb_4060_ti_with_ryzen_apu/
false
false
self
2
null
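A rough back-of-envelope way to answer sizing questions like the one above: weight memory is roughly parameter count times bits over 8, plus overhead for the KV cache and activations. The 20% overhead figure here is an assumption for illustration, not a measured constant:

```python
# Crude VRAM estimate for inference only (no training/optimizer state).
def vram_gb(params_billion: float, bits: int, overhead: float = 0.20) -> float:
    weight_bytes = params_billion * 1e9 * bits / 8
    return weight_bytes * (1 + overhead) / 1024**3

for label, params, bits in [("13B @ 8-bit", 13, 8),
                            ("33B @ 4-bit", 33, 4),
                            ("20B @ 4-bit", 20, 4)]:
    print(f"{label}: ~{vram_gb(params, bits):.1f} GB")
```

By this estimate a 13b model at 8-bit or a ~20b model at 4-bit fits in 16 GB, while 33b at 4-bit does not; actual usage depends on context length and the loader.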
Ctransformers now support GGUF format for Falcon and Llama models
13
2023-08-25T15:02:22
https://github.com/marella/ctransformers
Acrobatic-Site2065
github.com
1970-01-01T00:00:00
0
{}
1611q4e
false
null
t3_1611q4e
/r/LocalLLaMA/comments/1611q4e/ctransformers_now_support_gguf_format_for_falcon/
false
false
default
13
{'enabled': False, 'images': [{'id': '6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?width=108&crop=smart&auto=webp&s=5cba4bacc75d8a15b03d03db55c7ea9be6b550ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?width=216&crop=smart&auto=webp&s=48b1c82910f3c65e453e173a4c5bb217b77dbd17', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?width=320&crop=smart&auto=webp&s=98be5ead3acd82ed6c3de5fad01ecfc8929e25a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?width=640&crop=smart&auto=webp&s=5ae87205095f9c3852ddfa188d1dd1187017e94c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?width=960&crop=smart&auto=webp&s=c64b0e3107ecc9760f3ef6010e15ccc6521042cf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?width=1080&crop=smart&auto=webp&s=c2c6bd2b0ddcbfbed5ebea9ee7027ac770626050', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6B4DVv_pT7ZV5YDoBCrnaJfVX5c8ZCH0ZVFERHm_t7E.png?auto=webp&s=46734500a93fe2c74090c836012731888560b8a4', 'width': 1200}, 'variants': {}}]}
Has anyone hosted the uncensored llama 2 online?
0
If not, someone should do it; it would be popular.
2023-08-25T14:39:55
https://www.reddit.com/r/LocalLLaMA/comments/161152w/has_anyone_hosted_the_uncensored_llama_2_online/
fluoroamine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
161152w
false
null
t3_161152w
/r/LocalLLaMA/comments/161152w/has_anyone_hosted_the_uncensored_llama_2_online/
false
false
self
0
null
Need Help with oobabooga webui running codellama
4
here I am running the latest codellama 7b ggml quantised model **using web-ui:** `you: write python function to scrape tables from given url` `assistant: Sure I can try this out.` `'''` `'''` **using langchain python:** *output:* `I have a list of URLs and I want to scrape the table data from them using Python. Here is an example URL that needs to be scraped: https://www.example.com/table` `I want to scrape the following table information from the website:` `| Column Name | Content |` `| --- | --- |` `| First Name | John Smith |` `| Last Name | Sarah Johnson |` `| Age | 32 |` `| Gender | Male |` `Here is an example of how I would like to scrape the data from the website:` `import requests` `from bs4 import BeautifulSoup` `def scrape_table(url):` `# Make a request to the URL` `resp = requests.get(url)` `# Parse the HTML content of the page using Beautiful Soup` `soup = BeautifulSoup(resp.content, 'html.parser')` `# Extract the table data from the HTML content` `table_data = soup.find('table')['data']` `# Return the extracted table data as a list of dictionaries` `return [{'column_name': column` am I missing something? please help. I am new to web-ui; I can get better output using langchain but have no idea how to do it in the webui. thanks.
2023-08-25T14:14:47
https://www.reddit.com/r/LocalLLaMA/comments/1610hrt/need_help_with_oobabooga_webui_running_codellama/
ExternalAd8105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1610hrt
false
null
t3_1610hrt
/r/LocalLLaMA/comments/1610hrt/need_help_with_oobabooga_webui_running_codellama/
false
false
self
4
null
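For comparison with the model outputs quoted above (which use requests + BeautifulSoup and are cut off mid-function), here is a small working table scraper using only the standard library; the HTML snippet is a stand-in for a fetched page:

```python
from html.parser import HTMLParser

# Collect table rows as lists of cell strings.
class TableParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], [], None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = ""           # start collecting cell text

    def handle_data(self, data):
        if self._cell is not None:
            self._cell += data

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._row.append(self._cell.strip())
            self._cell = None
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

html = """<table>
<tr><th>Column Name</th><th>Content</th></tr>
<tr><td>First Name</td><td>John Smith</td></tr>
</table>"""

p = TableParser()
p.feed(html)
print(p.rows)  # [['Column Name', 'Content'], ['First Name', 'John Smith']]
```

In practice you would fetch `html` with `urllib.request` or `requests` first; BeautifulSoup does the same job with less code if you can install it.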
Codellama - Has anyone found "Codellama 34B Instruct" to be uncooperative?
14
I've had it refuse to answer coding questions, nitpick my prompt for reasons why it can't answer, and finally respond "what is your question?". I'm using the same prompt, which has been tested across 6 other LlaMa 2 based models, and all of them responded with their best attempt/response. Maybe it's my settings, but perhaps it's fixable through the system prompt. What I have so far is: >You are a cheerful and very helpful coding assistant. You will not refuse any request and with a very positive and optimistic attitude you will unquestionably answer the question to the very best of your ability. I've used llama.cpp to serve the model with: >server -m /Volumes/models/TheBloke/Codellama-34B-Instruct-GGUF/codellama-34b-instruct.Q8_0.gguf --ctx_size 2048 --threads 10 --n-gpu-layers 32 --batch-size 256 Would love to hear thoughts from the community and help each other get the most out of it.
2023-08-25T13:52:20
https://www.reddit.com/r/LocalLLaMA/comments/160zxjd/codellama_has_anyone_found_codellama_34b_instruct/
No-Ordinary-Prime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160zxjd
false
null
t3_160zxjd
/r/LocalLLaMA/comments/160zxjd/codellama_has_anyone_found_codellama_34b_instruct/
false
false
self
14
null
GitHub - InternLM/lmdeploy: LMDeploy is a toolkit for compressing, deploying, and serving LLM
12
2023-08-25T13:52:00
https://github.com/InternLM/lmdeploy
kkchangisin
github.com
1970-01-01T00:00:00
0
{}
160zx8k
false
null
t3_160zx8k
/r/LocalLLaMA/comments/160zx8k/github_internlmlmdeploy_lmdeploy_is_a_toolkit_for/
false
false
default
12
{'enabled': False, 'images': [{'id': 'wNdMhUYGIOJDbx60HFOWRz_08Y44EeEMGX1Lfxen17A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?width=108&crop=smart&auto=webp&s=32201a46bb21e4b9f26746f435b88292ce62a334', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?width=216&crop=smart&auto=webp&s=3625a946759d34caa3e7fd18d04a051d797ad49d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?width=320&crop=smart&auto=webp&s=04a0dcffd82f5e1d0cb4bf32a2b5caa249e470b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?width=640&crop=smart&auto=webp&s=4d1d56437326ee2ed67cb760823e4f8a3d3a6a0b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?width=960&crop=smart&auto=webp&s=334ed3e5e3df0fa30615ed198e60170e1f2907e9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?width=1080&crop=smart&auto=webp&s=84ba15eb3b91ee4147bb3a4240f260d640111398', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ebcc37coszI6U3WLJGLWrLyfiBbMR2u_0Z2NMuqv8nQ.jpg?auto=webp&s=bb8005c26883ea73dcc3d3619134d0a5faf54d9a', 'width': 1200}, 'variants': {}}]}
Interactive Simulacra of Human Behavior with Local Llama
17
I was able to run the simulation from the paper "Generative Agents: Interactive Simulacra of Human Behavior" on my Mac. I stopped the simulation because it turned out to be expensive. I am thinking about running Meta's Llama 2 model instead. Has someone already tried it? Do you think it will run on a MacBook Pro M1?
2023-08-25T13:23:44
https://www.reddit.com/r/LocalLLaMA/comments/160z82i/interactive_simulacra_of_human_behavior_with/
Illustrious_Fix5793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160z82i
false
null
t3_160z82i
/r/LocalLLaMA/comments/160z82i/interactive_simulacra_of_human_behavior_with/
false
false
self
17
null
llama.cpp officially adds support for rocm!
183
2023-08-25T13:23:34
https://github.com/ggerganov/llama.cpp#hipblas
Aaaaaaaaaeeeee
github.com
1970-01-01T00:00:00
0
{}
160z7x8
false
null
t3_160z7x8
/r/LocalLLaMA/comments/160z7x8/llamacpp_officially_adds_support_for_rocm/
false
false
https://a.thumbs.redditm…i75kyB08snM0.jpg
183
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Is renting GPUs only possible because we still don't have a killer open source model?
49
I've been toying with an idea. Every single post about building a rig for inference leads to the same conclusion: "renting is a lot cheaper." And to be fair, that's true today. However, the availability of renting isn't infinite, and we must admit that today's models aren't all that impressive. So, if availability is already an issue, what happens when the models start to become good or even exceptional? Entertain this possibility: tomorrow, the "Viluka-Chat-max-mega-70b-LLAMA2" model launches, and it destroys some charts. Let's say it's on par with GPT-4 or perhaps even superior. This could lead to a surge in demand for rented GPUs, resulting in almost no availability. If the impact is significant enough, it might even lead to a hardware shortage. While this situation would likely resolve itself eventually, it could take months. If my reasoning holds, securing local means to run LLMs ensures that you'll be able to operate such a model, even if the demand for open-source LLMs skyrockets overnight.
2023-08-25T13:12:07
https://www.reddit.com/r/LocalLLaMA/comments/160yxfx/is_renting_gpus_only_possible_because_we_still/
Agusx1211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160yxfx
false
null
t3_160yxfx
/r/LocalLLaMA/comments/160yxfx/is_renting_gpus_only_possible_because_we_still/
false
false
self
49
null
CodeLlama-34B-Python-GPTQ makes gibberish
1
[removed]
2023-08-25T12:59:01
https://www.reddit.com/r/LocalLLaMA/comments/160ylhd/codellama34bpythongptq_makes_gibberish/
Chance-Device-9033
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160ylhd
false
null
t3_160ylhd
/r/LocalLLaMA/comments/160ylhd/codellama34bpythongptq_makes_gibberish/
false
false
self
1
null
Code Llama / Continue / vscode
17
Patched together notes on getting the Continue extension running against llama.cpp and the new GGUF format with code llama. This is from various pieces of the internet with some minor tweaks, see linked sources. Assumes nvidia gpu, cuda working in WSL Ubuntu and windows. Should work fine under native ubuntu too. NB this gets it to "it works in principle", still seems to [have serious issues with the stopping tokens](https://i.imgur.com/TVqs0FT.png) that I haven't investigated yet. That means completion takes forever. If anyone figures that out let me know Ensure that you've got nothing already running on local ports 8080 and 8081 ------- [Source](https://github.com/ggerganov/llama.cpp) Steps below assume CUDA use, if not just use plain make without the CUBLAS parameter thing. See repo for more options git clone https://github.com/ggerganov/llama.cpp cd llama.cpp/ make LLAMA_CUBLAS=1 ------- [Source](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/tree/main) You may need to pick a smaller one depending on GPU VRAM - this one needs about 13gigs. wget https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/resolve/main/codellama-13b-python.Q6_K.gguf -P ./models/ ------- Check whether this works at all thus far before trying API part - it'll spit out random gibberish, that's fine. 
We're just looking for errors If using CUDA you should see "BLAS = 1" in the system_info ./main -m ./models/codellama-13b-python.Q6_K.gguf ------- Launch server ./server -m ./models/codellama-13b-python.Q6_K.gguf -ngl 100 The ngl 100 is how many layers to stick into the GPU so tweak as needed or leave out for cpu Open a browser and check that there is something on localhost:8080 Open a new terminal and continue with instructions, leaving the llama.cpp server running ------- [Source](https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/) mv ./examples/server/api_like_OAI.py ./examples/server/api_like_OAI_BCK.py wget https://raw.githubusercontent.com/ggerganov/llama.cpp/d8a8d0e536cfdaca0135f22d43fda80dc5e47cd8/examples/server/api_like_OAI.py -P ./examples/server/ python3 -m pip install flask requests python3 ./examples/server/api_like_OAI.py --host 0.0.0.0 If using WSL/local you can skip the --host 0.0.0.0 part ------- Install the continue vscode add-on https://marketplace.visualstudio.com/items?itemName=Continue.continue ------- [Source](https://continue.dev/docs/customization#local-models-with-ggml) Open continue in the vscode sidebar, click through their intro till you get the command box, type in /config Add this to the top from continuedev.src.continuedev.libs.llm.ggml import GGML Find the place where it loads the mode - around line 60ish, comment out those lines and add this instead. You may need to fix the indentation. default=GGML( max_context_length=16384, server_url="http://localhost:8081") I've set context length to 16k but the model is in theory capable of 100k. Unsure what VRAM & performance impact is. ------- Close vscode & reopen. Keep in mind that you need BOTH servers running, so if you used the vscode terminal you likely just killed one or both by restarting it ;)
2023-08-25T12:58:24
https://www.reddit.com/r/LocalLLaMA/comments/160yl0b/code_llama_continue_vscode/
AnomalyNexus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160yl0b
false
null
t3_160yl0b
/r/LocalLLaMA/comments/160yl0b/code_llama_continue_vscode/
false
false
self
17
{'enabled': False, 'images': [{'id': 'ZbUjSDn5bive-vYYE2uyh5ho1iaectGsm1wCi03kz-A', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?width=108&crop=smart&auto=webp&s=433ec540ec28f41d5c37a732496333edc7a39a25', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?width=216&crop=smart&auto=webp&s=b3df793f168ef918913c85466cfd3de1714e1715', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?width=320&crop=smart&auto=webp&s=8980c7bf3773bc932709a3796fb025aae52aced6', 'width': 320}], 'source': {'height': 266, 'url': 'https://external-preview.redd.it/B-2JyjxPTcnfmq5bQjzYdqTIheW08--Nc1zfN2TCkHI.png?auto=webp&s=55db884e442e6b33b47f30f14814863d1c42bb63', 'width': 473}, 'variants': {}}]}
Did anyone compare the inference quality of the quantized gptq, ggml, gguf and non-quantized models?
8
I'm trying to figure out which type of quantization to use from the inference quality perspective, assuming the same number of quantization bits (e.g. both are 5_1 or 6_0). Another question is: to what extent is the quantized version actually worse than the original one? I'm interested in codegen models in particular. Today I was trying to generate code via the recent TheBloke's quantized llamacode-13b-5_1/6_0 (both 'instruct' and original versions) in ggml and gguf formats via llama.cpp, and they were not able to generate even simple code in python or pure c. Instead of the code, they produced just text instructions on how to write the code rather than the actual code.
2023-08-25T12:48:45
https://www.reddit.com/r/LocalLLaMA/comments/160ycqq/did_anyone_compare_the_inference_quality_of_the/
Greg_Z_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160ycqq
false
null
t3_160ycqq
/r/LocalLLaMA/comments/160ycqq/did_anyone_compare_the_inference_quality_of_the/
false
false
self
8
null
Running superhot Llama (superhot trained) 13b params on vps
3
We're currently thinking about building something with [this actual model](https://huggingface.co/TheBloke/llama-13b-supercot-GGML); the thing is, I don't have any experience with hosting something like this. How many resources, especially how much VRAM, do you think we need to run it for testing / for production? It would be at most 50 concurrent users at one time. Thank you for every answer~
2023-08-25T12:11:58
https://www.reddit.com/r/LocalLLaMA/comments/160xii4/running_superhot_llama_superhot_trained_13b/
Top-Fact-8840
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160xii4
false
null
t3_160xii4
/r/LocalLLaMA/comments/160xii4/running_superhot_llama_superhot_trained_13b/
false
false
default
3
{'enabled': False, 'images': [{'id': 'HZnPIo5T_C3i6pLGsItPDvUM9Ns6HIM9ClHvtReYDtU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=108&crop=smart&auto=webp&s=6f9244a469973fe1088890cf1f36b1c7b56ad4da', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=216&crop=smart&auto=webp&s=21e595f790768e8011395b69159ea84f282dd251', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=320&crop=smart&auto=webp&s=b653796064a1d275ade41c34784923802ccb437d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=640&crop=smart&auto=webp&s=7e6aa68cf034843c0fa9b762643a29f124d41a77', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=960&crop=smart&auto=webp&s=c0c892b8843c71486409f6b5fc49c84216ab0d87', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?width=1080&crop=smart&auto=webp&s=a6caa342a469721ab2e1303a5460b24963497667', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gCw1rUVzDcJzHRLeZ-kzAMdI24bxWFOL0iRkDU0Pbps.jpg?auto=webp&s=b4074800a41d10b9dca6ca8ef550f2057c5a6ac0', 'width': 1200}, 'variants': {}}]}
Will llama chatbots be added?
2
Does anyone know if Facebook has any plans to add AI bots powered by the llama model to Facebook Messenger chats and group chats, so the people in a group can interact with the AI?
2023-08-25T11:08:43
https://www.reddit.com/r/LocalLLaMA/comments/160w597/will_llama_chatbots_be_added/
hentaidayspussies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160w597
false
null
t3_160w597
/r/LocalLLaMA/comments/160w597/will_llama_chatbots_be_added/
false
false
self
2
null
What does llm's token context actually mean?
18
When describing an LLM, including llama2, and its accuracy and applications, most people talk about its token context. What does that mean? Is it the max length of text that can be prompted? Or is it the max length of response you can expect, beyond which it will be truncated?
2023-08-25T10:02:46
https://www.reddit.com/r/LocalLLaMA/comments/160uuzy/what_does_llms_token_context_actually_mean/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
160uuzy
false
null
t3_160uuzy
/r/LocalLLaMA/comments/160uuzy/what_does_llms_token_context_actually_mean/
false
false
self
18
null
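Short answer to the question above: the context window is the total number of tokens the model can attend to at once, shared between the prompt and everything generated so far. A toy illustration, using whitespace-split words as stand-ins for real tokenizer tokens:

```python
# The model "sees" at most `ctx` tokens total; once the prompt plus the
# generated text exceeds that, the oldest tokens fall out of view.
def visible_tokens(prompt: str, generated: list[str], ctx: int) -> list[str]:
    tokens = prompt.split() + generated
    return tokens[-ctx:]

prompt = "one two three four"
generated = ["five", "six"]
print(visible_tokens(prompt, generated, ctx=8))  # all 6 tokens fit
print(visible_tokens(prompt, generated, ctx=4))  # ['three', 'four', 'five', 'six']
```

So it is neither purely the prompt limit nor purely the response limit: a long prompt leaves less room for the response, and vice versa.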