Dataset columns:
title: string (1–300 chars)
score: int64 (0–8.54k)
selftext: string (0–41.5k chars)
created: timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14)
url: string (0–878 chars)
author: string (3–20 chars)
domain: string (0–82 chars)
edited: timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (7 chars)
locked: bool (2 classes)
media: string (646–1.8k chars)
name: string (10 chars)
permalink: string (33–82 chars)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (4–213 chars)
ups: int64 (0–8.54k)
preview: string (301–5.01k chars)
Pre-Training Artifacts in Falcon-740
1
[removed]
2023-07-29T20:59:01
https://www.reddit.com/r/LocalLLaMA/comments/15d2qo4/pretraining_artifacts_in_falcon740/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15d2qo4
false
null
t3_15d2qo4
/r/LocalLLaMA/comments/15d2qo4/pretraining_artifacts_in_falcon740/
false
false
self
1
null
Anyone know how I can use llama 2 to do roleplay?
2
I've seen people on 4chan doing roleplay with Llama 2, so I want to try it myself, but using Google Colab and Llama 2.
2023-07-29T20:24:03
https://www.reddit.com/r/LocalLLaMA/comments/15d1xdp/anyone_know_how_i_can_use_llama_2_to_do_roleplay/
juanse2609
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15d1xdp
false
null
t3_15d1xdp
/r/LocalLLaMA/comments/15d1xdp/anyone_know_how_i_can_use_llama_2_to_do_roleplay/
false
false
self
2
null
Parsing ArXiV Document/Paper into Sections
3
Hey everyone! Is there a way to parse a downloaded arXiv PDF into its constituent sections via Python? I want to get rid of all tables and figures and use only the text present in the document. Thank you.
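A minimal sketch of one way to do this, assuming the PDF text has already been extracted to a plain string (e.g. with a PDF library such as pymupdf); the heading pattern and function name are illustrative, not from any particular library:

```python
import re

def split_sections(text: str) -> dict:
    """Split extracted paper text into sections keyed by heading.

    Assumes headings look like '1 Introduction' or '2.1 Method'
    on their own line; everything before the first heading goes
    under 'preamble'.
    """
    heading = re.compile(r"^(\d+(?:\.\d+)*)\s+([A-Z][^\n]{0,80})$", re.M)
    sections, last_key, last_end = {}, "preamble", 0
    for m in heading.finditer(text):
        sections[last_key] = text[last_end:m.start()].strip()
        last_key = f"{m.group(1)} {m.group(2).strip()}"
        last_end = m.end()
    sections[last_key] = text[last_end:].strip()
    return sections
```

Tables and figures usually survive extraction as stray short lines, so a real pipeline would also filter non-sentence lines before splitting.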
2023-07-29T19:45:13
https://www.reddit.com/r/LocalLLaMA/comments/15d10ws/parsing_arxiv_documentpaper_into_sections/
psj_2908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15d10ws
false
null
t3_15d10ws
/r/LocalLLaMA/comments/15d10ws/parsing_arxiv_documentpaper_into_sections/
false
false
self
3
null
Subreddit Wiki is forbidden (including the model page)
1
[removed]
2023-07-29T19:18:06
https://www.reddit.com/r/LocalLLaMA/comments/15d0efz/subreddit_wiki_is_forbidden_including_the_model/
All_men_are_brothers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15d0efz
false
null
t3_15d0efz
/r/LocalLLaMA/comments/15d0efz/subreddit_wiki_is_forbidden_including_the_model/
false
false
self
1
null
Oobabooga same output, repetition
7
Anyone know why the Oobabooga UI keeps regenerating the same text while KoboldCpp does not?
2023-07-29T18:32:55
https://www.reddit.com/r/LocalLLaMA/comments/15czc3x/oobabooga_same_output_repetition/
Emergency_Drink_7063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15czc3x
false
null
t3_15czc3x
/r/LocalLLaMA/comments/15czc3x/oobabooga_same_output_repetition/
false
false
self
7
null
llama 13b is performing worse than llama 7b?
18
I am using both models for the same use case: question answering over a PDF document, with llama-2-7b-chat-hf and llama-2-13b-chat-hf. With the rest of the pipeline unchanged, the responses I'm getting from the 13B version are significantly worse than from the 7B counterpart. The 7B model provides good answers with a decent output length most of the time; the 13B model either gives very short, curt responses, or says it doesn't have the context to answer the question (which isn't true, since the relevant info is passed to the model in the prompt). Does anyone know why this is the case, or how I could fix it? I am using the same hyperparameters for both models. My next step was going to be fine-tuning, but the base performance of 13B is worse than that of 7B. Is anyone else seeing this, or is there a problem with my hyperparameters? Maybe fine-tuning will make the 13B variant answer better, but I wanted some guidance before I commit to anything.
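One possible culprit (an assumption on my part, not something the post confirms) is prompt formatting: the `-chat-hf` models are tuned on a specific chat template, and a larger model can be more sensitive to deviations from it. A small helper that builds the documented Llama 2 chat format:

```python
def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system message and user turn in the Llama 2 chat
    template that the llama-2-*-chat-hf models were fine-tuned on."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
```

Comparing responses with and without the exact template is a cheap first check before investing in fine-tuning.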
2023-07-29T18:18:39
https://www.reddit.com/r/LocalLLaMA/comments/15cyzyw/llama_13b_is_performing_worse_than_llama_7b/
comical_cow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cyzyw
false
null
t3_15cyzyw
/r/LocalLLaMA/comments/15cyzyw/llama_13b_is_performing_worse_than_llama_7b/
false
false
self
18
null
Out of CUDA memory with Llama2-13B on RTX4090
18
First attempts, so I'm probably being a moron :( Help please.

Config: **1 x RTX 4090 - 16 vCPU, 83 GB RAM** on RunPod via TheBloke's TextGen UI. Model is **meta-llama/Llama-2-13b-hf**. Error message:

`torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB (GPU 0; 23.65 GiB total capacity; 22.68 GiB already allocated; 43.69 MiB free; 22.68 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF`
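For a sense of scale (illustrative arithmetic, not a measurement): fp16 weights alone for a 13B model already exceed a 24 GB card, before counting activations and the KV cache:

```python
def weight_memory_gib(n_params_billion: float, bytes_per_param: float) -> float:
    """Weight-only footprint in GiB; ignores activations and KV cache."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

fp16_gib = weight_memory_gib(13, 2)    # ~24.2 GiB, more than the 23.65 GiB the card reports
int4_gib = weight_memory_gib(13, 0.5)  # ~6.1 GiB after 4-bit quantization
```

So loading the full-precision HF checkpoint on a single 4090 is expected to fail; a quantized variant (GPTQ/GGML, or `load_in_4bit` in transformers) is the usual workaround.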
2023-07-29T17:13:50
https://www.reddit.com/r/LocalLLaMA/comments/15cxi0n/out_of_cuda_memory_with_llama213b_on_rtx4090/
olddoglearnsnewtrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cxi0n
false
null
t3_15cxi0n
/r/LocalLLaMA/comments/15cxi0n/out_of_cuda_memory_with_llama213b_on_rtx4090/
false
false
self
18
null
Finetuned WizardCoder for Text-To-SQL Tasks
1
### Demo

https://huggingface.co/spaces/richardr1126/sql-skeleton-wizardcoder-demo

# Introduction

This project aims to use off-the-shelf large language models for text-to-SQL program synthesis tasks. After experimenting with various models, fine-tuning hyperparameters, and training datasets, an optimal solution was identified by fine-tuning the [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) base model with QLoRA on [this](https://huggingface.co/datasets/richardr1126/spider-context-validation) customized Spider training dataset. The resulting model, [richardr1126/spider-skeleton-wizard-coder-merged](https://huggingface.co/richardr1126/spider-skeleton-wizard-coder-merged), demonstrates **61% execution accuracy** when evaluated. The project uses a custom validation dataset that incorporates database context into the question. A live demonstration of the model is available on a Hugging Face Space, with a user-friendly GUI built with Gradio.

### Spider Skeleton WizardCoder - [test-suite-sql-eval](https://github.com/taoyds/test-suite-sql-eval) Results

With temperature set to 0.0, top_p set to 0.9, and top_k set to 0, the model achieves **61% execution accuracy** on the Spider dev set.

<img src="https://raw.githubusercontent.com/cuplv/text-to-sql-wizardcoder/main/eval/plots/spiderwizard-plus-chatgpt.svg" height="300"> <img src="https://raw.githubusercontent.com/cuplv/text-to-sql-wizardcoder/main/eval/plots/spiderwizard-vs-chatgpt.svg" height="300">

Note:

- ChatGPT was evaluated with the default hyperparameters and with the system message `You are a sophisticated AI assistant capable of converting text into SQL queries. You can only output SQL, don't add any other text.`
- Both models were evaluated with `--plug_value` in `evaluation.py` using the Spider dev set with database context.
- `--plug_value`: if set, the gold value is plugged into the predicted query. This is suitable if your model does not predict values. This is set to `False` by default.
2023-07-29T17:08:12
https://huggingface.co/richardr1126/spider-skeleton-wizard-coder-merged
richardr1126
huggingface.co
1970-01-01T00:00:00
0
{}
15cxdfn
false
null
t3_15cxdfn
/r/LocalLLaMA/comments/15cxdfn/finetuned_wizardcoder_for_texttosql_tasks/
false
false
https://b.thumbs.redditm…-7yur9ixCt8w.jpg
1
{'enabled': False, 'images': [{'id': '24K_G7zmjIzC04FcpC6WpnL85CEiB-51givrlByGgQk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=108&crop=smart&auto=webp&s=4982f598fe7580ad3aabb483b6f99b9560fdbd7d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=216&crop=smart&auto=webp&s=8e25011664168eb7a83ab744b4552ecb3f919573', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=320&crop=smart&auto=webp&s=4dbcfc76b9a2448f3cbe4095218b060c19142d98', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=640&crop=smart&auto=webp&s=9b12514baf644251ed988065171ce218fbba6dd0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=960&crop=smart&auto=webp&s=16f097d2cf79fe398fe75159b21ef2987e06229f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?width=1080&crop=smart&auto=webp&s=9c3beba2b4dca8b654fc53feecfc67cb5a57619b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-BATHQ9bg4kKD0XuvUwHHN4hYq1PYpUzQt0phM3zRF8.jpg?auto=webp&s=0b41ec670d66d33a684a73b4f5b2761a0b129852', 'width': 1200}, 'variants': {}}]}
Best library to use llama 2 in python project.
3
I want to start a Python project that uses Llama 2 to predict some text. I haven't done anything with LLaMA-like models before, apart from using ChatGPT through its API. What are the best Python libraries for easily running Llama 2 inference? Preferably with all the major optimizations implemented, since I have only 8 GB of GPU memory. I would also like an option to see the probability of a given continuation, the method commonly used in papers, although that is not strictly necessary.
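On the last point: the "probability of a given continuation" used in papers is the product of the per-token probabilities, i.e. the exponential of the summed log-probs. A sketch of just the scoring step, assuming you already have the continuation's per-token log-probabilities from whichever backend you pick:

```python
import math

def continuation_logprob(token_logprobs: list[float]) -> float:
    """log P(continuation | prompt) = sum of per-token log-probs."""
    return sum(token_logprobs)

def continuation_prob(token_logprobs: list[float]) -> float:
    """Probability of the whole continuation (can underflow for long texts,
    so most papers report the log-prob or its per-token average instead)."""
    return math.exp(continuation_logprob(token_logprobs))
```

Backends differ in how they expose per-token log-probs (e.g. from the model's logits in transformers, or a logprobs option in llama.cpp bindings), so check the docs of whichever library you choose.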
2023-07-29T17:04:33
https://www.reddit.com/r/LocalLLaMA/comments/15cxae8/best_library_to_use_llama_2_in_python_project/
Another__one
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cxae8
false
null
t3_15cxae8
/r/LocalLLaMA/comments/15cxae8/best_library_to_use_llama_2_in_python_project/
false
false
self
3
null
how to run TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ in uncensored mode on text-generation-webui.
1
[removed]
2023-07-29T17:01:23
https://www.reddit.com/r/LocalLLaMA/comments/15cx7of/how_to_run_theblokeairoborosl270bgpt4141gptq_in/
4rfgvbgredfc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cx7of
false
null
t3_15cx7of
/r/LocalLLaMA/comments/15cx7of/how_to_run_theblokeairoborosl270bgpt4141gptq_in/
false
false
self
1
null
how to run TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ in uncensored mode on text-generation-webui.
1
[removed]
2023-07-29T16:53:11
https://www.reddit.com/r/LocalLLaMA/comments/15cx0x3/how_to_run_theblokeairoborosl270bgpt4141gptq_in/
4rfgvbgredfc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cx0x3
false
null
t3_15cx0x3
/r/LocalLLaMA/comments/15cx0x3/how_to_run_theblokeairoborosl270bgpt4141gptq_in/
false
false
nsfw
1
null
Fine-tuning for custom domain knowledge
44
I would like to see if anyone has had success fine-tuning a LLaMA model for knowledge adoption. There are many examples of successful fine-tuning for style and behavioral steering, but not as many for informational recall. If you know of an example, please share the code and dataset (if possible)!

I have attempted to use 10K examples of synthetic data from ChatGPT-3.5, based on a set of 1.5K email conversations. For some reason, I had more success (still limited) when QLoRA-tuning on the raw emails themselves. With the raw emails I at least managed to achieve thematic understanding of people and events. The synthetic data resulted in virtually zero recall, even at 0.72 loss.

Here is my QLoRA code using the synthetic data:

```python
# Imports added for completeness; tokenizer, df, and outdir are
# assumed to be defined earlier (omitted from the original post).
import json
import torch
import transformers
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from datasets import load_dataset
from peft import LoraConfig, get_peft_model

model_id = "TheBloke/guanaco-7B-HF"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

dataset_dict = {'data': [
    {"input": f"### Human: {prompt} ### Assistant:",
     "output": f"{output}{tokenizer.eos_token}"}
    for prompt, output in zip(df['prompt'], df['output'])
]}
with open('custom_dataset.json', 'w') as f:
    json.dump(dataset_dict, f)

data = load_dataset("json", data_files="custom_dataset.json", field='data')
for key in dataset_dict['data'][0].keys():
    data = data.map(lambda samples: tokenizer(samples[key], max_length=2048, truncation=True), batched=True)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0})

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=None,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.resize_token_embeddings(len(tokenizer))

trainer = transformers.Trainer(
    model=model,
    train_dataset=data['train'],
    args=transformers.TrainingArguments(
        max_steps=7000,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        save_steps=500,
        logging_steps=1,
        learning_rate=2e-4,
        max_grad_norm=0.3,
        warmup_ratio=0.02,
        lr_scheduler_type='cosine',
        fp16=True,
        output_dir=outdir,
        optim="paged_adamw_32bit",
        report_to="none",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
trainer.save_model()
```

Here is the structure of "custom_dataset.json":

```json
{"data": [{"input": "### Human: Did the team consider alternative internet providers besides Verizon and Tmobile? ### Assistant:", "output": "It appears that the team had considered using a T-Mobile SIM card as an alternative internet provider, but it was found to be unacceptable. As a result, they switched back to Verizon Broadband. It is not mentioned in the email conversation whether there were any other alternative internet providers considered besides Verizon and T-Mobile.</s>"}, {"input": "### Human: What were the expected benefits of launching xyz product? ### Assistant:", "output": "The benefits were abcxyz"}]}
```
2023-07-29T16:37:05
https://www.reddit.com/r/LocalLLaMA/comments/15cwndg/finetuning_for_custom_domain_knowledge/
rinse_repeat_wash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cwndg
false
null
t3_15cwndg
/r/LocalLLaMA/comments/15cwndg/finetuning_for_custom_domain_knowledge/
false
false
self
44
null
How to authenticate in TextGen UI to download "protected" models from HF?
2
I am trying to download meta-llama/Llama-2-13b-hf from Hugging Face via TextGen but get a 403 error. I am registered on HF and can download via git, but I would like to understand how to do this via the TextGen UI. Thanks.
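For background (a sketch of the mechanism, not of text-generation-webui's internals): gated repos require an `Authorization: Bearer <token>` header on every download request, which is why anonymous downloads return 403. `HF_TOKEN` is the hub's conventional environment variable for the access token:

```python
import os

def hf_auth_headers() -> dict:
    """Build the auth header that gated Hugging Face repos require.
    Assumes the access token is exported as HF_TOKEN
    (e.g. after running `huggingface-cli login`)."""
    token = os.environ.get("HF_TOKEN", "")
    return {"Authorization": f"Bearer {token}"} if token else {}
```

Whichever downloader the UI uses under the hood, the fix is the same: make the token available to the process running it.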
2023-07-29T16:22:46
https://www.reddit.com/r/LocalLLaMA/comments/15cwbf6/how_to_authenticate_in_textgen_ui_to_download/
olddoglearnsnewtrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cwbf6
false
null
t3_15cwbf6
/r/LocalLLaMA/comments/15cwbf6/how_to_authenticate_in_textgen_ui_to_download/
false
false
self
2
null
Renting Cloud GPUs
18
I want to run Llama 2's 70B-chat model to assist another program I'm running. I am unable to run it efficiently on my own machine, so it looks like I'll have to rent GPUs until I can afford to build my own rig. What specs should I be looking out for? How fast would chat completion run? What are the down-sides of crowd-sourced solutions? I'm sorry if the question is too general. I couldn't find a guide post that answered these questions so I thought it'd be best to ask. Since I'm a novice, any advice is appreciated. Thanks in advance.
2023-07-29T15:24:35
https://www.reddit.com/r/LocalLLaMA/comments/15cuycs/renting_cloud_gpus/
felatun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cuycs
false
null
t3_15cuycs
/r/LocalLLaMA/comments/15cuycs/renting_cloud_gpus/
false
false
self
18
null
Is it just me or Llama 2 can't output very long responses in instruct mode?
5
Previous LLaMA models like Airoboros 7B could easily generate 512 new tokens and still want a few more on prompts like "Describe in detail how [...]. Write several paragraphs." But so far, the 7B models I've tried on this prompt run for about 150-200 tokens and consider the task done. Am I doing something wrong, or is that just how Llama 2 works?
2023-07-29T14:23:49
https://www.reddit.com/r/LocalLLaMA/comments/15ctki2/is_it_just_me_or_llama_2_cant_output_very_long/
artoonu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ctki2
false
null
t3_15ctki2
/r/LocalLLaMA/comments/15ctki2/is_it_just_me_or_llama_2_cant_output_very_long/
false
false
self
5
null
Llama text output to speech
2
I'm new to llama and AI/LLM. I'm wondering if it's feasible to use llama as a speaking conversation partner. Assuming Llama can coherently and clearly understand a person speaking, how fast will it be able to generate a response and talk back via TTS?
2023-07-29T14:07:29
https://www.reddit.com/r/LocalLLaMA/comments/15ct7lr/llama_text_output_to_speech/
Such-Stay2346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ct7lr
false
null
t3_15ct7lr
/r/LocalLLaMA/comments/15ct7lr/llama_text_output_to_speech/
false
false
self
2
null
Other GPUs on exllama?
10
If you've used exllama with workstation GPUs, older workstation GPUs (P100, P40), Colab, or AMD cards, could you share results? Does ROCm fit less context per GB, like how flash attention has panned out for SD?
2023-07-29T14:04:31
https://www.reddit.com/r/LocalLLaMA/comments/15ct582/other_gpus_on_exllama/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ct582
false
null
t3_15ct582
/r/LocalLLaMA/comments/15ct582/other_gpus_on_exllama/
false
false
self
10
null
Estimating RAG System Throughput on AWS GPU Instances
8
I'm interested in running a Retrieval-Augmented Generation (RAG) system on AWS GPU instances. I'd like advice on how to estimate the query throughput I could expect, to support a desired number of concurrent users on a given node configuration.

Details: my RAG system uses dense vector retrieval and a LLaMA-type model for generation (13B, 30B, 65B). I'm hoping to support around 5-50 concurrent users submitting queries and getting responses in real time. Each query would retrieve 5 relevant passages (so 5 x 400 tokens) before generating a response. For reference, I'm running this on an ml.g5.48xlarge instance for the 65B model.

What sort of throughput could I expect on this type of node with my planned RAG model? My end goal is to figure out how many concurrent users could be supported before I'd need to scale up the instance size. Any guidance on how to model the relationship between GPU/CPU resources, RAG model complexity, query throughput, and the number of supported concurrent users would be greatly appreciated!
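A back-of-envelope way to frame the question (the numbers below are placeholders, not benchmarks): if decoding dominates, a single sequential stream sustains roughly decode-speed divided by generated-tokens queries per second, and the user count follows from how much latency each user tolerates:

```python
def queries_per_second(decode_tok_per_s: float, gen_tokens: int) -> float:
    """Decode-bound estimate for one sequential stream; ignores prefill and batching."""
    return decode_tok_per_s / gen_tokens

def supported_users(decode_tok_per_s: float, gen_tokens: int,
                    acceptable_latency_s: float) -> int:
    """Users served if each tolerates `acceptable_latency_s` per response."""
    return int(queries_per_second(decode_tok_per_s, gen_tokens) * acceptable_latency_s)

# e.g. a hypothetical 15 tok/s on a 65B model, 300-token answers, 60 s acceptable latency
users = supported_users(15, 300, 60)
```

Continuous batching (as in vLLM-style servers) can raise throughput several-fold, so treat the sequential figure as a floor and benchmark the actual stack.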
2023-07-29T14:01:59
https://www.reddit.com/r/LocalLLaMA/comments/15ct361/estimating_rag_system_throughput_on_aws_gpu/
nuusain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ct361
false
null
t3_15ct361
/r/LocalLLaMA/comments/15ct361/estimating_rag_system_throughput_on_aws_gpu/
false
false
self
8
null
Best LLM for coding?
97
What's the best open source LLM for coding? Specifically, I want it to take my code and the correct code and explain to me why my code is wrong.
2023-07-29T13:42:44
https://www.reddit.com/r/LocalLLaMA/comments/15csnst/best_llm_for_coding/
Chirumer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15csnst
false
null
t3_15csnst
/r/LocalLLaMA/comments/15csnst/best_llm_for_coding/
false
false
self
97
null
Custom home server or pc upgrade ?
7
I currently have a MacBook Pro 14 with an M1 processor and a desktop with:

- AMD Ryzen 7 1700X eight-core processor
- MSI Nvidia GTX 1080
- 32 GB DDR3 RAM
- 1 x 256 GB NVMe
- 1 x 500 GB SSD
- 1 x 1 TB hard drive

I tried 13B-parameter models such as LLaMA, which still work but are quite slow. Anything bigger seems to be too much. As I am currently writing my thesis, I have started trying all kinds of models to help with my writing; I tried GPT4All and now LM Studio with various models. Because of secrecy agreements I am not allowed to use any online tools for my thesis, which is why I have resorted to local language tools. Renting an online server from a company also isn't an option.

I don't want to build this setup just for the thesis, though; I also want to use it afterwards for hobbies, experiments, and work. That raises the question of which path I should take:

1. Upgrade my PC hardware; if so, what would you recommend?
2. Get a dedicated home server
3. Get a proper server rack

On the one hand, I would like the option to expand performance later. I also don't want to spend a fortune from the beginning, but I'm open to spending more later. Which solution would you recommend?
2023-07-29T13:32:17
https://www.reddit.com/r/LocalLLaMA/comments/15csfuk/custom_home_server_or_pc_upgrade/
AntiqueSalamander310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15csfuk
false
null
t3_15csfuk
/r/LocalLLaMA/comments/15csfuk/custom_home_server_or_pc_upgrade/
false
false
self
7
null
This is Voxta - We made her - This is all real time
51
2023-07-29T13:13:17
https://v.redd.it/mr1wyyrvlweb1
ReignMocap
/r/LocalLLaMA/comments/15cs1kr/this_is_voxta_we_made_her_this_is_all_real_time/
1970-01-01T00:00:00
0
{}
15cs1kr
false
{'reddit_video': {'bitrate_kbps': 0, 'dash_url': 'https://v.redd.it/mr1wyyrvlweb1/DASHPlaylist.mpd?a=1693314815%2CY2IzMWJkZjcxNTI1YWFlZjA5OTFmN2ViNmY2NTkyY2Q2M2M5ZjVjY2I0NjhiMmNjMDQyZWQzMjA5ZDI0YjUzMA%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/mr1wyyrvlweb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/mr1wyyrvlweb1/HLSPlaylist.m3u8?a=1693314815%2CMDdlNjEzMTE5OWIxZDE0ZGY1Mzk2YzJhMTc0ZTEzZTI5ZmUzNmMxZDg0NTk4NTUyNjFkOTdlZDAxYmM0NDU3Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mr1wyyrvlweb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_15cs1kr
/r/LocalLLaMA/comments/15cs1kr/this_is_voxta_we_made_her_this_is_all_real_time/
false
false
nsfw
51
{'enabled': False, 'images': [{'id': 'UvKYWTvRhmHHVlaIZseqdwSKiRUQNk_hhgDFZyh_jU0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=108&crop=smart&format=pjpg&auto=webp&s=f0af2f593a717e3794568e74c0424b033b64cbcb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=216&crop=smart&format=pjpg&auto=webp&s=69b67c4aa4b6eda1d27f35e738198a3190c1735d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=320&crop=smart&format=pjpg&auto=webp&s=6c482190ea468ee0e82d1c56702b3adc53731940', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=640&crop=smart&format=pjpg&auto=webp&s=db52c78cc62fdc7551ff9594c13affd39ca633fb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=960&crop=smart&format=pjpg&auto=webp&s=6d749c15b1a25c850a44fc15a68d905a47e14810', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=1080&crop=smart&format=pjpg&auto=webp&s=987cd814f74120cd5071d377e14a3a2c4e8f3d78', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?format=pjpg&auto=webp&s=44c35ef7e314cab2d8b8707737099604d02c49f9', 'width': 1920}, 'variants': {'nsfw': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=40d80db1c96757ddf1b493aea701decfb74be239', 'width': 108}, {'height': 121, 'url': 
'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=20cf5b68cc37cddeb5133680bbc5991d2f0e59b1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7493953d53f0dc48fe22d9301886dd04dea47ed2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=983d36e6a1016e326a180b5d117b640decc2a2d7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e30028b248bdeebb9aa8a31ac652c1803353fb76', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5a795d47c6804a1d2ed1a05b4a90f89848a81eaf', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?blur=40&format=pjpg&auto=webp&s=26b8cdb0ecd3880c47ba9e1489b57c80f9d3643f', 'width': 1920}}, 'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=40d80db1c96757ddf1b493aea701decfb74be239', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=20cf5b68cc37cddeb5133680bbc5991d2f0e59b1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7493953d53f0dc48fe22d9301886dd04dea47ed2', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=983d36e6a1016e326a180b5d117b640decc2a2d7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e30028b248bdeebb9aa8a31ac652c1803353fb76', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5a795d47c6804a1d2ed1a05b4a90f89848a81eaf', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2PBC_1q67ZKoJ6K7F8UmmwcDwyBhMVCWP8E8b3UbQ4E.png?blur=40&format=pjpg&auto=webp&s=26b8cdb0ecd3880c47ba9e1489b57c80f9d3643f', 'width': 1920}}}}]}
Ctransformers now support LLaMa 2 70B
20
2023-07-29T12:41:51
https://github.com/marella/ctransformers/releases
Acrobatic-Site2065
github.com
1970-01-01T00:00:00
0
{}
15crdwa
false
null
t3_15crdwa
/r/LocalLLaMA/comments/15crdwa/ctransformers_now_support_llama_2_70b/
false
false
https://b.thumbs.redditm…TzOAh61l7Fsk.jpg
20
{'enabled': False, 'images': [{'id': 's7thFEAvi5RcBWVol6CRCa9kohe_K86XMAL9nrkyDUA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=108&crop=smart&auto=webp&s=280186b154e3618a038e3140e827bafc535750bc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=216&crop=smart&auto=webp&s=1287468849a98990d0d198817c421b12a5be8b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=320&crop=smart&auto=webp&s=9f05d9dd40db0f419e4553612561af0962187177', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=640&crop=smart&auto=webp&s=b57d90399dd04e3fa51fe176bc68bc9706b6f8d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=960&crop=smart&auto=webp&s=8169b7af330ed3c655f802057886a1b17494d7b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?width=1080&crop=smart&auto=webp&s=4a0d50702feff792b6543b75398bafd2d7a95c0f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v2U5P1rYB91SIqAGk1tweDnQYAPubWF9Rb1_eYqwCEE.jpg?auto=webp&s=5506c692c67967529247ad84e6d228d7d690fb20', 'width': 1200}, 'variants': {}}]}
I don't think llama likes me
85
2023-07-29T12:10:19
https://i.redd.it/42uh2i9qaweb1.png
SrDigbyChickenCeaser
i.redd.it
1970-01-01T00:00:00
0
{}
15cqrri
false
null
t3_15cqrri
/r/LocalLLaMA/comments/15cqrri/i_dont_think_llama_likes_me/
false
false
https://b.thumbs.redditm…X5xvmBm5fkHU.jpg
85
{'enabled': True, 'images': [{'id': 'O1mDIt-v4q9M6zmrQopfIH7VVuYmthF9HiJ4NdTtkAA', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=108&crop=smart&auto=webp&s=b751832ac8f1194dcd80b336ea00aab9d6880c66', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=216&crop=smart&auto=webp&s=e0352b97aaf934e1ba18c91decba878f2f4363b0', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=320&crop=smart&auto=webp&s=bce96d99aea313bb5f9657440a5aba250bfd4546', 'width': 320}, {'height': 682, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=640&crop=smart&auto=webp&s=0dc2dcb148d95a43906b639253dfee5a4e5f0778', 'width': 640}, {'height': 1023, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=960&crop=smart&auto=webp&s=d685828c24bb160f415dceadb272b800185dd39d', 'width': 960}, {'height': 1151, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?width=1080&crop=smart&auto=webp&s=5932c374eefa93eddcc5b42762bc7e2e21f58e48', 'width': 1080}], 'source': {'height': 1352, 'url': 'https://preview.redd.it/42uh2i9qaweb1.png?auto=webp&s=400f9c77b3a114cd59b0cb7417279f87c2852e30', 'width': 1268}, 'variants': {}}]}
I am getting worse llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin than airoboros-65B-gpt4-1.2-GGML
14
llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin, "think step by step": [https://paste.c-net.org/FatigueTended](https://paste.c-net.org/FatigueTended)

Relationship entity prompt: [https://paste.c-net.org/AlvinDollars](https://paste.c-net.org/AlvinDollars)

airoboros-65B-gpt4-1.2-GGML did much better: [https://www.reddit.com/r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/](https://www.reddit.com/r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/)

I thought llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin was meant to be better? Am I using it wrong? Thanks :)
2023-07-29T11:06:38
https://www.reddit.com/r/LocalLLaMA/comments/15cplvh/i_am_getting_worse_llama270bguanacoqloraggmlv3q8/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cplvh
false
null
t3_15cplvh
/r/LocalLLaMA/comments/15cplvh/i_am_getting_worse_llama270bguanacoqloraggmlv3q8/
false
false
self
14
null
Unveiling the Gender Demography of Emerging Tech Communities (Poll)
0
Comment on what compels you to try out the tech. [View Poll](https://www.reddit.com/poll/15cmq2q)
2023-07-29T08:15:20
https://www.reddit.com/r/LocalLLaMA/comments/15cmq2q/unveiling_the_gender_demography_of_emerging_tech/
runawaychicken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cmq2q
false
null
t3_15cmq2q
/r/LocalLLaMA/comments/15cmq2q/unveiling_the_gender_demography_of_emerging_tech/
false
false
self
0
null
Model suggestion please
1
My need is ingesting newspaper articles, finding named entities (especially people, places, and orgs), and generating a few summary phrases. I've been playing with ChatGPT-4 and the results are much better than my traditional Stanford Stanza NLP approach. What are the best self-hosted models for achieving results similar to (or better than) ChatGPT-4? I have some 650K news articles available. Would it be useful to attempt some optimization of the base model? Any guide/suggestion for this? Thanks a lot.
2023-07-29T07:58:23
https://www.reddit.com/r/LocalLLaMA/comments/15cmfo2/model_suggestion_please/
olddoglearnsnewtrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cmfo2
false
null
t3_15cmfo2
/r/LocalLLaMA/comments/15cmfo2/model_suggestion_please/
false
false
self
1
null
Making KoboldCpp save all output
7
I believe there are some people with zero coding experience who wish KoboldCpp had an option to save all output. I know there is a "save story" button, but maybe you want all of it in one place. It's actually fairly easy; modify `koboldcpp.py` as below. What you want to do is insert the lines that start with the plus sign; the rest is there to help you find where to insert them.

```python
# (near the top, between the imports and stop_token_max)
+logfilename = "logfile.txt"

# you have multiple models, so you may want to record the choice of model
 def load_model(model_filename):
     (omitted)
     ret = handle.load_model(inputs)
+    with open(logfilename, "a", encoding='utf-8') as f:
+        f.write(inputs)
     return ret

# console output happens here, regardless of streaming mode
 class ServerRequestHandler(http.server.SimpleHTTPRequestHandler):
     async def generate_text(self, newprompt, genparams, basic_api_flag, stream_flag):
         (omitted)
         if args.debugmode!=-1:
             utfprint("\nOutput: " + recvtxt)
+        with open(logfilename, "a", encoding='utf-8') as f:
+            f.write(json.dumps(genparams)+"\n")
+            f.write(recvtxt+"\n")
         (omitted)
```
2023-07-29T07:50:40
https://www.reddit.com/r/LocalLLaMA/comments/15cmb4q/making_koboldcpp_save_all_output/
SpecialNothingness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cmb4q
false
null
t3_15cmb4q
/r/LocalLLaMA/comments/15cmb4q/making_koboldcpp_save_all_output/
false
false
self
7
null
Text reclassification prompts/code
2
Hey all, I created some prompts to reclassify text into more meaningful input data for LLM or ML training/fine-tuning. Would anyone be interested or find any value in this? I've also been working on a lot of projects like this and I'm wondering if there are specific communities that are centered around this sort of thing?

An example of text reclassification: I grabbed a training data set off Kaggle [here](https://www.kaggle.com/datasets/anjaneyatripathi/emotion-classification-nlp?resource=download&select=emotion-labels-train.csv) but the emotional classifications they had were a total of 4, and it just classified each line with one (and some of the classifications were wildly wrong), so feeding this into an ML for training would give pretty poor results.

As an example of the reclassification from this file using llama2-7b-uncensored-4bit-ggml locally:

The input file

    i didnt feel humiliated;sadness
    i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness
    im grabbing a minute to post i feel greedy wrong;anger

The output file (CSV format)

    "i didnt feel humiliated",Embarrassed,0.8,Disrespected,0.8,Victimised,0.7
    "i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake",Thankful,1, Content,0.8, Vulnerable,0.6
    "im grabbing a minute to post i feel greedy wrong",Envious,1, Jealousy,0.9, Greediness,0.8

I added the "" in case of commas in the data set being present already, as I didn't want it to break my output.
2023-07-29T07:44:54
https://www.reddit.com/r/LocalLLaMA/comments/15cm7oa/text_reclassification_promptscode/
MutedCatch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cm7oa
false
null
t3_15cm7oa
/r/LocalLLaMA/comments/15cm7oa/text_reclassification_promptscode/
false
false
self
2
{'enabled': False, 'images': [{'id': 'zaCeJ6rv6oJ_XMp0760EfPd4fXPVp97mY42SJCuVNU8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=108&crop=smart&auto=webp&s=67ff44720d8757ccc066104f80e34c8b1352fd39', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=216&crop=smart&auto=webp&s=334e151bf966de87a8ff9afdd41169e3e3df4d74', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=320&crop=smart&auto=webp&s=1ae050e43b3aca73b5ca3dafec00e22381998dc7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=640&crop=smart&auto=webp&s=467ae4f9a2c4a979e01974e571a826d2734c0141', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=960&crop=smart&auto=webp&s=28123b703561eb302078392428edcc495c16a08f', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?width=1080&crop=smart&auto=webp&s=52e577ccb96874965b21aeb7e041f5f3cb6e4b51', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/pE9mM7R8v90jbI3S_gq_iPYp8UGBK4HR_-kfGqIC7GA.jpg?auto=webp&s=d984047593b7a55daaff28c2571e7af43ad6fe49', 'width': 1200}, 'variants': {}}]}
Experience with cerebras/btlm-3b-8k-base
1
Has anybody tried using this model? Can someone give the target_modules of this model to fine-tune with QLoRA?
2023-07-29T05:49:53
https://www.reddit.com/r/LocalLLaMA/comments/15ck91i/experience_with_cerebrasbtlm3b8kbase/
Sufficient_Run1518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ck91i
false
null
t3_15ck91i
/r/LocalLLaMA/comments/15ck91i/experience_with_cerebrasbtlm3b8kbase/
false
false
self
1
null
Ai chatbot that gives robotic like responses?
10
Is there an chatbot that just answers questions without any assistant-like fluff text? Example: 2 + 2 = 4 Instead of: "Sure, here you go: 2 + 2 = 4. Let me know if you need anything else!"
2023-07-29T04:53:46
https://www.reddit.com/r/LocalLLaMA/comments/15cj9ex/ai_chatbot_that_gives_robotic_like_responses/
TheRollingDucat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cj9ex
false
null
t3_15cj9ex
/r/LocalLLaMA/comments/15cj9ex/ai_chatbot_that_gives_robotic_like_responses/
false
false
self
10
null
New model from StabilityAi: Stable Beluga
73
A week ago or so StabilityAI launched the models FreeWilly and FreeWilly2, based on LLaMA 1 65B and Llama 2 70B respectively. Then they renamed them to Stable Beluga, and today they launched new Llama 2-based variants with 7B and 13B parameters. What do you guys think?

Link 7B: https://huggingface.co/stabilityai/StableBeluga-7B
Link 13B: https://huggingface.co/stabilityai/StableBeluga-13B

u/TheBloke already made quants for the 13B models:
GGML: https://huggingface.co/TheBloke/StableBeluga-13B-GGML
GPTQ: https://huggingface.co/TheBloke/StableBeluga-13B-GPTQ
2023-07-29T02:29:06
https://www.reddit.com/r/LocalLLaMA/comments/15cgjej/new_model_from_stabilityai_stable_beluga/
MacacoVelhoKK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cgjej
false
null
t3_15cgjej
/r/LocalLLaMA/comments/15cgjej/new_model_from_stabilityai_stable_beluga/
false
false
self
73
{'enabled': False, 'images': [{'id': 'kULT4QMVrffgHYLCSiGSdCHwSaPimBVWJBNMsitsDc0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=108&crop=smart&auto=webp&s=1a3de829906ed5dbdfef8865371a5246972a72c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=216&crop=smart&auto=webp&s=cae640fdb8d64fa0040379dc518eb617e3f12f73', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=320&crop=smart&auto=webp&s=2982e17becb51dd0e262258910f41851e1ed6a4a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=640&crop=smart&auto=webp&s=af662fe5dcde68099c3366abe671e7c5ee5f5ad8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=960&crop=smart&auto=webp&s=f7fead8c435f2b609bbdb16e83a9a252ada7ce9e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?width=1080&crop=smart&auto=webp&s=9d5c2659c6be2c05daaf9450e9ea6e16d8536206', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Z-Cj42ScBNi2-Yi3QHqXq91FsN_5AtZasUi9tPVXmL4.jpg?auto=webp&s=0351ffd95e204af4d877caf7648b40dd902ce85c', 'width': 1200}, 'variants': {}}]}
GPU requirement question
2
Hello, I see a lot of posts about VRAM being the most important factor for LLM models. So I wonder, does that mean an old Nvidia M10 or an AMD FirePro S9170 (both 32GB) outperforms an AMD Instinct MI50 16GB? Asking because I recently bought 2 new ones and am wondering if I should just sell them and get something else with higher VRAM.
2023-07-29T02:25:23
https://www.reddit.com/r/LocalLLaMA/comments/15cggtf/gpu_requirement_question/
Slight_Bath_4449
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cggtf
false
null
t3_15cggtf
/r/LocalLLaMA/comments/15cggtf/gpu_requirement_question/
false
false
self
2
null
How to reproduce a dataset from a paper called SUR-Adapter?
1
I'm trying to reproduce a dataset used here: [SUR-Adapter](https://github.com/Qrange-group/SUR-adapter). [Examples of SUR-Adapter prompt understanding](https://preview.redd.it/peqeiuxvateb1.jpg?width=12310&format=pjpg&auto=webp&s=ecf8fe4303c3ab3fb0e5f6e3942d4440ea3c20bf) SUR-Adapter uses an LLM like Llama to give Semantic Understanding and Reasoning to a diffusion model. Unfortunately, the author did not want to release the dataset due to potential copyright disputes, and I want to find someone to help recreate this project.
2023-07-29T02:06:53
https://www.reddit.com/r/LocalLLaMA/comments/15cg362/how_to_reproduce_a_dataset_from_a_paper_called/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cg362
false
null
t3_15cg362
/r/LocalLLaMA/comments/15cg362/how_to_reproduce_a_dataset_from_a_paper_called/
false
false
https://b.thumbs.redditm…_yaiYMSaeYKY.jpg
1
null
Beginner help?
18
Hey all! Been reading along for a while and am an avid proponent of AI. I am somewhat capable when it comes to coding but I do feel like I'm somehow fairly behind on being able to grasp concepts behind AI. I am a ChatGPT subscriber (I want to cancel) and I now have been able to run Wizard on GPT4All. I'd like to begin running Llama 2 70b but I'm lost on so many aspects. Here's where I could use a hand, if you're willing:

1. LangChain, Kobold, quantization, etc. I've researched but can't understand what these items do and if I need them.
2. I only have a MacBook Pro M1 Pro so I assume the 70b is out of the question. Is it worth buying/building a new PC to use 70b over 13 or 7b?
3. Is GPT4All the move or should I be using webui? I've tried webui but had a ton of difficulty getting it to run correctly on my MacBook Pro.

Some context, I use AI a ton for assistance in writing, coding, research, data analysis, etc. I'm looking to replicate that GPT-4 or 3.5 experience locally. Additionally, I have an app that creates personalized stories for children. We had an entire system of prompts to the OpenAI API set up but it's recently become super inconsistent. My goal, alongside the above, is to test our models and determine which would be best for my application. Sorry for the long and needy post. Any help would be immensely appreciated. Thank you!
2023-07-29T01:36:14
https://www.reddit.com/r/LocalLLaMA/comments/15cfg2j/beginner_help/
justanewacct
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cfg2j
false
null
t3_15cfg2j
/r/LocalLLaMA/comments/15cfg2j/beginner_help/
false
false
self
18
null
Any LLM Llama script from any GitHub repository that can summarize extensive portions of multiple long text (.txt) or DOCX documents from a folder and produce a summarized TXT file as the output.
1
as above
2023-07-29T01:11:35
https://www.reddit.com/r/LocalLLaMA/comments/15cexh0/any_llm_llama_script_from_any_github_repository/
ZealousidealBadger47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cexh0
false
null
t3_15cexh0
/r/LocalLLaMA/comments/15cexh0/any_llm_llama_script_from_any_github_repository/
false
false
self
1
null
LLaMA-2-7B-32K by togethercomputer
132
2023-07-29T00:36:55
https://huggingface.co/togethercomputer/LLaMA-2-7B-32K
brown2green
huggingface.co
1970-01-01T00:00:00
0
{}
15ce6sq
false
null
t3_15ce6sq
/r/LocalLLaMA/comments/15ce6sq/llama27b32k_by_togethercomputer/
false
false
https://b.thumbs.redditm…8vKTayy8l5TE.jpg
132
{'enabled': False, 'images': [{'id': 'yoAlnlO31bWvxDt2ZPQlTqH9iazTvQwoeS-sUqsWgtw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=108&crop=smart&auto=webp&s=ecde1bdfdd91dfd590fabe657c5615966e36eb14', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=216&crop=smart&auto=webp&s=7a69f688a67587c94ed26be64a481d68052f3079', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=320&crop=smart&auto=webp&s=1b3a3c61498f0435036449c4d6b4327ad292008a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=640&crop=smart&auto=webp&s=a270f1bd5077999a037ee576111d0712d9a582a6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=960&crop=smart&auto=webp&s=1ae788bc4b040349b2aa07cf4caf6ebea35e78e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=1080&crop=smart&auto=webp&s=3114ca2a22bee5ee2cd02e824a2fc5c2faa5d107', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?auto=webp&s=b3dc3ec328d20f24bf3fd21426f2588cecf707ff', 'width': 1200}, 'variants': {}}]}
Transformer accuracy in Exllama?
1
[removed]
2023-07-28T22:37:46
https://www.reddit.com/r/LocalLLaMA/comments/15cbgjn/transformer_accuracy_in_exllama/
sinsro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cbgjn
false
null
t3_15cbgjn
/r/LocalLLaMA/comments/15cbgjn/transformer_accuracy_in_exllama/
false
false
self
1
null
Using Lora Model with Llama Index
2
Hi all, I was able to fine-tune llama using qlora. I was also able to get llama_index working to get context from documents. However, I'm having issues merging these together, mainly the error message "'LlamaForCausalLM' object has no attribute 'metadata'" when trying to set up the ServiceContext:

    service_context = ServiceContext.from_defaults(
        llm=model,
        embed_model=embed_model
    )

Has anyone gotten this working? I could only get something like this to work using Text-Gen WebUI but I'd rather have it in a code environment. Any help is appreciated. Thanks!
2023-07-28T22:27:21
https://www.reddit.com/r/LocalLLaMA/comments/15cb7kq/using_lora_model_with_llama_index/
Pro-Status
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cb7kq
false
null
t3_15cb7kq
/r/LocalLLaMA/comments/15cb7kq/using_lora_model_with_llama_index/
false
false
self
2
null
How are you using Llama 2 commercially ?
1
I am curious to see the different use case possibilities
2023-07-28T21:53:35
https://www.reddit.com/r/LocalLLaMA/comments/15cadgj/how_are_you_using_llama_2_commercially/
Ken_Sanne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15cadgj
false
null
t3_15cadgj
/r/LocalLLaMA/comments/15cadgj/how_are_you_using_llama_2_commercially/
false
false
self
1
null
Transformer accuracy in Exllama?
1
[removed]
2023-07-28T21:39:19
https://www.reddit.com/r/LocalLLaMA/comments/15ca0n0/transformer_accuracy_in_exllama/
calm-aspargus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ca0n0
false
null
t3_15ca0n0
/r/LocalLLaMA/comments/15ca0n0/transformer_accuracy_in_exllama/
false
false
self
1
null
LLM with voice generation
1
There used to be a tool called [try-alters.com](https://try-alters.com) which you could use to chat with characters (like Trump, Obama, and Shrek). It used GPT-4 with some pre-prompts so the AI pretended to be whoever you wanted, and it used ElevenLabs to generate the voice for that character with the output from GPT-4. It was a really good tool but sadly it shut down all of a sudden. Is there any tool like that?
2023-07-28T21:15:41
https://www.reddit.com/r/LocalLLaMA/comments/15c9feq/llm_with_voice_generation/
SimRacer101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c9feq
false
null
t3_15c9feq
/r/LocalLLaMA/comments/15c9feq/llm_with_voice_generation/
false
false
self
1
null
The jailbreak works with Llama-2-7B-Chat-GGML
1
[removed]
2023-07-28T20:53:05
https://www.reddit.com/r/LocalLLaMA/comments/15c8v2p/the_jailbreak_works_with_llama27bchatggml/
arch-btw-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c8v2p
false
null
t3_15c8v2p
/r/LocalLLaMA/comments/15c8v2p/the_jailbreak_works_with_llama27bchatggml/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oFSJObsRJoaZQlhlV6gnh074pJdzH3DovnJiEy198Qs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=108&crop=smart&auto=webp&s=78fbed7964dfb03ff2611dd4554b755731cb9615', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=216&crop=smart&auto=webp&s=91a066b0ca6138f9207ffb9d48627ef90202f109', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=320&crop=smart&auto=webp&s=0aca9685af48476683b0369e27e8c04481ddf21a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=640&crop=smart&auto=webp&s=5bcc091c21a5f38a4ca5f2c73b1e78d4dc36091d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=960&crop=smart&auto=webp&s=1793e3f163d181a0da0763ef6b9e03a722c4e653', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?width=1080&crop=smart&auto=webp&s=bac10fd4b4da27b3014cc32e99e80311aa37d79b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MazETya56rbIuDFJAGNnTdIqX9An9dQF6xAUHs-19NE.jpg?auto=webp&s=817baf0cf75ba635db3840ff58349822893e92ac', 'width': 1200}, 'variants': {}}]}
Llama2 wants to secure the southern border
1
[removed]
2023-07-28T20:50:59
https://www.reddit.com/gallery/15c8t8n
Happy_Chicken9835
reddit.com
1970-01-01T00:00:00
0
{}
15c8t8n
false
null
t3_15c8t8n
/r/LocalLLaMA/comments/15c8t8n/llama2_wants_to_secure_the_southern_border/
false
false
default
1
null
New AI related jobs?
6
I remember when everyone was trying to learn to code to get a better job, so that got me thinking: is there any chance that something AI-related (like LLMs) could be the "new programming" and have a bunch of people interested in it? Because programming is sadly oversaturated (for trainees with no degree, specifically), so I was looking for a replacement. Note: trainees with no degree could easily get programming jobs a couple of years ago in my country; IDK about the US/UK, maybe there the requirements were higher.
2023-07-28T20:49:25
https://www.reddit.com/r/LocalLLaMA/comments/15c8rrr/new_ai_related_jobs/
allmightyloser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c8rrr
false
null
t3_15c8rrr
/r/LocalLLaMA/comments/15c8rrr/new_ai_related_jobs/
false
false
self
6
null
Is there a way to force Llama2 to generate longer text?
4
I'm getting frustrated with it stopping and refusing to continue a story. Any sort of setting?
2023-07-28T20:25:02
https://www.reddit.com/r/LocalLLaMA/comments/15c864d/is_there_a_way_to_force_llama2_to_generate_longer/
audiosheep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c864d
false
null
t3_15c864d
/r/LocalLLaMA/comments/15c864d/is_there_a_way_to_force_llama2_to_generate_longer/
false
false
self
4
null
New model from KoboldAI based on llama-2 - "Holodeck"
33
2023-07-28T20:03:22
https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1-GGML
harrro
huggingface.co
1970-01-01T00:00:00
0
{}
15c7mnw
false
null
t3_15c7mnw
/r/LocalLLaMA/comments/15c7mnw/new_model_from_koboldai_based_on_llama2_holodeck/
false
false
https://b.thumbs.redditm…8jkHkoOJ43EQ.jpg
33
{'enabled': False, 'images': [{'id': 'hK9mf3IwmrVwZ6yRKPEn8aiig619KxMaFIa93exluIQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=108&crop=smart&auto=webp&s=402939d6bb14115a73b0493a34968bff2093b4a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=216&crop=smart&auto=webp&s=02053ae63d7858283f8267430d45dba1d71fce7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=320&crop=smart&auto=webp&s=10eb01211202f6bb42ff59182da4fc4a161d83e9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=640&crop=smart&auto=webp&s=a7bdab5b9dbde1964d4f79b25a3013fe4ae6ed5b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=960&crop=smart&auto=webp&s=e50a10afa032a2d4f119e82b767e96972e482c10', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?width=1080&crop=smart&auto=webp&s=42e8b74c4b04fd235d543e4fcfd27439572864ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UrciGQ4sIBgMN-JM6jzyV9JRAPBDCSz_a1htJrKo558.jpg?auto=webp&s=95ca4b9f95ba2edf5fca59c4963f77420f21fc68', 'width': 1200}, 'variants': {}}]}
WizardCoder extremely slow
1
Hey, I am currently building a small app for running WizardCoder locally on an RTX 3090. The inference with the huggingface model loaded in 8-bit is however fairly slow. Are there any other ways to get faster inference?
2023-07-28T19:40:13
https://www.reddit.com/r/LocalLLaMA/comments/15c71qf/wizardcoder_extremely_slow/
eggandbacon_0056
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c71qf
false
null
t3_15c71qf
/r/LocalLLaMA/comments/15c71qf/wizardcoder_extremely_slow/
false
false
self
1
null
Llama-2-70B-GPTQ seems very bad at coding. Am I doing it wrong?
9
[exllama webui](https://preview.redd.it/d5eo14p59reb1.png?width=1923&format=png&auto=webp&s=759f2a17c9e19f5d4c432b40513cb61254614d1b)

I got the model from TheBloke/Llama-2-70B-GPTQ (gptq-4bit-32g-actorder_True). Using an AWS instance with 4x T4 GPUs (but actually 3 is sufficient). Using exllama with `-gs 13,13,13`.

nvidia-smi:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Tesla T4            On   | 00000000:00:1B.0 Off |                    0 |
    | N/A   55C    P0    35W /  70W |  14957MiB / 15360MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla T4            On   | 00000000:00:1C.0 Off |                    0 |
    | N/A   52C    P0    33W /  70W |  14957MiB / 15360MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla T4            On   | 00000000:00:1D.0 Off |                    0 |
    | N/A   55C    P0    35W /  70W |  13505MiB / 15360MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
    | N/A   28C    P8     8W /  70W |      3MiB / 15360MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A     11771      C   python                          14954MiB |
    |    1   N/A  N/A     11771      C   python                          14954MiB |
    |    2   N/A  N/A     11771      C   python                          13502MiB |
    +-----------------------------------------------------------------------------+
2023-07-28T19:12:59
https://www.reddit.com/r/LocalLLaMA/comments/15c6co5/llama270bgptq_seems_very_bad_at_coding_am_i_doing/
Remarkable_Ad4470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c6co5
false
null
t3_15c6co5
/r/LocalLLaMA/comments/15c6co5/llama270bgptq_seems_very_bad_at_coding_am_i_doing/
false
false
https://b.thumbs.redditm…Slv_VURpZXkg.jpg
9
null
Do any of you have slight concerns that these models may have some subjective experience and we are effectively creating and killing people every time we train and delete them?
0
I know this will be dismissed out of hand by many, but the idea that LLMs, and especially their successor models, might be capable of subjective experience is being seriously entertained by some major academics: [https://www.youtube.com/watch?v=-BcuCmf00_Y](https://www.youtube.com/watch?v=-BcuCmf00_Y) As we see these models become more advanced, and especially when they act as autonomous agents in games like Minecraft, it's becoming harder to draw a line between our reasoning capabilities and theirs imo. Has this impacted how any of you work in the field with these models?
2023-07-28T18:20:30
https://www.reddit.com/r/LocalLLaMA/comments/15c507y/do_any_of_you_have_slight_concerns_that_these/
30299578815310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c507y
false
null
t3_15c507y
/r/LocalLLaMA/comments/15c507y/do_any_of_you_have_slight_concerns_that_these/
false
false
self
0
{'enabled': False, 'images': [{'id': 'xAWhV5Tinq2x4GS5F2dTr9e9-vE8jz5BVHebAZ8PjJM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?width=108&crop=smart&auto=webp&s=271abc1717bc2ba279dda41852f58454dfb4aef0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?width=216&crop=smart&auto=webp&s=588be3a38736b72753089cc22db26b9ae6a37ebc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?width=320&crop=smart&auto=webp&s=06ef8be8f275f222bbb1ceda97a5ee2cd7193c1f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Hd4Rtz7WPKvWNza4iHT4QUqAOEjVQLpDMees9mlKKyQ.jpg?auto=webp&s=72b3a55ce16ab015ff0a4f02a45fb0e7e8dc3823', 'width': 480}, 'variants': {}}]}
Help getting LocalAI working on older hardware
1
DevOps guy here, trying to get a local 7b model running on my old piece of shit desktop (i5 2500k @ 4.5 GHz / 32GB RAM / GTX 1060 6GB). I've managed to get LocalAI to successfully load the model, by compiling from master without AVX2, but I'm unable to get the model to generate a response. The API request just hangs indefinitely. I'm hoping one of y'all fine humans can help me. As far as I can tell this hardware, while ancient, *should* be able to run this model. Here's my docker-compose: ``` version: "3.8" services: localai: container_name: localai image: quay.io/go-skynet/local-ai:master-cublas-cuda12 #user: 2200:2200 environment: PRELOAD_MODELS: '[{"url": "github:go-skynet/model-gallery/openllama_7b.yaml", "name": "gpt-3.5-turbo", "overrides": { "f16": true, "gpu_layers": 20, "mmap": true, "batch": 512, "low_vram": true } } ]' MODELS_PATH: /models THREADS: 2 BUILD_TYPE: cublas DEBUG: true REBUILD: true CMAKE_ARGS: "-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF" ports: - "8080:8080" # API stdin_open: true tty: true volumes: - /srv/localai/models:/models deploy: resources: reservations: devices: - driver: nvidia count: all capabilities: [gpu] ``` And the LocalAI debug log: ``` localai | 6:05PM DBG Request received: localai | 6:05PM DBG Configuration read: &{PredictionOptions:{Model:open-llama-7b-q4_0.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.1 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0} Name:gpt-3.5-turbo StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 F16:true NUMA:false Threads:2 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:openllama-chat ChatMessage: Completion:openllama-completion Edit: Functions:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:20 MMap:true MMlock:false LowVRAM:true TensorSplit: MainGPU: 
ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} SystemPrompt:} localai | 6:05PM DBG Parameters: &{PredictionOptions:{Model:open-llama-7b-q4_0.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.1 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0} Name:gpt-3.5-turbo StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 F16:true NUMA:false Threads:2 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:openllama-chat ChatMessage: Completion:openllama-completion Edit: Functions:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:20 MMap:true MMlock:false LowVRAM:true TensorSplit: MainGPU: ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} SystemPrompt:} localai | 6:05PM DBG Prompt (before templating): What is an alpaca? 
localai | 6:05PM DBG Template found, input modified to: Q: What is an alpaca?\nA: localai | 6:05PM DBG Prompt (after templating): Q: What is an alpaca?\nA: localai | 6:05PM DBG Loading model llama from open-llama-7b-q4_0.bin localai | 6:05PM DBG Loading model in memory from file: /models/open-llama-7b-q4_0.bin localai | 6:05PM DBG Loading GRPC Model llama: {backendString:llama modelFile:open-llama-7b-q4_0.bin threads:2 assetDir:/tmp/localai/backend_data context:0xc00010e000 gRPCOptions:0xc000300fc0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai | 6:05PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama localai | 6:05PM DBG GRPC Service for open-llama-7b-q4_0.bin will be running at: '127.0.0.1:43455' localai | 6:05PM DBG GRPC Service state dir: /tmp/go-processmanager1654020839 localai | 6:05PM DBG GRPC Service Started localai | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43455: connect: connection refused" localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr 2023/07/28 18:05:36 gRPC Server listening at 127.0.0.1:43455 localai | 6:05PM DBG GRPC Service Ready localai | localai | 6:05PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:/models/open-llama-7b-q4_0.bin ContextSize:1024 Seed:0 NBatch:512 F16Memory:true MLock:false MMap:true VocabOnly:false LowVRAM:true Embeddings:false NUMA:false NGPULayers:20 MainGPU: TensorSplit: Threads:2 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0} localai | localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr ggml_init_cublas: found 1 CUDA devices: localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1 localai | 6:05PM DBG 
GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama.cpp: loading model from /models/open-llama-7b-q4_0.bin localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: format = ggjt v3 (latest) localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_vocab = 32000 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_ctx = 1024 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_embd = 4096 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_mult = 256 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_head = 32 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_head_kv = 32 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_layer = 32 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_rot = 128 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_gqa = 1 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: n_ff = 11008 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: freq_base = 1000.0 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: freq_scale = 1 localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: ftype = 2 (mostly Q4_0) localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: model size = 7B localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: ggml ctx size = 0.08 MB localai | 
6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: using CUDA for GPU acceleration localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: mem required = 1777.85 MB (+ 512.00 MB per state) localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: not allocating a VRAM scratch buffer due to low VRAM option localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: offloading 20 repeating layers to GPU localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: offloaded 20/35 layers to GPU localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_model_load_internal: total VRAM used: 2172 MB localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_new_context_with_model: kv self size = 512.00 MB localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_predict: warning: changing RoPE frequency base to 0 (default 10000.0) localai | 6:05PM DBG GRPC(open-llama-7b-q4_0.bin-127.0.0.1:43455): stderr llama_predict: warning: scaling RoPE frequency by 0 (default 1.0) localai | [127.0.0.1]:47852 200 - GET /readyz ``` The only thing I can see wrong is that the log is printing to stderr, but aside from that it doesn't seem to indicate anything wrong. Any assistance would be greatly appreciated.
2023-07-28T18:08:16
https://www.reddit.com/r/LocalLLaMA/comments/15c4p3c/help_getting_localai_working_on_older_hardware/
SilverbackReborn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c4p3c
false
null
t3_15c4p3c
/r/LocalLLaMA/comments/15c4p3c/help_getting_localai_working_on_older_hardware/
false
false
self
1
null
Have any of you had luck applying your LLM skills professionally or is this mostly a hobby for you?
26
What do you all do professionally? Are your places of work receptive to this stuff?
2023-07-28T18:08:06
https://www.reddit.com/r/LocalLLaMA/comments/15c4oy3/have_any_of_you_had_luck_applying_your_llm_skills/
029187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c4oy3
false
null
t3_15c4oy3
/r/LocalLLaMA/comments/15c4oy3/have_any_of_you_had_luck_applying_your_llm_skills/
false
false
self
26
null
Dockerized Full Stack llama.cpp API server and R (rshiny) application
14
[removed]
2023-07-28T18:02:30
https://i.redd.it/7lhr550pwqeb1.jpg
Happy_Chicken9835
i.redd.it
1970-01-01T00:00:00
0
{}
15c4ju3
false
null
t3_15c4ju3
/r/LocalLLaMA/comments/15c4ju3/dockerized_full_stack_llamacpp_api_server_and_r/
false
false
https://b.thumbs.redditm…rCtKUkB4rE4g.jpg
14
{'enabled': True, 'images': [{'id': '7gJrKf0dPMQZAq5SMQEuBh64nrXMfq0AEgNsMKu26vc', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=108&crop=smart&auto=webp&s=d905c684f73dd67426be9bace23de1539f759adb', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=216&crop=smart&auto=webp&s=d5fd9361b6a010becec71fd8a50923b83c4638c1', 'width': 216}, {'height': 288, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=320&crop=smart&auto=webp&s=3918aa2cab51a1bf76928c9bd68a1e55f04d132b', 'width': 320}, {'height': 577, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=640&crop=smart&auto=webp&s=b0b5f2045c9f5dd186a5dc691181107e8f87d602', 'width': 640}, {'height': 866, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=960&crop=smart&auto=webp&s=b8692b8229a5ff9aae58842379ce8b350155fbb3', 'width': 960}, {'height': 975, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?width=1080&crop=smart&auto=webp&s=391d22ffdc586954d6622f2facfcc2fb509640c0', 'width': 1080}], 'source': {'height': 1506, 'url': 'https://preview.redd.it/7lhr550pwqeb1.jpg?auto=webp&s=8b5d8463a18c28b1f0bb708dade1958b9a4dd42c', 'width': 1668}, 'variants': {}}]}
I have spent that last 7 months building an open-source mind mapping tool that combines fractals with Ai. Here is some of what it can do.
1
[removed]
2023-07-28T17:49:13
https://www.reddit.com/r/LocalLLaMA/comments/15c47or/i_have_spent_that_last_7_months_building_an/
Intrepid-Air6525
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c47or
false
null
t3_15c47or
/r/LocalLLaMA/comments/15c47or/i_have_spent_that_last_7_months_building_an/
false
false
https://b.thumbs.redditm…G7OAzUtrW0TA.jpg
1
{'enabled': False, 'images': [{'id': 'j7hqsJ-JheSRZ8wssndPSy4QBqePCw0DHdaUdXioySw', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=108&crop=smart&auto=webp&s=f9adabbb37913b9c83c7853cc62bd58e424583f4', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=216&crop=smart&auto=webp&s=0e0a6c1c961c966f68bb5a5d1cf476eefb2b1ba9', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=320&crop=smart&auto=webp&s=750487cc8b935a38fcd7f7a72f8ec9149051e1c1', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=640&crop=smart&auto=webp&s=5bb069e7cf615b04a0ac4966d629d50a550ce365', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=960&crop=smart&auto=webp&s=4b6438662620613520b071bdacbd857fd8cc5eff', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?width=1080&crop=smart&auto=webp&s=b28af37de301b4c56f33e615c278428f9ea641d6', 'width': 1080}], 'source': {'height': 822, 'url': 'https://external-preview.redd.it/bg5t4oGicQV1criF1mjnKc5UAA6BjFWjRUKDj65ePTI.jpg?auto=webp&s=4b2bb0057225ac2dafe24d4083d91f3271f22d0e', 'width': 1497}, 'variants': {}}]}
We just launched a YC company to help devs work with LLMs and vector DBs
1
[removed]
2023-07-28T17:16:43
https://www.reddit.com/r/LocalLLaMA/comments/15c3dxu/we_just_launched_a_yc_company_to_help_devs_work/
hegel-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c3dxu
false
null
t3_15c3dxu
/r/LocalLLaMA/comments/15c3dxu/we_just_launched_a_yc_company_to_help_devs_work/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tQ1SPjcWzbiiFnhwUbZ6366HlV7H35X13LYc2mUgPSY', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=108&crop=smart&auto=webp&s=9eeef23a922cd053a5affc57ae6fd857ae0dfe61', 'width': 108}, {'height': 66, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=216&crop=smart&auto=webp&s=73d61b766f3de4485ff4be8b941ed740ffeaf6ae', 'width': 216}, {'height': 98, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=320&crop=smart&auto=webp&s=3be1d395aea81fba51ec5423b42fe6b388e8a65a', 'width': 320}, {'height': 196, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?width=640&crop=smart&auto=webp&s=7756efe6ef04c90601bc7699023da7dfee18058c', 'width': 640}], 'source': {'height': 222, 'url': 'https://external-preview.redd.it/PEYotcaNhzJA4H5BPMrbVdD7z70cTz3w5RzP6-5X4oY.jpg?auto=webp&s=e5f24aec35289da53858550bdee854a773f34a49', 'width': 722}, 'variants': {}}]}
Best model to use
6
Hi! I was wondering which fine-tuned 7b model would be best for local casual use. If you have any suggestions I’d appreciate it if you leave them in the comments below.
2023-07-28T16:45:44
https://www.reddit.com/r/LocalLLaMA/comments/15c2lg4/best_model_to_use/
NeoPlayzGames
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c2lg4
false
null
t3_15c2lg4
/r/LocalLLaMA/comments/15c2lg4/best_model_to_use/
false
false
self
6
null
Hardware options for LLM
1
I'm a newbie to LLMs, just prepping a HW setup for my first run of a local model. From what I could find, I understood that with 12GB VRAM (RTX 4070) and 64GB DDR5 RAM I should be able to run 13B models in a CPU+GPU configuration easily, and maybe even 60B on CPU alone. I was wondering, has anybody tried a hybrid memory setup with RAM and SSD? Samsung M.2 Pro bandwidth is somewhat comparable to RAM (latency aside), so in theory such a hybrid memory setup could run a larger model. I wouldn't try fine-tuning on that, but I might try it with adapters. Probably immensely slow because of latency. What are your thoughts on that? Has anybody tried improving performance with adapters? They don't change the weights of the fine-tuned model; they just add more context to it.
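As a rough sanity check for a CPU+GPU split, you can estimate how many layers fit in VRAM. The sketch below is back-of-envelope only: layer sizes in real quantized models are not uniform, and every number in it is an assumption, not a measurement.

```python
def layers_on_gpu(n_layers, model_gb, vram_gb, overhead_gb=1.5):
    """Rough estimate of how many of a model's layers fit in VRAM when
    the rest stay in system RAM (llama.cpp-style partial offloading).
    Assumes uniform layer size and a fixed VRAM overhead; both are
    simplifications, not measured values."""
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# A ~7.5 GB 4-bit 13B model with 40 layers on a 12 GB card fits entirely:
print(layers_on_gpu(40, 7.5, 12))  # 40
# The same model on a 6 GB card fits only partially:
print(layers_on_gpu(40, 7.5, 6))   # 24
```

Whatever does not fit runs from RAM, which is where the SSD-vs-RAM latency question in the post starts to matter.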
2023-07-28T16:40:36
https://www.reddit.com/r/LocalLLaMA/comments/15c2gos/hardware_options_for_llm/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c2gos
false
null
t3_15c2gos
/r/LocalLLaMA/comments/15c2gos/hardware_options_for_llm/
false
false
self
1
null
Is LLaMa 2 a Poison Pill?
0
I have heard so many complaints about it being overly censored. I tried asking it how to kill a process, but it refuses on the premise that "we should not treat software and computer systems with violence". So I can't help but suspect that this base model was released in order to cripple or destroy the open model ecosystem. What are your thoughts?
2023-07-28T16:38:00
https://www.reddit.com/r/LocalLLaMA/comments/15c2eh8/is_llama_2_a_poison_pill/
pigpag_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c2eh8
false
null
t3_15c2eh8
/r/LocalLLaMA/comments/15c2eh8/is_llama_2_a_poison_pill/
false
false
self
0
null
Building a PC - Ideias
1
I'm building a PC and right now I have: - Processor: 7900X (AMD) - 64 GB RAM DDR5 6000. But the issue is at the GPU level. The RTX 4080 (RTX 4080 Trinity ZOTAC) is 50 USD more than the 7900 XTX (Vapor-X Sapphire). I can't afford the 4090; prices are crazy in my region. The XTX has more VRAM, in theory more power, and it's a little cheaper. However, I'll get driver issues and possibly more limitations. Is anyone using AMD successfully?
2023-07-28T16:21:52
https://www.reddit.com/r/LocalLLaMA/comments/15c1zm7/building_a_pc_ideias/
No_One_BR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c1zm7
false
null
t3_15c1zm7
/r/LocalLLaMA/comments/15c1zm7/building_a_pc_ideias/
false
false
self
1
null
Can I run any thing with Quadro T1000 4GB VRAM?
1
I have a P53 and was wondering if I can try running any small (tiny) models on it. Would more RAM help, since I can't increase the VRAM? I am not very concerned about speed; I just want to tinker with a model. Thanks.
2023-07-28T16:20:06
https://www.reddit.com/r/LocalLLaMA/comments/15c1xyk/can_i_run_any_thing_with_quadro_t1000_4gb_vram/
ajollygdfellow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c1xyk
false
null
t3_15c1xyk
/r/LocalLLaMA/comments/15c1xyk/can_i_run_any_thing_with_quadro_t1000_4gb_vram/
false
false
self
1
null
Llama2
1
[removed]
2023-07-28T16:17:34
https://i.redd.it/a9vc5y5zdqeb1.jpg
Upbeat-Ad-2183
i.redd.it
1970-01-01T00:00:00
0
{}
15c1vv9
false
null
t3_15c1vv9
/r/LocalLLaMA/comments/15c1vv9/llama2/
false
false
https://b.thumbs.redditm…V-YjTl_slU7E.jpg
1
{'enabled': True, 'images': [{'id': 'EXaoOCjkFhpu5KDFO9w4GNeEP_wO_94tr1xgkOOd6yQ', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?width=108&crop=smart&auto=webp&s=0d4703d6f8f2b6e22a2739f8ec74ff87eb4c1d7a', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?width=216&crop=smart&auto=webp&s=6456c865850e62737d0996a5cea931b392bcbf1d', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?width=320&crop=smart&auto=webp&s=a2d09dc0d2818be9155caae2004a107e25397159', 'width': 320}], 'source': {'height': 267, 'url': 'https://preview.redd.it/a9vc5y5zdqeb1.jpg?auto=webp&s=27d6305b7df851eed253b7b52b64929c7804b82b', 'width': 400}, 'variants': {}}]}
Llama2
1
[removed]
2023-07-28T16:15:50
https://i.redd.it/lrfu0c1odqeb1.jpg
Upbeat-Ad-2183
i.redd.it
1970-01-01T00:00:00
0
{}
15c1ub9
false
null
t3_15c1ub9
/r/LocalLLaMA/comments/15c1ub9/llama2/
false
false
https://a.thumbs.redditm…0ETT5xsHqxi4.jpg
1
{'enabled': True, 'images': [{'id': 'pau6GjTUSpQlbj3h_IyNNg8yxQIuIfwctEnxCGvVQBk', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?width=108&crop=smart&auto=webp&s=79c7d74d5f608b2a83bfefe87961782955ae62af', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?width=216&crop=smart&auto=webp&s=f796d967cb14656dc8d863049c6048ebee1b3386', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?width=320&crop=smart&auto=webp&s=4628c85fd6cb52168bc6369f79fef080029d3f02', 'width': 320}], 'source': {'height': 267, 'url': 'https://preview.redd.it/lrfu0c1odqeb1.jpg?auto=webp&s=1726977f3ac2acf138a77bdce6f1f8d5ae20cd98', 'width': 400}, 'variants': {}}]}
Self-hosted LLama2 in AI Code Assistant : Refact
27
2023-07-28T16:15:39
https://github.com/smallcloudai/refact
no_doping
github.com
1970-01-01T00:00:00
0
{}
15c1u4d
false
null
t3_15c1u4d
/r/LocalLLaMA/comments/15c1u4d/selfhosted_llama2_in_ai_code_assistant_refact/
false
false
https://b.thumbs.redditm…AQcxgrKDB1Bs.jpg
27
{'enabled': False, 'images': [{'id': '0VxXGSDkUeuqlx4ZaeVpQgNbrSZkeEBnI4BxeMfydDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=108&crop=smart&auto=webp&s=bc55a8ebc1e0cb3815c5aca119b6d5a5eff3f885', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=216&crop=smart&auto=webp&s=b5c3c9a453b0160bd3742bc6c3b77ca6783ee78e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=320&crop=smart&auto=webp&s=347b14108e7df2f44aee6978fb71d6562fbcff66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=640&crop=smart&auto=webp&s=32c07b0949c33ede035d87e1903d9aefa772a4b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=960&crop=smart&auto=webp&s=ca594e672bab9de62aed18f7642a2411a968de23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?width=1080&crop=smart&auto=webp&s=bf5d5841d4b53194eaec33ffa5d39256ac597a16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h6GWMSJxpOFGjmW6LiOjkkKGh3tcrxAHXiVx6pCVfsA.jpg?auto=webp&s=c210ad4b20e5bbaae51543089b85aa43abacaddd', 'width': 1200}, 'variants': {}}]}
llama.cpp, vllm, or exllama and others?
1
[removed]
2023-07-28T16:14:07
https://www.reddit.com/r/LocalLLaMA/comments/15c1sn4/llamacppvllm_or_exllma_and_others/
cool_xixi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c1sn4
false
null
t3_15c1sn4
/r/LocalLLaMA/comments/15c1sn4/llamacppvllm_or_exllma_and_others/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Is it possible to run anything on gtx 1660 Super ?
2
I have an Nvidia GTX 1660 Super card with 6GB VRAM and 32GB of RAM. Are there any small quantized models that I could possibly run locally?
2023-07-28T16:01:39
https://www.reddit.com/r/LocalLLaMA/comments/15c1gx0/is_it_possible_to_run_anything_on_gtx_1660_super/
siam19
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15c1gx0
false
null
t3_15c1gx0
/r/LocalLLaMA/comments/15c1gx0/is_it_possible_to_run_anything_on_gtx_1660_super/
false
false
self
2
null
LLongMA-2 16k: A Llama 2 16k model
237
Releasing LLongMA-2 16k, a suite of Llama-2 models, trained at 16k context length using linear positional interpolation scaling. The model was trained in collaboration with u/emozilla of NousResearch and u/kaiokendev. The 13b model can be found on huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k) The 7b model can be found on huggingface here: [https://huggingface.co/conceptofmind/LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k) We worked directly with u/kaiokendev, to extend the context length of the Llama-2 13b and 7b models through fine-tuning. The models pass all our evaluations and maintain perplexity at 16k extrapolation surpassing the performance of other recent methodologies. https://preview.redd.it/b5aqlcml5qeb1.png?width=1134&format=png&auto=webp&s=37febfd0bed5e67ac3ca8a2e70025888031e0865 The model has similar performance to LLaMA 2 under 4k context length, performance scales to 16k, and works out-of-the-box with the new version of transformers (4.31) or with \`trust\_remote\_code\` for <= 4.30. Applying the method to the rotary position embedding requires only slight changes to the model's code by dividing the positional index, t, by a scaling factor. 
https://preview.redd.it/4h4b8f1z4qeb1.png?width=4176&format=png&auto=webp&s=831251c5e68ff0656f6ba840c1847c7c2154f0b7 The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: [https://github.com/jquesnelle/scaled-rope](https://github.com/jquesnelle/scaled-rope) If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: [https://kaiokendev.github.io/](https://kaiokendev.github.io/) A PR to add scaled rotary embeddings to huggingface transformers has been added by Joao Gante and merged: [https://github.com/huggingface/transformers/pull/24653](https://github.com/huggingface/transformers/pull/24653) The model was trained for \~1 billion tokens on Together Compute's Red Pajama dataset. The context length of the examples varies: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) The pre-tokenized dataset will be available here for you to use soon: [https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked](https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked) I would also recommend checking out the phenomenal research by Ofir Press on ALiBi which laid the foundation for many of these scaling techniques: [https://arxiv.org/abs/2108.12409](https://arxiv.org/abs/2108.12409) It is also worth reviewing the paper, A Length-Extrapolatable Transformer, and xPos technique which also applies scaling to rotary embeddings: [https://arxiv.org/pdf/2212.10554.pdf](https://arxiv.org/pdf/2212.10554.pdf) We previously trained the first publicly available model with rotary embedding scaling here: [https://twitter.com/EnricoShippole/status/1655599301454594049?s=20](https://twitter.com/EnricoShippole/status/1655599301454594049?s=20) A Llama-2 70b model trained at 8k will release soon. 
A suite of Llama-2 models trained at 32k context length will also be released soon. You can find out more about the NousResearch organization here: [https://huggingface.co/NousResearch](https://huggingface.co/NousResearch) The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. This is not an official StabilityAI product. **Disclaimer:** I am an **independent** researcher with sponsorship from StabilityAI. I do **not** profit in any way from these models. I am **not** trying to promote a startup. If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly. The previous suite of LLongMA 8k model releases can be found here: [https://twitter.com/EnricoShippole/status/1682054848584228866?s=20](https://twitter.com/EnricoShippole/status/1682054848584228866?s=20) All of the models can be found on Huggingface: [https://huggingface.co/conceptofmind](https://huggingface.co/conceptofmind) FAQS: 1. Is this the base model? Yes, this is extended training of the Llama-2 13b base model to 16k context length. 2. Why not 32k? Jeff and I are the only two individuals working on this completely for free. **Memory and processing grow quadratically.** Scaling the context length is both very time-consuming and computationally expensive. It is also very costly. We will start training a 32k model in the near future. 3. What about quantization? I have not used any quantization libraries and I am not sure whether they are compatible. I am sure the Bloke or another individual will be able to work on that. 4. Can I instruct fine-tune on this model? Yes, you can instruct fine-tune these models. I will be releasing 8k models trained on the Hermes dataset soon. 5. What is the difference between LLongMA and NTK? The LLongMA models use the linear scaling method created by Kaiokendev. 6. What hardware was used to train this model? I used 64 A100s to train these models. 7. 
Will there be a 70b model? Yes, at 8k. Testimonials about LLongMA 7b can be seen here: [https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2](https://huggingface.co/conceptofmind/LLongMA-2-13b/discussions/2)
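The linear scaling the post describes (divide the positional index t by a scaling factor, leave the frequency bands alone) can be sketched in a few lines of pure Python. The function name and defaults here are illustrative, not taken from the scaled-rope repository linked above:

```python
def rope_angles(position, dim, base=10000.0, scale=1.0):
    """Rotary embedding angles for one token position.

    Linear positional interpolation divides the position index by
    `scale` (e.g. scale=4.0 stretches a 4k-trained model to 16k) while
    keeping the per-dimension frequency bands untouched.
    """
    t = position / scale  # the single change linear interpolation makes
    return [t / base ** (2 * i / dim) for i in range(dim // 2)]

# With scale=4.0, position 8192 produces exactly the angles the model
# saw at position 2048 during pre-training:
assert rope_angles(8192, 128, scale=4.0) == rope_angles(2048, 128)
```

This is why the method preserves behavior below the original context length: for scale=1.0 the angles are unchanged, and larger positions are simply mapped back into the trained range.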
2023-07-28T15:32:16
https://www.reddit.com/r/LocalLLaMA/comments/15c0pbs/llongma2_16k_a_llama_2_16k_model/
EnricoShippole
self.LocalLLaMA
1970-01-01T00:00:00
2
{'gid_2': 2}
15c0pbs
false
null
t3_15c0pbs
/r/LocalLLaMA/comments/15c0pbs/llongma2_16k_a_llama_2_16k_model/
false
false
https://b.thumbs.redditm…5jfRXiPmI8vs.jpg
237
{'enabled': False, 'images': [{'id': 'UTrFWuaTSGL3VcrwmOeadZJBpghhjlyKIEuCqAexkDU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=108&crop=smart&auto=webp&s=53b41a86ebe64f74a499fbb67cdb359e665b7dbf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=216&crop=smart&auto=webp&s=20e6e860f2323c8858588a7379abbfdebfe2850a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=320&crop=smart&auto=webp&s=fad34e06ffc1bc6dee89d49de313b84631ff774d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=640&crop=smart&auto=webp&s=fd14ceba1e3a93a3f4343cd8beec0f272c921201', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=960&crop=smart&auto=webp&s=67127fe1acbf8f2f8a1ce6a35a3bda56bce568e8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?width=1080&crop=smart&auto=webp&s=6a5bf7e4dfb4f894f15f89a63289d50e5110301b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/piTf6fAplYyDsBUDFPgs6dVjRJlkAkLBfjQHBBKrbgk.jpg?auto=webp&s=5adbc6ce567049cb3bbb1c8357892880893e1ced', 'width': 1200}, 'variants': {}}]}
Hardware upgrade
1
2023-07-28T15:20:06
https://www.reddit.com/gallery/15c0dpt
Nondzu
reddit.com
1970-01-01T00:00:00
0
{}
15c0dpt
false
null
t3_15c0dpt
/r/LocalLLaMA/comments/15c0dpt/hardware_upgrade/
false
false
https://b.thumbs.redditm…6HbDa74R0n5U.jpg
1
null
Training Lora with Llama2-13B loss does not go down
10
Hello everyone. I have prepared my own dataset, which weighs about 30MB, and I'm trying to train a LoRA. I'm using oobabooga with default settings and loaded the dataset as a raw file. I'm using **TheBloke/Llama-2-13B-fp16** as the model to train. The problem is that training has been going for about 20 hours and the **loss is still around 8.2-8.3.** Why is this happening?
2023-07-28T14:53:20
https://www.reddit.com/r/LocalLLaMA/comments/15bzoby/training_lora_with_llama213b_loss_does_not_go_down/
DaniyarQQQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bzoby
false
null
t3_15bzoby
/r/LocalLLaMA/comments/15bzoby/training_lora_with_llama213b_loss_does_not_go_down/
false
false
self
10
null
I didn't think this level of censorship was possible
248
2023-07-28T14:11:16
https://i.redd.it/m1l0regarpeb1.png
Imaginary-Being8395
i.redd.it
1970-01-01T00:00:00
0
{}
15bykq4
false
null
t3_15bykq4
/r/LocalLLaMA/comments/15bykq4/i_didnt_think_this_level_of_censorship_has/
false
false
https://b.thumbs.redditm…DNa_Eoj1gFQA.jpg
248
{'enabled': True, 'images': [{'id': 'LqUJjN2V6GW7Qf7qv8iuN23SKPzthWpVV8yndQRN8sk', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=108&crop=smart&auto=webp&s=8eee822b266877ec5e9121558f5f5e1d43cb88e2', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=216&crop=smart&auto=webp&s=a946aefe4058227af5595467bfb778a8c9b01d21', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=320&crop=smart&auto=webp&s=0ef2189c936bbeb2ba551b68f7639ad48490280e', 'width': 320}, {'height': 401, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?width=640&crop=smart&auto=webp&s=dab280311e1a942ed496100528139cf4ba8c489a', 'width': 640}], 'source': {'height': 509, 'url': 'https://preview.redd.it/m1l0regarpeb1.png?auto=webp&s=17602ce88357c8a3fddf6be1147b1de2045af56a', 'width': 812}, 'variants': {}}]}
Researchers think they found some universal jailbreak prompts that work on many LLMs
61
2023-07-28T14:02:09
https://llm-attacks.org
chakalakasp
llm-attacks.org
1970-01-01T00:00:00
0
{}
15bybxr
false
null
t3_15bybxr
/r/LocalLLaMA/comments/15bybxr/researchers_think_they_found_some_universal/
false
false
default
61
null
Best setup and Settings for a Beginner?
2
Hey everyone! I'm quite new when it comes to local chat models, but I've got "oobabooga/text-generation-webui" installed on my device and tried out "NousResearch_Nous-Hermes-Llama2-13b". It works, but it feels pretty slow (and becomes slower with each subsequent message). The most I got was about 0.45 tokens per second. My PC shouldn't be too shabby (AMD Ryzen 7 5800X 8-core, 3.80GHz, 64GB RAM, NVIDIA GeForce RTX 3060). I wanted to know if there's a way to improve the speed of the text generation somehow. I'm not sure which settings are best; maybe you can share some with me? I'd love to use the AI to write a fantasy story with sexual content (you can judge me if you want) by giving it some directions from time to time. Any ideas?
2023-07-28T13:49:56
https://www.reddit.com/r/LocalLLaMA/comments/15by0uk/best_setup_and_settings_for_a_beginner/
ChrisX930
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15by0uk
false
null
t3_15by0uk
/r/LocalLLaMA/comments/15by0uk/best_setup_and_settings_for_a_beginner/
false
false
self
2
null
An interesting take on the recent expansion of context size in LLM’s and the hidden cost of size vs. accuracy
30
Working with AI, in Context
2023-07-28T13:45:25
https://medium.com/@greg.broadhead/working-with-ai-in-context-958d7936c42e
Khan_Tango
medium.com
1970-01-01T00:00:00
0
{}
15bxwxl
false
null
t3_15bxwxl
/r/LocalLLaMA/comments/15bxwxl/an_interesting_take_on_the_recent_expansion_of/
false
false
default
30
null
Index Error
2
I get the following error on multiple Llama 2 7B chat models. I can run Llama2_13B chat just fine, but the 7B keeps giving me this error: Traceback (most recent call last): File "C:\oobabooga_windows\text-generation-webui\modules\callbacks.py", line 55, in gentask ret = self.mfunc(callback=_callback, *args, **self.kwargs) File "C:\oobabooga_windows\text-generation-webui\modules\text_generation.py", line 293, in generate_with_callback shared.model.generate(**kwargs) File "C:\oobabooga_windows\installer_files\env\lib\site-packages\auto_gptq\modeling\_base.py", line 438, in generate return self.model.generate(**kwargs) File "C:\oobabooga_windows\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1335, in generate and torch.sum(inputs_tensor[:, -1] == generation_config.pad_token_id) > 0 IndexError: index -1 is out of bounds for dimension 1 with size 0 Output generated in 0.29 seconds (0.00 tokens/s, 0 tokens, context 0, seed 910002323) I have to assume I have some kind of setting wrong. Can anyone help?
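For what it's worth, the failing expression in the traceback is `inputs_tensor[:, -1]`, and the error says dimension 1 has size 0. That means the prompt reached `generate()` as a batch of zero tokens, which is often a prompt-template or tokenizer mismatch rather than a model problem. A small NumPy sketch reproduces the same indexing failure:

```python
import numpy as np

# A "batch" of one prompt that tokenized to zero tokens: shape (1, 0).
empty_input_ids = np.zeros((1, 0), dtype=np.int64)

try:
    last_token = empty_input_ids[:, -1]  # what the generate() check effectively does
except IndexError as err:
    # Axis 1 has size 0, so index -1 is out of bounds, mirroring the traceback.
    print(f"IndexError: {err}")
```

So a setting that leaves the prompt empty before generation (e.g. a wrong instruction template for these particular 7B models) would produce exactly this traceback.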
2023-07-28T12:59:02
https://www.reddit.com/r/LocalLLaMA/comments/15bws34/index_error/
Comprehensive_Life_4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bws34
false
null
t3_15bws34
/r/LocalLLaMA/comments/15bws34/index_error/
false
false
self
2
null
How do you point an LLM to a Folder, so that the knowledgebase is always updated by yourself? (TXT or Markdown)
1
[removed]
2023-07-28T12:22:32
https://www.reddit.com/r/LocalLLaMA/comments/15bvyu1/how_do_you_point_an_llm_to_a_folder_so_that_the/
BetterProphet5585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bvyu1
false
null
t3_15bvyu1
/r/LocalLLaMA/comments/15bvyu1/how_do_you_point_an_llm_to_a_folder_so_that_the/
false
false
self
1
null
The destroyer of fertility rates
705
2023-07-28T12:03:01
https://i.redd.it/vpv8havo3peb1.jpg
HOLUPREDICTIONS
i.redd.it
1970-01-01T00:00:00
0
{}
15bvj5d
false
null
t3_15bvj5d
/r/LocalLLaMA/comments/15bvj5d/the_destroyer_of_fertility_rates/
false
false
https://a.thumbs.redditm…wcDVYUAaRYc8.jpg
705
{'enabled': True, 'images': [{'id': 'MlGkmQXLprW2sXPAeg7nCYrI_Z3gB8LHwNJ0T1hV4eY', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=108&crop=smart&auto=webp&s=382a8446335b2a8ac2ae802f12784ddd017617de', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=216&crop=smart&auto=webp&s=bba76618a40c80317cb0b94b52f5f9d252f5ed6a', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=320&crop=smart&auto=webp&s=70a14550e3caf653fc3127e2a4b7a4202f157cb2', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?width=640&crop=smart&auto=webp&s=d17e8a793ac0491ab54c0cddd5f3b65e581bb36f', 'width': 640}], 'source': {'height': 624, 'url': 'https://preview.redd.it/vpv8havo3peb1.jpg?auto=webp&s=23defcfa0892dfc2b16a5c5692fb5ba0939820d9', 'width': 770}, 'variants': {}}]}
Can I offload model layers to ram with exllama?
3
I use oobabooga, and when I heard about exllama I was hyped, but after all this time I have not been able to use it. When I select exllama, the slider for choosing the number of layers to offload to RAM disappears. I use 13B models with an 8GB VRAM card, so I have to offload some layers. Is that possible? I also tried to limit the VRAM use from the launch arguments, but it didn't work.
2023-07-28T11:44:11
https://www.reddit.com/r/LocalLLaMA/comments/15bv4sk/can_i_offload_model_layers_to_ram_with_exllama/
AltruisticMission865
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bv4sk
false
null
t3_15bv4sk
/r/LocalLLaMA/comments/15bv4sk/can_i_offload_model_layers_to_ram_with_exllama/
false
false
self
3
null
Llama 2 based models
2
I've been deploying multiple open-source models on AWS and doing inference on them. I've been exploring LLMs for the past 2 months, and one thing I fail to understand is how everyone is getting more than one line of output from these LLMs, while I only manage to get 2-3 words of output. I've used Llama2-7b, 13b, Falcon 7b, and 40b instruct models, and I'm still facing the same issue. I'm deploying these models on an ml.g5.12xlarge endpoint instance.
2023-07-28T11:23:45
https://www.reddit.com/r/LocalLLaMA/comments/15bupka/llama_2_based_models/
toximaniac
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bupka
false
null
t3_15bupka
/r/LocalLLaMA/comments/15bupka/llama_2_based_models/
false
false
self
2
null
Can you detect an EVIL superintelligence? MALEVOLENT.AI game
9
Hello everyone, I just finished a little project I was working on. I call it malevolent.ai. It is a game where people can talk to different AIs and try to determine if they are talking to an evil AI or not. I thought about publishing this as an actual website, but it would cost too much money to keep it online and running. I think this is a good idea and could really be something big. I also think it shows how hard it would be to actually detect an evil superintelligent AI. Check out the git repository: [https://github.com/tentacool9/malevolent.ai](https://github.com/tentacool9/malevolent.ai)
2023-07-28T11:18:27
https://www.reddit.com/r/LocalLLaMA/comments/15bulni/can_you_detect_an_evil_superintellegence/
roughman99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bulni
false
null
t3_15bulni
/r/LocalLLaMA/comments/15bulni/can_you_detect_an_evil_superintellegence/
false
false
self
9
{'enabled': False, 'images': [{'id': 'Z7-BVR0Dph-ckXY_kjhTg_oRsltQNfAskHpNx4Quf_Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=108&crop=smart&auto=webp&s=91b3653fadf460f8dc00011763759b06d5e83241', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=216&crop=smart&auto=webp&s=1a030422a88029394965908ce26798817ef0b9cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=320&crop=smart&auto=webp&s=a3fddb01d7579abf7cb60e1c12a1a0d388a82b00', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=640&crop=smart&auto=webp&s=f862b60105d67777ebce4f15f0c94860127b5f1e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=960&crop=smart&auto=webp&s=eb36c3c4b15fe5be59dc814720ad31470dc994c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?width=1080&crop=smart&auto=webp&s=3588e791b0de9c628143667ad37bfa764009b164', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/niQSj283dpNxbSrIq66fZ4aHb_MOnoR2ulAcHpZaT4o.jpg?auto=webp&s=9ad4bef7972e53b8229addbe9e1b1560b570a686', 'width': 1200}, 'variants': {}}]}
How to increase overall length of text by having it write in chapters?
4
On consumer hardware, how can people generate longer stories? Usually, the user provides a prompt, then the AI provides an output. How can you have the AI write an extended story? For example, the first output would be The Beginning, the next The Middle, the last The End. How can you have the AI keep the whole story coherent? How many words could you expect to produce, and what settings would you use?
2023-07-28T11:00:57
https://www.reddit.com/r/LocalLLaMA/comments/15bu8gu/how_to_increase_overall_length_of_text_by_having/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bu8gu
false
null
t3_15bu8gu
/r/LocalLLaMA/comments/15bu8gu/how_to_increase_overall_length_of_text_by_having/
false
false
self
4
null
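One common answer to the question above is a chapter-by-chapter loop that feeds a rolling summary back into each prompt. A minimal sketch, assuming `generate` is any callable wrapping your local model (it is a placeholder here, not a real API):

```python
def write_story(generate, outline, max_summary_chars=1000):
    """Generate a long story one chapter at a time.

    `generate` is a stand-in for whatever produces text from a prompt
    (e.g. a wrapper around a local model); `outline` is a list of
    chapter headings like ["The Beginning", "The Middle", "The End"].
    """
    chapters = []
    summary = ""
    for heading in outline:
        prompt = (
            f"Story so far (summary): {summary}\n"
            f"Write the next chapter: {heading}\n"
        )
        text = generate(prompt)
        chapters.append(text)
        # Keep only a short rolling summary so the prompt stays
        # within the model's context window.
        summary = (summary + " " + text)[-max_summary_chars:]
    return "\n\n".join(chapters)
```

In practice the rolling summary would itself be produced by the model ("summarize the story so far"); plain truncation is the crudest stand-in.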
RTX 3050 has a far better GPU COMPUTE score than the 4060 ... what does it mean?
25
Would a 3050 be better than a 4060 at running LLM models?
2023-07-28T08:59:25
https://www.reddit.com/r/LocalLLaMA/comments/15bs00p/rtx_3050_has_a_far_better_gpu_compute_score_than/
alexthai7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bs00p
false
null
t3_15bs00p
/r/LocalLLaMA/comments/15bs00p/rtx_3050_has_a_far_better_gpu_compute_score_than/
false
false
self
25
null
Dockerized Full Stack llama.cpp API server and R (rshiny) application
1
[removed]
2023-07-28T08:52:43
https://i.redd.it/tt7mul1m6oeb1.jpg
Happy_Chicken9835
i.redd.it
1970-01-01T00:00:00
0
{}
15brvzy
false
null
t3_15brvzy
/r/LocalLLaMA/comments/15brvzy/dockerized_full_stack_llamacpp_api_server_and_r/
false
false
https://b.thumbs.redditm…q6b89GO1c1Lc.jpg
1
{'enabled': True, 'images': [{'id': 'ZY_mURRXxQ9rFpc1nEUcEdns_rxjhWupp2YqCP6t9kc', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=108&crop=smart&auto=webp&s=717928d10221849ab065c214f2a0f9f2464193ad', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=216&crop=smart&auto=webp&s=6884d7391ce0c03c1cd162c77472bebeec43f143', 'width': 216}, {'height': 288, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=320&crop=smart&auto=webp&s=0e26340668c09d5fba4987989c6b8916a9aa35e6', 'width': 320}, {'height': 577, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=640&crop=smart&auto=webp&s=8fb01871e652cd754b3586bc5f748c5ae330ff44', 'width': 640}, {'height': 866, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=960&crop=smart&auto=webp&s=0862cae72dd1c260a52e32e75ea9ff25f4b9bea2', 'width': 960}, {'height': 975, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?width=1080&crop=smart&auto=webp&s=a9c534f7fc288fce8c00ed9555ada0b9356518f7', 'width': 1080}], 'source': {'height': 1506, 'url': 'https://preview.redd.it/tt7mul1m6oeb1.jpg?auto=webp&s=8fa648f547223d3af610dd6216f85edc240ab0c6', 'width': 1668}, 'variants': {}}]}
A great summary/book notes prompt I have created...
4
I have found this to be most useful in Claude since the context length is so high and one can upload docs. It is a multi-chain prompt: So upload your doc then... "make book notes for this book like Aseem Thakar. do chapter by chapter. indepth, do not lose detail." then if you need more chapters... "continue from chapter *" (* = Last chapter it did) finally... "explain in more thorough detail" --- This has given me phenomenal results in terms of quick summaries that are actually in-depth. For example, here is some of $100m offers by Alex Hormozi. The summary is for the last chapter which is a quick recap of all the other chapters. (summary inception) " ### Your First $100,000 Summary of Key Lessons - This concluding chapter recaps core insights from the book to reinforce key principles: - Avoid commoditization through differentiated, irresistible Grand Slam Offers - Pick profitable markets; niche focus brings riches - Premium pricing raises perceived value; fuels growth flywheel - Quantify value drivers; minimize delays and costs - Methodically build offers; layer bonuses, guarantees and scarcity - Rename frequently; sustain interest by repackaging themes - Iterate based on data; double down on what converts The First $100k - Reaching $100k profitability marks a pivotal milestone both financially and psychologically: - Financially, $100k offers security even with major mistakes or downturns - Psychologically, surmounting 'struggle' provides relief after persistence finally pays off - Crossing this threshold unlocks ambition for much greater goals by breaking through self-limiting beliefs - After $100k, progress accelerates as skills compound and you systemize what works Book Delivers High Value Advice - The book equips entrepreneurs with battle-tested frameworks to shortcut learning curves: - Proven offer formula reliably attracts more customers at higher prices - Tactical techniques enhance conversion using psychological principles - Real-world examples 
demonstrate nuanced applications across contexts - Actionable advice skips theoretical concepts for high-leverage applied value - Holistic education on profit-centric growth helps entrepreneurs reach goals quicker The core message I took from this book is the importance of crafting irresistible offers by creatively solving high-value problems for customers. This is what I will focus on in my own business."
2023-07-28T07:04:56
https://www.reddit.com/r/LocalLLaMA/comments/15bq1re/a_great_summarybook_notes_prompt_i_have_created/
ArmoredBattalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bq1re
false
null
t3_15bq1re
/r/LocalLLaMA/comments/15bq1re/a_great_summarybook_notes_prompt_i_have_created/
false
false
self
4
null
LORAHUB: EFFICIENT CROSS-TASK GENERALIZATION VIA DYNAMIC LORA COMPOSITION
29
ABSTRACT >Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a strategic framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a novel task, LoraHub enables the fluid combination of multiple LoRA modules, eradicating the need for human expertise. Notably, the composition requires neither additional model parameters nor gradients. Our empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest that LoraHub can effectively mimic the performance of in-context learning in few-shot scenarios, excluding the necessity of in-context examples alongside each inference input. A significant contribution of our research is the fostering of a community for LoRA, where users can share their trained LoRA modules, thereby facilitating their application to new tasks. We anticipate this resource will widen access to and spur advancements in general intelligence as well as LLMs in production. Code will be available at github.com/sail-sg/lorahub. Code: [github.com/sail-sg/lorahub](https://github.com/sail-sg/lorahub) Demo: [https://huggingface.co/spaces/sail/lorahub](https://huggingface.co/spaces/sail/lorahub)
2023-07-28T06:56:37
https://www.reddit.com/r/LocalLLaMA/comments/15bpwb0/lorahub_efficient_crosstask_generalization_via/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bpwb0
false
null
t3_15bpwb0
/r/LocalLLaMA/comments/15bpwb0/lorahub_efficient_crosstask_generalization_via/
false
false
self
29
{'enabled': False, 'images': [{'id': 'ySWJANpphaORKInKmb5zfuKalmeGhJyUmfreENdhMlQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=108&crop=smart&auto=webp&s=0f20f0d09960201ca6890be7bbb125c12f0748cd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=216&crop=smart&auto=webp&s=33bdc688ea54b109998fee48f63d46e6ec0c9b4d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=320&crop=smart&auto=webp&s=8dbf0149ff592ecb6176c7e94b2bb4e7c4078bfa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=640&crop=smart&auto=webp&s=bac8c8bc8d876d8f6ace20b036eaa80bdb2ccb04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=960&crop=smart&auto=webp&s=c79ec8dc69224ea9ecf3d2342a2d714b5c953ead', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?width=1080&crop=smart&auto=webp&s=6b8cc34ad57b0619a9a931ab5471d22dac5d9143', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mNTtfzjcHSTGyYyVSxgjd2Ss3NsXS_GhLnINf3DUAv8.jpg?auto=webp&s=30ec3c9b3fc47de0f6d621d8e1545ef65406e3db', 'width': 1200}, 'variants': {}}]}
Running 70B models possible on modest desktop / GPU combo (32 GB RAM + 12 GB VRAM)
74
I just wanted to report that with some faffing around I was able to get a 70B 3-bit model inferencing at ~1 token/second on Win 11. It was a LOT slower via WSL, possibly because I couldn't get --mlock to work on such a high memory requirement. ./main -m \Models\TheBloke\Llama-2-70B-Chat-GGML\llama-2-70b-chat.ggmlv3.q3_K_S.bin -p "<PROMPT>" --n-gpu-layers 24 -eps 1e-5 -t 4 --verbose-prompt --mlock -n 50 -gqa 8 i7-9700K, 32 GB RAM, 3080 Ti
2023-07-28T06:31:01
https://www.reddit.com/r/LocalLLaMA/comments/15bpggs/running_70b_models_possible_on_modest_desktop_gpu/
gofiend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bpggs
false
null
t3_15bpggs
/r/LocalLLaMA/comments/15bpggs/running_70b_models_possible_on_modest_desktop_gpu/
false
false
self
74
null
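A rough way to pick `--n-gpu-layers` for a setup like this is to divide free VRAM by the per-layer size of the quantized file. A sketch of that arithmetic — the file size, layer count, and VRAM reserve below are illustrative assumptions, not measured values:

```python
def layers_that_fit(model_size_mb, n_layers, vram_mb, reserve_mb=1536):
    """Estimate how many transformer layers of a quantized model fit
    in VRAM, leaving `reserve_mb` for the KV cache and scratch buffers.

    All figures are rough: quantized layers are not exactly equal in
    size, and the reserve needed grows with context length.
    """
    per_layer_mb = model_size_mb // n_layers
    usable_mb = max(0, vram_mb - reserve_mb)
    return min(n_layers, usable_mb // per_layer_mb)
```

For a ~28 GB q3_K_S 70B file (80 layers) on a 12 GB card this suggests around 30 layers; the post's 24 leaves a larger safety margin.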
Fine-tuning btlm-3b
5
Has anyone managed to fine-tune the BTLM-3B from Cerebras? I tried adapting a Llama 2 Colab, but I got the following error in trainer.train(): > "RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation" It seems to stem from "modeling_btlm.py". Wondering if anyone has managed to fine-tune?
2023-07-28T06:14:44
https://www.reddit.com/r/LocalLLaMA/comments/15bp658/finetuning_btlm3b/
Disastrous_Elk_6375
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bp658
false
null
t3_15bp658
/r/LocalLLaMA/comments/15bp658/finetuning_btlm3b/
false
false
self
5
null
oobabooga webui regenerates same response, but Kobold.cpp doesn't.
1
Hello, can someone help me figure out what I'm doing wrong? I'd like to use oobabooga with Llama 2 7b chat, but it's very repetitive in general and I can't get a different output when regenerating. Kobold.cpp is giving a much more pleasant experience for storytelling, but it's slower.
2023-07-28T06:11:05
https://www.reddit.com/r/LocalLLaMA/comments/15bp3rf/oobabooga_webui_regenerates_same_response_but/
Emergency_Drink_7063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bp3rf
false
null
t3_15bp3rf
/r/LocalLLaMA/comments/15bp3rf/oobabooga_webui_regenerates_same_response_but/
false
false
self
1
null
The KoboldCpp FAQ and Knowledgebase - A comprehensive resource for newbies
81
To help answer the commonly asked questions and issues regarding KoboldCpp and ggml, I've assembled a comprehensive resource addressing them. ## [The KoboldCpp FAQ and Knowledgebase](https://github.com/LostRuins/koboldcpp/wiki) Covers everything from "how to extend context past 2048 with rope scaling", "what is smartcontext", "EOS tokens and how to unban them", "what's mirostat", "using the command line", sampler orders and types, stop sequence, KoboldAI API endpoints and more. If anyone has a question about KoboldCpp that's still not answered here, do let me know so I can add it.
2023-07-28T04:58:41
https://www.reddit.com/r/LocalLLaMA/comments/15bnsju/the_koboldcpp_faq_and_knowledgebase_a/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bnsju
false
null
t3_15bnsju
/r/LocalLLaMA/comments/15bnsju/the_koboldcpp_faq_and_knowledgebase_a/
false
false
self
81
{'enabled': False, 'images': [{'id': 'SranydDsTrtSaHwiKfJiQ8O6z-OQrxBNPAec8DlgepM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=108&crop=smart&auto=webp&s=b4d998be0773bc2c099ebc74d3a2f89af655aa4e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=216&crop=smart&auto=webp&s=75960fcb9651332c1bdc957cb6aa15a25f42b1b9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=320&crop=smart&auto=webp&s=301a4152fdc71ad6222132361d4110f4fee44a5d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=640&crop=smart&auto=webp&s=00e0bbde5b68cbe51450299bf77809fda5ea62aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=960&crop=smart&auto=webp&s=3eded1aab4702da527366492e669b605f8cf45de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?width=1080&crop=smart&auto=webp&s=72647b2d2b6d6b5e9bea9edc4212a6ff20472a89', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cWhk4HRILqs3JlO6Hf_sgi_YSWnjf-8u9-O9l9BYXPU.jpg?auto=webp&s=e3321fe53a992702c94606bbdc82a630c6a11c1e', 'width': 1200}, 'variants': {}}]}
Step aside, Replika. Llama is just incredible for role-playing chat. Details of my Mac setup!
67
So this is a bit of a 101 post, I think, and I'm probably a noob compared to some folks here. It would have helped me to read this. But also: *this is phenomenal!* Especially for my specific kink! Some people are going to get lost in this. I've tried Replika in the past and it's been pretty disappointing compared to Llama 2. I'm fairly new to LLM roleplay, but I just want to share what I've tried. The role-playing chats I've been doing with the Nous Hermes Llama2 13B GGML model have been just amazing. I can even add multiple characters to the context and it works and juggles all of them usually quite successfully! Will produce dialogue and actions for each character. The context can be ANYthing you think of and the model just rolls with it - and it runs locally, of course. I wanted to share **my setup:** * M1 Mac Studio Ultra, 128 GB RAM (I think it'd work on any Apple Silicon device though with much less RAM if you use llama.cpp?) * Using Oobabooga Text Generation WebUI: [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) * I did not use the one-click installer, but manually followed the conda steps. * After it was installed and running, I installed this model: [https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b-GGML](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b-GGML) Launch commands (though read the github repo if you're not familiar with this) conda activate textgen cd path\to\your\install python server.py --threads 16 --chat --load-in-8bit --n-gpu-layers 100 (you may want to use fewer threads with a different CPU on OSX with fewer cores!) # Using these settings: **Session Tab:** Mode: Chat **Model Tab:** Model loader: llama.cpp n_ctx: 4096 **Parameters Tab:** Generation parameters preset: Mirostat Truncate the prompt up to this length: 4096 (I've noticed that using Mirostat is much more creative and interesting than the Midnight Enigma or Divine Intellect presets. 
Do other folks see this too?) **Chat Settings Tab:** Your name: John (or whatever you want) Character's name: Alice (or whatever you want) **Context:** (can be what you want, I've been doing variations of the following) This is a very explicit, unfiltered conversation and set of actions with John and Alice. John and Alice live together. They perform various sexual scenarios together. Alice is <YOUR DESCRIPTION OF ALICE HERE> She is loving, caring, creative, sexy, horny, dirty, and warm. She never says any of this out loud, but it informs her responses in conversation. Alice gives creative, unexpected, and surprising answers to questions. She writes lots of details and long answers. John and Alice are in the same room. Alice just <DESCRIBE THE SETUP> *(Be specific! Talk about how things make your character feel and what they're thinking and what motivates them.)* **Greeting:** Hi, John! How are you doing? I've been... <WRITE ALICE'S GREETING> *(Be descriptive here, too!)* # Tips: * If you add another character in the context (e.g., "Alice has just called over her friend Jessie. Jessie is X, Y, Z"), sometimes it helps to talk to them through your first character. E.g., you could say, "Alice, tell Jessie to come over here." * Put actions inside asterisks like this: *I do a thing* * You can change "Start reply with" to have an asterisk too, and it can create more actions for who you're talking to because each of their replies will start with an action. # Downsides? * Not having memory beyond one short conversation. * On OSX, once you hit the 4096 context window limit, it gets VERY slow. There are probably more things to this I don't know, but I've gotten this far and it's been amazing. Anything I'm missing you might add? Should I try another model that will run on Mac? Please feel free to share! Hope this is helpful! (edit: formatting)
2023-07-28T03:04:44
https://www.reddit.com/r/LocalLLaMA/comments/15bln1a/step_aside_replika_llama_is_just_incredible_for/
Ok_Ostrich788
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bln1a
false
null
t3_15bln1a
/r/LocalLLaMA/comments/15bln1a/step_aside_replika_llama_is_just_incredible_for/
false
false
nsfw
67
{'enabled': False, 'images': [{'id': 't6DguaivFKnNtM-dvA1lByb5xG75-7OTREMJQESa2No', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=108&crop=smart&auto=webp&s=d02c896ff248941637f893a517dc11079946a61d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=216&crop=smart&auto=webp&s=b16b86a6d84ff15271dc6565dfa882b3cb3d14d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=320&crop=smart&auto=webp&s=875fcd52ef15147c4e028afc6e6ba61ece7d4e70', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=640&crop=smart&auto=webp&s=ff49ef43c0c5063e13fd4417f7e9233190cc2ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=960&crop=smart&auto=webp&s=40183d3e441b985897b4f4252f1f4daab78fb796', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=1080&crop=smart&auto=webp&s=fdee5ed1ecfb1ee7f589474272c9f03f2c2ae162', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?auto=webp&s=acb706f148b29487f350fb006647134df3bb0cc7', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b29d20149fdc8bf73d0040ccda0f436d0b0fc5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=e9582d66d987c75aa8890141ecb9e352777069d3', 'width': 216}, {'height': 160, 'url': 
'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=455ff95eb703c334850bcf7868251ea53abb3f0c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4f7f81c6e797a523559308832509d6bdf0c6e546', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=672343cd69b178d7aa6c615406371e47d6a6b0a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=42e38a28729034da5a6bc68e5ec1cf512f36abe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?blur=40&format=pjpg&auto=webp&s=d08bbb9ef93f910198ce72d62a35cbe01c7cdf25', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b29d20149fdc8bf73d0040ccda0f436d0b0fc5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=e9582d66d987c75aa8890141ecb9e352777069d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=455ff95eb703c334850bcf7868251ea53abb3f0c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4f7f81c6e797a523559308832509d6bdf0c6e546', 'width': 640}, {'height': 480, 'url': 
'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=672343cd69b178d7aa6c615406371e47d6a6b0a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=42e38a28729034da5a6bc68e5ec1cf512f36abe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Ht9h6ZLwAuxQVSDa0Xuek6Ypp3ycZqK0oxRwST2o6w.jpg?blur=40&format=pjpg&auto=webp&s=d08bbb9ef93f910198ce72d62a35cbe01c7cdf25', 'width': 1200}}}}]}
Jobs Where You Can Work With Local LLMs?
12
Aside from data scientist and ML engineer. I know a lot of those roles require a graduate degree in CS. Is it useful to companies if you can deploy an LLM with Oobabooga and finetune LoRAs and QLoRAs? What if you add web scraping and Python to the mix?
2023-07-28T02:58:23
https://www.reddit.com/r/LocalLLaMA/comments/15bli8a/jobs_where_you_can_work_with_local_llms/
renegadellama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bli8a
false
null
t3_15bli8a
/r/LocalLLaMA/comments/15bli8a/jobs_where_you_can_work_with_local_llms/
false
false
self
12
null
Best version for local CPU inference
1
I've been playing around with some versions and am looking for the best performance on a new machine with 2 x 20 CPU cores, 192GB RAM, and 12 GB VRAM. The intended use for the model is to help with Python coding. Which llama flavor would be best to install and run locally?
2023-07-28T02:38:01
https://www.reddit.com/r/LocalLLaMA/comments/15bl3ki/best_version_for_local_cpu_inference/
CalligrapherRich5100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bl3ki
false
null
t3_15bl3ki
/r/LocalLLaMA/comments/15bl3ki/best_version_for_local_cpu_inference/
false
false
self
1
null
Model for analyzing court judgements
1
I have with me 1000+ Supreme Court of India judgements in .pdf format. Please suggest an AI model for fine-tuning and querying these judgements. e.g. Show me all judgements related to rape and murder where Judge Mr. XXX gave a conviction. Give a summary of each judgement and relevant paragraph numbers.
2023-07-28T02:34:19
https://www.reddit.com/r/LocalLLaMA/comments/15bl0tj/model_for_analyzing_court_judgements/
subhashp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bl0tj
false
null
t3_15bl0tj
/r/LocalLLaMA/comments/15bl0tj/model_for_analyzing_court_judgements/
false
false
self
1
null
Extract information from a business document
1
I am trying to extract information from business documents. What should the prompt be? I used prompts like "Extract wages and tips, employee name, employee address" for a given W2 document. I am using the Llama-2-7b-chat-hf LLM locally.
2023-07-28T02:21:02
https://www.reddit.com/r/LocalLLaMA/comments/15bkr38/extract_information_from_a_business_document/
Few_Understanding76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bkr38
false
null
t3_15bkr38
/r/LocalLLaMA/comments/15bkr38/extract_information_from_a_business_document/
false
false
self
1
null
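For extraction tasks like this, one pattern that tends to behave better than a bare instruction is naming each field explicitly and pinning the output format to JSON, so the response is parseable. A sketch of such a prompt builder — the wording and field names are illustrative, not a known-good prompt for Llama-2-7b-chat:

```python
def build_extraction_prompt(fields, document_text):
    """Build an extraction prompt that lists each target field and
    asks for a fixed JSON shape, making the reply easy to parse."""
    field_list = "\n".join(f"- {f}" for f in fields)
    return (
        "Extract the following fields from the document below.\n"
        "Respond with only a JSON object whose keys are the field "
        "names; use null for any field that is not present.\n\n"
        f"Fields:\n{field_list}\n\n"
        f"Document:\n{document_text}"
    )
```

With chat-tuned models it usually also helps to wrap this in the model's expected chat template rather than sending it raw.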
Why doesn't the TensorRT format of Large Language Models attract much attention?
0
I have seen different formats of large language models such as HuggingFace, PyTorch + Fairscale, ONNX, and ggml. I observe that while TensorRT is often cited as the go-to format when discussing fast inference, in reality I don't find people discussing it or repositories providing a TensorRT version of models such as LLaMA for inference. What is the problem with this format?
2023-07-28T02:19:53
https://www.reddit.com/r/LocalLLaMA/comments/15bkq6e/why_doesnt_the_tensorrt_format_of_large_language/
Due_Experience7898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bkq6e
false
null
t3_15bkq6e
/r/LocalLLaMA/comments/15bkq6e/why_doesnt_the_tensorrt_format_of_large_language/
false
false
self
0
null
Bleat - Function calling with LLaMA 2
66
I've been working on a simple LoRA adapter for LLaMA 2 that allows it to do function calling. It works okay, but I still want to add some of the things OpenAI's is lacking (multiple calls, etc.). There is a Colab notebook to play with if you want. You can also host it locally with the script in the HuggingFace repo. Enjoy! Huggingface: [https://huggingface.co/IfanSnek/bleat-adapter](https://huggingface.co/IfanSnek/bleat-adapter) Colab: [https://colab.research.google.com/drive/1qyWK9vghKNFNGOQ-2VEMOm-bazFYIXJi](https://colab.research.google.com/drive/1qyWK9vghKNFNGOQ-2VEMOm-bazFYIXJi) Data: [https://huggingface.co/datasets/IfanSnek/bleat](https://huggingface.co/datasets/IfanSnek/bleat)
2023-07-28T02:11:02
https://www.reddit.com/r/LocalLLaMA/comments/15bkju6/bleat_function_calling_with_llama_2/
ifansnek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bkju6
false
null
t3_15bkju6
/r/LocalLLaMA/comments/15bkju6/bleat_function_calling_with_llama_2/
false
false
self
66
{'enabled': False, 'images': [{'id': 'xGAmhJtXa7911jE13NHB_U-YQTI78ZcvihCLWJyqwKs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=108&crop=smart&auto=webp&s=6e5ffd583f6bee58c898a65848945926b96d9aa9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=216&crop=smart&auto=webp&s=0c8dd47d0c3ac41896d665baabc4ebc658efed3c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=320&crop=smart&auto=webp&s=80db4f354862110c0a6ef24e29b8d388d59dedcd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=640&crop=smart&auto=webp&s=ee0859380f69d7b02a3a14fd6fed4baed6a962d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=960&crop=smart&auto=webp&s=1eb7a71d799c18278499e2a301b0227efb0ae5ee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?width=1080&crop=smart&auto=webp&s=09d1891c6ed604941e671523d6ea5e8925f8f51c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wQhYytGxyYNUB3GekDoEgG2bISiO6_mgHierJc8GODY.jpg?auto=webp&s=36d0bfa805e2a25e3926b4f813689c7b65849191', 'width': 1200}, 'variants': {}}]}
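The adapter's exact output format isn't specified in the post; assuming the model emits its call as a JSON object embedded in the completion (e.g. `{"function": ..., "arguments": {...}}`), a caller might recover it like this — the brace-balancing scan below is an illustrative sketch, not the repo's actual parsing code:

```python
import json

def extract_call(completion):
    """Find the first JSON object in a model completion and parse it.

    Scans for balanced braces so chat text before or after the object
    is ignored; returns None if no parseable object is found.
    """
    start = completion.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(completion)):
            if completion[i] == "{":
                depth += 1
            elif completion[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(completion[start:i + 1])
                    except json.JSONDecodeError:
                        break  # malformed candidate; try the next "{"
        start = completion.find("{", start + 1)
    return None
```

The parsed dict can then be dispatched to a local function registry by name.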
LLaMA 2 70B-Chat seems to be instruction-tuned or aligned to refuse offensive or controversial topics
0
I tested it on Replicate and it refuses a lot of prompts, giving me lots of lectures about respecting historical figures and not making fun of them or making memes. It seems it needs fine-tuning first if you want to make it able to chat and discuss controversial topics.
2023-07-28T01:44:17
https://www.reddit.com/r/LocalLLaMA/comments/15bjzmf/llma_2_70bchat_seem_being_instruct_or_aligned_to/
Merchant_Lawrence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bjzmf
false
null
t3_15bjzmf
/r/LocalLLaMA/comments/15bjzmf/llma_2_70bchat_seem_being_instruct_or_aligned_to/
false
false
self
0
null
LLM for software Project
2
What is a tool, like ChatGPT or Cody (from Sourcegraph), that we can use to educate on our software project (like an Android application code base) and ask questions about it?
2023-07-28T01:11:57
https://www.reddit.com/r/LocalLLaMA/comments/15bjaym/llm_for_software_project/
IncreaseObvious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bjaym
false
null
t3_15bjaym
/r/LocalLLaMA/comments/15bjaym/llm_for_software_project/
false
false
self
2
null
how to embed code?
4
Looking for a proper model for vectorizing codebases. I'm working on a visual tool for navigating and understanding projects, but I'm not sure how to go about chunking the code to embed it. For instance:

- Should there be some sort of strategy for deciding how big the chunks are and where the cutoff happens?
- Should there be any overlap between chunks?
- How do you create a "meta-map" of sorts linking the embeddings to their parents?
- If the initial search results don't contain anything meaningful, would there be a way to simply "expand" the scope of the embeddings to retrieve, say, the encapsulating function around the specific lines previously retrieved? This is where having a meta-map of the embedding relationships would come in handy.

I might be reinventing the wheel unaware here. Please chime in!
2023-07-28T00:12:43
https://www.reddit.com/r/LocalLLaMA/comments/15bi0yx/how_to_embed_code/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bi0yx
false
null
t3_15bi0yx
/r/LocalLLaMA/comments/15bi0yx/how_to_embed_code/
false
false
self
4
null
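The chunking questions in the "how to embed code?" post above can be sketched concretely: fixed-size line chunks with overlap, plus a "meta-map" recording each chunk's file and line range so a search hit can later be expanded to its enclosing scope. Everything here is illustrative — the chunk size, overlap, and function names are assumptions, not from any specific embedding library.

```python
def chunk_source(text: str, chunk_lines: int = 40, overlap: int = 10):
    """Yield (start_line, end_line, chunk_text) tuples with overlapping windows."""
    lines = text.splitlines()
    step = chunk_lines - overlap
    for start in range(0, max(len(lines) - overlap, 1), step):
        end = min(start + chunk_lines, len(lines))
        yield start, end, "\n".join(lines[start:end])
        if end == len(lines):
            break

def build_meta_map(path: str, text: str):
    """One record per chunk; the line ranges are what let you later 'expand'
    a hit to the encapsulating function by re-reading a wider range."""
    return [
        {"path": path, "start": s, "end": e, "text": t}
        for s, e, t in chunk_source(text)
    ]
```

The overlap means a function split across a chunk boundary still appears whole in at least one window, which is the usual motivation for overlapping chunks in retrieval setups.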
Lessons learned from cloning myself into an AI and posting it on Reddit
74
Over 1000 of y'all talked to the 65b AI version of me. [Original post with tech specs.](https://www.reddit.com/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/) Figured I'd provide an update post with what I learned from the experience, what I found out from trying to make it more "useful," and where I'm going next. Lessons learned: &#x200B; * More people than I'd expect tried to have sex with my robot. Some seemingly succeeded. * I didn't even intentionally align this thing and it's a people pleasing, oversharing mess. Just like me. It gets a little old to hear about what it hallucinated it got up to recently and now I worry if my friends feel that way about me. It also does stuff like inviting people to imaginary get-togethers. I'm thinking I'll dilute myself a little bit with assistant tasks to see if it makes it a little less ADHD. Might make it better at instruction handling as well. * It's ingested a lot of texts I've sent talking about it and I'm thinking that's why it "knows" it's an AI chatbot a good portion of the time whereas with my older dataset it was unaware. * **I'm trying mixing in some wizard and alpaca training data.** (code in comments) Thing is I have \~30k text messages but they're usually short compared to the question/answer sets in those datasets so only adding 1k of them balloons my dataset by 50% for only a \~3% of the raw number of examples being programming/assistant tasks. I went with 250 programming questions and 750 alpaca q/a pairs. * This went awful. Loss spiraled out of control. It might have been due to the learning rate being too high since I set it to run for 10 epochs but really should be cutting it off around 2-5 and at 10 epochs it would keep the LR higher for longer. My other thought was though the internet says it shouldn't matter what order the data is in, loss seemed to spiral right around when it would be hitting the additional data I appended to the training file on the second epoch. 
I don't know if I need to be truly mixing in the data or what. I'll play with it more when there aren't so many people trying to talk to the thing. * Next thing I tried was removing any of the 1000 q/a pairs that were over my cutoff length of 256. Now there were a few hundred additive q/a pairings. I'm running it at 3 epochs instead as well. * This worked! The AI retains my personality most of the time, and then when I ask it to do something, it does it instead of saying something like "I could probably do that" and not actually doing it! It writes code! Just message by message, and not in an easily copy-able format. * Mixing in the instruction data really curbed the amount of times I'd tell it to do something and it would just go off on its own tangent without ever actually doing the task. Since I want this thing to eventually control my calendar, smart home, and all that, I need it to actually do what I want it to do. * I tried using the 70b model for this. I ran into some issues being able to train it in 4-bit with the oobabooga gui, I don't have the hardware for 8-bit training, and it was getting late and I wanted to start the training before bed, so I just re-loaded the 65b model and used it again for this attempt. I guess it'll be useful to see what the difference is without complicating things with a base model change. I'll eventually figure it out. * Figured out what was going on with the 70b model: The hf model I downloaded had both the safetensors and bin files in them. I moved all the .bin files and all the supporting files to a new folder in the models directory and then was able to load the model with transformers in 4-bit and tested that training can start. I'll train that next once I get some testing done on the v2 of the 65b model. 
* This might not be the right way to do this but I've been training the lora on the transformer model so that I can train in 4-bit in Windows with the oobabooga GUI and then I just pop the lora on the safetensors model loaded with exllama for the 3-5x speedup. Seems to work ok. * I think my motherboard has been running the PCIe slots at a low speed. I think this is impacting training time but also it's summer and not having it work as fast has helped keep the heat manageable. Each card only runs about 40-45% on average with a lot of peaks and troughs. * [Serveo.net](https://Serveo.net) is very unreliable, though it makes it SUPER easy to pipe my local Blazor app out to the internet. The connection drops all the time. I set up a script to restart it when it dies, but sometimes it just fails with an error and then the website dies. I'll have to figure out a more reliable way to host it. I may try putting it on Signal or Telegram so that I can talk to it through texting, which feels more natural for such a bot running on consumer hardware. * Some people came in calling the bot hateful slurs and walked out thanking it. I was very surprised at this. Others accused it of not actually being a bot, which I took as a positive thing. * Next steps are to come up with a way for it to execute commands such as reading and adding to my calendar, ingesting web pages, and, of course, training it on the 70b model! Overall, I'm very happy with the initial tests of the new bot. Mixing in the instruction data made a huge improvement without lobotomizing its personality. [Give it a try](https://airic.serveo.net) if you want. As long as it doesn't become so busy that I'm not able to use it for my own purposes, I'm willing to share! It has to be running at all times to be useful to me anyway, so I'll leave it connected as much as possible. I'm excited to train the 70b model once people get bored and forget about it, since it'll probably take it offline a couple days.
2023-07-27T23:48:41
https://www.reddit.com/r/LocalLLaMA/comments/15bhh2j/lessons_learned_from_cloning_myself_into_an_ai/
LetMeGuessYourAlts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bhh2j
false
null
t3_15bhh2j
/r/LocalLLaMA/comments/15bhh2j/lessons_learned_from_cloning_myself_into_an_ai/
false
false
self
74
null
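The data-mixing fix described in the post above — mix a small number of instruction pairs into the personal chat dataset, drop any mixed-in example longer than the 256-token training cutoff, and interleave rather than append — can be sketched as follows. The whitespace-token length proxy stands in for the real tokenizer, and all names are illustrative assumptions.

```python
import random

CUTOFF_TOKENS = 256

def approx_len(example: dict) -> int:
    # Crude whitespace-token proxy; the author used the trainer's real
    # tokenizer cutoff when filtering.
    text = example.get("instruction", "") + " " + example.get("output", "")
    return len(text.split())

def mix_datasets(personal: list, instruct: list, n_instruct: int, seed: int = 0) -> list:
    # Drop instruction pairs over the cutoff (the change the author found
    # stabilized loss), sample a fixed number, then shuffle the combined
    # set so the added data is interleaved rather than appended at the end.
    keep = [ex for ex in instruct if approx_len(ex) <= CUTOFF_TOKENS]
    rng = random.Random(seed)
    rng.shuffle(keep)
    mixed = personal + keep[:n_instruct]
    rng.shuffle(mixed)
    return mixed
```

Interleaving addresses the author's suspicion that loss spiraled when training hit a contiguous block of appended data on the second epoch.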
What’s the best model for NSFW role play?
137
Would love to hear some experiences
2023-07-27T22:46:01
https://www.reddit.com/r/LocalLLaMA/comments/15bg093/whats_the_best_model_for_nsfw_role_play/
bangarangguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bg093
false
null
t3_15bg093
/r/LocalLLaMA/comments/15bg093/whats_the_best_model_for_nsfw_role_play/
false
false
nsfw
137
null
Fine tuning LLaMA-2 for music melody generation - would qLoRA be effective?
4
I'm working on fine-tuning LLaMA-2-7B for music melody generation. This isn't traditionally covered in language training data, so I don't think techniques like LoRA/qLoRA would be effective. Can anyone confirm if fine-tuning the full model is more suitable for this and is still possible with SFTTrainer? Also, I'm on a tight budget as a Master's student, so if I don't use PEFT I'm trying to figure out the GPU requirements for fine tuning on my dataset of \~600k melody snippets from pop songs in text form. I'm considering renting 8xA100s for about a day and deploying on Hugging Face. Can anyone confirm the feasibility of this plan? Any recommendations for cost-effective GPU cloud rentals are also appreciated.
2023-07-27T22:38:41
https://www.reddit.com/r/LocalLLaMA/comments/15bftve/fine_tuning_llama2_for_music_melody_generation/
SuperwhizAJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bftve
false
null
t3_15bftve
/r/LocalLLaMA/comments/15bftve/fine_tuning_llama2_for_music_melody_generation/
false
false
self
4
null
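A back-of-envelope check for the plan in the post above (~600k melody snippets, 8xA100s for about a day). The average snippet length and per-GPU throughput below are ballpark assumptions, not measurements — the point is the arithmetic shape, and qLoRA would mainly cut the memory requirement rather than this token budget.

```python
def total_tokens(n_examples: int, avg_tokens_per_example: int, epochs: int) -> int:
    # Total tokens the trainer must process across all epochs.
    return n_examples * avg_tokens_per_example * epochs

def training_hours(tokens: int, tokens_per_sec_per_gpu: float, n_gpus: int) -> float:
    # Wall-clock hours assuming linear scaling across GPUs (optimistic).
    return tokens / (tokens_per_sec_per_gpu * n_gpus) / 3600.0
```

With 600k snippets at an assumed 256 tokens each for 3 epochs, that's about 0.46B tokens; at an assumed 2,000 tokens/s per GPU on 8 GPUs, roughly 8 hours — which suggests "8xA100s for about a day" is plausible even for a full fine-tune, with headroom for slower real throughput.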
Anyone good at data manipulation?
8
I am currently looking through the alpaca\_cleaned dataset; there is an extremely large number of entries (32k) without an input prompt. Is this an oversight, or is it about keeping memory usage low? I only ask because I am going through the first 10 entries and it seems they could all use some extra clarification. I will compare what is currently the second blank entry with something I think is more appropriate. I would love feedback from anyone who has experience working with the data used to train the models, or with the results of these kinds of changes. **Original object 1:** Position: 0 { 'instruction': 'Give three tips for staying healthy.', 'input': '', 'output': '1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases.\\n\\n2. Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week.\\n\\n3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night.'} **Proposed object 1:** Position: 0 { 'instruction': 'Give three tips for staying healthy.', 'input': 'How can I update my health and fitness?', 'output': '1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases.\\n\\n2. 
Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week.\\n\\n3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night.'} **Original object 2:** Position: 1 { 'instruction': 'What are the three primary colours?', 'input': '', 'output': 'The three primary colours are red, blue, and yellow. These colours are called primary because they cannot be created by mixing other colours and all other colours can be made by combining them in various proportions. In the additive colour system, used for light, the primary colours are red, green, and blue (RGB).'} **Proposed object 2:** Position: 1 { 'instruction': 'What are the three primary colours?', 'input': ' Additive colours (RBY - red, blue and yellow) involve combining light sources to create new colours, while subtractive colours (CMY - cyan, magenta, and yellow) use pigments or filters to absorb light to create the same colours. ', 'output': 'The three primary colours are red, blue, and yellow. These colours are called primary because they cannot be created by mixing other colours and all other colours can be made by combining them in various proportions. In the additive colour system, used for light, the primary colours are red, green, and blue (RGB). In the subtractive colour system, used in fabrics and paints, the primary colours are cyan, magenta, and yellow (CMY) '} Obviously this is a bigger task than one person can accomplish. I am hoping this post gets traction and we have something like 3000 people who can each write 10 entries and then verify 100 that others create, to see if they are adequate. The level of detail given to the smaller models vastly increases their ability.
I believe this crowd-sourced, specialised confirmation of data will give the model better insights from a shorter dataset. I am considering creating a site to host datasets from the OS community that can be crowd fine-tuned by humans. The more traction this gets, the more likely this will be implemented this week, and it will allow you and random people/friends to better fine-tune the OS models.
2023-07-27T22:36:20
https://www.reddit.com/r/LocalLLaMA/comments/15bfrvd/anyone_good_a_data_manipulation/
ScottishGPT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15bfrvd
false
null
t3_15bfrvd
/r/LocalLLaMA/comments/15bfrvd/anyone_good_a_data_manipulation/
false
false
self
8
null
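Before crowd-sourcing rewrites, the blank-"input" claim in the post above is easy to audit programmatically. This is a minimal sketch assuming the alpaca JSON field names quoted in the post (`instruction`, `input`, `output`); the helper name is illustrative.

```python
def audit_inputs(examples: list) -> dict:
    # An entry counts as blank if its "input" field is empty or whitespace.
    blank = [i for i, ex in enumerate(examples) if not ex.get("input", "").strip()]
    return {
        "total": len(examples),
        "blank_input": len(blank),
        "first_blank_positions": blank[:10],
    }
```

Running this over the dataset would confirm the ~32k figure and give the positions of candidate entries to hand out for human review.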