title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is there any way to use the GPU of the AMD 5700G APU for accelerating inference? | 1 | [removed] | 2023-10-24T07:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/17f7d4r/is_there_any_way_to_use_the_gpu_of_the_amd_5700g/ | PersonalConfidence_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f7d4r | false | null | t3_17f7d4r | /r/LocalLLaMA/comments/17f7d4r/is_there_any_way_to_use_the_gpu_of_the_amd_5700g/ | false | false | self | 1 | null |
Are there any nice frontends for web (and iOS) that I can use with my custom LLM setup running on a server? | 4 | I want to create a local LLM system using haystack or similar for retaining and searching through the log, with nightly or weekly fine-tuning of the LLM. I want to be able to access this from wherever, both web based and possibly through an app on iOS. Instead of making this from scratch, are there any good implementations of this? It should be chat based, and preferably with the possibility to record voice and take pictures, and maybe to get sound and video back from the server. | 2023-10-24T07:35:13 | https://www.reddit.com/r/LocalLLaMA/comments/17f7860/are_there_any_nice_frontends_for_web_and_ios_that/ | NorthernSouth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f7860 | false | null | t3_17f7860 | /r/LocalLLaMA/comments/17f7860/are_there_any_nice_frontends_for_web_and_ios_that/ | false | false | self | 4 | null |
How much video memory is needed on a video card to train lora for the 70B model? | 3 | My UHD770 takes 32 gigabytes of video memory from 62 gigabytes of RAM. I think I could train Lora very slowly for the 70B model, but I'm not exactly sure how much video memory is needed. | 2023-10-24T07:26:19 | https://www.reddit.com/r/LocalLLaMA/comments/17f73ux/how_much_video_memory_is_needed_on_a_video_card/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f73ux | false | null | t3_17f73ux | /r/LocalLLaMA/comments/17f73ux/how_much_video_memory_is_needed_on_a_video_card/ | false | false | self | 3 | null |
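A rough back-of-envelope answer for questions like this: a QLoRA-style finetune needs the quantized base weights in memory plus overhead for LoRA adapters, optimizer state, and activations. A minimal Python sketch — the flat 8 GB overhead constant is an assumption for illustration, not a measured number:

```python
def qlora_vram_gb(n_params_billion: float, weight_bits: int = 4,
                  overhead_gb: float = 8.0) -> float:
    """Rough VRAM estimate for a QLoRA finetune.

    Quantized base weights (n_params * bits / 8 bytes-per-param)
    plus a flat overhead term for LoRA adapters, optimizer state,
    and activations. The overhead figure is an assumption.
    """
    weights_gb = n_params_billion * weight_bits / 8
    return weights_gb + overhead_gb

# A 70B model at 4-bit: roughly 35 GB of weights alone,
# so ~43 GB total under these assumptions.
print(qlora_vram_gb(70))  # -> 43.0
```

Actual numbers depend heavily on sequence length, batch size, and the quantization scheme, so treat this as an order-of-magnitude check only.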
Using LLaMA for json data generation, help | 1 | Hi all,
I’m having a bit of trouble. I want to generate content for a language learning app I’m working on.
I use this as a prompt:
‘’’
Generate 10 beginner-friendly sentences in French and their English
translations. The topics should be asking for directions and giving directions. Please format
the output as a JSON. Do not output anything else but the sentences and the JSON
structure. End your response by properly closing the JSON. Do not repeat any sentences.
{
"sentences": [
{
"French": "Où est la bibliothèque?",
"English": "Where is the library?"
},
‘’’
It mostly works, but it repeats itself quite often, sometimes doesn’t close the JSON, and sometimes trails off with random garbage.
I run through quite a few prompts in this format and use something like this to do so:
‘’’
#!/bin/bash
# Define the variables
MAIN_EXEC="$HOME/git/llama.cpp/main"
MODEL_PATH="$HOME/git/llama.cpp/models/llama-2-13b/ggml-model-q4_0-v2.gguf"
PROMPTS_DIR="./prompts/french/"
N_TIMES=1 # Number of times to run the command
OUTPUT_DIR="../french/"
BASE_OUTPUT_FILE="lessons"
# Function to run the command N_TIMES and capture the output for a given prompts file
run_command() {
local PROMPTS_FILE="$1"
local BASENAME=$(basename "$PROMPTS_FILE" .txt) # Extracts the filename without the extension
for i in $(seq 1 $N_TIMES); do
OUTPUT_FILE="${OUTPUT_DIR}${BASENAME}_${i}.json"
$MAIN_EXEC -m $MODEL_PATH -c 512 -b 1024 -n -1 --keep 48 --repeat_penalty 1.0 --color -f $PROMPTS_FILE | awk '/{/{flag=1} flag' > "$OUTPUT_FILE"
done
}
# Iterate over all .txt files in the PROMPTS_DIR and run the command on each one
for FILE in "$PROMPTS_DIR"*.txt; do
run_command "$FILE"
done
‘’’
I’m not sure if the way I’m running this is correct. I tried using ‘server’ rather than ‘main’ and it seemed to do much, much worse when I did that. So I’m a little lost on what I should be doing differently.
I know I’m not doing this correctly, so some help or insights would be very welcome! | 2023-10-24T06:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/17f6gg7/using_llama_for_json_data_generation_help/ | hoteluniformgolfs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f6gg7 | false | null | t3_17f6gg7 | /r/LocalLLaMA/comments/17f6gg7/using_llama_for_json_data_generation_help/ | false | false | self | 1 | null |
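Two things stand out in the script above: `--repeat_penalty 1.0` disables the repetition penalty entirely (llama.cpp's default is around 1.1, which would help with the repeated sentences), and recent llama.cpp builds support grammar-constrained sampling (e.g. `--grammar-file grammars/json.gbnf`), which forces syntactically valid JSON at generation time. Failing that, a post-processing pass can salvage truncated output — a hedged Python sketch that assumes the one-level `{"sentences": [...]}` shape from the prompt, not something tested against this exact pipeline:

```python
import json
from typing import Optional

def repair_json(text: str) -> Optional[dict]:
    """Parse model output; if it trails off mid-object, truncate to the
    last complete sentence object and close the structure ourselves."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Walk backwards over closing braces until a truncation point parses.
    cut = text.rfind("}")
    while cut != -1:
        candidate = text[: cut + 1].rstrip().rstrip(",") + "]}"
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            cut = text.rfind("}", 0, cut)
    return None

def dedupe_sentences(data: dict) -> dict:
    """Drop repeated sentence pairs, keeping the first occurrence."""
    seen, unique = set(), []
    for pair in data.get("sentences", []):
        if pair.get("French") not in seen:
            seen.add(pair.get("French"))
            unique.append(pair)
    return {"sentences": unique}
```

Running each generated file through `repair_json` and discarding the ones that return `None` (then regenerating) is usually cheaper than hand-fixing broken output.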
What ai model can run on localai llama2 or mistral or falcon closest to chatgpt within 7b and 30b range? | 1 | I am looking to run an AI model with LocalAI, obviously quantized. I have 24 GB RAM and a 4-core processor.
The processor is Arm 3.0 Ghz | 2023-10-24T05:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/17f5pv2/what_ai_model_can_run_on_localai_llama2_or/ | gptgpt1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f5pv2 | false | null | t3_17f5pv2 | /r/LocalLLaMA/comments/17f5pv2/what_ai_model_can_run_on_localai_llama2_or/ | false | false | self | 1 | null |
Best long-context models, and what's up with LLongMA? | 8 | I have been looking for models that were trained on long-context tasks, and stumbled on a series of posts culminating in [this one](https://www.reddit.com/r/LocalLLaMA/comments/15c0pbs/llongma2_16k_a_llama_2_16k_model/) about LLongMA 16K.
All of the original models seem to be gone from HuggingFace's hub (though there are still some ports there).
Any idea why the models were taken down and what are the current best long-context (8K+) models? | 2023-10-24T05:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/17f59i3/best_longcontext_models_and_whats_up_with_llongma/ | Palmik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f59i3 | false | null | t3_17f59i3 | /r/LocalLLaMA/comments/17f59i3/best_longcontext_models_and_whats_up_with_llongma/ | false | false | self | 8 | null |
Questions/Issues finetuning LLaMA 2 7B with QLoRA locally | 1 | I'm trying to finetune LLaMA 2 7B with QLoRA locally on a Windows 11 machine using the Hugging Face trl library. System specs are i9-13900k, RTX 4080 (16GB VRAM), and 64GB RAM. However, since the bitsandbytes library doesn't support Windows, I'm running this under WSL (GPU enabled and 32 GB RAM allocated to the VM). As far as I can tell, it doesn't have any performance impact, but I'd still like to run everything natively under Windows, so if anyone has any suggestions here, much appreciated.
Here is the command line I'm using for the fine tuning:
python trl/examples/scripts/sft.py --model_name NousResearch/llama-2-7b-chat-hf --dataset_name mlabonne/guanaco-llama2-1k --load_in_4bit --use_peft --batch_size 1 --gradient_accumulation_steps 1
So, I'm using a non-gated model with the mini guanaco dataset (approx. 1000 rows). The tutorial/video I was following had the batch size set at 16, but on my system anything above 4 causes a runtime CUDA error and fails. The message is not very useful, but here it is:
>RuntimeError: handle\_0 INTERNAL ASSERT FAILED at "../c10/cuda/driver\_api.cpp":15, please report a bug to PyTorch.
At batch size 4, it works just fine. But the problem is, it is projected to take 30+ hours to complete training. With batch size 2, that comes down to about 8 hours. Lastly, with batch size set to 1, it finished in under 30 minutes. Here is the final result of this run:
>{'train\_runtime': 1659.3351, 'train\_samples\_per\_second': 1.808, 'train\_steps\_per\_second': 1.808, 'train\_loss': 1.4093559131572644, 'epoch': 3.0}
Are these times normal for an RTX 4080? Are there things I can do to improve performance? Also, from what I understand, batch size of 1 could be pretty bad for the quality of the training? Is there an ideal batch size for this kind of use case?
Lastly, I originally tried to run this test with the full guanaco dataset, which is 10x larger. With a batch size of 4, that run was projected to take 17+ days! Batch size of 2 was 4+ days but going down to a batch size of 1 drastically cuts it down to less than 5 hours. This is not too bad timewise but again the same quality concerns exist. | 2023-10-24T05:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/17f542t/questionsissues_finetuning_llama_2_7b_with_qlora/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f542t | false | null | t3_17f542t | /r/LocalLLaMA/comments/17f542t/questionsissues_finetuning_llama_2_7b_with_qlora/ | false | false | self | 1 | null |
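On the batch-size question above: a common workaround is to keep the per-device (micro) batch at 1 so it fits in VRAM and raise `--gradient_accumulation_steps` instead — gradients are summed over several micro-batches before each optimizer step, which approximates a larger effective batch for gradient quality. A sketch of the arithmetic (the 1000-row count comes from the mini-guanaco dataset in the post; whether the trl script needs other tweaks at higher accumulation values is untested here):

```python
import math

def effective_batch(micro_batch: int, grad_accum_steps: int) -> int:
    """Effective batch size when accumulating gradients."""
    return micro_batch * grad_accum_steps

def optimizer_steps(num_rows: int, micro_batch: int,
                    grad_accum_steps: int, epochs: int) -> int:
    """Optimizer steps for a full training run."""
    per_epoch = math.ceil(num_rows / (micro_batch * grad_accum_steps))
    return per_epoch * epochs

# Micro-batch 1 with 16 accumulation steps behaves like batch 16
# for the optimizer, at roughly batch-1 memory cost.
print(effective_batch(1, 16))           # -> 16
print(optimizer_steps(1000, 1, 16, 3))  # -> 189
```

Throughput is still bounded by the micro-batch forward/backward passes, so wall-clock time will look closer to the batch-1 run than the batch-16 one, but the concern about batch-size-1 gradient noise largely goes away.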
Why isn’t exl2 more popular? | 85 | I just found out about the exl2 format yesterday and gave it a try. Using one 4090, I can run a 70B 2.3bpw model with ease, around 25t/s after the second generation. The model is only using 22GB of VRAM, so I can do other tasks in the meantime too. Nonetheless, exl2 models seem less discussed, and the download count on Hugging Face is a lot lower than for GPTQ. This makes me wonder if there are problems with exl2 that make it unpopular? Or is the performance just bad?
This is one of the models I have tried
https://huggingface.co/LoneStriker/Xwin-LM-70B-V0.1-2.3bpw-h6-exl2
Edit: The above model went silly after 3-4 conversations. I don’t know why and I don’t know how to fix it, so here is another one that is CURRENTLY working fine for me.
https://huggingface.co/LoneStriker/Euryale-1.3-L2-70B-2.4bpw-h6-exl2 | 2023-10-24T04:59:56 | https://www.reddit.com/r/LocalLLaMA/comments/17f4y11/why_isnt_exl2_more_popular/ | lasaiy | self.LocalLLaMA | 2023-10-24T08:59:29 | 0 | {} | 17f4y11 | false | null | t3_17f4y11 | /r/LocalLLaMA/comments/17f4y11/why_isnt_exl2_more_popular/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'BC_fw19RpC8S6hYG7071Hcv5jfOQwDfw_nZcjSOrYgI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?width=108&crop=smart&auto=webp&s=2db0399ae309dda1c24474de141ec6073e6e67a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?width=216&crop=smart&auto=webp&s=e8a943ff4be156ef2d75aacb3e0f978c58e28832', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?width=320&crop=smart&auto=webp&s=3b0a6065f7f71ed18d092144b7418d858cfe4657', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?width=640&crop=smart&auto=webp&s=f5e4348f36534f4e436bfcac7820fa10439074b4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?width=960&crop=smart&auto=webp&s=bb4a73abb565fa1415c2753c34b478344b8268c5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?width=1080&crop=smart&auto=webp&s=2cf4a195160eeae60b9c3d890d57a4d6533a722e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yMrrK2oisRit9neCDHnTGRejrme5cZMTUQK7J6Wf4A8.jpg?auto=webp&s=8692f44c74d7bbc87d36f486a1868f064d845908', 'width': 1200}, 'variants': {}}]} |
Rules question (rule 4) | 1 | Rule #4: limit self promotion. Does "more than 10% of my content" include both posts AND comments, or just posts? And does sharing someone else's model (when I know the person) count as self promotion? I finished a model the same day someone else asked me to upload theirs and post about it here, and I don't want to get banned for rule #4 if I promote my own thing and theirs one after the other.
Any advice appreciated, thanks | 2023-10-24T04:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/17f4nz3/rules_question_rule_4/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f4nz3 | false | null | t3_17f4nz3 | /r/LocalLLaMA/comments/17f4nz3/rules_question_rule_4/ | false | false | self | 1 | null |
Cat V0.5 — an uncensored biology and clinical model by Kal'tsit. | 40 | Cat v0.5 ([HuggingFace page](https://huggingface.co/Heralax/Cat-0.5)) is a Llama2 finetune trained on **100k**(!) **decensored** **rows** of text. It's intended to excel in biology and clinical science (i.e., disease diagnosis) while remaining skilled in conversation and entertainment. This balance of skills was achieved by combining parts of the popular chatDoctor, Airoboros, and bluemoonrp datasets.
Training data was filtered for any response that contained BOTH a refusal to answer and an "As an AI...". This is the decensoring. Beyond this, the dataset focused on rational thinking and scientific accuracy, which is especially strengthened during formal conversation.
The model was trained without a specific prompt template and is as such highly generalizable. For instance, it worked very well in SillyTavern when I was testing it — I can vouch for its RP capability. While there is no set prompt format, the structure Cat 0.5 was trained on can be found on the [model card](https://huggingface.co/Heralax/Cat-0.5).
Beyond all this, a new version with more clinical data, aimed at making disease diagnosis more reliable, is expected in about 2 months.
Anyway, everyone loves graphs, so here're some informative ones:
Length distribution of the dataset used for training:
​
[Note that this chart above represents 0.01% of the total training dataset.](https://preview.redd.it/w1ow8c6yp2wb1.png?width=576&format=png&auto=webp&s=d7f9a1215272fbb3512fa571252037625ba9fefb)
Training loss:
https://preview.redd.it/zljvq8pwq2wb1.png?width=426&format=png&auto=webp&s=f94825aa8ac8048e68c974256bc935de98a8743a
Learning rate graph:
​
[train\/learning rate](https://preview.redd.it/h843zvu8r2wb1.png?width=426&format=png&auto=webp&s=f948cafbeef139638332321580993f59b3ed88a3)
So, if you want RP, scientific biology help, or scientific biology help during your RP *(nudge nudge wink wink),* then Cat v0.5 ([HuggingFace page](https://huggingface.co/Heralax/Cat-0.5)) is worth a shot. The size and quality of the dataset are especially impressive in my opinion. Feedback would be appreciated as it continues development.
(Full disclosure: my involvement with this project is about 2 hours of testing this afternoon, creating a q5km quant, and uploading the model to HF and creating this post, with Kal'tsit's permission. So I'm not a stranger to it, but I'm not exactly a developer either. I'll still answer any questions anyone has to the best of my ability). | 2023-10-24T04:27:17 | https://www.reddit.com/r/LocalLLaMA/comments/17f4et3/cat_v05_an_uncensored_biology_and_clinical_model/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f4et3 | false | null | t3_17f4et3 | /r/LocalLLaMA/comments/17f4et3/cat_v05_an_uncensored_biology_and_clinical_model/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'aTQQZIVD6xOkn_n7j4pbWZeOl34fR9un9IuZF74Ujuc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?width=108&crop=smart&auto=webp&s=8c64ce01b5e448a7278b0fe78c30e6757f0636b1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?width=216&crop=smart&auto=webp&s=e429b92399494e25367fd9cdb7869d1b671493bf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?width=320&crop=smart&auto=webp&s=52c533daa47a9d56805662c52ff0465d8f2b6b98', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?width=640&crop=smart&auto=webp&s=34c735e6bed8d923364c6cd85d4d7048b1a58bff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?width=960&crop=smart&auto=webp&s=6fb62560b76a3f6d3c5df2fd6a8696a43737f36e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?width=1080&crop=smart&auto=webp&s=fa669024511b17bfeceb8e500b06af16bb826e44', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/BjQ95U51Yz1DX97Cj5dXDlEJjAAxawvT8Vlwbl7UceI.jpg?auto=webp&s=03afaa382c42a29b6b6057acf4a09e89a7c4c560', 'width': 1200}, 'variants': {}}]} | |
Riley Reid's uncensored RAG chatbot | 47 | It seems she has founded an AI company:
https://www.404media.co/riley-reid-clona-ai-chatbot-virtual-companion/
This is the platform: https://clona.ai/
In the interview, she claims the AI is completely uncensored and fine-tuned on extensive training data from her chat history. The chatbot seems to offer "long term memory" as one of its features.
What's the architecture of the bot likely to be? There would have to be some RAG involved, of course, but any guesses as to the LLM used? | 2023-10-24T03:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/17f34a7/riley_reids_uncensored_rag_chatbot/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f34a7 | false | null | t3_17f34a7 | /r/LocalLLaMA/comments/17f34a7/riley_reids_uncensored_rag_chatbot/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'zTfYel8O3_xZ2q63LigmJ-sTLNHI6i2uP_pikUmsAKo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?width=108&crop=smart&auto=webp&s=9e0b57acfb61a6aef60be935c5753999efcb049a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?width=216&crop=smart&auto=webp&s=20b2cbf8e5ed81d36d8b4a9e4e2fc3864daebec1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?width=320&crop=smart&auto=webp&s=e5e077d7bbe5d752868444a5f830193be6074248', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?width=640&crop=smart&auto=webp&s=4e804c88c2f4c442ed90010ef4223d167049b58e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?width=960&crop=smart&auto=webp&s=722a2ef37445421acff4d2560e6afa60cb7763c9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?width=1080&crop=smart&auto=webp&s=88d6ae719816dd1d67765940b74bc19aed5457c9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/UR5hR-eCo-2pDeNfDQcWMeK0g1Xiw0S9UBDjZPcHTG8.jpg?auto=webp&s=d972005b2b3d3409ad94d540d2f4b256abcf490f', 'width': 1920}, 'variants': {}}]} |
What's next? | 20 | So I programmed multiple ChatBots using various LLM models, RAG etc. Now what's next? What do you all do challenging to go to next level in Generative AI? | 2023-10-24T03:02:48 | https://www.reddit.com/r/LocalLLaMA/comments/17f2v5j/whats_next/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f2v5j | false | null | t3_17f2v5j | /r/LocalLLaMA/comments/17f2v5j/whats_next/ | false | false | self | 20 | null |
Llama-GPT via Docker container - issues/questions | 1 | Hi!
I've installed Llama-GPT on an Xpenology-based NAS server via Docker (Portainer). It works well, mostly. But I am having trouble using more than one model (so I can switch between them without having to update the stack each time). Also, every time I update the stack, any existing chats stop working and I have to create a new chat from scratch.
Can you help?
Thanks! | 2023-10-24T02:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/17f2bim/llamagpt_via_docker_container_issuesquestions/ | dropswisdom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f2bim | false | null | t3_17f2bim | /r/LocalLLaMA/comments/17f2bim/llamagpt_via_docker_container_issuesquestions/ | false | false | self | 1 | null |
Anthropic/Claude (by extension Amazon?) sued by big entertainment. Who's next? | 1 | [removed] | 2023-10-24T02:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/17f217u/anthropicclaude_by_extension_amazon_sued_by_big/ | Natural-Sentence-601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f217u | false | null | t3_17f217u | /r/LocalLLaMA/comments/17f217u/anthropicclaude_by_extension_amazon_sued_by_big/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LA4eyla76DdxKXcDn-GOlpK91WiJMIrh0m4DHTFFlk4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Ecb17sydqGac7T93aplDH5Ti9kD2Q5eStKWhWQqE1OE.jpg?width=108&crop=smart&auto=webp&s=fbe714f123e7d73892dde48515f03eb26a6ea7fc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Ecb17sydqGac7T93aplDH5Ti9kD2Q5eStKWhWQqE1OE.jpg?width=216&crop=smart&auto=webp&s=5b36615c46f16e633a080805e0128bde79896ff0', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Ecb17sydqGac7T93aplDH5Ti9kD2Q5eStKWhWQqE1OE.jpg?width=320&crop=smart&auto=webp&s=e0a7d62656b311cbc424e41ee0ef73c710e4fcc1', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/Ecb17sydqGac7T93aplDH5Ti9kD2Q5eStKWhWQqE1OE.jpg?width=640&crop=smart&auto=webp&s=299402deff6c47f1386a24b71e6be1e71a3c5705', 'width': 640}], 'source': {'height': 335, 'url': 'https://external-preview.redd.it/Ecb17sydqGac7T93aplDH5Ti9kD2Q5eStKWhWQqE1OE.jpg?auto=webp&s=e35c933ba5b482920a9a8b92e5ae6532ebbd1c33', 'width': 640}, 'variants': {}}]} |
Do you think corporations will achieve AGI or ASI faster than we get our local GPT-4 like models? | 10 | I mean even if language models are not the correct path for AGI, it still might take less time for them to develop da real intelligence and then a personal computer might quickly become an obsolete thing like a telegraph
​ | 2023-10-24T02:17:09 | https://www.reddit.com/r/LocalLLaMA/comments/17f1ye1/do_you_think_corporations_will_achieve_agi_or_asi/ | Ill-Yellow-672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f1ye1 | false | null | t3_17f1ye1 | /r/LocalLLaMA/comments/17f1ye1/do_you_think_corporations_will_achieve_agi_or_asi/ | false | false | self | 10 | null |
Help with a trained LoRA trained in llamacpp | 0 | Hello reddit. Need a little help with a LoRA I trained using llama.cpp. I used the finetune.exe tool to train a LoRA from a GGUF.
After 12 hours of training, I got two files, a gguf and a bin. Now when I load up Oobabooga and load the base gguf model and attach the lora, I get the error
AttributeError: 'LlamaCppModel' object has no attribute 'dtype'
I tried both the gguf and the bin LoRA files, but they both give the same error. I've been scouring the net for answers but I can't find any. How do I use finetune.exe to finetune a LoRA for my GGUF model?
I released Marx 3B V3. | 47 | <a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Today I released [Marx 3B V3](https://huggingface.co/acrastt/Marx-3B-V3).
[Marx 3B V3](https://huggingface.co/acrastt/Marx-3B-V3) is [StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) finetuned on [EverythingLM Data V3(ShareGPT format)](https://huggingface.co/datasets/acrastt/EverythingLM-V3-ShareGPT) for 2 epochs using [QLoRA](https://arxiv.org/abs/2305.14314).
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
```
Note that this model has an EOS token of `<|endoftext|>` instead of `</s>`.
## Attribution
[StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) by [Stability AI](https://stability.ai/) is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
Modifications:
- Instruction tuned on dataset [EverythingLM Data V3(ShareGPT format)](https://huggingface.co/datasets/acrastt/EverythingLM-V3-ShareGPT) for 2 epochs using [QLoRA](https://arxiv.org/abs/2305.14314). | 2023-10-24T01:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/17f1gcu/i_released_marx_3b_v3/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f1gcu | false | null | t3_17f1gcu | /r/LocalLLaMA/comments/17f1gcu/i_released_marx_3b_v3/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'ojWDPvljEfitKccZa6oiEZUoNb3X0K_vzy_5sXXUOoI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=108&crop=smart&auto=webp&s=8b19ae3b1dcc44e726c67fd09a6461988c3258fb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=216&crop=smart&auto=webp&s=2dcdb6ff59e819682724dab9622a8c0348cf4268', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=320&crop=smart&auto=webp&s=ea213059ff0b8bdc739242f7573a706638e03114', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=640&crop=smart&auto=webp&s=82c47f461aa0ce4bd800d93f179342d138aba796', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=960&crop=smart&auto=webp&s=803de3d7965c3742cb052f8ba6347476c3b6eab2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=1080&crop=smart&auto=webp&s=ec91d3a40e040a2c88b2303bbbc96199c85ae040', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?auto=webp&s=954177c28c58aabb092da70db1c3f975900160e2', 'width': 1200}, 'variants': {}}]} |
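For anyone scripting against this model, the prompt template above is straightforward to apply programmatically. A small sketch — the exact newline placement is my reading of the template, so verify against the model card before relying on it:

```python
def format_prompt(user_message: str) -> str:
    """Wrap a user message in the Marx 3B V3 prompt template."""
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"

# Generation should be stopped on this model's EOS token,
# which is <|endoftext|> rather than the usual </s>.
STOP_TOKENS = ["<|endoftext|>"]

prompt = format_prompt("Explain what QLoRA is in one sentence.")
```

Passing `STOP_TOKENS` as the stop sequence in whatever client you use (llama.cpp, text-generation-webui, etc.) avoids the model running on past its intended end of response.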
I released Marx 3B V2 | 1 | Today, I released [Marx 3B V3](https://huggingface.co/acrastt/Marx-3B-V3).
[Marx 3B V3](https://huggingface.co/acrastt/Marx-3B-V3) is [StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) finetuned on [EverythingLM Data V3(ShareGPT format)](https://huggingface.co/datasets/acrastt/EverythingLM-V3-ShareGPT) for 2 epochs using [QLoRA](https://arxiv.org/abs/2305.14314).
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
```
Note that this model has an EOS token of `<|endoftext|>` instead of `</s>`.
## Attribution
[StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) by [Stability AI](https://stability.ai/) is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
Modifications:
- Instruction tuned on dataset [EverythingLM Data V3(ShareGPT format)](https://huggingface.co/datasets/acrastt/EverythingLM-V3-ShareGPT) for 2 epochs using [QLoRA](https://arxiv.org/abs/2305.14314). | 2023-10-24T01:49:53 | https://www.reddit.com/r/LocalLLaMA/comments/17f1duv/i_released_marx_3b_v2/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f1duv | false | null | t3_17f1duv | /r/LocalLLaMA/comments/17f1duv/i_released_marx_3b_v2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ojWDPvljEfitKccZa6oiEZUoNb3X0K_vzy_5sXXUOoI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=108&crop=smart&auto=webp&s=8b19ae3b1dcc44e726c67fd09a6461988c3258fb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=216&crop=smart&auto=webp&s=2dcdb6ff59e819682724dab9622a8c0348cf4268', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=320&crop=smart&auto=webp&s=ea213059ff0b8bdc739242f7573a706638e03114', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=640&crop=smart&auto=webp&s=82c47f461aa0ce4bd800d93f179342d138aba796', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=960&crop=smart&auto=webp&s=803de3d7965c3742cb052f8ba6347476c3b6eab2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=1080&crop=smart&auto=webp&s=ec91d3a40e040a2c88b2303bbbc96199c85ae040', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?auto=webp&s=954177c28c58aabb092da70db1c3f975900160e2', 'width': 1200}, 'variants': {}}]} |
Some advice on running Guidance? | 1 | I would like to run Guidance on a local model, and while reading the Guidance docs I saw this message: "we use LLaMA here, but any GPT-style model will do". So that led me to a few questions:
- How do I find out what style a model is? I am assuming that this has to do with the prompt format it was trained on. I would like to try models that are more instruction based, rather than chat, so something like open-llama-13b-open-instruct-GGML
- I am VRAM constrained (12GB). I have been using Vicuna 13B quantized models; these should still work, but will a statement like this load GGML or GGUF models?
guidance.llms.transformers.Vicuna("your_path/vicuna_13B", device_map="auto")
- Finally, is there a way of combining superbooga with Guidance? I found this [repo](https://github.com/danikhan632/guidance_api). Has anyone had success with it?
What's the deal with the LLM and overall AI hate? | 129 | I've come across a number of youtube videos and things written by people who **hate** everything about AI. They say things like the models only produce rip-offs or super substandard art. Nothing *ever* creative.
I've been using the llama model and I love it. While I am at work, I will give it creative writing prompts such as "tell me about the color red from the color red's perspective", and it does just a fantastic job. I will even look up the elements of what it wrote, and can't find anything on google.
So my question is, am I missing something here? I get that there are a lot of crappy AI images going around right now, but that's to be expected when things are still very early on and done by people who don't know what they are doing. If it's really that bad and substandard, why all the fuss? It doesn't deserve the level of hate I've been seeing, and I am wondering if there is some piece that I am not seeing here. | 2023-10-24T01:31:06 | https://www.reddit.com/r/LocalLLaMA/comments/17f104j/whats_the_deal_with_the_llm_and_overall_ai_hate/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f104j | false | null | t3_17f104j | /r/LocalLLaMA/comments/17f104j/whats_the_deal_with_the_llm_and_overall_ai_hate/ | false | false | self | 129 | null |
I could use help installing and running Llama on my M2 Mac. | 1 | I have Ollama and it works. However, I downloaded a new model to try, and I don't know how to install or run it. Could anybody help me get it working?
It's called: yarn-llama-2-7b-128k.Q4\_K\_M.gguf | 2023-10-24T01:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/17f0ntb/i_could_use_help_installing_and_running_llama_on/ | CoyNox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f0ntb | false | null | t3_17f0ntb | /r/LocalLLaMA/comments/17f0ntb/i_could_use_help_installing_and_running_llama_on/ | false | false | self | 1 | null |
What are the best datasets right now? | 31 | * open orca / flan v2
* evol instruct
come to mind
but this is just from a list I've been gathering; I'd love to hear what the crowd thinks are the best datasets at the moment.
I'm looking to train a smaller mistral model (2b) using quantization from scratch, but doing it 'by the books', and I'm looking for a well rounded suite of datasets (i.e. I plan on doing no unstructured training).
ATM I have
* [https://huggingface.co/datasets/relbert/scientific\_and\_creative\_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy)
* [https://huggingface.co/datasets/squad\_v2/viewer/squad\_v2/](https://huggingface.co/datasets/squad_v2/viewer/squad_v2/train?row=0)
* [https://huggingface.co/datasets/databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
* [https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021](https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021)
But what I'd really like is a full on encyclopedia, but have to settle for wikipedia (which I'm getting from squad\_v2).
One thing I was thinking of was weighting the mix toward conversational skills, using Collective Cognition, Open Assistant, and the cleaned GPT-4-LLM dataset.
So I thought to ask, what do you think are the hottest datasets right now? I really like the speechless, nous, and synthia models as they are a conglomerate of datasets. | 2023-10-24T01:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/17f0lha/what_are_the_best_datasets_right_now/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f0lha | false | null | t3_17f0lha | /r/LocalLLaMA/comments/17f0lha/what_are_the_best_datasets_right_now/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'lynN_16lehfUmbmBuj4YQFMhjDouN6ArdK5gOg_5hMI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?width=108&crop=smart&auto=webp&s=5e6832356f778502e60f022ee8feb8d9d64a9173', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?width=216&crop=smart&auto=webp&s=88e1c116481a4e41f90ddcee68394e4575991cc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?width=320&crop=smart&auto=webp&s=16edd0744d76dd369c7a9cc6759a10ceb4b7a927', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?width=640&crop=smart&auto=webp&s=88ef8aa8fd17b9770716e6ef9e7dab2955d59bf0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?width=960&crop=smart&auto=webp&s=8cb279e0229b21040ae75e1e64ea62fdbd56c227', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?width=1080&crop=smart&auto=webp&s=8fd586c642128f2f761ff7ea495605a3bb9fb142', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9dU-EuW4Vsqe5VnjZLOThQcnfPGZFO5wxJOceG8Y4v8.jpg?auto=webp&s=07b4429619a1005532ac4b1843833c1c90655444', 'width': 1200}, 'variants': {}}]} |
Falcon 40B on SageMaker vs GPT3.5-Turbo Cost | 2 | How does Falcon cloud-hosted compare by cost to GPT3.5-Turbo? Open to other cloud hosting. | 2023-10-24T00:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/17f07rv/falcon_40b_on_sagemaker_vs_gpt35turbo_cost/ | Able-Worldliness-711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17f07rv | false | null | t3_17f07rv | /r/LocalLLaMA/comments/17f07rv/falcon_40b_on_sagemaker_vs_gpt35turbo_cost/ | false | false | self | 2 | null |
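A concrete way to frame the cost question above, whatever the provider: a dedicated endpoint (e.g. Falcon 40B on SageMaker) bills per instance-hour regardless of traffic, while GPT-3.5-Turbo bills per token, so there is a break-even throughput above which the flat-rate endpoint wins. A minimal sketch; the prices below are placeholder assumptions, not current rates:

```python
def breakeven_tokens_per_hour(endpoint_usd_per_hour: float,
                              api_usd_per_1k_tokens: float) -> float:
    """Tokens/hour above which a flat-rate endpoint beats per-token API pricing."""
    return endpoint_usd_per_hour / api_usd_per_1k_tokens * 1000

# Hypothetical: a $4/hr GPU instance vs $0.002 per 1K tokens.
print(breakeven_tokens_per_hour(4.0, 0.002))  # 2000000.0
```

Below that sustained throughput, pay-per-token is likely cheaper; above it, the dedicated endpoint is.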
You might want to update your ooba if you haven't recently. (maybe just pascal users) | 10 | I don't update my stuff very frequently when it works because I have a fear of it breaking, but I updated my ooba yesterday because it had been a month or two.
Man was I surprised.
I usually use Llama.cpp as the loader because I have a 3x p40 setup and that's been the one I've had the fewest headaches with, and I was used to getting 2-2.5 t/s on that.
After the update I'm getting more like 5-6 t/s even on longer contexts. I never had a problem with the speed before, but it sure is nice getting faster responses.
Not 100% sure what gave this speedup, maybe somebody else will have insight in the comments?
Just thought I'd post an FYI out here for anybody else that is update averse like me to let them know they might be leaving a big performance gain on the table if they're using an older llama.cpp. | 2023-10-24T00:41:38 | https://www.reddit.com/r/LocalLLaMA/comments/17ezz0p/you_might_want_to_update_your_ooba_if_you_havent/ | mynadestukonu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ezz0p | false | null | t3_17ezz0p | /r/LocalLLaMA/comments/17ezz0p/you_might_want_to_update_your_ooba_if_you_havent/ | false | false | self | 10 | null |
Llama-cpp-python: is it all in one? | 6 | Hello everyone,
I was wondering: if I pip install llama-cpp-python, do I still need to go through the llama.cpp installation steps?
It says on the GitHub page that it installs the package and builds llama.cpp from source, so I am unsure if I need to go through the llama.cpp steps.
I appreciate all the help, thank you. | 2023-10-24T00:31:38 | https://www.reddit.com/r/LocalLLaMA/comments/17ezrlq/llamacpppython_is_all_in_one/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ezrlq | false | null | t3_17ezrlq | /r/LocalLLaMA/comments/17ezrlq/llamacpppython_is_all_in_one/ | false | false | self | 6 | null |
How to run Llama-2 on CPU with GGML after fine-tuning with LoRA | 4 | I found the steps to fine-tune Llama-2 and export to GGML to be a little cumbersome, so I put all the steps together in a guide.
[https://blog.oxen.ai/how-to-run-llama-2-on-cpu-after-fine-tuning-with-lora/](https://blog.oxen.ai/how-to-run-llama-2-on-cpu-after-fine-tuning-with-lora/)
The steps at a high level are:
1. Run Llama-2 base model on CPU
2. Create a prompt baseline
3. Fine-tune with LoRA
4. Merge the LoRA Weights
5. Convert the fine-tuned model to GGML
6. Quantize the model
Hopefully you find it useful! | 2023-10-23T23:35:08 | https://www.reddit.com/r/LocalLLaMA/comments/17eykxp/how_to_run_llama2_on_cpu_with_ggml_after/ | FallMindless3563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eykxp | false | null | t3_17eykxp | /r/LocalLLaMA/comments/17eykxp/how_to_run_llama2_on_cpu_with_ggml_after/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'qKzsayrkJ8lnqdsb2GQZnPpckWPllq3s4Oi9zCKQUEQ', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?width=108&crop=smart&auto=webp&s=9897b688cb144c48d8794ab15f2b8767297a9cc1', 'width': 108}, {'height': 88, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?width=216&crop=smart&auto=webp&s=59d040f6d583560716619c4fef40d61fbeac2a43', 'width': 216}, {'height': 130, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?width=320&crop=smart&auto=webp&s=68df03db529d9ca93ebc370c9125892f422bf8e3', 'width': 320}, {'height': 261, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?width=640&crop=smart&auto=webp&s=53a4fd63a29a3b0e3a774e335e331c1bfcf8d42d', 'width': 640}, {'height': 392, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?width=960&crop=smart&auto=webp&s=e2a9ea570ce73300c5c30522ce5ea6fd4c504488', 'width': 960}, {'height': 441, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?width=1080&crop=smart&auto=webp&s=05a1a3f68885c1b3bdd79db564d4575461baed5e', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/aQG7FnEUscJmOO9Izve064mHvSRaX6F5iRufQqP_CyA.jpg?auto=webp&s=51fd420a759878326d708ebaa224dd0e15873722', 'width': 2200}, 'variants': {}}]} |
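One detail the step list glosses over: step 3 requires rendering each instruction/output pair into the prompt template the base model expects. A minimal sketch of an Alpaca-style formatter; the exact template wording here is an assumption and should be matched to whatever the fine-tuning script in the guide uses:

```python
def format_alpaca(instruction: str, output: str) -> str:
    """Render one training sample into an Alpaca-style prompt string."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )

sample = format_alpaca(
    "Summarize LoRA in one sentence.",
    "LoRA fine-tunes a model by training small low-rank adapter matrices.",
)
print(sample.splitlines()[2])  # ### Instruction:
```

If the template used at training time differs from the one used at inference time, the merged model will usually behave noticeably worse, so keep them identical.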
pad_token_id missing on Mistral-7B-OpenOrca-GPTQ | 1 |
ValueError: If eos_token_id is defined, make sure that pad_token_id is defined.
Has anyone hit this error while running Mistral-7B-OpenOrca-GPTQ?
Full traceback: [https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/discussions/5](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/discussions/5) | 2023-10-23T22:49:30 | https://www.reddit.com/r/LocalLLaMA/comments/17exkl1/pad_token_id_missing_on_mistral7bopenorcagptq/ | Evirua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17exkl1 | false | null | t3_17exkl1 | /r/LocalLLaMA/comments/17exkl1/pad_token_id_missing_on_mistral7bopenorcagptq/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]} |
Smaller models (<7b): How much "intelligence" can we extract from them? Mistral 7b shows there's still room for improvement, but are we near the upper-bound capacity of 7b models? | 15 | And on top of that, aren't finetuned versions of smaller models actually "worse" due to "catastrophic forgetting"? | 2023-10-23T22:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/17ewyt3/smaller_models_7b_how_much_intelligence_can_we/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ewyt3 | false | null | t3_17ewyt3 | /r/LocalLLaMA/comments/17ewyt3/smaller_models_7b_how_much_intelligence_can_we/ | false | false | self | 15 | null |
If LLMs predict one token at a time, is the total compute a product of the sum of the cumulative inputs? | 11 | It is said that LLMs just predict the next token based on the words that have come before.
When you make a request to someone like OpenAI they will charge you for prompt tokens and completion tokens.
So let’s say I ask it
> write a sentence about dogs
It returns
> I used to have a spotty dog called Jack
I pay for the tokens in that prompt (say 5) and then the ones in the completion (say 10).
If I then want another sentence I would be paying for all the tokens in prompt again + those in the first completion + whatever comes for the new sentence
So let’s say `5 + 10 + 10 = 25` tokens for that second request.
Adding the 15 tokens from the first request, I used 40 tokens to get 2 sentences.
Whereas if I just asked for 2 sentences to start with it would be 25.
Is this inherent to the models or a commercial decision?
If I ran an llm locally would it follow a proportionately similar compute usage? | 2023-10-23T22:14:31 | https://www.reddit.com/r/LocalLLaMA/comments/17ewrc3/if_llms_predict_one_token_at_a_time_is_the_total/ | reddysteady | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ewrc3 | false | null | t3_17ewrc3 | /r/LocalLLaMA/comments/17ewrc3/if_llms_predict_one_token_at_a_time_is_the_total/ | false | false | self | 11 | null |
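To make the accounting above concrete: because the full history is re-sent as prompt tokens on every request, billed tokens grow with each turn. A sketch using the post's hypothetical token counts:

```python
def conversation_token_cost(turns):
    """turns: list of (new_prompt_tokens, completion_tokens) per request.
    Returns (per_request_costs, total), assuming the full history is
    re-sent as prompt tokens on every request."""
    history = 0
    costs = []
    for new_prompt, completion in turns:
        billed = history + new_prompt + completion
        costs.append(billed)
        history += new_prompt + completion
    return costs, sum(costs)

# The post's example: a 5-token prompt, two 10-token sentences, with the
# second sentence requested by re-sending the history and no new prompt text.
costs, total = conversation_token_cost([(5, 10), (0, 10)])
print(costs, total)  # [15, 25] 40
```

Locally, the compute profile is similar in principle, though runtimes that cache the KV state across turns avoid re-processing the shared prefix, so per-token billing is partly a commercial choice and partly a reflection of real cost.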
What is the status of autonomous agents manipulating our browser? | 10 | My vision of what autonomous agents will do includes going out on the web, logging into sites, filling out forms, uploading documents it has created, etc. all in pursuit of a larger mission and all without having to be explicitly programmed to do all the small sub tasks.
How far away are we from this? Meaning, what is the current status of the field? Has anyone made public any plans to work on browsers that natively support such behavior? | 2023-10-23T22:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/17ewllv/what_is_the_status_of_autonomous_agents/ | kecepa5669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ewllv | false | null | t3_17ewllv | /r/LocalLLaMA/comments/17ewllv/what_is_the_status_of_autonomous_agents/ | false | false | self | 10 | null |
HF's IDEFICS Multimodal model. {9B, 80B} * {pretrained, instruct tuned}. | 4 | 2023-10-23T20:49:34 | https://huggingface.co/collections/HuggingFaceM4/idefics-6509a1aaabdde5290e80b855 | BayesMind | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17euq18 | false | null | t3_17euq18 | /r/LocalLLaMA/comments/17euq18/hfs_idefics_multimodal_model_9b_80b_pretrained/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'S-Xs0Csyem0gMo6_i6ThXvgHori_19spzS0tm5reJSM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?width=108&crop=smart&auto=webp&s=2a591cba82cb843af2c5c064f2d515702dc8b597', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?width=216&crop=smart&auto=webp&s=a11ae1173fe5f5bee6a6d3cc2236101edab7e97a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?width=320&crop=smart&auto=webp&s=cdfca5310dface9005461f9a146067af8527d290', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?width=640&crop=smart&auto=webp&s=97438a1a00bd0323352c41bf5b93ca952b202a65', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?width=960&crop=smart&auto=webp&s=13ee079b70f7973795f2e6fe3a5bc893e693d017', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?width=1080&crop=smart&auto=webp&s=89c9f9c66d7dc6b62379cf1a2f32aa0e8c476300', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MDuKY-DPYZK75LOfSKkXT8aup3ZGo6DxfHk2mfxrDvE.jpg?auto=webp&s=94edcd1faa5c66d887212c9c032a4e9a6aa181fb', 'width': 1200}, 'variants': {}}]} | ||
Suggestions for handling complex queries in order to perform natural language to database query generation. | 1 | I recently worked on a personal project which converts natural language input into any of four query targets: pandas, MongoDB, Kusto, and Cypher. It can currently handle only single-table queries, and I want to expand it to multiple tables. Currently it uses CodeT5+ for the generation, but I also want to try some larger models like Llama and Mistral. I would really appreciate it if you guys could help me on this.
The link to the repo -
[https://github.com/Chirayu-Tripathi/nl2query](https://github.com/Chirayu-Tripathi/nl2query.git) | 2023-10-23T19:37:58 | https://www.reddit.com/r/LocalLLaMA/comments/17et0ti/suggestions_for_handling_complex_queries_in_order/ | WorryWhole7805 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17et0ti | false | null | t3_17et0ti | /r/LocalLLaMA/comments/17et0ti/suggestions_for_handling_complex_queries_in_order/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'APbz62EiCSp49QzYlgcMCiq-ZzRcTYNfNG-JWS88AIg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?width=108&crop=smart&auto=webp&s=58aa74c6451cb3909bd0db001aa2aadcfd8c4c9c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?width=216&crop=smart&auto=webp&s=0417e12ca93e565e1177d45c5cf5afab214d35e8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?width=320&crop=smart&auto=webp&s=8acbde96256e885fd872d2cacf0f84656843aa30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?width=640&crop=smart&auto=webp&s=f4b76c610ba2acff00b402c9ff40547e0cd100c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?width=960&crop=smart&auto=webp&s=dc3c0c0a3d65efa90cb4ec0a8aa03c033cf20e7f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?width=1080&crop=smart&auto=webp&s=a1db06043cb7a453d45a634e0c97f9246856673d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NNBEguJP-D3BYu2zqaYciyHrkciGMQnRjJH3ZAbm23c.jpg?auto=webp&s=ce00e908205e34d6d40d622a2c5449fd03b3dcba', 'width': 1200}, 'variants': {}}]} |
Help with a model that will not refuse to answer | 1 | I am trying to find a local 7B GGUF model that will not refuse to answer questions. I have tried some "uncensored" models and they all seem to refuse one thing or another. For instance, I recently tested \`mistral-7b-openorca.Q4\_K\_M.gguf\` and it would not provide tax advice. I'm not after anything in particular, just a local buddy that I can ask questions of where the AI will not balk at responses for legal reasons. Does this exist? Am I searching for the wrong thing? It takes me a while to download and test these models, so a heads-up on one that might fit my needs would be really helpful. Thanks in advance. | 2023-10-23T19:09:24 | https://www.reddit.com/r/LocalLLaMA/comments/17esc46/help_with_model_that_will_not_refuse_to_answer/ | bit_herder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17esc46 | false | null | t3_17esc46 | /r/LocalLLaMA/comments/17esc46/help_with_model_that_will_not_refuse_to_answer/ | false | false | self | 1 | null |
Best practices incorporating high-touch conversational data | 1 | [removed] | 2023-10-23T19:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/17esb08/best_practices_incorporating_hightouch/ | Wooden_Wash_228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17esb08 | false | null | t3_17esb08 | /r/LocalLLaMA/comments/17esb08/best_practices_incorporating_hightouch/ | false | false | self | 1 | null |
LLM Wiki | 3 | LLM development has been extremely fast, and it is hard to keep up. This problem is exacerbated by the difficulty involved in finding specific resources related to LLMs.
**Current Approach: This subreddit, various servers and Rentry pages.**
Considering this subreddit as the resource we analyze
Pros:
\- Can ask any question and get a response/have a conversation
\- Can rank questions and developments by recency and popularity (upvotes)
Cons:
\- Generally broad scope makes it difficult to find information related to a specific topic or objective
\- Popular, good topics and conversations get mostly lost to time or cluttered with related subjects because of how Reddit works.
**Proposed Solution: Augment this subreddit with a LLM Wiki**
This solves the two biggest problems of the current subreddit: we could have Wiki pages on specific topics, and those pages wouldn't be weighted in importance by their recency.
Additionally, since anyone can make a page, and anyone can link to another page from their page, we wouldn't lose the ability to have anyone contribute.
Of course, because of the pros of this subreddit, it won't be replaced by this by any means, but just made more powerful because of it.
What are your thoughts? Would this be feasible/desired? Any proposed ideas on how to make this better, or reasons why this just wouldn't work? I appreciate any input. | 2023-10-23T19:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/17es8vu/llm_wiki/ | Dramatic_Road3570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17es8vu | false | null | t3_17es8vu | /r/LocalLLaMA/comments/17es8vu/llm_wiki/ | false | false | self | 3 | null |
JSON key-value analysis using LLMs | 2 | Hi everyone, I'm currently exploring options for integrating customer support activities into LLMs.
The use case: users will drop in their questions, and the LLM should query the associated backend services with function-calling assistance, analyse the returned JSON, and provide a response/suggestion back.
I would like some pointers on this approach, where the JSON from the function call is analysed and compared against data available via RAG, and the necessary data is presented back to the user. The RAG data should carry information about the JSON key-value pairs, and the response should be built from those pairs.
I couldn't find much information on such an approach in our subreddit, hence dropping it here for further discussion. | 2023-10-23T19:03:49 | https://www.reddit.com/r/LocalLLaMA/comments/17es7g0/json_key_value_analysis_using_llms/ | jshwnth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17es7g0 | false | null | t3_17es7g0 | /r/LocalLLaMA/comments/17es7g0/json_key_value_analysis_using_llms/ | false | false | self | 2 | null |
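One way to sketch the comparison step described above: parse the function-call JSON, diff its key-value pairs against reference data retrieved via RAG, and hand only the mismatches to the LLM to summarise for the user. All field names below are hypothetical:

```python
import json

def diff_against_reference(response_json: str, reference: dict) -> dict:
    """Compare key-value pairs from a backend (function-call) response
    against reference data retrieved via RAG; return the mismatches the
    LLM can summarise for the user."""
    observed = json.loads(response_json)
    mismatches = {}
    for key, expected in reference.items():
        actual = observed.get(key)
        if actual != expected:
            mismatches[key] = {"expected": expected, "actual": actual}
    return mismatches

backend = '{"plan": "basic", "status": "suspended"}'
reference = {"plan": "basic", "status": "active"}
print(diff_against_reference(backend, reference))
# {'status': {'expected': 'active', 'actual': 'suspended'}}
```

Doing the structured comparison in code and letting the LLM only phrase the result tends to be more reliable than asking the model to diff raw JSON itself.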
A question about vision models (LLaVA) | 3 | I have a few questions about vision models that I haven't found good answers for elsewhere, mostly regarding encoding and attention. Well, really one question:
Most importantly:
1) Are multimodal models like LLaVA able to "attend" to different parts of the image based on the prompt? For example, suppose I give a model a picture of a person from a security camera, and then a portrait of the person in the image, with a prompt "is the person in this photo present in the security camera picture?" - Would the attention mechanism be able to understand the query, discern features from the portrait, and look at the target picture to see if those features exist enough for it to say yes or no?
Or suppose I gave it two pictures of the same tree, one with no leaves and one with leaves just beginning to grow. If I asked it for the difference, would it be able to say "the second picture shows the tree with leaves beginning to grow", whereas without the first image, it wouldn't have anything to compare to?
I guess I'm trying to understand how detailed the internal representations are for images used in models like LLaVA. If the model simply took the image and generated an extensive description of it, then used that textual description as part of the context, that would not be very impressive. However, if the model is able to use the context to look at different parts of the image with different levels of attention, that would be much more interesting (and powerful).
2) Does the image embedding use the same "vocabulary" (SentencePiece?) as the text-based model? Or does it just need to describe RGB values for every pixel, which then get encoded and magically transformed into vector representations?
3) Does anyone know how many images, at what resolution, etc., can fit in the context of llava-13b? | 2023-10-23T19:01:19 | https://www.reddit.com/r/LocalLLaMA/comments/17es55z/a_question_about_vision_models_llava/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17es55z | false | null | t3_17es55z | /r/LocalLLaMA/comments/17es55z/a_question_about_vision_models_llava/ | false | false | self | 3 | null |
So, I'm using Open Interpreter to install itself locally and it decides that instead of installing SciPy it's just going to build it from scratch. | 62 | 2023-10-23T18:46:37 | Flying_Madlad | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17erscq | false | null | t3_17erscq | /r/LocalLLaMA/comments/17erscq/so_im_using_open_interpreter_to_install_itself/ | false | false | 62 | {'enabled': True, 'images': [{'id': 'U8LUyW85y-yLBSrQISy916-KBymrBgxHmwfXQHpMYHc', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/dy3j2opjzzvb1.png?width=108&crop=smart&auto=webp&s=54147c957faff60811749eaaa02a52092ae2b9a1', 'width': 108}, {'height': 59, 'url': 'https://preview.redd.it/dy3j2opjzzvb1.png?width=216&crop=smart&auto=webp&s=4704e862cbc1d559c4bbe15c44219a9bc51d84ce', 'width': 216}, {'height': 87, 'url': 'https://preview.redd.it/dy3j2opjzzvb1.png?width=320&crop=smart&auto=webp&s=313c66a8b0d629455096c68929bd9e524acecb6a', 'width': 320}, {'height': 175, 'url': 'https://preview.redd.it/dy3j2opjzzvb1.png?width=640&crop=smart&auto=webp&s=ac65d6e33f2a278ec384012bd470f063887d391d', 'width': 640}, {'height': 262, 'url': 'https://preview.redd.it/dy3j2opjzzvb1.png?width=960&crop=smart&auto=webp&s=914bcac473e79ff187122af37793c122318ee83d', 'width': 960}], 'source': {'height': 292, 'url': 'https://preview.redd.it/dy3j2opjzzvb1.png?auto=webp&s=60da7ef4e2a7c3866aea37eb7a8a9bdb4fcbada2', 'width': 1067}, 'variants': {}}]} | |||
NSFW stories, Prompting, and Hardware | 16 | Gentlepeople, I come to you with questions and requests for advice
First of all, this is what I am running. I mainly use my PC for VR and gaming; I used to run just a 4090, but I received a 3090 and decided to mount it as well since I had the power for it.
rtx 4090 24gb, 3090 24gb, i9-13900, 96gb ram
I am mainly using "LM Studio" as the platform to launch my LLMs. I used to use Kobold but found LM Studio to be better for my needs, although Kobold IS nice.
Currently I am cycling between MLewd L2 chat 13B q8, Airoboros L2 2221 70B q4km, and WizardLM uncensored Supercot storytelling 30B q8.
Questions.
1. I am mainly running these LLMs for storytelling and creating scenes. While MLewd chat is very creative, it tends to hallucinate and go off the rails; Airoboros is uncreative but rushes through the stories and scenes and requires heavy-handed involvement; and WizardLM storytelling is the most mediocre of the bunch. Is there a dedicated model for erotic stories yet? Do you have any recommendations?
2. What are some good prompt examples or methods to get good scenes and content? I've tried a few, mainly "write an erotic story about a \_\_\_\_\_\_ and a \_\_\_\_\_\_\_, where \_\_\_\_\_\_\_\_\_\_ happens. Be detailed and very descriptive". NORMALLY I get an outline and go section by section, asking the LLM to write what I ask, to varying degrees of success.
3. Context windows: is the default 4k? Can I pump it up to 8k? 16k? Does running a larger context window on a smaller model break it?
4. Hardware settings: how do I figure out how many N\_GPU\_LAYERS to load? On the same track, should I also change the number of CPU threads? The default setting of N\_THREADS is 4. Does this setting break the models?
5. ROPE: I don't understand this at all, so I leave this setting alone. I assume it has an effect on context, but how and in which direction do I change it? The default setting in LM Studio is ROPE\_FREQ\_SCALE 1 and ROPE\_FREQ\_BASE 10,000. When I change context size do I ALSO have to change the ROPE settings? Is this breaking my models?
6. I have been interested in training my own storywriting LoRAs, but I do not know anything about it or how to start. Any good, easy-to-understand guides? Can I even train anything?
7. I am very interested in prompting; can you give me some good ideas or directions toward prompting very detailed NSFW scenes? | 2023-10-23T18:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/17er6q1/nsfw_stories_prompting_and_hardware/ | DominicanGreg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17er6q1 | false | null | t3_17er6q1 | /r/LocalLLaMA/comments/17er6q1/nsfw_stories_prompting_and_hardware/ | false | false | nsfw | 16 | null |
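On questions 3 and 5: with llama.cpp-based loaders like LM Studio, a commonly cited rule of thumb is that linear RoPE scaling sets rope_freq_scale = native_ctx / target_ctx (e.g. 0.5 to stretch a 4k model to 8k), while NTK-style scaling instead raises rope_freq_base. A sketch of both rules; treat the formulas as assumptions to verify against your loader's documentation, since conventions differ between tools:

```python
def linear_rope_freq_scale(native_ctx: int, target_ctx: int) -> float:
    # Linear (position-interpolation) scaling: compress positions so the
    # target context maps onto the native training context.
    return native_ctx / target_ctx

def ntk_rope_freq_base(native_ctx: int, target_ctx: int,
                       base: float = 10_000.0, head_dim: int = 128) -> float:
    # NTK-aware scaling: raise the frequency base instead of compressing
    # positions; exponent follows the commonly cited dim/(dim-2) rule.
    scale = target_ctx / native_ctx
    return base * scale ** (head_dim / (head_dim - 2))

print(linear_rope_freq_scale(4096, 8192))        # 0.5
print(round(ntk_rope_freq_base(4096, 8192)))     # roughly 2x the default base
```

Either way, pushing the context well past what the model was trained for degrades quality; the scaling just degrades it more gracefully.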
How best transform book (biography) into fine tuning input/output pairs? | 3 | I am taking a long book (a biography) and transforming it into input/output pairs to fine-tune GPT on. The goal is to get the information into the model.
I am considering two methods.
Method 1: Transform the entire book into word "chunks" (of ~300 words each). Then tell GPT-3.5 to look at each "chunk" and write a question that would plausibly elicit the chunk as a response. This will get the entire contents of the book into the model. However, while the questions are good, the answers are strange and unnatural. No one would ever answer the question with an excerpt from a biography.
Method 2: Transform the entire book into word "chunks" (of ~300 words each). Then tell GPT-3.5 to look at each "chunk" and write a trivia question about it. It will generate both a question and the answer. However, this means a lot of the information in the biography does not make it into the model.
I'm wondering if one method is better than the other. Are there other methods I am not considering? I am looking for experience and opinions. | 2023-10-23T17:08:49 | https://www.reddit.com/r/LocalLLaMA/comments/17epi29/how_best_transform_book_biography_into_fine/ | TaleOfTwoDres | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17epi29 | false | null | t3_17epi29 | /r/LocalLLaMA/comments/17epi29/how_best_transform_book_biography_into_fine/ | false | false | self | 3 | null |
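Both methods share the same first step; a minimal sketch of splitting a book into ~300-word chunks (the per-chunk question-generation prompt would then be sent to GPT-3.5 separately):

```python
import re

def chunk_words(text: str, chunk_size: int = 300) -> list[str]:
    """Split a long text into chunks of at most chunk_size words."""
    words = re.findall(r"\S+", text)
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

book = "word " * 650  # stand-in for the biography text
chunks = chunk_words(book, 300)
print([len(c.split()) for c in chunks])  # [300, 300, 50]
```

In practice it is worth snapping chunk boundaries to sentence or paragraph breaks rather than exact word counts, so no question/answer pair ends mid-sentence.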
Custom Language model for creating simple instructions from everyday conversations. | 1 | [removed] | 2023-10-23T16:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/17eoxys/custom_language_model_for_creating_simple/ | seb36626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eoxys | false | null | t3_17eoxys | /r/LocalLLaMA/comments/17eoxys/custom_language_model_for_creating_simple/ | false | false | self | 1 | null |
Chunking and embedding XML | 2 | Does anyone have any experience with chunking and embedding XML files?
Since the XML files follow a strict hierarchy, I obviously need the embeddings to extract good context to make the actual data useful. Any specific chunking strategies? Any specific libraries that might be of help?
Thanks. | 2023-10-23T16:29:19 | https://www.reddit.com/r/LocalLLaMA/comments/17eokhj/chunking_and_embedding_xml/ | dickfreelancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eokhj | false | null | t3_17eokhj | /r/LocalLLaMA/comments/17eokhj/chunking_and_embedding_xml/ | false | false | self | 2 | null |
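One simple strategy that preserves the hierarchy: flatten each element into a path-prefixed text chunk, so every embedding carries its position in the tree. A minimal stdlib sketch; real documents will likely need attribute handling and size-based merging of small chunks:

```python
import xml.etree.ElementTree as ET

def xml_chunks(xml_text: str) -> list[str]:
    """Flatten an XML document into path-prefixed text chunks for embedding."""
    root = ET.fromstring(xml_text)
    chunks = []

    def walk(elem, path):
        path = f"{path}/{elem.tag}"
        if elem.text and elem.text.strip():
            chunks.append(f"{path}: {elem.text.strip()}")
        for child in elem:
            walk(child, path)

    walk(root, "")
    return chunks

doc = "<invoice><customer>Acme</customer><total>42.50</total></invoice>"
print(xml_chunks(doc))
# ['/invoice/customer: Acme', '/invoice/total: 42.50']
```

Keeping the element path in the chunk text means the retriever can match on structure ("total", "customer") as well as on the values themselves.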
CPU and RAM for inference | 7 | Hi everybody!
New to machine learning, and I was wondering if I am just running a model to ask questions and receive answers through, is having a high cpu core count and ram okay?
I hear a lot about GPU’s but if I am not doing any training, is it okay to just stick with cpu and ram?
My needs would mainly be inference and if I did do any training, I would likely just rent GPU time in the cloud for that moment.
Thank you everyone | 2023-10-23T16:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/17eodxy/cpu_and_ram_for_inference/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eodxy | false | null | t3_17eodxy | /r/LocalLLaMA/comments/17eodxy/cpu_and_ram_for_inference/ | false | false | self | 7 | null |
Noob: I try to fine-tune a LoRA with a very small dataset (10 samples) on Oobabooga, the model never learns. | 9 | My setup:
\- 10 JSON samples in Instruction/Output format of dataset.
\- RTX 3090 with 24GB VRAM
\- Loader: transformer with auto-devices and disable\_exllama checked
\- Base model: tried TheBloke/CodeLlama-13B-Instruct-GPTQ and 7B. Can't use the original CodeLlama-7B as it OOMs no matter how small I set the training parameters.
\- I have tried different LoRA Rank, Learning Rate, Epochs, etc..
\- No errors; loss is constantly low at 0.03-0.05, very stable. <--- That doesn't look quite right to me.
​
It all ended up with the model learning nothing. When I basically just copy/paste my instructions from the dataset, the model shows nothing picked up from it.
I don't know what to try now. Could anyone give me some ideas? Thanks!
​ | 2023-10-23T16:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/17eo5mo/noob_i_try_to_finetune_a_lora_with_a_very_small/ | tgredditfc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eo5mo | false | null | t3_17eo5mo | /r/LocalLLaMA/comments/17eo5mo/noob_i_try_to_finetune_a_lora_with_a_very_small/ | false | false | self | 9 | null |
Llama2.mojo vs Llama2.c | 15 | 2023-10-23T15:58:13 | https://twitter.com/4evaBehindSOTA/status/1715612086687129742?t=vRX5N-K9axTr00TdWbHdRw&s=19 | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17ent9g | false | {'oembed': {'author_name': 'tokenbender', 'author_url': 'https://twitter.com/tokenbender', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Round 2 llama2.mojo vs llama2.c on M2 pro<br><br>llama2.mojo -> 850 tok/s<br>llama2.c -> 639 tok/s<br><br>thanks <a href="https://twitter.com/tairov?ref_src=twsrc%5Etfw">@tairov</a> for suggesting runfast for llama2.c and missing flag for mojo 🙏 <a href="https://t.co/XItp7exJHA">https://t.co/XItp7exJHA</a> <a href="https://t.co/FYhcxhDSYM">pic.twitter.com/FYhcxhDSYM</a></p>— tokenbender (@tokenbender) <a href="https://twitter.com/tokenbender/status/1715612086687129742?ref_src=twsrc%5Etfw">October 21, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/tokenbender/status/1715612086687129742', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17ent9g | /r/LocalLLaMA/comments/17ent9g/llama2mojo_vs_llama2c/ | false | false | default | 15 | {'enabled': False, 'images': [{'id': 'pwLgwGLUsWKv-cSTxKnWVTNGbG5RQgVehYTrKTjtQ_4', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/3pTlMv6WQiM08pe6mN5CbjUpgJUpDg-dy33B3FdBBHU.jpg?width=108&crop=smart&auto=webp&s=ccdb62f44eb756effe966496b5aec3b2f43c0207', 'width': 108}], 'source': {'height': 110, 'url': 'https://external-preview.redd.it/3pTlMv6WQiM08pe6mN5CbjUpgJUpDg-dy33B3FdBBHU.jpg?auto=webp&s=cbfb12564744f9a1fed87d69d2829431a38650d5', 'width': 140}, 'variants': {}}]} | |
RAG oriented fine-tune... Searching for coherence | 3 | Still searching for a model that is good enough for RAG... Lots of good models on Hugging Face, but none of them is trained to return extracted text or answers based on provided info without hallucinating something.
It's quite frustrating: every week a new version of some model comes out that is amazing for role play and storytelling... (some good progress on coding too...)
I see lots of effort in different RAG strategies, improving semantic search and chunking, but the open source community still does not have a decent model fine-tuned for that.
I have considered the idea of making that fine-tune myself, based on synthetic data (using Wikipedia as the knowledge base), but unfortunately I don't have enough funds to cover the API cost nor to pay for a decent GPU.
I'm not going to train a 7B model, because IMHO anything under 30B doesn't make much sense if coherence is the main requirement.
Unpopular opinion: for coherence, CodeLlama 34B is much better than any of the 70B fine-tunes.
Sorry to everyone for the rant...
Does anyone have some tips or suggestions?
Thanks in advance!
**Edit**
***Additional info:***
*My database is composed mainly by abstracts of papers and medical textbook. I admit that the domain is quite complex, but the error rate is too high.*
*That happens even when the model is prompted to avoid it (I tried and refined multiple prompts, using different prompt formats).*
*Gpt3.5, Claude instant and Palm2-Bizon work fine for that task. (obviously GPT4 and Claude 2 would be best, but too expensive for me)*
*I spent lots of time to make a solid embedding pipeline:*
- *advanced chunking,*
- *Metadata added by llm, text for similarity search different from text provided to LLM,*
- *instructor bi encoder to generate embeddings(INSTRUCTOR-XL),*
- *reranking using cross encoder,*
- *RAG-Fusion using multiple query and HyDE approach*
- *Hybrid search with BM25*
*So... I'm a bit frustrated that I cannot run it all locally, because that is a must for my project.* | 2023-10-23T15:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/17emthy/rag_oriented_finetune_searching_for_coherence/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17emthy | false | null | t3_17emthy | /r/LocalLLaMA/comments/17emthy/rag_oriented_finetune_searching_for_coherence/ | false | false | self | 3 | null |
Pulled trigger on 4090, what else to upgrade? | 9 | Hi all,
I want to start running experiments finetuning older LMs (BERT etc) and smaller open source models from the Llama family using PyTorch/HF.
The goal is both academic and applied research with an actual use case. I will eventually be building a small cluster, but for right now I wanna get going on my personal computer.
Bought a MSI Suprim 4090 with 24gb and a 240mm AIO. I bought this model cause the footprint is smaller on account of not having 3x fans.
Card gets here Wednesday from Newegg.
I have a Corsair 4000d case.
I5-13600k with 240mm MSI AIO
MSI z790 DDR4
32gb ddr4 3600 @2x16gb
2tb m2 ssd
1000w psu
What else should I upgrade? Is there anything I can upgrade for under $500 that would give noticeable training time improvements?
For that matter, I'm pretty sure with my case I'll end up with the 240mm CPU AIO on the roof pulling air into the case, the 240mm GPU AIO in the front pulling air in via the top two fan slots, the bottom front fan blowing out, and the back fan blowing out. Does that cooling make sense?
Thanks all. | 2023-10-23T15:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/17emjim/pulled_trigger_on_4090_what_else_to_upgrade/ | chulpichochos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17emjim | false | null | t3_17emjim | /r/LocalLLaMA/comments/17emjim/pulled_trigger_on_4090_what_else_to_upgrade/ | false | false | self | 9 | null |
Local Stable Diffusion installation that runs on CPU only? I don’t know anything, so I need help. Maybe 126GB won’t suck? | 1 | [removed] | 2023-10-23T14:15:47 | https://www.reddit.com/r/LocalLLaMA/comments/17elfm1/local_stable_diffusion_installation_that_runs_on/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17elfm1 | false | null | t3_17elfm1 | /r/LocalLLaMA/comments/17elfm1/local_stable_diffusion_installation_that_runs_on/ | false | false | self | 1 | null |
Free ML Model training during Beta | 1 | [removed] | 2023-10-23T14:05:59 | https://www.reddit.com/r/LocalLLaMA/comments/17el7xv/free_ml_model_training_during_beta/ | Efficient-Manager906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17el7xv | false | null | t3_17el7xv | /r/LocalLLaMA/comments/17el7xv/free_ml_model_training_during_beta/ | false | false | default | 1 | null |
JSON data for RAG based approach | 8 | Hi everyone, can you provide some guidance on how to deal with documents or text data that contain both plain text and JSON? I am finding it difficult to get JSON output from Llama 2 in a RAG-based approach: the embedded text has both text and JSON, and while answering the question I expect the model to respond with the JSON sample, as that's the crucial part of the answer. Has anyone seen the same challenge as me? Any assistance or ideas on how to deal with such a situation would be a great help. | 2023-10-23T13:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/17el2j8/json_data_for_rag_based_approach/ | Optimal_Original_815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17el2j8 | false | null | t3_17el2j8 | /r/LocalLLaMA/comments/17el2j8/json_data_for_rag_based_approach/ | false | false | self | 8 | null |
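One pragmatic workaround I've seen for this situation: instead of forcing the model to emit pure JSON, let it answer in mixed prose plus JSON and extract the JSON afterwards. A sketch (the greedy regex assumes a single top-level object in the answer):

```python
import json
import re

def extract_json(answer):
    """Return the first parseable JSON object embedded in a model
    answer, or None if nothing parses."""
    match = re.search(r"\{.*\}", answer, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```

If the extraction returns None, you can retry the generation or fall back to asking the model to repeat only the JSON part.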
Any way to deploy a local LLM for NPCs in videogames? | 0 | I've been thinking for a few months about how to use a local LLM like Llama 2, Mistral, etc. to get NPCs to generate dialogue in real-time. I can do this with GPT-4's API, but I want to do it locally for free. I have the hardware to run them, and I have both Windows and Linux on my system; I just need to know if you actually can run these models this way.
The idea would be that I would use a python script to get these models to generate the dialogue in response to an *event prompt*. For example:
Let's say the LLM in question created its own character, it could be a bug catcher, a magician, a soldier, etc. and these characters receive an event prompt like:
\- The player picked up an apple.
\- The player entered a cave.
\- The player encountered a monster.
And the NPC would respond in-character based on the conversation history of all event prompts and dialogues provided so far, summarizing all the text after the token limit has been reached in order to guide NPCs and keep them aware of the most important recent events.
I also want to create an introvert system where an NPC would still record all events that occurred but would only respond occasionally based on an introvert/extrovert scale, with introverts speaking less often but more profoundly and extroverts impulsively speaking all the time.
The issue I'm running into is that, at a glance, it doesn't *seem* like I can generate this text outside of a CLI. Sure, I can extract the text that way, but I wouldn't like it. I'd rather have Python itself generate the text and save it to a text file via a script while the player plays the game.
How do I do this? | 2023-10-23T13:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/17ekr7s/any_way_to_deploy_a_local_llm_for_npcs_in/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ekr7s | false | null | t3_17ekr7s | /r/LocalLLaMA/comments/17ekr7s/any_way_to_deploy_a_local_llm_for_npcs_in/ | false | false | self | 0 | null |
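Regarding the post above: you don't need the CLI at all. Most local backends (llama.cpp's server, text-generation-webui, Ollama) expose an HTTP endpoint you can call from a plain Python script. A minimal sketch, assuming llama.cpp's `/completion` endpoint on localhost; the URL, payload fields, and character names are assumptions to adjust for your backend:

```python
import json
import urllib.request

def build_prompt(character, history, event):
    """Assemble rolling context: persona line, recent event/dialogue
    history, the new event, then a cue for the character to speak."""
    lines = [f"You are {character}. Stay in character.",
             *history,
             f"Event: {event}",
             f"{character}:"]
    return "\n".join(lines)

def npc_reply(character, history, event,
              url="http://localhost:8080/completion"):
    """POST the prompt to a locally running llama.cpp server (endpoint
    name is an assumption) and return the generated dialogue text."""
    payload = {"prompt": build_prompt(character, history, event),
               "n_predict": 64}
    req = urllib.request.Request(url, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

The reply can then be appended to `history` (and written to a text file) so the next event prompt sees it, with a summarization pass once the history approaches the context limit.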
Prompts to increase Token output | 2 | Hello, I am trying to get an understanding of prompt-level control over some models for creative writing. I am currently using LM Studio and noticed that there is no parameter like min_new_tokens to control the minimum length of the model's output.
My go-to prompt is "Write a long conversation between character X and Y about subject A", but usually the output that is returned is, in short, "Character X and Y then discussed subject A at length."
This is not always the case; sometimes just adding a little more detail to the prompt has it return the extremely long conversation I wanted.
Would anyone mind sharing their prompts for this type writing style? The models that I switch between are Llama2-32k-instruct and Mythomax-L2. | 2023-10-23T13:21:48 | https://www.reddit.com/r/LocalLLaMA/comments/17eka3q/prompts_to_increase_token_output/ | Danny_Davitoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eka3q | false | null | t3_17eka3q | /r/LocalLLaMA/comments/17eka3q/prompts_to_increase_token_output/ | false | false | self | 2 | null |
How can i create an api from my locally installed llama 2 7b-chat model on AWS instance? | 4 | Hey guys, I am using the Llama 2 chat model for rephrasing text in my chatbot. I have installed it on an AWS EC2 instance, and now I want to create an API from this model and use it within the same instance or any other AWS instance. How can I do it? | 2023-10-23T13:21:42 | https://www.reddit.com/r/LocalLLaMA/comments/17eka18/how_can_i_create_an_api_from_my_locally_installed/ | haroonmh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eka18 | false | null | t3_17eka18 | /r/LocalLLaMA/comments/17eka18/how_can_i_create_an_api_from_my_locally_installed/ | false | false | self | 4 | null |
Training a model on French Law, any insights? | 10 | # We were looking to train our first LORA model and had this idea of using french Labor Laws (Code du Travail) as a basis for a dataset.
**The "product" would be a query-oriented conversational model, able --in its limited and dutifuly-disclaimed-to-users capacities-- to reply to simple questions like «*****Is my boss allowed to force me to install a tracker on my phone in order to verify where I am when I'm working remote?*****", in French of course.**
​
**Why is it |fun|?**
* **Human-hard texts.** The law is full of jargon and confusing for ordinary people. There's a challenge in having an LLM understand it, and a bonus for humans if it works.
* **Social impact**. Not everyone gets legal help, so it's Tech-for-good approved.
* **Hacking the law**. C'me on. If that ain't fun?
* **Future opportunities.** Once we get it to work on that "part of the law", we could try to expand to other legal topics.
​
**Why do you care?**
**There are several options before us and we're not sure about the most efficient way to operate.** I'm coming here for advice and opinions, feel free to throw everything you think at us :)
​
**A/ The first question is: what's the easiest and most efficient way to do it today?**
We can use a LoRA of sorts, but there's also LangChain, for example. Which option in the constantly evolving landscape would be best at the moment?
​
**B/ The next problem is probably: how to build that dataset?**
We have the PDF document transformed into text and hierarchical YAML, but it's just a start.
My first idea was to use a summarization model in order to get "simplified" texts to use as answers, but it turns out the French summarization models on Hugging Face are not very good at it. Our domain texts are not CNN news articles translated into French.
We know how datasets matter, so there's really a lot we want to hear about that.
​
**C/ And then, there's the question : on which model do we train the LORA?**
There are models like Vigogne that are supposedly good at French, but we're not so sure.
One problem: is the existing model trained on the tokens of our text?
In other words, will the LLAMA/Vigogne/Falcon/other model be able to understand our lexical field?
My testing is worrying from that point of view. Running the tokenizer of some models like Vigogne or customized LLaMAs on the law text, I get 1M+ tokens in the resulting set, while the model's vocab is only 32k or 64k entries.
​ | 2023-10-23T13:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/17ejzx1/training_a_model_on_french_law_any_insights/ | cocoadaemon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ejzx1 | false | null | t3_17ejzx1 | /r/LocalLLaMA/comments/17ejzx1/training_a_model_on_french_law_any_insights/ | false | false | self | 10 | null |
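On question B above, one cheap starting point is a pure template pass over the extracted articles before involving any LLM: one instruction/output record per article, which an LLM can later paraphrase into more natural questions. A sketch; the question template and the Alpaca-style field names are assumptions:

```python
import json

def articles_to_records(articles):
    """articles: iterable of (article_id, text) pairs extracted from the
    hierarchical YAML. Emits instruction/output records as JSON lines."""
    lines = []
    for art_id, text in articles:
        record = {
            # Placeholder question template, to be paraphrased later.
            "instruction": f"Que dit l'article {art_id} du Code du travail ?",
            "output": text,
        }
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)
```

Template records like these at least anchor every article in the dataset; the LLM-generated paraphrases then add the conversational variety.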
COPIUMS - Run Orange Pis scalably locally, inviting debates on practicality | 0 | I have written a small (only funny by choice) piece on how I think we can scale a garage llm cluster lab studio setup that's actually fairly powerful, the idea is to attach 100 Orange Pis to 25 [Mega4](https://www.uugear.com/product/mega4-4-port-usb-3-ppps-hub-for-raspberry-pi-4b/) Hubs and regular powered usb hubs to run 100 Orange Pis remotely!
I am calling the contraption **COPIUMS 100**, and would love to understand any concerns you see with my approach in the fundraising campaign I am attempting to run [here](https://mirror.xyz/ogmilady.eth/R66X6R-U-4NyjGbcDO7rtVYKsQ-F2MzL6QlitU7v41E)
AMA about the plan, tell me why you think it won't work, it would greatly help me save time if you spot any issues I might not be seeing | 2023-10-23T12:22:57 | https://www.reddit.com/r/LocalLLaMA/comments/17ej3m0/copiums_run_orange_pis_scalably_locally_inviting/ | Laneone4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ej3m0 | false | null | t3_17ej3m0 | /r/LocalLLaMA/comments/17ej3m0/copiums_run_orange_pis_scalably_locally_inviting/ | false | false | self | 0 | null |
Context size vs max_position_embedding | 1 | [removed] | 2023-10-23T10:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/17ehkqe/context_size_vs_max_position_embedding/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ehkqe | false | null | t3_17ehkqe | /r/LocalLLaMA/comments/17ehkqe/context_size_vs_max_position_embedding/ | false | false | self | 1 | null |
LLM loaders - UI apps? | 1 | [removed] | 2023-10-23T10:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/17ehhka/llm_loaders_ui_apps/ | Majestical-psyche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ehhka | false | null | t3_17ehhka | /r/LocalLLaMA/comments/17ehhka/llm_loaders_ui_apps/ | false | false | self | 1 | null |
Has anyone noticed that Llama-2 models get way worse with complex context injection? | 2 | Trying to give complex "brains" to the models with instruct-injection seems to make them perform worse than minimal instruct-injection. This is not the case for ChatGPT, though.
has anyone else noticed this? | 2023-10-23T10:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/17eh034/has_anyone_noticed_that_llama2_models_get_way/ | cold-depths | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eh034 | false | null | t3_17eh034 | /r/LocalLLaMA/comments/17eh034/has_anyone_noticed_that_llama2_models_get_way/ | false | false | self | 2 | null |
looking for self hosted models to port and test | 2 | Hi, after having tried LLaMA I was impressed with its more technical-sounding responses. Although ChatGPT-4 is capable of so many things, I'm looking to host my own LLMs in a more task-specific way. However, the hardware I want to host them on has NPUs, and I would have to use their library, develop for it, and convert the models.
I would like to know what the best models are for language translation and code generation/analysis, and whether or not they fit in 16GB of RAM. The NPUs state compatibility with various data types, even bfloat16. I've always had issues trying to run LLaMA's raw models using Python, while llama.cpp has always worked, but then I am stuck with its converted models. | 2023-10-23T10:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/17egtev/looking_for_self_hosted_models_to_port_and_test/ | SystemErrorMessage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17egtev | false | null | t3_17egtev | /r/LocalLLaMA/comments/17egtev/looking_for_self_hosted_models_to_port_and_test/ | false | false | self | 2 | null |
Collection thread for llava accuracy | 28 | Since I can't add pictures in the comments, I suggest that we briefly share our experiences and insights regarding the accuracy and reliability of llava 7b, llava 13b and bakllava 7b. So that you get a realistic impression of what you can currently achieve with these models and where the limits are.
​
My short tests and findings show that it is possible to extract diagrams, tables, data, etc., but it does not seem to be sufficient for production.
And I found that Bakllava-7B (based on Mistral) is at least as good as Llava-13B (based on Vicuna). It's definitely worth testing Baklava - and Bakllava-7B too : p
https://preview.redd.it/qezqehzgexvb1.jpg?width=1739&format=pjpg&auto=webp&s=85b646e025ff232b4c9250210f69369a60f7b7c7
https://preview.redd.it/9euuyfzgexvb1.jpg?width=1952&format=pjpg&auto=webp&s=8b6f08b31f181f30b7cadb138914145f3ec23a25
https://preview.redd.it/tbo7yfzgexvb1.jpg?width=1866&format=pjpg&auto=webp&s=b027295de62c7e3a6349e40503db4dd843adb090
https://preview.redd.it/rxo8jnzgexvb1.jpg?width=1741&format=pjpg&auto=webp&s=479d7ea6d3fb7696f2120bd186dda357103afa35
https://preview.redd.it/zosramzgexvb1.jpg?width=1587&format=pjpg&auto=webp&s=7d2951292758a8cb98c157c083507b1ba14c90c3
https://preview.redd.it/vmnoipzgexvb1.jpg?width=1549&format=pjpg&auto=webp&s=367d7797f0f8ae76421dc7e6d7273bd50f511d24
https://preview.redd.it/jx3wbjzgexvb1.jpg?width=1536&format=pjpg&auto=webp&s=58a62184476b3e3148b691339e4509404746183a
https://preview.redd.it/zn0khhzgexvb1.jpg?width=1424&format=pjpg&auto=webp&s=a1081a5216c2b978d667c809bd07b7d595084051 | 2023-10-23T10:03:29 | https://www.reddit.com/r/LocalLLaMA/comments/17egssk/collection_thread_for_llava_accuracy/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17egssk | false | null | t3_17egssk | /r/LocalLLaMA/comments/17egssk/collection_thread_for_llava_accuracy/ | false | false | 28 | null | |
Cache a prompt | 1 | Hey everyone!
Quick question: How do I cache a prompt when initializing a model?
For instance, if I want the model to reference a cached set of example outputs and provide consistent responses based on a specific text input, how would I do that? | 2023-10-23T09:59:01 | https://www.reddit.com/r/LocalLLaMA/comments/17egq6o/cache_a_prompt/ | Toni_rider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17egq6o | false | null | t3_17egq6o | /r/LocalLLaMA/comments/17egq6o/cache_a_prompt/ | false | false | self | 1 | null |
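If "caching" here means reusing the same example outputs on every call, the usual trick is to keep them in a constant prompt prefix and only vary the tail; llama.cpp-style backends can then reuse their KV cache for the unchanged prefix. A sketch with a made-up formatting task:

```python
# Constant prefix: system instructions plus example outputs, built once.
CACHED_PREFIX = (
    "You are a strict formatter. Follow the examples exactly.\n"
    "Input: hello world -> Output: HELLO WORLD\n"
    "Input: good day -> Output: GOOD DAY\n"
)

def make_prompt(user_text):
    """Prepend the cached prefix so every request shares the same token
    prefix (which is what lets the backend reuse its KV cache)."""
    return f"{CACHED_PREFIX}Input: {user_text} -> Output:"
```

Whether the prefix computation is actually skipped depends on the backend's prompt-cache settings, but keeping the prefix byte-identical across calls is the prerequisite either way.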
Chat model | 1 | What is a good model for having conversations? I'm not talking about RP: you would still create a character, but keep things within chat boundaries, not roleplay. I'm not even sure I'm saying this correctly, but drop your best. | 2023-10-23T09:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/17egpje/chat_model/ | swwer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17egpje | false | null | t3_17egpje | /r/LocalLLaMA/comments/17egpje/chat_model/ | false | false | self | 1 | null |
How to build a robot at home? | 1 | [removed] | 2023-10-23T09:42:34 | https://www.reddit.com/r/LocalLLaMA/comments/17egi6o/how_to_build_a_robot_at_home/ | HorrorNo8851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17egi6o | false | null | t3_17egi6o | /r/LocalLLaMA/comments/17egi6o/how_to_build_a_robot_at_home/ | false | false | self | 1 | null |
A AI-art of LLaMa family | 276 | 2023-10-23T09:16:20 | MarySmith2021 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17eg56o | false | null | t3_17eg56o | /r/LocalLLaMA/comments/17eg56o/a_aiart_of_llama_family/ | false | false | 276 | {'enabled': True, 'images': [{'id': 'WXTeqHvIf3keGR2_sDvcezuZj2ROx-c-42oNvSIycOc', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/rpr9pe916xvb1.png?width=108&crop=smart&auto=webp&s=851d23e5a30a99509767428a0cac0aa602227e86', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/rpr9pe916xvb1.png?width=216&crop=smart&auto=webp&s=945bd92e03c854da6f42efb4a1c33c5dab939d29', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/rpr9pe916xvb1.png?width=320&crop=smart&auto=webp&s=3c2242f882f1ecc29bfa67dcb9431cc24af042bd', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/rpr9pe916xvb1.png?width=640&crop=smart&auto=webp&s=60d1eb8b912ba9365cab624a776b725af391ec52', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/rpr9pe916xvb1.png?width=960&crop=smart&auto=webp&s=d86d0b43ed7a109f2c7d4aada914c045f0cfda1f', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/rpr9pe916xvb1.png?auto=webp&s=9dd3560b288c820ebc990b6decec252422506ed8', 'width': 1024}, 'variants': {}}]} | |||
Is speed of GPU highly affected by CPU and RAM? | 2 | I am on a tight budget (about $800 for CPU + MB +RAM) for building a PC to run LLMs (on my GPU). My current plan is to get a 13700, and I am also choosing between DDR4/DDR5 and also 32/64GB. Does it really matter, or is there only a slight difference? From my plan of getting 13700 + B760 + 32G DDR4, if I can only choose one item to upgrade, which would it be?
I am still in high school, so the budget is unchangeable. I am also not considering cloud computing.
I have seen posts comparing DDR4/5 running on CPUs, but not on GPUs. | 2023-10-23T09:07:09 | https://www.reddit.com/r/LocalLLaMA/comments/17eg0ri/is_speed_of_gpu_highly_affected_by_cpu_and_ram/ | TreeSheep9066 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eg0ri | false | null | t3_17eg0ri | /r/LocalLLaMA/comments/17eg0ri/is_speed_of_gpu_highly_affected_by_cpu_and_ram/ | false | false | self | 2 | null |
What is the best 70B finetuned model for ChatGPT-like tasks? | 0 | For me, I have used Airoboros and Llama; are there any newer or better ones for it? | 2023-10-23T08:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/17efuc2/what_is_the_best_70b_finetuned_model_for/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17efuc2 | false | null | t3_17efuc2 | /r/LocalLLaMA/comments/17efuc2/what_is_the_best_70b_finetuned_model_for/ | false | false | self | 0 | null |
What is the best 70B finetuned model for GPT-like tasks? | 1 | For me, I have used Airoboros and Llama. Are there any newer finetuned ones? | 2023-10-23T08:50:35 | https://www.reddit.com/r/LocalLLaMA/comments/17efspv/what_is_the_best_70b_finetuned_model_for_gptlike/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17efspv | false | null | t3_17efspv | /r/LocalLLaMA/comments/17efspv/what_is_the_best_70b_finetuned_model_for_gptlike/ | false | false | self | 1 | null |
Interview of Guillaume Lample (Cofounder @ Mistral.AI) : The secrets of Large Language Models | 1 | [removed] | 2023-10-23T08:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/17efq3v/interview_of_guillaume_lample_cofounder_mistralai/ | No_Palpitation7740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17efq3v | false | null | t3_17efq3v | /r/LocalLLaMA/comments/17efq3v/interview_of_guillaume_lample_cofounder_mistralai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'od1IA8dQw_jMv_pAonHW0ZG70J5sE4q1fMqGYWy1KZY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/y5GWz5OjpV9J_NrUFLMGBBjMiXmyKD2lB3enZHnaXEM.jpg?width=108&crop=smart&auto=webp&s=67c0090eb8b7ee12dfbe7fa7d9531f04401899a3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/y5GWz5OjpV9J_NrUFLMGBBjMiXmyKD2lB3enZHnaXEM.jpg?width=216&crop=smart&auto=webp&s=5be4c4e0f06db85a2de0114f275f4166b6f5bb4e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/y5GWz5OjpV9J_NrUFLMGBBjMiXmyKD2lB3enZHnaXEM.jpg?width=320&crop=smart&auto=webp&s=dacd44e83d4c689dc5df03974679e13addc5bbc7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/y5GWz5OjpV9J_NrUFLMGBBjMiXmyKD2lB3enZHnaXEM.jpg?auto=webp&s=ebc612df58fb09bf5577cb3ee86e31bf5e768739', 'width': 480}, 'variants': {}}]} |
Need help quantizing a 70B model to exl2. I can't get convert.py working properly. | 8 | This is a long-winded post (also my first on Reddit), but I'm stuck on this conversion process and don't know who else to ask. I'd appreciate any help.
TL;DR at the bottom.
I'm trying to use exllamav2's convert.py to quantize a model, but I keep running into errors.
I've been trying to follow sophosympatheia's new guide (regarding merges) because I want to quantize a model from fp16 to exl2 mentioned in later steps. I'm just a noob at most of this python stuff, and I've been following guides as best I can. I'm fine until something breaks, and then I'm not sure how to fix it. I'm on Windows 10. Not familiar with WSL, and not sure if it's needed or not. Not interested in merging models yet.
I've searched for how to properly run this script, but I keep encountering errors. I've only dabbled with python for a couple years (not a programmer), mostly for image and text generation stuff, but I don't understand a lot of it. While I usually understand command lines, a GUI where I can plug in a model folder, outputs, parquet, etc would have been nice. I'm not sure how people are easily quantizing their files to exl2, but if I can manage it I'll put it up on HF, assuming anyone is allowed to.
I have:
Windows 10
2x3090 GPUs
64GB of system RAM (Is this enough? I followed a guide and copied parameters, but I don't know how much RAM they had installed.)
7800X3D with 8 cores
2 NVME drives.
Text-generation-webui running via the one-click installer (not needed here?)
cloned turboderp/exllamav2 repo
Nvidia CUDA Toolkit 12.2
Python 3.10.11 AND 3.12.0 installed. I thought the updated version would remove the older. Is 3.10.11 safe to remove, or do certain programs rely on it?
"python3" was missing (why?) and I was prompted to install from the Windows store.
When running convert.py, it said I was missing torch and safetensors (I KNOW I've installed these before, right? I've been using Oobabooga's text-gen after all and pretty sure I needed those somewhere.) so I installed those with "python3 -m pip install" as instructed by another guide, which resolved those specific errors.
Now it's saying that it can't find CUDA. Maybe it's needing a different version, different location, etc. I have no idea how to tell.
It's also saying it can't find exllamav2_ext. Isn't exllamav2 what I just cloned? (Also have in Ooba's text-gen)
In convert.py, for the "model name" I've been using the folder name from the text-gen download, but config.json in that folder does mention another name I've never seen anywhere else. ("_name_or_path": "output_a_mythoxwin/",) I'm trying to quantize a merged model that only a few people have looked at (lizpreciatior's lzlv_70B which blends Xwin, Nous-Hermes, and Mythospice). I've found it good for fantasy and sci-fi writing, natural-sounding characters, vivid descriptions, etc. Someone did make a GGUF, but it's slower than my exl2 models on my 2x3090s. I also I want to save a little VRAM for using Windows while also using RoPE scaling. It stays coherent to at least 10,240 context, maybe more. I was going for 4.50bpw to not push VRAM to the very limit since this is my main PC with other programs running.
The model I want to quantize is a collection of 15 safetensors files, downloaded via text-gen and placed in C:\AI\text-generation-webui-main\models\lizpreciatior_lzlv_70b_fp16_hf\
I wanted to put my working folder as a secondary nvme drive that I have, in O:\AI_Work
I have the PIPPA dataset parquet file in C:\AI\0000.parquet
I've tried to line break the traceback as best I can, but I might have misplaced a few.
Here's the latest error I'm getting, both in Windows and Anaconda command line:
C:\AI\exllamav2>python3 convert.py -i C:\AI\text-generation-webui-main\models\lizpreciatior_lzlv_70b_fp16_hf\ -o O:\AI_Work -om lizpreciatior_lzlv_70b_fp16_hf_measurement.json -mr 10 -gr 10 -c C:\AI\0000.parquet && python3 convert.py -i C:\AI\text-generation-webui-main\models\lizpreciatior_lzlv_70b_fp16_hf -o O:\AI_Work -nr -m lizpreciatior_lzlv_70b_fp16_hf_measurement.json -b 4.50 -gr 40 -c C:\AI\0000.parquet -cf lizpreciatior_lzlv_70b-exl2-4.50bpw
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2'
Traceback (most recent call last):
File "C:\AI\exllamav2\exllamav2\ext.py", line 14, in <module>
import exllamav2_ext
ModuleNotFoundError: No module named 'exllamav2_ext'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\AI\exllamav2\convert.py", line 1, in <module>
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Tokenizer
File "C:\AI\exllamav2\exllamav2\__init__.py", line 3, in <module>
from exllamav2.model import ExLlamaV2
File "C:\AI\exllamav2\exllamav2\model.py", line 17, in <module>
from exllamav2.cache import ExLlamaV2CacheBase
File "C:\AI\exllamav2\exllamav2\cache.py", line 2, in <module>
from exllamav2.ext import exllamav2_ext as ext_c
File "C:\AI\exllamav2\exllamav2\ext.py", line 124, in <module>
exllamav2_ext = load \
^^^^^^
File "C:\Users\USERNAME\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\utils\cpp_extension.py", line 1308, in load
return _jit_compile(
^^^^^^^^^^^^^
File "C:\Users\USERNAME\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\utils\cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "C:\Users\USERNAME\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\utils\cpp_extension.py", line 1810, in _write_ninja_file_and_build_library
_write_ninja_file_to_build_library(
File "C:\Users\USERNAME\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\utils\cpp_extension.py", line 2199, in _write_ninja_file_to_build_library
cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USERNAME\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\utils\cpp_extension.py", line 1980, in _get_cuda_arch_flags
arch_list[-1] += '+PTX'
~~~~~~~~~^^^^
IndexError: list index out of range
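For what it's worth, this `IndexError` in `_get_cuda_arch_flags` usually means PyTorch couldn't detect a GPU to read a compute capability from — consistent with the "No CUDA runtime is found" line above, and the Microsoft Store Python path in the traceback suggests torch may be a CPU-only build. A commonly suggested workaround (an assumption for this setup, not a verified fix) is to pin the arch list explicitly before building the extension; the more robust fix is reinstalling a CUDA-enabled torch inside the Anaconda environment:

```shell
# Hypothetical workaround: pin the CUDA arch so torch's cpp_extension
# can build exllamav2_ext without probing a GPU (8.6 = RTX 3090).
# On Windows cmd, the equivalent is:  set TORCH_CUDA_ARCH_LIST=8.6
export TORCH_CUDA_ARCH_LIST="8.6"
echo "TORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST"
# then re-run:  python convert.py -i <model dir> -o <work dir> ...
```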
TL;DR:
A python noob is trying to make an exl2 quant with convert.py. The conversion process isn't nearly as streamlined as others made it seem. I'm probably missing something obvious to others. | 2023-10-23T08:20:23 | https://www.reddit.com/r/LocalLLaMA/comments/17efeuy/need_help_quantizing_a_70b_model_to_exl2_i_cant/ | Clockwork_Gryphon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17efeuy | false | null | t3_17efeuy | /r/LocalLLaMA/comments/17efeuy/need_help_quantizing_a_70b_model_to_exl2_i_cant/ | false | false | self | 8 | null |
GBNF grammar VS Accuracy | 2 | Hey everyone,
I'm working on a project to pull key info from video transcripts: tone, style, main points, and how formal they are. I've been using llama.cpp with GBNF grammars but noticed some issues and reduced accuracy.
Why am I using grammars instead of a bigger prompt? I want to keep the full context of a transcript and make sure the model answers in JSON format using certain words.
I've read many Reddit posts about this, but improving the AI's accuracy without deep fine-tuning is tough. On the other hand, it's hard (or even impossible) to get really accurate training data from so many different transcripts and subjects.
Any tips on making it better or changing my method?
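For concreteness, the kind of grammar I'm talking about looks like this — the field names are illustrative placeholders, and the `./main` invocation is hypothetical (flags depend on your llama.cpp build):

```shell
# Minimal GBNF sketch that constrains output to {"tone": "...", "style": "..."}.
# Field names are illustrative, not a tested production grammar.
cat > transcript.gbnf <<'EOF'
root   ::= "{" ws "\"tone\"" ws ":" ws string ws "," ws "\"style\"" ws ":" ws string ws "}"
string ::= "\"" [a-zA-Z0-9 .,]* "\""
ws     ::= [ \t\n]*
EOF
# hypothetical invocation:
# ./main -m openhermes-2-mistral-7b.Q4_K_M.gguf --grammar-file transcript.gbnf \
#   -p "Describe the transcript's tone and style as JSON:"
```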
PS: I'm using a MacStudio M1 with 32GB RAM, and my AI model is OpenHermes-2-Mistral-7B-GGUF. | 2023-10-23T07:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/17ef035/gbnf_grammar_vs_accuracy/ | Toni_rider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ef035 | false | null | t3_17ef035 | /r/LocalLLaMA/comments/17ef035/gbnf_grammar_vs_accuracy/ | false | false | self | 2 | null |
Character-LLM: A Trainable Agent for Role-Playing | 1 | 2023-10-23T07:37:25 | https://arxiv.org/abs/2310.10158 | starlightrobotics | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17eevb9 | false | null | t3_17eevb9 | /r/LocalLLaMA/comments/17eevb9/characterllm_a_trainable_agent_for_roleplaying/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Will two 2660v4 CPUs limit the performance of a 3090 and four 2080ti cards in LLM inference? | 3 | After that is RAM frequency really important? | 2023-10-23T07:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/17eet9d/will_two_2660v4_cpus_limit_the_performance_of_a/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eet9d | false | null | t3_17eet9d | /r/LocalLLaMA/comments/17eet9d/will_two_2660v4_cpus_limit_the_performance_of_a/ | false | false | default | 3 | null |
CPU Ai and ui that does not require avx | 3 | Hi everyone, I've been doing research on running local LLMs and started setting one up today.
I configured an old server I had lying around with Kubuntu, but I am getting an error "Illegal Instruction (Core Dumped)" when trying to set up oobabooga. I also received the error with openplayground.
After looking around, it appears it may be because my CPU does not support AVX.
Is there any way I can solve this error to get ooba running? Or does anyone here know of or use an AI system that does not need AVX?
Thank you everyone! | 2023-10-23T07:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/17eeo79/cpu_ai_and_ui_that_does_not_require_avx/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eeo79 | false | null | t3_17eeo79 | /r/LocalLLaMA/comments/17eeo79/cpu_ai_and_ui_that_does_not_require_avx/ | false | false | self | 3 | null |
How to make a funny chatbot with persona based on written question and answer lines? | 2 | Thanks for any help | 2023-10-23T06:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ee7ck/how_to_make_a_funny_chatbot_with_persona_based_on/ | fignewtgingrich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ee7ck | false | null | t3_17ee7ck | /r/LocalLLaMA/comments/17ee7ck/how_to_make_a_funny_chatbot_with_persona_based_on/ | false | false | self | 2 | null |
LLaMA2 model finetuning on custom dataset | 1 | Hey. So I am new to the AI/ML realm. You might think this is another copy of some subreddit post, but I guess it's not.
So my task is to finetune a model on a custom dataset. And that model should only answer queries about questions that are available in the dataset provided during training.
I tried training the LLaMA 7b model from Hugging Face on my dataset [here](https://huggingface.co/datasets/MananSantoki/Vadodara-Info-JSON/raw/main/vadodara_train.json). I used [this](https://colab.research.google.com/drive/12GH6qVlnTP8PHxNNdG7LRWZsXbRSgbAX?usp=sharing) method using QLoRA. Upon successful training, when I use model.predict() it works surprisingly well, but when I upload it to HF, fetch the weights, and wrap them onto the base model (LLaMA 7b), it does not perform as it used to. It hallucinates.
So my question is: is there something I am doing wrong? Or is there something else I should do to achieve my task? Because the main thing I want is that the model should not use its pretrained data and only use the dataset I provide during training.
If someone could help or offer suggestions, it would be great. Cheers!
🚀 Run Local LLMs with a User-Friendly Web UI in Two Docker Commands! | 97 | Hey, all!
I'm thrilled to share a fantastic development that's going to make your experience with local LLMs easier and more accessible than ever before. As one of the maintainers for Ollama-webui, I'm excited to introduce you to our project, which brings the power of local language models (LLMs) right to your fingertips with just two simple lines of Docker command!
**Ollama-webui GitHub Repo:** [Ollama Web UI](https://github.com/ollama-webui/ollama-webui)
**Demo:**
https://i.redd.it/9lfdwgmd9wvb1.gif
We've created a seamless web user interface for Ollama, designed to make running and interacting with LLMs a breeze. No more struggling with command-line interfaces or complex setups. With our solution, you can run a web app to download models and start interacting with them without any additional CLI hassles. Even better, you can access it from your smartphone over your local network! Here's all you need to do to get started:
**Step 1: Run Ollama**
docker run -d -v ollama:/root/.ollama -p 11434:11434 -e OLLAMA_ORIGINS="*" --name ollama ollama/ollama
**Step 2: Run Ollama WebUI**
docker run -d -p 3000:8080 --name ollama-webui ollamawebui/ollama-webui
That's it! With these two lines of Docker commands, you'll have your local LLM environment up and running, complete with an intuitive web interface hosted at [http://localhost:3000/](http://localhost:3000/). No more struggling with complex setups or having to remember obscure command lines. Ollama-webui makes it accessible to everyone.
We'd love to hear your feedback and suggestions as we continue to improve this project. So, give it a try, and let us know what you think. Have you encountered any issues? Do you have ideas for additional features? We're all ears! | 2023-10-23T06:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/17edvbx/run_local_llms_with_a_userfriendly_web_ui_in_two/ | tjrbk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17edvbx | false | null | t3_17edvbx | /r/LocalLLaMA/comments/17edvbx/run_local_llms_with_a_userfriendly_web_ui_in_two/ | false | false | 97 | {'enabled': False, 'images': [{'id': 'eRaVGkO5gIq8KzYaYH_rpPaz3WKXkUbFvafzp6TFem4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?width=108&crop=smart&auto=webp&s=2bff3fb66c2a6d912195641aa427c85cf0b57ee9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?width=216&crop=smart&auto=webp&s=0d6141da1dacbb83fa6c1a0ae9bb3a638b3c00e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?width=320&crop=smart&auto=webp&s=6e791ca503787e92f48887b9179ee9a56c68739e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?width=640&crop=smart&auto=webp&s=18f14b3f2c1c5fe699fa27735ffe02b321649680', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?width=960&crop=smart&auto=webp&s=d07750edc5d691c5e16d7e1d908aff2f05b8e7ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?width=1080&crop=smart&auto=webp&s=9596c3af5d24f24d58c7f5142f4c887a8d2127ae', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/GfD7bGG_gbZ-JuHH0nJAr-Oqr2npanE1symXa7otEqw.jpg?auto=webp&s=da157ea5eb279f648d3905b415240f973642a6c0', 'width': 1200}, 'variants': {}}]} | |
Best most advanced model that is also uncensored for story writing up to 32B (or whatever fits in a 3090)? | 1 | Title!
Please help me, fellow redditors! I am out of the loop. | 2023-10-23T05:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/17edb28/best_most_advanced_model_that_is_also_uncensored/ | fastinguy11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17edb28 | false | null | t3_17edb28 | /r/LocalLLaMA/comments/17edb28/best_most_advanced_model_that_is_also_uncensored/ | false | false | self | 1 | null |
Best model that is also uncensored for story writing up to 32B (or whatever fits in a 3090)? | 1 | Title!
Please help me, kind users! I am out of the loop. | 2023-10-23T05:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/17eda7e/best_model_that_is_also_uncensored_for_story/ | fastinguy11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eda7e | false | null | t3_17eda7e | /r/LocalLLaMA/comments/17eda7e/best_model_that_is_also_uncensored_for_story/ | false | false | self | 1 | null |
How much does it cost to finetune an LLM? | 1 | [removed] | 2023-10-23T05:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/17ed4ju/how_much_does_it_cost_to_finetune_an_llm/ | ronforestron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ed4ju | false | null | t3_17ed4ju | /r/LocalLLaMA/comments/17ed4ju/how_much_does_it_cost_to_finetune_an_llm/ | false | false | self | 1 | null |
How much does it cost to finetune an LLM? | 1 | [removed] | 2023-10-23T05:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ed3s8/how_much_does_it_cost_to_finetune_an_llm/ | ronforestron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ed3s8 | false | null | t3_17ed3s8 | /r/LocalLLaMA/comments/17ed3s8/how_much_does_it_cost_to_finetune_an_llm/ | false | false | self | 1 | null |
What determines the speed of token generation on the GGML & GGUF model? | 6 | What determines the speed of token generation on the GGML & GGUF model?
I have a 13600K and 64 GB of DDR5, with 17 layers offloaded to a 3060 12 GB. I get, on average, 0.9 tokens per second. What do you need to overclock to get more tokens per second? | 2023-10-23T05:02:11 | https://www.reddit.com/r/LocalLLaMA/comments/17ecor0/what_determines_the_speed_of_token_generation_on/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ecor0 | false | null | t3_17ecor0 | /r/LocalLLaMA/comments/17ecor0/what_determines_the_speed_of_token_generation_on/ | false | false | self | 6 | null |
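A rough rule of thumb for the question above: generation with layers left on the CPU is usually memory-bandwidth bound, so the ceiling is roughly RAM bandwidth divided by the bytes of weights that stay in system RAM — which is why overclocking RAM (bandwidth) matters more than CPU clocks. A back-of-envelope sketch, where all the concrete numbers are assumptions, not measurements:

```python
# Back-of-envelope ceiling for partially offloaded GGUF generation:
# every token streams the CPU-resident weights once, so
# tokens/s ~= effective RAM bandwidth / CPU-resident model bytes.
def est_tokens_per_sec(cpu_resident_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / cpu_resident_gb

# Assumed numbers: a 70B Q4 GGUF is ~40 GB; with 17 layers on a 12 GB GPU,
# roughly 32 GB stays in RAM; dual-channel DDR5 manages ~40 GB/s effective.
print(round(est_tokens_per_sec(32, 40), 2))  # -> 1.25 tok/s upper bound
```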
Any services that can help host LoRA (QLoRA) of Mistral 7B? | 7 | There seems to be three economic models for cloud hosted open source LLMs:
1. The "model garden" - It looks like cloud providers like GCP and Azure have gotten it all wrong when dealing with open source models. They provide self serve "templates" to be able to bring up your own instances of open source models - which only makes economic sense if there is very high utilization of the GPU. Else most of the time you have an idle GPU that you end up paying for.
2. Shared Hosting of open source models - Economically, shared hosting makes sense as long as constant throughput is not required. Amazon Bedrock seems to have gotten this somewhat right. However, from what I can tell, there is no way to host a fine-tuned / LoRA / QLoRA model via Bedrock. Even for simple tasks, n-shot serving seems wasteful of compute when the same information can be more efficiently encoded in a LoRA layer.
3. Shared Hosting of open source models w/ LoRA - This is the sweet spot IMO: shared hosting of open source models, plus the option to serve task-specific LoRAs as required. The only provider I have come across that offers this is [fireworks.AI](https://fireworks.AI). Their pricing seems competitive (I think less than 0.1x gpt3.5 on a per-token basis for Mistral 7B) - though their max throughput seems a bit restrictive (100 reqs / min).
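The economics of (1) vs (2)/(3) are easy to sketch numerically — the prices below are illustrative placeholders, not quotes from any provider:

```python
# Toy break-even between a dedicated GPU instance (you pay for idle time)
# and shared per-token hosting. All prices are illustrative placeholders.
def dedicated_monthly(gpu_usd_per_hour: float) -> float:
    return gpu_usd_per_hour * 24 * 30  # billed around the clock, idle or not

def shared_monthly(tokens_per_month: float, usd_per_million_tokens: float) -> float:
    return tokens_per_month / 1e6 * usd_per_million_tokens

print(dedicated_monthly(1.50))           # -> 1080.0 (USD/month for one GPU)
print(shared_monthly(50_000_000, 0.20))  # -> 10.0  (50M tokens at $0.20/M)
```

At low utilization the dedicated instance is two orders of magnitude more expensive, which is the whole argument for (2) and (3).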
Is my understanding correct or are there more services offering (3)? If not, why do you think none of the "big tech" players have caught on to the pretty straight forward conclusion of having shared open source model hosting w/ LoRA? | 2023-10-23T04:12:59 | https://www.reddit.com/r/LocalLLaMA/comments/17ebvvj/any_services_that_can_help_host_lora_qlora_of/ | distant_gradient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ebvvj | false | null | t3_17ebvvj | /r/LocalLLaMA/comments/17ebvvj/any_services_that_can_help_host_lora_qlora_of/ | false | false | self | 7 | null |
Open llama 7B is savage | 73 | 2023-10-23T04:09:07 | https://imgur.com/1Tbf7zc | SidneyRandall | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 17ebth4 | false | null | t3_17ebth4 | /r/LocalLLaMA/comments/17ebth4/open_llama_7b_is_savage/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'rP5jmDIMqnEZaMKXAM07sAFrNpZ_i8zsm8kVvEUwMs4', 'resolutions': [{'height': 26, 'url': 'https://external-preview.redd.it/zcCEF0anPV1k4TOGzLfT8JBiZWivJvdNbkYdnSgPfNE.jpg?width=108&crop=smart&auto=webp&s=2c8496a5c30d4c40999bba53b425f8566cde5585', 'width': 108}, {'height': 52, 'url': 'https://external-preview.redd.it/zcCEF0anPV1k4TOGzLfT8JBiZWivJvdNbkYdnSgPfNE.jpg?width=216&crop=smart&auto=webp&s=f4064f65845e0300d40af065e375bc0b9802d074', 'width': 216}, {'height': 77, 'url': 'https://external-preview.redd.it/zcCEF0anPV1k4TOGzLfT8JBiZWivJvdNbkYdnSgPfNE.jpg?width=320&crop=smart&auto=webp&s=7488cb5faf5e9e75709c14363c922f7b88862319', 'width': 320}, {'height': 155, 'url': 'https://external-preview.redd.it/zcCEF0anPV1k4TOGzLfT8JBiZWivJvdNbkYdnSgPfNE.jpg?width=640&crop=smart&auto=webp&s=36f5c0b62ace2e448029ef18d405e8bd9e989039', 'width': 640}], 'source': {'height': 177, 'url': 'https://external-preview.redd.it/zcCEF0anPV1k4TOGzLfT8JBiZWivJvdNbkYdnSgPfNE.jpg?auto=webp&s=218b6d49e9bafa456bad4f2276513e0d9fda1334', 'width': 727}, 'variants': {}}]} | ||
New to LLM, is there a subreddit that follows latest of uncensored models? | 1 | [removed] | 2023-10-23T03:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/17ebl3h/new_to_llm_is_there_a_subreddit_that_follows/ | sweetsunnyside | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ebl3h | false | null | t3_17ebl3h | /r/LocalLLaMA/comments/17ebl3h/new_to_llm_is_there_a_subreddit_that_follows/ | false | false | self | 1 | null |
Advanced RAG implementations | 70 | There might be hundreds of tutorials on RAG (youtube etc). I think- someone with even basic python knowledge can get it working in a day based on these tutorials. But I have not seen advanced tutorials with RAG.
So I wanted to know:
What are some advanced features that can be implemented with RAG to make a "question answering over your own documents" tool much better than the one-day implementation from basic YouTube tutorials?
Any good resources/GitHub implementations that can serve as examples of advanced RAG features?
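One concrete example of a step beyond the one-day tutorials is hybrid reranking — blending dense (embedding) similarity with sparse keyword overlap before handing chunks to the LLM. A pure-Python sketch; a real pipeline would use a proper embedder and a cross-encoder reranker:

```python
import math

def cosine(a, b):
    # Plain cosine similarity over two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rerank(query_vec, query_terms, docs, alpha=0.7):
    """docs: list of (text, embedding). Blend dense similarity with keyword overlap."""
    scored = []
    for text, vec in docs:
        dense = cosine(query_vec, vec)
        sparse = len(query_terms & set(text.lower().split())) / max(len(query_terms), 1)
        scored.append((alpha * dense + (1 - alpha) * sparse, text))
    return [text for _, text in sorted(scored, reverse=True)]

docs = [("dogs bark loudly", [0.0, 1.0]), ("cats purr softly", [1.0, 0.0])]
print(hybrid_rerank([1.0, 0.0], {"cats"}, docs))  # -> ['cats purr softly', 'dogs bark loudly']
```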
Thanks | 2023-10-23T03:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/17eamba/advanced_rag_implementations/ | meet20hal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17eamba | false | null | t3_17eamba | /r/LocalLLaMA/comments/17eamba/advanced_rag_implementations/ | false | false | self | 70 | null |
Anyone know the "recommended" settings used for Agnai's subscription service on current 70B models? | 1 | Basically, the title. The level of coherence and creativity is crazy good imo. Anyone know the details? | 2023-10-23T02:07:37 | https://www.reddit.com/r/LocalLLaMA/comments/17e9lfv/anyone_know_the_recommended_settings_used_for/ | planny-mitchell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e9lfv | false | null | t3_17e9lfv | /r/LocalLLaMA/comments/17e9lfv/anyone_know_the_recommended_settings_used_for/ | false | false | self | 1 | null |
Looking for ideas on implementing a knowledge bot for my own documentation | 11 | So I am trying to get my hands dirty with a llocal llm and figure a project is the best way to do that. My idea is to make a bot that can be an assistant of sorts, taking in a set of documents/data and being able to tell me me stuff about it. For example, if something I want to do contradicts something already mentioned or in the data. Or an easy summary of a piece of documentation I wrote. I wanted to ask if/how others have implemented something like this for themselves?
For context, this is for my personal documents such as my code docs, ttrpg worlds, etc. Got a lot going on and would love to make myself an assistant😅 | 2023-10-23T02:00:47 | https://www.reddit.com/r/LocalLLaMA/comments/17e9gf2/looking_for_ideas_on_implementing_a_knowledge_bot/ | lucksleven51 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e9gf2 | false | null | t3_17e9gf2 | /r/LocalLLaMA/comments/17e9gf2/looking_for_ideas_on_implementing_a_knowledge_bot/ | false | false | self | 11 | null |
I’m new here and want to say hi :) | 13 | Hey everyone, I recently joined this sub, and have been interacting with a few of you. Just wanted to formally say hi!
I’m a grad student, and work at startup! 😁
Are you a hobbyist, or do you work in the deep learning space? What are your thoughts on the future of the LLMs?? | 2023-10-23T01:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/17e90tw/im_new_here_and_want_to_say_hi/ | GoodUnderstanding728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e90tw | false | null | t3_17e90tw | /r/LocalLLaMA/comments/17e90tw/im_new_here_and_want_to_say_hi/ | false | false | self | 13 | null |
Mordin Solus system message | 2 | A few months ago I remember seeing a post which contained a system message to make an LLM reply like the character Mordin Solus from the Mass Effect series. I can't find that post anymore, could anyone point me to it, or does anyone have that prompt? | 2023-10-23T01:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/17e8xd8/mordin_solus_system_message/ | Super_Pole_Jitsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e8xd8 | false | null | t3_17e8xd8 | /r/LocalLLaMA/comments/17e8xd8/mordin_solus_system_message/ | false | false | self | 2 | null |
Well, I just couldn't help it.... | 3 | 2023-10-23T01:09:45 | LongjumpingSpray8205 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17e8gk5 | false | null | t3_17e8gk5 | /r/LocalLLaMA/comments/17e8gk5/well_i_just_couldnt_help_it/ | false | false | spoiler | 3 | {'enabled': True, 'images': [{'id': 'doqOXDpPBxRymiCYjl5knPrDnzIt-mpWsf6Oxb7nfDE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=108&crop=smart&auto=webp&s=6691d1be46636efd570c63300bdff009fd04e1f3', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=216&crop=smart&auto=webp&s=cf78f4d5f0cb8ef502fc51a05e55629c04c1d273', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=320&crop=smart&auto=webp&s=794023f5cfb377405ea62eacce039794e6e00f28', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=640&crop=smart&auto=webp&s=42d3a3ec0048a93e8e645acc462761d55592d0f1', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=960&crop=smart&auto=webp&s=37dcb6c3b4cdbc9cdc7d1fa6a07b1c0ae11d5c84', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=1080&crop=smart&auto=webp&s=43ff6ec486db3737d392791c298995e7a76789ee', 'width': 1080}], 'source': {'height': 2992, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?auto=webp&s=a4c2fc5cbd4bc889bb0232ba098abde44ed92dfc', 'width': 2992}, 'variants': {'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=1e9935f967759730bd42205611df46c4ea808eab', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ca61b3a8a64459482c5e6ca523da42f18f8c0be1', 'width': 216}, {'height': 320, 'url': 
'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b5819bec37cda9d30e1e855d29907436dab8806f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=41f3d6033ff027f2e8ab70be8b5fbc2feb466d67', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=4f287405d7c6b6a871ef4cad7bac3d68264028b2', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=429164d95092ef8507def8854d98df42826c7f48', 'width': 1080}], 'source': {'height': 2992, 'url': 'https://preview.redd.it/ivz5ddtaruvb1.jpg?blur=40&format=pjpg&auto=webp&s=86a933f72df074dcbba7ce891e87f83db607e5b9', 'width': 2992}}}}]} | ||
llama.cpp server now supports multimodal! | 229 | Here is the result of a short test with llava-7b-q4\_K\_M.gguf
llama.cpp is such an allrounder in my opinion and so powerful. I love it
https://preview.redd.it/0lgw4dgznuvb1.jpg?width=1566&format=pjpg&auto=webp&s=482b8110b5ed32a2ede71c04025c608bcc1e6142
https://preview.redd.it/hkwgmdgznuvb1.jpg?width=1646&format=pjpg&auto=webp&s=84cfef04ab7bb853f3ea314866cda96be6e41aac
https://preview.redd.it/rm3lacgznuvb1.jpg?width=1550&format=pjpg&auto=webp&s=5511ad60c7d0f28c1d3dfd155db76ec777263da8
https://preview.redd.it/ynelrggznuvb1.jpg?width=1502&format=pjpg&auto=webp&s=91abce6090ead6305e2a8f1208d2c17806117b84
https://preview.redd.it/w0stkggznuvb1.jpg?width=1520&format=pjpg&auto=webp&s=05255742c1bad6a0abd10f3ccf0c07fa099232c4 | 2023-10-23T00:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/17e855d/llamacpp_server_now_supports_multimodal/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e855d | false | null | t3_17e855d | /r/LocalLLaMA/comments/17e855d/llamacpp_server_now_supports_multimodal/ | false | false | 229 | null | |
Front-end running on x86 Mac (Monterey v12.6)? | 1 | Not having luck getting a front-end setup on my 2019 Mac.
LM Studio not supported (downloaded the Mac binary). That’s ok.
Oobabooga: ran install script start_macos.sh. Installation ran for quite a while. Finally got the following error:
- ERROR: llama_cpp_python-0.2.11-cp311-cp311-macosx_12_0_x86_64.whl is not a supported wheel on this platform.
Does this mean conda doesn't have a wheel for my system? I was able to "pip install llama-cpp-python", which succeeded. But how do I use that with the Oobabooga installation, since it uses conda?
Guess I can try Koboldcpp next, hoping it installs on my system ok.
Thoughts on Oobabooga?
People here say they use local LLMs for story telling, chat, etc., but what "stories" are they telling and in what applications? I imagine just building a PC rig to have a story-telling LLM is overkill. Am I missing something? Please let us know if you're using LLMs in a creative way! | 42 | the title. I'm just curious about the applications of local LLMs. I've been working with many of them but for "real" applications I've always resorted back to ChatGPT/gpt-4. Is there some awesome value that these local LLMs give, other than privacy, that ChatGPT doesn't? | 2023-10-22T23:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/17e6e9w/people_here_say_they_use_local_llms_for_story/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e6e9w | false | null | t3_17e6e9w | /r/LocalLLaMA/comments/17e6e9w/people_here_say_they_use_local_llms_for_story/ | false | false | self | 42 | null |
Seeking for advice on an uncensored Italian LLM | 3 | Hey there! I need the wisdom of the best LLM subreddit on the planet.
I'm working on an Italian chatbot, and my prompt starts off with a character description followed by some chat examples, and then the classic User: <...> AI: <...> format.
Tried llama2-70b, but it's pretty meh in Italian. Currently, the best model for my use case is spicyboros-70b-2.2: chats are a blast, and the Italian is a step up from llama2, but still not perfect (lots of grammatical errors).
llama2-chat is decent in Italian, but that robotic "How can I help you?" vibe is killing the chat experience. I've experimented with llama/llama2 models fine-tuned using Alpaca's Italian translation, but those are garbage.
Any hidden gems or suggestions? Appreciate the insights! (VRAM is not a problem we have a couple of H100) | 2023-10-22T22:54:32 | https://www.reddit.com/r/LocalLLaMA/comments/17e5qls/seeking_for_advice_on_an_uncensored_italian_llm/ | poppear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e5qls | false | null | t3_17e5qls | /r/LocalLLaMA/comments/17e5qls/seeking_for_advice_on_an_uncensored_italian_llm/ | false | false | self | 3 | null |
🐺🐦⬛ My current favorite new LLMs: SynthIA v1.5 and Tiefighter! | 138 | Hope y'all are having a great weekend!
I'm still working on my next big LLM comparison/test (24 models from 7B to 70B tested thus far), but until that's done, here's a little spoiler/preview - two brand-new models that have already become favorites of mine:
**[KoboldAI/LLaMA2-13B-Tiefighter-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF)**
This is the best 13B I've ever used and tested. Easily beats my previous favorites MythoMax and Mythalion, and is on par with the best Mistral 7B models (like OpenHermes 2) concerning knowledge and reasoning while surpassing them regarding instruction following and understanding.
**[migtissera/SynthIA-7B-v1.5](https://huggingface.co/migtissera/SynthIA-7B-v1.5)**
Bigger is better and this new version of SynthIA has dethroned my previous 70B favorites Synthia (v1.2b) and Xwin. The author was kind enough to give me prerelease access so I've been using it as my main model for a week now, both for work and fun, with great success.
More details soon in my upcoming in-depth comparison...
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons:
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) | 2023-10-22T21:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e446l | false | null | t3_17e446l | /r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/ | false | false | self | 138 | {'enabled': False, 'images': [{'id': 'IE-5JwtOBcEfPvSGTaJemxzLd0RimCgVVjJXILTHmp8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=108&crop=smart&auto=webp&s=337e223e1bcc1bc1f31e55b342dcf00792369016', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=216&crop=smart&auto=webp&s=44b968192fbd3f67e7764afcbd11e7b7c2e97d20', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=320&crop=smart&auto=webp&s=6c9bc09c7284757d1ef94a7543026586f1ba2e0c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=640&crop=smart&auto=webp&s=a0e32182c253889e263fcd964711d3d7b6e58b0b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=960&crop=smart&auto=webp&s=019bc64b1469a6277677c453c4d10736aae174e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=1080&crop=smart&auto=webp&s=edd7c017217871fb3660c0e55e3d136f2b4834b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?auto=webp&s=cd81313a51e530452b31e908e9c4603756ac05b6', 'width': 1200}, 'variants': {}}]} |
Possible game changing algorithms for LLMs such as forward-forward? | 5 | https://github.com/loeweX/Forward-Forward https://keras.io/examples/vision/forwardforward/ (would be amazing to use this for LLMs) -> need further research
What are other future deep learning algorithms in the works? | 2023-10-22T21:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/17e3xc3/possible_game_changing_algorithms_for_llms_such/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e3xc3 | false | null | t3_17e3xc3 | /r/LocalLLaMA/comments/17e3xc3/possible_game_changing_algorithms_for_llms_such/ | false | false | self | 5 | null |
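For readers who haven't seen it: Forward-Forward replaces backprop with two forward passes, training each layer locally so that a "goodness" score (Hinton uses the sum of squared activations) is high on positive data and low on negative data. A minimal pure-Python sketch of the goodness score and the layer-local objective — illustrative only; the repos linked above are real PyTorch/Keras implementations, and the threshold value here is arbitrary:

```python
import math

def goodness(activations):
    # Forward-Forward "goodness": sum of squared activations in a layer.
    return sum(a * a for a in activations)

def p_positive(activations, theta=2.0):
    # Probability the input is "positive" (real) data: sigmoid(goodness - theta).
    return 1.0 / (1.0 + math.exp(-(goodness(activations) - theta)))

def layer_loss(pos_acts, neg_acts, theta=2.0):
    # Layer-local objective: push goodness above theta for positive examples
    # and below theta for negative examples -- no gradients flow between layers.
    return (-math.log(p_positive(pos_acts, theta))
            - math.log(1.0 - p_positive(neg_acts, theta)))

pos = [1.5, 2.0]   # activations on a positive (real) example
neg = [0.1, 0.2]   # activations on a negative (corrupted) example
print(goodness(pos))  # 6.25
# Loss is lower when the goodness ordering is correct:
print(layer_loss(pos, neg) < layer_loss(neg, pos))  # True
```

Each layer optimizes this loss on its own activations, which is what makes the method interesting for hardware that can't do a global backward pass.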
Best model which fits onto 80 GB gpu | 2 | I’d like to run a bunch of instruction style prompts through a model on an 80 GB A100. I care about response quality and throughput equally.
What are some good models to try out for this? Right now I'm running 70B Llama 2 Chat and getting good responses, but it's too large to fit on a single A100, so I need to do model parallelism with vLLM across two A100s. I'm going to attempt to use the AWQ-quantized version, but I'm not sure how much that will dumb down the model.
Is this a decent route? Any better ideas? | 2023-10-22T19:41:10 | https://www.reddit.com/r/LocalLLaMA/comments/17e1ezo/best_model_which_fits_onto_80_gb_gpu/ | selfdrivingperson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e1ezo | false | null | t3_17e1ezo | /r/LocalLLaMA/comments/17e1ezo/best_model_which_fits_onto_80_gb_gpu/ | false | false | self | 2 | null |
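A rough back-of-the-envelope for why AWQ should make single-A100 serving plausible — illustrative arithmetic only, ignoring KV cache, group-size overhead, and vLLM's memory reservation, which all eat into the remaining headroom:

```python
def weight_memory_gb(n_params_billion, bits_per_weight):
    # Weight storage only: params * bits / 8 bytes, expressed in GB.
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

fp16 = weight_memory_gb(70, 16)  # 140.0 GB -> needs two A100-80GB cards
awq4 = weight_memory_gb(70, 4)   # 35.0 GB  -> leaves room for KV cache on one card
print(fp16, awq4)
```

So the weights alone drop from ~140 GB to ~35 GB at 4-bit, which is why a quantized 70B on one 80 GB card is a reasonable route to try.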
llama.cpp style embeddable RAG library or vector DB? | 10 | llama.cpp is incredible because it does quick inference, and also because it's easy to embed as a library or by using the example binaries.
Is there a RAG solution that's similar to that I can embed in my app? Or at a lower level, what embeddable vector DB is good? The data I'm thinking about is pretty small at this point but it would be cool if it could scale up to like a whole 1TB filesystem.
For context, this is to go in my open-source LLM mac app, [FreeChat](https://github.com/psugihara/FreeChat). The current architecture consists of embedding the server example in the app and running it on localhost for the app to hit from a SwiftUI frontend. No conversation data ever leaves your machine (so SaaS solutions are a no go).
Thanks in advance, /r/localllama geniuses. | 2023-10-22T19:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/17e1bat/llamacpp_style_embeddable_rag_library_or_vector_db/ | sleeper-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e1bat | false | null | t3_17e1bat | /r/LocalLLaMA/comments/17e1bat/llamacpp_style_embeddable_rag_library_or_vector_db/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'djYHSi9DcRFgVYz7-DCWauTYX-pmTpEhp97TzGgR-Uc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?width=108&crop=smart&auto=webp&s=8517c2c350177d110a0a375dae493d53289c53bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?width=216&crop=smart&auto=webp&s=909ffa002eb98c0e76f7013bedfa81464eaee04e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?width=320&crop=smart&auto=webp&s=f018de142510633b887538b08ffe302484b063d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?width=640&crop=smart&auto=webp&s=e3c18e00f0fbecf9fb1d18622513af175b72f63e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?width=960&crop=smart&auto=webp&s=1af423a1faa5f2571295f5410dfdc67df99529ad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?width=1080&crop=smart&auto=webp&s=5667bb60ace40a2127442386f9f3886bfa70e1cc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zfBU6XyblugxNqr_h3H7Kf4koUUWgBNfNHhybd7SsEo.jpg?auto=webp&s=352f9c5834aa33b316bab0d0a45fb57d33848447', 'width': 1200}, 'variants': {}}]} |
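In case it helps anyone sketching the same thing: for small corpora, the core of an embeddable "vector DB" is just brute-force cosine similarity over stored embedding vectors, which needs no server at all. A pure-Python sketch — the embedding vectors themselves would come from whatever model you run (e.g. llama.cpp's embedding support), and all names here are made up:

```python
import math

class TinyVectorStore:
    """Naive in-process vector store: brute-force cosine similarity.
    Fine for small data; swap in an ANN index (HNSW etc.) to scale toward
    filesystem-sized corpora."""

    def __init__(self):
        self.items = []  # list of (text, embedding_vector) pairs

    def add(self, text, vector):
        self.items.append((text, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def search(self, query_vector, k=3):
        # Score every stored item and return the k best texts.
        scored = [(self._cosine(query_vector, v), t) for t, v in self.items]
        scored.sort(reverse=True)
        return [t for _, t in scored[:k]]

store = TinyVectorStore()
store.add("apples are red", [1.0, 0.0])
store.add("the sky is blue", [0.0, 1.0])
print(store.search([0.9, 0.1], k=1))  # ['apples are red']
```

Something equivalent ports directly to Swift for an app like FreeChat, and it keeps all data on-device, which sidesteps the SaaS problem entirely.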
AgentTuning: Enabling Generalized Agent Abilities for LLMs - Tsinghua University 2023 - Agent-tuned open model comparable to GPT-3.5-Turbo on unseen agent tasks! | 23 | Paper: [https://arxiv.org/abs/2310.12823](https://arxiv.org/abs/2310.12823)
Github: [https://github.com/THUDM/AgentTuning](https://github.com/THUDM/AgentTuning)
Model: [https://huggingface.co/THUDM/agentlm-70b](https://huggingface.co/THUDM/agentlm-70b)
Abstract:
>Open large language models (LLMs) with great performance in various tasks have significantly advanced the development of LLMs. However, they are far inferior to commercial models such as ChatGPT and GPT-4 when acting as agents to tackle complex tasks in the real world. These **agent tasks employ LLMs as the central controller responsible for planning, memorization, and tool utilization, necessitating both fine-grained prompting methods and robust LLMs to achieve satisfactory performance.** Though many prompting methods have been proposed to complete particular agent tasks, there is lack of research focusing on improving the agent capabilities of LLMs themselves without compromising their general abilities. In this work, we present AgentTuning, a simple and general method to enhance the agent abilities of LLMs while maintaining their general LLM capabilities. We construct AgentInstruct, a lightweight instruction-tuning dataset containing high quality interaction trajectories. We employ a hybrid instruction-tuning strategy by combining AgentInstruct with open-source instructions from general domains. **AgentTuning is used to instruction-tune the Llama 2 series, resulting in AgentLM.** Our evaluations show that AgentTuning enables LLMs' agent capabilities without compromising general abilities. **The AgentLM-70B is comparable to GPT-3.5-turbo on unseen agent tasks, demonstrating generalized agent capabilities.** We open source the AgentInstruct and AgentLM-7B, 13B, and 70B models at [this https URL](https://github.com/THUDM/AgentTuning) , serving open and powerful alternatives to commercial LLMs for agent tasks.
https://preview.redd.it/t0el27s02tvb1.jpg?width=1181&format=pjpg&auto=webp&s=12b59ae9f14bd657daab7e44d377d9425a0cb66f
https://preview.redd.it/ee8v78s02tvb1.jpg?width=761&format=pjpg&auto=webp&s=6e55dca6bdb182449be4d7428d0eaec517bf8d0c
https://preview.redd.it/f2keg7s02tvb1.jpg?width=1348&format=pjpg&auto=webp&s=8e6b9ec05fd8667bf1cd6ae806835f01fdd08e96 | 2023-10-22T19:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/17e142w/agenttuning_enabling_generalized_agent_abilities/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e142w | false | null | t3_17e142w | /r/LocalLLaMA/comments/17e142w/agenttuning_enabling_generalized_agent_abilities/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | |
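The "hybrid instruction-tuning strategy" in the abstract boils down to sampling each training example from either the agent-trajectory dataset or a general-domain instruction dataset according to a mixing ratio. A hedged sketch of that sampling step — the ratio name `eta` follows the paper's notation, but everything else (data shapes, seeding) is illustrative:

```python
import random

def mixed_batch(agent_data, general_data, eta, batch_size, seed=0):
    # With probability eta draw an agent-trajectory example,
    # otherwise draw a general-domain instruction example.
    rng = random.Random(seed)
    batch = []
    for _ in range(batch_size):
        pool = agent_data if rng.random() < eta else general_data
        batch.append(rng.choice(pool))
    return batch

agent = ["agent_traj_1", "agent_traj_2"]
general = ["general_inst_1", "general_inst_2"]
batch = mixed_batch(agent, general, eta=0.2, batch_size=10)
print(sum(x.startswith("agent") for x in batch))  # roughly 2 of 10 on average
```

The point of keeping `eta` well below 1 is exactly the paper's claim: the model gains agent abilities without forgetting its general instruction-following.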
Using a pre-trained model to extract data from spreadsheets | 1 | Hello. I am working on a project to extract data from financial spreadsheets. There is no fixed format to these spreadsheets, but the information contained in them is fairly consistent. These are financial models created by investment bankers (sigh), and the goal is to extract some pre-defined variables (across a time period) and use that in a dataframe, or even better store them in some database. I was looking to train a model, but I'm not sure whether 30-40 such spreadsheets would be enough or if I need a larger corpus. Does anyone have experience with such a task? | 2023-10-22T19:24:44 | https://www.reddit.com/r/LocalLLaMA/comments/17e11xs/using_a_pretrained_model_to_extract_data_from/ | lapras007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e11xs | false | null | t3_17e11xs | /r/LocalLLaMA/comments/17e11xs/using_a_pretrained_model_to_extract_data_from/ | false | false | self | 1 | null |
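Before reaching for model training: "no fixed format but consistent variables" is often tractable with fuzzy label matching over the sheet grid — find the cell whose text best matches each target variable name, then read the numbers in that row. A stdlib-only sketch (the cell layout, variable names, and cutoff are made up for illustration; real sheets would be loaded with something like openpyxl):

```python
import difflib

def find_variable_row(rows, target, cutoff=0.5):
    """rows: list of lists of cell values, as read from a sheet.
    Returns (row_index, numeric values to the right of the best-matching
    label), or None if no cell matches well enough."""
    best = (0.0, None)
    for i, row in enumerate(rows):
        for j, cell in enumerate(row):
            if not isinstance(cell, str):
                continue
            score = difflib.SequenceMatcher(None, cell.lower(), target.lower()).ratio()
            if score > best[0]:
                best = (score, (i, j))
    if best[1] is None or best[0] < cutoff:
        return None
    i, j = best[1]
    values = [c for c in rows[i][j + 1:] if isinstance(c, (int, float))]
    return i, values

sheet = [
    ["Company X Model", "", ""],
    ["Rev.", 100, 120],          # bankers rarely label things consistently
    ["EBITDA margin", 0.31, 0.33],
]
print(find_variable_row(sheet, "Revenue"))  # (1, [100, 120])
```

With only 30-40 workbooks, a heuristic like this plus per-template fixes may outperform a trained model; a model becomes worth it when label variation is too wild for string similarity.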