title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How comparable is Exl2 4 bpw to 4-bit GPTQ? | 5 | I'm having some trouble getting good results with an Exl2 4 bpw model. Is it simply worse than GPTQ 4 bit or maybe there's something wrong I'm doing? | 2023-11-21T22:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/180svg8/how_comparable_is_exl2_4_bpw_to_4bit_gptq/ | Ok_Shape3437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180svg8 | false | null | t3_180svg8 | /r/LocalLLaMA/comments/180svg8/how_comparable_is_exl2_4_bpw_to_4bit_gptq/ | false | false | self | 5 | null |
Training Resources - Where To Start | 3 | Hi,
I'm looking to create an AI assistant using Llama. Just teaching myself, I'm familiar with Python.
I have experience developing with other LLMs but unsure where to start with LLaMA.
- How do you upload your own data source
- Adding the instructions. Is it similar to GPT models in that it creates a thread ID?
- How do you control the key parameters such as token size, temperature, context, etc
- Is this the best open source model?
Any recommendations to learn?
Thanks | 2023-11-21T21:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/180sqf8/training_resources_where_to_start/ | Ok-Victory-2791 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180sqf8 | false | null | t3_180sqf8 | /r/LocalLLaMA/comments/180sqf8/training_resources_where_to_start/ | false | false | self | 3 | null |
How the OpenAI fiasco could bolster Meta and the ‘open AI’ movement | 94 | 2023-11-21T21:45:22 | https://techcrunch.com/2023/11/21/how-the-openai-fiasco-could-bolster-meta-and-the-open-ai-movement/ | emptyplate | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 180sfhe | false | null | t3_180sfhe | /r/LocalLLaMA/comments/180sfhe/how_the_openai_fiasco_could_bolster_meta_and_the/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'W0IkKZsgPW7s_l6buSiMNqDw9p2gDhrrIlzFZiisbJA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?width=108&crop=smart&auto=webp&s=3d741803b0a255dccb42ef5c46f53ff245eb6d12', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?width=216&crop=smart&auto=webp&s=57899d21643daea4a0a0e02a75a5ea4ce963b3ca', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?width=320&crop=smart&auto=webp&s=aae719bd06f86e936518a45d4d12c7569159e2e2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?width=640&crop=smart&auto=webp&s=f7f1363e4a1f37d274ffec3931038eb272367ad0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?width=960&crop=smart&auto=webp&s=4f9924edfa7c7a62e5ef50ac54db3dd10da5df47', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?width=1080&crop=smart&auto=webp&s=1cb04d0305cdc0eaecf1b79f7532884be134f2b5', 'width': 1080}], 'source': {'height': 676, 'url': 'https://external-preview.redd.it/MGllxnRvJD1xUooEqX4s6NMnaFFWq2lNneBW8hkKaVU.jpg?auto=webp&s=7559759ee23d95224e20b95bb5b3a2fcd70fc319', 'width': 1200}, 'variants': {}}]} | ||
Best llm model for training another model on dataset/best model to used trained data from data set? | 1 | Looking to train a particular model on a bunch of pdf's on gdscript coding language, wondering what the best method would be to do so, via all their official documentation and various tutorials and other documentation I've found to feed into it.
Cheers. | 2023-11-21T21:43:30 | https://www.reddit.com/r/LocalLLaMA/comments/180sdvm/best_llm_model_for_training_another_model_on/ | Maelstrom100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180sdvm | false | null | t3_180sdvm | /r/LocalLLaMA/comments/180sdvm/best_llm_model_for_training_another_model_on/ | false | false | self | 1 | null |
Awful quantisation outputs with V100 | 2 | Potentially a dumb question, but when using awq and bitsnbytes quantisation on zephyr and mistral in colab, the outputs for using A100 vs V100 differ dramatically.
Using the same code and the following prompt
device = "cuda"
model_id = "TheBloke/zephyr-7B-alpha-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encode_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encode_ids.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=100, temperature = 0.7, do_sample=True)
A100 (output cut off as I used 100 token generation bc I was testing latency)
Yes, I do have several mayonnaise recipes that you can try. Here are a few:
1. Classic Mayo:
- 1 large egg yolk
- 1 tablespoon Dijon mustard
- 2 tablespoons freshly squeezed lemon juice
- 1/4 teaspoon salt
- 1/4 teaspoon sugar
- 1 1/2 cups light olive oil or canola oil
Instructions:
In
V100 (unusable)
riv<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>
Anyone have any insights where the difference comes from? | 2023-11-21T21:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/180sdkh/awful_quantisation_outputs_with_v100/ | gpu_go_brrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180sdkh | false | null | t3_180sdkh | /r/LocalLLaMA/comments/180sdkh/awful_quantisation_outputs_with_v100/ | false | false | self | 2 | null |
LM Studio Having Issues Downloading Models | 2 | I've installed LM Studio on 2 different computers. A PC and the other a Mac M2. I don't seem to have issues on the PC but I can't seem to get LM Studio to download any models. It just fails instantly. File paths look good too.. Anyone else have this issue on Mac and if so, how did you resolve it? | 2023-11-21T21:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/180rl6s/lm_studio_having_issues_downloading_models/ | FloridaManIssues | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180rl6s | false | null | t3_180rl6s | /r/LocalLLaMA/comments/180rl6s/lm_studio_having_issues_downloading_models/ | false | false | default | 2 | null |
slow prompt eval time on 3060 normal or some bug ? | 2 | hello i am currently trying to set up a rag pipline and i noticed that as my prompts get longer and filled with context the tokens per second decrease drastically. from 10-20 ish down to 2 or less i am using llama.cpp running currently a 4q llama2 7b model on my 3060 laptop with 6gb of vram(using cuda).
I dont understand why this is happening and it makes the responsese painfully slow of course i expect the time it need to process a longer prompt to increase but why is it increasing the time it needs per token.
i would love to hear if this is normal and if no what i might do about it ?
here an exact example:
llm.complete("tell me what is a cat using the following context, here is a bit of context about cats: Female domestic cats can have kittens from spring to late autumn in temperate zones and throughout the year in equatorial regions, with litter sizes often ranging from two to five kittens. Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Animal population control of cats may be achieved by spaying and neutering, but their proliferation and the abandonment of pets has resulted in large numbers of feral cats worldwide, contributing to the extinction of bird, mammal and reptile species. ")
llama\_print\_timings: prompt eval time = 80447.41 ms / 153 tokens ( 525.80 ms per token, 1.90 tokens per second)
compared to
llm.complete("tell me what is a cat")
llama\_print\_timings: prompt eval time = 319.16 ms / 4 tokens ( 79.79 ms per token, 12.53 tokens per second)
| 2023-11-21T20:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/180r366/slow_prompt_eval_time_on_3060_normal_or_some_bug/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180r366 | false | null | t3_180r366 | /r/LocalLLaMA/comments/180r366/slow_prompt_eval_time_on_3060_normal_or_some_bug/ | false | false | self | 2 | null |
Claude 2.1 (200K Context Window) Benchmarks | 34 | [https://bito.ai/blog/claude-2-1-200k-context-window-benchmarks/](https://bito.ai/blog/claude-2-1-200k-context-window-benchmarks/)
https://preview.redd.it/t2b1axg1kr1c1.jpg?width=4096&format=pjpg&auto=webp&s=1d3b116d46e2937cad87d38c78deaef2d0e64c6c | 2023-11-21T20:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/180r1s5/claude_21_200k_context_window_benchmarks/ | trulyfurqan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180r1s5 | false | null | t3_180r1s5 | /r/LocalLLaMA/comments/180r1s5/claude_21_200k_context_window_benchmarks/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'h4YuTTFGra_FNcR23Vkip9tA8JQLE1iez79a1W0ZzQo', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?width=108&crop=smart&auto=webp&s=528857bc96a8354e50c510b3f67d753abb986724', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?width=216&crop=smart&auto=webp&s=aada2cd5500cb82b4ca030948097f74295fe16c8', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?width=320&crop=smart&auto=webp&s=85a2bab5c08a2a3d909afbf2d1b134b9e671002c', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?width=640&crop=smart&auto=webp&s=4bb6acd0019224325855ad7afa1c59aed7afe599', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?width=960&crop=smart&auto=webp&s=9b43fbebedf82032818cb0a408eac7e19774667f', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?width=1080&crop=smart&auto=webp&s=e7f58f981ed9d705d122b5b8679f2ef49976f2dd', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/-83frLc_ROPbA1DgP2F-DcPIkc2tdqzE9tsfbzyOeMM.jpg?auto=webp&s=dcfba8457ef830ccef76ae59bd8cb67aa5a3a480', 'width': 1200}, 'variants': {}}]} | |
Can't handle efficiently RAG with large PDF | 4 | Hi,
I've issues when Q&A my PDF:
* \- I don't have consistency
* \- Tried many models (embedding/llm)
* \- Tried many methods
* \- I have 20% hallucination, especialy with "president" and "mickael jackson"
ex1 (this one is correct):
>\> question: How much net income for Amazon in 2020, 2021 and 2022 ?
>
>\> answer: In 2020, Amazon made a net income of $21.331 billion, in 2021 $33.364 billion and in 2022 the company made a loss of $2.722 billion.
ex2: (this one is incorrect):
>\> question: How much operating expenses for AWS in 2020, 2021 and 2022 ?
>
>\> answer: The operating expenses of AWS in 2020, 2021, and 2022 were $444.943 billion, $501.735 billion, and $567.608 billion respectively.
>
>It always calculate the entire expenses of the company, i try GPT-4 and it is capable.
\- PDF: Amazon 2022 annual report 10K (88 pages)
\- Embedding: all-MiniLM-L12-v2
\- Text splitter: Chunk\_size = 1000, overlap = 20
\- VectorDB: Chroma
\- LLM: SciPhi-Self-RAG-Mistral-7B-32k-8.0bpw-h6-exl2 via Oobabooga (OpenAI extension) with 0.2temp, alpaca instruction template.
\- Langchain: RetrievalQA, chain\_type = stuff, retriever = vectDB.as\_retriever()
\- RTX 3090
If anyone resolve this issue, please can you help me :) | 2023-11-21T20:39:41 | https://www.reddit.com/r/LocalLLaMA/comments/180qtw3/cant_handle_efficiently_rag_with_large_pdf/ | Temporary-Size7310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180qtw3 | false | null | t3_180qtw3 | /r/LocalLLaMA/comments/180qtw3/cant_handle_efficiently_rag_with_large_pdf/ | false | false | self | 4 | null |
How to connect local LLM to Whatsapp? | 1 | I want to chat to my local LLM in oobabooga via WhatsApp. Is it possible?
If it's possible, how can it handle different chat from different number at the same time? | 2023-11-21T20:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/180pwrr/how_to_connect_local_llm_to_whatsapp/ | Background_Aspect_36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180pwrr | false | null | t3_180pwrr | /r/LocalLLaMA/comments/180pwrr/how_to_connect_local_llm_to_whatsapp/ | false | false | self | 1 | null |
Stable Diffusion - Video - New models ! | 8 | (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from an image conditioning. This model was trained to generate 25 frames at resolution 576x1024 given a context frame of the same size, finetuned from [**SVD Image-to-Video \[14 frames\]**](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid). We also finetune the widely used [**f8-decoder**](https://huggingface.co/docs/diffusers/api/models/autoencoderkl#loading-from-the-original-format) for temporal consistency. For convenience, we additionally provide the model with the standard frame-wise decoder [**here**](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt_image_decoder.safetensors).
​
# Stable Video Diffusion Image-to-Video Model Card
**📷**
​
[https://stability.ai/news/stable-video-diffusion-open-ai-video-model](https://stability.ai/news/stable-video-diffusion-open-ai-video-model) | 2023-11-21T19:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/180p69h/stable_diffusion_video_new_models/ | super-helper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180p69h | false | null | t3_180p69h | /r/LocalLLaMA/comments/180p69h/stable_diffusion_video_new_models/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '52e1J2ilskOvaFeNER4QVvMd53oeS889tUP33dW2gAY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?width=108&crop=smart&auto=webp&s=ca354e0fdabc28364f9cf04ee17521d911aefe74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?width=216&crop=smart&auto=webp&s=d0388b14c564a9b3ed4cd88d3fb9fc22b554db7a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?width=320&crop=smart&auto=webp&s=1c2a08247b30ee70398beee6d2bcba00b5881ca8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?width=640&crop=smart&auto=webp&s=9d83a33ba4d8e5f95a7236bd6ad6203cef583d22', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?width=960&crop=smart&auto=webp&s=f41e9c95034998f2493c6bb78062cdaae16df143', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?width=1080&crop=smart&auto=webp&s=9a075e130774c1330cee8a94e45a13029f21745c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5zqsVCBQyxFCwHCoEKNp3JoAslR8468nS_1v_hrdwEo.jpg?auto=webp&s=757ef88f5293524e61d0e954967913eba2eff017', 'width': 1200}, 'variants': {}}]} |
how to speed up coqui xtts inference | 3 | Using the official streaming server from coqui for the xtts model, we are running into slow inference speed on powerful gpus
[https://github.com/coqui-ai/xtts-streaming-server/tree/main](https://github.com/coqui-ai/xtts-streaming-server/tree/main)
for example, on a h100 \~7s
python ./test_streaming.py --server_url=http://
ffplay version 6.0 Copyright (c) 2003-2023 the FFmpeg developers
[...]
Time to make POST: 7.9753321669995785s vq= 0KB sq= 0B f=0/0
Time to first chunk: 8.231152084015775svq= 0KB sq= 0B f=0/0
Input #0, wav, from 'fd:':0 aq= 0KB vq= 0KB sq= 0B f=0/0
Duration: N/A, bitrate: 384 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 24000 Hz, 1 channels, s16, 384 kb/s
⏱️ response.elapsed: 0:00:07.95008604KB vq= 0KB sq= 0B f=0/0
[wav @ 0x12d708910] Packet corrupt (stream = 0, dts = NOPTS).=0/0
5.35 M-A: 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
and roughly the same on an a100
python ./test_streaming.py --server_url=https://x-8888.proxy.runpod.net
ffplay version 6.0 Copyright (c) 2003-2023 the FFmpeg developers
[...]
Time to make POST: 7.914652333012782sB vq= 0KB sq= 0B f=0/0
Time to first chunk: 7.914910833002068s
Input #0, wav, from 'fd:':0 aq= 0KB vq= 0KB sq= 0B f=0/0
Duration: N/A, bitrate: 384 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 24000 Hz, 1 channels, s16, 384 kb/s
⏱️ response.elapsed: 0:00:07.88702304KB vq= 0KB sq= 0B f=0/0
[wav @ 0x11d606170] Packet corrupt (stream = 0, dts = NOPTS).=0/0
5.42 M-A: -0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
any experience setting up coquir for fast inference? thx | 2023-11-21T19:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/180p4fy/how_to_speed_up_coqui_xtts_inference/ | Goericke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180p4fy | false | null | t3_180p4fy | /r/LocalLLaMA/comments/180p4fy/how_to_speed_up_coqui_xtts_inference/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Ec_WPkf3kOjAyy_QzhljLbP9I8XJBbGGcm3xZfKQN0c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?width=108&crop=smart&auto=webp&s=446c0c446147ae3e5f56e61c1470d4a80d788674', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?width=216&crop=smart&auto=webp&s=9ee18dd102392352d33186300acd8aaaf2bbd7cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?width=320&crop=smart&auto=webp&s=ff074005637aec48f17f84843e1989430aec1377', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?width=640&crop=smart&auto=webp&s=015f10a4aadd9063f35dfdbc3d7698c69a9f37ad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?width=960&crop=smart&auto=webp&s=37a3e2cfd9a4553a3f165361a891a0d91fd6d0d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?width=1080&crop=smart&auto=webp&s=914af49927b55f10b26d970a5f7e0fd3fbe2f3c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C1RDGzCFQa5ZkRbQ4vvpFiB6ahDHi9wOSYIHZynExXU.jpg?auto=webp&s=46002ca446020018264e2e24ac3821fd10609dd5', 'width': 1200}, 'variants': {}}]} |
New Claude 2.1 Refuses to kill a Python process :) | 699 | 2023-11-21T19:23:17 | yiyecek | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 180p17f | false | null | t3_180p17f | /r/LocalLLaMA/comments/180p17f/new_claude_21_refuses_to_kill_a_python_process/ | false | false | 699 | {'enabled': True, 'images': [{'id': 'vmehhN31Edi5xAyXH1cZS-swv1q4t-mqNwsqQnkTw_k', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?width=108&crop=smart&auto=webp&s=d684f148c23986087efceb816606802f130a0557', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?width=216&crop=smart&auto=webp&s=48f0f1e22909ca430c18a6781403840c88b1649b', 'width': 216}, {'height': 92, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?width=320&crop=smart&auto=webp&s=08c231d04153c4c7fbb9cdc3b8d30142faa4f846', 'width': 320}, {'height': 184, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?width=640&crop=smart&auto=webp&s=329bb8b0ad8fa1b4eabf7017ded85662f34c7d19', 'width': 640}, {'height': 277, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?width=960&crop=smart&auto=webp&s=a6224171114a49406dcc088e9952f97c6b949869', 'width': 960}, {'height': 311, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?width=1080&crop=smart&auto=webp&s=52411690d2675bcfd0511eeb5dbf554f83afb6ae', 'width': 1080}], 'source': {'height': 454, 'url': 'https://preview.redd.it/w4flnloi4r1c1.png?auto=webp&s=a4598815f23529aafa64cc5ecce6d5cc03bfea54', 'width': 1572}, 'variants': {}}]} | |||
LLMS for Invoice Processing | 4 | Hi Gang!
My business use case is pretty simple and wanted some help from you guys! I am doing some kind of invoice modelling where my X data have some raw text which is being extracted from a real invoice using OCR, now the Y data contains JSON objects with some key value pairs, keys mostly being headers and some work tags which are being finalized by considering the X data. So I wanted to train a sequence to sequence model which can learn and understand the relevance between the OCR extracted raw texts and my Y data.
I started with LSTM for the reference but it doesn't seems to be very promising for now, any kind of help will be highly appreciated! | 2023-11-21T18:52:47 | https://www.reddit.com/r/LocalLLaMA/comments/180oaxy/llms_for_invoice_processing/ | Pinaka-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180oaxy | false | null | t3_180oaxy | /r/LocalLLaMA/comments/180oaxy/llms_for_invoice_processing/ | false | false | self | 4 | null |
Discrepancy between TheBloke_Orca-2-13B-GPTQ and the original one with the tested logic question | 11 | 2023-11-21T18:44:03 | Longjumping-Bake-557 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 180o3ga | false | null | t3_180o3ga | /r/LocalLLaMA/comments/180o3ga/discrepancy_between_thebloke_orca213bgptq_and_the/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'qjxMeCB4LWJsKlqmLmvDI-8N3393rO154SxAUatOHPc', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/5c06am7ewq1c1.jpg?width=108&crop=smart&auto=webp&s=8ffb4c18c5bcf7b1c20f09c898a85143d565daec', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/5c06am7ewq1c1.jpg?width=216&crop=smart&auto=webp&s=f044be68598fb73ecb878cdab190c940955f10d5', 'width': 216}, {'height': 421, 'url': 'https://preview.redd.it/5c06am7ewq1c1.jpg?width=320&crop=smart&auto=webp&s=7b910a895fe8e0cc4aa52014b7735a3f1d727d7b', 'width': 320}, {'height': 843, 'url': 'https://preview.redd.it/5c06am7ewq1c1.jpg?width=640&crop=smart&auto=webp&s=a7a4cc1a6b7ec62a7a353bedaac62dfcf70174e6', 'width': 640}], 'source': {'height': 1054, 'url': 'https://preview.redd.it/5c06am7ewq1c1.jpg?auto=webp&s=0ba8fd5539a9ae8884bf27ba87a470fecac1d280', 'width': 800}, 'variants': {}}]} | |||
Chain of thought really helps :P | 138 | 2023-11-21T18:33:53 | MoffKalast | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 180nv7g | false | null | t3_180nv7g | /r/LocalLLaMA/comments/180nv7g/chain_of_thought_really_helps_p/ | false | false | 138 | {'enabled': True, 'images': [{'id': 'vHZwToEVudADF75LvcvzfCzqA9FhQ8JSlP3TFj9ubGY', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?width=108&crop=smart&auto=webp&s=cde26108c73526217d2bc8a459bb6ca174a34df2', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?width=216&crop=smart&auto=webp&s=fd2f9e39e2681bc5d06d6b7f4224b54d1b2a339e', 'width': 216}, {'height': 392, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?width=320&crop=smart&auto=webp&s=741b102cf86c9f397c4a3f5f6e532053cdf6d622', 'width': 320}, {'height': 785, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?width=640&crop=smart&auto=webp&s=9a533cdbad83a269de5fe8d0e1e29ae51162744f', 'width': 640}, {'height': 1177, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?width=960&crop=smart&auto=webp&s=5ba190e3f5aed98b3ef8da791835545770737f22', 'width': 960}, {'height': 1324, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?width=1080&crop=smart&auto=webp&s=dbee964a53f99df1daa98d4c3b252470427ba1ae', 'width': 1080}], 'source': {'height': 1391, 'url': 'https://preview.redd.it/xy387tmwvq1c1.png?auto=webp&s=9431079bd973c43791e0377e71e54d570bcc45ea', 'width': 1134}, 'variants': {}}]} | |||
Any alternatives to couqi for TTS? | 23 | Hey guys,
So TLDR is elevenlabs / play.ht is WAY too expensive for a realtime chat app, and we need an alternative. Guessing this is why [character](https://blog.character.ai/new-feature-announcement-character-voice/) is rolling their own voice model, & obviously most apps can't do that, so what are the alternatives here?
I've read zero shot prompting for TTS (inserting a sample at runtime) is part of the reason elevenlabs / play is so expensive, wheras finetuning on individual voices like character / OAI did and hosting those as their own model would be way faster and cheaper.
But couqi seems really slow from our finetune testing, and not only that but it's not really... good. Does anyone know why, or there alternatives that chat apps are using? Is anyone working on better open source TTS? This seems totally overlooked compared to text where there's so much competition right now, but is almost just as important. Thanks | 2023-11-21T18:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/180njji/any_alternatives_to_couqi_for_tts/ | enterguild | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180njji | false | null | t3_180njji | /r/LocalLLaMA/comments/180njji/any_alternatives_to_couqi_for_tts/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': '7R1mJtxAw3yFa8KIeLgYH7zUT0ZE7ymKLrQs_JMaeJk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?width=108&crop=smart&auto=webp&s=200193134e744d0f2562b6d9dfa2a015fb754e7c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?width=216&crop=smart&auto=webp&s=cfdd74c3e118df1a1a5ab000b3894a28bbd5e0c8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?width=320&crop=smart&auto=webp&s=6f6fce3951d856fa6be4c971243a842e5847dfe4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?width=640&crop=smart&auto=webp&s=39facae8681a79536f72b23697a925cdf8517c6e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?width=960&crop=smart&auto=webp&s=e397593818c5e132c1bd0f82a864fc547892ecb7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?width=1080&crop=smart&auto=webp&s=8b3416bb3cda7ceed4ab9f745159a74e67be7d96', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/8V18eVJvN4UoVjL1mbzYEfKvftONBjqodHLNCLDGWzo.jpg?auto=webp&s=9ddcaf24a00a53522335602cc1c11928f3816f6d', 'width': 2000}, 'variants': {}}]} |
What LLM does yodayo use for the Tavern? Asking for a recommendation of roleplay opensource LLM. | 1 | What did yodayo use as the base llm for the roleplay in their tavern??? I just tried it and to be honest I'm very impressed...
I'm working with Ai for almost a year, works most of the time with big llms like 70b+ and GPT4. And appreciate their quality for my purposes, but they are not even close in role play. I understand that good prompting helps, but my guess that fine-tuning is the key for a really good roleplay.
If it's unknown what model was used there, then please recommend similar or even better quality fine-tuned models for roleplay. Asking for friend ;) | 2023-11-21T18:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/180ndjj/what_llm_does_yodayo_use_for_the_tavern_asking/ | SeaworthinessLow4382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180ndjj | false | null | t3_180ndjj | /r/LocalLLaMA/comments/180ndjj/what_llm_does_yodayo_use_for_the_tavern_asking/ | false | false | self | 1 | null |
Why isn't anyone building an Oogabooga-like app for Android and iPhone? | 2 |
With high-end Android phones now packing upwards of 24GB of RAM, I think there's huge potential for an app like this. It would be amazing to have something as powerful as the future Mistral 13B model running natively on smartphones!
With high-end Android phones now packing upwards of 24GB of RAM, I think there's huge potential for an app like this. It would be amazing to have something as powerful as future Mistral 13B model running natively on smartphones!
You could interact with it privately without an internet connection. The convenience and capabilities would be incredible.
​
​ | 2023-11-21T18:09:11 | https://www.reddit.com/r/LocalLLaMA/comments/180nar9/why_isnt_anyone_building_an_oogaboogalike_app_for/ | Winter_Tension5432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180nar9 | false | null | t3_180nar9 | /r/LocalLLaMA/comments/180nar9/why_isnt_anyone_building_an_oogaboogalike_app_for/ | false | false | self | 2 | null |
is it possible to fine-tune coqui/XTTS-v2 as a single speaker model? | 15 | Wondering if it's possible to fine-tune coqui/XTTS-v2 as a single speaker model
In the project's readme: [https://huggingface.co/coqui/XTTS-v2](https://huggingface.co/coqui/XTTS-v2) there are only examples on how to inference using a `speaker_wav` for voice cloning at runtime
In a community discussion on the repo i asked about it also, and got feedback that we could cache the latents of the speaker audio, to speed-up
[https://huggingface.co/coqui/XTTS-v2/discussions/9](https://huggingface.co/coqui/XTTS-v2/discussions/9)
But for maximal performance i am thinking more about having a fine-tune on a single speaker voice and then don't have to deal with cloning at runtime at all | 2023-11-21T17:46:03 | https://www.reddit.com/r/LocalLLaMA/comments/180mrv7/is_it_possible_to_finetune_coquixttsv2_as_a/ | Goericke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180mrv7 | false | null | t3_180mrv7 | /r/LocalLLaMA/comments/180mrv7/is_it_possible_to_finetune_coquixttsv2_as_a/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'P9eOUf_Bw60a1zarPbq1EZBNK1pKMCzz7JP-Uq3xW4U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?width=108&crop=smart&auto=webp&s=c66b57b68647f23b66b6b2376826e4737e279fd5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?width=216&crop=smart&auto=webp&s=97ea4488fa7a376150e5e92a81287f75fd5572ef', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?width=320&crop=smart&auto=webp&s=a0b83ce08b5eec6bdfddfd69171316e24f54f7ce', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?width=640&crop=smart&auto=webp&s=e3f660370190717e2881d72a8d9f209f2dc3d497', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?width=960&crop=smart&auto=webp&s=6d54ca08b725cc06630a5ee25e24e7d4dd6519f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?width=1080&crop=smart&auto=webp&s=44e625515fd4ca20690ede2c90393768b021b0cc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pC-cT8jvBtvVDnS6NM_qqNi-2_SMtxVLjL2j3woYL8k.jpg?auto=webp&s=bf864d793670014ba9bfcd233d1dddd5bdb8f0df', 'width': 1200}, 'variants': {}}]} |
ExLlamaV2: The Fastest Library to Run LLMs | 174 | Is this accurate? | 2023-11-21T17:45:17 | https://towardsdatascience.com/exllamav2-the-fastest-library-to-run-llms-32aeda294d26 | alchemist1e9 | towardsdatascience.com | 1970-01-01T00:00:00 | 0 | {} | 180mr6s | false | null | t3_180mr6s | /r/LocalLLaMA/comments/180mr6s/exllamav2_the_fastest_library_to_run_llms/ | false | false | 174 | {'enabled': False, 'images': [{'id': 'LqsEY1veCoSbvOK7pkWKyzm4IpHnCOwKgutQRsG6h2Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?width=108&crop=smart&auto=webp&s=d52eef4f79717215911efb859520998d15113f6c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?width=216&crop=smart&auto=webp&s=2141fb96d0eb1f6f40c5c3a6bf02808a43c06cb3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?width=320&crop=smart&auto=webp&s=a7add552cf4f28af4afd1f3cfc90ffeef6cb7940', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?width=640&crop=smart&auto=webp&s=79b9f1dc1b9cd9e6ecdcaa6a1a09f095b30b3b29', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?width=960&crop=smart&auto=webp&s=57372be61da5dd2968ce6afb2fae0a1c842f3618', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?width=1080&crop=smart&auto=webp&s=7df6e71dfd75bafb67a13b70fa022564d9c7bdb8', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/_3bDO2qvrX-pgwm1gI2QJltqn7hy_LNiwN3kRsEdE_g.jpg?auto=webp&s=f919f872aa7902e2a750ea5caf9c88c92181759b', 'width': 1200}, 'variants': {}}]} | |
Llama cpp finetune with GPU | 6 | I noticed that finally Llama cpp added -ngl option to finetune command. However theres a comment from AndrewGodfrey
https://github.com/ggerganov/llama.cpp/issues/3458#issuecomment-1809422256
mentioning that gpu offload wouldnt help speed up at all. I tried this on my M2 Mac and it does seem to be the case.
I'm relatively new to finetuning and Im wondering whether this is just a current limitation or its not possible at all to use GPU on Apple Silicon to finetune model with Llama cpp?
Apart from using Llama cpp is there any alternative route to finetune LLM model on Apple Silicon? (I know my M2 Mac wont do but just want to know) | 2023-11-21T17:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/180mpt6/llama_cpp_finetune_with_gpu/ | touchaponk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180mpt6 | false | null | t3_180mpt6 | /r/LocalLLaMA/comments/180mpt6/llama_cpp_finetune_with_gpu/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'UAi_s6eeDUZ17ieXOtZiPKbo6gm_K9u-ZBij0FSQG1c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?width=108&crop=smart&auto=webp&s=b5868c6450fc0082887698e3ca5f44068cc267c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?width=216&crop=smart&auto=webp&s=2c9413e2e8fa0453680cd840846595ae6082039f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?width=320&crop=smart&auto=webp&s=cedf2387578f888d9549433a497905f1a79faa98', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?width=640&crop=smart&auto=webp&s=72f3ad588c2b4b04cde0a1c6daa866fb444bffec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?width=960&crop=smart&auto=webp&s=777aa503b2acdb3aa69ba57db37651658502b668', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?width=1080&crop=smart&auto=webp&s=e82976faa8b61e35c5cb4146de1d8e79471fb94f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MljRMgtWx7S8whbdOBR1TuuaH1heD6MxfQP9EpC5pqQ.jpg?auto=webp&s=5d1aeed78835dced36bc79b91899e2560a5570d8', 'width': 1200}, 'variants': {}}]} |
Help: Ollama obsidian plugin | 1 | I'm new to Ollama and I'm trying to use it in obsidian and get a feel for how it works with post.
I have it running on my server in the network, so instead of localhost, I'm using the static Ip for the server.
but the connection is refused, how do I fix this or configure it to accept requests from other than localhost? | 2023-11-21T16:47:04 | https://www.reddit.com/r/LocalLLaMA/comments/180ldy4/help_ollama_obsidian_plugin/ | BlankCrystal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180ldy4 | false | null | t3_180ldy4 | /r/LocalLLaMA/comments/180ldy4/help_ollama_obsidian_plugin/ | false | false | self | 1 | null |
Need help with prompt for generating a fine-tuning dataset focused on JSON output | 2 | I want to create a fine-tuning dataset that I can use with several models through Axolotl (like Mistral, Llama 2, and Falcon) to improve the model's ability to extract requested information from a paragraph and output that in JSON format. I am using [lm-format-enforcer](https://github.com/noamgat/lm-format-enforcer) to force JSON output.
Here is an example of the type of prompt I have been trying so far:
`<s>[INST] <<SYS>>`
`You are a helpful, respectful and honest assistant.`
`<</SYS>>`
`Please give me information about this call log. If you cannot find the information you need, put N/A for that field. Any apostrophe must be escaped with a \ character. You MUST answer using the following json schema: {"properties":{"company_name":{"title":"Company Name","type":"string"},"country_or_countries":{"title":"Country Or Countries","type":"string"},"total_amount_due":{"title":"Total Amount Due","type":"integer"},"pending_task":{"title":"Pending Task","type":"boolean"}},"required":["company_name","country_or_countries","total_amount_due","pending_task"],"title":"AnswerFormat","type":"object"} Call log 2023-10-01 11:50:30 talked with Jim at Acme Construction. The job in Toronto is held up waiting for our sign off on the contract. We also need to put in a down payment of $10,000 plus the inspector fee of $750, and a license fee of $1500. The payment must be made by the end of November. I told him I'll call him back when it's done. [/INST]`
Expected output:
`{"company_name":{"title":"Company Name","value":"Acme Construction"},"country_or_countries":{"title":"Country Or Countries","value":"Canada"},"total_amount_due":{"title":"Total Amount Due","value":"12250"},"pending_task":{"title":"Pending Task","value":"TRUE"}}`
I'm looking for some tips from people with more experience in prompt engineering. Here are some of my main questions:
​
1) Is this format with SYS and INST a reasonable idea for formatting a fine-tuning dataset? Especially since I want to fine-tune different base models with this same training dataset, do I need to strip some of that formatting out of the fine-tuning dataset to keep it more "model format neutral" and then add the formatting back in somehow for each model? What's the best practice for fine-tuning dataset formatting in this regard?
2) Should I even include a SYS message at all? Instead of the generic "You are a...assistant", should I uses the SYS section to specify that I want the results in JSON format? Or should I just remove the SYS section?
3) The goal with the fine-tuning dataset is to have at least one thousand examples of prompts and outputs in JSON format, but I want to vary the type of text being analyzed and the JSON schema in the examples to help it generalize better. In other words, I won't always ask it to find the same information in the text and I won't always use the same type of text. Should I also vary the way I write up the initial part about if you can't find the info then write N/A and that sort of stuff?
Thanks for reading and if you need to clarify anything, to hesitate to ask. | 2023-11-21T16:27:53 | https://www.reddit.com/r/LocalLLaMA/comments/180kxmx/need_help_with_prompt_for_generating_a_finetuning/ | ResearchTLDR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180kxmx | false | null | t3_180kxmx | /r/LocalLLaMA/comments/180kxmx/need_help_with_prompt_for_generating_a_finetuning/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'CJHgDb65DIxm9-UlRjuBuWfYKJNLT-w8rn0M6DKr44Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?width=108&crop=smart&auto=webp&s=eb74bd4c82a553c415d38300021c94da43326e81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?width=216&crop=smart&auto=webp&s=cb9b30708c5006633e186d9eaac20fbb91e3ad3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?width=320&crop=smart&auto=webp&s=f1f3bd6e8b30812fc70c165bd622ad60be68de7c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?width=640&crop=smart&auto=webp&s=7a8285e12cad28936d995559b4d590c6ac23aa83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?width=960&crop=smart&auto=webp&s=d331146cfc0d536cd3d6e0d4dc99bc10e9ff2666', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?width=1080&crop=smart&auto=webp&s=70a37314b0d65dfa2e9e935fb68dffb4a5f66164', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4JH6j1uMPjLA_P-m5ZuYLxwXkxtawp57qgcL4xKta_A.jpg?auto=webp&s=fe18c2d65be820ed431454ed34e3fee7943201c7', 'width': 1200}, 'variants': {}}]} |
Gradio or streamlit for prototyping and why? | 3 | name pretty much says it all | 2023-11-21T16:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/180kvsx/gradio_or_streamlit_for_prototyping_and_why/ | llamasaresavager | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180kvsx | false | null | t3_180kvsx | /r/LocalLLaMA/comments/180kvsx/gradio_or_streamlit_for_prototyping_and_why/ | false | false | self | 3 | null |
How do I stop LLAMA-2 chat from presenting text before the answer? | 1 | [removed] | 2023-11-21T16:20:58 | https://www.reddit.com/r/LocalLLaMA/comments/180ks3f/how_do_i_stop_llama2_chat_from_presenting_text/ | carvalholuz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180ks3f | false | null | t3_180ks3f | /r/LocalLLaMA/comments/180ks3f/how_do_i_stop_llama2_chat_from_presenting_text/ | false | false | self | 1 | null |
Who Is Ilya Sutskever? The Openai Cofounder Who Helped Oust Ceo Sam Altman, Says He “Deeply Regrets” His Role And Threatens To Quit Unless Board Resigns | 6 | 2023-11-21T16:07:48 | https://news.google.com/articles/CBMiKmh0dHBzOi8vd3d3LmNlbGVic3dlZWsuY29tL2lseWEtc3V0c2tldmVyL9IBAA?hl=en-AU&gl=AU&ceid=AU%3Aen | SuggestedQuotes | news.google.com | 1970-01-01T00:00:00 | 0 | {} | 180khgq | false | null | t3_180khgq | /r/LocalLLaMA/comments/180khgq/who_is_ilya_sutskever_the_openai_cofounder_who/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'Bfjsq6ZmAcNstG3Fnib1iKt1b-LW3Q9ENLP43TRUDf8', 'resolutions': [{'height': 130, 'url': 'https://external-preview.redd.it/FkVkBTgdRFimePoqirTgW3D2bx-0-S4bVfNGSrACLSI.jpg?width=108&crop=smart&auto=webp&s=1a2a39bc596960e2165cfee5f30164adae241195', 'width': 108}, {'height': 261, 'url': 'https://external-preview.redd.it/FkVkBTgdRFimePoqirTgW3D2bx-0-S4bVfNGSrACLSI.jpg?width=216&crop=smart&auto=webp&s=449d46dfc020116c1fe2bac61b173dc6e2bffa2d', 'width': 216}, {'height': 387, 'url': 'https://external-preview.redd.it/FkVkBTgdRFimePoqirTgW3D2bx-0-S4bVfNGSrACLSI.jpg?width=320&crop=smart&auto=webp&s=f6bc13f3c14a2bfb48132a4b1e6f5d32f93e72ce', 'width': 320}], 'source': {'height': 666, 'url': 'https://external-preview.redd.it/FkVkBTgdRFimePoqirTgW3D2bx-0-S4bVfNGSrACLSI.jpg?auto=webp&s=12302697bd8671ca89fd9a63a6680ffc3daddf07', 'width': 550}, 'variants': {}}]} | ||
Has anybody successfully implemented web search/browsing for their local LLM? | 42 | GPT-4 surprisingly excels at Googling (Binging?) to retrieve up-to-date information about current issues. Tools like Perplexity.ai are impressive. Now that we have a highly capable smaller-scale model, I feel like not enough open-source research is being directed towards enabling local models to perform internet searches and retrieve online information.
Did you manage to add that functionality to your local setup, or know some good repo/resources to do so? | 2023-11-21T15:46:06 | https://www.reddit.com/r/LocalLLaMA/comments/180jz0x/has_anybody_successfully_implemented_web/ | azurisme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180jz0x | false | null | t3_180jz0x | /r/LocalLLaMA/comments/180jz0x/has_anybody_successfully_implemented_web/ | false | false | self | 42 | null |
I need help with gguf files naming signification | 1 | Hi everybody. I have installed GPT4all on my desktop computer and used it a bit with a model I downloaded. When downloading this model, I knew I was looking for a gguf file and the repository was easy to understand because it contained only one gguf file clearly named. Now I'm interested in trying other models and I'm browsing hugging face's catalogue. I'm interested in a model but I don't know what file to download because the repository offers several gguf files, all having the same name but different file name ends, like q3_k_m.gguf or q5_k_s.gguf.
What do these file name ends mean? How do I chose the right file? Thanks in advance for any kind help. | 2023-11-21T15:05:26 | https://www.reddit.com/r/LocalLLaMA/comments/180j1u5/i_need_help_with_gguf_files_naming_signification/ | closingloops | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180j1u5 | false | null | t3_180j1u5 | /r/LocalLLaMA/comments/180j1u5/i_need_help_with_gguf_files_naming_signification/ | false | false | self | 1 | null |
Should I use LoRa, RLHF or DPO? | 1 | I'm thinking of using Llama 2 to detect spam messages:
1) The model will first be fine tuned using LoRa/PEFT with some public dataset.
2) Then, when given a block of text, it will decide if it's spam and provide reasons for the user.
3) However, there can be false positives etc., so I figured a way to combat this would be to let the user tell the model if the response is correct or wrong (thumbs up/down).
Based on my requirements, is it better to use RLHF or DPO? Am I over complicating this, will fine tuning it based on user feedback work too? | 2023-11-21T14:56:04 | https://www.reddit.com/r/LocalLLaMA/comments/180itzn/should_i_use_lora_rlhf_or_dpo/ | hadal1337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180itzn | false | null | t3_180itzn | /r/LocalLLaMA/comments/180itzn/should_i_use_lora_rlhf_or_dpo/ | false | false | self | 1 | null |
Video-LLaVA can describe both image and video input. | 52 | 2023-11-21T14:48:42 | https://github.com/PKU-YuanGroup/Video-LLaVA | chibop1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 180io97 | false | null | t3_180io97 | /r/LocalLLaMA/comments/180io97/videollava_can_describe_both_image_and_video_input/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'mRMU8WtCemNScR5BHteo07dUT_SitFjgYrOViHJX7MA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?width=108&crop=smart&auto=webp&s=1e7fe9d532875b5af8a732ca9391a19aa93f10de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?width=216&crop=smart&auto=webp&s=5bb06bf556da00cc2b15c7ac5c8ab2602c0a886a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?width=320&crop=smart&auto=webp&s=14551ebefcd0a96311709065d2f05fbea937ba63', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?width=640&crop=smart&auto=webp&s=b8b14f9cdd795a031b87c52dd16d62d3dd767fcd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?width=960&crop=smart&auto=webp&s=4436a53365c553e34cd4e7da41306a4fa65bf0db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?width=1080&crop=smart&auto=webp&s=5119f51388e405afe4fc16913fa9b734f27ca278', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_0eIWy9cl5jhop3irDp7ep24EuOQG9eRtAMI7OJAD00.jpg?auto=webp&s=34d03078d93b70664c42b493330863a0a6501241', 'width': 1200}, 'variants': {}}]} | ||
Pay-Per-Token Service with Fine-Tuned Model and LoRA Adapters | 1 | Hey everyone!
I've been diving into the world of locally large language models, such as Llama2 13B or 70B, for various personal and business applications. However, hosting my local model on AWS has proven to be quite expensive, not to mention the added complexity of managing infrastructure, especially when dealing with multiple concurrent users.
I'm on the lookout for a service that allows the upload of fine-tuned versions of models, or even just the LoRA adapters. What I'm really after is a pricing structure based on a pay-per-token system, as I don't anticipate a high volume of requests. Currently, I've been using Claude on AWS with this pay-per-token approach, and it's been a cost-effective alternative to running a dedicated GPU instance for my personal model.
If anyone here has recommendations or insights into services or setups that align with my requirements, I'd greatly appreciate your input.
Thank you so much! | 2023-11-21T14:38:55 | https://www.reddit.com/r/LocalLLaMA/comments/180igkf/paypertoken_service_with_finetuned_model_and_lora/ | PinballOscuro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180igkf | false | null | t3_180igkf | /r/LocalLLaMA/comments/180igkf/paypertoken_service_with_finetuned_model_and_lora/ | false | false | self | 1 | null |
Is there any point in keeping GPTQ files? | 5 | I’ve been replacing my collection of GPTQ models with AWQ versions. Is there any point in keeping the GPTQ versions, or are they totally redundant now? | 2023-11-21T14:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/180i65l/is_there_any_point_in_keeping_gptq_files/ | LowAmplitudeWorlds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180i65l | false | null | t3_180i65l | /r/LocalLLaMA/comments/180i65l/is_there_any_point_in_keeping_gptq_files/ | false | false | self | 5 | null |
What’s the prompt you guys use for function calling in LLMs? | 26 | My understanding of LLM function calling is roughly as follows:
1. You “list” all the functions the model can call in the prompt
2. ???
3. The model knows when to return the “function names” (either in json or otherwise) during conversation
Does anyone have any advice or examples on what prompt should I use? | 2023-11-21T12:52:53 | https://www.reddit.com/r/LocalLLaMA/comments/180galp/whats_the_prompt_you_guys_use_for_function/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180galp | false | null | t3_180galp | /r/LocalLLaMA/comments/180galp/whats_the_prompt_you_guys_use_for_function/ | false | false | self | 26 | null |
Intel Lunar Lake-MX | 3 | According to these leaks they seem to be developing something similar to Apples M series chipset shared lpddr-5 ram, shared across iGPU and CPU (I think?).
Looks like a maximum of 32gb though. And probably not as fast as the high end macs. But still promising that other, notably cheaper manufacturers are trying to copy this design.
## [https://www.techpowerup.com/315941/intel-lunar-lake-mx-soc-with-on-package-lpddr5x-memory-detailed](https://www.techpowerup.com/315941/intel-lunar-lake-mx-soc-with-on-package-lpddr5x-memory-detailed) | 2023-11-21T12:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/180g7u5/intel_lunar_lakemx/ | Monkey_1505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180g7u5 | false | null | t3_180g7u5 | /r/LocalLLaMA/comments/180g7u5/intel_lunar_lakemx/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'alJVkgCUyvPsJvrUczqddhoS825-csBr074qYPvEzG0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?width=108&crop=smart&auto=webp&s=01c6398e47c031fcbd6c01e4c089988cbffda832', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?width=216&crop=smart&auto=webp&s=50786126fa67678faf54e9e6bef3fa338b9f9e89', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?width=320&crop=smart&auto=webp&s=a5a55d0cfb31392118fdd1d6d65d3beff46e0563', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?width=640&crop=smart&auto=webp&s=1edc69f3cac15decf41cc02b3337916882326621', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?width=960&crop=smart&auto=webp&s=8ddec062309dc284c0e040898ed7ecd7fa34324c', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?width=1080&crop=smart&auto=webp&s=9a594cb719c8dda828cf13eef66c1bc962fa1c42', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/cRz4bxPvqeXyvbcjN-zTU4mUPnEVHn9YYKGVrtvZxZo.jpg?auto=webp&s=f7a31e09521e1f392e400b86a40050305d8118ba', 'width': 1776}, 'variants': {}}]} |
How do I know which model is good for which purpose? | 2 | I think what I'm trying to ask is, is there an informational site or resource that tells me what the names mean? What is a MythoMax or a Tie fighter or a mistral, which is good for roleplay, which is a mix and of what, which for chatting etc etc. | 2023-11-21T12:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/180g0x9/how_do_i_know_which_model_is_good_for_which/ | Mobile-Bandicoot-553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180g0x9 | false | null | t3_180g0x9 | /r/LocalLLaMA/comments/180g0x9/how_do_i_know_which_model_is_good_for_which/ | false | false | self | 2 | null |
UC Berkeley Researchers Enhance AI Dialogue with Reinforcement Learning: Outperforms GPT Model | 5 | 2023-11-21T12:28:25 | https://www.brief.news/stories/dd812579-0cd3-417e-94f6-d668ccc6465a?v=f&p=b | Upbeat-Interaction13 | brief.news | 1970-01-01T00:00:00 | 0 | {} | 180fu7z | false | null | t3_180fu7z | /r/LocalLLaMA/comments/180fu7z/uc_berkeley_researchers_enhance_ai_dialogue_with/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'uij_cBep845GGKmOUmsVKN78iV20rpIljTES4zd9TL8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?width=108&crop=smart&auto=webp&s=b671ed577c78f014751fec585e8bb9dea7d5f92f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?width=216&crop=smart&auto=webp&s=2d6abbb3426be45d5fd79da4b4597e93829634bf', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?width=320&crop=smart&auto=webp&s=b223514b40b8c1e548358247ea96710520adf0a8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?width=640&crop=smart&auto=webp&s=56a14bb1b8ee82575938644bd2a967f5946f19b0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?width=960&crop=smart&auto=webp&s=9bd976e525423b94bff16922a9520a61910ccd87', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?width=1080&crop=smart&auto=webp&s=e0554ad47ecfa3c5c61eaf6c665b965bcfaa1d1f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xPSGESKmTpkvgaMgyNTyVuOpibyaeAQA4KW8pRfFtCU.jpg?auto=webp&s=7764b3b09a53ca3b39dfe2f0999c1b908c3647b1', 'width': 1200}, 'variants': {}}]} | ||
Got issues with the Orca 2 prompt template. Above you see the chat output, below the raw text of prompt and responses. Looks almost as if it's the wrong template, but according to TheBloke's model page it should be the correct one. Anyone else having these issues? | 1 | 2023-11-21T12:04:26 | Severin_Suveren | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 180ffky | false | null | t3_180ffky | /r/LocalLLaMA/comments/180ffky/got_issues_with_the_orca_2_prompt_template_above/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qyzoyghkDJnDdOaQZ9sXylvm9QFoCPoF0pkFNgQ9Sko', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?width=108&crop=smart&auto=webp&s=74de1ed1a5911f05af2842f0686fd3cb84a1f2b4', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?width=216&crop=smart&auto=webp&s=2f0d73af7f1063c40a07f3988c5a3830537d0c7c', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?width=320&crop=smart&auto=webp&s=162c81cdaa8dd1e5d0731098768b78da71b44239', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?width=640&crop=smart&auto=webp&s=45d4c44ea5d24b53814eb51637eb7f09d2c6d597', 'width': 640}, {'height': 555, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?width=960&crop=smart&auto=webp&s=b358fbf76eb7aae46907f83c64e8b6b6b782c835', 'width': 960}, {'height': 625, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?width=1080&crop=smart&auto=webp&s=131304fa997e28b47f6cce7d39d48bfcffac9267', 'width': 1080}], 'source': {'height': 1247, 'url': 'https://preview.redd.it/hwuli069yo1c1.png?auto=webp&s=3c6078eff1ec8df1ef0314c2fa9507bdad532601', 'width': 2154}, 'variants': {}}]} | |||
Oatmeal: Terminal UI to chat with large language models (LLM) using different model backends, and integrations with your favourite editors! | 1 | 2023-11-21T11:54:16 | https://github.com/dustinblackman/oatmeal | DustinHeroin | github.com | 1970-01-01T00:00:00 | 0 | {} | 180f9cs | false | null | t3_180f9cs | /r/LocalLLaMA/comments/180f9cs/oatmeal_terminal_ui_to_chat_with_large_language/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9sxIn391RZ7KiXtl9O-0nluQjAatTYr_7mDBEG778u0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?width=108&crop=smart&auto=webp&s=2cca7f264cdbc0c81ef4da601680a421851b47c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?width=216&crop=smart&auto=webp&s=ef6cf20b2ba2977b96fd4024cc1fccd26a041b7c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?width=320&crop=smart&auto=webp&s=9dab9fd484f7eb90dfa6cc326d90a68b5ea75da1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?width=640&crop=smart&auto=webp&s=35abb43d61ecf8d49b799790dc30c03014bfefa3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?width=960&crop=smart&auto=webp&s=a7d818c169c934bf31e2168444ccf0c2ca35b0b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?width=1080&crop=smart&auto=webp&s=648fa0c101a0a4025e059de952e0c9cd148512b0', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/SUZK5KCx2CrfFg53YIh1SdiglAR-6WCprGdeq5VhYek.jpg?auto=webp&s=a48be3f0309b9eb8aebaa2d900d05c6063a30ff5', 'width': 1280}, 'variants': {}}]} | ||
is it worth to fine tune mistral7b on data that is 100% arabic to create a chatbot? | 2 | I'm impressed by the capabilities of Mistral's recent 7b model and am contemplating whether my specific use case aligns with those who have successfully engaged in fine-tuning efforts. My objective is to fine-tune the model using a dataset in Arabic, which will be internally based on our company policies and documents. The purpose is to deploy the model as a chatbot for both our employees and clients. I'm seeking insights on whether this approach is the most effective way. What are your thoughts on this strategy? would appreciate it to share any docs or link or any experience? | 2023-11-21T11:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/180f0zi/is_it_worth_to_fine_tune_mistral7b_on_data_that/ | ta9ate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180f0zi | false | null | t3_180f0zi | /r/LocalLLaMA/comments/180f0zi/is_it_worth_to_fine_tune_mistral7b_on_data_that/ | false | false | self | 2 | null |
LLM + SD Roleplay Webui | 1 | [removed] | 2023-11-21T11:32:51 | https://github.com/rbourgeat/ImpAI | krolhm | github.com | 1970-01-01T00:00:00 | 0 | {} | 180exbz | false | null | t3_180exbz | /r/LocalLLaMA/comments/180exbz/llm_sd_roleplay_webui/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'fqbYpq-nc0hqVscEupAUtGAk87UwJ_7iclc6T5Xxf3k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?width=108&crop=smart&auto=webp&s=42b289a2e1c694a13b2b7b18b00fd8e2aed0f02c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?width=216&crop=smart&auto=webp&s=5afd7b8517a6b8ae04186929d997963fbf908b6c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?width=320&crop=smart&auto=webp&s=4a1826be42a04061eaf097aa3e925719f5b9cc3d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?width=640&crop=smart&auto=webp&s=5900162278777ad06d395d2b35dda519073a910a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?width=960&crop=smart&auto=webp&s=0f5eebe63f41cbc94172915e8df3deb4b1dabb61', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?width=1080&crop=smart&auto=webp&s=cedd31bf90ddc29800505429584d2d17e8ce7af9', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/GeNwQq2K57vbP7Sh2Tnth6WHJD10UWkwEqcdOPmYhnc.jpg?auto=webp&s=61da2b9dcd416da8bd083db00567ff072fa8205d', 'width': 1280}, 'variants': {}}]} | |
New Module for LLM Models Evaluation from Deepchecks | 1 | [removed] | 2023-11-21T11:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/180esg1/new_module_for_llm_models_evaluation_from/ | liamsagely | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180esg1 | false | null | t3_180esg1 | /r/LocalLLaMA/comments/180esg1/new_module_for_llm_models_evaluation_from/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ObUhKJ1gh86pRhqVZ0sgMPgnyopJpOQvGsgARmLNkK4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?width=108&crop=smart&auto=webp&s=294d2aeafcd37ceebaac8e518a662310b1cb1880', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?width=216&crop=smart&auto=webp&s=9a3b4afb2921abc59696f81f82c7750d8eab1280', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?width=320&crop=smart&auto=webp&s=6408381f3003313fa2a83223a79bf65672ede780', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?width=640&crop=smart&auto=webp&s=b7f7d25c971064b10c932559924fae2f19209752', 'width': 640}, {'height': 641, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?width=960&crop=smart&auto=webp&s=a4e0a7bf45000693c47fbf27f531cd39e9e729ea', 'width': 960}, {'height': 721, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?width=1080&crop=smart&auto=webp&s=fb6b09726dad26bb632ce8a214e56788268f5fda', 'width': 1080}], 'source': {'height': 1670, 'url': 'https://external-preview.redd.it/yWTGLwCqDztFqrTjuh9leAvo8t5N8ysfYG8Hl1ONvAs.jpg?auto=webp&s=bf832e12be4379cb84026cbc360e4652a5b140e9', 'width': 2500}, 'variants': {}}]} |
Satya already ordered to go full throttle on Llama and derivates like Mistral after this debacle (Bloomberg interview @4:51min) | 1 | 2023-11-21T10:38:06 | ultrapcb | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 180e45v | false | null | t3_180e45v | /r/LocalLLaMA/comments/180e45v/satya_already_ordered_to_go_full_throttle_on/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ZdfLLEeOUShM6_Lc48XFGsNrEWG1U0t0M7yJTSN-dWs', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/xvvb5tayio1c1.png?width=108&crop=smart&auto=webp&s=4ce528bfec1419953006123f6f5f16ecdd3af5c7', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/xvvb5tayio1c1.png?width=216&crop=smart&auto=webp&s=88125c916d9dbc04307ee3c46de6a41f08ef84b6', 'width': 216}, {'height': 255, 'url': 'https://preview.redd.it/xvvb5tayio1c1.png?width=320&crop=smart&auto=webp&s=5453824f083f564b6b34a94f5a7786382ffc65c1', 'width': 320}], 'source': {'height': 480, 'url': 'https://preview.redd.it/xvvb5tayio1c1.png?auto=webp&s=dad13dcc93032a8c0d6cbad7122152021c2cec1e', 'width': 602}, 'variants': {}}]} | |||
Hallucinations are associable | 4 | I've realized that every hallucination of LLMs are associable and are never just total bullshit. Also, as far as I have experienced, the LLMs are capable to explain why they thought this way, so they are capable of explaining you how to avoid this. What do you think about this and have experienced something similar? | 2023-11-21T09:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/180db6x/hallucinations_are_associable/ | Deep-View-2411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180db6x | false | null | t3_180db6x | /r/LocalLLaMA/comments/180db6x/hallucinations_are_associable/ | false | false | self | 4 | null |
OpenChat finetunes? | 1 | Is any of you planning to finetune or aware of finutunes of openchat models coming out anytime soon?
Considering the new OpenChat-3.5 7B seems to perform similarly or better than mistral 7B you'd think there would tons of finetunes just like there are for the mistral base.
Is there any reason for it? | 2023-11-21T09:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/180d0qf/openchat_finetunes/ | paryska99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180d0qf | false | null | t3_180d0qf | /r/LocalLLaMA/comments/180d0qf/openchat_finetunes/ | false | false | self | 1 | null |
used 3090 vs 4060ti on 15+ yrs old rig help | 3 | **tldr**: can i just drop a used 3090 in my old PC or would be a 4060ti new safer option?
Hi all!
I really want to make my feet wet in running local LLM, especially
* inferencing of 7b models
* some QLora fun
I'd like also to have fun running bigger, quantized models and, if possible, finetune some smallish model like GPT2-XL (like 1B) but if it's feasible otherwise i'll just rent some cloud. A little bit of gaming (Escape from Tarkov) in my freetime would'nt hurt
I've figure it out that my best GPU options are :
* 4060ti 16gb for around 450€ new and hoping for some black friday deals
* 3090 24gb used for around 700€
My current (very old) pc spec are the following:
* i5 2500 3.3GHz
* 16gb DDR3
* Asus p8p67 LGA1155 ( 4x PCI-E 32 but bus width)
* AMR R9 270 Sapphire
* a 600 W PSU
So my questions are:
1. Can I afford to invest all my budget in the 3090? I have a second PSU at home that will be used only to power the gpu out of the case
2. Is it better to buy the 4060ti and use the remaining budget to upgrade older parts (in this case, which one?)
Thanks for the help guys! | 2023-11-21T08:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/180coup/used_3090_vs_4060ti_on_15_yrs_old_rig_help/ | C080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180coup | false | null | t3_180coup | /r/LocalLLaMA/comments/180coup/used_3090_vs_4060ti_on_15_yrs_old_rig_help/ | false | false | self | 3 | null |
I built Copilot replacement | 104 | I tried to run locally various networks for code completion, but it turns out most of the plugins are just directly feed data to neural network and that’s why they all suck, so I built a new that has required preprocessing and uses ollama instead of something weird. | 2023-11-21T07:54:05 | https://marketplace.visualstudio.com/items?itemName=ex3ndr.llama-coder | stevekite | marketplace.visualstudio.com | 1970-01-01T00:00:00 | 0 | {} | 180bu1k | false | null | t3_180bu1k | /r/LocalLLaMA/comments/180bu1k/i_built_copilot_replacement/ | false | false | 104 | {'enabled': False, 'images': [{'id': '_Nubp4ZWfjOyIMlAKZ85JBj8Od7UxL5sPUP9FE_baUY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/VhhDMyrsR5dqsEWhLdV_yJaGlVF_KqdSOW9SSghm6rU.jpg?width=108&crop=smart&auto=webp&s=3f9f54fe1c7b0d9e9609f9f5e1c88e2599e332c2', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/VhhDMyrsR5dqsEWhLdV_yJaGlVF_KqdSOW9SSghm6rU.jpg?width=216&crop=smart&auto=webp&s=3dd8d117cbca6c3ea297bfc2ae9511d7828aa7d2', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/VhhDMyrsR5dqsEWhLdV_yJaGlVF_KqdSOW9SSghm6rU.jpg?width=320&crop=smart&auto=webp&s=d243d0ef01d6e56f006678fee1bb2ea19028303b', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/VhhDMyrsR5dqsEWhLdV_yJaGlVF_KqdSOW9SSghm6rU.jpg?auto=webp&s=2bb1be00abcc32135eabdf8c7bea6a26b9b8d3c7', 'width': 512}, 'variants': {}}]} | |
Step-Back QA like prompting | 1 | [removed] | 2023-11-21T07:20:51 | https://www.reddit.com/r/LocalLLaMA/comments/180bc34/stepback_qa_like_prompting/ | Tejasw__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180bc34 | false | null | t3_180bc34 | /r/LocalLLaMA/comments/180bc34/stepback_qa_like_prompting/ | false | false | self | 1 | null |
I need people to test my experiment - Dynamic Temperature | 75 | So I have an experimental build of Kobold that allows for Dynamic Temperature sampling. Some people tell me that dynamic temp has become a mainstay of their configurations, and poses a novel question.
## Why would you need a Dynamic Temperature?
* Typical attempts to make language models more varied and creative through higher temperature values might not work as you'd expect, due to the fact that higher temperatures disproportionately impact high confidence token generations. This is especially a problem for weaker language models that have less of an innate ability to 'course correct' when a 'bad' token is chosen.
* As a consequence, higher temperature values (past \~1.2) are rather difficult to use if you want your language model to output coherent *and* creative generations. A specific example of how higher temperature can introduce difficulties is in the case of adhering to programming language syntax, as programming languages all have strictly defined rules. This can be an issue if you want an LLM to try a more 'creative' solution to a specific programming problem while still consistently adhering to the rules of the language; a static temperature, therefore, wouldn't be the most effective way to scale the language model's creativity.
For an example, here's how the Dynamic Temperature mapping looks, assuming you use the "HHI" dynamic temp method (which measures how concentrated the model's probabilities are at any given point in time.)
[Red = Closer to maximum temperature, Grey = Closer to minimum temperature](https://preview.redd.it/d2o0v5u1gn1c1.png?width=1144&format=png&auto=webp&s=a9ce802b3dfd7ae46bde986fe13c56575364062a)
The idea is, we turn temperature into a range, where only the highly randomizable tokens get mapped a high temperature, and a non-randomizable token stays near-deterministic.
This sounds great on paper. Except, there's 3 different versions of it that measure different metrics in an attempt to create a better sampler, and not just the HHI version of it. As they say, perfect is the enemy of good... because of this, it's hard to create a 'standard' that I can propose to any of these LLM model hosting backends, and therefore, Dynamic Temperature hasn't been implemented where people can use it beyond my test builds.
This, of course, has made it difficult for me to settle on the 'best method'. So I'm calling upon this community to help get further testing so we can document what effects my weird experimental sampler has on their models, in a way that won't be biased because of how much I've peered into the sampling logic.
What I did with my custom build of koboldcpp:
[https://github.com/kalomaze/koboldcpp/releases/tag/dyna-temp-nov21](https://github.com/kalomaze/koboldcpp/releases/tag/dyna-temp-nov21)
was include some values that force override to use different types of dynamic temperature sampling, which would be read from a .txt file in your current directory, as a way to quickly test them.
These overrides include:
**1.84 Temp**
This value overrides to Entropy Sampling, uses a power function & SamplerTemp.txt file to control the values.
It measures the entropy (uncertainty) of the probability distribution before sampling. This means, if it is highly certain for a certain token, it will use values closer to the minimum temperature. If it is highly uncertain, it will increase the temperature (to avoid repetition / determinism issues in a more natural fashion).
[This is probably really difficult for this sub to understand but maybe it makes sense.](https://preview.redd.it/jo8o26umfn1c1.png?width=1000&format=png&auto=webp&s=b3e9c11724e1509f2f3034adc24e83c6a34384db)
It has minTemp (minimum temperature), maxTemp (maximum temperature), and the exponent value (which controls how aggressively it scales the mapping of temperature.)
UNIQUE OBSERVATIONS ABOUT THIS SAMPLER:
\- I'm able to turn off all truncation samplers (Min P, Top P, etc) and it still functions coherently within the default range of values (from 0.0 minTemp to 2.0 maxTemp).
\- I'm guessing the reason why that happens is because it's really difficult to achieve maximum entropy on a 32,000 token vocabulary model. However, you can turn up the maxTemp to even 5.0 and get some really weird but still pretty coherent results.
**2.0 Temp**
This form of DynaTemp is HHI Sampling, uses a power function & SamplerTemp.txt file to control the values. I misnamed this as Gini sampling before, but it is measuring HHI.
The 'HHI' value it measures is how concentrated the probabilities are. If it is highly concentrated on just one token, then it reduces the temperature to a strong degree. It is more spread out or evenly divided, the temperature is increased towards the maxTemp.
It has minTemp (minimum temperature), maxTemp (maximum temperature), and the exponent value (which controls how aggressively it scales).
UNIQUE OBSERVATIONS ABOUT THIS SAMPLER:
\- The measurements of concentration (via the HHI measurement) seem pretty consistent with or without removing 'bad tokens' (e.g Min P, Top P, and othet truncation samplers). This is unlike Entropy which is sensitive to whether or not you have those truncation samplers on or not.
\- For reference, here's how the HHI (concentration) measurements look for a prompt that's more deterministic vs. an open ended prompt:
https://preview.redd.it/kgffou0ffn1c1.png?width=1979&format=png&auto=webp&s=20da15f34eb17bef65ae3a4eaade58bf6d6ad7b0
**1.91 Temp**
Greedy Dynamic Temp (aka DynaTemp), the original implementation. This uses uses a sigmoid function & is basing the temperature off the top token. I am not confident that this is useful or interesting compared to HHI and Entropy versions of Dynamic Temp, as it does not measure the entire distribution; this was my first trial run, but you can test it if you want. | 2023-11-21T07:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/180b673/i_need_people_to_test_my_experiment_dynamic/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180b673 | false | null | t3_180b673 | /r/LocalLLaMA/comments/180b673/i_need_people_to_test_my_experiment_dynamic/ | false | false | 75 | {'enabled': False, 'images': [{'id': 'LIw_poof5BVvFf3wMhbCrq7tjm07VsfhZsKfwxlqPZw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?width=108&crop=smart&auto=webp&s=a1ef6558f20bfa769e6c697ae492e7165dc2b7e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?width=216&crop=smart&auto=webp&s=5ed66e28cb9c4ab63d55b843afd7b6c09b77853f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?width=320&crop=smart&auto=webp&s=cc3e5b6235af3ded2830672eb1d95230bb510835', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?width=640&crop=smart&auto=webp&s=ca70390fceb841390eb5107348641e38ca66bf1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?width=960&crop=smart&auto=webp&s=649d7d7306c79a8c7130587bdc502a3a08bec1e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?width=1080&crop=smart&auto=webp&s=479214f3ebce6a678f6cecf47113fd5bc231a8bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Mh3apuExaFJLUKMNXi9n73s6tp4EDKCoy_jpNcxowY8.jpg?auto=webp&s=87e64d798295ec52a77bd402f15570e9448c0594', 'width': 1200}, 'variants': {}}]} | |
Is there any OpenAI Assistant API Clone? | 1 | I'm currently building AI assistant based on OpenAI Assistant API for my small firm and try to add unique custom function (mostly reporting and analyzing stuff) around its capabilities, but after current situation maybe i need backup plan for the Assistant API. Is there any other idea both open or closed source approach for replicating this Assistant API? | 2023-11-21T06:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/180as6n/is_there_any_openai_assistant_api_clone/ | No-Giraffe-6887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180as6n | false | null | t3_180as6n | /r/LocalLLaMA/comments/180as6n/is_there_any_openai_assistant_api_clone/ | false | false | self | 1 | null |
Is anyone else using their local LLM to help run parts/all of their business or hobby? | 1 | [removed] | 2023-11-21T06:18:33 | https://www.reddit.com/r/LocalLLaMA/comments/180ae55/is_anyone_else_using_their_local_llm_to_help_run/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180ae55 | false | null | t3_180ae55 | /r/LocalLLaMA/comments/180ae55/is_anyone_else_using_their_local_llm_to_help_run/ | false | false | self | 1 | null |
Does Ubuntu support RTX 4060ti 16G? | 1 | Does anyone have experience of using Ubuntu and RTX 4060 Ti 16G?
I have Ubuntu 22.4.3.and RTX 4060Ti 16G.
In Ubuntu additional devices panel, it didn't pick up a right driver. I downloaded driver.Run file from NVIDIA website and installed it. It prompted installed successfully.
While, I tried to build docker for localgpt, got error that cannot find Cuda. | 2023-11-21T06:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/180aclk/does_ubuntu_support_rtx_4060ti_16g/ | newfire1112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180aclk | false | null | t3_180aclk | /r/LocalLLaMA/comments/180aclk/does_ubuntu_support_rtx_4060ti_16g/ | false | false | self | 1 | null |
Ran LLaMa.cpp 7B on a Linux Ubuntu Lunar pc. It’s quantized to 4-bits and works pretty well. I found this project on GitHub. | 1 | If you have any recommendations on how to make this better please leave a comment! | 2023-11-21T06:01:51 | https://v.redd.it/i9yaqd3v5n1c1 | Wild-Librarian4511 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 180a4ix | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/i9yaqd3v5n1c1/DASHPlaylist.mpd?a=1703138525%2COWRkOGE5YjQwOGY0M2NiNjY1ZWRhZjQ5YWE1ZjlhZGJiOWQ5ZDUwM2FmOTY3ZWIyMDMxYjdmZjYxODZlZmRhNA%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/i9yaqd3v5n1c1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 640, 'hls_url': 'https://v.redd.it/i9yaqd3v5n1c1/HLSPlaylist.m3u8?a=1703138525%2CMDJjNGFiMGQ0NjJhNWQwN2EzNTgxNDc2YmViNDIyOTYzZDU3MjVkYTU1ZmQ1NDQ3NTE0NTQ2N2Y5ZTUzNjhlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/i9yaqd3v5n1c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 272}} | t3_180a4ix | /r/LocalLLaMA/comments/180a4ix/ran_llamacpp_7b_on_a_linux_ubuntu_lunar_pc_its/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cG44dTZ3enU1bjFjMfMi-8sbGf66DcnwGs7W5z238uujzx42U0Cd5EWW0dyx', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cG44dTZ3enU1bjFjMfMi-8sbGf66DcnwGs7W5z238uujzx42U0Cd5EWW0dyx.png?width=108&crop=smart&format=pjpg&auto=webp&s=9b97a1925d624fd4a5abcf56ab51891301f86548', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cG44dTZ3enU1bjFjMfMi-8sbGf66DcnwGs7W5z238uujzx42U0Cd5EWW0dyx.png?width=216&crop=smart&format=pjpg&auto=webp&s=1e71615ef16b04f4db74be3074fb16779f8ff5df', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cG44dTZ3enU1bjFjMfMi-8sbGf66DcnwGs7W5z238uujzx42U0Cd5EWW0dyx.png?width=320&crop=smart&format=pjpg&auto=webp&s=def268cdfff67a0dacf6a50422988f6f5aa7d83a', 'width': 320}], 'source': {'height': 858, 'url': 'https://external-preview.redd.it/cG44dTZ3enU1bjFjMfMi-8sbGf66DcnwGs7W5z238uujzx42U0Cd5EWW0dyx.png?format=pjpg&auto=webp&s=a5050343c6d2bb88c8f4944b0ac25eb932ec6321', 'width': 364}, 'variants': {}}]} | |
Ran LLaMa on a linux computer. It’s 7B and quantized to 4 bit. Found this on GitHub. | 1 | If you have any recommendations on how to make this better please comment below! | 2023-11-21T05:50:27 | https://v.redd.it/helamuut3n1c1 | Wild-Librarian4511 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1809xl4 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/helamuut3n1c1/DASHPlaylist.mpd?a=1703137852%2CNjQ0ZDM5OGI1NDk5MzlhZGQ4NzI3MDIzZjY5NDEyMmE5MWYyODY3N2U1ODY4YjM2NjRkNzc2Mzc3M2ZiMTdlYQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/helamuut3n1c1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/helamuut3n1c1/HLSPlaylist.m3u8?a=1703137852%2CNzdlYmJhN2E5MmVjZjU2M2M3Yzc0N2E2NDRhMjI5MDczYTk5MDc5ZThjYTE4MzY1NzU2MWQwMmZjMjUxNjM1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/helamuut3n1c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1809xl4 | /r/LocalLLaMA/comments/1809xl4/ran_llama_on_a_linux_computer_its_7b_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YW44bnJoc3QzbjFjMehOZoIXsKHs8JNIKNNfidbSHfS2JfhxRR8gAA4bPmWM', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/YW44bnJoc3QzbjFjMehOZoIXsKHs8JNIKNNfidbSHfS2JfhxRR8gAA4bPmWM.png?width=108&crop=smart&format=pjpg&auto=webp&s=620a91a9e54d08ec51a2ef6221172390f59acfb7', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/YW44bnJoc3QzbjFjMehOZoIXsKHs8JNIKNNfidbSHfS2JfhxRR8gAA4bPmWM.png?width=216&crop=smart&format=pjpg&auto=webp&s=6b709ce1dfb6bde34ac10db64d1359e9f879e9b1', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/YW44bnJoc3QzbjFjMehOZoIXsKHs8JNIKNNfidbSHfS2JfhxRR8gAA4bPmWM.png?width=320&crop=smart&format=pjpg&auto=webp&s=590b18ca879d5a89795e6c9f3ae2f8a09474ac13', 'width': 320}], 'source': {'height': 858, 'url': 'https://external-preview.redd.it/YW44bnJoc3QzbjFjMehOZoIXsKHs8JNIKNNfidbSHfS2JfhxRR8gAA4bPmWM.png?format=pjpg&auto=webp&s=4b005b83588fc238214d458d4ed60f9a1384f266', 'width': 482}, 'variants': {}}]} | |
Humanize Prompt Responses? | 2 | Playing around with LZLV-70b 4QM, i am having a great time with the long form responses. However after a while now i am beginning to notice "AI styled writing" I tried pumping up the temperature to 1.5, repetition penalty to 1.3 and even tried mirostat mode 1,2 on the kobold.cpp
If the repetition penalty gets too high, the AI gets nonsensical. If the temperature gets too high, the AI starts to blurt out nonsense. While I am not aiming to have the AI write exactly like a person and be undetectable, after reading it so much it's gotten pretty repetitive with it's word usage and sentences.
I've tried prompting it to
" Write in a simple easy to read way make your response more conversational and less formal. Aim to increase perplexity by varying your word choices and avoiding predictable phrases "
but it's.....limited.
any tips or methods to make the AI not be so....dry? | 2023-11-21T05:10:03 | https://www.reddit.com/r/LocalLLaMA/comments/18099ao/humanize_prompt_responses/ | DominicanGreg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18099ao | false | null | t3_18099ao | /r/LocalLLaMA/comments/18099ao/humanize_prompt_responses/ | false | false | self | 2 | null |
Best small model for function calling and decision making task? | 1 | I want a small model(ideally <=3B params) which is really effecient and good at function calling and logical decision making(after fine tuning). With decision making, I mean for example if it gets a JSON input telling the distance of an approaching object, it should intelligently call a function to steer the motor to a certain angle to avoid collision or something like that. Any such models? | 2023-11-21T04:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/180914k/best_small_model_for_function_calling_and/ | Shoddy_Vegetable_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180914k | false | null | t3_180914k | /r/LocalLLaMA/comments/180914k/best_small_model_for_function_calling_and/ | false | false | self | 1 | null |
Using vLLM for Home Assistant. I need help getting any model to work. Can't get anything over 7B to run on a 3090. | 3 | I'm using vLLM because it's a drop in replacement for ChatGPT. If there is something else compatible with the ChatGPT API, let me know.
Problem 1: I cannot get anything over a 7B to run in vLLM. I'm sure my parameters are wrong, but I cannot find any documentation.
python3 -m vllm.entrypoints.openai.api_server --model /home/h/Mistral-7B-finetuned-orca-dpo-v2-AWQ --quantization awq --dtype auto --max-model-len 5000
Problem 2: Mistral-7B-finetuned-orca-dpo-v2-AWQ is the only one I got up and running with responses that make sense. However, there is a prompt being appended to everything I send to it:
### Human: Got any creative ideas for a 10 year old’s birthday?
### Assistant: Of course! Here are some creative ideas for a 10-year-old's birthday party: ... [It goes on quite a bit.]
Either because of that or for other reasons it is not answering very basic questions. There are several threads about this on Github, but was able to identify zero actionable information.
Problem 4: CodeLlama-13B-Python-AWQ just blasted a bunch of hastags and gobbledygook back at me. Same problem with the prompt too.
I am running this on an Ubuntu Server VM (16 cores/48gb RAM) right now so I don't take up any VRAM, but I can switch to Windows if necessary. | 2023-11-21T04:44:53 | https://www.reddit.com/r/LocalLLaMA/comments/1808th9/using_vllm_for_home_assistant_i_need_help_getting/ | flossraptor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1808th9 | false | null | t3_1808th9 | /r/LocalLLaMA/comments/1808th9/using_vllm_for_home_assistant_i_need_help_getting/ | false | false | self | 3 | null |
Is there a list of open-source multimodal models available for commercial usage? | 1 | I fell in lovelove with with [Llava](https://llava-vl.github.io) but I’m not certain it’s permissible for commercial usage. With the industry moving so fast and the drama from OpenAI this past week, is there a list of open source multimodal models? | 2023-11-21T04:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1808cqz/is_there_a_list_of_opensource_multimodal_models/ | NextGen-Trading | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1808cqz | false | null | t3_1808cqz | /r/LocalLLaMA/comments/1808cqz/is_there_a_list_of_opensource_multimodal_models/ | false | false | self | 1 | null |
Recommend a model for converting a book into short notes | 3 | I am a newbie to running a llm locally, but I would like to make one and use it to convert long books to short notes.
I am a med student with a slightly above average knowledge of this tech (compared to general public) will it be possible for me to achieve this with a non tech background?
If it is then it would mean a lot if someone can point me in the right direction. | 2023-11-21T04:10:22 | https://www.reddit.com/r/LocalLLaMA/comments/18087ql/recommend_a_model_for_converting_a_book_into/ | AtrophicAdipocyte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18087ql | false | null | t3_18087ql | /r/LocalLLaMA/comments/18087ql/recommend_a_model_for_converting_a_book_into/ | false | false | self | 3 | null |
Dynamic LoRAs -- Crazy idea? | 10 | I had some crazy thoughts today while I was in physical therapy (funny what some electrodes on the base of your skull will make you think of...) -- But, I feel like there's a chance I might be on to something and wanted to share it, let me know what you guys think... -- or if in the more likely scenario, I'm just crazy, or this is already known and just impractical in some way... let me know that too! 😅
...
But I was thinking about LoRAs today and about how we use them for finetuning transformer models... -- Basically we only train \~0.27% of some foundation model's parameters, etc. (which we use to create an offset matrix which is applied with a PEFT adapter to the original static parameters of the pre-trained model -- or something -- I think -- yeah?).
...
And if you want a model that does X you take a bunch of data for that and fine tune a LoRA for a foundation model that does X...
Or if you want a model that does Y you take a bunch of data for that and fine tune a LoRA for a foundation model that does Y...
...
**So... why not train a second model\* to produce LoRAs as an output? that can be consumed by an LLM (or other transformer) as an input?**
*(****\*****: either a regular ML model, or a transformer model -- but not an LLM -- it produces LoRAs as the output.)*
I know it could be hard to train without a specialized strategy but... it seems like LoRAs have the ability to slide the model's latent space around through different windows & lenses; and with a sliding latent space, you get a lot of extra horsepower for "free" (i.e. you only invoke the higher level network when it needs to nudge the model in a certain direction -- which the model can be trained to do by outputting special token/vector pairs).
...
So, you could do some really interesting things... like instead of using a mixture of experts (MoE) where one knows programming, and another knows healthcare, you could train the LLM to recognize when it needs to make minor changes to it's current LoRA and output a special token & vector which is fed into the dynamic LoRA offset model which makes minor modifications to the current LoRA that is applied via the PEFT adapter.
I feel like if you took a bunch of LoRAs (assuming the tokens were not retrained) and labled them, you could train the higher level dynamic LoRA tensor network to be able to blend between LoRAs that it knows about, and you could train the LLM to output the special tokens when it recognizes it needs to shift between latent spaces...
...
And for that matter you could take that in a couple of interesting directions... For example:
\- This one is kind of boring, but you could try using a model with a smaller context window (even 4k or 8k), and when it got near the end of its context, you could have the pre-trained dynamic LoRA tensor network evaluate the current context / tokens and spit out modifications to the current LoRA that allowed you to embed some of that context into the latent space; thus allowing you to heavily compress and summarize the current context window to free up a bunch of token/attention space...
OR
\- You could go in a completely different direction and stack these on top of each other (as a LoRA can be applied to any tensor network, including one which produces downstream LoRAs) and have a dynamically sized network... -- basically giving you a dynamically sized LLM that only uses as much brain power as it thinks it needs for each problem (i.e. it could use the 4-bit 7b LLM at the base, with a bunch of 3b parameter dynamic LoRA layers above each one -- so that for simple problems they just invoke the base network, but when it thinks it needs more "brain power" it could spit out a special token/vector pair that singified that the next layer up needed to be involved -- and it would turn on that layer to build a LoRA for the layer below.
...
Heck, you could even do both of those scenarios, separately, with a third networkt that dynamically blended between the two in some way... (much how LoRAs are statically blended together today -- but this would be dynamic and the blend could change when special trained tokens are encountered).
...
There are a lot of other weird things I can think of that you could do with dynamic LoRAs (like make any LLM multi-modal...) -- but first I just want to figure out how crazy this is, and what people think about it... -- or maybe it's already been done and I'm just late to the party...
...
So, what do you guys think? | 2023-11-21T04:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/18081so/dynamic_loras_crazy_idea/ | BrainSlugs83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18081so | false | null | t3_18081so | /r/LocalLLaMA/comments/18081so/dynamic_loras_crazy_idea/ | false | false | self | 10 | null |
IS-LM 3B performance showcase. | 25 | Previously I made a post about my new model, IS-LM 3B. If you want more information of the model, read the post [here](https://www.reddit.com/r/LocalLLaMA/comments/17zfl8f/today_i_released_islm_3b/).
As the creator of this model, I noticed that this model is extremely good for a 3B in economic tasks(As it is trained on it.), and surprising other tasks too. It follows instructions well, is VERY verbose, and is very accurate compared to other 3B models and even LLaMA 1 7B. Though it is bad at casual chats/roleplay, but that was expected because it was not trained on those. I am very surprised and impressed, and here's some example conversations(Warning: a LOT of text.): [https://pastebin.com/2BZC7kZg](https://pastebin.com/2BZC7kZg)
I did NOT cherrypick those, these are almost all of the proper tests I've done to the model. It is very consistent at generating those type of responses, if you don't believe me, try it out yourself. Also note that all these are seperate conversations, as I found that this model isn't the best at multi-turn conversations.
Obviously not all of them are correct or the best, but it does show very good capabilities and verbosity. This model is again, extremely good for a 3B, even outperforming a lot of the SOTA LLaMA 1 7B models IMO. If you are interested, I HIGHLY recommend you trying it out as it can be very easily ran. | 2023-11-21T03:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/1807c4b/islm_3b_performance_showcase/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1807c4b | false | null | t3_1807c4b | /r/LocalLLaMA/comments/1807c4b/islm_3b_performance_showcase/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
Build advice. | 3 | I have an old PC.
Currently having 512MB RAM and the Xeon E3-1240 CPU with 500W PSU.
Is this worth keeping to throwing the GPU in, like 3090 or something, or is this too old to being useful? It has the PCIe 2.0 but my friend says the throughput is less important to LLM use.
Just was curious in discussion with my friend.
Thanks. 😎👍 | 2023-11-21T03:09:30 | https://www.reddit.com/r/LocalLLaMA/comments/180721p/build_advice/ | SlavaSobov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 180721p | false | null | t3_180721p | /r/LocalLLaMA/comments/180721p/build_advice/ | false | false | self | 3 | null |
LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient Language Model Finetuning | 33 | 2023-11-21T02:50:01 | https://arxiv.org/abs/2311.12023 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1806o6y | false | null | t3_1806o6y | /r/LocalLLaMA/comments/1806o6y/lqlora_lowrank_plus_quantized_matrix/ | false | false | nsfw | 33 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=3fb6a4a696924314f8961caf4c236afc208b1d0e', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0b767a338f2f0b71f29f733f7ddcfc7cadb67a66', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1ce5f48dead557a2092209460f03f7d712a26f7e', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f1a4ed3c55eb56f7f3af10ed3892c65ced935430', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=8898f44674757a7e5e0b7c1c859e023eb66ebc32', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=0b2a8ab9dcbf4830be7effafa4b1684349b6553a', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?blur=40&format=pjpg&auto=webp&s=09843e9d15e7440404e44f894134a99dd2f5f387', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=3fb6a4a696924314f8961caf4c236afc208b1d0e', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0b767a338f2f0b71f29f733f7ddcfc7cadb67a66', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1ce5f48dead557a2092209460f03f7d712a26f7e', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f1a4ed3c55eb56f7f3af10ed3892c65ced935430', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=8898f44674757a7e5e0b7c1c859e023eb66ebc32', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=0b2a8ab9dcbf4830be7effafa4b1684349b6553a', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?blur=40&format=pjpg&auto=webp&s=09843e9d15e7440404e44f894134a99dd2f5f387', 'width': 1200}}}}]} | |
MultiLoRA: Democratizing LoRA for Better Multi-Task Learning | 29 | 2023-11-21T02:47:35 | https://arxiv.org/abs/2311.11501 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1806mge | false | null | t3_1806mge | /r/LocalLLaMA/comments/1806mge/multilora_democratizing_lora_for_better_multitask/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Information on VRAM usage of LLM model | 5 | Newbie here. Is there any list or reference I can look on each LLM model's GPU VRAM consumption?
or even how you guys figured it out before testing and using it? | 2023-11-21T02:45:19 | https://www.reddit.com/r/LocalLLaMA/comments/1806ksz/information_on_vram_usage_of_llm_model/ | auntysociall | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1806ksz | false | null | t3_1806ksz | /r/LocalLLaMA/comments/1806ksz/information_on_vram_usage_of_llm_model/ | false | false | self | 5 | null |
ORCA 2 Released open source! | 183 | 2023-11-21T02:39:33 | https://huggingface.co/microsoft/Orca-2-13b | metalman123 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1806gp7 | false | null | t3_1806gp7 | /r/LocalLLaMA/comments/1806gp7/orca_2_released_open_source/ | false | false | 183 | {'enabled': False, 'images': [{'id': 'lPjiEzu_Oe3C-8LaJxvhh58cNoWSfIz5it5VjngN9iE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?width=108&crop=smart&auto=webp&s=43a5c1323a76d7650f66f1f11911937b8544fe12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?width=216&crop=smart&auto=webp&s=67796741a655d10f461f853b934016189b84fc1f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?width=320&crop=smart&auto=webp&s=b907067fc8571f02fa6b5e24305aaa7879ba0d92', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?width=640&crop=smart&auto=webp&s=9575c0eae71b3edb5c1b74b651c995f598f57f61', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?width=960&crop=smart&auto=webp&s=44e109063c80c1b1ce71b4ba507b36cd9be89dea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?width=1080&crop=smart&auto=webp&s=cd5fdb306d6cd12048a59a263777c7e0c5f0ae6f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UtQ4UjXoYH4qkp8xwxqOTN8q3ML_tGAGGuWZ6DquHZY.jpg?auto=webp&s=3193a4c64c21c6a35fb0e9e34ace8103bf725396', 'width': 1200}, 'variants': {}}]} | ||
Is LLaMA-1-65B or LLaMA-2-70B more creative at storytelling ? | 6 | I recently started using the base model of LLaMA-2-70B for creative writing and surprisingly found most of my prompts from ChatGPT actually works for the "base model" too, suggesting it might also be fine tuned a bit on ChatGPT-like instructions.
Curious anyone tried both llama 1 & 2 base model and can share their experiences on creativity ? My hunch is llama 1 might be slightly better at it, assuming it hasn't go through as much alignment. | 2023-11-21T02:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/1806bz6/is_llama165b_or_llama270b_more_creative_at/ | nuvalab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1806bz6 | false | null | t3_1806bz6 | /r/LocalLLaMA/comments/1806bz6/is_llama165b_or_llama270b_more_creative_at/ | false | false | self | 6 | null |
Orca 2: Teaching Small Language Models How to Reason | 156 | 2023-11-21T02:32:34 | https://www.microsoft.com/en-us/research/blog/orca-2-teaching-small-language-models-how-to-reason/ | Memories-Of-Theseus | microsoft.com | 1970-01-01T00:00:00 | 0 | {} | 1806by8 | false | null | t3_1806by8 | /r/LocalLLaMA/comments/1806by8/orca_2_teaching_small_language_models_how_to/ | false | false | 156 | {'enabled': False, 'images': [{'id': 'YWhTEfRLYueSwyWp4_92BJKjkwdWzXU2ERfAopEW988', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?width=108&crop=smart&auto=webp&s=582b0af35fc42ed33f6f5c79be5ae52edc3865db', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?width=216&crop=smart&auto=webp&s=57d65f9fdbb794f455acde6dbdd4304181b54db7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?width=320&crop=smart&auto=webp&s=06a300a3db46056f82236451f1f55b36ea8398b3', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?width=640&crop=smart&auto=webp&s=122570db8be52773495b0947db99863753cd3287', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?width=960&crop=smart&auto=webp&s=6117f46f99763e7691fd00f583a294e531c0b264', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?width=1080&crop=smart&auto=webp&s=c0cbfa6e0197b3d84ebf439dbc007c283d19d683', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/qtmCCxWNuN6TylpxXy7DeokoHXIAQE32cMRTiJl8mT4.jpg?auto=webp&s=f97004becae89af3c7dd8b18f83b9f60c70f4428', 'width': 1200}, 'variants': {}}]} | ||
Run an openAI powered startup. What’s the best alternative to got 3.5 with function calling that I can run in the cloud? | 27 | Looking for speed and accuracy. Any suggestions on cloud hosts? | 2023-11-21T01:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/1805hq1/run_an_openai_powered_startup_whats_the_best/ | fvpv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1805hq1 | false | null | t3_1805hq1 | /r/LocalLLaMA/comments/1805hq1/run_an_openai_powered_startup_whats_the_best/ | false | false | self | 27 | null |
How do you determine which embedding models will fit into memory available? | 2 | I'm delving into fine-tuning embedding models and facing some challenges.
I'm trying out a few models, specifically [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en), [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small), and [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). I have access to a desktop with a 3080 card and also scalable cloud instances.
I've encountered an Out Of Memory (OOM) error with the "BAAI/bge-small-en" model, which has 33.4M parameters. This is surprising, considering its small size. As a rule of thumb, I estimate that models up to 7B parameters (unquantised) and 13B (quantised) can fit on this card so I assumed that the even the larger embedding models (~560M params) should be no problem.
My first question is there any quick and dirty rule of thumb/method I can utilise to quickly determine if an embedding model will fit within a certain amount of Vmemory?
With generative models, I use a rough rule of thumb that models up to 7B parameters (unquantised) and 13B (quantised) can fit on my 3080 card.
The model "BAAI/bge-small-en" has 33.4M parameters according to [its repo](BAAI/bge-small-en) so I'm confused as to why I'm encountering OOM errors when trying to do a training run given its small size.
For my training runs I'm following this notebook -
https://github.com/run-llama/finetune-embedding/blob/main/finetune.ipynb.
I have ran the notebook "as is" with no issue, my next step was to do a hyper-parameter search to find the largest batch size I could fit on my GPU before doing a longer epoch run. However, the VRAM utilisation is quite high (8/9GB) with a small batch size of 10, setting batch size larger results in an OOM error .
This leads to my second question: What are others' experiences in fine-tuning embedding models, specifically in terms of the size of models trained on a given amount of VRAM?
Any insights, tips or pointers to useful resources would be appreciated. | 2023-11-21T01:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/18058g3/how_do_you_determine_which_embedding_models_will/ | nuusain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18058g3 | false | null | t3_18058g3 | /r/LocalLLaMA/comments/18058g3/how_do_you_determine_which_embedding_models_will/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'ZpqCn0UFsDLzRYOB5KuacBJrAotKp0ejagvz_o1u2Eg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?width=108&crop=smart&auto=webp&s=430a2022eeedbf9ee176b658244b5d0f17cdd180', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?width=216&crop=smart&auto=webp&s=07f46f29d88d19e1c71695e04f768bf3886cbd6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?width=320&crop=smart&auto=webp&s=df0c277be19cbfe9b3385bf4cc05c18a02df8465', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?width=640&crop=smart&auto=webp&s=b99f51dca6a91fdf6bd64fbb25e7520ec2f4bdf9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?width=960&crop=smart&auto=webp&s=bfca1e67104c106131408d031f66f22cc6df3bff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?width=1080&crop=smart&auto=webp&s=f986d4bf4f1eb59a70d2eb3f1c41dbce829cd09b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GwrgVzxCouRhu7q4AdgeDxnwQV7-_M7mfLx0itVu-m8.jpg?auto=webp&s=9a6da01a53d9ea94d1940997e639531e34ee811d', 'width': 1200}, 'variants': {}}]} |
TinyLlama: Run Advanced LLMs Locally or Across Devices with a 2MB Inference App | 1 | 2023-11-21T01:26:54 | https://www.secondstate.io/articles/tinyllama-1.1b-chat/ | smileymileycoin | secondstate.io | 1970-01-01T00:00:00 | 0 | {} | 18050q0 | false | null | t3_18050q0 | /r/LocalLLaMA/comments/18050q0/tinyllama_run_advanced_llms_locally_or_across/ | false | false | default | 1 | null | |
Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 | 9 | >Since the release of TÜLU [Wang et al., 2023b], open resources for instruction tuning have developed quickly, from better base models to new finetuning techniques. We test and incorporate a number of these advances into TÜLU, resulting in TÜLU 2, a suite of improved TÜLU models for advancing the understanding and best practices of adapting pretrained language models to downstream tasks and user preferences. Concretely, we release: (1) TÜLU-V2-mix, an improved collection of high-quality instruction datasets; (2) TÜLU 2, LLAMA-2 models finetuned on the V2 mixture; (3) TÜLU 2+DPO, TÜLU 2 models trained with direct preference optimization (DPO), including the largest DPO-trained model to date (TÜLU 2+DPO 70B); (4) CODE TÜLU 2, CODE LLAMA models finetuned on our V2 mix that outperform CODE LLAMA and its instruction-tuned variant, CODE LLAMA-Instruct. Our evaluation from multiple perspectives shows that the TÜLU 2 suite achieves state-of-the-art performance among open models and matches or exceeds the performance of GPT-3.5-turbo-0301 on several benchmarks. We release all the checkpoints, data, training and evaluation code to facilitate future open efforts on adapting large language models. | 2023-11-21T00:36:29 | https://arxiv.org/abs/2311.10702 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1803yyj | false | null | t3_1803yyj | /r/LocalLLaMA/comments/1803yyj/camels_in_a_changing_climate_enhancing_lm/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | |
Is there a list of models vs vRAM required vs tk/sec vs GPU ran on? | 7 | Like, I have a 3070. I want to know what other people are saying they get for tokens so I can have an idea of what to expect, for example | 2023-11-21T00:19:31 | https://www.reddit.com/r/LocalLLaMA/comments/1803m4v/is_there_a_list_of_models_vs_vram_required_vs/ | bearbarebere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1803m4v | false | null | t3_1803m4v | /r/LocalLLaMA/comments/1803m4v/is_there_a_list_of_models_vs_vram_required_vs/ | false | false | self | 7 | null |
[R] LLMs cannot find reasoning errors, but can correct them! (Plus dataset) | 1 | 2023-11-21T00:10:40 | https://www.reddit.com/r/MachineLearning/comments/17zu3xo/r_llms_cannot_find_reasoning_errors_but_can/ | metalman123 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1803fp3 | false | null | t3_1803fp3 | /r/LocalLLaMA/comments/1803fp3/r_llms_cannot_find_reasoning_errors_but_can/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | |
Great video on local Gen AI running on NVIDIA Jetson | 1 | [removed] | 2023-11-20T22:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/18015a4/great_video_on_local_gen_ai_running_on_nvidia/ | Diligent_Usual7751 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18015a4 | false | {'oembed': {'author_name': 'NVIDIA Developer', 'author_url': 'https://www.youtube.com/@NVIDIADeveloper', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/6mCFzDatGGc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Bringing Generative AI to Life with NVIDIA Jetson"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/6mCFzDatGGc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Bringing Generative AI to Life with NVIDIA Jetson', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_18015a4 | /r/LocalLLaMA/comments/18015a4/great_video_on_local_gen_ai_running_on_nvidia/ | false | false | self | 1 | null |
Splitting models over GPUs - AWQ text-generation-AU | 6 | tl;dr: AutoAWQ seems to ignore the multi-GPU VRAM allocation sliders completely in text-generation-ui?!?
--------------
I've got a 3090 and added in the old 2070S for some temporary experimentation.
Not particularly stable and slowed speed a lot versus just 3090, but 32gb opens up some higher quant 34Bs.
llama.cpp mostly seems to run fine split across them.
Puzzled though by text-generation-UI's AutoAWQ. Regardless of what I do with the sliders it always runs out of memory on the 8GB card. [Even if I tell it 1GB on the 2070S only it still fills it till OOM.](https://i.imgur.com/1Z5vjid.png). The max the sliders go to are expected amounts (24 & 8) so pretty sure I've got them right way round...
Anybody know what's wrong? | 2023-11-20T21:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/18008h3/splitting_models_over_gpus_awq_textgenerationau/ | AnomalyNexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18008h3 | false | null | t3_18008h3 | /r/LocalLLaMA/comments/18008h3/splitting_models_over_gpus_awq_textgenerationau/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'dCoXQoRO2l_XhdsI8t7w3riMRUQS__1Zu_BN6OCpW5Q', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/AVXkXWWjO6ioJm8pURuTGuRwlKXUAXcOH8muxZSiegA.png?width=108&crop=smart&auto=webp&s=b7e35ee807ec5a52d603035fd759041e059565ac', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/AVXkXWWjO6ioJm8pURuTGuRwlKXUAXcOH8muxZSiegA.png?width=216&crop=smart&auto=webp&s=082608af1e7f39d925424c009083bbe5c1af50d0', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/AVXkXWWjO6ioJm8pURuTGuRwlKXUAXcOH8muxZSiegA.png?width=320&crop=smart&auto=webp&s=4c0a001c2cca0fa241b2898bd2650cc179418e03', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/AVXkXWWjO6ioJm8pURuTGuRwlKXUAXcOH8muxZSiegA.png?width=640&crop=smart&auto=webp&s=97b0e2b4071c7c81371582e33fac986a8aac7244', 'width': 640}, {'height': 563, 'url': 'https://external-preview.redd.it/AVXkXWWjO6ioJm8pURuTGuRwlKXUAXcOH8muxZSiegA.png?width=960&crop=smart&auto=webp&s=bc86b736d2dc4ae4e9f718fba1faec26c76b9d43', 'width': 960}], 'source': {'height': 585, 'url': 'https://external-preview.redd.it/AVXkXWWjO6ioJm8pURuTGuRwlKXUAXcOH8muxZSiegA.png?auto=webp&s=37d83dcede4e79dc788c9285c4fdb70a193684fd', 'width': 996}, 'variants': {}}]} |
Here's an in depth look at what happened this weekend | 1 | 2023-11-20T21:48:01 | Future_Might_8194 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18000zl | false | null | t3_18000zl | /r/LocalLLaMA/comments/18000zl/heres_an_in_depth_look_at_what_happened_this/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'GTSEeDn_WDzhthWegFDmhAXgj1BEK7JEJSVZ-M4fQWI', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?width=108&crop=smart&auto=webp&s=6db5a4f59bc3086aa0a4bf5c0808046933c6db42', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?width=216&crop=smart&auto=webp&s=d863d041e580e89665c207744099f37934c446da', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?width=320&crop=smart&auto=webp&s=780f070b69cd6cc1f832cde994a1dec5758f0cbc', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?width=640&crop=smart&auto=webp&s=59f4e978ece78b0a2857c559fbe26a32a4f62f14', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?width=960&crop=smart&auto=webp&s=9edcba148e7453c7bdc96a40bcc26aca4d4020ce', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?width=1080&crop=smart&auto=webp&s=b5a8cf26e866d951c2f4de7bc913f1f016c89830', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/atbjtbdrpk1c1.jpg?auto=webp&s=f51504148563859e06cfa37271ce7e036bc5ef65', 'width': 1920}, 'variants': {}}]} | |||
A guide to open-source LLM inference and performance | 7 | 2023-11-20T21:44:29 | https://www.baseten.co/blog/llm-transformer-inference-guide/ | nocturnelm | baseten.co | 1970-01-01T00:00:00 | 0 | {} | 17zzxr9 | false | null | t3_17zzxr9 | /r/LocalLLaMA/comments/17zzxr9/a_guide_to_opensource_llm_inference_and/ | false | false | default | 7 | null | |
Open Source RAG Agents with Conversational Memory | 3 | I want to use an open source LLM as a RAG agent that also has memory of the current conversation (and eventually I want to work up to memory of previous conversations). I was looking into conversational retrieval agents from Langchain (linked below), but it seems they only work with OpenAI models. Is it possible to get an open source LLM to work with RAG and conversational memory using Langchain?
[https://python.langchain.com/docs/use\_cases/question\_answering/conversational\_retrieval\_agents](https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents) | 2023-11-20T21:13:35 | https://www.reddit.com/r/LocalLLaMA/comments/17zz5vp/open_source_rag_agents_with_conversational_memory/ | tail-recursion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zz5vp | false | null | t3_17zz5vp | /r/LocalLLaMA/comments/17zz5vp/open_source_rag_agents_with_conversational_memory/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=108&crop=smart&auto=webp&s=4806821b19a384d8270fee66e851537817cdac4e', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=216&crop=smart&auto=webp&s=0bdf6ca90dcebbc73d6ff30b79f54814b931344d', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=320&crop=smart&auto=webp&s=dd7a799219f465b4f913aa10969c5ee900913404', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=640&crop=smart&auto=webp&s=e1d1617519e0321944016ee242a7999669714f39', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?auto=webp&s=8d662951305a88ba511f842901937fb729991cb9', 'width': 794}, 'variants': {}}]} |
Would you use an iOS app for your own private, offline AI? | 7 | As you can see, still working out the kinks!
But I have a private testflight alpha live, and I’m working toward an App Store release soon.
Anyone interested in being an early tester? Emphasis on early 😅
Supports Metal for GPU inference if your device has the RAM! iPhone 12 Pro & later, iPhone 14 & later all can run w gpu offloading
DM me here or on [twitter](https://twitter.com/p3ery) | 2023-11-20T21:01:49 | brittlewis12 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17zyvi8 | true | null | t3_17zyvi8 | /r/LocalLLaMA/comments/17zyvi8/would_you_use_an_ios_app_for_your_own_private/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'J91bglpf1r6QAeTweGtDmgvFXh7Z2JnkRKSrKuinYiY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?width=108&crop=smart&auto=webp&s=632d78d40e513d7d6a4dac511d6f8d129a624a07', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?width=216&crop=smart&auto=webp&s=83ffbc223c6bb198260c8e74964be7709f6c0d85', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?width=320&crop=smart&auto=webp&s=62587ed5094033fb14e5479da45550aec1f03aff', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?width=640&crop=smart&auto=webp&s=5185098b077d11d60649f7b7f71b1c9a58cf393e', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?width=960&crop=smart&auto=webp&s=a27ee2ed448681b86a4cf3be6790ffec8f5f0373', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?width=1080&crop=smart&auto=webp&s=6fd8c2c8eebe85ae045fa9e113ce43f381f62ad6', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://preview.redd.it/jgd5bjoihk1c1.jpeg?auto=webp&s=34980373714a5e9fb04241327a3dabe77ba5f9d6', 'width': 1290}, 'variants': {}}]} | ||
Thoughts: Can you run a decent LLM on a RTX 4060 TI (16GB)? | 1 | Has anyone of you tried any low to mid tier LLMs that run smoothly on the 4060 TI? Any thoughts? | 2023-11-20T20:51:39 | https://www.reddit.com/r/LocalLLaMA/comments/17zymty/thoughts_can_you_run_a_decent_llm_on_a_rtx_4060/ | GerlamoRodion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zymty | false | null | t3_17zymty | /r/LocalLLaMA/comments/17zymty/thoughts_can_you_run_a_decent_llm_on_a_rtx_4060/ | false | false | self | 1 | null |
Finetuned llama2 deployment with vllm | 2 | Hello everyone,
I've fine-tuned llama2 using my own dataset and now I'm looking to deploy it. The adapter weights are uploaded to HF, and the base model I'm using is h2oai/h2ogpt-4096-llama2-13b-chat.
I've been exploring the vllm project, finding it quite useful initially. However, I've run into a snag with my LoRA fine-tuned model. It seems to be searching for config.json, but since I've uploaded LoRA adapters, there's no config.json available.
Am I overlooking something in my approach, or does vllm not support LoRA fine-tuned models? Any insights or guidance would be greatly appreciated. | 2023-11-20T20:51:05 | https://www.reddit.com/r/LocalLLaMA/comments/17zymc5/finetuned_llama2_deployment_with_vllm/ | mano3-1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zymc5 | false | null | t3_17zymc5 | /r/LocalLLaMA/comments/17zymc5/finetuned_llama2_deployment_with_vllm/ | false | false | self | 2 | null |
Finetuning Mistral, Llama2 & others with Lora: Proper Code Setup | 1 | I am pretty confused on setting up a SFTTrainer fine tuning of a model like Mistral-7b. I've seen a ton of different example notebooks all use different parameters and the documentation on HuggingFace isn't clarifiying. My questions are the following:
1. After I setup a LoraConfig i.e.
peft\_config = LoraConfig(
lora\_alpha=16,
lora\_dropout=0.1,
r=64,
bias="none",
task\_type="CAUSAL\_LM",
)
Is it enough to just pass this into the SFTTrainer argument peft\_config, or is it required to also call "model = get\_peft\_model(model, peft\_config)". I've seen a bunch of notebooks skip this and "model = prepare\_model\_for\_kbit\_training(model)" and I've seen others say its important. I assumed passing it as an SFTTrainer argument maybe bypasses the need to call those functions directly. Do i need the call merge and unload after training?
2. There seems to be no consensus on the tokenizer setup. I've seeing people say padding\_side=left is correct and others say that doesn't work. Do i need to add tokenizer.pad\_token = tokenizer.eos\_token? What is the full proper setup for the tokenizer and where is the source of truth on this anyway? The Mistral website?
Thank you for the help. I am new to LLM finetuning and want to make sure I'm understanding this properly.
​
​ | 2023-11-20T20:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/17zyjgi/finetuning_mistral_llama2_others_with_lora_proper/ | PrettyExpression6106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zyjgi | false | null | t3_17zyjgi | /r/LocalLLaMA/comments/17zyjgi/finetuning_mistral_llama2_others_with_lora_proper/ | false | false | self | 1 | null |
Are there people who have ran MI25s, MI60s, etc for LLMs? | 7 | I am looking to get a MI60 for both LLMs and other high compute tasks as some are going for $350 on ebay. It looks like a really good deal for my applications and with the 32GBs of RAM, but I was wondering what others have experienced with it for use with LLMs. I am curious on how compatibility was for OpenCL or ROCm, I mainly use Windows so am wondering if I can still use it with most of its speed through windows, and what kind of speeds people are getting using it with models.
Thank you! | 2023-11-20T20:26:36 | https://www.reddit.com/r/LocalLLaMA/comments/17zy1kr/are_there_people_who_have_ran_mi25s_mi60s_etc_for/ | Nix_The_Furry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zy1kr | false | null | t3_17zy1kr | /r/LocalLLaMA/comments/17zy1kr/are_there_people_who_have_ran_mi25s_mi60s_etc_for/ | false | false | self | 7 | null |
Petition for OpenAI to Release their IP and Datasets | 1 | [removed] | 2023-11-20T20:21:12 | https://www.reddit.com/r/LocalLLaMA/comments/17zxx2o/petition_for_openai_to_release_their_ip_and/ | middlenameishardwork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zxx2o | false | null | t3_17zxx2o | /r/LocalLLaMA/comments/17zxx2o/petition_for_openai_to_release_their_ip_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'R2Aiu1TzgPup8j5G5BXISN0mmTfF-muVjU7NNfHNDmM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZIfPN_84yOb_pFkT5mTczawAROptqi9jnTjPf05wuZE.jpg?width=108&crop=smart&auto=webp&s=8e29db4b5035108d33389ff6066f1c5318d8b354', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZIfPN_84yOb_pFkT5mTczawAROptqi9jnTjPf05wuZE.jpg?width=216&crop=smart&auto=webp&s=3cc3d05b1f4b16d1efc13c0207c07fa9dc1df602', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/ZIfPN_84yOb_pFkT5mTczawAROptqi9jnTjPf05wuZE.jpg?width=320&crop=smart&auto=webp&s=f1c9e529fbb9cc37e22acfec9d32b91b49f37a01', 'width': 320}], 'source': {'height': 337, 'url': 'https://external-preview.redd.it/ZIfPN_84yOb_pFkT5mTczawAROptqi9jnTjPf05wuZE.jpg?auto=webp&s=fc2047990f9ca300b7ac1a3c620b7c88201458a3', 'width': 600}, 'variants': {}}]} |
Tesla P40 for Mistral 7b | 1 | [removed] | 2023-11-20T20:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/17zxjg2/tesla_p40_for_mistral_7b/ | Tiny_Yellow_7869 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zxjg2 | false | null | t3_17zxjg2 | /r/LocalLLaMA/comments/17zxjg2/tesla_p40_for_mistral_7b/ | false | false | self | 1 | null |
Need 32 GB VRAM, is possible to multi Gpu from different pc? | 1 | [removed] | 2023-11-20T19:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/17zxdyj/need_32_gb_vram_is_possible_to_multi_gpu_from/ | Substantial-Scene-85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zxdyj | false | null | t3_17zxdyj | /r/LocalLLaMA/comments/17zxdyj/need_32_gb_vram_is_possible_to_multi_gpu_from/ | false | false | self | 1 | null |
Training and Validation Loss Behavior While Fine-Tuning llama-2 | 4 | I am currently fine-tuning llama2 7B model on my own custom dataset. Currently, I am training for 10k steps, and performing evaluation every 500 steps. I see that over the training process, the training loss is very noisy, but goes down as the steps increase. However, the evaluation loss gradually decreases smoothly, but only by 0.1 over the entire training process. I attached the images to this post, where the first shows the evaluation loss, the second shows the training loss, and the third shows both the evaluation and training loss on the same plot.
I'm confused, because the training loss goes down by more than 1.0 over 10k steps, while the evaluation loss goes down by only 0.1 in that process. I don't think this is a sign of overfitting, because if it was overfitting, shouldn't the evaluation loss get higher instead of lower?
Can someone please help me understand this behavior, and if it is expected when fine-tuning llama-2 7B using QLora on a custom dataset? | 2023-11-20T19:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/17zwh3z/training_and_validation_loss_behavior_while/ | Funny_Rule2482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zwh3z | false | null | t3_17zwh3z | /r/LocalLLaMA/comments/17zwh3z/training_and_validation_loss_behavior_while/ | false | false | self | 4 | null |
Using multiple GPUs for evaluation during fine-tuning of llama-2-7b | 1 | Hi, I am currently working on finetuning the llama-2-7b model on my own custom dataset using QLoRA. Right now, I have access to 4 Nvidia A100 GPUs, with 40GB memory each. I am training for 20000 steps, and realized that the training is going by very quickly (using multiple GPUs), while the evaluation is taking a very long time at each evaluation step (i'm assuming it is only using one GPU). My dataset has around 144k rows, and the evaluation data has around 17k rows (which is significantly less). It is taking around 4 minutes to go through 100 train steps, but 35 minutes to evaluate the model at each eval\_step. Has anyone run into this issue before?
I'm loading my model using device\_map="auto" for distributing the model weights across the GPUs:
model = AutoModelForCausalLM.from_pretrained(
llm_name,
quantization_config=bnb_config,
trust_remote_code=True,
torch_dtype=torch.float16, # if not use_bf16 else torch.bfloat16,
device_map = "auto"
)
Below are my training arguments:
output_dir = "./results"
per_device_train_batch_size = 22
per_device_eval_batch_size = 22
gradient_accumulation_steps = 1
eval_accumulation_steps = 1
optim = "paged_adamw_32bit"
save_steps = 1000
logging_steps = 1
learning_rate = 0.00002
max_grad_norm = 0.3
max_steps = 20000
warmup_ratio = 0.03
lr_scheduler_type = "constant"
training_arguments = TrainingArguments(
report_to="wandb",
output_dir=output_dir,
per_device_train_batch_size=per_device_train_batch_size,
per_device_eval_batch_size=per_device_eval_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
eval_accumulation_steps = eval_accumulation_steps,
optim=optim,
save_steps=save_steps,
logging_steps=logging_steps,
learning_rate=learning_rate,
fp16=True,
max_grad_norm=max_grad_norm,
max_steps=max_steps,
warmup_ratio=warmup_ratio,
group_by_length=True,
lr_scheduler_type=lr_scheduler_type,
#do_eval=True,
evaluation_strategy="steps",
eval_steps= 1000
)
Finally, here is my SFTTrainer:
trainer = SFTTrainer(
model=model,
train_dataset=fineune_dataset_dict['train'],
eval_dataset=fineune_dataset_dict['validation'],
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=max_seq_length,
tokenizer=tokenizer,
args=training_arguments
)
for name, module in trainer.model.named_modules():
if "norm" in name:
module = module.to(torch.float32)
trainer.train()
Please let me know how I can improve the speed of the evaluation, given that I have access to multiple GPUs. | 2023-11-20T19:11:45 | https://www.reddit.com/r/LocalLLaMA/comments/17zw9f7/using_multiple_gpus_for_evaluation_during/ | Funny_Rule2482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zw9f7 | false | null | t3_17zw9f7 | /r/LocalLLaMA/comments/17zw9f7/using_multiple_gpus_for_evaluation_during/ | false | false | self | 1 | null |
Open LLM Leaderboard vs Reality: How do you evaluate "good" ? | 29 | As a beginner, I appreciate that there are metrics for all these LLMs out there so I don't waste time downloading and trying failures. However, I noticed that the [Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) doesn't exactly reflect reality for me. YES, I DO UNDERSTAND THAT IT DEPENDS ON MY NEEDS.
I mean really basic stuff of how the LLM acts as a coherent agent, can follow instructions and grasp context in any given situation. Which is often lacking in LLMs I am trying so far, like the boards leader for 30B models 01-ai/Yi-34B for example. I guess there is something similar going on like it used to with GPU benchmarks: dirty tricks and over-optimization for the tests.
I am interested in how more experienced people here evaluate an LLM's fitness. Do you have a battery of questions and instructions you try out first? | 2023-11-20T18:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/17zvbhj/open_llm_leaderboard_vs_reality_how_do_you/ | BlueMetaMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zvbhj | false | null | t3_17zvbhj | /r/LocalLLaMA/comments/17zvbhj/open_llm_leaderboard_vs_reality_how_do_you/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=108&crop=smart&auto=webp&s=90f4efd1c1314faf5b0cd1c5eeb8d2835fe4a3ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=216&crop=smart&auto=webp&s=062336de177b9f9f124a98f4e03b59faa819be1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=320&crop=smart&auto=webp&s=01e9aeccc0d76fee4ecb359bfb6238dc2afd87f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=640&crop=smart&auto=webp&s=723b41bc410ff59454cf7a9a3db4eced43d4868f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=960&crop=smart&auto=webp&s=2c43ffe72f7f32d522c3e85c1aa8e25d6f213b38', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=1080&crop=smart&auto=webp&s=6a7d63eae44237642a3f95e586436bf6efe5dd70', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?auto=webp&s=51349e0b781d1c9e91535974e09833705c76a3cc', 'width': 1200}, 'variants': {}}]} |
Bright Eye: multipurpose AI app for your AI services. | 1 | [removed] | 2023-11-20T18:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/17zv8ds/bright_eye_multipurpose_ai_app_for_your_ai/ | MultipurposeAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zv8ds | false | null | t3_17zv8ds | /r/LocalLLaMA/comments/17zv8ds/bright_eye_multipurpose_ai_app_for_your_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5YQYRys-_4QUdjOi0giTcNX75odOmdQkfIRkop5PY-A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
Structured Output with Zephyr | 5 | I've been experimenting with Zephyr and pretty surprised by the great performance. One problem I have with Zephyr is the difficulty to build structured outputs. For example if I ask it to return only True or False, it will return a lengthy explanation. This makes it tough to use Zephyr as part of a production system, since incorrectly structured outputs have huge implications.
Has anyone found some tricks to make Zephyr produce outputs in a defined format and do so deterministically? | 2023-11-20T17:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/17zu5om/structured_output_with_zephyr/ | Smsmith714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zu5om | false | null | t3_17zu5om | /r/LocalLLaMA/comments/17zu5om/structured_output_with_zephyr/ | false | false | self | 5 | null |
The whole local LLM crowd right now: | 0 | 2023-11-20T17:41:29 | NLTPanaIyst | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17zu4m6 | false | null | t3_17zu4m6 | /r/LocalLLaMA/comments/17zu4m6/the_whole_local_llm_crowd_right_now/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'gfr6uy2shj1c1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/gfr6uy2shj1c1.jpeg?width=108&crop=smart&auto=webp&s=e19f7d93898b9061d8c6d92fdde52b1eeb1f06d5', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/gfr6uy2shj1c1.jpeg?width=216&crop=smart&auto=webp&s=20a99f4c97db75c0be12e4601a0ee15d8c745911', 'width': 216}, {'height': 326, 'url': 'https://preview.redd.it/gfr6uy2shj1c1.jpeg?width=320&crop=smart&auto=webp&s=157e87444c1838e88e6815d01183624751517088', 'width': 320}, {'height': 652, 'url': 'https://preview.redd.it/gfr6uy2shj1c1.jpeg?width=640&crop=smart&auto=webp&s=1c879d59ae1eee584aa11ace60a1b5096a553e2a', 'width': 640}], 'source': {'height': 652, 'url': 'https://preview.redd.it/gfr6uy2shj1c1.jpeg?auto=webp&s=410ab3cf168b5e1231894e9221146e3f00db0155', 'width': 640}, 'variants': {}}]} | ||
GPT4ALL | 4 | I am putting together a GPT4ALL installation and I wanted to catalog all the models that work with it and their primary purpose (or what they are best at). Goal is to have the multi-select available depending on the chat I want to have.
Can anyone point me to a model list that might have this detail (purpose, best use case etc) or can you suggest an alternative GUI and model set that might be better than GPT4ALL. Or just an alternative that I can try out as well.
Thanks all. | 2023-11-20T17:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/17ztxm2/gpt4all/ | f365legend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ztxm2 | false | null | t3_17ztxm2 | /r/LocalLLaMA/comments/17ztxm2/gpt4all/ | false | false | self | 4 | null |
How to prompt engineering non-openai models to output structured json | 2 | Does anyone know how function calling works under the hood? | 2023-11-20T17:21:42 | https://www.reddit.com/r/LocalLLaMA/comments/17ztnt7/how_to_prompt_engineering_nonopenai_models_to/ | backtrack432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ztnt7 | false | null | t3_17ztnt7 | /r/LocalLLaMA/comments/17ztnt7/how_to_prompt_engineering_nonopenai_models_to/ | false | false | self | 2 | null |
667 of OpenAI's 770 employees have threaten to quit. Microsoft says they all have jobs at Microsoft if they want them. | 694 | 2023-11-20T17:13:48 | https://www.cnbc.com/2023/11/20/hundreds-of-openai-employees-threaten-to-follow-altman-to-microsoft-unless-board-resigns-reports-say.html | fallingdowndizzyvr | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 17ztgyx | false | null | t3_17ztgyx | /r/LocalLLaMA/comments/17ztgyx/667_of_openais_770_employees_have_threaten_to/ | false | false | 694 | {'enabled': False, 'images': [{'id': 'cJK4Bz-ESK3o0PwERlQh4-48_zUabUqcpcOyhuY0czM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?width=108&crop=smart&auto=webp&s=2350e7770432e7480dbe7bd2a1c2652f481d51f9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?width=216&crop=smart&auto=webp&s=415161c6efd5f1f5ad1f921e335d3e351268c9cf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?width=320&crop=smart&auto=webp&s=750d8b9898b1cc4dcb828800ce97df22b7635bea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?width=640&crop=smart&auto=webp&s=45f0ea96f3729a54c4806ac574d329ae33b727d7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?width=960&crop=smart&auto=webp&s=3167317349ec24c33d5f24bed886b373b6890198', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?width=1080&crop=smart&auto=webp&s=d02a4397d161dc815ac8087393f19c6956ab23ec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/LQ0e4Q6jKcANw8BJX05k4KGCKU8X0dWRszBjH5PeWVg.jpg?auto=webp&s=7b915ff63130055313fb03cdd4ebec9c123d76f8', 'width': 1920}, 'variants': {}}]} | ||
Negative KL while doing RLHF with TRL | 1 | Hi, anyone here train successfully train rlhf model? I am fine-tuning the TRL's PPOTrainer on the open-assistant dataset using 4-bit Qlora. However, I am getting negative KL from the very first step. So far I have tried the following things. However, all these configurations give me negative KL.
1. learning rate \[1.41e-5, 1e-5, 1e-4, 1.41e-5 (with cosine scheduler)\]
2.init\_kl\_coef of 0.2 and 0.02
3.clip\_range and clip value of \[default, 0.15\]
A few things I have noticed:
1. objective/logprob and objective/ref\_logprob are not widely distributed
2. Mean reward is still going up but KL is going down.
3. The generated response is still coherent and make sense. (getting a warning about negative KL)
Any help is much appreciated. Here are my [wandb](https://wandb.ai/alikhan/rl/runs/npbcf939?workspace=user-alikhan) logs for more info. | 2023-11-20T16:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/17zsvpe/negative_kl_while_doing_rlhf_with_trl/ | ali0100u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zsvpe | false | null | t3_17zsvpe | /r/LocalLLaMA/comments/17zsvpe/negative_kl_while_doing_rlhf_with_trl/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5KdweIdpkZNUpImFAI957DcI8sdfHZxzZc91lprSuBA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=108&crop=smart&auto=webp&s=976c80388c5cc130d858bcb78b0e344a46d232c4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=216&crop=smart&auto=webp&s=1fb375c3fb11dcab79db7220e2666b8b1d83ce88', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=320&crop=smart&auto=webp&s=001c2963240bad858778cf415949d89795a35210', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=640&crop=smart&auto=webp&s=261c9deb7e6c15da8d3149302d6c95caaa3a9257', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?auto=webp&s=f56c07642fbcaa0cfd68f0d3d45e630518610b54', 'width': 900}, 'variants': {}}]} |
New to running a local LLM and TTS. But new PC was built with AMD GPU. What are my options? | 1 | From preliminary readings, I know an NVIDIA is preferred over a GPU, but the PC was built before I had an interest to locally run a LLM and TTS. The purpose would be to try and graft a local Game Master to run semi-solo tabletop sessions. What would be my options?
Processor: Intel i7 12700k 3.6 GHz
GPU: AMD Radeon RX 6800 XT
RAM: 32.0 GB
Motherboard: MS-7D25
​ | 2023-11-20T16:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/17zsrd8/new_to_running_a_local_llm_and_tts_but_new_pc_was/ | PK_Cheesecake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zsrd8 | false | null | t3_17zsrd8 | /r/LocalLLaMA/comments/17zsrd8/new_to_running_a_local_llm_and_tts_but_new_pc_was/ | false | false | self | 1 | null |
Run small local llama.cpp on 1060 6gb or rx 580 8gb | 1 | [removed] | 2023-11-20T16:29:24 | https://www.reddit.com/r/LocalLLaMA/comments/17zsf5l/run_small_local_llamacpp_on_1060_6gb_or_rx_580_8gb/ | Substantial-Scene-85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zsf5l | false | null | t3_17zsf5l | /r/LocalLLaMA/comments/17zsf5l/run_small_local_llamacpp_on_1060_6gb_or_rx_580_8gb/ | false | false | self | 1 | null |
QA Expert: The LLM to handle Multi-hop Question Answering | 37 | Hi everyone,
Repo: [https://github.com/khaimt/qa\_expert](https://github.com/khaimt/qa_expert)
I just released a Mistral based model that is finetuned **exclusively for handling Multi-hop question answering**. The model will **decompose a multi-hop question into single questions**, then **retrieve relevant information** to single questions to answer these single questions. Finally, the model will **summarize the answers to single questions to generate the final answer** to the original multi-hop question.
The model assumes that we already have a **retrieval function**, named: ***retrieve(query: string)***: return relevant information to a query, this can be **querying a vector DB or search engine**. The model will automatically choose to use function: retrieve with certain query **when it needs to collect** more information or **directly generate the answer** to the question if **the information is already enough** (this is like **function calling** but here, we have only **1 function named: retrieve(query: string)**)
**Example 1**, to handle question: "**what are some tourist attractions in the biggest city in Japan**", the model will do the following steps:
\+ **retrieve**("what is the biggest city in Japan")
\--> retrieval result: ... Tokyo, with almost nine million inhabitants, is by far the largest Japanese city...
\+ **retrieve**("what are some tourist attractions in Tokyo?")
\--> retrieval result: ...Top Attractions in Tokyo · 1. Shinjuku Gyoen National Garden · 2. Senso-ji Temple · 3. Meiji Jingu Shrine...
\+ **answer**: some tourist attractions of the biggest city in Japan are: Shinjuku Gyoen National Garden, Senso-ji Temple, Meiji Jingu Shrine ...
**Example2**, to handle question: "what is the population of Vietnam compared with Philippines"
the model will do the following steps:
\+ **retrieve**("what is the population of Vietnam")
\--> retrieval result: ...The current population of Vietnam (or Viet Nam) is 99,104,901...
\+ **retrieve**("what is the population of Philippines")
\--> retrieval result: ...The current population of the Philippines is 118,020,404 ...
\+ **answer**: the population of Vietnam is significantly smaller compared with the population of the Philippines
More information (how to use model, run demo, training code, training data) can be found from this Repo: [https://github.com/khaimt/qa\_expert](https://github.com/khaimt/qa_expert)
Huggingface Hub: [https://huggingface.co/khaimaitien/qa-expert-7B-V1.0](https://huggingface.co/khaimaitien/qa-expert-7B-V1.0)
GGUF files: [https://huggingface.co/khaimaitien/qa-expert-7B-V1.0-GGUF](https://huggingface.co/khaimaitien/qa-expert-7B-V1.0-GGUF)
Created Training data: [https://huggingface.co/datasets/khaimaitien/qa-expert-multi-hop-qa-V1.0](https://huggingface.co/datasets/khaimaitien/qa-expert-multi-hop-qa-V1.0) | 2023-11-20T16:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/17zsb0w/qa_expert_the_llm_to_handle_multihop_question/ | Relevant_Outcome_726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zsb0w | false | null | t3_17zsb0w | /r/LocalLLaMA/comments/17zsb0w/qa_expert_the_llm_to_handle_multihop_question/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'DNkjm67l0he6DrR1e2_uhBUNM9-9X1JriExzWnYR_Bs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?width=108&crop=smart&auto=webp&s=95b837f90ec8f9d3fbdc9c104d276f1de3283771', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?width=216&crop=smart&auto=webp&s=707f6c5d6bafd85aec00949cb9fca468ff5347da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?width=320&crop=smart&auto=webp&s=c79d8c7ac956f455a8eb97905872738b7f242be4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?width=640&crop=smart&auto=webp&s=2067d41be6a721eef22e4f2861b6e355f52a472b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?width=960&crop=smart&auto=webp&s=55d9c20554df8ce863658db7ec7077438946fb06', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?width=1080&crop=smart&auto=webp&s=c9ca68b941910a1ce72be7d7c9f71b61b13c5039', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PA1f6wm_uCRRIxxQ9TiVPDB-cf0YWjZ0kZMeJqPsR3k.jpg?auto=webp&s=b1af06a80efcc1e39fd3aca967f498dbf9de97d9', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.