Column schema:

* title: string, length 1 to 300
* score: int64, 0 to 8.54k
* selftext: string, length 0 to 41.5k
* created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
* url: string, length 0 to 878
* author: string, length 3 to 20
* domain: string, length 0 to 82
* edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
* gilded: int64, 0 to 2
* gildings: string, 7 classes
* id: string, length 7
* locked: bool, 2 classes
* media: string, length 646 to 1.8k
* name: string, length 10
* permalink: string, length 33 to 82
* spoiler: bool, 2 classes
* stickied: bool, 2 classes
* thumbnail: string, length 4 to 213
* ups: int64, 0 to 8.54k
* preview: string, length 301 to 5.01k
SOTA open-sourced Math reasoning LLMs. A solver, prover, verifier, augmentor. [Discussion]
86
**Shanghai AI Laboratory introduces new SOTA math LLMs, open-sourced in 7B and 20B sizes.**

GitHub: [https://github.com/InternLM/InternLM-Math](https://github.com/InternLM/InternLM-Math)
Hugging Face: [https://huggingface.co/internlm/internlm2-math-7b](https://huggingface.co/internlm/internlm2-math-7b)
Web demo: [https://huggingface.co/spaces/internlm/internlm2-math-7b](https://huggingface.co/spaces/internlm/internlm2-math-7b)

# Features:

* **7B and 20B Chinese and English math LMs with better-than-ChatGPT performance.** InternLM2-Math is continue-pretrained from InternLM2-Base on ~100B high-quality math-related tokens and SFT-ed with ~2M bilingual math supervised examples. We apply MinHash and exact number match to decontaminate possible test-set leakage.
* **Lean added as a supported language for math problem solving and theorem proving.** We are exploring combining Lean 3 with InternLM-Math for verifiable math reasoning. InternLM-Math can generate Lean code for simple math reasoning tasks like GSM8K, or suggest proof tactics based on Lean states.
* **Can also be used as a reward model, supporting Outcome/Process/Lean reward modeling.** We supervise InternLM2-Math with various types of reward-modeling data, so that it can also verify chain-of-thought processes. We also add the ability to convert a chain-of-thought process into Lean 3 code.
* **A math LM augmentation helper and code interpreter.** InternLM2-Math can help augment math reasoning problems and solve them using the code interpreter, which lets you generate synthetic data faster!

# Performances:

https://preview.redd.it/a0c8g3op5dec1.png?width=1175&format=png&auto=webp&s=7a478de470bbe0f1e7f65da495a7057dc3e74880
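The decontamination step mentioned above (MinHash plus exact number match) can be sketched in miniature. This is an illustrative near-duplicate check, not the authors' actual pipeline; the shingle size, hash count, and threshold are all assumptions:

```python
import hashlib

def shingles(text, n=8):
    # Character n-gram shingles of the text (n=8 is an arbitrary choice here).
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def minhash_signature(text, num_hashes=64):
    # Cheap MinHash: for each of num_hashes seeded hash functions,
    # keep the minimum hash value over all shingles.
    return [
        min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        )
        for seed in range(num_hashes)
    ]

def jaccard_estimate(a, b):
    # Fraction of matching signature positions estimates Jaccard similarity.
    sa, sb = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

def contaminated(train_doc, test_doc, threshold=0.8):
    # Flag a training document that is a near-duplicate of a test item.
    return jaccard_estimate(train_doc, test_doc) >= threshold
```

In a real pipeline you would additionally index signatures with locality-sensitive hashing instead of comparing every pair, and combine this with the exact-number-match filter the post mentions.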
2024-01-24T10:25:35
https://www.reddit.com/r/LocalLLaMA/comments/19ee0dh/sota_opensourced_math_reasoning_llms_a_solver/
OpenMMLab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ee0dh
false
null
t3_19ee0dh
/r/LocalLLaMA/comments/19ee0dh/sota_opensourced_math_reasoning_llms_a_solver/
false
false
https://a.thumbs.redditm…yUt5IvKC2QS8.jpg
86
Importing documents to LLM
1
[removed]
2024-01-24T10:01:46
https://www.reddit.com/r/LocalLLaMA/comments/19edos5/importing_documents_to_llm/
hmmqzaz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19edos5
false
null
t3_19edos5
/r/LocalLLaMA/comments/19edos5/importing_documents_to_llm/
false
false
self
1
null
MLX now supports loading GGUF directly from Huggingface models
64
[https://github.com/ml-explore/mlx-examples/blob/main/llms/gguf_llm/README.md](https://github.com/ml-explore/mlx-examples/blob/main/llms/gguf_llm/README.md)

However, only a few quantizations are supported directly: Q4_0, Q4_1, and Q8_0. Unsupported quantizations will be cast to float16.

Great addition to Apple MLX!
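The support matrix in the post can be captured in a tiny helper for checking a file's quantization before loading. The function is my own sketch for illustration, not part of MLX:

```python
# Quantizations MLX can load from GGUF natively, per the post above.
DIRECTLY_SUPPORTED = {"Q4_0", "Q4_1", "Q8_0"}

def mlx_load_plan(quantization: str) -> str:
    """Return how MLX would handle a GGUF file of the given quantization:
    load it natively, or cast the weights to float16."""
    if quantization.upper() in DIRECTLY_SUPPORTED:
        return "native"
    return "cast-to-float16"
```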
2024-01-24T07:48:28
https://www.reddit.com/r/LocalLLaMA/comments/19ebvdw/mlx_now_supports_loading_gguf_directly_from/
ifioravanti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ebvdw
false
null
t3_19ebvdw
/r/LocalLLaMA/comments/19ebvdw/mlx_now_supports_loading_gguf_directly_from/
false
false
self
64
Cuda out of memory error ONLY if larger amount of text pasted into notebook window in Oobabooga.
6
Hello! I've found that with my setup I can run certain models past 10k context, as long as I'm generating the text incrementally in the notebook tab of the app. But if I paste in 4k worth of text and hit generate, I get an error, even though it was previously able to go past that amount of context.

As a workaround, if I paste small segments into the notebook, hit generate, wait for it to load, then stop it and paste more, it works, and is eventually able to handle the longer context.

Is there any way to fix this? It seems like a bug where it's not able to properly load everything at once. Is there a way to get Oobabooga to load the prompt in segments that fit?
2024-01-24T07:04:30
https://www.reddit.com/r/LocalLLaMA/comments/19eb8ef/cuda_out_of_memory_error_only_if_larger_amount_of/
darth_hotdog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eb8ef
false
null
t3_19eb8ef
/r/LocalLLaMA/comments/19eb8ef/cuda_out_of_memory_error_only_if_larger_amount_of/
false
false
self
6
null
Issue with function calling in OpenAI‘s Assistants API — plz help
2
Hey dear people! I have a challenge with OpenAI's Assistants API and function calling: it is very unstable in calling the function I need. Or maybe the problem is with my architecture in general 🙂

In essence, our tool takes a job candidate's CV as input and extracts valuable information from it, like experience, skills, etc. We send string input and get JSON output after the Assistant calls a function with it. We do this over multiple requests, so we have more than one function.

It seems very unreliable in calling the function I need and constantly calls other functions instead. It is very frustrating. Is this something you encounter as well? How do you handle it? Or maybe it is too early to use the Assistants API, and everyone is simply using Chat Completions for production use?

I would appreciate every suggestion!
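One common workaround (on the Chat Completions endpoint rather than Assistants) is to force a specific function with `tool_choice` instead of letting the model pick. A sketch of the request body; the function name and schema here are made up for illustration:

```python
def forced_tool_request(model: str, cv_text: str) -> dict:
    """Build a Chat Completions request body that forces one specific
    function call, so the model cannot wander off to other tools."""
    tool = {
        "type": "function",
        "function": {
            "name": "extract_cv_fields",  # hypothetical function name
            "description": "Extract structured fields from a CV.",
            "parameters": {
                "type": "object",
                "properties": {
                    "experience": {"type": "array", "items": {"type": "string"}},
                    "skills": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["experience", "skills"],
            },
        },
    }
    return {
        "model": model,
        "messages": [{"role": "user", "content": cv_text}],
        "tools": [tool],
        # tool_choice pins the model to this one function instead of
        # letting it choose among all registered tools.
        "tool_choice": {"type": "function",
                        "function": {"name": "extract_cv_fields"}},
    }
```

With one forced function per request, the "wrong function" failure mode disappears by construction, at the cost of deciding yourself which extraction to run.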
2024-01-24T06:36:23
https://www.reddit.com/r/LocalLLaMA/comments/19easla/issue_with_function_calling_in_openais_assistants/
chernikovalexey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19easla
false
null
t3_19easla
/r/LocalLLaMA/comments/19easla/issue_with_function_calling_in_openais_assistants/
false
false
self
2
null
Introducing Qwen
32
[https://qwenlm.github.io/blog/qwen/](https://qwenlm.github.io/blog/qwen/)

Qwen models have been out for some time, but this article summarizes the group's current work and highlights some of their future plans. It also has benchmarks for their existing models and one private model that I don't think I've seen before.

How do people find Qwen models in practice?

https://preview.redd.it/zrn8r7xvsbec1.png?width=1012&format=png&auto=webp&s=a33784240ec1e7339b7ff1d7f876b0eea354c6ac
2024-01-24T05:43:52
https://www.reddit.com/r/LocalLLaMA/comments/19e9xfj/introducing_qwen/
DreamGenAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e9xfj
false
null
t3_19e9xfj
/r/LocalLLaMA/comments/19e9xfj/introducing_qwen/
false
false
https://b.thumbs.redditm…ne5rOeDnvILg.jpg
32
null
Budget video card
3
I've been looking into running a local model to integrate with Home Assistant. I have an old PC lying around that just needs a better graphics card. It's got 2x PCI 2.0 x16 slots, and the PSU only has 8-pin connectors, so it looks like the new Super cards won't work, and I want to keep it cheap. I've been looking at a 3060 12GB or a 4060 Ti 16GB. It looks like I could run a 13b model on either one.

1. Would the 3060 be better since it has a faster memory bus, or is the 4060 Ti better because it has a faster clock and more CUDA cores?
2. If I get a second one later on, would I be able to run a 30b model? Would it be slow since it'll only be PCI 2.0?
3. Can you run more than one model on the same card? For Home Assistant I want to be able to run Whisper for TTS/STT. It's a small model that will run on a 1060 3GB.
4. Any suggestions on what model to pick? Has anyone else integrated Home Assistant with their local LLM?
2024-01-24T05:31:00
https://www.reddit.com/r/LocalLLaMA/comments/19e9plk/budget_video_card/
jman88888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e9plk
false
null
t3_19e9plk
/r/LocalLLaMA/comments/19e9plk/budget_video_card/
false
false
self
3
null
Optimizing Source Citations for Comprehensive References: How to attribute Documents Instead of Chunks?
1
[removed]
2024-01-24T04:50:57
https://www.reddit.com/r/LocalLLaMA/comments/19e90dj/optimizing_source_citations_for_comprehensive/
Gon_Buruwa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e90dj
false
null
t3_19e90dj
/r/LocalLLaMA/comments/19e90dj/optimizing_source_citations_for_comprehensive/
false
false
default
1
null
Lamacpp server mode prompt processing
6
Is it possible to run llama.cpp as a server and handle the prompt like in non-server mode: ./main -ngl 32 -m samantha-mistral-instruct-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ?
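It can be done by applying the same ChatML template yourself before sending the prompt to the server's HTTP API. A sketch; the request shape follows llama.cpp's `/completion` endpoint as I understand it, so double-check field names against your server version:

```python
import json

def chatml_prompt(system_message: str, prompt: str) -> str:
    # Reproduce the template that was passed to ./main with -p.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

def completion_request(system_message: str, prompt: str) -> bytes:
    """JSON body for POST /completion, mirroring the CLI flags above."""
    body = {
        "prompt": chatml_prompt(system_message, prompt),
        "temperature": 0.7,        # --temp 0.7
        "repeat_penalty": 1.1,     # --repeat_penalty 1.1
        "n_predict": -1,           # -n -1: generate until end of sequence
        "stop": ["<|im_end|>"],    # stop at the end-of-turn token
    }
    return json.dumps(body).encode()

# Usage against a server started with ./server -ngl 32 -m model.gguf -c 2048:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8080/completion",
#       data=completion_request("You are Samantha.", "Hi!"),
#       headers={"Content-Type": "application/json"})
#   print(json.load(urllib.request.urlopen(req))["content"])
```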
2024-01-24T04:44:40
https://www.reddit.com/r/LocalLLaMA/comments/19e8w99/lamacpp_server_mode_prompt_processing/
juicesharp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e8w99
false
null
t3_19e8w99
/r/LocalLLaMA/comments/19e8w99/lamacpp_server_mode_prompt_processing/
false
false
self
6
null
What UI did you guys use for story writing?
6
I have low specs: Ryzen 5 7640HS, NVIDIA RTX 4050 laptop GPU (6GB VRAM), and 16GB RAM. I tried KoboldAI, but I could only run 2.7b models, and even those were heavy. I only use it for story writing in general, not as a chatbot. So what do you guys recommend for me? And also, what models are good for NSFW story writing? Thank you.
2024-01-24T04:32:41
https://www.reddit.com/r/LocalLLaMA/comments/19e8od7/what_ui_did_you_guys_use_for_story_writing/
EodumAsmodeus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e8od7
false
null
t3_19e8od7
/r/LocalLLaMA/comments/19e8od7/what_ui_did_you_guys_use_for_story_writing/
false
false
self
6
null
What is the best model to write stories that include NSFW?
121
What is the best model to write stories that include NSFW? I get lost with all the models out there not sure which one to pick.
2024-01-24T04:10:12
https://www.reddit.com/r/LocalLLaMA/comments/19e89dn/what_is_the_best_model_to_write_stories_that/
GoldenEye03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e89dn
false
null
t3_19e89dn
/r/LocalLLaMA/comments/19e89dn/what_is_the_best_model_to_write_stories_that/
false
false
nsfw
121
null
Processing sensitive info with Mistral for cheap
6
Hello, I am looking for the cheapest way possible to process sensitive documents using Mistral's 8x7b model. It should probably be self-hosted to ensure that nothing from the documents leaks. I've found that many APIs are vague about what information is stored. I have a budget of around $100 a month to deploy this model, and to lower the cost it would be OK to only deploy it during the work day, around ~160 hours a month. Any help would be appreciated!
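The budget constraint works out to a concrete hourly ceiling, which narrows the GPU-rental options quickly. A quick back-of-the-envelope helper:

```python
def max_hourly_rate(monthly_budget: float, hours_per_month: float) -> float:
    """Highest on-demand price (USD/hour) that still fits the budget."""
    return monthly_budget / hours_per_month

# $100/month over ~160 working hours leaves about $0.62/hour, which
# likely rules out multi-GPU rentals for Mixtral 8x7B at full precision
# but may fit a single GPU serving a quantized build.
rate = max_hourly_rate(100, 160)
```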
2024-01-24T03:57:31
https://www.reddit.com/r/LocalLLaMA/comments/19e80s3/processing_sensitive_info_with_mistral_for_cheap/
Critical_Pop_2216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e80s3
false
null
t3_19e80s3
/r/LocalLLaMA/comments/19e80s3/processing_sensitive_info_with_mistral_for_cheap/
false
false
self
6
null
Ryzen 9 / 16Gb / RTX3060 laptop : what can I realistically accomplish in an open source, local, task-oriented, web connected home assistant for ADHD / early dementia management?
1
I know I'm already on the bottom end of tech specs in general, but this is what I have. I have the background and technical aptitude to get up to speed and implement a simple LM. I am resigned to working with only what my existing system can reasonably and reliably manage. I want to do the actual work myself: one best knows a machine by building it. I want to keep all data and processing 100% internal.

I'd like an always-on microphone analyzing ambient room sound *locally* for keywords, requests, or commands related to my activities of daily living (and future management), such as:

* morning routine
* medication adherence
* basic exercise coaching
* multi-language conversation practice
* planning/scheduling/execution of activities
* formatted web data fetching (crypto prices, surf/tide reports, news, etc., presented the way I prefer)
* multimedia server
* *would be nice if it could text me, but I'm guessing SMS is a PITA integration*

It needs to learn and remember my routines to assist with adherence. It does not need to record everything; I'd rather talk to my dog and have it take cues from that. It does not need to be sexy in any way, but it does need to speak quickly.

I'm hoping to hear from those with hands-on experience with a system of similar specs about its abilities and limitations.
2024-01-24T03:05:10
https://www.reddit.com/r/LocalLLaMA/comments/19e702b/ryzen_9_16gb_rtx3060_laptop_what_can_i/
DAT_DROP
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e702b
false
null
t3_19e702b
/r/LocalLLaMA/comments/19e702b/ryzen_9_16gb_rtx3060_laptop_what_can_i/
false
false
self
1
null
How do you get the output to be at least/no more than a certain word length/token count in each response?
1
I know you can increase the overall word/token count of the input/output, but what about for each response? Is there a way to include in the prompt to have the response be at least or no more than a certain length in terms of words/tokens? Will it adhere to it reasonably well?
2024-01-24T02:53:56
https://www.reddit.com/r/LocalLLaMA/comments/19e6rkx/how_do_you_get_the_output_to_be_at_leastno_more/
TaylorSwiftian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e6rkx
false
null
t3_19e6rkx
/r/LocalLLaMA/comments/19e6rkx/how_do_you_get_the_output_to_be_at_leastno_more/
false
false
self
1
null
What are the best APIs for roleplay?
3
I previously built a game using the ChatGPT 3.5 Turbo API, but it seems to no longer respond as well to instruction. Currently I don't have enough money to train my own model, so I need an API that will roleplay out of the box.
2024-01-24T02:14:57
https://www.reddit.com/r/LocalLLaMA/comments/19e5ykj/what_are_the_best_apis_for_roleplay/
Gullible-Answer-6010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e5ykj
false
null
t3_19e5ykj
/r/LocalLLaMA/comments/19e5ykj/what_are_the_best_apis_for_roleplay/
false
false
default
3
null
how do I figure out what model is right for me?
2
I'm increasingly working LLMs into my personal workflows and it makes me uncomfortable that I'm using a service that could change at any time. So I'm curious if there's anything I can run on my lil' lappy. I don't mind if it's slow, minutes per token sorta range. Are we at that point, or do I have to wait a bit? I saw some folks talkin' about just using cpu for inference even though that's slower?
2024-01-24T02:06:34
https://www.reddit.com/r/LocalLLaMA/comments/19e5s7b/how_do_i_figure_out_what_model_is_right_for_me/
Prathmun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e5s7b
false
null
t3_19e5s7b
/r/LocalLLaMA/comments/19e5s7b/how_do_i_figure_out_what_model_is_right_for_me/
false
false
self
2
null
A100 80GB GPU will be discontinued ?????
1
[removed]
2024-01-24T01:25:28
https://www.reddit.com/r/LocalLLaMA/comments/19e4xm6/a100_80gb_gpu_will_be_discontinued/
aijuud
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e4xm6
false
null
t3_19e4xm6
/r/LocalLLaMA/comments/19e4xm6/a100_80gb_gpu_will_be_discontinued/
false
false
self
1
null
128GB consumer grade GPU?
1
[removed]
2024-01-24T01:06:16
https://www.reddit.com/r/LocalLLaMA/comments/19e4j26/128gb_consumer_grade_gpu/
redzorino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e4j26
false
null
t3_19e4j26
/r/LocalLLaMA/comments/19e4j26/128gb_consumer_grade_gpu/
false
false
self
1
null
Making LLAMA model return only what I ask (JSON).
1
I'm trying to use LLaMA for a small project where I need to extract the game name from a title. The model (`llama-2-7b-chat.Q2_K.gguf`) does give the correct output but is also very chatty.

My prompt:

```
<s>[INST] <<SYS>>
You are a JSON text extractor. Return the following JSON: {"name": "the game name"}
<</SYS>>
{ CD Projekt Red is ramping up production on The Witcher 4, and of course it's looking into using AI } [/INST] {answer} </s>
```

The answer:

```
Sure, here is the extracted JSON text:
{
"name": "The Witcher 4"
}
CD Projekt Red is indeed ramping up production on The Witcher 4, and as part of their efforts to
```

How can I make it return only the JSON without all the blabber?
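Beyond tightening the prompt (or using a grammar-constrained backend), a robust fallback is to parse the first JSON object out of whatever the model returns. A minimal sketch:

```python
import json
import re

def extract_first_json(text: str) -> dict:
    """Pull the first {...} block out of a chatty model response and
    parse it; raise ValueError if no valid JSON object is found."""
    # Non-greedy brace match; fine for flat, un-nested objects
    # like {"name": "..."} but not for nested JSON.
    for match in re.finditer(r"\{.*?\}", text, re.DOTALL):
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            continue  # skip non-JSON brace blocks, keep scanning
    raise ValueError("no JSON object found in model output")

reply = 'Sure, here is the extracted JSON text: { "name": "The Witcher 4" } ...'
game = extract_first_json(reply)  # {'name': 'The Witcher 4'}
```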
2024-01-24T00:58:56
https://www.reddit.com/r/LocalLLaMA/comments/19e4dca/making_llama_model_return_only_what_i_ask_json/
br4infreze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e4dca
false
null
t3_19e4dca
/r/LocalLLaMA/comments/19e4dca/making_llama_model_return_only_what_i_ask_json/
false
false
self
1
null
Port of self-extension to the llama.cpp server, which allows you to effortlessly extend existing LLMs' context window without any fine-tuning. The main CLI example had this before, but I ported it to the server example. Would be happy if anyone can try this.
59
[https://github.com/ggerganov/llama.cpp/pull/5104](https://github.com/ggerganov/llama.cpp/pull/5104)

```text
First, you set -c to the context that you want to achieve - let's say -c 8192.

Next, given that the original training context of the model is T (let's assume T = 2048), you want to set G >= 8192 / T, so in this case: --grp-attn-n 4 or --grp-attn-n 8.

The --grp-attn-w corresponds to W from the paper. I think the authors generally used 512, but I think you can go up to T/2 - so in this case --grp-attn-w 1024.

Additionally, G has to be multiple of W
```
2024-01-24T00:50:50
https://www.reddit.com/r/LocalLLaMA/comments/19e47by/port_of_self_extension_to_llamacpp_server_allows/
FlowerPotTeaTime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e47by
false
null
t3_19e47by
/r/LocalLLaMA/comments/19e47by/port_of_self_extension_to_llamacpp_server_allows/
false
false
self
59
Structured Data From Unstructured Data: Address Extraction with Graphlit, GPT-4 Turbo
8
2024-01-24T00:50:19
https://www.graphlit.com/blog/address-extraction
DeadPukka
graphlit.com
1970-01-01T00:00:00
0
{}
19e46w4
false
null
t3_19e46w4
/r/LocalLLaMA/comments/19e46w4/structured_data_from_unstructured_data_address/
false
false
https://b.thumbs.redditm…sDGpyY7hTV-o.jpg
8
What’s the current state of the art best open LLM for story writing?
1
I had a similar post maybe half a year ago and there were lots of great suggestions — but this space moves so fast that I doubt the models in that thread are still the top dogs. What are the best open source models for story writing? I’d be interested in both models that would run on local 3090/4090 hardware and models that would run on higher memory cards on runpod / vast.ai.
2024-01-24T00:30:40
https://www.reddit.com/r/LocalLLaMA/comments/19e3rzr/whats_the_current_state_of_the_art_best_open_llm/
chakalakasp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e3rzr
false
null
t3_19e3rzr
/r/LocalLLaMA/comments/19e3rzr/whats_the_current_state_of_the_art_best_open_llm/
false
false
self
1
null
Fast Inference on Batch without Quality Loss?
2
I have a finetuned LoRA of a Mistral model, and it works really well (greedy sampling) with transformers. I tried to switch it to vLLM, and it's indeed 2-3x faster (in transformers I use torch.compile and flash_attention_2 + prompt decoding), but 10% of the outputs are absolute garbage. What do you recommend to speed it up without having to deal with vLLM's quality loss?
2024-01-24T00:21:14
https://www.reddit.com/r/LocalLLaMA/comments/19e3klv/fast_inference_on_batch_without_quality_loss/
iliashark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e3klv
false
null
t3_19e3klv
/r/LocalLLaMA/comments/19e3klv/fast_inference_on_batch_without_quality_loss/
false
false
self
2
null
Generate knowledge with Semantic Graphs and RAG
1
2024-01-24T00:01:58
https://neuml.hashnode.dev/generate-knowledge-with-semantic-graphs-and-rag
davidmezzetti
neuml.hashnode.dev
1970-01-01T00:00:00
0
{}
19e35pn
false
null
t3_19e35pn
/r/LocalLLaMA/comments/19e35pn/generate_knowledge_with_semantic_graphs_and_rag/
false
false
default
1
Need guidance: Fine tune OR chat examples for long text classification task?
1
I need to classify long texts (1000+ tokens) into 6 categories. If I use an instruction and try to pass examples, I exceed the 4096-token limit, so I need help choosing between:

A) Try to fine-tune an instruction model; I have between 7000 to 15 examples for each category.

B) Pass an example for each category into the chat sequence, like [user, assistant, user, assistant] x 6.

C) What other options are the best?

Thanks
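For option C, one workaround worth noting: split each document into overlapping windows that fit the context, classify each window, and aggregate per-window predictions (e.g. by majority vote). A chunking sketch, with a naive whitespace split standing in for the real tokenizer:

```python
def chunk_tokens(tokens, max_len=3500, overlap=200):
    """Split a token list into overlapping windows that each fit the
    context, leaving headroom for the instruction and the answer."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    step = max_len - overlap
    return [tokens[i:i + max_len] for i in range(0, len(tokens), step)]

def classify_long(text, classify_window, max_len=3500, overlap=200):
    """Majority vote over per-window labels; classify_window is whatever
    model call returns a single label for one window of text."""
    tokens = text.split()  # stand-in for a real tokenizer
    votes = [classify_window(" ".join(w))
             for w in chunk_tokens(tokens, max_len, overlap)]
    return max(set(votes), key=votes.count)
```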
2024-01-23T23:13:27
https://www.reddit.com/r/LocalLLaMA/comments/19e2255/need_guidance_fine_tune_or_chat_examples_for_long/
jrhabana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e2255
false
null
t3_19e2255
/r/LocalLLaMA/comments/19e2255/need_guidance_fine_tune_or_chat_examples_for_long/
false
false
self
1
null
Exploring alternatives to GPT-4 vision #HostedInGermany
1
Guten Tag r/LocalLlama,

I am exploring AI solutions for vision tasks and trying to find a cost-effective alternative to GPT-4-vision that I can host in Germany and query just like I query the OpenAI API. The dream? To have something that's not just effective, but also transparent and in line with the needs of German institutions.

Considering my options, I'm leaning towards:

1. **Self-hosting an OCR Tesseract server**: This could handle OCR tasks before processing with a GPT-4-like model (this would make multi-modal input unnecessary, as my use case is a bit special).
2. **Open-source alternatives**: I'm looking at LLaVA (sadly no commercial use), BakLLaVA, or similar.
3. **Fine-tuning**: While I'm not rushing into it, fine-tuning these models for specific needs might be on the horizon.

Just two hours ago, I thought it was nearly impossible to have a GPT-4-like system hosted in Germany. I've gone from thinking this was a near-impossible task to being excited about the possibilities.

I'd love to gather your insights, experiences, or even warnings (except for the legal, security, etc. implications; that's something for the future. I first need a working proof of concept). If you've played around with setting up similar AI systems, your input would be invaluable.

Danke!
2024-01-23T22:25:00
https://www.reddit.com/r/LocalLLaMA/comments/19e0x57/exploring_alternatives_to_gpt4_vision/
dig1taldash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e0x57
false
null
t3_19e0x57
/r/LocalLLaMA/comments/19e0x57/exploring_alternatives_to_gpt4_vision/
false
false
self
1
null
Leaderboards based on AGI-Eval scores? (And others that correlate with Chatbot ELO?)
12
Hey all,

You probably remember the posts over the last few weeks where some peeps benchmarked the benchmarks against Chatbot Arena ELO (credit: gblazex at [x.com/gblazex/status/1746295870792847562?s=20](https://x.com/gblazex/status/1746295870792847562?s=20)).

Since then I've been questioning why the Hugging Face leaderboard doesn't have an option to rank by AGI-Eval / MT-Bench scores. Not wanting to rag on that leaderboard - I know lots of HF staff lurk these posts 😄 But yeah, I would like to be able to sort by those and am struggling to find a leaderboard that can do this. OpenCompass does AGI-Eval, but they just haven't ranked that many models - they seem to be avoiding finetunes (fair enough). Yet Another LLM Leaderboard ([https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard)) does offer AGI-Eval, but it's mostly for 7b and smaller models.

I'd quite like to see, essentially, the HF leaderboard but with AGI-Eval and MT-Bench. Is that not possible? If so, why? What's the closest alternative?

(Also, whilst I have your attention, are there any leaderboards set up that rank multimodal models? Thanks everyone 😁)
2024-01-23T22:06:55
https://www.reddit.com/r/LocalLLaMA/comments/19e0h9g/leaderboards_based_on_agieval_scores_and_others/
OldAd9530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e0h9g
false
null
t3_19e0h9g
/r/LocalLLaMA/comments/19e0h9g/leaderboards_based_on_agieval_scores_and_others/
false
false
self
12
{'enabled': False, 'images': [{'id': 'rXLk19ez-3taFfqOz0Sb1KUVWa8yZYWp3vYidmKUOeA', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?width=108&crop=smart&auto=webp&s=e8a9792651e9408f70ea247fbeda62ca19cd0c0f', 'width': 108}, {'height': 187, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?width=216&crop=smart&auto=webp&s=0cf440bd9cc4b343af7e04f3d3b9adf529e8c3f2', 'width': 216}, {'height': 277, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?width=320&crop=smart&auto=webp&s=28d9cbf01720df7bb023181b06871f3a351addef', 'width': 320}, {'height': 555, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?width=640&crop=smart&auto=webp&s=fc313bf4a5e72a3ef7bca0b5eaae6d8ff84096d4', 'width': 640}, {'height': 833, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?width=960&crop=smart&auto=webp&s=6638bf1918e4e7e14d1fc5b92b914c0136e71879', 'width': 960}, {'height': 938, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?width=1080&crop=smart&auto=webp&s=b0b8feaf152cc27b4db2d7b2a07ae18fc60f61d1', 'width': 1080}], 'source': {'height': 1779, 'url': 'https://external-preview.redd.it/PWyzr9MXpdJy3wAxGBYQsaet_eDDjDKc8iCRuXxvcIY.jpg?auto=webp&s=047b2890c444a05a7ea1d11d7baa4302672fe155', 'width': 2048}, 'variants': {}}]}
Collaboration (with a100/v100's)
3
I am seeking a "partner"/collaborator to explore some ML/AI applications. Through my university, I have access to multiple a100s and v100s. I am mainly looking for some hobby projects or ideas, and the main objective shouldn't be commercialization (which wouldn't be possible anyway on the hardware). I am Danish and currently exploring opportunities like translating or improving models in multiple languages. Just hit me up on Reddit, possibly with some ideas!
2024-01-23T22:05:16
https://www.reddit.com/r/LocalLLaMA/comments/19e0frw/collaboration_with_a100v100s/
Banu1337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e0frw
false
null
t3_19e0frw
/r/LocalLLaMA/comments/19e0frw/collaboration_with_a100v100s/
false
false
self
3
null
Why do most tutorials do instruction tuning of base model instead of instruction-tuned models?
28
Why do most tutorials show how to instruction-tune base models, like CodeLlama / Mistral, and not the already instruction-tuned models? If we know we want to use instructions, we might leverage previous instruction tuning, no? For instance, I have seen tutorials on how to tune CodeLlama (with [HF](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl) and [Axolotl](https://mlabonne.github.io/blog/posts/A_Beginners_Guide_to_LLM_Finetuning.html#fine-tune-code-llama)) but both use the base model. I did some tests and the base model kind of sucks, and so does the model we fine-tuned, but the original instruction-tuned model performs correctly.
2024-01-23T21:58:54
https://www.reddit.com/r/LocalLLaMA/comments/19e0a2y/why_do_most_tutorials_do_instruction_tuning_of/
Separate-Still3770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e0a2y
false
null
t3_19e0a2y
/r/LocalLLaMA/comments/19e0a2y/why_do_most_tutorials_do_instruction_tuning_of/
false
false
self
28
{'enabled': False, 'images': [{'id': 'U3L4Tp-UavIdlr_6b8bUiPeGX_GhGiQAg7RgaXOc1Rs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?width=108&crop=smart&auto=webp&s=6422e899ae8bd54a3cde6e2de46a4b82332b0316', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?width=216&crop=smart&auto=webp&s=95b739b132861d76c588cbac77db00ef72b8f477', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?width=320&crop=smart&auto=webp&s=4179d9a95088997d7de6139d6249369e04971a9f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?width=640&crop=smart&auto=webp&s=61a99b19d6954ab4e7995badaba5626e89a999b3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?width=960&crop=smart&auto=webp&s=4aafaf909f3b51941322998c06cc75c0ae6768d2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?width=1080&crop=smart&auto=webp&s=c155d4c3b64bf094fd8b420c2bc7f32dd4362428', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HQztYypTSuLryCTiclOkZ_HDv5i-Wk2mhkngFl4jTic.jpg?auto=webp&s=4a8c747f090effc0810402d0d1bbe617c842cf6c', 'width': 1200}, 'variants': {}}]}
Local llm for code analysis
3
Hi, please forgive me, but I am completely new to the LLM scene. I want to deploy an LLM to analyse a local codebase that is written in C++ and Qt/QML and generate documentation about it. Please recommend which model to use and how to feed the codebase to the model. Thank you
2024-01-23T21:56:52
https://www.reddit.com/r/LocalLLaMA/comments/19e08dt/local_llm_for_code_analysis/
samuelbits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e08dt
false
null
t3_19e08dt
/r/LocalLLaMA/comments/19e08dt/local_llm_for_code_analysis/
false
false
self
3
null
Min-Maxing Optimization for Prompt Labeling (OC)
16
In RPG video games, min-maxing means pouring everything into one stat while ignoring the rest. Borrowing from this concept, I've developed a framework to optimize the accuracy of smaller LLMs on NLP tasks by **imparting knowledge from a larger model to a smaller model through just prompting**. The inspiration for this stems from how nuanced prompt labeling can be, especially when we need to account for the limitations of smaller models in following directions and understanding. The biggest roadblocks are: **Speed vs. Accuracy Tradeoff:** Larger models are "smarter" but labeling with them is more computationally expensive; you need more vRAM to run the model at an acceptable speed, otherwise it'll take forever. Most people don't have access to 8x A100 machines. Smaller models are faster, but quality suffers because they're not as "smart." The rule of thumb is that it's almost always better to use a quantized version of a larger model than the full version of a smaller model. **Edge-Case Failures:** Regardless of model size, it's quite difficult to accurately capture the edge cases where a given model gives wrong answers. In-context learning strategies like few-shot or chain-of-thought prompting can help a lot, but only if we're able to capture as many edge cases as possible. The problem lies with unknown-unknowns: we can't predict all the instances where a model might fail at a task. **Data Limitations:** Oftentimes, quality training data is the major restriction on model development. Even if we're able to collect lots of data (hundreds of thousands to millions of data points), we still end up with the problem of labeling it all accurately - hence the use of LLMs to assist via prompt labeling. But even then, it's incredibly difficult to maintain a high standard of quality control given the two problems above.
Manual review and evaluation can be incredibly time-consuming, not to mention frustrating. That's why I'm proposing a novel framework of min-maxing models to take advantage of the speed of smaller models and the "smarts" of larger ones. The best part is that through this process we'll address every issue above and get the best of both worlds: by the end, our smaller LLM will be incredibly fast AND accurate, AND we won't go crazy with manual review. This last part is the most important, since the quality of reviews decreases over time, often proportionally to the number of drinks needed to get through the work.

## Framework

1. **Data Sampling:** From the master training dataset that we want to label, take a stratified random sample for min-max labeling. This sample dataset should be representative of the master one you want for training. For example, if you are doing sentiment analysis for movie reviews and your master dataset has 80% 3-star reviews, 16% 4-star reviews, and 4% 5-star reviews, you would want your sample dataset to reflect this stratification.
2. **Model Selection:** We'll need at least 2 models for this to work: the best model you can comfortably run (your max-model), and the model you want to use for inference (your min-model) - the speedy but "stupid" workhorse that will label your entire training dataset. I'm personally using the Mixtral-34bx2 (aka the Yi Monster Mash from CloudYu) + GPT4 API, and the full Mistral-7b-Instruct-v2 (Mistral AI) for inference via vLLM.
3. **Min Setup:** We are going to *purposefully* get mediocre labels from our main inference model: a first-pass, zero-shot prompt with no refinement, run over our sample dataset. The expectation is that it's going to get a lot wrong, but that's what we want! Have it run through the sample dataset at least 10 times, and make note of any data point where the model failed to produce consistent labels.
Be sure to set the temp and seed to 0 to mimic your real run as much as possible.
4. **Max Setup:** Take the smart model and aim for the *highest quality* labeling of the sample dataset possible. Here is where you want to spend all your time and computational resources. Using few-shot + chain-of-thought prompting, we'll be able to create a really strong labeling prompt. If your sample dataset is small enough, you can even go through each entry one-by-one and review its chain-of-thought responses. If your dataset is too large, you can use some (or all) of the entries you marked as inconsistent during the min-setup instead. Once you're satisfied with your prompt quality, have the smart model use it to label the sample dataset - since this will take much longer (or be more expensive), you only need to do it once; just be sure to set temp and seed to 0 to keep things as consistent as possible.
5. **Data Creation:** Compare the labels between our two labeling processes and make note of any entry where the min-setup label differs from the max-setup label. **These will likely be your edge-case failures for the min-model.** Using these edge-case failures, walk through with the smart model why each one failed; for example, have it compare the prompt you used in the min-setup vs the prompt for the max-setup and ask it why the entry was mislabeled. The goal is to get it to understand *why the entries were mislabeled*, i.e. why they were tricky or nuanced. From here, you can ask it to make as many similar entries as you want to further augment your sample dataset. This is really useful in situations where data is scarce; for example, if you only had 100 entries in your sample dataset, you can ask the "smart" model to make you another 100 "tricky" entries. **All** of these entries will have the right output label from the smart model.
6.
**Min-Maxing:** Combine the artificial data with the sample dataset and have your min-model label it using the prompt you used for the max-setup. You already have "ground truth" labels from the max-setup, and you can keep refining the prompt until the min-model correctly labels all the data. If you don't feel like refining your prompt, simply add all the incorrectly labeled entries from this process to your few-shot chain-of-thought prompt - context permitting. The idea here is that with a 4k context size, you can have a SUPER detailed prompt for just ONE entry (if you have more entries you can batch). Since the min-model is meant to be super fast, even with a massive dataset you should get through all the labeling relatively quickly. By the end of this process you'll have exposed your min-model to the majority of edge-case failures you'd expect it to encounter, tested a highly optimized prompt across those failures, and ultimately built a min-maxed workhorse model that is speedy AND smart. Why this works: we are taking the in-context learning from the max-model, which larger models are much better at, and transferring this knowledge to the min-model via few-shot chain-of-thought prompting - the key is that the chain-of-thought comes from the max-model, which has learned all the nuances we want to capture from the edge cases. What a final prompt might look like:

```
Example Data: < 1 >
CoT Response: < from max-model 1 >
Expected Label: < correct label 1 >

Example Data: < 2 >
CoT Response: < from max-model 2 >
Expected Label: < correct label 2 >

...

Example Data: < n >
CoT Response: < from max-model n >
Expected Label: < correct label n >

Task: <  >

Actual Data: < 1 >
Expected Label:

...

Actual Data: < n >
Expected Label:
```

You can play around with how you want to best present this data; I find that it's enough to express (in all caps) in the task what you want it to do.
It could look something like: "Learn from the examples above to LABEL POSITIVE OR NEGATIVE ONLY FOR THE EXPECTED ANSWER." I found that placing this right before the actual data it needs to label helps the model follow directions better, and you can easily batch the data inputs as well. Feel free to tweak the structure of the prompt; just make sure that you include the reasoning from the max-model. Ultimately, this stemmed from my frustration with trying to "teach" smaller models. It's much easier to work with a larger model because they "understand" and "retain" information better. So hopefully with this, you can also avoid banging your head against the wall trying to teach a "dumb" model new tricks. It's similar to how an older dog helps train a puppy: when training two dogs, focus on training the smarter one, and have the slower one learn by example!

### Additional Resources

[Prompt Labeling w/ LLMs](https://arxiv.org/abs/2205.02318)

[Chain-of-Thought Prompting Guide](https://www.promptingguide.ai/techniques/cot)

Note: I'll make a step-by-step guide using an example from my project later. Also, I didn't do a lit review, so if someone else already came up with this, woooooo great minds think alike - BUT if this does happen to be a new technique, I'd love to get your feedback / hear how it worked for you.
2024-01-23T21:48:06
https://www.reddit.com/r/LocalLLaMA/comments/19e00ri/minmaxing_optimization_for_prompt_labeling_oc/
GeeBrain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e00ri
false
null
t3_19e00ri
/r/LocalLLaMA/comments/19e00ri/minmaxing_optimization_for_prompt_labeling_oc/
false
false
self
16
null
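The disagreement-mining step (step 5) and the final prompt template from the post above can be sketched in plain Python. Everything here is illustrative - `find_edge_cases` and `build_prompt` are hypothetical helpers, and the toy label dicts stand in for real min-model/max-model outputs:

```python
def find_edge_cases(min_labels, max_labels):
    """Return entry ids where the fast min-model disagrees with the max-model.

    min_labels / max_labels map entry id -> label string.
    """
    return {k for k in max_labels if min_labels.get(k) != max_labels[k]}


def build_prompt(examples, task, batch):
    """Assemble the few-shot chain-of-thought prompt in the template shape above.

    examples: (data, cot, label) triples taken from the max-model.
    batch: unlabeled data strings for the min-model to label.
    """
    parts = []
    for data, cot, label in examples:
        parts.append(f"Example Data: {data}\nCoT Response: {cot}\nExpected Label: {label}\n")
    parts.append(f"Task: {task}")
    for data in batch:
        parts.append(f"Actual Data: {data}\nExpected Label:")
    return "\n".join(parts)


# Toy run: two disagreements surface as edge-case candidates.
min_out = {"r1": "POSITIVE", "r2": "NEGATIVE", "r3": "POSITIVE"}
max_out = {"r1": "POSITIVE", "r2": "POSITIVE", "r3": "NEGATIVE"}
print(sorted(find_edge_cases(min_out, max_out)))  # ['r2', 'r3']
```

In a real run, the disagreeing entries plus the max-model's chain-of-thought for them become the `examples` that get baked into the min-model's prompt.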
Amd Mi50? around 3090 price, similar bandwith, 32gb vram
1
[removed]
2024-01-23T21:47:54
https://www.reddit.com/r/LocalLLaMA/comments/19e00ku/amd_mi50_around_3090_price_similar_bandwith_32gb/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19e00ku
false
null
t3_19e00ku
/r/LocalLLaMA/comments/19e00ku/amd_mi50_around_3090_price_similar_bandwith_32gb/
false
false
self
1
null
What models would you create if creating a fine-tune was easy?
1
[removed]
2024-01-23T20:37:09
https://www.reddit.com/r/LocalLLaMA/comments/19dyapu/what_models_would_you_create_if_creating_a/
EcstaticVenom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dyapu
false
null
t3_19dyapu
/r/LocalLLaMA/comments/19dyapu/what_models_would_you_create_if_creating_a/
false
false
self
1
null
Iterative Fine tuning vs all at once
3
This is more of a question about fine-tuning ML models in general, not just language models. Is there a difference between "fine-tuning first on a subset of the data, saving the best checkpoint, and then fine-tuning the new model" and fine-tuning on everything at once? Is one approach better than the other?
2024-01-23T20:20:53
https://www.reddit.com/r/LocalLLaMA/comments/19dxwlw/iterative_fine_tuning_vs_all_at_once/
shafinlearns2jam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dxwlw
false
null
t3_19dxwlw
/r/LocalLLaMA/comments/19dxwlw/iterative_fine_tuning_vs_all_at_once/
false
false
self
3
null
Recommendations for models for long document summarization (10-30k tokens)
4
Hello! I’m looking for recommendations for models that have relatively high context length (10-30k tokens), decent instruction following, and are good at summarizing the important parts of long free-form documents. I’ve tried a few, and either they lose the instruction at that length, or they produce garbage data. M2 96gb. Thanks!
2024-01-23T20:20:14
https://www.reddit.com/r/LocalLLaMA/comments/19dxw1z/recommendations_for_models_for_long_document/
FaultAcrobatic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dxw1z
false
null
t3_19dxw1z
/r/LocalLLaMA/comments/19dxw1z/recommendations_for_models_for_long_document/
false
false
self
4
null
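One common workaround when a document outgrows the usable context is map-reduce summarization: split the text, summarize each chunk, then summarize the summaries. A naive sketch - `summarize` is just a stub standing in for whatever local model ends up working, and the chunk sizes are illustrative:

```python
def chunk_words(text, size=1500, overlap=100):
    """Split text into chunks of `size` words, adjacent chunks sharing `overlap` words."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words), 1), step)]


def map_reduce_summary(text, summarize):
    """Summarize each chunk ("map"), then summarize the joined chunk summaries ("reduce")."""
    partials = [summarize(c) for c in chunk_words(text)]
    return summarize("\n".join(partials))


# Demo with a trivial "summarizer" that keeps the first 5 words.
doc = " ".join(str(i) for i in range(3000))
print(map_reduce_summary(doc, lambda t: " ".join(t.split()[:5])))  # 0 1 2 3 4
```

The overlap keeps sentences that straddle a chunk boundary from being lost in both halves; with a real model, the reduce step usually also gets an explicit "combine these partial summaries" instruction.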
Recommendations for models for long document summarization (10-30k tokens)
1
Hello! I’m on a M2 96gb, looking for a good model for relatively long document summarization. I’ve tried a couple of models that say they are long context, but both Ollama and LMStudio are returning garbage results. Not sure if it’s the models or my software. Figured I’d poll around and once I have a good model recommendation will figure out why the results are poor. Thanks! ~Gavin
2024-01-23T20:16:36
https://www.reddit.com/r/LocalLLaMA/comments/19dxt09/recommendations_for_models_for_long_document/
FaultAcrobatic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dxt09
false
null
t3_19dxt09
/r/LocalLLaMA/comments/19dxt09/recommendations_for_models_for_long_document/
false
false
self
1
null
Creating a Silmarillion Style Model
8
As a new test project I want to build a writing assistant trained in a specific style. Happy for feedback on getting the right training data. Current approach:

* I have a copy of *The Silmarillion* cut into 500-word \*.txt chunks
* I then run these past a local version of Mixtral-8x7b with this function:

```python
def generate_question_and_answer(text_chunk, client, model_name="TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF"):
    # Define the prompt: work backwards from the chunk to a writing prompt
    # that could plausibly have produced it.
    question_prompt = (
        f"You are a creative writing teacher. Using the provided context: '{text_chunk}', "
        "formulate a writing prompt backwards from the context that is clear, e.g. "
        "'In a fantasy realm filled with ancient kingdoms and mystical creatures, narrate the tale "
        "of a hidden city, Gondolin, treasured for its peace and isolation. The protagonist, a child "
        "deeply connected to the sea, lives in this city unaware of its impending doom. The narrative "
        "should explore the betrayal of the city's location to the dark lord Morgoth by a character "
        "torn between loyalty and personal desires. This traitor, skilled in mining and metalwork, is "
        "captured and coerced into revealing the city's secrets. Meanwhile, a wise and foreseeing "
        "character, sensing the looming danger, secretly prepares an escape route. Your story should "
        "weave themes of betrayal, foresight, and impending disaster, culminating in a dramatic siege "
        "involving mythical creatures like Balrogs, dragons, and wolves. Describe the intricate "
        "dynamics within the city, the emotional turmoil of the characters, and the epic battle that "
        "leads to the city's tragic fate.'"
    )

    # Generate a prompt
    question_response = client.completions.create(model=model_name, prompt=question_prompt, max_tokens=100)
    question = question_response.choices[0].text.strip()
    return question
```

Example output:

[Example Output](https://preview.redd.it/23gemucyv8ec1.png?width=2018&format=png&auto=webp&s=1535cb8adbc4b0e5858bcc7a1d55d65b2d02cbf4)

I'll test this of course, but happy to get some in-flight guidance.
2024-01-23T19:57:42
https://www.reddit.com/r/LocalLLaMA/comments/19dxcqv/creating_a_silmarillion_style_model/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dxcqv
false
null
t3_19dxcqv
/r/LocalLLaMA/comments/19dxcqv/creating_a_silmarillion_style_model/
false
false
https://b.thumbs.redditm…ZQ8bU-zSfERc.jpg
8
null
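One possible next step for the pipeline in the post above: pair each generated prompt with the original chunk as an instruction/output row in JSONL, the usual shape for instruction fine-tuning data. A sketch with the Mixtral call stubbed out - `build_dataset`, the demo chunks, the lambda, and the output filename are all illustrative:

```python
import json


def build_dataset(chunks, make_prompt, out_path):
    """Write one {"instruction", "output"} JSONL row per text chunk.

    make_prompt stands in for the generate_question_and_answer call from the post;
    the original chunk becomes the target completion the model should learn.
    """
    with open(out_path, "w", encoding="utf-8") as f:
        for chunk in chunks:
            row = {"instruction": make_prompt(chunk), "output": chunk}
            f.write(json.dumps(row) + "\n")


# Stub standing in for the Mixtral-generated prompts.
demo_chunks = ["Of the Fall of Gondolin ...", "Of Beren and Luthien ..."]
build_dataset(demo_chunks, lambda c: f"Write a passage in this style: {c[:30]}", "silmarillion_train.jsonl")
```

Most fine-tuning stacks (Axolotl included) can consume a JSONL file of this shape directly, with a config entry mapping the `instruction`/`output` keys.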
Guidance Needed on Building Personal AI Assistant
1
[removed]
2024-01-23T19:54:29
https://www.reddit.com/r/LocalLLaMA/comments/19dxa0k/guidance_needed_on_building_personal_ai_assistant/
Lavanic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dxa0k
false
null
t3_19dxa0k
/r/LocalLLaMA/comments/19dxa0k/guidance_needed_on_building_personal_ai_assistant/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zMXPIQv3Ay2AQPtwJ908oSE8egPawgAwdCZIEwNBzQc', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=108&crop=smart&auto=webp&s=6984633aa403dcb7087885ae247b40326cd0b556', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=216&crop=smart&auto=webp&s=f9c3d978104f09b35bd7d8608d834246139e0d25', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=320&crop=smart&auto=webp&s=65d11987b4fbeb2dbeb12151782d64c22b9fa3a9', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=640&crop=smart&auto=webp&s=52f3d8313409af0f57dd2502449ca3419d14a617', 'width': 640}, {'height': 550, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=960&crop=smart&auto=webp&s=50ce784a9606ae7d636092284f3d755de0e77fda', 'width': 960}, {'height': 619, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=1080&crop=smart&auto=webp&s=13730fd0a8c0314ff7d430c214a79da1a824f380', 'width': 1080}], 'source': {'height': 688, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?auto=webp&s=f97079200bffa354b3cbd0fa849a3fa92ab636fc', 'width': 1200}, 'variants': {}}]}
Getting started with Coding LLMs
5
What's the best coding companion UI and LLM for local code generation? I’ve been copying and pasting code into ChatGPT for ages, but it bugs out with larger files and can’t handle projects. I have a quite powerful gaming PC with a 4090.
2024-01-23T19:49:33
https://www.reddit.com/r/LocalLLaMA/comments/19dx5qw/getting_started_with_coding_llms/
blackpantera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dx5qw
false
null
t3_19dx5qw
/r/LocalLLaMA/comments/19dx5qw/getting_started_with_coding_llms/
false
false
self
5
null
What should (could) I get for $4,000
50
What kind of PC build should I look for in the $4000 range? I’m interested in exploring local models. I currently use GPT4 API, which is great, but pricey. My goal is to use it in my legal practice to revise and help draft portions of documents. I suspect I may get into RAG or fine-tuning at some point too.
2024-01-23T19:45:53
https://www.reddit.com/r/LocalLLaMA/comments/19dx2jg/what_should_could_i_get_for_4000/
Psychological-Ad5390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dx2jg
false
null
t3_19dx2jg
/r/LocalLLaMA/comments/19dx2jg/what_should_could_i_get_for_4000/
false
false
self
50
null
GPT4ALL is crashing for 13B and 70B models.
2
7B models work fine, but all larger models crash while loading.
2024-01-23T19:29:50
https://www.reddit.com/r/LocalLLaMA/comments/19dwoaj/gpt4all_is_crashing_for_13b_and_70b_models/
kkgmgfn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dwoaj
false
null
t3_19dwoaj
/r/LocalLLaMA/comments/19dwoaj/gpt4all_is_crashing_for_13b_and_70b_models/
false
false
self
2
null
llama-cpp-python in multi-model mode, stable diffusion, llama-index, langchain, flask, mariadb, mongodb, and redis... in a dockerized cuda environment!
1
2024-01-23T18:49:09
https://github.com/xmichaelmason/llama-docker
xmilesdyson
github.com
1970-01-01T00:00:00
0
{}
19dvoit
false
null
t3_19dvoit
/r/LocalLLaMA/comments/19dvoit/llamacpppython_in_multimodel_mode_stable/
false
false
https://b.thumbs.redditm…00g35eyHwihE.jpg
1
{'enabled': False, 'images': [{'id': 'HROgahskB_qROvPHwi7yWpDkCDacXP5jl6BEjRPc6pw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?width=108&crop=smart&auto=webp&s=123b9fd1c0e9c0a48f540dad3e8934dd300bd1af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?width=216&crop=smart&auto=webp&s=eec86094ccad2230904d03aed1c16c5526f49ac9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?width=320&crop=smart&auto=webp&s=b32788ebbd089dae4b4b55fdfaf0831853a1f3ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?width=640&crop=smart&auto=webp&s=3f7842a02ea4ffe1ec2f217887709b8d8e118ddb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?width=960&crop=smart&auto=webp&s=f9280339fe07979ba2fdea5c54b39a92d538e4b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?width=1080&crop=smart&auto=webp&s=cb4e131319d64cf985fc8da1948c9409d9bcb122', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5EFyAd3oyTPZvppRe9wniy2fyGp-b6OvoNOpzV2MPaA.jpg?auto=webp&s=ec8e3ede521e8535179e87a82d853702165488ef', 'width': 1200}, 'variants': {}}]}
LORA tutorial for Stories (Text, not Images!)
10
Hi - yet another LocalLLM newbie here looking for advice. My ultimate goal is to create text based stories. Not trying to do RP. Not trying to create images. Just stories. (Gay erotic fetish stories of course) :D I've been doing days of searching, and I'm sure the answer is out there, but I keep getting lost in all the hundreds of LORA tutorials out there geared toward Images, not Text. I have decent HW (3090 24GB, i9 CPU, 128GB RAM). I've found a lot of tricks to write some pretty good stories, but I want to get even better. I'm running into several challenges like the AI stopping prematurely or ending the story abruptly where I don't want. Also, most of the models seem to be trained on straight story archives, so my gay stories inevitably seem to drift into directions I really don't want. **From all my research, it appears that training a LORA may be my answer.** I have a huge archive of my favorite gay erotic stories in .HTML format that I could train it on. **That said, I've been googling and searching, and while I can find TONS of tutorials on how to train a LORA for Images/Stable Diffusion, I can't seem to find a good tutorial anywhere on the step-by-step process for training a LORA for** **Text generation** **purposes.** Can anyone out there recommend some links to existing tutorials for training a LORA for text? I have mostly been using [oobabooga](https://github.com/oobabooga)/[**text-generation-webui**](https://github.com/oobabooga/text-generation-webui). Any advice is greatly appreciated!
2024-01-23T18:29:33
https://www.reddit.com/r/LocalLLaMA/comments/19dv7fc/lora_tutorial_for_stories_text_not_images/
JacksonThePupper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dv7fc
false
null
t3_19dv7fc
/r/LocalLLaMA/comments/19dv7fc/lora_tutorial_for_stories_text_not_images/
false
false
self
10
{'enabled': False, 'images': [{'id': '-lNz7iooIwXQN-NY2nsmIErIMtzOcMVn0s0oDL2myXI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/PEqbJUwVrwoJSVAYUra4yYjEM7hbKd1OijJbP_LuyhM.jpg?width=108&crop=smart&auto=webp&s=4a62d375bcd087fd1eb999391c222c321094bf3a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/PEqbJUwVrwoJSVAYUra4yYjEM7hbKd1OijJbP_LuyhM.jpg?width=216&crop=smart&auto=webp&s=c5d8e7311cc076f44f218c51023507dad115d11c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/PEqbJUwVrwoJSVAYUra4yYjEM7hbKd1OijJbP_LuyhM.jpg?width=320&crop=smart&auto=webp&s=eb9c05afae1822d05d9e2d4731a554a98985fc7f', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/PEqbJUwVrwoJSVAYUra4yYjEM7hbKd1OijJbP_LuyhM.jpg?auto=webp&s=f78e44b7d044d2861fee9ac11720357691eed70d', 'width': 460}, 'variants': {}}]}
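Whatever tutorial the post above ends up following, one prep step is unavoidable: the .HTML archive has to become plain text before it can be a LoRA training corpus. A minimal standard-library sketch (`TextExtractor` and `html_to_text` are illustrative helpers; real pipelines often reach for BeautifulSoup instead):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects the visible text nodes of an HTML document, skipping markup."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Ignore whitespace-only runs between tags.
        if data.strip():
            self.parts.append(data.strip())

    def text(self):
        return "\n".join(self.parts)


def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return parser.text()


print(html_to_text("<html><body><h1>Title</h1><p>First paragraph.</p></body></html>"))
```

Running this over each story file yields one plain-text document per story, which is the shape text-generation-webui's LoRA training tab expects for raw-text datasets.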
14" M1 Max instead of 16" M3 Max for local LLMs?
1
[removed]
2024-01-23T18:18:05
https://www.reddit.com/r/LocalLLaMA/comments/19dux9f/14_m1_max_instead_of_16_m3_max_for_local_llms/
Drummer_Week_9727
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dux9f
false
null
t3_19dux9f
/r/LocalLLaMA/comments/19dux9f/14_m1_max_instead_of_16_m3_max_for_local_llms/
false
false
self
1
null
makeMoE - Building MoE from scratch
36
A great blog on Hugging Face, inspired by Karpathy’s makemore and NanoGPT: a cool guide to building a Mixture of Experts model from scratch in PyTorch. A very clear explanation - a must-read!
2024-01-23T18:08:47
https://huggingface.co/blog/AviSoori1x/makemoe-from-scratch
MysticShadow427
huggingface.co
1970-01-01T00:00:00
0
{}
19duoqv
false
null
t3_19duoqv
/r/LocalLLaMA/comments/19duoqv/makemoe_building_moe_from_scratch/
false
false
https://a.thumbs.redditm…dLYAnFtzBWr8.jpg
36
{'enabled': False, 'images': [{'id': 'ZcwbBZrpZThONbehuqrccFwX52bFC9OJakeTcd5GCqA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=108&crop=smart&auto=webp&s=7e8d8002d2e46aa4d7f857310f3ae689dfc44d07', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=216&crop=smart&auto=webp&s=02935af9b977f931de6c691794a118d67c233cd3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=320&crop=smart&auto=webp&s=26f1e3e3788eee603d6e7bb5e7f8c3c358bd40a8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=640&crop=smart&auto=webp&s=f9764b24c7c73ea4d0b7b3390d5e21d0055de1f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=960&crop=smart&auto=webp&s=1366a3f56c04b88d112621dba00079ac027ddaf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=1080&crop=smart&auto=webp&s=cb907e8194c07abde1edef598dd9a8ddd9a949fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?auto=webp&s=6c020ee09a5b9f789dadabb7b1235f4763939e6b', 'width': 1200}, 'variants': {}}]}
Introducing Phidata: Build AI Assistants using LLM function calling
44
Hello reddit, I’m excited to share phidata - a framework for building AI assistants using function calling ([https://github.com/phidatahq/phidata](https://github.com/phidatahq/phidata)) I’ve been using function calling a lot so thought I’d share and get your feedback as it seems like an underutilized part of AI engineering. Function calling is a powerful approach that allows LLMs to solve complex problems by running functions and intelligently choosing a course of action based on the response. For example, to answer questions from a database, the Assistant will first run a function to show tables, then describe relevant tables and finally, run a query to get the answer. I’ve found GPT-4-turbo to be ridiculously good at this and have used this approach to build knowledge assistants, customer support assistants and research assistants. Phidata provides Assistants with built-in memory, knowledge base, storage and tools, making it easy to build AI applications using function calling. The code is open-source (MIT) and I’ve included templates for building AI Apps using Streamlit, FastAPI and PgVector that you can run locally using docker. Github: [https://github.com/phidatahq/phidata](https://github.com/phidatahq/phidata) Docs: [https://docs.phidata.com/introduction](https://docs.phidata.com/introduction) Demo: [https://demo.aidev.run](https://demo.aidev.run/) is a Streamlit App serving a PDF, Image and Website Assistant (password: admin) Thanks for reading and would love to hear what you think.
2024-01-23T18:07:09
https://www.reddit.com/r/LocalLLaMA/comments/19dund3/introducing_phidata_build_ai_assistants_using_llm/
ashpreetbedi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dund3
false
null
t3_19dund3
/r/LocalLLaMA/comments/19dund3/introducing_phidata_build_ai_assistants_using_llm/
false
false
self
44
{'enabled': False, 'images': [{'id': 'n4SzT5K-tseN4AN6-7Q-4RzZ2LmvvzumS3rBVZUB5zs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?width=108&crop=smart&auto=webp&s=587fa5a41d8da6cf154f36c5032d9968debeebeb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?width=216&crop=smart&auto=webp&s=cc663fa1e3f105e4819b0f37f07520be28d498d9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?width=320&crop=smart&auto=webp&s=d83ed05bce144e727c8e0c77e7831800490eead6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?width=640&crop=smart&auto=webp&s=466adf6194d30e8387efba5e044b99a17676e40e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?width=960&crop=smart&auto=webp&s=36ffaa68b1ebbcfeeac57df0de9f6c47e0644e62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?width=1080&crop=smart&auto=webp&s=a97b6a54983c1e6b44f366093cbbdbea02c24999', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tGEY4ShWFS2c9oTcjh3IQNvn5cwY9tPBw29DusMOQN4.jpg?auto=webp&s=8537a7879035cc291f2b030dbce225620352d2ff', 'width': 1200}, 'variants': {}}]}
Spatial VLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities
18
Link to the paper https://arxiv.org/pdf/2401.12168.pdf
2024-01-23T17:58:51
https://spatial-vlm.github.io/
LiquidGunay
spatial-vlm.github.io
1970-01-01T00:00:00
0
{}
19dufhr
false
null
t3_19dufhr
/r/LocalLLaMA/comments/19dufhr/spatial_vlm_endowing_visionlanguage_models_with/
false
false
https://a.thumbs.redditm…N0y36dZ6neg0.jpg
18
{'enabled': False, 'images': [{'id': '8Fs0wcEkWWAYw30im1PUOs3efvZ--NlbQ6o4d9CIqrc', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?width=108&crop=smart&auto=webp&s=409014246f57661b1c523ba95ff3e81f52e382de', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?width=216&crop=smart&auto=webp&s=b01762b11b616391d33455400c246307e570dd65', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?width=320&crop=smart&auto=webp&s=60625ecd893c34ed6e187d44a15a4a9e9806d13b', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?width=640&crop=smart&auto=webp&s=980adbeca151115dc63b7efef8490842db024ad2', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?width=960&crop=smart&auto=webp&s=5aa55dc32d0a97f342fe3ae65ae7a73d32752a7d', 'width': 960}, {'height': 677, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?width=1080&crop=smart&auto=webp&s=796d8d8587b7f4ce2848f3d7658f4be37b6a3bc7', 'width': 1080}], 'source': {'height': 876, 'url': 'https://external-preview.redd.it/U-NylMln66tj8a28hcdbXPTy5ON_2mnRYh2hZIphZ_M.jpg?auto=webp&s=983fd9d4c1cd9e1c689621cbfbfe0ce0cbd54dab', 'width': 1396}, 'variants': {}}]}
How to defeat chat limits (Llama CPP)
1
[removed]
2024-01-23T17:41:46
https://www.reddit.com/r/LocalLLaMA/comments/19du0cx/how_to_defeat_chat_limits_llama_cpp/
Future_Might_8194
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19du0cx
false
null
t3_19du0cx
/r/LocalLLaMA/comments/19du0cx/how_to_defeat_chat_limits_llama_cpp/
false
false
https://b.thumbs.redditm…-7Gkd8jW7Oec.jpg
1
null
Searching inside sharepoint
1
Is there any local AI that I can point to a SharePoint location and it can scan and make search more intuitive?
2024-01-23T17:17:17
https://www.reddit.com/r/LocalLLaMA/comments/19dtfes/searching_inside_sharepoint/
rorowhat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dtfes
false
null
t3_19dtfes
/r/LocalLLaMA/comments/19dtfes/searching_inside_sharepoint/
false
false
self
1
null
WHO publishes 98 page guidelines for Ethics and governance of AI
1
[removed]
2024-01-23T16:54:51
https://www.reddit.com/r/LocalLLaMA/comments/19dsyr5/who_publishes_98_page_guidelines_for_ethics_and/
MLer-India
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dsyr5
false
null
t3_19dsyr5
/r/LocalLLaMA/comments/19dsyr5/who_publishes_98_page_guidelines_for_ethics_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'clSL1KzmasIC3rVf1IzBMRxANFveA9O09EJWjUcBAAY', 'resolutions': [{'height': 152, 'url': 'https://external-preview.redd.it/BwmSX4r7sRjR6lL_v5qw2Q5oCln9gIdFOkFIL1Sx6dg.jpg?width=108&crop=smart&auto=webp&s=ffa61f4a1087a36d6596df4e27e16f573f75053e', 'width': 108}], 'source': {'height': 240, 'url': 'https://external-preview.redd.it/BwmSX4r7sRjR6lL_v5qw2Q5oCln9gIdFOkFIL1Sx6dg.jpg?auto=webp&s=931a2253c6dbcc386f87371b43b021a4fe339bd8', 'width': 170}, 'variants': {}}]}
what would your ideal dataset contain?
2
What would the ideal dataset contain? First of all, it obviously depends on what type of model you're going after. Mine would have all the scientific papers that proved all the fundamental theorems, all the legit sources of historical events I can find, and all known math. Probably also a dictionary and an encyclopedia. Then my model would have knowledge of all science, and exactly how these things were proved. It could tell you all the experiments from which the formulas were derived, how many times these experiments were repeated, and what other things were happening on Earth at that time which might have motivated the hypothesis. The model would obviously understand the scientific method itself, so then maybe it could generate new hypotheses and experiments based on data it is fed from present observations of the world. My model would not be trained on any stupid shit like news articles or blog posts. I really hope that this is being done by organizations who have the hardware, and that they are not just focused on producing silly LLM toys for the social brain of the masses. Yes, those things get everyone excited and make some money, but the real work in *intelligence* is about discovering truth and how to best continue surviving.
2024-01-23T16:54:06
https://www.reddit.com/r/LocalLLaMA/comments/19dsy5p/what_would_your_ideal_dataset_contain/
goofnug
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dsy5p
false
null
t3_19dsy5p
/r/LocalLLaMA/comments/19dsy5p/what_would_your_ideal_dataset_contain/
false
false
self
2
null
Why isn't the internet full of AI customer support agents yet?
58
I am wondering why the internet isn't full of AI customer support representatives yet, using LLMs. So many customer support companies, worth billions, such as Intercom and others, have created AI agents that only use vector embeddings to determine the best articles from a help center. Why aren't these giants creating an AI customer support agent already that can read the help center articles for you and guide you strategically? What is holding back the business world from fully embedding LLMs in their operations? I am just writing because I am surprised, the tech world usually moves fast, but somehow it's stagnated in this area.
2024-01-23T16:32:36
https://www.reddit.com/r/LocalLLaMA/comments/19dsfla/why_isnt_the_internet_full_of_ai_customer_support/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dsfla
false
null
t3_19dsfla
/r/LocalLLaMA/comments/19dsfla/why_isnt_the_internet_full_of_ai_customer_support/
false
false
self
58
null
System prompts are adding unnecessary complexity, change my mind!
4
Since OpenAI introduced system prompts in their API I've never really been able to grasp why we need them. It seems to me the system prompt can always be replaced by the first user message.

Here's an example:

```
<|im_start|>system
You are a grammar checker<|im_end|>
<|im_start|>user
Please check the grammar:

"Your a nice bot"<|im_end|>
```

VS

```
<|im_start|>system
You are a helpful assistant (default system prompt)<|im_end|>
<|im_start|>user
You are a grammar checker.

Please check the grammar:

"Your a nice bot"<|im_end|>
```

All LLMs I've tried give near-identical results with both prompts. Actually, the second example with the default system prompt seems to work better in long context, as it is what the model has been conditioned on to follow instructions, not my random system prompt it has never seen before.

Moreover, it's happened many times that the OpenAI GPT-3.5 API would give me terrible results with a prompt that would work perfectly in ChatGPT 3.5. After pulling my hair out I noticed the fix is to add the hidden system prompt of ChatGPT ("You are ChatGPT, a large language model trained by OpenAI. [...]") to my request. This is IMO terrible devx, not to mention that hidden system prompt needed to be extracted from ChatGPT; OpenAI does not share it in their docs.

The result of OpenAI training with system prompts is a dumb model that suddenly gets smart when you prepend your prompt with a magic string. It really feels like it's not the way we should be doing things.

System prompts add one more variable to a complex system where it is already hard to get reproducible results. For a technology that should lower the cognitive burden, I feel like this specific design adds to it.

I think at the current stage of this tech, where we don't have a solid understanding of every variable and its interactions, we should not introduce additional complexity of that kind, especially when it brings only marginal benefits (none, if you ask me).

Also, to further my point: Mistral instruct models don't have system prompt support and I haven't seen anyone complain about that; their models seem as steerable as any other. System prompt support is not necessary for maintaining a competitive edge.

Am I the only one who thinks this way?

EDIT: Y'all made me change my mind (somewhat). I had missed some use cases of system prompts; they seem especially useful for chat applications, allowing devs to customize behavior in a way where the model can attribute more weight to the developer's instructions and not mistake them as part of the chat conversation. TIL. I still think there are issues with the way models behave with non-default system prompts, degrading general performance in a lot of cases, but it may be more of a training-method issue than a design one.
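To make the comparison concrete, here is a minimal sketch of the two request shapes being compared, in the OpenAI-style `messages` format (no model is actually called here; the helper names are my own):

```python
# Variant 1: instruction goes in the system message.
def with_system_prompt(instruction, user_text):
    return [
        {"role": "system", "content": instruction},
        {"role": "user", "content": user_text},
    ]

# Variant 2: default system prompt, instruction folded into the
# first user message instead.
def with_default_system_prompt(instruction, user_text):
    return [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": f"{instruction}\n\n{user_text}"},
    ]

a = with_system_prompt("You are a grammar checker.",
                       'Please check the grammar: "Your a nice bot"')
b = with_default_system_prompt("You are a grammar checker.",
                               'Please check the grammar: "Your a nice bot"')
print(len(a), len(b))  # both are 2-message requests
```

Either list can be passed as the `messages` argument of a chat-completion call; the only difference is where the instruction lives, which is exactly the variable under debate.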
2024-01-23T16:28:59
https://www.reddit.com/r/LocalLLaMA/comments/19dscer/system_prompts_are_adding_unnecessary_complexity/
hurrytewer
self.LocalLLaMA
2024-01-23T18:23:38
0
{}
19dscer
false
null
t3_19dscer
/r/LocalLLaMA/comments/19dscer/system_prompts_are_adding_unnecessary_complexity/
false
false
self
4
null
NLI task with long text, what to try?
1
In an NLI task (does the premise entail the hypothesis?) with a binary yes/no label, what should I try? I tried classical BERT-like models and they all give poor performance (RoBERTa, T5, etc.). Would fine-tuning Mistral work? How would you provide the data to Mistral for fine-tuning?

Thanks
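One common way to feed NLI pairs to an instruction-tuned model like Mistral is to serialize each (premise, hypothesis, label) triple into a prompt/completion pair. A minimal sketch (the template wording is my own, not a standard):

```python
# Hypothetical sketch: turn one NLI example into instruction-style
# fine-tuning data. The template text is illustrative.
def nli_to_instruction(premise, hypothesis, label):
    prompt = (
        "Premise: " + premise + "\n"
        "Hypothesis: " + hypothesis + "\n"
        "Does the premise entail the hypothesis? Answer yes or no."
    )
    return {"prompt": prompt, "completion": "yes" if label else "no"}

ex = nli_to_instruction(
    "A man is playing a guitar on stage.",
    "A person is performing music.",
    label=True,
)
print(ex["completion"])  # yes
```

For long texts this has the side benefit that a 7B decoder's context window (vs. BERT's 512 tokens) can hold the whole premise, which may be part of why the encoder models underperform here.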
2024-01-23T16:16:35
https://www.reddit.com/r/LocalLLaMA/comments/19ds1r6/nli_task_with_long_text_what_to_try/
PunchTornado
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ds1r6
false
null
t3_19ds1r6
/r/LocalLLaMA/comments/19ds1r6/nli_task_with_long_text_what_to_try/
false
false
self
1
null
How to prompt better?
12
I'm but a poor noob, have mercy. My prompts suck, therefore my generations suck, even with Goliath. How can I prompt better? I have a potato PC so I have to rely on online services, and I don't have the money or the expertise to fine-tune a model. I've written fiction before but it took me forever. I was hoping to speed up the process with LLM input, but it's taking me longer to fix said input. I've tried to input a sample text to mimic my style, but of course the LLM can't reproduce it on the spot. My prompts look like: Write the first scene of a {GENRE} novel. In this scene, {DESCRIBE SCENE, PROTAGONISTS, NAMES, WHAT THEY DO}. Write in an intense tone, very descriptive, moody. Lots of intense dialogue. Not cheesy. Last night I spent hours trying different prompts and LLMs in Camenduru colabs. Do different models require different types of prompts for prose fiction? How can I learn better prompting? Please help.
2024-01-23T16:13:28
https://www.reddit.com/r/LocalLLaMA/comments/19drz2s/how_to_prompt_better/
harderisbetter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19drz2s
false
null
t3_19drz2s
/r/LocalLLaMA/comments/19drz2s/how_to_prompt_better/
false
false
self
12
null
The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey
22
**Paper**: [https://arxiv.org/abs/2401.07872](https://arxiv.org/abs/2401.07872) **Abstract**: >The advent of Large Language Models (LLMs) represents a notable breakthrough in Natural Language Processing (NLP), contributing to substantial progress in both text comprehension and generation. However, amidst these advancements, it is noteworthy that LLMs often face a limitation in terms of context length extrapolation. Understanding and extending the context length for LLMs is crucial in enhancing their performance across various NLP applications. In this survey paper, we delve into the multifaceted aspects of exploring why it is essential, and the potential transformations that superior techniques could bring to NLP applications. We study the inherent challenges associated with extending context length and present an organized overview of the existing strategies employed by researchers. Additionally, we discuss the intricacies of evaluating context extension techniques and highlight the open challenges that researchers face in this domain. Furthermore, we explore whether there is a consensus within the research community regarding evaluation standards and identify areas where further agreement is needed. This comprehensive survey aims to serve as a valuable resource for researchers, guiding them through the nuances of context length extension techniques and fostering discussions on future advancements in this evolving field.
2024-01-23T16:04:12
https://www.reddit.com/r/LocalLLaMA/comments/19drre4/the_what_why_and_how_of_context_length_extension/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19drre4
false
null
t3_19drre4
/r/LocalLLaMA/comments/19drre4/the_what_why_and_how_of_context_length_extension/
false
false
self
22
null
The AI revolution has started....
1
2024-01-23T16:00:02
https://i.redd.it/cgaigg6zp7ec1.png
SubjectSector5421
i.redd.it
1970-01-01T00:00:00
0
{}
19drnlw
false
null
t3_19drnlw
/r/LocalLLaMA/comments/19drnlw/the_ai_revolution_has_started/
true
false
spoiler
1
{'enabled': True, 'images': [{'id': 'VZZ7Na0M0P2rSsMZnnzKXP60nQ9x3ndTsDqoMKfQw_4', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=108&crop=smart&auto=webp&s=591acd7b888cab7e14074b26e55eaec98409de0b', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=216&crop=smart&auto=webp&s=30183a0b204245eae94bc184556074eeb3d85037', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=320&crop=smart&auto=webp&s=acb0a62eabbdfb3b404d9ebf644267c1ee4cc968', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=640&crop=smart&auto=webp&s=64709ea8efa4345e8c52968ed996a4177c14fcd4', 'width': 640}, {'height': 506, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=960&crop=smart&auto=webp&s=2e53ee6b29ee4522abbb0974c78465d8f27d6b0b', 'width': 960}, {'height': 570, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=1080&crop=smart&auto=webp&s=225d30a010a313a4310d03c4cca9d0b185cfa018', 'width': 1080}], 'source': {'height': 587, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?auto=webp&s=86b55fb4d5f800deceede1a85566df562a277872', 'width': 1112}, 'variants': {'obfuscated': {'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=555e657edbdfec0c122514f85b36a8fe7f43636d', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=41a119e41d11347266cdd9272270503c880804de', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=674726637b7b0fd4f7bdb735a9e61eb35f53c82c', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=9901b9106cabd4593d62db37f4523139afab697b', 'width': 640}, {'height': 506, 'url': 
'https://preview.redd.it/cgaigg6zp7ec1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=f1be34b65e0294525b676fdd5e22472bb071a4fd', 'width': 960}, {'height': 570, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=95424eb3b714832dea72b3de9a903fc6f039af9a', 'width': 1080}], 'source': {'height': 587, 'url': 'https://preview.redd.it/cgaigg6zp7ec1.png?blur=40&format=pjpg&auto=webp&s=390cdb2edc4178194502fa0c0386df73063e88d2', 'width': 1112}}}}]}
Easiest RAG Setup at the moment?
1
[removed]
2024-01-23T15:47:42
https://www.reddit.com/r/LocalLLaMA/comments/19drdjp/easiest_rag_setup_at_the_moment/
yupignome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19drdjp
false
null
t3_19drdjp
/r/LocalLLaMA/comments/19drdjp/easiest_rag_setup_at_the_moment/
false
false
self
1
null
Budget friendly llm to train for local language
1
What LLM can I train for a local language without spending tons of money? I currently have 2x T4 and an RTX 4060 Ti 16GB. Any suggestions?

Thanks
2024-01-23T15:42:41
https://www.reddit.com/r/LocalLLaMA/comments/19dr9dz/budget_friendly_llm_to_train_for_local_language/
Silver_Equivalent_58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dr9dz
false
null
t3_19dr9dz
/r/LocalLLaMA/comments/19dr9dz/budget_friendly_llm_to_train_for_local_language/
false
false
self
1
null
Need some help with simple base Mistral fine-tune on Alpaca dataset.
2
Hello everyone! I'm just discovering this interesting new world of fine-tuning LLMs. I had some experience with the Llama 2 model and now I've decided to try Mistral 7B. Currently I'm using the axolotl framework and decided to try this example configuration from axolotl: [https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/mistral/qlora.yml](https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/mistral/qlora.yml) I tried both the alpaca_2k_test and alpaca_cleaned datasets. I used more or less the same parameters as in the qlora example (I just reduced seq_length to 512 so I could increase batch_size, increased the number of epochs to 2, and set batch_size to 16 with no accumulation). The problem can be explained like this: training and eval loss kind of improved (see plots in attachments), ~1.0 -> ~0.8, but the model's performance is the same: no difference on benchmarks, and when I run manual inference using the alpaca prompt template, the outputs are the same as from the base model (it just autocompletes the text, with no sign of suitable usage of the </s> token). The loss plot is also kind of weird to me; it's so smooth, with no fast (exponential) drop at the beginning like I used to get a lot with Llama. Am I doing something wrong? It's my first Mistral tune, though I had some experience with the Llama 2 model, which after 1-2 epochs even on this test set (and of course on alpaca-cleaned) was able to give expected responses with suitable usage of the </s> token. https://preview.redd.it/8z0v3dszl7ec1.png?width=1844&format=png&auto=webp&s=2c45c5cf2eb6f3b928b05e036ee8272db4b105a1
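Not axolotl-specific, but one thing worth checking when a tune never learns to emit `</s>`: whether the serialized training targets actually end with the tokenizer's EOS token. A toy sketch of the Alpaca-style template with EOS appended (the template text is the common Alpaca wording, and treating `</s>` as the EOS string is an assumption about your tokenizer):

```python
# Toy sketch: Alpaca-style prompt serialization with an explicit EOS
# appended to the target. If the training targets never end with EOS,
# the model has no reason to learn when to stop.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_example(instruction, output, eos="</s>"):
    prompt = ALPACA_TEMPLATE.format(instruction=instruction)
    # In most trainers the loss is masked on the prompt part and
    # computed only on output + eos.
    return prompt + output + eos

sample = build_example("Name a prime number.", "2")
print(sample.endswith("2</s>"))  # True
```

Decoding a few tokenized samples from your dataloader and eyeballing whether they end in the EOS id is usually the fastest way to confirm this.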
2024-01-23T15:38:35
https://www.reddit.com/r/LocalLLaMA/comments/19dr5xz/need_some_help_with_simple_base_mistral_finetune/
oposteriori
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dr5xz
false
null
t3_19dr5xz
/r/LocalLLaMA/comments/19dr5xz/need_some_help_with_simple_base_mistral_finetune/
false
false
https://b.thumbs.redditm…JKTEGpQ9Rj7s.jpg
2
null
14" M1 Max instead of 16" M3 Max for local LLMs?
1
[removed]
2024-01-23T14:43:31
https://www.reddit.com/r/LocalLLaMA/comments/19dpybt/14_m1_max_instead_of_16_m3_max_for_local_llms/
Drummer_Week_9727
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dpybt
false
null
t3_19dpybt
/r/LocalLLaMA/comments/19dpybt/14_m1_max_instead_of_16_m3_max_for_local_llms/
false
false
self
1
null
Someone was asking why folks do local, what's so good about local, and what are people making local? My answer is its the best, its the best, and this is what I'm making
1
2024-01-23T14:41:40
https://v.redd.it/9os02nf997ec1
LipstickAI
/r/LocalLLaMA/comments/19dpwzq/someone_was_asking_why_folks_do_local_whats_so/
1970-01-01T00:00:00
0
{}
19dpwzq
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9os02nf997ec1/DASHPlaylist.mpd?a=1708742505%2CZGY2OTVjNGIwOGM5ZGMzOGUzOGE2ZGExYWM3NGVmYmZmYTY0NTE2NDVhNTc5NjkwOTZiOTlkZjQ3ZDdlNjNmMQ%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/9os02nf997ec1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1022, 'hls_url': 'https://v.redd.it/9os02nf997ec1/HLSPlaylist.m3u8?a=1708742505%2CNjQ0OGZjYTk3NTNmNmRkN2I3ZjlhZGQ0OGM5YjZhZGVjNzYxNDI3OWM4OWFmYTNkNzEyMjJkNzJlY2QxZGQyMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9os02nf997ec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_19dpwzq
/r/LocalLLaMA/comments/19dpwzq/someone_was_asking_why_folks_do_local_whats_so/
false
false
https://external-preview…4885eea78aff3841
1
{'enabled': False, 'images': [{'id': 'ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?width=108&crop=smart&format=pjpg&auto=webp&s=35868a39beedf3cadda9118fc889df7124814050', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?width=216&crop=smart&format=pjpg&auto=webp&s=00437e942d645371f935fda2d973e4b72187f097', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?width=320&crop=smart&format=pjpg&auto=webp&s=20b8af405754543230ec3600df7a6da93a2a420d', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?width=640&crop=smart&format=pjpg&auto=webp&s=eb388ba8c653097330b6494c2867a628101d21e5', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?width=960&crop=smart&format=pjpg&auto=webp&s=511d8ab91b786df68326a4cebcbb74fc68f3d2d5', 'width': 960}, {'height': 574, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?width=1080&crop=smart&format=pjpg&auto=webp&s=10e02cb227ced0f80ab31c9dcedcaf16d20a1ca0', 'width': 1080}], 'source': {'height': 1294, 'url': 'https://external-preview.redd.it/ZDY4MXBtYjFjN2VjMaaDSSb29DOQoaIgTytCZtqFXH46sJF4v5WLuezDPb5o.png?format=pjpg&auto=webp&s=f73e5f1f4e221936d42f19f0344fb3dfa7ab8a03', 'width': 2432}, 'variants': {}}]}
I have an RTX 3090 with 32 Gigs of RAM and an i7 i7-14700KF, what models I can run locally and effortlessly?
1
Please suggest. I want something like Chat-GPT but local and should be able to take some advantage of GPU. Please drop good tutorial guides as well if you know of any for setting up that model. (I use Ubuntu Desktop 22.04 LTS on dual-boot btw). Thanks.
2024-01-23T14:40:22
https://www.reddit.com/r/LocalLLaMA/comments/19dpvy3/i_have_an_rtx_3090_with_32_gigs_of_ram_and_an_i7/
RepentingSoul
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dpvy3
false
null
t3_19dpvy3
/r/LocalLLaMA/comments/19dpvy3/i_have_an_rtx_3090_with_32_gigs_of_ram_and_an_i7/
false
false
self
1
null
Analyzing Challenges and Potential Future Directions for AI in Web Development
1
### **1. Introduction** In the evolving landscape of web development, Artificial Intelligence (AI) has emerged as a crucial tool. However, despite its advancements, AI faces significant challenges, particularly in understanding and implementing visual and structural elements in web development. This document delves into these challenges and speculates on future solutions. ### **2. Visualization Weaknesses in AI** AI currently struggles with several aspects of visual comprehension in web development: - **Identifying Webpages Content as a Human:** AI systems lack the nuanced understanding that humans possess in interpreting webpage content, missing out on context and subtlety. - **Experimenting and Testing Visual Interactions:** AI has limitations in testing and experimenting with visual interactions on web interfaces to ensure they meet human expectations and usability standards. - **Weak Understanding of Visual Descriptions:** AI finds it challenging to accurately interpret and implement visual elements based on descriptions provided by clients or developers, often leading to a gap between the envisioned and the delivered product. ### **3. Adherence to Graphic Themes** AI's inability to consistently stick to a project's graphic theme stems from: - **Visual Weaknesses:** Due to its limited understanding of visual nuances, AI often struggles to maintain a consistent graphic theme across a web project. - **Requirement for Custom Models:** There is a need for specialized image-to-text models to help AI better understand and adhere to specific design themes. ### **4. Integrated AI Solutions** The future of AI in web development could see the integration of multiple AI technologies: - **Combining AI Models:** An ideal solution may involve the integration of text-to-image, text-to-text, and possibly image-to-image AI models. 
- **Finetuning Challenges:** The development of such integrated solutions would require extensive finetuning, possibly taking years and needing the investment of large tech companies. - **Open Source Availability:** Advanced AI solutions will likely reach the open-source community much later, following their development and widespread commercial use. ### **5. Understanding Different Project Structures** AI's effectiveness is limited by its inability to understand varied project structures: - **Variability Across Languages:** Each programming language, like Python, PHP, or Java, has unique patterns that AI finds challenging to grasp simultaneously. - **Custom Project Structures:** AI struggles with custom folder structures or unique project setups, which are common in development but not standardized. - **Impact of Modifications:** Updates to a project, whether made by AI or humans, can further complicate AI’s understanding, potentially leading to inefficient or incorrect solutions. ### **6. Documentation Complexities** Documenting a web development project involves multiple layers, each with its own challenges: - **Framework Documentation:** Standard documentation, like Laravel’s, doesn't pose a significant challenge to AI. - **Project Documentation:** This includes specifics like models, controllers, views, etc., and requires a deeper understanding of the project's unique aspects. - **User Manual Documentation:** Crucial for grasping how end-users interact with the project. For example, in a note management system, understanding how users perceive and use notes is vital for meeting specific client requirements, such as highlighting a day’s to-do list note. That's something an AI can't understand, so in this case the AI won't be able to respond to a query that takes highlighting into consideration. 
- **Interdependencies and Modifications:** Changes in the project can affect all types of documentation, requiring continuous updates and understanding, which makes it harder for the AI to catch up on the project context. - **Human Involvement:** Some documentation tasks may currently be beyond AI's capabilities, necessitating human input. ### **Conclusion** The future may hold more integrated and sophisticated AI solutions, but these advancements will require significant time, investment, and development, and they may reach the open-source world quite late. So I have an idea that I want to share as a solution, but first I will collect the most useful comments to build an enhanced, collaborative idea that will make things faster for the open-source community. We are just thinking for now, but who knows. -------------------- #### Note 1: To all who are eager to contribute to the discussion, kindly initiate your response with `#NUMBER`, ensuring that `NUMBER` corresponds to the specific section of the article to which your comment is related. Example: `#4 I think that the solution to ...`
2024-01-23T14:33:52
https://www.reddit.com/r/LocalLLaMA/comments/19dpr3e/analyzing_challenges_and_potential_future/
khalil_ben_zineb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dpr3e
false
null
t3_19dpr3e
/r/LocalLLaMA/comments/19dpr3e/analyzing_challenges_and_potential_future/
false
false
self
1
null
mlabonne's LLM course
205
I've recently found this nicely organized collection of resources related to LLMs: [https://github.com/mlabonne/llm-course](https://github.com/mlabonne/llm-course) It has everything, from theory of how LLMs and various techniques (like LoRA) work, to hands-on colabs. I thought this community might also find it useful and I did not find it posted here before.
2024-01-23T13:46:46
https://www.reddit.com/r/LocalLLaMA/comments/19dorwo/mlabonnes_llm_course/
DreamGenAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dorwo
false
null
t3_19dorwo
/r/LocalLLaMA/comments/19dorwo/mlabonnes_llm_course/
false
false
self
205
{'enabled': False, 'images': [{'id': 'sqb8MYPeZu5gcAPJ5p9z9kK7DLGw-an0qhi8OTYvnzw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?width=108&crop=smart&auto=webp&s=61c2f706f450ee454d599c7f252facda617bfc36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?width=216&crop=smart&auto=webp&s=f5dde32426698a78a0327407cbc134d4013b4713', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?width=320&crop=smart&auto=webp&s=9b8ebb4b40768e3e1798be64c17c42317ece4d19', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?width=640&crop=smart&auto=webp&s=f4979a398c72a921dbf9e69cafc328948ee9943f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?width=960&crop=smart&auto=webp&s=17d08e0da5b70eac4964c7ef3fb58f3202e4e1c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?width=1080&crop=smart&auto=webp&s=4233675032378e70eeed443f73187a9f9a766f3d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/39bhvKsvezD8o_3fgmeZ4fezmGhUnb2e2PJAQIqYqpM.jpg?auto=webp&s=ee44aa3f2d59ef2012972d6dcf8d4f9d92a952a7', 'width': 1200}, 'variants': {}}]}
Trouble understanding the implementation of how Multi-modality is “solved” through alignment of CLIP and LLM
11
Hi folks, recently I've been reading about LLaVA, mostly 1.5, in which they "solve" multi-modality in a rather simple and elegant way, IMHO: through an alignment MLP (called a projector in other papers) between CLIP and the LLM, with the requirement that the language model must be open-source so that a contrastive methodology can be applied. My question is regarding how the CLIP image features reach the LLM's embedding space. As the features are obviously not text, I am guessing they bypass the tokenizer and embedding lookup so that the raw (projected) CLIP image features can be fed directly into the transformer?
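For what it's worth, my understanding matches your guess: the projected image features are concatenated with the text token embeddings, skipping the tokenizer/embedding lookup entirely for the image part. A toy numeric sketch of that idea (the dimensions, weights, and helper name are all made up for illustration; real implementations use a learned linear layer or MLP):

```python
def project(image_features, W, b):
    """Toy linear projection: CLIP feature dim -> LLM embedding dim."""
    return [
        [sum(f * w for f, w in zip(feat, col)) + bi
         for col, bi in zip(zip(*W), b)]
        for feat in image_features
    ]

# Toy dims: 2 image patches, CLIP dim 3 -> LLM embedding dim 2.
clip_feats = [[1.0, 0.0, 2.0], [0.5, 1.0, 0.0]]
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3x2 projector weights
b = [0.0, 0.0]
visual_tokens = project(clip_feats, W, b)

# Pretend this came from the LLM's embedding table for one text token.
text_embeddings = [[0.1, 0.2]]

# The sequence fed to the transformer: visual "tokens" + text tokens.
llm_input = visual_tokens + text_embeddings
print(len(llm_input))  # 3 positions: 2 visual + 1 text
```

So nothing is "stripped" from the LLM; the projector's output simply takes the place that token embeddings would normally occupy at those sequence positions.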
2024-01-23T12:52:10
https://www.reddit.com/r/LocalLLaMA/comments/19dnqi5/trouble_understanding_the_implementation_of_how/
Its_All_Chain_Rules
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dnqi5
false
null
t3_19dnqi5
/r/LocalLLaMA/comments/19dnqi5/trouble_understanding_the_implementation_of_how/
false
false
self
11
null
Appreciation for the duality of this sub
1
[removed]
2024-01-23T12:26:19
https://www.reddit.com/r/LocalLLaMA/comments/19dna9l/appreciation_for_the_duality_of_this_sub/
GeeBrain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dna9l
false
null
t3_19dna9l
/r/LocalLLaMA/comments/19dna9l/appreciation_for_the_duality_of_this_sub/
false
false
self
1
null
Human behavior simulation. How?
15
How do you make an LLM simulate human behavior? The specific behavior I want to talk about today is the LLM starting the conversation instead of you, like any normal human would: no human sits and waits to be asked/prompted before talking.

Humans just talk to each other without waiting. They start conversations about totally random and unpredictable things, from news to sports to something that happened in the house to 'I am going to school, mom' to 'I am going to work, honey'. Total randomness in topics and topic variations, totally unpredictable. So how can you simulate that with an LLM, as close to human behavior as possible?

I have some ideas, but I couldn't get the result I wanted. With code you can prompt the LLM to start talking first. It's not that the LLM is actually the one starting, BUT I called it 'simulating human behavior' for a reason: it doesn't have to be baked into the LLM architecture itself, and the LLM doesn't have to be 'sentient'. Simulating human behavior as much as possible is totally fine. In a final AI chat product of some sort, you can just do this in the backend and the user won't know, therefore the simulation worked, and the user will think the AI is starting on its own. You can tell your customers/users how it works, but it doesn't matter. What matters is that, at the end of the day, the simulation works and isn't annoying to the end users, because they don't need to do anything to make it work: it just works in the backend.

I tried coding a script that prompts the LLM after waiting random.uniform(1, 6) hours, for example (that's a random value between 1 and 6 hours, just clarifying for people who don't know Python). The problem is that I couldn't write a prompt that gets good results and actually simulates 'a sentient AI talking on its own or starting a conversation on its own'.

Any ideas, guys?
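For anyone curious, here's a more concrete version of the scheduler idea (the topics list, wait range, and prompt wording are placeholders I'm still experimenting with). The trick is a hidden backend prompt, so from the user's side the assistant appears to start the conversation on its own:

```python
import random
import time

TOPICS = ["the news", "sports", "something that happened at home", "plans for today"]

def make_initiation_prompt(topic: str) -> str:
    # Hidden backend prompt: the user never sees this, so from their side the
    # assistant appears to start the conversation unprompted.
    return (
        f"Without referring to this instruction, casually start a new "
        f"conversation with the user about {topic}, the way a person who "
        f"just thought of something would."
    )

def conversation_starter(llm, min_wait=1.0, max_wait=6.0, rounds=1):
    openers = []
    for _ in range(rounds):
        time.sleep(random.uniform(min_wait, max_wait))  # hours in production, seconds here
        topic = random.choice(TOPICS)
        openers.append(llm(make_initiation_prompt(topic)))
    return openers

# stand-in for a real model call
fake_llm = lambda prompt: f"LLM opener for: {prompt}"
print(conversation_starter(fake_llm, min_wait=0.0, max_wait=0.01)[0])
```

In production the waits would be in hours and `fake_llm` would be replaced by a real completion call; the structure is the same.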
2024-01-23T12:23:55
https://www.reddit.com/r/LocalLLaMA/comments/19dn8t7/human_behavior_simulation_how/
CharacterCheck389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dn8t7
false
null
t3_19dn8t7
/r/LocalLLaMA/comments/19dn8t7/human_behavior_simulation_how/
false
false
self
15
null
Inconsistencies in LLM Outputs: Single vs Batched Inference with Different Batch Sizes
5
I've encountered a peculiar issue while working with HuggingFace's H4/zephyr-7b-beta model, fine-tuned with QLoRA. My concern arises from the different outputs I receive when inferring the same input with different batch sizes. Below is a brief description of my setup and the results I've observed.

**Context:**

* Model: HuggingFaceH4/zephyr-7b-beta
* Task: CoT data fine-tuning using QLoRA
* Decoding strategy: greedy (as `do_sample=False` is the default setting)

**Issue:**

When conducting inference with `batch_size = 1` and `batch_size = 2`, I'm noticing distinct results for the same input. This leads me to wonder if the parallel inference process alters computations and outcomes in some manner.

**Outputs:**

* With `batch_size == 1`: "AE can't 'split the difference' like this. It has to take a position on the allegation that Adair acted in the scope of employment at the time of the accident."
* With `batch_size == 2`: "AE can't 'split the difference' like this. It can't respond that Adair both did and didn't act in the scope of employment."
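My current suspicion (an assumption, not a confirmed diagnosis): greedy decoding is only deterministic if the logits are bit-identical, and batching changes padding and kernel reduction order, so float32 sums come out a few ulps different and can flip an argmax on a near-tie. Once one token flips, the two generations diverge completely. A toy illustration of the flip:

```python
import numpy as np

# Two logits one float32 ulp apart: a near-tie, which real models produce
# all the time between plausible next tokens.
a = np.float32(2.0)
b = np.nextafter(a, np.float32(3.0))          # smallest float32 greater than a
logits = np.array([a, b], dtype=np.float32)

# A batched kernel accumulates in a different order, shifting the result by a
# few ulps. Model that here as a tiny perturbation of the first logit:
perturbed = logits + np.array([1e-6, 0.0], dtype=np.float32)

print(int(np.argmax(logits)), int(np.argmax(perturbed)))  # 1 0
```

So both outputs can be "correct" greedy decodes of numerically slightly different logit tensors.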
2024-01-23T12:13:53
https://www.reddit.com/r/LocalLLaMA/comments/19dn2to/inconsistencies_in_llm_outputs_single_vs_batched/
hwanchang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dn2to
false
null
t3_19dn2to
/r/LocalLLaMA/comments/19dn2to/inconsistencies_in_llm_outputs_single_vs_batched/
false
false
self
5
null
Are GGUF and GGML only for lower end computers?
1
[removed]
2024-01-23T11:51:08
https://www.reddit.com/r/LocalLLaMA/comments/19dmpb0/are_gguf_and_ggml_only_for_lower_end_computers/
learning_hedonism
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dmpb0
false
null
t3_19dmpb0
/r/LocalLLaMA/comments/19dmpb0/are_gguf_and_ggml_only_for_lower_end_computers/
false
false
self
1
null
fine-tuning new concepts
1
Hi all, there are lots of articles out there describing and implementing different RAG techniques. Others talk about fine-tuning, but only for a specific style or task (mostly simple tasks like sentiment analysis). E.g. they say: if you are fine-tuning a support bot on your chat history, you are training it on the tone that you want the chatbot to have. So basically they say: fine-tune for style and bring knowledge via RAG.

I cannot find any example of fine-tuning a model on concepts. Like: if you are building a chatbot for support, would it not make sense to first bring the main concepts of your product into the local LLM, and then use RAG for context? The comparison I see here is with an expert support engineer: because she knows a lot about the system, she can make connections and see where problems come from, why things fail, what might be behind an incident. Surely she would go to a wiki or the docs for details, but her know-how is definitely helpful in her work.

Can this be done with local LLMs? Has it been tried?
2024-01-23T11:42:56
https://www.reddit.com/r/LocalLLaMA/comments/19dmkdk/finetunning_new_concepts/
FineInstruction1397
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dmkdk
false
null
t3_19dmkdk
/r/LocalLLaMA/comments/19dmkdk/finetunning_new_concepts/
false
false
self
1
null
How to stop "Narrator Mode" appearing in role-play chats
5
Hi, I've always been a fan of text-based adventure games and have been experimenting with some models for adventure role play. My rig has 8 GB of VRAM and 32 GB of CPU RAM, so I'm largely limited to 7B or 13B models, although 30B ones run, albeit painfully slowly. I use LM Studio or Jan as the local model chat client.

What I've found is that models will often not just give their characters' actions and thoughts but will start to narrate the story, including what my character and everyone else does. These narrations can run away out of control, to the point where the model starts writing a book! They seem to flick into this mode at random. I've tried about six different popular models and they all seem to do it to a greater or lesser degree, although it's way less with the larger models.

Is there any way to stop this? I've tried instructing the model to describe only its characters' actions and thoughts and to keep responses short, but to no avail. I've even provided example interactions, with no joy. Is there a solution, or is this just what these models often do?

Edit: Very grateful for those comments, guys. I'll give those tips a go. Thanks!
2024-01-23T11:41:59
https://www.reddit.com/r/LocalLLaMA/comments/19dmjr7/how_to_stop_narrator_mode_appearing_in_roleplay/
TezzaNZ
self.LocalLLaMA
2024-01-23T19:45:17
0
{}
19dmjr7
false
null
t3_19dmjr7
/r/LocalLLaMA/comments/19dmjr7/how_to_stop_narrator_mode_appearing_in_roleplay/
false
false
self
5
null
How has your experience running a Local LLM on production been?
1
[removed]
2024-01-23T10:50:01
https://www.reddit.com/r/LocalLLaMA/comments/19dlr35/how_has_your_experience_running_a_local_llm_on/
hopeirememberthisid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dlr35
false
null
t3_19dlr35
/r/LocalLLaMA/comments/19dlr35/how_has_your_experience_running_a_local_llm_on/
false
false
self
1
null
LLMs as a judge models are bad at giving scores in relevant numerical intervals > most LLM as a judge evals are probably useless
87
2024-01-23T10:15:17
https://twitter.com/aparnadhinak/status/1748368364395721128
clefourrier
twitter.com
1970-01-01T00:00:00
0
{}
19dl947
false
null
t3_19dl947
/r/LocalLLaMA/comments/19dl947/llms_as_a_judge_models_are_bad_at_giving_scores/
false
false
default
87
null
[2401.11458] Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback
1
2024-01-23T09:57:58
https://arxiv.org/abs/2401.11458
Elven77AI
arxiv.org
1970-01-01T00:00:00
0
{}
19dl04h
false
null
t3_19dl04h
/r/LocalLLaMA/comments/19dl04h/240111458_linear_alignment_a_closedform_solution/
false
false
default
1
null
MistralAI polls if it should organize a hackathon. 93% Voted YES!
1
[removed]
2024-01-23T09:51:24
https://www.reddit.com/r/LocalLLaMA/comments/19dkwyw/mistralai_polls_if_it_should_organize_a_hackathon/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dkwyw
false
null
t3_19dkwyw
/r/LocalLLaMA/comments/19dkwyw/mistralai_polls_if_it_should_organize_a_hackathon/
false
false
self
1
null
I have a 4060 with 8 gigs of VRAM, any chance I can run a decent uncensored LLM?
1
Basically the title. I know it ain't much, but I hope I can run a decent model locally by jumping through some hoops.
2024-01-23T09:47:28
https://www.reddit.com/r/LocalLLaMA/comments/19dkv08/i_have_a_4060_with_8gigs_of_vram_any_chance_i_can/
theforce1579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dkv08
false
null
t3_19dkv08
/r/LocalLLaMA/comments/19dkv08/i_have_a_4060_with_8gigs_of_vram_any_chance_i_can/
false
false
self
1
null
Best local LLM for English learning? preferably under 13B since I'm running 16GB M1 Pro MBP
1
I've tried these models using Ollama ``` deepseek-coder:6.7b llama2-uncensored:latest mistral:latest stable-code:latest starling-lm:7b ``` but I'm not a very experienced user, so I'd like to hear your suggestions. The tasks I'd like the local LLM to accomplish are: - proofread, and preferably give explanations for changes - make summary - provide grammar/writing guidance
2024-01-23T09:38:11
https://www.reddit.com/r/LocalLLaMA/comments/19dkq7v/best_local_llm_for_english_learning_preferably/
Organic_Challenge151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dkq7v
false
null
t3_19dkq7v
/r/LocalLLaMA/comments/19dkq7v/best_local_llm_for_english_learning_preferably/
false
false
self
1
null
Should I get a Mac Studio M2 Max with 96GB RAM or a M1 Ultra with 64GB?
1
I have been feeling quite limited with my Mac Mini M1 16GB, and it is time to upgrade. I do not have an unlimited budget, otherwise I'd get a 128GB M2 Ultra. But for similar prices, I can get either:

* M2 Max with 96 GB RAM
* M1 Ultra with 64 GB RAM

I am aware the M1 Ultra is almost twice as fast as the M2 Max, but will 64 GB be sufficient? According to model descriptions, it seems I can run 70B models without any problems on 64 GB, and a quantised 120B should also be OK. What are your recommendations? And is there anyone with a similar setup who would mind sharing their experience?
2024-01-23T09:14:22
https://www.reddit.com/r/LocalLLaMA/comments/19dkeko/should_i_get_a_mac_studio_m2_max_with_96gb_ram_or/
ex-arman68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dkeko
false
null
t3_19dkeko
/r/LocalLLaMA/comments/19dkeko/should_i_get_a_mac_studio_m2_max_with_96gb_ram_or/
false
false
self
1
null
models are the same after loading lora parameters using peft library
1
Hi, I created a LoRA and tried to merge it with the base model, but somehow the new model and the original model are giving the same logits.

base_model is as follows:

```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 4096)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```

and the lora_model is created by the following code:

```python
expert_lora_path = '/common/home/users/m/manujm/lora-llm/llama-2-7b-expert-shakespeare_'
expert_lora_config = LoraConfig.from_pretrained(expert_lora_path)
expert_peft_model = PeftModel.from_pretrained(base_model, expert_lora_path, device_map='cuda').to('cuda')
```

and is as follows:

```
PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): LlamaForCausalLM(
      (model): LlamaModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x LlamaDecoderLayer(
            (self_attn): LlamaSdpaAttention(
              (q_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict((default): Dropout(p=0.1, inplace=False))
                (lora_A): ModuleDict((default): Linear(in_features=4096, out_features=32, bias=False))
                (lora_B): ModuleDict((default): Linear(in_features=32, out_features=4096, bias=False))
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
              (v_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict((default): Dropout(p=0.1, inplace=False))
                (lora_A): ModuleDict((default): Linear(in_features=4096, out_features=32, bias=False))
                (lora_B): ModuleDict((default): Linear(in_features=32, out_features=4096, bias=False))
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
              (rotary_emb): LlamaRotaryEmbedding()
            )
            (mlp): LlamaMLP(
              (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
              (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
              (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
              (act_fn): SiLU()
            )
            (input_layernorm): LlamaRMSNorm()
            (post_attention_layernorm): LlamaRMSNorm()
          )
        )
        (norm): LlamaRMSNorm()
      )
      (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
    )
  )
)
```

Even though I can clearly see the LoRA modules being injected into the base_model, the logits still remain the same.

I double-checked the above by comparing the parameters of the two models with the following code:

```python
flag = True
for p1, p2 in zip(base_model.parameters(), antiexpert_peft_model.parameters()):
    if p1.data.ne(p2.data).sum() > 0:
        flag = False
print(flag)
```

which gives me True as the response. I'm confused as to what's wrong in my implementation, or whether there was an error while training.
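One classic gotcha worth checking (just a guess at what's happening here): peft initializes lora_B to zeros, so an adapter that never actually trained (or whose weights failed to load) leaves the output exactly equal to the base model's, and the base weights themselves are only changed once you call `merge_and_unload()`; wrapping alone applies the delta at forward time. A numpy sketch of the LoRA math, with toy sizes and no peft dependency:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # toy hidden size and LoRA rank

W = rng.normal(size=(d, d))      # frozen base weight; never modified by LoRA
A = rng.normal(size=(r, d))      # lora_A (randomly initialized)
x = rng.normal(size=d)

def lora_forward(x, B):
    # LoRA adds its delta B @ (A @ x) at forward time, on top of the frozen W
    return W @ x + B @ (A @ x)

B0 = np.zeros((d, r))            # lora_B starts as all zeros in peft
Bt = rng.normal(size=(d, r))     # stand-in for a lora_B that actually trained

print(np.allclose(lora_forward(x, B0), W @ x))  # True: zero delta, identical logits
print(np.allclose(lora_forward(x, Bt), W @ x))  # False: trained adapter changes output
```

So if the saved adapter's lora_B tensors are (near) zero, identical logits are expected; worth inspecting the checkpoint before blaming the merge.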
2024-01-23T08:19:32
https://www.reddit.com/r/LocalLLaMA/comments/19djnef/models_are_the_same_after_loading_lora_parameters/
sadsufferer99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19djnef
false
null
t3_19djnef
/r/LocalLLaMA/comments/19djnef/models_are_the_same_after_loading_lora_parameters/
false
false
self
1
null
models are the same after loading lora parameters using peft library
1
[removed]
2024-01-23T08:15:40
https://www.reddit.com/r/LocalLLaMA/comments/19djljh/models_are_the_same_after_loading_lora_parameters/
1azytux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19djljh
false
null
t3_19djljh
/r/LocalLLaMA/comments/19djljh/models_are_the_same_after_loading_lora_parameters/
false
false
self
1
null
Basic LLM for Jira ticket creation
1
Hi all, Understanding that open source LLMs tend to be more focused around specific things that they do well, I’m looking for suggestions on an LLM that can have a back and forth with a user to generate a fairly solid Jira ticket for data engineering development work. Are there any LLMs that spring to mind? Given the tight focus, I’m hoping I can get away with something relatively small.
2024-01-23T07:58:43
https://www.reddit.com/r/LocalLLaMA/comments/19djd88/basic_llm_for_jira_ticket_creation/
nydasco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19djd88
false
null
t3_19djd88
/r/LocalLLaMA/comments/19djd88/basic_llm_for_jira_ticket_creation/
false
false
self
1
null
(Server) Software to easily switch between LLMs when not at home?
19
I have a bunch of models that I would like to host locally and access from outside of my home network, e.g. from my laptop when I am at work. For a single model, this can be done with llama.cpp and a Tor Hidden Service (I can't forward ports due to my ISP). Unfortunately, I cannot fit all my models on my GPU at the same time. Is there software that allows me to switch which model is loaded on the GPU when I am not at home? I.e. a server that lets me choose which model is loaded on the GPU? (And this model can then be accessed on a different port.)
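In case nothing off the shelf fits, here is a minimal sketch of what I have in mind, using only the Python standard library (the model names, paths, and the llama.cpp server flags are placeholders for my setup, not something you can run as-is): a tiny control server that kills the current llama.cpp server process and starts a new one with the requested model.

```python
import subprocess
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

MODELS = {  # hypothetical paths; point these at your own GGUF files
    "mistral": "/models/mistral-7b-instruct.Q4_K_M.gguf",
    "llama2": "/models/llama-2-13b-chat.Q4_K_M.gguf",
}
current = None  # the running llama.cpp server process, if any

def server_cmd(name: str, port: int = 8081) -> list:
    # command line for llama.cpp's bundled server binary (flags are placeholders)
    return ["./server", "-m", MODELS[name], "--port", str(port), "-ngl", "99"]

class Switcher(BaseHTTPRequestHandler):
    def do_GET(self):
        global current
        qs = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        name = qs.get("model", [""])[0]
        if name not in MODELS:
            self.send_error(404, "unknown model")
            return
        if current is not None:
            current.terminate()  # free the VRAM before loading the next model
            current.wait()
        current = subprocess.Popen(server_cmd(name))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(("loaded " + name + "\n").encode())

# usage (blocks forever, so left commented out here):
# HTTPServer(("127.0.0.1", 8080), Switcher).serve_forever()
```

You would expose both the control port and the inference port through the hidden service. Ollama also does on-demand loading when you request a model through its API, which may cover this without custom code.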
2024-01-23T07:40:34
https://www.reddit.com/r/LocalLLaMA/comments/19dj4ch/server_software_to_easily_switch_between_llms/
theyseemestackin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dj4ch
false
null
t3_19dj4ch
/r/LocalLLaMA/comments/19dj4ch/server_software_to_easily_switch_between_llms/
false
false
self
19
null
how to convert zephyr-7b-beta / pytorch_model to .onnx format?
1
[removed]
2024-01-23T07:20:38
https://www.reddit.com/r/LocalLLaMA/comments/19ditzs/how_to_convert_zephyr7bbeta_pytorch_model_to_onnx/
HV4U2001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ditzs
false
null
t3_19ditzs
/r/LocalLLaMA/comments/19ditzs/how_to_convert_zephyr7bbeta_pytorch_model_to_onnx/
false
false
self
1
null
Best Model for X on Y hardware??
1
Hey folks, I'm getting a bit tired of these types of posts. The questions are always very similar, and the answers only change every so often. I originally joined this subreddit because it seemed focused on LLMs, but recently I've been seeing a lot of these low-effort posts. If anyone has ideas on how to reduce them, leave them under this post. Also, I don't mean to knock beginners for asking questions; it just feels like there should be better ways for them to get help.
2024-01-23T06:26:34
https://www.reddit.com/r/LocalLLaMA/comments/19di06q/best_model_for_x_on_y_hardware/
metaprotium
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19di06q
false
null
t3_19di06q
/r/LocalLLaMA/comments/19di06q/best_model_for_x_on_y_hardware/
false
false
self
1
null
Mistral & Speculative Decoding, any options?
7
Anyone know of any models that share vocabularies with Mistral or are otherwise suitable as a draft model for speculative decoding w/ it? Are there any distillations of Mistral?
2024-01-23T05:58:55
https://www.reddit.com/r/LocalLLaMA/comments/19dhk07/mistral_speculative_decoding_any_options/
Fun_Tangerine_1086
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dhk07
false
null
t3_19dhk07
/r/LocalLLaMA/comments/19dhk07/mistral_speculative_decoding_any_options/
false
false
self
7
null
Using Ragas with local setup
1
I have created a local RAG system that uses a locally hosted LLM (llamafile) and Hugging Face embeddings. I could not find any documentation or guidance on the web showing how to run Ragas locally, using custom embeddings and pointing it at a local URL. Has anyone been able to run Ragas locally?
2024-01-23T05:48:04
https://www.reddit.com/r/LocalLLaMA/comments/19dhdl0/using_ragas_with_local_setup/
Key_Education_2557
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dhdl0
false
null
t3_19dhdl0
/r/LocalLLaMA/comments/19dhdl0/using_ragas_with_local_setup/
false
false
self
1
null
How do you make an LLM follow a list of guidelines?
1
[removed]
2024-01-23T05:23:13
https://www.reddit.com/r/LocalLLaMA/comments/19dgyfa/how_do_you_make_an_llm_follow_a_list_of_guidelines/
ActuallySatya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dgyfa
false
null
t3_19dgyfa
/r/LocalLLaMA/comments/19dgyfa/how_do_you_make_an_llm_follow_a_list_of_guidelines/
false
false
self
1
null
Where to buy Coral mini PCIe Accelerator
1
I'm looking to put my Raspberry Pis to work, and add a Coral for some sweet processing power. Does anyone know where I can buy a Coral PCIe? They seem a bit scarce at the moment. The one place I found that reported "in stock" was Mouser, which doesn't ship to my address (in Australia) without ridiculous add-on delivery fees.
2024-01-23T04:57:26
https://www.reddit.com/r/LocalLLaMA/comments/19dgi2v/where_to_buy_coral_mini_pcie_accelerator/
Competitive_Tip7203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dgi2v
false
null
t3_19dgi2v
/r/LocalLLaMA/comments/19dgi2v/where_to_buy_coral_mini_pcie_accelerator/
false
false
self
1
null
[2401.12070] Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text
1
2024-01-23T04:47:51
https://arxiv.org/abs/2401.12070
Elven77AI
arxiv.org
1970-01-01T00:00:00
0
{}
19dgc4h
false
null
t3_19dgc4h
/r/LocalLLaMA/comments/19dgc4h/240112070_spotting_llms_with_binoculars_zeroshot/
false
false
default
1
null
New Paper: Proxy-Tuning: An Efficient Alternative to Finetuning Large Language Models
69
Link to paper: https://arxiv.org/abs/2401.08565 How many models are out there that the weights haven't been released for? This seems pretty interesting for those. I wonder how lightweight it is. Overall very interesting. "Despite the general capabilities of large pretrained language models, they consistently benefit from further adaptation to better achieve desired behaviors. However, tuning these models has become increasingly resource-intensive, or impossible when model weights are private. We introduce proxy-tuning, a lightweight decoding-time algorithm that operates on top of black-box LMs to achieve the result of directly tuning the model, but by accessing only its prediction over the output vocabulary. Our method instead tunes a smaller LM, then applies the difference between the predictions of the small tuned and untuned LMs to shift the original predictions of the base model in the direction of tuning, while retaining the benefits of larger scale pretraining. In experiments, when we apply proxy-tuning to Llama2-70B using proxies of only 7B size, we can close 88% of the gap between Llama2-70B and its truly-tuned chat version, when evaluated across knowledge, reasoning, and safety benchmarks. Interestingly, when tested on TruthfulQA, proxy-tuned models are actually more truthful than directly tuned models, possibly because decoding-time guidance better retains the model's factual knowledge. We then demonstrate the generality of proxy-tuning by applying it for domain adaptation on code, and task-specific finetuning on question-answering and math problems. Our work demonstrates the promise of using small tuned LMs to efficiently customize large, potentially proprietary LMs through decoding-time guidance."
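The per-step decoding rule described in the abstract is just logit arithmetic: shift the big model's logits by the tuning delta learned by the small pair. A sketch with random logits (vocab size and values are placeholders; the three models must share a vocabulary for this to be well defined):

```python
import numpy as np

def proxy_tuned_logits(base_large, tuned_small, untuned_small):
    # Per decoding step: s(x) = base_M(x) + [tuned_m(x) - untuned_m(x)],
    # i.e. steer the large model in the direction the small model moved
    # when it was tuned.
    return base_large + (tuned_small - untuned_small)

vocab = 5
rng = np.random.default_rng(0)
base = rng.normal(size=vocab)      # logits from the large (possibly black-box) model
tuned = rng.normal(size=vocab)     # logits from the small tuned "expert"
untuned = rng.normal(size=vocab)   # logits from the small untuned "anti-expert"

shifted = proxy_tuned_logits(base, tuned, untuned)
probs = np.exp(shifted - shifted.max())
probs /= probs.sum()               # softmax over the shifted logits, then sample/argmax
print(int(np.argmax(probs)))
```

Note this only needs the large model's output distribution, not its weights, which is why it works on models whose weights haven't been released.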
2024-01-23T04:42:28
https://www.marktechpost.com/2024/01/21/researchers-from-the-university-of-washington-and-allen-institute-for-ai-present-proxy-tuning-an-efficient-alternative-to-finetuning-large-language-models/
Blade1413
marktechpost.com
1970-01-01T00:00:00
0
{}
19dg8pk
false
null
t3_19dg8pk
/r/LocalLLaMA/comments/19dg8pk/new_paper_proxytuning_an_efficient_alternative_to/
false
false
https://b.thumbs.redditm…l6_j9MbABb3Q.jpg
69
{'enabled': False, 'images': [{'id': '0a6oOFLuWzpg0wfUfaYe5y22jn02-8eCc_hqzU0r9rE', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?width=108&crop=smart&auto=webp&s=fbbc13cdd2b43bb8f25ebc82282e31d2006ff689', 'width': 108}, {'height': 156, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?width=216&crop=smart&auto=webp&s=4e65e493ef97f08ab36933e74688c7cf8e3de6a4', 'width': 216}, {'height': 231, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?width=320&crop=smart&auto=webp&s=f15d05e5b9112f6e9df294515a97fba7649338c6', 'width': 320}, {'height': 463, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?width=640&crop=smart&auto=webp&s=4243c7acfc38ca2340a9155c63700002d4764871', 'width': 640}, {'height': 695, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?width=960&crop=smart&auto=webp&s=d937d9faff3ac0a4a12fa6a24f6cf77144f639c3', 'width': 960}, {'height': 782, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?width=1080&crop=smart&auto=webp&s=d818cea99d18e4a08e4c76cfe59756be44e61b76', 'width': 1080}], 'source': {'height': 1274, 'url': 'https://external-preview.redd.it/r_rCOGLKHKegzR4AkAu5zry0YPgh5syLBAlt_NBCnYk.jpg?auto=webp&s=67e680df19e50fa02096ed1e69f086ae10ab6eb9', 'width': 1758}, 'variants': {}}]}
Is a pair of 4070 Super 12GB for running inference on 24GB models a good idea ?
1
In particular, are there penalties for running models across multiple GPUs? And how important is it to hit 24 GB of VRAM: is that actually enough for most things?
2024-01-23T03:21:46
https://www.reddit.com/r/LocalLLaMA/comments/19deqr5/is_a_pair_of_4070_super_12gb_for_running/
transdimensionalmeme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19deqr5
false
null
t3_19deqr5
/r/LocalLLaMA/comments/19deqr5/is_a_pair_of_4070_super_12gb_for_running/
false
false
self
1
null
Anyone here manage to de-shroud a P40 and put 92mm or 120mm fans on the side of it?
2
I know these units are supposed to use a blower-type fan, but I'd rather just use regular case fans.
2024-01-23T02:57:07
https://www.reddit.com/r/LocalLLaMA/comments/19de9fi/anyone_here_manage_to_deshroud_a_p40_and_put_92mm/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19de9fi
false
null
t3_19de9fi
/r/LocalLLaMA/comments/19de9fi/anyone_here_manage_to_deshroud_a_p40_and_put_92mm/
false
false
self
2
null
Now, MLX examples can directly fuse the model to hf compatible format and be converted to GGUF.
29
So all we need to do is:

1. Pull down the latest mlx-examples. In the lora example, run:

```
fuse.py --model <path_to_your_base_model> --save-path <path_to_save_fused_model> --adapter-file <path_to_your_adapters.npz> --de-quantize
```

2. Pull down the llama.cpp repository and follow the standard conversion process.
2024-01-23T02:44:14
https://www.reddit.com/r/LocalLLaMA/comments/19ddzxg/now_mlx_examples_can_directly_fuse_the_model_to/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ddzxg
false
null
t3_19ddzxg
/r/LocalLLaMA/comments/19ddzxg/now_mlx_examples_can_directly_fuse_the_model_to/
false
false
self
29
null
Are there any models good at describing pictures ? Specifically scientific diagrams / plots ?
1
Hello fellow LLM enthusiasts. I was wondering if there are any models for that kind of work, because by god I hate writing image descriptions xD. Also, for a text mining project this might come in very handy if it exists. I hope this is fine for this sub; if not, let me know where I should turn with this question ^^. Thanks in advance.
2024-01-23T02:31:53
https://www.reddit.com/r/LocalLLaMA/comments/19ddr26/are_there_any_models_good_at_describing_pictures/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ddr26
false
null
t3_19ddr26
/r/LocalLLaMA/comments/19ddr26/are_there_any_models_good_at_describing_pictures/
false
false
self
1
null
Why are all of my posts awaiting mod approval?
1
[removed]
2024-01-23T02:02:20
[deleted]
1970-01-01T00:00:00
0
{}
19dd5sh
false
null
t3_19dd5sh
/r/LocalLLaMA/comments/19dd5sh/why_are_all_of_my_posts_awaiting_mod_approval/
false
false
default
1
null
Why are all of my posts awaiting mod approval?
1
[removed]
2024-01-23T02:01:16
https://www.reddit.com/r/LocalLLaMA/comments/19dd4xh/why_are_all_of_my_posts_awaiting_mod_approval/
TheTwelveYearOld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dd4xh
false
null
t3_19dd4xh
/r/LocalLLaMA/comments/19dd4xh/why_are_all_of_my_posts_awaiting_mod_approval/
false
false
self
1
null
How do LLMs know to output answers in Markdown, or whatever markup they're using?
1
[removed]
2024-01-23T01:57:51
https://www.reddit.com/r/LocalLLaMA/comments/19dd295/how_do_lllms_know_to_output_answers_in_markdown/
TheTwelveYearOld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dd295
false
null
t3_19dd295
/r/LocalLLaMA/comments/19dd295/how_do_lllms_know_to_output_answers_in_markdown/
false
false
self
1
null
Powerful Colab Alternative for AI/ML Development
1
[removed]
2024-01-23T01:08:52
https://www.reddit.com/r/LocalLLaMA/comments/19dc23t/powerful_colab_alternative_for_aiml_development/
Horror-Economics-685
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19dc23t
false
null
t3_19dc23t
/r/LocalLLaMA/comments/19dc23t/powerful_colab_alternative_for_aiml_development/
false
false
self
1
null