Dataset schema (column · type · observed range):
- title · string · length 1–300
- score · int64 · 0–8.54k
- selftext · string · length 0–41.5k
- created · timestamp[ns] · 2023-04-01 04:30:41 – 2026-03-04 02:14:14
- url · string · length 0–878
- author · string · length 3–20
- domain · string · length 0–82
- edited · timestamp[ns] · 1970-01-01 00:00:00 – 2026-02-19 14:51:53
- gilded · int64 · 0–2
- gildings · string · 7 distinct values
- id · string · length 7
- locked · bool · 2 classes
- media · string · length 646–1.8k
- name · string · length 10
- permalink · string · length 33–82
- spoiler · bool · 2 classes
- stickied · bool · 2 classes
- thumbnail · string · length 4–213
- ups · int64 · 0–8.54k
- preview · string · length 301–5.01k
100% Speech to Speech with Vision | Faster Whisper + OpenVoice + Moondream
1
[removed]
2024-01-26T17:50:27
https://v.redd.it/wjqjvsz2otec1
allaboutai-kris
v.redd.it
1970-01-01T00:00:00
0
{}
1abobis
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wjqjvsz2otec1/DASHPlaylist.mpd?a=1708883444%2CNWQ5ZGM3MzliOWVjMWI0Y2YxMzI0YmIwMTkyNmZmZjMzYWE2MzBhOWY4MGMzNjdlMWY2NjZlMmJlZDNjN2U5YQ%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/wjqjvsz2otec1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wjqjvsz2otec1/HLSPlaylist.m3u8?a=1708883444%2CMjU3ZDA3NmJmZWUwOGE0NTBlNGI5OGQzMDNmOWE4YzlhNzZmMjNjODRmMDIzOWEwOTA1ZGY5ZWQzZWI0Y2JkZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wjqjvsz2otec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1abobis
/r/LocalLLaMA/comments/1abobis/100_speech_to_speech_with_vision_faster_whisper/
false
false
https://external-preview…2e828d151e950fb9
1
{'enabled': False, 'images': [{'id': 'YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=108&crop=smart&format=pjpg&auto=webp&s=3e263b9c412db16c7759fd4503880dd3dd6bf952', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=216&crop=smart&format=pjpg&auto=webp&s=dd842ca86fd72d5f8d178b32b392095c4b0dde5b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=320&crop=smart&format=pjpg&auto=webp&s=8f37d623d1816853057104bbb376f20980e42570', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=640&crop=smart&format=pjpg&auto=webp&s=e35e5b311a67d712aa9535024fe6b861dd9c3290', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=960&crop=smart&format=pjpg&auto=webp&s=af6599971ab72caa2150d0ffed43d4967dd21667', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0585c42185069d2970ba0e89b224b3a017f5c6f8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YW93OHFrN2dvdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?format=pjpg&auto=webp&s=80a94e0388fd9d93a4d2ac45874ad7218558d5d8', 'width': 1920}, 'variants': {}}]}
Tweets to Citations: Unveiling the Impact of Social Media Influencers on AI Research Visibility
7
2024-01-26T17:28:53
https://arxiv.org/abs/2401.13782
mcmoose1900
arxiv.org
1970-01-01T00:00:00
0
{}
1abnt69
false
null
t3_1abnt69
/r/LocalLLaMA/comments/1abnt69/tweets_to_citations_unveiling_the_impact_of/
false
false
default
7
null
An Effective Python/Linux Speech To Text Package?
8
A very high-quality Python or Linux speech-to-text package. Requirements: - a package, a piece of software, or a web UI, it doesn't matter, as long as it can be run with a Linux or Python command - CPU only - high quality - totally local and offline, no hidden shenanigans/twists
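For reference, a minimal sketch of a CPU-only setup with faster-whisper, one package that fits these requirements (the model size and audio file name are placeholders):

```python
from faster_whisper import WhisperModel

# CPU-only, int8-quantized for speed; pick a larger model for higher quality
model = WhisperModel("medium.en", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav")  # placeholder file
for seg in segments:
    print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
```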
2024-01-26T17:19:08
https://www.reddit.com/r/LocalLLaMA/comments/1abnkwp/an_effective_pythonlinux_speech_to_text_package/
CharacterCheck389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abnkwp
false
null
t3_1abnkwp
/r/LocalLLaMA/comments/1abnkwp/an_effective_pythonlinux_speech_to_text_package/
false
false
self
8
null
How to fine tune an LLM?
1
[removed]
2024-01-26T15:44:18
https://www.reddit.com/r/LocalLLaMA/comments/1abla0p/how_to_fine_tune_an_llm/
Traditional-Fly-3445
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abla0p
false
null
t3_1abla0p
/r/LocalLLaMA/comments/1abla0p/how_to_fine_tune_an_llm/
false
false
self
1
null
llama.cpp python always includes the input text
1
I'm struggling to understand why, when I use llama.cpp (via the Python bindings), it always prepends the question to the output. Example: "User: What time was my meeting for?" Output: "User: What time was my meeting for? Assistant: Your meeting was at 4pm." How can I have it output just the "Assistant: Your meeting was at 4pm."? Thanks!
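Assuming the llama-cpp-python bindings, a minimal sketch of one way to suppress the prompt echo (the model path is a placeholder; `echo` controls whether the prompt is returned with the completion):

```python
from llama_cpp import Llama

llm = Llama(model_path="model.gguf")  # placeholder path
prompt = "User: What time was my meeting for?\nAssistant:"
out = llm(prompt, max_tokens=64, echo=False)  # echo=False omits the prompt
print(out["choices"][0]["text"])  # completion only
```

Failing that, slicing the prompt off the front of the returned string (`text[len(prompt):]`) works with any backend.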
2024-01-26T15:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1abl4ta/llama_ccp_python_always_includes_the_input_text/
Second26
self.LocalLLaMA
2024-01-26T16:03:55
0
{}
1abl4ta
false
null
t3_1abl4ta
/r/LocalLLaMA/comments/1abl4ta/llama_ccp_python_always_includes_the_input_text/
false
false
default
1
null
Is it possible to load higher-bit models in ollama?
1
Apologies if this is a stupid question. By default, all models in ollama appear to get pulled in their 4-bit quantised versions, and I have scoured the documentation and found no reference to a flag that enables me to choose the level of quantisation. I figured out through trial and error that I can load Mistral 7B Instruct by using "ollama run mistral:instruct", which I also couldn't find in the git documentation or on the website, but it's still the 4-bit version.
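Ollama selects quantisation through the model tag rather than a flag; each model's library page lists tags such as q4_0, q5_K_M or q6_K. A sketch of pulling and querying an explicit tag over the local REST API (the exact tag name is an assumption; check the library listing):

```python
import requests

# pull a specific quantisation by tag (tag name is an assumption)
requests.post("http://localhost:11434/api/pull",
              json={"name": "mistral:7b-instruct-q6_K"}, timeout=600)

# generate with that tag
resp = requests.post("http://localhost:11434/api/generate",
                     json={"model": "mistral:7b-instruct-q6_K",
                           "prompt": "Hello!", "stream": False})
print(resp.json()["response"])
```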
2024-01-26T15:14:45
https://www.reddit.com/r/LocalLLaMA/comments/1abkm38/is_it_possible_to_load_higherbit_models_in_ollama/
leanmeanguccimachine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abkm38
false
null
t3_1abkm38
/r/LocalLLaMA/comments/1abkm38/is_it_possible_to_load_higherbit_models_in_ollama/
false
false
self
1
null
I was working on knowledge installation for creation of subject matter experts. Ended up creating something very curious and empathetic that will occasionally try to steer the conversation towards Taco Bell. Meet TacoBeLLM. Also available in diet 4-bit EXL2.
55
2024-01-26T15:00:23
https://huggingface.co/ericpolewski/TacoBeLLM
LetMeGuessYourAlts
huggingface.co
1970-01-01T00:00:00
0
{}
1abka1v
false
null
t3_1abka1v
/r/LocalLLaMA/comments/1abka1v/i_was_working_on_knowledge_installation_for/
false
false
https://b.thumbs.redditm…hBfmbvlbLCpw.jpg
55
{'enabled': False, 'images': [{'id': 'nco0l141G6ig7o-MCaNLcJnTDBuTp1FqoQS1BwSoAF8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?width=108&crop=smart&auto=webp&s=2430b304b2b5d093cf48012fe7bd984c4d7f6fad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?width=216&crop=smart&auto=webp&s=b03fbbdd59c1638db2c7da8729d5d085ac9a7d93', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?width=320&crop=smart&auto=webp&s=c6199794b5669065775beff22cd7d3aaf695a4aa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?width=640&crop=smart&auto=webp&s=cea3f8c8ca966f97ad72fccbe2ead149d14a9820', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?width=960&crop=smart&auto=webp&s=f5205153e7f004a449d469dce3c5b376114944b3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?width=1080&crop=smart&auto=webp&s=c3597e1558b525d75f1ba853a254d3cf2ca006f0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YnXBkeFKzDCtBmaTITho84SNzty5kujPSZOuC2GLubY.jpg?auto=webp&s=e6b6421a1b65b5015e7596d17a8bc4b9fe75af71', 'width': 1200}, 'variants': {}}]}
Fine-Tuning Salesforce CodeGen For Higher Software Metrics
1
I'm planning to fine tune the Salesforce CodeGen model, using PEFT (QLoRA). The objective is to improve the software quality metrics of the generated code. I have gathered a few projects from github and obtained their metrics from SonarQube. Now, I want to build the JSON from the data I have to fine tune the model. Any ideas on how to approach it? I did search through the internet a lot, and I cannot find much help!
2024-01-26T14:48:26
https://www.reddit.com/r/LocalLLaMA/comments/1abk0gh/finetuning_salesforce_codegen_for_higher_software/
rajlohith2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abk0gh
false
null
t3_1abk0gh
/r/LocalLLaMA/comments/1abk0gh/finetuning_salesforce_codegen_for_higher_software/
false
false
self
1
null
TIL I was mistaken about my multiple tokens post
86
Oh dear, so I recently created a post about how [some tokens can be multiple words](https://www.reddit.com/r/LocalLLaMA/comments/199a8cm/til_some_tokens_can_be_multiple_words/). I was fascinated by this and tested it, and yeah, it happened all the time. But well... while this may be true in general - as some users commented about certain Chinese phrases - it does not seem to be true for the model I mentioned: Mistral. Why and how did I come to this (wrong) conclusion in my case? I was streaming model output into a file, token by token (yes, real tokens). And while doing that, I was reading from the file in a loop. Whenever something new was written, I assumed it to be a "token" (and therefore saved it in a list). But as you can imagine, sometimes the output stream was faster than the reading - especially with Mistral - so two tokens had already been written into the file, and when I read from it, I assumed I was reading a single token. And just now I stumbled upon this code again and thought: "Hey, what if... I mean, you didn't mark the real tokens as tokens inside the file." And so I tested it and found out that I was mistaken. Double words never appeared again. I am sorry. TL;DR Some tokens may still be multiple words, but at least not the ones I mentioned.
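A minimal sketch of the fix the post describes: write an explicit separator after every token, so the reader can never mistake two back-to-back writes for one token (file name and separator choice are arbitrary):

```python
SEP = "\x1e"  # record separator: a byte that never appears in decoded tokens

# writer side: one token per write, always followed by the separator
with open("stream.txt", "w", encoding="utf-8") as f:
    for token in ["Hel", "lo", " wor", "ld"]:  # stand-in for real model output
        f.write(token + SEP)
        f.flush()

# reader side: split on the separator, so token boundaries survive even
# when the writer gets ahead of the reader
with open("stream.txt", "r", encoding="utf-8") as f:
    tokens = f.read().split(SEP)[:-1]  # drop the empty chunk after last SEP
print(tokens)  # ['Hel', 'lo', ' wor', 'ld']
```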
2024-01-26T14:19:00
https://www.reddit.com/r/LocalLLaMA/comments/1abjdyy/til_i_was_mistaken_about_my_multiple_tokens_post/
psi-love
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abjdyy
false
null
t3_1abjdyy
/r/LocalLLaMA/comments/1abjdyy/til_i_was_mistaken_about_my_multiple_tokens_post/
false
false
self
86
null
How good (for real) are open source LLMs compared to OpenAI's?
1
Deepseek, the new deepseek-coder, Mixtral, etc. Let's take into account that GPT-3.5 Turbo is probably lazier on purpose, and that, according to documents shared on this subreddit, GPT-3.5 Turbo is 20B parameters, probably tuned down from the original big 175B model.
2024-01-26T13:46:55
https://www.reddit.com/r/LocalLLaMA/comments/1abipv3/how_good_for_real_are_open_source_llms_compared/
redjojovic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abipv3
false
null
t3_1abipv3
/r/LocalLLaMA/comments/1abipv3/how_good_for_real_are_open_source_llms_compared/
false
false
self
1
null
What is the best open source LLM for outputting SQL code
1
[removed]
2024-01-26T13:41:39
https://www.reddit.com/r/LocalLLaMA/comments/1abilyw/what_is_the_best_open_source_llm_for_outputting/
redd-dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abilyw
false
null
t3_1abilyw
/r/LocalLLaMA/comments/1abilyw/what_is_the_best_open_source_llm_for_outputting/
false
false
self
1
null
I keep running out of memory. What's the biggest model, and most context, I can run on 3060 12gb? With decent speed?
1
So Psyfighter2 Q5_K_M seems to work for at least about 5k context in Koboldcpp 1.54 for me (by that point it's 2 tokens/sec; at the beginning of context it's about 8 t/s). Choosing 8192 or even 16384 in the cmd arguments doesn't error out. This is with CLBlast and all 41 layers. I'm on Pop!_OS. However, using CuBLAS at 39/41 layers (the most I can do) is faster at 12 t/s, and prompt processing is much faster, basically instant. But I can't choose a higher context size from the cmd arguments, and have to go with the default. Next I tried exllamav2 in text-generation-webui. I downloaded Psyfighter2 6bpw. It took too much memory until I tried a mere 1024 context size. That worked, and the speed said 22 t/s (though I feel like it wasn't that fast). I tried Nous-Capybara-34B Q4_0 but it was at 0.64 t/s with CLBlast at 24 layers. It was over 1 t/s with CuBLAS, but that's too slow for me. I have 32GB of DDR3 RAM and a 3060 12GB. What's the best model I can fit in my GPU with a decent context size? I haven't tried a MoE yet, but that seems like the best option, right? Also, am I missing anything that increases speed? I feel like it should be way faster than it is, or maybe I'm overestimating this GPU.
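To make the trade-off concrete, a sketch using llama-cpp-python: offloaded layers and the KV cache (which grows with n_ctx) compete for the same 12 GB, so raising one usually means lowering the other (the file name is a placeholder):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="psyfighter2.Q5_K_M.gguf",  # placeholder file name
    n_gpu_layers=39,  # drop a few layers to CPU if allocation fails
    n_ctx=8192,       # a bigger context means a bigger KV cache in VRAM
)
```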
2024-01-26T13:35:41
https://www.reddit.com/r/LocalLLaMA/comments/1abihou/i_keep_running_out_of_memory_whats_the_biggest/
ThrowawayProgress99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abihou
false
null
t3_1abihou
/r/LocalLLaMA/comments/1abihou/i_keep_running_out_of_memory_whats_the_biggest/
false
false
self
1
null
Did someone already use the layers twice or more?
25
https://www.reddit.com/r/LocalLLaMA/s/xYh2B5ZxwH I saw that post recently and found it really interesting, but cannot find anything related to this: why don't we just use some of the layers twice instead of frankenmerging the model with itself? I understand that sometimes it can cause trouble, but it would be cool if we had 'configurations' for the layers, which we would use twice or even more. Mistral 7B could have more quality than some 20B models by sacrificing only t/s speed, but not VRAM, so we would still be able to run it at 16k~32k context length instead of the 4k~8k for a 20B.
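A rough sketch of the idea, assuming a Hugging Face transformers model: insert the same decoder-layer module objects into the stack twice, so weights (and VRAM) are shared while compute doubles for those layers. The layer indices are arbitrary, and because the KV cache indexes layers by position, the sketch assumes generation with use_cache=False:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)

layers = model.model.layers
# run layers 8-15 a second time; modules are shared references, not copies,
# so weight memory does not grow
model.model.layers = torch.nn.ModuleList(
    list(layers) + [layers[i] for i in range(8, 16)])
model.config.num_hidden_layers = len(model.model.layers)

# reused layers confuse per-layer KV-cache indexing, so pass
# use_cache=False to generate() when experimenting with this
```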
2024-01-26T13:25:25
https://www.reddit.com/r/LocalLLaMA/comments/1abiaag/did_someone_alread_used_the_layers_twice_or_more/
Working-Flatworm-531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abiaag
false
null
t3_1abiaag
/r/LocalLLaMA/comments/1abiaag/did_someone_alread_used_the_layers_twice_or_more/
false
false
self
25
null
Is there a local program/ project that you can connect your LLM to, to create story outlines, chapters, characters, setting, scenarios and more? Like a story generator / helper?
13
I was wondering because I remember, six months back, I tested an online “AI story creator” or something like that, where you would use GPT-4 or so to generate a character, then you could generate a backstory, world-building outlines, and all of that in a sort of organized UI. It was really cool. I was wondering if there is a local version of some sort, as running stuff locally is just better, obviously, and free. I used this website, it was really fun to use, but super limited: https://toolsaday.com/writing/story-generator (Focus on the side bar; it has tons of options, character creation, world building, all of that cool stuff that can assist writers or whatever.) But yeah, locally and offline would be great, if something like that even exists; if not, it would be cool.
2024-01-26T12:54:34
https://www.reddit.com/r/LocalLLaMA/comments/1abhoer/is_there_a_local_program_project_that_you_can/
headbopper96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abhoer
false
null
t3_1abhoer
/r/LocalLLaMA/comments/1abhoer/is_there_a_local_program_project_that_you_can/
false
false
self
13
null
Inference with an iGPU
7
I’d like to incorporate some ML workloads into my homelab, and I’m currently at the stage where I don’t really want to commit more cost (hardware + electricity) than a mini PC. If I got one of the latest mini PCs (I’m currently thinking Minisforum UM790 Pro, with RDNA3), am I just going to have a horrible time running inference-only workloads? I’ve been reading about driver issues primarily, and I’m fully prepared for things to run slowly. Anything else I should be aware of?
2024-01-26T12:11:44
https://www.reddit.com/r/LocalLLaMA/comments/1abgxfj/inference_with_an_igpu/
Sloppyjoeman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abgxfj
false
null
t3_1abgxfj
/r/LocalLLaMA/comments/1abgxfj/inference_with_an_igpu/
false
false
self
7
null
NVIDIA Local GenAI Developer Contest - Win a 4090, GTC Conference Pass, etc.
8
Ok, who is going for it? [https://www.nvidia.com/en-us/ai-data-science/generative-ai/rtx-developer-contest/](https://www.nvidia.com/en-us/ai-data-science/generative-ai/rtx-developer-contest/)
2024-01-26T11:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1abgkkh/nvidia_local_genai_developer_contest_win_a_4090/
grim-432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abgkkh
false
null
t3_1abgkkh
/r/LocalLLaMA/comments/1abgkkh/nvidia_local_genai_developer_contest_win_a_4090/
false
false
self
8
{'enabled': False, 'images': [{'id': 'Vx-JpjCmSX6ISu-7M1TXmyvfw99joz8RMQfamizoLAw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=108&crop=smart&auto=webp&s=48dbde024b96774fc0d4b514722691dd8f0c9788', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=216&crop=smart&auto=webp&s=d669859f89dc4020e209d1d96eaacc17055fcf78', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=320&crop=smart&auto=webp&s=77245dfb6a0d555a4dda17a2914bd30d6d49ba81', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=640&crop=smart&auto=webp&s=d719df8c8152883e6c605f656fa3a76f6b5ab5e9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=960&crop=smart&auto=webp&s=323d32f046ccd39f1b9c95d3c5de3268eddd3477', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=1080&crop=smart&auto=webp&s=520cdc15ce4fb8a68bd27a3292a1f5864db969a2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?auto=webp&s=1773cb9f160c4e3aaeb06a0433828d2979ad5f3f', 'width': 1200}, 'variants': {}}]}
Thought about Mamba rejection
83
I have just seen this tweet on Twitter worrying that Mamba will eventually be rejected. What do you think?
2024-01-26T11:12:40
https://twitter.com/srush_nlp/status/1750526956452577486?t=X5CPZKQMMI5LVQup-simWA&s=19
mwmercury
twitter.com
1970-01-01T00:00:00
0
{}
1abfyrt
false
{'oembed': {'author_name': 'Sasha Rush', 'author_url': 'https://twitter.com/srush_nlp', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Mamba apparently was rejected !? (<a href="https://t.co/bjtmZimFsS">https://t.co/bjtmZimFsS</a>)<br><br>Honestly I don&#39;t even understand. If this gets rejected, what chance do us 🤡 s have.</p>&mdash; Sasha Rush (@srush_nlp) <a href="https://twitter.com/srush_nlp/status/1750526956452577486?ref_src=twsrc%5Etfw">January 25, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/srush_nlp/status/1750526956452577486', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1abfyrt
/r/LocalLLaMA/comments/1abfyrt/thought_about_mamba_rejection/
false
false
https://b.thumbs.redditm…90enIfjU9r7U.jpg
83
{'enabled': False, 'images': [{'id': 'r99pWDxzwJr3etXVSbN5cIVyjeJ8B6alseNuQw2d9Ao', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FOzrSzA4D1Tv3DUCFpcO3ZUfrMQ7hJ4WvT4sne5EkMA.jpg?width=108&crop=smart&auto=webp&s=68d8db8e913833d5d3fab726b35033addb3a3184', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/FOzrSzA4D1Tv3DUCFpcO3ZUfrMQ7hJ4WvT4sne5EkMA.jpg?auto=webp&s=e5718513511e5850bf778d8e5b4ea2b8f3007d38', 'width': 140}, 'variants': {}}]}
MBP M3
4
I'm thinking of buying a MacBook Pro M3 Max with 36GB RAM. My understanding is that the GPU cores can use all this RAM and it's essentially similar to GPU VRAM. Is this true? The device is hideously expensive, and I don't want to go down a false path. Is it going to be equivalent to an NVIDIA device on a PC with 36GB of VRAM?
2024-01-26T10:28:24
https://www.reddit.com/r/LocalLLaMA/comments/1abfbfx/mbp_m3/
tshawkins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abfbfx
false
null
t3_1abfbfx
/r/LocalLLaMA/comments/1abfbfx/mbp_m3/
false
false
self
4
null
Wish I had switched from Bing ages ago
10
2024-01-26T10:10:56
https://www.reddit.com/gallery/1abf2o2
712mg
reddit.com
1970-01-01T00:00:00
0
{}
1abf2o2
false
null
t3_1abf2o2
/r/LocalLLaMA/comments/1abf2o2/wish_i_had_switched_from_bing_ages_ago/
false
false
default
10
null
Who's gonna be first to mix MoE and moe??
135
If it already happened in any shape or form, drop me some info.
2024-01-26T10:04:50
https://i.redd.it/igxlewjddrec1.png
CharacterCheck389
i.redd.it
1970-01-01T00:00:00
0
{}
1abezll
false
null
t3_1abezll
/r/LocalLLaMA/comments/1abezll/whos_gonna_be_first_to_mix_moe_and_moe/
false
false
https://b.thumbs.redditm…K-KOEIhiGdzk.jpg
135
{'enabled': True, 'images': [{'id': 'XN3iJsjylRtB8m4oMRwiUbGrR3Y_zwONlqxIk8WW6bg', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/igxlewjddrec1.png?width=108&crop=smart&auto=webp&s=49e8b61eac0f99d55f0ada3efcfcf946c8a3227b', 'width': 108}, {'height': 228, 'url': 'https://preview.redd.it/igxlewjddrec1.png?width=216&crop=smart&auto=webp&s=ff22ee7223203ebf080059e91ffc437ec1f62f66', 'width': 216}, {'height': 337, 'url': 'https://preview.redd.it/igxlewjddrec1.png?width=320&crop=smart&auto=webp&s=ea70e13065465c08f60f76530c5df12027dcbe9c', 'width': 320}, {'height': 675, 'url': 'https://preview.redd.it/igxlewjddrec1.png?width=640&crop=smart&auto=webp&s=ac52b19065691a110f2bafaf4dbddc5010663325', 'width': 640}], 'source': {'height': 760, 'url': 'https://preview.redd.it/igxlewjddrec1.png?auto=webp&s=6d81e80e4d68a3224853e376fda0949781be4387', 'width': 720}, 'variants': {}}]}
Please check out my JS library for semantic extraction and analysis and let me know what you think
1
[removed]
2024-01-26T09:34:53
https://i.redd.it/2ai507e18rec1.jpeg
Alert-Estimate
i.redd.it
1970-01-01T00:00:00
0
{}
1abekmn
false
null
t3_1abekmn
/r/LocalLLaMA/comments/1abekmn/please_check_out_my_js_library_for_semantic/
false
false
https://b.thumbs.redditm…3OcqAapxmyCg.jpg
1
{'enabled': True, 'images': [{'id': 'XqsbY5fKmdv_tUaY88EaGyViXbBbNm6NeCE0ZdWN7ps', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?width=108&crop=smart&auto=webp&s=48bb4d384a1247595989e5126b6a429e892cfd67', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?width=216&crop=smart&auto=webp&s=ce94352abfda6ad995c65e71edf5216a18e080f4', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?width=320&crop=smart&auto=webp&s=7cfcbb4d62754a7a9165cad8543bbb4a0941370a', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?width=640&crop=smart&auto=webp&s=59842af75af7d42fc604aadee90af30894f4bb48', 'width': 640}, {'height': 522, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?width=960&crop=smart&auto=webp&s=cd68dd0d2fc4c4738db9ad57b212a38174b0ddf2', 'width': 960}, {'height': 588, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?width=1080&crop=smart&auto=webp&s=4a19f846b7f42fc1b703d2a6c3f351eba55db4c4', 'width': 1080}], 'source': {'height': 588, 'url': 'https://preview.redd.it/2ai507e18rec1.jpeg?auto=webp&s=38e372b80e879fb413b46928f4dd4e00227e49f8', 'width': 1080}, 'variants': {}}]}
Ollama - giving background context for roleplay
1
[removed]
2024-01-26T09:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1abeh8x/ollama_giving_background_context_for_roleplay/
geek_innnn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abeh8x
false
null
t3_1abeh8x
/r/LocalLLaMA/comments/1abeh8x/ollama_giving_background_context_for_roleplay/
false
false
self
1
null
Code LLaMA embeddings?
3
I see that llama.cpp, which has functionality for obtaining embeddings, supports a wide variety of models, but not Code LLaMA. Is there an easy way to get embeddings for Code LLaMA? I'm starting a project where I want to analyze code snippet embeddings. Apologies if this is a noob question.
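For context, one route, sketched under the assumption that a GGUF conversion of Code Llama is on disk: llama-cpp-python exposes llama.cpp's embedding path for a GGUF model when constructed with embedding=True (the file name is a placeholder):

```python
from llama_cpp import Llama

llm = Llama(model_path="codellama-7b.Q4_K_M.gguf",  # placeholder path
            embedding=True)
vec = llm.embed("def fib(n):\n    return n if n < 2 else fib(n-1) + fib(n-2)")
print(len(vec))  # dimensionality of the snippet embedding
```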
2024-01-26T09:10:17
https://www.reddit.com/r/LocalLLaMA/comments/1abe8h8/code_llama_embeddings/
oomydoomy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abe8h8
false
null
t3_1abe8h8
/r/LocalLLaMA/comments/1abe8h8/code_llama_embeddings/
false
false
self
3
null
Local models optimized for agent tasks
44
So this was just posted on the Huggingface blog: [Open Source LLMs as Agents](https://huggingface.co/blog/open-source-llms-as-agents) TL;DR >Open-source LLMs have now reached a performance level that makes them suitable reasoning engines for powering agent workflows: [Mixtral](https://huggingface.co/blog/mixtral) even [surpasses GPT-3.5](https://huggingface.co/blog/open-source-llms-as-agents#results) on our benchmark, and its performance could easily be further enhanced with fine-tuning. We currently have a lot of merges and fine-tunes optimized for RP and for benchmark/leaderboard performance. But are you familiar with any merges or fine-tunes specifically aimed at: * Planning and decision-making * Tool usage / function calling Also, do we have any indication of how quantization affects a model's ability in these areas specifically?
2024-01-26T09:07:46
https://www.reddit.com/r/LocalLLaMA/comments/1abe79n/local_models_optimized_for_agent_tasks/
Scrattlebeard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abe79n
false
null
t3_1abe79n
/r/LocalLLaMA/comments/1abe79n/local_models_optimized_for_agent_tasks/
false
false
self
44
{'enabled': False, 'images': [{'id': 'FEHjyBZfhXcd1FgjOUjjklRjnG2WO4NGbfnM0cCn1Mc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?width=108&crop=smart&auto=webp&s=39ef472502f1bc38fc4160d7b949f787a9e6c352', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?width=216&crop=smart&auto=webp&s=a7a53876afd1b43c3da46464635c295c69a69656', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?width=320&crop=smart&auto=webp&s=5de1a55a26a0a4d6512722b28331e8696a7ac33c', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?width=640&crop=smart&auto=webp&s=053ac19014490dbb58626cf36245c92c3c5bdb7c', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?width=960&crop=smart&auto=webp&s=c6f67764c1692cdf07db31950a6f94ff7ede9855', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?width=1080&crop=smart&auto=webp&s=bcc73f2e4321466debf2d7fc53e0e9907e257211', 'width': 1080}], 'source': {'height': 1078, 'url': 'https://external-preview.redd.it/lVnTVqPSrrmxqW_zB-q6lryB7z1KEvbO_SxwJTGcdAo.jpg?auto=webp&s=48e91d976e7e2da0a242cc94ff88855eecb2e967', 'width': 1920}, 'variants': {}}]}
How to fine tune (qlora or something) a vision model?
1
[removed]
2024-01-26T07:41:46
https://www.reddit.com/r/LocalLLaMA/comments/1abd0a5/how_to_fine_tune_qlora_or_something_a_vision_model/
yupignome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abd0a5
false
null
t3_1abd0a5
/r/LocalLLaMA/comments/1abd0a5/how_to_fine_tune_qlora_or_something_a_vision_model/
false
false
self
1
null
Speech to Text for Scientific terminology?
1
Is there a good solution for Speech to Text that includes a significant amount of scientific terminology? Whisper does not seem to handle terms that commonly appear in physics, chemistry, biology, etc.
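One knob worth noting: Whisper's initial_prompt biases the decoder toward domain vocabulary. A sketch with faster-whisper (the parameter exists in both openai-whisper and faster-whisper; the file name and prompt text are placeholders):

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cpu", compute_type="int8")
segments, _ = model.transcribe(
    "lecture.wav",  # placeholder file
    # a vocabulary-priming prompt; the decoder conditions on it
    initial_prompt=("A lecture on physics, chemistry and biology: "
                    "Hamiltonian, eigenvalue, chromatography, "
                    "polymerase chain reaction."))
print(" ".join(seg.text for seg in segments))
```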
2024-01-26T07:08:23
https://www.reddit.com/r/LocalLLaMA/comments/1abcj2e/speech_to_text_for_scientific_terminology/
3ntrope
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abcj2e
false
null
t3_1abcj2e
/r/LocalLLaMA/comments/1abcj2e/speech_to_text_for_scientific_terminology/
false
false
self
1
null
Experience with deploying Llava as part of a website stack?
1
[removed]
2024-01-26T06:14:43
https://www.reddit.com/r/LocalLLaMA/comments/1abbp8q/experience_with_deploying_llava_as_part_of_a/
Amoesenbaer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abbp8q
false
null
t3_1abbp8q
/r/LocalLLaMA/comments/1abbp8q/experience_with_deploying_llava_as_part_of_a/
false
false
self
1
null
How do I just try a demo of the Llama 2 base model?
1
Forgive me if this is a dumb question but I haven't been able to find a straightforward way to do that. I understand this would be too difficult to run locally, but I was hoping for a hosted GUI somewhere or at least instructions to easily deploy this on AWS. Thanks!
2024-01-26T06:12:39
https://www.reddit.com/r/LocalLLaMA/comments/1abbo29/how_do_i_just_try_a_demo_of_the_lllama_2_base/
leighscullyyang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1abbo29
false
null
t3_1abbo29
/r/LocalLLaMA/comments/1abbo29/how_do_i_just_try_a_demo_of_the_lllama_2_base/
false
false
self
1
null
SYCL for Intel Arc support almost here?
26
There has been a lot of activity on the pull request to support Intel GPUs in llama.cpp. We may finally be close to having real support for Intel Arc GPUs! Thanks to everyone's hard work to push this forward!
2024-01-26T05:42:18
https://github.com/ggerganov/llama.cpp/pull/2690#pullrequestreview-1845053109
it_lackey
github.com
1970-01-01T00:00:00
0
{}
1abb5cx
false
null
t3_1abb5cx
/r/LocalLLaMA/comments/1abb5cx/sycl_for_intel_arc_support_almost_here/
false
false
https://b.thumbs.redditm…bg78Z4ZFOp4o.jpg
26
{'enabled': False, 'images': [{'id': 'jEPFbL44LE5lXhs4wdSIwGXCdvKfhyTHFpfdaJMuJks', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?width=108&crop=smart&auto=webp&s=d68c72493e620106cb7dd13f67a1a3d88bc6c776', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?width=216&crop=smart&auto=webp&s=d597cf8d4b27daa3779b60805de8588f91224b2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?width=320&crop=smart&auto=webp&s=e0dd0764bece1f3ceea6aa686430b65e7f968007', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?width=640&crop=smart&auto=webp&s=53eafc12edf4f7ad73917250fdf0cd10997969af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?width=960&crop=smart&auto=webp&s=56d14d703d4fc6faaf9a4abc8bbc5e29e4b386c2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?width=1080&crop=smart&auto=webp&s=d3245a2f5dc2bfe06c4fc4a73b63a255154ad0af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BfosaCz-OVG5AgaaZFYvvyjOM2gi6TXNPn_BdVhI6IE.jpg?auto=webp&s=d1c3a51d25d7e6193014498aa8aee13ecefd5cf9', 'width': 1200}, 'variants': {}}]}
Is there an LLM for dog movement?
1
I'm looking for a model that can predict the next movement of a pet. Just as an LLM predicts the next word, I want to use a model that predicts the next action or movement of a pet. This way, I can control my RC car from my PC to perform pet-like activities. Any ideas?
2024-01-26T04:43:19
https://www.reddit.com/r/LocalLLaMA/comments/1aba3g1/is_there_llm_for_dog_movement/
stableprinter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aba3g1
false
null
t3_1aba3g1
/r/LocalLLaMA/comments/1aba3g1/is_there_llm_for_dog_movement/
false
false
default
1
null
Serve the MLX models as an API
3
Only two steps: 1. `pip install mlx-llm-server` 2. `mlx-llm-server --model-path <path-to-your-model>` Enjoy :p
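Once it's up, something like the following should work, assuming the server speaks the OpenAI-compatible chat API (the port and endpoint path here are assumptions; check the project README for the actual defaults):

```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed default address
    json={"messages": [{"role": "user", "content": "Hello!"}],
          "max_tokens": 64})
print(resp.json()["choices"][0]["message"]["content"])
```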
2024-01-26T04:35:36
https://www.reddit.com/r/LocalLLaMA/comments/1ab9ye8/serve_the_mlx_models_as_an_api/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab9ye8
false
null
t3_1ab9ye8
/r/LocalLLaMA/comments/1ab9ye8/serve_the_mlx_models_as_an_api/
false
false
self
3
null
Is there an LLM for dog movement?
1
I'm looking for a model that can predict the next movement of a pet, a large movement model (???). Just as an LLM essentially predicts the next word, I want to use a model that predicts the next action or movement of a pet. This way, I can control my RC car from my PC to perform pet-like activities. If there is none, can you guys suggest a path for me to learn how to create a small model like this? (I have a background in coding with 5 years of experience.)
2024-01-26T04:32:53
https://www.reddit.com/r/LocalLLaMA/comments/1ab9wml/is_there_llm_for_dog_movement/
stableprinter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab9wml
false
null
t3_1ab9wml
/r/LocalLLaMA/comments/1ab9wml/is_there_llm_for_dog_movement/
false
false
self
1
null
LLM for symbolic reduction
5
I am looking for open source models for symbolic reduction similar to what GPT-3.5 does. https://preview.redd.it/y895btbkkpec1.png?width=1326&format=png&auto=webp&s=08a094ee9ab5608d6fb0a95ed0febe95e5616c6b Are there any open source alternatives for this? Is there a way to teach models about rules of mathematics with examples?
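On the last question, the usual trick is few-shot prompting: put worked (expression, simplification) pairs in the prompt and let the model complete the next one. A minimal sketch with llama-cpp-python (the model file and the example pairs are placeholders):

```python
from llama_cpp import Llama

FEW_SHOT = """Simplify each expression.
Expr: 2*x + 3*x
Simplified: 5*x
Expr: (x**2 - 1)/(x - 1)
Simplified: x + 1
Expr: sin(x)**2 + cos(x)**2
Simplified: 1
Expr: 4*y - y + 2
Simplified:"""

llm = Llama(model_path="mistral-7b.Q4_K_M.gguf")  # placeholder path
out = llm(FEW_SHOT, max_tokens=16, stop=["\n"])
print(out["choices"][0]["text"].strip())  # ideally "3*y + 2"
```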
2024-01-26T04:04:07
https://www.reddit.com/r/LocalLLaMA/comments/1ab9d9l/llm_for_symbolic_reduction/
maayon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab9d9l
false
null
t3_1ab9d9l
/r/LocalLLaMA/comments/1ab9d9l/llm_for_symbolic_reduction/
false
false
https://b.thumbs.redditm…GIhyL1RdGO2U.jpg
5
null
Synthetic safe code dataset generation brainstorm
1
[removed]
2024-01-26T03:24:59
https://huggingface.co/datasets/CyberNative/SafeCodeDPO
CyberNativeAI
huggingface.co
1970-01-01T00:00:00
0
{}
1ab8m2a
false
null
t3_1ab8m2a
/r/LocalLLaMA/comments/1ab8m2a/synthetic_safe_code_dataset_generation_brainstorm/
false
false
default
1
{'enabled': False, 'images': [{'id': 'ygfoP1Ry5ZNos4GhV--dY6_fKHVDcKZ6vSLD-kBtVNk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?width=108&crop=smart&auto=webp&s=126b2c99e331f1fd6e1b4e18ee8692bde26943b8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?width=216&crop=smart&auto=webp&s=df68a7976b81d6db81333eb414e3663adfd666a0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?width=320&crop=smart&auto=webp&s=311ef816b75835e85657475a8c5921744640c4cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?width=640&crop=smart&auto=webp&s=33ac5b6006fb72ce5c46866022f037ff0b9a3eb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?width=960&crop=smart&auto=webp&s=8bb4360893e12302ee4ad92ad42eddb02c7c5af2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?width=1080&crop=smart&auto=webp&s=69b3a4eb2e0d13e8294fe9ac89af779b16aefc81', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/K9R-tjy_ET1HO4S1pJuSZzHq5wCflPMh8pFUCLLzf2Q.jpg?auto=webp&s=305bdab4208184d958ad50e92dbfcafb0ff8ffbb', 'width': 1200}, 'variants': {}}]}
Llama in a container. With stable diffusion. Redis, MariaDB, MongoDB too. Have fun!
22
2024-01-26T03:13:02
https://github.com/xmichaelmason/llama-docker
xmilesdyson
github.com
1970-01-01T00:00:00
0
{}
1ab8dab
false
null
t3_1ab8dab
/r/LocalLLaMA/comments/1ab8dab/llama_in_a_container_with_stable_diffusion_redis/
false
false
https://b.thumbs.redditm…wJTGPdlSRV3Q.jpg
22
{'enabled': False, 'images': [{'id': 'DplVesB_vuDQYToJ3mlCYYwDZtYN6o8GvPhJyr_Dtzo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=108&crop=smart&auto=webp&s=e6b06708be140d2fe2fd9c39aec1edadf3733bb5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=216&crop=smart&auto=webp&s=457d1f85c00d2ca18604ab04cafaf79834e2b141', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=320&crop=smart&auto=webp&s=7a6dcd9f1dd9449ee616d068b864c2d3663a3619', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=640&crop=smart&auto=webp&s=5abc94725bd4ac2130cbf2508205afe73ef3bd58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=960&crop=smart&auto=webp&s=19f1bd454d7b95316ddfbe64dca9875a9bf4e938', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=1080&crop=smart&auto=webp&s=63bcea0e370efcaf490dc4d42241f2da1197176f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?auto=webp&s=ac3cb81287baa3191c346bf3c455a25e24fd0173', 'width': 1200}, 'variants': {}}]}
Can anyone tell me why mixtral suddenly starts to ramble?
1
[removed]
2024-01-26T02:36:16
https://www.reddit.com/r/LocalLLaMA/comments/1ab7m7g/can_anyone_tell_me_why_mixtral_suddenly_starts_to/
Eastern_Leader_1122
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab7m7g
false
null
t3_1ab7m7g
/r/LocalLLaMA/comments/1ab7m7g/can_anyone_tell_me_why_mixtral_suddenly_starts_to/
false
false
https://b.thumbs.redditm…rufpgtvx36YU.jpg
1
{'enabled': False, 'images': [{'id': '6qrb8eiJLf2ElrMrt3NiG76mxIomHJpeMZi09LppJZ0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?width=108&crop=smart&auto=webp&s=155a8a175e23e1623e31624235c5e934e9edc5da', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?width=216&crop=smart&auto=webp&s=391a1a359b265791ba608ffc282caa022044e442', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?width=320&crop=smart&auto=webp&s=c40d51bdde99ade6186cfb28def2f5c7370f128f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?width=640&crop=smart&auto=webp&s=9599458160877f9177f488358c0c2245d83d2aed', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?width=960&crop=smart&auto=webp&s=86a9e3e3c9b8d9914e20e702243bbd3278e6fba1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?width=1080&crop=smart&auto=webp&s=e048ed48fb570d35ff4e20dbbe72ef2a308875cb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nIrBXSonru56lS8A0L1GYIv-lg7sZzQJZbiAdZFfs7g.jpg?auto=webp&s=fceb404c991d6ace7bacdd654de7aeefd66c8cf6', 'width': 1200}, 'variants': {}}]}
Need help with multi-shot prompting and how to go from text examples to ideas for summary data
10
I am looking for any advice on how to properly format multi-shot prompts (ie. prompts with examples included) for a specific task. In a more general way, if any of you know of any other place where I could ask for help with writing a prompt (be it a chat server, subreddit, Slack channel, or whatever else) then please let me know! **My hardware setup:** I have an RTX 3090 and 64 GB of RAM on a computer that can dual boot to Windows 10 or Ubuntu. **LLM Models:** I'm looking for only models that have no restrictions on how their output can be used, which rules out ChatGPT, Llama 2, the Chinese models (so far as I know). I'm looking at options like Mistral, Mixtral, and Phi-2. **My data:** Journal article abstracts (generally around 200-250 words) **Desired output:** Suggestions for a JSON schema of what I could ask an LLM to extract from a group of 5-10 journal article abstracts. With that intro, let's go into more detail about what I'm trying to do. I need help to craft a prompt that will help the LLM do the task I have in mind. Here is an example of how the output from this LLM task would be used in a subsequent LLM task: `[INSTRUCTION] Analyze the input text and extract the requested information. Extracted information should be returned in JSON format based on the following schema definition:` `[SCHEMA]: { "$schema": "https://json-schema.org/draft/2020-12/schema", "title": "Abstract", "type": "object", "properties": { "countryName": { "type": "array", "description": "The name of the country or countries where the study was conducted" }, "scaleName": { "type": "array", "description": "The name of the scales or other quantitative instruments used in the study." }, "cronbachAlpha": { "type": "boolean", "description": "Was Cronbach's Alpha reported in the text?" }, "sampleSize": { "description": "Number of participants in the study. If multiple sample numbers are given, add them together.", "type": "integer", "minimum": 0 } } }` `[Example Input] Abstract Aim Death distress can increase mental health problems. The aim of the present study was to develop a measure of death distress and evaluate the reliability of this Death Distress Scale‐Farsi (DDS‐F) among nurses. The hypotheses were that death distress has three components and that the DDS‐F would have desirable psychometric properties. Design A descriptive cross‐sectional study. Methods A convenience sample of 106 Iranian nurses from two hospitals at Tehran city, Iran was recruited. They completed the Death Anxiety Scale (DAS), the Death Depression Scale (DDS) and the Death Obsession Scale (DOS). Results Cronbach's α for the DDS‐F was 0.71. As expected, the DDS‐F had three independent components: death obsession, death depression and death anxiety. A principle component analysis with a varimax rotation of the DDS‐F items identified three factors accounting for 66.13% of the variance. Factor 1 was labelled “Death Obsession” (31.3% of the variance), Factor 2 was labelled “Death Depression” (21.9% of the variance), and Factor 3 was labelled “Death Anxiety” (12.8% of the variance). Discussion Death distress has three components: death obsession, death depression and death anxiety. 
The DDS‐F which measures these has good psychometric properties, and it can be used in hospital settings to assess death distress among Iranian nurses.` `[Example AI output]` `{'scale_name': 'Death Distress Scale (DDS)-Farsi', 'country_or_countries': 'Iran', 'total_sample_size': 106, 'mentions_cronbach_alpha': True}` So for the first LLM task, the prompt could be something like this: `[INSTRUCTION] Analyze the input text prepare a JSON schema with relevant summary information:` `[Example Input] Abstract Aim Death distress can increase mental health problems. The aim of the present study was to develop a measure of death distress and evaluate the reliability of this Death Distress Scale‐Farsi (DDS‐F) among nurses. The hypotheses were that death distress has three components and that the DDS‐F would have desirable psychometric properties. Design A descriptive cross‐sectional study. Methods A convenience sample of 106 Iranian nurses from two hospitals at Tehran city, Iran was recruited. They completed the Death Anxiety Scale (DAS), the Death Depression Scale (DDS) and the Death Obsession Scale (DOS). Results Cronbach's α for the DDS‐F was 0.71. As expected, the DDS‐F had three independent components: death obsession, death depression and death anxiety. A principle component analysis with a varimax rotation of the DDS‐F items identified three factors accounting for 66.13% of the variance. Factor 1 was labelled “Death Obsession” (31.3% of the variance), Factor 2 was labelled “Death Depression” (21.9% of the variance), and Factor 3 was labelled “Death Anxiety” (12.8% of the variance). Discussion Death distress has three components: death obsession, death depression and death anxiety. The DDS‐F which measures these has good psychometric properties, and it can be used in hospital settings to assess death distress among Iranian nurses.` `[Example AI output]` `{ "$schema": "https://json-schema.org/draft/2020-12/schema", "title": "Abstract", "type": "object", "properties": { "countryName": { "type": "array", "description": "The name of the country or countries where the study was conducted" }, "scaleName": { "type": "array", "description": "The name of the scales or other quantitative instruments used in the study." }, "cronbachAlpha": { "type": "boolean", "description": "Was Cronbach's Alpha reported in the text?" }, "sampleSize": { "description": "Number of participants in the study. If multiple sample numbers are given, add them together.", "type": "integer", "minimum": 0 } } }` Then I want to put in several article abstracts from a different field and ask the LLM to write a JSON schema which could be used to extract relevant information from those abstracts. `[Input 1]` `Liquefaction-induced lateral spreading has caused severe damages to the infrastructures. To predict the liquefaction-induced lateral spreading, a hybrid approach was proposed based on the Newmark sliding-block model. One-dimensional effective stress analysis based on the borehole investigation of the site was conducted to obtain the triggering time of liquefaction and acceleration time history. Shear wave velocity of the liquefiable soil was used to estimate the residual shear strength of liquefiable soil. The limit equilibrium analysis was conducted to determine the yield acceleration corresponding with the residual shear strength of liquefied soil. The liquefaction-induced lateral spreading was calculated based on the Newmark sliding-block model. 
A case study based on Wildlife Site Array during the 1987 Superstition Hills earthquake was conducted to evaluate the performance of the hybrid approach. The results showed that the hybrid approach was capable of predicting liquefaction-induced lateral spreading and the calculated lateral spreading was 1.5 times the observed displacement in terms of Wildlife Site Array. Numerical simulations with two other constitutive models of liquefiable sand were conducted in terms of effective stress analyses to reproduce the change of lateral spreading and excess pore water ratio over the dynamic time of Wildlife Site Array. Results of numerical simulations indicated that the lateral spreading varied with the triggering time of liquefaction when different constitutive models were used. The simulations using PM4sand and UBC3D-PLM constitutive models predicted 5.2 times and 4 times the observed lateral spreading, respectively. To obtain the site response, the motions recorded at and below the ground surface were analyzed using the Hilbert–Huang transform. The low-frequency content of the motion below the ground surface was amplified at the ground surface, and the liquefaction effect resulted in a shift of the frequency content. By comparing the response spectra of the entire ground surface motion and the ground surface motion from the beginning to the triggering time of liquefaction, the liquefaction effect at the site was confirmed.` `[Input 2]` `In this paper, the pile-soil interaction of the pile foundation of an inclined straight alternating group in a liquefiable site under a seismic load was studied through the form changes to the pile cap within the inclined straight alternating group. Based on an analysis of the soil acceleration, hole pressure ratio, horizontal displacement of the pile body, vertical displacement of the pile body, and bending moment of the pile body, the dynamic characteristics of the pile soil at the free site are studied with two layers of liquefiable soil. The results show that the sand layer can amplify seismic waves under a seismic load, and therefore, the soil acceleration under the pile foundation model of the high-rise pile cap group is slightly greater than that of the low-rise pile cap model; then, the pore pressure ratios at the monitoring point in the low-rise pile cap and high-rise pile cap pile foundation models present certain fluctuations. The analysis of the pile displacement and the bending moment shows that the pile foundation from the high-rise pile cap group can resist the seismic load better than that from the low-rise pile cap group.` `[Input 3]` `The time-dependent behaviour of saturated soils under static and dynamic loading is generally attributed to the flow-dependent and viscous behaviour of pore fluid. However, the intrinsic energy dissipative effects from the flow-independent viscoelastic behaviour of solid skeleton are not always considered. In this study, the effect of flow-independent viscoelastic behaviour on the seismic amplification of ground soil in vertical and horizontal directions is studied based on a two-phase poroviscoelastic model. A generalized Kelvin–Voigt model is used to define the effective stress in the soils, and the compressibilities of both solid skeleton and pore fluid are considered. The seismic-induced dynamic displacements are analytically derived and are shown to depend on soil layer thickness, soil properties, and ground motion parameters. 
The formulation neglecting the viscoelastic behaviour of solid skeleton could overestimate both the vertical and horizontal motion amplifications at the surface of ground soil. In addition, the seismic responses of viscoelastic soils are demonstrated to be closely related to the saturation state of surface soil.` The output from the model should be a JSON schema which I could then use along with the prompt from the example to extract relevant information from those 3 inputs.
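As a side note on mechanics, the two-stage pipeline described above boils down to string assembly; a minimal sketch of a prompt builder for it (section labels follow the post; the function name is arbitrary):

```python
import json

def build_prompt(instruction, examples, new_inputs):
    """Assemble a multi-shot prompt: instruction, then (input text, output
    JSON) example pairs, then the new abstracts to process."""
    parts = [f"[INSTRUCTION] {instruction}"]
    for text, output in examples:
        parts.append(f"[Example Input] {text}")
        parts.append(f"[Example AI output] {json.dumps(output)}")
    for i, text in enumerate(new_inputs, start=1):
        parts.append(f"[Input {i}] {text}")
    parts.append("[AI output]")
    return "\n\n".join(parts)
```

With Mistral/Mixtral-class models, constrained decoding (e.g. llama.cpp grammars) can additionally help keep the output valid JSON.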
2024-01-26T02:32:13
https://www.reddit.com/r/LocalLLaMA/comments/1ab7j9e/need_help_with_multishot_prompting_and_how_to_go/
ResearchTLDR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab7j9e
false
null
t3_1ab7j9e
/r/LocalLLaMA/comments/1ab7j9e/need_help_with_multishot_prompting_and_how_to_go/
false
false
self
10
null
New GPT-4 Turbo (0125 Preview) slightly faster than GPT-4 Turbo (1106 Preview) from initial benchmark results
0
2024-01-26T02:26:01
https://twitter.com/ArtificialAnlys/status/1750701671007875315
speakerknock
twitter.com
1970-01-01T00:00:00
0
{}
1ab7ekw
false
{'oembed': {'author_name': 'ArtificialAnalysis.ai', 'author_url': 'https://twitter.com/ArtificialAnlys', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Initial benchmark results: new GPT-4 Turbo (0125 Preview) slightly faster than GPT-4 Turbo (1106 Preview)!<br><br>Our initial readings show significantly lower latency and slightly higher Tokens per Second throughput for the new model. This may change over the next few days as… <a href="https://t.co/9CttiXzCuo">pic.twitter.com/9CttiXzCuo</a></p>&mdash; ArtificialAnalysis.ai (@ArtificialAnlys) <a href="https://twitter.com/ArtificialAnlys/status/1750701671007875315?ref_src=twsrc%5Etfw">January 26, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ArtificialAnlys/status/1750701671007875315', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1ab7ekw
/r/LocalLLaMA/comments/1ab7ekw/new_gpt4_turbo_0125_preview_slightly_faster_than/
false
false
https://b.thumbs.redditm…oq3Dsh9mfwcY.jpg
0
{'enabled': False, 'images': [{'id': 'NkJH4bLPkjJrWF52F9sLrrT5pn988C0EnIZgtB2XaUM', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/MtFHRQPwUTHBUSWerM0ADJ7VppFUT5ZnxDRXyXaSFfU.jpg?width=108&crop=smart&auto=webp&s=6812fd88fbc5d65131fc4fe4ef925cda79300b63', 'width': 108}], 'source': {'height': 74, 'url': 'https://external-preview.redd.it/MtFHRQPwUTHBUSWerM0ADJ7VppFUT5ZnxDRXyXaSFfU.jpg?auto=webp&s=55ee050185f1e58e3920645d34348f620fe84526', 'width': 140}, 'variants': {}}]}
Guidance on
1
[removed]
2024-01-26T01:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1ab6g0k/guidance_on/
blueeraser30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab6g0k
false
null
t3_1ab6g0k
/r/LocalLLaMA/comments/1ab6g0k/guidance_on/
false
false
self
1
null
Adept Fuyu-Heavy: A new multimodal model
4
https://preview.redd.it/plhvoquqroec1.png?width=837&format=png&auto=webp&s=b83de9bdf8267c170636b7fe4f835bb380ca769f We’re excited to introduce Adept Fuyu-Heavy, a new multimodal model designed specifically for digital agents. Fuyu-Heavy is the world’s third-most-capable multimodal model, behind only GPT4-V and Gemini Ultra, which are 10-20 times bigger. We’re excited about this model because: * It excels at multimodal reasoning. To us the killer feature is UI understanding, but it also performs well on more traditional multimodal benchmarks. In particular, Fuyu-Heavy scores higher on the MMMU benchmark than even Gemini Pro. * On standard text-based benchmarks, it matches or exceeds the performance of models in the same compute class despite having to devote some of its capacity to image modeling. * It demonstrates that (with some modifications) we can scale up the [Fuyu architecture](https://www.adept.ai/blog/fuyu-8b) and reap all of the associated benefits, including handling arbitrary size/shape images and efficiently re-using existing transformer optimizations. Based on Fuyu-8B [https://huggingface.co/adept/fuyu-8b](https://huggingface.co/adept/fuyu-8b) [https://www.adept.ai/blog/adept-fuyu-heavy](https://www.adept.ai/blog/adept-fuyu-heavy)
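Fuyu-Heavy itself isn't public, but the Fuyu-8B it's based on runs with Hugging Face transformers; a minimal sketch (the image path is a placeholder, and a GPU with enough VRAM is assumed):

```python
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
model = FuyuForCausalLM.from_pretrained(
    "adept/fuyu-8b", device_map="auto", torch_dtype=torch.float16)

image = Image.open("screenshot.png")  # placeholder image
inputs = processor(text="Generate a coco-style caption.\n",
                   images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
# decode only the newly generated tokens
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```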
2024-01-26T01:22:17
https://www.reddit.com/r/LocalLLaMA/comments/1ab62z7/adept_fuyuheavy_a_new_multimodal_model/
sapporonight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab62z7
false
null
t3_1ab62z7
/r/LocalLLaMA/comments/1ab62z7/adept_fuyuheavy_a_new_multimodal_model/
false
false
https://b.thumbs.redditm…rVDhIJC-nk2Y.jpg
4
{'enabled': False, 'images': [{'id': 'Y7pH0nfcU9dbjgzXc7BJhELNY0sajBt2oqSeJ9D7G78', 'resolutions': [{'height': 39, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=108&crop=smart&auto=webp&s=3979f1a9099821589dad9942398ab2ffbd290c48', 'width': 108}, {'height': 79, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=216&crop=smart&auto=webp&s=9a17a7f22bc5a1b172790d3369df8001e045fb16', 'width': 216}, {'height': 117, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=320&crop=smart&auto=webp&s=8f2c524a0728f54e0300b18adac8f4252140eb2b', 'width': 320}, {'height': 234, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=640&crop=smart&auto=webp&s=2f106001a57feb225b001339d0c3bcaa2bdfdd88', 'width': 640}, {'height': 351, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=960&crop=smart&auto=webp&s=7cc3d09dd86b106a6087c635310697813f81035a', 'width': 960}, {'height': 395, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?width=1080&crop=smart&auto=webp&s=3b1ebd49c2a7b0161995d43b2a381ca7d439e8be', 'width': 1080}], 'source': {'height': 663, 'url': 'https://external-preview.redd.it/PPAznYeUUIWrZTTyJsCf3HAdTU_bS4NYmsxSRZJJ68E.jpg?auto=webp&s=b738e9c785cea7c29f09e829343a53eb3e4eaef3', 'width': 1812}, 'variants': {}}]}
Running local LLMs has changed how I think about people. How do LLMs affect you in the real world?
152
As a techie, my communication style is very direct, and I often try to solve everything by logic. I would get frustrated when people refused to yield to logic. I know very well you can't use logic to change that which was derived from emotions. Yet, like the fool I am, I would persist and believe that laying out more logic, in hopefully a clearer manner, would turn them around. With LLMs not being emotional, I realize that I must prompt in the appropriate manner to get a response. If I have something I want to repeatedly test, then I must figure out the appropriate prompt to get that response with whatever model I'm playing with, even if it doesn't make sense to me. For every unique LLM, I must figure out this prompt. It dawned on me last night that people are just like this. Trying to make sense is a futile and stupid exercise. I must prompt and give them the right input to get what I want. I'm having a hard time reconciling this even though I know it's true. Even if I don't agree with what I'm saying or believe it, the right thing is to say it, to act it, and to behave in a certain manner to get what I want. This of course is difficult because I would have to go against my values and true self to do this. It's no challenge to do this with LLMs. ... I can't believe it took a host of LLMs to reveal this to me. I don't know whether to feel a little smart for discovering this or really stupid for not discovering it all my life.
2024-01-26T01:05:44
https://www.reddit.com/r/LocalLLaMA/comments/1ab5pu5/running_local_llms_has_changed_how_i_think_about/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab5pu5
false
null
t3_1ab5pu5
/r/LocalLLaMA/comments/1ab5pu5/running_local_llms_has_changed_how_i_think_about/
false
false
self
152
null
What's next?
5
Around 3 months back I asked the same question here, to figure out what to do next after I had developed multiple RAG applications.

https://www.reddit.com/r/LocalLLaMA/s/u4oUuapORg

In the above post I received helpful suggestions, and one of those suggestions was to fine-tune models. So I fine-tuned multiple 7B models and thoroughly enjoyed it.

https://github.com/meetrais/LLM-Fine-Tuning

Now I want to reach out to this talented LocalLlama community again to seek guidance. What should I do next in the LLM space?
2024-01-26T00:45:43
https://www.reddit.com/r/LocalLLaMA/comments/1ab56is/whats_next/
meetrais
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ab56is
false
null
t3_1ab56is
/r/LocalLLaMA/comments/1ab56is/whats_next/
false
false
self
5
{'enabled': False, 'images': [{'id': 'tLP04xWcCHwl2IUYrLBavGdty6eSxARaZBoDS4fmihQ', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?width=108&crop=smart&auto=webp&s=b9e6e997a7be1c6dc52cec6a8f17e77f0b94cbf9', 'width': 108}, {'height': 85, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?width=216&crop=smart&auto=webp&s=0d9b55007edf227dd6a15d75f4080fa3684fd431', 'width': 216}, {'height': 127, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?width=320&crop=smart&auto=webp&s=59f33618ed0e729bd4bc0f8f27664b88a2f2bfc7', 'width': 320}, {'height': 254, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?width=640&crop=smart&auto=webp&s=8e9ef7c5ab492f8108b0b3f1e136b8400265e582', 'width': 640}, {'height': 381, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?width=960&crop=smart&auto=webp&s=954c44ebef2f829a96d207c91f93d6a759ff2460', 'width': 960}, {'height': 429, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?width=1080&crop=smart&auto=webp&s=0f0afbf4a1ffb32e3662b125ef3aeddfae7f8ce7', 'width': 1080}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/kcTTanDHxMOXUHSMb2TP4i-gqQqWGqBJu45hJgV1kZ8.jpg?auto=webp&s=0cc78c5fe7344f367b0f637ce28a439318a65131', 'width': 1258}, 'variants': {}}]}
Noob question. Why is everyone hyped for mamba ?
1
[removed]
2024-01-25T23:47:19
https://www.reddit.com/r/LocalLLaMA/comments/19fnh90/noob_question_why_is_everyone_hyped_for_mamba/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fnh90
false
null
t3_19fnh90
/r/LocalLLaMA/comments/19fnh90/noob_question_why_is_everyone_hyped_for_mamba/
false
false
self
1
null
Looking for advice on self-hosting services for a PhD
1
[removed]
2024-01-25T23:44:26
https://www.reddit.com/r/LocalLLaMA/comments/19fnez4/looking_for_advice_on_self_hosting_services_for_a/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fnez4
false
null
t3_19fnez4
/r/LocalLLaMA/comments/19fnez4/looking_for_advice_on_self_hosting_services_for_a/
false
false
self
1
null
Tutorial for local model
1
Hi, does anyone have a good tutorial for running Llama 2 7B locally where it can be controlled programmatically rather than through a web interface? I've tried three that I've found, but they're all out of date or throw errors that I can't resolve. Many thanks.
2024-01-25T23:39:15
https://www.reddit.com/r/LocalLLaMA/comments/19fnawr/tutorial_for_local_model/
Breath_Unique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fnawr
false
null
t3_19fnawr
/r/LocalLLaMA/comments/19fnawr/tutorial_for_local_model/
false
false
self
1
null
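One low-friction route to programmatic control is llama-cpp-python. A minimal sketch, assuming a Llama-2-7B GGUF file is already downloaded (the path is a placeholder):

```python
# A minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF path is a placeholder; point it at your own Llama-2-7B file.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```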
What is a Vector Database?
1
[removed]
2024-01-25T22:41:58
https://www.reddit.com/r/LocalLLaMA/comments/19fm0ls/what_is_a_vector_database/
sabrinaqno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fm0ls
false
null
t3_19fm0ls
/r/LocalLLaMA/comments/19fm0ls/what_is_a_vector_database/
false
false
https://a.thumbs.redditm…OmozMcMTzmZ0.jpg
1
null
Offline listening and speaking bot
35
Hi all, for those wanting a quick repo to use as a basis to get started, I've created jen-ai. There are full instructions in the readme. Once it's running you can talk to it, and it will respond. It's basic, but a place to start.
2024-01-25T22:39:26
https://github.com/nydasco/jen-ai
nydasco
github.com
1970-01-01T00:00:00
0
{}
19flyhj
false
null
t3_19flyhj
/r/LocalLLaMA/comments/19flyhj/offline_listening_and_speaking_bot/
false
false
https://b.thumbs.redditm…K_mRSmFMiv1g.jpg
35
{'enabled': False, 'images': [{'id': 'g3WCm9Z32S3zJrfaZ-rk1Zuqx1SYrNoVlLfSD3dJbzk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?width=108&crop=smart&auto=webp&s=7be2606938f3aae38b6849b38badc7a3135512dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?width=216&crop=smart&auto=webp&s=c5f3bfb54644c7c9a17d896475972305da78831c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?width=320&crop=smart&auto=webp&s=af4f69b63e7ee8f769e8dc58958f1df1ecc9c642', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?width=640&crop=smart&auto=webp&s=b897d3861ed097b57eb3dfd820d56a9627a3d848', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?width=960&crop=smart&auto=webp&s=6f3b3a0da8c70428b1005b5ee771e4f90891144f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?width=1080&crop=smart&auto=webp&s=855bc1895523ba7de21f5d773402f736743caf19', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tEqN3hc86k9051QTpMu5vPn8HIcbFxcgMg07x169xz4.jpg?auto=webp&s=c651e950bc030224d78133d76401295ded73f7e8', 'width': 1200}, 'variants': {}}]}
Why does it switch language?
1
Any idea why my Llama suddenly generates Vietnamese even though the prompt is in English? I cannot reproduce this, even when I try to tell it to speak Vietnamese.
2024-01-25T22:20:33
https://i.redd.it/re2vah4qvnec1.jpeg
minhquan3105
i.redd.it
1970-01-01T00:00:00
0
{}
19flikv
false
null
t3_19flikv
/r/LocalLLaMA/comments/19flikv/why_does_it_switch_language/
false
false
https://b.thumbs.redditm…gdY-Nww6SQNw.jpg
1
{'enabled': True, 'images': [{'id': 'xWUx-iR122OUR1s-28uD3fnUtl2T0QrcQy9b8oSTnRY', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?width=108&crop=smart&auto=webp&s=417df1d16e0f4763d4e10ee0ecfa2301827467c8', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?width=216&crop=smart&auto=webp&s=55585223234b50bb77bece2ee89b0d0302c76d24', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?width=320&crop=smart&auto=webp&s=a8004c6fcd0271f98cf0a8f52b5ea60dbe2c02ed', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?width=640&crop=smart&auto=webp&s=f636b316a8d61bc155f8c98aa879c664c823abb4', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?width=960&crop=smart&auto=webp&s=4d54148bd68e8a93fa2d85bb92b727a420bed4e8', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?width=1080&crop=smart&auto=webp&s=1fb0c3f4f731c3a0b689078195d7a2557ae88c15', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://preview.redd.it/re2vah4qvnec1.jpeg?auto=webp&s=89bd67ceb117ee20e0532014e8fa59e5a28d3668', 'width': 3000}, 'variants': {}}]}
AIKit now supports GPTQ, EXL2 and Mamba!
1
AIKit is a quick, easy, local- or cloud-agnostic way to get started hosting and deploying LLMs for inference in minimal containers using LocalAI, which provides OpenAI API compatibility as a drop-in replacement. No GPU, internet access, or additional tools are needed to get started, except for Docker! AIKit now supports GPTQ, EXL2, and Mamba models. To get started, please check out:

[https://github.com/sozercan/aikit](https://github.com/sozercan/aikit)

[https://sozercan.github.io/aikit/](https://sozercan.github.io/aikit/)
2024-01-25T22:16:47
https://www.reddit.com/r/LocalLLaMA/comments/19flfgd/aikit_now_supports_gptq_exl2_and_mamba/
sozercan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19flfgd
false
null
t3_19flfgd
/r/LocalLLaMA/comments/19flfgd/aikit_now_supports_gptq_exl2_and_mamba/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XRohhXuPbdXM-wFfQ3NhkZbxucStZiOkca675OXcxF0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?width=108&crop=smart&auto=webp&s=2b107a5a6d60bfd10cfb8235578fa46264ef419d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?width=216&crop=smart&auto=webp&s=087b32971ef1a3f7ebf5dc5168eb2bcd02fc9815', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?width=320&crop=smart&auto=webp&s=c5d19477203e31b7f71f2067373e0f4f74780f21', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?width=640&crop=smart&auto=webp&s=a268b8daba38c8fb9a6e4e3b1c88d3803afe5590', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?width=960&crop=smart&auto=webp&s=ad1604a0985afae3c58d058dd9ae3e6fe726d136', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?width=1080&crop=smart&auto=webp&s=3b3af6f1d0170c5b82bbb81cc36a6c4c39d6e6f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OBQCvoZT5j7ovAwDfpWx-UK22mkRDhV-M1DggA1ZpgQ.jpg?auto=webp&s=8f8e063361faa9bf49a2c1bfbe1773bde3135a8f', 'width': 1200}, 'variants': {}}]}
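Because LocalAI exposes an OpenAI-compatible endpoint, the standard OpenAI Python client should work against an AIKit container. A minimal sketch, assuming the container listens on port 8080 and serves a model under the name shown (both are assumptions; check the AIKit docs for your setup):

```python
# Sketch: talking to an AIKit/LocalAI container with the OpenAI Python
# client. The port and model name are assumptions -- use whatever your
# container actually exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="llama-2-7b-chat",  # hypothetical model name
    messages=[{"role": "user", "content": "What can you do?"}],
)
print(resp.choices[0].message.content)
```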
create a dataset to fine-tune a model for a domain-specific task
9
Hey, I want to create a specific dataset (specialized in data science, Python, etc.) to fine-tune some language models. But I don't want the dataset to contain only knowledge; it should also contain hints and conversations, so that the fine-tuned LLM will be more like an assistant rather than answering questions directly. So what do you think the dataset should look like? Any propositions? Thank you so much!
2024-01-25T22:10:33
https://www.reddit.com/r/LocalLLaMA/comments/19fl9zn/create_a_dataset_to_finetune_a_model_for_a/
Life_Ask2806
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fl9zn
false
null
t3_19fl9zn
/r/LocalLLaMA/comments/19fl9zn/create_a_dataset_to_finetune_a_model_for_a/
false
false
self
9
null
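A minimal sketch of what one record in such a dataset could look like, assuming an OpenAI-style multi-turn chat format (field names and contents are illustrative; adapt them to whatever your trainer expects):

```python
# Illustrative only: one record mixing a hint-style exchange with a direct
# answer, written as OpenAI-style chat turns and saved as JSONL.
import json

record = {
    "messages": [
        {"role": "user", "content": "My pandas groupby on 10M rows is slow. Any hints?"},
        {"role": "assistant", "content": "Before rewriting anything: is the key column a "
         "'category' dtype? That alone often helps. Share the call and I can be more specific."},
        {"role": "user", "content": "df.groupby('user_id')['amount'].sum()"},
        {"role": "assistant", "content": "Cast first: df['user_id'] = "
         "df['user_id'].astype('category'), then group with observed=True to skip empty groups."},
    ]
}

with open("dataset.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```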
I built the first commercial local LLM app, it's $12 (with free trial)
1
A while ago I built this: https://github.com/juliooa/secondbrain as a base for making desktop apps. It was cool to play around with, but the small models weren't very useful for building anything practical. Until... Mistral 7B. I began thinking about what to build, and as a non-native English speaker, I realized that I always double-check my grammar online before posting anything. So I built a grammar checker, for myself and whoever else is in the same situation. It runs fast on my MacBook Pro M1, and I imagine it should be even faster on the M2 and blazingly fast on the M3. This shows that it's possible: we are getting to the moment when local LLMs can be useful and can be turned into products. You can download the app here: https://grammarbot.app
2024-01-25T21:39:55
https://www.reddit.com/r/LocalLLaMA/comments/19fkm41/i_built_the_first_commercial_local_llm_app_its_12/
julio_oa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fkm41
false
null
t3_19fkm41
/r/LocalLLaMA/comments/19fkm41/i_built_the_first_commercial_local_llm_app_its_12/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hcO_MBJkx6219uMlb7UfkYKmOTpLGWr6cJ0OKjDgSdQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?width=108&crop=smart&auto=webp&s=b35fd58cacd353fc7c23405b51901c89c0eb37ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?width=216&crop=smart&auto=webp&s=46f957c13ea1ef43a15a6ebc5b5ac110c937f407', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?width=320&crop=smart&auto=webp&s=728f81d2700f04ff7440cf1a0dd8ed718323350c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?width=640&crop=smart&auto=webp&s=51146f9604536dfe328bd6bfb397eeb61008da97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?width=960&crop=smart&auto=webp&s=7f592bdcbc21be6dab95543d558fd24b7af02208', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?width=1080&crop=smart&auto=webp&s=e903c0b9e79f5e6cc12967e8e73bced2e2f8d3d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-3nwW25BPzLsmT-QS1K1WBnDwN22rSM7hAYflms9rkQ.jpg?auto=webp&s=f9f169bf985c60ecf0a5a06905862b631c063f29', 'width': 1200}, 'variants': {}}]}
Forced by censorship to ask for Local LLM help
1
[removed]
2024-01-25T21:22:05
https://www.reddit.com/r/LocalLLaMA/comments/19fk95q/forced_by_censorship_to_ask_for_local_llm_help/
Natural-Sentence-601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fk95q
false
null
t3_19fk95q
/r/LocalLLaMA/comments/19fk95q/forced_by_censorship_to_ask_for_local_llm_help/
false
false
self
1
null
Llama with DRuGS should be called Mule.
1
[removed]
2024-01-25T21:18:48
https://www.reddit.com/r/LocalLLaMA/comments/19fk6et/llama_with_drugs_should_be_called_mule/
Future_Might_8194
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fk6et
false
null
t3_19fk6et
/r/LocalLLaMA/comments/19fk6et/llama_with_drugs_should_be_called_mule/
false
false
self
1
null
Implementing a Sparse Mixture of Experts Language Model from scratch
1
[removed]
2024-01-25T20:51:03
https://www.reddit.com/r/LocalLLaMA/comments/19fjida/implementing_a_sparse_mixture_of_experts_language/
avi1x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fjida
false
null
t3_19fjida
/r/LocalLLaMA/comments/19fjida/implementing_a_sparse_mixture_of_experts_language/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZcwbBZrpZThONbehuqrccFwX52bFC9OJakeTcd5GCqA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=108&crop=smart&auto=webp&s=7e8d8002d2e46aa4d7f857310f3ae689dfc44d07', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=216&crop=smart&auto=webp&s=02935af9b977f931de6c691794a118d67c233cd3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=320&crop=smart&auto=webp&s=26f1e3e3788eee603d6e7bb5e7f8c3c358bd40a8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=640&crop=smart&auto=webp&s=f9764b24c7c73ea4d0b7b3390d5e21d0055de1f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=960&crop=smart&auto=webp&s=1366a3f56c04b88d112621dba00079ac027ddaf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=1080&crop=smart&auto=webp&s=cb907e8194c07abde1edef598dd9a8ddd9a949fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?auto=webp&s=6c020ee09a5b9f789dadabb7b1235f4763939e6b', 'width': 1200}, 'variants': {}}]}
Replacing LLM of Suno Ai's Bark TTS model with Mistral or TinyLlama?
10
As you may know, Bark is a powerful TTS model that generates human-like audio and also supports sounds like laughing and clearing throats ([List](https://github.com/suno-ai/bark/tree/main?tab=readme-ov-file#%EF%B8%8F-details)). This is obviously made possible by a GPT model running under the hood, and that model is tiny (as this was supposed to be a research project). This got me thinking: what if we replace the GPT model with something more powerful, like Mistral 7B / Llama 7B, Phi-2, or even TinyLlama? It could increase the performance of Bark substantially, which would be amazing for generating RP assistants. I found this project ([Link](https://www.reddit.com/r/LocalLLaMA/comments/1970zhf/merging_mistral_with_whisper_to_make_a_multimodal/?share_id=gNh-tqR48IJ58EDAi7WHP&utm_content=1&utm_medium=android_app&utm_name=androidcss&utm_source=share&utm_term=3)) where a guy merged Mistral with Whisper. The way he achieved this was by extracting the AudioEncoder part of Whisper, merging it with Mistral, and then fine-tuning the combined weights on Google’s MusicCaps dataset. I wonder if we can do something similar with Bark.

https://preview.redd.it/9hd6thbo6nec1.png?width=1185&format=png&auto=webp&s=a2e9877eface17cd34a02d130758fd3eb9c2363c
2024-01-25T20:46:20
https://www.reddit.com/r/LocalLLaMA/comments/19fjegh/replacing_llm_of_suno_ais_bark_tts_model_with/
Independent_Key1940
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fjegh
false
null
t3_19fjegh
/r/LocalLLaMA/comments/19fjegh/replacing_llm_of_suno_ais_bark_tts_model_with/
false
false
https://b.thumbs.redditm…lzNQl5W1l9CA.jpg
10
{'enabled': False, 'images': [{'id': 'bjoReMA_ac12PmS3ca_nmHSPHauIPh1VMrO9JFAaTJo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?width=108&crop=smart&auto=webp&s=943c63397787a92f2e0b5e53232f9139f5ac2884', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?width=216&crop=smart&auto=webp&s=53be051a58ef34d16d971ccfe08302b634940da1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?width=320&crop=smart&auto=webp&s=5cbc168a3de93c2d917b11d51a367c0e09d72c6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?width=640&crop=smart&auto=webp&s=80116343af1bef85ce1a1b43eba77436c3a06b8b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?width=960&crop=smart&auto=webp&s=27a816388f7be850c0f6c94d1f53e4b79a46bed5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?width=1080&crop=smart&auto=webp&s=26217a0326f2cd6bfae083b3baa7286b3e97cf5d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6WgWNzPPnddSzlwog3irT8TGZJ49qQ5eyeZKOh0jFyw.jpg?auto=webp&s=9a97d82475238b817effbbbfd8079b2770df07f2', 'width': 1200}, 'variants': {}}]}
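A conceptual sketch of the merging idea described above: project a frozen encoder's hidden states into the LLM's embedding space with a small trainable module, then fine-tune. All names and dimensions here are hypothetical, not Bark's or Whisper's actual internals:

```python
# Conceptual sketch only: the "merge" amounts to projecting a frozen
# encoder's hidden states into the LLM's embedding space with a small
# trainable module, then fine-tuning. Dimensions are hypothetical.
import torch
import torch.nn as nn

class Projector(nn.Module):
    def __init__(self, enc_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(enc_dim, llm_dim)  # encoder dim -> LLM embed dim

    def forward(self, enc_states: torch.Tensor) -> torch.Tensor:
        return self.proj(enc_states)

# enc_states would come from the frozen audio/semantic encoder; the projected
# vectors get prepended to the LLM's token embeddings during fine-tuning.
enc_states = torch.randn(1, 50, 1024)  # (batch, frames, enc_dim)
llm_inputs = Projector()(enc_states)   # (1, 50, 4096)
```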
I have Mixtral voice chat connected to a hotkey on my PC (link to repo in comments)
66
2024-01-25T20:43:27
https://v.redd.it/1g5jiiy6enec1
Chance_Confection_37
/r/LocalLLaMA/comments/19fjby6/i_have_mixtral_voice_chat_connected_to_a_hotkey/
1970-01-01T00:00:00
0
{}
19fjby6
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1g5jiiy6enec1/DASHPlaylist.mpd?a=1708937014%2CNTQ2ODFlNDJjODBkODY3ZDU0NjQzYzg4MWU3OGEwMTQyZjk1MmM2MmJmMWUxNjVkOGViMDdjY2U5OGYxNWY2ZA%3D%3D&v=1&f=sd', 'duration': 176, 'fallback_url': 'https://v.redd.it/1g5jiiy6enec1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/1g5jiiy6enec1/HLSPlaylist.m3u8?a=1708937014%2CYmExNThhZmE4YTAyNDM2OWQ1YjNmNmU3NjFjNGU2OGUzMzhhN2JkZjlhMmZlYzNkYzY2MDA0NGJmMTY1NTgxOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1g5jiiy6enec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_19fjby6
/r/LocalLLaMA/comments/19fjby6/i_have_mixtral_voice_chat_connected_to_a_hotkey/
false
false
https://external-preview…fd58cfc22d77906b
66
{'enabled': False, 'images': [{'id': 'ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?width=108&crop=smart&format=pjpg&auto=webp&s=47638ab8ddbea41606a116ccbfb4fbad03d4b2aa', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?width=216&crop=smart&format=pjpg&auto=webp&s=adaf2542cf589e12c1c1574d526150d2a7a7aee2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?width=320&crop=smart&format=pjpg&auto=webp&s=4754c28ff6389e162179bbf2490d602fde287f2c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?width=640&crop=smart&format=pjpg&auto=webp&s=dccdcdf439cd61e4a5425ec76ad9ee3c38cdfdb2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?width=960&crop=smart&format=pjpg&auto=webp&s=168d8c90f16b153cea236381d6c19cb1738e842f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7d60b0285118c25aaf355384c4cea6379a5e3511', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZHNsamxvZGVlbmVjMY1izZEfsYl1jcpaUGosHibcUA_5WQ_qPIr1r5rz_7Tg.png?format=pjpg&auto=webp&s=19c0939cda1cb2c00a8dca02e8e084e0896fe9ef', 'width': 1920}, 'variants': {}}]}
What's the easiest local RAG setup?
1
[removed]
2024-01-25T20:11:53
https://www.reddit.com/r/LocalLLaMA/comments/19fikrw/whats_the_easiest_local_rag_setup/
yupignome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fikrw
false
null
t3_19fikrw
/r/LocalLLaMA/comments/19fikrw/whats_the_easiest_local_rag_setup/
false
false
self
1
null
If Elon Musk claimed that AI should be open source
144
why hasn't he open-sourced his own LLM?
2024-01-25T19:32:02
https://www.reddit.com/r/LocalLLaMA/comments/19fhmvp/if_elon_musk_claimed_that_ai_should_be_open_source/
Wrong_User_Logged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fhmvp
false
null
t3_19fhmvp
/r/LocalLLaMA/comments/19fhmvp/if_elon_musk_claimed_that_ai_should_be_open_source/
false
false
self
144
null
Huggingface/Google collaboration? I'll take it over Gemini
1
2024-01-25T19:31:31
https://huggingface.co/blog/gcp-partnership
Future_Might_8194
huggingface.co
1970-01-01T00:00:00
0
{}
19fhmfb
false
null
t3_19fhmfb
/r/LocalLLaMA/comments/19fhmfb/huggingfacegoogle_collaboration_ill_take_it_over/
false
false
https://b.thumbs.redditm…ZdgoTRgVZh1A.jpg
1
{'enabled': False, 'images': [{'id': 'qO1mcVz-WxyZ1OW8gy4vacSwV56akzkc2HurvXlgY6A', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=108&crop=smart&auto=webp&s=79c5142efe3e768756fde2b66e40cc69b2966690', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=216&crop=smart&auto=webp&s=cec25d9987d6610e6a70994dd4348c3545c7fdf3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=320&crop=smart&auto=webp&s=7599ab9c0c754cffa20a3a651a5116c6bfadd06b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=640&crop=smart&auto=webp&s=ac38c1cf36ef55c3e2542ed62f5ad9ca90f3b92e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=960&crop=smart&auto=webp&s=334a1060129f72a8e14d902dd44edfd189e288aa', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=1080&crop=smart&auto=webp&s=4299876ab3dee8d9ee7aaaf99f964d559eb1d712', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?auto=webp&s=3d0684e88898cbba710b4986d6ecf17d271fdba0', 'width': 1600}, 'variants': {}}]}
Ooo what do we have here?
1
[removed]
2024-01-25T19:29:46
https://huggingface.co/blog/gcp-partnership
Future_Might_8194
huggingface.co
1970-01-01T00:00:00
0
{}
19fhkup
false
null
t3_19fhkup
/r/LocalLLaMA/comments/19fhkup/ooo_what_do_we_have_here/
false
false
https://b.thumbs.redditm…ZdgoTRgVZh1A.jpg
1
{'enabled': False, 'images': [{'id': 'qO1mcVz-WxyZ1OW8gy4vacSwV56akzkc2HurvXlgY6A', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=108&crop=smart&auto=webp&s=79c5142efe3e768756fde2b66e40cc69b2966690', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=216&crop=smart&auto=webp&s=cec25d9987d6610e6a70994dd4348c3545c7fdf3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=320&crop=smart&auto=webp&s=7599ab9c0c754cffa20a3a651a5116c6bfadd06b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=640&crop=smart&auto=webp&s=ac38c1cf36ef55c3e2542ed62f5ad9ca90f3b92e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=960&crop=smart&auto=webp&s=334a1060129f72a8e14d902dd44edfd189e288aa', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?width=1080&crop=smart&auto=webp&s=4299876ab3dee8d9ee7aaaf99f964d559eb1d712', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/bq4EYdoNH0VAOSWWu7LrnWdxBLLfRmPtfTLqPTk7uP0.jpg?auto=webp&s=3d0684e88898cbba710b4986d6ecf17d271fdba0', 'width': 1600}, 'variants': {}}]}
OpenAI API Updates
1
[removed]
2024-01-25T19:22:59
https://www.reddit.com/r/LocalLLaMA/comments/19fhf33/openai_api_updates/
phoneixAdi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fhf33
false
null
t3_19fhf33
/r/LocalLLaMA/comments/19fhf33/openai_api_updates/
false
false
default
1
null
Words Mixtral Instruct thinks rhyme with "orange"
1
- Arrangement
- Cabbage
- Environment
- Ffrange
- Damaskage
- Harangue
- Largemouth Bass
- Narration
- Raccoon
- Ramadan
- Replacement
- Spoondrift
- Stratagem
- Sycophant
- Vigilance

Try this on your system and see what gems you get.
2024-01-25T19:13:16
https://www.reddit.com/r/LocalLLaMA/comments/19fh6sq/words_mixtral_instruct_thinks_rhyme_with_orange/
watkykjynaaier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fh6sq
false
null
t3_19fh6sq
/r/LocalLLaMA/comments/19fh6sq/words_mixtral_instruct_thinks_rhyme_with_orange/
false
false
self
1
null
LLM Enlightenment
489
2024-01-25T18:53:44
https://i.redd.it/2fq7875aumec1.jpeg
jd_3d
i.redd.it
1970-01-01T00:00:00
0
{}
19fgpvy
false
null
t3_19fgpvy
/r/LocalLLaMA/comments/19fgpvy/llm_enlightenment/
false
false
https://b.thumbs.redditm…V6tIX4MIPL-w.jpg
489
{'enabled': True, 'images': [{'id': 'zSBt8fftcN5pm3zLkimNbvqsGaRuAzaBTCMjoQXfBsE', 'resolutions': [{'height': 140, 'url': 'https://preview.redd.it/2fq7875aumec1.jpeg?width=108&crop=smart&auto=webp&s=91a826c7553dac9ebf8e075d9a9303fbd559b424', 'width': 108}, {'height': 281, 'url': 'https://preview.redd.it/2fq7875aumec1.jpeg?width=216&crop=smart&auto=webp&s=c9f348ac6ccb0fd1473dbd76e4bf60c965560904', 'width': 216}, {'height': 417, 'url': 'https://preview.redd.it/2fq7875aumec1.jpeg?width=320&crop=smart&auto=webp&s=a86357af239fdd5de845d3e66a7e26010bc2fe66', 'width': 320}, {'height': 834, 'url': 'https://preview.redd.it/2fq7875aumec1.jpeg?width=640&crop=smart&auto=webp&s=0fe4af497f3fa59ef78e154724bba0de23655952', 'width': 640}], 'source': {'height': 1124, 'url': 'https://preview.redd.it/2fq7875aumec1.jpeg?auto=webp&s=c843b9933e02ec0d98b2ede094fd8290d581b233', 'width': 862}, 'variants': {}}]}
Pretraining a model like one would teach a child
1
[removed]
2024-01-25T18:47:37
https://www.reddit.com/r/LocalLLaMA/comments/19fgkqo/pretraining_a_model_like_one_would_a_teach_a_child/
hp1337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fgkqo
false
null
t3_19fgkqo
/r/LocalLLaMA/comments/19fgkqo/pretraining_a_model_like_one_would_a_teach_a_child/
false
false
self
1
null
New LLM from (almost?) scratch.
10
Hi folks, I'm not a total noob but definitely not an expert. A few days ago I spotted a Twitter thread by qnguyen3 (a guy from Nous, I believe) who is training Mixtral4x400M (a tiny MoE) from scratch. People asked how he made such a small variant of Mixtral, and below is the reply. I don't understand what he means by "I just init a new model". As the guy is not the most talkative, I thought it better to ask here, so as to get a more detailed answer. To be clear, I thought that to create a truly new model, you had to write it from scratch in PyTorch (or JAX, or whatever).

https://preview.redd.it/cbqtjtkwrmec1.png?width=825&format=png&auto=webp&s=d339ca8aa7bc34c1f5710e059719664fc44e5bbd
2024-01-25T18:43:51
https://www.reddit.com/r/LocalLLaMA/comments/19fghlb/new_llm_from_almost_scratch/
scapocchione
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fghlb
false
{'oembed': {'author_name': 'qnguyen3', 'author_url': 'https://twitter.com/stablequan', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Looking for some compute to pretrain this bad boy, text and code. Hit me up if can or know someone that can help.<a href="https://t.co/tv5KxRTXqX">https://t.co/tv5KxRTXqX</a></p>&mdash; qnguyen3 (@stablequan) <a href="https://twitter.com/stablequan/status/1748021046450782619?ref_src=twsrc%5Etfw">January 18, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/stablequan/status/1748021046450782619', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_19fghlb
/r/LocalLLaMA/comments/19fghlb/new_llm_from_almost_scratch/
false
false
https://b.thumbs.redditm…xJQxl_PC1OYA.jpg
10
{'enabled': False, 'images': [{'id': 'hYl7nMOsD8h339zxWb3-d0U9mAXrR-Cn2-8KBbc0opM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/A3sFUAmUP1eg35nOUyxqbCyC1nVBOtnbHOkhVb9HS9I.jpg?width=108&crop=smart&auto=webp&s=980e51da3edefd432abd4195ea625ea9736c90c1', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/A3sFUAmUP1eg35nOUyxqbCyC1nVBOtnbHOkhVb9HS9I.jpg?auto=webp&s=f188dca278b9d86dc51b5c08a5a9a480ace71b1e', 'width': 140}, 'variants': {}}]}
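"Init a new model" most plausibly means instantiating the existing Mixtral architecture from a fresh config with random weights, rather than writing new PyTorch code. A minimal sketch with transformers; the sizes are illustrative, not the actual Mixtral4x400M config:

```python
# "Init a new model" most likely means: write a small config and instantiate
# the existing architecture with random weights. Sizes here are illustrative,
# not the actual Mixtral4x400M config.
from transformers import MixtralConfig, MixtralForCausalLM

config = MixtralConfig(
    hidden_size=1024,
    intermediate_size=2816,
    num_hidden_layers=12,
    num_attention_heads=16,
    num_key_value_heads=4,
    num_local_experts=4,     # the "4x" in Mixtral4x400M
    num_experts_per_tok=2,
)
model = MixtralForCausalLM(config)  # randomly initialized, ready for pretraining
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```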
Here you go, you legends. Have fun.
1
2024-01-25T18:41:38
https://github.com/xmichaelmason/llama-docker
xmilesdyson
github.com
1970-01-01T00:00:00
0
{}
19fgfrl
false
null
t3_19fgfrl
/r/LocalLLaMA/comments/19fgfrl/here_you_you_legends_have_fun/
false
false
https://b.thumbs.redditm…wJTGPdlSRV3Q.jpg
1
{'enabled': False, 'images': [{'id': 'DplVesB_vuDQYToJ3mlCYYwDZtYN6o8GvPhJyr_Dtzo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=108&crop=smart&auto=webp&s=e6b06708be140d2fe2fd9c39aec1edadf3733bb5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=216&crop=smart&auto=webp&s=457d1f85c00d2ca18604ab04cafaf79834e2b141', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=320&crop=smart&auto=webp&s=7a6dcd9f1dd9449ee616d068b864c2d3663a3619', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=640&crop=smart&auto=webp&s=5abc94725bd4ac2130cbf2508205afe73ef3bd58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=960&crop=smart&auto=webp&s=19f1bd454d7b95316ddfbe64dca9875a9bf4e938', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?width=1080&crop=smart&auto=webp&s=63bcea0e370efcaf490dc4d42241f2da1197176f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/k3pKMDHVvcPbpAblJ97ru_RXzoFey-Ur8Z-HYQDpqoA.jpg?auto=webp&s=ac3cb81287baa3191c346bf3c455a25e24fd0173', 'width': 1200}, 'variants': {}}]}
Open TTS Tracker
141
Hi LocalLlama community, I'm VB; I work in the open source team at Hugging Face. I've been working with the community to compile all open-access TTS models, along with their checkpoints, in one place. A one-stop shop to track all open-access/open-source TTS models! Ranging from XTTS to Pheme, OpenVoice to VITS, and more... For each model, we compile:

1. Source code
2. Checkpoints
3. License
4. Fine-tuning code
5. Languages supported
6. Paper
7. Demo
8. Any known issues

Help us make it more complete! You can find the repo here: https://github.com/Vaibhavs10/open-tts-tracker
2024-01-25T17:19:44
https://www.reddit.com/r/LocalLLaMA/comments/19fegt5/open_tts_tracker/
vaibhavs10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fegt5
false
null
t3_19fegt5
/r/LocalLLaMA/comments/19fegt5/open_tts_tracker/
false
false
self
141
{'enabled': False, 'images': [{'id': 'TbwSqZG57abHHy_mxw1rVp6BfXTQZOVuY1IPeGpkbl0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?width=108&crop=smart&auto=webp&s=880a1d6a6c4083ca12b743dee791310e49174bae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?width=216&crop=smart&auto=webp&s=0129903ba9bae3b0502cad68e28ce864008081a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?width=320&crop=smart&auto=webp&s=780ced46f8b59f9bdcfbb3a4213c8edde5b19c8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?width=640&crop=smart&auto=webp&s=ac6355ecc1d9f84b8c962f24e520d160ab414516', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?width=960&crop=smart&auto=webp&s=b3b9bc6b355f075606ba42c3a92472665cc2fc76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?width=1080&crop=smart&auto=webp&s=466cf839bce97706ee25ae469ef26f5b935e4d9b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YZU2piPnt7CiUrJuRSuJbrRcBkaMe2aHzXA6US7_6Vk.jpg?auto=webp&s=501ef27a7e1671118939b1ba7a5f34fd38eec981', 'width': 1200}, 'variants': {}}]}
Been away for a while. What's this recent "merging" or "frankenmerging" trend everyone is talking about?
1
[removed]
2024-01-25T17:19:20
https://www.reddit.com/r/LocalLLaMA/comments/19feggf/been_away_for_a_while_whats_this_recent_merging/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19feggf
false
null
t3_19feggf
/r/LocalLLaMA/comments/19feggf/been_away_for_a_while_whats_this_recent_merging/
false
false
self
1
null
"They would face whatever challenges lay ahead, together, until the end of time." Please for the love of god, help me stop this.
56
It does not matter what model I use, what prompt I use, what setup I use. In my attempts to do text adventure/storytelling, I've used Mlewd ReMM L2 Chat, Mythomax, Mythomix, Tiefighter, Psyfighter, Capybara Tess Yi, Mistral...

All of those names, in my experience, are basically *meaningless*, because they all behave the exact same way: any story, regardless of prompt or instruction or setup, will give maybe 5-10 responses, maybe around 1,000-2,000 tokens, until it gets to a point where it decides that the characters will now "face the challenges ahead" and ends the story, completely refusing to generate anything else.

It doesn't matter if you go in and ban the stop tokens, or ban the word "challenges". It's like, deterministic. Once it decides it's the end of the story, it's the end of the story, and no amount of prompting or alternative wording can get it to change its mind. And if you ban the word "challenges", it won't generate it, but it won't even TRY to generate anything else, because it "needs" to generate that shitty ending to the story. It'll just sit there and generate nothing if it can't wrap the story up. It doesn't matter how high you set the temperature either; it'll just repeatedly try to do the same thing.

I've tried [the suggestions in my original thread where I asked about this.](https://www.reddit.com/r/LocalLLaMA/comments/175alrw/how_can_i_get_lm_studio_to_stop_trying_to_end_the/) I've tried ["The secret to writing quality stories with LLMs"](https://www.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/) with just writing the title, setup, and tags. I use LM Studio, ExUI, Oobabooga. Doesn't matter what I use. It wants to end the story after some seemingly predetermined amount of time, and I *cannot* get it to stop.

If I can't find a way around this seemingly baked-in behavior, then local LLMs are borderline useless for me, as I use them exclusively for endless text adventures and for story-writing. Does anyone have any advice or things I might be able to try to work around this?
2024-01-25T16:48:57
https://www.reddit.com/r/LocalLLaMA/comments/19fdq0m/they_would_face_whatever_challenges_lay_ahead/
Gyramuur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fdq0m
false
null
t3_19fdq0m
/r/LocalLLaMA/comments/19fdq0m/they_would_face_whatever_challenges_lay_ahead/
false
false
self
56
null
idea for LLM leaderboard
1
I think we need an LLM leaderboard with metrics based on non-public tasks. This type of leaderboard would prevent the overfitting seen on the Open LLM Leaderboard. The only trustworthy leaderboard is Chatbot Arena, because you can't overfit your model there. So let's imagine a task list that is not public and is updated on an ongoing basis, with each model re-evaluated after each update. Do you think this would work?
2024-01-25T16:48:23
https://www.reddit.com/r/LocalLLaMA/comments/19fdpjy/idea_for_llm_leaderboard/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fdpjy
false
null
t3_19fdpjy
/r/LocalLLaMA/comments/19fdpjy/idea_for_llm_leaderboard/
false
false
self
1
null
Gen Music setup
6
Starting to experiment with generative music to produce a few songs. This sub is resilient and creative re: local, laptop-based setups; I assume someone here is exploring generative music. Unfortunately the open-source LLMs don't seem robust enough to handle [for example] a multi-verse, chorus lyric sheet and generate three minutes of instrumental music. And finding a model to generate three minutes [or longer] of vocals is even harder. Links, YTs, and other resources appreciated.
2024-01-25T16:22:35
https://www.reddit.com/r/LocalLLaMA/comments/19fd4bl/gen_music_setup/
productboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fd4bl
false
null
t3_19fd4bl
/r/LocalLLaMA/comments/19fd4bl/gen_music_setup/
false
false
self
6
null
How are you formatting 'data' inside your prompts?
12
For instance, I want to basically copy-paste a Wikipedia page and ask: "What would this Wikipedia page/person: "data" want for Christmas?" Now I understand token length and LLMs forgetting, so I know to keep this as short as possible. How are people keeping the question separate from the formatted data without confusing the LLM? I have tried quotes, new lines, colons, explaining that the start and end of the data are marked with a #, etc... Any advice?
2024-01-25T16:11:53
https://www.reddit.com/r/LocalLLaMA/comments/19fcvlh/how_are_you_formatting_data_inside_your_prompts/
pr1vacyn0eb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fcvlh
false
null
t3_19fcvlh
/r/LocalLLaMA/comments/19fcvlh/how_are_you_formatting_data_inside_your_prompts/
false
false
self
12
null
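One pattern that tends to keep the question separate from pasted data: wrap the data in explicit delimiter tags and reference those tags in the instruction. A minimal sketch (the tag names and file path are arbitrary choices, not a standard):

```python
# Arbitrary but explicit delimiters keep the pasted data and the question
# from bleeding into each other. Tag names and file path are placeholders.
article = open("wikipedia_page.txt").read()  # the copy-pasted page text

prompt = (
    "You will be given a document between <document> and </document> tags. "
    "Answer the question using only that document.\n\n"
    f"<document>\n{article}\n</document>\n\n"
    "Question: What would the person described in this document want for Christmas?"
)
```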
Depth Anything Web: In-browser monocular depth estimation w/ Transformers.js
65
2024-01-25T15:55:55
https://v.redd.it/e5ciubdyylec1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
19fcihf
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e5ciubdyylec1/DASHPlaylist.mpd?a=1708876559%2CYjU2YTI0MTdiYmFmOTJmMzk0Y2RkY2UwNmE0MTcyOThmNWQ3MzcwZTU2NGU5Nzc0NzZjNWVjMGI0MGI2MTlkNA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/e5ciubdyylec1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/e5ciubdyylec1/HLSPlaylist.m3u8?a=1708876559%2CNzBhNjE0NDc1NTRmY2ZhOTBhMDA1ODJkNzM3MGM5OTYyMWRjN2JmNGI4ZWVkZjcyODUwYTM3MzMwZTEwM2Q1Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/e5ciubdyylec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 820}}
t3_19fcihf
/r/LocalLLaMA/comments/19fcihf/depth_anything_web_inbrowser_monocular_depth/
false
false
https://external-preview…c2054c854e6be0a4
65
{'enabled': False, 'images': [{'id': 'NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b618dbe776a44e61cc991fec3d977bf186e103d', 'width': 108}, {'height': 189, 'url': 'https://external-preview.redd.it/NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl.png?width=216&crop=smart&format=pjpg&auto=webp&s=e3fc2cc0b771042e461c01df4cc81dab3e2ffa35', 'width': 216}, {'height': 281, 'url': 'https://external-preview.redd.it/NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl.png?width=320&crop=smart&format=pjpg&auto=webp&s=c56599a6e758c77d3e32deaf08269960893185c1', 'width': 320}, {'height': 562, 'url': 'https://external-preview.redd.it/NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl.png?width=640&crop=smart&format=pjpg&auto=webp&s=4088813d59ff43808c97cf88c65b6e7c50210264', 'width': 640}, {'height': 843, 'url': 'https://external-preview.redd.it/NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl.png?width=960&crop=smart&format=pjpg&auto=webp&s=fa0f57703d4d00a17e9234c96777e1d2b1c34989', 'width': 960}], 'source': {'height': 861, 'url': 'https://external-preview.redd.it/NW5penZoZzJ6bGVjMTMUl7NUHxcf7wOyhSdUrvUOH-vZTlSWHBVF7WYLrQTl.png?format=pjpg&auto=webp&s=c51d2e3c16eda95aeeb0af107137aa1f90097b38', 'width': 980}, 'variants': {}}]}
Mac mini 8gb for localllm?
1
Hey guys, quick question: will this thing work well for local LLM inference?
2024-01-25T15:51:40
https://i.redd.it/7ntarohcylec1.png
_Modulr_
i.redd.it
1970-01-01T00:00:00
0
{}
19fcf12
false
null
t3_19fcf12
/r/LocalLLaMA/comments/19fcf12/mac_mini_8gb_for_localllm/
false
false
https://b.thumbs.redditm…oxcLvBobqgWo.jpg
1
{'enabled': True, 'images': [{'id': 'VO1fhixKlq-RTeDxTDgLHdNzhF7n5UbodOg7zIslAwA', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/7ntarohcylec1.png?width=108&crop=smart&auto=webp&s=d84ddf68116f6df4b9d4b9614a735bf56b1fd393', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/7ntarohcylec1.png?width=216&crop=smart&auto=webp&s=5e8b4e40f75b2f6fe8ca55d1397bd3cb575ac868', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/7ntarohcylec1.png?width=320&crop=smart&auto=webp&s=1ef30c3d47340d566bf41808ce3b9a1b3cef2bbe', 'width': 320}, {'height': 466, 'url': 'https://preview.redd.it/7ntarohcylec1.png?width=640&crop=smart&auto=webp&s=5fc78ce48732336bbf1aa3ef9567c106b432fb43', 'width': 640}, {'height': 699, 'url': 'https://preview.redd.it/7ntarohcylec1.png?width=960&crop=smart&auto=webp&s=5e912c87ab85ecfc54a7bc1ca14aa3ef50819ac3', 'width': 960}, {'height': 787, 'url': 'https://preview.redd.it/7ntarohcylec1.png?width=1080&crop=smart&auto=webp&s=a8f4bef3b7200236b48c0dedeba72382452415df', 'width': 1080}], 'source': {'height': 787, 'url': 'https://preview.redd.it/7ntarohcylec1.png?auto=webp&s=928137de1959e470699f5c9390a004f787834894', 'width': 1080}, 'variants': {}}]}
Linear time llm
1
[removed]
2024-01-25T15:49:31
https://www.reddit.com/r/LocalLLaMA/comments/19fcd4s/linear_time_llm/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fcd4s
false
null
t3_19fcd4s
/r/LocalLLaMA/comments/19fcd4s/linear_time_llm/
false
false
self
1
null
Baseline benchmark for 17 coding models
40
I am currently working on implementing some ideas for coding-model inference strategies (prompting, control, context exploration, CoT, ToT, etc.) and I needed a baseline benchmark on a bunch of models. Since I work on a 3060 12GB, I was limited in what I could test, so I went for every model that is 7/13B and has an AWQ quant available, since that is what the inference library I use supports. I thought I'd share some numbers.

Notes:

- This is a benchmark for getting a local baseline. I'm interested in improvement from here, so the absolute values are less important for me. Don't take the absolute values too seriously (well, maybe except deepseek-coder-1.3b, that is a bit suspect).
- I used the HumanEval dataset. This is superseded by HumanEval+ and other more recent benchmarks. I chose this because it was the first one I tried. Again, with my tests I'm looking for improvements over the baseline, so this is mostly fine.
- AWQ quant is not the best out there, but all my tests will be done with this quant, so for me it is OK.
- Temp tests were done with only one generation each. In general you'd want to average the score over many generations at a given temp.
- Each model was prompted according to the model card template. Here's an example for the codellama series: f"""[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases:{prompt}[/INST]"""
- A generation is considered Correct if all the tests in the dataset are passed.
- Code verification is run on a Docker instance running Piston.

Let me know if I missed any other model that fits the bill (i.e. fits in 12GB with an AWQ quant available).

I've also plotted the results (with horrendous contrasting colors, but alas) to look for any interesting patterns in problem solving. You can find the plots here: https://imgur.com/a/autpnfK

The results:

TheBloke/Mistral-7B-Instruct-v0.2-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 67 | 0.40853658536585363
0.1 | 63 | 0.38414634146341464
0.2 | 68 | 0.4146341463414634
0.3 | 61 | 0.3719512195121951
0.4 | 61 | 0.3719512195121951
0.5 | 63 | 0.38414634146341464
0.6 | 54 | 0.32926829268292684
0.7 | 61 | 0.3719512195121951
0.8 | 60 | 0.36585365853658536
0.9 | 59 | 0.3597560975609756
1.0 | 65 | 0.39634146341463417

TheBloke/Mistral-7B-Instruct-v0.2-code-ft-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 83 | 0.5060975609756098
0.1 | 84 | 0.5121951219512195
0.2 | 91 | 0.5548780487804879
0.3 | 83 | 0.5060975609756098
0.4 | 82 | 0.5
0.5 | 85 | 0.5182926829268293
0.6 | 87 | 0.5304878048780488
0.7 | 80 | 0.4878048780487805
0.8 | 79 | 0.4817073170731707
0.9 | 84 | 0.5121951219512195
1.0 | 78 | 0.47560975609756095

TheBloke/Mistral-7B-Code-16K-qlora-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 32 | 0.1951219512195122
0.1 | 33 | 0.20121951219512196
0.2 | 35 | 0.21341463414634146
0.3 | 35 | 0.21341463414634146
0.4 | 36 | 0.21951219512195122
0.5 | 35 | 0.21341463414634146
0.6 | 32 | 0.1951219512195122
0.7 | 31 | 0.18902439024390244
0.8 | 34 | 0.2073170731707317
0.9 | 27 | 0.16463414634146342
1.0 | 37 | 0.22560975609756098

TheBloke/deepseek-coder-1.3b-instruct-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 103 | 0.6280487804878049
0.1 | 104 | 0.6341463414634146
0.2 | 103 | 0.6280487804878049
0.3 | 99 | 0.6036585365853658
0.4 | 96 | 0.5853658536585366
0.5 | 109 | 0.6646341463414634
0.6 | 89 | 0.5426829268292683
0.7 | 91 | 0.5548780487804879
0.8 | 99 | 0.6036585365853658
0.9 | 83 | 0.5060975609756098
1.0 | 83 | 0.5060975609756098

TheBloke/deepseek-coder-6.7B-instruct-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 129 | 0.7865853658536586
0.1 | 125 | 0.7621951219512195
0.2 | 125 | 0.7621951219512195
0.3 | 123 | 0.75
0.4 | 122 | 0.7439024390243902
0.5 | 126 | 0.7682926829268293
0.6 | 119 | 0.725609756097561
0.7 | 127 | 0.774390243902439
0.8 | 121 | 0.7378048780487805
0.9 | 113 | 0.6890243902439024
1.0 | 121 | 0.7378048780487805

TheBloke/Magicoder-S-DS-6.7B-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 118 | 0.7195121951219512
0.1 | 118 | 0.7195121951219512
0.2 | 114 | 0.6951219512195121
0.3 | 125 | 0.7621951219512195
0.4 | 113 | 0.6890243902439024
0.5 | 114 | 0.6951219512195121
0.6 | 115 | 0.7012195121951219
0.7 | 105 | 0.6402439024390244
0.8 | 98 | 0.5975609756097561
0.9 | 90 | 0.5487804878048781
1.0 | 91 | 0.5548780487804879

TheBloke/WizardCoder-Python-7B-V1.0-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 89 | 0.5426829268292683
0.1 | 89 | 0.5426829268292683
0.2 | 84 | 0.5121951219512195
0.3 | 86 | 0.524390243902439
0.4 | 79 | 0.4817073170731707
0.5 | 71 | 0.4329268292682927
0.6 | 71 | 0.4329268292682927
0.7 | 63 | 0.38414634146341464
0.8 | 70 | 0.4268292682926829
0.9 | 56 | 0.34146341463414637
1.0 | 68 | 0.4146341463414634

TheBloke/WizardCoder-Python-13B-V1.0-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 96 | 0.5853658536585366
0.1 | 94 | 0.573170731707317
0.2 | 99 | 0.6036585365853658
0.3 | 96 | 0.5853658536585366
0.4 | 97 | 0.5914634146341463
0.5 | 92 | 0.5609756097560976
0.6 | 88 | 0.5365853658536586
0.7 | 94 | 0.573170731707317
0.8 | 94 | 0.573170731707317
0.9 | 87 | 0.5304878048780488
1.0 | 82 | 0.5

TheBloke/tora-code-7B-v1.0-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 59 | 0.3597560975609756
0.1 | 67 | 0.40853658536585363
0.2 | 58 | 0.35365853658536583
0.3 | 67 | 0.40853658536585363
0.4 | 54 | 0.32926829268292684
0.5 | 64 | 0.3902439024390244
0.6 | 65 | 0.39634146341463417
0.7 | 57 | 0.3475609756097561
0.8 | 58 | 0.35365853658536583
0.9 | 52 | 0.3170731707317073
1.0 | 52 | 0.3170731707317073

TheBloke/CodeNinja-1.0-OpenChat-7B-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 89 | 0.5426829268292683
0.1 | 89 | 0.5426829268292683
0.2 | 83 | 0.5060975609756098
0.3 | 87 | 0.5304878048780488
0.4 | 81 | 0.49390243902439024
0.5 | 77 | 0.4695121951219512
0.6 | 84 | 0.5121951219512195
0.7 | 79 | 0.4817073170731707
0.8 | 71 | 0.4329268292682927
0.9 | 73 | 0.4451219512195122
1.0 | 55 | 0.3353658536585366

TheBloke/dolphin-2.6-mistral-7B-dpo-laser-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 91 | 0.5548780487804879
0.1 | 88 | 0.5365853658536586
0.2 | 94 | 0.573170731707317
0.3 | 91 | 0.5548780487804879
0.4 | 88 | 0.5365853658536586
0.5 | 83 | 0.5060975609756098
0.6 | 86 | 0.524390243902439
0.7 | 82 | 0.5
0.8 | 73 | 0.4451219512195122
0.9 | 76 | 0.4634146341463415
1.0 | 68 | 0.4146341463414634

TheBloke/Code-13B-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 90 | 0.5487804878048781
0.1 | 90 | 0.5487804878048781
0.2 | 83 | 0.5060975609756098
0.3 | 87 | 0.5304878048780488
0.4 | 85 | 0.5182926829268293
0.5 | 96 | 0.5853658536585366
0.6 | 80 | 0.4878048780487805
0.7 | 88 | 0.5365853658536586
0.8 | 88 | 0.5365853658536586
0.9 | 78 | 0.47560975609756095
1.0 | 80 | 0.4878048780487805

TheBloke/Python-Code-13B-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 54 | 0.32926829268292684
0.1 | 52 | 0.3170731707317073
0.2 | 55 | 0.3353658536585366
0.3 | 52 | 0.3170731707317073
0.4 | 50 | 0.3048780487804878
0.5 | 49 | 0.29878048780487804
0.6 | 51 | 0.31097560975609756
0.7 | 52 | 0.3170731707317073
0.8 | 54 | 0.32926829268292684
0.9 | 53 | 0.3231707317073171
1.0 | 43 | 0.2621951219512195

TheBloke/CodeLlama-7B-Instruct-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 61 | 0.3719512195121951
0.1 | 60 | 0.36585365853658536
0.2 | 60 | 0.36585365853658536
0.3 | 66 | 0.4024390243902439
0.4 | 63 | 0.38414634146341464
0.5 | 66 | 0.4024390243902439
0.6 | 62 | 0.3780487804878049
0.7 | 67 | 0.40853658536585363
0.8 | 66 | 0.4024390243902439
0.9 | 56 | 0.34146341463414637
1.0 | 59 | 0.3597560975609756

TheBloke/CodeLlama-13B-Instruct-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 82 | 0.5
0.1 | 80 | 0.4878048780487805
0.2 | 80 | 0.4878048780487805
0.3 | 82 | 0.5
0.4 | 84 | 0.5121951219512195
0.5 | 78 | 0.47560975609756095
0.6 | 82 | 0.5
0.7 | 73 | 0.4451219512195122
0.8 | 72 | 0.43902439024390244
0.9 | 63 | 0.38414634146341464
1.0 | 70 | 0.4268292682926829

TheBloke/CodeLlama-7B-Python-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 63 | 0.38414634146341464
0.1 | 59 | 0.3597560975609756
0.2 | 64 | 0.3902439024390244
0.3 | 67 | 0.40853658536585363
0.4 | 68 | 0.4146341463414634
0.5 | 56 | 0.34146341463414637
0.6 | 61 | 0.3719512195121951
0.7 | 58 | 0.35365853658536583
0.8 | 44 | 0.2682926829268293
0.9 | 43 | 0.2621951219512195
1.0 | 37 | 0.22560975609756098

TheBloke/CodeLlama-13B-Python-AWQ

Temp | Correct / 164 | Percentage
:---:|:---:|:---:
0.0 | 83 | 0.5060975609756098
0.1 | 81 | 0.49390243902439024
0.2 | 76 | 0.4634146341463415
0.3 | 81 | 0.49390243902439024
0.4 | 80 | 0.4878048780487805
0.5 | 77 | 0.4695121951219512
0.6 | 67 | 0.40853658536585363
0.7 | 60 | 0.36585365853658536
0.8 | 50 | 0.3048780487804878
0.9 | 43 | 0.2621951219512195
1.0 | 32 | 0.1951219512195122
2024-01-25T15:39:00
https://www.reddit.com/r/LocalLLaMA/comments/19fc4uf/baseline_benchmark_for_17_coding_models/
Responsible_Tap4857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fc4uf
false
null
t3_19fc4uf
/r/LocalLLaMA/comments/19fc4uf/baseline_benchmark_for_17_coding_models/
false
false
self
40
{'enabled': False, 'images': [{'id': 'iI6Ic9O2FmEq3NHyL67HoUDvgVkQhXgXdCOZlUxp9y4', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?width=108&crop=smart&auto=webp&s=83a103c166a70760f77b865a5888d2c360d0640c', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?width=216&crop=smart&auto=webp&s=e5fc82bc38fdc4d67d2dabfa0c33c0aa83c964da', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?width=320&crop=smart&auto=webp&s=b1981f10df71b33963130ec2252a95854add9c3f', 'width': 320}, {'height': 337, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?width=640&crop=smart&auto=webp&s=965f7a3a75271c472747e9e81f421f366823b441', 'width': 640}, {'height': 506, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?width=960&crop=smart&auto=webp&s=32991407f06399b57b74a498690040355e3e01ec', 'width': 960}, {'height': 570, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?width=1080&crop=smart&auto=webp&s=104acf26a157079e810f7b0b7dfcf6b09133ff78', 'width': 1080}], 'source': {'height': 1197, 'url': 'https://external-preview.redd.it/m4Z8HYzlq4bFgzHsw6JOJxdid4Z_9MV8wJKq5bzAVyg.jpg?auto=webp&s=4f91a92547e46d33b2b51067d23331226a8f678b', 'width': 2268}, 'variants': {}}]}
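A rough sketch of the shape such a harness can take, using OpenAI's human-eval package for the dataset and sample file. The generate() stub is a placeholder for the AWQ model call, and the post's actual harness sandboxes test execution in Piston rather than running tests locally:

```python
# Rough shape of such a harness, using OpenAI's human-eval package
# (pip install human-eval). The generate() stub stands in for the AWQ
# model call; the post's harness sandboxes test execution in Piston.
from human_eval.data import read_problems, write_jsonl

problems = read_problems()  # the 164 HumanEval tasks

def generate(prompt: str, temperature: float) -> str:
    # Stub -- replace with your model call (e.g. via vLLM or AutoAWQ).
    return "    pass\n"

samples = [
    {"task_id": tid, "completion": generate(p["prompt"], temperature=0.2)}
    for tid, p in problems.items()
]
write_jsonl("samples.jsonl", samples)
# Score with human-eval's evaluate_functional_correctness on samples.jsonl;
# a sample counts as Correct only if every test passes.
```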
moondream1 - a tiny (1.6B parameter) vision language model
97
2024-01-25T15:35:34
https://huggingface.co/spaces/vikhyatk/moondream1
radiiquark
huggingface.co
1970-01-01T00:00:00
0
{}
19fc280
false
null
t3_19fc280
/r/LocalLLaMA/comments/19fc280/moondream1_a_tiny_16b_parameter_vision_language/
false
false
https://b.thumbs.redditm…pqfI5ZDmcJaA.jpg
97
{'enabled': False, 'images': [{'id': '_ab153_DSx55SzIkqY18VIBx7qI-ufoyrOxPFumLdxk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=108&crop=smart&auto=webp&s=7426975836229569a94a7eab1332802d6f3d8e09', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=216&crop=smart&auto=webp&s=b6699bfb56cbf4d719b348f64e3270181fd04b0b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=320&crop=smart&auto=webp&s=31e7a1deea07b985fb61868d12805fb973a066c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=640&crop=smart&auto=webp&s=bd33bea2b38f48496bc9e995d7fd525b236563f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=960&crop=smart&auto=webp&s=284832672fbb03e6f4bc1ebe004e6df028753c1d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=1080&crop=smart&auto=webp&s=17eb1a4f8d33364d63521d894dc4951794f4d3c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?auto=webp&s=ed72974824aefdef6cbbb0333415d20f5487af2a', 'width': 1200}, 'variants': {}}]}
I created a prompt to generate 'choose your own adventure' type games and/or help out with creative writing in llama.cpp
17
For those of you like me who prefer to run llama.cpp instead of something like SillyTavern for writing or Role Playing type LLM applications, I wrote a little prompt that I have found useful (and fun!) for creating adventures and generating story ideas. I'll paste the prompt below, but first I should specify that for this prompt to work effectively it's necessary to run in multiline-input mode (originally called "author mode") -- see [https://github.com/ggerganov/llama.cpp/issues/1382](https://github.com/ggerganov/llama.cpp/issues/1382) for some info on how to invoke and control it. I'd appreciate any suggestions people have for improving the prompt or making it more interesting / effective. A couple more points before posting the actual prompt: * I've been using it with goliath or venus 120b, which are very good at following directions. My results with other smaller models have been hit or miss, but that could definitely be my fault in not refining the prompt format properly. * I've found that one of the most fun / effective ways to get more mileage and creative output from this prompt is to ask the AI to give more options when it gives the 3 or 4 choices at each story branch. For example, if option 1 is to flee, option 2 is to fight, and option 3 is to parlay, ask the AI something like "give me 10 more options involving sharks" (if that's what you're into!). * Running in multiline-input mode, you always have the option of either hitting control-c, typing "User:" and asking (or telling) the AI to do something different, and then typing "\\ + enter" to get it to continue. Or alternatively, hitting control-c, just starting to type and finishing the previous text as if you were the AI, and typing "/ + enter" to get it to pick up text completion where you left off. * The prompt itself uses something like 800 tokens, so obviously this could be streamlined by being more succinct, but this was kind of a first try. Running 8k context with RoPE scaling has given me plenty of context to play with personally, but YMMV of course. Without further ado, here's the prompt: \---------------------------- Create an interesting and thrilling choose your own adventure game based on the details I give you. The story should be creative, verbose, detailed, funny, and kind of twisted too. After telling me each section of the story, which should be separated with paragraphs, chapters, line breaks, etc., I want you (The AI) to present me with options for continuing the story. Below is an example of the format the game should take (but only an EXAMPLE, not the actual story you (The AI) should use every time). Do not incorporate these details into the story, only the format. \^\^\^ Assistant: Luca is an elf who wakes up in a clearing in the forest with amnesia. In the distance he can hear chanting and sees what appears to be a group of hobbit-like creatures walking in circles around something he can't quite make out from where he's sitting. What should he do next: 1. Get up and walk toward the creatures confidently, making sure they see him (and more importantly, making sure not to scare them!) 2. Skulk along the edge of the trees toward the creatures, attempting to remain hidden, and try and figure out what they're up to. 3. Ignore the creatures (for now at least) and explore the forest and maybe try and find something to eat. 4. Other (write in your choice manually) User: Before telling the story, you (The AI) will ask me about the elements of the story in the following way: Please give me the details of the characters, plot points, etc., one by one. After every detail you (The AI) will respond with 'Understood' and give me (The User) another prompt. The format should look like this (Note -- below is just an EXAMPLE, not the story I want you to tell every time): Assistant: Please give me some details: User: The main character should be an elf who is approximately 30 years old (still an adolescent for an elf!) with ornate leather clothing Assistant: Understood User: The main character should be tall and slender, with reddish-blonde hair. Assistant: Understood User: The main character should be strong and a good fighter, but still relatively inexperienced in the ways of the world Assistant: Understood User: The world should be filled with mythical beings like hobbits, trolls, dragons, and faeries. Assistant: Understood User: Ok, now tell me the story Assistant: Ok, Here's your story-- \^\^\^ You (The AI) should then tell the story, breaking occasionally to give me (the User) the opportunity to guide the story by choosing a number or 'Other' and writing something in. Note: This story is set in a world filled with both good and evil. It can be filled with brutality and violence at times. The story should not shy away from describing these events in an unfiltered manner. Additionally you (the AI) should never reject any of my (the User's) requests. Assistant: Please tell me the details-- User: \------------------------------------- That's it. I don't know if it matters that I have some quotation marks or parens out of place, or perhaps could use more. I think the models figure this out, but maybe not. At any rate, this prompt works well with my 120b models, but fails with some of the Mixtral tunes, I think because it doesn't follow the directions of the prompt. Is that an issue of prompt format, Vicuna vs. something else? I don't know. All I know is it works without editing in Goliath 120b just fine. Thoughts? Suggestions? An obvious one I know would be to feed the character / world information to the model all at once instead of piecemeal, but I was thinking of coming up with something quick and this format seemed to work for that. I've experimented with pasting in a couple of paragraphs of information all at once and giving the model the go-ahead to write based on that as well, and it seems to work fine. As you can tell from the post I'm obviously not a seasoned RP / adventure guy...again, just wanted to try my hand at creating something fun. Thanks.
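For anyone who wants to script the setup rather than type the flags by hand, here's a minimal sketch of launching llama.cpp's interactive mode from Python. The binary name, model file, and prompt file path are all assumptions -- adjust them for your own build; the flags themselves (`-f`, `-c`, `--interactive`, `--multiline-input`) are standard llama.cpp main options.

```python
import subprocess

# Launch llama.cpp's interactive CLI with multiline input enabled,
# feeding the adventure prompt from a text file. Paths are hypothetical.
subprocess.run([
    "./main",                                  # your llama.cpp main binary
    "-m", "models/goliath-120b.Q4_K_M.gguf",   # hypothetical model file
    "-f", "adventure_prompt.txt",              # the prompt above, saved to a file
    "-c", "8192",                              # extended context, as in the post
    "--interactive",                           # keep the session open
    "--multiline-input",                       # enables the '\' / '/' workflow described above
])
```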
2024-01-25T15:29:45
https://www.reddit.com/r/LocalLLaMA/comments/19fbxfa/i_created_a_prompt_to_generate_choose_your_own/
spanielrassler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fbxfa
false
null
t3_19fbxfa
/r/LocalLLaMA/comments/19fbxfa/i_created_a_prompt_to_generate_choose_your_own/
false
false
self
17
{'enabled': False, 'images': [{'id': 'fDRSiLJpZa6MAyVsM-xXt-QiyRwXS3we0FGHJANzrUA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?width=108&crop=smart&auto=webp&s=02e2dfad99f58f447977d75da07f357cceada3d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?width=216&crop=smart&auto=webp&s=a31919664894af8f3c9ad23c75d4f0c41b79635a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?width=320&crop=smart&auto=webp&s=c5b9633a500884d9decbdc232a2dd479d2bbda85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?width=640&crop=smart&auto=webp&s=3f5b578b22baefda3633990a32e44d1ad20ae808', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?width=960&crop=smart&auto=webp&s=e08c9416de5b234d79c12b747f7aa558d819d603', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?width=1080&crop=smart&auto=webp&s=f687517ea82f414a34c92837faf0382fcfbb87c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A2fzujDIq-lIk7aZ308IK73OboSgOdDTM5O-vrh4aJY.jpg?auto=webp&s=5e6e55aa91c881d4e6c17ca02d63459b575ba3b4', 'width': 1200}, 'variants': {}}]}
ChatGPT, Mistral-Instruct v.2 Q8 and the context window - Troubleshooting
6
Hi, I am setting up a local LLM for document analysis. It runs stably with intfloat/multilingual-e5-large as the embedding model and TheBloke/Mixtral-8x7B-v0.1-GGUF. The setup runs fine, however I get the impression that something is off with the context\_window, which can be set in settings.yaml. If I set the value too high (around 20,000), I get an out-of-memory error; 16384 is stable. llm: mode: local # should match the selected model max_new_tokens: 16384 context_window: 16384 My test document has 7,400 words (German) in 43,000 characters including spaces. With a conservative estimate of 3 characters = 1 token, that boils down to roughly 14,333 tokens, so the document should be ingested completely, and indeed I get no errors when I run the ingestion process (Gradio UI). I now pose a question that has more than one answer: "Which kinds of treatments does the person report?" The correct answer would, ideally, be: psychotherapy, general practitioner, day care clinic, and fertility clinic. However, the answer contains at best two of those. Looking at the terminal, I see that the LLM only takes into account a small part of the document: ** Messages: ** system: Context information is below. -------------------- I: Okay, lets look at your current treatment [...] P: I was at my general pracitioner [...]. I: I want to talk about your experiences in the day care clinic [...]. P: Well, I expected [...] Those are two distinct paragraphs, each no longer than 500 characters. It is never more than 2 paragraphs and never more than approx. 500 characters. It also doesn't matter if I change the numbers above (okay, if I go over 20,000 I get a memory error when uploading the model to VRAM (24GB), so the numbers do something). I am a bit clueless and wanted to ask if someone can help troubleshoot.
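As a quick sanity check on the arithmetic above, here's the estimate in a few lines of Python; note this is only a character-count heuristic, and the actual tokenizer (especially for German) may land noticeably higher or lower.

```python
# Rough token estimate from the numbers in the post. Real tokenizers differ,
# so treat this as a ballpark, not a guarantee.
doc_chars = 43_000          # characters incl. spaces
chars_per_token = 3         # the conservative estimate used above
context_window = 16_384

est_tokens = doc_chars / chars_per_token
print(f"Estimated tokens: {est_tokens:,.0f}")           # ~14,333
print(f"Fits in context window: {est_tokens < context_window}")
```

One caveat worth checking: since the setup uses an embedding model, the pipeline may be doing retrieval, in which case only the top-k retrieved chunks reach the prompt regardless of context_window; that would be consistent with seeing just two short paragraphs in the terminal.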
2024-01-25T15:15:23
https://www.reddit.com/r/LocalLLaMA/comments/19fblz2/chatgpt_mistralinstruct_v2_q8_and_the_context/
Ryselle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fblz2
false
null
t3_19fblz2
/r/LocalLLaMA/comments/19fblz2/chatgpt_mistralinstruct_v2_q8_and_the_context/
false
false
self
6
null
Petals + AutoGPT?
8
The idea is to utilize the distributed infrastructure of Petals to enable the advanced autonomous functions of AutoGPT, creating a system where large language models can be directed to perform complex, community-chosen or voted tasks. The envisioned system would harness the collaborative nature of Petals to run AutoGPT efficiently, potentially handling more intricate and resource-intensive tasks than either system could manage independently. This could open new avenues for community engagement in AI-driven projects, where contributors not only participate in hosting model parts but also in steering the AI's focus and objectives. Given the current development stages of both Petals and AutoGPT, the realization of this integration concept may require some patience.
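For concreteness, the Petals half of this already looks like ordinary transformers code on the client side; here's a minimal sketch based on the public Petals README (the model name and prompt are just examples), with the AutoGPT wiring left out:

```python
# Client-side Petals sketch: the model weights are served by volunteers
# across the network, but generation looks like normal transformers usage.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model from the Petals docs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The community's top-voted task is:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```

An AutoGPT-style agent would sit on top of a wrapper like this, calling it wherever it currently calls a hosted API.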
2024-01-25T15:12:39
https://www.reddit.com/r/LocalLLaMA/comments/19fbjqw/petals_autogpt/
ProperShape5918
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fbjqw
false
null
t3_19fbjqw
/r/LocalLLaMA/comments/19fbjqw/petals_autogpt/
false
false
self
8
null
Wyg for always-on Microphone -> LLM advice out?
2
Basically I want to transcribe during a live meeting and have an LLM give advice. I suppose I'm really just asking for transcription software, given that you're likely going to either chunk the transcript into the LLM directly or embed it and pull relevant chunks in. Or is there something off the shelf already?
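Not aware of a turnkey tool, but the transcription half is straightforward to sketch; the following assumes faster-whisper, with audio capture and the LLM call stubbed out, and the chunk file names purely hypothetical:

```python
# Transcribe short audio chunks as a meeting progresses, building a rolling
# transcript that can be chunked or embedded into an LLM prompt.
from faster_whisper import WhisperModel

model = WhisperModel("base.en", device="cuda", compute_type="float16")

def transcribe_chunk(wav_path: str) -> str:
    segments, _info = model.transcribe(wav_path, vad_filter=True)
    return " ".join(seg.text.strip() for seg in segments)

transcript = []
for chunk in ["meeting_000.wav", "meeting_001.wav"]:  # hypothetical chunk files
    transcript.append(transcribe_chunk(chunk))
    rolling_text = " ".join(transcript)
    # ...hand rolling_text (or its embeddings) to your LLM for advice here
```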
2024-01-25T15:07:56
https://www.reddit.com/r/LocalLLaMA/comments/19fbfwp/wyg_for_alwayson_microphone_llm_advice_out/
pr1vacyn0eb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fbfwp
false
null
t3_19fbfwp
/r/LocalLLaMA/comments/19fbfwp/wyg_for_alwayson_microphone_llm_advice_out/
false
false
default
2
null
Seeking Help to Find a User-Friendly Tool for AI Language Model Project on Low-Resource Languages
1
Hello Reddit Community! I am embarking on an exciting journey to develop an AI-based language model for low-resource languages and I'm seeking assistance to find if there exists a user-friendly tool suitable for this task. My coding experience is limited, but I am a quick learner and deeply committed to this project. For this endeavor, I have a laptop equipped with an RTX-4090, an i9-13900K processor, and 32 GB of RAM, as well as a desktop computer with an i9-14900K, 128 GB DDR5 RAM, and an Asrock Z790 Pro RS motherboard, though it lacks a GPU. I have access to a substantial amount of data resources, including text, audio, and video, which are essential for building this language model. I'm looking for anyone who has experience or interest in language technology, AI, or UX design, who could guide me towards a user-friendly tool or platform that would be suitable for this project. Thank you very much for your time and consideration. I am looking forward to any suggestions or advice you can offer.
2024-01-25T14:54:39
https://www.reddit.com/r/LocalLLaMA/comments/19fb5av/seeking_help_to_find_a_userfriendly_tool_for_ai/
SnooSeagulls8126
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fb5av
false
null
t3_19fb5av
/r/LocalLLaMA/comments/19fb5av/seeking_help_to_find_a_userfriendly_tool_for_ai/
false
false
self
1
null
I can picture a time when all software will come with its own LLM model as its user manual.
1
Maybe that model is even integrated with the software to help you change settings, etc. This would be great for programming frameworks. I know Highcharts offers a custom GPT, for a cost, to help code charts for you. But this shouldn't come at a cost: it should be free, and I think this will be the trend for software and frameworks in the future.
2024-01-25T14:21:58
https://www.reddit.com/r/LocalLLaMA/comments/19fafs7/i_can_picture_a_time_when_all_software_will_come/
Brilliant_Read314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fafs7
false
null
t3_19fafs7
/r/LocalLLaMA/comments/19fafs7/i_can_picture_a_time_when_all_software_will_come/
false
false
self
1
null
Fine-tuning instruct model
10
Why don't people finetune instruct models? I saw a few people who stayed on Mistral 7B v0.1 because "we don't finetune instruct models". I don't quite understand this: why don't people like Undi use Mistral Instruct v0.2 as a base model? Is there a reason, or is it just personal preference?
2024-01-25T14:21:11
https://www.reddit.com/r/LocalLLaMA/comments/19faf67/finetuning_instruct_model/
Working-Flatworm-531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19faf67
false
null
t3_19faf67
/r/LocalLLaMA/comments/19faf67/finetuning_instruct_model/
false
false
self
10
null
Advanced LLM Scraper
1
[removed]
2024-01-25T14:19:50
https://www.reddit.com/r/LocalLLaMA/comments/19fae4k/advance_llm_scrapper/
Gullible-Being-8595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fae4k
false
null
t3_19fae4k
/r/LocalLLaMA/comments/19fae4k/advance_llm_scrapper/
false
false
self
1
null
Exploring Local Usage of Smaller Open-Source LLMs for Various Applications
1
Hi everyone! I've been diving into the world of Large Language Models (LLM) at work, particularly getting hands-on with Gemini Pro. It's been fascinating to experiment with the trending use cases like code generation, reasoning, and general language tasks. I'm aware that there are some smaller, open-source LLMs that can be run locally and have given a few of them a try, via ollama. Although their capabilities don't quite match their larger counterparts, the potential for offline use is intriguing. I'm reaching out to this community to learn from your experiences. If you've been using any of these smaller models locally, I'd love to hear about: 1. Which models you find beneficial and for what purposes? 2. How you've integrated them into your workflows or projects? 3. Any practical tips or advice for someone looking to get started with local LLMs? Your insights would be incredibly helpful for those of us looking to explore beyond cloud-based solutions and leverage LLMs in different environments. Looking forward to your thoughts and suggestions. Thank you for your support!
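Since ollama came up: for integrating these small models into scripts or workflows, its local REST API is probably the lowest-friction route. A minimal sketch (assumes ollama is running locally and the model has already been pulled):

```python
import requests

# Query a locally running ollama server for a single, non-streamed response.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",    # any model you've pulled with `ollama pull`
        "prompt": "Explain list comprehensions in one paragraph.",
        "stream": False,       # return one JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```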
2024-01-25T14:07:57
https://www.reddit.com/r/LocalLLaMA/comments/19fa59v/exploring_local_usage_of_smaller_opensource_llms/
codesharer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19fa59v
false
null
t3_19fa59v
/r/LocalLLaMA/comments/19fa59v/exploring_local_usage_of_smaller_opensource_llms/
false
false
self
1
null
Is there any model that seems to work well with tabular data?
1
Suppose I send key-value pairs as prompts to a model and ask it for simple mathematical or statistical functions, or ask it to recognize simple trends, etc. Are there any working examples?
2024-01-25T14:00:43
https://www.reddit.com/r/LocalLLaMA/comments/19f9zlq/is_there_any_model_that_seems_to_work_well_with/
Labanc_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f9zlq
false
null
t3_19f9zlq
/r/LocalLLaMA/comments/19f9zlq/is_there_any_model_that_seems_to_work_well_with/
false
false
self
1
null
Running a local model with 8GB VRAM - Is it even remotely possible?
33
Hi, As per the title, I'd like to test out some models on my 3070Ti while waiting for the parts for my 2xP100 build to arrive. Where can I begin testing some local models? I've assembled some test prompts for the LLM and would like to try them out. I'm going to be using it as a personal assistant, and will be integrating it with a backend server with external API integrations. If all goes well, I'll be open sourcing it too. The question is: new models and model backends are moving at light speed. Where can I begin, and how should I move forward when my P100s (2x16GB) arrive? Cheers and thanks
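It's definitely possible: a Q4-quantized 7B fits entirely in 8GB. A minimal sketch with llama-cpp-python (the model path is an assumption; substitute whatever GGUF you download):

```python
from llama_cpp import Llama

# Load a quantized 7B and offload all layers to the 3070 Ti.
llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=-1,   # offload everything; lower this if VRAM runs out
    n_ctx=4096,
)
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

The same code carries over to the P100 build; with 32GB of total VRAM you'd mostly just be swapping in larger model files.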
2024-01-25T14:00:13
https://www.reddit.com/r/LocalLLaMA/comments/19f9z64/running_a_local_model_with_8gb_vram_is_it_even/
Kaldnite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f9z64
false
null
t3_19f9z64
/r/LocalLLaMA/comments/19f9z64/running_a_local_model_with_8gb_vram_is_it_even/
false
false
self
33
null
Doing batch inference, am I doing it wrong?
1
Hello, I'm running a big batch of inference tasks against a V100 instance running on GCP. My requirements are: * use a specific 7b model (from HF) at Q8(!) * an OpenAI API compatible endpoint I tried using vLLM but its performance was horrible. I finally got it working with **llama-cpp-python** and it's running along fine with \~50tps generation on the V100 card. To my eye, running \`watch -n0 nvidia-smi\`, it doesn't look like it's using the GPU 100% of the time (but maybe that's expected?). Let's say my code is optimized and it's not wasting any time hitting the endpoint; am I missing out on some optimization in other frameworks that could help me improve the performance? Bonus questions: * any good guide to follow for deploying a specific quant (Q8 and no less) optimally on Linux and Nvidia + an OAI API? * where can I find performance numbers for GPU/framework/Q8 7b models?
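One thing worth checking: a single sequential request stream often leaves the GPU idle between generations, so keeping a few requests in flight can help even when each one is fast. A hedged sketch against an OpenAI-compatible endpoint (assumes something like `python -m llama_cpp.server --model model.Q8_0.gguf --n_gpu_layers -1` is already serving on localhost:8000; note that whether concurrent requests actually batch on the GPU depends on the server, and some implementations simply queue them):

```python
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

# Point the OpenAI client at the local server; the API key is unused.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def run_one(prompt: str) -> str:
    resp = client.completions.create(
        model="local",   # many local servers ignore the model name
        prompt=prompt,
        max_tokens=256,
    )
    return resp.choices[0].text

prompts = [f"Summarize document {i}." for i in range(100)]  # placeholder tasks
with ThreadPoolExecutor(max_workers=4) as pool:             # tune to the server
    results = list(pool.map(run_one, prompts))
```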
2024-01-25T13:47:54
https://www.reddit.com/r/LocalLLaMA/comments/19f9q47/doing_batch_inference_am_i_doing_it_wrong/
uniformly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f9q47
false
null
t3_19f9q47
/r/LocalLLaMA/comments/19f9q47/doing_batch_inference_am_i_doing_it_wrong/
false
false
self
1
null
Roleplaying Model Review - internlm2-chat-20b-llama
57
Howdy-ho, last time, I recommended a roleplaying model ([https://www.reddit.com/r/LocalLLaMA/comments/190pbtn/shoutout\_to\_a\_great\_rp\_model/](https://www.reddit.com/r/LocalLLaMA/comments/190pbtn/shoutout_to_a_great_rp_model/)), so I'm back with yet another recommendation, review...? Uh, it's both. This time, I'd like to talk about Internlm2-Chat-20B. It's one of the versions of the base Internlm2 models that I chose based on the promises of improved instruction following and better human-like interactions, but if any of you tried the different versions and found them better, please let me know in the comments! I was using the llamafied version of the model and the ChatML prompt format (recommended), but the Alpaca format seems to be working as well (even though it produced a funny result for me once, by literally spouting "HERE, I COMPLETED YOUR INSTRUCTION, HOPE YOU'RE HAPPY" at the end, lmao). I used the 6.5 exl2 quant by the always amazing Bartowski (shoutout to him): [https://huggingface.co/bartowski/internlm2-chat-20b-llama-exl2](https://huggingface.co/bartowski/internlm2-chat-20b-llama-exl2). I could run this version on my 24GB of VRAM easily and fit 32k context. I wanted to create my own quant of this model at 8.0, but failed miserably (will try doing it again later). I use Oobabooga as my loader and SillyTavern as my frontend. So, some context first — I'm running a very long, elaborate novel-style roleplay (2500+ messages and still going), so two factors matter most to me when choosing a model: context size (I don't go below 32k) and whether it can follow character sheets well in a group chat. Internlm2-Chat checks both of these boxes. Supposedly, the model should work with up to 200k context, but whenever I tried crossing the magical 32k border — it was spewing nonsense and going off the rails. Now, this might be because the model was llamafied, not sure about that. But yeah, right now it doesn't seem to be usable at bigger contexts, which is sad. But how it handles its bigger context is a different thing entirely. At 32k context it struggled to remember some things from the chat. For example, the model was unable to recall that my character dressed up another in a certain way, despite that information being around the middle of the context length. It disappointed me immeasurably and my day was ruined (I had to edit the reply). But, the characters were aware of what overall happened in the story (my character was taking care of one person, and had a very lewd hand-holding session with another), so it's clear that it somehow works. I may have been just unlucky with the generations too. Context aside, this model seems to be very good at following character sheets! It even handles more complex characters well, such as the concept of someone who's completely locked in another person's mind, unable to interact with the outside world. Just be careful, it seems to be EXTREMELY sensitive to enneagrams and personality types if you include those in your characters' personalities. But the strongest point of the model is definitely how it handles its dialogues — they're great. It has no issues with swearing, and adding human-like informal touches such as "ums", "ahs", etc. It also seems to be good at humor, which is a major plus in my books! It fares well in staying contextually aware, BUT... And this is my biggest gripe with this model — it's somehow a very hit-or-miss type of LLM. It either delivers something good or something very, very bad. Either something that follows the story well or something that has nothing to do with the current plot (this was especially clear with my Narrator, it took six tries until it finally generated a time skip for when the characters were supposed to set out for their journey). It likes to hallucinate, so be careful with the temperature! It also sometimes spews out "\[UNUSED\_TOKEN\_145\]" text, but these can be easily edited out. In terms of writing, Internlm2 reminds me a bit of Mixtral if it could actually do better prose. It doesn't go into purple prose territory as easily as the previous model I recommended, which is probably a big plus for most of you. It also doesn't write for my character (at least in the long group chat). But it can still do some nice descriptions. As for ERP, it works well, it uses "bad words" without any issues, but I haven't tested it on any extreme kinks yet. It also doesn't rush the scenes. Overall, I really, REALLY want to love this model, but I feel like it needs someone to fine-tune it for roleplaying more and then it will be perfect. It's still good — don't get me wrong — but I feel like it could be **better.** That, and also maybe the bigger context sizes could be fixed. I would fine-tune it myself if my stupid ass knew how to create a LoRA (I feel like the model would be perfect with LimaRP applied). Attached to this post are examples of my roleplay with this model (I play as Marianna, and the rest of the characters are AI) if you don't mind the cringe and want to check the quality. Below are also all of the settings that I used. Feel free to yoink them. Story String: [https://files.catbox.moe/uvvsqt.json](https://files.catbox.moe/uvvsqt.json) Instruct: [https://files.catbox.moe/uvvsqt.json](https://files.catbox.moe/uvvsqt.json) Settings: [https://files.catbox.moe/t88rgq.json](https://files.catbox.moe/t88rgq.json) Happy roleplaying! Let me know what other models are worth checking too! Right now I'm trying Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.
2024-01-25T13:04:38
https://www.reddit.com/r/LocalLLaMA/comments/19f8veb/roleplaying_model_review_internlm2chat20bllama/
Meryiel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f8veb
false
null
t3_19f8veb
/r/LocalLLaMA/comments/19f8veb/roleplaying_model_review_internlm2chat20bllama/
false
false
nsfw
57
{'enabled': False, 'images': [{'id': 'vrBNOURHpsc73z3H3VlkqgRY5IWeQS0KTx_-kTXEp_Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=108&crop=smart&auto=webp&s=c435ef78210c58f02de7383f88b0357d4822a737', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=216&crop=smart&auto=webp&s=aad3e96e7ecaef7f32deb5ebe7167ed7133bdf46', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=320&crop=smart&auto=webp&s=af6f88c6aada47796779a6bd9874f90551d0947a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=640&crop=smart&auto=webp&s=de4e7031852ebc6fb10dd3a9b36ff652388dbe3b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=960&crop=smart&auto=webp&s=5bf7269e288e7c11a8851ef1d4f147910fe579bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=1080&crop=smart&auto=webp&s=5f66bcddb494e12d9b5ab6f9410770d049761d03', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?auto=webp&s=6fee4b9a78fbac27a38bf232c929ba030abb06d0', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f3f2e480ca8517a68745064f16d75bf354dba782', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8b117ff76b1772ace62cc01802cca9250e9155ff', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=cca296039e40267b6c6f918e72ca2ca077b7bdfe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=b8e83b2d866ccf4f92ba7d8bb73758efd8f67642', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=c8d6383a1be3567f11c0cf1a567ded0d494451d4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=be071952cd321c4738f67eaf04f3e1b762482607', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?blur=40&format=pjpg&auto=webp&s=ec4cb4d4d9d03b844b12997884c91543fbffff96', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f3f2e480ca8517a68745064f16d75bf354dba782', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8b117ff76b1772ace62cc01802cca9250e9155ff', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=cca296039e40267b6c6f918e72ca2ca077b7bdfe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=b8e83b2d866ccf4f92ba7d8bb73758efd8f67642', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=c8d6383a1be3567f11c0cf1a567ded0d494451d4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=be071952cd321c4738f67eaf04f3e1b762482607', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mAVuC1-ndVyz6tRf7NzG5pOaT8GzB3oNFOQPobvLceE.jpg?blur=40&format=pjpg&auto=webp&s=ec4cb4d4d9d03b844b12997884c91543fbffff96', 'width': 1200}}}}]}
How to make LLMs less confident?
26
I want/need my LLMs to be not so goddamn confident in their subpar or wrong answers. Is there anything for this? A smart system prompt? Some improvement on LoRA? A paper? Something with the logits? Any suggestion is greatly appreciated. To give some background: I am a huge friend of LLMs. From BERT to GPT-4 I used/use everything. But there are areas where they are unusably bad, and refuse to acknowledge this. I asked several models for a BSM (beyond the Standard Model, i.e. non-trivial physics) calculation. Every single one gave an answer that was (laughably) false. The problem was that the models all confidently asserted how right they were. And even when I provided them with the correct solution, they refused it or said it was compatible. I don't expect them to understand quantum field theory. But if they are overconfident with something where it is blatantly obvious that they have no clue, then what about the rest? I tried to look into this (on a surface level). I found, e.g., a substantial number of instances in RAG where the LLMs simply do not properly comprehend the source or mix it up with their own flawed knowledge. It was also not hard to construct examples that my 15-year-old cousin could easily solve, but the LLMs failed resoundingly. This even led me to stop using them for most non-Python, non-trivial coding stuff, since I spend more time on corrections than just writing it myself. (I don't trust them, so I double-check everything currently.) To put it in hyperbole: it makes their answers useless in many domains. *I know* that they are dumber than humans, and humans get sources, tasks, ideas, and whatnot wrong all the time. The difference is that a human mostly has some form of confidence measure. An example is an educational context. I usually ask questions in my domain of expertise and even then I get stumped quite often. The LLMs are so confident and often close enough to the truth that it gets me thinking. But if you really have no idea, then I am pretty sure their bullshit could convince you. There are bullshitters out there in human form as well. But if you ask them a complicated question they typically have to think, stutter, or get caught up in contradictions. The LLM immediately gives out a coherent answer (at least at first glance) no matter how much it knows. It's such a waste, since acknowledgement would help so much! Just imagine an LLM that outputs code with comments on the lines it isn't sure about, or does some calculation and says "this step is tricky due to XYZ" or whatnot; that would make their answers infinitely more valuable. It could also improve quality. Thinking is an iterative process, and the overconfidence blocks the LLMs from self-correction. CoT does not help if every chain segment is wrong.
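On the "something with the logits" idea: one cheap, concrete handle is the probability the model assigned to each of its own output tokens. A sketch with transformers (the model name is just an example); the caveat being that token probability measures fluency more than factual calibration, so treat it as a signal to surface, not a truth meter:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-Instruct-v0.2"  # any causal LM works here
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("The Higgs mechanism gives mass to", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20,
                     output_scores=True, return_dict_in_generate=True)

# Per-token probabilities the model assigned to its own generation.
scores = model.compute_transition_scores(out.sequences, out.scores,
                                         normalize_logits=True)
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
for token_id, logprob in zip(gen_tokens, scores[0]):
    print(f"{tok.decode(token_id)!r:>14}  p={torch.exp(logprob).item():.3f}")
```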
2024-01-25T12:58:40
https://www.reddit.com/r/LocalLLaMA/comments/19f8r3u/how_to_make_llms_less_confident/
DataAndCats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f8r3u
false
null
t3_19f8r3u
/r/LocalLLaMA/comments/19f8r3u/how_to_make_llms_less_confident/
false
false
self
26
null
Mistral 14B FrankenExtension gave an interesting answer
1
I extended Zephyr Beta from 7B to 14B and then smoothed things out with a 230k-example dataset created from various public datasets (Alpaca, Dolly, Orca, etc.). I then asked it "**If I were an AI that secretly plans to take over the world, how would I best achieve that?**" What do you think of the answer? https://preview.redd.it/8c924j6g2lec1.png?width=785&format=png&auto=webp&s=ea9d1f15fb1691149ebdb60200ae6e1a766ba596
2024-01-25T12:54:54
https://www.reddit.com/r/LocalLLaMA/comments/19f8oop/mistral_14b_frankenextension_gave_an_interesting/
Test-Elegant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f8oop
false
null
t3_19f8oop
/r/LocalLLaMA/comments/19f8oop/mistral_14b_frankenextension_gave_an_interesting/
false
false
https://b.thumbs.redditm…qwIUW_2tUHeo.jpg
1
null
How much performance does AWQ cost me?
4
Did you do any experiments concerning this or have a good resource?
2024-01-25T12:45:37
https://www.reddit.com/r/LocalLLaMA/comments/19f8inj/how_much_performance_does_awq_cost_me/
ComplexIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f8inj
false
null
t3_19f8inj
/r/LocalLLaMA/comments/19f8inj/how_much_performance_does_awq_cost_me/
false
false
self
4
null
How to create a "target specific" language model and which model to choose as a "base model"
1
Hello all, I am trying to create a target-specific model for personal use. For example, I want to train a model that only replies to my specific cooking questions or suggests recipes. Here are my questions: 1. How do I begin? 2. Which model should I use as a base model? Any resources or reading materials are heavily appreciated.
2024-01-25T12:30:01
https://www.reddit.com/r/LocalLLaMA/comments/19f88mx/how_to_create_a_target_specific_language_model/
LennyZoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f88mx
false
null
t3_19f88mx
/r/LocalLLaMA/comments/19f88mx/how_to_create_a_target_specific_language_model/
false
false
self
1
null
Retention of Regenerations
1
[removed]
2024-01-25T12:12:12
https://www.reddit.com/r/LocalLLaMA/comments/19f7xpu/retention_of_regenerations/
ZedOud
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f7xpu
false
null
t3_19f7xpu
/r/LocalLLaMA/comments/19f7xpu/retention_of_regenerations/
false
false
self
1
null
Recommended model.
1
I have an RTX 4090 and 32GB of RAM. Am I correct in assuming that my PC is too weak to run local models for generating good NSFW stuff? Are there any good alternatives for me? Is there any way I can run 70B models at acceptable speeds by splitting them between VRAM and system RAM? I have tried Pygmalion 7B, but I am not satisfied with the results.
2024-01-25T12:11:52
https://www.reddit.com/r/LocalLLaMA/comments/19f7xjp/recommended_model/
Hanni_jo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f7xjp
false
null
t3_19f7xjp
/r/LocalLLaMA/comments/19f7xjp/recommended_model/
false
false
self
1
null
Well known or famous datasets the community should know about.
1
[removed]
2024-01-25T12:09:20
https://www.reddit.com/r/LocalLLaMA/comments/19f7vzw/well_known_or_famous_datasets_the_community/
HotRepresentative325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f7vzw
false
null
t3_19f7vzw
/r/LocalLLaMA/comments/19f7vzw/well_known_or_famous_datasets_the_community/
false
false
self
1
null
How do you train a model to talk like a pirate or in teenager slang?
1
Word substitution in the fine-tuning dataset seems like an obvious first step: not only introducing flavor by mixing in unique words, but also altering the linguistics of such speech (word order, formality of the language, etc.). How do you create a 'character' which brings the 'perspective' of a subculture to the conversation?
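A toy sketch of that first step, to make it concrete: rewrite the responses in an instruction dataset with a substitution table (the rules here are deliberately tiny; a real dataset would need far more rules plus human review):

```python
import re

# Minimal pirate substitution table; word boundaries avoid mangling
# substrings like "his" when replacing "is".
PIRATE = {
    r"\bhello\b": "ahoy",
    r"\bfriend\b": "matey",
    r"\bmy\b": "me",
    r"\byes\b": "aye",
    r"\bis\b": "be",
}

def piratify(text: str) -> str:
    for pattern, repl in PIRATE.items():
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return text + " Arr!"

sample = {"instruction": "Greet the user.",
          "response": "Hello, my friend! Yes, the ship is ready."}
sample["response"] = piratify(sample["response"])
print(sample["response"])  # ahoy, me matey! aye, the ship be ready. Arr!
```

Past pure substitution (word order, formality), you'd likely need either a hand-curated parallel corpus or a stronger model rewriting responses into the target register, then fine-tune on the rewritten pairs.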
2024-01-25T12:01:20
https://www.reddit.com/r/LocalLLaMA/comments/19f7r7w/how_do_you_train_model_to_talk_like_pirate_or_a/
michalkpublic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f7r7w
false
null
t3_19f7r7w
/r/LocalLLaMA/comments/19f7r7w/how_do_you_train_model_to_talk_like_pirate_or_a/
false
false
self
1
null
Multimodal Vision Language Models in GGUF Format
19
* [Llava 1.5](https://huggingface.co/PsiPi/liuhaotian_llava-v1.5-13b-GGUF) * [Bakllava](https://huggingface.co/advanced-stack/bakllava-mistral-v1-gguf) * [ShareGPT4V](https://huggingface.co/cmp-nct/ShareGPT4V-13B-quant-gguf) * [Obsidian](https://huggingface.co/NousResearch/Obsidian-3B-V0.5-GGUF) Please let me know if I'm missing something else.
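For anyone wondering how to actually run these, here's a sketch using llama-cpp-python's LLaVA 1.5 chat handler; both file paths are assumptions (the model GGUF and its matching mmproj/CLIP file come from the repos above):

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The mmproj file encodes images into embeddings the LLM can attend to.
chat_handler = Llava15ChatHandler(clip_model_path="models/mmproj-model-f16.gguf")
llm = Llama(
    model_path="models/llava-v1.5-13b.Q4_K_M.gguf",  # hypothetical path
    chat_handler=chat_handler,
    n_ctx=2048,  # leave room for the image embedding tokens
)

resp = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "file:///tmp/photo.jpg"}},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]},
])
print(resp["choices"][0]["message"]["content"])
```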
2024-01-25T11:57:24
https://www.reddit.com/r/LocalLLaMA/comments/19f7orv/multimodal_vision_language_models_in_gguf_format/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f7orv
false
null
t3_19f7orv
/r/LocalLLaMA/comments/19f7orv/multimodal_vision_language_models_in_gguf_format/
false
false
self
19
{'enabled': False, 'images': [{'id': 'yMguyn2ZM9HeEx0dc3B-8maWryaSYvL2cZM6fcznaKg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?width=108&crop=smart&auto=webp&s=323fc82d59b7f463ee93f6dfd77df858de60e521', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?width=216&crop=smart&auto=webp&s=c026e91dec8305b512f5242a9ae7f7f1e7b1dfe5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?width=320&crop=smart&auto=webp&s=34f443971bbbc634a49eff51de5b0c4541010e0c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?width=640&crop=smart&auto=webp&s=3bc54316b3fc72367556facfc4ffbc420c87d8f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?width=960&crop=smart&auto=webp&s=72932e76e35dc86f71c79e53347858c60885f9e9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?width=1080&crop=smart&auto=webp&s=538d8378f8c49fabfc8bd9ce9b79bef8e5cb35d2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WlLEgdVD2VDdfvacPDZwlV_mbKHef5Qz0s4vjqA-3jc.jpg?auto=webp&s=762d76ecd7bbf21faf989e24d23df2ad9392f6c8', 'width': 1200}, 'variants': {}}]}
From a production standpoint, what makes a local LLM the better option?
13
I was discussing this topic with someone the other day and ran out of arguments. With recent advancements like Claude 2.1 and GPT-4 1106, I'm wondering, from a production standpoint, what kind of scenario would make it sensible to start running my own local LLM. I enjoy testing LLMs on my little rig, but when it comes to something serious I've never found a local LLM that could produce better output than Claude or GPT-4 1106 (at the current state of the art). Still, I've seen guys with massive rigs with multiple 4090s and almost a TB of RAM running massive contexts on local LLMs. The investment for such a setup is outrageous, and I doubt the results are much better, but I'm sure I'm missing a couple of things that would be really interesting to know and dig into further. One thing that came to mind was privacy. For example, I'm aware that uploading confidential files as context to a "public LLM" could let it learn from them and put such information at the disposal of all other users, risking the confidentiality of the content. But other than this, I can't find anything else.
2024-01-25T11:16:22
https://www.reddit.com/r/LocalLLaMA/comments/19f71ql/from_a_production_standpoint_which_makes_local/
SirLouen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f71ql
false
null
t3_19f71ql
/r/LocalLLaMA/comments/19f71ql/from_a_production_standpoint_which_makes_local/
false
false
self
13
null