title: stringlengths 1 .. 300
score: int64 0 .. 8.54k
selftext: stringlengths 0 .. 41.5k
created: timestamp[ns]date 2023-04-01 04:30:41 .. 2026-03-04 02:14:14
url: stringlengths 0 .. 878
author: stringlengths 3 .. 20
domain: stringlengths 0 .. 82
edited: timestamp[ns]date 1970-01-01 00:00:00 .. 2026-02-19 14:51:53
gilded: int64 0 .. 2
gildings: stringclasses, 7 values
id: stringlengths 7 .. 7
locked: bool, 2 classes
media: stringlengths 646 .. 1.8k
name: stringlengths 10 .. 10
permalink: stringlengths 33 .. 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: stringlengths 4 .. 213
ups: int64 0 .. 8.54k
preview: stringlengths 301 .. 5.01k
QuIP# - state of the art 2 bit quantization. Run 70b models on a single 3090 with near FP16 performance
254
From Albert Tseng on Twitter: “Introducing QuIP#, a new SOTA LLM quantization method that uses incoherence processing from QuIP & lattices to achieve 2 bit LLMs with near-fp16 performance! Now you can run LLaMA 2 70B on a 24G GPU w/out offloading!”
2023-12-09T00:09:42
https://cornell-relaxml.github.io/quip-sharp/
PookaMacPhellimen
cornell-relaxml.github.io
1970-01-01T00:00:00
0
{}
18e0mlp
false
null
t3_18e0mlp
/r/LocalLLaMA/comments/18e0mlp/quip_state_of_the_art_2_bit_quantization_run_70b/
false
false
default
254
null
Allen AI releases DPO fine tune of Llama 2
13
Blog post about it: [https://blog.allenai.org/camels-in-a-changing-climate-enhancing-lm-adaptation-with-tulu-2-0eb692698a78](https://blog.allenai.org/camels-in-a-changing-climate-enhancing-lm-adaptation-with-tulu-2-0eb692698a78)
HF: [https://huggingface.co/allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)
Codebase: [https://github.com/hamishivi/EasyLM/tree/main](https://github.com/hamishivi/EasyLM/tree/main)
2023-12-09T00:08:04
https://www.reddit.com/r/LocalLLaMA/comments/18e0lgc/allen_ai_releases_dpo_fine_tune_of_llama_2/
WaterdanceAC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18e0lgc
false
null
t3_18e0lgc
/r/LocalLLaMA/comments/18e0lgc/allen_ai_releases_dpo_fine_tune_of_llama_2/
false
false
self
13
{'enabled': False, 'images': [{'id': 'PUBN9HjWZrtXEwnvt6LfAqEe8mLFNkXd8qo80HBxLe4', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?width=108&crop=smart&auto=webp&s=3d962e577db3ae650de99ace51ee08d275edbbcb', 'width': 108}, {'height': 66, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?width=216&crop=smart&auto=webp&s=7aafc8b020ecb6fcd568e2bcf267029b61776bc5', 'width': 216}, {'height': 98, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?width=320&crop=smart&auto=webp&s=45bf7ddc8acc78643e8d15d3f0a2e83df77fe6a2', 'width': 320}, {'height': 197, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?width=640&crop=smart&auto=webp&s=b93bd03a92953c50a2e33a1dd975532e6a762bfc', 'width': 640}, {'height': 296, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?width=960&crop=smart&auto=webp&s=0de47a2710bce17d695a634d1a84d992e1605ccf', 'width': 960}, {'height': 333, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?width=1080&crop=smart&auto=webp&s=80d31b81b52d7cdfd274c94f0a0c8cc7841da1a5', 'width': 1080}], 'source': {'height': 371, 'url': 'https://external-preview.redd.it/Sj3S-ZxcP2w4lPWurn1n1XmkYuJ84wuwRKadmvcxIFs.jpg?auto=webp&s=8034584bd0f1665ecbede21bbfcabac00e121536', 'width': 1200}, 'variants': {}}]}
2 7900xtx vs 1 4090
1
[removed]
2023-12-09T00:03:44
https://www.reddit.com/r/LocalLLaMA/comments/18e0iaj/2_7900xtx_vs_1_4090/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18e0iaj
false
null
t3_18e0iaj
/r/LocalLLaMA/comments/18e0iaj/2_7900xtx_vs_1_4090/
false
false
self
1
null
I found this video (from Together.ai) with less than 100 views that shows distributed training is possible on consumer high-speed internet with their method. Really surprised this hasn't gotten more attention. What's the catch?
40
2023-12-08T23:21:54
https://i.redd.it/f5vsz0l9m55c1.png
jd_3d
i.redd.it
1970-01-01T00:00:00
0
{}
18dzn20
false
null
t3_18dzn20
/r/LocalLLaMA/comments/18dzn20/i_found_this_video_from_togetherai_with_less_than/
false
false
https://b.thumbs.redditm…LUgl-PVxVaAY.jpg
40
{'enabled': True, 'images': [{'id': 'Pe90jQBNXRB8CDwg25odPZtSMcli4qspHo0-lDcv2_g', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/f5vsz0l9m55c1.png?width=108&crop=smart&auto=webp&s=27b8b47a7da2bef277a3f0ea0229c6348af546f6', 'width': 108}, {'height': 282, 'url': 'https://preview.redd.it/f5vsz0l9m55c1.png?width=216&crop=smart&auto=webp&s=3486728bf747ba6e978943ab7f6ea5a263c129b4', 'width': 216}, {'height': 418, 'url': 'https://preview.redd.it/f5vsz0l9m55c1.png?width=320&crop=smart&auto=webp&s=12295bb0211ad0f1ff91aea5ab4e90f442f2df37', 'width': 320}], 'source': {'height': 519, 'url': 'https://preview.redd.it/f5vsz0l9m55c1.png?auto=webp&s=2cef166e4e98a53817331f26e08a693d0cdc9bb8', 'width': 397}, 'variants': {}}]}
Open Source Function Calling with Intel's LLM in Javascript
6
2023-12-08T22:12:55
https://github.com/stellar-amenities/assistants/blob/main/examples/hello-world-intel-neural-chat-nodejs-function-calling/README.md
louis3195
github.com
1970-01-01T00:00:00
0
{}
18dy5i5
false
null
t3_18dy5i5
/r/LocalLLaMA/comments/18dy5i5/open_source_function_calling_with_intels_llm_in/
false
false
https://b.thumbs.redditm…7_vDWg9YdbAQ.jpg
6
{'enabled': False, 'images': [{'id': 'gg0hWl8HDk6VcDNGQS4GV48X7okHwf2yp8BTFPXJEwA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?width=108&crop=smart&auto=webp&s=ee94ac78b43bd41963535f3da51f9893a91666f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?width=216&crop=smart&auto=webp&s=1b131ac7311b817379b1dce64e703ce91ba853ae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?width=320&crop=smart&auto=webp&s=7d9d1fd6d43e5d4e0e5ef2b66de81acb2b09048b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?width=640&crop=smart&auto=webp&s=1d23673e789026a2602b7dde2834f66e7c961d52', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?width=960&crop=smart&auto=webp&s=7da3358bda98c2468c5771b8f13c851f62ecfe75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?width=1080&crop=smart&auto=webp&s=1c36e50ebb5af743adc96079c1b0592b2bc13612', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XRKxy1LYTlrjgYvgvjcQaLNpYNqzap0p15HOU4qvgk0.jpg?auto=webp&s=3a266cf7f5a3e353b0f4900d173b8e49d93ab931', 'width': 1200}, 'variants': {}}]}
Looking to build a new system for local AI model/experimentation and possibly training
2
Hi all,

As a developer who (maybe stupidly) worries my job may not be around much longer, I am trying to grasp, learn, and use AI as quickly as possible. It's crazy that ChatGPT came out only a year ago and very few people had any idea; a few months later people were using it for fun things, and only about six months ago I started to see more and more "apps" around AI being talked about. Now, judging by AMD's CEO's talk around their new GPU launch, AI is the most disruptive thing we've ever experienced, pushing past the internet (or right on par with it), more so than cars, planes, etc. Which is both scary and exciting.

So, with that in mind, and assuming I have the money to do this, I am thinking about how to build a usable yet "affordable" (hold your thoughts on that for a sec) home workstation capable of using larger models for experimentation (e.g. trying out API calls against running models for things like autogen, codegen, etc.). But I'd also like, if possible, to do some very small custom training. I am not sure about the hardware and costs needed to train models like the 70B DeepSeek Coder or the ones Meta has released recently; I have to imagine those are still crazy expensive, which is why I find it fascinating we get them for free given the cost to train.

As I don't know much yet about training: can a single workstation with two 4090 GPUs (so 48GB of VRAM) be used to train a small model over a day or two into some sort of useful working model that you can chat with? Or is that way, WAY too little hardware, and would it take months or longer to train even a tiny 6B or 13B model?

As for hardware, I am looking to build a 32-core Threadripper 7970X, 128GB or 256GB of RAM (depending on cost), a couple of 4TB NVMe SSDs, and whatever next-gen GPU comes out in a few months or so. For now I'll stick with my 6900 XT. I'm not sure if AMD or NVIDIA will have their next generations with more VRAM out by then, and even if they do I imagine they will cost even more. But given the new TRX50 platform with a PCIe 5 bus, and though I can't even fathom spending 5K or more on a GPU, would any of the more capable GPUs like the H100 or A100, or AMD's offering, run on a motherboard with PCIe 5? Or do they plug into a completely different motherboard/system?

This would be my new main system, and I'd like to use it for gaming as well. I know it's not "as good" as a 7800 CPU for gaming, but I currently have a 5800X3D, so I imagine it would be on par with that, just with better GPUs. Anyway, gaming is not my main thing; I can always keep my gaming rig for that and put the 6900 XT back into it once I get a next-gen GPU.

Just wanted to know if that sort of hardware would provide a pretty decent AI model runtime, and possibly support training so I can learn how to train. Thanks.
2023-12-08T21:58:37
https://www.reddit.com/r/LocalLLaMA/comments/18dxub5/looking_to_build_a_new_system_for_local_ai/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dxub5
false
null
t3_18dxub5
/r/LocalLLaMA/comments/18dxub5/looking_to_build_a_new_system_for_local_ai/
false
false
self
2
null
Issues with the starter code for codellama
2
I'm using the following code, which I got from the repo, to test out Code Llama locally:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "codellama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

def generate(prompt):
    sequences = pipeline(
        prompt,
        do_sample=True,
        top_k=10,
        temperature=0.1,
        top_p=0.95,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        max_length=200,
    )
    output = []
    for seq in sequences:
        output.append(seq['generated_text'])
    return ''.join(output).encode('utf-8')
```

Is there a reason this code fails to answer a simple prompt like this one?

prompt = '''
def add_two(a, b):
'''

Generation:

```
def add_two(a, b):
    return a + b

def add_three(a, b, c):
    return a + b + c

def add_four(a, b, c, d):
    return a + b + c + d

def add_five(a, b, c, d, e):
    return a + b + c + d + e

def add_six(a, b, c, d, e, f):
    return a + b + c + d + e + f

def add_seven(a, b, c, d, e, f, g):
    return a + b + c + d + e + f + g

def add_eight(a, b,
```

It seems like it's not able to produce the EOS token properly? Not sure if I'm missing something.
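An editorial aside (not part of the original post): one likely factor is that the `-Instruct-hf` checkpoint expects Llama-2-style `[INST] ... [/INST]` turns, so a bare code prefix is treated as text to keep completing until `max_length` runs out; the base and Python variants are the ones intended for plain code completion. A hedged sketch reusing the `generate` helper above (the prompt text is just an example):

```python
# Hypothetical usage of the generate() helper defined above. The tokenizer adds
# the leading <s> token itself, so only the [INST] wrapper is written here.
instruct_prompt = "[INST] Write a Python function add_two(a, b) that returns the sum of a and b. [/INST]"
print(generate(instruct_prompt))
```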
2023-12-08T21:55:02
https://www.reddit.com/r/LocalLLaMA/comments/18dxrn2/issues_with_the_starter_code_for_codellama/
FinePlant17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dxrn2
false
null
t3_18dxrn2
/r/LocalLLaMA/comments/18dxrn2/issues_with_the_starter_code_for_codellama/
false
false
self
2
null
How to fine-tune Llama-2?
2
I'm looking to fine-tune a Llama base model, but I'm not clear on the best way to do it without a graphics card. I have a MacBook Pro, but when I tried before it took far too long for a few thousand lines of training data. I've seen Gradient AI on YouTube, but wasn't sure if there were others. Curious what the best way to fine-tune a model is?
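One common approach, sketched here as an editorial aside rather than a definitive answer: parameter-efficient LoRA fine-tuning with Hugging Face transformers + peft, so only small adapter matrices are trained. The model id, dataset, and hyperparameters below are placeholders; without a discrete GPU this is still slow, which is why many people rent a cloud GPU for anything beyond toy runs.

```python
# Minimal LoRA fine-tuning sketch. Assumptions: placeholder model id and a
# pre-tokenized dataset `train_ds` with input_ids/labels already prepared.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"            # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)          # only the LoRA adapter weights train

args = TrainingArguments(output_dir="llama2-lora", num_train_epochs=1,
                         per_device_train_batch_size=1, gradient_accumulation_steps=8,
                         learning_rate=2e-4, logging_steps=10)
Trainer(model=model, args=args, train_dataset=train_ds).train()
model.save_pretrained("llama2-lora")         # saves only the small adapter
```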
2023-12-08T21:53:14
https://www.reddit.com/r/LocalLLaMA/comments/18dxq80/how_to_finetune_llama2/
TuringTestTom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dxq80
false
null
t3_18dxq80
/r/LocalLLaMA/comments/18dxq80/how_to_finetune_llama2/
false
false
self
2
null
What's 2x Tesla P40 performance like for 70B models, vs 1?
1
Wondering if anyone has info on how well 2x Tesla P40 cards can handle larger 70B models quantized enough to fit in 48GB of VRAM. Would the SLI bridge help at all?

I've currently got one card and found it's great for 30B models but too small for 70B ones. Wondering how much a second card would help, given that the model would still be split up between two cards.

Here's some rough data about how it performs. Using a single P40, this is roughly what I see with a random model. I just started KoboldCPP's scenarios and let it talk to itself for a few minutes to get these timings. They seem to show 1-2 tokens per second for 70B and about 10 for 30B models that fully fit in VRAM.

|Runtime|Model size|Layers|
|:-|:-|:-|
|ContextLimit: 216/8192, Processing:8.94s (46.3ms/T), Generation:16.01s (696.1ms/T), Total:24.95s (0.92T/s)|70B Q4|40/83|
|ContextLimit: 209/8192, Processing:0.87s (872.0ms/T), Generation:11.13s (742.2ms/T), Total:12.00s (1.25T/s)|70B Q4|40/83|
|ContextLimit: 200/8192, Processing:0.74s (742.0ms/T), Generation:4.54s (756.2ms/T), Total:5.28s (1.14T/s)|70B Q4|40/83|
|ContextLimit: 203/1600, Processing:7.89s (49.3ms/T), Generation:7.38s (671.2ms/T), Total:15.28s (0.72T/s)|70B Q4|40/83|
|ContextLimit: 203/1600, Processing:0.80s (803.0ms/T), Generation:2.34s (585.5ms/T), Total:3.15s (1.27T/s)|70B Q4|40/83|
|ContextLimit: 312/1600, Processing:0.76s (761.0ms/T), Generation:92.59s (771.6ms/T), Total:93.35s (1.29T/s)|70B Q4|40/83|
|ContextLimit: 428/1600, Processing:1.19s (1185.0ms/T), Generation:92.32s (769.3ms/T), Total:93.50s (1.28T/s)|70B Q4|40/83|
|ContextLimit: 545/1600, Processing:0.91s (909.0ms/T), Generation:85.12s (709.3ms/T), Total:86.03s (1.39T/s)|70B Q4|40/83|
|ContextLimit: 276/8192, Processing:1.20s (6.2ms/T), Generation:9.36s (112.8ms/T), Total:10.55s (7.86T/s)|34B Q4|51/51|
|ContextLimit: 239/8192, Processing:0.24s (239.0ms/T), Generation:5.08s (110.4ms/T), Total:5.32s (8.65T/s)|34B Q4|51/51|
|ContextLimit: 236/8192, Processing:0.29s (290.0ms/T), Generation:0.00s (4.0ms/T), Total:0.29s (3.40T/s)|34B Q4|51/51|
|ContextLimit: 239/8192, Processing:0.11s (114.0ms/T), Generation:0.34s (85.5ms/T), Total:0.46s (8.77T/s)|34B Q4|51/51|
|ContextLimit: 239/8192, Processing:0.11s (113.0ms/T), Generation:0.34s (85.2ms/T), Total:0.45s (8.81T/s)|34B Q4|51/51|
|ContextLimit: 313/8192, Processing:0.11s (111.0ms/T), Generation:13.44s (112.0ms/T), Total:13.55s (8.85T/s)|34B Q4|51/51|
|ContextLimit: 433/8192, Processing:0.21s (212.0ms/T), Generation:13.60s (113.3ms/T), Total:13.81s (8.69T/s)|34B Q4|51/51|
|ContextLimit: 542/8192, Processing:0.11s (114.0ms/T), Generation:13.71s (114.3ms/T), Total:13.83s (8.68T/s)|34B Q4|51/51|
|ContextLimit: 653/8192, Processing:0.29s (289.0ms/T), Generation:13.88s (115.7ms/T), Total:14.17s (8.47T/s)|34B Q4|51/51|
|ContextLimit: 759/8192, Processing:0.12s (117.0ms/T), Generation:14.00s (116.7ms/T), Total:14.12s (8.50T/s)|34B Q4|51/51|
|ContextLimit: 864/8192, Processing:0.20s (203.0ms/T), Generation:14.13s (117.8ms/T), Total:14.34s (8.37T/s)|34B Q4|51/51|
|ContextLimit: 983/8192, Processing:0.12s (120.0ms/T), Generation:14.27s (118.9ms/T), Total:14.39s (8.34T/s)|34B Q4|51/51|
|ContextLimit: 1100/8192, Processing:0.23s (227.0ms/T), Generation:14.40s (120.0ms/T), Total:14.63s (8.20T/s)|34B Q4|51/51|
|ContextLimit: 1217/8192, Processing:0.12s (123.0ms/T), Generation:14.58s (121.5ms/T), Total:14.70s (8.16T/s)|34B Q4|51/51|
2023-12-08T21:31:09
https://www.reddit.com/r/LocalLLaMA/comments/18dx91w/whats_2x_teslsa_p40_performance_like_for_70b/
Rich_Examination_637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dx91w
false
null
t3_18dx91w
/r/LocalLLaMA/comments/18dx91w/whats_2x_teslsa_p40_performance_like_for_70b/
false
false
self
1
null
EAGLE: Fast LLM decoding (faster than Medusa and Lookahead)
18
EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) is a new baseline for fast decoding of Large Language Models (LLMs) with provable performance maintenance. This approach involves extrapolating the second-top-layer contextual feature vectors of LLMs, enabling a significant boost in generation efficiency.

In summary, EAGLE is:

* **3x** faster than vanilla decoding (13B).
* **2x** faster than Lookahead (13B).
* **1.6x** faster than Medusa (13B).
* provably consistent with vanilla decoding in the distribution of generated texts.
* trainable (within 1-2 days) and testable on 8x RTX 3090 GPUs. So even the GPU poor can afford it.
* combinable with other parallel techniques such as vLLM, Mamba, FlashAttention, quantization, and hardware optimization.

Playground: [https://46019f81d5df5243a2.gradio.live](https://t.co/OnnyB77sj4)
Blog: [https://sites.google.com/view/eagle-llm](https://t.co/luV6UoCy7b)
Code: [https://github.com/SafeAILab/EAGLE](https://t.co/Hnhnwb9iJ3)

[Decoding comparison](https://reddit.com/link/18dx5y4/video/7zk6ukgwgc5c1/player)
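For orientation, an editorial sketch rather than EAGLE's released code: EAGLE belongs to the draft-and-verify family of decoders. The toy loop below shows the generic greedy variant of that idea with hypothetical `draft_step` and `target_logits` callables; EAGLE itself drafts by extrapolating second-top-layer features instead of running a separate draft model.

```python
# Minimal greedy draft-and-verify decoding loop (illustration only).
# draft_step(ids) -> int token id from a cheap proposer.
# target_logits(ids) -> one row of next-token logits per input position (full model).
def draft_and_verify(prompt_ids, draft_step, target_logits, k=4, max_new=64, eos_id=2):
    ids = list(prompt_ids)
    while len(ids) - len(prompt_ids) < max_new:
        # 1. The cheap draft proposes k candidate tokens autoregressively.
        draft, ctx = [], list(ids)
        for _ in range(k):
            t = draft_step(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. One batched pass of the full model scores prompt + draft;
        #    row i predicts the token that follows position i.
        logits = target_logits(ids + draft)
        greedy = [max(range(len(row)), key=row.__getitem__)
                  for row in logits[len(ids) - 1 : len(ids) - 1 + k]]
        # 3. Accept the longest prefix where the draft matches the target's greedy
        #    choice, then take one corrected token from the target, so the output
        #    equals what plain greedy decoding of the full model would produce.
        n_ok = 0
        while n_ok < k and draft[n_ok] == greedy[n_ok]:
            n_ok += 1
        accepted = draft[:n_ok] if n_ok == k else draft[:n_ok] + [greedy[n_ok]]
        ids.extend(accepted)
        if eos_id in accepted:
            break
    return ids
```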
2023-12-08T21:27:15
https://www.reddit.com/r/LocalLLaMA/comments/18dx5y4/eagle_fast_llm_decoding_faster_than_meduca_and/
logical_empiricist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dx5y4
false
null
t3_18dx5y4
/r/LocalLLaMA/comments/18dx5y4/eagle_fast_llm_decoding_faster_than_meduca_and/
false
false
self
18
null
This week has to be one of the biggest moments in ai
50
Not just compared to other weeks, but out of the entire history of AI, this has to be one of the biggest: Gemini, Mistral, Together AI, AI videos, and so much more. What a week!
2023-12-08T21:10:19
https://www.reddit.com/r/LocalLLaMA/comments/18dws51/this_week_has_to_be_one_of_the_biggest_moments_in/
bull_shit123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dws51
false
null
t3_18dws51
/r/LocalLLaMA/comments/18dws51/this_week_has_to_be_one_of_the_biggest_moments_in/
false
false
self
50
null
Code to run Mistral - mixtral-8x7b-32kseqlen
34
[https://github.com/dzhulgakov/llama-mistral](https://github.com/dzhulgakov/llama-mistral)

You need 2 x 80GB cards to load it.
2023-12-08T21:02:42
https://www.reddit.com/r/LocalLLaMA/comments/18dwm51/code_to_run_mistral_mixtral8x7b32kseqlen/
No-Link-2778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dwm51
false
null
t3_18dwm51
/r/LocalLLaMA/comments/18dwm51/code_to_run_mistral_mixtral8x7b32kseqlen/
false
false
self
34
{'enabled': False, 'images': [{'id': 'JP7zmVCsQPQYz57aBq_a-DlTY8kOj7FyS1rsO54bbv4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?width=108&crop=smart&auto=webp&s=6ac54790c38010e19e6b5001882ac1103017bfc9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?width=216&crop=smart&auto=webp&s=cd704630158d1c86ef9769b8a713a5be491cff92', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?width=320&crop=smart&auto=webp&s=67fc2861adb087c9bba009013bc07187f47ee28b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?width=640&crop=smart&auto=webp&s=1403251ae642e275bcc896947b1c4acdd4f3cc58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?width=960&crop=smart&auto=webp&s=5be966633448cbb6ecc909d77c3996cb3650b682', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?width=1080&crop=smart&auto=webp&s=95ba552329ea0e699a632174a3b00fce424d08bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NP8lfpJ2jYBX8IoYJ0oZZ8_Pnt4CTgESBqqRa_7hyhE.jpg?auto=webp&s=d510d2b0ad69b30af5a09c8b8ac331adc5b0c689', 'width': 1200}, 'variants': {}}]}
Mixtral is not 56B
50
Stop saying that it is. The 56B estimate is based on a very rough 7 * 8 = 56. This is wrong because the qkv weights are shared between experts and only the MLP has different experts. The real parameter count is 46702792704 (~46.7B).
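An editorial back-of-envelope check of that figure, using the shape metadata quoted in the "Mixtral 8x7B is a scaled-down GPT-4" post further down this list; the Llama-style layer layout, RMSNorms, and untied embeddings are assumptions:

```python
# Rough parameter count for Mixtral 8x7B from the released shape metadata.
# Assumed block layout: q/k/v/o attention (grouped-query), one SwiGLU MLP per
# expert, a per-layer router, two RMSNorms per layer, untied embeddings + lm_head.
dim, n_layers, hidden_dim = 4096, 32, 14336
n_heads, n_kv_heads, head_dim = 32, 8, 128
vocab_size, n_experts, experts_per_tok = 32000, 8, 2

attn = dim * n_heads * head_dim * 2           # q and o projections
attn += dim * n_kv_heads * head_dim * 2       # k and v projections
mlp_expert = 3 * dim * hidden_dim             # gate, up, down projections
router = dim * n_experts
norms = 2 * dim
per_layer = attn + n_experts * mlp_expert + router + norms
embeddings = 2 * vocab_size * dim + dim       # token embeddings, lm_head, final norm

total = n_layers * per_layer + embeddings
active = n_layers * (attn + experts_per_tok * mlp_expert + router + norms) + embeddings
print(f"total:  {total / 1e9:.2f}B")    # ~46.70B, in line with the count above
print(f"active: {active / 1e9:.2f}B")   # ~12.88B touched per token (2 of 8 experts)
```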
2023-12-08T20:51:22
https://www.reddit.com/r/LocalLLaMA/comments/18dwcz2/mixtral_is_not_56b/
Someone13574
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dwcz2
false
null
t3_18dwcz2
/r/LocalLLaMA/comments/18dwcz2/mixtral_is_not_56b/
false
false
self
50
null
AI on an Android phone?
3
Is getting AI such as Mistral 7B or Stable Diffusion on a phone worth it? I want to test whether my phone can handle AI, but I'm afraid it might overheat or drain the battery. The thing is, I have no idea how to get said AIs on my phone. I know how to get them running on a PC, but not on an Android phone, and all the methods seem extremely hard in some way. I think my phone might be considered lower end, but I'm not sure.
2023-12-08T20:41:38
https://www.reddit.com/r/LocalLLaMA/comments/18dw5df/ai_on_a_android_phone/
Gaming-invisibleman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dw5df
false
null
t3_18dw5df
/r/LocalLLaMA/comments/18dw5df/ai_on_a_android_phone/
false
false
self
3
null
Does the speed with which you generate responses determine their quality?
1
I was trying to see the difference between Zephyr_Beta_7B_Q5_K_M and ..Q8_0 for short code snippets. The quality of the output from the Q5 model was far superior to the Q8. Additionally, I tried to compare CodeLlama_13B_Instruct against Zephyr_7B and had the same experience: I got two pages of quality code and explanations from the 7B model, whereas the CodeLlama version was not only wrong but also only 9 lines of code. I am using LM Studio and am new to running local models, so sorry if this is a noob question. I'd appreciate any advice or tips on how to correctly choose models that will fit the performance of my machine. I have an i7 9700F at 3GHz, a GeForce 1660 Ti, and 32GB of RAM. Thanks for the help.
2023-12-08T20:37:52
https://www.reddit.com/r/LocalLLaMA/comments/18dw2fl/does_the_speed_with_which_you_generate_responses/
kingmatuba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dw2fl
false
null
t3_18dw2fl
/r/LocalLLaMA/comments/18dw2fl/does_the_speed_with_which_you_generate_responses/
false
false
self
1
null
TogetherCompute releases StripedHyena 7B
126
2023-12-08T20:11:58
https://twitter.com/togethercompute/status/1733213267185762411
TeamPupNSudz
twitter.com
1970-01-01T00:00:00
0
{}
18dvhld
false
{'oembed': {'author_name': 'Together AI', 'author_url': 'https://twitter.com/togethercompute', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Announcing StripedHyena 7B —\xa0an open source model using an architecture that goes beyond Transformers achieving faster performance and longer context.<br><br>It builds on the lessons learned in past year designing efficient sequence modeling architectures.<a href="https://t.co/UGLnfz0Dma">https://t.co/UGLnfz0Dma</a> <a href="https://t.co/fDZc8XyyQH">pic.twitter.com/fDZc8XyyQH</a></p>&mdash; Together AI (@togethercompute) <a href="https://twitter.com/togethercompute/status/1733213267185762411?ref_src=twsrc%5Etfw">December 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/togethercompute/status/1733213267185762411', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_18dvhld
/r/LocalLLaMA/comments/18dvhld/togethercompute_releases_stripedhyena_7b/
false
false
https://b.thumbs.redditm…dv8mjYmG9BUI.jpg
126
{'enabled': False, 'images': [{'id': 'zSrsLGWobdDXLQ7AOIzK-iMm1Be480sWm4D93sQ0PmE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/TGBYZmfjmMGlocX7h6O-gxd_fqvNQ0PD1hWocQKh0NM.jpg?width=108&crop=smart&auto=webp&s=02df232656ece7653d2a76973aaf7368b7bc6878', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/TGBYZmfjmMGlocX7h6O-gxd_fqvNQ0PD1hWocQKh0NM.jpg?auto=webp&s=92b99b59556f3379226f40bb4e67822d96e67d1f', 'width': 140}, 'variants': {}}]}
Cerebras releases DPO fine tuned model
20
I know this will likely get lost under all the Mixtral news (at least for now), but Cerebras has a new blog post up about DPO (direct preference optimization) fine tuning of models [https://www.cerebras.net/blog/fine-tuning-language-models-using-direct-preference-optimization](https://www.cerebras.net/blog/fine-tuning-language-models-using-direct-preference-optimization) with a DPO fine tuned chat model [https://huggingface.co/cerebras/btlm-3b-8k-chat](https://huggingface.co/cerebras/btlm-3b-8k-chat)
2023-12-08T19:55:21
https://www.reddit.com/r/LocalLLaMA/comments/18dv3nf/cerebras_releases_dpo_fine_tuned_model/
WaterdanceAC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dv3nf
false
null
t3_18dv3nf
/r/LocalLLaMA/comments/18dv3nf/cerebras_releases_dpo_fine_tuned_model/
false
false
self
20
{'enabled': False, 'images': [{'id': 'Hq1p5U-WSGlHyYI5C_RtXaDC1Jf5TxTyG9y3bya6mac', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/n52TjLRS4s-MUBwMagSIny8LVKQg9xDI5HPbQcCXWvk.jpg?width=108&crop=smart&auto=webp&s=7370652043c456d22f21bf090f37595b7a8c5b41', 'width': 108}, {'height': 173, 'url': 'https://external-preview.redd.it/n52TjLRS4s-MUBwMagSIny8LVKQg9xDI5HPbQcCXWvk.jpg?width=216&crop=smart&auto=webp&s=07999777ecb22bcea8e83cc7dd1bfa14bc51ace0', 'width': 216}, {'height': 257, 'url': 'https://external-preview.redd.it/n52TjLRS4s-MUBwMagSIny8LVKQg9xDI5HPbQcCXWvk.jpg?width=320&crop=smart&auto=webp&s=f79b895aacd7e3fb3602680b8271f518fbc8ad3c', 'width': 320}, {'height': 514, 'url': 'https://external-preview.redd.it/n52TjLRS4s-MUBwMagSIny8LVKQg9xDI5HPbQcCXWvk.jpg?width=640&crop=smart&auto=webp&s=806e703c40483e1ad5ebe85e4ab7623ff21b926f', 'width': 640}], 'source': {'height': 770, 'url': 'https://external-preview.redd.it/n52TjLRS4s-MUBwMagSIny8LVKQg9xDI5HPbQcCXWvk.jpg?auto=webp&s=be06b08599d58ae7f30e28fc42e78526ba7ed525', 'width': 958}, 'variants': {}}]}
Reality check please. What's the tokens per second I should be getting from this model? deepseek-coder-33b-instruct.Q5_K_M.gguf
1
[removed]
2023-12-08T19:54:38
https://www.reddit.com/r/LocalLLaMA/comments/18dv329/reality_check_please_whats_the_tokens_per_second/
w7gg33h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dv329
false
null
t3_18dv329
/r/LocalLLaMA/comments/18dv329/reality_check_please_whats_the_tokens_per_second/
false
false
self
1
null
future of FOSS LLMs? foundational model = framework, fine-tuning data = library
1
Little theory of mine: the foundational model will be like a framework (think React or Spring Boot), while the fine-tuning data will be like a library (like Redux or Spring Security). This is because foundational models are much harder to build, exactly like frameworks, while fine-tuning data is much easier to create (anyone can build a 1000-entry dataset by hand if they wanted to). So my big bet is that foundational models will be funded by large companies (similar to how the current frameworks are), while people will release little fine-tuning datasets at the same frequency as JavaScript libraries. (For better or for worse.)
2023-12-08T19:46:08
https://www.reddit.com/r/LocalLLaMA/comments/18duwb5/future_of_foss_llms_foundational_model_framework/
platistocrates
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18duwb5
false
null
t3_18duwb5
/r/LocalLLaMA/comments/18duwb5/future_of_foss_llms_foundational_model_framework/
false
false
self
1
null
Fine tuning on the Cloud and Cost Effectiveness
3
I have recently been using cloud services for some experiments and fine-tuning. So far I have tried RunPod and Vast. Both seem to work fine, although on RunPod availability seems to be a problem. Vast also generally seems to be cheaper, but not by a huge margin. Both have a lot of different GPU configurations available. In general, for fine-tuning, what type of GPU config is most cost effective? I'd prefer to get the fine-tunes done reasonably fast (2-4 hours), so for the most part I have been using an A6000 with 48GB and 7B models, so I can increase the batch size and/or epochs and still complete the fine-tuning in a reasonable time. Any higher configuration than that seems to scale much higher in cost. I also haven't tried multi-GPU options, and I'm not sure if there are challenges with that; I suspect I'll have to try those options if I want to use larger models. If there is any advice or recommendation folks can offer from their experience, that would be great, especially for fine-tuning a model in the 30+B range (CodeLlama 34B, for example).
2023-12-08T19:33:09
https://www.reddit.com/r/LocalLLaMA/comments/18dum9u/fine_tuning_on_the_cloud_and_cost_effectiveness/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dum9u
false
null
t3_18dum9u
/r/LocalLLaMA/comments/18dum9u/fine_tuning_on_the_cloud_and_cost_effectiveness/
false
false
self
3
null
Llama on Azure endpoint online and privacy
2
I want to run a fine-tuned Llama 2 model in Azure ML, exposed as an online endpoint API that I can send documents to in order to predict/infer the category/context of each document. Is there any way to avoid sending the full clear-text document to these APIs, and instead send something like a vector/embedding/metadata version of the document, created locally, so the full version or sensitive information never goes to the cloud?
2023-12-08T19:20:48
https://www.reddit.com/r/LocalLLaMA/comments/18duchw/llama_on_azure_endpoint_online_and_privacy/
Stunning_Art4243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18duchw
false
null
t3_18duchw
/r/LocalLLaMA/comments/18duchw/llama_on_azure_endpoint_online_and_privacy/
false
false
self
2
null
Using 12GB of RAM and CPU for a 7B model, how to keep it from falling apart?
2
After 10 to 15 messages back and forth with the model, it starts to fall apart, repeating itself no matter what you try or just outputting random letters. Other than 'Start new chat', is there a way to keep it going without it falling apart? I am using oobabooga. Is it just my lack of RAM, or do all models need a 'start new chat' to clear things out after a number of messages? Thanks!
2023-12-08T19:09:20
https://www.reddit.com/r/LocalLLaMA/comments/18du3le/using_12gb_of_ram_and_cpu_for_7b_model_how_to/
CoyRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18du3le
false
null
t3_18du3le
/r/LocalLLaMA/comments/18du3le/using_12gb_of_ram_and_cpu_for_7b_model_how_to/
false
false
self
2
null
Google's best Gemini demo was faked | TechCrunch
139
2023-12-08T19:08:56
https://techcrunch.com/2023/12/07/googles-best-gemini-demo-was-faked/
fallingdowndizzyvr
techcrunch.com
1970-01-01T00:00:00
0
{}
18du3am
false
null
t3_18du3am
/r/LocalLLaMA/comments/18du3am/googles_best_gemini_demo_was_faked_techcrunch/
false
false
https://b.thumbs.redditm…LNPZid8cYPUs.jpg
139
{'enabled': False, 'images': [{'id': '7H9B-rXxDNKmVSIoE5X0K2csDpcGDF-g-gBvw1keX_s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?width=108&crop=smart&auto=webp&s=864894becefb2949165f13c1b65b4327bd0837b3', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?width=216&crop=smart&auto=webp&s=b1387d556d1c3b6f0b01e778e67194cc7d19f94e', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?width=320&crop=smart&auto=webp&s=7a2532e862b6f409d2a2ab34d08f68e1a42489a9', 'width': 320}, {'height': 356, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?width=640&crop=smart&auto=webp&s=944673b9082b5d27e8757d4775385c2bc19ebe2f', 'width': 640}, {'height': 534, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?width=960&crop=smart&auto=webp&s=d2028e430994b9e1d67643215cf3393130a01f50', 'width': 960}, {'height': 601, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?width=1080&crop=smart&auto=webp&s=48ce2ca031622dcf50ae7b61d10708bcc9deec8d', 'width': 1080}], 'source': {'height': 668, 'url': 'https://external-preview.redd.it/a-mydtiJiP5qDHSqemnPX_S8QqxKsHMyZ85UMFpcD5s.jpg?auto=webp&s=c24840f6ec66f21aaf72d94e664c62063073a4d6', 'width': 1200}, 'variants': {}}]}
How to keep the AI from generating the user's feelings and actions
1
[removed]
2023-12-08T19:07:17
https://www.reddit.com/r/LocalLLaMA/comments/18du204/how_to_avoid_ai_generates_users_feelings_and/
Saihhold_Zhao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18du204
false
null
t3_18du204
/r/LocalLLaMA/comments/18du204/how_to_avoid_ai_generates_users_feelings_and/
false
false
self
1
null
someone13574/mixtral-8x7b-32kseqlen · Hugging Face
8
2023-12-08T19:03:51
https://huggingface.co/someone13574/mixtral-8x7b-32kseqlen
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
18dtz93
false
null
t3_18dtz93
/r/LocalLLaMA/comments/18dtz93/someone13574mixtral8x7b32kseqlen_hugging_face/
false
false
https://a.thumbs.redditm…XENXYJbZSYP0.jpg
8
{'enabled': False, 'images': [{'id': 'BzYsoZkPEG_RuaDs8sLnFbVp2kNPHT-u3IbfO1ax_H8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?width=108&crop=smart&auto=webp&s=0d8660fc5dba317b62eb017ee402d6c501c11ce3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?width=216&crop=smart&auto=webp&s=ef6debeea11952ee58b3e42be09ef671c565666a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?width=320&crop=smart&auto=webp&s=504b0bc6095bf57d0675b730b8d2c20aed09abb2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?width=640&crop=smart&auto=webp&s=ed7dca041f9bce6b19b650c3d66e203ab6ed6ded', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?width=960&crop=smart&auto=webp&s=a9de960c407123d0e929a46567e2a8dacf6a8cea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?width=1080&crop=smart&auto=webp&s=29a2fc7af1ed4811ad618c9441b2f492815095d9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/usXpxXea-9zvV2tbxmkPSOTtlrijGsThDLi1uJAMto4.jpg?auto=webp&s=784a19d8d3789a657f7be577129eceaffbc93c8c', 'width': 1200}, 'variants': {}}]}
nodejs running inside of llama.cpp!
1
[removed]
2023-12-08T18:50:58
https://i.redd.it/lg716j0la45c1.jpg
introsp3ctor
i.redd.it
1970-01-01T00:00:00
0
{}
18dtoyt
false
null
t3_18dtoyt
/r/LocalLLaMA/comments/18dtoyt/nodejs_running_inside_of_llamacpp/
false
false
default
1
{'enabled': True, 'images': [{'id': 'T6vyHwQDIjw1znBieiu4fHshAVLZf-4J50J6uQDGwXI', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?width=108&crop=smart&auto=webp&s=dcb37b81fa5390de82d6f4809cad8c1830a5c1db', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?width=216&crop=smart&auto=webp&s=a3ccabfbee133abb4d4c0d70481f54d5b00b14cc', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?width=320&crop=smart&auto=webp&s=c7a2ddf24e42519ba9db23319516d7396b584e90', 'width': 320}, {'height': 308, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?width=640&crop=smart&auto=webp&s=38c870571697c5f9fa08cae7aa0412a6a4beebd7', 'width': 640}, {'height': 462, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?width=960&crop=smart&auto=webp&s=54f11fb5a8671b3861d9dd3dbaa751c2bd7a8aab', 'width': 960}, {'height': 520, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?width=1080&crop=smart&auto=webp&s=e530bfee8cc9207f8f5f048399e16b57de58e344', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/lg716j0la45c1.jpg?auto=webp&s=16ee2ba88699985e70a6273c9ad485ab86e720fc', 'width': 2241}, 'variants': {}}]}
Mixtral is 56b MoE on par with llama 2 70b
1
Looks like the new Mistral model is a 56B MoE with the performance of Llama 2 70B but the compute cost of a 12B model. Interesting times ahead of us, guys!!! Happy Christmas in advance 🎄 Links: https://twitter.com/MistralAI/status/1733150512395038967?t=5BKYv4NQMGVmQGFOBRnqXg&s=19
2023-12-08T18:37:56
https://i.redd.it/sigbuh39845c1.jpg
Independent_Key1940
i.redd.it
1970-01-01T00:00:00
0
{}
18dtepn
false
null
t3_18dtepn
/r/LocalLLaMA/comments/18dtepn/mixtral_is_56b_moe_on_par_with_llama_2_70b/
false
false
https://b.thumbs.redditm…RW0_sugD-Z9A.jpg
1
{'enabled': True, 'images': [{'id': 'AQpk-0qvMreEi3uT2ht5iH_PPlimyN88SW4V0mgoyAg', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?width=108&crop=smart&auto=webp&s=49473e19e720db5ba39004465395d31e44bea574', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?width=216&crop=smart&auto=webp&s=2b4cd1acecbac24036116f8b5fc2d1591e89413d', 'width': 216}, {'height': 280, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?width=320&crop=smart&auto=webp&s=fbed175171e2b57bf401491b8c7c2aa4c43c9bb6', 'width': 320}, {'height': 561, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?width=640&crop=smart&auto=webp&s=e85e994d4a0af74548e05eb868f41f6df6d97e0e', 'width': 640}, {'height': 842, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?width=960&crop=smart&auto=webp&s=130377fd9d9207046d378dda2af872493500d570', 'width': 960}, {'height': 948, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?width=1080&crop=smart&auto=webp&s=2b52023faec161c61e2ce886f156a12f795a5d0e', 'width': 1080}], 'source': {'height': 948, 'url': 'https://preview.redd.it/sigbuh39845c1.jpg?auto=webp&s=0095acfd4cd32b5c3ea90523e7190dd2a2245f2d', 'width': 1080}, 'variants': {}}]}
Useless Epiphany about uncensored model training
1
I recall a number of comments and posts from people upset about a chat model's refusal to do a task due to censorship or whatever. I'm referring to any models that were trained using datasets generated with OpenAI's ChatGPT service.

My point is that OpenAI's base models have always been capable of uncensored output when using the non-chat API/models (which are still quite capable of chat), but they always told us developers to run the base models' output through the moderation filter to ensure non-harmful responses. Whoever trains models using datasets generated by OpenAI services needs to do so using the base model API, NOT any chat model or chat API endpoint, because I believe moderation is built into that API.

I had this epiphany yesterday when I read about Meta releasing Purple Llama, their version of the moderation/filtration step, similar to OpenAI's. But the uncensored raw output is always generated first in both cases.

The reason I say this might be useless at this point in time (if not completely incorrect entirely) is that we now have Orca 2, recently released by Microsoft, which is said to be able to generate synthetic training data for smaller (local) models, and models respond better to high-quality AI-generated training data versus the previous method of throwing the kitchen sink into training. Generating uncensored training data from OpenAI would also probably result in the local model's neural net being better, with more meaningful connections due to the lack of censorship.

Anyone know if this has been done already? Also, at this point do you think it's even worth going back to test, with so many different models advancing so fast and some even better than Llama? (Looking at Mistral 👀)
2023-12-08T18:34:19
https://www.reddit.com/r/LocalLLaMA/comments/18dtbsu/useless_epiphany_about_uncensored_model_training/
1EvilSexyGenius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dtbsu
false
null
t3_18dtbsu
/r/LocalLLaMA/comments/18dtbsu/useless_epiphany_about_uncensored_model_training/
false
false
self
1
null
How to run Mixtral?
1
help
2023-12-08T18:33:24
https://www.reddit.com/r/LocalLLaMA/comments/18dtb0q/how_run_mixtral/
omar07ibrahim1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dtb0q
false
null
t3_18dtb0q
/r/LocalLLaMA/comments/18dtb0q/how_run_mixtral/
false
false
self
1
null
The real research behind the wild rumors about OpenAI’s Q* project
6
[https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project/](https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project/)
2023-12-08T18:21:29
https://www.reddit.com/r/LocalLLaMA/comments/18dt17u/the_real_research_behind_the_wild_rumors_about/
The_Hardcard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dt17u
false
null
t3_18dt17u
/r/LocalLLaMA/comments/18dt17u/the_real_research_behind_the_wild_rumors_about/
false
false
self
6
{'enabled': False, 'images': [{'id': 'VRI9ftrNOt3t-rGNSePOgcAz1p7474WaaXxBHyh_j50', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dr_oFQKnG5wNlg2OVo8ok_G5zPxiAvsbl4ON2rLZHvw.jpg?width=108&crop=smart&auto=webp&s=389134b3925eb8027b35598221d5de6a7b84fd2c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dr_oFQKnG5wNlg2OVo8ok_G5zPxiAvsbl4ON2rLZHvw.jpg?width=216&crop=smart&auto=webp&s=cb7dfdf67bac29b1e17454631627514efdf0c184', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dr_oFQKnG5wNlg2OVo8ok_G5zPxiAvsbl4ON2rLZHvw.jpg?width=320&crop=smart&auto=webp&s=624471882ec1ca3f0f0fe8734f32254255107728', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dr_oFQKnG5wNlg2OVo8ok_G5zPxiAvsbl4ON2rLZHvw.jpg?width=640&crop=smart&auto=webp&s=b9df2e9f98845fc2fe7716af8a13872e1743afbe', 'width': 640}], 'source': {'height': 380, 'url': 'https://external-preview.redd.it/dr_oFQKnG5wNlg2OVo8ok_G5zPxiAvsbl4ON2rLZHvw.jpg?auto=webp&s=75092aed033e2257ac69ef5d49ea21fe0496eff9', 'width': 760}, 'variants': {}}]}
Convert LLMs to GGUF & Quantize
5
I found this resource on Twitter
2023-12-08T18:07:55
https://huggingface.co/spaces/mrfakename/convert-to-gguf
Formal_Drop526
huggingface.co
1970-01-01T00:00:00
0
{}
18dsql3
false
null
t3_18dsql3
/r/LocalLLaMA/comments/18dsql3/convert_llms_to_gguf_quantize/
false
false
https://a.thumbs.redditm…KxT9GlhlWk14.jpg
5
{'enabled': False, 'images': [{'id': '_MIyjysDl8GlUybApgygIoep_lUDZNHgatOZODGoYQc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?width=108&crop=smart&auto=webp&s=1b710f7420927c83ac16755d47885df953347802', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?width=216&crop=smart&auto=webp&s=138982a64adf1bdbf1b3cd77d6cfd5c18d02f480', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?width=320&crop=smart&auto=webp&s=593475dfccb2b78ab004bfe4eec01515ec89abe2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?width=640&crop=smart&auto=webp&s=0a3b952d9101be90a7cb1a79b2b47975adcbbde2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?width=960&crop=smart&auto=webp&s=abe3a4facb850a9ebf869ce2f7bd9dd6b6108705', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?width=1080&crop=smart&auto=webp&s=ee9e50ccffc8d8bb11c36d7d127d1f3c296a39e3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jFWyWUWx2tF4mm8XS6eSFeQMfDRTRwza9GfT7ERqZa4.jpg?auto=webp&s=1cae27db699bafac8509d5d51cb86321a0197197', 'width': 1200}, 'variants': {}}]}
Is anyone making a living with local models? Consulting? Full time? If so, what's the path?
42
Looking to get a break from driving Uber day to day and get into AI. I have been exploring local models for the last 3 months, getting into Python, and I managed to create a local AI system that lets me chat with voice and get voice output. So now I want to take these skills and make some money. Is anyone out there hustling and making money with these skills and local models? If so, what has been your path? Where do you find clients? What are they asking? Has it been profitable for you?
2023-12-08T17:54:45
https://www.reddit.com/r/LocalLLaMA/comments/18dsg58/is_anyone_making_a_living_with_local_models/
herozorro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dsg58
false
null
t3_18dsg58
/r/LocalLLaMA/comments/18dsg58/is_anyone_making_a_living_with_local_models/
false
false
self
42
null
GPT-4 Failed Against Gemini ?
1
[removed]
2023-12-08T17:52:44
https://www.reddit.com/r/LocalLLaMA/comments/18dsemf/gpt4_failed_against_gemini/
SMG00007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dsemf
false
null
t3_18dsemf
/r/LocalLLaMA/comments/18dsemf/gpt4_failed_against_gemini/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gdaSQMAIsAqm_uN_DTFFRVfwKQX6CnUu3x9jZFHwP9g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=108&crop=smart&auto=webp&s=8740863500be836384bb456aaf3d90e990be371f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=216&crop=smart&auto=webp&s=b4ba71457856e4d24a6344253c2d75be6a939d58', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=320&crop=smart&auto=webp&s=bcde9fe3c45521b8e178100628d71ce282689129', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=640&crop=smart&auto=webp&s=56065f8185a2ac73586b06cd4c859d311de8bd6b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=960&crop=smart&auto=webp&s=a18750e4945ea39fa8ecc53609db82579bcc583f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=1080&crop=smart&auto=webp&s=e2825f34ca86b158261d66dc06d258e99ab6e5a7', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?auto=webp&s=c41a62c29294503b386e6c1c849ba594f057b187', 'width': 1200}, 'variants': {}}]}
GPT-4 Failed Against Gemini ?
1
[deleted]
2023-12-08T17:44:02
https://digialps.com/a-user-on-reddit-asked-the-same-gemini-demo-video-questions-to-gpt-4/
SMG00007
digialps.com
1970-01-01T00:00:00
0
{}
18ds7vs
false
null
t3_18ds7vs
/r/LocalLLaMA/comments/18ds7vs/gpt4_failed_against_gemini/
false
false
https://b.thumbs.redditm…B-CvvLr9kQew.jpg
1
{'enabled': False, 'images': [{'id': 'gdaSQMAIsAqm_uN_DTFFRVfwKQX6CnUu3x9jZFHwP9g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=108&crop=smart&auto=webp&s=8740863500be836384bb456aaf3d90e990be371f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=216&crop=smart&auto=webp&s=b4ba71457856e4d24a6344253c2d75be6a939d58', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=320&crop=smart&auto=webp&s=bcde9fe3c45521b8e178100628d71ce282689129', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=640&crop=smart&auto=webp&s=56065f8185a2ac73586b06cd4c859d311de8bd6b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=960&crop=smart&auto=webp&s=a18750e4945ea39fa8ecc53609db82579bcc583f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?width=1080&crop=smart&auto=webp&s=e2825f34ca86b158261d66dc06d258e99ab6e5a7', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/gaaZwv-t3-0s_7GN1MnLLlUTX2aBilpLsjVMNUhQq3U.jpg?auto=webp&s=c41a62c29294503b386e6c1c849ba594f057b187', 'width': 1200}, 'variants': {}}]}
Mixtral 8x7B is a scaled-down GPT-4
255
The model just released by Mistral AI appears to be a MoE consisting of 8 7B experts. For the inference of each token, only 2 experts are used.

This information can be derived from the model metadata:

{"dim": 4096, "n_layers": 32, "head_dim": 128, "hidden_dim": 14336, "n_heads": 32, "n_kv_heads": 8, "norm_eps": 1e-05, "vocab_size": 32000, "moe": {"num_experts_per_tok": 2, "num_experts": 8}}

From GPT-4 leaks, we can speculate that GPT-4 is a MoE model with 8 experts, each with 111B parameters of their own and 55B shared attention parameters (166B parameters per model). For the inference of each token, also only 2 experts are used.

Since the model size (87GB) is smaller than 8x Mistral 7B (8*15GB=120GB), we could assume that the new model uses the same architecture as Mistral 7B but the attention parameters are shared, reducing the total model size.

The conclusion is that (probably) Mixtral 8x7B uses a very similar architecture to that of GPT-4, but scaled down:

* 8 total experts instead of 16 (2x reduction)
* 7B parameters per expert instead of 166B (24x reduction)
* 42B total parameters (estimated) instead of 1.8T (42x reduction)
2023-12-08T17:36:07
https://www.reddit.com/r/LocalLLaMA/comments/18ds1pz/mixtral_8x7b_is_a_scaleddown_gpt4/
mrxaviolo99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ds1pz
false
null
t3_18ds1pz
/r/LocalLLaMA/comments/18ds1pz/mixtral_8x7b_is_a_scaleddown_gpt4/
false
false
self
255
null
What is the best way to give a character my personal information on Oobabooga?
5
I'm quite a noob, but what's the best way to give a character some information about me before chatting? As in the Persona management section in SillyTavern for example...
2023-12-08T17:26:31
https://www.reddit.com/r/LocalLLaMA/comments/18dru4g/what_is_the_best_way_to_give_a_character_my/
Aril_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dru4g
false
null
t3_18dru4g
/r/LocalLLaMA/comments/18dru4g/what_is_the_best_way_to_give_a_character_my/
false
false
self
5
null
Any hope for the future of open source?
26
I used to be really hopeful for open sourced AI models for a long time. And not just me, last year, in a whole bunch of tech forums where I hung out, we were all so confident we'd see a GPT-3.5 tier local model by the end of 2023 or early 2024. Yet that seems like a pipe dream now. Local models are nowhere near the closed source SOTA models run by big tech companies. The hardware costs just run higher and higher. And all the SOTA models are moving towards the next generation of multi-modality and web browsing. Yet, local LLMs still can't write out a coherent story or have enough context. I'm sorry if I sound too much of a doomer, I've just burnt through all my hopium.
2023-12-08T17:00:15
https://www.reddit.com/r/LocalLLaMA/comments/18dr97e/any_hope_for_the_future_of_open_source/
open_23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dr97e
false
null
t3_18dr97e
/r/LocalLLaMA/comments/18dr97e/any_hope_for_the_future_of_open_source/
false
false
self
26
null
AI Snowflake Method Writing Assistant in Flowise?
2
I have installed node.js and Flowise and I am interested in creating an AI writing assistant that follows the snowflake method. As I'm figuring things out, I thought I'd also look for some guidance here. Open to using different solutions too.

The Snowflake Method is starting with a one-sentence summary, then expanding to a paragraph synopsis, character summaries, expand synopsis to Acts 1, 2a, 2b, 3, and from those acts, create 40 scene beats where I can start my writing.

ChatGPT and Claude do ok at the start, but as I get further into the process, the context window grows and keeping everything in memory becomes a challenge. In addition, the linear style of these tools isn't exactly ideal.

What I would like is to prompt the first step where I share my seed idea or logline, use AI to help refine it, then when I'm happy with that, lock it in. Prompt the AI to generate the paragraph synopsis, workshop that with AI, lock it in and move on to the next stage, etc. When I get to the expand-to-acts step, I need the AI tool to keep the synopsis and character summaries in memory. When developing the 40 beats, I need the Acts, synopsis, and characters in context. That's a big ask for a linear chat it seems.

In my mind I use a chat window, but there's another window where each output is kept. Might be nice to have the output window editable and read by the AI too. When moving on to the next stage, the AI would use the final output from the previous stage and can ignore all the back and forth we had previously.

I should also note that it's fine if AI isn't generating great plot ideas. ChatGPT and Claude do well enough, but I rewrite it often. Just getting something to trigger my own thoughts on the story is great.

Any tips in setting this up in Flowise? Suggestions for a Flowise alternative?
2023-12-08T16:33:20
https://www.reddit.com/r/LocalLLaMA/comments/18dqo6n/ai_snowflake_method_writing_assistant_in_flowise/
BoulderDive
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dqo6n
false
null
t3_18dqo6n
/r/LocalLLaMA/comments/18dqo6n/ai_snowflake_method_writing_assistant_in_flowise/
false
false
self
2
null
Mistral has released a new 87GB model
138
The French company Mistral (who developed the famous Mistral 7B)
2023-12-08T16:05:20
https://i.redd.it/asb8dtz0h35c1.jpg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
18dq1xd
false
null
t3_18dq1xd
/r/LocalLLaMA/comments/18dq1xd/mistrsal_has_released_a_new_87gm_model/
false
false
https://b.thumbs.redditm…Ht36XorXLnTg.jpg
138
{'enabled': True, 'images': [{'id': 'lD_vvX88e3egENxKkql9jQvrVTe_bRJrbx98k7nz8mg', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?width=108&crop=smart&auto=webp&s=60026ce66953c7cee80476bfd8f6046e8a002432', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?width=216&crop=smart&auto=webp&s=f5959dd70e3871409b216d1137dd1c94d933988a', 'width': 216}, {'height': 253, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?width=320&crop=smart&auto=webp&s=8ae960a78ceba11b66233cbf195248977f80198d', 'width': 320}, {'height': 507, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?width=640&crop=smart&auto=webp&s=04a2cd6c73d24ec31355100142da3f9baef70e08', 'width': 640}, {'height': 761, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?width=960&crop=smart&auto=webp&s=c4245e7d7187573827abfe3ae7ccd13c258a25be', 'width': 960}, {'height': 856, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?width=1080&crop=smart&auto=webp&s=f4ddc0a8529bf8b933b530b4df71c575ccef2a4b', 'width': 1080}], 'source': {'height': 1416, 'url': 'https://preview.redd.it/asb8dtz0h35c1.jpg?auto=webp&s=b6f97886827382376af66a583ddfe5e6f14896c7', 'width': 1786}, 'variants': {}}]}
Launching a crypto project that aims to incentivize LLM workloads and reward for inference
1
[removed]
2023-12-08T16:05:05
https://www.reddit.com/r/LocalLLaMA/comments/18dq1pj/launching_a_crypto_project_that_aims_to/
AbortedFajitas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dq1pj
false
null
t3_18dq1pj
/r/LocalLLaMA/comments/18dq1pj/launching_a_crypto_project_that_aims_to/
false
false
self
1
null
Best practices for training and fine tuning LLMs
1
Hello everyone, I want to step up my skills as a DS. I took the resourceful UMass CS685 course and made some projects that involved LLMs for augmentation but eventually revolved around classification using BERT. I saw [this](https://www.reddit.com/r/LocalLLaMA/comments/18b8lrh/deepsex34b_a_nsfw_model_which_pretrained_with/) interesting project, and while reading through the stages I understood that there are modern methodologies in practice that I'm unaware of. Where should I start my reading if I want to get to the stage where I can fine-tune a transformer decoder for text generation (or an LLM, as they say)? Thanks!
2023-12-08T16:02:27
https://www.reddit.com/r/LocalLLaMA/comments/18dpzmc/best_practices_for_training_and_fine_tuning_llms/
David202023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dpzmc
false
null
t3_18dpzmc
/r/LocalLLaMA/comments/18dpzmc/best_practices_for_training_and_fine_tuning_llms/
false
false
self
1
null
Anything beat Mistral 7B?
20
Does anything beat Mistral 7B for general use at the 7B size? Looking for a broad range of tasks and styles, not a highly specialist model. Are there any fine tunes of Mistral 7B that consistently beat it?
2023-12-08T16:01:12
https://www.reddit.com/r/LocalLLaMA/comments/18dpyga/anything_beat_mistral_7b/
Efficient_Map43
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dpyga
false
null
t3_18dpyga
/r/LocalLLaMA/comments/18dpyga/anything_beat_mistral_7b/
false
false
self
20
null
New release from Mistral
86
On Twitter and torrent as well.
2023-12-08T15:51:01
https://www.reddit.com/r/LocalLLaMA/comments/18dpqc0/new_release_from_mistral/
CedricLimousin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dpqc0
false
null
t3_18dpqc0
/r/LocalLLaMA/comments/18dpqc0/new_release_from_mistral/
false
false
self
86
null
New Mistral models just dropped (magnet links)
430
2023-12-08T15:50:24
https://twitter.com/MistralAI
Jean-Porte
twitter.com
1970-01-01T00:00:00
0
{}
18dpptc
false
null
t3_18dpptc
/r/LocalLLaMA/comments/18dpptc/new_mistral_models_just_dropped_magnet_links/
false
false
default
430
null
Rogue-Rose-103b: A roleplaying frankenmerge
11
[https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2) Full weights are up. There is also an Exllama2 3.2bpw quant already on the Hugging Face repo. Look for it under the 'exl2-3.2bpw' branch. I am uploading exl2-3.35bpw weights right now that should be up by the end of the day, with 3.5bpw weights to follow. Please share your feedback. This is my first stab at a frankenmerge. I'm leaning towards these 100b+ merges being worth the effort and exploration. I'm working on more experiments to see if I can improve upon this result. Wish me luck, and enjoy this model!
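Since the quants live on separate branches of the same repo, a minimal sketch for pulling just one of them with huggingface_hub (branch name taken from the post; the local directory is a placeholder):

```python
# Download only the exl2-3.2bpw branch of the repo mentioned above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="sophosympatheia/Rogue-Rose-103b-v0.2",
    revision="exl2-3.2bpw",                      # branch holding the Exllama2 quant
    local_dir="Rogue-Rose-103b-exl2-3.2bpw",
)
```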
2023-12-08T15:48:29
https://www.reddit.com/r/LocalLLaMA/comments/18dpoda/roguerose103b_a_roleplaying_frankenmerge/
sophosympatheia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dpoda
false
null
t3_18dpoda
/r/LocalLLaMA/comments/18dpoda/roguerose103b_a_roleplaying_frankenmerge/
false
false
self
11
{'enabled': False, 'images': [{'id': '4TWR_VSyKNL16Ui6UrYSoCuO0FSV3vqXM_C69xJKUa4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=108&crop=smart&auto=webp&s=a5c66c71ecab943ff35f601674ebf62f223481ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=216&crop=smart&auto=webp&s=a11b9449997a3a670e30f8c31277d2d606bd1302', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=320&crop=smart&auto=webp&s=dd8dc649aa197f475baff936d750653a866fbf7a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=640&crop=smart&auto=webp&s=019cf45622f20f7beea8a73fe0387116a4230297', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=960&crop=smart&auto=webp&s=e5923eee0981eff081512d505bf8e619972474b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=1080&crop=smart&auto=webp&s=3eda19a9c2c52762ab4262a7e156cf22668d7c67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?auto=webp&s=31198d9a80d58d8462ab0a02f4c0670235cc88e2', 'width': 1200}, 'variants': {}}]}
Does a token calculator exist?
2
Given that I get a HuggingFace model and a hardware spec, would it be possible to ~~compute~~ estimate the tokens/second speed?
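There is a common back-of-the-envelope estimate rather than an exact calculator: single-stream generation is usually memory-bandwidth bound, so the ceiling is roughly the hardware's memory bandwidth divided by the bytes read per token, which is about the size of the (quantized) weights. A small sketch of that rule of thumb, with illustrative numbers:

```python
# Rough upper bound only; real throughput is lower due to compute, KV cache, and overhead.
def estimate_tokens_per_second(params_billion: float, bits_per_weight: float,
                               bandwidth_gb_per_s: float) -> float:
    model_size_gb = params_billion * bits_per_weight / 8   # 7B at ~4.5 bpw is ~3.9 GB
    return bandwidth_gb_per_s / model_size_gb

# e.g. a GPU with ~1000 GB/s of memory bandwidth running a 7B Q4_K_M-class quant
print(round(estimate_tokens_per_second(7, 4.5, 1000)))      # ~254 tokens/s, optimistic ceiling
```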
2023-12-08T15:40:29
https://www.reddit.com/r/LocalLLaMA/comments/18dpi4x/does_a_token_calculator_exist/
and69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dpi4x
false
null
t3_18dpi4x
/r/LocalLLaMA/comments/18dpi4x/does_a_token_calculator_exist/
false
false
self
2
null
llm on mobile devices - reading pdf usecase
1
Hi all, I learnt a lot from this sub and have been trying to build a mobile-based PDF reader that can answer questions; mainly, to develop a local LLM for mobile devices. I do understand RedPajama can be a way to go about this. Did anyone try this, and are there any pointers for the same?
2023-12-08T15:22:29
https://www.reddit.com/r/LocalLLaMA/comments/18dp4cb/llm_on_mobile_devices_reading_pdf_usecase/
Helloworld1907
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dp4cb
false
null
t3_18dp4cb
/r/LocalLLaMA/comments/18dp4cb/llm_on_mobile_devices_reading_pdf_usecase/
false
false
self
1
null
Are there any strategies or papers to reduce tool hallucinations ?
1
I've been basically exploring function calling in open-source models, and any resource in this direction is welcome. Sorry if this is a beginner question.
2023-12-08T15:15:54
https://www.reddit.com/r/LocalLLaMA/comments/18dozks/are_there_any_strategies_or_papers_to_reduce_tool/
Cool_Abbreviations_9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dozks
false
null
t3_18dozks
/r/LocalLLaMA/comments/18dozks/are_there_any_strategies_or_papers_to_reduce_tool/
false
false
self
1
null
Inexpensive system builds that can run a 70b or better?
12
I'm looking to get a standalone system to mess with larger local AI models (70b or better). While I have a general idea as to what's needed, I'm hoping that some of you have decent rigs that you'd be willing to share details on. I'm not too worried about spending a bit to have a decent system, but I also don't want to go too crazy (e.g. trying to buy top of the line latest gen hardware). Also, I'm just a hobbyist, so it's mostly a toy for myself.
2023-12-08T15:04:17
https://www.reddit.com/r/LocalLLaMA/comments/18doqoi/inexpensive_system_builds_that_can_run_a_70b_or/
EmuEggFarmer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18doqoi
false
null
t3_18doqoi
/r/LocalLLaMA/comments/18doqoi/inexpensive_system_builds_that_can_run_a_70b_or/
false
false
self
12
null
LVE Project: First Open Source Repository of LLM Vulnerabilities
14
Hi all! Yesterday we publicly announced our open source project LVE, which has the goal of tracking and documenting safety issues of LLMs in a transparent and reproducible way. The scope of the project covers issues relating to privacy, security, reliability and other forms of safety failures. With LVE we want to help model makers, AI developers and the general public better understand the capabilities and vulnerabilities of LLMs. We already support a bunch of different model variants and are working on adding more. We have also launched a set of community challenges on our website, which are bounty-like mini-games in which everyone can participate and help us attack and red-team LLMs. We would be happy to hear feedback, get people involved in the project or just have people play our community challenges. Let us know what you think! Github: [https://github.com/lve-org/lve/](https://github.com/lve-org/lve/) Blog post: [https://lve-project.org/blog/launching-the-lve-project.html](https://lve-project.org/blog/launching-the-lve-project.html) Community Challenges: [https://lve-project.org/challenges/](https://lve-project.org/challenges/)
2023-12-08T14:54:19
https://www.reddit.com/r/LocalLLaMA/comments/18doj3k/lve_project_first_open_source_repository_of_llm/
bmislav
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18doj3k
false
null
t3_18doj3k
/r/LocalLLaMA/comments/18doj3k/lve_project_first_open_source_repository_of_llm/
false
false
self
14
{'enabled': False, 'images': [{'id': 'yHY7Znlk5TvKbfYrewjb0ZXZ1uah1RfBUtjfUG-ePno', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?width=108&crop=smart&auto=webp&s=cd4eec4b49e608a9a953cf1ecdb7a1ac5ca87f63', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?width=216&crop=smart&auto=webp&s=26b6de4229470349b30664f3ca9c960333ff60b4', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?width=320&crop=smart&auto=webp&s=a9537bfa6f3c74bd597b4cbc80971e7c725b8adb', 'width': 320}, {'height': 338, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?width=640&crop=smart&auto=webp&s=3ea5e05a4eba9cc31bd42902a0dd0cd201a6b818', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?width=960&crop=smart&auto=webp&s=cf70674795cdc90388556668c22a72c26a4d2aef', 'width': 960}, {'height': 571, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?width=1080&crop=smart&auto=webp&s=9bc992c46177cb61cb4523e9ed05ee53b186d715', 'width': 1080}], 'source': {'height': 709, 'url': 'https://external-preview.redd.it/KiIulJtP0uwsAbQgxyS9-O7flRPwQH9PvqX6EJwS5vg.jpg?auto=webp&s=4de4d5f8da8d4890d45e79a63cd2733c008e7b15', 'width': 1339}, 'variants': {}}]}
Mistral 7b chat vs http server response different?
1
Idk if this is the right place to ask, but I'm wondering if anyone can help. I'm using LM Studio and I'm trying to create an AI chatbot that responds to tweets with savage and funny comments. I have created a detailed prompt, and in chat mode my bot's responses work perfectly. I made my own preset in LM Studio which works well. However, when I start an HTTP server that interfaces with my Node.js script using the same prompt and settings, I get a messed-up response. It puts in random hashtags, makes up fictional username handles, makes replies to its own tweets (which it isn't meant to do), and adds random phrases like 'Instruction: ##' and 'context length = humongous', etc. This isn't expected behaviour, and I just want the HTTP server to output the straight response on its own like the chat does. Has anyone else had this issue and knows how to fix it? Cheers!
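Symptoms like stray "Instruction:" headers and invented handles usually mean the server keeps generating past the assistant turn because no stop strings or prompt template are applied on the API side. A minimal sketch of a request with explicit stop strings; the port and path are LM Studio's defaults as I understand them, so treat them as assumptions and use whatever the server actually prints:

```python
# Request to a local OpenAI-compatible chat endpoint with explicit stop strings.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",   # assumed LM Studio default
    json={
        "messages": [
            {"role": "system", "content": "You reply to tweets with short, savage comments."},
            {"role": "user", "content": "Just ran a marathon, feeling great!"},
        ],
        "temperature": 0.8,
        "max_tokens": 80,
        "stop": ["### Instruction", "\n\n"],        # cut off runaway continuations
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```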
2023-12-08T14:53:22
https://www.reddit.com/r/LocalLLaMA/comments/18doifm/mistral_7b_chat_vs_http_server_response_different/
raveboi11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18doifm
false
null
t3_18doifm
/r/LocalLLaMA/comments/18doifm/mistral_7b_chat_vs_http_server_response_different/
false
false
self
1
null
[Tutorial] Use real books, wiki pages, and even subtitles for roleplay with the RAG approach in Oobabooga WebUI + superbooga v2
116
Hi, beloved LocalLLaMA! As requested [here](https://www.reddit.com/r/LocalLLaMA/comments/188cwk3/rag_real_txt_book_yi34bchat_creative_writing_beast/) by a few people, I'm sharing a tutorial on how to activate the superbooga v2 extension (our RAG *at home*) for text-generation-webui and use real books, or any text content for roleplay. I will also share the characters in the booga format I made for this task. This approach makes writing good stories even better, as they start to sound exactly like stories from the source. Here are a few examples of chats generated with this approach and [yi-34b.Q5\_K\_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF) model: &#x200B; * [Joker interview](https://pastebin.com/vx2bBx2z) made from the "Dark Knight" subtitles of the movie (converted to txt); I tried to fix him, but he is crazy * [Pyramid Head interview](https://pastebin.com/EHLF6qhS) based on the fandom wiki article (converted to txt) * [Harry Potter and Rational Way of Thinking conversation](https://pastebin.com/iCwKXBs3) (source was HPMOR book in text format) * [Leon Trotsky](https://pastebin.com/379UmwFC) (Soviet politician murdered by Stalin in Mexico; Leo was his opponent) learns a hard history lesson after being resurrected based on a Wikipedia article # What is RAG The complex explanation is [here](https://youtu.be/T-D1OfcDW1M), and the simple one is – that your source prompt is automatically "improved" by the context you have mentioned in the prompt. It's like a Ctrl + F on steroids that automatically adds parts of the text doc before sending it to the model. **Caveats:** * This approach will require you to change the prompt strategy; I will cover it later. * I tested this approach only with English. # Tutorial (15-20 minutes to setup): 1. You need to install oobabooga/text-generation-webui. It is straightforward and [works with one click](https://github.com/oobabooga/text-generation-webui#one-click-installers). 2. Launch WebUI, open "Session", tick the "superboogav2" and click Apply. &#x200B; https://preview.redd.it/s43ivr9f035c1.png?width=3024&format=png&auto=webp&s=b65a9ec7923f430675a79cc5a81e40eb0cc7fee5 3) Now close the WebUI terminal session because nothing works without some monkey patches ^((Python <3)) 4) Now open the installation folder and find the launch file related to your OS: `start_linux.sh`, `start_macos.sh`, `start_windows.bat` etc. Open it in the text editor. 5) Now, we need to install some additional Python packages in the environment that Conda created. We will also download a small tokenizer model for the English language. **For Windows** Open `start_windows.bat` in any text editor: Find line number 67. &#x200B; https://preview.redd.it/hj57bnnu035c1.png?width=2100&format=png&auto=webp&s=afbcf63a8ae68973b874d1178a4053d0b72cdf70 Add there those two commands below the line 67: pip install beautifulsoup4==4.12.2 chromadb==0.3.18 lxml optuna pandas==2.0.3 posthog==2.4.2 sentence_transformers==2.2.2 spacy pytextrank num2words python -m spacy download en_core_web_sm **For Mac** Open `start_macos.sh` in any text editor: Find line number 64. 
&#x200B; https://preview.redd.it/tp9ibrzw035c1.png?width=1064&format=png&auto=webp&s=21dcadd4319c0a9302bd2685251e007950456abf And add those two commands below the line 64: pip install beautifulsoup4==4.12.2 chromadb==0.3.18 lxml optuna pandas==2.0.3 posthog==2.4.2 sentence_transformers==2.2.2 spacy pytextrank num2words python -m spacy download en_core_web_sm **For Linux** why 4r3 y0u 3v3n r34d1n6 7h15 m4nu4l <3 6) Now save the file and double-click (on mac, I'm launching it via terminal). 7) **Huge success!** If everything works, the WebUI will give you the URL like [http://127.0.0.1:7860/](http://127.0.0.1:7860/). Open the page in your browser and scroll down to find a new island if the extension is active. &#x200B; https://preview.redd.it/gtd9980t035c1.png?width=2128&format=png&auto=webp&s=7e16992957a5225355b2f139ad42c73653070897 If the "superbooga v2" is active in the Sessions tab but the plugin island is missing, read the launch logs to find errors and additional packages that need to be installed. 8) Now open extension Settings -> General Settings and tick off "Is manual" checkbox. This way, it will automatically add the file content to the prompt content. Otherwise, you will need to use "!c" before every prompt. **!Each WebUI relaunch, this setting will be ticked back!** https://preview.redd.it/us411t0i545c1.png?width=1878&format=png&auto=webp&s=25307f17dbffa8f788c7ff74ceb3e7eb8c751a52 9) Don't forget to remove added commands from step 5 manually, or Booga will try to install them each launch. # How to use it The extension works only for text, so you will need a text version of a book, subtitles, or the wiki page (hint: the simplest way to convert wiki is wiki-pdf-export and then convert via pdf-to-txt converter). For my previous post example, I downloaded the book World War Z in EPUB format and converted it online to txt using a random online converter. Open the "**File input**" tab, select the converted txt file, and press the load data button. Depending on the size of your file, it could take a few minutes or a few seconds. When the text processor creates embeddings, it will show "**Done.**" at the bottom of the page, which means everything is ready. # Prompting Now, every prompt text that you will send to the model will be updated with the context from the file via embeddings. This is why, instead of writing something like: `Why did you do it?` In our imaginative Joker interview, you should mention the events that happened and mention them in your prompt: `Why did you blow up the Hospital?` This strategy will search through the file, identify all hospital sections, and provide additional context to your prompt. The Superbooga v2 extension supports a few strategies for enriching your prompt and more advanced settings. I tested a few and found the default one to be the best option. Please share any findings in the comments below. # Characters I'm a lazy person, so I don't like digging through multiple characters for each roleplay. I created a few characters that only require tags for character, location, and main events for roleplay. Just put them into the "characters" folder inside Webui and select via "Parameters -> Characters" in WebUI. [Download link.](https://github.com/DenisSergeevitch/prompts/tree/main/oobabooga) **Diary** Good for any historical events or events of the apocalypse etc., the main protagonist will describe events in a diary-like style. 
**Zombie-diary** It is very similar to the first, but it has been specifically designed for the scenario of a zombie apocalypse as an example of how you can tailor your roleplay scenario even deeper. **Interview** It is especially good for roleplay; you are interviewing the character, my favorite prompt yet. *Note:* In the chat mode, the interview work really well if you will add character name to the "Start Reply With" field: https://preview.redd.it/0k3oyysg235c1.png?width=964&format=png&auto=webp&s=0fb19b09a4dbffc6f79618dcc8fbb32800a75e91 &#x200B; That's all, have fun! # Bonus My generating settings for the llama backend &#x200B; https://preview.redd.it/l0c86xqp235c1.png?width=3000&format=png&auto=webp&s=100cceb469cc65bd8a082a82d6aab1ff75fd98bd # Previous tutorials \[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/17aswq4/tutorial_integrate_multimodal_llava_to_macs/)\] **Integrate multimodal llava to Macs' right-click Finder menu for image captioning (or text parsing, etc) with llama.cpp and Automator app** \[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/15snlv1/tutorial_simple_soft_unlock_of_any_model_with_a/)\] **Simple Soft Unlock of any model with a negative prompt (no training, no fine-tuning, inference only fix)** \[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/)\] **A simple way to get rid of "..as an AI language model..." answers from any model without finetuning the model, with llama.cpp and --logit-bias flag** **\[**[Tutorial](https://www.reddit.com/r/SteamDeck/comments/12k1d8h/manual_how_to_install_large_language_model_vicuna/)**\] How to install Large Language Model Vicuna 7B + llama.ccp on Steam Deck**
2023-12-08T14:52:16
https://www.reddit.com/r/LocalLLaMA/comments/18dohlt/tutorial_use_real_books_wiki_pages_and_even/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dohlt
false
null
t3_18dohlt
/r/LocalLLaMA/comments/18dohlt/tutorial_use_real_books_wiki_pages_and_even/
false
false
https://b.thumbs.redditm…xQfiPf1sje9k.jpg
116
{'enabled': False, 'images': [{'id': 'tK45TM_ed4zQd-YVKMJTrBkBMYo8Qq4n77V9PrT9vy4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?width=108&crop=smart&auto=webp&s=0f39adc7d54fff4ec2add223d739594c1d690f86', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?width=216&crop=smart&auto=webp&s=4a25026158c8853fad83657965f4f217ce701fe9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?width=320&crop=smart&auto=webp&s=94436ec2e2d70db0e027813d2dada10ed273604a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?width=640&crop=smart&auto=webp&s=99c0150d846b9d15765354a68f7fbaca5fe8dc6a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?width=960&crop=smart&auto=webp&s=ea3989f57f795d380b9af712bac1019a8dd4435f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?width=1080&crop=smart&auto=webp&s=23252b3e44d8c5f84198231b1b31542bab1fb947', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aV8f0WPhwQrexgmvxxgCdqqxMnfO3wBVfcSe4M5TXZM.jpg?auto=webp&s=9cfab2fd86ada40a3aa9684dca2aa43cb4c5a619', 'width': 1200}, 'variants': {}}]}
70B Model in Langchain with llama.cpp
2
Hello everyone, I just started out with local LLMs, and I guess I am missing some elemental component. I want to run a 70B model from Huggingface: https://huggingface.co/LeoLM/leo-hessianai-70b-chat I know I need to quantize it to make it work, even though I have 95GB of VRAM and about 160GB of RAM available. But I need fast inference, so I opted to quantize it to 6 bits using llama.cpp. However, when I download the model I get 15 huge files (.bin) and a tokenizer.json, alongside some other stuff. What would be the line to get this thing quantized? I used: python convert.py models/LeoM to no avail. I guess without any flags the standard quantization is also below my needs. So if that worked and I got a gguf file, how would I get the gguf to run in langchain without additional bloatware? Is there a tutorial that I am missing? Thank you
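Once convert.py has produced an fp16 GGUF and llama.cpp's quantize tool has turned it into a Q6_K file, LangChain can load it directly through its LlamaCpp wrapper (backed by llama-cpp-python), with no extra services. A minimal sketch, assuming llama-cpp-python is installed with GPU support; the file path and prompt are placeholders:

```python
# Load a quantized GGUF in LangChain via llama-cpp-python.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="models/leo-hessianai-70b-chat.Q6_K.gguf",  # placeholder path to your quant
    n_gpu_layers=-1,      # offload all layers to the GPUs
    n_ctx=4096,
    temperature=0.7,
)
print(llm("Fasse den folgenden Text in einem Satz zusammen: ..."))
```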
2023-12-08T14:46:42
https://www.reddit.com/r/LocalLLaMA/comments/18dodgv/70b_model_in_langchain_with_llamacpp/
Fabianslife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dodgv
false
null
t3_18dodgv
/r/LocalLLaMA/comments/18dodgv/70b_model_in_langchain_with_llamacpp/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Tekk5NAQLtaPEhdZSZ-jP6d_bqkk7sOhIy-p0K2_Io8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?width=108&crop=smart&auto=webp&s=1816a5b9f78fd7b8293bcfa2cf8c7d7737e8d690', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?width=216&crop=smart&auto=webp&s=1cc16604177c70d47cb4e31ba12e66e8a44c0a6b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?width=320&crop=smart&auto=webp&s=c7906fad6365a71252a6731d2dc4ed7f0e8b660c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?width=640&crop=smart&auto=webp&s=f5034fc93c3e8c1ca366146e4190568dadb86bb0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?width=960&crop=smart&auto=webp&s=8b7bff9c934f8a08937c17edb218e99d4534b843', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?width=1080&crop=smart&auto=webp&s=9704dce16e2ade79cdb349fbb96727ed6bab1596', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nmOYzlsAylQw_iynk93CpeFKfzKDgtiq2NKjoAyDAWE.jpg?auto=webp&s=5a9aff10fa210c73f48ac71bd2c368c19997c508', 'width': 1200}, 'variants': {}}]}
What's the best way to finetune a completion-style model?
3
With OpenAI's deprecation of davinci looming, I really want to fine-tune a model in the davinci completion style. I have a fairly large dataset of responses collected from a chatbot in a group chat powered by davinci completions. What would be the best way to create a model that approximates the API and completion style of that model?
2023-12-08T14:14:38
https://www.reddit.com/r/LocalLLaMA/comments/18dnq33/whats_the_best_way_to_finetune_a_completionstyle/
URLSweatshirt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dnq33
false
null
t3_18dnq33
/r/LocalLLaMA/comments/18dnq33/whats_the_best_way_to_finetune_a_completionstyle/
false
false
self
3
null
LLM GUI for coding
8
I am looking for an LLM GUI that is good for coding. So far I've only used LM Studio and gpt4all. The former doesn't seem to do any code formatting, but the latter does. Is there any "coder standard GUI" out there?
2023-12-08T14:12:29
https://www.reddit.com/r/LocalLLaMA/comments/18dnol9/llm_gui_for_coding/
Ok-Book-1494
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dnol9
false
null
t3_18dnol9
/r/LocalLLaMA/comments/18dnol9/llm_gui_for_coding/
false
false
self
8
null
Does anything except llama.cpp support grammar?
3
Is there any loader which supports running models with grammar, preferably combined with beam search as well? The llama.cpp server is giving me many weird issues during inference (if I use the ChatML template then some prompts take 10x the time to process, or don't process at all and get stuck), and it uses more VRAM and is slower than GPTQ/AWQ/EXL2. Grammar is extremely useful though, which is why I have to use llama.cpp.
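For readers unfamiliar with what grammar-constrained decoding looks like here, a minimal sketch through the llama-cpp-python bindings (still llama.cpp underneath, but usable as a library instead of the server); the GBNF string and model path are illustrative:

```python
# Constrain output to "yes" or "no" with a GBNF grammar.
from llama_cpp import Llama, LlamaGrammar

grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')
llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=2048)     # placeholder model file
out = llm("Is the sky blue? Answer with yes or no: ", grammar=grammar, max_tokens=3)
print(out["choices"][0]["text"])
```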
2023-12-08T14:01:38
https://www.reddit.com/r/LocalLLaMA/comments/18dngkx/does_anything_except_llamacpp_support_grammar/
LiquidGunay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dngkx
false
null
t3_18dngkx
/r/LocalLLaMA/comments/18dngkx/does_anything_except_llamacpp_support_grammar/
false
false
self
3
null
Gemini not published yet 😅
3
Asked Bard today what model it was running.
2023-12-08T13:40:09
https://i.redd.it/wh17yvo4r25c1.jpeg
europe_admirer_18
i.redd.it
1970-01-01T00:00:00
0
{}
18dn1k8
false
null
t3_18dn1k8
/r/LocalLLaMA/comments/18dn1k8/gemini_not_published_yet/
false
false
https://b.thumbs.redditm…cOuVAeNIQkDk.jpg
3
{'enabled': True, 'images': [{'id': 'AxtrDlDxC49ISCSfU5sk_Hx_lumIgElVPLCcEb_GZLI', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?width=108&crop=smart&auto=webp&s=b7dc2ea4a571b49d9e5d1542f777839df733304f', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?width=216&crop=smart&auto=webp&s=5644b147b90f110db5c8c1fd2322f8ed8718885c', 'width': 216}, {'height': 96, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?width=320&crop=smart&auto=webp&s=917ddc6fcff17a83e3e0a07c075166cc8fc20a18', 'width': 320}, {'height': 193, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?width=640&crop=smart&auto=webp&s=108d7cc30032802d663c8a5d2d0aeeb4976b3658', 'width': 640}, {'height': 290, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?width=960&crop=smart&auto=webp&s=22bd91ed015752e9e85f7bb24421c3b453c2bf26', 'width': 960}, {'height': 326, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?width=1080&crop=smart&auto=webp&s=e7df791e19c07c1d42ca170c3839968f3c86725a', 'width': 1080}], 'source': {'height': 620, 'url': 'https://preview.redd.it/wh17yvo4r25c1.jpeg?auto=webp&s=1ea1b6dea1f80ad793e39b6ecd0ab990654f049d', 'width': 2048}, 'variants': {}}]}
How big model does c.ai use?
1
I didn't get an answer anywhere. I know that their team never disclosed the details, but what do you think? Also, I'm comparing several open-sourced LLMs. I'm not a native speaker, so I can't tell how much worse llama2-13b-Tiefighter is than c.ai. Can anyone tell? Thanks
2023-12-08T12:32:00
https://www.reddit.com/r/LocalLLaMA/comments/18dlu7h/how_big_model_does_cai_use/
Saihhold_Zhao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dlu7h
false
null
t3_18dlu7h
/r/LocalLLaMA/comments/18dlu7h/how_big_model_does_cai_use/
false
false
self
1
null
Model suggestion to answer technical questions
3
So, I was using ChatGPT 3.5 to ask questions about large language models and I realised that its knowledge cut-off is January 2022. I was looking at some open-source models to host locally and learn more about large language models. I asked ChatGPT 3.5 to list some interview questions and those were nice, hence the thought of learning about LLMs using the models themselves. Can you guys please list which models I can use for the above purpose? I can host them locally on an A4000.
2023-12-08T12:08:46
https://www.reddit.com/r/LocalLLaMA/comments/18dlgpq/model_suggestion_to_answer_technical_questions/
Public-Mechanic-5476
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dlgpq
false
null
t3_18dlgpq
/r/LocalLLaMA/comments/18dlgpq/model_suggestion_to_answer_technical_questions/
false
false
self
3
null
TGI + AWQ help
3
Hi, I have a 16GB 4060 and am using TGI to run some models. 7B models work great, but every time I try to run a 13B I keep getting out-of-memory errors. I thought specifying AWQ would download a static 4-bit quantized model, and 16GB would be more than enough. If anyone has a setup with TGI that works with a 13B model they could share, that would be great. I should point out I am using the Docker version of TGI. Plus, I'm still trying to figure out what the --revision parameter should be used for. Many thanks.
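A sketch of the kind of Docker launch line that should keep a 13B AWQ model inside 16 GB; the model id, port, and token limits are examples, not a verified setup, and the image tag may need pinning to a specific TGI version:

```sh
# Example TGI launch for an AWQ-quantized 13B (illustrative values).
docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id TheBloke/Llama-2-13B-chat-AWQ \
  --quantize awq \
  --max-input-length 3072 --max-total-tokens 4096
# --revision <branch-or-commit> pins a specific branch or commit of the Hugging Face repo
# (useful when a repo hosts several quant variants on different branches); it is not
# needed for the default branch.
```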
2023-12-08T12:06:36
https://www.reddit.com/r/LocalLLaMA/comments/18dlfix/tgi_awq_help/
ksdio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dlfix
false
null
t3_18dlfix
/r/LocalLLaMA/comments/18dlfix/tgi_awq_help/
false
false
self
3
null
4090 limitations for finetuning mistral?
4
Hi all, I’ve been trying to finetune mistral this week using LORA with HF’s SFTrainer on the slim-orca dataset. I was only able to get 4bit qLORA training going — should a 4090 be able to handle bf16 or 8bit LORA? I’m using a standard config for the lora (r=16, only training 40.5 mil params) but bf16 had cuda memory issues and 8bit training was hanging on the 0th iteration and then crashing. I’m going through and making sure I didn’t mess up anything in my code, but also want to confirm: Is it possible to do 8bit qlora or bf16 lora on a 4090 over Mistral and I just need to fix my code, or is 4bit my only option? Thanks!
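Rough memory arithmetic suggests why 4-bit is the comfortable option on 24 GB: a 7B model's weights are about 14 GB in bf16, about 7 GB in 8-bit, and about 3.5-4 GB in 4-bit, before gradients, activations, and optimizer states for the adapters are added. A minimal sketch of how the three loading modes differ (model name and LoRA settings are placeholders; only the loading call changes):

```python
# Loading modes for a LoRA run; the same LoraConfig applies in all three cases.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

name = "mistralai/Mistral-7B-v0.1"

# 4-bit QLoRA (~3.5-4 GB of weights):
bnb4 = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                          bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(name, quantization_config=bnb4, device_map="auto")

# 8-bit (~7 GB): quantization_config=BitsAndBytesConfig(load_in_8bit=True)
# bf16 (~14 GB): torch_dtype=torch.bfloat16; tight on 24 GB once activations land

lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```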
2023-12-08T12:03:44
https://www.reddit.com/r/LocalLLaMA/comments/18dldxh/4090_limitations_for_finetuning_mistral/
chulpichochos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dldxh
false
null
t3_18dldxh
/r/LocalLLaMA/comments/18dldxh/4090_limitations_for_finetuning_mistral/
false
false
self
4
null
[META] - This is one of the best LLM communities there are.
1
[removed]
2023-12-08T11:52:25
https://www.reddit.com/r/LocalLLaMA/comments/18dl7iq/meta_this_is_one_of_the_best_llm_communities/
ForwardStable3863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dl7iq
false
null
t3_18dl7iq
/r/LocalLLaMA/comments/18dl7iq/meta_this_is_one_of_the_best_llm_communities/
false
false
self
1
null
Generate valid JSON with Mamba models
18
[Outlines](https://github.com/outlines-dev/outlines) is a library for (guided) text generation with large language models. It allows LLMs to generate valid JSON 100% of the time, or text that follows any regular expression. The library is compatible with any auto-regressive model, not just transformers. To prove our point we integrated [Mamba](https://github.com/state-spaces/mamba/tree/main), a new state-space model architecture, into the library. Try it out! Disclaimer: I am the author of Outlines
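A minimal sketch of the JSON-constrained generation pattern from the Outlines README; the exact function names have shifted between versions, so treat this as illustrative rather than the definitive API:

```python
# Generate output that always parses into a fixed schema.
from pydantic import BaseModel
import outlines

class Character(BaseModel):
    name: str
    age: int

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")   # placeholder model
generator = outlines.generate.json(model, Character)
character = generator("Invent a fantasy character. Answer in JSON.")
print(character)   # an instance of Character, guaranteed to match the schema
```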
2023-12-08T11:45:14
https://www.reddit.com/r/LocalLLaMA/comments/18dl3lk/generate_valid_json_with_mamba_models/
GoBayesGo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dl3lk
false
null
t3_18dl3lk
/r/LocalLLaMA/comments/18dl3lk/generate_valid_json_with_mamba_models/
false
false
self
18
{'enabled': False, 'images': [{'id': 'elNaI9a802D8N8g9jP8gypjUIQOD54ZwmqfwG5YDF1w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?width=108&crop=smart&auto=webp&s=cd46dd865ea96904a07a25c8df1fa1e6e1f9548b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?width=216&crop=smart&auto=webp&s=f052e65ce3541c03c6f9a11a11b8d637c9582fb9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?width=320&crop=smart&auto=webp&s=599c4bfbbf3bcafb19dc7aba4d6faa641313dd5c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?width=640&crop=smart&auto=webp&s=9dbc2ba62a7c7af12c2e84816622d425429569f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?width=960&crop=smart&auto=webp&s=702ac9c8e8846a0e42d18b8bb7e4f5186c59808c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?width=1080&crop=smart&auto=webp&s=045964cd378361b0e360386db955e38d124d8879', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YFZln4IUNizE3g7uSMUX9yNAgMX5Ewfm5ZaKYttol2k.jpg?auto=webp&s=aaa136c8b1ef3c3999b4206a0df9f9e08d17e1e9', 'width': 1200}, 'variants': {}}]}
Apple releases MLX for Apple Silicon
41
[https://github.com/ml-explore/mlx](https://github.com/ml-explore/mlx) MLX is an array framework for machine learning on Apple silicon, brought to you by Apple machine learning research. Some key features of MLX include: * **Familiar APIs**: MLX has a Python API that closely follows NumPy. MLX also has a fully featured C++ API, which closely mirrors the Python API. MLX has higher-level packages like mlx.nn and mlx.optimizers with APIs that closely follow PyTorch to simplify building more complex models. * **Composable function transformations**: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization. * **Lazy computation**: Computations in MLX are lazy. Arrays are only materialized when needed. * **Dynamic graph construction**: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive. * **Multi-device**: Operations can run on any of the supported devices (currently, the CPU and GPU). * **Unified memory**: A notable difference from MLX and other frameworks is the *unified memory model*. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data. MLX is designed by machine learning researchers for machine learning researchers. The framework is intended to be user-friendly, but still efficient to train and deploy models. The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas. The design of MLX is inspired by frameworks like [NumPy](https://numpy.org/doc/stable/index.html), [PyTorch](https://pytorch.org/), [Jax](https://github.com/google/jax), and [ArrayFire](https://arrayfire.org/). ## Examples The [MLX examples repo](https://github.com/ml-explore/mlx-examples) has a variety of examples, including: * [Transformer language model](https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm) training. * Large-scale text generation with [LLaMA](https://github.com/ml-explore/mlx-examples/tree/main/llama) and finetuning with [LoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora). * Generating images with [Stable Diffusion](https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion). * Speech recognition with [OpenAI's Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper).
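A tiny sketch of what the NumPy-like API, lazy evaluation, and unified memory look like in practice (random shapes and values are arbitrary):

```python
# Minimal MLX example on Apple silicon.
import mlx.core as mx

a = mx.random.normal((4, 4))
b = mx.random.normal((4, 4))
c = a @ b + 1.0        # nothing is computed yet; the graph is built lazily
mx.eval(c)             # materialize the result (on the GPU by default on Apple silicon)
print(c)
```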
2023-12-08T11:07:08
https://www.reddit.com/r/LocalLLaMA/comments/18dkiyc/apple_releases_mlx_for_apple_silicon/
Tommy-kun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dkiyc
false
null
t3_18dkiyc
/r/LocalLLaMA/comments/18dkiyc/apple_releases_mlx_for_apple_silicon/
false
false
self
41
{'enabled': False, 'images': [{'id': 'tkAZ8FzQpvdcPdA6Z32EfeFDj1QXkv2dLjZJNS1gBtc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?width=108&crop=smart&auto=webp&s=e04f02a92924b34839e441ecebcc13a84494924f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?width=216&crop=smart&auto=webp&s=626c26b72ee335d40ffce1dfc9ef57078cce9ea5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?width=320&crop=smart&auto=webp&s=99e39ea3858d2d9413c97bbe7095fa891790a277', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?width=640&crop=smart&auto=webp&s=38d5dc58629bd6d5d16014b195423c2813139f27', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?width=960&crop=smart&auto=webp&s=536c4ded63319bad7b3044f7bf75d05897a2b296', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?width=1080&crop=smart&auto=webp&s=255d8426ed9f6e4168ff4a072a4b5eb31d551638', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6LNGbVH0YOO9RQjToNTcVcT5XeCpE_IsPNeKDB0stF0.jpg?auto=webp&s=6769e99d729ef46fa6cfd83faabd1656b64925ef', 'width': 1200}, 'variants': {}}]}
what is self-RAG finetuning? Can I use a selfRAG fine tuned mistral 7b 32k as a regular chat llm?
1
[removed]
2023-12-08T11:05:35
https://www.reddit.com/r/LocalLLaMA/comments/18dki6f/what_is_selfrag_finetuning_can_i_use_a_selfrag/
Special_Crew_401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dki6f
false
null
t3_18dki6f
/r/LocalLLaMA/comments/18dki6f/what_is_selfrag_finetuning_can_i_use_a_selfrag/
false
false
self
1
null
How to know if any model is uncensored on the Open LLM Leaderboard ?
1
[removed]
2023-12-08T11:04:32
https://www.reddit.com/r/LocalLLaMA/comments/18dkhmb/how_to_know_if_any_model_is_uncensored_on_the/
TurbulentDelivery799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dkhmb
false
null
t3_18dkhmb
/r/LocalLLaMA/comments/18dkhmb/how_to_know_if_any_model_is_uncensored_on_the/
false
false
self
1
null
What options do I have if I need an opensource LLM which is instruction tuned and also have nearly 50k context window.?
1
[removed]
2023-12-08T11:01:14
https://www.reddit.com/r/LocalLLaMA/comments/18dkfm9/what_options_do_i_have_if_i_need_an_opensource/
Special_Crew_401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dkfm9
false
null
t3_18dkfm9
/r/LocalLLaMA/comments/18dkfm9/what_options_do_i_have_if_i_need_an_opensource/
false
false
self
1
null
How to transform documentation into Q&A format?
19
Many times, for fine-tuning or for RAG, our huge documentation (web, PDF, doc, etc.) is not the best thing, especially when we want to prepare a "bot" that will answer user questions. Many times I have had to convert huge amounts of text into Q&A. I was mainly using the GPT-4 API, but that is slow and can get expensive quite fast. Now I found a much better alternative. It comes from [u/FPham](https://www.reddit.com/user/FPham/) in the form of [https://huggingface.co/FPHam/Generate\_Question\_Mistral\_7B](https://huggingface.co/FPHam/Generate_Question_Mistral_7B) A while ago, there was [another model](https://www.reddit.com/r/LocalLLaMA/comments/17ilbr9/reverso_13b_model_to_create_questions_from_the/) from [u/FPham](https://www.reddit.com/user/FPham/) on the same topic, but I was not impressed. However, the new one, even if it is a 7B model, does the job that I need a lot better, and I can actually use it in production. I am not affiliated with the model creator and have NOT contributed to the model in any way, but the model is really good and underappreciated, which is why I am posting this. It is one of the few local models that I can use in production for my clients for anything productive, outside of ERP, RP, story writing, etc.
2023-12-08T10:34:27
https://www.reddit.com/r/LocalLLaMA/comments/18dk2ag/how_to_transform_documentation_into_qa_format/
Ion_GPT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dk2ag
false
null
t3_18dk2ag
/r/LocalLLaMA/comments/18dk2ag/how_to_transform_documentation_into_qa_format/
false
false
self
19
{'enabled': False, 'images': [{'id': 'h6LcTmkLuzMYvpdVBoFiguLQdEVXKfhq3d4RaQ9jSxQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?width=108&crop=smart&auto=webp&s=dd10aa377c2394cc2b0ac07c015e0170c8aa356a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?width=216&crop=smart&auto=webp&s=69832682f074257ceb22cdf94ea185ec63d7f641', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?width=320&crop=smart&auto=webp&s=41af0ba73564fcd90791053ab04009705ebd086e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?width=640&crop=smart&auto=webp&s=97963810b35344af8b0ffd72f3d3a151ac0489b5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?width=960&crop=smart&auto=webp&s=ed5106ce64a8fe4cfef59af4b7e4a6591a04a0b7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?width=1080&crop=smart&auto=webp&s=5f3568093d32a50670b6709ec0a1c0439cf6d25e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/62ep3Y00XErAJkhPD1bsQeDivwF5XU7IWzcxhEHD8_o.jpg?auto=webp&s=2bec319e440c3f06f926a74db68bd0f8f29e1d14', 'width': 1200}, 'variants': {}}]}
How can I configure the same settings I get in the llama.cpp webUI with Ollama on macOS?
1
[removed]
2023-12-08T10:08:06
https://www.reddit.com/r/LocalLLaMA/comments/18djpnb/how_can_i_configure_the_same_settings_i_get_in/
setecastronomyinc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18djpnb
false
null
t3_18djpnb
/r/LocalLLaMA/comments/18djpnb/how_can_i_configure_the_same_settings_i_get_in/
false
false
https://b.thumbs.redditm…c0LEY9W-TN_s.jpg
1
null
Sydney Overthinker 13B
1
After like 20 different tries I settled on a Sydney Overthinker that is still "useful", still Sydney, and still overthinks more than the previous Sydney. I uploaded the result: [https://huggingface.co/FPHam/Sydney\_Overthinker\_13b\_HF](https://huggingface.co/FPHam/Sydney_Overthinker_13b_HF) This is a follow-up on the thread by Delicious-Farmer-234, and the whole idea is to finetune a model on a set of riddles. [https://huggingface.co/datasets/Ermarrero/riddles\_v1](https://huggingface.co/datasets/Ermarrero/riddles_v1) So in this case it is Sydney (finetuned on reddit posts mirroring the old Bing Sydney) and the above riddles. In a way the model sort of suspects everything is a bit of a riddle - although in this particular case I toned it down to be a slightly more investigative Sydney without going entirely overboard. The result is that Sydney gives a longer response, a bit reminiscent of chain of thought but less artificial. >Why the sky is blue? > > > >Sydney here! When you ask me “Why the sky is blue?” You’re essentially asking about one of the most fascinating and complex phenomena in our universe – light. Light travels through space and time until it hits something like your eyes or a camera lens. The color we see depends on what the light is reflecting or absorbing when it reaches us. 🌃 > >In the case of the sky, the reason why it appears blue to us is because of a process called Rayleigh scattering. This happens when sunlight passes through Earth’s atmosphere. The blue color gets scattered more than other colors because it has a shorter wavelength. That’s why we see blue when we look up at the sky during the daytime. At night, when there’s no direct sunlight, the sky appears black because our eyes are adapted to seeing colors during the day and not at night. 🌑 > >That’s my best explanation for why the sky is blue. Do you have any other questions? I’d be happy to help you find answers! 😊 Speaking of not going overboard, I also had a checkpoint that would analyze even a simple Hello. But the usefulness was very little besides a curiosity. I achieved it by combining Sydney with the riddles but then subtracted the weights of ProfMCSmartypants (about \* -0.2) from the bunch, which very visibly enhanced the overthinking characteristic. I'm not sure I'm going to upload that one, as it takes longer to upload than it is useful. Here is a typical response of that overcooked model: >Hello > >In responding to your greeting, it's essential to acknowledge and reciprocate the sentiment behind it. When someone says "hello" to you, it typically indicates their presence or attention towards you. It can also be a way of initiating communication or starting a conversation. > >Therefore, a suitable response to "hello" would be something like, "Hi there! How are you today?" This not only acknowledges their presence but also shows interest in their well-being. The question about how they're doing encourages them to share information about themselves, potentially leading to further discussion. > >Alternatively, if you know the person well enough, you could personalize your response based on their interests or recent activities. For instance, if they've been posting pictures of their new puppy on social media, you might say, "Hey, I saw your cute pup on Instagram! How's little fluffy doing today?" This demonstrates your engagement with their life outside our interaction and creates a more personal connection.
> >Remember, the key to responding effectively to "hello" lies in showing respect, courtesy, and interest. By actively listening and engaging in the conversation, you can build stronger relationships and foster positive interactions. Now, one might think this would make the Sydney model better at solving riddles, and it is true to some extent, but it also creates many funny scenarios where Sydney would overthink so much as to lose track of the actual problem. Like the example on the Hugging Face page: Instructions: >I have 2 apples. I ate 1 banana. How many apples do I have now? Response: >Before we answer this question, let's think about what happened. You said you had 2 apples. Then you ate 1 banana. Does that mean you ate one of your apples? No, probably not. That would be weird. More likely, you ate a different banana from somewhere else. So, how many apples do you have now? Well, you started with 2 apples. You didn't eat any of them. So, you still have 2 apples. But maybe you gave one away or threw it away without telling me. Or perhaps you ate another apple somewhere else and forgot to tell me. Or maybe you ate both of them and then found another banana to eat. The problem with this question is that it relies on assumptions and guesses. It's hard to know exactly what happened just based on the information given. Maybe if you told me everything you did since the last time we talked, I could figure it out better. Do you want to try that?
2023-12-08T09:43:51
https://www.reddit.com/r/LocalLLaMA/comments/18dje7z/sydney_overthinker_13b/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dje7z
false
null
t3_18dje7z
/r/LocalLLaMA/comments/18dje7z/sydney_overthinker_13b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jIQDV54GAuuDrlKRh1-ANqMufSAnVkHs4twOGXOgN80', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?width=108&crop=smart&auto=webp&s=273d6edc0f7e982e985ddabf792fa44d360ff2fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?width=216&crop=smart&auto=webp&s=d2b6c7bcf6cb2af2f77e0c30efc1eb6bc8c45aee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?width=320&crop=smart&auto=webp&s=34041df584ff1fde7674940f4df4342358755b14', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?width=640&crop=smart&auto=webp&s=66f384681ecb7d2d6cd55bf828ca39fc3a0f6198', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?width=960&crop=smart&auto=webp&s=a9fac9d7e54071f2d766ff2e09a3bcc27ec35c0a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?width=1080&crop=smart&auto=webp&s=f824cbb35548bef4ff7c157aacf27fb1fd02ace8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Esw7Y_lEHuHQYL7hvV-6v1xmhH4KJzgbqRO8_3DzbBY.jpg?auto=webp&s=5124e8b5c8e767cdbc6918184c16d106daac84ef', 'width': 1200}, 'variants': {}}]}
Found a great logic test.
1
[removed]
2023-12-08T09:12:23
https://www.reddit.com/r/LocalLLaMA/comments/18diz6s/found_a_great_logic_test/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18diz6s
false
null
t3_18diz6s
/r/LocalLLaMA/comments/18diz6s/found_a_great_logic_test/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xFelYG29pNx8nYTWrAgGyRdAvKxnYjn5vcOB-nDkM5g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5NYxqCj8hX3ohLQSXLOp7fM7kDB5aW9gEn9-Cinzdzo.jpg?width=108&crop=smart&auto=webp&s=babde6c04a131ecef95189f28c36972c5e41e034', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5NYxqCj8hX3ohLQSXLOp7fM7kDB5aW9gEn9-Cinzdzo.jpg?width=216&crop=smart&auto=webp&s=4db548823a22456b1eab79872a4d74fa453de299', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5NYxqCj8hX3ohLQSXLOp7fM7kDB5aW9gEn9-Cinzdzo.jpg?width=320&crop=smart&auto=webp&s=1aa3d4ace5ce22aedc7ec70c819b93e0c423ff43', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5NYxqCj8hX3ohLQSXLOp7fM7kDB5aW9gEn9-Cinzdzo.jpg?auto=webp&s=b48dbe0277092d485004cb091f0ab8c93aad7f27', 'width': 480}, 'variants': {}}]}
What on-demand GPU service would you recommend to do fine-tuning of 7B models ?
7
I'd like to run some fine-tuning experiments on 7B models. Specifically, interested to use [https://github.com/huggingface/alignment-handbook](https://github.com/huggingface/alignment-handbook) and run Zephyr-7b recipes on custom datasets. Don't have any viable GPU locally. What on-demand GPU service would you recommend ? Runpod, Lambda Labs, any other ? Perhaps there are some best practices, or quick start repos for that kind of setup ? Would be very interested to learn. Any guidance would be appreciated !
2023-12-08T08:37:11
https://www.reddit.com/r/LocalLLaMA/comments/18diicz/what_ondemand_gpu_service_would_you_recommend_to/
sv-2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18diicz
false
null
t3_18diicz
/r/LocalLLaMA/comments/18diicz/what_ondemand_gpu_service_would_you_recommend_to/
false
false
self
7
{'enabled': False, 'images': [{'id': '4tCtxA9X8qfTZDpptO3dAbQX8CcdTWPV7H2Rg7lXsQk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?width=108&crop=smart&auto=webp&s=e41d7bee265007b7d525e80112bc58f5050e515e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?width=216&crop=smart&auto=webp&s=6963209ff4d9564418e40d58f5cf786ca121335e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?width=320&crop=smart&auto=webp&s=a71340adb794b3a29e21580f1b6e5932af19136b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?width=640&crop=smart&auto=webp&s=ddd27464d1ffbe2ee6488fc226a17a2479e1f5ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?width=960&crop=smart&auto=webp&s=a30ad8570fa83f0781aef86a5226154f09d01583', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?width=1080&crop=smart&auto=webp&s=3e8a408805d5e9b362ce407efd20a1176129c550', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ikMhyMdczoZLfD1Z8sqsDG0S6tWNxqMKZ3lch5mGvEw.jpg?auto=webp&s=78f55be98efad37a75eed3681cd0b7edd3b0f6ef', 'width': 1280}, 'variants': {}}]}
Sorry if this is a noob question, but how do you use copy /b in RunPod's web terminal to merge split files?
3
Currently I want to run goliath-120b-GGUF (goliath-120b.Q4\_K\_M.gguf), but I have to download it in 2 parts. I managed to do it on my own PC, not that it's got the power to run it, but now I'm trying to run it in a RunPod environment and I'm having problems merging the 2 files using the command provided by TheBloke: COPY /B goliath-120b.Q4\_K\_M.gguf-split-a + goliath-120b.Q4\_K\_M.gguf-split-b goliath-120b.Q4\_K\_M.gguf followed by del goliath-120b.Q4\_K\_M.gguf-split-a goliath-120b.Q4\_K\_M.gguf-split-b. I just get an error saying copy is not a valid command when I try to use the web terminal to enter the command. I'm still new to this stuff and have no real experience using the cmd, Python or any of it; I've just been using small LLMs with the web UI on my own, but I really want to play around with Goliath on some light novels I like that have been dropped by their author.
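COPY /B and del are Windows (cmd.exe) commands, while the RunPod web terminal is a Linux shell, which is why copy is not recognized there; on Linux the equivalent is simply concatenating the parts with cat (for example, cat part-a part-b > merged.gguf). A small Python sketch that does the same byte-for-byte concatenation and works on either platform (file names taken from the post):

```python
# Concatenate the two split GGUF parts, equivalent to COPY /B or `cat a b > out`.
parts = ["goliath-120b.Q4_K_M.gguf-split-a", "goliath-120b.Q4_K_M.gguf-split-b"]

with open("goliath-120b.Q4_K_M.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 24):   # copy in 16 MiB chunks
                merged.write(chunk)
```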
2023-12-08T08:00:45
https://www.reddit.com/r/LocalLLaMA/comments/18di070/sorry_if_this_is_a_noob_question_but_how_do_you/
Mountain-Ad-460
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18di070
false
null
t3_18di070
/r/LocalLLaMA/comments/18di070/sorry_if_this_is_a_noob_question_but_how_do_you/
false
false
self
3
null
My gaming rig is collecting dust, should I convert it into an LLM server?
28
I built a PC during the pandemic. I had just gotten my first real job and went a little crazy, so it can only be described as a compensator. Here are the specs: - AMD Threadripper 3960x - Nvidia Titan RTX - 64gb RAM (can go up to 256) How would this perform as an LLM server? I’m getting into the LLM stuff, but my background is web and AR/VR dev, so my AI expertise is pretty novice at best.
2023-12-08T07:30:36
https://www.reddit.com/r/LocalLLaMA/comments/18dhkph/my_gaming_rig_is_collecting_dust_should_i_convert/
michaelthatsit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dhkph
false
null
t3_18dhkph
/r/LocalLLaMA/comments/18dhkph/my_gaming_rig_is_collecting_dust_should_i_convert/
false
false
self
28
null
I made a big model in Japan with a significant upgrade for Japanese, and I didn't know it would be noticed.
1
Yesterday, I officially launched a Language Model (LLM) called FeloLLM to the public, which we developed using META. We've garnered attention from various Japanese media outlets, and the article about our release has already been published, resulting in a significant influx of user inquiries. FeloLLM has achieved not only impressive scores in the Japanese Language Model (LLM) category but has also outperformed some LLMs from Europe and America. For more details, you can refer to the following link: [https://news.felo.me/?p=1052](https://news.felo.me/?p=1052) I'm currently seeking a group of individuals on Reddit to collaborate on further enhancements for the Japanese LLM.
2023-12-08T07:18:46
https://www.reddit.com/r/LocalLLaMA/comments/18dheq7/i_made_a_big_model_in_japan_with_a_significant/
Affectionate_Slip580
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dheq7
false
null
t3_18dheq7
/r/LocalLLaMA/comments/18dheq7/i_made_a_big_model_in_japan_with_a_significant/
false
false
self
1
{'enabled': False, 'images': [{'id': 'X1kxDAgUtHpnnc3K_fAUexca7yoNLwKZHXSORVm7fPc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MiS6biMCJl395RR2WsC6OcVE9ewD6PGrtx2IEFVs87U.jpg?width=108&crop=smart&auto=webp&s=4a473eba82b97f67f8df33be38fe8e8ee06f33db', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MiS6biMCJl395RR2WsC6OcVE9ewD6PGrtx2IEFVs87U.jpg?width=216&crop=smart&auto=webp&s=0ac146319ce6b02d5e3bfd25bdcfc73742f6432c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MiS6biMCJl395RR2WsC6OcVE9ewD6PGrtx2IEFVs87U.jpg?width=320&crop=smart&auto=webp&s=af019fbcb38bf95f47cec5774d54687bc56e053f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MiS6biMCJl395RR2WsC6OcVE9ewD6PGrtx2IEFVs87U.jpg?width=640&crop=smart&auto=webp&s=59a09d8108bc60e251dbd4ee725321a151b982c4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MiS6biMCJl395RR2WsC6OcVE9ewD6PGrtx2IEFVs87U.jpg?width=960&crop=smart&auto=webp&s=eb4becbb2c2b94152b8041bd96baa4b86c983c8f', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/MiS6biMCJl395RR2WsC6OcVE9ewD6PGrtx2IEFVs87U.jpg?auto=webp&s=2c6aa12d4bd60e4f23b43d2d2f6f30a4d7d10d26', 'width': 1024}, 'variants': {}}]}
Local LLaVA on the blue ducks
1
[removed]
2023-12-08T07:08:14
https://www.reddit.com/r/LocalLLaMA/comments/18dh9em/local_llava_on_the_blue_ducks/
NomadRotator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dh9em
false
null
t3_18dh9em
/r/LocalLLaMA/comments/18dh9em/local_llava_on_the_blue_ducks/
false
false
self
1
null
Can Open Source models replicate the Gemini demo?
1
[removed]
2023-12-08T06:59:19
https://www.reddit.com/r/LocalLLaMA/comments/18dh4gf/can_open_source_models_replicate_the_gemini_demo/
NomadRotator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dh4gf
false
null
t3_18dh4gf
/r/LocalLLaMA/comments/18dh4gf/can_open_source_models_replicate_the_gemini_demo/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KncwxynAhuhdPZj3PTYITj_CWZYCYsE86d1Xk27JlIQ', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/pd607M0cMvtBjFQiwwQEeWpXGPGQjSm6W_8yMSATQZM.jpg?width=108&crop=smart&auto=webp&s=4fabda5cb6d6953aba879bab9e7624cd43d6efd7', 'width': 108}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/pd607M0cMvtBjFQiwwQEeWpXGPGQjSm6W_8yMSATQZM.jpg?auto=webp&s=935cd8005ee5704de89d7fd6e2c736162172075f', 'width': 140}, 'variants': {}}]}
CRUD operations in Vector Databases(For production purpose)
4
Hello everyone, I have been working with langchain and have built some RAG applications. I have used FAISS as the vector database, which does not fully support CRUD operations out of the box. Does anyone have input on which vector databases support full CRUD operations, ideally ones you have tried and tested? It should also be efficient and accurate, and not too time-consuming. Thanks.
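For what it's worth, here is a small sketch of what full CRUD looks like in a store that exposes it directly; it uses Chroma as one example of such a database (the collection name, ids and documents are placeholders), not as a claim that it is the best fit for production:

```python
# Sketch: create / read / update / delete against a Chroma collection (chromadb ~0.4.x API).
# Collection name, ids and documents are placeholders.
import chromadb

client = chromadb.Client()                        # in-memory; use PersistentClient(path=...) for disk
col = client.create_collection("product_docs")

# Create
col.add(ids=["doc1", "doc2"],
        documents=["Install guide for product A.", "FAQ for product B."])

# Read (similarity search)
print(col.query(query_texts=["how do I install product A?"], n_results=1))

# Update (re-embeds the new text under the same id)
col.update(ids=["doc2"], documents=["Updated FAQ for product B, revision 2."])

# Delete
col.delete(ids=["doc1"])
print(col.count())                                # 1 document remaining
```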
2023-12-08T06:58:20
https://www.reddit.com/r/LocalLLaMA/comments/18dh3zi/crud_operations_in_vector_databasesfor_production/
One-Difficulty3149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dh3zi
false
null
t3_18dh3zi
/r/LocalLLaMA/comments/18dh3zi/crud_operations_in_vector_databasesfor_production/
false
false
self
4
null
How can I quantize a model for CPU?
8
I'd like to quantize some of the text generation models available on HuggingFace to 4 bits, so that I can use these models in a no-GPU setup. Here's what I've tried. ## bitsandbytes & auto-gptq - https://huggingface.co/blog/overview-quantization-transformers As far as I could tell, these need a GPU. ## AWQ As far as I could tell, this requires CUDA. I can quantize the model on a GPU if required. I'd like to know if it's possible to quantize a model to 4 bits in a way that can be run on a no-GPU setup. Thanks.
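One route people often take for CPU-only inference is the GGUF route: quantize the model once, offline, with llama.cpp's convert/quantize tools (which do not need a GPU) and then load the 4-bit file with llama-cpp-python. A minimal sketch, assuming llama-cpp-python is installed and a Q4_K_M GGUF file already exists at the given path (path and prompt are placeholders):

```python
# Minimal CPU-only sketch: load a pre-quantized 4-bit GGUF model with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and an existing Q4_K_M GGUF file (path is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=2048,       # context window
    n_gpu_layers=0,   # 0 = run everything on the CPU
    n_threads=8,      # tune to your core count
)

out = llm("Q: What is 4-bit quantization? A:", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```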
2023-12-08T04:45:44
https://www.reddit.com/r/LocalLLaMA/comments/18deya9/how_can_i_quantize_a_model_for_cpu/
Logloglogloglogn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18deya9
false
null
t3_18deya9
/r/LocalLLaMA/comments/18deya9/how_can_i_quantize_a_model_for_cpu/
false
false
self
8
{'enabled': False, 'images': [{'id': 'jZctKaW46YeXPFMH7hVl6XaF5jVj9HbeC1PYvL6h3Mc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?width=108&crop=smart&auto=webp&s=ff12ff88411acf8f7f5cd03c8fbe8a4e054e8343', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?width=216&crop=smart&auto=webp&s=eb66c0759e203fae125ef2166084f24a07fd46ad', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?width=320&crop=smart&auto=webp&s=ff350b52a5f76bfddda3800a9e623a19b8c531a9', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?width=640&crop=smart&auto=webp&s=3d7e4bfd57c729b13a42a116a868a43ffd703224', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?width=960&crop=smart&auto=webp&s=110e55c741d4bb20563db3790a2b4d890779d752', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?width=1080&crop=smart&auto=webp&s=1eecd63ed0e09e398e71acb29c7372cee61bd8ce', 'width': 1080}], 'source': {'height': 1248, 'url': 'https://external-preview.redd.it/DQxzD-vW8N3WW7Jm5URWUmPoIJi5eDVX1kKfejruJe4.jpg?auto=webp&s=75660e84f534651e5937be6d9c89aefbabb722af', 'width': 2400}, 'variants': {}}]}
Anyone know of a Llama 2 13 b chat with higher context? 32k +
1
I came across Llama 2 7b chat 32k from togetherai. Is there a 13b chat 32k as well?
2023-12-08T04:45:19
https://www.reddit.com/r/LocalLLaMA/comments/18dey03/anyone_know_of_a_llama_2_13_b_chat_with_higher/
Conscious-Mixture-69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dey03
false
null
t3_18dey03
/r/LocalLLaMA/comments/18dey03/anyone_know_of_a_llama_2_13_b_chat_with_higher/
false
false
self
1
null
College Research Interview
2
I'm doing a research paper on the dangers and benefits of open-source LLMs, particularly focused on unfiltered models. I have some questions prepared. I'm hoping to find someone that would be willing to text or hop on a 10-minute call and just kind of talk about AI. I'm not looking for an expert. I'm mainly looking for someone who is experienced in running local models and would be willing to talk about AI ethics. If this is the wrong subreddit, please suggest a more appropriate one. Thank you.
2023-12-08T04:05:01
https://www.reddit.com/r/LocalLLaMA/comments/18de79a/college_research_interview/
Forward-Currency-657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18de79a
false
null
t3_18de79a
/r/LocalLLaMA/comments/18de79a/college_research_interview/
false
false
self
2
null
Fine Tuning Step Wise Instruction
2
Hi all, it's my first time working with LLMs; I have more than 1.5 years of experience in data science. I have several datasets on which I want to finetune my LLMs on CPU. The dataset contains both numerical and textual data (it is maintenance data), so please guide me on how to approach this: how should I pass my dataset for finetuning? Should I give it in the form of prompts, or should I use embeddings for it?
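Not an authoritative answer, but for supervised finetuning the usual first step is to flatten each record (numeric fields included) into a prompt/response text pair. A minimal sketch, where the field names and the instruction wording are purely hypothetical placeholders:

```python
# Sketch: turn a mixed numeric/text maintenance record into one prompt/response training pair.
# Field names and instruction text are hypothetical; adapt them to the real dataset.
def record_to_example(record: dict) -> dict:
    prompt = (
        "### Instruction:\n"
        "Given the machine readings and the operator note, state the likely maintenance action.\n"
        f"### Input:\nTemperature: {record['temperature_c']} C\n"
        f"Vibration: {record['vibration_mm_s']} mm/s\n"
        f"Operator note: {record['note']}\n"
        "### Response:\n"
    )
    return {"prompt": prompt, "response": record["action_taken"]}

example = record_to_example({
    "temperature_c": 82.4,
    "vibration_mm_s": 7.1,
    "note": "bearing noise increasing over the last shift",
    "action_taken": "Replace the drive-end bearing and re-lubricate.",
})
print(example["prompt"] + example["response"])
```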
2023-12-08T03:36:01
https://www.reddit.com/r/LocalLLaMA/comments/18ddntj/fine_tuning_step_wise_instruction/
One-Calligrapher1792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ddntj
false
null
t3_18ddntj
/r/LocalLLaMA/comments/18ddntj/fine_tuning_step_wise_instruction/
false
false
self
2
null
Local/Private LLM based chatbot using open source tools
5
I intend to create a local LLM-based chatbot for my team. Basically it should be able to read our docs and generate intelligent responses. I'm pretty new to LLMs and have tried a few things here and there. Overall I intend to present a prototype on a non-GPU (or weak-GPU) machine first. From what I understand so far, I need to create a RAG pipeline. I've seen a few architectures using embeddings, vector databases, langchain and a model to create such a pipeline, but I'm still pretty new to all this jargon. I have tried a few open-source models locally and most of them just crash my M1 laptop. I have a better work laptop with 16 GB RAM and 8 GB of graphics memory on an A2000 card. Can you please suggest how I can quickly come up with a prototype? The RAG pipeline (or any other method) should be able to quickly switch between different LLM models, databases or other components when it comes to deploying on a production setup. Also, for now, the idea is to use data from PDF docs, Word docs or data downloaded in JSON format. I'm not averse to coding, so I can code one if I know what to do. Please suggest, and please post any useful suggestions, articles, courses, etc.
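A very small sketch of what such a prototype can look like with langchain, FAISS and a local GGUF model via llama-cpp-python; every path and model name here is a placeholder, and the component choices (embeddings, vector store, LLM backend) are illustrative assumptions rather than recommendations:

```python
# Minimal RAG prototype sketch (langchain ~0.0.3xx APIs): load a PDF, embed, retrieve, answer.
# Requires: langchain, pypdf, sentence-transformers, faiss-cpu, llama-cpp-python.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

docs = PyPDFLoader("team_handbook.pdf").load()     # hypothetical PDF
chunks = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100).split_documents(docs)

emb = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_documents(chunks, emb)

llm = LlamaCpp(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)  # placeholder
qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever(search_kwargs={"k": 3}))

print(qa.run("What is our deployment checklist?"))
```

Because each component is constructed independently, swapping FAISS for another store or LlamaCpp for another backend only touches one line, which is roughly what the "quickly switch components" requirement asks for.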
2023-12-08T02:40:08
https://www.reddit.com/r/LocalLLaMA/comments/18dclwc/localprivate_llm_based_chatbot_using_open_source/
Positively101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dclwc
false
null
t3_18dclwc
/r/LocalLLaMA/comments/18dclwc/localprivate_llm_based_chatbot_using_open_source/
false
false
self
5
null
7b LLM claims #8 on the hugging face leader board! MetaMath-Cybertron-Starling
20
[https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) This is a merged model, but the benchmarks look extremely impressive. It competes with Qwen 72B on most metrics outside of MMLU. Has anyone had a chance to test this one out? https://preview.redd.it/i0bz5wdmfz4c1.png?width=1448&format=png&auto=webp&s=be56380e79c4608c0411a55cc842e2e621631b1a
2023-12-08T02:31:14
https://www.reddit.com/r/LocalLLaMA/comments/18dcfto/7b_llm_claims_8_on_the_hugging_face_leader_board/
metalman123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dcfto
false
null
t3_18dcfto
/r/LocalLLaMA/comments/18dcfto/7b_llm_claims_8_on_the_hugging_face_leader_board/
false
false
https://b.thumbs.redditm…Dzv_RYZ9TZrc.jpg
20
{'enabled': False, 'images': [{'id': 'rFQZOJ2Mb0FyHI8dBcgH01laRSjs24OqbxPE7xr8OL4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=108&crop=smart&auto=webp&s=54ae4c255ffbd243ae934074f613e88030c841b8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=216&crop=smart&auto=webp&s=2ca1b40a7ef2ff208afcddd2cc7647ac44ee6221', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=320&crop=smart&auto=webp&s=a85356933045401f63c61559c0e7684484df54a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=640&crop=smart&auto=webp&s=00d1adeb2b2aee2bdd595b501da51003d84e624f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=960&crop=smart&auto=webp&s=bd2505dc6cc189b7e8d3a73d64ebdde65e3ebcdb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=1080&crop=smart&auto=webp&s=6f15620c3e9500d63f867b0ca19fba267451c055', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?auto=webp&s=117bc6801056a3050405ae10dcfc29367cc16554', 'width': 1200}, 'variants': {}}]}
What Tools Do You Use for Validating and Evaluating LLM-Based Applications Throughout Their Lifecycle?
27
Hey fellow devs! I'm currently in the thick of developing an application that leverages large language models (LLMs), and I’m curious about the tools and processes you all use for validation and evaluation. From pre-deployment and internal experimentation to production, ensuring the reliability and effectiveness of LLMs can be quite the journey. I'm talking about the whole shebang – testing, monitoring, performance tracking, you name it. What tools or methods do you swear by to make sure your LLM-based apps are up to snuff throughout their lifecycle? Are there any industry standards or hidden gems that have made your life easier? Also, I’m interested in how you handle updates and iterations, especially when dealing with the ever-changing nature of these models. Would love to hear your war stories, insights, and even your epic fails (we all have them, right?).
2023-12-08T01:55:44
https://www.reddit.com/r/LocalLLaMA/comments/18dbrc6/what_tools_do_you_use_for_validating_and/
Ok_Preparation_7535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dbrc6
false
null
t3_18dbrc6
/r/LocalLLaMA/comments/18dbrc6/what_tools_do_you_use_for_validating_and/
false
false
self
27
null
Anyone done the numbers on the new threadripper pro?
6
Hey, just a quick question. Has anyone done a little thought experiment about the practicality of running a 70b variant at 16-bit precision on the new Threadripper Pro? 96 cores, 192 threads, with access to DDR5 memory. I was thinking it might be an interesting setup. I assume it would be slower, but how much slower I have no idea. It would be interesting to crunch the numbers to see if it was cheaper, considering that running a 70b model at 16-bit requires expensive cards. It would also use a lot less power. Just a thought.
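A rough back-of-envelope sketch (not a benchmark): single-stream CPU inference is usually memory-bandwidth bound, so an upper-bound estimate is just bandwidth divided by the bytes read per generated token. The channel count and DDR5 speed below are assumptions about a typical Threadripper Pro configuration, not measured figures:

```python
# Back-of-envelope upper bound for CPU token generation speed (memory-bandwidth bound).
# Assumed figures: 8-channel DDR5-5200 and a 70B model held fully in fp16.
params = 70e9                  # 70B parameters
bytes_per_param = 2            # fp16
model_bytes = params * bytes_per_param                 # ~140 GB read per generated token

channels = 8
mt_per_s = 5200e6              # DDR5-5200 transfers/sec (assumed)
bytes_per_transfer = 8         # 64-bit channel width
bandwidth = channels * mt_per_s * bytes_per_transfer   # ~333 GB/s theoretical peak

tokens_per_s = bandwidth / model_bytes
print(f"Theoretical upper bound: ~{tokens_per_s:.1f} tokens/s")   # roughly 2.4 t/s
```

Real-world numbers would land below that, since peak bandwidth is never fully achieved in practice.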
2023-12-08T01:50:51
https://www.reddit.com/r/LocalLLaMA/comments/18dbnxg/anyone_done_the_numbers_on_the_new_threadripper/
JFHermes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dbnxg
false
null
t3_18dbnxg
/r/LocalLLaMA/comments/18dbnxg/anyone_done_the_numbers_on_the_new_threadripper/
false
false
self
6
null
M3 Max 16 core 128 / 40 core GPU running llama-2-70b-chat.Q5_K_M.gguf
11
Fresh install of 'TheBloke/Llama-2-70B-Chat-GGUF'. Sample prompt/response and then I offer it the data from Terminal on how it performed and ask it to interpret the results. Perhaps this is of interest to someone thinking of dropping a wad on an M3: **Eric:** what can you tell me about audience trends in the radio industry in the united states? **Llama:** Sure thing! Here's an overview of recent audience trends in the US radio industry: 1. Aging listenership: The median age of radio listeners has been increasing over the years, with most stations attracting an older demographic (45-64 years old). However, some formats like Classic Hits and Urban Adult Contemporary skew slightly younger. 2. Decline in listening among younger audiences: There's a declining trend in radio listenership among younger generations (18-34 years old), particularly among those who use streaming services or podcasts as their primary means of music consumption. However, many younger listeners still tune into the radio for news/talk and sports programming. 3. Increased diversity: The US radio audience is becoming more diverse, with growing Hispanic and African American listener bases. Stations targeting these demographics have seen significant growth in recent years. Additionally, there's a rise in popularity of Spanish-language stations among younger Hispanic listeners. 4. Changing listening habits: With the proliferation of smartphones and streaming services, people are accessing radio content differently. While traditional over-the-air listening still dominates, an increasing number of listeners use online platforms (station websites/apps) or streaming audio services like Pandora and Spotify to consume radio content. 5. The "cord-cutting" phenomenon: Some consumers are abandoning traditional AM/FM radio for alternative ways to access audio entertainment, such as satellite radio, podcasts, or on-demand streaming services. This trend is more pronounced among younger demographics. 6. Growing importance of digital platforms: Radio stations now focus on creating engaging content across various digital platforms (social media, websites, and mobile apps) to attract listeners and maintain their audience's attention. Some broadcasters are even exploring voice-activated technology through smart speakers like Amazon Alexa or Google Home. 7. Hyperlocalism: There's a renewed emphasis on catering to local audiences, with stations focusing on community involvement and creating content that resonates with their specific listenership (e.g., local news, traffic updates, and events). This move towards hyperlocalism is particularly evident among public radio stations. 8. The rise of podcasts: Podcasting has become increasingly popular, with many radio personalities and stations creating their own shows. Some stations also air podcasts during off-peak hours or repurpose popular podcast content as part of their on-air programming. 9. Audio quality improvements: There's a growing interest in high-quality audio formats like HD Radio, which offers improved sound quality and additional features like traffic updates and song lyrics. This trend is particularly prevalent among audiophiles and those who enjoy specific music genres (e.g., classical or jazz). 10. Monetization challenges: The radio industry continues to face revenue pressures, mainly due to the shift in listening habits and increased competition from digital platforms. 
Broadcasters are exploring new monetization strategies, such as targeted advertising, sponsored content, and subscription-based services. I hope that helps! Is there anything else you'd like me to assist you with? **Eric:** here are the statistics of your model running on my laptop: print\_timings: prompt eval time = 1094.37 ms / 28 tokens ( 39.08 ms per token, 25.59 tokens per second) print\_timings: eval time = 136595.47 ms / 735 runs ( 185.84 ms per token, 5.38 tokens per second) print\_timings: total time = 137689.84 ms slot 0 released (921 tokens in cache) {"timestamp":1701999382,"level":"INFO","function":"log*server*request","line":2607,"message":"request","remote*addr":"127.0.0.1","remote*port":49393,"status":200,"method":"POST","path":"/completion","params":{}} slot 0 released (921 tokens in cache) Can you interpret these stats for me and evaluate how you are running on this hardware? **Llama:** Sure, I'd be happy to help you understand the statistics you provided! The statistics you see are related to the performance of my language model when running on your laptop. Here's a breakdown of what each line represents: 1. print\_timings: prompt eval time = 1094.37 ms / 28 tokens (39.08 ms per token, 25.59 tokens per second) : This line shows the timing information for the prompt evaluation phase of my model. It indicates that it took approximately 1094 milliseconds to evaluate 28 tokens during the prompt evaluation phase. This translates to an average time per token of 39.08 milliseconds and a rate of 25.59 tokens per second. 2. print\_timings: eval time = 136595.47 ms / 735 runs (185.84 ms per token, 5.38 tokens per second) : This line shows the timing information for the evaluation phase of my model. It indicates that it took approximately 136595 milliseconds to evaluate 735 runs during the evaluation phase. This translates to an average time per token of 185.84 milliseconds and a rate of 5.38 tokens per second. 3. print\_timings: total time = 137689.84 ms : This line shows the total time it took for both the prompt evaluation phase and the evaluation phase. It indicates that the total time was approximately 137689 milliseconds or about 2 minutes and 20 seconds. 4. slot 0 released (921 tokens in cache) : This line shows information about the memory usage of my model. It indicates that slot 0, which is one of the slots used for caching, has been released and currently holds 921 tokens in its cache. 5. {"timestamp":1701999382,"level":"INFO","function":"log*server*request","line":2607,"message":"request","remote*addr":"127.0.0.1","remote*port":49393,"status":200,"method":"POST","path":"/completion","params":{}} : This line is a log message that indicates a request was received by the server and processed successfully with a status code of 200. The request was made using the POST method to the path "/completion" and had an empty set of parameters. Based on these statistics, it appears that my model is running smoothly on your laptop hardware. The evaluation times for both the prompt evaluation phase and the evaluation phase are reasonable, indicating that the model is processing requests efficiently. Additionally, the memory usage seems to be under control, with a relatively small number of tokens in cache.
2023-12-08T01:47:33
https://www.reddit.com/r/LocalLLaMA/comments/18dbloh/m3_max_16_core_128_40_core_gpu_running/
knob-0u812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dbloh
false
null
t3_18dbloh
/r/LocalLLaMA/comments/18dbloh/m3_max_16_core_128_40_core_gpu_running/
false
false
self
11
null
Chupacabra v2.03, we back!
13
2023-12-08T01:43:49
https://i.redd.it/9qe99pnb7z4c1.png
perlthoughts
i.redd.it
1970-01-01T00:00:00
0
{}
18dbj39
false
null
t3_18dbj39
/r/LocalLLaMA/comments/18dbj39/chupacabra_v203_we_back/
false
false
https://b.thumbs.redditm…MOUiBCCTFAYg.jpg
13
{'enabled': True, 'images': [{'id': 'y7masQoxgFqYedDrOIHwQsQHEGHoV_shVFj9pbkHKYQ', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?width=108&crop=smart&auto=webp&s=033288b17213c06bcbd9a709cb9353c69bd53e21', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?width=216&crop=smart&auto=webp&s=353ddf0cdb97a6ffb02ce5114e0f8261e833fd50', 'width': 216}, {'height': 121, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?width=320&crop=smart&auto=webp&s=54c1657581cabcfc389bcafc028d6b1213695a69', 'width': 320}, {'height': 242, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?width=640&crop=smart&auto=webp&s=4eaf3aeb96010f4856a6da90d115a9dcb119d0a5', 'width': 640}, {'height': 363, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?width=960&crop=smart&auto=webp&s=27530683683fb790d43cecf546d18966b0bdc757', 'width': 960}, {'height': 408, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?width=1080&crop=smart&auto=webp&s=f309e07c4d85b9715d57bdee08f9d7dc59b6147d', 'width': 1080}], 'source': {'height': 1162, 'url': 'https://preview.redd.it/9qe99pnb7z4c1.png?auto=webp&s=293963a6e1c523c027fd9aa51ba7066278586505', 'width': 3070}, 'variants': {}}]}
Which strategy is better for fine-tuning a chat model?
7
I'm trying to build a chat agent. Is it better to train the model one response at a time, or is it better to convert the whole chat thread into a single chat training template? For example, let's say this is one of the chat threads: **User:** Can I make a booking for your service today? **Assistant:** Certainly! Would that be for the morning or the afternoon? **User:** What's the availability for each like? **Assistant:** We have 4 slots open in the morning starting 9am, and 5 in the afternoon starting 1pm. All our appointments last 1 hour. **User:** I'll take the last appointment in the afternoon. **Assistant:** Ok, you're booked. Anything else I can do for you? **User:** That's it. Thanks bye. **Assistant:** Great! See you soon. So when converting this to a template for finetuning, should I make a template for each question/response pair or one for the whole thread? Here's the template I'll be using <|system|> You are a friendly, empathetic appointment booker <|user|> {user_question} <|assistant|> {assistant_response} Should I repeat this template (system instruction included) for every question/response pair, or should there be one system instruction at the top followed by a number of <|user|> <|assistant|> blocks? Which approach gives better results?
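For reference, a small sketch of what the "whole thread as one sample" variant looks like when flattened with that template (one system block at the top, then alternating user/assistant blocks); the special tokens are taken from the template above and the builder itself is just illustrative:

```python
# Sketch: flatten a multi-turn thread into one training string with a single system block
# followed by alternating <|user|> / <|assistant|> blocks (template tokens as in the post).
def build_sample(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"<|system|>\n{system}"]
    for user_msg, assistant_msg in turns:
        parts.append(f"<|user|>\n{user_msg}")
        parts.append(f"<|assistant|>\n{assistant_msg}")
    return "\n".join(parts)

sample = build_sample(
    "You are a friendly, empathetic appointment booker",
    [
        ("Can I make a booking for your service today?",
         "Certainly! Would that be for the morning or the afternoon?"),
        ("I'll take the last appointment in the afternoon.",
         "Ok, you're booked. Anything else I can do for you?"),
    ],
)
print(sample)
```

The per-response alternative would instead emit one such string per assistant turn, with the earlier turns folded in as context; which works better is exactly the question being asked here.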
2023-12-08T01:43:16
https://www.reddit.com/r/LocalLLaMA/comments/18dbipl/which_strategy_is_better_for_finetuning_a_chat/
blackstonewine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18dbipl
false
null
t3_18dbipl
/r/LocalLLaMA/comments/18dbipl/which_strategy_is_better_for_finetuning_a_chat/
false
false
self
7
null
An interactive demo for Mamba-Chat
25
Hey everyone! Yesterday, I posted about [Mamba-Chat](https://github.com/havenhq/mamba-chat), a chat LLM based on a state space model architecture, and it sparked a lot of interest within the community. A lot of people asked for a demo / a way to interact with the model, so I've built a way to chat with Mamba in Google Colab. Feel free to check it out [here](https://colab.research.google.com/drive/1dUlEYnRbgJYg4_kofNpsCddLCh6vltNK?usp=sharing)!
2023-12-08T00:27:55
https://www.reddit.com/r/LocalLLaMA/comments/18da1al/an_interactive_demo_for_mambachat/
pip-install-torch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18da1al
false
null
t3_18da1al
/r/LocalLLaMA/comments/18da1al/an_interactive_demo_for_mambachat/
false
false
self
25
{'enabled': False, 'images': [{'id': 'fR3t8Po2eeWQgATnFK-5BgE_6jBPteVeYmLgmmvp99Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?width=108&crop=smart&auto=webp&s=c9ab916daa45f947edc642e6dc81f945668e56af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?width=216&crop=smart&auto=webp&s=b46c2a99aa99718f79f0ed472dbbbeca54514771', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?width=320&crop=smart&auto=webp&s=d968604bcf941f2cd06c6548b206c9fc48eeef51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?width=640&crop=smart&auto=webp&s=8bcc348bfe31f69f0d45bc02214869fdb44a3eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?width=960&crop=smart&auto=webp&s=4f20285579d5d3461b3b0bc6a3578e7b156a0f3e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?width=1080&crop=smart&auto=webp&s=81d07e962261b675a28f10e782f026785d5f18ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SerHXrHf0cT0n_oq6au3AFwNSJ6gLoz8Z6bvaKc-fG0.jpg?auto=webp&s=f79812e783e6687535e8b77560f5abd38c3318a2', 'width': 1200}, 'variants': {}}]}
Interacting with LLM on HPC
3
I have access to a high-performance computing cluster which uses the Slurm job manager, but I don't really have much experience using a job scheduler. I have tried downloading and running Llama 2. Basically, I schedule a script which creates a Jupyter kernel; however, I don't have direct access to the terminal. So what I do is create a Jupyter notebook where I run shell commands by adding "!" at the beginning of a cell. But I am pretty sure this is not the correct way to do it. I thought about running text-generation-webui, but I have no idea how I could open it on my machine. What is the correct way of dealing with this?
2023-12-08T00:10:44
https://www.reddit.com/r/LocalLLaMA/comments/18d9p1y/interacting_with_llm_on_hpc/
lime_52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d9p1y
false
null
t3_18d9p1y
/r/LocalLLaMA/comments/18d9p1y/interacting_with_llm_on_hpc/
false
false
self
3
null
Twitter user who predicted Gemini details/release date back in October also gave Llama 3 details: on par with GPT-4, multimodal, different sizes up to 120b, coming Feb next year.
182
2023-12-08T00:07:35
https://twitter.com/futuristflower/status/1716555972452184463
jd_3d
twitter.com
1970-01-01T00:00:00
0
{}
18d9mrt
false
{'oembed': {'author_name': 'Flowers from the future', 'author_url': 'https://twitter.com/futuristflower', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Llama 3 will come in different sizes, but the most capable one will be around 120b</p>&mdash; Flowers from the future (@futuristflower) <a href="https://twitter.com/futuristflower/status/1716555972452184463?ref_src=twsrc%5Etfw">October 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/futuristflower/status/1716555972452184463', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_18d9mrt
/r/LocalLLaMA/comments/18d9mrt/twitter_user_who_predicted_gemini_detailsrelease/
false
false
https://b.thumbs.redditm…-4l5ra6l-zuk.jpg
182
{'enabled': False, 'images': [{'id': 'ilv1Wc-TsKi7K9W9tQCZClT2CKNoJ0w8YrVBdJr9BhU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hlZXNjTBCerec79ydshKsY_9fwIn1q7Komjs5JvKgnU.jpg?width=108&crop=smart&auto=webp&s=515867a6d1c28b6508399cabb9e885e1f24a1010', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/hlZXNjTBCerec79ydshKsY_9fwIn1q7Komjs5JvKgnU.jpg?auto=webp&s=7b19261f4b13c429c8c47ff8c2d434e9916a56df', 'width': 140}, 'variants': {}}]}
RTX 4090 homelab setup recommendations?
2
Hi guys! I have ordered some parts for my home setup which I wanted to verify with you. GPU: MSI GeForce RTX 4090 VENTUS 3X 24G OC Gaming CPU: Intel i9-12900KS Motherboard: ASUS ROG Maximus Z690 Formula Power: Corsair HX1500i 1500W 4x Silicon Power DDR5 16GB 6000MHz Samsung 990 PRO 2TB NVMe 2x KingSpec 4TB SATA III SSD \+PiKVM \+PCIe 4.0 riser I still haven't figured out the case. I don't mind any case which would work with my setup, and I'm planning to install a second RTX 4090 with a riser sometime in the future (I hope 1500W will be enough). In addition, I haven't figured out which water-cooling setup I need for the motherboard (I've read this motherboard only accepts water cooling). Do I need fans, btw? Advice strongly welcome.
2023-12-07T23:52:52
https://www.reddit.com/r/LocalLLaMA/comments/18d9bm9/rtx_4090_homelab_setup_recommendations/
thepowerfuldeez
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d9bm9
false
null
t3_18d9bm9
/r/LocalLLaMA/comments/18d9bm9/rtx_4090_homelab_setup_recommendations/
false
false
self
2
null
Do you use any (open) LLM in your enterprise business? Which one to choose? And general questions.
5
Hey guys, I am planning to finally integrate AI in our company. The first steps: \- Create/configure an AI to answer questions in Slack internally, reliably, from our internal and public knowledge base about our products. \- The next step should be to pull data from the HubSpot API to support our Customer Support team with customer requests. The original plan was to realize this with the ChatGPT Assistant API using Zapier or our own coded middleware (right now I am not sure how much effort this would be). While planning this, I just found out about the existence of all the other LLMs. Unfortunately, I am no developer (just some minor background, and I've now started to learn some Python), but I have support from our dev team. I'm now asking myself whether it wouldn't be "better" and cheaper to run a model on our own server for these use cases, finetuned and fed with our own knowledge (or maybe multiple models in the future for different use cases, see above). Is somebody doing that, especially with a Slack integration? There are so many models, and I see all the benchmarks, but how do I decide which model to use for us? Which models are you using in your companies? Could you recommend any sources to read to help me realize/organize this? Any hints and tips are welcome! I appreciate your help - thanks a lot!
2023-12-07T23:47:21
https://www.reddit.com/r/LocalLLaMA/comments/18d97gl/do_you_use_any_open_llm_in_your_enterprise/
TheBlackPersian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d97gl
false
null
t3_18d97gl
/r/LocalLLaMA/comments/18d97gl/do_you_use_any_open_llm_in_your_enterprise/
false
false
self
5
null
Best model size/type for my specs?
2
3060 TI 8gb VRAM Ryzen 9 5900X 128gb RAM &#x200B; I upgraded my CPU and RAM yesterday and I'm wondering what larger models I might be capable of running now? I've been running 13b GPTQ models with the ExLlama loader via Oobabooga / Silly Tavern with the same video card, but with a Ryzen 5 3600 and 32gb of RAM. I know that previously, if I tried using a 30b model it would usually work, but at a rate so painfully slow that I would very quickly abandon it. I tried the only 30b model I have downloaded and it is loading/responding quickly now. One of the reasons that I am asking, instead of trial and error, is that my download speed out here in the middle of nowhere is abysmal. I don't know which type of model is best for my specs via Ooba, as in GPTQ, GGML, TGIF, etc. Also, any suggestions on loading it default/8-bit/4-bit? Any info anyone wouldn't mind sharing on this subject would be greatly appreciated. Thanks!
2023-12-07T23:24:05
https://www.reddit.com/r/LocalLLaMA/comments/18d8pso/best_model_sizetype_for_my_specs/
AdAppropriate8772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d8pso
false
null
t3_18d8pso
/r/LocalLLaMA/comments/18d8pso/best_model_sizetype_for_my_specs/
false
false
self
2
null
Some solutions that work on older intel macs
2
* takes a deep breath * Disclaimer: I’m a hobbyist and had to learn on my own. There’s a lot of smart folks on this subreddit that may have better input. I have no stake in any of these projects. I’m posting this because a lot of people ask “what can I run as a local llama-based/llama-like LLM on [xyz] Mac” and few talk about anything but Silicon M1/M2/M3 macs with big combined RAM stacks. But you can get pleasing results on older intel macs. TL;DR - there are several ways a person with an older intel Mac can run pretty good LLM models up to 7B, *maybe* 13B size, with varying degrees of difficulty. I use and have used the first three of these below on a lowly spare i5 3.4GHZ Mac with a mere 8GB of RAM, running up to 7B models. FreeChat.app - I like this one. If you want the easy button, this is it. It can be grabbed off the MacOS App Store which is unusual for many of these applications and *might* make some more comfortable than downloading from GitHub. All in one front to back, and comes with one model already loaded. You can add models. Basically runs .gguf quantized llama and llama-like models (e.g Mistral derivatives). The U/I is a basic OpenAi-looking thing and seems to run fine. It does put about an 85% load on my little CPU but it generates fine. Would run a 3B model well. If you just want to plug in and chat - I think this is the easiest App for older Macs. Front to back end and a model provided for you. Ollama - I use this a lot - and it’s great and allows me to use my own front end U/I script with Python llama-index tools. Not too hard to get running. There is also a simple web-based chat-Ollama U/I you can run for a front end. My only complaints for my machine - it works the CPU at about 95% capacity regardless of whether I run a 3B or 7B model. I think it’s because it’s a wrapper around llamacpp and so there’s more “in the middle” but honestly I don’t know. Second - there is no option to decline to update on a restart and there’s no reread me about what the updates are doing … I worry that what works today may not work tomorrow on my little machine. . Ease of use is about in the middle. Rates of generation are slower than llamacpp direct or through the lllama-cpp high level API (local) Lllamacpp and the Python bindings - this is really becoming my go-to for now. I can run my own U/I into it as a front end, or I can run Silly Tavern as the front end, or I can use the simple U/I that lllama-cpp-Python provides out of the box. I get about 4.5-4.7 tokens/sec eval rates and it puts far less load on my cpu than the two above. Not sure why. It’s not too hard to install if you follow the llamacpp-Python instructions. But it’s not plug and play like FreeChat. LMStudio - I don’t use as it’s billed as compatible as only with Silicon macs. But maybe it actually will run CPU only on intel mac?? Koboldcpp - I just struggled to understand the install for my machine … and I successfully loaded regular KoboldAi okay as well as Silly Tavern (😥) with a lot of effort … I might revisit but I’m happy enough with the other solutions so far. Ooobabooga - again nothing against this application and I think it could work but I just haven’t tried it for the little i5. Background:About a year ago I took a hobby interest in this stuff and started learning to code in Python. I struggled making a Chatterbot in Python, also successfully made a very small intents extraction neural network and bot in Python with Kerras. Then I got into learning about PyTorch and transformers a little. 
Just a few months ago I thought that, as a hobbyist on a spare machine, I'd (a) never easily get a quantized model to run and (b) be stuck using models of 450M parameters or less. Bottom line - there are several options to run up to 7B models, such as the Mistral derivatives and the smaller llama models, on hardware that many sneer at. No, not 20 t/s generations, but yeah, maybe 2.5-5 t/s. Ending disclaimer: there are some great tutorials on this site and I encourage people to do their own research and learn - this is a fast-moving field. But I wanted to give those with "lesser" machines some hope that they don't need a gob of NVIDIA-based hardware to run models that a year ago were "cutting edge". Apologies for the long post.
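Tying back to the Ollama note above: a tiny sketch of the kind of "own front end" script mentioned there, calling Ollama's local HTTP API. The model name is whatever has been pulled locally; `mistral` here is just an assumption:

```python
# Sketch: minimal front end for a locally running Ollama server (default port 11434).
# Assumes `ollama pull mistral` has been run; the model name is a placeholder.
import json
import urllib.request

def generate(prompt: str, model: str = "mistral") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(generate("In one sentence, why are quantized models useful on older Macs?"))
```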
2023-12-07T23:18:24
https://www.reddit.com/r/LocalLLaMA/comments/18d8lgz/some_solutions_that_work_on_older_intel_macs/
Helpful-Gene9733
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d8lgz
false
null
t3_18d8lgz
/r/LocalLLaMA/comments/18d8lgz/some_solutions_that_work_on_older_intel_macs/
false
false
self
2
null
RWKV v5 7b no-quant on a 3090 is faster than an 8bit llama 2 7b on an h100
43
This is pretty epic. RWKV v5 7b no-quant on a 3090 -> 1400 t/s 8bit Llama 2 7b on an H100 -> 1200 t/s Source -> https://twitter.com/picocreator/status/1732840982687916502 (He is one of the main people working on RWKV, maintains endpoints, Recursal, other stuff; credible.) They also have OpenAI-compatible endpoints. The model is a little behind Mistral on English, and better than Mistral at multilingual. Thoughts?
2023-12-07T22:59:22
https://www.reddit.com/r/LocalLLaMA/comments/18d86be/rwkv_v5_7b_noquant_on_a_3090_is_faster_than_an/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d86be
false
null
t3_18d86be
/r/LocalLLaMA/comments/18d86be/rwkv_v5_7b_noquant_on_a_3090_is_faster_than_an/
false
false
self
43
{'enabled': False, 'images': [{'id': 'Sdau2ubPs7GKR16uHpxX3JRrd2s7D1q0RAB8cSesdpE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Amyb3Xz9jLgZg-FVIWihS8_xwk6XkZ89jrjD8nSpdgQ.jpg?width=108&crop=smart&auto=webp&s=95705062437103f16a437f854ad7d4b1974a2501', 'width': 108}], 'source': {'height': 105, 'url': 'https://external-preview.redd.it/Amyb3Xz9jLgZg-FVIWihS8_xwk6XkZ89jrjD8nSpdgQ.jpg?auto=webp&s=3904294037cc83338431985c8d203d15c58641f7', 'width': 140}, 'variants': {}}]}
A simple guide on how to use llama.cpp with the server GUI [Windows]
33
llama.cpp is well known as a LLM inference project, but I couldn't find any proper, streamlined guides on how to setup the project as a standalone instance (there are forks and text-generation-webui, but those aren't the original project), so I decided to contribute and write one. First off, you will need: * NVIDIA GPU supporting CUDA (heavily recommended due to CuBlas acceleration) * Preferably, up to date NVIDIA drivers * Windows Step 1: Navigate to the llama.cpp [releases page](https://github.com/ggerganov/llama.cpp/releases) where you can find the latest build. https://preview.redd.it/7odvi55nxx4c1.png?width=1488&format=png&auto=webp&s=42c27f5da908b0309871dea5b41e4856b8b06ba9 Assuming you have a GPU, you'll want to download two zips: the compiled CUDA CuBlas plugins (the first zip highlighted here), and the compiled llama.cpp files (the second zip file). You can use the two zip files for the newer CUDA 12 if you have a GPU that supports it. The RTX 20 series and higher I believe supports CUDA 12, but P40s and older GPUs may not support it. https://preview.redd.it/kk8qkdk0yx4c1.png?width=1345&format=png&auto=webp&s=7aea611d4df5a0dffa9ab03fb5d73ecc432a8bdc Step 2: Copy paste the contents of both of the zip files into the same directory. Then, download my server launcher script and include that in the folder: [https://github.com/kalomaze/koboldcpp/releases/download/server-util/server\_launcher.bat](https://github.com/kalomaze/koboldcpp/releases/download/server-util/server_launcher.bat) https://preview.redd.it/nh1esv1n6y4c1.png?width=1033&format=png&auto=webp&s=04f4f6ce7cbc923dead28d8bc120590950dacaf2 Once that is done, you should be able to launch the script. Step 3: \- Drag and drop the valid llama.cpp model (typically GGUF) onto the window that launches, and then hit enter when you see the path. https://preview.redd.it/a1c8msqv6y4c1.png?width=1609&format=png&auto=webp&s=f4b96cd38f0a49c1c8f933cbaaa7ac6abd029f25 You will then be asked to specify the amount of GPU layers. This will depend on the amount of VRAM you have and the model quantization that is being used. https://preview.redd.it/8tcdce037y4c1.png?width=1558&format=png&auto=webp&s=add6951472d94b9e3ce2db20af8f0e73bb2c58a5 In this case, I have an RTX 3060 with 12GB VRAM, which is able to run Mistral 7b at 8-bit quantization on all GPU layers (33/33 layers for Mistral 7b). Step 4: If you are unsure of how to find out how many GPU layers you can offload, I would check Task Manager. The main thing to look out for is whether or not 'dedicated GPU memory' is maxed out. You don't want to offload more than your maximum dedicated VRAM can handle, or else you will see speed regression, so lower the layers as necessary if you have to. https://preview.redd.it/8wxxjprt7y4c1.png?width=685&format=png&auto=webp&s=64f557874061174a2c220b84aad26223e1f2e2a6 Step 5: After that, you'll be asked to input a context size: https://preview.redd.it/017hf9dg7y4c1.png?width=904&format=png&auto=webp&s=18c73ac1d9dac7ed3946b992d3a8e4d47f180dd0 Once you hit enter, the model should begin loading. If everything went right, there is a link that you can follow into the web browser by Ctrl+Clicking. https://preview.redd.it/sun3fqnb8y4c1.png?width=1438&format=png&auto=webp&s=60d6a6d7059ff6590a3621554c6b68a809b5f4a3 And voila! 
[A solid modified default preset.](https://preview.redd.it/8kxzmggx8y4c1.png?width=954&format=png&auto=webp&s=94e9642d62bf6f7327355f616c94c18d8207306e) https://preview.redd.it/kvx724319y4c1.png?width=958&format=png&auto=webp&s=1964c735e7484e27be9647a2fa38f0dad766f7ad The llama.cpp server interface is an underappreciated, but simple & lightweight way to interface with local LLMs quickly. I hope this helps anyone looking to get models running quickly. P.S: the batch script I made should support re-launching the models with the same settings as last time like so: https://preview.redd.it/v60z9nfs9y4c1.png?width=606&format=png&auto=webp&s=6ca247dc3537a11965287b466a9be61156854405
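As a small addition to the guide: once the server from the final step is running, the same endpoint the web UI talks to can be scripted directly. A minimal sketch, assuming the server is on the default host and port (adjust the URL to whatever the launcher printed):

```python
# Sketch: call the llama.cpp server's /completion endpoint directly (default http://127.0.0.1:8080).
# Port and sampling settings are assumptions; match them to whatever the launcher printed.
import json
import urllib.request

payload = json.dumps({
    "prompt": "Write one sentence about local LLM inference:",
    "n_predict": 64,          # max tokens to generate
    "temperature": 0.7,
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```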
2023-12-07T22:38:16
https://www.reddit.com/r/LocalLLaMA/comments/18d7py9/a_simple_guide_on_how_to_use_llamacpp_with_the/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d7py9
false
null
t3_18d7py9
/r/LocalLLaMA/comments/18d7py9/a_simple_guide_on_how_to_use_llamacpp_with_the/
false
false
https://b.thumbs.redditm…Ck8uZkA1lcPU.jpg
33
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
StableLM-3B Zephyr
14
2023-12-07T22:37:06
https://x.com/StabilityAI/status/1732840342251315518?t=_m-mPRYtjy7wXm1bavZkeg&s=34
ninjasaid13
x.com
1970-01-01T00:00:00
0
{}
18d7ozs
false
null
t3_18d7ozs
/r/LocalLLaMA/comments/18d7ozs/stablelm3b_zephyr/
false
false
https://b.thumbs.redditm…hC-pE-ySlrzE.jpg
14
{'enabled': False, 'images': [{'id': 'jbhTmBnEs0D03pQAbwZfxEEP-hr1ydMkfM9wiIKRMM0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?width=108&crop=smart&auto=webp&s=9da1fe98d5ab3c1e810c8ccd3f55b46aaa6cf698', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?width=216&crop=smart&auto=webp&s=4e2270baec4eb7a0aa7d719f2a526509ee36db4e', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?width=320&crop=smart&auto=webp&s=eb0286f4835c14f84f52efb30e5c9444e6271534', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?width=640&crop=smart&auto=webp&s=6b996d23ab59bfaab81a945ca1889867864f1d8d', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?width=960&crop=smart&auto=webp&s=2769cafa9beff40c1cc3eb449b22ac2ed09150e1', 'width': 960}, {'height': 616, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?width=1080&crop=smart&auto=webp&s=ea237de1b0595102b2d807bb7709b78709705b5f', 'width': 1080}], 'source': {'height': 1170, 'url': 'https://external-preview.redd.it/tQ9IpzsvTpsmF788B21DctLAb9t6szVd4ZOTbVZBghQ.jpg?auto=webp&s=cc1b25018a5452c74dbc6bd2b3918cf6daaca809', 'width': 2048}, 'variants': {}}]}
Mistral Magicoder? Any Mistral-based coding models?
4
Looking at the Magicoder repo, I see that they fine-tuned LLaMA and DeepSeek. ([https://github.com/ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder)) I'm curious as to why they didn't also use Mistral? Am I missing something obvious? I've been getting amazing results using Zephyr and Notus to create prompt-response pairs based on the content of academic papers, using only a 4070! Mistral models in general seem incredibly sharp for their size. So where are the Mistral coding models?
2023-12-07T22:10:29
https://www.reddit.com/r/LocalLLaMA/comments/18d73dn/mistral_magicoder_any_mistralbased_coding_models/
bardobirdo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18d73dn
false
null
t3_18d73dn
/r/LocalLLaMA/comments/18d73dn/mistral_magicoder_any_mistralbased_coding_models/
false
false
self
4
{'enabled': False, 'images': [{'id': 'q0nvLsWM9WZoP4RrnkjUKHTy858kY121UC1-i29kahY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?width=108&crop=smart&auto=webp&s=a98e099f896853eb90cc3b262ba6f1cd48689e11', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?width=216&crop=smart&auto=webp&s=884578152bf1b498ab8002e404e25a06a104b1ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?width=320&crop=smart&auto=webp&s=fdd8f97f04facd65cdcd9ded36ebf5ebc6029437', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?width=640&crop=smart&auto=webp&s=7437adc5d1feb8b55c2a6650d3ae3926b47e7b66', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?width=960&crop=smart&auto=webp&s=631efe0ee0c380e7d04105cd69bd16ff9c493f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?width=1080&crop=smart&auto=webp&s=b824b431bae8e983d6dc2eb149bdfabd4346fb83', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EGmPk_lGXdiMFrKBNgn1tCEkB5lFdwVSemGiuZCmGH0.jpg?auto=webp&s=571609525f0e353c699ef0d4befa4da19c33f486', 'width': 1200}, 'variants': {}}]}