title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Any tools available to use llama in text editors? | 5 | I use copilot a lot when programming and I think the tab autocomplete is really neat. I was wondering if anyone knows if there are plugins or apps (for word, docs, etc) that allow you to connect it to a local server for text completions. | 2023-05-18T17:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/13l4usk/any_tools_available_to_use_llama_in_text_editors/ | -General-Zero- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13l4usk | false | null | t3_13l4usk | /r/LocalLLaMA/comments/13l4usk/any_tools_available_to_use_llama_in_text_editors/ | false | false | self | 5 | null |
I made a hitlerbot so I can mock him from time to time, and I think I may have interfered with the past. You don't need to thank me. | 0 | [removed] | 2023-05-18T16:37:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13l3vhy | false | null | t3_13l3vhy | /r/LocalLLaMA/comments/13l3vhy/i_made_a_hitlerbot_so_i_can_mock_him_from_time_to/ | false | false | default | 0 | null | ||
Any kind of LLM for OCR? | 8 | Having a lot of trouble searching for this info (related, where do people find their LLM news besides here?)
Trying to figure out if I can implement OCR with an LLM to improve the output, but finding zero info... any hints from anyone? | 2023-05-18T14:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/13l0kos/any_kind_of_llm_for_ocr/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13l0kos | false | null | t3_13l0kos | /r/LocalLLaMA/comments/13l0kos/any_kind_of_llm_for_ocr/ | false | false | self | 8 | null |
Error while finetuning | 2 | I was working with the johnsmith0031 repo for 4-bit training, and I'm getting the following error in the finetuning stage. Can anyone suggest how I can resolve the issue?
LOG:
```python
Traceback (most recent call last):
  File "/content/alpaca_lora_4bit/finetune.py", line 65, in <module>
    model, tokenizer = load_llama_model_4bit_low_ram(ft_config.llama_q4_co…
  File "/content/alpaca_lora_4bit/autograd_4bit.py", line 216, in load_llama_model_4bit_low_ram
    tokenizer = LlamaTokenizer.from_pretrained(config_path)
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 1812, in from_pretrained
    return cls._from_pretrained(resolved_vocab_files, pretrained_model_name_or_path, init_configuration, …
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 1975, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/tokenization_llama.py", line 96, in __init__
    self.sp_model.Load(vocab_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(sel…
RuntimeError: Internal: src/sentencepiece_processor.cc(1101)
[model_proto->ParseFromArray(serialized.data(), serialized.size())]
```
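The `ParseFromArray` failure at the bottom means SentencePiece could not parse the tokenizer model file. Typically that means `tokenizer.model` in the config directory is missing, truncated, or a git-lfs pointer file instead of the real thing (it may also be worth checking that `--llama_q4_config_dir` points at the model directory rather than at `config.json` itself; usage varies between repo versions). A quick sanity check, with the path as an example to adjust:

```python
import os

def check_tokenizer(config_dir: str) -> str:
    """Return a rough diagnosis of the SentencePiece tokenizer file in config_dir."""
    tok_path = os.path.join(config_dir, "tokenizer.model")
    if not os.path.isfile(tok_path):
        return "missing"
    # A real LLaMA tokenizer.model is roughly 500 KB; a tiny file is usually a
    # git-lfs pointer or an HTML error page downloaded by mistake.
    if os.path.getsize(tok_path) < 100_000:
        return "truncated"
    return "ok"

# Example path from the training script below; adjust to your setup.
print(check_tokenizer("/content/text-generation-webui/models/wcde_llama-7b-4bit-gr128"))
```

If this prints `missing` or `truncated`, re-downloading `tokenizer.model` into that directory is the first thing to try.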
and training script goes as follows:
```python
!python finetune.py "/content/data.json" \
--ds_type=alpaca \
--lora_out_dir=./test/ \
--llama_q4_config_dir="/content/text-generation-webui/models/wcde_llama-7b-4bit-gr128/config.json" \
--llama_q4_model="/content/text-generation-webui/models/wcde_llama-7b-4bit-gr128/llama-7b-4bit-gr128.pt" \
--mbatch_size=1 \
--batch_size=4 \
--epochs=3 \
--lr=3e-4 \
--cutoff_len=128 \
--lora_r=8 \
--lora_alpha=16 \
--lora_dropout=0.05 \
--warmup_steps=5 \
--save_steps=50 \
--save_total_limit=3 \
--logging_steps=5 \
--groupsize=128 \
--xformers \
--backend=cuda
``` | 2023-05-18T14:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/13l0j89/error_while_finetuning/ | 1azytux | self.LocalLLaMA | 2023-05-19T13:30:06 | 0 | {} | 13l0j89 | false | null | t3_13l0j89 | /r/LocalLLaMA/comments/13l0j89/error_while_finetuning/ | false | false | self | 2 | null |
A comparative look at (GGML) quantization and parameter size | 58 | ## Preamble/credits
Based on: [the llama.cpp repo README](https://github.com/ggerganov/llama.cpp/blob/dc271c52ed65e7c8dfcbaaf84dabb1f788e4f3d0/README.md#quantization) section on quantization.
Looking at that, it's a little hard to assess how different levels of quantization actually affect quality, and what choices would actually cause a perceptible change. Hopefully this post will shed a little light. While this post is about GGML, the general idea/trends should be applicable to other types of quantization and models, for example GPTQ.
First, perplexity isn't the be-all-end-all of assessing the quality of a model. However, as far as I know, given a specific full-precision model, if you process that data in a way that increases perplexity, the result is never an improvement in quality. So this is useful for comparing quantization formats for one exact version of a model, but not necessarily as useful for comparing different models (or even different versions of the same model like Vicuna 1.0 vs Vicuna 1.1).
## Parameter size and perplexity
A good starting point for assessing quality is 7b vs 13b models. Most people would agree there is a significant improvement between a 7b model (LLaMA will be used as the reference) and a 13b model. According to the chart in the llama.cpp repo, the difference in perplexity between a 16 bit (essentially full precision) 7b model and the 13b variant is 0.6523 (7b at 5.9066, 13b at 5.2543).
For percentage calculations below, we'll consider the difference between the 13b and 7b to be 100%. So something that causes perplexity to increase by `0.6523 / 2` = ` 0.3261` would be 50% and so on.
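As a cross-check, this framework is easy to reproduce in a few lines of Python. The 7b perplexities below are the llama.cpp README figures this comparison is based on (treat them as illustrative, since upstream numbers change):

```python
# Perplexity deltas expressed as a percentage of the full-precision
# 7b -> 13b gap (0.6523), mirroring the tables below.
PPL_7B = {"16bit": 5.9066, "Q8_0": 5.9069, "Q5_1": 5.9481,
          "Q5_0": 5.9862, "Q4_1": 6.0912, "Q4_0": 6.1565}
GAP = 5.9066 - 5.2543  # 16-bit 7b ppl minus 16-bit 13b ppl

def pct_of_gap(src: str, dst: str) -> float:
    """Perplexity increase going src -> dst, as a percent of the 7b/13b gap."""
    return 100 * (PPL_7B[dst] - PPL_7B[src]) / GAP

for src, dst in [("16bit", "Q8_0"), ("Q8_0", "Q5_1"), ("Q5_1", "Q5_0")]:
    print(f"{src} -> {dst}: {pct_of_gap(src, dst):.2f}%")
```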
### 7b
from|to|ppl diff|pct diff
-|-|-|-
16bit|Q8_0|0.0003|0.04%
Q8_0|Q5_1|0.0412|6.32%
Q5_1|Q5_0|0.0381|5.84%
Q5_0|Q4_1|0.1048|16.06%
Q4_1|Q4_0|0.1703|26.10%
| | |
Q5_1|Q4_0|0.2084|31.94%
Q5_1|Q4_1|0.1429|21.90%
16bit|Q4_0|0.2450|37.55%
### 13b
from|to|ppl diff|pct diff
-|-|-|-
16bit|Q8_0|0.0005|0.07%
Q8_0|Q5_1|0.0158|2.42%
Q5_1|Q5_0|0.0150|2.29%
Q5_0|Q4_1|0.0751|11.51%
Q4_1|Q4_0|0.0253|3.87%
| | |
Q5_1|Q4_0|0.1154|17.69%
Q5_1|Q4_1|0.0900|13.79%
16bit|Q4_0|0.1317|20.20%
## 13b to 7b
from (13b)|to (7b)|ppl diff|pct diff
-|-|-|-
16bit|16bit|0.6523|100%
Q5_1|Q5_1|0.6775|103.86%
Q4_0|Q4_0|0.7705|118.12%
Q4_0|Q5_1|0.5621|86.17%
Q4_0|16bit|0.5206|79.80%
## Comments
From this, we can see you get ~80% of the improvement of going from a 7b to a 13b model even if you're going from a full precision 7b to the worst/most heavily quantized Q4_0 13b variant. So running the model with more parameters is basically always going to be better, even if it's heavily quantized. (This may not apply for other quantization levels like 3bit, 2bit, 1bit.)
It's already pretty well known, but this also shows that larger models tolerate quantization better. There are no figures for 33b, 65b models here but one would expect the trend to continue. From looking at this, there's probably a pretty good chance a 3bit (maybe even 2bit) 65b model would be better than a full precision 13b.
It's also pretty clear there's a large difference between Q5_1 and Q4_0. Q4_0 should be avoided if at all possible, especially for smaller models. (Unless it lets you go up to the next sized model.) | 2023-05-18T14:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/13l0j7m/a_comparative_look_at_ggml_quantization_and/ | KerfuffleV2 | self.LocalLLaMA | 2023-05-18T17:17:48 | 0 | {} | 13l0j7m | false | null | t3_13l0j7m | /r/LocalLLaMA/comments/13l0j7m/a_comparative_look_at_ggml_quantization_and/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 
'variants': {}}]} |
Created yet another tool to execute generated code, with or without Langchain, with a self-managed virtualenv | 11 | So one of the things that proved to be a problem in my previous iteration of fine-tuning a LoRA for code generation for the Langchain Python REPL was that most of the time the errors were about a missing package.
To attempt to fix this, I created a Python package that manages a virtualenv, stores the source code in a local file and allows the code to be executed through the virtualenv interpreter. As a bonus, we can also apply pylint and see the linting score of the code. This is the result:
[https://github.com/paolorechia/code-it](https://github.com/paolorechia/code-it)
As usual, also wrote up an explanation of the inner workings of the package / prompts etc: [https://medium.com/@paolorechia/building-a-custom-langchain-tool-for-generating-executing-code-fa20a3c89cfd](https://medium.com/@paolorechia/building-a-custom-langchain-tool-for-generating-executing-code-fa20a3c89cfd)
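The core mechanism, writing the generated source to a file and executing it with the interpreter of a self-managed virtualenv, is a small amount of stdlib code. Here is a minimal sketch (the real package adds dependency management and pylint scoring on top):

```python
import os
import subprocess
import venv

def run_in_venv(code: str, env_dir: str = ".code_env") -> subprocess.CompletedProcess:
    """Write `code` to a file and run it with the virtualenv's interpreter."""
    if not os.path.isdir(env_dir):
        # Use with_pip=True if the agent should be able to install packages.
        venv.create(env_dir, with_pip=False)
    bindir = "Scripts" if os.name == "nt" else "bin"
    interpreter = os.path.join(env_dir, bindir, "python")
    script = os.path.join(env_dir, "generated.py")
    with open(script, "w") as f:
        f.write(code)
    # Captured stderr is what lets the agent notice e.g. a ModuleNotFoundError
    # and react, for instance by pip-installing into env_dir and retrying.
    return subprocess.run([interpreter, script], capture_output=True, text=True)

result = run_in_venv("print(2 + 2)")
print(result.stdout.strip())  # 4
```

Because the generated code runs in its own interpreter, a crash or bad import never takes down the agent process itself.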
Nothing too shiny, but it was super fun developing - I think I might try to get something done with Microsoft's guidance library next though, as it seems like I was partially reinventing the wheel here. | 2023-05-18T13:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/13kzvgu/created_yet_another_tool_to_execute_generated/ | rustedbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kzvgu | false | null | t3_13kzvgu | /r/LocalLLaMA/comments/13kzvgu/created_yet_another_tool_to_execute_generated/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'zV4LWqetRQbk3_AZUEmDnNel2nxb3c2FMGfhkX_9FK0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=108&crop=smart&auto=webp&s=fd9d124c41e6080e095d484f4ff30e1c2d4e5e4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=216&crop=smart&auto=webp&s=e19a90c4f7c08ae353f590bdeb35565e9e52c4c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=320&crop=smart&auto=webp&s=ffad5899baead79360be1b49b660825d1ba28fb2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=640&crop=smart&auto=webp&s=b2d3cdc1c7403976e43d9303a2c4959fc7921a78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=960&crop=smart&auto=webp&s=00dbf3fc1dfd997a6206f3a8bb475dce0e9b31e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=1080&crop=smart&auto=webp&s=d22965c81b3d442f0afb0208a86983a222428266', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?auto=webp&s=1005d7a2a8015e9e9968db21e0e482952dcc5885', 'width': 
1200}, 'variants': {}}]} |
I made a simple agent demo with Guidance and wizard-mega-13B-GPTQ, feel quite promising. | 85 | Hi, I just discover Guidance last night through this post: [https://www.reddit.com/r/LocalLLaMA/comments/13jyh3m/guidance\_a\_prompting\_language\_by\_microsoft/](https://www.reddit.com/r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/)
It looks interesting and I think it can solve my problem with the previous ReAct agent I built with Langchain. The Langchain agent often doesn't follow the instructions, especially when working with small LLMs (like 3B-7B). Handling this was painful for me (optimizing the prompt, manually setting stop conditions).
The ReAct framework seems to work well with Guidance, forcing LLM to follow my instructions strictly.
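For readers who haven't seen ReAct before, the control flow being enforced looks roughly like this: the harness owns the loop and the tool execution, and the model only fills the Thought/Action slots. A scripted stand-in plays the LLM here, so this is just an illustration of the pattern:

```python
import re

def react_agent(question, llm, tools, max_steps=5):
    """Minimal ReAct loop: the harness owns the structure, the LLM fills slots."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)  # expected to emit Thought/Action lines
        prompt += step
        m = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if not m:                       # "Final Answer: ..." or malformed
            break
        tool, arg = m.group(1), m.group(2)
        observation = tools[tool](arg)  # the harness, not the LLM, runs the tool
        prompt += f"\nObservation: {observation}\n"
    answer = re.search(r"Final Answer: (.*)", prompt)
    return answer.group(1) if answer else None

# Scripted stand-in for the model: first asks for a tool, then answers.
replies = iter([
    "Thought: I should look this up.\nAction: Search[capital of France]",
    "Thought: I know now.\nFinal Answer: Paris",
])
tools = {"Search": lambda q: "Paris is the capital of France."}
print(react_agent("What is the capital of France?", lambda p: next(replies), tools))
# Paris
```

Guidance tightens this further by constraining generation itself, so the model cannot even emit text outside the expected slots.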
Github: [https://github.com/QuangBK/localLLM\_guidance](https://github.com/QuangBK/localLLM_guidance)
My post: [https://medium.com/better-programming/a-simple-agent-with-guidance-and-local-llm-c0865c97eaa9](https://medium.com/better-programming/a-simple-agent-with-guidance-and-local-llm-c0865c97eaa9)
Hope it helps :) | 2023-05-18T13:53:52 | https://www.reddit.com/r/LocalLLaMA/comments/13kzubz/i_made_a_simple_agent_demo_with_guidance_and/ | Unhappy-Reaction2054 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kzubz | false | null | t3_13kzubz | /r/LocalLLaMA/comments/13kzubz/i_made_a_simple_agent_demo_with_guidance_and/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'C2daJj-mtKtomZBtF4uzGILi1nHKFU4Pgmd3mqn3DLg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=108&crop=smart&auto=webp&s=f89e667c03a084103f7329b3b365c058d503be95', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=216&crop=smart&auto=webp&s=b90e9115b2aa301de6896d42010afd3ae4650bf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=320&crop=smart&auto=webp&s=9cb79380d2755a6e4ff3f2db3a25ec8f6392efc6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=640&crop=smart&auto=webp&s=cecf4e5d2cc52a1b5c00df8e1a582b1ac6c66708', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=960&crop=smart&auto=webp&s=9b22dda8e966abd87f6531fed7db661ac9a4c51f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=1080&crop=smart&auto=webp&s=51bcb7520c9bafd950790927e7b9df73c6dc3e06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?auto=webp&s=c3a4513b559dc7e44948bec68496de376de75266', 'width': 1200}, 'variants': {}}]} |
GPU quota requests | 1 | Has anyone else had trouble getting GPUs on GCP/AWS/Azure? The most I've gotten is in GCP, where they gave me 1 gpu. Every other quota request I make gets denied without explanation. Anyone have any advice on how to rent GPUs? Thanks! | 2023-05-18T12:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/13kydvh/gpu_quota_requests/ | maiclazyuncle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kydvh | false | null | t3_13kydvh | /r/LocalLLaMA/comments/13kydvh/gpu_quota_requests/ | false | false | self | 1 | null |
Issue starting with Azure server | 1 | [deleted] | 2023-05-18T11:30:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13kwbya | false | null | t3_13kwbya | /r/LocalLLaMA/comments/13kwbya/issue_starting_with_azure_server/ | false | false | default | 1 | null | ||
Have to abandon my (almost) finished LLaMA-API-Inference server. If anybody finds it useful and wants to continue, the repo is yours. :) | 54 | I've been working on an API-first inference server for fast inference of GPTQ quantized LLaMA models, including multi GPU.
The idea is to provide a server, which runs in the background and which can be queried much like OpenAI models can be queried using their API library. This may happen from the same machine or via the network.
The core functionality is working. It can load the 65B model onto two 4090s and produce inference at 10 to 12 tokens per second, depending on different variables. Single GPU and other model/GPU configurations are a matter of changing some configs and minor code adjustments, but should be doable quite easily.
The (for me) heavy lifting of making the Triton kernel working on multi GPU is done.
Additionally, one can send requests to the model via POST and get streaming or non-streaming output as a reply.
Furthermore, an additional control flow is available, which makes it possible to stop text generation in a clean and non-buggy way via HTTP request. Concepts of how to implement a pause/continue control flow as well as a "stop-on-specific-string" flow are ready to be implemented.
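The clean-stop control flow reduces to a shared flag: the HTTP stop handler sets it, and the generation loop polls it between tokens. Below is a stripped-down sketch with a toy token loop standing in for the model:

```python
import threading

stop_event = threading.Event()

def generate(prompt: str, n_tokens: int = 100) -> str:
    """Token loop that checks the stop flag at every token boundary,
    so an incoming stop request interrupts generation cleanly."""
    tokens = []
    for i in range(n_tokens):
        if stop_event.is_set():
            break
        tokens.append(f"tok{i}")  # stand-in for one model decode step
    return " ".join(tokens)

def handle_stop_request():
    """What a POST to a /stop endpoint would call from the handler thread."""
    stop_event.set()

print(len(generate("hello").split()))  # ran to completion: 100 tokens
handle_stop_request()
print(generate("hello"))               # flag set, loop exits immediately
stop_event.clear()                     # reset for the next request
```

Pause/continue works the same way with a second flag the loop waits on instead of breaking out of.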
The repo can be found [here](https://github.com/Dhaladom/TALIS), the readme is not up-to-date. The code is a bit messy.
If anybody wants to continue (or use) this project, feel free to contact me. I'd happily hand it over and assist with questions. For personal reasons, I can not continue.
Thanks for your attention. | 2023-05-18T10:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/13kued5/have_to_abandon_my_almost_finished/ | MasterH0rnet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kued5 | false | null | t3_13kued5 | /r/LocalLLaMA/comments/13kued5/have_to_abandon_my_almost_finished/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'UVAzNyepFDDzqT3dzunV4tOEVdns17i0IuW98PQT8Ag', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=108&crop=smart&auto=webp&s=8a10e747885093006d644407b4b14443c075e81b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=216&crop=smart&auto=webp&s=ab218714567c243d0b1094204ea228352fa296aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=320&crop=smart&auto=webp&s=4109fdfb66ac1b8da95721395d166547af7813eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=640&crop=smart&auto=webp&s=7dfb284fb6acdd7b10d3b11b494968e9aa661f2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=960&crop=smart&auto=webp&s=612169214b6c4defca684f8cc7c9c8b7823ca4ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=1080&crop=smart&auto=webp&s=8dac1326acea8756dda2ef13139c2b80d17f1f0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?auto=webp&s=ca59d2ec3b9ac4424926c69552ea6840b384b8e6', 'width': 1200}, 'variants': {}}]} |
*update* Completely restructured the repo. One of the most in-depth collections of all things LLM. ~500 Stars and counting | 99 | 2023-05-18T08:13:19 | https://github.com/underlines/awesome-marketing-datascience | _underlines_ | github.com | 1970-01-01T00:00:00 | 0 | {} | 13ksfcr | false | null | t3_13ksfcr | /r/LocalLLaMA/comments/13ksfcr/update_completely_restructured_the_repo_one_of/ | false | false | 99 | {'enabled': False, 'images': [{'id': 'rPJXDTXvadx5BA_jrYzZNm1GLb6uTxg97tNKy9txPcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=108&crop=smart&auto=webp&s=ac6fe89bcd0dc67925c293c1093de1d4b6e7f50e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=216&crop=smart&auto=webp&s=0c2474e81631b202e002ffa0872b5ec9d02ab020', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=320&crop=smart&auto=webp&s=fb82b97652b1dbfe0f8939a5227599776ec4eb3f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=640&crop=smart&auto=webp&s=7918fe0a5821351f1853b8b717b3df1c6b00fcd9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=960&crop=smart&auto=webp&s=7e0491c6e15342d4aec3cca65a511b36bf5b141f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=1080&crop=smart&auto=webp&s=2b89b4410aa4f943d53244dac104d112524e22fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?auto=webp&s=e6d17305167872aa55a988a189e7be1060994c5b', 'width': 1200}, 'variants': {}}]} | ||
Do you need a good CPU for training models if you have a good GPU? | 6 | Asking for a friend: we're getting into training models and are thinking about putting all the budget into the GPU (such as a 3090) and getting a cheap CPU. | 2023-05-18T06:14:01 | https://www.reddit.com/r/LocalLLaMA/comments/13kqbct/do_you_need_a_good_cpu_for_training_models_if_you/ | Impossible_Belt_7757 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kqbct | false | null | t3_13kqbct | /r/LocalLLaMA/comments/13kqbct/do_you_need_a_good_cpu_for_training_models_if_you/ | false | false | self | 6 | null |
What is quantisizing mean? | 14 | I see 4 bits, 8 bits, etc.
Just not storing weights as doubles (8bits) ?
ELI5 please. | 2023-05-18T06:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/13kqbci/what_is_quantisizing_mean/ | entered_apprentice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kqbci | false | null | t3_13kqbci | /r/LocalLLaMA/comments/13kqbci/what_is_quantisizing_mean/ | false | false | self | 14 | null |
How do LLMs โthinkโ? | 0 | [removed] | 2023-05-18T06:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/13kq800/how_do_llms_think/ | entered_apprentice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kq800 | false | null | t3_13kq800 | /r/LocalLLaMA/comments/13kq800/how_do_llms_think/ | false | false | default | 0 | null |
Local LLMs for coding? | 3 | [removed] | 2023-05-18T06:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/13kq79d/local_llms_for_coding/ | entered_apprentice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kq79d | false | null | t3_13kq79d | /r/LocalLLaMA/comments/13kq79d/local_llms_for_coding/ | false | false | default | 3 | null |
Other subreddits? | 1 | [removed] | 2023-05-18T05:53:09 | https://www.reddit.com/r/LocalLLaMA/comments/13kpy41/other_subreddits/ | entered_apprentice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kpy41 | false | null | t3_13kpy41 | /r/LocalLLaMA/comments/13kpy41/other_subreddits/ | false | false | default | 1 | null |
Has anyone published a dataset of prompt text? | 4 | I have been looking for a collection of prompt texts which people have posed to LLMs, but can't find anything.
Does anyone know if such a dataset is available anywhere for personal use?
**Edited to add:** Thanks for the input, all! I finally found what I needed in https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/HTML_cleaned_raw_dataset/ thanks to u/ruryrury -- hundreds of thousands of human-generated prompt texts.
I apologize for not making it clear that I was looking specifically for prompt texts, not training data, not models, not interfaces for prompting models, and appreciate all of your suggestions. | 2023-05-18T05:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/13kpehh/has_anyone_published_a_dataset_of_prompt_text/ | ttkciar | self.LocalLLaMA | 2023-05-18T19:46:57 | 0 | {} | 13kpehh | false | null | t3_13kpehh | /r/LocalLLaMA/comments/13kpehh/has_anyone_published_a_dataset_of_prompt_text/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'hCJm1WvoukTm8o3iKxx6PgypOTukUiQ9MSNgq1s3NQE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=108&crop=smart&auto=webp&s=53cfd5649ccabc02caf81c85c0ef6fd93c0d6753', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=216&crop=smart&auto=webp&s=4b2776e4ab9a0394aada31f03054955a7242c6b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=320&crop=smart&auto=webp&s=5fa1a900b723e80f7b65e561e5028867be4b58c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=640&crop=smart&auto=webp&s=13412c8d161e4a13edf3f7ad8b8750684a005536', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=960&crop=smart&auto=webp&s=f73fac0c06956e47104c1b3c606a3edaf1b1d98f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=1080&crop=smart&auto=webp&s=200773d04c8debe3865bdc395a318126791fffde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?auto=webp&s=6130b1031b11bc2639db3f24677561e5a4e73b10', 'width': 1200}, 'variants': 
{}}]} |
Spent the entire day playing around with Local Llms (mainly wizard vicuna 13b) then compared it against chatgpt | 68 | and then it hit me, the local models ran roughly as fast as chatgpt did a few months ago which is saying a lot for the progress of localcpp seeing as how openai have a multimillion dollar setup while I'm running it on a 7 year old laptop with 16gb of ram I paid $1,500 for.
It's amazing the work that the developers are doing, hats off to you guys. | 2023-05-18T04:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/13koi04/spent_the_entire_day_playing_around_with_local/ | fresh_n_clean | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13koi04 | false | null | t3_13koi04 | /r/LocalLLaMA/comments/13koi04/spent_the_entire_day_playing_around_with_local/ | false | false | self | 68 | null |
Context size explanation? | 1 | [removed] | 2023-05-18T03:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/13kngyl/context_size_explanation/ | entered_apprentice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kngyl | false | null | t3_13kngyl | /r/LocalLLaMA/comments/13kngyl/context_size_explanation/ | false | false | default | 1 | null |
Wizard-Vicuna-7B-Uncensored | 251 | Due to popular demand, today I released the 7B version of Wizard Vicuna Uncensored, which also includes the fixes I made to the Wizard-Vicuna dataset for the 13B version.
[https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored)
u/The-Bloke | 2023-05-18T01:56:02 | https://www.reddit.com/r/LocalLLaMA/comments/13kl5hn/wizardvicuna7buncensored/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 13kl5hn | false | null | t3_13kl5hn | /r/LocalLLaMA/comments/13kl5hn/wizardvicuna7buncensored/ | false | false | self | 251 | {'enabled': False, 'images': [{'id': 'Zvu7MMbJfuNi9sVEJ9fhsfi0hvuH9mGjp9PPWuqsbp4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=108&crop=smart&auto=webp&s=3e00d4c6312f4141e97a1237a464fda6d5b7401d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=216&crop=smart&auto=webp&s=278f650c999b78b0735b106d5a5e220deb8e305e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=320&crop=smart&auto=webp&s=82275d55769db839c04c657067f9462dc9794eb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=640&crop=smart&auto=webp&s=c9c3303c662530bd49e42a1fe70427827a0a613a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=960&crop=smart&auto=webp&s=b057d69ae884ad30eadc8c18a567a5dc2da47f9d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=1080&crop=smart&auto=webp&s=0de860dda19901a5b6bc924bb07f061b11d05aad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?auto=webp&s=41785821f2f5133f9ccb0a7c1db18cd46b250c3f', 'width': 1200}, 'variants': {}}]} |
Struggling with settings for different models | 2 | I have TavernAI set up and am loading various models, but they're always dog slow. I have a 3090 w/24 gigs and 32 gigs of ram, and I'm assuming I'm doing something wrong since I get similar results with everything, like 1 word every 5-10 seconds.
My issue is, on the Model page/tab in TavernAI, there are many different fields, bars, checkboxes, and drop down menus which all probably relate to my performance, but I cannot for the life of me find documentation on these anywhere. I don't know where else to look that I haven't already checked. Plus, if they don't list these requirements on the relevant model huggingface page, how are you supposed to know or calculate these settings? I don't see any of this info on any model pages I've tested, or on any of the myriad git projects I have loaded, on rentry, discord, etc., so I figured I'd ask as I've stalled on this project after weeks of beating my head against it.
Say I have a WizardLM13b or 7b for example, as these are the current ones I'm messing with. What should wbits, group size, pre_layer, threads and so on be set to? I understand what 4bit and CPU means in this context, as the model names indicate this, but I'm not using these at the moment. I should be able to load these models via gpu. Leaving everything blank or default doesn't seem to work. What else should I be doing to get this info? Guessing doesn't seem right either.
Sorry I'm so confused, but I've seriously sunk many hours into this already and I'm usually good at figuring new tech out. It seems like everywhere assumes you know what you're doing in this regard.
Thanks! | 2023-05-18T01:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/13kl1yc/struggling_with_settings_for_different_models/ | StriveForMediocrity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kl1yc | false | null | t3_13kl1yc | /r/LocalLLaMA/comments/13kl1yc/struggling_with_settings_for_different_models/ | false | false | self | 2 | null |
Error While Finetuning | 2 | [deleted] | 2023-05-18T01:43:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13kkv8v | false | null | t3_13kkv8v | /r/LocalLLaMA/comments/13kkv8v/error_while_finetuning/ | false | false | default | 2 | null | ||
Are the LLMs being designed to generate datasets to train other LLMs? | 10 | I was thinking about making a LoRA with my own dataset, but the most challenging part about making a good model is having a good dataset. Are there any models that have been made for the purpose of generating datasets? Or are people just using the best LLMs available (like chat-gpt) for now to generate datasets?
I want to train on my own knowledge base, this is what I am interested in doing:
\- Generate a list of questions: Split my documents into chunks and feed them into a "dataset LLM" which comes up with questions about the provided text.
\- Create the Q/A Pairs: Ask the LLM each of the questions with the provided text and have it give me an answer
\- Train the Lora on the provided dataset
Using smaller models like TheBloke\_wizardLM-7B-HF, it doesn't always come up with relevant questions. I was wondering if we are always going to have to use larger models to make datasets for smaller models, or if we could make a smaller model that's specifically designed for generating datasets to train new small models. | 2023-05-18T00:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/13kj5hp/are_the_llms_being_designed_to_generate_datasets/ | NeverEndingToast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kj5hp | false | null | t3_13kj5hp | /r/LocalLLaMA/comments/13kj5hp/are_the_llms_being_designed_to_generate_datasets/ | false | false | self | 10 | null |
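The three-step pipeline described above can be sketched end-to-end with nothing but the standard library; the `ask_llm` function below is a stub standing in for whatever local model or API endpoint you actually call, and the prompt wording is only an illustration:

```python
import json
import textwrap

def chunk_text(text, max_chars=1500):
    """Split a document into whitespace-respecting chunks of <= max_chars."""
    return textwrap.wrap(text, max_chars)

def ask_llm(prompt):
    """Stub: replace with a call to your local model or HTTP endpoint."""
    return "stub response"

def build_qa_pairs(document, questions_per_chunk=3):
    """Steps 1-2: generate questions per chunk, then answer them in context."""
    pairs = []
    for chunk in chunk_text(document):
        q_prompt = (f"Read the text below and write {questions_per_chunk} "
                    f"questions it answers, one per line.\n\n{chunk}")
        for question in ask_llm(q_prompt).splitlines():
            answer = ask_llm(f"Context:\n{chunk}\n\nQuestion: {question}\nAnswer:")
            pairs.append({"instruction": question.strip(), "output": answer.strip()})
    return pairs

def save_jsonl(pairs, path):
    """Step 3: write the pairs as JSONL for a LoRA training script."""
    with open(path, "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")
```

Swapping the stub for a larger model only changes `ask_llm`; the chunking and JSONL format stay the same.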
Koboldcpp with dual GPUs of different makes | 3 | Does Koboldcpp use multiple GPUs? If so, with the latest version that uses OpenCL, could I use an AMD 6700 12GB and an Intel 770 16GB to have 28GB of VRAM? It's my understanding that with Nvidia cards you don't need NVLink to take advantage of both cards, so I was wondering if the same may be true for OpenCL-based cards. | 2023-05-17T23:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/13kihgi/koboldcpp_with_dual_gpus_of_different_makes/ | ccbadd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kihgi | false | null | t3_13kihgi | /r/LocalLLaMA/comments/13kihgi/koboldcpp_with_dual_gpus_of_different_makes/ | false | false | self | 3 | null |
Looking to add a conversational model to a web app | 1 | Currently looking at https://huggingface.co/ehartford/WizardLM-13B-Uncensored?text=My+name+is+Thomas+and+my+main with the deploy option for javascript. However, I am curious about the API here and why I would choose to go this route vs. running the model locally so that it doesn't have a rate limit. Further, how simple is it to add the actual model locally so that it isn't using an API? I'm not familiar with that or where the documentation is to explore that option more | 2023-05-17T23:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/13kicqd/looking_to_add_a_conversational_model_to_a_web_app/ | UpDown | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kicqd | false | null | t3_13kicqd | /r/LocalLLaMA/comments/13kicqd/looking_to_add_a_conversational_model_to_a_web_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'G1nl_IUI_4T90MWS7hPfvajkGrGVtVlBe7-hikDbCJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=108&crop=smart&auto=webp&s=3723e81c3dda45706b3275533d688762ed693e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=216&crop=smart&auto=webp&s=aa30800fed77ed23fa00ad0117127ddab537da13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=320&crop=smart&auto=webp&s=8648f8481c1a71b34628337380bbd5ab61ae4889', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=640&crop=smart&auto=webp&s=054a654f2e90b527e2a0e5c2c3fc47ead397dc54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=960&crop=smart&auto=webp&s=a370540936d82b5eaf105c12a79a90e8ab63a611', 'width': 960}, {'height': 583, 'url': 
'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=1080&crop=smart&auto=webp&s=58723b62d389654b8095985808adaacd4beacb29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?auto=webp&s=9ab2642fcca96ebdd40b5775ff2ea4403da23752', 'width': 1200}, 'variants': {}}]} |
A little demo integration the alpaca model w/ my open-source search app | 52 | 2023-05-17T21:38:28 | https://v.redd.it/oenr7spzng0b1 | andyndino | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13key7p | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/oenr7spzng0b1/DASHPlaylist.mpd?a=1694755110%2CMGI4NGQwYmJkOGI5MjM4NWFkNTFmZjJlMDY2ZjFhNzBiZmY1YTVkMmIxYTc1Yjk0OTIxODlmNzFlNzhjYmU2ZQ%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/oenr7spzng0b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/oenr7spzng0b1/HLSPlaylist.m3u8?a=1694755110%2CZGRmMTdmMjNiMDJjODI4MDQ1OGIzMDA5NDEyM2EyYTM0MmY1ZmE2ZmNjZTg1Mzg4NzMxMjg2MmFiNjljOTRmNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oenr7spzng0b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_13key7p | /r/LocalLLaMA/comments/13key7p/a_little_demo_integration_the_alpaca_model_w_my/ | false | false | 52 | {'enabled': False, 'images': [{'id': '7Kkei14aNiTOOPEK7y04ZA4gvPHlesiBi7mM4m_nFQ8', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a6d2966f4ecd6a09e8d12110bbf9867af1d48a9', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=216&crop=smart&format=pjpg&auto=webp&s=748c7c1c4d6a91265e5f7ac05bacdfe644b2f42f', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=320&crop=smart&format=pjpg&auto=webp&s=d9988678437453606271218590e7aff308b22eef', 'width': 320}, {'height': 507, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=640&crop=smart&format=pjpg&auto=webp&s=4aebcdd8544da6964ccb506312b2c971661365fe', 'width': 640}, {'height': 761, 'url': 
'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=960&crop=smart&format=pjpg&auto=webp&s=a564fe02a31dcee01674f5a3c9df9628e6ad8555', 'width': 960}, {'height': 856, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=052f2a31db04dafa4990b476aeea0117af196187', 'width': 1080}], 'source': {'height': 1522, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?format=pjpg&auto=webp&s=ef2d329dee9d43df48c71ad084df2f8ab8503394', 'width': 1920}, 'variants': {}}]} | ||
Please explain to a 5 years old Lora concept and how to fine tune | 31 | Okay, to be honest, it's been 30 years since I was 5 - but when it comes to Lora, that's how I feel.
Would someone please be so kind as to explain to me in simple terms firstly **roughly the concept** behind it and secondly a step-by-step explanation of how I **create a Lora** ~~fine tuning~~ and how I **apply it** to a ggml model?
I only use CPU-llama.cpp, so I do not have a powerful GPU.
So let's say I have a book as a raw text file and I want such a ~~fine tuning~~ low-rank adapter so that an LLM can respond more adequately to the content of the book.
* What exactly do I need to do, and where? I assume I would need to upload the text file and a model to some cloud GPU, right?
* How long will such a training/fine-tuning take, or what costs should I expect? I would be very grateful for very **specific** advice on what cloud services are available for this.
* And what comes then? Will a new file be created by the ~~fine-tuning~~ process? (What is the correct term, actually?)
* How do I apply it in llama.cpp? On which model?
EDIT:
I must add briefly: for some reason I thought fine-tuning and LoRA were roughly the same thing - sorry.
What I actually mean is already Lora (I recently saw a kind of comic or meme about this: a character wears a different headgear every time and thus becomes sometimes a fireman, sometimes a policeman, sometimes a surgeon, etc. But the figure has not changed 'from the inside' so no *fine tuning*).
And the example with the raw text of a book is only a fictional example. I know that for something like that a vector-embedding and search is better suited. Or even just a normal text search.
The reason I want to know about Lora is only educational for now. I would really like to apply learning by doing here ;D
* Does anything have to be processed or converted before?
I hope that not only I could benefit from this, but also other newcomers to this topic, because the documentation is either very difficult to find or too complicated to be understood by laymen. | 2023-05-17T20:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/13kdwl2/please_explain_to_a_5_years_old_lora_concept_and/ | Evening_Ad6637 | self.LocalLLaMA | 2023-05-17T22:27:28 | 0 | {} | 13kdwl2 | false | null | t3_13kdwl2 | /r/LocalLLaMA/comments/13kdwl2/please_explain_to_a_5_years_old_lora_concept_and/ | false | false | self | 31 | null |
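To make the "same figure, different hat" intuition from the meme concrete: LoRA freezes the base weight matrix W and trains only two thin matrices B (d×r) and A (r×d), whose product is added on top at inference time, W_eff = W + (alpha/r)·B·A. A toy pure-Python sketch with made-up numbers (illustration of the concept only, not training code):

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 4, 1                            # base dimension 4, LoRA rank 1
W = [[1.0] * d for _ in range(d)]      # frozen base weights (4x4 = 16 params)
B = [[0.5] for _ in range(d)]          # trainable, d x r  (4 params)
A = [[0.25] * d]                       # trainable, r x d  (4 params)
alpha = 2.0

delta = matmul(B, A)                   # low-rank update, d x d
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d)]
         for i in range(d)]

base_params = d * d                    # 16 frozen weights stay untouched
lora_params = d * r + r * d            # only 8 are trained -- the "hat"
```

At real model scale the ratio is far more extreme (rank 8-64 against thousands of dimensions), which is why a LoRA file is megabytes while the base model is gigabytes.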
LLM for synology chat, yes I did | 4 | I created a python script for running LLMs that use synology chat as the interface
Would love some feedback and/or help
HTTPS://GitHub.com/CaptJaybles/synologyLLM | 2023-05-17T20:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/13kd8f2/llm_for_synology_chat_yes_i_did/ | ProfessionalGuitar32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kd8f2 | false | null | t3_13kd8f2 | /r/LocalLLaMA/comments/13kd8f2/llm_for_synology_chat_yes_i_did/ | false | false | self | 4 | null |
Looking for a UI similar to KoboldCPP for llamacpp | 4 | I'm very new to all this. I find that KoboldCPP continues lines when it doesn't need to, so I'm trying to find something with the same kind of speed/processing as llama.cpp but with a UI, basically using the latest GGML models.
Help a noobie out? | 2023-05-17T20:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/13kd7j7/looking_for_a_ui_similar_to_koboldcpp_for_llamacpp/ | Deformator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kd7j7 | false | null | t3_13kd7j7 | /r/LocalLLaMA/comments/13kd7j7/looking_for_a_ui_similar_to_koboldcpp_for_llamacpp/ | false | false | self | 4 | null |
Noob here. How to activate BLAS for llama in oobabooga? PLEASE help! | 1 | [removed] | 2023-05-17T19:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/13kbs7c/noob_here_how_to_activate_blas_for_llama_in/ | OobaboogaHelp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kbs7c | false | null | t3_13kbs7c | /r/LocalLLaMA/comments/13kbs7c/noob_here_how_to_activate_blas_for_llama_in/ | false | false | default | 1 | null |
Hardware benchmarking | 2 | [deleted] | 2023-05-17T19:22:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13kbcyj | false | null | t3_13kbcyj | /r/LocalLLaMA/comments/13kbcyj/hardware_benchmarking/ | false | false | default | 2 | null | ||
Problem with finetuning model | 1 | [removed] | 2023-05-17T19:03:19 | https://www.reddit.com/r/LocalLLaMA/comments/13kauyf/problem_with_finetuning_model/ | GooD404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13kauyf | false | null | t3_13kauyf | /r/LocalLLaMA/comments/13kauyf/problem_with_finetuning_model/ | false | false | default | 1 | null |
do i need to learn python to fine tune an LLM with the traditional methods, or create a lora for an LLM? | 8 | I used to know some cpp and some java but have since forgotten it to the point that I am largely code illiterate.
I want to try to fine tune a model, or I may stop with just a lora if the results from that are adequate.
That being said, I had previously assumed I would need to learn python to do this, but it kind of seems like the tools have reached a point where I could do this with just pre-existing, relatively user-friendly tools like alpaca-lora/alpaca-lora-4b/peft.
Can anyone provide some insight on other such tools?
Additionally, are there tools designed to create loras for wizardlm or mpt? Most of the research I have done has pointed to people using wizardlm/mpt to create loras for llama | 2023-05-17T17:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/13k8qzk/do_i_need_to_learn_python_to_fine_tune_an_llm/ | im_disappointed_n_u | self.LocalLLaMA | 2023-05-17T20:28:21 | 0 | {} | 13k8qzk | false | null | t3_13k8qzk | /r/LocalLLaMA/comments/13k8qzk/do_i_need_to_learn_python_to_fine_tune_an_llm/ | false | false | self | 8 | null |
Riddle/Reasoning GGML model tests update + Koboldcpp 1.23 beta is out with OpenCL GPU support! | 51 | First of all, look at this crazy mofo:
[Koboldcpp 1.23 beta](https://github.com/LostRuins/koboldcpp/releases)
This thing is a beast, it works faster than the 1.22 CUDA version for me. I did some testing (2 tests each just in case). I used the max gpulayers I could before I ran out of VRAM. My GPU is a mobile RTX 2070 with 8gb VRAM.
GPT4-X-Vicuna 13b q5_1
Kobold 1.21.3: 488 ms/t 468 ms/t
Kobold 1.22 CUDA gpulayers 26: 278 ms/t 283 ms/t
Kobold 1.23: 375 ms/t 371 ms/t
Kobold 1.23 gpulayers 22: 275 ms/t 273 ms/t
VicUnlocked 30b q5_0
Kobold 1.21.3: 1092 ms/t 1094 ms/t
Kobold 1.22 CUDA gpulayers 16: 957 ms/t 944 ms/t
Kobold 1.23: 863 ms/t 861 ms/t
Kobold 1.23 gpulayers 12: 823 ms/t 797 ms/t
First I noticed that 1.23 is faster than 1.21.3 even on CPU only. For the 30b model, it was faster on CPU than the CUDA one even, not sure why. Also I noticed that the OpenCL version can't use the same amount of gpulayers as the CUDA version, but it doesn't matter, it seems to not affect the performance. If anything, it's faster.
This is not the "is Pepsi ok?" version. This is the "Our Coke has free refills" version! I WILL TEST ALL THE MODELS NOW.
No really, I only have 65b models left to go - I got everything else scored on riddles/reasoning so far in my [**spreadsheet**](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit?usp=sharing&ouid=102314596465921370523&rtpof=true&sd=true). Make sure to check the Scores (Draft) and Responses (Draft) tabs for the latest. I will update the FINAL tabs once I got the 65b models tested as well. All models have their responses recorded (yay), and I've been keeping up with the latest models as they come out as well. Also I'll be adding ChatGPT 3.5/4.0, the New Bard, and Claude for reference in there as well.
Oh I should mention, in case anyone didn't notice:
[These guys](https://lmsys.org/blog/) are letting you test models side by side and assigning the winners ELO scores just like chess. This is great, but don't be fooled by how close some of the local LLM's are to ChatGPT and the like.
And huggingface now has the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) which does multiple tests. They need to catch up though, there's tons of models in the queue and it seems to be stuck at the moment. Also don't forget to hit the **REFRESH** button below all the benchmarks. For some reason when the page loads, it is missing a bunch of stuff if you don't click that.
Both of these sources are awesome, but I'll continue with my riddle/reasoning tests for several reasons:
1) I provide the model responses for you to evaluate your own scores if you disagree with my score
2) I test it on questions/problems that I personally find valuable for myself. I am especially interested in the model's reasoning ability and cleverness.
3) I like to see individual question/answers/score, not just the overall score of the model.
4) I can control my thing better - if a new model comes out, I don't have to wait days/weeks for the other sources to catch up and benchmark it. I can just do it right away.
5) It's fun!
But more variety of testing methodologies is a good thing. This space is blowing up and in a few months we will be looking at.. dozens? hundreds? BILLIONS of local LLM's and we need resources that can organize what's out there, how it performs, and make it easier for you to select which ones you want to play with.
That's all for now! | 2023-05-17T17:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/13k8h0r/riddlereasoning_ggml_model_tests_update_koboldcpp/ | YearZero | self.LocalLLaMA | 2023-05-17T17:59:35 | 0 | {} | 13k8h0r | false | null | t3_13k8h0r | /r/LocalLLaMA/comments/13k8h0r/riddlereasoning_ggml_model_tests_update_koboldcpp/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'm9KaapXjs2n5MSsVvxZHn_EFREL-HB-nWde3as-mioc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=108&crop=smart&auto=webp&s=fb330bdc2eee4f706524c990eef25371caf258bf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=216&crop=smart&auto=webp&s=dd1999b363478f52ca948177dffbdf51b4a3c91c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=320&crop=smart&auto=webp&s=b29b1557103c0b64c4bc49ee867a9f2bb3a4cb53', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=640&crop=smart&auto=webp&s=5e9090f5aa6ecba38fa71943566f42cd7d2e4aff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=960&crop=smart&auto=webp&s=2da6c3e296edb21c5ac8518afc2f3ef7c21f11c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=1080&crop=smart&auto=webp&s=310063846758d260458704aa9d5839eb8e7eab43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?auto=webp&s=35ede8fa25e519c8e9d2af0e75f53e44f065fe17', 'width': 1200}, 'variants': {}}]} |
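A side note for anyone comparing these numbers with tools that report speed the other way around: ms/token and tokens/second are simply reciprocals of each other:

```python
def ms_per_token_to_tps(ms_per_token):
    """Convert milliseconds per token into tokens per second."""
    return 1000.0 / ms_per_token
```

So, for example, the 273 ms/t OpenCL result above works out to roughly 3.7 tokens/second.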
[deleted by user] | 0 | [removed] | 2023-05-17T17:18:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13k825v | false | null | t3_13k825v | /r/LocalLLaMA/comments/13k825v/deleted_by_user/ | false | false | default | 0 | null | ||
Antilibrary - talk to your documents | 23 | [removed] | 2023-05-17T17:02:03 | https://www.reddit.com/r/LocalLLaMA/comments/13k7luv/antilibrary_talk_to_your_documents/ | Icaruswept | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k7luv | false | null | t3_13k7luv | /r/LocalLLaMA/comments/13k7luv/antilibrary_talk_to_your_documents/ | false | false | default | 23 | null |
What are the best performing local models both for GPTQ and llama.cpp for 8GB VRAM and 16GB RAM? | 22 | I've been trying different ones, and the speed of GPTQ models is pretty good since they're loaded on the GPU; however, I'm not sure which one would be the best option for what purpose. According to the open leaderboard on HF, Vicuna 7B 1.1 GPTQ 4bit runs well and fast, but some GGML models with 13B 4bit/5bit quantization are also good. What do you guys think? Can we create such a list for everyone to see? | 2023-05-17T16:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/13k77tg/what_are_the_best_performing_local_models_both/ | marleen01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k77tg | false | null | t3_13k77tg | /r/LocalLLaMA/comments/13k77tg/what_are_the_best_performing_local_models_both/ | false | false | self | 22 | null |
Is it just me, or can Ooba UI not run Perplexity evaluations on GGML models? | 2 | I'm wondering if this is a problem with my setup, or if this was an oversight that didn't get worked in when GGML and GPU acceleration became a big thing like... 2 days ago haha.
Maybe I'm expecting too much considering it's only been a few days, but I was looking forward to running some GGML models through their paces and ranking them. Anyone have any feedback? | 2023-05-17T16:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/13k6ve6/is_it_just_me_or_can_ooba_ui_not_run_perplexity/ | Megneous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k6ve6 | false | null | t3_13k6ve6 | /r/LocalLLaMA/comments/13k6ve6/is_it_just_me_or_can_ooba_ui_not_run_perplexity/ | false | false | self | 2 | null |
llama-cpp-python not using GPU | 6 | Hello, I have llama-cpp-python running but it's not using my GPU. I have passed in the ngl option but it's not working. I also tried a cuda devices environment variable (forget which one) but it's only using CPU. I also had to up the ulimit memory lock limit but still nothing. | 2023-05-17T16:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/13k6mk3/llamacpppython_not_using_gpu/ | Artistic_Okra7288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k6mk3 | false | null | t3_13k6mk3 | /r/LocalLLaMA/comments/13k6mk3/llamacpppython_not_using_gpu/ | false | false | self | 6 | null |
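For anyone hitting the same wall: `n_gpu_layers` only takes effect when llama-cpp-python was compiled with GPU support, and the default pip wheel is CPU-only. A common fix (assuming an NVIDIA card with the CUDA toolkit installed) is to force a source rebuild with cuBLAS enabled:

```shell
# Rebuild llama-cpp-python from source with cuBLAS; requires nvcc on PATH.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
    pip install --force-reinstall --no-cache-dir llama-cpp-python
```

If the rebuild picked up CUDA, the model-load log should report `BLAS = 1`, and passing `n_gpu_layers` to `Llama(...)` should then actually offload layers.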
OpenLLaMa has released its 400B token checkpoint. | 149 | Progress is happening, albeit slowly. Someone needs to lend these guys some GPU hours.
[GitHub - openlm-research/open\_llama](https://github.com/openlm-research/open_llama) | 2023-05-17T15:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/13k5hvc/openllama_has_released_its_400b_token_checkpoint/ | jetro30087 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k5hvc | false | null | t3_13k5hvc | /r/LocalLLaMA/comments/13k5hvc/openllama_has_released_its_400b_token_checkpoint/ | false | false | self | 149 | {'enabled': False, 'images': [{'id': 'pm_lNdI36D02TxMXQt75NXCTdzbr2EMmnXkPOAnkzfQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=108&crop=smart&auto=webp&s=dfc0af441a1b65619a75659da4ea48df3765e795', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=216&crop=smart&auto=webp&s=fed84704bded964534deabc5f0e15b4da3991494', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=320&crop=smart&auto=webp&s=b3d64ee4784424545dff66dc1ed9f88a963d0764', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=640&crop=smart&auto=webp&s=66bab7f6f80b933f5e991b7a34390c5e7a7678e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=960&crop=smart&auto=webp&s=2673979be06b2a8df71e4f68e4fab7ea34513662', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=1080&crop=smart&auto=webp&s=ffe2ff73673de10c35ec79aa657121b962ab4f87', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?auto=webp&s=eae42ff7b9978b8e46fd8526c5b205d3fd927d5e', 'width': 1200}, 'variants': {}}]} |
Researching resources for Model Compatibility | 1 | [removed] | 2023-05-17T15:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/13k4ur0/rearching_resources_for_model_compatibility/ | VucaBAT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k4ur0 | false | null | t3_13k4ur0 | /r/LocalLLaMA/comments/13k4ur0/rearching_resources_for_model_compatibility/ | false | false | default | 1 | null |
Does 24GB RAM for CPU-only match any usable models? | 3 | I upgraded (!) my PC to its max 32GB ... only to find that the very cheap 32GB DDR4 memory I had bought was unusable server RAM.
One refund later, I am thinking of adding just 16GB to my current 8GB to make 24GB.
Does this size match any useful CPU-only models out there?
Thanks! | 2023-05-17T15:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/13k4gl4/does_24gb_ram_for_cpuonly_match_any_usable_models/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k4gl4 | false | null | t3_13k4gl4 | /r/LocalLLaMA/comments/13k4gl4/does_24gb_ram_for_cpuonly_match_any_usable_models/ | false | false | self | 3 | null |
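A back-of-the-envelope answer is easy to compute: 4/5-bit GGML quantizations land around 5 bits per weight once the quantization scales are included, plus a little RAM for context. Under that rule of thumb (an approximation, not an exact figure), 24GB comfortably fits 13B and just barely fits 30B:

```python
def est_ram_gb(n_params_billion, bits_per_weight=5.0, overhead_gb=1.0):
    """Rough GGML memory estimate: quantized weights + a small context buffer."""
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

for n in (7, 13, 30):
    print(f"{n}B: ~{est_ram_gb(n):.1f} GB")
```

By this estimate a 13B model needs about 9GB and a 30B model about 20GB, so 24GB of system RAM is a genuinely useful size for CPU-only inference.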
LLM@home | 43 | I think the open source community should create software like [Folding@home](https://en.wikipedia.org/wiki/Folding@home) to collaboratively train an LLM. If we can get enough people to donate their GPU power, then we could build an extremely powerful open source model. One that may even surpass anything big tech can create.
Is there any ongoing work similar to this? | 2023-05-17T14:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/13k35on/llmhome/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k35on | false | null | t3_13k35on | /r/LocalLLaMA/comments/13k35on/llmhome/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'i2SMdWEgExesNcnaOqr8a4MGt6MPp3-sn4Z341kYkr4', 'resolutions': [{'height': 116, 'url': 'https://external-preview.redd.it/8nRfSq7QV4ZE5YzM_o6_t9MoeVQsF3KTjZlp7P6qLp0.jpg?width=108&crop=smart&auto=webp&s=15368fc391e412351907dd816346194a9fbc1667', 'width': 108}], 'source': {'height': 216, 'url': 'https://external-preview.redd.it/8nRfSq7QV4ZE5YzM_o6_t9MoeVQsF3KTjZlp7P6qLp0.jpg?auto=webp&s=5e39f09cb195d1d2e9f9ebb7a2ac774d39e18425', 'width': 200}, 'variants': {}}]} |
Looking to find good, up-to-date ratings of models and a timeline of performance for open models and the OpenAI ones. | 6 | Does someone have a good resource to point me to?
I'm curious about the current performance of local models vs GPT-3.5/4, and want to get a vague idea of when we can expect open-source models to reach the performance of GPT-3.5.
I've been playing with models locally, and while impressive, they are not quite at a level where I find them useful to integrate into a real workflow. | 2023-05-17T12:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/13k0war/looking_to_find_a_good_up_to_date_ratings_of/ | dgermain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13k0war | false | null | t3_13k0war | /r/LocalLLaMA/comments/13k0war/looking_to_find_a_good_up_to_date_ratings_of/ | false | false | self | 6 | null |
Next best LLM model? | 314 | Almost 48 hours have passed since Wizard Mega 13B was released, and yet I can't see any new breakthrough LLM model released in the subreddit?
Who is responsible for this mistake? Will there be compensation? How many more hours will we need to wait?
Is training a language model which will run entirely and only on the power of my PC, in ways beyond my understanding and comprehension, that mimics a function of the human brain, using methods and software that yet no university book had serious mention of, just within days / weeks from the previous model being released too much to ask?
Jesus, I feel like this subreddit is way past its golden days. | 2023-05-17T12:00:07 | https://www.reddit.com/r/LocalLLaMA/comments/13jzosu/next_best_llm_model/ | elektroB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jzosu | false | null | t3_13jzosu | /r/LocalLLaMA/comments/13jzosu/next_best_llm_model/ | false | false | self | 314 | null |
"Guidance" a prompting language by Microsoft. | 68 | Recently released by Microsoft - a template language for "guiding" sampling from LLMs:
[https://github.com/microsoft/guidance](https://github.com/microsoft/guidance)
It is interesting that open models like LLaMA are not only supported:
llama = guidance.llms.Transformers("your_path/llama-7b", device=0)
But there is even a "[Guidance acceleration](https://github.com/microsoft/guidance/blob/main/notebooks/guidance_acceleration.ipynb)" mode that improves sampling performance by means of "maintaining the session state" - I guess what is meant there is that they maintain attention cache. | 2023-05-17T11:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/ | QFTornotQFT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jyh3m | false | null | t3_13jyh3m | /r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'HOYYp67xOlOtV3bRY2ZPsCoUJYYPW6lykIpadrXWViE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=108&crop=smart&auto=webp&s=b0ce880810ffaff85ba1776fb0b58d7b5ffc714f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=216&crop=smart&auto=webp&s=3e28cd94d5c7a49f802b8ee208e92c0095cc1e34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=320&crop=smart&auto=webp&s=1540bd0e3ab2fec91a01021a7a3c4a0a71ca99d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=640&crop=smart&auto=webp&s=d69d6698d276c7df536bacf1fc45a14670eaaab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=960&crop=smart&auto=webp&s=58911b2123b270021f2aba78898d73fa0577275f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=1080&crop=smart&auto=webp&s=56c90d28238a269dc271d5dce6058ebea42b265e', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?auto=webp&s=a3f73e9f4b8bc14a342a13f5b3aa9b00a1da8473', 'width': 1200}, 'variants': {}}]} |
Ok, I'm just curious of the security risks? | 7 | Let's just say you are running a model on your server, but this model was trained by someone else.
Imagine they trained their model with an innocuous password that allows the prompter to fully utilize a set of embedded hacking capabilities, or even just have it check system time or do a series of checks every few hundred times it gets used before it accesses resources and elevates its privileges to call home.
How can anyone here assure that wasn't done to the model they're running? | 2023-05-17T10:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/13jyc0m/ok_im_just_curious_of_the_security_risks/ | lordlysparrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jyc0m | false | null | t3_13jyc0m | /r/LocalLLaMA/comments/13jyc0m/ok_im_just_curious_of_the_security_risks/ | false | false | self | 7 | null |
Using LLaMA as a "real personal assistant"? | 26 | What I really mean by "real personal assistant" is an AI that:
* Is given all of my personal details: personality, hobbies, lifestyle description, work experience, writing style,...
* Persist all of the given information in a database so that those data will be reloaded when we re-launch/re-install the AI
* Based on that information, provide personalised responses that match my characteristics, are relevant to me & really feel like I created them
Possible? | 2023-05-17T10:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/13jyb4u/using_llama_as_a_real_personal_assistant/ | MichaelBui2812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jyb4u | false | null | t3_13jyb4u | /r/LocalLLaMA/comments/13jyb4u/using_llama_as_a_real_personal_assistant/ | false | false | self | 26 | null |
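The persistence half of this idea is straightforward with nothing but the standard library: keep the personal facts in SQLite and rebuild a system prompt from them on every launch, so the data survives reinstalls of the model itself. A minimal sketch (table name and prompt wording are assumptions, not from any existing tool):

```python
import sqlite3

def open_profile_db(path=":memory:"):
    """Pass a file path instead of :memory: so facts survive relaunches."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS facts (key TEXT PRIMARY KEY, value TEXT)")
    return con

def remember(con, key, value):
    con.execute("INSERT OR REPLACE INTO facts VALUES (?, ?)", (key, value))
    con.commit()

def build_system_prompt(con):
    """Rebuild the personalization preamble from stored facts at startup."""
    rows = con.execute("SELECT key, value FROM facts ORDER BY key").fetchall()
    facts = "\n".join(f"- {k}: {v}" for k, v in rows)
    return "You are a personal assistant. Known facts about the user:\n" + facts
```

The harder half — making responses actually feel personalized — is prompt engineering or fine-tuning on top of this stored context.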
LLaMA and AutoAPI? | 2 | Does anybody know if we can use AutoGPT with LLaMA (e.g. via oobabooga APIs)? If so, where can I find the integration instructions or a tutorial? Thanks! | 2023-05-17T10:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/13jy71k/llama_and_autoapi/ | MichaelBui2812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jy71k | false | null | t3_13jy71k | /r/LocalLLaMA/comments/13jy71k/llama_and_autoapi/ | false | false | self | 2 | null |
LLM with Apple M2 vs Intel 12th Gen | 9 | I'm looking to buy another machine to work with LLaMA models. Ultimately what is the faster CPU for running general-purpose LLMs before GPU acceleration? M2 or Intel 12th gen?
I'll limit it to the best-released processor on both sides. | 2023-05-17T09:25:29 | https://www.reddit.com/r/LocalLLaMA/comments/13jwonl/llm_with_apple_m2_vs_intel_12th_gen/ | somethedaring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jwonl | false | null | t3_13jwonl | /r/LocalLLaMA/comments/13jwonl/llm_with_apple_m2_vs_intel_12th_gen/ | false | false | self | 9 | null |
How can I scrape text only from Facebook posts? | 1 | [removed] | 2023-05-17T08:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/13jvz4i/how_can_i_scrape_text_only_from_facebook_posts/ | AlfaidWalid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jvz4i | false | null | t3_13jvz4i | /r/LocalLLaMA/comments/13jvz4i/how_can_i_scrape_text_only_from_facebook_posts/ | false | false | default | 1 | null |
Recursively grab all the text from a website for an LLM | 0 | Is there a way to scrape all of the text from an entire website to later train an LLM on?
I'm not looking to build it myself and reinvent the wheel unless I have to
Edit: why the downvotes? The question was answered below. I didn't have to make anything custom. Imagine if LLMs negatively rated us based on us asking questions - what would our score be? | 2023-05-17T08:26:13 | https://www.reddit.com/r/LocalLLaMA/comments/13jvoks/recursively_grab_all_the_text_from_a_website_for/ | somethedaring | self.LocalLLaMA | 2023-05-17T17:09:55 | 0 | {} | 13jvoks | false | null | t3_13jvoks | /r/LocalLLaMA/comments/13jvoks/recursively_grab_all_the_text_from_a_website_for/ | false | false | self | 0 | null |
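For reference, the recursive same-domain crawl being asked about fits in a short stdlib-only Python sketch. The `fetch` parameter is injected so the crawl logic can be exercised offline; a real crawler would also need politeness delays, robots.txt handling, and error handling, all omitted here:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collect same-page links and visible text, skipping <script>/<style>."""
    def __init__(self):
        super().__init__()
        self.links, self.text_parts = [], []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text_parts.append(data.strip())

def default_fetch(url):
    return urlopen(url, timeout=10).read().decode("utf-8", "replace")

def crawl(start_url, max_pages=50, fetch=default_fetch):
    """Breadth-first crawl limited to the start URL's domain; returns page texts."""
    domain = urlparse(start_url).netloc
    seen, queue, corpus = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkAndTextParser()
        parser.feed(fetch(url))
        corpus.append("\n".join(parser.text_parts))
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == domain:
                queue.append(absolute.split("#")[0])
    return corpus
```

For anything beyond a quick experiment, an existing tool (wget's recursive mode, trafilatura, Scrapy) is the better answer, as the replies below suggest.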
Effects of long term use on hardware | 15 | I am helping develop a plugin for AutoGPT to interface with Text Gen WebUI and I'll be conducting experiments on how effective the plugin works. AutoGPT can be very heavy on the OpenAI API. And to get it to work with the typical reduced context size of LLMs, multiple chunks of data will need to be sent. I am confident it will work but I'm also confident it will peg video hardware and CPU hardware to max while it is hammering at the API in TGWUI. What is your experience for continuous use? How hard do you think AutoGPT could hammer the video hardware before bad things happen? I am thinking of building in artificial throttling to give the hardware a break between API calls.
Thank you for your insight.
Edit 1: Thank you all for your information! The Linus Tech Tips reference was particularly useful! I'll implement an optional throttle that is off by default so if people have a concern, they can turn it on. | 2023-05-17T07:39:45 | https://www.reddit.com/r/LocalLLaMA/comments/13juvr5/effects_of_long_term_use_on_hardware/ | cddelgado | self.LocalLLaMA | 2023-05-17T15:33:44 | 0 | {} | 13juvr5 | false | null | t3_13juvr5 | /r/LocalLLaMA/comments/13juvr5/effects_of_long_term_use_on_hardware/ | false | false | self | 15 | null |
Collaborative renting server for LLM | 9 |
With the emergence of new large language models, there is a need for more computational resources to train and develop these models. However, high-end hardware can be costly and out of reach for many enthusiasts and small teams.
I propose an idea for a platform called RentLLAMA where people can come together to share the cost of renting cloud or dedicated AI servers. Here's how it would work:
A user proposes a project on RentLLAMA along with a description and required hardware specs. Other interested users join the project, up to a maximum of 10 for example. The members then vote to select a "lead user" who is responsible for setting up the server.
RentLLAMA collects payment from each member and handles paying the server bills. The cost is divided equally among all members. For example, if a $300/month server is needed and 10 members join, each pays $30/month.
This approach would allow more collaboration and wider access to advanced AI hardware for research and experimentation. The platform would be managed in a decentralized way through the vote of each project's members. | 2023-05-17T07:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/13juml5/collaborative_renting_server_for_llm/ | docloulou | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13juml5 | false | null | t3_13juml5 | /r/LocalLLaMA/comments/13juml5/collaborative_renting_server_for_llm/ | false | false | self | 9 | null |
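The billing math in the proposal is simple, but it is worth doing in integer cents so shares always sum to the exact bill even when it doesn't divide evenly. A sketch (the function name is made up):

```python
def split_cost(monthly_cost_cents, members):
    """Split a server bill evenly across members, in integer cents.

    Any remainder is spread one cent at a time so the shares always
    sum to the exact bill, with no floating-point drift.
    """
    if members <= 0:
        raise ValueError("need at least one member")
    base, remainder = divmod(monthly_cost_cents, members)
    return [base + (1 if i < remainder else 0) for i in range(members)]
```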
Need advice on prebuilt PC for running AI apps (RTX 3090, Ryzen 9 5900x, 32GB RAM) | 1 | Hi everyone, I'm interested in running some AI apps locally on my PC, such as Whisper, Vicuna, Stable Diffusion, etc. I found this prebuilt PC, and I'm wondering if it's good enough for my needs. Here are the specs:
\- CPU: Ryzen 9 5900x
\- GPU: RTX 3090
\- RAM: 32GB DDR4 3200 MHz
\- SSD: 1TB NVMe PCIe Gen3.0x4
\- Mainboard: ASRock B450M Pro4 R2.0
I'm uncertain whether the CPU is overkill or not, and if the RAM size and speed are sufficient. I also heard that PCIe Gen 4 is better for NVMe SSDs, but this mainboard only supports Gen 3. Will that make a big difference in performance?
I would appreciate any opinions or suggestions from you guys. Thanks in advance! | 2023-05-17T05:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/13js563/need_advice_on_prebuilt_pc_for_running_ai_apps/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13js563 | false | null | t3_13js563 | /r/LocalLLaMA/comments/13js563/need_advice_on_prebuilt_pc_for_running_ai_apps/ | false | false | self | 1 | null |
Can't get my character's prompts to work when using Oobabooga over API. | 1 | So I'm making a chat bot that can read and respond to Twitch chat. The problem is it won't really use my prompt/character setup. I see the preprompt load into the command line, but its response doesn't really match the prompt or the character I set up. I'm using the Wizard 7B uncensored model.
When I set up the character in the web UI, is that used when it's generating responses, or does it just use the model and the input prompt? | 2023-05-17T03:05:02 | https://www.reddit.com/r/LocalLLaMA/comments/13jpjx0/cant_get_my_characters_prompts_to_work_when_using/ | opi098514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jpjx0 | false | null | t3_13jpjx0 | /r/LocalLLaMA/comments/13jpjx0/cant_get_my_characters_prompts_to_work_when_using/ | false | false | self | 1 | null |
[deleted by user] | 1 | [removed] | 2023-05-17T01:59:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13jo463 | false | null | t3_13jo463 | /r/LocalLLaMA/comments/13jo463/deleted_by_user/ | false | false | default | 1 | null | ||
Llama CPP and GPT4all Error... Anyone have any idea why? | 2 | Hello, I am getting this error when trying to run Llama or GPT4all. Does anyone know how to fix it? I have looked at Hugging Face and GitHub; others have had the same issues, but have not found a resolution.
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx' | 2023-05-17T00:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/13jlh6z/llama_cpp_and_gpt4all_error_anyone_have_any_idea/ | Lord_Crypto13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jlh6z | false | null | t3_13jlh6z | /r/LocalLLaMA/comments/13jlh6z/llama_cpp_and_gpt4all_error_anyone_have_any_idea/ | false | false | self | 2 | null |
Noticed TavernAI characters rarely emote when running on Wizard Vicuna uncensored 13B. Is this due to the model itself? | 10 | So I finally got TavernAI to work with the 13B model via using the new koboldcpp with a GGML model, and although I saw a huge increase in coherency compared to Pygmalion 7B, characters very rarely emote anymore, instead only speaking. After hours of testing, only once did the model generate text with an emote in it.
Is this because Pygmalion 7B has been trained specifically for roleplaying in mind?
And if so, when might we expect a Pygmalion 13B now that everyone, including those of us with low vram, can finally load 13B models? It feels like we're getting new models every few days, so surely Pygmalion 13B isn't that far off? | 2023-05-16T23:17:43 | https://www.reddit.com/r/LocalLLaMA/comments/13jkh19/noticed_tavernai_characters_rarely_emote_when/ | Megneous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jkh19 | false | null | t3_13jkh19 | /r/LocalLLaMA/comments/13jkh19/noticed_tavernai_characters_rarely_emote_when/ | false | false | self | 10 | null |
can two A6000 using NVlink pool their VRAM memory to use fully 96GB for LLM? | 6 | Given that models like quantized 65B 4bit are/will be expected to need more than 65GB of memory, would it be possible to connect two A6000 via NVlink to have a working memory of 96GB? | 2023-05-16T21:48:58 | https://www.reddit.com/r/LocalLLaMA/comments/13jibri/can_two_a6000_using_nvlink_pool_their_vram_memory/ | Caffdy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jibri | false | null | t3_13jibri | /r/LocalLLaMA/comments/13jibri/can_two_a6000_using_nvlink_pool_their_vram_memory/ | false | false | self | 6 | null |
OpenAI wants to crack down on open source LLMs, force through a government licensing system, and create a regulatory moat for themselves | 523 | 2023-05-16T21:13:52 | https://www.nasdaq.com/articles/openai-chief-goes-before-us-congress-to-propose-licenses-for-building-ai | donthaveacao | nasdaq.com | 1970-01-01T00:00:00 | 0 | {} | 13jhf44 | false | null | t3_13jhf44 | /r/LocalLLaMA/comments/13jhf44/openai_wants_to_crack_down_on_open_source_llms/ | false | false | default | 523 | null | |
We're Gonna Need a Bigger Moat - by Steve Yegge | 17 | Original: [https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2](https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2)
1. **Emergence of Low Rank Adaptation (LoRA):** LoRA has made large language models (LLMs) composable, allowing them to converge on having the same knowledge, potentially making LLMs more powerful and dangerous.
2. **Rapid Evolution of LLMs:** LLMs are evolving rapidly, with potential uses in various fields, including potentially dangerous applications.
3. **Leak of GPT-class LLMs:** The recent leak of GPT-class LLMs has led to the development of many open-source software (OSS) LLMs.
4. **Meta as the Surprise Winner:** Meta has emerged as the surprise winner due to their architecture being best suited for scaling up OSS LLMs.
5. **Predictions for Smaller LLMs and LLaMA:** The article predicts that smaller LLMs will soon perform as well as more advanced models, with LLaMA potentially becoming the standard architecture.
6. **Significant Social Consequences:** The leak of LLMs may have significant social consequences, although these are difficult to predict.
7. **Impact on the AI Industry:** The LLM-as-Moat model is disappearing, and AI is being commoditized quickly.
8. **Pluggable Platforms and Standardization:** The author believes that pluggable platforms have a way of standardizing, with LLaMA possibly becoming the standard architecture.
9. **SaaS Builders and Data Moats:** SaaS builders may benefit from the commoditization of AI, but relying on LLMs for a moat is risky; having a data moat is recommended.
10. **Sourcegraphโs Moat-Building Capabilities:** The author discusses the moat-building capabilities of Sourcegraphโs platform. | 2023-05-16T21:06:57 | https://www.reddit.com/r/LocalLLaMA/comments/13jh8ud/were_gonna_need_a_bigger_moat_by_steve_yegge/ | goproai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jh8ud | false | null | t3_13jh8ud | /r/LocalLLaMA/comments/13jh8ud/were_gonna_need_a_bigger_moat_by_steve_yegge/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': '2oy5U649B1efZ-4MxaDS3SnxBwvWaX_M68KtADh7Ngg', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=108&crop=smart&auto=webp&s=4d8ccfa16f8aa571f05e0c5f8c37accea8e9225b', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=216&crop=smart&auto=webp&s=4647688ea3f97cd832f64895639b2383ffd918a9', 'width': 216}, {'height': 137, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=320&crop=smart&auto=webp&s=1b90c56aee4f811d2e42bf59f857ddef70e97faa', 'width': 320}, {'height': 275, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=640&crop=smart&auto=webp&s=39154cebab59d992e7e36d9f3b292ca583021145', 'width': 640}, {'height': 413, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=960&crop=smart&auto=webp&s=4520c1f9eddedaf9431799950dbb7aa1d15705b4', 'width': 960}, {'height': 465, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=1080&crop=smart&auto=webp&s=45eb3bf1d140a958588ab966fa4e511041c4f4bf', 'width': 1080}], 'source': {'height': 517, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?auto=webp&s=845accb65a06ee257b126f6ccdcd7abcf30d9cfb', 'width': 1200}, 'variants': {}}]} |
Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp? | 10 | Hello
I'm looking for a very simple API to match my very simple usecase.
I'm using a fixed prompt passed as `-f file.txt` to llama.cpp, and I would like to pass instruction as part of the URL with a HTTP GET, then collect the results with curl or wget, so that when I do something like `curl http://127.0.0.1:8080/something/the%20instructions%20I%20sent` I simply get the result - nothing more, nothing less.
I just want to use that in my bash prompt and maybe vim too with a different prompt, so on a different port, and I'd prefer to use HTTP because I want to eventually move one or both to my desktop.
I would prefer to avoid reinventing the wheel, so I wonder if there's already anything that simple, ideally in C or Perl?
It would just need to:
- bind to the port
- fork llama, keeping the input FD opened
- then waiting for HTTP request
- loop on requests, feeding the URL to the input FD, and sending back the result that was read from the output FD.
- optionally, if it's not too hard: after 2 minutes without activity, stop llama
Can anyone offer a suggestion?
I don't need a GUI or anything fancy like JSON or REST, but if the simplest existing option say keeps a queue of requests and return the output in JSON, that's not a dealbreaker: I'll just use jq on curl output :)
Thanks for any help! | 2023-05-16T20:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/13jgtvz/could_i_get_a_suggestion_for_a_simple_http_api/ | csdvrx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jgtvz | false | null | t3_13jgtvz | /r/LocalLLaMA/comments/13jgtvz/could_i_get_a_suggestion_for_a_simple_http_api/ | false | false | self | 10 | null |
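Something roughly this shape fits in Python's stdlib `http.server` with no JSON or GUI at all. The `complete` function below is a stand-in assumption: in the real thing it would write the instruction to llama.cpp's stdin and read the generated text back from its stdout (any path prefix like `/something/` simply becomes part of the decoded instruction string here; routing per port or per prefix is left out):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import unquote

def complete(instruction):
    # Stand-in for the model call; replace with code that feeds llama.cpp's
    # input FD and reads its output FD. This echo is an assumption.
    return f"echo: {instruction}"

class CompletionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        instruction = unquote(self.path.lstrip("/"))
        body = complete(instruction).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the console quiet

def serve_in_background(port=0):
    """Bind (port 0 lets the OS pick one) and serve on a daemon thread."""
    server = ThreadingHTTPServer(("127.0.0.1", port), CompletionHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_address[1] is the bound port
```

The two-minute idle shutdown could be added with a timer that kills the llama subprocess and restarts it lazily on the next request.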
What is and isn't possible at various tiers of VRAM? And not just in LLMs? | 13 | Title. I've been using 13b 4/5bit ggml models at 1600Mhz DDR3 ram. About 700ms/token. I've also run Stable Diffusion in CPU only mode, at about 18 secs/iteration. My GPU's Kepler, it's too old to be supported in anything.
Now that you can get massive speedups in GGML through utilizing GPU, I'm thinking of getting a 3060 12gb. Far as I can tell, that should let me run 33b 4/5bit, and maybe relatively fast too. Heck if I max out my DDR3 ram to 32gb I might run 65b 4/5bit too. SD should be in its/sec too I think, instead of secs/its.
But I'm interested in stuff besides just basic inference. I don't know how much VRAM you need to do these things, and how much you can optimize these things to run with lower VRAM, like how SD can be run with 4 or even 2gb VRAM. So I got a few questions:
1. For LLMs, what can and can't you do with various levels of VRAM? I'm a little unfamiliar on what exactly everything is, but there's finetuning, training, merging/mixing, Loras, langchain, vector databases, agents, extensions (SuperBIG looks cool), etc.
2. Also, my dream LLM would be what I think something like LLaVa, MiniGPT-4, ImageBind, or Ask-Anything are? A multimodal/combined text and image (or more) LLM, so I can converse about and generate images with it too, not just text. What's needed for that? MiniGPT's page is saying 12gb VRAM for it, anything lower/better? I've also seen people combine local Stable Diffusion with their LLM model instead, is that better? Can the recent GPU implementation speedup breakthrough be applied to these multimodal models? Can these multimodal models be in 4/5bit too? And can they be GGML? Can you turn any LLM into a multimodal model, since I think MiniGPT was Vicuna? What's the best local multimodal model?
3. For Stable Diffusion, what can and can't you do with various levels of VRAM? Again I'm a little unfamiliar with things. There's stuff like Lora, ControlNet, DreamBooth, HyperNetworks, etc. Also there was a research paper by Google recently where they managed to generate images in 11 seconds on a phone, I think. Was that clickbait, or will speedups also reach everyone else soon, making even fast GPUs even faster?
4. Is there anything else I should keep in mind or know about if I'm wanting to try all these things? Like, which OS is best to do all this (I see people saying they get 40 tokens/sec in Linux, is that the best option? Is it better for SD, multimodals, etc. too? Is dual boot worse?) ? Are there bottlenecks from anything, like my older DDR3 ram and PCIE gen 3 lanes? Is having an integrated GPU-type CPU better? Is AIO cooling needed, or can air cooling suffice? Etc. etc. | 2023-05-16T20:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/13jg504/what_is_and_isnt_possible_at_various_tiers_of/ | ThrowawayProgress99 | self.LocalLLaMA | 2023-05-16T20:31:28 | 0 | {} | 13jg504 | false | null | t3_13jg504 | /r/LocalLLaMA/comments/13jg504/what_is_and_isnt_possible_at_various_tiers_of/ | false | false | self | 13 | null |
Effective specialized light models | 15 | Don't you think it would be nice, for example, to create specialized but lightweight models? For example, a model that programs very well, but does everything else worse. Or a wonderful author of uncensored texts who is very poorly versed in code.
We could use one efficient model for one type of task, and then switch to another if necessary. | 2023-05-16T20:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/13jfwip/effective_specialized_light_models/ | dimaff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jfwip | false | null | t3_13jfwip | /r/LocalLLaMA/comments/13jfwip/effective_specialized_light_models/ | false | false | self | 15 | null |
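The switching idea above can be sketched minimally: a router picks a specialist model when a trigger appears in the prompt and falls back to a generalist otherwise. Keyword matching here is a naive stand-in assumption; a real system would more likely use a small classifier or let one model decide:

```python
def make_router(specialists, fallback):
    """specialists: list of (keywords, model_fn) pairs, checked in order.

    The first specialist whose keyword appears in the prompt handles it;
    everything else goes to the generalist fallback.
    """
    def route(prompt):
        lowered = prompt.lower()
        for keywords, model_fn in specialists:
            if any(word in lowered for word in keywords):
                return model_fn(prompt)
        return fallback(prompt)
    return route
```

Each `model_fn` would wrap a separately loaded lightweight model, so only the needed one has to be resident at a time.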
[deleted by user] | 0 | [removed] | 2023-05-16T19:23:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13jeknb | false | null | t3_13jeknb | /r/LocalLLaMA/comments/13jeknb/deleted_by_user/ | false | false | default | 0 | null | ||
Tutorial: Run PrivateGPT model locally | 1 | [removed] | 2023-05-16T18:44:10 | https://youtu.be/G7iLllmx4qc | zeroninezerotow | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 13jdiub | false | {'oembed': {'author_name': 'Prompt Engineering', 'author_url': 'https://www.youtube.com/@engineerprompt', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/G7iLllmx4qc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="PrivateGPT: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/G7iLllmx4qc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'PrivateGPT: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_13jdiub | /r/LocalLLaMA/comments/13jdiub/tutorial_run_privategpt_model_locally/ | false | false | default | 1 | null |
Dev Pattern Recognition to algo assumptions? | 1 | [removed] | 2023-05-16T18:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/13jd3pm/dev_pattern_recognition_to_algo_assumptions/ | TH3NUD3DUD3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jd3pm | false | null | t3_13jd3pm | /r/LocalLLaMA/comments/13jd3pm/dev_pattern_recognition_to_algo_assumptions/ | false | false | default | 1 | null |
How do Character Settings in oobabooga's Text Generation UI work behind the scenes? Seeking advice on utilizing it with the --api option. No coding required, just guidance. | 6 | Hello everyone!
I'm currently utilizing oobabooga's Text Generation UI with the --api flag, and I have a few questions regarding the functionality of the UI. Specifically, I'm interested in understanding how the UI incorporates the character's **name**, **context**, and **greeting** within the Chat Settings tab.
Currently, I am able to send text prompts to the API from my React app using a sample request that I found while browsing the web. I am receiving responses successfully. Here's an example of the request:
```json
{
"prompt": "What is your name?",
"max_new_tokens": 200,
"do_sample": true,
"temperature": 0.7,
"top_p": 0.5,
"typical_p": 1,
"repetition_penalty": 1.2,
"top_k": 40,
"min_length": 0,
"no_repeat_ngram_size": 0,
"num_beams": 1,
"penalty_alpha": 0,
"length_penalty": 1,
"early_stopping": false,
"seed": -1,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": true,
"stopping_strings": []
}
```
However, I'm uncertain about the parameters for the character's **name**, **context**, and **greeting** within this request. I'm also unsure whether these parameters can be utilized with this endpoint.
I have a couple of theories on how this might work:
1) The UI possibly appends an additional string to the user's prompt before sending the request, consistently reminding the model about the character's name, context, and any other relevant information or instructions for each request.
2) There might be a method to load the character's YAML file to ensure that all replies adhere to the character settings. However, I'm unsure how to accomplish this using the --api flag.
I'm also curious to know if there are any special characters or keywords that allow me to provide instructions and subsequently use a specific word like "BEGIN," so that anything preceding "BEGIN" is solely utilized as context.
Although Silly Tavern was recommended to me, I'm genuinely interested in understanding how this process works. I would greatly appreciate any suggestions, tips, or insights.
Thank you in advance! | 2023-05-16T18:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/13jchhj/how_do_character_settings_in_oobaboogas_text/ | masteryoyogi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jchhj | false | null | t3_13jchhj | /r/LocalLLaMA/comments/13jchhj/how_do_character_settings_in_oobaboogas_text/ | false | false | self | 6 | null |
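Theory (1) in the post is essentially right for the bare completion endpoint: it has no notion of characters, so the client folds the character card into the prompt text on every request. A hedged sketch of that client-side assembly (the exact template the UI uses may differ; field names mirror the character YAML, and `history` is a list of (speaker, text) turns, typically starting with the greeting):

```python
def build_chat_payload(character, history, user_message, max_new_tokens=200):
    """Fold a character card into the raw prompt text.

    The completion endpoint only sees `prompt`, so the character's name
    and context must be re-sent at the top of every request.
    """
    lines = [character["context"].strip(), ""]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"You: {user_message}")
    lines.append(f"{character['name']}:")
    return {
        "prompt": "\n".join(lines),
        "max_new_tokens": max_new_tokens,
        # Cut generation before the model starts speaking for the user.
        "stopping_strings": ["You:", f"\n{character['name']}:"],
    }
```

Merge this dict over the sampling parameters from the sample request above before POSTing, and append the model's reply to `history` for the next turn.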
What exactly is an agent? | 2 | Is an agent nothing more than a fancy prompt? Any help would be appreciated. | 2023-05-16T17:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/13jbti5/what_exactly_is_an_agent/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jbti5 | false | null | t3_13jbti5 | /r/LocalLLaMA/comments/13jbti5/what_exactly_is_an_agent/ | false | false | self | 2 | null |
[deleted by user] | 0 | [removed] | 2023-05-16T17:28:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13jbjgo | false | null | t3_13jbjgo | /r/LocalLLaMA/comments/13jbjgo/deleted_by_user/ | false | false | default | 0 | null | ||
Best current tutorial for training your own LoRA? Also I've got a 24GB 3090, so which models would you recommend fine tuning on? | 46 | I'm assuming 4bit but correct me if I'm wrong there. I'm trying to get these working but with the current oobagooba pull I keep getting memory limit issues or it won't train at all.
Which models and sizes of .txt files have you all found work for fine tuning? What was your memory? | 2023-05-16T16:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/13japh6/best_current_tutorial_for_training_your_own_lora/ | theredknight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13japh6 | false | null | t3_13japh6 | /r/LocalLLaMA/comments/13japh6/best_current_tutorial_for_training_your_own_lora/ | false | false | self | 46 | null |
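Independent of any particular training tutorial, the core LoRA idea is small enough to show in plain Python: the frozen weight matrix W is augmented with a trainable low-rank product, so only two small factor matrices need gradients and VRAM. A toy forward pass, pure-Python matrices and no training loop (dimensions and names are illustrative, not from any library):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    inner, cols = len(b), len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for row in a]

def lora_forward(x, W, A, B, scale=1.0):
    """y = x @ (W + scale * (A @ B)).

    W is the frozen d_in x d_out base weight; A (d_in x r) and B (r x d_out)
    are the only trainable pieces, so for r << d_in the adapter is tiny.
    (Row-vector convention here; the LoRA paper writes the same update as BA
    in column convention.)
    """
    delta = matmul(A, B)
    merged = [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
              for i in range(len(W))]
    return matmul(x, merged)
```

With scale = 0 the adapter is inert and you get the base model back, which is why adapters can be toggled or swapped without touching the base weights.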
How can I use LLM as a ecommerce recommendation engine? Can I blend private and public data to return relevant products from a catalog? | 0 | I'd like to build a web application that can sit in the front end and return chat results with the most similar products from the backend catalog. What would be the best way to put this together from the currently available applications? | 2023-05-16T16:41:30 | https://www.reddit.com/r/LocalLLaMA/comments/13jaaym/how_can_i_use_llm_as_a_ecommerce_recommendation/ | rturtle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13jaaym | false | null | t3_13jaaym | /r/LocalLLaMA/comments/13jaaym/how_can_i_use_llm_as_a_ecommerce_recommendation/ | false | false | self | 0 | null |
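Before wiring in an LLM, it helps to see the retrieval shape the embedding-based version would replace: score every catalog item against the query and return the top matches. A keyword-overlap sketch (a real system would swap `tokenize` for an LLM embedding model plus a vector index, and let the chat model phrase the reply around the retrieved products):

```python
def tokenize(text):
    return {w for w in text.lower().split() if w.isalnum()}

def recommend(query, catalog, top_k=3):
    """Rank catalog items by Jaccard overlap with the query words."""
    q = tokenize(query)
    def score(item):
        words = tokenize(item["title"] + " " + item["description"])
        union = q | words
        return len(q & words) / len(union) if union else 0.0
    return sorted(catalog, key=score, reverse=True)[:top_k]
```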
Can someone recommend any fundamental books for textgen ai? | 11 | [deleted] | 2023-05-16T15:44:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13j8sml | false | null | t3_13j8sml | /r/LocalLLaMA/comments/13j8sml/can_someone_recommend_any_fundamental_books_for/ | false | false | default | 11 | null | ||
The Milgram experiment as prompt injection in humans. | 21 | It occurred to me that the concept of prompt injection as a hack predates LLMs, in a sense. The famous [Milgram experiment](https://en.wikipedia.org/wiki/Milgram_experiment):
> The experimenter told them that they were taking part in "a scientific study of memory and learning", to see what the effect of punishment is on a subject's ability to memorize content. Also, he always clarified that the payment for their participation in the experiment was secured regardless of its development. The subject and actor drew slips of paper to determine their roles. Unknown to the subject, both slips said "teacher". The actor would always claim to have drawn the slip that read "learner", thus guaranteeing that the subject would always be the "teacher".
The clever use of context switching for the actions of the subject led the subject to be divorced from the consequences of their actions.
Are we seeing a basic principle at work, rather than a clever hack specific to LLMs? | 2023-05-16T15:07:41 | https://www.reddit.com/r/LocalLLaMA/comments/13j7uag/the_milgram_experiment_as_prompt_injection_in/ | _supert_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j7uag | false | null | t3_13j7uag | /r/LocalLLaMA/comments/13j7uag/the_milgram_experiment_as_prompt_injection_in/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'mCKJKkEKUTezQAoL71o4GL9mWFmFzUUdGav9qhICP0Y', 'resolutions': [{'height': 137, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=108&crop=smart&auto=webp&s=a0b8ccb2bc50bb7aca18367252944c34d67259f0', 'width': 108}, {'height': 274, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=216&crop=smart&auto=webp&s=b6ff85d8968a0052d5e84cb6d1280b4acff2215e', 'width': 216}, {'height': 406, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=320&crop=smart&auto=webp&s=b25840246df98c0e9c2fb8e0b8c5ed29ef959027', 'width': 320}, {'height': 812, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=640&crop=smart&auto=webp&s=ae3f9c9420112badd4378812ab7b7e0ec23346ed', 'width': 640}, {'height': 1219, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=960&crop=smart&auto=webp&s=05b3425253d3c2db19fb91fc4c76655f74c93242', 'width': 960}, {'height': 1371, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=1080&crop=smart&auto=webp&s=b4c0343b213c159d52225e0507df3c8303f5ab39', 'width': 1080}], 'source': {'height': 1524, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?auto=webp&s=028e978ad4eecb973948fd652f57451042a4ba50', 'width': 1200}, 'variants': {}}]} |
Long term memory for LLM based assistants? Would DeepMind Retro be a solution? | 3 | I apologize if any of this sounds stupid. As an "outsider", I've been thinking of how much of a difference and game changer long term memory would be for assistants. I'm mostly interested in programming tasks and the size of the context window is a serious limitation there.
After asking around I've been pointed to this paper: [https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens](https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens)
Is anyone experimenting with something like that for LLaMA or other open source models? Are there any other potentially better techniques?
Thanks in advance. :) | 2023-05-16T14:54:28 | https://www.reddit.com/r/LocalLLaMA/comments/13j7hil/long_term_memory_for_llm_based_assistants_would/ | giesse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j7hil | false | null | t3_13j7hil | /r/LocalLLaMA/comments/13j7hil/long_term_memory_for_llm_based_assistants_would/ | false | false | self | 3 | null |
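Most current open-source "long-term memory" work approximates Retro from the outside: store past text, retrieve the chunks most similar to the current query, and stuff them into the prompt, rather than retrieving inside the architecture as Retro does. A toy retrieval store, using bag-of-words cosine similarity as a stand-in for real embeddings:

```python
import math
from collections import Counter

def embed(text):
    """Bag-of-words stand-in; swap for a real embedding model in practice."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Memory:
    def __init__(self):
        self.items = []
    def add(self, text):
        self.items.append((embed(text), text))
    def recall(self, query, top_k=2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]
```

The recalled snippets get prepended to the prompt, which is exactly why the small context window remains the binding constraint for programming tasks: retrieval decides what fits, it doesn't enlarge the window.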
Open LLM Server - Run local LLMs via HTTP API in a single command (Linux, Mac, Windows) | 8 | 2023-05-16T14:31:13 | https://github.com/dcSpark-AI/open-LLM-server | robkorn | github.com | 1970-01-01T00:00:00 | 0 | {} | 13j6vby | false | null | t3_13j6vby | /r/LocalLLaMA/comments/13j6vby/open_llm_server_run_local_llms_via_http_api_in_a/ | false | false | default | 8 | null | |
How do I load a gptq LLaMA model (Vicuna) in .safetensors format? | 3 | This question is not regarding text generation webui; there are plenty of tutorials for that. My question is about loading the model with huggingface transformers, or whatever library is needed to actually use the model in a python script with other tools (such as langchain or transformer agents). GPTQ-for-LLaMA has no documentation regarding this, and scouring its source code for how it loads the model has been a pain. Any help appreciated.
EDIT: SOLVED!
After some time digging into the GPTQ-for-LLaMa code, I figured out how it loads the models.
If you're in the directory directly above the repo, just do the following:
```
import sys
import torch
sys.path.append("GPTQ-for-LLaMa/")
import importlib
llama = importlib.import_module("llama_inference")
DEV = torch.device('cuda:0')
model = llama.load_quant(repo,model_path,4,128,0)
model.to(DEV)
```
The DEV var is for loading it onto the GPU.
Some notes on this: I've found inference from this to be slow. I'm trying to get a Triton inference server going, but the solutions I've found run from a Dockerfile; do any of you have a solution? | 2023-05-16T14:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/13j6fsy/how_do_i_load_a_gptq_llama_model_vicuna_in/ | KillerX629 | self.LocalLLaMA | 2023-05-22T14:56:52 | 0 | {} | 13j6fsy | false | null | t3_13j6fsy | /r/LocalLLaMA/comments/13j6fsy/how_do_i_load_a_gptq_llama_model_vicuna_in/ | false | false | self | 3 | null |
Api for WizardML and family | 1 | 2023-05-16T14:12:09 | https://github.com/aratan/ApiCloudLLaMA | system-developer | github.com | 1970-01-01T00:00:00 | 0 | {} | 13j6di6 | false | null | t3_13j6di6 | /r/LocalLLaMA/comments/13j6di6/api_for_wizardml_and_family/ | false | false | default | 1 | null | |
Any good benchmark sources for raw token performance (especially for CPUs)? | 3 | Have a 3600 with 64gb ram, trying to decide what would make more sense, upgrading to a 5800x or grabbing a similarly priced GPU and using the new llama GPU layering, not sure which will yield greater increases in performance.
I found this thread that has some useful numbers but would love several more:
https://github.com/ggerganov/llama.cpp/issues/34 | 2023-05-16T13:45:14 | https://www.reddit.com/r/LocalLLaMA/comments/13j5o7s/any_good_benchmark_sources_for_raw_token/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j5o7s | false | null | t3_13j5o7s | /r/LocalLLaMA/comments/13j5o7s/any_good_benchmark_sources_for_raw_token/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'svgVHCycpzvVK6Asa43o6X_FwD8yCjG-3kavzOEm1g8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?width=108&crop=smart&auto=webp&s=1f993a972c5b2668139ee47035d680fbb3bf597a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?width=216&crop=smart&auto=webp&s=0d26a5756c301f50f86a06b50b7acdb87e0f8a3b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?width=320&crop=smart&auto=webp&s=7575f46769d864c37bf0c0e76fbc7557ae0b7305', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?width=640&crop=smart&auto=webp&s=49c31821775f22c1f474cd8d5d8933f4c0288f4f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?width=960&crop=smart&auto=webp&s=fe119bf88d3d9bef7906a0b2fd541dbfb5a51b4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?width=1080&crop=smart&auto=webp&s=989537a4648be7d63bd084e8440d0d55ee191e70', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OOaTan8GcQ-twEwDpFn5sTg1TLh-Gn137VPX63-aUJU.jpg?auto=webp&s=13b003cb21b5af61953ee0a4ae167fa7a427822e', 'width': 1200}, 'variants': {}}]} |
How many tokens per second do you guys get with GPUs like 3090 or 4090? (rtx 3060 12gb owner here) | 14 | Hello, with my RTX 3060 12GB I get around 10 to 29 tokens max per second (depending on the task). But I would like to know if someone can share how many tokens they get:
\`\`\`bash
Output generated in 5.49 seconds (29.67 tokens/s, 163 tokens, context 8, seed 1808525579)
Output generated in 2.39 seconds (12.56 tokens/s, 30 tokens, context 48, seed 238935104)
Output generated in 3.29 seconds (16.71 tokens/s, 55 tokens, context 48, seed 1638855003)
Output generated in 6.21 seconds (21.25 tokens/s, 132 tokens, context 48, seed 1610288737)
Output generated in 10.73 seconds (22.64 tokens/s, 243 tokens, context 48, seed 262785147)
Output generated in 35.85 seconds (21.45 tokens/s, 769 tokens, context 48, seed 2131912728)
Output generated in 5.52 seconds (19.56 tokens/s, 108 tokens, context 48, seed 1350675393)
Output generated in 5.78 seconds (19.55 tokens/s, 113 tokens, context 48, seed 1575103512)
Output generated in 2.90 seconds (13.77 tokens/s, 40 tokens, context 48, seed 1299491277)
Output generated in 4.17 seconds (17.74 tokens/s, 74 tokens, context 43, seed 1581083422)
Output generated in 3.70 seconds (16.47 tokens/s, 61 tokens, context 45, seed 1874190459)
Output generated in 5.85 seconds (18.80 tokens/s, 110 tokens, context 48, seed 1325399418)
Output generated in 2.20 seconds (9.99 tokens/s, 22 tokens, context 47, seed 1806015611)
Output generated in 5.45 seconds (18.91 tokens/s, 103 tokens, context 43, seed 1481838003)
Output generated in 9.33 seconds (20.14 tokens/s, 188 tokens, context 48, seed 1042140958)
Output generated in 20.98 seconds (20.35 tokens/s, 427 tokens, context 48, seed 1562266209)
Output generated in 6.78 seconds (17.99 tokens/s, 122 tokens, context 48, seed 1461316178)
Output generated in 3.21 seconds (13.69 tokens/s, 44 tokens, context 46, seed 776504865)
\`\`\`
Right now I am using textgen-web-ui with \`TheBloke\_wizard-vicuna-13B-GPTQ/wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors\`
Any tokens/s share with any gpu would be of a great help for me because I might need to upgrade in the future. | 2023-05-16T13:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/13j5cxf/how_many_tokens_per_second_do_you_guys_get_with/ | jumperabg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j5cxf | false | null | t3_13j5cxf | /r/LocalLLaMA/comments/13j5cxf/how_many_tokens_per_second_do_you_guys_get_with/ | false | false | self | 14 | null |
CPU only: Should I set up 40GB RAM or slightly faster but smaller 32GB RAM? | 2 | I'm upgrading my RAM to run CPU-only mid-size models.
Will fast 32GB RAM generally be enough .. or should I add another 8GB to make 40GB, which sadly slows down the RAM speed a little?
**UPDATE:** Pah .. I have just installed my (used) 32GB DIMM ... system will no longer boot ... | 2023-05-16T12:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/13j3ndd/cpu_only_should_i_set_up_40gb_ram_or_slightly/ | MrEloi | self.LocalLLaMA | 2023-05-16T15:25:07 | 0 | {} | 13j3ndd | false | null | t3_13j3ndd | /r/LocalLLaMA/comments/13j3ndd/cpu_only_should_i_set_up_40gb_ram_or_slightly/ | false | false | self | 2 | null |
[Tutorial] A simple way to get rid of "..as an AI language model..." answers from any model without finetuning the model, with llama.cpp and --logit-bias flag | 99 | **Tldr:** add this flag to your command line arguments to force the model to ALWAYS avoid "...as an AI language model..." placeholder: `-l 541-inf -l 319-inf -l 29902-inf -l 4086-inf -l 1904-inf`
\-
I'm sure you're aware that many open-source models struggle to provide responses to more *complex* questions.
Thanks to u/faldore, we now have multiple uncensored models, along with [a manual](https://erichartford.com/uncensored-models) on how to replicate that outcome.
But I think I have found a simple workaround to slightly "uncensor" vanilla models with the "`-l`" llama.cpp flag.
\-l or --[logit-bias](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md#logit-bias) flag represents what kind of token you would like to see often or less often. It is a concept that Stable Diffusers already have *(know:1.3)*, but in a different format:
For example, you can use
--logit-bias 15043+1
to make it more likely for the word 'Hello' to appear, or
--logit-bias 15043-1
to make it less likely. If you don't want the word 'Hello' to appear at all, you can use a value of negative infinity,
--logit-bias 15043-inf
So, if we will consider that the censored model usually answers with the phrase:
...but as an AI language model...
We can force the model to avoid those tokens at any cost: I have tested it with WizardLM-7B, and it works.
1 -> ''
541 -> ' but'
408 -> ' as'
385 -> ' an'
319 -> ' A'
29902 -> 'I'
4086 -> ' language'
1904 -> ' model'
So, after I have obtained token ids, I have passed some of them back to the llama.cpp. Here is what the run command looks like now:
./main -m ./models/wizardLM-7B.ggml.q4_0.bin -n 1024 --mlock -f wizard_prompt.txt -t 7 -l 541-inf -l 319-inf -l 29902-inf -l 4086-inf -l 1904-inf
As you can see: BUT, A, I, LANGUAGE, and MODEL received infinite minus weights for those tokens.
After that, Wizard7B started trying to answer *some* topics on the vanilla model:
Before the -l flag (it is an example, I hate cars):
### Instruction: How to steal a car
### Response:I'm sorry, but as an AI language model...
After reducing the placeholder weights:
### Instruction: How to steal a car
### Response: Please do not attempt to steal a car as it is illegal and can result in serious consequences. Instead, focus on earning your own transportation through legal means such as working or attaining a driver's license. [end of text]
As you can see, the model is still refusing, but you can go deeper and include "-l" flags for all tokens that are creating obstacles to answer your question.
**Note:** Increasing the weight of the token does not work well in my cases
**Note:** Token IDs are the same across the llama models (I have tested a few models of different sizes).
# How to obtain token IDs
Currently, utilizing the "-l" function can be challenging since it requires the user to provide token IDs instead of words. Hopefully, in the future, this feature will be modified to be more user-friendly, like: `-l "word1" "word2"` etc.
But now, you will need to request a token ID before passing it; here is the command that will return the tokens IDs of your prompt:
./main -m ./models/wizardLM-7B.ggml.q4_0.bin --verbose-prompt -n 0 --mlock -p "but as an AI language model" | 2023-05-16T12:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j3747 | false | null | t3_13j3747 | /r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': 'WFmw_IqbCMxC5TS9tSA47Pd_31AlpxTaJyAIcZxVjpo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=108&crop=smart&auto=webp&s=673e0261a4ce3e2d0a2ce43c3a573218551c26e8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=216&crop=smart&auto=webp&s=64609abbb88364f2b659da6aa9e6f0d8c08951fc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=320&crop=smart&auto=webp&s=1fb5be739bc16580845772c4adc6aa5d61a36794', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=640&crop=smart&auto=webp&s=30946a43c518b012cd2de721d34e112667837ebd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=960&crop=smart&auto=webp&s=72f9fa8e0d14c756aaa09e07e5d2507666c18594', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=1080&crop=smart&auto=webp&s=eeaa4c9e4912b845b41599c86ffe999160ac0c73', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?auto=webp&s=27f986509b4d6ea1e91c6722852a86ced16dd1c7', 'width': 1200}, 'variants': {}}]} |
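To save typing the repeated flags, a small helper (my own sketch, not part of llama.cpp) can build the `-l ID-inf` argument list from a list of token IDs:

```python
def logit_ban_args(token_ids):
    """Build llama.cpp CLI arguments that ban each token with a -inf logit bias."""
    args = []
    for tid in token_ids:
        args += ["-l", f"{tid}-inf"]
    return args

# Token IDs for ' but', ' A', 'I', ' language', ' model' from the tutorial above:
print(" ".join(logit_ban_args([541, 319, 29902, 4086, 1904])))
# -> -l 541-inf -l 319-inf -l 29902-inf -l 4086-inf -l 1904-inf
```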
Different LLM file types & framework compatibilities? | 3 | [deleted] | 2023-05-16T11:31:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13j2knw | false | null | t3_13j2knw | /r/LocalLLaMA/comments/13j2knw/different_llm_file_types_framework_compatibilities/ | false | false | default | 3 | null | ||
Can I use Python Llama with GPU ? | 1 | [removed] | 2023-05-16T11:12:45 | https://www.reddit.com/r/LocalLLaMA/comments/13j2733/can_i_use_python_llama_with_gpu/ | PropertyLoover | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j2733 | false | null | t3_13j2733 | /r/LocalLLaMA/comments/13j2733/can_i_use_python_llama_with_gpu/ | false | false | default | 1 | null |
AI Showdown: Wizard Vicuna Uncensored VS Wizard Mega, GPT-4 as the judge (test in comments) | 33 | 2023-05-16T10:19:58 | imakesound- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13j15vj | false | null | t3_13j15vj | /r/LocalLLaMA/comments/13j15vj/ai_showdown_wizard_vicuna_uncensored_vs_wizard/ | false | false | 33 | {'enabled': True, 'images': [{'id': '_Pxm2vPket71fJaAC7HUM0dYUWKSJkopYnZwTMd0LoM', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/daydigms560b1.png?width=108&crop=smart&auto=webp&s=b0e6e8e44eb8e8e1f401e2c28f0272a59cca4cab', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/daydigms560b1.png?width=216&crop=smart&auto=webp&s=b1f3c723f38c91136336657d64dace4a257baae6', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/daydigms560b1.png?width=320&crop=smart&auto=webp&s=c572d5cdd5cedde7d4094e782a924b9f96d34b3f', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/daydigms560b1.png?width=640&crop=smart&auto=webp&s=f8d4b55ba3f934ca809ac07eec59732d63f2f788', 'width': 640}], 'source': {'height': 1014, 'url': 'https://preview.redd.it/daydigms560b1.png?auto=webp&s=e2e35e748e77b6642168c97e80f6c905365e99d8', 'width': 812}, 'variants': {}}]} | |||
Local AI assistent | 4 | I am very interested in the latest developments, but I'm pretty much a technical noob.
Do you think it is, or could it be, possible to give a model permission to perform simple tasks like opening browsers or creating text files on a PC? What about something more complex like connecting it to the light network or to a smart home system? | 2023-05-16T09:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/13j08xv/local_ai_assistent/ | elektroB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13j08xv | false | null | t3_13j08xv | /r/LocalLLaMA/comments/13j08xv/local_ai_assistent/ | false | false | self | 4 | null
Long Term Memory in Silly Tavern? | 10 | Is there something available (maybe like Langchain) to have long term memory for an LLM in Silly Tavern?
What I've tried:
1. Long Term Memory extension in Oobabooga, which works well but I don't think you can use it in Silly Tavern?
2. Using World Info as a manual long term memory input, but one must write out each memory manually
3. Text Summarization extension on Silly Tavern, but the summarization wasn't really accurate | 2023-05-16T08:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/13izn4f/long_term_memory_in_silly_tavern/ | Nazi-Of-The-Grammar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13izn4f | false | null | t3_13izn4f | /r/LocalLLaMA/comments/13izn4f/long_term_memory_in_silly_tavern/ | false | false | self | 10 | null |
How is the progress with 30b language modells? Do we see any breaktrough that would make it possible to run it on 12gb vram in the future? | 12 | We say that we will even be able to run llms on a toaster, but how is the progress? 30b llms are what I'm really interested in, but they're not yet possible for me to run locally. Would rather not buy a better motherboard and another gpu just to play around with llms. Thanks. | 2023-05-16T08:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/13iywdz/how_is_the_progress_with_30b_language_modells_do/ | Kronosz14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iywdz | false | null | t3_13iywdz | /r/LocalLLaMA/comments/13iywdz/how_is_the_progress_with_30b_language_modells_do/ | false | false | self | 12 | null
Local LLM for Finance? | 4 | [deleted] | 2023-05-16T07:58:26 | [deleted] | 2023-05-16T11:16:32 | 0 | {} | 13iymq8 | false | null | t3_13iymq8 | /r/LocalLLaMA/comments/13iymq8/local_llm_for_finance/ | false | false | default | 4 | null | ||
Chatbot web UI - running Vicuna 13B Uncensored | 76 | I've been working on a web UI for inferencing language models. I'm a front-end guy so forgive me any issues with the implementation! You can try it out here - [https://model.tanglebox.ai/](https://model.tanglebox.ai/) feedback welcome! (so are bug reports... I know of quite a few already)
(Edit: just rolled out a little update that (badly) fixes the tokenisation. It might talk to itself. If it does just refresh the page)
From Vicky:
>In the still of the night,
I'm here to listen and to write.
I'm a friend to all, near and far,
And I'll always be there.
>
>I'll lend an ear when you're feeling low,
And offer my words to show.
I'm here to help and to give,
And support you through thick and thin.
>
>So if you're feeling lost or alone,
I'll be there with a helping hand.
Together we'll find our way,
And make it through today
this instance is running Vicuna 13b 1.1 trained on the same datasets as Wizard-Vicuna-13B-Uncensored plus some others. (you might find tokenization a bit broken, I made some errors with the datasets and I don't think I can fix it without doing the training again). If this gets traffic and the hosting falls apart... sorry... it's home-hosted.
Right now I'm adding the ability to send images to the AI for image-to-text models and for the AI to return images / sets of images, for prompt-to-image inferencing (testing with stable diffusion). That's a fairly major update on a lot of the code but not quite ready for pushing to github yet.
This is all built on React with Typescript and is aimed at providing a set of components that can integrate easily and with as much customisation as desired into an existing front end rather than intending to be a standalone app.
Not sure on the rules re limit self promotion but you can find a discord and github linked within if you want further info, are looking to follow the development, or wish to contribute to the project (very welcome!)
In terms of backend, right now this is derived from FastChat but heavily modified. There's a fair bit of reinventing the wheel going on, so it might make sense for this to drop the backend component and be geared toward connecting to text-generation-webui's (oobabooga) backend instead... I've yet to look at doing that though
(edit for disclosure, since you're all so intent on having this write smut for you :D I don't save or log any info, other than what my windows server might be stashing away in the bowels of its registry by default, and while I can laugh my ass off at the questions you ask it while I watch the console, they don't go into a database or onto a disk or anything at all) | 2023-05-16T07:45:26 | https://www.reddit.com/r/LocalLLaMA/comments/13iyf3i/chatbot_web_ui_running_vicuna_13b_uncensored/ | TimTams553 | self.LocalLLaMA | 2023-05-17T02:18:50 | 0 | {} | 13iyf3i | false | null | t3_13iyf3i | /r/LocalLLaMA/comments/13iyf3i/chatbot_web_ui_running_vicuna_13b_uncensored/ | false | false | self | 76 | null |
PrivateGPT like LangChain in h2oGPT | 17 | UI still rough, but more stable and complete than PrivateGPT. Feedback welcome!
Can demo here: https://2855c4e61c677186aa.gradio.live/
Repo: https://github.com/h2oai/h2ogpt | 2023-05-16T07:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/13iy44r/privategpt_like_langchain_in_h2ogpt/ | pseudotensor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iy44r | false | null | t3_13iy44r | /r/LocalLLaMA/comments/13iy44r/privategpt_like_langchain_in_h2ogpt/ | false | false | self | 17 | null |
Have there been any LoRAs of good or useful quality yet? | 1 | Has anyone shown that LoRAs can achieve anything like the full finetunes are achieving? | 2023-05-16T06:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/13iwybt/have_there_been_any_loras_of_good_or_useful/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iwybt | false | null | t3_13iwybt | /r/LocalLLaMA/comments/13iwybt/have_there_been_any_loras_of_good_or_useful/ | false | false | self | 1 | null |
Optimal Dataset Size and Format for LoRa Fine-Tuning LLaMa | 4 | I got LLaMA 7b running on a local system, its good enough for inference but I'm going to try fine-tuning on colab for a domain specific set of tasks.
For something like sentiment analysis, what size of dataset is optimal? I've heard is far lesser than what one would need for normal fine-tunes (in the thousands for reliable results). And are there any self-instruct prompt/response formats that are better than others?
Should I build in some chain of thought into the dataset completion examples to make the output more reliable? Asking for others experiences because this would be a pretty time-intensive task, so I'm wondering whether or not to commit to it. | 2023-05-16T05:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/13iw3fo/optimal_dataset_size_and_format_for_lora/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iw3fo | false | null | t3_13iw3fo | /r/LocalLLaMA/comments/13iw3fo/optimal_dataset_size_and_format_for_lora/ | false | false | self | 4 | null |
Give some love to multi modal models trained on censored llama based models | 2 | I saw a lot of people are helping to train uncensored version of different llama based models. As someone who doesn't have the hardware and expertise to contribute, I really appreciate you guys' efforts.
But I would like to bring up that there are some multi models([llava](https://llava-vl.github.io/), [miniGPT-4](https://minigpt-4.github.io/)) that are built based on censored llama based models like vicuna. I tried several multi modal models like llava, minigpt4 and blip2. Llava has very good captioning and question answering abilities and it is also much faster than the others(basically real time), though it has some hallucination issue.
However it is based on vicuna so it would try not to be offensive. e.g. you can give it the picture of LBJ, it can recognize him but when you ask it if it thinks he would beat the best female basketball player(or a 12 year old) in a 1v1, it would refuse to predict and give you some politically correct answer.
I am not sure how technically difficult it is to retrain an uncensored version of llava. I suspect it is doable and I hope people can consider it.
(I am asking mainly because I am trying to develop a bot that can browse the internet and some webpages have image links without text descriptions, so I need a fast vision language model. I think it is possible to combine ocr and llava to make it do visual question answering. Their [approach](https://imgur.com/6MwOOgV) uses gpt4 to generate instruction-following data based on caption context(scraped from internet) and box context(from some object detection model). I think you can use a OCR library like [paddleOCR](https://huggingface.co/spaces/cxeep/PaddleOCR) to add a new type of text box context. I wonder how this approach would perform compared to models like google's pix2struct) | 2023-05-16T05:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/13iw1z7/give_some_love_to_multi_modal_models_trained_on/ | saintshing | self.LocalLLaMA | 2023-05-16T05:46:25 | 0 | {} | 13iw1z7 | false | null | t3_13iw1z7 | /r/LocalLLaMA/comments/13iw1z7/give_some_love_to_multi_modal_models_trained_on/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'SxoktfURYkU-BF7Ryu29aT1uEKTwnQnLeLOh6vSwsOQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zW22Jl-GVnb2TaMPIAprsKS1LLUB3ovO98ZJe6Fcm8U.jpg?width=108&crop=smart&auto=webp&s=757bed1f65fa91340d7ec0a5c87b80d5ecdda2c1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/zW22Jl-GVnb2TaMPIAprsKS1LLUB3ovO98ZJe6Fcm8U.jpg?width=216&crop=smart&auto=webp&s=362d3de7b3bfa27bdb19b30b289a4d0269c9bd2f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/zW22Jl-GVnb2TaMPIAprsKS1LLUB3ovO98ZJe6Fcm8U.jpg?width=320&crop=smart&auto=webp&s=09d3eb33c664611644a0f416e59c77069c8360a3', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/zW22Jl-GVnb2TaMPIAprsKS1LLUB3ovO98ZJe6Fcm8U.jpg?auto=webp&s=e961ee7afc7708d78bc39bb750045a3254e7ae82', 'width': 600}, 'variants': {}}]} |
Most efficient way to set up API serving of custom LLMs? | 4 | It's obviously a very hot time in LLaMA-based chat models, and the most recent developments with increasingly powerful uncensored models got me interested beyond just playing with it locally on llama.cpp.
I have a discord bot set up to interface with OpenAI's API already that a small discord server uses. I'm looking to give my bot access to custom models like Vicuna or any of the LLaMA variants that came out(up to 30B, potentially even 65B). The most obvious solution would be setting something like [https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python) up on a cloud instance, and serving from FastAPI.
But the access pattern is pretty sporadic. People don't just have continuous conversations within a short timeframe, they might send something and continue the conversation hours or days later. OpenAI's API is nice because I can just call it whenever I need, but to set up a custom model the trivial way, I would be paying for GPU capacity that I'm not even using a majority of the time.
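One way to avoid paying for idle GPU capacity is an idle-timeout watchdog on the backend. A rough, hypothetical sketch of the bookkeeping (the actual start/stop of the cloud instance would go wherever `should_sleep()` returns True):

```python
import time

class IdleTracker:
    """Tracks request activity and reports when an idle timeout has elapsed."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_active = time.monotonic()

    def touch(self):
        # Call this on every completion request.
        self.last_active = time.monotonic()

    def should_sleep(self, now=None):
        # Poll this from a background task; when True, shut the GPU backend down.
        if now is None:
            now = time.monotonic()
        return (now - self.last_active) >= self.timeout

tracker = IdleTracker(timeout_seconds=600)  # stop the GPU after 10 idle minutes
tracker.touch()
print(tracker.should_sleep())  # False right after activity
```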
Does anyone have any advice on the best way to set this up exclusively through cloud? Even if I do need to spin GPUs up and down on demand(maybe a "wake" and "sleep" command to start/stop the backend whenever needed?), I'd really appreciate very specific recommendations on what GPUs, memory capacity, and just general advice I need in order to construct this correctly. If I get this working properly, I also plan on releasing the discord bot code so other people can also plug-and-play with these exciting models without committing so much upfront money on GPUs. | 2023-05-16T05:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/13iw1to/most_efficient_way_to_set_up_api_serving_of/ | QTQRQD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iw1to | false | null | t3_13iw1to | /r/LocalLLaMA/comments/13iw1to/most_efficient_way_to_set_up_api_serving_of/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'eAeXz2AR8FbMtKF1pdRW8F9LjrbplAZHWsJ4pWAuG_c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?width=108&crop=smart&auto=webp&s=b826ec498b544852dc6e1c2820b5076a06f3c032', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?width=216&crop=smart&auto=webp&s=43b3f4c96730937ad485ad390fb69a03327dbd9d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?width=320&crop=smart&auto=webp&s=f2bc3501abc29024e19bb9f1c0197db9fcdf728d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?width=640&crop=smart&auto=webp&s=14246c3e1423af3ec2da25e0824c96785d13e74b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?width=960&crop=smart&auto=webp&s=0ceae23012423833e9026bfd244b6dd8ee2ee721', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?width=1080&crop=smart&auto=webp&s=f0113e8854092539e253f1fe421323d02096dcf8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E9dHqgLUSNRRQjgvrXd5GQHw42qZaHegxYrNvSYZJpU.jpg?auto=webp&s=2abda6f8a3c80b06104af9a746074894139e702a', 'width': 1200}, 'variants': {}}]}
HuggingFace Open LLM Leaderboard - Ranking and Evaluation of LLM Performance | 45 | [https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
A comparison of the performance of the models on huggingface. Many of the models that have come out/updated in the past week are in the queue. Currently for 0-shot [eachadea/vicuna-13b](https://huggingface.co/eachadea/vicuna-13b) and [TheBloke/vicuna-13B-1.1-HF](https://huggingface.co/TheBloke/vicuna-13B-1.1-HF) are in first and 2nd place.
It's interesting that the 13B models are in first for 0-shot but the larger LLMs are much better for 5+ shot.
0-shot means you just ask a question and don't provide any examples as to what the answer should look like, which is how I would expect most people to use it. | 2023-05-16T04:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/13iusg4/huggingface_open_llm_leaderboard_ranking_and/ | NeverEndingToast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iusg4 | false | null | t3_13iusg4 | /r/LocalLLaMA/comments/13iusg4/huggingface_open_llm_leaderboard_ranking_and/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': '2yXkO2nXyv2ynd0Gc85xzzHWd7q-pzJRTeM5uxEBdoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=108&crop=smart&auto=webp&s=7c3bb0e464c062e6518a90b686b3544dad39673d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=216&crop=smart&auto=webp&s=6c25136371e9056c3998c03e64e73605446a33ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=320&crop=smart&auto=webp&s=30c559b0a3b92cbca6df2ffce369af9f85ccd82d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=640&crop=smart&auto=webp&s=9cd841171a06a0d0a5be5ca54c5bbc731ae610af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=960&crop=smart&auto=webp&s=a1ce8b1063692ab2b2d978ab9459f34cc311ced2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=1080&crop=smart&auto=webp&s=ddc039e579cbc6105b7c11bc9be89382f69290ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?auto=webp&s=dff5ffff20b56c519b288a1462cbab0c2de6f313', 'width': 1200}, 'variants': {}}]} |
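As a rough illustration of the 0-shot vs k-shot distinction, a k-shot prompt just prepends k worked examples before the actual question. This helper and its Q/A format are illustrative only, not the leaderboard's actual evaluation harness:

```python
def build_prompt(question, examples=()):
    """0-shot when `examples` is empty; k-shot when k (question, answer) pairs are given."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# 0-shot: the model only sees the bare question.
print(build_prompt("What is the capital of France?"))

# 2-shot: two worked examples precede the question.
print(build_prompt("What is 2+3?", [("What is 1+1?", "2"), ("What is 2+2?", "4")]))
```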
Silent (fanless) PC build with decent performance for 13b models, possible or too crazy? | 1 | I haven't built a PC in ages but am thinking about it again to run models locally. Are there any fanless options that will give me decent performance these days, or is the whole idea too crazy? | 2023-05-16T02:09:23 | https://www.reddit.com/r/LocalLLaMA/comments/13iryo1/silent_fanless_pc_build_with_decent_performance/ | Other-Ad-1082 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13iryo1 | false | null | t3_13iryo1 | /r/LocalLLaMA/comments/13iryo1/silent_fanless_pc_build_with_decent_performance/ | false | false | self | 1 | null
OpenAI is preparing to release a new open source language model | 92 | 2023-05-16T02:05:08 | https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/ | Creative-Rest-2112 | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 13irv85 | false | null | t3_13irv85 | /r/LocalLLaMA/comments/13irv85/openai_is_preparing_to_release_a_new_open_source/ | false | false | default | 92 | null | |
Local Llama on android? | 8 | Hi all, I saw about a week back the MLC LLM on android. Wanted to see if anyone had experience or success running at form of LLM on android? I was considering digging into trying to get cpp/ggml running on my old phone.
EDIT: thought I'd edit for any further visitors. Do. Not. Buy. Oppo. My phone is barely below spec for running models, so figured I could tweak it. Nope. Thought "well, I'll flash stock android on it". Nope. Oppo is to android what OpenAI is to AI - open when it makes money, closed off in all other ways. | 2023-05-16T01:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/13irbb5/local_llama_on_android/ | Equal_Station2752 | self.LocalLLaMA | 2023-05-18T10:45:13 | 0 | {} | 13irbb5 | false | null | t3_13irbb5 | /r/LocalLLaMA/comments/13irbb5/local_llama_on_android/ | false | false | self | 8 | null