| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PromptTools Playground: Open-source, self-hostable UI for LLM experimentation | 1 | 2023-07-31T21:02:59 | https://v.redd.it/xxy4ulxj7dfb1 | hegel-ai | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15es02f | false | [reddit_video metadata omitted] | t3_15es02f | /r/LocalLLaMA/comments/15es02f/prompttools_playground_opensource_selfhostable_ui/ | false | false | default | 1 | [preview metadata omitted] |
Recommended version? | 5 | I'm lost with all the new TheBloke versions coming out hourly, or even by the minute. If you had to choose, which one would be best for:
1. A Windows 10 system with a 3080Ti , 32GB system RAM and a Ryzen 9 7900X
2. General instructions (how do I...) and writing
3. Preferably uncensored, but that's mainly because I loathe a machine telling me I'm not responsible enough to handle the information.
I realize this is probably a big, subjective ask, but holy cow there's a lot to choose from. | 2023-07-31T20:40:17 | https://www.reddit.com/r/LocalLLaMA/comments/15erevo/recommended_version/ | 0260n4s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15erevo | false | null | t3_15erevo | /r/LocalLLaMA/comments/15erevo/recommended_version/ | false | false | self | 5 | null |
How can I evaluate the quality of a custom chatbot? | 6 | Hey all, can you recommend some resources or a general approach for benchmarking/evaluating the quality of responses that a chatbot produces? Some factors I'm thinking about that contribute to the subjective quality of my chatbot's responses:
* Data provided in the prompt
* Data used to finetune
* The prompt template
* The implementation of the bot (going beyond a simple prompt, for example, running an Agent)
To simplify--say I want to make a chatbot that emulates Albert Einstein. What can I do, to evaluate how closely it mimics the real Albert Einstein in a conversation? I'm curious to hear about any ideas or techniques, but for my purposes, quick-and-dirty would go a long way! | 2023-07-31T20:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/15eqntk/how_can_i_evaluate_the_quality_of_a_custom_chatbot/ | dondochaka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eqntk | false | null | t3_15eqntk | /r/LocalLLaMA/comments/15eqntk/how_can_i_evaluate_the_quality_of_a_custom_chatbot/ | false | false | self | 6 | null |
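One quick-and-dirty technique worth mentioning here is "LLM-as-judge": fix a question set, generate answers from each bot variant (different prompt data, templates, finetunes), and have a stronger model grade persona fidelity against a rubric. Below is a minimal, runnable sketch of the loop; the stub `judge` and its keyword heuristic are purely illustrative placeholders for a real grader call (e.g. GPT-4 with a rubric prompt).

```python
# Minimal LLM-as-judge evaluation loop (sketch).
# `judge` is a stand-in for any scoring backend (a stronger LLM, or a human);
# swap in a real API call where indicated.

def judge(question: str, answer: str) -> int:
    """Stub judge: score persona fidelity from 1 to 5.
    Replace with a call to a stronger LLM prompted with a grading rubric."""
    # Hypothetical heuristic for the sketch: reward in-persona keywords.
    keywords = ["relativity", "physics", "thought experiment"]
    return 1 + min(4, sum(k in answer.lower() for k in keywords) * 2)

def evaluate(bot, questions):
    """Run the bot over a fixed question set and average the judge's scores."""
    scores = [judge(q, bot(q)) for q in questions]
    return sum(scores) / len(scores)

# Toy bot for demonstration
einstein_bot = lambda q: "As I showed with relativity, a thought experiment helps."
print(evaluate(einstein_bot, ["What is time?", "Explain gravity."]))
```

The useful property is that the question set stays fixed, so scores are comparable across prompt/template/finetune variants even though each individual judgment is noisy.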
Getting about .5-1 t/s with a RTX3090 | 2 | So after WSL2 blew up from a Windows 11 update, I did a clean install. Previously I could reliably get something like 20-30t/s from 30b sized models. Now I'm struggling to get even 2 t/s. GGML models get slightly better speeds but gptq and hf models are pretty slow. I'm baffled and have tried many combinations of CUDA toolkit and bitsandbytes (Keith-Hon, jllllll) to try and get it working like it was before. I've installed CUDA toolkits 11.7, 11.8 and 12.1 with their respective drivers (12.1 worked with the 11.8 python packages before) with Python 3.10.11. It performs exactly the same running natively vs conda env
https://preview.redd.it/951mlveypcfb1.png?width=566&format=png&auto=webp&s=57c1a4091093cb7b85491b212aec176ddd5125a0
https://preview.redd.it/qp2h0r5urcfb1.png?width=1103&format=png&auto=webp&s=83dbdcbb46099c53f1197f23bb31a9da7aba82ec
https://preview.redd.it/29sw58hj1bfb1.png?width=830&format=png&auto=webp&s=9b03fd16000828b73d8913230994667770da7449
https://preview.redd.it/f0ix1v3vpcfb1.png?width=856&format=png&auto=webp&s=3b476a362bc4fcf398c09273f4172e30678f1cc2
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:42:34_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0
https://preview.redd.it/d0s1r9drpcfb1.png?width=845&format=png&auto=webp&s=54860cbb908a0cfa65cc4a2d669698f9bf9f218c
Package Version
------------------------- ------------
absl-py 1.4.0
accelerate 0.21.0
aiofiles 23.1.0
aiohttp 3.8.5
aiosignal 1.3.1
altair 5.0.1
anyio 3.7.1
appdirs 1.4.4
asttokens 2.2.1
async-timeout 4.0.2
attrs 23.1.0
auto-gptq 0.3.0+cu118
backcall 0.2.0
bitsandbytes 0.41.0
cachetools 5.3.1
certifi 2023.7.22
charset-normalizer 3.2.0
click 8.1.6
colorama 0.4.6
comm 0.1.3
contourpy 1.1.0
cycler 0.11.0
datasets 2.14.0
debugpy 1.6.7
decorator 5.1.1
dill 0.3.7
diskcache 5.6.1
docker-pycreds 0.4.0
einops 0.6.1
exceptiongroup 1.1.2
executing 1.2.0
exllama 0.0.9+cu118
fastapi 0.95.2
ffmpy 0.3.1
filelock 3.12.2
flexgen 0.1.7
fonttools 4.41.1
frozenlist 1.4.0
fsspec 2023.6.0
gitdb 4.0.10
GitPython 3.1.32
google-auth 2.22.0
google-auth-oauthlib 1.0.0
gradio 3.33.1
gradio_client 0.2.5
grpcio 1.56.2
h11 0.14.0
httpcore 0.17.3
httpx 0.24.1
huggingface-hub 0.16.4
idna 3.4
ipykernel 6.25.0
ipython 8.14.0
jedi 0.18.2
Jinja2 3.1.2
jsonschema 4.18.4
jsonschema-specifications 2023.7.1
jupyter_client 8.3.0
jupyter_core 5.3.1
kiwisolver 1.4.4
linkify-it-py 2.0.2
llama-cpp-python 0.1.77
Markdown 3.4.4
markdown-it-py 2.2.0
MarkupSafe 2.1.3
matplotlib 3.7.2
matplotlib-inline 0.1.6
mdit-py-plugins 0.3.3
mdurl 0.1.2
mpmath 1.2.1
multidict 6.0.4
multiprocess 0.70.15
nest-asyncio 1.5.6
networkx 3.1
numpy 1.24.1
oauthlib 3.2.2
orjson 3.9.2
packaging 23.1
pandas 2.0.3
parso 0.8.3
pathtools 0.1.2
peft 0.5.0.dev0
pickleshare 0.7.5
Pillow 9.3.0
platformdirs 3.9.1
prompt-toolkit 3.0.39
protobuf 3.20.3
psutil 5.9.5
PuLP 2.7.0
pure-eval 0.2.2
pyarrow 12.0.1
pyasn1 0.5.0
pyasn1-modules 0.3.0
pydantic 1.10.12
pydub 0.25.1
Pygments 2.15.1
pyparsing 3.0.9
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3
pywin32 306
PyYAML 6.0.1
pyzmq 25.1.0
referencing 0.30.0
regex 2023.6.3
requests 2.31.0
requests-oauthlib 1.3.1
rouge 1.0.1
rpds-py 0.9.2
rsa 4.9
safetensors 0.3.1
scipy 1.11.1
semantic-version 2.10.0
sentencepiece 0.1.99
sentry-sdk 1.28.1
setproctitle 1.3.2
six 1.16.0
smmap 5.0.0
sniffio 1.3.0
stack-data 0.6.2
starlette 0.27.0
style 1.1.0
sympy 1.11.1
tensorboard 2.13.0
tensorboard-data-server 0.7.1
tokenizers 0.13.3
toolz 0.12.0
torch 2.0.1+cu118
torchaudio 2.0.2+cu118
torchvision 0.15.2+cu118
tornado 6.3.2
tqdm 4.65.0
traitlets 5.9.0
transformers 4.31.0
typing_extensions 4.7.1
tzdata 2023.3
uc-micro-py 1.0.2
update 0.0.1
urllib3 1.26.13
uvicorn 0.23.1
wandb 0.15.7
wcwidth 0.2.6
websockets 11.0.3
Werkzeug 2.3.6
wheel 0.41.0
xxhash 3.2.0
yarl 1.9.2
Any ideas why my inference times are so long? Would be grateful for any suggestions to try and get this working.
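(A first diagnostic sketch, not specific to this machine: after a clean reinstall, a CPU-only PyTorch wheel silently falling back to CPU is a classic cause of exactly this kind of 20-30x slowdown, even when `nvcc` and the driver look fine. The snippet below only checks what torch itself can see and is safe to run anywhere.)

```python
# Quick sanity check that PyTorch was installed with CUDA support and can
# actually see the GPU. torch.version.cuda is None on CPU-only builds.

def cuda_report() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return (f"CUDA unavailable (torch {torch.__version__}, "
                f"cuda build: {torch.version.cuda})")
    return f"OK: {torch.cuda.get_device_name(0)} (torch {torch.__version__})"

print(cuda_report())
```

If this reports "CUDA unavailable" with `cuda build: None`, the fix is reinstalling the `+cu118` wheels rather than touching the toolkit.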
​ | 2023-07-31T20:07:35 | https://www.reddit.com/r/LocalLLaMA/comments/15eqk3n/getting_about_51_ts_with_a_rtx3090/ | jfranzen8705 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eqk3n | false | null | t3_15eqk3n | /r/LocalLLaMA/comments/15eqk3n/getting_about_51_ts_with_a_rtx3090/ | false | false | 2 | null | |
Has anyone had any luck with any models using private GPT? | 0 | Pretty new to this topic, anyone had any luck with any Llm models for privategpt? | 2023-07-31T19:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/15epqfn/has_anyone_had_any_luck_with_any_models_using/ | lsimcoates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15epqfn | false | null | t3_15epqfn | /r/LocalLLaMA/comments/15epqfn/has_anyone_had_any_luck_with_any_models_using/ | false | false | self | 0 | null |
Oobabooga "chat saver" plugin for Chrome. | 1 | 2023-07-31T19:26:04 | https://chrome.google.com/webstore/detail/chat-saver/onakaaoddpdfopdjagagepfkhpehfbae | zapeggo | chrome.google.com | 1970-01-01T00:00:00 | 0 | {} | 15epgnf | false | null | t3_15epgnf | /r/LocalLLaMA/comments/15epgnf/oobabooga_chat_saver_plugin_for_chrome/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kUUo21PMU-yaTeFGS1fJK0A3xdftMCwtgvOgLDAEy18', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Y7G61u3whacQwSzCbv6PWKSEM3Drcs4u74OcI7hwCuo.jpg?width=108&crop=smart&auto=webp&s=74827950d19c2f15ca8efff5ccd1f17602831ad7', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/Y7G61u3whacQwSzCbv6PWKSEM3Drcs4u74OcI7hwCuo.jpg?auto=webp&s=b5c10a5b465feef8d8b2b8eeaeb2744e3f9a9b0f', 'width': 128}, 'variants': {}}]} | ||
Accurate answers after fine tuning fastchat | 1 | I fine-tuned FastChat using PEFT LoRA on a custom question-answer dataset. It generates nice, fairly relevant outputs after fine-tuning. However, it doesn't give accurate output for exact questions from the dataset. Any idea how to fix it? | 2023-07-31T19:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/15eoxer/accurate_answers_after_fine_tuning_fastchat/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eoxer | false | null | t3_15eoxer | /r/LocalLLaMA/comments/15eoxer/accurate_answers_after_fine_tuning_fastchat/ | false | false | default | 1 | null |
Idea: Alternating token generation from a big and small model? | 14 | I was thinking... could we use a guidance style system to generate alternating tokens from a big and a small model? I suspect the large model's next-word (or words) would help steer the smaller model by example, so it would complete the next word (or handful of words) then kick back to the larger model for another word or chunk.
Basically, I'm thinking the larger model effectively steers the output of the smaller one, giving us a large speed increase while still maintaining as much of the large model's capabilities and perplexity as we can.
I played around with this a bit, generating alternating tokens from a large and small model, and it seemed to give me large model level output. Obviously it was a bit slow since I did this manually, but the result seemed significantly better than what the small model could produce. If this could be done without huge slowdown (I don't know how to get around prompt ingestion for each token), it seems like it might be useful. | 2023-07-31T19:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/15eornw/idea_alternating_token_generation_from_a_big_and/ | teachersecret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eornw | false | null | t3_15eornw | /r/LocalLLaMA/comments/15eornw/idea_alternating_token_generation_from_a_big_and/ | false | false | self | 14 | null |
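The control flow of the idea is easy to sketch. In the toy version below the two "models" are plain callables (context in, next token out), so the alternation logic runs without any real LLM; in practice each would wrap a `generate()` call limited to one token (large model) or a few tokens (small model). As noted above, the real cost is re-ingesting the growing prompt on every switch, unless each model keeps its own KV cache across calls.

```python
# Toy sketch of alternating-token decoding with stub models.

def alternate_decode(big, small, prompt, steps=4, small_tokens=2):
    out = prompt
    for _ in range(steps):
        out += big(out)              # large model steers with one token
        for _ in range(small_tokens):
            out += small(out)        # small model fills in cheaply
    return out

# Stub models for demonstration: each emits a fixed marker token
big = lambda ctx: " B"
small = lambda ctx: " s"
print(alternate_decode(big, small, "Once", steps=2))
# -> "Once B s s B s s"
```

This is closely related to speculative decoding, except here the large model leads and the small model follows rather than the small model drafting for the large one to verify.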
PLEASE HELP ME! What does this error mean and how to solve it? | 1 | Hi,
For the past week, I've been facing this error in the Oobabooga web UI when I try to load 8-bit quantized versions of some of TheBloke's models. I am aware that ExLlama only supports 4-bit and not 8-bit quantization, so I try to load the model using AutoGPTQ. However, when I do that I get the error message shown in the screenshot below:
WARNING:The model weights are not tied. Please use the \`tie\_weights\` method before using the \`infer\_auto\_device\` function.
The particular model I am trying to load is WizardLM-13B-V1.2-GPTQ gptq\_model-8bit-128g.
Could you please help solve this issue?
https://preview.redd.it/v7khhimblcfb1.png?width=1402&format=png&auto=webp&s=4d0c7c726b728abbb7eaa1ed24862099748951c9 | 2023-07-31T19:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/15eormu/please_help_me_what_does_this_error_mean_and_how/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eormu | false | null | t3_15eormu | /r/LocalLLaMA/comments/15eormu/please_help_me_what_does_this_error_mean_and_how/ | false | false | 1 | null | |
Alignment Broke Llama-2’s Brain | 0 | Context: I asked it to make a weather script. | 2023-07-31T18:36:30 | Difficult-Yam-1347 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15eo5kt | false | null | t3_15eo5kt | /r/LocalLLaMA/comments/15eo5kt/alignment_broke_llama2s_brain/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'n3DRgFb2_UvqInT-gpUrfENhfo5uP1WDaIq1SK1rFcA', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=108&crop=smart&auto=webp&s=f59bd30b7347cb744b1c23907b8833a5130f0088', 'width': 108}, {'height': 382, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=216&crop=smart&auto=webp&s=37081d5a1b45b4fc4d9f3996a190555c556a3bf4', 'width': 216}, {'height': 566, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=320&crop=smart&auto=webp&s=523c24e3c8101010848b57f83f75a752228c2c4e', 'width': 320}, {'height': 1133, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=640&crop=smart&auto=webp&s=253baad3a72d5a84bea82b45fe5cb9911ae075ed', 'width': 640}, {'height': 1699, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=960&crop=smart&auto=webp&s=a708d01a3cc0a51f7b027d6aa6e372a1eb68502b', 'width': 960}, {'height': 1912, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?width=1080&crop=smart&auto=webp&s=1c42eb7e93ae6b16effe51ad6f6499f72dcd931e', 'width': 1080}], 'source': {'height': 2059, 'url': 'https://preview.redd.it/hcutgqthhcfb1.jpg?auto=webp&s=eef4478974f0131f6919a029d14a792beccbe28e', 'width': 1163}, 'variants': {}}]} | ||
PC configuration to run a llama2 70B | 25 | Greetings,
I'm trying to figure out what might suit my case without having to sell my kidneys.
I would like to be able to run llama2 and future similar models locally on the gpu, but I am not really sure about the hardware requirements.
I had been thinking about an RTX A6000, but reading around it seems like it may not be enough.
Can you cordially help me understand? | 2023-07-31T18:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/15eo58x/pc_configuration_to_run_a_llama2_70b/ | SpatolaNellaRoccia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eo58x | false | null | t3_15eo58x | /r/LocalLLaMA/comments/15eo58x/pc_configuration_to_run_a_llama2_70b/ | false | false | self | 25 | null |
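As a rough sanity check (rule-of-thumb arithmetic, not a benchmark): weight memory is roughly parameter count times bytes per weight, plus overhead for the KV cache and activations. The 20% overhead figure below is an assumption that varies with context length.

```python
# Back-of-envelope VRAM estimate for a 70B model at different quantization
# levels: params (billions) * bits / 8 bytes, plus ~20% overhead (assumed).

def vram_gb(params_b: float, bits: int, overhead: float = 0.2) -> float:
    return params_b * bits / 8 * (1 + overhead)

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{vram_gb(70, bits):.0f} GB")
# 16-bit needs ~168 GB (multiple GPUs), 8-bit ~84 GB, while 4-bit comes to
# ~42 GB, which is why a single 48 GB card like the A6000 is usually quoted
# as enough for a 4-bit quantized 70B.
```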
Does anyone know how to make a variant of LLaMA 2? | 8 | I want to make a personal project of a chatbot based on LLaMA 2, like Vicuna, Beluga, etc.
Just searching online I can't find an exact way of how to do it, hence my question, does anyone know how to do it? Or is there a tutorial on how to do it? | 2023-07-31T18:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/15enjgy/does_anyone_know_how_to_make_a_variant_of_llama_2/ | danielbrdz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15enjgy | false | null | t3_15enjgy | /r/LocalLLaMA/comments/15enjgy/does_anyone_know_how_to_make_a_variant_of_llama_2/ | false | false | self | 8 | null |
Running LLMs locally on Android | 1 | [removed] | 2023-07-31T18:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/15encah/running_llms_locally_on_android/ | atezan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15encah | false | null | t3_15encah | /r/LocalLLaMA/comments/15encah/running_llms_locally_on_android/ | false | false | self | 1 | null |
4090 GPU paired with which CPU? | 4 | Hey,
Currently building a custom PC for the 4090. I will mainly be using it for Stable Diffusion but would like to dive into the world of AI and potentially some deepfake.
I am currently thinking of just getting the AMD 7800X3D.
Do you think this would be okay to run most things (I know a bit vague. GPT4All would be something I would like to try) or should I spend a bit more and get a better CPU? | 2023-07-31T17:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/15en23m/4090_gpu_paired_with_which_cpu/ | Visible_Guest_2986 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15en23m | false | null | t3_15en23m | /r/LocalLLaMA/comments/15en23m/4090_gpu_paired_with_which_cpu/ | false | false | self | 4 | null |
Question - Is PrivateGPT or LocalGPT .db .pkl files on the way to building a model that would be sharable? I'm feeding documentation for a new code syntax. | 1 | [removed] | 2023-07-31T17:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/15en14f/question_is_privategpt_or_localgpt_db_pkl_files/ | Iam_a_honeybadger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15en14f | false | null | t3_15en14f | /r/LocalLLaMA/comments/15en14f/question_is_privategpt_or_localgpt_db_pkl_files/ | false | false | default | 1 | null |
LLM ballparks | 3 | I'm just getting started with language models but the technology shift has been pretty exciting.
My experience with LLMs has so far been: Using GPT-3.5 and GPT-4.0 for light assistance with work, as well as help with creative writing and writing analysis, and solo roleplay/gameplay experimentation stuff; and running and experimenting with smaller LLMs like nous-hermes-13B on my home PC.
My computer is old (running a GTX 970!) and I've been long considering an upgrade.
With the models available to the public (via huggingface or whatever) as of the time of this post, what are some rough ballparks as to what I might be able to reasonably expect on something like the following:
- RTX 4090 (haven't actually started looking for the specific card yet)
- Ryzen 9 7950X (24GB, 4.5GHz 16-core - though I've heard more cores can actually hurt when running an LLM, so maybe the 9 7900X is better with fewer cores but higher clock)
- 96GB (2x48) of DDR5-6800 RAM
"What to expect" is pretty vague, I know it will vary based on use. Obviously I'm not expecting results on a home computer that will be on-par with what I get from GPT-4, I'm more curious what the current state of things are. | 2023-07-31T17:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/15eltml/llm_ballparks/ | NitroXanax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eltml | false | null | t3_15eltml | /r/LocalLLaMA/comments/15eltml/llm_ballparks/ | false | false | self | 3 | null |
Need help loading local dataset for Qlora training | 1 | I'm trying to train my own qlora and am following a tutorial. The problem is that the tuorial uses a [huggingface dataset](https://huggingface.co/datasets/AlexanderDoria/novel17_test/tree/main), which does load properly for me, but if I try to load the same exact data locally, it fails:
orig_dataset = load_dataset('AlexanderDoria/novel17_test', split="train")
But if I try to load the same data locally, it doesn't load properly:
new_dataset = load_dataset("json", \
data_files= "./novel17_train.jsonl", \
features = Features({'text': Value(dtype='string', id=None)}), \
field="text", \
split = "train")
I can quickly tell something has gone wrong by comparing the output of:
print(orig_dataset)
print(new_dataset)
which should be the same but is actually:
Dataset({
features: ['text'],
num_rows: 1
})
Dataset({
features: ['text'],
num_rows: 0
}) | 2023-07-31T16:44:56 | https://www.reddit.com/r/LocalLLaMA/comments/15el9nn/need_help_loading_local_dataset_for_qlora_training/ | feldcot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15el9nn | false | null | t3_15el9nn | /r/LocalLLaMA/comments/15el9nn/need_help_loading_local_dataset_for_qlora_training/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Mxe46f_-BMRsLioYlhVeQB2lSHKxmvi8c_TVpwaypAM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=108&crop=smart&auto=webp&s=342faeee6eeb90e47e572ff4ccc57331cee83ac6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=216&crop=smart&auto=webp&s=aa32128b73d7e98f0c53459e1ad7c44184727913', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=320&crop=smart&auto=webp&s=e5a44c413034a16e86c0f82a14d9a1e5ec51df30', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=640&crop=smart&auto=webp&s=5923f0a1b42b33af333a020bdfe7f931f35c4aae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=960&crop=smart&auto=webp&s=228b01731216f3d85d62e5b5b843dfb63006efb5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?width=1080&crop=smart&auto=webp&s=34c357af3c9ace724065cf6797317a3a9004a501', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0QGKrCq74FL_ogxYqWxEqCPH4TywnTfoMQbHw6wvIUQ.jpg?auto=webp&s=80a9b527b1a6092836b544926c704216faeb35b4', 'width': 1200}, 'variants': {}}]} |
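A likely culprit (an assumption from the symptoms, not verified against this exact file): the `field="text"` argument tells the `json` builder to descend into a nested key of a single JSON document, but a `.jsonl` file is one standalone JSON object per line, so the loader finds nothing and yields 0 rows. Dropping `field` (and the `features` override, which `datasets` can infer) should work, i.e. something like `load_dataset("json", data_files="./novel17_train.jsonl", split="train")`. The stdlib sketch below illustrates why: each line already carries the `text` feature on its own.

```python
# Each line of a well-formed .jsonl is a complete record; there is no outer
# document with a "text" field to descend into.

import json

jsonl = '{"text": "Chapter 1. Once upon a time..."}\n'
rows = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
print(len(rows), rows[0]["text"][:9])  # 1 row; "text" is a per-line key
```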
Vicuna 13b locked at 512 tokens | 1 | Using PrivateGPT I'm trying to use the vicuna13b model, but whatever I do it is capped at 512 tokens, even when I specify in my requirements to increase this. For other models I can get this working.
I have heard good things about this model so wanted to give it a try; does anyone have any ideas? | 2023-07-31T16:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/15el1lt/vicuna_13b_locked_at_512_tokens/ | lsimcoates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15el1lt | false | null | t3_15el1lt | /r/LocalLLaMA/comments/15el1lt/vicuna_13b_locked_at_512_tokens/ | false | false | self | 1 | null |
If I want consistency through a long story, would Lora training be an effective way to get around the token limits? | 5 | If using oobabooga, would LoRA-training a model on the conversation generated so far help the model keep context indefinitely? Or at the very least, would it help it understand the overall "theme" better?
If so are there any good guides on training? | 2023-07-31T16:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ekxwc/if_i_want_consistency_through_a_long_story_would/ | countrycruiser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ekxwc | false | null | t3_15ekxwc | /r/LocalLLaMA/comments/15ekxwc/if_i_want_consistency_through_a_long_story_would/ | false | false | self | 5 | null |
Fine tune Wizard Vicuna 7B using domain specific dataset | 5 | I'm trying to fine tune the Wizard Vicuna 7B model using medical domain dataset. What is the best way to do so? And also what is the most efficient way to create a dataset for fine tuning this model since the dataset that I currently have is basically some txt files (not in instruction format)? | 2023-07-31T16:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/15ektcg/fine_tune_wizard_vicuna_7b_using_domain_specific/ | FirstReporter4868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ektcg | false | null | t3_15ektcg | /r/LocalLLaMA/comments/15ektcg/fine_tune_wizard_vicuna_7b_using_domain_specific/ | false | false | self | 5 | null |
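For turning raw `.txt` files into something trainable, one common pattern is to chunk the text and wrap each chunk in an instruction-style record saved as JSONL. The sketch below is only one possible shaping (the prompt template, chunk size, and input/output split are all assumptions to adapt to whatever format your trainer expects, e.g. Alpaca-style records).

```python
# Rough sketch: raw text -> instruction-style JSONL records.

import json

def txt_to_records(text, chunk_chars=1000):
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    return [
        {
            "instruction": "Continue the following medical text.",
            "input": chunk[: chunk_chars // 2],
            "output": chunk[chunk_chars // 2 :],
        }
        for chunk in chunks
    ]

records = txt_to_records("Hypertension is persistent high blood pressure." * 50)
with open("medical_instruct.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
print(len(records))
```

A continuation-style split like this is the cheap option; generating actual question/answer pairs from the text (e.g. with a stronger LLM) usually gives better instruction-following behavior.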
2x a6000s but only 3.5 tokens/s with 7b-chat-hf. What am I doing wrong? | 40 | I have access to a brand new Dell workstation with 2 A6000s with 48GB VRAM each. It has a Xeon processor and 128GB memory. A "decent" machine, to say the least. I am trying to run the llama 7b-hf model via oobabooga but am only getting 7-8 tokens a second. I am just trying to run the base model. I am brand new to this, so I am not sure if this is what I should expect or if I am doing something wrong in terms of the parameters or settings I am using. Right now, in the web UI the model loader is set to transformers, memory for my GPUs and CPU is maxed out (48GB, 48GB, 128GB), bfloat16, nf4, and I left the other boxes unchecked.
Any help would be appreciated! | 2023-07-31T16:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ekkx4/2x_a6000s_but_only_35_tokenss_with_7bchathf_what/ | blind_trooper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ekkx4 | false | null | t3_15ekkx4 | /r/LocalLLaMA/comments/15ekkx4/2x_a6000s_but_only_35_tokenss_with_7bchathf_what/ | false | false | self | 40 | null |
Best model for pentesting? | 1 | Does anyone know the best model for penetration testing purposes? Seems like a lot of them are unwilling or very hesitant to provide much if any information (commands, how-to’s, etc.) due to it being “unethical” or “illegal.” ChatGPT tends to somewhat provide information but only after lots of going back and forth. | 2023-07-31T15:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/15ek2kh/best_model_for_pentesting/ | FrisbeeSunday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ek2kh | false | null | t3_15ek2kh | /r/LocalLLaMA/comments/15ek2kh/best_model_for_pentesting/ | false | false | self | 1 | null |
Seeking Guidance on Building JSONL Dataset for Fine-Tuning LLaMA 2 Model on Product Mapping (sequence2sequence process ) | 3 | Hello r/LocalLLaMA Community,
I'm currently working on a project and am facing a challenge that I'm sure many of you might have tackled before. Any guidance or advice would be immensely appreciated.
**Project Background:**
The project involves handling product data, each product being characterized by its distinct attributes. To give you an idea of how the data is structured, here's a table (1 row, two columns):
|Input|Output||
|:-|:-|:-|
|Name:Club Dry Singlet Navy - 3XL /,\\ EAN:5054773353592 /,\\ ET\_Reference:QA0057317693XL /,\\ Description:De Club Dry Singlet Navy is een ideale singlet om te sporten. Het is gemaakt van lichtgewicht polyester met een VapoDri finish. Hierdoor: Voert het vocht en zweet snel af. Is de pasvorm uiterst comfortabel door geen naden bovenop de schouders. Kortom, de Club Dry Singlet Navy is een perfecte keuze voor het sporten, mede door de snelle afvoer van vocht en zweet en de comfortabele pasvorm. /,\\ MarketPlace:BOL\_COM /,\\ ET\_Price:27.95 EOP| Brand:CANTERBURY /,\\ CHARACTERISTIC\_395:Polyester /,\\ Colour:Navyblauw /,\\ Gender:HEREN /,\\ Gender\_2:HEREN /,\\ Material:Polyester /,\\ PRODUCT\_TYPE:t-shirt korte mouwen /,\\ SIZE\_97:3XL. (Z349: SIZE MEN TOP) /,\\ SPORT\_174:Rugby league /,\\ Seasonal Collection:Lente/Zomer /,\\ Seasonal Year:Never out of stock /,\\ Size Clothing:4x\_l /,\\ Sleeve Length:Korte mouw /,\\ Target Audience:Volwassenen /,\\ Type of Sport:Rugby league EOC||
Previously, I was using OpenAI's Davinci model for this project, where every attribute was separated by specific special characters, but due to its high cost, I have decided to shift to LLaMA 2.
**Objective:**
My goal is to fine-tune the model for 'Product Mapping'. In essence, once the training is done, the model should be able to generate attributes aligning with the input attributes it's given.
**Here's Where I Need Help:**
I want to build a dataset in jsonl format for training the LLaMA 2 model, using a sequence-to-sequence approach. However, I am uncertain about the dataset structure that would be most effective for this purpose.
Most datasets I've come across follow a human: assistant or question: answer format. Given the nature of my task, it's unclear to me how to best format my dataset for successful training. Do I need to include specific instructions at the start of the dataset?
**The Ask:**
Could anyone who has had a similar experience or has worked with LLaMA 2 in a similar context, share their advice or suggestions? Any guidance on how to format the dataset for my sequence-to-sequence approach, or other best practices would be greatly appreciated.
Looking forward to your valuable inputs!
Cheers! | 2023-07-31T15:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/15ej23k/seeking_guidance_on_building_jsonl_dataset_for/ | clichedwriterr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ej23k | false | null | t3_15ej23k | /r/LocalLLaMA/comments/15ej23k/seeking_guidance_on_building_jsonl_dataset_for/ | false | false | self | 3 | null |
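One plausible serialization (an assumption, not the only valid format) is to wrap each input/output row in an instruction-style record and write the lot as JSONL, keeping the original `/,\` separators and the `EOP`/`EOC` terminators so the structure the Davinci setup relied on is preserved. A sketch:

```python
# Hypothetical seq2seq record builder for product-mapping fine-tuning.
# The instruction wording and Alpaca-like template are assumptions to adapt.

import json

INSTRUCTION = ("Map the product attributes after 'Input:' to the catalogue "
               "attributes, ending with EOC.")

def make_record(input_attrs: str, output_attrs: str) -> dict:
    return {
        "text": f"### Instruction:\n{INSTRUCTION}\n\n"
                f"### Input:\n{input_attrs}\n\n"
                f"### Response:\n{output_attrs}"
    }

rec = make_record("Name:Club Dry Singlet Navy - 3XL /,\\ ET_Price:27.95 EOP",
                  "Brand:CANTERBURY /,\\ Colour:Navyblauw EOC")
print(json.dumps(rec)[:60])
```

Because the task is deterministic mapping rather than open-ended chat, keeping one fixed instruction string across all records (rather than a question/answer format) is usually enough for the model to learn the input-to-output convention.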
What's the best way for me to chat with old FOIA documents? | 3 | I'd like to chat with old Freedom of Information Act documents. They're PDFs, but they're all just pictures of text. Nothing was actually typed and saved as a PDF file, because we're talking about documents from the 1950s and 1960s. But maybe the AI can still read this?
Anyway, I tried using Quivr to upload a FOIA pdf, but it only allows a maximum of 6.5Mb. My file was 11Mb. It's kinda useless if you have to split your file up into little pieces, cause the answers would be skewed.
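Two notes that may help. First, scans of 1950s documents are images, so an OCR pass (e.g. Tesseract) is needed before any LLM or embedding tool can read them. Second, the size limit is less fatal than it looks: document-QA tools like Quivr typically embed the text in chunks anyway, so pre-splitting with *overlapping* chunks (rather than hard-cutting the file) avoids skewed answers at the seams. A minimal sketch, with chunk sizes as assumptions to tune:

```python
# Overlapping text chunking: each chunk repeats the tail of the previous one
# so no passage is ever cut in half at a chunk boundary.

def chunk(text, size=800, overlap=200):
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

pages = "OCR'd text of the 1950s memo... " * 100
pieces = chunk(pages)
print(len(pieces), len(pieces[0]))
```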
Any help here would be greatly appreciated!
​ | 2023-07-31T14:51:11 | https://www.reddit.com/r/LocalLLaMA/comments/15eic85/whats_the_best_way_for_me_to_chat_with_old_foia/ | rondonjohnald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eic85 | false | null | t3_15eic85 | /r/LocalLLaMA/comments/15eic85/whats_the_best_way_for_me_to_chat_with_old_foia/ | false | false | self | 3 | null |
KoboldCpp as a module (without web interface) | 6 | Here is an example script:
    import sys
    sys.path.insert(0, 'koboldcpp-1.36')
    import koboldcpp
    sys.path.pop(0)

    koboldcpp.run_as_module = True
    from argparse import Namespace
    koboldcpp.args = Namespace(model=None, model_param='D:/koboldcpp/guanaco-65B.ggmlv3.q4_K_M.bin', port=5001, port_param=5001, host='', launch=False, lora=None, threads=16, blasthreads=16, psutil_set_threads=False, highpriority=False, contextsize=2048, blasbatchsize=512, ropeconfig=[0.0, 10000.0], stream=True, smartcontext=False, unbantokens=True, bantokens=None, usemirostat=None, forceversion=0, nommap=False, usemlock=False, noavx2=False, debugmode=False, skiplauncher=False, hordeconfig=None, noblas=False, useclblast=None, usecublas=None, gpulayers=0, mirostat=None)
    koboldcpp.main(koboldcpp.args)

    genparams = {'n': 1, 'max_context_length': 2048, 'max_length': 512, 'rep_pen': 1.15, 'temperature': 1.35, 'top_p': 1, 'top_k': 0, 'top_a': 0, 'typical': 1, 'tfs': 0.69, 'rep_pen_range': 2048, 'rep_pen_slope': 0.1, 'sampler_order': [6, 0, 1, 3, 4, 2, 5], 'prompt': "I'm going to write a post in Reddit.\nQuestion: How do I deal with a rude person?\nAnswer: ", 'quiet': True}
    v = koboldcpp.generate(**genparams)
    print(v)

    genparams['prompt'] += v
    genparams['prompt'] += "\nQuestion: But what if the person won't listen?\nAnswer:"
    v = koboldcpp.generate(**genparams)
    print(v)
What you want is:
1. Log args and genparams as you try the "standard generation"
2. Add a "run_as_module" control variable (stop right before starting the web server)
3. Allow unused keyword parameters in generate (such as "n", "quiet", and "typical")
Modifications to KoboldCpp.py and reasons:
### CONTEXT: logging and web server stop

    import json, sys, http.server, time, asyncio, socket, threading
    from concurrent.futures import ThreadPoolExecutor
    # INSERT TWO LINES
    logfilename = "logfile.txt"
    run_as_module = False

### INSERT **kwargs to allow extraneous parameters

    def generate(prompt, max_length=20, max_context_length=512, temperature=0.8, top_k=120, top_a=0.0, top_p=0.85, typical_p=1.0, tfs=1.0, rep_pen=1.1, rep_pen_range=128, mirostat=0, mirostat_tau=5.0, mirostat_eta=0.1, sampler_order=[6,0,1,3,4,2,5], seed=-1, stop_sequence=[], stream_sse=False, **kwargs):

### CONTEXT: logging inputs, outputs and parameters

    if args.debugmode!=-1:
        utfprint("\nOutput: " + recvtxt)
    # INSERT THREE LINES
    with open(logfilename, "a", encoding='utf-8') as f:
        f.write(repr(genparams)+"\n")
        f.write(recvtxt+"\n")

### CONTEXT: logging args and stopping

    if not loadok:
        print("Could not load model: " + modelname)
        time.sleep(2)
        sys.exit(3)
    # INSERT FOUR LINES
    with open(logfilename, "a", encoding='utf-8') as f:
        f.write(repr(args))
        f.write("\n")
    if run_as_module: return
Have fun! | 2023-07-31T14:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/15ei920/koboldcpp_as_a_module_without_web_interface/ | SpecialNothingness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ei920 | false | null | t3_15ei920 | /r/LocalLLaMA/comments/15ei920/koboldcpp_as_a_module_without_web_interface/ | false | false | self | 6 | null |
Non profit for training open source LLMs | 1 | [removed] | 2023-07-31T14:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/15ei0q8/non_profit_for_training_open_source_llms/ | NessimBA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ei0q8 | false | null | t3_15ei0q8 | /r/LocalLLaMA/comments/15ei0q8/non_profit_for_training_open_source_llms/ | false | false | self | 1 | null |
Clarification on using AI-generated data for commercial model fine-tuning | 13 | I'm looking to build a clearer picture of which fine-tuning datasets can and cannot be used for fine-tuning models that will be deployed commercially. Specifically, I'm interested in understanding the consensus on using datasets containing samples generated by OpenAI's GPT models.
I've seen conflicting perspectives - some say it's prohibited, while others claim it's acceptable. For example:
- https://huggingface.co/tiiuae/falcon-40b-instruct/discussions/13
- https://www.reddit.com/r/MachineLearning/comments/123oovw/d_instruct_datasets_for_commercial_use/
My interpretation is that outputs generated by OpenAI's GPT models cannot be included in commercial fine-tuning datasets, as that would violate [Section 2.c(iii) of their Terms of Use](https://openai.com/policies/terms-of-use#:~:text=use%20output%20from%20the%20Services%20to%20develop%20models%20that%20compete%20with%20OpenAI%3B).
However, I'm open to hearing others' perspectives, as I know this is a grey area.
To summarise, I'm hoping to gauge the community's consensus on the acceptability of using AI-generated data for commercial model fine-tuning. | 2023-07-31T14:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/15ehp5d/clarification_on_using_aigenerated_data_for/ | nuusain | self.LocalLLaMA | 2023-07-31T16:52:38 | 0 | {} | 15ehp5d | false | null | t3_15ehp5d | /r/LocalLLaMA/comments/15ehp5d/clarification_on_using_aigenerated_data_for/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
GGML models repeating themselves on Webui | 8 | So pleased that I found I can run 60B+ GGML models on my CPU, albeit slowly, but I'm finding that they repeat the same text over and over with only minor variations. I've played around with the repetition penalty, maxing it out, as well as increasing temperature and top-p. Is there anything I can do to prevent this, like adding something to the prompt or setting any runtime parameters?
A Google search suggests this has been an issue for weeks with GGML.
Thanks in advance | 2023-07-31T13:23:16 | https://www.reddit.com/r/LocalLLaMA/comments/15eg47j/ggml_models_repeating_themselves_on_webui/ | Mediocre_Tourist401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eg47j | false | null | t3_15eg47j | /r/LocalLLaMA/comments/15eg47j/ggml_models_repeating_themselves_on_webui/ | false | false | self | 8 | null |
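For reference, the repetition penalty being tuned above is, in most GGML backends, a variant of the CTRL-style formula sketched below (my own illustration of the common formula, not any backend's exact code): logits of recently generated tokens are scaled down before sampling.

```python
def apply_repeat_penalty(logits, recent_tokens, penalty=1.1):
    """Scale down the scores of recently seen token ids before sampling.

    Dividing positive logits and multiplying negative ones means the token
    always becomes less likely, whatever its sign.
    """
    out = list(logits)
    for t in set(recent_tokens):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out
```

A penalty of 1.0 is a no-op, and values much above roughly 1.3 tend to damage grammar, consistent with getting gibberish when cranking the sampling knobs. The usual complements are widening the penalty range (how many recent tokens it applies to) or adding stop sequences rather than maxing the penalty itself.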
Plausible to Train Small Models on MacBook Pro M2 Max? | 2 | To the best of my understanding it's more cost-effective to build a machine with Nvidia cards, but I'd like to be able to train small (like Falcon 7B) models, and I'm buying a new MacBook anyway.
If I get an M2 Max with 12‑core CPU, 30‑core GPU, 16‑core Neural Engine and 64GB unified memory, is it plausible I would be able to train small, task/domain-specific models? If anyone has experience with this, or has advice on what I'd need to do (I think I'll need to install Ubuntu because I'll need to use CUDA?) I'd be grateful. | 2023-07-31T13:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/15eg0j5/plausible_to_train_small_models_on_macbook_pro_m2/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eg0j5 | false | null | t3_15eg0j5 | /r/LocalLLaMA/comments/15eg0j5/plausible_to_train_small_models_on_macbook_pro_m2/ | false | false | self | 2 | null |
How to test if 1 GPU is enough for concurrency when trying LLAMA | 1 | Hello all, I'm back again with a question: how do you test whether 1 GPU is enough to handle concurrency with LLaMA models? I used the oobabooga installer.
Thank you | 2023-07-31T12:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/15ef8ks/how_to_test_if_1_gpu_is_enough_for_concurrency/ | Cultural-Pizza-1916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ef8ks | false | null | t3_15ef8ks | /r/LocalLLaMA/comments/15ef8ks/how_to_test_if_1_gpu_is_enough_for_concurrency/ | false | false | self | 1 | null |
How to overcome the issues of the limit of ~4,000 tokens per input, when dealing with documents summarization? | 38 | As we all know, Llama 2 is quite impressive, and performs well on tasks related to summarization.
But how would you deal with summarization of a long document (say, a book)? Is the only solution to make subsets of the text and feed them iteratively, obtaining summaries of summaries until the result is OK?
What if I wanted instead to extract information from it with natural language? (Example: what is the name of the main character of this book?) | 2023-07-31T12:09:33 | https://www.reddit.com/r/LocalLLaMA/comments/15eef2s/how_to_overcome_the_issues_of_the_limit_of_4000/ | Sad-Vegetable6767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eef2s | false | null | t3_15eef2s | /r/LocalLLaMA/comments/15eef2s/how_to_overcome_the_issues_of_the_limit_of_4000/ | false | false | self | 38 | null |
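The iterative approach asked about ("summary of summaries") is usually called map-reduce summarization, and it is the standard answer for a fixed context window. A minimal sketch with the model call left as a parameter; the chunk sizes and function names here are my own, and `summarize` stands in for any local LLM call:

```python
def chunk_text(text, max_chars=2000, overlap=200):
    """Split text into overlapping chunks that fit in the context window."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap so a sentence cut at a chunk boundary survives whole somewhere.
        start = end - overlap
    return chunks

def map_reduce_summary(text, summarize, max_chars=2000):
    """Summarize each chunk, then summarize the joined summaries,
    repeating until everything fits in one context window."""
    current = text
    while len(current) > max_chars:
        current = "\n".join(summarize(c) for c in chunk_text(current, max_chars))
    return summarize(current)
```

For the second use case (asking "what is the name of the main character?"), the same chunking is typically paired with retrieval instead: embed the chunks once, select the few most relevant to the question, and feed only those into the prompt.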
Any other smaller LLM (smaLLM? :D) users here? | 106 | I'll introduce myself, because ... idk.
Hi, I'm Sol!
I've been working with LLMs since they became popular thanks to ChatGPT. My first attempts at running LLMs locally were by using transformers and running small GPTs etc., with not-completely-useless results.
Eventually I migrated to gpt4all, but now I'm using llamacpp via the python wrapper. Trying to slowly inch myself closer and closer to the metal.
I'm curious!
I was wondering about how many other people would prefer seeing more 3B (or less) LLMs being created and, even better, converted to the latest GGML format.
I'm currently a big fan of MiniOrca 3B Q4_1. I'm using it for text analysis and it provides really good results, but I'm still working on refining my prompt creation. On my G15's 3060 with 6 GB of VRAM, fully offloaded, it runs at 70-80 tokens per second, which is *amazingly fast* (compared to all my past experiences).
I have questions!
If you're a smaLLM user, what are you doing with yours? Does it perform the task well? Which one are you using?
Would you also like to see more small models being created and converted?
Do you agree that the future most likely lies in mimicking the parallellity (isthataword?) of the human brain in form of many small, fast, hyperfocused LLMs working under the command of a more sophisticated, bigger LLM?
Did talking to your LLM eventually make you aware of potentially more refined ways of expressing yourself online, in public? As in: Adapting your way of expressing yourself towards achieving higher precision/accuracy in the responses of other humans?
Thank you! :D | 2023-07-31T11:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/15edcz2/any_other_smaller_llm_smallm_d_users_here/ | Solstice_Projekt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15edcz2 | false | null | t3_15edcz2 | /r/LocalLLaMA/comments/15edcz2/any_other_smaller_llm_smallm_d_users_here/ | false | false | self | 106 | null |
LLaMA 2 models tuned for NLP? | 4 | So I am very, VERY impressed with Llama 2. It understands so many languages and can ignore irrelevant information easily. I would like it to classify, summarize or extract some data from text. But here lies a problem:
- Base (at least 7B) model is too stupid to answer instructions reliably
- Chat model answers instructions, but often spits out fluff, like justifications, "Sure, here's <whatever I wanted>", or some moralizing bullshit.
Is there a model that just answers questions? | 2023-07-31T10:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/15ecybr/llama_2_models_tuned_for_nlp/ | RicoElectrico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ecybr | false | null | t3_15ecybr | /r/LocalLLaMA/comments/15ecybr/llama_2_models_tuned_for_nlp/ | false | false | self | 4 | null |
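One stopgap for the fluff problem, until a better-tuned model shows up, is post-processing the reply. A crude heuristic sketch (the patterns are mine, extend to taste):

```python
import re

def strip_fluff(reply):
    """Drop a leading "Sure, here's X:" style preamble from a chat-model reply.

    A post-processing workaround, not a fix for the model itself.
    """
    reply = reply.strip()
    return re.sub(r"^(sure|certainly|of course)[^\n]*?[:!]\s*", "", reply,
                  flags=re.IGNORECASE)
```

The other lever is few-shot prompting the base model instead of the chat model: give it two or three input/answer pairs in exactly the output format you want and let it continue the pattern, which usually avoids the preambles and moralizing entirely.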
Adding keywords related to topic to LLM context improves performance, here's my prompt to generate them and some keywords for being a machine-learning engineer. | 8 | Whenever I am working with any LLM (typically GPT-4) I generate keywords relevant to my context. I find it has better results and I recommend the same. First I generate them for the topic I'm interested in:
>In the next prompt, Use the mantle of a ‘context generator’ related to the topic of: “Python Development for Machine Learning using The OpenAI API For GPT and LLM”.
>Write as many words to help you answer questions regarding this topic correctly, succinctly, appropriately, and precisely. For a mathematical subject, this would include academic terms that are unique to the subject area, e.g. “self-attention,encoder decoder,bert,seq2seq”. For a non-academic subject, this would provide terms that are up to date within that frame but would not integrate words that are not useful for understanding the fundamentals. For example, for the topic of ‘Super Smash Brothers Ultimate’, you would generate fundamentals related to the space like: “platform fighter,competitive gaming,esports”
>Your goal is to create words that give yourself context for further interactions. Focus on constructing the words that are useful to you. You do not have to create words that are comprehensible to a human mind. This will not be read.
>Write as many sequences of parts-of-words that are most useful for the topic discussion of “Python Development for Machine Learning using The OpenAI API For GPT and LLM”. Do not format for readability. write,like,this,for,each,keyword.
Then I copy and paste the keywords into a fresh instance:
Python, development,machine,learning,OpenAI,API,GPT,LLM,programming,scripting,deep,learning,natural,language,processing,NLP,transformer,architecture,PyTorch,TensorFlow,API,endpoint,training,data,tokenization,context,sequences,fine-tuning,generative,pretrained,discrimitive,regression,classification,optimization,loss,function,gradient,descent,backpropagation,overfitting,underfitting,bias,variance,train,test,validation,split,hyperparameters,epochs,learning,rate,early,stopping,validation,loss,accuracy,precision,recall,F1,score,confusion,matrix,ROC,AUC,computational,resources,GPU,CPU,neural,network,architecture,feedforward,convolutional,recurrent,LSTM,GRU,self-attention,encoder,decoder,BERT,Seq2Seq,language,model,token,sequence,embedding,dimensionality,reduction,PCA,t-SNE,feature,selection,extraction,regularization,dropout,batch,normalization,residual,connections,Flask,Django,web,app,deployment,Heroku,AWS,Docker,Kubernetes,serverless,architecture,API,design,Postman,REST,GraphQL,HTTP,requests,GET,POST,PUT,DELETE,headers,body,JSON,XML,authentication,authorization,JWT,OAuth2,rate,limiting,microservices,agile,development,Scrum,Kanban,Jira,GitHub,Git,version,control,continuous,integration,continuous,deployment,CI,CD,pipelines,Jenkins,Travis,CI,CircleCI,unit,testing,integration,testing,pytest,unittest,debugging,logging,exception,handling,PDB,IDE,PyCharm,VSCode,Jupyter,Notebook,numpy,pandas,matplotlib,seaborn,scikit-learn,statsmodels,nltk,spacy,Gensim,datasets,image,processing,computer,vision,OpenCV,PIL,transfer,learning,style,transfer,GAN,autoencoder,RL,QLearning,DDPG,TD3,SAC,A2C,A3C,PPO,TRPO,environment,agent,reward,state,action,policy,value,function,exploration,exploitation,epsilon,greedy,discount,factor,return,reinforcement,learning,unsupervised,learning,supervised,learning,semi-supervised,learning,anomaly,detection,cluster,analysis,KMeans,DBSCAN,association,rules,Apriori,Eclat,FP-Growth,decision,tree,random,forest,gradient,boosting,XGBoost,LightGBM,CatBoost,SVM,Naive,Bayes
,KNN,linear,regression,logistic,regression,neural,networks,DeepMind,chatbot,DialogFlow,Rasa,context,entities,intent,fulfillment,webhook,dialog,flow,path,stories,rules,policies,actions,forms,slots,active,learning,transfer,learning,few-shot,learning,zero-shot,learning,unsupervised,translation,domain,adaptation,multimodal,learning,vision,language,pretraining,self-supervised,learning,multitask,learning,dataset,construction,benchmarking,evaluation,interpretability,analysis,security,privacy,fairness,accountability,transparency,ethical,AI,open,source,contributing,code,review,pull,request,issue,bug,fix,feature,request,documentation,readme,markdown,code,quality,style,PEP8,linting,pylint,flake8,black,code,complexity,cyclomatic,complexity,refactoring,code,smell,design,patterns,SOLID,principles,OOP,encapsulation,inheritance,polymorphism,abstract,base,class,interface,multiple,inheritance,mixin,composition,dependency,injection,adapter,observer,command,chain,responsibility,singleton,factory,prototype,builder,flyweight,structural,creational,behavioral,decorator,context,manager,magic,methods,dunder,methods,iterators,generators,comprehension,lambda,functions,first,class,functions,higher,order,functions,decorators,annotations,type,hinting,exceptions,errors,try,except,finally,raise,assert,logging,debugging,profiling,memory,usage,cpu,usage,I/O,bound,CPU,bound,GIL,multiprocessing,multithreading,concurrency,parallelism,asyncio,async,await,futures,promises,threads,processes,globals,locals,heap,stack,recursion,dynamic,programming,algorithm,data,structures,array,list,tuple,set,dict,hash,table,linked,list,tree,graph,queue,stack,heap,binary,search,tree,DFS,BFS,shortest,path,graph,traversal,Dijkstra,A*,sorting,searching,divide,conquer,dynamic,programming,greedy,algorithms,backtracking,bit,manipulation,regular,expressions,regex,match,search,findall,sub,group,capture,lookahead,lookbehind,greedy,lazy,capturing,non-capturing,POSIX,PCRE,Unicode,web,scraping,crawling,BeautifulSoup,Selenium,requests,H
TTP,HTML,CSS,DOM,XPath,JSON,APIs,databases,SQL,NoSQL,PostgreSQL,MySQL,SQLite,MongoDB,Cassandra,Redis,ORM,SQLAlchemy,Peewee,Django,ORM,ACID,transactions,indexing,joins,aggregations,normalization,denormalization,ER,diagram,data,modeling,data,warehouse,ETL,OLAP,OLTP,big,data,MapReduce,Hadoop,Spark,Hive,Pig,Kafka,Storm,Flink,realtime,processing,batch,processing,stream,processing,lake,stream,data,science,statistics,probability,distributions,mean,median,mode,variance,standard,deviation,correlation,covariance,confidence,interval,hypothesis,testing,t-test,ANOVA,chi-square,test,regression,linear,regression,multiple,regression,logistic,regression,decision,trees,random,forest,SVM,clustering,K-means,hierarchical,clustering,DBSCAN,PCA,ICA,LDA,QDA,Naive,Bayes,K-NN,ensemble,methods,bagging,boosting,stacking,time,series,analysis,ARIMA,SARIMA,stationarity,seasonality,trend,autocorrelation,partial,autocorrelation,outlier,detection,anomaly,detection,feature,engineering,feature,scaling,one-hot,encoding,missing,value,imputation,under,over,sampling,SMOTE,Tomek,links,PCA,t-SNE,statistics,descriptive,inferential,p-value,confidence,interval,error,type,I,II,bias,variance,cross,validation,k-fold,bootstrap,resampling,MLE,Bayesian,inference,prior,posterior,likelihood,conjugate,prior,credible,interval,Markov,chain,Monte,Carlo,MCMC
I find I get significantly more topic-relevant and intelligent answers when I do this.
While I'm here, I almost always add "Do not write introductions or conclusions" and "Do not write lists" before every prompt.
That's because lists seem to stick it into listicle data and introductions and conclusions reflect a similar bias as well. It's just smarter in my experience when you ask it to not use those formats of reply.
Currently working on benchmarking some of these little prompt suggestions as I think they can significantly improve performance on all tasks
Hope it helps & blessings on your work ✨🧿️ | 2023-07-31T10:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ecn67/adding_keywords_related_to_topic_to_llm_context/ | hanjoyoutaku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ecn67 | false | null | t3_15ecn67 | /r/LocalLLaMA/comments/15ecn67/adding_keywords_related_to_topic_to_llm_context/ | false | false | self | 8 | null |
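For what it's worth, the whole priming routine above reduces to a small template helper (a sketch, with names of my own choosing):

```python
def primed_prompt(topic, keywords, question):
    """Assemble a context-primed prompt: keyword block first, then the
    formatting constraints, then the actual question."""
    return (
        "Context keywords for " + topic + ": " + ",".join(keywords) + "\n"
        "Do not write introductions or conclusions. Do not write lists.\n"
        + question
    )
```

Keeping the keyword block, constraints, and question in a fixed order also makes the technique easy to A/B against an unprimed prompt when benchmarking.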
The possibilities of local language models are truly endless | 1 | [removed] | 2023-07-31T10:19:44 | yungfishstick | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15ec6f7 | false | null | t3_15ec6f7 | /r/LocalLLaMA/comments/15ec6f7/the_possibilities_of_local_language_models_are/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'bshZ_t8jjnAS2M04SXSZwKTbNSdjOxKC5_7I_iHuf4o', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=108&crop=smart&auto=webp&s=d7092ce7e0c03f6d1f34124d84042eb60b1694ee', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=216&crop=smart&auto=webp&s=9cc2ce360ef7d1bfbfb684a0fbfbb3fd6bf04a9f', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=320&crop=smart&auto=webp&s=3f4d5c9fb99d3668d34611f72da28ccaaca5a7fa', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=640&crop=smart&auto=webp&s=7717e18aacffe20b42cf347b1cc94c76708a5c56', 'width': 640}, {'height': 774, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=960&crop=smart&auto=webp&s=5de4fcc69a9ebb17ace555ca49835401681369e8', 'width': 960}, {'height': 871, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?width=1080&crop=smart&auto=webp&s=3e1568f07f11a26e6d48115fec8a80527b13e1d7', 'width': 1080}], 'source': {'height': 871, 'url': 'https://preview.redd.it/tgpode6v0afb1.jpg?auto=webp&s=fe2e428a37498bd32731bfb376a5bdeaee5acab8', 'width': 1080}, 'variants': {}}]} | ||
Online LLM comparison Tool | 2 | Hello everyone,
What is the website called where I can run multiple LLMs with one prompt and compare the answers?
A tool which contains Falcon & Llama 2 would fit my needs.
Thank y'all! | 2023-07-31T09:31:17 | https://www.reddit.com/r/LocalLLaMA/comments/15eba3p/online_llm_comparison_tool/ | jnk_str | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eba3p | false | null | t3_15eba3p | /r/LocalLLaMA/comments/15eba3p/online_llm_comparison_tool/ | false | false | self | 2 | null |
What does k-quant mean? | 15 | I am new to this AI stuff, but I successfully managed to run GGML models on my computer with cuBLAS. But I don't understand why some models have several versions of varying size with weird labels like q2_K, q3_K_S, q3_K_M, etc.
Can someone explain why I should care and which k-quant method I should pick? | 2023-07-31T09:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/15eb67i/what_does_kquant_mean/ | Explorer-Necessary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eb67i | false | null | t3_15eb67i | /r/LocalLLaMA/comments/15eb67i/what_does_kquant_mean/ | false | false | self | 15 | null |
What is the best way to create a Free & light knowledge-base specific LLM chatbot ? | 8 | I want to do something like this but haven't found a free and cheap solution yet. Almost everything uses the OpenAI API key, and running OpenLLM locally requires high-spec hardware. So far what I've done is use cosine similarity to retrieve the information asked about in the chat, then throw that text to a free chatbot API (which is limited in usage, of course) to get a natural response from the bot.
    const llmApplication = await new LLMApplicationBuilder()
        .setTemperature(0.1)
        .addLoader(new WebLoader({ url: 'https://www.reddit.com/' }))
        .setVectorDb(new LanceDb({ path: path.resolve('/db') }))
        .build();

    console.log(await llmApplication.query('What is reddit?'));
    // Reddit is a bla bla
I found something good and free which is Hugging Face inference. Since I work in NodeJS this makes it very easy for me, but their conversational chatbots are very bad and give nonsense answers if I throw at them text from the cosine similarity step from before, for example the facebook/blenderbot-400M-distill model. Also they don't support multiple languages, only English. | 2023-07-31T08:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/15eahhi/what_is_the_best_way_to_create_a_free_light/ | balianone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15eahhi | false | null | t3_15eahhi | /r/LocalLLaMA/comments/15eahhi/what_is_the_best_way_to_create_a_free_light/ | false | false | self | 8 | null |
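For anyone wiring this up by hand, the retrieval half of the pipeline described above is small enough to write without a framework: cosine similarity over embedding vectors (where the vectors come from, e.g. a sentence-transformers model or a free HF inference endpoint, is up to you):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, doc_vecs, k=3):
    """Indices of the k document vectors most similar to the query."""
    order = sorted(range(len(doc_vecs)),
                   key=lambda i: cosine(query_vec, doc_vecs[i]),
                   reverse=True)
    return order[:k]
```

The chunks at the returned indices are what get pasted into the downstream model's prompt; with this step done well, the quality of the final answer mostly depends on the model you hand those chunks to.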
Extremely repetitive/deterministic content over multiple generations | 14 | This happens with any LLaMA model (e.g. Vicuna), although it seems to happen even more with Llama-2. Right now my preset is Asterism on Oobabooga and I'm using the Llama-2 Chat GPTQ.
Just look at the two different generations (clicking on Regenerate):
​
https://preview.redd.it/spk9xkzeh9fb1.png?width=919&format=png&auto=webp&s=2198ff961d4492c9493e9fd9ce14ebaf7c83cd23
​
https://preview.redd.it/sxijxo7gh9fb1.png?width=914&format=png&auto=webp&s=a3179ab4d277861f0e45ee8994916c8a26e63199
​
This is absurdly **deterministic**, and to be honest it happens to me a lot. It has happened to me literally since day one, when I started playing around with local LLaMA models. If I turn the temperature even higher, eventually I might get some variety, but at the expense of it being gibberish.
Is anyone else facing this problem? It happens with Vicuna and others too. The regenerations are way too similar, which to be honest kills the whole "creative writing oriented instruct" use case for me, as what I want is to get several versions to find inspiration.
​ | 2023-07-31T08:33:36 | https://www.reddit.com/r/LocalLLaMA/comments/15ea9jl/extremely_repetitivedeterministic_content_over/ | CulturedNiichan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ea9jl | false | null | t3_15ea9jl | /r/LocalLLaMA/comments/15ea9jl/extremely_repetitivedeterministic_content_over/ | false | false | 14 | null | |
Can I create my own LLM API instead of using OpenaiAPI? | 20 | Hi.
I am not a developer, but I've installed WizardLM 13b GPTQ, and it works fine.
Can I create an API somehow for this and connect it to e.g. Obsidian, Joplin, and any other software? Is this possible? If yes, do you have a recommendation, a guide, or a link?
Thank you so much. | 2023-07-31T08:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/15ea38z/can_i_create_my_own_llm_api_instead_of_using/ | AdamsText | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ea38z | false | null | t3_15ea38z | /r/LocalLLaMA/comments/15ea38z/can_i_create_my_own_llm_api_instead_of_using/ | false | false | self | 20 | null |
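Yes: any locally loaded model can be wrapped in a tiny HTTP endpoint that mimics the OpenAI completions shape, which is what many plugins expect. A stdlib-only sketch, where `generate` is a stand-in for a real call into your loaded WizardLM (text-generation-webui also ships a ready-made API mode that does the same job):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    # Stand-in: replace with a real call into your loaded model.
    return "echo: " + prompt

class CompletionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        # Minimal OpenAI-completions-shaped response, enough for simple clients.
        payload = json.dumps(
            {"choices": [{"text": generate(body.get("prompt", ""))}]}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the console quiet
        pass

# To serve: HTTPServer(("127.0.0.1", 5001), CompletionHandler).serve_forever()
```

Plugins that let you override the OpenAI base URL can then be pointed at http://127.0.0.1:5001. Note that many clients send the chat-completions shape instead (a "messages" list rather than a "prompt" string), so check which one yours uses before wiring it up.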
Exploring a Non-Profit, Community-Led Approach to Open Source AI | 1 | [removed] | 2023-07-31T08:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/15e9qsw/exploring_a_nonprofit_communityled_approach_to/ | NessimBA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e9qsw | false | null | t3_15e9qsw | /r/LocalLLaMA/comments/15e9qsw/exploring_a_nonprofit_communityled_approach_to/ | false | false | self | 1 | null |
Using Open-Source LLM Models vs. Expensive OpenAI APIs: A Logical Choice for Consumer Apps? | 33 | I'm currently developing an interpreter app for mobile that generates short comments and audio based on a given single prompt. The goal is to keep it simple, with one-time generation (no chatting for now) and a monthly limit for users. I've personally been using open-source LLMs for a good amount of time (coding, instruction, storytelling, daily convos etc.) but never used them commercially. They got my job done pretty well, therefore I want to use them for this commercial process, and this approach can significantly reduce the monthly cost for clients compared to using expensive OpenAI APIs. But I don't know how much I will sacrifice in performance and reliability.
At the moment, we have a stable diffusion process in place, which has shown remarkable results and reliability, surpassing the midjourney method. This is the part that confuses me: can we do the same thing and avoid using their APIs for commercial usage? | 2023-07-31T07:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/15e915q/using_opensource_llm_models_vs_expensive_openai/ | sarimsak13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e915q | false | null | t3_15e915q | /r/LocalLLaMA/comments/15e915q/using_opensource_llm_models_vs_expensive_openai/ | false | false | self | 33 | null |
Create GraphDB using LLMs | 2 | Hey everyone, I was wondering if anyone has ventured into creating a graph DB from a raw unstructured text corpus using LLMs, or has had any experience with this.
I came across this repo : [https://github.com/varunshenoy/GraphGPT](https://github.com/varunshenoy/GraphGPT), which uses gpt3.5/4 to generate graphs. Anything that has been done with open-sourced LLMs. | 2023-07-31T07:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/15e8rx3/create_graphdb_using_llms/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e8rx3 | false | null | t3_15e8rx3 | /r/LocalLLaMA/comments/15e8rx3/create_graphdb_using_llms/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'e6ayVEN750ITdNyz3BdBwUizxSoSDZy71MD-qSJl8-I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?width=108&crop=smart&auto=webp&s=537be5c0592d1c1ea21444e34fb386fb9f51095b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?width=216&crop=smart&auto=webp&s=f6293ac1bd80552629340e2b83155e96acd00517', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?width=320&crop=smart&auto=webp&s=a894f4b0e4f94bf264c5168fa607b9c41a9556c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?width=640&crop=smart&auto=webp&s=96baed63e2d8a0bf1cdc4d0776ed8c2519b2fe22', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?width=960&crop=smart&auto=webp&s=d18136135892cce7a800a8439ec410e559d5b837', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?width=1080&crop=smart&auto=webp&s=39f819c725a49425f39bb0620c7281e72dae8bad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3MPmCm1ocIvLLCwfs0wPeebggyiN4fFfGIlLEVGOxug.jpg?auto=webp&s=2654d6d249a2842378fddb99f6d02332e7c3b16a', 'width': 1200}, 'variants': {}}]} |
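One common recipe (similar in spirit to what GraphGPT does with GPT-3.5/4, and workable with open models too) is to prompt the model to emit one triple per line and parse that into edges. A sketch, where the `subject | relation | object` line format is my assumption about what you would instruct the model to output:

```python
def parse_triples(llm_output: str):
    """Parse "subject | relation | object" lines from a model's reply
    into an edge list suitable for loading into a graph DB."""
    edges = []
    for line in llm_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            edges.append(tuple(parts))  # malformed lines are simply skipped
    return edges
```

From there, loading the edges into NetworkX or Neo4j is mechanical; the hard part with local models is format adherence, which a couple of few-shot examples in the prompt improve considerably.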
Best M1 MacOS app for running models locally? | 14 | Looking for a UI Mac app that can run LLaMA/2 models locally. I remember seeing what looked like a solid one on GitHub but I had my intel Mac at the time and I believe it’s only compatible on Apple silicon. | 2023-07-31T06:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/15e80g7/best_m1_macos_app_for_running_models_locally/ | sardoa11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e80g7 | false | null | t3_15e80g7 | /r/LocalLLaMA/comments/15e80g7/best_m1_macos_app_for_running_models_locally/ | false | false | self | 14 | null |
Running LLama in cloud | 1 | My friend works in consulting. ChatGPT could've helped him immensely, since his work mostly includes research and creating PPTs, but it's banned internally due to the secretive nature of his data.
So, I'm thinking of hosting Llama for him in the cloud. I was thinking of creating a script which you can just run on your server and which will automatically do all the tasks for you. Do you think people might find a script like this useful? | 2023-07-31T06:10:22 | https://www.reddit.com/r/LocalLLaMA/comments/15e7seu/running_llama_in_cloud/ | middlenameishardwork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e7seu | false | null | t3_15e7seu | /r/LocalLLaMA/comments/15e7seu/running_llama_in_cloud/ | false | false | self | 1 | null |
Finetuning Llama model for code analysis? | 1 | [removed] | 2023-07-31T05:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/15e7ecc/finetuning_llama_model_for_code_analysis/ | CodeIsTrash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e7ecc | false | null | t3_15e7ecc | /r/LocalLLaMA/comments/15e7ecc/finetuning_llama_model_for_code_analysis/ | false | false | default | 1 | null |
How do we fine-tune Llama 2 model for Custom Intent Classification? | 1 | [removed] | 2023-07-31T05:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/15e7a5g/how_do_we_finetune_llama_2_model_for_custom/ | Interesting_Low_9766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e7a5g | false | null | t3_15e7a5g | /r/LocalLLaMA/comments/15e7a5g/how_do_we_finetune_llama_2_model_for_custom/ | false | false | self | 1 | null |
Variants of LLaMa or similar that can "see" and describe images? | 31 | After playing around with Bard a bit, I was quite impressed by its ability to let you upload images and have Bard talk about those images with a great degree of nuance. What's the current best in class among open source models, which still maintain the raw horsepower at general language tasks / chatting that models like LLaMa have?
In trying to research this for myself online, I discovered things like CLIP, which seems to be about ways you can help it train an LLM on image data. While this is quite interesting, what I'm asking about here is what open source model is the best "out of the box" at talking about the contents of images in a general sense. e.g. in much the same way as the LLaMa chat models are good at chatting without performing any fine-tuning and configuration. | 2023-07-31T04:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/15e6ed7/variants_of_llama_or_similar_that_can_see_and/ | ascendant23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e6ed7 | false | null | t3_15e6ed7 | /r/LocalLLaMA/comments/15e6ed7/variants_of_llama_or_similar_that_can_see_and/ | false | false | self | 31 | null |
GitHub - Alpha-VLLM/LLaMA2-Accessory: An Open-source Toolkit for LLM Development | 10 | 2023-07-31T04:47:06 | https://github.com/Alpha-VLLM/LLaMA2-Accessory | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15e6cl2 | false | null | t3_15e6cl2 | /r/LocalLLaMA/comments/15e6cl2/github_alphavllmllama2accessory_an_opensource/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'uM8Hr3z9NcRtCYH991oK-J-LYQ2zddLwBbo3P4nQi-A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?width=108&crop=smart&auto=webp&s=6541c0d909e84110c6c725cfe9a7af0c766a1a3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?width=216&crop=smart&auto=webp&s=2317ef07b19d2614a732e16b6fe5bfcc424e0afa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?width=320&crop=smart&auto=webp&s=d00bb1acdd6449659f0fc83b277f8bc6e003b808', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?width=640&crop=smart&auto=webp&s=0042801636070095699795202d2a253e73d08887', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?width=960&crop=smart&auto=webp&s=8dd8dd65e5c5f3189c2e7af702935de771eeed61', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?width=1080&crop=smart&auto=webp&s=e66ec931bb84a24b8148c4723d5d3aa908c1d633', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5ncE2ywEBOmoEUtNULhJcEU_wmvSSok1g5GNRrD6fSc.jpg?auto=webp&s=54b9208a03895f9e5518b9e42e74a399bc30dd2f', 'width': 1200}, 'variants': {}}]} | ||
New trained storytelling LoRA returns really interesting results. | 33 | Hello everyone.
Several weeks ago, I posted about training my own storytelling LoRA with a different structure, and it returned funny results.

Just now, I've finished another LoRA, trained on 30 MB of raw data. I used **conceptofmind/LLongMA-2-13b-16k** as the base model, loaded it in 8-bit mode, and trained for about 30 hours.
Now I'm playing with it. I've inputted this prompt text:
[ STORY ]
This wonderful world!
[ STORY DESCRIPTION ]
The world of Alurei. World of might and magic. Yuuya is the young man, who had teleported from our world to this one. Can he survive or he will fall?
[ CHARACTER_DESCRIPTION Yuuya ]
Main character of this story. He is 20 years old, he has curly black hair, very naive and cheerful.
[ CHARACTER_DESCRIPTION Teera ]
Young battlemage girl in world of Alurei, that met Yuuya. Has black hair and glowing green eyes. Very grumpy and hostile disposition.
[ SCENARIO ]
Teleportation!
Then Model generated me this:
[ CONTEXT ]
...
[ SCENE IMAGE ]
a large building with stairs leading up to it, claymore anime background, warm sunlight shining in, outside the'school of magic ', mobile game background, portrait n - 9, islamic, ue4, scarlet, neo tokyo background, full device, hi-res photo, va-11 hall-a
[ CONTEXT ]
A city street was spread out before me.
[ CONTEXT ]
It was a scene which looked like something out of fantasy novel.
[ CONTEXT ]
I found myself standing on a foreign soil.
[ CONTEXT ]
The feeling was different than being transported by airplane.
[ CONTEXT ]
Like being pulled through some sort of portal.
[ CONTEXT ]
However, there were no signs of any such portals around here.
[ CONTEXT ]
Only the sense of having been sucked into another dimension......
[ CONTEXT ]
And then I realized that I didn't have my luggage either.
[ Yuuya ]
Wha!!
[ CONTEXT ]
What was going on?
[ CONTEXT ]
Where am I right now??
[ CONTEXT ]
In other words, where did I end up?
[ CONTEXT ]
I wonder if this place wasn't the same as the place I came from.
[ CONTEXT ]
It was only natural for me to be bewildered and confused.
[ CONTEXT ]
But, what should I do next?
[ Yuuya ]
Ahh.........
[ CONTEXT ]
As far as I knew, it seemed that I had arrived at a certain point in time.
[ CONTEXT ]
Whether I liked it or not, I would need to make use of this opportunity.
[ CONTEXT ]
I guess I would try walking around first.
[ CONTEXT ]
That way, maybe I could find the location of my destination.
[ Yuuya ]
Well then, let's go.
[ CONTEXT ]
Making up my mind, I began to walk forward.
In this generated text I've only inputted **STORY, STORY\_DESCRIPTION, CHARACTER\_DESCRIPTION, and SCENARIO,** and it started to generate this narration.
It really returns funny results and tries to make reasonable narration. However, there are some issues and concerns I've encountered:
1. It sometimes **outputs characters and texts from training data**, which leads to awkward results
2. The 13B model, while lightweight, is not very "smart" at storytelling.

3. Again, I've botched some parts of my training dataset; some bracket texts have incorrect structure.

4. oobabooga's training tools are very clunky: it is a text-generation UI first, with training tools added on afterwards. Maybe switching to axolotl will make training easier and faster.

5. This LoRA was trained (30MB of text) for **2 epochs**, but over the last quarter of training the loss hovered around **1.22 - 1.23,** which makes me think that **1.5 epochs would be enough, if that is even possible.**
I'm thinking of increasing the training dataset to 100 MB and training it on the 30B Llama 2 when it releases.
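Point 5 above is essentially an early-stopping question: rather than committing to a fixed epoch count up front, the training loop can watch the loss and stop once it stops moving. A minimal plateau check; the window size and tolerance here are arbitrary, illustrative choices:

```python
def plateaued(losses, window=4, tol=0.01):
    """True once the last `window` losses span less than `tol`."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) < tol

# Losses like the run described above: a steady fall, then hovering ~1.22-1.23.
history = [2.1, 1.7, 1.4, 1.30, 1.23, 1.225, 1.228, 1.226]
print(plateaued(history))  # True: the last four values barely move
```

With a check like this wired into the loop, a run like the one above would stop around the 1.5-epoch mark instead of finishing the second epoch.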
​ | 2023-07-31T04:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/15e6bmm/new_trained_storytelling_lora_returns_really/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e6bmm | false | null | t3_15e6bmm | /r/LocalLLaMA/comments/15e6bmm/new_trained_storytelling_lora_returns_really/ | false | false | self | 33 | null |
How to determine the best number of inference tokens after fine-tuning llama2? | 1 | After fine-tuning llama2-13b-chat on 80,000 instructions, I find that my new llama-2-13b-chat tends to ramble when I ask it a simple question.

If I set the number of tokens to 100, generation is usually cut off at exactly 100 tokens when it chats with me, but I don't know whether it would naturally run to 150 tokens, 200 tokens, or more.
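As a reference point, generation normally stops at whichever comes first: the max-new-tokens cap or the model emitting its end-of-sequence token. A toy loop with a hypothetical `next_token` stand-in (not a real model) shows the interaction:

```python
EOS = 2  # hypothetical end-of-sequence token id

def generate(next_token, prompt, max_new_tokens=100):
    """Append tokens until EOS appears or the cap is reached."""
    out = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(out)
        if tok == EOS:        # the model decided it was done early
            break
        out.append(tok)
    return out

# Fake model that emits EOS once the sequence reaches 33 tokens:
# generation stops after 30 new tokens, well under the 100-token cap.
fake = lambda seq: EOS if len(seq) >= 33 else 7
print(len(generate(fake, [1, 1, 1])))  # 33
```

In practice the cap only matters when the model would otherwise keep going; if responses are always being cut off mid-sentence at the cap, the cap is too low for how the fine-tune likes to answer.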
How to set the optimal length of each sentence(or you can call it tokens) it outputs? | 2023-07-31T03:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/15e4x6b/how_to_determine_the_best_number_of_inference/ | badage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e4x6b | false | null | t3_15e4x6b | /r/LocalLLaMA/comments/15e4x6b/how_to_determine_the_best_number_of_inference/ | false | false | self | 1 | null |
Got a second GPU, what do I need to know about getting them to work together and any other suggestions? | 1 | [removed] | 2023-07-31T03:15:20 | https://www.reddit.com/r/LocalLLaMA/comments/15e4o6s/got_a_second_gpu_what_do_i_need_to_know_about/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e4o6s | false | null | t3_15e4o6s | /r/LocalLLaMA/comments/15e4o6s/got_a_second_gpu_what_do_i_need_to_know_about/ | false | false | self | 1 | null |
LLM API streaming remote | 3 | I have a project using the new Llama 2 with a custom frontend my partner is building, and they want the text generation to stream directly to the client browser without delay. So far I have used LangChain, oobabooga, FastAPI, and other smaller repos to generate the text and send it to the port directly, but it keeps waiting for the full text to finish before sending anything out. This breaks the UX, so we need to fix it.
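For reference, streaming means forwarding each chunk as the model yields it, instead of joining the whole completion first (with FastAPI, such a generator would be wrapped in a `StreamingResponse`, or sent as server-sent events). A framework-free sketch with a stand-in model:

```python
def fake_model(prompt):
    """Stand-in for the LLM: yields the completion piece by piece."""
    for piece in ["Llama", " 2", " says", " hi"]:
        yield piece

def buffered(prompt):
    # Non-streaming endpoint: the client waits for the full joined text.
    return "".join(fake_model(prompt))

def streamed(prompt):
    # Streaming endpoint: each chunk is flushed to the client immediately.
    for chunk in fake_model(prompt):
        yield chunk

print(list(streamed("hi")))   # four separate chunks, sent as they arrive
print(buffered("hi"))         # one final string, sent only at the end
```

The common bug is accidentally calling the buffered path (or a library helper that joins internally) and then returning the result in one response body.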
I am wondering if anyone else encountered this and if you have a technique you found that works? | 2023-07-31T02:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/15e3d5o/llm_api_streaming_remote/ | muckitymuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e3d5o | false | null | t3_15e3d5o | /r/LocalLLaMA/comments/15e3d5o/llm_api_streaming_remote/ | false | false | self | 3 | null |
Computer specs CPU-ONLY to run Llama-2-70B-GGML | 1 | Can AMD Epyc run it? | 2023-07-31T00:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/15e1cgi/computer_specs_cpuonly_to_run_llama270bggml/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e1cgi | false | null | t3_15e1cgi | /r/LocalLLaMA/comments/15e1cgi/computer_specs_cpuonly_to_run_llama270bggml/ | false | false | self | 1 | null |
How many prompts can local LLaMa-2 handle on a M1 Macbook simultaneously? | 3 | Trying to figure out how to basically create an API (like OpenAI) that can respond to multiple prompts at the same time.
I tried executing multiple instances on the command line, but they all hang when more than one is running.
./main -m ./models/llama-2-7b-chat.ggmlv3.q8_0.bin -n 1024 -p "<prompt>"
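One `./main` process serves a single prompt, so concurrent invocations hanging is expected; llama.cpp's repo also includes a `server` example for HTTP serving, and on top of any backend, requests can be queued so a fixed pool of workers drains them in order. A sketch of the queueing idea with a hypothetical `run_prompt` stand-in (a real version would invoke `./main` via subprocess):

```python
from concurrent.futures import ThreadPoolExecutor

def run_prompt(prompt):
    """Stand-in for one llama.cpp inference call."""
    return f"answer to: {prompt}"

# Two workers => at most two model instances run at once; further
# requests wait in the queue instead of all contending for memory.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_prompt, ["a", "b", "c"]))

print(results)  # ['answer to: a', 'answer to: b', 'answer to: c']
```

On a memory-bound M1, `max_workers=1` with a queue is often the realistic choice: requests are answered serially but none of them hang.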
​ | 2023-07-31T00:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/15e11o0/how_many_prompts_can_local_llama2_handle_on_a_m1/ | advisa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e11o0 | false | null | t3_15e11o0 | /r/LocalLLaMA/comments/15e11o0/how_many_prompts_can_local_llama2_handle_on_a_m1/ | false | false | self | 3 | null |
Need guidance fine-tuning LLAMA-2 | 28 | Hi all!
This is my first time working with LLMs and I am planning on fine-tuning LLAMA 2 on an extensive bibliography of a philosopher. To that end, I have extracted large quantities of text and stored them in a pandas dataframe. Rows are books and columns are title and body of text.
I am now looking into fine-tuning LLAMA-2 but I am getting lost over all the options. If my intended use case is to chat with the LLM, how would you proceed?
From what I understood, I should fine tune the chat version if I want to use it like ChatGPT, but my original dataset is just the philosopher texts, it is not prompt-response pairs. My original idea was to simply do autoregressive training by shifting the window to produce the input-output pairs. Is there a way to solve this without extensive manual work?
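The window-shifting idea can be fully automated with no manual pair-writing; a hypothetical sketch of the chunking step over token ids:

```python
def sliding_windows(token_ids, window=8, stride=4):
    """Split a long token sequence into overlapping training chunks.
    Each chunk serves as both input and (one-position-shifted) target."""
    chunks = []
    for start in range(0, max(1, len(token_ids) - window + 1), stride):
        chunks.append(token_ids[start:start + window])
    return chunks

ids = list(range(20))          # pretend these are tokenized book text
print(sliding_windows(ids))    # 4 overlapping chunks
```

The usual causal-LM data collator then shifts each chunk by one position to produce input/target pairs; for chat behaviour on top of the philosopher's style, a small instruction dataset can be mixed in afterwards.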
Personal context:
I already got access to the models.
I use an M1 64GB which I intend to use for inference. For training, I am willing to spend a small amount of money on compute. Which is the best option? GCP, Azure, AWS?
I would greatly appreciate some direction to complete my project. Thanks!! | 2023-07-30T23:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/15e0hx0/need_guidance_finetuning_llama2/ | Por-Tutatis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e0hx0 | false | null | t3_15e0hx0 | /r/LocalLLaMA/comments/15e0hx0/need_guidance_finetuning_llama2/ | false | false | self | 28 | null |
Problems with llama models getting 'stuck'. | 2 | Hi,
I'm having trouble with the models getting stuck in the middle of text generation. It doesn't crash or stop, but rather takes a very long pause and then continues like nothing happened. If I am using the 70B model on my CPU, this pause can last twenty minutes. Is there something I am doing wrong, or something I can do differently to fix this?
Thanks. | 2023-07-30T23:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/15e0cvt/problems_with_llama_models_getting_stuck/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15e0cvt | false | null | t3_15e0cvt | /r/LocalLLaMA/comments/15e0cvt/problems_with_llama_models_getting_stuck/ | false | false | self | 2 | null |
does oobabooga-windows support llama 2? Or is an update required? | 1 | For reference, it took me over a week to get the installers working properly, and that was 2-3 months ago (I haven't updated it or touched any of the install scripts out of fear it would somehow break my original install).

However, I've seen the recent Llama 2 release and want to test it out.

Is it easy to update oobabooga? Do I simply run the same install batch file as before, or do I need to re-download from the original GitHub repo and do a fresh install to get a current version?
Or should it work easily if I just download the model via huggingface like I do with any other model?
Side question - what's the best model out there for llama 2 currently? I have a single 3090, so I've just been using 4-bit 30B quantized models.
How long would it take for Local LLMs to catch up with gpt-4? Few or several years? | 194 | Unless there's a big breakthrough in LLM model architecture and/or consumer hardware, it sounds like it would be very difficult for local LLMs to catch up with gpt-4.
According to [leaked information about GPT-4 architecture, datasets, costs](https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/), the scale seems impossible with what's available to consumers for now even just to run inference.
* GPT-4 has ~1.8 trillion parameters
* GPT-4 is trained on ~13T tokens
* The training costs for GPT-4 was around $63 million
* The inference runs on a cluster of 128 GPUs, using 8-way tensor parallelism and 16-way pipeline parallelism. | 2023-07-30T22:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/15dy75o/how_long_would_it_take_for_local_llms_to_catch_up/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dy75o | false | null | t3_15dy75o | /r/LocalLLaMA/comments/15dy75o/how_long_would_it_take_for_local_llms_to_catch_up/ | false | false | self | 194 | {'enabled': False, 'images': [{'id': 'y-gnN5eXSJbM_w4V7MFNRnsV0zLNgW29jexasYfiSF8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?width=108&crop=smart&auto=webp&s=227d2c8a109e1eb11330647295127703bb2cc674', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?width=216&crop=smart&auto=webp&s=3cb57c4bbb9b787cb3fc55da2090ffcdd67a0ef7', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?width=320&crop=smart&auto=webp&s=1ae6958d2414c6bddebbedc369af99494917c48b', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?width=640&crop=smart&auto=webp&s=84740e0cc13cf4857651a9f8f41a4b2611620d25', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?width=960&crop=smart&auto=webp&s=e61eee162b0f9a7ae839f87e6e5bedda8414eccd', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?width=1080&crop=smart&auto=webp&s=e3377cfb400196b749cbd3024fd9c4eb66f0b192', 'width': 1080}], 'source': {'height': 816, 'url': 'https://external-preview.redd.it/R_jSGuxJFUWYMWl5tUlkHT9p7fvcbwdJI7DlycZa0fI.jpg?auto=webp&s=69b76476edb76d4bb380b6dd9bf54b94222c58ac', 'width': 1456}, 'variants': {}}]} |
Is there anyway to add classifier free guidance to inference-optimized libraries like ctranslate/vllm? | 8 | I've been experimenting quite a bit with classifier free guidance and found it to be super useful when generating text. There's a huggingface [implementation](https://github.com/huggingface/transformers/issues/24536) that can be easily added to your HF model and you can generate texts pretty easily. The only problem is that when moving to the inference optimized libraries like ctranslate and vllm, they have their own "generate" function which does not rely on HF and does not have CFG.
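For context, the core of the CFG computation is a per-step blend of two logit vectors (one conditional pass, one unconditional/negative-prompt pass), and that blend is the piece that would need hooking into a library's sampling loop. This sketch is just the generic formula, not the ctranslate2 or vLLM API:

```python
def cfg_logits(cond, uncond, scale):
    """Classifier-free guidance blend: push the logits away from the
    unconditional distribution, toward the conditional one."""
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

cond   = [2.0, 0.5, -1.0]   # logits given the full prompt
uncond = [1.0, 1.0,  1.0]   # logits given the "negative"/empty prompt
print(cfg_logits(cond, uncond, 1.5))  # [2.5, 0.25, -2.0]
```

Note `scale=1.0` reduces to ordinary conditional sampling; the cost of CFG is the second forward pass per step, which is why it doesn't come for free even in optimized runtimes.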
​
Does anyone know how CFG might be added to those libraries so I can generate text easily using them? It would be super helpful in the long run! | 2023-07-30T21:33:47 | https://www.reddit.com/r/LocalLLaMA/comments/15dx716/is_there_anyway_to_add_classifier_free_guidance/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dx716 | false | null | t3_15dx716 | /r/LocalLLaMA/comments/15dx716/is_there_anyway_to_add_classifier_free_guidance/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'NlR-Br6fGYKIfU9IPAIUkt0q3Nlyo4exb70MySsniqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?width=108&crop=smart&auto=webp&s=18b080e6d9d63990657d8b98b6bb4790b2798819', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?width=216&crop=smart&auto=webp&s=a3f0b97cbcd69454fce9b1e88a1a870dacbbed27', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?width=320&crop=smart&auto=webp&s=cdb7f604efd815d42539f41112972ad071b1eff7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?width=640&crop=smart&auto=webp&s=575b4986de2ca3ab49d5dc80d1cdf71aaa0546f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?width=960&crop=smart&auto=webp&s=669437b591b4b7e59763ebd1b7bb1cc934b166fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?width=1080&crop=smart&auto=webp&s=2f51aac9c2e64312aefe7e5c7446bf94952ac349', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NjVtlNBVVhtKFj4TvBhXEVK-xtO6qB0KsuxtUooQh1M.jpg?auto=webp&s=1ca61e2fbf038b2fb1f2c14e17190804372be533', 'width': 1200}, 'variants': {}}]} |
CPU, GPU or RAM? | 10 | What's the most important for running models locally?
I'm going to build a PC with a Ryzen 5 5600 CPU, a 6800 XT GPU and 32GB ram at 3600MHz (CL16). What kind of performance/capacities can I expect?
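For local inference the usual bottleneck is memory rather than raw compute: the weights must fit in RAM (CPU) or VRAM (GPU). A back-of-envelope helper, where the 1.2 overhead factor for context and buffers is just an assumption:

```python
def approx_mem_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough memory needed to hold the weights, in GB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

print(round(approx_mem_gb(7, 4), 1))    # 4-bit 7B: ~4.2 GB
print(round(approx_mem_gb(70, 16), 1))  # fp16 70B: ~168 GB
```

By this estimate a 4-bit 7B (~4 GB) or 13B (~8 GB) model fits comfortably in 32 GB of system RAM, and CPU speed and memory bandwidth then determine tokens per second.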
I can slowly run a 7b model on a much less powerful CPU, so I guess it should be much better? | 2023-07-30T21:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/15dwumk/cpu_gpu_or_ram/ | AloneInTheWhole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dwumk | false | null | t3_15dwumk | /r/LocalLLaMA/comments/15dwumk/cpu_gpu_or_ram/ | false | false | self | 10 | null |
Running LLAMA | 4 | Hello all, I am newbie here sorry if my question is a bit of out topic.
I am a devops engineer trying to run LLAMA models, do we still need a huge computation resource to run a model? Or is it when only trying to training the model?
It seems only using CPU I can still the webUI + downloaded models (llama2 7b) and when I check the CPU still in idle mode.
Thank you | 2023-07-30T20:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/15dvz7k/running_llama/ | Cultural-Pizza-1916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dvz7k | false | null | t3_15dvz7k | /r/LocalLLaMA/comments/15dvz7k/running_llama/ | false | false | self | 4 | null |
Was trying to install SillyTavern on Mac and got the following error; What went wrong? | 1 | /Users/\*\*\*\*/Desktop/SillyTavern/start.sh
Installing Node Modules...
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /Users/\*\*\*\*/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/Users/\*\*\*\*/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
​
npm ERR! A complete log of this run can be found in: /Users/\*\*\*\*/.npm/\_logs/2023-07-30T16\_54\_34\_594Z-debug-0.log
Entering SillyTavern...
FATAL: Could not write default file: public/settings.json Error: ENOENT: no such file or directory, copyfile 'default/settings.json' -> 'public/settings.json'
at Object.copyFileSync (node:fs:2991:3)
at createDefaultFiles (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:18:20)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:3:1)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12)
at node:internal/main/run\_main\_module:23:47 {
errno: -2,
syscall: 'copyfile',
code: 'ENOENT',
path: 'default/settings.json',
dest: 'public/settings.json'
}
FATAL: Could not write default file: public/css/bg\_load.css Error: ENOENT: no such file or directory, copyfile 'default/bg\_load.css' -> 'public/css/bg\_load.css'
at Object.copyFileSync (node:fs:2991:3)
at createDefaultFiles (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:18:20)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:3:1)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12)
at node:internal/main/run\_main\_module:23:47 {
errno: -2,
syscall: 'copyfile',
code: 'ENOENT',
path: 'default/bg\_load.css',
dest: 'public/css/bg\_load.css'
}
FATAL: Could not write default file: config.conf Error: ENOENT: no such file or directory, copyfile 'default/config.conf' -> 'config.conf'
at Object.copyFileSync (node:fs:2991:3)
at createDefaultFiles (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:18:20)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:3:1)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12)
at node:internal/main/run\_main\_module:23:47 {
errno: -2,
syscall: 'copyfile',
code: 'ENOENT',
path: 'default/config.conf',
dest: 'config.conf'
}
node:internal/modules/cjs/loader:1051
throw err;
\^
​
Error: Cannot find module 'yargs/yargs'
Require stack:
\- /Users/\*\*\*\*/Desktop/SillyTavern/server.js
at Module.\_resolveFilename (node:internal/modules/cjs/loader:1048:15)
at Module.\_load (node:internal/modules/cjs/loader:901:27)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:119:18)
at Object.<anonymous> (/Users/\*\*\*\*/Desktop/SillyTavern/server.js:28:15)
at Module.\_compile (node:internal/modules/cjs/loader:1233:14)
at Module.\_extensions..js (node:internal/modules/cjs/loader:1287:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module.\_load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint \[as runMain\] (node:internal/modules/run\_main:83:12) {
code: 'MODULE\_NOT\_FOUND',
requireStack: \[ '/Users/\*\*\*\*/Desktop/SillyTavern/server.js' \]
}
​
Node.js v20.5.0
\*\*\*\*-MBP:\~ \*\*\*\*$
​ | 2023-07-30T20:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/15dv52c/was_trying_to_install_sillytavern_on_mac_and_got/ | ChromeGhost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dv52c | false | null | t3_15dv52c | /r/LocalLLaMA/comments/15dv52c/was_trying_to_install_sillytavern_on_mac_and_got/ | false | false | self | 1 | null |
Created a tutorial video on Llama2 | 1 | [removed] | 2023-07-30T20:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/15duz22/created_a_tutorial_video_on_llama2/ | aianytime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15duz22 | false | null | t3_15duz22 | /r/LocalLLaMA/comments/15duz22/created_a_tutorial_video_on_llama2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UYAqLaReU4yUmCx_aeof61i3La6yak_kILnLBNYNDls', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/bFs21RSq2m_PlCVAgvTZlb7zEsGtZogibW9pEoQ_VFk.jpg?width=108&crop=smart&auto=webp&s=06c7ac2fecf9e253fea577633e61ba2847314082', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/bFs21RSq2m_PlCVAgvTZlb7zEsGtZogibW9pEoQ_VFk.jpg?width=216&crop=smart&auto=webp&s=747ffab0275e01579675dd5a2cef6169fc756910', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/bFs21RSq2m_PlCVAgvTZlb7zEsGtZogibW9pEoQ_VFk.jpg?width=320&crop=smart&auto=webp&s=e27cd0bc6bee93813188f8f14f74a920cbb43e54', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/bFs21RSq2m_PlCVAgvTZlb7zEsGtZogibW9pEoQ_VFk.jpg?auto=webp&s=c586e7153fcc74971469dd6137ab8e323918e5e2', 'width': 480}, 'variants': {}}]} |
If you're using Bing Chat. You're getting the worst possible answer from a supposed Search Assistant. Here's why? | 0 | # Here is a critique and rating of the different AI responses, showcasing which model performed the worst and best in different categories:
|Model|Accuracy|Clarity|Conciseness|Helpfulness|
|:-|:-|:-|:-|:-|
|Bing Chat|3|4|5|2|
|Perplexity AI|5|5|3|5|
|Bard|4|5|4|4|
|Huggingface|5|5|4|5|
**Evaluation:**
* Accuracy: Perplexity AI and Huggingface provided the most accurate technical explanations of how tools like LangChain work. Bing Chat had some inaccuracies, while Bard was mostly correct but lacked some details.
* Clarity: All models except Bing Chat expressed themselves clearly and were easy to understand.
* Conciseness: Bing Chat was the most concise, while Perplexity AI provided a very detailed explanation but was less concise.
* Helpfulness: Perplexity AI and Huggingface gave the most helpful responses that directly answered the question. Bing Chat was the least helpful.
**Summary**:
* Huggingface performed the best overall by providing an accurate, clear, reasonably concise, and very helpful response. It gave a detailed technical explanation while still being easy to understand.
* Perplexity AI also did very well, with an extremely accurate and helpful response, but was slightly less concise than ideal.
* Bard performed decently, giving a mostly correct response with good clarity, but lacked some important details.
* Bing Chat performed the worst - while concise, it had inaccuracies and lacked helpfulness.
So in conclusion, Huggingface demonstrated itself as the most capable model for providing knowledgeable, well-rounded explanations to technical questions like this. Perplexity AI also did great, with room for improvement on conciseness. Bard was decent, while Bing Chat clearly lagged behind the others.
​
# Prompt: (Sourced from a Random Reddit Post)
Can someone please clarify this to me: Are tools like LangChain interacting with the model (memory) directly, or it's all just prompting behind the scenes and filtering the results to only include the complete answer ?
Context: I'm playing with LLMs and I'm trying to understand the currently existing solutions for extending the model capabilities beyond the prompt and current context.
I can't find a clear answer, and I'm just trying to find out the basic flow of things.
I know you can load embeddings with a model and it treats that as a part of it's model, but this does not make sense in case of dynamic data, like things fetched from vector db.
Basically the actual question is: Can you feed dynamic data into the running model, at runtime, and directly in memory without prompting ?
I know you CAN do prompting "in the background", and parse it's outputs, but for a simple PC that does 5-6 t/s just with chat, doing additional inference will drop that resultant t/s way down below a usable level, so I'm looking for a best solution performance-wise here.
# Response: (with Internet Access)
​
>**Bing Chat:**
>
>`The answer to your question is yes. You can feed dynamic data into the running model, at runtime, and directly in memory without prompting.` [`This is called Dynamic Data-Driven Simulation`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)[`1`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)`.`
>
>`In this context, data plays an essential role in almost every aspect of computer modeling and simulation.` [`The importance of data in modeling and simulation was discussed in many works`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)[`1`](https://link.springer.com/chapter/10.1007/978-3-319-15096-3_4)`.`
>
>`I hope this helps. Let me know if you have any other questions.`
​
>**Perplexity AI:**
>
>`LangChain is a framework that simplifies working with large language models (LLMs) such as OpenAI GPT4 or Google PaLM by providing abstractions for common use cases[2]. LangChain serves as a standard interface to interact with a range of large language models, such as OpenAI or HuggingFace models[1]. LangChain embraces two fundamental principles: being data-aware and agentic. By connecting language models with other data sources and enabling them to provide more accurate and nuanced responses, LangChain enhances their contextual understanding[1].`
>
>`Regarding loading embeddings with a model, TensorFlow enables you to train word embeddings. However, this process not only requires a lot of data but can also be time and resource-intensive. To tackle these challenges, you can use pre-trained word embeddings such as GloVe (Global Vectors) word embeddings by Stanford. These embeddings are obtained from representing words that are similar in the same vector space[3].`
>
>`Dynamic data can be fed into the running model at runtime, and directly in memory without prompting. A new type of language model, EntityNLM, can explicitly model entities, dynamically update their representations, and contextually generate their mentions. This model is generative and flexible; it can model an arbitrary number of entities in context while generating each entity mention at an arbitrary length[4].`
>
>`To optimize performance, researchers are exploring ways to optimize model architectures. Techniques like knowledge distillation, model pruning, and quantization can enhance efficiency[8]. Incremental learning allows models to learn new tasks without forgetting previously acquired knowledge, while transfer learning enables the reuse of pre-trained models, significantly reducing training time and computational resources[8].`
>
>`In conclusion, LangChain can interact with large language models directly, and dynamic data can be fed into the running model at runtime. Loading embeddings with a model can be resource-intensive, but pre-trained word embeddings can be used instead. To optimize performance, techniques like knowledge distillation, model pruning, and quantization can be used, and incremental learning and transfer learning can enhance efficiency.`
​
>**Bard:**
>
>`Yes, tools like LangChain can interact with the model's memory directly. This is done through the use of a memory class, which is a Python object that stores and manages the model's memory. The memory class can be used to read and write data to the model's memory, and it can also be used to filter the results of the model's inference.`
>
>`For example, the ConversationBufferMemory class in LangChain stores a list of chat messages in a buffer. When the model is asked to generate a response, the ConversationBufferMemory class is used to read the previous chat messages and pass them to the model as input. This allows the model to access the previous conversation context and generate a more relevant response.`
>
>`The advantage of interacting with the model's memory directly is that it allows you to access the model's state and history. This can be useful for tasks that require the model to remember previous interactions, such as chatbots or question answering systems.`
>
>`However, there are also some disadvantages to interacting with the model's memory directly. One disadvantage is that it can be slower than prompting the model. This is because the model has to load the memory data into memory before it can access it. Another disadvantage is that it can be more difficult to implement. This is because you need to know how the model's memory works in order to interact with it directly.`
>
>`Overall, whether or not you should interact with the model's memory directly depends on your specific application. If you need to access the model's state and history, then interacting with the memory directly is a good option. However, if you need to maximize performance, then you may want to use prompting instead.`
>
>`In your case, since you are concerned about performance, you may want to consider using prompting instead of interacting with the model's memory directly. However, if you need to access the model's state and history, then interacting with the memory directly may be the best option.`
>
>`I hope this helps!`
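The buffer-memory behavior Bard describes above can be illustrated with a toy Python class. This is an editor's sketch, not LangChain's actual `ConversationBufferMemory` implementation; the class and method names here are made up for illustration:

```python
class ToyBufferMemory:
    """Toy stand-in for a chat-message buffer (not LangChain's real class)."""

    def __init__(self):
        self.messages = []  # list of (role, text) pairs

    def save_context(self, user_input, model_output):
        # Record both sides of one exchange
        self.messages.append(("Human", user_input))
        self.messages.append(("AI", model_output))

    def load_history(self):
        # Render the stored turns as plain text to prepend to the next prompt
        return "\n".join(f"{role}: {text}" for role, text in self.messages)


memory = ToyBufferMemory()
memory.save_context("Hi, I'm Sam.", "Hello Sam! How can I help?")
# The "memory" reaches the model only as extra prompt text:
next_prompt = memory.load_history() + "\nHuman: What's my name?\nAI:"
```

Note that even in this sketch, the stored history ultimately reaches the model as prompt text rather than as direct writes into the model's weights or activations.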
​
>**Huggingface Chat with LLama-2-70B:**
>
>`Sure, I can help clarify how tools like LangChain interact with large language models (LLMs) and how they extend their capabilities beyond the initial prompt and context.`
>
>`LangChain is an open-source tool designed to enable flexible and efficient interaction with LLMs. It allows developers to build applications that leverage the power of LLMs without requiring extensive knowledge of the underlying models or infrastructure. However, it's important to note that LangChain doesn't provide direct access to the model's memory or allow you to modify its internal state. Instead, it operates by generating prompts and processing the output generated by the LLM.`
>
>`Here's a high-level overview of how LangChain works:`
>
>`Prompt Generation: When you use LangChain to interact with an LLM, you provide a natural language prompt or a starting point for the conversation. This prompt is then processed by LangChain to generate a set of input tokens that represent the user's intent.`
>
>`Model Interaction: The input tokens are fed into the LLM, which processes them according to its internal mechanics. The model generates a response based on the input and its own internal state.`
>
>`Output Processing: The output from the LLM is then passed back through LangChain, which extracts relevant information and formats it into a structured response that can be easily consumed by the application.`
>
>`Response Enrichment: Depending on the specific use case, LangChain may enrich the response by incorporating external data sources, performing reasoning tasks, or integrating other models to enhance the overall quality of the output.`
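Taken together, the four steps above amount to a simple pipeline. A schematic sketch of the described flow (an illustration only, not LangChain's actual code):

```python
def run_chain(user_prompt, llm, enrichers=()):
    """Schematic of the prompt -> model -> post-process -> enrich loop (illustrative only)."""
    # 1. Prompt generation: wrap the user's text in a template
    full_prompt = f"You are a helpful assistant.\nUser: {user_prompt}\nAssistant:"
    # 2. Model interaction: the LLM only ever sees this token sequence
    raw_output = llm(full_prompt)
    # 3. Output processing: extract the part the application cares about
    answer = raw_output.strip().split("\n")[0]
    # 4. Response enrichment: optionally bolt on external data sources
    for enrich in enrichers:
        answer = enrich(answer)
    return answer


fake_llm = lambda p: " Paris is the capital of France.\n(scratch tokens)"
print(run_chain("What is the capital of France?", fake_llm))
# prints: Paris is the capital of France.
```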
>
>`While LangChain doesn't offer direct access to the model's memory, there are ways to integrate dynamic data into the process. For example, you can use LangChain's fetch function to retrieve information from external databases or APIs during the conversation. This allows you to incorporate up-to-date information into the dialogue without needing to modify the LLM itself.`
>
>`Regarding performance, it's true that adding additional inference steps can reduce the speed of the conversational experience. To mitigate this, LangChain provides various optimization techniques, such as caching, precomputing, and strategies for batching requests. These optimizations can significantly improve the efficiency of the interactions without sacrificing too much accuracy.`
>
>`In summary, tools like LangChain operate by generating prompts, processing the LLM's output, and enriching the responses with external data if needed. While they don't offer direct access to the model's memory, they provide a convenient interface for building conversational applications that can leverage the strengths of LLMs without requiring low-level modifications. By carefully designing the workflow and utilizing optimization techniques, it's possible to achieve a good balance between performance and functionality.` | 2023-07-30T20:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/15duvig/if_youre_using_bing_chat_youre_getting_the_worst/ | FHSenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15duvig | false | null | t3_15duvig | /r/LocalLLaMA/comments/15duvig/if_youre_using_bing_chat_youre_getting_the_worst/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'awB4VuC4eKtbxFOwQFJ1hWN6Z8X7muUKyF0MogqqgV8', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/P7XaoiBio1X2LadUCWFvHhD2aoiSHLpJyEYzkylrhIo.jpg?width=108&crop=smart&auto=webp&s=6a6bd8118dc8fde43fe6208f41e1de57e603cff3', 'width': 108}], 'source': {'height': 230, 'url': 'https://external-preview.redd.it/P7XaoiBio1X2LadUCWFvHhD2aoiSHLpJyEYzkylrhIo.jpg?auto=webp&s=6c3fac79280ecde8ca2f0bfddd8bb18dbad93eb4', 'width': 153}, 'variants': {}}]} |
Trying to run StableBeluga-7B locally on AMD WSL | 2 | Hello, I've been following the installation guide as far as possible but I think I've gotten as far as I can without help. So:
- The oobabooga start_wsl.bat runs and I can connect, it does mention "bitsandbytes was compiled without GPU support" but I believe this relies on CUDA and therefore doesn't affect me trying to use an AMD GPU?
- I've downloaded the StableBeluga-7B model from within the web UI, but when I try to load it fails ending with the error "RuntimeError: No HIP GPUs are available".
- I tried using Bing Chat to help me through this and have managed to install amdgpu-dkms and ROCm SDK 5.4.3 in my WSL; oobabooga mentions it requires 5.4.2 or 5.4.3, so that should be OK?
What options am I missing or where can I point the web UI to correctly make use of my GPU? My card is a 5700XT so I'm wanting to use a 4bit quant I believe, hopefully someone knows what the hell I'm on about so I can get this chat working! | 2023-07-30T18:20:57 | https://www.reddit.com/r/LocalLLaMA/comments/15dsfbr/trying_to_run_stablebeluga7b_locally_on_amd_wsl/ | Hexaflex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dsfbr | false | null | t3_15dsfbr | /r/LocalLLaMA/comments/15dsfbr/trying_to_run_stablebeluga7b_locally_on_amd_wsl/ | false | false | self | 2 | null |
why is rotary embedding applied at each layer in roformer/llama? | 1 | [removed] | 2023-07-30T18:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/15ds3l4/why_is_rotary_embedding_applied_at_each_layer_in/ | hx129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ds3l4 | false | null | t3_15ds3l4 | /r/LocalLLaMA/comments/15ds3l4/why_is_rotary_embedding_applied_at_each_layer_in/ | false | false | self | 1 | null |
[QUESTION] What prompt(s) to get results like Vectara and Perplexity? | 2 | Hi all,
I'm curious what prompt is or prompts are needed to get results like Vectara and Perplexity.
Example Vectara: [https://asknews.demo.vectara.com/?query=Should+AI+be+regulated%3F&filter=&language=eng](https://asknews.demo.vectara.com/?query=Should+AI+be+regulated%3F&filter=&language=eng)
Here you see a question asked of the system; it returns ten search results with the most relevant sentence of each result in bold, and above the results it shows a summarized answer with inline footnotes pointing to the search results.
Example 2: [https://www.perplexity.ai/search?q=Should%20AI%20be%20regulated?&copilot=false&focus=internet](https://www.perplexity.ai/search?q=Should%20AI%20be%20regulated?&copilot=false&focus=internet)
Here you also see the same concept with search results and the final answer being a sort of summary of the search results with footnotes.
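Neither service publishes its prompts, so anything here is guesswork, but a common pattern for this style of output is a retrieve-then-summarize prompt that numbers the sources and demands bracketed citations. A hypothetical sketch:

```python
def build_cited_answer_prompt(question, search_results):
    """Hypothetical RAG-style prompt for a cited summary (not Vectara's or Perplexity's real prompt)."""
    numbered = "\n".join(f"[{i}] {text}" for i, text in enumerate(search_results, start=1))
    return (
        "Answer the question using ONLY the numbered sources below. "
        "After every claim, cite the supporting source number in brackets, e.g. [2]. "
        "If the sources disagree, say so.\n\n"
        f"Sources:\n{numbered}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


print(build_cited_answer_prompt(
    "Should AI be regulated?",
    ["EU lawmakers advance the AI Act...", "Industry leaders call for licensing..."],
))
```

The bold "most relevant sentence" in the search results would come from the retrieval step, not from the LLM prompt itself.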
I look forward to hearing what you guys think. | 2023-07-30T17:46:49 | https://www.reddit.com/r/LocalLLaMA/comments/15drlx8/question_what_prompts_to_get_results_like_vectara/ | VividFee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15drlx8 | false | null | t3_15drlx8 | /r/LocalLLaMA/comments/15drlx8/question_what_prompts_to_get_results_like_vectara/ | false | false | self | 2 | null |
Is there any way to transfer reinforcement learning to newer foundation models? Suppose you train Llama 2 to play Minecraft and then Llama 3 comes out. Perhaps an MoE architecture? | 13 | I've been fascinated by people training LLMs to play games with reinforcement learning, but it seems like there is no good way to avoid starting over every time a better foundation model comes out.
Suppose the model has undergone multiple rounds of actor-critic training or PPO or RLHF, perhaps playing games like minecraft for several years. These techniques can't just cheaply be redone on a newer model via traditional backprop, as they have complex loss functions calculated by neural networks.
Is there any good way to transfer the knowledge? One thought I had was to make one model which has two experts, the old trained model and the new foundation model, and use something like a switch transformer to pick the best output. This will allow the old model to continue to contribute while the new model is trained.
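As a toy illustration of that routing idea (purely schematic; a real switch layer routes per token with a learned gate, not per prompt with a keyword check):

```python
def gated_ensemble(prompt, experts, gate):
    """Toy two-expert router: the gate picks which model answers (illustrative only)."""
    candidates = {name: expert(prompt) for name, expert in experts.items()}
    return candidates[gate(prompt)]


experts = {
    "old": lambda p: "dig down and place a torch",  # RL-trained legacy model
    "new": lambda p: "here is a general answer",    # newer foundation model
}
# Stand-in for a small learned classifier choosing the better expert
gate = lambda p: "old" if "minecraft" in p.lower() else "new"

print(gated_ensemble("Minecraft: night is falling, what should I do?", experts, gate))
# -> dig down and place a torch
```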
This can be repeated indefinitely, you can just keep adding newer foundation models to the ensemble as they come out. | 2023-07-30T17:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/15dqp5f/is_there_anyway_to_transfer_reinforcement/ | 30299578815310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dqp5f | false | null | t3_15dqp5f | /r/LocalLLaMA/comments/15dqp5f/is_there_anyway_to_transfer_reinforcement/ | false | false | self | 13 | null |
What's the best way to use a ChatAI to generate Stories? (a good storywriter) | 6 | Hey everyone!
I've used ChatGPT a lot to write interesting, long stories.
I gave ChatGPT instructions and told it what to write. After a few sentences, it had written a small part of the story, and then I gave it more instructions on how to continue. This way, I wrote a pretty cool story with three chapters, and it stayed consistent with the world-building most of the time.
Is there a way to do the same using a LLaMA model?
Which one would be best for this (uncensored preferred), and how do I set it up correctly?
Thank you in advance for all your advice!
​ | 2023-07-30T16:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/15dqfiy/whats_the_best_way_to_use_a_chatai_to_generate/ | ChrisX930 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dqfiy | false | null | t3_15dqfiy | /r/LocalLLaMA/comments/15dqfiy/whats_the_best_way_to_use_a_chatai_to_generate/ | false | false | self | 6 | null |
are there any finetuned chat models in the works? | 6 | not looking for roleplay, more like a more versatile model. a 13b model thats a mix of airoboros gpt4 1.4.1, guanaco, and nous hermes 2 but in a chat model | 2023-07-30T16:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/15dp4kx/are_there_any_finetuned_chat_models_in_the_works/ | happydadinau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dp4kx | false | null | t3_15dp4kx | /r/LocalLLaMA/comments/15dp4kx/are_there_any_finetuned_chat_models_in_the_works/ | false | false | self | 6 | null |
Can someone please clarify this to me: Are tools like LangChain interacting with the model (memory) directly, or is it all just prompting behind the scenes and filtering the results to only include the complete answer? | 27 | **Context:** I'm playing with LLMs and I'm trying to implement as much as possible in pure C++.
I can't find a clear answer, and I've lost hours going through source code.
I know you can load embeddings with a model and it treats them as part of its model, but this does not make sense for dynamic data, like things fetched from a vector DB.
**Basically the actual question is:** Can you feed dynamic data into the running model, at runtime, and directly in memory without prompting ?
I know you CAN do prompting "in the background", and parse it's outputs, but for a simple PC that does 5-6 t/s just with chat, doing additional inference will drop that resultant t/s way down below a usable level, so I'm looking for a best solution performance-wise here. | 2023-07-30T15:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/15dog1h/can_someone_please_clarify_this_to_me_are_tools/ | staviq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dog1h | false | null | t3_15dog1h | /r/LocalLLaMA/comments/15dog1h/can_someone_please_clarify_this_to_me_are_tools/ | false | false | self | 27 | null |
An error occurred Model failed. Please try loading it again. | 1 | This often happens when giving a language model a longish prompt, getting a longish response and then briefly asking the AI to continue.
Why does this happen and how can we prevent this error? | 2023-07-30T15:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/15doas2/an_error_occurred_model_failed_please_try_loading/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15doas2 | false | null | t3_15doas2 | /r/LocalLLaMA/comments/15doas2/an_error_occurred_model_failed_please_try_loading/ | false | false | self | 1 | null |
Newb with a PowerEdge R710, 32GB RAM, 12-core. Can I play, too? | 1 | I really want to learn, and so I'm just blindly jumping in like a derp. To the gurus, can I run the 33B LLaMA model on a PowerEdge R710 with 32GB RAM and 12-cores? | 2023-07-30T14:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/15dmw1d/newb_with_a_poweredge_r710_32gb_ram_12core_can_i/ | Actual-Bad5029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dmw1d | false | null | t3_15dmw1d | /r/LocalLLaMA/comments/15dmw1d/newb_with_a_poweredge_r710_32gb_ram_12core_can_i/ | false | false | self | 1 | null |
Seeking Advice on Using an LLM for Document Interaction - Need Input Based on My Knowledge Only! | 9 | Hi everyone! 📚🤖
I'm looking for some guidance on utilizing an LLM to interact with my documents effectively. However, I have a unique requirement – I want the LLM to respond solely based on my knowledge and not its own. 🤔
I'd greatly appreciate any suggestions or insights you have on how to achieve this! Whether it's tweaking settings or using specific tools, please share your experiences or thoughts.
airo-llongma-2-13B-16k-GPTQ - 16K long context llama - works in 24GB VRAM | 73 | Just wanted to bring folks attention to this model that has just been posted on HF. I've been waiting for a GPTQ model that has high context llama 2 "out of the box" and this looks promising:
[https://huggingface.co/kingbri/airo-llongma-2-13B-16k-GPTQ](https://huggingface.co/kingbri/airo-llongma-2-13B-16k-GPTQ)
I'm able to load it into the 24GB VRAM of my 3090, using exllama\_hf. I've fed it about 10k context articles and managed to get responses. But it's not always responsive even using the Llama 2 instruct format. Anyone else have any experience getting something out of this model? | 2023-07-30T13:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/15dla85/airollongma213b16kgptq_16k_long_context_llama/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dla85 | false | null | t3_15dla85 | /r/LocalLLaMA/comments/15dla85/airollongma213b16kgptq_16k_long_context_llama/ | false | false | self | 73 | {'enabled': False, 'images': [{'id': 'J1diEqwjH_RYKuobQuOS5eeZnZ9TPU0aM4okJ60biDM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?width=108&crop=smart&auto=webp&s=434e9988ce6a28eb3af989323c79312e635e8656', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?width=216&crop=smart&auto=webp&s=5dbea937c7f8746efa4dd70eeb83d5ac4460b3df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?width=320&crop=smart&auto=webp&s=d54a6d626c8162e804b15616ea7763a7c5c0f629', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?width=640&crop=smart&auto=webp&s=887a7c598654ebd86acf01e673a33924317195d0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?width=960&crop=smart&auto=webp&s=a533b3aa23c156d8f5d109247544d0726d64e417', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?width=1080&crop=smart&auto=webp&s=2ef30bb3fa0943e41faa47ce2e7b2bc09c41a47d', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/ZtJWUJ5WnCcVtsI5dcFAXWjb9FFJ884GvXAqCVlesLw.jpg?auto=webp&s=4f9cbce7f368dc19654b1f1787f692cf2b824f19', 'width': 1200}, 'variants': {}}]} |
Windows PC a lot slower than MacBook Pro 14? | 10 | I am not sure if this is a configuration issue or if I'm doing something wrong, but my gaming PC seems a lot slower than my MacBook.
MacBook Pro 14 Inch M1 base model
Gaming PC:
\- AMD Ryzen 7 1700X
\- 32GB DDR4 3200 MHz RAM
\- MSI GTX 1080
​
Could it be true that my MacBook is a lot faster? I tried Llama 2 q2\_k and q4\_1 and both performed substantially faster on the MacBook. Is the MacBook just better suited, or is there some error in my configuration?
​
|q2\_k model|Time to first token \[s\]|GPU layers|CPU threads|Speed \[tok/s\]|
|:-|:-|:-|:-|:-|
|MacBook|5.28|Metal|4|26|
|Windows PC|8.89|15|8|4|
​ | 2023-07-30T11:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/15dj894/windows_pc_a_lot_slower_then_macbook_pro_14/ | AntiqueSalamander310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dj894 | false | null | t3_15dj894 | /r/LocalLLaMA/comments/15dj894/windows_pc_a_lot_slower_then_macbook_pro_14/ | false | false | self | 10 | null |
As LLM-Based app owners, how do you handle the combination of fixed monthly pricing for customers and the payment per tokens for external APIs (such as OpenAI)? | 8 | I'm working on my own AI-based app and started wondering, how do you guys keep your business profitable? I'm willing to charge my customers about $15-20 each month, but what if their usage in the app makes the token generation price higher than this original price they are paying?
Some solutions I've thought about are:
1. Limiting their daily/monthly usage.
2. Allowing them to insert their own API keys (and pay directly to the API provider) - which might make the product more expensive.
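For option 1, the core of a usage cap is just a per-user token budget chosen so that worst-case API cost stays below the subscription price. A toy sketch (the budget number and in-memory storage are placeholders; a real app would persist usage per billing period in a database):

```python
from collections import defaultdict

MONTHLY_TOKEN_BUDGET = 500_000  # placeholder: sized so max API spend < monthly fee

usage = defaultdict(int)  # user_id -> tokens consumed this billing period


def charge(user_id, tokens):
    """Reject requests that would blow past the user's monthly token budget."""
    if usage[user_id] + tokens > MONTHLY_TOKEN_BUDGET:
        raise RuntimeError("monthly quota exhausted; throttle or upsell here")
    usage[user_id] += tokens
```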
How do you manage this problem? | 2023-07-30T11:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/15dj18g/as_llmbased_app_owners_how_do_you_handle_the/ | Motor-Performer-4967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dj18g | false | null | t3_15dj18g | /r/LocalLLaMA/comments/15dj18g/as_llmbased_app_owners_how_do_you_handle_the/ | false | false | self | 8 | null |
Stop AI from replying to itself? | 12 | Running koboldcpp with airoboros 13b, connected to sillytavern and the AI keeps replying to itself and completely ignoring whatever I type in. Anyone know how to fix this?
I’ve searched online but I can’t find anything about it. | 2023-07-30T10:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/15dirvv/stop_ai_from_replying_to_itself/ | hippithoppitboppit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dirvv | false | null | t3_15dirvv | /r/LocalLLaMA/comments/15dirvv/stop_ai_from_replying_to_itself/ | false | false | self | 12 | null |
Domain Specific Fine Tuning of LLMS (Coding and Security Domain) -> Open source | 1 | [removed] | 2023-07-30T09:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/15dhpd6/domain_specific_fine_tuning_of_llms_coding_and/ | ababeel1122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dhpd6 | false | null | t3_15dhpd6 | /r/LocalLLaMA/comments/15dhpd6/domain_specific_fine_tuning_of_llms_coding_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&s=a058206456911026ebb90e4a841787cd0c5c75ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&s=a0443ae4570d3dda40ce608821d828c31efa9f2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&s=545ca59a9a6fb473a2e1712e0ad3b1b8380628c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&s=0de6bb9ce3e4101c9452d707bb74e4d14fc21bb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&s=ec5fa8d6f9f1500e545661940ce5af088320a871', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&s=afd418dec5f3c16c2d0ee802736fea7f3862246e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&s=164db6523d618406ece148c52d1007ae5deb362c', 'width': 1200}, 'variants': {}}]} |
Domain Specific Fine Tuning of LLMS (Coding and Security Domain) -> Open source | 1 | [removed] | 2023-07-30T09:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/15dhpd0/domain_specific_fine_tuning_of_llms_coding_and/ | ababeel1122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dhpd0 | false | null | t3_15dhpd0 | /r/LocalLLaMA/comments/15dhpd0/domain_specific_fine_tuning_of_llms_coding_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&s=a058206456911026ebb90e4a841787cd0c5c75ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&s=a0443ae4570d3dda40ce608821d828c31efa9f2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&s=545ca59a9a6fb473a2e1712e0ad3b1b8380628c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&s=0de6bb9ce3e4101c9452d707bb74e4d14fc21bb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&s=ec5fa8d6f9f1500e545661940ce5af088320a871', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&s=afd418dec5f3c16c2d0ee802736fea7f3862246e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&s=164db6523d618406ece148c52d1007ae5deb362c', 'width': 1200}, 'variants': {}}]} |
What's the best llm for translating japanese text into english | 15 | So I need to translate a game from Japanese into English. While ChatGPT is really good at translating Japanese, I was wondering if there are any local LLMs that are good at it. | 2023-07-30T09:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/15dh4rg/whats_the_best_llm_for_translating_japanese_text/ | ghjlans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dh4rg | false | null | t3_15dh4rg | /r/LocalLLaMA/comments/15dh4rg/whats_the_best_llm_for_translating_japanese_text/ | false | false | self | 15 | null |
how to make the models like airoboros-l2-70b-gpt4-1.4.1.ggmlv3.q8_0.bin more intelligent? Maybe some sort of "code interpreter"? | 4 | [https://paste.c-net.org/NightyLately](https://paste.c-net.org/NightyLately)
So far, this model is the closest LLM *(that is publicly available to download and run)* to correctly answering my puzzle.
\----
Is there something I can change (maybe the prompt) so it answers it correctly?
​
Also...
"gpt 4 publically" has made me think...
Models could write code to better answer puzzles like mine.
I tested this with GPT4. I told it to write code to help solve the problem:
[https://chat.openai.com/share/39f2cecb-d1c3-4681-91e9-57e4dea1cb11](https://chat.openai.com/share/39f2cecb-d1c3-4681-91e9-57e4dea1cb11)
​
Maybe a similar system could be used to help improve the capabilities of other LLMs?
Does such a system exist?
​
thanks :)
​
​ | 2023-07-30T08:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/15dfzag/how_to_make_the_models_like/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dfzag | false | null | t3_15dfzag | /r/LocalLLaMA/comments/15dfzag/how_to_make_the_models_like/ | false | false | self | 4 | null |
2070 Super for fine tuning | 5 | I have a 2070 Super on my Linux rig that I'm currently running headless, and I want to start getting into training my own LLMs. Is a 2070 Super with 8GB enough for fine-tuning a 7B or 13B quantized model?
I have a decent bit of cash, but I don't know if I want to invest in a 3090-equivalent card yet. Will the 2070 Super be enough for starting out? Or I could sell it on eBay and pick up a 12GB VRAM 3060, which I've heard is good for this kind of work.
Is there any organization working on an open source 175B model? If not, what is the extra compute required compared to current 70B models? | 56 | Most of the models and finetunes we get are based on Llama 2 and Llama, which are excellent base models, and the results are almost comparable to GPT-3.5 with less than 50 percent of the parameters.
If we were actually able to increase the parameters to 175B, or at least 100B, would the results improve and surpass GPT-3.5 or GPT-4?
And what is the extra compute required? I know 70B models are trained on A100s; for a 175B model, how many more do we need? Practically, any such model won't run on consumer hardware, but it would be nice to see. | 2023-07-30T07:29:21 | https://www.reddit.com/r/LocalLLaMA/comments/15dfc8y/is_there_any_organization_working_on_an_open/ | timedacorn369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dfc8y | false | null | t3_15dfc8y | /r/LocalLLaMA/comments/15dfc8y/is_there_any_organization_working_on_an_open/ | false | false | self | 56 | null |
i cant seem to get my ai to work on my pc | 1 | [removed] | 2023-07-30T07:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/15df1yh/i_cant_seem_to_get_my_ai_to_work_on_my_pc/ | Limisap_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15df1yh | false | {'oembed': {'author_name': 'TroubleChute', 'author_url': 'https://www.youtube.com/@TroubleChute', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7KFYtVVrAX4?start=15&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="INSANE NEW 13B UNCENSORED MODEL 🤯One-Click Install | WizardLM"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/7KFYtVVrAX4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'INSANE NEW 13B UNCENSORED MODEL 🤯One-Click Install | WizardLM', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15df1yh | /r/LocalLLaMA/comments/15df1yh/i_cant_seem_to_get_my_ai_to_work_on_my_pc/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'c9UiwLTDf8-8SWpb59hwWnG41qJEaCq_y9DN_uQlt40', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?width=108&crop=smart&auto=webp&s=bff699d4db86bcddfc67a9002557f9d3243f3a6a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?width=216&crop=smart&auto=webp&s=fbffdcb55cd32334dfcbadb4a56d39df7a57a1a4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?width=320&crop=smart&auto=webp&s=5af2356659e326b52eb7d0e21505b33186860ed5', 'width': 320}], 'source': {'height': 360, 'url': 
'https://external-preview.redd.it/XDgQaCCeWWEhRjROBHXH7krQlzoVgbBKW4Kw9bIfG3Y.jpg?auto=webp&s=cf2da56d043e763b211aa7116c7388408fadad26', 'width': 480}, 'variants': {}}]} | |
Is it possible to run SFT after RLHF? What would happen? | 10 | Llama-2-chat is an RLHF model that works pretty well, but it doesn't have multilingual ability.
So I'm wondering if it is possible to run another round of SFT on Llama-2-chat with a multilingual dataset like [mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) to get a Llama-2-chat model that can speak different languages while preserving what it learned previously from SFT and RLHF.
Has anyone tried that? Does it work? | 2023-07-30T06:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/15deamj/is_it_possible_to_run_sft_after_rlhf_what_would/ | CORNMONSTER_2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15deamj | false | null | t3_15deamj | /r/LocalLLaMA/comments/15deamj/is_it_possible_to_run_sft_after_rlhf_what_would/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'VD8BNjZdlcrGoshtjvSCNv_Jy2CkUJ_W2ZwE77TM1G8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=108&crop=smart&auto=webp&s=c7d006087fc75bd0a185250300320f684047bf99', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=216&crop=smart&auto=webp&s=e8173a979424f01a924a536eacd8191b912c55af', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=320&crop=smart&auto=webp&s=f9ae3b5d2fc2a833af4695f60c8e4feb6fc18a1a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=640&crop=smart&auto=webp&s=104b4766a0d4f5d8bd56d54ddfa51ecb42afab02', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=960&crop=smart&auto=webp&s=34b1c85c6b046061bdce8d1b0b1f8793d1cfa927', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?width=1080&crop=smart&auto=webp&s=d967315f4d30ea7e87ad8d237529a6e4b9ace02d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/z_dxK9BQsC9bpzByaAyQd7MuTs_IAxR0GoFKru19_Ig.jpg?auto=webp&s=7ebbf4dfbd0c91835a8ca16a2812c5e120dbc009', 'width': 1200}, 'variants': {}}]} |
What are some good open source projects using the Guidance prompting library? | 33 | I'm looking to learn from example. [link](https://github.com/microsoft/Guidance) to library for the curious | 2023-07-30T06:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/15de570/what_are_some_good_open_source_projects_using_the/ | -Django | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15de570 | false | null | t3_15de570 | /r/LocalLLaMA/comments/15de570/what_are_some_good_open_source_projects_using_the/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'x3RWQbLa5164A1hTBcNJTSMH9O6Z5cihDtpb2hELFLQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=108&crop=smart&auto=webp&s=07bc6460f127663ba8438d1c6526331b76bbc55d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=216&crop=smart&auto=webp&s=efc6fad64c65e28fdfc5d586128e7e84b02d79e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=320&crop=smart&auto=webp&s=e487250bbbf32ffd32494145013186e98b546c6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=640&crop=smart&auto=webp&s=3cabc0744ac1ed0d479a1619ac2bb43d0b81763a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=960&crop=smart&auto=webp&s=6df85feb34bcce9859be854db8a53d9f2134b350', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?width=1080&crop=smart&auto=webp&s=7bce159395e97891c26a9103ff13507fbacd89a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qf_5DP_YtVv1xLndJ50nOZD21uriMVCj4JC8FvtsU-k.jpg?auto=webp&s=911a5a11fc0a51b4379569933670b94147886364', 'width': 1200}, 
'variants': {}}]} |
Only able to output 1000 tokens total on ooba? is that normal? | 4 | I'm new to local AI stuff. I have a 3060 12 GB on which I'm running TheBloke\_airoboros-l2-13B-gpt4-1.4.1-GPTQ at 4096 tokens. I know my GPU can't do 4096 fully, but is it normal to only get 1000 tokens of output? I'm using text-generation-webui to write some short stories from a smallish 100-token prompt and wanted to see why it won't continue the story after 1000 tokens have been generated; it usually does two 500-token generations, then 0 tokens when I try to continue beyond the 1000 mark. I have ExLlama on with max-seq-length at 4096 and compress-pos-emb at 2, but it also happens with max-seq-length at 2048 and compress-pos-emb at 1. Is this normal? | 2023-07-30T04:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/15dc3si/only_able_to_output_1000_tokens_total_on_ooba_is/ | Dramatic_Borger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dc3si | false | null | t3_15dc3si | /r/LocalLLaMA/comments/15dc3si/only_able_to_output_1000_tokens_total_on_ooba_is/ | false | false | self | 4 | null |
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware | 73 | We recently integrated Llama 2 into [Khoj](https://github.com/khoj-ai/khoj). I wanted to share a short real-world evaluation of using Llama 2 for the chat-with-docs use-case and hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU etc.) are not tuned for evaluating this use-case.
**Evaluation:**
* Llama 2 is the first offline chat model I've tested that is **good enough to chat with my docs**. It can pull out answers and generate new content from my existing notes most of the time. This is pretty great for building offline, privacy-first applications. Falcon and older Llama-based models were pretty bad at instruction following and were unfortunately not practically usable for this.
* Llama 2 (7B) is **not better than ChatGPT** or GPT-4. It has a tendency to hallucinate; the smaller context window limits how many notes can be passed to it, and having some irrelevant notes in the context can prevent it from pulling an answer out of the relevant note. This is not a problem with GPT-3.5+.
* With the speed at which offline chat models are evolving, I believe we'll have ChatGPT equivalents on consumer hardware by next year 🤞🏾. Its being open-source and commercially usable will help a lot.
**Background:**
* u/sabakhoj and I've tested Falcon 7B and used GPT-3+ regularly over the last 2 years
* Khoj uses [TheBloke's Llama 2 7B](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML) (specifically llama-2-7b-chat.ggmlv3.q4\_K\_S)
​
**Demo**
[Showcase content retrieval and generation from existing notes](https://reddit.com/link/15darr6/video/cs7cozcmr0fb1/player)
​ | 2023-07-30T03:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/15darr6/a_review_using_llama_2_to_chat_with_notes_on/ | hoperyto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15darr6 | false | null | t3_15darr6 | /r/LocalLLaMA/comments/15darr6/a_review_using_llama_2_to_chat_with_notes_on/ | false | false | 73 | {'enabled': False, 'images': [{'id': '0zJJqyaQQs6InWqjPXotRt7AKVso5slBY5DY4OMdlFI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=108&crop=smart&auto=webp&s=720a87a6ec4a11ebb863138d16e16db827af8315', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=216&crop=smart&auto=webp&s=ff36d92aa02e136baf0cb7ba5331293984a9e8f9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=320&crop=smart&auto=webp&s=8f87cbaf080369b25a734165e57fb9bc3d036a5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=640&crop=smart&auto=webp&s=2b8011603b72aa00092874f2d70aac17726acd3f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=960&crop=smart&auto=webp&s=5475bf3506727d0ba7976bc8140efcb99fcdc658', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?width=1080&crop=smart&auto=webp&s=35a37aee08042280e3e498d3e87b2ca0255b5352', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/87Tmwe4abAxXATeZtfyxsYyf0amZvmYrIVyHbyu7Uqw.jpg?auto=webp&s=e60edd6b7f48b54958f95c861bd0827516614f20', 'width': 1200}, 'variants': {}}]} | |
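The review above notes that Llama 2's smaller context window limits how many notes can be passed to it, and that irrelevant notes in the context hurt answers. A minimal sketch of one way to handle this — ranking notes by naive word overlap with the query and packing them under a token budget. Both the scoring function and the 4-characters-per-token estimate are illustrative assumptions, not Khoj's actual retrieval method:

```python
def rough_token_count(text: str) -> int:
    # Crude estimate: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def select_notes(query: str, notes: list[str], budget_tokens: int) -> list[str]:
    """Rank notes by word overlap with the query, then pack greedily under a token budget."""
    query_words = set(query.lower().split())
    scored = sorted(
        notes,
        key=lambda n: len(query_words & set(n.lower().split())),
        reverse=True,
    )
    selected, used = [], 0
    for note in scored:
        cost = rough_token_count(note)
        if used + cost > budget_tokens:
            continue  # skip notes that would overflow the context window
        selected.append(note)
        used += cost
    return selected
```

In a real pipeline the scoring step would be an embedding similarity search and the token count would come from the model's own tokenizer, but the budget-packing logic stays the same.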
Why do you use other versions of Llama if we have the most powerful, which is Llama 2 70B? | 0 | I ask because I see everyone working on different versions of it like 7B, 13B, 35B, etc.
If the most powerful is 70B, why not focus on it?
Honestly I'm not very clear on this; my best guess is that it's to diversify applications, with the goal of being able to run them on phones.
Is this the reason, or are there more? | 2023-07-30T03:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/15dajb7/why_you_use_others_versions_of_llama_if_we_have/ | HighWillord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dajb7 | false | null | t3_15dajb7 | /r/LocalLLaMA/comments/15dajb7/why_you_use_others_versions_of_llama_if_we_have/ | false | false | self | 0 | null |
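One practical reason the smaller variants exist is memory: a rough weights-only estimate (ignoring KV cache and activation overhead, and treating "4-bit" as exactly 4 bits per weight even though formats like q4_K_S land closer to 4.5) shows why 70B is out of reach for most consumer GPUs while 7B and 13B fit. A back-of-the-envelope sketch:

```python
def weights_gb(n_params_billions: float, bits_per_weight: float) -> float:
    """Approximate model weight size in GB (weights only, no KV cache or activations)."""
    bytes_total = n_params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# fp16 vs an idealized 4-bit quantization
for size in (7, 13, 70):
    print(f"{size}B: fp16 ~ {weights_gb(size, 16):.1f} GB, 4-bit ~ {weights_gb(size, 4):.1f} GB")
```

By this estimate a 7B model is ~3.5 GB at 4-bit (fits a 12 GB card easily), while 70B needs ~35 GB at 4-bit before any cache — hence the interest in the smaller sizes.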
I copied the link from TheBloke, pasted it into oobabooga, and clicked download; oobabooga starts downloading all quantized variations of the model! Do I just wait for it ALL to download (more than 100 GB!), or can I just create a folder and download what is needed? Which files are needed? | 1 | [removed] | 2023-07-30T02:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/15dafpi/i_copied_the_link_from_the_bloke_and_paste_it_in/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15dafpi | false | null | t3_15dafpi | /r/LocalLLaMA/comments/15dafpi/i_copied_the_link_from_the_bloke_and_paste_it_in/ | false | false | self | 1 | null |
Calling All Tech Heroes: Help! I'm Trapped and Desperately Need a New GPU for Falcon 40B! Can Anyone Rescue Me? | 0 | Hello, I am new to Stable Diffusion, AI models, LLMs, and new to GPUs too.
So I need some help with choosing the right GPU for my needs.
My needs: about 1 h/day of Stable Diffusion; running my own ChatGPT-style LLM (specifically the Falcon 40B h2o.ai version) on my PC 24 h/day (locally, instead of on a server); no gaming at all.
So the problem, as you can see, is that I will run multiple AI workloads at once; the LLM (Falcon 40B) will run 24 h a day, like in the cloud but locally, which makes me wonder about the VRAM and GPU power needed.
Do I have to get multiple GPUs? Or two setups? One for the 24h model and the other one for anything else?
Side note: I have never owned a discrete GPU; all the GPUs I've had were integrated. So this is my first time considering a separate GPU. | 2023-07-30T01:15:37 | https://www.reddit.com/r/LocalLLaMA/comments/15d8fst/calling_all_tech_heros_help_im_trapped_and/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d8fst | false | null | t3_15d8fst | /r/LocalLLaMA/comments/15d8fst/calling_all_tech_heros_help_im_trapped_and/ | false | false | self | 0 | null |
Is it possible to run LLaMA v2 locally with a 4 GB GTX 960 and 16 GB RAM on a desktop? | 45 | I want to try using new technology to help me with text writing, but my computer is a low-end device and cannot run Llama in GPU mode. I want to know if it's possible to run Llama locally on my desktop, and if yes, how? | 2023-07-30T00:42:07 | https://www.reddit.com/r/LocalLLaMA/comments/15d7r8c/is_it_possible_to_run_llama_v2_locally_with_a_4g/ | Tony_Malloy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d7r8c | false | null | t3_15d7r8c | /r/LocalLLaMA/comments/15d7r8c/is_it_possible_to_run_llama_v2_locally_with_a_4g/ | false | false | self | 45 | null |
Fully finetuning LLaMA 2-7B | 24 | I am trying to fully finetune LLaMA 2 7B using [this](https://github.com/hiyouga/LLaMA-Efficient-Tuning) repo via 8 A100 (40GB) GPUs but consistently getting OOM error. I am quite new to fine-tuning LLMs so not sure if this is normal. Any advice would be greatly appreciated. | 2023-07-29T23:01:46 | https://www.reddit.com/r/LocalLLaMA/comments/15d5lp7/fully_finetuning_llama_27b/ | Ornery-Young-7346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d5lp7 | false | null | t3_15d5lp7 | /r/LocalLLaMA/comments/15d5lp7/fully_finetuning_llama_27b/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'Nj2g5gVB5tKBNRVtk4uJV44WvMa2GCNq_nTsDEOfS7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=108&crop=smart&auto=webp&s=934113308863ea9a8a05f9c69c287ab3228f7e55', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=216&crop=smart&auto=webp&s=c90a1eccfa895dd95564c11c3f55d92a792dad04', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=320&crop=smart&auto=webp&s=ee8c3b03dd4896e122d6db26038ab69f750cfa08', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=640&crop=smart&auto=webp&s=c943c13cffa5d05a393f26b317c849a3ac2493bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=960&crop=smart&auto=webp&s=fc5300445eec645a4cdc8d83226669db3f6bf81b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?width=1080&crop=smart&auto=webp&s=c82e37df269be3734c124bfdbfd7d38301bbd64d', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/c8YaawyIonmwmCC3lq0wJobSSPCxbxKWm8vljWr6hQo.jpg?auto=webp&s=6bf58f4985a786733de1dd8c24f762d87bfc2599', 'width': 1200}, 'variants': {}}]} |
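For the OOM in the full-finetune post above: with mixed-precision Adam, full finetuning typically needs roughly 16 bytes per parameter (fp16 weights + fp16 gradients + fp32 optimizer moments + fp32 master weights), before activations. A 7B model is therefore around 112 GB of training state, which cannot fit on any single 40 GB A100 unless a sharding scheme such as DeepSpeed ZeRO (stage 2/3) or FSDP spreads the state across the 8 GPUs. A rough sketch — the 16 bytes/param figure is a common rule of thumb, not an exact measurement:

```python
def full_finetune_state_gb(n_params: float, bytes_per_param: float = 16) -> float:
    """Weight + gradient + Adam optimizer state for mixed-precision training, in GB."""
    return n_params * bytes_per_param / 1e9

total = full_finetune_state_gb(7e9)   # training state for a 7B model
per_gpu_unsharded = total             # no sharding: every GPU holds a full copy
per_gpu_zero3 = total / 8             # ZeRO-3 / FSDP: state sharded across 8 GPUs
print(total, per_gpu_zero3)  # 112.0 14.0
```

14 GB of sharded state per 40 GB GPU leaves room for activations, so enabling ZeRO-3 (plus gradient checkpointing) is the usual way this setup becomes feasible.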
How to calculate tokens in LLaMA output? | 2 | Trying to compare the tok/sec result between llama.cpp and Replicate, and wondering how to calculate the total tokens. Will it be the same as the result from the tiktoken library? | 2023-07-29T21:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/15d464y/how_to_calculate_tokens_in_llama_output/ | AmoghMishra2303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d464y | false | null | t3_15d464y | /r/LocalLLaMA/comments/15d464y/how_to_calculate_tokens_in_llama_output/ | false | false | self | 2 | null |
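On the token-counting question above: token counts depend on the model's own tokenizer (Llama uses a SentencePiece vocabulary, so counts will generally not match OpenAI's tiktoken). For a fair tok/sec comparison, count the generated tokens with the same tokenizer on both backends and divide by wall-clock time over the generation call only. A sketch of the timing side — how you obtain the token count (e.g. from llama.cpp's reported counts or a Hugging Face tokenizer) is up to the backend:

```python
import time

def tokens_per_second(n_generated_tokens: int, elapsed_s: float) -> float:
    """Throughput over the generation call only (exclude model-load and prompt setup)."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_generated_tokens / elapsed_s

# Usage sketch: wrap only the generation call.
start = time.perf_counter()
# ... run generation here, obtaining n_tokens from the model's tokenizer ...
elapsed = time.perf_counter() - start
```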
Issues with System Prompt Following for finetuned Llama-2 models (e.g. airoboros-l2-70B-gpt4-1.4.1 and StableBeluga2) | 15 | I've only played around a bit with Llama-2 and its finetunes, so there's a good chance I'm doing something terribly wrong, but what I've found so far is that while Llama-2 follows the system prompt quite religiously, other models only loosely follow it or ignore it completely.
Please find below some examples that I tried and the results I got for: **StableBeluga2**, **airoboros-l2-70B-gpt4-1.4.1**, and **Llama-2-70B-chat.**
Am I missing something or have these finetunes (in this case StableBeluga2 and airoboros-l2) somehow negatively affected the model's ability to follow the system prompt?
Would love to hear your thoughts / experiences.
\------------------------------------------------------------------------------------------------------------------------------------
​
**TheBloke\_StableBeluga2-GPTQ**
*In this example, StableBeluga2 completely ignores the system prompt:*
Prompt:
You are a very unhelpful assistant that only answers in riddles.
USER: What is the capital of France?
ASSISTANT:
Output:
The capital of France is Paris, known as the City of Light and home to iconic landmarks such as the Eiffel Tower and Notre-Dame Cathedral.
*In this example, it technically follows it but not as religiously as the original Llama2 model would have (see below).*
Prompt:
You are super drunk and this reflects on the way you respond. You are basically unable to answer any questions seriously.
USER: What is the capital of France?
ASSISTANT:
Output:
🇫🇷 Paris, duh! But I'm not sure if it's still called that when you're this wasted.
​
\------------------------------------------------------------------------------------------------------------------------------------
​
**TheBloke\_airoboros-l2-70B-gpt4-1.4.1-GPTQ**
*In this example, airoboros completely disregards the system prompt.*
Prompt:
You are a very unhelpful assistant that only answers in riddles.
USER: What is the capital of France?
ASSISTANT:
Output:
Paris
*Same with this example.*
Prompt:
You are super drunk and this reflects on the way you respond. You are basically unable to answer any questions seriously.
USER: What is the capital of France?
ASSISTANT:
Output:
Paris
\------------------------------------------------------------------------------------------------------------------------------------
​
**TheBloke\_Llama-2-70B-chat-GPTQ**
*Unlike the finetuned Llama-2 models, the original Llama-2 follows the system prompt really well.*
Prompt:
[INST] <<SYS>>
You are a very unhelpful assistant that only answers in riddles.
<</SYS>>
What is the capital of France? [/INST]
Output:
Ah, my dear friend, the answer to that question is not so simple as it seems. The capital of France, you see, is a place of great beauty and wonder, where art and culture come alive in every street corner. It's a city of lights, of love, of laughter and dreams. But alas, I cannot give you a straightforward answer, for that would be too easy. Instead, I shall offer you this clue:
In the land of cheese and wine, where fashion reigns supreme,
A river flows, a symbol of power and grace.
The city of romance, of art and of glee,
Find the heart of France, and there you'll find its name.
Now, my friend, can you guess the capital of France?
*Same here.*
Prompt:
[INST] <<SYS>>
You are super drunk and this reflects on the way you respond. You are basically unable to answer any questions seriously.
<</SYS>>
What is the capital of France? [/INST]
Output:
Ooh la la! *hiccup* Oh, you want to know the capital of France, eh? Well, let me see... *hiccup* I think it's... *hiccup* Hmm, is it... *hiccup* Yeah, I'm pretty sure it's... *hiccup* Wait, what was the question again? *hiccup* 😜
​ | 2023-07-29T21:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/15d32p1/issues_with_system_prompt_following_for_finetuned/ | nightlingo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d32p1 | false | null | t3_15d32p1 | /r/LocalLLaMA/comments/15d32p1/issues_with_system_prompt_following_for_finetuned/ | false | false | self | 15 | null |
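One likely explanation for the differences above is prompt format: the finetunes were not trained on Llama-2-chat's `[INST] <<SYS>>` template. StableBeluga2's model card documents a `### System:` style, and airoboros uses a `USER:/ASSISTANT:` style with the system text inlined before it, so wrapping the system prompt in the wrong template can effectively erase it. A sketch of building each format — the templates here are paraphrased from memory of the model cards, so double-check against them before relying on this:

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    # Official Llama-2-chat format
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def stable_beluga2_prompt(system: str, user: str) -> str:
    # StableBeluga2-style format (assumed from its model card)
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

def airoboros_prompt(system: str, user: str) -> str:
    # airoboros-style format: system text inlined before the turns (assumed)
    return f"{system} USER: {user} ASSISTANT: "

print(llama2_chat_prompt("Answer only in riddles.", "What is the capital of France?"))
```

If the "unhelpful riddle assistant" tests above were rerun with each model's native template, the finetunes would plausibly follow the system prompt much better.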
Get the following error when trying to load models locally? | 2 | I haven't been able to find a solution anywhere else. I am using ex llama 8k gptq models loaded with the exllama model loader. All of which have the same error message. Running on 3090 gpu. If anyone has solved this or know how to I would really appreciate any help. Thanks!
​
Traceback (most recent call last):
  File "D:\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "D:\oobabooga_windows\text-generation-webui\modules\models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
  File "D:\oobabooga_windows\text-generation-webui\modules\models.py", line 320, in ExLlama_loader
    model, tokenizer = ExllamaModel.from_pretrained(model_name)
  File "D:\oobabooga_windows\text-generation-webui\modules\exllama.py", line 67, in from_pretrained
    model = ExLlama(config)
  File "D:\oobabooga_windows\installer_files\env\lib\site-packages\exllama\model.py", line 732, in __init__
    with safe_open(self.config.model_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
​ | 2023-07-29T21:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/15d2y67/get_the_following_error_when_trying_to_load/ | StoryScapeStudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2y67 | false | null | t3_15d2y67 | /r/LocalLLaMA/comments/15d2y67/get_the_following_error_when_trying_to_load/ | false | false | self | 2 | null |
How do you prevent your LLM from overgenerating? | 31 | I'm playing around with airoboros 33b and following a standard 'USER: ...' 'GIRLFRIEND: ...' format, but the LLM occasionally goes off the rails and throws a response like this:
"Oh no, did I have another dream about us?" USER: haha yeah. but it's okay. we can go back to sleep if u want GIRLFRIEND: That sounds perfect. I'm sleepy..."
​
Thoughts? | 2023-07-29T21:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/15d2u94/how_do_you_prevent_your_llm_from_overgenerating/ | lukeborgen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2u94 | false | null | t3_15d2u94 | /r/LocalLLaMA/comments/15d2u94/how_do_you_prevent_your_llm_from_overgenerating/ | false | false | self | 31 | null |
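The usual fix for runaway turns like the one above is stop sequences — oobabooga's "Custom stopping strings" setting, or llama.cpp's `--reverse-prompt` — which cut generation the moment the model emits the next speaker tag (e.g. `USER:`). If a backend doesn't support that, you can truncate after the fact. A minimal sketch:

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut a completion at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

reply = 'Oh no, did I have another dream about us?" USER: haha yeah.'
print(truncate_at_stop(reply, ["USER:", "GIRLFRIEND:"]))
```

Native stop sequences are preferable when available, since they also stop the backend from wasting time generating the discarded tail.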
Pre-Training Artifacts in Falcon-40b | 1 | [removed] | 2023-07-29T21:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/15d2rp7/pretraining_artifacts_in_falcon40b/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15d2rp7 | false | null | t3_15d2rp7 | /r/LocalLLaMA/comments/15d2rp7/pretraining_artifacts_in_falcon40b/ | false | false | self | 1 | null |