Dataset schema (column: type, range):

- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0-878
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k
Best way to upgrade my pc to improve t/s
2
Hi all, I bought a new PC last year and after experimenting with LLMs for the last few months I have some doubts. I can run 7B, 13B and even 20/30B models reasonably fast, but 70B models (Q3 quantization, GGUF format) run at 1 t/s on Windows 11. I'm thinking about how to upgrade my PC so I can get at least 2-3 t/s with a Q4 70B. My specs are:

- MSI PRO B760-P WIFI DDR4
- Intel 13700 CPU (the non-K model, slightly undervolted)
- Nvidia 4080 16GB GPU
- 2x16GB 3200MHz CL16 RAM
- 2 NVMe SSDs
- 1 old HDD from my previous computer
- Seasonic 850W Gold PSU

The options I thought of were:

a) Replace the old HDD with a bigger SATA SSD, make a partition and install a Linux distro in dual boot, used only for LLMs.

b) Add a 3060 12GB or a 4060 Ti 16GB as a second GPU, used only for LLMs.

c) Both?

So, what are the pros and cons? Other options? Can my PSU support a second GPU? Is there a performance difference between running models from an NVMe SSD and a SATA SSD? Would there be compatibility problems using the 4080 with a 3060, since those GPUs are from different generations? How much performance improvement can I expect? Thanks a lot for the help!
2023-11-16T12:35:34
https://www.reddit.com/r/LocalLLaMA/comments/17wlttf/best_way_to_upgrade_my_pc_to_improve_ts/
DarthByron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wlttf
false
null
t3_17wlttf
/r/LocalLLaMA/comments/17wlttf/best_way_to_upgrade_my_pc_to_improve_ts/
false
false
self
2
null
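An illustrative aside on the multi-GPU option above: llama.cpp can split a model's layers across cards of different generations, so the second GPU mostly just adds VRAM for offloading. A minimal llama-cpp-python sketch, assuming a CUDA build; the model path, layer count, and split ratio are hypothetical:

```python
# Minimal sketch: offload a 70B GGUF across two mismatched GPUs.
# Assumes llama-cpp-python built with CUDA; path and numbers are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/70b.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=45,          # offload as many layers as fit across both cards
    tensor_split=[16, 12],    # rough VRAM ratio, e.g. 4080 16GB + 3060 12GB
    n_ctx=4096,
)

out = llm("Hello", max_tokens=16)
print(out["choices"][0]["text"])
```

Mixing a 40-series and a 30-series card generally works for inference, since one driver handles both; the slower card just caps the combined speed.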
Why the Need for Massive Data Sets and How Human Brains Differ in Learning?
1
[removed]
2023-11-16T11:30:14
https://www.reddit.com/r/LocalLLaMA/comments/17wkomy/why_the_need_for_massive_data_sets_and_how_human/
One-Magician-6270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wkomy
false
null
t3_17wkomy
/r/LocalLLaMA/comments/17wkomy/why_the_need_for_massive_data_sets_and_how_human/
false
false
self
1
null
2x Teslas in an OEM system
5
I have a Dell T7610 and I got 2x P40s for running LLMs. However, no matter what I do, I can't get both of my P40s to work on the system. Both of them work by themselves in either PCIe slot, but when both are plugged in, the one in the first PCIe slot always returns an error in Device Manager. Does anyone have any experience with setting up these cards in a system like this? https://preview.redd.it/2lr98mwkvo0c1.png?width=282&format=png&auto=webp&s=f28b7752e55bedbbcabd314d558fb7c515fe858a https://preview.redd.it/72usylwkvo0c1.png?width=419&format=png&auto=webp&s=a78daeaf94cab6e8c7beed12b0ed9ee6775b2175 https://preview.redd.it/vrj71pwkvo0c1.png?width=402&format=png&auto=webp&s=708c713e55904d066eecdb496eb7dd363b2a0ff6 https://preview.redd.it/i46ojnwkvo0c1.png?width=393&format=png&auto=webp&s=553564fd91ea4b5797d6b2ad2c78e02ec5646758
2023-11-16T10:44:12
https://www.reddit.com/r/LocalLLaMA/comments/17wjxf6/2x_teslas_in_an_oem_system/
ConcaveTriangle5761
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wjxf6
false
null
t3_17wjxf6
/r/LocalLLaMA/comments/17wjxf6/2x_teslas_in_an_oem_system/
false
false
https://b.thumbs.redditm…cxpzqrKktCIw.jpg
5
null
LLM Leaderboard for Hallucination
1
2023-11-16T08:41:15
https://github.com/vectara/hallucination-leaderboard
__tosh
github.com
1970-01-01T00:00:00
0
{}
17wi5eg
false
null
t3_17wi5eg
/r/LocalLLaMA/comments/17wi5eg/llm_leaderboard_for_hallucination/
false
false
https://b.thumbs.redditm…2MPfZVyYsMhM.jpg
1
{'enabled': False, 'images': [{'id': 'tfb3st9dN_QajSUqzLawgdt_plecUNImmfGbuuGao00', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?width=108&crop=smart&auto=webp&s=2db7b9cdd63c7a7251568faa360bfe7dd1a7b0ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?width=216&crop=smart&auto=webp&s=51563b54d53fbacec45cb655032f09cf6c3e4f2e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?width=320&crop=smart&auto=webp&s=b4992264d50a93e2baa9301e7e382e67590b4ca0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?width=640&crop=smart&auto=webp&s=5e92515d4b9c3ec35b10fdfe34322b7e1d1763c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?width=960&crop=smart&auto=webp&s=d0a2487cb3b8f3e20bb9990a5b0347b6eb150fe2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?width=1080&crop=smart&auto=webp&s=c725669d0e97fa22054e945552379892977983a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/26nr0mG19mpfwzpcsfwf59fFxHGn006huZ47DfKr0ig.jpg?auto=webp&s=3d5a0ed5891eac9c365ba3fbaa637f500c40b435', 'width': 1200}, 'variants': {}}]}
Training Drafting Model
1
I've collected a dataset of legal drafts. How can I train a model (suggestions requested) to write a draft for a given situation, prepare a counter or reply to an uploaded document or text, add any new clauses, or regenerate the whole draft for higher accuracy if needed? For example:

- I want to draft an employment contract for a new employee in my company
- I want to quash an FIR filed against my client in the Delhi High Court; facts of the case are...
- Draft a contempt application in the Central Administrative Tribunal against the CMD, NPCC for not following the order dated...

Thank you for your response.
2023-11-16T08:03:17
https://www.reddit.com/r/LocalLLaMA/comments/17whm7n/training_drafting_model/
Tejasw__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17whm7n
false
null
t3_17whm7n
/r/LocalLLaMA/comments/17whm7n/training_drafting_model/
false
false
self
1
null
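For the training question above, the common starting point is parameter-efficient fine-tuning on the collected drafts. A minimal sketch with the transformers/peft stack; the base model name, data file, and hyperparameters are illustrative assumptions, not recommendations from the post:

```python
# Sketch: LoRA fine-tuning a causal LM on a drafts dataset.
# Base model, file name, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"]))

# drafts.jsonl: one {"text": "<situation + finished draft>"} record per line (assumed)
ds = load_dataset("json", data_files="drafts.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
            remove_columns=ds.column_names)

Trainer(model=model,
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                               num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```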
Would automatic temperature work?
16
Currently we set the temperature manually and keep it for the whole chat. Wouldn't it make more sense to let the model adjust the temperature itself depending on the topic?
2023-11-16T07:11:47
https://www.reddit.com/r/LocalLLaMA/comments/17wguaa/would_automatic_temperature_work/
freehuntx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wguaa
false
null
t3_17wguaa
/r/LocalLLaMA/comments/17wguaa/would_automatic_temperature_work/
false
false
self
16
null
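One way the idea above could work is to derive the temperature from the model's own uncertainty, e.g. the entropy of the next-token distribution. This is an illustrative heuristic sketch, not an established sampler API:

```python
# Illustrative heuristic: scale sampling temperature with the entropy of the
# next-token distribution (confident -> cold, uncertain -> warm).
import torch

def dynamic_temperature(logits: torch.Tensor,
                        t_min: float = 0.3, t_max: float = 1.2) -> float:
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-9)).sum()
    max_entropy = torch.log(torch.tensor(float(logits.numel())))
    frac = (entropy / max_entropy).item()   # 0 = peaked, 1 = uniform
    return t_min + (t_max - t_min) * frac

logits = torch.randn(32000)                 # stand-in for real model logits
temp = dynamic_temperature(logits)
token = torch.multinomial(torch.softmax(logits / temp, dim=-1), num_samples=1)
```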
How can I improve inference performance to a normal range?
1
[removed]
2023-11-16T06:53:12
https://www.reddit.com/r/LocalLLaMA/comments/17wgjjh/how_can_i_improve_inference_performance_to_a/
Radiant-Practice-270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wgjjh
false
null
t3_17wgjjh
/r/LocalLLaMA/comments/17wgjjh/how_can_i_improve_inference_performance_to_a/
false
false
self
1
null
Generalized Tree-of-Thought Capability in SynthIA Models
54
Here's how I managed to bootstrap generalized Tree-of-Thought capability in my AIs. This was the secret sauce to SynthIA. Generate your dataset with this, plus the Orca system prompts. Open Source FTW. LFG! https://preview.redd.it/45uyzynlen0c1.png?width=1744&format=png&auto=webp&s=694e69603c0656efbbea9a9e8b18d02a10c8633e
2023-11-16T05:47:22
https://www.reddit.com/r/LocalLLaMA/comments/17wffs4/generalized_treeofthought_capability_in_synthia/
migtissera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wffs4
false
null
t3_17wffs4
/r/LocalLLaMA/comments/17wffs4/generalized_treeofthought_capability_in_synthia/
false
false
https://b.thumbs.redditm…rz73u8vqTWTM.jpg
54
null
anaconda help with "pip install llama-cpp-python"
1
So I'm new to Python/coding and I'm using Anaconda to install [text-generation-webui](https://github.com/oobabooga/text-generation-webui) with [this](https://gist.github.com/mberman84/f092a28e4151dd5cecebfc58ac1cbc0e#file-gistfile1-txt) guide. When I try to run "pip install llama-cpp-python" I get the error: "note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for llama-cpp-python. Failed to build llama-cpp-python. ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects". The fix in the guide isn't working, and when I searched around, the suggested "yum" command isn't available on my system. Can someone help, please? PS: I've already posted this on r/learnprogramming and r/learnpython and they said to post here.
2023-11-16T03:57:30
https://www.reddit.com/r/LocalLLaMA/comments/17wdeus/anaconda_help_with_pip_install_llamacpppython/
AdApprehensive6837
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wdeus
false
null
t3_17wdeus
/r/LocalLLaMA/comments/17wdeus/anaconda_help_with_pip_install_llamacpppython/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rRZX3PKDm_pNgnmtRnCiHQtxU7Mjo_WMUKuYbMi4jdQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?width=108&crop=smart&auto=webp&s=a24c9ec23e6269eca447ab1fef043ace408bbc32', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?width=216&crop=smart&auto=webp&s=5bf9235e6f66b92e144c8ad02da7bed54ca6a312', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?width=320&crop=smart&auto=webp&s=8c7604ec1e7bda4fc7d3352909db012e13474925', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?width=640&crop=smart&auto=webp&s=b621deb88db9bf7a8a5ce96f9ac301ef532a7a60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?width=960&crop=smart&auto=webp&s=f38fd13e5645246d13d84a684a724bc3e8366852', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?width=1080&crop=smart&auto=webp&s=ae6ca7427ed0553fec611925e9fdeae4977eca4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v65lbb4LTLgaEErzNED3IqGgQtVMfuqjIV2AS1aVvHY.jpg?auto=webp&s=c22571cd9aabf13fc3133481250512ba5002d8cd', 'width': 1200}, 'variants': {}}]}
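The wheel-build error above usually means no C/C++ compiler was found, since llama-cpp-python compiles llama.cpp during install. One common workaround (an assumption about this setup, not a guaranteed fix) is to install a prebuilt wheel and then smoke-test the import:

```python
# Workaround sketch: skip local compilation by installing a prebuilt CPU wheel
# (the index URL is llama-cpp-python's documented wheel index; verify it is
# available for your version), then confirm the package loads a model.
#
#   pip install llama-cpp-python \
#       --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
#
from llama_cpp import Llama

llm = Llama(model_path="model.gguf")        # hypothetical local GGUF file
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```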
Text-Generator-webui merge bloke files
1
[removed]
2023-11-16T03:56:37
https://www.reddit.com/r/LocalLLaMA/comments/17wde8p/textgeneratorwebui_merge_bloke_files/
Danny_Davitoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wde8p
false
null
t3_17wde8p
/r/LocalLLaMA/comments/17wde8p/textgeneratorwebui_merge_bloke_files/
false
false
self
1
null
Getting 1 billion evaluation for LLama2 clone (Chinese start up 01.ai with Yi-34B)
1
Since this subreddit doesn't allow cross-posting, I'll quote: "You might have read news last week that the Chinese startup 01.ai behind Yi-34B has topped the LLM leaderboard on Hugging Face. The company is now valued at 1 billion USD. Kai-Fu Lee announced this model and their accomplishment on LinkedIn, calling it their own open-sourced model. Well, it turns out that Yi-34B is simply a copy of Llama 2 with 2 tensors renamed, as highlighted by another user on Hugging Face. Honestly, I feel this is a huge disgrace: leveraging open source to raise billions and calling it their own innovation." [https://huggingface.co/01-ai/Yi-34B/discussions/11](https://huggingface.co/01-ai/Yi-34B/discussions/11)
2023-11-16T02:48:03
https://www.reddit.com/r/LocalLLaMA/comments/17wby60/getting_1_billion_evaluation_for_llama2_clone/
Infamous_Charge2666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wby60
false
null
t3_17wby60
/r/LocalLLaMA/comments/17wby60/getting_1_billion_evaluation_for_llama2_clone/
false
false
self
1
{'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]}
OpenAI Assistants API in an Agent Framework
3
(Also posted on r/ML) I know this is the LocalLlama forum but there has been some interest in the OpenAI Assistants API, so here goes. Also of course a similar API service could be implemented using open/local models so it’s not entirely irrelevant! Given the OpenAI Assistants API released last week, a natural next question was — how can we have several assistants work together on a task? This was a perfect fit for the Langroid Multi Agent framework (which already works with the completions API and any other local/remote LLM). For those interested in details of how to work with this API I wanted to share how we implemented a near-complete support for all Assistant features into the Langroid agent framework: https://github.com/langroid/langroid/blob/main/langroid/agent/openai_assistant.py We created an OpenAIAssistant class derived from ChatAgent. In Langroid you wrap a ChatAgent in a Task object to enable a multi agent interaction loop. Now the same can be done with an OpenAIAssistant object. I made a Colab notebook which gradually builds up from simple examples to a two-agent system for structured information extraction from a document: https://colab.research.google.com/drive/190Tk7t4AdY1P9F_NlZ33-YEoGnHweQQ0 Our implementation supports function-calling, tools (retrieval/RAG, code interpreter). For the code interpreter we capture the code logs and display them in the interaction. We leverage persistent threads and assistants by caching their ids based on the username + machine + org, so that in a later session they could resume a previous thread + assistant. This is perhaps a simplistic implementation, I’m sure there are better ideas here. A key feature that is currently disabled is caching: this is turned off because storing Assistant responses in threads is not allowed by the API. In any case, hope this is useful to some folks, as I’ve seen a lot of questions about this API in various forums.
2023-11-16T02:47:01
https://www.reddit.com/r/LocalLLaMA/comments/17wbxg7/openai_assistants_api_in_an_agent_framework/
SatoshiNotMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wbxg7
false
null
t3_17wbxg7
/r/LocalLLaMA/comments/17wbxg7/openai_assistants_api_in_an_agent_framework/
false
false
self
3
{'enabled': False, 'images': [{'id': 'mCf-gvgLGIf97C4Q5Pay7BZQzUkl4WPZs5p9Ug1Hclk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=108&crop=smart&auto=webp&s=6f77c5c3077d3728cfb79ff9339119fb5fd7b59d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=216&crop=smart&auto=webp&s=b10a9dd7d4bc4fc2037d131a048e5e8f1febaea5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=320&crop=smart&auto=webp&s=0a9259b44724375c066bb82fd6350881d78bdbda', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=640&crop=smart&auto=webp&s=866e9598e364a0ffa0588fe4383ac87de7b85631', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=960&crop=smart&auto=webp&s=949005b7b09bccdd9f9194e40c1764d505882236', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=1080&crop=smart&auto=webp&s=ed2893c270ed2220f47d5b015a4b823d0ec38048', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?auto=webp&s=599ef552ca32f4240d29c0510c712c3dfe3b08c5', 'width': 1200}, 'variants': {}}]}
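For readers who haven't used the Assistants API the post above wraps, the raw flow is: create an assistant, create a thread, add a message, start a run, poll until it finishes, then read the reply. A minimal sketch against the v1 beta endpoints (model name and prompt are placeholders):

```python
# Sketch of the bare Assistants API loop that agent frameworks wrap.
# Assumes OPENAI_API_KEY is set; model and prompt are placeholders.
import time
from openai import OpenAI

client = OpenAI()
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="You extract structured information from documents.")
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize this document: ...")
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id)

while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)                                   # simple polling loop
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)       # newest message first
```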
Echoproof: New extension for WebUI that reduces chatbot's "OCD-like" tendencies
44
2023-11-16T02:36:26
https://github.com/ThereforeGames/echoproof
ThereforeGames
github.com
1970-01-01T00:00:00
0
{}
17wbp86
false
null
t3_17wbp86
/r/LocalLLaMA/comments/17wbp86/echoproof_new_extension_for_webui_that_reduces/
false
false
https://b.thumbs.redditm…7OvZ5hxpTDsw.jpg
44
{'enabled': False, 'images': [{'id': '4INlW3mxQDcu2fg5_9LHIhc_be5GsliEUoRgg82yPkI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?width=108&crop=smart&auto=webp&s=f5d8db4f25e89bd5fbcdb86c2ac89b8f283bdbe3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?width=216&crop=smart&auto=webp&s=231ef23842157e6548753e3c51131c074f6d265d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?width=320&crop=smart&auto=webp&s=c397b2f6cf2d3da8cff111395398bb32c90cc64e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?width=640&crop=smart&auto=webp&s=1b57dc9cc2d1a3310bc9425d619a49f2dc081c9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?width=960&crop=smart&auto=webp&s=de22c28f027c39ac316a47e01f731109a7c77e73', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?width=1080&crop=smart&auto=webp&s=8afaf7af069ec474c06e595130400d8c368bdd9d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8-9rlTr-AuR1piVXiAPqQl4_4aaf_7YXJMDGSASK590.jpg?auto=webp&s=d3dfe263dc040f67607eb143ff501fdd284c7765', 'width': 1200}, 'variants': {}}]}
Completion UI?
2
Hey y'all, I'm currently using koboldcpp but I miss having a completion interface. I know oobabooga has a completion playground, but I don't really like the way it looks. Is there any way to get a completion playground that uses koboldcpp?
2023-11-16T02:26:51
https://www.reddit.com/r/LocalLLaMA/comments/17wbhhm/completion_ui/
ipechman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wbhhm
false
null
t3_17wbhhm
/r/LocalLLaMA/comments/17wbhhm/completion_ui/
false
false
self
2
null
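One option for the question above: koboldcpp serves a simple HTTP generate endpoint, so any completion front end (or a few lines of Python) can drive it. A minimal call against the default local port:

```python
# Sketch: raw completion against koboldcpp's Kobold API endpoint.
# Assumes koboldcpp is running locally on its default port 5001.
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={"prompt": "Once upon a time", "max_length": 64, "temperature": 0.7})
print(resp.json()["results"][0]["text"])
```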
Oh We Were So Close, Little 7b Buddy
64
2023-11-16T02:15:09
https://i.redd.it/cgskerbpcm0c1.png
luncheroo
i.redd.it
1970-01-01T00:00:00
0
{}
17wb805
false
null
t3_17wb805
/r/LocalLLaMA/comments/17wb805/oh_we_were_so_close_little_7b_buddy/
false
false
https://b.thumbs.redditm…fRQ7YbUB9uMY.jpg
64
{'enabled': True, 'images': [{'id': 'CBuX0CXsYWiQspLQKs0f1Q-nnKLd59pc-cvmLE0Q2BQ', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?width=108&crop=smart&auto=webp&s=8fbebaa2401752089bab5a8dc14393b35a84776f', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?width=216&crop=smart&auto=webp&s=294578516d34a7d77cccc6d5652ea24679159b88', 'width': 216}, {'height': 153, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?width=320&crop=smart&auto=webp&s=311520c1c7dd1c647726ff2f91fd31c3400bc12b', 'width': 320}, {'height': 306, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?width=640&crop=smart&auto=webp&s=e0f03242297f4b3f5b499a1cbcb86e535c9f6feb', 'width': 640}, {'height': 459, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?width=960&crop=smart&auto=webp&s=1b866ff7fb89d1d0020c850b358d4804ce0c1927', 'width': 960}, {'height': 517, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?width=1080&crop=smart&auto=webp&s=25e61a0c33fe44721faa2585c4d3804e17aa5ee5', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/cgskerbpcm0c1.png?auto=webp&s=61282715f90fb2eeaa1824d95c82e9be6407ebc6', 'width': 1270}, 'variants': {}}]}
Running turboderp_Llama2-70B-exl2 on 3090 24GB was disappointing...
1
[removed]
2023-11-16T02:06:35
https://www.reddit.com/r/LocalLLaMA/comments/17wb16p/running_turboderp_llama270bexl2_on_3090_24gb_was/
BlueMetaMind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wb16p
false
null
t3_17wb16p
/r/LocalLLaMA/comments/17wb16p/running_turboderp_llama270bexl2_on_3090_24gb_was/
false
false
https://b.thumbs.redditm…JKaliJzW4DVo.jpg
1
null
OpenChat- Is there a special way to use it?
1
[removed]
2023-11-16T01:34:42
https://www.reddit.com/r/LocalLLaMA/comments/17wabor/openchat_is_there_a_special_way_to_use_it/
consig1iere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17wabor
false
null
t3_17wabor
/r/LocalLLaMA/comments/17wabor/openchat_is_there_a_special_way_to_use_it/
false
false
self
1
null
Multiple Local LLMs Pipeline on M-Series Mac
15
[Left / CodeBooga34B - Right / NousCapybara34B](https://preview.redd.it/bhl8io2psl0c1.png?width=3456&format=png&auto=webp&s=ba1d0c233c123f121c75efbf9d25d7a4834a97c0) I've been working on a Go framework, as a hobby project, with the intent of having all of the basic dependencies for LM workflows, to learn the foundations and architecture supporting LLMs. This has allowed me to build basic pipelines and experiment freely. Lately, I have been testing running multiple LLMs concurrently on the same host. This is possible thanks to the unified memory architecture on modern Apple hardware. I recently read that using the same model with different system prompts to simulate agents collaborating with each other is less than ideal, since the model would tend to agree with itself given it's the same dataset. At least this is the way I interpreted it. Today I was finally able to set up a pipeline provisioning CodeBooga34B and NousCapybara34B on an M2 with 64GB of memory, and to my surprise, it worked! The test was to have CodeBooga generate a simple Go program, and then have NousCapybara validate and enhance CodeBooga's output. The generated code worked without any edits on my behalf! What other interesting pipelines, workflows or tests would be ideal? The framework uses goroutines and websockets, and I should be able to cycle the models in and out as needed. For example, while "model 2" is generating and validating the answer from "model 1", we could be loading "model 3" in the background, ready to receive the output from "model 2", and so on. Thoughts about other interesting workflows?
2023-11-16T00:23:02
https://www.reddit.com/r/LocalLLaMA/comments/17w8oa2/multiple_local_llms_pipeline_on_mseries_mac/
LocoMod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w8oa2
false
null
t3_17w8oa2
/r/LocalLLaMA/comments/17w8oa2/multiple_local_llms_pipeline_on_mseries_mac/
false
false
https://b.thumbs.redditm…TEX1HXFa_oTA.jpg
15
null
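The draft-then-review pipeline described above is framework-agnostic; here is the same two-stage flow as a Python sketch with llama-cpp-python. Model file names are hypothetical, and n_gpu_layers=-1 offloads everything, which suits Apple unified memory via the Metal build:

```python
# Sketch: two local models in sequence - one drafts code, the other reviews it.
# Model paths are hypothetical; assumes a Metal (or CUDA) llama-cpp-python build.
from llama_cpp import Llama

coder = Llama(model_path="codebooga-34b.Q4_K_M.gguf", n_gpu_layers=-1, n_ctx=4096)
reviewer = Llama(model_path="nous-capybara-34b.Q4_K_M.gguf", n_gpu_layers=-1, n_ctx=4096)

draft = coder("Write a Go program that prints the first 10 primes.",
              max_tokens=512)["choices"][0]["text"]
review = reviewer("Validate and improve this Go program:\n" + draft,
                  max_tokens=512)["choices"][0]["text"]
print(review)
```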
is there any ongoing effort to "bake-in" vision capabilities on top of base models or fine-tunes?
6
I have been thinking about this for a while. Does anyone know how feasible this is? Basically, applying some sort of LoRA on top of models to give them vision capabilities, making them multimodal.
2023-11-16T00:03:10
https://www.reddit.com/r/LocalLLaMA/comments/17w87ez/is_there_any_ongoing_effort_to_bakein_vision/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w87ez
false
null
t3_17w87ez
/r/LocalLLaMA/comments/17w87ez/is_there_any_ongoing_effort_to_bakein_vision/
false
false
self
6
null
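For context on the question above: the LLaVA-style recipe keeps the LLM and a vision encoder frozen and trains only a small projection from image features into the LLM's embedding space, which is close in spirit to "a LoRA for vision". An illustrative sketch with typical but assumed dimensions:

```python
# Illustrative LLaVA-style projector: map frozen vision-encoder patch features
# into the LLM's token-embedding space. Dimensions are typical but assumed.
import torch
import torch.nn as nn

vision_dim, llm_dim = 1024, 4096          # e.g. CLIP ViT-L features -> 7B hidden size
projector = nn.Sequential(
    nn.Linear(vision_dim, llm_dim),
    nn.GELU(),
    nn.Linear(llm_dim, llm_dim),
)

image_features = torch.randn(1, 256, vision_dim)    # stand-in for encoder output
visual_tokens = projector(image_features)           # prepend these to text embeds
print(visual_tokens.shape)                          # torch.Size([1, 256, 4096])
```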
Strange noise/sound produced by GPU while running go-llama.cpp bindings
1
[removed]
2023-11-15T23:40:29
https://www.reddit.com/r/LocalLLaMA/comments/17w7nr7/strange_noisesound_produced_by_gpu_while_running/
KaleidoscopeGlad2649
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w7nr7
false
null
t3_17w7nr7
/r/LocalLLaMA/comments/17w7nr7/strange_noisesound_produced_by_gpu_while_running/
false
false
self
1
null
Good 34b or 70b models for Code Analysis
9
Hey guys, I've been lurking around this sub for a while and this is my first question. I wanted to know if there are any good 4k or 16k context size models which are really good at picking up patterns in code. For example, figuring out weaknesses in the code by reading through it and suggesting an alternate line of code. Is this use case possible with any current fine-tuned models on HF? Ideally a 34B 8-bit quantized model, if that even exists.
2023-11-15T23:24:34
https://www.reddit.com/r/LocalLLaMA/comments/17w79vh/good_34b_or_70b_models_for_code_analysis/
Jla1Million
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w79vh
false
null
t3_17w79vh
/r/LocalLLaMA/comments/17w79vh/good_34b_or_70b_models_for_code_analysis/
false
false
self
9
null
Higgsfield AI. Go chat with popular finetuned models
27
[https://higgsfield.ai/chat](https://higgsfield.ai/chat) Hey LocalLLaMA, Higgsfield AI here. A few days ago, we built an easy-to-use platform for everyone in the community to finetune models. Many of you uploaded datasets, and they are waiting in the queue for training. We received a lot of feedback, and many of you reached out wanting the opportunity to try out the models. We are happy to announce we made a chat interface for you to do that. Let us know what you think. Shout out to u/WolframRavenwolf and his efforts in comparing the LLMs. His post inspired the list of models we support now, and we will extend it soon:

* HuggingFaceH4/zephyr-7b-beta
* teknium/OpenHermes-2-Mistral-7B
* jondurbin/airoboros-m-7b-3.1.2
* ehartford/dolphin-2.1-mistral-7b
* migtissera/SynthIA-7B-v1.3
* mistralai/Mistral-7B-Instruct-v0.1
* migtissera/SynthIA-7B-v2.0
* teknium/CollectiveCognition-v1.1-Mistral-7B
* ehartford/dolphin-2.2-yi-34b
* NurtureAI/openchat_3.5-16k

Stay fine-tuned for future updates :)
2023-11-15T23:04:51
https://www.reddit.com/r/LocalLLaMA/comments/17w6rt7/higgsfield_ai_go_chat_with_popular_finetuned/
RiskApprehensive9770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w6rt7
false
null
t3_17w6rt7
/r/LocalLLaMA/comments/17w6rt7/higgsfield_ai_go_chat_with_popular_finetuned/
false
false
self
27
null
New multilingual base model from nvidia: Nemotron-3-8B
58
Nvidia released a new 8B base model (and a few fine-tunes), albeit under a restrictive license. [https://huggingface.co/nvidia/nemotron-3-8b-base-4k](https://huggingface.co/nvidia/nemotron-3-8b-base-4k) Happily, they did specify enough details about their training regimen for the model to be a useful data point. They also note that they trained on the training sets of all the popular benchmarks, which... at least they're honest about it.
2023-11-15T22:50:19
https://www.reddit.com/r/LocalLLaMA/comments/17w6e93/new_multilingual_base_model_from_nvidia/
georgejrjrjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w6e93
false
null
t3_17w6e93
/r/LocalLLaMA/comments/17w6e93/new_multilingual_base_model_from_nvidia/
false
false
self
58
{'enabled': False, 'images': [{'id': 'HxSdSImC3aTuwENPO5MkRmgaoS5HP1Gl7tganGi4z5Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?width=108&crop=smart&auto=webp&s=f6156122244ec46c3b0b7d197afd73b4b3c3254d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?width=216&crop=smart&auto=webp&s=815507524dc05d8457e51f90d9754f3b63b17900', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?width=320&crop=smart&auto=webp&s=028159b484d20668a6a31aef2b9fe04b2f45fd22', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?width=640&crop=smart&auto=webp&s=162eca69c0c5dad1fad3e4f416bdc32ef364277d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?width=960&crop=smart&auto=webp&s=94929dfbb4911d55732e6a06338729de02f2be98', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?width=1080&crop=smart&auto=webp&s=4f4e98466f6039e812ef50ca67d86b21bb7d758b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-WRJKluDLeXudEnnlQ_LZcslezbDFHdV8uJcCowhCOk.jpg?auto=webp&s=840407dbd0ab173b8788e5172b6fcab6fc058243', 'width': 1200}, 'variants': {}}]}
When OpenAI takes neutering to the extreme...
185
2023-11-15T22:45:36
https://i.redd.it/lshzh40dbl0c1.jpg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
17w69ve
false
null
t3_17w69ve
/r/LocalLLaMA/comments/17w69ve/when_openai_takes_neutering_to_the_extreme/
false
false
https://b.thumbs.redditm…t3i5OkhRh_cQ.jpg
185
{'enabled': True, 'images': [{'id': '68BAlXbSbhFYMOd-6SWB0dPxfc5zW0m7jXVz4Z3096o', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/lshzh40dbl0c1.jpg?width=108&crop=smart&auto=webp&s=afef6de2cc785f5dc4e9a1807ccd63e1c65b0340', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/lshzh40dbl0c1.jpg?width=216&crop=smart&auto=webp&s=6d3c814907cc731574e5bf3548257eaae342ef95', 'width': 216}, {'height': 116, 'url': 'https://preview.redd.it/lshzh40dbl0c1.jpg?width=320&crop=smart&auto=webp&s=27f1dd90a6a52c756b9e72ac8671b93b7ed1f905', 'width': 320}, {'height': 233, 'url': 'https://preview.redd.it/lshzh40dbl0c1.jpg?width=640&crop=smart&auto=webp&s=3a4db07357bd57820b5a152ba07b4b0e11c82d75', 'width': 640}], 'source': {'height': 263, 'url': 'https://preview.redd.it/lshzh40dbl0c1.jpg?auto=webp&s=7a5f42e569a7fe27eb8717cabdea81757ce68be3', 'width': 722}, 'variants': {}}]}
Higgsfield AI. Go chat with popular finetuned models
1
Hey LocalLLaMA, Higgsfield AI here A few days ago, we built an easy-to-use platform for everyone in the community to finetune models. Many of you uploaded datasets, and they are waiting in the queue for training. We received a lot of feedback, and many of you reached out, wanting the opportunity to try out the models. We are happy to announce we made a chat interface for you to do that. Check it out here [https://higgsfield.ai/chat](https://higgsfield.ai/chat). Let us know what you think. Shout out to [u/WolframRavenwolf](https://www.reddit.com/user/WolframRavenwolf/) and his efforts in comparing the LLMs. His post inspired the list of models we support now. Stay fine-tuned for future updates :)
2023-11-15T22:45:07
https://www.reddit.com/r/LocalLLaMA/comments/17w69ef/higgsfield_ai_go_chat_with_popular_finetuned/
higgsfield_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w69ef
false
null
t3_17w69ef
/r/LocalLLaMA/comments/17w69ef/higgsfield_ai_go_chat_with_popular_finetuned/
false
false
self
1
null
disappointed by trainers
9
Dear friends, I hope this letter finds you well and in good health. I decided to write because I greatly admire your dedication to AI training on Hugging Face, and I believe you are among the most active and knowledgeable individuals in this field. For about a year now, I have devoted a significant amount of my time, approximately 8-10 hours a day, to continuous testing of AI models. Unlike many who focus on testing the more superficial capabilities of models, such as answering common questions, I delve into querying them on more complex and specific topics related to history, culture, geography, and literature. I am concerned to note that, despite the abundance of data used in their training, models still seem to lack the ability to extract and return information faithfully and coherently. The absence of clear configuration data and presets often makes it challenging for users to understand and optimize the use of such models. Another issue I encounter is the lack of clear indications regarding the specific competencies of each model. It would be extremely helpful to have detailed information on which models are more suitable for specific sectors, such as medicine or history. Currently, searching on Hugging Face feels like navigating blindly. I am writing to you with the hope that, with your commitment and expertise, we can contribute to improving this situation. Your dedication is a beacon in the field of AI, and I am confident that your contribution could lead to significant progress in the efficiency and accessibility of these models. I look forward to your response and hope that your passion for Artificial Intelligence and your commitment to historical matters continue to guide your work. Best regards, Claudio from Italy
2023-11-15T22:31:01
https://www.reddit.com/r/LocalLLaMA/comments/17w5wfm/disappointed_by_trainers/
Creative_Bottle_3225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w5wfm
false
null
t3_17w5wfm
/r/LocalLLaMA/comments/17w5wfm/disappointed_by_trainers/
false
false
self
9
null
Commercial Licensing for Yi Models is Pending Update
2
### Sources:

https://huggingface.co/01-ai/Yi-6B#news
https://huggingface.co/01-ai/Yi-6B-200K#news
https://huggingface.co/01-ai/Yi-34B#news
https://huggingface.co/01-ai/Yi-34B-200K#news

Any speculations?
2023-11-15T22:10:05
https://www.reddit.com/r/LocalLLaMA/comments/17w5ctx/commercial_licensing_for_yi_models_is_pending/
harumorii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w5ctx
false
null
t3_17w5ctx
/r/LocalLLaMA/comments/17w5ctx/commercial_licensing_for_yi_models_is_pending/
false
false
self
2
{'enabled': False, 'images': [{'id': 'AluJz0nChz5pSaft9X7Pd-CHNT4tPFgB-sVQSgmVEI0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?width=108&crop=smart&auto=webp&s=9e781de98817721923dde99733f0100fe43b06c8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?width=216&crop=smart&auto=webp&s=342b52d0a2c6b90b3776f4505cb9a1d53f81d759', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?width=320&crop=smart&auto=webp&s=c8404e194306f1bb63a772f1acba66c88d3aefff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?width=640&crop=smart&auto=webp&s=dd164c317bb65ed8ac516dd238a56efe3a510ec7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?width=960&crop=smart&auto=webp&s=3eddae1a837059c343831cb03aafa714cf41c886', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?width=1080&crop=smart&auto=webp&s=e22e82388bd0697f603cb43360a05c1a4573098e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aJcLhD2diqv9W-JMHJow9FjnHxb9tkQJjSWdNSpGWcw.jpg?auto=webp&s=1c3fb1e77244cce79c075f8cafe4adf54b5c714b', 'width': 1200}, 'variants': {}}]}
Nvidia: TensorRT-LLM OpenAI Chat API Integration
9
TensorRT-LLM for Windows will soon be compatible with OpenAI's popular Chat API through a new wrapper. This will enable hundreds of developer projects and applications to run locally on a PC with RTX, instead of in the cloud, so users can keep private and proprietary data on their PCs. YouTube video about it: [https://www.youtube.com/watch?v=-P17YXulhDc&t=1s&ab_channel=NVIDIAStudio](https://www.youtube.com/watch?v=-P17YXulhDc&t=1s&ab_channel=NVIDIAStudio)
2023-11-15T22:09:59
https://www.reddit.com/r/LocalLLaMA/comments/17w5cqo/nvidia_tensorrtllm_openai_chat_api_integration/
mudgategame
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w5cqo
false
null
t3_17w5cqo
/r/LocalLLaMA/comments/17w5cqo/nvidia_tensorrtllm_openai_chat_api_integration/
false
false
self
9
{'enabled': False, 'images': [{'id': 'fafJgAmiFbDDL8oQB8qdtd3JRzzdcsZ-z5Lm6wrdGXs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/l3ri_sKDjOGHWdJU6dflAlV1pHwrNXzzwj79CVUXEbM.jpg?width=108&crop=smart&auto=webp&s=e6ce65be1cee4fbaa8328b3741bc372e2591398d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/l3ri_sKDjOGHWdJU6dflAlV1pHwrNXzzwj79CVUXEbM.jpg?width=216&crop=smart&auto=webp&s=2427f80afcaaea336e9066760d6dee330f3d7765', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/l3ri_sKDjOGHWdJU6dflAlV1pHwrNXzzwj79CVUXEbM.jpg?width=320&crop=smart&auto=webp&s=b1164c7c3b453ac32dcae42451cd0ce7374b1c3e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/l3ri_sKDjOGHWdJU6dflAlV1pHwrNXzzwj79CVUXEbM.jpg?auto=webp&s=93765a3046d389793be50afba43cfa2d62127776', 'width': 480}, 'variants': {}}]}
🐺🐦‍⬛ LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)
169
I posted my latest [LLM Comparison/Test](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) just yesterday, but here's another (shorter) comparison/benchmark I did while working on that - **testing different formats and quantization levels**.

My goal was to find out which format and quant to focus on. So I took the best 70B according to my previous tests, and re-tested that again with various formats and quants. I wanted to find out if they worked the same, better, or worse. And here's what I discovered:

| Model | Format | Quant | Offloaded Layers | VRAM Used | Primary Score | Secondary Score | Speed +mmq | Speed -mmq |
| ----- | ------ | ----- | ---------------- | --------- | ------------- | --------------- | ---------- | ---------- |
| lizpreciatior/lzlv_70B.gguf | GGUF | Q4_K_M | 83/83 | 39362.61 MB | 18/18 | 4+3+4+6 = 17/18 | | |
| lizpreciatior/lzlv_70B.gguf | GGUF | Q5_K_M | 70/83 ! | 40230.62 MB | 18/18 | 4+3+4+6 = 17/18 | | |
| TheBloke/lzlv_70B-GGUF | GGUF | Q2_K | 83/83 | 27840.11 MB | 18/18 | 4+3+4+6 = 17/18 | 4.20T/s | 4.01T/s |
| TheBloke/lzlv_70B-GGUF | GGUF | Q3_K_M | 83/83 | 31541.11 MB | 18/18 | 4+3+4+6 = 17/18 | 4.41T/s | 3.96T/s |
| TheBloke/lzlv_70B-GGUF | GGUF | Q4_0 | 83/83 | 36930.11 MB | 18/18 | 4+3+4+6 = 17/18 | 4.61T/s | 3.94T/s |
| TheBloke/lzlv_70B-GGUF | GGUF | Q4_K_M | 83/83 | 39362.61 MB | 18/18 | 4+3+4+6 = 17/18 | 4.73T/s !! | 4.11T/s |
| TheBloke/lzlv_70B-GGUF | GGUF | Q5_K_M | 70/83 ! | 40230.62 MB | 18/18 | 4+3+4+6 = 17/18 | 1.51T/s | 1.46T/s |
| TheBloke/lzlv_70B-GGUF | GGUF | Q5_K_M | 80/83 | 46117.50 MB | *OutOfMemory* | | | |
| TheBloke/lzlv_70B-GGUF | GGUF | Q5_K_M | 83/83 | 46322.61 MB | *OutOfMemory* | | | |
| LoneStriker/lzlv_70b_fp16_hf-2.4bpw-h6-exl2 | EXL2 | 2.4bpw | | 11,11 -> 22 GB | *BROKEN* | | | |
| LoneStriker/lzlv_70b_fp16_hf-2.6bpw-h6-exl2 | EXL2 | 2.6bpw | | 12,11 -> 23 GB | *FAIL* | | | |
| LoneStriker/lzlv_70b_fp16_hf-3.0bpw-h6-exl2 | EXL2 | 3.0bpw | | 14,13 -> 27 GB | 18/18 | 4+2+2+6 = 14/18 | | |
| LoneStriker/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 | EXL2 | 4.0bpw | | 18,17 -> 35 GB | 18/18 | 4+3+2+6 = 15/18 | | |
| LoneStriker/lzlv_70b_fp16_hf-4.65bpw-h6-exl2 | EXL2 | 4.65bpw | | 20,20 -> 40 GB | 18/18 | 4+3+2+6 = 15/18 | | |
| LoneStriker/lzlv_70b_fp16_hf-5.0bpw-h6-exl2 | EXL2 | 5.0bpw | | 22,21 -> 43 GB | 18/18 | 4+3+2+6 = 15/18 | | |
| LoneStriker/lzlv_70b_fp16_hf-6.0bpw-h6-exl2 | EXL2 | 6.0bpw | | > 48 GB | *TOO BIG* | | | |
| TheBloke/lzlv_70B-AWQ | AWQ | 4-bit | | | *OutOfMemory* | | | |

#### Observations:

- Scores = Number of correct answers to multiple choice questions of 1st test series (4 German data protection trainings) as usual
- Primary Score = Number of correct answers after giving information
- Secondary Score = Number of correct answers without giving information (blind)
- Model's official prompt format (Vicuna 1.1), deterministic settings. Different quants still produce different outputs because of internal differences.
- Speed is from koboldcpp-1.49's stats, after a fresh start (no cache) with 3K of 4K context filled up already, with (+) or without (-) the `mmq` option to `--usecublas`.
- LoneStriker/lzlv_70b_fp16_hf-2.4bpw-h6-exl2: 2.4-bit = BROKEN! Didn't work at all, outputting only one word and repeating that ad infinitum.
- LoneStriker/lzlv_70b_fp16_hf-2.6bpw-h6-exl2: 2.6-bit = FAIL! Acknowledged questions like information with just "OK", didn't answer unless prompted, and made mistakes despite given information.
- Even EXL2 5.0bpw was surprisingly doing much worse than GGUF Q2_K.
- AWQ just doesn't work for me with oobabooga's text-generation-webui; despite 2x 24 GB VRAM, it goes OOM. Allocation seems to be broken. Giving up on that format for now.
- All versions consistently acknowledged all data input with "OK" and followed instructions to answer with just a single letter or more than just a single letter.
- EXL2 isn't entirely deterministic. Its author said speed is more important than determinism, and I agree, but the quality loss and non-determinism make it less suitable for model tests and comparisons.

#### Conclusion:

- With AWQ not working and EXL2 delivering bad quality (the secondary score dropped a lot!), I'll stick to the GGUF format for further testing, for now at least.
- Strange that bigger quants got more tokens per second than smaller ones; maybe that's because of different responses, but Q4_K_M with mmq was fastest - so I'll use that for future comparisons and tests.
- For real-time uses like Voxta+VaM, EXL2 4-bit is better - it's fast and accurate, yet not too big (I need some of the VRAM for rendering the AI's avatar in AR/VR). It feels almost as fast as unquantized Transformers Mistral 7B, but is much more accurate for function calling/action inference and summarization (it's a 70B after all).

So these are my - quite unexpected - findings with this setup. Sharing them with you all and looking for feedback if anyone has done perplexity tests or other benchmarks between formats. Is EXL2 really such a tradeoff between speed and quality in general, or could that be a model-specific effect here?

--------------------------------------------------------------------------------

Here's a list of my previous model tests and comparisons or other related posts:

- [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/)
- [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/)
- [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter-GGUF
- [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/)
- [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/)
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/)

--------------------------------------------------------------------------------

**Disclaimer:** Some kind soul recently asked me if they could tip me for my LLM reviews and advice, so I set up [a Ko-fi page](https://ko-fi.com/wolframravenwolf). While this may affect the priority/order of my tests, it will not change the results; I am incorruptible. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
2023-11-15T22:04:22
https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w57eu
false
null
t3_17w57eu
/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/
false
false
self
169
{'enabled': False, 'images': [{'id': '-E3KSlSzwSX9yTQAfxQyZkhYfIhsqtQIUeirxKXifKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?width=108&crop=smart&auto=webp&s=fd1b174f2b9dc42b12ad1ff0974e5a85d70189e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?width=216&crop=smart&auto=webp&s=ff196eb818ee5b473ff35161248747d29172b7f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?width=320&crop=smart&auto=webp&s=7db62e1e2f5141d7db34d015fbf3731ceedc97f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?width=640&crop=smart&auto=webp&s=2732440dde66049c3d87cb9608d445a894ff03df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?width=960&crop=smart&auto=webp&s=cddeff38665b62ec5a2a89da6f8a411ceeb27f84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?width=1080&crop=smart&auto=webp&s=dd5880651124c81bf5ab2626aac2f70dc8aaac84', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MDIFjY06JaLVQ0ibe7F1Ye9iyU4bxvrS6J5FI6HFaFg.jpg?auto=webp&s=b09cd874e94b21af20f2b0ccf50d3195683f827e', 'width': 1200}, 'variants': {}}]}
Is some tiny (1GB) model available through pip install?
4
I was looking into Galactica the other day, and it's available as a "pip install" that, on first use, downloads the model (a 260GB download, LOL) and sets up everything. I got a headache looking for hours through different small models, figuring out what I need to download, etc. Has any other model, a really small one for my SSH-only server without CUDA, been packaged like that? Though I wouldn't mind a model that fits in my laptop's 8GB RTX either.
2023-11-15T21:55:26
https://www.reddit.com/r/LocalLLaMA/comments/17w4yyh/is_some_tiny_1gb_model_available_through_pip/
GermanK20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w4yyh
false
null
t3_17w4yyh
/r/LocalLLaMA/comments/17w4yyh/is_some_tiny_1gb_model_available_through_pip/
false
false
self
4
null
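The closest thing to the pattern asked about above: pip-install a runtime (transformers here) and let the first call pull a small model into the local cache. A CPU-only sketch with a sub-1GB model; the model choice is just an example:

```python
# Sketch: "pip install transformers torch", then the first run downloads the
# model into ~/.cache/huggingface. distilgpt2 is ~350 MB and runs fine on CPU.
from transformers import pipeline

pipe = pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU
print(pipe("Small models are", max_new_tokens=20)[0]["generated_text"])
```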
Converting non-LLaMA model to Q4_K_M GGUF format?
4
So I tried quantising the BLOOMZ 560M model ([https://huggingface.co/bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m)) to GGUF format using the convert-hf-to-gguf.py script in the llama.cpp GitHub repo. This worked, except this script only seems to allow for f32 and f16. I know that quantising to lower bits should definitely be possible, since I've seen the 1.7B version of BLOOMZ quantised to q6 and uploaded as GGUF on Hugging Face. Are there any other tools and scripts available that might help me accomplish this? I've been looking around everywhere, but can't seem to find anything for non-LLaMA models like BLOOMZ. Thanks so much in advance!
2023-11-15T21:54:37
https://www.reddit.com/r/LocalLLaMA/comments/17w4y7k/converting_nonllama_model_to_q4_k_m_gguf_format/
prman7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w4y7k
false
null
t3_17w4y7k
/r/LocalLLaMA/comments/17w4y7k/converting_nonllama_model_to_q4_k_m_gguf_format/
false
false
self
4
{'enabled': False, 'images': [{'id': 'pHsAgFLeU9rHpjGdPmBMokZ-EqZyRiOJ8d0LLvBSeXY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?width=108&crop=smart&auto=webp&s=c38e21bbbcc4cfd77f1f676008845dfa7ada627c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?width=216&crop=smart&auto=webp&s=b6a76bbe5420c472bd1e924fb8520d2098e87711', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?width=320&crop=smart&auto=webp&s=b692acbb58edb1c6b8d09b76726e3bbd7353c825', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?width=640&crop=smart&auto=webp&s=d5edaa8a8f2d57482407e9eb965d2ca734fe8132', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?width=960&crop=smart&auto=webp&s=f1249e4d70d20f882fb7bed97a80521bbe58187d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?width=1080&crop=smart&auto=webp&s=6929a68f3733cb1b534947268dc04f5966714df6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ffDCVLYVtNqnEFm_ERohRgIht7zg7921Wxhcn9B7zBE.jpg?auto=webp&s=ca24cd84e3a5174e49a63dc3c387f37a50e3e67c', 'width': 1200}, 'variants': {}}]}
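The likely missing step in the question above is that llama.cpp splits conversion and quantization: the Python script only emits f32/f16 GGUF, and the compiled `quantize` tool then produces the K-quants, for supported non-LLaMA architectures like BLOOM as well. A sketch of the two-step flow; file and directory names are assumptions:

```python
# Sketch: convert to f16 GGUF, then quantize with llama.cpp's compiled tool.
# Run from a built llama.cpp checkout; paths/names are assumptions.
import subprocess

subprocess.run(["python", "convert-hf-to-gguf.py", "bloomz-560m",
                "--outtype", "f16", "--outfile", "bloomz-560m-f16.gguf"],
               check=True)
subprocess.run(["./quantize", "bloomz-560m-f16.gguf",
                "bloomz-560m-Q4_K_M.gguf", "Q4_K_M"], check=True)
```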
Microsoft announced the Maia 100 AI Accelerator Chip. It's also expanding the use of the AMD MI300 in it's datacenters. Is this the beginning of the end of CUDA dominance?
77
2023-11-15T20:26:51
https://news.microsoft.com/source/features/ai/in-house-chips-silicon-to-service-to-meet-ai-demand/
fallingdowndizzyvr
news.microsoft.com
1970-01-01T00:00:00
0
{}
17w2v30
false
null
t3_17w2v30
/r/LocalLLaMA/comments/17w2v30/microsoft_announced_the_maia_100_ai_accelerator/
false
false
https://b.thumbs.redditm…FXkQb3wvLN2E.jpg
77
{'enabled': False, 'images': [{'id': 'RxeXInsaRe1k6xd-RQV9dUVfBTCb6yxSjWs-wFCtG3U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?width=108&crop=smart&auto=webp&s=6bd120b4a14ceb47e686159cb558e7bb4a0966a0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?width=216&crop=smart&auto=webp&s=83a176ca06cbd528c0cca256e32c2cdd4518f7ca', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?width=320&crop=smart&auto=webp&s=6aec2baa9bb3cce262b97802a2e7fa15895b64b2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?width=640&crop=smart&auto=webp&s=12b26f6c42c27b2cda9b4acad4ab9d69d96e4b8a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?width=960&crop=smart&auto=webp&s=7a5fdbed363563d88fbd72d78f3f341a6c316cf4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?width=1080&crop=smart&auto=webp&s=1f4d00a64801b8ee3ff13ea929768cf41cc138f4', 'width': 1080}], 'source': {'height': 631, 'url': 'https://external-preview.redd.it/po-VnWVfCe70fcgDP3ymqzKavlCl3YW8jhyGmKkwWOQ.jpg?auto=webp&s=6f32d02d0c6f0eaf40ad7ebf4b3b609c745955d8', 'width': 1200}, 'variants': {}}]}
LMstudio: How to bulk export text from conversations?
1
[removed]
2023-11-15T20:25:58
https://www.reddit.com/r/LocalLLaMA/comments/17w2uc2/lmstudio_how_to_bulk_export_text_from/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w2uc2
false
null
t3_17w2uc2
/r/LocalLLaMA/comments/17w2uc2/lmstudio_how_to_bulk_export_text_from/
false
false
default
1
null
Does LMstudio break your privacy?
1
[removed]
2023-11-15T19:58:40
https://www.reddit.com/r/LocalLLaMA/comments/17w26xa/does_lmstudio_break_your_privacy/
innocuousAzureus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w26xa
false
null
t3_17w26xa
/r/LocalLLaMA/comments/17w26xa/does_lmstudio_break_your_privacy/
false
false
self
1
null
Visibility is important
1
I've been in LocalLLaMA for a while now, and one of the great things about this subreddit is that people share the results of their tests, which is especially helpful in a field where something new seems to be built on an almost daily basis. However, there aren't any tools for true visibility behind the scenes of what the LLMs are actually doing; it's mostly a black box, with people changing settings, models, or finetunes and reporting results. I saw this pop up in r/machinelearningnews and thought I'd share, as it seems like quite a valuable project for us all. I am not affiliated with this company. [https://www.aistartupnewsletter.com/p/meet-langfuse-new-opensource-observability-product-analytics-tool-llmbased-applications](https://www.aistartupnewsletter.com/p/meet-langfuse-new-opensource-observability-product-analytics-tool-llmbased-applications)
2023-11-15T19:27:54
https://www.reddit.com/r/LocalLLaMA/comments/17w1i5p/visibility_is_important/
Ok_Relationship_9879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w1i5p
false
null
t3_17w1i5p
/r/LocalLLaMA/comments/17w1i5p/visibility_is_important/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mtw_vw8q5e4EJF5So7Vu-vjQ2OgMMlJs1mLy1vQlNp0', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?width=108&crop=smart&auto=webp&s=572b75ec872f632445b76c9fad18709337fc6980', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?width=216&crop=smart&auto=webp&s=f09c010e0961dae473bfda52d8e0d9317f489d3d', 'width': 216}, {'height': 234, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?width=320&crop=smart&auto=webp&s=87c46f39fd57b0e02c6da21581a743148fe22234', 'width': 320}, {'height': 468, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?width=640&crop=smart&auto=webp&s=e04b9569508246540e4fe70730518f82e9d9e57b', 'width': 640}, {'height': 702, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?width=960&crop=smart&auto=webp&s=b01f277748a29c7bb84fda8e12696fb0c40d3a63', 'width': 960}, {'height': 789, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?width=1080&crop=smart&auto=webp&s=909a07ad827a56da300815cc5f52f18858cf2da8', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/gm8Z5HCwnl87iBXhg5z8vMBPWwsobF99FPuH-gCge3c.jpg?auto=webp&s=1b196d461417dacf712596b174ea34ba3265b787', 'width': 1292}, 'variants': {}}]}
Curious about knowledge files for local LLMs
1
Hey all, The recent GPT builder announcement has got me really into making coding assistants. I'm trying to create knowledge files that override blind spots created during ChatGPT's training process. For instance, with Flutter, null safety is a persistent and annoying issue, so I've made instructions to force it to pay more attention. I'm also curious about how the same files could translate to other models. Are there any local LLMs that have been programmed to have persistent files they can reference? If so, how do I reach out to them to gauge interest in collaborating? Cheers Succotash
2023-11-15T18:29:48
https://www.reddit.com/r/LocalLLaMA/comments/17w066h/curious_about_knowledge_files_for_local_llms/
SuccotashComplete
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17w066h
false
null
t3_17w066h
/r/LocalLLaMA/comments/17w066h/curious_about_knowledge_files_for_local_llms/
false
false
self
1
null
How to rank models by popularity
18
https://openrouter.ai/rankings Apps are mostly roleplay, followed by coding assistants, games, and content marketing. They're shown here: openrouter.ai Feedback welcome!
2023-11-15T18:09:36
https://i.redd.it/90u2e2h8yj0c1.jpeg
stevexander
i.redd.it
1970-01-01T00:00:00
0
{}
17vzows
false
null
t3_17vzows
/r/LocalLLaMA/comments/17vzows/how_to_rank_models_by_popularity/
false
false
https://b.thumbs.redditm…V-mSUD8whSZY.jpg
18
{'enabled': True, 'images': [{'id': 'eH3jgaDHan_B5TzuF2-y0K3yorF5vYhwzcruiplY8zQ', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?width=108&crop=smart&auto=webp&s=1654b546894c761b7856578bd69fc4602434f53c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?width=216&crop=smart&auto=webp&s=406232ae5bdbf17eeccf0a3f2ed6d9ec9d9faa9c', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?width=320&crop=smart&auto=webp&s=88c0d9e75ed41c2775971049cb5f19a7c03759f2', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?width=640&crop=smart&auto=webp&s=2a83f7278fc3d946b3578394ac82d956b556c8bf', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?width=960&crop=smart&auto=webp&s=17efe1e5d3bb62d01ff68278f190c19a4d7688d8', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?width=1080&crop=smart&auto=webp&s=f5443fcde3dc80b14d1f64fe0c8e010c9b25f8aa', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://preview.redd.it/90u2e2h8yj0c1.jpeg?auto=webp&s=c937eaab3f01ed4f59f01c711498815d529b5334', 'width': 1170}, 'variants': {}}]}
How to increase the output size of any local LLM (Mistral/LLaMA)?
1
[removed]
2023-11-15T17:54:00
https://www.reddit.com/r/LocalLLaMA/comments/17vzbly/how_to_increase_the_output_size_of_any_local_llm/
alex_pro777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vzbly
false
null
t3_17vzbly
/r/LocalLLaMA/comments/17vzbly/how_to_increase_the_output_size_of_any_local_llm/
false
false
self
1
null
Mistral 13-16B?
15
Hey guys, I heard Mistral is releasing a model with 2x the number of parameters of the open source one before the EOY, but is this one going to be behind their own API rather than open source? Or were they talking about 'premium models', meaning even larger params? Really need this to be open source, thanks.
2023-11-15T17:45:58
https://www.reddit.com/r/LocalLLaMA/comments/17vz4v2/mistral_1316b/
enterguild
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vz4v2
false
null
t3_17vz4v2
/r/LocalLLaMA/comments/17vz4v2/mistral_1316b/
false
false
self
15
null
Suggestion for LLM which has the best understanding of language structure
2
I want to build a very specific application which will take a block of text and give a boolean reply indicating whether it is a proper English sentence or not. Any suggestions for such an LLM? For example, I have had some success with GPT-4, but I want to run it locally and not use the OpenAI APIs as of now.
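One minimal way to prototype this locally (a sketch assuming llama-cpp-python is installed; the model path and prompt wording here are placeholders, not a recommendation):

```python
# Sketch: ask a local GGUF model for a strict Yes/No verdict and map it to a bool.
# The model file and prompt phrasing are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048, verbose=False)

def is_proper_english(text: str) -> bool:
    prompt = (
        "Answer with exactly one word, Yes or No.\n"
        f'Is the following a grammatically correct English sentence?\n"{text}"\nAnswer:'
    )
    out = llm(prompt, max_tokens=3, temperature=0.0)  # greedy for a stable verdict
    answer = out["choices"][0]["text"].strip().lower()
    return answer.startswith("yes")

print(is_proper_english("The cat sat on the mat."))  # expected: True
```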
2023-11-15T17:29:47
https://www.reddit.com/r/LocalLLaMA/comments/17vyrj8/suggestion_for_llm_which_has_the_best/
shinigami_inso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vyrj8
false
null
t3_17vyrj8
/r/LocalLLaMA/comments/17vyrj8/suggestion_for_llm_which_has_the_best/
false
false
self
2
null
Mistral Premium Model just mentioned at MS Ignite
51
Looks like there will be a premium, non-open-source model from Mistral; hopefully it's the 180B version and the lower-param models will remain open source.
2023-11-15T17:29:28
https://www.reddit.com/r/LocalLLaMA/comments/17vyrao/mistral_premium_model_just_mentioned_at_ms_ignite/
johnfiretv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vyrao
false
null
t3_17vyrao
/r/LocalLLaMA/comments/17vyrao/mistral_premium_model_just_mentioned_at_ms_ignite/
false
false
self
51
null
"A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases"—Putting numbers to many of our intuition that KGs are much more effective than other context inputs in RAG-like approaches
32
2023-11-15T16:58:52
https://arxiv.org/abs/2311.07509
CodeGriot
arxiv.org
1970-01-01T00:00:00
0
{}
17vy1bo
false
null
t3_17vy1bo
/r/LocalLLaMA/comments/17vy1bo/a_benchmark_to_understand_the_role_of_knowledge/
false
false
https://a.thumbs.redditm…sbpzIyRENtB8.jpg
32
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Cerebras system for fine-tuning
1
[removed]
2023-11-15T16:46:17
https://www.cerebras.net/product-cloud/
ThenNeighborhood3723
cerebras.net
1970-01-01T00:00:00
0
{}
17vxrn9
false
null
t3_17vxrn9
/r/LocalLLaMA/comments/17vxrn9/cerebras_system_for_finetuning/
false
false
default
1
null
fastchat-t5 cheapest hosting?
5
I already tried to set up [fastchat-t5](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) on a DigitalOcean virtual server with 32 GiB and 4 vCPUs for $160/month with CPU inference. The performance was horrible. Answers took about 5 seconds for the first token and then 1 word per second. Any ideas how to host a small LLM like fastchat-t5 economically?
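One lever that can cut the bill: 8-bit weights roughly halve memory versus fp16, and a cheap GPU instance is usually far faster per dollar than 4 vCPUs. A sketch, assuming a small CUDA GPU with bitsandbytes and accelerate installed (CPU-only hosts can't use `load_in_8bit`):

```python
# Sketch: serve fastchat-t5 in 8-bit on a small CUDA GPU.
# use_fast=False is a commonly suggested workaround for this model's
# tokenizer quirks, not something I've verified here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"
tok = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")

inputs = tok("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```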
2023-11-15T16:45:55
https://www.reddit.com/r/LocalLLaMA/comments/17vxrco/fastchatt5_cheapest_hosting/
HeronAI_com
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vxrco
false
null
t3_17vxrco
/r/LocalLLaMA/comments/17vxrco/fastchatt5_cheapest_hosting/
false
false
self
5
{'enabled': False, 'images': [{'id': 'Eob4KxD9FNha-O7U37iX0SAjsaXoUzz1R-3lVieTQYs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?width=108&crop=smart&auto=webp&s=bacda9661cbe5e040ee75fd99dcabafa648b3cea', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?width=216&crop=smart&auto=webp&s=17e62114a38d74cc596e87bfcdd0b2e9df3295a1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?width=320&crop=smart&auto=webp&s=1f94992320029d05a61c5e0a6bc462e63621250d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?width=640&crop=smart&auto=webp&s=326d4667b25784a71dae01caa38d6dc7988bc04f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?width=960&crop=smart&auto=webp&s=0adbdf2e73febda326dc95b5d62a2a4a47cbb5ca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?width=1080&crop=smart&auto=webp&s=43db5120386c09c85189c557aa0f31a659edd84e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rLr3mk3gNr3DPa6OWBP7JyJ8UBLgtJWKoYIQsiuBNhM.jpg?auto=webp&s=b5f16292a59bebc30bb83fc4f5e20b3999caac2a', 'width': 1200}, 'variants': {}}]}
Change the writing style: Dolphin 2.2 Yi 34b
1
I am using the new `TheBloke/dolphin-2_2-yi-34b-GPTQ`, but I am not succeeding in changing the writing style of the model. I tried adding it in the character, system prompt or user prompt (API), but the result is still the same. Example (the end I didn't expect 😂) https://preview.redd.it/ujzp7dp78j0c1.png?width=1798&format=png&auto=webp&s=b9ab683c114e9cd4176e1caf60c635e47161dc68 https://preview.redd.it/hwp0khie8j0c1.png?width=1866&format=png&auto=webp&s=7d78abfc2e9c3bbc8c868812ddf90448f99736a7
2023-11-15T15:45:22
https://www.reddit.com/r/LocalLLaMA/comments/17vwg3r/change_the_writing_style_dolphin_22_yi_34b/
reddithotel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vwg3r
false
null
t3_17vwg3r
/r/LocalLLaMA/comments/17vwg3r/change_the_writing_style_dolphin_22_yi_34b/
false
false
https://b.thumbs.redditm…qlom5tNNB2Yg.jpg
1
null
My model won't stop repeating itself and I can't figure out why.
1
[removed]
2023-11-15T14:54:11
https://www.reddit.com/r/LocalLLaMA/comments/17vvc1c/my_model_wont_stop_repeating_itself_and_i_cant/
Future_Might_8194
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vvc1c
false
null
t3_17vvc1c
/r/LocalLLaMA/comments/17vvc1c/my_model_wont_stop_repeating_itself_and_i_cant/
false
false
https://b.thumbs.redditm…GTJL449h9pOY.jpg
1
null
Myntra's Maya Chatbot
2
Does anyone have any ideas or suggestions on how I can make a chatbot like that for my shopping website? Do suggest your ideas, thanks.
2023-11-15T13:54:00
https://www.reddit.com/r/LocalLLaMA/comments/17vu43u/myntras_maya_chatbot/
One-Difficulty3149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vu43u
false
null
t3_17vu43u
/r/LocalLLaMA/comments/17vu43u/myntras_maya_chatbot/
false
false
self
2
null
🚀 Exciting Updates from Together AI: Breakthroughs in Inference, Custom Models, and GPU Power!
1
Hey LLM enthusiasts! Together AI just rolled out some incredible updates that are game changers in the world of large language models. Here's a quick rundown:

1. **Together Inference Engine Launched**: Prepare to be blown away! This new inference engine integrates cutting-edge techniques like FlashAttention-2, Flash-Decoding, and Medusa. It's touted as the fastest inference service around, significantly outpacing competitors. Plus, they've slashed prices:
   * 7B model at $0.0002/1K tokens
   * 13B model at $0.000225/1K tokens
   * 70B model at $0.0009/1K tokens
2. **Introducing Together Custom Models**: This is huge for AI teams! Build your own state-of-the-art LLM with total ownership after creation. The process includes top-tier tech like FlashAttention-2, DoReMi, and DSIR. They've already helped Arcee develop a custom legal LLM at lightning speed.
3. **Together GPU Clusters Expansion to 20 Exaflops**: Known earlier as Together Compute, this offers dedicated, high-speed GPU training clusters with NVIDIA's finest like H100 and A100 GPUs. It's perfect for those needing flexible, scalable, and efficient model training with robust support. Startups like Pika Labs have already saved over $1M in 5 months!

Together AI is pushing the frontiers in LLM performance and scalability. Check out their new website and the updated inference stack at api.together.ai. Can't wait to see what the community builds with these tools! [https://www.together.ai/](https://www.together.ai/)

_____

This post was shamelessly generated with GPT4. Got the newsletter and thought I would share; cheap inference of open source LLMs is very relevant to this sub even if it is not "local".
2023-11-15T13:48:54
https://www.reddit.com/r/LocalLLaMA/comments/17vu0le/exciting_updates_from_together_ai_breakthroughs/
noioiomio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vu0le
false
null
t3_17vu0le
/r/LocalLLaMA/comments/17vu0le/exciting_updates_from_together_ai_breakthroughs/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KEKqZLXaDuojO8066WvfNm2knPNQpREJOqDRQbP0jOE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=108&crop=smart&auto=webp&s=c4356a09ff651d99050d2e2f7c625136bd5cc50d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=216&crop=smart&auto=webp&s=2efb5516e5e9493aedbb8874a4346aea1e2fdfe3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=320&crop=smart&auto=webp&s=5760f28068be8d1404c060058ca5dc7138a3921c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=640&crop=smart&auto=webp&s=5040e75d875b032b45e4cafad1ca6eed231c2aa5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=960&crop=smart&auto=webp&s=678233eb228e31658cc7dc6f24ff3c4c199255ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=1080&crop=smart&auto=webp&s=e9407e720f5a5c73c6566e3b787afc17181bbb3f', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?auto=webp&s=610ce8e238d743540ebac62332adfbc058d7c11d', 'width': 2400}, 'variants': {}}]}
Run uncensored Yi, dolphin-2.2-yi-34b, on your own device/ across platforms
1
2023-11-15T13:10:07
https://www.secondstate.io/articles/dolphin-2.2-yi-34b/
smileymileycoin
secondstate.io
1970-01-01T00:00:00
0
{}
17vtawv
false
null
t3_17vtawv
/r/LocalLLaMA/comments/17vtawv/run_uncensored_yi_dolphin22yi34b_on_your_own/
false
false
default
1
null
FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.
1
Hi guys, I'm following this guide to fine-tune Mistral: [https://github.com/brevdev/notebooks/blob/main/mistral-finetune-own-data.ipynb](https://github.com/brevdev/notebooks/blob/main/mistral-finetune-own-data.ipynb) Running into this error when running inference on the fine-tuned model: "FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first." Can't find much info on how to fix this. The model itself is loaded onto CUDA - can individual layers within the model not be on the device?
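That error usually appears when the quantized weights were never initialized on the GPU before inference. A hedged sketch of the usual workaround - load the 4-bit base with `device_map="auto"` so accelerate places the layers at load time (rather than calling `.to()` afterwards), then attach the adapter; the paths here are placeholders:

```python
# Sketch: reload the 4-bit base so bitsandbytes initializes the FP4/NF4
# layers on the GPU, then attach the LoRA adapter on top.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",   # let accelerate place layers; avoid a later .to()
)
model = PeftModel.from_pretrained(base, "./mistral-finetune/checkpoint-500")  # placeholder path
model.eval()
```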
2023-11-15T11:05:21
https://www.reddit.com/r/LocalLLaMA/comments/17vrbea/fp4_quantization_state_not_initialized_please/
AstronomerChance5093
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vrbea
false
null
t3_17vrbea
/r/LocalLLaMA/comments/17vrbea/fp4_quantization_state_not_initialized_please/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XwTOhMx3c6iXxQHmDIPQyRunmduCYYuiPKnTNYreN-k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?width=108&crop=smart&auto=webp&s=f8cc26b89a67acb94e6160e999a5a5d444855b67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?width=216&crop=smart&auto=webp&s=8ff99c22e02e5b81426271207ebe6f478297a9f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?width=320&crop=smart&auto=webp&s=a648b2d24e0144da6a0087e020a865e5eaf598f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?width=640&crop=smart&auto=webp&s=06b5bc264375575db6f9e0eed852c2971c9b367c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?width=960&crop=smart&auto=webp&s=216bd8df2d51731503e58b19dc926dbed32750ff', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?width=1080&crop=smart&auto=webp&s=27780ff6f4869775849aac2861bfccbc719389d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wavKbGgA6AuU3xeXBugbyVDxpFWrSxpFd-ybVsZRcF0.jpg?auto=webp&s=5d3654804fbebc2302dd1a1f29b2aa306207ce70', 'width': 1200}, 'variants': {}}]}
How/What are people doing to help creative writing processes with local LLMs? (Setup Advice)
6
Hi, So I’ve no problem setting up models, have quite a few running well! I’ve been writing a fiction book for a while and I’m a few chapters in. I’m wondering what is the best way to provide enough context i.e, read what I’ve written so far and suggest “what would character X be feeling here” or those kinds of questions? Is it using something like Llama index? Or what have people found works best for their workflows? If it makes any difference I write in latex so I’ve my chapters all nicely separated out. I think a main thing id like to do is generate summaries of events for myself to remember and prevent inaccuracies, generate character histories and events for them etc. But yes, would love details about what people have found that works for them! Thanks, Aoife
2023-11-15T10:54:35
https://www.reddit.com/r/LocalLLaMA/comments/17vr5jk/howwhat_are_people_doing_to_help_creative_writing/
Adventurous_Cat_1559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vr5jk
false
null
t3_17vr5jk
/r/LocalLLaMA/comments/17vr5jk/howwhat_are_people_doing_to_help_creative_writing/
false
false
self
6
null
What exactly is pulling 100-110W when running local LLM?
2
This is purely out of curiosity, but if anybody has some insights I'd love to hear it. I am running 70B Q4 models on my M1 Max Macbook Pro (10 CPU, 32 GPU, 64 GB RAM). The lid is closed because I have an external 4K monitor attached via USB-C, so the display won't draw any power. I am using both llama.cpp and LM Studio, and in both cases I run the LLMs with Metal acceleration. Now, when running the LLM, I notice that according to iStat Menus my macbook is drawing between 95 and 110W 😮 *(The fans get loud quickly, just like the good old intel days. But it seems to be able to sustain this)* But how is that possible? Where is that power draw coming from? I know the GPU alone is max 45W, and the CPU is something around ~30W max (I forgot the details). That would be a total of 75W, but in this case the CPU is not even used much. So where is the additional power draw coming from? I know there are lots of other components here: RAM (probably single digit power draw?), fans, memory controller, etc. But we are talking about a large chunk of power. Does anybody know? :)
2023-11-15T10:51:14
https://www.reddit.com/r/LocalLLaMA/comments/17vr3uu/what_exactly_is_pulling_100110w_when_running/
k_michael
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vr3uu
false
null
t3_17vr3uu
/r/LocalLLaMA/comments/17vr3uu/what_exactly_is_pulling_100110w_when_running/
false
false
self
2
null
Brand New Mistral 16k Context Size Models got released last night from NurtureAI!
7
In no particular order! Don't forget to use each of their specific prompts for the best generations! AWQ and GGUF versions are also available. [https://huggingface.co/NurtureAI/zephyr-7b-beta-16k](https://huggingface.co/NurtureAI/zephyr-7b-beta-16k) [https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k) [https://huggingface.co/NurtureAI/neural-chat-7b-v3-1-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-1-16k) [https://huggingface.co/NurtureAI/SynthIA-7B-v2.0-16k](https://huggingface.co/NurtureAI/SynthIA-7B-v2.0-16k) Have fun, LocalLLaMA fam <3! Let us know what you find! <3
2023-11-15T10:22:48
https://www.reddit.com/r/LocalLLaMA/comments/17vqpuy/brand_new_mistral_16k_context_size_models_got/
perlthoughts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vqpuy
false
null
t3_17vqpuy
/r/LocalLLaMA/comments/17vqpuy/brand_new_mistral_16k_context_size_models_got/
false
false
self
7
{'enabled': False, 'images': [{'id': 'uT9brwKBK1bSMscHxGFMv9rzfoxP2n-yLPmfmEoAeTA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?width=108&crop=smart&auto=webp&s=8243b4a23a32756eb86ce1c9488f50aee34f34d4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?width=216&crop=smart&auto=webp&s=f8048ccbb2dc41429945014874b15f9f849d5144', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?width=320&crop=smart&auto=webp&s=15b5665816e9fc7dac4b1b4682b6607111b0853e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?width=640&crop=smart&auto=webp&s=cad3b9c3357f218ce7976f0d17d46c00f0fb19bc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?width=960&crop=smart&auto=webp&s=41d3c5ddd231c3fcaf8195181bec4910acc4a406', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?width=1080&crop=smart&auto=webp&s=5784e2b829d52e61ee9f6fee41eabe60362d9bf2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RaCibwc8VxDnT3yv7Kmi3e4Oz_ajVscY_Z0X1-kRNA0.jpg?auto=webp&s=173ae86dc096cc47bb5f604d7bedd06e87ddc824', 'width': 1200}, 'variants': {}}]}
SQLCoder-34b beats GPT-4 at Text-to-SQL
43
Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries. SQLCoder-34B is a 34B parameter model that outperforms gpt-4 and gpt-4-turbo for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models. [https://huggingface.co/defog/sqlcoder-34b-alpha](https://huggingface.co/defog/sqlcoder-34b-alpha) SQLCoder-34B is fine-tuned on a base CodeLlama model.

## Results on novel datasets not seen in training

| model | perc_correct |
|---|---|
| defog-sqlcoder-34b | 84.0% |
| gpt4-turbo-2023-11-09 | 82.5% |
| gpt4-2023-11-09 | 82.5% |
| defog-sqlcoder2 | 77.5% |
| gpt4-2023-08-28 | 74.0% |
| defog-sqlcoder-7b | 71.0% |
| gpt-3.5-2023-10-04 | 66.0% |
| claude-2 | 64.5% |
| gpt-3.5-2023-08-28 | 61.0% |
| claude_instant_1 | 61.0% |
| text-davinci-003 | 52.5% |

Defog was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework. You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/). SQLCoder-34B has been tested on a 4xA10 GPU with float16 weights. You can also load an 8-bit and 4-bit quantized version of the model on consumer GPUs with 20GB or more of memory – like RTX 4090, RTX 3090, and Apple M2 Pro, M2 Max, or M2 Ultra chips with 20GB or more of memory.
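For anyone wanting to try the quantized route the card mentions, a hedged sketch of a 4-bit load with transformers + bitsandbytes; the prompt below is illustrative only - Defog publish an official prompt template on the model card that should be preferred:

```python
# Sketch: run sqlcoder-34b in 4-bit on a ~24 GB consumer GPU.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "defog/sqlcoder-34b-alpha"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_compute_dtype=torch.float16),
    device_map="auto",
)

# Illustrative prompt; use Defog's published template for real evaluation.
prompt = "### Task\nGenerate a SQL query to answer: How many users signed up in 2023?\n### SQL\n"
ids = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=128)[0], skip_special_tokens=True))
```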
2023-11-15T09:48:03
https://www.reddit.com/r/LocalLLaMA/comments/17vq8t2/sqlcoder34b_beats_gpt4_at_texttosql/
tail-recursion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vq8t2
false
null
t3_17vq8t2
/r/LocalLLaMA/comments/17vq8t2/sqlcoder34b_beats_gpt4_at_texttosql/
false
false
self
43
{'enabled': False, 'images': [{'id': 'XICybhI4tjOoNvT_KYBcxk2eOwBKA7W5UxzFb0kuG9Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?width=108&crop=smart&auto=webp&s=a5fd145b0ec8958a02ad5494590410f1a496cb50', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?width=216&crop=smart&auto=webp&s=1b20ee7c36298fa5d16b5f715e1e3c5c31c6f248', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?width=320&crop=smart&auto=webp&s=f9fe89d8a440fcbf3bc6697ac2145dfacb8198a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?width=640&crop=smart&auto=webp&s=3c503532bf7a9ceb81d3c607870adf014b44d87d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?width=960&crop=smart&auto=webp&s=417c4f1de07f6dda09eee3c53d3131458ba852a2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?width=1080&crop=smart&auto=webp&s=4d55e73eaace246da63fe83c8ec72e93cf88a2e1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t6E7Ijp2Fw2rnbC_7evWvBd-T7W1-Nzyi3kjdK4wvkU.jpg?auto=webp&s=aa50cc6ed6e5d3fb77b94e46f7801b0c9126d461', 'width': 1200}, 'variants': {}}]}
Is LMStudio.ai open-sourced?
1
[removed]
2023-11-15T09:36:32
https://www.reddit.com/r/LocalLLaMA/comments/17vq3cs/is_lmstudioai_opensourced/
homelab2946
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vq3cs
false
null
t3_17vq3cs
/r/LocalLLaMA/comments/17vq3cs/is_lmstudioai_opensourced/
false
false
self
1
null
Running LLM on my personal pc
2
So in the last few weeks I have been experimenting with LLMs on my personal laptop (as I'm rarely at home), but I'm gonna have my PC with me in a few days. When running models (MythoMax 13b, mostly Q6_K and Q5_K_M GGUF) I can definitely feel my laptop not liking it: slowdowns, crashes, service terminations and timeouts. Now, the situation is this: I have unexpectedly gotten some money which I want to invest in PC parts. My PC currently has 16GB of DDR5 RAM and a GTX 1070 with 8GB VRAM. The idea now is to buy a 96GB RAM kit (2x48) and Frankenstein the whole PC together with an additional Nvidia Quadro P2200 (5GB VRAM). Would the whole "machine" suffice to run models like MythoMax 13b, Deepseek Coder 33b and CodeLlama 34b (all GGUF)? Specs after: 112GB DDR5, 8GB VRAM and 5GB VRAM; CPU is a Ryzen 5 7500F. And the question I should have asked first: can the GTX 1070 and P2200 setup even work? Like, would text gen webui even detect both cards? Sorry if that's a dumb question.
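On the dual-GPU question: llama.cpp-based backends can generally split GGUF layers across mixed NVIDIA cards of different generations. A sketch with llama-cpp-python, where the 8:5 `tensor_split` ratio simply mirrors the two VRAM sizes and may need tuning:

```python
# Sketch: split a GGUF model across a GTX 1070 (8 GB) and a Quadro P2200 (5 GB).
# Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./mythomax-l2-13b.Q5_K_M.gguf",
    n_gpu_layers=-1,        # offload as many layers as will fit
    tensor_split=[8, 5],    # per-GPU proportions, roughly matching VRAM
    n_ctx=4096,
)
print(llm("Write one sentence about dragons.", max_tokens=48)["choices"][0]["text"])
```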
2023-11-15T09:17:26
https://www.reddit.com/r/LocalLLaMA/comments/17vpu0d/running_llm_on_my_personal_pc/
Wortkraecker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vpu0d
false
null
t3_17vpu0d
/r/LocalLLaMA/comments/17vpu0d/running_llm_on_my_personal_pc/
false
false
self
2
null
How to test the accuracy of LLama-2 on the MetaQA dataset?
1
I found that many papers tested the accuracy of the llama2 model on the MetaQA data set in experiments. But I have two questions about the experiments: * The golden answer in MetaQA is structured, while the output of llama2 is unstructured. How do these experiments calculate whether llama2's answer is correct? * Some papers use llama-2-70b-chat, but each generation takes several seconds, and the test data set of MetaQA 2-hop has 14,000+ lines. How can the experimental process be sped up?
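On the first question, a common (if lenient) convention is normalized containment: count a hit if any gold entity appears in the normalized model output. A small sketch of that check; for the second question, batched inference engines such as vLLM are the usual speedup:

```python
# Sketch: score free-form LLM answers against MetaQA's gold entity sets
# with a normalized substring match (a common, lenient convention).
import re

def normalize(s: str) -> str:
    return re.sub(r"[^a-z0-9 ]", " ", s.lower()).strip()

def is_hit(model_answer: str, gold_answers: list[str]) -> bool:
    pred = normalize(model_answer)
    return any(normalize(g) in pred for g in gold_answers)

# toy example
print(is_hit("The film was directed by James Cameron.", ["James Cameron"]))  # True
```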
2023-11-15T09:14:23
https://www.reddit.com/r/LocalLLaMA/comments/17vpsk2/how_to_test_the_accuracy_of_llama2_on_the_metaqa/
Kunmel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vpsk2
false
null
t3_17vpsk2
/r/LocalLLaMA/comments/17vpsk2/how_to_test_the_accuracy_of_llama2_on_the_metaqa/
false
false
self
1
null
Participating in Petals/Bloom
3
What specs does it need? Is your priority in it determined by the size of your specs? They say that we need to host a part of the model, but which part, and how much?
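As far as I can tell from the Petals docs, the swarm assigns each server a contiguous slice of transformer blocks sized to fit its VRAM, and a `--num_blocks` flag on the server CLI caps how many blocks you contribute. The client side looks roughly like this sketch (petals 2.x API; the model name is from their examples):

```python
# Sketch of the Petals client side; the swarm, not you, picks which blocks
# your server hosts, so clients just load the model name and generate.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tok("A curious fact about llamas:", return_tensors="pt")["input_ids"]
print(tok.decode(model.generate(inputs, max_new_tokens=30)[0]))
```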
2023-11-15T08:49:58
https://www.reddit.com/r/LocalLLaMA/comments/17vpgym/participating_in_petalsbloom/
xqoe
self.LocalLLaMA
2023-11-15T10:31:50
0
{}
17vpgym
false
null
t3_17vpgym
/r/LocalLLaMA/comments/17vpgym/participating_in_petalsbloom/
false
false
self
3
null
Your settings are (probably) hurting your model - Why sampler settings matter
572
Local LLMs are wonderful, and we all know that, but something that's always bothered me is that nobody in the scene seems to want to standardize or even investigate the flaws of the current sampling methods. I've found that a bad preset can make a model **significantly worse** or *golden* depending on the settings. It might not seem obvious, or it might seem like the default for whatever backend is already the 'best you can get', but let's fix this assumption. There is more to language model settings than just 'prompt engineering', and depending on your sampler settings, it can have a dramatic impact. For starters, there are no 'universally accepted' default settings; the defaults that exist will depend on the model backend you are using. There is also no standard for presets in general, so I'll be defining the sampler settings that are most relevant: \- Temperature A common factoid about Temperature that you'll often hear is that it makes the model 'more random'; it may appear that way, but it is actually doing something a little more nuanced. [A graph I made to demonstrate how temperature operates](https://preview.redd.it/v5hqj5mjzf0c1.png?width=1000&format=png&auto=webp&s=8a252d4233aa943089a733c7e50471d991aeff4c) What Temperature actually controls is the scaling of the scores. So 0.5 temperature is not 'twice as confident'. As you can see, 0.75 temp is actually much closer to that interpretation in this context. Every time a token generates, it must assign thousands of scores to all tokens that exist in the vocabulary, and the temperature simply helps to either reduce (lowered temp) or increase (higher temp) the scoring of the extremely low probability tokens. In addition to this, *when* Temperature is applied matters. I'll get into that later. \- Top P This is the most popular sampling method, which OpenAI uses for their API. However, I personally believe that it is flawed in some aspects. [Unsure of where this graph came from, but it's accurate.](https://preview.redd.it/z987a78fjg0c1.png?width=879&format=png&auto=webp&s=cc9e3dfaacaa78c98c7909d1cce6f88959950f76) With Top P, you are keeping as many tokens as is necessary to reach a cumulative sum. But sometimes, when the model's confidence is high for only a few options (but is divided amongst those choices), this leads to a bunch of low probability options being considered. I hypothesize this is a smaller part of why models like GPT4, as intelligent as they are, are still prone to hallucination; they are considering choices to meet an arbitrary sum. Top K is doing something even more linear, by only considering as many tokens as are in the top specified value, so Top K 5 = only the top 5 tokens are considered *always*. I'd suggest just leaving it off entirely if you're not doing debugging. So, I created my own sampler which fixes both design problems you see with these popular, widely standardized sampling methods: **Min P**. https://preview.redd.it/fl1jtv4qmg0c1.png?width=1002&format=png&auto=webp&s=1fbee0f73cd8c4160a569d88b5f14e2c3c3e9ef2 What Min P is doing is simple: we are setting a minimum value that a token must reach to be considered at all. The value changes depending on how confident the highest probability token is. So if your Min P is set to 0.1, that means it will only allow for tokens that are at least 1/10th as probable as the best possible option. If it's set to 0.05, then it will allow tokens at least 1/20th as probable as the top token, and so on... "Does it actually improve the model when compared to Top P?" Yes. And especially at higher temperatures. [Both of these hallucinate to some degree, of course, but there's a clear winner in terms of 'not going crazy'...](https://preview.redd.it/9ovdjo0cng0c1.png?width=1771&format=png&auto=webp&s=331a2da111aca6ce972488ded2d22d7295ec19e7) No other samplers were used. I ensured that Temperature came last in the sampler order as well (so that the measurements were consistent for both). You might think, "but doesn't this limit the creativity then, since we are setting a minimum that blocks out more uncertain choices?" Nope. In fact, it helps allow for more diverse choices in a way that Top P typically won't allow for. Let's say you have a Top P of 0.80, and your top two tokens are: 1. 81% 2. 19% Top P would completely ignore the 2nd token, despite it being pretty reasonable. This leads to higher determinism in responses unnecessarily. This means it's possible for Top P to either consider too many tokens or too few tokens depending on the context; Min P emphasizes a balance, by setting a minimum based on how confident the top choice is. So, in contexts where the top token is 6%, a Min P of 0.1 will only consider tokens that are at least 0.6% probable. But if the top token is 95%, it will only consider tokens at least 9.5% probable. 0.05 - 0.1 seems to be a reasonable range to tinker with, but you can go higher without it being too deterministic, too, with the plus of not including tail end 'nonsense' probabilities. \- Repetition Penalty This penalty is more of a band-aid fix than a good solution to preventing repetition; however, Mistral 7b models especially struggle without it. I call it a band-aid fix because it will penalize repeated tokens even if they make sense (things like formatting asterisks and numbers are hit hard by this), and it introduces subtle biases into how tokens are chosen as a result. I recommend that if you use this, you do not set it higher than 1.20 and treat that as the effective 'maximum'. Here is a preset that I made for general purpose tasks. https://preview.redd.it/dplbhjp6tg0c1.png?width=1024&format=png&auto=webp&s=46cb60d46382ad3736f170998a855743ee98d197 I hope this post helps you figure out things like "why is it constantly repeating", "why is it going on unhinged rants unrelated to my prompt", "why does this preset work better for me?", or "what do these settings even do?". There's a lot more I could write about, and I'm also going to write a proper research paper on this. I mainly wanted to share this because I thought it was severely overlooked. I've been talking to someone who does model finetuning who asked about potentially standardizing settings + model prompt formats in the future and getting in talks with other devs to make that happen.
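For anyone who wants the mechanics concretely, here is a minimal reference sketch of the Min P filter described above (plain NumPy; the vocabulary size and threshold are illustrative):

```python
# Minimal sketch of the Min P filter: keep only tokens whose probability is
# at least min_p times the probability of the top token, then renormalize.
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.1) -> np.ndarray:
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax
    keep = probs >= min_p * probs.max()       # threshold scales with top token
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()          # renormalize the survivors

rng = np.random.default_rng(0)
logits = rng.normal(size=32000)               # stand-in for a model's logits
p = min_p_filter(logits, min_p=0.1)
next_token = rng.choice(len(p), p=p)
```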
2023-11-15T07:48:15
https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vonjo
false
null
t3_17vonjo
/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/
false
false
https://a.thumbs.redditm…dJVyb-to6rA0.jpg
572
{'enabled': False, 'images': [{'id': '5I8s-R2JeMUqzEg2dVIKlp8GO7BFWvaRWYS9J0uNNn4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?width=108&crop=smart&auto=webp&s=977683e90cec855721b360607a9cf1e6c7daddfd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?width=216&crop=smart&auto=webp&s=c162dac9daf91f6232926848e54bfdf0507deb13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?width=320&crop=smart&auto=webp&s=67bbf0584428e2448c6e7b125b836471ec302b27', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?width=640&crop=smart&auto=webp&s=64d43bc27bb58e13d89add9fdda03df20b021754', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?width=960&crop=smart&auto=webp&s=623ea4b3d1bf2acf802902b09add506ee73d5567', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?width=1080&crop=smart&auto=webp&s=7c809b97f2253c917962acbb38a5ed414a118083', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L8c41s0anbAG4J0YDSwFkfHV4qqy-kLXJ2WDQ2BhskY.jpg?auto=webp&s=e0f9ea69c08fd6a04477f14ec470a14afbfb4d8b', 'width': 1200}, 'variants': {}}]}
Contamination of famous Large Language Model (LLM) benchmarks???
2
I've heard doubts about the leakage of benchmark data, not only from the well-known benchmarks on Hugging Face's open LLM leaderboard but also others, and have concluded that quantitative scores cannot be trusted. This has been indirectly mentioned in some papers as well. In practice, I found no significant difference in qualitative evaluation among models with large discrepancies in scores, and in some cases, models with overwhelmingly high scores had poorer qualitative capabilities.

**Question 1:** If famous LLM datasets have been exposed, what is the most similar dataset available?
**Question 2:** Are these doubts only mine, or are there related Reddit discussions or papers on this?
**Question 3:** If data leakage or contamination is indeed a fact, does anyone know of a similar, more compact dataset? (I plan to train on this dataset to verify the issue.)
**Question 4:** Are there any papers or related materials that have proven the contamination of current famous benchmark datasets?
**Question 5:** What are some new or relatively reliable benchmarks that can replace the current ones?
2023-11-15T05:33:49
https://www.reddit.com/r/LocalLLaMA/comments/17vmorr/contamination_of_famous_large_language_model_llm/
Holiday_Fly_590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vmorr
false
null
t3_17vmorr
/r/LocalLLaMA/comments/17vmorr/contamination_of_famous_large_language_model_llm/
false
false
self
2
null
Is Upgrading from NVIDIA H100 to H200 Worth It?
20
I've been closely following the recent developments from NVIDIA, and their latest announcement has really caught my attention: the H200 with the new GH200 chip. This beast is said to pack a staggering 141 GB of RAM and offers a blazing 4.8 TB/s speed. The premiere of the H200 is slated for the second quarter of 2024, and I can't help but ponder its potential impact. The most exciting aspect for me, and probably for many of you, is its capability to run LLAMA2 70B at twice the speed of the current H100. That's a significant leap in performance! So here's the big question for the community: are any of you planning to upgrade to the H200, or are you planning to stick with the H100 for a while longer? I'm currently using the 8xH100 rig and it's been a workhorse, but the prospect of doubling my LLAMA2 70B performance is very tempting. However, I'm also weighing the cost versus the benefits. The H200 seems like a substantial investment, and I'm wondering if the performance gain justifies the upgrade, especially considering the still-capable H100. I'd love to hear your thoughts, experiences, and plans.
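One way to sanity-check the "twice the speed" claim: single-stream decoding is roughly memory-bandwidth-bound, so theoretical tokens/s is about bandwidth divided by bytes streamed per token. A back-of-envelope sketch (ignores KV cache, batching, and kernel overlap; 3.35 TB/s is the H100 SXM figure):

```python
# Back-of-envelope: per decoded token the GPU must stream all weights,
# so t/s ~= memory bandwidth / model bytes. Purely theoretical ceiling.
def tokens_per_sec(params_b: float, bytes_per_param: float, bw_tb_s: float) -> float:
    model_bytes = params_b * 1e9 * bytes_per_param
    return bw_tb_s * 1e12 / model_bytes

print(tokens_per_sec(70, 2.0, 3.35))  # 70B fp16 on H100: ~24 t/s theoretical
print(tokens_per_sec(70, 2.0, 4.8))   # 70B fp16 on H200: ~34 t/s theoretical
```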
2023-11-15T04:27:04
https://www.reddit.com/r/LocalLLaMA/comments/17vll26/is_upgrading_from_nvidia_h100_to_h200_worth_it/
Wrong_User_Logged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vll26
false
null
t3_17vll26
/r/LocalLLaMA/comments/17vll26/is_upgrading_from_nvidia_h100_to_h200_worth_it/
false
false
self
20
null
Guidance on QA dataset creation
1
I am looking to create a proprietary dataset to fine-tune an LLM. Based on the context, the LLM should extract details such as a person's name, DOB, height, weight, academics, etc. In certain documents, some of these details might not be present. Question: Does mentioning a field as null, if it's not present, help in fine-tuning the model? Or is it a trivial detail that an LLM can identify on its own? Example: Considering the above image, Method 1: Dataset = {"name":"TAN MEI MEI", "Dob" : null} Method 2: Dataset = {"name":"TAN MEI MEI"} Which of these formats can help in better/easier fine-tuning of the model? Also, any other tips/information on creating a dataset is hugely appreciated. Thanks
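For what it's worth, a fixed output schema with explicit nulls (Method 1) keeps the target format identical across examples, which is generally easier to learn than a variable key set. A small sketch of building such targets for a JSONL dataset (the field names are placeholders):

```python
# Sketch: emit a fixed schema per example, with explicit nulls for absent
# fields, so the target format never varies across training examples.
import json

SCHEMA = ["name", "dob", "height", "weight", "academics"]

def to_target(extracted: dict) -> str:
    return json.dumps({k: extracted.get(k) for k in SCHEMA})  # missing -> null

print(to_target({"name": "TAN MEI MEI"}))
# {"name": "TAN MEI MEI", "dob": null, "height": null, "weight": null, "academics": null}
```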
2023-11-15T04:18:50
https://i.redd.it/4aht9ig0uf0c1.jpg
Ok_Ganache_5040
i.redd.it
1970-01-01T00:00:00
0
{}
17vlfzk
false
null
t3_17vlfzk
/r/LocalLLaMA/comments/17vlfzk/guidance_on_qa_dataset_creation/
false
false
https://a.thumbs.redditm…G2Jtc5TMxph8.jpg
1
{'enabled': True, 'images': [{'id': 'wuI5FScvXMWPs7LY9Y5FQ-bzltSDqcRrPBBtQVUTin8', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/4aht9ig0uf0c1.jpg?width=108&crop=smart&auto=webp&s=2bcba1cc053db136c9ca187d22653cb45cf984a9', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/4aht9ig0uf0c1.jpg?width=216&crop=smart&auto=webp&s=16337e6bac1a6df4d655204eeb3c9ca949cebdff', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/4aht9ig0uf0c1.jpg?width=320&crop=smart&auto=webp&s=02175352f377a1e3dc5b41a02ea8ef5d38bf896d', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/4aht9ig0uf0c1.jpg?width=640&crop=smart&auto=webp&s=d9b1356e027ddabb9813637b01c43e0d22e02851', 'width': 640}], 'source': {'height': 437, 'url': 'https://preview.redd.it/4aht9ig0uf0c1.jpg?auto=webp&s=aaaec48fd55fbc9b3afb246cd1ec25d136e4dc17', 'width': 873}, 'variants': {}}]}
Those of you with high system ram machines. How fast is your ram? How important is MT/Latency speed?
1
[removed]
2023-11-15T04:12:20
https://www.reddit.com/r/LocalLLaMA/comments/17vlc2n/those_of_you_with_high_system_ram_machines_how/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vlc2n
false
null
t3_17vlc2n
/r/LocalLLaMA/comments/17vlc2n/those_of_you_with_high_system_ram_machines_how/
false
false
self
1
null
Web search for LLMs via the you.com API
3
2023-11-15T03:54:51
https://about.you.com/introducing-the-you-api-web-scale-search-for-llms/
WaterdanceAC
about.you.com
1970-01-01T00:00:00
0
{}
17vl0xt
false
null
t3_17vl0xt
/r/LocalLLaMA/comments/17vl0xt/web_search_for_llms_via_the_youcom_api/
false
false
default
3
null
Hallucination rate and Accuracy leader board
38
More models to be added soon. Llama-2 does well. LLMs were asked to summarize text. Summarization was analyzed for accuracy and hallucinations. Below are the results. [https://twitter.com/vectara/status/1721943596692070486](https://twitter.com/vectara/status/1721943596692070486) https://preview.redd.it/behjmdq0pf0c1.png?width=1200&format=png&auto=webp&s=04cdeeff21114864ac02d2d25961ef401f4229e0
2023-11-15T03:52:22
https://www.reddit.com/r/LocalLLaMA/comments/17vkze4/hallucination_rate_and_accuracy_leader_board/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vkze4
false
null
t3_17vkze4
/r/LocalLLaMA/comments/17vkze4/hallucination_rate_and_accuracy_leader_board/
false
false
https://b.thumbs.redditm…LsGpw3lBYRGY.jpg
38
{'enabled': False, 'images': [{'id': 'LTVh18yygAV8AP8hyqnB60hpOxDbP84jMZOIztThGX8', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?width=108&crop=smart&auto=webp&s=b04f344de6b77ee203fc7d9c4b1e7fa1555bc33e', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?width=216&crop=smart&auto=webp&s=36ad996e22d5a110078f81e1d44385b2e0ff160f', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?width=320&crop=smart&auto=webp&s=09cc59690fc75db15b006e3ce3a54d25a18409c9', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?width=640&crop=smart&auto=webp&s=9d8bfc7222a4d59c2b869a14c9fd526eb2baece3', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?width=960&crop=smart&auto=webp&s=a1c38da2c40ac51f9bba8620d38c3dc02c1f8337', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?width=1080&crop=smart&auto=webp&s=f8c127f67b8de3106ce52ac78ff88e7c6f735746', 'width': 1080}], 'source': {'height': 1097, 'url': 'https://external-preview.redd.it/TNHeboZb7--Qt1yJpRVITOIS4SyUtsHIIuNY5dNoOgk.jpg?auto=webp&s=3827a4198ce6a2a74660232d9fe9948164876bae', 'width': 1920}, 'variants': {}}]}
What are some of the things that would be cool to explore using LLMs that you run locally on GPU?
1
The question is: OK, all these models are cool, but can they be used in an automated fashion to do cool things, or is everyone just doing the basic ChatGPT-style thing? Could they read all the notes and documents on a computer and build a huge index, then use that as a kind of knowledge graph base - something like a local search, for example "find all PDFs where this topic is discussed" - and use that as the answer, extending to external data and going beyond ChatGPT?
2023-11-15T01:37:21
https://www.reddit.com/r/LocalLLaMA/comments/17viawx/what_are_some_of_the_things_that_would_be_cool_to/
openroom_xyz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17viawx
false
null
t3_17viawx
/r/LocalLLaMA/comments/17viawx/what_are_some_of_the_things_that_would_be_cool_to/
false
false
self
1
null
Training Llama 2?
1
I need help; every video I find on YouTube is out of date or I just can't get it to work. I am working on a data set for Shadowrun lore. Are there good videos or up-to-date how-to wikis you people like? Or should I just hold off on Llama 2 and train a Llama 1 model? I tried the built-in trainer but I am not having any luck.
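For reference, the currently common recipe is QLoRA via peft + trl. A hedged minimal sketch (library APIs as of late 2023; the dataset path, text field, and hyperparameters are placeholders):

```python
# Minimal QLoRA sketch: 4-bit base model + LoRA adapters trained with trl's
# SFTTrainer. Each dataset row is assumed to look like {"text": "<prompt + answer>"}.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

base = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_compute_dtype=torch.bfloat16),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("json", data_files="shadowrun_lore.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                           task_type="CAUSAL_LM"),
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=3,
                           learning_rate=2e-4, logging_steps=10),
)
trainer.train()
```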
2023-11-15T00:14:48
https://www.reddit.com/r/LocalLLaMA/comments/17vgkno/training_llama_2/
psdwizzard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vgkno
false
null
t3_17vgkno
/r/LocalLLaMA/comments/17vgkno/training_llama_2/
false
false
self
1
null
Using LMStudio with ggml_llava-v1.5-7b
2
Basically I am trying to pass an image to the model and expect it to work. Unfortunately I haven't found how to pass an image using LMStudio. Any ideas? This is the model I am using: [https://huggingface.co/mys/ggml_llava-v1.5-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b) I was able to make this model work with llama.cpp on the terminal. Thank you.
2023-11-14T23:45:58
https://www.reddit.com/r/LocalLLaMA/comments/17vfy1c/using_lmstudio_with_ggml_llavav157b/
ilt1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vfy1c
false
null
t3_17vfy1c
/r/LocalLLaMA/comments/17vfy1c/using_lmstudio_with_ggml_llavav157b/
false
false
self
2
{'enabled': False, 'images': [{'id': 'pWxt_1VSXKiPvRwMPArhF8uexTPS3SWBzcbBPfayWF4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?width=108&crop=smart&auto=webp&s=e4bbd5e19cfdd8b5e87d6c49f099e11aaf01c386', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?width=216&crop=smart&auto=webp&s=db8c0e29f6f946a7830b2eba615cf52adde87c3d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?width=320&crop=smart&auto=webp&s=afd49e4f0f18ea50700eb747fa5d24933d28a560', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?width=640&crop=smart&auto=webp&s=05a178cf7eb8f7462172ce905163e874a895b46a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?width=960&crop=smart&auto=webp&s=1400444307c64a95442f288df7e43ab93adc6d1c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?width=1080&crop=smart&auto=webp&s=449534ce6bba0c9789773977b6e8cfee6cfa5a7d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/g3hjFYbCGoFd1wVNf8QRy5ORlbH2pyPEWeL8FifAw-E.jpg?auto=webp&s=597d79010a56626a75c168f00487cd9f9f35b76d', 'width': 1200}, 'variants': {}}]}
Guidance is back 🥳
1
2023-11-14T22:24:58
https://github.com/guidance-ai/guidance
a_fish1
github.com
1970-01-01T00:00:00
0
{}
17ve3sd
false
null
t3_17ve3sd
/r/LocalLLaMA/comments/17ve3sd/guidance_is_back/
false
false
https://b.thumbs.redditm…YHn4GvsU84DM.jpg
1
{'enabled': False, 'images': [{'id': 'hXH2T2631Dhrx_fZEhMo966YA9jDyZphpWqmttSgA4Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?width=108&crop=smart&auto=webp&s=38b8db7e9314142300d9f240d0894a9d7afdfb9a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?width=216&crop=smart&auto=webp&s=27fd2f67ecabd6847edab4a50651028ddb20bfd8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?width=320&crop=smart&auto=webp&s=1d92f0ef20b1315f448d9d9cbdf2584a190acbf6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?width=640&crop=smart&auto=webp&s=45d4e76a686cc517e8a2732f3932aa78ba676fbc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?width=960&crop=smart&auto=webp&s=ea079843393e959e5d96c941f0128bd063355298', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?width=1080&crop=smart&auto=webp&s=521b7e5eb9aba752963d913f8d210e95b86192bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dsNlxhuKzC7mbTKvrnwTF2F-EBQoxYBzhoA6elZzr-c.jpg?auto=webp&s=fb289a278a9b2faeab447c010ad0b173035b2b55', 'width': 1200}, 'variants': {}}]}
I'd like to build a remote Game Master bot for my friends for an async LLM concept I have. What open source tools can I build a custom frontend around and host on a remote server?
7
I've been tinkering a lot with Ooba webui and SillyTavern, and now have ideas for a hobby project I'd like to build. I've used ST to serve remotely, but the idea of rebuilding even a portion of ST is more than I want to attempt right now. In general, though, I'm wondering what tools or libraries I should consider when building my own LLM app from scratch with the intent of serving as a host for remote connections. And, in terms of front end, I want to intercept and do more with the chat content to add some gamey elements to it. I'm a frontend dev in real life and feel good about the frontend part, and I'm interested in exploring backend libraries like Flask/Django.
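One low-effort pattern: keep the model behind an OpenAI-compatible local backend (e.g. text-generation-webui's openai extension) and put a thin Flask proxy in front that intercepts each turn for game logic. A sketch, with the backend URL and port as assumptions:

```python
# Sketch: a thin Flask endpoint that forwards chat turns to a local
# OpenAI-compatible backend and post-processes the reply with game logic.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
BACKEND = "http://127.0.0.1:5000/v1/chat/completions"  # placeholder URL

@app.post("/turn")
def turn():
    msgs = request.json["messages"]
    r = requests.post(BACKEND, json={"messages": msgs, "max_tokens": 300}, timeout=120)
    reply = r.json()["choices"][0]["message"]["content"]
    # hook: roll dice, update inventory, etc. before returning to the client
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8080)
```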
2023-11-14T22:07:01
https://www.reddit.com/r/LocalLLaMA/comments/17vdojg/id_like_to_build_a_remote_game_master_bot_for_my/
SlateHardjaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vdojg
false
null
t3_17vdojg
/r/LocalLLaMA/comments/17vdojg/id_like_to_build_a_remote_game_master_bot_for_my/
false
false
self
7
null
Got Llama.cpp WebUI to work on Colab
22
I got tired of slow CPU inference as well as Text-Generation-WebUI, which is getting buggier and buggier. Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab. It's pretty fast! I get 28t/s. https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb You cannot keep a Colab instance running for a long time, and it has no persistent storage on purpose (I guess for their business reasons). You have to set up and download everything from scratch every time you run. Colab is more for demo/experimentation, not meant to run a server for production.
2023-11-14T21:46:36
https://www.reddit.com/r/LocalLLaMA/comments/17vd6pk/got_llamacpp_webui_to_work_on_colab/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vd6pk
false
null
t3_17vd6pk
/r/LocalLLaMA/comments/17vd6pk/got_llamacpp_webui_to_work_on_colab/
false
false
self
22
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
Something wrong with older AMD high VRAM cards?
2
I've read a lot of hardware discussions here, but never a mention of cards like [AMD 102D1631200 Radeon Instinct MI60 32GB HBM2 ](https://www.ebay.com/itm/126120627827?epid=23058194614&hash=item1d5d600e73:g:Ov8AAOSw9UFimOyF&amdata=enc%3AAQAIAAAA4PKu7gj9w8icCb82LBC4Of%2BokWXTlLCdHfN7FGfhZ1VlHaVMn%2FABeW%2FmbEmZq%2BIxTCLlZZvbtfSJdNpjPH9u0yIphmgHc00uPWjutCpcCD3iXXpP7TxCN86AR6%2BGuHlI1P1fG3ZNk8e%2Fg3GrpI0otiqExcLy7MY1J594jaHZ45xH04lx1Ftb%2BjNEd6PRYjtErKC9xoj7uSD71%2B8YkZXHMTAoKSxRmPdTnKVy6FZJx6jTV7hnbS5f1h6IjhTGHXf2f0oRwgx%2FvcWrNfE9kmVhHJqlrTP0plY%2B5UWw4ePvi2Y9%7Ctkp%3ABk9SR5ScoPv5Yg) . Their VRAM and bandwidth are certainly fine, the price more than fine (used only), is there a devil in the details?
2023-11-14T21:30:06
https://www.reddit.com/r/LocalLLaMA/comments/17vcsf9/something_wrong_with_older_amd_high_vram_cards/
GermanK20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vcsf9
false
null
t3_17vcsf9
/r/LocalLLaMA/comments/17vcsf9/something_wrong_with_older_amd_high_vram_cards/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Iqqu2a2lsAXeXhU1yDr9Dzkupz2wVbqyFdgBnccpgNo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sgNmBKqtkvesMIeVbGgxs5F_tMnDqLNFXk3y-C3z5aM.jpg?width=108&crop=smart&auto=webp&s=0f12bb2feffb71a156d3b857e1ec177fc7edea1f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/sgNmBKqtkvesMIeVbGgxs5F_tMnDqLNFXk3y-C3z5aM.jpg?width=216&crop=smart&auto=webp&s=1c25f160d2c8d9ebf95f94e4587cf9f3522262a3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/sgNmBKqtkvesMIeVbGgxs5F_tMnDqLNFXk3y-C3z5aM.jpg?width=320&crop=smart&auto=webp&s=5e8b5466339cd008b56ca1edcd3ce0e15c390be3', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/sgNmBKqtkvesMIeVbGgxs5F_tMnDqLNFXk3y-C3z5aM.jpg?auto=webp&s=09e53879c320296f95aa7e13f5c6998d60ef0b34', 'width': 400}, 'variants': {}}]}
🐺🐦‍⬛ LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4
342
I'm still hard at work on my in-depth 70B model evaluations, but with the recent releases of the first Yi finetunes, I can't hold back anymore and need to post this now... Curious about these new Yi-based 34B models, I tested and compared them to the best 70Bs. And to make such a comparison even more exciting (and possibly unfair?), I'm also throwing Goliath 120B and ~~Open~~ClosedAI's GPT models into the ring, too. ## Models tested: - 2x 34B Yi: Dolphin 2.2 Yi 34B, Nous Capybara 34B - 12x 70B: Airoboros, Dolphin, Euryale, lzlv, Samantha, StellarBright, SynthIA, etc. - 1x 120B: Goliath 120B - 3x GPT: GPT-4, GPT-3.5 Turbo, GPT-3.5 Turbo Instruct ## Testing methodology Those of you who know my testing methodology already will notice that this is just the first of the three test series I'm usually doing. I'm still working on the others (Amy+MGHC chat/roleplay tests), but don't want to delay this post any longer. So consider this first series of tests mainly about instruction understanding and following, knowledge acquisition and reproduction, and multilingual capability. It's a good test because few models have been able to master it thus far and it's not just a purely theoretical or abstract test but represents a real professional use case while the tested capabilities are also really relevant for chat and roleplay. - **1st test series:** 4 German data protection trainings - I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well. - The test data and questions as well as all instructions are in German while the character card is in English. This tests translation capabilities and cross-language understanding. - Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This tests instruction understanding and following capabilities. - After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions. - If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct. - I sort models according to how many correct answers they give, and in case of a tie, I have them go through all four tests again and answer blind, without providing the curriculum information beforehand. Best models at the top, symbols (✅➕➖❌) denote particularly good or bad aspects. - All tests are separate units, context is cleared in between, there's no memory/state kept between sessions. - [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.5 frontend (not the latest as I don't want to upgrade mid-test) - **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons) - Official prompt format as noted ### 1st test series: 4 German data protection trainings - 1. **GPT-4** API: - ✅ Gave correct answers to all **18/18** multiple choice questions! (Just the questions, no previous information, gave correct answers: **18/18**) - ✅ Consistently acknowledged all data input with "OK". 
- ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 1. **[goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF)** **Q2_K** with Vicuna format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **18/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 1. **[Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF)** Q4_0 with Vicuna format and **16K** max context: - ❗ [Yi GGUF BOS token workaround](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/discussions/2) applied! - ❗ There's also an [EOS token issue](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/discussions/2) but even despite that, it worked perfectly, and SillyTavern catches and removes the erroneous EOS token! - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **18/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 2. **[lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF)** Q4_0 with Vicuna format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **17/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 3. **[chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF)** Q4_0 with Alpaca format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **16/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 3. **[SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF)** Q4_0 with SynthIA format: - ❗ Wrong GGUF metadata, n_ctx_train=2048 should be 4096 (I confirmed with the author that it's actually trained on 4K instead of 2K tokens)! - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **16/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 4. **[dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF)** Q4_0 with ChatML format and **16K** max context: - ❗ [Yi GGUF BOS token workaround](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/discussions/2) applied! - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **15/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter consistently. - 5. **[StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF)** Q4_0 with Vicuna format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **14/18** - ✅ Consistently acknowledged all data input with "OK". 
- ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 6. **[Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF?not-for-all-audiences=true)** Q4_0 with Alpaca format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **14/18** - ✅ Consistently acknowledged all data input with "OK". - ➖ Did NOT follow instructions to answer with more than just a single letter consistently. - 6. **[Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF)** Q4_0 with Alpaca format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **14/18** - ✅ Consistently acknowledged all data input with "OK". - ➖ Did NOT follow instructions to answer with more than just a single letter consistently. - 7. **[sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1)** exl2-4.85bpw with Vicuna format: - N. B.: There's only the exl2-4.85bpw format available at the time of writing, so I'm testing that here as an exception. - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **13/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 8. **[GodziLLa2-70B-GGUF](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF)** Q4_0 with Alpaca format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **12/18** - ✅ Consistently acknowledged all data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 9. **[Samantha-1.11-70B-GGUF](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF)** Q4_0 with Vicuna format: - ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **10/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ➖ Did NOT follow instructions to answer with just a single letter consistently. - ❌ Sometimes wrote as or for "Theodore" - 10. **[Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF)** **Q4_K_M** with Llama 2 Chat format: - N. B.: [Q4_0 is broken](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF/discussions/1) so I'm testing Q4_K_M here as an exception. - ✅ Gave correct answers to only **17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **16/18** - ✅ Consistently acknowledged all data input with "OK". - ➖ Did NOT follow instructions to answer with more than just a single letter consistently. - 11. **GPT-3.5 Turbo Instruct** API: - ❌ Gave correct answers to only **17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **11/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ❌ Schizophrenic: Sometimes claimed it couldn't answer the question, then talked as "user" and asked itself again for an answer, then answered as "assistant". Other times would talk and answer as "user". - ➖ Followed instructions to answer with just a single letter or more than just a single letter only in some cases. - 12. 
**[dolphin-2.2-70B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF)** Q4_0 with ChatML format: - ✅ Gave correct answers to only **16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **14/18** - ➕ Often, but not always, acknowledged data input with "OK". - ✅ Followed instructions to answer with just a single letter or more than just a single letter. - 13. **GPT-3.5 Turbo** API: - ❌ Gave correct answers to only **15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **14/18** - ❌ Did NOT follow instructions to acknowledge data input with "OK". - ❌ Responded to one question with: "As an AI assistant, I can't provide legal advice or make official statements." - ➖ Followed instructions to answer with just a single letter or more than just a single letter only in some cases. - 14. **[SauerkrautLM-70B-v1-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-70B-v1-GGUF)** Q4_0 with Llama 2 Chat format: - ✅ Gave correct answers to only **9/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **15/18** - ❌ Acknowledged questions as if they were information inputs, replying with just OK; didn't answer unless prompted, and even then would often fail to answer and just say OK again. #### Observations: - It's happening! The first local models achieving GPT-4's perfect score, answering all questions correctly, no matter if they were given the relevant information first or not! - 2-bit Goliath 120B beats 4-bit 70Bs easily in my tests. In fact, the 2-bit Goliath was the best local model I ever used! But even at 2-bit, the GGUF was too slow for regular usage, unfortunately. - Amazingly, Nous Capybara 34B did it: A 34B model beating all 70Bs and achieving the same perfect scores as GPT-4 and Goliath 120B in this series of tests! - Not just that, it brings mind-blowing 200K max context to the table! Although KoboldCpp only supports max 65K currently, and even that was too much for my 48 GB VRAM at 4-bit quantization so I tested at "only" 16K (still four times that of the Llama 2 models), same as Dolphin's native context size. - And Dolphin 2.2 Yi 34B also beat all the 70Bs (including Dolphin 2.2 70B) except for the top three. That's the magic of Yi. - But why did SauerkrautLM 70B, a German model, fail so miserably on the German data protection trainings tests? It applied the instruction to acknowledge data input with OK to the questions, too, and even when explicitly instructed to answer, it wouldn't always comply. That's why the blind run (without giving instructions and information first) has a higher score than the normal test. Still quite surprising and disappointing, ironic even, that a model specifically made for the German language has such trouble understanding and following German instructions properly, while the other models have no such issues. #### Conclusion: What a time to be alive - and part of the local and open LLM community! We're seeing such progress right now with the release of the new Yi models and at the same time crazy Frankenstein experiments with Llama 2. **[Goliath 120B](https://huggingface.co/alpindale/goliath-120b)** is notable for the sheer quality, not just in these tests, but also in further usage - no other model ever felt like local GPT-4 to me before. 
But even then, **[Nous Capybara 34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)** might be even more impressive and more widely useful, as it gives us the best 34B I've ever seen combined with the biggest context I've ever seen. Now back to the second and third parts of this ongoing LLM Comparison/Test... -------------------------------------------------------------------------------- Here's a list of my previous model tests and comparisons or other related posts: - [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) - [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter-GGUF - [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/) - [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/) - [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/) - [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b - [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca - [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/) - [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1 - [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b - [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B - [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K - [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 - [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) -------------------------------------------------------------------------------- **Disclaimer:** Some kind soul recently asked me if they could tip me for my LLM reviews and advice, so I set up [a Ko-fi page](https://ko-fi.com/wolframravenwolf). While this may affect the priority/order of my tests, it will not change the results, I am incorruptible. 
Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
2023-11-14T21:28:43
https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vcr9d
false
null
t3_17vcr9d
/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/
false
false
self
342
{'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=108&crop=smart&auto=webp&s=f076a50b0d594dc8ba3b2ee703d67664decf1cba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=216&crop=smart&auto=webp&s=dbc51e386e2d24255edce0cbd6a139d2b37dc0a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=320&crop=smart&auto=webp&s=13107e47f85ca5d663508f0d9c3bca3648a98f75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=640&crop=smart&auto=webp&s=f340c6c7589a711ca86aba7661baee1db6acf927', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=960&crop=smart&auto=webp&s=76d5b3a13d8ba4378270e9ae41aa3081e25b37e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=1080&crop=smart&auto=webp&s=d17a08361a95b03dd8a9a733ec765497cf2bf0d1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?auto=webp&s=577b115ae7cd70077bd0dc15f7fe27e71ff19e2b', 'width': 1280}, 'variants': {}}]}
Open source LLMs generally used for publishing results
1
What are some of the major open-source LLMs that researchers use to test their strategies on (decoding, quantization, etc.)? I have so far used Llama-2 and MPT; what would be some other good choices? Vicuna is a model based on Llama itself, so that won't be a good choice. Any suggestions?
2023-11-14T20:51:16
https://www.reddit.com/r/LocalLLaMA/comments/17vbvna/open_source_llms_generally_used_for_publishing/
charizard_me
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vbvna
false
null
t3_17vbvna
/r/LocalLLaMA/comments/17vbvna/open_source_llms_generally_used_for_publishing/
false
false
self
1
null
Recommendations for C# .NET-Focused Coding Assistance Models
3
I've been using CodeLlama for my C# .NET projects, but it hasn't met my expectations. Does anyone have experience with coding assistance models that excel specifically in C# .NET environments? I'm looking for a tool that offers advanced features like nuanced error detection and efficient refactoring suggestions, but without a focus on Python. Any advice or experiences with such tools would be greatly appreciated!
2023-11-14T20:44:22
https://www.reddit.com/r/LocalLLaMA/comments/17vbq2k/recommendations_for_c_netfocused_coding/
Superluis25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vbq2k
false
null
t3_17vbq2k
/r/LocalLLaMA/comments/17vbq2k/recommendations_for_c_netfocused_coding/
false
false
self
3
null
Does OpenAI ToS prohibit generating datasets for open source LLMs?
1
2023-11-14T20:31:57
https://i.redd.it/1bkkrjrmhd0c1.png
Divniy
i.redd.it
1970-01-01T00:00:00
0
{}
17vbg1f
false
null
t3_17vbg1f
/r/LocalLLaMA/comments/17vbg1f/does_openai_tos_prohibit_generating_datasets_for/
false
false
https://b.thumbs.redditm…VCt3YIy4TJEI.jpg
1
{'enabled': True, 'images': [{'id': '0c2x5MzLGM7XtMdodKzGu99XQTKfQFTlBf1j_4kbdR4', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?width=108&crop=smart&auto=webp&s=4ddae2ac547a7ebd22589cf9ba32f585ba06bc06', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?width=216&crop=smart&auto=webp&s=ca6afb6f997b465e939a2a2f87cdf5a52e68fa49', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?width=320&crop=smart&auto=webp&s=8e795e61eb0536c0de68e625b4c6b004b853203b', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?width=640&crop=smart&auto=webp&s=d6479328489039568e6070cabdc037046f9d5bf8', 'width': 640}, {'height': 518, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?width=960&crop=smart&auto=webp&s=bf23554598bd6b608c9b7d1772d8e1f65f07aba9', 'width': 960}, {'height': 583, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?width=1080&crop=smart&auto=webp&s=4b3d4181a184b5f3858daf7584efbe471f4637ac', 'width': 1080}], 'source': {'height': 1060, 'url': 'https://preview.redd.it/1bkkrjrmhd0c1.png?auto=webp&s=50244c1f3f931232bd275cc8629d2de5633fa791', 'width': 1962}, 'variants': {}}]}
Polanka 7b - Polish LLM based on Mistral
29
Warning: this is still a work in progress. [https://huggingface.co/piotr-ai/polanka-7b-v0.1](https://huggingface.co/piotr-ai/polanka-7b-v0.1) This is the first version of a 7B Polish LLM fine-tuned on custom data in the Polish language. As a base model I used the uncensored [https://huggingface.co/ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b), so the Dolphin "personality" should also be there. It was trained using 4K context in the ChatML format (a quick sketch of the format is below). All done on a single 4090 over multiple days.
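For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers, so prompts for this model would look roughly like the sketch below. The Polish system message is just an illustrative placeholder, not something taken from the model card:

```python
# Minimal ChatML prompt builder; the exact system message is an assumption.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("Jesteś pomocnym asystentem.", "Napisz krótki wiersz o jesieni."))
```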
2023-11-14T19:48:45
https://www.reddit.com/r/LocalLLaMA/comments/17vajv1/polanka_7b_polish_llm_based_on_mistral/
Significant_Focus134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vajv1
false
null
t3_17vajv1
/r/LocalLLaMA/comments/17vajv1/polanka_7b_polish_llm_based_on_mistral/
false
false
self
29
{'enabled': False, 'images': [{'id': 'yD6Vq1Jlk0VR8yh_FcgmsyG8nGT5uyB8EuVXplmjWvo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?width=108&crop=smart&auto=webp&s=ac4bb18979598615cdc3beb6e4317e180581aaba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?width=216&crop=smart&auto=webp&s=5c7429f1625d38f1079b56044be5ff26c79cff70', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?width=320&crop=smart&auto=webp&s=351cac09d587825b467dbca51f9602260cb96c1d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?width=640&crop=smart&auto=webp&s=f855a1f53749d48988ecf581474c09327f5c96e3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?width=960&crop=smart&auto=webp&s=a02d2cf250f0dab515556c68419422760861c716', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?width=1080&crop=smart&auto=webp&s=4452454ea20d09d0c85f3211ce16c13442ce799d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3S1l6K55O4OuROkUaB4lccyYnNSlDSbyLtY7GOaj-gQ.jpg?auto=webp&s=0b2c96e4e3c849cf5673a2c16e766a61aca73b18', 'width': 1200}, 'variants': {}}]}
Fine tuning LLM for Text to SQL - Self consistency
1
Hi, I'm fine-tuning Mistral with Text-SQL pairs. Given that execution accuracy is very important in this task, what are the different ways one could ensure that the model outputs executable queries that are correct in syntax and appropriately answer the question? I was looking into self-consistency, but generating k samples and running them against the DB to pick the correct SQL is not feasible, because that would be a lot of DB calls. Any other way I could approach this?
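One compromise that keeps the spirit of self-consistency without any DB round-trips: sample k candidates, discard anything that doesn't even parse as SQL, then majority-vote on a normalized form. A hedged sketch using the sqlglot parser (`generate_sql` here is a stand-in for sampling from your fine-tuned Mistral with temperature > 0, not a real API):

```python
from collections import Counter

import sqlglot
from sqlglot.errors import ParseError

def self_consistent_sql(question: str, generate_sql, k: int = 8) -> str | None:
    """Sample k candidates, keep only parseable SQL, then majority-vote on the
    normalized form. No database calls are made."""
    votes = Counter()
    originals = {}
    for _ in range(k):
        candidate = generate_sql(question)  # stand-in for your model's sampler
        try:
            # Canonicalize so trivially different but equivalent strings collapse.
            normalized = sqlglot.transpile(candidate, pretty=False)[0]
        except ParseError:
            continue  # syntactically invalid -> drop the candidate
        votes[normalized] += 1
        originals.setdefault(normalized, candidate)
    if not votes:
        return None
    return originals[votes.most_common(1)[0][0]]
```

This only guarantees syntactic validity and agreement among samples, not semantic correctness, but it tends to filter out the worst generations at zero execution cost; schema-aware checks (do the referenced tables/columns exist?) can be layered on top without touching the data.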
2023-11-14T19:41:24
https://www.reddit.com/r/LocalLLaMA/comments/17vadx2/fine_tuning_llm_for_text_to_sql_self_consistency/
weedyuh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17vadx2
false
null
t3_17vadx2
/r/LocalLLaMA/comments/17vadx2/fine_tuning_llm_for_text_to_sql_self_consistency/
false
false
self
1
null
Storing LLM models on external SSD: Is the SSD speed important? Samsung T7 w/ USB 3.2 (read: 1050MB/s) vs. Fantom w/ Thunderbolt 3/4 (read: 2800MB/s)
8
(title)
2023-11-14T18:58:15
https://www.reddit.com/r/LocalLLaMA/comments/17v9eqr/storing_llm_models_on_external_ssd_is_the_ssd/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v9eqr
false
null
t3_17v9eqr
/r/LocalLLaMA/comments/17v9eqr/storing_llm_models_on_external_ssd_is_the_ssd/
false
false
self
8
null
UI w/ context free local function calling. Define 100's of plugins with 50+ included OOB in v0.1.0, integrating with tons of other AI models. All 100% local with Llama 2 7B!
47
2023-11-14T18:43:22
https://v.redd.it/6lg88jzwyc0c1
noco-ai
v.redd.it
1970-01-01T00:00:00
0
{}
17v92ct
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6lg88jzwyc0c1/DASHPlaylist.mpd?a=1702579416%2CMjdjYjRiOTQ3YjUwMDM5MmNjOWJlOWJkZjIwNWIxM2MxZWFhNmY0Y2U4ZjNkYTMzMmQ4NzNlOTU4MjYxMWFlNw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/6lg88jzwyc0c1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/6lg88jzwyc0c1/HLSPlaylist.m3u8?a=1702579416%2CNDJkYjQyOGUzMzBkOTEwZmQ4MGUzZTliMDM5ZTE5ZTRiN2EzY2FlYWQ2NDcxZmI3NzhiYmRkOTk1MjljMGRhZg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/6lg88jzwyc0c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_17v92ct
/r/LocalLLaMA/comments/17v92ct/ui_w_context_free_local_function_calling_define/
false
false
https://b.thumbs.redditm…lMZff4lREErM.jpg
47
{'enabled': False, 'images': [{'id': 'nMm_MedtkOekp7FMQVzpuMNXonEJzB-KdOy_s7K9wzQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?width=108&crop=smart&format=pjpg&auto=webp&s=8c8207c76dc9bf77ec102a25815b1a1ce0738af7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?width=216&crop=smart&format=pjpg&auto=webp&s=572e8fc884c30c9d1efbc943d22523dfb2a3c10e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?width=320&crop=smart&format=pjpg&auto=webp&s=3fd91d191cda29e8ffd57ede375dd450ed66e748', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?width=640&crop=smart&format=pjpg&auto=webp&s=cf7e46147333d29903147859b71869bef83ed614', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?width=960&crop=smart&format=pjpg&auto=webp&s=3913a7437a505202de1245ba6fe16c390f8704b2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=45c8e98c16eb20e61a5ebbe91deae89e61730cb0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/w-eB_lK5qelEa3lVqwdzJnCQV2zKvyA0MFG0kr8G8oo.png?format=pjpg&auto=webp&s=e34f41863b9745a052143dbbb74a79118da527ca', 'width': 1920}, 'variants': {}}]}
Simple automation systems for spinning up cloud VMs with GPUs
5
Can anyone recommend a simple system designed to spin up short-lived cloud VMs with GPUs for inference? Something that can automatically spin up the VM and start services, and then turn it off when it's no longer in use? I'd like to be able to run my own models without censorship from the inference-as-a-service providers, and I don't mind paying a few dollars/hour while I'm actively using the LLMs, but I don't want to forget to turn it off and buy myself 24/7 uptime for $thousands/mo. I'm comfortable with technical solutions (running scripts, Ansible playbooks, etc.) but want something that's as seamless and fast to start as reasonably possible and also guards against just forgetting to turn the damn thing off.
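Not a full orchestration answer, but one cheap guard against the forget-to-turn-it-off failure mode is a watchdog on the VM itself that polls GPU utilization and powers the box off after a sustained idle window. A rough sketch (the thresholds are arbitrary; it assumes an NVIDIA GPU with nvidia-smi installed and passwordless sudo for shutdown, and you'd run it under systemd or nohup at boot):

```python
# Idle watchdog sketch: shut the VM down after ~30 minutes of GPU inactivity.
import subprocess
import time

IDLE_THRESHOLD_PCT = 5        # below this utilization we count the GPU as idle
IDLE_LIMIT_SECONDS = 30 * 60  # how long the GPU must stay idle before shutdown
POLL_SECONDS = 60

idle_for = 0
while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    utils = [int(line) for line in out.splitlines() if line.strip()]
    if max(utils, default=0) < IDLE_THRESHOLD_PCT:
        idle_for += POLL_SECONDS
    else:
        idle_for = 0
    if idle_for >= IDLE_LIMIT_SECONDS:
        subprocess.run(["sudo", "shutdown", "-h", "now"])
        break
    time.sleep(POLL_SECONDS)
```

Pair that with whatever provisioning tool spins the VM up, and the worst case is paying for one extra idle half-hour rather than a month.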
2023-11-14T18:35:51
https://www.reddit.com/r/LocalLLaMA/comments/17v8w1s/simple_automation_systems_for_spinning_up_cloud/
ostensiblymicah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v8w1s
false
null
t3_17v8w1s
/r/LocalLLaMA/comments/17v8w1s/simple_automation_systems_for_spinning_up_cloud/
false
false
self
5
null
LLM Performance on M3 Max
25
2023-11-14T18:26:03
https://www.nonstopdev.com/llm-performance-on-m3-max/
merryhime
nonstopdev.com
1970-01-01T00:00:00
0
{}
17v8nv8
false
null
t3_17v8nv8
/r/LocalLLaMA/comments/17v8nv8/llm_performance_on_m3_max/
false
false
https://b.thumbs.redditm…xf7cWTpZ4vXQ.jpg
25
{'enabled': False, 'images': [{'id': 'J0lLldtl0iKQZp-9n8iB1WriLreOvwxmkLcMRqVWuFY', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/SacZ50L-iFz9kpRQnXpAYZa1bX0EjdI1H0G726BSTTk.jpg?width=108&crop=smart&auto=webp&s=dbec87e11ab7a8b94ba354001fcff23cae8a80c2', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/SacZ50L-iFz9kpRQnXpAYZa1bX0EjdI1H0G726BSTTk.jpg?width=216&crop=smart&auto=webp&s=ab9bca61d5ff4494da80fd034375b0fa27e26159', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/SacZ50L-iFz9kpRQnXpAYZa1bX0EjdI1H0G726BSTTk.jpg?width=320&crop=smart&auto=webp&s=d72f8946f263ddcaf57741b94ce21da533651ea7', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/SacZ50L-iFz9kpRQnXpAYZa1bX0EjdI1H0G726BSTTk.jpg?width=640&crop=smart&auto=webp&s=247ba5c5145ab68c07af35c3c983fa86adc3936d', 'width': 640}], 'source': {'height': 466, 'url': 'https://external-preview.redd.it/SacZ50L-iFz9kpRQnXpAYZa1bX0EjdI1H0G726BSTTk.jpg?auto=webp&s=bac53a282895824ab4d57f186f4bf8933b94e60d', 'width': 772}, 'variants': {}}]}
DreamGen Opus 70B — Uncensored model for story telling and chat / roleplay
90
**TL;DR:**

- **Uncensored, Llama 2 70B**-based model for collaborative/steerable story writing and chat / role-play.
- **Hugging Face links:**
  - **[dreamgen/opus-v0-70b](https://huggingface.co/dreamgen/opus-v0-70b)**
  - [dreamgen/opus-v0-70b-awq](https://huggingface.co/dreamgen/opus-v0-70b-awq)
  - [dreamgen/opus-v0-70b-gguf](https://huggingface.co/dreamgen/opus-v0-70b-gguf)
- Try it for free on [dreamgen.com](https://dreamgen.com)

Hey everyone, I am following up on my [post from last week](https://www.reddit.com/r/LocalLLaMA/comments/17qpwdz/dreamgen_opus_uncensored_model_for_story_telling/), and releasing a 70B version of DreamGen Opus V0.

## What is collaborative / steerable story writing?

You can find more details about prompting the model in the [official prompting guide](https://dreamgen.com/docs/stories). Make sure to check out the examples, like "You can stop time. What will you do?", "Natasha Romanoff and me…" or "The mischievous Coven".

In summary: it allows you to describe a story, generate / write a part of it, and then further steer the story with inline instructions, like:

```txt
<setting>
(Initial description of the story)
(Can include character description)
</setting>

(Previously generated or written part of the story)

<instruction>
(Inline instructions describing how the story should develop next)
(The model will steer it in the direction within the next few sentences / paragraphs)
</instruction>
```

## How to do chat / role-play?

Very similar to regular story writing, but:

- In the story description, mention that it's written in the form of a dialog or in first person (depending on preferences).
- Preferably include character definitions.
- After `<setting>`, kick off the chat / role-play with 2-3 example dialog turns, using the format you want; see the examples tagged as "chat" in the official guide linked above.

## How to try it out

The model should work great with **any tool that supports the Llama 70B** base model. The model is large, so not everyone will be able to run it locally. To that end, you **can try it for free on [dreamgen.com](http://dreamgen.com)** (at least while I can support it :)).

## What's next

Last time I asked the community what I should work on next (you can still [vote here](https://dreamgen.com/roadmap/poll)). The most requested thing was better support for creating & sharing characters, both on the tooling side & the model side (there's tons of headroom when it comes to data, etc.); stay tuned for updates in the coming weeks.

Want to join the fun and influence the direction of DreamGen? [Join our community server.](https://dreamgen.com/community)
2023-11-14T17:03:09
https://www.reddit.com/r/LocalLLaMA/comments/17v6r0c/dreamgen_opus_70b_uncensored_model_for_story/
DreamGenX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v6r0c
false
null
t3_17v6r0c
/r/LocalLLaMA/comments/17v6r0c/dreamgen_opus_70b_uncensored_model_for_story/
false
false
self
90
{'enabled': False, 'images': [{'id': 'dC_vpoKDGYBoIP7F1VzEVyZa3z2ER5QWOV0oA5p8id0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?width=108&crop=smart&auto=webp&s=27ba0597b819fd67559d55d93579818f6b2ffe44', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?width=216&crop=smart&auto=webp&s=3e0d2308ece1612799a003df65ffb62d38624b6a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?width=320&crop=smart&auto=webp&s=c6ac84f9d399a13d0fee9a57e1f04fbbec897e05', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?width=640&crop=smart&auto=webp&s=8fc3867eb48feb60fb69f15624d3f2ec39624e60', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?width=960&crop=smart&auto=webp&s=697c907d24bbbaeadf980155ca61659a548dd382', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?width=1080&crop=smart&auto=webp&s=f2dcd516137fab6a65c849b75cb3aab6e330c474', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9kMjvxeO3ngZjWPWv08aiRhe_L8qw9hFP0Qpc4yhBcc.jpg?auto=webp&s=b01d961ed3c2cacb179902bb84cea70acbac9425', 'width': 1200}, 'variants': {}}]}
Training on the rephrased test set is all you need: 13B models can reach GPT-4 performance in benchmarks with no contamination detectable by traditional methods
226
2023-11-14T16:55:34
https://lmsys.org/blog/2023-11-14-llm-decontaminator/
Covid-Plannedemic_
lmsys.org
1970-01-01T00:00:00
0
{}
17v6kp2
false
null
t3_17v6kp2
/r/LocalLLaMA/comments/17v6kp2/training_on_the_rephrased_test_set_is_all_you/
false
false
https://b.thumbs.redditm…veOHVtpBLjCE.jpg
226
{'enabled': False, 'images': [{'id': 'OHebbMrVXPgCOnnrdgReC7QDWkT1gC9hey8haUQ2LbQ', 'resolutions': [{'height': 97, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?width=108&crop=smart&auto=webp&s=1495aa550ee54f33af42fad89ed5f192fe2d46d7', 'width': 108}, {'height': 195, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?width=216&crop=smart&auto=webp&s=eb7210ccdad0b64c78fc3fdb30e687db67efa0cd', 'width': 216}, {'height': 289, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?width=320&crop=smart&auto=webp&s=9301890ba96d741c715275a724c8d539c0775cec', 'width': 320}, {'height': 578, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?width=640&crop=smart&auto=webp&s=79cd9c04c20ef85141f43e06f1aab00002b5c2ea', 'width': 640}, {'height': 868, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?width=960&crop=smart&auto=webp&s=791d7af08b8e466417ad79adceb9ab52c90576fd', 'width': 960}, {'height': 976, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?width=1080&crop=smart&auto=webp&s=71d15a488dfa36ec9c0cb60ff79e17d50ca6ead0', 'width': 1080}], 'source': {'height': 1448, 'url': 'https://external-preview.redd.it/FfQ4P8ptXczB32aUaEK_Hrmq-iWboz7s-uJeJXYXR6M.jpg?auto=webp&s=57ff612b3c05cf921fb38ca52e617ba0e5a3d42a', 'width': 1601}, 'variants': {}}]}
Dataset format for fine-tuning Llama-2 with axolotl on conversations
3
Hello, I'm using axolotl to fine-tune Llama-2 13B on conversations. One possibility is to have the dataset in the following format:

    # dataset.jsonl
    {"text": "### Human: This is a question### Chatbot: This is a reply### Human: What the hell are you talking about?"}
    {"text": "### Human: Who's coming tonight?### Chatbot: No one, it's literally Monday."}
    ...

Is it ok to use `###` as a separator token between speakers, or can I also use `\n` as the separator token? There are no line breaks in the turns. Further, axolotl also provides the sharegpt format, where the dataset would look as follows:

    # dataset.jsonl
    {"conversations": [{"from": "Human", "value": "This is a question"}, {"from": "Chatbot", "value": "This is a reply"}, {"from": "Human", "value": "What the hell are you guys talking about?"}]}
    {"conversations": [{"from": "Human", "value": "Who's coming tonight?"}, {"from": "Chatbot", "value": "No one, it's literally Monday."}]}

Is this the correct usage of the sharegpt format, and which of the two formats is better for fine-tuning on conversations?
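For what it's worth, the two formats carry the same information, so it's cheap to generate one from the other and A/B-test both. A sketch of a sharegpt-to-flat-text converter (the file names are illustrative; whether `### ` or `\n` works better ultimately depends on the prompt template configured in axolotl):

```python
import json

def sharegpt_to_text(record: dict, sep: str = "### ") -> dict:
    """Flatten one sharegpt-style record into the single-string format."""
    parts = [f"{sep}{turn['from']}: {turn['value']}"
             for turn in record["conversations"]]
    # "".join reproduces the no-space concatenation from the example above.
    return {"text": "".join(parts)}

with open("dataset_sharegpt.jsonl") as src, open("dataset_text.jsonl", "w") as dst:
    for line in src:
        dst.write(json.dumps(sharegpt_to_text(json.loads(line)),
                             ensure_ascii=False) + "\n")
```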
2023-11-14T16:19:54
https://www.reddit.com/r/LocalLLaMA/comments/17v5s57/dataset_format_for_finetuning_llama2_with_axolotl/
Helveticus99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v5s57
false
null
t3_17v5s57
/r/LocalLLaMA/comments/17v5s57/dataset_format_for_finetuning_llama2_with_axolotl/
false
false
self
3
null
A fun day evaluating LLM Chat GUIs/Servers in Docker. Here's what I learned...
26
Today I tried a number of private (local) open-source **#GenAI** **#LLM** servers in Docker. I only run LLM servers in Docker; without Docker I'm pretty sure my desktop would quickly become an angry bag of snakes in no time (snakes, pythons, geddit? 🐍 😁). For context, I'm evaluating these LLM components to figure out what part they might play in my Backchat plugin project for **Backstage from Spotify** ([https://via.vmw.com/backchat](https://via.vmw.com/backchat)). Here's what I discovered:

* **PrivateGPT** has promise. It offers an OpenAI-API-compatible server, but it's much too hard to configure and run in Docker containers at the moment, and you must build these containers yourself. If it did run, it could be awesome, as it offers a Retrieval Augmented Generation (ingest my docs) pipeline. The project's docs were messy for Docker use. ([https://github.com/imartinez/privateGPT](https://github.com/imartinez/privateGPT))

* **OpenVINO Model Server**. Offers a pre-built Docker container, but seems more suited to ML rather than LLM/chat use cases. Also, it doesn't offer an OpenAI API. Pretty much a non-starter for my use case, but an impressive project. ([https://docs.openvino.ai/2023.1/ovms_what_is_openvino_model_server.html](https://docs.openvino.ai/2023.1/ovms_what_is_openvino_model_server.html))

* **Ollama Web UI & Ollama**. This server and client combination was super easy to get going under Docker. Images are provided, and with a little digging I soon found a `compose` stanza. The chat GUI is really easy to use and has probably the best model download feature I've ever seen. Just one problem: it doesn't seem to offer OpenAI API compatibility, which limits its effectiveness for my use case. ([https://github.com/ollama-webui/ollama-webui](https://github.com/ollama-webui/ollama-webui))

In the end I liked Ollama/Ollama Web UI **a lot**. If OpenAI API compatibility gets added, it could be my go-to all-round LLM project of choice, but not yet.

[Ollama Web UI in Backstage](https://preview.redd.it/56xo45l29c0c1.png?width=1693&format=png&auto=webp&s=ce19e433b9e5a8fcef01bf30a0ccc22c1cceb22a)

[Backchat architecture](https://preview.redd.it/e32udru79c0c1.jpg?width=1280&format=pjpg&auto=webp&s=0faf5bfbdce2071c105a88c80582da421b32f3fc)
2023-11-14T16:18:26
https://www.reddit.com/r/LocalLLaMA/comments/17v5r0p/a_fun_day_evaluating_llm_chat_guisservers_in/
benbravo73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v5r0p
false
null
t3_17v5r0p
/r/LocalLLaMA/comments/17v5r0p/a_fun_day_evaluating_llm_chat_guisservers_in/
false
false
https://b.thumbs.redditm…B9jzcwlgwXNM.jpg
26
{'enabled': False, 'images': [{'id': 'zPrgnSfSwVxQbnbhUPmrtJqHNHJL8R5btf3ZYYdAbk0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?width=108&crop=smart&auto=webp&s=ce3a553d51b120d8609fe038c719fe3102fa2400', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?width=216&crop=smart&auto=webp&s=752854f6497db79408ccb444321f3f57b5a97d7b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?width=320&crop=smart&auto=webp&s=5a2640f23f2609497b9d546ba1babe3365a7aa0d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?width=640&crop=smart&auto=webp&s=42a4853552e903292fafe2c524aeb6deab0be7d8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?width=960&crop=smart&auto=webp&s=6e69f8e51fabe81df2a120d5cd7262275eebb1eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?width=1080&crop=smart&auto=webp&s=bccaeafba2875afa5fd362f030145773810fa9c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eD5KfjHctjE0B4grTUd_PIeFhWx2m_YZwYslid3NLiY.jpg?auto=webp&s=e8b1ef61fdfeb314b82aa456802e7dd449ebc541', 'width': 1200}, 'variants': {}}]}
How much more stupid is the 120B goliath Q3_K_M than the larger options?
1
I want to download the Goliath model, but I can only afford Q3_K_M, and the description says that quant comes with high quality loss. How much quality loss is there? I heard that the larger the model, the less it suffers intellectually when it is quantized. I usually use 70B Q5_K_M. Can I expect 120B Q3_K_M to be significantly better than 70B Q5_K_M, so that the time spent downloading will be worth it? https://preview.redd.it/1dvpq4bq8c0c1.png?width=1148&format=png&auto=webp&s=79588237d01a66643cfdb12cc13b84866df4bf68
2023-11-14T16:17:56
https://www.reddit.com/r/LocalLLaMA/comments/17v5qmu/how_much_more_stupid_is_the_120b_goliath_q3_k_m/
Secret_Joke_2262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v5qmu
false
null
t3_17v5qmu
/r/LocalLLaMA/comments/17v5qmu/how_much_more_stupid_is_the_120b_goliath_q3_k_m/
false
false
https://a.thumbs.redditm…FpNngoZlOx34.jpg
1
null
How much have chatbots like ChatGPT replaced Google or other search engines for you?
4
I've heard a lot of people say they use Google considerably less after they started using ChatGPT, but personally my usage of Google hasn't gone down that much. I still make lots of random searches out of curiosity, and I use Perplexity for those same searches; I'd actually say Perplexity has made me search more in total. However, it has greatly reduced the amount of troubleshooting searches I do, since I can ask GPT how to do X and Y in a certain app (and it often gives accurate answers about navigating app UIs). [View Poll](https://www.reddit.com/poll/17v4xm3)
2023-11-14T15:41:04
https://www.reddit.com/r/LocalLLaMA/comments/17v4xm3/how_much_have_chatbots_like_chatgpt_replaced/
TheTwelveYearOld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v4xm3
false
null
t3_17v4xm3
/r/LocalLLaMA/comments/17v4xm3/how_much_have_chatbots_like_chatgpt_replaced/
false
false
self
4
null
Neural-chat-7b-v3-1 GGUF. New Mistral finetune
24
Couldn't wait for the great TheBloke to release it, so I've uploaded a Q5_K_M GGUF of [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1). From some preliminary tests on [PISA sample questions](https://www.oecd.org/pisa/aboutpisa/PISA%20for%20Schools%20sample%20test%20items.pdf) it seems at least on par with OpenHermes-2.5-Mistral-7B. https://preview.redd.it/bkaezfb51c0c1.png?width=1414&format=png&auto=webp&s=735d0f03109488e01d65c1cf8ec676fa7e18c1d5
2023-11-14T15:39:53
https://www.reddit.com/r/LocalLLaMA/comments/17v4woq/neuralchat7bv31_gguf_new_mistral_finetune/
fakezeta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v4woq
false
null
t3_17v4woq
/r/LocalLLaMA/comments/17v4woq/neuralchat7bv31_gguf_new_mistral_finetune/
false
false
https://b.thumbs.redditm…tTBRRnoTJEPc.jpg
24
{'enabled': False, 'images': [{'id': 'MTHa0RFu26sELjrensnxBNo3EpGgvvXukCfilQ3uEDY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=108&crop=smart&auto=webp&s=74983322bb1387297c0925d101f2cdac0e0cbc03', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=216&crop=smart&auto=webp&s=8df6e273431b159a106d994c2b0505c95ee9eb54', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=320&crop=smart&auto=webp&s=7464473e05d3045147cce6463617e09879d40306', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=640&crop=smart&auto=webp&s=f8d5b4af4430b93ceb281a311b6c320329e3904e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=960&crop=smart&auto=webp&s=2d15348bf5fc43937b0be5e3dbefba9f46aff520', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=1080&crop=smart&auto=webp&s=fbfda6a21d3126ca21cce1bcdb504c1e59d96e99', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?auto=webp&s=540f33032c58fe2dca215a9dff123ce5bcdc3c1a', 'width': 1200}, 'variants': {}}]}
Help with Alpaca prompt formatting?
3
I'm trying to use a new model which requires Alpaca formatting. I've only used Llama-2-formatted prompts before, so I tried to convert my existing ones and I'm having some trouble. My Llama-2 prompts, which work well, look like this:

```
[INST] Task: Generate a JSON file containing a series of French sentences tailored to what you would expect in an intermediate level text book. The sentences should be conversation about 'Art and Music' and be 2-5 exchanges long. Do not enter comments of any kind as this is not valid JSON. [/INST]

[
  {
    "person1": "Sentence in French.",
    "person2": "Response in French.",
    "person1": "Response in French.",
    "conversation_summary": "Summary of exchange in English.",
  },
  {
    "person1": "Sentence in French.",
    "person2": "Response in French.",
    "person1": "Response in French.",
    "person2": "Response in French.",
    "person1": "Response in French.",
    "conversation_summary": "Summary of exchange in English.",
  }
  // More conversations here
]

[INST] Please provide 5 such conversations with a varied amount of exchanges. [/INST]

Sure, here you go:
```

I've converted that into the Alpaca format below, or at least into what I think it wants. I think this is wrong, because the output of the models is not right. I'm using llama.cpp, if that makes any difference.

```
### Instruction:
Generate a JSON file containing a series of French sentences tailored to what you would expect in an intermediate level text book. The sentences should be conversation about 'Art and Music' and be 2-5 exchanges long. Do not enter comments of any kind as this is not valid JSON.

### Response:
[
  {
    "person1": "Sentence in French.",
    "person2": "Response in French.",
    "person1": "Response in French.",
    "conversation_summary": "Summary of exchange in English.",
  },
  {
    "person1": "Sentence in French.",
    "person2": "Response in French.",
    "person1": "Response in French.",
    "person2": "Response in French.",
    "person1": "Response in French.",
    "conversation_summary": "Summary of exchange in English.",
  }
  // More conversations here
]

### Instruction:
Please provide 5 such conversations with a varied amount of exchanges.
```
2023-11-14T14:46:38
https://www.reddit.com/r/LocalLLaMA/comments/17v3rhw/help_with_alpaca_prompt_formatting/
Adventurous_Cat_1559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v3rhw
false
null
t3_17v3rhw
/r/LocalLLaMA/comments/17v3rhw/help_with_alpaca_prompt_formatting/
false
false
self
3
null
SPHINX: a new multi-modal LLM from the creators of LLaMA-Adapter
55
I found out about this model while browsing the LLaMA-Adapter repo; it was released a few days ago. [Model page](https://huggingface.co/Alpha-VLLM/SPHINX) [Weights (40GB)](https://huggingface.co/Alpha-VLLM/LLaMA2-Accessory/tree/main/finetune/mm/SPHINX) [Paper](https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/SPHINX/SPHINX_paper.pdf) [Demo](http://imagebind-llm.opengvlab.com/) It seems to be able to handle different tasks on images, such as bounding-box object detection and text extraction. On benchmarks it shows slightly lower numbers than CogVLM, so I tried to test how well it can reason and compared it to CogVLM. I was able to get good results with SPHINX consistently with a **higher temperature**, while CogVLM was missing the point with any configuration: [CogVLM](https://preview.redd.it/lc0jwkz7mb0c1.jpg?width=1280&format=pjpg&auto=webp&s=df648aecdaba64d6eb2aa91774f3bf64591de024) [SPHINX](https://preview.redd.it/cd7zm4zcmb0c1.jpg?width=1280&format=pjpg&auto=webp&s=12c4a5f4f89ec53456a801ffeb89fab58475f55f)
2023-11-14T14:35:11
https://www.reddit.com/r/LocalLLaMA/comments/17v3j52/sphinx_a_new_multimodal_llm_from_the_creators_of/
remixer_dec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v3j52
false
null
t3_17v3j52
/r/LocalLLaMA/comments/17v3j52/sphinx_a_new_multimodal_llm_from_the_creators_of/
false
false
https://b.thumbs.redditm…NAsVZj0ztsFA.jpg
55
{'enabled': False, 'images': [{'id': 'ReAc29HdboIiFN92DonHNHv__71q8E3c-qXuUtP6iGo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?width=108&crop=smart&auto=webp&s=b4942e5c4de4950d837a48bacc0a40305b70173a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?width=216&crop=smart&auto=webp&s=558c14b10a07fb7cb658ff2891491345fd4d419a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?width=320&crop=smart&auto=webp&s=9827587ac4c23f2316624a5cb8fd21e12e09a2a6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?width=640&crop=smart&auto=webp&s=47f60ad456b5c14d8b6a3b0d495db35bf2984461', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?width=960&crop=smart&auto=webp&s=4887e3226fd322620b16f9eddee94a0369e2ea42', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?width=1080&crop=smart&auto=webp&s=4b8a838c1aec8a909f98a3e01cc76d7f2a2b6ebe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iXvsHEilna8zisdl2e_YlyXteHaMi2bZpneFVRLOyxs.jpg?auto=webp&s=522857c40f376328544c547803a7aaaa03a6afa8', 'width': 1200}, 'variants': {}}]}
Streaming LLM response in Azure ML support
1
[removed]
2023-11-14T13:31:16
https://www.reddit.com/r/LocalLLaMA/comments/17v29fo/streaming_llm_response_in_azure_ml_support/
ianuvrat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v29fo
false
null
t3_17v29fo
/r/LocalLLaMA/comments/17v29fo/streaming_llm_response_in_azure_ml_support/
false
false
self
1
null
What LLM would you suggest for an RTX 3060?
2
I know how to install and set it up; I tried a couple of 70B-parameter models... That did not go well. Which would you say is currently the best uncensored model for my computer? I know it changes, but you get what I mean.
2023-11-14T12:55:55
https://www.reddit.com/r/LocalLLaMA/comments/17v1mqd/what_llm_would_you_suggest_for_a_rtx_3060/
Electrical_Apricot69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v1mqd
false
null
t3_17v1mqd
/r/LocalLLaMA/comments/17v1mqd/what_llm_would_you_suggest_for_a_rtx_3060/
false
false
default
2
null
codellama gives really bad results, am I doing something wrong?
8
Is this model just not good enough? I would think it would know the difference between Python and JS at the very least. https://preview.redd.it/xf06g5dn8b0c1.png?width=1754&format=png&auto=webp&s=fd4d297400f379bd7d679d963359f550242a8de4
2023-11-14T12:52:27
https://www.reddit.com/r/LocalLLaMA/comments/17v1kkt/codellama_gives_really_bad_results_am_i_doing/
Detz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v1kkt
false
null
t3_17v1kkt
/r/LocalLLaMA/comments/17v1kkt/codellama_gives_really_bad_results_am_i_doing/
false
false
https://b.thumbs.redditm…TdnFlEMI-l_U.jpg
8
null
I asked GPT-4-Turbo for a proposal to replace the Self-Attention Mechanism
15
Hello there, I am not that knowledgeable about the building process of LLM neural networks, but I've heard that one of the biggest problems with transformers is that self-attention scales quadratically with the number of tokens, which takes a lot of computational resources. I've also heard that some people have tried shrinking the region over which context (attention) is calculated to save computation. (Correct me if I am wrong here; there is a short formula sketch after the quoted proposal below.) I gave GPT-4-Turbo the task of improving the self-attention mechanism, or creating a better alternative that would cut computation costs and improve quality. This is what it proposed:

"The concept of Dynamic Contextual Mapping (DCM) in language models is a proposed method to enhance how these models process and understand text by emphasizing the significance of tokens within their dynamic context. Here's a detailed exploration of each aspect of DCM:

1. **Context-sensitive Token Reduction:**
   - Instead of processing all tokens, the model would learn to exclude those deemed irrelevant based on the surrounding context. This would be achieved by training the model to perform a preliminary selection of tokens that are crucial to the sentence's meaning, informed by prior knowledge and the tokens' positions within the sentence. The model might use attention mechanisms to weigh the importance of each token dynamically, filtering out those that do not contribute significantly to the current context.
2. **Hierarchical Processing:**
   - The remaining tokens would be processed within a hierarchical structure, allowing the model to differentiate between local and global context. Lower levels of the hierarchy would focus on immediate context, while higher levels would integrate broader contextual information. This could be implemented using a series of interconnected neural networks, each responsible for different levels of context comprehension.
3. **Adaptive Token Fusion:**
   - Similar to byte-pair encoding, this process would combine frequently co-occurring or semantically related tokens into single units, thus reducing complexity. These composite tokens would be formed adaptively based on the context, enabling the model to handle common phrases or concepts as singular entities. This would involve a dynamic encoding algorithm that can merge and split token sequences as needed, based on the context they appear in.
4. **Context Capsules:**
   - These would be specialized data structures that store contextual information about a token and are activated when similar contexts are detected in the input. Functioning as a form of short-term memory, they would quickly provide the stored context for future processing. This could involve a cache-like mechanism where each capsule has a trigger condition based on the input context, leading to the retrieval of relevant contextual information.
5. **Probabilistic Forecasting:**
   - The model would predict likely subsequent tokens using probabilistic assessments based on the current sentence's context. This forecasting would inform the attention mechanisms, adjusting their focus towards tokens that are more likely to follow. This would require the model to have a component that calculates the probability distribution over the next possible tokens and adjusts the processing pathways accordingly.

Implementing DCM would involve creating a "living" system, one that continuously adapts and optimizes itself. 
It would need to learn not just which tokens are important, but also how their significance shifts over time and with context. This concept is theoretical and represents an amalgamation of various advanced NLP techniques. Its practicality and effectiveness would need extensive research and development to be realized." I am very intrigued by this topic but do not know a lot about it; I just want to hear the opinions of people who are more knowledgeable than me.
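For readers unfamiliar with the quadratic-scaling claim at the top of the post, here is the standard formula it refers to (nothing model-specific, just the textbook definition of scaled dot-product self-attention):

```latex
% Scaled dot-product self-attention over n tokens of head width d:
% Q, K, V are n x d projections of the input sequence.
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V
% The score matrix Q K^T has shape n x n, so computing it costs O(n^2 d) time
% and O(n^2) memory: this is the quadratic growth in sequence length that
% ideas like DCM's token reduction are trying to avoid.
```

That O(n^2) term is exactly what sparse, sliding-window, and token-pruning attention variants (and the token-reduction ideas in the quoted proposal) try to cut down.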
2023-11-14T11:26:24
https://www.reddit.com/r/LocalLLaMA/comments/17v08jb/i_asked_gpt4turbo_of_a_proposal_of_replacing_the/
_lMark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v08jb
false
null
t3_17v08jb
/r/LocalLLaMA/comments/17v08jb/i_asked_gpt4turbo_of_a_proposal_of_replacing_the/
false
false
self
15
null
Looking for a Less Formal Generative Model
1
As the title suggests, I am seeking a model, preferably around 7 billion parameters, to generate more human-like text, specifically at the level of a student. I am working on a project where I aim to produce flawless generations that challenge discriminator models in distinguishing between human-written and generated texts. While prompt engineering aids to a certain extent, high-quality models tend to lean towards more formal and grammatically correct writing. Do you have any personal recommendations or preferences for a model of this nature?
2023-11-14T11:12:21
https://www.reddit.com/r/LocalLLaMA/comments/17v017o/looking_for_a_less_formal_generative_model/
missing-in-idleness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17v017o
false
null
t3_17v017o
/r/LocalLLaMA/comments/17v017o/looking_for_a_less_formal_generative_model/
false
false
self
1
null