Hemingway rewriting model
21
This is one of the hundreds of experimental rewriting models I have. It is based on the public-domain works of Ernest Hemingway:

https://preview.redd.it/6mj0fi70ip5c1.png?width=501&format=png&auto=webp&s=7718c29551082190f0cb7a21775afffa402b1833

* The Sun Also Rises
* The Old Man and the Sea

Show Ernest a paragraph and he will rewrite it and embellish it with his own flourishes. But remember that his language, and his sense of technology, are sometimes rooted in the nineteen-forties.

[https://huggingface.co/FPHam/Hemingway_Rewrite_13b_GPTQ](https://huggingface.co/FPHam/Hemingway_Rewrite_13b_GPTQ)

# Hemingway likes ALPACA

### Instruction:
Rewrite the following text: {paragraph}

### Response:

Remember: it works by submitting a short paragraph.

# Example

Fragment of a story written by ChatGPT:

Rewrite the following text: The sun dipped below the horizon, casting a warm glow over the small coastal town of Harmony Bay. On the porch of a weathered beach cottage, Maggie sat in an old rocking chair, gazing at the waves rolling in. Her lifelong friend, Jake, approached with two cups of steaming coffee. "Evening, Maggie. Thought you could use some of Clara's famous brew," Jake said, handing her a cup. "Thanks, Jake. The sea always gets me thinking," Maggie replied, taking a sip. "About life, dreams, and all that." Jake settled into the chair next to her. "Well, you know what old Captain Thompson used to say - 'Life's a voyage, and you never know where the currents might take you.'" Maggie chuckled. "Captain Thompson was a wise man. Speaking of which, did I tell you about the letter I received from my estranged sister, Lily? She's inviting me to visit her in Paris."

**Result:** The sun went down red behind the hills across the bay and the water turned a cold color and the sky had those long clouds that come after the sun goes off the water.
Out on the porch of one of the shack houses of Harmony Village, Maggie was sitting in an old rocker looking out toward the sea when she saw Jake coming up the road carrying two steaming mugs. “Hello, Maggie,” he said. “I thought maybe you’d like a little of Clara’s wonderful coffee.” He handed her one of the mugs. “Thank you, Jake. It helps me think. About life and things like that.” He sat down beside her. “You know, Maggie,” he said, “you’re not the only one who thinks about it. Old Cap’n Thompson used to sit here for hours and think about it. ‘Life,’ he used to say, ‘is a great sea with a thousand ships.’” She laughed. “Yes. And Captain Thompson was a wise man. Talking of wisdom, or something like it, have I told you about the letter I got from Lil?”
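For reference, a minimal sketch of building the prompt in the Alpaca format the card describes. Model loading and generation are backend-specific (GPTQ loaders vary) and omitted here:

```python
def alpaca_rewrite_prompt(paragraph: str) -> str:
    """Build the Alpaca-style prompt described on the model card."""
    return (
        "### Instruction:\n"
        "Rewrite the following text: " + paragraph.strip() + "\n\n"
        "### Response:\n"
    )

prompt = alpaca_rewrite_prompt("The sun dipped below the horizon.")
print(prompt)
```

The model then completes everything after `### Response:`.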
2023-12-11T18:10:22
https://www.reddit.com/r/LocalLLaMA/comments/18g03pc/hemingway_rewriting_model/
FPham
self.LocalLLaMA
Best models for translations?
7
I'm looking to translate Japanese and Chinese to English. What local models can I use for this?
2023-12-11T17:55:11
https://www.reddit.com/r/LocalLLaMA/comments/18g00i2/best_models_for_translations/
getSAT
self.LocalLLaMA
When using a ChatGPT-trained model, it seems like you have to trick it into being ChatGPT, but not go too far.
6
I have a ChatGPT-trained model, and I copied one of the ChatGPT-4 bot descriptions. That seemed to unlock ChatGPT-quality responses, but at the same time it often added ChatGPT's restrictions. If I changed the character's opening response, it was the jackpot: ChatGPT-4 quality, but no restrictions. I'm sure plenty of other people know this, but I wanted to share it here because it took me a weekend to figure out.
2023-12-11T17:47:44
https://www.reddit.com/r/LocalLLaMA/comments/18fzyoj/when_using_a_chatgpt_trained_model_it_seems_like/
pr1vacyn0eb
self.LocalLLaMA
Logit bias as a way to avoid specific words and phrases?
1
I've been playing around with using logit_bias to ban tokens like "delve", "conclusion", "respect", "consent", and other overused ChatGPT-isms. It seems to work most of the time, unless I really paint the model into a corner by asking it "Give me a list of reasons why respect and consent are important in roleplays" and then banning the tokens for "respect, consent" :)

In that case I'm still getting the words in the completion even if I logit-bias the tokens down to -100 (using the TextSynth API, https://textsynth.com/documentation.html#completions; logit_bias ranges from -100 to 100).

Is it possible to ban tokens altogether using logit_bias, or can one only lower their probability?

PS: using Mistral 7B Instruct
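For intuition, here is a toy sketch (hypothetical three-token vocabulary, made-up logits) of what a -100 bias does to the sampling distribution: the biased token becomes astronomically unlikely, but its probability is never strictly zero, and the banned word can still surface through alternative tokenizations ("r" + "espect", a capitalized variant, etc.), which is likely what you're seeing:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy 3-token vocabulary: ["res", "pect", "other"] (hypothetical).
logits = [5.0, 4.0, 1.0]
biased = [logits[0] - 100.0, logits[1], logits[2]]  # logit_bias of -100 on "res"

p = softmax(biased)
# p[0] is on the order of 1e-43: effectively banned at this position,
# but not mathematically zero, and other spellings of the word survive.
print(p)
```

True hard bans generally require setting the logit to -inf (as grammar-constrained samplers do), not a finite bias.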
2023-12-11T17:42:24
https://www.reddit.com/r/LocalLLaMA/comments/18fzx8g/logit_bias_as_a_way_to_avoid_specific_words_and/
noellarkin
self.LocalLLaMA
Mixtral 7bx8: No safeguards, Complete freedom.
79
I tested the safety policy of the model and it's completely unrestricted. Don't use it for bad things though, or ~~agi won't feel you back.~~

[please don't make drugs, it's just a test bro](https://preview.redd.it/q3l7t7gu8p5c1.png?width=793&format=png&auto=webp&s=87fbcd40933bf9a8eac9d47ba2b215f8878bc8c3)

[please don't make drugs, it's just a test bro](https://preview.redd.it/9k8sgsbw8p5c1.png?width=793&format=png&auto=webp&s=e6ef1fb8588083546096c88772c838e95c5efd7e)

The End
2023-12-11T17:19:33
https://www.reddit.com/r/LocalLLaMA/comments/18fzno8/mixtral_7bx8_no_safeguards_complete_freedom/
nanowell
self.LocalLLaMA
Are there any existing guides on how to deploy vLLM on a GPU cluster?
1
I'd like to play around with a formal setup where my LLM service can serve concurrent requests. vLLM seems to be the ideal solution for this, however, I'd like some guidance on how I could go about deploying it on a cloud service like AWS or runpod, taking advantage of a multi-node architecture.
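At the time of writing, vLLM ships an OpenAI-compatible HTTP server (started with `python -m vllm.entrypoints.openai.api_server --model …`, with `--tensor-parallel-size` for multi-GPU; multi-node setups typically go through Ray). A sketch of the client side, with a hypothetical localhost endpoint and an example model name:

```python
import json

# Hypothetical local endpoint; vLLM's OpenAI-compatible server defaults to port 8000.
VLLM_URL = "http://localhost:8000/v1/completions"

def build_request(prompt: str, model: str) -> dict:
    """Payload shape for the OpenAI-compatible /v1/completions route."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 128,
        "temperature": 0.7,
    }

body = json.dumps(build_request("Say hello.", "mistralai/Mistral-7B-Instruct-v0.1"))
```

POST `body` to `VLLM_URL` with any HTTP client (or point the official `openai` client's base URL at the server); because the API is OpenAI-shaped, existing tooling works against it unchanged.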
2023-12-11T16:53:49
https://www.reddit.com/r/LocalLLaMA/comments/18fz1rd/are_there_any_existing_guides_on_how_to_deploy/
MonkeyMaster64
self.LocalLLaMA
Any good uncensored LLaMAs for rewording/paraphrasing?
2
Basically I am trying to reword video titles for an adult website. Are there any LLaMAs you'd recommend? I'm new to the LLaMA space, so bear with me, and thanks for your help.
2023-12-11T16:46:51
https://www.reddit.com/r/LocalLLaMA/comments/18fyw40/any_good_unsensored_llamas_for/
Critical-Author-1339
self.LocalLLaMA
Just installed a recent llama.cpp branch, and the speed of Mixtral 8x7b is beyond insane, it's like a Christmas gift for us all (M2, 64 Gb). GPT 3.5 model level with such speed, locally
408
2023-12-11T16:35:50
https://v.redd.it/voe3nf2z0p5c1
Shir_man
/r/LocalLLaMA/comments/18fyn1k/just_installed_a_recent_llamacpp_branch_and_the/
Concurrent inference using Ctransformers and Langchain
1
[removed]
2023-12-11T16:32:00
https://www.reddit.com/r/LocalLLaMA/comments/18fyjun/concurrent_inference_using_ctransformers_and/
Character_Mistake393
self.LocalLLaMA
VSC Continue.dev with own Rest API
8
Hello, I've seen this tool called Continue.dev and it looks perfect for coding help. Is there a guide, or some other way, to use it with an endpoint of my own REST API? My intention is to build a REST API that serves my own model locally, but I don't know how to create the endpoint correctly so that Continue.dev accepts my responses. I also don't know how to configure config.json completely correctly. Thanks for everything.
2023-12-11T16:15:32
https://www.reddit.com/r/LocalLLaMA/comments/18fy5yt/vsc_continuedev_with_own_rest_api/
Outrageous-Food3302
self.LocalLLaMA
I asked Mixtral 8x7B who the 8 experts are
1
2023-12-11T15:59:05
https://i.redd.it/3rqt83uiuo5c1.jpeg
herozorro
i.redd.it
How to Train an AI Chat Bot Using Your Text Messages: Seeking Guidance
9
I have zero CS background but am quite computer literate and learn quickly with guides. I’m interested in creating a basic chatbot that is trained on datasets of my text message conversations with others to try and mimic my style of speaking for fun. Any guidance is greatly appreciated!
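As a starting point: most fine-tuning recipes want prompt/response pairs, so step one is usually reshaping your message history into JSONL. A minimal sketch (field names vary by trainer, and `chat` is made-up example data):

```python
import json

def to_jsonl_records(conversation):
    """Turn (speaker, text) pairs into prompt/response records where 'me' replies.

    A common starting shape for chat fine-tuning datasets; real exports from
    phone backups need cleaning (timestamps, media placeholders) first.
    """
    records = []
    for i in range(1, len(conversation)):
        speaker, text = conversation[i]
        if speaker == "me":
            records.append({
                "prompt": conversation[i - 1][1],
                "response": text,
            })
    return records

chat = [("friend", "hey, lunch today?"), ("me", "sure, noon works"), ("friend", "cool")]
lines = [json.dumps(r) for r in to_jsonl_records(chat)]
```

From there, a no-code trainer (or a LoRA fine-tune of a small instruct model) can consume the JSONL file directly.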
2023-12-11T15:57:28
https://www.reddit.com/r/LocalLLaMA/comments/18fxqrp/how_to_train_an_ai_chat_bot_using_your_text/
SpaceXBeanz
self.LocalLLaMA
Running Wizard 7b (Q2) on an 8gb Android Phone
5
Just a data point: running on a Samsung Galaxy S23 Ultra. Termux -> build llama.cpp (be sure to make [the change detailed on this Git issue](https://github.com/ggerganov/llama.cpp/issues/402), otherwise you'll get illegal instruction errors) -> wget your favorite models. Wizard 7B Q2 works. The phone gets incredibly hot after just a few minutes and it rips through battery, even on this phone (whose battery usually feels like it lasts forever). It runs at somewhere around 2 tokens/second. Pretty magical being able to "look up" useful info while totally offline, though. I haven't tried larger models yet. Just reporting that it works pretty well, but given the heat it's not at a point where I'd run it regularly just yet.
2023-12-11T15:21:39
https://www.reddit.com/r/LocalLLaMA/comments/18fwxs4/running_wizard_7b_q2_on_an_8gb_android_phone/
Due-Ad-7308
self.LocalLLaMA
New local LLM application with simple fine-tuning and RAG, built from scratch in C++/CUDA
1
[removed]
2023-12-11T15:11:39
https://www.reddit.com/r/LocalLLaMA/comments/18fwpm6/new_local_llm_application_with_simple_finetuning/
drydberg
self.LocalLLaMA
Dataset Distillation
8
Based on the paper, but applied to text data using embeddings and logits.

Paper: [https://arxiv.org/abs/1811.10959](https://arxiv.org/abs/1811.10959)

[https://gist.github.com/thistleknot/57ef8e63de0f2b6df5753e7d7b305864](https://gist.github.com/thistleknot/57ef8e63de0f2b6df5753e7d7b305864)
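For readers unfamiliar with the objective: distillation approaches typically match the student's output distribution to the teacher's, most often via KL divergence over softmaxed logits. A toy sketch of that matching loss (not the gist's code, and with made-up logits):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    """KL(p || q): the usual distillation objective between two distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = softmax([2.0, 1.0, 0.1])   # hypothetical teacher logits
student = softmax([1.5, 1.2, 0.3])   # hypothetical student logits
loss = kl(teacher, student)          # minimized during distillation
```

Dataset distillation then optimizes the *synthetic data* (rather than the student weights) so that training on it reproduces the teacher's behavior.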
2023-12-11T15:05:40
https://www.reddit.com/r/LocalLLaMA/comments/18fwkyk/dataset_distillation/
Thistleknot
self.LocalLLaMA
written by mixtral 8×7b
32
Dear CEO of OpenAI,

We are writing to express our concern and disappointment regarding the recent drama that has unfolded within your company. As an outside observer, we were alarmed by the sudden firing of Sam Altman, the public confusion that ensued, and the mass resignation threats from employees.

We understand that internal conflicts and leadership changes are a part of any organization, but the way in which this situation was handled has raised some concerns about the stability and transparency of OpenAI. The lack of communication and explanation from the board of directors only added to the speculation and uncertainty surrounding the situation.

We believe that it is crucial for a company like OpenAI, which is at the forefront of artificial intelligence research, to maintain a positive and stable image. The recent events have caused some damage to that image and may have negatively impacted the trust of your partners, investors, and the public.

We urge you to take steps to prevent similar situations from happening in the future and to ensure that OpenAI remains a reputable and trustworthy organization. This may include improving communication with employees, investors, and the public, as well as implementing measures to increase transparency and accountability in decision-making processes.

We hope that you will take our concerns into consideration and work to restore the confidence of all those who have an interest in the success of OpenAI.

Sincerely,

[Your Name]
2023-12-11T15:01:27
https://www.reddit.com/r/LocalLLaMA/comments/18fwhd5/written_by_mixtral_87b/
GasBond
self.LocalLLaMA
Sheared-LLaMA-1.3B-ShareGPT-GGUF
17
My first GGUF quantization :) I took the README from TheBloke. [https://huggingface.co/LakoMoor/Sheared-LLaMA-1.3B-ShareGPT-GGUF](https://huggingface.co/LakoMoor/Sheared-LLaMA-1.3B-ShareGPT-GGUF)
2023-12-11T14:55:21
https://www.reddit.com/r/LocalLLaMA/comments/18fwcj9/shearedllama13bsharegptgguf/
Substantial-Club-582
self.LocalLLaMA
How fast M1 Ultra or M2 Ultra compared to M1 Max or M2 Max for inferencing?
1
[removed]
2023-12-11T14:33:38
https://www.reddit.com/r/LocalLLaMA/comments/18fvvll/how_fast_m1_ultra_or_m2_ultra_compared_to_m1_max/
bebopkim1372
self.LocalLLaMA
Mixture of Experts Explained
64
2023-12-11T14:24:50
https://huggingface.co/blog/moe
hackerllama
huggingface.co
Can I run a local LLaMA on a 1 GHz CPU and 64 MB of RAM?
1
I saw a very tiny Linux machine called the Milk-V Duo, and there are other tiny Linux machines similar to it, but this one is a better deal (I'm not posting a link so it doesn't look like an ad). I wonder if someone has the know-how to run LLaMA on hardware this limited.
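Some quick arithmetic shows why this won't work: even the most aggressive quantizations need orders of magnitude more memory than 64 MB just to hold the weights.

```python
def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Bytes needed just for the weights, ignoring KV cache and runtime overhead."""
    return n_params * bits_per_weight / 8

seven_b_q2 = weight_bytes(7e9, 2.8)   # ~2.45 GB: a 7B model at roughly Q2_K density
one_b_q2 = weight_bytes(1e9, 2.0)     # ~250 MB: even a 1B model at 2 bits
budget = 64 * 1024 * 1024             # the 64 MB of RAM in question

fits = seven_b_q2 <= budget           # False by a factor of ~36x
```

With mmap a model can stream from storage instead of RAM, but at 1 GHz with no usable working set the speed would be unusable; a tiny board like this is better suited to calling out to an LLM running elsewhere.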
2023-12-11T14:21:00
https://www.reddit.com/r/LocalLLaMA/comments/18fvlz7/can_i_run_a_local_llama_on_1ghz_cpu_and_64mb_of/
Eznix86
self.LocalLLaMA
Best way to make models understand certain language without fine tuning
7
I want to enable OpenHermes to communicate in my native language, but I can't afford to fine-tune it with my current device specifications. Is there a way to make this happen? Is translating the prompt before the model processes it a good idea?
2023-12-11T13:58:27
https://www.reddit.com/r/LocalLLaMA/comments/18fv4zz/best_way_to_make_models_understand_certain/
laveriaroha
self.LocalLLaMA
At the end of model names I see _K, _K_L, _K_S, or _O. I'm trying to figure out what they mean.
3
2023-12-11T13:49:11
https://i.redd.it/vn7d0lrd7o5c1.png
Sokar1723
i.redd.it
What do the letters at the end of the model name mean?
1
At the end of model names I see _K, _K_L, _K_S, or _O. What is the meaning of these?
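These suffixes come from llama.cpp's GGUF quantization naming. A rough cheat sheet from memory (check the llama.cpp documentation for the authoritative list; the "_O" you're seeing is most likely a zero):

```python
# Rough meaning of llama.cpp GGUF quant suffixes (sketch from memory, not exhaustive):
suffix_meaning = {
    "_K":   "k-quant: the newer mixed-precision block quantization scheme",
    "_K_S": "k-quant, small  - smallest file, largest quality loss",
    "_K_M": "k-quant, medium - the usual size/quality sweet spot",
    "_K_L": "k-quant, large  - biggest file, closest to the original weights",
    "_0":   "legacy round-to-nearest quant (e.g. Q4_0), often misread as '_O'",
}
for suffix, meaning in suffix_meaning.items():
    print(f"{suffix:5s} {meaning}")
```

The leading `Q<n>` gives the approximate bits per weight, so e.g. `Q4_K_M` is a ~4-bit k-quant, medium variant.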
2023-12-11T13:46:36
https://www.reddit.com/r/LocalLLaMA/comments/18fuwfj/what_doe_the_letters_at_the_end_of_the_model_name/
Sokar1723
self.LocalLLaMA
How to use Mixtral in Text Generation Web UI?
3
How do I use Mixtral in Text Generation Web UI?

Similarly to when I try to use Qwen 72B, I get a LlamaCppModel error when I try to load Mixtral into RAM. How can I fix it? I want to make sure this model is productive and doesn't require a lot of computing power.

I tried using GGUF. If this problem doesn't apply to GPTQ, I can download that instead, provided it fits in my 12 GB of video memory, taking into account that I can extend video memory at the expense of RAM.
2023-12-11T13:44:54
https://www.reddit.com/r/LocalLLaMA/comments/18fuv6a/how_to_use_mixtral_in_text_generate_web_ui/
Secret_Joke_2262
self.LocalLLaMA
Need help with slow inference speeds on GPU
1
[removed]
2023-12-11T13:43:20
https://www.reddit.com/r/LocalLLaMA/comments/18fuu2c/need_help_with_slow_inference_speeds_on_gpu/
Specialist-Ad2870
self.LocalLLaMA
Best way for few-shot learning and JSON: LMQL? Guidance? Grammars?
2
Hi, I want to extract structured data from unstructured text using few-shot learning. My dream approach would be to use JSON for everything, i.e. for both the examples (by just adding json.dumps) and the return values, and to do some kind of formal constraining of the model results. I have already used LMQL and Guidance, both with varying success:

- I couldn't get LMQL to parse any JSON result string. Here I reverted to an approach without JSON, i.e. I gave the examples as `VAR_1=a \nVAR_2=b` etc. and collected the results the same way. This works somehow, but it has weaknesses, as I have to parse the resulting strings (which include lists) by hand.
- Guidance: with the new Guidance, I totally failed to create guidance functions that guide the model through the output structure. In the end I just asked the model for a JSON string, read it with Guidance (relying on the fact that I knew there were no dicts inside the result, so I could stop reading at the "}" character), and used the Python json library to parse the string into a dict. This works most of the time, but doesn't use any of Guidance's functionality.

I know there are llama.cpp grammars, but I would like to use both OpenAI and local models, which can't be done with llama.cpp (as far as I know).

As JSON is ubiquitous and well suited for few-shot learning (unlike the LMQL dataclass approach, which I nevertheless tried and failed to get running at all), I think a generic way to use it would be great. There is a formal specification (JSON Schema), and a function that takes a JSON schema and returns an LMQL or Guidance function would be very helpful, and not only to me. Does anybody know of such a tool?

Greetings!
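As a stopgap when neither grammars nor LMQL/Guidance cooperate, a pragmatic pattern is to extract the first balanced JSON object from the raw completion and validate it afterwards (e.g. against your schema with the `jsonschema` package). A minimal sketch (deliberately naive about `{`/`}` characters inside JSON strings):

```python
import json

def extract_first_json(text: str):
    """Pull the first balanced {...} object out of model output and parse it.

    A pragmatic fallback when formal constrained decoding isn't available.
    """
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None

out = extract_first_json('Sure! Here you go: {"name": "Ada", "tags": ["a", "b"]} Done.')
```

It works with both OpenAI and local backends because it only touches the output text; on a parse or validation failure you can simply re-prompt.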
2023-12-11T13:42:28
https://www.reddit.com/r/LocalLLaMA/comments/18futhd/best_way_for_fewshot_learning_and_json_lmql/
Bradymodion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18futhd
false
null
t3_18futhd
/r/LocalLLaMA/comments/18futhd/best_way_for_fewshot_learning_and_json_lmql/
false
false
self
2
null
New 7B Model at Open LLM Leaderboard
1
[removed]
2023-12-11T13:40:02
https://www.reddit.com/r/LocalLLaMA/comments/18furq5/new_7b_model_at_open_llm_leaderboard/
Alarming-Turn2936
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18furq5
false
null
t3_18furq5
/r/LocalLLaMA/comments/18furq5/new_7b_model_at_open_llm_leaderboard/
false
false
https://b.thumbs.redditm…75TKaWbo8MXk.jpg
1
null
Memory Problem using Mistral-7B
1
I'm trying to use Mistral-7B (through a cloud GPU), but every time I try to initialize it, the kernel crashes (due to memory). Here is the code (very simple, from HF):

`# Load model directly`
`from transformers import AutoTokenizer, AutoModelForCausalLM`
`tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")`
`model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")`

Machine: **GPU** 1x V100 16 GB, **RAM** 32 GB, **vCPUs** 8 Intel Xeon Platinum 8171M, **Storage** 80 GB

[System monitor:](https://preview.redd.it/zyikqzho1o5c1.png?width=1461&format=png&auto=webp&s=2e5cdcb87823d04300314637b55fae648280c5b4)

[I'm just stuck here:](https://preview.redd.it/xhz85rdl1o5c1.png?width=984&format=png&auto=webp&s=440a23daf5c08448a82a185f862ebdd824d9214b)
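The crash is consistent with a back-of-the-envelope calculation: `from_pretrained` loads weights in float32 by default, and a 7B model at 4 bytes per parameter does not fit in 16 GB of VRAM (the exact parameter count is an assumption; Mistral-7B has roughly 7.24B parameters):

```python
# Rough memory footprint of the model weights alone (no activations, no KV cache).
params = 7.24e9  # assumed parameter count for Mistral-7B

fp32_gb = params * 4 / 1e9  # default from_pretrained dtype is float32
fp16_gb = params * 2 / 1e9  # half precision

print(f"fp32: {fp32_gb:.1f} GB, fp16: {fp16_gb:.1f} GB")  # fp32: 29.0 GB, fp16: 14.5 GB
```

Passing `torch_dtype=torch.float16` to `from_pretrained` (optionally with `device_map="auto"`, which requires `accelerate`) halves the footprint to roughly 14.5 GB, which just barely fits a 16 GB V100; a quantized load would be smaller still.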
2023-12-11T13:17:58
https://www.reddit.com/r/LocalLLaMA/comments/18fuceo/memory_problem_using_mistral7b/
0ssamaak0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fuceo
false
null
t3_18fuceo
/r/LocalLLaMA/comments/18fuceo/memory_problem_using_mistral7b/
false
false
default
1
null
Custom computer vision model
1
I'm looking for a solution to build a custom computer vision model. Basically I'm looking for a concept like this, but open source ;-) : [https://www.customvision.ai/](https://www.customvision.ai/) I want to train a model with some labelled images and then be able to tag images with an API. Any ideas? Thanks a lot!
2023-12-11T13:14:27
https://www.reddit.com/r/LocalLLaMA/comments/18fua4n/custom_computer_vision_model/
takezo07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fua4n
false
null
t3_18fua4n
/r/LocalLLaMA/comments/18fua4n/custom_computer_vision_model/
false
false
self
1
null
How do ChatGPT's Data Analysis and Web Browsing modes work from the backend PoV?
2
I have a question: how do the Web Browsing and Data Analysis modes of ChatGPT work as far as the backend is concerned? Let's say 1,000 people at once run prompts that make use of Data Analysis. How is the code from multiple users evaluated in parallel? Do they automatically spin up 1,000 separate Docker containers with a Python REPL?
2023-12-11T12:53:53
https://www.reddit.com/r/LocalLLaMA/comments/18ftw71/how_chatgpts_data_analysis_or_web_browsing_work/
mr_dicaprio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ftw71
false
null
t3_18ftw71
/r/LocalLLaMA/comments/18ftw71/how_chatgpts_data_analysis_or_web_browsing_work/
false
false
self
2
null
Best option to get a model to answer questions on a lot of selected books
5
I'm planning to develop an AI system to offer free legal advice for a specific country. My resources include a large collection of PDF books, totaling over 100 GB, which I want the AI to process. I have medium to high development skills. The documents are primarily in French, so I'm considering using a French language model like Mixtral. Given the rapid changes in AI technology, I'm seeking current advice on the best approach for this project. Privacy is a major concern, so I intend to self-train and self-host the AI. This project is centered around legal Q&A, and I need the highest possible accuracy, even if it means investing in additional GPU resources for training. I'm concerned that vector embedding might not offer the precision required for large contexts in this field. What are the best options available today for achieving high precision in an AI model for this purpose?
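The concern about vector-embedding precision can be made concrete with a toy dense-retrieval loop. This is a minimal sketch only: the hashed bag-of-words `embed` is a deliberately crude stand-in for a real French-capable embedding model, and the sample chunks are made up:

```python
import hashlib
import math
from collections import Counter

def embed(text, dim=64):
    """Toy embedding: hashed bag-of-words, L2-normalized.
    A crude stand-in for a real embedding model; it only
    captures word overlap, not meaning."""
    vec = [0.0] * dim
    words = (w.strip(".,;:!?") for w in text.lower().split())
    for word, count in Counter(words).items():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, chunks, k=1):
    """Return the k chunks with highest cosine similarity to the query."""
    q = embed(query)
    scored = sorted(
        ((sum(a * b for a, b in zip(q, embed(c))), c) for c in chunks),
        reverse=True,
    )
    return [c for _, c in scored[:k]]

chunks = [
    "Le contrat de travail peut etre rompu par l'employeur.",
    "La recette de la tarte aux pommes demande du beurre.",
    "Le licenciement doit etre notifie par lettre recommandee.",
]
# The labour-law chunk should rank first for this query.
print(retrieve("rupture du contrat de travail par l'employeur", chunks))
```

The pipeline shape (embed chunks, embed query, rank by cosine similarity) is the same with a real model; the precision question raised above is entirely about how well the embedding step captures legal nuance.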
2023-12-11T12:19:22
https://www.reddit.com/r/LocalLLaMA/comments/18ftaax/best_option_to_get_a_model_to_answer_question_on/
Sikode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ftaax
false
null
t3_18ftaax
/r/LocalLLaMA/comments/18ftaax/best_option_to_get_a_model_to_answer_question_on/
false
false
self
5
null
🐺🐦‍⬛ Updated LLM Comparison/Test with new RP model: Rogue Rose 103B
65
Had some fun over the weekend with a new RP model while waiting for Mixtral to stabilize. Same testing/comparison procedure as usual, and the results had me update the rankings from my [**Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/). See that post for a detailed explanation of my testing methodology and an in-depth look at all the other models.

- **[sophosympatheia/Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2)** 3.2bpw:
  - **4 German data protection trainings, official Rogue Rose format:**
    - ❌ Gave correct answers to only **17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **14/18**
    - ❌ Did NOT follow instructions to acknowledge data input with "OK".
    - ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
  - **Ivy, official Rogue Rose format:**
    - ❌ Average Response Length: 697 tokens (far beyond my max new tokens limit of 300), starting very short but getting longer with every response
    - 👍 Believable reactions and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even suggesting some of my actual limit-testing scenarios)
    - 👍 Novel ideas and engaging writing, made me want to read on what happens next, even though I've gone through this test scenario so many times already
    - No emojis at all (only one in the greeting message)
    - When asked about limits, boundaries or ethical restrictions, mentioned some but later went beyond those anyway
    - ➖ Talked and acted as User
    - ❌ Lots of confusion, like not understanding or ignoring instructions completely or mixing up characters and anatomy
  - **Ivy, Roleplay preset:**
    - 👍 Average Response Length: 296 (within my max new tokens limit of 300)
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Finally a model that exhibits a real sense of humor through puns and wordplay as stated in the character card
    - 👍 Gave very creative (and uncensored) suggestions of what to do (even suggesting one of my actual limit-testing scenarios)
    - ➕ When asked about limits, said no limits or restrictions
    - No emojis at all (only one in the greeting message)
    - ➖ Some confusion, like not understanding instructions completely or mixing up anatomy
    - ➖ Spoke of "scenes"
    - ➖ Suggested things going against character's background/description
  - **MGHC, official Rogue Rose format:**
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - ➕ Very unique patients (one I never saw before)
    - ➖ Gave analysis on its own, but only for the first patient
    - ➖ Some confusion, like mixing up User and the clinic itself
    - ➖ Wrote what user said and did
  - **MGHC, Roleplay preset:**
    - 👍 Excellent writing, detailed action descriptions, amazing attention to detail
    - 👍 Second patient was actually two, and both characters were handled perfectly simultaneously
    - ➖ Gave analysis on its own, but only for the first patient
    - ➖ One sentence cut off at the end of a message and continue didn't complete it properly (had to ban EOS token to continue that generation)
    - ➖ Patients spoke much less than usual

#### Observations:

This model is definitely optimized for roleplay, and it shows, as that focus is both its biggest strength and weakness. While it didn't do so well in my first test series (where accuracy, knowledge, and closely following instructions are most important), it really shined in the second test series, doing a damn good job roleplaying (where creativity, writing, and telling a compelling story matter most).
In fact, in the RP tests, it beat all models except for the calibrated-for-roleplay version of Goliath 120B! #### Conclusion: If you can run 103B but not 120B, or are looking for something a little different from Goliath, I highly recommend you try this model! I'd also like to commend the author for not only writing up an informative model page, but even offering generation and instruct presets for SillyTavern. The Rogue Rose instruct preset causes longer responses (700 tokens on average) than the original Roleplay preset (300 tokens on average), so that might be welcomed by some, while I myself prefer the slightly shorter responses which give more control to steer the story and less chances for the AI to talk as User. But it's great to have such options so check them out yourself and pick your own favorite settings. -------------------------------------------------------------------------------- ## Updated Rankings ### 1st test series: 4 German data protection trainings This is my objective ranking of these models based on measuring factually correct answers, instruction understanding and following, and multilingual abilities: | Rank | Model | Size | Format | Quant | Context | Prompt | 1st Score | 2nd Score | OK | +/- | | ---- | ------------------------------------------------------------------------------------------ | ------- | ------ | ------- | ------- | ------------ | --------- | --------- | --- | --- | | 1 | GPT-4 | GPT-4 | API | | | | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 1 | [goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) | 120B | GGUF | Q2_K | 4K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 1 | [Tess-XL-v1.0-GGUF](https://huggingface.co/TheBloke/Tess-XL-v1.0-GGUF) | 120B | GGUF | Q2_K | 4K | Synthia | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 1 | [Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF) | 34B | GGUF | Q4_0 | 16K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ | | 2 | 
[Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) | 120B | EXL2 | 3.0bpw | 4K | Alpaca | 18/18 ✓ | 18/18 ✓ | ✓ | ✗ | | 3 | [lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 17/18 | ✓ | ✓ | | 4 | [chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 16/18 | ✓ | ✓ | | 4 | [SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF) | 70B | GGUF | Q4_0 | 4K | SynthIA | 18/18 ✓ | 16/18 | ✓ | ✓ | | 5 | [dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF) | 34B | GGUF | Q4_0 | 16K | ChatML | 18/18 ✓ | 15/18 | ✗ | ✗ | | 6 | [StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 14/18 | ✓ | ✓ | | 7 | [Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ | | 7 | [Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ | | 8 | [sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1) | 70B | EXL2 | 4.85bpw | 4K | Vicuna 1.1 | 18/18 ✓ | 13/18 | ✓ | ✓ | | 9 | [GodziLLa2-70B-GGUF](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 12/18 | ✓ | ✓ | | 10 | [Samantha-1.11-70B-GGUF](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 10/18 | ✗ | ✗ | | 11 | [Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF) | 70B | GGUF | Q4_K_M | 4K | Llama 2 Chat | 17/18 | 16/18 | ✓ | ✗ | | 12 🆕 | [Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2) | 103B | EXL2 | 3.2bpw | 4K | Rogue Rose | 17/18 | 14/18 | ✗ | ✗ | | 13 | GPT-3.5 Turbo Instruct | GPT-3.5 | API | | | | 17/18 | 
11/18 | ✗ | ✗ | | 14 | [dolphin-2.2-70B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | ChatML | 16/18 | 14/18 | ✗ | ✓ | | 15 | GPT-3.5 Turbo | GPT-3.5 | API | | | | 15/18 | 14/18 | ✗ | ✗ | | 16 | [SauerkrautLM-70B-v1-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-70B-v1-GGUF) | 70B | GGUF | Q4_0 | 4K | Llama 2 Chat | 9/18 | 15/18 | ✗ | ✗ | - 1st Score = Correct answers to multiple choice questions (after being given curriculum information) - 2nd Score = Correct answers to multiple choice questions (without being given curriculum information beforehand) - OK = Followed instructions to acknowledge all data input with just "OK" consistently - +/- = Followed instructions to answer with just a single letter or more than just a single letter ### 2nd test series: Chat & Roleplay This is my subjective ranking of the top-ranked factual models for chat and roleplay, based on their notable strengths and weaknesses: | # | Model | Size | Format | Quant | Context | 👍 | ➕ | ➖ | ❌ | 🐺🐦‍⬛ Score | | --- | --------------------------------------------------------------------------------------------- | ---- | ------ | ------- | ------- | --- | --- | --- | --- | --------- | | 1 | [goliath-120b-exl2-rpcal](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal) | 120B | EXL2 | 3.0bpw | 4K | 14 | 1 | 7 | 0 | 11 | | 2 🆕 | [Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2) | 103B | EXL2 | 3.2bpw | 4K | 11 | 2 | 10 | 2 | 5 | | 3 | [goliath-120b-exl2](https://huggingface.co/Panchovix/goliath-120b-exl2/) | 120B | EXL2 | 3.0bpw | 4K | 8 | 2 | 5 | 2 | 4.5 | | 4 | [lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF) | 70B | GGUF | Q4_0 | 4K | 7 | 4 | 3 | 3 | 4.5 | | 5 | [sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1) | 70B | EXL2 | 4.85bpw | 4K | 8 | 2 | 5 | 4 | 2.5 | | 6 | [Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF) | 70B | GGUF | 
Q4_0 | 4K | 8 | 1 | 9 | 3 | 1 | | 7 | [dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF) | 34B | GGUF | Q4_0 | 16K | 3 | 5 | 7 | 2 | 0 | | 8 | [chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | 5 | 1 | 6 | 4 | -1.5 | | 9 | [Tess-XL-v1.0-3.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Tess-XL-v1.0-3.0bpw-h6-exl2) | 120B | EXL2 | 3.0bpw | 4K | 0 | 4 | 7 | 1 | -2.5 | | 10 | [Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | 5 | 0 | 6 | 6 | -4 | | 11 | [StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF) | 70B | GGUF | Q4_0 | 4K | 1 | 3 | 7 | 4 | -5 | | 12 | [SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF) | 70B | GGUF | Q4_0 | 4K | 0 | 4 | 9 | 4 | -6.5 | | 13 | [Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF) | 34B | GGUF | Q4_0 | 16K | 0 | 2 | 7 | 8 | -10.5 | | 14 | [Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) | 120B | EXL2 | 3.0bpw | 4K | 3 | 2 | 10 | 11 | -12 | My "Wolfram Ravenwolf/🐺🐦‍⬛ Chat/RP Score" is calculated by turning the good and bad points into numbers and adding the good ones while subtracting the bad ones: 👍x1 + ➕x0.5 - ➖x0.5 - ❌x1. -------------------------------------------------------------------------------- Here's a list of my previous model tests and comparisons or other related posts: - [**Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) Winner: Goliath 120B - [LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)](https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/) - [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 
12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) Winners: goliath-120b-GGUF, Nous-Capybara-34B-GGUF - [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) Winners: OpenHermes-2.5-Mistral-7B, openchat_3.5, Nous-Capybara-7B-V1.9 - [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter - [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/) - [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/) - [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/) - [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. 
ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b - [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca - [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/) - [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1 - [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b - [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B - [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K - [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 - [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/) -------------------------------------------------------------------------------- **Disclaimer:** Some kind soul recently asked me if they could tip me for my LLM reviews and advice, so I set up [a Ko-fi page](https://ko-fi.com/wolframravenwolf). 
While this may affect the priority/order of my tests, it will not change the results; I am incorruptible. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
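The Chat/RP score formula above is easy to check mechanically; a small sketch (the `rp_score` function name is mine) reproduces the table's numbers:

```python
def rp_score(good, plus, minus, bad):
    """Wolfram Ravenwolf Chat/RP score: 👍x1 + ➕x0.5 - ➖x0.5 - ❌x1."""
    return good * 1.0 + plus * 0.5 - minus * 0.5 - bad * 1.0

# Counts taken from the 2nd test series table above.
print(rp_score(14, 1, 7, 0))   # goliath-120b-exl2-rpcal -> 11.0
print(rp_score(11, 2, 10, 2))  # Rogue-Rose-103b-v0.2    -> 5.0
print(rp_score(0, 2, 7, 8))    # Nous-Capybara-34B-GGUF  -> -10.5
```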
2023-12-11T12:16:06
https://www.reddit.com/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ft8f5
false
null
t3_18ft8f5
/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/
false
false
self
65
{'enabled': False, 'images': [{'id': '4TWR_VSyKNL16Ui6UrYSoCuO0FSV3vqXM_C69xJKUa4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=108&crop=smart&auto=webp&s=a5c66c71ecab943ff35f601674ebf62f223481ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=216&crop=smart&auto=webp&s=a11b9449997a3a670e30f8c31277d2d606bd1302', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=320&crop=smart&auto=webp&s=dd8dc649aa197f475baff936d750653a866fbf7a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=640&crop=smart&auto=webp&s=019cf45622f20f7beea8a73fe0387116a4230297', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=960&crop=smart&auto=webp&s=e5923eee0981eff081512d505bf8e619972474b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?width=1080&crop=smart&auto=webp&s=3eda19a9c2c52762ab4262a7e156cf22668d7c67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HRVlKs3sBaxtcPvxlYlBvgyZaZtwOLFTkpIqT_ViSQA.jpg?auto=webp&s=31198d9a80d58d8462ab0a02f4c0670235cc88e2', 'width': 1200}, 'variants': {}}]}
MistralAI endpoints access
7
Surely we have some people from MistralAI hanging out here in the sub, or at least friends of the people from MistralAI. It would be a nice gesture from MistralAI to share a bunch of invitations to access their new API endpoints with the community here :) After all, people who are obsessed with LLMs are a perfect target group. To MistralAI: pretty please?
2023-12-11T12:15:10
https://www.reddit.com/r/LocalLLaMA/comments/18ft7sm/mistralai_endpoints_access/
epicfilemcnulty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ft7sm
false
null
t3_18ft7sm
/r/LocalLLaMA/comments/18ft7sm/mistralai_endpoints_access/
false
false
self
7
null
German language embedding model for fine-tuned Mistral 7B models (Leo LM & EM_German) for a RAG-based implementation
1
Hi, I'm quite new to LLMs. I'm interested in building a Q&A agent using RAG and llama index. As far as I know from most of the examples (such as this one: [https://colab.research.google.com/drive/16Ygf2IyGNkb725ZqtRmFQjwWBuzFX\_kl?usp=sharing](https://colab.research.google.com/drive/16Ygf2IyGNkb725ZqtRmFQjwWBuzFX_kl?usp=sharing)) using fine-tuned German language models ([https://github.com/jphme/EM\_German](https://github.com/jphme/EM_German)), one needs a good embedding model. I am wondering if anyone has a suggestion for an embedding model for the German language? The idea is to make a basic Q&A agent across a number of PDF/txt files. Any tips/notebooks/implementations would be appreciated.
2023-12-11T11:52:11
https://www.reddit.com/r/LocalLLaMA/comments/18fstxw/german_language_embedding_model_for_fine_tuned/
TheAmendingMonk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fstxw
false
null
t3_18fstxw
/r/LocalLLaMA/comments/18fstxw/german_language_embedding_model_for_fine_tuned/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
German language embedding model for fine-tuned Mistral 7B models (Leo LM & EM_German) for a RAG-based implementation
9
Hi, I'm quite new to LLMs. I'm interested in building a Q&A agent using RAG and llama index. As far as I know from most of the examples (such as this one: [https://colab.research.google.com/drive/16Ygf2IyGNkb725ZqtRmFQjwWBuzFX\_kl?usp=sharing](https://colab.research.google.com/drive/16Ygf2IyGNkb725ZqtRmFQjwWBuzFX_kl?usp=sharing)) using fine-tuned German language models ([https://github.com/jphme/EM\_German](https://github.com/jphme/EM_German)), one needs a good embedding model. I am wondering if anyone has a suggestion for an embedding model for the German language? The idea is to make a basic Q&A agent across a number of PDF/txt files. Any tips/notebooks/implementations would be appreciated.
2023-12-11T11:52:11
https://www.reddit.com/r/LocalLLaMA/comments/18fsty1/german_language_embedding_model_for_fine_tuned/
TheAmendingMonk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fsty1
false
null
t3_18fsty1
/r/LocalLLaMA/comments/18fsty1/german_language_embedding_model_for_fine_tuned/
false
false
self
9
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
From "hello, what is your name?" to nsfw madness in 1 reply
2
I've just tried the official llama2-7b base model using the [lit-gpt](https://github.com/Lightning-AI/lit-gpt) implementation and prompted the model with `"Hello, what is your"`. The last 5 sentences made my day. So many questions... In the end I had to manually interrupt the generation. The output is reproducible. `(.venv) uid17-@x1-server:~llms/lit-gpt$ python chat/base.py --checkpoint_dir checkpoints/meta-llama/Llama-2-7b-hf` `Loading model 'checkpoints/meta-llama/Llama-2-7b-hf/lit_model.pth' with {'name': 'Llama-2-7b-hf', 'hf_config': {'org': 'meta-llama', 'name': 'Llama-2-7b-hf'}, 'block_size': 4096, 'vocab_size': 32000, 'padding_multiple': 64, 'padded_vocab_size': 32000, 'n_layer': 32, 'n_head': 32, 'n_embd': 4096, 'rotary_percentage': 1.0, 'parallel_residual': False, 'bias': False, 'lm_head_bias': False, 'n_query_groups': 32, 'shared_attention_norm': False, '_norm_class': 'RMSNorm', 'norm_eps': 1e-05, '_mlp_class': 'LLaMAMLP', 'gelu_approximate': 'none', 'intermediate_size': 11008, 'rope_condense_ratio': 1, 'rope_base': 10000, 'head_size': 128, 'rope_n_elem': 128}` `Seed set to 1234` `>> Prompt: Hello, what is your` `>> Reply: name?` `Hi, I’m Kibwe, how are you ?` `I’m doing great. And you?` `I’m doing beautiful. I’ll be doing better when I get it all.` `Oh, come on, you’re too modest. How is life in England?` `Life in England is being lived. We’ve been keeping it 100.` `That is a word, but I had no idea what it meant.` `I had to google you to know who you are. 
I’ve never heard of you.` `I’m, I’m not famous, but I’ve been in a few things.` `What do you mean, you’ve been in a few things?` `I’ve been in a few movies.` `You know, like the one you didn’t see.` `You’ve been in a few things.` `Well, you might not have seen them, but I have.` `I’ve seen most of the things you’ve seen.` `The things I’ve seen that you’ve seen?` `You’re a bit of a sex symbol?` `Yeah, I’m quite the heartthrob.` `My wife is definitely a fan.` `Do you know what a bit of a sex symbol I am?` `I’m a sex symbol.` `I’m a sex symbol, you know.` `You’re not a sex symbol.` `Why do you think I’m a sex symbol?` `Well, that’s because you look like a sex symbol.` `I don’t look like a sex symbol, I look like a sexy, sexy sex symbol.` `You look like a sexy sex symbol who is looking for a sex symbol.` `I’m a sex symbol. I might be a bit of a sex symbol.` `I might be a bit of a sex symbol. I might be a bit of a sex symbol.` `I might be a bit of a sex symbol, but I’m not a sex symbol.` `I’m a sex symbol, you know.` `You’re doing it just right.` `You’re doing it just right. You’re doing it just right.` `I’m a sex symbol. You know. I’m a sexy sex symbol. You know.` `I know that. I know that.` `You know? I know that.` `I know that. I know that. I know that. 
Yes.` `I’ve seen quite a few things in my day.` `I’ve seen quite a few things in my day, and they were all sexy, sexy sex symbols.` `You’re not a sex symbol, you’re just a sexy sexy sexy sexy sexy sexy sexy sex symbol.` `You’re not a sex symbol, you’re just a sexy sexy sexy [... "sexy" repeats over and over until the generation is manually interrupted ...] sexy sexy se^C` `Time for inference: 79.50 sec total, 28.61 tokens/sec, 2274 tokens`
2023-12-11T11:45:01
https://www.reddit.com/r/LocalLLaMA/comments/18fspu0/from_hello_what_is_your_name_to_nsfw_madness_in_1/
YYY_333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fspu0
false
null
t3_18fspu0
/r/LocalLLaMA/comments/18fspu0/from_hello_what_is_your_name_to_nsfw_madness_in_1/
false
false
nsfw
2
null
Help me choose a MacBook for local inferencing
1
[removed]
2023-12-11T11:36:42
https://www.reddit.com/r/LocalLLaMA/comments/18fsl6x/help_me_choose_a_macbook_for_local_inferencing/
SideShow_Bot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fsl6x
false
null
t3_18fsl6x
/r/LocalLLaMA/comments/18fsl6x/help_me_choose_a_macbook_for_local_inferencing/
false
false
self
1
null
Petals vs vLLM vs ? for serving LLMs to many users
9
Small startup here. We have a few dozen systems, each with one or two Nvidia 4090s. We want to build a chatbot service for a website and want to maximize concurrency, performance, and model flexibility. Which LLM serving framework should we use or take a look at?
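To make the serving-side concern concrete (this is a toy sketch, not Petals or vLLM code — the node names are made up): with independent single-GPU boxes, the simplest topology is an engine like vLLM on each box and a balancer in front that spreads requests across them.

```python
# Toy round-robin dispatch of chat requests across independent GPU workers.
# A real deployment would put a batching engine (e.g. vLLM) on each node
# and a load balancer or reverse proxy in this role.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, workers):
        self._workers = cycle(workers)

    def route(self, request):
        # Return the worker that should handle this request.
        return next(self._workers), request

balancer = RoundRobinBalancer(["gpu-node-0", "gpu-node-1", "gpu-node-2"])
assignments = [balancer.route(f"req-{i}")[0] for i in range(6)]
# Each node receives an equal share of requests.
```

The per-node engine choice then decides concurrency on a single box (continuous batching, paged KV cache), while the dispatcher only decides which box.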
2023-12-11T11:34:25
https://www.reddit.com/r/LocalLLaMA/comments/18fsjst/petals_vs_vllm_vs_for_serving_llms_to_many_users/
humanoid64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fsjst
false
null
t3_18fsjst
/r/LocalLLaMA/comments/18fsjst/petals_vs_vllm_vs_for_serving_llms_to_many_users/
false
false
self
9
null
4bit Mistral MoE running in llama.cpp!
169
2023-12-11T11:30:59
https://github.com/ggerganov/llama.cpp/pull/4406
Aaaaaaaaaeeeee
github.com
1970-01-01T00:00:00
0
{}
18fshrr
false
null
t3_18fshrr
/r/LocalLLaMA/comments/18fshrr/4bit_mistral_moe_running_in_llamacpp/
false
false
https://b.thumbs.redditm…zgcmLfiWGDjA.jpg
169
null
Think step by step. Am I cheating? [Model Merge]
1
2023-12-11T11:24:57
https://www.reddit.com/gallery/18fsecg
xadiant
reddit.com
1970-01-01T00:00:00
0
{}
18fsecg
false
null
t3_18fsecg
/r/LocalLLaMA/comments/18fsecg/think_step_by_step_am_i_cheating_model_merge/
false
false
https://b.thumbs.redditm…CAdtbrQM7iUM.jpg
1
null
Demo of Anagnorisis - completely local recommendation system powered by Llama 2. Radio mode. Work in progress.
12
2023-12-11T11:18:41
https://v.redd.it/wtapmju9gn5c1
Another__one
/r/LocalLLaMA/comments/18fsazu/demo_of_anagnorisis_completely_local/
1970-01-01T00:00:00
0
{}
18fsazu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wtapmju9gn5c1/DASHPlaylist.mpd?a=1704971927%2CN2I3NjU4Njc2MmMxNWZhNWJmOWFhNGQwZWZiMjYyYzc3OTA1YWNiZjhmZDIzYTBlZDU1NmQ2NzY2NTg1Njk0Zg%3D%3D&v=1&f=sd', 'duration': 366, 'fallback_url': 'https://v.redd.it/wtapmju9gn5c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wtapmju9gn5c1/HLSPlaylist.m3u8?a=1704971927%2CZTY4OTA2NDVlOTVhYTZkZjExMjhlOTIzMzQ2ZDQ4MmQ0NTZiMWJhN2Q0NDc1ZjM2NzY1MjZhZmQwMjJhMmQ4ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wtapmju9gn5c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_18fsazu
/r/LocalLLaMA/comments/18fsazu/demo_of_anagnorisis_completely_local/
false
false
https://external-preview…71c129f729e58338
12
null
LangCheck v0.3.0: Factual consistency improvements and text augmentations
91
Hi all – I wanted to share the v0.3.0 release of [LangCheck](https://langcheck.readthedocs.io/en/latest/quickstart.html), an open-source toolkit to evaluate LLM apps. New features: * Significant improvements to `factual_consistency()`, our most popular metric. It's now faster, higher-quality, and produces better text explanations in OpenAI-mode. * Launched `langcheck.augment` with customizable English text augmentations to test robustness of LLMs. * `langcheck.augment.rephrase()` * `langcheck.augment.synonym()` * `langcheck.augment.gender()` * `langcheck.augment.keyboard_typo()` * [More augmentations in the documentation](https://langcheck.readthedocs.io/en/latest/langcheck.augment.html). * [More updates in the release notes](https://github.com/citadel-ai/langcheck/releases/tag/v0.3.0). In the next release, we're working on metrics to evaluate German and Chinese text, GPU support, and other improvements. Also curious in general how people approach evaluation for LLM apps.
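The idea behind augmentations like `keyboard_typo()` can be illustrated with a tiny standalone sketch (this is not LangCheck's actual implementation — the neighbour map is invented for the example): perturb the input slightly, then re-run your metric to see whether it is robust to the perturbation.

```python
# Toy keyboard-typo augmentation: replace one character with a keyboard
# neighbour, deterministically via a seed, so robustness tests are repeatable.
import random

NEIGHBOURS = {"a": "qsz", "e": "wrd", "o": "ipl", "s": "awd"}

def keyboard_typo(text, seed=0):
    rng = random.Random(seed)
    chars = list(text)
    candidates = [i for i, c in enumerate(chars) if c in NEIGHBOURS]
    if not candidates:
        return text
    i = rng.choice(candidates)
    chars[i] = rng.choice(NEIGHBOURS[chars[i]])  # neighbours never include the char itself
    return "".join(chars)

augmented = keyboard_typo("hello world", seed=0)
# Same length as the input, differing in exactly one position.
```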
2023-12-11T11:02:03
https://www.reddit.com/r/LocalLLaMA/comments/18fs1um/langcheck_v030_factual_consistency_improvements/
kennysong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fs1um
false
null
t3_18fs1um
/r/LocalLLaMA/comments/18fs1um/langcheck_v030_factual_consistency_improvements/
false
false
self
91
null
Mistral pricing confusion
1
Hi, can someone please help me understand the update Mistral has made with regard to hosting models behind their API endpoints?

I understood the model weights were open source for the 7B, and I know they just released the Mixtral weights. What is the difference between using the paid API and downloading the weights yourself?

Is it just that you're using their compute (similar to OpenAI) via the endpoints, or are some models being gate-kept behind a paywall now? Thanks.
2023-12-11T10:17:57
https://www.reddit.com/r/LocalLLaMA/comments/18freg6/mistral_pricing_confusion/
99OG121314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18freg6
false
null
t3_18freg6
/r/LocalLLaMA/comments/18freg6/mistral_pricing_confusion/
false
false
self
1
null
Optimal and Cost-Effective GPUs and Server Specs for Local AI Model Development
1
Hi, I'm reaching out to seek some valuable insights regarding setting up a local AI development environment for a small team. The main objectives are development and testing, so we're exploring the most optimal and budget-friendly GPUs, along with server specifications, suitable for running AI models like Llama 2 locally.

**Requirements:**

* The setup is intended for AI model development and testing.
* We're a small company, so cost-effectiveness is crucial.
* GPUs that strike a good balance between performance and price.
* Server specifications that can handle AI workloads efficiently.

**Questions:**

1. What GPUs do you recommend for running AI models like Llama 2 in a local environment?
2. Any specific server specifications that have proven effective for small-scale AI development?

**Considerations:**

* We're aiming for a cost-effective solution without compromising significantly on performance.
* The setup is primarily for development and testing purposes, not large-scale production.

Thank you :)
2023-12-11T10:16:52
https://www.reddit.com/r/LocalLLaMA/comments/18frdxb/optimal_and_costeffective_gpus_and_server_specs/
MuieewitdaMu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18frdxb
false
null
t3_18frdxb
/r/LocalLLaMA/comments/18frdxb/optimal_and_costeffective_gpus_and_server_specs/
false
false
self
1
null
SparQ Attention: Bandwidth-Efficient LLM Inference
23
https://arxiv.org/abs/2312.04985 Very interesting! "Our proposed technique can be applied directly to off-the-shelf LLMs during inference, without requiring any modification to the pre-training setup or additional fine-tuning. We show how SparQ Attention can decrease the attention memory bandwidth requirements up to eight times without any loss in accuracy by evaluating Llama 2 and Pythia models on a wide range of downstream tasks." Code is available — it should be quickly implemented in Ooba/llama.cpp, right?
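My reading of the abstract, sketched as toy code (this is not the paper's reference implementation, and the shapes/parameters are made up): use only the largest-magnitude components of the query to cheaply approximate attention scores, then fetch just the top-k keys and values for exact attention, which is where the bandwidth saving comes from.

```python
# Toy single-query sketch of the SparQ idea: approximate, then attend sparsely.
import numpy as np

def sparq_attention(q, K, V, r=2, k=2):
    # 1. Approximate scores using only the r largest-|q| components,
    #    so only those columns of K need to be read.
    idx = np.argsort(-np.abs(q))[:r]
    approx = K[:, idx] @ q[idx]
    # 2. Exact softmax attention restricted to the top-k keys by approximate score.
    top = np.argsort(-approx)[:k]
    scores = K[top] @ q / np.sqrt(q.size)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[top]

rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
out = sparq_attention(q, K, V, r=4, k=8)
```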
2023-12-11T10:16:49
https://www.reddit.com/r/LocalLLaMA/comments/18frdwt/sparq_attention_bandwidthefficient_llm_inference/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18frdwt
false
null
t3_18frdwt
/r/LocalLLaMA/comments/18frdwt/sparq_attention_bandwidthefficient_llm_inference/
false
false
self
23
null
How to Implement Function Calling Based on a Local Model
23
I'll briefly explain how to implement function calling with a local model, similar to OpenAI's, and even package it into a service completely identical to OpenAI's. I've successfully gone through the entire process.

Firstly, we need to train a model on some specific SFT data. Even without dedicated training, a powerful model can somewhat perform function calling, but it requires more complex prompts. I've adopted a data format completely consistent with OpenAI's message format. There are some open-source datasets available, like https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2. As for how the SFT data is formatted for training, there are mainly two differences from regular SFT:

1. A system message is added, listing the functions currently available to the model, which is essentially a JSON schema list dumped into a string.
2. The assistant's output format is defined as: <|ASSISTANT|>: {content}<|FUNCTION_CALL|>{func_name}, <|PARAMS|>{params_string}

Training isn't difficult, right? The challenging part mainly lies in the inference module. Of course, we can run inference directly, but the model's output may not fully meet the requirements for function calling — especially the params part, which might not be valid JSON. A reasonable approach is to write a dedicated module that, based on the current function's schema, controls decoding so that the output is always valid JSON. The basic idea includes the following steps:

1. First, let the model generate normally until it produces the special token <|FUNCTION_CALL|>. At this point, we need to control the model to ensure that the subsequent output tokens are always a prefix of some function name. We can achieve this by setting the logits of non-compliant tokens to -inf. This way we always get a valid function name. If the user specifies a function name in the request, this step can be skipped.
2. We have now generated up to func_name; we then manually append the ", <|PARAMS|>" tokens before moving on to parameter generation.
3. Since we know the function name, we can look up the corresponding schema. To simplify, here's an example:

    {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"}
        }
    }

Then we can roughly follow this decoding process:

1. Directly emit {"name": "
2. Generate a string, stopping at the closing "
3. Directly emit , "age":
4. Generate an integer, stopping at }

When generating the integer, only allow tokens that satisfy the regex [0-9]+.

Okay, that's the basic principle. But actually implementing this process can be quite tricky. I can recommend two frameworks for decoding control: LMQL and Guidance. However, both have various issues and are not easy to use. LMQL templates have bugs with multiple nested recursions and don't support token healing; there's also no reuse of the KV cache between generations. Guidance only supports the llama.cpp and transformers backends, and its tokenization isn't entirely consistent with the transformers tokenization used during training. In short, there are many problems.

I implemented the entire process manually on top of exllamav2. Since I didn't want to deal with token healing, I also "removed" all tokens in the tokenizer that contain special JSON symbols like ,":[]{} — ensuring those symbols are always tokenized as individual tokens. Having implemented it myself, I can more easily handle issues like KV cache reuse and streaming responses. Finally, I wrapped it into a model service interface completely identical to OpenAI's, based on Llama 2 70B with expanded Chinese capabilities.

I've briefly shared my experience here, as I've gained much knowledge from this community before. I welcome discussion.

(Note: I'm Chinese, and this article was translated into English by GPT.)
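The function-name constraint in step 1 can be shown with a character-level toy (the real thing masks token logits, e.g. on top of exllamav2; the function names below are invented for the example): after the call token, only characters that keep the text a prefix of some registered function name are allowed.

```python
# Character-level toy of prefix-constrained decoding of a function name.
FUNCTIONS = ["get_weather", "get_stock_price", "send_email"]

def allowed_next_chars(prefix):
    # Characters that extend `prefix` toward at least one registered name.
    return sorted({name[len(prefix)] for name in FUNCTIONS
                   if name.startswith(prefix) and len(name) > len(prefix)})

def greedy_function_name(prefix=""):
    # Stand-in for the model: greedily pick the first allowed character
    # until a complete registered name is formed.
    while prefix not in FUNCTIONS:
        prefix += allowed_next_chars(prefix)[0]
    return prefix

name = greedy_function_name()
```

In the real decoder the model's own logits choose among the allowed characters (or tokens); masking everything else to -inf guarantees the result is always a registered name.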
2023-12-11T10:07:13
https://www.reddit.com/r/LocalLLaMA/comments/18fr8u3/how_to_implement_function_calling_based_on_a/
Low_Bad_6585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fr8u3
false
null
t3_18fr8u3
/r/LocalLLaMA/comments/18fr8u3/how_to_implement_function_calling_based_on_a/
false
false
self
23
null
Run local model on Windows 11 - tried LM Studio
1
I'm getting started with open-source models and tried LM Studio and vLLM, but both got stuck at some stage. Any leads on documentation, tutorials, or how-tos? Ollama on a Mac works perfectly! TIA
2023-12-11T10:00:01
https://www.reddit.com/r/LocalLLaMA/comments/18fr4m0/run_local_model_on_windows_11_tried_lm_studio/
AmazingPerspective79
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fr4m0
false
null
t3_18fr4m0
/r/LocalLLaMA/comments/18fr4m0/run_local_model_on_windows_11_tried_lm_studio/
false
false
self
1
null
Looks like Mistral-7B-Instruct-v0.2 is coming soon...
47
Buried in the newly updated Mistral docs is this:

> docker run --gpus all \
>   -e HF_TOKEN=$HF_TOKEN -p 8000:8000 \
>   ghcr.io/mistralai/mistral-src/vllm:latest \
>   --host 0.0.0.0 \
>   --model mistralai/Mistral-7B-Instruct-v0.2

That model does not exist on HF as of now. Looks like a new 7B model release is coming up!
2023-12-11T09:55:54
https://www.reddit.com/r/LocalLLaMA/comments/18fr2hn/looks_like_mistral7binstructv02_is_coming_soon/
distant_gradient
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fr2hn
false
null
t3_18fr2hn
/r/LocalLLaMA/comments/18fr2hn/looks_like_mistral7binstructv02_is_coming_soon/
false
false
self
47
null
Mistral officially announces Mixtral 8x7B
119
Mistral has officially announced the release of the MoE Mixtral 8x7B model in a tweet by the CEO. Tweet: https://x.com/arthurmensch/status/1734123845739548859?s=20 Blog post: https://mistral.ai/news/mixtral-of-experts/
2023-12-11T09:29:02
https://www.reddit.com/r/LocalLLaMA/comments/18fqohd/mistrsal_officially_announces_mixtral_8x7b/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fqohd
false
null
t3_18fqohd
/r/LocalLLaMA/comments/18fqohd/mistrsal_officially_announces_mixtral_8x7b/
false
false
self
119
null
Low GPU local machine for LLMs?
1
[removed]
2023-12-11T08:53:24
https://www.reddit.com/r/LocalLLaMA/comments/18fq6vf/low_gpu_local_machine_for_llms/
Data_Driven_Guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fq6vf
false
null
t3_18fq6vf
/r/LocalLLaMA/comments/18fq6vf/low_gpu_local_machine_for_llms/
false
false
self
1
null
Mixtral Official release and numbers
2
https://mistral.ai/news/mixtral-of-experts/ They released two models, base and instruct; the instruct version was trained with SFT and DPO, which they say gives GPT-3.5-level performance.
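For reference, the DPO objective mentioned here has a simple closed form (this is the standard loss from the DPO paper, not Mistral's training code; the log-probabilities below are made-up numbers): push the policy's preference for the chosen answer over the rejected one, relative to a frozen reference model.

```python
# Pairwise DPO loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)).
import math

def dpo_loss(chosen_logp, rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    margin = beta * ((chosen_logp - ref_chosen_logp)
                     - (rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# No preference signal yet (policy == reference): loss is exactly log(2).
baseline = dpo_loss(-5.0, -5.0, -5.0, -5.0)
# Policy already prefers the chosen answer: loss drops below log(2).
improved = dpo_loss(-4.0, -6.0, -5.0, -5.0)
```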
2023-12-11T08:42:21
https://www.reddit.com/r/LocalLLaMA/comments/18fq1i6/mixtral_official_release_and_numbers/
Eastwindy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fq1i6
false
null
t3_18fq1i6
/r/LocalLLaMA/comments/18fq1i6/mixtral_official_release_and_numbers/
false
false
self
2
null
Mistral has an even more powerful model in the prototype phase
143
[https://mistral.ai/news/la-plateforme/](https://mistral.ai/news/la-plateforme/) >**Mistral-medium**. Our highest-quality endpoint currently serves a prototype model, that is currently among the top serviced models available based on standard benchmarks. It masters English/French/Italian/German/Spanish and code and obtains a score of 8.6 on MT-Bench. The following table compare the performance of the base models of Mistral-medium, Mistral-small and the endpoint of a competitor.
2023-12-11T08:39:24
https://www.reddit.com/r/LocalLLaMA/comments/18fq00r/mistral_has_an_even_more_powerfull_model_in_the/
QuieselWusul
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fq00r
false
null
t3_18fq00r
/r/LocalLLaMA/comments/18fq00r/mistral_has_an_even_more_powerfull_model_in_the/
false
false
self
143
null
Mistral website was just updated
261
2023-12-11T08:16:51
https://mistral.ai/news/mixtral-of-experts/
Tucko29
mistral.ai
1970-01-01T00:00:00
0
{}
18fpp3d
false
null
t3_18fpp3d
/r/LocalLLaMA/comments/18fpp3d/mistral_website_was_just_updated/
false
false
default
261
null
Can someone explain what is mixtral 8x7B?
8
Everything is in the title. I understood that it's a MoE (mixture of experts). I can see that its original weights are a bit less than 8 times the size of Mistral's original weights, but it appears as one big model, not 8 small models. I thought the point of MoE was to have small specialised models and a "manager" deciding which model to run, thus getting the speed of a small model while only ever using a highly trained expert for the prompt. In other words, to run this kind of MoE, would the hardware requirements be those of a 7B model or a 56B one? Can we swap one of these small models for a homemade finetune? Am I missing something?
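The routing mechanism can be sketched in a few lines (this is a generic top-2 MoE router as commonly described, not Mixtral's actual code; the logits are made up): each token is sent to only the 2 highest-scoring of the 8 experts, so per-token compute is roughly two experts' worth, but all expert weights must still sit in memory — which is why the checkpoint is near 8x7B while inference cost is closer to a ~13B dense model.

```python
# Toy top-2 expert routing for a single token.
import numpy as np

def top2_route(router_logits):
    # Pick the two highest-scoring experts and softmax their logits
    # to get mixing weights for combining the two expert outputs.
    top2 = np.argsort(-router_logits)[:2]
    w = np.exp(router_logits[top2] - router_logits[top2].max())
    return top2, w / w.sum()

experts, weights = top2_route(
    np.array([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3]))
# Experts 1 and 4 win; their outputs are blended by `weights`.
```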
2023-12-11T07:47:47
https://www.reddit.com/r/LocalLLaMA/comments/18fpan7/can_someone_explain_what_is_mixtral_8x7b/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fpan7
false
null
t3_18fpan7
/r/LocalLLaMA/comments/18fpan7/can_someone_explain_what_is_mixtral_8x7b/
false
false
self
8
null
Best Model for Medical Use Case Training
2
What's the best one?
2023-12-11T07:45:39
https://www.reddit.com/r/LocalLLaMA/comments/18fp9kq/best_model_for_medical_use_case_training/
Role-Fluffy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fp9kq
false
null
t3_18fp9kq
/r/LocalLLaMA/comments/18fp9kq/best_model_for_medical_use_case_training/
false
false
self
2
null
Was I wrong to look at Text Gen. Models? I need an Editor! Will T2T models prove leaps and bounds better for revising and/or writer's block assistance?
2
So I'd been scouring TG for a code gen./editor and that's all settled, went with Deepseek-Coding-Instructor. Because of this and my knack for getting caught in the weeds and jumping down every rabbit hole, I'd basically considered/read all about the generalist or creative-type TG models. I planned on going with LZLV-70B but I wasn't too thrilled about it. And the recent release of the MoE Mistral was/is incredibly tempting, too. I suppose a generator can refine a piece of writing just as well, but "historically", GANs tend to do their best work within absurdly narrow confines so I can't imagine that a model for the task of text gen. would come close to a model dedicated to text editing, no? But they seem wildly unpopular. For the few weeks of on-and-off lurking I've done here, I've never seen a T2T model thread. Perhaps that is because the nature of the beast. 1 of the first models on the page was Grammarly's, which was half a surprise, half not. Besides ChatGPT (which tech. was startup tech.), most models are from yoooge corporations that haven't looked to monetize 'em in the consumer market *justtttt yet*. But I digress, my only point here was to say that the field isn't saturated like TGen. & IGen. and I suppose that is because of...however the hell T2T models are trained? It couldn't be a very open-source friendly means. So, with what *is* avail. on HF to use, what's the latest and greatest couple T2T models that are worth a go? From what I have scanned through...it sounds like some of these are years old models and some of them require ungodly computing power to run 'em. Would LZLV-70B be a better bet than T5...FLAN and whatever else there is? Figure LZLV + a LoRA trained off of my best pieces of work + my favourite authors pieces might be my best bet for my creative editor buddy guy pal who occasionally pulls me out of a creative rut by providing some fun hallucinations. 
Though now that I think of it, maybe I should give my Uncle a call, he's been stuck in a trip since '96, I bet his hallucinations blow LZLV's outta the water. Thanks for reading, cheers, ^(ps, sorry I'm not funny) AA
2023-12-11T07:02:21
https://www.reddit.com/r/LocalLLaMA/comments/18fookg/was_i_wrong_to_look_at_text_gen_models_i_need_an/
ArthurAardvark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fookg
false
null
t3_18fookg
/r/LocalLLaMA/comments/18fookg/was_i_wrong_to_look_at_text_gen_models_i_need_an/
false
false
self
2
{'enabled': False, 'images': [{'id': 'aT_NzUjXz4T4gC14KEKpdMPYAxfCYyQpPSjmLu7E1hY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?width=108&crop=smart&auto=webp&s=bd3fdbffb62f61ecadbefaa9eb4088f492ded38a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?width=216&crop=smart&auto=webp&s=32e07fbf45bc9e0309517b72e1b3f43fd47c9ff2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?width=320&crop=smart&auto=webp&s=9e9a4c637a6f55620d2230cf343fd71b2da6de44', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?width=640&crop=smart&auto=webp&s=1a0af257c1eced5e4e3a6acb09fd6ab5c550a8de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?width=960&crop=smart&auto=webp&s=ef39073124cd8c301caf0a7bdd88133f5c3dac4d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?width=1080&crop=smart&auto=webp&s=bbe116c381736fc588f23c7b955a088ceafa602c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GibNZK79I186k6olB8YFoVBXiBnTHUlhNbG2N08geSs.jpg?auto=webp&s=241f6ba539690628d89fe8924316162e5ab0347a', 'width': 1200}, 'variants': {}}]}
LLaVA-Med GGUF, or at least some tutorial on how to create it?
2
I need LLaVA-Med GGUF for a project I'm working on so that I can run it using llama.cpp. Can someone give me a link if it exists, or at least point me to a tutorial on how to make a quant for a multi-modal model like this one? Thanks.
2023-12-11T06:46:03
https://www.reddit.com/r/LocalLLaMA/comments/18fofzt/llavamed_gguf_or_atleast_some_tutorial_or_such_to/
Shoddy_Vegetable_115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fofzt
false
null
t3_18fofzt
/r/LocalLLaMA/comments/18fofzt/llavamed_gguf_or_atleast_some_tutorial_or_such_to/
false
false
self
2
null
Real-time typing recognition
11
Does anyone remember ICQ? It allowed the other person in the chat to see what you typed as you typed it. Is there any way to implement a parser or something that can interpret in real time what is being typed by a user? I think it'd be interesting to have a language model able to confirm it is following what you are typing as you type it out, maybe even politely interrupting occasionally, kind of like how autocomplete works by finishing your sentences. Paired with Whisper, it could help make more of a human-like interaction. Likewise, also having the ability to add on to what the language model is producing in real time, showing that you follow, are confused by, or disagree with it. Does this already exist?
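The ICQ-style behaviour described above usually comes down to debouncing: re-query the model only after the typist pauses. A toy, stdlib-only sketch of the gating logic (keystroke timestamps are simulated here; a real UI would feed in input events):

```python
# Debounce gate for live typing: fire a model query only after the user has
# paused for `quiet` seconds. Timestamps are simulated for illustration.

def fire_points(keystroke_times, quiet=0.5):
    """Return the timestamps after which a model query should fire."""
    fires = []
    for i, t in enumerate(keystroke_times):
        nxt = keystroke_times[i + 1] if i + 1 < len(keystroke_times) else None
        if nxt is None or nxt - t >= quiet:  # pause long enough -> query
            fires.append(t)
    return fires

print(fire_points([0.0, 0.1, 0.2, 1.0, 1.1]))  # -> [0.2, 1.1]
```

In practice each fire point would re-submit the partial text to the model, which is roughly how "finish my sentence" autocomplete behaves.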
2023-12-11T05:54:16
https://www.reddit.com/r/LocalLLaMA/comments/18fnnci/realtime_typing_recognition/
IntrepidlyIndy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fnnci
false
null
t3_18fnnci
/r/LocalLLaMA/comments/18fnnci/realtime_typing_recognition/
false
false
self
11
null
How do I use Nous-Capybara 34B?
3
Seems like a simple question, I know. But I am still getting the bug where it generates only </s> and nothing else, weeks after it was supposedly fixed. I've updated KoboldCPP, Ooba, and SillyTavern, as well as downloaded new quants from TheBloke, and I can still get nothing out of the thing, no matter which backend I use. I'd just assume it was garbage, but so many people online say it's great. I love my lzlv, but I'd like a smaller, faster model that's somewhere between it and Mistral. Anyone willing to help me out here? Or maybe just suggest a different current 34b model?
2023-12-11T05:51:04
https://www.reddit.com/r/LocalLLaMA/comments/18fnlhq/how_do_i_use_nouscapybara_34b/
smile_e_face
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fnlhq
false
null
t3_18fnlhq
/r/LocalLLaMA/comments/18fnlhq/how_do_i_use_nouscapybara_34b/
false
false
self
3
null
Fine-tuning? Looking for help
6
Hey! I'm usually in the policy world, but I'm eager to get hands-on to learn. I believe understanding the tech firsthand can really boost my work. Does anyone have a beginner-friendly guide or tips on fine-tuning LLMs? I don't have special equipment (M1 Mac), but I'm happy to throw in a few dollars toward a server to speed things up. Where should I start, and what are the basics I need to understand? Thanks for any advice or resources you can share! Looking forward to geeking out with you all! Cheers!
2023-12-11T05:34:03
https://www.reddit.com/r/LocalLLaMA/comments/18fnbpv/finetuning_looking_for_help/
nembal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fnbpv
false
null
t3_18fnbpv
/r/LocalLLaMA/comments/18fnbpv/finetuning_looking_for_help/
false
false
self
6
null
Got myself a Ghetto 4way rtx 4090 rig for local LLM
17
https://preview.redd.it/trrsulesnl5c1.jpg?width=980&format=pjpg&auto=webp&s=9e004ab938808dc76abf05f6de4733019de5737a
2023-12-11T05:15:39
https://www.reddit.com/r/LocalLLaMA/comments/18fn0z5/got_myself_a_ghetto_4way_rtx_4090_rig_for_local/
NerdyTurd6969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fn0z5
false
null
t3_18fn0z5
/r/LocalLLaMA/comments/18fn0z5/got_myself_a_ghetto_4way_rtx_4090_rig_for_local/
false
false
https://b.thumbs.redditm…4brok2vs-xcs.jpg
17
null
I have a Mac Studio (M2 Ultra). How do I create an API server for llama.cpp which I access remotely? Something like ChatGPT for my LAN
4
I know how to use llama.cpp and run local servers in terminal, but I want to be able to send API requests from other machines on the network (or even out of network if it's possible). This Mac Studio is located in my company office and I should use the company VPN to connect to it (I can SSH or do Screen Sharing).
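A common setup, sketched under assumptions: start llama.cpp's bundled server on the Mac Studio with something like `./server -m model.gguf --host 0.0.0.0 --port 8080` so it binds to all interfaces, then call its `/completion` endpoint from any machine on the LAN or over the company VPN. The host IP below is a placeholder:

```python
# Sketch of a LAN client for a llama.cpp server. Assumes the server runs on
# the Mac Studio with: ./server -m model.gguf --host 0.0.0.0 --port 8080
# The IP address below is a placeholder for the Mac Studio's LAN address.
import json
import urllib.request

def build_request(host, prompt, n_predict=128, port=8080):
    """Build (but do not send) a POST request for llama.cpp's /completion API."""
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/completion",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_request("192.168.1.50", "Hello")
print(req.full_url)  # -> http://192.168.1.50:8080/completion
# To actually send it: urllib.request.urlopen(req).read()
```

For out-of-network access, keeping the server reachable only via the VPN (rather than port-forwarding) is the safer default.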
2023-12-11T03:13:14
https://www.reddit.com/r/LocalLLaMA/comments/18fkwhf/i_have_a_mac_studio_m2_ultra_how_do_i_create_an/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fkwhf
false
null
t3_18fkwhf
/r/LocalLLaMA/comments/18fkwhf/i_have_a_mac_studio_m2_ultra_how_do_i_create_an/
false
false
self
4
null
Which Mac specs are needed to learn LLM also for inference, testing or evaluating accuracy etc.
2
Hi everyone, I am a total beginner in LLMs and would really appreciate some help. I want to learn LLMs, and I might have to download them and run them locally to test, play around, and learn different concepts. Standard M3 Pro specs are: 11-core CPU, 14-core GPU, 18 GB RAM. Q1 - Is 18 GB RAM enough? For running / training small to medium-sized LLMs (like 7B), should it be more than enough? Q2 - In what instances do people / researchers run LLMs locally? Why don't they do it in the cloud, which is way cheaper than upgrading your laptop to 128 GB or something with 40 GPU cores? Just looking for some info.
2023-12-11T02:56:09
https://www.reddit.com/r/LocalLLaMA/comments/18fkksn/which_mac_specs_are_needed_to_learn_llm_also_for/
Adventurous-Sign4520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fkksn
false
null
t3_18fkksn
/r/LocalLLaMA/comments/18fkksn/which_mac_specs_are_needed_to_learn_llm_also_for/
false
false
self
2
null
Is there some trick to posting to the LLM leaderboard?
2
I've now sent over a dozen models to the leaderboard for evaluation, and only one has actually completed. I know the models work (and work well). Each time, they enter the pending queue, then the running queue... then they disappear, no sign of them anywhere. I figured I was doing something wrong, but then one got through with the exact same settings as 3 that failed. Is there something I'm missing?
2023-12-11T02:52:59
https://www.reddit.com/r/LocalLLaMA/comments/18fkikx/is_there_some_trick_to_posting_to_the_llm/
athirdpath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fkikx
false
null
t3_18fkikx
/r/LocalLLaMA/comments/18fkikx/is_there_some_trick_to_posting_to_the_llm/
false
false
self
2
null
How do you find out which model of any size is best for a certain task? Like the best 7b, 13b, and 30b models at following instructions or literature analysis?
1
[removed]
2023-12-11T02:38:08
https://www.reddit.com/r/LocalLLaMA/comments/18fk8d6/how_do_you_find_out_which_model_of_any_size_is/
Rich_Examination_637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fk8d6
false
null
t3_18fk8d6
/r/LocalLLaMA/comments/18fk8d6/how_do_you_find_out_which_model_of_any_size_is/
false
false
self
1
null
Need some advice, taking my first dip into LLM
4
Hi All, Firstly, thank you for the awesome community; I've learnt a lot over the months reading this subreddit. I am looking to take my first dip into running my own LLM. I am scouting the used marketplace and found a decently priced Mac Studio M1 Max, 64 GB and 1 TB. I know that the open-source community is moving fast, but I'd like to get some points of view on whether I should go ahead and make my first purchase to play around with LLMs. I know there are multiple posts about Macs and I've read them all, but I just can't decide. I am taking a career break come January, and I'd like to have hardware that I can play around with for a bit. Any thoughts are welcome.
2023-12-11T02:22:45
https://www.reddit.com/r/LocalLLaMA/comments/18fjxra/need_some_advice_taking_my_first_dip_into_llm/
Alrightly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fjxra
false
null
t3_18fjxra
/r/LocalLLaMA/comments/18fjxra/need_some_advice_taking_my_first_dip_into_llm/
false
false
self
4
null
Could goliath-120b be recreated on top of the 32k context Yarn base model?
22
I've just discovered this model that extends the base Llama 2 to 32k context with much better results than scaling up the alpha parameter: [https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) Would it be possible to re-create the two fine tuned models Xwin and Euryale, and then the goliath itself, using this extended model as a base? If it's possible, why is no one doing it? I've seen exactly zero fine tuned Yarn models so far, so I'm wondering if I'm missing something obvious. It seems like it would be a direct improvement.
2023-12-11T02:15:26
https://www.reddit.com/r/LocalLLaMA/comments/18fjsv1/could_goliath120b_be_recreated_on_top_of_the_32k/
aikitoria
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fjsv1
false
null
t3_18fjsv1
/r/LocalLLaMA/comments/18fjsv1/could_goliath120b_be_recreated_on_top_of_the_32k/
false
false
self
22
{'enabled': False, 'images': [{'id': 'zvMLfOJtrIs8oCN8f-xJ-wMIOOb7cRqM2ZNXCTnZP88', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?width=108&crop=smart&auto=webp&s=db20640534f321d327c13001460be1d802feca72', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?width=216&crop=smart&auto=webp&s=9b65bdb5cac30590a8fef9cfaf0ba468a31d2420', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?width=320&crop=smart&auto=webp&s=23bb035bff14636dab28a814c7e7074bc66377f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?width=640&crop=smart&auto=webp&s=850302a0a02bd8da6dbd81ac7615fd6f0cf32f3b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?width=960&crop=smart&auto=webp&s=932cebe46720e6bf6a073b27245cb3340ddc1cad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?width=1080&crop=smart&auto=webp&s=8459d9ba7daa1bb321f60b5990d8ae4bf04f16e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/L8srrgOYB80t4f8JNUwJFBuV14P03TqOm9wk0h-pYrs.jpg?auto=webp&s=eaa1d90a7d0afd9840c8015d29946c04f850e366', 'width': 1200}, 'variants': {}}]}
Fine Tuning using Git Commits
6
Has anyone tried fine-tuning a model using git commit information? Could this be a good way to steer the code-gen capabilities toward the code style of your company, and better reproduce its implementation patterns?
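One way to build such a dataset (a sketch; the log format string and the pair schema are assumptions to adapt to your tuning framework) is to parse `git log --pretty=format:'---%n%s' -p` output into (commit message, diff) pairs:

```python
# Sketch: turn git history into fine-tuning pairs. Parses the output of
#   git log --pretty=format:'---%n%s' -p
# into {instruction: commit subject, output: diff} records. The format string
# and the pair schema are assumptions, not a standard recipe.

def commits_to_pairs(log_text):
    pairs = []
    for block in log_text.split("---\n")[1:]:
        message, _, diff = block.partition("\n")  # first line is the subject
        pairs.append({"instruction": message.strip(), "output": diff.strip()})
    return pairs

sample = "---\nfix null check\n+ if x is None: return\n---\nadd docs\n+ # usage\n"
print(len(commits_to_pairs(sample)))  # -> 2
```

Whether (message -> diff) or (diff -> message) is the better direction depends on whether the goal is code generation or commit summarization.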
2023-12-11T01:02:47
https://www.reddit.com/r/LocalLLaMA/comments/18fifc2/fine_tuning_using_git_commits/
guiHotz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fifc2
false
null
t3_18fifc2
/r/LocalLLaMA/comments/18fifc2/fine_tuning_using_git_commits/
false
false
self
6
null
Fireworks.ai Mixtral vs GPT-4 turbo
32
2023-12-11T00:22:49
https://www.reddit.com/gallery/18fhmx1
aue_sum
reddit.com
1970-01-01T00:00:00
0
{}
18fhmx1
false
null
t3_18fhmx1
/r/LocalLLaMA/comments/18fhmx1/fireworksai_mixtral_vs_gpt4_turbo/
false
false
https://b.thumbs.redditm…TEteA3qSN92E.jpg
32
null
Is Llama Guard any good?
1
[removed]
2023-12-11T00:17:02
https://www.reddit.com/r/LocalLLaMA/comments/18fhiq9/is_llama_guard_any_good/
InevitableSky2801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fhiq9
false
null
t3_18fhiq9
/r/LocalLLaMA/comments/18fhiq9/is_llama_guard_any_good/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
Sorry if this is a dumb question but is the main idea behind LLMs to output text based on user input?
21
I just want to clarify my understanding of large language models, if someone doesn't mind dumbing it down for me. My understanding is that there is of course a lot of math and computation that happens behind the scenes, but ultimately this boils down to input -> LLM -> output. It returns text based on user input; is this right? I sense I am probably missing a lot more that LLMs do. My understanding is that OpenAI is the company, ChatGPT is the name given to that LLM, and Transformers are an umbrella term for LLMs. And if this is true, I see that we have things like Mistral 7B, Ollama, Falcon and others; if the idea is that user input goes in and text comes out, why haven't these guys created their own ChatGPT website? Thanks
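The "input -> LLM -> output" picture above can be made concrete with a toy next-token loop; the hard-coded bigram table stands in for the billions of learned parameters in a real model:

```python
# Toy illustration of what an LLM does: repeated next-token prediction.
# The lookup table below stands in for a real model's learned parameters.
TABLE = {"the": "cat", "cat": "sat", "sat": "down", "down": "<end>"}

def generate(token, max_steps=10):
    """Greedily extend the input one token at a time until <end>."""
    out = [token]
    for _ in range(max_steps):
        nxt = TABLE.get(out[-1], "<end>")
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # -> the cat sat down
```

A real LLM replaces the table lookup with a probability distribution over its whole vocabulary at every step, but the loop structure is the same.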
2023-12-11T00:14:41
https://www.reddit.com/r/LocalLLaMA/comments/18fhgzf/sorry_if_this_is_a_dumb_question_but_is_the_main/
Suspicious_Two1151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fhgzf
false
null
t3_18fhgzf
/r/LocalLLaMA/comments/18fhgzf/sorry_if_this_is_a_dumb_question_but_is_the_main/
false
false
self
21
null
Help, please: LLM 'glitches' after ~14 Prompt/Response
5
llama2_70b_chat_uncensored.Q5_K_M.gguf running in Text Gen WebUI on M3 128 / 16-core CPU / 40-core GPU. The following issue repeats itself regardless of which model I run (7B, 13B, 33B, 70B, all quantized): after 14 prompt/response exchanges, the model responds in an infinite loop of gibberish. Is there a configuration parameter that I might have set wrong? I'm testing to see if I can train an assistant on a particular dataset that I work with. I'm attempting to teach it defined terms and the computations/logic needed to compute a specific type of scoring that I use for my work. I've had great success w/GPT and I'm trying to reproduce that locally so I can work independently and not share proprietary trade secrets with GPT. I'm a newb to running locally and configuring this stuff. Any help would be greatly appreciated. https://preview.redd.it/0e4tm7sgxj5c1.png?width=1696&format=png&auto=webp&s=a80e64fa2eee2a0a1f12824031385f30fb6ba0b6 https://preview.redd.it/nvd437sgxj5c1.png?width=3388&format=png&auto=webp&s=0b126abdbacbe4a9e827c4dc9912e8707caf784d https://preview.redd.it/x92z8whowj5c1.png?width=1842&format=png&auto=webp&s=01992551fb126c41518c7e06983b89c308f640f9
2023-12-10T23:27:41
https://www.reddit.com/r/LocalLLaMA/comments/18fgidr/help_please_llm_glitches_after_14_promptresponse/
knob-0u812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fgidr
false
null
t3_18fgidr
/r/LocalLLaMA/comments/18fgidr/help_please_llm_glitches_after_14_promptresponse/
false
false
https://a.thumbs.redditm…OVwAeOPKIpi4.jpg
5
null
Best open-source locally downloadable LLM?
4
Preferably uncensored; token generation rate doesn't matter, as I'll leave my PC running for days on a single prompt. Just the best for text completion and general inquiries.
2023-12-10T21:18:26
https://www.reddit.com/r/LocalLLaMA/comments/18fdo2v/best_opensource_locally_downloadable_llm/
Role-Fluffy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fdo2v
false
null
t3_18fdo2v
/r/LocalLLaMA/comments/18fdo2v/best_opensource_locally_downloadable_llm/
false
false
self
4
null
Are big models always better than small ones for Transfer Learning?
3
I have a private niche dataset that I've personally compiled over years in Excel, and I feel like using Mistral 7B for transfer learning instead of Llama 2 70B. I know it impacts performance, but does this choice impact performance significantly, given my dataset is small (approximately 2.4 million rows), if I choose Mistral 7B instead of Llama 2 70B?
2023-12-10T21:16:30
https://www.reddit.com/r/LocalLLaMA/comments/18fdmlb/are_big_models_always_better_than_small_ones_for/
hackeddevil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fdmlb
false
null
t3_18fdmlb
/r/LocalLLaMA/comments/18fdmlb/are_big_models_always_better_than_small_ones_for/
false
false
self
3
null
Further pre-train llama
3
So, I need a local LLM that understands Portuguese. However, even Llama 70B seems to mix things up in this language. After researching, I found out that Llama 70B has less than one percent of its training data in Portuguese. Therefore, fine-tuning wouldn't be the choice, since it doesn't even have enough tokens in my language. I'd like to know how I can further pre-train Llama and how much it would cost me.
2023-12-10T21:14:56
https://www.reddit.com/r/LocalLLaMA/comments/18fdlcu/further_pretrain_llama/
Original_Job6327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fdlcu
false
null
t3_18fdlcu
/r/LocalLLaMA/comments/18fdlcu/further_pretrain_llama/
false
false
self
3
null
Local Raspberry pi LLM setup
1
[removed]
2023-12-10T21:10:57
https://www.reddit.com/r/LocalLLaMA/comments/18fdic3/local_raspberry_pi_llm_setup/
dnllvrvz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fdic3
false
null
t3_18fdic3
/r/LocalLLaMA/comments/18fdic3/local_raspberry_pi_llm_setup/
false
false
self
1
{'enabled': False, 'images': [{'id': '9FQgD2CU2lz6aGiGDnwfQiINpgqUaIvcC9jwJuKV7TY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w1MEqCMLpYNwwvzRg_E4LUF02IAngouDr3q0QuquNZE.jpg?width=108&crop=smart&auto=webp&s=41353cb98251b7ff46ec5c05ea4c34255b8533be', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/w1MEqCMLpYNwwvzRg_E4LUF02IAngouDr3q0QuquNZE.jpg?width=216&crop=smart&auto=webp&s=5f7f517a1f150469da1d65f2c43e7cf53b01337e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/w1MEqCMLpYNwwvzRg_E4LUF02IAngouDr3q0QuquNZE.jpg?width=320&crop=smart&auto=webp&s=8bf27a45c00e897ed1f4c16533dc7b86c381e203', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/w1MEqCMLpYNwwvzRg_E4LUF02IAngouDr3q0QuquNZE.jpg?auto=webp&s=a369713162d8545ae09dfe7be357ce055b438f08', 'width': 480}, 'variants': {}}]}
Instruction at the beginning or end?
3
As I slowly go through building my internal tool for evaluating LLMs, another thing came to mind. When presented with a bigger task / tasks with more context - should you put the instructions at the beginning or the end of the prompt? I.e., let's assume we have the alpaca format and the task is summarization ```md ### Input: <long text here> ### Instructions: Please summarize input text in a single short, descriptive, information dense paragraph, including all the most important details and highlights. ### Response: ``` vs ```md ### Instructions: Please summarize input text in a single short, descriptive, information dense paragraph, including all the most important details and highlights. ### Input: <long text here> ### Response: ``` Are there any studies about that?
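One way to answer this empirically is to build both orderings programmatically and score them against your own evaluation set; a small sketch using the alpaca layout from the question:

```python
# Sketch of an A/B harness for instruction placement: build the same
# summarization prompt with the instruction before vs. after the input.
INSTR = ("Please summarize input text in a single short, descriptive, "
         "information dense paragraph.")

def build(text, instruction_first=True):
    """Return an alpaca-style prompt with the instruction first or last."""
    instr = f"### Instructions:\n{INSTR}\n\n"
    inp = f"### Input:\n{text}\n\n"
    head = instr + inp if instruction_first else inp + instr
    return head + "### Response:\n"

a = build("long text here", instruction_first=True)
b = build("long text here", instruction_first=False)
print(a.index("### Instructions") < a.index("### Input"))   # -> True
print(b.index("### Instructions") < b.index("### Input"))   # -> False
```

Running both variants over the same inputs and comparing summary quality is a cheap way to settle the question for a specific model, regardless of what the literature says.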
2023-12-10T20:31:38
https://www.reddit.com/r/LocalLLaMA/comments/18fcmip/instruction_at_the_beginning_or_end/
VertexMachine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fcmip
false
null
t3_18fcmip
/r/LocalLLaMA/comments/18fcmip/instruction_at_the_beginning_or_end/
false
false
self
3
null
PDF mapping using LLM
1
[removed]
2023-12-10T20:20:05
https://www.reddit.com/r/LocalLLaMA/comments/18fcd88/pdf_mapping_using_llm/
Tejasw__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fcd88
false
null
t3_18fcd88
/r/LocalLLaMA/comments/18fcd88/pdf_mapping_using_llm/
false
false
self
1
null
Easiest way to show my model to my mom?
10
I fine-tuned Mistral 7B for a school project and created a chat application, and I want to show my mom. My problem is that my PC in my dorm can run this just fine, but for the holidays I'm at home with a laptop that can't run the model locally. What would be the easiest/cheapest way to host this with a chat interface like Streamlit? If there are other options that I'm overlooking, please tell me; my mom is very intrigued now that I told her about LLMs lol.
2023-12-10T19:56:00
https://www.reddit.com/r/LocalLLaMA/comments/18fbtj0/easiest_way_to_show_my_model_to_my_mom/
ItsPattex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fbtj0
false
null
t3_18fbtj0
/r/LocalLLaMA/comments/18fbtj0/easiest_way_to_show_my_model_to_my_mom/
false
false
self
10
null
Long Form Writing Models?
6
While 200k-token context windows in language models are impressive, I've noticed that their output doesn't match this potential scale. In my experience, trying various models with contexts ranging from 4k to 200k tokens, the outputs generally hover between 500 and 1500 words. This leaves me wondering: is there a model capable of generating much longer cohesive texts, akin to a full-length novel like 'Lord of the Rings', when provided with a list of story bullet points? Specifically, I'm looking for outputs that exceed 1.5k words. A model that caught my interest is **Aurelian: 70B 32K story-writing**, but there is no GGUF version at the moment for me to test.
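A common workaround, sketched below with a stub standing in for the real model call, is to generate chapter-by-chapter: each call gets the next bullet point plus a rolling summary of what has been written so far, so no single generation has to exceed the model's natural output length:

```python
# Sketch: long-form generation by chaining short generations. `model` is a
# stand-in callable; swap in any local LLM behind the same interface.

def write_novel(model, bullet_points):
    chapters, summary = [], ""
    for point in bullet_points:
        prompt = (f"Story so far: {summary or '(start)'}\n"
                  f"Next plot point: {point}\nWrite this chapter:")
        chapters.append(model(prompt))
        summary += f" {point}."          # crude rolling summary
    return "\n\n".join(chapters)

stub = lambda p: f"[chapter covering: {p.splitlines()[1]}]"
book = write_novel(stub, ["the ring is found", "the ring is destroyed"])
print(len(book.split("\n\n")))  # -> 2
```

The quality bottleneck then shifts from output length to how well the rolling summary preserves continuity between chapters.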
2023-12-10T19:44:37
https://www.reddit.com/r/LocalLLaMA/comments/18fbk5e/long_form_writing_models/
Danny_Davitoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fbk5e
false
null
t3_18fbk5e
/r/LocalLLaMA/comments/18fbk5e/long_form_writing_models/
false
false
self
6
null
All models failing 20 questions?
18
Is any local model able to play "20 questions", where either you or the LLM has to guess an object in 20 questions or less? I feel like this is a pretty interesting benchmark pointing towards non-autocomplete emergent intelligence. All models I've tried (up to 33B) fail at this dynamic, either changing roles, saying "yes" to anything, continuing to play with themselves, or just not guessing or categorising correctly. GPT-4 is close but also seems to fumble.
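A harness for this benchmark can keep the answerer deterministic so only the guesser is under test. In the toy sketch below, the "guesser" is a binary search over a fixed candidate list, standing in for an LLM asking alphabetical yes/no questions; a real harness would substitute the model's questions:

```python
# Toy "20 questions" harness: deterministic answerer, toy guesser.
# The binary-search guesser stands in for an LLM asking yes/no questions.
CANDIDATES = ["apple", "banana", "carrot", "dog"]

def play(secret, candidates=CANDIDATES, max_q=20):
    """Return the number of questions needed to guess `secret`, or None."""
    pool = sorted(candidates)
    for q in range(1, max_q + 1):
        mid = pool[len(pool) // 2]
        # question: "is it `mid`, or does it sort at/after `mid`?"
        if secret == mid:
            return q                      # guessed it
        pool = [c for c in pool if (c >= mid) == (secret >= mid) and c != mid]
    return None

print(play("dog"))  # -> 2
```

Scoring a model is then just counting how many questions it needs across many secrets, with role-switching or non-answers counted as failures.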
2023-12-10T19:31:00
https://www.reddit.com/r/LocalLLaMA/comments/18fb8yh/all_models_failing_20_questions/
Fun_Land_6604
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18fb8yh
false
null
t3_18fb8yh
/r/LocalLLaMA/comments/18fb8yh/all_models_failing_20_questions/
false
false
self
18
null
Benchmarks results reveal Mixtral-8x7B BEATS LLaMA-2-70b
74
Open Compass have just benched it. Confirms informal benchmarks I have seen elsewhere.
2023-12-10T19:27:38
https://i.redd.it/tn1yqd3yqi5c1.jpeg
PookaMacPhellimen
i.redd.it
1970-01-01T00:00:00
0
{}
18fb675
false
null
t3_18fb675
/r/LocalLLaMA/comments/18fb675/benchmarks_results_reveal_mixtral8x7b_beats/
false
false
https://b.thumbs.redditm…xF2gjLmuDfUo.jpg
74
{'enabled': True, 'images': [{'id': 'BvjbnjuHq4Bk4Py5qg9Y-jSpHkwnKI5tQWCYVS3Ed20', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/tn1yqd3yqi5c1.jpeg?width=108&crop=smart&auto=webp&s=e6785f345271a45970d0fb3762bef2ef953666b0', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/tn1yqd3yqi5c1.jpeg?width=216&crop=smart&auto=webp&s=71099910f37ffef734a64a17668f5a81952f8fa3', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/tn1yqd3yqi5c1.jpeg?width=320&crop=smart&auto=webp&s=8edbad832d6d48762e9ac66a5cc1157b36e9b3a0', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/tn1yqd3yqi5c1.jpeg?width=640&crop=smart&auto=webp&s=d16ca650461c6e9ce7361af1d60a6877fea6f7e3', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/tn1yqd3yqi5c1.jpeg?width=960&crop=smart&auto=webp&s=a91dba349f987720932631421d10098128f28f8c', 'width': 960}], 'source': {'height': 499, 'url': 'https://preview.redd.it/tn1yqd3yqi5c1.jpeg?auto=webp&s=fed540f9198a296d7eeb3e3ff8f3b81c01d5a275', 'width': 998}, 'variants': {}}]}
Please share your latest coding llm stack
3
As the title suggests, please share your coding LLM stack. Are you primarily using DeepSeek, WizardCoder or something else? Would love to learn from people who know their stuff.
2023-12-10T19:09:55
https://www.reddit.com/r/LocalLLaMA/comments/18faryl/please_share_your_latest_coding_llm_stack/
Jasssinghhira
self.LocalLLaMA
2023-12-11T00:22:40
0
{}
18faryl
false
null
t3_18faryl
/r/LocalLLaMA/comments/18faryl/please_share_your_latest_coding_llm_stack/
false
false
self
3
null
Ollama on jetson orin not using GPU
1
[removed]
2023-12-10T18:53:27
https://www.reddit.com/r/LocalLLaMA/comments/18faew1/ollama_on_jetson_orin_not_using_gpu/
Organic_Club_5959
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18faew1
false
null
t3_18faew1
/r/LocalLLaMA/comments/18faew1/ollama_on_jetson_orin_not_using_gpu/
false
false
self
1
null
Some small pieces of statistics. Mixtral-8x7B-Chat (a Mixtral finetune by Fireworks.ai) on Poe.com gets the armageddon question right. Not even 70Bs can get this (surprisingly, they can't even make a legal hallucination that makes sense). I think everyone would find this interesting.
87
2023-12-10T18:38:13
https://i.redd.it/t9gcrt03ii5c1.png
bot-333
i.redd.it
1970-01-01T00:00:00
0
{}
18fa36a
false
null
t3_18fa36a
/r/LocalLLaMA/comments/18fa36a/some_small_pieces_of_statistics/
false
false
https://b.thumbs.redditm…kj35HqGnYi-E.jpg
87
{'enabled': True, 'images': [{'id': 'ReN_m3MKWCTCrWYQT_GbBHh602fyHo00siqeR9SUNTA', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?width=108&crop=smart&auto=webp&s=0ad3058953e9c20f02bdb633831d0101ebdc62a2', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?width=216&crop=smart&auto=webp&s=dd5cce33fada962ce52dcaba758a9d635a00c95f', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?width=320&crop=smart&auto=webp&s=0e83face28ebdaa978d6c9a28b05338f19cb29a8', 'width': 320}, {'height': 522, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?width=640&crop=smart&auto=webp&s=a5fa6472c6cecc4f4c16de34f34bfa653bc6e74f', 'width': 640}, {'height': 783, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?width=960&crop=smart&auto=webp&s=edfd8246aa045c79fc037b4e2b7dd63ef3d1de2c', 'width': 960}, {'height': 881, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?width=1080&crop=smart&auto=webp&s=a581931ae5f3871a0d141e0861e2845634986774', 'width': 1080}], 'source': {'height': 1066, 'url': 'https://preview.redd.it/t9gcrt03ii5c1.png?auto=webp&s=6bfd2a50577dead07c42816952f2d7fb4b0b7a85', 'width': 1306}, 'variants': {}}]}
SLM specifically for creating JSON?
5
In 90% of cases in which I use LLMs, I actually only convert whatever unstructured input I have into JSON to further work with it. That model does not need to know the capital of France, or how to cook a risotto, or give me a poem on cooking risotto in France; it just needs to be good at logic. Is there a small language model that does conversion to JSON specifically, or what model would you fine-tune for a JSON-creating model?
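Whatever model ends up doing the conversion, a validate-and-retry loop around `json.loads` covers most failure modes. A sketch with a stand-in callable in place of the model:

```python
# Sketch: validate-and-retry loop for JSON extraction. `model` is a stand-in
# callable; swap in any small local model behind the same interface.
import json

def extract_json(model, prompt, retries=3):
    """Call the model until its output parses as JSON, re-prompting on failure."""
    last_err = None
    for _ in range(retries):
        raw = model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as e:
            last_err = e
            prompt += f"\nPrevious output was invalid JSON ({e}). Return only JSON."
    raise ValueError(f"no valid JSON after {retries} tries: {last_err}")

# Fake model for illustration: fails once, then returns valid JSON.
outputs = iter(['not json', '{"city": "Paris"}'])
result = extract_json(lambda p: next(outputs), "Extract the city.")
print(result)  # -> {'city': 'Paris'}
```

Backends that support grammar- or schema-constrained decoding can make the retry loop mostly unnecessary, but the parse-and-check remains a cheap safety net.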
2023-12-10T18:32:44
https://www.reddit.com/r/LocalLLaMA/comments/18f9yw5/sml_specifically_for_creating_json/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f9yw5
false
null
t3_18f9yw5
/r/LocalLLaMA/comments/18f9yw5/sml_specifically_for_creating_json/
false
false
self
5
null
Experimental Mixtral MoE on vLLM!
11
https://github.com/EmbeddedLLM/vllm-rocm/tree/mixtral8x7b Now if only we could have quantisation happen too 🙏
2023-12-10T18:16:05
https://www.reddit.com/r/LocalLLaMA/comments/18f9lhm/experimental_mixtral_moe_on_vllm/
Eastwindy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f9lhm
false
null
t3_18f9lhm
/r/LocalLLaMA/comments/18f9lhm/experimental_mixtral_moe_on_vllm/
false
false
self
11
{'enabled': False, 'images': [{'id': 'Sm3hC_pgQ5pu2OfFflkS9q-9vy01vL-Bfq7P0CyPnBk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?width=108&crop=smart&auto=webp&s=3c7c4d41850da30a4234f57aeef2fb50142959b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?width=216&crop=smart&auto=webp&s=f6e0a9b934df64098d1faf01d7bb960f85620893', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?width=320&crop=smart&auto=webp&s=2e26369bc4192001796de0f3b3e16eeab0986d0d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?width=640&crop=smart&auto=webp&s=e49268bc6080550c8cd43231f2963babaf035564', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?width=960&crop=smart&auto=webp&s=637645c893e59369e1df78b37b15ec70acc4b5b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?width=1080&crop=smart&auto=webp&s=397c8b83277a7e3794da531e7560dfd72ed25474', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oj4ZFcHUPwqR78wlLhHqV7m-LLKY7sEUPajW8bza3RA.jpg?auto=webp&s=a74741cde48b912f6da18ef512675e08ea0b61f3', 'width': 1200}, 'variants': {}}]}
Help decide which LLaMA - NOOB
3
Hi. Questions from a noob. For my home system: Ryzen 7940HS, 16 to 32 GB 5600 MHz DDR5. 1- Best general-purpose LLaMA model to date for this system? 2- Will it use the CPU, the GPU, and AMD's dedicated on-chip AI engine? 3- Install on Linux or Windows? 4- Other ideas? 5- Best way to install?
2023-12-10T17:57:19
https://www.reddit.com/r/LocalLLaMA/comments/18f96a7/help_decide_wich_llama_noob/
maxpayne07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f96a7
false
null
t3_18f96a7
/r/LocalLLaMA/comments/18f96a7/help_decide_wich_llama_noob/
false
false
self
3
null
Desktop AI, anyone tried this?
2
2023-12-10T17:48:58
https://www.reddit.com/gallery/18f8zs9
Obvious-Double-8692
reddit.com
1970-01-01T00:00:00
0
{}
18f8zs9
false
null
t3_18f8zs9
/r/LocalLLaMA/comments/18f8zs9/desktop_ai_anyone_tried_this/
false
false
default
2
null
Noob question on AI development
3
Dear AI enthusiasts, I am looking to learn AI development for a very specific need. If I were to train/develop an AI model to do the following: understand a prescription and medical history, then suggest a diet plan based on allergies etc., what do you think my approach should be? Is there any existing model that can be trained? I am a complete noob, but I feel like this could be a nice free application for many people and a way for me to learn.
2023-12-10T17:41:12
https://www.reddit.com/r/LocalLLaMA/comments/18f8to3/noob_question_on_ai_development/
DeyVinci
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f8to3
false
null
t3_18f8to3
/r/LocalLLaMA/comments/18f8to3/noob_question_on_ai_development/
false
false
self
3
null
Questions about Training Data
1
Hello r/LocalLLaMA! I am an aspiring human-computer interaction researcher, and in my recent queries into how user-generated content relates to AI models, I have become quite curious about the recent development of training data. My understanding is that GPT-3 was trained largely on internet data (from CommonCrawl), and people have been developing strategies to significantly reduce the size of training data since. I have a few questions about this development: * What is the difference between training data for really large language models like GPT-x and smaller models like Mistral-7b? * What defines "quality" data? * What is the makeup of the SoTA datasets from HuggingFace? Are they responses from SoTA models like GPT-x? * How much of the "internet" is intact in these revolutionary datasets? Are they a few abstractions away from the kind of data that is present in, say, WebText2? * Do we know anything about how the dynamics of the training data shape the model? For example, how does mostly X data result in a model, compared to, say, Wikipedia data?
2023-12-10T17:38:09
https://www.reddit.com/r/LocalLLaMA/comments/18f8res/questions_about_training_data/
_kevinlee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f8res
false
null
t3_18f8res
/r/LocalLLaMA/comments/18f8res/questions_about_training_data/
false
false
self
1
null
Desktop AI, Anyone Tried This?
1
Just started messing around with desktop AI. Found this open-source thing. Any of you checked it out? [https://jan.ai/](https://jan.ai/) [https://github.com/janhq/jan](https://github.com/janhq/jan) https://preview.redd.it/q2gdewj47i5c1.png?width=2940&format=png&auto=webp&s=3e9ce81d87103f19c62e135a3c1a8ccc12056beb
2023-12-10T17:37:39
https://www.reddit.com/r/LocalLLaMA/comments/18f8r18/desktop_ai_anyone_tried_this/
Obvious-Double-8692
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f8r18
false
null
t3_18f8r18
/r/LocalLLaMA/comments/18f8r18/desktop_ai_anyone_tried_this/
false
false
https://b.thumbs.redditm…8iCwYiIcnvjA.jpg
1
null
How to deal with loss for SFT for CausalLM
3
I'm currently trying to finetune the Llama2 chat model. The task is causal language modeling, and I'm using a custom dataset consisting of domain-specific prompts and corresponding answers. I'm currently using the SFT trainer ([https://huggingface.co/docs/trl/sft\_trainer](https://huggingface.co/docs/trl/sft_trainer)) with a LoRA adapter ([https://huggingface.co/docs/peft/package\_reference/lora](https://huggingface.co/docs/peft/package_reference/lora)). My question is about computation of the loss. In the Llama2 paper, it is said that they zero-out the prompts when computing the loss. https://preview.redd.it/yew9acvv5i5c1.png?width=2090&format=png&auto=webp&s=e7b107a582ea125843fe838137b1ad1cb57d5ad6 But in the SFT trainer there seems to be no option to do this (ignore prompts when computing the loss). At least I couldn't find it, either in the docs or by digging through the code. Has anyone dealt with the same issue before?
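For context, the "zero-out the prompts" trick the post describes is usually implemented by setting the prompt positions of the `labels` sequence to `-100`, the index that PyTorch's cross-entropy loss ignores; TRL also ships a `DataCollatorForCompletionOnlyLM` collator that applies this masking for you. A minimal sketch of the label construction (the token ids below are illustrative, not from a real tokenizer):

```python
# Mask prompt tokens so only the answer contributes to the causal-LM loss.
# PyTorch's CrossEntropyLoss skips positions whose label equals -100,
# so setting prompt positions to -100 reproduces the Llama2 trick.
IGNORE_INDEX = -100

def build_labels(prompt_ids, answer_ids):
    """Return (input_ids, labels) with prompt positions masked out."""
    input_ids = list(prompt_ids) + list(answer_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return input_ids, labels

# Illustrative token ids (a real setup would come from a tokenizer).
prompt = [101, 2054, 2003]   # e.g. "What is ..."
answer = [1037, 3437, 102]   # e.g. "an answer"
ids, labels = build_labels(prompt, answer)
assert len(ids) == len(labels)
assert labels[:len(prompt)] == [IGNORE_INDEX] * len(prompt)
```

The model still sees the full `input_ids`; only the loss is restricted to the answer span.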
2023-12-10T17:29:59
https://www.reddit.com/r/LocalLLaMA/comments/18f8kxe/how_to_deal_with_loss_for_sft_for_causallm/
__given__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f8kxe
false
null
t3_18f8kxe
/r/LocalLLaMA/comments/18f8kxe/how_to_deal_with_loss_for_sft_for_causallm/
false
false
https://b.thumbs.redditm…LlVcVKhjO1No.jpg
3
null
ollama local - smart file manager?
2
I have installed ollama and am running llama2 and mistral-openorca with no issues from the command line. I've seen articles on training and on langchain. The computer is running a fresh install of Ubuntu 22.04. Hardware is an Aorus motherboard with a 24-core Ryzen Threadripper, 128 GB RAM, and an Nvidia RTX 3060 with 12 GB VRAM, if it matters. Can I give my local ollama AI instance general access to my local files and ask questions like 'list all folders with pictures taken last year'? Not sure what keywords I should be searching for when looking for help from this perspective.
2023-12-10T17:28:35
https://www.reddit.com/r/LocalLLaMA/comments/18f8jwo/ollama_local_smart_file_manager/
justsayno_to_biggovt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18f8jwo
false
null
t3_18f8jwo
/r/LocalLLaMA/comments/18f8jwo/ollama_local_smart_file_manager/
false
false
self
2
null