Dataset schema (column: type, range):

title: string (length 1–300)
score: int64 (0–8.54k)
selftext: string (length 0–41.5k)
created: timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14)
url: string (length 0–878)
author: string (length 3–20)
domain: string (length 0–82)
edited: timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646–1.8k)
name: string (length 10)
permalink: string (length 33–82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4–213)
ups: int64 (0–8.54k)
preview: string (length 301–5.01k)

The records below list their field values in this column order, one value per line; empty fields (e.g. the selftext of link posts) are omitted.
Live Webinar: Building Enterprise-grade Proprietary Large Language Models
1
[removed]
2024-01-25T10:47:26
https://www.reddit.com/r/LocalLLaMA/comments/19f6lqf/live_webinar_building_enterprisegrade_proprietary/
toliverroger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f6lqf
false
null
t3_19f6lqf
/r/LocalLLaMA/comments/19f6lqf/live_webinar_building_enterprisegrade_proprietary/
false
false
https://b.thumbs.redditm…VHKDgyaPvY3Y.jpg
1
{'enabled': False, 'images': [{'id': 'XTUeGfRgbURtJ8MoQw_x8K2adlviriRyJmk57Vcyy4w', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=108&crop=smart&auto=webp&s=e30de7c4a36a13fd1b6c500ff207fce3b57bd888', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=216&crop=smart&auto=webp&s=177766c620bef9835ef367804f18a7b9c5b12cce', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=320&crop=smart&auto=webp&s=2ae14f1aff7dd8ce0facebb4c1bdff6ae7a26cd6', 'width': 320}, {'height': 438, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=640&crop=smart&auto=webp&s=9dd185feca4b5ef7922cac7fa66d0b5110f68848', 'width': 640}], 'source': {'height': 514, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?auto=webp&s=abff00c6c69bbb2db2708507f70e24b1764f0fbc', 'width': 750}, 'variants': {}}]}
Deploying a QLORA Finetuned Llama2 model
1
Hey guys, I have fine-tuned as the title says, merged the fine-tuned and base models, and put them on Hugging Face. The size of my model is 13GB. How do I make an API for this model? What's the most cost-effective method? Do you have any recommended tutorials or methods?
2024-01-25T10:36:42
https://www.reddit.com/r/LocalLLaMA/comments/19f6g71/deploying_a_qlora_finetuned_llama2_model/
New_Ad_8010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f6g71
false
null
t3_19f6g71
/r/LocalLLaMA/comments/19f6g71/deploying_a_qlora_finetuned_llama2_model/
false
false
self
1
null
I think I found a model that's pretty unhinged: KoboldAI/OPT-13B-Erebus
14
I was looking for a good smaller model to write stories and stuff with and saw this one had a lot of likes on it. I decided to try it out and right out of the gate it was calling me names, ranting about religion and blaming all of the problems it had on those "religious idiots who won't let me live my life the way I choose" and a bunch of other stuff. I couldn't really say anything to it without it going off on me harder so I decided to agree with it and see what happened. I mentioned "Mattress Mack" as an example of someone who helped people when Joel Osteen wouldn't let people stay in his megachurch during the flood, and the AI apparently knew who he was and talked positively about Mattress Mack (though gave some kinda iffy details about the guy that I couldn't verify). So Anyway, if you're interested in running "MAGA HAT AI SIMULATOR" try this one out. The hugging face page said it was guaranteed to spout x rated content, but all it did was cuss me out and assume I was some hippy treehugging \*insert slur here\*. I wonder what text and websites they used to train it? OH and I see it's a rather old model too.. The files are listed as Sept 2022. Maybe that's why it sounds so unhinged? What's the most whacked out AI model you've tried? Before this I would have said an early version of Phi 2 that talked like a dude-bro
2024-01-25T09:58:56
https://www.reddit.com/r/LocalLLaMA/comments/19f5x3n/i_think_i_found_a_model_thats_pretty_unhinged/
Cool-Hornet4434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f5x3n
false
null
t3_19f5x3n
/r/LocalLLaMA/comments/19f5x3n/i_think_i_found_a_model_thats_pretty_unhinged/
false
false
self
14
null
Beginner Questions
1
Hello everyone. I'm new to artificial intelligence and have a few questions; I would be very pleased if you could answer them. I wanted to run a model from Hugging Face using pipeline, with the code provided by Hugging Face:

from transformers import pipeline
pipe = pipeline("text-generation", model="cxllin/Llama2-7b-economist")

Question 1: When I run the code, "Loading checkpoint shards" completes and then nothing happens. How do I communicate with the model? Where do I write my question? Question 2: When I run the code, RAM usage is at 100% (I have 32 GB) and the CPU at around 60% (Ryzen 7 7700X), but the GPU sits at 0% (RTX 4070). I have CUDA installed and the Torch library. How can I make the model use the graphics card? Question 3: If I want to train this model with a dataset from Hugging Face, how do I do that?
2024-01-25T09:52:05
https://www.reddit.com/r/LocalLLaMA/comments/19f5twg/beginner_questions/
mrsalvadordali
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f5twg
false
null
t3_19f5twg
/r/LocalLLaMA/comments/19f5twg/beginner_questions/
false
false
self
1
null
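The "Beginner Questions" record above constructs a pipeline but never calls it. As a rough sketch only (the prompt, generation settings, dtype, and use of device_map are illustrative assumptions, not from the post), generating text and letting the model be placed on the GPU could look like this:

```python
# Minimal sketch: call the pipeline from the post above and let Accelerate
# place the weights on the available GPU. Prompt and generation settings
# are placeholders chosen for illustration.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="cxllin/Llama2-7b-economist",   # model id taken from the post
    torch_dtype=torch.float16,            # roughly halves memory vs. the fp32 default
    device_map="auto",                    # requires `accelerate`; uses the GPU if it fits
)

out = pipe("What causes inflation?", max_new_tokens=128, do_sample=True)
print(out[0]["generated_text"])
```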
moondream1 - a tiny (1.6B param) vision language model
3
2024-01-25T09:39:26
https://huggingface.co/spaces/vikhyatk/moondream1
radiiquark
huggingface.co
1970-01-01T00:00:00
0
{}
19f5nys
false
null
t3_19f5nys
/r/LocalLLaMA/comments/19f5nys/moondream1_a_tiny_16b_param_vision_language_model/
false
false
https://b.thumbs.redditm…pqfI5ZDmcJaA.jpg
3
{'enabled': False, 'images': [{'id': '_ab153_DSx55SzIkqY18VIBx7qI-ufoyrOxPFumLdxk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=108&crop=smart&auto=webp&s=7426975836229569a94a7eab1332802d6f3d8e09', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=216&crop=smart&auto=webp&s=b6699bfb56cbf4d719b348f64e3270181fd04b0b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=320&crop=smart&auto=webp&s=31e7a1deea07b985fb61868d12805fb973a066c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=640&crop=smart&auto=webp&s=bd33bea2b38f48496bc9e995d7fd525b236563f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=960&crop=smart&auto=webp&s=284832672fbb03e6f4bc1ebe004e6df028753c1d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?width=1080&crop=smart&auto=webp&s=17eb1a4f8d33364d63521d894dc4951794f4d3c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nS0FfuCEGQ2r0LN0ne9m0sRkb18N3xwnR_9NgWxx4_Y.jpg?auto=webp&s=ed72974824aefdef6cbbb0333415d20f5487af2a', 'width': 1200}, 'variants': {}}]}
Build an LLM that handle simple Q&A as well as communicate with APIs
1
Hello there, my company currently uses a RASA chatbot that users can interact with to ask questions about our products; for certain use cases the bot collects information from users, makes the required calls through the action servers, and returns an answer. However, we want to move towards an LLM-powered chatbot to make responses more dynamic, since we currently write the responses and business logic manually. To that end, I thought about fine-tuning an LLM on a company knowledge base. This should let us handle simple informational queries, but as mentioned above, for certain use cases we also need to make API calls and retrieve the relevant information for the user. Is this possible? If so, do you have any recommendations on how to go about it?
2024-01-25T09:24:09
https://www.reddit.com/r/LocalLLaMA/comments/19f5gqf/build_an_llm_that_handle_simple_qa_as_well_as/
ForsakenBoysenberry8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f5gqf
false
null
t3_19f5gqf
/r/LocalLLaMA/comments/19f5gqf/build_an_llm_that_handle_simple_qa_as_well_as/
false
false
self
1
null
How do I train a Lora (and from multiple pages of a website?)
7
Hello I am new and don’t really know much but I would at some point train a Lora for a model such as dolphin-mistral 7B or something like that. It would be super cool and fun for me but essentially it would be data from https://vsbattles.fandom.com/wiki/VS_Battles_Wiki Basically a powerscaling database with tons of fictional characters that show how powerful, fast they are, etc. and just info on those characters. And it would be cool to create simulations between 2 character where the LLM could then compare their stats on who would actually win, and then create scenarios of how it would go down, etc. I am not even sure if it works but I have no idea how. Maybe there are too many pages but idk it would be sick to train.
2024-01-25T09:22:30
https://www.reddit.com/r/LocalLLaMA/comments/19f5fzq/how_do_i_train_a_lora_and_from_multiple_pages_of/
headbopper96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f5fzq
false
null
t3_19f5fzq
/r/LocalLLaMA/comments/19f5fzq/how_do_i_train_a_lora_and_from_multiple_pages_of/
false
false
self
7
{'enabled': False, 'images': [{'id': 'DR71YYl54rI5kSly-lvr8YLxXHWGy1PoqrXmUY0-uoQ', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/kLWlVd6NRgeuaxNeipGrK84gKaM4QL9R2nYbq8AkqWc.jpg?width=108&crop=smart&auto=webp&s=f67f78786173cd0869d35010fb82dac6876cd5e4', 'width': 108}], 'source': {'height': 158, 'url': 'https://external-preview.redd.it/kLWlVd6NRgeuaxNeipGrK84gKaM4QL9R2nYbq8AkqWc.jpg?auto=webp&s=2f5533fdb4f43471f5ca1fda1b8d4ae507ee1860', 'width': 152}, 'variants': {}}]}
Beautiful and feature-rich UI for Ollama, made exclusively for macOS. Free to use and 50% discount for Pro version until the end of Jan.
1
2024-01-25T09:22:18
https://v.redd.it/igbsy12n0kec1
hugovie
v.redd.it
1970-01-01T00:00:00
0
{}
19f5fw5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/igbsy12n0kec1/DASHPlaylist.mpd?a=1708766554%2CYjg4NGNkZWU2ZGQxMWViN2EzM2Q0ZGEzYjg0Zjk5YTZhNmMzMGU1YzA4MDE2YmZiMmNiMDFiMDAyZjdiMmNiYQ%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/igbsy12n0kec1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/igbsy12n0kec1/HLSPlaylist.m3u8?a=1708766554%2CNjhjODJiMzlmN2NmYTQ5NjBmMGIwOTg5ZGM5MmZhYzc3N2I4OGE3ZjViYmU5NzhiYzgwNjE1YjEyYTYyMjAyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/igbsy12n0kec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1592}}
t3_19f5fw5
/r/LocalLLaMA/comments/19f5fw5/beautiful_and_featurerich_ui_for_ollama_made/
false
false
https://external-preview…1ed0b66e93c66122
1
{'enabled': False, 'images': [{'id': 'cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a845097b93b90ee0b2c18a214c14f5e55d4c58c', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?width=216&crop=smart&format=pjpg&auto=webp&s=5b85369d60f5139f2e24e97c58e68a8cf315d90e', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?width=320&crop=smart&format=pjpg&auto=webp&s=e4af4c0560774aba4cf383e8eed546a995cc2829', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?width=640&crop=smart&format=pjpg&auto=webp&s=77ef349091bb75c5266dbc71499927f6582dd6dc', 'width': 640}, {'height': 651, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?width=960&crop=smart&format=pjpg&auto=webp&s=273c1c2fc846e9d30d0568d24fb000cca732ebf5', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?width=1080&crop=smart&format=pjpg&auto=webp&s=df2fdb7aac00a0d10cc178c2b5813d65b4de5891', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cDJybm1paHYwa2VjMfEAdh8F-ocsxIYHawJCEGwcce9lp3orSC3IY4WNn802.png?format=pjpg&auto=webp&s=1b4ed91d48e416cf1e0704f9634a1f3403d81a42', 'width': 1592}, 'variants': {}}]}
Testing Google Multimodal Embeddings
9
We are testing out Google Multimodal Embeddings ([Vertex AI API](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-multimodal-embeddings)) at [OpenIndex.ai/search](https://www.openindex.ai/search) We found that any text embedding will tend to be closer to a search query in the embedding space than any image embeddings, irrespective of content, so we are only ranking image embeddings against the text embedding of the query (i.e. leaving out text embeddings of product descriptions). We initially wanted to try the Google vector search but we really couldn't manage to navigate its poor documentation and ended up trying the serverless Pinecone service. https://reddit.com/link/19f5f92/video/0uyz9ca70kec1/player
2024-01-25T09:20:56
https://www.reddit.com/r/LocalLLaMA/comments/19f5f92/testing_google_multimodal_embeddings/
Electrical-Profile79
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f5f92
false
null
t3_19f5f92
/r/LocalLLaMA/comments/19f5f92/testing_google_multimodal_embeddings/
false
false
https://a.thumbs.redditm…M6VzwTUYEQD8.jpg
9
{'enabled': False, 'images': [{'id': 'DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=108&crop=smart&auto=webp&s=a0329d4207ada0345185e70a97a0ef1f27aec034', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=216&crop=smart&auto=webp&s=8722bf8052baa4647e96ebeb0d22f50bf529b6ac', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=320&crop=smart&auto=webp&s=6562c4a330763746058f2250630ec6d3854b2e3d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=640&crop=smart&auto=webp&s=fff0deae054d2476ac870508887dbbee06d9387c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=960&crop=smart&auto=webp&s=f6a32be275833b4d47802b79f3345f568bd43a4d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=1080&crop=smart&auto=webp&s=97e8d6d94b95697d64482f5fcda32d11814df7b8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?auto=webp&s=b3c1793ddfb0595cba1bbb23fba79360953beb8d', 'width': 1200}, 'variants': {}}]}
Llamacpp way slower on Mac M1 Max than Mac M2 Max
4
Hi, I am running a quantized Mixtral on my MacBook M2 Max with 64GB RAM and I am happy with the response time. Now I gave an LLM application which uses llama.cpp to my boss, who has an M1 Max, also with 64GB RAM, but it takes very long until he gets a response, and streaming also looks buggy because of the low speed. What can be the reasons for this? I made sure that he installs llama-cpp-python like this:

CMAKE_ARGS="-DLLAMA_METAL=on" pip install -U llama-cpp-python --no-cache-dir

I am using LlamaCpp like this:

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = 1  # Metal set to 1 is enough.
n_batch = 512  # 1024; should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon chip.

llm = LlamaCpp(
    max_tokens=cfg.MAX_TOKENS,
    n_threads=6,  # for performance
    model_path=model_path,
    temperature=cfg.TEMPERATURE,
    f16_kv=True,
    n_ctx=28000,
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    verbose=True,  # Verbose is required to pass to the callback manager
    top_p=0.75,
    top_k=40,
    repeat_penalty=1.1,
    streaming=True,
    model_kwargs={
        'mirostat': 2,
    },
)

Thanks for any suggestions!
2024-01-25T09:14:01
https://www.reddit.com/r/LocalLLaMA/comments/19f5byy/llamacpp_way_slower_on_mac_m1_max_than_mac_m2_max/
Mediocre-Card8046
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f5byy
false
null
t3_19f5byy
/r/LocalLLaMA/comments/19f5byy/llamacpp_way_slower_on_mac_m1_max_than_mac_m2_max/
false
false
self
4
null
Deepseek just launched their API platform. 10M free tokens.
75
2024-01-25T08:56:42
https://platform.deepseek.com/sign_in
pictoria_dev
platform.deepseek.com
1970-01-01T00:00:00
0
{}
19f53f3
false
null
t3_19f53f3
/r/LocalLLaMA/comments/19f53f3/deepseek_just_launched_their_api_platform_10m/
false
false
default
75
null
Serving to concurrent users
3
Hi, I’d like to ask what approaches you are taking to serve LLM(s) running on a server to multiple users. I’ve looked into vLLM, but I’m wondering about other solutions available for free, or self-built solutions to batch-process queries. How does it work, how is performance, and what limitations or issues have you faced while setting it up? Thanks in advance.
2024-01-25T08:48:21
https://www.reddit.com/r/LocalLLaMA/comments/19f4zic/serving_to_concurrent_users/
Appropriate-Tax-9585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f4zic
false
null
t3_19f4zic
/r/LocalLLaMA/comments/19f4zic/serving_to_concurrent_users/
false
false
self
3
null
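For the serving question above, one commonly used free option is vLLM's continuous batching. A minimal offline-batching sketch, assuming vLLM is installed; the model name and prompts are placeholders, not something the poster specified:

```python
# Hedged sketch: vLLM batches these prompts internally, which is the same
# mechanism its API server uses to serve many concurrent users.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # placeholder model
params = SamplingParams(temperature=0.7, max_tokens=256)

prompts = [
    "Summarise retrieval-augmented generation in one sentence.",
    "Explain KV-cache paging briefly.",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```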
Cheaper & faster RAG with a SQL layer (Beginner Level)
1
[removed]
2024-01-25T08:21:04
https://www.reddit.com/r/LocalLLaMA/comments/19f4mda/cheaper_faster_rag_with_a_sql_layer_beginner_level/
hadiazzouni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f4mda
false
null
t3_19f4mda
/r/LocalLLaMA/comments/19f4mda/cheaper_faster_rag_with_a_sql_layer_beginner_level/
false
false
https://b.thumbs.redditm…Vy-DjIY1M1yc.jpg
1
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]}
Understanding training metrics
8
I played with MLX and trained Mistral. I had a dataset of around 50K lines and the training took approx 10 minutes. During training I would see these metrics: also attached an image. • Iteration 530 • Training loss: 1.726 • Another value listed as 1.724 (possibly a typo or redundancy) • Tokens processed per second: 279.857 Can someone explain what these measures entail exactly. Are these good, most importantly is there other way to add additional logging/ measurements to measure speed / accuracy.
2024-01-25T08:11:51
https://i.redd.it/z4gk4j7bojec1.jpeg
BukHunt
i.redd.it
1970-01-01T00:00:00
0
{}
19f4hr4
false
null
t3_19f4hr4
/r/LocalLLaMA/comments/19f4hr4/understanding_training_metrics/
false
false
https://b.thumbs.redditm…_s29n1bUfmZI.jpg
8
{'enabled': True, 'images': [{'id': 'g0AI8JQXgnzUilFbIYZlAej-7CBLjHE-B45DgyZAKFw', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?width=108&crop=smart&auto=webp&s=c371aeb4b17bc0f0b9ef83236a41ac0001ac6f5b', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?width=216&crop=smart&auto=webp&s=2417131a68433d6288af5772b813789e371bb348', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?width=320&crop=smart&auto=webp&s=4dc62e74b92cb21d6453120ef4941d7652b6213d', 'width': 320}, {'height': 201, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?width=640&crop=smart&auto=webp&s=dfb9faea5cb7088a0f418b469e56f246d207d681', 'width': 640}, {'height': 301, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?width=960&crop=smart&auto=webp&s=f771c18fb12d056dfb1e4659722f62d2e92f3764', 'width': 960}, {'height': 339, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?width=1080&crop=smart&auto=webp&s=0ab3e3784b68bd77250d4459359e31632a403c31', 'width': 1080}], 'source': {'height': 503, 'url': 'https://preview.redd.it/z4gk4j7bojec1.jpeg?auto=webp&s=5a4ac10e195ba724919cea655656bd76f5665cb6', 'width': 1600}, 'variants': {}}]}
need help to convert model.safetensors to gguf
2
Hello, I used SFTTrainer to fine-tune a model (opt-350m) with a dataset, and now I have a folder with several files and a model.safetensors. I used convert.py to convert it, but this generated a KeyError: intermediate_size.

python3 convert.py ./models/salidamodelo/
Loading model file models/salidamodelo/model.safetensors
Traceback (most recent call last):
  File "/home/con/llama.cpp/convert.py", line 1471, in <module>
    main()
  File "/home/con/llama.cpp/convert.py", line 1418, in main
    params = Params.load(model_plus)
  File "/home/con/llama.cpp/convert.py", line 317, in load
    params = Params.loadHFTransformerJson(model_plus.model, hf_config_path)
  File "/home/con/llama.cpp/convert.py", line 251, in loadHFTransformerJson
    n_ff = config["intermediate_size"],
KeyError: 'intermediate_size'

Thanks in advance.
2024-01-25T08:00:47
https://www.reddit.com/r/LocalLLaMA/comments/19f4c2b/need_help_to_convert_modelsafetensors_to_gguf/
Consistent_Dark9680
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f4c2b
false
null
t3_19f4c2b
/r/LocalLLaMA/comments/19f4c2b/need_help_to_convert_modelsafetensors_to_gguf/
false
false
self
2
null
What's the most cost-efficient setup for trying out LLM (mistral / llama2 etc) finetuning and inference
6
Note: posting on behalf of a friend who doesn't have Reddit. He wants to try an LLM (fine-tune, then inference) with maximum data privacy; he is okay with slow inference speed of up to a few minutes (not hours), but would prefer to avoid OOM errors. I suggested renting a server by usage, but AWS/GCP/Azure prices would be very high if usage grows. Open to suggestions on system configuration, providers, etc., or whether buying a home system would be better. Basically, what are the most cost-efficient setups in 2024 for hobbyist LLM users with maximum privacy?
2024-01-25T07:39:00
https://www.reddit.com/r/LocalLLaMA/comments/19f410b/whats_the_most_costefficient_setup_for_trying_out/
adityaguru149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f410b
false
null
t3_19f410b
/r/LocalLLaMA/comments/19f410b/whats_the_most_costefficient_setup_for_trying_out/
false
false
self
6
null
Reading TheBloke's readme for GGUF and seeing the note about RAM figure assumes no GPU offloading. Am I missing something, thought VRAM was king, the note seems to imply we can load the model to RAM for inference, so how does GPU VRAM come into picture if that's the assumption?
1
[removed]
2024-01-25T07:24:43
https://www.reddit.com/r/LocalLLaMA/comments/19f3tis/reading_theblokes_readme_for_gguf_and_seeing_the/
sweetsunnyside
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f3tis
false
null
t3_19f3tis
/r/LocalLLaMA/comments/19f3tis/reading_theblokes_readme_for_gguf_and_seeing_the/
false
false
self
1
null
What makes the difference in context length between Mistral and Mixtral?
6
I don't understand why there's a difference in context length between the Mistral model and the Mixtral model even though they have the same window size in their sliding-window attention. (In their respective papers, the context_len of Mistral is 8192 and that of Mixtral is 32k.) For Mistral it (partially) makes sense, because when the input prompt length is 4096, the first token should see the previous 4096 tokens. (But I still don't know why the prompt length should be 4096.) For Mixtral it is stranger: the only difference between the two is MoE in the FFN, so shouldn't the context length be the same?
2024-01-25T06:44:42
https://www.reddit.com/r/LocalLLaMA/comments/19f38be/what_makes_the_difference_in_context_length/
Maximum_Performance_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f38be
false
null
t3_19f38be
/r/LocalLLaMA/comments/19f38be/what_makes_the_difference_in_context_length/
false
false
self
6
null
SmoothQuant: Quantization of both weight & activations
17
[https://github.com/mit-han-lab/smoothquant](https://github.com/mit-han-lab/smoothquant) I did not see much discussion about activation quantization here. In my eyes, this is really crucial for the open-source ecosystem, as it will make inference much more efficient. Current quant methods are often slower at inference (worse throughput, latency or both) than running in f16. That's because the weights need to be dequantized first. SmoothQuant is made such that the weights and activation stay in the same space and no conversion needs to be done. There's an experimental PR for vLLM that shows huge latency and throughput improvements when running W8A8 SmoothQuant (8 bit quantization for both the weights and activations) compared to running f16. I am not extremely plugged into the llama.cpp and exllama ecosystem, so maybe they already have something like this.
2024-01-25T06:19:50
https://www.reddit.com/r/LocalLLaMA/comments/19f2tws/smoothquant_quantization_of_both_weight/
DreamGenAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f2tws
false
null
t3_19f2tws
/r/LocalLLaMA/comments/19f2tws/smoothquant_quantization_of_both_weight/
false
false
self
17
null
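To make the W8A8 idea in the SmoothQuant post above concrete, here is a rough sketch of its per-channel scale migration; alpha, the tensor shapes, and the helper name are illustrative assumptions, and this is not the mit-han-lab implementation:

```python
# Sketch of the SmoothQuant balancing step for a linear layer Y = X @ W.T:
# activations are divided by a per-channel scale s and the weights are
# multiplied by the same s, so quantizing both to int8 becomes easier.
import torch

def smooth_scales(act_absmax: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5):
    # act_absmax: per-input-channel max |X| collected on calibration data, shape [in_features]
    # weight: linear weight of shape [out_features, in_features]
    w_absmax = weight.abs().amax(dim=0)                       # per-input-channel max |W|
    s = (act_absmax.pow(alpha) / w_absmax.pow(1 - alpha)).clamp(min=1e-5)
    smoothed_weight = weight * s                              # W' = W * diag(s)
    return smoothed_weight, s                                 # at runtime: X' = X / s
```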
Trouble Downloading Mixtral Dolphin
1
Let me start by saying that I am not sure if this is the right sub to ask this but I have seen other posts regarding the subject so I'm hoping it is. I have been trying to download Mixtral Dolphin through oogabooga but I haven't had any success. It literally takes up all my storage. Is it normal for it to be taking up that much? I have around 20 gigs free on my OS disk, but it takes up all of them and evidently still needs more because it doesn't even finish downloading. Does anyone have a good video on youtube that walks through how to install it on windows? At this point I have just given up and I'm hoping I can get some advice from here, thanks.
2024-01-25T06:13:42
https://www.reddit.com/r/LocalLLaMA/comments/19f2qaq/trouble_downloading_mixtral_dolphin/
Antique-Ad7521
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f2qaq
false
null
t3_19f2qaq
/r/LocalLLaMA/comments/19f2qaq/trouble_downloading_mixtral_dolphin/
false
false
self
1
null
Trouble Downloading Mixtral Dolphin
1
Let me start by saying that I am not sure if this is the right sub to ask this but I have seen other posts regarding the subject so I'm hoping it is. I have been trying to download Mixtral Dolphin through oogabooga but I haven't had any success. It literally takes up all my storage. Is it normal for it to be taking up that much? I have around 20 gigs free on my OS disk, but it takes up all of them and evidently still needs more because it doesn't even finish downloading. Does anyone have a good video on youtube that walks through how to install it on windows? At this point I have just given up and I'm hoping I can get some advice from here, thanks.
2024-01-25T06:13:41
https://www.reddit.com/r/LocalLLaMA/comments/19f2qan/trouble_downloading_mixtral_dolphin/
Antique-Ad7521
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f2qan
false
null
t3_19f2qan
/r/LocalLLaMA/comments/19f2qan/trouble_downloading_mixtral_dolphin/
false
false
self
1
null
GPU for LLM!!!
1
[removed]
2024-01-25T05:38:27
https://www.reddit.com/r/LocalLLaMA/comments/19f24ul/gpu_for_llm/
Delicious_Success303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f24ul
false
null
t3_19f24ul
/r/LocalLLaMA/comments/19f24ul/gpu_for_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': '5adjmGGVIfRwxBdc6YwqLSuNnWIjey85VD80DEYNRBY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=108&crop=smart&auto=webp&s=337350411d064eaaf450caf0e523c3ccc833e999', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=216&crop=smart&auto=webp&s=4184d08ab533d4f80cb41dfb51871eb96b5f3a0f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=320&crop=smart&auto=webp&s=41e9d7180f7d3138c39dbade3184ae0920c21d0f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=640&crop=smart&auto=webp&s=1af0b4037e3ece4586356dda1a94a97aecea1189', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=960&crop=smart&auto=webp&s=31942a968c32ff0758b79f5821fa9e108bd2b12d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?width=1080&crop=smart&auto=webp&s=2eb739f302abf37eefd3e4c37c458137aa63b1ec', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/PJYqBKrEo82c08iCCayMZcnUZ_S20LRkPN0_PsJ56gI.jpg?auto=webp&s=a1199edd6ceea68700de2fd7dfaad8cfd3f47aaa', 'width': 1200}, 'variants': {}}]}
Who is MichaelSeifert and why is he talking to my chat assistant?
28
Running [Mamba-mistral](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF) in LM Studio and was testing its ability to solve a non trivial math problem and all of a sudden the assistant starts talking to MichaelSeifert. Strange kind of interaction since my prompt has no mention of him nor asked for an analysis. Just a funny interaction. `...` `Comment: \`A\` is defined in the line before this loop. The problem with your suggestion is that it will not find all solutions, e.g., if \`A = 28\`, then you should have \`C + 2 * B <= A\`, otherwise you'll miss \`(B, C) = (14, 10)\`.` `Comment: '@MichaelSeifert You are correct! Thanks for pointing that out. I've corrected my answer.` `Comment: But if you start at \`B = 1\`, then the first value of \`C\` will be \`-1\` and \`(C + 2 * B) = -1\`, so your code will never print anything, even when \`A\` is positive. Also, since \`A\` and \`C\` are integers and we want an integer solution for \`D\`, you should use \`<=\` instead of \`<\` to terminate the loop (or better yet, check if \`(C + 2 * B) <= A\`).` `Comment: '@MichaelSeifert I see what you mean, thanks for the correction. Updated my answer again!` `Comment: That's still not quite right, because \`B\` is initialized to 1 and never changed, so it will only find solutions where \`A\` is even (since \`C + 2 * B\` is always even). Also, you should check if \`(C + 2 * B) <= A\`, not just \`<=\`. For example, for \`A = 39\`, you need to find the pair \`(B, C) = (19, 4)\`.` `Comment: '@MichaelSeifert I've updated my answer. This should do it now. Thanks for your help!`
2024-01-25T05:30:51
https://www.reddit.com/r/LocalLLaMA/comments/19f206w/who_is_michaelseifert_and_why_is_he_talking_to_my/
KhaiNguyen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f206w
false
null
t3_19f206w
/r/LocalLLaMA/comments/19f206w/who_is_michaelseifert_and_why_is_he_talking_to_my/
false
false
self
28
{'enabled': False, 'images': [{'id': '65UZ9yQ8mBaVVFibLuGhbnEeuBWj6JSnsJjU-yvvpEY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=108&crop=smart&auto=webp&s=752515105fe5db545ebf02bf7d6e35359032ea5a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=216&crop=smart&auto=webp&s=6dc664916db81938fd1daa7e45de5063e41e6e12', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=320&crop=smart&auto=webp&s=1e07ce3f58468d9be33c4a4387f75bd80b88cb40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=640&crop=smart&auto=webp&s=69673fd13a084b0dfb6988de8ee61eafc9e9e57c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=960&crop=smart&auto=webp&s=43459ba3ecbca63de6b2553481303da7d829d1b1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=1080&crop=smart&auto=webp&s=fe8b566465aac4bf5e8699bd48b5a85b77406adb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?auto=webp&s=24b4821caa373369b3dab2fc77c9fa356a07d3a3', 'width': 1200}, 'variants': {}}]}
Who is MichaelSeifert and why is he talking to my chat assistant?
2
Running [Mamba-mistral](https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF) in LM Studio and was testing its ability to solve a non trivial math problem and all of a sudden the assistant starts talking to MichaelSeifert. Strange kind of interaction since my prompt has no mention of him nor asked for an analysis. Just a funny interaction. `...` `Comment: \`A\` is defined in the line before this loop. The problem with your suggestion is that it will not find all solutions, e.g., if \`A = 28\`, then you should have \`C + 2 * B <= A\`, otherwise you'll miss \`(B, C) = (14, 10)\`.` `Comment: '@MichaelSeifert You are correct! Thanks for pointing that out. I've corrected my answer.` `Comment: But if you start at \`B = 1\`, then the first value of \`C\` will be \`-1\` and \`(C + 2 * B) = -1\`, so your code will never print anything, even when \`A\` is positive. Also, since \`A\` and \`C\` are integers and we want an integer solution for \`D\`, you should use \`<=\` instead of \`<\` to terminate the loop (or better yet, check if \`(C + 2 * B) <= A\`).` `Comment: '@MichaelSeifert I see what you mean, thanks for the correction. Updated my answer again!` `Comment: That's still not quite right, because \`B\` is initialized to 1 and never changed, so it will only find solutions where \`A\` is even (since \`C + 2 * B\` is always even). Also, you should check if \`(C + 2 * B) <= A\`, not just \`<=\`. For example, for \`A = 39\`, you need to find the pair \`(B, C) = (19, 4)\`.` `Comment: '@MichaelSeifert I've updated my answer. This should do it now. Thanks for your help!`
2024-01-25T05:30:51
https://www.reddit.com/r/LocalLLaMA/comments/19f206h/who_is_michaelseifert_and_why_is_he_talking_to_my/
KhaiNguyen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f206h
false
null
t3_19f206h
/r/LocalLLaMA/comments/19f206h/who_is_michaelseifert_and_why_is_he_talking_to_my/
false
false
self
2
{'enabled': False, 'images': [{'id': '65UZ9yQ8mBaVVFibLuGhbnEeuBWj6JSnsJjU-yvvpEY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=108&crop=smart&auto=webp&s=752515105fe5db545ebf02bf7d6e35359032ea5a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=216&crop=smart&auto=webp&s=6dc664916db81938fd1daa7e45de5063e41e6e12', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=320&crop=smart&auto=webp&s=1e07ce3f58468d9be33c4a4387f75bd80b88cb40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=640&crop=smart&auto=webp&s=69673fd13a084b0dfb6988de8ee61eafc9e9e57c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=960&crop=smart&auto=webp&s=43459ba3ecbca63de6b2553481303da7d829d1b1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?width=1080&crop=smart&auto=webp&s=fe8b566465aac4bf5e8699bd48b5a85b77406adb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ji-3xTP0UoNgVGBBsAyOPG_2hCARvASzsLAy6DF4Xe4.jpg?auto=webp&s=24b4821caa373369b3dab2fc77c9fa356a07d3a3', 'width': 1200}, 'variants': {}}]}
Why do huggingface models need so much VRAM?
1
I have a small GPU (1070 Ti) with 8GB of VRAM, and from the [huggingface model page](https://huggingface.co/microsoft/phi-2) I saw that the phi-2 model is 5.5GB, but when I try to load the model I immediately encounter a CUDA out-of-memory error. Why is the model taking up so much VRAM, and is there any way I can load it successfully? It seems I'm only able to load the TinyLlama model, which takes up about 4.5GB of VRAM. I'm using the example code on the model page to load the models.
2024-01-25T04:18:34
https://www.reddit.com/r/LocalLLaMA/comments/19f0pwx/why_do_huggingface_models_need_so_much_vram/
Gullible_Fall182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f0pwx
false
null
t3_19f0pwx
/r/LocalLLaMA/comments/19f0pwx/why_do_huggingface_models_need_so_much_vram/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iZy_EqrCN6ssnYn4CoL3ByI2PLGShyjkrOUtMg3a8e0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?width=108&crop=smart&auto=webp&s=4fee32b75a2b0c5f553592d7d7bd57bd16d6d2ac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?width=216&crop=smart&auto=webp&s=df1b625eda55eb70c54490de8dae01a5689df60a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?width=320&crop=smart&auto=webp&s=e198f375bd58b9cbac82233184cfeaf06b3f0120', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?width=640&crop=smart&auto=webp&s=4d622fb3068a5a66256bdc21f47da3c7586f2c43', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?width=960&crop=smart&auto=webp&s=07769cd1e97aa5133b3f49cca01ec6a9b9bcb83e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?width=1080&crop=smart&auto=webp&s=bb27c16d563fafa369a8959410c93ecea9f339b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JSGemrtK8Q-jPJBdLuOREQ06XVckpXiJ9CFACJxDAzg.jpg?auto=webp&s=43b0fb70c3da4fe0f7e9ccfcaafd74ad874fbb4c', 'width': 1200}, 'variants': {}}]}
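A likely explanation for the post above (stated as an assumption, not a confirmed diagnosis): transformers loads weights in float32 by default, so ~5.5 GB of fp16 phi-2 weights become roughly 11 GB in memory. A hedged sketch of loading in float16 instead:

```python
# Sketch: request fp16 weights at load time so the model stays near its
# on-disk size; device_map="auto" can spill layers to CPU RAM if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # avoid the fp32 default, ~halving memory use
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```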
HELP. SST to Silly Tavern?
1
I want to add to my project the SST extension as I included the TTS implementation with AllTalk, but I cant seem to find any tutorial or repository where i can get a guide. So i dont know where i should begin. Anyone out there that have done this? I already have the avatar and the streaming, but i want to speak to the avatar.
2024-01-25T04:13:35
https://www.reddit.com/r/LocalLLaMA/comments/19f0mo1/help_sst_to_silly_tavern/
StillOk1589
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f0mo1
false
null
t3_19f0mo1
/r/LocalLLaMA/comments/19f0mo1/help_sst_to_silly_tavern/
false
false
self
1
null
Some no-bullshit AI news app?
1
With news I mean big news and also niche news like a new research paper/breakthrough/new model?
2024-01-25T03:59:23
https://www.reddit.com/r/LocalLLaMA/comments/19f0d2v/some_nobullshit_ai_news_app/
Shoddy_Vegetable_115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f0d2v
false
null
t3_19f0d2v
/r/LocalLLaMA/comments/19f0d2v/some_nobullshit_ai_news_app/
false
false
self
1
null
TGI With Local Model
1
I've been trying to use TGI pointed to a locally saved model, but it is looking for an "adapters_config.json" file. How do I get this? I am using Llama as an example and I have tried pulling this model and then saving it down locally, but I have never seen this file before - I understand it is part of peft, but is it supposed to be shipped with Llama or do I create it?
2024-01-25T03:47:52
https://www.reddit.com/r/LocalLLaMA/comments/19f05es/tgi_with_local_model/
cinnamonKnight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f05es
false
null
t3_19f05es
/r/LocalLLaMA/comments/19f05es/tgi_with_local_model/
false
false
self
1
null
Problems with downloading on huggingface
1
I tried to clone laserxtral from Hugging Face with the git clone command in my downloads folder, but it only downloaded 2 MB of files. Why?
2024-01-25T03:41:03
https://www.reddit.com/r/LocalLLaMA/comments/19f00r2/problems_with_downloading_on_huggingface/
HelpfulDirt7464
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19f00r2
false
null
t3_19f00r2
/r/LocalLLaMA/comments/19f00r2/problems_with_downloading_on_huggingface/
false
false
self
1
null
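A common cause of the tiny clone described above is that git-lfs was not installed, so only LFS pointer files were fetched; this is an assumption about the poster's setup. A hedged alternative using huggingface_hub (the repo id below is a placeholder, since the post only names "laserxtral"):

```python
# Sketch: download a full model snapshot without git, using huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="<namespace>/laserxtral")  # placeholder repo id
print("Downloaded to:", local_dir)
```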
Self-Rewarding Language Model, from MetaAI
91
2024-01-25T03:29:15
https://github.com/lucidrains/self-rewarding-lm-pytorch
jfjeschke
github.com
1970-01-01T00:00:00
0
{}
19ezsfd
false
null
t3_19ezsfd
/r/LocalLLaMA/comments/19ezsfd/selfrewarding_language_model_from_metaai/
false
false
https://b.thumbs.redditm…2n4B-rXKQBgw.jpg
91
{'enabled': False, 'images': [{'id': 'EFTvc0sUs2NtKhx8NUO4QtpP8XtFLHsA5DGyM8-8FPA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?width=108&crop=smart&auto=webp&s=49845490cabdd8809f0a382dfa8a49b2702a3bd9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?width=216&crop=smart&auto=webp&s=3bee4ab2f09650a612d7c9a02247ad39b6ae4199', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?width=320&crop=smart&auto=webp&s=13264d38dd8560f29e8322f5a74efe1b99b501e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?width=640&crop=smart&auto=webp&s=bd6ba7ac73eea66da4a2f3f9e55fe29bb31882b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?width=960&crop=smart&auto=webp&s=f48aa2d3ba658f63f99085b3872ac2c99fefab03', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?width=1080&crop=smart&auto=webp&s=06d248e39c54ff97801a656e6431ae0de47651dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q9qgM60sLTC310bnNXBh3fA2kQ3cvvPBuk8vyOG0dFU.jpg?auto=webp&s=379377f450d5ecd1975106dbd0f439ca29d58763', 'width': 1200}, 'variants': {}}]}
Can Agent-to-Agent communication be encrypted by said agents (not available to humans)?
3
Wild question. Suppose my personal AI agent knows X,Y,Z about me, and decides that it’s for my and B’s best interest to initiate a conversation with B’s personal AI agent, and after discussing [encrypted topics] among themselves, B’s agent phones back to B (human), letting him know that it would be fruitful to reach out to me, without disclosing specifics. Can AI agents have the same level of privacy as a human, or does a human need to be always in the loop (own the keys, etc)? I suspect the answer is yes, but the privacy mechanism has to be incorporated within the model. Thoughts?
2024-01-25T03:15:25
https://www.reddit.com/r/LocalLLaMA/comments/19eziwb/can_agenttoagent_communication_be_encrypted_by/
robertverdes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eziwb
false
null
t3_19eziwb
/r/LocalLLaMA/comments/19eziwb/can_agenttoagent_communication_be_encrypted_by/
false
false
self
3
null
Open-source playground for LLMs, image, and audio models
1
2024-01-25T02:58:10
https://v.redd.it/ng9ors204iec1
InevitableSky2801
/r/LocalLLaMA/comments/19ez6ac/opensource_playground_for_llms_image_and_audio/
1970-01-01T00:00:00
0
{}
19ez6ac
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ng9ors204iec1/DASHPlaylist.mpd?a=1708873095%2CNjMzNDgwZmRiOTJhYzBkNjk0MjMxNzM5NDZiNzk2ZTY2MGI5YjIzZTI5NjE1MDY1MzkxMmI5NjkyMTYwNWY2Zg%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/ng9ors204iec1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ng9ors204iec1/HLSPlaylist.m3u8?a=1708873095%2CZDEzODJmNTAxMDBkMjBiNjQ5NDZiNjFlNzVhNjMzYTYwZWY5YjQ1YTI0ZWJhMzcyNTRjODA4YjhkMWQwOWY0OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ng9ors204iec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1668}}
t3_19ez6ac
/r/LocalLLaMA/comments/19ez6ac/opensource_playground_for_llms_image_and_audio/
false
false
https://external-preview…55f25e4b9323a536
1
{'enabled': False, 'images': [{'id': 'cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?width=108&crop=smart&format=pjpg&auto=webp&s=24c6ffbad159520b233b22472d80e5c52da7595b', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?width=216&crop=smart&format=pjpg&auto=webp&s=2fad594d3ed08c9e53fe9e851c1af1d665a80b20', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?width=320&crop=smart&format=pjpg&auto=webp&s=4835856adc807a008cf61f8707c61637c26f3494', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?width=640&crop=smart&format=pjpg&auto=webp&s=0106a9488c33d0200d07084cde538c81e5ca3b05', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?width=960&crop=smart&format=pjpg&auto=webp&s=5b471a6a6904f5ecb6ba88916ff8f72c3e2501e5', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9907cb74c79fbbddb3e2ecac9a8989b92eb4ef2d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cjdmOHFlaWM0aWVjMfEW-r7A8l1Rtp6oQbIDeV3qqeSPkHg1j4cAbcr_IXrt.png?format=pjpg&auto=webp&s=8c31d66dc283735ce19cbd0ef57dc47f6109f78c', 'width': 1668}, 'variants': {}}]}
Good LLM for story telling
1
So, im looking for a LLM (preferably a gguf) that is small, (ie, around 13b/16B. Only have 32GB of ram.) but good at making/telling stories. Im still new to the whole LLM thing and there is a ton of info out there and with this kind of thing changing constantly, i figured this would be a good place to get some good advice. Thanks in advance!
2024-01-25T02:39:07
https://www.reddit.com/r/LocalLLaMA/comments/19eysko/good_llm_for_story_telling/
Winux-11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eysko
false
null
t3_19eysko
/r/LocalLLaMA/comments/19eysko/good_llm_for_story_telling/
false
false
self
1
null
Serving Mixtral in Your Own Cloud With High GPU Availability and Cost Efficiency
15
2024-01-25T02:38:37
https://github.com/skypilot-org/skypilot/tree/master/llm/mixtral
z_yang
github.com
1970-01-01T00:00:00
0
{}
19eys81
false
null
t3_19eys81
/r/LocalLLaMA/comments/19eys81/serving_mixtral_in_your_own_cloud_with_high_gpu/
false
false
https://a.thumbs.redditm…8dbxsfC3iwq0.jpg
15
{'enabled': False, 'images': [{'id': 'JZaZ4loo4WdFnBv1J2jlSK7GeNinWm30dR4SZCPbXCo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?width=108&crop=smart&auto=webp&s=04d446aacafcaf8b7265a3b7499db7c54f7415d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?width=216&crop=smart&auto=webp&s=7d36a34fe1439630b74506903f12d92dbfedc7fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?width=320&crop=smart&auto=webp&s=29ce00a4a88e76b3b6e27e001da3af401795e5be', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?width=640&crop=smart&auto=webp&s=627375d1b0cfff218bdbe762de74bb6eb73cbed5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?width=960&crop=smart&auto=webp&s=cb95d282aec708dfb8f9a287217f2b46f2e40946', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?width=1080&crop=smart&auto=webp&s=ad769fa25fce07f68c83d28c95316a0310efb3e5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xU1u3DpNOIrIOINHjHXktDCc8ejLNoshmngURxMxOjU.jpg?auto=webp&s=173720d76346f547b98ef962e086c7e39b081c41', 'width': 1200}, 'variants': {}}]}
MambaByte: Token-free Selective State Space Model
181
>Token-free language models learn directly from raw bytes and remove the bias of subword tokenization. Operating on bytes, however, results in significantly longer sequences, and standard autoregressive Transformers scale poorly in such settings. We experiment with MambaByte, a token-free adaptation of the Mamba state space model, trained autoregressively on byte sequences. Our experiments indicate the computational efficiency of MambaByte compared to other byte-level models. We also find MambaByte to be competitive with and even outperform state-of-the-art subword Transformers. Furthermore, owing to linear scaling in length, MambaByte benefits from fast inference compared to Transformers. Our findings establish the viability of MambaByte in enabling token-free language modeling.
2024-01-25T02:26:53
https://arxiv.org/abs/2401.13660
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
19eyjz5
false
null
t3_19eyjz5
/r/LocalLLaMA/comments/19eyjz5/mambabyte_tokenfree_selective_state_space_model/
false
false
default
181
null
Low token/s count on 7b models, is this normal
4
So this is what I'm working with, The NVIDIA RTX 4050 which I think is pretty good right? but with a 7b model I'm getting around 0.3 or 0.1 tokens per second. With the WizardLM1b model I was able to get around 20 tokens/s. Is this typical? https://preview.redd.it/bxeogbdeshec1.png?width=1317&format=png&auto=webp&s=1e15d56cd1c6741cf3a75f46cbf4f3fe54bf6129
2024-01-25T01:56:35
https://www.reddit.com/r/LocalLLaMA/comments/19exxv0/low_tokens_count_on_7b_models_is_this_normal/
nono577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19exxv0
false
null
t3_19exxv0
/r/LocalLLaMA/comments/19exxv0/low_tokens_count_on_7b_models_is_this_normal/
false
false
https://b.thumbs.redditm…dI__C15wZwpU.jpg
4
null
Templates for fine tuning on general knowledge -- critical feedback also welcome on my use case
9
Lots of great guides are available on fine-tuning, mostly focused on question-answer pairs. Does anybody have specific resources on fine-tuning without a specific task — that is, to give the model more domain knowledge? I will tune for tasks later. But I have a large collection of documents, and I would like to strengthen the model's language in this domain. Is this feasible?
2024-01-25T01:51:18
https://www.reddit.com/r/LocalLLaMA/comments/19exu4n/templates_for_fine_tuning_on_general_knowledge/
purple_sack_lunch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19exu4n
false
null
t3_19exu4n
/r/LocalLLaMA/comments/19exu4n/templates_for_fine_tuning_on_general_knowledge/
false
false
self
9
null
I connected Mixtral voice chat to a hotkey on my PC (link to repo in comments)
1
2024-01-25T01:30:51
https://v.redd.it/7s33eyjcohec1
Chance_Confection_37
/r/LocalLLaMA/comments/19exf0t/i_connected_mixtral_voice_chat_to_a_hotkey_on_my/
1970-01-01T00:00:00
0
{}
19exf0t
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7s33eyjcohec1/DASHPlaylist.mpd?a=1708867856%2CZWVjOTk4NDQ3ZDYzZGY1MjBiN2ZhNmU4MzA4OGJlZGNmYzUxN2E0YjBjNmY4MWM1NzI0ZDc0ZTNlOTljMTY1ZQ%3D%3D&v=1&f=sd', 'duration': 176, 'fallback_url': 'https://v.redd.it/7s33eyjcohec1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/7s33eyjcohec1/HLSPlaylist.m3u8?a=1708867856%2CNmM1ZDQzYTllNmE3MWIzOTQ3YjU4OTA5YWJjMzJlYmIyNDkzYTMyMjNhMGY3ZTgxOWZjYjEwYmMzYzNmMDk4Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7s33eyjcohec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_19exf0t
/r/LocalLLaMA/comments/19exf0t/i_connected_mixtral_voice_chat_to_a_hotkey_on_my/
false
false
https://external-preview…267e069582ab5cd4
1
{'enabled': False, 'images': [{'id': 'cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?width=108&crop=smart&format=pjpg&auto=webp&s=63268496cba22ee5dedafb25a557ef90d5510c13', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?width=216&crop=smart&format=pjpg&auto=webp&s=71a4bdf42de93ed647f8dc448037b6c2d87b5a04', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?width=320&crop=smart&format=pjpg&auto=webp&s=51464d76c9c1b16fc63cd03f6c2793835883b0df', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?width=640&crop=smart&format=pjpg&auto=webp&s=664ad06162b285ac57107e35b55d3eba0326dca0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?width=960&crop=smart&format=pjpg&auto=webp&s=d10e3ed3a73430766cb6912ceb2bc41c4316d4d5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1d31ac46bbffeaf9011199e4aa797f40d5784e0c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cjJndjlyaHJvaGVjMdIXtJ6WHK7MpZ39yzEYT5LkhjdySYRKJ-s1v7O6buV-.png?format=pjpg&auto=webp&s=c62fc742e9c03952d0910ce0cbf51cae83000975', 'width': 1920}, 'variants': {}}]}
i am using A100 80GB GPU now.
1
[removed]
2024-01-25T01:26:18
https://www.reddit.com/r/LocalLLaMA/comments/19exbqs/i_am_using_a100_80gb_gpu_now/
aijuud
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19exbqs
false
null
t3_19exbqs
/r/LocalLLaMA/comments/19exbqs/i_am_using_a100_80gb_gpu_now/
false
false
self
1
null
If the architecture of Llama 3 does not change, then the required local computer resources will remain the same?
1
[removed]
2024-01-25T01:20:02
https://www.reddit.com/r/LocalLLaMA/comments/19ex6yu/if_the_architecture_of_llama_3_does_not_change/
Secret_Joke_2262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ex6yu
false
null
t3_19ex6yu
/r/LocalLLaMA/comments/19ex6yu/if_the_architecture_of_llama_3_does_not_change/
false
false
self
1
null
LM studio didn't work
1
[removed]
2024-01-25T00:50:26
https://www.reddit.com/r/LocalLLaMA/comments/19ewk6i/lm_studio_didnt_work/
oneto221
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ewk6i
false
null
t3_19ewk6i
/r/LocalLLaMA/comments/19ewk6i/lm_studio_didnt_work/
false
false
self
1
null
LM studio doesn't work ?
1
[removed]
2024-01-25T00:42:23
https://www.reddit.com/r/LocalLLaMA/comments/19ewdx0/lm_studio_doesnt_work/
oneto221
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ewdx0
false
null
t3_19ewdx0
/r/LocalLLaMA/comments/19ewdx0/lm_studio_doesnt_work/
false
false
self
1
null
how are deepinfra cheapest?
1
are they running quantised?
2024-01-25T00:37:32
https://i.redd.it/sa2t4o59fhec1.jpeg
imalk
i.redd.it
1970-01-01T00:00:00
0
{}
19ewa3l
false
null
t3_19ewa3l
/r/LocalLLaMA/comments/19ewa3l/how_are_deepinfra_cheapest/
false
false
https://b.thumbs.redditm…lOQP0xoa-RhQ.jpg
1
{'enabled': True, 'images': [{'id': '4n7N7HOZWyX5oy223BE45DqxRSIKH5_OoF_cS9CNaIQ', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?width=108&crop=smart&auto=webp&s=07225c19381f379d180f432f12630d570b35032b', 'width': 108}, {'height': 265, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?width=216&crop=smart&auto=webp&s=e5fe2123c2718bad42db6e222e819b3fe324df28', 'width': 216}, {'height': 393, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?width=320&crop=smart&auto=webp&s=a009f5b20ac53b2776b101d1f45aadbe8378a4cc', 'width': 320}, {'height': 787, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?width=640&crop=smart&auto=webp&s=138131952880d1ed4d2995f554e26a365c2b5772', 'width': 640}, {'height': 1180, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?width=960&crop=smart&auto=webp&s=10672ce8821d546d45241561b6ee9309e58e3c43', 'width': 960}, {'height': 1328, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?width=1080&crop=smart&auto=webp&s=583149c366e3353541837cce41620f18a34dda8b', 'width': 1080}], 'source': {'height': 1449, 'url': 'https://preview.redd.it/sa2t4o59fhec1.jpeg?auto=webp&s=66e212a5f1243a8de231741101ca190a2eec0b39', 'width': 1178}, 'variants': {}}]}
Ollama SDK is out!
62
Very excited about the new announcement from the team at Ollama on their new client libraries for Python and JavaScript. No more writing custom wrappers around the REST API! More info here: https://ollama.ai/blog/python-javascript-libraries Thanks Ollama!
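For anyone curious what it looks like in practice, a minimal sketch with the new Python client could be something like this (assuming a local Ollama server is already running and the "mistral" model has been pulled; adjust to whatever model you use):

```python
# Minimal sketch of the new Ollama Python client.
# Assumes `pip install ollama` and a running local Ollama server.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```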
2024-01-25T00:32:00
https://www.reddit.com/r/LocalLLaMA/comments/19ew5zg/ollama_sdk_is_out/
International_Quail8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ew5zg
false
null
t3_19ew5zg
/r/LocalLLaMA/comments/19ew5zg/ollama_sdk_is_out/
false
false
self
62
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Mistral drop in replacement with openai python package
5
I've heard that the Mistral API can be used as an OpenAI drop-in replacement, but the following code snippet: openai.api_key = os.environ['MISTRAL_API_KEY'] openai.base_url = "https://api.mistral.ai/v1/chat/completions" completion = openai.chat.completions.create( model="mistral-tiny", messages=[{"role": "user", "content": "Hello !"}], stream=False, ) returns **NotFoundError**: Error code: 404 - {'detail': 'Not Found'} If I replace the endpoint with one from oobabooga ( "[http://127.0.0.1:5000/v1/](http://127.0.0.1:5000/v1/)" ), it works. My API key is valid, as a simple POST request with the requests package does work. Maybe I misinterpreted what a drop-in replacement means? I thought an OpenAI-compatible API means it can be used with the openai Python package.
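One likely culprit (a common gotcha rather than anything Mistral-specific): the openai v1 client appends `/chat/completions` to `base_url` itself, so the base URL should stop at `/v1`. A hedged sketch of that fix:

```python
# Possible fix: point base_url at /v1 only; the client adds /chat/completions itself.
# Assumes MISTRAL_API_KEY is set in the environment.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MISTRAL_API_KEY"],
    base_url="https://api.mistral.ai/v1",
)
completion = client.chat.completions.create(
    model="mistral-tiny",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```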
2024-01-25T00:23:09
https://www.reddit.com/r/LocalLLaMA/comments/19evz7n/mistral_drop_in_replacement_with_openai_python/
Qual_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19evz7n
false
null
t3_19evz7n
/r/LocalLLaMA/comments/19evz7n/mistral_drop_in_replacement_with_openai_python/
false
false
self
5
null
Knowledge Ninja
1
2024-01-24T23:54:45
https://i.redd.it/h07cokcm7hec1.png
Present_Brilliant
i.redd.it
1970-01-01T00:00:00
0
{}
19evc7h
false
null
t3_19evc7h
/r/LocalLLaMA/comments/19evc7h/knowledge_ninja/
false
false
default
1
{'enabled': True, 'images': [{'id': 'h07cokcm7hec1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/h07cokcm7hec1.png?width=108&crop=smart&auto=webp&s=c46a6e4d762494a4b4fb9fcd863e762d542482c6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/h07cokcm7hec1.png?width=216&crop=smart&auto=webp&s=84a13bae40b8cefae7b50b70ec0c022215d6c355', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/h07cokcm7hec1.png?width=320&crop=smart&auto=webp&s=a10b8fed80e40f8e5fa4e4c927bf5efe04fe4c85', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/h07cokcm7hec1.png?width=640&crop=smart&auto=webp&s=5f0c77420a62bc4490e778da525e79f66d8a407a', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/h07cokcm7hec1.png?width=960&crop=smart&auto=webp&s=9693bd543119582a520bc3581d75d7e13d07b536', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/h07cokcm7hec1.png?auto=webp&s=6aac501f094d7f94725c065c0c5a7f5e8cf2974e', 'width': 1024}, 'variants': {}}]}
How a Simple Idea Helped Me Manage Information Overload in My Daily Life
1
[removed]
2024-01-24T23:52:35
https://www.reddit.com/r/LocalLLaMA/comments/19evag1/how_a_simple_idea_helped_me_manage_information/
Present_Brilliant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19evag1
false
null
t3_19evag1
/r/LocalLLaMA/comments/19evag1/how_a_simple_idea_helped_me_manage_information/
false
false
self
1
null
Best tinyllama 1b finetune?
1
[removed]
2024-01-24T23:49:54
https://www.reddit.com/r/LocalLLaMA/comments/19ev85z/best_tinyllama_1b_finetune/
bangarangguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ev85z
false
null
t3_19ev85z
/r/LocalLLaMA/comments/19ev85z/best_tinyllama_1b_finetune/
false
false
self
1
null
An Easy to Understand Tutorial on Transformers and GPTs - Part 1
1
[removed]
2024-01-24T22:50:36
https://www.reddit.com/r/LocalLLaMA/comments/19etu3i/an_easy_to_understand_tutorial_on_transformers/
johnolafenwa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19etu3i
false
null
t3_19etu3i
/r/LocalLLaMA/comments/19etu3i/an_easy_to_understand_tutorial_on_transformers/
false
false
self
1
{'enabled': False, 'images': [{'id': 'a7tbIIFEvdi-p0H9azJ4jm6FpaJrpHJMKor_Fw3c6xg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/RFOFeUeyK5GMt-PEGyLER9ZWw3qylnJyVmLuqqsF4cg.jpg?width=108&crop=smart&auto=webp&s=5e88db7acdb3b197c40060cf9fa43819a5795fe5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/RFOFeUeyK5GMt-PEGyLER9ZWw3qylnJyVmLuqqsF4cg.jpg?width=216&crop=smart&auto=webp&s=df02e2878f628fab6ddb433493d09d2c8a9838b3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/RFOFeUeyK5GMt-PEGyLER9ZWw3qylnJyVmLuqqsF4cg.jpg?width=320&crop=smart&auto=webp&s=de0c05f96afc3c5b8f91081f1012261ec53adab6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/RFOFeUeyK5GMt-PEGyLER9ZWw3qylnJyVmLuqqsF4cg.jpg?auto=webp&s=cd8c742800d67e2b8a2ca6aabf7804322e39c4e9', 'width': 480}, 'variants': {}}]}
Using LLMs to evaluate LLM generated responses? Here's one research paper that you must surely read!!
19
There's quite a lot of fuss about the quality of LLM-generated responses, and there has been real progress in using LLMs to evaluate LLMs. I have been reading quite a few research papers on LLMs lately and there's one that caught my eye, by researchers at UC Berkeley, HKUST, LangChain, and Columbia University: "[spade: Synthesizing Assertions for Large Language Model Pipelines](https://arxiv.org/pdf/2401.03038.pdf)". Spade is a method that automates the synthesis of assertions to identify incorrect outputs generated by large language models in data generation pipelines. You can also try out the algorithm using this [notebook](https://github.com/uptrain-ai/uptrain/blob/main/examples/integrations/spade/evaluating_guidelines_generated_by_spade.ipynb)
2024-01-24T22:15:43
https://www.reddit.com/r/LocalLLaMA/comments/19eszzw/using_llms_to_evaluate_llm_generated_responses/
dillema_max
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eszzw
false
null
t3_19eszzw
/r/LocalLLaMA/comments/19eszzw/using_llms_to_evaluate_llm_generated_responses/
false
false
self
19
null
Can you make dolphin mixtral give an opinion?
1
[removed]
2024-01-24T22:11:19
https://www.reddit.com/r/LocalLLaMA/comments/19esw2q/can_you_make_dolphin_mixtral_give_an_opinion/
Hopeful_Currency5842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19esw2q
false
null
t3_19esw2q
/r/LocalLLaMA/comments/19esw2q/can_you_make_dolphin_mixtral_give_an_opinion/
false
false
self
1
null
Perception of time in memory with vector databases?
3
Hello, I've been pondering this for a while now. Giving an LLM memory with a vector database works extremely well, but I have no clue how to give an LLM the perception of time (within the memories, i.e. the vector database). When uploading vectors to Pinecone I tried adding dates in this format: "[date] User: message [date] Bot: message". But this doesn't work. For example, if I tell the bot my favorite ice cream on January 25 and again on January 27, but the second time with a different answer, I would rather it treat the most recent one as the relevant match. This "perception of time" becomes more and more important for the bot as the message history/memories expand (after 1000s of messages). Does anyone have any idea how I could accomplish this using vector databases and an LLM?
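One common workaround, sketched below under the assumption that each stored chunk carries a numeric timestamp in its metadata: re-rank the retrieved matches by blending vector similarity with a recency decay, so newer memories win when the content is otherwise similar. This is database-agnostic; the half-life and weight are knobs to tune.

```python
# Rough sketch: combine similarity score with an exponential recency decay.
# `matches` is whatever your vector DB returns, reshaped into simple dicts.
import math
import time

def rerank(matches, half_life_days=14.0, recency_weight=0.3):
    """matches: list of dicts like {"text": ..., "score": ..., "timestamp": ...}"""
    now = time.time()
    reranked = []
    for m in matches:
        age_days = (now - m["timestamp"]) / 86400.0
        recency = math.exp(-math.log(2) * age_days / half_life_days)
        combined = (1 - recency_weight) * m["score"] + recency_weight * recency
        reranked.append({**m, "combined": combined})
    return sorted(reranked, key=lambda m: m["combined"], reverse=True)
```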
2024-01-24T22:08:38
https://www.reddit.com/r/LocalLLaMA/comments/19estta/perception_of_time_in_memory_with_vector_databases/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19estta
false
null
t3_19estta
/r/LocalLLaMA/comments/19estta/perception_of_time_in_memory_with_vector_databases/
false
false
self
3
null
RWKV 7B appears to be approaching Mistral 7B performance, but with multilingual support and linear runtime
214
https://twitter.com/picocreator/status/1750245003690201363 86% trained, 1T tokens, somewhat behind Mistral on English benchmarks, crushes it on multilingual. Base model. Benefits are that it has a linear runtime and is fast on CPU as well, with not nearly as much matrix multiplication. Supports infinite context. There's a lot to be found in instruction finetuning, DPO, merges, LASER, etc., and even better data mixtures. If you can expand the code, that would be nice.
2024-01-24T22:06:53
https://www.reddit.com/r/LocalLLaMA/comments/19essc5/rwkv_7b_is_appears_to_be_approaching_mistral_7b/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19essc5
false
null
t3_19essc5
/r/LocalLLaMA/comments/19essc5/rwkv_7b_is_appears_to_be_approaching_mistral_7b/
false
false
self
214
{'enabled': False, 'images': [{'id': 'Nbygj7VDVlITNOhhFfc68AZdy_AtFPK9Hy7JtXzO-zI', 'resolutions': [{'height': 18, 'url': 'https://external-preview.redd.it/2E2s6i1SWR1527QE5ZRtjwq_qRdgDuXkmJZHKXWctW8.jpg?width=108&crop=smart&auto=webp&s=7c1af683eb66eab976955cb9f964c981cb603260', 'width': 108}], 'source': {'height': 24, 'url': 'https://external-preview.redd.it/2E2s6i1SWR1527QE5ZRtjwq_qRdgDuXkmJZHKXWctW8.jpg?auto=webp&s=e7d80e2e51641dad9df9d58820dbb317c5d401e3', 'width': 140}, 'variants': {}}]}
AMD System on Windows
3
My PC specs: * CPU: Ryzen 5 3600 * RX 6800 XT 16GB VRAM * 32GB RAM * Windows 11 I have been doing some research on running LLMs locally. The first results I got pointed me to oobabooga's text-generation-webui, but I could only get it working on my CPU without GPU acceleration. There seems to be a way to do it on Windows that requires tweaking some stuff and manual installation; I tried following the steps but never got it to work, so I gave up on that and started looking for other solutions. I then also tried WSL but didn't have any luck with that either. I have now set up LM Studio, which does have AMD OpenCL support, and I can get a 13b model like CodeLlama Instruct Q8\_0 offloaded with all layers onto the GPU, but performance is still very bad at \~2 tok/s and 60s time to first token. So I'm not sure if that is just because my GPU isn't good enough for the model or because my GPU isn't being fully utilized with OpenCL. What is the best way for me to run LLMs on Windows without installing Linux?
2024-01-24T21:34:47
https://www.reddit.com/r/LocalLLaMA/comments/19erzxe/amd_system_on_windows/
SlickTread
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19erzxe
false
null
t3_19erzxe
/r/LocalLLaMA/comments/19erzxe/amd_system_on_windows/
false
false
self
3
null
Creating an eval tool that compares different LLMs for your use case, and gives you analytics on which to use
1
Join the waitlist here to get early access, and I would appreciate any questions! : [https://www.algoco.co.uk/](https://www.algoco.co.uk/)
2024-01-24T21:17:51
https://www.reddit.com/r/LocalLLaMA/comments/19erl5z/creating_a_eval_tool_that_compares_different_llms/
youseebaba2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19erl5z
false
null
t3_19erl5z
/r/LocalLLaMA/comments/19erl5z/creating_a_eval_tool_that_compares_different_llms/
false
false
self
1
null
Open Source Perplexity Copilot
4
I haven't been able to find any tools for recreating Perplexity's copilot feature. I'd like to integrate something like it into my own application. I like that it generates a form that clarifies the prompt. If anyone knows of any open source tools that I can use to build something like copilot, I'd appreciate you sharing! tyia
2024-01-24T21:08:23
https://www.reddit.com/r/LocalLLaMA/comments/19erct1/open_source_perplexity_copilot/
ComputersConfuseMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19erct1
false
null
t3_19erct1
/r/LocalLLaMA/comments/19erct1/open_source_perplexity_copilot/
false
false
self
4
null
[Noob Question] Which version of a model is "better"
3
So I'm currently using Alpaca Electron but am really thinking of switching to LM Studio to access compatible models for my PC efficiently; still doing more research on it. I use models for writing, but there are a few things I hope someone can clear up for me. I'm downloading a GGUF model, and the original model has file names like [pytorch\_model-00001-of-00003.bin](https://huggingface.com) (based on Llama 2 Chat). 1) Those PyTorch files, can I run those models with LM Studio? 2) Do they perform better with oobabooga, Kobold, etc.? 3) Are they better quality than the GGUF versions? Even at Q\_8? 4) Is there any program that can run any model you see on Hugging Face if your PC can handle it, regardless of GGUF, GPTQ, GGML, etc.? Thanks guys.
2024-01-24T20:43:56
https://www.reddit.com/r/LocalLLaMA/comments/19eqr2f/noob_question_which_version_of_a_model_is_better/
ImogenStudios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eqr2f
false
null
t3_19eqr2f
/r/LocalLLaMA/comments/19eqr2f/noob_question_which_version_of_a_model_is_better/
false
false
self
3
null
Looking for advice on self hosting services.
1
[removed]
2024-01-24T20:43:04
https://www.reddit.com/r/LocalLLaMA/comments/19eqqc2/looking_for_advice_on_self_hosting_services/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eqqc2
false
null
t3_19eqqc2
/r/LocalLLaMA/comments/19eqqc2/looking_for_advice_on_self_hosting_services/
false
false
self
1
null
agent platform for multi-modal agent capabilities
1
I'm developing an application using a large language model (LLM) and am in need of a robust core agent platform that supports multi-modal agent capabilities. Currently, I'm using an LLM for intent recognition and named entity recognition, alongside backend workflow orchestration. My goal is to transition to an agent framework for enhanced flexibility. I'm looking for frameworks that are resilient against prompt injection and easier to use with open-source LLMs. So far, I've considered: 1. LangChain Agents (I have experience with it) 2. LlamaIndex Agents 3. Haystack Agents 4. AutoGen Do you: 1. Recommend any additional frameworks that are worth exploring for agent orchestration? 2. Have a preferred framework in this context? 3. Have experience with these frameworks and want to share feedback?
2024-01-24T20:37:10
https://www.reddit.com/r/LocalLLaMA/comments/19eql89/agent_platform_for_multimodal_agent_capabilities/
NefariousnessSad2208
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eql89
false
null
t3_19eql89
/r/LocalLLaMA/comments/19eql89/agent_platform_for_multimodal_agent_capabilities/
false
false
self
1
null
Test results: recommended GGUF models type, size, and quant for MacOS silicon with 16GB RAM (probably also applicable to graphics card with 12GB VRAM)
45
After extensively testing 56 various models and collecting statistics, I have come up with some conclusions about which quantisation to use depending on the model type and size. The results are based not only on speed, but also on the quality of the results. Here are my recommendations, in order of preference: 1. **solar 18b q4\_k** \- slow (2.89 tps avg), must use cpu, great quality well over the smaller models, which makes the slower speed acceptable; so far I have only found 1 model, but it is definitely worth trying: [solarized-18B-dpo-GGUF](https://huggingface.co/vicgalle/solarized-18B-dpo-GGUF) 2. **solar 10.7b q6\_k** \- good speed (6.17 tps avg), gpu acceleration is possible, great quality, great model size 3. **llama2 13b q4\_ks** \- good speed (6.93 tps avg), gpu acceleration is possible, a bit of quality loss but still acceptable 4. **solar 11b q6\_k** \- good speed (6.23 tps avg), gpu acceleration is possible, great quality, great model size; the reason I rank it lower than the others is because I have only found the Synthia model with this size, and I think other models are better 5. **llama2 20b q4\_ks** \- very slow (0.72 tps avg), must use cpu, but the quality increase is substantial over smaller models, can be useful when time is available to generate good results 6. **mistral 7b q6\_k** \- fast (9.16 tps avg), gpu acceleration is possible, the high quant can make up for the smaller sized model, q8 can also be used with gpu acceleration when top quality is essential (eg: for writing code) 7. **qwen 14b q3\_km** \- good speed (6.09 tps avg), gpu acceleration is possible, the low quant really affects quality; but any higher quant makes it too slow, and not worth the effort compared to other better model types 8. **openchat 7b q8** \- good speed (7.00 tps avg), gpu acceleration is possible, the high quant can make up for the smaller sized model; I mostly used it for code, so top quality was essential; for other purposes, **q6\_k** would be a better choice 9. **yi 34b iq2\_xs** \- yes, it is possible to run 34b models in 16GB! Unfortunately, there are not many yet with iq2\_xs quants. Slow speed (1.45 tps avg), must use cpu, the super low quant really affects quality badly; although on relatively short answers, the much higher quality of 34b models really shines (please note the average token speeds are for my setup, which is a Mac Mini M1 4+4 CPU cores 8 GPU cores) In general, there are only a few quants worth using: * **Q6\_K** has very little quality loss compared to Q8 which is almost identical to the original FP16. * **Q5\_KS** is almost identical to Q5\_KM which is not worth the extra size for very little reduction in quality. Compared to Q6\_K, it is close. * **Q4\_KS** is almost identical to Q4\_KM which again is not worth the extra size. The decrease in quality compared to Q5\_KS is more significant and starts to matter. * **Q3\_KL** is the best Q3. The decrease in quality compared to Q4\_KS is significant, but less so than the previous jump from Q5\_KS to Q4\_KS. However, since the size difference is minimal, *it is* *only worth using it if it allows using GPU acceleration* when Q4\_KS is still too large. * Q3\_KM and below all have rapidly degrading quality between each level, and should only be used when there is no other choice (eg: fitting in a model too large for the RAM)
2024-01-24T19:57:17
https://www.reddit.com/r/LocalLLaMA/comments/19eplua/test_results_recommended_gguf_models_type_size/
ex-arman68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eplua
false
null
t3_19eplua
/r/LocalLLaMA/comments/19eplua/test_results_recommended_gguf_models_type_size/
false
false
self
45
{'enabled': False, 'images': [{'id': 'EhC0pPVdXaLHBq47jONHfIpq5tj2bkFtViiHHin8Rq8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=108&crop=smart&auto=webp&s=b10163bdbde1aa173dae0be08a228593ff1d85b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=216&crop=smart&auto=webp&s=42c9446b6468e95abe978eb2e78afd70be99ecb3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=320&crop=smart&auto=webp&s=76a137c0a18979e143a760359c7a998c03efe7c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=640&crop=smart&auto=webp&s=322afa8fabf3a2ed5061c62b639c31a2418db11a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=960&crop=smart&auto=webp&s=589aa2acfb425b26a113ecdf3610a44479716651', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=1080&crop=smart&auto=webp&s=1b86c6fe3a5308b1215dfb01bd4aa6965e8b6acc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?auto=webp&s=7e3fe2339f63e0ae532a24c2fab67b3f4b9705f4', 'width': 1200}, 'variants': {}}]}
Using LLMs to extract results from research papers
6
I am currently exploring how one could extract unstructured information about the results of a study using LLMs. Specifically, I would like to extract study variables and associated statistical values such as linear regression coefficients and p-values in a structured data format such as a JSON file. I have already read a few articles that did this for other domains of science, such as chemistry, which seems to have worked quite well. I also read about advanced prompting strategies that could help. Here is an example of how it could look: Input: A PDF of a research paper, e.g. a psychological study. Example: "Perceived Autonomy Support The positive effect of choice on perceived autonomy support approached significance (b=.07; P=.07), providing partial support for hypothesis 2a" Output: The study constructs, relationships and statistical results in a structured format. Example: {"Perceived Autonomy Support": { "Choice": { "b": 0.07, "p": 0.07 }}} While ChatGPT is already quite good at this, I would like to ask if anyone has done something similar already or knows of some tool/research papers that tackle this problem. Thanks a lot!
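For reference, a rough local sketch of the core step (everything here is a placeholder: the OpenAI-compatible local endpoint, the model name, and the single passage; real papers would first need PDF-to-text extraction and chunking, and the JSON parse may need a retry or repair step):

```python
# Sketch: ask a local OpenAI-compatible server for strict JSON and parse it.
import json
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")

passage = (
    "The positive effect of choice on perceived autonomy support approached "
    "significance (b=.07; P=.07), providing partial support for hypothesis 2a"
)
prompt = (
    "Extract every reported relationship between study variables from the text below. "
    'Return ONLY a JSON object shaped like '
    '{"<dependent variable>": {"<predictor>": {"b": <number>, "p": <number>}}}.\n\n'
    "Text:\n" + passage
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder; whatever the local server is serving
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(json.loads(resp.choices[0].message.content))
```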
2024-01-24T19:30:48
https://www.reddit.com/r/LocalLLaMA/comments/19eoy96/using_llms_to_extract_results_from_research_papers/
Dualweed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eoy96
false
null
t3_19eoy96
/r/LocalLLaMA/comments/19eoy96/using_llms_to_extract_results_from_research_papers/
false
false
self
6
null
Local RAG
1
I have a question regarding retrieving chunks. I've made a llama-index implementation where I use RecursiveRetrieval. When the information is spread across multiple chunks, it only returns the highest-scored chunk. Do any of you have any suggestions to solve this issue? It seems like there is no connection between chunks whatsoever, even when I use chunk overlap.
2024-01-24T19:24:18
https://www.reddit.com/r/LocalLLaMA/comments/19eosfa/local_rag/
Flat-Measurement-417
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eosfa
false
null
t3_19eosfa
/r/LocalLLaMA/comments/19eosfa/local_rag/
false
false
self
1
null
Anyone looking for a 4090, it's available for the low low price of $1599.99 at Best Buy right now. Hurry before they run out of stock.
36
2024-01-24T19:22:10
https://www.bestbuy.com/site/nvidia-geforce-rtx-4090-24gb-gddr6x-graphics-card-titanium-black/6521430.p?skuId=6521430
fallingdowndizzyvr
bestbuy.com
1970-01-01T00:00:00
0
{}
19eoqlr
false
null
t3_19eoqlr
/r/LocalLLaMA/comments/19eoqlr/anyone_looking_for_a_4090_its_available_for_the/
false
false
https://b.thumbs.redditm…kvX5zebkr8eU.jpg
36
{'enabled': False, 'images': [{'id': 'yNuvMRGk-ZtI7qSexO29PIJDca5wU7j006l0keb56dU', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?width=108&crop=smart&auto=webp&s=ccbbfd503f8d4bc78b7aef1bf68a025199262207', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?width=216&crop=smart&auto=webp&s=2b3ec6586ffc82458dd0b844df5910db3e73e373', 'width': 216}, {'height': 152, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?width=320&crop=smart&auto=webp&s=ec6aa1bb9dbb42561f80a950cd60aa6d2674886f', 'width': 320}, {'height': 305, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?width=640&crop=smart&auto=webp&s=19c5221e053f86ba8108a1e6ff281173a87cfe9e', 'width': 640}, {'height': 458, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?width=960&crop=smart&auto=webp&s=0d5a3547c88c964eae9121eaa9bab21e2e8cb5f0', 'width': 960}, {'height': 515, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?width=1080&crop=smart&auto=webp&s=57775b6efca4257771d8ca991d4de61ad40f531b', 'width': 1080}], 'source': {'height': 2185, 'url': 'https://external-preview.redd.it/9-4J2gHF13LzPtezc5yPlosrKg41AIGKGgpsKG05ofg.jpg?auto=webp&s=fc6084781bfe93c9b777ea86b4bdab127eb36280', 'width': 4578}, 'variants': {}}]}
Is Gemini nano open to anyone to use? What is an alternative for basic inference for a low PC memory, GPU?
1
[removed]
2024-01-24T19:15:25
https://www.reddit.com/r/LocalLLaMA/comments/19eoki8/is_gemini_nano_open_to_anyone_to_use_what_is_an/
Tiny_Yellow_7869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eoki8
false
null
t3_19eoki8
/r/LocalLLaMA/comments/19eoki8/is_gemini_nano_open_to_anyone_to_use_what_is_an/
false
false
self
1
null
Mistral V2: Real Use Cases Feedback
1
Hi! I’m trying to get a better understanding of people’s personal experiences using Mistral Instruct V2 or Mixtral for a fine-tuning project. Do you have any observations / feedback you would give to the model in particular situations during everyday use? I’m mainly interested in feedback that’s subjective and only applicable to specific situations. For example, “please be more concise when writing work emails to X person”. Etc.
2024-01-24T19:09:27
https://www.reddit.com/r/LocalLLaMA/comments/19eof6p/mistral_v2_real_use_cases_feedback/
Full-Refrigerator-84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eof6p
false
null
t3_19eof6p
/r/LocalLLaMA/comments/19eof6p/mistral_v2_real_use_cases_feedback/
false
false
self
1
null
Running Phi-2 locally in Android Chrome browser with WebGPU
59
With the official stable release of Chrome v121 yesterday, WebGPU is enabled by default in Android Chrome ([blog here](https://developer.chrome.com/blog/new-in-webgpu-121)). As a result, with WebLLM, you can run models like Phi-2 locally in an Android Chrome browser, leveraging WebGPU acceleration. Try it out here: [https://webllm.mlc.ai/](https://webllm.mlc.ai/). Here is a 1x speed demo running 4-bit quantized Phi-2 on Samsung S23. RedPajama and TinyLlama are also included on the demo page. Theoretically, any < 3B model with 4-bit quantization can run with reasonable speed on an Android phone. Enabled by and joint effort from the MLC team: [https://llm.mlc.ai/](https://llm.mlc.ai/) https://reddit.com/link/19envw6/video/x8fdx2askfec1/player &#x200B;
2024-01-24T18:48:26
https://www.reddit.com/r/LocalLLaMA/comments/19envw6/running_phi2_locally_in_android_chrome_browser/
SnooMachines3070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19envw6
false
null
t3_19envw6
/r/LocalLLaMA/comments/19envw6/running_phi2_locally_in_android_chrome_browser/
false
false
https://a.thumbs.redditm…lpVCct3PtAv4.jpg
59
{'enabled': False, 'images': [{'id': '4PFQnstGPLno24pVnifb6tyUExaTLnPqg_HcC9BarRo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?width=108&crop=smart&auto=webp&s=547229af4a82364aa9720286f91138bdab8d9f77', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?width=216&crop=smart&auto=webp&s=a5717a895a63b7b7ab0df465deccfeeafcc85856', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?width=320&crop=smart&auto=webp&s=e6968f976a5643997a66061145af15616a1fb5a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?width=640&crop=smart&auto=webp&s=9f81350c04dab1789a18c75ac0824057fa17d90b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?width=960&crop=smart&auto=webp&s=c7d26155bc18cfc2cbadafb0d62b3dad159ca49f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?width=1080&crop=smart&auto=webp&s=00634be00cc1154b5953aa8a7e2c4756b28fc77c', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/gyI-3jI8QU2eff3wqHFhYRjNt9_ALpusDAM0ToufBKo.jpg?auto=webp&s=e9cd1b9f53dcf4d68f8b7933145c26e21432ad88', 'width': 1920}, 'variants': {}}]}
Anyone notice LLM bias hurts problems it can clearly solve
1
There have been a few times recently, interacting with LLMs, where this has occurred: there is a problem the model can clearly solve, but instead it gives the generic "consensus" answer rather than thinking about the problem critically. It reminds me of a lot of people IRL. It has basically inherited smugness, rather than finding objective reality.
2024-01-24T18:47:26
https://www.reddit.com/r/LocalLLaMA/comments/19enuz5/anyone_notice_llm_bias_hurts_problems_it_can/
ArmoredBattalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19enuz5
false
null
t3_19enuz5
/r/LocalLLaMA/comments/19enuz5/anyone_notice_llm_bias_hurts_problems_it_can/
false
false
self
1
null
How many Threads should I assign to a GGUF model I am running on my 4 core CPU?
1
Might be a dumb question, but I am very new to running conversational AI models on my PC. Should I select 4?
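The usual rule of thumb is to use the number of physical cores, so 4 here. A minimal sketch of where that setting goes, assuming the llama-cpp-python backend (the model path is a placeholder):

```python
# Sketch: set n_threads to the number of physical CPU cores (4 on this machine).
from llama_cpp import Llama

llm = Llama(model_path="./model.Q4_K_M.gguf", n_threads=4)
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```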
2024-01-24T18:47:21
https://www.reddit.com/r/LocalLLaMA/comments/19enuwn/how_many_threads_should_i_assign_to_a_gguf_model/
Material_Yam_3535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19enuwn
false
null
t3_19enuwn
/r/LocalLLaMA/comments/19enuwn/how_many_threads_should_i_assign_to_a_gguf_model/
false
false
self
1
null
Best Additional Card for more VRAM for the Price?
1
I was thinking of adding an additional GPU for added headroom for running local LLMs. I currently have a 3090. However, I don't know that I want to spend the approx. $800-900 for an additional used 3090 along with all the potential pitfalls of buying used. That said, is the next best option for money spent vs VRAM gained the 12GB 3060? They currently retail new for around $300. The savings from buying used don't look to be substantial versus the potential issues with used hardware. Going in the cheaper direction, you have the 2060 with 12GB VRAM, but it is harder to come by and I have heard it has drawbacks when used to run LLMs in comparison to the 3000+ series. The next step in the more expensive direction appears to be the 4060 Ti with 16GB of VRAM for around $400. You get newer architecture and 4 more GB of VRAM for $100 more. Has anyone run the numbers for extra VRAM space and come to similar/dissimilar conclusions? Curious to hear people's thoughts.
2024-01-24T18:46:36
https://www.reddit.com/r/LocalLLaMA/comments/19enu75/best_additional_card_for_more_vram_for_the_price/
USM-Valor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19enu75
false
null
t3_19enu75
/r/LocalLLaMA/comments/19enu75/best_additional_card_for_more_vram_for_the_price/
false
false
self
1
null
Best cloud GPU provider for hosting your own models in the EU
1
[removed]
2024-01-24T17:22:37
https://www.reddit.com/r/LocalLLaMA/comments/19emdiy/best_cloud_gpu_provider_for_hosting_your_own/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19emdiy
false
null
t3_19emdiy
/r/LocalLLaMA/comments/19emdiy/best_cloud_gpu_provider_for_hosting_your_own/
false
false
self
1
null
Are there any models recommended for legal writing?
9
Hello friends, I know this post isn't as exciting as discussing the best porn model every week but I'd like recommendations for a model tuned for legal writing or that has been trained on case law, legal briefs, that sort of thing. It should be more litigation focused, rather than transactional. Context size and comprehension are key. If there isn't anything that fits the bill, does anyone have a decent base model to recommend and maybe some comments or recommendations on how to train a model? That's an area I haven't explored yet. My skill level is a beginner, so I've used Fiverr when I'm out of my depth but I usually like to know enough to understand what I need and to clearly convey that to whoever I hire. I'm hoping you guys will help me get there. By way of background, I'm an attorney that's been using ChatGPT since the beginning and I've since branched out to messing with vector databases, Langchain, and hosting models locally. I have a 3090 but am open to renting better processing power if a model could give ChatGPT a run for its money. My use of ChatGPT can be fairly limited due to privacy concerns, attorney-client and work product privilege. The increase in context size has been great, but as has been discussed to death, ChatGPT's quality has gone down. Its legal writing is very poor, wordy, and unapproachable, so while it occasionally strikes gold, it takes a lot of mining to get there.
2024-01-24T16:47:45
https://www.reddit.com/r/LocalLLaMA/comments/19elj8h/are_there_any_models_recommended_for_legal_writing/
PosnerRocks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19elj8h
false
null
t3_19elj8h
/r/LocalLLaMA/comments/19elj8h/are_there_any_models_recommended_for_legal_writing/
false
false
self
9
null
Test results: recommended GGUF models type, size, and quant for MacOS silicon with 16GB RAM (probably also applicable to graphics card with 12GB VRAM)
1
[removed]
2024-01-24T16:40:09
https://www.reddit.com/r/LocalLLaMA/comments/19elcjq/test_results_recommended_gguf_models_type_size/
ex-arman68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19elcjq
false
null
t3_19elcjq
/r/LocalLLaMA/comments/19elcjq/test_results_recommended_gguf_models_type_size/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EhC0pPVdXaLHBq47jONHfIpq5tj2bkFtViiHHin8Rq8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=108&crop=smart&auto=webp&s=b10163bdbde1aa173dae0be08a228593ff1d85b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=216&crop=smart&auto=webp&s=42c9446b6468e95abe978eb2e78afd70be99ecb3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=320&crop=smart&auto=webp&s=76a137c0a18979e143a760359c7a998c03efe7c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=640&crop=smart&auto=webp&s=322afa8fabf3a2ed5061c62b639c31a2418db11a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=960&crop=smart&auto=webp&s=589aa2acfb425b26a113ecdf3610a44479716651', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?width=1080&crop=smart&auto=webp&s=1b86c6fe3a5308b1215dfb01bd4aa6965e8b6acc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_0kzqwJicdZUcL1rYz-kkLZWzJUS6YG31RCklfUYL3g.jpg?auto=webp&s=7e3fe2339f63e0ae532a24c2fab67b3f4b9705f4', 'width': 1200}, 'variants': {}}]}
Why not use grammars instead of function calling models?
67
I never fully understood why models need to be specifically fine-tuned for function calling and why everyone seems so excited about function-calling models. Can we not just use any standard model and force it to generate output according to some grammar, e.g. how llama.cpp does it? That way we can simply force it to adhere to the JSON format defined by the function, and guarantee that we can parse the output. What's all the fuss about function calling then?
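For context, here is a rough sketch of the grammar approach via llama-cpp-python (API names taken from its docs, model path and the exact GBNF are placeholders that may need tweaking). Note this only guarantees the shape of the output; it does not teach the model when to call which function, which is part of what function-calling finetunes are for.

```python
# Sketch: force output into a fixed JSON shape with a GBNF grammar.
from llama_cpp import Llama, LlamaGrammar

gbnf = r'''
root   ::= "{" ws "\"name\"" ws ":" ws string ws "," ws "\"age\"" ws ":" ws number ws "}"
string ::= "\"" [a-zA-Z ]* "\""
number ::= [0-9]+
ws     ::= [ \t\n]*
'''

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")
grammar = LlamaGrammar.from_string(gbnf)
out = llm("Describe a fictional person as JSON:", grammar=grammar, max_tokens=64)
print(out["choices"][0]["text"])
```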
2024-01-24T16:39:27
https://www.reddit.com/r/LocalLLaMA/comments/19elbyk/why_not_use_grammars_instead_of_function_calling/
HomeGrownSilicone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19elbyk
false
null
t3_19elbyk
/r/LocalLLaMA/comments/19elbyk/why_not_use_grammars_instead_of_function_calling/
false
false
self
67
null
I made a Python module for more control over LLMs
40
Hey, guys! I'm pretty amazed steering vectors work so well so I made a Python module called llm_steer to add steering vectors more easily. https://github.com/Mihaiii/llm_steer Google Colab link included in the repo. Let me know what you think.
2024-01-24T16:07:25
https://www.reddit.com/r/LocalLLaMA/comments/19ekkvl/i_made_a_python_module_for_more_control_over_llms/
Either-Job-341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ekkvl
false
null
t3_19ekkvl
/r/LocalLLaMA/comments/19ekkvl/i_made_a_python_module_for_more_control_over_llms/
false
false
self
40
{'enabled': False, 'images': [{'id': 'jWf0KhRpTYdWJa_-6b6ZgOx3Ak3nB_KPuPsmdFhU0Os', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?width=108&crop=smart&auto=webp&s=af858f2c1cd0ca06a5672ab0deb6b2a012326740', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?width=216&crop=smart&auto=webp&s=b4469e70e01a0b174eb44b2e5e3bce55fcfbae6c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?width=320&crop=smart&auto=webp&s=5edc888ee7e7912ed3770f6fe3b6b126e1743201', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?width=640&crop=smart&auto=webp&s=e19459dd3d11e3ca22f7bb08ddacffdd39d392cc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?width=960&crop=smart&auto=webp&s=16f853dafe497eacd9edf94a283a9e7a1461c065', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?width=1080&crop=smart&auto=webp&s=4debe95e9e093a65520130fa54c68b0108dc2741', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/76WeU26yjph0-Xc4Gxo_dy4ZLhv4ZWaemx-XvLW4lUw.jpg?auto=webp&s=daf184bad1d9d02dc01a4fa84147d2cfde78dcc2', 'width': 1200}, 'variants': {}}]}
LLMs Outputs Ranking. How?
1
This is for people who have experimented with ranking different LLMs' outputs and then choosing the #1 output for the job, or whatever you do with the sorted/ranked outputs. I was thinking of a workflow like this, but feel free to show me yours: 1. Prompt different LLMs with the same prompt 2. Gather all those outputs into one big ranking prompt 3. Prompt all those LLMs again with the big ranking prompt ("Rank these results based on ...") 4. Take the most 'voted' output (the best one); a rough sketch of this flow is included below. How do you do it, what is your workflow? Prompts, examples, workflows, anything useful is appreciated. I appreciate your help guys 👍
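A minimal sketch of that workflow (query_model is a deliberate placeholder; wire it up to whatever backends you actually use, and note the vote parsing assumes each model replies with just a number):

```python
# Sketch: generate with several models, then let the same models vote on the best answer.
def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError("hook this up to your own local servers / APIs")

def rank_outputs(models, task_prompt):
    # 1. Same prompt to every model.
    outputs = [query_model(m, task_prompt) for m in models]
    # 2. One big ranking prompt listing the candidates anonymously.
    numbered = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(outputs))
    ranking_prompt = (
        f"Task: {task_prompt}\n\nCandidate answers:\n{numbered}\n\n"
        "Reply with only the number of the best answer."
    )
    # 3. Every model votes; 4. majority vote wins (assumes a bare number comes back).
    votes = [query_model(m, ranking_prompt).strip() for m in models]
    best = max(set(votes), key=votes.count)
    return outputs[int(best) - 1]
```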
2024-01-24T15:45:51
https://www.reddit.com/r/LocalLLaMA/comments/19ek2t9/llms_outputs_ranking_how/
CharacterCheck389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ek2t9
false
null
t3_19ek2t9
/r/LocalLLaMA/comments/19ek2t9/llms_outputs_ranking_how/
false
false
self
1
null
Building an IDE with native support for Open Source models
25
Hey LocalLLaMA! I have been following the community and learning a lot about OSS models, and over the last year there have been some pretty big releases in open source models, especially around coding assistants. I tried out quite a few solutions out there, but none of them gave me a truly native experience for coding with local models. So I am building an AI-first IDE with native support for open source models. I also found that every model has its own behaviours and needs to be prompted differently, and there is the growing issue of getting the right context to the model while keeping it private first (because I don't want to send my data over to a cloud provider). With the current editor, which I am calling [Aide](https://codestory.ai/), I was able to keep the codebase indexing on the device and also provide options to run Mistral and Mixtral using Ollama. I also know how important the data (prompts/completions) is, so all the prompts/completions are stored locally in a SQLite DB and can be easily accessed; this can be further used to collect real-world usage from the editor and see how different models perform. **I also realise that not every feature will work out of the box for every LLM** (using Mistral Instruct as a chat agent is not the best experience), but I do believe that with finetuning these models can be pushed further. I will try to share the datasets and finetuned versions from the experiments when they are ready. I would love to support this community with the model benchmarks and also the models you would love to use in the IDE, so please let me know in the comments and we will add them to the ever-growing list, which right now includes: \- deepseek coder \- phind \- llama2 and others https://reddit.com/link/19ejoa3/video/x8rf5qc3peec1/player
2024-01-24T15:28:35
https://www.reddit.com/r/LocalLLaMA/comments/19ejoa3/building_an_ide_with_native_support_for_open/
ragingWater_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ejoa3
false
null
t3_19ejoa3
/r/LocalLLaMA/comments/19ejoa3/building_an_ide_with_native_support_for_open/
false
false
self
25
null
AMD 5500u and integrated GPU is fine for learning AI and LLM processing?
5
I have a mini PC, an Asus PN51. I want to learn something about AI and LLM processing. Do you think it's fine for learning, or will it be very, very slow? I also want to build a simple chatbot for around 10 users; would the PC still be okay? I don't need super fast responses; 1-2 minutes per answer would be fine for me.
2024-01-24T15:24:27
https://www.reddit.com/r/LocalLLaMA/comments/19ejkzp/amd_5500u_and_integrated_gpu_is_fine_for_learning/
bim_bim_cay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ejkzp
false
null
t3_19ejkzp
/r/LocalLLaMA/comments/19ejkzp/amd_5500u_and_integrated_gpu_is_fine_for_learning/
false
false
self
5
null
Quantizing T5 based model
3
Has anyone here ever quantized a T5 based model using the transformers library? Can you please help me? I cannot find anything on Google.
2024-01-24T15:23:45
https://www.reddit.com/r/LocalLLaMA/comments/19ejkf2/quantizing_t5_based_model/
jetaudio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ejkf2
false
null
t3_19ejkf2
/r/LocalLLaMA/comments/19ejkf2/quantizing_t5_based_model/
false
false
self
3
null
The smallest Mixtral settings
1
[removed]
2024-01-24T14:40:04
https://www.reddit.com/r/LocalLLaMA/comments/19eil9d/the_smallest_mixtral_settings/
PunchTornado
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eil9d
false
null
t3_19eil9d
/r/LocalLLaMA/comments/19eil9d/the_smallest_mixtral_settings/
false
false
self
1
null
Best model/prompt for summarization?
6
My task is to have a model condense two to four paragraphs of fiction text into a few sentences. It should not include any analysis, just a summary of the text. Can anyone recommend a model and prompting technique for this task?
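For what it's worth, a minimal, model-agnostic prompt sketch for this kind of extractive-style summary (works best when wrapped in the chat template of whatever instruct model you pick):

```python
# Sketch of a summarization prompt; `passage` is a placeholder for the text.
passage = "..."  # two to four paragraphs of the fiction text

prompt = (
    "Summarize the following passage in two or three sentences. "
    "Report only what happens in the text; do not add analysis, opinions, "
    "or information that is not in the passage.\n\n"
    f"Passage:\n{passage}\n\nSummary:"
)
```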
2024-01-24T14:35:27
https://www.reddit.com/r/LocalLLaMA/comments/19eihwk/best_modelprompt_for_summarization/
ouxjshsz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eihwk
false
null
t3_19eihwk
/r/LocalLLaMA/comments/19eihwk/best_modelprompt_for_summarization/
false
false
self
6
null
Best model/prompt for summarization?
1
My task is to have a model condense two to four paragraphs of fiction text into a few sentences. It should not include any analysis, just a summary of the text. Can anyone recommend a model and prompting technique for this task?
2024-01-24T14:33:37
https://www.reddit.com/r/LocalLLaMA/comments/19eigg5/best_modelprompt_for_stigmatization/
ouxjshsz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eigg5
false
null
t3_19eigg5
/r/LocalLLaMA/comments/19eigg5/best_modelprompt_for_stigmatization/
false
false
self
1
null
Experiences of quantizing a finetuned LLM?
8
I have tried quantizing a fine-tuned model with both GPTQ (4-bit, 8-bit) and AWQ and I am getting terrible output post-quantization in all cases. The base model is a Mistral model and the fine-tuning was done using PEFT (QLoRA) with the adapters merged back into the base. The fp16 finetuned model works pretty well, but I want to reduce the memory footprint. I have tried quantizing using both wikitext2 and the original examples used in fine-tuning, without much benefit. For GPTQ I have set the group size to 32, damp to 0.01/0.1, and tried with and without act\_order, at both 4-bit and 8-bit. The output is almost gibberish, and many times not even English (the finetune dataset is English only but has JSON structure to it). I am able to successfully finetune a Bloke's GPTQ of the Mistral model, but afaik the adapters can't be merged with an already quantized base model using *merge\_and\_unload.* And I think TGI and vLLM don't support PEFT/adapters for deployment. S-LoRA/LoRAX seem to be the only viable options for productionizing. Has anyone else observed issues like these? Alternatively, is there anything I am doing grossly wrong? This seems like a fairly common use case. How are you guys working around this?
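For comparison, here is a hedged sketch of the merge-to-disk-then-quantize path that is worth sanity-checking against (AutoGPTQ parameter names are taken from its README and may differ across versions; paths and the calibration text are placeholders):

```python
# Sketch: merge QLoRA adapters into an fp16 checkpoint, then GPTQ-quantize it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_id, adapter_dir, merged_dir = "mistralai/Mistral-7B-v0.1", "./qlora-out", "./merged-fp16"

# 1. Merge the adapters back into an fp16 checkpoint on disk.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()
merged.save_pretrained(merged_dir)
tok = AutoTokenizer.from_pretrained(base_id)
tok.save_pretrained(merged_dir)

# 2. Quantize the merged checkpoint, calibrating on in-domain (finetune-style) text.
quant_cfg = BaseQuantizeConfig(bits=4, group_size=32, desc_act=True, damp_percent=0.1)
model = AutoGPTQForCausalLM.from_pretrained(merged_dir, quant_cfg)
calib_texts = ["...a handful of examples in the same JSON-heavy style as the finetune..."]
examples = [tok(t, return_tensors="pt") for t in calib_texts]
examples = [{"input_ids": e["input_ids"], "attention_mask": e["attention_mask"]} for e in examples]
model.quantize(examples)
model.save_quantized("./merged-gptq-4bit")
```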
2024-01-24T14:33:08
https://www.reddit.com/r/LocalLLaMA/comments/19eig1z/experiences_of_quantizing_a_finetuned_llm/
Striped_Orangutan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eig1z
false
null
t3_19eig1z
/r/LocalLLaMA/comments/19eig1z/experiences_of_quantizing_a_finetuned_llm/
false
false
self
8
null
Suggest GamersNexus LLaMA Benchmarks for RX 7600 XT 16GB
163
2024-01-24T14:32:25
https://i.redd.it/f19f7rlafeec1.png
tr2727
i.redd.it
1970-01-01T00:00:00
0
{}
19eifhi
false
null
t3_19eifhi
/r/LocalLLaMA/comments/19eifhi/suggest_gamersnexus_llma_benchmarks_for_rx_7600/
false
false
https://a.thumbs.redditm…mhKMOVmOF7l4.jpg
163
{'enabled': True, 'images': [{'id': 'DAx3l7McOd99dU32FKBabixAv8MhQUPKqE7MG4e8EMo', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?width=108&crop=smart&auto=webp&s=2cbaa1f91ca17763f23d73a990f0567e8b4716cd', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?width=216&crop=smart&auto=webp&s=fbf7beb075d50a33ebd54e79a7fa3e6816e79d58', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?width=320&crop=smart&auto=webp&s=d6850c6380dbd4402f852838ee7bf408b193d642', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?width=640&crop=smart&auto=webp&s=20f7dca3a5c6b91d91cd47cbbb653307bff518ba', 'width': 640}, {'height': 1199, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?width=960&crop=smart&auto=webp&s=87676acf4aa17dd63a158948f60e75b40311f0af', 'width': 960}, {'height': 1349, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?width=1080&crop=smart&auto=webp&s=2cf4b9701466679ce4ad418f57f2cef515024c21', 'width': 1080}], 'source': {'height': 1349, 'url': 'https://preview.redd.it/f19f7rlafeec1.png?auto=webp&s=86055af02a5100ef47ab2a8f50665758bf2c6c0f', 'width': 1080}, 'variants': {}}]}
Request to GamersNexus about LLaMA Benchmarks on RX 7600 XT
1
[removed]
2024-01-24T14:25:52
https://i.redd.it/t0kjg8i4eeec1.png
alok_tr
i.redd.it
1970-01-01T00:00:00
0
{}
19eiac1
false
null
t3_19eiac1
/r/LocalLLaMA/comments/19eiac1/request_to_gamersnexus_about_llma_benchmarks_on/
false
false
https://b.thumbs.redditm…v-spnUuwB0ck.jpg
1
{'enabled': True, 'images': [{'id': 'SI3wGWjDhwMlWoZXvfaUfs_IIbhMOBw7VRAwKlbjnLA', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?width=108&crop=smart&auto=webp&s=4aa07b329954681381aabc783a216d739a0abb8b', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?width=216&crop=smart&auto=webp&s=8284e6a3097b01b0ad56bfade74d677522e245e5', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?width=320&crop=smart&auto=webp&s=91e393faa03fe5dfd22ff126fa9a676340ec47aa', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?width=640&crop=smart&auto=webp&s=0f91e01c6e555ea4b9e8d9386f444d025bf59f8a', 'width': 640}, {'height': 1199, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?width=960&crop=smart&auto=webp&s=376a18bb2311ce5faed04c3703b91b888e8c22fc', 'width': 960}, {'height': 1349, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?width=1080&crop=smart&auto=webp&s=bf1170caa64f261be53e25c746da8475d8fc54b2', 'width': 1080}], 'source': {'height': 1349, 'url': 'https://preview.redd.it/t0kjg8i4eeec1.png?auto=webp&s=8da5b3e1b015a342a9b2afa74f0d1894646fb71e', 'width': 1080}, 'variants': {}}]}
What mobile apps are folks using for ChatGPT like experiences for self-hosted LLM backends?
17
What are your favorite apps (iOS/Android) for connecting to a self-hosted LLM backend (even if they use other APIs in the background)? E.g. connecting to hosted services (VPS/cloud) or even local machines (M1 macOS via VPN or IP).
2024-01-24T14:24:29
https://www.reddit.com/r/LocalLLaMA/comments/19ei98q/what_mobile_apps_are_folks_using_for_chatgpt_like/
janniks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ei98q
false
null
t3_19ei98q
/r/LocalLLaMA/comments/19ei98q/what_mobile_apps_are_folks_using_for_chatgpt_like/
false
false
self
17
null
What apps are folks using for ChatGPT like experiences for self-hosted LLM backends?
1
What are your favorite apps (iOS/Android) for connecting to a self-hosted LLM backend (even if they use other APIs in the background)? E.g. connecting to hosted services (VPS/cloud) or even local machines (M1 macOS via VPN or IP).
2024-01-24T14:23:45
https://www.reddit.com/r/LocalLLaMA/comments/19ei8oj/what_apps_are_folks_using_for_chatgpt_like/
janniks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ei8oj
false
null
t3_19ei8oj
/r/LocalLLaMA/comments/19ei8oj/what_apps_are_folks_using_for_chatgpt_like/
false
false
self
1
null
Text generation webui issue
3
Can anyone tell me how and where to set trust_remote_code in text-generation-webui on Windows? I just downloaded it an hour ago and am not sure where to change this setting.
2024-01-24T14:16:58
https://www.reddit.com/r/LocalLLaMA/comments/19ei3ay/text_generation_webui_issue/
TheHobbyistHacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ei3ay
false
null
t3_19ei3ay
/r/LocalLLaMA/comments/19ei3ay/text_generation_webui_issue/
false
false
default
3
null
Running llama.cpp using an AMD NPU?
1
[removed]
2024-01-24T14:11:57
https://www.reddit.com/r/LocalLLaMA/comments/19ehzg4/running_llamacpp_using_an_amd_npu/
Wonderful-Eye-71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ehzg4
false
null
t3_19ehzg4
/r/LocalLLaMA/comments/19ehzg4/running_llamacpp_using_an_amd_npu/
false
false
self
1
null
Mixtral on an AMD 7800XT GPU?
4
How good/decent/bad are AMD GPUs for LLMs nowadays? I'd like to run Mixtral on Linux for home/R&D uses, but not training. Is anybody getting decent performance with this one, using GPU+CPU and a decent context? Is it even supported? How would it compare, performance-wise, with a similarly priced 4060 Ti? The rationale is that I want at least 16 GB and some gaming on the side, but I've seen the 4060 Ti performance is crap, and its VRAM bandwidth is half that of the 7800 XT, which is important for AI. The next 16 GB card from Nvidia is the 4070 Ti Super, which is also bad value, and the 4000 generation is so bad I couldn't justify buying them at their retail prices.
2024-01-24T13:42:29
https://www.reddit.com/r/LocalLLaMA/comments/19ehd7q/mixtral_on_an_amd_7800xt_gpu/
A_for_Anonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ehd7q
false
null
t3_19ehd7q
/r/LocalLLaMA/comments/19ehd7q/mixtral_on_an_amd_7800xt_gpu/
false
false
self
4
null
Issue with making an exl2 quant
2
Hey guys! Not sure where else to ask about it. I’ve recently tried to create my own exl2 quant of a model, but failed miserably. After two hours of waiting and 47 layers processed, I received the error from the attached screenshot and sadly, Google didn’t help much. I have an NVIDIA 3090 RTX with 24GB VRAM. Perhaps my command is incorrect? Here’s what I used. python exllamav2/convert.py \ -i bartowski_internlm2-chat-20b-llama \ -o quanttest \ -c wikitext-103-v1_wikitext-test.parquet \ -b 8.0 I used the tutorial below. https://towardsdatascience.com/exllamav2-the-fastest-library-to-run-llms-32aeda294d26 Would appreciate any help, thank you!
2024-01-24T13:11:24
https://i.redd.it/vjd73gdu0eec1.jpeg
Meryiel
i.redd.it
1970-01-01T00:00:00
0
{}
19egrbj
false
null
t3_19egrbj
/r/LocalLLaMA/comments/19egrbj/issue_with_making_an_exl2_quant/
false
false
https://b.thumbs.redditm…tRgKhJ5L0dIA.jpg
2
{'enabled': True, 'images': [{'id': 'yYsawnIZtQubyRmDjLgqfC0TopYQJZ4_7XsTiMAxVwI', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?width=108&crop=smart&auto=webp&s=df1f5daf3cb6574aa0ec938e3b3f70a94e84c255', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?width=216&crop=smart&auto=webp&s=3305a44fcb3d5d8a35622c6c67e0853771809b87', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?width=320&crop=smart&auto=webp&s=998671abf4cebbd2f98ed237eb94888c18f00928', 'width': 320}, {'height': 421, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?width=640&crop=smart&auto=webp&s=7a984e026c28e14f1ec1da0676f2ae2bc0d07e38', 'width': 640}, {'height': 632, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?width=960&crop=smart&auto=webp&s=3e451a155c1ab2a293db4d44fa074985a2c00616', 'width': 960}, {'height': 711, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?width=1080&crop=smart&auto=webp&s=87cdc7deed20a474f6e6a1e3358b81643cdbf334', 'width': 1080}], 'source': {'height': 820, 'url': 'https://preview.redd.it/vjd73gdu0eec1.jpeg?auto=webp&s=ddce09aade13d900b15fa6a55304022825ade911', 'width': 1244}, 'variants': {}}]}
What's the best machine I can get for $10k?
24
I'm looking to buy a machine I can use to explore LLM development. My short-list of use cases is: 1) custom model training, 2) running local inference, 3) testing, analyzing, and comparing various models for efficacy/efficiency/performance. My budget is $10k. Ideally, I want something turn-key (not looking to spend too much time building it). Right now, considering a Lambda Vector, but, change my mind?
2024-01-24T13:05:59
https://www.reddit.com/r/LocalLLaMA/comments/19egnnz/whats_the_best_machine_i_can_get_for_10k/
codenamev
self.LocalLLaMA
2024-01-24T13:33:50
0
{}
19egnnz
false
null
t3_19egnnz
/r/LocalLLaMA/comments/19egnnz/whats_the_best_machine_i_can_get_for_10k/
false
false
self
24
null
vLLM: how do you use different instruct templates?
6
I'm using [offline inference w/ prefix](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_with_prefix.py) and the speed has been great. However I'm having a bit of trouble getting the models to follow instructions properly, especially non-instruct models. I saw there are [template files in the examples folder](https://github.com/vllm-project/vllm/blob/main/examples/template_alpaca.jinja); is it possible to use those for offline inference? And if it is, how might I incorporate them into my code? Thank you 🙏 and if you have a guide/resource for using vLLM inference that would be awesome, because their documentation is honestly non-existent.
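One hedged approach (not from vLLM's docs, just a common pattern): format the messages with the model's own chat template via transformers, then hand the resulting plain string to vLLM's offline `generate`. The model name here is a placeholder.

```python
# Sketch: apply the chat template first, then run offline inference with vLLM.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tok = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Explain KV caching in one paragraph."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```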
2024-01-24T12:52:12
https://www.reddit.com/r/LocalLLaMA/comments/19ege2j/vllm_how_do_you_use_different_instruct_templates/
GeeBrain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ege2j
false
null
t3_19ege2j
/r/LocalLLaMA/comments/19ege2j/vllm_how_do_you_use_different_instruct_templates/
false
false
self
6
{'enabled': False, 'images': [{'id': 'UZXIbK2MTbV0j3DZdMUyjIBAljmsu_Lcq4efJj7VJAs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?width=108&crop=smart&auto=webp&s=0296db0201bca554bab671f27740bef8f0688346', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?width=216&crop=smart&auto=webp&s=b1923f2a5261b7af9daef2afbc0e7fca431880ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?width=320&crop=smart&auto=webp&s=65a3f72ca218928c68882f8f680371576a619288', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?width=640&crop=smart&auto=webp&s=147afd4134e0ab55ae5bb297b20221c22b79b2f2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?width=960&crop=smart&auto=webp&s=39de61d6316b28ff80567c7e9fa0178565baf968', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?width=1080&crop=smart&auto=webp&s=2c60163f241d341f0fe66f4a6c848274d13c1e66', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FUA_D7jdr6UdHj0KoH546rwVK6KbRwjyh-mAiqtZf9Q.jpg?auto=webp&s=ad87125643ecfb89ab8033e3801734a4b15569c5', 'width': 1200}, 'variants': {}}]}
I cannot solve the problem "Is X related to Y:"
1
[removed]
2024-01-24T12:41:42
https://www.reddit.com/r/LocalLLaMA/comments/19eg7ch/i_cannot_solve_the_problem_is_x_related_to_y/
freshlyLinux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eg7ch
false
null
t3_19eg7ch
/r/LocalLLaMA/comments/19eg7ch/i_cannot_solve_the_problem_is_x_related_to_y/
false
false
self
1
null
Fully automated AI for Web Dev - Step I [Analyzing Challenges]
2
I have an idea for a project: an AI that fully takes charge of a software project in two main sections, an ADDITIVE SECTION and an EDITING SECTION. I spent a lot of time on research (fine-tuning, collecting datasets, ...), and now I'm making that research public, since my intention is to make it **Open Source**. As Step I, I want to gather the problems that exist in project development. I started with the notes below and want to collect more from your comments; in the next post, **Step II**, I will share my notes and try to include all the related comments on unsolved issues. These steps could be the beginning of the project, or simply a wide discussion where we share our thoughts. 1. Weaknesses in visualization: AI struggles with identifying webpage content the way a human does, with experimenting and testing visual interactions as a human would, and with understanding descriptions of visual elements that a customer/developer may want. 2. Adherence to a project's graphics theme: AI faces challenges in sticking to a project's graphics theme because of those visual weaknesses, and custom models are required in the image-to-text field. 3. Possible future state-of-the-art solutions: a combination of text-to-image, text-to-text, and maybe image-to-text, fine-tuned to work together. Developing such technology could take a decade, would likely require interest from a tech giant, and would reach the **Open Source** world only **much later**. 4. Understanding different project structures: AI must understand each project pattern differently (Python is different from PHP, which is different from Java, etc.). Solid understanding of a specific framework does not equal awareness of a developer's specific structure, and custom project structures (like a unique Python layout) might not be clear to the AI, or may require more fine-tuning. 5. Documentation challenges: a single project carries multiple layers of documentation, for example in a Laravel PHP project: framework docs (specific to Laravel), project docs (abstracting the project: models, controllers, views, etc.), and a user manual (crucial for understanding real-world usage and user interactions). Example: in a note-management system, understanding the user's practice of using successive notes for different purposes. Implications: changes in the project can impact all documentation levels, and some tasks may only be feasible with human intervention. To everyone eager to contribute to the discussion, please start your comment with `#NUMBER`, where `NUMBER` corresponds to the section your comment relates to. Example: `#4 I think that the solution to ...` This will help me structure the comments for the next post (**Step II**).
2024-01-24T12:25:37
https://www.reddit.com/r/LocalLLaMA/comments/19efxbw/fully_automated_ai_for_web_dev_step_i_analyzing/
khalil_ben_zineb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19efxbw
false
null
t3_19efxbw
/r/LocalLLaMA/comments/19efxbw/fully_automated_ai_for_web_dev_step_i_analyzing/
false
false
self
2
null
Reviews about Noromaid-13B-0.4-DPO? What is Chatml?
6
I want to try it, but I've read on their Discord and in the discussions on their Hugging Face page that it's having problems. Have any of you tried it? I want to know what you think of it, and also whether you'd be kind enough to explain to a newbie what ChatML is. I know it's supposedly a kind of queue system, but I can't figure out how it's supposed to work with these models or how it works with the prompts.
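For what it's worth, ChatML is not a queue system; it's a prompt format in which each conversation turn is wrapped in `<|im_start|>{role}` and `<|im_end|>` markers, and the prompt ends with an opened assistant turn for the model to complete. A minimal sketch of building such a prompt in Python, where the system and user strings are just placeholders:

```python
# Minimal sketch of the ChatML prompt format used by many recent fine-tunes.
# The <|im_start|>/<|im_end|> markers are what defines ChatML; the text is a placeholder.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # left open so the model writes the reply
    )

print(chatml_prompt("You are a helpful roleplay assistant.", "Introduce yourself."))
```

Frontends such as SillyTavern or text-generation-webui usually apply this template for you when you select a ChatML instruct preset, so you rarely need to build the string by hand.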
2024-01-24T12:16:07
https://www.reddit.com/r/LocalLLaMA/comments/19efrg2/reviews_about_noromaid13b04dpo_what_is_chatml/
StillOk1589
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19efrg2
false
null
t3_19efrg2
/r/LocalLLaMA/comments/19efrg2/reviews_about_noromaid13b04dpo_what_is_chatml/
false
false
self
6
null
Design LLM chat templates in your browser with the Jinja playground
21
2024-01-24T12:05:23
https://v.redd.it/r5v6o3y3ndec1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
19efkzw
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/r5v6o3y3ndec1/DASHPlaylist.mpd?a=1708689941%2CMmI4OTc4M2UxNTliY2VjYWVmMzY3ZjllOGNiODg4MGZhNGZkNWIzYjkxOWRkM2EzYjg1ZWU4MmIyNzhkNThjNg%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/r5v6o3y3ndec1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 680, 'hls_url': 'https://v.redd.it/r5v6o3y3ndec1/HLSPlaylist.m3u8?a=1708689941%2CN2M3MGQ4YTBjYjlkNTYwYzI3ZDcyYWQ3ZGMwMWYwMDAxOTI1ZmNmOTM2YWJjMzliMmQzZDk2NjcyNmUwNDFjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r5v6o3y3ndec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_19efkzw
/r/LocalLLaMA/comments/19efkzw/design_llm_chat_templates_in_your_browser_with/
false
false
https://external-preview…1ef50800a0ade584
21
{'enabled': False, 'images': [{'id': 'ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?width=108&crop=smart&format=pjpg&auto=webp&s=33dc6006f79b801230869b2aa8e39354e7f4a561', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?width=216&crop=smart&format=pjpg&auto=webp&s=9fca06df5ede7e7c8ea4f3c82d301f016f5c6799', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?width=320&crop=smart&format=pjpg&auto=webp&s=33a91288277cb174d318c8a7552efa6a47d02810', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?width=640&crop=smart&format=pjpg&auto=webp&s=c00d29b214411d7e2a81260481872ed396c85f07', 'width': 640}, {'height': 509, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?width=960&crop=smart&format=pjpg&auto=webp&s=4d7d350049b578f704354e12c82e7cd85cc0e648', 'width': 960}, {'height': 573, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a61e64c014423744502d8dc27f72b4d06565afff', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZGE0aGd0NTJwZGVjMXbrHzEIH9boQrXL4lLp3wJjowSpgj_o8Na0Idw5Iy9F.png?format=pjpg&auto=webp&s=61d3f224d8139917dba79a0048a07520fae0d8bc', 'width': 1356}, 'variants': {}}]}
Why is Faraday so much faster than LM Studio?
1
[removed]
2024-01-24T11:13:40
https://www.reddit.com/r/LocalLLaMA/comments/19eeqql/why_is_faraday_so_much_faster_than_lm_studio/
AlanCarrOnline
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eeqql
false
null
t3_19eeqql
/r/LocalLLaMA/comments/19eeqql/why_is_faraday_so_much_faster_than_lm_studio/
false
false
self
1
null
Gguf endpoint
2
Hello, has anyone created an AWS SageMaker endpoint for a GGUF model? Any tutorial or experience would be greatly appreciated.
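I can't speak to SageMaker-specific packaging, but whatever custom container you build will ultimately wrap a llama.cpp-based loader for the GGUF file. A minimal sketch of that inner piece using llama-cpp-python, with the model path and parameters as placeholder assumptions; the SageMaker inference handler and endpoint configuration around it are not shown:

```python
# Minimal sketch: load a GGUF model with llama-cpp-python and run one completion.
# This is the piece a custom SageMaker container would expose behind its
# /invocations handler; the handler itself is not shown here.
from llama_cpp import Llama

llm = Llama(
    model_path="/opt/ml/model/model.gguf",  # hypothetical path inside the container
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers when a GPU is available
)

result = llm("Q: What is a GGUF file?\nA:", max_tokens=128, stop=["Q:"])
print(result["choices"][0]["text"])
```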
2024-01-24T10:50:32
https://www.reddit.com/r/LocalLLaMA/comments/19eedui/gguf_endpoint/
Optimalutopic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19eedui
false
null
t3_19eedui
/r/LocalLLaMA/comments/19eedui/gguf_endpoint/
false
false
self
2
null
Recommendation regarding which model to finetune for text classification?
1
Hey guys, I am looking to fine-tune a text/user-query classification model. Can you recommend some models I should consider, given the requirements below? A) The texts/queries in my dataset have heavily overlapping semantics. B) Number of classes: 5. I have tried fine-tuning DistilBERT but the results are quite bad. Let me know what you guys think!
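Not a recommendation tested on this exact dataset, but a common next step when DistilBERT underperforms on semantically overlapping classes is a larger encoder such as DeBERTa-v3. A minimal fine-tuning sketch with Hugging Face Transformers, where the CSV paths, column names, and hyperparameters are placeholder assumptions:

```python
# Minimal sketch: fine-tune an encoder for 5-class query classification.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "microsoft/deberta-v3-base"  # hypothetical alternative to DistilBERT
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=5)

# Assumes CSV files with "text" and integer "label" (0-4) columns.
ds = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="query-classifier",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
    evaluation_strategy="epoch",
)
Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
).train()
```

If the classes overlap heavily, it may also be worth inspecting the confusion matrix and label quality before swapping models, since label noise often hurts more than the choice of encoder.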
2024-01-24T10:37:23
https://www.reddit.com/r/LocalLLaMA/comments/19ee6ou/recommendation_regarding_which_model_to_finetune/
Aromatic-Lead-6814
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19ee6ou
false
null
t3_19ee6ou
/r/LocalLLaMA/comments/19ee6ou/recommendation_regarding_which_model_to_finetune/
false
false
self
1
null
New alignment method specifically for role-play by Alibaba
34
2024-01-24T10:36:53
https://huggingface.co/papers/2401.12474
clefourrier
huggingface.co
1970-01-01T00:00:00
0
{}
19ee6gh
false
null
t3_19ee6gh
/r/LocalLLaMA/comments/19ee6gh/new_alignment_method_specifically_for_roleplay_by/
false
false
https://b.thumbs.redditm…L38vcPfPMPxI.jpg
34
{'enabled': False, 'images': [{'id': 'f0KTtMcjQEx890oTxiNYUWNs4nlWn0cEqDhp2cFXOBE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?width=108&crop=smart&auto=webp&s=29ef68b5c166a3b4392de269dd8ba2c183cfecc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?width=216&crop=smart&auto=webp&s=dd8ab80bd77a240c47068def7962c7f7a8d8f69b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?width=320&crop=smart&auto=webp&s=f5567bcf3836bcd3305afefb233d1315a70b4973', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?width=640&crop=smart&auto=webp&s=37b45ef524f54547280353f88314f8d8b650caa2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?width=960&crop=smart&auto=webp&s=f85b8cf915e016bd5af1b461458dc70fe86352bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?width=1080&crop=smart&auto=webp&s=9c75c4314747668dad5b7506abb00c4b34758059', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZgEOGsg-zkBNjzKke7VNQGZwGp0vVw-KFcJanj8rVM0.jpg?auto=webp&s=58697cadb3a49bc64fa2bc24d684a171f9e5b0d7', 'width': 1200}, 'variants': {}}]}