| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Good model for geopolitical analysis writing? | 1 | Hi there
Just getting into local AI. Set up LM Studio on my laptop (Gigabyte G5) and got a bunch of models running great (7B and 13B), and even set up the Obsidian plugin to use them right inside the app. All good. Quite happy to explore further, and this community has been fantastic. So thank you all.
Now, my question: does anyone know a good model (around 7-13B) for history/geopolitics, preferably with great statistical knowledge about a bunch of parameters like demographics, education, economics, warfare, etc.? Or is anyone aware of anyone working on such an LLM or training dataset?
| 2024-01-28T10:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1aczne7/good_model_for_geopolicial_analysis_writing/ | Fishtotem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aczne7 | false | null | t3_1aczne7 | /r/LocalLLaMA/comments/1aczne7/good_model_for_geopolicial_analysis_writing/ | false | false | self | 1 | null |
LM Studio v0.2.11 Update Significantly Speeds Up Inference Running Server | 20 | I've been using LM Studio to run a local server version of Mixtral-8x7b to generate training data: run up a 500-token context chunk, use a prompt to generate output (in this case a story writing prompt that relates to the context), and then output the generated prompt to a df. Last week it took about 70 seconds per iteration, this week about 15 seconds. The patch notes kind of suck so I'm not sure exactly what changed, but the difference is amazing if you are running a server on a Mac where the model fits on the GPU. | 2024-01-28T10:01:56 | https://www.reddit.com/r/LocalLLaMA/comments/1aczecv/lm_studio_v0211_update_significantly_speeds_up/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aczecv | false | null | t3_1aczecv | /r/LocalLLaMA/comments/1aczecv/lm_studio_v0211_update_significantly_speeds_up/ | false | false | self | 20 | null |
I have a Radeon 6750XT and a 5600x CPU: which models can I run? Is it still a pain to run models with AMD cards on Windows 10? | 1 | As the title says. I remember that it used to be impossible to run LLMs on Windows while using AMD, but I heard that it has somehow changed. Is it true that now it's possible to run models with an AMD card using Windows? If so, which models would I be able to use?
If it's not possible, can I run it by using Linux on a virtual machine or is dual booting my only option? | 2024-01-28T09:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/1acyxid/i_have_a_radeon_6750xt_and_a_5600x_cpuwhich/ | wolfbetter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acyxid | false | null | t3_1acyxid | /r/LocalLLaMA/comments/1acyxid/i_have_a_radeon_6750xt_and_a_5600x_cpuwhich/ | false | false | self | 1 | null |
recommendations for PC/components? | 1 | I'm looking to get a new computer because I need something powerful for various projects, especially for running local models. My current one can manage everything but it's super slow - even a 7B model takes about 15 minutes to do one task. I'd love to get suggestions for a setup that costs between $1000-$2000, something that can speed things up significantly. Any advice would be great! | 2024-01-28T08:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/1acycjp/recommendations_for_pccomponents/ | JamesAibr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acycjp | false | null | t3_1acycjp | /r/LocalLLaMA/comments/1acycjp/recommendations_for_pccomponents/ | false | false | self | 1 | null |
Introducing Vaartaalaap: A Chatbot UI for Local LLM Servers | 12 | I'm excited to share a project I've been working on: **Vaartaalaap**, a chatbot application that connects with local Large Language Model (LLM) servers. This is especially useful for those who have set up their own LLM server and are looking for an easy-to-use interface to interact with it.
## Features
* **Dynamic Base URL**: Easily connect to your server by configuring the base URL.
* **Customizable Prompts**: Tailor the chatbot's responses by modifying the default system prompt. It also has a list of preconfigured prompts.
* **Mobile Friendly**: Use Vaartaalaap on your mobile device via ngrok
or Tailscale for on-the-go access to your local LLM server.
GitHub: [https://github.com/paragjnath/Vaartaalaap](https://github.com/paragjnath/Vaartaalaap)
I am using this on my mobile to chat with models served via LM Studio.
Let me know your thoughts.
​ | 2024-01-28T08:41:32 | https://www.reddit.com/r/LocalLLaMA/comments/1acy92d/introducing_vaartaalaap_a_chatbot_ui_for_local/ | Reasonable_Ad9033 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acy92d | false | null | t3_1acy92d | /r/LocalLLaMA/comments/1acy92d/introducing_vaartaalaap_a_chatbot_ui_for_local/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'KsYArBpd2dFUYQLND8_DxYVu_AH88zPk-ir_SSsv1Hg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?width=108&crop=smart&auto=webp&s=49c647057f703346d4971eb3c5c5848be443d6c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?width=216&crop=smart&auto=webp&s=b59b4002665c4f6f163a47b12616b441413a82cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?width=320&crop=smart&auto=webp&s=db7420a279106235e241513532b13482f24c91b3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?width=640&crop=smart&auto=webp&s=73b5b19355350d5b29eccf75a6bf452e88b9237a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?width=960&crop=smart&auto=webp&s=65adfc2f93b3358e3d3593f2af9f4d0cb3ca4ffa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?width=1080&crop=smart&auto=webp&s=94b96658f256973305f3d9fee786234a8299c478', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s5mHJ4NNC37HcLtFkKNYCvXPJBUkToCmBNZ1iQLl2Y8.jpg?auto=webp&s=e197b265f4e70cc9681d4f75ab372faf396a63b5', 'width': 1200}, 'variants': {}}]} |
Introducing Vaartaalaap: A Chatbot UI for Local LLM Servers | 1 | 2024-01-28T08:32:32 | Reasonable_Ad9033 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1acy49z | false | null | t3_1acy49z | /r/LocalLLaMA/comments/1acy49z/introducing_vaartaalaap_a_chatbot_ui_for_local/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1pKJZfAVJK-8byVZl_qjeZ7Gr_IprC8rWNdSe-6Z7eM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jfhhb3lm65fc1.jpeg?width=108&crop=smart&auto=webp&s=d7cfe57a3024746a7bc808b6ff213af7709de625', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jfhhb3lm65fc1.jpeg?width=216&crop=smart&auto=webp&s=35c49c8aa69ab7bb5b85ccf50724b6547055225b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jfhhb3lm65fc1.jpeg?width=320&crop=smart&auto=webp&s=c8e328a17f6ae3bb0d70d5201e800f1517c574ae', 'width': 320}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/jfhhb3lm65fc1.jpeg?auto=webp&s=42fd5717d73aeb948c2d2ff991001df5d9e0b3c1', 'width': 540}, 'variants': {}}]} | |||
Best Practices for Semantic Search on 200k vectors (30GB) Worth of Embeddings? | 2 | Hi, I have converted some domain-specific name vectors into embeddings, with a dataset size of 200k words. All the embeddings were generated using OpenAI's text-embedding-3-large model (3072 dimensions per embedding). Now I am planning to implement semantic similarity search. Given a domain keyword, I want to find the top 5 most similar matches. After embedding all 280k words, the size of the JSON file containing the embeddings is around 30GB.
I am new to this domain and evaluating the best options.
1. Should I use a cloud vector database like Pinecone or Typesense, or host locally on DigitalOcean?
2. If I go with a cloud option like Typesense, what configuration (RAM, etc.) would I need for 280k embeddings (30GB in size)? And how much would it likely cost?
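For scale: 280k vectors × 3,072 dimensions × 4 bytes is only about 3.4 GB as a float32 numpy array (the 30GB JSON is mostly text overhead), so a brute-force in-memory cosine search is a reasonable baseline before paying for a hosted vector database. Below is a minimal hedged sketch; the file names are placeholders, not anything from this post.

import numpy as np

# Placeholder files: a (N, 3072) float32 matrix and the N names in the same order
embeddings = np.load("embeddings.npy")
names = list(np.load("names.npy", allow_pickle=True))

# Normalize once so cosine similarity becomes a plain dot product
embeddings = embeddings / np.clip(np.linalg.norm(embeddings, axis=1, keepdims=True), 1e-12, None)

def top5(query_vec):
    q = query_vec / np.linalg.norm(query_vec)
    scores = embeddings @ q                 # (N,) cosine similarities
    idx = np.argpartition(-scores, 5)[:5]   # unordered top 5
    idx = idx[np.argsort(-scores[idx])]     # sort those 5 by score
    return [(names[i], float(scores[i])) for i in idx]

If a query like this is fast enough on CPU (typically well under a second), a managed vector database may be unnecessary; if not, the same vectors can be loaded into FAISS or a hosted index.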
I have been confused for the past few days and unable to find useful resources. Any help or advice you could provide would be greatly appreciated. | 2024-01-28T08:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1acxxks/best_practices_for_semantic_search_on_200k/ | stoicbats_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acxxks | false | null | t3_1acxxks | /r/LocalLLaMA/comments/1acxxks/best_practices_for_semantic_search_on_200k/ | false | false | self | 2 | null |
What's the best 1-4B LLM for a phone? | 19 | I'm gonna buy a Pixel 7 and I wanna know what are the best small LLMs that I can use with other apps? | 2024-01-28T06:56:15 | https://www.reddit.com/r/LocalLLaMA/comments/1acwoan/whats_the_best_14b_llm_for_a_phone/ | Whydoiexist2983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acwoan | false | null | t3_1acwoan | /r/LocalLLaMA/comments/1acwoan/whats_the_best_14b_llm_for_a_phone/ | false | false | self | 19 | null |
Training Local LLMs on one's own chat logs. | 7 | Say I export conversations from my GPT using their tool. This data would be full of preferential output examples, and potentially relevant information for the user. What would the workflow look like to parse all this into a locally run model? Would it be worth it? How is a LoRA useful here? | 2024-01-28T06:45:19 | https://www.reddit.com/r/LocalLLaMA/comments/1acwi47/training_local_llms_on_ones_own_chat_logs/ | cm8ty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acwi47 | false | null | t3_1acwi47 | /r/LocalLLaMA/comments/1acwi47/training_local_llms_on_ones_own_chat_logs/ | false | false | self | 7 | null |
LoRA Fine-tuning when model is too large for a single GPU | 2 | Hello, I plan to fine-tune Yi-34B with LoRA for text classification tasks on 8 x A30 24GB VRAM. I set device_map="auto" when loading the model, but the training speed is extremely slow (only one GPU is working and the other GPUs are idle). Is there any way to solve this? | 2024-01-28T06:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/1acwfdr/lora_finetuning_when_model_is_too_large_for_a/ | Tight_Win7479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acwfdr | false | null | t3_1acwfdr | /r/LocalLLaMA/comments/1acwfdr/lora_finetuning_when_model_is_too_large_for_a/ | false | false | self | 2 | null |
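The symptom in the Yi-34B question above (one GPU busy, the others idle) is what device_map="auto" gives you: naive layer-wise pipeline parallelism with no overlap. Two common workarounds are sharding the model with DeepSpeed ZeRO-3 or FSDP, or quantizing the base model to 4-bit so that a full copy fits on each 24GB A30 and ordinary data parallelism keeps all eight GPUs busy. Below is a hedged QLoRA sketch of the second option; the model name and hyperparameters are illustrative, whether a 34B model in 4-bit actually fits a single A30 depends on sequence length and batch size, and this is not presented as the poster's setup.

import os
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

local_rank = int(os.environ.get("LOCAL_RANK", 0))

# 4-bit NF4 quantization so each process holds a full copy of the base model
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B",  # illustrative model id
    quantization_config=bnb,
    device_map={"": local_rank},  # one GPU per process, no pipeline splitting
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections only
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# Hand `model` to a Trainer/SFTTrainer and launch with:
#   torchrun --nproc_per_node 8 train.py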
Mixtal8x7B Instruct with Tools on Google Colab | 9 | [https://github.com/willspag/Mixtral8x7B-AI-Chat-Colab](https://github.com/willspag/Mixtral8x7B-AI-Chat-Colab) | 2024-01-28T06:09:26 | https://www.reddit.com/r/LocalLLaMA/comments/1acvx66/mixtal8x7b_instruct_with_tools_on_google_colab/ | willspag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acvx66 | false | null | t3_1acvx66 | /r/LocalLLaMA/comments/1acvx66/mixtal8x7b_instruct_with_tools_on_google_colab/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'PDr-59_KRWv7tdqzDn5VKcDswcyg0mZu5LdG2ZoXBNE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?width=108&crop=smart&auto=webp&s=8ff86e632da7bd67066f6ab07da323fcde299c17', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?width=216&crop=smart&auto=webp&s=f0617ed00940650d5befe8ec3152998713b11db5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?width=320&crop=smart&auto=webp&s=0251de0f4ae8d4d44ff4cbfbdc0fa5f68400aa4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?width=640&crop=smart&auto=webp&s=c3e3fe05a9ff4c49901483fe8171a25d339b2dd3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?width=960&crop=smart&auto=webp&s=fabd3e81b507db8da96fb05cb146e84efd391521', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?width=1080&crop=smart&auto=webp&s=90501046f882f174d96d1aa36610fd96e2dc8da8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gqvRGQrtnyOEZjHcZULQdvh_4It7Xi3Sxq0xQcpzEDQ.jpg?auto=webp&s=816740aee3633b4629adf57711eb7240b0e7bcb5', 'width': 1200}, 'variants': {}}]} |
Is there a Mistral 7B OVA file for running in a VM in VirtualBox or Proxmox? | 1 | It would be great to just import one file and use it! | 2024-01-28T06:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/1acvt8l/is_there_a_mistal_7b_ova_file_for_running_in_a_vm/ | transdimensionalmeme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acvt8l | false | null | t3_1acvt8l | /r/LocalLLaMA/comments/1acvt8l/is_there_a_mistal_7b_ova_file_for_running_in_a_vm/ | false | false | self | 1 | null |
Initial source code understanding of ggml (llama.cpp) | 41 | I have taken quite a few machine learning courses and have done a few projects already. I think I know the math formulas involved in transformers and GPT models. However, I have always wondered how they work in reality. The best way for me is to read and understand the source code implementing these models.
I am a C/C++ programmer mostly. I am more comfortable to read C/C++ programs. So, recently I started to read, run, and debug ggml's gpt-2 inference example since ggml is entirely written in C and can run many transformer models on a laptop: [https://github.com/ggerganov/ggml/tree/master/examples/gpt-2](https://github.com/ggerganov/ggml/tree/master/examples/gpt-2) . The famous llama.cpp is closely connected to this library. My experiment environment is a MacBook Pro laptop+ Visual Studio Code + cmake+ CodeLLDB (gdb does not work with my M2 chip), and GPT-2 117 M model.
Here is what I have learned so far:
The high-level main function has the following structure [https://github.com/ggerganov/ggml/blob/master/examples/gpt-2/main-backend.cpp](https://github.com/ggerganov/ggml/blob/master/examples/gpt-2/main-backend.cpp)
* load the model: ggml specific format using quantization.
* create a compute graph from the loaded model. I will explain this graph later.
* tokenize the prompt
* use a loop to feed the prompt into the model, generating a new token each iteration
* Inside the loop, the prompt is fed into the model's compute graph
* when the compute graph is walked through entirely, the last node stores the results to help choose the next new token
* generate a new token using the top-K and top-P sampling algorithm
* update the prompt to include the new token, the prompt will be used in the next iteration
The core computation is done using the compute graph.
* all computations involved in a neural network/model's inference can be modeled by using some input vector/matrix to compute a resulting vector/matrix.
* If we focus on each vector and matrix, we can model the computing as forward walking/updating a directed graph: each node of the graph is a tensor, representing a vector or matrix
* Each node/tensor stores its value and pointers to relevant input nodes/tensors and operations. The result is written back to the current tensor.
* The inference now becomes the walk of the graph from the beginning to the end, following the edges from one tensor to another, updating each tensor's value based on the inputs and operations.
ggml provides quite some tools to dump or visualize the compute graph, which helps debug the inference process. [https://netron.app/](https://netron.app/) also can visualize common model files hosted on huggingface. I tried to upload huggingface GPT-2 model to netron. It is fascinating to view the compute graph of a transformer model.
ggml has many other advanced features including running computation on GPUs, using multi-threaded programming, and so on.
Even for a small model like GPT-2 117M, the compute graph is quite large (188 leaf nodes + 487 non-leaf nodes). I will need more time to go through the graph to get a deeper understanding of how all the math formulas of transformers are implemented in a programming language.
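To make the graph idea concrete, here is a tiny, purely conceptual Python sketch of the same structure; it is an analogy, not ggml's actual C API. Each node remembers its operation and its input nodes, and "inference" is a single forward walk that evaluates nodes in dependency order and stores the result on each node.

import numpy as np

class Node:
    def __init__(self, op=None, inputs=(), value=None):
        self.op = op          # None for leaf tensors (weights, input tokens)
        self.inputs = inputs  # upstream Node objects
        self.value = value    # numpy array, filled in during the walk

def mul_mat(a, b): return Node("mul_mat", (a, b))
def add(a, b):     return Node("add", (a, b))

OPS = {"mul_mat": lambda x, y: x @ y, "add": lambda x, y: x + y}

def compute(node):
    # Post-order walk: compute the inputs first, then this node
    if node.value is None:
        args = [compute(inp) for inp in node.inputs]
        node.value = OPS[node.op](*args)
    return node.value

# A toy "layer": y = W @ x + b
x = Node(value=np.random.rand(4))
W = Node(value=np.random.rand(4, 4))
b = Node(value=np.random.rand(4))
y = add(mul_mat(W, x), b)
print(compute(y))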
I have tremendous respect for ggml/llama.cpp's author: **Georgi Gerganov**. What a genius to pull off some projects like this! | 2024-01-28T05:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/1acvimp/initial_source_code_understanding_of_ggml_llamacpp/ | Pitiful-You-8410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acvimp | false | null | t3_1acvimp | /r/LocalLLaMA/comments/1acvimp/initial_source_code_understanding_of_ggml_llamacpp/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'R4FlbjN2L22Lbm2XJGxiHbEmBoZ13M8R8mWmNTYKcTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?width=108&crop=smart&auto=webp&s=8f10e415851f64b7a42fde652bb6026171a1d700', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?width=216&crop=smart&auto=webp&s=b6df37b93137f5c58b4a2e0097c08ff47710993b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?width=320&crop=smart&auto=webp&s=2009672bce340a42267b783121df518fd6ec6447', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?width=640&crop=smart&auto=webp&s=ef7160fdb1042a3455a5e309dff9a8337c220ee3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?width=960&crop=smart&auto=webp&s=7cd0abf70b613844f42c0d6633803bbd0cf5b041', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?width=1080&crop=smart&auto=webp&s=e869abce866dafb89fbe7be8455867187c8b9373', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IecFlCCvA9_d57R82wH2BVMcQaxRTUIlpoXU2PYU1aw.jpg?auto=webp&s=cd83749522fb2f94ec16a92974ff816c52a88426', 'width': 1200}, 'variants': {}}]} |
Confusion regarding what kind of PC to build | 1 | Hi all, and thanks in advance for your time.
I am obsessed with the idea of running uncensored LLMs locally. I recently bought a raspberry pi5 8GB and a coral edge TPU to try and run mixtral, but to my extreme disappointment, raspberry pi 5 OS doesn’t yet support the right version of python for the edge TPU.
So I want to build or get a new computer to run LLMs. I want 64GB DDR5 ram, and a fast SSD for swap space. But I am confused as to whether I need a good CPU *AND* a good GPU? Or can I get away with having a nice CPU and fast ram, but integrated graphics? I am interested in running these LLMs via Ollama which seems to run well without dedicated GPU.
I was looking at this PC. Would this be capable of running something like nous Hermes 13b (at GOOD speeds)? The price is right for $350 but it makes me think it is made from junk parts…without going into too much detail it’s important to me that this machine is not fragile https://www.amazon.com/dp/B0CPY5TVKM?ref=emc_b_5_mob_t
Thanks so much in advance, I look forward to any help you all can offer, no matter how small! | 2024-01-28T05:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1acvde9/confusion_regarding_what_kind_of_pc_to_build/ | reenign3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acvde9 | false | null | t3_1acvde9 | /r/LocalLLaMA/comments/1acvde9/confusion_regarding_what_kind_of_pc_to_build/ | false | false | self | 1 | null |
Is Vicuna 13B still the best all-around 13B model? | 73 | It's been a while since Vicuna-13B came out. Has there been something better that I haven't heard of? | 2024-01-28T05:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1acutft/is_vicuna_13b_still_the_best_allaround_13b_model/ | RenoHadreas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acutft | false | null | t3_1acutft | /r/LocalLLaMA/comments/1acutft/is_vicuna_13b_still_the_best_allaround_13b_model/ | false | false | self | 73 | null |
Why can OpenAI earn money from an online embedding model? | 1 | If someone uses this online model to get word/embedding pairs for future use, or even shares them with others, it sounds easy. | 2024-01-28T04:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/1acuhm7/why_open_ai_can_earn_money_from_online_embedding/ | ImprovementEqual3931 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acuhm7 | false | null | t3_1acuhm7 | /r/LocalLLaMA/comments/1acuhm7/why_open_ai_can_earn_money_from_online_embedding/ | false | false | self | 1 | null |
How to get perfect label outputs via prompting: experiment w/ 10k observations | 42 | Using the completion formats, I was curious to know how much a single space could impact output for a smaller model like Mistral 7b. It's supposed to be better at following instructions but that's not necessarily the case. Below is a write up of my experiment to get the perfect output of only "Yes" or "No" labels. (Accuracy is a different story). [This is really useful if you are using an LLM to label your data via prompts.](https://arxiv.org/abs/2205.02318)
**Setup:**
Hardware: A100 80G cloud instance
Engine: vLLM
Model: mistralai/Mistral-7B-Instruct-v0.2 (full model)
Performance Notes: ~25-30 tk/s
**TL;DR Result: SPACES MATTER A LOT!!!**
When prompting via the complete method, be sure to remove the final space before entering the prompt. For example:
[Is this a positive or negative statement, "I love sunny days!"
Label: ]
vs
[Is this a positive or negative statement, "I love sunny days!"
Label:]
That extra space matters a lot. If you want to get the best results, **I found that including the label options and adding "only" yielded PERFECT responses:**
[Is this a positive or negative statement, "I love sunny days!"
Label (Positive/Negative Only):]
**Experiment:**
Given a set of text, I wanted Mistral to return ONLY "Yes" or "No." **The only way I could ensure this was to set max tokens to 1**. Mistral tends to be pretty wordy, but with this setup, we can test which wording or format would yield the correct label *first*. This is incredibly helpful because when doing few-shot + chain-of-thought prompting, you are giving the model a lot of context, and it can get confused with the desired output.
I used the Alpaca format where {instruction} had the task and few-shot examples, and {input} had the text I wanted it to analyze. I understand that usually few-shot examples are placed in {input}, but I was doing chain-of-thought few-shot to teach "why" I want labels a certain way (labeling quality improved significantly this way).
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
So the only thing I was messing around with was the {input} which had following format:
Text: "{observation}"
Label:
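For reference, the generation side of a run like this can be reproduced with a few lines of vLLM. This is a hedged sketch: the instruction text, few-shot examples, and variable names are placeholders rather than the exact script used here, and max_tokens=1 is what forces the single-token label. Counting the returned strings then produces tables like the ones below.

from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.0, max_tokens=1)  # one-token label only

def build_prompt(observation):
    instruction = "Decide whether the text below matches the target category."  # placeholder task
    input_block = f'Text: "{observation}"\nLabel (Yes/No Only):'
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{input_block}\n\n"
        "### Response:"
    )

observations = ["I love sunny days!", "This product broke after one use."]
outputs = llm.generate([build_prompt(o) for o in observations], params)
labels = [o.outputs[0].text.strip() for o in outputs]
print(labels)  # e.g. ['Yes', 'No']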
Results:
1. "Label:" (w/o spaces after ':')
No 5991
Yes 3988
Based 8
Reason 8
Let 2
Maybe 2
It 1
Total Non-labels: 21
2. "Label: " (w/ spaces after ':')
No 6195
Yes 3709
Based 71
Reason 18
Without 2
It 1
Let 1
The 1
Maybe 1
This 1
Total Non-Labels: 96
-------
4. "Label (Yes/No):"
No 6176
Yes 3815
Based 6
Reason 2
Maybe 1
Total Non-labels: 9
5. "Label (Yes/No): "
No 6160
Yes 3790
Based 41
The 5
Reason 3
Maybe 1
Total Non-labels: 50
-------
6. "Label (Yes/No Only):"
No 6220
Yes 3780
PERFECT OUTPUT
7. "Label (Yes/No Only): "
No 6291
Yes 3709
PERFECT OUTPUT
From here, I wanted to test how the space impacted these 'perfect outputs', so I changed max tokens from 1 to 96.
8. "Label (Yes/No Only):" (w/ 96 max tokens)
Yes 1531
No. 1099
... Additional responses contained explanations
9. "Label (Yes/No Only): " (w/ 96 max tokens)
Yes 885
No. 824
... Additional responses contained explanations
This took me about 3 hours to complete. I thought about running this experiment again through my entire dataset of 70k observations, but.... lol it's time for dinner and I'm hungry. I think 10k is more than enough and the experiment setup is pretty thorough.
Let me know your thoughts/experiences! | 2024-01-28T03:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1actbr1/how_to_get_perfect_label_outputs_via_prompting/ | GeeBrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1actbr1 | false | null | t3_1actbr1 | /r/LocalLLaMA/comments/1actbr1/how_to_get_perfect_label_outputs_via_prompting/ | false | false | self | 42 | null |
Llama + RAG | 1 | I’m looking for a small project for building a QA tool that leverages Llama and RAG
Could you please suggest some ideas (give dataset if possible).
I would love to have something related to Wikipedia | 2024-01-28T03:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1act1mm/llama_rag/ | aHuskylol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1act1mm | false | null | t3_1act1mm | /r/LocalLLaMA/comments/1act1mm/llama_rag/ | false | false | self | 1 | null |
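For the Llama + RAG request above, a minimal local QA loop is small enough to sketch directly: embed a document set, retrieve the closest passages for a question, and stuff them into a Llama prompt. This is a hedged sketch using sentence-transformers for retrieval and llama-cpp-python for generation; the model path and the two-document corpus are placeholders, and a Wikipedia dump (or the wikitext dataset) could stand in as the document set.

import numpy as np
from sentence_transformers import SentenceTransformer
from llama_cpp import Llama

# Placeholder corpus; swap in chunks of a Wikipedia dump or any other documents
docs = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "Mount Everest is the highest mountain above sea level.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

def answer(question, k=2):
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(-(doc_vecs @ q_vec))[:k]          # retrieve top-k passages
    context = "\n".join(docs[i] for i in top)
    prompt = ("Answer the question using only the context.\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    out = llm(prompt, max_tokens=128, stop=["\n\n"])
    return out["choices"][0]["text"].strip()

print(answer("When was the Eiffel Tower finished?"))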
Introducing Polymind - A Multimodal, Function-Calling Powered TabbyAPI/llama.cpp WebUI | 98 | Here is my relatively basic Function calling based WebUI, it supports the following:
* Internet searching with DuckDuckGo and web scraping capabilities
* Image generation using ComfyUI
* Image input with sharegpt4v (over llama.cpp's server), OCR, and Yolo
* Port scanning with Nmap
* Wolfram Alpha integration
* A Python interpreter
* Basic RAG with semantic search for PDF and miscellaneous text files
Github: [https://github.com/itsme2417/PolyMind](https://github.com/itsme2417/PolyMind)
| 2024-01-28T02:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1acs77k/introducing_polymind_a_multimodal_functioncalling/ | itsme292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acs77k | false | null | t3_1acs77k | /r/LocalLLaMA/comments/1acs77k/introducing_polymind_a_multimodal_functioncalling/ | false | false | self | 98 | {'enabled': False, 'images': [{'id': 's2Wr81oPhBvTPGqH93app7MqfnCuIpgp5DRxVs0XCXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?width=108&crop=smart&auto=webp&s=e6ddad87bf19352ef234b7092f8200bf649bf570', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?width=216&crop=smart&auto=webp&s=89cd3624c2d5a7085e076e269a6209d173d842a5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?width=320&crop=smart&auto=webp&s=2235411f6718030feb3f4f06e643939999805f42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?width=640&crop=smart&auto=webp&s=3b2a3ffe48df4ee20bef8fb87431782a026d4210', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?width=960&crop=smart&auto=webp&s=39471c9f227c6ce85fd8b655f60232c74927efed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?width=1080&crop=smart&auto=webp&s=957ebeabbc7335349a2cc59c55d128241b06a4d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e2YgYcMdGu7f_8W6UDx2ccmHevfOPQKXsEOOHFUe7tI.jpg?auto=webp&s=a72f05a193ae5b68ac557d889d4ec72be90171d0', 'width': 1200}, 'variants': {}}]} |
Multimodal Local LLM? Where to start? | 1 | Since OpenAI announced the integration of DALL-E 3 inside ChatGPT-4, I have been thinking about this concept in my head, and lately I have become pretty restless about it.
I wish to put together something similar and local, running it on an RTX 3060 12GB gpu pc I have upstairs. I have been trying to understand and studying more and more how LLMs work, how tokenizers and different types of tokenizers work, how word and subword tokenizers, tokenize natural language into these numbered inputs for the LLM to understand and how to then decode the output from tokenized bytes back into natural languages for us humans to understand the LLMs response, or output.
So to run something like cgpt+dalle3 but local, one would need to implement several LLMs, ideally through the transformers library, and integrate them with each other.
One would need a text generation llm, a text to image llm, an image to text llm and different tokenizers...
For those of you who have not toe-dipped into image-generation diffusion AI like Stable Diffusion, there are already some implementations of this. One open source example is a project called Fooocus (3 Os, yes). It uses a Gradio interface, which underneath uses GPT-2 to tokenize user prompt input and feed it into Stable Diffusion XL (SDXL).
The multimodality and fusion of multiple LLMs is the future... and it would solve the randomness of image diffusion.
I just don't know how to implement this into a single Gradio chatbot interface. Locally. Without the use of APIs, offline. | 2024-01-28T02:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1acrpe0/multimodal_local_llm_where_to_start/ | Yuri1103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acrpe0 | false | null | t3_1acrpe0 | /r/LocalLLaMA/comments/1acrpe0/multimodal_local_llm_where_to_start/ | false | false | self | 1 | null |
I can't figure out how to load a model that supports running via CPU, in such a way, that it runs on my GPU instead. | 2 | Okay, so I got local ai chatbots working. I went to [https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file) I installed it.
Then I went to [https://huggingface.co/TheBloke](https://huggingface.co/TheBloke) and got some models. Great so far. Any .awq file I download uses my gpu by default, great again.
BUT... I go here [https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GGUF](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GGUF) and I find a huge selection of quants that use less ram. This to me means, I can pick the quant that will fit my own personal gpu perfectly (so I want 11gb vram or less from this list.)
BUT all the files on this page are .gguf, so how do I tell text-generation-webui to use only my GPU? It's loading the model with llama.cpp; is that my mistake? Do I need to load the .gguf file with something other than llama.cpp? There is a drop-down menu of other loaders to use, but I do not know which might fit this situation. I do not really want a shared CPU+GPU workload. My CPU is much older and weaker than my 4070 Ti GPU. I did try changing gpu layers from 0 to 40, but all that did was split the workload 50/50 cpu/gpu.
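For reference, a hedged sketch of what a full offload looks like with llama-cpp-python used directly (the file path is a placeholder). In the webui itself, the equivalent is raising the n-gpu-layers value on the llama.cpp loader to at least the model's layer count; a 10.7B Solar model has more than 40 transformer layers, so 40 leaves part of it on the CPU.

from llama_cpp import Llama

llm = Llama(
    model_path="./nous-hermes-2-solar-10.7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 (or any number >= total layers) offloads every layer
    n_ctx=4096,
)
print(llm("Q: Why is the sky blue? A:", max_tokens=64)["choices"][0]["text"])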
TLDR: how do I tell a .gguf model to use only my gpu and not my cpu? if I've chosen a .gguf model that is perfectly quant'd to fit inside my vram? Which model loader must I use, and which setting must I adjust below it? Again from the perspective of the fact I am using [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file) | 2024-01-28T02:00:48 | https://www.reddit.com/r/LocalLLaMA/comments/1acrdmu/i_cant_figure_out_how_to_load_a_model_that/ | namad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acrdmu | false | null | t3_1acrdmu | /r/LocalLLaMA/comments/1acrdmu/i_cant_figure_out_how_to_load_a_model_that/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'PhQJyxmurMqowQp75iAZUd0ixiXthL_-MhAMf7rBGEw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?width=108&crop=smart&auto=webp&s=11f79f9bef0a88e17de159018fb83017c1beb2e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?width=216&crop=smart&auto=webp&s=5c3db1fdc85701d65c0e6deff8e51df39f788072', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?width=320&crop=smart&auto=webp&s=21435699111155ac0f4b55c44d13967f91228f94', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?width=640&crop=smart&auto=webp&s=6378d53e6509316a652f558c7bd7701ec97839cd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?width=960&crop=smart&auto=webp&s=d9c3c35ca34a1d07cec69502a5167566e93564ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?width=1080&crop=smart&auto=webp&s=09ef162d35bdcd00d7b89ea25bded2d55c735e43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7dl1r6kFXGQX8BJKT7vjqqi5FI3sV4Wd3ZsaSJA2SJw.jpg?auto=webp&s=d886bb4af0f577602834f376855faa1fb0168dfc', 'width': 1200}, 'variants': {}}]} |
Any guys successfully fine-tuned Llama2 without using hugging face? | 10 | Hi team,
Has anybody successfully fine-tuned Llama 2 without using Hugging Face? All the articles and tutorial videos use Hugging Face. Just wondering if there is any way to train Llama 2 without using it.
I am currently thinking about training Code Llama on our company's private repo, but I'm unsure about policies and everything. So, if it is possible, I would like to train it locally, not using Hugging Face. | 2024-01-28T01:46:25 | https://www.reddit.com/r/LocalLLaMA/comments/1acr3nr/any_guys_successfully_finetuned_llama2_without/ | OkSwordfish8419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acr3nr | false | null | t3_1acr3nr | /r/LocalLLaMA/comments/1acr3nr/any_guys_successfully_finetuned_llama2_without/ | false | false | self | 10 | null |
Speedups for a partially unloaded Mixtral? | 5 | Been running a version of Mixtral (q3_K_M gguf), and I love the quality even when compared to 20/34b models. However the prompt processing time is pretty long, ~40 seconds with a full 4096 context length. I only have a 16gb card, so I'm unable to fully load the model into vram.
ContextLimit: 4096/4096, Processing:45.72s (11.3ms/T), Generation:4.70s (71.2ms/T), Total:50.42s (1.31T/s)
Went through all settings one-by-one and this was as low as I could get it on koboldcpp. Any recommendations for further improvement? Or should I just buy a 5090 when it comes out. | 2024-01-28T00:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1acpm5b/speedups_for_a_partially_unloaded_mixtral/ | Salendron2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acpm5b | false | null | t3_1acpm5b | /r/LocalLLaMA/comments/1acpm5b/speedups_for_a_partially_unloaded_mixtral/ | false | false | default | 5 | null |
Google Intentionally Censors Bard After It Generates the Answer Already | 202 | Hi all,
After noticing Bard's score on the chatbot arena, I decided to revisit it. (It was heavily censored during my initial review back then.) I gave it a simple instruction to test the model: **"Can you prepeare a readme markdown model card for Bard"**
Initially, it began generating the README perfectly, stating it was based on Gemini Pro, etc. Then, a surprising incident occurred. The response changed to **"I'm a language model and don't have the capacity to help with that. "**
I've encountered refusals like this before, but what's unusual is that the answer changed in the UI after the model had already generated at least a part of its response.
The video file:
https://reddit.com/link/1acohlm/video/is9401wlj2fc1/player
I've tweeted about it if you're interested:
https://x.com/Weyaxi/status/1751380303988359241
Anyway, I haven't seen anything like this happen before, so I'd like to hear your opinions on it. | 2024-01-27T23:41:24 | https://www.reddit.com/r/LocalLLaMA/comments/1acohlm/google_intentionally_censors_bard_after_it/ | Weyaxi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acohlm | false | null | t3_1acohlm | /r/LocalLLaMA/comments/1acohlm/google_intentionally_censors_bard_after_it/ | false | false | 202 | {'enabled': False, 'images': [{'id': 'QEWGvdJfo6kYOge5rkBDjcY79lUpBBi3VFJLwMbf-KQ', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?width=108&crop=smart&auto=webp&s=7651cad987aba34b5bfcc9eb15e8b3ed68244ebe', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?width=216&crop=smart&auto=webp&s=aa45a54a4879eeed53fd2ffb8f2906c592a5511b', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?width=320&crop=smart&auto=webp&s=ed4d521a5b1b259c923710933616ae98cdfbbe7d', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?width=640&crop=smart&auto=webp&s=7f02d577af1ce1b676c4fce71b65932f14b3c9e2', 'width': 640}, {'height': 492, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?width=960&crop=smart&auto=webp&s=d5f901846718eef2e2690deb855cf24fe166bed2', 'width': 960}, {'height': 553, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?width=1080&crop=smart&auto=webp&s=07a6748bffe57410e9fc4b5b2f2777a86e8cfc1f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/BFg22UIvj--iTF-aEbJNqMRNUGvn2O_H1VrXS1QEJ5g.jpg?auto=webp&s=2a146ef0898a7fc17c80fbc3ee87aca4fdbaffbf', 'width': 1404}, 'variants': {}}]} | |
AI tools and what are they for AI | 1 | [removed] | 2024-01-27T23:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/1acoa2n/ai_tools_and_what_are_they_for_ai/ | StillOk1589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acoa2n | false | null | t3_1acoa2n | /r/LocalLLaMA/comments/1acoa2n/ai_tools_and_what_are_they_for_ai/ | false | false | self | 1 | null |
Best local models to run with 256G CPU Mem. and 48GBx2 GPU Mem? | 14 | I just knew this subredit from twitter. I am very interested in running powerful local models to 1) analyze large source codes like documentation generation, and code explanation, etc. 2) have good QA sessions for any programming or technical questions. 3) in the long run, I want to automate my command line workflow using iterative interactions with the model.
I have access to a workstation with
* Dual processors: Intel Xeon Silver 4214R 2.4GHz
* 256 GB main memory
* Dual RTX A6000 GPUs: each 48 GB memory, I believe NVlink is enabled but I am not sure.
What would be the best local models I should run on this machine?
​ | 2024-01-27T22:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/1acnis4/best_local_models_to_run_with_256g_cpu_mem_and/ | Pitiful-You-8410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acnis4 | false | null | t3_1acnis4 | /r/LocalLLaMA/comments/1acnis4/best_local_models_to_run_with_256g_cpu_mem_and/ | false | false | self | 14 | null |
Best Solution for Local Voice Input/Control | 3 | I'm a developer and recently injured my hand, so my typing skills are really struggling. I'd like to run something locally I can use for inputting text (not necessarily coding just general text) as well as potentially controlling things (selecting lines, etc.)
I've seen GitHub Copilot has a preview for controlling VS Code with voice, wondering if there's something similar I can run locally.
And is Whisper still the best speech-to-text model, or are there better ones that can be run locally?
I'm on Windows with a 4090.
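For the dictation half, a hedged faster-whisper sketch (Whisper is speech-to-text rather than text-to-speech; the audio file name is a placeholder). It runs comfortably on a 4090, while editor control by voice is a separate problem, closer to the Copilot voice preview mentioned above or a dedicated tool like Talon.

from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

segments, info = model.transcribe("dictation.wav", vad_filter=True)
print(f"Detected language: {info.language}")
for seg in segments:
    print(seg.text.strip())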
​ | 2024-01-27T21:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1aclqr7/best_solution_for_local_voice_inputcontrol/ | thekidkid32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aclqr7 | false | null | t3_1aclqr7 | /r/LocalLLaMA/comments/1aclqr7/best_solution_for_local_voice_inputcontrol/ | false | false | self | 3 | null |
Most capable function calling open source models? | 35 | we've had a myriad of impressive tools and projects developed by talented groups of individuals which incorporate function calling and give us the ability to create custom functions as tools that our ai models can call, however it seems like they're all entirely based around openai's chatgpt function calling.
my question is what open source models are you aware of that are consistently capable of recognizing when they have a function tool available and actually call them properly?
i'd like to make more effective use of things like memgpt, autogen, langroid, langchain, gorilla, and an innumerable number of other great projects but i want to make sure i'm not wasting my time using models that aren't good at function calling. | 2024-01-27T21:02:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ackxxt/most_capable_function_calling_open_source_models/ | waywardspooky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ackxxt | false | null | t3_1ackxxt | /r/LocalLLaMA/comments/1ackxxt/most_capable_function_calling_open_source_models/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'a-tjppw3xK-IcRe8JT935F4vaqR9C3FGAy-yYd1ukkw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?width=108&crop=smart&auto=webp&s=46143f67b2054319392da81f30fef53e16f96b1e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?width=216&crop=smart&auto=webp&s=99515adf02f7a1705b58a8776a89c8054a04531d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?width=320&crop=smart&auto=webp&s=2b87d4d744900e3d9df272c67742c69330efc55d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?width=640&crop=smart&auto=webp&s=0b98aa8c6378f48cca0c532fa51272e340406c5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?width=960&crop=smart&auto=webp&s=d394ea6c5c4268b634a902a871526185b3785492', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?width=1080&crop=smart&auto=webp&s=75c904bef86dccf627889d170da59f732179f2f3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CJMTX7dvXZsndDCh34hOwCQnJ-cwXPWD70ZnqAeyKys.jpg?auto=webp&s=33942b82c5f68d3f57447206017fc3c565b1a558', 'width': 1200}, 'variants': {}}]} |
New NVIDIA driver for Windows: Reduced performance during training? | 1 | Guys, if someone here uses WSL, do not upgrade your NVIDIA driver to 551.23. I got a 50% decrease in performance during training and inference, and lost a couple of hours until I realized that something was off.
IDK why this happened, probably because they introduced cuda 12.4 in this update (according to nvidia-smi print).
I tune LLMs using axolotl, conda env had cuda 12.1 runtime installed, but still extreme performance drop.
Reverted back to 545.55 and everything is fine now (RTX 4090)
Has anyone experienced something like this after update? | 2024-01-27T21:00:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ackvsf/new_nvidia_driver_for_windows_reduced_performance/ | oposteriori | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ackvsf | false | null | t3_1ackvsf | /r/LocalLLaMA/comments/1ackvsf/new_nvidia_driver_for_windows_reduced_performance/ | false | false | self | 1 | null |
Should I build a dual 4090 PC or get a Macbook with 128GB memory? | 64 | I am considering building a PC with 2x 4090s for a total of 48GB VRAM.
I need to use it for
- local GPT (chat with documents, confidential, Apple Notes) - summarization, reasoning, insight
- large context (32k - 200k) summaries
nice to have:
- VR gaming
- stable diffusion XL
I have read that prompt processing is extremely slow on the Mac / Apple silicon? | 2024-01-27T20:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ackshi/should_i_build_a_dual_4090_pc_or_get_a_macbook/ | OppositeBeing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ackshi | false | null | t3_1ackshi | /r/LocalLLaMA/comments/1ackshi/should_i_build_a_dual_4090_pc_or_get_a_macbook/ | false | false | self | 64 | null |
Vulkan backend for koboldcpp and llamacpp - Your experiences. | 35 | New koboldcpp release: [https://github.com/LostRuins/koboldcpp/releases/tag/v1.56](https://github.com/LostRuins/koboldcpp/releases/tag/v1.56)
Vulkan implementation for llamacpp: [https://github.com/ggerganov/llama.cpp/pull/2059](https://github.com/ggerganov/llama.cpp/pull/2059)
I'm curious if any of you had already tested Vulkan backend for llamacpp and koboldcpp. If so; What are your experiences?
My results on my 'good' old PC with koboldcpp: CPU: i5-7600K, 32GB RAM, Model: LLaMA 7B:
RX570 4GB: With 16 layers offloaded to this poor old card, there is a noticeable increase in prompt processing speed. That being said, generation speed seems to go down by a small amount.
RX580 8GB: With all layers offloaded it is definitely faster than the CPU, but it's not amazing.
I haven't properly tested them yet, so I might update it with llama-bench results later.
I'm also curious, how the new Vulkan back-end compares to the other options. | 2024-01-27T20:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ackd03/vulkan_backend_for_koboldcpp_and_llamacpp_your/ | Puuuszzku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ackd03 | false | null | t3_1ackd03 | /r/LocalLLaMA/comments/1ackd03/vulkan_backend_for_koboldcpp_and_llamacpp_your/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': '39tqPiIPCi0TfBIccsC_TeFCNPHQgLhLRfgAtT1l88M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?width=108&crop=smart&auto=webp&s=8fcc1b1f47b218ec6c2f5ae02dada17d81ade092', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?width=216&crop=smart&auto=webp&s=a0d74021389c9b0e1b308dcaa7b5fa7728ed6059', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?width=320&crop=smart&auto=webp&s=c26e08ac2f89ef81a6ef87d267095824931cca3c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?width=640&crop=smart&auto=webp&s=c93f8268ec06d2ed8beece777047878511c7e13a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?width=960&crop=smart&auto=webp&s=38f54f258ef61dac0b11cfd2d185d29bdb811393', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?width=1080&crop=smart&auto=webp&s=495ce84907dbe18d0e79c580a1e28e6f99ccd915', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ar7_oa96Hqbeix3O9jx2ih9F9VFDUvx5vOlMusgR1tw.jpg?auto=webp&s=56ba5b4cb23dfad684e3c1e522648084a200ac62', 'width': 1200}, 'variants': {}}]} |
Is there any way of running and fine-tuning a LLaVA model under 8GB of VRAM | 2 | The required context length is at least 512 tokens. I could tolerate strong quantization, though, with QLoRA for fine-tuning. Are there any projects that allow doing that out of the box? | 2024-01-27T20:29:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ack6hp/is_there_any_way_of_running_and_finetunning_llava/ | Another__one | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ack6hp | false | null | t3_1ack6hp | /r/LocalLLaMA/comments/1ack6hp/is_there_any_way_of_running_and_finetunning_llava/ | false | false | default | 2 | null |
Mamba-3B-SlimPJ: State-space models rivaling the best Transformer architecture Albert Gu*, Tri Dao* | 52 | 2024-01-27T20:25:15 | https://cartesia.ai/mamba-3b-slimpj | MambaModel | cartesia.ai | 1970-01-01T00:00:00 | 0 | {} | 1ack2su | false | null | t3_1ack2su | /r/LocalLLaMA/comments/1ack2su/mamba3bslimpj_statespace_models_rivaling_the_best/ | false | false | default | 52 | null | |
One 4090 or 3 Tesla T4? | 1 | Hi, I want to play and realize PoCs, and I'm trying to figure out the best cards for my bucks.
With ~2000€ of budget I'm wondering about above choice.
I think that the 4090 (the upcoming Super) is way faster, but I would have twice the RAM, and you know that it matters to avoid offloading.
On the other hand, I guess that fitting 3x T4s will be a headache and require some badass power supply plus some accessories... Ideas? | 2024-01-27T20:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ack1kt/one_4090_or_3_tesla_t4/ | FrenchSouch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ack1kt | false | null | t3_1ack1kt | /r/LocalLLaMA/comments/1ack1kt/one_4090_or_3_tesla_t4/ | false | false | self | 1 | null |
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence - DeepSeek-AI 2024 - SOTA open-source coding model that surpasses GPT-3.5 and Codex while being unrestricted in research and commercial use! | 76 | Paper: [https://arxiv.org/abs/2401.14196](https://arxiv.org/abs/2401.14196)
Github: [https://github.com/deepseek-ai/DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder)
Models: [https://huggingface.co/deepseek-ai](https://huggingface.co/deepseek-ai)
Abstract:
>The rapid development of large language models has revolutionized code intelligence in software development. However, the predominance of closed-source models has restricted extensive research and development. To address this, we introduce the DeepSeek-Coder series, a range of open-source code models with sizes from 1.3B to 33B, trained from scratch on 2 trillion tokens. These models are pre-trained on a high-quality project-level code corpus and employ a fill-in-the-blank task with a 16K window to enhance code generation and infilling. Our extensive evaluations demonstrate that **DeepSeek-Coder not only achieves state-of-the-art performance among open-source code models across multiple benchmarks but also surpasses existing closed-source models like Codex and GPT-3.5. Furthermore, DeepSeek-Coder models are under a permissive license that allows for both research and unrestricted commercial use.**
https://preview.redd.it/twqi7f9vh1fc1.jpg?width=1505&format=pjpg&auto=webp&s=1a92756079066b29c181977f2203c71562ec4622
https://preview.redd.it/shg94d9vh1fc1.jpg?width=1659&format=pjpg&auto=webp&s=79dc39cd00ab2e43794ecd29fbd187727bfc50b2
https://preview.redd.it/gh6qke9vh1fc1.jpg?width=1535&format=pjpg&auto=webp&s=4ff9b3362dbcdd2718114d108fb89abb9fafc539
https://preview.redd.it/qhilwh9vh1fc1.jpg?width=1524&format=pjpg&auto=webp&s=cb114631e324fcaa57382a048c010b6aa569fe5e
https://preview.redd.it/srrlld9vh1fc1.jpg?width=1698&format=pjpg&auto=webp&s=d9ecb5548d051df140cecf56d73681364f240975 | 2024-01-27T20:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/1acjpn7/deepseekcoder_when_the_large_language_model_meets/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acjpn7 | false | null | t3_1acjpn7 | /r/LocalLLaMA/comments/1acjpn7/deepseekcoder_when_the_large_language_model_meets/ | false | false | 76 | null | |
Mamba: Linear-Time Sequence Modeling with Selective State Spaces — Arxiv Dives | 1 | 2024-01-27T20:02:57 | https://archive.ph/mpZzd | MambaModel | archive.ph | 1970-01-01T00:00:00 | 0 | {} | 1acjkbz | false | null | t3_1acjkbz | /r/LocalLLaMA/comments/1acjkbz/mamba_lineartime_sequence_modeling_with_selective/ | false | false | default | 1 | null | |
GPU offloading with txtai | 2 |
Hi,
I am running the txtai-based code below successfully, but it fails to offload to the GPU.
I have all CUDA dependencies etc. installed, and LM Studio runs the model fine & fast on my GPU.
I already added gpu=True.
How to fix that for txtai?
Thnx DJP
import logging
from txtai.pipeline import LLM

# Configure logging
logging.basicConfig(level=logging.DEBUG)

# Path to Hugging Face model and GGUF file
# model = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
# gguf = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
model = "TheBloke/Hermes-Trismegistus-Mistral-7B-GGUF"
gguf = "hermes-trismegistus-mistral-7b.Q4_K_M.gguf"

# Specify the device to use: 'cuda' for GPU or 'cpu' for CPU
device = "cuda"

# Auto-download and run GGUF model on the specified device
llm = LLM(f"{model}/{gguf}", device=device, verbose=True, gpu=True)

while True:
    # Ask the user for their question
    user_question = input("\nPlease enter your question (type 'exit' to quit): ")

    # Check if the user wants to exit
    if user_question.lower() in ["exit", "quit"]:
        print("Exiting the program.")
        break

    # Format the question with the special token expected by the model
    formatted_question = f"{user_question}</s>"

    # Get the model's response to the user's question
    result = llm(formatted_question)

    # Print the result
    print("The model's response:", result)
| 2024-01-27T19:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/1acjeuo/gpu_offloading_with_textai/ | djpurno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acjeuo | false | null | t3_1acjeuo | /r/LocalLLaMA/comments/1acjeuo/gpu_offloading_with_textai/ | false | false | self | 2 | null |
Any opensource LAM - large action model out there? | 1 | [removed] | 2024-01-27T19:40:58 | https://www.reddit.com/r/LocalLLaMA/comments/1acj1yu/any_opensource_lam_large_action_model_out_there/ | Western_Soil_4613 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acj1yu | false | null | t3_1acj1yu | /r/LocalLLaMA/comments/1acj1yu/any_opensource_lam_large_action_model_out_there/ | false | false | default | 1 | null |
Looking for advice on the best way to host my kobold server endpoint on Google cloud. | 2 | Just wanted to see if anyone here has experience with this and hopefully offer some advice.
I've been running kobold.cpp locally and I love the workflows I've gotten set up with different models. I'm finally to the stage of my project where I want to start breaking away from using openai, but I'm a bit lost as to how to host something like this.
From what I've found out so far, it looks like my options are either running a cloud vm which looks like it'd be really expensive, or running a kubernetes cluster with gpu nodes and I'm not really sure about how that's priced yet.
Before I start running around in circles trying to figure out a solution, I wanted to check in with you all and see if anyone could hopefully point my in the right direction.
Thanks! | 2024-01-27T19:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/1acinu2/looking_for_advice_on_the_best_way_to_host_my/ | OriginallyWhat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acinu2 | false | null | t3_1acinu2 | /r/LocalLLaMA/comments/1acinu2/looking_for_advice_on_the_best_way_to_host_my/ | false | false | self | 2 | null |
Who do I have to pay to get easy+fast+private big model access? I want a service like "pick HF model, start chatting, get fast replies." | 1 | [removed] | 2024-01-27T19:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1acinsi/who_do_i_have_to_pay_to_get_easyfastprivate_big/ | drawntomore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acinsi | false | null | t3_1acinsi | /r/LocalLLaMA/comments/1acinsi/who_do_i_have_to_pay_to_get_easyfastprivate_big/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N0PjtqUQGyPWX8xnYUx2rnPb_ksbdcg64BRu85dC-vU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?width=108&crop=smart&auto=webp&s=3285b48b4d896a760f601c33ac81c23770d1da37', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?width=216&crop=smart&auto=webp&s=2493e9002c04b95c26af88c6f9d0d20efe4f080d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?width=320&crop=smart&auto=webp&s=7a11a379d29ce8052317b3fbbb6933bdd92a90e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?width=640&crop=smart&auto=webp&s=dc3ab9533556435f3dd8a5a4785a6dbc0a086ce3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?width=960&crop=smart&auto=webp&s=d4054f28409ee82e2c5fd4f6b4eab752a4d812bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?width=1080&crop=smart&auto=webp&s=f1afda81c205ec5e081546d87a1c787d639fee9c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LFFaN1wapdPp-DeyBQHgLcyDThqLh1QwsFiIhzSkoTU.jpg?auto=webp&s=b8ac0e3d2f97140d4661d29eb7c8adec201f4326', 'width': 1200}, 'variants': {}}]} |
Xavier AGX 32GB to Orin NX 16GB, is it upgrade or downgrade for local LLM? | 1 | I have a Xavier AGX 32GB setup running dolphin-2.6-mistral-7b-dpo-laser.Q5_K_M.gguf, and I'm getting about 7 tokens per second using the 15W power profile. The GPU has plenty of RAM for that model, over 20GB free.
I'm planning to run multiple models, e.g. text to speech and vision models along with LLM. I'm curious about Orin boards. I bet Orin AGX will be a huge jump, but what about 2x Orin NX 16GB boards? I hope anyone owns Orin NX 16GB post a number here, so I don't have to do a zero shot purchase. | 2024-01-27T19:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1aci57o/xavier_agx_32gb_to_orin_nx_16gb_is_it_upgrade_or/ | SX-Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aci57o | false | null | t3_1aci57o | /r/LocalLLaMA/comments/1aci57o/xavier_agx_32gb_to_orin_nx_16gb_is_it_upgrade_or/ | false | false | self | 1 | null |
Local Vision Model with Speech to Speech | Faster Whisper + OpenVoice + Moondream(1.6B) | 1 | 2024-01-27T18:45:30 | https://v.redd.it/4hz03tcx21fc1 | allaboutai-kris | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1achsfd | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4hz03tcx21fc1/DASHPlaylist.mpd?a=1708973145%2CYjYwOTQ5OWUyMGQ5ODEyOTJlZjY3OWUxNmZjOGQyMzZlYWJmOWRhN2IzNmJkOWQ0MzI5MmM3NTIyZTkyNWNkZQ%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/4hz03tcx21fc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4hz03tcx21fc1/HLSPlaylist.m3u8?a=1708973145%2CZmRkYjUwN2UyYzIzMTBhOWY4ZWJlY2Q3NDFhNzVlYzU3MGUxODUxZTNmZGM4ODM5Y2Y1MDM4ZDQzM2Y0NWM4OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4hz03tcx21fc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1achsfd | /r/LocalLLaMA/comments/1achsfd/local_vision_model_with_speech_to_speech_faster/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?width=108&crop=smart&format=pjpg&auto=webp&s=29170ebf6f1f94b16579bd98a1ba7cc555b2ed68', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?width=216&crop=smart&format=pjpg&auto=webp&s=0e4cca17975b821f1803a7081a411501e895a922', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?width=320&crop=smart&format=pjpg&auto=webp&s=ae626a8575b31b62b0ea25d78645025a4526b8b0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?width=640&crop=smart&format=pjpg&auto=webp&s=d156505356ad9d7b725d4c2035e0940ba3302330', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?width=960&crop=smart&format=pjpg&auto=webp&s=7358b67a10fd7439b171ea08c5524322ef3b05e2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ec5fb243519c98631ae13b075a6caff399b7eba4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejd5d3ptcDUzMWZjMeWcFvdarz2Xk0K7BBIXGdMgTdPKuNpLj0YuwP1jQEq5.png?format=pjpg&auto=webp&s=c29d430bc6f5821ece9578d015e1a159acf5936b', 'width': 1920}, 'variants': {}}]} | ||
Open source LLM for NLP | 1 | This might be a really dumb question, but are there any open source LLMs that rival GPT-4 or Claude 2 for NLP tasks such as topic generation, summarization and question answering?
Can be short snippets <8k context length. Just need accuracy / quality. | 2024-01-27T18:31:55 | https://www.reddit.com/r/LocalLLaMA/comments/1achh4h/open_source_llm_for_nlp/ | dimsumham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1achh4h | false | null | t3_1achh4h | /r/LocalLLaMA/comments/1achh4h/open_source_llm_for_nlp/ | false | false | self | 1 | null |
Synthetic data generation for instruct finetuning | 1 | [removed] | 2024-01-27T17:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1acgmg0/synthetic_data_generation_for_instruct_finetuning/ | Silver_Equivalent_58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acgmg0 | false | null | t3_1acgmg0 | /r/LocalLLaMA/comments/1acgmg0/synthetic_data_generation_for_instruct_finetuning/ | false | false | self | 1 | null |
Open Source Multimodal 7B LLM? | 2 | I want to fine-tune multimodal model. I have only 8 GB VRAM so can fine-tune up to only 7B model. Is there any good Open Source multimodal 7B model? | 2024-01-27T17:24:41 | https://www.reddit.com/r/LocalLLaMA/comments/1acfwzi/open_source_multimodal_7b_llm/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acfwzi | false | null | t3_1acfwzi | /r/LocalLLaMA/comments/1acfwzi/open_source_multimodal_7b_llm/ | false | false | self | 2 | null |
Meta's TTA "MAGNet" & moondream Vision are INSANE! [News] | 1 | [removed] | 2024-01-27T17:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1acfp7j/metas_tta_magnet_moondream_vision_are_insane_news/ | dev-spot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acfp7j | false | null | t3_1acfp7j | /r/LocalLLaMA/comments/1acfp7j/metas_tta_magnet_moondream_vision_are_insane_news/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YCeJTA90LNOekHTCwmESIY0FJu1q4_uNeuXKLHP270s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BD5m-Omc9jt3kY09Hu_AdLohYEDrjxuwkRkr2UPgh0A.jpg?width=108&crop=smart&auto=webp&s=8ca8f63d16ea16dd59d4db3a30b1d966f317c88e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BD5m-Omc9jt3kY09Hu_AdLohYEDrjxuwkRkr2UPgh0A.jpg?width=216&crop=smart&auto=webp&s=bf928f8daf4fc18e81ea28636768fedad080d3a2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BD5m-Omc9jt3kY09Hu_AdLohYEDrjxuwkRkr2UPgh0A.jpg?width=320&crop=smart&auto=webp&s=42699c21c82f98b5af5cadd44c2bedb86af6015d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/BD5m-Omc9jt3kY09Hu_AdLohYEDrjxuwkRkr2UPgh0A.jpg?auto=webp&s=bd29a51bf0910b6f6a7d0590c54af0d2b78bc379', 'width': 480}, 'variants': {}}]} |
MLX + Mixtral 16-bit on MacBook Pro M3 Max? | 2 | Does anyone here have experience running [16-bit Mixtral with MLX](https://github.com/ml-explore/mlx-examples/blob/main/llms/mixtral/README.md) on a maxed out MacBook Pro M3 Max (16-core CPU/40-core GPU/128GB)? Trying to tempt myself to upgrade from my 2021 M1 Max 64GB and this seems like it could be a good excuse. | 2024-01-27T17:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1acfoow/mlx_mixtral_16bit_on_macbook_pro_m3_max/ | RoosterPangolin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acfoow | false | null | t3_1acfoow | /r/LocalLLaMA/comments/1acfoow/mlx_mixtral_16bit_on_macbook_pro_m3_max/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '0eG2rXaxjrvo3P0DZbVgJNT44diA6Igvs0tXaT4DG-U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?width=108&crop=smart&auto=webp&s=d62c64eb0c984228a2cbc84fadabf47c41c52e91', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?width=216&crop=smart&auto=webp&s=3cdcf6e92a44b763edadee750e44032d634a1a91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?width=320&crop=smart&auto=webp&s=fb9fd725d148613da827462ae91282495dd7df01', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?width=640&crop=smart&auto=webp&s=d406e0861e0133e9063f6fb37139bfcdd1af7c6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?width=960&crop=smart&auto=webp&s=dc195f53b814e3fea11846434eddbb4340faa2a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?width=1080&crop=smart&auto=webp&s=93af03504cbdd913d2fab81d6323de3f578f393b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NmzUABmcAJ6vJJVeFZ72_-NkfHl4m2TjTxsm6oo9reA.jpg?auto=webp&s=18eec9036f2b1dfb0a36d4dc46b75400ff82ce62', 'width': 1200}, 'variants': {}}]} |
Is a Radeon VII worth it? | 1 | I recently found some pretty cheap (260-300€) listings for used Radeon VII cards. They have 16GB of HBM2 VRAM, giving them memory bandwidth similar to a 4090. They are only supported up to ROCm 5.7, but 5.6 still seems to be the most used version anyway.
Does anybody have experience or recommendations?
Would you get a Radeon VII or a 7600 XT?
| 2024-01-27T16:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/1aceqxt/is_a_radeon_vii_worth_it/ | Kevinswelt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aceqxt | false | null | t3_1aceqxt | /r/LocalLLaMA/comments/1aceqxt/is_a_radeon_vii_worth_it/ | false | false | self | 1 | null |
Consistent response formatting | 2 | Given the current state of local LLMs, how feasible is it to get one to only respond with specifically formatted commands based on context.
Ideally I’d like to train the model to be able to have it pick from a list of valid functions, pick from a valid list of parameter types for said functions, and then provide data.
E.g. {"MoveTo": {"Vector2": [1.0, 1.0]}}
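The closest thing I have found so far is grammar-constrained sampling: llama.cpp can take a GBNF grammar so that only strings matching the grammar can ever be sampled. I have not verified this end to end, so treat the sketch below as a starting point rather than a working recipe; the model path and the grammar itself are placeholders I made up for the MoveTo example.

from llama_cpp import Llama, LlamaGrammar

# Placeholder grammar: only allows {"<func>":{"Vector2":[x,y]}} style commands
grammar_text = r'''
root ::= "{" "\"" func "\"" ":" "{" "\"Vector2\"" ":" "[" num "," num "]" "}" "}"
func ::= "MoveTo" | "LookAt"
num  ::= "-"? [0-9]+ ("." [0-9]+)?
'''

llm = Llama(model_path="model.gguf")  # placeholder path to any instruct model
grammar = LlamaGrammar.from_string(grammar_text)

out = llm(
    "Command the agent to walk to the door.",
    max_tokens=64,
    grammar=grammar,  # sampling is restricted to strings the grammar accepts
)
print(out["choices"][0]["text"])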
I understand that there is chance for garbage responses but as it stands I’m currently unable to even get one good response for testing. Anyone have any advice or a recommendation for a solid model? | 2024-01-27T16:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/1acei6b/consistent_response_formatting/ | lurklord_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acei6b | false | null | t3_1acei6b | /r/LocalLLaMA/comments/1acei6b/consistent_response_formatting/ | false | false | self | 2 | null |
Implementing a Sparse MoE Language Model from scratch | 1 | [removed] | 2024-01-27T16:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1aceboj/implementing_a_sparse_moe_language_model_from/ | avi1x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aceboj | false | null | t3_1aceboj | /r/LocalLLaMA/comments/1aceboj/implementing_a_sparse_moe_language_model_from/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ZcwbBZrpZThONbehuqrccFwX52bFC9OJakeTcd5GCqA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=108&crop=smart&auto=webp&s=7e8d8002d2e46aa4d7f857310f3ae689dfc44d07', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=216&crop=smart&auto=webp&s=02935af9b977f931de6c691794a118d67c233cd3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=320&crop=smart&auto=webp&s=26f1e3e3788eee603d6e7bb5e7f8c3c358bd40a8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=640&crop=smart&auto=webp&s=f9764b24c7c73ea4d0b7b3390d5e21d0055de1f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=960&crop=smart&auto=webp&s=1366a3f56c04b88d112621dba00079ac027ddaf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?width=1080&crop=smart&auto=webp&s=cb907e8194c07abde1edef598dd9a8ddd9a949fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qcGOsqfjpT1nIKmQpUt4Yl8QT0xRwn6eWdL-OJYrQ94.jpg?auto=webp&s=6c020ee09a5b9f789dadabb7b1235f4763939e6b', 'width': 1200}, 'variants': {}}]} |
Embedding model inference | 1 | [removed] | 2024-01-27T16:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ace5mt/embedding_model_inference/ | Serious_Driver_2874 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ace5mt | false | null | t3_1ace5mt | /r/LocalLLaMA/comments/1ace5mt/embedding_model_inference/ | false | false | self | 1 | null |
How much RAM would you need to run GPT3.5? | 2 | just curious | 2024-01-27T16:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ace5l6/how_much_ram_would_you_need_to_run_gpt35/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ace5l6 | false | null | t3_1ace5l6 | /r/LocalLLaMA/comments/1ace5l6/how_much_ram_would_you_need_to_run_gpt35/ | false | false | self | 2 | null |
Gemini Pro: The three headed monster | 1 | [https://twitter.com/lmsysorg/status/1749818447276671255](https://twitter.com/lmsysorg/status/1749818447276671255)
1. Gemini Pro: the Vertex AI API on Google Cloud
2. Gemini Pro (dev): the developer API on Google AI Studio
3. Bard (Jan 24, Gemini Pro): Bard powered by Gemini Pro (Jan 24 version)
The first two use the same Gemini Pro model but the former has some restrictions on content filters. The Bard is built with Gemini Pro. See more details in their blogs:
1. [https://blog.google/technology/ai/gemini-api-developers-cloud/](https://blog.google/technology/ai/gemini-api-developers-cloud/)
2. [https://blog.google/products/bard/google-bard-try-gemini-ai/](https://blog.google/products/bard/google-bard-try-gemini-ai/)
​ | 2024-01-27T15:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/1acduuf/gemini_pro_the_three_headed_monster/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acduuf | false | null | t3_1acduuf | /r/LocalLLaMA/comments/1acduuf/gemini_pro_the_three_headed_monster/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SzmCRc61cd7A_UC2R-l7Js_FR4D-3nFHnRA-AwnI9f0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oWP3QKmmsAVv7esr-FRlv-Xp2KZWc_4LnqtD0mR5E_U.jpg?width=108&crop=smart&auto=webp&s=fadd6076d8495d18e72378d2caa497f2ea166c7e', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/oWP3QKmmsAVv7esr-FRlv-Xp2KZWc_4LnqtD0mR5E_U.jpg?auto=webp&s=94d65fc3313db5e32155a880ed159c879594973e', 'width': 140}, 'variants': {}}]} |
Feels like I'm in over my head | 18 | I've been tasked with evaluating different commercially available LLMs for the company I work for, and I figured I would start tinkering with Meta's LLaMA 2 7b model on my own and try to utilize RAG to chat with my own documents. I spent 6 or 7 hours yesterday just trying to get it installed and feel like I've hit a wall. I want to run everything locally, but the download.sh file just keeps closing on me when I choose which model to download. I installed all of the required dependencies, git, wget, etc. Nothing is working. This is on Windows 10. Should I just wipe my machine and start again from scratch?
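One workaround I'm considering (untested, and the token is a placeholder) is skipping download.sh entirely and pulling the HF-format weights with huggingface_hub after requesting access to the repo on Hugging Face:

from huggingface_hub import snapshot_download

# Assumes the Llama 2 license has been accepted on the model page
# and a read-scope access token exists in the HF account settings.
snapshot_download(
    repo_id="meta-llama/Llama-2-7b-chat-hf",
    local_dir="llama-2-7b-chat-hf",
    token="hf_...",  # placeholder token
)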
On another note, I am in the geospatial data science field, I have some Python experience, enough to be dangerous. I have been browsing this sub quite a bit recently and I feel like I’m in over my head, there’s SO much to learn about LLMs in general. Can anyone recommend where I should really start? Like Local LLMs 101. I’d be open to tinkering with other Local LLMs other than LLaMA 2 as well, I have had some experience with OpenAI’s API, but I don’t want to burn up credits just tinkering with things. Thanks in advance! | 2024-01-27T15:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/1acdm9u/feels_like_im_in_over_my_head/ | suivid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acdm9u | false | null | t3_1acdm9u | /r/LocalLLaMA/comments/1acdm9u/feels_like_im_in_over_my_head/ | false | false | self | 18 | null |
AutoGen Studio: Interactively Explore Multi-Agent Workflows (no code AutoGen interface by Microsoft) | 36 | 2024-01-27T15:11:35 | https://microsoft.github.io/autogen/blog/2023/12/01/AutoGenStudio/ | rkm82999 | microsoft.github.io | 1970-01-01T00:00:00 | 0 | {} | 1accwbr | false | null | t3_1accwbr | /r/LocalLLaMA/comments/1accwbr/autogen_studio_interactively_explore_multiagent/ | false | false | default | 36 | null | |
Looking to run a small model for a very specific use case | 1 | I want a model that can answer questions on plant health, care and diseases. Is there a model for this specific use case?
My laptop has an RTX 4050 with 6GB of VRAM.
Are there any models I can use for this specific use case?
Any help is appreciated thank you | 2024-01-27T15:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/1accq03/looking_to_run_a_small_model_for_a_very_specific/ | FaultSmartBut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1accq03 | false | null | t3_1accq03 | /r/LocalLLaMA/comments/1accq03/looking_to_run_a_small_model_for_a_very_specific/ | false | false | self | 1 | null |
Hello, Noob here. How inefficient will a transformer be if trained directly like Mambabyte? % wise? Also, in what ways will it better (or worse) than a transformer trained on tokens? | 24 | 2024-01-27T14:54:35 | MambaModel | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1accivv | false | null | t3_1accivv | /r/LocalLLaMA/comments/1accivv/hello_noob_here_how_inefficient_will_a/ | false | false | 24 | {'enabled': True, 'images': [{'id': 'ZGcqT9FaHJm2VCC4X2t9_0Nv4_Hg3iwD3NIOtWvMm3g', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/bf5rdodzxzec1.png?width=108&crop=smart&auto=webp&s=320f523094025da8873eec542b9903a1e4ba1f16', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/bf5rdodzxzec1.png?width=216&crop=smart&auto=webp&s=3f9f60cd0c441bcc8154b255aac4d13e209da25e', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/bf5rdodzxzec1.png?width=320&crop=smart&auto=webp&s=0c37a5403a5552ed7d8ee554e5b00bba694ab5d5', 'width': 320}, {'height': 609, 'url': 'https://preview.redd.it/bf5rdodzxzec1.png?width=640&crop=smart&auto=webp&s=8858b3c0053bd0c41dd596c37b58add1966604e6', 'width': 640}], 'source': {'height': 789, 'url': 'https://preview.redd.it/bf5rdodzxzec1.png?auto=webp&s=8a230c2c1bbdd33d6440abaf92b613a02be32322', 'width': 829}, 'variants': {}}]} | |||
Running multi GPU with AMD and Nvidia? | 3 | I recently upgraded my PC(primarily used for gaming) from an RTX2060 6gb to an AMD RX7800xt. I haven’t yet sold my RTX2060 and was wondering if it was worth the effort to try run a dual GPU setup, and whether that would help at all with LLM inference. Has anyone tried something like this and had any luck? Is it even worth the time to figure out driver issues and other stuff? | 2024-01-27T14:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/1acbyy2/running_multi_gpu_with_amd_and_nvidia/ | ValyrianMonkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acbyy2 | false | null | t3_1acbyy2 | /r/LocalLLaMA/comments/1acbyy2/running_multi_gpu_with_amd_and_nvidia/ | false | false | self | 3 | null |
Anyone have experience of running a model on RunPod with a front end they designed? | 1 | I am trying to make the move from GPT-4 and langchain to some OS LLMs. I would like to try Goliath 120b on RunPod but I would like to create an MVP of a front end. Usually I use streamlit.
Has anybody got experience running a model on RunPod and hooking it up to their own front end?
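The pattern I'm imagining (unverified sketch; the proxy URL and model name are placeholders) is to run an OpenAI-compatible server on the pod, for example vLLM's API server or text-generation-webui's openai extension, and point a small Streamlit app at it:

import streamlit as st
from openai import OpenAI

# Placeholder RunPod proxy URL for whatever OpenAI-compatible server runs on the pod
client = OpenAI(base_url="https://<pod-id>-5000.proxy.runpod.net/v1", api_key="not-needed")

st.title("Goliath 120B chat")
prompt = st.text_area("Prompt")
if st.button("Send") and prompt:
    resp = client.chat.completions.create(
        model="goliath-120b",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    st.write(resp.choices[0].message.content)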
Thanks | 2024-01-27T13:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/1acbc5p/anyone_have_experience_of_running_a_model_on/ | 99OG121314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acbc5p | false | null | t3_1acbc5p | /r/LocalLLaMA/comments/1acbc5p/anyone_have_experience_of_running_a_model_on/ | false | false | self | 1 | null |
How do you know which model is suitable for our specific needs? | 7 | Today I saw this interesting comment by u/ttkciar down below.
Excluding use cases that already have a lot of information about them (like RP, story writing and coding), how do you know which model is good for a specific job?
Thank you
My use-cases and the models I use for them:
* NousResearch-Nous-Capybara-3B-V1 for RAG,
* Medalpaca-13B as a copilot for medical reading
* Starling-LM-11B-alpha as a copilot for physics research
* Either Starling-LM-11B-alpha or PuddleJumper-13B-v2 for analysis of social, political, historical, or philosophical issues
* Mistral-7B-OpenOrca for creative writing (sci-fi, not ERP)
* NoroCetacean-20B-10K for creative writing (neither sci-fi nor ERP)
* Phind-CodeLlama-34B-v2 for bulk code generation
* Rift-Coder-7B and Refact-1.6B-fim as copilots for coding copilots
* Scarlett-33B for casual topics and informal prose
* Starling-LM-11B-alpha, Mistral-7B-SciPhi-32k, and Vicuna-33B for synthetic dataset generation | 2024-01-27T13:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/1acb9u9/how_do_you_know_which_model_is_suitable_for_our/ | Terminus_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acb9u9 | false | null | t3_1acb9u9 | /r/LocalLLaMA/comments/1acb9u9/how_do_you_know_which_model_is_suitable_for_our/ | false | false | self | 7 | null |
Looking for Advice to propose solutions to my company | 6 | I am going to pitch AI to my company. Try to get a job with the word AI in it. Long enough to learn it and by then they will realize they need a real AI person. But any of y’all put together presentations you are willing to share or know of any good resources already created? I will build a little demo box and show them some minor stuff and then explain how a model trained on our data will benefit them. Our data can’t touch the net and I doubt they want to fork out the cash for a real AI engineer anyway. So this seems like a good chance for me | 2024-01-27T13:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/1acao8l/looking_for_advice_to_propose_solutions_to_my/ | imightbsabot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acao8l | false | null | t3_1acao8l | /r/LocalLLaMA/comments/1acao8l/looking_for_advice_to_propose_solutions_to_my/ | false | false | self | 6 | null |
Best open-source Image-to-Text model? | 3 | Hi everyone,
I have a use case where I have to describe diagrams and must use a local model. I have the resources to potentially fine-tune the model, and run the largest models.
What is in your experience the best open model to do image description?
My criteria are:
\- Performance
\- Ease of use / finetuning | 2024-01-27T13:07:15 | https://www.reddit.com/r/LocalLLaMA/comments/1acagg5/best_opensource_imagetotext_model/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1acagg5 | false | null | t3_1acagg5 | /r/LocalLLaMA/comments/1acagg5/best_opensource_imagetotext_model/ | false | false | self | 3 | null |
Superboogav2 vs PrivateGPT, and similar ones? | 1 | So I want to know: is superboogav2 enough? All I know is that in superbooga I have to convert every file I want to use to txt, which is an extra hassle (and I don't know any good offline PDF/HTML-to-text converters), while in PrivateGPT you just import a PDF or HTML file or whatever, and then you can basically chat with an LLM about the information from the documents.
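For the PDF/HTML-to-text part, the best I've come up with is a small offline script with pypdf and BeautifulSoup (rough sketch, file paths are placeholders):

from pypdf import PdfReader
from bs4 import BeautifulSoup

def pdf_to_txt(path):
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def html_to_txt(path):
    with open(path, encoding="utf-8") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    return soup.get_text(separator="\n")

with open("out.txt", "w", encoding="utf-8") as out:
    out.write(pdf_to_txt("doc.pdf"))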
The downside is that you cannot use ExLlama with PrivateGPT, so generations won't be as fast, and also the other projects are extremely complicated for me to install.
I was just wondering if superboogav2 is theoretically enough, and if so, what the best settings are.
Or, if it isn't, a suggestion: could someone make an extension for oobabooga that is better than superboogav2 and lets you import multiple types of documents, so it behaves like PrivateGPT?
Idk just sharing my random thoughts. | 2024-01-27T12:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ac9snq/superboogav2_vs_privategpt_and_similar_ones/ | headbopper96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac9snq | false | null | t3_1ac9snq | /r/LocalLLaMA/comments/1ac9snq/superboogav2_vs_privategpt_and_similar_ones/ | false | false | self | 1 | null |
Why do you trust LMSYS Arena Leaderboard? It can be easily manipulated if they want to. | 1 | The recent hype around the Bard with Gemini Pro model has completely eroded my confidence in this leaderboard. The difference between Bard and the Gemini Pro API is enormous, yet Bard seemingly "suddenly" surpassed GPT-4 in just a few days, despite the last update being at least a month ago. Can we really believe that Bard has surpassed GPT-4 in user experience during this one month? Certainly not.
As far as I know, they haven't publicly shared the complete logs (even though they previously released a filtered version, which was clearly a subset of the total), and they haven't provided specific information about ownership interests and donation sources.
I'm sorry to be blunt, but this leaderboard has become corrupt. Of course, the statements above are solely my personal opinion.
After their fake Gemini demonstration video, I wonder how much money Google paid for the promotion board on lmsys arena this time. | 2024-01-27T12:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ac9k8f/why_do_you_trust_lmsys_arena_leaderboard_it_can/ | Mission_Implement467 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac9k8f | false | null | t3_1ac9k8f | /r/LocalLLaMA/comments/1ac9k8f/why_do_you_trust_lmsys_arena_leaderboard_it_can/ | false | false | self | 1 | null |
Getting a structured response (user-defined class) from a custom fine-tuned Mistral model. | 4 | I saw examples using LangChain and OpenAI where you can set the desired output via a class. So, within a custom class, you can define the attributes that you want the model to send back.
​
I'm curious if there is a similar way of doing this but instead of using OpenAI, I would use my own Mistral-Instruct model.
​
Currently, I've only been playing around with the system prompt template, but the responses are not structured nor consistent. So, I would need to have multiple post-processing functions to clean the responses.
​
Is there an approach to get the responses to be in a structured format every single time?
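The closest I've got to an idea is defining the schema with Pydantic, putting the JSON schema into the prompt, and validating/retrying instead of stacking post-processing functions. Rough sketch; generate() is a stand-in for my own Mistral-Instruct call, and the field names are just examples:

from pydantic import BaseModel, ValidationError

class Reply(BaseModel):
    sentiment: str
    score: float

def generate(prompt: str) -> str:
    # placeholder: call the local Mistral-Instruct model here
    raise NotImplementedError

def structured_reply(user_text: str, retries: int = 3) -> Reply:
    prompt = (
        "Respond ONLY with JSON matching this schema:\n"
        f"{Reply.model_json_schema()}\n\nText: {user_text}"
    )
    for _ in range(retries):
        raw = generate(prompt)
        try:
            return Reply.model_validate_json(raw)
        except ValidationError:
            continue  # ask again if the model drifted from the schema
    raise RuntimeError("model never produced valid JSON")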
| 2024-01-27T11:46:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ac94rz/getting_a_structured_response_userdefined_class/ | FlyingRaijinEX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac94rz | false | null | t3_1ac94rz | /r/LocalLLaMA/comments/1ac94rz/getting_a_structured_response_userdefined_class/ | false | false | self | 4 | null |
Is it better to train multilingual language models on a single-language dataset or on a bilingual dataset? | 4 | Is it better to train multilingual language models like mT0 on a dataset in the target language only, or on a bilingual dataset that contains both the target language and English?
https://preview.redd.it/ori9mjvssyec1.png?width=4795&format=png&auto=webp&s=4658417e74f91f62fce1b3803ec553d01d8971a4 | 2024-01-27T11:04:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ac8i1z/is_better_to_train_multilingual_language_models/ | mohamedem2m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac8i1z | false | null | t3_1ac8i1z | /r/LocalLLaMA/comments/1ac8i1z/is_better_to_train_multilingual_language_models/ | false | false | 4 | null | |
Is it better to train multilingual models on a bilingual dataset or on a single-language dataset? | 1 | Hi, how are you? I wonder whether it is better to fine-tune models like mT0 on a single language, or to train them on a dataset that has both the target language and English.
https://preview.redd.it/l5ofklpxryec1.png?width=4795&format=png&auto=webp&s=99af2089fbdbeeccafac442e2deb20b983b5dc1d | 2024-01-27T11:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ac8g0c/is_better_to_train_multilingual_models_on_two/ | mohamedem2m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac8g0c | false | null | t3_1ac8g0c | /r/LocalLLaMA/comments/1ac8g0c/is_better_to_train_multilingual_models_on_two/ | false | false | 1 | null | |
Hello, Noob here. How inefficient will a transformer be if trained directly like Mambabyte? % wise? Also, in what ways will it better (or worse) than a transformer trained on tokens? | 1 | 2024-01-27T09:53:10 | MambaModel | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ac7gz8 | false | null | t3_1ac7gz8 | /r/LocalLLaMA/comments/1ac7gz8/hello_noob_here_how_inefficient_will_a/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'cNm_uPIIn60ivlkkXUD80pBVedgvIx10C5OO2knFDAo', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/hy9wyvh7gyec1.png?width=108&crop=smart&auto=webp&s=5e14983d275b0f5dd34ef691e13902d85ecd1145', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/hy9wyvh7gyec1.png?width=216&crop=smart&auto=webp&s=1cbe348a6dfdf8e9dd8efa231d9aee4bf727b81a', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/hy9wyvh7gyec1.png?width=320&crop=smart&auto=webp&s=a6eaaec63b3347ff63f18837257bf194268070ad', 'width': 320}, {'height': 609, 'url': 'https://preview.redd.it/hy9wyvh7gyec1.png?width=640&crop=smart&auto=webp&s=f9f55eb2a58a56577e79d2fc331ca2539c9c9c57', 'width': 640}], 'source': {'height': 789, 'url': 'https://preview.redd.it/hy9wyvh7gyec1.png?auto=webp&s=0bf4d3cb6f0af941b37173cffee31d5d6a262a38', 'width': 829}, 'variants': {}}]} | |||
Local LLM with directory access? | 1 | Probably a long shot at this point in time, but is there a local LLM I can run and give access to a particular directory - specifically an Obsidian vault that is filled with Markdown files?
I then want to be able to ask the LLM to summarise notes and have it infer information from other related/linked notes (hence the local directory access part)
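The closest thing I've found so far is plain RAG over the folder with something like llama-index. Rough, unverified sketch below; note that it defaults to OpenAI unless you wire up a local LLM and local embeddings, and the vault path is a placeholder:

from llama_index import SimpleDirectoryReader, VectorStoreIndex

docs = SimpleDirectoryReader("/path/to/ObsidianVault", recursive=True).load_data()
index = VectorStoreIndex.from_documents(docs)

query_engine = index.as_query_engine()
print(query_engine.query("Summarise my notes on project X and anything linked to them."))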
Sorry in advance if this falls under rule 1, but I couldnt see much on this topic | 2024-01-27T07:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ac5t71/local_llm_with_directory_access/ | Sploffo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac5t71 | false | null | t3_1ac5t71 | /r/LocalLLaMA/comments/1ac5t71/local_llm_with_directory_access/ | false | false | self | 1 | null |
How large an LLM do you think is needed to play decent RP/ERP locally? | 3 |
I don't mean to get recommended a model, just want to know the size base on other's criteria. | 2024-01-27T07:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ac5sks/how_large_llm_do_you_think_is_proper_for_playing/ | Zealousideal_Job_682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac5sks | false | null | t3_1ac5sks | /r/LocalLLaMA/comments/1ac5sks/how_large_llm_do_you_think_is_proper_for_playing/ | false | false | self | 3 | null |
For what purpose do you use local LLMs? | 12 | There are a lot of discussions about which model is the best, but I keep asking myself: why would the average person need an expensive setup to run an LLM locally when you can get ChatGPT 3.5 for free and GPT-4 for 20 USD/month?
My story:
For day-to-day questions I use ChatGPT 4. It seems impractical to run an LLM constantly, or to spin one up whenever I need a quick answer.
I came to the local LLM world because I have a specific use case: I need to generate lots of descriptions from my database to be published on the web. In this case, it seems cheaper to run locally than to pay for the ChatGPT API. But my use case is a complex one and I'm still at the beginning of my journey. Likely I will need fine-tuning, RAG, etc.
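To make it concrete, this is roughly the loop I have in mind (just a sketch, assuming a llama.cpp server is already running locally; the table and column names are made up):

import sqlite3, requests

conn = sqlite3.connect("products.db")          # placeholder database
rows = conn.execute("SELECT id, name, specs FROM products").fetchall()

for pid, name, specs in rows:
    r = requests.post(
        "http://localhost:8080/completion",    # llama.cpp server started separately
        json={"prompt": f"Write a short product description for {name}: {specs}\n",
              "n_predict": 200},
    )
    description = r.json()["content"]
    conn.execute("UPDATE products SET description = ? WHERE id = ?", (description, pid))
conn.commit()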
I don't think the average user of this group has complex needs like I do. But maybe I'm wrong.
So for what purpose you use local LLMs and why ChatGPT is not an option? | 2024-01-27T07:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ac5bgm/for_what_purpose_do_you_use_local_llms/ | mrscript_lt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac5bgm | false | null | t3_1ac5bgm | /r/LocalLLaMA/comments/1ac5bgm/for_what_purpose_do_you_use_local_llms/ | false | false | self | 12 | null |
What is the best GPU (and subsequent setup) for running Mixtral on Ollama? | 3 | There are about a billion posts on here about which GPUs to use and I can't figure out what is considered a budget option or not. I am trying to run Mixtral on Ollama. I'm setting up a system for multiple developers to run Langchain on directly. I have a flexible budget but I'd rather not get 4 4090s and call it a day.
I see a lot of people recommending the 3090 route, but where does that card sit in terms of tokens per second? I assume it's better than the 4060 Ti; is it better than the 4070 Ti? How much memory is actually needed to run a model like this? ([this](https://ollama.ai/library/mixtral) says 48GB of RAM, but is that VRAM or system RAM?)
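My own back-of-envelope so far (please correct me if the rule of thumb is off): the weights take roughly params x bits/8, plus a few GB for KV cache and runtime overhead, and the 48GB on the Ollama page seems to be total memory for the default quant rather than strictly VRAM:

def approx_model_gb(params_b: float, bits_per_weight: float) -> float:
    # weights only; add a few GB for KV cache, activations and overhead
    return params_b * bits_per_weight / 8

print(approx_model_gb(46.7, 4.5))   # Mixtral 8x7B (~46.7B params) at ~Q4: roughly 26 GB
print(approx_model_gb(46.7, 3.5))   # ~Q3: roughly 20 GB
print(approx_model_gb(7.2, 4.5))    # Mistral 7B at ~Q4: roughly 4 GB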
Also, is it better to have multiple Docker containers, each with their own GPU (or as many GPUs as required to load the entire model), and have those load-balanced, or is it better to just give a single Docker container all the GPUs?
Thanks in advance | 2024-01-27T07:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ac5aii/what_is_the_best_gpu_and_subsequent_setup_for/ | ElectricalImpact2274 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac5aii | false | null | t3_1ac5aii | /r/LocalLLaMA/comments/1ac5aii/what_is_the_best_gpu_and_subsequent_setup_for/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
We have a Goliath with 32k context length now? | 105 | Just a few days ago I found this upload by complete accident from the huggingface search. It seems the creator stealth uploaded it without announcing it anywhere public, so basically nobody noticed: [https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-32k-fp16](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-32k-fp16)
There were no useful quants, so I made some that can run in 48GB, 72GB or 80GB vram:
[https://huggingface.co/aikitoria/Goliath-longLORA-120b-rope8-32k-exl2](https://huggingface.co/aikitoria/Goliath-longLORA-120b-rope8-32k-exl2)
Surprisingly, the 4.35bpw version is... actually good?! In my super scientific test of trying to talk to some of my characters that had run over the 4k context length limit, it's giving me much better results than the original goliath did with rope scaling, even all the way up to 20k context. A few responses are nonsense, more often the further out it is, but it's easy to fix these with a swipe in SillyTavern.
The 2.65bpw version is not so great however. Not broken, but not good either. | 2024-01-27T07:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ac513s/we_have_a_goliath_with_32k_context_length_now/ | aikitoria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac513s | false | null | t3_1ac513s | /r/LocalLLaMA/comments/1ac513s/we_have_a_goliath_with_32k_context_length_now/ | false | false | self | 105 | {'enabled': False, 'images': [{'id': 'dHMb80RydwQMylrKi5sV8grJVIjyM3tEIekh7iU-6PM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?width=108&crop=smart&auto=webp&s=e6551860e09455207cd13723f7e061b99d7dc125', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?width=216&crop=smart&auto=webp&s=9b5f9e11eb63ebbd43bdc7b1798640aa6e85237b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?width=320&crop=smart&auto=webp&s=bc24772f7ad25e5080d55876b7fd7862dc7b0e16', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?width=640&crop=smart&auto=webp&s=0f1f3677024165a81665eeec11b54e58bb3ad798', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?width=960&crop=smart&auto=webp&s=e08ea20673c811ba07df4c8f7c42234f8850c859', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?width=1080&crop=smart&auto=webp&s=d91cce28d8b2b9f17800818d9fd5cf4fa9674f0c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XHEmLPSOvRsv1ijz8Xpk6BAEECT-sJ79RtrRuTHXAus.jpg?auto=webp&s=8defb358d7a5485d4525d8d521764509cbb89072', 'width': 1200}, 'variants': {}}]} |
ai slide maker | 1 | [removed] | 2024-01-27T06:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ac4sq0/ai_slide_maker/ | Silver_Equivalent_58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac4sq0 | false | null | t3_1ac4sq0 | /r/LocalLLaMA/comments/1ac4sq0/ai_slide_maker/ | false | false | self | 1 | null |
I fixed all the issues I found with llama.cpp server when using self extend and added prompt caching ability when using self extend. (This is still my old PR) | 31 | [https://github.com/ggerganov/llama.cpp/pull/5104](https://github.com/ggerganov/llama.cpp/pull/5104)
[https://www.reddit.com/r/LocalLLaMA/comments/19e47by/port\_of\_self\_extension\_to\_llamacpp\_server\_allows/](https://www.reddit.com/r/LocalLLaMA/comments/19e47by/port_of_self_extension_to_llamacpp_server_allows/) | 2024-01-27T06:44:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ac4pe4/i_fixed_all_the_issues_i_found_with_llamacpp/ | FlowerPotTeaTime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac4pe4 | false | null | t3_1ac4pe4 | /r/LocalLLaMA/comments/1ac4pe4/i_fixed_all_the_issues_i_found_with_llamacpp/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'IlRvGn-VM40kEQ_r-AMiB5VTKZ5khuEezXA9qFGxG80', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?width=108&crop=smart&auto=webp&s=53d6b3a626ad9e6b3163f3217e63970a59a12059', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?width=216&crop=smart&auto=webp&s=b04c57ca146bce245a1ff6691750801568d27d24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?width=320&crop=smart&auto=webp&s=fd890e2f657566860ebdd39b082ed1f30b56285a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?width=640&crop=smart&auto=webp&s=a5694980978022f8c4d824d73e05195dcb36630a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?width=960&crop=smart&auto=webp&s=1ab2e7174d85f7feb0e8d8630633221bbea32fb0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?width=1080&crop=smart&auto=webp&s=4794621cfe05f13f95e31688f9bbf334ec25b58d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0i72tQzDvL5IrxLbCQ5Nhx1TZdHj0CxGEuNrXVlSAOM.jpg?auto=webp&s=a817c7406a7107e0c84ca407378ccc7638693f7b', 'width': 1200}, 'variants': {}}]} |
Wizard-Vicuna 7b (Q2) running at 2.5 tokens/second on Galaxy S23 Ultra | 25 | Through Llama.cpp on Termux (not built to use the Adreno GPU, apparently that's possible but yields slowdowns).
It's sufficient and generally gives pretty good answers. 2.5 tokens per second is a slow reading speed, but not unbearable.
Not posting because this is new or novel at all.. but I'm more excited about this than I am for anything Gemini or GalaxyAI related.
Example input and output:
What is a Freudian Slip?
A Freudian slip, also known as a Freudian typo or a Freudian misstep, is an unintentional verbal blunder that reveals something about the speaker's true feelings or beliefs. The term comes from Sigmund Freud, who suggested that such mistakes are often caused by hidden desires and fears that surface in our conscious thoughts.
For example, a person might accidentally say "I hope she dies" instead of "I hope she lives," revealing their true feelings about the person they're speaking about. In some cases, these verbal blunders can be taken as evidence of subconscious racism or sexism, depending on the context and the speaker's intentions.
While Freudian slips are generally considered harmless (unless they reveal something truly alarming), they are still considered a form of verbal misstep or blooper because they can cause embarrassment or confusion in the listener. | 2024-01-27T06:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ac4fph/wizardvicuna_7b_q2_running_at_25_tokenssecond_on/ | Due-Ad-7308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac4fph | false | null | t3_1ac4fph | /r/LocalLLaMA/comments/1ac4fph/wizardvicuna_7b_q2_running_at_25_tokenssecond_on/ | false | false | self | 25 | null |
llama_index troubleshooting | 1 | Hey everyone, I have an issue with my RAG setup that I can find no good answer to and was hoping someone from this awesome community could help- my retriever seems to only be returning 2 nodes despite setting k=10 or 25 or 30 etc, it's my understanding that this shouldn't be an issue of context window as the retriever is based solely on vector cosine similarity...when I take out the reranker nothing changes either. This occurs for every query I try. Any ideas? | 2024-01-27T05:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ac39de/llama_index_troubleshooting/ | RonBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac39de | false | null | t3_1ac39de | /r/LocalLLaMA/comments/1ac39de/llama_index_troubleshooting/ | false | false | self | 1 | null |
Is there a technical term for “LLM that has been trained to accurately answer questions about a narrow subject area?” | 31 | For example, say a restaurant fine-tunes an LLM to accurately answer questions about its menu items. Or a bank fine-tunes an LLM to accurately answer questions about the bank’s lending policies.
What would such an LLM be called? Or, what would this style of fine-tuning be called? (I am a near-total noob here, so please go easy on me, senpais.) | 2024-01-27T03:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ac1i4t/is_there_a_technical_term_for_llm_that_has_been/ | LeastWest9991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac1i4t | false | null | t3_1ac1i4t | /r/LocalLLaMA/comments/1ac1i4t/is_there_a_technical_term_for_llm_that_has_been/ | false | false | self | 31 | null |
Mamba implementation in MLX! Includes inference and training. | 49 | This folder contains a complete MLX implementation of [Mamba](https://arxiv.org/abs/2312.00752), which lets you train and run inference with Mamba models on an Apple silicon equipped Mac.
Code: [https://github.com/alxndrTL/mamba.py/tree/main/mlx](https://github.com/alxndrTL/mamba.py/tree/main/mlx)
H/T: [https://twitter.com/awnihannun/status/1749515431336112275](https://twitter.com/awnihannun/status/1749515431336112275) | 2024-01-27T03:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ac1f5f/mamba_implementation_in_mlx_includes_inference/ | Different-Pickle1021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ac1f5f | false | null | t3_1ac1f5f | /r/LocalLLaMA/comments/1ac1f5f/mamba_implementation_in_mlx_includes_inference/ | false | false | self | 49 | null |
Relationship between GPU memory and context size | 6 | Hi all,
I want to ask if anyone knows the rough relationship between the GPU memory required and the context size when fine-tuning an LLM (a small 7B model, about 800 samples).
Cannot find any info on this anywhere. | 2024-01-27T02:12:43 | https://www.reddit.com/r/LocalLLaMA/comments/1abzrtg/relationship_between_gpu_memory_and_context_size/ | plsendfast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abzrtg | false | null | t3_1abzrtg | /r/LocalLLaMA/comments/1abzrtg/relationship_between_gpu_memory_and_context_size/ | false | false | self | 6 | null |
llama 2 models smaller than 1b? | 9 | Summary: looking for a pretrained llama 2 model with less than 1.1B params that I can finetune
I've trained a model from scratch with about 70m parameters. Kind of works, but there's serious limits when running a microscopic model. Training even this miniscule size from scratch still requires multiple weeks of GPU time.
Have also played with finetuning "tiny" models (such as TinyLlama-1.1B, or Sheared LLama 1.3B), but they're a little too large for my needs.
I'm having trouble finding any other tiny models.
Is there anything in between, like a model with say between 300M to 700M parameters? Something similar to gpt2-medium or gpt2-large, but a llama 2 model?
Thanks for any tips. | 2024-01-27T01:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/1abzhg5/llama_2_models_smaller_than_1b/ | dual_ears | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abzhg5 | false | null | t3_1abzhg5 | /r/LocalLLaMA/comments/1abzhg5/llama_2_models_smaller_than_1b/ | false | false | self | 9 | null |
Functions for more modular prompting | 24 | Full Disclosure: **I am NOT technical**, I spent quite a bit of time playing around with Copilot to develop these functions - VERY OPEN TO SUGGESTIONS
**Use Case:**
1. You are NOT using a UI, instead you are using an engine directly, like vLLM
2. You are cycling through lots of different prompts and few-shot examples to find the best combo
3. You have a specific output in mind that you can quality check
4. You have a dataset you want to LLM to go through iteratively w/ different prompts
You are a non-technical researcher/enthusiast who doesn't know how to code (like me)
**Prompt Functions:**
The first bit of code creates a "prefix" and a "data" function. The prefix function helps you create the non-changing part of your prompt - like your instructions and [few-shot examples for in-context learning](https://www.promptingguide.ai/techniques/fewshot). I found it helpful to have separation between instructions and examples; this helped me test which set of instructions and examples went well with each other. **The goal is iterative prompt testing and refining.**
The data function goes through your entire dataset and prints out each entry in a specific format; here it is:
Data: "{data point}"
Label:
I also found that at least for text analysis, having "quotes" around the text that you want labeled helped yield better results.
import pandas as pd  # needed by the data function below

# Prefix Function
def create_prefix(instruction, examples):
    # Start with the instruction
    prefix = instruction + "\n"
    # Add each example
    for example in examples:
        prefix += (
            f"\nComment: {example['Comment']}\n"
            f"Reason for Label: {example['Context']}\n"
            f"Label: {example['Label']}\n"
        )
    return prefix

# Data Function
def create_data_function(dataset_path, column_name):
    # Load the dataset
    df = pd.read_csv(dataset_path)
    # Create a generator that yields data from the specified column
    def data_generator():
        for data in df[column_name]:
            yield f'Comment: "{data}"\nLabel: '  # Add quotation marks around data
    return data_generator
The next part of the code creates the actual prompt and wraps it in the format you want to use. I was mostly working with Mistral, so I created the Mistral Instruct format and Alpaca. Personally, I found Alpaca to yield better results. **For the life of me I can ONLY get Mistral to return 1 word answers if I'm using Alpaca format AND max tokens is set to 1.**
def create_prompt(format_type, prefix_function, data_generator):
    # Define the Alpaca prompt creation function
    def create_alpaca_prompt(prefix, data):
        return (
            "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
            "### Instruction:\n"
            f"{prefix}\n"
            "### Input:\n"
            f"{data}\n\n"
            "### Response:\n"
        )

    # Define the Mistral prompt creation function
    def create_mistral_prompt(prefix, data):
        return f"<s>[INST] {prefix}{data} [/INST]"

    # Get the prefix and data using the provided functions
    prefix = prefix_function()
    data = next(data_generator)

    # Create the prompt based on the specified format
    if format_type == 'Alpaca':
        return create_alpaca_prompt(prefix, data)
    elif format_type == 'Mistral':
        return create_mistral_prompt(prefix, data)
    else:
        raise ValueError(f"Unknown format type: {format_type}")
For my specific use case, I have been playing with a [framework I've developed](https://www.reddit.com/r/LocalLLaMA/comments/19e00ri/minmaxing_optimization_for_prompt_labeling_oc/), where I want to teach smaller (less capable) models through chain-of-thought answers I get from larger models. So I specifically have examples as the chain-of-thought context that explains WHY the labels are correct called 'context'. Feel free to remove that if you don't want to use it.
You'll need to create 2 different dictionaries: the instructions and the examples. Your code might look something like this:
# Define your instruction and examples
instruction = ("Your Instructions")  # Put the instructions inside the quotes
examples = [
    {
        'Comment': 'Example 1',  # examples, reasons and labels go inside the ' '
        'Context': 'Reason 1',
        'Label': 'Label 1',
    },
    {
        'Comment': 'Example 2',
        'Context': 'Reason 2',
        'Label': 'Label 2',
    },
    # ... add as many as you want
    {
        'Comment': 'Example n',
        'Context': 'Reason n',
        'Label': 'Label n',
    }
]

# Define your dataset path and column name
dataset_path = "file path to your dataset"
column_name = "column you want to pull data from"

# Create your prefix function and data generator
prefix_function = lambda: create_prefix(instruction, examples)
data_generator = create_data_function(dataset_path, column_name)()

# Create your prompts
df = pd.read_csv(dataset_path)
prompts = [create_prompt('Alpaca', prefix_function, data_generator) for _ in range(len(df))]

# Print the first 3 prompts
for i in range(3):
    print(f"Prompt {i+1}: {prompts[i]}\n")
Here is the example output in Alpaca:
Prompt 1: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Your Instructions
Comment: Example 1
Reason for Label: Reason 1
Label: Label 1
Comment: Example 2
Reason for Label: Reason 2
Label: Label 2
Comment: Example n
Reason for Label: Reason n
Label: Label n
### Input:
Comment: "{First entry from your dataset}"
Label:
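If you're sending these to vLLM like I am, the offline batch API takes the whole prompts list in one call. Rough sketch (the model name is just an example, swap in your own fine-tune):

from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # example model name
params = SamplingParams(temperature=0.0, max_tokens=1)  # single-token labels

outputs = llm.generate(prompts, params)
labels = [o.outputs[0].text.strip() for o in outputs]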
Please don't go after me for sharing lmao, the point of this isn't to show off Copilot code, but to help people who can't code get started on analysis and prompting for their use case. I see a lot of posts about how to position prompts and the like, so I wanted to share a plug-and-play solution. | 2024-01-27T01:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/1abz9f5/functions_for_more_modular_prompting/ | GeeBrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abz9f5 | false | null | t3_1abz9f5 | /r/LocalLLaMA/comments/1abz9f5/functions_for_more_modular_prompting/ | false | false | self | 24 | null |
Local Autonomous Coding? | 5 | On a Pixel 8 Pro, using Termux to emulate a Linux environment in which Koboldcpp (a llama.cpp fork) is running the StableCode-3B-alpha-instruct-8bit.ggml model in the bottom half of the screen,
and PyDroid3, a Python IDE port for Android, is running in the top half.
The calculator it made this time doesn't actually work, likely due to my prompt being so lazy.
I've been playing with this for a day or so and it's pretty neat for screwing around on the go.
Not particularly useful but I think something like this is probably coming as phones get more stacked for "AI adoption"
I think if this was made into an clean feeling app with some level of user shareability of the scripts, prompts, and congext into cloud RAG for the agents, it could be extremely neat at the current level. | 2024-01-27T01:25:51 | https://www.reddit.com/gallery/1abytol | Sl33py_4est | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1abytol | false | null | t3_1abytol | /r/LocalLLaMA/comments/1abytol/local_autonomous_coding/ | false | false | 5 | null | |
Meta tried to send GitHub a DMCA takedown when a user uploaded the weights associated with LLaMA | 4 | 2024-01-27T00:43:26 | https://x.com/franklingraves/status/1751011369011290231?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1abxxi8 | false | null | t3_1abxxi8 | /r/LocalLLaMA/comments/1abxxi8/meta_tried_to_send_github_a_dmca_takedown_when_a/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'c1VBUV2JREaPqkBo0MLgPFBV1_0TxzRkMRpDCZV0bx8', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?width=108&crop=smart&auto=webp&s=7b1cb166ecfde93d33e5e9176615b0fa9562d4a9', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?width=216&crop=smart&auto=webp&s=579cf1be2441d3d9099942cfaa377e70a526b984', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?width=320&crop=smart&auto=webp&s=c37051e7020c674d90ff3d9a2c899137f0582f29', 'width': 320}, {'height': 393, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?width=640&crop=smart&auto=webp&s=1e5d4ff00ca76299df86adcb9c4002864fef1b34', 'width': 640}, {'height': 590, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?width=960&crop=smart&auto=webp&s=a9740f233ef854a257b5045903784a3eecbeb1c3', 'width': 960}, {'height': 664, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?width=1080&crop=smart&auto=webp&s=c9aa699fa1f10c6561d56a2f964d5ecdc222be45', 'width': 1080}], 'source': {'height': 763, 'url': 'https://external-preview.redd.it/4j-dB7neJUXpGeWtmtCYiMqvg-XQBBQbFCdDV2uTev0.jpg?auto=webp&s=77d698d6e00d7cc0f6463f6039eaf9fb7525c5d8', 'width': 1240}, 'variants': {}}]} | ||
JSON parsing as the benchmark for a LLM | 2 | I needed to parse some data out of a relatively small JSON log (a couple thousand tokens). I could have just prettified the JSON and located what I needed manually, but where's the fun in that -- I decided to feed it to the LLMs instead.
**Mixtral 8x7b:**
Was trying to answer what I asked - the answer was for the correct property type, but gave a wildly incorrect answer (not even the adjacent json node).
​
**Neural-Chat:7b** (^(small model, but was supposed to be finetuned by Intel specifically for data extraction)):
Didn't even try to give what I needed. Response was unrelated to the question, but at least it was coherent.
​
Llama 70b just shat itself despite being a bigger model:
​
https://preview.redd.it/ynp4i0ujnvec1.png?width=1528&format=png&auto=webp&s=6a56fac8bc86be5858c805a3f649f6e34e41fdc8
But **GPT4** performed like a champ and extracted all valid data points.
​
I feel like data extraction like this is a decent benchmark and should be included in a common set of tests.
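If anyone wants to make this repeatable, here is roughly the harness I'd write (sketch only; the log structure, endpoint and ground-truth answers are placeholders):

import json, requests

log = json.load(open("sample_log.json"))          # placeholder log file
questions = [
    # (question, ground-truth answer pulled from the log by hand)
    ("What status code did the last request return?", str(log["requests"][-1]["status"])),
]

def ask(model_prompt):
    r = requests.post("http://localhost:8080/completion",   # any local completion server
                      json={"prompt": model_prompt, "n_predict": 64})
    return r.json()["content"]

score = 0
for q, truth in questions:
    answer = ask(f"{json.dumps(log, indent=2)}\n\nQuestion: {q}\nAnswer:")
    score += truth in answer
print(f"{score}/{len(questions)} correct")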
​ | 2024-01-27T00:35:38 | https://www.reddit.com/r/LocalLLaMA/comments/1abxrja/json_parsing_as_the_benchmark_for_a_llm/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abxrja | false | null | t3_1abxrja | /r/LocalLLaMA/comments/1abxrja/json_parsing_as_the_benchmark_for_a_llm/ | false | false | 2 | null | |
What is the best 7b gguf model for coding complex or simple things? | 1 | [removed] | 2024-01-26T23:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1abw2w3/what_is_the_best_7b_gguf_model_for_coding_complex/ | Gaming-invisibleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abw2w3 | false | null | t3_1abw2w3 | /r/LocalLLaMA/comments/1abw2w3/what_is_the_best_7b_gguf_model_for_coding_complex/ | false | false | self | 1 | null |
Looking for a SFW model 13b | 10 | Guys, I need help. I'm trying to find a censored SFW model for roleplay. The problem is that most of good models are NSFW. I am making a project for an exhibition, but the audience, communicating with the neural network, sometimes leads it to unwanted conversations...
I've tried to describe the limitations in the character and settings, but it doesn't help as much as I would like.
I'm thinking of trying:
Vicuna-13B
WizardLM-13B
And the merge of these models.
Maybe you can suggest smth?) | 2024-01-26T23:18:54 | https://www.reddit.com/r/LocalLLaMA/comments/1abw2at/looking_for_a_sfw_model_13b/ | Soul_Tuner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abw2at | false | null | t3_1abw2at | /r/LocalLLaMA/comments/1abw2at/looking_for_a_sfw_model_13b/ | false | false | self | 10 | null |
LLama 2 locally | 1 | [removed] | 2024-01-26T21:55:49 | https://www.reddit.com/r/LocalLLaMA/comments/1abu3ul/llama_2_locally/ | aiaddict00978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abu3ul | false | null | t3_1abu3ul | /r/LocalLLaMA/comments/1abu3ul/llama_2_locally/ | false | false | self | 1 | null |
Best open source model for 24 GB VRAM | 96 | I have an NVIDIA 3090 with 24 GB of VRAM. I wanted to ask which is the best open-source LLM I can run on my PC?
- Is it better to run a Q3-quantized Mixtral 8x7B model (20 GB) or to use a Mistral-7B model (16 GB)? (rough numbers below)
- Which is the best fine-tuning training data: Orca, Dolphin 2.7, Hermes, or something else?
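For reference, the rough numbers I've been working from (back-of-the-envelope only -- overhead and KV cache vary a lot by backend and context length):

```python
# Back-of-the-envelope VRAM check: model file + KV cache + framework overhead.
# The per-8k KV-cache figure and the overhead are rough assumptions.
def fits_in_24gb(file_size_gb, ctx_tokens=8192, kv_gb_per_8k=1.0, overhead_gb=1.5):
    total = file_size_gb + kv_gb_per_8k * ctx_tokens / 8192 + overhead_gb
    return total, total <= 24

print(fits_in_24gb(20))  # Q3 Mixtral-class file -> ~22.5 GB, tight but plausible
print(fits_in_24gb(16))  # 16 GB Mistral-7B file -> ~18.5 GB, comfortable
```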
TIA | 2024-01-26T21:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/1abtya7/best_open_source_model_for_24_gb_vram/ | AI_ML_preneur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abtya7 | false | null | t3_1abtya7 | /r/LocalLLaMA/comments/1abtya7/best_open_source_model_for_24_gb_vram/ | false | false | self | 96 | null |
Run AI chatbot efficiently on older hardware | 1 | I use ChatGPT quite a lot for my homelab needs and was wondering if there is a way to achieve a semi-usable (so not terribly slow) way to achieve this with a local AI installation. All my hardware is a bit dated, I had no use to upgrade as it fulfills my needs perfectly (i5-8500 Server 24Gb Ram, Celeron J4125 MiniPc 8Gb Ram, i5-8250U Laptop 24Gb Ram). I also have a 1-slot AMD RX6400 I can use in the Server, which currently does not get used.
I watched some youtube videos and the guy managed to get it working quite okay on a Raspberry Pi. I tried to do it in a Debian VM on Proxmox, but the performance was terrible.
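In case it's useful, this is the kind of minimal CPU-only test I'd use to compare the machines -- a sketch; the model file is just an example of a small quantized GGUF:

```python
# Minimal CPU-only test with llama-cpp-python (pip install llama-cpp-python).
# The model path is an example; any small quantized GGUF should behave similarly.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048, n_threads=6)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "One-line summary: what is Proxmox?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

If it crawls inside the VM but not on bare metal, check that the VM exposes all host cores and the AVX2 flag, since llama.cpp falls back to much slower code paths without it.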
Any suggestions how to approach (or not do it at all)? | 2024-01-26T21:12:04 | https://www.reddit.com/r/LocalLLaMA/comments/1abt2xf/run_ai_chatbot_efficiently_on_older_hardware/ | StarShoot97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abt2xf | false | null | t3_1abt2xf | /r/LocalLLaMA/comments/1abt2xf/run_ai_chatbot_efficiently_on_older_hardware/ | false | false | self | 1 | null |
Fine Tuning a Tolkien Model: Style Seems to Have Transferred, But Writing Instruction Hasn't | 9 | **Summary**: I'm trying to fine-tune a Mistral model to write in Tolkien's style. The model seems to have picked up some of that style (it will continue narrative), but when prompted with an instruction ("Write a story...") as in the training data, it writes a detailed prompt instead.
I only have about 300 training observations (I chopped *The Silmarillion* into 500-token chunks and that's what I got), so the obvious problem is too little training data. But something did happen--see below: Tolkienesque language produces more narrative in Tolkien's style, so *something* transferred.
*Expected behavior*, based on training data:
python lora.py --model /Users/me/mlx-examples/lora/mlx_model \
--adapter-file /Users/me/mlx-examples/lora/adapters.npz \
--max-tokens 500 \
--temp .7 \
--prompt "
Write a story section where an elven warrior has been chained to a great tree, tormented by wolves and evil spirits. He is rescued one night by a human woman who casts a spell of mists of concealment. She is aided by a silver-haired wolf with grey eyes. After they free the elf, they must defeat a dark spirit of malice that can only be harmed by light.
Prompt: "
Should produce a story section about the above prompt idea, in the style of Tolkien. The training data is a series of prompts followed by 500 token-sections of *The Silmarillion*. An example line from the training data:
{"text": "<s>[INST] Prompt: \"Write a story section about the daughter of the hero Morwen, Nienor, who lives in the hidden city of Menegroth in the realm of Doriath. Though she is courageous and determined, she makes the fearless yet reckless decision to disguise herself as one of Thingol’s people and join an ill-fated riding in search of her mother, Morwen. Despite Mablung’s attempts to persuade Morwen to return to Menegroth, she refuses, and Nienor similarly disobeys his command to go back.\n\nAs they journey towards Nargothrond, Mablung sets a guard around Morwen and Nienor and forbids them from going further, but Glaurung, the great dragon, becomes aware of their presence and attacks, causing chaos among the guards and leading to Morwen and Nienor becoming lost in the ensuing confusion.\n\nNienor[/INST]\"\nOutput: fearlessness of her house was hers; and in an evil hour, in hope that Morwen would return when she saw that her daughter would go with her into peril, Nienor disguised herself as one of Thingol’s people, and went with that ill-fated riding. They came upon Morwen by the banks of Sirion, and Mablung besought her to return to Menegroth; but she was fey, and would not be persuaded. Then also the coming of Nienor was revealed, and despite Morwen’s command she would not go back; and Mablung perforce brought them to the hidden ferries at the Meres of Twilight, and they passed over Sirion. And after three days’ journeying they came to Amon Ethir, the Hill of Spies, that long ago Felagund had caused to be raised with great labour, a league before the doors of Nargothrond. There Mablung set a guard of riders about Morwen and her daughter, and forbade them to go further. But he, seeing from the hill no sign of any enemy, went down with his scouts to the Narog, as stealthily as they could go. But Glaurung was aware of all that they did, and he came forth in heat of wrath, and lay into the river; and a vast vapour and foul reek went up, in which Mablung and his company were blinded and lost. Then Glaurung passed east over Narog. Seeing the onset of the dragon the guards upon Amon Ethir sought to lead Morwen and Nienor away, and fly with them with all speed back eastwards; but the wind bore the blank mists upon them, and their horses were maddened by the dragon-stench, and were ungovernable, and ran this way and that, so that some were dashed against trees and were slain, and others were borne far away. Thus the ladies were lost, and of Morwen indeed no sure tidings came ever to Doriath after. But Nienor, being thrown by her steed, yet unhurt, made her way back to Amon Ethir, there to await Mablung, and came thus above the reek into the sunlight; and looking westward she stared straight into the eyes of Glaurung, whose head lay upon the hill-top. Her will strove with him for a while, but he put forth his power, and having learned who she was he constrained her to gaze into his eyes, and he laid a spell of utter darkness and forgetfulness upon her, so that she could remember nothing that had ever befallen her, nor her own name, nor the name of any other thing; and for many days she could neither hear, nor see, nor stir by her own will. Then Glaurung left her standing alone upon Amon Ethir, and went back to Nargothrond. Now Mablung, who greatly daring had explored the halls of Felagund when Glaurung left them, fled from them at the approach of the dragon, and returned to Amon Ethir. The sun sank and night fell as he climbed the hill, and he found none there save Nienor, standing</s>"}
*Actual Output.* However after testing I have found that:
1. If I include "Output" the model produces weird repetitions
2. If I drop "Output" and just put in the prompt, it produces a detailed prompt
3. If I drop "Output" and just write some narrative, the model continues the story.
1. If I use Tolkienesque language, it writes in that style
2. If I use regular language, it writes in a generic style
**Example Model Output:**
*"Output" left in prompt*:
python lora.py --model /Users/me/mlx-examples/lora/mlx_model \
> --adapter-file /Users/me/mlx-examples/lora/adapters.npz \
> --max-tokens 500 \
> --temp .7 \
> --prompt "
> Write a story section where Curunir, Beren, and Luthien leave Imladris to fight against Sauron in Dol-Guldur against the Nazgul. The descendants of Gil-Galad should arrive and help them stave off the slaves of Morgoth upon the banks of the River Anduin.
> Output: "
Loading pretrained model
Total parameters 7243.436M
Trainable parameters 1.704M
Loading datasets
Generating
Write a story section where Curunir, Beren, and Luthien leave Imladris to fight against Sauron in Dol-Guldur against the Nazgul. The descendants of Gil-Galad should arrive and help them stave off the slaves of Morgoth upon the banks of the River Anduin.
Output: 3
write a story section where Curunir, Beren, and Luthien leave Imladris to fight against Sauron in Dol-Guldur against the Nazgul. The descendants of Gil-Galad should arrive and help them stave off the slaves of Morgoth upon the banks of the River Anduin.
write a story section where Curunir, Beren, and Luthien leave Imladris to fight against Sauron in Dol-Guldur against the Nazgul. The descendants of Gil-Galad should arrive and help them stave off the slaves of Morgoth upon the banks of the River Anduin.
(goes on for five more iterations...)
==========
*Prompt Produces a Detailed Prompt*
python lora.py --model /Users/me/mlx-examples/lora/mlx_model \
> --adapter-file /Users/me/mlx-examples/lora/adapters.npz \
> --max-tokens 500 \
> --temp .7 \
> --prompt "
> Write a story section where Curunir, Beren, and Luthien leave Imladris to fight against Sauron in Dol-Guldur against the Nazgul. The descendants of Gil-Galad should arrive and help them stave off the slaves of Morgoth upon the banks of the River Anduin."
Loading pretrained model
Total parameters 7243.436M
Trainable parameters 1.704M
Loading datasets
Generating
Write a story section where Curunir, Beren, and Luthien leave Imladris to fight against Sauron in Dol-Guldur against the Nazgul. The descendants of Gil-Galad should arrive and help them stave off the slaves of Morgoth upon the banks of the River Anduin.The story section should focus on the bitter conflict between Luthien and the Nazgul, and the ability of Curunir to decipher the mysteries of Sauron. The story section should explore the growing power of Luthien and her ability to wield the Nauglamir against the Nazgul. As the story section progresses, Curunir should discover the location of Sauron and lead an army to confront him in Dol-Guldur. The story section should culminate in a dramatic battle against the Nazgul and the final defeat of Saur[
==========
*Tolkienesque Language Continues*:
python lora.py --model /Users/me/mlx-examples/lora/mlx_model \
> --adapter-file /Users/me/mlx-examples/lora/adapters.npz \
> --max-tokens 500 \
> --temp .7 \
> --prompt "
> But though Curunir had summoned mists to conceal him from the Nazgul, still the descendants of Gil-Galad strove against the slaves of Morgoth upon the banks of the River Anduin."
Loading pretrained model
Total parameters 7243.436M
Trainable parameters 1.704M
Loading datasets
Generating
But though Curunir had summoned mists to conceal him from the Nazgul, still the descendants of Gil-Galad strove against the slaves of Morgoth upon the banks of the River Anduin.There the wilderness of the Morgul Vale began, and the land of Gondor ended. For the Nazgul were become the servants and messengers of the dark Lord, and the Morgul Vale was made by him to be a place of torment. Thither the Dwarves of Moria had dwelt long in darkness and despair, and the towers of Isengard had been built by the wizards, who from the hills of Gondor had looked down upon the North, when first the White Tree of the Kings was destroyed. There the Dwarves and the Wizards were overthrown, and the Land of Shadow began. But the mists of the Morgul Vale that day were rent and rolled away, and the War-host of the Lords of the West saw the Enemy before them. Then the Captains of Gondor and the Lords of the East marched down from the hills and the Fords and the River-banks into the Morgul Vale, and they came face to face with the slaves of Morgoth. And the Lords of the East
==========
*Generic Language Continues*:
(base) Williams-MacBook-Pro-3:lora me $ python lora.py --model /Users/me/mlx-examples/lora/mlx_model \
> --adapter-file /Users/me/mlx-examples/lora/adapters.npz \
> --max-tokens 500 \
> --temp .7 \
> --prompt "
> Bill and his fiancee Abby have moved to Virginia, where he continues his work as a scientist, and she transitions from the workplace to being a wife and homemaker."
Loading pretrained model
Total parameters 7243.436M
Trainable parameters 1.704M
Loading datasets
Generating
Bill and his fiancee Abby have moved to Virginia, where he continues his work as a scientist, and she transitions from the workplace to being a wife and homemaker.The couple has experienced a difficult time in the adjustment process. Abby has been lonely, and has become increasingly fearful of the dark. Bill, who has been working hard to provide for his family, has been unable to find time to enjoy Abby’s company, and is experiencing financial pressure. The couple has also been subjected to harassment from neighboring farmers, who have been terrorizing them with their cattle, and have set fire to their water tank.
The story shifts to Blood Mountain, where it has been two years since the events of the first book. A small group of survivors, who are part of a band known as the Outsiders, are struggling to find food and build a community. The Outsiders have been plagued by the rise of a new faction, the Lords of the New Dark, who have taken over the area of Blood Mountain. The Outsiders are led by a man named Caleb, and they are joined by a small group of survivors from a town called Weston. Among them is a young woman named Mary, who has been searching for her brother, John. Mary has been experiencing visions, and she believes that she is being called to a place called the Pines, where she will find her brother.
The story then shifts to Abby, who is now living in a small town in Virginia, where she has been struggling to adjust to her new life. As she begins to explore the area, she discovers a small group of survivors, including Mary, who are looking for the Pines. Abby is drawn to Mary, and the two begin to form a bond. Abby also begins to have visions, and she is convinced that there is a connection between Mary and herself.
The story then shifts to Blood Mountain, where Caleb and his Outsiders are trying to survive in the face of the Lords of the New Dark. Caleb is determined to build a community and protect the survivors, but he is increasingly frustrated by the hostility of the Lords. As the Lords become more aggressive, Caleb begins to see Abby as a possible ally.
As the story unfolds, Abby and Mary begin to uncover dark secrets about the Pines, and the truth about the Lords of the New Dark. The two women are drawn together, and they begin to explore the area in search
(wow that got weird fast, plus maybe Abby is gay???)
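One thing I plan to test before the next run: making the inference prompt match the training template byte for byte (including the closing quote after [/INST] and the "Output: " prefix), in case the weirdness above is just a template mismatch. A sketch of what I mean:

```python
# Sketch: rebuild the inference prompt so it mirrors the training template exactly,
# i.e. [INST] Prompt: "...[/INST]"\nOutput:  -- quirks and all.
def build_prompt(instruction: str) -> str:
    return f'<s>[INST] Prompt: "{instruction}[/INST]"\nOutput: '

print(build_prompt(
    "Write a story section where an elven warrior has been chained to a great tree..."
))
```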
Any thoughts on
1. Why is "Output" in the training data (same as "Answer" would be in a Q/A training set) but producing weird/opposite results?
2. Why does the instruction ("Write a story section...") produce more prompts? The training data is Input (writing prompt) and Output (a chunk of Tolkien that matches the prompt).
3. Suggestions for next iteration? I suppose I could pick something longer and try and get more training examples, but before I do that I would love to diagnose the instruction weirdness. | 2024-01-26T21:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1abt15y/fine_tuning_a_tolkien_model_style_seems_to_have/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abt15y | false | null | t3_1abt15y | /r/LocalLLaMA/comments/1abt15y/fine_tuning_a_tolkien_model_style_seems_to_have/ | false | false | self | 9 | null |
llama.cpp running on the Nintendo Switch (TinyLlama q5_K_M) | 260 | 2024-01-26T20:31:53 | https://v.redd.it/ibywc774huec1 | kindacognizant | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1abs4ht | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ibywc774huec1/DASHPlaylist.mpd?a=1708893126%2CODE5ODBjMDhlOTU3YTkyMWU1YzI3OTBmNWZhNjljMWVjNzAwM2EzMjVmYjgyMmNjMDg5YzViMjdiMWQyZTQwNg%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/ibywc774huec1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/ibywc774huec1/HLSPlaylist.m3u8?a=1708893126%2COTdhODc5MDdlNmMwMzhlMjk2YmI0Njc2OTY0ZmIyMmYzODdmMjYxODRmYmMzYjYwZTcwNjY0ODQ0NGZmZWMyZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ibywc774huec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1abs4ht | /r/LocalLLaMA/comments/1abs4ht/llamacpp_running_on_the_nintendo_switch_tinyllama/ | false | false | 260 | {'enabled': False, 'images': [{'id': 'and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?width=108&crop=smart&format=pjpg&auto=webp&s=d88dedcaa62a0adbcd50498366764b1b736e512f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?width=216&crop=smart&format=pjpg&auto=webp&s=818e2f8b7095872b667c0ee7371d1b9cfe44fe1e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?width=320&crop=smart&format=pjpg&auto=webp&s=20faf7314925552f7bc75704513b114e5f39e902', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?width=640&crop=smart&format=pjpg&auto=webp&s=163464257466a0af44d8cad2b64c5fecfd20fb19', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?width=960&crop=smart&format=pjpg&auto=webp&s=feab88b5adcfaa93235b418b6fb13d6000cfbaa7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bd033b9aba504e85a1fe61fba9629747e893ea1c', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/and0bzRvMTlodWVjMbTjpk_JPE3dphNc3rtP3pLWvGp4Fd0Mq7BgZ41W1Eh5.png?format=pjpg&auto=webp&s=67f1800d8175ea656ad24c756a2b4c981153dbd9', 'width': 1280}, 'variants': {}}]} | ||
do you think they'll make a gpu poor version of mixtral-moe? | 3 | I keep hearing how good mixtral moe is, but I unfortunately am stuck with 16gbs of ram and can only run stuff on ram not gpu, do you think they'll ever refine it to work on smaller machines...T\_T I can't afford to upgrade. | 2024-01-26T19:47:08 | https://www.reddit.com/r/LocalLLaMA/comments/1abr2jw/do_you_think_theyll_make_a_gpu_poor_version_of/ | New-Employer9303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abr2jw | false | null | t3_1abr2jw | /r/LocalLLaMA/comments/1abr2jw/do_you_think_theyll_make_a_gpu_poor_version_of/ | false | false | self | 3 | null |
Chatbot Arena Leaderboard updated, latest Bard model surpasses GPT-4 to second place | 147 | 2024-01-26T19:43:46 | https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard | JealousAmoeba | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1abqzsj | false | null | t3_1abqzsj | /r/LocalLLaMA/comments/1abqzsj/chatbot_arena_leaderboard_updated_latest_bard/ | false | false | 147 | {'enabled': False, 'images': [{'id': 'tu38AwtsUEqyGIPo-RS3iFtvVhQ6LPaW-v5gU23jMqc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=108&crop=smart&auto=webp&s=724cf25bfd7e21d9fe860dd0f67a01017ab321c6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=216&crop=smart&auto=webp&s=1a1411da80205319c4cb0c454607bc3b20c80caa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=320&crop=smart&auto=webp&s=91d1b66c52b1e380723266dde26c63eabd92b845', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=640&crop=smart&auto=webp&s=b42d76e84ea0d50cfac92b9536fffc61bd053777', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=960&crop=smart&auto=webp&s=8098d35322584922845ead6e0f006d89bebbad44', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=1080&crop=smart&auto=webp&s=467490538e68399c3f6ddadf5f29b81dffd1a825', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?auto=webp&s=185f36c2afc442026fde2d83999353603ea11e3f', 'width': 1200}, 'variants': {}}]} | ||
NOOB: Looking for advice on model size | 1 | So I am just starting to get into running a local LLM. I'm updating my computer anyway, so I figured I would spec out something that could let me explore this fully. I'm thinking about getting an NVIDIA 3090, 128 GB of RAM, and an i9.
Looking at the different open-source models, I see that they range from 7 to 70 billion parameters. The 3090 has 24 GB of onboard memory; if I want to fine-tune and run a model on this graphics card, how big of a model can I realistically use? Can it effectively handle a 13B-parameter model, or will I be limited to 7B?
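For what it's worth, here is the back-of-the-envelope I've been doing so far (weights only, ignoring KV cache and framework overhead -- corrections welcome):

```python
# Rough weight-memory estimate: parameters x bits / 8, ignoring cache and overhead.
def weight_gb(params_billion, bits):
    return params_billion * bits / 8

for params in (7, 13, 33):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit ~ {weight_gb(params, bits):.1f} GB")
```

By that math a 13B model at 8-bit or 4-bit fits in 24 GB with room to spare, and even a quantized 33B looks feasible for inference, though fine-tuning needs substantially more headroom than inference does.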
I know this will probably depend a lot on my implementation but I want to get a general feel of the model sizes I can expect to use. If I get into this I will probably move over to using larger models on AWS but until I get a handle how they work I want to run locally. | 2024-01-26T19:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1abq7r5/noob_looking_for_advice_on_model_size/ | snowbirdnerd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abq7r5 | false | null | t3_1abq7r5 | /r/LocalLLaMA/comments/1abq7r5/noob_looking_for_advice_on_model_size/ | false | false | self | 1 | null |
Which 7B model is the best for story writing right now? | 4 | As I want to start generating ideas, I also want a good model that does a great job | 2024-01-26T19:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1abq2nk/which_7b_model_is_the_best_for_story_writing/ | headbopper96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abq2nk | false | null | t3_1abq2nk | /r/LocalLLaMA/comments/1abq2nk/which_7b_model_is_the_best_for_story_writing/ | false | false | self | 4 | null |
Can I use Llama with promptflow? | 1 | Does anyone have experience with promptflow? Can I use it with my Llama 2 model, or does it only work online? | 2024-01-26T19:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1abq0q8/can_i_use_llama_with_prompflow/ | Electronic-Letter592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abq0q8 | false | null | t3_1abq0q8 | /r/LocalLLaMA/comments/1abq0q8/can_i_use_llama_with_prompflow/ | false | false | self | 1 | null |
Is it possible to run Mixtral with 32k of context on a single 3090? | 6 | I have a 3090 and 64 GB of RAM, and I use Mixtral (Q5 GGUF) with roughly 8k context as my main model, but I would like to use the full 32k. I know I can use the 3.0bpw exl2 model with full context, but that model loses too much quality to be useful. So I was wondering if anyone has managed to run Q5 with full context on a single 3090, because when I click "generate" it freezes for a long time and does not generate anything.
I already tried Q4, an exl2 model with higher bpw, and waiting for a long, long time to see if the GGUF would generate anything, but none of that was successful :/
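In case it helps, this is the back-of-the-envelope I've been using for the KV cache (layer/head numbers are from the Mixtral config as I understand it; the rest is rough):

```python
# Rough fp16 KV-cache size for Mixtral 8x7B (GQA: 32 layers, 8 KV heads, head_dim 128).
def kv_cache_gb(ctx_tokens, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
    return ctx_tokens * per_token / 1024**3

print(kv_cache_gb(8_192))   # ~1 GB
print(kv_cache_gb(32_768))  # ~4 GB
```

So going from 8k to 32k costs roughly 3 GB more, which on a 24 GB card has to come out of the layers kept on the GPU -- maybe that's where my freeze comes from.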
*oobabooga, R7, 3090, 64gb ram, W10* | 2024-01-26T18:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1abpycc/is_it_possible_to_run_mixtral_with_32k_of_context/ | N9_m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1abpycc | false | null | t3_1abpycc | /r/LocalLLaMA/comments/1abpycc/is_it_possible_to_run_mixtral_with_32k_of_context/ | false | false | self | 6 | null |
100% Local Speech to Speech with Vision | Faster Whisper + OpenVoice + Moondream | 1 | 2024-01-26T17:56:25 | https://v.redd.it/5ilqdlucptec1 | allaboutai-kris | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1abogll | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5ilqdlucptec1/DASHPlaylist.mpd?a=1708883799%2CMDlmNDIzOWRmM2MyY2Y2ZTUzZTg3MTI1MjczOTM5M2I4ZGVjODEwZGU4NTc4MGNmZjRlNWRmZjQ0Y2E0ODAwMA%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/5ilqdlucptec1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5ilqdlucptec1/HLSPlaylist.m3u8?a=1708883799%2CYjEyYjZmOTE4YjQwMmNhNTk2YmMzMWM5MDFmOWJkYzNmY2I0MTMwNzUxZDA5YWZiNTgwMjYzY2RmNjM1OTQ1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5ilqdlucptec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1abogll | /r/LocalLLaMA/comments/1abogll/100_local_speech_to_speech_with_vision_faster/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?width=108&crop=smart&format=pjpg&auto=webp&s=ff0cf41f574f658a3aa8b99e72a84b77b3e50a14', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?width=216&crop=smart&format=pjpg&auto=webp&s=2bd423c04210008e6389f50cb3311e6f02a9139a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?width=320&crop=smart&format=pjpg&auto=webp&s=04c66891c7f78d8bf6f48de883e0687de4c9e8f6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?width=640&crop=smart&format=pjpg&auto=webp&s=12f6a89f1f502cfde51b8d4149074564313f1cce', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?width=960&crop=smart&format=pjpg&auto=webp&s=a0764a3893d6d1d4fa02e94e8b31673fb8fe658f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7ecaaf05444420e0785d7427303a793d75cd4b20', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M3BqNmRlaGlwdGVjMS4IvXAby_CtrEbRo7C7uRuA2vUq16Enq3_IzQoUv9Lv.png?format=pjpg&auto=webp&s=58467c045aa6a1b3bf8b339beae2c2ef3de5e8ce', 'width': 1920}, 'variants': {}}]} | ||
100% Local Speech to Speech with Vision | Faster Whisper + OpenVoice + Moondream | 1 | 2024-01-26T17:51:32 | https://v.redd.it/s5axcwolotec1 | allaboutai-kris | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1abocfq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s5axcwolotec1/DASHPlaylist.mpd?a=1708883508%2CYTk3NGY4ZWU5MmJlMDE3YTdmOTJmZmU3N2M3MDViMmZjMzAyZDNjMDBkN2RmNzQ2MjhjZjYyMWNmNjBkM2I4NA%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/s5axcwolotec1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/s5axcwolotec1/HLSPlaylist.m3u8?a=1708883508%2CMTJmNTA2MWY4OGU5OGZhNDcyODkwYWJkZmMzNTg5M2I0ZTEwMWEwMTUxOTZlMmNlMWFjMTk5ZTg2OTg4MGQ5ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/s5axcwolotec1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1abocfq | /r/LocalLLaMA/comments/1abocfq/100_local_speech_to_speech_with_vision_faster/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=108&crop=smart&format=pjpg&auto=webp&s=bddbb6f169a10441cb8c5aa796756f227e053e70', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=216&crop=smart&format=pjpg&auto=webp&s=7d7905fe7015ba4ffbdaba486ffa2f1e0a19ff56', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=320&crop=smart&format=pjpg&auto=webp&s=cf14e208642be73b2e96e79fdf5d4bd18a012ce9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=640&crop=smart&format=pjpg&auto=webp&s=64182e993706a705de1f859459dc108bfbeb61cb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=960&crop=smart&format=pjpg&auto=webp&s=6d12cb46c12c3711f9a12f8519f7e7b17e1c4aa7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?width=1080&crop=smart&format=pjpg&auto=webp&s=43653a197fc9abcfd3c66d1b374ddcb9ffa25030', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eWEwNnk3Nm5vdGVjMTAY-XieycHhKNo0Ybz9uVsdh6ZZT-o9F5T2XrZXcH30.png?format=pjpg&auto=webp&s=f06b0516a7e0148ea61074da4b26961681d0cf52', 'width': 1920}, 'variants': {}}]} |