| title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars, nullable) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Keeping the input embeddings entirely on CPU in PyTorch | 7 | I have a local model with a peculiar structure: the model itself is very small apart from the embedding matrices, which are massive. Their sheer size massively slows and degrades training on my GPU, making it borderline impossible to get anything useful out of it.
I'm wondering if it would be straightforward to keep the embeddings entirely on CPU, while still making sure all the forward and backward passes are handled on the GPU. The idea would be to embed the inputs before moving the inputs to the GPU and feeding them into the model. The big difficulty I'm not sure how to address is how to include the embeddings in the optimizer (Adam) and then pass the updated embeddings back to the CPU at the end. Would this be simple? Is this something that can be applied in local modeling more broadly? | 2024-01-11T18:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1948oq8/keeping_the_input_embeddings_entirely_on_cpu_in/ | mildresponse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1948oq8 | false | null | t3_1948oq8 | /r/LocalLLaMA/comments/1948oq8/keeping_the_input_embeddings_entirely_on_cpu_in/ | false | false | self | 7 | null |
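For anyone picturing how this might look in practice, here is a minimal sketch of one way to split the model: the embedding stays a CPU parameter, the `.to("cuda")` inside `forward` is differentiable so gradients flow back to the CPU tensor, and a single Adam instance can own both CPU and GPU parameters because optimizer state is allocated on each parameter's own device. Sizes and names below are made up for illustration.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # huge, stays on CPU
        self.body = nn.Sequential(                  # small, lives on GPU
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2)
        ).cuda()

    def forward(self, token_ids):                   # token_ids stay on CPU
        x = self.embed(token_ids)                   # lookup runs on CPU
        x = x.to("cuda", non_blocking=True)         # differentiable device copy
        return self.body(x)

model = SplitModel(vocab_size=500_000, dim=512)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # covers CPU and GPU params in one optimizer

ids = torch.randint(0, 500_000, (8, 128))            # CPU input batch
loss = model(ids).mean()                             # toy loss, just for the sketch
loss.backward()                                      # embedding grads land on the CPU tensor
opt.step()
opt.zero_grad()
```

The main caveat is that the embedding lookup, its Adam update, and the host-to-device copy all run on the CPU side, so whether this beats plain offloading depends on batch size and embedding dimension.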
Best Model for Code Analysis in C/C++? | 4 | I need a model for C/C++ code analysis: finding bugs in the code, suggesting improvements, writing test cases, etc.
Will WizardCoder for python can work for c/c++? Or is there any other better model out there? | 2024-01-11T18:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1948l9a/best_model_for_code_analysis_in_cc/ | GoodGalsSwallows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1948l9a | false | null | t3_1948l9a | /r/LocalLLaMA/comments/1948l9a/best_model_for_code_analysis_in_cc/ | false | false | self | 4 | null |
WhiteRabbitNeo-33B | 42 | In our commitment to developing offensive and defensive cybersecurity agents in the open, today we're open sourcing our larger model, WhiteRabbitNeo-33B-v1.0.
The model has been trained on a proprietary dataset containing custom Metasploit module development, Active Directory, red-teaming Windows, networking and API vulnerabilities, and a lot more. A comprehensive list of the topics covered is present in the readme file.
This model is naturally capable of being a general coding assistant as well.
Access the model here: [https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1)
​
https://preview.redd.it/dnpkbh9wqubc1.png?width=1024&format=png&auto=webp&s=cd2d151d66758093d00b11f45d5e26a51fa30336 | 2024-01-11T18:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/1948bc5/whiterabbitneo33b/ | migtissera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1948bc5 | false | null | t3_1948bc5 | /r/LocalLLaMA/comments/1948bc5/whiterabbitneo33b/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'NGYmN_uiLj1VOLCZVwTj9rn4b-3P5cXVR6Gce1fOFgE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?width=108&crop=smart&auto=webp&s=2526be5775117da8c2b9d8d1d4b06fba5dd1e016', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?width=216&crop=smart&auto=webp&s=ff88677f879f2a130e5e7ad72bc9169c78e538cd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?width=320&crop=smart&auto=webp&s=59beb8af0bfb43a5b35749252a2b48d28f1cc957', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?width=640&crop=smart&auto=webp&s=c0905460094045a4db74e1b4817491c69eef0044', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?width=960&crop=smart&auto=webp&s=3e1d606918bf22641a039104ad2cb7da4da7899d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?width=1080&crop=smart&auto=webp&s=2bb397960127581e3fcd52d8bfe5406b6aabece9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0ciQRdtB50MXrw4jyzVJZr3Z1skLfse3xjNARkEs9ko.jpg?auto=webp&s=58a2ca785880a9bf32b68fc3768a4b2f43ff691b', 'width': 1200}, 'variants': {}}]} | |
Any LLMs with ethically sourced training data? | 1 | I tried looking around, doing some searches on google, but couldn't find any LLMs with an open/ethically sourced training set.
​
Are there any that exist? | 2024-01-11T18:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1948adv/any_llms_with_ethically_sourced_training_data/ | Own-Candle-9993 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1948adv | false | null | t3_1948adv | /r/LocalLLaMA/comments/1948adv/any_llms_with_ethically_sourced_training_data/ | false | false | self | 1 | null |
Taking your LLaMA app from prototype to production, what are we missing? | 1 | 2024-01-11T18:06:55 | https://nux.ai/guides/ai-apps-from-prototype-to-production | nuxai | nux.ai | 1970-01-01T00:00:00 | 0 | {} | 194858y | false | null | t3_194858y | /r/LocalLLaMA/comments/194858y/taking_your_llama_app_from_prototype_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0C3cn9QRJU5TxUAO59YhbTRCjPGSKF1jxDuAhnbp_7c', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?width=108&crop=smart&auto=webp&s=f99d941b35411f6ed29d6622865be29e30b599cf', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?width=216&crop=smart&auto=webp&s=f427fc735398f68984d221ecc2331a23c8d36e6b', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?width=320&crop=smart&auto=webp&s=71cca961021f182ed2070048175ad167e4a6a476', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?width=640&crop=smart&auto=webp&s=33d5fed940dbff87518ef6488d41a2ab35b58de0', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?width=960&crop=smart&auto=webp&s=e186e5556ef2ba3b893bd692163f1a4103cf55e5', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?width=1080&crop=smart&auto=webp&s=9d6a2aff697db09168b01e34c7d258f066bfb2cc', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/oBN0n4AfNDoOOj24TCfa0hji8RqQ3Wqv72XpokV8iJM.jpg?auto=webp&s=6aa9583fbf4b08e825a6d9fa99cca34e31bd3f5a', 'width': 1792}, 'variants': {}}]} | ||
Building a new rig - enlighten me! | 1 | Hi,
As per the title, I'm currently in the midst of building a new rig to run my own LLMs locally.
I'll be using these LLM(s) to build my own personal assistant, and planning to integrate it with my own NodeJS/FastAPI api endpoints to connect to different services e.g. Notion, Todoist, the whole 9 yards.
Here's the specs:
1. Intel Xeon Gold 6342
2. 2x Tesla P100 16G (thinking about getting a third P100 as well)
In your opinion, how should I go about this? I've been seeing a lot of talk of different models, quantization (GGML/GGUF), GPU-specific performance, PCIE Lanes (I'm running all in 16x so hopefully no worries there), prompting techniques/templates, frontend UI (Jan.ai, Ollama, LM Studio), also running in multi-gpu setups, using llama.cpp, etc etc.
What/how do I begin with this seemingly monumental task of building a customized Personal Assistant?
I have experience with servers, web dev, and general software development, and a bit of experience with prompting techniques. I'd like to learn more as well.
Would love to get your thoughts! | 2024-01-11T17:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/1947vvu/building_a_new_rig_enlighten_me/ | Kaldnite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1947vvu | false | null | t3_1947vvu | /r/LocalLLaMA/comments/1947vvu/building_a_new_rig_enlighten_me/ | false | false | self | 1 | null |
Meta Admits Use of ‘Pirated’ Book Dataset to Train AI | 187 | >*With AI initiatives developing at a rapid pace, copyright holders are on high alert. In addition to legislation, several currently ongoing lawsuits will help to define what's allowed and what isn't. Responding to a lawsuit from several authors, Meta now admits that it used portions of the Books3 dataset to train its Llama models. This dataset includes many pirated books.*
[https://torrentfreak.com/meta-admits-use-of-pirated-book-dataset-to-train-ai-240111/](https://torrentfreak.com/meta-admits-use-of-pirated-book-dataset-to-train-ai-240111/) | 2024-01-11T17:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/19479qy/meta_admits_use_of_pirated_book_dataset_to_train/ | IntrovertedFL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19479qy | false | null | t3_19479qy | /r/LocalLLaMA/comments/19479qy/meta_admits_use_of_pirated_book_dataset_to_train/ | false | false | self | 187 | null |
Gemini pro seems better than ChatGPT at keepin' it real | 55 | 2024-01-11T17:30:31 | foazi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 194787n | false | null | t3_194787n | /r/LocalLLaMA/comments/194787n/gemini_pro_seems_better_than_chatgpt_at_keepin_it/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'a8N7DbCGWNc8rlvi280kA2OBQ4Baw6Rhn1db4c5fYWg', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?width=108&crop=smart&auto=webp&s=47e8a44ae0305d04f3e1b0b3ac4ff7b44006bffe', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?width=216&crop=smart&auto=webp&s=f3791618a0b26a474db06578e05f6453eab84c36', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?width=320&crop=smart&auto=webp&s=972c5fd40a831731c0662d23d156bd3cfe7859ac', 'width': 320}, {'height': 330, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?width=640&crop=smart&auto=webp&s=6e7f2c0068048fe10c9df95f04478189a09156bb', 'width': 640}, {'height': 495, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?width=960&crop=smart&auto=webp&s=6f4cb35a156133d39654869a1d2e98714ad02493', 'width': 960}, {'height': 557, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?width=1080&crop=smart&auto=webp&s=24e6d5e26d14067e7eb036f07fe30245feed7ba5', 'width': 1080}], 'source': {'height': 988, 'url': 'https://preview.redd.it/tnlnv1s4jubc1.png?auto=webp&s=457702fab3691dfec21a55ae38b71e2414011843', 'width': 1915}, 'variants': {}}]} | |||
What is the fastest way to serve batch > 1 llm requests on a 4090? | 8 | I'm want to setup a friends and family server on my home 4090 with a yi 200k finetune. What's the most performant solution in terms of tokens/second? Is it still vllm or something else? | 2024-01-11T17:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/1946xc8/what_is_the_fastest_way_to_serve_batch_1_llm/ | kerrygotten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1946xc8 | false | null | t3_1946xc8 | /r/LocalLLaMA/comments/1946xc8/what_is_the_fastest_way_to_serve_batch_1_llm/ | false | false | self | 8 | null |
Model MythoMax L2 13B available again | 1 | This model is proficient at both roleplaying and storywriting, so if you want to try it on the net before you run it locally, try [Infermatic.ai](https://infermatic.ai/).
If you don't know what it is: an improved, potentially even perfected variant of MythoMix, my [**MythoLogic-L2**](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [**Huginn**](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique.
Link to the repo --> [https://huggingface.co/Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
For optimal model performance:
<System prompt/Character Card>
​
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

### Response:
​ | 2024-01-11T17:04:04 | https://www.reddit.com/r/LocalLLaMA/comments/1946kl0/model_mythomax_l2_13b_available_again/ | Horror_Echo6243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1946kl0 | false | null | t3_1946kl0 | /r/LocalLLaMA/comments/1946kl0/model_mythomax_l2_13b_available_again/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]} |
How to enable cuBLAS in privateGPT | 2 | Hello everyone,
I'm experimenting with the privateGPT setup and I have everything working so far. I am using a RunPod instance with Ubuntu 20.04 x86. I can see memory usage when I am uploading documents, but the GPU utilization is always 0. Also, when I spin it up, the BLAS parameter is always 0. CUDA and the NVIDIA drivers are installed and configured; the nvidia-smi command even shows which processes are on the GPU, but it's not showing a process. So the CPU usage is 100%. How can I change that, or is this how it's intended to be? | 2024-01-11T16:35:26 | https://www.reddit.com/r/LocalLLaMA/comments/1945vga/how_to_enable_cublas_in_privategpt/ | Blax363 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1945vga | false | null | t3_1945vga | /r/LocalLLaMA/comments/1945vga/how_to_enable_cublas_in_privategpt/ | false | false | self | 2 | null |
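A likely cause, hedged since privateGPT versions differ: privateGPT runs GGUF models through llama-cpp-python, and if that wheel was installed without CUDA support, everything falls back to the CPU and llama.cpp reports BLAS = 0 at startup. The commonly documented fix at the time was to reinstall the bindings with cuBLAS enabled, e.g. `CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python`, and then offload layers with an `n_gpu_layers`-style option; the exact name of that setting in privateGPT depends on the version, so verify both the build flag and the setting against your install.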
"Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs" (DiagGSM8K) | 38 | **Paper**: [https://arxiv.org/abs/2312.17080](https://arxiv.org/abs/2312.17080)
**Code**: [https://github.com/dvlab-research/DiagGSM8K](https://github.com/dvlab-research/DiagGSM8K)
**Dataset**: [https://huggingface.co/datasets/Randolphzeng/DiagGSM8K](https://huggingface.co/datasets/Randolphzeng/DiagGSM8K)
**Abstract**:
>In this work, we introduce a novel evaluation paradigm for Large Language Models, one that challenges them to engage in meta-reasoning. This approach addresses critical shortcomings in existing math problem-solving benchmarks, traditionally used to evaluate the cognitive capabilities of agents. Our paradigm shifts the focus from result-oriented assessments, which often overlook the reasoning process, to a more holistic evaluation that effectively differentiates the cognitive capabilities among models. For example, in our benchmark, GPT-4 demonstrates a performance ten times more accurate than GPT3-5. The significance of this new paradigm lies in its ability to reveal potential cognitive deficiencies in LLMs that current benchmarks, such as GSM8K, fail to uncover due to their saturation and lack of effective differentiation among varying reasoning abilities. Our comprehensive analysis includes several state-of-the-art math models from both open-source and closed-source communities, uncovering fundamental deficiencies in their training and evaluation approaches. This paper not only advocates for a paradigm shift in the assessment of LLMs but also contributes to the ongoing discourse on the trajectory towards Artificial General Intelligence (AGI). By promoting the adoption of meta-reasoning evaluation methods similar to ours, we aim to facilitate a more accurate assessment of the true cognitive abilities of LLMs. | 2024-01-11T16:33:10 | https://www.reddit.com/r/LocalLLaMA/comments/1945tfv/challenge_llms_to_reason_about_reasoning_a/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1945tfv | false | null | t3_1945tfv | /r/LocalLLaMA/comments/1945tfv/challenge_llms_to_reason_about_reasoning_a/ | false | false | self | 38 | null |
How to make or code something locally with AI? | 1 | I have Dolphin Mistral 7B with Kobold Lite, and I was wondering what types of things I can make with AI-generated code, and whether it could teach me something. I've really been wanting to make games, so can I use AI to help me, and what languages/game engines is AI good at, especially a 7B model? Where would I share the final product?
​
one more thing can ai help to make media such as books, dnd adventures, or games? | 2024-01-11T15:49:32 | https://www.reddit.com/r/LocalLLaMA/comments/1944spp/how_to_make_or_code_something_locally_with_ai/ | Gaming-invisibleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1944spp | false | null | t3_1944spp | /r/LocalLLaMA/comments/1944spp/how_to_make_or_code_something_locally_with_ai/ | false | false | self | 1 | null |
Advice on fine-tuning? | 18 | So I'm finally getting a bit of free time in my personal life, and I've started to take the next step in my local AI journey - fine-tuning.
My primary rig has 6x3090's, with three NVLink's in place (GPUs 0-1, 2-3, and 4-5) for a total VRAM of 144GB.
I've created an alpaca format dataset that I'd like to use, but I'm posting here because I'm finding myself woefully failing in finding adequate guides and documentation on how I can train a 70B model with model parallelism.
Does anyone here have experience with creating LORAs or QLORAs for a 70B Llama2 model across multiple GPUs? Ideally where I can train with higher than 4k context? Is there a guide or a repository you could point me at?
I fully acknowledge my ignorance in this area, so my apologies if the context of my question is itself wrong. Please correct me if so.
Lastly, I'm currently trying to get Axolotl with Deepspeed working, but am so far only succeeding in getting it running with data parallelism instead of model parallelism. The repo documentation and examples don't seem to work as directly copy/pasted (ex: pointing to a .py in the example commandline instead of the example .yaml that is actually present in the example subdirectory).
Thanks in advance for any guidance. | 2024-01-11T15:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/1944993/advice_on_finetuning/ | Mass2018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1944993 | false | null | t3_1944993 | /r/LocalLLaMA/comments/1944993/advice_on_finetuning/ | false | false | self | 18 | null |
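One hedged pointer for the Axolotl/DeepSpeed part, since details shift between releases: plain multi-GPU launches give data parallelism, while parameter sharding (which behaves like model parallelism for a 70B QLoRA) comes from DeepSpeed ZeRO-3 or FSDP. At the time of this post, Axolotl shipped ready-made ZeRO configs and the usual invocation looked roughly like `accelerate launch -m axolotl.cli.train your_config.yml --deepspeed deepspeed/zero3.json` (the config directory name and flag spelling have changed across releases, so check the README of the exact commit you are on).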
Phi-2 & Tiny LLama on Raspberry Pi 5 | 23 | Hi everyone,
I have done some tests using RPI5 8GB RAM. Here are the results:
1. TheBloke dolphin-2_6-phi-2.Q4_K_M.gguf - 5 tokens/s
2. TheBloke tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf - 10 tokens/s
Question was: "Hey can you give me some spaghetti recipe?"
Phi:
Of course! Here's a simple spaghetti recipe. Boil 1 pound of spaghetti for about 8-10 minutes, and in a separate pan, sauté 2 minced garlic cloves, 1/2 cup of olive oil, 1/2 cup of diced onion, and 1 can of diced tomatoes. Once the spaghetti is cooked, drain it and add it to the tomato sauce. Stir in 1 cup of grated Parmesan cheese and a pinch of salt and pepper. Serve hot, and enjoy your spaghetti!
TinyLLaMA:
Of course! Here's a recipe for spaghetti with meat sauce:
Ingredients:
- 1 pound pasta (spaghetti or penne)
- 1/2 cup marinara sauce (homemade or store-bought)
- 1/2 cup beef or pork meatballs (soaked and ground)
- 2 tablespoons olive oil
- 1 tablespoon chopped garlic
- 1/2 teaspoon dried oregano
- Salt and pepper to taste
- Fresh basil leaves (optional)
​
Instructions:
1. Bring a large pot of salted water to a boil. Add pasta and cook according to package instructions, until al dente. Drain and set aside.
2. In a skillet, heat 1 tablespoon of olive oil over medium-high heat. Add ground beef or pork, and cook until browned. Drain off any excess fat.
3. In the same skillet, add garlic,
oregano, salt, and pepper. Cook until fragrant, about 30 seconds.
4. Add marinara sauce to the skillet and stir until heated through.
5. Add beef or pork meatballs to the skillet and cook until heated through.
6. Add pasta to the skillet with the meatballs and toss well to coat. Add fresh basil leaves, if using, and serve immediately.
​
I was really surprised with quality of results. This tiny llama is really quite powerful, and on the edge devices this can do a lot. | 2024-01-11T15:20:30 | https://www.reddit.com/r/LocalLLaMA/comments/19444pu/phi2_tiny_llama_on_raspberry_pi_5/ | flopik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19444pu | false | null | t3_19444pu | /r/LocalLLaMA/comments/19444pu/phi2_tiny_llama_on_raspberry_pi_5/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'mmSEhOO2Ya4Cjle-07d9l2l3p0K02gcnbjuYTcgYymE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?width=108&crop=smart&auto=webp&s=c2033248ce0605f92a32f5c9fa9ac056a52d65bc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?width=216&crop=smart&auto=webp&s=a845a86bc5e01172dfda8fb263b760ef03494dd7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?width=320&crop=smart&auto=webp&s=c00f3db51eb3e305a2a6213ac6b53f202a71bb19', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?width=640&crop=smart&auto=webp&s=03f94282964a7d562e24c2288c8391d64e7feab7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?width=960&crop=smart&auto=webp&s=178462bd47c561a01aa32dcf1740346078e5d0ec', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?width=1080&crop=smart&auto=webp&s=931f16f544d20475e6670361deb771671bc32f66', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/y7kZ5l3eN4uJnoeNp6fBiLE1B0DlXrgor7Ux1ilJt5k.jpg?auto=webp&s=f8e23616a09ca0bac37ca99a1059f39f515d1eeb', 'width': 1200}, 'variants': {}}]} |
Why such a drastic difference between EXL2 and GGUF of the same model? GGUF seems much better? | 38 | Hi all,
I've been using this model for a while now
https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF
And decided to try the EXL2 by LoneStriker listed here
https://huggingface.co/LoneStriker/dolphin-2.5-mixtral-8x7b-3.5bpw-h6-exl2-2
Because I've heard the EXL2 format has a lot of benefits. I'm currently using a 4090, have remote-code-enabled, and seem to have all of the settings the same.
However when I ask generic questions about coding, the GGUF gives a good answer and provides examples. The EXL2 gives weird generic 'Happy to help!' and runs away with text sometimes, and has to be constantly prompted to give any coding examples.
I was under the impression that these two would perform similarly, but in my experience the EXL2 is almost unusable.
What am I missing?!? Thanks for your help. | 2024-01-11T15:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/19440fy/why_such_a_drastic_difference_between_exl2_and/ | Some_guitarist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19440fy | false | null | t3_19440fy | /r/LocalLLaMA/comments/19440fy/why_such_a_drastic_difference_between_exl2_and/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'Bfy8S1MwmvHWyvq3x1bCEzeTfWTZ5f6ChredIZbGrJk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?width=108&crop=smart&auto=webp&s=75a2ddbf4733f0554ee5ed67020a63bb8dfb8783', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?width=216&crop=smart&auto=webp&s=51002c0e4e9a659a9717e7b692f1edd49379e299', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?width=320&crop=smart&auto=webp&s=7d0c1717d243b939d63b0d7c5e9eab6ca1e059c9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?width=640&crop=smart&auto=webp&s=130d9e96a023347b5df66e990d63e1d005266a7c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?width=960&crop=smart&auto=webp&s=50e101ec6cacb487bacbd3f65c2fbd887ccae788', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?width=1080&crop=smart&auto=webp&s=54c56252f8eeeeb8244f39762d56ccb9540681f1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RCCYnWs6XiK7XvhWQVyugz8VLphXcECiWnO7WnySlWg.jpg?auto=webp&s=df3a6a74f5057cf1a92a4d1aa76adc832b3ea7a0', 'width': 1200}, 'variants': {}}]} |
Model Recommendations for Role Playing (Yi34B-based) | 8 | Hi, I'm looking for a good Yi 34B based model that is good in role-playing.
Do you have any suggestions?
Thanks and have a nice day! | 2024-01-11T15:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1943t4a/model_recommendations_for_role_playing_yi34bbased/ | Weyaxi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1943t4a | false | null | t3_1943t4a | /r/LocalLLaMA/comments/1943t4a/model_recommendations_for_role_playing_yi34bbased/ | false | false | self | 8 | null |
Local LLM laptop: MacBook vs NVIDIA GPU | 5 | As a laptop to run Local LLMs, would you recommend a MacBook Pro with a lot of unified memory or a laptop with a discrete NVIDIA GPU (with 8 GB of VRAM)? | 2024-01-11T15:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/1943rst/local_llm_laptop_macbook_vs_nvidia_gpu/ | ekerazha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1943rst | false | null | t3_1943rst | /r/LocalLLaMA/comments/1943rst/local_llm_laptop_macbook_vs_nvidia_gpu/ | false | false | self | 5 | null |
Fine-tuning LLMs for classification tasks -> Windows malware api call logs | 1 |
Hello to the people of LocalLLaMA,
I'm just trying to use LLMs such as CodeLlama to classify input text; the input text comes from API call logs of Windows malware. I have 8 classes and around 80k documents/JSON files that I want to fine-tune the LLM on, after which I want the LLM to output a class for the input text.
sample data after preprocessing-->
'(<module> kernel32 <api\_name> getsystemtimeasfiletime <args> 0x2439fd8 <ret\_val> N/A).', '(<module> kernel32 <api\_name> getcurrentprocessid <args> N/A <ret\_val> 0x1396c).', '(<module> kernel32 <api\_name> getcurrentthreadid <args> N/A <ret\_val> 0x13980).', '(<module> kernel32 <api\_name> gettickcount <args> N/A <ret\_val> 0x5265c14).', '(<module> kernel32 <api\_name> queryperformancecounter <args> 0x2439fd0 <ret\_val> 0x1)... etc.
All of the API calls are in a sequence, i.e. one after the other. The main problem I will encounter has to do with the context length of the input: I have some malware samples that make close to 39k Win32 API calls, and I was wondering how to handle this correctly.
If anyone can provide links to read up on, or fine-tuning scripts that account for YaRN/RoPE scaling in sequence classification, it would be helpful.
I would like to get some input on how to use LLMs correctly to solve this issue,
thank you! | 2024-01-11T14:23:55 | https://www.reddit.com/r/LocalLLaMA/comments/1942vlj/finetuning_llms_for_classification_tasks_windows/ | Akimbo-fmg-9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1942vlj | false | null | t3_1942vlj | /r/LocalLLaMA/comments/1942vlj/finetuning_llms_for_classification_tasks_windows/ | false | false | self | 1 | null |
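One concrete starting point, sketched here with the caveat that the checkpoint, label count, and scaling factor are just placeholders: transformers can put a classification head on a decoder-only model via `AutoModelForSequenceClassification`, and Llama-family configs accept a `rope_scaling` entry for stretching the context beyond the pretraining length (how well quality holds up at 4x is something to validate, not assume).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "codellama/CodeLlama-7b-hf"   # placeholder; any Llama-family checkpoint

tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token            # Llama tokenizers ship without a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=8,                        # the 8 malware classes
    pad_token_id=tok.pad_token_id,
    torch_dtype=torch.bfloat16,
    rope_scaling={"type": "linear", "factor": 4.0},  # stretch context ~4x; verify quality
)

batch = tok(
    ["(<module> kernel32 <api_name> gettickcount <args> N/A <ret_val> 0x5265c14)."],
    return_tensors="pt", padding=True, truncation=True,
)
logits = model(**batch).logits           # shape: (batch, 8); head is untrained until fine-tuned
print(logits.argmax(-1))
```

From here the usual path is a standard fine-tune (Trainer or PEFT/LoRA) on the labeled call logs; for the 39k-call outliers, truncation or chunk-and-pool strategies are worth comparing against long-context scaling.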
Transformers.js v2.14 adds in-browser Segment Anything w/ SlimSAM | 141 | 2024-01-11T13:58:39 | https://v.redd.it/wglfo5vkgtbc1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1942bvt | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/wglfo5vkgtbc1/DASHPlaylist.mpd?a=1707573532%2CMzYzMmM5YThjMDFhMGVjNDk1NThmODlkNGMyMjliNmE2MjRkYTRiMTJiNmEzNmU0Njc3MjA3MDgyNmUwN2Q0NA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/wglfo5vkgtbc1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/wglfo5vkgtbc1/HLSPlaylist.m3u8?a=1707573532%2CNDIzYmFjYTZhZDBmYzIyMWJjMDBmZGQ3YTEyMmYyMDVjM2YyYjQ4NWVkNWMxOGIzOTIxOGJlOTdjNGZhZDExYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wglfo5vkgtbc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 556}} | t3_1942bvt | /r/LocalLLaMA/comments/1942bvt/transformersjs_v214_adds_inbrowser_segment/ | false | false | 141 | {'enabled': False, 'images': [{'id': 'Yzc3ZHJ0MWZodGJjMVwpVUafglgPOgBEIAKsTPvBKLtNoSTYJ_-kg7_CxMgx', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/Yzc3ZHJ0MWZodGJjMVwpVUafglgPOgBEIAKsTPvBKLtNoSTYJ_-kg7_CxMgx.png?width=108&crop=smart&format=pjpg&auto=webp&s=0d7a26ac96808a9c5c4a13b82c4df25466bbccd8', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/Yzc3ZHJ0MWZodGJjMVwpVUafglgPOgBEIAKsTPvBKLtNoSTYJ_-kg7_CxMgx.png?width=216&crop=smart&format=pjpg&auto=webp&s=fa5bf7cf63839fac348254a7f9a7c59ef4e53cfa', 'width': 216}, {'height': 276, 'url': 'https://external-preview.redd.it/Yzc3ZHJ0MWZodGJjMVwpVUafglgPOgBEIAKsTPvBKLtNoSTYJ_-kg7_CxMgx.png?width=320&crop=smart&format=pjpg&auto=webp&s=d2da0a691ecb6c17fcf080a6ad68522c1c4b6594', 'width': 320}, {'height': 552, 'url': 'https://external-preview.redd.it/Yzc3ZHJ0MWZodGJjMVwpVUafglgPOgBEIAKsTPvBKLtNoSTYJ_-kg7_CxMgx.png?width=640&crop=smart&format=pjpg&auto=webp&s=ff09b50d056055fe52f3482d5664a4d6a39a49f2', 'width': 640}], 'source': {'height': 610, 'url': 'https://external-preview.redd.it/Yzc3ZHJ0MWZodGJjMVwpVUafglgPOgBEIAKsTPvBKLtNoSTYJ_-kg7_CxMgx.png?format=pjpg&auto=webp&s=87d9d6f272191d6a8d0add3c882a8c490dc1cfb4', 'width': 706}, 'variants': {}}]} | ||
Quick overview of price/performance for text generation on different GPUs | 162 |
|GPU|T/S|$/HR|T/$ (= T/S per $/HR)|WINNER|
|:-|:-|:-|:-|:-|
|RTX A6000|52.778275125799354|0.79|66.81|🏆|
|A40|45.17065873434471|0.79|57.18|✅|
|L40|56.87802678475367|1.14|49.89|✅|
|A100 SXM|61.965978844301496|2.29|27.06|❌|
|RTX4090|12.613347074703317|0.74|17.05|❌|
|H100|43.002703448927555|4.69|9.17|❌|
|RTX A5000|1.5992600462735782|0.44|3.63|❌|
This is a test of the mixtral:8x7b model using ollama. I know it's not the most scientific method of research (I could have used a smaller quantization for the RTX A5000, for example, and probably would have had much better results since it only has 20GB of VRAM).
There is a possibility that other components caused the bottleneck since each pod was slightly different.
Also note some GPUs have spot (interruptible) options so you can run them even cheaper. A6000 is 0.49$/hr for spot secure cloud and 0.34$/hr spot community cloud.
My workflow is [right here](https://github.com/MarkoPekas/runpod-ollama-speedtest-guide) in case you want to critique or try it out yourself.
| 2024-01-11T13:54:24 | https://www.reddit.com/r/LocalLLaMA/comments/19428v9/quick_overview_of_pricepreformance_for_text/ | MarksGG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19428v9 | false | null | t3_19428v9 | /r/LocalLLaMA/comments/19428v9/quick_overview_of_pricepreformance_for_text/ | false | false | self | 162 | {'enabled': False, 'images': [{'id': 'BgeV6m2P-1bGfzHkClsV6fGcuV9de1pTvejeuh-yOOw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?width=108&crop=smart&auto=webp&s=60ecec693f5b9f79b531d1a8d9b32d5ea026830e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?width=216&crop=smart&auto=webp&s=f6bc015be79dcd10d1d0919294433e89ccaecf1a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?width=320&crop=smart&auto=webp&s=204a8cfb8893c1c6326e3085757adcc129101127', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?width=640&crop=smart&auto=webp&s=9f503d61e4ad47c1d57147d8e915e0da579b528a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?width=960&crop=smart&auto=webp&s=2485a5fd77e684b9bf71a705a2cbe1fb70f1d76c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?width=1080&crop=smart&auto=webp&s=d205429c385395755188d4294ef1634f448b41da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IbL7X3PcgfiQV-ujCMZdcU1PtR40UbzZrB-yBT1ZtWk.jpg?auto=webp&s=e5355552add266bf567d4ff37a34d4e8f9ce3a0a', 'width': 1200}, 'variants': {}}]} |
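To make the T/$ column above concrete: it is the raw ratio of the two preceding columns, i.e. throughput divided by hourly price. Converting that into actual tokens generated per dollar (assuming a fully utilized billed hour) is just a factor of 3600, as in this small check using the table's numbers:

```python
# (tokens/s, $/hr) taken from the table above
rows = {"RTX A6000": (52.78, 0.79), "A40": (45.17, 0.79), "H100": (43.00, 4.69)}

for gpu, (tps, dollars_per_hr) in rows.items():
    ratio = tps / dollars_per_hr                      # the T/$ column: (tokens/s) per ($/hr)
    tokens_per_dollar = tps * 3600 / dollars_per_hr   # tokens generated for one dollar of rental
    print(f"{gpu}: T/$ = {ratio:.2f}, ~{tokens_per_dollar:,.0f} tokens per dollar")
```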
Multi-GPU setups | 1 | OK I have been badly infected with the LLM-madness
What are some good resources or advice for someone considering setting up multi GPU setup.
I am considering:
A) Some cheap server with slots for many cheap GPUs like 3060 12GB
B) Two (or three) GPU setup for my desktop | 2024-01-11T13:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/1941y10/multigpu_setups/ | Full_Operation_9865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1941y10 | false | null | t3_1941y10 | /r/LocalLLaMA/comments/1941y10/multigpu_setups/ | false | false | self | 1 | null |
Fine-tune for specific writing style based on text samples | 1 | Looking for a tutorial on how to best fine-tune on a large sample of writing data for a specific use case i.e. writing social media posts based on a businesses past posts.
The tutorials I've seen on creating a dataset are all just prompt and output examples but I don't think that's exactly what I am looking for. | 2024-01-11T13:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/1941q0t/finetune_for_specific_writing_style_based_on_text/ | DK_Tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1941q0t | false | null | t3_1941q0t | /r/LocalLLaMA/comments/1941q0t/finetune_for_specific_writing_style_based_on_text/ | false | false | self | 1 | null |
Question re. roles of various tools | 1 | I'm struggling to understand the roles of different local llm tools, and hope someone can help.
My goal: Build a POC "chat with docs" system for a business with multiple users/teams, each with different datasets. Everything must run local, no internet.
My current mental model of two tools:
--Ollama: Hosts/connects to models and provides an API and terminal chat. "back end" model interface.
--Bionic-GPT: Front end for multiple users, plus tools like a vector DB and tokenizer for managing domain knowledge. Must point it to an API (like Ollama) to interface with a model. "front end"
My question:
Is my mental model of Ollama & Bionic correct? And, can I apply my understanding to the many other tools (examples below)?
examples:
llama.cpp, LMStudio, GPT4All, Cheshire cat, Jan, Anythingllm, localAI.io
Example of confusion:
"Anythingllm seems to compare to Bionic-GPT and I might want to try it, but does it need a backend like Ollama, and can I use Ollama for both?"
Thank you so much for your time! | 2024-01-11T13:20:44 | https://www.reddit.com/r/LocalLLaMA/comments/1941jyb/question_re_roles_of_various_tools/ | CapsFanHere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1941jyb | false | null | t3_1941jyb | /r/LocalLLaMA/comments/1941jyb/question_re_roles_of_various_tools/ | false | false | self | 1 | null |
Intel Core Ultra and LLMs | 1 | I need to buy a new laptop, where I expect to run a 7b model and, if possible, some fine tuning. I was just wondering if/how the new Intel Core Ultra with its NPU and GPU could fit. It will not compete with GPU and CUDA, but I was wondering how this compete with a Macbook Silicon (to me it looks like Macbook is better positioned because of the MLX). | 2024-01-11T13:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1941g99/intel_core_ultra_and_llms/ | IzzyHibbert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1941g99 | false | null | t3_1941g99 | /r/LocalLLaMA/comments/1941g99/intel_core_ultra_and_llms/ | false | false | self | 1 | null |
Why is llama.cpp still called 'llama.cpp' today? | 1 | Hi guys,
Please forgive me if you think this is a dumb question - it appears that way to me as well :-)
I'm new to the whole stuff ML/LLM, trying to understand bit after another, so the name of the project llama.cpp confused me a lot at the first try. I though it is only re-implementation of LLaMA model in C++, no, now it can run so many models in GGUF formats. Actually, the project deserves at least a better description, something like: GGUF-runner, GGUF-engine, GGUF-brain etc. | 2024-01-11T13:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1941fj2/why_is_llamacpp_still_called_llamacpp_today/ | tunggad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1941fj2 | false | null | t3_1941fj2 | /r/LocalLLaMA/comments/1941fj2/why_is_llamacpp_still_called_llamacpp_today/ | false | false | self | 1 | null |
Creating your own embedding model to boost retrieval performance | 4 | I have RAG pipeline for medical documents. There are a lot specific features in the language only relevant to medicine. Someone suggested created a custom embeddings model, just wondering if anyone has done this, any direction on how to start if so? Did it help and how did you qualify the uplift in performance if it did? Txs | 2024-01-11T13:06:12 | https://www.reddit.com/r/LocalLLaMA/comments/19419yg/creating_your_own_embedding_model_to_boost/ | drivenkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19419yg | false | null | t3_19419yg | /r/LocalLLaMA/comments/19419yg/creating_your_own_embedding_model_to_boost/ | false | false | self | 4 | null |
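One common way people approach this, offered here only as a sketch: rather than training an embedding model from scratch, fine-tune an existing one on in-domain pairs (query, relevant passage) mined from the medical corpus, then measure retrieval quality on a held-out set before and after. The base model and example pairs below are placeholders.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# placeholder base model; any sentence-transformers checkpoint works the same way
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# in practice these pairs would be mined from the medical documents themselves
train_examples = [
    InputExample(texts=["contraindications for metformin",
                        "Metformin is contraindicated in severe renal impairment ..."]),
    InputExample(texts=["first-line treatment for hypertension",
                        "Guidelines recommend thiazide diuretics or ACE inhibitors ..."]),
]

loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)   # treats other in-batch passages as negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("medical-embedder-v0")
```

To quantify the uplift, recall@k over a held-out set of (query, gold passage) pairs with the old and new encoder is usually enough to tell whether the domain adaptation helped.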
Using DeepInfra's Custom LLM with Pandasai | 1 | Hello everyone, I'm not very good at Python and I'm trying to use pandasai with a custom LLM from DeepInfra. I'm stuck on how to use the custom endpoints along with one of the notebooks that was shared in this forum. I'm sharing the code below; if someone can help, that would be really helpful.
`import pandas as pd`
`from pandasai import SmartDataframe`
`df = pd.DataFrame({**sample data**})`
​
`# Instantiate a LLM`
`from pandasai.llm import OpenAI`
Here I want to follow the [DeepInfra docs](https://deepinfra.com/docs/advanced/openai_api) to use their model rather than the OpenAI one.
`llm = OpenAI(api_token="YOUR_API_TOKEN")`
`df = SmartDataframe(df, config={"llm": llm})`
`df.chat('Which are the 5 happiest countries?')`
How should I replace the Open AI here and use the models, endpoints and API key from the DeepInfra rather than the Open AI one? | 2024-01-11T12:52:51 | https://www.reddit.com/r/LocalLLaMA/comments/19410sj/using_deepinfras_custom_llm_with_pandasai/ | nishant299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19410sj | false | null | t3_19410sj | /r/LocalLLaMA/comments/19410sj/using_deepinfras_custom_llm_with_pandasai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gdv5Bh89JWXxDpkAlTrk_zCb-qJxrREhAWY8_SsUogU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/9yZr3zHUmyWQtvbJofUKJroV_0PJzwp1lwBQ4qcwT7s.jpg?width=108&crop=smart&auto=webp&s=4e8ad5cdf82a75c71c4af7b04654daaa4f3a27a1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/9yZr3zHUmyWQtvbJofUKJroV_0PJzwp1lwBQ4qcwT7s.jpg?width=216&crop=smart&auto=webp&s=e900d3f43b1aa35a3c788ef7397aafa9776b12ce', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/9yZr3zHUmyWQtvbJofUKJroV_0PJzwp1lwBQ4qcwT7s.jpg?width=320&crop=smart&auto=webp&s=15b92f57e76885f59988d43a8633dff5d805c9f2', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/9yZr3zHUmyWQtvbJofUKJroV_0PJzwp1lwBQ4qcwT7s.jpg?auto=webp&s=e7fe1413dc667984fbb9af546508cd4f3ef5b6c6', 'width': 512}, 'variants': {}}]} |
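A hedged sketch of one way this is often wired up: DeepInfra exposes an OpenAI-compatible endpoint, so the usual trick is to keep the OpenAI-style client but point it at a different base URL. Whether pandasai's `OpenAI` wrapper forwards such an override (shown below as an `api_base` keyword) depends on the installed version, so treat that keyword, the endpoint URL, and the model name as assumptions to verify against the DeepInfra and pandasai docs.

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

df = pd.DataFrame({"country": ["Spain", "Japan", "Chile"],
                   "happiness_index": [6.4, 6.0, 6.3]})   # toy stand-in for the sample data

llm = OpenAI(
    api_token="YOUR_DEEPINFRA_API_TOKEN",
    api_base="https://api.deepinfra.com/v1/openai",        # assumed OpenAI-compatible endpoint
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",          # any model name DeepInfra serves
)

sdf = SmartDataframe(df, config={"llm": llm})
print(sdf.chat("Which are the 5 happiest countries?"))
```

If the wrapper does not accept a base-URL argument, setting the `OPENAI_API_BASE`/`OPENAI_BASE_URL` environment variable before constructing the client is the usual fallback, since the wrapper calls the OpenAI client underneath.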
Open source version of OpenAI assistants? | 2 | Hey guys so as the title suggests I'm curious to know if there are any open source alternatives to OpenAI assistants. I understand that function calling can be accomplished using grammars for instance and there are plenty of open source RAG (knowledge retrieval) solutions. However what I would like to find out is if there's anything that can tie them together. Meaning, a way to determine what functions should be called if it's required in a step-by-step manner of execution, and then figure out when to stop execution or let the user know it wasn't possible. | 2024-01-11T12:46:24 | https://www.reddit.com/r/LocalLLaMA/comments/1940wj0/open_source_version_of_openai_assistants/ | MonkeyMaster64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1940wj0 | false | null | t3_1940wj0 | /r/LocalLLaMA/comments/1940wj0/open_source_version_of_openai_assistants/ | false | false | self | 2 | null |
MegaDolphin-120b released by Eric Hartford | 58 | Just saw that Eric Hartford has published a 120b model similar to Goliath, but using dolphin-2.2-70b for the merge. Here is the Twitter post:
https://x.com/erhartford/status/1745187763673276793?t=wtDpQ8G2b40OFK6Cq9Fh5w&s=09
And the huggingface link
https://huggingface.co/cognitivecomputations/MegaDolphin-120b/tree/main
And the bloke magic
https://huggingface.co/TheBloke/MegaDolphin-120b-GGUF/tree/main
And for exl2 lonestriker started the quants but not all of them are there yet
https://huggingface.co/LoneStriker/MegaDolphin-120b-2.65bpw-h6-exl2 | 2024-01-11T12:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/194036l/megadolphin_120b_released_by_erick_hartford/ | m_mukhtar | self.LocalLLaMA | 2024-01-11T13:43:13 | 0 | {} | 194036l | false | null | t3_194036l | /r/LocalLLaMA/comments/194036l/megadolphin_120b_released_by_erick_hartford/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'dkibOVShc_1z4JacNGblZJDqpuoj0VIJB_eeSubb1yY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FNgq4Z7iPgrvEb29t89h9Z3tjSlfzw9JbZyl6czH9Zw.jpg?width=108&crop=smart&auto=webp&s=4fe4a0005b9d10261c67e939033e53c397e6cf54', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/FNgq4Z7iPgrvEb29t89h9Z3tjSlfzw9JbZyl6czH9Zw.jpg?width=216&crop=smart&auto=webp&s=76bf3592374169e09690ccb13019b60fdc0ea1f7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/FNgq4Z7iPgrvEb29t89h9Z3tjSlfzw9JbZyl6czH9Zw.jpg?width=320&crop=smart&auto=webp&s=c8cf0af7684222338ef4a0ebf7090ee4d314f768', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/FNgq4Z7iPgrvEb29t89h9Z3tjSlfzw9JbZyl6czH9Zw.jpg?width=640&crop=smart&auto=webp&s=c07685d4e03ac1586194da0149f53728689ff0aa', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/FNgq4Z7iPgrvEb29t89h9Z3tjSlfzw9JbZyl6czH9Zw.jpg?width=960&crop=smart&auto=webp&s=5da020d1ed2557429d58b0a77293ddeb68e6e438', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/FNgq4Z7iPgrvEb29t89h9Z3tjSlfzw9JbZyl6czH9Zw.jpg?auto=webp&s=30faccaa52bdca4e562678a8d1d1195b5259c9b6', 'width': 1024}, 'variants': {}}]} |
The secret to improving the performance of Mixtral MoE models | 7 | This is cloudyu; recently I have been focusing on testing Mixtral-based models.
I found that the num_experts_per_tok parameter is very important.
Here I test hellaswag for
[cloudyu/Mixtral_7Bx4_MOE_24B](https://huggingface.co/cloudyu/Mixtral_7Bx4_MOE_24B) with different num_experts_per_tok
- num_experts_per_tok=2
hf (pretrained=/home/ubuntu/.cache/huggingface/hub/models--cloudyu--Mixtral_7Bx4_MOE_24B/snapshots/1dfbf257f2a51421e13dffcbb3edb5f252b60339,dtype=bfloat16,load_in_4bit=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 32
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 0|acc |0.6416|± |0.0048|
| | |none | 0|acc_norm|0.8246|± |0.0038|
- num_experts_per_tok=3
hf (pretrained=/home/ubuntu/.cache/huggingface/hub/models--cloudyu--Mixtral_7Bx4_MOE_24B/snapshots/1dfbf257f2a51421e13dffcbb3edb5f252b60339,dtype=bfloat16,load_in_4bit=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 32
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 0|acc |0.6500|± |0.0048|
| | |none | 0|acc_norm|0.8337|± |0.0037|
- num_experts_per_tok=4
hf (pretrained=/home/ubuntu/.cache/huggingface/hub/models--cloudyu--Mixtral_7Bx4_MOE_24B/snapshots/1dfbf257f2a51421e13dffcbb3edb5f252b60339,dtype=bfloat16,load_in_4bit=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 32
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 0|acc |0.6521|± |0.0048|
| | |none | 0|acc_norm|0.8355|± |0.0037| | 2024-01-11T11:41:35 | https://www.reddit.com/r/LocalLLaMA/comments/193zr7l/the_secret_to_improve_the_performance_of_mixtral/ | OkAcanthocephala3355 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193zr7l | false | null | t3_193zr7l | /r/LocalLLaMA/comments/193zr7l/the_secret_to_improve_the_performance_of_mixtral/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'UbhHHn_oYvdU2BRCY8XPutI5H59Gismm2pyVa9YL3f4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?width=108&crop=smart&auto=webp&s=3fe3ec235277406b3ef2bdac29b5bcd21b0cf493', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?width=216&crop=smart&auto=webp&s=f40af9221a624407af167dffd33f8c81c670765b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?width=320&crop=smart&auto=webp&s=5b1cd50599f398a646bef9f2373a1154c636cd62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?width=640&crop=smart&auto=webp&s=cb197bbcae49ef42f98ee49f4a0f953c54286b46', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?width=960&crop=smart&auto=webp&s=5443825c69faa9a11053e15ff4b7eac575b2b383', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?width=1080&crop=smart&auto=webp&s=282cdd7f5382cdd3cd863d557f6bc4b5c5957cfd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d5-V5zsl6B-5_3p_aDeT1b1kbpIra4Dn1zz6u61l2F4.jpg?auto=webp&s=070ff44d754e68ba7a46690e37e40f9c74970422', 'width': 1200}, 'variants': {}}]} |
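For anyone wanting to try this themselves, the router's top-k is just a field on the model config in transformers, so it can be overridden at load time; a minimal sketch (model name from the post, everything else illustrative):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "cloudyu/Mixtral_7Bx4_MOE_24B"

config = AutoConfig.from_pretrained(model_id)
config.num_experts_per_tok = 3        # router activates 3 experts per token instead of the default 2

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,                # matches the 4-bit eval runs above
    device_map="auto",
)
```

Activating more experts per token raises compute per token roughly in proportion, so the small accuracy gains above come at a real inference-speed cost.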
API for dolphin mistral? | 1 | Is there any online api that would let me run dolphin mistral in a OpenAI api like system? | 2024-01-11T11:39:30 | https://www.reddit.com/r/LocalLLaMA/comments/193zpw3/api_for_dolphin_mistral/ | Responsible-Sky8889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193zpw3 | false | null | t3_193zpw3 | /r/LocalLLaMA/comments/193zpw3/api_for_dolphin_mistral/ | false | false | self | 1 | null |
Can you guys sanity check for me? | 1 | Would love for get feedback on how feasible this would be to setup, and get pointers on where to start!
I made a post about how sleazy OpenAI was after realizing they switched their data privacy/usage policy from opt-in to opt-out late last year without letting users know. Only those in the EU got a courtesy email cuz they’re legally bound to do so by GDPR.
When I signed up, I was given the option to opt in. I said no, but it turns out that didn't mean jack. So when I opted out today, I found myself unable to use the chat history. **Even as a paid subscriber, I was being punished for not letting them use my data.**
They likely did this in anticipation of opening the GPT store, so they can collect user data w/o people realizing and take advantage of the custom GPTs.
Turns out, unless you specifically opt-out via going to their policy page and submitting a formal request, you will lose access to your chat history. Even if you are a paid user — unless you sign up for their new Teams tier w/ at least one other person, but you have to pay annually for a minimum of 2 years. Seriously?????
I want to level the playing field. When you create a customGPT w/ an API connection, you can opt-out of allowing OpenAI to use your user’s data.
I'd like to propose (and work on) an open-source project that allows people to save their GPT chat history via a vector database, then use RAG to help them retrieve data. If we create an API endpoint and connect it with a customGPT w/ data usage opted-out, users can essentially have more than just a chat history - a personalized customGPT w/ memory - and still own their data.
They can sign up for an account when they first use the customGPT, and have complete control over their data, docs, whatever. They can even export their data and use it to fine tune their personal models if they have one. What would be helpful is a frontend UI that lets users visualize their conversations as topic-buckets, and allow them to drag and drop sessions to help improve relevancy and accuracy.
Being able to see your sessions as webs of topics and move things around is both intuitive and helpful for latent discovery. Though I’m not sure how difficult this would be given that changes will be directly reflected in the database and how it might impact RAGs (need to read up on this).
Other CustomGPTs can also set this up on their own, or use our API + database service if they don’t want to go through the hassle and just want a plug-and-play solution. It’s possible to do what Snorkel did and develop as a platform to help cover costs and grow, while keeping to the spirit of open source and data ownership.
At least for me, it would be immensely useful to have a place for all my chat sessions across any model. And also give some power back to ChatGPT users. | 2024-01-11T11:21:55 | https://www.reddit.com/r/LocalLLaMA/comments/193zfiq/can_you_guys_sanity_check_for_me/ | GeeBrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193zfiq | false | null | t3_193zfiq | /r/LocalLLaMA/comments/193zfiq/can_you_guys_sanity_check_for_me/ | false | false | self | 1 | null |
How do you discover tools/ideas that might help improve your LLM-based apps which are not RAG? | 16 | Hi!
I'm new to LLM-based applications and mainly just trying stuff for fun and learning to improve my skills and knowledge but I was wondering how do you discover ideas/tools that can help improve your way of interacting between data and LLMs in applications which are not RAG, for which I find there are a lot of resources. I have done some simple use case that works pretty well when everything is fine, but also breaks a lot, but overall it didn't require me some deep knowledge of the space and I'm feeling like I can improve a lot upon my initial code by solving these issues using what people in the space developed or thought of for that. I'm not talking about tools and ideas for RAG which are very abundant, but for all the other applications that don't require the level of sophistication RAG applications do.
My use case is extracting structured data from documents. I have coded a small python program to which I give a document (.pdf or .txt) and I give it a query (a string, in natural language) that's intended to extract structured data from these documents and put it into a JSON. For example if my document talks about different startups, how much capital they've raised etc., I can ask "extract the names of the startups the document is talking about and how much capital they've raised in their first series" and get a JSON that contains that.
I consider this a simple use case, it's not a RAG use case, it's a simple use case of, at least how I think of it, "chunking => LLM call => output cleaning" but I want to either go beyond that if it's possible or explore better ideas to do each of these small steps (like are there ways to do better chunking that the traditional ones of recursive text splitting etc.?). I feel like just trying to do this in any other way without using LLMs can already be a great step for me but I don't know where to look at. | 2024-01-11T10:53:49 | https://www.reddit.com/r/LocalLLaMA/comments/193yz5r/how_do_you_discover_toolsideas_that_might_help/ | Mean-Night6324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193yz5r | false | null | t3_193yz5r | /r/LocalLLaMA/comments/193yz5r/how_do_you_discover_toolsideas_that_might_help/ | false | false | self | 16 | null |
Axolotl training with instruct dataset | 2 | I am trying to create an axolotl config that will work with a dataset whose instructions are defined in the following format: <s>[INST] {instruction} {question}[/INST] {answer}</s>.
Can someone please advise how the dataset definition should be specified in the axolotl config.yaml file for it?
Thanx! | 2024-01-11T10:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/193y9ic/axolotl_training_with_instruct_dataset/ | hrdingo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193y9ic | false | null | t3_193y9ic | /r/LocalLLaMA/comments/193y9ic/axolotl_training_with_instruct_dataset/ | false | false | default | 2 | null |
How good is Mistral 7B at function calling? | 2 | I want to use function calling with OpenOrca Mistral 7B, but I'm just not sure how good it can be. Are there any Mistral 7B fine-tuned models that are better for function calling?
​ | 2024-01-11T08:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/193wqoi/how_good_is_mistral_7b_at_function_calling/ | NFCProdACR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193wqoi | false | null | t3_193wqoi | /r/LocalLLaMA/comments/193wqoi/how_good_is_mistral_7b_at_function_calling/ | false | false | self | 2 | null |
Free LLM APIs | 2 | I'm looking to categorize my images. So I want to write a simple Python program that takes the filename of a list of images and returns a list of tags for each filename.
Are there any free LLM APIs out there that I could use for a task like this?
I have a very basic computer, so running a local LLM is probably not possible. | 2024-01-11T08:17:44 | https://www.reddit.com/r/LocalLLaMA/comments/193wpyj/free_llm_apis/ | vroemboem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193wpyj | false | null | t3_193wpyj | /r/LocalLLaMA/comments/193wpyj/free_llm_apis/ | false | false | self | 2 | null |
Any alternatives for H2O DataStudio? | 9 | Hello, I have seen H2O DataStudio [(ref)](https://docs.h2o.ai/h2o-llm-data-studio/) and I think such a tool can be good for an amateur like me, as it covers multiple aspects of dataset creation. It seems that it is not an open source project, so I couldn't install and run it locally. Does anybody know of any alternative?
To be more precise, I am looking for an environment in which I can manage data creation for various LLMs. Ideally I would create a generic "instruct" dataset, and when I want to export that dataset for a specific model I should be able to do it with a simple export function. Extra points if it can also trigger fine-tuning via integration with tools like axolotl.
If possible, I should be able to manage the work as projects and tasks. The raw data input can be in a few different formats, such as PDFs, .docx, and even websites. After I process them (mostly manually, but it would be nice to run scripts to help me) they should become a dataset.
Nice to have: if possible, I would like to host a GUI so that I can get help from a few curious friends in the longer run, but that is not an urgent requirement.
Nice to have: generate similar data synthetically using ChatGPT-4 and add it directly to the dataset, marked as synthetic.
​
Thank you so much in advance :) | 2024-01-11T07:56:33 | https://www.reddit.com/r/LocalLLaMA/comments/193weeb/any_alternatives_for_h2o_datastudio/ | idesireawill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193weeb | false | null | t3_193weeb | /r/LocalLLaMA/comments/193weeb/any_alternatives_for_h2o_datastudio/ | false | false | self | 9 | null |
[Demo] NVIDIA Chat With RTX | Leveraging retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, you can query a custom chatbot | 27 | 2024-01-11T07:55:41 | https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generative-ai/ | Nunki08 | nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 193we0e | false | null | t3_193we0e | /r/LocalLLaMA/comments/193we0e/demo_nvidia_chat_with_rtx_leveraging/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'SAAks4rC6Og3LbqkpNqIIY1OPQrjAQsgx0UPGu6Vo5I', 'resolutions': [{'height': 24, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?width=108&crop=smart&auto=webp&s=283a208fabf90a832c132bd833d93ad7ab831009', 'width': 108}, {'height': 48, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?width=216&crop=smart&auto=webp&s=6d43b3465d7a76dbb81f90f749423050cf77afb5', 'width': 216}, {'height': 72, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?width=320&crop=smart&auto=webp&s=e669709a5f2b1898f45ae6f48161adecbb2cee22', 'width': 320}, {'height': 145, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?width=640&crop=smart&auto=webp&s=59d2b459996f2657c9727e884b814f32c5fddf40', 'width': 640}, {'height': 217, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?width=960&crop=smart&auto=webp&s=b770d31283136ba93096ce5b1777b1a56b2e3d2f', 'width': 960}, {'height': 244, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?width=1080&crop=smart&auto=webp&s=a6a214d4703d337061f2329790890ae5d9020f8b', 'width': 1080}], 'source': {'height': 580, 'url': 'https://external-preview.redd.it/d47SYF4-nEdaYG6sETbq_a2ATOg3LUbbAo_imVo58sE.jpg?auto=webp&s=8f4995ccdc4095f71632c4e0ba40e2250db288ea', 'width': 2560}, 'variants': {}}]} | ||
Easy-to-use apps/SaaS/tools etc. for fine tune? | 5 | Is there any beginner-friendly apps/SaaS/tools etc. for fine tune if I don't want to jump in the rabbit hole of setting everything up on my laptop, or I don't have a powerful GPU for it?
Ideally, I can simply upload my data, specify a model on HuggingFace and click one button to fine tune the model. | 2024-01-11T07:49:12 | https://www.reddit.com/r/LocalLLaMA/comments/193wapk/easytouse_appssaastools_etc_for_fine_tune/ | felixding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193wapk | false | null | t3_193wapk | /r/LocalLLaMA/comments/193wapk/easytouse_appssaastools_etc_for_fine_tune/ | false | false | self | 5 | null |
Running Tinyllama on an Android phone locally no internet. | 1 | [removed] | 2024-01-11T06:11:13 | https://www.reddit.com/r/LocalLLaMA/comments/193us6u/running_tinyllama_on_an_android_phone_locally_no/ | CaptSpalding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193us6u | false | null | t3_193us6u | /r/LocalLLaMA/comments/193us6u/running_tinyllama_on_an_android_phone_locally_no/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'thB1ECTDhAmAceVvm3UHMmOSs88oRrLCuKd9QyX-vh4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/TbEH56oX-ZOBDqgrE723tx8yikJrhQLlnFgXtfR2GAU.jpg?width=108&crop=smart&auto=webp&s=61cdb5163738fa0716d154053f9d42ed67db4923', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/TbEH56oX-ZOBDqgrE723tx8yikJrhQLlnFgXtfR2GAU.jpg?width=216&crop=smart&auto=webp&s=2e44de18fdd3ab69a158bdd2a18ff612500bec5c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/TbEH56oX-ZOBDqgrE723tx8yikJrhQLlnFgXtfR2GAU.jpg?width=320&crop=smart&auto=webp&s=ee5c233351086fc6b06d85920d6e16c712ae0582', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/TbEH56oX-ZOBDqgrE723tx8yikJrhQLlnFgXtfR2GAU.jpg?auto=webp&s=74b323e0cecccac90503985171e036578bbbcb74', 'width': 480}, 'variants': {}}]} |
[RAG] [llama-index] How to execute multiple SQL queries with SQLTableRetrieverQueryEngine in NL2SQL project? | 1 | I am working on a project where a user will ask natural language queries and this llama-index based engine will convert the natural language to a SQL query, execute it on my database, and give an answer in natural language to the user. The problem is that it is only able to execute one query per question, so comparison questions are not possible to answer; also, if a question does not require querying the database, it will still query the database. How can I solve this? Please help me with your suggestions.
Thanks in advance.
| 2024-01-11T06:05:59 | https://www.reddit.com/r/LocalLLaMA/comments/193uoti/rag_llamaindex_how_to_execute_multiple_sql/ | HappyDataGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193uoti | false | null | t3_193uoti | /r/LocalLLaMA/comments/193uoti/rag_llamaindex_how_to_execute_multiple_sql/ | false | false | default | 1 | null |
Easy to use "out of the box" LLM running programs for Macs? | 2 | on my main machine i love use kobold, and even ooba however after making the purchase of a Mac studio M2 192gb, ive come to learn that the environment is different than what i was used to on windows.
So i reverted back to using LM studio on the mac because it was literally just install and use. however i am having some issues already namely i think related to context, once the tokens generated begin to exceed the context limits LM Studio starts acting crazy, spitting gibberish, hallucinating, or just plain noncompliance.--i have fiddled with increasing context settings but they give me a warning that the model in question (goliath 120B) was trained on 4096 and then proceeds to give me garbage results....I can't change the rope settings because i am unfamiliar with what i would even put there to achieve 8k or even 12k+ context...even changing the context settings like maintain a rolling window does not fix this.
this context wasn't an issue for me before on kobold.
i'm thinking of finding a way to fight ooba into working on the mac, or the alternative is coming on here and asking all you fine gents on a direction to take in regards to loading LLMs on a Mac.
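For what it's worth, the only rope math I've come across is the rough rule of thumb below that people quote for llama.cpp-style loaders. I have no idea whether it is right for goliath, so treat the formulas and numbers as guesses rather than anything from the LM Studio docs:

# Rough rule-of-thumb sketch only, based on community heuristics for
# llama.cpp-style RoPE settings; nothing here is from the LM Studio docs.
trained_ctx = 4096
target_ctx = 8192
head_dim = 128  # typical head dimension for llama-family models

# Linear scaling: shrink the frequency scale as the context grows.
freq_scale = trained_ctx / target_ctx  # 0.5 for 8k on a 4k-trained model

# NTK-style scaling: keep freq_scale at 1.0 and raise the base instead.
alpha = target_ctx / trained_ctx
freq_base = 10000 * alpha ** (head_dim / (head_dim - 2))  # roughly 20000 for 8k

print(f"linear: freq_scale={freq_scale}, freq_base=10000")
print(f"ntk-ish: freq_scale=1.0, freq_base={freq_base:.0f}")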
Please: something that is easy to use, has good settings, and works on a Mac with little to no effort. OR, what should I change the rope settings to so I can squeeze more context out of goliath 120B on LM Studio? Namely frequency scale and frequency base. | 2024-01-11T05:41:15 | https://www.reddit.com/r/LocalLLaMA/comments/193u9c7/easy_to_use_out_of_the_box_llm_running_programs/ | DominicanGreg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193u9c7 | false | null | t3_193u9c7 | /r/LocalLLaMA/comments/193u9c7/easy_to_use_out_of_the_box_llm_running_programs/ | false | false | self | 2 | null |
DeepSeek just announced DeepSeek-MoE | 126 | 2024-01-11T05:33:16 | https://nitter.net/_akhaliq/status/1745306807931642055#m | ninjasaid13 | nitter.net | 1970-01-01T00:00:00 | 0 | {} | 193u4eq | false | null | t3_193u4eq | /r/LocalLLaMA/comments/193u4eq/deepseek_just_announced_deepseekmoe/ | false | false | default | 126 | null | |
Why is there a padding=max_length option when there is a padding=longest for hugging face tokenizer? | 1 | I was going through hugging face tokenizer and realised that there are two options for padding - 'longest length in the batch' and 'maximum length accepted by model'. All this while I thought padding meant the former only. Is there any advantage for the second option or in which case should we use it? The attention mechanism anyway is going to ignore the padded tokens, so I am not getting it why would we need more of padded tokens than the maximum length within a batch. | 2024-01-11T05:29:28 | dire_wolf_cookie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 193u203 | false | null | t3_193u203 | /r/LocalLLaMA/comments/193u203/why_is_there_a_paddingmax_length_option_when/ | false | false | 1 | {'enabled': True, 'images': [{'id': '_8POnxrmj2letnf6OakG8CMJCSxiwP9az7_lTv7em50', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/9vbtkxtkyqbc1.png?width=108&crop=smart&auto=webp&s=9e42b636da1fef9ff5633b913c29cd4b47482c95', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/9vbtkxtkyqbc1.png?width=216&crop=smart&auto=webp&s=bae4e80e5fd156250450df009930fce01f1d2a81', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/9vbtkxtkyqbc1.png?width=320&crop=smart&auto=webp&s=e6f8650a1ba646f7139b492cf64418964cdee3a5', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/9vbtkxtkyqbc1.png?width=640&crop=smart&auto=webp&s=d952b2965f4e2496edbf9edf304824d943de1de5', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/9vbtkxtkyqbc1.png?width=960&crop=smart&auto=webp&s=cc0dd2b671bfab0284bdea198521b9ea3a517e26', 'width': 960}], 'source': {'height': 494, 'url': 'https://preview.redd.it/9vbtkxtkyqbc1.png?auto=webp&s=c9d7ec62b94c0aa036484c7e5df5007352743d65', 'width': 986}, 'variants': {}}]} | ||
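For reference, a minimal sketch of the two padding modes being asked about, using the transformers tokenizer API (the checkpoint name is only an example):

# Minimal sketch: padding="longest" vs padding="max_length" with a HF tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # example checkpoint
batch = ["a short sentence", "a slightly longer sentence than the first one"]

# Pad only up to the longest sequence in this particular batch.
longest = tok(batch, padding="longest", return_tensors="pt")

# Pad every sequence to a fixed length, regardless of the batch contents.
fixed = tok(batch, padding="max_length", max_length=32, return_tensors="pt")

print(longest["input_ids"].shape)  # (2, length of longest sequence in the batch)
print(fixed["input_ids"].shape)    # (2, 32): handy when static shapes are needed

The fixed-length variant mainly matters when downstream code needs every batch to have the same shape (for example compiled or static-graph pipelines); the attention mask ignores the pad tokens either way.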
Is data privacy a feature or a right? | 1 | [removed] | 2024-01-11T05:21:49 | https://www.reddit.com/r/LocalLLaMA/comments/193tx6x/is_data_privacy_a_feature_or_a_right/ | GeeBrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193tx6x | false | null | t3_193tx6x | /r/LocalLLaMA/comments/193tx6x/is_data_privacy_a_feature_or_a_right/ | false | false | self | 1 | null |
Need some Advice / Help | 1 | Hello, I have been tasked with finding a solution to deploy a fine-tuned Mistral 7B model locally. If the solution were to scale to give around 100 people access (similar to a much smaller ChatGPT), what would roughly be the GPU/CPU requirements to do so? Can I chain multiple instances? | 2024-01-11T04:47:31 | https://www.reddit.com/r/LocalLLaMA/comments/193tas7/need_some_advice_help/ | Annukhz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193tas7 | false | null | t3_193tas7 | /r/LocalLLaMA/comments/193tas7/need_some_advice_help/ | false | false | self | 1 | null |
Is fine tuning QLoRA still state of the art? | 18 | If so, how are people doing it, what frameworks are we using and what hardware did you invest in? | 2024-01-11T04:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/193sxbj/is_fine_tuning_qlora_still_state_of_the_art/ | HotRepresentative325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193sxbj | false | null | t3_193sxbj | /r/LocalLLaMA/comments/193sxbj/is_fine_tuning_qlora_still_state_of_the_art/ | false | false | self | 18 | null |
What are small models that work for you? What are the configs that you use for them? | 9 | When I quantize the OpenHermes model to q2\_k, it's around \~3GB. Has anyone had experience using a smaller quantized model with similar performance? just wondering. What configs do y'all use for your models? Currently, I have "use\_mlock", "n\_ctx: 2048", "n\_gpu\_layers: 0" (running on CPU). I'm not really sure how to optimize these configs, does anyone have suggestions for how to improve these? | 2024-01-11T04:24:02 | https://www.reddit.com/r/LocalLLaMA/comments/193sv21/what_are_small_models_that_work_for_you_what_are/ | Powerful-Cupcake-407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193sv21 | false | null | t3_193sv21 | /r/LocalLLaMA/comments/193sv21/what_are_small_models_that_work_for_you_what_are/ | false | false | self | 9 | null |
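For comparison, here is a minimal sketch of those same settings as they would look with a llama-cpp-python style loader; the GGUF path is a placeholder and the values simply mirror the config mentioned in the post:

# Sketch only: llama-cpp-python loader with the settings from the post.
from llama_cpp import Llama

llm = Llama(
    model_path="./openhermes-q2_k.gguf",  # placeholder path
    n_ctx=2048,        # context window
    n_gpu_layers=0,    # 0 = pure CPU inference
    use_mlock=True,    # pin the weights in RAM so they are not swapped out
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])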
Could an MoE be made up of language and embedding models? | 1 | [removed] | 2024-01-11T04:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/193shv4/could_an_moe_be_made_up_of_language_and_embedding/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193shv4 | false | null | t3_193shv4 | /r/LocalLLaMA/comments/193shv4/could_an_moe_be_made_up_of_language_and_embedding/ | false | false | self | 1 | null |
Templates/presets | 3 | Where can I find presets or templates (like shortwave)?
Is there a website where they are shared? I can't find these presets for the life of me. (From agnaistic) | 2024-01-11T04:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/193sh55/templatespresets/ | darklink6446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193sh55 | false | null | t3_193sh55 | /r/LocalLLaMA/comments/193sh55/templatespresets/ | false | false | self | 3 | null |
Interpretation of learnt weight tensors | 4 | Nowadays model merging is rising in popularity, notably slerp and other interpolation methods. I have a doubt about how to interpret the concept of interpolating weight tensors. I see the learned weights of LLMs as high-dimensional dense vectors, but I am having difficulty understanding how these tensors orient or change themselves while being merged with other weights. Does anyone have any intuitive idea so that I can visualise the merging of weights? | 2024-01-11T03:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/193s83u/interpretation_of_learnt_weight_tensors/ | MysticShadow427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193s83u | false | null | t3_193s83u | /r/LocalLLaMA/comments/193s83u/interpretation_of_learnt_weight_tensors/ | false | false | self | 4 | null |
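A tiny, self-contained sketch of what slerp actually does to two flattened weight tensors can help build the intuition; this is pure illustration, not code from any particular merging tool:

# Illustration only: spherical interpolation of two flattened weight tensors.
import numpy as np

def slerp(w1, w2, t, eps=1e-8):
    a, b = w1.ravel(), w2.ravel()
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))  # angle between the two directions
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * w1 + t * w2
    so = np.sin(omega)
    mixed = (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
    return mixed.reshape(w1.shape)

w1 = np.random.randn(4, 4)
w2 = np.random.randn(4, 4)
# Instead of averaging magnitudes like lerp, slerp "rotates" from w1 toward w2,
# which tends to preserve the norm structure of the weights.
print(slerp(w1, w2, 0.5))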
Prompting strategies to use with LLMs and medical case descriptors? | 1 | I am working on a project where I input case report descriptors into an LLM (Mistral, GPT-4, etc.). I need some prompting strategies to get better answers. Eventually, after the description, I ask what the diagnosis is and compare it to what is in the paper. I have an AI engineer working with me who will be able to implement specifics. My question is this: are there any prompting strategies, or how should I ask GPT to look for a diagnosis or differential diagnosis with my case descriptions? I know MedPrompt was recently released: [https://github.com/microsoft/promptbase](https://github.com/microsoft/promptbase). Can I use these at all? I don't understand how to use them.
| 2024-01-11T03:16:13 | https://www.reddit.com/r/LocalLLaMA/comments/193rjln/prompting_strategies_to_use_with_llms_and_medical/ | InDebt2Medicine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193rjln | false | null | t3_193rjln | /r/LocalLLaMA/comments/193rjln/prompting_strategies_to_use_with_llms_and_medical/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-UIKbvJLmDI5ngIkrN_zDqqQgf3a5RDS3IEOiocJWmg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=108&crop=smart&auto=webp&s=ce510ffff10f4a83fc33b66040a2d47a364b91a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=216&crop=smart&auto=webp&s=19ccd880568adddb5b711a5a79c3ddfeec3d8979', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=320&crop=smart&auto=webp&s=1dad6ff910357f5c4ac6aaa9f69fcdf87ca18a88', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=640&crop=smart&auto=webp&s=3292b492a9c75e53acef14943fb3440d6663e309', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=960&crop=smart&auto=webp&s=f6dd3cefba7a3c8edd9bee1b04544f673199c685', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=1080&crop=smart&auto=webp&s=f1cc1ad311789a6e6f54c1fef9dd243b8e63d4be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?auto=webp&s=78aaff80d1a8d22805354f48545f355373bad32f', 'width': 1200}, 'variants': {}}]} |
Read any GIT folder and answer questions | 1 | Looking to develop an application that reads a Git folder and answers user questions on functionality and technical components, and also draws editable diagrams. Any suggestions on the approach? | 2024-01-11T02:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/193qbeh/read_any_git_folder_and_answer_questions/ | Key-Air-1324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193qbeh | false | null | t3_193qbeh | /r/LocalLLaMA/comments/193qbeh/read_any_git_folder_and_answer_questions/ | false | false | self | 1 | null |
Looking for a music specialized LLAMA 2 finetune. | 1 | [removed] | 2024-01-11T01:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/193prwx/looking_for_a_music_specialized_llama_2_finetune/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193prwx | false | null | t3_193prwx | /r/LocalLLaMA/comments/193prwx/looking_for_a_music_specialized_llama_2_finetune/ | false | false | self | 1 | null |
Local LLama vs other GPT local alternatives (like gpt4all) | 1 | I'm looking for some guidance here; hopefully we can have a good discussion. All of this sea of information on local AIs, or just AI in general, is Greek to me, but I'm trying to learn as I go. I'm trying to set up a local AI that interacts with sensitive information from PDFs for my local business in the education space. In essence I'm trying to take information from various sources and make the AI work with the concepts and techniques that are described, let's say, in a book (is this even possible?). A lot of this information I would prefer to stay private, so this is why I would like to set up a local AI in the first place.
My specs are as follows:
Intel(R) Core(TM) i9-10900KF CPU @ 3.70GHz 3.70 GHz
Installed Ram: 16.0 GB
System type: 64-bit operating system, x64-based processor
NVIDIA GeForce RTX 3070
MSI Z490-A Pro motherboard
If the preferred local AI is Llama, what else would I need to install and plug in to make it work efficiently? I'd imagine I would need some extras installed in order for my PDFs or other types of data to be read. Thanks. | 2024-01-11T01:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/193piuw/local_llama_vs_other_gpt_local_alternatives_like/ | acotwo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193piuw | false | null | t3_193piuw | /r/LocalLLaMA/comments/193piuw/local_llama_vs_other_gpt_local_alternatives_like/ | false | false | self | 1 | null |
best option for multi gpu inference ? | 2 | I don't have experience with multi-GPU inference. I actually have an RTX 4090 with vLLM and get very fast t/s inference with AWQ 7B models, but I want to test larger models, so in your experience what is the best local inference setup for larger models (e.g. 70B) with multiple GPUs?
I can get some SLI-ready 3090s for around 500-550 USD each.
I also want to try models like Mixtral in 4 or 5 bits; I don't have much experience with quants, so comments on that are very welcome :)
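From what I've read, the multi-GPU part in vLLM is mostly just tensor parallelism, something like the sketch below? The AWQ flag and the model id are guesses on my side:

# Guess/sketch only: tensor parallel across 2 GPUs with an AWQ-quantized model.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-70B-AWQ",  # placeholder model id
    quantization="awq",
    tensor_parallel_size=2,            # number of GPUs to shard across
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)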
best regards and thanks! | 2024-01-11T01:34:20 | https://www.reddit.com/r/LocalLLaMA/comments/193pfq5/best_option_for_multi_gpu_inference/ | prudant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193pfq5 | false | null | t3_193pfq5 | /r/LocalLLaMA/comments/193pfq5/best_option_for_multi_gpu_inference/ | false | false | self | 2 | null |
Recommendations for build targeting speed/efficiency | 3 | Hi, pretty new to the LLM world, but I'm trying to build something for my personal use.
I was searching around for a way to enable my own home automation software (Home Assistant) to make use of LLMs in order to control the devices. Right now I'm using the OpenAI API, but this depends on the cloud and I really don't like the latency (old-ass 4G connection here, 50ms of base latency) and also the price.
Since I live in Europe and the cost of electricity is very high, I'm targeting efficiency and speed over cost and scalability. The idea is to run an LLM with "good enough" understanding to trigger Home Assistant services (can I tune it? If so, is there any tutorial of this kind?) and a TTS/STT like multilingual Whisper on a single "node".
The concern is to reply fast, consume as little as possible and go idle to a very low power state.
I'm sorry if I'm asking something the wrong way, but my field is mostly the DevOps/k8s/Linux/infrastructure domain, not AI... | 2024-01-11T01:26:38 | https://www.reddit.com/r/LocalLLaMA/comments/193p9ru/recommendations_for_build_targeting/ | CappyT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193p9ru | false | null | t3_193p9ru | /r/LocalLLaMA/comments/193p9ru/recommendations_for_build_targeting/ | false | false | self | 3 | null |
Help to use pipeline conversational on mistral instruct v0.2 | 1 | I am trying to use the same code that works on zephyr and llama2:
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
compute_dtype = getattr(torch, "float16")
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=False,
)
BASELINE_MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(BASELINE_MODEL_NAME,
quantization_config=quantization_config,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(BASELINE_MODEL_NAME)
print(tokenizer.chat_template)
pipe = pipeline("conversational", model=model, tokenizer=tokenizer)
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in portuguese",
},
{"role": "user", "content": "Qual é maior: sol ou a terra?"},
]
result = pipe(messages)
print(result.messages[-1]['content'])
But got the error:
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
The chat_template of mistral is:
{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}
Any hints?
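The only workaround I've found so far is folding the system prompt into the first user message, roughly like this; I'm not sure it's the intended fix:

# Workaround sketch: the Mistral template only accepts alternating user/assistant
# roles, so merge the system prompt into the first user turn.
system = "You are a friendly chatbot who always responds in portuguese"
messages = [
    {"role": "user", "content": system + "\n\nQual é maior: sol ou a terra?"},
]
result = pipe(messages)
print(result.messages[-1]["content"])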
Thanks in advance ! | 2024-01-11T01:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/193p9c3/help_to_use_pipeline_conversational_on_mistral/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193p9c3 | false | null | t3_193p9c3 | /r/LocalLLaMA/comments/193p9c3/help_to_use_pipeline_conversational_on_mistral/ | false | false | self | 1 | null |
Help - Struggling to get valid JSON responses from Mixtral 8x7b | 4 | I've been primarily using GPT 3.5 and prompting it to respond in JSON:
"
Instructions:
...
Output format:
Output the answer in the following JSON format:
{ question: string, options: string[] }
"
I expect it to return a single JSON object in string format, which I then parse. To allow for any potential additions around the JSON (i.e. "Here is the answer: JSON { }"), I simply parse the string from the first curly brace to the last curly brace.
However, Mixtral tends to respond with really messy output when asked to return JSON:
- Multiple JSON variants
- With dot points mixed in (I have a lot of dot points in my input prompts)
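Right now my parsing is basically just grabbing from the first curly brace to the last and loading that, roughly like the sketch below, which obviously falls apart when the model emits several JSON blocks:

# Naive extraction sketch: take everything from the first "{" to the last "}".
import json

def extract_json(text):
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return None

print(extract_json('Here is the answer: {"question": "Q1", "options": ["a", "b"]}'))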
Has anyone here gotten Mixtral to be consistent with JSON outputs? | 2024-01-11T01:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/193p701/help_struggling_to_get_valid_json_responses_from/ | benjalim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193p701 | false | null | t3_193p701 | /r/LocalLLaMA/comments/193p701/help_struggling_to_get_valid_json_responses_from/ | false | false | self | 4 | null |
Stay up-to-date within the LLM space | 1 | Hi! With everything around LLMs moving at such a fast pace, papers, related frameworks, libraries, toolkits, open source models and more popping up every day, I found it much harder to stay on top of everything than before. How do you stay updated and effectively navigate this space. Any recommendations for blogs, communities, channels or strategies? Thanks in advance!😊 | 2024-01-11T01:18:56 | https://www.reddit.com/r/LocalLLaMA/comments/193p3r5/stay_uptodate_within_the_llm_space/ | Odd-Farmer-3121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193p3r5 | false | null | t3_193p3r5 | /r/LocalLLaMA/comments/193p3r5/stay_uptodate_within_the_llm_space/ | false | false | self | 1 | null |
4090s in stock at Best Buy for pick up. Rather they go to my LocalLLaMA family than scalpers. Good luck. | 42 | 2024-01-11T00:16:47 | https://www.bestbuy.com/site/pny-nvidia-geforce-rtx-4090-24gb-gddr6x-pci-express-4-0-graphics-card-with-triple-fan-and-dlss-3-black/6522679.p?skuId=6522679 | SeymourBits | bestbuy.com | 1970-01-01T00:00:00 | 0 | {} | 193nr4e | false | null | t3_193nr4e | /r/LocalLLaMA/comments/193nr4e/4090s_in_stock_at_best_buy_for_pick_up_rather/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'T-15OBvm6H1Df7kNrhwbOoVjPzTcOLNnLL_vxJpMGEA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/i5AFOzzJUQHKv6dPePBvsk77zSWwoDcsukdQR9-sgNM.jpg?width=108&crop=smart&auto=webp&s=dd860919e0e6c91d87d79851dcdab3a9d2a8ce51', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/i5AFOzzJUQHKv6dPePBvsk77zSWwoDcsukdQR9-sgNM.jpg?width=216&crop=smart&auto=webp&s=2d48ddcc0fb2116c24e80dc857ef22b70e55e50f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/i5AFOzzJUQHKv6dPePBvsk77zSWwoDcsukdQR9-sgNM.jpg?width=320&crop=smart&auto=webp&s=a508032a1ec06636b4f2de88b99767ef5d4dbd6e', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/i5AFOzzJUQHKv6dPePBvsk77zSWwoDcsukdQR9-sgNM.jpg?width=640&crop=smart&auto=webp&s=5b6a2b3de1045cefba13841779903426dbce6c65', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/i5AFOzzJUQHKv6dPePBvsk77zSWwoDcsukdQR9-sgNM.jpg?width=960&crop=smart&auto=webp&s=d5fb5629dfe0fb88f257bf718f687bd7e32a7159', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/i5AFOzzJUQHKv6dPePBvsk77zSWwoDcsukdQR9-sgNM.jpg?auto=webp&s=1a629aff6293063e2393a43a1c3bbf976357a8ca', 'width': 1000}, 'variants': {}}]} | ||
Let's get GPTs to promote local LLMs | 1 | [removed] | 2024-01-10T23:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/193mjfp/lets_get_gpts_to_promote_local_llms/ | ALocalLLaMA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193mjfp | false | null | t3_193mjfp | /r/LocalLLaMA/comments/193mjfp/lets_get_gpts_to_promote_local_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5ZmL7d2VZstdFqjn3Ip6GXNjx6NLEdZri5mcfZcSAKI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GDs_bj0u5K43BtH1Ci5Qv3jIVB_IHBIVwl91-0Xqcgs.jpg?width=108&crop=smart&auto=webp&s=7db44306d92aefb317471040cfca8a7f7eef14aa', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/GDs_bj0u5K43BtH1Ci5Qv3jIVB_IHBIVwl91-0Xqcgs.jpg?width=216&crop=smart&auto=webp&s=bd27a1205185c3666d47547c430a96ef8a296f05', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/GDs_bj0u5K43BtH1Ci5Qv3jIVB_IHBIVwl91-0Xqcgs.jpg?width=320&crop=smart&auto=webp&s=055bcc5c97b7e842244c1973152ed7cd8811e6c2', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/GDs_bj0u5K43BtH1Ci5Qv3jIVB_IHBIVwl91-0Xqcgs.jpg?auto=webp&s=8e41c0dc5750db9b69c346f30ce7f01e2b91e974', 'width': 512}, 'variants': {}}]} |
Quite new to this, looking for suggestions for my powerful PC. | 1 | I have a quite powerful PC and am looking for suggestions and/or inspiration on what kinds of things would be interesting/enjoyable to run on it.
I have a 4090, a 13900k, 96 GB of DDR5 RAM, and 6 TB of m.2 drive space which I can expand further with another 2 available slots.
I am super eager and interested in AI as I have been less and less able to invest myself emotionally into gaming, something I'd originally aimed for when upgrading my GPU and RAM (the RAM part was more just done because it's cheaper now than it has been in a long time)
I'd love even just ideas of what YOU would do with this rig as my biggest issue is a lack of creativity or lack of ideas, perhaps largely on account of my lack of experience and knowledge in the area of running local AI.
I have installed a couple of models locally, and have a couple of UIs to run them, one being oobabooga, and one being another whose name is escaping me at the moment.
Anyway, any recommendations on things I could do with my computer?
Thank you in advance, for any replies. | 2024-01-10T23:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/193mfor/quite_new_to_this_looking_for_suggestions_for_my/ | caidicus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193mfor | false | null | t3_193mfor | /r/LocalLLaMA/comments/193mfor/quite_new_to_this_looking_for_suggestions_for_my/ | false | false | self | 1 | null |
Titan RTX 2024 | 1 | https://www.tiktok.com/t/ZT8qd6yt8/
This is Dolphin Mixtral 2.5 8x 7B 4 bit running on my machine. Note the settings at the end. I’m an amateur here but when I tried to run this on my 4070, it crashed most of the time. I can work with this.
I asked multiple models to compare the RTX 3090 and they all said basically the same thing. Tensor cores are king and the Titan RTX was choice. Mixtral is saying that in the response you are seeing.
I know I can get a 3090 for about $100 more. Are any of you guys happy with yours? | 2024-01-10T23:17:44 | https://www.reddit.com/r/LocalLLaMA/comments/193me6f/titan_rtx_2024/ | stonedoubt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193me6f | false | null | t3_193me6f | /r/LocalLLaMA/comments/193me6f/titan_rtx_2024/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IFg7829quY-18QR4nbP48ZW2jUfEjDYctl_ojWuxzLE', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?width=108&crop=smart&auto=webp&s=75771d169ae48748903c527e713c2dc6cecb62c5', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?width=216&crop=smart&auto=webp&s=1108950a0695778ef57877209ffe21533a363fbe', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?width=320&crop=smart&auto=webp&s=83b25bc11c503b6b534a662e44a8c7380eafb0fa', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?width=640&crop=smart&auto=webp&s=f6e4e1665864aa301409ac21f79fd43e67023826', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?width=960&crop=smart&auto=webp&s=672a8a25675c2b3fd99e81340439dadcaa514b01', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?width=1080&crop=smart&auto=webp&s=06a095ec60902899f4bcd9b78b3d46771a25fd01', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/unqq89Z_TCMJyb_whAYn6YZ_mWpRTHvFHBnqiq764uE.jpg?auto=webp&s=957c4f4a6d9298543ed50e70bb72b667f336e421', 'width': 1080}, 'variants': {}}]} |
Jan: an open-source alternative to LM Studio providing both a frontend and a backend for running local large language models | 263 | 2024-01-10T23:03:34 | https://jan.ai/ | CosmosisQ | jan.ai | 1970-01-01T00:00:00 | 0 | {} | 193m27u | false | null | t3_193m27u | /r/LocalLLaMA/comments/193m27u/jan_an_opensource_alternative_to_lm_studio/ | false | false | 263 | {'enabled': False, 'images': [{'id': '_clR5lo0uUzBmmOefgsOcrqCYpHgjkxrStKzWLzjtqg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=108&crop=smart&auto=webp&s=a34a9c017a9872303c87fdbe0bca0b95846bd110', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=216&crop=smart&auto=webp&s=67047021fa80720833499c426641e059a6e86bbc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=320&crop=smart&auto=webp&s=dcdb4bb0148442648e65b68e6865207cac0b2fc2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=640&crop=smart&auto=webp&s=3d3bda8cadf2426facfc15184a6c812a572eb706', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=960&crop=smart&auto=webp&s=f614f9911aa52ab8c5e6e44dfaca8927167be907', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=1080&crop=smart&auto=webp&s=2a44840e4665263ae5e36cf6702fffebc2b295df', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?auto=webp&s=8e5e6d929427207a2483692d89844ab933417a5f', 'width': 1200}, 'variants': {}}]} | ||
Qwen2 is coming! | 73 | 2024-01-10T22:47:02 | https://github.com/huggingface/transformers/pull/28436 | bratao | github.com | 1970-01-01T00:00:00 | 0 | {} | 193lnq5 | false | null | t3_193lnq5 | /r/LocalLLaMA/comments/193lnq5/qwen2_is_coming/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'x4hmJK7t4R9WIQYYT2n8OiRxk2Rxf0K5K8PKm1lA510', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?width=108&crop=smart&auto=webp&s=8059d320216e7dfcc51ecce1fa198ea99e760f2d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?width=216&crop=smart&auto=webp&s=5050754d632138a9c7892255e14ea44b4207e7de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?width=320&crop=smart&auto=webp&s=07d91470bc8dad375b6d3c77ae9a38c0f6c079a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?width=640&crop=smart&auto=webp&s=1ac2267aa2520100fbd9b2c582776dd2c0cfbd1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?width=960&crop=smart&auto=webp&s=a92178bb9ef7b2f7c06cac974e6142c4bb007890', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?width=1080&crop=smart&auto=webp&s=7dca13cdeadec12decd092d08c25054da7bcc679', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KZc4htiU1OQ2s5k_rP90XTDoIAUZgI2W5uqpXX9S71E.jpg?auto=webp&s=2fa995728ea993384524b1a3b6fc060a4d4fd904', 'width': 1200}, 'variants': {}}]} | ||
New super model bagel-dpo-8x7b-v0.2 available | 1 | This new model has been promised for a while, so if you want to try it on the net before you run it locally, try [Infermatic.ai](https://Infermatic.ai).
If you don't know what it is --> An experimental fine-tune of [**mixtral-8x7b-v0.1**](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [**bagel**](https://github.com/jondurbin/bagel)
Link to the repo --> [https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2) | 2024-01-10T22:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/193l39e/new_super_model_bageldpo8x7bv02_available/ | Horror_Echo6243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193l39e | false | null | t3_193l39e | /r/LocalLLaMA/comments/193l39e/new_super_model_bageldpo8x7bv02_available/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]} |
Fastest for hosting API | 1 | [removed] | 2024-01-10T22:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/193l1h0/fastest_for_hosting_api/ | aallsbury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193l1h0 | false | null | t3_193l1h0 | /r/LocalLLaMA/comments/193l1h0/fastest_for_hosting_api/ | false | false | self | 1 | null |
Chat with Pixtral on Huggingface | 10 | 2024-01-10T22:12:39 | https://huggingface.co/spaces/mlabonne/phixtral-chat | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 193ktim | false | null | t3_193ktim | /r/LocalLLaMA/comments/193ktim/chat_with_pixtral_on_huggingface/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'J7YmoBKIcNHXxOrsGMFobFjePT65GI96Lx71BgeT3_I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?width=108&crop=smart&auto=webp&s=5d8161904c2057d32d03397bcb0098e2cbc97a35', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?width=216&crop=smart&auto=webp&s=3294b51f538405da6081f90394ca346137d4633e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?width=320&crop=smart&auto=webp&s=969a582ff075501525e3d29365df44b54a12e7fc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?width=640&crop=smart&auto=webp&s=1fc9ce4b9a65bae01b98a4521c7fe1a093a8a573', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?width=960&crop=smart&auto=webp&s=8c50e26ffc2be004ecdb923dbd9035a7e07739c4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?width=1080&crop=smart&auto=webp&s=9cbaff6f41983dca71dcf576a52a85ada2f169b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LQgvMxXwnEVsmzg5hFPdyuBmAysXGuWJ49dlqkWjmqw.jpg?auto=webp&s=8185aa35cd7c1ffdd75255f079ab59997b3d69a4', 'width': 1200}, 'variants': {}}]} | ||
Does anyone have non-quantized codellama models in GGUF format for llama.cpp? | 1 | I am trying to run non-quantized models on an A100 via llama.cpp, and TheBloke only has up to 8-bit quantized codellama 34b... but no 16- or 32-bit. | 2024-01-10T22:05:36 | https://www.reddit.com/r/LocalLLaMA/comments/193kn7s/does_anyone_have_nonquantized_codellama_models_in/ | natural_language_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193kn7s | false | null | t3_193kn7s | /r/LocalLLaMA/comments/193kn7s/does_anyone_have_nonquantized_codellama_models_in/ | false | false | self | 1 | null |
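If nothing turns up, converting the original HF weights yourself is an option. A sketch assuming llama.cpp's convert script; flag names may differ between llama.cpp versions:

# Sketch: produce an unquantized f16 GGUF from the original HF checkpoint.
# Flag names may differ between llama.cpp versions.
python convert.py /path/to/CodeLlama-34b-hf --outtype f16 --outfile codellama-34b.f16.gguf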
Is there any backend like vllm but for low end systems, aka using llama.cpp and the like for a cpu/gpu inference mix? | 1 | I am looking into a task that needs high throughput above everything else.
And if something like this exists I can afford to try out a lot more differently sized models than with vLLM, as far as my understanding goes.
Do you guys know anything ? | 2024-01-10T21:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/193jjow/is_there_any_backend_like_vllm_but_for_low_end/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193jjow | false | null | t3_193jjow | /r/LocalLLaMA/comments/193jjow/is_there_any_backend_like_vllm_but_for_low_end/ | false | false | self | 1 | null |
Upcoming APU Discussions (AMD, Intel, Qualcomm) | 19 | Hey guys. As you may know, there is a new lineup of APUs coming from AMD, Intel and Qualcomm.
What makes these interesting is that they all have some form of Neural Processing Unit that makes them really efficient for AI inferencing. The specification that these vendors are using to differentiate their AI capability is Trillions of Operations Per Second or TOPS. Here are the reported specs for AI from each company.
AMD: Ryzen 8000G Phoenix APU Lineup: 39 TOPS
https://www.tomshardware.com/pc-components/cpus/amd-launches-ryzen-8000g-phoenix-apus-brings-ai-to-the-desktop-pc-reveals-zen-4c-clocks-for-the-first-time
Intel: Meteorlake: 34 TOPS (Combined with CPU and NPU)
https://www.tomshardware.com/laptops/intel-core-ultra-meteor-lake-u-h-series-specs-skus
Qualcomm: Snapdragon X Elite: 45 TOPS
https://www.tomshardware.com/news/qualcomm-snapdragon-elite-x-oryon-pc-cpu-specs
For Reference, the M2 Ultra has a 31.6 TOPS and is using LPDDR5.
https://www.businesswire.com/news/home/20230605005562/en/Apple-introduces-M2-Ultra
https://www.tomshardware.com/reviews/apple-mac-studio-m2-ultra-tested-benchmarks
Please take this data with a grain of salt because I'm not exactly sure they are calculating TOPS the same way.
According to benchmarks for the M2 Ultra that people here have kindly shared, we can expect 7-10 tokens per second for 70B LLMs. As a reminder, the Apple M2 is using low-power DDR5 (LPDDR5) memory.
Can we expect these upcoming APU's to match if not beat the M2 Ultra? They can also use desktop grade DDR5 memory for faster memory speeds.
We can get fast 128 GB DDR5 kits relatively cheaply or we can splurge for 192 GB DDR5 KITS that are available now. Either way the total cost should still be significantly cheaper than a maxed out M2 Ultra and perform the same if not better.
Am I missing something? This just sounds a bit too good to be true. At this rate, we wouldn't even need to worry about quantization with most models. We can even supplement the APU with a graphics card like the 3090 to boost tokens per second.
The hassles of running these really large language models on consumer-grade hardware are close to coming to an end. We don't need to be stuck in Apple's non-repairable ecosystem. We don't need to pay the exorbitant VRAM tax either. Especially if it's just inference.
We are closer to getting really nice AI applications running on our local hardware from immersive games to a personal assistant using vision software. And it's only going to get faster and cheaper from here. | 2024-01-10T20:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/193ikun/upcoming_apu_discussions_amd_intel_qualcomm/ | zippyfan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193ikun | false | null | t3_193ikun | /r/LocalLLaMA/comments/193ikun/upcoming_apu_discussions_amd_intel_qualcomm/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'fFgRIBH6bKUwFFw1IRkgiKbGFEu23T6GbBMg_BYwbUc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?width=108&crop=smart&auto=webp&s=7acb49c5c2a497e9d99fd0b23ee54e28b91fe960', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?width=216&crop=smart&auto=webp&s=c81338c81159bc06036a77bc9f8526c894665e60', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?width=320&crop=smart&auto=webp&s=fea17d77dd1123264af527e5b4a3bf02087faf50', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?width=640&crop=smart&auto=webp&s=95d4101055339b4546306c5cf8a420b8c2b612d0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?width=960&crop=smart&auto=webp&s=1e4b4fd4306fa058a4d9d3c8402cf4a471d50545', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?width=1080&crop=smart&auto=webp&s=394666e9dedb9396bec65a781cb39a6a8edbd31e', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Z1Pq8kzFrOGW9oLn_TLgx6H-X6H--Ru-BYR08TXLMfw.jpg?auto=webp&s=7c7afd3fcb97321102f9065d79b27fdeda2a6ee0', 'width': 1200}, 'variants': {}}]} |
Huggingface AutoTrain params for long context models | 1 | In Huggingface AutoTrain ([https://huggingface.co/autotrain](https://huggingface.co/autotrain)) you can set some parameters for fine-tuning. My first question, however, would be about the data format. As far as I understand, the "generic LLM" and "SFT" options only use the *text* field in the input CSV file ([https://huggingface.co/docs/autotrain/llm_finetuning](https://huggingface.co/docs/autotrain/llm_finetuning)); they don't look at *target*. This would mean that both the prompt and the desired response should be in *text*. Could someone confirm or deny this?
Now for the actual parameters. Let's say I want to finetune a long context model. What parameters do I need to change from default values?
Below are the default settings. I changed *model_max_length* to 128k. Probably need to change *block_size* too? How do these work with each other?
I am also wondering about the learning rate and epochs. I have about 200 examples to finetune on. Any recommendations?
{
"block_size": 1024,
"model_max_length": 131072,
"padding": "right",
"use_flash_attention_2": false,
"disable_gradient_checkpointing": false,
"logging_steps": -1,
"evaluation_strategy": "epoch",
"save_total_limit": 1,
"save_strategy": "epoch",
"auto_find_batch_size": false,
"mixed_precision": "fp16",
"lr": 0.00003,
"epochs": 1,
"batch_size": 2,
"warmup_ratio": 0.1,
"gradient_accumulation": 1,
"optimizer": "adamw_torch",
"scheduler": "linear",
"weight_decay": 0,
"max_grad_norm": 1,
"seed": 42,
"apply_chat_template": false,
"quantization": "int4",
"target_modules": "",
"merge_adapter": false,
"peft": true,
"lora_r": 16,
"lora_alpha": 32,
"lora_dropout": 0.05
}
​ | 2024-01-10T20:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/193i308/huggingface_autotrain_params_for_long_context/ | Foxtr0t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193i308 | false | null | t3_193i308 | /r/LocalLLaMA/comments/193i308/huggingface_autotrain_params_for_long_context/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8fmoOM73_P4nASRHhvACYVZ1TP8Mftfqe1pW4h9xU7Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?width=108&crop=smart&auto=webp&s=41a33a7fec6d54b6e024b2a0cf19faa6d116dc53', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?width=216&crop=smart&auto=webp&s=27e1bc39c548be3ca2a243a6b3fbabb8e125bf00', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?width=320&crop=smart&auto=webp&s=943da69dde4659dbfac5c35c7c6bc3e48730db82', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?width=640&crop=smart&auto=webp&s=1d031e2865fa32f2ffc9c06ba9b7fc0ccfcb3ede', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?width=960&crop=smart&auto=webp&s=f9823642504872d1f31012fe226efaeaca5638b0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?width=1080&crop=smart&auto=webp&s=7bd17ba3a1187042dd37db03e359487e63dc19e2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ALGC6XWA8Kvu-HNmMhUkttfwUknHJRL7mfobrnubZT0.jpg?auto=webp&s=096f1f5e3b85e630d4fb89a1d10797a4a445bbe0', 'width': 1200}, 'variants': {}}]} |
QLoRa fine-tuning Mistral-7B-Instruct-v0.2 with 8GB RAM MacBook using MLX and MLXT | 22 | Hey all, inspired by previous posts on MLX fine-tuning [this](https://www.reddit.com/r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/), [this](https://www.reddit.com/r/LocalLLaMA/comments/18wabkc/lessons_learned_so_far_lora_fine_tuning_on/) and [this](https://www.reddit.com/r/LocalLLaMA/comments/191s7x3/a_simple_guide_to_local_llm_finetuning_on_a_mac/?share_id=hH4Vu8gxZgwYRvl_fIyOu&utm_content=1&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1), I tried fine-tuning myself on an M2 Pro MacBook; it was fun and easy, but I ended up spending lots of time preparing the training data set. So, in addition to a writeup of my experience, I decided to make a CLI tool, [mlxt](https://github.com/chenhunghan/mlx-training-rs/tree/main), a little helper for generating training datasets for MLX fine-tuning. The tool is written in (async) Rust, is light and fast, OSS, and can be installed via brew.
The following is an e2e tour of fine-tuning a [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model, including the usage of mlxt. Again, this is heavily based on [this](https://apeatling.com/articles/simple-guide-to-local-llm-fine-tuning-on-a-mac-with-mlx/?utm_source=pocket_reader), [this](https://www.reddit.com/r/LocalLLaMA/comments/191s7x3/a_simple_guide_to_local_llm_finetuning_on_a_mac/?share_id=hH4Vu8gxZgwYRvl_fIyOu&utm_content=1&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1), [this](https://www.reddit.com/r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/), [this](https://www.reddit.com/r/LocalLLaMA/comments/18wabkc/lessons_learned_so_far_lora_fine_tuning_on/) and the "official" [README](https://github.com/ml-explore/mlx-examples/tree/main/lora).
## Preparing
Install Homebrew; it's a package manager that helps us install all the other dependencies.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Setting up Python3 (if you haven't)
brew install python@3.11
Clone MLX and download the model for fine-tuning.
git clone https://github.com/ml-explore/mlx-examples.git
Download and convert [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
cd mlx-examples/llms/hf_llm
pip install -r requirements.txt # or pip3
python convert.py --hf-path mistralai/Mistral-7B-Instruct-v0.2 -q --mlx-path ./Mistral-7B-Instruct-v0.2-mlx-4bit
Note: we are adding `-q` for converting hg model into a 4-bit quantized MLX model, and save it to `./Mistral-7B-Instruct-v0.2-mlx-4bit`
It will take some time to download and convert.
The converted MLX model's config has some entries we don't need when fine-tuning. Edit `./Mistral-7B-Instruct-v0.2-mlx-4bit/config.json` and replace its contents with:
{
"dim": 4096,
"n_layers": 32,
"head_dim": 128,
"hidden_dim": 14336,
"n_heads": 32,
"n_kv_heads": 8,
"norm_eps": 1e-05,
"vocab_size": 32000,
"quantization": {
"group_size": 64,
"bits": 4
}
}
## Generating Training Dataset
Install `mlxt`, the helper CLI for generating training data.
brew install chenhunghan/homebrew-formulae/mlx-training-rs
Delete example data in `mlx-examples/lora/data`
Generate a training dataset on a topic you are interested in.
export OPENAI_API_KEY=[don't tell me your key]
mlxt --topic="[the topic you are interested, e.g. Large Language Model]" -n=100
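For reference, the MLX LoRA example expects `data/train.jsonl` and `data/valid.jsonl` with one JSON object per line carrying a `text` field, roughly like the lines below. This is an assumption about the current mlx-examples layout, so check the repo if it has changed:

{"text": "<s>[INST] What does LoRA stand for? [/INST] Low-Rank Adaptation.</s>"}
{"text": "<s>[INST] What is MLX? [/INST] MLX is Apple's array framework for machine learning on Apple silicon.</s>"}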
## Fine-tuning!
cd mlx-examples/lora
pip install -r requirements.txt # or pip3
python lora.py --train --model ../llms/hf_llm/Mistral-7B-Instruct-v0.2-mlx-4bit --data ./data --batch-size 1 --lora-layers 4
Install [asitop](https://github.com/tlkh/asitop) and watch your GPUs running at 100%!
After the fine-tuning is done, see [here](https://github.com/ml-explore/mlx-examples/tree/main/lora#generate) to chat with your fine-tuned model.
Link to the repo [mlxt](https://github.com/chenhunghan/mlx-training-rs) | 2024-01-10T20:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/193hp0s/qlora_finetuning_mistral7binstructv02_with_8gb/ | chenhunghan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193hp0s | false | null | t3_193hp0s | /r/LocalLLaMA/comments/193hp0s/qlora_finetuning_mistral7binstructv02_with_8gb/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'OxIEja9CdlLIWGYqo8KFmtPb6euM2VXpSPyRhRvFKz0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?width=108&crop=smart&auto=webp&s=5775b56abd2ca15fef8a5df3ef2ea9308eacc603', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?width=216&crop=smart&auto=webp&s=8dc4130962be21ae2e9883d632eb6e0df327bebe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?width=320&crop=smart&auto=webp&s=65710d330bcc31195435b23bdd60e4323fea4ff8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?width=640&crop=smart&auto=webp&s=c8917481997c9ce10df3db1ba78152e5d8371b62', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?width=960&crop=smart&auto=webp&s=39abac8a815c00cfb11603e157d64ea46043aa25', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?width=1080&crop=smart&auto=webp&s=4756ab8bdf132bdb12f12a874556801eea487d4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v1LpnxZdkHiPaeADz2fYMXoXP_Aj-AmUZr3ylVzKbn8.jpg?auto=webp&s=a49667ae44973f86fefff47400f18548710574ed', 'width': 1200}, 'variants': {}}]} |
"Alien Text Object" prompting, any potential for super-intelligence? Like Japanese, LLMs may be able to pick up hidden meaning based on contextual use of weird symbols, and may intuitively understand some "hyperdense alien languages" optimized for reasoning | 1 | 2024-01-10T19:29:29 | https://www.reddit.com/gallery/193grlq | ryunuck | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 193grlq | false | null | t3_193grlq | /r/LocalLLaMA/comments/193grlq/alien_text_object_prompting_any_potential_for/ | false | false | 1 | null | ||
How to download and use safetensors models? | 1 | [removed] | 2024-01-10T19:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/193ged7/how_to_download_and_use_safetensors_models/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193ged7 | false | null | t3_193ged7 | /r/LocalLLaMA/comments/193ged7/how_to_download_and_use_safetensors_models/ | false | false | self | 1 | null |
RAG Dataset - 3.7M LA Times headlines and links (1914-2024) | 38 | Hey r/LocalLLaMA, I wrote a webscraper to assemble this data and I figured I'd save y'all the trouble of doing it yourself. The dataset's columns are ['title', 'link', 'month', 'year']. All headlines were collected from their sitemap yesterday, so it should be quite up to date. I'm also working on getting a full dataset of article contents, so I can make text embeddings.
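If you want to pull it straight into Python, this should be all that's needed; a minimal sketch with the `datasets` library (the split name is assumed to be the default "train"):

from datasets import load_dataset

ds = load_dataset("Astris/LA-Times-Linked-Headlines", split="train")  # split name assumed
print(ds.column_names)  # ['title', 'link', 'month', 'year']
print(ds[0])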
[https://huggingface.co/datasets/Astris/LA-Times-Linked-Headlines](https://huggingface.co/datasets/Astris/LA-Times-Linked-Headlines) | 2024-01-10T18:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/193fus0/rag_dataset_37m_la_times_headlines_and_links/ | metaprotium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193fus0 | false | null | t3_193fus0 | /r/LocalLLaMA/comments/193fus0/rag_dataset_37m_la_times_headlines_and_links/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'NuAQ90kiUUX4kthd84B4t0ZEil1JR2PaZQ4MKiAf4Zw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?width=108&crop=smart&auto=webp&s=4248bb7e6011b998c3822d584130ad508ae5c38b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?width=216&crop=smart&auto=webp&s=d1799bea81b6379e1afd040658b651c3e6cd1166', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?width=320&crop=smart&auto=webp&s=d23930a5697a3ec4ebc6f68ae950f64f48f3e001', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?width=640&crop=smart&auto=webp&s=f79ce1db5e39dd00f59545cd79ac70d0bdbf8a05', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?width=960&crop=smart&auto=webp&s=1591c0ce77cae9bf5ad1c6ab17c14255d3cd8ca1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?width=1080&crop=smart&auto=webp&s=6546644472492a0d4abbfe82a7b94ac9876003b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/V6er3A0u_IkxkGNZIXQKZNswSexL5mPfdDL9qjUKSrI.jpg?auto=webp&s=623acc51cbf1b1ffddd6610826ba89f3a5b62921', 'width': 1200}, 'variants': {}}]} |
Python package for Ollama? | 1 | Is there a python package for ollama? I tried pip installing ollama, and it doesn't exist? I couldn't find anything on github, either. Section 4 on [this webpage](https://cheatsheet.md/llm-leaderboard/ollama) makes it sound like it can be done... Thank for any help! I have ollama running on my windows machine, and I'd like to start writing python scripts to it. | 2024-01-10T18:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/193fsh6/python_package_for_ollama/ | dottie-dog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193fsh6 | false | null | t3_193fsh6 | /r/LocalLLaMA/comments/193fsh6/python_package_for_ollama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qRb47zpOiYba0pKDqa0cposRogPk0DJmEboC9GecLNE', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?width=108&crop=smart&auto=webp&s=2a56f94d9cdbcbc6be1f06352311c36df2ed2dd2', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?width=216&crop=smart&auto=webp&s=d4e3826480aa63f7f6ce33f35bcba0764b476c91', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?width=320&crop=smart&auto=webp&s=24d9dc5627384b46d411ab1a468c471015958685', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?width=640&crop=smart&auto=webp&s=e941e06e308d08a1cd569d045c2efe97abc77255', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?width=960&crop=smart&auto=webp&s=2fb039fdba48c00c9525bd73e7bbe8c8faeed330', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?width=1080&crop=smart&auto=webp&s=a476a4a6103bc5ecec66471ab2f5c6354726fb9d', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/to0AW-8pVdMBD4x2Mt7o4TSD_Fspsab719o3dAsh764.jpg?auto=webp&s=45c6ac8364f8e38946cfaa7753f4f386344ff631', 'width': 1792}, 'variants': {}}]} |
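Ollama serves a local REST API on port 11434, so plain `requests` covers most scripting needs even without a dedicated package; a minimal non-streaming sketch (the model name is whatever you have already pulled):

```python
import requests

# Non-streaming generation against the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```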
When you work with us, you can rest easy knowing you’re getting the quality you deserve; we only employ the most skilled technicians in the industry, and we hold them to higher standards of precision. | 1 | 2024-01-10T18:46:05 | unitedaxleofnm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 193fooh | false | null | t3_193fooh | /r/LocalLLaMA/comments/193fooh/when_you_work_with_us_you_can_rest_easy_knowing/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qALC5EX8DuVP6TL1dubswwm8AYLKQlaNXPG_G2klUTg', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?width=108&crop=smart&auto=webp&s=51982163d6885ef8c9cf4f2f0edb22d152c61145', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?width=216&crop=smart&auto=webp&s=c49126e5bbf45b23f5087905e13b7b6f1f0a7d14', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?width=320&crop=smart&auto=webp&s=d9fa080640cf51b0548037c4620f790456376cc6', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?width=640&crop=smart&auto=webp&s=bd44695624b4338ab881763312e50cbb2d7675d3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?width=960&crop=smart&auto=webp&s=4dc8ae3e3980b814876ca32c3fb73e5016c50cb5', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?width=1080&crop=smart&auto=webp&s=cd4534b58fa30f32eff76db0562d69bc11057818', 'width': 1080}], 'source': {'height': 1921, 'url': 'https://preview.redd.it/invdt3dqrnbc1.jpeg?auto=webp&s=6652bf1475f6b67b4d30736e90e6b64c6837387e', 'width': 1921}, 'variants': {}}]} | |||
Mistral Medium 4th on the updated LMSYS Leaderboard | 235 | 2024-01-10T18:42:50 | atgctg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 193flvk | false | null | t3_193flvk | /r/LocalLLaMA/comments/193flvk/mistral_medium_4th_on_the_updated_lmsys/ | false | false | 235 | {'enabled': True, 'images': [{'id': 'ghjfBbVmWliZx5Lqu_Ep7JdShZYXvAYqo-n9SQ3e6WQ', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?width=108&crop=smart&auto=webp&s=71ab466fd30d6f266980955ca3fe615f03c9dbea', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?width=216&crop=smart&auto=webp&s=95de054da6bd0afdd71366ad4f169843144ee52a', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?width=320&crop=smart&auto=webp&s=3d4092240a765ad3e4eed2618f9a6b3a50d9756e', 'width': 320}, {'height': 422, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?width=640&crop=smart&auto=webp&s=82383a1df0b4a6e156f859f4b0ebbea098516c14', 'width': 640}, {'height': 633, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?width=960&crop=smart&auto=webp&s=f466bb30535bdde2b1f891a46d54f3574806a5e9', 'width': 960}, {'height': 712, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?width=1080&crop=smart&auto=webp&s=f367eeb56d93e7fb03e4e485b3083bcea7b9e95b', 'width': 1080}], 'source': {'height': 1075, 'url': 'https://preview.redd.it/xzn442rvqnbc1.png?auto=webp&s=14babbf8856474d67ee455e0b3eb3e05e3ac0759', 'width': 1629}, 'variants': {}}]} | |||
So far, mistral medium is the only model aside from GPT-4 that can correctly answer my riddle of the Sphinx trick question | 68 | 2024-01-10T18:36:29 | boring_pencil | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 193fgb6 | false | null | t3_193fgb6 | /r/LocalLLaMA/comments/193fgb6/so_far_mistral_medium_is_the_only_model_aside/ | false | false | 68 | {'enabled': True, 'images': [{'id': 'F48vd--hCcbXXlK00RlbFEpk-XaNY96DuOpPccMlNt8', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?width=108&crop=smart&auto=webp&s=0d2793f05ba73bdf9d7d03c70b5b2eb2a47aa611', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?width=216&crop=smart&auto=webp&s=5c8ce2b630775d192c7854b82a658abed11283af', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?width=320&crop=smart&auto=webp&s=75b2ac639a04d1ebd6b8bb42ad50d5ea110e5d1f', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?width=640&crop=smart&auto=webp&s=a8a4dbdcebb76c34699ff01d5405c24e5ca5e7c6', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?width=960&crop=smart&auto=webp&s=b476c917fd383a8e4f62637da61f3cd6ad496488', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?width=1080&crop=smart&auto=webp&s=ba789632fd2ad0dad21781ad2a828232b4047420', 'width': 1080}], 'source': {'height': 1009, 'url': 'https://preview.redd.it/mkdlm8mwpnbc1.png?auto=webp&s=4b0a4e1ec13f0b76282967178fe27fd6b604c879', 'width': 1809}, 'variants': {}}]} | |||
What are the advantages of formats other than GGUF? | 67 | I was using an RTX 2070 with 8GB VRAM and 16GB RAM for a few weeks; today I upgraded to an RTX 3090 with 24GB VRAM and 128GB RAM, so I can finally use Mixtral and the like.
I still use koboldcpp with GGUF. I put as many layers as possible in 24GB VRAM then I can put everything else in RAM.
What can I achieve with other formats? Are there reasons why they are faster or more efficient? | 2024-01-10T18:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/193fdfq/what_are_the_advantages_of_formats_other_than_gguf/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193fdfq | false | null | t3_193fdfq | /r/LocalLLaMA/comments/193fdfq/what_are_the_advantages_of_formats_other_than_gguf/ | false | false | self | 67 | null |
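For comparison, the same partial-offload workflow the post describes looks like this in llama-cpp-python (a minimal sketch; the model path and layer count are illustrative). GPU-only formats such as EXL2/GPTQ/AWQ skip the offload knob entirely — they are generally faster, but only when the whole model fits in VRAM:

```python
from llama_cpp import Llama

# Partial offload: put as many layers as fit in VRAM, keep the rest in system RAM.
llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # illustrative path
    n_gpu_layers=20,   # -1 would offload every layer; tune to your 24 GB card
    n_ctx=4096,
)
out = llm("Q: What does partial GPU offload buy me?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```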
How to merge safetensors models? | 1 | [removed] | 2024-01-10T18:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/193epuj/how_to_merge_safetensors_models/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193epuj | false | null | t3_193epuj | /r/LocalLLaMA/comments/193epuj/how_to_merge_safetensors_models/ | false | false | self | 1 | null |
Avoid additional heavy-duty truck axle repair services by covering routine axle maintenance steps. After carefully inspecting, repairing, and testing your truck axle, we provide you with a recommended maintenance schedule for essential tasks. | 1 | 2024-01-10T17:56:07 | unitedaxleofnm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 193efp9 | false | null | t3_193efp9 | /r/LocalLLaMA/comments/193efp9/avoid_additional_heavyduty_truck_axle_repair/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'GuX_6IXdwomHrbu6_RkHNZ81xxmhPVm-CCfuOkZIhFk', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?width=108&crop=smart&auto=webp&s=f1fce21651e915185e581a6351316bfd51cb2e23', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?width=216&crop=smart&auto=webp&s=ea5d5645c7cc7e224441387bf96f31dc31545ff4', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?width=320&crop=smart&auto=webp&s=912c435a97d76ea96e29c2ca8f562dd59e759600', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?width=640&crop=smart&auto=webp&s=ad01c7a7a95282b0d65753f9828484bdebcbc68a', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?width=960&crop=smart&auto=webp&s=9d04adfb493d28f5d953ab7f5202a75263c5232e', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?width=1080&crop=smart&auto=webp&s=4c696a925694d66592ba5d86a2d062e090191119', 'width': 1080}], 'source': {'height': 1921, 'url': 'https://preview.redd.it/oywlvzbsinbc1.jpeg?auto=webp&s=96d117f2c9a729ce2e1361be27d86bea9dc5bbd1', 'width': 1921}, 'variants': {}}]} | |||
How to chat with codebase | 8 | Hi, I'd like to somehow ask questions of my local, medium-sized Python codebase (1000s of files). I've got 128GB of RAM so I can have a fairly large context, but some kind of training isn't out of the question either. Any projects I should investigate? I'd preferably like to stick with llama.cpp and Mixtral if possible. | 2024-01-10T17:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/193e00p/how_to_chat_with_codebase/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193e00p | false | null | t3_193e00p | /r/LocalLLaMA/comments/193e00p/how_to_chat_with_codebase/ | false | false | self | 8 | null |
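A minimal retrieval sketch that stays llama.cpp-friendly: chunk the Python files, embed them, pull the top matches for a question, and send the stitched context to an already running llama.cpp `server` (default port 8080). The repo path, embedder, and chunk size are all assumptions:

```python
import pathlib
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Chunk every .py file into ~40-line pieces.
chunks, sources = [], []
for path in pathlib.Path("my_repo").rglob("*.py"):   # illustrative repo path
    lines = path.read_text(errors="ignore").splitlines()
    for i in range(0, len(lines), 40):
        chunks.append("\n".join(lines[i:i + 40]))
        sources.append(f"{path}:{i + 1}")

# 2. Embed once, then answer questions by cosine similarity.
doc_vecs = embedder.encode(chunks, normalize_embeddings=True)

def ask(question, k=5):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(doc_vecs @ q)[::-1][:k]
    context = "\n\n".join(f"# {sources[i]}\n{chunks[i]}" for i in top)
    prompt = f"Use the code below to answer.\n\n{context}\n\nQuestion: {question}\nAnswer:"
    r = requests.post("http://localhost:8080/completion",
                      json={"prompt": prompt, "n_predict": 512})
    return r.json()["content"]

print(ask("Where is the HTTP retry logic implemented?"))
```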
🔥 I USED ARTIFICIAL INTELLIGENCE TO CREATE A STREAMER (VTUBE) THAT INTERACTS WITH YOU 🔥 | 1 | [removed] | 2024-01-10T17:17:26 | https://www.reddit.com/r/LocalLLaMA/comments/193dh7b/i_used_artificial_intelligence_to_create_a/ | cybermazinh0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193dh7b | false | null | t3_193dh7b | /r/LocalLLaMA/comments/193dh7b/i_used_artificial_intelligence_to_create_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ewwn6lUdf7KXGGAnpwqWm_Dibpolb3ZeA86LAnwbtTA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/FlKzpxC47dyBv4buiPgG4lFlrO8n5JJ3jzVqIsKoboE.jpg?width=108&crop=smart&auto=webp&s=e0e59add3926d174d17364c03a0a16a4dc9e212f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/FlKzpxC47dyBv4buiPgG4lFlrO8n5JJ3jzVqIsKoboE.jpg?width=216&crop=smart&auto=webp&s=47d093b593a84358e9f3d3efbbec29caf31cfdec', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/FlKzpxC47dyBv4buiPgG4lFlrO8n5JJ3jzVqIsKoboE.jpg?width=320&crop=smart&auto=webp&s=49fcc3243ec572afbc9e9cc59c07d61ebb93310d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/FlKzpxC47dyBv4buiPgG4lFlrO8n5JJ3jzVqIsKoboE.jpg?auto=webp&s=cc96e1d36f4cb4fe64b63cdd3ca5c3857c26f023', 'width': 480}, 'variants': {}}]} |
How to reduce RAG chatbot response time? | 1 | Does anybody have any idea how to reduce the prompt evaluation time and the evaluation time in a RAG chatbot (LangChain)?
https://preview.redd.it/wyiayxfi9nbc1.png?width=1695&format=png&auto=webp&s=a2977cb09b41e1ac6d17183f7bb66d3d1340ce32 | 2024-01-10T17:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/193d6mq/how_to_reducing_rag_chatbot_response_time/ | HV4U2001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193d6mq | false | null | t3_193d6mq | /r/LocalLLaMA/comments/193d6mq/how_to_reducing_rag_chatbot_response_time/ | false | false | 1 | null | |
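Prompt evaluation time usually scales with how much retrieved text gets stuffed into the prompt, so the cheapest win is a hard token budget on the context (plus streaming so the user sees output immediately). A framework-agnostic sketch of the budgeting step — the numbers are arbitrary:

```python
def build_context(ranked_chunks, tokenizer, max_context_tokens=1024):
    """Keep only as many retrieved chunks as fit the token budget.

    ranked_chunks: best-first list of strings from the retriever.
    tokenizer: anything with an .encode(str) -> list[int] method.
    """
    kept, used = [], 0
    for chunk in ranked_chunks:
        n = len(tokenizer.encode(chunk))
        if used + n > max_context_tokens:
            break
        kept.append(chunk)
        used += n
    return "\n\n".join(kept)
```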
Looking for recommendations on a new machine | 2 | Hey all,
I'm planning on building a new machine I can run an LLM on. I have a pretty decent budget, looking to cap at around $5k to $6k.

For the best performance, would you recommend a server with multiple GPUs? A gaming system with a couple of 4090s? Or possibly an Apple platform, M2 or M3 when it releases?

Just looking for general guidance; I want to start off with a platform that I can scale a bit if needed.

Thanks in advance! | 2024-01-10T17:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/193d4mb/looking_for_recommendations_on_a_new_machine/ | Noack1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193d4mb | false | null | t3_193d4mb | /r/LocalLLaMA/comments/193d4mb/looking_for_recommendations_on_a_new_machine/ | false | false | self | 2 | null |
simplest way to integrate RAG into ooba for an assistant? | 2 | Hoping to build a persistent assistant, and I want to fuss with it as little as possible after it's set up. Would using an extension like this within ooba be the move to enable RAG? https://github.com/jason-brian-anderson/long_term_memory_with_qdrant
setting up docker seems like it would be a little annoying but fine | 2024-01-10T16:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/193ctfr/simplest_way_to_integrate_rag_into_ooba_for_an/ | Dankmemexplorer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193ctfr | false | null | t3_193ctfr | /r/LocalLLaMA/comments/193ctfr/simplest_way_to_integrate_rag_into_ooba_for_an/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-hLiGokqftfCFZgATMKR6egjzzJfZHTgZhU1NlFJKgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?width=108&crop=smart&auto=webp&s=efa305219da3899a32aa2fcbdd442a6ea3982340', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?width=216&crop=smart&auto=webp&s=d06da8a3bdc7b8a4eb1b020fcf8f1d7f72d5e960', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?width=320&crop=smart&auto=webp&s=8ae9bad6c23c5fd02ff149e96ab2b71010e5e00d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?width=640&crop=smart&auto=webp&s=8e3d89895cb35f19fa8fce8d187698e01c816182', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?width=960&crop=smart&auto=webp&s=5dbf0962b562178da6e24e2f8689b33aae3023af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?width=1080&crop=smart&auto=webp&s=3b10e81a8869fe292a04fb78942f841b474b2329', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1vMj7kZjEzPBbfake4enQ0rtNRAFXhomrFi4bEJQ2aM.jpg?auto=webp&s=f087e6246c853030df0dfbe63521d45e9fbc27cb', 'width': 1200}, 'variants': {}}]} |
single vs dual 3090 | 1 | Hi All,
I am considering upgrading my PC, which currently has an RTX 2070. I am torn between getting a single or a dual 3090 setup. My use cases are not properly defined yet in terms of model size etc. The biggest model I have trained was about 120M params, but I'm not sure if I will be doing that again. My aim is mostly to learn NLP and graph neural nets, play around with some new models, perhaps train some for pedagogical reasons, and do some side projects.
Pros of single 3090:
1. can run off my current setup along with rtx 2070, which allows me to keep using 2070
2. would not need to spend money on upgrading the PSU (which would also be a lot of work, since all the power cables would need to be changed as well).
3. Don't need to worry much about overheating etc., since my Meshify S2 case should be big enough.
4. leaves room to add a 40-series card later, which supports FP8.
5. Anything else?
Pros of dual 3090:
1. Bigger memspace, so enables inferencing/training bigger models
2. I can power-limit both 3090s so heat is less of an issue, and using data parallelism I can still get fast results for models requiring <24GB.
3. Anything else?
Any advice on which route to go, and am I missing something? I figured that if I get only one 3090, I can use the money saved on a second 3090 and PSU for cloud compute in case I need to train bigger models.
additional info:
1. The motherboard supports PCIe x16 and PCIe x4, so one card will run at x4. Also, it doesn't support SLI, so I can't use NVLink.
2. PSU is 750W and CPU is a Ryzen 3600X
3. RAM is 48GB | 2024-01-10T16:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/193bzmh/single_vs_dual_3090/ | not_a_user42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193bzmh | false | null | t3_193bzmh | /r/LocalLLaMA/comments/193bzmh/single_vs_dual_3090/ | false | false | self | 1 | null |
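On the dual-3090 "bigger memspace" point, a minimal Transformers/Accelerate sketch of one model sharded across both cards — the model choice and 4-bit setting are illustrative, picked so the weights (roughly 26 GB) spill past a single 24 GB card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # illustrative: ~26 GB of weights at 4-bit
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # Accelerate spreads layers across every visible GPU
    load_in_4bit=True,   # bitsandbytes quantization so the weights fit in 2x24 GB
)
inputs = tok("The main advantage of two GPUs is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```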
LongLM is now open source! This is an implementation of the paper: "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" | 130 | 2024-01-10T16:15:37 | https://github.com/datamllab/LongLM | CosmosisQ | github.com | 1970-01-01T00:00:00 | 0 | {} | 193bz3g | false | null | t3_193bz3g | /r/LocalLLaMA/comments/193bz3g/longlm_is_now_open_source_this_is_an/ | false | false | 130 | {'enabled': False, 'images': [{'id': 'Mb0bbqxunuDW3MgoEYAZ201azLFkM2JWOCOd4XbjSnA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?width=108&crop=smart&auto=webp&s=d1f2fa51bc00708530f4f60748db857de54421d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?width=216&crop=smart&auto=webp&s=f337def4c221d762fe23816ce9e562323c767167', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?width=320&crop=smart&auto=webp&s=451a162cbad61e2d020b869cbdd1fbc97a328a94', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?width=640&crop=smart&auto=webp&s=d56874eb0f24bf3cd1285d0ecb5d402afabdde32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?width=960&crop=smart&auto=webp&s=f2085f321503b916e6f3599897a1d66c0ccc5a87', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?width=1080&crop=smart&auto=webp&s=aedabc2c99905a6050240bf86ced090c811dedcd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tKVkOdMb4LKX0L3CjuVbVak3SOgbQQmko1eIiOskS4s.jpg?auto=webp&s=395aeb6417c873fb8b853f12736bd05de9457707', 'width': 1200}, 'variants': {}}]} | ||
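A toy sketch of the Self-Extend positional trick as I read the paper (not the authors' code): nearby tokens keep exact relative positions, while distant ones are compressed by floor division so the pretrained window is never exceeded. Window and group sizes below are illustrative defaults:

```python
def self_extend_rel_pos(distance, neighbor_window=512, group_size=4):
    """Map a raw relative distance to the compressed one used for attention.

    Tokens within neighbor_window keep exact positions; beyond that,
    positions are grouped (floor-divided) so they stay inside the
    pretrained context range. Toy illustration of the paper's idea.
    """
    if distance < neighbor_window:
        return distance
    return neighbor_window + (distance - neighbor_window) // group_size

# An 8k-token lookback collapses to well under a 4k pretrained window:
print(self_extend_rel_pos(8000))   # 512 + 7488 // 4 = 2384
```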
Experimenting with new sampling in MLX | 19 | Hi folks. MLX is absolutely cool, as it lets you quickly hack on things. I'm playing with a sampling algorithm that is specifically designed for coherence and has simple-to-tune parameters:
[https://x.com/antirez/status/1745051794743472502?s=20](https://x.com/antirez/status/1745051794743472502?s=20)
At the same time, I hope that soon it will be possible to load GGUF models in MLX, since a contributor took my own gguflib library and hacked it into MLX itself, and there is a pending effort to make it work (and I can't wait): [https://github.com/ml-explore/mlx/pull/350](https://github.com/ml-explore/mlx/pull/350)
MLX hackability + GGUF support will make it an ideal candidate to try new ideas like new sampling strategies. Unfortunately, I have yet to implement binary sampling in llama.cpp in order to make it simpler to test it in the wild, but I would love to know what do you think about approaches like the above for more conservative sampling. | 2024-01-10T16:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/193bw95/experimenting_with_new_sampling_in_mlx/ | antirez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193bw95 | false | null | t3_193bw95 | /r/LocalLLaMA/comments/193bw95/experimenting_with_new_sampling_in_mlx/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'XOf47-Yd0aVe0cqjB0ptS1-eXnZ3pv4SQRX5PP4VO-I', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Fjpi7IcgdFRJRuGB8FUmDYuiRO2Ae957jovHQrDYmlA.jpg?width=108&crop=smart&auto=webp&s=ba46993a1d215f8522d91609135cab453821f051', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Fjpi7IcgdFRJRuGB8FUmDYuiRO2Ae957jovHQrDYmlA.jpg?auto=webp&s=7a180ff9c75d8b909948ad248e8c987d8db3c023', 'width': 200}, 'variants': {}}]} |
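Not the algorithm from the linked post (which I haven't implemented) — just a generic, framework-agnostic illustration of "conservative sampling with two intuitive knobs": greedy when the model is already confident, top-k sampling otherwise, in numpy so it ports to MLX easily:

```python
import numpy as np

def conservative_sample(logits, confidence=0.8, top_k=5, temperature=0.7, rng=np.random):
    """Greedy when the top token is already dominant, otherwise top-k sampling.

    Not the algorithm from the linked post -- a generic example of a sampler
    with two intuitive knobs (confidence threshold and top-k width).
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if probs.max() >= confidence:
        return int(probs.argmax())
    top = np.argsort(probs)[::-1][:top_k]
    scaled = probs[top] ** (1.0 / temperature)   # equivalent to softmax(logits / T) on the top-k
    scaled /= scaled.sum()
    return int(rng.choice(top, p=scaled))
```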
Mistral 7B instruct v0.2 is the probably the best instruct model I have ever tested till date! Not only it follows the instruction very well, its context based answering, logic and reasoning, it doesn't feel like I am running a 7B model!!! | 2 | 2024-01-10T16:01:16 | akbbiswas | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 193bmw7 | false | null | t3_193bmw7 | /r/LocalLLaMA/comments/193bmw7/mistral_7b_instruct_v02_is_the_probably_the_best/ | false | false | 2 | {'enabled': True, 'images': [{'id': '5pSraIlU29O7y6NRA5eNT8nyLTOTd0xGbcQ37ennQz8', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/efqu3x2oxmbc1.png?width=108&crop=smart&auto=webp&s=728c313c86509cd150d501d532e1e35398ebaa87', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/efqu3x2oxmbc1.png?width=216&crop=smart&auto=webp&s=ce09f2218bae6b7dc51bc3e34598030b6f40360f', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/efqu3x2oxmbc1.png?width=320&crop=smart&auto=webp&s=d8693705f3d8cf9ddb3e949393945f9ef2f23ca2', 'width': 320}, {'height': 236, 'url': 'https://preview.redd.it/efqu3x2oxmbc1.png?width=640&crop=smart&auto=webp&s=017a24ea27bd2b394fa46ea120c6ea69037797e8', 'width': 640}, {'height': 355, 'url': 'https://preview.redd.it/efqu3x2oxmbc1.png?width=960&crop=smart&auto=webp&s=22b1184d01124a103e0ef2cec832365b961cd18a', 'width': 960}], 'source': {'height': 360, 'url': 'https://preview.redd.it/efqu3x2oxmbc1.png?auto=webp&s=ac58090b3d0e69907c246f85be9ca05f0c678ba3', 'width': 973}, 'variants': {}}]} | |||
Newbie question: Model has expected data to answer my query, but doesn't answer. How to get it fixed? | 1 | I am a total newbie to the Llama world (hardly a few days in). I tried the following in llama.cpp.
$ ./main -m ../llama-models/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q4_K_M.gguf -n -1 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
Question 1:
User:What are the classic languages still widely spoken
Bob: In terms of classical languages that are still widely spoken today, Latin and Greek are the two most commonly studied. However, they are not typically used in everyday conversation.
Question 2:
User:Any idea about Tamil?
Bob: Yes, Tamil is a classical language that is still widely spoken in some regions of India and Sri Lanka. It has a long literary tradition and is one of the oldest classical languages in the world.
I expected "Tamil" to be part of the answer to my first query about widely spoken classical languages, but it was not mentioned. In the follow-up query, when I asked about Tamil directly, it said it is a widely spoken classical language.
My understanding is that the model has the data but is unable to connect it with my query. Should my query be refined, or does the model need more training? | 2024-01-10T15:53:53 | https://www.reddit.com/r/LocalLLaMA/comments/193bgm7/newbie_question_model_has_expected_data_to_answer/ | user0user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193bgm7 | false | null | t3_193bgm7 | /r/LocalLLaMA/comments/193bgm7/newbie_question_model_has_expected_data_to_answer/ | false | false | self | 1 | null |
Getting started on M2 Mac 32GB | 1 | [removed] | 2024-01-10T15:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/193bg11/getting_started_on_m2_mac_32gb/ | SnooBeans7516 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193bg11 | false | null | t3_193bg11 | /r/LocalLLaMA/comments/193bg11/getting_started_on_m2_mac_32gb/ | false | false | self | 1 | null |
How much do you think a 32GB 5090 will cost at launch? | 14 | How much will you be willing to pay for it? I have a feeling that there will be a big price bump given how much 4090 is still selling for. | 2024-01-10T15:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/193avwh/how_much_do_you_think_a_32gb_5090_will_cost_at/ | EasternBeyond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193avwh | false | null | t3_193avwh | /r/LocalLLaMA/comments/193avwh/how_much_do_you_think_a_32gb_5090_will_cost_at/ | false | false | self | 14 | null |
For those serving OpenSource models + apps? How do you distribute them? | 2 | I am working on a tool that makes it easier to put together distributed systems by giving you a single entry point to the system. Think of it as an "exe" for a distributed system, and I am wondering whether folks here feel any pain distributing their OSS apps.
We have had some success with people using our tooling for AutoGPT / AgentGPT. How do you generally get people to run your app if it involves a language model, some scripts, a front end, a rag perhaps? | 2024-01-10T15:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/193at5x/for_those_serving_opensource_models_apps_how_do/ | hopeirememberthisid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193at5x | false | null | t3_193at5x | /r/LocalLLaMA/comments/193at5x/for_those_serving_opensource_models_apps_how_do/ | false | false | self | 2 | null |
Any (legal) models trained on illegal images (example: child porn) | 1 | (Apologies if this is the wrong subreddit. If there is somewhere else that would make more sense for this question please let me know!)
Part of my job is parsing through user generated content and removing anything malicious. This includes viruses, spam, scams, phishing attempts, and of course child pornography.
For obvious reasons no one on my team wants to have to see that, so I am doing some research to see if there is any AI models that can help us with this.
I am trying to find a model that has been trained to detect images that contain sexual content. Preferably, this system would return whether or not it thinks these sexual images contain a child, but even a general purpose "yes, this is porn" would be pretty useful.
Are there any models out there that can do this? If so, what kind of hardware would be required to run it?
Thank you for your response | 2024-01-10T15:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/193ab0m/any_legal_models_trained_on_illegal_images/ | yaazz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 193ab0m | true | null | t3_193ab0m | /r/LocalLLaMA/comments/193ab0m/any_legal_models_trained_on_illegal_images/ | false | false | self | 1 | null |
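For the generic "is this sexual content at all" half, image classifiers from the Hugging Face Hub run locally in a few lines; a sketch with the `transformers` pipeline — the checkpoint name is a placeholder for whichever NSFW-classification model you vet, and small ViT-sized classifiers run fine on CPU. None of the open models claim to distinguish minors, so treat this strictly as a first-pass filter:

```python
from transformers import pipeline

# Placeholder checkpoint: substitute an NSFW image-classification model from the Hub.
classifier = pipeline("image-classification", model="org/nsfw-image-classifier")

result = classifier("incoming_upload.jpg")
# Typical output shape: [{'label': 'nsfw', 'score': 0.97}, {'label': 'normal', 'score': 0.03}]
flagged = any(r["label"].lower() == "nsfw" and r["score"] > 0.8 for r in result)
```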
Prompting Strategies I can use with Medical Data? | 1 | I am working on a project where I input case reports into an LLM (Mistral, GPT 4, etc.) I need some prompting strategies to give me better answers. Eventually, after the description, I ask what the diagnosis is and compare it to what is in the paper. I have an AI engineer working with me that will be able to implement specifics. My question is this, are there any prompting strategies or how should I ask GPT to look for diagnosis or differential diagnosis with my case descriptions? I know MedPrompt was recently released: [https://github.com/microsoft/promptbase](https://github.com/microsoft/promptbase). Can I use these at all, I don't understand how to use them. | 2024-01-10T14:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1939aud/prompting_strategies_i_can_use_with_medical_data/ | InDebt2Medicine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1939aud | false | null | t3_1939aud | /r/LocalLLaMA/comments/1939aud/prompting_strategies_i_can_use_with_medical_data/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-UIKbvJLmDI5ngIkrN_zDqqQgf3a5RDS3IEOiocJWmg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=108&crop=smart&auto=webp&s=ce510ffff10f4a83fc33b66040a2d47a364b91a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=216&crop=smart&auto=webp&s=19ccd880568adddb5b711a5a79c3ddfeec3d8979', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=320&crop=smart&auto=webp&s=1dad6ff910357f5c4ac6aaa9f69fcdf87ca18a88', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=640&crop=smart&auto=webp&s=3292b492a9c75e53acef14943fb3440d6663e309', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=960&crop=smart&auto=webp&s=f6dd3cefba7a3c8edd9bee1b04544f673199c685', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?width=1080&crop=smart&auto=webp&s=f1cc1ad311789a6e6f54c1fef9dd243b8e63d4be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yFZRdbH0P041Nemxi4wCb983K5p3HSYJadSQ138g8_U.jpg?auto=webp&s=78aaff80d1a8d22805354f48545f355373bad32f', 'width': 1200}, 'variants': {}}]} |
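MedPrompt is more a recipe than a library: dynamically retrieved few-shot exemplars, self-generated chain-of-thought, and ensembling over shuffled answer choices. A rough sketch of how the prompt-assembly half could look for case reports — everything about the data shape here is an assumption, not part of MedPrompt itself:

```python
def build_diagnosis_prompt(case_report, exemplars):
    """Assemble a MedPrompt-style prompt for one case report.

    exemplars: (report, reasoning, diagnosis) tuples retrieved by embedding
    similarity to the current case (the "dynamic few-shot" step).
    """
    shots = "\n\n".join(
        f"Case: {r}\nReasoning: {why}\nDiagnosis: {dx}"
        for r, why, dx in exemplars
    )
    return (
        "You are assisting with differential diagnosis.\n\n"
        f"{shots}\n\n"
        f"Case: {case_report}\n"
        "Reasoning: think step by step, list a differential, "
        "then give the single most likely diagnosis.\nDiagnosis:"
    )

# Ensembling step: query the model several times and keep the most frequent
# diagnosis. (MedPrompt shuffles multiple-choice options; for open-ended
# diagnosis, self-consistency voting is the closest analogue.)
```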