| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
When to use open source model instead of ChatGPT? | 1 | I'm trying to create an LLM project where each request would have around a 20k-token input and a 5k-token output. The service is expected to handle around 20k requests a day.
How do you decide whether to use an open-source model or ChatGPT for this project? Is there a rule of thumb for this? | 2023-11-08T17:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/17qretz/when_to_use_open_source_model_instead_of_chatgpt/ | Classic-Lawfulness12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qretz | false | null | t3_17qretz | /r/LocalLLaMA/comments/17qretz/when_to_use_open_source_model_instead_of_chatgpt/ | false | false | self | 1 | null |
How is the processor core usage managed, and can we tweak it? | 1 | I have a Ryzen 5 with 12 cores. When I monitor the CPU during inference, many of them spike to 100%, but many do not. It looks like there is a lot more juice here than what is being squeezed out of the processor. Any gurus have any insight into how the models or underlying libraries decide how to allocate the CPU resources? I'd like all 10 cores to be at 100% the whole time with 2 to handle the minimal system requirements. You know? | 2023-11-08T17:42:57 | https://www.reddit.com/r/LocalLLaMA/comments/17qr2rb/how_is_the_processor_core_usage_managed_and_can/ | Actual-Bad5029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qr2rb | false | null | t3_17qr2rb | /r/LocalLLaMA/comments/17qr2rb/how_is_the_processor_core_usage_managed_and_can/ | false | false | self | 1 | null |
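With llama.cpp-family backends the thread count is a knob you set yourself (e.g. llama.cpp's `-t`/`--threads` flag, or `n_threads` in llama-cpp-python); the runtime won't saturate every core on its own. A hedged stdlib sketch of picking a count that leaves a couple of cores for the OS (the reserve size is my own assumption):

```python
import os

def pick_thread_count(reserve: int = 2) -> int:
    """Use all logical cores except `reserve`, but never fewer than 1."""
    total = os.cpu_count() or 1
    return max(1, total - reserve)

n_threads = pick_thread_count()
# Many BLAS/OpenMP backends also read these env vars at startup:
os.environ["OMP_NUM_THREADS"] = str(n_threads)
os.environ["OPENBLAS_NUM_THREADS"] = str(n_threads)
print(f"pass n_threads={n_threads} (e.g. llama.cpp: -t {n_threads})")
```

Worth noting: token generation is usually memory-bandwidth-bound, so pinning every core at 100% often doesn't make it faster; llama.cpp's own guidance is to match the number of physical cores rather than logical ones.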
API to a service that offers access to open source LLMs? | 17 | Does anyone have any recommendations on an api service that provides access to open source LLMs? | 2023-11-08T17:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/17qqvhs/api_to_a_service_that_offers_access_to_open/ | SatoshiReport | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qqvhs | false | null | t3_17qqvhs | /r/LocalLLaMA/comments/17qqvhs/api_to_a_service_that_offers_access_to_open/ | false | false | self | 17 | null |
Chunking and storing structured data and vectors for RAG | 9 | TL;DR: is there an example someone can point me to for RAG with highly structured documents, where the agent returns conversation along with cross-references to document paragraphs or sections? Input = a long text document (~500-1000 pages); output is Q/A with references to document paragraph, page, or another simple cross-reference.
I've been looking into RAG in my (extremely limited) spare time for a few months now but I'm getting hung up on vector databases. It may be due to the fact that my use case revolves around highly structured specification documents where I desire to be able to recover section and paragraph references in a QA session with a rag assistant.
Most off-the-shelf solutions seem to not care what your data looks like and just provide a black-box solution for data chunking and vectorizing, like handing it a single HTML link for a website as the source information and magically it works. This confuses me because LangChain has a great learning path that includes quite a bit of focus on proper data chunking and vector database structuring, yet literally every example treats the chunking and vector store step as an afterthought. I don't like to do something I don't understand, so I've been focused more on creating a database for my data that makes sense in my brain.
I have successfully created a local vector database (sqlite) with SBERT that returns paragraph numbers with a similarity search but I haven't bridged that to feeding those results into an LLM.
Am I thinking too hard about this? Are the off-the-shelf RAG solutions able to handle the paragraph numbers without me explicitly trying to cram them into a database structure? Or am I on the right path, and should I continue with the database that makes sense to me and keep figuring out how to implement the LLM step after the vector search?
I started looking at llamaindex, then Langchain, now autogen. But my spare time is limited enough that I haven't implemented anything with any of these, only a (successful) sbert similarity search which didn't use any of these. If someone has an example for structured documents where the q/a provides cross-references, I'd really appreciate it. | 2023-11-08T17:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/17qqokv/chunking_and_storing_structured_data_and_vectors/ | Smerfj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qqokv | false | null | t3_17qqokv | /r/LocalLLaMA/comments/17qqokv/chunking_and_storing_structured_data_and_vectors/ | false | false | self | 9 | null |
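For the "bridge to the LLM" step: one low-tech approach that preserves cross-references is to inline the retrieved paragraphs with their IDs into the prompt and instruct the model to cite them. A stdlib sketch, assuming your SBERT + sqlite search already returns chunks with refs (the field names here are made up for illustration):

```python
def build_rag_prompt(question, chunks):
    """chunks: list of dicts like {"ref": "3.2.1", "page": 41, "text": "..."},
    i.e. whatever your own similarity search returns."""
    context = "\n\n".join(
        f"[{c['ref']} (p. {c['page']})]\n{c['text']}" for c in chunks
    )
    return (
        "Answer the question using ONLY the excerpts below. "
        "Cite the bracketed section reference for every claim.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

chunks = [
    {"ref": "3.2.1", "page": 41, "text": "Bolts shall be torqued to 25 Nm."},
    {"ref": "3.2.2", "page": 41, "text": "Torque wrenches must be calibrated annually."},
]
prompt = build_rag_prompt("What torque is required for bolts?", chunks)
print(prompt)
```

The resulting string goes to any chat/completion endpoint as-is, so your custom database keeps working; the frameworks mostly just automate this assembly step.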
What models can I comfortably run with an M3 Pro MacBook Pro 16 gb ram? | 1 | I get that it's probably hard to run very large ones, but I just want to get an idea.
I found another thread asking the same question, but about a MacBook Pro M3 Max with 128 GB RAM, or something like that. I can't remember for sure. | 2023-11-08T16:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/17qq0br/what_models_can_i_comfortably_run_with_an_m3_pro/ | lostLight21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qq0br | false | null | t3_17qq0br | /r/LocalLLaMA/comments/17qq0br/what_models_can_i_comfortably_run_with_an_m3_pro/ | false | false | self | 1 | null |
DreamGen Opus — Uncensored model for story telling and chat / RP | 117 | **TL;DR:**
* **Uncensored,** Mistral 7B based model that lets you **write stories** in collaborative fashion, but also works nicely for **chat / (E)RP**
* **Hugging Face link:** [https://huggingface.co/dreamgen/opus-v0-7b](https://huggingface.co/dreamgen/opus-v0-7b)
Hey everyone, I am excited to share with you the first release of “DreamGen Opus”, an **uncensored** model that lets you write stories in collaborative fashion, but also works nicely for **chat / (E)RP.**
Specifically, it understands the following prompt syntax (yes, another one — please don’t hate :D):
```
<setting>
(Description of the story, can also optionally include information about characters)
</setting>

...

<instruction>
(Instructions as you write the story, to guide the next few sentences / paragraphs)
</instruction>
```
You can find more details about prompting the model in the [**official prompting guide**](https://dreamgen.com/docs/stories), including a few examples (like for chat / ERP).
The initial model is based on **Mistral 7B**, but **Llama 2 70B** version is in the works and if things go well, should be out within 2 weeks (training is quite slow :)).
The model is based on a custom dataset that has >1M tokens of instructed examples like the above, and an order of magnitude more examples that are a bit less instructed.
## How to try it out
The model should work great with **any tool that supports the Mistral 7B** base model. It will work well with oobabooga/text-generation-webui and many other tools. I like vLLM.
### Using vLLM
* Install [vLLM](https://github.com/vllm-project/vllm) following the instructions in the repo
* Run `python -u -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --model dreamgen/opus-v0-7b`
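Once the server is up it exposes OpenAI-compatible endpoints; a hedged stdlib sketch of posting an Opus-style prompt to the completions endpoint (the sampling values and example text are my own assumptions; see the official prompting guide for the real details):

```python
import json
import urllib.request

def opus_prompt(setting: str, instruction: str) -> str:
    # Mirrors the <setting>/<instruction> syntax from the prompting guide.
    return (
        f"<setting>\n{setting}\n</setting>\n\n"
        f"<instruction>\n{instruction}\n</instruction>"
    )

payload = {
    "model": "dreamgen/opus-v0-7b",
    "prompt": opus_prompt(
        "A lighthouse keeper discovers the light attracts more than ships.",
        "Open the story at dusk, first person.",
    ),
    "max_tokens": 256,
    "temperature": 0.8,
}

def complete(url="http://127.0.0.1:8000/v1/completions"):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

if __name__ == "__main__":
    try:
        print(complete())
    except OSError:  # server not running
        print("server unreachable; prompt was:\n", payload["prompt"])
```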
### Using [DreamGen.com](http://dreamgen.com/) website (free)
You can also try the model on [dreamgen.com](http://dreamgen.com/) for free (but it requires a registration with email).
## What’s next
I believe that for storytelling & character creation it's especially important to have access to the model weights; otherwise you run the risk of losing your plot or virtual companion (as has already happened a few times before on various closed platforms that suddenly changed their rules or got shut down by their API provider). Hence DreamGen.
Here’s a high level overview of what I would like to do next under the DreamGen umbrella:
On the model side:
* (Soon) Larger story models
* Fine tune the model for even better character chat & roleplay
* Longer context windows, at least for smaller models (8-16K depending on how experiments go)
On the application side, I am thinking about these features:
* Character editor, chat & roleplay
* Ability to share your stories privately & publicly (not sure about this one, to be honest :))
* Image generation to go alongside with story generation & chat
* API so that you can use the model more easily if you don’t have a GPU
For all of these, **I would love your input! You can vote on the roadmap** [**here**](https://dreamgen.com/roadmap/poll)**.**
For more updates, [**join the community server**](https://dreamgen.com/community) or follow updates on [**Twitter**](https://dreamgen.com/twitter). | 2023-11-08T16:50:35 | https://www.reddit.com/r/LocalLLaMA/comments/17qpwdz/dreamgen_opus_uncensored_model_for_story_telling/ | DreamGenX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qpwdz | false | null | t3_17qpwdz | /r/LocalLLaMA/comments/17qpwdz/dreamgen_opus_uncensored_model_for_story_telling/ | false | false | self | 117 | {'enabled': False, 'images': [{'id': '98mqfuLQqhOo6wYj_0R8RKbblLEpRDNNPr8PL5b-mCw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=108&crop=smart&auto=webp&s=f9601e62b4ac6b74a74657273ef6858d59a1b5e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=216&crop=smart&auto=webp&s=0bffb4ca8e66b542900f36dae9c8df6131974d3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=320&crop=smart&auto=webp&s=d6324dc0638602b47e4b6248470691edd7a0eb30', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=640&crop=smart&auto=webp&s=b8bf3f6769eb79bdac7b4a00e42b143430677256', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=960&crop=smart&auto=webp&s=e21aa79e6f1fb541f6e53db83a738d53a0264bd2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=1080&crop=smart&auto=webp&s=c9c250704865c41082e619e65411ab785fac6a84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?auto=webp&s=ec0e2ac4e1eed21bd4d5f89cc581b5d152d3d7d8', 'width': 1200}, 'variants': {}}]} |
Blog post on understanding LLM parameters: Temperature, Top_p, Top_k and logit_bias. | 1 | 2023-11-08T16:33:58 | https://aviralrma.medium.com/understanding-llm-parameters-c2db4b07f0ee | ramFixer420 | aviralrma.medium.com | 1970-01-01T00:00:00 | 0 | {} | 17qpj8x | false | null | t3_17qpj8x | /r/LocalLLaMA/comments/17qpj8x/blog_post_on_understanding_llm_parameters/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'UVn4Vz0QVnrNMLyjYHzOZnIGdZiF6k-NdfV8_VZ-ta4', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/_Ry6nFNdVCTtcZfQOgnfc4u7wU4nUhopuNPFuGfRXK4.jpg?width=108&crop=smart&auto=webp&s=c3868ce3ed1a7b16922f3e1c3c5f977b411411de', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/_Ry6nFNdVCTtcZfQOgnfc4u7wU4nUhopuNPFuGfRXK4.jpg?width=216&crop=smart&auto=webp&s=946ddd1b0ac97a44003c881811cee045751918c3', 'width': 216}, {'height': 149, 'url': 'https://external-preview.redd.it/_Ry6nFNdVCTtcZfQOgnfc4u7wU4nUhopuNPFuGfRXK4.jpg?width=320&crop=smart&auto=webp&s=d8c1f33119bcae050fe37d468e0a87eb1557cf90', 'width': 320}, {'height': 299, 'url': 'https://external-preview.redd.it/_Ry6nFNdVCTtcZfQOgnfc4u7wU4nUhopuNPFuGfRXK4.jpg?width=640&crop=smart&auto=webp&s=75cc7d0a996b8b7dd04fc5e5ba27ed8de7e9bf2b', 'width': 640}, {'height': 449, 'url': 'https://external-preview.redd.it/_Ry6nFNdVCTtcZfQOgnfc4u7wU4nUhopuNPFuGfRXK4.jpg?width=960&crop=smart&auto=webp&s=90432185ced6f817040537d2dd0428503064bc16', 'width': 960}], 'source': {'height': 454, 'url': 'https://external-preview.redd.it/_Ry6nFNdVCTtcZfQOgnfc4u7wU4nUhopuNPFuGfRXK4.jpg?auto=webp&s=9d2d467a720a1c7617709da411495a894dbb5715', 'width': 970}, 'variants': {}}]} | |
Comparing 4060 Ti 16GB + DDR5 6000 vs 3090 24GB: looking for 34B model benchmarks | 4 | I am going to build a LLM server very soon, targeting 34B models (specifically *phind-codellama-34b-v2.Q4* [GGUF](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF) [GPTQ](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ) [AWQ](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-AWQ)).
I am stuck between these two setups:
1. 12400 + DDR5 6000MHz 30CL + 4060 Ti 16GB (GGUF; Split the workload between CPU and GPU)
2. 3090 (GPTQ/AWQ model fully loaded in GPU)
Not sure if the speed bump of 3090 is worth the hefty price increase. Does anyone have benchmarks/data comparing these two setups?
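In lieu of benchmarks, a back-of-envelope check: token generation is mostly memory-bandwidth-bound, so tokens/s is roughly bandwidth divided by the bytes read per token (about the quantized model size). A sketch using published bandwidth specs (3090 ≈ 936 GB/s, 4060 Ti 16GB ≈ 288 GB/s, dual-channel DDR5-6000 ≈ 96 GB/s) and a ~20 GB Q4 34B model; the split-workload model here is a crude assumption:

```python
MODEL_GB = 20.0  # ~phind-codellama-34b-v2 Q4_K_M on disk

def tok_per_s(bandwidth_gbs, model_gb=MODEL_GB):
    # Ceiling: each generated token reads all weights once.
    return bandwidth_gbs / model_gb

def split_tok_per_s(gpu_bw, cpu_bw, gpu_frac, model_gb=MODEL_GB):
    # Time per token = GPU-resident layers + CPU-resident layers, in series.
    t = (gpu_frac * model_gb) / gpu_bw + ((1 - gpu_frac) * model_gb) / cpu_bw
    return 1 / t

print(f"3090 (936 GB/s), fully loaded:    {tok_per_s(936):.1f} tok/s ceiling")
# 4060 Ti 16GB holds ~14 GB of the 20 GB model -> ~70% offloaded
print(f"4060 Ti (288) + DDR5-6000 (~96):  {split_tok_per_s(288, 96, 0.7):.1f} tok/s ceiling")
```

These are ceilings and real numbers land lower, but the roughly 5x ratio matches the intuition: the 3090 buys full-GPU residency, while the 4060 Ti setup is throttled by whatever layers stay in system RAM.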
BTW: Alder Lake CPUs run DDR5 in gear 2 (while AM5 runs DDR5 in gear 1). AFAIK gear 1 offers lower latency than gear 2. Would this give AM5 a big advantage when it comes to LLMs?
​ | 2023-11-08T16:30:11 | https://www.reddit.com/r/LocalLLaMA/comments/17qpg7a/comparing_4060_ti_16gb_ddr5_6000_vs_3090_24gb/ | regunakyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qpg7a | false | null | t3_17qpg7a | /r/LocalLLaMA/comments/17qpg7a/comparing_4060_ti_16gb_ddr5_6000_vs_3090_24gb/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '_Cy1RPxmhhSPkfvdnvVk19H8511kmHiwj8TBMqHskR0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=108&crop=smart&auto=webp&s=33ccdcdcea57a9cda3fc15f6300727af3375132f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=216&crop=smart&auto=webp&s=6a7d99b80ed3b9a01b14e8c07e9bf5dc81a125e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=320&crop=smart&auto=webp&s=b5fb599171ec39e394e32230f482064220f7d2c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=640&crop=smart&auto=webp&s=0cf54e956389bf70afed55d3ae917017a60306d6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=960&crop=smart&auto=webp&s=48bc38b79f8867395478b3a36f38e4a2d73265df', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=1080&crop=smart&auto=webp&s=cdb2d72501e887d807f3a33c44801c8367e6909c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?auto=webp&s=5bdbeb516c3b43cf70ac9f0be660a1c6e730322a', 'width': 1200}, 'variants': {}}]} |
What is the tiniest GPT model one can fine tune on home hardware? | 13 | Is there a way to get a zero-knowledge model that only knows how to chat, and from there fine-tune it with specialized knowledge? And do this on consumer hardware (Mac M1/16 GB) or free Colab hardware?
I want to do this to prevent the model from hallucinating outside of the domain knowledge it is fed, like passing in a textbook so it only knows how to answer questions from it | 2023-11-08T16:04:08 | https://www.reddit.com/r/LocalLLaMA/comments/17qovnr/what_is_the_tiniest_gpt_model_one_can_fine_tune/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qovnr | false | null | t3_17qovnr | /r/LocalLLaMA/comments/17qovnr/what_is_the_tiniest_gpt_model_one_can_fine_tune/ | false | false | self | 13 | null |
I'm looking for a Step-by-step fine tuning example | 24 | I've done a basic fine tune using colab and a very tiny Bloom model. For some reason I'm struggling with translating that knowledge to other tools.
I've built myself a little ML/AI rig with 2xRTX3090s, I'm really enjoying downloading models and playing with them - I've used Ooba, llama.cpp, etc. I've been able to expose my ooba interface publicly via the Gradio hosting on HF, etc.
I seem to learn this stuff the best when I can walk through a few examples. Many of the instructions I see are a bit vague and so on.. I'd just like to get a guaranteed walk through to work so I can be sure I understand the process (on a local machine, using the tools I have) and I can then have a base to expand my experiments.
Any pointers to good examples like this would be most appreciated.
​ | 2023-11-08T15:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/17qor2w/im_looking_for_a_stepbystep_fine_tuning_example/ | jubjub07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qor2w | false | null | t3_17qor2w | /r/LocalLLaMA/comments/17qor2w/im_looking_for_a_stepbystep_fine_tuning_example/ | false | false | self | 24 | null |
Best way to take your own data and format it into alpaca format | 2 |
All right, I'm not gonna lie, I'm stumbling trying to figure out the best way to do this. I have my own data and I want to be able to fine-tune with it in the Alpaca format. What's the best way to clean and organize the data? If the data doesn't already come paired up (instruction plus response), how do I take the separate pieces and put them together in the required format? Are there any easy tools for automating this instead of doing it all manually? ChatGPT may not work due to some of the conversations being more graphic. If anyone can give me guidance, I appreciate you. | 2023-11-08T15:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/17qoqh9/best_way_to_take_your_own_data_and_format_it_into/ | No-Glove1872 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qoqh9 | false | null | t3_17qoqh9 | /r/LocalLLaMA/comments/17qoqh9/best_way_to_take_your_own_data_and_format_it_into/ | false | false | self | 2 | null |
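For reference, the Alpaca format is just a JSON array of records with `instruction`, `input` (often empty), and `output` keys. A stdlib sketch of converting generic Q/A records; the source field names here are hypothetical, rename them to match your data:

```python
import json

# Hypothetical source records -- adjust the keys to whatever your data uses.
raw = [
    {"question": "What is QLoRA?", "context": "", "answer": "A 4-bit finetuning method."},
    {"question": "Summarize this.", "context": "LLMs are large.", "answer": "LLMs are big."},
]

def to_alpaca(rec):
    return {
        "instruction": rec["question"].strip(),
        "input": rec["context"].strip(),   # "" when there is no extra input
        "output": rec["answer"].strip(),
    }

dataset = [to_alpaca(r) for r in raw]
with open("alpaca_data.json", "w") as f:
    json.dump(dataset, f, indent=2, ensure_ascii=False)
print(dataset[0]["instruction"])
```

The hard part is really the pairing step upstream; once you can express each sample as question/context/answer, the Alpaca conversion itself is mechanical like this.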
Beginner question: Is there any way to use quantized GGUF models in Python under Windows, since AutoGPTQ doesn't work? I swear I tried searching but did not find an answer. | 1 | Hello everyone, I am currently trying to set up a small 7B Llama 2 chat model. The unquantized full version runs, but only very slowly, in PyTorch with CUDA. I have an RTX 3060 laptop with 16 GB of RAM. The model takes about 5-8 min to reply to the example prompt given:
I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?
Using kobold.cpp running llama-2-7b-chat.Q5_K_M.gguf, it takes literally seconds. But I found no way to load those quantized models in PyTorch under Windows, where AutoGPTQ doesn't work. Also, is PyTorch just a lot slower than kobold.cpp? | 2023-11-08T15:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/17qogch/beginner_question_is_there_any_way_to_use/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qogch | false | null | t3_17qogch | /r/LocalLLaMA/comments/17qogch/beginner_question_is_there_any_way_to_use/ | false | false | self | 1 | null |
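One route that works on Windows is llama-cpp-python, which loads GGUF files directly (same backend as kobold.cpp) and can offload layers to the GPU. A hedged sketch; the model path and layer count are assumptions you'd tune to your VRAM, and the call is guarded so the config still prints if the package or model file is missing:

```python
config = {
    "model_path": "llama-2-7b-chat.Q5_K_M.gguf",  # path to your local GGUF file
    "n_gpu_layers": 28,  # tune: offload as many layers as fit in the 3060's VRAM
    "n_ctx": 4096,
}

try:
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(**config)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Recommend shows like Breaking Bad."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
except Exception as exc:  # package or model file not available
    print(f"could not run llama.cpp backend ({exc}); config: {config}")
```

And no, PyTorch isn't inherently that much slower; the gap you're seeing is most likely unquantized fp16/fp32 weights spilling out of the laptop GPU's VRAM, versus a ~5 GB quantized model that fits.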
Where do you find the most relevant discussions to your LLM tinkering? | 1 | Since I only have 6 options, i left off Reddit cuz we already know it’s awesome, but it’s kinda quiet sometimes, so where’s everyone else?
[View Poll](https://www.reddit.com/poll/17qo9rs) | 2023-11-08T15:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/17qo9rs/where_do_you_find_the_most_relevant_discussions/ | NewportNerds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qo9rs | false | null | t3_17qo9rs | /r/LocalLLaMA/comments/17qo9rs/where_do_you_find_the_most_relevant_discussions/ | false | false | self | 1 | null |
GPT4ALL - best model for retrieving customer information from localdocs | 9 | Hi,
I'm doing some experiments with GPT4All. My goal is to create a solution that has access to our customers' information using localdocs, one document per customer.
The documents I am currently using are .txt with all information structured in natural language. My current model is Mistral OpenOrca.
I find that it's struggling to provide the correct info and to translate Swedish company names to English.
I'm wondering what the best combination of model and localdocs formatting is in order to get it to respond with info correctly.
E.g. I want to ask it:
* Customer XXX, how many support tickets do they have?
* Which tickets are marked with status active?
* What is the phone number of employee XXX?
* Show info on XXX in a table
etc. etc.
Any suggestions? | 2023-11-08T15:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/17qo6w1/gpt4all_best_model_for_retrieving_customer/ | leonbollerup | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qo6w1 | false | null | t3_17qo6w1 | /r/LocalLLaMA/comments/17qo6w1/gpt4all_best_model_for_retrieving_customer/ | false | false | self | 9 | null |
Is there any way to use quantized ggf models with hugging face under windows ? | 1 | As the title says I am new to this and god damit i hate working with windows already i can use autogptq since windows is not supported. And I have found no way of circumventing that.
Any help would be appreciated. | 2023-11-08T14:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/17qn2qo/is_there_any_way_to_use_quantized_ggf_models_with/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qn2qo | false | null | t3_17qn2qo | /r/LocalLLaMA/comments/17qn2qo/is_there_any_way_to_use_quantized_ggf_models_with/ | false | false | self | 1 | null |
Help needed - Fine-tuning Llama2 to build a ChatBot | 2 | Hi,
I am trying to build a ChatBot that will be used for Support purposes. Currently at the 0th step.
We have multiple datasources including PDFs and Confluence pages (mostly user guides). We would want to fine-tune Llama2 (QLoRA). The preprocessing part seems close to impossible with so many PDFs (500-600 MB worth) and multiple sources addressing the same topic.
Any help or an overview of how a project like this should be looked at would be greatly helpful. Open to any suggestion/information as I am just stepping into this field!
| 2023-11-08T12:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/17qkl0u/help_needed_finetuning_llama2_to_build_a_chatbot/ | Super-Task-5516 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qkl0u | false | null | t3_17qkl0u | /r/LocalLLaMA/comments/17qkl0u/help_needed_finetuning_llama2_to_build_a_chatbot/ | false | false | self | 2 | null |
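One note on the preprocessing above: whether you end up fine-tuning or doing RAG over these documents, the first step is the same, extract the text and split it into overlapping chunks tagged with their source. A stdlib sketch of the chunking part (the PDF/Confluence extraction itself is assumed to happen upstream with whatever extractor you choose):

```python
def chunk_text(text, source, chunk_chars=1500, overlap=200):
    """Greedy character-window chunking with overlap, tagged by source doc."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + chunk_chars, len(text))
        chunks.append({"source": source, "start": start, "text": text[start:end]})
        if end == len(text):
            break
        start = end - overlap  # overlap so answers spanning a boundary survive
    return chunks

doc = "A" * 4000  # stand-in for one extracted user guide
chunks = chunk_text(doc, "user_guide.pdf")
print(len(chunks), [c["start"] for c in chunks])
```

Keeping the `source` tag per chunk also helps later with the duplicate-topic problem: when several sources address the same topic, you can dedupe or prefer the canonical document at retrieval time.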
What's the best model for 7900XTX and 32gb RAM? | 1 | CPU is 5800X3D. | 2023-11-08T11:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/17qk16j/whats_the_best_model_for_7900xtx_and_32gb_ram/ | BentAmbivalent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qk16j | false | null | t3_17qk16j | /r/LocalLLaMA/comments/17qk16j/whats_the_best_model_for_7900xtx_and_32gb_ram/ | false | false | self | 1 | null |
What percent of your usage of LLMs are closed-source ones (GPT, Claude, etc.) and what percent are open source ones (Llama, Mistral, etc.)? Pick the answer that's closest to you. | 17 | I like following the open source LLM scene, but I'm going to keep using ChatGPT Plus since it's still much better than other LLMs out there. This is the only subscription I can justify paying for (though I am a broke college student). The only way I'd stop is if paid closed-source models weren't any better than open LLMs, or at least weren't enough better to justify paying. Open ones have the advantages of privacy, offline usage, and for some of them, less or no censorship.
[View Poll](https://www.reddit.com/poll/17qjx1w) | 2023-11-08T11:49:04 | https://www.reddit.com/r/LocalLLaMA/comments/17qjx1w/what_percent_of_your_usage_of_llms_are/ | TheTwelveYearOld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qjx1w | false | null | t3_17qjx1w | /r/LocalLLaMA/comments/17qjx1w/what_percent_of_your_usage_of_llms_are/ | false | false | self | 17 | null |
need help and advice running finetuned llama2 on less than 64GB RAM | 2 | hi folks,
so we are doing a simple finetune of llama2 7B. The problem is that once we do the finetune (using qlora), we are only able to deploy this on a 192GB machine.
we are using Sagemaker to deploy, but we see the same problem on normal servers through docker.
Can you folks give an idea of how small a machine you are deploying Llama2 7B on in production, and what the way to get there is? Should I be doing some Apache TVM stuff or some other form of optimization/compilation?
really stuck here so would totally appreciate any help | 2023-11-08T11:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/17qjqvd/need_help_and_advice_running_finetuned_llama2_on/ | sandys1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qjqvd | false | null | t3_17qjqvd | /r/LocalLLaMA/comments/17qjqvd/need_help_and_advice_running_finetuned_llama2_on/ | false | false | self | 2 | null |
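For a sanity check: the weights of a 7B model are only `7e9 × bytes-per-parameter`, so needing 192 GB points at something other than the model itself (loading in fp32, or holding both the base and the merged copy at once during the QLoRA merge, are common culprits). Back-of-envelope arithmetic:

```python
PARAMS = 7e9  # Llama 2 7B

def weight_gb(bits_per_param):
    """Memory for the weights alone, ignoring runtime overhead."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("Q4", 4.5)]:
    print(f"{name:>5}: ~{weight_gb(bits):.1f} GB of weights")
```

If the merged model is saved and reloaded in half precision (e.g. with `torch_dtype=torch.float16` in transformers), a 7B should serve comfortably within roughly 16-20 GB of RAM including runtime overhead, and a 4-bit GGUF quant in far less.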
How would it work ? | 1 | I've been using GPT-4 professionally but I'm a noob when it comes to open-source LLMs.
I have a SaaS and looking to help my users with AI features. Because Open-AI API comes with a cost, I'm curious: what's preventing us from hosting fine-tuned GPT models in the Cloud for super-cheap (like 10 to 100 times cheaper than GPT-4) ?
Thank you for clarifying ! | 2023-11-08T11:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/17qjfzp/how_would_it_work/ | weston-flows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qjfzp | false | null | t3_17qjfzp | /r/LocalLLaMA/comments/17qjfzp/how_would_it_work/ | false | false | self | 1 | null |
Local LLaMA for text classification? | 1 | I'm looking for a local LLM that can help me do multi-class text classification. I want to classify Wikipedia articles into 200-300 subject areas. I don't know which model is best suited for the job and can run locally, anyone have suggestions? | 2023-11-08T10:47:04 | https://www.reddit.com/r/LocalLLaMA/comments/17qj0vb/local_llama_for_text_classification/ | RuairiSpain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qj0vb | false | null | t3_17qj0vb | /r/LocalLLaMA/comments/17qj0vb/local_llama_for_text_classification/ | false | false | self | 1 | null |
selfhosting ollama to multiple users. is concurrent request possible? | 2 | I'm trying to selfhost the zephyr via ollama locally with my macbook (m1 pro) and allow my family members to use it.
Would it be able to handle concurrent prompt requests? What are the limitations of self-hosting this for multiple users? | 2023-11-08T10:24:20 | https://www.reddit.com/r/LocalLLaMA/comments/17qipos/selfhosting_ollama_to_multiple_users_is/ | hakim131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qipos | false | null | t3_17qipos | /r/LocalLLaMA/comments/17qipos/selfhosting_ollama_to_multiple_users_is/ | false | false | self | 2 | null |
24GB GPU OOM while qlora peft finetuning Llama2 7B | 4 | Batch size was 1. Dataset was only 300mb with 29k samples. Is it my hardware? | 2023-11-08T09:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/17qi73z/24gb_gpu_oom_while_qlora_peft_finetuning_llama2_7b/ | DirectionOdd9824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qi73z | false | null | t3_17qi73z | /r/LocalLLaMA/comments/17qi73z/24gb_gpu_oom_while_qlora_peft_finetuning_llama2_7b/ | false | false | self | 4 | null |
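Probably not the hardware: dataset size barely matters for VRAM, but sequence length does, since activation memory grows with context. A very rough sketch of where a QLoRA 7B run's memory goes; the activation coefficient and LoRA size are loose assumptions, just to show the scaling:

```python
PARAMS = 7e9  # Llama 2 7B

def qlora_vram_gb(seq_len, batch=1, lora_params=40e6):
    base = PARAMS * 0.5 / 1e9               # 4-bit base weights, frozen
    lora = lora_params * (2 + 4 + 8) / 1e9  # fp16 adapter + grads + Adam states
    # Crude activation term: grows linearly with batch * seq_len (no checkpointing).
    activations = batch * seq_len * 0.004
    return base + lora + activations

for s in (512, 2048, 4096, 8192):
    print(f"seq {s:>5}: ~{qlora_vram_gb(s):.1f} GB")
```

If your 29k samples include long ones, the usual fixes are capping `max_seq_length`, enabling gradient checkpointing, and using a paged optimizer, before blaming the GPU.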
How to run llama 2 70b on 4 GPU cluster (4x A100) | 7 | I have a cluster of 4 A100 GPUs (4x80GB) and want to run meta-llama/Llama-2-70b-hf. I'm a beginner and need some guidance.
Is 4x A100 enough to run the model, or is it more than required?
Need the model for inference only. | 2023-11-08T08:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/17qh4z4/how_to_run_llama_2_70b_on_4_gpu_cluster_4x_a100/ | Bright_Counter_2280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qh4z4 | false | null | t3_17qh4z4 | /r/LocalLLaMA/comments/17qh4z4/how_to_run_llama_2_70b_on_4_gpu_cluster_4x_a100/ | false | false | self | 7 | null |
OpenAI API update (mostly for coders) | 34 | Hi all, I admit I didn't pay much attention to OpenAI's dev day, so I got tripped up this evening when I did a virtual env refresh and all my local LLM access code broke. Turns out [they massively revved their API](https://github.com/openai/openai-python/pull/677). This is mostly news for folks like me who either maintain an LLM-related project or just prefer to write their own API access clients. I think we probably have enough of us here to share some useful notes; I see y'all posting Python code now and then.
Anyway, the best news is that they deprecated the nasty old "hack this imported global resource approach" in favor of something more encapsulated.
Here's a quick example of using the updated API with a local LLM (at localhost). Works with my llama-cpp-python hosted LLM. Their new API docs are a bit on the sparse side, so I had to do some spelunking in the upstream code to straighten it all out.
```py
from openai import OpenAI
client = OpenAI(api_key='dummy', base_url='http://127.0.0.1:8000/v1/')
chat_completion = client.chat.completions.create(
messages=[
{
"role": "user",
"content": "Say this is a test",
}
],
# Just use whatever llama-cpp-python or whatever mounts for the model
model="dummy",
)
print(chat_completion.choices[0].message.content)
```
Final line prints just the actual first choice response message text you might be familiar with from the underlying JSON structure.
I'll continue to maintain notes in [this ticket](https://github.com/OoriData/OgbujiPT/issues/49) as I update [OgbujiPT](https://github.com/OoriData/OgbujiPT) (open source client-side LLM toolkit), but I'll also update this thread with any other really interesting bits I come across. | 2023-11-08T06:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/17qfv33/openai_api_update_mostly_for_coders/ | CodeGriot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qfv33 | false | null | t3_17qfv33 | /r/LocalLLaMA/comments/17qfv33/openai_api_update_mostly_for_coders/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'nIggTkjo7gzM-IFBZqaOqctXH9xvUzQ6Zekn9EG3FCw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?width=108&crop=smart&auto=webp&s=dfc56388f7de0b31bc03bc461aa25dd29c3b88b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?width=216&crop=smart&auto=webp&s=0209d5c1336c56915bfe295eebd5d90b3c25df4d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?width=320&crop=smart&auto=webp&s=eea50640ca8d465929302ee91da165679860d37c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?width=640&crop=smart&auto=webp&s=443f97429d1059712e4dad60ed080b7d972ffab0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?width=960&crop=smart&auto=webp&s=337f9cb1ad30522b15c73ac907154eead7c0fa1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?width=1080&crop=smart&auto=webp&s=22e099c423b4a8a573accd90bbc39585e9ca067c', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/nhG5riz9EI75zV0Ew5wyKNHK7BzVi6r6cVsPfLuWDI8.jpg?auto=webp&s=491e48351dca11742b4eddcb45597969cd790f71', 'width': 1200}, 'variants': {}}]} |
Suggestions for an LLM to pair with home automation | 1 | Hi guys, I just started playing with LLMs yesterday when I found out I can run them in PowerShell using my 7900 XTX. I've played with WizardLM and Dolphin so far. Very cool stuff.
My end goal is to offload an LLM to its own box and integrate it with Home Assistant. Think Google Nest or Alexa, but entirely in-house.
Would you guys have any suggestions? Ideally it's capable of saving information and keeping track of conversations with different people via something like DeepSpeech.
I've spent a few hours looking through Hugging Face (mostly TheBloke models) and tried reading articles on what quantization is or the difference between Llama 2 and Mistral, but it's all still new and confusing, as I'm just blindly reading bits and pieces and don't really know where to start.
Thanks in advance! | 2023-11-08T06:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/17qftu4/suggestions_for_an_llm_to_pair_with_home/ | Responsible_Speaker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qftu4 | false | null | t3_17qftu4 | /r/LocalLLaMA/comments/17qftu4/suggestions_for_an_llm_to_pair_with_home/ | false | false | self | 1 | null |
Fill-in-the-middle fine-tuning for LLMs | 3 | Hey, I am looking to fine-tune an LLM with the [FIM](https://arxiv.org/abs/2207.14255) method, but I am not able to find any repos online that I can use/follow. InCoder from Meta is also trained in a similar way, but I can't find the training code for that either.
I think this method can be particularly useful when training models to "learn" a particular library or a codebase they haven't seen before, or at least that is my hypothesis.
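For anyone else looking: the core FIM data transform itself is small enough to sketch in plain Python. This is a hedged sketch of the PSM (prefix-suffix-middle) reordering described in the paper; the `<PRE>`/`<SUF>`/`<MID>` sentinel strings here are placeholders, not the special tokens any particular tokenizer actually uses:

```python
import random

def fim_transform(doc: str, rng: random.Random) -> str:
    """Split a document at two random cut points into (prefix, middle,
    suffix) and reorder it PSM-style, as in the FIM paper."""
    i, j = sorted(rng.randrange(len(doc) + 1) for _ in range(2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    # Placeholder sentinel strings; real setups use dedicated special tokens.
    return f"<PRE>{prefix}<SUF>{suffix}<MID>{middle}"

rng = random.Random(0)
print(fim_transform("def add(a, b):\n    return a + b\n", rng))
```

Whether this matches any given repo's sentinel handling is an open question; the SPM variant and the document-level vs. context-level splitting choices in the paper add more wrinkles on top of this.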
Please let me know if you find any resources. TIA. | 2023-11-08T06:29:48 | https://www.reddit.com/r/LocalLLaMA/comments/17qfntm/fill_in_middle_finetuning_llm/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qfntm | false | null | t3_17qfntm | /r/LocalLLaMA/comments/17qfntm/fill_in_middle_finetuning_llm/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
GPT4V can manipulate GUI? | 2 | Hey everyone,
I came across this [tweet](https://twitter.com/josh_bickett/status/1721975391047589934?t=2JoN3mrIw_c3vvkV4e6_jw&s=19) and they're somehow using GPT4V to control their computer.
What I don't understand is how GPT4V clicks buttons, as I'm sure it isn't correctly identifying the pixel location and passing that to a function to press a button. Any idea how they achieved this?
TIA. | 2023-11-08T06:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/17qfdqn/gpt4v_can_manipulate_gui/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qfdqn | false | null | t3_17qfdqn | /r/LocalLLaMA/comments/17qfdqn/gpt4v_can_manipulate_gui/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '3J6Wn6CmyTtyjLZU6rntkcZkJl5yxfJ-t3a6DbaUMjU', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/-Xe6OLBXfFoIBJLy8Ph0zq2Lw4n7eVlvG6YED1xafSM.jpg?width=108&crop=smart&auto=webp&s=f8849fab71775a7722363c96c9c6703ae417b2cb', 'width': 108}], 'source': {'height': 89, 'url': 'https://external-preview.redd.it/-Xe6OLBXfFoIBJLy8Ph0zq2Lw4n7eVlvG6YED1xafSM.jpg?auto=webp&s=fe436cb161f374aedb07de38484bcf3dbe2a463c', 'width': 140}, 'variants': {}}]} |
A mental therapy chatbot | 3 | Hello all,
I am an undergraduate Computer Science student. I have been working on making a chatbot to be used as a therapist.
For this purpose, I have used this [script](https://github.com/vibhorag101/phr-chat/blob/main/Finetune/finetune.ipynb) to fine-tune the [llama-2-13b-chat-hf](https://huggingface.co/vibhorag101/llama-2-13b-chat-hf-phr_mental_therapy) and [llama-2-7b-chat-hf](https://huggingface.co/vibhorag101/llama-2-7b-chat-hf-phr_mental_health-2048) models on [this](https://huggingface.co/datasets/vibhorag101/phr_mental_health_dataset) dataset.
Although the responses on both models are great, I want to take it even further by profiling the emotions in the chatbot responses using an [emotion classifier model](https://huggingface.co/SamLowe/roberta-base-go_emotions), to ensure the model's responses are always cheerful.
One way I have thought this could be implemented: check whether the detected emotion is cheerful, and if not, re-query the model, telling it that its response was not cheerful enough, and then display the new response.
Can any of you assist as to how can the above thing be implemented in a better fashion? Also, can this process be incorporated somehow in the fine-tuning phase? Any help would be much appreciated | 2023-11-08T04:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/17qdogw/a_mental_therapy_chatbot/ | AvengerIronMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qdogw | false | null | t3_17qdogw | /r/LocalLLaMA/comments/17qdogw/a_mental_therapy_chatbot/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '0S5IBrUXwox2VP0zBqo0VBArkGT2A1Nhz57Dlr41_SY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?width=108&crop=smart&auto=webp&s=3ca00b6aaa3820420ca2ff10edb3ca0ad2ba44e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?width=216&crop=smart&auto=webp&s=5f9fbde6790a03efd82fb76348f0a4cd80bc74b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?width=320&crop=smart&auto=webp&s=a2060655cc8eac8e8d8c33345055ac05f5413b9c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?width=640&crop=smart&auto=webp&s=a74d743c746a22c0c2608c7d6728f15986ff75c0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?width=960&crop=smart&auto=webp&s=c75029b24e8cc215406e066ed323fec707641bab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?width=1080&crop=smart&auto=webp&s=305d7b0e7e43f294eadf59db6b3f8e50a682b32d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XAZjQI8D3tNsEFl2VmoFZg8GKGWMa7Yi864eRG5NVCI.jpg?auto=webp&s=63da9034454fb663a5b1a057b64a8a8030cec318', 'width': 1200}, 'variants': {}}]} |
I have to ask, why is no one using fuyu? | 49 | I've been looking at Fuyu for the past couple of days now, and it's incredible. It's got OCR, can read graphs, gives bounding boxes. How is no one using this? I get that it might not be in a UI, but it's available through all of HF's libraries, and it has Gradio. While I haven't tested the last claim, it supposedly matches Llama while being 8B instead of 13B. Thoughts? | 2023-11-08T03:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/17qcehe/i_have_to_ask_why_is_no_one_using_fuyu/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qcehe | false | null | t3_17qcehe | /r/LocalLLaMA/comments/17qcehe/i_have_to_ask_why_is_no_one_using_fuyu/ | false | false | self | 49 | null |
Microsoft natural language voices on Edge | 5 | I use the Chrome extension called Read Aloud for TTS when web browsing. If I use it in Edge, it uses Microsoft's natural language voices. It sounds amazing, and it's fast and free. Is there a way to use these voices as an auto TTS? They sound almost as good as ElevenLabs and are faster. | 2023-11-08T03:04:19 | https://www.reddit.com/r/LocalLLaMA/comments/17qc9ox/microsoft_natural_language_voices_on_edge/ | psdwizzard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qc9ox | false | null | t3_17qc9ox | /r/LocalLLaMA/comments/17qc9ox/microsoft_natural_language_voices_on_edge/ | false | false | self | 5 | null |
PEFT LoRA and Fine-Tuning Vram Usage | 3 | Hi all,
Do you know the VRAM required for PEFT? There are some numbers out there, but for instance I want to know roughly how much VRAM it takes to LoRA or fine-tune a 30B model via PEFT. I can't find shit about that, thank you. | 2023-11-08T02:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/17qbxsd/peft_lora_and_finetuning_vram_usage/ | xadiant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qbxsd | false | null | t3_17qbxsd | /r/LocalLLaMA/comments/17qbxsd/peft_lora_and_finetuning_vram_usage/ | false | false | self | 3 | null |
I need lightweight models for my friend | 1 | well, title explains itself *but not so much*.
I have a friend who doesn't know anything about computer things. (I also don't know anything about LocalLLaMA things and don't have time to learn them.)
I need him to get an AI chat assistant that he can use on PC (I think we will prefer to use text-generation-webui by oobabooga) and as a browser extension too.
He doesn't know English but really wants to learn it, and he is also willing to learn new things about life, awareness, and self-improvement.
We can use the prebuilt deep-translator==1.9.2 extension that comes with text-gen-webui as a temporary solution for chatting with a lightweight ChatGPT-like AI,
but it is not enough. As I already said, he really needs a one-click solution for translating web pages.
​
basically we need:
Lightweight "ChatGPT like" AI,
lightweight tutor(?) or translator AI
​
his specs:
Acer Aspire E E5-521-26LT
AMD A6 A6-6310 1.8 GHz
AMD Radeon R4
x64 based pc, windows 10 pro | 2023-11-08T01:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/17qao4d/i_need_lightweight_models_for_my_friend/ | denyicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17qao4d | false | null | t3_17qao4d | /r/LocalLLaMA/comments/17qao4d/i_need_lightweight_models_for_my_friend/ | false | false | self | 1 | null |
Best model for coding? | 1 | [removed] | 2023-11-08T01:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/17q9rmz/best_model_for_coding/ | spammmmm1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q9rmz | false | null | t3_17q9rmz | /r/LocalLLaMA/comments/17q9rmz/best_model_for_coding/ | false | false | self | 1 | null |
Llama-2-7B 8bit finetuning challenges | 7 | So I’m very new to fine tuning llama 2. I’m currently trying to fine tune the llama2-7b model on a dataset with 50k data rows from nous Hermes through huggingface. I use the Autotrainer-advanced single line cli command. Since I’m on a Windows machine, I use bitsandbytes-windows which currently only supports 8bit quantisation. I have a rtx 3090 with 24gb vram.
When I run the CLI command (I keep all the settings at their defaults except for the ones I mentioned), it does run, but it spends about 6 hours training, it doesn't give me any logging information as it goes (nothing, not even "dataset loaded", just blank), and when it finished it said it had only completed 0.29 epochs instead of the full 1 epoch. The resulting fine-tuned model does work somewhat, but not very well; that is probably because the dataset needs some refining, though.
So what I’m still not understanding is the lack of logging, why it stops before the epoch ended, and why it takes so long? Is it because I’m on a Windows OS? Or are there any hyperparameters for which autotrain uses an inefficient default value? | 2023-11-08T00:46:03 | https://www.reddit.com/r/LocalLLaMA/comments/17q9gnd/llama27b_8bit_finetuning_challenges/ | SufiTripper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q9gnd | false | null | t3_17q9gnd | /r/LocalLLaMA/comments/17q9gnd/llama27b_8bit_finetuning_challenges/ | false | false | self | 7 | null |
Transformer inference arithmetic | 3 | I'm reading through [https://kipp.ly/transformer-inference-arithmetic/#kv-cache](https://kipp.ly/transformer-inference-arithmetic/#kv-cache) and I have a few questions.
1- At the " Say we have an A100 GPU..." part, why is memory based on `2 * 2 * n_layers * d_model^2` when they previously said "Per token, the number of bytes we store is `2 * 2 * n_layers * n_heads * d_head` "?
2- At the " we get a distinct ratio here of 208..." part when they compute the ops:byte ratio, they say that computing kv for a single token would take the same time as for 208 tokens. Does this mean that in order to not be compute-bound nor memory-bound, the optimal context length for inference on a A100-40G be 208 tokens?
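Not an answer, but a quick back-of-the-envelope sketch in Python of the two quantities being compared, with Llama-2-7B-ish dimensions assumed (n_layers=32, d_model=4096). If I'm reading the article right, the `d_model^2` term sizes the K/V projection *weights*, while the per-token cache entry uses `n_heads * d_head`, which equals `d_model`:

```python
n_layers, d_model = 32, 4096  # assumed Llama-2-7B-ish dimensions

# Per-token KV-cache bytes: 2 (K and V) * 2 (fp16 bytes) * n_layers
# * n_heads * d_head, where n_heads * d_head == d_model.
kv_cache_bytes_per_token = 2 * 2 * n_layers * d_model

# Bytes of weights that *produce* K and V: each projection is a
# d_model x d_model matrix, hence the d_model**2 in the A100 estimate.
kv_weight_bytes = 2 * 2 * n_layers * d_model**2

print(kv_cache_bytes_per_token)   # 524288 bytes, i.e. 0.5 MiB per token
print(kv_weight_bytes / 2**30)    # 2.0 (GiB of fp16 K/V projection weights)
```

The dimension values are assumptions for illustration; swap in the model you care about. On question 2, my reading is that 208 is a break-even point for the K/V projection matmuls specifically, not an "optimal context length" for inference overall, but I'd welcome a correction.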
Thank you for your time. | 2023-11-08T00:26:06 | https://www.reddit.com/r/LocalLLaMA/comments/17q91ep/transformer_inference_arithmetic/ | Evirua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q91ep | false | null | t3_17q91ep | /r/LocalLLaMA/comments/17q91ep/transformer_inference_arithmetic/ | false | false | self | 3 | null |
Watching the ChatGPT OpenAI's Dev Day and they are rolling out GPT agents! | 1 | Damn, maybe time to subscribe to GPT again?
You will now (or soon) be able to create your own agents, called GPTs. They are on a roll.
Google was right, they have no moat. Not from us, but from MS-backed OpenAI.
​ | 2023-11-08T00:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/17q8s7f/watching_the_chatgpt_openais_dev_day_and_they_are/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q8s7f | false | null | t3_17q8s7f | /r/LocalLLaMA/comments/17q8s7f/watching_the_chatgpt_openais_dev_day_and_they_are/ | false | false | self | 1 | null |
Using @TheBloke's name in model names to show up in searches | 1 | 2023-11-07T23:39:17 | https://huggingface.co/lamtung16/TheBloke-Llama-2-7B-Chat-GGUF | YearZero | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17q823h | false | null | t3_17q823h | /r/LocalLLaMA/comments/17q823h/using_theblokes_name_in_model_names_to_show_up_in/ | false | false | 1 | {'enabled': False, 'images': [{'id': '_yLpaQz8TGPbm0iotR1GWXJRjkO-cYZ1tV_a7hXivbE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?width=108&crop=smart&auto=webp&s=1901fbfed1b210d91e96f325ff8d3b5e6e7ab6d1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?width=216&crop=smart&auto=webp&s=f48017c121931d59f7f623d05b608fac1e42fb61', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?width=320&crop=smart&auto=webp&s=5f724327bf82a00e1814d7dbe63b5f78af2beb43', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?width=640&crop=smart&auto=webp&s=4b75f48e123a4e8d017bba6f5b76034e1dd7563e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?width=960&crop=smart&auto=webp&s=825caacf71f471cd379ba03f18f3c4f511c73a22', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?width=1080&crop=smart&auto=webp&s=286dc8cc0e8ca11b7fd9c62a26d041aab329bd89', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vJ3LPRmNio6be9dKccQoyT9Hifsyp8nB-tsHuCwoWv4.jpg?auto=webp&s=e4f6c486c2b0eb93031a2a1cd78d9374ee4ff7be', 'width': 1200}, 'variants': {}}]} | ||
Better transformer not providing a speedup (Llama 7b) | 1 | Hi,
I am trying to run Llama 7b with torch.float16 using BetterTransformer, but I see no speedup (the throughput is exactly the same). I am using PyTorch 2.1.0, CUDA 11.8, transformers 4.33.0. I am getting the model from the Hub using from_pretrained, putting it on my GPU using .to(0), and then just calling model=model.to_bettertransformer(). Am I doing something incorrectly? Or does BetterTransformer not work well for Llama?
I am doing this on a g5.2xlarge EC2 instance with one A10G.
Any help would be appreciated! | 2023-11-07T22:35:27 | https://www.reddit.com/r/LocalLLaMA/comments/17q6mcd/better_transformer_not_providing_a_speedup_llama/ | cinnamonKnight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q6mcd | false | null | t3_17q6mcd | /r/LocalLLaMA/comments/17q6mcd/better_transformer_not_providing_a_speedup_llama/ | false | false | self | 1 | null |
Writing E-commerce product description | 1 | [removed] | 2023-11-07T22:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/17q6icx/writing_ecommerce_product_description/ | Gullible-Being-8595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q6icx | false | null | t3_17q6icx | /r/LocalLLaMA/comments/17q6icx/writing_ecommerce_product_description/ | false | false | self | 1 | null |
Ethical Policing AI | 1 | 👋🏻 I'm working on developing an ethical policing AI that consumes bodycam footage to deduce bias. If you're interested, and would like to learn more please message me or email me at steve@authenticate.com. We have a great team is academics and experts and we're funded. We're looking for some machine learning engineers, data scientists and the like. Thanks in advance! | 2023-11-07T22:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/17q6e76/ethical_policing_ai/ | stevenbward | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q6e76 | false | null | t3_17q6e76 | /r/LocalLLaMA/comments/17q6e76/ethical_policing_ai/ | false | false | self | 1 | null |
Anybody using mystic.ai inferencing API service? | 1 | Is it reliable, etc.? (If I should post this question somewhere else, pls LMK.)
Is there another service you like better that provides cloud, API, GPU-powered inferencing, pay by the second/minute? (I'm using huggingface inferencing for pros for other experiments...)
Thanks! | 2023-11-07T22:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/17q66sq/anybody_using_mysticai_inferencing_api_service/ | v5outer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q66sq | false | null | t3_17q66sq | /r/LocalLLaMA/comments/17q66sq/anybody_using_mysticai_inferencing_api_service/ | false | false | self | 1 | null |
Anybody using Mystic.ai? | 1 | [removed] | 2023-11-07T22:07:17 | https://www.reddit.com/r/LocalLLaMA/comments/17q5ygo/anybody_using_mysticai/ | v5outer1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q5ygo | false | null | t3_17q5ygo | /r/LocalLLaMA/comments/17q5ygo/anybody_using_mysticai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FqW9ScFzzB4LLI40YJIrOrsbPvTNWhuhwEFgqxonc1w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?width=108&crop=smart&auto=webp&s=81b00356dbd3b0ecbf15421516966bd8c522e2b7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?width=216&crop=smart&auto=webp&s=3b1af1c428a0378f81f09aba4a4050471d0d6d09', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?width=320&crop=smart&auto=webp&s=81f7c3c086060db78f5fbf92efc4a491a7b99759', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?width=640&crop=smart&auto=webp&s=1379b86beb23d951162c1f6b6f6f0189e4f959f8', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?width=960&crop=smart&auto=webp&s=0f7f2509c93705ed4542341091ac04b05c28c026', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?width=1080&crop=smart&auto=webp&s=28e70c72854b86e530e7534ef7e7f8a78caad90d', 'width': 1080}], 'source': {'height': 2512, 'url': 'https://external-preview.redd.it/5QvodoPkeovBrbONLYxvMWN81xovFfld1dGOmYR4ZrI.jpg?auto=webp&s=82cec0034c218cbe5068f0acd6fa172f78aed630', 'width': 4800}, 'variants': {}}]} |
What is the best way to run a llama-like model locally? | 3 | Ollama works well for me, but I'm getting weird results on llama.cpp with Zephyr models.
I'd love to run Zephyr models with good performance and portability so I can add them to my app. What is the best way to do that currently?
Thanks! | 2023-11-07T22:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/17q5wt1/what_is_the_best_way_to_run_llamaalike_model/ | foginthewater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q5wt1 | false | null | t3_17q5wt1 | /r/LocalLLaMA/comments/17q5wt1/what_is_the_best_way_to_run_llamaalike_model/ | false | false | self | 3 | null |
What type of business application are you developing? | 5 | I hear about people using local models for their applications, but in the context of the business landscape (warehouse management, accounting, etc.), has anyone seen a use case or is anyone developing an application? | 2023-11-07T21:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/17q5ed9/what_type_of_business_application_are_you/ | troposfer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q5ed9 | false | null | t3_17q5ed9 | /r/LocalLLaMA/comments/17q5ed9/what_type_of_business_application_are_you/ | false | false | self | 5 | null |
dolphin-2.2-70b | 164 | Today I released Dolphin 2.2 70b (deprecating version 2.1, which had an EOS token bug)
[https://huggingface.co/ehartford/dolphin-2.2-70b](https://huggingface.co/ehartford/dolphin-2.2-70b)
This model is based on llama2, so it is suitable for commercial or non-commercial use.
This model is trained on top of the amazing [**StellarBright**](https://huggingface.co/sequelbox/StellarBright) base model.
New in 2.2 is conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, and with extra training in long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. [**https://erichartford.com/uncensored-models**](https://erichartford.com/uncensored-models) You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin-2.2-70b's training was sponsored by [**a16z**](https://a16z.com/supporting-the-open-source-ai-community/).
Thanks to Jon Durbin, Wing Lian, WizardLM, and TheBloke.
​
[Example output](https://preview.redd.it/nhcitr7rvzyb1.png?width=1061&format=png&auto=webp&s=667912e26b91073b388e9d6bdae6219fd48e9569) | 2023-11-07T21:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/17q57tq/dolphin2270b/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q57tq | false | null | t3_17q57tq | /r/LocalLLaMA/comments/17q57tq/dolphin2270b/ | false | false | 164 | {'enabled': False, 'images': [{'id': 'z_X0WGQLtGMI13LB4f87TNPdylnFw8o-mja2-p7wXwY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?width=108&crop=smart&auto=webp&s=788b2171ade5833376570e388f5f695853e7d45d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?width=216&crop=smart&auto=webp&s=c18496ce86da469672205d500a67e2a7b0f3b50e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?width=320&crop=smart&auto=webp&s=bb6f8f1db3f2a6d65473faca026c81e4e65b842a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?width=640&crop=smart&auto=webp&s=b60a4fadb4d2802c419d3ff14e8b09423d24d69d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?width=960&crop=smart&auto=webp&s=caeddb814bfe7d0189313657adfdb2a93d2b0e9e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?width=1080&crop=smart&auto=webp&s=de261ab4adfa418b60d3f7ccc26b81280577b2ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EfKhpgP12muVEz_LtXfCcQj1IX9Rgt8Y3M6qJlM45kQ.jpg?auto=webp&s=71bf8f159521986ed253834269e94b28ff539539', 'width': 1200}, 'variants': {}}]} | |
Where Should I Go? | 10 | Every time I ask a question on this sub about installing, coding, and utilizing LLMs, the question gets downvoted, making it hard to get answers.
Is this sub only for benchmark testing and AI news? Is there another sub that would be welcoming to answering questions involving local projects with local LLMs? I thought it was this sub, but it's getting hard to find answers when my questions are immediately downvoted.
Also, why would you downvote that? What purpose does it serve you, while making someone else's struggle even strugglier? What good does it do to push genuine questions out of this sub? | 2023-11-07T20:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/17q3e7g/where_should_i_go/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q3e7g | false | null | t3_17q3e7g | /r/LocalLLaMA/comments/17q3e7g/where_should_i_go/ | false | false | self | 10 | null |
A new LLM Explorer has been released to let you find a better model | 1 | [removed] | 2023-11-07T20:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/17q2yaq/a_new_llm_explorer_has_been_released_to_let_you/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q2yaq | false | null | t3_17q2yaq | /r/LocalLLaMA/comments/17q2yaq/a_new_llm_explorer_has_been_released_to_let_you/ | false | false | 1 | null | |
Why *SHOULDN'T* I buy a pair of 3060 12gb? | 39 | Yes, you can get a used 3090 for a few hundred more off eBay, but why would a regular person who doesn't want to sink an arm and a leg into this not just run dual 3060s?
​
Love this community! Thanks for all who contribute! | 2023-11-07T19:16:09 | https://www.reddit.com/r/LocalLLaMA/comments/17q1yb1/why_shouldnt_i_buy_a_pair_of_3060_12gb/ | SocialDinamo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q1yb1 | false | null | t3_17q1yb1 | /r/LocalLLaMA/comments/17q1yb1/why_shouldnt_i_buy_a_pair_of_3060_12gb/ | false | false | self | 39 | null |
Prompts to fine-tune a story-writing model? | 1 | Hi! I'd like to fine-tune a model to write a film scene in the style of a film director I like, but I'm unsure about the process. I plan to use a story-writing model (no chat-based, only text generation) like guanaco, vicuna, or mtp. I'm using Colab Pro, so I believe the most powerful model I can fine-tune is a 13B.
​
​
The main question is, what would be the best format for the dataset? Currently, I have lots of scripts with text categorized into four types: CHARACTER (character name), DIALOGUE (what the character says), LOCATION (scene location), and SCENE-DESCRIPTION (the description of the scene). How should I structure the dataset for fine-tuning the model? Should I use special tokens, a specific prompt format, or just provide the model with the raw text?
​
​
Thanks for your help. | 2023-11-07T19:01:30 | https://www.reddit.com/r/LocalLLaMA/comments/17q1m7i/prompts_to_finetune_a_storywriting_model/ | Carlos__8a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q1m7i | false | null | t3_17q1m7i | /r/LocalLLaMA/comments/17q1m7i/prompts_to_finetune_a_storywriting_model/ | false | false | self | 1 | null |
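One simple option for the dataset question above is to flatten each scene into plain text with consistent tags rather than special tokens, so the base model's tokenizer stays untouched. A hedged sketch (the tag names are made up for illustration; any consistent scheme works, as long as the training and inference prompts match):

```python
def scene_to_text(location, description, dialogue):
    """Flatten one categorized scene into a single training string.

    `dialogue` is a list of (character, line) pairs. The LOCATION/SCENE
    tags are arbitrary choices for this sketch, not a known convention.
    """
    lines = [f"LOCATION: {location}", f"SCENE: {description}"]
    for character, line in dialogue:
        lines.append(f"{character.upper()}: {line}")
    return "\n".join(lines)

example = scene_to_text(
    "INT. DINER - NIGHT",
    "Rain streaks the window. A waitress refills two cups.",
    [("Vince", "You ever think about quitting?"), ("Mia", "Every day.")],
)
print(example)
```

At inference time you would prompt with the same `LOCATION:` / `SCENE:` header and let the model continue from there.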
AI Portal Gun | 20 | 1. Ever felt lost in the boundless AI universe? Say goodbye to confusion! We've built [http://portalgunai.org/](http://portalgunai.org/) to transport you to the correct dimension to deal with that.
2. In the vast AI cosmos, knowledge scatters like stardust. To guide your journey from AI novice to expert, we've forged an AI Portal Gun, helping you navigate the ever-evolving AI universe.
3. Get ready to teleport to a planet filled with free, expert-curated tutorials, guides, articles, courses, papers, and books. Unlock the power to conquer the AI universe with these resources!
4. Tailored Adventures: Dive deep into your AI passions, whether it's computer vision, NLP, deep learning, AI in healthcare, robotics, the mathematics behind AI's core principles, or more. A universe of wisdom awaits!
5. Embark on a quest through generative AI! Unleash your creativity with audio, video, images, and code generation. The cosmos is your canvas. | 2023-11-07T18:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/17q17ip/ai_portal_gun/ | Jiraiya27s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q17ip | false | null | t3_17q17ip | /r/LocalLLaMA/comments/17q17ip/ai_portal_gun/ | false | false | self | 20 | null |
I am stumped and my AI isn't too helpful | 1 | I am trying to create a chatbot with a locally installed copy of OpenHermes 2.5.
Here is the part of the code that is throwing errors:
import transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('C:/Users/coffe/Desktop/Senter/models/openhermes-2.5-mistral-7b.Q5_K_M.gguf')
model = AutoModelForCausalLM.from_pretrained('C:/Users/coffe/Desktop/Senter/models/openhermes-2.5-mistral-7b.Q5_K_M.gguf')
Here is the error I am receiving:
Traceback (most recent call last):
  File "c:\Users\coffe\Desktop\Senter\Senterv01.py", line 8, in <module>
    tokenizer = AutoTokenizer.from_pretrained('C:\\Users\coffe\Desktop\Senter\models\openhermes-2.5-mistral-7b.Q5_K_M.gguf')
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 718, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 550, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\transformers\utils\hub.py", line 430, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\huggingface_hub\utils\_validators.py", line 164, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'C:\Users\coffe\Desktop\Senter\models\openhermes-2.5-mistral-7b.Q5_K_M.gguf'.
(env) PS C:\Users\coffe\Desktop\Senter> & c:/Users/coffe/Desktop/Senter/env/Scripts/python.exe c:/Users/coffe/Desktop/Senter/Senterv01.py
Traceback (most recent call last):
  File "c:\Users\coffe\Desktop\Senter\Senterv01.py", line 9, in <module>
    tokenizer = AutoTokenizer.from_pretrained('C:/Users/coffe/Desktop/Senter/models/openhermes-2.5-mistral-7b.Q5_K_M.gguf')
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 718, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 550, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\transformers\utils\hub.py", line 430, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\coffe\Desktop\Senter\env\Lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'C:/Users/coffe/Desktop/Senter/models/openhermes-2.5-mistral-7b.Q5_K_M.gguf'. Use `repo_type` argument if needed.
Can anyone help me? I'm sure it's something stupid. It seems to have a hard time loading the model from the directory. Thank you so much | 2023-11-07T18:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/17q0u93/i_am_stumped_and_my_ai_isnt_too_helpful/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q0u93 | false | null | t3_17q0u93 | /r/LocalLLaMA/comments/17q0u93/i_am_stumped_and_my_ai_isnt_too_helpful/ | false | false | default | 1 | null |
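For what it's worth, the error in the post above comes from pointing `AutoTokenizer` at a single GGUF file: transformers expects a Hugging Face repo id or a directory of HF-format weights, not a standalone `.gguf`, so it tries to parse the path as a repo id and fails validation. GGUF files are typically loaded with llama-cpp-python instead. A hedged sketch (the ChatML prompt format is assumed from the OpenHermes model card; verify the exact template for your model):

```python
def chatml_prompt(user_msg):
    """Build an OpenHermes/ChatML-style prompt string (format assumed)."""
    return f"<|im_start|>user\n{user_msg}<|im_end|>\n<|im_start|>assistant\n"

if __name__ == "__main__":
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Llama() takes the .gguf file directly; no separate tokenizer object needed.
    llm = Llama(
        model_path="C:/Users/coffe/Desktop/Senter/models/openhermes-2.5-mistral-7b.Q5_K_M.gguf",
        n_ctx=4096,
    )
    out = llm(chatml_prompt("Hello!"), max_tokens=64, stop=["<|im_end|>"])
    print(out["choices"][0]["text"])
```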
Automated tests | 2 | Howdy trainers, fine tuners and inferences ✋
I wrote a simple test where I send 100 questions to an OpenAI-compatible API, retrieve a number from each question, and collect all the answers for evaluation.
Naturally, I wanted to do the same locally via LM Studio, and it worked.
Now the question 🙋
How can I switch models in LM Studio automatically?
Or is it better to go the fully scripted route?
Having logging and options in LM Studio is very useful, and it makes reading server-side errors and troubleshooting much easier when needed. Moving all that logic into the script can be overwhelming :))
Any suggestion out there? TY for your help 🙏 | 2023-11-07T18:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/17q0u7g/automated_tests/ | fab_space | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17q0u7g | false | null | t3_17q0u7g | /r/LocalLLaMA/comments/17q0u7g/automated_tests/ | false | false | self | 2 | null |
Recency bias with Mistral 7b vs. GPT 3.5 Turbo vs. GPT 4 | 10 | Hey, does anyone have any awareness around how recency bias affects Mistral, GPT 3.5 and GPT 4. I had thought I read something that says it doesn't have as big an impact with GPT4. But I'm also interested if anyone knows what this is like with Mistral. Thanks for any insight! | 2023-11-07T17:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/17pzjjm/recency_bias_with_mistral_7b_vs_gpt_35_turbo_vs/ | Ok-Anteater-1037 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pzjjm | false | null | t3_17pzjjm | /r/LocalLLaMA/comments/17pzjjm/recency_bias_with_mistral_7b_vs_gpt_35_turbo_vs/ | false | false | self | 10 | null |
~15 tokens per second down to ~5 tokens per second after ~30 messages; even though the context size is the same. Why? | 3 | After around 30 messages between me and the bot, inference slows down substantially, even though the context size stays about the same:
llama_print_timings: load time = 2947.72 ms
llama_print_timings: sample time = 52.95 ms / 29 runs ( 1.83 ms per token, 547.65 tokens per second)
llama_print_timings: prompt eval time = 606.29 ms / 31 tokens ( 19.56 ms per token, 51.13 tokens per second)
llama_print_timings: eval time = 1598.34 ms / 28 runs ( 57.08 ms per token, 17.52 tokens per second)
llama_print_timings: total time = 2297.09 ms
Output generated in 2.54 seconds (11.04 tokens/s, 28 tokens, context 3646, seed 1814399246)
Llama.generate: prefix-match hit
llama_print_timings: load time = 2947.72 ms
llama_print_timings: sample time = 89.13 ms / 48 runs ( 1.86 ms per token, 538.53 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 2959.35 ms / 48 runs ( 61.65 ms per token, 16.22 tokens per second)
llama_print_timings: total time = 3113.24 ms
Output generated in 3.34 seconds (14.07 tokens/s, 47 tokens, context 3646, seed 2114015712)
Llama.generate: prefix-match hit
llama_print_timings: load time = 2947.72 ms
llama_print_timings: sample time = 303.40 ms / 166 runs ( 1.83 ms per token, 547.13 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 9760.81 ms / 166 runs ( 58.80 ms per token, 17.01 tokens per second)
llama_print_timings: total time = 10289.54 ms
Output generated in 10.52 seconds (15.68 tokens/s, 165 tokens, context 3646, seed 277437172)
Llama.generate: prefix-match hit
llama_print_timings: load time = 2947.72 ms
llama_print_timings: sample time = 436.51 ms / 238 runs ( 1.83 ms per token, 545.23 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 13945.23 ms / 238 runs ( 58.59 ms per token, 17.07 tokens per second)
llama_print_timings: total time = 14716.11 ms
Output generated in 14.95 seconds (15.85 tokens/s, 237 tokens, context 3646, seed 1330781120)
then, after around 30 messages:
```
llama_print_timings: load time = 2947.72 ms
llama_print_timings: sample time = 434.99 ms / 237 runs ( 1.84 ms per token, 544.84 tokens per second)
llama_print_timings: prompt eval time = 25405.14 ms / 2875 tokens ( 8.84 ms per token, 113.17 tokens per second)
llama_print_timings: eval time = 13584.05 ms / 236 runs ( 57.56 ms per token, 17.37 tokens per second)
llama_print_timings: total time = 39761.73 ms
Output generated in 39.99 seconds (5.90 tokens/s, 236 tokens, context 3583, seed 1752554349)
Llama.generate: prefix-match hit
llama_print_timings: load time = 2947.72 ms
llama_print_timings: sample time = 224.61 ms / 123 runs ( 1.83 ms per token, 547.61 tokens per second)
llama_print_timings: prompt eval time = 24489.86 ms / 2921 tokens ( 8.38 ms per token, 119.27 tokens per second)
llama_print_timings: eval time = 7018.07 ms / 122 runs ( 57.53 ms per token, 17.38 tokens per second)
llama_print_timings: total time = 31901.32 ms
Output generated in 32.14 seconds (3.80 tokens/s, 122 tokens, context 3636, seed 1205990068)
```
and it stays this slow. Very rarely it jumps back to ~15 t/s for a single reply, but then slows down again.
Why is that? Why would the number of messages be a factor if the context size is the same?
I am using the SillyTavern UI and Oobabooga as a back-end.
I already tried switching models; Mythomax 13B to Nous-Hermes 13B. Didn't help.
Both of the models are GGUF.
I am using a Mac M1 Max 64GB. I am also using llama with Metals support. My `CMD_FLAGS`:
```
--mlock --n-gpu-layers 12 --chat --no-mmap --threads 10 --api --model nous-hermes-llama2-13b.Q4_K_S.gguf
``` | 2023-11-07T16:46:47 | https://www.reddit.com/r/LocalLLaMA/comments/17pyj8k/15_tokens_per_second_down_to_5_tokens_per_second/ | peanutbutterwnutella | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pyj8k | false | null | t3_17pyj8k | /r/LocalLLaMA/comments/17pyj8k/15_tokens_per_second_down_to_5_tokens_per_second/ | false | false | self | 3 | null |
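One possible reading of the logs above: generation speed itself (the `eval time` line) is unchanged at roughly 17 t/s in both cases, and the drop comes entirely from `prompt eval time`, which suddenly re-processes ~2,875 tokens every turn. That is the signature of the chat outgrowing the context window: once the front of the context gets trimmed, the cached prefix no longer matches and llama.cpp re-evaluates the whole prompt. A small sketch using the numbers from the logs (sampling time is ignored, so results differ slightly from the printed totals):

```python
def effective_tps(prompt_eval_ms, eval_ms, n_generated):
    """Perceived tokens/sec: generation time plus any prompt re-evaluation."""
    return n_generated / ((prompt_eval_ms + eval_ms) / 1000.0)

# Early turn: prefix-match hit, essentially nothing to re-evaluate.
fast = effective_tps(0.0, 13945.23, 238)        # ~17 t/s
# Later turn: ~2,875 prompt tokens re-processed before generation starts.
slow = effective_tps(25405.14, 13584.05, 236)   # ~6 t/s
print(round(fast, 1), round(slow, 1))
```

If that is the cause, the occasional fast reply would be a turn where the truncated prefix happens to still match the cache.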
LoRa + RAG? | 1 | [removed] | 2023-11-07T15:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/17pwm2h/lora_rag/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pwm2h | false | null | t3_17pwm2h | /r/LocalLLaMA/comments/17pwm2h/lora_rag/ | false | false | self | 1 | null |
Eternal question: what rank (r) and alpha to use in QLoRA? | 17 | I've checked dozens of sources, and each one uses a different rule of thumb for choosing the rank and alpha parameters when doing (Q)LoRA. Some say alpha should be double the rank, others say alpha should be half the rank, and I've seen ranks of 8, 16, 32, 64, 128…
​
Does anyone have solid experimental results that shed some light on this?
Should I use a higher rank the harder my task is, or the more data I have?
Does it depend on the original model size? | 2023-11-07T15:01:47 | https://www.reddit.com/r/LocalLLaMA/comments/17pw7bv/eternal_question_what_rank_r_and_alpha_to_use_in/ | Exotic-Estimate8355 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pw7bv | false | null | t3_17pw7bv | /r/LocalLLaMA/comments/17pw7bv/eternal_question_what_rank_r_and_alpha_to_use_in/ | false | false | self | 17 | null |
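One detail that reconciles some of the conflicting advice in the post above: in the standard LoRA formulation the low-rank update is scaled by alpha / r, so rules like "alpha = 2r" or "alpha = r/2" are really statements about that ratio rather than about either number alone. A hedged sketch (the PEFT config values and target modules below are illustrative, not a recommendation):

```python
def lora_scaling(r, alpha):
    """Effective scale applied to the LoRA update (B @ A): alpha / r.
    Doubling r while doubling alpha keeps the update magnitude constant."""
    return alpha / r

# Pairings commonly seen in the wild, with their effective scales:
for r, alpha in [(8, 16), (16, 32), (64, 16), (128, 64)]:
    print(f"r={r:<4} alpha={alpha:<4} scale={lora_scaling(r, alpha)}")

if __name__ == "__main__":
    from peft import LoraConfig  # pip install peft

    cfg = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # module names vary per model
    )
```

Rank controls capacity (and trainable-parameter count); the alpha/r ratio controls how strongly the adapter perturbs the base weights.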
Best models for synthetic dataset creation | 5 | Hi everyone!
I am curious to hear your favourite models for creating synthetic datasets.
I have heard a lot about people using GPT-4, feeding it a sample of a dataset and having it build a much larger dataset based on it, but I am curious to know which local models you have been using instead. | 2023-11-07T14:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/17pvz0x/best_models_for_synthetic_dataset_creation/ | Slimxshadyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pvz0x | false | null | t3_17pvz0x | /r/LocalLLaMA/comments/17pvz0x/best_models_for_synthetic_dataset_creation/ | false | false | self | 5 | null |
Rather a Geforce RTX (VRAM) or a smaller Quadro RTX (power) | 2 | Hello everyone,
I am currently experimenting with Zephyr 7B alpha and I'm pretty happy with the results so far.

My current server:

12 vCPUs
64 GB RAM
NVIDIA Quadro RTX 4000

I'm running a workload of files with RAG etc. to extract information from PDFs (between 1 and 50 pages).

I now have the option to customize this server, but only on a very low budget. Does it make sense to swap the Quadro RTX 4000 for a GeForce RTX with more VRAM, and therefore larger batches, a bigger model, etc.?

What do you guys think? Does a Quadro always have the upper hand, or is a "regular" GeForce RTX 3090 better because it has more VRAM? | 2023-11-07T14:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/17pv6uk/rather_a_geforce_rtx_vram_or_a_smaller_quadro_rtx/ | Electrical-Young-360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv6uk | false | null | t3_17pv6uk | /r/LocalLLaMA/comments/17pv6uk/rather_a_geforce_rtx_vram_or_a_smaller_quadro_rtx/ | false | false | self | 2 | null |
Best factual-uncensored model for 3090 and 40gb ram? | 1 | Best factual-uncensored model for 3090 and 40gb ram(5600x cpu)?
I am not looking for something to make up stories; I just don't want a model that restricts what information it thinks is legal for me to know, or that I should be allowed to know about, like ChatGPT does when it thinks you're up to no good. | 2023-11-07T14:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/17pv6kz/best_factualuncensored_model_for_3090_and_40gb_ram/ | Boring_Assignment_33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv6kz | false | null | t3_17pv6kz | /r/LocalLLaMA/comments/17pv6kz/best_factualuncensored_model_for_3090_and_40gb_ram/ | false | false | self | 1 | null |
What is the best (uncensored) Story Teller LLM? | 1 | [removed] | 2023-11-07T14:09:37 | https://www.reddit.com/r/LocalLLaMA/comments/17pv40n/what_is_the_best_uncensored_story_teller_llm/ | Ok-Response3312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv40n | false | null | t3_17pv40n | /r/LocalLLaMA/comments/17pv40n/what_is_the_best_uncensored_story_teller_llm/ | false | false | self | 1 | null |
What is the best Story Teller LLM? | 1 | [removed] | 2023-11-07T14:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/17pv336/what_is_the_best_story_teller_llm/ | Ok-Response3312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv336 | false | null | t3_17pv336 | /r/LocalLLaMA/comments/17pv336/what_is_the_best_story_teller_llm/ | false | false | self | 1 | null |
What would be the best approach for concurrent inference of the zephyr-7b-beta GGUF model? | 2 | Hello, I want to deploy the zephyr-7b-beta GGUF model to production for RAG. There will be 40-50 concurrent users. Which Python library would help with this, and what are the GPU, CPU, and RAM requirements?
I am using the llama index as a wrapper and llama.cpp for LLM inference.
Any pointer for this would be helpful.
| 2023-11-07T14:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/17pv24j/what_would_be_the_best_approach_for_concurrent/ | gowisah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv24j | false | null | t3_17pv24j | /r/LocalLLaMA/comments/17pv24j/what_would_be_the_best_approach_for_concurrent/ | false | false | self | 2 | null |
What is the best (uncensored) Story Teller LLM? | 1 | [removed] | 2023-11-07T14:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/17pv1xn/what_is_the_best_uncensored_story_teller_llm/ | Ok-Response3312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv1xn | false | null | t3_17pv1xn | /r/LocalLLaMA/comments/17pv1xn/what_is_the_best_uncensored_story_teller_llm/ | false | false | self | 1 | null |
OpenAI's DevDay made me determined to work more with Open-Source Local Models | 181 | (details in the comments) | 2023-11-07T14:05:56 | https://www.reddit.com/r/LocalLLaMA/comments/17pv1aw/openais_devday_made_me_determined_to_work_more/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pv1aw | false | null | t3_17pv1aw | /r/LocalLLaMA/comments/17pv1aw/openais_devday_made_me_determined_to_work_more/ | false | false | self | 181 | null |
gghg ghg test? | 1 | [removed] | 2023-11-07T14:03:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 17puz9c | false | null | t3_17puz9c | /r/LocalLLaMA/comments/17puz9c/gghg_ghg_test/ | false | false | default | 1 | null | ||
What is the best (uncensored) Story Teller LLM? Despite WizardLM-SuperCOT30bUncensored - something better? | 1 | [removed] | 2023-11-07T14:02:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 17puyqg | false | null | t3_17puyqg | /r/LocalLLaMA/comments/17puyqg/what_is_the_best_uncensored_story_teller_llm/ | false | false | default | 1 | null | ||
OpenAI API Question | 1 | My current setup: textgeneration-webui running on my computer with --api --public-api. I get https://CloudFlarelink/api. On my phone, I am running SillyTavern 1.10.7 using Termux. Under SillyTavern API, I choose TextGenWebUI and choose default Oobabooga for API type. I put https://CloudFlarelink/api for Blocking API url and everything works correctly.
Yesterday, I noticed a message that the current API is deprecated and will be replaced with OpenAI API on Nov 13th.
I wanted to test the new API so I changed the flags to --extensions openai and --public-api. I got http://CloudFlarelink/v1 but I don't know what to do next. I've never used OpenAI API before.
There is an option in SillyTavern to change API to Chat Completion and choose OpenAI for API type. It's asking for API key, not the link I got from TextGenWebUI (http://CloudFlarelink/v1).
Do I need to make OpenAI account? If so, can OpenAI see the information passing through?
I put the flag back to --api and --public-api so it's working with the old API for now but clarification on what I need to do next with http://CloudFlarelink/v1 would be helpful. Thanks! | 2023-11-07T14:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/17puwvi/openai_api_question/ | pirasanna_9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17puwvi | false | null | t3_17puwvi | /r/LocalLLaMA/comments/17puwvi/openai_api_question/ | false | false | self | 1 | null |
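On the account question above: no OpenAI account is needed. The `openai` extension only speaks OpenAI's wire format; requests go straight to your own server, so OpenAI never sees the traffic. Most clients just require that some key string be present, which the local server ignores. A hedged, stdlib-only sketch (the URL and model name are placeholders standing in for your tunnel link):

```python
import json
from urllib import request

def chat_body(user_msg, model="local-model"):
    """JSON body for an OpenAI-compatible /v1/chat/completions endpoint."""
    return json.dumps(
        {"model": model, "messages": [{"role": "user", "content": user_msg}]}
    ).encode("utf-8")

if __name__ == "__main__":
    req = request.Request(
        "https://CloudFlarelink/v1/chat/completions",  # your tunnel URL + route
        data=chat_body("Hello!"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer sk-dummy",  # placeholder, not a real key
        },
    )
    with request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```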
What is the best (uncensored) Story Teller LLM? | 1 | [removed] | 2023-11-07T13:59:07 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 17puw28 | false | null | t3_17puw28 | /r/LocalLLaMA/comments/17puw28/what_is_the_best_uncensored_story_teller_llm/ | false | false | default | 1 | null | ||
What is the best Story Teller LLM? | 1 | [removed] | 2023-11-07T13:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/17puu14/what_is_the_best_story_teller_llm/ | Ok-Response3312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17puu14 | false | null | t3_17puu14 | /r/LocalLLaMA/comments/17puu14/what_is_the_best_story_teller_llm/ | false | false | self | 1 | null |
Can we talk about back and front end settings? | 30 | 2023-11-07T13:32:48 | 3m84rk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17pudcf | false | null | t3_17pudcf | /r/LocalLLaMA/comments/17pudcf/can_we_talk_about_back_and_front_end_settings/ | false | false | 30 | {'enabled': True, 'images': [{'id': 'pEbYl149bMH4h_q1N7Eb5sUPPzLtUdKc9JGNsriHL0s', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?width=108&crop=smart&auto=webp&s=074bbbd36af00be3298109b5f90974dc0ee07f4d', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?width=216&crop=smart&auto=webp&s=2426e257964b950e7549155a1c798f5bd9efd9ab', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?width=320&crop=smart&auto=webp&s=e6370a52669a43409592046c16c248707af1dc73', 'width': 320}, {'height': 609, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?width=640&crop=smart&auto=webp&s=9de86c682ca045d23ec403f35c31ce0009ab0917', 'width': 640}, {'height': 914, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?width=960&crop=smart&auto=webp&s=1bdb1742ac8857147a2a090204f7e81494d22fdf', 'width': 960}, {'height': 1028, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?width=1080&crop=smart&auto=webp&s=89b6e4780c19dfa65d16f3c49f7fcc8d63fe9531', 'width': 1080}], 'source': {'height': 1197, 'url': 'https://preview.redd.it/z15o6fthhxyb1.jpg?auto=webp&s=00a0f7790922ae6a7adcaa34e409a5a4b40c351e', 'width': 1257}, 'variants': {}}]} | |||
Deploy LLama on GPUs on different machines | 1 | Hello everyone,
I was wondering what the most efficient way is to deploy an LLM across two GPUs that sit in two physically different machines. I have two laptops, each with a 3090, that I would like to use to run the same model while exposing a REST API. Is this possible? | 2023-11-07T13:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/17pu39i/deploy_llama_on_gpus_on_different_machines/ | _ragnet_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pu39i | false | null | t3_17pu39i | /r/LocalLLaMA/comments/17pu39i/deploy_llama_on_gpus_on_different_machines/ | false | false | self | 1 | null |
How much would it cost/resources required to train a 7B model on the new RedPajama-Data-V2 30T deduped dataset? | 13 | This is the kind of model I dream of. It would push the 7B model to the absolute limit.
Even better would be training a 34B model on the dataset, as that is the limit of what a lot of people can run locally. | 2023-11-07T13:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/17pu0xc/how_much_would_it_costresources_required_to_train/ | Zyguard7777777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pu0xc | false | null | t3_17pu0xc | /r/LocalLLaMA/comments/17pu0xc/how_much_would_it_costresources_required_to_train/ | false | false | self | 13 | null |
Current best UNCENSORED model for 3090 and 40gb of ram?(plus 5600x cpu) | 25 | I am looking for recommendations on the current best uncensored models for my set up with a 3090, 40gb of ram and a 5600x cpu.
Also, I have two of this exact PC; would I be able to easily benefit from using both together somehow? I used to have both GPUs in the same desktop for mining but have since moved them to separate PCs. | 2023-11-07T12:59:54 | https://www.reddit.com/r/LocalLLaMA/comments/17ptqt1/current_best_uncensored_model_for_3090_and_40gb/ | Boring_Assignment_33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ptqt1 | false | null | t3_17ptqt1 | /r/LocalLLaMA/comments/17ptqt1/current_best_uncensored_model_for_3090_and_40gb/ | false | false | self | 25 | null |
Facing problems with Llama-2-7b-chat-hf and langchain | 1 | [removed] | 2023-11-07T12:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/17pt7jh/facing_problems_with_llama27bchathf_and_langchain/ | Relative_Winner_4588 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pt7jh | false | null | t3_17pt7jh | /r/LocalLLaMA/comments/17pt7jh/facing_problems_with_llama27bchathf_and_langchain/ | false | false | self | 1 | null |
Best way for multiple nodes Training | 1 | [removed] | 2023-11-07T11:52:33 | https://www.reddit.com/r/LocalLLaMA/comments/17pskxp/best_way_for_multiple_nodes_training/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pskxp | false | null | t3_17pskxp | /r/LocalLLaMA/comments/17pskxp/best_way_for_multiple_nodes_training/ | false | false | self | 1 | null |
Has anyone use Snapdragon 8 gen 2 phones to run 7B/10B models? | 35 | According to the Qualcomm event the new Snapdragon 8 gen 3 could run 10b models with 20 token/sec, which makes me wonder how fast the Snapdragon 8 gen 2 could run models, has anyone used the 8 gen 2 to run 7B/10B models on their phones and how fast the token/sec it runs? | 2023-11-07T10:51:34 | https://www.reddit.com/r/LocalLLaMA/comments/17prnbx/has_anyone_use_snapdragon_8_gen_2_phones_to_run/ | hoseex999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17prnbx | false | null | t3_17prnbx | /r/LocalLLaMA/comments/17prnbx/has_anyone_use_snapdragon_8_gen_2_phones_to_run/ | false | false | self | 35 | null |
Prompts to fine-tune a story-writing model? | 1 | [removed] | 2023-11-07T09:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/17pqo2m/prompts_to_finetune_a_storywriting_model/ | Guakatiko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pqo2m | false | null | t3_17pqo2m | /r/LocalLLaMA/comments/17pqo2m/prompts_to_finetune_a_storywriting_model/ | false | false | self | 1 | null |
CogVLM: Visual Expert for Pretrained Language Models | 38 | 2023-11-07T09:31:48 | https://arxiv.org/abs/2311.03079 | zyunztl | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17pqjo8 | false | null | t3_17pqjo8 | /r/LocalLLaMA/comments/17pqjo8/cogvlm_visual_expert_for_pretrained_language/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Distilling LLM | 11 | I have a thought.
How can we distill large LLMs into smaller ones? What if we take the weights of something like Llama 2 70B, create a 7B model with the same architecture (without pre-training it), and just take the average of every 10 weights from the 70B model and assign it to a parameter of the newly created 7B model (repeating the process for the rest of the parameters)?
The thing to note here is that we would keep the precision of the new model at 16-bit or 32-bit. I'm thinking this would perform better than quantizing the values.
Would love to hear thoughts on this, and whether it is something interesting. | 2023-11-07T08:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/17ppjxd/distilling_llm/ | Independent_Key1940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ppjxd | false | null | t3_17ppjxd | /r/LocalLLaMA/comments/17ppjxd/distilling_llm/ | false | false | self | 11 | null |
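For concreteness, the averaging scheme proposed above would look like the toy sketch below. Worth flagging: this is not how distillation is normally done. Neighbouring weights in a transformer are not interchangeable, so plain averaging would very likely destroy the model; standard distillation instead trains the small model to match the large model's outputs (soft labels). Still, the mechanical idea:

```python
def shrink_weights(weights, factor=10):
    """Average each run of `factor` consecutive weights into one parameter
    of the smaller model (toy version of the proposal, on a flat list)."""
    return [
        sum(chunk) / len(chunk)
        for chunk in (weights[i:i + factor] for i in range(0, len(weights), factor))
    ]

print(shrink_weights([float(x) for x in range(10)], factor=5))  # [2.0, 7.0]
```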
Context between calls to local llms | 3 | Hi, my company is setting up our own local LLM, and I've been assuming that when we call it, every call will be disconnected from previous calls. No memory. Similar to an API call to GPT-3.5. Is this true? Or is there some way for the LLM to remember the conversation, similar to the web versions of ChatGPT Plus and Claude 2? | 2023-11-07T08:02:31 | https://www.reddit.com/r/LocalLLaMA/comments/17ppfku/context_between_calls_to_local_llms/ | Ok_Relationship_9879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ppfku | false | null | t3_17ppfku | /r/LocalLLaMA/comments/17ppfku/context_between_calls_to_local_llms/ | false | false | self | 3 | null |
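On the question above: yes, by default every call is stateless. The chat products "remember" by resending the whole transcript with each request, and the same trick works against a local LLM's API. A minimal sketch:

```python
def remember(history, user_msg, assistant_reply):
    """Chat 'memory' is just the transcript you resend on the next call."""
    return history + [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": assistant_reply},
    ]

history = [{"role": "system", "content": "You are a helpful assistant."}]
# After each round trip, fold the exchange back into the transcript...
history = remember(history, "What's our deploy process?", "(model reply here)")
# ...and send the full `history` as the `messages` field on the next call.
print(len(history))  # 3
```

The practical limit is the model's context window, after which you have to truncate or summarize older turns.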
Facing problems with Llama-2-7b-chat-hf and langchain | 1 | I want to create an application that uses llama-2-7b-chat-hf to chat with PDFs.
Steps I followed are in the colab notebook below -
[https://colab.research.google.com/drive/1SS4JWvFb\_SapmDMedcHZvRgs1Bd2MjZQ?usp=sharing](https://colab.research.google.com/drive/1SS4JWvFb_SapmDMedcHZvRgs1Bd2MjZQ?usp=sharing)
I know it's a mess, but after all that I did, the problems I faced are -
1) When we retrieve the text using RetrievalQA, we don't get properly similar text. I don't know if the PDFs I am giving it are difficult to read, or if the retriever/embedding model just isn't good at finding the text relevant to the question prompt.
2) The vector store takes a huge amount of memory. When we embed our data and store it in a vector store, it takes up RAM instead of drive space. How do we store the vector DB on disk?
3) As I keep sending prompts, the size of the vector DB keeps increasing to the point where we get the error - CUDA out of memory. Why is the size of the vector DB increasing when we query the model? Is it storing all the prompts and answers in the DB again? Or is it because of the buffer memory? Please explain.
4) Once we create the PromptTemplate, how do we verify that the full prompt with the proper context is actually being passed to the model? As you can see, when I query the model, all it returns are the query, the result, and the source chunk, so I am not sure the code I wrote is feeding the model the proper prompt.
5) Also, is it recommended to give memory to the model or just use context to get the answers?
6) I think the model I am using is not good at calculation, but I believe LangChain provides tools that can be integrated into the chain. How do I implement them?
And the biggest problem of all: the model is not giving accurate results. I am confused about whether it's because it gets an irrelevant chunk of data, my bad prompt, or some other reason.
I know this is a lot and I am sorry if I might be asking some very silly questions, but I am genuinely curious about all this.
Please guide me for this project, and if you can't access the notebook, you can DM me for more details. Thank you | 2023-11-07T07:15:16 | https://www.reddit.com/r/LocalLLaMA/comments/17post6/facing_problems_with_llama27bchathf_and_langchain/ | Relative_Winner_4588 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17post6 | false | null | t3_17post6 | /r/LocalLLaMA/comments/17post6/facing_problems_with_llama27bchathf_and_langchain/ | false | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fb0c17a19cd20b9247fc7e433a3d688b37fd8cfd', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=977fa9aeb74615a62c5828fcedcb58273f0dfa7f', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?blur=40&format=pjpg&auto=webp&s=d03aa766616e3fc480ccf72027f436e590cf75b8', 'width': 260}}}}]} |
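A note on questions 1 and 2 above: real vector stores (Chroma, FAISS, etc.) typically support persisting their index to disk so it does not have to live entirely in RAM, and retrieval itself is just nearest-neighbour search over embedding vectors. Below is a minimal stdlib-only sketch, using toy vectors and a made-up file path rather than LangChain's actual API, of a store that persists to disk and retrieves by cosine similarity:

```python
import json
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TinyVectorStore:
    def __init__(self, records=None):
        # Each record is a (chunk_text, embedding_vector) pair.
        self.records = records or []

    def add(self, text, vector):
        self.records.append((text, vector))

    def search(self, query_vector, k=1):
        # Rank stored chunks by cosine similarity to the query vector.
        ranked = sorted(self.records,
                        key=lambda r: cosine(r[1], query_vector),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

    def save(self, path):
        # Persist the index to disk (JSON here) so it survives restarts
        # and does not have to be held in RAM.
        with open(path, "w") as f:
            json.dump(self.records, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls([tuple(r) for r in json.load(f)])
```

If a toy store like this retrieves the wrong chunk for an obvious query, the problem is in the embeddings or the chunking, not the store; the same diagnosis applies to RetrievalQA.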
The truth about Llama | 1 | [removed] | 2023-11-07T06:56:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 17pojsk | false | null | t3_17pojsk | /r/LocalLLaMA/comments/17pojsk/the_truth_about_llama/ | false | false | default | 1 | null | ||
Mistral-7B training/fine-tuning | 5 | Hello,
I plan to continue training the Mistral-7B base model to make it a bit better at Polish. It is already quite fluent, so it could become a solid base model for various downstream tasks in the future. I have access to 4x A100 GPUs.
​
My plan is to train it on the same datasets that were used to train HerBERT (an older Polish BERT model).
​
Mistral-7B can consume an 8k context. Does that mean my input chunks should also be 8k tokens long? | 2023-11-07T06:45:41 | https://www.reddit.com/r/LocalLLaMA/comments/17poerg/mistral7b_trainingfinetuning/ | zibenmoka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17poerg | false | null | t3_17poerg | /r/LocalLLaMA/comments/17poerg/mistral7b_trainingfinetuning/ | false | false | self | 5 | null
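On the chunk-length question: the 8k context is a maximum, not a required input length. For continued pretraining, the usual approach is to tokenize the corpus, concatenate everything, and slice it into fixed-size blocks at (or near) the model's max length so no compute is wasted on padding. A minimal sketch of that packing step, with toy token IDs and a shrunken block size for illustration (with Mistral-7B you would use 8192 and its own tokenizer):

```python
def pack_into_blocks(token_streams, block_size):
    """Concatenate tokenized documents and slice them into fixed-size
    training blocks; a trailing remainder shorter than block_size is
    dropped (standard causal-LM packing)."""
    flat = [tok for stream in token_streams for tok in stream]
    n_full = len(flat) // block_size
    return [flat[i * block_size:(i + 1) * block_size]
            for i in range(n_full)]
```

With real data you would usually insert an EOS token between documents before packing, so the model learns document boundaries.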
GitHub - FraunhoferISST/diva: Scalable data management system with an AI powered profiling and metadata enrichment | 1 | Did anyone of you ever tried this and know something similar or better? | 2023-11-07T05:36:19 | https://github.com/FraunhoferISST/diva | Deep-View-2411 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17pneqb | false | null | t3_17pneqb | /r/LocalLLaMA/comments/17pneqb/github_fraunhoferisstdiva_scalable_data/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RyiWGV-qTdIVIGRGjjlmD_KOXVKIJjFm7rHetRzvg9g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?width=108&crop=smart&auto=webp&s=32859780cf62330c89053025f9400feada57cece', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?width=216&crop=smart&auto=webp&s=525976e866059893df4ba45337e5b9859826e6b9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?width=320&crop=smart&auto=webp&s=965bbb5d94c8ef53b04b06bf72e216257668c9e5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?width=640&crop=smart&auto=webp&s=d1451806251cb95f539c7751e6a1363210819cde', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?width=960&crop=smart&auto=webp&s=043c6edfbd41a797150cf12eea54a2b1ac71d1fe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?width=1080&crop=smart&auto=webp&s=ea1ff2769030b97eedffddad44ebd54ad56e8336', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cN6R40HW9upcwVL8xm6_MEK9sLih3WFdJZf0x3Q-svs.jpg?auto=webp&s=68121a88916f4cbbd2e3161394eb6fac5395055a', 'width': 1200}, 'variants': {}}]} | |
Anyone have a tutorial/guide on how to train a coding LLM to handle a particular framework api? | 2 | I simply want to take a Python coding LLM, give it a popular Python framework, and have it help me code in that framework.
What is the secret-sauce methodology to make this happen? | 2023-11-07T05:35:13 | https://www.reddit.com/r/LocalLLaMA/comments/17pne2e/anyone_have_a_tutorialguide_on_how_to_train_a/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pne2e | false | null | t3_17pne2e | /r/LocalLLaMA/comments/17pne2e/anyone_have_a_tutorialguide_on_how_to_train_a/ | false | false | self | 2 | null
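There is no single secret sauce, but one common recipe is: (1) build a dataset of instruction/completion pairs grounded in the framework's own code and docs, then (2) fine-tune the coding LLM on it (for example with LoRA). Step 1 can be bootstrapped mechanically. A stdlib-only sketch that harvests (name, signature, docstring) triples from a module, raw material you would then turn into prompts like "Write a call to X that does Y":

```python
import inspect

def harvest_doc_pairs(module):
    """Collect (qualified_name, signature, docstring) triples for the
    public functions of a module -- raw material for building
    fine-tuning prompt/completion pairs about a framework's API."""
    pairs = []
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        doc = inspect.getdoc(obj)
        if not doc:
            continue  # nothing to learn from without a docstring
        sig = str(inspect.signature(obj))
        pairs.append((f"{module.__name__}.{name}", sig, doc))
    return pairs
```

Running it over a framework's modules gives you the API surface; pairing each entry with generated questions and verified answers is what actually teaches the model the framework.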
Can someone please explain what is LlamaIndex, its purposes, and the differences with LangChain? | 5 | I joined this sub yesterday and am coming across mentions of LlamaIndex often. I was wondering what it is and what purpose it is serving. Also, how does it differ from LangChain? | 2023-11-07T05:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/17pn7vg/can_someone_please_explain_what_is_llamaindex_its/ | AMGraduate564 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pn7vg | false | null | t3_17pn7vg | /r/LocalLLaMA/comments/17pn7vg/can_someone_please_explain_what_is_llamaindex_its/ | false | false | self | 5 | null |
How to update context_window here? | 1 | [removed] | 2023-11-07T04:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/17pmat0/how_to_update_context_window_here/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pmat0 | false | null | t3_17pmat0 | /r/LocalLLaMA/comments/17pmat0/how_to_update_context_window_here/ | false | false | 1 | null | |
MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning | 22 | 2023-11-07T04:12:36 | https://github.com/codefuse-ai/MFTCOder | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17pm2ao | false | null | t3_17pm2ao | /r/LocalLLaMA/comments/17pm2ao/mftcoder_boosting_code_llms_with_multitask/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'PW8FLeSTRZJR_NMtK6_LXvQV3lIt3XsS2HgCZE7Ij44', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?width=108&crop=smart&auto=webp&s=4d9f4d649c762a0601c4ed6b56a936cca2f177c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?width=216&crop=smart&auto=webp&s=1be183415b4434e476cff44016cc5f9db3afc474', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?width=320&crop=smart&auto=webp&s=ac9b9b5ba3e922cead1df61ee5c804d05c485fb6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?width=640&crop=smart&auto=webp&s=5f003efa5714a611e39b0b05fe904e365d0ae4d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?width=960&crop=smart&auto=webp&s=3bf4623db5e48cc39354ec23e28fd43ab73dad47', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?width=1080&crop=smart&auto=webp&s=815b6e9186d066f4717529c04d4ea7bd271a65dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AgcHTjRFgYREr4W6-Be973t7mO78VUwPkqbCqxtV8Xs.jpg?auto=webp&s=16c068366ce3ce17c65b5896af9a53a879adb7fc', 'width': 1200}, 'variants': {}}]} | ||
NVIDIA Jetson AGX Orin 64GB Developer Kit | 1 | After stressful research on laptops for local LLM work, I am planning to buy the NVIDIA dev kit mentioned in the headline. I do have a laptop with 32 GB RAM and an RTX 4060, but it's only good for 3B-parameter models.
Does anyone have suggestions/recommendations/warnings about the NVIDIA Jetson AGX Orin 64GB Developer Kit? | 2023-11-07T03:54:11 | https://www.reddit.com/r/LocalLLaMA/comments/17plr3g/nvidia_jetson_agx_orin_64gb_developer_kit/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17plr3g | false | null | t3_17plr3g | /r/LocalLLaMA/comments/17plr3g/nvidia_jetson_agx_orin_64gb_developer_kit/ | false | false | self | 1 | null
Can Ollama run in parallel / My benchmark / Can it go faster? | 1 | So I had this ambitious idea to run Ollama locally, build out a UI, tunnel out my local llama to my app, and have a nice little app running. My computer has a very capable CPU but a crappy GPU from my crypto HDD mining days.
CPU: AMD Ryzen Threadripper 3970X 32-core
GPU: Nvidia GTX 1080
RAM: 256GB
When running Llama2 7B@latest with Ollama, I'm getting 38 tokens/second. I thought I could run multiple queries in parallel, but it didn't work when I opened two terminals: the first has to finish before the second runs. So there goes my idea of running it as a service...
Questions:
1) Is there any way to run Ollama in parallel?
2) Would getting a better GPU increase the tokens/second? I'm not sure whether it's currently leveraging the CPU or the GPU. I see benchmarks [here](https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference), but can someone help me interpret them? I'm running at 38 tokens/second, which is roughly 1000 ms / 38 tokens ≈ 26 ms/token. Looking at the benchmark table, a 3090 24GB can run **7B_q4_0** at 8.83 ms/token. However, it's confusing because that table also lists 3090 24GB × 2 at 17.95 ms/token. How does having two 3090s make it slower than having one? Also, would it be correct to assume getting a 3090 would boost my tokens/second?
3) Lastly... is the number of CPUs or high RAM in my setup even doing anything or is the high RAM needed for AI models mainly related to GPU RAM? | 2023-11-07T03:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/17pledu/can_ollama_run_in_parallel_my_benchmark_can_it_go/ | w0ngz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pledu | false | null | t3_17pledu | /r/LocalLLaMA/comments/17pledu/can_ollama_run_in_parallel_my_benchmark_can_it_go/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CE6qIRahagW3LezMgEIIv6vP2F30Y7UwT0GHr0Xeif8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?width=108&crop=smart&auto=webp&s=1c090cfb4255a76c588b99efa1b8cbe434ee889b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?width=216&crop=smart&auto=webp&s=bec057c4b25c45a168688bf7857e2b85d34faf7f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?width=320&crop=smart&auto=webp&s=d5eeabdc3ca6e93c28f4fee34f92af734b745517', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?width=640&crop=smart&auto=webp&s=09d89d3ede697a237822122f2c04cf336a0c226d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?width=960&crop=smart&auto=webp&s=b3667d2d748271244b6bd4973347263b034e6295', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?width=1080&crop=smart&auto=webp&s=8540dd07e2aed2868d5737f951c7631945ff0288', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8yTBZ8WNyAv8uOw_MghdYI1Amsjyg6BPtf37zm2vwoo.jpg?auto=webp&s=52d9b4d0df4a948f3080237a73f72866ed9d9173', 'width': 1200}, 'variants': {}}]} |
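On the benchmark arithmetic in question 2: the two units are reciprocals scaled by 1000, so 38 tokens/s ≈ 26.3 ms/token, and the table's 8.83 ms/token for a single 3090 corresponds to ≈113 tokens/s. (The 2×3090 figure being slower is plausible because splitting a model that already fits on one card across two adds inter-GPU communication overhead.) A tiny stdlib helper for the conversion:

```python
def ms_per_token(tokens_per_sec):
    """Convert throughput (tokens/s) to latency (ms/token)."""
    return 1000.0 / tokens_per_sec

def tokens_per_sec(ms_per_tok):
    """Inverse conversion: latency (ms/token) back to throughput."""
    return 1000.0 / ms_per_tok
```

So comparing your 26 ms/token against a published ms/token figure is a fair apples-to-apples comparison.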
What if… arrakis did fail? | 2 | Today GPT4 Turbo was released, so what about the rumor that project arrakis was GPT4 Turbo and underperformed? There was also a rumor that it was scrapped. So was it all fake news?
What if….
Arrakis was indeed about making GPT4 Turbo faster and cheaper. Now GPT4 Turbo is not actually faster; Altman said that they are focusing on cost and that speed is next… but surely they tried tackling both, given their ambition? What if arrakis did fail? And OpenAI still lowers the price to give the impression of cost improvements without actually achieving that level of progress? They can essentially eat the cost until the cost savings catch up, like a gaming console that loses money early in its life cycle to gain adoption. Not to save face or cover up a setback, but to accelerate adoption?
Do you think the rumored arrakis setback was all fake news? | 2023-11-07T03:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/17pl7q3/what_if_arrakis_did_fail/ | GoTrojan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pl7q3 | false | null | t3_17pl7q3 | /r/LocalLLaMA/comments/17pl7q3/what_if_arrakis_did_fail/ | false | false | self | 2 | null |
Fine-tuning through prompts? | 1 | So at work, during a discussion on fine-tuning, my colleague said that by supplying a local LLM with prompts instead of data, one can fine-tune the LLM to handle those kinds of prompts and enable it to produce the same kind of output. Is this true? | 2023-11-07T02:59:03 | https://www.reddit.com/r/LocalLLaMA/comments/17pkq34/finetuning_through_prompts/ | RAIV0LT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pkq34 | false | null | t3_17pkq34 | /r/LocalLLaMA/comments/17pkq34/finetuning_through_prompts/ | false | false | self | 1 | null
What Are The Top Groups Working On Open Source A.G.I.? | 1 | * Ben Goertzel (iirc he's the progenitor of the term "A.G.I.") is open about what he's building
* Hawkins and his Thousand Brains hypothesis, and he's built OSS software
* Giulio Tononi has IIT, which isn't AI, but an attempt to mathematically describe consciousness nonetheless
What other groups are working on it, at any size/scale/technical ability/crackpot-theory level/etc.?
GitHub repos? Slack/Discord communities? University groups? Corporate groups? Meetups? | 2023-11-07T02:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/17pjmfz/what_are_the_top_groups_working_on_open_source_agi/ | BayesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17pjmfz | false | null | t3_17pjmfz | /r/LocalLLaMA/comments/17pjmfz/what_are_the_top_groups_working_on_open_source_agi/ | false | false | self | 1 | null