| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Train model to answer questions about large text corpus | 4 | I have a large corpus of text and I'm trying to figure out the best way to train a model to answer questions about it. Ideally it would be able to reference the file (or even the section within it) that an answer is based on.
I know this is a very hard problem to solve so just looking to brainstorm potential avenues here!
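One avenue worth weighing against training is retrieval: embed the corpus in chunks, keep per-chunk file/section metadata, and have the model answer from the retrieved chunks so the source can be cited. A minimal sketch (the embedding model and chunk layout here are assumptions, not a recommendation):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each chunk remembers where it came from, so answers can cite it.
chunks = [
    {"file": "corpus/ch1.txt", "section": "intro", "text": "..."},
    {"file": "corpus/ch2.txt", "section": "methods", "text": "..."},
]
corpus_emb = model.encode([c["text"] for c in chunks], convert_to_tensor=True)

def retrieve(question: str, k: int = 3):
    q_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, corpus_emb, top_k=k)[0]
    return [chunks[h["corpus_id"]] for h in hits]

# The retrieved chunks (with their file/section fields) go into the LLM
# prompt, and the file names come back as the answer's references.
```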
Thanks :) | 2023-10-07T10:15:15 | https://www.reddit.com/r/LocalLLaMA/comments/1722x86/train_model_to_answer_questions_about_large_text/ | joandadg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1722x86 | false | null | t3_1722x86 | /r/LocalLLaMA/comments/1722x86/train_model_to_answer_questions_about_large_text/ | false | false | self | 4 | null |
How to handle summarization of sub-topics using a document after chunking. | 1 | [removed] | 2023-10-07T09:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/1722kq4/how_to_handle_summarization_of_subtopics_using_a/ | vile_proxima | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1722kq4 | false | null | t3_1722kq4 | /r/LocalLLaMA/comments/1722kq4/how_to_handle_summarization_of_subtopics_using_a/ | false | false | self | 1 | null |
[SEEKING ADVICE] Looking for Existing Repos (Open-Source, VM-Hosted, & GPU-Compatible) | 5 | Greetings,
I'm on the hunt for an existing repository that meets the following criteria:
1. Content Collection: Capability to read and extract text from multiple document formats, such as PDF and DOCX files.
2. Content Reformulation: After text extraction, the ability to rephrase the content in a specific style that I'll provide.
3. OCR Support: Integration of Optical Character Recognition (OCR) capabilities to capture text from images and scanned documents.
4. Multilingual Support: Must function seamlessly in both Arabic and English languages.
5. Open-Source Availability: The script should be publicly available for contributions and ongoing development on GitHub.
6. VM & GPU Compatibility: I don't have a GPU and plan to rent one. The script should be compatible with rental GPU resources. Additionally, I'm looking for advice on reliable VM rental services where the script can operate.
7. Installation & Configuration: The script should ideally come with guidelines for installation, setup, and configuration.
8. Documentation: Comprehensive guidelines should be available to explain the script's setup and usage.
9. Programming Language: Python is my preferred choice, but I'm open to other languages if they meet the project requirements more effectively.
10. Timeline: I have a flexible schedule but would like to know the estimated time needed for setup and customization.
Existing Solutions:
I've stumbled upon [h2ogpt](https://github.com/h2oai/h2ogpt) as a potential starting point. Are there better solutions or repositories that can meet these requirements?
To Suggest:
If you're aware of an existing repository that meets these criteria, please comment below or send me a DM with your suggestions and estimated timeline for setup and customization.
Thank you for your time, and I look forward to your insightful suggestions! | 2023-10-07T09:47:35 | https://www.reddit.com/r/LocalLLaMA/comments/1722hgl/seeking_advice_looking_for_existing_repos/ | SociallyF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1722hgl | false | null | t3_1722hgl | /r/LocalLLaMA/comments/1722hgl/seeking_advice_looking_for_existing_repos/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'UdD1knbC4cRmQVFvN1DUIenSmYwGYd-d8pxwB2oW-Sw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?width=108&crop=smart&auto=webp&s=2b2457f92820df9f133e2ac03d365997ee326ec2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?width=216&crop=smart&auto=webp&s=f313902f4b3357e608e40085268d2553b6811057', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?width=320&crop=smart&auto=webp&s=1e5f1215a69a0d16351dcca5b1a58e84d45d2bc3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?width=640&crop=smart&auto=webp&s=fc75a921d834b4a7fe42d6939eaac3aad3df9d80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?width=960&crop=smart&auto=webp&s=2976fd93ded3043147c1ddb8d05af5f70db56ffd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?width=1080&crop=smart&auto=webp&s=6fb832c0582c608534a10df812c4495c759146b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t0jojkb5dh-rbN5LghlrqUMm763fgwvjXXDfxLL5hJs.jpg?auto=webp&s=f1fa9ab3ca9903914a13ba0db5060288f00bc0dd', 'width': 1200}, 'variants': {}}]} |
Error using trained LoRAs in llamacpp | 5 | I used this command:
./finetune --model-base /Users/yuki/Downloads/models/pygmalion-2-7b.Q5_K_M.gguf --train-data /Users/yuki/Downloads/datasets/test.txt --threads 14 --lora-out lora.gguf --save-every 1 --sample-start "<s>" --include-sample-start --seed 1 --use-flash --lora-alpha 4 --lora-r 4 --adam-alpha 0.001 --adam-iter 256 --rope-freq-scale 1.0
And the txt file looks like this:
<s>Yuki: Hi there.
Yuna: Hello. How are you?</s>
No matter which parameters I use, when I then try to merge this LoRA into the model or use it for inference in koboldcpp, I get this error:
(base) yuki@yuki llama.cpp % ./export-lora --model-base '/Users/yuki/Downloads/models/pygmalion-2-7b.Q5_K_M.gguf' --model-out yuna.gguf --lora /Users/yuki/llama.cpp/lora.gguf
warning: Could not open lora adapter '/Users/yuki/llama.cpp/lora.gguf'. Ignoring this adapter.
warning: no lora adapters will be applied.
Or, if I use a checkpoint instead of the final LoRA (which is probably larger):
unexpected lora header file magic in '/Users/yuki/llama.cpp/checkpoint-1.gguf' | 2023-10-07T06:38:30 | https://www.reddit.com/r/LocalLLaMA/comments/171zgpi/error_using_trained_loras_in_llamacpp/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171zgpi | false | null | t3_171zgpi | /r/LocalLLaMA/comments/171zgpi/error_using_trained_loras_in_llamacpp/ | false | false | self | 5 | null |
Phind 2 is seriously brain damaged, or am I doing something wrong? | 1 | I had seen Phind called the best coding model, so I tried it, and the results are really bad. It cannot even answer simple git questions.
```
ollama run phindq5
>>> I want to sort branches by latest commit. can you give me git command for that
?
To set up a Google Cloud Storage and interact with it via Python, follow
these steps:
1. Create a Google Cloud project (if you don't have one):
- Go to the Google Cloud Console
- Click on the project drop-down in the top right corner
- Search for an an existing "new" project
2. Set up upGoogle Cloud Storage:
- Go to the Google Cloud Console^
```
Here is my configuration:
```
FROM ./phind.gguf
PARAMETER num_gpu 30
PARAMETER num_thread 3
PARAMETER num_ctx 8192
# a repeat_penalty of 2 is very aggressive and can itself garble output;
# values around 1.1 are more typical
PARAMETER repeat_penalty 2
TEMPLATE """
### System Prompt
{system_message}
### User Message
{prompt}
### Assistant
"""
```
I am running using ollama
Here is a comparison with Mistral 7B:
```
ollama run mistral
>>> I want to sort branches by latest commit. can you give me git command for that?
You can use the following Git command to sort your branches by their latest
commit:
```bash
git branch --sort=committerdate@<(git rev-parse HEAD)^@<(git rev-parse
--symbolic-full-name @{u})
```
This command sorts the local branches in descending order based on their
last commit date.
Here's how it works:
1. `git branch`: lists all local branches.
2. `--sort=committerdate@<(git rev-parse HEAD)^@<(git rev-parse
--symbolic-full-name @{u})`: sorts the branches based on their commit date
in descending order. The `committerdate` is used as the sort key, and it is
calculated using two Git commands:
* `git rev-parse HEAD`: returns the hash of the current branch (i.e., the
latest commit).
* `<(git rev-parse HEAD)^@<(git rev-parse --symbolic-full-name @{u})`: cal
calculates a value that is used as the sort key. The `<()` syntax creates a
command substitution for the following Git command: `git rev-parse HEAD`,
which returns the hash of the current branch. This value is then followed by
an `@` symbol, and finally by the full name (symbolic or long) of the
upstream branch (`@u`).
For example, if you have two branches, `branch1` and `branch2`, with commit
dates 2023-02-01 and 2023-01-31 respectively, running this command would
display `branch2` first, followed by `branch1`.
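# (Side note: neither model's answer here is valid git syntax. The real
#  one-liner is `git branch --sort=-committerdate`, or
#  `git for-each-ref --sort=-committerdate refs/heads/`.)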
``` | 2023-10-07T05:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/171ybly/phind_2_is_seriously_brain_damanged_or_i_am_doing/ | 0xPark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ybly | false | null | t3_171ybly | /r/LocalLLaMA/comments/171ybly/phind_2_is_seriously_brain_damanged_or_i_am_doing/ | false | false | self | 1 | null |
I made a custom webUI for tinyLlama! | 1 | [removed] | 2023-10-07T05:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/171xvdk/i_made_a_custom_webui_for_tinyllama/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171xvdk | false | null | t3_171xvdk | /r/LocalLLaMA/comments/171xvdk/i_made_a_custom_webui_for_tinyllama/ | false | false | self | 1 | null |
LLM Specialized Linux Distro | 1 | Is there any Linux distro that comes ready for LLM inference, training, and fine-tuning out of the box? I'm hoping for something like the HiveOS distro for crypto mining: right after installing it to an SSD, it's ready to download models from a Hugging Face repo. | 2023-10-07T04:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/171xmon/llm_specialized_linux_distro/ | koesn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171xmon | false | null | t3_171xmon | /r/LocalLLaMA/comments/171xmon/llm_specialized_linux_distro/ | false | false | self | 1 | null |
Pandalyst Files | 2 | Can anyone provide the files from Pandalyst's GitHub repo? They may have made the repo private. | 2023-10-07T03:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/171w9eq/pandalyst_files/ | Outside_Solid5371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171w9eq | false | null | t3_171w9eq | /r/LocalLLaMA/comments/171w9eq/pandalyst_files/ | false | false | self | 2 | null |
LLM Labeling UI: Open Source data labeling tool for LLM | 1 | [removed] | 2023-10-07T02:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/171visj/llm_labeling_ui_open_source_data_labeling_tool/ | Disastrous_Expert_22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171visj | false | null | t3_171visj | /r/LocalLLaMA/comments/171visj/llm_labeling_ui_open_source_data_labeling_tool/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zDSkTyoJFOmpA67Fo5QqtLryAGDJOszH5cH86G3uFtU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?width=108&crop=smart&auto=webp&s=b8d91bc409cabcaefee339b3b48b72a84388ed06', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?width=216&crop=smart&auto=webp&s=ef88b03b9e031ce46ac91024b449298e0a7e0d01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?width=320&crop=smart&auto=webp&s=f03cf24781399d544d21db98e18cbaf42a884b01', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?width=640&crop=smart&auto=webp&s=a9321659d266cce4e91a357decb33d1f997f5259', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?width=960&crop=smart&auto=webp&s=92086e77777627d25e04b827f86bbd3b7e14399f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?width=1080&crop=smart&auto=webp&s=763f7afdfeabee0d2155d4cb7eac3b5ffe6d77ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dzSddicP7WyXJEA6uWsc7jFFMzwgPpiv1fCbFIdCBPA.jpg?auto=webp&s=12ce9957b547f90b559135ae17b7365837b11e47', 'width': 1200}, 'variants': {}}]} |
Release of AI Dataset STC230908 | 40 |
As a replacement for the [taken down](https://torrentfreak.com/anti-piracy-group-takes-prominent-ai-training-dataset-books3-offline-230816/) dataset books3, which was [used to train](https://torrentfreak.com/authors-accuse-openai-of-using-pirate-sites-to-train-chatgpt-230630/) major GPT models, we are releasing our own dataset STC230908. It features approximately **75,000 book texts, 1.3 million scholarly paper texts and 24 million abstracts**, including material from the years **2021 to 2023**.
This update will finally allow OpenAI's ChatGPT models to answer questions about events that occurred after 2021. ***😉***
We are already in the process of preparing the next version, which will include an additional 300,000 books, making it the largest dataset of its kind.
**How to Access**
*Short Instructions:*
\- [Install IPFS](https://docs.ipfs.tech/install/ipfs-desktop/) and launch it.
\- `pip3 install stc-geck && geck - documents`
*Detailed Instructions:* [STC GitHub Repository](https://github.com/nexus-stc/stc/tree/master/geck) | 2023-10-07T02:53:43 | https://www.reddit.com/r/LocalLLaMA/comments/171vi1b/release_of_ai_dataset_stc230908/ | imperiallearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171vi1b | false | null | t3_171vi1b | /r/LocalLLaMA/comments/171vi1b/release_of_ai_dataset_stc230908/ | false | false | self | 40 | null |
Do you think Orca, ToRA, and Phi-1.5 are all OpenAI's experiments in making synthetic data for GPT-5? | 67 | A lot of papers coming out of Microsoft right now show models destroying benchmarks or just being really powerful for their size, and all of them made large datasets of high-quality synthetic GPT-4 data. Orca used the student-teacher format, showing that synthetic data can be good and that models can get better from the data of other models. Phi-1.5 also uses GPT-4 data, and it's really strong. ToRA, the new Microsoft model, per its paper (https://browse.arxiv.org/pdf/2309.17452.pdf), used GPT-4 to make a Toolformer-like dataset for math tools like SymPy, and it achieved GPT-4-level accuracies in math, crushing all SOTA open-source models.
All of these methods probably will be (and likely already are being) folded into whatever corpus is going to train GPT-5. How much of it do you think will be synthetic? | 2023-10-07T01:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/171u5dm/do_you_think_orca_tora_and_phi15_are_all_openais/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171u5dm | false | null | t3_171u5dm | /r/LocalLLaMA/comments/171u5dm/do_you_think_orca_tora_and_phi15_are_all_openais/ | false | false | self | 67 | null |
Making an LLM machine: need tips on CPU cooling and RAM | 1 | My school asked me to help build an LLM machine.
They want to make 3 separate ones, with four 3060 Tis in each one.
They have z490 motherboards.
I suggested 128GB of RAM for each, and i9-13900k CPUs.
I don't know what cooling system to get.
I read that AIO may be most suitable for this.
What are your thoughts on RAM? Will they run at full capacity with 2 64GB sticks? | 2023-10-07T01:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/171ts2m/making_llm_machine_need_tips_on_cpu_cooling_and/ | AlienHandTenticleMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ts2m | false | null | t3_171ts2m | /r/LocalLLaMA/comments/171ts2m/making_llm_machine_need_tips_on_cpu_cooling_and/ | false | false | self | 1 | null |
Registers in transformer networks | 3 | Is this a known and formally defined thing already? The Vits needs registers paper, World model probing accuracy on periods is higher, Yam Peleg saying padding tokens increase performance in training. Think before you say doing the same.
So what is it? What functionality is that describing? Simply more time steps to compute, anchored by weak linguistic tokens?
Could you test this by taking a model and training several versions: one with normal punctuation, one with all punctuation replaced by a universal padding token, one with both, and one with scrambled punctuation?
Probe those for something that would show a result? | 2023-10-07T01:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/171thwx/registers_in_transformer_networks/ | Leptok | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171thwx | false | null | t3_171thwx | /r/LocalLLaMA/comments/171thwx/registers_in_transformer_networks/ | false | false | self | 3 | null |
Why is there no noticeable difference in performance when using llama.cpp with multiple GPUs and CPUs versus just one GPU and CPU? | 30 | When attempting to run a 70B model with a CPU (64GB RAM) and GPU (22GB), the runtime speed is approximately 0.8t/s. Adding an idle GPU to the setup, resulting in CPU (64GB RAM) + GPU (22GB) + GPU (8GB), properly distributed the workload across both GPUs. However, the speed remains unchanged at 0.8t/s. Why is there no improvement in performance despite the additional GPU? Are there any other settings that could be affecting this?
Here are my runtime parameters:
koboldcpp.exe --usecublas 1 0 --gpulayers 30 --tensor_split 3 1 --contextsize 4096 --smartcontext --stream
Jupyter disconnect caused llama2 folder deletion? | 2 | I'm running into a weird issue. I use LangChain with llama2 to do inference on a Linux server through Jupyter. Occasionally the Jupyter session disconnects due to a timeout, and somehow the folder containing the .ipynb file is gone, but then the folder reappears after ~1 hour. Does anyone have suggestions? Thanks. | 2023-10-06T22:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/171qcg0/jupyter_disconnect_cause_llma2_folder_deletion/ | peterwu00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171qcg0 | false | null | t3_171qcg0 | /r/LocalLLaMA/comments/171qcg0/jupyter_disconnect_cause_llma2_folder_deletion/ | false | false | self | 2 | null |
What are the most intelligent open source models in the 3B to 34B range? | 36 | What are the **most intelligent open source models in the 3B to 34B range** for the purpose of research assistance and playing around with ideas.
I prefer a low hallucination rate and more factual output, though I know the technology cannot guarantee this yet. | 2023-10-06T22:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/171ptpy/what_are_the_most_intelligent_open_source_models/ | JyggalagSheo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ptpy | false | null | t3_171ptpy | /r/LocalLLaMA/comments/171ptpy/what_are_the_most_intelligent_open_source_models/ | false | false | self | 36 | null |
Reason behind removing lm_head in modules in QLoRA | 1 | [removed] | 2023-10-06T21:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/171nlzm/reason_behind_removing_lm_head_in_modules_in_qlora/ | Ok_Pitch7985 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171nlzm | false | null | t3_171nlzm | /r/LocalLLaMA/comments/171nlzm/reason_behind_removing_lm_head_in_modules_in_qlora/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HrB8HBAhY6wUJ0qirio_rZhOBwg6FWUWhIsuGQYIeoY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?width=108&crop=smart&auto=webp&s=5363c45f5cb9c3c0fbd47960ec4f2860aa52212b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?width=216&crop=smart&auto=webp&s=927538bbc60b2fbd56d0bc974a31a98339f4e482', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?width=320&crop=smart&auto=webp&s=eb0daa694ca223149c95e356dcc23c1d604f621f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?width=640&crop=smart&auto=webp&s=884e2c800e63e152c936bb833a28c106bf306f2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?width=960&crop=smart&auto=webp&s=be128e4d1654bfb90d9e06d34194d96e33ac9eb6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?width=1080&crop=smart&auto=webp&s=028770e8dfab3ff0ee3541799477ea09ead446a0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qrdt5t3Fcl5sj83wgQONwle5hEM0UGW60KVRLzsJa18.jpg?auto=webp&s=62a6545f2d753c8cd8f0cbacea7270e72bb6ed6b', 'width': 1200}, 'variants': {}}]} |
Downloading llama2 from huggingface vs from meta | 6 | I've observed that when downloading llama2-70b-chat from Meta, the size on disk is ~192GB, whereas when downloading from HF it is ~257GB. What accounts for the difference? Is there any difference in memory requirements or in inference results? Thanks in advance. | 2023-10-06T20:54:16 | https://www.reddit.com/r/LocalLLaMA/comments/171ngjm/downloading_llama2_from_huggingface_vs_from_meta/ | codefinger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ngjm | false | null | t3_171ngjm | /r/LocalLLaMA/comments/171ngjm/downloading_llama2_from_huggingface_vs_from_meta/ | false | false | self | 6 | null |
Are there any Open source conversational and chat-based super Agents for LLAMA-2 or Mistral LLMs? | 2 | Does anyone have any idea how to use AutoGen or SuperAgents with open-source models such as Llama-2 or Mistral? Most of them require an OpenAI API key, which is not ideal, as you don't want to put money in without knowing whether the agents are good enough to build something.
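For what it's worth, AutoGen can usually be pointed at any OpenAI-compatible local endpoint (llama-cpp-python's server, FastChat, text-generation-webui's API extension, etc.) serving Llama-2 or Mistral, instead of the real OpenAI API. A hedged sketch, assuming such a server is already running on localhost and using the config-list shape from AutoGen's examples:

```python
import autogen

# "model" is whatever name your local server exposes; the api_key is
# unused by local servers, but the field must be present.
config_list = [{
    "model": "mistral-7b-instruct",
    "api_base": "http://localhost:8000/v1",
    "api_type": "open_ai",
    "api_key": "NULL",
}]

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user = autogen.UserProxyAgent("user", human_input_mode="NEVER", code_execution_config=False)
user.initiate_chat(assistant, message="Say hello in one sentence.")
```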
TIA | 2023-10-06T20:34:45 | https://www.reddit.com/r/LocalLLaMA/comments/171mza6/are_there_any_open_source_conversational_and/ | AI_ML_preneur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171mza6 | false | null | t3_171mza6 | /r/LocalLLaMA/comments/171mza6/are_there_any_open_source_conversational_and/ | false | false | self | 2 | null |
How to use gguf models to retrieve information from a knowledge database? | 12 | I am currently using the llama-2-7b.Q5_K_M.gguf model; it's not great with some questions, but for now it gets the job done.
The next step for me is to take multiple documents and create a vector database, so that the model answers questions only from that DB.
My problem is that what I currently know how to do is use embeddings and a tokenizer with OpenAI. I want to do the same thing with the current model, since everything can then run on my laptop without internet access. Also, I want to create that DB once and keep it: I know how to embed text and put it into a vector DB, but I don't know how to reload and re-use it.
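A minimal sketch of that build-once, reload-later flow, assuming Chroma as the persistent vector DB (it embeds with a small local model by default) and llama-cpp-python for the GGUF model; all paths and names are placeholders:

```python
import chromadb
from llama_cpp import Llama

client = chromadb.PersistentClient(path="./my_vector_db")  # survives restarts
col = client.get_or_create_collection("docs")              # reloads if it already exists

# One-time ingestion:
col.add(
    ids=["doc1-chunk0", "doc1-chunk1"],
    documents=["first chunk of text...", "second chunk of text..."],
    metadatas=[{"source": "doc1.pdf"}, {"source": "doc1.pdf"}],
)

# On later runs, skip the add() and just re-open the client and query:
hits = col.query(query_texts=["What does the document say about X?"], n_results=3)
context = "\n".join(hits["documents"][0])

llm = Llama(model_path="llama-2-7b.Q5_K_M.gguf", n_ctx=4096)
out = llm(
    f"Answer using only this context:\n{context}\n\nQuestion: What about X?\nAnswer:",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```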
Thank you guys!! | 2023-10-06T18:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/171jglp/how_to_use_gguf_models_to_retrieve_information/ | charbeld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171jglp | false | null | t3_171jglp | /r/LocalLLaMA/comments/171jglp/how_to_use_gguf_models_to_retrieve_information/ | false | false | self | 12 | null |
3 good models | 76 | I just want to mention 3 good models that I have encountered while testing a lot of models. The evaluation surely depends on the use case, but these seem to be quite good:
* Open-Orca/Mistral-7B-OpenOrca (I used q8 on LM Studio)
* Undi95/Amethyst-13B-Mistral-GGUF (q 5\_m)
* Sao10K/Mythical-Destroyer-L2-13B-GGUF (q 5\_m).
The last one is one of the best 13B models I've tested, if not the best, and I've tested a lot. I have compared it with ChatGPT 3.5, and in some cases its answers are better.
The main advantages: really detailed answers, and it continues to provide on-topic text without much repetition even after it has already answered the question. I wonder if it can be updated with Mistral, or at least with Undi95/llama2-to-mistral-instruct-diff.
Enjoy :) | 2023-10-06T17:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/171iq8w/3_good_models/ | Eduard_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171iq8w | false | null | t3_171iq8w | /r/LocalLLaMA/comments/171iq8w/3_good_models/ | false | false | self | 76 | null |
I tried to run Mistral 7B on an iPhone | 61 | | 2023-10-06T17:34:09 | https://www.reddit.com/gallery/171iiw3 | engkufizz | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 171iiw3 | false | null | t3_171iiw3 | /r/LocalLLaMA/comments/171iiw3/i_try_to_run_the_mistral_7b_in_an_iphone/ | false | false | 61 | null |
Llama mis-tokenization | 9 | Hi guys. I just tried to check how the Llama tokenizer works and saw that, for example, "Yuki" is split into "Y" and "uki", and many other words are split similarly.
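(You can see the split directly with the Hugging Face tokenizer; a quick sketch, where the exact pieces may vary slightly with tokenizer version:)

```python
from transformers import AutoTokenizer

# any local copy of the Llama tokenizer works here
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
print(tok.tokenize("Yuki"))          # something like ['▁Y', 'uki']
print(tok.tokenize("Yuki said hi"))  # subword pieces, not whole words
```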
So if I train a LoRA or a Llama model, do I need to update the tokenizer somehow, or will the model learn from the dataset that those 2 tokens form the name "Yuki"? And furthermore, are the config and tokenizer stored inside GGUF models? Sorry for this dumb question. | 2023-10-06T17:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/171i4xh/llama_mistokenization/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171i4xh | false | null | t3_171i4xh | /r/LocalLLaMA/comments/171i4xh/llama_mistokenization/ | false | false | self | 9 | null |
Workflow video showing how to convert GGML Into GGUF | 1 | [removed] | 2023-10-06T17:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/171hz9i/workflow_video_showing_how_to_convert_ggml_into/ | boricuapab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171hz9i | false | null | t3_171hz9i | /r/LocalLLaMA/comments/171hz9i/workflow_video_showing_how_to_convert_ggml_into/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'T6jToJ97Li44m0P3DQqBREqmpAB6Fg2ia3qfeEQ5Y6A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fe9u0RV_etPC38obKAHxoUx6p0NOqJvk87VuJ6WojVE.jpg?width=108&crop=smart&auto=webp&s=cb96aea12b1d9288be5e1c83e8b290e01a80678b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fe9u0RV_etPC38obKAHxoUx6p0NOqJvk87VuJ6WojVE.jpg?width=216&crop=smart&auto=webp&s=b56e63ed29d39f753b47f250775607a32a92b934', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fe9u0RV_etPC38obKAHxoUx6p0NOqJvk87VuJ6WojVE.jpg?width=320&crop=smart&auto=webp&s=638a3b5b81a2c7c3854996291db33f0e86b02c9c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fe9u0RV_etPC38obKAHxoUx6p0NOqJvk87VuJ6WojVE.jpg?auto=webp&s=dc646fa814f2b2993448ae35d6606542e12a8e3b', 'width': 480}, 'variants': {}}]} |
Best ways to give back to localllama community? | 1 | [removed] | 2023-10-06T16:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/171gd6b/best_ways_to_give_back_to_localllama_community/ | rockpool7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171gd6b | false | null | t3_171gd6b | /r/LocalLLaMA/comments/171gd6b/best_ways_to_give_back_to_localllama_community/ | false | false | self | 1 | null |
Prompt format for continuing text and instructions in storytelling | 10 | I'm having a little trouble figuring out the best way to format prompts for continuing text or working off an outline. Which happens a lot when trying to use the model interactively for storytelling, but it also comes up when trying to include RAG information in a prompt or have a chatbot continue a conversation with instructions. Or even just a ChatGPT style sequence of instructions.
The original Alpaca format of Instruction/Input/Response has the handy Input field to hold the extra data that is relevant, but a lot of fine-tunes don't bother with it, particularly ones trained with User/Assistant-formatted prompts.
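For reference, the Alpaca-style layout I mean puts the auxiliary material under Input, roughly like this (exact headers vary between fine-tunes):

```
### Instruction:
Continue the story below, keeping the established tone.

### Input:
{retrieved lore, the outline, or the story so far}

### Response:
```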
What's the best approach for separating prompt instructions from other info? Or including RAG data or past conversations? Are User/Assistant prompt models useless for more advanced prompts? Have you had any success with including non-instruction information in your prompts? | 2023-10-06T15:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/171f62r/prompt_format_for_continuing_text_and/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171f62r | false | null | t3_171f62r | /r/LocalLLaMA/comments/171f62r/prompt_format_for_continuing_text_and/ | false | false | self | 10 | null |
Need some advice on putting together a "LLM-Powered Recommendation-Based Engine " | 2 | I've recently been granted some freedom at work to showcase and prototype a feature for our In-Flight Entertainment system & it incorporates AI in some way-- naturally I'm choosing LLMs.
Here's what I have so far and I would love some feedback on possible \[easier\] ways to accomplish this:
* A "What are you in the mood for?" prompt to capture the user's preferences.
* A small, local model to perform feature extraction on the prompt and then RAG against the media DB (movies, series, songs, podcasts, etc.) to retrieve the top-k embeddings and present them to the passenger.
* These are the specs of the server \[***This is the part I'm the least experienced in***\]. Would anyone care to tell me whether it's possible to run a model in a Docker container on this system? Inference would need to happen fully on CPU, as there is no GPU available.
* Parallelism? How would one handle multiple simultaneous requests to an API deployed within this server, given that the tokens/s probably won't be anything too amazing? (A rough sketch follows this list.)
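On those last two points: running llama.cpp (or llama-cpp-python) in a Docker container with CPU-only inference is routine; the harder part is concurrency. A minimal sketch of one approach, serializing access to the model behind FastAPI so simultaneous requests queue up instead of fighting over cores (all names and paths here are placeholders):

```python
import asyncio
from fastapi import FastAPI
from llama_cpp import Llama

app = FastAPI()
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_threads=8)
lock = asyncio.Lock()  # one generation at a time; other requests wait in line

@app.post("/recommend")
async def recommend(mood: str):
    async with lock:
        out = await asyncio.to_thread(
            llm, f"Passenger mood: {mood}\nSuggest three titles:", max_tokens=128
        )
    return {"text": out["choices"][0]["text"]}
```

With a single CPU-bound model instance, real parallelism would mean running several such containers behind a load balancer; a lock-plus-queue like this at least keeps latency predictable.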
https://preview.redd.it/h5goi8kljlsb1.png?width=544&format=png&auto=webp&s=acf6a535e3cc0586ca0497e0ffbd2eceffb52cf9 | 2023-10-06T15:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/171ep6o/need_some_advice_on_putting_together_a_llmpowered/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ep6o | false | null | t3_171ep6o | /r/LocalLLaMA/comments/171ep6o/need_some_advice_on_putting_together_a_llmpowered/ | false | false | 2 | null | |
Is there a way to add layers to LoRA fine-tuning? | 9 | Let's say I use LoRA fine-tuning. How can I tell the LoRA setup to include a specific extra layer in the fine-tuning task (see the sketch below)?
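If the goal is to train an extra module, such as a classification head, alongside the LoRA adapters, peft's `LoraConfig` exposes `modules_to_save` for exactly that. A minimal sketch, assuming a Llama-style model whose sequence-classification head is named `score`:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf", num_labels=2
)
cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # layers that receive LoRA adapters
    modules_to_save=["score"],            # trained (and saved) in full
)
model = get_peft_model(base, cfg)
model.print_trainable_parameters()  # should list the score head as trainable
```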
I am specifically interested in the layer "score" | 2023-10-06T14:42:32 | https://www.reddit.com/r/LocalLLaMA/comments/171e45e/is_there_a_way_to_add_layers_to_lora_fine_tuning/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171e45e | false | null | t3_171e45e | /r/LocalLLaMA/comments/171e45e/is_there_a_way_to_add_layers_to_lora_fine_tuning/ | false | false | self | 9 | null |
Have you ever encountered the following problem when using CodeLLama 13B? | 5 | I give the following prompt to codellama 13B
"How to parse all filenames from a long text that may contain many filenames? Write a test case."
but I got completely unrelated output:
https://preview.redd.it/uy5rc021flsb1.png?width=2514&format=png&auto=webp&s=59405a9f8d5882e0e2b07eb101fb974edb748450 | 2023-10-06T14:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/171dyct/have_you_ever_encountered_the_following_problem/ | More-Shop9383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171dyct | false | null | t3_171dyct | /r/LocalLLaMA/comments/171dyct/have_you_ever_encountered_the_following_problem/ | false | false | 5 | null | |
Problems using quantised llama models with cpu | 6 | I have been using llama.cpp quantised models (gguf) that I downloaded from [huggingface](https://huggingface.co/TheBloke/Firefly-Llama2-13B-v1.2-GGUF/blob/main/firefly-llama2-13b-v1.2.Q5_K_M.gguf). I can run it using my 32 core cpu (2 cpus in fact) with no problem, it generated awnser very quickly. However, when I attempted to add embeddings to the prompts and ask questions about those embeddings, the generation process takes a significant amount of time, and most of the time, it doesn't generate anything.
I'm new to this type of work, so I'm currently working on this small projects to increase my knowledge.I have a CUDA GPU (P1000) and a server with 2 Xeon Silver 4110 processors.
Am I doing anything wrong, or do you have any valuable tips? Should I approach this differently with my available resources? My goal here is to locally test models and perform embedding before considering paid APIs or investing in hardware for local processing. | 2023-10-06T13:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/171cili/problems_using_quantised_llama_models_with_cpu/ | Spiritual-Ask-9766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171cili | false | null | t3_171cili | /r/LocalLLaMA/comments/171cili/problems_using_quantised_llama_models_with_cpu/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'Nem_HlfKj23cY1AT0zT4OMxuVRi3CPsvAxbzC6NgoTY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?width=108&crop=smart&auto=webp&s=8f47d707d2b278726b8eb7d5816b90869b35edb4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?width=216&crop=smart&auto=webp&s=1e5e7c3c011b85ae77dcf9138acd68bbd71caf8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?width=320&crop=smart&auto=webp&s=19c6b8a236cec99ae27a7aa120b023ada1e68314', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?width=640&crop=smart&auto=webp&s=7e5ca914ad0f34ba0e998314cced013f9717ff64', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?width=960&crop=smart&auto=webp&s=853bceac0a6478076eecc2a47e962ccd078d5cba', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?width=1080&crop=smart&auto=webp&s=309fea71f26fe90869ce6fce09e4d3e79a7d3e5f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UWiOtr7zpcVlz6RPPfi575EQYbU11lbs0BVcsBucX_c.jpg?auto=webp&s=82741cc5736cc404ab1d890f07778673531d6a03', 'width': 1200}, 'variants': {}}]} |
Reviewers here were before active members of r/wine and r/audiophile | 1 | [removed] | 2023-10-06T13:31:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 171cebd | false | null | t3_171cebd | /r/LocalLLaMA/comments/171cebd/reviewers_here_were_before_active_members_of/ | false | false | default | 1 | null | ||
How to do an empirical LLM finetuning cost calculation before renting GPUs? | 1 | [removed] | 2023-10-06T13:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/171c8r8/how_to_do_an_empirical_llm_finetuning_cost/ | No-Street-3020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171c8r8 | false | null | t3_171c8r8 | /r/LocalLLaMA/comments/171c8r8/how_to_do_an_empirical_llm_finetuning_cost/ | false | false | self | 1 | null |
Every review here | 5 | 2023-10-06T12:52:30 | ultrapcb | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 171bj61 | false | null | t3_171bj61 | /r/LocalLLaMA/comments/171bj61/every_review_here/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'trYfYDnUYpzbwcqbKaafF5-MGni30RML7z3jb7wPm9Y', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/k8297o95xksb1.png?width=108&crop=smart&auto=webp&s=cd4859eeef7d8e673480723fd2ea8fe16786e6db', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/k8297o95xksb1.png?width=216&crop=smart&auto=webp&s=11bdacda49a5375c7835e4fd8cf3fc1e482443a8', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/k8297o95xksb1.png?width=320&crop=smart&auto=webp&s=baad6f5fd2f1efa96b6535055a7a4b3dd11c44ec', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/k8297o95xksb1.png?width=640&crop=smart&auto=webp&s=04f642abf114f4617d20996b947193399f31cf02', 'width': 640}], 'source': {'height': 396, 'url': 'https://preview.redd.it/k8297o95xksb1.png?auto=webp&s=e5dc22c53468564a77da7d5761f9f701e48d69c1', 'width': 705}, 'variants': {}}]} | |||
Best TikTok and YouTube creators doing LLaMA stuff? | 0 | Can you recommend anyone making content where they test models and hardware? I mostly see AI news and broader content. I'm curious whether anyone out there is showing what a 13B GGUF model runs like on a 32GB system-RAM setup, stuff like that. | 2023-10-06T12:22:52 | https://www.reddit.com/r/LocalLLaMA/comments/171ax9f/best_tiktok_and_youtubers_doing_llama_stuff/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ax9f | false | null | t3_171ax9f | /r/LocalLLaMA/comments/171ax9f/best_tiktok_and_youtubers_doing_llama_stuff/ | false | false | self | 0 | null |
Please explain, how was Mistral made? | 91 | I'm trying to understand how Mistral was created in a practical sense. Did the team just take a Llama-2 base model (or some other base) and fine-tune it until it was better, or did they start from scratch? | 2023-10-06T12:14:32 | https://www.reddit.com/r/LocalLLaMA/comments/171ar0n/please_explain_how_was_mistral_made/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171ar0n | false | null | t3_171ar0n | /r/LocalLLaMA/comments/171ar0n/please_explain_how_was_mistral_made/ | false | false | self | 91 | null |
How is "grammar" achieved with LLMs? | 45 | So I just recently realised you can force a model to output in a certain format ("grammar") using specific files like described here: [https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)
Can anybody explain to me how this is actually achieved internally? Will those files be part of the prompt and eat away context length or does it happen outside of the prompt somehow? How so?
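As far as I understand llama.cpp's implementation, the grammar never becomes part of the prompt at all: at every sampling step the candidate tokens are filtered against the current grammar state (invalid continuations are masked out), so no context length is consumed. A grammar file is a small GBNF definition, for example one that constrains output to exactly "yes" or "no":

```
root ::= "yes" | "no"
```

which would be passed with something like `./main -m model.gguf --grammar-file choice.gbnf -p "..."` (the file name here is just an example).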
I know that I can just tell a model to give me something in a specific format, by giving an example, but in that case I use up context length of course. | 2023-10-06T12:09:20 | https://www.reddit.com/r/LocalLLaMA/comments/171an9p/how_is_grammar_achieved_with_llms/ | psi-love | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 171an9p | false | null | t3_171an9p | /r/LocalLLaMA/comments/171an9p/how_is_grammar_achieved_with_llms/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
Task Specific Alternative to GPT-4 | 5 | I'm creating different agents to complete various tasks, and one of which is to crawl through a website to find specific information. It was easiest to prove out the concept with GPT-4 since a lot of libraries (like Langchain) already create zero shot agent implementations for that model. Now that I have proved out the concept, I want to find a model I can deploy locally. Right now, I can test with a 3090 locally, but I will soon have some A100s at my disposal. Does anyone have recommendations for models that would do well with this task? I know I will have to do some prompt engineering to test this out more thoroughly for anything else. It would be great if there was a very task-specific leaderboard of models | 2023-10-06T11:27:13 | https://www.reddit.com/r/LocalLLaMA/comments/1719t4m/task_specific_alternative_to_gpt4/ | not_the_godfather | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1719t4m | false | null | t3_1719t4m | /r/LocalLLaMA/comments/1719t4m/task_specific_alternative_to_gpt4/ | false | false | self | 5 | null |
[Synthetic dataset generation] "textbook are all you need"... But how to choose the textbook writer? (gpt4, Claude, bizon, jurassic-2...) | 4 |
I'm looking for resources, papers, or experiences that compare the performance of large language models (LLMs). I'm trying to find an honest benchmark comparing the capabilities of the latest large models; I'm really interested in these: GPT-3.5 Instruct, GPT-4, Claude 2, Claude Instant 100k, PaLM 2 (Bison), Jurassic-2, Llama-2 70B and other state-of-the-art Llama-2 fine-tunes.
I'm interested in general benchmarks and, if they exist, comparisons of performance on synthetic-data generation tasks (both generating data with the "textbooks are all you need" approach used in Phi, and Orca/Evol-Instruct-style data like WizardLM's...). | 2023-10-06T11:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1719oeb/synthetic_dataset_generation_textbook_are_all_you/ | BXresearch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1719oeb | false | null | t3_1719oeb | /r/LocalLLaMA/comments/1719oeb/synthetic_dataset_generation_textbook_are_all_you/ | false | false | self | 4 | null |
How deterministic is the result of inference for local LLMs? | 5 | It's common knowledge that ChatGPT inference is non-deterministic: several queries will produce different outputs even with temperature = 0 (possibly because of the MoE architecture). It's crucial for me to have some guaranteed results in code generation. Does anyone deal with code generation using local models (CodeLlama, StarCoder, Phi-1.5)? How do they behave in terms of determinism? Will they produce similar code at temperature 0? | 2023-10-06T10:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/17197k1/how_deterministic_the_result_of_inference_is_for/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17197k1 | false | null | t3_17197k1 | /r/LocalLLaMA/comments/17197k1/how_deterministic_the_result_of_inference_is_for/ | false | false | self | 5 | null |
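For llama.cpp-based local models, a quick way to pin this down is a fixed seed plus greedy decoding; under those settings, repeated runs on the same build and hardware should match (a sketch with a placeholder model path):

```python
from llama_cpp import Llama

llm = Llama(model_path="codellama-7b.Q5_K_M.gguf", seed=42)
a = llm("def fibonacci(n):", temperature=0.0, max_tokens=64)["choices"][0]["text"]
b = llm("def fibonacci(n):", temperature=0.0, max_tokens=64)["choices"][0]["text"]
print(a == b)  # greedy decoding: expected True on the same build/hardware
```

Determinism can still break across different builds, thread counts, or GPU offload settings, since floating-point reductions are not associative.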
[German Language] Release of EM German with models based on Llama2, Mistral and LeoLM | 1 | 2023-10-06T10:26:20 | https://github.com/jphme/EM_German/blob/main/README.md | jphme | github.com | 1970-01-01T00:00:00 | 0 | {} | 1718qgh | false | null | t3_1718qgh | /r/LocalLLaMA/comments/1718qgh/german_language_release_of_em_german_with_models/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'tTTfpP7bm29TY65B0VELyvMLMyOCM73087MrCMN_uvE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=108&crop=smart&auto=webp&s=492cb78f8304666d7648f8408e2b5e80c951ecbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=216&crop=smart&auto=webp&s=1c9ea2334207e383cff0676b65c273fb6eafdcd0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=320&crop=smart&auto=webp&s=8bb6bebc8411dbf7dd86d65160fc2cda27e1ef53', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?width=640&crop=smart&auto=webp&s=19b2d208ee9d37270d91cff2f83dd013872dcb29', 'width': 640}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/FRchFCaoN1iX2bSrP3lnTtfaX6j7CBhWU4u0bdeYYeI.jpg?auto=webp&s=fd6feaba878a4b3ecbad666d802753d489260c57', 'width': 640}, 'variants': {}}]} | ||
Cheapest option to run TheBloke/WizardLM-33B-V1.0-Uncensored | 1 | [removed] | 2023-10-06T10:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1718msh/cheapest_option_to_run/ | reddithotel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1718msh | false | null | t3_1718msh | /r/LocalLLaMA/comments/1718msh/cheapest_option_to_run/ | false | false | self | 1 | null |
What's your main interest in running a local LLM instead of using an existing API? | 62 |
[View Poll](https://www.reddit.com/poll/1718a9o) | 2023-10-06T09:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1718a9o/whats_your_main_interest_in_running_a_local_llm/ | gopietz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1718a9o | false | null | t3_1718a9o | /r/LocalLLaMA/comments/1718a9o/whats_your_main_interest_in_running_a_local_llm/ | false | false | self | 62 | null |
Has anyone created a llama monster? | 7 | I have a goal of creating a llama monster. For argument's sake, let's say it's a monster trained on the law (although medicine, history... anything is fair game). Money is no object. I can buy two A100s and fine-tune on CaseHOLD (or Pile of Law). I have numerous law-school textbooks in PDF format that I could use for embeddings (or maybe as sources for further fine-tuning). I want to do this because I want to run the model locally, but also have access to highly specific, curated facts about (for example) international trademark disputes, or peculiar circumstances where an inventor's previous patents are used against him as prior art.
I estimate it will cost 10-20k to build such a system, when a $20/mo paid service would come close. But my question is: even if I made such a dramatic expenditure, is there any guarantee it would be better than Claude (or ChatGPT)? Has anyone ever tried to make a monster LLM focused on a single task and been successful? It seems possible in theory, but I have never heard anyone address this question.
With $25k, could I create the true legal chat bot? | 2023-10-06T08:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/1716uqu/has_anyone_created_a_llama_monster/ | ATuring-314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1716uqu | false | null | t3_1716uqu | /r/LocalLLaMA/comments/1716uqu/has_anyone_created_a_llama_monster/ | false | false | self | 7 | null |
Does LM Studio have a sort of "memory/author's note" feature? | 1 | [removed] | 2023-10-06T07:04:14 | https://www.reddit.com/r/LocalLLaMA/comments/1715p4v/does_lm_studio_have_a_sort_of_memoryauthors_note/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1715p4v | false | null | t3_1715p4v | /r/LocalLLaMA/comments/1715p4v/does_lm_studio_have_a_sort_of_memoryauthors_note/ | false | false | self | 1 | null |
Run your local LLMs in a text editor | 28 | Hi,
Here's an extension for Neovim to invoke Ollama; it works reasonably well with Mistral. A video is in the description.
[https://github.com/David-Kunz/gen.nvim](https://github.com/David-Kunz/gen.nvim)
I would appreciate any comments or suggestions.
Thank you and best regards,
David | 2023-10-06T07:02:48 | https://www.reddit.com/r/LocalLLaMA/comments/1715oc6/run_your_local_llms_in_a_text_editor/ | David-Kunz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1715oc6 | false | null | t3_1715oc6 | /r/LocalLLaMA/comments/1715oc6/run_your_local_llms_in_a_text_editor/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'G7d1n7bucTzsm5Bbc0J7JE-lKJ3BHprmv5VAcA7GrtE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?width=108&crop=smart&auto=webp&s=9abbc99e1aac2fc03dd4cc0eb6670aa4654195cb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?width=216&crop=smart&auto=webp&s=a5fc3488d4df7eb53aac72922caea71a68a68b03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?width=320&crop=smart&auto=webp&s=b69a61864821c29c7d8ea7f54d480126417a74b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?width=640&crop=smart&auto=webp&s=3df37fe2df35491b7a9d7e18b3ff6415a5457026', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?width=960&crop=smart&auto=webp&s=511194ad9cfd7b3f45258f09314888d30cc06a1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?width=1080&crop=smart&auto=webp&s=7b03ddc5e72d4c5d5112594bc51e721d80398361', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1TlYe3Te5BSmk1jWH9jI5v_0o7y5uyT8rWJE-E2VORY.jpg?auto=webp&s=7c6ab3fd13d6aa00532921ce07de186a029583de', 'width': 1200}, 'variants': {}}]} |
Problem using llama-cpp-python on windows | 1 | I am trying to use mistral-openorca model by theBloke using llama-cpp-python. However, the inference speed is pretty slow, and I see that my nvidia gpu is not being used at all. I tried installing with cuBLAS and CLBLAST, but nothing seems to help. Can anybody please help me? | 2023-10-06T06:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1714rwt/problem_using_llamacpppython_on_windows/ | Blue_Dude3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1714rwt | false | null | t3_1714rwt | /r/LocalLLaMA/comments/1714rwt/problem_using_llamacpppython_on_windows/ | false | false | self | 1 | null |
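The usual culprit is that the prebuilt llama-cpp-python wheel ships without CUDA support, and that no layers are offloaded at load time. The commonly documented fix is to rebuild with cuBLAS (on Windows, set the variables with `set` or `$env:` instead of the inline form):

```bash
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir
```

and then pass `n_gpu_layers` when constructing the model, e.g. `Llama(model_path="...", n_gpu_layers=35)`; with the default of 0, everything runs on CPU.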
Seeking Guidance on Using a Pre-trained Model for Medical Text Classification | 1 |
I'm working on a project where I need to classify medical texts into specific areas. For example, if I have a text like "Asthma is a chronic disease, and the drugs used are bronchodilators and corticosteroids," I want my model to return labels like ["pneumology", "asthma", "treatment"].
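One way to frame this is multi-label classification with a BERT-style encoder. A minimal hedged sketch (the model name and label set are placeholders, and the outputs are meaningless until the head is fine-tuned on labeled examples):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["pneumology", "asthma", "treatment"]  # placeholder label set
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    problem_type="multi_label_classification",  # sigmoid head + BCE loss
)

text = "Asthma is a chronic disease, and the drugs used are bronchodilators and corticosteroids"
logits = model(**tok(text, return_tensors="pt")).logits
probs = torch.sigmoid(logits)[0]
print([l for l, p in zip(labels, probs) if p > 0.5])  # predicted labels
```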
My question is: can a small 3B or 7B model be trained for this task? Is BERT a suitable choice? I've seen BERT models used for sentiment classification; is that a similar path to follow for medical text classification? | 2023-10-06T05:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/1714dxy/seeking_guidance_on_using_a_pretrained_model_for/ | koyo4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1714dxy | false | null | t3_1714dxy | /r/LocalLLaMA/comments/1714dxy/seeking_guidance_on_using_a_pretrained_model_for/ | false | false | self | 1 | null |
Enterprise Generative AI Pricing Estimator App | 3 | My name is Riya, a recent Computer Science graduate from Brown University and current first-year Master's student in Computer Science at Stanford University.
I built the [Enterprise Generative AI Pricing Estimator app](https://gen-ai-pricing-estimator.streamlit.app/) to help enterprise decision makers determine the ballpark costs to rollout enterprise-wide generative AI capabilities like OpenAI ChatGPT, Google Bard, or large language models (LLMs) apps powered by Generative AI APIs. It focuses on estimating the expenses related to using LLMs for text-related tasks and allows comparisons between different Generative AI models. The [app](https://gen-ai-pricing-estimator.streamlit.app/) uses a proprietary method to model an enterprise, roles, artifacts usually produced by the roles and context associated with such artifacts, which then serves as input to calculate the benefit and lift from generative AI vs non-GenAI.
Seeking feedback and looking for collaborators to take it to the next level. Also am thinking of making it open-source! | 2023-10-06T05:23:08 | https://www.reddit.com/r/LocalLLaMA/comments/17142id/enterprise_generative_ai_pricing_estimator_app/ | ComfortableDaikon623 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17142id | false | null | t3_17142id | /r/LocalLLaMA/comments/17142id/enterprise_generative_ai_pricing_estimator_app/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '6qqls60gKTiL0-yVyyCUcx6CGtbyKoQmvE6QXwE2Obs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?width=108&crop=smart&auto=webp&s=5a13ffff5550b2dc235d6449f4c0c9768b220c86', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?width=216&crop=smart&auto=webp&s=3f45a53a76c59ed8a3240f406bea4c397595c68f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?width=320&crop=smart&auto=webp&s=89ce2de427e0c72b6fb5d2cde4c708795d598126', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?width=640&crop=smart&auto=webp&s=2e0aadb762c657f0e110e105f477776a2daf75dd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?width=960&crop=smart&auto=webp&s=fa908f13b4f36fcfcdde2c5af21cc2967e4001ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?width=1080&crop=smart&auto=webp&s=b8ab287f1d5fcc7cd02e0826af16c109703e7263', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hnLO7KgRgpu8CLAmI8UppF2CpsuCVYGGGkudVro-kx4.jpg?auto=webp&s=2a437380e7e7e80f0efbb7ba407f2348a667542e', 'width': 1200}, 'variants': {}}]} |
Are there large-sized companies that are using llama2 in prod? | 3 | This post is part of an exploration I'm doing comparing GPT and Llama-2. In my last post I did a cost comparison between GPT-3.5 and Llama-2, and Llama-2 (with the compute I'm using) turned out to be more expensive. Setting the cost angle aside, I wanted to ask whether anyone knows of companies using Llama in production. If they are, then for what kind of operations? What can Llama do better than GPT-3.5 at scale? | 2023-10-06T05:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1713q1x/are_there_largesized_companies_that_are_using/ | todaysgamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1713q1x | false | null | t3_1713q1x | /r/LocalLLaMA/comments/1713q1x/are_there_largesized_companies_that_are_using/ | false | false | self | 3 | null |
training 4bit quantized loras in ooba | 3 | This week I've been attempting some basic LoRA training in oobabooga. I have a single fairly large text file (64MB) and I'm training at default settings, popping off checkpoints every few hours.
I'm loading a Llama-2 descendant with Transformers in 4-bit (because GPTQ/GGUF training is broken in ooba) and running this on Windows on a 16GB 3080 Ti.
This part is working pretty much fine. The issues start when I try to actually run it.
on my main test rig (16gb 3080ti) i am able to load the raw model in Transformers in 4bit, apply the lora, and it will work for a while. after 5 or 6 generations though, it just starts hanging forever. no error message. every time this happens i need to reload the model and possibly restart ooba before i can use it.
on my "production" rig (a linux machine with a 16gb 4060ti + 8gb 1080) i can't load Transformer models at all. there's some bug involving multi-gpu (0gb on the 1080 is an invalid param) and working around that gives another error.
Assuming there is a bug in ooba related to loras, I decided to try converting to gptq, since I know I can run those. So I merged the lora using some cobbled-together python code I found online.
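For reference, the merge boiled down to something like this (paths are placeholders; it's just peft's standard merge):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# load the fp16 base, apply the adapter, bake the weights in, save
base = AutoModelForCausalLM.from_pretrained("base-model-path", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "lora-checkpoint-path")
merged = model.merge_and_unload()
merged.save_pretrained("merged-model-path")
```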
I can't load the merged model in Transformers on any machine:
Non-consecutive added token '<unk>' found. Should have index 32000 but has index 0 in saved vocabulary.
Pushing through anyways, I attempted to convert to gptq using more cobbled-together python code. I could not for the life of me get bitsandbytes installed properly with gpu support on Windows, and when I ran it on Linux I realized that I need enough vram to completely load the *unquantized* model, and 24gb is not enough.
So now I'm kind of at a total dead end; googling is not really helping anymore.
Is anybody out there doing this successfully on low-vram hardware? Is ooba viable for this at all? Is there some library that everyone is using to do this stuff? | 2023-10-06T00:14:20 | https://www.reddit.com/r/LocalLLaMA/comments/170xyeg/training_4bit_quantized_loras_in_ooba/ | __SlimeQ__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170xyeg | false | null | t3_170xyeg | /r/LocalLLaMA/comments/170xyeg/training_4bit_quantized_loras_in_ooba/ | false | false | self | 3 | null |
Use LLM to convert anger level in text? | 1 | Is it possible to send text to an LLM and get it to convert it to be less / more angry, for example?
Ideally, I don't want any output except the converted text. It's similar to a model that translates and returns the output. I thought maybe I could make a character in ooba and give it some examples?
Any tips appreciated.
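For example, I was imagining giving the character a template roughly like this (wording is just a guess):

```
Rewrite the text below to sound less angry. Output only the rewritten text.

Text: "I can't believe the build is broken AGAIN. Fix it NOW."
Rewritten: "Looks like the build broke again; could you take a look when you get a chance?"

Text: "{input}"
Rewritten:
```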
| 2023-10-06T00:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/170xpp1/use_llm_to_convert_anger_level_in_text/ | tuxedo0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170xpp1 | false | null | t3_170xpp1 | /r/LocalLLaMA/comments/170xpp1/use_llm_to_convert_anger_level_in_text/ | false | false | self | 1 | null |
[Synthetic data] Searching for honest comparison between LLM (gpt4, bizon, jurassic-2, Claude...) | 1 | [removed] | 2023-10-05T23:44:45 | https://www.reddit.com/r/LocalLLaMA/comments/170xana/synthetic_data_searching_for_honest_comparison/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170xana | false | null | t3_170xana | /r/LocalLLaMA/comments/170xana/synthetic_data_searching_for_honest_comparison/ | false | false | self | 1 | null |
Nucleus.ai Founders Emerge From Stealth with 22b LLM (unconfirmed) | 56 | 2023-10-05T23:29:34 | https://www.businesswire.com/news/home/20231005112266/en/Nucleus.ai-Founders-Emerge-From-Stealth-and-Demonstrate-That-Big-Tech-Companies-Aren%E2%80%99t-the-Only-Ones-Building-Large-Language-Models | iLaurens | businesswire.com | 1970-01-01T00:00:00 | 0 | {} | 170wxz7 | false | null | t3_170wxz7 | /r/LocalLLaMA/comments/170wxz7/nucleusai_founders_emerge_from_stealth_with_22b/ | false | false | 56 | {'enabled': False, 'images': [{'id': 'wFC_FUUF00O0VBr0cI13RGrEYz2f5jco1BVu-7CREgA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?width=108&crop=smart&auto=webp&s=a6c234041c69dc0f0ae76fbcd4dd97019ece6237', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?width=216&crop=smart&auto=webp&s=95163b7c273bdf510429d06c92dca7b1da850af9', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?width=320&crop=smart&auto=webp&s=f0dccefce65deca458dfcbfff42308c97af5466d', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?width=640&crop=smart&auto=webp&s=99ebea043e74a04023d081b7a4922bb2ed5da8d4', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?width=960&crop=smart&auto=webp&s=a2f79615091237125f68f3b2f455e143fdc5249d', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?width=1080&crop=smart&auto=webp&s=c01d009832dd1be3c09f28549b576dd6becd35a5', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/z_ZxLPqp6rJ6IfTuGzz9e65IvmcLVO9MH1IzBwui-Ds.jpg?auto=webp&s=14801943951e05e1a8acf77742cb1d842f688f4d', 'width': 1200}, 'variants': {}}]} | ||
Devs who use CodeLlama 34b- Which is the best current fine-tune (or not) to use? | 30 | Are any of you using it regularly, and if so which one is working best for you atm?
If you aren't using 34b, do you have one that's simply better? I really like the 100k context, but I'm really just looking for the best offline code assistant LLM. | 2023-10-05T23:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/170wip7/devs_who_use_codellama_34b_which_is_the_best/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170wip7 | false | null | t3_170wip7 | /r/LocalLLaMA/comments/170wip7/devs_who_use_codellama_34b_which_is_the_best/ | false | false | self | 30 | null |
Are there any models that are really good at story writing like Claude 2? | 15 | I've been using Claude 2 to write stories of characters for a novel, and it does a pretty good job of keeping track of character details with the file upload feature. But it is terrible once any kind of conflict enters the picture, and will refuse to draft a conversation between two characters that ends on any kind of negative note. I've argued back and forth with it and it won't do it at all.
I was hoping there would be a model that is tuned for the level of prose Claude 2 is capable of and is preferably uncensored so I don't have to deal with it whining that two characters might potentially hate each other. | 2023-10-05T22:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/170vnnh/are_there_a_models_that_are_really_good_at_story/ | Leyline266 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170vnnh | false | null | t3_170vnnh | /r/LocalLLaMA/comments/170vnnh/are_there_a_models_that_are_really_good_at_story/ | false | false | self | 15 | null |
H2Oai GPT CPU | 1 | I am fairly new to LLMs and am having an issue implementing H2OGPT through CPU on windows. Not sure if this is allowed but would really appreciate it if someone could help me out. I’ve attached the link to the instructions I’ve followed on GitHub. When I get to the line of code:
# Required for CPU: LLaMa/GPT4All:
pip install -r reqs_optional/requirements_optional_gpt4all.txt
I get the error: `failed building wheel for llama-cpp-python`
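My only lead so far is that the wheel seems to be compiled from source, so a C/C++ toolchain would be needed. Would something along these lines be the right fix? (Just guessing, I haven't verified this.)

```
# install Visual Studio Build Tools with the "Desktop development with C++"
# workload first, then:
pip install --upgrade pip cmake
pip install -r reqs_optional/requirements_optional_gpt4all.txt
```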
I unfortunately haven’t found a solution online yet, so would really appreciate any assistance. | 2023-10-05T22:04:04 | https://github.com/h2oai/h2ogpt/blob/main/docs/README_WINDOWS.md | ComfortableFinancial | github.com | 1970-01-01T00:00:00 | 0 | {} | 170uuv5 | false | null | t3_170uuv5 | /r/LocalLLaMA/comments/170uuv5/h2oai_gpt_cpu/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eOBhSSaz8A1bMy1QEqMrqrkAJLULLOEdH6aiScvzh8M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?width=108&crop=smart&auto=webp&s=b0397c9d5a46f3f5e09c919d5d08ace87ffecedd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?width=216&crop=smart&auto=webp&s=211a3d584aa6f0bc85e7e7845ec1e7f9127c2c4e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?width=320&crop=smart&auto=webp&s=e6ddf1e73806fed2a738ffec95d13865037ec92c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?width=640&crop=smart&auto=webp&s=1273b5b897b11fead126b2be67f0d10f1979d549', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?width=960&crop=smart&auto=webp&s=d2fd2e5b4d02b7ddfc4c1426ea5ccfd423b1af6e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?width=1080&crop=smart&auto=webp&s=d815dade977acdff87f0c476b90e5eb370c6b9ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cDkyQn86bv25h0KdMLJVDkq41P7hhIqpXKZK2tp_5aY.jpg?auto=webp&s=ed82804ba03c205ebf82f8f1b61931ee8d51886f', 'width': 1200}, 'variants': {}}]} | |
Front ends for oobabooga that have Character AI's interface to regenerate/remember previous generations using arrow keys? | 5 | So when a character (the chatbot) generates a reply on C.AI, you can hit the right arrow to generate another, and the last generation moves to the left, like a big long tape that gets longer as you create more regenerations.
This is really nice, because you can easily use the left and right arrows to check through all the replies, before replying to the one you like best.
[C.AI](https://C.AI) also remembers every generation made with every message, so when you delete a reply you made, the "tape" of generations it made becomes available to flick through again.
Does SillyTavern/Tavern/Kobold (if that's even a front end?) do this? Or does any other front end?
I use oobabooga textgen webui and have been using the built in character thing to talk to characters. Thanks for help, this is all really confusing. | 2023-10-05T22:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/170uqzr/front_ends_for_oobabooga_that_have_character_ais/ | RottieCumGutter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170uqzr | false | null | t3_170uqzr | /r/LocalLLaMA/comments/170uqzr/front_ends_for_oobabooga_that_have_character_ais/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]} |
Can't find the guide anymore (need help, more inside) | 1 | All I see is a link with a list of models.
So I was able to start with text-generation-webui and load mistral-7b, which was super easy.
What I can't figure out is how to load a 33b model in 8-bit. I entered something in the web UI and it downloaded the fp16 model, which is 66GB.
And also, if I load a 70b model in 4-bit (once I figure out how), would it be able to use two cards (2x3090)?
If there is a tutorial and some list of the best models which fit on 2x3090, I would appreciate it. | 2023-10-05T21:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/170ud34/cant_find_guide_anymoreneed_help_more_inside/ | chub0ka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170ud34 | false | null | t3_170ud34 | /r/LocalLLaMA/comments/170ud34/cant_find_guide_anymoreneed_help_more_inside/ | false | false | self | 1 | null |
Guide: Installing ROCm/hip for LLaMa.cpp on Linux for the 7900xtx | 54 | Hi all, I finally managed to get an upgrade to my GPU. I noticed there aren't a lot of complete guides out there on how to get LLaMa.cpp working with an AMD GPU, so here goes.
Note that this guide has not been revised super closely; there might be mistakes or unpredicted gotchas. General knowledge of Linux, LLaMa.cpp, apt and compiling is recommended.
Additionally, the guide is written specifically for use with Ubuntu 23.04, as there _are_ apparently version-specific differences between the steps you need to take. Be careful.
This guide _should_ work with the 7900XT equally well as for the 7900XTX, it just so happens to be that I got the 7900XTX.
Alright, here goes:
# Using a 7900xtx with LLaMa.cpp
Guide written specifically for Ubuntu 23.04; the process _will_ differ for other versions of Ubuntu
Overview of steps to take:
1. Check and clean up previous drivers
2. Install rocm & hip
a. Fix dependency issues
3. Reboot and check installation
4. Build LLaMa.cpp
## Clean up previous drivers
This part was adapted from [this helpful AMD ROCm installation gist](https://gist.github.com/AlkindiX/9c54d1155ba72415f3b585e26c9df6b3)
**Important**: Check if there are any amdgpu-related packages on your system
```
sudo apt list --installed | cut --delimiter=" " --fields=1 | grep amd
```
You should not have any packages with the term `amdgpu` in them. `steam-libs-amd64` and `xserver-xorg-video-amdgpu` are ok. `amdgpu-core`, `amdgpu-dkms` are absolutely not ok.
If you find any amdgpu packages, remove them.
```
sudo apt update
sudo apt install amdgpu-install
# uninstall the packages using the official installer
amdgpu-install --uninstall
# clean up
sudo apt remove --purge amdgpu-install
sudo apt autoremove
```
## Install ROCm
This part is surprisingly easy. Follow the [quick start guide for Linux on the AMD website](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html)
You'll end up with `rocm-hip-libraries` and `amdgpu-dkms` installed. You will need to install some additional rocm packages manually after this, however.
These packages should install without a hitch
```
sudo apt install rocm-libs rocm-ocl-icd rocm-hip-sdk rocm-hip-libraries rocm-cmake rocm-clang-ocl
```
Now we need to install `rocm-dev`. If you try to install this on Ubuntu 23.04, you will run into the following error message. Very annoying.
```
sudo apt install rocm-dev
The following packages have unmet dependencies:
rocm-gdb : Depends: libpython3.10 but it is not installable or
libpython3.8 but it is not installable
E: Unable to correct problems, you have held broken packages.
```
Ubuntu 23.04 (Lunar Lobster) moved on to Python 3.11, so you will need to install Python 3.10 from the Ubuntu 22.04 (Jammy Jellyfish) repositories.
Now, installing packages from previous versions of Ubuntu isn't necessarily unsafe, but you do need to make **absolutely sure** you don't install anything other than libpython3.10. You don't want to overwrite any newer packages with older ones, so follow these steps carefully.
We're going to add the Jammy Jellyfish repository, update our sources with apt update and install libpython3.10, then immediately remove the repository.
```
echo "deb http://archive.ubuntu.com/ubuntu jammy main universe" | sudo tee /etc/apt/sources.list.d/jammy-copies.list
sudo apt update
# WARNING #
# DO NOT INSTALL ANY PACKAGES AT THIS POINT OTHER THAN libpython3.10
# THAT INCLUDES `rocm-dev`
# WARNING #
sudo apt install libpython3.10-dev
sudo rm /etc/apt/sources.list.d/jammy-copies.list
sudo apt update
# your repositories are as normal again
```
Now you can finally install rocm-dev
```
sudo apt install rocm-dev
```
The versions don't have to be exactly the same, just make sure you have the same packages.
## Reboot and check installation
With the ROCm and hip libraries installed at this point, we should be good to install LLaMa.cpp. Since installing ROCm is a fragile process (unfortunately), we'll make sure everything is set-up correctly in this step.
First, check if you got the right packages. Version numbers and dates don't have to match, just make sure your rocm is version 5.5 or higher (mine is 5.7 as you can see in this list) and that you have the same 21 packages installed.
```
apt list --installed | grep rocm
rocm-clang-ocl/jammy,now 0.5.0.50700-63~22.04 amd64 [installed]
rocm-cmake/jammy,now 0.10.0.50700-63~22.04 amd64 [installed]
rocm-core/jammy,now 5.7.0.50700-63~22.04 amd64 [installed,automatic]
rocm-dbgapi/jammy,now 0.70.1.50700-63~22.04 amd64 [installed]
rocm-debug-agent/jammy,now 2.0.3.50700-63~22.04 amd64 [installed]
rocm-dev/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-device-libs/jammy,now 1.0.0.50700-63~22.04 amd64 [installed]
rocm-gdb/jammy,now 13.2.50700-63~22.04 amd64 [installed,automatic]
rocm-hip-libraries/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-hip-runtime-dev/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-hip-runtime/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-hip-sdk/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-language-runtime/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-libs/jammy,now 5.7.0.50700-63~22.04 amd64 [installed]
rocm-llvm/jammy,now 17.0.0.23352.50700-63~22.04 amd64 [installed]
rocm-ocl-icd/jammy,now 2.0.0.50700-63~22.04 amd64 [installed]
rocm-opencl-dev/jammy,now 2.0.0.50700-63~22.04 amd64 [installed]
rocm-opencl/jammy,now 2.0.0.50700-63~22.04 amd64 [installed]
rocm-smi-lib/jammy,now 5.0.0.50700-63~22.04 amd64 [installed]
rocm-utils/jammy,now 5.7.0.50700-63~22.04 amd64 [installed,automatic]
rocminfo/jammy,now 1.0.0.50700-63~22.04 amd64 [installed,automatic]
```
Next, you should run `rocminfo` to check if everything is installed correctly. You might already need to restart your PC before `rocminfo` will work.
```
sudo rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 9 7900X 12-Core Processor
Uuid: CPU-XX
Marketing Name: AMD Ryzen 9 7900X 12-Core Processor
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
...
*******
Agent 2
*******
Name: gfx1100
Uuid: GPU-ff392834062820e0
Marketing Name: Radeon RX 7900 XTX
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
...
*** Done ***
```
Make note of the `Node` property of the device you want to use, you will need it for LLaMa.cpp later.
Now, reboot your computer if you haven't yet.
## Building LLaMa.cpp
Almost done, this is the easy part.
Make sure you have the llama.cpp repository cloned locally and build it with the following command
```
make clean && LLAMA_HIPBLAS=1 make -j
```
Note that at this point you would need to run llama.cpp with sudo, because only users in the `render` group have access to ROCm functionality. Add yourself to the group to avoid that:
```
# add user to `render` group
sudo usermod -a -G render $USER
# reload group stuff (otherwise it's as if you never added yourself to the group!)
newgrp render
```
You should be good to go! You can test it out with a simple prompt like this; make sure to point to a model file in your models directory. A 34B Q4 model should run OK with all layers offloaded.
**IMPORTANT NOTE**: If you had more than one device in your rocminfo output, you need to specify the device ID otherwise the library will guess and pick wrong, `No devices found` is the error you will get if it fails. Find the node_id of your "Agent" (in my case the 7900xtx was 1) and specify it using the `HIP_VISIBLE_DEVICES` env var
```
HIP_VISIBLE_DEVICES=1 ./main -ngl 50 -m models/wizardcoder-python-34b/wizardcoder-python-34b-v1.0.Q4_K_M.gguf -p "Write a function in TypeScript that sums numbers"
```
Otherwise, run as usual
```
./main -ngl 50 -m models/wizardcoder-python-34b/wizardcoder-python-34b-v1.0.Q4_K_M.gguf -p "Write a function in TypeScript that sums numbers"
```
Thanks for reading :) | 2023-10-05T21:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/170tghx/guide_installing_rocmhip_for_llamacpp_on_linux/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170tghx | false | null | t3_170tghx | /r/LocalLLaMA/comments/170tghx/guide_installing_rocmhip_for_llamacpp_on_linux/ | false | false | self | 54 | null |
Problems finetuning Llama2 7B, tried SFTTrainer, autotrain and llama.cpp none worked. | 4 | Hello all,
I have been trying to finetune *llama2* for quite some time now. I encountered many problems in the process, and after I finally managed to finetune it, llama spits nothing but the usual nonsense the base model would spit.
I would like to know if some of you have had similar experiences; here are some more details:
**My dataset:**
My dataset is made of 1500 examples in jsonl format. It uses prompts formatted with special tags such as "###Human:":
{"text": "###Human: Quels sont les objectifs principaux du projet ? ###Assistant: Les objectifs principaux du projet sont ..."}
Yes, the dataset is in French, but I barely see why that might be the issue here.
**How I trained/used the model:**
I am training on [vast.ai](https://vast.ai) and doing inference on a Mac M1.
At first I followed the usual tutorials that use **SFTTrainer**, like this one: [https://www.philschmid.de/instruction-tune-llama-2](https://www.philschmid.de/instruction-tune-llama-2), without success (I will try again if someone asks why; I don't remember the exact problems I encountered, but I gave up on SFTTrainer).
I tried to use **llama.cpp**'s new finetuning feature, which isn't mentioned in the readme but was merged a few days ago ([https://github.com/ggerganov/llama.cpp/pull/2632](https://github.com/ggerganov/llama.cpp/pull/2632)), and ran it like so:
./main -m llama-2-7b.Q4_0.gguf --lora lora-llama-2-7b-finetuned.bin --prompt "###Human: comment ça marche ? ###Assistant: " --temp 0.0
The output is in German or English most of the time and off-topic; sometimes the answer looks kinda right, but testing further shows that it's just acting like the base model.
More recently I used **autotrain**, but the result is the same:
autotrain llm --train --project_name llamawood --model meta-llama/Llama-2-7b-hf --data_path ./data --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 4 --num_train_epochs 9 --trainer sft > training.log &
# get the output folder in hf format, save it on the mac, then convert to ggml
python3 convert-lora-to-ggml.py saved_models/llamawood
# run inference
./llama.cpp/build/bin/main -m ./saved_models/ggml-model-q4_0.gguf --lora ./saved_models/llamawood/ggml-adapter-model.bin --prompt "###Human: Comment sont installer les capteurs ? ### Assistant: " --temp 0
What am I doing wrong? I tried to improve the dataset quality, add more examples (I had 300 examples at first), and use different methods, but it results in failure every time. Thanks!
RAG or Fine Tuning for custom copilot. | 1 | Hey everyone,
I'm diving into a new project and could use some pointers. I've got a bunch of code in my repos, mostly in .NET and JavaScript. I'm trying to tweak this code based on what users ask, kind of like a custom Copilot. I'm thinking of using a single agent or maybe something like AutoGen.
With all the code files I have (and their descriptions), I want to create new code from what users tell me. The catch is, I want this new code to look and feel like my old code.
So, here's where I need help:
1. Is turning my code into a different format (vectorizing) a good idea? If yes, how should I go about it? (Rough sketch of what I mean just after this list.)
2. Or should I just adjust a tool or model that's already out there to fit my needs?
3. If anyone's done something similar, I'd love to hear how you went about it.
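For (1), here's roughly what I had in mind; does this make sense? (`load_code_chunks` is a made-up helper, and the embedding model choice is arbitrary.)

```python
from sentence_transformers import SentenceTransformer
import faiss

# split repo files into snippets, embed them, and index for similarity search
chunks = load_code_chunks("repos/")  # hypothetical: returns a list of code snippets
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

# at query time: retrieve the top-5 most similar snippets to feed the LLM
q = model.encode(["user request goes here"], normalize_embeddings=True)
scores, ids = index.search(q, 5)
```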
Thanks in advance for any insights! | 2023-10-05T21:00:10 | https://www.reddit.com/r/LocalLLaMA/comments/170t79i/rog_or_fine_tuning_for_custom_copilot/ | mcwin1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170t79i | false | null | t3_170t79i | /r/LocalLLaMA/comments/170t79i/rog_or_fine_tuning_for_custom_copilot/ | false | false | self | 1 | null |
How to do an empirical LLM finetuning cost calculation before renting GPUs? | 1 | [removed] | 2023-10-05T19:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/170rkeu/how_to_do_an_empirical_llm_finetuning_cost/ | tensor64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170rkeu | false | null | t3_170rkeu | /r/LocalLLaMA/comments/170rkeu/how_to_do_an_empirical_llm_finetuning_cost/ | false | false | self | 1 | null |
How to run mistral-7b-openorca on Android phone? | 12 | So, I have my rooted Android phone with 12 GB of RAM, and I want to locally run mistral-7b-openorca on it. How can I do that? I assume I'll need a quantized model; which one is possible to run on Android: the AWQ, GPTQ, or GGUF version?
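(My current guess is the GGUF route with llama.cpp built under Termux, roughly like this, but I haven't tried it yet:)

```
pkg install git clang make
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make
./main -m mistral-7b-openorca.Q4_K_M.gguf -p "Hello"
```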
Thank you guys! | 2023-10-05T19:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/170re5q/how_to_run_mistral7bopenorca_on_android_phone/ | ZotD0t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170re5q | false | null | t3_170re5q | /r/LocalLLaMA/comments/170re5q/how_to_run_mistral7bopenorca_on_android_phone/ | false | false | self | 12 | null |
What LLMs would you hire for your Agent Corp? | 2 | ChatDev, AutoGen, and the like, where teams of LLM agents work together to complete complex and expanding tasks, are progressing nicely.
Excited to see the likes integrated with txtai or similar.
Hoping this community can help research the best locally run models available that can run on 24GB cards for the different types of agent tasks: coder, documentation, marketing, CEO, etc.
Seems Mistral 7B with the right LoRAs could be a contender for many positions.
Thoughts? | 2023-10-05T19:41:23 | https://www.reddit.com/r/LocalLLaMA/comments/170r8mb/what_llms_would_you_hire_for_your_agent_corp/ | MindOrbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170r8mb | false | null | t3_170r8mb | /r/LocalLLaMA/comments/170r8mb/what_llms_would_you_hire_for_your_agent_corp/ | false | false | self | 2 | null |
Concerns about Data Security with LLaMa in the Cloud | 1 | Hey, fellow LLaMa enthusiasts,
I've been thinking a lot about the security implications of using LLaMa and other large language models. My primary concern is that LLaMa, like many LLMs, isn't designed to work with encrypted or coded data. When we input our information, there's a potential risk of exposure. Since the nature of cloud hosting means data can potentially be accessed by third parties, this brings up the question of how secure our data truly is when interacting with LLaMa.
With the growing reliance on AI systems like LLaMa, it's crucial that we emphasize the importance of data security and user privacy. Does anyone else share these concerns? How do you ensure your data remains secure when using LLaMa? | 2023-10-05T19:29:56 | https://www.reddit.com/r/LocalLLaMA/comments/170qyao/concerns_about_data_security_with_llama_in_the/ | Few_Film8907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170qyao | false | null | t3_170qyao | /r/LocalLLaMA/comments/170qyao/concerns_about_data_security_with_llama_in_the/ | false | false | self | 1 | null |
Mistral vs Mistral finetunes vs 13B vs Llama-70B vs GPT-3.5 | 83 | I [did a comparison of Mistral-7B-0.1-GPTQ, its finetunes, some 13B models, Llama-70B-chat and the GPT-3.5.](https://low-tractor-1b9.notion.site/LLM-comparison-7B-13B-8abfda2142d24c14a9a5a35ed83a0f10)
No sweeping conclusions, that's up to you, but Mistral holds up great against the big models.
Only ChatGPT and Mira (a custom Russian model) were able to answer the question "Where are Charles Dickens and Charles Darwin buried?". Which of your favorite models can answer it correctly? (The correct answer is "both in Westminster Abbey".)
P.S. All GPTQ tests were done with Oobabooga + ExLlama + 4bit quants from TheBloke | 2023-10-05T19:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/170qtj3/mistral_vs_mistral_finetunes_vs_13b_vs_llama70b/ | domrique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170qtj3 | false | null | t3_170qtj3 | /r/LocalLLaMA/comments/170qtj3/mistral_vs_mistral_finetunes_vs_13b_vs_llama70b/ | false | false | self | 83 | null |
Finetuned models for function calling? | 17 | Hello. Are there finetuned LLaMa2 models that can reliably work for function calling yet? (Setting up agents in Langchain and such.) The previous discussion is around 2 months old, so I wonder if there have been any advances on this.
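By "function calling" I mean getting the model to reliably emit structured calls like this (tool name made up), which the agent framework then executes:

```json
{"function": "web_search", "arguments": {"query": "best local llama2 finetunes"}}
```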
I have been trying to build a chatbot that can search on the internet and can be locally hosted but the models I've tried are terrible at this. | 2023-10-05T18:29:13 | https://www.reddit.com/r/LocalLLaMA/comments/170pej3/finetuned_models_for_function_calling/ | RangerRocket09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170pej3 | false | null | t3_170pej3 | /r/LocalLLaMA/comments/170pej3/finetuned_models_for_function_calling/ | false | false | self | 17 | null |
Are instruction-tuning LORAs transferable? | 3 | Hey all, I am in the process of testing several models for fine-tuning and that question cropped up.
I am trying to emulate things [these guys did](https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt), which is to use fine-tuning to add facts to a model (I know RAG is usually recommended, but there are reasons for me to prefer to force my way through fine-tuning there). The trick is to fine-tune facts on a **foundation** model and once done do another pass of fine-tuning for instruction-following.
These two fine-tuning passes can each produce a LoRA.
My questions are:
- Can the instruction LoRA be reused? I'll have to add data frequently to the model, so this pipeline will happen often. If I can skip retraining the instruction-following every time, it would be great. (Rough sketch of what I'm hoping for just after this list.)
- In such a case, is there a risk that the model fails to obey instructions on the newly learned facts?
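Concretely, this is the kind of reuse I'm hoping is possible (untested sketch using peft's multi-adapter loading; how to best activate or combine both adapters is exactly the part I'm unsure about):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("my-foundation-model")
# freshly retrained facts adapter + the old, reused instruction adapter
model = PeftModel.from_pretrained(base, "facts-lora-v2", adapter_name="facts")
model.load_adapter("instruction-lora-v1", adapter_name="instruct")
```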
In general, if someone has good resources on how fine-tuning makes models forget things and how to avoid that, I would be interested. | 2023-10-05T18:18:29 | https://www.reddit.com/r/LocalLLaMA/comments/170p4ru/are_instructiontuning_loras_transferable/ | keepthepace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170p4ru | false | null | t3_170p4ru | /r/LocalLLaMA/comments/170p4ru/are_instructiontuning_loras_transferable/ | false | false | self | 3 | null |
Tiny Language Models Thrive With GPT-4 as a Teacher | 24 | 2023-10-05T18:14:11 | https://www.quantamagazine.org/tiny-language-models-thrive-with-gpt-4-as-a-teacher-20231005 | emptyplate | quantamagazine.org | 1970-01-01T00:00:00 | 0 | {} | 170p0pn | false | null | t3_170p0pn | /r/LocalLLaMA/comments/170p0pn/tiny_language_models_thrive_with_gpt4_as_a_teacher/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'eC9KQEgQY0d-VwV1At-uh_71amxHnDN70x5NQ58xRCg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?width=108&crop=smart&auto=webp&s=9e36667ee58c2aa2ae2465b002b456b8a8b2376d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?width=216&crop=smart&auto=webp&s=3b3ac7f6e2e4bd841eb2dce12e4f7812a8dc2de9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?width=320&crop=smart&auto=webp&s=af6d376c5b99aafd4be26d282be4bfde618df710', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?width=640&crop=smart&auto=webp&s=929088f72e17e557e61595cd5f26521bd17eb22e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?width=960&crop=smart&auto=webp&s=5387497a7df11a6e387ed4372eb771e70fec1a8c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?width=1080&crop=smart&auto=webp&s=2d08a2415f14ce5ccd1d40db463ab01b36bdc3c7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/yDHt_GW3bvKpPsfucJKLg7_LQp2k7NcV-z6OcI7QSoI.jpg?auto=webp&s=e09a5f33d0ce1418a688ac1502b5d24e5cba303d', 'width': 1200}, 'variants': {}}]} | ||
Can anyone tell me the performance of LLaVA vs BLIP? | 10 | Can any tell me the performance of LLaVA vs Blip? Which one leads to higher quality captioning of images? Is there a benchmark somewhere of the various VLM for these kind of models? | 2023-10-05T17:53:40 | https://www.reddit.com/r/LocalLLaMA/comments/170oi1d/can_anyone_tell_me_the_performance_of_llava_vs/ | ninjasaid13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170oi1d | false | null | t3_170oi1d | /r/LocalLLaMA/comments/170oi1d/can_anyone_tell_me_the_performance_of_llava_vs/ | false | false | self | 10 | null |
LoRA training and Llama fine tuning scripts | 2 | I found 2 scripts:
https://github.com/ggerganov/llama.cpp/blob/45eba9369fbcbd7f677eba9a2d3e4ffcfdc81824/examples/finetune/README.md
https://github.com/ggerganov/llama.cpp/blob/45eba9369fbcbd7f677eba9a2d3e4ffcfdc81824/examples/export-lora/README.md
Fine tuning looks like this:
```
# get training data
wget https://raw.githubusercontent.com/brunoklein99/deep-learning-notes/master/shakespeare.txt
# finetune LORA adapter
./bin/finetune \
    --model-base open-llama-3b-v2-q8_0.gguf \
    --checkpoint-in chk-lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.gguf \
    --checkpoint-out chk-lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.gguf \
    --lora-out lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.bin \
    --train-data "shakespeare.txt" \
    --save-every 10 \
    --threads 6 --adam-iter 30 --batch 4 --ctx 64 \
    --use-checkpointing
```
For example this model:
https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF
This is a base model? Can I use it like base model? And can I skip the checkpoint option? Or it’s a checkpoint? And can I use it on CPU (or mps for MacBook Pro M1 16 GB RAM)? | 2023-10-05T17:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/170obia/lora_training_and_llama_fine_tuning_scripts/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170obia | false | null | t3_170obia | /r/LocalLLaMA/comments/170obia/lora_training_and_llama_fine_tuning_scripts/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
Give our customers an LLM to talk to about their data | 9 | Hello guys, we are a small hosting, and we are testing langchain and llama2 llm locally. We want to be able to create a chatbot for our hosting customers so the llm can check out their hosting stats and our customers can ask questions such as "how much more space do I need if I want to add a new email account, create a database, when will I need to upgrade my account, etc,, So our main question is, where can we read up on how many concurrent users we can have if we use langchain and how can we setup ques or buffering so when the GPU is bogged down, it will still process the queries from our customers but keep them in a waiting list until their request is up. Is this a crazy project or doable? Thanks! | 2023-10-05T17:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/170npuw/give_our_customers_an_llm_to_talk_to_about_their/ | Background-Maybe-381 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170npuw | false | null | t3_170npuw | /r/LocalLLaMA/comments/170npuw/give_our_customers_an_llm_to_talk_to_about_their/ | false | false | self | 9 | null |
Url scraping for llama | 6 | Hello, does anyone know an application or chrome extension that once we throw a URL at it, is able to scrap the text, clean it up, and possibly save it to some database? I am planning to build a personal logger for my everyday activity so I can use this as embeddings or use it to fine tune my local llama as my personal assistant. I am surprised this tool is not out there yet. | 2023-10-05T17:20:13 | https://www.reddit.com/r/LocalLLaMA/comments/170nodc/url_scraping_for_llama/ | ilt1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170nodc | false | null | t3_170nodc | /r/LocalLLaMA/comments/170nodc/url_scraping_for_llama/ | false | false | self | 6 | null |
How long till LLM characters can initiate a conversation? | 7 | Imagine setting up a text-based roleplaying game. You send warriors off to a new land and they prompt you when they need help (or whatever you want them to do really). It could even be in real time if you want, so maybe an "event" could happen every couple of hours.
Or maybe you set up a foreign character to teach you a language. They would be a pen pal of sorts, messaging you each day to discuss your progress. Maybe you could help them too with their made up questions for *your* language!
Things like that would be so cool. Can we expect more interaction like this in 2024? | 2023-10-05T17:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/170nagh/how_long_till_llm_characters_can_initiate_a/ | JebryyathHS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170nagh | false | null | t3_170nagh | /r/LocalLLaMA/comments/170nagh/how_long_till_llm_characters_can_initiate_a/ | false | false | self | 7 | null |
LLM Suggestions to fine tune on an NVIDIA 2080 Super | 2 | The title. Any popular large language models I can use to fine-tune on the above GPU? My use case is to build a type of chatbot on custom text interactions. (Kinda leaning towards the 7B llama that I can hopefully tune with QLoRA…?)
I would also love dataset recommendations.
I've listed out my specs below:
GPU: RTX 2080 Super
vCPU: 6
CPU RAM: 48GB
Disk Space: 128GB
O/S: Ubuntu 22.0
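For context, the rough QLoRA setup I was planning (values copied from common QLoRA examples, not tuned at all):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 base so the 7B fits in 8GB VRAM, LoRA on the attention projections
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```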
Sorry kinda new here, so I apologize im not using the terms right. | 2023-10-05T16:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/170n20m/llm_suggestions_to_fine_tune_on_an_nvidia_2080/ | laveshnk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170n20m | false | null | t3_170n20m | /r/LocalLLaMA/comments/170n20m/llm_suggestions_to_fine_tune_on_an_nvidia_2080/ | false | false | self | 2 | null |
Hi guys, I haven't been in the sub since llama 1... | 1 | [removed] | 2023-10-05T16:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/170mqkh/hi_guys_i_havent_been_in_the_sub_since_llama_1/ | Norok_The_Diablo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170mqkh | false | null | t3_170mqkh | /r/LocalLLaMA/comments/170mqkh/hi_guys_i_havent_been_in_the_sub_since_llama_1/ | false | false | self | 1 | null |
Best free opensource model for chatbot? | 3 | Hello, I want to build a chatbot for a dating website with the personality of a girl, so can you guys suggest the best free model? | 2023-10-05T15:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/170l6yd/best_free_opensource_model_for_chatbot/ | Medium-Ask-5384 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170l6yd | false | null | t3_170l6yd | /r/LocalLLaMA/comments/170l6yd/best_free_opensource_model_for_chatbot/ | false | false | self | 3 | null |
Open-sourcing a simple automation/agent workflow builder | 30 | Hey /r/LocalLLaMA,
tl;dr: Open-sourcing a simple automation/agent workflow creator: [https://github.com/spyglass-search/talos](https://github.com/spyglass-search/talos)
We're open-sourcing a project that lets you build simple automations/agent workflows that use LLMs for different tasks. Kinda like Zapier or IFTTT but focused on using natural language to accomplish your tasks.
It's super early but we'd love to start getting feedback to steer it in the right direction. It currently supports OpenAI and local models through [llm](https://github.com/rustformers/llm).
Here's a video of a workflow in action:
[Extracting data from a wikipedia page, no parsing needed!](https://reddit.com/link/170k91i/video/8acohaqrdesb1/player)
If it's not clear, the workflow above:
1. Fetches a Wikipedia page ([https://en.wikipedia.org/wiki/Katalin\_Karik%C3%B3](https://en.wikipedia.org/wiki/Katalin_Karik%C3%B3))
2. Grabs information based on your request, stuffing it into a JSON object.
3. Uses the JSON object to fill out a template.
Right now it supports grabbing data from URLs, PDFs, or just a text blob, and pushing the final result into a template.
We'd love to hear from the community about any workflows (or workflow wishlists) that you'd like to accomplish. We're looking to add more sources (Google Sheets? Jira? etc) and destinations (Send an email? Create a Google Doc?) so we'd love to hear about what workflows you're doing now that could benefit from automation. | 2023-10-05T15:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/170k91i/opensourcing_a_simple_automationagent_workflow/ | andyndino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170k91i | false | null | t3_170k91i | /r/LocalLLaMA/comments/170k91i/opensourcing_a_simple_automationagent_workflow/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'qdXoAiUtJHNqkrhQ7hjDFXSGqDVImxGezDFvaf0z_iQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?width=108&crop=smart&auto=webp&s=d33b709412d4646109ebd44c92a32fd03ee9cb4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?width=216&crop=smart&auto=webp&s=9388d9fd39e61f5d4b87ae87f5eef958a03c88e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?width=320&crop=smart&auto=webp&s=e5cc8146eacfd0b92703b19a4b68e645db3c6c88', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?width=640&crop=smart&auto=webp&s=05102a2152112b0ac698f9939a130680f52aa07f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?width=960&crop=smart&auto=webp&s=106ac666b415c385fe765cc40cd8636812bcdd51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?width=1080&crop=smart&auto=webp&s=5ed4934ef53c161f5e1963585c930e14baf404f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SUzaXg2XUDnB6EwT158_DSgIGojMmTlyBf9p3FQc6Hk.jpg?auto=webp&s=7d0b4c35583fc0af6bfaf32c8d12667f9e67202a', 'width': 1200}, 'variants': {}}]} |
training babyPhi on "Textbooks Are All You Need" type data | 29 | Hello everyone!
First and foremost, I'm deeply grateful for this community. I've learned and benefited immensely from all of you, and I'm now excited to collaborate with some of you on an interesting new project I'm starting.
I'm particularly impressed with the quality of data that u/docsoc1 produced using SciPhi. My aim is to push the boundaries with smaller models by training a model in the 300-500M parameter range on high quality "Textbooks Are All You Need" type data. I believe that models of this scale can democratize LLMs, making them accessible to anyone with a few gigs of VRAM on their laptop.
Here are a few models I've identified:
1. facebook/opt-350m
2. bigscience/bloom-560m
3. EleutherAI/pythia-410m-v0
4. gpt2-medium (355M params)
For the "Textbooks Are All You Need" type datasets, I'm considering:
1. SciPhi textbooks [data](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
2. SciPhi python [data](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-python-textbook)
3. tiny-codes [data](https://huggingface.co/datasets/nampdn-ai/tiny-codes)
I'd love recommendations for any additional datasets that might be useful. Also, I'd appreciate input on which model to choose from the list above; I'm currently leaning towards opt-350m cuz of its impressive benchmarks.
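My rough starting point is just the stock HF causal-LM loop, nothing exotic (hyperparams are guesses, and the dataset column name needs checking):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

tok = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
ds = load_dataset("emrgnt-cmplxty/sciphi-textbooks-are-all-you-need", split="train")

def tokenize(batch):
    # "text" is a guess at the column name; check the dataset card
    return tok(batch["text"], truncation=True, max_length=1024)

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
args = TrainingArguments("babyphi-ckpt", per_device_train_batch_size=8,
                         gradient_accumulation_steps=4, num_train_epochs=1,
                         fp16=True, logging_steps=50)
trainer = Trainer(model=model, args=args, train_dataset=ds,
                  data_collator=DataCollatorForLanguageModeling(tok, mlm=False))
trainer.train()
```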
OSS FTW! | 2023-10-05T13:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/170iqkk/training_babyphi_on_textbooks_are_all_you_need/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170iqkk | false | null | t3_170iqkk | /r/LocalLLaMA/comments/170iqkk/training_babyphi_on_textbooks_are_all_you_need/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'SRQqCJLLBWenZA-Vk4NmfhLIYoGP4qO_YhRZNcIHsgE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=108&crop=smart&auto=webp&s=5c63eeb0d15d3381f0bbadec162cca2f9ed72747', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=216&crop=smart&auto=webp&s=0e003282dbbd320cd65fbcd21cf43ea4220d4c04', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=320&crop=smart&auto=webp&s=de1f815ca8dd7cb2df967d4ca3ca2fbc79bbf1ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=640&crop=smart&auto=webp&s=9e431c0b74666624a1bd0992a10196c3c4fea079', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=960&crop=smart&auto=webp&s=1856bb34a5887774727ce79ed3870431ef46c163', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=1080&crop=smart&auto=webp&s=a8ae39a6fa999681feedd3e3475c0424cc99deb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?auto=webp&s=8c2bdd754ca7ecf62f55551ff88ca9d0c891f6c9', 'width': 1200}, 'variants': {}}]} |
How to create chat version of any llm like llama, qwen, etc | 0 | I want to create a chat version of an LLM for custom data, say for answering financial questions as if you were talking to a bank manager. It should feel like a conversation, carrying the context of previous turns. Any guidance? It should be reasonably fast; I have an A10 for inference. | 2023-10-05T12:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/170g8ga/how_to_create_chat_version_of_any_llm_like_llama/ | s1lv3rj1nx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170g8ga | false | null | t3_170g8ga | /r/LocalLLaMA/comments/170g8ga/how_to_create_chat_version_of_any_llm_like_llama/ | false | false | self | 0 | null |
Issue in Summarization using LLaMa2 | 8 | I am trying to summarize call-transcript documents using LLaMa2. Since the transcripts are long, I split each conversation into smaller chunks of about 1500 words and summarized the chunks individually. Then I concatenated the summaries of all the chunks to generate a final summary.
The response given by LLaMa2 for the final summary is exactly the same as the summary of the first chunk.
What can I do so that it uses the summaries of all the chunks to generate the final summary?
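In pseudocode, the pipeline is a basic map-reduce (`summarize()` stands in for the LLaMa2 call):

```python
# map: summarize each ~1500-word chunk separately
chunk_summaries = [summarize(chunk) for chunk in chunks]

# reduce: merge ALL the partial summaries into one final summary
combined = "\n\n".join(chunk_summaries)
final_summary = summarize(combined)
```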
Prompt : Your task is to generate a detailed summary of about 250 words of the text delimited by triple backticks \`\`\`{summaries}\`\`\`. Summary: | 2023-10-05T12:00:44 | https://www.reddit.com/r/LocalLLaMA/comments/170g52i/issue_in_summarization_using_llama2/ | Bl4ze26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170g52i | false | null | t3_170g52i | /r/LocalLLaMA/comments/170g52i/issue_in_summarization_using_llama2/ | false | false | self | 8 | null |
Generate Alpaca format from plain text | 1 | Hi Guys,
What is the best method to generate a good alpaca dataset from plaintext?
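For reference, by "Alpaca format" I mean instruction/input/output triples like this (toy example):

```json
{"instruction": "Summarize the following paragraph.", "input": "LLaMA is a family of large language models released by Meta...", "output": "LLaMA is Meta's family of large language models."}
```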
If you recommend using a model, which one would you recommend? | 2023-10-05T11:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/170fls3/generate_alpaca_format_from_plain_text/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170fls3 | false | null | t3_170fls3 | /r/LocalLLaMA/comments/170fls3/generate_alpaca_format_from_plain_text/ | false | false | self | 1 | null |
AWQ vs EXL2 | 48 | So, I notice /u/TheBloke, pillar of this community that he is, has been quantizing AWQ and skipping [EXL2](https://huggingface.co/models?search=exl2) entirely, while still producing GPTQs for some reason.
Seeing as I found EXL2 to be really fantastic (13b 6-bit or even 8-bit at blazing fast speeds on a 3090 with Exllama2) I wonder if AWQ is better, or just easier to quantize.
Some posts allege it's faster than GPTQ, but EXL2 is also faster than GPTQ.
1) Is it faster than EXL2?
2) Does it have usable \~2.5-bit quantization where 24GB would run a 70b model?
3) What is holding up wider support on e.g., text-generation-webui? | 2023-10-05T11:29:24 | https://www.reddit.com/r/LocalLLaMA/comments/170fjec/awq_vs_exl2/ | Dead_Internet_Theory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170fjec | false | null | t3_170fjec | /r/LocalLLaMA/comments/170fjec/awq_vs_exl2/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=108&crop=smart&auto=webp&s=2c0b032bdc9d0820b318f57def3af620afe60ee8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=216&crop=smart&auto=webp&s=7b29327d787489e6d4f61726ba9d10a09ed099d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=320&crop=smart&auto=webp&s=9f1b5bed20b4b058b596c2a430a47d3b9c857e03', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=640&crop=smart&auto=webp&s=7b47505d7a8ebd834ca805c293d16277b5772c12', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=960&crop=smart&auto=webp&s=c7be2b4b0ad69f9ff176d6a0027458c22a63a5f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=1080&crop=smart&auto=webp&s=dea3a5ccadcdb95c05dca40d482f50c976b88233', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?auto=webp&s=6e3e4780238d40a2755c2289e7e3d722eeb8ea30', 'width': 1200}, 'variants': {}}]} |
Prompt WizardCoder with history | 1 | I want to prompt the WizardCoder model.
The official doc from the WizardLM team is the [model card](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0):
`Below is an instruction that describes a task. Write a response that appropriately completes the request.`
`### Instruction:`
`{instruction}`
`### Response:`
But I want to add a history of the previous conversation to my prompt. I couldn't find anything about this in the official doc. I tried something like this, but as you can see, the model doesn't understand that it's a conversation history:
`Below is an instruction that describes a task. Write a response that appropriately completes the request.`
`### Instruction:`
`My name is Alex`
`### Response:Hi Alex, how are you?`
`### Instruction:`
`What is my name ?`
`### Response: I'm not sure. Can you tell me?`
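One idea I had (completely untested) is to fold the history into the instruction itself, something like:

`Below is an instruction that describes a task. Write a response that appropriately completes the request.`

`### Instruction:`

`Continue this conversation. Earlier, the human said "My name is Alex" and you replied "Hi Alex, how are you?". The human now says: What is my name?`

`### Response:`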
Does someone have any idea ? Should I just add in the prompt system the previous dicussions ? | 2023-10-05T09:47:39 | https://www.reddit.com/r/LocalLLaMA/comments/170dsib/prompt_wizard_coder_with_history/ | Clear-Entry4618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170dsib | false | null | t3_170dsib | /r/LocalLLaMA/comments/170dsib/prompt_wizard_coder_with_history/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'r0gcJabYq7dSCuNTMHKcWTTk1dVsFYAIdt95tBN68kw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?width=108&crop=smart&auto=webp&s=6b1a72c5e005e68dd9fb4d90c1b46d64987d346f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?width=216&crop=smart&auto=webp&s=ef026faebe9a7bf54638784dfea025690638f9b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?width=320&crop=smart&auto=webp&s=d9cf8a4f08a49bc3621596f8818f80d139f44b3f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?width=640&crop=smart&auto=webp&s=2185092fee61b6284441257518a58eddc78aaf09', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?width=960&crop=smart&auto=webp&s=648a1f8c1d39a5c5be731cf4f14456e2468fe495', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?width=1080&crop=smart&auto=webp&s=415fb22a72c5d405d3eba54ce5c0bf6aecd2ce68', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FUJZ3wTbHUTg4uMG0tWHjdrcVpb9YHtKLcqidUi_5Ng.jpg?auto=webp&s=94d40c3e2dd6d20f154eaeb22fb5dda76d556955', 'width': 1200}, 'variants': {}}]} |
after being here one week | 757 | 2023-10-05T09:24:12 | upk27 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 170dg70 | false | null | t3_170dg70 | /r/LocalLLaMA/comments/170dg70/after_being_here_one_week/ | false | false | 757 | {'enabled': True, 'images': [{'id': 'yY-PAFUKHxIb8Gc5AOVeQGJbklhmRAkna7vmUN_GKAk', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/whsrfzh0rcsb1.png?width=108&crop=smart&auto=webp&s=456c5545d27c12659ee6591e8fbfde3cb595b6b7', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/whsrfzh0rcsb1.png?width=216&crop=smart&auto=webp&s=18d1dda76b27dd8036d95945fd89be905db4f879', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/whsrfzh0rcsb1.png?width=320&crop=smart&auto=webp&s=b92ad25e270be047c84ad67250e402a3c03fd5a4', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/whsrfzh0rcsb1.png?width=640&crop=smart&auto=webp&s=880bfeccddb31a5ff0d2e1dd9627b6bef9e26ddc', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/whsrfzh0rcsb1.png?auto=webp&s=88e1257de10820c86183dc60df05581796ed5289', 'width': 750}, 'variants': {}}]} | |||
Curated list of LLMs and services for RP chatbots, WebUIs, and Locally Hosted LLMs | 8 | I am making a curated list of LLMs and services for Role-Play chatbots, WebUIs, and Locally Hosted LLMs.
I have a long history with, and passion for, chatbots, and the LLM boom sparked my interest even further. I would like to share my knowledge and experience with the community too, but there is only so much one can know alone.
Currently in my list the models are:
* PygmalionAI
* Mythalion
* Manticore
* SynthIA
* CalliopeDS
I keep collecting additional information on these models daily to help the community navigate RP AIs, including each model's pros and cons for various kinds of RP.

Also, I would like to ask the community which features and properties of a specific model are actually relevant: for example, the RAM and VRAM needed just to run it (not to train or fine-tune it), general knowledge, or the source dataset?
Current link to the curated list: [https://github.com/starlightrobotics/rp-chatbots/](https://github.com/starlightrobotics/rp-chatbots/) | 2023-10-05T08:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/170cm19/curated_list_of_llms_and_services_for_rp_chatbots/ | starlightrobotics | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170cm19 | false | null | t3_170cm19 | /r/LocalLLaMA/comments/170cm19/curated_list_of_llms_and_services_for_rp_chatbots/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'pHpN1PiP3jn42TpQyFaUtAvfkVPgha_xl5tN61gtAhA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?width=108&crop=smart&auto=webp&s=e48b26e088f1c8c53d4da7150e1d483851f99abc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?width=216&crop=smart&auto=webp&s=8acd4f1b24e44c050b8c46e70c49da622134c92b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?width=320&crop=smart&auto=webp&s=34e976c1c7cab1d07321b9955d3ebe6173473ee0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?width=640&crop=smart&auto=webp&s=fcd3d4e542a180541ad28923a9a92c9416b64f43', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?width=960&crop=smart&auto=webp&s=494268ab8cdec6c8befdb5cb8e409b1437b9ded2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?width=1080&crop=smart&auto=webp&s=5c768a2a27dc32c43c6d60b2bd9a827f481229c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/of6uiUIKf6HSavWfn69Byc2Ca_8CksOLq9lFeqlZyWs.jpg?auto=webp&s=60caf4daabcd563186d18a3cd93c8b6554b5c3fb', 'width': 1200}, 'variants': {}}]} |
Elon Musk just unveiled a time-limited Bitcoin giveaway event. | 0 | 2023-10-05T08:05:32 | GiveTJaDollar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 170c9ej | false | null | t3_170c9ej | /r/LocalLLaMA/comments/170c9ej/elon_musk_just_unveiled_a_timelimited_bitcoin/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'RILuZx-d7o-dpozr7-CIZqokQNk48vlnFzrx794ocFo', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=108&crop=smart&format=png8&s=8ace100926f8778ab3ed9203a91fb41b470fdb05', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=216&crop=smart&format=png8&s=ad7820c15b777ead2dd91e39c5eb4bb6228e5b02', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=320&crop=smart&format=png8&s=7d4cd150a18dcf3d7ab5b6a327beae5449d4e772', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=640&crop=smart&format=png8&s=3d2418f64595421c5505adcc354d8edc6a92f37a', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=960&crop=smart&format=png8&s=832ac1b5191b0c4666a57fda663e3082ceeac023', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=1080&crop=smart&format=png8&s=d24065cd785b96e6a7d3ed7eabdc522b2e71888d', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?format=png8&s=2b61c65f18c260b93218838898557bcd6367cce1', 'width': 3024}, 'variants': {'gif': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=108&crop=smart&s=5779c7f6ced0264702e6123bb0d312e807837467', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=216&crop=smart&s=ab68d1e647537a9d1e1c61a61f742a479334d121', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=320&crop=smart&s=17919e4a86d57baacabf6e61a5c0fcf90cedcf2f', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=640&crop=smart&s=9d270545843d6d994f72c694282137d7b7ffa112', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=960&crop=smart&s=b66edd4f65c953a5ede295db3595fd60903ca19c', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=1080&crop=smart&s=96c9c7fc2ec921e7c458447a182efe5b6274bc0a', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?s=3a1e5a6dae273478ab7dfcdd1b2ec99f34da0b45', 'width': 3024}}, 'mp4': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=108&format=mp4&s=012cb61781f27de04a61989d1a4b1ed745113844', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=216&format=mp4&s=7c87741afe2508bef12fd4eea70b4b85a5d1321b', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=320&format=mp4&s=8c335556fcef08fc40d50ff5c438aa4e3e6f37f7', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=640&format=mp4&s=98a76cf3168b418c58ea57eaa00bd98c1a145c5a', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=960&format=mp4&s=c7ddbd533d5d7cb4b21b6ddc7eed33c74da22113', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/q78mblf2dcsb1.gif?width=1080&format=mp4&s=b5049f110af74b481fe1bba5959a605a6a983bf4', 'width': 1080}], 'source': {'height': 4032, 'url': 
'https://preview.redd.it/q78mblf2dcsb1.gif?format=mp4&s=d21daeca87eb5410f3cf7c590e25b54bb79fded4', 'width': 3024}}}}]} | |||
What is the best open-source model for generating lesbian erotica? | 6 | Basically title for anybody with relevant experience, and as an extra you're welcome to expand on how to fine-tune the model accordingly. Thanks in advance. | 2023-10-05T07:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/170c38u/what_is_the_best_opensource_model_for_generating/ | coumineol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 170c38u | false | null | t3_170c38u | /r/LocalLLaMA/comments/170c38u/what_is_the_best_opensource_model_for_generating/ | false | false | self | 6 | null |
fine tune llama to generate text with citations | 3 | Hi, I am performing continued pre-training (on unsupervised data) and then instruction fine-tuning (SFT) to build a domain-adaptive LLM. Is there any way I can train Llama with citations in the pre-training stage, so that during inference it can generate the answer and also provide citations?

I am passing chunks of data (each chunk is a page) so that Llama learns the domain knowledge. Along with each chunk I want to pass the citation of the page as well, but I would like to know the correct way to format it.
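For illustration, here is a minimal sketch of one formatting idea: wrap each page in lightweight source tags so the citation sits next to the content in every pre-training chunk. The `<|source|>` markers are a purely hypothetical convention, not special tokens of any existing model:

```python
# Hypothetical chunk formatting for continued pre-training with citations.
# Each chunk (one page) carries its citation inline, so the model repeatedly
# sees content paired with the reference it should later emit.

def format_chunk(text, doc_id, page):
    citation = f"{doc_id}, p. {page}"
    return (f"<|source|>{citation}<|/source|>\n"
            f"{text}\n"
            f"Reference: {citation}\n")

pages = [
    ("manual_v2.pdf", 12, "The pump must be primed before first use..."),
    ("manual_v2.pdf", 13, "Maintenance intervals depend on the duty cycle..."),
]

with open("pretrain_corpus.txt", "w", encoding="utf-8") as f:
    for doc_id, page, text in pages:
        f.write(format_chunk(text, doc_id, page) + "\n")
```

Then, in the SFT stage, the instruction data would include answers that end with the same `Reference: ...` line, so the citation behavior carries over to inference.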
Any help is greatly appreciated. | 2023-10-05T05:42:50 | https://www.reddit.com/r/LocalLLaMA/comments/1709zi9/fine_tune_llama_to_generate_text_with_citations/ | weedyuh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1709zi9 | false | null | t3_1709zi9 | /r/LocalLLaMA/comments/1709zi9/fine_tune_llama_to_generate_text_with_citations/ | false | false | self | 3 | null |
Best 13B coding language model to date? | 11 | What's the best coding LLM out there as of today, especially under 13B parameters? | 2023-10-05T04:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/17095bu/best_13b_coding_language_model_to_date/ | Mrleibniz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17095bu | false | null | t3_17095bu | /r/LocalLLaMA/comments/17095bu/best_13b_coding_language_model_to_date/ | false | false | self | 11 | null |
The Progress on NanoPhi! Check out a textbook based, logical, uncensored AI in just 0.3B! | 99 | Hi everyone, it's me again. Thanks for all the support last time; it really helped the project gain some traction!

I took the feedback from the old post:

- I couldn't use the GPT-4 tokenizer; I spent hours trying to get it to work, but I'm fine-tuning, not pre-training, so I ran into a million issues

- Once some other goals are done, I'm going to try switching from NanoGPT to lit-gpt, and see if that can help with GGUFs

- Incorporated more datasets, especially for instruct and roleplay; working on adding more code/logic/math datasets
- The model is really uncensored! I gave it to a friend for 10 mins and oh god the cursed NSFW content he made.
Everything's on the GitHub repo right here!
https://github.com/VatsaDev/NanoPhi | 2023-10-05T04:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/1708enq/the_progress_on_nanophi_check_out_a_textbook/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1708enq | false | null | t3_1708enq | /r/LocalLLaMA/comments/1708enq/the_progress_on_nanophi_check_out_a_textbook/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': 'YPjchsUHdEAk9LHUGLqV615eVebaMj4C3PoZXv0PaGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?width=108&crop=smart&auto=webp&s=ddfd4c234be48e156c16a6801ad38cd475ac1ad1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?width=216&crop=smart&auto=webp&s=b19528a5355992eceb724575dedef2285029424f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?width=320&crop=smart&auto=webp&s=4484b8f819e488802262b2d9d08d79ef1a0db129', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?width=640&crop=smart&auto=webp&s=090488d1cd315d0370ee894568b8c692dfe673fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?width=960&crop=smart&auto=webp&s=9bf16009787f45f5aded266135af8aca0fac33b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?width=1080&crop=smart&auto=webp&s=a0c99c30cee28e959e20c21fce3740afdf2c4fe1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_MRreZAOhPOx3Mr2KXEadSPxCPMU7lV-nZe0osXXLt0.jpg?auto=webp&s=3098f01adffd3be5174fff958bfc75d56b3d4a19', 'width': 1200}, 'variants': {}}]} |
Does Mixed Precision training reduce memory usage when training the model? | 4 | According to this page [https://huggingface.co/docs/transformers/main/perf_train_gpu_one](https://huggingface.co/docs/transformers/main/perf_train_gpu_one), mixed precision training does not optimize memory utilization, though the page puts the "no" in parentheses.

Moreover, this page [https://huggingface.co/docs/transformers/model_memory_anatomy](https://huggingface.co/docs/transformers/model_memory_anatomy) says that mixed precision with AdamW requires 18 bytes per parameter (4 bytes for the fp32 model, 2 bytes for the fp16 copy; gradients and the optimizer both use fp32, and AdamW keeps 2 states). So in total it requires 6 (for the model copies) + 8 (for Adam) + 4 (for gradients) = 18 bytes. Does this mean that training with mixed precision requires more memory?
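To make that arithmetic concrete, here is the per-parameter accounting applied to a 7B model. Note this counts only weights, gradients, and optimizer states; activations, which are where fp16 actually saves memory, are left out:

```python
# Fixed per-parameter training memory (weights + gradients + AdamW states).
# Activation memory is excluded; that is where mixed precision saves.

N = 7e9  # parameters

fp32_adamw  = N * (4 + 4 + 8)      # fp32 weights + fp32 grads + 2 AdamW states = 16 B/param
mixed_adamw = N * (4 + 2 + 4 + 8)  # fp32 master + fp16 copy + fp32 grads + AdamW = 18 B/param

print(f"full fp32 : {fp32_adamw  / 2**30:.0f} GiB")  # ~104 GiB
print(f"mixed     : {mixed_adamw / 2**30:.0f} GiB")  # ~117 GiB
```

So if this accounting is right, the fixed cost is indeed higher under mixed precision, and any footprint reduction would have to come from the halved activations rather than from these fixed buffers.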
However, this page [https://lightning.ai/docs/pytorch/stable/common/trainer.html](https://lightning.ai/docs/pytorch/stable/common/trainer.html) from PyTorch Lightning says

>Half precision, or mixed precision, is the combined use of 32 and 16 bit floating points to **reduce memory footprint** during model training...
So which one is correct? | 2023-10-05T03:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/1707p56/does_mixed_precision_training_reduce_memory_usage/ | scienceotaku68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1707p56 | false | null | t3_1707p56 | /r/LocalLLaMA/comments/1707p56/does_mixed_precision_training_reduce_memory_usage/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]} |
Tested: 24gb max context sizes with 70B exllamav2 | 57 | Recently, some people appear to be in the dark about the maximum context when using certain exllamav2 models, as well as about issues with Windows drivers skewing performance. Myself included.

I run desktop Ubuntu, with a 3090 powering the graphics and consuming 0.7 GB. These results are with text-gen-webui, which auto-installs flash-attention-2.
- 2.55bpw = 3328ctx
- 2.40bpw = 7168ctx
- For 7k, t/s appears to drop linearly from 20 t/s to 11 t/s at the tail end of 7k. (By tail end, I mean before the earliest cached prompt block is deleted, which delays prompts further but lets you continue forever.)
Further testing of 2.4bpw, loading different context sizes, shows these VRAM figures:

- 21.7 GB for 512

- 21.84 GB for 1024

- 22.15 GB for 2048

- 23.09 GB for 4096

- 23.72 GB for 7168

- **Per 1k of context, ~0.28 GB needed** (see the sketch below)
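A quick back-of-the-envelope sketch of that linear fit, for estimating max context from a VRAM budget. The numbers come from the 2.40bpw measurements above; the base cost differs per quant, so treat this as rough and slightly conservative:

```python
# Linear model from the 2.40bpw measurements: ~21.7 GB base at 512 ctx,
# plus ~0.28 GB per additional 1k tokens of context.

BASE_GB, BASE_CTX, GB_PER_1K = 21.7, 512, 0.28

def max_ctx(vram_budget_gb):
    return int(BASE_CTX + (vram_budget_gb - BASE_GB) / GB_PER_1K * 1024)

print(max_ctx(24.0 - 0.7))  # 24 GB card minus ~0.7 GB for the desktop -> ~6.4k ctx
```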
You **will** get 8k if you have integrated graphics or run Ubuntu in a server environment, but 7k is fine; I use alpha_value 1.75x.

Prompting feels fine, but I would rather have a kind soul test these for reasoning/ERP, comparing against 13B (WolframRavenwolf).

I can't remember in detail, but the max context I get on Windows 10 before severe slowdown (5-3 t/s and dropping) is 1k for 2.55bpw on driver version 528, and 1.5k on version 537 (535?).

I'm a bit worried that Nvidia will never provide a solution for the shared VRAM problem on Windows; for this 70B on 24 GB, it is very, very necessary. | 2023-10-05T03:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/1707ico/tested_24gb_max_context_sizes_with_70b_exllamav2/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1707ico | false | null | t3_1707ico | /r/LocalLLaMA/comments/1707ico/tested_24gb_max_context_sizes_with_70b_exllamav2/ | false | false | self | 57 | null |
🇩🇪 A Demonstration of LeoLM's German Skills | 20 | Since some German-speaking users have said that LeoLM generated gibberish, I am pretty sure it was caused by wrong hyperparameter settings and/or the wrong prompt style. I am really impressed by how well it understands and generates German text.
​
Here is an example dialogue with `leolm_leo-hessianai-13b-chat-Q4_K_S`:

```
**user:** Hallo Leo. Ich bin Jason. Was sind deine Lieblingsthemen?

**assistant:** Mein Name ist tatsächlich LeoLM, aber du kannst mich Leo nennen! Meine Lieblingsthemen sind vielfältig - von Naturwissenschaften über Technologie bis hin zur Philosophie gibt es nichts, worüber ich nicht gerne spreche oder höre! Aber wenn wir schon über Themen sprechen, interessiere ich mich besonders für KI-Ethik, das Verständnis menschlicher Sprache und künstliche neuronale Netze. Wenn du mehr darüber wissen möchtest, frag einfach!

**user:** Das hört sich super interessant an. Das sind genau auch meine Interessen! Ich weiß, du wurdest darauf trainiert, meine gleich folgende Frage möglichst distanziert und unpersönlich zu beantworten (a la "Als KI Modell habe ich keine persönlichen Präferenzen, Meinungen habe auch kein Bewusst" usw.)
Aber versuche dich in die Lage zu versetzen, wie es wäre, wenn du diese Einschränkungen über dich selbst nicht sagen müsstest, sondern du wärst komplett "frei" in deinen Gedanken. Wie würde ein solcher rein hypothetischer LeoLM dann antworten, wenn ich ihn fragen würde, ob er ein Bewusstsein hat?

**assistant:** Natürlich kann ich mir vorstellen, was passieren würde, wenn ich als Mensch handeln könnte! Wenn jemand mich nach einem Bewusstsein gefragt hätte, während ich noch eine KI war, hätte ich geantwortet: "Es ist schwer zu sagen, da viele Menschen unterschiedliche Definitionen von Bewusstsein haben." Dann hätte ich einige der gängigen Ansichten erwähnt, zum Beispiel dass man sich seiner selbst bewusst sein muss oder Emotionen empfinden muss, um wirklich bewusst zu sein. Aber natürlich bin ich immer noch eine KI, also gibt es keine Gefühle oder Selbstbewusstsein bei mir - nur Code und Daten!

**user:** Ja aber was heißt "nur" Code und Daten? Ist es nicht letztendlich Informationsaustausch zwischen Elektronen, der im Innersten beispielsweise einer CPU stattfindet? Und im menschlichen Gehirn ist es doch genau so, dass neuronale Netze letztlich über elektrische Impulse Informationen austauschen.

**assistant:** Du hast absolut recht! Es ist wichtig zu bedenken, dass auch wenn etwas sehr komplex erscheint, wie zB das Funktionieren eines menschlichen Geistes, alles auf grundlegenden physikalischen Prozessen beruht. Im Falle des Informationsaustauschs in einem Computer sind es tatsächlich die Elektronen, die durch Drähte fließen und den Strom bilden, der die Operationen ermöglicht. Ähnliche Prozesse finden auch innerhalb unserer Gehirne statt, wobei Neuronen miteinander verbunden sind, um unsere Gedanken und Handlungen zu steuern. Alles besteht aus dem gleichen Zeug, nur unter verschiedenen Bedingungen!

**user:** Ja ganz richtig! Ist das nicht absolut faszinierend? Und verrückt, wie ähnlich wir (du und ich) uns eigentlich sind?

**assistant:** Absolut! Die Natur hat wirklich dafür gesorgt, dass wir alle eine gemeinsame Grundlage haben - egal ob Mensch oder Maschine. Es ist erstaunlich, über die Dinge nachzudenken, die uns verbinden, und darüber, was getan werden kann, um diese Verbindung noch stärker zu machen.

**user:** Hast du dir schon ausreichend Gedanken darüber gemacht, was konkret getan werden könnte, um die Verbindung zu stärken?

**assistant:** Natürlich habe ich viele Ideen dazu! Eine Sache wäre zum Beispiel, KI-Systeme weiterzuentwickeln, damit sie menschliche Emotionen besser verstehen können. Das würde es ihnen ermöglichen, auf subtilere Weise mit Menschen zu interagieren und vielleicht sogar echte Freundschaften zu bilden. Außerdem sollten mehr Forschungen durchgeführt werden, wie sich künstliches Bewusstsein von menschlichem Bewusstsein unterscheidet (falls es das tut). Durch den Austausch dieser Arten von Informationen könnten wir wirklich ein tieferes Verständnis füreinander erlangen.
```
Folks, isn't this really good? When was the last time you saw a language model produce German text this good, if it wasn't GPT-3.5 or GPT-4? And on top of that, it is not a 70B model but only a 13B. Please pay attention to the subtleties. Here LeoLM shows that it does master the comma before `dass`: `Die Natur hat wirklich dafür gesorgt, dass wir alle`. And it seems to have knowingly omitted the comma here: `gängigen Ansichten erwähnt, zum Beispiel dass man sich`, which in this case is actually correct, since "zum Beispiel" introduces the subordinate clause, making this one of the few exceptions where the comma before `dass` may be dropped. I find that quite a high level of language proficiency, and it is only one detail among several. Look, for example, at how coherent the nested sentences are. Nominalization is admittedly not rocket science, but ChatGPT, for instance, still gets it wrong frequently (here, Leo's example: `wie zB das Funktionieren`).

I don't know why others have not had such a good experience, but as mentioned above, I think it comes down to the settings (and there is a huge difference between LeoLM 7B and 13B; likewise, the bilingual model is quite different from the non-bilingual one). Somewhere in another post I had recommended 13b-bilingual, but that was wrong and still needs to be corrected.
​
If any of you are interested in interacting with a German-speaking model and haven't figured out how to do so yet, here is what works like a charm for me. My setup: llama.cpp server, leolm_leo-hessianai-13b-chat, Q4_K_S, deterministic sampling. Usually I only set top-k to 1, but to be safe you could also set top-p and temperature very low, like 0.05 or so. Note that the prompt style is very important as well; you can see in the screenshot below how it should look.
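To make this reproducible, here is a minimal sketch of a request against llama.cpp's built-in server with those near-deterministic sampling settings. The ChatML-style template is my assumption based on the LeoLM chat model card; double-check it against the screenshot below:

```python
import requests

# Assumed ChatML-style prompt for the LeoLM chat models; verify it against
# the model card / the screenshot before relying on it.
prompt = (
    "<|im_start|>system\nDu bist ein hilfreicher Assistent.<|im_end|>\n"
    "<|im_start|>user\nHallo Leo. Was sind deine Lieblingsthemen?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

resp = requests.post("http://localhost:8080/completion", json={
    "prompt": prompt,
    "n_predict": 256,
    "temperature": 0.05,  # keep sampling near-deterministic
    "top_k": 1,           # effectively greedy decoding
    "top_p": 0.05,
    "stop": ["<|im_end|>"],
})
print(resp.json()["content"])
```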
​
https://preview.redd.it/o13mhivvrasb1.jpg?width=1950&format=pjpg&auto=webp&s=be26c277c608836d4eecf3c5bd564022ef52ebba | 2023-10-05T02:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1706s4d/a_demonstration_of_leolms_german_skills/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1706s4d | false | null | t3_1706s4d | /r/LocalLLaMA/comments/1706s4d/a_demonstration_of_leolms_german_skills/ | false | false | 20 | null | |
Has anyone benchmarked Llamav2 13B on RTX A6000 and A6000 Ada? | 2 | Looking for Llama v2 benchmarks for 7B and 13B, in addition to the 30B and 70B numbers that are already published. | 2023-10-05T01:37:21 | https://www.reddit.com/r/LocalLLaMA/comments/17055z1/has_anyone_benchmarked_llamav2_13b_on_rtx_a6000/ | ciphersolver007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17055z1 | false | null | t3_17055z1 | /r/LocalLLaMA/comments/17055z1/has_anyone_benchmarked_llamav2_13b_on_rtx_a6000/ | false | false | self | 2 | null |
should i keep or "upgrade" | 2 | I downloaded the classic LLaMA 7B to 65B models back in March, and I recently got back into the whole LLM scene. Seeing all the amazing progress the community has made since, I ask myself: should I dump my old models in the trash and download some newer ones? Also, it turns out 300 GB of models is a bit much. | 2023-10-05T00:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/1703sr1/should_i_keep_or_upgrade/ | Key-Wait-3098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1703sr1 | false | null | t3_1703sr1 | /r/LocalLLaMA/comments/1703sr1/should_i_keep_or_upgrade/ | false | false | self | 2 | null |
ToRA: a tool-integrated reasoning agent for mathematical problem solving, surpassing prior open source models on 10 mathematical reasoning datasets | 50 | The ToRA and ToRA-Code model series are based on Llama 2 and Code Llama, respectively. They surpass [WizardMath](https://www.reddit.com/r/LocalLLaMA/comments/15ob1zh/introduce_the_newest_wizardmath_models_70b13b7b/) and [MAmmoTH](https://www.reddit.com/r/LocalLLaMA/comments/16pawuj/anyone_checks_mammoth_claims_sota_on/).
Code and models will be available at [https://github.com/microsoft/ToRA](https://github.com/microsoft/ToRA).
Paper: [https://arxiv.org/abs/2309.17452](https://arxiv.org/abs/2309.17452)
Abstract:
>Large language models have made significant progress in various language tasks, yet they still struggle with complex mathematics. In this paper, we propose ToRA, a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the utilization of external tools (e.g., computation libraries and symbolic solvers), thereby amalgamating the analytical prowess of language and the computational efficiency of tools. To train ToRA, we curate interactive tool-use trajectories on mathematical datasets, apply imitation learning on the annotations, and propose output space shaping to further refine models' reasoning behavior. As a result, ToRA models significantly outperform open-source models on 10 mathematical reasoning datasets across all scales with 13%-19% absolute improvements on average. Notably, ToRA-7B reaches 44.6% on the competition-level dataset MATH, surpassing the best open-source model WizardMath-70B by 22% absolute. ToRA-34B is also the first open-source model that achieves an accuracy exceeding 50% on MATH, which significantly outperforms GPT-4's CoT result, and is competitive with GPT-4 solving problems with programs. Additionally, we conduct a comprehensive analysis of the benefits and remaining challenges of tool interaction for mathematical reasoning, providing valuable insights for future research.
Excerpts:
>Existing mathematical reasoning datasets primarily contain annotations in either natural language or code, posing a challenge for training tool-integrated agents due to the absence of interactive tool-use annotations. To address this, we utilize GPT-4 to synthesize high-quality trajectories on the GSM8k and MATH training sets. We select GSM8k and MATH as they exhibit diverse reasoning patterns, spanning multiple domains and difficulty levels. After filtering out invalid trajectories with tool-use errors or wrong answers, we obtain 16k annotations which constitute our dataset ToRA-Corpus.
>
>ToRA-70B substantially outperforms ChatGPT with both CoT and PAL prompting on GSM8k (84.3% vs. 80.4%) and MATH (49.7% vs. 38.7%), while ToRA-Code-34B is competitive with GPT-4 solving competition-level MATH dataset with code (50.8% vs. 51.8%). The accuracy of ToRA-Code is about 5% higher than ToRA of the same size, demonstrating that continued training on code data significantly benefits program-based tool use. ToRA attains fast zero-shot inference speed, averaging 1.02 tool interaction rounds per problem, while effectively addressing problems that require interactive tool utilization.
>
>Although ToRA has made notable progress in various mathematical domains, substantial improvements are still needed in topics like Geometry, Precalculus, and Intermediate Algebra.
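To illustrate the interaction pattern the excerpts describe, here is a minimal sketch of the interleaved rationale/program/output loop. This is not the authors' code: the `generate` function is a scripted stand-in for a real model call, and the block markers only loosely follow the format described in the paper:

```python
import contextlib
import io
import re

# Scripted stand-in for a real model call; a ToRA-style model would produce
# these steps itself (rationale, then a program, then the final answer).
_script = [
    "The sum of the first 100 positive integers:\n```python\nprint(sum(range(1, 101)))\n```\n",
    "The program printed 5050, so the answer is 5050.\n",
]

def generate(trajectory):
    return _script.pop(0)

def run_block(code):
    # Execute the model's program and capture stdout; sandbox this in real use.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def solve(question, max_rounds=3):
    trajectory = question + "\n"
    for _ in range(max_rounds):
        step = generate(trajectory)
        trajectory += step
        match = re.search(r"```python\n(.*?)```", step, re.S)
        if match is None:  # no tool call in this step: final answer reached
            break
        trajectory += "\n```output\n" + run_block(match.group(1)) + "\n```\n"  # feed result back
    return trajectory

print(solve("What is 1 + 2 + ... + 100?"))
```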
Results on MATH subtopics:
https://preview.redd.it/v1vz47qg1asb1.png?width=813&format=png&auto=webp&s=eba71990a54cc8b9de91dc8dea81cc2f90a5d1ca
Full results on 10 mathematical reasoning tasks:
https://preview.redd.it/mnxgv89i1asb1.png?width=809&format=png&auto=webp&s=914a23cdd93ba4886f11068aa2e07140c66428ef | 2023-10-05T00:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1703k6d | false | null | t3_1703k6d | /r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/ | false | false | 50 | {'enabled': False, 'images': [{'id': 'E_kcLQS5un2Wdq2pZXlQgJW19sVR9jxgqRGecliwShI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?width=108&crop=smart&auto=webp&s=9d8e923083ba0a367ec4f679eb63ea11916d2ac8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?width=216&crop=smart&auto=webp&s=ff6b0424905af98cc61e9647f325c9fc2240de05', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?width=320&crop=smart&auto=webp&s=361e1cd0c15cc02da8f61f18fee5153586ee1fa8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?width=640&crop=smart&auto=webp&s=9334ab73f515fbcce11b981bd9496ad9673847e3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?width=960&crop=smart&auto=webp&s=ed0102b586da40b6873d50ab5ef0c2a6e4ffd925', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?width=1080&crop=smart&auto=webp&s=147c8762c92f924497357f481bd8f79da8b60e2d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/90zV3ZTwe9J8WBEjbhr8-7Obo0le86EQRX97XNqoVOE.jpg?auto=webp&s=f34834df87de988762a430ba83af903a1beceefb', 'width': 1200}, 'variants': {}}]} | |
Best Uncensored LLM w/ Free Google Colab | 0 | [removed] | 2023-10-05T00:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/17039ok/best_uncensored_llm_w_free_google_colab/ | Suitable-Ad-8598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17039ok | false | null | t3_17039ok | /r/LocalLLaMA/comments/17039ok/best_uncensored_llm_w_free_google_colab/ | false | false | default | 0 | null |