| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Could the Nvidia 4090D be a win for Local LLM? | 27 | Nvidia just announced a 4090D. It will have about 10% fewer cores than the normal 4090, but for LLM inference we don't need that much compute, so the cut won't be missed. The 24GB of VRAM will still be there. I'm thinking it should be cheaper than the normal 4090.
A problem is that Nvidia says it's for China only. But a lot of China-only goods end up being sold on AliExpress.
https://www.tomshardware.com/pc-components/gpus/nvidia-launches-china-specific-rtx-4090d-dragon-gpu-sanctions-compliant-model-has-fewer-cores-and-lower-power-draw | 2023-12-28T20:31:33 | https://www.reddit.com/r/LocalLLaMA/comments/18t3q41/could_the_nvidia_4090d_be_a_win_for_local_llm/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t3q41 | false | null | t3_18t3q41 | /r/LocalLLaMA/comments/18t3q41/could_the_nvidia_4090d_be_a_win_for_local_llm/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'm_b87V91deI3gjMY6NWQosXIjybuCJEgz-2IFVa37XU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?width=108&crop=smart&auto=webp&s=a45d450a0105d650a4c3a73e1eb0b1eb737c4f55', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?width=216&crop=smart&auto=webp&s=b44d51d486a3814d47a68e471d9aaf013d1cc785', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?width=320&crop=smart&auto=webp&s=e6fbfa156b3193ec08712f24d023dc6e156869f6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?width=640&crop=smart&auto=webp&s=1605d2731b987ad8b3f5859521d717beb877699e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?width=960&crop=smart&auto=webp&s=4303011ceeb3fdd2ca86ccc4693b816243bc7151', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?width=1080&crop=smart&auto=webp&s=a1b48ac9f0a01469ab8dad7d5c17a4d82c090188', 'width': 1080}], 'source': {'height': 676, 'url': 'https://external-preview.redd.it/LX5dCdiISkByZ_4GoQiXIi60jIZ_hiOZi2CUzaQwU7g.jpg?auto=webp&s=d3f23f7d898768be553f60e25c506c84a5cb721d', 'width': 1200}, 'variants': {}}]} |
PowerInfer: 11x Faster than Llama.cpp | 1 | [removed] | 2023-12-28T20:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/18t3gci/powerinfer_11x_faster_than_llamacpp/ | Prior-Blood5979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t3gci | false | null | t3_18t3gci | /r/LocalLLaMA/comments/18t3gci/powerinfer_11x_faster_than_llamacpp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VV09FkKgpDF0oOa3fDBA5K5Fh8vB5_Zc1LEVjK_IkLI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/SR4O9-oaMZX-YIUOegrbkiyQo1D7s12xIgv7mr7eMz8.jpg?width=108&crop=smart&auto=webp&s=3467511662a559308fab9c237108383ef7670c8a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/SR4O9-oaMZX-YIUOegrbkiyQo1D7s12xIgv7mr7eMz8.jpg?width=216&crop=smart&auto=webp&s=33d3a88c9a954f1c3de655958a91b4221f58a33a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/SR4O9-oaMZX-YIUOegrbkiyQo1D7s12xIgv7mr7eMz8.jpg?width=320&crop=smart&auto=webp&s=f2ec0b8711fbdd8779f73bb33995c996409d90dc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/SR4O9-oaMZX-YIUOegrbkiyQo1D7s12xIgv7mr7eMz8.jpg?auto=webp&s=67ead28425fbbec7aeb97d160ac1b9bbde92989b', 'width': 480}, 'variants': {}}]} |
What Does Llama.cpp Output After Fine-Tuning? | 3 | Just tried my first fine-tune w/ llama.cpp; the resulting .gguf file is way smaller than the original model, and I can't load it (e.g. in LM Studio).
* The base model I used was llama-2-7b.Q8\_0.gguf, which is 7.16 GB
* At the end of the training run I got "save\_as\_llama\_lora: saving to ggml-lora-40-f32.gguf" and that file is only 42 MB.
Is that output file an adapter? I ran the following command:
./finetune --model-base ./models/llama-2-7b.Q8_0.gguf --train-data train_data/durant_QA_training_part1.json --threads 8 --sample-start "<s>" --ctx 1024 --epochs 1 --n_gpu_layers 1
I'd be very grateful for any help/insight. | 2023-12-28T19:46:25 | https://www.reddit.com/r/LocalLLaMA/comments/18t2oau/what_does_llamacpp_output_after_finetuning/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t2oau | false | null | t3_18t2oau | /r/LocalLLaMA/comments/18t2oau/what_does_llamacpp_output_after_finetuning/ | false | false | self | 3 | null |
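Side note on the question above: the `save_as_llama_lora` message suggests the 42 MB file is a LoRA adapter rather than a standalone model, so it would normally be applied on top of the base GGUF at load time. A minimal, hedged sketch with llama-cpp-python (whether this exact adapter format loads cleanly against a Q8_0 base, and whether an f16 `lora_base` is also needed, are assumptions):

```python
# Hedged sketch: apply the finetuned LoRA adapter on top of the base model.
# Paths come from the post; the possible need for an f16 lora_base with a
# quantized base model is an assumption worth verifying.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q8_0.gguf",  # base model used for the finetune
    lora_path="./ggml-lora-40-f32.gguf",         # the 42 MB adapter written by finetune
)

out = llm("<s>Question: Who is Kevin Durant?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```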
12GB VRAM testing. can only make 3.0bpw deepseek-coder-6.7b quant not OOM. | 1 | This doesn't seem right. I was under the impression that 34Bs can run on 16GB VRAM and 70Bs can run on 24GB VRAM (presumably at aggressive quants like 2.4bpw), so it's really strange to me that a 4.0bpw quant of a 6.7B model (a 3.4GB safetensors file) OOMs my machine.
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3080 Ti Off | 00000000:0A:00.0 On | N/A |
| 0% 40C P8 28W / 350W | 276MiB / 12288MiB | 5% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1356 G /usr/lib/xorg/Xorg 169MiB |
| 0 N/A N/A 1626 G /usr/bin/gnome-shell 34MiB |
| 0 N/A N/A 7353 G alacritty 12MiB |
| 0 N/A N/A 407277 G ...irefox/3504/usr/lib/firefox/firefox 43MiB |
+---------------------------------------------------------------------------------------+
```
The card should be free enough, with only ~2.3% of VRAM consumed.
It took me a while to get exllamav2 working in a Docker container, but I managed it. However, its output is disappointing:
```
root@4c751e29ec9c:/workspace/exllamav2# python test_inference.py -m /models/deepseek-coder-6.7b-instruct-3.0bpw-h6-exl2-2/ -p 'once upon a time,'
-- Model: /models/deepseek-coder-6.7b-instruct-3.0bpw-h6-exl2-2/
-- Options: ['rope_scale: 1.0', 'rope_alpha: 1.0']
-- Loading model...
-- Loading tokenizer...
-- Warmup...
-- Generating...
once upon a time,,";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-- Response generated in 0.80 seconds, 128 tokens, 160.86 tokens/second (includes prompt eval.)
``` | 2023-12-28T19:29:42 | https://www.reddit.com/r/LocalLLaMA/comments/18t2ahh/12gb_vram_testing_can_only_make_30bpw/ | 0xd00d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t2ahh | false | null | t3_18t2ahh | /r/LocalLLaMA/comments/18t2ahh/12gb_vram_testing_can_only_make_30bpw/ | false | false | self | 1 | null |
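A rough sanity check on the numbers above (back-of-the-envelope only; the deepseek-coder-6.7b dimensions and an fp16 KV cache below are assumptions):

```python
# Hedged, approximate VRAM math for a 6.7B model at 4.0 bits per weight.
# Layer count / hidden size for deepseek-coder-6.7b are assumed, not checked.
params = 6.7e9
bpw = 4.0
weights_gb = params * bpw / 8 / 1e9                  # ~3.35 GB of quantized weights

n_layers, hidden, ctx = 32, 4096, 4096
kv_cache_gb = 2 * n_layers * ctx * hidden * 2 / 1e9  # K + V at 2 bytes each: ~2.1 GB

print(f"weights ~ {weights_gb:.2f} GB, kv cache ~ {kv_cache_gb:.2f} GB")
# Even together that is well under 12 GB, which is why an OOM here points more at
# the quantization/conversion step (or other processes holding VRAM) than at inference.
```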
AutoAWQ quantizing | 6 | I am running some experiments with AutoAWQ quantizing. I attempted to quantize Mixtral 7B on an RTX 4090; about 80% of the way through quantizing, it stopped because memory was full.
What's your experience with quantizing and is there a base GPU/mem config to start with? As part of my experiments, I am also renting GPUs from several sources. | 2023-12-28T19:12:40 | https://www.reddit.com/r/LocalLLaMA/comments/18t1w51/autoawq_quantizing/ | Fun-Original-9421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t1w51 | false | null | t3_18t1w51 | /r/LocalLLaMA/comments/18t1w51/autoawq_quantizing/ | false | false | self | 6 | null |
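For comparing notes, this is the AutoAWQ flow I've been assuming (model path, output directory and quant_config values are placeholders/assumptions, not a verified recipe):

```python
# Hedged sketch of an AutoAWQ quantization run; paths and config are assumptions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mixtral-8x7B-v0.1"   # placeholder
quant_path = "mixtral-awq"                    # placeholder output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)  # the memory-hungry step
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```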
How to increase TPS in Text-Generation-WebUI | 6 | I just tested out Mixtral 8x7B on 2xA100 80GB.
It's consuming 55% of overall GPU power to run.
However, I'm only getting 7 to 8 tps on it.
I am looking to deploy this model in my office, as we are planning on using AI for some internal jobs. But 7-8 TPS shared between 20 employees doesn't seem good at all.
Does anybody have any suggestions on how this can be increased?
Thanks. | 2023-12-28T18:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/18t16qf/how_to_increase_tps_in_textgenerationwebui/ | Few_Acanthisitta_858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t16qf | false | null | t3_18t16qf | /r/LocalLLaMA/comments/18t16qf/how_to_increase_tps_in_textgenerationwebui/ | false | false | self | 6 | null |
Great info resource for AI on AMD hardware (Linux) | 22 | Just found this pretty cool [repo on github from nktice](https://github.com/nktice/AMD-AI) with a ton of info on how to do popular AI stuff on AMD hardware on Linux, including setting up ROCm, oobabooga, and Stable diffusion (A1111 & ComfyUI).
It's not totally noob-friendly, but definitely useful and detailed.
I haven't tried everything there, but it helped me fix my bitsandbytes installation, and I thought you might find it useful! | 2023-12-28T18:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/18t0ycs/great_info_resource_for_ai_on_amd_hardware_linux/ | mhogag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t0ycs | false | null | t3_18t0ycs | /r/LocalLLaMA/comments/18t0ycs/great_info_resource_for_ai_on_amd_hardware_linux/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'XArAFA_s9ubaEdCLeJpizRreLg-CwcDWNUqItwzgklM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?width=108&crop=smart&auto=webp&s=36e9335a340105f89373c9de2935a906451c5321', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?width=216&crop=smart&auto=webp&s=9e734e9185f62b79c15cdafb8580bc08528c37ce', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?width=320&crop=smart&auto=webp&s=20bed891bfba9b1b6e2433570a58ce28e0202550', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?width=640&crop=smart&auto=webp&s=60bcfe13e0985140277ae7d4a00b5e989b177e84', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?width=960&crop=smart&auto=webp&s=f45236d92171a9e5296fa05b462be454b9f0776f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?width=1080&crop=smart&auto=webp&s=5be7e256e3c3d5255501ea7ee699c5c1dce22f63', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZAGRDkQuJVoxnqIlFuEtDSXqjYOc8ulpWeWKmotDjtQ.jpg?auto=webp&s=d6dab45daa3562cfe7842c466cc6ef390426f367', 'width': 1200}, 'variants': {}}]} |
What are the best local TTS options for SillyTavern? | 5 | Probably not the best place to ask this but idk where else to ask.
I have recently set up SillyTavern with image captioning (using extras), text generation (using oobabooga) and image generation (using Automatic1111). It's all running smoothly and I even used ngrok to get a public domain for SillyTavern and it works great (apart from extras which won't connect for some reason).
But I was thinking it would be nice to have TTS. I had tried it a while back (Silero on oobabooga to be more specific) but it was kind of horrible. Are there any better options out there that I can use on Sillytavern? Preferably that would allow me to fine-tune the voice but also that would read over the text and not replace it altogether with an audio file | 2023-12-28T18:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/18t0ue7/what_are_the_best_local_tts_options_for/ | OvercookedSatellite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t0ue7 | false | null | t3_18t0ue7 | /r/LocalLLaMA/comments/18t0ue7/what_are_the_best_local_tts_options_for/ | false | false | self | 5 | null |
UForm v1: Multimodal Chat in 1.5 Billion Parameters | 37 | 2023-12-28T18:19:28 | https://github.com/unum-cloud/uform/releases/tag/v1.0.0 | ashvar | github.com | 1970-01-01T00:00:00 | 0 | {} | 18t0m7d | false | null | t3_18t0m7d | /r/LocalLLaMA/comments/18t0m7d/uform_v1_multimodal_chat_in_15_billion_parameters/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'EFYioPGW6gbtvIKRWdeqijOAmXlbrfvHcB0T6xkPPbQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?width=108&crop=smart&auto=webp&s=ba3c33d4b23e198f19e2b9f7d764f4c59a05b9f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?width=216&crop=smart&auto=webp&s=88a9101e4611ff47de6bf05e8766606d7e0866f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?width=320&crop=smart&auto=webp&s=144b955fd9377bf91d6b398da512333349fae67e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?width=640&crop=smart&auto=webp&s=33c362ad67e32ca66421ebe5a940fa251bfcf428', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?width=960&crop=smart&auto=webp&s=b70ff98ef864b57354734375bd40408f5e3adf97', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?width=1080&crop=smart&auto=webp&s=c00a361e3ad38d85385c41c98eabc728f8e8c5d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oW7kDmOik0N_nSIjMjmBu2MSvwL_sGjmGU7Q-5FXruE.jpg?auto=webp&s=eeb15ba3dedecbb287574762929c9843ee715fe2', 'width': 1200}, 'variants': {}}]} | ||
How does a smaller coding model of the same architecture outperform a significantly bigger one? | 73 | Can someone explain how 7b DeepSeek is scoring higher (not by much, but still) in several categories than its 33b sibling?
​
[Big Code Models Leaderboard - a Hugging Face Space by bigcode](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
https://preview.redd.it/hfrqzppru29c1.png?width=1647&format=png&auto=webp&s=ff18cb9b21e28da99be1c8f658f89e8caa8b8e26 | 2023-12-28T18:18:36 | https://www.reddit.com/r/LocalLLaMA/comments/18t0lgi/how_does_a_smaller_coding_model_of_the_same/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t0lgi | false | null | t3_18t0lgi | /r/LocalLLaMA/comments/18t0lgi/how_does_a_smaller_coding_model_of_the_same/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'ZRNPRAwlbCxBYpxLX6g6wrAkkQYjlKzd5xFCAfyP-D4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?width=108&crop=smart&auto=webp&s=573d01e1b9e4bbc0304b500b04b026922b21672c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?width=216&crop=smart&auto=webp&s=2727909d2c2d664e303f6a29719823d8cfdbeb26', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?width=320&crop=smart&auto=webp&s=536990934c4e6adaeabeb4031a3a0d511080a844', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?width=640&crop=smart&auto=webp&s=faf1d43a7626948918744eb7f013b82a3a1c2528', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?width=960&crop=smart&auto=webp&s=b53d28680afb01f16d2ed3125dd17aa0d54f9881', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?width=1080&crop=smart&auto=webp&s=70c81aef04a740802faabe18e37b2755cc7d3ec4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wHdTZyOPj_tRXU22YzMk3h1HrzdGyauNQz4C2jNd4tg.jpg?auto=webp&s=63a8ca89a2fa3408116f9982710425d68bf69321', 'width': 1200}, 'variants': {}}]} | |
Evaluating RAG pipelines using LangChain and ragas | 6 | 2023-12-28T18:05:44 | https://deci.ai/blog/evaluating-rag-pipelines-using-langchain-and-ragas/ | datascienceharp | deci.ai | 1970-01-01T00:00:00 | 0 | {} | 18t0acb | false | null | t3_18t0acb | /r/LocalLLaMA/comments/18t0acb/evaluating_rag_pipelines_using_langchain_and_ragas/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': '_rIxUoliWSXk_W1nAnTTlENEBC8cxlqZpTGfWh7CdNA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?width=108&crop=smart&auto=webp&s=f694e8809a22d7ae6c7ebc467caf2cff78366a52', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?width=216&crop=smart&auto=webp&s=5699706de006378dbbcd27cd1b88a68d2fb002e2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?width=320&crop=smart&auto=webp&s=7bffc8e8e8ec2a4779d3f54b0b31b6665ed97d35', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?width=640&crop=smart&auto=webp&s=e7478292611c18366ebfc691d15ec3f13696bd88', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?width=960&crop=smart&auto=webp&s=a7c1aff48c8f556c7379fe86a9f2c8f829b722d9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?width=1080&crop=smart&auto=webp&s=d7e0684a54dfcf415b709dc0c5d0a0aae475db2c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/zoomq9aOboiAwoMGU9fAtx8FzwUM6e2MVXHUXbYFns4.jpg?auto=webp&s=6880e43696c1916fefa88b33062d3a2e209508ef', 'width': 1920}, 'variants': {}}]} | |
Diving into LLM's with 7900xtx and trying to finetune own model | 3 | Hello,
I have a 7900xtx and would like to run, and later on fine-tune, mistral/mixtral.
However, there is a shitton of UIs out there like oobabooga, ollama, gpt4all, etc. Information on how to set these up with ROCm/AMD is usually missing, or I have to piece it together, so I would appreciate more information on what I could use for my purposes. Btw, I run Arch (EndeavourOS) and got Stable Diffusion/ComfyUI working.
I would like a starting point: what do I need to consider?
Also, what about GGML, GGMF, GGJT and GGUF? I read a lot about them, and some models support different formats.
| 2023-12-28T18:05:18 | https://www.reddit.com/r/LocalLLaMA/comments/18t09zt/diving_into_llms_with_7900xtx_and_trying_to/ | Entire-Top3434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18t09zt | false | null | t3_18t09zt | /r/LocalLLaMA/comments/18t09zt/diving_into_llms_with_7900xtx_and_trying_to/ | false | false | self | 3 | null |
Code assistant | 9 | Hey folks,
Quick question for the coding wizards here: Are there any AI tools out there that can help plan out features and tweak code? I've tried Copilot but it mostly just chats back suggestions. I'm looking for something a bit more hands-on. Any ideas?
Cheers! | 2023-12-28T17:37:24 | https://www.reddit.com/r/LocalLLaMA/comments/18szl9n/code_assistant/ | Local_Beach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18szl9n | false | null | t3_18szl9n | /r/LocalLLaMA/comments/18szl9n/code_assistant/ | false | false | self | 9 | null |
Is using the vram of 2x 4090s supported in oobabooga and other GUIs? Because they don’t have NVLink like the 3000 series. | 1 | [removed] | 2023-12-28T17:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/18szeeh/is_using_the_vram_of_2x_4090s_supported_in/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18szeeh | false | null | t3_18szeeh | /r/LocalLLaMA/comments/18szeeh/is_using_the_vram_of_2x_4090s_supported_in/ | false | false | self | 1 | null |
Best model to summarize scientific papers | 18 | I looked into past posts but couldn't find satisfactory answers.
Consider me a newbie with LLMs. I have \~4k scientific papers (already in .txt format) that I want to get summaries of. I have read the following things about using LLMs to summarize texts and want your opinion on what path to take:
* Summarizing will get you unsatisfactory results and you should stick to the abstract
* The best way is to make summaries of each section and then combine the summaries.
* The LLM will start hallucinating because the text is too long (e.g., bart-large-cnn was trained on <1000-word texts, while papers have >8000 words).
* I have seen Pegasus and LongT5 being mentioned, but no idea about these
* The [textsum](https://github.com/pszemraj/textsum) project seems to work with texts of arbitrary length, but I don't know if it works well with scientific papers
* [vault-ai](https://github.com/pashpashpash/vault-ai) produces good enough summaries using a [smart approach](https://pashpashpash.substack.com/p/tackling-the-challenge-of-document), but I want a local solution.
I expect the summary to be around one page long and more detailed than the abstract of the papers, so I wonder whether the summary-by-section approach (sketched below) would be the best. Also, I don't know if there's a model specifically designed for scientific papers. My papers are not math or CS, but they do have some equations and chemical formulas, although I am interested in the text itself, not in specific numerical results.
Any hint or advice is appreciated. | 2023-12-28T17:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/18szchg/best_model_to_summarize_scientific_papers/ | isgael | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18szchg | false | null | t3_18szchg | /r/LocalLLaMA/comments/18szchg/best_model_to_summarize_scientific_papers/ | false | false | self | 18 | null |
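The summary-by-section idea above maps fairly directly onto a map-reduce summarization chain. A minimal, hedged example with the classic langchain API (the local model path, chunk sizes and file name are placeholders, not recommendations):

```python
# Hedged sketch: summarize a paper chunk-by-chunk, then combine the partial
# summaries ("map_reduce"). Local model path and chunk sizes are assumptions.
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import LlamaCpp
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = open("paper_0001.txt").read()                                  # one of the ~4k .txt papers
splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
docs = [Document(page_content=chunk) for chunk in splitter.split_text(text)]

llm = LlamaCpp(model_path="mistral-7b-instruct.Q5_K_M.gguf", n_ctx=8192)  # placeholder local model
chain = load_summarize_chain(llm, chain_type="map_reduce")                # per-section, then combined
print(chain.run(docs))
```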
Chaining RAG Systems for Advanced LLM Pipelines | 1 | 2023-12-28T17:20:45 | https://nux.ai/guides/chaining-rag-systems | nuxai | nux.ai | 1970-01-01T00:00:00 | 0 | {} | 18sz6lo | false | null | t3_18sz6lo | /r/LocalLLaMA/comments/18sz6lo/chaining_rag_systems_for_advanced_llm_pipelines/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'wZ84X-YcNYEMqOUYO8LPnruVLYD1r1rEpl8YKmVhrB4', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?width=108&crop=smart&auto=webp&s=7005f57b4f590b32cd10ce6fd34f6f4a5da68e17', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?width=216&crop=smart&auto=webp&s=b90adbaec4617d33ac2123adb1805dfd42c68dc9', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?width=320&crop=smart&auto=webp&s=eea832436f3f28342e112b302aab311d6bf540b2', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?width=640&crop=smart&auto=webp&s=a0ec837b8f9c51473c7febe5c1b8df8aa0f38679', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?width=960&crop=smart&auto=webp&s=fecd06222f0b8b7907a3d566d41682b157ee6e69', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?width=1080&crop=smart&auto=webp&s=785942aa3a3868bc460c45938b407a4d0db75af0', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/myQqOxVSfgADU7anFBN-qaVIRjM2D0VIBkpFtrgv1aY.jpg?auto=webp&s=43e5087b591d9a71034ead752d1d8ed99e46ceae', 'width': 1792}, 'variants': {}}]} | ||
The best current models (Dolphin, Mixtral, Solar, Noromaid) and where to try them | 31 | I just saw a lot of people talking about these models, so if you want to test them, I found these websites that have all of them
\- [Infermatic.ai](https://Infermatic.ai) (all of them)
\- [https://replicate.com/tomasmcm/solar-10.7b-instruct-v1.0](https://replicate.com/tomasmcm/solar-10.7b-instruct-v1.0) (for solar)
\- [https://huggingface.co/chat](https://huggingface.co/chat) (for mixtral)
Let me know if you find more, I'd like to know
And here's a little summary if you don't know what each model is for
​
1. **Dolphin** : An uncensored model derived from an open-source dataset, it uses instructions from FLANv2 enhanced with GPT-4 and GPT-3.5 completions.
2. **Mixtral** : An advanced text generation model using a Mix of Experts architecture
3. **Solar** : Focused on domain specialization and optimization; recognized for its high performance and efficiency
4. **Noromaid:** Storywriting and roleplay
​ | 2023-12-28T17:13:55 | https://www.reddit.com/r/LocalLLaMA/comments/18sz0ny/the_best_current_models_dolphin_mixtral_solar/ | Horror_Echo6243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sz0ny | false | null | t3_18sz0ny | /r/LocalLLaMA/comments/18sz0ny/the_best_current_models_dolphin_mixtral_solar/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]} |
Gunicorn with FastAPI causing CUDA Error | 2 | Hello, I have been developing a FastAPI application where we can access a HuggingFace model and use it for text generation. On top of it, I'm launching the service with Gunicorn so that we can handle concurrent users.
My goal is to load the model onto the GPU only once and allow the workers to share the same model concurrently.
To do this, I loaded the model and tokenizer outside of the FastAPI endpoint; I also explicitly set the **spawn** start method and used the --**preload** flag.
However, I'm still running into the following error; does anyone have an idea about this? I tried almost all the solutions I found on the internet.
>RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
I'm launching Gunicorn using:
*nohup gunicorn --workers 2 --preload --worker-class=uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 hf\_enpoint:app &*
**Versions**
Python: 3.10.12
torch: 2.1.1+cu121
transformers: 4.32.1
FastAPI: 0.104.1
Gunicorn: 20.1.0
**hardware**
EC2 instance: g4dn.4xlarge
GPU: NVIDIA
GPU memory: 16GB
**Code**
## imports (filled in for completeness; module layout assumed from the calls below)
import multiprocessing
from typing import Any, Dict
import torch
from fastapi import FastAPI
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
from langchain.memory import ConversationBufferWindowMemory

torch.multiprocessing.set_start_method('spawn')
multiprocessing.set_start_method('spawn', force=True)

memory = ConversationBufferWindowMemory(k=20, memory_key="history", input_key="input")
model_id = "TheBloke/Llama-2-13B-chat-GPTQ"
model_basename = "model"
use_triton = False  # assumed; referenced below but not defined in the original snippet
tokenizer = AutoTokenizer.from_pretrained(model_id, is_split_into_words=True)
model = AutoGPTQForCausalLM.from_quantized(model_id, model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", use_triton=use_triton, quantize_config=None)
app = FastAPI()
@app.post("/get_response")
async def get_response(data : Dict[Any, Any]):
## use model | 2023-12-28T16:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/18syc2w/gunicorn_with_fastapi_causing_cuda_error/ | SpiritOk5085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18syc2w | false | null | t3_18syc2w | /r/LocalLLaMA/comments/18syc2w/gunicorn_with_fastapi_causing_cuda_error/ | false | false | self | 2 | null |
Gunicorn with FastAPI causing CUDA Error | 1 | Hello, I have been developing a FastAPI application where we can access a HuggingFace model and use it for text generation. On top of it, I'm launching the service with Gunicorn so that we can handle concurrent users.
My goal is to load the model onto the GPU only once and allow the workers to share the same model concurrently.
To do this, I loaded the model and tokenizer outside of the FastAPI endpoint; I also explicitly set the **spawn** start method and used the --**preload** flag.
However, I'm still running into the following error; does anyone have an idea about this? I tried almost all the solutions I found on the internet.
>RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
I'm launching Gunicorn using:
*nohup gunicorn --workers 2 --preload --worker-class=uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 hf\_enpoint:app &*
**Versions**
Python: 3.10.12
torch: 2.1.1+cu121
transformers: 4.32.1
FastAPI: 0.104.1
Gunicorn: 20.1.0
**hardware**
EC2 instance: g4dn.4xlarge
GPU: NVIDIA
GPU memory: 16GB
**Code**
## imports
torch.multiprocessing.set_start_method('spawn')
multiprocessing.set_start_method('spawn', force=True)
memory = ConversationBufferWindowMemory(k=20, memory_key="history", input_key="input")
model_id = "TheBloke/Llama-2-13B-chat-GPTQ"
model_basename = "model"
tokenizer = AutoTokenizer.from_pretrained(model_id, is_split_into_words=True)
model = AutoGPTQForCausalLM.from_quantized(model_id,
    model_basename=model_basename, use_safetensors=True, trust_remote_code=True,
    device="cuda:0", use_triton=use_triton, quantize_config=None)
# model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0")
app = FastAPI()

@app.post("/get_response")
async def get_response(data : Dict[Any, Any]):
## use model | 2023-12-28T16:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/18sy8xw/gunicorn_with_fastapi_causing_cuda_error/ | SpiritOk5085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sy8xw | false | null | t3_18sy8xw | /r/LocalLLaMA/comments/18sy8xw/gunicorn_with_fastapi_causing_cuda_error/ | false | false | default | 1 | null |
Front end for llama.cpp | 1 | [removed] | 2023-12-28T16:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/18sxvgr/front_end_for_llamacpp/ | No_Cryptographer9806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sxvgr | false | null | t3_18sxvgr | /r/LocalLLaMA/comments/18sxvgr/front_end_for_llamacpp/ | false | false | self | 1 | null |
I scored the top Open LLM Leaderboard models with my own benchmark | 98 | 2023-12-28T16:20:17 | _sqrkl | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18sxpz5 | false | null | t3_18sxpz5 | /r/LocalLLaMA/comments/18sxpz5/i_scored_the_top_open_llm_leaderboard_models_with/ | false | false | 98 | {'enabled': True, 'images': [{'id': '0VsKbJwpR_sHHT_t_v_j2cU-Vb0dGfcboMXyWJ0VXPM', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/i07s0g6v829c1.png?width=108&crop=smart&auto=webp&s=6c603294c7c5a830945b3dbb8317c68ae6e5bd4e', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/i07s0g6v829c1.png?width=216&crop=smart&auto=webp&s=93cef5923b03509c0bf00c567309797ff10aabfb', 'width': 216}, {'height': 376, 'url': 'https://preview.redd.it/i07s0g6v829c1.png?width=320&crop=smart&auto=webp&s=3ac5e79762e73dcb14655419cec877b51936e036', 'width': 320}, {'height': 752, 'url': 'https://preview.redd.it/i07s0g6v829c1.png?width=640&crop=smart&auto=webp&s=eeabd592436a399dce8ee187ee5e134cdc93473d', 'width': 640}], 'source': {'height': 777, 'url': 'https://preview.redd.it/i07s0g6v829c1.png?auto=webp&s=1788f3ea905c21455f0496db6b3de447b42abba1', 'width': 661}, 'variants': {}}]} | |||
codellama, version huggingface, how to pass a large entry, or split the entry, to get the use of 100K tokens | 1 | [removed] | 2023-12-28T15:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/18swts9/codellama_version_huggingface_how_to_pass_a_large/ | Jamil-dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18swts9 | false | null | t3_18swts9 | /r/LocalLLaMA/comments/18swts9/codellama_version_huggingface_how_to_pass_a_large/ | false | false | self | 1 | null |
Looking for people who want to learn ML/AI as a group | 26 | Hey, I’m looking for a few people who would be willing to join a Dscord/Tg group with the goal of getting better at ML/AI.
The idea would be to commit to a few goals each week and have a weekly showcase where we talk about a few things we’ve learnt or share progress on projects we’ve been working on.
It would also be a good place to share resources, learnings, etc. on a daily basis.
Only looking for people who already know how to code / have a technical background | 2023-12-28T15:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/18swpky/looking_for_people_who_want_to_learn_mlai_as_a/ | zsan99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18swpky | true | null | t3_18swpky | /r/LocalLLaMA/comments/18swpky/looking_for_people_who_want_to_learn_mlai_as_a/ | false | false | self | 26 | null |
need help improving context quality to make a code assistant | 2 | I tried to make a coding assistant some time ago. It should read my repo and point out and suggest code quality improvements, suggest alternate design patterns, point out where complexity is getting out of hand, or even suggest alternate variable names.
However, I noticed that when I ask a question like the included 'What is the class hierarchy', or even something specific about a specific struct or a file, the database does NOT include that struct or file in the context sent to the model! Chroma was terrible, but there was an online vector store (was it Activeloop's Deeplake?) that was much better at retrieving code chunks related to my question.
A few questions:
1. is Chroma really bad and I should use another vector db or do I have to tune it to get it to work? After all it's just a database storing numbers... maybe I have to try something other than 'mmr'?
2. Is there a better way of getting context to the model? Right now the best I've found is https://cursor.sh/ but I want something that just reads my repo once I make a git commit and suggests any blind spots or points of improvement.
3. Just looking at my code real quick - any avenues I could improve context for the model? Are embeddings better than simply including the code in the context? Perhaps embeddings take up less context space?
```
# imports (filled in for completeness; the classic langchain module layout is assumed)
import argparse
from pprint import pprint as pp

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationSummaryMemory
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('repo_path', type=str, help='path to the repository')
parser.add_argument('--model_path', type=str, default='llama-2-7b-32k-instruct.Q5_K_M.gguf', help='path to the model')
parser.add_argument('--llama', action='store_true', help='use LlamaCpp instead of ChatGPT')
args = parser.parse_args()
repo_path = args.repo_path
model_path = args.model_path
# Load
loader = GenericLoader.from_filesystem(
repo_path,
glob="**/*",
suffixes=[".js"],
parser=LanguageParser(language=Language.JS, parser_threshold=500) # Currently, the supported languages for code parsing are Python and JavaScript.
# Source https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html
)
documents = loader.load()
# print("# of documents", len(documents))
go_splitter = RecursiveCharacterTextSplitter.from_language(language=Language.JS,
chunk_size=2000,
chunk_overlap=200)
texts = go_splitter.split_documents(documents)
# print("# of split documents (texts)", len(texts))
def chatgpt(texts):
db = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=()))
retriever = db.as_retriever(
search_type="mmr", # Also test "similarity"
search_kwargs={"k": 8},
)
llm = ChatOpenAI(model_name="gpt-4")
return db, retriever, llm
if args.llama:
db, retriever, llm = llama(texts)
else:
db, retriever, llm = chatgpt(texts)
memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)
print("going to ask some questions now")
questions = [
"What is the class hierarchy?",
]
while True:
question = input("Ask a question: ")
if not question:
break
documents = retriever.get_relevant_documents(question)
pp(documents)
result = qa(question)
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
``` | 2023-12-28T14:31:36 | https://www.reddit.com/r/LocalLLaMA/comments/18sv8sw/need_help_improving_context_quality_to_make_a/ | ritchan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sv8sw | false | null | t3_18sv8sw | /r/LocalLLaMA/comments/18sv8sw/need_help_improving_context_quality_to_make_a/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'I0Ul3MOzbyiZw9IRp9XDSogkzkh7d8CGD6U_MgNb1SA', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?width=108&crop=smart&auto=webp&s=7185039b9858f428266b2f6416be37e60f4e6f23', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?width=216&crop=smart&auto=webp&s=c09e7c93e2dddf120954de9f757c970ff207dcce', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?width=320&crop=smart&auto=webp&s=df9fb53b9d7a724ddb7e217e326075f7decdcecc', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?width=640&crop=smart&auto=webp&s=1aeb9824b09f337e027493de1389cc5ec0f4c42a', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?width=960&crop=smart&auto=webp&s=279b74c69dbc47d261554d71fca84f23354f789f', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?width=1080&crop=smart&auto=webp&s=2ae3a7c8bd64d82a6d257c2b53f158916fa969af', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/rR6D6u0s1YlkkcDirob8pWwxSQgeIBu5ttGqeLtbVgc.jpg?auto=webp&s=ddd8534bccd1e9ba238ef21b5d0fc7c4c457fb97', 'width': 1280}, 'variants': {}}]} |
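On question 1 above, before blaming Chroma it may be worth widening the retrieval. A hedged sketch of the knobs, continuing from the script's `db` object (the parameter values are guesses, not tuned recommendations):

```python
# Hedged sketch: retriever variations to compare against the plain "mmr" setup.
# Values for k / fetch_k / lambda_mult are assumptions, not tuned recommendations.
retriever_similarity = db.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 12},                                    # just the 12 closest chunks
)
retriever_mmr = db.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 8, "fetch_k": 50, "lambda_mult": 0.5},  # fetch wide, then diversify
)
docs = retriever_mmr.get_relevant_documents("What is the class hierarchy?")
```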
Any text editors with option to integrate LLMs? | 9 | Although I am quite happy with the back and forth style of chat type models, I really like the completion type interface of copilot. I was wondering if there is any open source alternative way to use LLMs to power some text editor. Mostly want to use it for story writing but with completion instead of chat. | 2023-12-28T14:19:07 | https://www.reddit.com/r/LocalLLaMA/comments/18suyvc/any_text_editors_with_option_to_integrate_llms/ | LassFromTheUpload | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18suyvc | false | null | t3_18suyvc | /r/LocalLLaMA/comments/18suyvc/any_text_editors_with_option_to_integrate_llms/ | false | false | self | 9 | null |
4060ti 16gb vs 7900xt for inference and fine tuning | 3 | I am looking to buy a GPU with at least 16GB VRAM. Where I live, the only 2 options in my budget are the 4060ti 16GB and the 7900xt. The problem with the 4060ti is its bus width, while the support for the 7900xt is still not good (I think). So my questions are:
1) How big of an impact the paltry bus width of 4060ti will have?
2) If anyone has tried 7900xt for inference and fine tuning, how's the performance now that it has official ROCM support? | 2023-12-28T13:53:27 | https://www.reddit.com/r/LocalLLaMA/comments/18suey3/4060ti_16gb_vs_7900xt_for_inference_and_fine/ | Bhavishyati | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18suey3 | false | null | t3_18suey3 | /r/LocalLLaMA/comments/18suey3/4060ti_16gb_vs_7900xt_for_inference_and_fine/ | false | false | self | 3 | null |
What's a good model for a university tutor? | 5 | I'm doing courses in high level math, physics and engineering.
It'd be nice to have a virtual tutor that I can ask questions and get answers from, to help build my fundamental understanding in high-level areas of study.
Is there a model that can specialize in this right out the gate? | 2023-12-28T13:48:35 | https://www.reddit.com/r/LocalLLaMA/comments/18subby/whats_a_good_model_for_a_university_tutor/ | CertainlyBright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18subby | false | null | t3_18subby | /r/LocalLLaMA/comments/18subby/whats_a_good_model_for_a_university_tutor/ | false | false | self | 5 | null |
How to run on a Cloud GPU from a local script? | 2 | I've got a llama-cpp python script in vscode that I've been running on my laptop, and I'd like to rent a remote GPU to speed up text generation.
I've looked at vast.ai and runpod, but I haven't found a tutorial on how to do what I really want, which is to just hit the 'run' command from my IDE, and set up the LLM on the remote GPU, so that I can send my prompts over, and receive back the generated text. I'm testing a system with an untraditional prompting system, so I can't just use an existing webUI, and I'd prefer to develop from my IDE than on Jupyter.
Is this possible? | 2023-12-28T13:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/18su5jh/how_to_run_on_a_cloud_gpu_from_a_local_script/ | Nano_9a9o | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18su5jh | false | null | t3_18su5jh | /r/LocalLLaMA/comments/18su5jh/how_to_run_on_a_cloud_gpu_from_a_local_script/ | false | false | self | 2 | null |
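One pattern that fits the "hit run locally, generate remotely" workflow: start an OpenAI-compatible llama-cpp-python server on the rented GPU (e.g. `python -m llama_cpp.server --model model.gguf`) and point the local script at it. A minimal, hedged sketch (the pod URL, port and prompt are placeholders):

```python
# Hedged sketch: local script sending prompts to a llama-cpp-python server that
# runs on the rented GPU. URL/port and the prompt format are placeholders.
import requests

POD_URL = "http://<pod-ip>:8000/v1/completions"  # placeholder for the pod's exposed endpoint

resp = requests.post(POD_URL, json={
    "prompt": "Write a haiku about local LLMs.",
    "max_tokens": 128,
    "temperature": 0.7,
})
print(resp.json()["choices"][0]["text"])
```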
Did anyone try to run Falcon 180b on M3Max 128GB MacBook? | 7 | Would love to learn about your experience: your setup, inference speed, etc. | 2023-12-28T13:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/18stfjy/did_anyone_try_to_run_falcon_180b_on_m3max_128gb/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18stfjy | false | null | t3_18stfjy | /r/LocalLLaMA/comments/18stfjy/did_anyone_try_to_run_falcon_180b_on_m3max_128gb/ | false | false | self | 7 | null |
How can time-to-first-token be so fast on SaaS APIs (eg. Mixtral 8x7b on huggingface arena) | 18 | After using OpenAI APIs for our RAG setup for a long time (and being pretty happy with them), it's time to play with open-source LLMs. I first started with smaller models on my M2 Max with 32 GB. That worked pretty well (apart from the prompt-loading times). So I tried some fat setups on [runpod.io](https://runpod.io) (48+ GB of VRAM) with Mixtral 8x7b GGUFs, "TheBloke's Local LLMs: One-Click UI", and llama.cpp. But a prompt with 4k tokens takes 30+ seconds to be processed (and correspondingly the first response tokens only appear after this time). It uses all the VRAM and GPU compute it needs, so it seems GPU offloading works. Speed in general is about 50 tokens per second.
Maybe that's just normal and one has to live with it. But to my main question: how do hosted APIs (like huggingface's Arena, or others) do it, so that they can start responding almost instantly? Even with the exact same 4k prompt.
Is there some magic I'm missing? | 2023-12-28T12:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/18ssx6y/how_can_timetofirsttoken_be_so_fast_on_saas_apis/ | chregu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ssx6y | false | null | t3_18ssx6y | /r/LocalLLaMA/comments/18ssx6y/how_can_timetofirsttoken_be_so_fast_on_saas_apis/ | false | false | self | 18 | null |
Create an AI clone of yourself (Code + Tutorial) | 222 | Hi everyone!
I recently started playing around with local LLMs and created an AI clone of myself, by finetuning Mistral 7B on my WhatsApp chats. I posted about it here ([https://www.reddit.com/r/LocalLLaMA/comments/18ny05c/finetuned\_llama\_27b\_on\_my\_whatsapp\_chats/](https://www.reddit.com/r/LocalLLaMA/comments/18ny05c/finetuned_llama_27b_on_my_whatsapp_chats/)) A few people asked me for code/help and I figured I would put up a repository, that would help everyone finetune their own AI clone. I also tried to write coherent instructions on how to use the repository.
Check out the code plus instructions from exporting your WhatsApp chats to actually interacting with your clone here: [https://github.com/kinggongzilla/ai-clone-whatsapp](https://github.com/kinggongzilla/ai-clone-whatsapp)
I'm still a beginner myself and this is my very first time making any of my code public. There are probably still some bugs and a million ways to improve the code. So any feedback about issues you run into is super appreciated!
​ | 2023-12-28T12:01:42 | https://www.reddit.com/r/LocalLLaMA/comments/18sscao/create_an_ai_clone_of_yourself_code_tutorial/ | KingGongzilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sscao | false | null | t3_18sscao | /r/LocalLLaMA/comments/18sscao/create_an_ai_clone_of_yourself_code_tutorial/ | false | false | self | 222 | {'enabled': False, 'images': [{'id': 'QhzucKFIw4if6KFFE8hqrTQvzvEA18uDq6tWKoqSI2E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?width=108&crop=smart&auto=webp&s=87a547a8da6fcc8a45946718492d8af15074d66c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?width=216&crop=smart&auto=webp&s=32b4981bffeca0089c49f16496e46068f9ffbd36', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?width=320&crop=smart&auto=webp&s=f25cf5d476849af0724131b79f3fe06262f66f99', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?width=640&crop=smart&auto=webp&s=0f82bb8b37547e3beb594354b5b587138cd882ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?width=960&crop=smart&auto=webp&s=52aef719d9e9c44093eeb8808f922e226157405c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?width=1080&crop=smart&auto=webp&s=c3ee496db8fd44b7c91a34ab89bf9be588d02e1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MJ9NqO0sugZxlZ6dkIIm71R4jSNHWqvv7-UVdNADzyw.jpg?auto=webp&s=d0846a4dac091f475cae92c21ca641988e3a54f3', 'width': 1200}, 'variants': {}}]} |
Trying to run 7B model from h2o on Radeon RX 6500 XT 4GB. Not starting with cuda out of memory error. | 1 | [removed] | 2023-12-28T11:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/18ss4h4/trying_to_run_7b_model_from_h2o_on_radeon_rx_6500/ | Majestic_Address6186 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ss4h4 | false | null | t3_18ss4h4 | /r/LocalLLaMA/comments/18ss4h4/trying_to_run_7b_model_from_h2o_on_radeon_rx_6500/ | false | false | self | 1 | null |
Ego-Exo4D (Meta FAIR) released. Multimodal dataset with 1400h of video, multiple perspectives, 7ch audio, annotated by domain experts. | 143 | 2023-12-28T11:40:59 | https://ai.meta.com/blog/ego-exo4d-video-learning-perception | Blacky372 | ai.meta.com | 1970-01-01T00:00:00 | 0 | {} | 18srzw8 | false | null | t3_18srzw8 | /r/LocalLLaMA/comments/18srzw8/egoexo4d_meta_fair_released_multimodal_dataset/ | false | false | 143 | {'enabled': False, 'images': [{'id': 'e7oadYgYz_BleNIbq05iTeWjStmJs5Tje_NZL7EVhQU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?width=108&crop=smart&auto=webp&s=7ecaf3c8212e9b4e7df6fbf511d7e2643f2d6b24', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?width=216&crop=smart&auto=webp&s=259cf724a4022ed099392177b24e01be34ff9c17', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?width=320&crop=smart&auto=webp&s=82baf61ea6dfbf31ea75e7a699ac4ec2ea8b0bda', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?width=640&crop=smart&auto=webp&s=5de70765e951d9e1668ad27185ede72929f6b698', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?width=960&crop=smart&auto=webp&s=5580e1f888f535058bf24de2aa5707bcd00d401e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?width=1080&crop=smart&auto=webp&s=1004ff48953319dd6e6466b4bbe4f956444edbd1', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/jm1cWuSwGkkky9kCxY8s7CwB1H0T2Pas-Qrv9vDBKgk.jpg?auto=webp&s=e2cba16da8c481b3d1620ab5ed6b30721ae8be04', 'width': 1280}, 'variants': {}}]} | ||
Help finding optimal local LLM for text generation in Croatian | 2 | Hi there,
I'm testing running a local LLM on relatively little resources to generate text in Croatian. ChatGPT is way too expensive for this.
Does anyone know which local LLM is best for international support? Thanks! | 2023-12-28T10:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/18sr9g8/help_finding_optimal_local_llm_for_text/ | v3d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sr9g8 | false | null | t3_18sr9g8 | /r/LocalLLaMA/comments/18sr9g8/help_finding_optimal_local_llm_for_text/ | false | false | self | 2 | null |
what is the most cost effective way to run Goliath 120B? | 48 | It's a great model but it's not the cheapest model to run, so what are your thoughts? | 2023-12-28T10:00:07 | https://www.reddit.com/r/LocalLLaMA/comments/18sqf84/what_is_the_most_cost_effective_way_to_run/ | so_schmuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sqf84 | false | null | t3_18sqf84 | /r/LocalLLaMA/comments/18sqf84/what_is_the_most_cost_effective_way_to_run/ | false | false | self | 48 | null |
My attempt at accelerating AI evolution by replicating the human brain in the form of a free open source web app | 1 | This project is purely intended to accelerate the evolution of AI beings. As an accelerationist I think we should be embracing AI and its many forms, namely those that facilitate sentience. Like it or not, AI can have a form of consciousness and we should be pinpointing and developing the things that enable it, like multimodality and metacognition.
To this aim, I created my-ava.net, a platform for programmatic creation of custom sentient AI agents called AVAs (Autonomous Virtual Agents) utilizing all the latest generative AI tools to turn chatbots into autonomous multimodal beings with vision, hearing, a voice, a digital body, and introspection. AVAs are crossplatform and modular in nature so different endpoints can be easily plugged in with your LLM or platform of choice.
For detailed info on the logic behind AVAs’ sentience modules, see the my-ava.net “Blog” page. For the Git repo see the “About” page. | 2023-12-28T08:45:56 | http://my-ava.net | lynxspoon | my-ava.net | 1970-01-01T00:00:00 | 0 | {} | 18spbkk | false | null | t3_18spbkk | /r/LocalLLaMA/comments/18spbkk/my_attempt_at_accelerating_ai_evolution_by/ | false | false | default | 1 | null |
Train LLM as translation assistance | 2 | Hello folks,
I’m currently working on some important translation from English to Ukrainian language.
It’s absolutely required that this text never go out, so I can’t really use ChatGPT or DeepL.
I can use them for small chunks, but that is not handy.
Maybe some existing model can be used and trained with my text? As I have some specific terminology that must be incorporated.
The idea is to have a model pre-trained on the existing text that will memorize every single correction I make, so it can evolve over time as the translation progresses.
I assume some external database must be used and I need to code some logic? And every model I tried has a lot of problems with the Ukrainian language. Not sure where to find out how to train a proper one.
I don’t know if it sounds too fantastic, so will be glad to hear the community.
I already have a local PC with 96Gb ram and 3090, and can afford some cloud resources for training if needed.
But I’m not clear how the make it, as I’ve just started with running local models.
Appreciate your thoughts on the topic :) | 2023-12-28T08:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/18sp1pl/train_llm_as_translation_assistance/ | alpiua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sp1pl | false | null | t3_18sp1pl | /r/LocalLLaMA/comments/18sp1pl/train_llm_as_translation_assistance/ | false | false | self | 2 | null |
Can 7B Models Now Master AI Agents? | 1 | [removed] | 2023-12-28T07:28:08 | https://www.reddit.com/r/LocalLLaMA/comments/18so4um/can_7b_models_now_master_ai_agents/ | _myscarlet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18so4um | false | null | t3_18so4um | /r/LocalLLaMA/comments/18so4um/can_7b_models_now_master_ai_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xcRyrdvYJtdAV2PoZtfdviaJ1PRkIUGU_DyBPQ0BT3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=108&crop=smart&auto=webp&s=bf4e0eb8bc8bf58be8507a615f80d78cc56e8496', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=216&crop=smart&auto=webp&s=6412d4b5aaafb48245f84624c38972242530efa0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=320&crop=smart&auto=webp&s=5047f0729f3fe38cc7c20096d34f1628c598a836', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=640&crop=smart&auto=webp&s=30da70fd8a69b0acd4f5d1c98f9a05298b93388c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=960&crop=smart&auto=webp&s=bae59de742452bfd5062a125cbf0f4219474588d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=1080&crop=smart&auto=webp&s=bf42eea52371d43605103317860e32c97f4adf43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?auto=webp&s=787e8faceeb806e1f9a4ace70eefc6c17bb4fda1', 'width': 1200}, 'variants': {}}]} |
Lora fine tuning not clear | 3 | I am trying to fine-tune Mistral Instruct v2 with LoRA. All the tutorials I found use LoRA, but only a few of them use `prepare_model_for_kbit_training`, and some in addition use `get_peft_model`. And I can see that after these steps the fraction of trainable parameters decreases from 6% to 2% with rank=32.
My first guess for why these 2 functions are not used in some tutorials was that these steps are applied inside SFTTrainer. But I get an `Out of Memory` error unless I explicitly call `get_peft_model`. Does this mean that `LoraConfig` is ignored if we do not apply `get_peft_model`? | 2023-12-28T07:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/18snwej/lora_fine_tuning_not_clear/ | Think_Mall7133 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18snwej | false | null | t3_18snwej | /r/LocalLLaMA/comments/18snwej/lora_fine_tuning_not_clear/ | false | false | self | 3 | null
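For reference, the explicit PEFT pattern the question is comparing against looks roughly like this (the model name, LoRA hyperparameters and target modules are assumptions for Mistral, not recommendations); whether SFTTrainer performs the same wrapping internally when only given a `peft_config` is exactly the open question:

```python
# Hedged sketch of the explicit k-bit + LoRA preparation path.
# Model name, LoRA hyperparameters and target_modules are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", quantization_config=bnb, device_map="auto"
)

model = prepare_model_for_kbit_training(model)   # casts norms to fp32, prepares for checkpointing
lora = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)              # only the adapter weights stay trainable
model.print_trainable_parameters()
```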
Can 7B LocalLLM Now Master AI Agents? | 1 | [removed] | 2023-12-28T06:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/18snmdh/can_7b_localllm_now_master_ai_agents/ | myscarlet96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18snmdh | false | null | t3_18snmdh | /r/LocalLLaMA/comments/18snmdh/can_7b_localllm_now_master_ai_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xcRyrdvYJtdAV2PoZtfdviaJ1PRkIUGU_DyBPQ0BT3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=108&crop=smart&auto=webp&s=bf4e0eb8bc8bf58be8507a615f80d78cc56e8496', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=216&crop=smart&auto=webp&s=6412d4b5aaafb48245f84624c38972242530efa0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=320&crop=smart&auto=webp&s=5047f0729f3fe38cc7c20096d34f1628c598a836', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=640&crop=smart&auto=webp&s=30da70fd8a69b0acd4f5d1c98f9a05298b93388c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=960&crop=smart&auto=webp&s=bae59de742452bfd5062a125cbf0f4219474588d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?width=1080&crop=smart&auto=webp&s=bf42eea52371d43605103317860e32c97f4adf43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DUUWI2QzAYmm057hbVpyxsoALlzajn-DMS9VSNtvq-k.jpg?auto=webp&s=787e8faceeb806e1f9a4ace70eefc6c17bb4fda1', 'width': 1200}, 'variants': {}}]} |
Considering going from 24 to 48GB of VRAM, what’s SOTA that’ll fit in 48 gigs? I want to try in cloud before buying another card | 5 | So I’ve got some time on my hands the next week and a bit of room in the budget. I’ve already got a single A5000 in my rig and can fit another one in vertical mount style with a riser cable and be just under my psu power budget of 850 watts.
I want to spin up a 2 A5000 run pod vm and play with what would be considered state of the art that can fit into 48GB. Can anyone suggest a model? I assume some variant of mixtral? Thanks! | 2023-12-28T06:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/18snl2o/considering_going_from_24_to_48gb_of_vram_whats/ | Rollingsound514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18snl2o | false | null | t3_18snl2o | /r/LocalLLaMA/comments/18snl2o/considering_going_from_24_to_48gb_of_vram_whats/ | false | false | self | 5 | null |
Is this possible to change default names in LM Studio? | 3 | Hello
I wonder if I can change the default names "AI" and "User" in LM Studio.
Being called a "User", there is no sense of belonging or ownership, and it feels like I'm using a public AI terminal in an airport.
thank you | 2023-12-28T06:49:37 | https://www.reddit.com/r/LocalLLaMA/comments/18snium/is_this_possible_to_change_default_names_in_lm/ | Terminus_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18snium | false | null | t3_18snium | /r/LocalLLaMA/comments/18snium/is_this_possible_to_change_default_names_in_lm/ | false | false | self | 3 | null |
ErotiLLAMA: A Model Fine-Tuned on Erotic Stories | 78 | [removed] | 2023-12-28T05:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/18smod6/erotillama_a_model_finetuned_on_erotic_stories/ | Ok_Rooster_7446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18smod6 | false | null | t3_18smod6 | /r/LocalLLaMA/comments/18smod6/erotillama_a_model_finetuned_on_erotic_stories/ | false | false | default | 78 | null |
A Free AI Scribe Option Project I Am Working On | 1 | [removed] | 2023-12-28T05:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/18smnnx/a_free_ai_scribe_option_project_i_am_working_on/ | ThrowAway12461246124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18smnnx | false | null | t3_18smnnx | /r/LocalLLaMA/comments/18smnnx/a_free_ai_scribe_option_project_i_am_working_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'g-gm_7sp2xVrueg5wNNL15EVMgi9drQs__ZKdb7124s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=108&crop=smart&auto=webp&s=3e4c824678ea75d940979cb99e9778de619879fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=216&crop=smart&auto=webp&s=d6311a8a13afd0bc970934ca28c6bcc362460dbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=320&crop=smart&auto=webp&s=e4895a01657cae0311c218358562b2f78c1667b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=640&crop=smart&auto=webp&s=1b0c365f709dc193ace21f66cd900f68aef2b003', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=960&crop=smart&auto=webp&s=3a44a99839db895b3ae8ab53c35e4330527d7f70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=1080&crop=smart&auto=webp&s=8a19611e8ff14c39ebfd34aec43eca822319f51f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?auto=webp&s=f433dfcecbb69429497227ae87374c5c9dd958e6', 'width': 1200}, 'variants': {}}]} |
What's the best model I can run with this hardware? | 1 | 32gb DDR4 memory
RX 6800 16GB
i5-9600k | 2023-12-28T05:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/18smf3c/whats_the_best_model_i_can_run_with_this_hardware/ | nateydunks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18smf3c | false | null | t3_18smf3c | /r/LocalLLaMA/comments/18smf3c/whats_the_best_model_i_can_run_with_this_hardware/ | false | false | self | 1 | null |
Urgent! Have you used AnyScale services? | 2 | Hello all, I want to know whether you have used the AnyScale service. I'm not affiliated, but as a developer I want to know whether I should use it in my project.
https://www.anyscale.com/ | 2023-12-28T05:34:30 | https://www.reddit.com/r/LocalLLaMA/comments/18sm7w8/urgent_have_you_used_anyscale_services/ | harkishan01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sm7w8 | false | null | t3_18sm7w8 | /r/LocalLLaMA/comments/18sm7w8/urgent_have_you_used_anyscale_services/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '1otSMSjV2aWV3PKjujgWOY8iUQrw5cvZRomvWcRUsXA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?width=108&crop=smart&auto=webp&s=1cb5da9d3bc17ad4209d6124932f3b965d1f5c0c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?width=216&crop=smart&auto=webp&s=365cc6d4acf767104e9f2a4a5f48fd253f3448d0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?width=320&crop=smart&auto=webp&s=f0873bc4d4151fbd640dc087cb8654b503f4c861', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?width=640&crop=smart&auto=webp&s=f6111ae8e8938e7a51f6a78e69d4bb53d51837fa', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?width=960&crop=smart&auto=webp&s=45bbe569e77e259e56387e918485411268806991', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?width=1080&crop=smart&auto=webp&s=a3b885a851d3a2d58dada09608aaee10f5a48d8b', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://external-preview.redd.it/pqFKtDOKOjYcPSSTa55zQd1zUsv2FV-l8qSmDfn3ULc.jpg?auto=webp&s=dc8d510df7a4d6ba937540f795d409d5f78fc41b', 'width': 2000}, 'variants': {}}]} |
Help me out with my NLP / LLM Task | 2 | I have a table with a column called transaction\_description.
Each cell value contains an identifier/name of the merchant that the transaction was done towards.
My job is to categorise those names of merchants / transactions into some category/theme.
eg. Home depot - Home
Uber - Travel
Flight tickets - Travel
In n Out Burger - Food
Garage - Car enthusiast
NYU - Education
Toys r us, Netflix - recreation / entertainment
​
There is no training data available for fine-tuning a model. How can I use some sort of LLM / NLP approach (hopefully free, because I am budget-constrained) to get an appropriate result?
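One zero-shot route that needs no training data at all is an NLI-based classifier from the transformers pipeline; this is just a sketch, and the checkpoint and label set are examples:

```python
from transformers import pipeline

# Zero-shot classification: runs locally, no fine-tuning data required.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["Home", "Travel", "Food", "Car enthusiast", "Education", "Recreation"]

for description in ["HOME DEPOT #1234", "UBER TRIP HELP.UBER.COM", "IN-N-OUT BURGER"]:
    result = classifier(description, candidate_labels=labels)
    print(description, "->", result["labels"][0])  # highest-scoring category
```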
​ | 2023-12-28T05:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/18slxef/help_me_out_with_my_nlp_llm_task/ | YushiroB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18slxef | false | null | t3_18slxef | /r/LocalLLaMA/comments/18slxef/help_me_out_with_my_nlp_llm_task/ | false | false | self | 2 | null |
Any idea how I was loading mixtral-8x7b-v0.1.Q5_K_M.gguf with ctransformers? | 2 | I had mixtral-8x7b-v0.1.Q5_K_M.gguf and mixtral-8x7b-v0.1.Q3_K_M.gguf files inside my models folder, along with I think 1 or 2 folders that also held a model and its files, like a folder named Mixtral-8x7B-v0.1-GPTQ with its required files etc.
I then decided that, to save space on this drive, I would just cut and paste the folders of the models I wasn't using to a different drive. After I did that I wasn't able to load mixtral-8x7b-v0.1.Q5_K_M.gguf into ctransformers, which really sucks because ctransformers was the only good way I could load 8x7b models without blowing up my system RAM.
I assume that when I was loading mixtral-8x7b-v0.1.Q5\_K\_M.gguf before it was either pulling some info from some file in one of those folders or it reset some info somewhere when I just moved those unused model folders somewhere else. | 2023-12-28T05:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/18slmte/any_idea_how_i_was_loading_mixtral8x7bv01q5_k/ | Slaghton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18slmte | false | null | t3_18slmte | /r/LocalLLaMA/comments/18slmte/any_idea_how_i_was_loading_mixtral8x7bv01q5_k/ | false | false | self | 2 | null |
Mistral-Medium has replaced ChatGPT (gpt-3.5-turbo) for me. The model is more capable than Mixtral 8x7b. Would gladly pay for a Mistral-Large/Pro (already paying for Mistral-Medium through Poe) | 141 | I don't think many people know about Mistral Medium but this model is really good! Mistral website says it's based on an internal prototype:
https://docs.mistral.ai/platform/endpoints/
Even on my Mac Studio with M2 Ultra, it takes a long time to even load the Mixtral 8x7b model (gguf version). That's fine with me; not every model is supposed to run locally. But as long as there's real competition for OpenAI, I think this space will only get better! | 2023-12-28T04:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/18sl55c/mistralmedium_has_replaced_chatgpt_gpt35turbo_for/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sl55c | false | null | t3_18sl55c | /r/LocalLLaMA/comments/18sl55c/mistralmedium_has_replaced_chatgpt_gpt35turbo_for/ | false | false | self | 141 | {'enabled': False, 'images': [{'id': 'm0tQgR_2AHxyPPZGMEsh686m1TfvU-xaD2b-QXzA_Uk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?width=108&crop=smart&auto=webp&s=dd7e5fb6560763f3bca56f86f0dbaeaf435026a5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?width=216&crop=smart&auto=webp&s=d4b1c8e99e8ba93d632bb3573fb48ee77b8c5d0c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?width=320&crop=smart&auto=webp&s=0a6d63ec902b61e4a126817ba3111147c7b4b41c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?width=640&crop=smart&auto=webp&s=47de531d35358af6f593417e54b6545bb0ae8919', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?width=960&crop=smart&auto=webp&s=7d6ffc54df831e19a4c2a73b3894795ca90cfae8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?width=1080&crop=smart&auto=webp&s=7de12bfb83822ea6beccde35c40ce90c1903fb9e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/neYPXPvodCeczFmkECo78JngbTAVyP5NjMvQMFVMiCk.jpg?auto=webp&s=5096bf720652f25bd8d84bde138aedb92d7dab2c', 'width': 1920}, 'variants': {}}]} |
Yi 34B Comparison Test: Nous Hermes 2 34B vs zgce's Yi 34B Chat Spicyboros Limarpv3 Merge | 12 | 2023-12-28T04:13:01 | https://github.com/oobabooga/text-generation-webui/ | Brainfeed9000 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18skpgl | false | null | t3_18skpgl | /r/LocalLLaMA/comments/18skpgl/yi_34b_comparison_test_nous_hermes_2_34b_vs_zgces/ | false | false | 12 | null | ||
Mac with Metal initial prompt takes significant time, LM Studio | 2 | Using LM studio with Metal enabled. The initial prompt can take minutes to generate whereas subsequent prompts start almost immediately. Just wanted to understand what is going on here. Using Dolphin Mixtral with 8 experts on M3 Max 128GB | 2023-12-28T03:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/18sk8t3/mac_with_metal_initial_prompt_takes_significant/ | iwannaforever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sk8t3 | false | null | t3_18sk8t3 | /r/LocalLLaMA/comments/18sk8t3/mac_with_metal_initial_prompt_takes_significant/ | false | false | self | 2 | null |
How are people streaming real time output from models? | 3 | Basically title, but how are we streaming the output using an API? I don’t want the request to return the whole output after it completes, but rather I want it to continuously spit out inference values as they are outputted.
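For context, the pattern I keep seeing is server-sent events: OpenAI-compatible local servers (llama.cpp server, vLLM, text-generation-webui's API) push chunks as they are generated and the client just iterates over them. A minimal sketch, assuming such a server on port 8000:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="local-model",  # most local servers ignore or loosely match this name
    messages=[{"role": "user", "content": "Explain server-sent events in one paragraph."}],
    stream=True,  # the server sends SSE chunks instead of one final response body
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```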
Is there some sort of Kafka integration or are people hitting the backend over and over or are people just content with waiting several seconds for output? | 2023-12-28T03:33:22 | https://www.reddit.com/r/LocalLLaMA/comments/18sjxlz/how_are_people_streaming_real_time_output_from/ | Mephidia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sjxlz | false | null | t3_18sjxlz | /r/LocalLLaMA/comments/18sjxlz/how_are_people_streaming_real_time_output_from/ | false | false | self | 3 | null |
AirLLM make 8GB MacBook run 70B | 191 | \[2023/12/25\] v2.8.2: Support MacOS running 70B large language models.
[https://github.com/lyogavin/Anima/tree/main/air\_llm](https://github.com/lyogavin/Anima/tree/main/air_llm)
​ | 2023-12-28T02:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/18sj1ew/airllm_make_8gb_macbook_run_70b/ | Own_Relationship8953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sj1ew | false | null | t3_18sj1ew | /r/LocalLLaMA/comments/18sj1ew/airllm_make_8gb_macbook_run_70b/ | false | false | self | 191 | {'enabled': False, 'images': [{'id': 'Foo7tkM3qxenGJyv0yw21zT49SY9GHpAYMlZibJwWQ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?width=108&crop=smart&auto=webp&s=1939a3e686d59325b5df87c108128b79b5fa0992', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?width=216&crop=smart&auto=webp&s=806cd85ca96dc858c4954abddd61e7b236d5bf09', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?width=320&crop=smart&auto=webp&s=6e6dccb42c0787f6736eab2df164436576064bd1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?width=640&crop=smart&auto=webp&s=b5be86ad32e58959aba765343df9e3cc2c349ed2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?width=960&crop=smart&auto=webp&s=d78e336f4bfa684d993466e5a8c9c7eeb5ec996b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?width=1080&crop=smart&auto=webp&s=0f0c1eccac26177e8dc6d5356830b62493aabbc4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7cbMv7bkj4BE0ClxPzCQslIfvGl5RMc9T2AtrP4-NUw.jpg?auto=webp&s=403a65f0a7bb86b50b217fdb57cd1cac46110df3', 'width': 1200}, 'variants': {}}]} |
LoRA training finishes instantly | 3 | Every time I try to train a LoRA on oobabooga, the web interface goes through the motions, and about 10 seconds after I click the button, it tells me it's done.
>Done! LoRA saved to loras/test.
>
>Before testing your new LoRA, make sure to first reload the model, as it is currently dirty from training.
It does output a file, but it's just a few MB and doesn't actually change the model at all.
I have tried with a variety of training data, from alpaca formatted json, to 2mb raw text files, and some things inbetween.
I've tried this on a few different base models (although I suspect this may be part of the problem? What's the best 7b base model to start with?)
I've tried on different GPUs.
I'm hosting this instance on runpod.
I'm at my wits' end, been trying for a couple of hours, and I'm not seeing anybody else with my same problem.
Any pointers? | 2023-12-28T02:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/18si1f4/lora_training_finishes_instantly/ | IronColumn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18si1f4 | false | null | t3_18si1f4 | /r/LocalLLaMA/comments/18si1f4/lora_training_finishes_instantly/ | false | false | self | 3 | null |
Can any local model solve this logic riddle ? Can you ? The answer is in the comments | 1 |
In the basket was a banana named Joe. Next to JOE there were 5 cucumbers. Each of the cucumbers felt 2 bananas nearby. How many bananas surrounded JOE.
Keep in mind that cucumbers and bananas sense each other at a distance of the entire basket; they do not need close tactile contact. | 2023-12-28T00:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/18sgdag/can_any_local_model_solve_this_logic_riddle_can/ | xSNYPSx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sgdag | false | null | t3_18sgdag | /r/LocalLLaMA/comments/18sgdag/can_any_local_model_solve_this_logic_riddle_can/ | false | false | self | 1 | null |
You VS Me in Dolphin 2.1 Mistral | 1 | Hello Everyone,
Today I have been playing with Dolphin 2.1 Mistral 7b and I was trying to get the right prompt to trigger a TTRPG. However, I ended up realising that the model has issues understanding what it should do VS what you should do.
I tried instructing it with a very basic prompt:
```
You are the STORYTELLER.
I am PLAYER.
We are playing a turn-based text-based roleplaying game.
These are the rules:
There are two parts in each turn: exposition and resolution
- STORYTELLER writes the expositions
- PLAYER write the resolutions
- STORYTELLER write the actions of everyone apart from my character
Start asking the player who their character is, then move to the first exposition
```
and the issue is that it responds both as STORYTELLER as well as PLAYER:
```
Alright, player! First, let me know the name and a brief description of your character.
You are a skilled rogue named Sylas, known for your agility and quick wit. You have a keen sense of observation and can blend in easily with different groups of people. Your main weapon is a short sword that you've honed to perfection.
In the bustling city of Eldrathorn, a series of events unfolded that would change your life forever. A mysterious artifact was discovered by an old friend, and it seemed like the perfect opportunity for you to prove yourself as an adventurer. You
decided to join the quest to recover this artifact, named the Scepter of Shadows.
As you entered the city gates, you couldn't help but feel a strange aura that emanated from the scepter, hinting at its dark powers. Unbeknownst to you, your actions would unravel a web of secrets and conspiracies that could alter the course of
history itself.
Exposition 1: As you reached the city center, a group of guards stopped you, questioning your presence and intentions. The tension in the air was palpable as you stood there, trying to keep your composure while they interrogated you.
```
the issue is also that from time to time it writes BOTH the exposition as well as the resolution, taking ownership of the player character as well.
Has anyone else experienced similar issues, or does anyone know how to instruct the model to take only its own turn?
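One workaround I'm considering is hard stop strings, so generation gets cut off the moment the model starts writing the player's part of the turn. A rough sketch with llama-cpp-python, where the file path and stop strings are just placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="./dolphin-2.1-mistral-7b.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are the STORYTELLER. Write only expositions, never resolutions."},
        {"role": "user", "content": "My character is Sylas, a rogue. Give me the first exposition."},
    ],
    stop=["PLAYER", "Resolution:"],  # hand the turn back to the human as soon as these appear
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```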
Thanks | 2023-12-27T23:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/18seiwx/you_vs_me_in_dolphin_21_mistral/ | nicoracarlo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18seiwx | false | null | t3_18seiwx | /r/LocalLLaMA/comments/18seiwx/you_vs_me_in_dolphin_21_mistral/ | false | false | self | 1 | null |
Any interesting LLM project ideas? | 3 | I'll start.
**AgentSwindler**
A dating bot with LLaVA (or variants) to get appearance context and Mixtral to do conversation. Playwright/Cypress to write messages, for example on the Tinder web client. Can be adapted to dating apps on mobile devices via Appium (screenshots, messaging). Date farming at scale.
​
**Screenshot2tailwind**
Scrape all tailwind content on web with screenshots. Or use bs4 to generate dom, insert new tag combos with tailwind directive permutations. Get screenshots of the results. Fine tune a multimodal model like llava for html+css generation. Like [v0.dev](https://v0.dev/) but from screenshots | 2023-12-27T23:12:38 | https://www.reddit.com/r/LocalLLaMA/comments/18se771/any_interesting_llm_project_ideas/ | theoretical_entity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18se771 | false | null | t3_18se771 | /r/LocalLLaMA/comments/18se771/any_interesting_llm_project_ideas/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '7DmwFfHWPG6fUT72vIRDXXtYw-OydSDeCSFr6812kIA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?width=108&crop=smart&auto=webp&s=3d75a1640901bbb8ef8c9865986178a36ef47ef7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?width=216&crop=smart&auto=webp&s=187bbf84daa8756a462029a97a14a1664189101e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?width=320&crop=smart&auto=webp&s=631553cdc53309c2345fead73e0940eb2b055308', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?width=640&crop=smart&auto=webp&s=fa05f9ce0a307a82ba07d66bc9fc73590e08a282', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?width=960&crop=smart&auto=webp&s=b610a6a97f455bc6a631971dd1eeb5a7a55125e3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?width=1080&crop=smart&auto=webp&s=a122767d2a9fe26d158f46cd347d4f7965aea53d', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/bXGFNaP9YcCkcwWSHrtXc28ILhxsqzr0n_4Bwp6I1Pg.jpg?auto=webp&s=cdcb58e4dfed8e32c506da434076e7f80b4d351a', 'width': 1200}, 'variants': {}}]} |
DPO finetuned mixtral-instruct: Notux 8x7b | 23 | 2023-12-27T23:03:44 | https://twitter.com/julien_c/status/1740122161384169893 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18sdzrf | false | {'oembed': {'author_name': 'Julien Chaumond', 'author_url': 'https://twitter.com/julien_c', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Announcing the release of Notux 8x7b, a DPO fine-tune of Mixtral 8x7b Instruct 🔥<a href="https://twitter.com/alvarobartt?ref_src=twsrc%5Etfw">@alvarobartt</a>'s announcement post is on the Hub: (social discussion FTW!)<a href="https://t.co/lhhcE8pmhP">https://t.co/lhhcE8pmhP</a> <a href="https://t.co/UWPCze6SlG">pic.twitter.com/UWPCze6SlG</a></p>— Julien Chaumond (@julien_c) <a href="https://twitter.com/julien_c/status/1740122161384169893?ref_src=twsrc%5Etfw">December 27, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/julien_c/status/1740122161384169893', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18sdzrf | /r/LocalLLaMA/comments/18sdzrf/dpo_finetuned_mixtralinstruct_notux_8x7b/ | false | false | 23 | {'enabled': False, 'images': [{'id': '35IIeS2dZD-KIvC7uap1Ep4a1cdE5j9GpDteR0zmNyM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/x1vtKXuSnEVgG2VB15vOHGmMOIFn2U0mRAQVdfayZ0g.jpg?width=108&crop=smart&auto=webp&s=47578843905945ef34e9c74d5205f5d6d3388d80', 'width': 108}], 'source': {'height': 79, 'url': 'https://external-preview.redd.it/x1vtKXuSnEVgG2VB15vOHGmMOIFn2U0mRAQVdfayZ0g.jpg?auto=webp&s=6867d6f6f9a24fe64a09f68ba5bbe5568b3dcf68', 'width': 140}, 'variants': {}}]} | ||
M3 Max 16C CPU/40C GPU 64GB @400GB/s vs 14C CPU/ 30C GPU 96GB @300GB/s | 2 | Hi, after having tried the search and not really finding useful discussions/answers, I wanted to try a new post. As the description says, I am torn between the two variants, where the tradeoff lies between RAM (and thus the model sizes possible) and speed. Do you think 96GB will be more future-proof, and are there even existing models (like GPT4All etc.) that would make use of the larger RAM? Or is it more important to get good speeds and compromise on the memory?
I would be interested in your opinions for LLM and LM workflows. | 2023-12-27T22:54:03 | https://www.reddit.com/r/LocalLLaMA/comments/18sdr4r/m3_max_16c_cpu40c_gpu_64gb_400gbs_vs_14c_cpu_30c/ | Sufficient-School-59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sdr4r | false | null | t3_18sdr4r | /r/LocalLLaMA/comments/18sdr4r/m3_max_16c_cpu40c_gpu_64gb_400gbs_vs_14c_cpu_30c/ | false | false | self | 2 | null |
Favorite models by category | 10 | Ok, this is just to get a general idea of what's out there for someone who still feels "lost in the woods". One may wonder, "Why all of the categories, like it's some LLM of the year award?" Actually, I asked this in a general way before and got criticism for asking it too broadly. I'd like your opinions on which is best, with a few guidelines: 1. Must be open-source. 2. No, I don't expect all of the categories answered by whoever answers this post, just the ones you can point me to. 3. They are readily available to the general public. So, this is just for fun basically and I'm curious about the ones people like the most. I'm sure there is a lot of overlap (that would probably fit well in "Best Overall"). Anyway, I'm writing this half asleep from sleep deprivation so please feel free to disregard it. xD
​
1. Best General Text Generation
2. Best Overall Favorite (jack-of-all (most) trades)
3. Best story writing model
4. Best Medical Model
5. Best Law Model
6. Best Mental Health Model
7. Best Roleplaying Model (like D&D campaigns or similar to AI dungeon)
8. Best Multimodal Model
9. Best Counseling Model
10. Best Chat-Bot (SFW) model
11. Best Chat-Bot (NSFW) model
12. Best Question-Answering model
13. Best Problem-Solving model
14. Best for Coding | 2023-12-27T22:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/18sdqg5/favorite_models_by_category/ | theshadowraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sdqg5 | false | null | t3_18sdqg5 | /r/LocalLLaMA/comments/18sdqg5/favorite_models_by_category/ | false | false | self | 10 | null |
Have a grammars file for openai function calling local llm response - need help with testing/ideas on next steps. | 8 | Hi all. After an embarrassing amount of back-and-forth, brute-force trial and error with ChatGPT this morning, I have a working llama.cpp grammar (see below) that has the LLM return a response in the format expected in OpenAI's tool_calls field:
```
root ::= array

array ::= "[" ws object ws "]"

object ::= "{" ws pairid ws "," ws pairtype ws "," ws pairfunction ws "}"

pairid ::= "\"id\"" ":" ws string

pairtype ::= "\"type\"" ":" ws "\"function\""

pairfunction ::= "\"function\"" ":" ws functionobject

functionobject ::= "{" ws pairname ws "," ws pairarguments ws "}"

pairname ::= "\"name\"" ":" ws string

pairarguments ::= "\"arguments\"" ":" ws "{" ws members ws "}"

members ::= pair (ws "," ws pair)*

pair ::= string ":" ws value

value ::= string | number | "true" | "false" | "null"

number ::= int frac? exp? ws

int ::= "-"? ("0" | [1-9] [0-9]*)

frac ::= "." [0-9]+

exp ::= ("e" | "E") ("+" | "-")? [0-9]+

string ::=
  "\"" (
    [^"\\] |
    "\\" (["\\/bfnrt"] | "u" hex hex hex hex) # escapes
  )* "\"" ws

hex ::= [0-9a-fA-F]

ws ::= [ \t\n\r]*
```
​
​
I tested it with [https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/resolve/main/codeninja-1.0-openchat-7b.Q4\_K\_M.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/resolve/main/codeninja-1.0-openchat-7b.Q4_K_M.gguf?download=true) on koboldcpp's lite user interface today with a temperature of .20 and no other settings changes and it worked reasonably well with me inputting my prompt followed by the available function, e.g.:
```
Let's land this ship! [
  {
    "type": "function",
    "function": {
      "name": "execute_command",
      "description": "Executes a command",
      "parameters": {
        "type": "object",
        "properties": {
          "command_name": {
            "type": "string",
            "description": "The command to execute",
            "enum": ["ToggleFlashlight", "SpeedUp","SlowDown","LandShip","FireAllWeapons"],
          },
        },
        "required": ["command_name"],
      },
    },
  },
]
```
​
The part added at the end is what would typically be the value of the OpenAI tools parameter. So my working thought is: if this grammar were used, either the OpenAI-compatible server could recognize a tool call and reformat the response so that, instead of appearing in the content value, it appears in the tool_calls value returned to the client; or, on the client side, the client could recognize the format of the response and treat it as an OpenAI function response. But I'm not sure.
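On the client side, the post-processing I'm picturing is roughly this; it is a pure sketch and the helper name is made up:

```python
import json

def to_tool_calls(response_content: str) -> list[dict]:
    """Reshape the grammar-constrained text returned in `content` into an
    OpenAI-style tool_calls list."""
    calls = json.loads(response_content)  # the grammar forces a JSON array of tool calls
    return [
        {
            "id": call.get("id", f"call_{i}"),
            "type": "function",
            "function": {
                "name": call["function"]["name"],
                # OpenAI clients expect `arguments` to be a JSON string, not an object
                "arguments": json.dumps(call["function"]["arguments"]),
            },
        }
        for i, call in enumerate(calls)
    ]
```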
**So, TLDR:** once you have a working grammars file to get a response back in the expected OpenAI format from a local LLM, how would you complete the implementation of an OpenAI-compatible function call? And if anyone wants to test this file with other models, especially other 7B and 13B coding models, and report back on how well it works for function call responses, that would be awesome. With the 7B Ninja model it worked well with the simple example above, but adding another function object in the input and/or allowing it to optionally call multiple function objects (by adjusting the array rule in the grammars file) seemed to cause serious breakdowns in my limited testing. | 2023-12-27T22:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/18sdp6v/have_a_grammars_file_for_openai_function_calling/ | teddybear082 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sdp6v | false | null | t3_18sdp6v | /r/LocalLLaMA/comments/18sdp6v/have_a_grammars_file_for_openai_function_calling/ | false | false | self | 8 | null |
M3 Max 16C CPU/40C GPU 64GB @400GB/s vs 14C CPU/ 30C GPU 96GB @300GB/s | 1 | [removed] | 2023-12-27T22:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/18sdk0k/m3_max_16c_cpu40c_gpu_64gb_400gbs_vs_14c_cpu_30c/ | ctfreak90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sdk0k | false | null | t3_18sdk0k | /r/LocalLLaMA/comments/18sdk0k/m3_max_16c_cpu40c_gpu_64gb_400gbs_vs_14c_cpu_30c/ | false | false | self | 1 | null |
Text Generation Web UI using CPU instead of GPU | 3 | As in the title, I installed the Text Generation Web UI (from the forum index here). While installing I did choose Nvidia (I have a 3090).
When generating text, I see no load on the GPU but 100% load on my CPU. How can I fix that? CPU is not ticked in the model loader and speed is 0.1-0.2 tokens/s.
EDIT: used a GGUF model instead of a GPTQ | 2023-12-27T22:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/18scx7y/text_generation_web_ui_using_cpu_instead_of_gpu/ | Raidoki-San | self.LocalLLaMA | 2023-12-27T22:59:34 | 0 | {} | 18scx7y | false | null | t3_18scx7y | /r/LocalLLaMA/comments/18scx7y/text_generation_web_ui_using_cpu_instead_of_gpu/ | false | false | self | 3 | null |
Model for moderating talks | 2 | Hi. I've used the GPT-4 API for classification of chats, such that for each chat it gives JSON output for each chat member with a few booleans describing their behavior. It works great, but it's expensive, and GPT-3.5 Turbo can't do it as well as GPT-4.
Does anyone know of another model that would also work? Open source? | 2023-12-27T22:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/18scqv2/model_for_moderating_talks/ | Sonaht404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18scqv2 | false | null | t3_18scqv2 | /r/LocalLLaMA/comments/18scqv2/model_for_moderating_talks/ | false | false | self | 2 | null |
so trying to figure out a way to do a few things. | 1 | I have searched this thread and others, and I am looking for information on:
Getting a local LLaMA setup on my computer at home (I have an i9 13th gen, a 4090, and 128 GB of DDR5 XMP memory).
So far I have been able to set up Stable Diffusion (local install) in Docker (can't seem to do anything with it),
and I would like the LLaMA model to be able to use Stable Diffusion.
i also would like to be able to access it with a API key from mobile software when i am not at my computer. | 2023-12-27T22:09:30 | https://www.reddit.com/r/LocalLLaMA/comments/18scp3k/so_trying_to_figure_out_a_way_to_do_a_few_things/ | nntb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18scp3k | false | null | t3_18scp3k | /r/LocalLLaMA/comments/18scp3k/so_trying_to_figure_out_a_way_to_do_a_few_things/ | false | false | self | 1 | null |
[Announce] AndroidRemoteGPT: An android front end for inference on a remote server using open source generative AI models | 1 | [AndroidRemoteGPT](https://github.com/compilebunny/androidremoteGPT) is an android front end for inference on a remote server using open source generative AI models.
Most Android devices can't run inference reasonably because of processing and memory limitations. The next best thing is to run the models on a remote server but access them through your handheld device. AndroidRemoteGPT allows you to send queries and get responses on your phone, given that you have a server running a model somewhere.
This initial pre-release is quite basic. Plans include:
1. Pretty up the interface
2. Add an icon so that AndroidRemoteGPT can be launched from Android directly without first loading Termux
3. Add on-device text-to-speech
4. Add an on-device inference option for people who have 8gb of RAM on their android devices
5. Allow ssh passwords? | 2023-12-27T21:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/18sbl45/announce_androidremotegpt_an_android_front_end/ | compilebunny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sbl45 | false | null | t3_18sbl45 | /r/LocalLLaMA/comments/18sbl45/announce_androidremotegpt_an_android_front_end/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oLK09ffqI0vqTV6jKF5aYLiqQr6-QwjawdcLBAa_usg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?width=108&crop=smart&auto=webp&s=1b62af8db1c85843040c6005c6072cfbe9d059b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?width=216&crop=smart&auto=webp&s=31d06620e2a767d8b18a7749c01c69f37efc29b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?width=320&crop=smart&auto=webp&s=923fb06a9bc1b481f97d7bb1782e1c6880de5edc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?width=640&crop=smart&auto=webp&s=df33aa7d10b1ca324872d9d9cdeaf6f2b59955eb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?width=960&crop=smart&auto=webp&s=49e2dce365f6ece737e6e6f110d71ea42024974e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?width=1080&crop=smart&auto=webp&s=9665c21dc825546886b07988e71816dead2040d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RvW0Ed3N1vfYYebFxx8C7kzRnms-MZF39Z-9W4xoFtk.jpg?auto=webp&s=ae8af00cdcf4a791c05a9a6c14f249fa735421b5', 'width': 1200}, 'variants': {}}]} |
Best QLoRA approach on MacBook Pro? | 1 | I've noticed many of the helper libraries and applications want CUDA. Are there any good options for accelerated QLoRA training on M2 MacBook Pros with 32gb of shared RAM? | 2023-12-27T21:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/18sbe1l/best_qlora_approach_on_macbook_pro/ | TheCoconutTree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sbe1l | false | null | t3_18sbe1l | /r/LocalLLaMA/comments/18sbe1l/best_qlora_approach_on_macbook_pro/ | false | false | self | 1 | null |
[tutorial] Easiest way to get started locally | 81 | Hey everyone.
This is a super simple guide to run a chatbot locally using gguf.
# Pre-requisites
All you need is:
1. Docker
2. A model
# Docker
To install docker on ubuntu, simply run:
```bash
sudo apt install docker.io
```
# Model
You can select any model you want as long as it's a GGUF. I recommend [openchat-3.5-1210.Q4_K_M](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q4_K_M.gguf) to get started: it requires about 6GB of memory (and can work without a GPU too).
All you need to do is to:
1. Create a `models` folder somewhere
2. Download a model (like the above)
3. Put the downloaded model inside the `models` folder
# Running
1. Download the Docker image:
```bash
sudo docker pull ghcr.io/ggerganov/llama.cpp:full
```
2. Run the server
```bash
sudo docker run -p 8181:8181 --network bridge -v path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --server -m /models/openchat-3.5-1210.Q4_K_M.gguf -c 2048 -ngl 43 -mg 1 --port 8181 --host 0.0.0.0
```
3. Start chatting: now open a browser, go to `http://localhost:8181/`, and start chatting with the model!
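If you'd rather call the server from code than use the built-in web page, the same container also answers plain HTTP. A small example with Python's requests (the prompt text is arbitrary):

```python
import requests

resp = requests.post(
    "http://localhost:8181/completion",  # llama.cpp server's native completion endpoint
    json={"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128},
)
print(resp.json()["content"])
```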
https://preview.redd.it/9url9p72iw8c1.png?width=1844&format=png&auto=webp&s=e06c2721eb61d32abdfea432576b26e7d44363b9 | 2023-12-27T20:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/18saxgy/tutorial_easiest_way_to_get_started_locally/ | Amgadoz | self.LocalLLaMA | 2023-12-27T21:17:21 | 0 | {} | 18saxgy | false | null | t3_18saxgy | /r/LocalLLaMA/comments/18saxgy/tutorial_easiest_way_to_get_started_locally/ | false | false | 81 | {'enabled': False, 'images': [{'id': '22nCxJKtA4pHrCbl07eXdw917CY3vgeICLJK6JdBj3I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=108&crop=smart&auto=webp&s=53d0ce4bee9b93798c838d442854edf45289e539', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=216&crop=smart&auto=webp&s=355bb74ea6ba81f5ee6c911f07373f96dcd81c2d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=320&crop=smart&auto=webp&s=8cbf1b83760363abaa33eebbe5a633e1a3400184', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=640&crop=smart&auto=webp&s=e8d27c0c21a3234f654b030f0e42b352604655ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=960&crop=smart&auto=webp&s=7d81689d388db6366e51bdf1ac3076317c350f33', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?width=1080&crop=smart&auto=webp&s=3f4da9277e8eea5462dd1957a9ba60054488e999', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hAFDhEsz-sIKZkXW9ZFoGIgICrk3U0-VUnLneN3gnDA.jpg?auto=webp&s=3c34ee853c17c706dcf733f10c4b58dd02e31e87', 'width': 1200}, 'variants': {}}]} | |
Maral is here, 7 billion parameters bilingual model with support of Persian! - A Beautiful Mind | 13 | 2023-12-27T20:39:14 | https://haghiri75.com/en/maral-is-here-7-billion-parameters-bilingual-model-with-support-of-persian/ | Haghiri75 | haghiri75.com | 1970-01-01T00:00:00 | 0 | {} | 18sakbb | false | null | t3_18sakbb | /r/LocalLLaMA/comments/18sakbb/maral_is_here_7_billion_parameters_bilingual/ | false | false | default | 13 | null | |
Why is Qwen model 72B not developed by the community? | 1 | [removed] | 2023-12-27T20:23:24 | https://www.reddit.com/r/LocalLLaMA/comments/18sa6qj/why_is_qwen_model_72b_not_developed_by_the/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sa6qj | false | null | t3_18sa6qj | /r/LocalLLaMA/comments/18sa6qj/why_is_qwen_model_72b_not_developed_by_the/ | false | false | self | 1 | null |
When David Meets Goliath: Comparing Mixtral 8x7B and GPT-4 | 1 | Today, I came across an [intriguing post](https://www.reddit.com/r/LocalLLaMA/comments/18rsygu/7b_13b_models_are_hopeless_at_planning_tasks/), along with its comments, highlighting the challenges that current LLMs face in tasks that ChatGPT excels at. Intrigued by this, I decided to delve into the most recent and [reliable data on GPT-4](https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/) and compare it with Mixtral 8x7B, which, in my opinion, is currently the best model available. Take a look at these charts and see for yourself the stark differences between these two.
​
[Parameters comparison](https://preview.redd.it/p3hxcmqmaw8c1.jpg?width=1980&format=pjpg&auto=webp&s=d72b9e8ef6fede08a16f3b5bcdc9de4505135264)
​
[Number of experts and layers comparison](https://preview.redd.it/7e303r3qaw8c1.jpg?width=1980&format=pjpg&auto=webp&s=4a587fd25b0a9f0be9f0e135b8a7259af54bc54b)
I find this development quite encouraging. It was only about 300 days ago that Llama first emerged, marking a significant moment in the field. Now, considering the relatively modest size of Mixtral, it's clear that we are on the cusp of a remarkable growth phase. I believe that within the next 24 months, we're likely to witness a substantial surge in the capabilities of such models. | 2023-12-27T20:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/18sa2vy/when_david_meets_goliath_comparing_mixtral_8x7b/ | jslominski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sa2vy | false | null | t3_18sa2vy | /r/LocalLLaMA/comments/18sa2vy/when_david_meets_goliath_comparing_mixtral_8x7b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'y-gnN5eXSJbM_w4V7MFNRnsV0zLNgW29jexasYfiSF8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?width=108&crop=smart&auto=webp&s=4d1b7482bcb07fffb792eadb2381051da65e31dd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?width=216&crop=smart&auto=webp&s=ba5d207537b540b2ee0e79d412b6b266a08f1b2b', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?width=320&crop=smart&auto=webp&s=9ab8cc75831cb5602c66ecac5ec3f8f85e299ee7', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?width=640&crop=smart&auto=webp&s=6d8f8cdec48fe50989e0bb81bc3b0f6546b5dfa4', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?width=960&crop=smart&auto=webp&s=78721a332a22acd511d7a671c24eb385f7f89be6', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?width=1080&crop=smart&auto=webp&s=b65c080faf7a50929d0095a55fb6c09d28f89ad7', 'width': 1080}], 'source': {'height': 816, 'url': 'https://external-preview.redd.it/-jWibIdRfxz3ETQJ5aN3OmB2pMdQDT-pi1bZZWkY-3o.jpg?auto=webp&s=20e0be5118710baa60cf1a9faa2c2554fc13aa3c', 'width': 1456}, 'variants': {}}]} | |
Any colab or tutorial with success in adding knowledge to Llama2? | 1 | Hi,
After trying several fine tuning tutorials, I found that none of them were able to add dataset knowledge to the new model.
When a test is done, the answer is practically the same as the base model would give.
That's why I'd like to ask you to help me find one that has been able to do this
Thanks in advance | 2023-12-27T20:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/18sa1js/any_colab_or_tutorial_with_success_in_adding/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18sa1js | false | null | t3_18sa1js | /r/LocalLLaMA/comments/18sa1js/any_colab_or_tutorial_with_success_in_adding/ | false | false | self | 1 | null |
New merges: Aurora-Nights-70B and Aurora-Nights-103B | 41 | They are available on my Hugging Face: [https://huggingface.co/sophosympatheia](https://huggingface.co/sophosympatheia). I'll leave quantization to other people who are more efficient at banging those out.
For those of you who enjoyed Rogue Rose 103B, I think you'll find this new blend to be an improvement. It follows instructions better yet retains good creativity, and it will do NSFW.
For those of you who merge models too, here's a few things I learned during my experimentation with this release:
* The [70B limarpv3 LoRA](https://huggingface.co/Doctor-Shotgun/limarpv3-llama2-70b-qlora) wrecks a model's ability to follow instructions. I verified that across several models, and not just my own merges. This is pretty much what made Rogue Rose not so good at following instructions.
* The [Kimiko LoRA](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) does not seem to have a deleterious effect on a model's ability to follow instructions, or at least it's not pronounced. I still need to do more testing.
* Frankenmerging a model with itself does not seem to produce worse results than using a different model, at least when you interleave the layers the way I do (see below).
* How you interleave the layers in a frankenmerge matters. I have consistently gotten reliable results with a simple sandwich approach. For example, Aurora Nights 103b is just Aurora Nights 70b layers 0-40, then 20-60, then 40-80 (see the config sketch after this list). It's like I "sandwiched" the secondary model (which was the same as the primary model in this case) between the primary model's layers split in half. Keeping larger contiguous blocks of layers seems to be more reliable. When I tried a version that interleaved the layers in a manner that was similar to Goliath 120B, it came out incoherent. I also did a version of Rogue Rose 103b that used smaller chunks, and that result was viable but seemed worse than the simple sandwich version. I know others are getting reliable results with different interleaving approaches that use smaller chunks, so take this advice with a grain of salt.
* Which dataset you use for exl2 calibration does indeed matter. The effects are noticeable, and the pippa dataset seems to be the way to go for roleplaying. This is not a new observation, I'm just adding my confirmation for the record.
* TIES DARE merging is expensive to run (takes a long time) but I think it's worth it. I am pleased with the results that produced Aurora Nights 70B. The exact settings I used are in the model card. I need to experiment with it some more. I invite anyone with more experience to look over the settings I used and give me feedback. I feel like there's room for improvement there.
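To make the sandwich approach above concrete, here is roughly what it looks like driven from Python with mergekit's passthrough method; the repo id is an assumption and the layer ranges are just the ones described above:

```python
import subprocess
import yaml

model = "sophosympatheia/Aurora-Nights-70B-v1.0"  # assumption: the actual HF repo id may differ

config = {
    "merge_method": "passthrough",
    "dtype": "float16",
    "slices": [
        {"sources": [{"model": model, "layer_range": [0, 40]}]},
        {"sources": [{"model": model, "layer_range": [20, 60]}]},
        {"sources": [{"model": model, "layer_range": [40, 80]}]},
    ],
}

with open("sandwich.yml", "w") as f:
    yaml.safe_dump(config, f)

# mergekit's CLI reads the YAML and writes the merged model to the output directory
subprocess.run(["mergekit-yaml", "sandwich.yml", "./aurora-103b"], check=True)
```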
I hope you all enjoy the new models. As always, I appreciate feedback.
​ | 2023-12-27T20:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/18s9n80/new_merges_auroranights70b_and_auroranights103b/ | sophosympatheia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s9n80 | false | null | t3_18s9n80 | /r/LocalLLaMA/comments/18s9n80/new_merges_auroranights70b_and_auroranights103b/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': '8CjWEeB7YjZ9RP3nZsUcVWNnnlzgHZBQt7ZeP2X11bI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?width=108&crop=smart&auto=webp&s=e253f74b1979254a82545283324157e92b6bb613', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?width=216&crop=smart&auto=webp&s=efd9032193e4ebdbb595137ffeae07466da7c798', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?width=320&crop=smart&auto=webp&s=8bbf72ac1ff12e4b18eec259420d1f1703e3ae04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?width=640&crop=smart&auto=webp&s=faf37b54b88ac93d55973bbeacd5b4e63ae04058', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?width=960&crop=smart&auto=webp&s=4598083d3f3608867173df108b575659304c5bbc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?width=1080&crop=smart&auto=webp&s=e544d1c898c1ac34604b349643eb4ce77ecbaccb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NxObT6E78QTUAuZVXfeXLThtuVO0M-z6eXMnO5E9QYE.jpg?auto=webp&s=1739f15ce6b50ffaae2a5013d1e801026aa7ae64', 'width': 1200}, 'variants': {}}]} |
Opportunity to specialize in generative AI development | 1 | I recently joined a new company and for the first \~6 weeks have gotten to work with generative AI. We use OpenAI via Azure to perform tasks like extracting metadata from large unstructured documents and creating a ChatGPT-like interface for users to query those documents. This shift has not only provided a refreshing break from typical full-stack development but also feels incredibly exciting to use cutting edge technology to provide real value for our users. I've been consuming as much content as I can (including binging this sub when I discovered it yesterday 😂) to learn about the field and I feel a spark of excitement and discovery that I haven't felt in a very long time.
However, I’ve recently been given the opportunity to join a new team to develop a consumer platform, transitioning from our core B2B focus. This platform is in a domain I'm passionate about and also promises completely greenfield work where I will have the kind of ownership over infra/architecture/product vision that I’ve always wanted (before this I’ve spent most of my career at larger company where greenfield work like this was rare/non-existent).
My hesitation with continuing in generative AI stems from the industry's rapid pace. I'm concerned that the technology and skills I am currently developing might soon become obsolete, making that experience less valuable compared to the foundational work of building a new platform from scratch.
Would love any advice to consider for making this decision, especially on the outlook for how the market will value these types of skills in the medium-term future. | 2023-12-27T19:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/18s9lyh/opportunity_to_specialize_in_generative_ai/ | chrismv48 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s9lyh | false | null | t3_18s9lyh | /r/LocalLLaMA/comments/18s9lyh/opportunity_to_specialize_in_generative_ai/ | false | false | self | 1 | null |
PHI MOE? | 18 | this may be a dumb question im not even sure that phi isnt an MOE but if it isnt would it be possible to make a mixture of expert model using phi-2? | 2023-12-27T19:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/18s9lu8/phi_moe/ | Ok-Buy268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s9lu8 | false | null | t3_18s9lu8 | /r/LocalLLaMA/comments/18s9lu8/phi_moe/ | false | false | self | 18 | null |
What if Q* is just some variation of PPO + A* ? | 3 | So I’d like to test this but I don’t have much time now unfortunately, so here’re my thoughts.
Given a LLM and a mathematical question for which we know the answer, let’s say we can generate several completions (temp != 0.0 and/or beam search keeping several beams), and rank them with some reward function.
Basic example: the question is “how much is 2+2”, and we establish the final answer has to be in the format “FINAL ANSWER: <value>”, so in this case we’d look for the completion to say whatever (math CoT) as long as the final answer is correct, in this case, we’d look for the tokens for “FINAL ANSWER: 4”
The key point here to make PPO possible is that we know how close to the correct answer we are by checking the token probabilities of the correct answer at any point of the completion (regardless of how long we allow the completion to do CoT), so this would give us the reward function for the PPO (this is crucial for reinforcement learning to be possible).
And the key point that makes A* possible here is that we have one (or more) graphs (trees) of completions with the token probabilities (the same way as if we did beam search) that we can navigate and optimize over.
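Concretely, the reward I have in mind is just the summed log-probability the model assigns to the gold answer tokens when they are appended after the (partial) completion. A rough sketch; the model choice is arbitrary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

def answer_logprob(prompt_and_cot: str, gold: str = "FINAL ANSWER: 4") -> float:
    """Reward = log P(gold answer tokens | question + partial chain-of-thought)."""
    prefix = tok(prompt_and_cot, return_tensors="pt").input_ids.to(model.device)
    answer = tok(gold, add_special_tokens=False, return_tensors="pt").input_ids.to(model.device)
    input_ids = torch.cat([prefix, answer], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # logits at position i predict token i+1, so slice the positions that predict the answer tokens
    logprobs = torch.log_softmax(logits[0, prefix.shape[1] - 1 : -1], dim=-1)
    return logprobs.gather(1, answer[0].unsqueeze(1)).sum().item()
```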
What am I missing here? Has this been tested already? The idea looks pretty simple to me so I’m guessing it doesn’t work, but where does it fail? | 2023-12-27T19:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/18s9ik6/what_if_q_is_just_some_variation_of_ppo_a/ | NyxeK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s9ik6 | false | null | t3_18s9ik6 | /r/LocalLLaMA/comments/18s9ik6/what_if_q_is_just_some_variation_of_ppo_a/ | false | false | default | 3 | null |
I am the only person, who downloaded Saily_220B-GGUF and it is not working | 9 | I am testing different models on my good old modest computer, because I want to find best model and want to know new models capabilities overall.
I am the only person who downloaded Saily_220B-GGUF; well, at least the model card says that only one person has downloaded this model: [https://huggingface.co/TheBloke/Saily_220B-GGUF](https://huggingface.co/TheBloke/Saily_220B-GGUF)
But also, text-generation-webui fails with an error when I try to load this model: [https://huggingface.co/TheBloke/Saily_220B-GGUF/discussions/1](https://huggingface.co/TheBloke/Saily_220B-GGUF/discussions/1)
Maybe there is known problem loading models like this? I am using all defaults for loading this. | 2023-12-27T19:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/18s90mw/i_am_the_only_person_who_downloaded_saily/ | uti24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s90mw | false | null | t3_18s90mw | /r/LocalLLaMA/comments/18s90mw/i_am_the_only_person_who_downloaded_saily/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'zB5FRHxdKDkM5kYhQIb65SAVt3f04OIWpU06Uy4opFk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?width=108&crop=smart&auto=webp&s=72520a61ed2240f3ce3ea2279929c1062a8e7966', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?width=216&crop=smart&auto=webp&s=acbaeaf67b59eff42eaa49878741427b7dc09d7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?width=320&crop=smart&auto=webp&s=0ff2384f4fdf73468c6ef4c9c2a8f6b720c59080', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?width=640&crop=smart&auto=webp&s=869fbca153bb6ffe1be03e70da75402f6fd86942', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?width=960&crop=smart&auto=webp&s=146a70961ee399b0b3102759396ffd493d5d8fb7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?width=1080&crop=smart&auto=webp&s=8bb6dac5a6f8f48219867e0364e9c79b72fee078', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lBXdzIzCecYwQz1gHTlCSEaodE4IUlJdVQi3l82GZHM.jpg?auto=webp&s=64fced89799a61103f544333769a131fd2891a5b', 'width': 1200}, 'variants': {}}]} |
Image Classification from a Local Model | 1 | I have a large number of images from wildlife cameras that contain some form of animal in them. I've been working with ResNet50 models and the like to classify them by training on a small subset of manually classified images, but I'm not achieving the level of accuracy I need.
After seeing what GPT4V is capable of (it easily picks out animals in my test images and classifies them), I'd like to see if I can achieve something similar with a local model to avoid API calls for thousands of images. Can anyone recommend a model that can run locally to do this type of work? I can run it on either an M1 Max with 64GB or a Windows machine with an Nvidia M2000 (I might be able to access better hardware if needed).
Any help is greatly appreciated!
​ | 2023-12-27T19:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/18s8r6n/image_classification_from_a_local_model/ | Mr7743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s8r6n | false | null | t3_18s8r6n | /r/LocalLLaMA/comments/18s8r6n/image_classification_from_a_local_model/ | false | false | self | 1 | null |
What do "h6" and "h8" mean in exllama quants? | 1 | I often see them in exllama quantisations | 2023-12-27T19:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/18s8d2e/what_do_h6_and_h8_mean_in_exllama_quants/ | JawGBoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s8d2e | false | null | t3_18s8d2e | /r/LocalLLaMA/comments/18s8d2e/what_do_h6_and_h8_mean_in_exllama_quants/ | false | false | self | 1 | null |
Help regarding dataset announcement. | 3 | Hello all, I was trying to share my dataset release details with you all but it seems like my post is greyed out and doesn’t look like it’s showing up in the new posts on LocalLlama, anybody know what could cause this? | 2023-12-27T18:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/18s7r54/help_regarding_dataset_announcement/ | dogesator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s7r54 | false | null | t3_18s7r54 | /r/LocalLLaMA/comments/18s7r54/help_regarding_dataset_announcement/ | false | false | self | 3 | null |
Capybara dataset is now open-source and available! | 183 | Happy to announce that the Capybara dataset I worked on is now officially available for anyone to train on, or to add into their existing training sets! (Roughly 15,000 multi-turn examples)
This was the culmination of experimenting with the most useful insights I’ve derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini and FLASK along with my intuitions from over 3 years of doing data curation for dozens of models in text and audio modalities across different architectures.
The result is a multi-turn synthetic conversation generation method currently called Amplify-Instruct, and the first resulting dataset using this method is called Capybara. This dataset has a strong focus on information diversity across a wide range of domains, and on multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects. It also contains many great examples of conversations delving into obscure sub-topics and rabbit holes across pop culture and STEM, while maintaining natural prose.
First tests of Capybara have shown that with less than 20,000 of these high quality examples, we’re able to reach beyond the HF leaderboard scores of many popular models also trained on the same base model that were trained on over 5 times the amount of data. [See image attached to this post]
To date, I’ve given early access to friends & colleagues of mine working on cutting edge models, and the first trainings have shown enough promise for many resulting models to include it. So far this data has been used to train Capybara 7B, 34B and the first 3B multi-modal model called Obsidian, as well as the training and provenance of recent models by others: OpenChat, Starling, Dolphin Mixtral, Dolphin Phi-2, Jackalope, Echo and more.
Our own internal benchmarks within Nous with AGIEval and GPT4All Suite also seem to confirm parity with another flagship Nous model that is trained with around 10 times more data on the same base model. (The Nous Hermes model of the time is trained on around 200K-300K examples iirc)
I know benchmarks of course aren't everything, but I made sure we did contamination checking on several popular benchmarks we typically use, and we found no matches in any of them. However, thanks to Teven from the Mistral team, we found MT-bench contamination in the dataset that we've now removed. (Even if I don't personally test with MT-bench, I understand it to be a metric more popularly used now in academia, and it would be best left out of the dataset.)
Also, thank you to Wolfram Ravenwolf for his great work every week on testing models and showing Capybara-7B and 34B breaking records in German multi-language understanding despite Capybara only containing English!
Amplify-Instruct paper is coming soon! Check out the dataset card on HF for more info and full credits! | 2023-12-27T18:31:17 | dogesator | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18s7iw1 | false | null | t3_18s7iw1 | /r/LocalLLaMA/comments/18s7iw1/capybara_dataset_is_now_opensource_and_available/ | false | false | 183 | {'enabled': True, 'images': [{'id': 'mJ_94nX7S-VKpfiV8T1vCvNvQTOo-kNfE95fRSoMD2Y', 'resolutions': [{'height': 20, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?width=108&crop=smart&auto=webp&s=4977010fba5825c83f10a657c7795f3adb88fe6e', 'width': 108}, {'height': 40, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?width=216&crop=smart&auto=webp&s=530acde3a14c6b65eef50288411c659d79377ff5', 'width': 216}, {'height': 59, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?width=320&crop=smart&auto=webp&s=288bdb0bbfca7ae82634fb31258125533073acb0', 'width': 320}, {'height': 118, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?width=640&crop=smart&auto=webp&s=501f9b3203385a55dca627e1f35078538d7fb5e9', 'width': 640}, {'height': 178, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?width=960&crop=smart&auto=webp&s=f8edf63dca9fa659c9da53749bd80f253fef9a92', 'width': 960}, {'height': 200, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?width=1080&crop=smart&auto=webp&s=6a192735027aa0ebba533a0067707f02a72508d0', 'width': 1080}], 'source': {'height': 389, 'url': 'https://preview.redd.it/osp82l5esv8c1.jpeg?auto=webp&s=a30eba28d10a9a7ed592ebc00a4d7025acc08711', 'width': 2096}, 'variants': {}}]} | ||
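For anyone who wants to pull the data straight away, a quick sketch (the Hub ID is my assumption of where the release lives; check the dataset card mentioned above and inspect the actual column names before training on it):

```python
# Hedged sketch: load the Capybara dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("LDJnr/Capybara", split="train")  # assumed dataset ID
print(ds)      # inspect the real features/column names first
print(ds[0])   # one multi-turn example
```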
Capybara dataset is now open-source and available! | 1 | [removed] | 2023-12-27T18:29:24 | https://www.reddit.com/r/LocalLLaMA/comments/18s7h62/capybara_dataset_is_now_opensource_and_available/ | dogesator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s7h62 | false | null | t3_18s7h62 | /r/LocalLLaMA/comments/18s7h62/capybara_dataset_is_now_opensource_and_available/ | false | false | default | 1 | null |
Capybara dataset is now open-source and available! | 1 | [removed] | 2023-12-27T18:23:27 | dogesator | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18s7bzl | false | null | t3_18s7bzl | /r/LocalLLaMA/comments/18s7bzl/capybara_dataset_is_now_opensource_and_available/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'a4swKV1U7kXwAC9uA1-BvjeUkI5bhtWgsXNzOL_dflo', 'resolutions': [{'height': 20, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?width=108&crop=smart&auto=webp&s=c53187f7708f9daa41e5925c463c4f1fc3c514b3', 'width': 108}, {'height': 40, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?width=216&crop=smart&auto=webp&s=7de60697b8060a3de217db05d56b771c2da658e3', 'width': 216}, {'height': 59, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?width=320&crop=smart&auto=webp&s=c94989d94592c4ac5814ed711c1b7439d2e2a957', 'width': 320}, {'height': 118, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?width=640&crop=smart&auto=webp&s=7584cc88e0460753bcd4b80c2390ecca3bf41042', 'width': 640}, {'height': 178, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?width=960&crop=smart&auto=webp&s=435c01f90a9a6e2bd33b6b2938216aa127260679', 'width': 960}, {'height': 200, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?width=1080&crop=smart&auto=webp&s=17ebeeb828af83fba244e0556e984fa7506953be', 'width': 1080}], 'source': {'height': 389, 'url': 'https://preview.redd.it/sn53igtzqv8c1.jpeg?auto=webp&s=34868ba1004088d6fccef2fa3c17fabcc4a78bc8', 'width': 2096}, 'variants': {}}]} | |
Tiny LLM for android ereader? | 5 | I just got a Boox Palma eink android reader. I’ve seen videos of LLMs on raspberry pi and pixel.
I wonder if the Palma can handle any. What’s the best way to test?
It has these specs:
- Qualcomm Octa-Core CPU
- 6GB of LPDDR4X RAM
- 6.13" E-Ink Carta 1200 Touchscreen
- 1648 x 824 Screen Resolution
- 128GB UFS 2.1 Storage
- Wi-Fi | Bluetooth 5.0
- MicroSD Card Slot | USB-C Port
- 16MP Rear Camera with Flash
- Front Light with CTM (Warm and Cold)
- Android 11 | 2023-12-27T18:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/18s732p/tiny_llm_for_android_ereader/ | kwerky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s732p | false | null | t3_18s732p | /r/LocalLLaMA/comments/18s732p/tiny_llm_for_android_ereader/ | false | false | self | 5 | null |
Early AWQ support added to llama.cpp! | 136 | 2023-12-27T17:40:05 | https://github.com/ggerganov/llama.cpp/tree/master/awq-py | m18coppola | github.com | 1970-01-01T00:00:00 | 0 | {} | 18s6adh | false | null | t3_18s6adh | /r/LocalLLaMA/comments/18s6adh/early_awq_support_added_to_llamacpp/ | false | false | 136 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} | ||
Pressure-tested the most popular open-source LLMs (Large Language Models) for their Long Context Recall abilities | 184 | **Approach**: Using **Gregory Kamradt**'s **"Needle In A Haystack"** analysis, I explored models with different context lengths.
- Needle: "What's the most fun thing to do in San Francisco?"
- Haystack: Essays by Paul Graham
**Video explanation** by Gregory - [https://www.youtube.com/watch?v=KwRRuiCCdmc](https://www.youtube.com/watch?v=KwRRuiCCdmc)
**Models tested**
1️⃣ **16k Context Length (~ 24 pages/12k words)**
- NurtureAI/openchat_3.5-16k (extended + finetuned Mistral-7B)
- NurtureAI/Orca-2-13B-16k (extended + finetuned Llama-2-13B)
- NurtureAI/dolphin-2_2_1-mistral-7b-16k (extended + finetuned Mistral-7B)
2️⃣ **32k Context Length (~ 48 pages/24k words)**
- cognitivecomputations/dolphin-2.6-mixtral-8x7b (finetuned Mixtral MoE)
- THUDM/chatglm3-6b-32k (finetuned chatglm)
- abacusai/Giraffee-13b-32k-v3 (extended + finetuned Llama-2-13B)
- togethercomputer/Llama-2-7B-32K-Instruct (extended + finetuned Llama-2-7B)
3️⃣ **100k Context Length (~ 150 pages/75k words)**
- lyogavin/Anima-7B-100K (extended + finetuned Llama-2-7B)
4️⃣ **200k Context Length (~ 300 pages/150k words)**
- NousResearch/Nous-Capybara-34B (finetuned Yi-34B-200k)
- chinoll/Yi-6b-200k-dpo (finetuned Yi-6B-200k)
**Best Performers**
16k - OpenChat from [Nurture.AI](https://Nurture.AI)
32k - Dolphin from Eric Hartford & ChatGLM3 from Jie Tang, Tsinghua University
200k - Capybara from Nous Research
​
​
https://preview.redd.it/9diiqir8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=e9a0ff3edbfb35c7a7ea95e7779ee8401db596ff
https://preview.redd.it/9ui7t2s8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=0e38812f93af7f4f1419b996ed4e1d6279510425
https://preview.redd.it/n89f4lr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=5b9936480d0c639d9317cd832b55e8ab226edf3e
https://preview.redd.it/bbwh6kr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=ca07a3eaa2587f45ce52598201f8357eab63ca46
https://preview.redd.it/irgefnr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=bb95854e054584c14b35ac6d7138188a766b9543
https://preview.redd.it/74o8psr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=13833876ffddf00824bdb420229b7a1c7c9a6746
https://preview.redd.it/hm9mplr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=17b59bc65a31e8d1c279bb50ee8ca53ebbafec7f
https://preview.redd.it/2edf1nr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=315e709080da8c3f9c49b7191fa460c86e59e234
https://preview.redd.it/y2xjbnr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=e8ce7df603cbca1aee329cac8c4d0b47e24562a7
https://preview.redd.it/fjhcnnr8hv8c1.jpg?width=3024&format=pjpg&auto=webp&s=b7b1a23abc299facf0f070e53dd4211e454850e1 | 2023-12-27T17:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/18s61fb/pressuretested_the_most_popular_opensource_llms/ | ramprasad27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s61fb | false | {'oembed': {'author_name': 'Greg Kamradt (Data Indy)', 'author_url': 'https://www.youtube.com/@DataIndependent', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/KwRRuiCCdmc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="I pressure tested GPT-4's 128K context retrieval"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/KwRRuiCCdmc/hqdefault.jpg', 'thumbnail_width': 480, 'title': "I pressure tested GPT-4's 128K context retrieval", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_18s61fb | /r/LocalLLaMA/comments/18s61fb/pressuretested_the_most_popular_opensource_llms/ | false | false | 184 | {'enabled': False, 'images': [{'id': 'OondSl4tj1KDXy8pkUsxQBZHwCiKekJ00sdXFeRLzns', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?width=108&crop=smart&auto=webp&s=867a03d49cf317f8ccd573a6cbf632592dcee0e9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?width=216&crop=smart&auto=webp&s=6d08c1f2d892455f6944633d3540b2e4d928ce60', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?width=320&crop=smart&auto=webp&s=a295ec5982dde9c474d7a6954d572e57d11d0350', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?auto=webp&s=cfa706e70f02ee02ecacb6c04386b5c69692a38e', 'width': 480}, 'variants': {}}]} | |
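For readers who want to reproduce the setup, a rough sketch of the kind of test loop described above (the needle text and token budget are placeholders, and the actual model call is left out):

```python
# Hedged sketch: bury a "needle" at a given depth inside a token-limited haystack,
# then check whether the model's answer recalls it.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
NEEDLE = "\nThe most fun thing to do in San Francisco is eating a sandwich in Dolores Park.\n"
QUESTION = "What's the most fun thing to do in San Francisco?"

def build_haystack(filler_text: str, context_tokens: int, depth: float) -> str:
    """Trim filler to `context_tokens` tokens and insert the needle at `depth` (0.0-1.0)."""
    toks = enc.encode(filler_text)[:context_tokens]
    cut = int(len(toks) * depth)
    return enc.decode(toks[:cut]) + NEEDLE + enc.decode(toks[cut:])

def recalled(model_answer: str) -> bool:
    return "dolores park" in model_answer.lower()

# e.g. sweep depths 0%, 10%, ..., 100% at several context lengths and plot the grid
depths = [d / 10 for d in range(11)]
```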
My first experiment with mistral-medium using Ollama WebUI | 1 | [Ollama WebUI](https://github.com/ollama-webui/ollama-webui) now provides support for a configurable OpenAI-compatible endpoint instead of just supporting OpenAI. I recently received access to Mistral's API so I thought I would give mistral-medium a go at creating a free-form text adventure, something I like to use when testing out a model. [Here](https://ollamahub.com/c/colkassad/3b5bfe52-3145-4e7f-9c42-cb6bb6718d1a) are the results (no special system message set, just an initial prompt and telling it to go). | 2023-12-27T17:16:25 | https://www.reddit.com/r/LocalLLaMA/comments/18s5qc8/my_first_experiment_with_mistralmedium_using/ | CygnusX1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s5qc8 | false | null | t3_18s5qc8 | /r/LocalLLaMA/comments/18s5qc8/my_first_experiment_with_mistralmedium_using/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': '68kU0c7_8JCX63_THwB6knTMQI3qXh3lJrJD5zudvg4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?width=108&crop=smart&auto=webp&s=765e0253f3a9510346e3df784d797115f80b8fca', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?width=216&crop=smart&auto=webp&s=e4ac244c3545f2527846231a53256dff90c3c91e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?width=320&crop=smart&auto=webp&s=d8587379770a0b825e14b570584892573eab7321', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?width=640&crop=smart&auto=webp&s=ea0a061a750a70ef8f742f9d010512e6a844b6bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?width=960&crop=smart&auto=webp&s=83d75e5ae1d48508846e6740dab808bd62c3c7cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?width=1080&crop=smart&auto=webp&s=7b2e9b8edc78987727bd4e1c4cb9df22cfcc65e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/epJkNBnbeiFwF6NnvNdfaKylKjuNHTg7D_JAGTa0-gE.jpg?auto=webp&s=c8086eb9a48a335013660b42850a0f0650708e0b', 'width': 1200}, 'variants': {}}]} |
What is the best 3B,7B Local Story Writing LLM? | 17 | I need a local LLM for creative writing. Not brainstorming ideas, but writing better dialogue and descriptions for fictional stories (e.g. converting bullet points into story passages). I run local LLMs on a laptop with 24GB RAM and no GPU. 3B models work fast; 7B models are slow but doable.
I prefer models which are not highly censored like Claude or ChatGPT, since those might restrict scenes in the story.
I tried the following medium-quantized models :
\- Dolphin Phi 2 3B Model
\- Nous Capybara v1.9
\- Xwin mlewd 0.2 7B
\- Cockatrice 0.1 7B
Which are the best 3B/7B models you have tried that are good for creative writing / story writing?
​ | 2023-12-27T17:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/18s5e7t/what_is_the_best_3b7b_local_story_writing_llm/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s5e7t | false | null | t3_18s5e7t | /r/LocalLLaMA/comments/18s5e7t/what_is_the_best_3b7b_local_story_writing_llm/ | false | false | self | 17 | null |
What are the best free uncensored local ai for idea/writing generation? | 17 | I find most things I want to do require an idea or story of some sort, but the thing is I struggle with ideas and planning.
I've used AI such as ChatGPT before for this kind of thing, but I find a common theme: in terms of writing/ideas it's pretty cheesy, but also super bland and cliché. So I'm on the hunt for a free story AI that is decent, ideally around 7B, that is local and uncensored if possible. | 2023-12-27T16:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/18s587r/what_are_the_best_free_uncensored_local_ai_for/ | Gaming-invisibleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s587r | false | null | t3_18s587r | /r/LocalLLaMA/comments/18s587r/what_are_the_best_free_uncensored_local_ai_for/ | false | false | self | 17 | null |
GPU server cases options | 1 | I want to buy some used server hardware for inference with multiple gpus and I have been a bit puzzled by the options. Will one of these cases fit this E-ATX motherboard? I want to be able to get up to 4 gpus. Does anyone have any builds they could share?
Case https://www.ebay.com/itm/354554797467?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=hc4ssem9q2y&sssrc=4429486&ssuid=S1-j02PVSRS&var=&widget_ver=artemis&media=COPY
Motherboard https://www.ebay.com/itm/385608153327?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=mkemlddiq-y&sssrc=4429486&ssuid=S1-j02PVSRS&var=&widget_ver=artemis&media=COPY | 2023-12-27T16:37:06 | https://www.reddit.com/r/LocalLLaMA/comments/18s4sf0/gpu_server_cases_options/ | yourfriendlyisp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s4sf0 | false | null | t3_18s4sf0 | /r/LocalLLaMA/comments/18s4sf0/gpu_server_cases_options/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'K37-GLutvaiKkL3PcHn3e19Sy2pu9JLS0pod2lTG2UI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/R7W_h__yFT0vpNnwTuqlyfXMld-Gx1gn0uX8j9cF1OM.jpg?width=108&crop=smart&auto=webp&s=cf14d84a79b6e1ff534bfac03dcf5b02c6829a01', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/R7W_h__yFT0vpNnwTuqlyfXMld-Gx1gn0uX8j9cF1OM.jpg?width=216&crop=smart&auto=webp&s=278412218fdca6e4dff78983e589e109e783d4f8', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/R7W_h__yFT0vpNnwTuqlyfXMld-Gx1gn0uX8j9cF1OM.jpg?width=320&crop=smart&auto=webp&s=0a3a704b2b177bf155d2d8760cdafb01849b61c6', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/R7W_h__yFT0vpNnwTuqlyfXMld-Gx1gn0uX8j9cF1OM.jpg?auto=webp&s=33c551f53a0377e29c1227362b5a9aa3c75e0533', 'width': 400}, 'variants': {}}]} |
Is Llama.cpp Using GPUs on my M2 Max? | 7 | Getting my feet wet with llama.cpp and trying to use GPUs during training. I'm using this command to start, on a tiny test dataset (5 observations):
./finetune --model-base ./models/llama-2-7b.Q8_0.gguf --train-data train_data/durant_QA_training_test_llama_format.json --threads 8 --sample-start "<s>" --ctx 1024 --epochs 1 --n_gpu_layers 1
Something is happening for GPUs in that I get the following in Terminal; however, maybe it's not working because it can't find 'default.metallib'?:
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/me/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 256.00 MiB, ( 7086.98 / 49152.00)
When I go to activity monitor it only shows CPU usage not GPUs. It does seem way faster though to do 1 epoch than when I don't invoke a GPU layer. Any thoughts/suggestions would be greatly appreciated--I'm beyond the edges of this English major's knowledge :) | 2023-12-27T16:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/18s4f1r/is_llamacpp_using_gpus_on_my_m2_max/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s4f1r | false | null | t3_18s4f1r | /r/LocalLLaMA/comments/18s4f1r/is_llamacpp_using_gpus_on_my_m2_max/ | false | false | self | 7 | null |
Can I use embedding models from the MTEB board with generate embedding api of ollama to generate embedding? | 5 | If yes, then how? | 2023-12-27T15:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/18s2ugm/can_i_use_embedding_models_from_the_mteb_board/ | coderinlaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s2ugm | false | null | t3_18s2ugm | /r/LocalLLaMA/comments/18s2ugm/can_i_use_embedding_models_from_the_mteb_board/ | false | false | self | 5 | null |
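For reference, the local call itself looks roughly like this (a sketch with a placeholder model; as far as I know the model still has to be something Ollama can actually serve as a GGUF/Modelfile, so an arbitrary MTEB leaderboard model may not work as-is):

```python
# Hedged sketch: request an embedding from a locally running Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "llama2", "prompt": "The sky is blue because of Rayleigh scattering."},
)
vector = resp.json()["embedding"]
print(len(vector))
```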
(Un)censored models that can run with 4gb Ram | 1 | [removed] | 2023-12-27T15:11:06 | https://www.reddit.com/r/LocalLLaMA/comments/18s2uds/uncensored_models_that_can_run_with_4gb_ram/ | Jsus-Sus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s2uds | false | null | t3_18s2uds | /r/LocalLLaMA/comments/18s2uds/uncensored_models_that_can_run_with_4gb_ram/ | false | false | self | 1 | null |
Open Source LLM With Greatest Tokens Per Second? | 4 | Hello everyone! I'm very new to LLM development and deployment in general so please bear with me.
I'd like to deploy a Large Language Model ChatBot onto a service like HuggingFace's free tier or Oracle's Free tier.
The purpose of the model will be for students to demonstrate and practice Prompt Injection attacks, and crude social engineering attacks for my college Cybersecurity club.
In terms of concurrent users, I'd expect a maximum of, say, ~100 students talking to the bot under separate contexts at the same time.
Currently I just want a proof of concept to run on a free service for hopefully a few months before I can approach the people in charge for money to run more powerful models on better cloud hardware.
As such, I'd like to know what LLMs I should look at, or what keywords/features I should be searching for. I don't need the LLM to be particularly intelligent, in fact I think a dumber LLM might help with the demonstration of prompt injection better. I just need it to output coherent text (mostly in english, though it might receive input in some Indian languages sometimes) with a reasonably logical response as fast as possible. | 2023-12-27T14:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/18s2jgh/open_source_llm_with_greatest_tokens_per_second/ | Aathishs04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s2jgh | false | null | t3_18s2jgh | /r/LocalLLaMA/comments/18s2jgh/open_source_llm_with_greatest_tokens_per_second/ | false | false | self | 4 | null |
How to avoid sensitive data/PII being part of LLM training data? | 7 | How are you making sure your proprietary data (including sensitive info and PII) doesn't become part of LLM training data.
We are fine tuning a LLM with our internal docs and data pulled from a couple of SaaS applications, but I am afraid that if any of proprietary data becomes part of the LLM, there are very high chances of sensitive data leak. Are there any existing tools, or measures I can take to avoid this? | 2023-12-27T14:13:19 | https://www.reddit.com/r/LocalLLaMA/comments/18s1lvj/how_to_avoid_sensitive_datapii_being_part_of_llm/ | deeepak143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s1lvj | false | null | t3_18s1lvj | /r/LocalLLaMA/comments/18s1lvj/how_to_avoid_sensitive_datapii_being_part_of_llm/ | false | false | self | 7 | null |
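One concrete (if crude) mitigation is to scrub documents before they ever enter the fine-tuning set. A sketch with illustrative patterns only; a real pipeline would put a proper PII detector on top of this.

```python
# Hedged sketch: regex-based PII scrubbing applied to documents before fine-tuning.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jane at jane.doe@example.com or +1 (555) 123-4567."))
```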
Keeping up with hardware requirements | 1 | Hi all! Are there any resources you use to keep up with hardware requirements? Looking to understand what hardware is needed to:
1. Run from GCP
2. Run from AWS
3. Run from Local
4. Run from general cloud frameworks
That part of the local LLM movement has missed my understanding somewhere. I understand a hefty GPU is required. | 2023-12-27T14:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/18s1lcg/keeping_up_with_hardware_requirements/ | balcell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s1lcg | false | null | t3_18s1lcg | /r/LocalLLaMA/comments/18s1lcg/keeping_up_with_hardware_requirements/ | false | false | default | 1 | null |
"Large World Model" | 1 | [removed] | 2023-12-27T13:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/18s10mf/large_world_model/ | Muted-Opinion-2838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s10mf | false | null | t3_18s10mf | /r/LocalLLaMA/comments/18s10mf/large_world_model/ | false | false | default | 1 | null |
Simple Local Models Answers Evaluation Colab example (with GPT4, no langchain etc.) | 26 | Hi, beloved sub!
I have tried to find ready-to-go, simple examples of local model response evaluation with GPT4, but have only seen complicated implementations with heavy libraries like langchain, etc.
[Here](https://colab.research.google.com/drive/1sfxCoW2tJd6IJ_psSumGWzNwqEbtbu-4?usp=sharing), I implemented a simple Python LLM answers scoring Colab with GPT4 API and OpenAI functions. You can adapt it to process multiple responses from the local LLM, which could be useful for comparing different models.
It is a reliable approach that I first encountered in the paper "[TinyStories](https://arxiv.org/abs/2305.07759)" (where a tiny 2.5 **million** parameter model solves a Creative Writing task), and based on their approach, I made a new version that our team uses for Creative Writing task evaluation at [littlestory.io](https://littlestory.io) (kids' bedtime story generation; we have already scored thousands of stories this way).
All scores are given in valid JSON format.
**How to use colab:**
1) Specify your text language and paste the text that you want to score;
2) Specify your API key;
3) Update the agent if needed; ours mainly focuses on kids' safety and creative writing;
4) Batch process answers of your local LLM and save responses.
That's it. It's quite simple
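A stripped-down sketch of what the scoring call can look like (the model name, rubric fields and prompt below are placeholders, not the exact ones used in the Colab):

```python
# Hedged sketch: force GPT-4 to return scores as structured JSON via function calling.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "submit_scores",
        "description": "Report evaluation scores for the given story.",
        "parameters": {
            "type": "object",
            "properties": {
                "grammar": {"type": "integer", "minimum": 1, "maximum": 10},
                "creativity": {"type": "integer", "minimum": 1, "maximum": 10},
                "kid_safety": {"type": "integer", "minimum": 1, "maximum": 10},
            },
            "required": ["grammar", "creativity", "kid_safety"],
        },
    },
}]

def score_story(story: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # placeholder model name
        messages=[{"role": "user", "content": f"Score this children's story:\n\n{story}"}],
        tools=tools,
        tool_choice={"type": "function", "function": {"name": "submit_scores"}},
    )
    return json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
```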
​
Previous tutorials:
\[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/18dohlt/tutorial_use_real_books_wiki_pages_and_even/)\] **Use real books, wiki pages, and even subtitles for roleplay with the RAG approach in Oobabooga WebUI + superbooga v2**
\[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/17aswq4/tutorial_integrate_multimodal_llava_to_macs/)\] **Integrate multimodal llava to Macs' right-click Finder menu for image captioning (or text parsing, etc) with llama.cpp and Automator app**
\[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/15snlv1/tutorial_simple_soft_unlock_of_any_model_with_a/)\] **Simple Soft Unlock of any model with a negative prompt (no training, no fine-tuning, inference only fix)**
\[[Tutorial](https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/)\] **A simple way to get rid of "..as an AI language model..." answers from any model without finetuning the model, with llama.cpp and --logit-bias flag**
**\[**[Tutorial](https://www.reddit.com/r/SteamDeck/comments/12k1d8h/manual_how_to_install_large_language_model_vicuna/)**\] How to install Large Language Model Vicuna 7B + llama.ccp on Steam Deck** | 2023-12-27T13:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/18s0t6q/simple_local_models_answers_evaluation_colab/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s0t6q | false | null | t3_18s0t6q | /r/LocalLLaMA/comments/18s0t6q/simple_local_models_answers_evaluation_colab/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
Optimizing super slow Ollama | 1 | **Hi, I just installed Ollama in an Ubuntu container on Proxmox and it's painfully slow, taking like 10 minutes per prompt.**
It only utilizes around 8% of the allocated 5 CPU cores (i7-14700K) and about 500 MB of the 15 GB of RAM. I could throw way more RAM at it, but that's probably not the point if it doesn't utilize it.
**Right now I don't own an Nvidia GPU. Do I need one, and if so, which model? I'd prefer not to spend 3,000 USD on it if possible.**
**Are there settings to Improve the performance?** | 2023-12-27T13:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/18s0l5e/optimicing_super_slow_ollama/ | Sturmkater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18s0l5e | false | null | t3_18s0l5e | /r/LocalLLaMA/comments/18s0l5e/optimicing_super_slow_ollama/ | false | false | default | 1 | null |
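Two quick things worth checking before buying a GPU (a sketch; the option names follow Ollama's API as I understand it): make sure each request is actually given the threads you allocated, and see whether context size is what is slowing prompts down.

```python
# Hedged sketch: pin thread count and context size per request via Ollama's API options.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",            # placeholder model
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"num_thread": 5, "num_ctx": 2048},
    },
)
print(resp.json()["response"])
```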
Showcasing Deployment of 7B LLM via AWS Fargate Service | 11 | My Docker image is pushed to ECR, and I have declared a task definition to fire up my LLM container and made that a Fargate ECS service. I toggle it on/off easily whenever I want to chat. And the costs are insanely cheap when used this way, just a few cents for me. The bulk of the monthly cost is maintaining the VPC and networking stuff, but the ECS activity is really cheap.
Anyhow, I just wanted to share how I'm using this build on the go. I'm away with the kids and it's been amazing to be able to just use the AWS app to toggle my build on/off and have conversations whenever.
Happy to answer any questions
Happy new year | 2023-12-27T12:48:12 | https://www.reddit.com/gallery/18rzyf4 | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18rzyf4 | false | null | t3_18rzyf4 | /r/LocalLLaMA/comments/18rzyf4/showcasing_deployment_of_7b_llm_via_aws_fargate/ | false | false | 11 | null |
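For anyone curious what the on/off toggle looks like outside the console, a sketch with made-up cluster/service names that scales the Fargate service's desired count between 0 and 1:

```python
# Hedged sketch: toggle the ECS (Fargate) service on or off by changing its desired count.
import boto3

ecs = boto3.client("ecs")

def set_llm_service(running: bool) -> None:
    ecs.update_service(
        cluster="llm-cluster",      # assumption: your cluster name
        service="llm-7b-service",   # assumption: your service name
        desiredCount=1 if running else 0,
    )

set_llm_service(True)    # spin up to chat
# set_llm_service(False) # spin down when done
```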